

Last updated:
May 25, 2013, 23:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

May 25, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : street art (May 25, 2013, 12:00 UTC)




Sven Vermeulen a.k.a. swift (homepage, bugs)

Now that our regular user is allowed to execute incrontab, let’s fire it up and look at the denials to build up the policy.

$ incrontab --help

That doesn’t show much does it? Well, if you look into the audit.log (or avc.log) file, you’ll notice a lot of denials. If you are developing a policy, it is wise to clear the entire log and reproduce the “situation” so you get a proper idea of the scope.

# cd /var/log/audit
# > audit.log
# tail -f audit.log | grep AVC

Now let’s run incrontab –help again and look at the denials:

type=AVC msg=audit(1368707274.429:28180): avc:  denied  { read write } for  pid=7742 comm="incrontab" path="/dev/tty2" dev="devtmpfs" ino=1042 scontext=user_u:user_r:incrontab_t tcontext=user_u:object_r:user_tty_device_t tclass=chr_file
type=AVC msg=audit(1368707274.429:28180): avc:  denied  { use } for  pid=7742 comm="incrontab" path="/dev/tty2" dev="devtmpfs" ino=1042 scontext=user_u:user_r:incrontab_t tcontext=system_u:system_r:getty_t tclass=fd
type=AVC msg=audit(1368707274.429:28180): avc:  denied  { use } for  pid=7742 comm="incrontab" path="/dev/tty2" dev="devtmpfs" ino=1042 scontext=user_u:user_r:incrontab_t tcontext=system_u:system_r:getty_t tclass=fd
type=AVC msg=audit(1368707274.429:28180): avc:  denied  { use } for  pid=7742 comm="incrontab" path="/dev/tty2" dev="devtmpfs" ino=1042 scontext=user_u:user_r:incrontab_t tcontext=system_u:system_r:getty_t tclass=fd
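
When triaging denials, it helps to first reduce each AVC line to its source context, target context, class and permissions. A quick ad-hoc sed invocation (my own helper, nothing official) does this:

```shell
# Reduce an AVC denial to "scontext -> tcontext (tclass): permissions".
# The sample line is the first denial from above.
avc='type=AVC msg=audit(1368707274.429:28180): avc:  denied  { read write } for  pid=7742 comm="incrontab" path="/dev/tty2" dev="devtmpfs" ino=1042 scontext=user_u:user_r:incrontab_t tcontext=user_u:object_r:user_tty_device_t tclass=chr_file'

echo "$avc" | sed -n 's/.*denied *{\([^}]*\)}.*scontext=\([^ ]*\) .*tcontext=\([^ ]*\) .*tclass=\(.*\)/\2 -> \3 (\4):\1/p'
```

This prints the source -> target (class): permissions tuple, which is exactly what you reason about when deciding whether to allow, relabel, or introduce a new type.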

You can start piping this information into audit2allow to generate policy statements, but I personally prefer not to use audit2allow for building new policies. For one, it is not intelligent enough to deduce if a denial should be fixed by allowing it, or by relabeling or even by creating a new type. Instead, it always grants it. Second, it does not know if a denial is cosmetic (and thus can be ignored) or not.

The latter is also why I don't run domains in permissive mode to see the majority of denials first and build from those: you might see denials that are never actually triggered when running in enforcing mode. So let's look at the access to /dev/tty2. Given that this is a user application whose output we expect on the screen, we want to grant it the proper access. With sefindif, as documented before, we can look for the interfaces we need. I look for user_tty_device_t with rw (commonly used for read-write):

$ sefindif user_tty_device_t.*rw
system/userdomain.if: template(`userdom_base_user_template',`
system/userdomain.if:   allow $1_t user_tty_device_t:chr_file { setattr rw_chr_file_perms };
system/userdomain.if: interface(`userdom_use_user_ttys',`
system/userdomain.if:   allow $1 user_tty_device_t:chr_file rw_term_perms;
system/userdomain.if: interface(`userdom_use_user_terminals',`
system/userdomain.if:   allow $1 user_tty_device_t:chr_file rw_term_perms;
system/userdomain.if: interface(`userdom_dontaudit_use_user_terminals',`
system/userdomain.if:   dontaudit $1 user_tty_device_t:chr_file rw_term_perms;
system/userdomain.if: interface(`userdom_dontaudit_use_user_ttys',`
system/userdomain.if:   dontaudit $1 user_tty_device_t:chr_file rw_file_perms;

Two of these look interesting: userdom_use_user_ttys and userdom_use_user_terminals. Looking at the API documentation (or the rules defined therein using seshowif) reveals that userdom_use_user_terminals is needed if you also want the application to work when invoked through a devpts terminal, which is probably also something our user(s) want to do, so we’ll add that. The second one – using the file descriptor that has the getty_t context – is related to this, but not granted through the userdom_use_user_ttys. We could grant getty_use_fds but my experience tells me that domain_use_interactive_fds is more likely to be needed: the application inherits and uses a file descriptor currently owned by getty_t but it could be from any of the other domains that has such file descriptors. For instance, if you grant the incron_role to sysadm_r, then a user that switched roles through newrole will see denials for using a file descriptor owned by newrole_t.

Experience is an important aspect in developing policies. If you would go through with getty_use_fds it would work as well, and you’ll probably hit the above mentioned experience later when you try the application through a few different paths (such as within a screen session or so). When you think that the target context (in this case getty_t) could be a placeholder (so other types are likely to be needed as well), make sure you check which attributes are assigned to the type:

# seinfo -tgetty_t -x

Of the above ones, privfd is the important one:

$ sefindif privfd.*use
kernel/domain.if: interface(`domain_use_interactive_fds',`
kernel/domain.if:       allow $1 privfd:fd use;
kernel/domain.if: interface(`domain_dontaudit_use_interactive_fds',`
kernel/domain.if:       dontaudit $1 privfd:fd use;

So let’s update incron.te accordingly:

type incron_spool_t;

# incrontab policy
userdom_use_user_terminals(incrontab_t)
domain_use_interactive_fds(incrontab_t)


Rebuild the policy and load it in memory.

If we now run incrontab we get the online help as we expected. Let’s now look at the currently installed incrontabs (there shouldn’t be any of course):

$ incrontab -l
cannot determine current user

In the denials, we notice:

type=AVC msg=audit(1368708632.060:28192): avc:  denied  { create } for  pid=7968 comm="incrontab" scontext=user_u:user_r:incrontab_t tcontext=user_u:user_r:incrontab_t tclass=unix_stream_socket
type=AVC msg=audit(1368708632.060:28194): avc:  denied  { read } for  pid=7968 comm="incrontab" name="nsswitch.conf" dev="dm-2" ino=393768 scontext=user_u:user_r:incrontab_t tcontext=system_u:object_r:etc_t tclass=file
type=AVC msg=audit(1368708632.062:28196): avc:  denied  { read } for  pid=7968 comm="incrontab" name="passwd" dev="dm-2" ino=394223 scontext=user_u:user_r:incrontab_t tcontext=system_u:object_r:etc_t tclass=file

Let’s first focus on nsswitch.conf and passwd. Although both require read access to etc_t files, it might be wrong to just add in files_read_etc (which is what audit2allow is probably going to suggest). For nsswitch, there is a special interface available: auth_use_nsswitch. It is very, very likely that you’ll need this one, especially if you want to share the policy with others who might not have all of the system databases in local files (as etc_t files).


Let’s retry:

$ incrontab -l
cannot read table for 'user': Permission denied

# tail audit.log
type=AVC msg=audit(1368708893.260:28199): avc:  denied  { search } for  pid=7997 comm="incrontab" name="spool" dev="dm-4" ino=20 scontext=user_u:user_r:incrontab_t tcontext=system_u:object_r:var_spool_t tclass=dir

So we need to grant search privileges on var_spool_t. This is offered through files_search_spool. Add it to the policy, rebuild and retry.
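
In the policy this amounts to (sketch):

```
# Traverse /var/spool (var_spool_t) to reach the incron spool directory.
files_search_spool(incrontab_t)
```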

$ incrontab -l
cannot read table for 'user': Permission denied

# tail audit.log
type=AVC msg=audit(1368709146.426:28201): avc:  denied  { search } for  pid=8046 comm="incrontab" name="incron" dev="dm-4" ino=19725 scontext=user_u:user_r:incrontab_t tcontext=root:object_r:incron_spool_t tclass=dir

For this one, no interface exists yet. We might be able to create one for ourselves, but as long as other domains don’t need it, we can just add it locally in our policy:

allow incrontab_t incron_spool_t:dir search_dir_perms;

Adding raw allow rules in a policy is, according to the refpolicy style guide, only allowed if the policy module defines both the source and the target type of the rule. If you look into other policies you might also find the search_dirs_pattern call. However, that one only makes sense if you need to search on top of another directory type as well; just look at the definition of search_dirs_pattern. So with this permission set, let's retry.

$ incrontab -l
no table for user

Great, we have successfully updated the policy until the commands worked. In the next post, we’ll enhance it even further while creating new incrontabs.

May 24, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)

The next step after having a basic skeleton is to get incrontab running. We know, however, that everything invoked from the main daemon will run with the rights of the daemon context (unless we patch the source code, but that is beyond the scope of this set of posts). As a result, we probably do not want everyone to be able to launch commands through this application.

What we want to do is to limit who can invoke incrontab and, as such, limit who can decide what is invoked through incrond. First of all, we define a role attribute called incrontab_roles. Every role that gets this attribute assigned will be able to transition to the incrontab_t domain.

We can accomplish this by editing the incron.te file:

policy_module(incron, 0.2)

# Declare the incrontab_roles attribute
attribute_role incrontab_roles;

type incrontab_t;
type incrontab_exec_t;
application_domain(incrontab_t, incrontab_exec_t)
# Allow incrontab_t for all incrontab_roles 
role incrontab_roles types incrontab_t;

Next, we need something where we can allow user domains to call incrontab. This will be done through an interface. Let’s look at incron.if with one such interface in it: the incron_role interface.

## <summary>inotify-based cron-like daemon</summary>

## <summary>
##      Role access for incrontab
## </summary>
## <param name="role">
##      <summary>
##      Role allowed access.
##      </summary>
## </param>
## <param name="domain">
##      <summary>
##      User domain for the role.
##      </summary>
## </param>
interface(`incron_role',`
        gen_require(`
                attribute_role incrontab_roles;
                type incrontab_exec_t, incrontab_t;
        ')

        roleattribute $1 incrontab_roles;

        domtrans_pattern($2, incrontab_exec_t, incrontab_t)

        ps_process_pattern($2, incrontab_t)
        allow $2 incrontab_t:process signal;
')

The comments in the file are somewhat special: comments starting with two hashes (##) are taken into account while building the policy documentation in /usr/share/doc/selinux-base-*. The interface itself, incron_role, grants a user role and domain the necessary privileges to transition to the incrontab_t domain, as well as to read its process information (as shown through ps, hence the name ps_process_pattern) and to send a standard signal to it. Most of the time you can use signal_perms here, but the application is setuid root, so we don't want to grant more privileges by default than are needed.
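
For comparison, signal_perms in the reference policy support macros expands to roughly the following (paraphrased from obj_perm_sets.spt), which shows what extra we would be handing out, sigkill included:

```
define(`signal_perms', `{ sigchld sigkill sigstop signull signal }')
```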

With this interface file created, we can rebuild the module and load it.

# make -f /usr/share/selinux/strict/include/Makefile incron.pp
# semodule -i incron.pp

But how to assign this interface to users? Well, what we want to do is something like the following:

incron_role(user_r, user_t)

When interfaces are part of the policy provided by the distribution, their definitions are stored in the proper location and you can easily call them. For instance, on Gentoo, if you want to allow the user_r role and user_t domain cron_role access (assuming they don't have it already), you can call selocal as follows:

# selocal -a "cron_role(user_r, user_t)" -c "Granting user_t cron access" -Lb

However, because our new interface is not yet known to the system, we need to create a second small policy module that does this. Create a file (called localuser.te or so) with the following content:

policy_module(localuser, 0.1)

gen_require(`
        type user_t;
        role user_r;
')

incron_role(user_r, user_t)

Now build the policies and load them. We’ll now just build and load all the policies in the current directory (which will be the incron and localuser ones):

# make -f /usr/share/selinux/strict/include/Makefile
# semodule -i *.pp

You can now verify that the user is allowed to transition to the incrontab_t domain:

# seinfo -ruser_r -x | grep incron
# sesearch -s user_t -t incrontab_exec_t -AdCTS
Found 1 semantic av rules:
   allow user_t incrontab_exec_t : file { read getattr execute open } ; 

Found 1 semantic te rules:
   type_transition user_t incrontab_exec_t : process incrontab_t; 

Great, let’s get to our first failure to resolve… in the next post ;-)

May 23, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)

So, in the previous post I talked about incron and why I think moving it into the existing cron policy would not be a good idea. It works, somewhat, but is probably not that future-proof. So we’re going to create our own policy for it.

In SELinux, policies are generally written through 3 files:

  1. a type enforcement file that contains the SELinux rules applicable to the domain(s) related to the application (in our example, incron)
  2. a file context file that tells the SELinux utilities how the files and directories offered by the application should be labeled
  3. an interface definition file that allows other SELinux policy modules to gain rights offered through the (to be written) incron policy

We now need to create a skeleton for the policy. This skeleton will define the types related to the application. Such types can be the domains for the processes (the context of the incrond and perhaps also incrontab applications), the contexts for the directories (if any) and files, etc.

So let’s take a look at the content of the incron package. On Gentoo, we can use qlist incron for this. In the output of qlist, I added comments to show you how contexts can be (easily) deduced.

# Application binary for managing user crontabs. We want to give this a specific
# context because we want the application (which will manage the incrontabs in
# /var/spool/incron) in a specific domain
/usr/bin/incrontab  ## incrontab_exec_t

# General application information files, do not need specific attention
# (the default context is fine)

# Binary for the incrond daemon. This definitely needs its own context, since
# it will be launched from an init script and we do not want it to run in the
# initrc_t domain.
/usr/sbin/incrond ## incrond_exec_t

# This is the init script for the incrond daemon. If we want to allow 
# some users the rights to administer incrond without needing to grant
# those users the sysadm_r role, we need to give this file a different
# context as well.
/etc/init.d/incrond ## incrond_initrc_exec_t

This information, together with the behavior of the application that we know from the previous post, leads to the following incron.fc file, which defines the file contexts for the application.

/etc/incron.d(/.*)?     gen_context(system_u:object_r:incron_spool_t,s0)

/etc/rc\.d/init\.d/incrond      --      gen_context(system_u:object_r:incrond_initrc_exec_t,s0)

/usr/bin/incrontab      --      gen_context(system_u:object_r:incrontab_exec_t,s0)

/usr/sbin/incrond       --      gen_context(system_u:object_r:incrond_exec_t,s0)

/var/spool/incron(/.*)?         gen_context(system_u:object_r:incron_spool_t,s0)

The syntax of this file closely follows the syntax that semanage fcontext takes – at least for the regular expressions in the beginning. The last column is specific to policy development: gen_context generates a context based on the policy's build options, so an MCS/MLS-enabled policy gets the trailing sensitivity while it is dropped when MCS/MLS is disabled. The middle column specifies whether the label should only be set on regular files (--), directories (-d), sockets (-s), symlinks (-l), etc. If it is omitted, the entry matches whatever class the path matches.
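
Note that the regular expressions are anchored on both sides. A quick way to sanity-check a pattern before loading the policy is any ERE tool (an ad-hoc check of mine, not an official utility):

```shell
# file_contexts patterns must match the whole path; emulate that with grep -Ex.
for p in /var/spool/incron /var/spool/incron/user1 /var/spool/incrontab; do
    if printf '%s\n' "$p" | grep -Exq '/var/spool/incron(/.*)?'; then
        echo "match:    $p"
    else
        echo "no match: $p"
    fi
done
```

The last path shows why the anchoring matters: /var/spool/incrontab does not match even though it shares the prefix.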

The second file needed for the skeleton is the incron.te file, which would look like so. I added in inline comments here to explain why certain lines are prepared, but generally this is omitted when the policy is upstreamed.

policy_module(incron, 0.1)
# The above line declares that this file is a SELinux policy file. Its name
# is incron, so the file should saved as incron.te

# First, we declare the incrond_t domain, used for the "incrond" process.
# Because it is launched from an init script, we tell the policy that
# incrond_exec_t (the context of incrond), when launched from init, should
# transition to incrond_t.
# Basically, the syntax here is:
# type <domain type>;
# type <executable file type>;
type incrond_t;
type incrond_exec_t;
init_daemon_domain(incrond_t, incrond_exec_t)

# Next we declare that the incrond_initrc_exec_t is an init script context
# so that init can execute it (remember, SELinux is a mandatory access control
# system, so if we do not tell that init can execute it, it won't).
type incrond_initrc_exec_t;

# We also create the incrontab_t domain (for the "incrontab" application), which
# is triggered through the incrontab_exec_t labeled file. This again follows a bit
# the syntax as we used above, but now the interface call is "application_domain".
type incrontab_t;
type incrontab_exec_t;
application_domain(incrontab_t, incrontab_exec_t)

# Finally we declare the spool type as well (incron_spool_t) and tell SELinux that
# it will be used for regular files.
type incron_spool_t;

Knowing which interface calls to use, like init_daemon_domain and application_domain, is not obvious at first. Most of this can be gathered from existing policies. Other frequently occurring interfaces to use immediately at the skeleton stage are (examples for a foo_t domain):

  • logging_log_file(foo_log_t) to inform SELinux that the context is used for logging purposes. This allows generic log-related daemons to do “their thing” with the file.
  • files_tmp_file(foo_tmp_t) to identify the context as being used for temporary files
  • files_tmpfs_file(foo_tmpfs_t) for tmpfs files (which could be shared memory)
  • files_pid_file(foo_var_run_t) for PID files (and other run metadata files)
  • files_config_file(foo_conf_t) for configuration files (often within /etc)
  • files_lock_file(foo_lock_t) for lock files (often within /run/lock)

We might use these later as we progress with the policy (for instance, the PID file is a very likely candidate for inclusion). However, with the information currently at hand, we have our first policy module ready for building. Save the type enforcement rules in incron.te and the file contexts in incron.fc, and you can then build the SELinux policy:

# make -f /usr/share/selinux/strict/include/Makefile incron.pp
# semodule -i incron.pp

On Gentoo, you can then relabel the files and directories offered through the package using rlpkg:

# rlpkg incron

Next is to start looking at the incrontab application.

May 22, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)

In this series of posts, we’ll go through the creation of a SELinux policy for incron, a simple inotify based cron-like application. I will talk about the various steps that I would take in the creation of this policy, and give feedback when certain decisions are taken and why. At the end of the series, we’ll have a hopefully well working policy.

The first step in developing a policy is to know what the application does and how/where it works. This allows us to check if its behavior matches an existing policy (and as such might be best just added to this policy) or if a new policy needs to be written. So, what does incron do?

From the documentation, we know that incron is a cron-like application that, unlike cron, works with file system notification events instead of time-related events. Other than that, it uses a similar way of working:

  • A daemon called incrond is the run-time application that reads in the incrontab files and creates the proper inotify watches. When a watch is triggered, it will execute the matching rule.
  • The daemon looks at two definitions (incrontabs): one system-wide (in /etc/incron.d) and one for users (in /var/spool/incron).
  • The user tabfiles are managed through incrontab (the command)
  • Logging is done through syslog
  • User commands are executed with the users’ privileges (so the application calls setuid() and setgid())

With this, one can create a script to be executed when a file is uploaded to (or deleted from) a file server, when a process core dump occurs, or for whatever automation you want to trigger on some file system event. Events are plenty and can be found in /usr/include/sys/inotify.h.

So, with this information, it seems plausible that we could fold incron into the existing cron policy. After all, it already defines contexts for all of these and probably doesn't need any additional tweaking. This seems to work at first, but a few tests reveal that the behavior is suboptimal.

# chcon -t crond_exec_t /usr/sbin/incrond
# chcon -t crontab_exec_t /usr/bin/incrontab
# chcon -R -t system_cron_spool_t /etc/incron.d
# chcon -t cron_log_t /var/log/cron.log
# chcon -R -t cron_spool_t /var/spool/incron

  • System tables work somewhat, but all commands are executed in the crond_t domain, not in a system_cronjob_t or related domain.
  • User tables fail when dealing with files in the users' directories, since these jobs too run in crond_t and thus have no read access to the user home directories.

The problems we notice come from the fact that the application is very simple in its code: it is not SELinux-aware (so it doesn't change the runtime context) as most cron daemons are, and when it changes the user id it does not call PAM, so we cannot hook in there to handle context changes either. As a result, the entire daemon keeps running in crond_t.

This is one reason why a separate domain could be interesting: we might want to extend the rights of the daemon domain a bit, but don’t want to extend these rights to the other cron daemons (who also run in crond_t). Another reason is that the cron policy has a few booleans that would not affect the behavior at all, making it less obvious for users to troubleshoot. As a result, we’ll go for the separate policy instead – which will be for the next post.

May 21, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

My original post about loyalty cards missed the supermarkets that I’m actually using nowadays, because they are conveniently located just behind my building (for one) and right on the way back home from my office (for the other). Both of them are part of the EuroSpar chain and have the added convenience of being open respectively 24/7 and 7-22.

Mangled bill from EuroSpar

So, when I originally asked the store if they had any loyalty card, I was told they didn’t. I checked the website anyway and found the name of their loyalty program, which is “SuperEasy”, and the next time, I asked about it explicitly, and they gave me the card and a form to fill in; after filling almost all of it, I found that I could also do it online, so I trashed the paper form. They can’t get my name right anywhere here when I spell it.

On the website, strangely enough, they even accept my surname as it should be. Wow, that's a miracle, I thought… until I went to use the card at the shop and got back the bill that you see on the left. Yes, that's UTF-8 converted to some other 8-bit codepage which is not Latin-1; indeed, it reminds me of CP850 from the time of MS-DOS. Okay, I give up. But the funniest part was getting the bill tonight, the one on the right.

The other mangled bill from EuroSpar

But beside them mangling my name in many different possible ways, is there anything that makes EuroSpar special enough for me to write a follow-up post on a topic that I don’t really care about or, honestly, have experience in? Yes of course. Compared with the various rewards I have been talking about last time, this seems to be mostly the same: one point per euro spent, and one cent per point redeemed.

The big difference here is that the points are accrued to the cent, rather than to the lower euro threshold! Not too shabby, considering that, unlike Dunnes, they do not round their prices to full euros most of the time. The other difference is that even though they have a single loyalty scheme for all the stores… the cards are per-store, or so they proclaim. The two here are probably owned by the same person, so they are actually linked and work at each.

Another interesting point is that while both EuroSpar stores host an Insomnia café, neither accepts Insomnia's own loyalty card (ZapaTag) — instead they offer something similar, in the sense that you get the 10th drink free. A similar offer is present at the regular Insomnia shops, but there, while you can combine the 10th drink offer with the ZapaTag points, you cannot combine it with other offers such as my usual coffee and brownie for €3,75 (the coffee alone is €3,25, while the brownie is €2,25)… at EuroSpar this is actually combinable, but of course if I use the free coffee while getting a brownie, I still have to pay almost as much as for the coffee… though sometimes I can skip the pastry.

So yes, I think it was worth noting the differences about EuroSpar. And as a final note I’ll just say that even the pharmacy on the way to work has a loyalty card… and it’s the usual discount one, or as they call it “PayBack Card”. I have to see what Tesco does, but they somehow blacklisted my apartment in their delivery service.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
rabbitMQ : v3.1.1 released (May 21, 2013, 13:04 UTC)

EDIT: okay, they just released v3.1.1, so it goes into portage as well!


  • relax validation of x-match binding to headers exchange for compatibility with brokers < 3.1.0
  • fix bug in ack handling for transactional channels that could cause queues to crash
  • fix race condition in cluster autoheal that could lead to nodes failing to re-join the cluster

3.1.1 changelog is here.

I’ve bumped the rabbitMQ message queuing server on portage. This new version comes with quite a nice bunch of bugfixes and features.


  • eager synchronisation of slaves by policy (manual & automatic)
  • cluster “autoheal” mode to automatically choose nodes to restart when a partition has occurred
  • cluster “pause minority” mode to prefer partition tolerance over availability
  • improved statistics (including charts) in the management plugin
  • quite a bunch of performance improvements
  • some nice memory leaks fixes

Read the full changelog.

Sven Vermeulen a.k.a. swift (homepage, bugs)

If you notice that a process is running in the unlabeled_t domain, the first question to ask is how it got there.

Well, one way is to have a process running in a known domain, like the screen domain, after which the SELinux policy module that provides this domain is removed from the system (or updated, with the update no longer containing the screen domain definition):

test ~ # ps -eZ | grep screen
root:sysadm_r:sysadm_screen_t    5047 ?        00:00:00 screen
test ~ # semodule -r screen
test ~ # ps -eZ | grep screen
system_u:object_r:unlabeled_t    5047 ?        00:00:00 screen

In permissive mode, this will be easily visible; in enforcing mode, the domains you are running in might not be allowed to do anything with unlabeled_t files, directories and processes, so ps might not show the process even though it still exists:

test audit # ps -eZ | grep 5047
test audit # ls -dZ /proc/5047
ls: cannot access /proc/5047: Permission denied
test audit # tail audit.log | grep unlabeled
type=AVC msg=audit(1368698097.494:27806): avc:  denied  { getattr } for  pid=4137 comm="bash" path="/proc/5047" dev="proc" ino=6677 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:unlabeled_t tclass=dir

Notice that, if you reload the module, the process becomes visible again. That is because the process context itself (sysadm_screen_t) is retained all along; it is only shown as unlabeled_t while the policy does not know the type.

Basically, the moment the policy does not know what a label should be, it uses unlabeled_t. The SELinux policy then defines how this unlabeled_t domain is handled. Processes ending up in unlabeled_t is not that common though, as there is no supported transition to it. The above is one way it can still occur.

May 20, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
A simple IPv6 setup (May 20, 2013, 01:50 UTC)

For internal communication between guests on my workstation, I use IPv6, which is set up using the Router Advertisement "feature" of IPv6. The tools I use are dnsmasq for DNS/DHCP and router advertisement support, and dhcpcd as the client. It might be a total mess (it grew almost organically until it worked), but as far as I'm concerned it is working… and that is all that matters (for now). I'll have to look deeper into the IPv6 stuff to understand it all better, though.

On the client side, dhcpcd is run with the following options:

dhcpcd_eth0="-t 5 -L --ipv6ra_own"

I had to enable --ipv6ra_own to get it to obtain its global address, otherwise it only got its link local one (fe80:: something). I also added a hook into /lib/dhcpcd/dhcpcd-hooks to get it to trigger a hostname update for IPv6.

$ cat 28-set-ip6-address 
if $ifup; then export new_ip_address=${ra1_prefix%%/64}; fi
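
The ${ra1_prefix%%/64} in the hook is plain shell suffix removal: dhcpcd exports the advertised prefix in a variable like ra1_prefix, and stripping the "/64" turns it into something usable as an address. For example (simulated value, just to show the expansion):

```shell
# Simulated value; in the real hook, dhcpcd sets ra1_prefix from the
# router advertisement.
ra1_prefix='2001:db8:81:e2::/64'
# %% removes the longest suffix matching the pattern, here the literal "/64".
echo "${ra1_prefix%%/64}"
```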

SELinux-policy-wise, I had to allow dhcpc_t to create raw IP sockets, to write to the hostname sysctl, and to set the system hostname. The first one (rule 21 below) is needed because of the --ipv6ra_own parameter.

# selocal -l | grep dhcpc_t
21: allow dhcpc_t self:rawip_socket create_socket_perms; # dhcpclient
22: kernel_rw_kernel_sysctl(dhcpc_t) # set hostname
23: allow dhcpc_t self:capability sys_admin; # set hostname

Finally, in /etc/dhcpcd.conf, I removed the nohook lookup-hostname and set the force_hostname one:

#nohook lookup-hostname
env force_hostname=YES

On the server side, I use the following configuration of dnsmasq (snippet):


As you can see, I use the documentation prefix for now (since it is meant for internal communication only, and makes it easier to copy/paste into documentation ;-) but when I am going to use full IPv6 access to the Internet, this prefix will of course change.

Finally, I enabled IPv6 forwarding on the tap0 interface because otherwise I continuously got the following messages on the clients:

May 12 18:43:07 test dhcpcd[3869]: eth0: adding default route via fe80::d848:19ff:fe0d:55c2
May 12 18:43:07 test dhcpcd[3869]: eth0: fe80::d848:19ff:fe0d:55c2 is no longer a router
May 12 18:43:07 test dhcpcd[3869]: eth0: deleting default route via fe80::d848:19ff:fe0d:55c2
May 12 18:43:13 test dhcpcd[3869]: eth0: fe80::d848:19ff:fe0d:55c2 is unreachable, expiring it

To enable IPv6 forwarding, you can use sysctl but I added it in the script that sets up the tap0 interface:

tunctl -b -u swift -t tap0
ifconfig tap0 add 2001:db8:81:e2::26b5:365b:5072/64
vde_switch --numports 16 --mod 777 --group users --tap tap0 -d
echo 1 > /proc/sys/net/ipv6/conf/tap0/forwarding
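To make the forwarding setting survive reboots instead of echoing into /proc, the equivalent sysctl configuration entry would be:

```
# /etc/sysctl.conf fragment, equivalent to the echo above
net.ipv6.conf.tap0.forwarding = 1
```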

May 19, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

I was very sceptical for a long time. Then, I slowly started to trust the kmail2/akonadi combination. I've been using it on my office desktop for a long time, and it works well and is very stable and fast there. (Might be related to the fact that the IMAP server is just across the lawn.) Some time ago, when I deemed things solid enough, I even upgraded my laptop again, despite earlier problems. In Gentoo, we've been keeping kdepim-4.4 around all the time, and as you may have read, internal discussions did indeed lead to the decision to finally drop it some time ago.
What happened in the meantime?
1) One of the more annoying bugs mentioned in my last blog post was fixed with some help from Kevin Kofler. Seems like Debian stumbled into the same issue long ago.
2) I was on vacation. Which was fun, but mostly unrelated to the issue at hand. None of my Gentoo colleagues went ahead with the removal in the meantime. A lot of e-mails accumulated in my account.
3) Coming back, I was on the train with my laptop, sorting the mail. The train was full, the onboard WLAN slightly overstressed, the 4G network just about more reliable. The network comes and goes, sometimes due to a tunnel; no problem. Or so I thought.
4) Half an hour before arriving back home, I realized that a large part of the e-mails that I had (I thought) moved (using kmail2-4.10.3 / akonadi-1.9.2) from one folder to another over ~3 hours had silently disappeared on one side, and not re-appeared on the other. Restarting kmail2 and akonadi did not help. A quick check of the webmail interface of my provider confirmed that the mails were gone from both folders on the IMAP server as well. &%(/&%(&/$/&%$§&/
I wasn't happy. Luckily there were daily server backup snapshots, and after a few days' delay I had all the documents back. Nevertheless... now I am considering what to do next. (Needless to say, in my opinion we should forget about dropping kmail1 in Gentoo for now.) Options...
a) migrate the laptop back to kmail1, which is way more resistant to dropped connections and flaky internet connection - doable but takes a bit of time
b) install OfflineIMAP and Dovecot on the laptop, and let kmail2/akonadi access the localhost Dovecot server - probably the most elegant solution but for the fact that OfflineIMAP seems to have trouble mirroring our Novell Groupwise IMAP server
c) other e-mail client? I've heard good things about trojita...
Summarizing... I still have no idea how to go ahead; no good solution is available. And I actually like the kdepim integration idea, so I'll never be the first one to completely migrate away from it! I am sincerely sorry that this post will surely be disheartening to all the people who put a lot of effort into improving kmail2 and akonadi. It has become a whole lot better. However, I am just getting more and more convinced that the complexity of this combined system is too much to handle and that kmail should never have gone the akonadi way.

Sven Vermeulen a.k.a. swift (homepage, bugs)
The weird “audit_access” permission (May 19, 2013, 01:50 UTC)

While writing up the posts on capabilities, one thing I had in my mind was to give some additional information on frequently occurring denials, such as the dac_override and dac_read_search capabilities, and when they are triggered. For the DAC-related capabilities, policy developers often notice that these capabilities are triggered without a real need for them. So in the majority of cases, the policy developer wants to disable auditing of this:

dontaudit <somedomain> self:capability { dac_read_search dac_override };

When an application wants to search through directories not owned by the user it runs as, both capabilities will be checked: first dac_read_search and, if that is denied (it will be audited though), then dac_override. If that one is denied as well, it too will be audited. That is why many developers automatically dontaudit both capability calls if the application itself doesn’t really need the permission.

Let’s say you allow this because the application needs it. But then another issue comes up when the application checks file attributes or access permissions (a second frequently occurring denial that developers come across). Such applications use access() or faccessat() to get information about files, but other than that don’t do anything with them. When this occurs and the domain does not have read, write or execute permissions on the target, the denial is shown even though the application doesn’t really read, write or execute the file.

#include <stdio.h>
#include <unistd.h>

int main(int argc, char ** argv) {
  printf("%s: Exists (%d), Readable (%d), Writeable (%d), Executable (%d)\n", argv[1],
    access(argv[1], F_OK), access(argv[1], R_OK),
    access(argv[1], W_OK), access(argv[1], X_OK));
  return 0;
}

$ check /var/lib/logrotate.status
/var/lib/logrotate.status: Exists (0), Readable (-1), Writeable (-1), Executable (-1)

$ tail -1 /var/log/audit.log
type=AVC msg=audit(1367400559.273:5224): avc:  denied  { read } for  pid=12270 comm="test" name="logrotate.status" dev="dm-3" ino=2849 scontext=staff_u:staff_r:staff_t tcontext=system_u:object_r:logrotate_var_lib_t tclass=file

This gives the impression that the application is doing nasty stuff, even when it is merely checking permissions. One way would be to dontaudit read as well, but if the application does the check against several files of various types, that might mean you need to include dontaudit statements for various domains. That by itself isn’t wrong, but perhaps you do not want to audit such checks but do want to audit real read attempts. This is what the audit_access permission is for.

The audit_access permission is meant to be used only for dontaudit statements: it has no effect on the security of the system itself, so using it in allow statements has no effect. The purpose of the permission is to allow policy developers to not audit access checks without really dontauditing other, possibly malicious, attempts. In other words, checking the access can be dontaudited while actually attempting to use the access (reading, writing or executing the file) will still result in the proper denial.
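As a sketch, with hypothetical domain and type names, such a dontaudit rule would be written as:

```
# Hide only the access()/faccessat() probes; real read, write or execute
# attempts on the file will still be audited as usual.
dontaudit mydomain_t logrotate_var_lib_t:file audit_access;
```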

May 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo CUPS-1.6 status (May 18, 2013, 21:02 UTC)

We've had CUPS 1.6 in the Gentoo portage tree for a while now already. It has even been keyworded by most of the arches (hooray!), and from the bug reports quite some people use it. Sometime in the intermediate future we'll stabilize it; until then, however, quite some bugs still have to be resolved.
CUPS 1.6 brings changes. The move to Apple has messed up the project priorities, and backward compatibility was kicked out of the window with a bang. As I've already detailed in a short previous blog post, per se, CUPS 1.6 does not "talk" the printer browsing protocol of previous versions anymore but solely relies on zeroconf (which is implemented in Gentoo by net-dns/avahi). Some other features were dropped as well...
Luckily, CUPS was and is open source, and the fact that the people at Apple removed the code from the main CUPS distribution did not mean that it was actually gone. In the end, all these features just made their way from the main CUPS package to a new package, net-print/cups-filters, maintained at The Linux Foundation. There, the code is evolving fast, bugs are fixed and features are introduced. Even network browsing with the CUPS-1.5 protocol has been restored by now; cups-filters includes a daemon called cups-browsed which can generate print queues on the fly and accepts configuration directives similar to CUPS-1.5. As far as we in Gentoo (and any other Linux distribution) are concerned, we can get along without zeroconf just fine.
The main thing hindering CUPS-1.6 stabilization at the moment is that the CUPS website is down, kind of. Their server had a hardware failure, and for nearly a month (!!!) only minimal, static pages have been up. In particular, what's missing is the CUPS bugtracker (no, I won't sign up for an Apple ID to submit CUPS bugs) and access to the Subversion repository of the source. (Remind me to git-svn clone the code history as soon as it's back and push it to gitorious.)
So... feel free to try out CUPS-1.6, testing and submitting bugs for sure helps. However, it may take some time to get these fixed...

Sven Vermeulen a.k.a. swift (homepage, bugs)

To work on SELinux policies, I use a couple of functions that I can call on the shell (command line): seshowif, sefindif, seshowdef and sefinddef. The idea behind the methods is that I want to search (find) for an interface (if) or definition (def) that contains a particular method or call. Or, if I know what the interface or definition is, I want to see it (show).

For instance, to find the name of the interface that allows us to define file transitions from the postfix_etc_t label:

$ sefindif filetrans.*postfix_etc
contrib/postfix.if: interface(`postfix_config_filetrans',`
contrib/postfix.if:     filetrans_pattern($1, postfix_etc_t, $2, $3, $4)

Or to show the content of the corenet_tcp_bind_http_port interface:

$ seshowif corenet_tcp_bind_http_port
                type http_port_t;

        allow $1 http_port_t:tcp_socket name_bind;
        allow $1 self:capability net_bind_service;

For the definitions, this is quite similar:

$ sefinddef socket.*create
obj_perm_sets.spt:define(`create_socket_perms', `{ create rw_socket_perms }')
obj_perm_sets.spt:define(`create_stream_socket_perms', `{ create_socket_perms listen accept }')
obj_perm_sets.spt:define(`connected_socket_perms', `{ create ioctl read getattr write setattr append bind getopt setopt shutdown }')
obj_perm_sets.spt:define(`create_netlink_socket_perms', `{ create_socket_perms nlmsg_read nlmsg_write }')
obj_perm_sets.spt:define(`rw_netlink_socket_perms', `{ create_socket_perms nlmsg_read nlmsg_write }')
obj_perm_sets.spt:define(`r_netlink_socket_perms', `{ create_socket_perms nlmsg_read }')
obj_perm_sets.spt:define(`client_stream_socket_perms', `{ create ioctl read getattr write setattr append bind getopt setopt shutdown }')

$ seshowdef manage_files_pattern
        allow $1 $2:dir rw_dir_perms;
        allow $1 $3:file manage_file_perms;

I have these defined in my ~/.bashrc (they are simple functions) and use them on a daily basis here ;-) If you want to learn a bit more about developing SELinux policies for Gentoo, make sure you read the Gentoo Hardened SELinux Development guide.
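The post doesn't show the function bodies, so here is a hypothetical sketch of just sefindif; the POLICY_SRC location and the refpolicy file layout (interface(`name',` headers inside *.if files) are assumptions, and the real versions in that ~/.bashrc certainly differ:

```shell
# Hypothetical reimplementation of sefindif -- adjust POLICY_SRC to wherever
# your SELinux policy sources are checked out.
POLICY_SRC="${POLICY_SRC:-/usr/src/refpolicy/policy/modules}"

# sefindif: for every .if file under POLICY_SRC, print the enclosing
# interface header plus the first line matching the given (awk) regex.
sefindif() {
    find "${POLICY_SRC}" -name '*.if' | while read -r f; do
        awk -v pat="$1" -v file="${f#"${POLICY_SRC}"/}" '
            /^interface\(`/ { hdr = $0 }      # remember current interface
            hdr != "" && $0 ~ pat {           # first match inside it
                print file ": " hdr
                print file ": " $0
                hdr = ""
            }
        ' "$f"
    done
}
```

sefinddef would be the analogous search over the *.spt definition files, and the "show" variants print a whole interface or definition body instead of only the matching lines.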

May 17, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

It is a common request for squid to block downloading certain files based on their extension in the URL path. A quick look at Google’s results on the subject apparently gives us an easy way to get this done with squid.

The common solution is to create an ACL file listing regular expressions of the extensions you want to block and then apply this to your http_access rules.




acl blockExtensions urlpath_regex -i "/etc/squid/blockExtensions.acl"


http_access allow localnet !blockExtensions

Unfortunately this is not enough to prevent users from downloading .exe files. The mistake here is that we assume that the URL will strictly end with the extension we want to block. Consider two examples: a URL ending in .exe will be DENIED as expected, but the same URL with a trailing query string appended WON'T be denied, as it does not match the regex!

Squid uses the extended regex processor which is the same as egrep. So we need to change our blockExtensions.acl file to handle the possible ?whatever string which may be trailing our url_path. Here’s the solution to handle all the cases :
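A version of blockExtensions.acl that handles an optional trailing query string would contain entries along these lines (the extensions listed are just examples):

```
# /etc/squid/blockExtensions.acl -- egrep syntax, matched case-insensitively
# thanks to the -i flag on the acl line
\.exe(\?.*)?$
\.msi(\?.*)?$
\.scr(\?.*)?$
```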



You will still be hated for limiting people’s need to download and install shit on their Windows but you implemented it the right way and no script kiddie can brag about bypassing you ;)

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Life in the new city (May 17, 2013, 19:54 UTC)

Okay, so it’s now over a month that I’ve been staying in Dublin, and over a month at my new job, and it is shaping up as a very good new experience for me. But even more than the job, the new experiences come with having an apartment. Last year I was living within the office where I was working, and before that I had been living with my mother, so finally having a place of my own is a new world entirely. Well, I’ll admit it: only partially.

Even though I’ve been living with my mother, like the stereotype of Italian guys suggests, it’s not like I’ve been a parasite. Indeed, I’ve been paying all the bills for the past four years, and I’m still paying them from here. I’ve also been doing my share of grocery shopping, cleaning and maintenance tasks, but at least I did avoid the washing machine most of the time. So yeah, it wasn’t a complete revolution for my life, but it was a partial one. So right now I do feel slightly worse for wear, especially because I had a very bad experience with the kitchen, which was not cleaned before I moved in.

Thankfully, Ikea exists everywhere. And their plastic mats for drawers and cabinets are a lifesaver. Too bad I already finished the roll and haven’t completed half the kitchen yet. I think I’ll go back to Ikea in two weeks (not next week, because my sister’s visiting). By now I have bought the same identical lamp three times: originally in Italy, then again in Los Angeles, and now in Dublin. The only difference is that the American version has a loop so you can orient it, probably because health and safety does not require having enough common sense as to not touch the hot cone…

The bottom line is that I’m very happy about having moved to Dublin. I love the place, and I love the people. My new job is also quite interesting, even if not as open-source focused as my previous ones (which does not mean it is completely out of the way of open source anyway), and the colleagues are terrific… hey, some even read my blog before, thanks guys!

While settling down took most of my time and left me no time to do real Gentoo contributions or blogging (luckily Sven seems to have taken my place on Planet Gentoo), things are getting much better (among others I finally have a desk in the apartment, and tomorrow I’m going to get a TV as well, which I know will boost my ability to keep the house clean — because it won’t require me to stick to the monitor to watch something). So expect more presence from me soon enough!

Greg KH a.k.a. gregkh (homepage, bugs)

A few years ago, I gave a history of the 2.6.32 stable kernel, and mentioned the previous stable kernels as well. I'd like to apologize for not acknowledging the work of Adrian Bunk in maintaining the 2.6.16 stable kernel for 2 years after I gave up on it, allowing it to be used by many people for a very long time.

I've updated the previous post with this information in it at the bottom, for the archives. Again, many apologies, I never meant to ignore the work of this developer.

Sven Vermeulen a.k.a. swift (homepage, bugs)

There have been a few posts already on the local Linux kernel privilege escalation, which has received the CVE-2013-2094 ID. Ars Technica has a write-up with links to good resources on the Internet, but I definitely want to point readers to the explanation that Brad Spengler made of the vulnerability.

In short, the vulnerability is an out-of-bound access to an array within the Linux perf code (which is a performance measuring subsystem enabled when CONFIG_PERF_EVENTS is enabled). This subsystem is often enabled as it offers a wide range of performance measurement techniques (see its wiki for more information). You can check on your own system through the kernel configuration (zgrep CONFIG_PERF_EVENTS /proc/config.gz if you have the latter pseudo-file available – it is made available through CONFIG_IKCONFIG_PROC).

The public exploit maps memory in userland, fills it with known data, then triggers an out-of-bound decrement that tricks the kernel into decrementing this data (mapped in userland). By looking at where the decrement occurred, the exploit now knows the base address of the array. Next, it targets (through the same vulnerability) the IDT base (Interrupt Descriptor Table), specifically the overflow interrupt vector. It increments the top part of the address that the vector points to (which is 0xffffffff, becoming 0x00000000, thus pointing to userland), maps this memory region itself with shellcode, and then triggers the overflow. The shellcode used in the public exploit modifies the credentials of the current task, sets uid/gid to root and gives full capabilities, and then executes a shell.

As Brad mentions, UDEREF (an option in a grSecurity enabled kernel) should mitigate the attempt to get to the userland. On my system, the exploit fails with the following (start of) oops (without affecting the system further) when it tries to close the file descriptor returned from the syscall that invokes the decrement:

[ 1926.226678] PAX: please report this to
[ 1926.227019] BUG: unable to handle kernel paging request at 0000000381f5815c
[ 1926.227019] IP: [] sw_perf_event_destroy+0x1a/0xa0
[ 1926.227019] PGD 58a7c000 
[ 1926.227019] Thread overran stack, or stack corrupted
[ 1926.227019] Oops: 0002 [#4] PREEMPT SMP 
[ 1926.227019] Modules linked in: libcrc32c
[ 1926.227019] CPU 0 
[ 1926.227019] Pid: 4267, comm: test Tainted: G      D      3.8.7-hardened #1 Bochs Bochs
[ 1926.227019] RIP: 0010:[]  [] sw_perf_event_destroy+0x1a/0xa0
[ 1926.227019] RSP: 0018:ffff880058a03e08  EFLAGS: 00010246

The exploit also finds that the decrement didn’t succeed:

test: semtex.c:76: main: Assertion 'i<0x0100000000/4' failed.

A second mitigation is KERNEXEC (also offered through grSecurity), which prevents the kernel from executing data that is writable (including userland data). So modifying the IDT would be mitigated as well.

Another important mitigation is TPE – Trusted Path Execution. This feature prevents users who are not in a trusted group (which on my system is 10 = wheel) from executing binaries that are not located in a root-owned directory. So users attempting to execute such code will fail with a Permission denied error, and the following is shown in the logs:

[ 3152.165780] grsec: denied untrusted exec (due to not being in trusted group and file in non-root-owned directory) of /home/user/test by /home/user/test[bash:4382] uid/euid:1000/1000 gid/egid:100/100, parent /bin/bash[bash:4352] uid/euid:1000/1000 gid/egid:100/100

However, even though a nicely hardened system should be fairly immune to the currently circulating public exploit, it should be noted that it is not immune to the vulnerability itself. The methods mentioned above ensure that that particular way of gaining root access is not possible, but the vulnerability still allows an attacker to decrement and increment memory in specific locations, so other exploits might be found to modify the system.

Now, out-of-bound vulnerabilities are not new. Recently (February this year), a vulnerability in the networking code also provided an attack vector for a local privilege escalation. A mandatory access control system like SELinux has little impact on such vulnerabilities if you allow users to execute their own code. Even confined users can modify the exploit to disable SELinux (since the shellcode is run with ring 0 privileges, it can access and modify the SELinux state information in the kernel).

Many thanks to Brad for the excellent write-up, and to the Gentoo Hardened team for providing the grSecurity PaX/TPE protections in its hardened-sources kernel.

May 16, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened spring notes (May 16, 2013, 20:54 UTC)

We got back together on the #gentoo-hardened chat channel to discuss the progress of Gentoo Hardened, so it’s time for another write-up of what was said.


GCC 4.8.1 will be out soon, although nothing major has occurred with it since the last meeting. There is a plugin header install problem in 4.8 and it's not certain that the (trivial) fix is in 4.8.1, but it certainly is in Gentoo's release.

Blueness is also (still, and hopefully for a long time ;-) maintaining the uclibc hardened related toolchain aspects.

Kernel and grSecurity/PaX

Further progress on the XATTR_PAX migration was put on the back burner the past few weeks due to busy, busy… very busy weeks (but this was announced and known in advance). We still need to do XATTR copying during install for packages that do pax markings before src_install(), and include the user.pax XATTR patch in the gentoo-sources kernel. This will silence the errors for non-hardened users and fix the loss of XATTR markings for those packages that pax-mark before install.

The set then needs to be documented further and tested on vanilla and hardened systems.

Zorry asked if a separate script can be provided for those ebuilds that directly call paxctl. These ebuilds might want to switch to the eclass, but if they need to call paxctl or similar directly (for instance because the result is immediately used for further building), a separate script or tool should be made available. Blueness will look into this.

On hardened-sources, we are now at stable 2.6.32-r160, 3.2.42-r1 and 3.8.6 due to some vulnerabilities in earlier versions (in networking code). There is still a bug (NFS-related) that is fixed in 3.2.44, so that part might need a bump soon as well.


The selocal command is now available for Gentoo SELinux users, allowing them to easily enhance the policy without having to maintain their own SELinux policy modules (the script is a wrapper that does all that).

The setools package now also uses the SLOT’ed swig, so no more dependency breakage.

On SELinux userspace and policy, both have seen a new release last month, and both are already in the Gentoo portage tree.

Finally, the SELinux policy ebuilds now also call epatch_user so users can customize the policies even further without having to copy ebuilds to their overlay.

Now that tar supports XATTR well, we might want to look into SELinux stages again. Jmbsvicetto did some work on that, but the builds failed during stage1. We’ll look into that later.


Nothing much to say, we’re waiting a bit until the patches proposed by the IMA team are merged in the main kernel.


Two no-multilib fixes have been applied to the hardened/amd64/no-multilib profiles. One was a QA issue and quickly resolved, the other is due to the profile stacking within Gentoo profiles, where we missed a profile and thus were missing a few masks defined in that (missed) profile. But including the profile creates a lot of duplicates again, so we are going to copy the masks across until the duplicates are resolved in the other profiles.

Blueness will also clean up the experimental 13.0 directory since all hardened profiles now follow 13.0.


The latest changes on SELinux have been added to the Gentoo SELinux handbook. Also, I’ve been slowly (but surely) adding topics to the SELinux tutorials listing on the Gentoo wiki.

The grSecurity 2 document is very much out of date, blueness hopes to put some time in fixing that soon.

So that’s about it for the short write-up. Zorry will surely post the log later on the appropriate channels. Good work done (again) by all team members!

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Fujifilm GF670W (May 16, 2013, 20:12 UTC)

It’s been so long since I switched to film-only photography that I decided a few months ago to sell all my digital equipment. I already own a Nikon FM2 camera which I love, but I have to admit that I was, and still am, totally amazed by the pictures taken by my girlfriend’s Rolleiflex 3.5F. Medium format gives the kind of rendering I was craving, and I knew that sooner or later I’d step into the medium format world. Well, I didn’t have to wait long: when we were in Tokyo to celebrate new year 2013, I fell in love with what was the perfect match between my love for wide angles and medium format film photography: the Fujifilm GF670W!

For my soon-to-come birthday, I got myself my new toy in advance so I could use it on my upcoming roadtrip around France (I’ll talk about it soon, it was awesome). Oddly, the only places in the world where you can get this camera are the UK and Japan, so I bought it from the very nice guys at Dale Photographic. Here is the beast (literally):


Yes, this is a big camera, and it comes with a very nice leather case and a lens hood. It is a rangefinder camera with a comfortable viewfinder; it accepts 120 and 220 films and is capable of shooting in standard 6×6 and 6×7!

In the medium format world, the 55mm lens is actually a wide-angle one, as it is comparable to a 28mm in the usual 24×36 world. Its specifications are not crazy on paper, with a 4.5 aperture and a shutter speed going from 4s to 1/500s (as fast as a 1956 Rolleiflex), but the quality is just stunning: it’s sharp and shows almost nonexistent chromatic aberration.

Want proof? These are some shots from my first roll, uploaded at full resolution:



May 14, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
Spring Europen 2013 (May 14, 2013, 17:00 UTC)

This Monday I was, for the first time, a guest and speaker at the (contrary to its name) local Czech conference Europen. It was an interesting experience, and I would like to share a bit of what I experienced. What made it different from the conferences I usually speak at was the audience: not many Linux guys, and quite some Windows guys. I was told that this conference is for various IT professionals and people from academia interested in Open Source.

I was asked to speak there about something techy, low-level, generic, and not SUSE-only stuff. I offered an OBS and Studio introduction, as these are the crown jewels of the openSUSE environment, but I was told that they would prefer something more generic and a little more hardcore. So in the end I decided to speak about packaging, as that is something I have been doing for a long time. And to make it neither a workshop nor a SUSE-specific talk, I put in two more packaging systems that I have worked with apart from rpm: Portage (from Gentoo) and BitBake (from OpenEmbedded).

Whenever I visit an open source event in the Czech Republic, I always know quite some people there already. I know the most prominent people from Linux magazines, other distributions and some other big open source enthusiasts. At this conference, I knew something like six attendees in total (and all of them were there to give a talk and not sure what to expect from the audience). Almost everybody was running MS Windows, with a few MacOS exceptions. Really quite a different world.

As I said, in the end I spoke about why we do software packages in Linux and how we do it. I spoke about rpm and spec files, and about Portage and BitBake, showing how nice it is to have inheritance. And at the end I put in a part about how great OBS is anyway.

Of the almost a day I was at the conference, the LibUCW library got the most questions and feedback; Martin Mareš gave an amazing presentation and he had a really interesting topic. LibUCW is cool. If I find some free time, I’ll write something about it separately. Otherwise the audience was quite calm and quiet. For my presentation, I got a question about cross-compilation of rpms, so after the talk I could recommend OBS once more ;-)

It was definitely an interesting experience, as these people were mostly out of our usual scope. If you are interested in browsing the slides, you can; the sources are on my github, but they contain quite some pages of example recipes that I was commenting on on the spot.

I have been using procmail for a long time and it works pretty well for me, so I want to share how to handle the spam for your address with procmail.

First, you need to tell the mail system that procmail will filter your email(s); note that the double quotes belong inside the .forward file itself:
echo '"|/usr/bin/procmail"' > /home/${USER}/.forward

Then create a simple /home/${USER}/.procmailrc with this content:
* ^X-Spam-Status: Yes

* ^X-Spam-Level: \*\*\*

* ! ^List-Id
* ^X-Spam-Level: \*\*

* ^Subject:.*viagra*

* ^Subject:.*cialis*

* ^Subject:.*money*

* ^Subject:.*rolex*

* ^Subject:.*scount*

* ^Subject:.*Viagra*

* ^Subject:.*Cialis*

* ^Subject:.*Marketing*

* ^Subject:.*marketing*

* ^Subject:.*Money*

* ^Subject:.*Rolex*

* ^Subject:.*Scount*

* ^Subject:.*glxgug*

* ^Subject:.*offizielle sieger*

* ^Subject:.*educational*

:0 B:
* $ content-[^:]+:${WS}*.+(\<)*(file)?name${WS}*=${WS}*\/.+\.(pif|scr|com|cpl|vbs|mim|hqx|bhx|uue|uu|b64)\"?$

:0 B:
* ^Content-Type: .*;$[ ]*(file)?name=\"?.*\.(pif|scr|com|cpl|vbs)\"?$

:0 B:
* ^Content-Type: .*; [ ]*(file)?name=\"?.*\.(pif|scr|com|cpl|vbs)\"?$

With the filter for X-Spam-Status and X-Spam-Level you will avoid the majority of the incoming spam.
Some mails that do not have any spam flag contain subjects like viagra, cialis (which I absolutely don’t need :D), rolex and scount.
Yes, I could use the (c|C) syntax, but I had problems, so I prefer to write the rules twice instead of having any sort of trouble.
Note: with this email address I’m not subscribed to any newsletter or any sort of offers/catalogs, so I filtered scount, marketing, money.
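Each of the condition lines above belongs in a complete procmail recipe: a `:0` header (with a trailing colon for lockfile use), one or more `*` condition lines, and a destination. For instance, discarding flagged spam outright (the /dev/null destination here is an assumption, you may prefer a spam folder) looks like:

```
:0:
* ^X-Spam-Status: Yes
/dev/null
```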

Sometimes I receive mails from people that are not spammer, with the X-Spam-Level flag with one star, so I decided to move these email into a folder, they will be double-checked with naked eye:

* ^X-Spam-Level: \*

To avoid confusion I always prefer to use a complete path here.

After a stabilization you will always see the annoying mails from bugzilla which contain ${arch} stable, so if you want to drop them:

:0 B
* ^*(alpha|amd64|arm|hppa|ia64|m68k|ppc|ppc64|s390|sh|sparc|x86) stable*

Now, if you are using more email clients on more computers, you may want to set the filters here instead of in all the clients you are using, so for example:

* ^From.*
* ^TO.*

And so on….
These hints are obviously valid on all postfix-based mailservers; if you are using e.g. qmail, you need to move the .procmailrc, but the rest is still valid.
I hope this will help :)

If you need a particular set of rules, you can write them by taking a look at the source/headers of the message. So if, for example, I don’t want to see the mails from bugzilla for the bugs that I reported:

the header says: X-Bugzilla-Reporter:

* ^From.*
* ^X-Bugzilla-Reporter.*

May 12, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Lab::Measurement 3.11 released (May 12, 2013, 10:34 UTC)

Lab::Measurement 3.11 has been uploaded to CPAN. This is a minor maintenance release, with small bug fixes in the voltage source handling (gate protect and sweep functionality) and the Yokogawa drivers (output voltage range settings).

May 11, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

When working on (the still on-going) migration of the Gentoo java project repositories from SVN to Git I ran into bugs with svn2git 1.0.8 and my own svneverever 1.2.1.

The bug with svn2git 1.0.8 was a regression that broke support for (non-ASCII) UTF-8 author names in identity maps. That’s fixed in dev-vcs/svn2git-1.0.8-r1 in Gentoo. I sent the patch upstream and to the Debian package maintainer, too.

For svneverever, a directory that re-appeared after deletion was reported to only live once, e.g. the output was

(2488; 9253)  /projects
(2490; 9253)      /java-config-2
(2490; 2586)          /trunk

if directory /projects/java-config-2/trunk/ got deleted at revision 2586, no matter whether it was re-created later. With 9253 revisions in total, the correct output (with svneverever 1.2.2) is:

(2488; 9253)  /projects
(2490; 9253)      /java-config-2
(2490; 9253)          /trunk

That’s fixed in svneverever 1.2.2.
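Conceptually, the fix amounts to extending a path’s recorded lifetime when it is re-added instead of keeping only the first lifetime. A toy sketch of that idea (this is not svneverever’s actual code; the names are made up):

```python
def lifetimes(events, total_revs):
    """Track (first_added, last_alive) per path.

    events: iterable of (revision, path, action), action in {"add", "delete"}.
    A path that is currently alive is reported as alive up to total_revs.
    """
    table = {}
    for rev, path, action in sorted(events):
        if action == "add":
            first, _ = table.get(path, (rev, rev))
            # Re-addition extends the lifetime again -- this is what the
            # buggy version failed to do after a deletion.
            table[path] = (first, total_revs)
        else:  # "delete"
            first, _ = table[path]
            table[path] = (first, rev)
    return table
```

With the /trunk example above, a delete at revision 2586 followed by a later re-add yields (2490; 9253) again, matching the corrected output.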

If svneverever is of help to you, please support me on Flattr. Thanks!

May 10, 2013
Gentoo at LinuxTag 2013 in Berlin (May 10, 2013, 01:03 UTC)

LinuxTag 2013 runs from May 22nd to May 25th in Berlin, Germany. With more than 10,000 visitors last year, it is one of the biggest Linux and open source events in Europe.

You will find the Gentoo booth at Hall 7.1c, Booth 179. Come and visit us! You will meet many of our developers and users, talk with us, plus get some of the Gentoo merchandise you have always wanted.

May 07, 2013
Jan Kundrát a.k.a. jkt (homepage, bugs)
On Innovation, NIH, Trojita and KDE PIM (May 07, 2013, 08:03 UTC)

Jos wrote a blog post yesterday commenting on the complexity of the PIM problem. He raises an interesting concern about whether we would all be better off if there were no Trojitá and I had just improved KMail instead. As usual, the matter is more complicated than it might seem at first sight.

Executive Summary: I tried working with KDEPIM. The KDEPIM IMAP stack required a total rewrite in order to be useful. At the time I started, Akonadi did not exist. The rewrite has been done, and Trojitá is the result. It is up to the Akonadi developers to use Trojitá's IMAP implementation if they are interested; it is modular enough.

People might wonder why Trojitá exists at all. I started working on it because I wasn't happy with how the mail clients performed back in 2006. The supported features were severely limited, the speed was horrible. After studying the IMAP protocol, it became obvious that the reason for this slowness is the rather stupid way in which the contemporary clients treated the remote mail store. Yes, it's really a very dumb idea to load tens of thousands of messages when opening a mailbox for the first time. Nope, it does not make sense to block the GUI until you fetch that 15MB mail over a slow and capped cell phone connection. Yes, you can do better with IMAP, and the possibility has been there for years. The problem is that the clients were not using the IMAP protocol in an efficient manner.

It is not easy to retrofit a decent IMAP support into an existing client. There could be numerous code paths which just assume that everything happens synchronously and block the GUI when the data are stuck on the wire for some reason. Doing this properly, fetching just the required data and doing all that in an asynchronous manner is not easy -- but it's doable nonetheless. It requires huge changes to the overall architecture of the legacy applications, however.

Give Trojitá a try now and see how fast it is. I’m serious here -- Trojitá opens a mailbox with tens of thousands of messages in a fraction of a second. Try to open a big e-mail with vacation pictures from your relatives over a slow link -- you will see the important textual part pop up immediately with the images being loaded in the background, not disturbing your work. Now try to do the same in your favorite e-mail client -- if it's as fast as Trojitá, congratulations. If not, perhaps you should switch.

Right now, the IMAP support in Trojitá is way more advanced than what is shipped in Geary or KDE PIM -- and it is this solid foundation which leads to Trojitá's performance. What needs work now is polishing the GUI and making it play well with the rest of a user's system. I don't care whether this polishing means improving Trojitá's GUI iteratively or whether its IMAP support gets used as a library in, say, KMail -- both would be very successful outcomes. It would be terrific to somehow combine the nice, polished UI of the more established e-mail clients with the IMAP engine from Trojitá. There is a GSoC proposal for integrating Trojitá into KDE's Kontact -- but for it to succeed, people from other projects must get involved as well. I have put seven years of my time into making the IMAP support rock; I would not be able to achieve the same if I were improving KMail instead. I don't need a fast KMail, I need a great e-mail client. Trojitá works well enough for me.

Oh, and there's also a currently running fundraiser for better address book integration in Trojitá. We are not asking for $ 100k, we are asking for $ 199. Let's see how many people are willing to put the money where their mouth is and actually do something to help the PIM on a free desktop. Patches and donations are both equally welcome. Actually, not really -- great patches are much more appreciated. Because Jos is right -- it takes a lot of work to produce great software, and things get better when there are more poeple working towards their common goal together.

Update: it looks like my choice of kickstarter platform was rather poor, catincan apparently doesn't accept PayPal :(. There's the possibility of direct donations over SourceForge/PayPal -- please keep in mind that these will be charged even if fewer donors pledge to the idea.

May 05, 2013

For a long time I have found it a pain that, every time I keyword a package, I receive repoman failures (mostly dependency.bad) that do not concern the arch I'm changing.
So, after checking the repoman manual, I realized that --ignore-arches does not fit my case, and I decided to request a new feature: --include-arches.
This feature, as explained in the bug, runs the checks only for the arches you pass as argument and should be used only when you are keywording/stabilizing.

Some examples/usage:

First, it saves time, the following example will try to run repoman full in the kdelibs directory:
$ time repoman full > /dev/null 2>&1
real 0m12.434s

$ time repoman full --include-arches "amd64" > /dev/null 2>&1
real 0m3.880s

Second, kdelibs suffers from a dependency.bad on amd64-fbsd, so:
$ repoman full
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs
dependency.bad 2
kde-base/kdelibs/kdelibs-4.10.2.ebuild: PDEPEND: ~amd64-fbsd(default/bsd/fbsd/amd64/9.0) ['>=kde-base/nepomuk-widgets-4.10.2:4[aqua=]']

$ repoman full --include-arches "amd64"
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs

Now when I keyword packages I can check only the specific arches and skip the useless checks, since in this case they are only a waste of time.
Thanks to Zac for the work on it.

May 03, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
May 3rd = Day Against DRM (May 03, 2013, 14:23 UTC)

Learn more at (and

Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)

Almost a year ago, I worked with Pooja on transliterating a Hindi poem to Bharati Braille for a Type installation at Amar Jyoti School, an institute for the visually-impaired in Delhi. You can read more about that in her blog post about it. While working on that, we were surprised to discover that there were no free (or open source) tools to do the conversion! All we could find were expensive proprietary software or horribly wrong websites. We had to sit down and manually transliterate each character while keeping in mind the idiosyncrasies of the conversion.

Now, like all programmers who love what they do, I have an urge to reduce the amount of drudgery and repetitive work in my life with automation ;). In addition, we both felt that a free tool to do such a transliteration would be useful for those who work in this field. And so, we decided to work on a website to convert from Devanagari (Hindi & Marathi) to Bharati Braille.

Now, after tons of research and design/coding work, we are proud to announce the first release of our Devanagari to Bharati Braille converter! You can read more about the converter here, and download the source code on Github.

If you know anyone who might find this useful, please tell them about it!

May 01, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

If you’re a university student, time is running out! You could get paid to hack on Gentoo or other open-source software this summer, but you’ve gotta act now. The deadline to apply for the Google Summer of Code is this Friday.

If this sounds like your dream come true, you can find some Gentoo project ideas here and Gentoo’s GSoC homepage here. For non-Gentoo projects, you can scan through the GSoC website to find the details.

Tagged: gentoo, gsoc

April 28, 2013
Raúl Porcel a.k.a. armin76 (homepage, bugs)
The new BeagleBone Black and Gentoo (April 28, 2013, 18:02 UTC)

Hi all, long time no see.

Some weeks ago I got an early version of the BeagleBone Black from the people at to create the documentation I always create with every device I get.

As always, I'd like to announce the guide for installing Gentoo on the BeagleBone Black. Have a look at: . Feel free to send any corrections my way.

This board is a new version of the original BeagleBone, known in the community as the BeagleBone white, for which I wrote a post:

This new version differs in some aspects from the previous one:

  • Cheaper: $45 vs $89 for the BeagleBone white
  • 512MB DDR3L RAM vs 256MB DDR2 RAM of the BeagleBone white
  • 1GHz of processor speed vs 720MHz of the BeagleBone white, both when using an external PSU for power

It also has some features which the old BeagleBone didn't have:

  • miniHDMI output
  • 2GB eMMC

However, the new version is missing:

  • Serial port and JTAG through the miniUSB interface

This feature was dropped as a cost-cutting measure, as can be read in the reference manual.

The full specs of the BeagleBone Black are:
  • ARMv7-A 1GHz TI AM3358/9 ARM Cortex-A8 processor
  • SMSC LAN8710 Ethernet card
  • 1x microSDHC slot
  • 1x USB 2.0 Type-A port
  • 1x mini-USB 2.0 OTG port
  • 1x RJ45
  • 1x 6-pin 3.3V TTL header for serial
  • Reset, power and user-defined buttons

More info about the specs in BeagleBone Black’s webpage.

For those as curious as me, here's the bootlog and the cpuinfo.

I’ve found two issues while working on it:

  1. The USB port doesn't have working hotplug detection. That means a USB device plugged into the port is detected only once; if you remove the device, the port stops working. I've been told that they are working on it. I haven't been able to find a workaround.
  2. The BeagleBone Black doesn't detect a microSD card inserted after it has booted from the eMMC. If you want to use a microSD card for additional storage, it must be inserted before boot.

I'd like to thank the people at for providing me a BeagleBone Black to document this.

Have fun!

April 26, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB and Pacemaker recent bumps (April 26, 2013, 14:23 UTC)

mongoDB 2.4.3

Yet another bugfix release, this new stable branch is surely one of the most quickly iterated I’ve ever seen. I guess we’ll wait a bit longer at work before migrating to 2.4.x.

pacemaker 1.1.10_rc1

This is the release of pacemaker we’ve been waiting for, fixing among other things, the ACL problem which was introduced in 1.1.9. Andrew and others are working hard to get a proper 1.1.10 out soon, thanks guys.

Meanwhile, we (the Gentoo cluster herd) have been contacted by @Psi-Jack, who has offered his help to follow and keep some of our precious clustering packages up to date. I hope our work together will benefit everyone!

All of this is live on portage, enjoy.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My time abroad: loyalty cards (April 26, 2013, 12:56 UTC)

Compared to most people around me now, and probably most of the people who read my blog, my life is not that extraordinary in terms of travel and moving around. I've been, after all, scared of planes for years, and it wasn't until last year that I got out of the continent — in a year, though, I more than doubled the number of flights I've been on, with 18 last year, and more than doubled the number of countries I've been to, counting Luxembourg even though I only landed there and got on a bus to get back to Brussels after Alitalia screwed up.

On the other hand, compared to most of the people I know in Italy, I've been going around quite a bit, as I spent a considerable amount of time last year in Los Angeles, and I've now moved to Dublin, Ireland. And there are quite a few differences between these places and Italy. I've already written a bit about the differences I found during my time in the USA, but this time I want to focus on something which is quite a triviality, but still is a remarkable difference between the three countries I got to know up to now. As the title suggests, I'm referring to stores' loyalty cards.

Interestingly enough, there was just this week an article in the Irish Times about the “privacy invasion” of loyalty cards. I honestly don't see it as big a deal as many others do. Yes, they do profile your shopping habits. Yes, if you do not keep private the kind of offers they send you, they might tell others something about you as well — the newspaper actually brought up the example of a father who discovered his daughter's pregnancy because of the kind of coupons the supermarket was sending, based on her change of spending habits; I'm sorry but I cannot really feel bad about that. After all, absolute privacy and relevant offers are at opposite ends of a range… and I'm usually happy enough when companies are relevant to me.

So of course stores want to know the habits of a single person, or of a single household, and for that they give you loyalty cards… but for you to use them, they have to give you something in return, don’t they? This is where the big difference on this topic appears clearly, if you look at the three countries:

  • in both Italy and Ireland, you get “points” with your shopping; in the USA, instead, the card gives you immediate discounts; I’m pretty sure that this gives not-really-regular-shoppers a good reason to get the card as well: you can easily save a few dollars on a single grocery run by getting the loyalty card at the till;
  • in Italy you redeem the points to get prizes – this works not so differently than with airlines after all – sometimes by adding a contribution, sometimes for free; in my experience the contribution is never worth it, so either you get something for free or just forget about it;
  • in Ireland I still haven't seen a single prize system; instead they work with coupons: you get a certain amount of points for each euro you spend (usually, one point per euro), and when you reach a certain amount of points, they acquire a value (usually, one cent per point), and a coupon redeemable for that value is sent to you.

Of course, the “European” method (only by contrast with the American one, since I don't know what other countries do) is a real loyalty scheme: you need a critical mass of points for them to be useful, which means that you'll try to shop at the same store as much as you can. This is true for airlines as well, after all. On the other hand, people who shop occasionally are less likely to request the card at all, so even if there is some kind of data to be found in their shopping trends, they will be completely ignored by this kind of scheme.

I'm honestly not sure which method I prefer. At this point I still have one or two loyalty cards from my time in Los Angeles, and I'm now collecting a number of loyalty cards here in Dublin. Some are definitely a good choice for me, like the Insomnia card (I love getting coffee at a decent place where I can spend time reading, on weekends); others, like Dunnes, make me wonder… the distance from the supermarket to where I'm going to live most likely offsets the usefulness of their coupons compared to the (otherwise quite more expensive) Spar at the corner.

At any rate, I just want to write my take on the topic, which is definitely not of interest to most of you…

April 25, 2013
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Recently, I have been toying around with GateOne, a web-based SSH client/terminal emulator. However, installing it on my server proved to be a bit challenging: it requires tornado as a webserver and uses websockets, while I have an Apache 2.2 instance already running with a few sites on it (and my authentication system configured to my taste).

So, I looked into how to configure a reverse proxy for GateOne, but websockets were not officially supported by Apache... until recently! Jim Jagielski added the proxy_wstunnel module in trunk a few weeks ago. From what I have seen on the mailing list, backporting to 2.4 is easy to do (and was suggested as an official backport), but 2.2 required a few additional changes to the original patch (and current upstream trunk).

A few fixes later, I got a working patch (based on Apache 2.2.24), available here:

Recompile with this patch, and you will get a nice and shiny module file!

Now just load it (in /etc/apache2/httpd.conf in Gentoo):
<IfDefine PROXY>
LoadModule proxy_wstunnel_module modules/
</IfDefine>

and add a location pointing to your GateOne installation:

<Location /gateone/ws>
    ProxyPass wss://
    ProxyPassReverse wss://
</Location>

<Location /gateone>
    Order deny,allow
    Deny from all
    Allow from #your favorite rule
</Location>

Reload Apache, and you now have GateOne running behind your Apache server :) If it does not work, first check GateOne's log and configuration, especially the "origins" variable.

For other websocket applications, Jim Jagielski comments here:

ProxyPass /whatever ws://websocket-srvr.example.com/

Basically, the new submodule adds the 'ws' and 'wss' scheme to the allowed protocols between the client and the backend, so you tell Apache that you'll be talking 'ws' with the backend (same as ajp://whatever sez that httpd will be talking ajp to the backend).
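Putting the pieces together for GateOne, a minimal configuration could look like the following sketch (the module path, host and port are placeholders for your own setup; mod_proxy itself must already be loaded):

```
<IfDefine PROXY>
    LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
</IfDefine>

<Location /gateone/ws>
    ProxyPass wss://localhost:10443/gateone/ws
    ProxyPassReverse wss://localhost:10443/gateone/ws
</Location>
```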

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Tarsnap and backup strategies (April 25, 2013, 13:52 UTC)

After having had quite a traumatic experience last November with a customer's service running on one of the virtual servers I manage, I made sure to have a very thorough backup for all my systems. Unfortunately, it turned out to be a bit too thorough, so let me explore with you what was going on.

First of all, the software I use to run the backup is tarsnap — you might have heard of it or not, but it’s basically a very smart service, that uses an open-source client, based upon libarchive, and then a server system that stores content (de-duplicated, compressed and encrypted with a very flexible key system). The author is a FreeBSD developer, and he’s charging an insanely small amount of money.

But the most important part to know when you use tarsnap is that you just always create a new archive: it doesn’t really matter what you changed, just get everything together, and it will automatically de-duplicate the content that didn’t change, so why bother? My first dumb method of backups, which is still running as of this time, is to simply, every two hours, dump a copy of the databases (one server runs PostgreSQL, the other MySQL — I no longer run MongoDB but I start to wonder about it, honestly), and then use tarsnap to generate an archive of the whole /etc, /var and a few more places where important stuff is. The archive is named after date and time of the snapshot. And I haven’t deleted any snapshot since I started, for most servers.

It was a mistake.

The moment when I went to recover the data out of earhart (the host that still hosts this blog, a customer's app, and a couple more sites, like the assets for the blog and even Autotools Mythbuster — but all the static content, as it's managed by git, is now also mirrored and served active-active from another server called pasteur), the time it took to extract the backup was unsustainable. The reason was obvious when I thought about it: since it has been de-duplicating for almost a year, it would have to scan hundreds if not thousands of archives to get all the small bits and pieces.

I still haven't replaced this backup system, which is very bad for me, especially since it takes a long time to delete the older archives even after extracting them. On the other hand it's also a matter of tradeoffs in expenses, as going through all the older archives to remove the old crap drained my credits with tarsnap quickly. Since the data is de-duplicated and encrypted, the archives' data needs to be downloaded to be decrypted before it can be deleted.

My next preference is going to be to set it up so that the script is executed in different modes: 24 times in 48 hours (every two hours), 14 times in 14 days (daily), and 8 times in two months (weekly). The problem is actually doing the rotation properly with a script, but I’ll probably publish a Puppet module to take care of that, since it’s the easiest thing for me to do, to make sure it executes as intended.
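The rotation logic itself can be sketched like this, assuming archives are named so that lexicographic order matches chronological order (e.g. a date-time suffix); prune_list is a made-up helper, not a tarsnap command:

```shell
# Print the archive names to delete, keeping only the newest $2 entries.
# $1 is a file with one archive name per line, sorted oldest-first.
prune_list() {
    keep="$2"
    total=$(wc -l < "$1")
    excess=$((total - keep))
    if [ "$excess" -gt 0 ]; then
        head -n "$excess" "$1"
    fi
}

# Hypothetical usage against tarsnap:
#   tarsnap --list-archives | sort > /tmp/archives
#   prune_list /tmp/archives 24 | while read -r name; do
#       tarsnap -d -f "$name"
#   done
```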

The essence of this post is basically to warn you all that, no matter how cheap it is to keep around the whole set of backups since the start of time, it's still a good idea to rotate them… especially for content that does not change that often! Think about it whenever you set up any kind of backup strategy…

April 24, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Hello Gentoo Planet (April 24, 2013, 08:51 UTC)

Hey Gentoo folks !

I finally followed a friend’s advice and stepped into the Gentoo Planet and Universe feeds. I hope my modest contributions will help and be of interest to some of you readers.

As you’ll see, I don’t talk only about Gentoo but also about photography and technology more generally. I also often post about the packages I maintain or I have an interest in to highlight their key features or bug fixes.

April 21, 2013

Those of you who don't live under a rock will have learned by now that AMD has published VDPAU code to use the Radeon UVD engine for accelerated video decode with the free/open source drivers.

In case you want to give it a try, mesa-9.2_pre20130404 has been added (under package.mask) to the portage tree for your convenience. Additionally you will need a patched kernel and new firmware.


For kernel 3.9, grab the 10 patches from the dri-devel mailing list thread (recommended) [UPDATE]I put the patches into a tarball and attached to Gentoo bug 466042[/UPDATE]. For kernel 3.8 I have collected the necessary patches here, but be warned that kernel 3.8 is not officially supported. It works on my Radeon 6870, YMMV.


The firmware is part of radeon-ucode-20130402, but has not yet reached the linux-firmware tree. If you require other firmware from the linux-firmware package, remove the radeon files from the savedconfig file and build the package with USE="savedconfig" to allow installation together with radeon-ucode. [UPDATE]linux-firmware-20130421 now contains the UVD firmware, too.[/UPDATE]

The new firmware files are
radeon/RV710_uvd.bin: Radeon 4350-4670, 4770.
radeon/RV770_uvd.bin: Not useful at this time. Maybe later for 4200, 4730, 4830-4890.
radeon/CYPRESS_uvd.bin: Evergreen cards.
radeon/SUMO_uvd.bin: Northern Islands cards and Zacate/Llano APUs.
radeon/TAHITI_uvd.bin: Southern Islands cards and Trinity APUs.

Testing it

If your kernel is properly patched and finds the correct firmware, you will see this message at boot:
[drm] UVD initialized successfully.
If mesa was correctly built with VDPAU support, vdpauinfo will list the following codecs:
Decoder capabilities:

name level macbs width height
MPEG1 16 1048576 16384 16384
MPEG2_SIMPLE 16 1048576 16384 16384
MPEG2_MAIN 16 1048576 16384 16384
H264_BASELINE 16 9216 2048 1152
H264_MAIN 16 9216 2048 1152
H264_HIGH 16 9216 2048 1152
VC1_SIMPLE 16 9216 2048 1152
VC1_MAIN 16 9216 2048 1152
VC1_ADVANCED 16 9216 2048 1152
MPEG4_PART2_SP 16 9216 2048 1152
MPEG4_PART2_ASP 16 9216 2048 1152
If mplayer and its dependencies were correctly built with VDPAU support, running it with "-vc ffh264vdpau," parameter will output something like the following when playing back a H.264 file:
VO: [vdpau] 1280x720 => 1280x720 H.264 VDPAU acceleration
To make mplayer use acceleration by default, uncomment the [vo.vdpau] section in /etc/mplayer/mplayer.conf

Gallium3D Head-up display

Another cool new feature is the Gallium3D HUD (link via Phoronix), which can be enabled with the GALLIUM_HUD environment variable. This supposedly works with all the Gallium drivers (i915g, radeon, nouveau, llvmpipe).

An example screenshot of Supertuxkart using GALLIUM_HUD="cpu0+cpu1+cpu2:100,cpu:100,fps;draw-calls,requested-VRAM+requested-GTT,pixels-rendered"

If you have any questions or problems setting up UVD on Gentoo, stop by #gentoo-desktop on freenode IRC.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is a follow-up to my last post introducing autotools. I'm trying to keep these posts bite-sized, both because it seems to work nicely and because this way I can avoid leaving the posts rotting in the drafts set.

So after creating a simple autotools build system in the previous post, you might want to know how to build a library — this is where the first part of the complexity kicks in. The complexity is not, though, in using libtool, but in making a proper library. So the question is “do you really want to use libtool?”

Let’s start from a fundamental rule: if you’re not going to install a library, you don’t want to use libtool. Some projects that only ever deal with programs still use libtool because that way they can rely on .la files for static linking. My suggestion is (very simply) not to rely on them as much as you can. Doing it this way means that you no longer have to care about using libtool for non-library-providing projects.

But if you are building said library, using libtool is important. Even if the library is internal only, trying to build it without libtool is just going to be a big headache for the packager that looks into your project (trust me, I've seen such projects). Before entering the details of how to use libtool, though, let's look into something else: what you need to think about in your library.

First of all, make sure to have a unique prefix for your public symbols, be they constants, variables or functions. You might also want to have one for symbols that you use within your library across different translation units — my suggestion in this example is going to be that symbols starting with foo_ are public, while symbols starting with foo__ are private to the library. You'll soon see why this is important.

Reducing the amount of symbols that you expose is not only a good performance consideration, but it also means that you avoid the off-chance to have symbol collisions which is a big problem to debug. So do pay attention.

There is another thing that you should consider when building a shared library and that’s the way the library’s ABI is versioned but it’s a topic that, in and by itself, takes more time to discuss than I want to spend in this post. I’ll leave that up to my full guide.

Once you got these details sorted out, you should start by slightly changing the configure.ac file from the previous post so that it initializes libtool as well:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])
LT_INIT

AC_PROG_CC

AC_OUTPUT([Makefile])

Now it is possible to provide a few options to LT_INIT, for instance to disable the generation of static archives by default. My personal recommendation is not to touch those options in most cases. Packagers will disable static linking when it makes sense, and if the user does not know much about static and dynamic linking, they are better off getting everything by default on a manual install.

On the Makefile.am side, the changes are very simple. Libraries built with libtool have a different class than programs and static archives, so you declare them as lib_LTLIBRARIES with a .la extension (at build time this is unavoidable). The only real difference between _LTLIBRARIES and _PROGRAMS is that the former gets its additional links from _LIBADD rather than _LDADD like the latter.

bin_PROGRAMS = fooutil1 fooutil2 fooutil3
lib_LTLIBRARIES = libfoo.la

libfoo_la_SOURCES = lib/foo1.c lib/foo2.c lib/foo3.c
libfoo_la_LIBADD = -lz
libfoo_la_LDFLAGS = -export-symbols-regex '^foo_[^_]'

fooutil1_LDADD = libfoo.la
fooutil2_LDADD = libfoo.la
fooutil3_LDADD = libfoo.la -ldl

pkginclude_HEADERS = lib/foo1.h lib/foo2.h lib/foo3.h

The _HEADERS variable is used to define which header files to install and where. In this case, it goes into ${prefix}/include/${PACKAGE}, as I declared it a pkginclude install.

The use of -export-symbols-regex – further documented in the guide – ensures that only the symbols that we want to have publicly available are exported, and does so in an easy way.

This is about it for now — one thing that I didn't mention in the previous post, but which I'll expand on in the next iteration or the one after, is that the only command you need to regenerate the autotools files is autoreconf -fis, and that still applies after introducing libtool support.

April 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Bitrot is accumulating, and while we've tried to keep kdepim-4.4 running in Gentoo as long as possible, the time is slowly coming to say goodbye. In effect this is triggered by annoying problems like these:

There are probably many more such bugs around, where incompatibilities between kdepim-4.4 and kdepimlibs of more recent releases occur or other software updates have led to problems. Slowly it's getting painful, and definitely more painful than running a recent kdepim-4.10 (which has in my opinion improved quite a lot over the last major releases).
Please be prepared for the following steps:
  • end of April 2013: all kdepim-4.4 packages in the Gentoo portage tree will be package.masked
  • end of May 2013: all kdepim-4.4 packages in the Gentoo portage tree will be removed
  • afterwards, we will finally be able to simplify the eclasses a lot by removing the special handling
We still have the kdepim-4.7 upgrade guide around, and it also applies to the upgrade from kdepim-4.4 to any later version. Feel free to improve it or suggest improvements.

R.I.P. kmail1.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.9 (April 18, 2013, 17:41 UTC)

First of all, py3status is on PyPI! You can now install it with the simple and usual:

$ pip install py3status

This new version features my first merged pull request, from @Fandekasp, who kindly wrote a pomodoro module which helps that technique's adepts by displaying a counter on their bar. I also fixed a few glitches in module injection and some documentation.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I’ve been asked over on Twitter if I had any particular tutorial for an easy one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, under the name autotools we include quite a few different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two: autoconf and automake. While you could use the former without the latter, you really don't want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you'll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting, and I'm afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])

AC_PROG_CC

AC_OUTPUT([Makefile])

Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is being told the name and version of the project, the place to report bugs, and a URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; even though I found at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise for the reader. Finally, you have to tell it to output Makefile (it’ll use Makefile.in, but we’ll create Makefile.am instead soon).

To build a program, you then need to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose sources are, by default, hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a piece of documentation. Yes, this is really enough as a very basic Makefile.am.
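With those two files in place, the usual bootstrap-and-build cycle is a sketch like this (assuming autoconf and automake are installed, and that hellow.c and README exist alongside them):

$ autoreconf -i
$ ./configure
$ make
$ make dist

autoreconf -i regenerates configure and Makefile.in and installs the missing auxiliary files, and make dist produces the .tar.xz tarball we asked for above.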

If you were to have two programs, hellow and hellou, with a convenience library shared between the two, you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step), and with the new LTO options it should very well improve the optimization as well.
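Concretely, for the convenience-library variant, the configure checks would just grow one extra line next to the compiler check (a fragment only; the rest of configure.ac stays as before):

AC_PROG_CC
AC_PROG_RANLIB

automake requires the ranlib check whenever you build a static archive with the _LIBRARIES primary, and will error out at autoreconf time if it’s missing.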

As you can see, this is really easy when done from the basics… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this into Autotools Mythbuster and update the ebook with an “easy how-to” as an appendix.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB v2.4.2 released (April 18, 2013, 10:53 UTC)

After the security-related bumps of the previous releases over the last few weeks, it was about time 10gen released a 2.4.x version fixing the following issues:

  • Fix for upgrading sharded clusters
  • TTL assertion on replica set secondaries
  • Several V8 memory leak and performance fixes
  • High volume connection crash

I guess everything listed above would have affected our cluster at work, so I’m glad we’ve been patient in following up on this release :) See the changelog for details.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
I’ve been in Australia for two months (April 18, 2013, 08:05 UTC)

Well, the title says it. I’ve now been here for two months. I’m working at Skydive Maitland, which is 40 minutes from the coast and 2+ hours from Sydney. So far, I’ve broken even on my Australian travel/living expenses AND I’m skydiving 3-4 days a week; what could be better? I did 99 jumps in March; normally I do 400 per year. Australia is pretty nice: it is easy to live here and there is plenty to see, but it is hard to get places since the country is so big and I need a few days’ break to go someplace.

How did I end up here? I knew I would go to Australia at some point during my trip since I would be passing by and it is a long way from home. (Sidenote: of all the travelers at hostels in Europe, about 40-50% that I met were Aussie.) In December, I bought my right to work in Australia by getting a working holiday visa. That required $270 and 10 minutes to fill out a form on the internet; overnight I had my approval. So, that was settled: I could now work for 12 months in Australia and show up there within a year. I knew I would be working in Australia because it is a rather expensive country to live and travel in. I thought about picking fruit in an orchard since they always hire backpackers, but skydiving sounded more fun in the end (of course!). So, in January, I emailed a few dropzones stating that I would be in Australia in the near future and was looking for work. Crickets… I didn’t hear back from anyone. Fair enough; most businesses will have adequate staffing in the middle of the busy season. But one place did get back to me some weeks later. Then it took one Skype convo to come to a friendly agreement, and I was looking for flights after. Due to some insane price scheming, there was one flight in two days that was half the price of the others. That sealed my decision, and I was off…

Looking onward: full-time instructor for March and April, then part-time in May and June so I can see more of Australia. I have a few road trips in the works; I just need my own vehicle to make that happen. Working on it. After Australia, I’m probably going to Japan or SE Asia like I planned.

Since my sister already asked: yes, I do see kangaroos nearly every day.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : streets (April 18, 2013, 06:02 UTC)


April 17, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Bundling libraries for trouble (April 17, 2013, 12:01 UTC)

You might remember that I’ve been very opinionated against bundling libraries and, to a point, static linking of libraries for Gentoo. My reasons have been mostly geared toward security, but there have been a few more instances I wrote about of problems with bundled libraries and stability, for instance the moment when you get symbol collisions between a bundled library and a different version of the same library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it’s much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, there are very few people speaking against them outside your average distribution speaker.

One such rare gem came from Steve McIntyre a few weeks ago, and it actually makes two different topics I wrote about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly code for performance-critical code, which would have to be ported for the new 64-bit ARM architecture (AArch64). And this mostly reminded me of x32.

In many ways, there are many problems in common between AArch64 and x32, and they mostly come down to the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture but is not identical. The biggest difference, apart from the implementations themselves, is in the way the two have been conceived: as I said before, Intel’s public documentation for the ABI’s inception noted explicitly that it was designed for closed systems, rather than open ones (the definition of open or closed system has nothing to do with open- or closed-source software, and has more to do with the expectations of what users will be able to add to the system). The recent stretching of x32 onto open system environments is, in my opinion, not really a positive thing, but if that’s what people want …

I think Steve’s report is worth a read for those who are interested to see what it takes to introduce a new architecture (or ABI). In particular, for those who maintained that my complaining about x32 breaking assembly code all over the place was a moot point: people with a clue about how GCC works know that sometimes you cannot get away with its optimizations, and you actually need to handwrite code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where the handwritten assembly gets imported through bundling and direct inclusion… this tends to be relatively common because handwritten assembly is usually tied to performance-critical code… which for many is the same code they bundle because a dynamic link is “not fast enough”. I disagree.

So anyway, give a read to Steve’s report, and then compare with some of the points made in my series of x32-related articles and tell me if I was completely wrong.

April 16, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : chinatown (April 16, 2013, 20:41 UTC)


Jeremy Olexa a.k.a. darkside (homepage, bugs)
Sri Lanka in February (April 16, 2013, 06:16 UTC)

I wrote about how I ended up in Sri Lanka in my last post, here. I came down with a GI sickness during my second week, from a bad meal or the water, and it spoiled the last week that I was there; but I had my own room, a bathroom, a good book, and a resort on the beach. Overall, the first week was fun: teaching English, living in a small village and being immersed in the culture while staying with a host family. Hats off to volunteers who can live there long term. I was craving “western culture” after a short time. I didn’t see as much as I wanted to, like the wild elephants, Buddhist temples or surf lessons. There will be other places or times to do that stuff though.

Sri Lanka pics

April 15, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

It is now over a week since the announcement of Blink, a new rendering engine for the Chromium project.

I hope it will be useful to provide links to the best articles about it, the ones with good, technical content.

Thoughts on Blink from HTML5 Test is a good summary of the history of Chrome and WebKit, and puts this recent announcement in context. For even more context (nothing about Blink), you can read Paul Irish's excellent WebKit for Developers post.

Peter-Paul Koch (probably best known for QuirksMode) has good articles about Blink: Blink and Blinkbait.

I also found it interesting to read Krzysztof Kowalczyk's Thoughts on Blink.

Highly recommended Google+ posts by Chromium developers:

If you're interested in the technical details or want to participate in the discussions, why not follow blink-dev, the mailing list of the project?

Gentoo at FOSSCOMM 2013 (April 15, 2013, 19:03 UTC)

What? FOSSCOMM 2013

Free and Open Source Software COMmunities Meeting (FOSSCOMM) 2013

When? 20-21 April 2013

Where? Harokopio University, Athens, Greece


FOSSCOMM 2013 is almost here, and Gentoo will be there!

We will have a booth with Gentoo promo stuff, stickers, flyers, badges, live DVDs and much more! Whether you're a developer, user, or simply curious, be sure to stop by. We are also going to represent Gentoo in a round table with other FOSS communities. See you there!

Pavlos Ratis contributed the draft for this announcement.

Rolling out systemd (April 15, 2013, 10:43 UTC)


We started to roll out systemd today.
But don’t panic! Your system will still boot with OpenRC and everything is expected to keep working without trouble.
We are aiming to support both init systems, at least for some time (a long time, I believe), and having systemd replace udev (note: systemd is a superset of udev) is a good way to make systemd users happy in Sabayon land. From my testing, the slowest part of the boot is now the genkernel initramfs, in particular the module autoload code, which, as you may expect, I’m going to try to improve.

Please note that we are not willing to accept systemd bugs yet, because we’re still fixing up service units and adding the missing ones, the live media scripts haven’t been migrated and the installer is not systemd aware. So, please be patient ;-)

Having said this, if you are brave enough to test systemd out, you’re in luck: in Sabayon it’s just two commands away, thanks to eselect-sysvinit and eselect-settingsd. And since I expect those brave people to know how to use eselect, I won’t waste more time on them now.
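As a sketch only (assuming the two modules follow the standard eselect list/set convention; the exact target names may differ on your system), the switch would look something like:

# eselect sysvinit list
# eselect sysvinit set systemd
# eselect settingsd set systemd

Run the list action first to see which targets your install actually offers before setting one.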

April 14, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

For a long time, I've been extraordinarily happy with both NVIDIA graphics hardware and the vendor-supplied binary drivers. Functionality, stability, speed. However, things are changing and I'm frustrated. Let me tell you why.

Part of my job is to do teaching and presentations. I have a trusty thinkpad with a VGA output which can in principle supply about every projector with a decent signal. Most of these projectors do not display the native 1920x1200 resolution of the built-in display. This means that if you configure the second display to clone the first, you will end up seeing only part of the screen. In the past, I solved this by using nvidia-settings and setting the display to a lower resolution supported by the projector (nvidia-settings told me which ones I could use) and then letting it clone things. Not so elegant, but everything worked fine, and this amount of fiddling is still something that can be done in front of a seminar room while someone is introducing you and the audience gets impatient.

Now consider my surprise when, suddenly, after a driver upgrade the built-in display was completely glued to the native resolution. Only setting possible: 1920x1200. The first time I saw that I was completely clueless what to do; starting the talk took a bit longer than expected. A simple, but completely crazy, solution exists: disable the built-in display and only enable the projector output. Then your X session is displayed there and resized accordingly. You'll have to look at the silver screen while talking, but that's not such a problem. A bigger pain actually is that you may have to leave the podium in a hurry and then have no video output at all...

Now, googling. Obviously a lot of other people have the same problem. Hacks like this one just don't work; I ended up with nice random screen distortions. Here's a thread on the nVidia DevTalk forum from which I can quote: "The way it works now is more 'correct' than the old behavior, but what the user sees is that the old way worked and the new does not." It seems like nVidia now expects each application to handle any mode switching internally. My use case does not even exist from their point of view. Here's another thread, and in general users are not happy about it.

Finally, I found this link where the following reply is given: "The driver supports all of the scaling features that older drivers did, it's just that nvidia-settings hasn't yet been updated to make it easy to configure those scaling modes from the GUI." Just great.

Gentlemen, this is a serious annoyance. Please fix it. Soon. Not everyone is willing to read up on xrandr command line options and fiddle with ViewPortIn, ViewPortOut, MetaModes and other technical stuff. Especially while the audience is waiting.
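For reference, the kind of incantation this apparently now requires looks roughly like the following (a sketch, untested; the display name DPY-0 and both resolutions are placeholders, and the MetaMode string follows the ViewPortIn/ViewPortOut syntax of the recent drivers):

$ nvidia-settings --assign CurrentMetaMode="DPY-0: nvidia-auto-select { ViewPortIn=1024x768, ViewPortOut=1920x1200+0+0 }"

This is exactly the sort of thing one should not have to type from memory while an audience waits.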