
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
June 25, 2013, 23:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

June 25, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
More threads is not always a good solution (June 25, 2013, 16:51 UTC)

You might remember that some time ago I took over unpaper, as I used it (and sort of still use it) to pre-filter the documents I scan to archive in electronic form. While I’ve been very busy in the past few months, I’m now trying to clear out my backlog, and it turns out that a good deal of it involves unpaper. There are bugs to fix and features to implement, but also patches to evaluate.

One of the most recent patches I received is designed to help with the performance of the tool, which is definitely not what Jens, the original author, had in mind when he came up with the code. Better performance is something that just about everybody would love to have at this point. Unfortunately, it turns out that the patch is not working out as intended.

The patch is two-fold: on one side, it makes (optional) use of OpenMP to parallelize the code in the hope of speeding it up. Given that most of the code is a bunch of loops, it seemed obvious that using multithreading would speed things up, right? Well, the first try after applying it showed very easily that it slows things down, at least on Excelsior, which is a 32-core system. While it would take less than 10 seconds for the first test to run without OpenMP, it would take over a minute with it, spinning up all the cores for a 3000% CPU usage.

A quick test shows that forcing the number of threads to 8, rather than leaving it unbound, makes it actually faster than the non-OpenMP variant. This means that there are a few different variables in play that need to be tuned for performance to improve. Without going into profiling the code, I can figure out a few things that can go wrong with unchecked multithreading:

  • extensive locking when each worker thread is running, either because they are all accessing the same memory page, or because the loop needs a “reduced” result (e.g. a value has to be calculated as the sum of values calculated within a parallelized loop); in the case of unpaper I’m sure both situations happen fairly often in the looping codepath.
  • cache thrashing: as the worker threads jump around the memory area to process, the code is no longer fetching memory linearly;
  • something entirely more complicated.

Besides making it obvious that doing the “stupid” thing and just making all the major loops parallel is not enough, this situation is bringing me to the point where I will finally make use of the Using OpenMP book that I got a few years back and only started reading after figuring out that OpenMP was not ready yet for prime time. Nowadays OpenMP support in Linux has improved to the point that it’s probably worth taking another look at it, and I guess unpaper is going to be the test case for it. You can expect a series of blog posts on the topic at this point.

The first thing I noticed while reading the way OpenMP handles shared and private variables is that the const indication is much stronger when using OpenMP. The reason is that if you tell the code that a given datum is not going to change (it’s a constant, not a variable), it can easily assume that direct access from all the threads will work properly; the variable is shared by default among all of them. This is something that, for non-OpenMP programs, is usually inferred from the SSA form — I assume that for whatever reason, OpenMP makes SSA weaker.

Unfortunately, this also means that there is one nasty change that might be required to make code friendlier to OpenMP, and that is a change in the prototypes of functions. The parameters to a function are, as far as OpenMP is concerned, variables, and that means that unless you declare them const, it won’t be sharing them by default. Within a self-contained program like unpaper, changing the signatures of the functions so that parameters are declared, for instance, const int is painless — but for a library it would be API breakage.

Anyway, just remember: adding multithreading is not the silver bullet you might think it is!

P.S.: I’m currently trying to gauge interest on a volume that would collect, re-edit, and organize all I’ve written up to now on ELF. I’ve created a page on leanpub where you can note down your interest, and how much you think such a volume would be worth. If you would like to read such a book, just add your interest through the form; as soon as at least a dozen people are interested, I’ll spend all my free time working on the editing.

June 24, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.13 (June 24, 2013, 10:25 UTC)

Yep, a quick bump of py3status to fix a bug reported by @lathan when using python3. The private and special methods detection didn’t work on python3 because class methods are reported differently than in python2.
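For context: Python 2 reports a class’s methods as unbound methods, while Python 3 reports them as plain functions, so a filter written with only inspect.ismethod() in mind silently matches nothing on Python 3. A version-agnostic sketch (the class and function names are illustrative, not py3status’s actual code):

```python
# Sketch: detect user-facing methods on a Py3status-style class in a
# way that works on both Python 2 and Python 3.
import inspect


class Py3status:
    def _private(self):
        pass

    def visible(self, json, config):
        pass


def user_methods(cls):
    # inspect.isfunction matches Python 3 methods accessed on the
    # class; inspect.ismethod covers Python 2's unbound methods.
    return [name for name, obj in inspect.getmembers(cls)
            if (inspect.isfunction(obj) or inspect.ismethod(obj))
            and not name.startswith("_")]


print(user_methods(Py3status))
```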

A special thanks to @bloodred and @drahier too for debugging, testing and proposing some solutions to this problem. It’s the first time I’ve seen multiple members of what I could humbly call the py3status community working together; it’s very nice of you guys!

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My time abroad: Dublin tips (June 24, 2013, 10:15 UTC)

I’m actually writing this while “on vacation” in Italy (vacation being defined as in, I took days off work, but I’ve actually been writing thousands of words, between the blog, updates to Autotools Mythbuster and starting up a new project that will materialize in the future months), but I’ve been in Ireland for a few months already, and there are a few tips that I think might be useful for the next person moving to Dublin.

First of all, get a local SIM card. It’s easy and quick to get a prepay (top-up) card. I actually ended up getting one from Three Ireland, for a very simple reason: their “Three like home” promotion allows me to use the card in Italy, the UK and a few more countries as if it were a local one. In particular, I’ve been using an HSDPA connection with my Irish account while in Italy, without risking bankruptcy — the Three offer I got in Ireland is actually quite nice by itself: as long as I top up €20 per month, whether I spend it or keep it, they give me unlimited data (it shows up in my account as 2TB of data!). The same offer persists in Italy.

I’ve also found it useful to get a pre-paid mobile hotspot device, for when guests happen to stop by: since it does not make sense for them to get an Irish SIM, I just hand them the small device and they connect their phone to that. When my sister came to visit, we were able to keep in touch via WhatsApp; neither of us spent money on expensive international SMS, and she could use the maps even when I was not around. I decided to hedge my bets and got a Vodafone hotspot; the device cost me €60 and came with a full month prepaid, and I can then buy weekly packages when I get guests.

Technology-wise, I found that Dublin is surprisingly behind even compared to Italy: I could find no chain stores like Mediaworld or Mediamarkt, and I would suggest you avoid Maplin like the plague — I quickly needed two mickey-mouse cables with UK plugs, so I bought them there for a whopping €35 per cable… they usually sell for €6. I’ve been lucky at Peats (in Parnell Street), but it seems to be very hit-and-miss depending on which employee is helping you. Almost everything else I ended up getting through Amazon — interestingly enough I got a mop (Mocio Vileda) through Amazon as well, because the local supermarkets in my area did not carry it, and the one I found it at (Dunnes in St Stephen’s Green) made it cumbersome to bring it back home; Amazon shipped it and I paid less for it.

Speaking of supermarkets, I got extremely lucky in my house hunting, and I live right between two EuroSpar shops — some of their prices are closer to a convenience store than a supermarket, but they are not altogether too bad. I was able to find buckwheat flakes in their “healthy and gluten free” aisle, which I actually like (since I’m not a coeliac, I don’t usually try to eat gluten free — I just happen to dislike corn and rice flakes).

I also found out that ordering online at Tesco can actually save me money: it allows me to buy bigger boxes of things like detergents, as I don’t have to carry the heavy bags, and at the same time they tend to have enough offers to make up for the delivery charge of €4. Since they have a very neat mobile app (as well as website — they even ask you the level of JavaScript complexity you want to use, to switch to a more accessible website), I found that it’s convenient for me to prepare a basket over there, then drop by the EuroSpar to check for things that are cheaper there (when I go there for coffee), and finally order it. For those who wonder why I still drop by the EuroSpar: as I said in a previous post, they have an Insomnia coffee shop inside, which means I go there to have breakfast, or for a post-lunch coffee, whenever I’m not at work. Plus sometimes you need something right away and don’t want to wait for delivery, in which case I also go there.

Anyway, more tips might follow at a later time, for the moment you have a few ideas of what I’m spending my time doing in Dublin…

June 23, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

So the new version of Automake is out, and is most likely going to be the last release of the first major version. The next version of Automake is going to be 2.0, not to be confused with Automake NG which is a parallel project, still maintained by Stefano, but with a slightly different target.

After the various issues in the 1.13 series, Stefano decided to take a much more conservative approach for both 1.14 and the upcoming 2.0. While a bunch of features are getting deprecated with these two versions, they will not be dropped at least until version 3.0, I suppose. This means that there should be plenty of time for developers to update their Autotools before they start failing. Users of -Werror for Automake will of course still see issues, but I’ve already written about it so I’m not going back on the topic.

There are no big deals with the new release, by the way, as its theme seems to be mostly “get things straight”. For instance, the C compilation handling has been streamlined, in anticipation of further streamlining in Automake 2.0. In particular, the next major release will get rid of the subdir-objects option… by force-enabling it, which also means that the connected, optional AM_PROG_CC_C_O is now bolted onto the basic AC_PROG_CC. What does this mean? Mostly that there is one fewer line to add to your configure.ac when you use subdir-objects, and if you don’t use subdir-objects today, you should. It also means that the compile script is now needed by all automake projects.

The only new feature that I think is worth the release is better support for including files within Makefile.am — this allows the creation of almost independent “module” files so that your build rules still live with the source files, but the final result is non-recursive. The changes make Karel’s way much more practical, to the point that I’ve actually started writing documentation for it in Autotools Mythbuster.

# src/

bin_PROGRAMS += myprog
man_MANS += %D%/myprog.8
myprog_SOURCES = %D%/myprog.c
The idea is that instead of hardcoding exactly which subdirectory contains the sources, you can simply use %D% (short for %reldir%) and then move said directory around. It makes it possible to properly handle a bundled-but-opt-out-capable library so that you don’t have to fight too much with the build system. I think that’ll actually be the next post in the Autotools Mythbuster series: how to create a library project with a clear bundling path and, at the same time, the ability to use the system copy of the library itself.
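Wiring such a fragment in from the top level is then a single include; a minimal sketch (the file names and the AUTOMAKE_OPTIONS line are illustrative, not taken from a real project):

```make
# Makefile.am (top level), a minimal sketch; file names are illustrative.
# The += lines in the included fragment extend these empty variables.
AUTOMAKE_OPTIONS = subdir-objects
bin_PROGRAMS =
man_MANS =

include src/Makefile.inc.am
```

Automake expands %D% inside the included file to the directory of that file relative to the top level, so the fragment keeps working if the directory is renamed or moved.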

Anyway, let’s all thank Stefano for a likely uneventful automake release. Autotools Mythbuster is being updated, for now you can find up to date forward porting notes but before coming back from vacation I’ll most likely update a few more sections.

June 20, 2013
Greg KH a.k.a. gregkh (homepage, bugs)
Hardware, past, present, and future. (June 20, 2013, 20:35 UTC)

Here's some thoughts about some hardware I was going to use, hardware I use daily, and hardware I'll probably use someday in the future.

Thunderbolt is dead, long live Thunderbolt.

Seriously, it's dead, use it as a video interconnect and don't worry about anything else.

Ok, some more explanation is probably in order...

Back in October of 2012, after a meeting with some very smart Intel engineers, I ended up the proud owner of a machine with Thunderbolt support, some hard disks with Thunderbolt interfaces, and most importantly, access to the super-secret Thunderbolt specification on how to make this whole thing work properly on Linux. I also had a MacBook Pro with a Thunderbolt interface which is what I really wanted to get working.

Thunderbolt Specification

So I settled in and read the whole spec. It was fun reading (side note, it seems that BIOS engineers think Windows kernel developers are lower on the evolutionary scale than they are, and for all I know, they might be right...), and I'll summarize the whole super-secret, NDA-restricted specification, when it comes to how an operating system is supposed to deal with Thunderbolt, shhh, don't tell anyone that I'm doing this:

Thunderbolt is PCI Express hotplug, the BIOS handles all the hard work.

Seriously, it's that simple, at least from the kernel point of view. So, it turns out that Linux should work just fine with Thunderbolt, no changes needed at all, as we have been supporting PCI hotplug in one form or another for 15+ years now (you remember CardBus, right?)

Some patches were posted to get the one known motherboard with Thunderbolt support to work properly by the engineers at Intel (it seems that the ACPI tables were of course wrong, so work-arounds were needed), and that should be it, right?


It turns out that Apple, in their infinite wisdom, doesn't follow the specification; rather, they require a kernel driver to do all of the work that the BIOS is supposed to be doing. This works out well for them, as they can share the same code from their BIOS with their kernel, but any other operating system, which doesn't know how to talk directly to the hardware at that level, is out of luck. So, no Thunderbolt support on Apple hardware for Linux (at least through May 2013; maybe newer models will change this, but I'm not counting on it.)

But wait, what about Thunderbolt support on other hardware? I was in Hong Kong in early 2013, and of course took the chance to visit the local computer stores. I saw, on one wall of a shop, all of the latest motherboards that were brand new and would be sold all around the world for the next 6+ months. None of them had Thunderbolt support. It's almost impossible to find Thunderbolt on a motherboard these days, and that doesn't look to change any time soon.

Then I read this interesting article that benchmarked Thunderbolt mass-storage devices against USB ones. It turns out that the speeds are the same. And that's with the decades-old USB storage specification that is so slow it's not funny. Wait for manufacturers to come out with devices supporting the latest UAS specification (and for the USB host controller drivers to support it as well; Linux doesn't yet because there is no hardware out there, a wonderful chicken-and-egg problem...) When that happens, USB storage speeds are going to be way above Thunderbolt's.

So Thunderbolt is dead, destined for the same future that FireWire ended up with: a special interconnect that almost no one outside of Apple hardware circles uses, with USB ending up taking over the mass market instead.

Note, all of this is for Thunderbolt the PCI interconnect, not the video connection. That works just fine on Linux as it isn't PCI Express, but just a video pass-through. No problems there.


The Chromebook Pixel

I've been lucky to be using a Chromebook Pixel for the past few months, thanks to some friends at Google who got it for me. It's the best laptop I've used in a very long time, and I love the thing. I also hate it, and curse it daily, but wouldn't give it up at all.

I'm running openSUSE Tumbleweed on it, not Chrome OS, so of course that is the main reason I'm having the problems listed below. If you stick with Chrome OS, it's wonderful, seriously great. My day job (Linux kernel work) means that I can't use Chrome OS, as I can't change the kernel, but almost everyone else can, especially if your company uses Google Apps for email and the like. Chrome OS is really good, I like it, and I think it is the way forward for a large segment of laptop users. My daughter asks me weekly if I'm willing to give the laptop to her to reinstall Chrome OS on it, as that's her desktop of choice, and this laptop runs it better than anything I've seen.

Here's the things that drive me crazy:

  • small disk size. It's ok for normal kernel work, but when I was trying to build some full-system virtual machines for testing, I quickly ran out of space.
  • slow disk speed. It's an "SSD", but I'm used to real SSD speeds, not this slow thing, where I can easily max out the I/O path doing kernel builds, as the processor quickly outraces it.
  • USB 2 ports. I could get around the disk size and speed limits if I had USB 3.0, and I totally understand why there are only USB 2 ports in the laptop, but hey, I can wish, right?
  • various EC issues, the Embedded Controller in the laptop is "odd" and when you run a different operating system than Chrome OS, the quirks come out. I've learned to live with them, but I would love to see an update for the BIOS that fixes the known problems that are already resolved within the code trees. It's just up to Google to push that out publicly.

Here's the things that make me love this laptop:

  • the screen
  • the screen
  • the screen
  • seriously, the screen. It's beautiful, and is worth any problem I've had with this laptop.
  • wireless just works, no issues at all, great Atheros driver / hardware.
  • it's the best presentation laptop I've ever had. Gnome 3 works wonderfully with it, and the external display adaptor can easily handle a different resolution. LibreOffice's presentation mode, with the speaker notes on the laptop's huge screen and the slides at a much lower resolution, looks wonderful. No problems at all with this, just plug the laptop into the projector and go.
  • very fast processors. Full kernel builds in less than 5 minutes, no problem.

There are some things that originally bothered me, but have been fixed, or I'm now used to:

  • suspend / resume didn't work, that's fixed in 3.10-rc kernels.
  • resume used to throttle the CPU to only half speed, again, fixed in 3.10-rc kernels.
  • keyboard backlights don't survive suspend/resume, there are fixes out there that hopefully will get into 3.11, it doesn't bother me at all.
  • lack of PgUp/PgDown/Home/End/Delete keys. The ever-talented Dirk Hohndel made a patch for the PS/2 driver (seriously, a PS/2 keyboard?) that overloads the right Alt key and arrow keys to provide this fix, so this is solved, but it would be good to get it merged upstream, hopefully one day this will get there for others to use.
  • trackpad was annoying at first, but now I'm used to the three-finger tap for middle click. Oh, and I got a good wireless mouse to make it easier.

It's a great laptop, built really solid. I'd recommend it to anyone who uses Chrome OS, and for anyone else if you like tinkering with your own kernels (a small market, I know.) Later this year new hardware should be coming out, with the same type of high-resolution display, and beefier processors and bigger storage devices. When that happens, I'll get one of them, and my daughter will greedily grab this laptop and install Chrome OS, but until then, this is what I use to travel the world with.

The future is glass

A few weeks ago, a friend of mine came over with a newly acquired Google Glass device. I played with it for a few minutes, and was instantly amazed at the possibilities it will provide. I, like probably lots of you, have been reading books that describe different types of heads-up or "embedded computers" for many many years, and I've always been waiting for the day that this will become a reality.

Google Glass might not be the device described in science fiction books, but it's the closest I've seen so far. The interface is completely natural, the display is amazing, and the potential is huge.

And yes, you do look like a dork while wearing them, but that will either become acceptable, or the device will shrink over time. I'm betting on a combination of both of them.

But what I found even more amazing is what happened when the kids put them on. The youngest put them on, and, as I explained on Google+ after it happened, his responses went, in order:

  • "You could watch movies with this in class!"
  • "Google Glass, what is Iron Man?"
  • "Google Glass, what is 7 * 24"

So that went from YouTube time-waster, to movie background information, to homework solver in a matter of minutes. Total acceptance, no hesitation at all; I think that's proof of just how big this will eventually be.

Later that day, we went to a neighborhood yogurt shop, and my friend ended up stalling the checkout line for a long time as the teenagers running the store insisted on trying them out and taking pictures of each other and doing google searches to see just how popular their store was (hint, it wasn't the highest ranking, which was funny.) After we finally paid for our dessert, my friend was stuck demoing the device for about everyone who came in the shop for the next 20 minutes. People of all ages, kids to retirees, all instantly got the device and enjoyed it.

So, if you've made fun of Google Glass in the past, try one out, and consider the potential of it.

And of course, it runs Linux, which makes me happy.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.12 (June 20, 2013, 09:11 UTC)

I’m glad to announce a new release of py3status ! I would like to thank @drahier for reporting an issue he found after suspending his computer. I took the opportunity to add a feature which will be helpful at work since we now have a local package installing some modules we share between colleagues (thx to @lujeni).


  • bugfix: don’t hang horribly when resuming from a suspend (was caused by an IOError exception which could occur when reading/writing to a suspending system).
  • feature: allow multiple -i include_path options to be passed and handle all the modules thus found.
  • feature: do not try to execute private and special methods on user-written Py3status classes.
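A sketch of what the suspend bugfix amounts to (illustrative only, not py3status’s actual code): wrap the writes to the status bar so an IOError raised while the system is suspending is survived instead of hanging the update loop.

```python
# Sketch: survive IOError on a suspending system instead of hanging.
# safe_write is a hypothetical helper, not part of py3status's API.
import sys


def safe_write(line):
    """Write one status line; skip the update if the system is suspending."""
    try:
        sys.stdout.write(line + "\n")
        sys.stdout.flush()
        return True
    except IOError:
        # stdout can raise IOError while the machine suspends;
        # drop this update and retry on the next tick instead of dying
        return False
```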

June 19, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

How Can Any Company Ever Trust Microsoft Again?

June 18, 2013
Johannes Huber a.k.a. johu (homepage, bugs)
Gentoo’s road to Wayland in KDE (June 18, 2013, 22:57 UTC)

KWin in KDE SC 4.11 will include experimental support for Wayland, as you can read in the official 4.11 Beta 1 announcement:

KWin and Path to Wayland—Initial experimental support for Wayland was added to KWin. KWin also got many OpenGL improvements including support being added for creating an OpenGL 3.1 core context and robustness from using the new functionality provided by the GL_ARB_robustness extension. Numerous KWin optimizations are aimed at reducing CPU and memory overhead in the OpenGL backend. Some desktop effects have been re-written in JavaScript to ease maintenance.

As the Beta 1 is already available in the Gentoo KDE overlay, you may ask what the current state is. You are right, it is time to talk about it. I will try to serve you the facts.

Current state

Wayland is already packaged in portage (dev-libs/wayland), thanks to the x11 herd.

johu@elia ~ $ eix -s wayland
[I] dev-libs/wayland
     Available versions:  (~)0.95.0 (~)1.0.6 (~)1.1.0 {doc static-libs}
     Installed versions:  1.1.0(21:54:08 05.06.2013)(-doc -static-libs)
     Description:         Wayland protocol libraries

KWin 4.11 Beta 1 and KWin master (4.10.80 and 9999) already have a build option (USE flag) for wayland.

johu@elia ~ $ eix -s kwin
[I] kde-base/kwin
     Available versions:  (4) 4.10.3 (~)4.10.4 **^m[1] [M](~)4.10.80^m[1] (**)9999^m[1]
       {aqua debug gles opengl wayland}
     Installed versions:  9999(4)^m[1](22:14:15 18.06.2013)(gles opengl wayland -aqua -debug)
     Description:         KDE window manager

It builds and links already successfully against it.

johu@elia ~ $ scan kwin

The USE flag is globally masked for stable systems. As a side note, I really like the stable use mask feature in EAPI 5.

Next steps

1) We need to package the Wayland compositor, aka Weston, in the portage tree, to start a full-blown Wayland session. This task is already in progress (bug #445736); an ebuild is available in the gentoo x11 overlay. It will hopefully be finished soon.

2) Add a wayland build option (USE flag) for the KDE start script (kde-base/startkde). The USE flag will allow us to ship a modified version of it, so that we can tell KWin to use weston/wayland when starting.

So I am really sure you will be able to play around with Wayland starting in August, when KDE SC 4.11.0 is released and hits the portage tree, by simply enabling a USE flag. Are you excited? You should be!
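Once that lands, enabling it should be no more than a one-line package.use entry; a sketch of what that might look like (the exact atom and keywording may differ by the time 4.11.0 is in the tree):

```shell
# /etc/portage/package.use (sketch; exact atom may differ)
kde-base/kwin wayland
```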

Have fun!

June 16, 2013
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)

I’d like to ask the gentoo-haskell community for help. We have a nice wiki, and our project page has moved there. But it seems that we don’t have enough documentation for end-user applications. As developers we support proper builds and tests for those packages, but we are not expert users of many of them. So I’d like to ask the community to add some docs and tips for the applications you use. This basically means installation, advanced configuration (examples), interesting use cases, links to external resources (blog posts/documentation) and so on. It could help new Gentoo users a lot.

The most interesting projects are:


June 15, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Okay, the title is a mouthful for sure, but this new book from Packt Publishing is an interesting read for those who happen to use Mercurial only from time to time and tend to forget most of the commands and workflows, especially when they differ quite a bit from the Git ones.

While I might disagree with some very unsafe examples (changing the owner of /etc/apache to your user to experiment on it? Really?), the book is a very quick read, and I feel that for the price Packt sells it at (don’t get distracted by the cover above, which links to Amazon) it’s worth a read, and worth keeping on one’s shelf or preferred ebook reader.

Well, I’m not sure I can add more to this; I know it sounds like filler, but the book is short enough that trying to get into more detail about the various recipes it proposes would probably repeat it whole. As I said, in general, if you have to work with Mercurial for whatever reason, go for it!

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
HP Moonshot (June 15, 2013, 05:05 UTC)

A few months ago I had the chance to learn about one of HP’s upcoming innovations in server architecture: the Moonshot project. Now that it is public, I thought I’d take some time to talk about it, as I’m convinced this is something big enough to change the way we see datacenter infrastructures and servers in general. I’ll do my best to keep it short and understandable, so if you want deeper technical insights, feel free to ask or search around.

Today’s servers

As a reminder, this is what a standard server looks like today:






We call them pizza boxes for their flat and tasty aspect.

Then in datacenters we put them in enclosures called racks, which look like this:








Now the basic thing to understand and keep in mind is that those racks can typically hold 42 standard servers like the one above. Datacenters are just big hangars where you store and cool hundreds or thousands of those racks.

The Moonshot project

The more processing power you need, the more racks you need, and the more datacenters you need. Think of Facebook or Google and their enormous numbers of servers/racks/datacenters around the world. Every new device (PC/tablet/smartphone) activated is a new client for the always-growing infrastructure of those powerhouses.

Basically, there’s a limit to the number of full datacenters you can build and operate eventually (not to mention powering them up), but there’s worse: the growth in new devices/clients is outpacing our datacenter building/powering capabilities.

The Moonshot project is one of HP’s responses to this challenge: permit businesses to accommodate and serve this rapidly growing demand of devices/clients without the datacenter model collapsing. Their method? Invent a new server architecture from scratch.

The cartridges are back!

No, your Master System is still out of date… But HP’s approach to getting more servers in less space while consuming way less power resides in turning the pizza-box above into a cartridge, which looks like this:







No black magic involved: you can now store 45 servers in 4.3 units of rack space. Based on their calculations, if you want the same computing power as you would have with standard pizza-boxes, you’d need only one full rack of those new servers versus 4 to 6 racks filled with standard ones (depending on their config). Overall gain factors are huge:

  • space divided by 4 to 6
  • energy divided by 6 to 2
  • cabling divided by 26 to 18
  • not to mention the time saved by technicians to put everything up
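Just on raw density, those figures are easy to sanity-check with back-of-the-envelope arithmetic; the 42U rack height is my assumption (a common standard), while the 45-servers-per-4.3U figure comes from the post:

```python
# Back-of-the-envelope density check.  42U per rack is an assumed
# standard rack height; 45 cartridges per 4.3U chassis is from the post.
rack_units = 42
servers_per_chassis = 45
chassis_height_u = 4.3

chassis_per_rack = int(rack_units // chassis_height_u)      # whole chassis only
moonshot_servers = chassis_per_rack * servers_per_chassis
pizza_boxes = rack_units  # one 1U pizza-box per rack unit

print(chassis_per_rack, moonshot_servers, moonshot_servers / pizza_boxes)
```

Raw density alone comes out near 10x; the 4-to-6x figure quoted above is about equivalent computing power, presumably because the cartridges trade per-node horsepower for density and power savings.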

That’s what the beast looks like:








Of course, they have integrated redundant switches and all the flavors of modern enclosures.

The right cartridge at the right place

Over the year, HP will launch a series of cartridges, each with its own specifications (RAM/HDD/CPU), designed to meet specific needs and workloads. They can also accommodate your needs by designing custom server cartridges, if you're in a hurry of course.

As a DevOps guy, I love this idea of hardware being designed and used around your software's architecture, because that's closer to real efficiency and will lead both developers and IT architects toward distributed, massively scaling designs.

I'll conclude with the last thing you need to understand about this technology: it does not fit everyone's needs. Moonshot will not take over the world and replace every server around; instead, it should be used as hardware matching a real software design.

June 13, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Last week we had our monthly Gentoo KDE team meeting; here are a few details that are probably worth sharing.

  • So far we've provided the useflag "semantic-desktop" which in particular controls the nepomuk functionality. Some components of KDE require this functionality unconditionally, and if you try to build without it, bugs and build failures may occur. In addition, by now it is easily and reliably possible to disable e.g. the file indexer at runtime. So, we've decided that starting with KDE 4.11 we will remove the useflag and hard-enable the functionality and the required dependencies in the ebuilds. The changes are being done already in the KDE overlay in the live ebuilds (which build upstream git master and form the templates for the upcoming 4.11 releases).
  • After recent experiences the plan to drop kdepim-4.4 is off the table again. We will keep it in the portage tree as alternative version and try to support it until it finally breaks.
  • In the meantime we (well, mainly Chris Reffett) have started to package Plasma Active, the tablet Plasma workspace environment, in the KDE overlay. Since Gentoo ARM support is already excellent, this may become a highly valuable addition. Unfortunately, it's not really ready yet for the main tree and general use, but packaging work will continue in the overlay; what we need most is testing and bug reporting!
Independent of the meeting, a stabilization request has already been filed for KDE 4.10.3; thanks to the work of the KDE stable testers, we can keep everyone up to date. And as a final note, my laptop is back to kmail1... Cheers!

Edit, 13/6/2013: Johu has posted a blog entry on how to disable the semantic desktop functionality at runtime.

Johannes Huber a.k.a. johu (homepage, bugs)
Disabling semantic-desktop at runtime (June 13, 2013, 21:44 UTC)

Today we bumped KDE SC 4.11 beta 1 (4.10.80) in the Gentoo KDE overlay. The semantic-desktop use flag is dropped in >=kde-base/4.10.80, as you may have already noticed or read in dilfridge's blog post. So if your hardware is not powerful enough, or you just don't want to use the feature, you can easily disable it at runtime.

1. Go to the “System Settings” and search for “Desktop Search”

System Settings


2. Uncheck at least the file and email indexer. You can also disable the “Nepomuk Semantic Desktop”.

Desktop Search Settings

Have fun!



If you want to disable Akonadi you can check out this blog post.


Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Vertigo (June 13, 2013, 20:47 UTC)



June 10, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Game Review: Metal Gear Rising (June 10, 2013, 18:32 UTC)

Okay, I know that most of you do not follow my blog with the intention of reading about videogames, but given that my Open Source time lately has been limited by me being quite busy settling down and caring for an apartment, some updates are better than nothing. And since one of the first things I bought for my apartment was a TV and a (new) PlayStation 3, and I got to spend some time with Metal Gear Rising, I thought it might be a good idea to write something about it.

First of all I have to apologize to the fans of the whole saga. I only played Metal Gear Solid 4 before, and I didn't even finish it (my first PS3 died while I was playing it, and I had no backup of the save games; since this happened quite a way into the game, I didn't want to play it back from the start afterwards, though I might do so now, honestly). I'm also not a big fan of stealth games (I never even completed the demo of Thief, for instance). But I liked MGS4 and I wanted to give MGR a try simply because I loved the character of Raiden (I like blades, what can I say).

So the gameplay is nice. I love being able to cut almost everything down to pieces, especially when I'm pissed off by the neighbour's alarm ringing at ten at night. Or seven in the morning on a bank holiday. I admit I played through in easy mode (I wanted to vent the stress, not cause more), and that might have helped me get away with a basically random set of attacks. But I liked it, and I liked the fact that it's not entirely random. I think it might be worth a replay now that I understand the attacks better (I'm hoping for a New Game Plus kind of deal).

Graphics are… well, it's not like there are many games with bad graphics anymore, but they could be better. It does not feel at the level of Metal Gear Solid 4. It's also running at 720p, which is surprising for a new game, although that might have something to do with the fact that the PS3 lacks the memory to run it properly. Oh well, not surprised I'd say, but a bit disappointed.

The soundtrack, oh wow, the soundtrack! I've loved the soundtrack to the point I had to get it on iTunes. It charges me up the same way DMC4's did.

Unfortunately, maybe because I played in easy mode, the game is just too short. Yes, there are downloadable chapters and side "VR" missions, but you have to pay extra for the former, which is just a cheap trick by the publisher, and the latter are not part of the story. One "file" (chapter) consists of… one cut scene and a single battle. That's not really that nice, in my opinion. As I said, I'm going to replay it with a bit more of a clue about the attacks, so it's likely going to be more enjoyable. But seriously, even on "easy", two weeks playing on and off were enough to get to the final Metal Gear… I'm not really excited about it.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
Road Trip Ready (June 10, 2013, 07:47 UTC)

I’m leaving my home base in Australia, Skydive Maitland, and venturing off. It all started with an idea, and this:


I took the seats out of the van, put a bed in the back and now I’m ready to go.


My grand plan is to go all the way around Australia. Some people tell me it is 25,000 km or so. I have no real time commitments (as always), so for now I'm heading "north", to where it is warmer. I think I'm the only guy chasing mild winter on my rtw trip; no more! I'm leaving Maitland with new friends to visit again, and I did 250+ jumps in the 3 months I was working there. Good times.


June 09, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: graphviz (June 09, 2013, 01:50 UTC)

Visualization of data is often needed in order to understand what the data means. When data needs to be visualized automatically, I often use the Graphviz tools. Not that the results are extremely pretty, but it works very well and is made to be automated.

Let me give a few examples of when visualization helps…

In SELinux, there is the notion of domain transitions: security contexts that can transition to another security context (and thus change the permissions that the application/process has). Knowing where domains can transition to (and how), as well as how domains can be transitioned to (so input/output, if you will), is an important aspect of validating the security of a system. The information can be obtained from tools such as sesearch, but even on a small system you easily find hundreds of transitions that can occur. Visualizing the transitions in a graph (using dot or neato) shows how a starting point can move (or cannot move – equally important to know ;-) to another domain. So a simple sesearch with a few awk statements in the middle and a dot at the end produces a nice graph in PNG format to analyze further.
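Such a pipeline can be sketched with a small shell function. This is only a sketch: the field layout assumed below is a simplification, and real sesearch output differs between setools versions, so treat the awk pattern as an assumption to adapt.

```shell
# trans2dot: read allow rules of the (assumed) form
#   allow SRC_DOMAIN DST_DOMAIN : process transition ;
# on stdin and emit a DOT digraph of the domain transitions.
trans2dot() {
    awk '
        BEGIN { print "digraph transitions {" }
        $1 == "allow" && /process/ && /transition/ {
            # second and third fields are source and target domain
            printf "  \"%s\" -> \"%s\";\n", $2, $3
        }
        END { print "}" }
    '
}
```

The result can then be fed straight to Graphviz, along the lines of `sesearch --allow -p transition … | trans2dot | dot -Tpng -o transitions.png` (the exact sesearch flags depend on your setools version).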

A second visualization is about dependencies. Be it package dependencies or library dependencies, or even architectural dependencies (in IT architecture, the abstraction of assets and such also yields a dependency-like structure), with the Graphviz tools the generation of dependency graphs can be done automatically. At work, I sometimes use a simple home-brew web-based API to generate the data (similar to Ashitani's Ajax/Graphviz), since the workstations don't allow installation of your own software – and they run Windows.

Another purpose I use Graphviz for is to quickly visualize processes during their design. Of course, this can be done using Visio or similar tools as well, but those have the disadvantage that you already need some idea of how the process will evolve. With the dot language, I can just start writing processes in a simple way, combining steps into clusters (or in scheduling terms: streams or applications ;-) and let Graphviz visualize it for me. When the process is almost finished, I can either copy the result into a drawing tool to generate a nicer drawing, or use the Graphviz result as-is (especially when the purpose was just rapid prototyping).
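As an illustration of that rapid-prototyping style, a process sketch in the dot language can be as short as this (all step and cluster names are invented for the example):

```dot
digraph nightly_process {
    rankdir=LR;
    subgraph cluster_backup {
        label="backup stream";
        dump -> compress -> upload;
    }
    subgraph cluster_verify {
        label="verification stream";
        upload -> checksum -> report;
    }
}
```

Run through `dot -Tpng`, this already gives a usable first drawing, with the clusters playing the role of the streams mentioned above.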

And sometimes it is just fun to generate graphs based on data. For instance, I can take the IRC logs of #gentoo or #gentoo-hardened and generate graphs showing interactions between people (who speaks to whom, and how frequently) or find out the strength of topics (get the keywords and generate communication graphs based on those keywords).

June 08, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

When you think of SSD manufacturers, it might be obvious to think of them as Linux friendly, given that they target power users, and Linux users are for the most part power users. Seems like this is not that true for Crucial. My main personal laptop has had, since last year, a 64GB Crucial M4 SSD – given I've not been using a desktop computer for a while, it does start to feel small, but that's a different point – which underwent a couple of firmware updates since then. In particular, there is a new release, 070H, that is supposed to fix some nasty power saving issues.

Crucial only provides firmware update utilities for Windows 7 and 8 (two different versions of them), and then they have a utility for “Windows and Mac” — the latter is actually a ZIP file that contains an ISO file… well, I don’t have a CD to burn with me, so my first option was to run the Windows 7 file from my Windows 7 install, which resides on the external SATA hard drive. No luck with that: from what I read on the forums, what the upgrader does is simply set up an EFI application to boot, and then reboot. Unfortunately, in my case there are two EFI partitions, because of the two bootable drives, and that most likely messes with the upgrader.

Okay, strike one; let’s look at the ISO file. The ISO is very simple and very small… it basically is just an ISOLINUX tree that uses memdisk to launch a 2.88MB image file (2.88MB is a semi-standard floppy disk size, which never really became popular for real disks, but has been leveraged by most virtual floppy disk images in bootable CD-ROMs for the expanded size). Okay, what’s in the image file then? Nothing surprising: it’s a FreeDOS image, with Crucial’s own utility and the firmware.

So if you remember, I had some experience with trying to update a BIOS through FreeDOS images, and I have my trusty USB stick with FreeDOS arriving on Tuesday with most of my personal effects that were in Italy, waiting for me to find a final place to move my stuff to. But I wanted to see if I could boot the image file without needing the FreeDOS stick, so I checked. Grub 2 does not include a direct way to load such an image — the reference to memdisk in its manual refers to the situation where you’re loading a standalone or rescue Grub image, nothing to do with what we care about.

The most obvious way to run the image is through SYSLINUX’s memdisk loader. Debian even has a package that lets you just drop images in /boot/images and adds them to the grub menu. Quick and easy, no? Well, no. The problem is that memdisk needs to be loaded with linux16 — and, well, Grub 2 does not support that command if you’re booting via EFI, like I am.
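For reference, the usual BIOS-boot recipe for this looks roughly as follows (the paths and menu entry name are made up for the example); it is exactly the linux16 line that Grub 2 refuses when it was booted via EFI:

```
# custom grub.cfg entry -- works only when Grub 2 itself was booted via BIOS
menuentry "Crucial M4 firmware (FreeDOS image)" {
    linux16 /boot/memdisk
    initrd16 /boot/images/crucial-070h.img
}
```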

I guess I’ll wait until Tuesday night, when my BIOS disk will be here, and I’ll just use it to update the SSD. It should solve the issue once and for all.

Sebastian Pipping a.k.a. sping (homepage, bugs)

Just a quick update on Freeverb3 in Gentoo: media-libs/freeverb3-3.0.0 is now unmasked and working with media-sound/audacious-3.3.4, the latest Audacious in Gentoo. Give it a try :-)

PS: The theme on the screenshot is Radiance, known from Ubuntu, or x11-themes/light-themes in Gentoo.

Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: LibreOffice (June 08, 2013, 01:50 UTC)

Of course, working with a Linux desktop eventually requires you to work with an office suite. Although I have used alternatives like AbiWord and Calligra in the past, and although I do think that Google Docs might eventually become powerful enough to use instead, I’m currently using LibreOffice.

The value of LibreOffice for Linux users is well known: it has decent Microsoft Office support (although I hardly ever need it; most users don’t mind exporting the files in an open document format, and publishers often support OpenOffice/LibreOffice formats themselves) and its features are becoming more and more powerful, such as the CMIS support (for online collaboration through content management systems). It also has a huge community, sharing templates and other documents that make life with LibreOffice even prettier. Don’t forget to check out its extensive documentation.

The aspects of LibreOffice I use the most are of course its writer (word processor) and calc (spreadsheet application). The writer part is for when I do technical writing, whereas the spreadsheet application is for generating simple management sheets for startups and households that want to keep track of things (such as budgets, invoices, data for mail merge, etc.). At my work, Excel is one of the most used “end user computing” tools, so I happen to have become acquainted with quite a few spreadsheet tips and tricks that are beneficial for small companies or organizations ;-) Also, Calc has support for macro-like enhancements, which makes it a good start for rapid application development (until the requests of the user/client have stabilized, after which I usually suggest real application development ;-)

I generally don’t use its presentation part much though – if I get a PowerPoint file, I first see whether Google Docs shows it sufficiently well. If not, then I try it out in LibreOffice. But usually, if someone sends me a presentation, I tend to ask for a PDF version.

June 07, 2013
Pavlos Ratis a.k.a. dastergon (homepage, bugs)

Last week Google announced all the accepted student proposals for Google Summer of Code 2013… including mine!
I will participate in this year’s GSoC with the Gentoo Foundation. My proposal was about Gentoo Identity: an LDAP web interface written using Django, a Python web framework, that will allow Gentoo developers _and_ users to easily configure their attributes on Gentoo’s LDAP server. This project is based on a previous GSoC project (codename ‘Okupy’). The web application will be for general usage, not only for LDAP administrators and sysadmins. I am also going to rewrite the perl_ldap script in Python and improve it; currently the perl_ldap script is the only way to edit attributes on Gentoo’s LDAP server.

Some major features are:

  • LDAP Authentication.
  • Editable LDAP attributes via web forms.
  • Information about Gentoo developers (full name, GPG key, location, team) in a list, like an address book.
  • Additional information about other accounts based on the ACL.
  • Enable privileged users/groups (recruiters, devrel, infrastructure team) to add new accounts and edit their information based on their ACL.

For more information about Gentoo identity, check my full proposal.

The expected outcome is a fully functional and scalable LDAP web interface, where both users and developers will be able to edit their attributes easily.

Theo Chatzimichos from the Gentoo Infrastructure team will be my mentor, and my co-mentor will be the Gentoo Infrastructure lead, Robin Johnson. Also, Gentoo dev Matthew Summers, as a previous GSoC mentor of Okupy, will help us with the Django part. In addition, Michał Górny will work in parallel on another GSoC project for Gentoo Identity. His project aims to build a complete OpenID provider on top of it, to provide a common authentication and identity exchange mechanism for all Gentoo sites.

I am going to post weekly reports about the status of the project with more technical details. For feedback and questions, please send your emails to identity[AT]gentoo[DOT]org.

Happy coding!

Michal Hrusecky a.k.a. miska (homepage, bugs)

Recently I had some time to do some cleanups/changes/updates in the server:database repo regarding MySQL (and MariaDB). Nothing too big. Well actually, there are a few little things that I want to talk about, and that is the reason for this blog post, but still, nothing really important…

MySQL 5.5, 5.6 and 5.7

MySQL 5.6 has been stable for some time already, so it’s time to put it into action. So I sent the request to include it in Factory and therefore in openSUSE 13.1. There is of course a list of interesting stuff you might want to take a look at before you update. If you don’t want to update, you can install mysql-community-server_55 from the server:database repo and stay a little bit longer with version 5.5. On the other hand, staying with old versions is boring, so you can also switch to mysql-community-server_57, which provides the new MySQL 5.7. So if you are into databases, and especially into MySQL and its forks (we have MariaDB 5.5 and 10.0 as well), we have plenty of toys for you to play with.

NOTE: Having MySQL 5.6 in openSUSE 13.1 doesn’t mean switching the default back to Oracle’s MySQL; the M in LAMP still means MariaDB, for whatever it is worth. It just means that you have MySQL 5.6 available as an alternative if you prefer it.

Default configuration

One of the interesting changes that happened in MySQL 5.6 is the new default configuration. MySQL usually shipped with some example configurations you could use. They had been there since forever and never changed, although typical computers went from 256M of RAM to 8G. They contained some buffer sizes and various other optimizations. I heard various complaints that it would be better to ship without them than with the ones that were there. What the folks at Oracle did was drop most of it and replace it with a pretty much empty one, with various settings commented out and described. They probably heard the same complaints :-D I consider it a really good step. Defaults are built-in after all, so why put them in the config file? So I took theirs and added a few things. For example the Barracuda file format: it was set to be the default upstream for a few versions, but they decided to go back to Antelope. It is also one of the things people complain to me about the most – that they have to set file_per_table and Barracuda manually. And I added examples for the multi configuration that we for some reason have included and exposed. This same config file will be pushed to MariaDB as well.
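For the record, the two settings in question look like this in the config file (option names as used by MySQL/MariaDB 5.5 and 5.6):

```
[mysqld]
# store each InnoDB table in its own .ibd file instead of the shared tablespace
innodb_file_per_table = 1
# Barracuda enables the newer InnoDB row formats (DYNAMIC, COMPRESSED)
innodb_file_format = Barracuda
```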

If you are interested in the current state, you can see the config file on GitHub, and if you have some suggestions that everybody can benefit from, let me know either via comments or via a pull request on GitHub ;-)


Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: firefox (June 07, 2013, 01:50 UTC)

Browsers are becoming application delivery frameworks rather than the visualization tools they were in the past. More and more services, like the one I discussed not that long ago, are using browsers as their client side while retaining the full capabilities of native clients (such as drag and drop, file management, editing capabilities and more).

The browser I use consistently is Firefox. I do think I will move to Chromium (or at least use it more actively) sooner or later, but firefox at this point in time covers all my needs. It isn’t just the browser itself though, but also the wide support in add-ons that I am relying upon. This did make me push out SELinux policies to restrict the actions that firefox can do, because it has become almost an entire operating system by itself (like ChromeOS versus Chrome/Chromium). With a few tunable settings (SELinux booleans) I can enable/disable access to system devices (such as webcams), often vulnerable plugins (flash, java), access to sensitive user information (I don’t allow firefox access to regular user files, only to the downloaded content) and more.

One of the add-ons that is keeping me with Firefox for now is NoScript. Being a security-conscious guy, being able to limit the exposure of my surfing habits to advertisement companies (and others) is very important to me. The NoScript add-on does this perfectly. The add-on is very extensible (although I don’t use that – just the temporary/permanent allow) and easy to work with: on a site where you notice some functionality isn’t working, right-click and seek the proper domain to allow methods from. Try out a few of them temporarily until you find the “sweet spot” and then allow those for future reference.

Another feature I use often (not often enough) is the spell checker. On multi-line fields, it gives me enough feedback about what I am typing and whether I am mixing American English and British English. And with a simple javascript bookmarklet, I can even enable spell checking on a rendered page (simple javascript that sets the designMode and contentEditable properties to true), which is perfect for the Gorg integration while developing Gentoo documentation.

The abilities of a browser are endless: I have extensions that offer ePub reading capabilities, full web development capabilities (to edit/verify CSS and HTML changes), HTTPS Everywhere (to enforce SSL when the site supports it), SQLite manager, Tamper Data (to track and manipulate HTTP headers) and more. With the GoogleTalk plugins, doing video chats and such is all done through the browser.

This entire ecosystem of plugins and extensions makes the browser a big but powerful interface, and also an important resource to properly manage: keep it up to date, back up your settings (including auto-stored passwords if you enable that), verify its integrity and ensure it runs in its confined SELinux domain.

June 06, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: bash and kiss tools (June 06, 2013, 01:50 UTC)

Okay, this just had to be here. I’m an automation guy – partially because of my job in which I’m responsible for the long-term strategy behind batch, scheduling and workload automation, but also because I believe proper automation makes life just that much easier. And for personal work, why not automate the majority of stuff as well? For most of the automation I use, I use bash scripts (or POSIX sh scripts that I try out with the dash shell if I need to export the scripts to non-bash users).

The Bourne-Again SHell (or bash) is the default shell on Gentoo Linux systems, and is a powerful shell feature-wise as well. There are numerous resources available on bash scripting, such as the Advanced Bash-Scripting Guide or other (not purely bash) shell scripting guides, and specific features of bash are covered in posts and articles all over the web.

Shell scripts are easy to write, but their power comes from the various tools that a Linux system contains (including the often forgotten GNU-provided ones, of which bash is one). My system is filled with scripts, some small, some large, all with a specific function that I imagined I would need again later. I prefix almost all my scripts with sw (the first letters of SwifT) or mgl (in case the scripts have the potential to be used by others) so I can easily find them (if they are within my ${PATH}, of course; not all of them are): just type the first letters followed by two tabs, and bash shows me the list of scripts I have:

$ sw<tab><tab>
swbackup               swdocbook2html      swsandboxfirefox    swletter      swpics
swstartvm              swstripcomment      swvmconsole         swgenpdf      swcheckmistakes
swdoctransaccuracy     swhardened-secmerge swmailman2mbox      swmassrename  swmassstable
swmovepics             swbumpmod           swsecontribmerge    swseduplicate swfindbracket
swmergeoverlay         swshowtree          swsetvid            swfileprocmon swlocalize
swgendigest            swgenmkbase         swgenfinoverview    swmatchcve

$ mgl<tab><tab>
mglshow                mglverify         mglgxml2docbook       mglautogenif  mgltellabout
mgltellhowto           mgltellwhynot     mglgenmodoverview     mglgenoval    mglgensetup
mglcertcli             mglcleannode      mglwaitforfile

With a proper basic template, I can keep the scripts sane and well documented. None of the scripts execute something without arguments, and “-h” and “--help” are always mapped to the help information. Those that (re)move files often have a “-p” (or “--pretend”) flag that, instead of executing the logic, echoes it to the screen.
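A stripped-down version of that template could look like this (a sketch only; the swexample name and the rm action are placeholders, and real scripts carry more option handling):

```shell
# Template pattern: -h/--help always prints usage, and -p/--pretend
# makes destructive commands echo what they would do instead of doing it.
PRETEND=

usage() {
    echo "Usage: swexample [-p|--pretend] <file>..."
}

run() {
    if [ -n "$PRETEND" ]; then
        echo "would run: $*"    # pretend mode: show the command only
    else
        "$@"                    # normal mode: execute it
    fi
}

swexample() {
    case "$1" in
        ""|-h|--help) usage; return 0 ;;
        -p|--pretend) PRETEND=yes; shift ;;
    esac
    for f in "$@"; do
        run rm -- "$f"          # placeholder for the script's real work
    done
}
```

Called as `swexample --pretend somefile`, it prints the rm command instead of running it; called without arguments or with -h/--help, it prints the usage line.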

A simple example is the swpics script. It mounts the SD card, moves the images to a first location (Pictures/local/raw), unmounts the SD card, renames the pictures based on the metadata information, finds duplicates based on two checksums (in case I forgot to wipe the SD card afterwards – I don’t wipe it from the script) and removes the duplicates, converts the raws into JPEGs and moves these to a minidlna-served location so I can review the images from DLNA-compliant devices when I want and then starts the Geeqie application. When the Geeqie application has finished, it searches for the removed raws and removes those from the minidlna-served location as well. It’s simple, nothing fancy, and saves me a few minutes of work every time.

The kiss tools are not really a toolset called kiss, but rather a set of commands that are simple in their use. Examples are exiv2 (to manage JPEG EXIF information, including renaming files based on the EXIF timestamp), inotifywait (passive waiting for file modifications/writes), sipcalc (calculating IP addresses and subnetwork ranges), socat (network socket “cat” tool), screen (or tmux, to implement virtual sessions), git (okay, not that KISS, but perfect for what it does – versioning stuff) and more. Because these applications just do what they are supposed to, without too many bells and whistles, it is easy to “glue” them together to get an automated flow.

Automation saves you from performing repetitive steps manually, so it is a real time-saver. And bash is a perfect scripting language for it.

June 05, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB : latest releases (June 05, 2013, 10:34 UTC)


Just bumped it to portage and fixed an open bug along the way. This is yet another bugfix release, which backports the switch to the Cyrus SASL2 library for SASL authentication (Kerberos). Dependencies were adjusted so you no longer need libgsasl on your systems (remember to depclean).


  • config upgrade fails if collection missing “key” field
  • migrate to Cyrus SASL2 library for sasl authentication
  • rollback files missing after rollback


This one is important to note, and I strongly encourage you to upgrade asap, as it fixes an important security bug (CVE-2013-2132). I’ve almost dropped all other versions from the tree anyway…

highlights 2.5.x

  • support GSSAPI (kerberos) authentication
  • support for SSL certificate validation with hostname matching
  • support for delegated and role based authentication

mongodb-2.5.x dev

What’s cooking for the next 2.6 releases? Let’s take a quick look as of today.

  • background indexing on secondaries (hell yes!)
  • new implementation of external sort
  • add support for building from source with particular C++11 compilers (will fix a gentoo bug reported quite a long time ago)
  • mongod automatically continues in progress index builds following restart

Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: geeqie (June 05, 2013, 01:50 UTC)

In the past, when I had to manage my images (pictures), I used GQview (starting back in 2008). But the application doesn’t get many updates, and if an application does not get many updates, it either means it is no longer maintained or that it does its job perfectly. Sadly, for GQview, it is the unmaintained reason (even though the application seems to work pretty well for most tasks). Enter Geeqie, a fork of GQview that keeps evolution of the application up to speed.

The Geeqie image viewer is a simple viewer that lets you easily manipulate images (such as rotating them). I launch it the moment I insert my camera’s SD card into my laptop for image processing. It quickly shows the thumbnails of all images, and I start processing them to see which ones are eligible for manipulation later on (or are just perfect – not that that occurs frequently) and which can be deleted immediately. You can also quickly set EXIF information (to annotate the image further) and view some basic aspects of the picture (such as histogram information).

Two features however are what is keeping me with this image viewer: finding duplicates, and side-by-side comparison.

With the duplicate feature, Geeqie can compare images by name, size, date, dimensions, checksum, path and – most interestingly – similarity. If you start working on images, you often create intermediate snapshots or tryouts. Or, when you start taking pictures, you take several in a short time frame. With the “find duplicates” feature, you can search through the images to find all those that share the same base (or were taken quickly after each other) and see them all simultaneously. That allows you to remove the ones you don’t need anymore and keep the good ones. I also use this feature often when people come with their external hard drive filled with images – none of them having any EXIF information anymore, and not structured in any way – and ask me to see if there are any duplicates on it. A simple checksum might reveal the obvious ones, but the similarity search of Geeqie goes much, much further.

The side-by-side comparison creates a split view of the application, in which each pane holds a different image. I use this feature when I have two pictures that were taken closely after one another (so very, very similar in nature) and I need to see which one is better. With the side-by-side comparison, I can look at artifacts in the image or the consequences of the different aperture, ISO and shutter speed settings.

And the moment I start working on images, Gimp and Darktable are just a single click away.

June 04, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

Today the Humble Indie Bundle #8 was upgraded with four more games (the links below go to YouTube videos):

7 more days to go right now.

Tiny and Big: Grandpa’s Leftovers

Oil Rush

Intrusion 2

English Country Tune

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio 4.0 and more (June 04, 2013, 02:45 UTC)

And we’re back… PulseAudio 4.0 is out! There are both a short and a super-detailed changelog in the release notes. For the lazy: this release brings a bunch of Bluetooth stability updates, better low-latency handling, performance improvements, and a whole lot more. :)

One interesting thing is that for this release, we kept a parallel next branch open while master was frozen for stabilising and releasing. As a result, we’re already well on our way to 5.0, with 52 commits since 4.0 merged into master.

And finally, I’m excited to announce that PulseAudio is going to be carrying out two great projects this summer, as part of the Google Summer of Code! We are going to have Alexander Couzens (lynxis) working on a rewrite of module-tunnel using libpulse, mentored by Tanu Kaskinen. In addition to this, Damir Jelić (poljar) will be working on improvements to resampling, mentored by Peter Meerwald.

That’s just some of the things to look forward to in coming months. I’ve got a few more things I’d like to write about, but I’ll save that for another post.

Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: freemind (June 04, 2013, 01:50 UTC)

Anyone who is even remotely busy with innovation will know what mindmaps are. They are a means to visualize information, ideas or tasks in whatever structure you like. By using graphical annotations, the information is easier to look through, even when the mindmap becomes very large. In the commercial world, mindmapping software such as XMind and Mindmanager is often used. But these companies should really start looking into Freemind.

The Freemind software is a Java-based mindmapping application that runs perfectly on Windows, Linux and other platforms. Installation is a breeze (if you are allowed to at work, you can just launch it from a USB drive if you want, so no installation hassles whatsoever) and its interface is very intuitive. For all the bells and whistles that the commercial ones provide, I just want to create my mindmaps and export them into a format that others can easily use and view.

At my day job, we (have to) use XMind. If someone shares a mindmap (“their mind” map, as I often see it – I seem to have a different mind than most others in how I structure things, except for one colleague who imo does not structure things at all), they just share the XMind file and hope that the recipients can read it. Although XMind can export mindmaps just fine, I do like the freemind approach, where a simple Java applet can show the entire mindmap as interactively as you would navigate through the application itself. This makes it perfect for discussing ideas, because you can close and open branches easily.

The export/import capabilities of freemind are also interesting. Before being forced to use XMind, we were using Mindmanager, and I could easily import those mindmaps into freemind. The file format that freemind uses is XML-based, so translating it into other formats is not that difficult if you know some XSLT.
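
Even without XSLT, a few lines of Python can already turn a .mm file into a plain-text outline. This is just a sketch that assumes the usual nested <node TEXT="..."> structure of freemind files; the sample map is made up for illustration:

```python
import xml.etree.ElementTree as ET

def outline(mm_xml):
    """Render a freemind .mm document as an indented plain-text outline."""
    root = ET.fromstring(mm_xml)  # the <map> element
    lines = []

    def walk(parent, depth):
        for child in parent.findall('node'):
            lines.append('  ' * depth + '- ' + child.get('TEXT', ''))
            walk(child, depth + 1)

    walk(root, 0)
    return '\n'.join(lines)

# A tiny hand-written map, mimicking what freemind saves:
example = ('<map version="1.0.1"><node TEXT="Project">'
           '<node TEXT="Ideas"/><node TEXT="Risks"/></node></map>')
```

Calling outline(example) yields a three-line outline with “Project” at the top level and “Ideas” and “Risks” indented below it.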

I personally use freemind when I embark on a new project, to structure the approach, centralize all information, keep track of problems (and their solutions), etc. The only thing I am missing is a nice interface for mobile devices though.

June 03, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: (June 03, 2013, 01:50 UTC)

The next few weeks (months even) will be challenging my free time as I’m working on (too many) projects simultaneously (sadly, only a few of those are free software related, most are house renovations). But that shouldn’t stop me from starting a new set of posts, being my application base. In this series, I’ll cover a few applications (or websites) that I either use often or that I should use more. In either case, the application does its job very well so why not give some input on it?

The first on the agenda is the website.

With, you get a web-browser based drawing application for diagrams, flowcharts, UML, BPMN etc. I came across this application while looking for an alternative to Dia, which by itself was supposed to be an alternative to Microsoft Visio (err, no). Don’t get me wrong, Dia is nice, but it lacks evolution and just doesn’t feel easy. on the other hand is evolving constantly, and it is also active on Google Plus where you can follow up on all recent developments and thoughts (I hope I get the G+ link correctly, it’s not that I don’t like numbers, just not in URLs).

I started using while documenting free software IT architectures (such as implementations of BIND, PostgreSQL, etc.) for which I needed some diagrams. Although is an online application (and its underlying engine is not completely free software) you can easily work with it from different locations. It integrates with Google Drive to store the diagrams on if you want – and if you don’t, you can always save the diagrams in their native XML format on your system and open them later again.

The interface is very easy to use, and I recently found out that it now also supports mobile devices, which is perfect for tablets (the mobile device support is recent afaik and still undergoing updates). The site also works well in various browsers (I tried Internet Explorer 10 at work, Firefox and Google Chrome, and they all seem to work nicely) – eat that, stupid commercial vendors who force me into using Internet Explorer 8 or Firefox 10 – you know who you are!

A site/service to keep a close eye on. The service itself is free (and doesn’t seem too limited due to it), but also has commercial support if you want through Google Apps and Confluence integration. I don’t have much experience with those yet but that might change in the near future (projects, projects).

June 02, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Roadtrip 3600 (June 02, 2013, 19:03 UTC)

Second roadtrip on the Harley: the Côte d’Azur and Biarritz.

A real tour of France in a little under two weeks. 3600 kilometres of freedom: Paris – Luberon – Gorges du Verdon – Grimaud (30 years of the HOG) – Marseille – Biarritz + the first holiday photos taken with the GF670W, magical!


Sven Vermeulen a.k.a. swift (homepage, bugs)

One of the things I have been meaning to implement on my system is a way to properly “remove” old files from the system. Currently, I do this by frequently listing all files, going through them and deleting those I feel I no longer need (in any case, I can retrieve them from the backup within 60 days). But this isn’t always easy, since it requires me to reopen the files and consider what I want to do with them… again.

Most of the time, when files are created, you generally know how long they are needed on the system. For instance, an attachment you download from an e-mail to view usually has a very short lifespan (you can always re-retrieve it from the e-mail as long as the e-mail itself isn’t removed). The same goes for output you captured from a shell command, a strace logfile, etc. So I’m wondering if I can’t create a simple method for keeping track of expiration dates on files, similar to the expiration dates supported for z/OS data sets. And to implement this, I am considering using extended attributes.

The idea is simple: when working with a file, I want to be able to immediately set an expiration date to it:

$ strace -o strace.log ...
$ expdate +7d strace.log

This would set an extended attribute named user.expiration, with the value being the number of seconds since epoch (which you can obtain through date +%s if you want) at which the file can be expired (and thus deleted from the system). A system cronjob can then regularly scan the system for files with the extended attribute set and, if the expiration date has passed, remove the file from the system (perhaps moving it first into a specific area where it lingers for an additional while, just in case).

It is just an example of course. The idea is that the extended attributes keep information about the file close to the file itself. I’m probably going to have an additional layer on top of it, checking SELinux contexts and automatically assigning expiration dates based on the files’ last modification time. Setting the expiration dates manually after creating the files is prone to be forgotten after a while. And perhaps introduce the flexibility of a user.expire_after attribute as well, telling that the file can be removed if it hasn’t been touched (modification time) in at least XX days.
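
A minimal Python sketch of the logic such an expdate tool and its cronjob could share. The “+7d” syntax and attribute name are taken from the example above; all helper names are hypothetical:

```python
import re
import time

def parse_duration(spec):
    """Turn a duration like '+7d', '+12h' or '+30m' into seconds."""
    m = re.fullmatch(r'\+(\d+)([dhm])', spec)
    if not m:
        raise ValueError('unsupported duration: %r' % spec)
    return int(m.group(1)) * {'d': 86400, 'h': 3600, 'm': 60}[m.group(2)]

def expiration_epoch(spec, now=None):
    """Absolute expiration timestamp to store in user.expiration."""
    now = time.time() if now is None else now
    return int(now + parse_duration(spec))

def is_expired(stored_value, now=None):
    """What the cronjob would check: has the stored timestamp passed?"""
    now = time.time() if now is None else now
    return int(stored_value) <= now

# Actually tagging a file would then be something like (Linux only,
# requires a filesystem with user xattr support):
#   os.setxattr('strace.log', 'user.expiration',
#               str(expiration_epoch('+7d')).encode())
```

The cronjob would walk the filesystem, read user.expiration where present, and delete (or quarantine) any file for which is_expired() returns true.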

June 01, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Hacking java bytecode with dhex (June 01, 2013, 01:50 UTC)

I found myself in a weird situation: a long, long time ago, I wrote a Java application that I hadn’t touched nor run for a few years. Today, I found it on a backup and wanted to run it again (it’s a graphical application for generating HTML pages). However, it failed in a particular feature. Not with an exception or stack trace, just functionally. Now, I have the source code at hand, so I looked into the code and found the logical error. Below is a snippet of it:

if (myHandler != null) {
  int i = startValue + maxRange;
  for (int j = endValue; j > i; j--) {
    ... (do some logic)
  }
}

It doesn’t matter what the code is supposed to do, but from what I can remember, I shouldn’t be adding maxRange to the i variable (yet – I do that later in the code). But instead of setting up the Java development environment, emerging the IDE, etc., I decided to just edit the class file directly using dhex (a wonderful utility I recently discovered), because doing things the hard way is sometimes fun as well. So I ran javap -c MyClass to get some Java bytecode information for the method, which gives me:

   8:   ifnull  116
   11:  iload_2
   12:  iload_3
   13:  iadd
   14:  istore  5
   16:  iload_2
   17:  istore  6
   19:  iload   6
   21:  iload   5
   23:  if_icmpge       106

I know lines 11 and 12 are about pushing the 2nd and 3rd arguments of the function (which are startValue and maxRange) onto the stack to add them (line 13). To remove the third argument, I can change its opcode from 1d (iload_3) to 03 (iconst_0). This way, zero is added instead and the code just continues as needed. And for some reason, that seems to be the only mistake I made back then, because the application now works flawlessly.
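
For the record, the same surgical edit can be scripted. Here is a minimal Python sketch of what I did by hand in dhex, keyed on the opcode sequence from the javap listing above (iload_2 = 0x1c, iload_3 = 0x1d, iadd = 0x60); the class filename is hypothetical:

```python
# Replace iload_3 (0x1d) with iconst_0 (0x03) inside the
# iload_2 / iload_3 / iadd sequence shown in the javap output.
PATTERN = bytes([0x1c, 0x1d, 0x60])  # iload_2, iload_3, iadd
PATCHED = bytes([0x1c, 0x03, 0x60])  # iload_2, iconst_0, iadd

def patch_bytecode(data):
    """Patch the (unique) occurrence of PATTERN in a class file's bytes."""
    if data.count(PATTERN) != 1:
        # The sequence could legitimately appear elsewhere in the file,
        # so refuse to patch unless the match is unambiguous.
        raise ValueError('expected exactly one match, found %d'
                         % data.count(PATTERN))
    return data.replace(PATTERN, PATCHED)

# Usage (hypothetical filename):
#   raw = open('MyClass.class', 'rb').read()
#   open('MyClass.class', 'wb').write(patch_bytecode(raw))
```

Note that a real class file may well contain the same three bytes in another method or in the constant pool, hence the uniqueness check before patching.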

Hacking is fun.

May 31, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

Quick summary

Humble Indie Bundle #8 (still available, get yours quickly!) includes seven games, most of them with their (FLAC and MP3) soundtracks as dedicated downloads.
With Hotline Miami, one of the bundle’s most exciting games, no soundtrack is included. Well, maybe it is! :-) My friend Jonathan and I wrote a command line tool today to extract the original Ogg Vorbis music files from the game’s .wad file. It’s free software, licensed under GPL v3 or later, and hosted on GitHub.

Superficial file format analysis

The game consists of only a few files; the biggest is HotlineMiami_GL.wad. Using a hex viewer like od(1), you already see filenames on the first page. However, the .wad seemed to be in a proprietary format and we could not figure it out quickly enough. (If you find a way to extract all files from the archive, please comment below!)

Using the strings(1) command, a list of Music/*.ogg files can be found:

$ strings HotlineMiami_GL.wad | grep -Eo '^.+\.ogg'

So we knew we were looking for Ogg Vorbis content. Jonathan had the idea to just scan for any Ogg Vorbis content in the file (i.e. guessing the offsets), rather than trying to understand where those Music/*.ogg file offsets were located. The Ogg file format is well suited for that. Basically, we just had to search for the byte sequence “OggS”, extract a few bytes from the header starting at that location, do some simple math, and write a block of contiguous bytes to a dedicated file.
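
The scan itself is easy to sketch. Our actual tool is written in C, but the idea fits in a few lines of Python: an Ogg page starts with “OggS”, has a 27-byte header whose last byte is the segment count, and the page’s total size is 27 + segment count + the sum of the segment table entries:

```python
OGG_MAGIC = b'OggS'

def ogg_page_length(data, off):
    """Size of the Ogg page at data[off]: header + segment table + payload."""
    assert data[off:off + 4] == OGG_MAGIC
    nsegs = data[off + 26]               # segment count, last header byte
    segment_table = data[off + 27:off + 27 + nsegs]
    return 27 + nsegs + sum(segment_table)

def find_ogg_pages(data):
    """Yield (offset, length) for every Ogg page found by scanning for magic."""
    pos = 0
    while True:
        hit = data.find(OGG_MAGIC, pos)
        if hit == -1:
            return
        length = ogg_page_length(data, hit)
        yield hit, length
        pos = hit + length
```

Consecutive pages then form one logical stream; a run of pages starting right where the previous one ended can be written out as a single .ogg file.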

Our tool in action

Clone, compile and run:

$ make
cc -std=c99 -Wall -Wextra -pedantic   -c -o extract.o extract.c
cc   extract.o -o extract-hotline-miami-soundtrack

$ ./extract-hotline-miami-soundtrack
File "ANewMorning.ogg" (offset 324769444 to 328796370, size 4026926 bytes) extracted.
File "Crush.ogg" (offset 328796370 to 331873115, size 3076745 bytes) extracted.
File "Crystals.ogg" (offset 331873115 to 338070714, size 6197599 bytes) extracted.
File "Daisuke.ogg" (offset 338070714 to 342209815, size 4139101 bytes) extracted.
File "DeepCover.ogg" (offset 342209815 to 354220376, size 12010561 bytes) extracted.
File "ElectricDreams.ogg" (offset 354220376 to 361621106, size 7400730 bytes) extracted.
File "Flatline.ogg" (offset 361621106 to 364436799, size 2815693 bytes) extracted.
File "HorseSteppin.ogg" (offset 364436799 to 379380866, size 14944067 bytes) extracted.
File "Hotline.ogg" (offset 379380866 to 384817467, size 5436601 bytes) extracted.
File "Hydrogen.ogg" (offset 384817467 to 393673098, size 8855631 bytes) extracted.
File "InnerAnimal.ogg" (offset 393673098 to 401464171, size 7791073 bytes) extracted.
File "ItsSafeNow.ogg" (offset 401464171 to 407138611, size 5674440 bytes) extracted.
File "Knock.ogg" (offset 407138611 to 414801118, size 7662507 bytes) extracted.
File "Miami2.ogg" (offset 414801118 to 420456265, size 5655147 bytes) extracted.
File "Musikk2.ogg" (offset 420456265 to 425787691, size 5331426 bytes) extracted.
File "Paris2.ogg" (offset 425787691 to 433210181, size 7422490 bytes) extracted.
File "Perturbator.ogg" (offset 433210181 to 441536474, size 8326293 bytes) extracted.
File "Release.ogg" (offset 441536474 to 452985194, size 11448720 bytes) extracted.
File "SilverLights.ogg" (offset 452985194 to 462700814, size 9715620 bytes) extracted.
File "Static.ogg" (offset 462700814 to 464811086, size 2110272 bytes) extracted.
File "ToTheTop.ogg" (offset 464811086 to 468529275, size 3718189 bytes) extracted.
File "TurfIntro.ogg" (offset 468529275 to 472214580, size 3685305 bytes) extracted.
File "TurfMain.ogg" (offset 472214580 to 480864247, size 8649667 bytes) extracted.

While not all files seem to contain proper tags, all of them seem perfectly playable. The bitrate seems to be a constant 224 kb/s for all of them; could be worse. At least to our ears, these files sound like higher quality than this “Hotline Miami Soundtrack (Full)” video on YouTube. But you don’t need that anymore now anyway, right? :-)

Again, the code is up here.

[EDIT]: Someone uploaded the track “Daisuke” to The Infinite Jukebox – check it out!

[EDIT]: Our extractor inspired Andy to take it a little further and extract all files from the .wad file. Check out his code on GitHub, it’s GPL 3, too.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)

Load balancing traffic between servers can sometimes lead to headaches, depending on your topology and budget. Here I’ll discuss how to create a self-load-balanced cluster of web servers, distributing HTTP requests between themselves and serving them at the same time. Yes, this means that you don’t need dedicated load balancers!

I will not go into the details of how to configure your kernel for ipvsadm etc., since that is already covered well enough on the web, but will instead focus on the challenges and subtleties of achieving load balancing based only on the realservers themselves. I expect you, the reader, to have a minimal knowledge of the terms and usage of ipvsadm and keepalived.

The setup

Let’s start with a scheme and some principles explaining our topology.

  • 3 web servers / realservers (you can do the same using 2)
  • Local subnet :
  • LVS forwarding method : DR (direct routing)
  • LVS scheduler : WRR (you can choose your own)
  • VIP :
  • Main interface for VIP : bond0


Let’s take a look at what happens as this will explain a lot of why we should configure the servers in a quite special way.

black arrow / serving

  1. the master server (the one that holds the VIP) receives an HTTP connection request
  2. the load balancing scheduler decides it is the one that will serve this request
  3. the local web server handles the request and replies to the client

blue arrow / direct routing / serving

  1. the master server receives an HTTP connection request
  2. the load balancing scheduler decides the blue server should handle this request
  3. the HTTP packet is handed to the blue server as-is (no modification is made to the packet)
  4. the blue server receives a packet whose destination IP is the VIP, but it doesn’t hold the VIP (the tricky part)
  5. the blue server’s web server handles the request and replies to the client

IP configuration

Almost all of the tricky part lies in what needs to be done to solve point #4 of the blue server example. Since we’re using direct routing, we need to configure all our servers so they accept packets directed to the VIP even if they don’t have it configured on their receiving interface.

The solution is to have the VIP configured on the loopback interface (lo) with a host scope on the keepalived BACKUP servers while it is configured on the main interface (bond0) on the keepalived MASTER server. This is what is usually done when you use pacemaker and ldirectord with IPAddr2 but keepalived does not handle this kind of configuration natively.

We’ll use the notify_master and notify_backup directives of keepalived.conf to handle this :

notify_master /etc/keepalived/
notify_backup /etc/keepalived/

We’ll discuss a few problems to fix before detailing those scripts.

The ARP problem

Now, some of you wise readers will wonder about the ARP cache corruption that will happen when multiple hosts claim to own the same IP address on the same subnet. Let’s fix this problem now, then, as the kernel does have a way of handling this properly. Basically, we’ll ask the kernel not to advertise the server’s MAC address for the VIP under certain conditions, using the arp_ignore and arp_announce sysctls.

Add these lines to the sysctl.conf of your servers:

net.ipv4.conf.all.arp_ignore = 3
net.ipv4.conf.all.arp_announce = 2

Read more about those parameters for the detailed explanation of those values.

The IPVS synchronization problem

This is another problem arising from the fact that the load balancers are also acting as realservers. When keepalived starts, it spawns a synchronization process on the master and backup nodes so your load balancers’ IPVS tables stay in sync. This is needed for a fully transparent failover, as it keeps track of the sessions’ persistence so the clients don’t get rebalanced when the master goes down. Well, this is the limitation of our setup: clients’ HTTP sessions served by the master node will fail if it goes down. But note that the same will happen on the other nodes, because we have to get rid of this synchronization to get our setup working. The reason is simple: IPVS table sync conflicts with the actual acceptance of the packet by our loopback-configured VIP. Both mechanisms can’t coexist, so you’d better use this setup for stateless (API?) HTTP servers, or if you’re okay with this eventuality.

Final configuration


ip addr del dev lo
ipvsadm --restore < /tmp/keepalived.ipvs
  1. drop the VIP from the loopback interface (it will be set up by keepalived on the master interface)
  2. restore the IPVS configuration


ip addr add scope host dev lo
ipvsadm --save > /tmp/keepalived.ipvs
ipvsadm --clear
  1. add the VIP to the loopback interface, scope host
  2. keep a copy of the IPVS configuration, if we get to be master, we’ll need it back
  3. drop the IPVS local config so it doesn’t conflict with our own web serving


Even though it offers some serious benefits, remember the main limitation of this setup: if the master fails, all sessions on your web servers will be lost. So use it mostly for stateless stuff, or if you’re okay with this. My setup and explanations may have some glitches; feel free to correct me if I’m wrong somewhere.

Sven Vermeulen a.k.a. swift (homepage, bugs)
A SELinux policy for incron: finishing up (May 31, 2013, 01:50 UTC)

After 9 posts, it’s time to wrap things up. You can review the final results online (incron.te, incron.if and incron.fc) and adapt them to your own needs if you want. But we should also review what we have accomplished so far…

We built the start of an entire policy for a daemon (the inotify cron daemon) with two main types: the daemon itself, and its management application incrontab. We defined new types and contexts, we used attributes, declared a boolean and worked with interfaces. That’s a lot to digest, and yet it is only a part of the various capabilities that SELinux offers.

The policy isn’t complete though. We defined a type called incron_initrc_exec_t but don’t really use it further. In practice, we would need to define an additional interface (probably named incron_admin) that allows users and roles to manage incron without needing to grant this user/role sysadm_r privileges. I leave that up to you as an exercise for now, but I’ll post more about admin interfaces and how to work with them on a system in the near future.

We also made a few assumptions and decisions while building the policy that might not be how you yourself would build it. SELinux is a MAC system, but the policy language is very flexible. You can use an entirely different approach in policies if you want. For instance, incron supports launching incrond as a command-line, foreground process. This could help users run incrond under their own privileges for their own files – we did not consider this case in our design. Although most policies try to capture all use cases of an application, there will be cases where a policy developer either did not consider the use case or found that it conflicted with his own principles of policy development (and of allowed activities on a system).

In Gentoo Hardened, I try to write down the principles and policies that we follow in a Gentoo Hardened SELinux Development Policy document. As decisions need to be taken, such a document might help find common consensus on how to approach SELinux policy development further, and I seriously recommend that you consider writing up a similar document yourself, especially if you are going to develop policies for a larger organization.

One of the deficiencies of the current policy is that it works with the unmodified incron version. If we were to patch incron so that it could change context when executing the incrontab files of a user, then we could start making use of the default context approach (and perhaps even enhance it with PAM services). In that case, user incrontabs could be launched entirely from the user’s context (like user_u:user_r:user_t) instead of the system_u:system_r:incrond_t or transitioned system_u:system_r:whatever_t contexts. Having user-provided commands executed in the system context is a security risk, so in our policy we would not grant the incron_role to untrusted users – probably only to sysadm_t, and even then one would probably be better off using /etc/incron.d anyway.

The downside of patching code, however, is that this is only viable if upstream wants to support it – otherwise we would need to maintain the patches ourselves for a long time, creating delays in releases (upstream releases a new version and we still need to reapply and refactor patches) and taking precious (human) resources away from other, Gentoo Hardened/SELinux-specific tasks (like bugfixing and documentation writing ;-)

Still, the policy returned a fairly good view on how policies can be developed. And as I said, there are still other things that weren’t discussed, such as:

  • Build-time decisions, which can change policies based on build options of the policy. In the reference policy, this is most often used for distribution-specific choices: if Gentoo would use one approach and Redhat another, then the differences would be separated through ifdef(`distro_gentoo',`...') and ifdef(`distro_redhat',`...') calls.
  • Some calls might only be needed if another policy is loaded. I think all calls made currently are part of base modules, so they can be expected to be available at all times. But if we needed something like icecast_signal(incrond_t), then we would need to put that call inside an optional_policy(`...') statement. Otherwise, our policy would fail to load when the icecast SELinux policy isn’t loaded.
  • We could even introduce specific statements like dontaudit or neverallow to fine-tune the policy. Note though that neverallow is a compile-time statement: it is not a way to negate allow rules; if there is one allow rule that would violate the neverallow, the module just refuses to build.

Furthermore, if you want to create policies to be pushed upstream to the reference policy project, you will need to look into the StyleGuide and InterfaceNaming documents, as those define the order in which rules should be placed and the naming syntax for interfaces. I have been contributing a lot to the reference policy and I still miss a few of these, so they are not that obvious, even for me. But using a common style is important, as it allows for simple patching and code comparison, and even allows us to easily read through complex policies.

If you don’t want to contribute it upstream but do want to use it on your Gentoo system, you can use a simple ebuild to install the files. Create an ebuild (for instance selinux-incron), put the three files in the files/ subdirectory, and use the following ebuild code:

# Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header$

POLICY_FILES="incron.te incron.fc incron.if"

inherit selinux-policy-2

DESCRIPTION="SELinux policy for incron, the inotify cron daemon"

KEYWORDS="~amd64 ~x86"

When installed, the interface files will be published as well and can then be used by other modules (something we couldn’t do in the past few posts) or by the selocal tool.

May 30, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

Humble Indie Bundle 8 includes these games and (most of) their soundtracks: (links go to YouTube videos)

  1. Hotline Miami
  2. Proteus
  3. Little Inferno
  4. Awesomenauts
  5. Capsized
  6. Thomas Was Alone
  7. Dear Esther

Hotline Miami


Little Inferno



Thomas Was Alone



Dear Esther

May 29, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

Original title (and link): GNµ 11 – LinuxTag Berlin 2013 – Teil 2: Gentoo, Fedora, Mageia

English subtitles available, audio is German.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Having fun with cross-compiling (May 29, 2013, 09:48 UTC)

$ file build/root-filesystem-*/bin/mksh
build/root-filesystem-armv4tl/bin/mksh:       ELF 32-bit LSB executable, ARM, EABI4 version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-armv6l/bin/mksh:        ELF 32-bit LSB executable, ARM, EABI4 version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-i486/bin/mksh:          ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-i586/bin/mksh:          ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-i686/bin/mksh:          ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-mips/bin/mksh:          ELF 32-bit MSB executable, MIPS, MIPS-I version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-mips64/bin/mksh:        ELF 64-bit MSB executable, MIPS, MIPS64 version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-mipsel/bin/mksh:        ELF 32-bit LSB executable, MIPS, MIPS-I version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-powerpc-440fp/bin/mksh: ELF 32-bit MSB executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-powerpc/bin/mksh:       ELF 32-bit MSB executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-sh4/bin/mksh:           ELF 32-bit LSB executable, Renesas SH, version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-sparc/bin/mksh:         ELF 32-bit MSB executable, SPARC version 1 (SYSV), dynamically linked (uses shared libs), stripped
build/root-filesystem-x86_64/bin/mksh:        ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), stripped
... that is the result of an afternoon of hacking on Aboriginal Linux to include mksh support.
Why? Eh… why not. And for such a crude hack it works surprisingly well – only two of the ARM cross-compile targets failed.

May 27, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

While I don’t want to say that all privacy advocates are the bad kind of crybabies I described in my previous post, there are certainly a lot I would call hypocrites when it comes to things like the loyalty schemes I already wrote about.

So as I said in that post, the main complaints about loyalty schemes involve possible abuse by a bad government (in which case we have a completely different problem), and basically have to do with hypothetical scenarios of a dystopian future. So what they are afraid of is not the proper use of the tool that loyalty schemes are, but their abuse.

On the other hand, the same kind of people advocate for tools like Tor, Bitcoin, Liberty Reserve or FreedomBox. These tools are supposed to help people fight repressive governments, among others, but there are obvious drawbacks. Pirates use the same technologies. And so do cybercriminals (and other kinds of criminals too).

Where I see a difference is that while even the Irish Times struggled to find evidence of privacy invasion or governmental abuse of loyalty schemes (as you probably noticed, they had to resort to complaining about a pregnant teenager who was found out through targeted advertising), it’s extremely easy to find evidence of organized cybercrime relying on tools like Liberty Reserve. Using the trump card of paedophiles would probably be a bad idea, but I’d bet my life on many of them doing so.

Yes, of course there are plenty of honest possible uses for these technologies, but I’d also think that if you start with the assumption that your government is not completely corrupt or abusive (which, I know, could be considered a very fantastic assumption), and that you don’t just want to ignore anti-piracy laws because you don’t like them (while I still agree that many of those laws are completely idiotic, I have explained my stance already), then the remaining positive uses are marginal compared to the criminal activities they enable.

Am I arguing against Tor and FreedomBox? Not really. But I am arguing against things like MegaUpload, Liberty Reserve and Bitcoin — and I would say that most people who are defending Kim Dotcom and the likes of him are not my peers. I would lump them in with the religious people I’m acquainted with, which is to say, I keep them at arm’s length.

Michał Górny a.k.a. mgorny (homepage, bugs)
The pointless art of subslots (May 27, 2013, 18:00 UTC)

The sub-slots feature of EAPI 5 was announced as if it were the ultimate solution to the problem of SONAME changes on library upgrades. However, the longer I look at it, the more I believe that it is not really a good solution, and that it misses the actual issue, targeting somewhere nearby instead.

The issue is likely well known to most Gentoo users. Every time a library changes its ABI, it changes its SONAME (the filename programs link to) to avoid breaking existing programs. When the package is upgraded, the new version is installed under the new name, and the old one is removed. As a direct result, all applications linking to the old version become broken and need to be rebuilt.

The classic way of handling this is to run the revdep-rebuild tool. It takes a while to scan the system, but it supposedly finds all broken executables and initiates a rebuild of them. Of course, the system is in a broken state until all relevant packages are rebuilt, and sometimes they just fail to build…

As you can guess, this is far from being perfect. That’s why people tried to find a better solution, and a few solutions were actually implemented. I’d like to describe them in a quasi-chronological order.

Using slots with slot-operator deps

A perfect solution that has long been advocated by Exherbo developers. I’m not aware, though, whether they ever used it themselves. I haven’t seen an exact explanation of how they expect it to be done, so I am mostly explaining here how I think it could be done.

The idea is that every SONAME-version of the library uses a different slot. That is, every time the SONAME changes, you change slot as well. Using different slots for each SONAME means that the incompatible versions of the library can be installed in parallel until all applications are rebuilt. This has a few requirements though.

First of all, only the newest slot may install development files such as headers. This requires that every version bump be accompanied by a revision bump of the older version, dropping the development files. On each upgrade, the user not only builds the new version but also rebuilds the older one.

To handle upgrades without a small moment of breakage (and the risk of longer breakage if a build fails), the package manager would need to build both packages before starting the merge process. I doubt that enforcing this is really possible right now.

Secondly, the ebuilds installing development files would need to block the older versions (in other slots) doing the same, while keeping the versions lacking development files non-blocked.

To explain this better: let’s assume that we have foo-1, foo-1-r1, foo-2, foo-2-r1, foo-3, … The -r0 versions install development files and the -r1 versions don’t (they are just the upgrade-compatibility ebuilds). Now, the blocker in foo-3 would need to block all the older -r0 versions but not the -r1 ones.

In a real-life situation there will likely be differing revision numbers as well, and I don’t know of any way to handle this other than explicitly listing all blocked versions, one by one.

And in the end, reverse dependencies need to use a special slot-operator dependency which binds the dependency to the slot that was used during the package build. But that is the least of the problems, I believe.
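To make the scheme concrete, here is a rough sketch in ebuild terms; the package names, versions and blockers are hypothetical simplifications, not a tested recipe:

```shell
# hypothetical media-libs/foo-3.ebuild: every SONAME gets its own slot
EAPI=5
SLOT="3"
# block only the -r0 revisions of the older slots (the ones still
# installing headers); the -r1 compatibility revisions may stay
# installed in parallel
RDEPEND="!=media-libs/foo-1
	!=media-libs/foo-2"

# hypothetical consumer ebuild: the ":=" slot operator binds the
# dependency to whichever slot of foo was present at build time
RDEPEND="media-libs/foo:="
```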

The solution of preserved-libs

Another attempt at solving the issue was developed in portage-2.2. Although it is available in mainstream portage nowadays, it is still disabled by default due to a few bugs and the fact that some people believe it’s a hack.

The idea of preserved-libs is for the package manager to actually trace library linkage within installed programs and automatically preserve old versions of libraries as long as the relevant programs are not rebuilt to use the newer versions. As complex and as simple as that.

Preserving libraries this way doesn’t require any specific action from the package maintainer. Portage itself detects that a library with a new SONAME has been installed during an upgrade and preserves the old one. It also keeps track of all the consumers that link against the old version and removes it after the last one is rebuilt.

Of course it is not perfect. It can’t handle all kinds of incompatibilities, it won’t work outside the traditional executable-library linkage and the SONAME tracking is not perfect. But I believe this is the best solution we can have.
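For the record, enabling it is a one-liner in make.conf, and the leftover consumers can be rebuilt at leisure through the @preserved-rebuild package set (names as used by portage-2.2; a sketch, not a full guide):

```shell
# /etc/portage/make.conf: keep old library versions around until
# their consumers are rebuilt
FEATURES="preserve-libs"

# later, at any convenient time, rebuild whatever still links
# against a preserved library
emerge --ask @preserved-rebuild
```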

The nothing-new in sub-slots

Lately, a few developers who believed that preserved-libs was not supposed to go mainstream decided to implement a different solution. After some discussion, the feature was quickly put into EAPI 5 and then started being tested on the tree.

The problem is that it’s somewhat a solution to the wrong problem. As far as I am concerned, the major issue with changing SONAMEs is that the system is broken between package rebuilds. Sub-slots instead mostly address having to call tools like revdep-rebuild, which is not a solution to that problem.

Basically, all sub-slots do is force a rebuild of a given set of reverse dependencies when the sub-slot of a package changes. The rebuilds are pulled into the same dependency graph as the upgrade, to be forced immediately after it.
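In ebuild terms this boils down to the following (package names hypothetical):

```shell
# hypothetical dev-libs/bar ebuild: main slot 0, sub-slot "5" mirroring
# the current SONAME; the maintainer bumps "5" on every ABI break
SLOT="0/5"

# hypothetical consumer: ":=" records the sub-slot seen at build time,
# so a sub-slot change pulls this package into the upgrade's rebuild graph
RDEPEND="dev-libs/bar:="
```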

I can agree that sub-slots have their uses. For example, xorg-server modules definitely benefit from them, and so may other cases that preserved-libs didn’t already handle. For the remaining cases, sub-slots are either not good enough (virtuals), redundant (regular libraries) or even broken (packages installing multiple libraries).

Aside from the xorg module benefit, I don’t see much use for sub-slots. On systems without preserved-libs enabled, they may eventually remove the need for revdep-rebuild. On systems with preserved-libs, they can only result in needless or needlessly hurried rebuilds.

A short summary

So, we have two live solutions right now: one in preserved-libs, the other in sub-slots. The former addresses the issue of the system being broken mid-upgrade; the latter removes (partially?) the need for calling an external tool. The former allows you to rebuild the affected packages at any convenient time; the latter forces you to do it right away.

What really worries me is that people are so opposed to preserved-libs, and at the same time so easily accept a partial, mis-designed work called sub-slots. They then advertise it without thoroughly explaining how and when to use it, and what its problems are. And, for example, unnecessarily rebuilding webkit-gtk regularly would be an important issue.

A particular result of that was visible when sub-slot support was introduced into app-text/poppler. That package installs a core library with quite an unstable ABI and a set of interface libraries with stable ABIs. External packages usually link with the latter.

When sub-slot support was enabled on poppler, all reverse dependencies were expected to use sub-slot matching. As a result, every poppler upgrade needlessly required rebuilding half of the system. The reverse dependencies were reverted, but this only made people try to extend sub-slots into a more complex and even less maintainable idea.

Is this really what we all want? Does it benefit us? And why the heck did people reinvent library preservation in eclasses?!

May 26, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

GnuPG is an excellent tool for encryption and signing. However, while breaking encryption or forging signatures of a large key size is likely somewhere between painful and impossible even for agencies on a significant budget, all of this is only ever as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here, but someone hacking your computer, installing a keylogger and grabbing the key file is more likely. While there are no preventive measures that work against all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses along the way. I'm writing up my personal experiences here as a kind of guide. Also, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts, since that shop is referred to in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit; see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found was the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the contributors to the smartcard code of GnuPG. In general there are two types of readers, those with a stand-alone pinpad and those without. The extra pinpad ensures that, for normal operations like signing and encryption, the PIN for unlocking the keys never enters the computer itself; so without tampering with the reader hardware it is pretty hard to sniff it. I bought an SCM SPG532 reader, one of the first devices supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now, you'll want to activate the USE flag "smartcard" and maybe "pkcs11", and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new emerge.
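That amounts to something like the following (file locations as usual in Gentoo; adjust the flags to your needs):

```shell
# enable the smartcard-related USE flags for GnuPG and rebuild it
echo "app-crypt/gnupg smartcard pkcs11" >> /etc/portage/package.use
emerge --ask --oneshot app-crypt/gnupg
```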
Several different standards for card reader access exist. One in particular is the USB standard for integrated circuit card interface devices, CCID for short; the driver for that one is built directly into GnuPG, and the SCM SPG532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; that will be used by GnuPG if the built-in support fails, but it requires a daemon to be running (pcscd; just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.
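Under OpenRC, setting up pcscd is the usual two commands:

```shell
# start the PC/SC daemon now and on every boot
rc-update add pcscd default
rc-service pcscd start
```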
These drivers do not need much (or any) configuration and should in principle work out of the box. Testing is easy: plug in the reader, insert a card, and issue the command
gpg --card-status
If it works, you should see a message listing (among other things) the manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check (especially for CCID) is whether the device permissions are OK; just repeat the above test as root. If you can see your card now, you know you have permission trouble.
Fiddling with the device file permissions was a serious pain, since all the online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore; the other does not work on its own and tries to do things in unnecessarily complicated ways.) At some point I just gave up on things like user groups and told udev to hardwire the device to my user account, creating the following file as /etc/udev/rules.d/gnupg-ccid.rules:
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", OWNER:="huettel", MODE:="600"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", OWNER:="huettel", MODE:="600"
With similar settings it should in principle be possible to solve all the permission problems. (You will want to change the USB IDs and the OWNER for your needs.) Then, a quick
udevadm control --reload-rules
followed by unplugging and re-plugging the reader. Now you should be able to check the contents of your card.
If you still have problems, check the following: for accessing the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after a card is removed. Just kill it (you need SIGKILL)
killall -9 scdaemon
and try accessing the card again afterwards; the daemon is restarted by GnuPG. A lot of improvements in smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.
Here's what a successful card-status command looks like on a blank card:
huettel@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
huettel@pinacolada ~ $

That's it for now, part 2 will be about setting up the basic card data and gnupg functions, then we'll eventually proceed to ssh and pam...

Edit: You can find part 2 here.

This is part 2 of a tutorial on OpenPGP smartcard use with Gentoo. Part 1 can be found in an earlier blog post. This time we assume that you already have a smart card and a functioning reader, and continue setting up the card. Then we'll make everything ready for use with GnuPG by setting up a key pair. As already stated, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better". All information here is provided as a best effort, but I urge you to read up on your own. Even if you follow this guide to the last letter: if things break, it is your own responsibility.

Setting the AdminPIN and the PIN

OK, let's start. We insert a blank card into the card reader. The card should come with some paper documentation stating the initial values of the PIN and the AdminPIN; we will need these in a moment. Now we want to edit the card properties, which we can do with the command "gpg --card-edit".
jones@pinacolada ~ $ gpg --card-edit 

Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

gpg/card> help
quit       quit this menu
admin      show admin commands
help       show this help
list       list all available data
fetch      fetch the key specified in the card URL
passwd     menu to change or unblock the PIN
verify     verify the PIN and list all data
unblock    unblock the PIN using a Reset Code
This menu is not really that helpful yet. However, a lot more commands are hidden behind the "admin" keyword:
gpg/card> admin
Admin commands are allowed

gpg/card> help
quit       quit this menu
admin      show admin commands
help       show this help
list       list all available data
name       change card holder's name
url        change URL to retrieve key
fetch      fetch the key specified in the card URL
login      change the login name
lang       change the language preferences
sex        change card holder's sex
cafpr      change a CA fingerprint
forcesig   toggle the signature force PIN flag
generate   generate new keys
passwd     menu to change or unblock the PIN
verify     verify the PIN and list all data
unblock    unblock the PIN using a Reset Code
First of all we change the AdminPIN and the PIN from the manufacturer defaults to some nice random-looking values that only we know.
gpg/card> passwd
gpg: OpenPGP card no. D276000124010200000500000AFA0000 detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 3
At this point a window from gpg-agent pops up (the same as when asking for a passphrase) and requests the old AdminPIN and the new AdminPIN twice. Make sure you remember the new AdminPIN or write it down somewhere safe. The AdminPIN allows you to change the card parameters (from the name of the cardholder to the stored keys and PIN) and can be used to reset the PIN if you have forgotten it or mistyped it three times. However, if you mistype the AdminPIN three times, your card locks up completely and is basically trash. Note that changing the PINs cannot be done via a reader keypad yet.

PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 1
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? q


Setting the cardholder data

Now, let's enter the cardholder data. With the first change you will be prompted for the AdminPIN.
gpg/card> name
Cardholder's surname: Jones
Cardholder's given name: Henry W.

gpg/card> lang
Language preferences: en

gpg/card> sex
Sex ((M)ale, (F)emale or space): M

gpg/card> quit
jones@pinacolada ~ $
What are the remaining commands good for? Well...
  • "url" sets an URL where to retrieve the public keys. We will use that later on. 
  • "login" sets a log-in data field. Here you could store your username for e.g. network authentication. 
  • "forcesig" toggles a flag inside the card that has been introduced because of German legislative requirements for some smartcard applications. Normally, once you have inserted the card into the reader, you enter the PIN once for unlocking e.g. the encryption or the signature key, and then the key remains open for the moment. If the signature PIN is "forced", you will have to reenter the PIN again each time you want to make a signature.
  • "generate" generates a RSA key pair directly on the card. This is the "high security option"; the generated private key will and can never leave the card, which enhances its security but also makes backups of the key impossible.
Which leaves the "reset code" to be explained. Imagine you are issued a card by, say, your employer. The card will be preset with your name, login, and keys, and you should not be able to change those; so you will not know the AdminPIN. If you enter your user PIN wrong three times in a row, it is invalidated. The reset code can then be used instead of the AdminPIN to reset the PIN. Basically this is the same functionality as the PUK for mobile phone SIM cards. The definitive source on all this functionality is the OpenPGP Card 2.0 specification.

Generating GnuPG keypairs

As mentioned in the beginning, there are many different ways to proceed. A keypair can be generated on the card or in the computer. Different types of keys or parts of keys can be uploaded to the card. I'm now presenting the following use case:
  • We generate the GnuPG keys not on the card but on the trusted computer, and then copy them to the card. This makes backups of the keys possible, and you can also upload them later to a second card should the first one accidentally drop into the document shredder.
  • We upload the whole key, not just subkeys as described in some howtos. This makes it possible to access the entire GnuPG functionality from the card: decrypting, signing, and especially also certifying (i.e. signing keys). Of course this means that your primary key is on the card, too.
In general, before you generate a GnuPG keyset you may want to read up on GnuPG best practices; see e.g. this mailing list post of our Gentoo Infra team lead robbat2 for information and further pointers.
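One snippet such guides commonly recommend is preferring strong digests in ~/.gnupg/gpg.conf (illustrative; follow the pointers above for current advice):

```shell
# ~/.gnupg/gpg.conf
personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
```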
Enough talk. We use GnuPG to generate a 4096bit RSA primary key for signing and certifying, with a 4096bit RSA encryption subkey. Note that for all the following steps you need at least app-crypt/gnupg-2.0.19-r2 on Gentoo; I strongly recommend app-crypt/gnupg-2.0.20, since its smartcard handling has improved a lot.
jones@pinacolada ~ $ gpg --gen-key
gpg (GnuPG) 2.0.19; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y

Key expires at Tue May 24 23:26:58 2016 CEST
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: Henry W. Jones Jr.
Email address:
You selected this USER-ID:
    "Henry W. Jones Jr. <>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /home/jones/.gnupg/trustdb.gpg: trustdb created

gpg: key 14ED37BC marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2016-05-24
pub   4096R/14ED37BC 2013-05-25 [expires: 2016-05-24]
      Key fingerprint = 3C94 3AC9 713D E3E3 B3C6  BF73 3898 61DB 14ED 37BC
uid                  Henry W. Jones Jr. <>
sub   4096R/345D5ECB 2013-05-25 [expires: 2016-05-24]

jones@pinacolada ~ $
Got it. Now we do something unusual: in addition to the sign/certify (SC) main key and the encryption (E) subkey, we add a second subkey, an authentication (A) key (for later on). We edit the just-generated key with the --expert option:
jones@pinacolada ~ $ gpg --expert --edit 14ED37BC
gpg (GnuPG) 2.0.19; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24  usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096R/345D5ECB  created: 2013-05-25  expires: 2016-05-24  usage: E   
[ultimate] (1). Henry W. Jones Jr. <>

gpg> addkey
Please select what kind of key you want:

   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
Your selection? 8
We select to add an RSA key where we set the capabilities ourselves. Now we disable Sign and Encrypt, and enable Authenticate instead.
Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Sign Encrypt

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? s

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Encrypt

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? e

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions:

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? a

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Authenticate

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? q
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Tue May 24 23:39:55 2016 CEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

pub  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24  usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096R/345D5ECB  created: 2013-05-25  expires: 2016-05-24  usage: E   
sub  4096R/808D3DB3  created: 2013-05-25  expires: 2016-05-24  usage: A   
[ultimate] (1). Henry W. Jones Jr. <>

gpg> save
jones@pinacolada ~ $
This additional key cannot be used directly by GnuPG, but it is stored in the keyring and will come in handy later on.

Copying the keys to the smartcard

Now we copy the secret keys to the smartcard.
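Before touching the card, it cannot hurt to export a backup of the freshly generated keys to safe offline storage (14ED37BC being the example key ID from above):

```shell
# ASCII-armored backups of the secret and public key
gpg --armor --export-secret-keys 14ED37BC > secret-key-backup.asc
gpg --armor --export 14ED37BC > public-key.asc
```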
jones@pinacolada ~ $ gpg --edit 14ED37BC
gpg (GnuPG) 2.0.19; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24  usage: SC  
                     trust: ultimate      validity: ultimate
sub  4096R/345D5ECB  created: 2013-05-25  expires: 2016-05-24  usage: E   
sub  4096R/808D3DB3  created: 2013-05-25  expires: 2016-05-24  usage: A   
[ultimate] (1). Henry W. Jones Jr. <>
With "toggle" we switch from public key to secret key view.
gpg> toggle

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb  4096R/345D5ECB  created: 2013-05-25  expires: never     
ssb  4096R/808D3DB3  created: 2013-05-25  expires: never     
(1)  Henry W. Jones Jr. <>
We select the authentication key and move it to the card (we need the AdminPIN for that):
gpg> key 2

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb  4096R/345D5ECB  created: 2013-05-25  expires: never     
ssb* 4096R/808D3DB3  created: 2013-05-25  expires: never     
(1)  Henry W. Jones Jr. <>

gpg> keytocard
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]

Please select where to store the key:
   (3) Authentication key
Your selection? 3

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb  4096R/345D5ECB  created: 2013-05-25  expires: never     
ssb* 4096R/808D3DB3  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
(1)  Henry W. Jones Jr. <>
Then, we select the encryption key and deselect the authentication key; same procedure follows.
gpg> key 1

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb* 4096R/345D5ECB  created: 2013-05-25  expires: never     
ssb* 4096R/808D3DB3  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
(1)  Henry W. Jones Jr. <>

gpg> key 2

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb* 4096R/345D5ECB  created: 2013-05-25  expires: never     
ssb  4096R/808D3DB3  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
(1)  Henry W. Jones Jr. <>

gpg> keytocard
Signature key ....: [none]
Encryption key....: [none]
Authentication key: 8474 2310 057F 1D64 056F  5903 F15B 3DEE 808D 3DB3

Please select where to store the key:
   (2) Encryption key
Your selection? 2

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb* 4096R/345D5ECB  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
ssb  4096R/808D3DB3  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
(1)  Henry W. Jones Jr. <>
Finally we deselect the encryption key, so no subkey is selected anymore, and move the primary (signature/certification) key.
gpg> key 1

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb  4096R/345D5ECB  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
ssb  4096R/808D3DB3  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
(1)  Henry W. Jones Jr. <>

gpg> keytocard
Really move the primary key? (y/N) y
Signature key ....: [none]
Encryption key....: 2050 EC35 2F6C 3EB0 223C  C551 279A 16D7 345D 5ECB
Authentication key: 8474 2310 057F 1D64 056F  5903 F15B 3DEE 808D 3DB3

Please select where to store the key:
   (1) Signature key
   (3) Authentication key
Your selection? 1

sec  4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
                     card-no: 0005 00000AFA
ssb  4096R/345D5ECB  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
ssb  4096R/808D3DB3  created: 2013-05-25  expires: never     
                     card-no: 0005 00000AFA
(1)  Henry W. Jones Jr. <>
Now we leave GnuPG, and it's important that we leave without saving. Otherwise, the secret key would be deleted from disk and would remain only on the card. (Of course, this may also be desired.)
gpg> quit
Save changes? (y/N) n
Quit without saving? (y/N) y
jones@pinacolada ~ $
Now, the card is basically ready for use. Let's have a look at its contents once more:
jones@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: Henry W. Jones
Language prefs ...: en
Sex ..............: male
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 4096R 4096R 4096R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: 3C94 3AC9 713D E3E3 B3C6  BF73 3898 61DB 14ED 37BC
      created ....: 2013-05-25 21:30:56
Encryption key....: 2050 EC35 2F6C 3EB0 223C  C551 279A 16D7 345D 5ECB
      created ....: 2013-05-25 21:30:56
Authentication key: 8474 2310 057F 1D64 056F  5903 F15B 3DEE 808D 3DB3
      created ....: 2013-05-25 21:39:35
General key info..: pub  4096R/14ED37BC 2013-05-25 Henry W. Jones Jr. <>
sec   4096R/14ED37BC  created: 2013-05-25  expires: 2016-05-24
ssb   4096R/345D5ECB  created: 2013-05-25  expires: 2016-05-24
ssb   4096R/808D3DB3  created: 2013-05-25  expires: 2016-05-24
jones@pinacolada ~ $
We'll discuss how exactly to use the card next time (but that's not really hard to figure out either :). Cheers!

May 25, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
You call it privacy invasion, I don't. (May 25, 2013, 22:51 UTC)

So it looks like the paranoid came to my last post about loyalty cards complaining about the invasion of privacy that these cards come with. Maybe they expected the myth of the Free Software developer who’s against all big corporations, who wants to be off the grid, and all that kind of stuff that comes up when you think of Stallman. Well, too bad, as I’m not like that. I still consider myself a left-winger, but a realist one who cannot see how you can make workers happy by strangling the companies (the alternative to which is not, contrary to what most people seem to think, just accepting whatever the heck they want).

But first an important disclaimer. What I’m writing here is my personal opinion and in no way that of my employer. Even if my current employer could be considered involved in what I’m going to write, this is an opinion I maintained for years — lu_zero can confirm it.

So, we’ve been told about the evil big brother of loyalty cards since I can remember, when I was still a little boy. They can track what you buy, they can profile you, and thus they will do bad things to you. But honestly, I don’t see that this has happened at all. Yes, they can track what you buy, and they might even profile you, but as for the evil things they do to you, I still have not heard of anything. And before you start with the Government (capital and evil G): if you don’t trust your government, a loyalty card programme is the last thing you should be worried about.

Let’s have a look first at the situation presented by the Irish Times article which I referred to in my first post on the topic. At least they have been close enough to reality that, instead of going for the Big Brother paranoia, they simply noted that marketeers will know about your life, although they do portray that as purely negative.

Before long, he had come up with a list of 25 products which, if bought in certain amounts and in a certain sequence, allowed him to tell if a shopper was pregnant and when her due date was.

In his book, Duhigg tells the story of a man who goes into a branch of Target near Minneapolis. He is not happy as he wants to know why the retailer has suddenly started to send his high school-going daughter coupons for baby clothes and cribs. He asks the manager if the shop is trying to encourage very young girls, such as his daughter, to get pregnant.

The manager is bemused but promises to look into it, which he does. He finds that this girl had indeed been targeted with all manner of promos for baby products so he calls the father several days later to convey his apologies and his confusion.

That’s when the man tells him that when he raised the issue with his daughter, she told him she was pregnant. The retailer took a lot of flak when the details of its data mining emerged but the controversy blew over.

So first, I find it utterly ludicrous that sending coupons for “baby clothes and cribs” would “encourage very young girls […] to get pregnant”. I would also suggest that if the girl is so young that it’s scandalous she could get pregnant, then it might indeed be too soon for her to have a loyalty card. In Italy, for instance, you have to be 18 before you can get a loyalty card for any programme — why? Because a minor is not expected to have an absolutely clear idea of how his or her choices are going to mold their future.

Then let’s see what the privacy problem is here… if the coupons are sent by mail, one would expect that they are seen only by the addressee; if you have no expectation of privacy for personal mail, it’s hard to pin the blame on the loyalty programmes. In this case, if you count the profiling as a violation of the girl’s privacy, then her father looking at the coupons would be a bigger invasion still. That would be like reading her diary. If you argue that the father has a right to know because she’s a minor, I would answer that then she shouldn’t have had the card to begin with.

Then there is the (anonymous, it goes without saying) comment on my post, which tries to paint loyalty schemes in an even grimmer light, first by stating that the data is sold to third-party companies at every turn… well, it turns out that’s illegal in most of Europe if you don’t provide a way for the customer to opt out of having their data sold. And it turns out that’s one of the few things I do take care of, simply because I don’t want junk mail from a bunch of companies I don’t really care about. So using the “they’ll sell your details” scare, to me, sounds like the usual bull.

Then it goes on to say that “Regularly purchasing alcohol and buying in the wrong neighbourhoods will certainly decrease your score to get loans.” — well, so what? The scores are statistical analyses of the chance of recovering or defaulting on a loan; I don’t blame banks for trying to make them more accurate. And maybe it’s because I don’t drink, but I don’t see a problem with profiling as an alcoholic a person who buys four kegs of beer a day — either that or they have a bar.

Another point that was brought up? A scare about data mining. Okay, the term sounds bad, but data mining in the end is just a way for businesses to get better at what they do. If you want to blame them for doing so, it’s your call, but I think you’re out of your mind. There are obviously bad cases of data mining, but that is not the default case. As Jo pointed out on Twitter, we “sell” our shopping habits to the store chains, and what we get back are discounts, coupons and the like. It’s a tit-for-tat scenario, which to me is perfectly fine, and it applies to more than just loyalty card schemes.

Among other things, this is why I have been blocking a number of web robots in my ModSecurity Ruleset — those that try to get data without giving anything back are, to me, just bad companies. If you want to get something, give something back.

And finally, the comment twice uses the phrase, taken straight from the conspiracy theorists’ rulebook, “This is only the beginning”. Sorry guys, you’ve been saying that this is the beginning for the past thirty years. I’m starting to think you’re not smarter than me, just much more paranoid.

To sum it up, I’m honestly of the opinion that all the people in countries that are to all effects free and democratic who complain about “invasion of privacy” are only complaining because they want to keep hiding their bad sides, be it bad habits, false statements, or previous errors. Myself, as you can see from this blog, I tend to be fairly open. There is very little I would be embarrassed by: probably only the fact that I have a profile on a dating site, but even there, well, I’ve been as honest as a person can be. Did I do something stupid in my past? Quite a few things, I think. On the other hand, I don’t really care.

So, there you go: this is my personal opinion about all the paranoids who think they have to live off the grid to be free. Unless you’re in a country that is far from democratic, I’d just say you’re a bunch of crybabies. As I said, places where your government can’t be trusted have much bigger problems than loyalty schemes or profiling.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : street art (May 25, 2013, 12:00 UTC)




May 21, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

My original post about loyalty cards missed the supermarkets that I’m actually using nowadays, because they are conveniently located just behind my building (for one) and right on the way back home from my office (for the other). Both of them are part of the EuroSpar chain and have the added convenience of being open respectively 24/7 and 7-22.

Mangled bill from EuroSpar

So, when I originally asked the store whether they had a loyalty card, I was told they didn’t. I checked the website anyway and found the name of their loyalty programme, “SuperEasy”, so the next time I asked about it explicitly, and they gave me the card and a form to fill in; after filling in almost all of it, I found I could also do it online, so I trashed the paper form. Nobody here can get my name right, even when I spell it.

On the website, strangely enough, they even accept my surname as it should be written. Wow, that’s a miracle, I thought… until I used the card at the shop and got back the bill that you see on the left. Yes, that’s UTF-8 converted to some other 8-bit codepage which is not Latin-1; indeed it reminds me of CP850 from the MS-DOS days. Okay, I give up. But the funniest part was getting tonight’s bill, the one on the right.
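For the curious, here is a quick sketch of what I suspect happens, using Python to mis-decode the UTF-8 bytes of my surname (the exact codepage the till uses is my guess):

```python
name = "Pettenò"
utf8_bytes = name.encode("utf-8")  # 'ò' becomes the two bytes 0xC3 0xB2

# Decoded as Latin-1, each byte maps to one printable character:
print(utf8_bytes.decode("latin-1"))  # PettenÃ²
# Decoded as the old MS-DOS codepage 850, it looks different again:
print(utf8_bytes.decode("cp850"))    # Petten├▓
```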

The other mangled bill from EuroSpar

But besides their mangling my name in many different ways, is there anything that makes EuroSpar special enough for me to write a follow-up post on a topic that I don’t really care about or, honestly, have much experience in? Yes, of course. Compared with the various reward schemes I talked about last time, this one seems mostly the same: one point per euro spent, and one cent per point redeemed.

The big difference here is that the points accrue to the cent, rather than only past each whole-euro threshold! Not too shabby, considering that, unlike Dunnes, they do not round their prices to full euros most of the time. The other difference is that even though they have a single loyalty scheme for all the stores… the cards are per-store, or so they proclaim. The two here are probably owned by the same person, so they are actually linked and each works at both.

Another interesting point: while both EuroSpar locations host an Insomnia café, neither accepts Insomnia’s own loyalty card (ZapaTag) — instead they offer something similar, in the sense that you get the 10th drink free. A similar offer exists at the regular Insomnia shops, but there, while you can combine the 10th-drink offer with the ZapaTag points, you cannot combine it with other offers such as my usual coffee and brownie for €3,75 (the coffee alone is €3,25 while the brownie is €2,25)… at EuroSpar this is actually combinable, though of course if I use the free coffee while getting a brownie, I still pay almost as much as the coffee alone… but sometimes I can skip the pastry.

So yes, I think it was worth noting the differences at EuroSpar. As a final note, I’ll just say that even the pharmacy on the way to work has a loyalty card… and it’s the usual discount one, or as they call it a “PayBack Card”. I have yet to see what Tesco does, but they somehow blacklisted my apartment from their delivery service.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
rabbitMQ : v3.1.1 released (May 21, 2013, 13:04 UTC)

EDIT: okay, they just released v3.1.1, so it goes into portage as well!


  • relax validation of x-match binding to headers exchange for compatibility with brokers < 3.1.0
  • fix bug in ack handling for transactional channels that could cause queues to crash
  • fix race condition in cluster autoheal that could lead to nodes failing to re-join the cluster

3.1.1 changelog is here.

I’ve bumped the rabbitMQ message queuing server in portage. This new version comes with quite a nice bunch of bugfixes and features.


  • eager synchronisation of slaves by policy (manual & automatic)
  • cluster “autoheal” mode to automatically choose nodes to restart when a partition has occurred
  • cluster “pause minority” mode to prefer partition tolerance over availability
  • improved statistics (including charts) in the management plugin
  • quite a bunch of performance improvements
  • some nice memory leak fixes

Read the full changelog.
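For reference, the two partition-handling modes listed above are selected through a single setting in rabbitmq.config; a sketch (setting name as per the RabbitMQ 3.1 configuration, paths are the usual defaults):

```erlang
%% /etc/rabbitmq/rabbitmq.config -- Erlang terms
[
  {rabbit, [
    %% "autoheal": automatically restart losing nodes after a partition;
    %% use pause_minority instead to prefer partition tolerance over
    %% availability.
    {cluster_partition_handling, autoheal}
  ]}
].
```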

May 19, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

I was very sceptical for a long time. Then, I slowly started to trust the kmail2/akonadi combination. I've been using it on my office desktop for a long time, and it works well and is very stable and fast there. (Might be related to the fact that the IMAP server is just across the lawn.) Some time ago, when I deemed things solid enough, I even upgraded my laptop again, despite earlier problems. In Gentoo, we've been keeping kdepim-4.4 around all the time, and as you may have read, internal discussions did indeed lead to the decision to finally drop it some time ago.
What happened in the meantime?
1) One of the more annoying bugs mentioned in my last blog post was fixed with some help from Kevin Kofler. Seems like Debian stumbled into the same issue long ago.
2) I was on vacation. Which was fun, but mostly unrelated to the issue at hand. None of my Gentoo colleagues went ahead with the removal in the meantime. A lot of e-mails accumulated in my account.
3) Coming back, I was on the train with my laptop, sorting the mail. The train was full, the onboard WLAN slightly overstressed, the 4G network just about more reliable. The network came and went, sometimes in a tunnel; no problem. Or so I thought.
4) Half an hour before arriving back home, I realized that a large part of the e-mails that I had (I thought) moved (using kmail2-4.10.3 / akonadi-1.9.2) from one folder to another over ~3 hours had silently disappeared on one side and not re-appeared on the other. Restarting kmail2 and akonadi did not help. A quick check of my provider's webmail interface confirmed that the mails were gone from both folders on the IMAP server as well. &%(/&%(&/$/&%$§&/
I wasn't happy. Luckily there were daily server backup snapshots, and after a few days' delay I had all the documents back. Nevertheless... now I am considering what to do next. (Needless to say, in my opinion we should forget about dropping kmail1 in Gentoo for now.) Options...
a) migrate the laptop back to kmail1, which is way more resistant to dropped connections and flaky internet connection - doable but takes a bit of time
b) install OfflineIMAP and Dovecot on the laptop, and let kmail2/akonadi access the localhost Dovecot server - probably the most elegant solution but for the fact that OfflineIMAP seems to have trouble mirroring our Novell Groupwise IMAP server
c) other e-mail client? I've heard good things about trojita...
Summarizing... I still have no idea how to go ahead; no good solution is available. And I actually like the kdepim integration idea, so I'll never be the first one to completely migrate away from it! I am sincerely sorry that this post will surely be disheartening to all the people who have put a lot of effort into improving kmail2 and akonadi. It has become a whole lot better. However, I am just getting more and more convinced that the complexity of this combined system is too much to handle, and that kmail should never have gone the akonadi way.

May 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo CUPS-1.6 status (May 18, 2013, 21:02 UTC)

We've had CUPS 1.6 in the Gentoo portage tree for a while now. It has even been keyworded on most of the arches (hooray!), and judging from the bug reports quite a few people use it. Sometime in the not-too-distant future we'll stabilize it; until then, however, quite a few bugs still have to be resolved.
CUPS 1.6 brings changes. The move to Apple has messed up the project's priorities, and backward compatibility was thrown out of the window with a bang. As I've already detailed in a short previous blog post, CUPS 1.6 per se does not "talk" the printer browsing protocol of previous versions anymore but relies solely on zeroconf (which is implemented in Gentoo by net-dns/avahi). Some other features were dropped as well...
Luckily, CUPS was and is open source, and the fact that the people at Apple removed the code from the main CUPS distribution did not mean it was actually gone. In the end, all these features just made their way from the main CUPS package to a new package, net-print/cups-filters, maintained at The Linux Foundation. There, the code is evolving fast: bugs are fixed and features are introduced. Even network browsing with the CUPS-1.5 protocol has been restored by now; cups-filters includes a daemon called cups-browsed which can generate print queues on the fly and accepts configuration directives similar to those of CUPS-1.5. As far as we in Gentoo (and any other Linux distribution) are concerned, we can get along without zeroconf just fine.
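As a sketch of what that looks like in practice (directive names taken from cups-browsed's configuration; the exact set available depends on your cups-filters version, so treat this as an illustration rather than a reference):

```
# /etc/cups/cups-browsed.conf (sketch)
# Accept queues broadcast by legacy CUPS-1.5 servers
BrowseRemoteProtocols cups
# Broadcast our own queues using the old CUPS protocol
BrowseLocalProtocols cups
# Explicitly poll a particular legacy server, as BrowsePoll did in cupsd.conf 1.5
BrowsePoll printserver.example.com:631
```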
The main thing hindering CUPS-1.6 stabilization at the moment is that the CUPS website is down, kind of. Their server had a hardware failure, and for nearly a month (!!!) only minimal, static pages have been up. In particular, what's missing is the CUPS bugtracker (no, I won't sign up for an Apple ID to submit CUPS bugs) and access to the Subversion repository of the source. (Remind me to git-svn clone the code history as soon as it's back and push it to gitorious.)
So... feel free to try out CUPS-1.6; testing and submitting bugs definitely helps. However, it may take some time to get them fixed...

May 17, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

It is a common request to have squid block the download of certain files based on their extension in the URL path. A quick look at Google's results on the subject apparently gives us an easy way to get this done in squid.

The common solution is to create an ACL file listing regular expressions of the extensions you want to block and then apply this to your http_access rules.




acl blockExtensions urlpath_regex -i "/etc/squid/blockExtensions.acl"
http_access allow localnet !blockExtensions

Unfortunately this is not enough to prevent users from downloading .exe files. The mistake here is assuming that the URL will strictly end with the extension we want to block. Consider, for instance, two URLs like these:

http://example.com/setup.exe           // will be DENIED as expected
http://example.com/setup.exe?id=1234   // WON'T be denied as it does not match the regex!
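To illustrate the mismatch, here is a quick check using Python's re module (URLs and file names are made up for the example; egrep-style extended regexes behave the same way for these patterns):

```python
import re

# Pattern matching only URLs that end exactly in .exe
strict = re.compile(r"\.exe$", re.IGNORECASE)
# Pattern also allowing a trailing ?query-string
relaxed = re.compile(r"\.exe(\?.*)?$", re.IGNORECASE)

plain = "http://example.com/files/setup.exe"
query = "http://example.com/files/setup.exe?mirror=eu&id=42"

print(bool(strict.search(plain)))    # True  -> denied by the ACL
print(bool(strict.search(query)))    # False -> slips through!
print(bool(relaxed.search(plain)))   # True
print(bool(relaxed.search(query)))   # True  -> caught as well
```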

Squid uses the extended regex processor, which is the same as egrep's. So we need to change our blockExtensions.acl file to handle the possible ?whatever string which may be trailing our url_path. Here's the solution to handle all the cases:
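The actual list of extensions is up to you; as a sketch, a blockExtensions.acl handling the trailing query string could look like this (one egrep-style pattern per line; the extensions shown are examples):

```
\.exe(\?.*)?$
\.msi(\?.*)?$
\.bat(\?.*)?$
```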



You will still be hated for limiting people's ability to download and install shit on their Windows boxes, but you implemented it the right way, and no script kiddie can brag about bypassing you ;)