
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
December 20, 2014, 13:03 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

December 19, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Don't update NTP – stop using it (December 19, 2014, 23:47 UTC)

tl;dr: Several severe vulnerabilities have been found in the time-setting software NTP. The Network Time Protocol is not secure anyway, due to the lack of a secure authentication mechanism. Better use tlsdate.

Today several severe vulnerabilities in the NTP software were published. On Linux and other Unix systems running the NTP daemon is widespread, so this will likely cause some havoc. I wanted to take this opportunity to argue that I think that NTP has to die.

In the old times before we had the Internet our computers already had an internal clock. It was just up to us to make sure it shows the correct time. These days we have something much more convenient – and less secure. We can set our clocks through the Internet from time servers. This is usually done with NTP.

NTP is pretty old; it was developed in the 80s, and Wikipedia says it's one of the oldest Internet protocols in use. The standard NTP protocol has no cryptography (which wasn't really common in the 80s), so anyone can tamper with your NTP requests and send you a wrong time. Is this a problem? It turns out it is. Modern TLS connections increasingly rely on the system time as part of their security concepts. This includes certificate expiration, OCSP revocation checks, HSTS and HPKP. All of these have security considerations that in one way or another expect the time of your system to be correct.

Practical attack against HSTS on Ubuntu

At the Black Hat Europe conference last October in Amsterdam there was a talk presenting a pretty neat attack against HSTS (the background paper is here; unfortunately there seems to be no video of the talk). HSTS is a protocol to prevent so-called SSL stripping attacks. What does that mean? In many cases a user goes to a web page without specifying the protocol, e.g. he might just type www.example.com in his browser or follow a link from another unencrypted page. To avoid attacks here, a web page can signal the browser that it wants to be accessed exclusively through HTTPS for a defined amount of time. TLS security is just an example here; there are probably other security mechanisms that in some way rely on time.

Here's the catch: the defined amount of time depends on a correct time source. On some systems manipulating the time is as easy as running a man-in-the-middle attack on NTP. At the Black Hat talk a live attack against an Ubuntu system was presented, and the speaker also published his NTP MitM tool, called Delorean. Some systems don't allow arbitrary time jumps, so there the attack is not that easy. But the bottom line is: the system time can be important for application security, so it needs to be secure. NTP is not.

Now, there is an authenticated version of NTP. It is rarely used, but there's another catch: it has been shown to be insecure, and nobody has bothered to fix it yet. There is a pre-shared-key mode that is not completely insecure, but it is not really practical for widespread use. So authenticated NTP won't rescue us. The latest versions of Chrome show warnings in some situations when a highly implausible time is detected. That's a good move, but it's not a replacement for a secure system time.

There is another problem with NTP, and that's the fact that it uses UDP, which can be abused for reflection attacks. UDP has no way of checking that the sender address of a network packet is the real sender. Therefore one can abuse UDP services to amplify Denial-of-Service attacks if there are commands that produce a larger reply. It was found that NTP has such a command, called monlist, with a large amplification factor, and it was widely enabled until recently. Amplification is also a big problem for DNS servers, but that's another topic.

tlsdate can improve security

While there is no secure dedicated time-setting protocol, there is an alternative: TLS. A TLS packet contains a timestamp, and that can be used to set your system time. This is kind of a hack – you're taking another protocol that happens to contain information about the time – but it works very well. There's a tool called tlsdate, together with a time-setting daemon, tlsdated, written by Jacob Appelbaum.

There are some potential problems to consider with tlsdate, but none of them is nearly as serious as the problems of NTP. Adam Langley mentions here that using TLS for time setting while verifying the TLS certificate with the current system time is a circularity. However, this isn't a problem if the existing system time is at least remotely close to the real time. If tlsdate becomes widespread and people add random servers as their time source, strange things may happen. Just imagine server operator A thinks server B is a good time source and server operator B thinks server A is a good time source. Unlikely, but it could be a problem. tlsdate uses the PTB (Physikalisch-Technische Bundesanstalt), an organization running atomic clocks in Germany, as its default time source. I hope they set their server time from the atomic clocks; then everything is fine. Another issue is that you're delegating your trust to a server operator; depending on what your attack scenario is, that might be a problem. However, trusting one time source is a huge improvement over having a completely insecure time source.

So the conclusion is obvious: NTP is insecure, you shouldn't use it. You should use tlsdate instead. Operating systems should replace ntpd or other NTP-based solutions with tlsdated (ChromeOS already does).

(I should point out that the authentication problems have nothing to do with the current vulnerabilities. Those are buffer overflows, which can happen in every piece of software. tlsdate seems pretty secure; it uses seccomp to make exploitability harder. But of course tlsdate can have security vulnerabilities, too.)

December 18, 2014
Michal Hrusecky a.k.a. miska (homepage, bugs)
Running for The Board (December 18, 2014, 00:04 UTC)

Hi everybody, openSUSE elections are just around the corner and I decided to step forward and run for a seat on The Board. For those who don’t know me and would like to know why to consider me as an option, here is my platform.

Who am I?

I’m about 30 years old, I live in Prague, and I love openSUSE (and Gentoo ;-) ). SUSE 6.3 was my first Linux distribution, I went through some more, and I actively joined the openSUSE community more than six years ago. I worked for SUSE for five years as part of the openSUSE Boosters team and as a package maintainer. I was also part of the Prague openSUSE Conference organization team. Nowadays I work for a company called Eaton (on its open source team), but I still love openSUSE, have plenty of friends in both SUSE and openSUSE, poke at some packages from time to time, and I spread open source in general and openSUSE in particular wherever I go (we have a few openSUSE servers at work now, yay).

What I see as a role of board and what I would like to achieve there?

I see the role of the board as that of a supporter and caretaker. The board is here to do the boring stuff and to enable everybody else to make amazing things within the project: to encourage people to do new things, to smooth rough edges, remove obstacles, listen to the people and try to bring them together. Also, if needed, to defend the project from possible threats, but I don’t see any on the horizon currently :-)

What would I like to achieve? World domination? Probably not, as I don’t think that the board is here to choose direction. But if you have a cunning and ethical plan for how to do that, I think the board should do everything possible to support you. On a more serious note, openSUSE as a distribution had a challenging year and went through some changes lately, and I believe that thanks to the current board we managed to get through it quite well. But I also think there are more challenges in front of us, and I would like to help make our future path as smooth as possible.

Why vote for me?

Why vote for me, especially if I don’t promise pink ponies and conquering the world? Well, I promise that I will do my best to support you and help the project move in whatever direction it wants. Even if it means pink ponies and conquering the world ;-) I always listen to others and try to resolve everything peacefully. I’m almost always smiling and it’s hard to piss me off. So, almost no matter what, I’ll keep calm and patient, and will try to resolve challenges peacefully and to the satisfaction of all interested parties.

December 16, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

I pushed out another version of Lilblue Linux a few days ago but I don’t feel as good about this release as previous ones.  If you haven’t been following my posts, Lilblue is a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  The name is a bit misleading because Lilblue is Gentoo but departs from the mainstream in this one respect only.  In fact, I strive to make it as close to mainstream Gentoo as possible so that everything will “just work”.  I’ve been maintaining Lilblue for years as a way of pushing the limits of uClibc, which is mainly intended for embedded systems, to see where it breaks and fix or improve it.

As with all releases, there are always a few minor problems, little annoyances that are not exactly show stoppers.  One minor oversight that I found after releasing was that I hadn’t configured smplayer correctly.  That’s the GUI front end to mplayer that you’ll find on the toolbar at the bottom of the desktop.  It works, just not out of the box.  In the preferences, you need to switch from mplayer2 to mplayer and set the video output to x11.  I’ll add that to the build scripts to make sure it’s in the next release [1].  I’ve also been migrating away from GNOME-centered applications, which have been pulling in more and more bloat.  A couple of releases ago I switched from gnome-terminal to xfce4-terminal, and for this release, I finally made the leap from epiphany to midori as the main browser.  I like midori better although it isn’t as popular as epiphany.  I hope others approve of the choice.

But there is one issue I hit which is serious.  It seems with every release I hit at least one of those.  This time it was in uClibc’s implementation of dlclose().  Along with dlopen() and dlsym(), this is how shared objects can be loaded into a running program during execution rather than at load time.  This is probably more familiar to people as “plugins”, which are just shared objects loaded while the program is running.  When building the latest Lilblue image, gnome-base/librsvg segfaulted while running gdk-pixbuf-query-loaders [2].  The latter links against glib and calls g_module_open() and g_module_close() on many shared objects as it constructs a cache of loadable objects.  g_module_{open,close} are just glib’s wrappers for dlopen() and dlclose() on systems that provide them, like Linux.  A preliminary backtrace obtained by running gdb on `/usr/bin/gdk-pixbuf-query-loaders ./libpixbufloader-svg.la` pointed to the segfault happening in gcc’s __deregister_frame_info() in unwind-dw2-fde.c, which didn’t sound right.  I rebuilt the entire system with CFLAGS+=”-fno-omit-frame-pointer -O1 -ggdb” and turned on uClibc’s SUPPORT_LD_DEBUG=y, which emits debugging info to stderr when running with LD_DEBUG=1, and DODEBUG=y, which prevents symbol stripping in uClibc’s libraries.  A more complete backtrace gave:

Program received signal SIGSEGV, Segmentation fault.
__deregister_frame_info (begin=0x7ffff22d96e0) at /var/tmp/portage/sys-devel/gcc-4.8.3/work/gcc-4.8.3/libgcc/unwind-dw2-fde.c:222
222 /var/tmp/portage/sys-devel/gcc-4.8.3/work/gcc-4.8.3/libgcc/unwind-dw2-fde.c: No such file or directory.
(gdb) bt
#0 __deregister_frame_info (begin=0x7ffff22d96e0) at /var/tmp/portage/sys-devel/gcc-4.8.3/work/gcc-4.8.3/libgcc/unwind-dw2-fde.c:222
#1 0x00007ffff22c281e in __do_global_dtors_aux () from /lib/libbz2.so.1
#2 0x0000555555770da0 in ?? ()
#3 0x0000555555770da0 in ?? ()
#4 0x00007fffffffdde0 in ?? ()
#5 0x00007ffff22d8a2f in _fini () from /lib/libbz2.so.1
#6 0x00007fffffffdde0 in ?? ()
#7 0x00007ffff6f8018d in do_dlclose (vhandle=0x7ffff764a420 <__malloc_lock>, need_fini=32767) at ldso/libdl/libdl.c:860
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

The problem occurred when running the global destructors in dlclose()-ing libbz2.so.1.  Line 860 of libdl.c has DL_CALL_FUNC_AT_ADDR (dl_elf_fini, tpnt->loadaddr, (int (*)(void))); which is a macro that calls a function at address dl_elf_fini with signature int(*)(void).  If you’re not familiar with ctors and dtors, these are the global constructors/destructors whose code lives in the .ctors and .dtors sections of an ELF object, which you can see when doing readelf -S <obj>.  The ctors are run when a library is first linked or opened via dlopen(), and similarly the dtors are run when dlclose()-ing.  Here’s some code to demonstrate this:

# Makefile
all: tmp.so test
tmp.o: tmp.c
        gcc -fPIC -c $^
tmp.so: tmp.o
        gcc -shared -Wl,-soname,$@ -o $@ $^
test: test-dlopen.c
        gcc -o $@ $^ -ldl
clean:
        rm -f *.so *.o test
// tmp.c
#include <stdio.h>

void my_init() __attribute__ ((constructor));
void my_fini() __attribute__ ((destructor));

void my_init() { printf("Global initialization!\n"); }
void my_fini() { printf("Global cleanup!\n"); }
void doit() { printf("Doing it!\n"); }
// test-dlopen.c
// This has very bad error handling, sacrificed for readability.
#include <stdio.h>
#include <dlfcn.h>

int main() {
        int (*mydoit)();
        void *handle = NULL;

        handle = dlopen("./tmp.so", RTLD_LAZY);
        mydoit = dlsym(handle, "doit");
        mydoit();
        dlclose(handle);

        return 0;
}

When run, this code gives:

# ./test 
Global initialization!
Doing it!
Global cleanup!

So, my_init() is run on dlopen() and my_fini() is run on dlclose().  Basically, upon dlopen()-ing a shared object as you would a plugin, the library is first mmap()-ed into the process’s address space using the PT_LOAD addresses which you can see with readelf -l <obj>.  Then, one walks through all the global constructors and runs them.  Upon dlclose()-ing the opposite process is done.  One first walks through the global destructors and runs them, and then one munmap()-s the same mappings.

Figuring I wasn’t the only person to see a problem here, I googled and found that Nathan Copa of Alpine Linux hit a similar problem [3] back when Alpine used to use uClibc — it now uses musl.  He identified a problematic commit and I wrote a patch which would retain the new behavior introduced by that commit upon setting an environment variable NEW_START, but would otherwise revert to the old behavior if NEW_START is unset.  I also added some extra diagnostics to LD_DEBUG to better see what was going on.  I’ll add my patch to a comment below, but the gist of it is that it toggles between the old and new way of calculating the size of the munmap()-ings by subtracting an end and start address.  The old behavior used a mapaddr for the start address that is totally wrong and basically causes every munmap()-ing to fail with EINVAL.  This is corrected by the commit as a simple strace -e trace=munmap shows.

My results when running with LD_DEBUG=1 were interesting to say the least.  With the old behavior, the segfault was gone:

# LD_DEBUG=1 /usr/bin//gdk-pixbuf-query-loaders libpixbufloader-svg.la
...
do_dlclose():859: running dtors for library /lib/libbz2.so.1 at 0x7f26bcf39a26
do_dlclose():864: unmapping: /lib/libbz2.so.1
do_dlclose():869: before new start = 0xffffffffffffffff
do_dlclose():877: during new start = (nil), vaddr = (nil), type = 1
do_dlclose():877: during new start = (nil), vaddr = 0x219c90, type = 1
do_dlclose():881: after new start = (nil)
do_dlclose():987: new start = (nil)
do_dlclose():991: old start = 0x7f26bcf22000
do_dlclose():994: dlclose using old start
do_dlclose():998: end = 0x21b000
do_dlclose():1013: removing loaded_modules: /lib/libbz2.so.1
do_dlclose():1031: removing symbol_tables: /lib/libbz2.so.1
...

Of course, all of the munmap()-ings failed.  The dtors were run, but no shared object got unmapped.  When running the code with the correct value of start, I got:

# NEW_START=1 LD_DEBUG=1 /usr/bin//gdk-pixbuf-query-loaders libpixbufloader-svg.la
...
do_dlclose():859: running dtors for library /lib/libbz2.so.1 at 0x7f5df192ba26
Segmentation fault

What’s interesting here is that the segfault occurs at DL_CALL_FUNC_AT_ADDR, which is before the munmap()-ing and so before any effect that the new value of start should have! This seems utterly mysterious until you realize that there is a whole series of dlopens/dlcloses as gdk-pixbuf-query-loaders does its job — I counted 40 in all!  This is as far as I’ve gotten in narrowing down this mystery, but I suspect some previous munmap()-ing is breaking the dtors for libbz2.so.1, and when the call is made to that address, it’s no longer valid, leading to the segfault.

Rich Felker, aka dalias, the developer of musl, made an interesting comment to me in IRC when I told him about this issue.  He said that the unmappings are dangerous and that musl actually doesn’t do them.  For now, I’ve intentionally left the unmappings in uClibc’s dlclose() “broken” in the latest release of Lilblue, so you can’t hit this bug, but for the next release I’m going to look carefully at what glibc and musl do and try to get this fix upstream.  As I said when I started this post, I’m not totally happy with this release because I didn’t nail the issue, I just implemented a workaround.  Any hints would be much appreciated!

[1] The build scripts can be found in the releng repository at git://git.overlays.gentoo.org/proj/releng.git under tools-uclibc/desktop.  The scripts begin with a hardened amd64 uclibc stage3 tarball (from http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-uclibc-hardened/) and build up the desktop.

[2] The purposes of librsvg and gdk-pixbuf are not essential to the problem with dlclose(), but for completeness we state them here: librsvg is a library for rendering scalable vector graphics and gdk-pixbuf is an image loading library for gtk+.  gdk-pixbuf-query-loaders reads a libtool .la file and generates a cache of loadable shared objects to be consumed by gdk-pixbuf.

[3] See  http://lists.uclibc.org/pipermail/uclibc/2012-October/047059.html. He suggested that the following commit was doing evil things: http://git.uclibc.org/uClibc/commit/ldso?h=0.9.33&id=9b42da7d0558884e2a3cc9a8674ccfc752369610

December 15, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)

Yesterday I deleted all the remaining data on my old Nokia 6230i phone with the intent to give it away. It was my last feature phone (i.e. non-smartphone). My first feature phone was a 5130 in the late 90s. It made me think a bit about technology development.

I remember that at some point when I was a kid I asked myself whether transportable phones existed. I was told they didn't (which was not exactly true, but it's safe to say that they weren't widely available). Feature phones were nonexistent when I started to care about tech gadgets, and today they're obsolete. (Some might argue that smartphones are the new mobile phones, but I don't think that's accurate. Essentially I think the name smartphone is misleading, because they are multi-function devices where the phone functionality is just one – and hardly the most important one.)

I considered whether I should keep it in case my current smartphone breaks or gets lost, so I'd have a quick replacement. However, I then thought it would probably not do much good, and decided it can go away while there are still people who would want to use it (the point where I could sell it has already passed). The reason is that the phone functionality is probably one of the less important ones of my smartphone, and a feature phone wouldn't do much to help in case I lose it.

Of course feature phones are not the only tech gadgets that rose and became obsolete during my lifetime. CD-ROM drives, MP3 players, modems, … I recently saw a documentary called “80s greatest gadgets” (it seems to be on YouTube, but may be unavailable depending on your geolocation). I found it striking that almost every device they mentioned can be replaced with a smartphone today.

Something I wondered was what my own expectations of tech development were in the past. Surprisingly, I couldn't remember that many. I would really be interested in how I would've predicted tech development, let's say, 10 or 15 years ago, and in comparing that to what really happened. The few things I can remember are that when I first heard about 3D printers I had high hopes (I haven't seen them come true until now) and that I always felt free software would become the norm (which in large parts it did, but certainly not in the way I expected). I'm pretty sure I didn't expect social media, and I'm unsure about smartphones.

As I feel it's unfortunate that I don't remember what I expected in the past, I thought I could write down some expectations now. I feel drone delivery will likely have an important impact in the upcoming years and push the area of online shopping to a whole new level. I expect the whole area that's today called the “sharing economy” to rise and probably crash into many more areas. And I think that at some point robot technology will probably enter our everyday life. Admittedly, none of this is completely unexpected, but that's not the point.

If you have some interesting thoughts what tech we'll see in the upcoming years feel free to leave a comment.

Image from Rudolf Stricker / Wikimedia Commons

Sebastian Pipping a.k.a. sping (homepage, bugs)

Julian Treasure: How to speak so that people want to listen

December 14, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Handbooks moved (December 14, 2014, 12:42 UTC)

Yesterday the move of the Gentoo handbooks (whose most important part is the installation instructions for the various supported architectures) to the Gentoo Wiki was concluded, with a last-minute addition being the one-page views, so that users who want to can view the installation instructions completely within one view.

Because we use lots of transclusions (i.e. including different wiki articles inside another article) to support a common documentation base for the various architectures, I hit a limit that prevented me from creating a single page for the entire handbook (i.e. “Installing Gentoo Linux”, “Working with Gentoo”, “Working with Portage” and “Network configuration” together), but I could settle for one page per part. I think that matches most of the use cases.

With the move now done, it is time to start tackling the various bugs that were reported against the handbook, as well as initiate improvements where needed.

I did make a mistake in the move, though (probably more than one, but this one is fresh in my memory). I had to do a lot of the following:

<noinclude><translate></noinclude>
...
<noinclude></translate></noinclude>

Without this, transcluded parts would suddenly show the translation tags as regular text. Only afterwards (I’m talking about more than 400 different pages) did I read that I should transclude the /en pages (like Handbook:Parts/Installation/About/en instead of Handbook:Parts/Installation/About) as those do not have the translation specifics in them. Sigh.

December 13, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A short list of fiction books I enjoyed (December 13, 2014, 16:18 UTC)

I promised in the previous review a few more reviews for the month, especially as Christmas gifts for geeks. I decided to publish this group-review of titles, as I don't think it would have served anybody to post separate book reviews for all of these. I would also suggest you take a look at my previous reviews so that you can find more ideas for new books to read.

Let's start with a new book by a well-known author: The Ocean at the End of the Lane by Neil Gaiman is, as is usual with him, difficult to nail (ah-ha) to a genre quickly. It starts off as the story of a kid's youth but builds up to… quite something. I listened to the audiobook version rather than reading the book, and as it started it seemed like something perfect to lull me to sleep, but then it mixed with my own dreams to form something at the same time scary and full of warmth.

It's common to say that it's the journey, not the destination, that is important, and I find that a very good description of what I like in books. And in the case of Gaiman's book, this is truer than ever. I was not looking forward to it ending, not because it's a bad ending (even though it did upset me a bit) but because I really wanted to stay in that magical world of the ocean at the end of the lane.

Next up, two series from an author who's also a friend: Michael McCloskey, who writes both fantasy and scifi — I have yet to start on his fantasy series, but I fell in love with his scifi writing with Trilisk Ruins. I think it might be worth retelling the story of how I found out about it, even though it is a bit embarrassing: for a while I was a user on OkCupid – so sue me, it feels lonely sometimes – and while I did not end up meeting anybody out of there, my interest was piqued by an ad on one of the pages: it was part of the cover of Trilisk Ruins but with no text on it. I thought it was going to be some kind of game; instead it was a much more intriguing book.

Michael's secret is in talking about a future that may be far off, but that is, well, feasible. It's not the dystopia painted by most scifi books I've read or skimmed through recently, although it's not a perfect future — it is, after all, the same as now. And the technology is not just conjured from a future so far away that we might as well count on it as magic, nor is it just an extension of today's. It is a projection of it into the future: the Links are technologies that, while not existing now, and not having a clearly defined path to get to them, wouldn't be too far-fetched to exist.

Parker Interstellar Travels – that's the name of the series starting with Trilisk Ruins – is mostly lighthearted, even though dark at times. It reads quickly once you get past the first chapter or two, as it jumps straight into an unknown world, so you may be stunned by it for a moment. But I would suggest you brace yourself and keep going; it's fully worth it!

There is a second series by Michael, in this case an already-closed trilogy: Synchronicity, starting with Insidious, is set in the same universe and future, but it takes a quite different approach. It's definitely darker and edgier, and it would appeal to many of the geeks who are, as I write, busy reading about and discussing potential AI problems. I have a feeling it would have been similar in the '60s-'70s after 2001 was released.

In this series the focus is more on the military, rather than individuals, and their use, and fear, of AIs. As I noted, it is darker, and it's less action-driven than PIT, but it makes up for that in introspection, so depending on what your cup of tea is, you may choose between the two.

The fourth entry in this collection is something that arrived through Samsung's Amazon deals. Interestingly, I already had an audiobook by the same author – B.V. Larson – through some Audible giveaway, but I have not listened to it yet. Instead I read Technomancer in just a week or so, and it was quite interesting.

Rather than the future, Larson goes for the current time, but in a quite fictionalized setting. There's a bit of cliché in the painting of not one but two women in the book, but it does not seem as endemic as in other books I've read recently. It's a quick-bite read, but it's also the start of a series, so if you're looking for something that does not end right away you may consider it.

To finish this up, I'll go back to an author that I reviewed before: Nick Harkaway, author of The Gone-Away World, which is still one of my favourite modern books. While I have not yet read Tigerman, which was on this year's shortlist for the Goodreads Awards, last year I read Angelmaker, which is in a lot of ways similar to The Gone-Away World, but different. His characters once again are fully built up even when they are cows, and the story makes you want to dive into that world, flawed and sometimes scary as it is.

Have fun, and good reads this holiday season!

December 12, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Handbooks almost moved to wiki (December 12, 2014, 15:35 UTC)

Content-wise, the move is done. I’ve done a few checks on the content to see if the structure still holds, translations are enabled on all pages, the use of partitions is sufficiently consistent for each architecture, and so on. The result can be seen on the gentoo handbook main page, from which the various architectural handbooks are linked.

I sent a sort-of announcement to the gentoo-project mailing list (which also includes the motivation for the move). If there are no objections, I will update the current handbooks to link to the wiki ones, as well as update the links on the website (and in wiki articles) to point to the wiki.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo mailing lists down (December 12, 2014, 00:09 UTC)

Since yesterday the host running all Gentoo mailing lists has been down. So far no information is available on the nature of the problem. Please check the Gentoo Infrastructure status page, http://infra-status.gentoo.org/, for updates.

[Edit: All fixed.]

This public service announcement has been brought to you by non-infra Andreas.

December 11, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Book Review: Getting More (December 11, 2014, 21:19 UTC)

It has been a while since I wrote my last book review, and it was not exactly a great one, so I'll try to improve by writing a few reviews over the next month or so. After all, what better gift for geeks than books?

I had the pleasure of reading Getting More last October, as part of a work training. It's a book about negotiation, and it makes a point multiple times of detaching that from the idea of manipulation, even though it's probably up to you to decide whether the distinction is clear enough. The author, Prof. Stuart Diamond, runs a negotiation course at Wharton, in Pennsylvania, and became famous through it.

I was expecting the book to be hogwash, like many other business books, and especially like much of the material I had been given at previous courses (before my current job, though). It turned out that the book is not bad at all, and I actually found it enjoyable, even if a bit repetitive — but repetita iuvant, as they say; the repetition is there to make you see the point, not just for its own sake.

The main objective of the book is to provide you with a process and tools to use during negotiations, big-time business deals and everyday transactions alike. It also includes examples of how to use this with your significant other and children, but I'll admit I just skipped over those altogether, as they are not useful to me (I'm single and I don't even see my nephew enough to care about dealing with children.)

It was a very interesting read for me because, while I know I'm not exactly a cold-minded person, especially when frustrated, I found that I had been using some of the tools described for a long time without even knowing they existed. For example, when I interviewed for my current job, my first on-site interviewer arrived with a NERV sticker on his laptop. We spent a few minutes talking about anime, and not only did that reassure me a lot about the day – you have no idea how stressed I was, as I even caught a fever the day before the interview! – it also built an instant connection with someone who did indeed become a colleague. I would think it added to his patience with my thicker-than-usual accent that day, too.

Between anecdotes and explanations, the book has another underlying theme: be real. This is the main point of difference between negotiation and manipulation as the book sees it. In the more mundane case of dealing with stores, hotels and airlines, there are two main examples of using the techniques: getting compensated for something negative that happened, whether or not it was in the other party's control, and asking for penalties to be waived when you did something incorrect unintentionally. One might be tempted to cause something negative and ask for compensation even if everything was perfect — but that would be manipulation, it's unlikely to work very well unless you're a good actor (that is, a liar), and it rather makes things worse for everyone else.

The book invites you to keep exercising the tools daily — I have been trying, but it's definitely not easy, especially if you're not an extrovert by nature. It takes practice and, especially at the beginning, more time than it is worth: arguing for half an hour over a fifteen-euro discount somewhere is not really worth it to me. On the other hand, practice makes perfect, and the process is the same for small and big transactions alike. I have indeed been able to get some ~$100 back at the Holiday Inn I stayed at in San Francisco.

I have my set of reservations about using the methods described in the book – it sometimes feels manipulative and reliant on implicit privilege – but on the other hand, Prof. Diamond points out multiple times that the methods work best when both parties know about them, so spreading the word about the book is a good idea, and telling people explicitly what you're doing is the best strategy.

Indeed, I felt that I would have gotten a better outcome from Tesco just last week if they had read the book and applied the same methods. A delivery was missed, and that was fine, but then the store went incommunicado for over ten hours instead of calling me right away to reschedule, and the guy who eventually called me lied about the order going out anew the day after. They gave me some €25 back straight on the card — which is okay for me, but it was not really in their best interest, as I could have walked away with the money and gone to a different store. I had asked them instead whether they could offer me a few months of their DeliverySaver (think Amazon Prime for groceries) for free.

Yes, the DeliverySaver subscription would have had a much higher face value (€7.5/month), but it would actually have been cheaper for them (as I live in an apartment complex that they deliver to daily anyway, the delivery costs are much lower than that), and it would have "forced" me to come back to them rather than go to a competitor such as SuperValu. As it turns out, I've decided to stick with Tesco anyway, mostly because I have their credit card and it is thus still convenient to stay a customer. But I do think they could have made a better deal for themselves.

At any rate, the book is worth a read, and the techniques are far from worthless, even though they are difficult to pull off without being a jerk. Using them well requires knowing a lot about the system you're dealing with, but again, that is up to the people reading the book.

December 10, 2014
Gentoo Monthly Newsletter: November 2014 (December 10, 2014, 20:00 UTC)

Gentoo News

Council News

The Gentoo Council addressed a few miscellaneous matters this month.

The first concerned tinderbox reports to bugs. There was a bit of back-and-forth in Bugzilla, with a dispute over whether bugs generated from tinderbox runs that contained logs attached as URLs instead of as files could be closed as INVALID. Normally the use of URLs is discouraged, to improve the long-term usability of the bugs. Since efforts were already underway to automatically convert linked logs into attached logs, it was felt that closing bugs as INVALID was counterproductive.

There was also a proposal to implement a “future.eclass” which would make EAPI6 features available to EAPI5 ebuilds early. In general the Council decided that this was not a good thing to implement in the main tree, as it would mean supporting two different implementations of some EAPI6 features, which could diverge and cause confusion. Instead it would be preferable to focus on migrating packages to EAPI6. The Council did encourage using mechanisms like this for testing in overlays and the like, if it is for the purpose of improving future EAPIs, but said this shouldn’t be done in “production.”

Several other items came up with no action this month. There was a proposal to allow die within subshells in EAPI6, but this had not received list discussion, and the Council has been requiring such discussion to ensure that all developers are able to properly vet significant changes. The remaining items were follow-ups from previous months which are being tracked but which have not had enough development to act on yet.

Gentoo Developer Moves

Summary

Gentoo is made up of 244 active developers, of which 40 are currently away.
Gentoo has recruited a total of 805 developers since its inception.

Changes

  • Matthias Maier (tamiko) joined the Science team
  • Andrew Savchenko (bircoph) joined the Science, Mathematics and Physics team
  • Jason Zaman (perfinion) joined the Hardened, Integrity and SElinux teams
  • Aaron Swenson (titanofold) joined the Perl team
  • Patrice Clement (monsieurp) joined the Perl team
  • Tom Wijsman (tomwij) left the bug-wranglers, dotnet, kernel, portage, QA and proxy-maintainers teams

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 163
Packages 17849
Ebuilds 37661
Architecture Stable Testing Total % of Packages
alpha 3536 674 4210 23.59%
amd64 10838 6521 17359 97.25%
amd64-fbsd 0 1584 1584 8.87%
arm 2642 1848 4490 25.16%
arm64 549 64 613 3.43%
hppa 3076 529 3605 20.20%
ia64 3093 697 3790 21.23%
m68k 605 118 723 4.05%
mips 0 2422 2422 13.57%
ppc 6741 2549 9290 52.05%
ppc64 4295 1048 5343 29.93%
s390 1410 404 1814 10.16%
sh 1537 524 2061 11.55%
sparc 4033 980 5013 28.09%
sparc-fbsd 0 319 319 1.79%
x86 11483 5448 16931 94.86%
x86-fbsd 0 3205 3205 17.96%
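
For reference, the "% of Packages" column is simply each architecture's total keyworded packages divided by the overall package count (17849 above); a quick sanity check against two rows of the table:

```shell
# Recompute the "% of Packages" column from the table's own numbers:
# each arch's (stable + testing) total divided by the package count.
packages=17849
awk -v p="$packages" 'BEGIN {
    printf "alpha %.2f%%\n", 100 * 4210  / p   # table says 23.59%
    printf "amd64 %.2f%%\n", 100 * 17359 / p   # table says 97.25%
}'
```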

gmn-portage-stats-2014-12

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201411-11 net-proxy/squid Squid: Multiple vulnerabilities 504176
201411-10 net-misc/asterisk Asterisk: Multiple Vulnerabilities 523216
201411-09 app-admin/ansible Ansible: Privilege escalation 516564
201411-08 net-wireless/aircrack-ng Aircrack-ng: User-assisted execution of arbitrary code 528132
201411-07 net-misc/openswan Openswan: Denial of Service 499870
201411-06 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 525430
201411-05 net-misc/wget GNU Wget: Arbitrary code execution 527056
201411-04 dev-lang/php PHP: Multiple vulnerabilities 525960
201411-03 net-misc/tigervnc TigerVNC: User-assisted execution of arbitrary code 505170
201411-02 dev-db/mysql (and 1 more) MySQL, MariaDB: Multiple vulnerabilities 525504
201411-01 media-video/vlc VLC: Multiple vulnerabilities 279340

Package Removals/Additions

Removals

Package Developer Date
dev-php/adodb-ext grknight 01 Nov 2014
dev-php/eaccelerator grknight 01 Nov 2014
dev-php/pecl-apc grknight 01 Nov 2014
dev-php/pecl-id3 grknight 01 Nov 2014
dev-php/pecl-mogilefs grknight 01 Nov 2014
dev-php/pecl-sca_sdo grknight 01 Nov 2014
app-text/pastebin dilfridge 02 Nov 2014
sys-devel/libperl dilfridge 08 Nov 2014
dev-perl/Lucene dilfridge 08 Nov 2014
razorqt-base/libqtxdg yngwin 08 Nov 2014
virtual/perl-Version-Requirements dilfridge 08 Nov 2014
perl-core/Version-Requirements dilfridge 08 Nov 2014
dev-python/python-exec mgorny 08 Nov 2014
sys-devel/bfin-toolchain vapier 08 Nov 2014
dev-python/gns3-gui idella4 09 Nov 2014
dev-python/sparqlwrapper idella4 09 Nov 2014
app-accessibility/gnome-mag pacho 13 Nov 2014
app-accessibility/gnome-speech pacho 13 Nov 2014
app-accessibility/gok pacho 13 Nov 2014
app-admin/gnome-system-tools pacho 13 Nov 2014
app-admin/pessulus pacho 13 Nov 2014
app-admin/sabayon pacho 13 Nov 2014
app-crypt/seahorse-plugins pacho 13 Nov 2014
app-pda/gnome-pilot pacho 13 Nov 2014
app-pda/gnome-pilot-conduits pacho 13 Nov 2014
dev-cpp/libgdamm pacho 13 Nov 2014
dev-cpp/libpanelappletmm pacho 13 Nov 2014
dev-python/brasero-python pacho 13 Nov 2014
dev-python/bug-buddy-python pacho 13 Nov 2014
dev-python/evince-python pacho 13 Nov 2014
dev-python/evolution-python pacho 13 Nov 2014
dev-python/gnome-applets-python pacho 13 Nov 2014
dev-python/gnome-desktop-python pacho 13 Nov 2014
dev-python/gnome-media-python pacho 13 Nov 2014
dev-python/libgda-python pacho 13 Nov 2014
dev-python/libgksu-python pacho 13 Nov 2014
dev-python/libgnomeprint-python pacho 13 Nov 2014
dev-python/libgtop-python pacho 13 Nov 2014
dev-python/totem-python pacho 13 Nov 2014
gnome-base/gnome-applets pacho 13 Nov 2014
gnome-base/gnome-fallback pacho 13 Nov 2014
gnome-base/gnome-panel pacho 13 Nov 2014
app-accessibility/morseall pacho 13 Nov 2014
app-accessibility/java-access-bridge pacho 13 Nov 2014
gnome-extra/libgail-gnome pacho 13 Nov 2014
app-accessibility/dasher pacho 13 Nov 2014
gnome-extra/bug-buddy pacho 13 Nov 2014
gnome-extra/deskbar-applet pacho 13 Nov 2014
gnome-extra/evolution-exchange pacho 13 Nov 2014
gnome-extra/evolution-webcal pacho 13 Nov 2014
gnome-extra/fast-user-switch-applet pacho 13 Nov 2014
gnome-extra/gcalctool pacho 13 Nov 2014
gnome-extra/gnome-audio pacho 13 Nov 2014
gnome-extra/gnome-games-extra-data pacho 13 Nov 2014
gnome-extra/gnome-games pacho 13 Nov 2014
gnome-extra/gnome-media pacho 13 Nov 2014
gnome-extra/gnome-screensaver pacho 13 Nov 2014
gnome-extra/gnome-swallow pacho 13 Nov 2014
gnome-extra/hamster-applet pacho 13 Nov 2014
gnome-extra/lock-keys-applet pacho 13 Nov 2014
gnome-extra/nautilus-open-terminal pacho 13 Nov 2014
gnome-extra/panflute pacho 13 Nov 2014
gnome-extra/sensors-applet pacho 13 Nov 2014
gnome-extra/file-browser-applet pacho 13 Nov 2014
gnome-extra/gnome-hdaps-applet pacho 13 Nov 2014
media-gfx/byzanz pacho 13 Nov 2014
net-analyzer/gnome-netstatus pacho 13 Nov 2014
net-analyzer/netspeed_applet pacho 13 Nov 2014
x11-misc/glunarclock pacho 13 Nov 2014
gnome-extra/swfdec-gnome pacho 13 Nov 2014
gnome-extra/tasks pacho 13 Nov 2014
media-gfx/shared-color-profiles pacho 13 Nov 2014
net-libs/gupnp-vala pacho 13 Nov 2014
media-libs/swfdec pacho 13 Nov 2014
net-libs/farsight2 pacho 13 Nov 2014
net-libs/libepc pacho 13 Nov 2014
net-misc/drivel pacho 13 Nov 2014
net-misc/blogtk pacho 13 Nov 2014
net-misc/gnome-blog pacho 13 Nov 2014
net-misc/tsclient pacho 13 Nov 2014
www-client/epiphany-extensions pacho 13 Nov 2014
www-plugins/swfdec-mozilla pacho 13 Nov 2014
x11-themes/gnome-themes pacho 13 Nov 2014
x11-themes/gnome-themes-extras pacho 13 Nov 2014
x11-themes/gtk-engines-cleanice pacho 13 Nov 2014
x11-themes/gtk-engines-dwerg pacho 13 Nov 2014
x11-plugins/wmlife pacho 13 Nov 2014
dev-dotnet/gtkhtml-sharp pacho 13 Nov 2014
dev-util/mono-tools pacho 13 Nov 2014
net-libs/telepathy-farsight pacho 13 Nov 2014
x11-themes/gdm-themes pacho 13 Nov 2014
x11-themes/metacity-themes pacho 13 Nov 2014
x11-wm/metacity pacho 13 Nov 2014
gnome-base/libgdu pacho 13 Nov 2014
rox-base/rox-media pacho 13 Nov 2014
dev-python/gns3-gui patrick 14 Nov 2014
kde-misc/kcm_touchpad mrueg 15 Nov 2014
net-misc/ieee-oui zerochaos 19 Nov 2014
app-shells/zsh-completion radhermit 21 Nov 2014
app-dicts/gnuvd pacho 21 Nov 2014
net-misc/netcomics-cvs pacho 21 Nov 2014
dev-python/kinterbasdb pacho 21 Nov 2014
dev-libs/ibpp pacho 21 Nov 2014
dev-php/PEAR-MDB2_Driver_ibase pacho 21 Nov 2014
net-im/kmess pacho 21 Nov 2014
games-server/halflife-steam pacho 21 Nov 2014
sys-apps/usleep pacho 21 Nov 2014
dev-util/cmockery radhermit 24 Nov 2014
dev-python/pry radhermit 24 Nov 2014
dev-perl/DateTime-Format-DateManip zlogene 26 Nov 2014
www-servers/ocsigen aballier 27 Nov 2014
dev-ml/ocamlduce aballier 27 Nov 2014
dev-perl/Mail-ClamAV zlogene 27 Nov 2014
dev-perl/SVN-Mirror zlogene 27 Nov 2014
dev-embedded/msp430-binutils radhermit 27 Nov 2014
dev-embedded/msp430-gcc radhermit 27 Nov 2014
dev-embedded/msp430-gdb radhermit 27 Nov 2014
dev-embedded/msp430-libc radhermit 27 Nov 2014
dev-embedded/msp430mcu radhermit 27 Nov 2014
mail-filter/spamassassin-fuzzyocr dilfridge 29 Nov 2014

Additions

Package Developer Date
dev-python/python-bugzilla dilfridge 01 Nov 2014
app-vim/sudoedit radhermit 01 Nov 2014
dev-java/icedtea-sound caster 01 Nov 2014
dev-perl/Net-Trackback dilfridge 01 Nov 2014
dev-perl/Syntax-Highlight-Engine-Simple dilfridge 01 Nov 2014
dev-perl/Syntax-Highlight-Engine-Simple-Perl dilfridge 01 Nov 2014
app-i18n/fcitx-qt5 yngwin 02 Nov 2014
virtual/postgresql titanofold 02 Nov 2014
dev-python/oslo-i18n alunduil 02 Nov 2014
dev-libs/libltdl vapier 03 Nov 2014
dev-texlive/texlive-langchinese aballier 03 Nov 2014
dev-texlive/texlive-langjapanese aballier 03 Nov 2014
dev-texlive/texlive-langkorean aballier 03 Nov 2014
app-misc/ltunify radhermit 05 Nov 2014
dev-vcs/gitsh jlec 05 Nov 2014
dev-python/pypy3 mgorny 05 Nov 2014
virtual/pypy3 mgorny 05 Nov 2014
dev-php/PEAR-Math_BigInteger grknight 06 Nov 2014
games-rpg/morrowind-data hasufell 06 Nov 2014
games-engines/openmw hasufell 06 Nov 2014
dev-perl/URI-Encode dilfridge 06 Nov 2014
dev-perl/MIME-Base32 dilfridge 08 Nov 2014
dev-libs/libqtxdg yngwin 08 Nov 2014
app-admin/lxqt-admin jauhien 08 Nov 2014
dev-python/oslo-utils alunduil 08 Nov 2014
net-misc/gns3-server idella4 09 Nov 2014
dev-python/gns3-gui idella4 09 Nov 2014
dev-python/pypy3-bin mgorny 09 Nov 2014
dev-python/oslo-serialization alunduil 09 Nov 2014
dev-python/bashate prometheanfire 10 Nov 2014
dev-python/ldappool prometheanfire 10 Nov 2014
dev-python/repoze-who prometheanfire 10 Nov 2014
dev-python/pysaml2 prometheanfire 10 Nov 2014
dev-python/posix_ipc prometheanfire 10 Nov 2014
dev-python/oslo-db prometheanfire 10 Nov 2014
dev-ml/enumerate aballier 10 Nov 2014
dev-ml/core_bench aballier 10 Nov 2014
dev-util/sysdig mgorny 11 Nov 2014
dev-python/singledispatch idella4 12 Nov 2014
dev-tex/biblatex-apa mrueg 12 Nov 2014
app-emacs/multiple-cursors ulm 12 Nov 2014
dev-python/libnacl chutzpah 13 Nov 2014
dev-python/ioflo chutzpah 13 Nov 2014
dev-python/raet chutzpah 13 Nov 2014
dev-qt/qtchooser pesa 13 Nov 2014
dev-python/dicttoxml chutzpah 13 Nov 2014
dev-python/moto chutzpah 13 Nov 2014
dev-python/gns3-gui idella4 13 Nov 2014
x11-plugins/wmlife voyageur 13 Nov 2014
net-misc/gns3-gui patrick 14 Nov 2014
games-rpg/a-bird-story hasufell 14 Nov 2014
virtual/python-singledispatch idella4 15 Nov 2014
dev-python/kiwisolver idella4 15 Nov 2014
app-forensics/afl hanno 16 Nov 2014
games-board/gambit sping 16 Nov 2014
dev-db/pgrouting titanofold 16 Nov 2014
dev-python/atom idella4 16 Nov 2014
dev-embedded/kobs-ng vapier 18 Nov 2014
dev-python/ordereddict prometheanfire 18 Nov 2014
dev-python/WSME prometheanfire 18 Nov 2014
dev-python/retrying prometheanfire 18 Nov 2014
dev-python/osprofiler prometheanfire 18 Nov 2014
dev-python/glance_store prometheanfire 18 Nov 2014
dev-python/python-barbicanclient prometheanfire 18 Nov 2014
dev-python/rfc3986 prometheanfire 19 Nov 2014
sys-cluster/libquo ottxor 19 Nov 2014
dev-python/flask-migrate patrick 20 Nov 2014
media-libs/libde265 dlan 20 Nov 2014
dev-python/pyqtgraph radhermit 20 Nov 2014
app-shells/gentoo-zsh-completions radhermit 21 Nov 2014
app-shells/zsh-completions radhermit 21 Nov 2014
dev-libs/libsecp256k1 blueness 21 Nov 2014
net-libs/libbitcoinconsensus blueness 21 Nov 2014
net-misc/gns3-converter idella4 22 Nov 2014
dev-python/pytest-timeout jlec 22 Nov 2014
net-dns/libidn2 jer 22 Nov 2014
app-emulation/vpcs idella4 23 Nov 2014
dev-libs/libmacaroons patrick 23 Nov 2014
app-vim/emmet radhermit 24 Nov 2014
sci-libs/orocos-bfl aballier 25 Nov 2014
sys-libs/efivar floppym 26 Nov 2014
dev-python/jmespath aballier 26 Nov 2014
net-misc/python-x2go voyageur 27 Nov 2014
net-misc/pyhoca-cli voyageur 27 Nov 2014
dev-python/simplekv aballier 27 Nov 2014
dev-python/Flask-KVSession aballier 27 Nov 2014
net-misc/pyhoca-gui voyageur 27 Nov 2014
dev-libs/fstrm radhermit 27 Nov 2014
sci-libs/fcl aballier 28 Nov 2014
dev-ml/labltk aballier 28 Nov 2014
dev-ml/camlp4 aballier 28 Nov 2014
dev-python/sphinxcontrib-doxylink aballier 28 Nov 2014
dev-util/cpputest radhermit 29 Nov 2014
app-text/groonga grknight 29 Nov 2014
app-text/groonga-normalizer-mysql grknight 29 Nov 2014
app-forensics/volatility chithanh 29 Nov 2014
dev-perl/Test-FailWarnings dilfridge 30 Nov 2014
dev-perl/RedisDB-Parser dilfridge 30 Nov 2014
dev-perl/RedisDB dilfridge 30 Nov 2014
dev-python/nose_fixes idella4 30 Nov 2014
dev-perl/MooX-Types-MooseLike-Numeric dilfridge 30 Nov 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 November 2014 and 01 December 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-12

Bug Activity Number
New 1858
Closed 1151
Not fixed 215
Duplicates 164
Total 6294
Blocker 4
Critical 14
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 57
2 Gentoo's Team for Core System packages 54
3 Gentoo Linux Gnome Desktop Team 39
4 Gentoo Perl team 32
5 Tim Harder 30
6 Gentoo Games 29
7 Gentoo KDE team 27
8 Java team 27
9 Gentoo Ruby Team 26
10 Others 829

gmn-closed-2014-12

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Python Gentoo Team 104
2 Gentoo Linux bug wranglers 97
3 Gentoo Linux Gnome Desktop Team 69
4 Gentoo Security 62
5 Gentoo's Team for Core System packages 56
6 Gentoo KDE team 44
7 Java team 38
8 Default Assignee for New Packages 37
9 Qt Bug Alias 33
10 Others 1317

gmn-opened-2014-12

Tips of the month

(by Alexander Berntsen)
New --alert emerge option

From the emerge(1) manpage

--alert [ y | n ] (-A short option) Add a terminal bell character (‘\a’) to all interactive prompts. This is especially useful if dependency resolution is taking a long time, and you want emerge to alert you when it is finished. If you use emerge -auAD world, emerge will courteously point out when it has finished calculating the graph.

--alert may be ‘y’ or ‘n’. ‘true’ and ‘false’ mean the same thing. Using --alert without an option is the same as using it with ‘y’. Try it with ‘emerge -aA portage’.

If your terminal emulator is set up to make ‘\a’ into a window manager urgency hint, move your cursor to a different window to get the effect.
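
You can check that urgency-hint setup independently of emerge by ringing the bell yourself:

```shell
# Emit one BEL ('\a') character; if your terminal emulator translates it
# into a window-manager urgency hint, focus another window first to see it.
printf '\a'
```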


Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Sometimes I forget how important communication is (December 10, 2014, 18:38 UTC)

Free software (and documentation) developers don’t always have all the time they want. Instead, they grab whatever time they have to do what they believe is the most productive – be it documentation editing, programming, updating ebuilds, SELinux policy improvements and what not. But they often don’t take the time to communicate. And communication is important.

For one, communication is needed to reach a larger audience than those who follow the commit history in whatever repository the work is being done. Yes, there are developers who follow each commit, but development isn’t just done for developers, it is also for end users. And end users deserve frequent updates and feedback. Be it through blog posts, Google+ posts, tweets or Instagram posts (well, I’m not sure how to communicate a software or documentation change through Instagram, but I’m sure people find lots of creative ways to do so), telling the broader world what has changed is important.

Perhaps a (silent or not) user was waiting for this change. Perhaps he or she is even trying to fix things himself/herself but is struggling with it, and would really benefit (time-wise) from a quick fix. Without communication about the change, (s)he does not know that no further attempts are needed, which actually reduces overall efficiency.

But that kind of communication is only one-way. Better still is to get feedback as well. In that sense, communication is just one part of the feedback loop – once developers receive feedback on what they are doing (or did recently), they might improve results even faster. With feedback loops, the wisdom of the crowd (in the positive sense) can be used to improve solutions beyond what the developer originally intended. Even a simple “cool” or “I like it” is useful information for a developer or contributor.

Still, I often forget to do it – or don’t have the time to focus on communication. And that’s bad. So, let me quickly state what things I forgot to communicate more broadly about:

  • A new developer joined the Gentoo ranks: Jason Zaman. Now developers join Gentoo more often than just once in a while, but Jason is one of my “recruits”. In a sense, he became a developer because I was tired of pulling his changes in and proxy-committing stuff. Of course, that’s only half the truth; he is also a very active contributor in other areas (and was already a maintainer for a few packages through the proxy-maintainer project) and is a tremendous help in the Gentoo Hardened project. So welcome onboard Jason (or perfinion as he calls himself online).
  • I’ve started copying the Gentoo handbook to the wiki. This is still an on-going project, but it was long overdue. There are many reasons why the move to the wiki is interesting; for me personally, it is to attract a larger audience to update the handbook. Although editing will be restricted to developers and trusted contributors (the document does contain the installation instructions and is a primary entry point for many users), that’s still a whole lot more people than the handful (one or two, actually) of developers who updated the handbook until now.
  • The SELinux userspace (2.4 release) is looking more stable; there are no specific regressions anymore (upstream is at release candidate 7), although I must admit that I have not deployed it on the majority of the test systems I maintain. Not due to fear, but mostly because I struggle a bit with available time, so I skip testing upgrades that are not strictly needed. I do plan on moving to 2.4 in a week or two.
  • A new version of the reference policy has been released. Gentoo quickly followed through (Jason did the honors of creating the ebuilds).

So, apologies for not communicating sooner, and I promise I’ll try to uplift the communication frequency.

December 08, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Playing Xiangqi with xboard (December 08, 2014, 20:06 UTC)

Introduction

Out of the box, xboard is expecting you to play western chess. It does support Xiangqi, but the default setup uses ugly western pieces and western square fields rather than lines:

You can make it look more traditional ..

.. but it is not really trivial to get there. Windows users have a WinBoard Xiangqi install as an option, but Linux users don’t.
You could select board theme “xiangqi” at

MENU / View / Board / # ORIENTAL THEMES / double click on "xiangqi"

but you would end up with broken board scaling (despite xboard 4.8 knowing how to do better) without further tuning.

To summarize, you have to teach xboard to

  1. play variant “xiangqi” rather than western chess,
  2. use different graphics, and
  3. get the board scaling right.

The following is a list of related options and how to get board scaling right by using a special symlink.

Prerequisites

  • xboard 4.8 or later (for proper scaling of the board image, see below)
  • a Xiangqi engine, e.g.
    • HoiXiangqi (of HoiChess, games-board/hoichess in Gentoo betagarden) or
    • MaxQi (of FairyMax, games-board/fairymax in Gentoo betagarden).

Command line view

Now some command line parameters need to be passed to xboard:

Tell engine to play chess variant “xiangqi”:

-variant xiangqi

Use images for drawing the board:

-useBoardTexture true

Use xqboard-9x10.png for drawing both light and dark fields of the board:

-liteBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png
-darkBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png

xqboard-9x10.png can be a symlink to xqboard.png. The “-9x10” part is for the filename parser introduced with xboard 4.8; it ensures proper board rendering at any window size. Without that naming (and with earlier versions), you need to be lucky to get proper scaling.

Suppress drawing squares (of default line-width 1px) around fields:

-overrideLineGap 0

Use SVG images of the traditional Xiangqi pieces:

-pieceImageDirectory /usr/share/games/xboard/themes/xiangqi

Suppress grayscale conversion of piece graphics applied by default:

-trueColors true

Use HoiXiangqi for an engine:

-firstChessProgram /usr/games/bin/hoixiangqi
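
Put together, all of the options above fit on a single command line (a sketch using the Gentoo paths listed above; adjust them if your install differs):

```shell
xboard -variant xiangqi \
       -useBoardTexture true \
       -liteBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png \
       -darkBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png \
       -overrideLineGap 0 \
       -pieceImageDirectory /usr/share/games/xboard/themes/xiangqi \
       -trueColors true \
       -firstChessProgram /usr/games/bin/hoixiangqi
```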

If you are running Gentoo, feel free to

sudo layman -a betagarden
sudo emerge -av games-board/xboard-xiangqi

to make that a little easier.

December 06, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Russia blocks access to GitHub (December 06, 2014, 15:03 UTC)

Russia Blacklists, Blocks GitHub Over Pages That Refer To Suicide

http://techcrunch.com/2014/12/03/github-russia/

December 04, 2014
Remi Cardona a.k.a. remi (homepage, bugs)

This week, I upgraded my media center/filer and after a reboot (new kernel), systemd was blocking on my btrfs mount. It’s a 3-partition RAID1 array (until upstream says RAID5 is safe), and systemd was somehow waiting on it, with the infamous red spinner. Adding noauto to fstab did allow the machine to boot properly, but the mount itself then silently failed: mount /my/mount/point would return 0, but nothing would show up in /proc/mounts nor in the mount point itself.

It turns out that the latest version of systemd reaches the local-fs target faster than earlier releases (at least that’s my theory), before the kernel has fully figured out which partitions belong to which array. So what I needed was to tell systemd to run btrfs device scan before attempting to mount local filesystems.

While searching for clues, I came across this stack exchange question which has the correct answer (though I did make a few changes). I’ll reproduce here the correct version for Gentoo, in case anyone runs into this:

$ cat /etc/systemd/system/local-fs-pre.target.wants/btrfs-dev-scan.service
[Unit]
Description=Btrfs scan devices
Before=local-fs-pre.target
DefaultDependencies=false

[Service]
Type=oneshot
ExecStart=/sbin/btrfs device scan

[Install]
WantedBy=local-fs-pre.target

I wasn’t sure at first why “local-fs-pre.target” needs to be specified three times (twice inside the file, once in the path), but each occurrence has its own role: Before= only orders the service ahead of the target, the WantedBy= line in [Install] is what systemctl enable would use to create the .wants symlink, and placing the file directly in the .wants directory achieves that same pull-in by hand. Either way, it does the trick: systemd waits for btrfs’s device scan to return before mounting file systems. Maybe btrfs-progs should ship such a file…

As a side note, while digging for information, I found out that systemd actually reads the fstab and translates it into unit files at boot time. The generated files are located in /run/systemd/generator/.

One final piece of information: if I had taken the time to read journalctl -b carefully, I would have saved hours. If you have any issues with systemd, read the damn journal.

I’ll take the opportunity to thank the kind folks of #btrfs on Freenode who promptly helped me.

That’s it for tonight, thanks for reading.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Shrinking stage3 (December 04, 2014, 11:23 UTC)

Today I wanted to fetch a new stage3. And I had many disappoint: Sooo big. 200MB for a stage3?
So I started cutting things up and filed some bugs, and the results are nice:
Starting point: stage3-amd64-20141127

Original:   207758877 bytes / 199M
Shrinked-1: 165755734 bytes / 159M
Shrinked-2: 161092189 bytes / 154M
Shrinked-3: 112635928 bytes / 108M
Shrinked-4: 109833196 bytes / 105M
What changed?
First step: Remove two of the three Python implementations. I think having only python2 is enough for a start - that's a bit over 150MB unpacked, just gone.
Changes needed: PYTHON_TARGETS and USE="-python3" during the build

Second: Build pkgconfig with USE="internal-glib". This removes the dependency on glib and its mad ball of dependencies for another ~40MB on-disk.
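
In Portage terms, those first two steps amount to roughly the following (a sketch; the exact PYTHON_TARGETS value is an assumption for a late-2014 tree):

```
# /etc/portage/make.conf - build against Python 2 only
PYTHON_TARGETS="python2_7"
USE="${USE} -python3"

# /etc/portage/package.use/pkgconfig - drop the glib dependency
dev-util/pkgconfig internal-glib
```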

Repacking that gives Shrinked-1
Then I noticed that Python installs its tests unconditionally. They are not needed at runtime and barely needed at build time; dropping them saves another 28MB. That's Shrinked-2.

Shrinked-3 is Shrinked-1 with xz instead of bzip2; Shrinked-4 is Shrinked-2 with xz instead of bzip2.

So there, within a few minutes I've halved the size of the tarball by pruning unneeded bits ...

December 02, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Track your issues (December 02, 2014, 11:45 UTC)

Issue Trackers

If you are not aware of a problem, you cannot fix it.

Having full awareness of the issues and managing them is the key to success for any kind of project (not just software).

For an open-source project it is essential that the issue tracker focuses on at least 3 areas:

  • Ease of use: you get reports mainly from casual users; they must spend the least amount of time understanding the tool and providing the information.
  • Loudness: it must make problems easy to spot.
  • Data mining: it should provide tools to query details, aggregate bugs and manipulate them.

What’s available

By now I have tried many issue trackers across different projects; sadly almost none fits the bill. They are usually the opposite of what you want: limited, cumbersome, hard to configure, and horrible to use, whether you are filing bugs or managing them.

Bugzilla

It is by far the least bad: it has plugins to provide near-instant access thanks to Mozilla Persona, a rich RPC system that could be leveraged for IRC notifiers or side-site statistics, and importing/exporting data is almost there. As we know in Gentoo, it requires some deep manipulation, and if there is nobody around to do that you can get fallouts like this, where a single stubborn (and probably distracted) developer (vapier) manages to spoil the result of another's goodwill and makes the project overall more frail.
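As a sketch of that RPC angle: Bugzilla's WebService API exposes methods such as Bug.search over XML-RPC, so a side-site statistics script or IRC notifier can be a few lines of Python. The URL and product name below are placeholders, not a real instance:

```python
import xmlrpc.client

def open_bugs(url: str, product: str) -> list:
    """Return unresolved bugs for a product from a Bugzilla XML-RPC
    endpoint (typically https://<host>/xmlrpc.cgi)."""
    proxy = xmlrpc.client.ServerProxy(url)
    # an empty resolution means the bug is still open
    result = proxy.Bug.search({"product": product, "resolution": ""})
    return result["bugs"]
```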

Mantis

It still has too many confusing options, but its default splash views are a boon if you are wondering about the status of your project. Sadly, no OpenID/Persona/single-sign-on integration.

Redmine/Trac

Usually not good enough on the reporting side and, even though they are much simpler than Bugzilla, still not good for the untrained user. They integrate with the source repository view and the knowledge base (aka the wiki), so they can be a good starting point for small organizations.

Github/GitLab/Gogs

They have a more encompassing approach than Redmine and Trac. Their issue tracker component is too simple in some cases (GitHub not even supporting attachments, and Gogs not really managing tags yet) or a little too rough (no bug dependencies). But, with its immediate UI and label-oriented approach, it is already pretty good for a large share of projects. Sadly not Libav: we do need proper attachments.

RT

Request Tracker is overwhelming. No other words. Do not use it if you do not need to. It is too complex to configure on the admin side and too annoying to use on the developer side. For users the interface is usually a mailbox, so there you can't go wrong. Perfect if you have to manage a huge number of paying customers and want detailed billing and other extremely advanced features.

Brimir

New kid on the block, it is quite simple; way too simple, in fact. Its mail rendering makes it not really great yet, but it is a nice concept waiting to bloom. (Will it?)

Suggestion welcome

Do you know of any better open-source issue tracker? Please comment below =)

November 30, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
The Fuzzing Project (November 30, 2014, 12:43 UTC)

This is already a few days old but I haven't announced it here yet. I recently started a little project to improve the state of security in free software apps and libraries:

The Fuzzing Project

This was preceded by a couple of discussions on the mailing list oss-security and findings that basic Unix/Linux tools like strings or less could pose a security risk. Also the availability of powerful tools like Address Sanitizer and american fuzzy lop makes fuzzing easier than ever before.

Fuzzing is a simple and powerful strategy for finding bugs in software. It works by feeding software a large number of malformed input files, usually generated by taking a small, valid file as a starting point. The sad state of things is that for a large number of software projects you can find memory violation bugs within seconds with common fuzzing tools. The goal of the Fuzzing Project is to change that. At its core is currently a list of free software projects and their state of fuzzing robustness. What should follow are easy tutorials on getting started with fuzzing, a collection of small input file samples, and probably more ways to get involved (I am thinking about moving the page's source code to GitHub to allow pull requests). My own fuzzing has already turned up a number of issues, including a security bug in GnuPG.
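The core move is simple enough to sketch in a few lines. This toy mutator (names and parameters are mine, nothing from the Fuzzing Project) just corrupts random bytes of a valid sample, which is the naive version of what american fuzzy lop does far more cleverly:

```python
import random

def mutate(data: bytes, n_flips: int = 8, seed: int = 0) -> bytes:
    """Return a copy of data with a few random byte substitutions --
    the basic move of mutational fuzzing: start from a small valid
    sample and corrupt it slightly."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        if not buf:
            break
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

# sketch of the loop: feed each mutant to the target, watch for crashes
# for i in range(10000):
#     run_target(mutate(sample, seed=i))
```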

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Cleanup chores (November 30, 2014, 06:19 UTC)

These last few days in San Francisco I was able to at least do some of the work I set myself out to do in my spare time, mostly on my blog. The week turned out to be much more full of things to do than I planned originally, so it turned out that I did not go through with all my targets, but at least a few were accomplished.

Namely, all my blog archives now consistently link to the HTTPS versions of the posts, as well as the HTTPS version of my website – which is slowly withering to leave more space to the log – and of Autotools Mythbuster on its new home domain. This sounds like an easy task but it turned out to be slightly more involved than I was expecting, among other things because at some point I used protocol-relative URLs. I even fixed all the links that pointed to the extremely old Planet Gentoo Blog, so that the cross-references are now working, even though probably nobody will read those posts ever again. I also made all the blog comments coming from me consistent by using the same email address (rather than three different ones) and the same website. Finally, I got the list of 404s as seen by GoogleBot and made sure that the links that were broken when posted out there pointed to the right posts.

But there have been a few more things that needed some housekeeping, related to account churn. For context, this past Friday was my birthday — and I received an interesting email from a very old games forum that I registered on when I was helping out with the NoX-Wizard emulator: a "happy birthday" message. I then remembered that most vBulletin/phpBB installs send their greetings to registered users who opted in to provide their birthdate (and sometimes you were forced to due to COPPA). Then, since there have been some rumors of a breach at an Italian provider which I used when I originally went online, I decided to go and change passwords – once again, thanks LastPass – and found there two more similar messages for other forums, which I probably have not visited in almost ten years.

You could think that there is no reason to go and try to restore those accounts to life — and I would almost agree with you if it wasn't that they pose a security risk the moment they get breached. And it should be obvious by now that breaching lots of small sites can be just as profitable as breaching a single big site, and much easier. Those forums most likely still had my original, absolutely insecure passwords, so I went and regenerated them.

I wonder how many more accounts I forgot about are out there — I know for sure there are some that were attached to my BerliOS email address, which is now long gone. The other day, using Baidu to look for myself, I was reminded that I had a Last.FM account, which I have now regained access to. At least when using a password manager it's more difficult to forget about accounts altogether, as they are stored there.

Anyway, for the moment this is enough cleanup, feel free to report if there are other things that I should probably work on, non-Gentoo related (Autotools Mythbuster is due an update but I have not had time to go through that yet); the slower Amazon ad on the blog will also be fixed, promised!

November 29, 2014

I recently got myself a new server that is, amongst other things, intended for kvm/qemu virtual machines that I administer using virt-manager. As most of the guest VMs will be running Gentoo Linux, and the installation procedure is nice and command-line based, it enables quick installation of an up-to-date system without using an image, by utilizing a few simple bash scripts that require a minimum of user interaction during install in order to get a base OS.

It goes like this: After booting the Gentoo live-cd we reset the root password to get a known password and start sshd to allow me to upload the script files.

passwd
/etc/init.d/sshd start

Once this is done we upload the script files using scp:

scp *.sh root@192.168.0.62:/

At this stage we edit the config.sh file using nano that is part of the live CD:

nano /config.sh

I rarely change much in the config file, but other users will naturally want to adjust this to their own environment. As for the drive layout I normally default it to
xda1: 5MB – spare for MBR
xda2: 100MB – /boot
xda3: 4096MB – swap
xda4: residual – /

xda is used in place of vda (if Virtio) or sda (if SATA) in this case. The underlying drive is an LVM2 logical volume created using

lvcreate -L 125G -n myVM vg0

A little trick for using the LVM volumes directly in virt-manager is to create a storage pool for the directory of the volume group (/dev/vg0), which allows me to allocate the logical volumes directly as virtio disks.

Attempting to run /host.sh without the drives set up will naturally abort with a warning about the missing drive configuration. Once the drives are configured (I normally use cfdisk /dev/xda) it is time to run:

/host.sh

The first thing that happens is that the filesystems are created (ext4) and a stage3 is downloaded and extracted, along with setting up the necessary mounts to enter the chroot. No further interaction is necessary until we enter the chroot using:

chroot /mnt/gentoo /bin/bash
/chroot.sh

At this point the rest of the install instructions are run, installing a regular gentoo-sources kernel with grub2 and setting up syslog-ng and cronie. Additionally I use Monkeysphere to set up the public keys for logging into the system as my user, so this is automated as well, along with adding the user to the wheel group (the latter two steps being optional in the config file; if you haven't looked into Monkeysphere before, I recommend doing so).
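For the curious, the host-side half of such a flow might look roughly like the sketch below. This is not the author's actual host.sh; the function names, device argument and stage3 handling are illustrative, and everything is wrapped in functions so nothing destructive runs when the file is sourced:

```shell
# Sketch of the host-side steps described above: make filesystems,
# fetch and unpack a stage3, set up the chroot mounts.
prepare_disks() {
    local drive=$1                      # e.g. /dev/vda
    mkfs.ext4 "${drive}2"               # /boot
    mkswap "${drive}3" && swapon "${drive}3"
    mkfs.ext4 "${drive}4"               # /
    mount "${drive}4" /mnt/gentoo
    mkdir -p /mnt/gentoo/boot
    mount "${drive}2" /mnt/gentoo/boot
}

prepare_chroot() {
    local stage3_url=$1                 # placeholder URL
    cd /mnt/gentoo || return 1
    wget "$stage3_url" -O stage3.tar.bz2
    tar xjpf stage3.tar.bz2
    # mounts needed before chrooting
    mount -t proc proc /mnt/gentoo/proc
    mount --rbind /dev /mnt/gentoo/dev
    mount --rbind /sys /mnt/gentoo/sys
    cp -L /etc/resolv.conf /mnt/gentoo/etc/
}
```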

Once this completes it is just a matter of running

exit

to get out of the chroot, and

reboot

and once it comes back up we have a working base install of a VM. From here I can start making any adjustments for the service the VM is supposed to provide.

As for the actual scripts:
config.sh
host.sh
chroot.sh

November 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The end of an era, the end of the tinderbox (November 26, 2014, 03:04 UTC)

I'm partly sad, but for the most part this is a weight off my shoulders, so I can't say I'm not at least in part joyful about it, even though the context in which this is happening is not exactly what I expected.

I turned off the Gentoo tinderbox, never to come back. The S3 storage of logs is still running, but I've asked Ian to see if he can attach everything at his pace, so I can turn off the account and be done with it.

Why did this happen? Well, it's a long story. I had already stopped running it for a few months because I got tired of Mike behaving like a child, as I already reported back in 2012, closing my bugs because the logs are linked (from S3) rather than attached. I already made my position clear that it's a silly distinction, as the logs will not disappear into nowhere (indeed I'll keep the S3 bucket for them running until they are all attached to Bugzilla), but as he keeps insisting that it's "trivial" to change the behaviour of the whole pipeline, I decided to give up.

Yes, it's only one developer, and yes, lots of other developers took my side (thanks guys!), but it's still aggravating to have somebody who can do whatever he likes without reporting to anybody, ignoring Council resolutions, QA (when I was the lead) and essentially using Gentoo as his personal playground. And the fact that only two people (Michał and Julian) have been pushing for a proper resolution is a bit disappointing.

I know it might feel like I'm taking my toys and going home — well, that's what I'm doing. The tinderbox has been draining my time (a little) and my money (quite a bit more), but those I was willing to part with — having my motivation drained by assholes in the project was not in the plans.

In the past six years that I've been working on this particular project, things evolved:

  • Originally, it was a simple chroot with a looping emerge, inspected with grep and Emacs, running on my desktop and intended to catch --as-needed failures. It went through lots of disks, and got me off XFS for good due to kernel panics.
  • It was moved to LXC, which is why the package entered the Gentoo tree, together with the OpenRC support and the first few crude hacks.
  • When I started spending time in Los Angeles for a customer, Yamato under my desk got replaced with Excelsior, which was crowdfunded and hosted, for two years straight, by my customer at the time.
  • This is where the rewrite happened, from attaching logs (which I could earlier do with more or less ease, thanks to NFS) to storing them away and linking instead. This had mostly to do with the ability to remote-manage the tinderbox.
  • This year, since I no longer work for the company in Los Angeles, and instead I work in Dublin for a completely different company, I decided Excelsior was better off on a personal space, and rented a full 42 unit cabinet with Hurricane Electric in Fremont, where the server is still running as I type this.

You can see that it's not that I'm trying to avoid spending time engineering solutions. It's just that I feel that what Mike is asking is unreasonable, and the way he's asking it makes it unbearable. Especially when he feigns to care about my expenses — as I noted in the previously linked post, S3 is dirt cheap, and indeed it now comes down to $1/month given to Amazon for the logs storage and access, compared to $600/month to rent the cabinet at Hurricane.

Yes, it's true that the server is not doing only tinderboxing – it also is running some fate instances, and I have been using it as a development server for my own projects, mostly open-source ones – but that's the original use for it, and if it wasn't for it I wouldn't be paying so much to rent a cabinet, I'd be renting a single dedicated server off, say, Hetzner.

So here we go, the end of the era of my tinderbox. Patrick and Michael are still continuing their efforts, so it's not like Gentoo is left without integration testing, but I'm afraid it'll be harder for at least some of the maintainers who leveraged the tinderbox heavily in the past. My contract with Hurricane expires in April; at that point I'll get the hardware out of the cabinet, and will decide what to do with it — it's possible I'll donate the server (minus hard drives) to the Gentoo Foundation or someone else who can use it.

My involvement in Gentoo might also suffer from this; I hopefully will be dropping one of the servers I maintain off the net pretty soon, which will be one less system to build packages for, but I still have a few to take care of. For the moment I'm taking a break: I'll soon send an email that it's open season on my packages; I locked my bugzilla account already to avoid providing harsher responses in the bug linked at the top of this post.

November 25, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Pre-made Builds (November 25, 2014, 23:32 UTC)

I have been maintaining the win32 builds for years; about two weeks ago I had the n-th failure with the box hosting them, and since I was busy with work I could not fix it until this week.

Top-IX graciously provided a better-sized system and I'm almost done reconfiguring it. Sadly, setting up the host and reconfiguring it is quite a time-consuming task, and not many (if any) show appreciation for it. (Please do cheer for the other people taking care of our other pieces of infrastructure from time to time.)

The new host is builds.libav.org, since it will host builds that are slightly more annoying to get. It will probably start with just builds for the releases and then, if there is interest (and volunteers), be extended to nightly builds.

Changes

More Platforms

The first and most apparent is that we'll try to cover more platforms: soon I'll start baking some Android builds, and then hopefully Apple-oriented stuff will appear in some form.

Building Libav itself is quite simple and hopefully documented well enough, and our build system is quite easy to use for cross-building.

Getting some of the external dependencies built, on the other hand, is quite daunting. gnutls/nettle and x265 are currently missing, since their build systems are terrible for cross-compiling and my spare time didn't allow me to get that done within the deadline I set for myself.

Possibly in a few weeks we will get at least framework packaging for iOS and Android. Volunteers to help are more than welcome.

New theme

The new theme comes from switching to nginx; thanks to fancy_index it is now arguably nicer.
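For reference, ngx-fancyindex listings are enabled with a handful of directives; a minimal sketch (the header/footer paths are illustrative) looks like:

```nginx
location / {
    fancyindex on;               # themed autoindex from ngx-fancyindex
    fancyindex_exact_size off;   # human-readable sizes
    fancyindex_header /header.html;
    fancyindex_footer /footer.html;
}
```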

More builds

The original builds tried to include almost everything deemed useful, and thus the whole thing was distributed under the GPL. Since I noticed some people might not really need all of that, or might just want less functionality, I added an LGPL-distributable set. If somebody would find a version without any dependencies useful, please drop me a line.

Thanks

Thanks again to Top-IX for the support, and to Gabriele in particular for setting up the new system while he was at a conference in London.

Thanks to Sean and Reinhart for helping with the continuous integration system.

Enjoy the new builds!

Post Scriptum: tokens of appreciation in the form of drinks or just a thank-you are welcome: writing code is fun, doing sysadmin tasks is not.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Debit and credit cards in the USA (November 25, 2014, 08:10 UTC)

Credit Cards

I know it's been a long time now, maybe a year or two, but I still remember clearly that after one of the many card data breaches in the US – maybe Target's – I ended up exchanging comments with some Americans on the difference between debit cards and credit cards. Turns out that for people who never had to face that choice before, it's not obvious why anybody would pay with a debit card at a store such as Target, rather than with a credit one. So it might be worth writing it up here, given that I talked about credit cards before. But be warned that this is pretty much limited to the United States, so if you're not interested, feel free to skip.

So first of all, what's the ruckus about credit versus debit? The main difference between the two, from a user's point of view, is the protection: in the case of fraudulent transactions on a credit card, most issuers will reverse the charge and block the card without costing the consumer money — who's going to eat that loss depends on a number of different factors including, as of recently, whether the bank issued an EMV card, and whether the point-of-sale used the chip to execute the transaction. On the other hand, fraudulent charges on a debit card are usually a loss for the cardholder.

So generally speaking, if you have a choice, you should pay with a credit card. Which is generally not what vendors want, as they would prefer you pay with a debit card (it costs them less in fees). As much as I feel for the vendors – I had my own company, remember? – the inherent risk of breaches and the amount of PoS malware makes it sadly a consumer protection choice.

But the relative ease of getting debit and credit cards is also a factor. Getting a debit card is trivial: you walk into a branch, ask them to open a new account, give them enough information about yourself, and they will mail you a debit card. They won't look into your financial data – including your credit score – because they are not giving you credit; they are just giving you a means to access the funds you deposited at their bank.

This, among other things, means that you can get a card number in the US without being a resident: if you're a non-resident in the US, but you have a permanent address of some kind, such as an office or a friend's, you can just enter a branch and open an account with a US bank. They'll need your documents (passport, and another credit/debit card with your name, or another non-photo ID), and a proof of address in your country of residence, but otherwise it's usually a quite pleasant experience.

To provide more information on the topic: since you're not a resident and you're not working illegally in the US, you're not receiving a fixed paycheck on your US account, which means that most fee-waiving programs that count on you receiving a given direct credit per month won't apply to you. Instead you should look into fee-waiving by the deposited amount — Bank of the West has a minimum deposit of $1000, which is the lowest I have seen, but when I asked them they tried to send me from Sunnyvale to San Francisco to open an account; the Chase next door was happy to have me as a client, even though their minimum deposit is $1500.

If you plan on transferring money often between the two accounts, you probably want to use a service like Transferwise, that converts currencies and transfer funds between USD, EUR, GBP and other currencies at a much cheaper rate than most banks, and definitely much cheaper than the banks that I have.

But things get complicated if you want a credit card, even more so if you want a rewards credit card, such as Amazon's, or any airline's or hotel chain's — which generally wouldn't be very useful to foreigners, as most countries, with the exception of Ireland, have some such card that you can get locally, American Express being the worst case.

To get the most common credit cards in the US you need to be in the credit system somehow; you probably want to have some credit history and a rating too. If you're a resident in the US, they will look you up through the Social Security Number (SSN), but it's more complicated for non-residents (unless they were at some point residents, of course).

In either case, the simplest form of credit card you can request is a secured credit card — which is essentially a glorified debit card: you pay the bank an amount, and they then make that amount available to you as a credit line. The main difference between this and a debit card is that it does have the protections of a credit card. It also allows you to build up credit score, which is why it's usually the choice of card for immigrants and young people who don't have a history at all. They generally don't come with any kind of rewards system.

Immigrants here include techies, by the way. Even when working for big companies in the Silicon Valley, the lack of a credit history means you have to build it up from scratch. I know some of my American acquaintances were surprised that it's not as easy as showing your employment information to get credit.

Not all banks provide secured credit cards though. In particular, when I asked Chase just the other day, they told me to try with the nearby US Bank or Wells Fargo – both walking distance – and I seem to recall that Bank of America does it as well. The idea is that you'll use a secured card for at least a year to build up positive history, and then get a proper, better credit card after that. And that's why you need an SSN to correlate them.

What I said up to now would imply that you have no option to get a credit card if you, like me, are just a visitor who happens to be in the States every few months. That is not strictly true: the requirement for the SSN is a requirement for an identifier that can be reported across multiple banks and with the IRS; there is another identifier you can use for that, and it's the ITIN. This non-resident identification number has some requirements attached, and it's not exactly trivial to get — I have unfortunately no experience with getting one to retell yet. It is usually assigned by filing a US tax return, which is not something you want (or need) to do if you're a foreigner. Especially because it usually requires a good reason, such as having an ebook published and having Amazon withhold 30% of the royalties for US taxes, when a treaty exists between the US and your country of residence.

I do indeed plan to look into how to declare my royalties properly next year to Ireland, and file a US tax return to get the (cents) back — if nothing else to request an ITIN, and with it a rewards credit card. After all, a lot of the money I spend ends up being spent in the US, so why not?

Well, to be honest there are a bunch of reasons why not. You risk getting audited by either or both of your country of residence and the USA — and for the USA there is no way to escape the IRS, or they wouldn't consider only two things certain. You have paperwork to file, again for both countries, which might be unwieldy or complex (I have yet to look at the paperwork to file with Irish authorities, they are usually straightforward). And you end up on the currency market; right now between EUR and USD it's pretty stable and doable, but if you don't keep an eye out it's easy to screw it up and ending up wasting money just on the exchange. So it's still an investment in time.

Myself, I still think it's likely it's going to be a good idea to try to get an ITIN and a proper credit card, since I come to the States every few months between conferences and work travel. But I won't make any suggestions to anybody else. Your money, your choice.

November 24, 2014
Michal Hrusecky a.k.a. miska (homepage, bugs)
Me, Raspberry Pi and old TV (November 24, 2014, 16:30 UTC)

In September I visited Akademy in Brno. It was close by and sounded interesting (and it was). There I met Bruno and Francoise and tried to help them a little with the openSUSE booth they organized. It was cool; I wasn't sure how many people I would know there, but I met Cornelius on my way to the venue and when we arrived there was already an openSUSE booth – a really great surprise :-) But getting to the point of this post (which is not Akademy): there was a lottery where people could win a Raspberry Pi. I already have a better ARM board at home, but as I depend on that one as a home server I can't play with it much anymore, so I joined anyway. And to my surprise I won! As I had to leave before the draw, I have to thank Bruno and Francoise for fetching it and sending it to me – big thanks to them for everything!

I played with it, found out that getting video output running is super easy, openSUSE 13.2 runs there nicely, so I decided to put it into one specific use right now :-)

Getting smart TV

I live in a rented flat, which was already equipped with a TV when I rented it. It is an old CRT one. One of the advantages of the Raspberry Pi over most of the boards out there is that it supports video output even for such legacy technologies. So let's make some use of it and convert a dumb CRT into something that can play movies and streams available online.

The first obvious thing I tried was mpv. I didn't manage to get framebuffer output working, didn't manage to get wayland working, resorted to X11, and found out that it can't play movies smoothly. I played with some options, frame dropping and such, but it didn't help. So I started googling how to use hardware acceleration. And I found one disturbing piece of information.

There is hardware acceleration in the Pi, but some of the codecs are locked out and you have to pay a license fee to unlock functionality your device was shipped with. That sounds crazy. You get a device where parts of it are intentionally locked out so they can ask you for more money to let you use hardware you already bought. I understand that the problem isn't the foundation selling Raspberries, but legal protection against stupid patent laws, mainly in the US; nevertheless, it is silly. Luckily, the h264 codec is enabled by default, and the codecs you have to unlock this way are only MPEG-2, which almost no one uses nowadays, and some VC-1 I had never heard of before and doubt anyone ever used. So to turn my Raspberry into a smart TV, I didn't have to give in to the patent trolls' extortion.

So after a little ranting, how do I use the hardware acceleration? I was searching for some vaapi/vdpau abstraction, but haven't found any. Luckily it didn't matter much, because I found something maybe even better – OMXPlayer. It is a standalone video player that supports hardware-accelerated video playback and works directly with the framebuffer, so no need for X anymore. The tricky part is that there is an upstream which looks dead and a fork that looks pretty much alive. I found that out after a week of using the original upstream, when I started searching for solutions to some of my problems. So don't bother with upstream, use the fork directly. Using this I'm able to play h264 movies and streams (like TV) on my old CRT TV.

Controlling it

Controlling the TV via ssh is fun, but not that user friendly. So I decided to cook up a proof of concept of a remote web UI. I know I could use XBMC and would probably be better off, but I want my Pi to be idle when it is idle, and how hard could it be to cook something up, right? So I cooked up something really terrible but working :-) It is just a bunch of cgi scripts that need to be run from a webserver under a user with enough privileges. And it has plenty of disadvantages (like UX, speed and security), but it works for me, although I will probably have to spend some time on usability soon because it is starting to hurt even me :-)

Few last remarks

Yes, it is possible to make a Raspberry Pi into a TV player running openSUSE without having to resort to XBMC. But I still believe that, no matter what you want, there is better hardware available for a similar price. If you are interested in multimedia, take a look at Matchstick. If in a home server, there are plenty of Allwinner boards around, like the Banana Pi, Cubie Board (my home server) or Cubie Truck (the home server I would choose now).

November 23, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
DBus, FreeDesktop, and lots of madness (November 23, 2014, 08:26 UTC)

For various reasons I've spent a bit of time dissecting how dbus is supposed to work. It's a rather funny game, but it is confusing to see people trying to use these things. It starts out quite hilariously; the official documentation (version 0.25) says:

The D-Bus protocol is frozen (only compatible extensions are allowed) as of November 8, 2006. However, this specification could still use a fair bit of work to make interoperable reimplementation possible without reference to the D-Bus reference implementation. Thus, this specification is not marked 1.0
Ahem. After over a decade (first release in Sept. 2003!!) people still haven't documented what it actually does.
Allrighty then!
So we start reading:
"D-Bus is low-overhead because it uses a binary protocol"
then
"Immediately after connecting to the server, the client must send a single nul byte."
followed by
"A nul byte in any context other than the initial byte is an error; the protocol is ASCII-only."
Mmmh. Hmmm. Whut?
So anyway, let's not get confused ... we continue:
The string-like types are basic types with a variable length. The value of any string-like type is conceptually 0 or more Unicode codepoints encoded in UTF-8, none of which may be U+0000. The UTF-8 text must be validated strictly: in particular, it must not contain overlong sequences or codepoints above U+10FFFF. Since D-Bus Specification version 0.21, in accordance with Unicode Corrigendum #9, the "noncharacters" U+FDD0..U+FDEF, U+nFFFE and U+nFFFF are allowed in UTF-8 strings (but note that older versions of D-Bus rejected these noncharacters).
So there seems to be some confusion what things like "binary" mean, and UTF-8 seems to be quite challenging too, but no worry: At least we are endian-proof!
A block of bytes has an associated byte order. The byte order has to be discovered in some way; for D-Bus messages, the byte order is part of the message header as described in the section called “Message Format”. For now, assume that the byte order is known to be either little endian or big endian.
Hmm? Why not just define network byte order and be happy? Well ... we're even smarterer:
The signature of the header is: "yyyyuua(yv)"
Ok, so ...
1st BYTE Endianness flag; ASCII 'l' for little-endian or ASCII 'B' for big-endian. Both header and body are in this endianness.
We actually waste a BYTE on each message to encode endianness, because ... uhm ... we run on ... have been ... I don't get it. And why a full byte (with a random ASCII mnemonic) instead of a bit? The whole 32-bit bytemash at the beginning of the header could be collapsed into 8 bits if it had been designed with any care. Of course performance will look silly if you use an ASCII protocol over the wire with generous padding ... so let's kdbus because performance. What the? Just reading this "spec" makes me want to get more drunk.
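
For illustration, here is how the fixed "yyyyuu" part of that header (everything before the a(yv) field array) could be decoded in C. This is my own sketch based on the spec text quoted above; the struct and function names are my invention, not anything from libdbus:

```c
#include <stddef.h>
#include <stdint.h>

/* Fixed part of a D-Bus message header, signature "yyyyuu":
   endianness, message type, flags, protocol version, body length, serial.
   (The a(yv) header-field array follows after these 12 bytes.) */
typedef struct {
    uint8_t  endianness;    /* 'l' for little-endian, 'B' for big-endian */
    uint8_t  msg_type;      /* 1=method call, 2=return, 3=error, 4=signal */
    uint8_t  flags;
    uint8_t  proto_version;
    uint32_t body_length;
    uint32_t serial;
} DBusFixedHeader;

/* Decode a 32-bit value honoring the per-message endianness byte. */
static uint32_t read_u32(const uint8_t *p, uint8_t endianness)
{
    if (endianness == 'l')
        return p[0] | p[1] << 8 | p[2] << 16 | (uint32_t)p[3] << 24;
    return (uint32_t)p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
}

/* Returns 0 on success, -1 if the buffer cannot start a D-Bus message. */
int parse_fixed_header(const uint8_t *buf, size_t len, DBusFixedHeader *h)
{
    if (len < 12)
        return -1;
    if (buf[0] != 'l' && buf[0] != 'B')
        return -1;
    h->endianness    = buf[0];
    h->msg_type      = buf[1];
    h->flags         = buf[2];
    h->proto_version = buf[3];
    h->body_length   = read_u32(buf + 4, buf[0]);
    h->serial        = read_u32(buf + 8, buf[0]);
    return 0;
}
```

Note how the decoder has to branch on the endianness byte of every single message, which is exactly the complaint above.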

Here's a radical idea: Fix the wire protocol to be more sane, then fix the configuration format to not be hilarious XML madness (which even the spec admits is bad, so what were the authors thinking?)
But enough about this idiocy, let's go up the stack one layer. The freedesktop wiki uses gdbus output as an API dump (would be boring if it were an obvious format), so we have a look at it:
      @org.freedesktop.systemd1.Privileged("true")
      UnlockSessions();
Looking through the man page there's no documentation of what a line beginning with "@" means. Because it's obvious!
So we read through the gdbus sourcecode and start crying (thanks glib, I really needed to be reminded that there are worse coders than me). And finally we can correlate it with "Annotations".

Back to the spec:
Method, interface, property, and signal elements may have "annotations", which are generic key/value pairs of metadata. They are similar conceptually to Java's annotations and C# attributes.
I have no idea what that means, but I guess as the name implies it's just a textual hint for the developer. Or not? Great to see a specification not define its own terms.

So whatever I read into this fuzzy text is most likely very wrong, and someone should define these items in the spec in a way that can be understood before you have understood the spec. Less tautological circularity and all that ...
Let's assume then that we can ignore annotations for now ... here's our next funny:
The output of gdbus is not stable: the order of functions is random, so if you were to write a dbus listener based on the documentation, it would be very hard to compare the wiki API dump 'documentation' with your own output.
Oh great ... sigh. Grumble. Let's just do fuzzy matching then. Kinda looks similar, so that must be good enough.
(Radical thought: Shouldn't a specification be a bit less ambiguous and maybe more precise?)

Anyway. Ahem. Let's just assume we figure out a way to interact with dbus that is tolerable. Now we need to figure out what the dbus calls are supposed to do. Just for fun, we read the logind 'documentation':
      @org.freedesktop.systemd1.Privileged("true")
      CreateSession(in  u arg_0,
                    in  u arg_1,
                    in  s arg_2,
                    in  s arg_3,
                    in  s arg_4,
                    in  s arg_5,
                    in  u arg_6,
                    in  s arg_7,
                    in  s arg_8,
                    in  b arg_9,
                    in  s arg_10,
                    in  s arg_11,
                    in  a(sv) arg_12,
                    out s arg_13,
                    out o arg_14,
                    out s arg_15,
                    out h arg_16,
                    out u arg_17,
                    out s arg_18,
                    out u arg_19,
                    out b arg_20);
with the details being:
CreateSession() and ReleaseSession() may be used to open or close login sessions. These calls should never be invoked directly by clients. Creating/closing sessions is exclusively the job of PAM and its pam_systemd module.
*cough*
*nudge nudge wink wink*
Zero of twenty (!) parameters are defined in the 'documentation', and the same document says that it's an internal function that accidentally ended up in the public API (instead of, like, being, ah, a private API in a different namespace?)
Since it's actively undefined, and not to be used, a valid action on calling it would be to shut down the machine.

Dogbammit. What kind of code barf is this? Oh well. Let's try to figure out the other functions -
LockSession() asks the session with the specified ID to activate the screen lock.
And then we look at the sourcecode to learn that it actually just calls:
session_send_lock(session, streq(sd_bus_message_get_member(message), "LockSession"));
Which calls a function somewhere else which then does:
        return sd_bus_emit_signal(
                        s->manager->bus,
                        p,
                        "org.freedesktop.login1.Session",
                        lock ? "Lock" : "Unlock",
                        NULL);
So in the end it internally sends a dbus message to a different part of itself, and that sends a dbus signal that "everyone" is supposed to listen to.
And the documentation doesn't define what is supposed to happen, instead it speaks in useless general terms.

      PowerOff(in  b arg_0);
      Reboot(in  b arg_0);
      Suspend(in  b arg_0);
      Hibernate(in  b arg_0);
      HybridSleep(in  b arg_0);
Here we have a new API call for each flavour of power management. And there's this stuff:
The main purpose of these calls is that they enforce PolicyKit policy
And I have absolutely no idea about the mechanism(s) involved. Do I need to query PK myself? How does the dbus API know? Oh well, just read more code, and interpret it how you think it might work. Must be good enough.

While this exercise has been quite educational in many ways, I am surprised that this undocumented early-alpha quality code base is used for anything serious. Many concepts are either not defined, or defined by the behaviour of the implementation. The APIs are ad-hoc without any obvious structure, partially redundant (what's the difference between Terminate and Kill?), and not documented in a way that allows a reimplementation.
If this is the future I'll stay firmly stuck in the past ...

November 22, 2014
Jeremy Olexa a.k.a. darkside (homepage, bugs)
Changing Gears, 1 Year after RTW trip (November 22, 2014, 18:45 UTC)

About a year ago, I was writing about my Round the World trip winding down and returning to the workforce, my career. I’ve gone through a whole bunch of ‘things’ in the past year which mostly remind me that 1) life is short and random, and 2) I can do anything I want to.

On the first topic, I did a number on my spine, compressing my L1 and L2 vertebrae. After about a three-month resting period I’m still recovering from that one, and probably will be for the rest of my life. Oh, I broke my wrist too. All this from a little skydiving accident, whose details I’ll spare you. I’m back at the gym, eating well, and really inspired to build myself better than I was. I count my lucky stars that I’m able to make a full recovery. Ergo, life is short and random. However, it really opened up my viewpoints on many of life’s topics and made me realize all the calculated risks that humans take every day.

Speaking of risks, enter new job…

A year ago, I was writing about starting a new job, getting a new apartment, and new car all within two weeks. Now I’m able to say that I’m at it again. While it may not be the same as traveling to a new country every few weeks, it is still very exciting. In December, I’ll be starting a new role at a new company, SPS Commerce. It was great working at Reeher, and I have nothing but good things to say about the company and the people. I’m also moving, but only 15 minutes away.

I’m thrilled to accelerate my career and position myself where I was before my career break started. It hasn’t been exactly what I envisioned, but does anything work out like we think? Now, for all the naysayers who say a career break on your mid-20s resume is career suicide… I challenge you to go for your dreams, because life is short and you can do anything you want to.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
When (multimedia) fiefdoms crumble (November 22, 2014, 06:41 UTC)

Mike coined the term multimedia fiefdoms recently. He points to a number of different streaming, purchase and rental services for video content (movies, TV series) as the new battleground for users (consumers in this case). There are of course a few more sides in this battle, including music and books, but the idea is still perfectly valid.

What he didn't get into the details of is what happens when one of those fiefdoms capitulates, declares itself won over, and goes away. It's not a fun situation to be in, but we actually have plenty of examples of it, and these, more than anything else, should drive the discourse around and against DRM, in my opinion.

For some reason, the main examples of failed fiefdoms are to be found in books, and I lived through (and recounted) a few of those instances. For me personally, it all started four years ago, when I discovered Sony gave up on their LRF format and decided to adopt the "industry standard" ePub by supporting the Adobe Digital Editions (ADEPT) DRM scheme on their devices. I was slow on the uptake; the announcement had come two years earlier. For Sony, this meant tearing down their walled garden, even though they kept supporting the LRF format and their store for a while – they may even do still, I stopped following two years ago when I moved onto a Kindle – for the user it meant being free to buy books from a number of stores, including some publishers, bookstores with an online presence, and dedicated ebookstores.

But things didn't always go smoothly: two years later, WHSmith partnered with Kobo, and essentially handed the latter all their online ebook market. When I read the announcement I was actually happy, especially since I could not buy books off WHSmith any more once they started requiring UK billing addresses. Unfortunately it also meant that only a third of the books I had bought from WHSmith were going to be ported over to Kobo, due to an extreme cock-up with global rights even to digital books. Had I not gone and broken the DRM off all my ebooks for the sake of it, I would have lost four books and had to buy them anew. Given this was not a case of the seller going bankrupt but of them selling out their customers, their refusal to compensate people was hard to understand. Luckily, the port did include The Gone-Away World, which is one of my favourite books.

Fast forward another year, and the Italian bookstore LaFeltrinelli decided to go the same way, with a major exception: they decided they would keep users on both platforms — that way if you want to buy a digital version of a book you'll still buy it on the same website, but it'll be provided by Kobo and land in your Kobo library. And it seems like they at least have a better deal regarding books' rights, as they seem to have ported over most books anyway. But of course it did not work out as well as it should have, throwing an error in my face and forcing me to call up Kobo (Italy) to have my accounts connected and the books ported.

The same year, I ended up buying a Samsung Galaxy Note 10.1 2014 Edition, which is a pretty good tablet and has a great digitizer. Samsung ships Google Play in full (Store, Movies, Music, Books) but at the same time installs its own App, Video, Music and Book store apps, which is not surprising. But it did not take six months for them to decide that this was not their greatest idea: in May this year, Samsung announced the shutdown of their Music and Books stores — outside of South Korea at least. In this case there is no handover of the content to other providers, so any content bought on those platforms is just gone.

Not completely in vain: if you still have access to a Samsung device (and if you don't, well, you had no access to the content anyway), a different kind of almost-compensation kicks in: the Korean company partnered with Amazon of all bookstores — surprising given that they are behind the new "Nook Tablet" by Barnes & Noble. Besides a branded «Kindle for Samsung» app, they provide one out of a choice of four books every month — the books are taken from Amazon's KDP Select pool as far as I can tell, which is the same pool used as a base for the Kindle Owners' Lending Library and the Kindle Unlimited offerings; they are not great, but some of them are enjoyable enough. Amazon is also keeping things honest and does not force you to read the books on your Samsung device — I indeed prefer reading on my Kindle.

Now the question is: how do you loop all this back to multimedia? Sure, books are entertaining, but they are by definition a single medium, unless you refer to the Kindle Edition of American Gods. Well, for me it's still the same problem of fiefdoms that Mike referred to; indeed every store used to be a walled garden for a long while, then Adobe came and conquered most with ePub and ADEPT — but then between Apple and their iBooks (which uses its own, incompatible DRM), and Amazon with the Kindle, the walls started crumbling down. Nowadays plenty of publishers allow you to buy a book in ePub and usually many other formats at the same time, without DRM, because the publishers don't care which device you want to read your book on (a Kindle, a Kobo, a Nook, an iPad, a Sony Reader, an Android tablet …); they just want you to read the book, and get hooked, and buy more books.

Somehow the same does not seem to work for video content, although it did work to an extent, for a while at least, with music. But this is a different topic.

The reason why I'm posting this right now is that just today I got an email from Samsung saying they are turning down their video store too — now their "Samsung Hub" platform only gets to push games and apps at you, unless you happen to live in South Korea. It's interesting to see how the battles between giants are causing small players to just get off the playing field… but at the same time they take their toys with them.

Once again, there is no compensation; if you rented something, watch it by the end of the year, if you bought something, sorry, you won't be able to access it after new year. It's a tough world. There is a lesson, somewhere, to be learnt about this.

November 21, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Free of earthly burdens (November 21, 2014, 02:43 UTC)

So I was perusing Reddit—an activity that can be nothing more than a way to pass time, or, on occasion, can be rewarding—this evening, and found a picture of a tombstone that a father designed for his differently abled child, who passed away far too soon.

Free of earthly burdens tombstone for a differently abled child

The picture certainly will resonate with anyone who has a child with a “disability.” The image, though, was not the part of the post that really stuck out to me. No, there was a comment about it that really put the concept of death into perspective:

When your parents or elders die, you feel like you’ve lost a connection to the past. I’ve been told that losing a child is like living through the process of losing the future.

I agree with the person who responded by saying that it is a “crushingly profound statement.” The death of a child is not only untimely, but it is a chronological anomaly that simply shouldn’t occur. We as humans regularly recognise items in space and time that are out of place—they catch our attention. For instance, have you ever been watching a film about a time period of long ago and noticed something that wasn’t available at that time (known as an anachronism, by the way)? The loss of a child is arguably the epitome of disturbances in the natural order of time.

For good measure, here is the full thread on Reddit, a link to the particular comment that I referenced, and the image hosted on imgur.

As a side note, the wonderful comment came from a user named Turkeybuzzard, which should be an indication to not pre-judge.

–Zach

November 20, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
RIP ns2 (November 20, 2014, 12:39 UTC)

Today we shut down what was our oldest running Gentoo Linux production server: ns2.

Obviously this machine was happily spreading our DNS records around the world, but what’s remarkable about it is that it had been doing so for 2717 straight days!

$ uptime
 13:00:45 up 2717 days,  2:20,  1 user,  load average: 0.13, 0.04, 0.01

As I mentioned when we shut down stabber, our beloved firewall, our company has been running Gentoo Linux servers in production for a long time now, and we’re always a bit sad when we have to power off one of them.

As usual, I want to take this chance to thank everyone contributing to Gentoo Linux! Without our collective work, none of this would have been possible.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Languages, native speakers, culture (November 20, 2014, 05:37 UTC)

People who follow me probably know already that I'm not a native English speaker. Those who don't but will read this whole post will probably notice it by the end of it, just by my style, even if I were not to say it right at the start as I did.

It's not easy for me and it's often not easy for my teammates, especially so when they are native speakers. It's too easy for both of us to underestimate or overestimate, sometimes both at once, how much information we're conveying with a given phrase.

Something might sound absolutely rude in English to a native speaker, but when I was forming the thought in my head it was intended to be much softer, even kind. Or the other way around: it might be actually quite polite in English, while my own interpretation of it would be much ruder. And this is neither an easy nor a quick problem to solve; I have been working within English-based communities for a long while – this weblog is almost ten years old! – and still to this day the confusion is not completely gone.

Interestingly, it's sometimes easier to interact with other non-native speakers, because we both realize the disconnect; other times it is even harder, because one side or the other is not making the right amount of effort. I find it easier to talk with speakers of other Romance languages (French, Spanish, Portuguese), as the words and expressions are close enough that they can easily be ported over — with a colleague and friend who's a native French speaker, we got to the point where it's sometimes faster to tell the other a word in our own language rather than trying to go through English and back again; I promised him and other friends that I'll try to learn proper French.

It is not limited to language; culture is also connected: I found that there are many connections between Italian and Balkan cultures, sometimes in niches where nobody would have expected them to creep up, such as rude gestures — the "umbrella gesture" seems to work just as well for Serbs as it does for Italians. This is less obvious when interacting with people exclusively online, but it is something useful when meeting people face to face.

I can only expect that newcomers – whether they are English speakers who have never worked closely with foreigners, or people whose main language is not English and who are doing their best to communicate in this language for the first time – will have a hard time.

This is not just a matter of lacking grammar or vocabulary: languages and societal customs are often interleaved and shape each other, so not understanding someone else's language very well may also mean not understanding their society and thus their point of view, and I would argue that points of view are everything.

I will give an example, but please remember I'm clearly not a philologist, so I may be misspeaking; be gentle with me. Some months ago, I was told that English is a sexist language. While there wasn't a formal definition or reasoning behind that statement, I was pointed at the fact that you have to use "he" or "she" when talking about a third party.

I found this funny: not only do you have to do so in Italian when talking about a third party, but you have to do so when talking about a second party (you) and even about a first party (me) — indeed, most adjectives and verbs require a gender. And while English can cop out with the singular "they", this does not apply to Italian as easily. You can use a generic, plural "you", but the words still need a gender — they usually become feminine to match "your persons".

Because of the need for a gender in words, it is common to assume the male gender as a "default" in Italian; some documentation, especially paperwork from the public administration, will use the equivalent of "he/she" in the form of "signore/a", but it becomes cumbersome if you're writing something longer than a bank form, as every single word needs a different suffix.

I'm not trying to defend the unfortunate common mistake to assume a male gender when talking about "the user" or any other actor in a discussion, but I think it's a generally bad idea to assume that people have a perfect understanding of the language and thus assign maliciousness when there is simple naïve ignorance, as was the case with Lennart, systemd and the male pronouns. I know I try hard to use the singular "they", and I know I fall short of it too many times.

But the main point I'm trying to get across here is that yes, it's not easy, in a world that keeps getting smaller, to avoid the shocking contrast of different languages and cultures. And it can't be just one side accommodating; we all have to make an effort, by understanding the other side's limits, and by brokering among sides that would otherwise be talking past each other.

It's not easy, and it takes time, and effort. But I think it's all worth it.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
2005 Volkswagen Jetta Fuse Diagram (November 20, 2014, 03:44 UTC)

It is surprisingly hard to find this fuse diagram online. I actually had the diagram in the glove box of my car but it is cold out and I didn’t want to sit outside reading the manual. I went in trying to find the source of my rear window defroster failure and found the fuse blown and “melted” to the plastic. I broke the fuse when I removed it and then replaced it with a spare fuse. It looks like the previous owner used a 30A when it should have been 25A. Anyway, works like a charm now – ready for winter.

fuse-diagram
fuse-description
example-fuses

November 19, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Request Tracker (November 19, 2014, 15:52 UTC)

So, I’ve kind of taken over Request Tracker (bestpractical.com).

Initially I took it because I’m interested in using RT at work to track customer service emails. All I did at the time was bump the version and remove old, insecure versions from the tree.

However, as I’ve finally gotten around to working on getting it setup, I’ve discovered there were a lot of issues that had gone unreported.

The intention is for RT to run out of its virtual host root, like /var/www/localhost/rt-4.2.9/bin/rt, configured by /var/www/localhost/rt-4.2.9/etc/RT_SiteConfig.pm, and for it to reference any supplementary packages with ${VHOST_ROOT} as its root. However, because of a broken install process and a broken hook script used by webapp-config, that didn’t happen. Further, the rt_apache.conf we included was outdated by a few years, which in itself isn’t a bad thing, except that it was wrong for RT 4+.

I spent much longer than I care to admit trying to figure out why my settings weren’t sticking when I edited RT_SiteConfig.pm. I was trying to run RT under its own path rather than on a subdomain, but Set($WebPath, '/rt') wasn’t doing what it should.

It also complained about not being able to write to /usr/share/webapps/rt/rt-4.2.9/data/mason_data/obj, which clearly wasn’t right.

Once I tried moving RT_SiteConfig.pm to /usr/share/webapps/rt/rt-4.2.9/etc/, and chmod and chown on ../data/mason_data/obj, everything worked as it should.

Knowing this was wrong and that it would prevent anyone using our package from having multiple installations, aka vhosts, I set out to fix it.

It was a descent into madness. Things I expected to happen did not. Things that shouldn’t have been a problem were. Much of the trouble I had circled around webapp-config and webapp.eclass.

But, I prevailed, and now you can really have multiple RT installations side-by-side. Also, I’ve added an article (wiki.gentoo.org) to our wiki with updated instructions on getting RT up and running.

Caveat: I didn’t use FastCGI, so that part may be wrong still, but mod_perl is good to go.

November 16, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

After several weeks of downtime (primarily my fault), rsync1.de.gentoo.org is now back online.
As before, the complete repository is served from a RAM disk, so the mirror is fairly fast.

# rsync --list-only rsync://rsync1.de.gentoo.org/gentoo-portage/
drwxr-xr-x          3,480 2014/11/16 16:01:19 .
-rw-r--r--            121 2014/01/01 01:31:01 header.txt
-rw-r--r--          3,658 2014/08/18 21:01:02 skel.ChangeLog
-rw-r--r--          8,119 2014/08/30 12:01:02 skel.ebuild
-rw-r--r--          1,231 2014/08/18 21:01:02 skel.metadata.xml
drwxr-xr-x            860 2014/11/16 16:01:02 app-accessibility
drwxr-xr-x          4,800 2014/11/16 16:01:03 app-admin
drwxr-xr-x            100 2014/11/16 16:01:03 app-antivirus
[..]
drwxr-xr-x          1,240 2014/11/16 16:01:21 x11-wm
drwxr-xr-x            340 2014/11/16 16:01:21 xfce-base
drwxr-xr-x          1,340 2014/11/16 16:01:21 xfce-extra

The hardware underneath is sponsored by Manitu.

Introducing Gambit to Gentoo (November 16, 2014, 14:50 UTC)

Hi!

I would like to introduce you to Gambit, a rather young Qt-based chess UI with excellent usability and its very own engine.

It has been living in the betagarden overlay while maturing and just hit the Gentoo main repository.
Install through

emerge -av games-board/gambit

as usual.

November 15, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
RDepending on Perl itself (November 15, 2014, 17:36 UTC)

Writing correct dependency specifications is an art in itself. So, here's a small guide for Gentoo developers on how to specify runtime dependencies on dev-lang/perl. First, the general rule.
Check the following two things: 1) does your package link anywhere against libperl.so, 2) does your package install any Perl modules into Perl's vendor directory (e.g., /usr/lib64/perl5/vendor_perl/5.20.1/)? If at least one of these two questions is answered with yes, you need a slot operator in your dependency string, i.e. "dev-lang/perl:=". Obviously, your ebuild will have to be EAPI=5 for that. If neither 1) nor 2) is the case, "dev-lang/perl" is enough.
Now, with eclasses. If you use perl-module.eclass or perl-app.eclass, two variables control automatic adding of dependencies. GENTOO_DEPEND_ON_PERL sets whether the eclass automatically adds a dependency on Perl, and defaults to yes in both cases. GENTOO_DEPEND_ON_PERL_SUBSLOT controls whether the slot operator ":=" is used. It defaults to yes in perl-module.eclass and to no in perl-app.eclass. (This is actually the only difference between the eclasses.) The idea behind that is that a Perl module package always installs modules into vendor_dir, while an application can have its own separate installation path for its modules or not install any modules at all.
In many cases, if a package installs Perl modules you'll need Perl at build time as well since the module build system is written in Perl. If a package links to Perl, that is obviously needed at build time too.

So, summarizing:
eclass             | 1) or 2) true                                                    | 1) false, 2) false
none               | "dev-lang/perl:=" needed in RDEPEND and most likely also DEPEND  | "dev-lang/perl" needed in RDEPEND, maybe also in DEPEND
perl-module.eclass | no need to do anything                                           | GENTOO_DEPEND_ON_PERL_SUBSLOT=no possible before inherit
perl-app.eclass    | GENTOO_DEPEND_ON_PERL_SUBSLOT=yes needed before inherit          | no need to do anything
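
To make the first row concrete, here is a hypothetical ebuild fragment (package description and layout invented by me for illustration) for a package that links against libperl.so, i.e. case 1) above, and so needs the slot operator:

```shell
# Hypothetical ebuild fragment: the package links against libperl.so
# (case 1 above), so the := slot operator is required and EAPI=5 is a must.
EAPI=5

DESCRIPTION="Example application embedding a Perl interpreter"
SLOT="0"

RDEPEND="dev-lang/perl:="
# Perl is also needed at build time for linking, so mirror it in DEPEND.
DEPEND="${RDEPEND}"
```

With the := operator, the package will be flagged for rebuild whenever dev-lang/perl moves to a new subslot.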

Luca Barbato a.k.a. lu_zero (homepage, bugs)
Making a new demuxer (November 15, 2014, 13:40 UTC)

Maxim asked me to check a stream from a security camera that he could not decode with avconv without forcing the format to mjpeg.

Mysterious stream

Since it is served over http, the first step was checking the mime type. Time to use curl -I.

# curl -I "http://host/some.cgi?user=admin&pwd=pwd" | grep Content-Type

Interestingly enough, it is a multipart/x-mixed-replace:

Content-Type: multipart/x-mixed-replace;boundary=object-ipcamera

Basically the cgi sends jpeg images one after the other; we even have an (old and ugly) muxer for it!

Time to write a demuxer.

Libav demuxers

We already have some documentation on how to write a demuxer, but it is not complete, so this blogpost will provide an example.

Basics

Libav code is quite object oriented: every component is a C structure containing a description of itself and pointers to a set of functions, and there are fixed patterns that make it easier to fit new code in.

Every major library has an all${components}.c file in which the components are registered for use. In our case we are talking about libavformat, so we have allformats.c.

The components are built according to CONFIG_${name}_${component} variables generated by configure. The actual code resides in the ${component} directory, following a pattern such as ${name}.c, or ${name}dec.c/${name}enc.c if both demuxer and muxer are available.

The code can be split into multiple files if it starts growing beyond 500-1000 LOCs.

Registration

We have some REGISTER_ macros that abstract the logic needed to make every component selectable at configure time, since in Libav you can enable/disable every muxer, demuxer, codec and I/O protocol from configure.

We already had a muxer for the format:

    REGISTER_MUXER   (MPJPEG,           mpjpeg);

Now we register both in a single line:

    REGISTER_MUXDEMUX(MPJPEG,           mpjpeg);

The all${components} files are parsed by configure to generate the appropriate Makefile and C definitions. On the next run we’ll get a new CONFIG_MPJPEG_DEMUXER variable in config.mak and config.h.

Now we can add to libavformat/Makefile a line like

OBJS-$(CONFIG_MPJPEG_DEMUXER)            += mpjpegdec.o

and put our mpjpegdec.c in libavformat and we are ready to write some code!

Demuxer structure

Usually I start putting down a skeleton file with the bare minimum:

The AVInputFormat and the core _read_probe, _read_header and _read_packet callbacks.

#include "avformat.h"

static int ${name}_read_probe(AVProbeData *p)
{
    return 0;
}

static int ${name}_read_header(AVFormatContext *s)
{
    return AVERROR(ENOSYS);
}

static int ${name}_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    return AVERROR(ENOSYS);
}

AVInputFormat ff_${name}_demuxer = {
    .name           = "${name}",
    .long_name      = NULL_IF_CONFIG_SMALL("Longer ${name} description"),
    .read_probe     = ${name}_read_probe,
    .read_header    = ${name}_read_header,
    .read_packet    = ${name}_read_packet,
};

I make all the functions return a no-op value.

_read_probe

This function will be called by the av_probe_input functions; it receives some probe information in the form of a buffer. The function returns a score between 0 and 100; the AVPROBE_SCORE_MAX, AVPROBE_SCORE_MIME and AVPROBE_SCORE_EXTENSION constants are provided to make the expected confidence more evident. 0 means that we are sure that the probed stream is not parsable by this demuxer.

_read_header

This function will be called by avformat_open_input. It reads the initial format information (e.g. number and kind of streams) when available; in this function the initial set of streams should be mapped with avformat_new_stream. It must return 0 on success. The skeleton is made to return ENOSYS so it can be run and just exit cleanly.

_read_packet

This function will be called by av_read_frame. It should return an AVPacket containing the demuxed data as found in the bytestream. The packet will be parsed and collated (or split) into frame-worth amounts of data by the optional parsers. It must return 0 on success. The skeleton again returns ENOSYS.

Implementation

Now let’s implement the mpjpeg support! The format in itself is quite simple:
– a boundary line starting with --
– a Content-Type line stating image/jpeg.
– a Content-Length line with the actual buffer length.
– the jpeg data
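Concretely, a stream could look like this (boundary string and length invented for illustration):

```
--myboundary
Content-Type: image/jpeg
Content-Length: 12345

<12345 bytes of JPEG data>
--myboundary
...
```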

Probe function

Basically we just want to check whether the Content-Type is what we expect, so we go over the lines (\r\n-separated) and check whether there is a Content-Type tag with the value image/jpeg.

static int get_line(AVIOContext *pb, char *line, int line_size)
{
    int i, ch;
    char *q = line;

    for (i = 0; !pb->eof_reached; i++) {
        ch = avio_r8(pb);
        if (ch == '\n') {
            if (q > line && q[-1] == '\r')
                q--;
            *q = '\0';

            return 0;
        } else {
            if ((q - line) < line_size - 1)
                *q++ = ch;
        }
    }

    if (pb->error)
        return pb->error;
    return AVERROR_EOF;
}

static int split_tag_value(char **tag, char **value, char *line)
{
    char *p = line;

    while (*p != '\0' && *p != ':')
        p++;
    if (*p != ':')
        return AVERROR_INVALIDDATA;

    *p   = '\0';
    *tag = line;

    p++;

    while (av_isspace(*p))
        p++;

    *value = p;

    return 0;
}

static int check_content_type(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-type") ||
        av_strcasecmp(value, "image/jpeg"))
        return AVERROR_INVALIDDATA;

    return 0;
}

static int mpjpeg_read_probe(AVProbeData *p)
{
    AVIOContext *pb;
    char line[128] = { 0 };
    int ret = 0;

    pb = avio_alloc_context(p->buf, p->buf_size, 0, NULL, NULL, NULL, NULL);
    if (!pb)
        return AVERROR(ENOMEM);

    while (!pb->eof_reached) {
        if (get_line(pb, line, sizeof(line)) < 0)
            break;

        if (!check_content_type(line)) {
            ret = AVPROBE_SCORE_MAX;
            break;
        }
    }

    // The context wraps the caller's probe buffer, so only the context
    // itself needs to be freed.
    av_free(pb);

    return ret;
}

Here we are using avio to be able to reuse get_line later.

Reading the header

The format is pretty much header-less; for now we just check for the boundary and set up the minimum amount of information regarding the stream: media type, codec id and frame rate. By specification the boundary is less than 70 characters long, with -- as the initial marker.

static int mpjpeg_read_header(AVFormatContext *s)
{
    AVStream *st;
    char boundary[70 + 2 + 1];
    int ret;

    ret = get_line(s->pb, boundary, sizeof(boundary));
    if (ret < 0)
        return ret;

    if (strncmp(boundary, "--", 2))
        return AVERROR_INVALIDDATA;

    st = avformat_new_stream(s, NULL);
    if (!st)
        return AVERROR(ENOMEM);

    st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codec->codec_id   = AV_CODEC_ID_MJPEG;

    avpriv_set_pts_info(st, 60, 1, 25);

    return 0;
}

Reading packets

Even this function is quite simple; please note that AVFormatContext provides an
AVIOContext. The bulk of the function boils down to reading the size of the frame,
allocating a packet using av_new_packet and filling it using avio_read.

static int parse_content_length(char *line)
{
    char *tag, *value;
    int ret = split_tag_value(&tag, &value, line);
    long int val;

    if (ret < 0)
        return ret;

    if (av_strcasecmp(tag, "Content-Length"))
        return AVERROR_INVALIDDATA;

    val = strtol(value, NULL, 10);
    if (val == LONG_MIN || val == LONG_MAX)
        return AVERROR(errno);
    if (val > INT_MAX)
        return AVERROR(ERANGE);
    return val;
}

static int mpjpeg_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    char line[128];
    int ret, size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    ret = check_content_type(line);
    if (ret < 0)
        return ret;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        return ret;

    size = parse_content_length(line);
    if (size < 0)
        return size;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    ret = av_new_packet(pkt, size);
    if (ret < 0)
        return ret;

    ret = avio_read(s->pb, pkt->data, size);
    if (ret < 0)
        goto fail;

    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    // Consume the boundary marker
    ret = get_line(s->pb, line, sizeof(line));
    if (ret < 0)
        goto fail;

    return ret;

fail:
    av_free_packet(pkt);
    return ret;
}

What next

For now I have walked you through the fundamentals; hopefully next week I'll show you some additional features I'll need to implement in this simple demuxer to make it land in Libav: AVOptions to make it possible to override the framerate, and some additional code to be able to do without Content-Length and just use the boundary line.

PS: WordPress support for syntax highlighting is quite subpar; if somebody has a blog engine that can use pygments or equivalent, please tell me and I'd switch to it.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Small differences don't matter (to unpaper) (November 15, 2014, 04:53 UTC)

After my challenge with the fused multiply-add instructions I managed to find some time to write a new test utility. It's written ad hoc for unpaper but it can probably be used for other things too. It's trivial and stupid but it got the job done.

What it does is simple: it loads both a golden and a result image file, compares their size and format, and then goes through all the bytes to count how many differences there are between them. If less than 0.1% of the image surface changed, it considers the test a pass.
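The comparison step can be sketched like this — a minimal illustration, not the actual test utility; the function name and the raw-buffer interface are invented, and the real tool also checks size and pixel format before getting here:

```c
#include <stddef.h>

/* Sketch of the golden-vs-result check described above (names invented):
 * count the bytes that differ and pass only when strictly less than 0.1%
 * of the buffer changed. */
static int images_match(const unsigned char *golden,
                        const unsigned char *result, size_t size)
{
    size_t diff = 0;
    size_t i;

    for (i = 0; i < size; i++)
        if (golden[i] != result[i])
            diff++;

    /* diff/size < 0.001  <=>  diff * 1000 < size, avoiding floats */
    return diff * 1000 < size;
}
```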

It's not a particularly nice system, especially as it requires me to bundle some 180MB of golden files (they compress to just about 10 MB so it's not a big deal), but it's a strict improvement compared to what I had before, which is good.

This change actually allowed me to explore one change that I had abandoned before because it resulted in non-pixel-perfect results. In particular, unpaper now uses single-precision floating point all over, rather than doubles. This is because the slight imperfections caused by this change are not relevant enough to warrant the ever-so-slight loss in performance due to the bigger variables.

But even up to here, there is very little gain in performance. Sure, some calculations can be faster this way, but we're still using the same set of AVX/FMA instructions. This is unfortunate: unless you start rewriting the algorithms used for searching for edges or rotations, there is no gain to be made by changing the size of the variables. When I converted unpaper to use libavcodec, I decided to make the code as simple and stupid as I could make it, as that meant I could have a baseline to improve from, but I'm not sure what the best way to improve it is, now.

I still have a branch that uses OpenMP for the processing, but since most of the filters applied are dependent on each other it does not work very well. Per-row processing gets slightly better results but they are really minimal as well. I think the most interesting parallel processing low-hanging fruit would be to execute processing in parallel on the two pages after splitting them from a single sheet of paper. Unfortunately, the loops used to do that processing right now are so complicated that I'm not looking forward to touch them for a long while.

I tried some basic profile-guided optimization execution, just to figure out what needs to be improved, and compared with codiff a proper release and a PGO version trained after the tests. Unfortunately the results are a bit vague and it means I'll probably have to profile it properly if I want to get data out of it. If you're curious here is the output when using rbelf-size -D on the unpaper binary when built normally, with profile-guided optimisation, with link-time optimisation, and with both profile-guided and link-time optimisation:

% rbelf-size -D ../release/unpaper ../release-pgo/unpaper ../release-lto/unpaper ../release-lto-pgo/unpaper
    exec         data       rodata        relro          bss     overhead    allocated   filename
   34951         1396        22284            0        11072         3196        72899   ../release/unpaper
   +5648         +312         -192           +0         +160           -6        +5922   ../release-pgo/unpaper
    -272           +0        -1364           +0         +144          -55        -1547   ../release-lto/unpaper
   +7424         +448        -1596           +0         +304          -61        +6519   ../release-lto-pgo/unpaper

It's unfortunate that GCC does not give you any diagnostics on what it's trying to achieve when doing LTO; it would be interesting to see if you could steer the compiler to produce better code without it as well.

Anyway, enough with the microptimisations for now. If you want to make unpaper faster, feel free to send me pull requests for it, I'll be glad to take a look at them!

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Having fun with networking (November 15, 2014, 04:14 UTC)

Since the last minor upgrade my notebook has been misbehaving in funny ways.
I presumed that it was NetworkManager being itself, but ... this is even more fun. To quote from the manpage:

If the hostname is currently blank, (null) or localhost, or force_hostname is YES or TRUE or 1 then dhcpcd sets the hostname to the one supplied by the DHCP server.
Guess what. Now my hostname is 192.168.0.7, I mean 192.168.0.192.168.0.7, err...
And as a bonus this even breaks X in funny ways so that starting new apps becomes impossible. The fix?
Now the hostname is set to "localhorst". Because that's the name of the machine!111 (It doesn't have an explicit name, so localhost used to be ok)
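For anyone hitting the same behaviour, one common way to keep dhcpcd from touching the hostname at all is to disable its hostname hook — a sketch; check dhcpcd.conf(5) for the exact options of your version:

```
# /etc/dhcpcd.conf
nohook hostname
```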

November 14, 2014
Gentoo Monthly Newsletter: October 2014 (November 14, 2014, 19:30 UTC)

Gentoo News

Council News

The council addressed a number of issues this month. The change with the biggest long-term significance was clearing the way to proceed with the git migration once infra is ready. This included removing changelogs from future git commits, removing cvs headers, and simplifying our news repository format. The infra and git migration projects will coordinate the actual migration hopefully in the not-so-distant future.

The council also endorsed getting rid of herds, but acknowledged that there are some details that need to be worked out before pulling the plug. The bikeshedding was moved back to the lists so all could share in the fun.

There are still some concerns with the games team. The council decided to give the team more time to sort things out internally before interfering. It was acknowledged that most of the serious issues were already resolved with the decision to allow anybody to elect to make their packages a part of the games herd or not. Some QA concerns with some games were brought up, but it was felt that this is best dealt with on a per-package basis with QA/treecleaners and that games shouldn’t receive any special treatment one way or the other.

Other decisions include removing einstall from EAPI6, and approving GLEP64 (VDB caching / API). There was also a status update on multilib (nearly done), and migrating project pages to the wiki (sadly we can’t just get rid of unmigrated projects like the x86 and amd64 arches).

PYTHON_SINGLE_TARGETS updates

(by Ian Stakenvicius)

On November 7th, packages inheriting python-single-r1 got a whole lot easier for end-users to manage.

It used to be that any package supporting just one Python implementation required a python_single_target_* USE flag to be set to choose it, even if the package was only compatible with one Python in the first place. Since November 7th, if a package is only compatible with a single supported Python version (say, python-2.7), it no longer uses python_single_target_* USE flags and relies instead on that implementation being enabled in PYTHON_TARGETS.

The most visible change seen from this is package rebuilds from removal of a lot of PYTHON_SINGLE_TARGET flags, especially on python-2.7-only packages. However, the removal of these flags also means that setting PYTHON_SINGLE_TARGET to something other than python2_7 no longer needs all of those packages to be listed in package.use.
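As a hypothetical before/after illustration (package name invented), this is the kind of package.use entry that is no longer needed for single-implementation packages:

```
# /etc/portage/package.use -- only needed before the change
dev-python/example-pkg python_single_target_python2_7
```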

Portage users are also likely to notice that exceptions to PYTHON_SINGLE_TARGET that would require package.use changes are now also calculated properly by --autounmask, instead of solely being reported as an illegible REQUIRED_USE error.

Gentoo Developer Moves

Summary

Gentoo is made up of 243 active developers, of which 39 are currently away.
Gentoo has recruited a total of 804 developers since its inception.

Changes

  • Yixun Lan joined the electronics team

Additions

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 163
Packages 17876
Ebuilds 38009
Architecture Stable Testing Total % of Packages
alpha 3663 592 4255 23.80%
amd64 10926 6462 17388 97.27%
amd64-fbsd 0 1580 1580 8.84%
arm 2709 1812 4521 25.29%
arm64 565 46 611 3.42%
hppa 3103 502 3605 20.17%
ia64 3218 629 3847 21.52%
m68k 624 99 723 4.04%
mips 0 2423 2423 13.55%
ppc 6869 2479 9348 52.29%
ppc64 4381 988 5369 30.03%
s390 1445 376 1821 10.19%
sh 1625 461 2086 11.67%
sparc 4160 921 5081 28.42%
sparc-fbsd 0 319 319 1.78%
x86 11576 5402 16978 94.98%
x86-fbsd 0 3245 3245 18.15%

gmn-portage-stats-2014-11

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201410-02 perl-core/Locale-Maketext (and 1 more) Perl, Perl Locale-Maketext module: Multiple vulnerabilities 446376
201410-01 app-shells/bash Bash: Multiple vulnerabilities 523742

Package Removals/Additions

Removals

Package Developer Date
media-sound/cowbell k_f 06 Oct 2014
x11-plugins/msn-pecan voyageur 08 Oct 2014
x11-plugins/pidgin-facebookchat voyageur 08 Oct 2014
dev-perl/IO-Socket-IP dilfridge 11 Oct 2014
dev-perl/Template-Latex dilfridge 13 Oct 2014
app-emulation/emul-linux-x86-compat ulm 14 Oct 2014
app-doc/djbdns-man mjo 15 Oct 2014
app-text/unix2dos mjo 18 Oct 2014
app-text/regex idella4 29 Oct 2014
games-board/chessdb mr_bones_ 30 Oct 2014
dev-ml/async_core aballier 30 Oct 2014

Additions

Package Developer Date
net-analyzer/openvas-tools jlec 01 Oct 2014
net-p2p/bitcoin-cli blueness 02 Oct 2014
app-benchmarks/wrk vikraman 02 Oct 2014
dev-perl/Net-IPv4Addr mjo 04 Oct 2014
dev-ruby/compass-core graaff 05 Oct 2014
dev-ruby/compass-import-once graaff 05 Oct 2014
media-sound/apulse jauhien 05 Oct 2014
dev-perl/Test-Warnings zlogene 05 Oct 2014
x11-misc/rofi jer 06 Oct 2014
dev-python/parse alunduil 06 Oct 2014
dev-python/clint alunduil 07 Oct 2014
app-admin/lastpass robbat2 08 Oct 2014
dev-perl/XML-Entities dilfridge 09 Oct 2014
dev-python/Numdifftools jlec 10 Oct 2014
app-text/krop dilfridge 10 Oct 2014
net-voip/vidyodesktop prometheanfire 10 Oct 2014
kde-misc/kcm-touchpad mrueg 11 Oct 2014
dev-perl/Unicode-Normalize dilfridge 11 Oct 2014
dev-perl/Net-IDN-Encode dilfridge 11 Oct 2014
dev-perl/tkispell dilfridge 11 Oct 2014
perl-core/IO-Socket-IP dilfridge 11 Oct 2014
virtual/perl-IO-Socket-IP dilfridge 11 Oct 2014
dev-python/pyhamcrest alunduil 11 Oct 2014
dev-python/enum34 alunduil 11 Oct 2014
dev-db/postgresql titanofold 11 Oct 2014
dev-python/doublex alunduil 11 Oct 2014
dev-python/pycallgraph alunduil 12 Oct 2014
dev-python/python-termstyle alunduil 12 Oct 2014
dev-python/rednose alunduil 12 Oct 2014
dev-python/PyQt5 pesa 13 Oct 2014
net-analyzer/ipguard jer 13 Oct 2014
dev-perl/Template-Plugin-Latex dilfridge 13 Oct 2014
dev-perl/LaTeX-Driver dilfridge 14 Oct 2014
dev-perl/Pod-LaTeX dilfridge 14 Oct 2014
dev-perl/LaTeX-Encode dilfridge 14 Oct 2014
dev-perl/MooseX-FollowPBP dilfridge 14 Oct 2014
dev-perl/LaTeX-Table dilfridge 14 Oct 2014
virtual/perl-Term-ReadLine dilfridge 14 Oct 2014
dev-python/python-etcd zmedico 15 Oct 2014
dev-db/etcd zmedico 15 Oct 2014
dev-libs/extra-cmake-modules kensington 15 Oct 2014
kde-frameworks/kglobalaccel kensington 15 Oct 2014
kde-frameworks/kwallet kensington 15 Oct 2014
kde-frameworks/kjobwidgets kensington 15 Oct 2014
kde-frameworks/kxmlgui kensington 15 Oct 2014
kde-frameworks/plasma kensington 15 Oct 2014
kde-frameworks/kcrash kensington 15 Oct 2014
kde-frameworks/kdesignerplugin kensington 15 Oct 2014
kde-frameworks/frameworkintegration kensington 15 Oct 2014
kde-frameworks/kf-env kensington 15 Oct 2014
kde-frameworks/kdesu kensington 15 Oct 2014
kde-frameworks/ki18n kensington 15 Oct 2014
kde-frameworks/kitemmodels kensington 15 Oct 2014
kde-frameworks/kguiaddons kensington 15 Oct 2014
kde-frameworks/knewstuff kensington 15 Oct 2014
kde-frameworks/kcoreaddons kensington 15 Oct 2014
kde-frameworks/kapidox kensington 15 Oct 2014
kde-frameworks/kactivities kensington 15 Oct 2014
kde-frameworks/kdelibs4support kensington 15 Oct 2014
kde-frameworks/kcmutils kensington 15 Oct 2014
kde-frameworks/sonnet kensington 15 Oct 2014
kde-frameworks/kconfig kensington 15 Oct 2014
kde-frameworks/kidletime kensington 15 Oct 2014
kde-frameworks/kunitconversion kensington 15 Oct 2014
kde-frameworks/kio kensington 15 Oct 2014
kde-frameworks/kdbusaddons kensington 15 Oct 2014
kde-frameworks/kconfigwidgets kensington 15 Oct 2014
kde-frameworks/kauth kensington 15 Oct 2014
kde-frameworks/kcompletion kensington 15 Oct 2014
kde-frameworks/kcodecs kensington 15 Oct 2014
kde-frameworks/kpty kensington 15 Oct 2014
kde-frameworks/solid kensington 15 Oct 2014
kde-frameworks/kplotting kensington 15 Oct 2014
kde-frameworks/kbookmarks kensington 15 Oct 2014
kde-frameworks/knotifyconfig kensington 15 Oct 2014
kde-frameworks/kemoticons kensington 15 Oct 2014
kde-frameworks/kinit kensington 15 Oct 2014
kde-frameworks/kross kensington 15 Oct 2014
kde-frameworks/kwidgetsaddons kensington 15 Oct 2014
kde-frameworks/kimageformats kensington 15 Oct 2014
kde-frameworks/kdewebkit kensington 15 Oct 2014
kde-frameworks/kdeclarative kensington 15 Oct 2014
kde-frameworks/attica kensington 15 Oct 2014
kde-frameworks/kservice kensington 15 Oct 2014
kde-frameworks/kiconthemes kensington 15 Oct 2014
kde-frameworks/kdnssd kensington 15 Oct 2014
kde-frameworks/kmediaplayer kensington 15 Oct 2014
kde-frameworks/knotifications kensington 15 Oct 2014
kde-frameworks/kded kensington 15 Oct 2014
kde-frameworks/kjsembed kensington 15 Oct 2014
kde-frameworks/kjs kensington 15 Oct 2014
kde-frameworks/ktexteditor kensington 15 Oct 2014
kde-frameworks/kdoctools kensington 15 Oct 2014
kde-frameworks/krunner kensington 15 Oct 2014
kde-frameworks/kitemviews kensington 15 Oct 2014
kde-frameworks/karchive kensington 15 Oct 2014
kde-frameworks/khtml kensington 15 Oct 2014
kde-frameworks/kwindowsystem kensington 15 Oct 2014
kde-frameworks/kparts kensington 15 Oct 2014
kde-frameworks/ktextwidgets kensington 15 Oct 2014
kde-frameworks/threadweaver kensington 15 Oct 2014
kde-base/oxygen-fonts kensington 15 Oct 2014
dev-libs/sni-qt mrueg 15 Oct 2014
dev-db/etcdctl zmedico 15 Oct 2014
dev-db/go-etcd zmedico 16 Oct 2014
sys-fs/etcd-fs zmedico 16 Oct 2014
dev-python/mamba alunduil 16 Oct 2014
virtual/podofo-build zmedico 16 Oct 2014
dev-games/goatee hasufell 16 Oct 2014
games-board/goatee-gtk hasufell 16 Oct 2014
app-crypt/etcd-ca zmedico 16 Oct 2014
dev-python/expects alunduil 17 Oct 2014
app-emacs/rust-mode jauhien 18 Oct 2014
app-vim/rust-mode jauhien 18 Oct 2014
app-shells/rust-zshcomp jauhien 18 Oct 2014
dev-lang/rust-bin jauhien 18 Oct 2014
dev-python/args alunduil 18 Oct 2014
sys-process/xjobs mjo 19 Oct 2014
dev-python/parse-type alunduil 19 Oct 2014
dev-perl/Devel-CheckCompiler dilfridge 19 Oct 2014
dev-perl/Cwd-Guard dilfridge 19 Oct 2014
dev-perl/Module-Build-XSUtil dilfridge 19 Oct 2014
dev-perl/File-Find-Rule-Perl dilfridge 19 Oct 2014
dev-perl/PPI-PowerToys dilfridge 19 Oct 2014
dev-util/jenkins-bin mrueg 20 Oct 2014
dev-python/sphinxcontrib-cheeseshop alunduil 21 Oct 2014
dev-perl/BZ-Client dilfridge 21 Oct 2014
dev-perl/Data-Serializer dilfridge 21 Oct 2014
dev-perl/Math-NumberCruncher dilfridge 21 Oct 2014
dev-python/behave alunduil 22 Oct 2014
dev-python/django-opensearch ercpe 22 Oct 2014
app-admin/lastpass-cli zx2c4 22 Oct 2014
dev-python/simpleeval cedk 22 Oct 2014
net-misc/xrdp mgorny 23 Oct 2014
dev-libs/collada-dom aballier 23 Oct 2014
sci-libs/libccd aballier 23 Oct 2014
dev-ml/ocaml-re aballier 24 Oct 2014
dev-ml/cudf aballier 24 Oct 2014
dev-perl/File-ShareDir-Install dilfridge 24 Oct 2014
dev-perl/POSIX-strftime-Compiler dilfridge 24 Oct 2014
dev-perl/Apache-LogFormat-Compiler dilfridge 24 Oct 2014
dev-python/doublex-expects alunduil 25 Oct 2014
app-crypt/libu2f-host flameeyes 25 Oct 2014
app-crypt/libykneomgr flameeyes 25 Oct 2014
app-crypt/yubikey-neo-manager flameeyes 25 Oct 2014
dev-perl/Redis dilfridge 25 Oct 2014
dev-perl/Types-Serialiser dilfridge 25 Oct 2014
net-analyzer/ospd jlec 26 Oct 2014
dev-perl/Cache-FastMmap dilfridge 26 Oct 2014
dev-python/dockerpty alunduil 27 Oct 2014
app-text/restview radhermit 27 Oct 2014
dev-ml/parmap aballier 27 Oct 2014
dev-ml/camlbz2 aballier 27 Oct 2014
net-misc/x11rdp mgorny 27 Oct 2014
app-emulation/fig alunduil 27 Oct 2014
dev-perl/Algorithm-ClusterPoints dilfridge 27 Oct 2014
dev-ml/dose3 aballier 28 Oct 2014
x11-libs/libQGLViewer aballier 28 Oct 2014
dev-ml/cmdliner aballier 29 Oct 2014
dev-ml/uutf aballier 29 Oct 2014
dev-ml/jsonm aballier 29 Oct 2014
dev-ml/opam aballier 29 Oct 2014
sci-libs/octomap aballier 29 Oct 2014
app-text/regex idella4 29 Oct 2014
dev-python/regex idella4 29 Oct 2014
games-rpg/soltys calchan 30 Oct 2014
sci-libs/orocos_kdl aballier 30 Oct 2014
dev-cpp/metslib aballier 31 Oct 2014
media-libs/libsixel hattya 31 Oct 2014
app-crypt/libscrypt blueness 31 Oct 2014
sec-policy/selinux-android swift 31 Oct 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 October 2014 and 01 November 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-11

Bug Activity Number
New 1881
Closed 1153
Not fixed 171
Duplicates 168
Total 6198
Blocker 4
Critical 18
Major 65

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Linux Gnome Desktop Team 50
2 Gentoo Perl team 43
3 Gentoo Games 42
4 Gentoo KDE team 39
5 Gentoo's Team for Core System packages 39
6 Netmon Herd 32
7 Python Gentoo Team 27
8 PHP Bugs 25
9 Gentoo Toolchain Maintainers 21
10 Others 834

gmn-closed-2014-11

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 107
2 Gentoo Linux Gnome Desktop Team 69
3 Gentoo's Team for Core System packages 65
4 Gentoo Security 58
5 Gentoo KDE team 53
6 Python Gentoo Team 49
7 Gentoo Games 47
8 Gentoo Perl team 44
9 Default Assignee for New Packages 43
10 Others 1345

gmn-opened-2014-11

 

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

November 12, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Veteran’s Day is one of uncertainty (November 12, 2014, 03:17 UTC)

Today, 11 November, is an interesting holiday in the United States. It is the day in which we honour those individuals who have served in the armed forces and have defended their country. I say that it is an interesting holiday because I am torn on how I feel about the entire concept. On one hand, I am incredibly grateful for those people that have fought to defend the principles and freedoms on which the United States was founded. However, the fight itself is one that I cannot condone.

There is no flag large enough to cover the shame of killing innocent people
There is no flag large enough to cover the shame of killing innocent people

Threats to freedom in any nation are brought about by political groups, and should be handled in a political manner. I understand that my viewpoint here is one of pseudoutopian cosmography, but it is one that I hope will become more and more realistic as both time and humanity march onward. The “wars” should be fought by national leaders, and done so via discussion and debate; not by citizens (military or civilian) via guns, bombs, or other weaponry.

I also understand that there will be many people who disagree (in degrees that result in emotions ranging from mild irritation to infuriated hostility) with my viewpoint, and that is completely fine. Again, my dilemma comes from being simultaneously thankful for those individuals who have given their all to defend “freedom” (whatever concept that word may represent) and sorrowful that they were the ones that had to give anything at all. These men and women had to leave their families knowing that they may never return to them; knowing that they may die trying to defend something that shouldn’t be challenged in the first place—human freedoms.

Little boy looking at his veteran father
Who will explain it to him?

Let us not forget a quote by former President of the United States, John F. Kennedy who stated that “mankind must put an end to war before war puts an end to mankind.”

–Zach

November 10, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Today's good news is that our manuscript "Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube" has been accepted for publication by New Journal of Physics.
In a way, this work builds directly on our previous publication on thermally induced quasiparticles in niobium-carbon nanotube hybrid systems. As a contribution mainly from our theory colleagues, the modelling of transport processes is now enhanced and extended to cotunneling processes within Coulomb blockade. A generalized master equation based on the reduced density matrix approach in the charge-conserved regime is derived, applicable to any strength of the intradot interaction and to finite values of the superconducting gap.
We show both theoretically and experimentally that distinct thermal "replica lines", due to the finite quasiparticle occupation of the superconductor, occur in cotunneling spectroscopy as well at higher temperature T~1K: the now-possible transport processes lead to additional conductance both at zero bias and at finite voltage corresponding to an excitation energy; experiment and theoretical result match very well.

"Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube"
S. Ratz, A. Donarini, D. Steininger, T. Geiger, A. Kumar, A. K. Hüttel, Ch. Strunk, and M. Grifoni
accepted for publication by New Journal of Physics, arXiv:1408.5000 (PDF)

November 09, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
PyPy is back, and for real this time! (November 09, 2014, 23:17 UTC)

As you may recall, I was looking for a dedicated PyPy maintainer for quite some time. Sadly, all the people who helped (and who I’d like to thank a lot) ended up lacking time soon enough. So finally I’ve decided to look into the hacks reducing build-time memory use and take care of the necessary ebuild and packaging work myself.

So first of all, you may notice that the new PyPy (source-code) ebuilds have a new USE flag called low-memory. When this flag is enabled, the translation process is done using PyPy with some memory-reducing adjustments suggested by upstream. The net result is that it finally is possible to build PyPy with 3.5G of RAM (on amd64) and 1G of swap (the latter being used when the compiler is spawned and the memory used during translation is no longer necessary), at the cost of a slightly increased build time.
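Enabling it follows the usual per-package USE flag route (a sketch; the flag name comes from this post, the path is standard Portage configuration):

```
# /etc/portage/package.use
dev-python/pypy low-memory
```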

As noted above, the low-memory option requires using PyPy to perform the translation. So while having to enforce that, I went a bit further and made the ebuild default to using PyPy whenever available. In fact, even for a first PyPy build you are recommended to install dev-python/pypy-bin first and let the ebuild use it to bootstrap your own PyPy.

Next, I have cleaned up the ebuilds a bit and enforced more consistency. Changing maintainers and binary package builders have resulted in the ebuilds being a bit inconsistent. Now you can finally expect pypy-bin to install exactly the same set of files as source-built pypy.

I have also cleaned up the remaining libpypy-c symlinks. The library is not packaged upstream currently, and therefore has no proper public name. Using libpypy-c.so is just wrong, and packages can’t reliably refer to that. I’d rather wait with installing it till there’s some precedence in renaming. The shared library is still built but it’s kept inside the PyPy home directory.

All those changes were followed by a proper version bump to 2.4.0. While you still may have issues upgrading PyPy, Zac already committed a patch to Portage and the next release should be able to handle PyPy upgrades seamlessly. I have also built all the supported binary package variants, so you can choose those if you don’t want to spend time building PyPy.

Finally, I have added the ebuilds for PyPy 3. They are a little bit more complex than regular PyPy, especially because the build process and some of the internal modules still require Python 2. Sadly, PyPy 3 is based on Python 3.2 with small backports, so I don’t expect package compatibility much greater than CPython 3.2 had.

If you want to try building some packages with PyPy 3, you can use the convenience PYTHON_COMPAT_OVERRIDE hack:

PYTHON_COMPAT_OVERRIDE='pypy3' emerge -1v mypackage

Please note that it is only a hack, and as such it doesn’t set proper USE flags (PYTHON_TARGETS are simply ignored) or enforce dependencies.

If someone wants to help PyPy on Gentoo a bit, there are still unsolved issues needing a lot of specialist work. More specifically:

  1. #465546; PyPy needs to be modified to support the /usr prefix properly (right now, it requires the prefix to be /usr/lib*/pypy, which breaks distutils packages assuming otherwise).
  2. #525940; non-SSE2 JIT does not build.
  3. #429372; we lack proper sandbox install support.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
gentooJoin 2004/04/11 (November 09, 2014, 11:06 UTC)

How time flies!
gentooJoin: 2004/04/11

Now I feel ooold

November 05, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Just a simple webapp, they said ... (November 05, 2014, 08:38 UTC)

The complexity of modern software is quite insanely insane. I just realized ...
Writing a small webapp with flask, I've had to deal with the following technologies/languages:

  • System package manager, in this case portage
  • SQL DBs, both SQLite (local testing) and PostgreSQL (production)
  • python/flask, the core of this webapp
  • jinja2, the template language usually used with it
  • HTML, because the templates don't just appear magically
  • CSS (mostly hidden in Bootstrap) to make it look sane
  • JavaScript, because dynamic shizzle
  • (flask-)sqlalchemy, ORMs are easier than writing SQL by hand when you're in a hurry
  • alembic, for DB migrations and updates
  • git, because version control
So that's about a dozen things that each would take years to master. And for a 'small' project there's not much time to learn them deeply, so we staple together what we can, learning as we go along ...

And there's an insane amount of context switching going on, you go from mangling CSS to rewriting SQL in the span of a few minutes. It's an impressive polyglot marathon, but how is this supposed to generate sustainable and high-quality results?

And then I go home in the evening and play around with OpenCL and such things. Learning never ends - but how are we going to build things that last for more than 6 months? Too many moving parts, too much change, and never enough time to really understand what we're doing :)

November 04, 2014
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Notes from the PulseAudio Mini Summit 2014 (November 04, 2014, 16:49 UTC)

The third week of October was quite action-packed, with a whole bunch of conferences happening in Düsseldorf. The Linux audio developer community as well as the PulseAudio developers each had a whole day of discussions related to a wide range of topics. I’ll be summarising the events of the PulseAudio mini summit day here. The discussion was split into two parts, the first half of the day with just the current core developers and the latter half with members of the community participating as well.

I’d like to thank the Linux Foundation for sparing us a room to carry out these discussions — it’s fantastic that we are able to colocate such meetings with a bunch of other conferences, making it much easier than it would otherwise be for all of us to converge to a single place, hash out ideas, and generally have a good time in real life as well!


Happy faces — incontrovertible proof that everyone loves PulseAudio!

With a whole day of discussions, this is clearly going to be a long post, so you might want to grab a coffee now. :)

Release plan

We have a few blockers for 6.0, and some pending patches to merge (mainly HSP support). Once this is done, we can proceed to our standard freeze → release candidate → stable process.

Build simplification for BlueZ HFP/HSP backends

For simplifying packaging, it would be nice to be able to build all the available BlueZ module backends in one shot. There wasn’t much opposition to this idea, and David (Henningsson) said he might look at this. (as I update this before posting, he already has)

srbchannel plans

We briefly discussed plans around the recently introduced shared ringbuffer channel code for communication between PulseAudio clients and the server. We talked about the performance benefits, and future plans such as direct communication between the client and server-side I/O threads.

Routing framework patches

Tanu (Kaskinen) has a long-standing set of patches to add a generic routing framework to PulseAudio, developed notably by Jaska Uimonen, Janos Kovacs, and other members of the Tizen IVI team. This work adds a set of new concepts that we’ve not been entirely comfortable merging into the core. To unblock these patches, it was agreed that doing this work in a module and using a protocol extension API would be more beneficial. (Tanu later did a demo of the CLI extensions that have been made for the new routing concepts)

module-device-manager

As a consequence of the discussion around the routing framework, David mentioned that he’d like to take forward Colin’s priority list work in the meantime. Based on our discussions, it looked like it would be possible to extend module-device-manager to make it port aware and get the kind of functionality we want (the ability to have a priority-order list of devices). David was to look into this.

Module writing infrastructure

Relatedly, we discussed the need to export the PA internal headers to allow externally built modules. We agreed that this would be okay to have if it was made abundantly clear that this API would have absolutely no stability guarantees, and is mostly meant to simplify packaging for specialised distributions.

Which led us to the other bit of infrastructure required to write modules more easily — making our protocol extension mechanism more generic. Currently, we have a static list of protocol extensions in our core. Changing this requires exposing our pa_tagstruct structure as public API, which we haven’t done. If we don’t want to do that, then we would expose a generic “throw this blob across the protocol” mechanism and leave it to the module/library to take care of marshalling/unmarshalling.
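The “throw this blob across the protocol” idea is easy to sketch in miniature. The toy Python below is purely illustrative (PulseAudio itself is written in C, and the names register_extension, send_blob and dispatch are made up): the point is only that the core frames and routes opaque bytes, while marshalling and unmarshalling stay entirely inside the module or client library.

```python
import struct

# Registered extension handlers, keyed by extension name.
_handlers = {}

def register_extension(name, handler):
    """A module/library registers a handler for its own blobs."""
    _handlers[name] = handler

def send_blob(name, payload: bytes) -> bytes:
    """Frame a blob: 2-byte name length, name, then the opaque payload.
    The 'core' never inspects the payload bytes."""
    encoded = name.encode()
    return struct.pack(">H", len(encoded)) + encoded + payload

def dispatch(frame: bytes):
    """Core-side routing: peel off the name, hand the payload to the
    registered handler, which does its own unmarshalling."""
    (nlen,) = struct.unpack(">H", frame[:2])
    name = frame[2:2 + nlen].decode()
    return _handlers[name](frame[2 + nlen:])

# A hypothetical routing extension interprets its own payload format:
register_extension("routing", lambda payload: payload.decode().upper())
print(dispatch(send_blob("routing", b"set-port")))  # SET-PORT
```

With this shape, nothing like pa_tagstruct ever needs to become public API; only the framing above is fixed.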

Resampler quality evaluation

Alexander shared a number of his findings about resampler quality on PulseAudio, vs. those found on Windows and Mac OS. Some questions were asked about other parameters, such as relative CPU consumption, etc. There was also some discussion on how to try to carry this work to a conclusion, but no clear answer emerged.

It was also agreed on the basis of this work that support for libsamplerate and ffmpeg could be phased out after deprecation.

Addition of a “hi-fi” mode

The discussion came around to the possibility of having a mode where (if the hardware supports it), PulseAudio just plays out samples without resampling, conversion, etc. This has been brought up in the past for “audiophile” use cases where the card supports 88.2/96 kHz and higher sample rates.

No objections were raised to having such a mode — I’d like to take this up at some point.

LFE channel module

Alexander has some code for filtering low frequencies for the LFE channel, currently as a virtual sink, that could eventually be integrated into the core.

rtkit

David raised a question about the current status of rtkit and whether it needs to exist, and if so, where. Lennart brought up the fact that rtkit currently does not work on systemd+cgroups based setups (I don’t seem to have why in my notes, and I don’t recall off the top of my head).

The conclusion of the discussion was that some alternate policy method for deciding RT privileges, possibly within systemd, would be needed, but for now rtkit should be used (and fixed!)

kdbus/memfd

Discussions came up about the possibility of using kdbus and/or memfd for the PulseAudio transport. This is interesting to me, but there doesn’t seem to be an immediately clear benefit over our SHM mechanism in terms of performance; some work to evaluate how this could be used, and what the benefit would be, still needs to be done.

ALSA controls spanning multiple outputs

David has now submitted patches for controls that affect multiple outputs (such as “Headphone+LO”). These are currently being discussed.

Audio groups

Tanu would like to add code to support collecting audio streams into “audio groups” to apply collective policy to them. I am supposed to help review this, and Colin mentioned that module-stream-restore already uses similar concepts.

Stream and device objects

Tanu proposed the addition of new objects to represent streams and devices. There didn’t seem to be consensus on adding these, but there was agreement on a clear need to consolidate common code from sink-input/source-output and sink/source implementations. The idea was that having a common parent object for each pair might be one way to do this. I volunteered to help with this if someone’s taking it up.

Filter sinks

Alexander brought up the need for a filter API in PulseAudio, and this is something I really would like to have. I am supposed to sketch out an API (though implementing this is non-trivial and will likely take time).

Dynamic PCM for HDMI

David plans to see if we can use profile availability to help determine when an HDMI device is actually available.

Browser volumes

The usability of flat-volumes for browser use cases (where the volume of streams can be controlled programmatically) was discussed, and my patch to allow optional opt-out by a stream from participating in flat volumes came up. Tanu and I are to continue the discussion already on the mailing list to come up with a solution for this.

Handling bad rewinding code

Alexander raised concerns about the quality of rewinding code in some of our filter modules. The agreement was that we needed better documentation on handling rewinds, including how to explicitly not allow rewinds in a sink. The example virtual sink/source code also needs to be adjusted accordingly.

BlueZ native backend

Wim Taymans’ work on adding back HSP support to PulseAudio came up. Since the meeting, I’ve reviewed and merged this code with the changes we wanted. Speaking to Luiz Augusto von Dentz from the BlueZ side, something we should also be able to add back is for PulseAudio to act as an HSP headset (using the same approach as for HSP gateway support).

Containers and PA

Takashi Iwai raised a question about what a good way to run PA in a container was. The suggestion was that a tunnel sink would likely be the best approach.

Common ALSA configuration

Based on discussion from the previous day at the Linux Audio mini-summit, I’m supposed to look at the possibility of consolidating the various mixer configuration formats we currently have to deal with (primarily UCM and its implementations, and Android’s XML format).

(thanks to Tanu, David and Peter for reviewing this)

November 03, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)

The latest SSL attack was called POODLE. Image source
The world of SSL/TLS Internet encryption is in trouble again. You may have heard that recently a new vulnerability called POODLE has been found in the ancient SSLv3 protocol. Shortly before that, another vulnerability called BERserk was found (which hasn't received the attention it deserved, because it was published on the same day as Shellshock).
I think it is crucial to understand what led to these vulnerabilities. I find POODLE and BERserk so interesting because these two vulnerabilities were both unnecessary and could've been avoided by intelligent design choices. Okay, let's start by investigating what went wrong.

The mess with CBC

POODLE (Padding Oracle On Downgraded Legacy Encryption) is a weakness in the CBC block mode and the padding of the old SSL protocol. If you've followed previous stories about SSL/TLS vulnerabilities this shouldn't be news. There have been a whole number of CBC-related vulnerabilities, most notably the Padding oracle (2003), the BEAST attack (2011) and the Lucky Thirteen attack (2013) (Lucky Thirteen is kind of my favorite, because it was already more or less mentioned in the TLS 1.2 standard). The POODLE attack builds on ideas already used in previous attacks.

CBC is a so-called block mode. For now it should be enough to understand that we have two kinds of ciphers we use to authenticate and encrypt connections – block ciphers and stream ciphers. Block ciphers need a block mode to operate. There's nothing necessarily wrong with CBC, it's the way CBC is used in SSL/TLS that causes problems. There are two weaknesses in it: Early versions (before TLS 1.1) use a so-called implicit Initialization Vector (IV) and they use a method called MAC-then-Encrypt (used up until the very latest TLS 1.2, but there's a new extension to fix it) which turned out to be quite fragile when it comes to security. The CBC details would be a topic on their own and I won't go into the details now. The long-term goal should be to get rid of all these (old-style) CBC modes, however that won't be possible for quite some time due to compatibility reasons. As most of these problems have been known since 2003 it's about time.
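How much slack the old padding rules leave an attacker can be seen in the padding check itself. The toy Python below is purely illustrative (these are not real record-layer functions): SSLv3 only inspects the padding-length byte, while TLS additionally demands that every padding byte carry the pad value. That freedom in the SSLv3 rule is precisely the kind of ambiguity POODLE's oracle exploits.

```python
def ssl3_padding_ok(plaintext: bytes, block_size: int = 16) -> bool:
    """SSLv3-style check: only the padding-*length* byte is inspected;
    the padding bytes themselves may hold arbitrary values."""
    pad = plaintext[-1]
    return pad < block_size and len(plaintext) > pad

def tls_padding_ok(plaintext: bytes) -> bool:
    """TLS-style check: the length byte and every padding byte must
    all equal the pad value."""
    pad = plaintext[-1]
    if len(plaintext) < pad + 1:
        return False
    return all(b == pad for b in plaintext[-(pad + 1):])

# A block ending in a small byte passes the SSLv3 check no matter what
# the other 15 bytes contain -- attacker-controlled garbage is fine:
garbage = bytes(range(15)) + bytes([7])
print(ssl3_padding_ok(garbage))  # True
print(tls_padding_ok(garbage))   # False
```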

The evil Protocol Dance

The interesting question with POODLE is: why does a security issue in an ancient protocol like SSLv3 bother us at all? SSL was developed by Netscape in the mid-90s; it has two public versions, SSLv2 and SSLv3. In 1999 (15 years ago) the old SSL was deprecated and replaced with TLS 1.0 (https://tools.ietf.org/html/rfc2246), standardized by the IETF. People still used SSLv3 up until very recently, mostly for compatibility reasons. But even that in itself isn't the problem. SSL/TLS has a mechanism to safely choose the best protocol available. In a nutshell it works like this:

a) A client (e.g. a browser) connects to a server and may say something like "I want to connect with TLS 1.2"
b) The server may answer "No, sorry, I don't understand TLS 1.2, can you please connect with TLS 1.0?"
c) The client says "Ok, let's connect with TLS 1.0"

The point here is: Even if both server and client support the ancient SSLv3, they'd usually not use it. But this is the idealized world of standards. Now welcome to the real world, where things like this happen:

a) A client (e.g. a browser) connects to a server and may say something like "I want to connect with TLS 1.2"
b) The server thinks "Oh, TLS 1.2, never heard of that. What should I do? I better say nothing at all..."
c) The browser thinks "Ok, server doesn't answer, maybe we should try something else. Hey, server, I want to connect with TLS 1.1"
d) The browser will retry all SSL versions down to SSLv3 till it can connect.
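That retry loop can be modelled in a few lines. The sketch below is hypothetical Python, not any browser's actual code, but it shows why the fallback is dangerous: an active attacker who simply kills every handshake above SSLv3 is indistinguishable from a broken server, so the client "dances" down to SSLv3 all by itself.

```python
# Hypothetical model of the browser-side fallback loop.
VERSIONS = ["TLS 1.2", "TLS 1.1", "TLS 1.0", "SSLv3"]  # best first

def connect_with_fallback(try_handshake):
    """try_handshake(version) returns the negotiated version or raises
    ConnectionError; on error we silently retry one version lower."""
    for version in VERSIONS:
        try:
            return try_handshake(version)
        except ConnectionError:
            continue  # broken server? attacker? the client can't tell
    raise ConnectionError("no protocol version accepted")

# An active attacker only has to drop handshakes above SSLv3 to look
# like a "broken server" and force the weakest protocol:
def attacker_in_the_middle(version):
    if version != "SSLv3":
        raise ConnectionError("connection reset")  # attacker kills it
    return version

print(connect_with_fallback(attacker_in_the_middle))  # SSLv3
```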

The Protocol Dance is a Dance with the Devil. Image source
So here's our problem: there are broken servers out there that don't answer at all if they see a connection attempt with an unknown protocol. The well-known SSL test by Qualys checks for this behaviour and calls it "protocol intolerance" (but "protocol brokenness" would be more precise). When a connection fails, browsers will try all the old protocols they know until they can connect. This behaviour is now known as the "Protocol Dance" - and it causes all kinds of problems.

I first encountered the Protocol Dance back in 2008. Back then I already used a technology called SNI (Server Name Indication) that allows having multiple websites with multiple certificates on a single IP address. I regularly got complaints from people who saw the wrong certificates on those SNI webpages. A bug report to Firefox and some analysis revealed the reason: the protocol downgrades don't just happen when servers don't answer new protocol requests, they can also happen on faulty or weak internet connections. SSLv3 does not support SNI, so when a downgrade to SSLv3 happens you get the wrong certificate. This was quite frustrating: a compatibility feature that was purely there to support broken hardware caused my completely legit setup to fail every now and then.

But the more severe problem is this: The Protocol Dance will allow an attacker to force downgrades to older (less secure) protocols. He just has to stop connection attempts with the more secure protocols. And this is why the POODLE attack was an issue after all: The problem was not backwards compatibility. The problem was attacker-controlled backwards compatibility.

The idea that the Protocol Dance might be a security issue wasn't completely new either. At the Black Hat conference this year Antoine Delignat-Lavaud presented a variant of an attack he calls "Virtual Host Confusion" where he relied on downgrading connections to force SSLv3 connections.

The "whoever breaks it first" principle

The Protocol Dance is an example of something that I feel is an unwritten rule of browser development today: browser vendors don't want things to break - even if the breakage is the fault of someone else. So they add all kinds of compatibility technologies that are purely there to support broken hardware. The idea is: when someone introduced broken hardware at some point - and it worked because the brokenness wasn't triggered at that point - the broken stuff is allowed to stay and everyone else has to deal with it.

To avoid the Protocol Dance a new feature is now on its way: It's called SCSV and the idea is that the Protocol Dance is stopped if both the server and the client support this new protocol feature. I'm extremely uncomfortable with that solution because it just adds another layer of duct tape and increases the complexity of TLS which already is much too complex.
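For completeness, the SCSV idea itself boils down to one extra check. The sketch below is hypothetical Python (server_accepts is made up; the real mechanism is the TLS_FALLBACK_SCSV signalling cipher suite): a client marks any retry that is a fallback, and a server supporting a higher version aborts when it sees the mark, which stops the attacker-forced downgrade without breaking genuinely old clients, who never send it.

```python
# Hypothetical model of the TLS_FALLBACK_SCSV check.
ORDER = ["SSLv3", "TLS 1.0", "TLS 1.1", "TLS 1.2"]  # ascending strength

def server_accepts(server_max, client_version, fallback_scsv):
    """An SCSV-aware server aborts when it sees the fallback signal
    together with a version lower than the best one it supports.
    A genuinely old client never sends the signal, so it still works."""
    if fallback_scsv and ORDER.index(client_version) < ORDER.index(server_max):
        raise ConnectionError("inappropriate fallback")
    return min(client_version, server_max, key=ORDER.index)

# First attempt, no fallback marker: negotiates normally.
print(server_accepts("TLS 1.2", "TLS 1.2", False))  # TLS 1.2
# An attacker-forced retry at TLS 1.0 carries the SCSV -> server refuses.
try:
    server_accepts("TLS 1.2", "TLS 1.0", True)
except ConnectionError as e:
    print(e)  # inappropriate fallback
```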

There's another recent example which is very similar: At some point people found out that BIG-IP load balancers by the company F5 had trouble with TLS connection attempts larger than 255 bytes. However it was later revealed that connection attempts bigger than 512 bytes also succeed. So a padding extension was invented and it's now widespread behaviour of TLS implementations to avoid connection attempts between 256 and 511 bytes. To make matters completely insane: It was later found out that there is other broken hardware – SMTP servers by Ironport – that breaks when the handshake is larger than 511 bytes.

I have a principle when it comes to fixing things: fix it where it's broken. But the browser world works differently. It works with the "whoever breaks it first defines the new standard of brokenness" principle. This is partly due to an unhealthy competition between browsers. Unfortunately they often don't compete very well on the security level. What you'll constantly hear is that browsers can't break any webpages, because that will lead to people moving to other browsers.

I'm not sure if I entirely buy this kind of reasoning. For a couple of months the support for the ftp protocol in Chrome / Chromium was broken. I'm no fan of plain, unencrypted ftp, and its only legit use case - unauthenticated file download - can just as easily be fulfilled with unencrypted http, but there are a number of live ftp servers that implement a legit and working protocol. I like Chromium and it's my everyday browser, but for a while the broken ftp support was the most prevalent reason I tended to start Firefox. This little episode makes it hard for me to believe that they can't break connections to some (broken) ancient SSL servers. (I just noted that the very latest version of Chromium has fixed ftp support again.)

BERserk, small exponents and PKCS #1 1.5

We have a problem with weak keys. Image source
Okay, now let's talk about the other recent TLS vulnerability: BERserk. Independently Antoine Delignat-Lavaud and researchers at Intel found this vulnerability which affected NSS (and thus Chrome and Firefox), CyaSSL, some unreleased development code of OpenSSL and maybe others.

BERserk is actually a variant of a quite old vulnerability (you may begin to see a pattern here): the Bleichenbacher attack on RSA, first presented at Crypto 2006. Now here things get confusing, because the cryptographer Daniel Bleichenbacher found two independent vulnerabilities in RSA: one in RSA encryption in 1998 and one in RSA signatures in 2006; for convenience I'll call them BB98 (encryption) and BB06 (signatures). Both of these vulnerabilities expose faulty implementations of the old RSA standard PKCS #1 1.5. And both are what I like to call "zombie vulnerabilities": they keep coming back, no matter how often you try to fix them. In April the BB98 vulnerability was re-discovered in the code of Java, and it was silently fixed in OpenSSL some time last year.

But BERserk is about the other one: BB06. BERserk exposes the fact that inside the RSA function an algorithm identifier for the hash function used is embedded, and it's encoded with BER. BER is part of ASN.1. I could tell horror stories about ASN.1, but I'll spare you that for now; maybe this is a topic for another blog entry. It's enough to know that it's a complicated format, and this is what bites us here: with some trickery in the BER encoding one can add further data into the RSA function - and this allows, in certain situations, the creation of forged signatures.

One thing should be made clear: both the original BB06 attack and BERserk are flaws in the implementation of PKCS #1 1.5. If you do everything correctly then you're fine. These attacks exploit the relatively simple structure of the old PKCS standard and they only work when RSA is done with a very small exponent. RSA public keys consist of two large numbers: the modulus N (which is a product of two large primes) and the exponent.

In his presentation at Crypto 2006 Daniel Bleichenbacher already proposed what would have prevented this attack: Just don't use RSA keys with very small exponents like three. This advice also went into various recommendations (e. g. by NIST) and today almost everyone uses 65537 (the reason for this number is that due to its binary structure calculations with it are reasonably fast).

There's just one problem: a small number of keys that use the exponent e=3 are still out there. And six of them are used by root certificates installed in every browser. These root certificates are the trust anchor of TLS (which in itself is a problem, but that's another story). Here's our problem: as long as there is one single root certificate with e=3, such an attack lets you create as many fake certificates as you want. If we had deprecated e=3 keys, BERserk would've been mostly a non-issue.

There is one more aspect of this story: What's this PKCS #1 1.5 thing anyway? It's an old standard for RSA encryption and signatures. I want to quote Adam Langley on the PKCS standards here: "In a modern light, they are all completely terrible. If you wanted something that was plausible enough to be widely implemented but complex enough to ensure that cryptography would forever be hamstrung by implementation bugs, you would be hard pressed to do better."

Now there's a successor to the PKCS #1 1.5 standard: PKCS #1 2.1, which is based on technologies called PSS (Probabilistic Signature Scheme) and OAEP (Optimal Asymmetric Encryption Padding). It's from 2002 and in many aspects it's much better. I am kind of a fan here, because I wrote my thesis about this. There's just one problem: although already standardized in 2002, people still prefer to use the much weaker old PKCS #1 1.5. TLS doesn't have any way to use the newer PKCS #1 2.1, and even the current drafts for TLS 1.3 stick to the older - and weaker - variant.

What to do

I would take bets that POODLE wasn't the last TLS/CBC-issue we saw and that BERserk wasn't the last variant of the BB06-attack. Basically, I think there are a number of things TLS implementers could do to prevent further similar attacks:

* The Protocol Dance should die. Don't put another layer of duct tape around it (SCSV), just get rid of it. It will break a small number of already broken devices, but that is a reasonable price for avoiding the next protocol downgrade attack scenario. Backwards compatibility shouldn't compromise security.
* More generally, I think the working around of broken devices has to stop. Replace the "whoever broke it first" paradigm with a "fix it where it's broken" paradigm. That also means I think the padding extension should be scrapped.
* Keys with weak choices need to be deprecated at some point. In a long process browsers removed most certificates with short 1024 bit keys. They're working hard on deprecating signatures with the weak SHA1 algorithm. I think e=3 RSA keys should be next on the list for deprecation.
* At some point we should deprecate the weak CBC modes. This is probably the trickiest part, because up until very recently TLS 1.0 was all that most major browsers supported. The only way to avoid them is either using the GCM mode of TLS 1.2 (most browsers just got support for that in recent months) or using a very new extension (https://tools.ietf.org/html/rfc7366) that's rarely used at all today.
* If we have better technologies we should start using them. PKCS #1 2.1 is clearly superior to PKCS #1 1.5, at least if new standards get written people should switch to it.

Update: I just read that Mozilla Firefox devs disabled the protocol dance in their latest nightly build. Let's hope others follow.

November 02, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

I just finished updating 102 packages. The change? Removing the following from the ebuilds:

DEPEND="selinux? ( sec-policy/selinux-${packagename} )"

In the past, we needed this construction in both DEPEND and RDEPEND. Recently however, the SELinux eclass got updated with some logic to relabel files after the policy package is deployed. As a result, the DEPEND variable no longer needs to refer to the SELinux policy package.

This change also means that those moving from a regular Gentoo installation to an SELinux installation will have far fewer packages to rebuild. In the past, enabling USE="selinux" (through the SELinux profiles) would rebuild all packages that have a DEPEND dependency on the SELinux policy package. No more – only packages that depend on the SELinux libraries (like libselinux) or utilities are rebuilt. The rest will just pull in the proper policy package.

October 31, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
EVE Online on Gentoo Linux (October 31, 2014, 16:56 UTC)

Good news, everyone! I’m finally rid of Windows.

A couple weeks ago my Windows installation corrupted itself on the 5-minute trip home from the community theatre. I didn’t command it to go to sleep, I just unplugged it and closed the lid. Somehow, it managed to screw up its startup files, and the restore process didn’t do what it was supposed to, so I was greeted with a blank screen. No errors. Just staring into the void.

I’ve been using Windows as the sole OS on this machine with Gentoo running in VirtualBox for various reasons related to minor annoyances of unsupported hardware, but as I needed a working machine sooner rather than later and the only tools I could find to solve my Windows problem appeared to be old, defunct, and/or suspicious, I downloaded an ISO of SystemRescueCd (www.sysresccd.org) and installed Gentoo in the sliver of space left on the drive.

There were only two real reasons why I was intent on keeping Windows: Netflix (netflix.com) and EVE Online (eveonline.com). I intended to get Windows up and running once the show was over at the theatre, but then I read about Netflix being supported in Linux (www.mpagano.com). That left me with just one reason to keep Windows: EVE. I turned to Wine (www.winehq.org) and discovered reports of it running EVE quite well (appdb.winehq.org). I also learned that the official Mac OS release of EVE runs on Cider (www.transgaming.com), which is based on Wine.

I had another hitch: I chose the no-multilib stage3 for that original sliver thinking I wouldn’t be running anything other than 64 bit software, and drive space was at a premium. EVE Online is 32 bit.

So I had to begin my adventure by switching to multilib. This didn’t involve me reinstalling Gentoo, thanks to a handy, but unsupported and unofficial, guide (jkroon.blogs.uls.co.za) by Jaco Kroon.

As explained on Multilib System without emul-linux Packages (wiki.gentoo.org), I decided it’s better to build my own 32 bit libraries. So, the next step is to mask the emulation packages:

# /etc/portage/package.mask
app-emulation/emul-linux-x86-*

Because I didn’t want to build a 32 bit variant for everything on my system, I iterated through what Portage wanted and marked several packages to build their 32 bit variant via use flags. This is what I wound up with:

# /etc/portage/package.use
app-arch/bzip2 abi_x86_32
app-emulation/wine mono abi_x86_32
dev-libs/elfutils static-libs abi_x86_32
dev-libs/expat abi_x86_32
dev-libs/glib abi_x86_32
dev-libs/gmp abi_x86_32
dev-libs/icu abi_x86_32
dev-libs/libffi abi_x86_32
dev-libs/libgcrypt abi_x86_32
dev-libs/libgpg-error abi_x86_32
dev-libs/libpthread-stubs abi_x86_32
dev-libs/libtasn1 abi_x86_32
dev-libs/libxml2 abi_x86_32
dev-libs/libxslt abi_x86_32
dev-libs/nettle abi_x86_32
dev-util/pkgconfig abi_x86_32
media-libs/alsa-lib abi_x86_32
media-libs/fontconfig abi_x86_32
media-libs/freetype abi_x86_32
media-libs/glu abi_x86_32
media-libs/libjpeg-turbo abi_x86_32
media-libs/libpng abi_x86_32
media-libs/libtxc_dxtn abi_x86_32
media-libs/mesa abi_x86_32
media-libs/openal abi_x86_32
media-sound/mpg123 abi_x86_32
net-dns/avahi abi_x86_32
net-libs/gnutls abi_x86_32
net-print/cups abi_x86_32
sys-apps/dbus abi_x86_32
sys-devel/llvm abi_x86_32
sys-fs/udev gudev abi_x86_32
sys-libs/gdbm abi_x86_32
sys-libs/ncurses abi_x86_32
sys-libs/zlib abi_x86_32
virtual/glu abi_x86_32
virtual/jpeg abi_x86_32
virtual/libffi abi_x86_32
virtual/libiconv abi_x86_32
virtual/libudev abi_x86_32
virtual/opengl abi_x86_32
virtual/pkgconfig abi_x86_32
x11-libs/libX11 abi_x86_32
x11-libs/libXau abi_x86_32
x11-libs/libXcursor abi_x86_32
x11-libs/libXdamage abi_x86_32
x11-libs/libXdmcp abi_x86_32
x11-libs/libXext abi_x86_32
x11-libs/libXfixes abi_x86_32
x11-libs/libXi abi_x86_32
x11-libs/libXinerama abi_x86_32
x11-libs/libXrandr abi_x86_32
x11-libs/libXrender abi_x86_32
x11-libs/libXxf86vm abi_x86_32
x11-libs/libdrm abi_x86_32
x11-libs/libvdpau abi_x86_32
x11-libs/libxcb abi_x86_32
x11-libs/libxshmfence abi_x86_32
x11-proto/damageproto abi_x86_32
x11-proto/dri2proto abi_x86_32
x11-proto/dri3proto abi_x86_32
x11-proto/fixesproto abi_x86_32
x11-proto/glproto abi_x86_32
x11-proto/inputproto abi_x86_32
x11-proto/kbproto abi_x86_32
x11-proto/presentproto abi_x86_32
x11-proto/randrproto abi_x86_32
x11-proto/renderproto abi_x86_32
x11-proto/xcb-proto abi_x86_32 python_targets_python3_4
x11-proto/xextproto abi_x86_32
x11-proto/xf86bigfontproto abi_x86_32
x11-proto/xf86driproto abi_x86_32
x11-proto/xf86vidmodeproto abi_x86_32
x11-proto/xineramaproto abi_x86_32
x11-proto/xproto abi_x86_32

Now emerge both Wine — the latest and greatest of course — and the questionable library so textures will be rendered:

emerge -av media-libs/libtxc_dxtn =app-emulation/wine-1.7.29

You may get some messages along the lines of:

emerge: there are no ebuilds to satisfy ">=sys-libs/zlib-1.2.8-r1".

This was a bit of a head scratcher for me. I have sys-libs/zlib-1.2.8-r1 installed. I didn’t have to accept its keyword. It’s already stable! I haven’t really looked into why, but you have to accept its keyword to press forward:

# echo '=sys-libs/zlib-1.2.8-r1' >> /etc/portage/package.accept_keywords

You’ll have to do the above several times for other packages when you try to emerge Wine. Most of the time the particular version it wants is something you already have installed. Check what you have installed with eix or another favorite tool so you don’t downgrade anything. Once Wine is installed, run as your user:

$ winecfg

Download the EVE Online Windows installer and run it using Wine:

$ wine EVE_Online_Installer_*.exe

Once that’s done, invoke the launcher as:

$ force_s3tc_enable=true wine 'C:\Program Files (x86)\CCP\EVE\eve.exe'

force_s3tc_enable=true is needed to enable texture rendering. Without it, EVE will freeze during start up. (If you didn’t emerge media-libs/libtxc_dxtn, EVE will start, but none of the textures will load, and you’ll have a lot of black on black objects.) I didn’t have to do any of the other things I’ve found, such as disabling DirectX 11.

As for my Linux setup: I have a Radeon HD6480G (SUMO/r600) in my ThinkPad Edge E525, and I’m using the open source radeon (www.x.org) drivers with graphics on high and medium anti-aliasing with Mesa and OpenGL. For the most part, I find the game play to be smooth and indistinguishable from my experience on Windows.

There are a few things that don’t work well. There are psychedelic rendering artifacts galore when I open the in-game browser (IGB) or switch to another application, but that’s resolved without logging out of EVE by changing the graphics quality to something else. It may be related to resource caching, but I need to do more testing. I haven’t tried going into the Captain’s Quarters (other users have reported crashes entering there) as back on Windows that brings my system to a crawl, and there isn’t anything particularly interesting about going in there…yet.

Overall, I’m quite happy with the EVE/Wine experience on Gentoo. It was quite easy and there wasn’t any real troubleshooting for me to do.

If you’re a fellow Gentoo-er in EVE, drop me a line. If you want to give EVE a go, have an extra week on me.

Update: I’ve been informed by Aatos Taavi that running EVE in windowed mode works quite well. I’ve also been informed that we need to declare stable packages in package.accept_keywords because abi_x86_32 is use masked.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Using multiple priorities with modules (October 31, 2014, 16:24 UTC)

One of the new features of the 2.4 SELinux userspace is support for module priorities. The idea is that distributions and administrators can override a (pre)loaded SELinux policy module with another module without removing the previous module. This lower-version module will remain in the store, but will not be active until the higher-priority module is disabled or removed again.

The “old” modules (pre-2.4) are loaded with priority 100. Policy modules loaded with the 2.4 SELinux userspace series get priority 400. As a result, the following message appears:

~# semodule -i screen.pp
libsemanage.semanage_direct_install_info: Overriding screen module at lower priority 100 with module at priority 400

So unlike the previous situation, where the older module was replaced by the new one, we now have two “screen” modules loaded; the last one gets priority 400 and is the active one. To see all installed modules and their priorities, use the --list-modules option:

~# semodule --list-modules=all | grep screen
100 screen     pp
400 screen     pp

Older versions of modules can be removed by specifying the priority:

~# semodule -X 100 -r screen
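Listing the modules again should then show only the priority-400 copy. This is a sketch of the expected transcript, continuing the example above rather than output captured from the post:

```
~# semodule --list-modules=all | grep screen
400 screen     pp
```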

October 30, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have been trying my best not to comment on systemd one way or another for a while. For the most part because I don't want to have a trollfest on my blog: moderating one is something I hate, and I'm sure it would be needed. On the other hand, it seems people are starting to bring me into the conversation from time to time.

What I would like to point out at this point is that both extreme sides of the debate are, in my opinion, behaving childishly and totally unprofessionally: name-calling of people and of software, death threats, insults, satirical websites, labeling 300 people for the actions of a handful of them, and so on.

I don't think I have ever been as happy to have a job that allows me not to care about open source as in the past few weeks, as things keep escalating and escalating. You guys are the worst. And again I refer to both supporters and detractors: devs of systemd, devs of eudev, Debian devs and Gentoo devs, and so on and so forth.

And the reason I say this is that both sides want to bring this to extremes that I think are totally uncalled for. I don't see the world in black and white, and I think I've said that before. Gray is nuanced and interesting, and takes skill to navigate, so I understand it's easier to just take a stand and never revise your opinion, but the easy way is not what I care about.

Myself, I decided to migrate my non-server systems to systemd a few months ago. It works fine. I've considered migrating my servers, and I decided for the moment to wait. The reason is technical for the most part: I don't think I trust the stability promises for the moment and I don't reboot servers that often anyway.

There are good things to the systemd design. And I'm sure that very few people will really miss sysvinit as is. Most people, especially in Gentoo, have not been using sysvinit proper, but rather OpenRC, which shares more spirit with systemd than with sysv, either by coincidence or because both are just the right approach to things (declarativeness to begin with).

At the same time, I don't like Lennart's approach on this to begin with, and I don't think it's uncalled for to criticize the product based on the person in this case, as the two are tightly coupled. I don't like moderating people away from a discussion, because it just ends up making the discussion even more confrontational on the next forum you stumble across them in — this is why I never blacklisted Ciaran and friends from my blog, even after a group of them started pasting my face on pictures of Nazi soldiers from WW2. Yes, I agree that Gentoo has a good chunk of toxic supporters; I wish we had gotten rid of them a long while ago.

At the same time, if somebody were to try to categorize me the same way as the people who decided to fork udev without even thinking of what they were doing, I would point out that I was reproaching them from day one for their absolutely insane (and inane) starting announcement and first few commits. And I have never used it, since for the moment they seem to have made good on the promise of not making it impossible to run udev without systemd.

I don't agree with the complete direction right now, and especially with the one-size-fit-all approach (on either side!) that tries to reduce the "software biodiversity". At the same time there are a few designs that would be difficult for me to attack given that they were ideas of mine as well, at some point. Such as the runtime binary approach to hardware IDs (that Greg disagreed with at the time and then was implemented by systemd/udev), or the usage of tmpfs ACLs to allow users at the console to access devices — which was essentially my original proposal to get rid of pam_console (that played with owners instead, making it messy when having more than one user at console), when consolekit and its groups-fiddling was introduced (groups can be used for setgid, not a good idea).

So why am I posting this? Mostly to tell everybody out there that if you plan on using me for either side point to be brought home, you can forget about it. I'll probably get pissed off enough to try to prove the exact opposite, and then back again.

Neither of you is perfectly right. You both make mistakes. And you are both unprofessional. Try to grow up.

Edit: I mistyped eudev in the original article and it read euscan. Sorry Corentin, was thinking one thing and typing another.