
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Zack Medico

Last updated:
July 24, 2014, 17:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

July 22, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LibreSSL: drop-in and ABI leakage (July 22, 2014, 23:09 UTC)

There has been some confusion on my previous post with Bob Beck of LibreSSL on whether I would advocate for using a LibreSSL shared object as a drop-in replacement for an OpenSSL shared object. Let me state this here, boldly: you should never, ever, for no reason, use shared objects from different major/minor OpenSSL versions or implementations (such as LibreSSL) as a drop-in replacement for one another.

The reason is, obviously, that the ABI of these libraries differs, sometimes subtly enough that they may actually load and run, but then perform abysmally insecure operations, as their data structures will have changed, and now instead of reading your randomly-generated key, you may be reading the master private key. And in general, for other libraries you may even be calling the wrong set of functions, especially for those written in C++, where the vtable content may be rearranged across versions.

What I was discussing in the previous post was the fact that lots of proprietary software packages, by bundling a version of Curl that depends on the RAND_egd() function, will require either unbundling it, or keeping along a copy of OpenSSL to use for runtime linking. And I think that is a problem that people need to consider now rather than later for a very simple reason.

Even if LibreSSL (or any other reimplementation, for what matters) takes hold as the default implementation for all Linux (and non-Linux) distributions, you'll never be able to fully forget about OpenSSL: not only if you have proprietary software that you maintain, but also because a huge amount of software (and especially hardware) out there will not be able to update easily. And the fact that LibreSSL is throwing away so much of the OpenSSL clutter also means that it'll be more difficult to backport fixes — while at the same time I think that a good chunk of the black hattery will focus on OpenSSL, especially if it feels "abandoned", while most of the users will still be using it somehow.

But putting aside the problem of the direct drop-in incompatibilities, there is one more problem that people need to understand, especially Gentoo users, and most other systems that do not completely rebuild their package set when replacing a library like this. The problem is what I would call "ABI leakage".

Let's say you have a general libfoo that uses libssl; it uses a subset of the API that works with both OpenSSL and LibreSSL. Now you have a bar program that uses libfoo. If the library is written properly, then it'll treat all the data structures coming from libssl as opaque, providing no way for bar to call into libssl without depending on the SSL API du jour (and thus putting a direct dependency on libssl for the executable). But it's very well possible that libfoo is not well written and actually treats the libssl API as transparent. For instance a common mistake is to use one of the SSL data structures inline (rather than as a pointer) in one of its own public structures.

This situation would be barely fine, as long as the data types for libfoo are also completely opaque, as then it's only the code for libfoo that relies on the structures, and since you're rebuilding it anyway (as libssl is not ABI-compatible), you solve your problem. But if we keep assuming a worst-case scenario, then you have bar actually dealing with the data structures, for instance by allocating a sized buffer itself, rather than calling into a proper allocation function from libfoo. And there you have a problem.

Because now the ABI of libfoo is not directly defined by its own code, but also by whichever ABI libssl has! It's a similar problem as the symbol table used as an ABI proxy: while your software will load and run (for a while), you're really using a different ABI, as libfoo almost certainly does not change its soname when it's rebuilt against a newer version of libssl. And that can easily cause crashes and worse (see the note above about dropping in LibreSSL as a replacement for OpenSSL).
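To make the leak concrete, here is a minimal C sketch; libfoo, struct foo_connection and conn_alloc() are made-up names used only for illustration:

/* libfoo's public header: the libssl type is embedded inline rather than
 * hidden behind a pointer, so libssl's layout becomes part of libfoo's ABI. */
#include <openssl/ssl.h>

struct foo_connection {
    SSL session;   /* inline: sizeof(struct foo_connection) depends on libssl */
    int flags;
};

/* bar.c: the program allocates the structure itself instead of calling a
 * constructor provided by libfoo, baking the current size into the binary. */
#include <stdlib.h>

struct foo_connection *conn_alloc(void)
{
    return malloc(sizeof(struct foo_connection));
}

Rebuild libfoo against a libssl whose SSL layout differs and bar keeps allocating the old, now wrong, size, even though libfoo's soname never changed.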

Now honestly none of this is specific to LibreSSL. The same is true if you were to try using OpenSSL 1.0 shared objects for software built against OpenSSL 0.9 — which is why I cringed any time I heard people suggesting to use symlinks at the time, and it seems like people are giving the same suicidal suggestion now with OpenSSL, according to Bob.

So once again, don't expect binary-compatibility across different versions of OpenSSL, LibreSSL, or any other implementation of the same API, unless they explicitly aim for that (and LibreSSL definitely doesn't!)

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)

One of the first tools that you should get if you’re hacking with GStreamer or want to play with the latest version without doing evil things to your system is probably the gst-uninstalled script. It’s the equivalent of Python’s virtualenv for hacking on GStreamer. :)

The documentation around getting this set up is a bit frugal, though, so here’s my attempt to clarify things. I was going to put this on our wiki, but that’s a bit search-engine unfriendly, so probably easiest to just keep it here. The setup I outline below can probably be automated further, and comments/suggestions are welcome.

  • First, get build dependencies for GStreamer core and plugins on your distribution. Commands to do this on some popular distributions follow. This will install a lot of packages, but should mean that you won’t have to play find-the-plugin-dependency for your local build.

    • Fedora: $ sudo yum-builddep gstreamer1-*
    • Debian/Ubuntu: $ sudo apt-get build-dep gstreamer1.0-plugins-{base,good,bad,ugly}
    • Gentoo: having the GStreamer core and plugin packages should suffice
    • Others: drop me a note with the command for your favourite distro, and I’ll add it here
  • Next, check out the code (by default, it will turn up in ~/gst/master)

    • $ curl http://cgit.freedesktop.org/gstreamer/gstreamer/plain/scripts/create-uninstalled-setup.sh | sh
    • Ignore the pointers to documentation that you see — they’re currently defunct
  • Now put the gst-uninstalled script somewhere you can get to it easily:

    • $ ln -sf ~/gst/master/gstreamer/scripts/gst-uninstalled ~/bin/gst-master
    • (the -master suffix for the script is important to how the script works)
  • Enter the uninstalled environment:

    • $ ~/bin/gst-master
    • (this puts you in the directory with all the checkouts, and sets up a bunch of environment variables to use your uninstalled setup – check with echo $GST_PLUGIN_PATH)
  • Time to build

    • $ ./gstreamer/scripts/git-update.sh
  • Take it out for a spin

    • $ gst-inspect-1.0 filesrc
    • $ gst-launch-1.0 playbin uri=file:///path/to/some/file
    • $ gst-discoverer-1.0 /path/to/some/file
  • That’s it! Some tips:

    • Remember that you need to run ~/bin/gst-master to enter the environment for each new shell
    • If you start up a GStreamer app from your system in this environment, it will use your uninstalled libraries and plugins
    • You can and should periodically update your tree by rerunning the git-update.sh script
    • To run gdb on gst-launch, you need to do something like:
    • $ libtool --mode=execute gdb --args gstreamer/tools/gst-launch-1.0 videotestsrc ! videoconvert ! xvimagesink
    • I find it useful to run cscope on the top-level tree, and use that for quick code browsing

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Privacy Theatre (July 22, 2014, 08:26 UTC)

I really wish I could take credit for the term, but Jürgen points out he coined the term way before me, in German: Datenschutztheater. I still like to think that the name fits many behaviours I see out there, and it's not a coincidence that it sounds like the way we think of TSA's rules at airports, security theatre.

I have seen lots and lots of people advocating for 100% encryption of everything, and hiding information, and all kinds of (in my opinion) overly paranoid suggestions for everybody, without understanding any threat model at all, and completely forgetting that your online privacy is only a small part of the picture.

I have been reminded of this as I proceeded sorting out my paperwork here in Dublin, which started piling up a little too much. My trick is the usual I used in Italy too: scan whatever is important to keep a copy of, and unless the original is required for anything, I destroy the hard copy. I don't trash it, I destroy it. I include anything that has my address on it, and when I was destroying it with my personal shredder, I always made sure to include enough "harmless" papers in the mix to make it more difficult to filter out the parts that looked important.

As I said in my previous post, I'm not worried about "big" corporations knowing things about me, like Tesco knowing what I like to buy. I find it useful, and I don't have a problem with that. On the other hand, I would have a problem if anybody, wanting to attack me directly, decided to dumpster-dive me.

Another common problem I see that I categorize as Privacy Theatre is the astounding lack of what others would call OpSec. I have seen plenty of people at conferences, even in security training, using their laptop without consideration for the other people in the room, and without any sort of privacy screen. At one of the past conferences I've seen mail admins from a provider that will go unnamed working on production issues in front of my eyes: if I had mischievous intent I would have learnt quite a bit about their production environment.

Yes I know that the screens are a pain, and that you have to keep taking them in and out, and that they take away some of the visual space on your monitor. Myself, for my personal laptop I went with a gold privacy screen by 3M, which is bearable to use even if you don't need it, as long as you don't need to watch movies on your laptop (I don't; the laptop's display is good but I have a TV and a good monitor for that).

But there are tons of other, smaller pieces that people who insist they are privacy advocates really don't seem to care about. I'm not saying that you should be paranoid; actually I'm saying the exact opposite: try not to be the paranoid person that wants everything encrypted without understanding why. In most cases, Internet communication does need to be encrypted. And you want to encrypt your important files if you put them in the cloud. But at the same time there are things that you don't really care about that much, and you're just making your life miserable in the name of the Crypto-Gods, while the same energy could be redirected to protecting yourself from more realistic petty criminals.

July 20, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LibreSSL and the bundled libs hurdle (July 20, 2014, 09:55 UTC)

It was over five years ago that I ranted about the bundling of libraries and what that means for vulnerabilities found in those libraries. The world has, since, not really listened. RubyGems still keep insisting that "vendoring" gems is good, Go explicitly didn't implement a concept of shared libraries, and let's not even talk about Docker or OSv and their absolutism in static linking and bundling of the whole operating system, essentially.

It should have been obvious how this can be a problem when Heartbleed came out: bundled copies of OpenSSL would have needed separate updates from the system libraries. I guess lots of enterprise users of such software were saved only by the fact that most of the bundlers ended up using older versions of OpenSSL where heartbeat was not implemented at all.

Now that we're talking about replacing the OpenSSL libraries with those coming from a different project, we're going to be hit by both edges of the proprietary software sword: bundling and ABI compatibility, which will make things really interesting for everybody.

You may have seen the (short, incomplete) list of RAND_egd() users which I posted yesterday. While the tinderbox from which I took this is out of date and needs cleaning, it is a good starting point to figure out the trends, and as somebody already picked up, the bundling is actually widespread.

Software that bundled Curl, or even Python, but then relied on the system copy of OpenSSL, will now be looking for RAND_egd() and thus fail. You could be unbundling these libraries, and then use a proper, patched copy of Curl from the system, where the usage of RAND_egd() has been removed, but then again, this is what I've been advocating forever or so. With caveats, in the case of Curl.

But now if the use of RAND_egd() is actually coming from the proprietary bits themselves, you're stuck and you can't use the new library: you either need to keep around an old copy of OpenSSL (which may be buggy and expose even more vulnerabilities) or you need a shim library that only provides ABI compatibility against the new LibreSSL-provided library — I'm still not sure why this particular trick is not employed more often, when the changes to a library are only at the interface level but it still implements the same functionality.

Now the good news is that from the list that I produced, at least the egd functions never seemed to be popular among proprietary developers. This is expected, as egd was mostly a way to implement the /dev/random semantics for non-Linux systems, while the proprietary software that we deal with, at least in the Linux world, can just accept the existence of the devices themselves. So the only problems have to do with unbundling (or replacing) Curl and possibly the Python SSL module. Doing so is not obvious though, as I see from the list that there are at least two copies of libcurl.so.3, which is the older ABI for Curl — although admittedly one is from the scratchbox SDKs which could just as easily be replaced with something less hacky.

Anyway, my current task is to clean up the tinderbox so that it's in a working state, after which I plan to do a full build of all the reverse dependencies on OpenSSL. It's very possible that there are more entries that should be in the list, since it was built with USE=gnutls globally to test for GnuTLS 3.0 when it came out.

July 19, 2014
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I was experimenting in my arm chroot, and after a gcc upgrade and emerge --depclean --ask that removed the old gcc I got the following error:

# ls -l
ls: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory

Fortunately the newer working gcc was present, so the steps to make things work again were:

# LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/gcc/armv7a-hardfloat-linux-gnueabi/4.8.2/" gcc-config -l
 * gcc-config: Active gcc profile is invalid!

 [1] armv7a-hardfloat-linux-gnueabi-4.8.2

# LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/gcc/armv7a-hardfloat-linux-gnueabi/4.8.2/" gcc-config 1 
 * Switching native-compiler to armv7a-hardfloat-linux-gnueabi-4.8.2 ...

Actually my first thought was using busybox. The unexpected breakage during a routine gcc upgrade made me do some research in case I can't rely on /bin/busybox being present and working.

I highly recommend the following links for further reading:
http://lambdaops.com/rm-rf-remains
http://eusebeia.dyndns.org/bashcp
http://www.reddit.com/r/linux/comments/27is0x/rm_rf_remains/ci199bk

Read more »

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

When I read about LibreSSL coming from the OpenBSD developers, my first impression was that it was a stunt. My impression of it has still not changed drastically. While I know at least one quite good OpenBSD developer, my impression of the group as a whole is still the same: we have different concepts of security, and their idea of "cruft" is completely out there for me. But this is a topic for some other time.

So seeing the amount of scrutiny from others who are, like me, skeptical of leaving the OpenBSD people on their own is good news. It keeps them honest, as they say. But it also means that things that wouldn't otherwise be understood by people not used to Linux don't get shoved under the rug.

These are not idle musings: I still remember (but can't find now) an article in which Theo boasted about never having used Linux. And yet he kept insisting that his operating system was clearly superior. I was honestly afraid that the fork-not-a-fork project was going to be handled the same way; I'm positively happy to have been proven wrong so far.

I actually have been thrilled to see that finally there is movement to replace the straight access to /dev/random and /dev/urandom: Ted's patch to implement a getrandom() system call that can be made compatible with OpenBSD's own getentropy() in user space. And I'm even happier to see at least one of the OpenBSD/LibreSSL developers pitching in to help shape the interface.

Dropping the egd support puzzled me for a moment, but then I realized that there is no point in using egd to feed randomness to the process: you just need to feed entropy to the kernel, and let the process get it normally. I have had, unfortunately, quite a bit of experience with entropy-generating daemons, and I wonder if this might be the right time to suggest getting a new multi-source daemon out.

So am I going to just blindly trust the OpenBSD people because "they have a good track record"? No. And to anybody who suggests that you can take over lines and lines of code from someone else's crypto-related project, remove a bunch of code that you think is useless, and have an immediate result, my request is to please stop working with software altogether.

Security Holes Copyright © Randall Munroe.

I'm not saying that they would do it on purpose, or that they wouldn't be trying to do their darndest to make LibreSSL a good replacement for OpenSSL. What I'm saying is that I don't like the way, and the motives, the project was started from. And I think that a reality check, like the one they already got, was due, and good news.

On my side, once the library gets a bit more mileage I'll be happy to run the tinderbox against it. For now, I'm re-gaining access to Excelsior after a bad kernel update, and then I'll go and search with elfgrep for which binaries use the egd functionality and need to be patched; I'll post the list on Twitter/G+ once I have it. I know it's not much, but this is what I can do.

July 14, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

I just watched a TED talk that I would like to share with you.

First, let me quote a line from that talk that works without much context:

Next to the technology, entertainment and social media industries
we now spend more time with other people’s ideas than we do with our own.

For the remainder, see for yourself: Blur the line: Dan Jaspersen at TEDxCSU

Richard Freeman a.k.a. rich0 (homepage, bugs)
Quick systemd-nspawn guide (July 14, 2014, 20:31 UTC)

I switched to using systemd-nspawn in place of chroot and wanted to give a quick guide to using it.  The short version is that I’d strongly recommend that anybody running systemd that uses chroot switch over – there really are no downsides as long as your kernel is properly configured.

Chroot should be no stranger to anybody who works on distros, and I suspect that the majority of Gentoo users have need for it from time to time.

The Challenges of chroot

For most interactive uses it isn’t sufficient to just run chroot.  Usually you need to mount /proc, /sys, and bind mount /dev so that you don’t have issues like missing ptys, etc.  If you use tmpfs you might also want to mount the new tmp, var/tmp as tmpfs.  Then you might want to make other bind mounts into the chroot.  None of this is particularly difficult, but you usually end up writing a small script to manage it.

Now, I routinely do full backups, and usually that involves excluding stuff like tmp dirs, and anything resembling a bind mount.  When I set up a new chroot that means updating my backup config, which I usually forget to do since most of the time the chroot mounts aren’t running anyway.  Then when I do leave it mounted overnight I end up with backups consuming lots of extra space (bind mounts of large trees).

Finally, systemd now by default handles bind mounts a little differently when they contain other mount points (such as when using --rbind).  Apparently unmounting something in the bind mount will cause systemd to unmount the corresponding directory on the other side of the bind.  Imagine my surprise when I unmounted my chroot bind to /dev and discovered /dev/pts and /dev/shm no longer mounted on the host.  It looks like there are ways to change that, but this isn’t the point of my post (it just spurred me to find another way).

Systemd-nspawn’s Advantages

Systemd-nspawn is a tool that launches a container, and it can operate just like chroot in its simplest form.  By default it automatically sets up most of the overhead like /dev, /tmp, etc.  With a few options it can also set up other bind mounts as well.  When the container exits all the mounts are cleaned up.

From the outside of the container nothing appears different when the container is running.  In fact, you could spawn 5 different systemd-nspawn container instances from the same chroot and they wouldn’t have any interaction except via the filesystem (and that excludes /dev, /tmp, and so on – only changes in /usr, /etc will propagate across).  Your backup won’t see the bind mounts, or tmpfs, or anything else mounted within the container.

The container also has all those other nifty container benefits like containment – a killall inside the container won’t touch anything outside, and so on.  The security isn’t airtight – the intent is to prevent accidental mistakes.  

Then, if you use a compatible sysvinit (which includes systemd, and I think recent versions of openrc), you can actually boot the container, which drops you to a getty inside.  That means you can use fstab to do additional mounts inside the container, run daemons, and so on.  You get almost all the benefits of virtualization for the cost of a chroot (no need to build a kernel, and so on).  It is a bit odd to be running systemctl poweroff inside what looks just like a chroot, but it works.

Note that unless you do a bit more setup you will share the same network interface with the host, so no running sshd on the container if you have it on the host, etc.  I won’t get into this but it shouldn’t be hard to run a separate network namespace and bind the interfaces so that the new instance can run dhcp.

How to do it

So, getting it actually working will likely be the shortest bit in this post.

You need support for namespaces and multiple devpts instances in your kernel:

CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y

 From there launching a namespace just like a chroot is really simple:

systemd-nspawn -D .

That’s it – you can exit from it just like a chroot.  From inside you can run mount and see that it has taken care of /dev and /tmp for you.  The “.” is the path to the chroot, which I assume is the current directory.  With nothing further it runs bash inside.

If you want to add some bind mounts it is easy:

systemd-nspawn -D . --bind /usr/portage

Now your /usr/portage is bound to your host, so no need to sync, etc.  If you want to bind to a different destination add a “:dest” after the source, relative to the root of the chroot (so --bind foo is the same as --bind foo:foo).

If the container has a functional init that can handle being run inside, you can add a -b to boot it:

systemd-nspawn -D . --bind /usr/portage -b

Watch the init do its job.  Shut down the container to exit.

Now, if that container is running systemd you can direct its journal to the host journal with -j:

systemd-nspawn -D . --bind /usr/portage -j -b

Now, nspawn registers the container so that it shows up in machinectl.  That makes it easy to launch a new getty on it, or ssh to it (if it is running ssh – see my note above about network namespaces), or power it off from the host.  

That’s it.  If you’re running systemd I’d suggest ditching chroot almost entirely in favor of nspawn.  


Filed under: foss, gentoo, linux

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Biggest ebuilds in-tree (July 14, 2014, 06:39 UTC)

Random datapoint: There are only about 10 packages with ebuilds over 600 lines.

Sorted by lines, duplicate entries per-package removed, these are the biggest ones:

828 dev-lang/ghc/ghc-7.6.3-r1.ebuild
817 dev-lang/php/php-5.3.28-r3.ebuild
750 net-nds/openldap/openldap-2.4.38-r2.ebuild
664 www-client/chromium/chromium-36.0.1985.67.ebuild
658 games-rpg/nwn-data/nwn-data-1.29-r5.ebuild
654 www-servers/nginx/nginx-1.4.7.ebuild
654 media-video/mplayer/mplayer-1.1.1-r1.ebuild
644 dev-vcs/git/git-9999-r3.ebuild
621 x11-drivers/ati-drivers/ati-drivers-13.4.ebuild
617 sys-freebsd/freebsd-lib/freebsd-lib-9.1-r11.ebuild

July 13, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

Background story / context

At work I’m dealing with a test suite running >30 minutes, even on moderately fast hardware. When testing some changes, I launch the test suite and start working on something else so that I’m not just waiting for it. The sooner I know that the test suite finished execution, the sooner I can fix errors and give it another spin. So manually checking whether the test suite is done is not efficient.

The problem

What I wanted was a notification, something audible, looped, like an alarm clock. Either

$ ALARM_WHEN_DONE cmd [p1 p2 ..]

or

$ cmd [p1 p2 ..] ; ALARM

usage would have worked for me.

My approach

I ended up grabbing the free Analog Alarm Clock sound — the low-quality MP3 version download works without registration — and this shell alias:

alias ALARM='mplayer --loop=0 ~/Desktop/alarm.mp3 &>/dev/null'

With this alias, now I can do stuff like

./testrunner ; ALARM

on the shell and never miss the end of test suite execution again :)

Do you have a different/better approach to the same problem? Let me know!

PS: Yes, I have heard of continuous integration and we do that, too :)

July 12, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
LibreSSL on Gentoo (July 12, 2014, 18:31 UTC)

Yesterday the LibreSSL project released the first portable version that works on Linux. LibreSSL is a fork of OpenSSL and was created by the OpenBSD team in the aftermath of the Heartbleed bug.

Yesterday and today I played around with it on Gentoo Linux. I was able to replace my system's OpenSSL completely with LibreSSL and with few exceptions was able to successfully rebuild all packages using OpenSSL.

After getting this running on my own system I installed it on a test server. The webpage tlsfun.de runs on that server. The functionality changes are limited; the only thing visible from the outside is the support for the experimental, not yet standardized ChaCha20-Poly1305 cipher suites, which is a nice thing.

A warning ahead: This is experimental, in no way stable or supported and if you try any of this you do it at your own risk. Please report any bugs you have with my overlay to me or leave a comment and don't disturb anyone else (from Gentoo or LibreSSL) with it. If you want to try it, you can get a portage overlay in a subversion repository. You can check it out with this command:
svn co https://svn.hboeck.de/libressl-overlay/
(See the update at the end of this post: the overlay now maintained by Gentoo developers can instead be cloned with git clone https://github.com/gentoo/libressl.git)

This is what I had to do to get things running:

LibreSSL itself

First of all the Gentoo tree contains a lot of packages that directly depend on openssl, so I couldn't just replace that. The correct solution to handle such issues would be to create a virtual package and change all packages depending directly on openssl to depend on the virtual. This is already discussed in the appropriate Gentoo bug, but this would mean patching hundreds of packages so I skipped it and worked around it by leaving a fake openssl package in place that itself depends on libressl.

LibreSSL deprecates some APIs from OpenSSL. The first thing that stopped me was that various programs use the functions RAND_egd() and RAND_egd_bytes(). I didn't know until yesterday what egd is. It stands for Entropy Gathering Daemon and is a tool written in perl meant to replace the functionality of /dev/(u)random on non-Linux systems. The LibreSSL developers consider it insecure and after having read what it is I have to agree. However, the removal of those functions causes many packages not to build, among them wget, python and ruby. My workaround was to add two dummy functions that just return -1, which is the error code if the Entropy Gathering Daemon is not available. So the API still behaves as expected. I also posted the patch upstream, but the LibreSSL devs don't like it. So in the long term it's probably better to fix applications to stop trying to use egd, but for now these dummy functions make it easier for me to build my system.
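For illustration, dummy functions of the kind described above would look roughly like this (a sketch of the idea, not the exact patch that was sent upstream):

/* Stubs matching OpenSSL's RAND_egd() interface: always report that the
 * Entropy Gathering Daemon is unavailable so callers fall back gracefully. */
int RAND_egd(const char *path)
{
    (void)path;
    return -1;  /* -1 == EGD not available */
}

int RAND_egd_bytes(const char *path, int bytes)
{
    (void)path;
    (void)bytes;
    return -1;
}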

The second issue popping up was that the libcrypto.so from LibreSSL contains an undefined main() function symbol which causes linking problems with a couple of applications (subversion, xorg-server, hexchat). According to upstream this undefined symbol is intended and most likely these are bugs in the applications having linking problems. However, for now it was easier for me to patch the symbol out instead of fixing all the apps. As with the egd issue, in the long term fixing the applications is the better solution.

The third issue was that LibreSSL doesn't ship pkg-config (.pc) files, some apps use them to get the correct compilation flags. I grabbed the ones from openssl and adjusted them accordingly.

OpenSSH

This was the most interesting issue from all of them.

To understand this you have to understand how both LibreSSL and OpenSSH are developed. They are both from OpenBSD and they use some functions that are only available there. To allow them to be built on other systems they release portable versions which ship the missing OpenBSD-only-functions. One of them is arc4random().

Both LibreSSL and OpenSSH ship their compatibility version of arc4random(). The one from OpenSSH calls RAND_bytes(), which is a function from OpenSSL. The RAND_bytes() function from LibreSSL however calls arc4random(). Due to the linking order OpenSSH uses its own arc4random(). So what we have here is a nice recursion. arc4random() and RAND_bytes() try to call each other. The result is a segfault.
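A stripped-down C model of that loop looks like this; the bodies are simplified stand-ins for illustration, not the real implementations:

#include <stdint.h>

int RAND_bytes(unsigned char *buf, int num);

/* OpenSSH's portable compatibility arc4random(): expects the real
 * OpenSSL RAND_bytes() to supply the randomness. */
uint32_t arc4random(void)
{
    unsigned char buf[4];
    RAND_bytes(buf, sizeof(buf));
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8) | (uint32_t)buf[3];
}

/* LibreSSL's RAND_bytes(): defers to arc4random(), which the linker
 * resolves to OpenSSH's own copy above. */
int RAND_bytes(unsigned char *buf, int num)
{
    for (int i = 0; i < num; i++)
        buf[i] = (unsigned char)arc4random();
    return 1;
}

Linked together, each function bounces straight into the other until the stack overflows.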

I fixed it by using the LibreSSL arc4random.c file for OpenSSH. I had to copy another function called arc4random_stir() from OpenSSH's arc4random.c and the header file thread_private.h. Surprisingly, this seems to work flawlessly.

Net-SSLeay

This package contains the perl bindings for OpenSSL. The problem is a check for the OpenSSL version string that expected the name OpenSSL and a version number with three numbers and a letter (like 1.0.1h). LibreSSL prints the version 2.0. I just hardcoded the OpenSSL version number, which is not a real fix, but it works for now.

SpamAssassin

SpamAssassin's code for spamc requires SSLv2 functions to be available. SSLv2 is heavily insecure and should not be used at all and therefore the LibreSSL devs have removed all SSLv2 function calls. Luckily, Debian had a patch to remove SSLv2 that I could use.

libesmtp / gwenhywfar

Some DES-related functions (DES is the old Data Encryption Standard) in OpenSSL are available in two forms: with uppercase DES_ and with lowercase des_. I can only guess that the des_ variants are for backwards compatibility with some very old versions of OpenSSL. According to the docs the DES_ variants should be used. LibreSSL has removed the des_ variants.

For gwenhywfar I wrote a small patch and sent it upstream. For libesmtp all the affected code was in its NTLM support. After reading that NTLM is an ancient, proprietary Microsoft authentication protocol I decided that I don't need it anyway, so I just added --disable-ntlm to the ebuild.

Dovecot

In Dovecot two issues popped up. LibreSSL removed the SSL Compression functionality (which is good, because since the CRIME attack we know it's not secure). Dovecot's configure script checks for it, but the check doesn't work. It checks for a function that LibreSSL keeps as a stub. For now I just disabled the check in the configure script. The solution is probably to remove all remaining stub functions. The configure script could probably also be changed to work in any case.

The second issue was that the Dovecot code has some #ifdef clauses that check the openssl version number for the ECDH auto functionality that has been added in OpenSSL 1.0.2 beta versions. As the LibreSSL version number 2.0 is higher than 1.0.2 it thinks it is newer and tries to enable it, but the code is not present in LibreSSL. I changed the #ifdefs to check for the actual functionality by checking a constant defined by the ECDH auto code.
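The shape of that change is roughly the following (a sketch of the approach, not the actual Dovecot patch):

/* Before: a version-number check, which LibreSSL's "2.0" wrongly satisfies. */
#if OPENSSL_VERSION_NUMBER >= 0x10002000L
    SSL_CTX_set_ecdh_auto(ssl_ctx, 1);
#endif

/* After: check for the constant that the ECDH auto code itself defines. */
#ifdef SSL_CTRL_SET_ECDH_AUTO
    SSL_CTX_set_ecdh_auto(ssl_ctx, 1);
#endif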

Apache httpd

The Apache httpd compilation complained about a missing ENGINE_CTRL_CHIL_SET_FORKCHECK. I have no idea what it does, but I found a patch to fix the issue, so I didn't investigate it further.

Further reading:
Someone else tried to get things running on Sabotage Linux.

Update: I've abandoned my own libressl overlay, a LibreSSL overlay by various Gentoo developers is now maintained at GitHub.

July 09, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

SELinux users might be facing failures when emerge is merging a package to the file system, with an error that looks like so:

>>> Setting SELinux security labels
/usr/lib64/portage/bin/misc-functions.sh: line 1112: 23719 Segmentation fault      /usr/sbin/setfiles "${file_contexts_path}" -r "${D}" "${D}"
 * ERROR: dev-libs/libpcre-8.35::gentoo failed:
 *   Failed to set SELinux security labels.

This has been reported as bug 516608 and, after some investigation, the cause has been found. First the quick workaround:

~# cd /etc/selinux/strict/contexts/files
~# rm *.bin

And do the same for the other SELinux policy stores on the system (targeted, mcs, mls, …).

Now, what is happening… Inside the mentioned directory, binary files exist such as file_contexts.bin. These files contain the compiled regular expressions of the non-binary files (like file_contexts). By using the precompiled versions, regular expression matching by the SELinux utilities is a lot faster. Not that it is massively slow otherwise, but it is a nice speed improvement nonetheless.

However, when pcre updates occur, then the basic structures that pcre uses internally might change. For instance, a number might switch from a signed integer to an unsigned integer. As pcre is meant to be used within the same application run, most applications do not have any issues with such changes. However, the SELinux utilities effectively serialize these structures and later read them back in. If the new pcre uses a changed structure, then the read-in structures are incompatible and even corrupt.

Hence the segmentation faults.
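As a toy C illustration (not the actual pcre or libselinux code; struct and field names are made up), consider a structure written to disk by one build and read back by a build whose layout changed:

#include <stdio.h>

/* Layout in use when file_contexts.bin was generated... */
struct spec_old {
    int stem_count;             /* signed in the old pcre build */
};

/* ...and the layout the rebuilt utilities now expect. */
struct spec_new {
    unsigned int stem_count;    /* switched to unsigned */
    unsigned int flags;         /* and a new member appeared */
};

int main(void)
{
    struct spec_old out = { -1 };
    FILE *f = fopen("file_contexts.bin", "w+b");
    if (!f)
        return 1;
    fwrite(&out, sizeof(out), 1, f);    /* serialized with the old layout */
    rewind(f);

    struct spec_new in;
    fread(&in, sizeof(in), 1, f);       /* read assuming the new layout:
                                           a short read and garbage fields,
                                           i.e. corrupt data for the reader */
    fclose(f);
    return 0;
}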

To resolve this, Stephen Smalley created a patch that includes PCRE version checking. This patch is now included in sys-libs/libselinux version 2.3-r1. The package also recompiles the existing *.bin files so that the older binary files are no longer on the system. But there is a significant chance that this update will not trickle down to the users in time, so the workaround might be needed.

I considered updating the pcre ebuilds as well with this workaround, but considering that libselinux is most likely to be stabilized faster than any libpcre bump I let it go.

At least we have a solution for future upgrades; sorry for the noise.

Edit: libselinux-2.2.2-r5 also has the fix included.

Michał Górny a.k.a. mgorny (homepage, bugs)
The Council and the Community (July 09, 2014, 06:27 UTC)

A new Council election is in progress and we have a few candidates. Most of them have written a manifesto. For some of them this is one of the few mails they sent to the public mailing lists recently. For one of them it is the only one. Do we want to elect people who do not participate actively in the Community? Does such an election even make sense?

Gentoo is an open, free community. While the Developer Community is not really open (joining consumes a lot of time), the discussion media were always open to non-developer comments and ideas. Most of the people working on Gentoo are volunteers, doing all the work in their free time or between other tasks.

While we have formal rules, leaders and projects, all of them have very limited power. The rules pretty much boil down to being «do not»s. You can try to convince a developer to follow your vision but you can’t force him to. If you try too hard, the best you can get is losing a valuable contributor. And I’m not talking about the extremes like rage quits; the person will simply no longer be interested in working on a particular project.

Most of the mailing list (and bug) discussions are about that. Finding possible solutions, discussing their technical merits and finding an agreement. It is not enough to choose a solution which is considered best by a majority or a team. It is about agreeing on a solution that is good and that comes with people willing to work on it. Otherwise, you end up with no solution because what has been chosen is not being implemented.

Consider the late games team policy thread. The games team and their supporters believe their solutions have technical merit. Without getting into debating this, we can easily see the effects. The team is barely getting any contributions, mostly thanks to a few (three?) persistent out-of-team developers that are willing to overcome all the difficulties. And even those contributors support the idea of abolishing the current policy.

So, what’s the purpose of all the teams, their leads and the Council in all this? As I see it, teams are the people who know the particular area better than others, and have valuable experience. Yet teams need to be open to the Community, to listen to their feedback, to provide valuable points to the discussion and to guide it towards a consensus.

The teams may need to make a final decision if a mailing list discussion doesn’t end in a clear agreement. However, they need to weigh it carefully, to foresee the outcome. It is not enough to discuss the merits in a semi-open meeting, and it is not enough to consider only the technical aspect. The teams need to predict how the decision will affect the Community, how it will affect the users and the contributors.

The Council is not very different from those teams, albeit more formal in its proceedings. Likewise, it needs to listen to the Community, especially if it is called specifically to revise a team’s decision (or lack of action).

Now, how could the Council determine what’s best for Gentoo without actively participating in the proceedings of the Community? Non-active candidates, do you expect to start participating after being elected? Or do you think that grepping through the threads five minutes before the meeting is enough?

Well, I hope that the next Council will be up to the task. That it will listen to the Community and weigh their decisions carefully. That it will breed action and support ideas backed by technical merits and willing people, rather than decisions that discourage further contribution.

July 06, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hello :)

I don’t get to playing with code much lately. Yesterday and today I put some effort into trying to understand and document the EGF file format used by Xie Xie to store Xiangqi games including per-move comments and a bit of other metadata.

Status quo includes a simple command line tool:

# ./egf/cli.py test.egf 
Event:  Blog post
Site:  At home
Date:  6-7-2014
Round:  1
Red name:  sping
Black name:  Xie Xie Freeware 2.5.0
Description:  Command line tool demo input
Author:  sping

File i:  R _ _ P _ _ p _ _ r
File h:  H _ C _ _ _ _ c _ h
File g:  E _ _ P _ _ p _ _ e
File f:  A _ _ _ _ _ _ _ _ a
File e:  K _ _ P _ _ p _ _ k
File d:  A _ _ _ _ _ _ _ _ a
File c:  E _ _ P _ _ p _ _ e
File b:  H _ C _ _ _ _ c _ h
File a:  R _ _ P _ _ p _ _ r
(Ranks 9 to 0 from left to right)

To start:  red

6 single moves in total
[ 1]  c h3  - e3 
[ 1]                   H h10 - g8 
[ 2]  h h1  - g3 
[ 2]                   R i10 - h10
[ 3]  r i1  - h1 
[ 3]                   C h8  - h4 

Result:  to be determined

Bytes remaining to be read:
0 0

I welcome help to fill in the remaining blanks, e.g. with decoding time markers and king-in-check markers of moves.

If you are on Gentoo and would like to run Xie Xie the easy way, grab games-board/xiexie-freeware-bin from the betagarden overlay.

EGF files for inspection can be downloaded from http://www.cc-xiexie.com/download.php.

Gentoo Monthly Newsletter: June 2014 (July 06, 2014, 15:00 UTC)

Gentoo News

Interview with Patrick McLean (chutzpah)

(by David Abbott)
1. Hi Patrick o/ tell us about yourself?
I am currently a Gentoo Engineer (yes, that is my actual job title) at Gaikai. Before this job I was a Systems Administrator at the McGill Centre for Intelligent Machines, in Montreal, Quebec, Canada.
When I am not coding or packaging I like to watch television, read sci-fi and fantasy, cycle, occasionally go on hikes. When I can I love downhill skiing, but it’s a little harder in California than it was in Quebec.

2. How did you get involved with Linux and Open Source, and what was the path that lead to you to Gentoo?
I started using Linux at the end of 1996. Originally I switched to Linux because with the slow Internet connections of the times, web pages would take a long time to load. I would often open dozens of windows so I could be reading one site while others were loading. After a certain number of open browsers, Windows 95 would start to bog down and then just crash, while when I did the same thing on Linux it would just happily chug along.
Around 2001, when Gnome 2 came out, I wanted to try it out, and I don’t like installing software outside of the package manager, so I attempted to get the rpms from the rawhide repository. This experience made me decide to look for a different distro, and I ended up liking Gentoo the most.

3. What aspects of Gentoo do you feel the developers and maintainers have got right?
The ebuild is a great source-based package format; it has its drawbacks but it is far superior to the other formats I have looked at. I also like that Gentoo treats configurability as an important feature. The frequent use of /etc/foo.d and the scriptability of many parts of the system is great.
I also like some of the more recent work that has gone in to not breaking systems, preserved-rebuild and (despite some overuse) subslots fix many of the annoyances we had in the old days.
I am also a big fan of what is now OpenRC; ever since I first started using Gentoo, I have thought that this is a huge improvement over the alternatives.

4. What is it about Gentoo you would like to see improved?
I think that portage itself is getting very crufty, and the code base is not very nice to work with. I am sure just about everyone reading this would agree that dependency resolution is way too slow at the moment (especially with subslots). Sometimes it generates error messages that are horribly verbose with no indication of how to fix them. I have seen those errors make people leave Gentoo, this is especially bad when the things it’s generating errors about are relatively harmless.
There are also other problems with how portage stores the information about installed packages on the disk, and binary packages in their current form just suck, and are pretty useless.

5. What resources have you found most helpful when troubleshooting within Gentoo and Linux in general?
For doing research into problems, google of course is very useful. For tracking down problems strace is probably the one tool I find the most useful. Of course also digging into the source is probably the single best way to figure out what is actually going on.

6. What are some of the projects within Gentoo that you enjoy contributing to?
I mostly do ebuild work at the moment, python is one area that I contribute the most to. I would like to get more in to package manager work, and I want to start helping more with OpenRC, but finding time is frequently a problem.

7. What is your programming background?
I taught myself to program on GW-BASIC for DOS, it was in no way a modern or even remotely modern language. I moved on to QBASIC a bit later on. Once I got to post high school I started learning Java, C, C++, but my first programming job was Visual Basic, it was an internship that turned in to a summer job. During this time frame I also taught myself shell scripting.
Later (around 2008) I taught myself python when a friend and I were trying to start a business.

8. For someone new to Python what tips could you give them to get a good foundation?
There are lots of good tutorials out there; I personally used Dive Into Python and found it quite useful. I also found that when I learned more about how Python is implemented, it improved my abilities quite a bit. If you truly understand that in Python everything is a dictionary, and the implications of that, it helps quite a bit in debugging the root cause of problems and writing better code.

9. Tell us about pkgcore, its features and future?
Pkgcore is an alternative implementation of the PMS. It’s basically an alternative to portage. It has always had the eventual goal of becoming the default package manager on Gentoo, replacing portage. It’s currently orders of magnitude faster than portage. Its code base is much cleaner, though a little hard to understand at first thanks to its use of libsnakeoil for performance optimization. Currently Tim Harder (radhermit) is working on getting all the recent portage features implemented; it mostly supports EAPI 5 in the git repo now.
Hopefully it can attract more developers and eventually become a truly viable portage replacement, so we can get rid of the cruft that has built up in the portage source over the years.

10. Which open source programs would you like to see developed?
That’s a hard question to answer. I think the biggest one is I would love to see an open source firmware for BMC controllers. These are the extra small computers included in servers that allow things such as remote console and the ability to remotely manage servers. Currently the ecosystem is full of half-assed implementations done by hardware companies, many of which are rife with security holes. There is no standard for remote console, so they all use buggy and horrible Java applets to implement this. I would love to see a standard open source suite that motherboard developers all use, with native remote console clients for major OSes.

11. What would be your dream job?
Well I have long wanted a job as a kernel developer, but have never had the time to really dedicate to get to the point where someone would hire me. My current job is a close second. I work with Gentoo every day at work, often writing new ebuilds and fixing bugs in existing ebuilds as part of my day-to-day duties.
My day-to-day duties involve ebuild development and debugging. I also do a lot of automation of things like installing new systems, and was the lead developer on our in-house answer to configuration management. I get to do a lot of cool stuff with Gentoo and I get to get paid for it.

12. Need any help?
Yes, we are currently hiring for lots of positions, all working with Gentoo. We are really looking for ebuild developers of all kinds, especially if you are comfortable with Java ebuilds (not mandatory, but it would be nice). We are also looking for anyone who is familiar with Gentoo to help with work in Release Engineering and Site Reliability Engineering. We currently have offices in Southern California, USA and Berlin, Germany.
If you are interested in getting paid to work with Gentoo, please drop me a line.

13. With your skills you would be welcome in any project, why did you chose Gentoo?
It had been my distro of choice for many years, and I just ended up maintaining a local overlay with many bug fixes and miscellaneous things, so I decided to become a developer to share my work with everyone else.

14. What can we do to get more people involved as Gentoo developers?
That’s a hard question to answer, at the moment probably the best way would be to get back the “hot” and “cool” factors. These days Gentoo is sort of a “background” distro that has been around for ages, has loads of users but new people don’t get excited about anymore, kind of like Debian.
I think we also need to reduce developer burnout; I get the impression that once some people become developers, they feel that they have to fix every bug in the tree. This leads to them being really productive devs for a few months, then leaving when they get burned out.

15. What users would you like to see recruited to become Gentoo developers?
It would be nice to recruit some of the proxy maintainers to contribute to more packages. I don’t have anyone specific in mind at this moment.

16. As a Gentoo developer what are some of your accomplishments?
When I first started, I was on the amd64 bandwagon very early, so I ended up doing the 64-bit ports for a pretty large number of packages. More recently I maintain ebuilds for some particularly tricky packages such as Ganeti, which is a mixture of Python and Haskell code.

17. Same question but work related.
Well, it’s probably a combination of two things.
Creating Gentoo profiles to auto-generate dozens of different server image types, and building a solid base Gentoo install for those servers.
Also building a fully automated Gentoo installation system that can partition disks, set up RAID, LVM and other parameters based on a JSON definition. Also a configuration file generation system that makes up the basis of our configuration management system.

18. What are the specs of your personal and work boxes?
My home box is a 6-core Core-i7 970 with 24GB of RAM, a GeForce 770, a 256GB SSD, 2 500GB spinning disks and a 1TB spinning disk. I have a 24” monitor and a 22”.
My workstation at work is an 8-core Opteron with 16GB of RAM. I have 2 32” monitors hooked up to it. We also have some pretty beefy servers for building Gentoo images.

19. Describe your home network.
Nothing that exciting, I have a Netgear WNDR3800 running OpenWRT, and a gigabit switch. Connected to that I have a Synology NAS, a smart TV that I never use the smart features of, a media streaming box, a Blu-Ray, a PS4 (I work for Sony) and a couple of computers.

20. What de/wm do you use now and what did you use in the past?
I currently use XFCE, I used to use Gnome 2, tried out Gnome 3 for 2 days, decided that it isn’t for me so created a huge package.mask to mask it. I stuck with that for several months, then decided I should switch to something else. I tried out Cinnamon for a bit, played with E17, considered Mate but then settled on XFCE.

21. What gives you the most enjoyment within the Gentoo community?
In general developers get along pretty well, this is more true on IRC than on the mailing lists. Also, at conferences there is a strong feeling of community among the Gentoo developers who are attending the conference.

22. How did you get the nick (chutzpah)?
It’s kind of a silly story. Way back when I first started hanging out online (early 90s) I needed a nick. I ended up choosing the name of a particularly challenging Ski Trail at the Sunday River ski resort in Maine. I have been using the name ever since.

Council News

This month’s big issue was to compile a preliminary list of features that could go into the next EAPI. It probably does not make sense to go into all the technical details here; you can find the accepted items in the meeting summaries [1,2,3] or on a separate wiki page [4]. One user-visible change will be that from EAPI=6 on every ebuild should accept user patches from /etc/portage/patches [5], as many do already today. Another one will be that (given an implementation in Portage is ready in time) a new type of use flag will be introduced that can be used to, e.g., only pull in run-time dependencies; toggling such a use flag does not require a rebuild of the package.

In addition, some of us prepared a proposal to make it in the end easier for developers to host semi-official services within the gentoo.org domain [6]. This still needs work and is definitely not something the council can do on its own, but the general idea was given clear support.

Election News

The nomination process is complete, and voting is now open. This year’s candidates are blueness, dberkholz, dilfridge, jlec, patrick, pinkbyte, radhermit, rich0, ryao, TomWij, ulm, williamh, and zerochaos. Additionally, almost every developer was nominated for the council. Elections will be open until 2359 UTC on July 14, and results should be posted around July 16. We’ve already had around 30 people vote, but there are 200 more developers who can vote. Get out there and vote!

Featured New Project: Hardened Musl

(by Anthony G. Basile)

The hardened musl project aims to build and maintain full stage3 tarballs for amd64, arm, mips and i686 architectures using musl as its C standard library rather than glibc. The “hardened” aspect means that we will also make use of toolchain hardening features so that the resulting userland executables and libraries are more resistant to exploits, although we also provide a “vanilla” flavor without any hardening. In every respect, these stages will be like regular Gentoo stages, except glibc will be replaced by musl.

musl, like uClibc, is ideal for embedded systems, although both can be used for servers and desktops. Embedded systems generally have three needs beyond regular systems: 1) They need to have a small footprint both on their storage device and in RAM. 2) They need speed for real-time applications. 3) And in some situations, they need their executables to be statically linked. A typical embedded system has a minimally configured busybox for some needed utilities as well as whatever service the image is to provide, e.g. some httpd service. The stages we are producing are not really embedded stages because they don’t use busybox to provide some minimal set of utilities; rather, they use the full set of utilities provided by coreutils, util-linux and friends. This makes these stages ideal as development platforms for building custom embedded images [1] or for expanding into a server or desktop system.

However, be warned! If you try to build a full desktop system, you will hit breakage, since musl adheres closely to standards while many packages do not. We are working on getting patches [2] for a full XFCE4 desktop, as we did for uClibc [3]. On the other hand, I’ve had lots of success building servers and routers from those stages without any extra patching.

[1] An example of the hardened uClibc stages being used this way is “Real Time And Tiny” (aka RAT) Gentoo.
[2] These patches are housed on the musl branch of the hardened dev overlay.
[3] As a subproject of the Hardened uClibc project, we maintain a full XFCE4 desktop based on uClibc, affectionately named “Lilblue” after the Little Blue Penguin, a smaller relative of the Gentoo.

Gentoo Developer Moves

Summary

Gentoo is made up of 237 active developers, of which 35 are currently away.
Gentoo has recruited a total of 799 developers since its inception.

Changes

The following developers have recently changed roles:
None this month

Additions

The following developers have recently joined the project:

Moves

The following developers recently left the Gentoo project:
None this month

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17529
Ebuilds 37513
Architecture Stable Testing Total % of Packages
alpha 3604 551 4155 23.70%
amd64 10781 6247 17028 97.14%
amd64-fbsd 0 1578 1578 9.00%
arm 2662 1726 4388 25.03%
hppa 3059 482 3541 20.20%
ia64 3181 620 3801 21.68%
m68k 623 82 705 4.02%
mips 4 2386 2390 13.63%
ppc 6819 2375 9194 52.45%
ppc64 4317 875 5192 29.62%
s390 1486 316 1802 10.28%
sh 1681 387 2068 11.80%
sparc 4122 896 5018 28.63%
sparc-fbsd 0 316 316 1.80%
x86 11444 5308 16752 95.57%
x86-fbsd 0 3236 3236 18.46%

(Chart: Portage tree statistics by architecture)

Security

The following GLSAs have been released by the Security Team:

GLSA Package Description Bug
201406-36 net-nds/openldap OpenLDAP: Multiple vulnerabilities 290345
201406-35 net-im/openfire Openfire: Multiple vulnerabilities 266129
201406-34 kde-base/kdelibs KDE Libraries: Multiple vulnerabilities 358025
201406-33 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 503792
201406-32 dev-java/icedtea-bin IcedTea JDK: Multiple vulnerabilities 312297
201406-31 kde-base/konqueror Konqueror: Multiple vulnerabilities 438452
201406-30 app-admin/sudo sudo: Privilege escalation 503586
201406-29 net-misc/spice-gtk spice-gtk: Privilege escalation 435694
201406-28 media-video/libav Libav: Multiple vulnerabilities 439052
201406-27 None polkit Spice-Gtk systemd HPLIP libvirt: Privilege escalation 484486
201406-26 dev-python/django Django: Multiple vulnerabilities 508514
201406-25 net-misc/asterisk Asterisk: Multiple vulnerabilities 513102
201406-24 net-dns/dnsmasq Dnsmasq: Denial of Service 436894
201406-23 app-admin/denyhosts DenyHosts: Denial of Service 495130
201406-22 media-libs/nas Network Audio System: Multiple vulnerabilities 484480
201406-21 net-misc/curl cURL: Multiple vulnerabilities 505864
201406-20 www-servers/nginx nginx: Arbitrary code execution 505018
201406-19 dev-libs/nss Mozilla Network Security Service: Multiple vulnerabilities 455558
201406-18 x11-terms/rxvt-unicode rxvt-unicode: User-assisted execution of arbitrary code 509174
201406-17 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 512888
201406-16 net-print/cups-filters cups-filters: Multiple vulnerabilities 504474
201406-15 kde-misc/kdirstat KDirStat: Arbitrary command execution 504994
201406-14 www-client/opera Opera: Multiple vulnerabilities 442044
201406-13 net-misc/memcached memcached: Multiple vulnerabilities 279386
201406-12 net-dialup/freeradius FreeRADIUS: Arbitrary code execution 501754
201406-11 x11-libs/libXfont libXfont: Multiple vulnerabilities 510250
201406-10 www-servers/lighttpd lighttpd: Multiple vulnerabilities 392581
201406-09 net-libs/gnutls GnuTLS: Multiple vulnerabilities 501282
201406-08 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 510278
201406-07 net-analyzer/echoping Echoping: Buffer Overflow Vulnerabilities 349569
201406-06 media-sound/mumble Mumble: Multiple vulnerabilities 500486
201406-05 mail-client/mutt Mutt: Arbitrary code execution 504462
201406-04 dev-util/systemtap SystemTap: Denial of Service 405345
201406-03 net-analyzer/fail2ban Fail2ban: Multiple vulnerabilities 364883
201406-02 app-arch/libarchive libarchive: Multiple vulnerabilities 366687
201406-01 None D-Bus GLib: Privilege escalation 436028

Package Removals/Additions

Removals

Package Developer Date
dev-python/python-gnutls mrueg 02 Jun 2014
dev-ruby/fastthread mrueg 07 Jun 2014
dev-perl/perl-PBS zlogene 11 Jun 2014
games-strategy/openxcom mr_bones_ 14 Jun 2014
media-plugins/vdr-noepgmenu hd_brummy 15 Jun 2014
net-mail/fetchyahoo eras 16 Jun 2014
app-emacs/redo ulm 17 Jun 2014
games-emulation/boycott-advance-sdl ulm 17 Jun 2014
games-emulation/neopocott ulm 17 Jun 2014

Additions

Package Developer Date
dev-ruby/sshkit graaff 01 Jun 2014
media-gfx/plantuml pva 02 Jun 2014
dev-python/sphinxcontrib-plantuml pva 02 Jun 2014
dev-util/kdevelop-qmake zx2c4 02 Jun 2014
x11-misc/easystroke jer 04 Jun 2014
dev-python/docopt jlec 04 Jun 2014
dev-python/funcsigs jlec 04 Jun 2014
virtual/funcsigs jlec 04 Jun 2014
dev-python/common jlec 04 Jun 2014
dev-python/tabulate jlec 04 Jun 2014
app-admin/ngxtop jlec 04 Jun 2014
dev-python/natsort idella4 05 Jun 2014
dev-libs/liblinear jer 05 Jun 2014
net-analyzer/arp-scan jer 06 Jun 2014
www-servers/mongoose zmedico 06 Jun 2014
dev-ruby/spring graaff 06 Jun 2014
dev-ruby/wikicloth mrueg 06 Jun 2014
net-analyzer/ipgen jer 07 Jun 2014
sec-policy/selinux-dropbox swift 07 Jun 2014
dev-python/jingo idella4 08 Jun 2014
dev-python/click rafaelmartins 08 Jun 2014
dev-python/Coffin idella4 08 Jun 2014
dev-python/sphinx_rtd_theme bicatali 09 Jun 2014
dev-ruby/netrc graaff 09 Jun 2014
dev-ruby/delayer naota 11 Jun 2014
www-client/qtweb jer 11 Jun 2014
dev-python/pyoembed rafaelmartins 12 Jun 2014
www-apps/blohg-tumblelog rafaelmartins 12 Jun 2014
dev-python/jaraco-utils patrick 12 Jun 2014
dev-python/more-itertools patrick 12 Jun 2014
dev-libs/libserialport vapier 12 Jun 2014
dev-python/pretty-yaml chutzpah 12 Jun 2014
net-libs/phodav dev-zero 13 Jun 2014
dev-python/django-haystack idella4 14 Jun 2014
sci-libs/libsigrok vapier 14 Jun 2014
sci-libs/libsigrokdecode vapier 14 Jun 2014
sci-electronics/sigrok-cli vapier 14 Jun 2014
sys-firmware/sigrok-firmware-fx2lafw vapier 14 Jun 2014
sci-electronics/pulseview vapier 14 Jun 2014
dev-ruby/hashr mrueg 14 Jun 2014
games-strategy/openxcom maksbotan 14 Jun 2014
games-engines/openxcom mr_bones_ 14 Jun 2014
net-analyzer/icinga2 prometheanfire 15 Jun 2014
dev-python/pyxenstore robbat2 15 Jun 2014
sys-cluster/ampi jauhien 16 Jun 2014
dev-python/pyjwt idella4 17 Jun 2014
app-emulation/openstack-guest-agents-unix robbat2 22 Jun 2014
dev-python/plyr idella4 22 Jun 2014
app-misc/relevation radhermit 22 Jun 2014
media-sound/lyvi idella4 22 Jun 2014
app-emulation/xe-guest-utilities robbat2 23 Jun 2014
net-misc/yandex-disk pinkbyte 24 Jun 2014
sec-policy/selinux-resolvconf swift 25 Jun 2014
dev-python/json-rpc chutzpah 26 Jun 2014
app-backup/cyphertite grknight 26 Jun 2014
dev-python/jdcal idella4 26 Jun 2014
net-libs/libcrafter jer 26 Jun 2014
net-analyzer/tracebox jer 26 Jun 2014
dev-python/python-catcher jlec 27 Jun 2014
dev-python/python-exconsole jlec 27 Jun 2014
dev-python/reconfigure jlec 27 Jun 2014
sys-block/sas2ircu robbat2 27 Jun 2014
sys-block/sas3ircu robbat2 27 Jun 2014
dev-ruby/psych mrueg 27 Jun 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 31 May 2014 and 30 June 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1991
Closed 1065
Not fixed 171
Duplicates 147
Total 5843
Blocker 5
Critical 18
Major 64

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period.

Rank Team/Developer Bug Count
1 Gentoo Security 152
2 Gentoo Linux Gnome Desktop Team 54
3 Python Gentoo Team 39
4 Gentoo KDE team 33
5 Gentoo Games 28
6 Gentoo Ruby Team 20
7 Default Assignee for Orphaned Packages 20
8 media-video herd 17
9 Julian Ospald (hasufell) 17
10 Others 684

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Security 97
2 Gentoo Linux Gnome Desktop Team 91
3 Gentoo Linux bug wranglers 91
4 Python Gentoo Team 70
5 Gentoo Games 64
6 Gentoo KDE team 50
7 Gentoo Prefix 49
8 Default Assignee for Orphaned Packages 49
9 Gentoo's Team for Core System packages 35
10 Others 1394

Tips of the month

(by Sven Vermeulen)
Quick one-time patching of packages

If you want to patch a package once (for instance to test a patch provided through bugzilla), just start building the package, but when the following is shown, suspend it (Ctrl-Z):

>>> Source prepared.

Then go to the builddir (like /var/tmp/portage/net-misc/tor-0.2.4.22/work/tor-0.2.4.22) and apply the patch. Then resume the build (with the “fg” command).
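
Putting the tip together, a minimal example session could look like this (the package version and patch path are made up):

~# emerge --oneshot net-misc/tor
[... wait for ">>> Source prepared.", then press Ctrl-Z ...]
~# cd /var/tmp/portage/net-misc/tor-0.2.4.22/work/tor-0.2.4.22
~# patch -p1 < /tmp/proposed-fix.patch
~# fg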

Verify integrity of installed software

If you don’t want the full-fledged features of tools like AIDE, you can use qcheck to verify this for installed packages:
~# qcheck -e vim-core
Checking app-editors/vim-core-7.4.273 ...
MD5-DIGEST: /usr/share/vim/vim74/doc/tags
* 1783 out of 1784 files are good

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

July 02, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Multilib in Gentoo (July 02, 2014, 19:03 UTC)

One of the areas in Gentoo that is seeing lots of active development is its ongoing effort to have proper multilib support throughout the tree. In the past, this support was provided through special emulation packages, but those have the (serious) downside that they are often outdated, sometimes even having security issues.

But this active development is not because we all just started looking in the same direction. No, it’s thanks to a few developers who have put their shoulders under this effort, directing the development workload where needed and pressing other developers to help in this endeavor. And pushing is more than just creating bug reports and telling developers to do something.

It is also about communicating, giving feedback and patiently helping developers when they have questions.

I can only hope that other activities within Gentoo with a potentially broad impact are handled like this as well. Kudos to all involved, as well as to all developers who have undoubtedly put in numerous hours of development effort in the hope of making their ebuilds multilib-capable (I know I had to put lots of effort into it, but I find it worthwhile and a big learning opportunity).

June 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
D-Bus and SELinux (June 30, 2014, 18:07 UTC)

After a post about D-Bus comes the inevitable related post about SELinux with D-Bus.

Some users might not know that D-Bus is an SELinux-aware application. That means it has SELinux-specific code in it, which makes the D-Bus behavior depend on the SELinux policy (and might not necessarily honor the “permissive” flag). This code is used as an additional authentication control within D-Bus.

Inside the SELinux policy, a dbus permission class is supported, even though the Linux kernel doesn’t do anything with this class. The class is purely for D-Bus, and it is D-Bus that checks the permissions (although work is underway to implement D-Bus in the kernel (kdbus)). The class supports two permission checks:

  • acquire_svc, which tells which domain(s) are allowed to “own” a service (which might, thanks to the SELinux support, be labeled differently from the domain itself)
  • send_msg, which tells which domain(s) can send messages to a service domain

Inside the D-Bus security configuration (the busconfig XML file, remember), a service configuration might tell D-Bus that the service itself is labeled differently from the process that owns the service. The default is that the service inherits the label from the domain, so when dnsmasq_t registers a service on the system bus, this service also inherits the dnsmasq_t label.

The necessary permission checks for the sysadm_t user domain to send messages to the dnsmasq service, and for the dnsmasq domain itself to register the service, are:

allow dnsmasq_t self:dbus { acquire_svc send_msg };
allow sysadm_t dnsmasq_t:dbus send_msg;
allow dnsmasq_t sysadm_t:dbus send_msg;

For the sysadm_t domain, the two rules are needed as we usually not only want to send a message to a D-Bus service, but also receive a reply (which is also handled through a send_msg permission but in the inverse direction).
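
To check whether such rules are actually present in the loaded policy, sesearch can be queried for the dbus class (a sketch, assuming app-admin/setools is installed):

~# sesearch --allow -s sysadm_t -t dnsmasq_t -c dbus -p send_msg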

However, with the following XML snippet inside its service configuration file, owning a certain resource is checked against a different label:

<selinux>
  <associate own="uk.org.thekelleys.dnsmasq"
             context="system_u:object_r:dnsmasq_dbus_t:s0" />
</selinux>

With this, the rules would become as follows:

allow dnsmasq_t dnsmasq_dbus_t:dbus acquire_svc;
allow dnsmasq_t self:dbus send_msg;
allow sysadm_t dnsmasq_t:dbus send_msg;
allow dnsmasq_t sysadm_t:dbus send_msg;

Note that only the access for acquiring a service based on a name (i.e. owning a service) is checked based on the different label. Sending and receiving messages is still handled by the domains of the processes (actually the labels of the connections, but these are always the process domains).

I am not aware of any policy implementation that uses a different label for owning services, and the implementation is more suited to “force” D-Bus to only allow services with a correct label. This ensures that other domains that might have enough privileges to interact with D-Bus and own a service cannot own these particular services. After all, other services don’t usually have the privileges (policy-wise) to acquire_svc a service with a different label than their own label.

June 29, 2014
Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Accepted for Google Summer of Code 2014 (June 29, 2014, 21:00 UTC)

This year I’ve been accepted for Google Summer of Code 2014 with Gentoo Foundation for the Gentoo Keys project and my mentor will be Brian Dolbec (dol-sen). Gentoo Keys is a Python based project that aims to manage the GPG keys used for validation on users and Gentoo’s infrastructure servers. These keys will be any/all of the release keys, developer keys and any other third party keys or keyrings available or needed.

Participating in large communities and being a developer comes with great responsibilities. Developers have access to commit their new changes to the main repository; however, even an unintended incorrect commit to the main repository would affect the majority of the users. Such an issue could be addressed easily and instantly by the developer who made the mistake. A less innocent case is when a developer’s box is compromised: the malicious user could then commit malicious changes freely to the main tree. To prevent this kind of incident, developers are requested to sign their own commits with their GPG key in order to prove they are who they claim to be. It’s an extra layer of protection that helps to keep the integrity of the main repository. Gentoo Keys aims to solve that and provides its features in many scenarios, like overlays and release engineering management.

Gentoo Keys will be able to verify GPG keys used for Gentoo’s release media, such as installation CDs, Live DVDs, packages and other GPG-signed documents. In addition, it will be used by the Gentoo infrastructure team to achieve GPG-signed git commits in the forthcoming git migration of the main CVS tree.

Gentoo Keys is an open source project which has its code available from the very first day in Gentoo’s official repositories. Everyone is welcome to provide patches and request new features.

Source code: https://github.com/gentoo/gentoo-keys.
Weekly Reports are posted here.
Wiki page: https://wiki.gentoo.org/wiki/Project:Gentoo-keys.

Accepted for Google Summer of Code 2014 was originally published by Pavlos Ratis at dastergon's weblog on June 30, 2014.

Sven Vermeulen a.k.a. swift (homepage, bugs)
D-Bus, quick recap (June 29, 2014, 17:16 UTC)

I’ve never fully investigated the what and how of D-Bus. I know it is some sort of IPC, but higher level than the POSIX IPC methods. After some reading, I think I am starting to understand how it works and how administrators can work with it. So a quick write-down is in order so I don’t forget in the future.

There is one system bus and, for each X session of a user, also a session bus.

A bus is governed by a dbus-daemon process. A bus itself has objects on it, which are represented through path-like constructs (like /org/freedesktop/ConsoleKit). These objects are provided by a service (application). Applications “own” such services, and identify these through a namespace-like value (such as org.freedesktop.ConsoleKit).
Applications can send signals to the bus, or messages through methods exposed by the service. If methods are invoked (i.e. messages sent) then the application must specify the interface (such as org.freedesktop.ConsoleKit.Manager.Stop).

Administrators can monitor the bus through dbus-monitor, or send messages through dbus-send. For instance, the following command invokes the org.freedesktop.ConsoleKit.Manager.Stop method provided by the object at /org/freedesktop/ConsoleKit owned by the service/application at org.freedesktop.ConsoleKit:

~$ dbus-send --system --print-reply \
  --dest=org.freedesktop.ConsoleKit \
  /org/freedesktop/ConsoleKit/Manager \
  org.freedesktop.ConsoleKit.Manager.Stop

What I found most interesting, however, was querying the busses. You can do this with dbus-send, although it is much easier to use tools such as d-feet or qdbus.

To list current services on the system bus:

~# qdbus --system
:1.1
 org.freedesktop.ConsoleKit
:1.10
:1.2
:1.3
 org.freedesktop.PolicyKit1
:1.36
 fi.epitest.hostap.WPASupplicant
 fi.w1.wpa_supplicant1
:1.4
:1.42
:1.5
:1.6
:1.7
 org.freedesktop.UPower
:1.8
:1.9
org.freedesktop.DBus

The numbers are generated by D-Bus itself; the namespace-like strings are taken by the objects. To see what is provided by a particular service:

~# qdbus --system org.freedesktop.PolicyKit1
/
/org
/org/freedesktop
/org/freedesktop/PolicyKit1
/org/freedesktop/PolicyKit1/Authority

The methods made available through one of these:

~# qdbus --system org.freedesktop.PolicyKit1 /org/freedesktop/PolicyKit1/Authority
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface_name, QString property_name)
method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface_name)
...
property read uint org.freedesktop.PolicyKit1.Authority.BackendFeatures
property read QString org.freedesktop.PolicyKit1.Authority.BackendName
property read QString org.freedesktop.PolicyKit1.Authority.BackendVersion
method void org.freedesktop.PolicyKit1.Authority.AuthenticationAgentResponse(QString cookie, QDBusRawType::(sa{sv} identity)
method void org.freedesktop.PolicyKit1.Authority.CancelCheckAuthorization(QString cancellation_id)
signal void org.freedesktop.PolicyKit1.Authority.Changed()
...

Access to methods and interfaces is governed through XML files in /etc/dbus-1/system.d (or session.d depending on the bus). Let’s look at /etc/dbus-1/system.d/dnsmasq.conf as an example:

<!DOCTYPE busconfig PUBLIC
 "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
        <policy user="root">
                <allow own="uk.org.thekelleys.dnsmasq"/>
                <allow send_destination="uk.org.thekelleys.dnsmasq"/>
        </policy>
        <policy context="default">
                <deny own="uk.org.thekelleys.dnsmasq"/>
                <deny send_destination="uk.org.thekelleys.dnsmasq"/>
        </policy>
</busconfig>

The configuration mentions that only the root Linux user can ‘assign’ a service/application to the uk.org.thekelleys.dnsmasq name, and root can send messages to this same service/application name. The default is that no-one can own and send to this service/application name. As a result, only the Linux root user can interact with this object.

D-Bus also supports starting of services when a method is invoked (instead of running this service immediately). This is configured through *.service files inside /usr/share/dbus-1/system-services/.
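
Such a service file is a small ini-style snippet; a hypothetical example (the service name and binary path are invented for illustration) could look like:

# /usr/share/dbus-1/system-services/org.example.Foo.service
[D-BUS Service]
Name=org.example.Foo
Exec=/usr/libexec/foo-daemon
User=root

When a method call arrives for org.example.Foo and no process currently owns that name, dbus-daemon launches the Exec command (as the given User on the system bus) and then delivers the message.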

June 27, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Emmy Noether grant extended (June 27, 2014, 11:42 UTC)



Today we've received the good news that our Emmy Noether project on the electronic and nano-electromechanical properties of carbon nanotubes has been given a positive intermediate evaluation by the referees. This means funding for an additional period will be granted. Cheers!

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Build times (June 27, 2014, 08:01 UTC)

Just for fun, over about 8500 packages built, the slowest three:

     Fri Jun 13 19:40:13 2014 >>> dev-python/pypy-2.2.1
       merge time: 2 hours, 7 minutes and 23 seconds.

     Fri Jun 20 09:58:38 2014 >>> app-office/libreoffice-4.2.4.2
       merge time: 1 hour, 37 minutes and 22 seconds.

     Fri Jun 27 12:52:19 2014 >>> sci-libs/openfoam-2.3.0
       merge time: 1 hour, 5 minutes and 8 seconds.
(Quad-core AMD64, 3.4 GHz, 8 GB RAM)

These are also the only packages above 1h build time.
The average seems to be near 5 minutes (it's hard to filter out all the binpkg merges, which are silly-fast).

Edit: New highscore!
     Sun Jun 29 20:36:09 2014 >>> sci-mathematics/nusmv-2.5.4
       merge time: 2 hours, 58 minutes.
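
For the record, these entries match the output format of app-portage/genlop; to pull similar per-package merge times out of /var/log/emerge.log yourself (package name just an example):

~# genlop -t dev-python/pypy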

June 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
XBMC part 2 (June 26, 2014, 21:24 UTC)

I have posted about setting up a new box for XBMC, and here is a second part to that post, now that I have arrived in Dublin and actually set it up in my living room as part of my system. There are a few things that need to be described in more detail.

The first problem I had was how to set up the infrared receiver for the remote control. I originally intended to use my Galaxy Note as it has an IR blaster for I have no idea what reason; but then I realized I have a better option.

While the NUC does not, unfortunately, support CEC input, my receiver, a Yamaha RX-V475, comes with a programmable remote controller, which – after a very quick check cat-ing the event input device node – appeared to send signals in the right frequency for the built-in IR sensor to pick up. So the question was to find a way to map the buttons on the remote to actions in XBMC.

Important note: a lot of the documentation out there tells you that the nuvoton driver is buggy and requires playing with /sys files and the DSDT tables. This is outdated; just make sure you use kernel version 3.15 or later and it works perfectly fine.

The first obvious option, which I have seen documented basically everywhere, is to use lirc. Now that's a piece of software that I know a little too well for comfort. Not everybody knows this, both because I went by a different nickname at the time, and because it happened a long time before I joined Gentoo, and definitely before I started keeping a blog. But as things are, back in the days when Linux 2.5 was a thing, I did the first initial port of the lirc driver to a newer kernel, mostly as an external patch to apply on top of the kernel. I even implemented devfs support, since while I was doing that I finally moved to Gentoo, and I needed devfs to use it.

I wanted to find an alternative to using lirc, for this and other reasons. Among other things, the last time I used it, I was using it on a computer that was not dedicated as an HTPC, so this looked like a much easier task with a single user-facing process in the system. After looking around quite a bit I found that you can make the driver output X-compatible key events instead of IR events by loading the right keymap. While there are multiple ways to do this, I ended up using ir-keytable, which comes in v4l-utils.

The remote control only had to be set to send codes for a VDR of the brand "Microsoft" — which I assume puts it in a mode compatible with Windows XP Media Center Edition. Funnily enough they actually put a separate section for Apple TV codes. After that, the RC6/MCE table can be used, and that will send proper keypresses for things like the arrows and the number buttons.

I only had to change a couple of keys, namely Enter and Exit to send KEY_RETURN and KEY_BACKSPACE respectively, so that they map to actions in XBMC. It would probably be simple enough to change the bindings to XBMC directly, but I find it more reliable for it to send a different key altogether. The trick is to edit /etc/rc_keymaps/rc6_mce to change the key that is sent, and then re-run ir-keytable -a /etc/rc_maps.cfg, and the problem is solved (udev rules are in place so that the map is loaded at reboot).
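
For illustration, the keymap is just a plain list of scancode/keycode pairs, so the change amounts to replacing the keycode on the relevant lines and reloading; the scancodes below are placeholders, check your own /etc/rc_keymaps/rc6_mce:

# scancode that used to map to KEY_OK
0x800f0422 KEY_ENTER
# scancode that used to map to KEY_EXIT
0x800f0423 KEY_BACKSPACE

~# ir-keytable -a /etc/rc_maps.cfg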

And this is one more problem solved, now I'm actually watching things with XBMC so it seems to be working fine.

June 25, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Building Everything (June 25, 2014, 06:27 UTC)

Preparation:

  • Take recent stage3 and unpack to a temporary location
  • Set up things: make.conf, resolv.conf, keywords, ...
  • Update @system, check gcc version etc.
  • Clone this snapshot to 4 locations (4 because of CPU cores)
  • bindmount /usr/portage and friends
Run: Start a screen session for each clone. Chroot in. Apply magic oneliner:
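# Roughly: qsearch -NC --all lists every package in the tree (names only, no
# color) and sort -R shuffles the order so each chroot works on different
# packages; "emerge --nodeps -pk" is a cheap pretend-run to skip packages that
# cannot be merged at all, and -uNDk1 then builds (or reuses a binpkg for) the
# rest without recording it in the world file.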
for i in $( qsearch -NC --all | sort -R ); do 
    if $( emerge --nodeps -pk $i > /dev/null ) ; then 
        emerge --depclean; echo $i; emerge -uNDk1 $i; 
    fi; 
done
Wait 4-5 days, get >10k binary packages, lots of logfiles.

Space usage:
~2.5G logfiles
~35G distfiles
~20G binary packages
~100G temp space (/var/tmp has lots of cruft unless FEATURES="fail-clean")


Triage of these logfiles yields about 1% build failures, on average.
It's not hard to do, just tedious!

make.conf additions:
FEATURES="buildpkg split-log -news"
PORT_LOGDIR="/var/log/portage/"
MAKEOPTS="-j4"
EMERGE_DEFAULT_OPTS="--jobs 4"

CLEAN_DELAY="0"
EMERGE_WARNING_DELAY="0"
ACCEPT_PROPERTIES="* -interactive"

June 24, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

Since Adobe has stopped developing Flash for Firefox and Mozilla has no plans to support the Pepper plug-in API, Rinat Ibragimov is developing a wrapper to use Google Chrome’s Pepper-based Flash plug-in with Mozilla Firefox:

GitHub: i-rinat/freshplayerplugin

A live ebuild is now available in the “betagarden” overlay. Users of non-stable Chrome can edit the plug-in path in /etc/freshwrapper.conf.
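
For reference, the plug-in path is set with a line along these lines (the option name and path here are assumptions; check the freshplayerplugin README for the exact syntax):

# /etc/freshwrapper.conf (key name and path are assumptions)
pepperflash_path = "/opt/google/chrome-beta/PepperFlash/libpepflashplayer.so"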

Interestingly, freshplayerplugin makes use of my library uriparser for URI handling. Very nice! :)

June 23, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
Inlining -march=native for distcc (June 23, 2014, 15:26 UTC)

-march=native is a gcc flag that enables auto-detection of the CPU architecture and properties. Not only does it allow you to avoid finding the correct value of -march=, it also enables instruction sets that do not fit any standard CPU profile and detects the cache sizes.

Sadly, -march=native itself can’t really work well with distcc. Since the detection is performed when compiling, remote gcc invocations would use the architecture of the distcc host rather than the client. Therefore, the resulting executables would be a mix of different architectures used by distcc.

You may also find -march=native a bit opaque. For example, we had multiple bug reports about LLVM failing to build with -march=atom. However, some of the reporters were using -march=native, so we weren’t able to immediately identify the duplicates.

In this article, I will briefly guide you through replacing -march=native with expanded compiler flags, for the benefit of distcc compatibility and more explicit build logs.

Obtaining the native flags from gcc

The first step towards replacing -march=native is to determine which flags are enabled by it. Various people suggest multiple ways of obtaining -march=native flags. For example, you can use the following call:

$ gcc -### -march=native -x c -
Using built-in specs.
COLLECT_GCC=/usr/x86_64-pc-linux-gnu/gcc-bin/4.8.3/gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-pc-linux-gnu/4.8.3/lto-wrapper
Target: x86_64-pc-linux-gnu
[…]
Thread model: posix
gcc version 4.8.3 (Gentoo 4.8.3 p1.1, pie-0.5.9) 
COLLECT_GCC_OPTIONS='-march=native'
 /usr/libexec/gcc/x86_64-pc-linux-gnu/4.8.3/cc1 -quiet - "-march=k8-sse3" -mcx16 -msahf -mno-movbe -mno-aes -mno-pclmul -mno-popcnt -mno-abm -mno-lwp -mno-fma -mno-fma4 -mno-xop -mno-bmi -mno-bmi2 -mno-tbm -mno-avx -mno-avx2 -mno-sse4.2 -mno-sse4.1 -mno-lzcnt -mno-rtm -mno-hle -mno-rdrnd -mno-f16c -mno-fsgsbase -mno-rdseed -mno-prfchw -mno-adx -mfxsr -mno-xsave -mno-xsaveopt --param "l1-cache-size=64" --param "l1-cache-line-size=64" --param "l2-cache-size=512" "-mtune=k8" -quiet -dumpbase - -auxbase - -fstack-protector -o /tmp/cckZDyUR.s
[…]

For those more curious, a similar call can be made with -x c++ for the C++ compiler flags. The expanded optimization flags can be found in the cc1 (or cc1plus in case of C++) command line. I have highlighted the relevant flags — usually you’re looking for various -m flags and --params related to caches.

You may also notice -fstack-protector there. This is because nowadays Gentoo enables it by default. If you are using a non-Gentoo distcc host (why would you have a non-Gentoo host in the first place?), you may want to pass it explicitly as well.

You may find the above output a bit overly verbose. While this technically isn’t a problem, it clutters the build logs. So, let’s filter it a bit.

Filtering out redundant flags

Most of the -m flags listed above are redundant, being either equivalent to the defaults or enabled implicitly by -march. For example, on the host providing the example output, none of the -mno-* flags were actually required, and -msahf was enabled implicitly.

You can safely assume that in Gentoo all -m flags are disabled by default. To find out what flags are implied by the -march, let's look at gcc sources.

$ tar -xf /var/cache/portage/distfiles/gcc-4.8.3.tar.bz2
$ find gcc-4.8.3/gcc/config -name '*.c' -exec grep k8-sse3 {} +
gcc-4.8.3/gcc/config/i386/i386.c:      {"k8-sse3", PROCESSOR_K8, CPU_K8,
gcc-4.8.3/gcc/config/i386/driver-i386.c:	cpu = "k8-sse3";

The first file has what we're looking for. Inside, you can find:

      {"k8-sse3", PROCESSOR_K8, CPU_K8,
	PTA_64BIT | PTA_MMX | PTA_3DNOW | PTA_3DNOW_A | PTA_SSE
	| PTA_SSE2 | PTA_SSE3 | PTA_NO_SAHF | PTA_PRFCHW | PTA_FXSR},

So -march=k8-sse3 would enable -mmmx, -m3dnow, -msse and so on. If you compare this list with the output obtained before, you'd notice that the -march option didn't enable any flags that would need to be disabled explicitly, so all -mno-* flags can be omitted. Similarly, -mfxsr is redundant. But -mcx16 and -msahf seem relevant since the former is not listed there at all, and the latter is disabled by default.

After filtering out the unnecessary flags, we can create both distcc- and eye-friendly CFLAGS like:

CFLAGS='-O2 -pipe -march=k8-sse3 -mcx16 -msahf --param l1-cache-size=64 --param l1-cache-line-size=64 --param l2-cache-size=512'
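
One possible sanity check (a sketch, not from the original write-up) is to compare gcc's reported target settings with and without the expansion; the -m flags should come out identical, while the --param values may need a separate look via --help=params:

$ diff <(gcc -march=native -Q --help=target) \
       <(gcc -march=k8-sse3 -mcx16 -msahf -Q --help=target)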

June 22, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Chroots for SELinux enabled applications (June 22, 2014, 18:16 UTC)

Today I had to prepare a chroot jail (thank you grsecurity for the neat additional chroot protection features) for a SELinux-enabled application. As a result, “just” making a chroot was insufficient: the application needed access to /sys/fs/selinux. Of course, granting access to /sys is not something I like to see for a chroot jail.

Luckily, all other accesses are not needed, so I was able to create a static /sys/fs/selinux directory structure in the chroot, and then just mount the SELinux file system on that:

~# mount -t selinuxfs none /var/chroot/sys/fs/selinux

In hindsight, I probably could just have created a /selinux location as that location, although deprecated, is still checked by the SELinux libraries.

Anyway, there was a second requirement: access to /etc/selinux. Luckily it was purely for read operations, so I first contemplated copying the data and doing a chmod -R a-w /var/chroot/etc/selinux, but then considered a bind-mount:

~# mount -o bind,ro /etc/selinux /var/chroot/etc/selinux

Alas, bad luck – the read-only flag is ignored during the mount, and the bind-mount is still read-write. A simple article on lwn.net informed me about the solution: I need to do a remount afterwards to enable the read-only state:

~# mount -o remount,ro /var/chroot/etc/selinux

Great! And because my brain isn’t what it used to be, I just made a quick blog post for future reference ;-)

June 18, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A new XBMC box (June 18, 2014, 22:07 UTC)

A couple of months ago I was at LinuxTag in Berlin with the friends from VideoLAN, and we shared a booth with the XBMC project. It was interesting to see the newest version of XBMC running, and I decided that it was time for me to get a new XBMC box — the last time I used XBMC was on my AppleTV, and while it was not strictly disappointing, it was not terrific either after a while.

At any rate, we spoke about what options are available nowadays to make a good XBMC set up, and while the RaspberryPi is all the rage nowadays, my previous experience with the platform made it a no-go. It also requires you to find a place to store your data (the USB support on the Pi is not good for many things) and you will most likely have to re-encode animes to the Right Format™ so that the RPi VideoCore can properly decode them: anything that can't be hardware-accelerated will not play on such limited hardware.

The alternative has been the Intel NUC (Next Unit of Computing), which Intel sells in pre-configured "barebone" kits, some of which include wifi antennas, 2.5" disk bays, and a CIR (Consumer Infrared Receiver) that allows you to use a remote such as the one for the XBox 360 to control the unit. I decided to look into the options and I settled on the D54250WYKH which has a Core i5 CPU, space for both a wireless card (I got the Intel 7260 802.11ac which is dual-radio and supports the new 11ac protocol, even though my router is not 11ac yet) and an mSATA SSD (I got a Transcend 128GB one), as well as the 2.5" bay that allows me to use a good old spinning-rust harddrive to store the bulk of the data.

Be careful and don't repeat my mistake! I originally ordered a very cool Western Digital Caviar Green 2TB HDD but while it is a 2.5" HDD, it does not fit properly in the provided cradle; the same problem used to happen with the first series of 1TB HDDs on PlayStation 3s. I decided to keep the HDD and bring it with me to Ireland, as I don't otherwise have a 2TB HDD, instead I opted for a HGST 1.5TB HDD (no link for this one as I bought it at Fry's the same day I picked up the rest, if nothing else because I had no will to wait, and also because I forgot I needed a keyboard).

While I could have just put OpenELEC on the device, I decided instead to install my trusted Gentoo — a Core i5 with 16GB of RAM and a good SSD is well within its ability to run it. And since I was finally setting something up that needs (for myself) to turn on very quickly, I decided to give systemd a go (especially as Robbins is now considered a co-maintainer for OpenRC, which drains all my will to keep using it). The effect has been stunning, but there are a few issues that need to be ironed out; for instance, as far as I can tell, there is no unit for rngd, which means that both my laptop (now converted to systemd) and the device have no entropy, even though they both have the rdrand instruction; I'll try to fix this lack myself.
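
For reference, a minimal sketch of what such a unit could look like (the path and the -f foreground option are assumptions, not an official unit shipped by rng-tools):

# /etc/systemd/system/rngd.service (hypothetical)
[Unit]
Description=Hardware RNG entropy gatherer daemon

[Service]
ExecStart=/usr/sbin/rngd -f

[Install]
WantedBy=multi-user.target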

Another huge problem for me has been getting the audio to work; while I've been told by the XBMC people that the NUC are perfectly well supported, I couldn't for the sake of me get the audio to work for days. At the end it was Alexander Patrakov who pointed out to intel_iommu=on,igfx_off as a kernel option to get it to work (kernel bug #67321 still unfixed). So if you have no HDMI output on your NUC, that's what you have to do!

Speaking about XBMC and Gentoo, the latest version as of last week (which was not the latest upstream version, as a new one got released exactly while I was installing the box) seems to force you to install FFmpeg over libav – I honestly felt a bit sorry for the developers of XBMC at LinuxTag while they were trying to tell me how the multi-threaded h264 decoder from FFmpeg is great… Anton, who wrote it, is a libav developer! – but even after you do that, it seems like it does not link it in, preferring a bundled copy of it instead. Which also doesn't seem to build support for multithreading (uh?). This is something that I'll have to look into once I'm back in Dublin.

Other than that, there isn't much to say; the one remaining big issue is to figure out how to properly have XBMC start up at boot without nasty autologin hacks on systemd. And of course finding a better way than using a transmission user to start the Transmission daemon, or at least find a better way to share the downloads with XBMC itself. Probably separating the XBMC and Transmission users is a good idea.

Expect more posts on what's going on with my XBMC box in the future, and take this one as a reference about the NUC audio issue.

Hanno Böck a.k.a. hanno (homepage, bugs)

Today I had a little twitter conversation which made me think about the responsibilities a science journalist has. It all started with a quote from Ivan Oransky (who is the editor of Retraction Watch) who said reporting on a study without reading it is 'journalist malpractice'. The source of this is another person who probably just heard him saying that, so I'm not sure what his exact words were.

Twitter conversation

Admittedly my first thought was: "He is right, too many journalists report about things they don't understand." My second thought was: "If he is right then I am probably guilty of 'journalist malpractice'." So I gave it a second thought and I probably won't agree with the statement any more.

I had a quick look at articles I wrote in the past and I have identified the last ten ones that more or less were coverage of a scientific piece of work. I have marked the ones I actually read with a [Y] and the ones I didn't read with a [N]. I've linked the appropriate scientific works and my articles (all in German). I must admit that I defined "read" widely, meaning that I haven't necessarily read the whole study/article in detail; I sometimes have just tried to parse the parts important to me.

  1. [X] Supposedly successful Turing Test taz, 2014-06-13
  2. [N] A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic, Golem.de, 2014-05-17
  3. [N] Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children (Cochrane-Review on Tamiflu), Neues Deutschland, 2014-04-26)
  4. [N] 20 Years of SSL/TLS Research: An Analysis of the Internet's Security Foundation, Golem.de, 2014-04-17
  5. [N] DRAFT FIPS 202 - SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions, Golem, 2014-04-05
  6. [Y] Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations, Golem.de, 2014-04-04
  7. [Y] On the Practical Exploitability of Dual EC in TLS Implementations, Golem.de, 2014-04-01
  8. [Y] Publishers withdraw more than 120 gibberish papers, Golem.de, 2014-02-27
  9. [Y] Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data, taz, 2013-10-18
  10. [Y] Factoring RSA keys from certified smart cards: Coppersmith in the wild, Golem.de, 2013-09-17


Now the first thing that comes to mind is that I seem to have become lazier recently in reading studies. I hope this isn't the case and I honestly think this is mostly coincidence. Now let's get into some details: The first example (the Turing Test) is interesting because it seems there is no scientific publication at all, just a press release. This probably tells you something about the quality of that "research", but while I read the press release I haven't even bothered to check if there is a scientific publication I could read.

The second example becomes interesting. I understand enough to know what a "quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic" actually is and I think I also understand what it means, but there's just no way I could understand the paper itself. This is complex mathematics. I seriously doubt that any journalist who covered this work actually read it. If there is I'd like to meet that person. I'm also very sure that the people who wrote the press release overselling this research have neither read this paper nor understood its implications.

I think this example gets to the point why I would disagree with the very general statement that a journalist should've read every scientific piece he writes about: It's sometimes so specialized that it's basically impossible. And I don't think this is an out of the line example. Just think about the Higgs Boson: Certainly this is something we want journalists to write about. But I'm pretty sure there are very few - if any - journalists who are able to read the scientific publications that are the basis of this discovery.

Some quick notes on the others: Number 4 was part of a 200-page thesis and the press release was already pretty detailed and technical, so I think it was legitimate to not read the original source in that case. Number 5 is somewhat similar to 2, because it is about an algorithm that involves complex math. Number 8 is not really a scientific paper, it is merely a news item on the Nature webpage. In the above list, the only case where I think maybe I should've read the scientific paper and didn't is the Cochrane-Review on Tamiflu.

Conclusion: Don't get me wrong. I certainly welcome the idea that science journalists should have a look into the original scientific papers they write about more often - and this doesn't exclude myself. However, as shown above I doubt that this works in all cases.

June 17, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
EAPI statistics, again (June 17, 2014, 07:13 UTC)

Start: Thu Jan 16 08:18:45 UTC 2014
End:   Mon Jun 16 00:00:01 UTC 2014

EAPI 0:   5966 ebuilds (15.78 percent) ->  5477 ebuilds (14.40 percent)
EAPI 1:    370 ebuilds (0.98 percent)  ->   215 ebuilds ( 0.57 percent)
EAPI 2:   3335 ebuilds (8.82 percent)  ->  2938 ebuilds ( 7.72 percent)
EAPI 3:   3005 ebuilds (7.95 percent)  ->  2585 ebuilds ( 6.79 percent)
EAPI 4:  12385 ebuilds (32.76 percent) -> 10375 ebuilds (27.27 percent)
EAPI 5:  12742 ebuilds (33.71 percent) -> 16455 ebuilds (43.25 percent)
Total    37803 -> 38045

EAPI 0 change:  -8.2%
EAPI 1 change: -58.1%
EAPI 2 change: -11.9%
EAPI 3 change: -14.0%
EAPI 4 change: -16.2%
EAPI 5 change: +29.1%
So over the last 5 months we had about a 2% increase in the total number of ebuilds. The only growing class is EAPI 5, which is quite excellent.

EAPI 0 is the slowest decreasing; as long as there's no coordinated effort to get rid of it, it'll be there forever. EAPI 1 is now very close to extinction.

EAPI 2, 3 and 4 are slowly shrinking away, but at this rate it'll still take years.
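
For the curious, numbers of this kind can be approximated with a rough one-liner over a portage tree checkout; note that ebuilds without an explicit EAPI line (i.e. EAPI 0) are not counted by this sketch:

$ find /usr/portage -name '*.ebuild' -exec grep -h '^EAPI=' {} + \
    | tr -d '"' | sort | uniq -c | sort -rn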

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Hardware review: Asus WL-330NUL (June 17, 2014, 01:21 UTC)

Some people probably still remember that I used to have an absolute fear of flying and planes altogether. To the point that I have avoided going to the on-site interview of the company I'm now (years later) working for, because it would have taken place in California and I got scared. While I still do not like to travel, I've been traveling quite a bit in the past few years, not only back and forth between Venice and Los Angeles, but also within Europe and within other cities in the USA both last year and this.

In particular, TripIt is telling me I'm going to be away from home at least 41 days this year (and this is without including trips that are not scheduled yet, such as a visit back to Italy, and another trip to the United States in November). And most of them are not for personal reasons (although some are, luckily). With all of this going on, I've started looking at any reasonably cheap option for me to reduce the pains of traveling.

One of these options came to me through a few colleagues, who presented me with the Asus WL-330NUL (Amazon UK) — a tiny wireless router, the almost exact size of the Ethernet adapter that was bundled with my laptop, that provides you with your own, personal WiFi network, routed to another, less-private network, either wireless or wired. An absolute must if you spend a considerable amount of time in hotels.

First of all, the device itself is tiny, as I said it's almost the exact size of my Ethernet adapter and it can replace it 100%. Indeed, the device has four interfaces (although not the proper term): USB (gadget), Ethernet and two wireless radios; the USB connection is used both for host connectivity and for power: if you connect the router to your computer via USB, it'll present itself as a cdc_ether device, which Linux supports full well as if it was a standard Ethernet port — if possible, it's better supported than some of the USB Ethernet adapters out there in the wild.

Once your computer sees the connection via Ethernet, the device itself can be configured to either use a wired or wireless upstream connection — if you choose to use a wired network, which is what I do, as I'll explain in a moment, then this by itself is going to be already a replacement of the ethernet adapter; indeed at first the device will configure itself to be a simple bridge between USB and Ethernet, although that's not what I use it for.

Once you configured the wired or wireless upstream connection, you can focus on setting up your own private WiFi network: the second radio can broadcast your own SSID and handle your own 802.11n network, protected with WPA for instance. Since you have a stable SSID/key combination, once you turn the device on, all your gadgets will connect to that network, without requiring manual, device-by-device, configuration.

Even better, since you're now behind a router, as far as the hotel or other provider is concerned, you have a single device: you consume a single IP and a single connection. For networks where you have to log in separately for each device every 24 hours (or even at every reconnection), this also means you only have to do it from one device, where it's handy, and everything else will follow.

As I said above, my suggested approach is to always use the wired network if the hotel makes it available (most of the non-economy hotels do). The reason why I'm saying this is that it's easy to misread the security implications of a device like this. While it is true that it can create your own private WiFi to then route to the hotel wireless, when you do so you add nothing to security, even if your WiFi is WPA2. The reason is simple: the public wireless network from the hotel is still completely unencrypted, so anybody eavesdropping can see what you're doing, unless you're using encrypted websites and even then part of your traffic can be inspected, such as which websites you're consulting. If, on the other hand, you use the wired network, while not totally secure (the hotel and the provider can still see the non-encrypted connections), you're still stopping a good bunch of people from gathering your data.

Finally, there is one more feature that is important if you travel a lot among hotels of respectable size: all of them use multiple access points for their WiFi networks, even though they broadcast the same SSID (and sometimes they don't); these access points do not allow you to roam data across them, so if you have two devices, say a Nexus 7 and a Chromecast that you bring with you, they may not be able to talk to each other without a device like this, as they may end up on different APs, unable to "see" each other on the network, or at least not consistently enough to stream from one to the other. Since with this device you can just connect all the gadgets to the same network and access point, your problem is then solved.

I've been using the device for ten days now on two hotels and two airports, and it's definitely handy. I can't complain about the range either: I'm now in Pittsburgh's Bakery Square at the SpringHill Suites and my phone connected fine to it across the square in the Coffee Tree Roaster shop. Oh yeah and my room faces away from the square too.

Also, the power supply (by Asus!) that I bought last year (the original US one that I got with it just died on me, so I bought a different one) comes with a USB charging port by itself, which means I can just use WiFi from my laptop even with a single power socket, freeing up the USB port (I only have two and one I use for my smartcard reader). I guess I could probably run this off my Anker battery (Amazon UK) but I have not tried that yet, as I somehow doubt that the airlines would be okay with me broadcasting my own WiFi on their planes. In any case, this is now part of my essential tools.

June 16, 2014
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
unsafePerformIO and missing NOINLINE (June 16, 2014, 16:10 UTC)

Two months ago Ivan asked me if we had a working darcs-2.8 for ghc-7.8 in gentoo. We had a workaround to compile darcs at that time, but darcs did not work reliably. Sometimes it needed 2-3 attempts to pull a repository.

A bit later I decided to actually look at the failure case (issue on the darcs bugtracker) and do something about it. My idea to debug the mystery was simple: reproduce the difference on the same source for ghc-7.6/7.8 and keep adding debug info until a difference I could understand popped up.

Darcs has a great debug-verbose option for most commands. I used the debugMessage function to litter the code with more debugging statements until a complete, horrible picture emerged.

As you can see in the bugtracker issue, I posted there various intermediate points of what I thought went wrong (don’t expect those comments to make much sense).

The immediate consequence of the breakage was an overwrite of a partially downloaded file. The event timeline looked simple:

  • darcs scheduled the same file for download twice (two jobs in the download queue)
  • the first download job finished
  • the notified waiter started processing that downloaded temp file
  • the second download job started, truncating the previous complete download
  • the notified waiter continued processing the now partially downloaded file and detected breakage

Thus, first I decided to fix the consequence. It did not fix the problems completely (sometimes darcs pull still complained about remote repositories being broken, i.e. missing files), but it made the errors saner (only the remote side was allegedly at fault).

Ideally, that file overwrite should not happen in the first place. Part of the problem was temp file name predictability.

But, OK. Then I started digging into why the 7.6/7.8 download request patterns were so severely different. At first I thought the new IO manager was the cause of the difference. The paper says it fixed a haskell thread scheduling issue (the paper is nice even for leisure reading!):

GHC’s RTS had a bug in which yield placed the thread back on the front of the run queue. This bug was uncovered by our use of yield, which requires that the thread be placed at the end of the run queue.

Thus I was expecting the bug to come from this side.

Then, being determined to dig A Lot into the darcs source code, I decided to disable optimizations (-O0) to speed up rebuilds. And the bug vanished.

That made it click: unsafePerformIO might be the real problem. I grepped for all unsafePerformIO instances and examined all definition sites.

Two were especially interesting:

-- src/Darcs/Util/Global.hs
-- ...
_crcWarningList :: IORef CRCWarningList
_crcWarningList = unsafePerformIO $ newIORef []
{-# NOINLINE _crcWarningList #-}
-- ...
_badSourcesList :: IORef [String]
_badSourcesList = unsafePerformIO $ newIORef []
{- NOINLINE _badSourcesList -}
-- ...

Did you spot the bug?

Thus The Proper Fix was pushed upstream a month ago. Without the pragma, ghc was able to inline things more aggressively (and _badSourcesList was inlined at all use sites, throwing away all updates).

I don’t know if those newIORef [] calls can be CSEd together if the types have the same representation. Ideally the module also needs -fno-cse, or should get rid of unsafePerformIO completely :].

(Side thought: top-level global variables in C style are surprisingly non-trivial in "pure" haskell. They are easy to use via peek / poke (in a racy way), but are hard to declare / initialize.)

I wondered how many haskell packages manage to misspell ghc pragma declarations the way darcs did. And there still _are_ a few such offenders:

$ fgrep -R NOINLINE . | grep -v '{-# NOINLINE' | grep '{-'
--
ajhc-0.8.0.10/lib/jhc/Jhc/List.hs:{- NOINLINE filterFB #-}
ajhc-0.8.0.10/lib/jhc/Jhc/List.hs:{- NOINLINE iterateFB #-}
ajhc-0.8.0.10/lib/jhc/Jhc/List.hs:{- NOINLINE mapFB #-}
--
darcs-2.8.4/src/Darcs/Global.hs:{- NOINLINE _badSourcesList -}
darcs-2.8.4/src/Darcs/Global.hs:{- NOINLINE _reachableSourcesList -}
--
dph-lifted-copy-0.7.0.1/Data/Array/Parallel.hs:{- NOINLINE emptyP #-}
--
dph-par-0.5.1.1/Data/Array/Parallel.hs:{- NOINLINE emptyP #-}
--
dph-seq-0.5.1.1/Data/Array/Parallel.hs:{- NOINLINE emptyP #-}
--
freesect-0.8/FreeSectAnnotated.hs:{- # NOINLINE showSSI #-}
freesect-0.8/FreeSectAnnotated.hs:{- # NOINLINE FreeSectAnnotated.showSSI #-}
freesect-0.8/FreeSect.hs:{- # NOINLINE fs_warn_flaw #-}
--
http-proxy-0.0.8/Network/HTTP/Proxy/ReadInt.hs:{- NOINLINE readInt64MH #-}
http-proxy-0.0.8/Network/HTTP/Proxy/ReadInt.hs:{- NOINLINE mhDigitToInt #-}
--
lhc-0.10/lib/base/src/GHC/PArr.hs:{- NOINLINE emptyP #-}
--
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE doubleToWord64 -}
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE word64ToDouble -}
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE floatToWord32 -}
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE word32ToFloat -}
--
warp-2.0.3.3/Network/Wai/Handler/Warp/ReadInt.hs:{- NOINLINE readInt64MH #-}
warp-2.0.3.3/Network/Wai/Handler/Warp/ReadInt.hs:{- NOINLINE mhDigitToInt #-}

Looks like there is yet something to fix :]

It would be great if hlint were able to detect pragma-like comments and warn when a comment’s contents form a valid pragma, but the comment brackets don’t allow it to fire.

{- NOINLINE foo -} -- bad
{- NOINLINE foo #-} -- bad
{-# NOINLINE foo -} -- bad
{-# NOINLINE foo #-} -- ok

Thanks for reading!


June 15, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Iran : Yazd (June 15, 2014, 20:25 UTC)

We took our first bus trip to reach Yazd from Shiraz, using the Hamsafar company. Booking a bus trip is as easy as it is cheap in Iran, so this is by far the best way to get around, even though it’s a bit slow, mostly due to the police controls along the road.

Yazd was a shock, as it’s a small and beautiful desert town with a unique atmosphere. This city still haunts me and remains my favorite of the trip. The bazaar, the covered streets and the mud walls give you a feeling which is difficult to describe.

We stayed at the very pleasant Orient Hotel and spent one night in a caravanserai, where I tried my luck and succeeded in renting a motorcycle for one day! That was a fun and incredible experience, and we’ll always remember the look on the amused faces of the Iranian people when they realized some tourists were riding a motorcycle among them.

(Photos: Yazd)

Yazd is of course not only about desert and features some beautiful and peaceful gardens.

(Photos: gardens in Yazd)

Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened, June 2014 (June 15, 2014, 19:28 UTC)

Friday the Gentoo Hardened project had its monthly online meeting to talk about the progress within the various tools, responsibilities and subprojects.

On the toolchain part, Zorry mentioned that GCC 4.9 and 4.8.3 will have SSP enabled by default. The hardened profiles will still have a different SSP setting than the default (so yes, there will still be differences between the two) but this will help in securing the Gentoo default installations.

Zorry is also working on upstreaming the PIE patches for GCC 4.10.

Besides the regular toolchain work, blueness also mentioned his intention to launch a Hardened musl subproject, which will focus on the musl C library (rather than glibc or uClibc) and hardening.

On the kernel side, two recent vulnerabilities in the vanilla Linux kernel (a pty race and a privilege escalation through the futex code) dominated the discussions on IRC recently. Some versions of the hardened kernels are still available in the tree, but the more recent (non-vulnerable) kernels have proven not to be as stable as we’d hoped.

The pty race vulnerability is possibly not applicable to hardened kernels, thanks to grsecurity’s protection against accessing kernel symbols.

The latest kernels should not be used with KSTACKOVERFLOW on production systems though; there are some issues reported with virtio network interface support (on the guests) and ZFS.

Also, on the PaX support side, the install-xattr saga continues. The new wrapper that blueness worked on dropped some code that kept the PWD, so the knowledge of the $S directory was “lost”. This is now fixed. All that is left is to have the wrapper included and stabilized.

On the SELinux side, it was the usual set of progress: policy stabilization, and userland application and library stabilization. The latter is waiting a bit because of the multilib support that’s now being integrated into the ebuilds as well (and thus has a larger set of dependencies to go through), but there are no show-stoppers. Also, the SELinux documentation portal on the wiki was briefly mentioned.

Also, the policycoreutils vulnerability has been worked around so it is no longer applicable to us.

On the hardened profiles, we had a nice discussion on enabling capabilities support (and move towards capabilities instead of setuid binaries), which klondike will try to tackle during the summer holidays.

As I didn’t take notes during the meeting, this post might miss a few points (and I forgot to enable logging as well), but as Zorry sends out the meeting logs later anyway, you can read up on the details there ;-)

Hanno Böck a.k.a. hanno (homepage, bugs)

I recently held a workshop about cryptography for web developers at the company Internations. I am publishing the slides here.

Part 1: Crypto and Web [PDF] [LaTeX], [Slideshare]
Part 2: How broken is TLS? [PDF] [LaTeX], [Slideshare]
Part 3: Don't do this yourself [PDF] [LaTeX], [Slideshare]

Part 2 is the same talk about TLS that I recently gave at the Easterhegg conference.

June 13, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

The new year once more brings good news. Our manuscript "Temperature dependence of Andreev spectra in a superconducting carbon nanotube quantum dot" was finally accepted for publication by Physical Review B. So what's this about?
When you place a carbon nanotube at low temperature between contacts made from a superconducting metal, lots of interesting things happen. Strongly simplifying, currents in a superconductor are carried by Cooper pairs of two electrons each, while the localized electronic system in the carbon nanotube is normal-conducting and carries single electrons. One mechanism at a superconductor - normal conductor interface that mediates between these two types of charge transport is so-called Andreev reflection: an electron from the normal conductor enters the superconductor, and at the same time a "missing electron", i.e. a "hole where an electron should be", is sent back into the normal conductor. The total charge passing through the interface is 2e, just right to form a Cooper pair. The superconductor-nanotube-superconductor system consists of two such interfaces back to back; analogous to a box potential, multiple reflections on both sides lead to the formation of bound quantum states within the nanotube, the so-called Andreev bound states (ABS).
So far, all other observations of ABS involved aluminum, which has a fairly low critical temperature and critical field. What is new in our work is that we use niobium as superconducting material, with higher critical temperature and larger energy gap. We can increase the temperature to over 1K and still see the superconductivity plus the ABS in the transport spectrum. This way, we can observe how thermal population of an excited Andreev state takes place. Additionally we observe a second pair of Andreev states in the larger superconducting energy gap, and a surprising multi-loop behaviour. All these effects are successfully modelled by calculations based on the superconducting Anderson model, in a collaboration with Alfredo Levy Yeyati and Alvaro Martin-Rodero from Universidad Autonoma de Madrid.

"Temperature dependence of Andreev spectra in a superconducting carbon nanotube quantum dot"
A. Kumar, M. Gaim, D. Steininger, A. Levy Yeyati, A. Martin-Rodero, A. K. Hüttel, and C. Strunk
Physical Review B 89, 075428 (2014), arXiv:1308.1020 (PDF)

Excellent news: our manuscript "Sub-gap spectroscopy of thermally excited quasiparticles in a Nb contacted carbon nanotube quantum dot" was just accepted for publication by Physical Review B as a Rapid Communication.
Once again we visit the topic of a carbon nanotube quantum dot with superconducting contacts, and again we use niobium for these contacts. Only, this time the connection between the nanotube and the superconductor is pretty bad, i.e., very low electronic tunnel rates. In the end this means that the superconductor does not influence the localized electronic system very much. However, in the metallic contacts we still have a superconductor, meaning electrons pair up into Cooper pairs, and for free quasiparticles carrying only one electron charge an energy gap evolves (the so-called BCS density of states).
Since we're using niobium, we can see superconducting effects over a fairly large temperature range. If we increase the temperature enough, thermal quasiparticles are excited over this energy gap. This precisely is what we observe in our experiment, as additional discrete lines in the transport spectrum. A detailed theoretical analysis of single electron tunneling, in a close cooperation with the research group Prof. Dr. M. Grifoni, confirms our results very well, especially also the temperature dependence of the features visible in the measurements.
In addition there is an interesting bonus to be had here. The thermally activated processes lead to a distinct double-peak of the conductance at zero bias, and the relative height of the two maxima is controlled by the degeneracy of the quantum dot ground states involved in tunneling. This means that looking at the thermally activated current provides additional information to identify the carbon nanotube level spectrum, even if it is not immediately clear from the usual "Coulomb diamond spectroscopy".

"Sub-gap spectroscopy of thermally excited quasiparticles in a Nb contacted carbon nanotube quantum dot"
M. Gaass, S. Pfaller, T. Geiger, A. Donarini, M. Grifoni, A. K. Hüttel, and Ch. Strunk
Phys. Rev. B 89, 241405(R) (2014), arXiv:1403.4456 (PDF)

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
A one-line Tinderbox (June 13, 2014, 08:33 UTC)

Needs portage-utils, best to run in a chroot:

for i in $( qsearch --all -CN | sort -R ); do emerge -1 $i; emerge --depclean; done
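
A slightly extended variant of the same idea (just a sketch; the log location is arbitrary) also keeps a list of packages that failed to build, so they can be revisited later:

for i in $( qsearch --all -CN | sort -R ); do
    emerge -1 "$i" || echo "$i" >> /var/log/tinderbox-failures.log
    emerge --depclean
done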

June 12, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
FLAC compression level comparison (June 12, 2014, 16:41 UTC)

So, I’m in the process of ripping all my music to FLAC since I am getting a completely new audio system in my home. With the high-end pre-amp, amplifiers, DACs, and floorstanding speakers in place, my full music collection (currently ripped in OGG) will no longer be of sufficient quality. Re-ripping a really large collection is a cumbersome task, so I wanted to make sure that I chose wisely with regard to the FLAC options that are available (particularly concerning compression).

A little background is that FLAC is the Free Lossless Audio Codec, which means that there is no loss of quality at all. So, regardless of the compression level that is chosen, FLAC will always decode into the exact uncompressed audio track (bit for bit). The difference between the compression levels, then, is the resulting file size. Along with that benefit (higher compression results in a smaller file size), though, comes the downside of longer times to encode. According to Wikipedia (which cites comparisons that don’t seem to directly mention decoding times), there shouldn’t be any noticeable effect on the decoding of the FLAC files based on the compression level used during encoding. The default FLAC compression level tends to be 5 for most applications.

All of that being said, I decided to do a small test (n=2) with two songs. I firstly ripped the two songs into uncompressed WAV files, and then encoded them into FLAC from the command line using the following code:

time flac $SONG.wav --compression-level-X -o flacX.flac

That showed me the time to encode, and I substituted the compression level number (between 0 [lowest compression] and 8 [highest compression]) for ‘X’. Before looking at the results, here’s some information about the system used and the information contained in the results tables:
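
If you want to reproduce this on your own files, a small loop along these lines (a sketch, assuming a single song.wav in the current directory) runs through all nine levels and reports the timing and resulting file size:

for level in $(seq 0 8); do
    echo "=== compression level ${level} ==="
    time flac --compression-level-${level} -f -o "flac${level}.flac" song.wav
    du -m "flac${level}.flac"
done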

System specs:
Intel Core i7-960 (Bloomfield) @ 3.20 GHz (quad-core with Hyperthreading)
24 GiB RAM (DDR3-1600)
Gentoo Linux with kernel 3.12.11
FLAC 1.3.0

Table data:

  • Quality: The FLAC compression level used
  • Encode (sec): The time it took to encode the song
  • Size (MiB): The resulting FLAC file size (rounded to tenths of a Mebibyte)
  • Ratio (%): FLAC file size as a percentage of the original uncompressed WAV
  • Enc + (sec): The additional time required to encode as compared to FLAC 0 (in seconds)
  • Enc + (%): The additional time required to encode as compared to FLAC 0 (as a percentage of increase)

Below you will find information about the two songs used as tests, and the results (in sortable tables):

Song 1:
Artist: Dream Theater
Album: A Change of Seasons EP
Song: A Change of Seasons
Length: 23’08″ (1388 seconds)
Uncompressed WAV: 128 seconds to rip – 233.6 MiB resulting file size

Quality   Encode (sec)   Size (MiB)   Ratio (%)   Enc + (sec)   Enc + (%)
FLAC 0    3.531          174.6        74.7        0.000         0.00
FLAC 1    3.721          173.5        74.3        0.190         5.38
FLAC 2    4.658          173.2        74.1        1.127         31.92
FLAC 3    5.255          165.0        70.6        1.724         48.82
FLAC 4    6.584          163.8        70.1        3.053         86.46
FLAC 5    9.112          163.4        69.9        5.581         158.06
FLAC 6    9.130          163.4        69.9        5.599         158.57
FLAC 7    19.475         163.3        69.9        15.944        451.54
FLAC 8    28.846         163.1        69.8        25.315        660.30

Song 2:
Artist: Libera
Album: New Dawn
Song: Air (Air on the G string by Bach)
Length: 3’43″ (223 seconds)
Uncompressed WAV: 23 seconds to rip – 37.6 MiB resulting file size

Quality   Encode (sec)   Size (MiB)   Ratio (%)   Enc + (sec)   Enc + (%)
FLAC 0    0.516          20.2         53.8        0.000         0.00
FLAC 1    0.541          19.6         52.2        0.025         4.84
FLAC 2    0.699          19.6         52.2        0.183         35.47
FLAC 3    0.806          19.1         50.8        0.290         56.20
FLAC 4    1.022          18.6         49.3        0.506         98.06
FLAC 5    1.431          18.5         49.3        0.915         177.33
FLAC 6    1.429          18.5         49.3        0.913         176.94
FLAC 7    3.049          18.5         49.1        2.533         490.89
FLAC 8    4.524          18.4         49.0        4.008         776.74

From both tests, it seems like FLAC compression level 3 is the right trade-off between file size and additional encoding time. Now, is either one that big of a deal by today’s standards (in terms of both available storage capacity and processing power)? Probably not. I could rip everything in FLAC 0 and call it a day, since the difference between FLAC 0 and FLAC 3 seems to be about 0.5 MiB for every minute of music. However, my current collection is approximately 391 hours (or 23460 minutes). That means that I will save somewhere in the neighbourhood of 12 GiB for my entire collection. Is that space saving worth the roughly 50% longer encoding time? Maybe, maybe not.
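
The back-of-the-envelope arithmetic behind that estimate (the ~0.5 MiB per minute figure is extrapolated from the two test songs above) is simply:

# 391 hours of music, saving roughly 0.5 MiB per minute by going from FLAC 0 to FLAC 3
echo "391 * 60 * 0.5 / 1024" | bc -l    # ~11.5 GiB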

At this point, my entire collection of ~391 hours of music will consume around 177 GiB if ripped at FLAC 0, and around 165 GiB if ripped at FLAC 3. On a 2 TiB HDD, that doesn’t seem like a big deal, really.

So, ultimately, I will either rip at FLAC 0 and not worry about the additional space, or FLAC 3 if I think it will help. At the rate that storage prices are dropping, FLAC 0 would seem like the obvious answer, but for so very little of an increase in encoding time, FLAC 3 makes more sense.

What are your thoughts?

Cheers,
Zach

June 10, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Please test =app-admin/perl-cleaner-2.14 (June 10, 2014, 20:26 UTC)

We've made a few small updates to perl-cleaner that should get you around subslot issues much better in the future.
If you are planning to do any major Perl update on your Gentoo box in the near future, please as a first step update to =app-admin/perl-cleaner-2.14, which is currently in ~arch but in my opinion a good stabilization candidate. This will hopefully give you a much better upgrade of your Perl modules.
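If you have never keyworded a package before, something along these lines should do it (assuming /etc/portage/package.accept_keywords is a directory; the file name is arbitrary):
echo "app-admin/perl-cleaner" >> /etc/portage/package.accept_keywords/perl-cleaner
emerge --ask --oneshot "=app-admin/perl-cleaner-2.14"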
Of course any feedback is appreciated, and if you encounter problems, please file bugs! If nothing unexpected happens, =app-admin/perl-cleaner-2.14 will go stable in a month.

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Again this year, I’m a bit late with posting the results and information about the 2014 St. Louis Children’s Hospital Make Tracks for the Zoo 5K, but alas, at least it’s coming now. The race was much like last year’s in that it was the same course, start time, et cetera. This year, though, it was much cooler outside. That made for an even better run!

There were fewer runners this year (slipping from 2133 to 1828 [a drop of 305]), but some great contenders! Though my time improved by 16 seconds (down to 19’28″ from my 19’44″ last year), I slipped by one spot from 13th to 14th. That just tells me that there were even more excellent runners this year than last year. Below is a screen capture of the top 15 overall, but you can see the full results on ChronoTrack’s event site, which was managed by Big River Race Management.



Some of the most impressive times to me were:

Jacksen McNeal – 6-years-old – 31’41″
Joe Barzilai – 100-years-old – 37’49″

When I was 6 years old, I wasn’t even thinking about running competitively, and I hope that I am still able to walk (let alone run) when I’m 100 years old!

Again this year, this was an outstanding run, and I hope to break 19′ flat by the time of the race next year.

Keep running, and remember, you’re only in competition with yourself!

Cheers,
Zach

June 06, 2014
Remi Cardona a.k.a. remi (homepage, bugs)

A couple of days ago, like everyone using ~arch, I upgraded my Gnome desktop to 3.12. Though a few packages failed to build, the upgrade itself went pretty smoothly. Hats off to the Gnome herders.

Overall, 3.12 feels like a solid and well put together release. There were a few disappointments, though, the biggest of which is the removal of changeable tab titles in gnome-terminal. I’ll spare everyone a long rant, but this is a feature I have been using extensively for the better part of a decade, and I’m very disappointed to see this useful feature go away without much justification. Material for another blog post… maybe.

One thing I did notice really quickly is the new geolocation entry in the shell’s main top-right menu. Not being a fan of geolocation, I went out to see how I could turn it off by default system-wide as my system has more than one regular user.

Going through dconf-editor, I found the correct setting key: org.gnome.shell.location.max-accuracy-level. This key is an enum and the correct value (at least to my taste) is ‘off’. Setting this for each user is a matter of running “gsettings set”. However, to change the default value, a little elbow grease is required.
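
For reference, the per-user "gsettings set" call mentioned above looks like this (run as the user in question; the schema and key are the ones found in dconf-editor):

gsettings set org.gnome.shell.location max-accuracy-level 'off'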

GLib’s GSettings is actually an API for various backends. The one we use on Linux is dconf, so this is what I’ll have to bang on. This page, https://wiki.gnome.org/action/show/Projects/dconf/SystemAdministrators, basically has all the reasoning behind it. I’ll just summarize what I did.

  1. Create a /etc/dconf/profile/user with the following content:
    user-db:user
    system-db:site
  2. Create a matching ‘site’ settings database (I could have called it anything really) in /etc/dconf/db/site.d/ containing my new default settings file ’00_settings’
    [org/gnome/shell/location]
    max-accuracy-level='off'
  3. Run ‘dconf update’, which will translate the INI-like settings file into the binary dconf database ‘/etc/dconf/db/site’

Now, I assume GSettings did not pick up this new profile on its own, so I had to restart my session. But from there, all changes to the settings file followed by a ‘dconf update’ automatically propagate to running applications, gnome-shell included.

Overall, this was easier than I anticipated. Hope that helps anyone trying to do similar things.

Michael Palimaka a.k.a. kensington (homepage, bugs)
Reviving the tinderbox (June 06, 2014, 19:50 UTC)

One of the problems faced by the tinderbox of yesteryear is picking information out of logs, as well as the reliance on one person to interpret the results. With this in mind, I’ve been doing some work to improve the accessibility of this data and have produced a tinderbox interface.

A Portage bashrc (based on the original work by Diego Elio Pettenò) collects QA information about builds, and stores it in individual files to make it easier to operate on – eliminating a lot of the need to parse logs.
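
To give a rough idea of the mechanism (this is only an illustration, not the actual bashrc the interface uses), an /etc/portage/bashrc hook can inspect the build log after each merge and drop its findings into per-package files:

# /etc/portage/bashrc (sketch): collect QA notices into one file per package
if [[ ${EBUILD_PHASE} == "postinst" && -n ${PORTAGE_LOG_FILE} ]]; then
    logdir="/var/log/tinderbox/${CATEGORY}"
    mkdir -p "${logdir}"
    grep "QA Notice" "${PORTAGE_LOG_FILE}" > "${logdir}/${PF}.qa"
fi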

You’ll notice the interface lists all packages – not just those with a recent build. This allows for a central location to report static analysis information from tools such as repoman and pkgcore-checks. Other lesser-known tools are supported, with experimental reporting of sub-slot candidates and automated dependency checking.

What’s next? I’d like to add ways to find packages beyond the usual category breakdown – such as by maintainer or builds by architecture. There are more build-time checks to add, and I’m sure there are other static analysis tools out there too. I don’t personally have the resources to build packages at the scale seen previously, so last but of course not least, more building power is needed. Fortunately, it’s quite easy to collate the tinderbox data from multiple sources, so we may be able to ‘crowd-source’ if necessary.

As always, comments/feedback/suggestions welcome.

Hello users,

TL;DR: x86 (32bit) support is going away soon, if you use Sabayon x86_64 (64bit), you can ignore this.

In an effort to decrease our computing and human capacity requirements, I am going to start the process of deprecating Sabayon x86 (32bit) images, package repositories and their support.
x86_64 (or AMD64) was introduced a decade ago. Yes, it was 2004, pretty much the same year I started messing with a binary Gentoo-based distro.

It’s time to move on, free up resources and focus on what matters. 32bit is not important anymore and modern computers come with tons of GB of RAM. At the same time, I don’t see x32 going anywhere. Instead, I see the need to standardize on one single x86 architecture. Some distributions have started doing the same, for instance, RHEL 7 will not see any 32bit version. Windows 8, well, yes, said goodbye to 32bit as well.

If you are still stuck with 32bit CPUs, there are 5 things you could do:

  1. Make sure that your CPU really does not support x86_64. You may be surprised to find that it can run x86_64 code just fine (see the quick check after this list).
  2. Given our deprecation roadmap, migrate your stuff over to a more recent system. eBay and Amazon are your friends; a second-hand x86_64 system can cost you less than $100.
  3. Migrate to other distros and pray they won’t kill 32bit anytime soon (time is not in your favor).
  4. Migrate your Sabayon system to Gentoo/Portage, basically compiling your own stuff. Alternatively, setup your own Entropy repository in order to keep your system up-to-date.
  5. Burn your motherboard and CPU by doing insane overclocking and then, when they die, violently hit them with a hammer while screaming “You shall not compute!”.
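
Regarding the first point, the quickest check on Linux is to look for the "lm" (long mode) flag in /proc/cpuinfo; if it is present, the CPU can execute x86_64 code:

grep -qw lm /proc/cpuinfo && echo "this CPU can run x86_64" || echo "32bit only"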

Our deprecation roadmap is as follows:

  • June 2014: stop offering x86 images off our download pages, keep them on mirrors.
  • July/August 2014: stop building x86 images as part of our daily and monthly release rollout.
  • October 2014: stop offering x86 images from our mirrors.
  • November 2014: stop offering package updates, including security updates, for x86 images.
  • January 2015: stop offering packages from our mirrors.

After January 2015, you will not be able to install new packages either. The only way to keep your system up-to-date will be to use Portage (plus our overlays) or Entropy (by maintaining your own repository). Our x86_64 images are multilib, which means that you can run 32bit code on them just fine.


Hanno Böck a.k.a. hanno (homepage, bugs)

[SSL test on hboeck.de]

I recently switched my personal web page and my blog to deliver content exclusively encrypted via HTTPS. I want to take this opportunity to give some facts about enabling TLS encryption by default and problems you may face.

First of all the non-problems: Enabling HTTPS by default is almost never a significant performance problem. If people tell me that they can not possibly enable HTTPS due to performance reasons the first thing I ask is if they believe this or if they have real benchmark data showing this. If you don't believe me on that, I can quote Adam Langley from Google here: "In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead."

Enabling HTTPS may cause a number of compatibility issues you may not instantly think about. First of all, we know that IPs in the IPv4 space are limited and expensive these days, so many people probably can't afford having a distinct IP for their web page. The solution to that is a TLS extension called SNI (Server Name Indication) which allows you to have different certificates for different domain names on the same IP. It works in all major browsers and has been working for quite some time. The only major browser you'll face these days that doesn't support SNI is the Android 2.x browser.

There are some subtle issues with SNI. One is that browsers have fallback modes if they cannot connect via TLS and that may lead to a connection downgrade to SSLv3. And that ancient protocol doesn't support extensions and thus no SNI. So you may have irregular certificate errors if you are on a bad connection. A solution to that on the server side is to just disable SSLv3. It will make SNI much more reliable.
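
For what it's worth, disabling SSLv3 is a one-line change in the common web servers; these are the usual directives (adapt them to your own configuration):

# Apache (mod_ssl)
SSLProtocol all -SSLv3
# nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;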

I don't really have a clear picture how many browsers will fail with SNI. There are probably a number of embedded devices out there like smart TVs with browsers or things alike that have problems. If you have any experiences feel free to post them in the comments.

The first issue I only noticed after I switched to HTTPS: I had an application called RSS Graffiti set up to automatically post all articles I write to a facebook fan page. After changing to HTTPS only it silently stopped working. Re-adding my feed didn't work. I now found a similar service called dlvr.it that I now use to post my RSS feed to facebook. I can only assume that this is a glimpse of a much bigger problem: There are probably tons of applications and online services out there not prepared for an encrypted Internet. If we want more people to deploy encryption by default we need to find these issues, document them and hopefully put enough pressure on their developers to fix them.

Another yet unfixed issue is the Yandex Bot. Yandex is a search engine and although you may never have heard of it it's probably one of the few companies in this area that can claim to be a serious competitor to Google. The reason you may not know it is that it's mostly operating in Russian language. Depending on who your page visitors are this may matter more or less.

The Yandex Bot speaks SSL but according to the Qualys SSL test it only supports the ancient SSLv3. So you have a choice between three possibilities: Don't enable HTTPS by default, enable HTTPS with a shitty configuration supporting ancient technology that will cause trouble for SNI or enable HTTPS with a sane configuration and get no traffic from the leading Russian search engine. None of them sounds very good to me.

Another issue is third party content. For security reasons today's browsers block all active HTTP content (CSS, JavaScript etc.) on HTTPS webpages. This isn't much of a problem for me, but it's a problem for webpages that rely on advertising because from what I hear most advertisement providers don't support HTTPS yet (Google being a laudable exception here). This is the main reason you won't see many news webpages enforcing HTTPS. However, I still have passive third party HTTP content on my blog. That's why you'll probably see a yellow warning sign in front of the URL in some browsers.

June 04, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Consul on Gentoo Linux (June 04, 2014, 21:10 UTC)

As a clustering and distributed architecture enthusiast, I’m naturally interested in software providing neat ways to coordinate any kind of state/configuration/you-name-it over a large number of machines.

My quest, like that of many of you I guess, was so far limited to tools like zookeeper (packaged in my overlay but with almost no echo) and doozerd (last commit nearly 6 months ago), which both cover some of the goals listed above with more or less flavor and elegance (sorry guys, JAVA is NOT elegant to me).

I recently heard about consul, a new attempt to solve some of those problems in an interesting way while providing some rich functionality, so I went on to give it a try and naturally started packaging it so others can too.

WTF is consul ?


Consul is a few-months-old project (and already available on Gentoo !) from the guys making Vagrant. I especially like its datacenter-centric architecture, intuitive deployment and its DNS + HTTP API query mechanisms. This sounds promising so far !

This is a description taken from Hashicorp’s blog:

Consul is a solution for service discovery and configuration. Consul is completely distributed, highly available, and scales to thousands of nodes and services across multiple datacenters.

Some concrete problems Consul solves: finding the services applications need (database, queue, mail server, etc.), configuring services with key/value information such as enabling maintenance mode for a web application, and health checking services so that unhealthy services aren’t used. These are just a handful of important problems Consul addresses.

Consul solves the problem of service discovery and configuration. Built on top of a foundation of rigorous academic research, Consul keeps your data safe and works with the largest of infrastructures. Consul embraces modern practices and is friendly to existing DevOps tooling.

app-admin/consul ?

This is an RFC and a call for interest about the packaging and availability of consul for Gentoo Linux.

The latest version and live ebuilds are present in my overlay so if you are interested, please tell me (here, IRC, email, whatever) and I’ll consider adding it to the portage tree.

I want to test it !

Now it would be helpful to get some feedback about the usability of the current packaging. So far the ebuild features what I think should cover a lot of use cases :

  • full build from sources
  • customizable consul agent init script with reload, telemetry and graceful stop support
  • web UI built from sources and installation for easy deployment
# layman -a ultrabug
# emerge -av consul
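
To get a feel for the DNS and HTTP query mechanisms mentioned above, this is what talking to a local agent looks like, assuming the default ports (8600 for DNS, 8500 for the HTTP API):

$ dig @127.0.0.1 -p 8600 consul.service.consul
$ curl http://127.0.0.1:8500/v1/catalog/services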

Hope this interests some of you folks !

Gentoo Monthly Newsletter - May 2014 (June 04, 2014, 05:10 UTC)

The May 2014 GMN issue is now available online.

This month on GMN:

  • Interview with Gentoo developer Brian Dolbec (dol-sen)
  • Samba 4, sys-power/upower updates, infrastructure hosting needs
  • Latest Gentoo news, tips, interesting stats and much more.

June 02, 2014
Gentoo Monthly Newsletter: May 2014 (June 02, 2014, 21:00 UTC)

Gentoo News

Interview with Brian Dolbec (dol-sen)

by David Abbott

1. Hi Brian, tell us about yourself.

I’m a wannabe scientist/inventor that never did take the full plunge into that career path.
I’m married with 28 and 14 year old daughters, four dogs, one cat, several aquariums of fish…
And despite what many readers or other developers may expect or think: I’m not in an IT career. I’m a journeyman refrigeration mechanic with a gas ticket. I install, repair furnaces, rooftop heating/cooling equipment, computer room cooling systems etc.

2. Bring us back to your start with electronics and computers.

I’ve been taking things apart, seeing how they are built, and work since I was 9 or 10 years old.
Things from really old tube radios, appliances, etc. When I was in 7th grade, my teacher’s wife worked taking care of people in a care home. One of her patients was an electronics teacher crippled with polio. He asked a classmate and myself if we would help him with things ranging from repairing and modifying his HAM and CB radio equipment, to modifying his home-built 3-wheel vehicle that he steered with buttons under his elbows.
Computer work started years later; my first machine was a used Atari 400 with a cassette drive, programming in BASIC. I had an Apple IIe compatible for a year or so, then while returning to college, taking science (physics, chemistry) and computer programming courses (mostly coded in Pascal) on a VAX 11 and/or x86 PCs, my next one was an Atari 520ST (first production run) which I still have today.

3. How did you get involved with open source?

After installing gentoo, I had soon started working on porthole which was a new project at that time. I was also new to python and had not done any coding in many years. It was primarily porthole that brought me to doing work in gentoolkit, layman, portage and other tools in gentoo.

4. What path did you take to become a Gentoo developer?

I had been working around portage for many years with porthole development. Which led me to begin working on gentoolkit in order to create working api’s for other tools to use. It was that and layman work that got me into helping mentor GSOC projects. I first became a staffer as I was a coder, not an ebuild developer. It was one year later I took the plunge and completed the developer quiz and became a full developer.

5. Tell us about your mentor and the process to become a developer?

There have been many people over the years that I’ve learned from.
But my most important mentor in developing my coding skills has been Brian Harring.
His knowledge of how to do things in an efficient, fast way continues to amaze and inspire me.

6. What aspects of Gentoo do we need to keep and what could we get rid of?

hmm… Keep the good coding skills and efforts into improving Gentoo as a whole, get rid of the major bikeshedding over who’s right and who’s wrong…

7. Tell us about Porthole (The portage frontend) http://porthole.sourceforge.net/ and what skills you learned from it?

Python programming, knowledge of data acquisition using portage’s API’s, learning to do things with less code, more adaptable and robust with less long term maintenance required. I’ve rewritten areas of porthole’s code several times as it evolved and grew. Sadly, I’ve been neglecting porthole these past few years. I keep getting distracted with other projects in need of help, re-writes, updates, or even new projects like gentoo-keys which was spawned from dev-python/pyGPG which I created to handle gpg signed list verification for layman. Layman’s code also spawned a small new python lib (dev-python/ssl-fetch) that will be used in several tools soon. I split that code out of layman to re-use in mirrorselect for fetching files from api.gentoo.org.

8. You have become a proficient Python programmer, how did you do it?

Coding, making mistakes, fixing them. Learning better faster ways to accomplish something from others.
But, one of my key strong points is my ability to quickly see the big picture. The details you can figure out along the way with help from others as the need arises. Many new programmers get stuck focusing on the details without knowing how they should be put together. Hint, think of a jigsaw puzzle, when you get one, you have the finished picture on the box to use as a reference of what it should look like. This makes it easier to figure out where a piece might fit. The same holds true for any programming task. You need to know what the end goal is and how it might fit together. Adjustments are made along the way so that you end up with a completed code block, then you move along to the next one.

9. Walk me through the steps you do to write python code, test, and your editor of choice etc.

see above answer… Current preferred editor is Geany, 2nd is Scite which I used for many years and still do for some things.

10. Catalyst (the tool used for building Gentoo releases) is in the process of a major overhaul, what has been done, who is helping you and what needs to be completed?

I got started working on catalyst so that the default location for the portage tree (gentoo ebuild tree) can be relocated. The catalyst code base was in sad shape with paths hard-coded throughout the code. It even had paths used as both a variable name and value in places. Its code base still had (questionable to poor) code copied from early portage code which has long since been replaced. The code had also been modified by the releng team which (not being proficient in python) used bad examples to modify its operation. The bulk of the rewrite work has and is being done by Trevor King and myself. With others contributing to improvements, additions to portions of it. Currently I’m in the middle of migrating all the changes from a development branch (3.0) into the master branch of the repository. Once that is caught up, the rewrites will continue. There are still too many areas of code to improve or rewrite to list them here.

11. Tell us about your other projects you are currently working on?

Gentoo-keys – A gpg key management and verification tool. Designed to manage all aspects of Gentoo’s gpg keys, developer keys and verification of things like the release media, commits to Gentoo’s ebuild tree, layman’s repositories etc.

Mirrorselect – a mirror selection tool for Gentoo. I did the 2.2 re-write and some additional work adding more features in the 2.2.1 release.

Ssl-fetch – A breakout lib which wraps dev-python/requests code and does verified ssl fetching of files and handles use of headers and timestamps to prevent re-downloading of data which hasn’t been modified.

pyGPG – A universal gnupg wrapper lib that is capable of mining all data available from gpg calls and puts that info into python available data types.

Layman – overlay management tool.

Portage – I am the current (temporary) lead after Zac took an extended break from Gentoo. I am spear-heading a new plugin-sync system for it which will make portage more versatile, ease future maintenance and make it expandable with third-party installable sync modules. You can look forward to a possible squashfs sync module. Work is being done to have Gentoo’s infrastructure be able to supply squashfs tree images. So encourage Michał Górny and the Gentoo infra team to complete that work.

Elogviewer – I’m maintaining the package, did code review for recent updates. I have a recent version bump to do at time of this writing.

Gentoolkit – Various python based modules, enalyze, equery, eclean, the new python based revdep-rebuild rewrite (some final debugging, fixes)

Catalyst – Gentoo Stage building tool, major re-write

A new small python based breakout lib for easy compression/decompression handling. It comes from my work in the catalyst rewrite, but could be useful in other tools. I have yet to create and name it as a standalone project.

12. What open source software can you not live without at home and at work?

dev-vcs/gitg, dev-util/geany, dev-vcs/git, Hexchat, xfce4 desktop environment,…

13. Which open source programs would you like to see developed?

gtk+:2 branch of gitg. It has gone to a gnome 3 look now which IMHO is yuk. It also lost the git blame feature currently in its re-write.

14. Age old question for Gentoo, how can we get more help?

Reducing the bikeshedding and name calling type attitudes present in some mail lists. Continue being an innovative leading Linux distribution building system.

15. Describe your desktop setup (WM/DE)?

Intel core-2 quad core based system with a shiny new SSD drive (Thank you Alec)
2 – 24 inch widescreen monitors
Basic xfce4 desktop, 14 virtual desktops, is a mix of Mac like toolbars and retro theme.
A hexchat window, toolbars, etc. in the left monitor, right monitor for main working apps windows, terminals

16. Tell us about your boxes and home network setup?

Not much to tell really. There’s my main desktop, an old 11 year old laptop, several printers. I have an old x86 box that I setup for a small server and router, but need to work on it. A hard drive failed on it due to a power failure. I have a 24 port gigabit switch. I still haven’t wired up this new house yet with lan everywhere. My wife and kids have some ipads, an Acer netbook.

17. What would be your dream job?

Working on some inventions, ideas I have for energy efficiency, earth friendly, and just plain cool or fun :)

18. What gives you the most enjoyment within the Gentoo community?

Doing (hopefully) great coding work and having users really like what I’ve done to ease their work or save their system.
Mentoring students into doing better coding, being a more versatile developer.

19. What gives you the most enjoyment outside the Gentoo community?

Family

Help with samba-4 packages needed!

by Lars Wendler

Currently Gentoo’s samba team is severely understaffed. This has slowed down development of samba packages and its direct dependencies to a level where we cannot foresee when it is convenient to finally remove the mask on samba-4 and give it a wider range of testing from our users. There are a couple of automagic dependencies that need attention. Unfortunately samba upstream does very little to resolve these issues so we need people knowing the new build system of samba-4 to write patches for us. Furthermore samba-4 requires app-crypt/heimdal as kerberos provider which leads to packages blocking each other because they require app-crypt/mit-krb5 which cannot be installed together with heimdal.

This is a call for help getting as many blocker bugs from [1] fixed as possible. Once all these blockers are solved, unmasking samba-4 is the next logical step.

[1] https://bugs.gentoo.org/489762

Council News

This month the council addressed two issues brought up by the community.

In the aftermath of Heartbleed many are questioning the default configuration of packages like OpenSSH/OpenSSL, etc. If we had not enabled tls-heartbeat by default then Gentoo would have been immune to the recent troubles.

The council took up discussion, but felt that trying to make a one-size-fits-all policy wasn’t going to be practical. Maintainers were encouraged to follow upstream (which in the case of Heartbleed would have meant being vulnerable), but decisions are going to remain in the hands of individual maintainers. Specific issues can still be escalated to Council.

The other matter which came up concerned pkg-config files. Everybody can agree that upstream should be providing these when applicable, but there was disagreement over what should be done when upstream drops the ball. The crux of the argument was that not including them makes life more difficult for packages using the libraries on Gentoo, while including them can cause developers working on Gentoo to make assumptions that will cause problems on other distributions. The council decided that the current policy in the devmanual was not adequate and struck it down. In general maintainers will be given discretion to create pkg-config files not provided by upstream, but there will be guidelines around when this is done. The guidelines themselves need to be written, approved, and published to the devmanual.

Finally it was noted that election season is coming up, and the next Council meeting will be the last one of this term. Stay tuned for further details from the election team.

sys-power/upower update

>=sys-power/upower-0.99.0 has entered ~arch and has deprecated support for sys-power/pm-utils and hibernate/suspend in favor of using sys-apps/systemd.
If you suddenly notice that your favorite package no longer has capability for hibernate/suspend and you want them back, we have created a compatibility package sys-power/upower-pm-utils which will give you the old UPower back.
For example, Xfce 4.11+ has support for UPower 0.99 and it has copied the sys-power/pm-utils code from before UPower dropped it, and therefore hibernate/suspend should work with both versions, but this is likely untrue for most of the other packages.
Check out this forum post for more information.

Infrastructure News

Hosting sponsors needed
The Gentoo Infrastructure team is currently searching for hosting sponsors in Europe. We ask that sponsors contribute to Gentoo in one of two ways:

  1. A donation of at least two physical machines including space, power and 10Mbits of bandwidth (burstable to 50Mbit). This is the most common option that organizations prefer. Sponsors typically have existing dedicated space for their business and host hardware for Gentoo in that space.
  2. Donation of at least 12U space, 15A, and 10Mbits of bandwidth (burstable to 50Mbits).

In the latter case, the Gentoo Foundation can provide the server hardware (but not power, bandwidth, or rackspace / a rack.) In both cases we prefer the sponsor to provide remote hands for the machines.

Sponsors will receive ads on ads.gentoo.org (the ad sidebar to the main site), postings on the sponsors page, as well as news items posted to www.gentoo.org.

Interested parties should contact infra@gentoo.org.

Sponsors often ask to host official Gentoo mirrors. Note that the Gentoo mirror network is not currently seeking new mirror sponsors.
The Gentoo infrastructure team has had significant operational problems with virtual machines and Gentoo Hardened. We see this as a pretty significant preference for physical hardware over solutions like Xen or VMWare.

Gentoo Developer Moves

Summary

Gentoo is made up of 236 active developers, of which 30 are currently away.
Gentoo has recruited a total of 798 developers since its inception.

Changes

The following developers have recently changed roles:

  • Jauhien Piatlicki joined the emacs, physics, science, mathematics and lxqt teams
  • Yury German joined the security team
  • Yixun Lan joined the proxy-maintainers, ARM and cjk teams
  • Peter Wilmott joined the ruby team
  • Julian Ospald joined the multilib and sound teams
  • Vlastimil Babka joined the kernel team
  • Michael Palimaka joined the lxqt team
  • Manuel Rueger joined the ARM team
  • Agostino Sarubbo left the KDE team
  • Brian Evans joined the MySQL team
  • Mikle Kolyada joined the embedded and dev-embedded teams.

Additions

The following developers have recently joined the project:

Moves

The following developers recently left the Gentoo project:
None this month

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17471
Ebuilds 37518
Architecture Stable Testing Total % of Packages
alpha 3591 538 4129 23.63%
amd64 10762 6209 16971 97.14%
amd64-fbsd 0 1576 1576 9.02%
arm 2634 1722 4356 24.93%
arm64 436 30 466 2.67%
hppa 3051 488 3539 20.26%
ia64 3176 595 3771 21.58%
m68k 575 93 668 3.82%
mips 4 2379 2383 13.64%
ppc 6809 2388 9197 52.64%
ppc64 4313 876 5189 29.70%
s390 1460 332 1792 10.26%
sh 1656 402 2058 11.78%
sparc 4119 899 5018 28.72%
sparc-fbsd 0 319 319 1.83%
x86 11418 5259 16677 95.46%
x86-fbsd 0 3236 3236 18.52%

gmn-portage-stats-2014-06

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201405-28 x11-wm/xmonad-contrib xmonad-contrib: Arbitrary code execution 478288
201405-27 dev-libs/libyaml LibYAML: Arbitrary code execution 505948
201405-26 net-misc/x2goserver X2Go Server: Privilege Escalation 497260
201405-25 dev-php/symfony Symfony: Information disclosure 444696
201405-24 dev-libs/apr Apache Portable Runtime, APR Utility Library: Denial of Service 339527
201405-23 media-libs/lib3ds lib3ds: User-assisted execution of arbitrary code 308033
201405-22 net-im/pidgin Pidgin: Multiple vulnerabilities 457580
201405-21 net-irc/charybdis Charybdis,ShadowIRCd: Denial of Service 449544
201405-20 media-libs/jbigkit JBIG-KIT: Denial of Service 507254
201405-19 app-crypt/mcrypt MCrypt: User-assisted execution of arbitrary code 434112
201405-18 net-misc/openconnect OpenConnect: User-assisted execution of arbitrary code 457068
201405-17 net-analyzer/munin Munin: Multiple vulnerabilities 412881
201405-16 dev-lang/mono Mono: Denial of Service 433768
201405-15 sys-apps/util-linux util-linux: Multiple vulnerabilities 359759
201405-14 dev-ruby/ruby-openid Ruby OpenID: Denial of Service 460156
201405-13 x11-libs/pango Pango: Multiple vulnerabilities 268976
201405-12 net-analyzer/ettercap Ettercap: Multiple vulnerabilities 340897
201405-11 app-backup/bacula Bacula: Information disclosure 434878
201405-10 dev-ruby/rack Rack: Multiple vulnerabilities 451620
201405-09 media-gfx/imagemagick ImageMagick: Multiple vulnerabilities 409431
201405-08 app-antivirus/clamav ClamAV: Multiple vulnerabilities 462278
201405-07 x11-base/xorg-server X.Org X Server: Multiple vulnerabilities 466222
201405-06 net-misc/openssh OpenSSH: Multiple vulnerabilities 231292
201405-05 net-misc/asterisk Asterisk: Denial of Service 504180
201405-04 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 501960
201405-03 net-irc/weechat WeeChat: Multiple vulnerabilities 442600
201405-02 net-libs/libsrtp libSRTP: Denial of Service 472302
201405-01 sys-fs/udisks udisks: Arbitrary code execution 504100

Package Removals/Additions

Removals

Package Developer Date
sci-geosciences/gempak pacho 03 May 2014
gnome-extra/evolution-kolab pacho 03 May 2014
www-apache/mod_ruby pacho 03 May 2014
x11-misc/suxpanel pacho 03 May 2014
kde-base/kdeartwork-sounds johu 09 May 2014
kde-base/kdnssd johu 09 May 2014
kde-base/kwallet johu 09 May 2014
games-puzzle/krosswordpuzzle johu 10 May 2014
app-portage/udept pacho 11 May 2014
media-libs/libj2k pacho 11 May 2014
media-gfx/cfe pacho 11 May 2014
media-gfx/yablex pacho 11 May 2014
app-admin/osiris pacho 11 May 2014
sys-power/cpufreqd pacho 11 May 2014
net-irc/ctrlproxy pacho 11 May 2014
x11-misc/pogo pacho 11 May 2014
sci-geosciences/openstreetmap-icons pacho 11 May 2014
dev-python/telepathy-python pacho 11 May 2014
media-tv/huludesktop pacho 11 May 2014
app-admin/lcap pacho 11 May 2014
www-apache/mod_chroot pacho 11 May 2014
dev-util/dissy pacho 11 May 2014
dev-libs/clens ulm 12 May 2014
dev-java/randomguid ulm 12 May 2014

Additions

Package Developer Date
net-wireless/openggsn zx2c4 01 May 2014
x11-misc/urxvt-font-size radhermit 02 May 2014
kde-misc/baloo-kcmadv dilfridge 02 May 2014
dev-ruby/dotenv-deployment graaff 03 May 2014
dev-java/headius-options tomwij 03 May 2014
gnome-extra/gnome-commander hwoarang 03 May 2014
mate-extra/caja-extensions tomwij 04 May 2014
media-gfx/eom tomwij 04 May 2014
x11-misc/mozo tomwij 04 May 2014
dev-ruby/descendants_tracker graaff 05 May 2014
gnome-extra/cinnamon-desktop tetromino 06 May 2014
gnome-extra/cinnamon-settings-daemon tetromino 06 May 2014
gnome-extra/cinnamon-session tetromino 06 May 2014
app-i18n/tagainijisho calchan 06 May 2014
dev-ruby/nio4r mrueg 07 May 2014
gnome-extra/cjs tetromino 07 May 2014
gnome-extra/cinnamon-menus tetromino 07 May 2014
app-crypt/paperkey mrueg 07 May 2014
dev-ruby/rinku mrueg 07 May 2014
gnome-extra/cinnamon-control-center tetromino 08 May 2014
net-wireless/cinnamon-bluetooth tetromino 08 May 2014
dev-python/aniso8601 radhermit 08 May 2014
dev-python/flask-restful radhermit 08 May 2014
dev-python/polib tetromino 09 May 2014
dev-db/soci jauhien 09 May 2014
dev-db/cppdb jauhien 09 May 2014
dev-python/sexpdata jauhien 10 May 2014
gnome-extra/cinnamon-screensaver tetromino 10 May 2014
sys-block/zram-init jauhien 10 May 2014
sci-chemistry/propka jlec 11 May 2014
dev-python/oslo-vmware vadimk 11 May 2014
sys-boot/winusb yac 11 May 2014
app-arch/xarchiver ssuominen 11 May 2014
dev-util/android-studio jauhien 11 May 2014
dev-ruby/fssm vikraman 11 May 2014
dev-ruby/compass vikraman 11 May 2014
dev-python/rax-scheduled-images-python-novaclient-ext prometheanfire 12 May 2014
dev-python/os-virtual-interfacesv2-python-novaclient-ext prometheanfire 12 May 2014
kde-misc/milou johu 12 May 2014
net-wireless/btcrack zerochaos 12 May 2014
dev-python/pymysql grknight 13 May 2014
app-arch/defluff tomwij 14 May 2014
sci-biology/update-blastdb jlec 14 May 2014
x11-misc/calise tomwij 14 May 2014
dev-ruby/pdf-core mrueg 15 May 2014
dev-ruby/priorityqueue mrueg 15 May 2014
dev-ruby/expression_parser mrueg 15 May 2014
dev-ruby/ae p8952 15 May 2014
dev-ruby/ansi p8952 15 May 2014
dev-ruby/brass p8952 15 May 2014
dev-ruby/facets p8952 15 May 2014
dev-ruby/lemon p8952 15 May 2014
dev-ruby/qed p8952 15 May 2014
dev-ruby/rubytest p8952 15 May 2014
dev-ruby/rubytest-cli p8952 15 May 2014
dev-ruby/hashery p8952 15 May 2014
gnome-extra/cinnamon-translations tetromino 16 May 2014
net-libs/balde rafaelmartins 18 May 2014
dev-lang/rust jauhien 18 May 2014
sci-libs/libgeodecomp slis 19 May 2014
dev-java/netty-common tomwij 19 May 2014
dev-java/netty-buffer tomwij 19 May 2014
dev-ruby/rrdtool-bindings graaff 19 May 2014
app-leechcraft/lc-eleeminator maksbotan 20 May 2014
app-backup/snapper dlan 21 May 2014
dev-java/netty-transport tomwij 21 May 2014
games-strategy/0ad-data hasufell 21 May 2014
games-strategy/0ad hasufell 21 May 2014
www-servers/hiawatha hasufell 22 May 2014
www-apps/hiawatha-monitor hasufell 22 May 2014
media-fonts/ahem idella4 23 May 2014
x11-misc/sddm jauhien 24 May 2014
lxqt-base/liblxqt jauhien 25 May 2014
net-misc/lxqt-openssh-askpass jauhien 25 May 2014
lxqt-base/lxqt-qtplugin jauhien 25 May 2014
app-vim/gitgutter radhermit 25 May 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 May 2014 and 31 May 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-05

Bug Activity Number
New 1388
Closed 977
Not fixed 259
Duplicates 158
Total 5734
Blocker 5
Critical 18
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 109
2 Gentoo Linux Gnome Desktop Team 44
3 Gentoo Games 31
4 Gentoo KDE team 29
5 Gentoo's Team for Core System packages 26
6 Multilib team 24
7 Gentoo X packagers 21
8 Qt Bug Alias 20
9 Retirement Admin 19
10 Others 653

gmn-closed-2014-05

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 158
2 Gentoo Linux Gnome Desktop Team 93
3 Gentoo Security 53
4 Gentoo KDE team 47
5 Multilib team 41
6 Python Gentoo Team 35
7 Gentoo's Team for Core System packages 35
8 Default Assignee for New Packages 25
9 Qt Bug Alias 24
10 Others 876

gmn-opened-2014-05

Tip of the month

Would you like to know why a particular package is masked?
You can create a simple shell function like this:

whymask() {
    find /usr/portage/profiles/ -name '*.mask' -exec \
        awk -vRS= "/${*/\//.}/ {
                print \" \" FILENAME \":\", \"\n\" \"\n\" \$0 \"\n\"
        }" {} + | less
}

You can do `whymask sys-kernel/gentoo-sources` to get reasons as to why
a particular package is masked; very handy to quickly look something up,
especially for USE flag masks, which Portage doesn’t explain.

You can do `whymask Gnome 3.12` to get the entire GNOME 3.12 mask;
piping it to `grep -v mask: > /etc/portage/package.unmask/gnome3` then
allows you to quickly update your GNOME 3.12 unmask. If you want this to
happen on sync, you can put this line in /etc/portage/postsync.d/gnome3
and make it executable so that it’ll be run after every sync.

The magic trick here is that awk -vRS= "/…/" matches paragraphs: since
the record separator is empty, awk splits records on blank lines, so each
paragraph becomes one record.
by Tom Wijsman

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
uWSGI v2.0.5.1 (June 02, 2014, 14:12 UTC)

This release is important to me (and my company) as it officially introduces a few features we developed for our needs and then contributed to uWSGI.

Special congratulations to my co-worker @btall for his first contribution and for those nice features to the metrics subsystem with many thanks as usual to @unbit for reviewing and merging them so quickly.

new features

  • graceful reload of mule processes (Credits: Paul Egan): SIGHUP is now sent to mules instead of directly killing them; by default you have 60 seconds to react before a SIGKILL
  • --metrics-no-cores, --stats-no-cores, --stats-no-metrics: don’t calculate and process all those core-related metrics (gevent anyone ?); see the example after this list
  • reset_after_push for metrics (Credits: Babacar Tall): this metric attribute ensures that the metric value is reset to 0 or its hardcoded initial_value every time the metric is pushed to some external system (like carbon, or statsd)
  • new metric_set_max and metric_set_min helpers (Credits: Babacar Tall): can be used to avoid having to call "metric_get" when you need a metric to be set at a maximal or minimal value. Another simple use case is to use the "avg" collector to calculate an average between some *max* and *min* set metrics. Available in C and python.
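
As an illustration of the second item above, the new switches simply go alongside whatever stats setup you already have (app.ini is only a placeholder for your own configuration):

uwsgi --ini app.ini --stats 127.0.0.1:9191 --stats-no-cores --metrics-no-cores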

See the full changelog here, especially some interesting bugfixes.

June 01, 2014
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

If you tried upgrading from stable amd64 to ~amd64, or have otherwise done a big update of perl, you probably hit this weird perl-cleaner slot conflict:

# perl-cleaner --all
!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

dev-lang/perl:0

  (dev-lang/perl-5.18.2:0/5.18::gentoo, installed) pulled in by
    =dev-lang/perl-5.18* required by (virtual/perl-IO-1.280.0:0/0::gentoo, ebuild scheduled for merge)
    ^              ^^^^^                                                                                                                                  
    dev-lang/perl:0/5.18=[-build(-)] required by (perl-core/version-0.990.800:0/0::gentoo, installed)
                 ^^^^^^^^                                                                                                              
    (and 7 more with the same problems)

  (dev-lang/perl-5.16.3:0/5.16::gentoo, ebuild scheduled for merge) pulled in by
    =dev-lang/perl-5.16* required by (virtual/perl-Package-Constants-0.20.0-r3:0/0::gentoo, installed)
    ^              ^^^^^                                                                                                                                  
    (and 6 more with the same problem)


This is bug #506616, and the solution is to run the following command:

perl-cleaner --all -- --backtrack=30


May 31, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Visualizing constraints (May 31, 2014, 01:47 UTC)

SELinux constraints are an interesting way to implement specific, well, constraints on what SELinux allows. Most SELinux rules that users come in contact with are purely type oriented: allow something to do something against something. In fact, most of the SELinux rules applied on a system are such allow rules.

The restriction of such allow rules is that they only take into consideration the type of the contexts that participate. This is the type enforcement part of the SELinux mandatory access control system.

Constraints on the other hand work on the user, role and type part of a context. Consider this piece of constraint code:

constrain file all_file_perms (
  u1 == u2
  or u1 == system_u
  or u2 == system_u
  or t1 != ubac_constrained_type
  or t2 != ubac_constrained_type
);

This particular constraint definition tells the SELinux subsystem that, when an operation is performed against the file class (any operation, as all_file_perms is used, but individual, specific permissions can be listed as well), the operation is denied if none of the following conditions are met:

  • The SELinux user of the subject and object are the same
  • The SELinux user of the subject or object is system_u
  • The SELinux type of the subject does not have the ubac_constrained_type attribute set
  • The SELinux type of the object does not have the ubac_constrained_type attribute set

If none of the conditions are met, then the action is denied, regardless of the allow rules set otherwise. If at least one condition is met, then the allow rules (and other SELinux rules) decide if an action can be taken or not.
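As a conceptual illustration only (this is not how the SELinux kernel code actually evaluates constraints), the logic of the constraint above could be sketched as follows; constraint_allows() and its arguments are made-up names:

# Conceptual sketch of the constraint logic above, not SELinux internals.
def constraint_allows(subject, obj, ubac_constrained_types):
    # subject and obj are (user, role, type) tuples; even when this returns
    # True, the regular allow rules still decide whether the access happens.
    u1, _, t1 = subject
    u2, _, t2 = obj
    return (u1 == u2
            or u1 == "system_u"
            or u2 == "system_u"
            or t1 not in ubac_constrained_types
            or t2 not in ubac_constrained_types)

# Two different SELinux users, and both types carry ubac_constrained_type:
# the constraint blocks the access regardless of any allow rules.
constraint_allows(("user_u", "user_r", "user_t"),
                  ("staff_u", "object_r", "user_home_t"),
                  {"user_t", "user_home_t"})  # -> False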

Constraints are currently difficult to query though. There is seinfo --constrain, which lists all constraints using Reverse Polish Notation, not something easily readable by users:

~$ seinfo --constrain
constrain { sem } { create destroy getattr setattr read write associate unix_read unix_write  } 
(  u1 u2 ==  u1 system_u ==  ||  u2 system_u ==  ||  t1 { screen_var_run_t gnome_xdg_config_home_t admin_crontab_t 
links_input_xevent_t gpg_pinentry_tmp_t virt_content_t print_spool_t crontab_tmp_t httpd_user_htaccess_t ssh_keysign_t 
remote_input_xevent_t gnome_home_t mozilla_tmpfs_t staff_gkeyringd_t consolekit_input_xevent_t user_mail_tmp_t 
chromium_xdg_config_t mozilla_input_xevent_t chromium_tmp_t httpd_user_script_exec_t gnome_keyring_tmp_t links_tmpfs_t 
skype_tmp_t user_gkeyringd_t svirt_home_t sysadm_su_t virt_home_t skype_home_t wireshark_tmp_t xscreensaver_xproperty_t 
consolekit_xproperty_t user_home_dir_t gpg_pinentry_xproperty_t mplayer_home_t mozilla_plugin_input_xevent_t mozilla_plugin_tmp_t 
mozilla_xproperty_t xdm_input_xevent_t chromium_input_xevent_t java_tmpfs_t googletalk_plugin_xproperty_t sysadm_t gorg_t gpg_t 
java_t links_t staff_dbusd_t httpd_user_ra_content_t httpd_user_rw_content_t googletalk_plugin_tmp_t gpg_agent_tmp_t 
ssh_agent_tmp_t sysadm_ssh_agent_t user_fonts_cache_t user_tmp_t googletalk_plugin_input_xevent_t user_dbusd_t xserver_tmpfs_t 
iceauth_home_t qemu_input_xevent_t xauth_home_t mutt_home_t sysadm_dbusd_t remote_xproperty_t gnome_xdg_config_t screen_home_t 
chromium_xproperty_t chromium_tmpfs_t wireshark_tmpfs_t xdg_videos_home_t pulseaudio_input_xevent_t krb5_home_t 
pulseaudio_xproperty_t xscreensaver_input_xevent_t gpg_pinentry_input_xevent_t httpd_user_script_t gnome_xdg_cache_home_t 
mozilla_plugin_tmpfs_t user_home_t user_sudo_t ssh_input_xevent_t ssh_tmpfs_t xdg_music_home_t gconf_tmp_t flash_home_t 
java_home_t skype_tmpfs_t xdg_pictures_home_t xdg_data_home_t gnome_keyring_home_t wireshark_home_t chromium_renderer_xproperty_t 
gpg_pinentry_t mozilla_t session_dbusd_tmp_t staff_sudo_t xdg_config_home_t user_su_t pan_input_xevent_t user_devpts_t 
mysqld_home_t pan_tmpfs_t root_input_xevent_t links_home_t sysadm_screen_t pulseaudio_tmpfs_t sysadm_gkeyringd_t mail_home_rw_t 
gconf_home_t mozilla_plugin_xproperty_t mutt_tmp_t httpd_user_content_t mozilla_xdg_cache_t mozilla_home_t alsa_home_t 
pulseaudio_t mencoder_t admin_crontab_tmp_t xdg_documents_home_t user_tty_device_t java_tmp_t gnome_xdg_data_home_t wireshark_t 
mozilla_plugin_home_t googletalk_plugin_tmpfs_t user_cron_spool_t mplayer_input_xevent_t skype_input_xevent_t xxe_home_t 
mozilla_tmp_t gconfd_t lpr_t mutt_t pan_t ssh_t staff_t user_t xauth_t skype_xproperty_t mozilla_plugin_config_t 
links_xproperty_t mplayer_xproperty_t xdg_runtime_home_t cert_home_t mplayer_tmpfs_t user_fonts_t user_tmpfs_t mutt_conf_t 
gpg_secret_t gpg_helper_t staff_ssh_agent_t pulseaudio_tmp_t xscreensaver_t googletalk_plugin_xdg_config_t staff_screen_t 
user_fonts_config_t ssh_home_t staff_su_t screen_tmp_t mozilla_plugin_t user_input_xevent_t xserver_tmp_t wireshark_xproperty_t 
user_mail_t pulseaudio_home_t xdg_cache_home_t user_ssh_agent_t xdg_downloads_home_t chromium_renderer_input_xevent_t cronjob_t 
crontab_t pan_home_t session_dbusd_home_t gpg_agent_t xauth_tmp_t xscreensaver_tmpfs_t iceauth_t mplayer_t chromium_xdg_cache_t 
lpr_tmp_t gpg_pinentry_tmpfs_t pan_xproperty_t ssh_xproperty_t xdm_xproperty_t java_xproperty_t sysadm_sudo_t qemu_xproperty_t 
root_xproperty_t user_xproperty_t mail_home_t xserver_t java_input_xevent_t user_screen_t wireshark_input_xevent_t } !=  ||  t2 { 
screen_var_run_t gnome_xdg_config_home_t admin_crontab_t links_input_xevent_t gpg_pinentry_tmp_t virt_content_t print_spool_t 
crontab_tmp_t httpd_user_htaccess_t ssh_keysign_t remote_input_xevent_t gnome_home_t mozilla_tmpfs_t staff_gkeyringd_t 
consolekit_input_xevent_t user_mail_tmp_t chromium_xdg_config_t mozilla_input_xevent_t chromium_tmp_t httpd_user_script_exec_t 
gnome_keyring_tmp_t links_tmpfs_t skype_tmp_t user_gkeyringd_t svirt_home_t sysadm_su_t virt_home_t skype_home_t wireshark_tmp_t 
xscreensaver_xproperty_t consolekit_xproperty_t user_home_dir_t gpg_pinentry_xproperty_t mplayer_home_t 
mozilla_plugin_input_xevent_t mozilla_plugin_tmp_t mozilla_xproperty_t xdm_input_xevent_t chromium_input_xevent_t java_tmpfs_t 
googletalk_plugin_xproperty_t sysadm_t gorg_t gpg_t java_t links_t staff_dbusd_t httpd_user_ra_content_t httpd_user_rw_content_t 
googletalk_plugin_tmp_t gpg_agent_tmp_t ssh_agent_tmp_t sysadm_ssh_agent_t user_fonts_cache_t user_tmp_t 
googletalk_plugin_input_xevent_t user_dbusd_t xserver_tmpfs_t iceauth_home_t qemu_input_xevent_t xauth_home_t mutt_home_t 
sysadm_dbusd_t remote_xproperty_t gnome_xdg_config_t screen_home_t chromium_xproperty_t chromium_tmpfs_t wireshark_tmpfs_t 
xdg_videos_home_t pulseaudio_input_xevent_t krb5_home_t pulseaudio_xproperty_t xscreensaver_input_xevent_t 
gpg_pinentry_input_xevent_t httpd_user_script_t gnome_xdg_cache_home_t mozilla_plugin_tmpfs_t user_home_t user_sudo_t 
ssh_input_xevent_t ssh_tmpfs_t xdg_music_home_t gconf_tmp_t flash_home_t java_home_t skype_tmpfs_t xdg_pictures_home_t 
xdg_data_home_t gnome_keyring_home_t wireshark_home_t chromium_renderer_xproperty_t gpg_pinentry_t mozilla_t session_dbusd_tmp_t 
staff_sudo_t xdg_config_home_t user_su_t pan_input_xevent_t user_devpts_t mysqld_home_t pan_tmpfs_t root_input_xevent_t 
links_home_t sysadm_screen_t pulseaudio_tmpfs_t sysadm_gkeyringd_t mail_home_rw_t gconf_home_t mozilla_plugin_xproperty_t 
mutt_tmp_t httpd_user_content_t mozilla_xdg_cache_t mozilla_home_t alsa_home_t pulseaudio_t mencoder_t admin_crontab_tmp_t 
xdg_documents_home_t user_tty_device_t java_tmp_t gnome_xdg_data_home_t wireshark_t mozilla_plugin_home_t 
googletalk_plugin_tmpfs_t user_cron_spool_t mplayer_input_xevent_t skype_input_xevent_t xxe_home_t mozilla_tmp_t gconfd_t lpr_t 
mutt_t pan_t ssh_t staff_t user_t xauth_t skype_xproperty_t mozilla_plugin_config_t links_xproperty_t mplayer_xproperty_t 
xdg_runtime_home_t cert_home_t mplayer_tmpfs_t user_fonts_t user_tmpfs_t mutt_conf_t gpg_secret_t gpg_helper_t staff_ssh_agent_t 
pulseaudio_tmp_t xscreensaver_t googletalk_plugin_xdg_config_t staff_screen_t user_fonts_config_t ssh_home_t staff_su_t 
screen_tmp_t mozilla_plugin_t user_input_xevent_t xserver_tmp_t wireshark_xproperty_t user_mail_t pulseaudio_home_t 
xdg_cache_home_t user_ssh_agent_t xdg_downloads_home_t chromium_renderer_input_xevent_t cronjob_t crontab_t pan_home_t 
session_dbusd_home_t gpg_agent_t xauth_tmp_t xscreensaver_tmpfs_t iceauth_t mplayer_t chromium_xdg_cache_t lpr_tmp_t 
gpg_pinentry_tmpfs_t pan_xproperty_t ssh_xproperty_t xdm_xproperty_t java_xproperty_t sysadm_sudo_t qemu_xproperty_t 
root_xproperty_t user_xproperty_t mail_home_t xserver_t java_input_xevent_t user_screen_t wireshark_input_xevent_t } !=  ||  t1 
 ==  || );

The RPN notation, however, isn't the only reason why constraints are difficult to read. The other reason is that seinfo no longer knows about the attributes used to generate the constraints. As a result, we get a huge list of all possible types that share a common attribute, but we no longer know which attribute that was.

Not everyone can read the source files in which the constraints are defined, so I hacked together a script that generates a GraphViz dot file based on the seinfo --constrain output for a given class and permission, optionally limiting the huge list of types to a set that the user (err, that is me ;-) is interested in.

For instance, to generate a graph of the constraints related to file reads, limited to the user_t and staff_t types if huge lists would otherwise be shown:

~$ seshowconstraint file read "user_t staff_t" > constraint-file.dot
~$ dot -Tsvg -O constraint-file.dot

This generates the following graph:

If you’re interested in the (ugly) script that does this, you can find it in my GitHub repository.
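For the record, the general approach of such a script is roughly the following. This is only a sketch with made-up helper names, and the real script does considerably more parsing of the RPN expression:

#!/usr/bin/env python3
# Sketch of the general idea only: run seinfo, drop the types we are not
# interested in from the huge type lists, and emit a trivial GraphViz graph.
import subprocess
import sys

def constraint_output():
    # seinfo --constrain prints every constraint in RPN form
    result = subprocess.run(["seinfo", "--constrain"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def filter_types(text, keep):
    # Very crude: keep only the types listed in 'keep', drop all other *_t tokens
    return " ".join(tok for tok in text.split()
                    if not (tok.endswith("_t") and tok not in keep))

if __name__ == "__main__":
    keep = set(sys.argv[1:]) or {"user_t", "staff_t"}
    summary = filter_types(constraint_output(), keep)[:200].replace('"', "'")
    print("digraph constraints {")
    print('  constraint [shape=box, label="%s"];' % summary)
    print("}")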

There are some patches lying around to support naming constraints and carrying that name into the policy, so that denials based on constraints can at least tell the user which constraint is blocking an access (rather than a bare denial whose cause the user has no way of knowing). Hopefully such patches will be made available in the kernel and userspace utilities soon.