
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Erik Mackdanz
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jauhien Piatlicki
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Victor Ostorga
. Vikraman Choudhury
. Vlastimil Babka
. Yury German
. Zack Medico

Last updated:
October 26, 2016, 02:03 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

October 25, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Gentoo Miniconf 2016 (October 25, 2016, 18:02 UTC)

Gentoo Miniconf, Prague, October 2016

As I noted when I resurrected the blog, part of the reason why I managed to come back to “active duty” within Gentoo Linux is because Robin and Amy helped me set up my laptop and my staging servers for signing commits with GnuPG remotely.

And that happened because this year I finally managed to go to the Gentoo MiniConf hosted as part of LinuxDays in Prague, Czech Republic.

The conference track was fairly minimal; Robin gave us an update on the Foundation and on what Infra is doing — I’m really looking forward to the ability to send out changes for review, instead of having to pull and push Git directly. After spending three years using code reviews with a massive repository, I have come to like the workflow and want to see significantly more of it.

Ulrich gave us a nice presentation on the new features coming with EAPI 7, which together with Michal’s post on EAPI 6 made it significantly easier to pick up Gentoo again.

And of course, I managed to get my GnuPG key signed by some of the developers over there, so that there is proof that whoever is committing those changes is really me.

But the most important part for me has been seeing my colleagues again, and meeting the new ones. Hopefully this won’t be the last time I get to the Miniconf, although fitting this together with the rest of my work travel is not straightforward.

I’m hoping to be at 33C3 — I have a hotel reservation and flight tickets, but no ticket for the conference yet. If any of you, devs or users, is there, feel free to ping me over Twitter or something. I’ll probably be at FOSDEM next year too, although that is not a certain thing, because I might have some scheduling conflicts with ENIGMA (unless I can get Delta to give me the ticket I have in mind.)

So once again, thank you to CVU and LinuxDays for hosting us, and hopefully I’ll see you all in the future!

October 17, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GnuPG Agent Forwarding with OpenPGP cards (October 17, 2016, 22:02 UTC)

Finally, after many months of absence (a year?), I’m officially back as a Gentoo Linux developer with proper tree access. I have not used my powers much yet, but I wanted to at least point out why it took me so long to make it possible to come back.

There were two main obstacles I was facing: the first was that the manifest signing key needed to be replaced for a number of reasons, and I had no easy access to the smartcard with my main key, which I’ve been using since 2010. Instead I set myself up with a separate key on a “token”: a SIM-sized OpenPGP card installed into a Gemalto fixed-card reader (IDBridge K30.) Unfortunately this key was not cross-signed (and still isn’t, but we’re fixing that.)

The other problem is that for many (yet not all) of the packages I worked on, I would work on a remote system, one of the containers in my “testing server”, which also host(ed) the tinderbox. This means that the signing needs to happen on the remote host, even though the key cannot leave the smartcard on the local laptop. GPG forwarding is not very simple, but it has sort-of-recently become possible without too much intrusion.

The first thing to know is that you really want GnuPG 2.1; this is because it makes your life significantly easier, as the key management is handed over to the Agent in all cases, which means there is no need for the “stubs” of the private key to be generated in the remote home. The other improvement in GnuPG 2.1 is better socket handling: on systemd it uses the /run/user path, and in general it uses a standard named socket with no way to opt out. It also allows you to define an extra socket that is allowed to issue signature requests, but not modify the card or secret keys, which is part of the defence in depth when allowing remote access to the key.

There are instructions which should make it easier to set up, but they don’t quite work the way I read them, in particular because they require a separate wrapper to set up the connection. Instead, together with Robin we managed to figure out how to make this work correctly with GnuPG 2.0. Of course, since that Sunday, GnuPG 2.1 was made stable, and so it stopped working, too.

So, without further ado, let’s see what is needed to get this to work correctly. In the following example we assume we have two hosts, “local” and “remote”; we’ll have to change ~/.gnupg/gpg-agent.conf and ~/.ssh/config on “local”, and /etc/ssh/sshd_config on “remote”.

The first step is to ask GPG Agent to listen to an “extra socket”, which is the restricted socket that we want to forward. We also want it to keep the display information in memory; I’ll explain why towards the end.

# local:~/.gnupg/gpg-agent.conf

extra-socket ~/.gnupg/S.gpg-agent.remote
keep-display

This is particularly important for systemd users because the normal sockets would be in /run and so it’s a bit more complicated to forward them correctly.

Secondly, we need to ask OpenSSH to forward this Unix socket to the remote host; for this to work you need at least OpenSSH 6.7, but since that’s now quite old, we can be mostly safe in assuming you are using at least that version. Unlike GnuPG, SSH does not expand the tilde for home, so you’ll have to spell out the actual paths to place the socket at the right location.

# local:~/.ssh/config

Host remote
RemoteForward /home/remote-user/.gnupg/S.gpg-agent /home/local-user/.gnupg/S.gpg-agent.remote
ExitOnForwardFailure yes

Note that the paths need to be fully qualified and are in the order remote, local. The ExitOnForwardFailure option ensures that you don’t get a silent failure to listen to the socket and fight for an hour trying to figure out what’s going on. Yes, I had that problem. By the way, you can combine this just fine with the now not so unknown SSH tricks I spoke about nearly six years ago.

Now is the slightly trickier part. Unlike the original gpg-agent, OpenSSH will not clean up the socket when it’s closed, which means you need to make sure it gets overwritten. This is indeed the main logic behind the remote-gpg script that I linked earlier, and the reason for that is that the StreamLocalBindUnlink option, which seems like the most obvious parameter to set, does not behave like most people would expect it to.

The explanation for that is actually simple: as the name of the option says, this only works for local sockets. So if you’re using the LocalForward it works exactly as intended, but if you’re using RemoteForward (as we need in this case), the one on the client side is just going to be thoroughly ignored. Which means you need to do this instead:

# remote:/etc/ssh/sshd_config

StreamLocalBindUnlink yes

Note that this applies to all requests. You could reduce the possibility of problems by using the Match directive to restrict the setting to the single user you care about, but that’s left up to you as an exercise.
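A restricted variant might look like this (a sketch only; “remote-user” is a placeholder for whichever account receives the forward on your system):

```
# remote:/etc/ssh/sshd_config — hypothetical Match-restricted version
Match User remote-user
    StreamLocalBindUnlink yes
```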

At this point, things should just work: GnuPG 2.1 will notice there is a socket already, so it will not start up a new gpg-agent process, and everything else that is needed will still start up. And since, as I said, the stubs are not needed, there is no need to use --card-edit or --card-status (which, by the way, would not work anyway, as they are forbidden by the extra socket.)

However, if you try at this point to sign anything, it’ll just fail because it does not know anything about the key; so before you use it, you need to fetch a copy of the public key for the key id you want to use:

gpg --recv-key ${yourkeyid}
gpg -u ${yourkeyid} --clearsign --stdin

(It will also work without -u if that’s the only key it knows about.)

So what about keep-display in local:~/.gnupg/gpg-agent.conf? One of the issues I faced with Robin was gpg failing with something about “file not found”, though obviously the file I was using was found. A bit of fiddling later turned up these problems:

  • before GnuPG 2.1 I would start up gpg-agent with the wrapper script I wrote, and so it would usually be started by one of my Konsole sessions;
  • most of the time the Konsole session with the agent would be dead by the time I went to SSH;
  • the PIN for the card has to be typed on the local machine, not remote, so the pinentry binary should always be started locally; but it would get (some of) the environment variables from the session in which gpg is running, which means the shell on “remote”;
  • using DISPLAY=:0 gpg would make it work fine as pinentry would be told to open the local display.

A bit of sniffing around the source code brought up that keep-display option, which essentially tells pinentry to ignore the session where gpg is running and only consider the DISPLAY variable set when gpg-agent was started. This works for me, but it has a few drawbacks. It would not work correctly if you tried to use GnuPG outside of the X11 session, and it would not work correctly if you have multiple X11 sessions (say through X11 forwarding.) I think this is fine.

There is another general drawback to this solution: if two clients connect to the same SSH server with the same user, the last one connecting is the one that actually gets to provide its gpg-agent. The other one will be silently overruled. I’m afraid there is no obvious way to fix this. The way OpenSSH itself handles this for SSH Agent forwarding is to provide a randomly-named socket in /tmp, and set the environment variable to point at it. This would not work for GnuPG anymore because it has now standardised the socket name, and removed support for passing it in environment variables.

October 16, 2016
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Fixing gtk behaviour (October 16, 2016, 13:26 UTC)

Recently I've noticed all gtk2 apps becoming quite ... what's the word ... derpy?
Things like scrollbars not working and stuff. And by "not working" I mean the gtk3 behaviour of not showing up/down arrows and being a grey smudge of stupid.

So accidentally I stumbled over an old gentoo bug where it was required to deviate from defaults to have, like, icons and stuff.
That sounds pretty reasonable to me, but with gtk upstream crippling the Ad-Waiter, err, adwaita theme, because gtk3, this is a pretty sad interaction. And unsurprisingly, by switching to the upstream default theme, Raleigh, gtk2 apps start looking a lot better. (Like, scrollbars and stuff.)

The change might make sense to apply to Gentoo globally; locally, for each user, it is simply:

$ cat ~/.gtkrc-2.0
gtk-theme-name = "Raleigh"
gtk-cursor-theme-name = "Raleigh"
I'm still experimenting with 'gtk-icon-theme-name' and 'gtk-fallback-icon-theme'; maybe those should change too. And as a benefit we can remove the Ad-Waiter from dependencies, possibly drop gnome-themes too, and restore a fair amount of sanity to gtk2.

Changing console fontsize (October 16, 2016, 10:09 UTC)

Recently I accidentally acquired some "HiDPI" hardware. While it is awesome to use, it quickly becomes irritating to be almost unable to read the bootup messages or work in a VT.
The documentation on fixing this is surprisingly sparse, but luckily it is very easy:

  • Get a font that comes in the required sizes. media-fonts/terminus-font was the first choice I found, there may be others that are nice to use. Since terminus works well enough I didn't bother to check.
  • Test the font with "setfont". The default path is /usr/share/consolefonts, and the font 'name' is just the filename without the .psf.gz suffix. If you break things you can revert to sane defaults by just calling "setfont" or rebooting the machine (ehehehehehe)
  • Set the font in /etc/conf.d/consolefont. For a 210dpi notebook display I chose 'ter-v24b', but I'm considering going down a font size or two, maybe 'ter-v20b'? It's all very subjective ...
  • On reboot the consolefont init script will set the required font.
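Put together, the persistent part of the steps above is a one-line configuration (the font name matches my choice above; any installed Terminus size works):

```
# /etc/conf.d/consolefont
# try the candidate interactively first with `setfont ter-v24b`;
# a plain `setfont` reverts to the default if it looks wrong
consolefont="ter-v24b"
```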
Now I'm wondering if such fonts can be embedded into the kernel so that on boot it directly switches to a 'nice' font, but just being able to read the console output is a good start ...

October 13, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The end of an era, the end of the tinderbox (October 13, 2016, 19:02 UTC)

I’m partly sad, but for the most part this is a weight that goes away from my shoulders, so I can’t say I’m not at least in part joyful of it, even though the context in which this is happening is not exactly what I expected.

I turned off the Gentoo tinderbox, never to come back. The S3 storage of logs is still running, but I’ve asked Ian to see if he can attach everything at his pace, so I can turn off the account and be done with it.

Why did this happen? Well, it’s a long story. I had already stopped running it for a few months because I got tired of Mike behaving like a child, as I already reported in 2012: closing my bugs because the logs are linked (from S3) rather than attached. I already made my position clear that it’s a silly distinction, as the logs will not disappear into the middle of nowhere (indeed I’ll keep the S3 bucket for them running until they are all attached to Bugzilla), but as he keeps insisting that it’s “trivial” to change the behaviour of the whole pipeline, I decided to give up.

Yes, it’s only one developer, and yes, lots of other developers took my side (thanks guys!), but it’s still aggravating to have somebody who can do whatever he likes without reporting to anybody, ignoring Council resolutions, QA (when I was the lead) and essentially using Gentoo as his personal playground. And the fact that only two people (Michał and Julian) have been pushing for a proper resolution is a bit disappointing.

I know it might feel like I’m taking my toys and going home — well, that’s what I’m doing. The tinderbox has been draining on my time (little) and my money (quite more), but those I was willing to part with — draining my motivation due to assholes in the project was not in the plans.

In the past six years that I’ve been working on this particular project, things evolved:

  • Originally, it was a simple chroot with a looping emerge, inspected with grep and Emacs, running on my desktop and intended to catch --as-needed failures. It went through lots of disks, and got me off XFS for good due to kernel panics.
  • It was moved to LXC, which is why the package entered the Gentoo tree, together with the OpenRC support and the first few crude hacks.
  • When I started spending time in Los Angeles for a customer, Yamato under my desk got replaced with Excelsior, which was crowdfunded and hosted, for two years straight, by my customer at the time.
  • This is where the rewrite happened, from attaching logs (which I could earlier do with more or less ease, thanks to NFS) to store them away and linking instead. This had to do mostly with the ability to remote-manage the tinderbox.
  • This year, since I no longer work for the company in Los Angeles, and instead I work in Dublin for a completely different company, I decided Excelsior was better off on a personal space, and rented a full 42 unit cabinet with Hurricane Electric in Fremont, where the server is still running as I type this.

You can see that it’s not that I’m trying to avoid spending time to engineer solutions. It’s just that I feel that what Mike is asking is unreasonable, and the way he’s asking it makes it unbearable. Especially when he feigns to care about my expenses — as I noted in the previously linked post, S3 is dirt cheap, and indeed it now comes down to $1/month given to Amazon for the logs storage and access, compared to $600/month to rent the cabinet at Hurricane.

Yes, it’s true that the server is not doing only tinderboxing – it also is running some fate instances, and I have been using it as a development server for my own projects, mostly open-source ones – but that’s the original use for it, and if it wasn’t for it I wouldn’t be paying so much to rent a cabinet, I’d be renting a single dedicated server off, say, Hetzner.

So here we go, the end of the era of my tinderbox. Patrick and Michael are still continuing their efforts, so it’s not like Gentoo is left without integration testing, but I’m afraid it’ll be harder for at least some of the maintainers who leveraged the tinderbox heavily in the past. My contract with Hurricane expires in April; at that point I’ll get the hardware out of the cabinet, and will decide what to do with it — it’s possible I’ll donate the server (minus hard drives) to the Gentoo Foundation or someone else who can use it.

My involvement in Gentoo might also suffer from this; I hopefully will be dropping one of the servers I maintain off the net pretty soon, which will be one less system to build packages for, but I still have a few to take care of. For the moment I’m taking a break: I’ll soon send an email that it’s open season on my packages; I locked my bugzilla account already to avoid providing harsher responses in the bug linked at the top of this post.

I have been trying my best not to comment on systemd one way or another for a while. For the most part because I don’t want to have a trollfest on my blog, because moderating it is something I hate and I’m sure would be needed. On the other hand it seems like people start to bring me in the conversation now from time to time.

What I would like to point out at this point is that both extreme sides of the vision are, in my opinion, behaving childishly and being totally unprofessional. Whether it is name-calling of the people or the software, death threats, insults, satirical websites, labeling of 300 people for a handful of them, etc.

I don’t think I have ever been as happy to have a job that allows me not to care about open source as much as I used to, as I have been in the past few weeks, as things keep escalating and escalating. You guys are the worst. And again I refer to both supporters and detractors, devs of systemd, devs of eudev, Debian devs and Gentoo devs, and so on and so forth.

And the reason why I say this is because you both want to bring this to extremes that I think are totally uncalled for. I don’t see the world in black and white and I think I said that before. Gray is nuanced and interesting, and needs skills to navigate, so I understand it’s easier to just take a stand and never revise your opinion, but the easy way is not what I care about.

Myself, I decided to migrate my non-server systems to systemd a few months ago. It works fine. I’ve considered migrating my servers, and I decided for the moment to wait. The reason is technical for the most part: I don’t think I trust the stability promises for the moment and I don’t reboot servers that often anyway.

There are good things to the systemd design. And I’m sure that very few people will really miss sysvinit as is. Most people, especially in Gentoo, have not been using sysvinit properly, but rather through OpenRC, which shares more spirit with systemd than sysv, either by coincidence or because they are just the right approach to things (declarativeness to begin with).

At the same time, I don’t like Lennart’s approach on this to begin with, and I don’t think it’s uncalled for to criticize the product based on the person in this case, as the two are tightly coupled. I don’t like moderating people away from a discussion, because it just ends up making the discussion even more confrontational on the next forum you stumble across them — this is why I never blacklisted Ciaran and friends from my blog even after a group of them started pasting my face on pictures of nazi soldiers from WW2. Yes I agree that Gentoo has a good chunk of toxic supporters, I wish we got rid of them a long while ago.

At the same time, if somebody were to try to categorize me the same way as the people who decided to fork udev without even thinking of what they were doing, I would want to point out that I was reproaching them from day one for their absolutely insane (and inane) starting announcement and first few commits. And I have not been using it ever, since for the moment they seem to have made good on the promise of not making it impossible to run udev without systemd.

I don’t agree with the complete direction right now, and especially with the one-size-fit-all approach (on either side!) that tries to reduce the “software biodiversity”. At the same time there are a few designs that would be difficult for me to attack given that they were ideas of mine as well, at some point. Such as the runtime binary approach to hardware IDs (that Greg disagreed with at the time and then was implemented by systemd/udev), or the usage of tmpfs ACLs to allow users at the console to access devices — which was essentially my original proposal to get rid of pam_console (that played with owners instead, making it messy when having more than one user at console), when consolekit and its groups-fiddling was introduced (groups can be used for setgid, not a good idea).

So why am I posting this? Mostly to tell everybody out there that if you plan on using me for either side point to be brought home, you can forget about it. I’ll probably get pissed off enough to try to prove the exact opposite, and then back again.

Neither of you is perfectly right. You both make mistakes. And you both are unprofessional. Try to grow up.

Edit: I mistyped eudev in the original article and it read euscan. Sorry Corentin, was thinking one thing and typing another.

New devbox running (October 13, 2016, 19:02 UTC)

I announced in February that Excelsior, which ran the Tinderbox, was no longer at Hurricane Electric. I have also said I’ll start working on a new generation Tinderbox, and to do that I need a new devbox, as the only three Gentoo systems I have at home are the laptops and my HTPC, not exactly hardware to run compilation on all the freaking time.

So after thinking of options, I decided that it was much cheaper to just rent a single dedicated server, rather than a full cabinet, and after asking around for options I settled for, because of price and recommendation from friends. Unfortunately they do not support Gentoo as an operating system, which makes a few things a bit more complicated. They do provide you with a rescue system, based on Ubuntu, which is enough to do the install, but not everything is easy that way either.

Luckily, most of the configuration (but not all) was stored in Puppet — so I only had to rename the hosts there, change the MAC addresses for the LAN and WAN interfaces (I use static naming of the interfaces as lan0 and wan0, which makes many other pieces of configuration much easier to deal with), change the IP addresses, and so on. Unfortunately, since I didn’t start setting up that machine through Puppet, it did not carry all the information needed to replicate the system, so it required some iteration and fixing of the configuration. This also means that the next move is going to be easier.

The biggest problem has been correctly setting up the MDRAID partitions, because of GRUB2: if you didn’t know, grub2 has an automagic dependency on mdadm — if you don’t install it, it won’t be able to install itself on a RAID device, even though it can detect it; the maintainer refused to add a USE flag for it, so you have to know about it.

Given what can and cannot be autodetected by the kernel, I had to fight a little more than usual, and just gave up and rebuilt the two arrays (/boot and / — yes, laugh at me, but when I installed Excelsior it was the only way to get GRUB2 not to throw up) as metadata 0.90. But the next problem was being able to tell what the boot-up errors were, as I have no physical access to the machine of course.

The server I rented is a Dell server that comes with iDRAC for remote management (Dell’s own name for IPMI, essentially), and allows you to set up connections to through your browser, which is pretty neat — they use a pool of temporary IP addresses and they only authorize your own IP address to connect to them. On the other hand, they do not change the default certificates, which means you end up with the same untrustable Dell certificate every time.

From the iDRAC console you can’t do much, but you can start up the remote, JavaWS-based console, which reminded me of something. Unfortunately the JNLP file that you can download from iDRAC did not work on either Sun, Oracle or IcedTea JREs, segfaulting (no kidding) with an X.509 error log as last output — I seriously thought the problem was with the certificates until I decided to dig deeper and found this set of entries in the JNLP file:

 <resources os="Windows" arch="x86">
   <nativelib href="https://idracip/software/avctKVMIOWin32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin32.jar" download="eager"/>
 </resources>
 <resources os="Windows" arch="amd64">
   <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
 </resources>
 <resources os="Windows" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="x86">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="i386">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="i586">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="i686">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="amd64">
   <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
 </resources>
 <resources os="Mac OS X" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOMac64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLMac64.jar" download="eager"/>
 </resources>

Turns out that if you remove everything but the Linux/x86_64 option, it does fetch the right jar and execute the right code without segfaulting. Mysteries of Java Web Start, I guess.
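In other words, the file that finally worked kept only the one matching section, along these lines (reconstructed from the excerpt above):

```
 <resources os="Linux" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
 </resources>
```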

So after finally getting the system to boot, the next step is setting up networking — as I said I used Puppet to set up the addresses and everything, so I had working IPv4 at boot, but I had to fight a little longer to get IPv6 working. Indeed IPv6 configuration with servers, virtual and dedicated alike, is very much an unsolved problem. Not because there is no solution, but mostly because there are too many solutions — essentially every single hosting provider I ever used had a different way to set up IPv6 (including none at all in one case, so the only option was a tunnel) so it takes some fiddling around to set it up correctly.

To be honest, has a better setup than OVH or Hetzner, the latter being very flaky, and a more self-service one than Hurricane, which was very flexible, making it very easy to set up, but at the same time required me to just mail them if I wanted to make changes. They document the setup for dibbler, as they rely on DHCPv6 with DUID for delegation — they give you a single /56 v6 net that you can then split into subnets and delegate independently.

What DHCPv6 in this configuration does not give you is routing — which kind of makes sense, as you can use RA (Router Advertisement) for it. Unfortunately at first I could not get it to work. Turns out that, since I use subnets for the containerized network, I had enabled IPv6 forwarding, through Puppet of course. And Linux will ignore Router Advertisement packets when forwarding IPv6 unless you ask it nicely to — by setting accept_ra=2 as well. Yey!

Again, this is the kind of problem where finding the information took much longer than it should have; Linux does not really tell you that it’s ignoring RA packets, and it is by far not obvious that setting one sysctl will effectively disable another — unless you go and look for it.
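For reference, the combination described above translates to sysctl settings like these (wan0 is the static interface naming from my setup; adjust to yours):

```
# /etc/sysctl.d/ipv6-ra.conf
# forwarding is enabled for the containers' sake...
net.ipv6.conf.wan0.forwarding = 1
# ...so Router Advertisements need accept_ra=2, or they are silently ignored
net.ipv6.conf.wan0.accept_ra = 2
```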

Luckily this was the last problem I had; after that the server was set up fine and I just had to finish configuring the domain’s zone file, and the reverse DNS and the SPF records… yes, this is all the kind of trouble you go through if you run your whole infrastructure yourself rather than going fully cloud — which is why I don’t consider self-hosting a general solution.

What remained was just bits and pieces. The first was me realizing that Puppet does not remove entries from /etc/fstab by default, which is how I noticed that the Gentoo default /etc/fstab file still contains entries for CD-ROM drives as well as /dev/fd0. I don’t remember which was the last computer with a floppy disk drive that I used, let alone owned.

The other fun bit has been setting up the containers themselves — similarly to the server itself, they are set up with Puppet. Since the server used to run a tinderbox, it used to also host a proper rsync mirror (it was just easier), but I didn’t want to repeat that here, and since I was unable to find a good mirror through mirrorselect (longer story), I configured Puppet to just provide to all the containers with as their sync server, which did not work. Turns out that our default mirror address does not have any IPv6 hosts on it — when I asked Robin about it, it seems like we just don’t have any IPv6-hosted mirror that can handle that traffic, which is sad.

So anyway, I now have a new devbox and I’m trying to set up the rest of my repositories and access (I have not set up access to Gentoo’s repositories yet, which is kind of the point here). Hopefully this will also lead to more technical blogging in the next few weeks, as I’m cutting down on the overwork to relax a bit.

What does #shellshock mean for Gentoo? (October 13, 2016, 19:02 UTC)

Gentoo Penguins with chicks at Jougla Point, Antarctica
Photo credit: Liam Quinn

This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I’ll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.
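The widely circulated one-liner to check whether a given bash binary is affected by the original CVE-2014-6271: a patched bash prints only test, while a vulnerable one prints vulnerable first.

```shell
# on a patched bash, the exported "function" is not imported,
# so only the inner command runs
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```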

What remains to be done is to figure out how to avoid a repeat of this. And that’s a difficult topic, because a 25-year-old bug is not easy to avoid, especially since there are probably plenty of siblings of it around that we have not found yet, just like this last week. But there are things that we can do as a whole ecosystem to reduce the chances of problems like this happening, or at least keep them from escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh for Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I pushed for that four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. This is, indeed, one of the things I find genuinely good in Lennart’s design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If either all your init scripts are non-bash-requiring or you’re using systemd (like me on the laptops), then it’s mostly safe to switch to use dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash

That will change your /bin/sh and make it much less likely that you’d be vulnerable to this particular problem. Unfortunately, as I said, it’s only mostly safe. I even found that some of the init scripts I wrote, which I had checked with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage on the output — this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
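To give an idea of what checkbashisms flags, here are two common bashisms and their POSIX equivalents that dash accepts (illustrative snippets, not taken from any actual Gentoo init script):

```shell
# bashism: [[ $answer == y* ]]  (POSIX sh has no [[ ]]; use case)
answer="yes"
case "$answer" in
    y*) echo "match" ;;
    *)  echo "no match" ;;
esac

# bashism: ${path//\//:}  (global substitution is not POSIX; use sed)
path="/usr/local/bin"
echo "$path" | sed 's|/|:|g'
```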

Interestingly it would be simpler for me to use zsh, as then both the init script and lsb_release would have worked. Unfortunately when I tried doing that, Emacs tramp-mode froze when trying to open files, both with sshx and sudo modes. The same was true for using BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately it does not mean you’ll be perfectly safe or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I’m actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been considered a programming language. That’s bad. Not only because it has just one reference implementation, but because it also seems to convince other people, new to coding, that it’s a good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that’s hard to deny. I’ve stopped being active since I finally accepted stable employment – I’m almost thirty, it was time to stop playing around, I needed to make a living, even if I don’t really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a good deal. On the other hand, even though it’s going to be probably too late to be relevant, I’ll push for having a Summer of Code next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that through Puppet to all my servers. It should be enough, for the moment. I expect more scrutiny to be spent on dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to think twice about all of them, so I’ll keep my options open.

This is actually why I like software biodiversity: it allows you to select a different option when one component fails, and that is what worries me the most with systemd right now. I also hope that showing how bad bash has been all this time with its closed development will make it possible to have a better syntax-compatible shell with a proper parser, even better with a properly librarised implementation. But that’s probably hoping too much.

TG4: Tinderbox Generation 4 (October 13, 2016, 19:02 UTC)

Everybody’s a critic: the first comment I received when I showed other Gentoo developers my previous post about the tinderbox was a question on whether I would be using pkgcore for the new generation tinderbox. If you have understood what my blog post was about, you probably understand why I was not happy about such a question.

I thought the blog post made it very clear that my focus right now is not to change the way the tinderbox runs, but the way the reporting pipeline works. This is the same problem as in 2009: generating build logs is easy, sifting through them is not. At first I thought this was hard just for me, but the fact that GSoC attracted multiple people interested in doing continuous builds, yet not one interested in logmining, showed me this is just a hard problem.

The approach I took last time, with what I’ll start calling TG3 (Tinderbox Generation 3), was to highlight the error/warning messages, provide a list of build logs for which a problem was identified (without caring much about which kind of problem), and just show broken builds or broken tests in the interface. This was easy to build, and up to a point easy to use, but it had a lot of drawbacks.

The major drawbacks of that UI are that it relies on manual work to identify open bugs for the package (and thus make sure not to report duplicate bugs), and on my own memory not to report the same issue multiple times if the bug was closed by somebody as NEEDINFO.

I don’t have my graphic tablet with me to draw a mock of what I have in mind yet, but I can throw in some of the things I’ve been thinking of:

  • Being able to tell what problem or problems a particular build is about. It’s easy to tell whether a build log is just a build failure or a test failure, but what if instead it has three or four different warning conditions? Being able to tell which ones have been found and having a single-click bug filing system would be a good start.
  • Keep in mind the bugs filed against a package. This is important because sometimes a build log is just a repeat of something filed already; it may be that it failed multiple times since you started a reporting run, so it might be better to show that easily.
  • Related, it should collapse failures for packages so as not to repeat the same package multiple times on the page. Say you look at the build failures every day or two: you don’t care if the same package failed 20 times, especially if the logs report the same error. Finding out whether the error messages are the same is tricky, but at least you can collapse the multiple logs into a single log per package, so you don’t need to skip it over and over again.
  • Again related, it should keep track of which logs have been read and which weren’t. It’s going to be tricky if the app is made multi-user, but at least a starting point needs to be there.
  • It should show the three most recent bugs open for the package (and a count of how many other open bugs) so that if the bug was filed by someone else, it does not need to be filed again. Bonus points for showing the few most recently reported closed bugs too.
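The collapsing step at least is cheap to prototype with standard tools: given one failed package name per log line, a sort | uniq pipeline yields the per-package grouping (file name and package names below are made up):

```shell
# one failed package per line, possibly repeated across logs
printf '%s\n' dev-libs/foo app-misc/bar dev-libs/foo dev-libs/foo > failures.txt

# one row per package, most frequent failures first
sort failures.txt | uniq -c | sort -rn
```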

You can tell already that this is a considerably more complex interface than the one I used before. I expect it’ll take some work with JavaScript at the very least, so I may end up doing it with AngularJS and Go mostly because that’s what I need to learn at work as well, don’t get me started. At least I don’t expect I’ll be doing it in Polymer but I won’t exclude that just yet.

Why do I spend this much time thinking and talking (and soon writing) about UI? Because I think this is the current bottleneck to scaling up the amount of analysis of Gentoo’s quality. Running a tinderbox is getting cheaper — there are plenty of dedicated server offers that are considerably cheaper than what I paid for hosting Excelsior, let alone the initial investment in it. And this is without even looking at the possible costs of running them on GCE or AWS on demand.

Three years ago, my choice of a physical server in my hands was easier to justify than now, with 4-core HT servers with 48GB of RAM starting at €40/month — while I/O is still the limiting factor, with that much RAM it’s well possible to have one tinderbox building fully in tmpfs, and just run a separate server for a second instance, rather than sharing multiple instances.
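Building fully in tmpfs is then a single fstab line (the size is an illustration for a 48GB box; /var/tmp/portage is Portage’s default build directory):

```
# /etc/fstab fragment (example sizing)
tmpfs  /var/tmp/portage  tmpfs  size=32G,uid=portage,gid=portage,mode=0775,noatime  0 0
```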

And even if GCE/AWS instances that are charged for time running are not exactly interesting for continuous build systems, having a cloud image that can be instructed to start running a tinderbox with a fixed set of packages, say all the reverse dependencies of libav, would make it possible to run explicit tests for code that is known to be fragile, while not pausing the main tinderbox.

Finally, there are different ideas of how we should be testing packages: all options enabled, all options disabled, multilib or not, hardened or not, one package at a time, all packages together… they can all share the same exact logmining pipeline, as all it needs is the emerge --info output, and the log itself, which can have markers for known issues to look out for or not. And then you can build the packages however you desire, as long as you can submit them there.
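The first stage of that shared pipeline is easy to sketch: flag any log containing known failure markers. The log content and marker patterns below are made up for illustration:

```shell
# create a sample build log
log="$(mktemp)"
printf '%s\n' \
    'checking for gcc... yes' \
    "error: conflicting types for 'foo'" \
    'make: *** [all] Error 1' > "$log"

# count lines matching known failure markers
grep -E -c '(^make: \*\*\*|error:)' "$log"
```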

Now my idea is not to just build this for myself and run analysis over all the people who want to submit build logs, because that would be just about as crazy as it sounds. But I think it would be okay to have a shared instance for Gentoo developers to submit build logs from their own personal instances, if they want to, and then have them look at their own accounts only. It’s not going to be my first target, but I’ll keep that in mind when I start my mocks and implementations, because I think it might prove successful.

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we’ve been waiting for the day for a while. I’ve been using this for a while already, and my hope is for it to be easy enough for my mother and my sister, as well as my friends, to start using it.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let’s start with the hardware, as there are four different options of hardware that you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative for the same features — I have not tested whether it is as sturdy as the Yubico one, so I can’t vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.

I got the NEO, but mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type it back from a computer – and a NEO-n to leave installed on one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I’ll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. What happens, though, is that you need access to the device node, which the Linux kernel by default makes accessible only to root, for good reasons.

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal hacky choice is to make all the Yubico devices accessible — the main reason being that I don’t know all of the compatible USB Product IDs, as some of them are not really available to buy but come, for instance, from developer-mode devices that I may or may not end up using.

If you’re using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"

If you’re not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"

-These rules do not include support for the Plug-up because I have no idea what their VID/PID pairs are, I asked Janne who got one so I can amend this later.- Edit: added the rules for the Plug-up device. Cute their use of f1d0 as device id.

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I’ll leave it to the systemd devs to figure out how to include them in the default ruleset.

These rules will not only allow your user to access /dev/hidraw0 but also the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium; the open-source version works as well) uses the U2F devices in two different modes: one is through a built-in extension that works with Google assets and accesses the low-level device as /dev/bus/usb/*; the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter is the actually standardized specification and how you’re supposed to use it right now. I don’t know if the former workflow is going to be deprecated at some point, but I wouldn’t be surprised.

For those like me who bought the NEO devices, you’ll have to enable the U2F mode — while Yubico provides the linked step-by-step guide, it was not really completely correct for me on Gentoo, but it should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid required to use the CCID interface on U2F-enabled NEOs. And if you already created the udev rules file as I noted above, it’ll work without you using root privileges. Just remember that if you are interested in the OpenPGP support you’ll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).

I’ll recount separately the issues with packaging the software. In the meantime, make sure you keep your accounts safe, and let’s all hope that more sites will start protecting your accounts with U2F. I’ll also write a separate opinion piece on why U2F is important and why it is better than OTP; this is just meant as documentation, a howto for setting up the U2F devices on your Linux systems.

You might have seen the word TEXTREL thrown around security or hardening circles, or used in Gentoo Linux installation warnings, but one thing that is clear out there is that the documentation around this term is not very useful for understanding why text relocations are a problem, so I’ve been asked to write something about it.

Let’s start with taking apart the terminology. TEXTREL is jargon for “text relocation”, which is once again more jargon, as “text” in this case means “code portion of an executable file.” Indeed, in ELF files, the .text section is the one that contains all the actual machine code.

As for “relocation”, the term relates to dynamic loaders. It is the process of modifying the data loaded from the file to suit its placement within memory. This might also require some explanation.

When you build code into executables, any named reference is translated into an address instead. This includes, among others, variables, functions, constants and labels — and also some unnamed references such as branch destinations on statements such as if and for.

These references fall into two main types: relative and absolute references. This is the easiest part to explain: a relative reference takes some address as “base” and then adds or subtracts from it. Indeed, many architectures have a “base register” which is used for relative references. In the case of executable code, particularly with references to labels and branch destinations, relative references translate into relative jumps, which are relative to the current instruction pointer. An absolute reference is instead a fully qualified pointer to memory, or at least to the address space of the running process.

While absolute addresses are kinda obvious as a concept, they are not very practical for a compiler to emit in many cases. For instance, when building shared objects, there is no way for the compiler to know which addresses to use, particularly because a single process can load multiple objects, and they need to all be loaded at different addresses. So instead of writing to the file the actual final (unknown) address, what gets written by the compiler first – and by the link editor afterwards – is a placeholder. It might sound ironic, but an absolute reference is then emitted as a relative reference based upon the loading address of the object itself.

When the loader takes an object and loads it into memory, it’ll be mapped at a given “start” address. After that, the absolute references are inspected, and each relative placeholder is resolved to its final absolute address. This is the process of relocation. Different types of relocation (or displacements) exist, but they are not the topic of this post.

Relocations as described up until now can apply to both data and code, but we single out code relocations as TEXTRELs. The reason for this is to be found in mitigation (or hardening) techniques. In particular, what is called W^X, NX or PaX. The basic idea of this technique is to disallow modification to executable areas of memory, by forcing the mapped pages to either be writable or executable, but not both (W^X reads “writable xor executable”.) This has a number of drawbacks, which are most clearly visible with JIT (Just-in-Time) compilation processes, including most JavaScript engines.

But besides the JIT problem, there is also the problem of relocations happening in the code section of an executable. Since the relocations need to be written to, it is not feasible (or at least not easy) to give those pages exclusively-writable or exclusively-executable access. Well, there are theoretical ways to produce that result, but they complicate memory management significantly, so the short version is that, generally speaking, TEXTRELs and W^X techniques don’t go well together.
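You can check any ELF object for this condition yourself with binutils: a shared object built from position-independent code carries no TEXTREL tag in its dynamic section. This sketch assumes a C toolchain (cc) and readelf are installed; the paths are throwaway examples:

```shell
# build a tiny position-independent shared object
printf 'int answer(void) { return 42; }\n' > /tmp/trel.c
cc -shared -fPIC -o /tmp/trel.so /tmp/trel.c

# a PIC build has no TEXTREL entry in its dynamic section
if readelf -d /tmp/trel.so | grep -q TEXTREL; then
    echo "has text relocations"
else
    echo "clean (no TEXTREL)"
fi
```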

This is further complicated by another mitigation strategy: ASLR, Address Space Layout Randomization. In particular, ASLR fully defeats prelinking as a strategy for dealing with TEXTRELs — theoretically on a system that allows TEXTREL but has the address space to map every single shared object at a fixed address, it would not be necessary to relocate at runtime. For stronger ASLR you also want to make sure that the executables themselves are mapped at different addresses, so you use PIE, Position Independent Executable, to make sure they don’t depend on a single stable loading address.

Usage of PIE was for a long while limited to a few select hardened distributions, such as Gentoo Hardened, but it’s getting more common, as ASLR is a fairly effective mitigation strategy even for binary distributions where otherwise function offsets would be known to an attacker.

At the same time, SELinux also implements protection against text relocation, so you no longer need to have a patched hardened kernel to provide this protection.

Similarly, Android 6 is now disallowing the generation of shared objects with text relocations, although I have no idea if binaries built to target this new SDK version gain any more protection at runtime, since it’s not really my area of expertise.

LibreSSL and the bundled libs hurdle (October 13, 2016, 19:02 UTC)

It was over five years ago that I ranted about the bundling of libraries and what that means for vulnerabilities found in those libraries. The world has, since, not really listened. RubyGems still keeps insisting that “vendoring” gems is good, Go explicitly didn’t implement a concept of shared libraries, and let’s not even talk about Docker or OSv and their absolutism in static linking and bundling of, essentially, the whole operating system.

It should have been obvious how this can be a problem when Heartbleed came out: bundled copies of OpenSSL would have needed separate updates from the system libraries. I guess lots of enterprise users of such software were saved only by the fact that most of the bundlers ended up using older versions of OpenSSL where heartbeat was not implemented at all.

Now that we’re talking about replacing the OpenSSL libraries with those coming from a different project, we’re going to be hit by both edges of the proprietary software sword: bundling and ABI compatibility, which will make things really interesting for everybody.

You may have seen my (short, incomplete) list of RAND_egd() users, which I posted yesterday. While the tinderbox from which I took it is out of date and needs cleaning, it is a good starting point for figuring out the trends, and as somebody already picked up on, the bundling is indeed strong.

Software that bundled Curl, or even Python, but then relied on the system copy of OpenSSL, will now be looking for RAND_egd() and thus fail. You could unbundle these libraries and use a proper, patched copy of Curl from the system, where the usage of RAND_egd() has been removed, but then again, this is what I’ve been advocating forever or so. With caveats, in the case of Curl.

But now, if the use of RAND_egd() is actually coming from the proprietary bits themselves, you’re stuck and you can’t use the new library: you either need to keep around an old copy of OpenSSL (which may be buggy and expose even more vulnerabilities) or you need a shim library that only provides ABI compatibility against the new LibreSSL-provided library — I’m still not sure why this particular trick is not employed more often, when the changes to a library are only at the interface level while it still implements the same functionality.

Now the good news is that, from the list that I produced, the egd functions never seemed popular among proprietary developers. This is expected, as egd was largely a way to implement the /dev/random semantics for non-Linux systems, while the proprietary software that we deal with, at least in the Linux world, can just accept the existence of the devices themselves. So the only problems have to do with unbundling (or replacing) Curl and possibly the Python SSL module. Doing so is not obvious though, as I see from the list that there are at least two copies of the older ABI for Curl — although admittedly one is from the scratchbox SDKs, which could just as easily be replaced with something less hacky.

Anyway, my current task is to clean up the tinderbox so that it’s in a working state, after which I plan to do a full build of all the reverse dependencies of OpenSSL; it’s very possible that there are more entries that should be in the list, since it was built with USE=gnutls globally to test for GnuTLS 3.0 when it came out.

October 11, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Openstack Newton Update (October 11, 2016, 05:00 UTC)

The short of it

Openstack Newton was packaged early last week (when rc2 was still going on upstream) and the tags for the major projects were packaged the day they released (nova and the like).

I've updated the openstack-meta package to 2016.2.9999 and would recommend people use that.

Heat has also been packaged this time around so you are able to use that if you wish.

I'll link to my keywords and use files so you may use them as well if you wish. Please keep in mind that my use file is for my personal setup (static kernel, vxlan/linuxbridge and postgresql).

September 27, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
We do not ship SELinux sandbox (September 27, 2016, 18:47 UTC)

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to allow applications most standard privileges needed for interacting with the user (so that the user can of course still use the application) but removes many permissions that might be abused to either obtain information from the system, or use to try and exploit vulnerabilities to gain more privileges. It also hides a number of resources on the system through namespaces.

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not shield off the user's terminal access sufficiently. In the POC post we see that characters can be injected into the terminal through the ioctl() function (which issues the ioctl system call used for input/output operations against devices), and those characters are eventually executed as commands when the application finishes.

That's bad of course. Hence the CVE-2016-7545 registration, and of course also a possible fix has been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There's some history involved why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command called sandbox, installed through the sys-apps/sandbox application. So back in the days when we still shipped the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for the sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 came up in which the seunshare_mount function had a security issue. And in 2014, CVE-2014-3215 came up with - again - a security issue with seunshare.

At that point, I had enough of this sandbox utility. First of all, it never quite worked well on Gentoo as it is (it also requires a policy which is not part of the upstream release), and given its wide-open access approach (it was meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

None of the Gentoo SELinux users ever approached me with the question to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

September 22, 2016

mupdf is a lightweight PDF viewer and toolkit written in portable C.

Fuzzing through mutool revealed an infinite loop in gatherresourceinfo when mutool tries to get info from a crafted PDF.

The output is truncated here because the original is ~1300 lines (because of the loop):

# mutool info $FILE
[cut here]
warning: not a font dict (0 0 R)
==8763==ERROR: AddressSanitizer: stack-overflow on address 0x7ffeb34e6f6c (pc 0x7f188e685b2e bp 0x7ffeb34e7410 sp 0x7ffeb34e6ea0 T0)
    #0 0x7f188e685b2d in _IO_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1266
    #1 0x7f188e6888c0 in buffered_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:2346
    #2 0x7f188e685cd4 in _IO_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1292
    #3 0x49927f in __interceptor_vfprintf /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #4 0x499352 in fprintf /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #5 0x7f188f70f03c in fz_flush_warnings /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/error.c:18:3
    #6 0x7f188f70f03c in fz_throw /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/error.c:168
    #7 0x7f188fac98d5 in pdf_parse_ind_obj /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-parse.c:565:3
    #8 0x7f188fb5fe6b in pdf_cache_object /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-xref.c:2029:13
    #9 0x7f188fb658d2 in pdf_resolve_indirect /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-xref.c:2155:12
    #10 0x7f188fbc0a0d in pdf_is_dict /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-object.c:268:2
    #11 0x53ea6a in gatherfonts /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:257:8
    #12 0x53ea6a in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:595
    #13 0x53f31b in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:603:5
    [cut here]
    #253 0x53f31b in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:603:5

SUMMARY: AddressSanitizer: stack-overflow /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1266 in _IO_vfprintf
Pages: 1
Retrieving info from pages 1-1...

Affected version:

Fixed version:

Commit fix:;h=fdf71862fe929b4560e9f632d775c50313d6ef02

This bug was discovered by Agostino Sarubbo of Gentoo.

2016-08-05: bug discovered
2016-08-05: bug reported to upstream
2016-09-22: upstream released a patch
2016-09-22: blog post about the issue

This bug was found with American Fuzzy Lop.


mupdf: mutool: stack-based buffer overflow after an infinite loop in gatherresourceinfo (pdfinfo.c)

September 11, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)
4 Weeks Left to Gentoo Miniconf (September 11, 2016, 17:50 UTC)

4 weeks are left until LinuxDays and Gentoo Miniconf 2016 in Prague.

Are you excited to see Gentoo in action? Here is something to look forward to:

  • Gentoo amd64-fbsd on an ASRock E350M1
  • Gentoo arm64 on a Raspberry Pi 3 (yes, running a 64 bit kernel and userland) with systemd
  • Gentoo mipsel on a MIPS Creator CI20 with musl libc
  • Gentoo arm on an Orange Pi PC, with Clang as system compiler


September 07, 2016

GraphicsMagick is an image processing system.

Fuzzing revealed a NULL pointer access in the TIFF parser.

The complete ASan output:

# gm identify $FILE
==19028==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fbd36dd6c3c bp 0x7ffe3c007090 sp 0x7ffe3c006d10 T0)
    #0 0x7fbd36dd6c3b in MagickStrlCpy /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:3
    #1 0x7fbd2b714c26 in ReadTIFFImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/tiff.c:2048:13
    #2 0x7fbd36ac506a in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13
    #3 0x7fbd36ac46ac in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9
    #4 0x7fbd36a165a0 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17
    #5 0x7fbd36a1affb in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17
    #6 0x7fbd36a6fee3 in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10
    #7 0x7fbd36a6eb78 in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16
    #8 0x7fbd3597761f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #9 0x4188d8 in _init (/usr/bin/gm+0x4188d8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:3 in MagickStrlCpy

Affected version:
1.3.24 (and possibly earlier versions)

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.


2016-08-17: bug discovered
2016-08-18: bug reported privately to upstream
2016-08-19: upstream released a patch
2016-09-05: upstream released 1.3.25
2016-09-07: blog post about the issue

This bug was found with American Fuzzy Lop.


graphicsmagick: NULL pointer dereference in MagickStrlCpy (utility.c)

September 06, 2016

ettercap is a comprehensive suite for man-in-the-middle attacks.

Etterlog, which is part of the package, fails to handle malformed data produced by the fuzzer and overflows a heap buffer.

Since there are three issues, I'm trimming the ASan output for brevity.

# etterlog $FILE
==10077==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60e00000dac0 at pc 0x00000047b8cf bp 0x7ffdc6850580 sp 0x7ffdc684fd30                                                      
READ of size 43780 at 0x60e00000dac0 thread T0                                                                                                                                                 
    #0 0x47b8ce in memcmp /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/            
    #1 0x50e26e in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:106:12                                                             
    #2 0x4f3d3a in create_hosts_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_analyze.c:128:7                                                              
    #3 0x4fcf0c in display_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:218:4                                                                   
    #4 0x4fcf0c in display /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:52                                                                           
    #5 0x507818 in main /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_main.c:94:4                                                                               
    #6 0x7faa8656161f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #7 0x41a408 in _start (/usr/bin/etterlog+0x41a408)                                                                                                                                         

0x60e00000dac0 is located 0 bytes to the right of 160-byte region [0x60e00000da20,0x60e00000dac0)                                                                                              
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4c1ae0 in calloc /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/                                              
    #1 0x50e215 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:99:4                                                               

==10144==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60600000efa0 at pc 0x00000047b8cf bp 0x7ffe22881090 sp 0x7ffe22880840
READ of size 256 at 0x60600000efa0 thread T0
    #0 0x47b8ce in memcmp /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #1 0x50ff6f in update_user_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:251:12
    #2 0x50e555 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:91:10
    #3 0x4f3d3a in create_hosts_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_analyze.c:128:7
    #4 0x4fcf0c in display_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:218:4
    #5 0x4fcf0c in display /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:52
    #6 0x507818 in main /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_main.c:94:4
    #7 0x7f53a781461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #8 0x41a408 in _start (/usr/bin/etterlog+0x41a408)

0x60600000efa0 is located 0 bytes to the right of 64-byte region [0x60600000ef60,0x60600000efa0)
allocated by thread T0 here:
    #0 0x4c1ae0 in calloc /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/
    #1 0x50ffea in update_user_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:256:4

==10212==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60e00000de40 at pc 0x00000047b8cf bp 0x7ffe0a6b9460 sp 0x7ffe0a6b8c10
READ of size 37636 at 0x60e00000de40 thread T0
    #0 0x47b8ce in memcmp /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #1 0x50e1a3 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:89:12
    #2 0x4f3d3a in create_hosts_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_analyze.c:128:7
    #3 0x4fcf0c in display_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:218:4
    #4 0x4fcf0c in display /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:52
    #5 0x507818 in main /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_main.c:94:4
    #6 0x7f562119261f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #7 0x41a408 in _start (/usr/bin/etterlog+0x41a408)

0x60e00000de40 is located 0 bytes to the right of 160-byte region [0x60e00000dda0,0x60e00000de40)
allocated by thread T0 here:
    #0 0x4c1ae0 in calloc /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/
    #1 0x50e215 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:99:4

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.


2016-08-10: bug discovered
2016-08-11: bug reported to upstream
2016-09-06: blog post about the issue

This bug was found with American Fuzzy Lop.


ettercap: etterlog: multiple (three) heap-based buffer overflow (el_profiles.c)

Greg KH a.k.a. gregkh (homepage, bugs)
4.9 == next LTS kernel (September 06, 2016, 07:59 UTC)

As I briefly mentioned a few weeks ago on my G+ page, the plan is for the 4.9 Linux kernel release to be the next “Long Term Supported” (LTS) kernel.

Last year, at the Linux Kernel Summit, we discussed just how to pick the LTS kernel. Many years ago, we tried to let everyone know ahead of time what the kernel version would be, but that caused a lot of problems as people threw crud in there that really wasn’t ready to be merged, just to make it easier for their “day job”. That was many years ago, and people insist they aren’t going to do this again, so let’s see what happens.

I reserve the right to not pick 4.9 and support it for two years, if it’s a major pain because people abused this notice. If so, I’ll possibly drop back to 4.8, or just wait for 4.10 to be released. I’ll let everyone know by updating the releases page when it’s time (many months from now.)

If people have questions about this, email me and I will be glad to discuss it.

September 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
RethinkDB on Gentoo Linux (September 02, 2016, 06:56 UTC)


It was about time I added a new package to portage, and I'm very glad it's RethinkDB and its Python driver!

  • dev-db/rethinkdb
  • dev-python/python-rethinkdb

For those of you who have never heard about this database, I urge you to visit their excellent website and have a good read.

Packaging RethinkDB

RethinkDB has been under my radar for quite a long time now and when they finally got serious enough about high availability I also got serious about using it at work… and obviously “getting serious” + “work” means packaging it for Gentoo Linux :)

Quick notes on packaging for Gentoo Linux:

  • This is a C++ project, so it feels natural and easy to grasp
  • The configure script already offers a way of using system libraries instead of the bundled ones, which is in line with Gentoo’s QA policy
  • The only grey zone in the above is the web UI, which is shipped precompiled

RethinkDB has a few QA violations which the ebuild is addressing by modifying the sources:

  • There is a configure.default file that tries to force some configure options
  • The configure script is missing options to avoid auto-installing some docs and init scripts
  • The build system does its best to guess the CXX compiler, but it should offer an option to set it directly
  • The build system does not respect the user’s CXXFLAGS and tries to force -O3

Getting our hands into RethinkDB

At work, we finally found the excuse to get our hands into RethinkDB when we challenged ourselves with developing a quiz game for our booth as a sponsor of EuroPython 2016.

It was a simple game where you were presented a question and four possible answers, and you had 60 seconds to answer as many of them as you could. The trick is that we wanted to create an interactive game where the participant had to play on a tablet, while the rest of the audience got to see who was currently playing and follow their score progression, plus their ranking for the day and the week, in real time on another screen!

Another challenge for us in the creation of this game was that we only used technologies that were new to us, and we even switched jobs so the backend Python guys would be doing the frontend JavaScript and vice versa. The final stack went like this:

  • Game quiz frontend: Angular2 (TypeScript)
  • Game questions API: Go
  • Real time scores frontend: Angular2 + autobahn
  • Real time scores API: Python 3.5 asyncio + autobahn
  • Database: RethinkDB

As you can see from the stack, we chose RethinkDB for its main strength: real-time updates pushed to the connected clients. The real time scores frontend and API were bonded together using autobahn, while the API consumed the changefeeds (real-time updates coming from the database) and broadcast them to the frontend.

What we learnt about RethinkDB

  • We’re sure that we want to use it in production!
  • The ReQL query language is a pipeline, so its syntax is quite tricky to get familiar with (even more so when coming from MongoDB like us); it is as powerful as it can be disconcerting
  • Realtime changefeeds have limitations which are sometimes not so easy to understand or discover (especially the order_by / secondary index part)
  • Changefeed limitations are a constraint you have to take into account in your data modeling!
  • Changefeeds + order_by can do the ordering for you when using the include_offsets option; this is amazing
  • The administration web UI is awesome
  • Proper Python 3.5 asyncio support is still not merged; this is a pain!

Try it out

Now that you can emerge rethinkdb, I encourage you to try this awesome database.

Be advised that the ebuild also provides a way of configuring your rethinkdb instance by running emerge --config dev-db/rethinkdb!

I’ll now try to get in touch with upstream to get Gentoo listed on their website.

August 29, 2016
potrace: memory allocation failure (August 29, 2016, 17:23 UTC)

potrace is a utility that transforms bitmaps into vector graphics.

A crafted image, found through fuzz testing, causes a memory allocation to fail.

Since the ASan symbolizer didn’t start up correctly, I’m reporting only what it prints at the end.

# potrace $FILE
potrace: warning: 2.hangs: premature end of file
==13660==ERROR: AddressSanitizer failed to allocate 0x200003000 (8589946880) bytes of LargeMmapAllocator (error code: 12)
==13660==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/ "((0 && "unable to mmap")) != (0)" (0x0, 0x0)

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.


2016-08-26: bug discovered
2016-08-27: bug reported privately to upstream
2016-08-29: blog post about the issue

This bug was found with American Fuzzy Lop.


potrace: memory allocation failure

potrace is a utility that transforms bitmaps into vector graphics.

A crafted image, found through fuzz testing, revealed a NULL pointer access.

The complete ASan output:

# potrace $FILE
potrace: warning: 48.crashes: premature end of file                                                                                                                                            
==13940==ERROR: AddressSanitizer: SEGV on unknown address 0x7fd7b865b800 (pc 0x7fd7ec5bcbf4 bp 0x7fff9ebad590 sp 0x7fff9ebad360 T0)                                                            
    #0 0x7fd7ec5bcbf3 in findnext /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/decompose.c:436:11                                                                             
    #1 0x7fd7ec5bcbf3 in getenv /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/decompose.c:478                                                                                  
    #2 0x7fd7ec5c3ed9 in potrace_trace /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/potracelib.c:76:7                                                                         
    #3 0x4fea6e in process_file /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/main.c:1102:10                                                                                   
    #4 0x4f872b in main /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/main.c:1250:7                                                                                            
    #5 0x7fd7eb4d961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #6 0x418fc8 in getenv (/usr/bin/potrace+0x418fc8)                                                                                                                                          
AddressSanitizer can not provide additional info.                                                                                                                                              
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/decompose.c:436:11 in findnext                                                                   

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.


2016-08-26: bug discovered
2016-08-27: bug reported privately to upstream
2016-08-29: blog post about the issue

This bug was found with American Fuzzy Lop.


potrace: NULL pointer dereference in findnext (decompose.c)

August 23, 2016

GraphicsMagick is an image processing system.

Fuzzing revealed two minor issues in the TIFF parser. The two issues surface at different lines in tiff.c, but the underlying problem appears to be the same.

The complete ASan output:

# gm identify $FILE
==6321==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eb12 at pc 0x7fa98ca1fcf4 bp 0x7fff957069a0 sp 0x7fff95706998                                                       
READ of size 1 at 0x60200000eb12 thread T0                                                                                                                                                     
    #0 0x7fa98ca1fcf3 in MagickStrlCpy /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:10                                                    
    #1 0x7fa98135de5a in ReadTIFFImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/tiff.c:2060:13                                                       
    #2 0x7fa98c70e06a in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13                                                     
    #3 0x7fa98c70d6ac in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9                                                      
    #4 0x7fa98c65f5a0 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17                                             
    #5 0x7fa98c663ffb in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17                                                    
    #6 0x7fa98c6b8ee3 in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10                                                 
    #7 0x7fa98c6b7b78 in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16                                                       
    #8 0x7fa98b5c061f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #9 0x4188d8 in _init (/usr/bin/gm+0x4188d8)                                                                                                                                                
0x60200000eb12 is located 0 bytes to the right of 2-byte region [0x60200000eb10,0x60200000eb12)                                                                                                
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4c01a8 in realloc /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/                                                     
    #1 0x7fa9810ebe5b in _TIFFCheckRealloc /var/tmp/portage/media-libs/tiff-4.0.6/work/tiff-4.0.6/libtiff/tif_aux.c:73

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:10 in MagickStrlCpy

# gm identify $FILE
==26025==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60300000ecf2 at pc 0x7f07a3aaab3c bp 0x7ffc558602c0 sp 0x7ffc558602b8                                                      
READ of size 1 at 0x60300000ecf2 thread T0                                                                                                                                                     
    #0 0x7f07a3aaab3b in MagickStrlCpy /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4557:7                                                     
    #1 0x7f07983e851c in ReadTIFFImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/tiff.c:2048:13                                                       
    #2 0x7f07a3797a62 in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13                                                     
    #3 0x7f07a3796f18 in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9                                                      
    #4 0x7f07a36e6648 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17                                             
    #5 0x7f07a36eb01b in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17                                                    
    #6 0x7f07a3740a3e in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10                                                 
    #7 0x7f07a373f5bb in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16                                                       
    #8 0x7f07a264961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #9 0x4188d8 in _init (/usr/bin/gm+0x4188d8)                                                                                                                                                
0x60300000ecf2 is located 0 bytes to the right of 18-byte region [0x60300000ece0,0x60300000ecf2)                                                                                               
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4bfe28 in malloc /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/                                                      
    #1 0x7f0798178fd4 in setByteArray /var/tmp/portage/media-libs/tiff-4.0.6/work/tiff-4.0.6/libtiff/tif_dir.c:51                                                                              
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4557:7 in MagickStrlCpy

Affected version:
1.3.24 (and possibly earlier versions)

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.


2016-08-17: bug discovered
2016-08-18: bug reported privately to upstream
2016-08-19: upstream released a patch
2016-08-23: blog post about the issue
2016-09-05: upstream released 1.3.25

This bug was found with American Fuzzy Lop.


graphicsmagick: two heap-based buffer overflow in ReadTIFFImage (tiff.c)

In Memory of Jonathan “avenj” Portnoy (August 23, 2016, 00:00 UTC)

The Gentoo project mourns the loss of Jonathan Portnoy, better known amongst us as Jon, or avenj.

Jon was an active member of the International Gentoo community, almost since its founding in 1999. He was still active until his last day.

His passing has struck us deeply and with disbelief. We all remember him as a vivid and enjoyable person, easy to reach out to and energetic in all his endeavors.

On behalf of the entire Gentoo Community, all over the world, we would like to convey our deepest sympathy for his family and friends. As per his wishes, the Gentoo Foundation has made a donation in his memory to the Perl Foundation.

Please join the community in remembering Jon on our forums.

August 20, 2016

Libav is an open source set of tools for audio and video processing.

A crafted file causes a stack-based buffer overflow. The ASan report may be confusing because it mentions get_bits, but the issue is in aac_sync.
This issue was discovered last year; I reported it to Luca Barbato privately and didn’t follow up on its state.
Before I made the report, the bug had been noticed by Janne Grunau because the fate test reported a failure; he then fixed it, but at that time there was no stable release that included the fix.

The complete ASan output:

~ # avconv -i $FILE -f null -
==20736==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd3bd34f4a at pc 0x7f0805611189 bp 0x7ffd3bd34e20 sp 0x7ffd3bd34e18
READ of size 4 at 0x7ffd3bd34f4a thread T0
    #0 0x7f0805611188 in get_bits /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/get_bits.h:244:5
    #1 0x7f0805611188 in avpriv_aac_parse_header /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aacadtsdec.c:58
    #2 0x7f080560f19e in aac_sync /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aac_parser.c:43:17
    #3 0x7f080560a87b in ff_aac_ac3_parse /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aac_ac3_parser.c:48:25
    #4 0x7f0806fcd8e6 in av_parser_parse2 /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/parser.c:157:13
    #5 0x7f0808efd4dd in parse_packet /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:794:15
    #6 0x7f0808edae64 in read_frame_internal /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:960:24
    #7 0x7f0808ee8783 in avformat_find_stream_info /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:2156:15
    #8 0x4f62f6 in open_input_file /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv_opt.c:726:11
    #9 0x4f474f in open_files /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv_opt.c:2127:15
    #10 0x4f3f62 in avconv_parse_options /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv_opt.c:2164:11
    #11 0x528727 in main /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv.c:2629:11
    #12 0x7f0803c83aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #13 0x43a5d6 in _start (/usr/bin/avconv+0x43a5d6)

Address 0x7ffd3bd34f4a is located in stack of thread T0 at offset 170 in frame
    #0 0x7f080560ee3f in aac_sync /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aac_parser.c:31

  This frame has 3 object(s):
    [32, 64) 'bits'
    [96, 116) 'hdr'
    [160, 168) 'tmp'
Shadow bytes around the buggy address:
  0x10002779e9e0: 00 00 04 f2 f2 f2 f2 f2 00[f3]f3 f3 00 00 00 00
  0x10002779e9f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x10002779ea00: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1                                                                                                                                                                                                              
  0x10002779ea10: 00 f2 f2 f2 04 f2 04 f3 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x10002779ea20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x10002779ea30: f1 f1 f1 f1 00 f3 f3 f3 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3                                                                                                                                                                                                                                                  
  Stack partial redzone:   f4                                                                                                                                                                                                                                                  
  Stack after return:      f5                                                                                                                                                                                                                                                  
  Stack use after scope:   f8                                                                                                                                                                                                                                                  
  Global redzone:          f9                                                                                                                                                                                                                                                  
  Global init order:       f6                                                                                                                                                                                                                                                  
  Poisoned by user:        f7                                                                                                                                                                                                                                                  
  Container overflow:      fc                                                                                                                                                                                                                                                  
  Array cookie:            ac                                                                                                                                                                                                                                                  
  Intra object redzone:    bb                                                                                                                                                                                                                                                  
  ASan internal:           fe                                                                                                                                                                                                                                                  
  Left alloca redzone:     ca                                                                                                                                                                                                                                                  
  Right alloca redzone:    cb                                                                                                                                                                                                                                                  

Affected version:
11.3 (and possibly earlier versions)

Fixed version:

Commit fix:;a=commit;h=fb1473080223a634b8ac2cca48a632d037a0a69d

This bug was discovered by Agostino Sarubbo of Gentoo.
This bug was also discovered by Janne Grunau.


2015-07-27: bug discovered
2015-07-28: bug reported privately to upstream
2016-08-20: blog post about the issue

This bug was found with American Fuzzy Lop.
This bug does not affect ffmpeg.
The same fix was also applied to another, similar part of the code in the ac3_parser.c file.


libav: stack-based buffer overflow in aac_sync (aac_parser.c)

August 18, 2016
Events: FrOSCon 11 (August 18, 2016, 00:00 UTC)

This weekend, the University of Applied Sciences Bonn-Rhein-Sieg will host the Free and Open Source Software Conference, better known as FrOSCon. Gentoo will be present there on 20 and 21 August with a chance for you to meet devs and other users, grab merchandise, and compile your own Gentoo button badges.

See you there!

August 14, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)
Gentoo Miniconf 2016 Call for Papers closed (August 14, 2016, 21:08 UTC)

The Call for Papers for the Gentoo Miniconf is now closed and the acceptance notices have been sent out.
Missed the deadline? Don’t despair, the LinuxDays CfP is still open and you can still submit talk proposals there until the end of August.

August 08, 2016

potrace is a utility that transforms bitmaps into vector graphics.

A crafted BMP image, found through fuzz testing, revealed three NULL pointer accesses.

The complete ASan output:

==13806==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004f027c bp 0x7ffd8442c190 sp 0x7ffd8442bfc0 T0)
    #0 0x4f027b in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717:4
    #1 0x4f027b in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f2f77104aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717 bm_readbody_bmp

==13812==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004f0958 bp 0x7ffd1e689a50 sp 0x7ffd1e689880 T0)
    #0 0x4f0957 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744:4
    #1 0x4f0957 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7fbc3b936aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744 bm_readbody_bmp

==13885==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004f10b8 bp 0x7ffdf745fff0 sp 0x7ffdf745fe20 T0)
    #0 0x4f10b7 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651:11
    #1 0x4f10b7 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7fc675763aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651 bm_readbody_bmp

Affected version:

Fixed version:

Commit fix:
There is no public git/svn repository. If you need the individual patches, feel free to ask.

This bug was discovered by Agostino Sarubbo of Gentoo.


2015-07-04: bug discovered
2015-07-05: bug reported privately to upstream
2015-10-22: upstream released 1.13
2016-08-08: blog post about the issue

This bug was found with American Fuzzy Lop.


potrace: multiple (three) NULL pointer dereferences in bm_readbody_bmp (bitmap_io.c)

potrace: divide-by-zero in bm_new (bitmap.h) (August 08, 2016, 14:51 UTC)

potrace is a utility that transforms bitmaps into vector graphics.

A crafted BMP image, found through fuzz testing, revealed a division by zero.

The complete ASan output:

# potrace $FILE.bmp
==25102==ERROR: AddressSanitizer: FPE on unknown address 0x000000508d52 (pc 0x000000508d52 bp 0x7ffc381edff0 sp 0x7ffc381ede20 T0)
    #0 0x508d51 in bm_new /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap.h:63:24
    #1 0x508d51 in bm_readbody_bmp /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:548
    #2 0x508d51 in bm_read /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #3 0x4fe12d in process_file /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #4 0x4f82af in main /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #5 0x7f8d6729e61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #6 0x419018 in getenv (/usr/bin/potrace+0x419018)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap.h:63:24 in bm_new

Affected version:

Fixed version:

Commit fix:
There is no public git/svn repository. If you need the individual patches, feel free to ask.

This bug was discovered by Agostino Sarubbo of Gentoo.


2015-07-04: bug discovered
2015-07-05: bug reported privately to upstream
2015-10-22: upstream released 1.13
2016-08-08: blog post about the issue

This bug was found with American Fuzzy Lop.


potrace: divide-by-zero in bm_new (bitmap.h)

potrace is a utility that transforms bitmaps into vector graphics.

A crafted BMP image, found through fuzz testing, revealed six heap-based buffer overflows.

To keep this post from getting too long, I have trimmed the ASan output to the relevant traces.

==13565==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61100000a2fc at pc 0x0000004f370a bp 0x7ffd81d22f90 sp 0x7ffd81d22f88
READ of size 4 at 0x61100000a2fc thread T0
    #0 0x4f3709 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717:4
    #1 0x4f3709 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f9a1c8f4aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717 bm_readbody_bmp

==13663==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efe4 at pc 0x0000004f3729 bp 0x7fff07737d30 sp 0x7fff07737d28
READ of size 4 at 0x60200000efe4 thread T0
    #0 0x4f3728 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651:11
    #1 0x4f3728 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f3adde99aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651 bm_readbody_bmp

==13618==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60300000f00c at pc 0x0000004f37a9 bp 0x7ffc33e306b0 sp 0x7ffc33e306a8
READ of size 4 at 0x60300000f00c thread T0
    #0 0x4f37a8 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:652:11
    #1 0x4f37a8 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f20147f7aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:652 bm_readbody_bmp

==13624==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efe8 at pc 0x0000004f382a bp 0x7fff60b8bed0 sp 0x7fff60b8bec8
READ of size 4 at 0x60200000efe8 thread T0
    #0 0x4f3829 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:690:4
    #1 0x4f3829 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129                                                                                       
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9                                                                                    
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7                                                                                            
    #4 0x7f35633d5aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289                                                                        
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)                                                                                                                                          
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:690 bm_readbody_bmp                                                  
==13572==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000f018 at pc 0x0000004f38d5 bp 0x7ffc994b68d0 sp 0x7ffc994b68c8                                                      
READ of size 4 at 0x60200000f018 thread T0                                                                                                                                                     
    #0 0x4f38d4 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744:4                                                                             
    #1 0x4f38d4 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f11b6253aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744 bm_readbody_bmp

==13753==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efe8 at pc 0x0000004f3948 bp 0x7fff4f6df6b0 sp 0x7fff4f6df6a8
READ of size 4 at 0x60200000efe8 thread T0
    #0 0x4f3947 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:601:2
    #1 0x4f3947 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f26d3d28aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:601 bm_readbody_bmp

Affected version:

Fixed version:

Commit fix:
There is no public git/svn repository. If you need the individual patches, feel free to ask.

This bug was discovered by Agostino Sarubbo of Gentoo.


2015-07-04: bug discovered
2015-07-05: bug reported privately to upstream
2015-10-22: upstream released 1.13
2016-08-08: blog post about the issue

This bug was found with American Fuzzy Lop.


potrace: multiple (six) heap-based buffer overflows in bm_readbody_bmp (bitmap_io.c)

July 31, 2016
Zack Medico a.k.a. zmedico (homepage, bugs)

Suppose that you host a Gentoo rsync mirror on your company intranet, and you want it to gracefully handle bursts of many connections from clients, queuing connections as long as necessary for all of the clients to be served (if they don’t time out first). However, you don’t want to allow unlimited rsync processes, since that would risk overloading your server. To solve this problem, I’ve created socket-burst-dampener, an inetd-like daemon for handling bursts of connections.

It’s a very simple program, which only takes command-line arguments (no configuration file). For example:

socket-burst-dampener 873 \
--backlog 8192 --processes 128 --load-average 8 \
-- rsync --daemon

This will allow up to 128 concurrent rsync processes, while automatically backing off on processes if the load average exceeds 8. Meanwhile, the --backlog 8192 setting means that the kernel will queue up to 8192 connections (until they are served or they time out). You need to adjust the net.core.somaxconn sysctl in order for the kernel to queue that many connections, since net.core.somaxconn defaults to 128 connections (cat /proc/sys/net/core/somaxconn).
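A persistent way to raise that limit is a sysctl configuration fragment; the file name below is my own choice, any file under /etc/sysctl.d/ works:

```
# /etc/sysctl.d/90-rsync-burst.conf
# Raise the kernel accept-queue limit so --backlog 8192 can take effect;
# apply immediately with: sysctl -w net.core.somaxconn=8192
net.core.somaxconn = 8192
```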

July 18, 2016
Sebastian Pipping a.k.a. sping (homepage, bugs)
Gimp 2.9.4 now in Gentoo (July 18, 2016, 16:24 UTC)

Hi there!

Just a quick heads up that Gimp 2.9.4 is now available in Gentoo.

Upstream has an article on what’s new with Gimp 2.9.4: GIMP 2.9.4 Released

Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)

The call for papers for the Gentoo Miniconf 2016 (8th+9th October 2016 in Prague) will close in two weeks, on 1 August 2016. Time to get your submission ready!

July 04, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)

After quite a bit of hard work, I've at long last launched WireGuard, a secure network tunnel that uses modern crypto, is extremely fast, and is easy and pleasurable to use. You can read about it at the website, but in short, it's based on the simple idea of an association between public keys and permitted IP addresses. Along the way it uses some nice crypto tricks to achieve its goal. For performance it lives in the kernel, though cross-platform versions in safe languages like Rust, Go, etc. are on their way.

The launch was wildly successful. About 10 minutes after I ran /etc/init.d/nginx restart, somebody had already put it on Hacker News and the Twittersphere, and within 24 hours the site had seen 150,000 unique IPs. The reception has been very warm, and the mailing list has already started to get some patches. Distro maintainers have stepped up and packages are being prepared. There are currently packages for Gentoo, Arch, Debian, and OpenWRT, which is very exciting.

Although it's still experimental and not yet in final stable/secure form, I'd be interested in general feedback from experimenters and testers.

June 25, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.0 (June 25, 2016, 15:31 UTC)

Oh boy, this new version is so amazing in terms of improvements and contributions that it’s hard to sum it up !

Before going into more explanations I want to dedicate this release to tobes whose contributions, hard work and patience have permitted this ambitious 3.0 : THANK YOU !

This is the graph of contributed commits since 2.9, just so you realise how much this version is thanks to him:

I can’t continue on without also thanking Horgix, who started this madness by splitting the code base into modular files, and pydsigner for his everlasting contributions and code reviews !

The git stat since 2.9 also speaks for itself:

 73 files changed, 7600 insertions(+), 3406 deletions(-)

So what’s new ?

  • the monolithic code base has been split into modules, each responsible for a given task py3status performs
  • major improvements on modules output orchestration and execution resulting in considerable CPU consumption reduction and i3bar responsiveness
  • refactoring of user notifications with added dbus support and rate limiting
  • improved modules error reporting
  • py3status can now survive an i3status crash and will try to respawn it
  • a new ‘container’ module output type gives the ability to group modules together
  • refactoring of the time and tztime modules brings support for all the time macros (%d, %Z etc)
  • support for stopping py3status and its modules when i3bar hide mode is used
  • refactoring of general, contribution and most noticeably modules documentation
  • more details in the full changelog


Along with a cool list of improvements on the existing modules, these are the new modules:

  • new group module to cycle display of several modules (check it out, it’s insanely handy !)
  • new fedora_updates module to check for your Fedora packages updates
  • new github module to check a github repository and notifications
  • new graphite module to check metrics from graphite
  • new insync module to check your current insync status
  • new timer module to have a simple countdown displayed
  • new twitch_streaming module to check if a Twitch streamer is online
  • new vpn_status module to check your VPN status
  • new xrandr_rotate module to rotate your screens
  • new yandexdisk_status module to display Yandex.Disk status


And of course thank you to all the others who made this version possible !

  • @egeskow
  • Alex Caswell
  • Johannes Karoff
  • Joshua Pratt
  • Maxim Baz
  • Nathan Smith
  • Themistokle Benetatos
  • Vladimir Potapev
  • Yongming Lai

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Time to retire (June 25, 2016, 07:08 UTC)

I’m sad to say it’s the end of the road for me with Gentoo, after 13 years volunteering my time (my “anniversary” is tomorrow). My time and motivation to commit to Gentoo have steadily declined over the past couple of years and eventually stopped entirely. It was an enormous part of my life for more than a decade, and I’m very grateful to everyone I’ve worked with over the years.

My last major involvement was running our participation in the Google Summer of Code, which is now fully handed off to others. Prior to that, I was involved in many things from migrating our X11 packages through the Big Modularization and maintaining nearly 400 packages to serving 6 terms on the council and as desktop manager in the pre-council days. I spent a long time trying to change and modernize our distro and culture. Some parts worked better than others, but the inertia I had to fight along the way was enormous.

No doubt I’ve got some packages floating around that need reassignment, and my retirement bug is already in progress.

Thanks, folks. You can reach me by email using my nick at this domain, or on Twitter, if you’d like to keep in touch.

Tagged: gentoo

June 23, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

I think there has been more than enough news on nextcloud’s origins, the fork from owncloud, … so I will keep this post to the technical bits.

Installing nextcloud on Gentoo

For now, the differences between owncloud 9 and nextcloud 9 are mostly cosmetic, so a few quick edits to the owncloud ebuild resulted in a working nextcloud one. And thanks to the different package names, you can install both in parallel (as long as they do not use the same database, of course).

So if you want to test nextcloud, it’s just a command away:

# emerge -a nextcloud

With the default webapp parameters, it will install alongside owncloud.

Migrating owncloud data

Nothing official again here, but as I run a small instance (not that much data) with simple sqlite backend, I could copy the data and configuration to nextcloud and test it while keeping owncloud.
Adapt the paths and web user to your setup: I have these webapps in the default /var/www/localhost/htdocs/ path, with www-server as the web server user.

First, clean the test data and configuration (if you have logged in to nextcloud):

# rm /var/www/localhost/htdocs/nextcloud/config/config.php
# rm -r /var/www/localhost/htdocs/nextcloud/data/*

Then clone owncloud’s data and config. If you feel adventurous (or short on available disk space), you can move these files instead of copying them:

# cp -a /var/www/localhost/htdocs/owncloud/data/* /var/www/localhost/htdocs/nextcloud/data/
# cp -a /var/www/localhost/htdocs/owncloud/config/config.php /var/www/localhost/htdocs/nextcloud/config/

Change all owncloud occurrences in config.php to nextcloud (there should be only one, for ‘datadirectory’). Then run the (nextcloud) updater. You can do it via the web interface, or (safer) with the CLI occ tool:

# sudo -u www-server php /var/www/localhost/htdocs/nextcloud/occ upgrade
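The single config.php substitution can also be scripted with sed; here is a sketch of the expression on a sample config line (on the real file, you would add -i.bak and point it at nextcloud’s config.php):

```shell
# Same substitution as the manual edit, shown on a sample line.
printf "'datadirectory' => '/var/www/localhost/htdocs/owncloud/data',\n" \
  | sed "s|htdocs/owncloud|htdocs/nextcloud|"
```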

As with “standard” owncloud upgrades, you will have to reactivate additional plugins after logging in. Also check the nextcloud log for potential warnings and errors.

In my test, the only non-official plugin that I use, files_reader (for ebooks), installed fine in nextcloud, and the rest worked just as well as owncloud, with a lighter default theme 🙂
For now, owncloud-client works if you point it to the new /nextcloud URL on your server, but this can (and probably will) change in the future.

More migration tips and news can be found in this nextcloud forum post, including some nice detailed backup steps for mysql-backed systems migration.

June 22, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

In my previous post I described a number of pitfalls regarding Gentoo dependency specifications. However, I missed a minor point: the correctness of various dependency types in specific dependency classes. I am going to address this in this short post.

There are three classes of dependencies in Gentoo: build-time dependencies that are installed before the source build happens, runtime dependencies that should be installed before the package is installed to the live system, and ‘post’ dependencies, which are pretty much runtime dependencies whose installation can be delayed if necessary to avoid dependency loops. Now, there are some fun relationships between dependency classes and dependency types.


Blockers are the dependencies used to prevent a specific package from being installed, or to force its uninstall. In modern EAPIs, there are two kinds of blockers: weak blockers (single !) and strong blockers (!!).

Weak blockers indicate that if the blocked package is installed, its uninstall may be delayed until the blocking package is installed. This is mostly used to solve file collisions between two packages — e.g. it allows the package manager to replace colliding files, then unmerge remaining files of the blocked package. It can also be used if the blocked package causes runtime issues on the blocking package.

Weak blockers make sense only in RDEPEND. While they’re technically allowed in DEPEND (making it possible for DEPEND=${RDEPEND} assignment to be valid), they are not really meaningful in DEPEND alone. That’s because weak blockers can be delayed post build, and therefore may not influence the build environment at all. In turn, after the build is done, build dependencies are no longer used, and unmerging the blocker does not make sense anymore.

Strong blockers indicate that the blocked package must be uninstalled before the dependency specification is considered satisfied. Therefore, they are meaningful both for build-time dependencies (where they indicate the blocker must be uninstalled before source build starts) and for runtime dependencies (where they indicate it must be uninstalled before install starts).

This leaves PDEPEND which is a bit unclear. Again, technically both blocker types are valid. However, weak blockers in PDEPEND would be pretty much equivalent to those in RDEPEND, so there is no reason to use that class. Strong blockers in PDEPEND would logically be equivalent to weak blockers — since the satisfaction of this dependency class can be delayed post install.
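To make the distinction concrete, a sketch with hypothetical package names:

```
# Weak blocker: the old package may be unmerged after the new one is
# installed (the file-collision case); belongs in RDEPEND.
RDEPEND="!app-misc/foo-legacy"

# Strong blocker: the blocked package must be gone before the source
# build starts; meaningful in DEPEND.
DEPEND="!!app-misc/foo-headers-conflict"
```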

Any-of dependencies and :* slot operator

This is just going to be a short reminder: those types of dependencies are valid in all dependency classes, but no binding between those occurrences is provided.

An any-of dependency in DEPEND indicates that at least one of the packages will be installed before the build starts. An any-of dependency in RDEPEND (or PDEPEND) indicates that at least one of them will be installed at runtime. There is no guarantee that the dependency used to satisfy DEPEND will be the same as the one used to satisfy RDEPEND, and the latter remains fully satisfied when one of the listed packages is replaced by another.

A similar case occurs for :* operator — only that slots are used instead of separate packages.

:= slot operator

Now, the ‘equals’ slot operator is a fun one. Technically, it is valid in all dependency classes — for the simple reason of DEPEND=${RDEPEND}. However, it does not make sense in DEPEND alone, as it is used to force rebuilds of an installed package, while build-time dependencies apply only during the build.

The fun part is that for the := slot operator to be valid, the matching package needs to be installed when the metadata for the package is recorded — i.e. when a binary package is created or the built package is installed from source. For this to happen, a dependency guaranteeing this must be in DEPEND.

So, the common rule would be that a package dependency using := operator would have to be both in RDEPEND and DEPEND. However, strictly speaking the dependencies can be different as long as a package matching the specification from RDEPEND is guaranteed by DEPEND.
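The common rule above translates to a pattern like the following (the package name is illustrative):

```
# := records the slot/sub-slot of the installed library when the package
# is built; the DEPEND assignment guarantees it is present at that time.
RDEPEND="dev-libs/libfoo:="
DEPEND="${RDEPEND}"
```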

June 21, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

During my work on Gentoo, I have seen many types of dependency pitfalls that developers fell into. Sad to say, their number is increasing with new EAPI features — we keep adding new ways to fail rather than working on simplifying things. I can’t say the learning curve is getting much steeper, but it is considerably easier to make a mistake.

In this article, I would like to point out a few common misunderstandings and pitfalls regarding slots, slot operators and any-of (|| ()) deps. All of those constructs are used to express dependencies that can usually be satisfied by multiple packages or package versions installable in parallel, and missing this point is often the cause of trouble.

Separate package dependencies are not combined into a single slot

One of the most common mistakes is to assume that multiple package dependency specifications listed in one package are going to be combined somehow. However, there is no such guarantee, and when a package becomes slotted this fact actually becomes significant.

Of course, some package managers take various precautions to prevent the following issues. However, such precautions not only cannot be relied upon but may also violate the PMS.

For example, consider the following dependency specification:

>=app-misc/foo-2
<app-misc/foo-5
It is a common way of expressing version ranges (in this case, versions 2*, 3* and 4* are acceptable). However, if app-misc/foo is slotted and there are versions satisfying the dependencies in different slots, there is no guarantee that the dependency could not be satisfied by installing foo-1 (satisfies <foo-5) and foo-6 (satisfies >=foo-2) in two slots!

Similarly, consider:

app-misc/foo
bar? ( app-misc/foo[baz] )

This one is often used to apply multiple sets of USE flags to a single package. Once again, if the package is slotted, there is no guarantee that the dependency specifications will not be satisfied by installing two slots with different USE flag configurations.

However, those problems mostly apply to fully slotted packages such as sys-libs/db where multiple slots are actually meaningfully usable by a package. With the more common use of multiple slots to provide incompatible versions of the package (e.g. binary compatibility slots), there is a more important problem: that even a single package dependency can match the wrong slot.

For non-truly multi-slotted packages, the solution to all those problems is simple: always specify the correct slot. For truly multi-slotted packages, there is no easy solution.
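For example, the earlier conditional USE dependency can be pinned down by naming the slot explicitly (slot 0 here is an assumption):

```
app-misc/foo:0
bar? ( app-misc/foo:0[baz] )
```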

For example, a version range has to be expressed using an any-of dep:

|| (
    app-misc/foo:4
    app-misc/foo:3
    app-misc/foo:2
)

Multiple sets of USE flags? Well, if you really insist, you can combine them for each matching slot separately…

|| (
    ( sys-libs/db:5.3 tools? ( sys-libs/db:5.3[cxx] ) )
    ( sys-libs/db:5.1 tools? ( sys-libs/db:5.1[cxx] ) )
)

The ‘equals’ slot operator and multiple slots

A similar problem applies to the use of the EAPI 5 ‘equals’ slot operator. The PMS notes that:

Indicates that any slot value is acceptable. In addition, for runtime dependencies, indicates that the package will break unless a matching package with slot and sub-slot equal to the slot and sub-slot of the best installed version at the time the package was installed is available.


To implement the equals slot operator, the package manager will need to store the slot/sub-slot pair of the best installed version of the matching package. […]

PMS, Slot Dependencies

The significant part is that the slot and subslot are recorded for the best package version matched by the specification containing the operator. So again, if the operator is used in multiple dependencies that can match multiple slots, multiple slots can actually be recorded.

Again, this becomes really significant in truly slotted packages:

|| (
    sys-libs/db:5.3
    sys-libs/db:5.1
)
sys-libs/db:=

While one may expect the code to record the slot of sys-libs/db used by the package, this may actually record any newer version that is installed while the package is being built. In other words, this may implicitly bind to db-6* (and pull it in too).

For this to work, you need to ensure that the dependency with the slot operator can not match any version newer than the two requested:

|| (
    sys-libs/db:5.3
    sys-libs/db:5.1
)
<sys-libs/db-5.4:=

In this case, the dependency with the operator could still match earlier versions. However, the other dependency enforces (as long as it’s in DEPEND) that at least one of the two versions specified is installed at build-time, and therefore is used by the operator as the best version matching it.

The above block can easily be extended by a single set of USE dependencies (being applied to all the package dependencies including the one with slot operator). For multiple conditional sets of USE dependencies, finding a correct solution becomes harder…
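For instance, a single set of USE dependencies could be applied to every branch like this (the cxx flag and the db versions are illustrative):

```
|| (
    sys-libs/db:5.3[cxx]
    sys-libs/db:5.1[cxx]
)
<sys-libs/db-5.4:=[cxx]
```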

The meaning of any-of dependencies

Since I have already started using any-of dependencies in the examples, I should point out yet another problem. Many Gentoo developers do not understand how any-of dependencies work, and make wrong assumptions about them.

In an any-of group, at least one immediate child element must be matched. A blocker is considered to be matched if its associated package dependency specification is not matched.

PMS, 8.2.3 Any-of Dependency Specifications

So, PMS guarantees that if at least one of the immediate child elements (package dependencies, nested blocks) of the any-of block is matched, the dependency is considered satisfied. This is the only guarantee PMS gives you. The two common mistakes are to assume that the order is significant and that any kind of binding between packages installed at build time and at run time is provided.

Consider an any-of dependency specification like the following:

|| (
    app-misc/A
    app-misc/B
    app-misc/C
)

In this case, it is guaranteed that at least one of the listed packages is installed at the point appropriate for the dependency class. If none of the packages are installed already, it is customary to assume the Package Manager will prefer the first one — while this is not specified and may depend on satisfiability of the dependencies, it is a reasonable assumption to make.

If multiple packages are installed, it is undefined which one is actually going to be used. In fact, the package may even provide the user with an explicit run-time choice of the dependency used, or use several of them. Assuming that A will be preferred over B, and B over C, is simply wrong.

Furthermore, if one of the packages is uninstalled, while one of the remaining ones is either already installed or being installed, the dependency is still considered satisfied. It is wrong to assume that in any case the Package Manager will bind to the package used at install time, or cause rebuilds when switching between the packages.

The ‘equals’ slot operator in any-of dependencies

Finally, I am reaching the point of lately recurring debates. Let me make it clear: our current policy states that under no circumstances may := appear anywhere inside any-of dependency blocks.

Why? Because it is meaningless, it is contradictory. It is not even undefined behavior, it is a case where requirements put for the slot operator can not be satisfied. To explain this, let me recall the points made in the preceding sections.

First of all, the implementation of the ‘equals’ slot operator requires the Package Manager to explicitly bind the slot/subslot of the dependency to the installed version. This can only happen if the dependency is installed — and an any-of block only guarantees that one of them will actually be installed. Therefore, an any-of block may trigger a case when PMS-enforced requirements can not be satisfied.

Secondly, the definition of an any-of block allows replacing one of the installed packages with another at run time, while the slot operator disallows changing the slot/subslot of one of the packages. The two requested behaviors are contradictory and do not make sense. Why bind to a specific version of one package, while any version of the other package is allowed?

Thirdly, the definition of an any-of block does not specify any particular order/preference of packages. If the listed packages do not block one another, you could end up having multiple of them installed, and bound to specific slots/subslots. Therefore, the Package Manager should allow you to replace A:1 with B:2 but not with B:1 nor with A:2. We’re reaching insanity now.

Now, all of the above is purely theoretical. The Package Manager can do pretty much anything given invalid input, and that is why many developers wrongly assume that slot operators work inside any-of. The truth is: they do not, the developer just did not test all the cases correctly. The Portage behavior varies from allowing replacements with no rebuilds, to requiring two mutually exclusive packages to be installed simultaneously.

June 19, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

Since GLEP 67 was approved, bug assignment became easier. However, many metadata.xml files still made it suboptimal. Today, I have fixed most of them, and I would like to provide this short guide on how to write good metadata.xml files.

The bug assignment procedure

To understand the points that I am going to make, let’s take a look at how bug assignment happens these days. Assuming a typical case of bug related to a specific package (or multiple packages), the procedure for assigning the bug involves, for each package:

  1. reading all <maintainer/> elements from the package’s metadata.xml file, in order;
  2. filtering the maintainers based on restrict="" attributes (if any);
  3. filtering and reordering the maintainers based on <description/>s;
  4. assigning the bug to the first maintainer left, and CC-ing the remaining ones.

I think the procedure is quite clear. Since we no longer have <herd/> elements with special meaning applied to them, the assignment is mostly influenced by maintainer occurrence order. Restrictions can be used to limit maintenance to specific versions of a package, and descriptions to apply special rules and conditions.

Now, for semi-automatic bug assignment, only the first or the first two of the above steps can be clearly automated. Applying restrictions correctly requires understanding whether the bug can be correctly isolated to a specific version range, as some bugs (e.g. invalid metadata) may require being fixed in multiple versions of the package. Descriptions, in turn, are written for humans and require a human to interpret them.
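The automatable first two steps could be sketched in a few lines of Python. The restrict handling below is a crude simplification (a real tool would match the restriction against the affected versions), and all addresses are hypothetical:

```python
# Sketch: list maintainer e-mails from metadata.xml in file order,
# crudely skipping version-restricted maintainers (step 2).
import xml.etree.ElementTree as ET

def assignable_maintainers(xml_text):
    root = ET.fromstring(xml_text)
    emails = []
    for m in root.findall("maintainer"):
        if m.get("restrict"):  # version-restricted; needs a human decision
            continue
        emails.append(m.findtext("email"))
    return emails

example = """<pkgmetadata>
  <maintainer type="person">
    <email>larry@example.org</email>
  </maintainer>
  <maintainer type="person" restrict="&lt;app-misc/foo-2">
    <email>oldver@example.org</email>
  </maintainer>
  <maintainer type="project">
    <email>proxy-maint@example.org</email>
  </maintainer>
</pkgmetadata>"""

# The first remaining maintainer gets the assignment; the rest are CC-ed.
assignee, *cc = assignable_maintainers(example)
```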

What belongs in a good description

Now, many existing metadata.xml files had either useless or even problematic maintainer descriptions. This is a problem since it increases the time needed for bug assignment, and makes automation harder. Common examples of bad maintainer descriptions include:

  1. Assign bugs to him; CC him on bugs — this is either redundant or contradictory. Ensure that maintainers are listed in correct order, and bugs will be assigned correctly. Those descriptions only force a human to read them and possibly change the automatically determined order.
  2. Primary maintainer; proxied maintainer — this is some information but it does not change anything. If the maintainer comes first, he’s obviously the primary one. If the maintainer has a non-Gentoo e-mail address and there are proxies listed, he’s obviously proxied. And even if we did not know that, does it change anything? Again, we are forced to read information we do not need.

Good maintainer descriptions include:

  1. Upstream; CC on bugs concerning upstream, Only CC on bugs that involve USE="d3d9" — useful information that influences bug assignment;
  2. Feel free to fix/update, All modifications to this package must be approved by the wxwidgets herd. — important information for other developers.

So, before adding another description, please answer two questions: will the information benefit anyone? Can’t it be expressed in machine-readable form?

Proxy-maintained packages

Since a lot of the affected packages are maintained by proxied maintainers, I’d like to explicitly point out how proxy-maintained packages are to be described. This overlaps with the current Proxy maintainers team policy.

For proxy-maintained packages, the maintainers should be listed in the following order:

  1. actual package maintainers, in appropriate order — including developers maintaining or co-maintaining the package, proxied maintainers and Gentoo projects;
  2. developer proxies, preferably described as such — i.e. developers who do not actively maintain the package but only proxy for the maintainers;
  3. Proxy-maintainers project — serving as the generic fallback proxy.

I would like to put more emphasis on the key point here — the maintainers should be listed in an order making it clearly possible to distinguish packages that are maintained only by a proxied maintainer (with developers acting as proxies) from packages that are maintained by Gentoo developers and co-maintained by a proxied maintainer.
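As an illustration, a package maintained solely by a proxied maintainer could order its maintainers like this (all names and addresses are hypothetical):

```xml
<!-- hypothetical metadata.xml excerpt for a proxy-maintained package -->
<pkgmetadata>
  <maintainer type="person">
    <email>contributor@example.org</email>
    <name>Proxied Maintainer</name>
  </maintainer>
  <maintainer type="person">
    <email>developer@gentoo.org</email>
    <description>proxy only; does not actively maintain the package</description>
  </maintainer>
  <maintainer type="project">
    <email>proxy-maint@gentoo.org</email>
    <name>Proxy Maintainers</name>
  </maintainer>
</pkgmetadata>
```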

Third-party repositories (overlays)

As a last point, I would like to point out the special case of unofficial Gentoo repositories. Unlike the core repositories, metadata.xml files can not be fully trusted there. The reason for this is quite simple — many users copy (fork) packages from Gentoo along with metadata.xml files. If we were to trust those files — we would be assigning overlay bugs to Gentoo developers maintaining the original package!

For this reason, all bugs on unofficial repository packages are assigned to the repository owners.

June 17, 2016
Michał Górny a.k.a. mgorny (homepage, bugs) bug assignment UserJS (June 17, 2016, 12:48 UTC)

Since time does not permit me to write at greater length, just a short note: yesterday, I published a Gentoo Bugzilla bug assignment UserJS. When enabled, it automatically tries to find package names in the bug summary, fetches the maintainers for them (from packages.g.o) and displays them in a table with quick assignment/CC checkboxes.

Note that it’s still early work. If you find any bugs, please let me know. Patches will be welcome too, and so would some redesign, since the standard Bugzilla style applied to plain HTML looks pretty bad.

Update: now on GitHub as bug-assign-user-js

June 15, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)

Trojitá, a fast Qt IMAP e-mail client, has a shiny new release. A highlight of the 0.7 version is support for OpenPGP ("GPG") and S/MIME ("X.509") encryption -- in a read-only mode for now. Here's a short summary of the most important changes:

  • Verification of OpenPGP/GPG/S-MIME/CMS/X.509 signatures and support for decryption of these messages
  • IMAP, MIME, SMTP and general bugfixes
  • GUI tweaks and usability improvements
  • Zooming of e-mail content and improvements for vision-impaired users
  • New set of icons matching the Breeze theme
  • Reworked e-mail header display
  • This release now needs Qt 5 (5.2 or newer, 5.6 is recommended)

As usual, the code is available in our git as a "v0.7" tag. You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

The Trojitá developers

June 06, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo, Openstack and OSIC (June 06, 2016, 05:00 UTC)

What to use it for

I recently applied for, and received, an OSIC allocation to extend support for running Openstack on Gentoo. The end goal of this is to allow Gentoo to become a gated job within the Openstack test infrastructure. To do that, we need to add support for building an image that can be used.


To speed up the work on adding support for generating an Openstack infra Gentoo image, I already completed work on adding Gentoo to diskimage-builder. You can see images at


The actual work has been slow going, unfortunately; working with upstreams to add Gentoo support has tended to uncover other issues that need fixing along the way. The main thing that slowed me down, though, was the Openstack (Newton) summit. It went on at the same time, and reviews were delayed at least a week, usually two.

Since then, though, I've been able to work through some of the issues and have started testing the final image build in diskimage-builder.

More to do

The main thing left to do is to add Gentoo support to the bindep element within diskimage-builder and fix any other rough edges in other elements (if they exist). After that, Openstack Infra can start caching a Gentoo image and the real work can begin: adding Gentoo support to the Openstack Ansible project to allow for better deployments.

May 27, 2016
New Gentoo LiveDVD "Choice Edition" (May 27, 2016, 00:00 UTC)

We’re happy to announce the availability of an updated Gentoo LiveDVD. As usual, you can find it on our downloads page.

May 26, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Akonadi for e-mail needs to die (May 26, 2016, 10:48 UTC)

So, I'm officially giving up on kmail2 (i.e., the Akonadi-based version of kmail) on the last one of my PCs now. I have tried hard and put in a lot of effort to get it working. However, it costs me a significant amount of time and effort just to be able to receive and read e-mail - meaning hanging IMAP resources every few minutes, the feared "Multiple merge candidates" bug popping up again and again, and other surprise events. That is plainly not acceptable in the workplace, where I need to rely on e-mail as a means of communication. By leaving kmail2 I seem to be following many many other people... Even dedicated KDE enthusiasts that I know have by now migrated to Trojita or Thunderbird.

My conclusion after all these years, based on my personal experience, is that the usage of Akonadi for e-mail is a failed experiment. It was a nice idea in theory, and may work fine for some people. I am certain that a lot of effort has been put into improving it, I applaud the developers of both kmail and Akonadi for their tenaciousness and vision and definitely thank them for their work. Sadly, however, if something doesn't become robust and error-tolerant after over 5 (five) years of continuous development effort, the question pops up whether the initial architectural idea wasn't a bad one in the first place - in particular in terms of unhandleable complexity.

I am not sure why precisely in my case things turn out so badly. One possible candidate is the university mail server that I'm stuck with, running Novell Groupwise. I've seen rather odd behaviour in the IMAP replies in the past there. That said, there's the robustness principle for software to consider, and even if Groupwise were to do silly things, other IMAP clients seem to get along with it fine.

Recently I've heard some rumors about a new framework called Sink (or Akonadi-Next), which seems to be currently under development... I hope it'll be less fragile and less overcomplexified. The choice of name is not really that convincing, though (where did my e-mails go again?).

Now for the question and answer session...

Question: Why do you post such negative stuff? You are only discouraging our volunteers.
Answer: Because the motto of the drowned god doesn't apply to software. What is dead should better remain dead, and not suffer continuous revival efforts while users run away and the brand is damaged. Also, I'm a volunteer myself and invest a lot of time and effort into Linux. I've been seeing the resulting fallout. It likely scared off other prospective help.

Question: Have you tried restarting Akonadi? Have you tried clearing the Akonadi cache? Have you tried starting with a fresh database?
Answer: Yes. Yes. Yes. Many times. And yes to many more things. Did I mention that I spent a lot of time with that? I'll miss the akonadiconsole window. Or maybe not.

Question: Do you think kmail2 (the Akonadi-based kmail) can be saved somehow?
Answer: Maybe. One could suggest an additional agent as a replacement for the usual IMAP module. Let's call it IMAP-stupid, and mandate that it uses only a bare minimum of server features and always runs in disconnected mode... Then again, I don't know the code, and don't know if that is feasible. Also, for some people kmail2 seems to work perfectly fine.

Question: So what e-mail program will you use now?
Answer: I will use kmail. I love kmail. Precisely, I will use Pali Rohar's noakonadi fork, which is based on kdepim 4.4. It is neither perfect nor bug-free, but accesses all my e-mail accounts reliably. This is what I've been using on my home desktop all the time (never upgraded) and what I downgraded my laptop to some time ago after losing many mails.

Question: So can you recommend running this ages-old kmail1 variant?
Answer: Yes and no. Yes, because (at least in my case) it seems to get the basic job done much more reliably. Yes, because it feels a lot snappier and produces far less random surprises. No, because it is essentially unmaintained, has some bugs, and is written for KDE 4, which is slowly going away. No, because Qt5-based kmail2 has more features and does look sexier. No, because you lose the useful Akonadi integration of addressbook and calendar.
That said, here are the two bugs of kmail1 that I find most annoying right now: 1) PGP/MIME cleartext signature is broken (at random some signatures are not verified correctly and/or bad signatures are produced), and 2), only in a Qt5 / Plasma environment, attachments don't open on click anymore, but can only be saved. (Which is odd since e.g. Okular as viewer is launched but never appears on screen, and the temporary file is written but immediately disappears... need to investigate.)

Question: I have bugfixes / patches for kmail1. What should I do?
Answer: Send them!!! I'll be happy to test and forward.

Question: What will you do when Qt4 / kdelibs goes away?
Answer: Dunno. Luckily I'm involved in packaging myself. :)

May 24, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a brief call for help.

Is there anyone out there who uses a recent kmail (I'm running 16.04.1 since yesterday, before that it was the latest KDE4 release) with a Novell Groupwise IMAP server?

I'm trying hard, I really like kmail and would like to keep using it, but for me right now it's extremely unstable (to the point of being unusable) - and I suspect by now that the server IMAP implementation is at least partially to blame. In the past I've seen definitive broken server behaviour (like negative IMAP uids), the feared "Multiple merge candidates" keeps popping up again and again, and the IMAP resource becomes unresponsive every few minutes...

So any datapoints of other kmail plus Groupwise imap users would be very much appreciated.

For reference, the server here is Novell Groupwise 2014 R2, version 14.2.0 11.3.2016, build number 123013.


May 21, 2016
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
balde internals, part 1: Foundations (May 21, 2016, 15:25 UTC)

For those of you that don't know, as I never actually announced the project here, I'm working on a microframework to develop web applications in C since 2013. It is called balde, and I consider its code ready for a formal release now, despite not having all the features I planned for it. Unfortunately its documentation is not good enough yet.

I haven't worked on it for quite some time, so I don't remember how everything works and can't write proper documentation. To make this easier, I'm starting a series of posts here in this blog, describing the internals of the framework and the design decisions I made when creating it, so I can gradually remember how it works. Hopefully at the end of the series I'll be able to integrate the posts with the official documentation of the project and release it! \o/

Before the release, users willing to try balde must install it manually from Github or using my Gentoo overlay (the package is called net-libs/balde there). The previously released versions are very old and deprecated at this point.

So, I'll start by talking about the foundations of the framework. It is based on GLib, which is the base library used by Gtk+ and GNOME applications. balde uses it as a utility library, without implementing classes or relying on advanced features of the library. That's because I plan to migrate away from GLib in the future, reimplementing the required functionality in a BSD-licensed library. I have a list of functions that must be implemented to achieve this objective in the wiki, but this is not something with high priority for now.

Another important foundation of the framework is the template engine. Instead of parsing templates at runtime, balde parses templates at build time, generating C code that is compiled into the application binary. The template engine is based on a recursive-descent parser, built with a parsing expression grammar. The grammar is simple enough to be easily extended, and implements most of the features needed by a basic template engine. The template engine is implemented as a binary that reads the templates and generates the C source files. It is called balde-template-gen and will be the subject of a dedicated post in this series.

A notable deficiency of the template engine is the lack of iterators, like for and while loops. This is a side effect of another basic characteristic of balde: all the data parsed from requests and sent to responses is stored as strings in the internal structures, and all the public interfaces follow the same principle. That means that the current architecture does not allow passing a list of items to a template. And it also means that users must handle type conversions from and to strings, as needed by their applications.

Static files are also converted to C code and compiled into the application binary, but here balde just relies on GLib's GResource infrastructure. This is something that should be reworked in the future too. Integrating templates and static resources, and implementing a concept of themes, is something that I want to do as soon as possible.

To make it easier for newcomers to get started with balde, it comes with a binary that can create a skeleton project using GNU Autotools, and with basic unit test infrastructure. The binary is called balde-quickstart and will be the subject of a dedicated post here as well.

That's all for now.

In the next post I'll talk about how URL routing works.

May 16, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
How LINGUAS are thrice wrong! (May 16, 2016, 06:34 UTC)

The LINGUAS environment variable serves two purposes in Gentoo. On one hand, it’s the USE_EXPAND flag group for USE flags controlling installation of localizations. On the other, it’s a gettext-specific environment variable controlling installation of localizations in some of the build systems supporting gettext. The fun fact is, both uses are simply wrong.

Why LINGUAS as an environment variable is wrong?

Let’s start with the upstream-blessed LINGUAS environment variable. If set, it limits localization files installed by autotools+gettext-based build systems (and some more) to the subset matching specified locales. At first, it may sound like a useful feature. However, it is an implicit feature, and therefore one causing a lot of confusion for the package manager.

Long story short, in this context the package manager does not know anything about LINGUAS. It’s just a random environment variable that has some value and possibly may be saved somewhere in package metadata. However, this value can actually affect the installed files in a hardly predictable way. So, even if package managers actually added some special meaning to LINGUAS (which would be non-PMS compliant), it still would not be good enough.

What does this practically mean? It means that if I set LINGUAS to some value on my system, then most of the binary packages produced on it suddenly have files stripped, compared to non-LINGUAS builds. If I installed such a binary package on some other system, it would match the LINGUAS of the build host rather than the install host. And this means the binary packages are simply incorrect.

Even worse, any change to LINGUAS can not be propagated correctly. Even if the package manager decided to rebuild packages based on changes in LINGUAS, it has no way of knowing which locales were supported by a package, and if LINGUAS was used at all. So you end up rebuilding all installed packages, just in case.

Why LINGUAS USE flags are wrong?

So, how do we solve all those problems? Of course, we introduce explicit LINGUAS flags. This way, the developer is expected to list all supported locales in IUSE, the package manager can determine the enabled localizations and match binary packages correctly. All seems fine. Except, there are two problems.

The first problem is that it is cumbersome. Figuring out supported localizations and adding a dozen flags on a number of packages is time-consuming. What’s even worse, those flags need to be maintained once added. Which means you have to check supported localizations for changes on every version bump. Not all developers do that.

The second problem is that it is… a QA violation, most of the time. We already have quite a clear policy that USE flags are not supposed to control installation of small files with no explicit dependencies — and most of the localization files are exactly that!

Let me remind you why we have that policy. There are two reasons: rebuilds and binary packages.

Rebuilds are bad because every time you change LINGUAS, you end up rebuilding relevant packages, and those can be huge. You may think it uncommon — but just imagine you’ve finally finished building your new shiny Gentoo install, and noticed that you forgot to enable the localization. And guess what! You have to build a number of packages, again.

Binary packages are even worse since they are tied to a specific USE flag combination. If you build a binary package with specific LINGUAS, it can only be installed on hosts with exactly the same LINGUAS. While it would be trivial to strip localizations from installed binary package, you have to build a fresh one. And with dozen lingua-flags… you end up having thousands of possible binary package variants, if not more.

Why EAPI 5 makes things worse… or better?

Reusing the LINGUAS name for the USE_EXPAND group looked like a good idea. After all, the value would end up in ebuild environment for use by the build system, and in most of the affected packages, LINGUAS worked out of the box with no ebuild changes! Except that… it wasn’t really guaranteed to before EAPI 5.

In earlier EAPIs, LINGUAS could contain pretty much anything, since no special behavior was reserved for it. However, starting with EAPI 5 the package manager guarantees that it will only contain those values that correspond to enabled flags. This is a good thing, after all, since it finally makes LINGUAS work reliably. It has one side effect though.

Since LINGUAS is reduced to enabled USE flags, and enabled USE flags can only contain defined USE flags… it means that in any ebuild missing LINGUAS flags, LINGUAS should be effectively empty (yes, I know Portage does not do that currently, and it is a bug in Portage). To make things worse, this means set to an empty value rather than unset. In other words, disabling localization completely.
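The unset-versus-empty distinction can be illustrated with plain shell parameter expansion (a standalone illustration, not actual gettext build-system code):

```sh
# An *unset* LINGUAS means "no restriction: install all localizations";
# an *empty* LINGUAS means "restrict to nothing: install none".
unset LINGUAS
if [ -z "${LINGUAS+set}" ]; then
    echo "LINGUAS unset: all localizations installed"
fi

LINGUAS=""
if [ -n "${LINGUAS+set}" ] && [ -z "$LINGUAS" ]; then
    echo "LINGUAS empty: no localizations installed"
fi
```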

This way, a small implicit QA issue of implicitly affecting installed localization files turned into a bigger issue of localizations suddenly not being installed at all. Which in turn can’t be fixed without introducing a proper set of LINGUAS flags everywhere, causing other kinds of QA issues and additional maintenance burden.

What would be the good solution, again?

First of all, kill LINGUAS. Either make it unset for good (and this won’t be easy since PMS kinda implies making all PM-defined variables read-only), or disable any special behavior associated with it. Make the build system compile and install all localizations.

Then, use INSTALL_MASK. It’s designed to handle this. It strips files from installed systems while preserving them in binary packages. Which means your binary packages are now more portable, and every system you install them to will get correct localizations stripped. Isn’t that way better than rebuilding things?
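A minimal sketch of that approach in /etc/portage/make.conf (the path assumes the standard gettext install location; this simple form strips all localizations):

```
# make.conf: strip localization files from the installed system,
# while binary packages keep them intact
INSTALL_MASK="/usr/share/locale"
```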

Now, is that going to happen? I doubt it. People are rather going to focus on claiming that buggy Portage behavior was good, that QA issues are fine as long as I can strip some small files from my system in the ‘obvious’ way, that the specification should be changed to allow a corner case…

May 13, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Whew, I know that’s a long title for a post, but I wanted to make sure that I mentioned every term so that people having the same problem could readily find the post that explains what solved it for me. For some time now (ever since the 346.x series [340.76, which was the last driver that worked for me, was released on 27 January 2015]), I have had a problem with the NVIDIA Linux Display Drivers (known as nvidia-drivers in Gentoo Linux). The problem that I’ve experienced is that the newer drivers would, upon starting an X session, immediately clock up to Performance Level 2 or 3 within PowerMizer.

Before using these newer drivers, the Performance Level would only increase when it was really required (3D rendering, HD video playback, et cetera). I probably wouldn’t have even noticed that the Performance Level was changing, except that it would cause the GPU fan to spin faster, which was noticeably louder in my office.

After scouring the interwebs, I found that I was not the only person to have this problem. For reference, see this article, and this one about locking to certain Performance Levels. However, I wasn’t able to find a solution for the exact problem that I was having. If you look at the screenshot below, you’ll see that the Performance Level is set at 2, which was causing the card to run quite hot (79°C) even when it wasn’t being pushed.

NVIDIA Linux drivers stuck on high Performance Level in Xorg
Click to enlarge

It turns out that I needed to add some options to my X Server Configuration. Unfortunately, I was originally making changes in /etc/X11/xorg.conf, but they weren’t being honoured. I added the following lines to /etc/X11/xorg.conf.d/20-nvidia.conf, and the changes took effect:

Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

The RegistryDwords option was what ultimately fixed the problem for me. More information about the NVIDIA drivers can be found in their README and Installation Guide, and in particular, these settings are described on the X configuration options page. The PowerMizerDefaultAC setting may seem like it is for laptops that are plugged in to AC power, but as this system was a desktop, I found that it was always seen as being “plugged in to AC power.”

As you can see from the screenshots below, these settings did indeed fix the PowerMizer Performance Levels and subsequent temperatures for me:

NVIDIA Linux drivers Performance Level after Xorg settings
Click to enlarge

Whilst I was adding X configuration options, I also noticed that the Coolbits option (search for “Coolbits” on that page) is supported by the Linux driver. Here’s the excerpt about Coolbits for version 364.19 of the NVIDIA Linux driver:

Option “Coolbits” “integer”
Enables various unsupported features, such as support for GPU clock manipulation in the NV-CONTROL X extension. This option accepts a bit mask of features to enable.

WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer’s design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer’s operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.

When “2” (Bit 1) is set in the “Coolbits” option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.

When “4” (Bit 2) is set in the “Coolbits” option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.

When “8” (Bit 3) is set in the “Coolbits” option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs. Not all clock domains or performance levels may be modified.

When “16” (Bit 4) is set in the “Coolbits” option value, the nvidia-settings command line interface allows setting GPU overvoltage. This is allowed on certain GeForce GPUs.

When this option is set for an X screen, it will be applied to all X screens running on the same GPU.

The default for this option is 0 (unsupported features are disabled).
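Since “Coolbits” is a bit mask, the individual feature bits combine by bitwise OR (equivalently, by adding distinct powers of two); a quick sketch of the arithmetic:

```python
# Coolbits feature bits, per the driver README excerpt above
SLI_MISMATCHED_MEM = 2   # Bit 1: SLI with mismatched video memory
FAN_CONTROL        = 4   # Bit 2: manual GPU fan speed control
CLOCK_OFFSETS      = 8   # Bit 3: per-performance-level clock offsets
OVERVOLTAGE        = 16  # Bit 4: GPU overvoltage via nvidia-settings

# Combine the features you want with bitwise OR
coolbits = FAN_CONTROL | CLOCK_OFFSETS
print(coolbits)  # 12 -- the value to put in Option "Coolbits"
```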

I found that I would personally like to have the options enabled by “4” and “8”, and that one can combine Coolbits by simply adding them together. For instance, the ones I wanted (“4” and “8”) added up to “12”, so that’s what I put in my configuration:

Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "Coolbits" "12"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

and that resulted in the following options being available within the nvidia-settings utility:

NVIDIA Linux drivers Performance Level after Xorg settings
Click to enlarge

Though the Coolbits portions aren’t required to fix the problems that I was having, I find them to be helpful for maintenance tasks and configurations. I hope, if you’re having problems with the NVIDIA drivers, that these instructions give you a better understanding of how to work around any issues you may face. Feel free to comment if you have any questions, and we’ll see if we can work through them.


May 10, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)
New Company: Edge Security (May 10, 2016, 14:48 UTC)

I've just launched a website for my new information security consulting company, Edge Security. We're expert hackers, with a fairly diverse skill set and a lot of experience. I mention this here because in a few months we plan to release an open-source kernel module for Linux called WireGuard. No details yet, but keep your eyes open in this space.