
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Yury German
. Zack Medico

Last updated:
January 15, 2017, 23:05 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

January 13, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Dieselgate is, in my opinion, an interesting case study in the ethics of software development and hacking in general, particularly because it’s most definitely not a straightforward bad-guys-versus-good-guys case, and because there is no clear-cut solution to the problem. There was a talk about this (again) at the Chaos Communication Congress this year. It’s a nice talk to watch:

I was actually physically in the room only for the closing notes, but they got me thinking, which is why this was the second talk I watched once back in Dublin, after the Congress was over.

[Regarding the responsibility for Bosch as the supplier to Volkswagen]

If you build software that you know is used to be illegally, it should it must be your responsibility to not do that and I’m not sure if this is something that is legally enforceable but it should be something that’s enforceable ethically or for all us programmers that we don’t build software that is designed to break the law.

The rambling style of the quote is understandable, as the speaker was answering a question asked on the spot.

Let me be clear: these cheating “devices” (the device itself is not the cheating part, but rather the algorithms and curves programmed onto it, of course) are a horrible thing, not only from the legal point of view but because we only have one planet. Although, in the grand scheme of things, they appear to have forced the hand of various manufacturers towards moving to electric cars quicker. But let’s put that aside for a moment.

Felix suggests that developers (programmers) should refuse to build software to break the law. Which law are we talking about here? Any law? All the laws? Just the laws we like?

Let’s take content piracy. File sharing software is clearly widely used to break the law, by allowing software, movies, tv series, books, music to be shared without permission. DRM defeating software is also meant to break (some, many) laws. Are the people working on the BitTorrent ecosystem considering that the software they are writing is used to break the law? Should it be enforceable ethically that they should not do that?

Maybe content piracy is not that high in the ethical list of hackers. After all we have all heard «Information wants to be free», particularly used as an excuse for all kind of piracy. I would argue that I’m significantly more open to accept de-DRM software as ethical if not legal, because DRM is (vastly) unethical. In particular as I already linked above, defeating DRM is needed to fix the content you buy and to maintain access to the content you already bought.

Let’s try to find a more suitable example… oh I know! “Anonymous” transaction systems like Bitcoin, Litecoin and the newest product of that particular subculture, Keybase’s zCash, which just happens to have been presented at 33C3 as well. These are clearly used to break the law in more ways than one. They allow transactions outside of the normal, regulated banking system, they allow working around laws that apply to cash-only transactions, and they keep engineering ways to make transactions untraceable. After all, Silk Road would probably have seen a significant reduction in business if people had actually had to use traceable money transfers.

Again, should we argue that anybody working on this ecosystem should be ethically enforced out? Should they be forced to acknowledge that the software they work on is used for illegal activities? Well, maybe not; after all, these systems are just making things easier, but they are not enabling anything that was not already possible before. So, after all, they are not critical to the illegal activity, and surely it’s not the developers’ fault up to this point.

Let’s take yet another example, even though I feel like I’m beating a dead horse. The most loved revolutionary tool: Tor. When you ask the privacy advocates, this is the most important tool for refugees, activists, dissidents, all the good people in the world. And arguably, in this day and age, there are indeed many activists and dissidents that a lot of us want to protect, as even the US starts feeling like a regime to some. When asked about it, these advocates would argue that supermarket loyalty cards are abused more than Tor.

But if you do take off your starry-eyed glasses, you’ll see how Tor has been used not only by Silk Road and its ilk, but also for significantly nastier endeavours than buying and selling drugs. Another talk from 33C3 criticised law enforcement’s use of “hacking techniques”, particularly when dealing with child pornography websites, and for once it did so not simply to attack law enforcement’s intention of breaking Tor, but rather the fact that they have caused lots of evidence to be thrown out as fruit of the poisonous tree.

I’m not arguing that Felix is wrong in saying that Bosch very likely knew what the software they developed, under spec from their customer, would cause. And I’m not sure I can actually assess the ethical difference between providing platforms for people to hire killers and poisoning our cities and planet. I’m not a philosopher, and I don’t particularly care to think through trolley problems, whether for real or for trolling. (Yes, we have all seen the memes.)

But at the same time, I find it ironic that roughly the same group of people who cheered his take-down of Bosch would also cheer for Tor and the other projects… you could say that what Felix referred to was more about immorality than illegality, but not all morals are absolute, and laws are not universal. And saying that you want to enforce whichever laws you think should exist, but not others, is not something I agree with.

Leaving this last consideration aside – and it’s funny that just his last few phrases are what brought me to post all this! – I really liked Felix’s talk, and I’m happy that he’s taking a grown-up approach rather than the anarchist hacker’s: among other things, admitting that it might not be feasible to suggest or force the car companies to open-source their whole ECU code, but that it might be feasible to approach it like Microsoft’s Shared Source license.

Finally, I would point out that this is possibly a proper example of where giving users the ability to upload their own code is not something I’d look forward to. The software at the centre of the storm is not designed to defeat regulation solely because corporations are evil and want to poison our planet, even though I’m sure that would be an easy narrative, particularly with the Congress crowd. Instead, these curves and decisions are meant to tip the scales towards the interests of the user rather than those of the collective.

You just need to watch the talk and you’ll hear how these curves actually increase the time-to-service for cars, both by reducing the amount of dirt caught by the EGR and by reducing the amount of AdBlue used. It also stands to reason that the aim of not only the curves, but the whole design of these cars, is to ignore regulations wherever possible (since the ECU helps cheating during the required tests) to improve performance. Do we really expect that, if people were allowed to push their own code into the ECU, they would consider the collective good rather than their own personal gain? Or would you rather expect them to disable the AdBlue tank altogether, so they have to spend even less time worrying about servicing their cars?

Let me be clear, there is no easy solution to this set of problems. And I think Felix’s work and these talks are actually a very good starting point to the solution. So maybe there is hope, after all.

January 10, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have said that I’ve been wrong multiple times in the past. Some of it has been buying too much into the BOFH myth. With this I mean that between reading UserFriendly and the Italian language Storie dalla Sala Macchine, I bought into the story that system administrators (which now you may want to call “ops people”) are the only smart set of eyes in an organization, and that most of other people have absolutely no idea what they are doing.

But over time I have worn many hats: I left high school with some basic understanding of system administration, and I proceeded to work as a translator, a developer of autonomous apps in an embedded environment, I taught courses on debugging and development under Linux, and worked on even more embedded systems. While I have been self-employed as a sysadmin for hire (or, to use a more buzzword-compliant term, as an MSP, a Managed Service Provider), I ended up working on media streaming software, as well as media players and entire services. I worked as a web developer, even in PHP for a very brief time; I wrote software for proprietary environments as well as Linux and other open source systems. In addition to my random musings on this blog, I wrote for more reputable publications. And I’m currently a so-called Site Reliability Engineer, and “look after” (but not really) highly distributed systems.

This possibly abnormal list of tasks, if not really occupations, has a clear upside: I can pass for a member of different groups relatively easily. For months I had people think that I was an EE student, at work a bunch of people thought I had previous experience with distributed systems, and of course at LISA I can talk my way around as if I was still a system administrator. Somehow I also manage to pass for a security person, because I have a personal interest in the topic and so I learnt a bunch of things about it, even though I have not worked or researched that officially.

On the other hand, this gives me a downside that, personally, is much heavier: impostor syndrome. In all those crowds, while I can probably hide for a while, I’ll feel out of place. I have not used a soldering iron (or a breadboard) for years by now (although I’m working to fix that), and I have not really worked much on small discrete electronics for years — the last time I worked as a firmware engineer, it was for a system that had 8GB of RAM and an i7 Xeon. I don’t have the calculus skills to be an actual multimedia developer: I know my way around container formats, but I need someone else to write the codecs, as I have no clue how to decode even the simplest encoding, well, maybe except Huffman codes at this point.

So while I can camouflage myself in these groups, I can’t really feel like a real member of them.

You could say that the free software community should be something I’m more at ease with, given I’ve been a developer for over fifteen years at this point, but there are two big problems with it: the first is that I’m not a purist, I use proprietary software any time I feel it’s the best tool for the task, and the second is that free software and privacy advocacy mix too much nowadays, and my concept of privacy does not match that of the activists.

While at 33C3 I realized I don’t really match this crowd either, and not just on the privacy topic. I somehow have more respect for the rules than most of the people I see around here, though I still enjoy the hacking and breaking, so when the Twitter crowd starts complaining that Nintendo and PS4 exploits are not released, I find not releasing them a perfectly reasonable approach. After all, hiding behind the outrage over blocking Linux on the PS3 was the intention to pirate games, and that’s not something I’m happy to condone.

I hung around with a few of my previous acquaintances, and friends of theirs, while they were working on the CTF — and that was kind of cool, but it’s also not something I’m very interested in: while I can work my way around security problems, and know what to look out for, I don’t really like that kind of puzzle. Just like I don’t enjoy logic puzzles, or sudoku. I do enjoy Scrabble, though.

The evenings were the least interesting to me, too. Most of them included parties that revolved around alcohol, and you know I don’t care for it. Given that this is C3, I’m sure a number of other drugs were involved too — I’m not an expert, but I can at this point tell the smell of weed quite clearly, and the conference centre was smelling of it more than Seattle. So effectively the only night I left after 10pm was the first, and that only because Hector was talking at 11pm. (On the bright side, Hamburg makes procuring sugar-free fritz-kola very easy.)

To stray a bit from technology, I should add that even when going to non-software conventions, such as EasterCon and Nine Worlds, I feel like an outsider there too. Much as I’m a fan of sci-fi and fantasy, and a would-be avid reader, I don’t have the time to read as much as I would like, and I’m clearly not cut out to be a cosplayer or a fanzine writer. And most of these events also involve a disproportionate amount of alcohol.

So why did I title this post “Growing up”? The answer is that acceptance comes with growing up. I sometimes find myself among some of these subcultures and groups, but I know I won’t truly join them. For some of them it’s because I don’t have the time to invest to join properly: for instance, while I would love to actually be an EE, I did not really go to university (two weeks don’t count), and going right now is not an option; I’m too old for this. And I would not be ready to compromise my ethics with regard to piracy or legality.

And, much as I understand people do enjoy those “responsibly”, I don’t really think that weed or any other drug would be something I care about using. I know how I feel when I’m not in control, and though that may be able to “relax” me enough to not be afraid of every single social interaction, it is not a pretty feeling afterwards. Even though there is a chance I’ll always feel isolated without one of those “social lubricants”.

Unfortunately this does mean that for many things, I’ll always be an outsider looking in, rather than an insider, which makes it difficult to drive change, for instance. But again, accepting that is part of growing up. So be it.

January 09, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: Security (or lack of) at Number26 (January 09, 2017, 22:57 UTC)


I would like to share a talk that I attended at 33c3. It’s about a company with a banking license and accounts with actual money. Some people downplay these issues as “yeah, but the issues were fixed” and “every major bank probably has something like this”. I would like to reply:

  • With a bit of time and interest, any moderately skilled hobby security researcher, myself included, could have found what he found.
  • The issues uncovered are not mere issues of a product, they are issues in processes and culture.

When I checked earlier, Number26 did not have open positions for security professionals. They do now:

Senior Security Engineer (f/m)

The video: Shut Up and Take My Money! (33c3)

January 07, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Sniffing on an Android phone with Wireshark (January 07, 2017, 18:04 UTC)

In my review of the iHealth glucometer I pointed out that I did indeed check whether the app talked with the remote service over TLS or not. This was important because if it didn’t, it meant it was sending medical information in plaintext. There are a few other things that can go wrong; they can, for instance, fail to validate the certificate provided over TLS, effectively allowing MITM attacks to succeed, but that’s a different story altogether, so I won’t go there for now.

What I wanted to write about is some notes about my experience, if nothing else because it took me a while to get all the fragments ready, and I could not find a single entry anywhere that would explain what the error message I was receiving was about.

First of all, this is about the Wireshark tool, and Android phones, but at the end of the day you’ll find something that would work almost universally with a bunch of caveats. So make sure you get your Wireshark installed, and make sure you never run it as root for your own safety.

Rick suggested looking into the androiddump tool that comes with Wireshark; on Gentoo this requires enabling the right USE flag. It uses the extcap interface to “fetch” the packets to display from a remote source. I like this idea, among other things, because it splits the displaying/parsing from the capturing. As I’ll show later, this is not the only useful tool using this interface.

There are multiple interfaces that androiddump can capture from; these include the logcat output, which makes it very useful when you’re debugging an application in real time, but what I cared about was sniffing the packets from the interfaces on the device itself. This kept failing with the following error:

Error by extcap pipe: ERROR: Broken socket connection.

And no further debugging information was available. Googling for a good half hour didn’t bring me anywhere; I even started strace‘ing the process (to the point that Wireshark crashed in a few situations!) until I finally managed to figure out the right -incantation- invocation of the androiddump tool… which had no more information even in verbose mode, but at least it told me what it was trying to do.

The explanation is kind of simple: this set of interfaces is effectively just a matryoshka of interfaces. Wireshark calls into extcap, which calls into androiddump, which calls into adb, which calls into tcpdump on the device.
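That matryoshka chain can also be reproduced by hand, cutting androiddump out entirely; this is only a sketch, assuming a rooted device whose tcpdump actually works and a wlan0 interface name:

```shell
# Run tcpdump on the phone as root and stream the raw pcap
# data over adb straight into a local Wireshark instance.
adb exec-out "su -c 'tcpdump -i wlan0 -U -w - 2>/dev/null'" | wireshark -k -i -
```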

And here is the problem: my device (a Sony Xperia XA from 3 Ireland) does indeed have a tcpdump command, but the only thing it does is return 1, and that’s it. No error message, and not even a help output to figure out if you need to enable something. I have not dug into the phone much more because I was already kind of tired of having to figure out pieces of the puzzle that are not obvious at all, so I looked for alternative approaches.

Depending on the operating system you use to set up the capture, you may be able to turn your computer into an access point and connect the phone to it. But this is not easy, particularly on a laptop with already-oversubscribed USB ports. So I had to look for alternatives.

On the bright side, my router is currently running OpenWRT (with all the warts it has), which means I have some leeway on the network access already. Googling around would suggest setting up a tee: telling iptables to forward a copy of every single packet coming from or to the phone to another MAC address. This is relatively expensive, and not reliable over WiFi networks anyway, besides increasing congestion on an already busy network.
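For reference, a hedged sketch of what such a tee could look like on the router (all addresses and the interface are made up here); note that the TEE target actually clones packets to a gateway IP on the local segment, and it needs the iptables-mod-tee package on OpenWRT:

```shell
# Clone the phone's traffic (both directions) to the machine running
# Wireshark at 192.168.1.2. Phone MAC and IP are hypothetical.
iptables -t mangle -A PREROUTING -m mac --mac-source aa:bb:cc:dd:ee:ff \
    -j TEE --gateway 192.168.1.2
iptables -t mangle -A POSTROUTING -d 192.168.1.10 \
    -j TEE --gateway 192.168.1.2
```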

I opted instead to use another tool that is available in extcap: ssh-based packet captures. In Gentoo these require the sshdump and libssh USE flags enabled. With this interface, Wireshark effectively opens a session via SSH to the router, and runs tcpdump on it. It can also use dumpcap or tshark, which are Wireshark-specific tools, and would be significantly more performant, but there is no build for them on OpenWRT so that does not help either.

While this actually increases the amount of traffic over WiFi compared to the tee option, it does so over a reliable channel, and it allows you to apply capture filters, as well as start and stop the capture as needed. I ended up going for this option, and the good thing is that if you know the hardware addresses of your devices, you can now very easily sniff any of the connected clients just by filtering on that particular address, which opens up interesting discoveries. But that’s for another day.
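What sshdump does under the hood can also be done manually; a sketch, with the router address, capture interface, and client MAC all being assumptions:

```shell
# Run tcpdump on the router, filtered to one client's MAC address,
# and pipe the pcap stream into a local Wireshark.
ssh root@192.168.1.1 "tcpdump -i br-lan -U -w - ether host aa:bb:cc:dd:ee:ff" \
    | wireshark -k -i -
```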

January 05, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Last week I received an unexpected but great email that my new car had arrived way ahead of schedule. I had ordered a 2017 Honda Civic EX-T back in November, but it wasn’t slated to come in until February 2017. I had to order one from the factory because I wanted to get one that perfectly matched my needs and wants, and apparently, that’s rare. When the 2016 came out, I was uninspired because I wanted the 1.5L turbocharged engine instead of the 2.0L naturally aspirated one, and I also wanted some of the bells and whistles (like the larger display, more speakers, et cetera), but above all else, I wanted a manual transmission. For 2017, Honda came through in that I could get the manual transmission a trim higher than the base model. The EX-T had everything I wanted, and I picked it up on Friday, 30 December 2016.

As with all of my previous vehicles, there are things that I want to change about this car, but far fewer than ever before. I’m quite happy with the performance, the smooth ride, and the niceties that come with the higher trim level. Not even a week later, though, and I took on my first modification to the car (albeit a minor one). Though it wasn’t something as involved as swapping a JDM K20a / Y2M3 (from the ITR), it did make a noticeable difference with the car (just a cosmetic one this time). 🙂

Though I love the looks of the 2017 Civic, I think that the “Civic” emblem/badge makes the car look asymmetrical and a little less classy. So, I thought it best to remove it completely.

2017 Honda Civic emblem removed - debadged letters

The process was relatively straightforward, but I understand that it can be a little unnerving to remove a badge on a brand new car. What happens if I scratch the paint? What if the adhesive is really strong and leaves a full residue? Those are legitimate concerns, but this little project turned out to be pretty easy. Here’s what I did:

  • Used a hair dryer to heat the adhesive behind each letter for ~60-90 seconds
  • Used a piece of floss in a seesaw motion behind each letter until they came off
  • Used an old credit card to remove some of the excess adhesive
  • Applied Goo Gone Automotive Spray Gel to the remaining residue
  • Held a rag under the letters to catch the excess Goo Gone that would otherwise drip
  • Used my handy-dandy AmazonBasics microfiber cloth to get rid of the remaining residue
  • Washed the spot with some soap and water
  • Dried the spot with another microfiber cloth
  • Basked in the glory of having a much cleaner look to the rear of the car 🙂

I think that the results were well worth the minimal amount of time and effort:

2017 Honda Civic emblem removed - debadged before and after

2017 Honda Civic emblem removed - debadged before and after - wide

The only thing that I would note is that I did need to apply a good amount of pressure when getting rid of the excess adhesive with the old credit card, and especially when using the microfiber cloth & Goo Gone to clean the remaining residue. I was a bit nervous to press that hard at first, but soon realised that it was necessary, and that as long as I was careful, it wouldn’t damage the clearcoat or the paint. I thought that it would take about 10 minutes, and it ended up taking about 45 to do it in my OCD manner. That being said, it could have been a lot worse. My friend Mike always used to say, “to estimate the time needed for a project—especially one involving a car—take your initial guess, multiply it by 2, and go up one unit of measure.” In that case, I’m glad that it didn’t take 20 hours. 😛


January 03, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Virtual rewiring, part two: the EC (January 03, 2017, 19:04 UTC)

In the previous post I explained what I want: to be able to use the caps lock key for Fn, at least for the arrow keys to achieve the page up/down, home and end keys (navigation keys).

After that post, I was provided with a block schematic of my laptop identifying the EC in the system as an ITE IT8572. This is a bit unfortunate, because ITE is not known for sharing their datasheets easily, but at least I know that the EC is based on the Intel 8051 (also known as MCS-51), with a 64KiB flash ROM.

Speaking of the ROM, it’s possible to extract the EC firmware from the ASUS-provided update files using (unmodified) UEFITool. Within the capsule, the EC firmware is the first padding entry, the non-empty one; you can extract it with the tool, and then you have the actual ROM image file. That’s easy.

I was also pointed at Moravia Microsystems’ MCU 8051 IDE, which is a fully-functional IDE for developing for 8051 MCUs. I submitted an ebuild for this while at 33C3, so that you can just emerge mcu8051ide to have a copy installed. It supports some optional runtime dependencies that I have not actually made optional in the ebuild yet. This IDE supports both the conversion of binary files to Intel HEX (why on Earth Intel HEX is still considered a good idea, I’m not sure), disassembly of the binaries, and comes with its own (Tcl/Tk) assembler.
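For reference, an Intel HEX record is just a hex-encoded line carrying a byte count, a 16-bit address, a record type, the data, and a two’s-complement checksum; a minimal sketch of the binary-to-HEX direction (function names are mine), which conveniently covers the full 64KiB address space of a ROM like this one:

```python
def ihex_record(addr: int, data: bytes, rectype: int = 0) -> str:
    """Build one Intel HEX record: :CCAAAATT<data>KK."""
    body = bytes([len(data), (addr >> 8) & 0xFF, addr & 0xFF, rectype]) + data
    checksum = (-sum(body)) & 0xFF  # two's complement of the byte sum
    return ":" + body.hex().upper() + f"{checksum:02X}"

def bin_to_ihex(rom: bytes, chunk: int = 16) -> str:
    """Convert a flat ROM image into Intel HEX, one data record per chunk."""
    lines = [ihex_record(off, rom[off:off + chunk])
             for off in range(0, len(rom), chunk)]
    lines.append(ihex_record(0, b"", rectype=1))  # EOF record
    return "\n".join(lines)
```

This handles only 16-bit addresses, which is enough here since 64KiB is also the maximum addressable ROM on the base 8051.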

Unfortunately, this has not brought me quite as close as might be expected, given that I have the firmware, a disassembler and an assembler. The reason is not quite obvious either.

The first problem is that the IDE is unable to re-assemble the code it produces. Since disassembly (unlike decompilation) should be a lossless procedure, that was the first thing I tried, and it failed. There appear to be at least two big problems: the first is that the IDE does not have a configuration for an 8051 with a 64KiB ROM (even though that is the theoretical maximum ROM size for that device), and the other is that, since it has no way to mark which parts of the ROM are data and which are code, it disassembles the data in the ROM into instructions that are not actually valid for the base 8051 instruction set.

So, I decided to look into other options; unfortunately I found only a DJGPP-era disassembler – which produces what looks like a valid assembly file, but can’t be re-assembled – and an apparently promising Python-based one that failed to even execute due to a Python syntax error.

I have thus started working on writing my own, because why not, it’s fun, and it wouldn’t be the first time I go parsing instructions manually — though the last time, I was in high school and I wrote a very dumb 8086 emulator to try my homework out without having to wait in the queue at the lab for the horrible Rube Goldberg Machine we were using. This was some 15 years ago by now.

But back to present: to be able to write a proper disassembler that does not suffer the problems I noted above, I need to make sure I have a test that checks that re-assembling the disassembled code produces the same binary ROM as the source. Luckily, there is an obvious way to do so incrementally: you just emit every single byte of the ROM as a literal byte value. It’s not too difficult.

Except, which syntax do you use for that? The disassembler didn’t use any literal bytes (instead it emitted extended instructions for bytes that would not otherwise be mapped in the base ISA), so I spent some time googling for 8051 syntax, and I found a few decent pointers but nothing quite right. From what I can tell, the SDCC assembler should accept the same syntax as Alan Baldwin’s assembler suite, except for some of the more sophisticated instructions, as SDCC forked an earlier version of the same software. Even just opening the website should make it clear we’re talking serious vintage code here!

This syntax is also significantly different from the syntax used by MCU 8051 IDE, though. Admittedly, I was hoping to use the SDCC assembler for this (Baldwin’s is not quite obvious to build at first, as it effectively only provides .bat files for that) since that can be more easily scripted. The IDE is a Tcl/Tk full environment, and its assembler is very slow from what I can tell. Unfortunately, I have yet to find a way for the SDCC-provided assembler to produce any binary file. It’s all hidden behind flags and multi-level object files, sigh!

So I decided to at least make a file that assembles with the IDE. According to this page, the syntax should be quite simple:


The DB pseudo-instruction defines a literal byte or bytes. And that sounds exactly like what I need! So I just made my skeleton disassembler emit every byte with this syntax, and… it fails to assemble. It looks like the IDE assembler only supports DB with decimal numbers, which makes them harder to read and match against the hexdump -C output I’ve been using to compare the binaries. Even after fixing that, things still did not build right, but I have yet to look deeper into it.
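The byte-literal baseline, and the roundtrip test it enables, can be sketched as follows (decimal DB values to match the IDE’s limitation; all names are mine, and the toy reassembler merely stands in for the real assembler):

```python
def trivial_disasm(rom: bytes) -> str:
    """Emit one decimal DB pseudo-instruction per ROM byte."""
    return "\n".join(f"\tDB\t{b}" for b in rom)

def reassemble_db_only(listing: str) -> bytes:
    """Toy reassembler for the DB-only listing; a stand-in for the real one."""
    return bytes(
        int(fields[1])
        for fields in (line.split() for line in listing.splitlines())
        if fields and fields[0] == "DB"
    )

def roundtrips(rom: bytes) -> bool:
    """The invariant every later disassembler improvement must preserve:
    reassembling the output reproduces the ROM byte-for-byte."""
    return reassemble_db_only(trivial_disasm(rom)) == rom
```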

Given that I’m at 33C3, and there was a talk about radare2 already (although I have not seen it yet, I’ll watch it at home), I decided to try using that, as it also already supports 8051, at least in theory. I say in theory because:

% radare2 -a 8051 ec212.bin
[0x00000000]> pd
*** invalid %N$ use detected ***
zsh: abort      radare2 -a 8051 ec212.bin

This is a known problem which is still unfixed, and that has been de-prioritized already, so if I want it fixed, I’ll have to fix it myself.

At this point, I don’t have much to work with. I started a very skeletal version of a disassembler, so I can start building the parsing I need. I have not done the paperwork yet to release it, but I hope to do so soon, and to develop it in the open as usual. I will also have to do some paperwork to submit a few fixes for MCU 8051 IDE, to support at least the basics of the ITE controller I have, guessed from the firmware itself rather than from the datasheet, as I have no access to that as of yet.

If anybody knows anything I don’t and can point me to useful documentation, I’d really be happy to hear it.

January 01, 2017

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

A crafted tiff file revealed a NULL pointer access.
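Crashes like this are typically surfaced by building the libtiff tools with AddressSanitizer before running the sample; a sketch of such a setup (the sample file name here is hypothetical):

```shell
# Build libtiff with ASan instrumentation, then run the reproducer.
./configure CC=clang CFLAGS="-fsanitize=address -ggdb"
make -j"$(nproc)"
tools/tiffinfo -Dijr crafted.tif
```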

The complete ASan output:

# tiffinfo -Dijr $FILE

TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
TIFFReadDirectory: Warning, Unknown field with tag 384 (0x180) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 1093 (0x445) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 2 (0x2) encountered.
TIFFFetchNormalTag: Warning, ASCII value for tag "DocumentName" contains null byte in value; value incorrectly truncated during reading due to implementation limitations.
TIFFFetchNormalTag: Warning, Incorrect count for "JpegProc"; tag ignored.
TIFFReadDirectory: Warning, Photometric tag value assumed incorrect, assuming data is YCbCr instead of RGB.
TIFFReadDirectory: Warning, SamplesPerPixel tag is missing, applying correct SamplesPerPixel value of 3.
_TIFFVSetField: Warning, SamplesPerPixel tag value is changing, but SMinSampleValue tag was read with a different value. Cancelling it.
==15897==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000050d8ad bp 0x7ffc4a3eaf90 sp 0x7ffc4a3eaec0 T0)
==15897==The signal is caused by a READ memory access.
==15897==Hint: address points to the zero page.
    #0 0x50d8ac in TIFFReadRawData /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffinfo.c:421:29
    #1 0x50b2de in tiffinfo /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffinfo.c:473:4
    #2 0x50a999 in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffinfo.c:152:6
    #3 0x7f6258f0961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #4 0x419f38 in _init (/usr/bin/tiffinfo+0x419f38)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffinfo.c:421:29 in TIFFReadRawData
TIFF Directory at offset 0xc (12)
  Image Width: 128 Image Length: 1
  Bits/Sample: 32189
  Compression Scheme: Old-style JPEG
  Photometric Interpretation: YCbCr
  YCbCr Subsampling: 2, 2
  Samples/Pixel: 3
  Rows/Strip: 2048
  Planar Configuration: single image plane
  Tag 384: 16779264
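The trace above shows a read through a NULL pointer inside TIFFReadRawData. The general defensive pattern for this class of bug is to verify that a possibly-absent array was actually populated before iterating over it; a minimal Python sketch with hypothetical names (not libtiff's actual fix):

```python
def read_raw_strips(byte_counts, read_strip):
    """Read every strip, refusing to iterate a byte-count array that was
    never populated (the Python analogue of a NULL array in C)."""
    if byte_counts is None:
        raise ValueError("strip byte counts missing; refusing to read")
    return [read_strip(i, n) for i, n in enumerate(byte_counts)]

# Each strip is fetched with its expected length from the directory.
strips = read_raw_strips([3, 2], lambda i, n: b"x" * n)
```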

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-22: bug discovered and reported to upstream
2016-12-03: upstream released a patch
2017-01-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libtiff: NULL pointer dereference in TIFFReadRawData (tiffinfo.c)

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

A crafted tiff file revealed an assertion failure.

The complete output:

# tiffcp -i $FILE /tmp/foo
tiffcp: /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcp.c:1390:
int readSeparateTilesIntoBuffer(TIFF *, uint8 *, uint32, uint32, tsample_t):
Assertion `bps % 8 == 0' failed.
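An `assert` on attacker-controlled input aborts the whole process; a more robust pattern validates BitsPerSample up front and rejects unsupported files with an error. A minimal sketch in Python (hypothetical names, not libtiff's actual fix):

```python
def bytes_per_sample(bits_per_sample):
    """Reject zero or non-byte-aligned sample sizes up front instead of
    asserting deep inside the copy loop."""
    if bits_per_sample == 0 or bits_per_sample % 8 != 0:
        raise ValueError("unsupported BitsPerSample: %d" % bits_per_sample)
    return bits_per_sample // 8

print(bytes_per_sample(16))  # 2
```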

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-23: bug discovered and reported to upstream
2016-12-03: upstream released a patch
2017-01-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libtiff: assertion failure in readSeparateTilesIntoBuffer (tiffcp.c)

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

A crafted tiff file revealed a stack buffer overflow.

The complete ASan output:

# tiffsplit $FILE
TIFFReadDirectory: Warning, Unknown field with tag 317 (0x13d) encountered.
==10362==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7f3824f00090 at pc 0x7f3829624fbb bp 0x7fffe0eb1da0 sp 0x7fffe0eb1d98
WRITE of size 4 at 0x7f3824f00090 thread T0
    #0 0x7f3829624fba in _TIFFVGetField /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_dir.c:1077:29
    #1 0x7f382960f202 in TIFFVGetField /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_dir.c:1198:6
    #2 0x7f382960f202 in TIFFGetField /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_dir.c:1182
    #3 0x50a719 in tiffcp /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffsplit.c:183:2
    #4 0x50a719 in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffsplit.c:89
    #5 0x7f382871561f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #6 0x419a78 in _init (/usr/bin/tiffsplit+0x419a78)

Address 0x7f3824f00090 is located in stack of thread T0 at offset 144 in frame
    #0 0x5099cf in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffsplit.c:59

  This frame has 18 object(s):
    [32, 40) 'bytecounts.i263.i'
    [64, 72) 'bytecounts.i.i'
    [96, 98) 'bitspersample.i'
    [112, 114) 'samplesperpixel.i'
    [128, 130) 'compression.i'
    [144, 146) 'shortv.i'
  0x0fe7849d8010: 02 f2[02]f2 00 f2 f2 f2 04 f2 04 f2 04 f2 00 f2
  0x0fe7849d8020: f2 f2 04 f2 04 f2 00 f2 f2 f2 00 f2 f2 f2 00 f2
  0x0fe7849d8030: f2 f2 00 f2 f2 f2 02 f3 00 00 00 00 00 00 00 00
  0x0fe7849d8040: f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5
  0x0fe7849d8050: f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5
  0x0fe7849d8060: f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
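The report shows a 4-byte WRITE landing on the 2-byte stack object 'shortv.i': the classic hazard of a variadic getter writing through a pointer whose pointee is smaller than the written type. Python's struct module makes the same size mismatch observable, and refuses it:

```python
import struct

# A TIFF "short" tag value is 16 bits wide.  Writing a 32-bit value
# through a pointer to it overflows the stack slot in C; struct.pack_into
# rejects the equivalent operation instead of silently overflowing.
slot = bytearray(2)                        # the 2-byte 'shortv' analogue
struct.pack_into("<H", slot, 0, 0x1234)    # 16-bit write: fits

try:
    struct.pack_into("<I", slot, 0, 0x1234)  # 32-bit write: refused
except struct.error as e:
    print("refused:", e)
```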

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-12-04: bug discovered and reported to upstream
2017-01-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libtiff: stack-based buffer overflow in _TIFFVGetField (tif_dir.c)

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

A crafted tiff file revealed a memcpy-param-overlap.

The complete ASan output:

# tiff2pdf $FILE -o foo
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
TIFFReadDirectory: Warning, Unknown field with tag 2 (0x2) encountered.
1006.crashes: Warning, Nonstandard tile width 769, convert file.
TIFFReadDirectory: Warning, Unknown field with tag 7710 (0x1e1e) encountered.
TIFFFetchNormalTag: Warning, Incorrect count for "FillOrder"; tag ignored.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFAdvanceDirectory: Error fetching directory count.
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
TIFFReadDirectory: Warning, Unknown field with tag 2 (0x2) encountered.
1006.crashes: Warning, Nonstandard tile width 769, convert file.
TIFFReadDirectory: Warning, Unknown field with tag 7710 (0x1e1e) encountered.
TIFFFetchNormalTag: Warning, Incorrect count for "FillOrder"; tag ignored.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
TIFFReadDirectory: Warning, Unknown field with tag 2 (0x2) encountered.
1006.crashes: Warning, Nonstandard tile width 769, convert file.
TIFFReadDirectory: Warning, Unknown field with tag 7710 (0x1e1e) encountered.
TIFFFetchNormalTag: Warning, Incorrect count for "FillOrder"; tag ignored.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
TIFFReadDirectory: Warning, Unknown field with tag 2 (0x2) encountered.
1006.crashes: Warning, Nonstandard tile width 769, convert file.
TIFFReadDirectory: Warning, Unknown field with tag 7710 (0x1e1e) encountered.
TIFFFetchNormalTag: Warning, Incorrect count for "FillOrder"; tag ignored.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
Fax3Decode2D: Warning, Premature EOL at line 0 of tile 0 (got 768, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 1 of tile 0 (got 35, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 2 of tile 0 (got 0, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 3 of tile 0 (got 0, expected 769).
Fax3Decode2D: Uncompressed data (not supported) at line 4 of tile 0 (x 0).
Fax3Decode2D: Warning, Premature EOL at line 4 of tile 0 (got 0, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 5 of tile 0 (got 0, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 7 of tile 0 (got 0, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 8 of tile 0 (got 0, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 9 of tile 0 (got 0, expected 769).
Fax3Decode2D: Warning, Line length mismatch at line 10 of tile 0 (got 1792, expected 769).
Fax3Decode2D: Warning, Premature EOL at line 11 of tile 0 (got 0, expected 769).
==29687==ERROR: AddressSanitizer: memcpy-param-overlap: memory ranges [0x7f2dcce0b85d,0x7f2dcce0b8ba) and [0x7f2dcce0b861, 0x7f2dcce0b8be) overlap
    #0 0x4bbee1 in __asan_memcpy /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x7f2dccb87f0d in _TIFFmemcpy /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:340:2
    #2 0x52ac36 in t2p_tile_collapse_left /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:3596:3
    #3 0x52ac36 in t2p_readwrite_pdf_image_tile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:3073
    #4 0x50f1dc in t2p_write_pdf /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:5526:16
    #5 0x50bfee in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:808:2
    #6 0x7f2dcbb4361f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #7 0x41a298 in _init (/usr/bin/tiff2pdf+0x41a298)

0x7f2dcce0b85d is located 93 bytes inside of 968448-byte region [0x7f2dcce0b800,0x7f2dccef7f00)
allocated by thread T0 here:
    #0 0x4d3058 in malloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x7f2dccb87d7e in _TIFFmalloc /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:316:10
    #2 0x5294e8 in t2p_readwrite_pdf_image_tile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:2933:29
    #3 0x50f1dc in t2p_write_pdf /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:5526:16
    #4 0x50bfee in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:808:2
    #5 0x7f2dcbb4361f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289

0x7f2dcce0b861 is located 97 bytes inside of 968448-byte region [0x7f2dcce0b800,0x7f2dccef7f00)
allocated by thread T0 here:
    #0 0x4d3058 in malloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x7f2dccb87d7e in _TIFFmalloc /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:316:10
    #2 0x5294e8 in t2p_readwrite_pdf_image_tile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:2933:29
    #3 0x50f1dc in t2p_write_pdf /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:5526:16
    #4 0x50bfee in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:808:2
    #5 0x7f2dcbb4361f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289

SUMMARY: AddressSanitizer: memcpy-param-overlap /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/ in __asan_memcpy
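memcpy has undefined behavior when its source and destination ranges intersect, which is what ASan flags here: the two 93-byte ranges in the report start only 4 bytes apart inside the same allocation. The overlap condition can be checked with simple interval arithmetic, sketched here in Python:

```python
def ranges_overlap(a, b, n):
    """True when [a, a+n) and [b, b+n) intersect -- the condition under
    which memcpy is undefined and memmove must be used instead."""
    return a < b + n and b < a + n

# The two ranges from the report above: 0x5d (93) bytes, 4 bytes apart.
src, dst, n = 0x7f2dcce0b85d, 0x7f2dcce0b861, 0x5d
print(ranges_overlap(src, dst, n))  # True
```

In C, the usual fix is either to use memmove, which tolerates overlap, or to restructure the copy so the ranges are disjoint.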

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-12-20: bug discovered and reported to upstream
2016-12-20: upstream released a patch
2017-01-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libtiff: memcpy-param-overlap in t2p_tile_collapse_left (tiff2pdf.c)

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

A crafted tiff file revealed an invalid memory read.

The complete ASan output:

# tiff2pdf $FILE -o foo
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
111.crashes: Warning, Nonstandard tile length 3, convert file.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFFetchNormalTag: Warning, ASCII value for tag "Software" contains null byte in value; value incorrectly truncated during reading due to implementation limitations.
TIFFAdvanceDirectory: Error fetching directory count.
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
111.crashes: Warning, Nonstandard tile length 3, convert file.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFFetchNormalTag: Warning, ASCII value for tag "Software" contains null byte in value; value incorrectly truncated during reading due to implementation limitations.
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
111.crashes: Warning, Nonstandard tile length 3, convert file.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFFetchNormalTag: Warning, ASCII value for tag "Software" contains null byte in value; value incorrectly truncated during reading due to implementation limitations.
TIFFReadDirectoryCheckOrder: Warning, Invalid TIFF directory; tags are not sorted in ascending order.
111.crashes: Warning, Nonstandard tile length 3, convert file.
TIFFFetchNormalTag: Warning, Incorrect count for "XResolution"; tag ignored.
TIFFFetchNormalTag: Warning, ASCII value for tag "Software" contains null byte in value; value incorrectly truncated during reading due to implementation limitations.
tiff2pdf: Warning, RGB image 111.crashes has 4 samples per pixel, assuming RGBA.
TIFFReadRawTile: Read error at row 4294967295, col 4294967295, tile 0; got 0 bytes, expected 23297.
TIFFReadRawTile: Read error at row 4294967295, col 4294967295, tile 1; got 0 bytes, expected 513.
TIFFReadRawTile: Read error at row 4294967295, col 4294967295, tile 2; got 512 bytes, expected 65285.
TIFFReadRawTile: Read error at row 4294967295, col 4294967295, tile 3; got 512 bytes, expected 1535.
==19864==ERROR: AddressSanitizer: SEGV on unknown address 0x61b000020000 (pc 0x7fc86d4a320b bp 0x000000000efc sp 0x7fff06650bf8 T0)
==19864==The signal is caused by a READ memory access.
    #0 0x7fc86d4a320a  /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/string/../sysdeps/x86_64/memcpy.S:270
    #1 0x7fc86d491f79 in _IO_file_xsputn /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/libio/fileops.c:1319
    #2 0x7fc86d487828 in fwrite /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/libio/iofwrite.c:43
    #3 0x50cdff in t2p_writeproc /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:405:21
    #4 0x52baea in t2pWriteFile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:379:10
    #5 0x52baea in t2p_readwrite_pdf_image_tile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:2924
    #6 0x50f1dc in t2p_write_pdf /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:5526:16
    #7 0x50bfee in main /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2pdf.c:808:2
    #8 0x7fc86d43e61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #9 0x41a298 in _init (/usr/bin/tiff2pdf+0x41a298)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/string/../sysdeps/x86_64/memcpy.S:270 

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-12-20: bug discovered and reported to upstream
2016-12-20: upstream released a patch
2017-01-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libtiff: invalid memory READ in t2p_writeproc (tiff2pdf.c)

libtiff: multiple heap-based buffer overflow (January 01, 2017, 15:34 UTC)

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

Some crafted images, found through fuzzing, revealed multiple overflows. Given the number of issues, I will post only the relevant part of each stacktrace.

Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==16440==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62500000e861 at pc 0x0000004531de bp 0x7ffd2aba5c30 sp 0x7ffd2aba53e0
READ of size 78490 at 0x62500000e861 thread T0
    #1 0x7f280456d37b in _tiffWriteProc /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:115:23


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==14332==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x63000000f4f0 at pc 0x7f95e90c11ad bp 0x7ffd74ba5ca0 sp 0x7ffd74ba5c98
READ of size 1 at 0x63000000f4f0 thread T0
    #0 0x7f95e90c11ac in TIFFReverseBits /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_swab.c:289:27


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==10398==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eef4 at pc 0x0000004bc235 bp 0x7fff3ebfa700 sp 0x7fff3ebf9eb0
READ of size 512 at 0x60200000eef4 thread T0
     #1 0x7fcaf590cf0d in _TIFFmemcpy /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:340:2


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==15106==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000edd8 at pc 0x7f33918c5de3 bp 0x7ffc5abe6ba0 sp 0x7ffc5abe6b98
READ of size 8 at 0x60200000edd8 thread T0
    #0 0x7f33918c5de2 in TIFFFillStrip /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_read.c:523:22


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcrop -i $FILE /tmp/foo
==9181==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fd3b2e277f8 at pc 0x7fd3b7a762cc bp 0x7ffffd6e2550 sp 0x7ffffd6e2548
READ of size 1 at 0x7fd3b2e277f8 thread T0
    #0 0x7fd3b7a762cb in _TIFFFax3fillruns /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_fax3.c:413:13


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcrop -i $FILE /tmp/foo
==988==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62100001ccff at pc 0x0000004bc00c bp 0x7fff920da690 sp 0x7fff920d9e40
WRITE of size 1 at 0x62100001ccff thread T0
    #1 0x7f49edd6af0d in _TIFFmemcpy /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:340:2


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==7788==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000edd3 at pc 0x0000004629ac bp 0x7ffe4adf8df0 sp 0x7ffe4adf85a0
READ of size 1 at 0x60200000edd3 thread T0
    #1 0x50d6a5 in tiffcp /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcp.c:784:57


Affected version / Tested on:
Fixed version:
Commit fix:
Upstream said that the previous changes fix this too; it needs to be bisected.
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==25645==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f651cc3b800 at pc 0x00000051ef24 bp 0x7ffec0573a70 sp 0x7ffec0573a68
READ of size 16 at 0x7f651cc3b800 thread T0
    #0 0x51ef23 in cpSeparateBufToContigBuf /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcp.c:1209:14


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==20438==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fef2adde803 at pc 0x00000051befa bp 0x7ffd3ee26b50 sp 0x7ffd3ee26b48
WRITE of size 16 at 0x7fef2adde803 thread T0
    #0 0x51bef9 in cpStripToTile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcp.c:1171:11


Affected version / Tested on:
Fixed version:
Commit fix:
Upstream said that the previous changes fix this too; it needs to be bisected.
Relevant part of the stacktrace:

# tiffcrop -i $FILE /tmp/foo
==29649==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62d00000a3fc at pc 0x0000004bc48c bp 0x7ffd6f23c680 sp 0x7ffd6f23be30
WRITE of size 2048 at 0x62d00000a3fc thread T0
      #1 0x7fcac5ac0033 in NeXTDecode /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_next.c:64:9


Affected version / Tested on:
Fixed version:
Commit fix:
Upstream said that the previous changes fix this too; it needs to be bisected.
Relevant part of the stacktrace:

# tiffcrop -i $FILE /tmp/foo
==23091==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eed2 at pc 0x0000004629dc bp 0x7fff8d1e2950 sp 0x7fff8d1e2100
READ of size 1 at 0x60200000eed2 thread T0
   #1 0x53277f in writeCroppedImage /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcrop.c:7940:23


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiff2ps $FILE
==32416==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000ee91 at pc 0x00000051ea78 bp 0x7ffd76b73dd0 sp 0x7ffd76b73dc8
READ of size 1 at 0x60200000ee91 thread T0
    #0 0x51ea77 in PSDataBW /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2ps.c:2703:21


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiff2ps $FILE
==31384==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000ee54 at pc 0x000000518b75 bp 0x7fff437bfdb0 sp 0x7fff437bfda8
READ of size 1 at 0x60200000ee54 thread T0
    #0 0x518b74 in PSDataColorContig /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiff2ps.c:2470:2


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcrop -i $FILE /tmp/foo
==8016==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eef1 at pc 0x000000530805 bp 0x7ffeb0d41770 sp 0x7ffeb0d41768
READ of size 1 at 0x60200000eef1 thread T0
    #0 0x530804 in combineSeparateSamples16bits /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcrop.c:3913:20


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiff2pdf $FILE -o foo
==31315==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000ea11 at pc 0x0000004bc10c bp 0x7fffd59abc40 sp 0x7fffd59ab3f0
WRITE of size 2 at 0x60200000ea11 thread T0
    #1 0x7fd49c1adf0d in _TIFFmemcpy /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_unix.c:340:2


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiff2rgba $FILE /tmp/foo
==20699==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62500000ed12 at pc 0x7f49ab2c134c bp 0x7ffc7e4eda30 sp 0x7ffc7e4eda28
READ of size 1 at 0x62500000ed12 thread T0
    #0 0x7f49ab2c134b in putcontig8bitYCbCr44tile /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_getimage.c:1885:28
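All of these reports share the same shape: a read or write whose attacker-influenced length exceeds the allocated region. The generic mitigation is an explicit bounds check before every bulk copy; a minimal Python sketch with hypothetical names (the C equivalents above trusted lengths taken from the file):

```python
def checked_copy(dst, dst_off, src, src_off, n):
    """Copy n bytes only when both the source and destination slices fit
    entirely inside their buffers."""
    if src_off + n > len(src) or dst_off + n > len(dst):
        raise IndexError("copy of %d bytes exceeds a buffer" % n)
    dst[dst_off:dst_off + n] = src[src_off:src_off + n]

buf = bytearray(4)
checked_copy(buf, 0, b"\xde\xad\xbe\xef", 0, 4)  # in bounds: succeeds
```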

These bugs were discovered by Agostino Sarubbo of Gentoo.

2016-11-20: started to post the issues to upstream
2017-01-01: blog post about the issue

These bugs were found with American Fuzzy Lop.


libtiff: multiple heap-based buffer overflow

libtiff: multiple divide-by-zero (January 01, 2017, 15:32 UTC)

Libtiff is a library that provides support for the Tag Image File Format (TIFF), a widely used format for storing image data.

Some crafted images, found through fuzzing, revealed multiple divisions by zero. Given the number of issues, I will post only the relevant part of each stacktrace.

Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp $FILE /tmp/foo
==12079==ERROR: AddressSanitizer: FPE on unknown address 0x7fd319436251 (pc 0x7fd319436251 bp 0x7fff851e3d80 sp 0x7fff851e3d30 T0)
    #0 0x7fd319436250 in TIFFReadEncodedStrip /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_read.c:351:22


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffmedian $FILE /tmp/foo
==28106==ERROR: AddressSanitizer: FPE on unknown address 0x7faeae7f744e (pc 0x7faeae7f744e bp 0x7ffceab45e40 sp 0x7ffceab45ce0 T0)
    #0 0x7faeae7f744d in OJPEGDecodeRaw /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/libtiff/tif_ojpeg.c:816:8


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcrop $FILE /tmp/foo
==19098==ERROR: AddressSanitizer: FPE on unknown address 0x000000523acf (pc 0x000000523acf bp 0x7ffcb22ada30 sp 0x7ffcb22ad780 T0)
    #0 0x523ace in readSeparateStripsIntoBuffer /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcrop.c:4841:36


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp $FILE /tmp/foo
==13262==ERROR: AddressSanitizer: FPE on unknown address 0x00000051c43b (pc 0x00000051c43b bp 0x7ffdc8d81d70 sp 0x7ffdc8d81b20 T0)
    #0 0x51c43a in readSeparateTilesIntoBuffer /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcp.c:1434:9


Affected version / Tested on:
Fixed version:
Commit fix:
Relevant part of the stacktrace:

# tiffcp -i $FILE /tmp/foo
==3614==ERROR: AddressSanitizer: FPE on unknown address 0x00000051650a (pc 0x00000051650a bp 0x7fff41587d30 sp 0x7fff41587b00 T0)
    #0 0x516509 in writeBufferToSeparateTiles /tmp/portage/media-libs/tiff-4.0.7/work/tiff-4.0.7/tools/tiffcp.c:1591:13
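Each of these crashes is an arithmetic instruction dividing by a value taken straight from the file, such as a zero RowsPerStrip. The generic guard, sketched in Python with hypothetical names (not libtiff's actual fix):

```python
def strips_in_image(image_length, rows_per_strip):
    """Number of strips in the image, rejecting a zero RowsPerStrip from
    a crafted file instead of dividing by it."""
    if rows_per_strip == 0:
        raise ValueError("RowsPerStrip is 0; refusing to divide")
    return -(-image_length // rows_per_strip)  # ceiling division

print(strips_in_image(100, 8))  # 13
```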

These bugs were discovered by Agostino Sarubbo of Gentoo.

2016-11-20: started to post the issues to upstream
2017-01-01: blog post about the issue

These bugs were found with American Fuzzy Lop.


libtiff: multiple divide-by-zero

December 31, 2016
Domen Kožar a.k.a. domen (homepage, bugs)
Reflecting on 2016 (December 31, 2016, 18:00 UTC)

I haven't blogged in 2016, but a lot has happened.

A quick summary of highlighted events:

2016 was a functional programming year, as I had planned by the end of 2015.

I greatly miss the Python community, and in that spirit I attended EuroPython 2016 and helped organize DragonSprint in Ljubljana. I don't think there's a place for me in OOP anymore, but I'll surely attend community events when nostalgia kicks in.

2017 seems extremely promising; plans will be unveiled as I go, starting with some exciting news in January for the Nix community.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Motherboard review: ASUS vs MSI (December 31, 2016, 08:04 UTC)

You may remember last year I bought a gamestation to play games at home (and that means running Windows on it). Last month, I had to make a relatively big change: replace the motherboard altogether. And since I have now been able to compare two motherboards of about the same generation, I thought I could give a bit of a comparative review of the two.

My original motherboard was an ASUS X99-S (which right now has an absolutely crazy price!), which I coupled with an Intel 5930K (which is not sold anymore). On paper the motherboard is great: SATA3, M.2 and so on, and it may actually be good if it’s not a broken one, but mine clearly was.

The first glitch I noticed, but did not pay enough attention to, was related to the USB 3 ports. While all the ports worked fine, I never managed to install the ASMedia drivers, even though the ASMedia controller was supposed to be backing some of the ports, and SysRescCD was actually seeing them fine. This bothered me for a while when I had performance issues on one of my devices, but otherwise everything seemed okay.

With the second problem, it was tricky to pin down whether it had always been there or whether an update caused it. When I bought the gamestation, memory was expensive, so I only got 32GB of it. A few months later, I had some spare pocket money (well, I got some bonuses that I wanted to exchange for some gratification) and bought 32GB more. Stupidly, I don’t remember whether I checked that it worked fine; I just trusted it. A few months later, while trying to do some big processing in Lightroom, I came to notice that Windows only saw half of the RAM. I thought it was a bad bank or something like that, but no matter how I shuffled the RAM around, Windows would only see 32GB of it, even though CPU-Z would see all eight banks.

At that point, Nikolaj suggested it could be an ME problem, so I went on and re-flashed the BIOS from scratch with an SPI flash adapter, but that didn’t help. Re-seating the CPU also didn’t help. I was appalled, but it was not enough to replace the board just yet, so I put the extra RAM to the side and soldiered on. I was wrong.

Last November, literally the day after my birthday, I came back home from a trip and wanted to download some dozens of GB of pictures I took… and my computer wouldn’t boot. The boot code showed the system blocked in a CSM (Compatibility Support Module) failure. Trying all the permutations of things to change helped nothing, so it was either the motherboard or the CPU — I took a bet on the motherboard given the previous history, and ordered an MSI X99 SLI Plus while I was in the US — it was significantly cheaper than in Europe.

My hunch was right: the new motherboard solved the problem. The specs of the two are about the same; there is even the same ASMedia USB controller, though this time the drivers install correctly, all the RAM is actually seen by the system now, and of course the computer boots. But that is only a superficial look at it; there is something else.

Both ASUS and MSI provide software utilities for overclocking, as is expected for motherboards designed for the Haswell-E family of processors. But the approaches the two take are significantly different. ASUS, it appears, encodes most of the logic in the software itself, with their “DIP5” core, while MSI keeps it in the firmware (which also seems to make the boot process a bit slower).

ASUS’s utility pack is called “AISuite”, and its major version is tied to the board’s generation: version 3 for the X99 motherboards. While there has been at least one update since the time I bought the board, the last release of the suite itself was on 2015-07-28. In addition to the overclocking UI, the suite includes a handful of other board-specific tools: one to set the bulk transfer mode sizes (to provide higher performance on USB3 non-UAS devices, not needed on Linux as the kernel does the right thing by default), one to allow faster charging of iPhone devices, and so on and so forth. Some of this is actually quite useful: for instance the faster USB transfer mode, although it also has the side effect of stopping the WD SmartWare tools from recognizing the drive, and so breaking your backups if you decided to use WD’s own tool rather than Microsoft’s.

On the other hand, a release for the DIP5 core was released on 2016-06-29, to support the new CPUs — their 2011-3 socket is full-pin, which allowed them to support a further generation of CPUs with only firmware updates. This is effectively an update for the various drivers needed for the underlying overclocking system, as well as a complete overhaul of the Suite UI — which is likely due to actually applying a newer-generation Suite to the motherboard.

Unfortunately, the new Suite UI does not come with a new set of add-ons for the charger, USB, and so on. This would be okay, except the add-ons' ABI changed: the moment you open the Suite app you have to dismiss error dialog after error dialog, as it tries to fetch icon files that do not exist. Copying the old PNG files into the new path stops it from throwing these errors, but the UI then clearly shows the wrong icons.

Oh, and by the way, starting AISuite with a different motherboard causes Windows 10 to blue-screen. I know because after booting my gamestation with the new motherboard I was welcomed by the blue screen of death and had a sinking feeling of dismay, expecting the CPU to be broken instead (turns out no, it was all AISuite's fault).

What about MSI's app then? Well, their approach appears to be significantly different: first of all, the overclocking app does overclocking and nothing else — they rely on ASMedia's own tooling and drivers for the USB bulk transfer reconfiguration, and provide an optional tool for the charging options. In the spirit of not reimplementing things, they also don't ship any new Windows driver for this, instead asking you to install the Intel ME drivers… which was fun, because the copy I had installed from before the motherboard replacement was newer than the one MSI provides on their website.

And this makes the MSI utility more interesting: it was last updated on 2016-12-06. Since they use the exact same package for all their boards, it includes no board-specific features and no drivers, so updating it is significantly simpler for them.

The end result is that I'm fairly happy. MSI does not ship the tons of crapware that ASUS appears to provide for their boards. They do come with a “Live Update” tool, which I wouldn't trust, though I have not tested it. Too many of those apps have forgotten to implement HTTPS, certificate validation or pinning, making them extremely risky to run, which is unfortunate.

As an aside: when you replace the motherboard of your computer, most systems that use computer authorization will consider it a new computer. That includes Microsoft's own Windows 10 license handling, as the Windows 10 license is tied to an EFI variable, from what I remember.

Of all those systems, Microsoft’s was the easiest to deal with, though. The system booted as unactivated, and they do try to point you towards buying a new license, burying the right interface behind “Troubleshooting”, but once you say “I changed hardware recently”, it allows you to just replace the previous computer authorization with the current one.

Both Google Play Music and iTunes require authorizing an additional computer, and that becomes a problem if you are close to the limit (because then you may have to unauthorize them all and re-authorize them). Stupid DRMs.

December 28, 2016
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Synchronised Playback and Video Walls (December 28, 2016, 18:01 UTC)

Hello again, and I hope you’re having a pleasant end of the year (if you are, maybe don’t check the news until next year).

I’d written about synchronised playback with GStreamer a little while ago, and work on that has been continuing apace. Since I last wrote about it, a bunch of work has gone in:

  • Landed support for sending a playlist to clients (instead of a single URI)

  • Added the ability to start/stop playback

  • The API has been cleaned up considerably to allow us to consider including this upstream

  • The control protocol implementation was made an interface, so you don’t have to use the built-in TCP server (different use-cases might want different transports)

  • Made a bunch of robustness fixes and documentation

  • Introduced API for clients to send the server information about themselves

  • Also added API for the server to send video transformations for specific clients to apply before rendering

While the other bits are exciting in their own right, in this post I’m going to talk about the last two items.

Video walls

For those of you who aren’t familiar with the term, a video wall is just an array of displays stacked to make a larger display. These are often used in public installations.

One way to set up a video wall is to have each display connected to a small computer (such as the Raspberry Pi), and have them play a part of the entire video, cropped and scaled for the display that is connected. This might look something like:

A 4×4 video wall

The tricky part, of course, is synchronisation — which is where gst-sync-server comes in. Since we’re able to play a given stream in sync across devices on a network, the only missing piece was the ability to distribute a set of per-client transformations so that clients could apply those, and that is now done.

In order to keep things clean from an API perspective, I took the following approach:

  • Clients now have the ability to send a client ID and a configuration (which is just a dictionary) when they first connect to the server

  • The server API emits a signal with the client ID and configuration, which allows you to know when a client connects, what kind of display it’s running, and where it is positioned

  • The server now has additional fields to send a map of client ID to a set of video transformations

This allows us to do fancy things like having each client manage its own information with the server dynamically adapting the set of transformations based on what is connected. Of course, the simpler case of having a static configuration on the server also works.
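To make the simpler static case concrete, a server-side configuration for a 2×2 wall over the 1920×800 source used in the demo below might map client IDs to crop regions, something along these lines (the IDs and field names here are invented for illustration, not gst-sync-server's actual schema):

```json
{
  "top-left":     { "crop": { "x": 0,   "y": 0,   "width": 960, "height": 400 } },
  "top-right":    { "crop": { "x": 960, "y": 0,   "width": 960, "height": 400 } },
  "bottom-left":  { "crop": { "x": 0,   "y": 400, "width": 960, "height": 400 } },
  "bottom-right": { "crop": { "x": 960, "y": 400, "width": 960, "height": 400 } }
}
```

Each client would receive only its own transformation, crop its copy of the synchronised stream accordingly, and scale the result to its display.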


Since seeing is believing, here’s a demo of the synchronised playback in action:

The setup is my laptop, which has an Intel GPU, and my desktop, which has an NVidia GPU. These are connected to two monitors (thanks go out to my good friends from Uncommon for lending me their thin-bezelled displays).

The video resolution is 1920×800, and I’ve adjusted the crop parameters to account for the bezels, so the video actually does look continuous. I’ve uploaded the text configuration if you’re curious about what that looks like.

As I mention in the video, the synchronisation is not as tight as I would like it to be. This is most likely because of the differing device configurations. I've been working with Nicolas to try to address this shortcoming by using some timing extensions that the Wayland protocol allows for. More news on this as it breaks.

More generally, I’ve done some work to quantify the degree of sync, but I’m going to leave that for another day.

p.s. the reason I used kmssink in the demo was that it was the quickest way I know of to get a full-screen video going — I’m happy to hear about alternatives, though

Future work

Make it real

My demo was implemented quite quickly by allowing the example server code to load and serve up a static configuration. What I would like is to have a proper working application that people can easily package and deploy on the kinds of embedded systems used in real video walls. If you’re interested in taking this up, I’d be happy to help out. Bonus points if we can dynamically calculate transformations based on client configuration (position, display size, bezel size, etc.)
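As for those bonus points, the geometry involved is simple enough to sketch. Assuming each client reports its grid position, panel size and bezel width, the crop rectangle for its slice of the source video can be computed like this (a stdlib-only sketch; the function and parameter names are made up for the example, not part of gst-sync-server):

```python
def tile_crop(src_w, src_h, cols, rows, col, row,
              panel_w_mm, panel_h_mm, bezel_mm=0.0):
    """Return (x, y, w, h) crop in source pixels for tile (col, row).

    The wall is treated as one big canvas in millimetres, bezels
    included (each inner seam is two bezels wide); the source video is
    mapped onto that canvas, and pixels that fall on a bezel are simply
    dropped, so motion stays continuous across the gaps.
    """
    wall_w = cols * panel_w_mm + (cols - 1) * 2 * bezel_mm
    wall_h = rows * panel_h_mm + (rows - 1) * 2 * bezel_mm
    # Source pixels per millimetre of wall.
    px_mm_x = src_w / wall_w
    px_mm_y = src_h / wall_h
    # Top-left corner of this panel's visible area, in mm on the canvas.
    x_mm = col * (panel_w_mm + 2 * bezel_mm)
    y_mm = row * (panel_h_mm + 2 * bezel_mm)
    return (round(x_mm * px_mm_x), round(y_mm * px_mm_y),
            round(panel_w_mm * px_mm_x), round(panel_h_mm * px_mm_y))
```

With zero bezel and a 2×2 wall over a 1920×800 source, each tile gets the expected 960×400 quadrant; a non-zero bezel shrinks each crop and shifts the inner tiles outward, which is exactly the adjustment described for the demo above.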

Hardware acceleration

One thing that’s bothering me is that the video transformations are applied in software using GStreamer elements. This works fine(ish) for the hardware I’m developing on, but in real life, we would want to use OpenGL(ES) transformations, or platform specific elements to have hardware-accelerated transformations. My initial thoughts are for this to be either API on playbin or a GstBin that takes a set of transformations as parameters and internally sets up the best method to do this based on whatever sink is available downstream (some sinks provide cropping and other transformations).

Why not audio?

I’ve only written about video transformations here, but we can do the same with audio transformations too. For example, multi-room audio systems allow you to configure the locations of wireless speakers — so you can set which one’s on the left, and which on the right — and the speaker will automatically play the appropriate channel. Implementing this should be quite easy with the infrastructure that’s currently in place.

Merry Happy *.*

I hope you enjoyed reading that — I’ve had great responses from a lot of people about how they might be able to use this work. If there’s something you’d like to see, leave a comment or file an issue.

Happy end of the year, and all the best for 2017!

December 26, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

You may remember that some months ago I stopped updating the blog. Part of it was the technical problem of having the content on Typo (for more than a few reasons), part of it was disappointment in the current free software and open source scene. I vented this disappointment to the people over at FSFE, and they suggested I should have gone to the yearly meeting in Berlin to talk about it, but I was otherwise engaged, and I really felt I needed to withdraw from the scene for a while to think things over.

Open source and free software have, over the years, attracted people for widely different reasons. In the beginning it was probably mostly the ethics, though it also attracted tinkerers and hackers of course. I was attracted to it as a user because I liked tinkering, and as a developer because I was hoping it would lead me to a good job. You can judge me if you want, but growing up in a blue-collar family, finding a good job was something I was taught to always be mindful of. It's not fair, but I had the skills and the time (and no extreme pressure to find said job), so I managed to spend a significant amount of time on free software development.

I went to a technical school, and when I joined, the default career out of it was working for a fairly big insurance company, whose headquarters were (and as far as I know still are) not far from where I grew up. Ending up at the Italian telco (Telecom Italia) was considered a huge score — this was a time before university was considered mandatory for going anywhere at all (not that I think that's the case now).

My hopes were to find something better: originally, that meant hoping to move to a slightly bigger city (Padua) and work at Sun Microsystems, which happened to have a local branch. And it seemed like open source would get you noticed. Of course, in the end I ended up slightly more north than planned – in Ireland – and Sun is gone, replaced by a much bigger corporation that, well, let's just say does not interest me at all.

Was open source needed for me to get where I am? Probably yes, if nothing else because it made me grow more than anything else I could have done otherwise. I see many of my classmates, who even after going to university ended up in the same insurance company, or at the local telco, or one of the many big consultancy companies. The latter is the group that tends to be the most miserable. On the other hand I also have colleagues at my current company who came from the same year of the same high school as me — except they went on to university, while I didn’t. I’m not sure who got the better deal but we’re all happy, mostly.

What I see right now, though, and what worries me a bit, is that many people see open source as a way to jump-start a startup. And that is perfectly okay if that's how you present yourself, but it is disingenuous when you effectively hide behind open-source projects to either avoid scrutiny or make yourself more appealing; what you end up with is a fake open source project, one that is probably not free software in its license, and quite possibly not in spirit either.

I have complained before about a certain project that refused my offer to work on a shared format specification because they thought a single contributor outside their core team could leave the scene at any moment. Even leaving aside the fact that I have probably maintained open-source code for longer than they have been active on said scene, this is quite the stance. Indeed, when I suggested this format was needed, their answer was that they were already working on one behind closed doors together with vendors. Given how they rebranded themselves from a diabetes management software to a quantified-self tool, I assume their talks with vendors went just about how everybody expected.

But that was just the final straw; the project itself was supposedly open source, but besides the obvious build or typo fixes, their open source repository only had internal contributions. I'm not sure whether that was because they didn't want to accept external contributions, or because potential contributors were turned off by CLA requirements or something like that; it's not something I cared enough to look into. In addition, most of the service beyond that repository was closed source, making it nearly impossible to leverage in a free way.

My impression at this point is that whereas before your average “hacker” would mostly be looking to publish whatever they were working on as an open source project, possibly to use it as a ticket to a job (either thanks to the skill it demonstrates, or by being paid to maintain the software), nowadays any side project is a chance at a startup… whether a business plan is available for it or not.

And because, as a startup, you want to make money at some point, you need at least some plan B, some part of the code held in reserve, that makes the startup itself valuable, at least up to a point. That usually makes for poor open source contributions, as noted above, and as, with impeccable timing, CyanogenMod just turned out to be. Similar things have happened and keep happening with OpenWRT too, although that one probably already went through its startup-importance phase into a more community-driven project, though clearly not a mature enough one.

So here it is: I'm grumpy and old at this point, but I think a lot of us in the free software and open source world should do better at treating community projects as community projects, and at guarding them from “startupization”, rather than just caring about firmware lock-in and “tivoization.” I wish I had a solution to this, but I really don't; I can only rant, at least for now.

Virtually rewiring laptop keyboards (December 26, 2016, 19:04 UTC)

You may remember I had problems with my laptop a few months ago, when it refused to boot until I unplugged the CMOS battery. This, by the way, happened again, to the point that I need to remember to buy a new CMOS battery next time I'm in the States (the European prices are insane, and I'll be back reasonably soon). This is the start of a story about the same laptop, but it has nothing to do with the CMOS this time.

I have recently replaced my work laptop, moving from the MacBook Pro I was using to an HP Chromebook. If you're curious about my reasons, they boil down to traveling too much and the MBP being too heavy. I briefly considered an Air, but given the direction those are going, the Chromebook works better for my work needs.

If you didn't know, Chromebooks don't come (by default) with a Caps Lock key. Maybe it's a public service, making it more difficult to shout on the Internet; maybe whoever designed the keyboards was nostalgic for the Control key in place of Caps Lock, I'm not sure. Instead of moving the Control key, they introduced a new Search button, which triggers the search box as well as functioning as a “Fn” modifier, to access features such as page up/down, home and end. I like the approach and it's actually fairly handy. Unfortunately it means that I now have a third way (in addition to the Asus and the Dell keyboards) to access these functions, which makes my muscle memory suffer badly. It also meant I kept typing all-caps on my Asus laptop when I tried (and failed) to use the modifier, and that was pissing me off.

Apple USB and Bluetooth keyboards have a Fn button too, but it's handled entirely in software. Indeed, if you have one of those keyboards, particularly the 60% version (the one without a numpad or a separate island of movement keys), and you want to use it on Linux, you need to enable a kernel module to implement the correct emulation. I know that because it bit me when they first introduced it, as I was using a full-size Apple keyboard at the time, and the numlock emulation was making me unable to type.

This is, give or take, the way it works on the Chromebook, mostly out of the necessity of sharing the Fn modifier with the Search button. And it allows you to change in software which key acts as Search/Fn, which is handy. Why can't I do that with my Asus laptop? Well, I can at least disable Caps Lock and replace it with Control, like so many people already do; after all, I use Emacs, and they tell me it's much better to use Emacs that way (I don't know about that; I tried it briefly, but my muscle memory works better with the pinky Control). But that's not exactly what I want.

I could try remapping Ctrl+arrows to behave the same way as Fn+arrows, but that's not quite what I want either, because then I'd lose the word-skipping behaviour I already get from Ctrl+arrows. So I need to come up with alternatives. Much as I wish this were going to be a step-by-step procedure to fix this, it's not; it's instead a musing on what may or may not work.

The first option would be to implement the Fn key in software, at the kernel, X11 or libinput level. This could actually be interesting as a way to make the Fn behaviour of Apple keyboards generic. I don't really know where to start with that one, because between systemd, libinput and Wayland the input layer flow has changed so much that I'm completely lost.
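Whatever layer it ends up in, the core of a software Fn key is just a stateful translation of key events. The sketch below shows only that logic, in stdlib-only Python; the actual device plumbing (grabbing the real keyboard and re-emitting events through uinput, for instance with the python-evdev library) is deliberately left out, so this is an illustration of the idea, not a working remapper:

```python
# Keycodes as defined in linux/input-event-codes.h.
KEY_LEFTMETA = 125  # the "Search" key on Chromebook-style keyboards
KEY_UP, KEY_DOWN, KEY_LEFT, KEY_RIGHT = 103, 108, 105, 106
KEY_PAGEUP, KEY_PAGEDOWN, KEY_HOME, KEY_END = 104, 109, 102, 107

# Chromebook-style layer: Fn+arrows become navigation keys.
FN_LAYER = {
    KEY_UP: KEY_PAGEUP,
    KEY_DOWN: KEY_PAGEDOWN,
    KEY_LEFT: KEY_HOME,
    KEY_RIGHT: KEY_END,
}

class FnTranslator:
    """Translate a stream of (keycode, pressed) events, treating one
    key (here the left meta/Search key) as an Fn modifier.

    A real remapper would grab the keyboard exclusively, feed every
    key event through translate(), and write the result to a virtual
    uinput device instead of letting the original event through."""

    def __init__(self, fn_key=KEY_LEFTMETA):
        self.fn_key = fn_key
        self.fn_down = False

    def translate(self, keycode, pressed):
        # The Fn key itself is swallowed; it only toggles the layer.
        if keycode == self.fn_key:
            self.fn_down = pressed
            return None
        if self.fn_down and keycode in FN_LAYER:
            return (FN_LAYER[keycode], pressed)
        return (keycode, pressed)
```

Press Fn then Down and a PageDown event comes out; release Fn and the arrows pass through untouched. A production version would also need to remember which layer a key was pressed in, so its release event matches even if Fn is let go first.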

The other option is more daring and possibly more interesting: rewiring the laptop keyboard by changing what the keys actually send over the PS/2 bus. As Hector suggested over Twitter, the keyboard is handled as part of the Embedded Controller (EC) firmware, and modifying a laptop's EC is not unheard of, although a quick search doesn't turn up anyone doing so on an Asus laptop to change the keyboard scancodes.

Does it mean I can do it? Does it mean I will? I’m not sure yet. Part of the problem is that playing around with an EC is the kind of thing that can easily brick your laptop, and this is currently my only Linux environment in which I do actual work. I could try to re-target my HTPC to be a workstation, and then hack on this laptop like it’s disposable, but the truth is that I spend enough time in the air that I really want to have a laptop, at least as a secondary system.

The first problem is figuring out how to run the update, and the first step of that is figuring out where the EC firmware is. In Matthew's posts, he found a promising area within the update file, based on the size and the (known) EC firmware version. In my case I don't have that luck, since the only version I can see from the Linux host is the BIOS revision, which is 219. On the other hand, the Asus download page for versions 212 and 216 explicitly mentions an EC firmware update, so if I guess which area of the firmware image is the EC firmware, those releases would at least make it easy to verify whether the guess is right.

But it might be easier than that. UEFITool supports reading these update files, as they are AMI Aptio capsules, so it should be possible to extract a listing of object trees and checksums that tells you what actually changed between two versions. Unfortunately that would only tell you what changed, not how, but it's a starting point. Also unfortunately, the documentation of the tool itself already points out that many AMI features are not implemented because of the author's NDA. And of course, the moment you look up the Aptio capsule format, you find a post by Nikolaj about the AFU utility.
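The "list of changed files" part, at least, is easy to prototype. Assuming the two firmware versions have been unpacked into directory trees (for instance with UEFIExtract, which ships alongside UEFITool), a stdlib-only comparison by content hash gets you there:

```python
import hashlib
import os

def tree_hashes(root):
    """Map each file's path (relative to root) to a SHA-256 of its content."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            hashes[os.path.relpath(path, root)] = digest
    return hashes

def diff_trees(old_root, new_root):
    """Return (added, removed, changed) relative paths between two dumps."""
    old, new = tree_hashes(old_root), tree_hashes(new_root)
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, changed
```

Run against the dumps of versions 212 and 216 (which claim an EC update) and a version that doesn't, the modules that only change in the former pair would be good EC-firmware candidates.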

This may be a throwaway post just to put a random idea out there, or it may be followed up with more details, and maybe some code to get the list of changed files in a capsule, but I have not started on this yet and I'm not sure I will. The tools are out there, and it would be an interesting game to play; the problem, nowadays, is mostly the time.

Of the two options, implementing a second Fn key (without changing the one that is there) is obviously the one with the most potential to be useful: if it can be made generic enough, it can be used on any keyboard, laptop or not, and might allow simplifying the Fn key handling for Apple keyboards, by moving it out of an Apple-specific driver. So if someone has ideas of where this should fit nowadays, I'm happy to hear about them.

December 23, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Glucometer Review: iHealth Align (December 23, 2016, 00:04 UTC)

You’d expect that with me being pretty happy with the FreeStyle Libre, new glucometer reviews would be unlikely. On the other hand, as you probably noticed as well, I like reverse engineering devices, and I have some opinions about accessing your own medical data (which I should write about at some point), so when I drop by the United States I check if they have any new glucometer being sold for cheap that I might enjoy reversing.

The iHealth Align is a bit special in this regard. Usually I just buy what's on offer at Walgreens or CVS (sometimes, thanks to rebates, paying just the taxes for it), but this time I ordered it straight from Amazon. And the reason is interesting: I found it with a “heavy” discount (the page I ordered it from is gone, but it was something along the lines of 30%), and I guess this is because the device is mostly advertised for iOS, and since it uses the TRRS (headphone) jack, it will not work on the more recent devices. On the other hand, it still works fine on Android.

Originally I wanted to wait to go back home to look into it, as I also bought some TRRS breakout connectors to be able to run my newly-bought Saleae Logic Pro 16 onto it, but a funny accident got me to open it while on my trip, in Pittsburgh, and try it out already.

As for the funny accident: I am not entirely sure how it happened, but after dinner with Rick, once I was back in my room, my FreeStyle Libre reported a fall from 10 mmol/L (180 mg/dL) to LO within two readings (and missed a few data points after that), and then reported me as having a low blood sugar event. I was not hypoglycemic at that point, as I can feel it when I am, so I double-checked with the new meter, which showed me just fine in the 7 mmol/L range, meaning I was good. After an hour or two the self-calibration of the sensor went back to normal and it aligned again with the other meter.

My doctor suggested that the -15℃ weather outside would affect the chemical reaction, making even blood testing unreliable. On the other hand, since I had a second occurrence of the same failure mode after a shower last night, it might be a defect of this particular sensor.

I'll start with a first impression before going into the details of what I found on deeper inspection of the device, mostly out of necessity, since I started this post while travelling through airports.

The device is very small and it's an active device: it has a button cell inside, and it comes with replacements, likely because they are not a very common type: CR1620 — not to be confused with CR2016! The package I got had no strips, but came with “sanitary phone covers” — I think they meant it for medical professionals rather than self-use, but that might explain the cheaper price.

To use it with your phone, you obviously have to install the iGluco application from iHealth, and that application obviously needs permission to use your microphone, since it talks over the audio jack. What surprised me was that just plugging the device in opens the application. I was scared that the application was constantly listening to the microphone waiting for a magic handshake, but a friend suggested that they may just be registering an intent for the jack-sensing, and playing back a handshake when they receive it. Plugging in a pair of headphones doesn't do anything, and I have yet to fire up the analyser to figure out if something else may be going on.

Before you can take a reading you have to sign up for the iHealth remote service and provide it with a valid email address. It also asks a couple of basic questions meant to be useful for tracking your health in general; I found those a tad creepy too, but I understand why they are there. It's not surprising, given that LibreLink does something very similar (without the questions, though). It does give me a bigger incentive to try to figure out the device, although I'm also concerned it might be too locked in to the vendor. On the bright side, I confirmed all the communication with the remote service happens over HTTPS; what I did not confirm is whether the app validates certificates correctly, so don't take my word for it.

There was a blast from the past when I tried using it the first time: the strips are coded. I have not seen a device using coded strips in years. The OneTouch Ultra strips were coded in the past, but for the past four years or so they have only sold code 25 — and they stopped selling them altogether in the UK and Ireland. To code the strips in, the strip bottles come with a big QR code on top. This QR code embeds the date of issue and expiration of the strips, and appears to provide a unique identifier for the bottle, as the application will track a shorter expiration time for the bottle once you start using it, which makes it hard (or impossible) to use two bottles in parallel (like I used to do, one at home and one at the office).

The app stores some interesting metadata with each reading, but that's nothing new. It's definitely easier to write notes or mark pre- or post-meal readings than with a normal glucometer, but again, this is not really exciting. It does, though, allow you to select which measurement unit you want your readings in, whether mg/dL or mmol/L. This marks a first in my experience; as far as I knew, it is against regulation in most countries to let the user switch units, as a patient misconfiguring the glucometer is a deadly risk. But who am I to complain? I wrote my tools because I needed to dump an Italian meter in a way that my Irish doctor liked.

As for the testing strips, they are huge compared to anything else I've used, although they don't require as much blood as their size might suggest. They are a bit difficult to fit properly into the meter at first, requiring a bit more force than I'm used to, and sometimes the app gets stuck if you don't fit them in properly. All in all, this makes it a bit of a shoddy meter on the practical level, in my opinion.

From what I can tell, iHealth has a newer model of their meter that uses Bluetooth instead of the audio jack, and is thus compatible with the new iPhone models, and that may be more user friendly with regard to the app freezing, but I would expect the issue with the force needed to fit the strip to still exist.

I have not yet started looking into the communication between device and phone yet, although I have all the pieces I need. This weekend I think I’ll be soldering up a couple of things I need and then post pictures of the resulting breadboards, I’m sure it’s going to be a funny one.

December 22, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux System Administration, 2nd Edition (December 22, 2016, 18:26 UTC)

While still working on a few other projects, one of the time consumers of the past half year (haven't you noticed? my blog was quite silent) has come to an end: the SELinux System Administration - Second Edition book is now available. With almost double the amount of pages and a serious update of the content, the book can now be bought either through Packt Publishing itself, or the various online bookstores such as Amazon.

With the holidays now approaching, I hope to be able to execute a few tasks within the Gentoo community (and of the Gentoo Foundation) and get back on track. Luckily, my absence was not jeopardizing the state of SELinux in Gentoo thanks to the efforts of Jason Zaman.

December 21, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I'm starting to write this blog post while sitting in a room full of system administrators and system engineers, at the opening talk of the 30th LISA conference by USENIX. This is one of my “usual” conferences at this point, since I started building a routine of conference-going a few years ago — having stable employment helps a lot with that.

A few days ago, I was sitting in front of people after our tutorial session, at an open table, answering questions from the attendees with no preparation. And one thing I realized, sitting next to colleagues who have much more experience than me – if not at our current company, then overall – is that I have grown since I started being active in the open source ecosystem, and in particular since I started blogging, just about 12 years ago.

And with growth, and time, I changed my mind, because that's what growing up means. That meant softening my views in some cases, and hardening them in others. The end result is that when I read back some of the things I wrote in 2006, I feel really ashamed. It would be all too easy to go back and delete those posts. Some of them I don't even have a copy of, because the migration of Planet Gentoo from Serendipity (does anyone remember that?) to WordPress lost anything that came after a non-ASCII character — and I use non-ASCII a lot. But on the other hand, I don't think that denying I wrote something is a good idea. I made mistakes and will keep making them over time. Being able to read how badly wrong I was, and to realize it, is a good thing, I think.

What I have been doing instead is adding notes to the posts I did go back and read, flagging where the (technical) content is out of date. I may do a little more of that, but I may also go through some of my older opinion pieces and write new posts on the same topics, pointing out why I think I was wrong and what I would do to make up for it, and cross-reference them.

It is a cathartic feeling to realize how much my point of view has changed. Even before I started blogging, I was the “16-year-old son of a friend who can do a better job” — and even now I think I did a good job at that, but I think that had less to do with the age of the people I was replacing, and more with the fact that good support is expensive, particularly so in Italy, where most of the people who have a clue would probably not be doing the MSP (sysadmin-for-hire) dance.

And having gone back to Venice just last month, and finding so many things stashed away from my youth, made things interesting: I realized how much magazines made me try things I would not otherwise have sought out myself. With the Internet at your fingertips at all times now, there is not much room to get to know things just because they share a magazine with the one section you're interested in. It's another viewport on the “bubble” that everybody seems to talk about in politics.

I may write a couple more entries talking about my past, so if you find them boring feel free to skip them. If you’re using NewsBlur you can filter them out with the Intelligence Trainer feature, by ignoring the life tag. This is possible because of my fix to Hugo, just so you know.

Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

OpenStack has a Grafana dashboard with infrastructure metrics, including CI jobs history (failure rate, …). These dashboards are configured via YAML files, hosted in the project-config repo, with the help of grafyaml.

As part of the Neutron stadium, projects like networking-sfc are expected to have a working Grafana dashboard tracking failure rates in the gates. I updated the configuration file for networking-sfc recently, but wanted to test these changes locally before sending them for review.

The documentation mentions the steps with the help of Puppet, but I wanted to try and configure a local test server by hand. Here are my notes on the process!

Installing the Grafana server

I ran this on a CentOS 7 VM, with some of the usual development packages already installed (git, gcc, python, pip, …). Some steps will be distribution-specific, like the Grafana install here.

Grafana has some nice documentation, but for my test server, I just installed it from the packagecloud repository:
[root@grafana ~]# wget
[root@grafana ~]# vi # Never blindly run a downloaded script ;)
[root@grafana ~]# bash

Then start the server:
[root@grafana ~]# systemctl start grafana-server
(optionally, run “systemctl enable grafana-server” if you want it to start at boot)
Then check that you can connect to http://${SERVER_IP}:3000; the default login/password is admin / admin.

Install and configure grafyaml

Seeing the main dashboard? Good, now open the API keys menu, and generate a key with Admin role (required as we will change the data source).

Now install grafyaml via pip (some distributions have a package for it, but not CentOS):
[root@grafana ~]# pip install grafyaml

Create the configuration file /etc/grafyaml/grafyaml.conf with the following content (use the API key you just generated):

[grafana]
url = http://localhost:3000
apikey = generated_admin_key

Configure a dashboard

Now get the current configuration for OpenStack dashboards, and add one of them:
[root@grafana ~]# git clone # or sync from your local copy
[root@grafana ~]# grafana-dashboard update project-config/grafana/datasource.yaml
[root@grafana ~]# grafana-dashboard update project-config/grafana/networking-sfc.yaml

The first update command adds the OpenStack graphite datasource; the second adds the current networking-sfc dashboard (the one I wanted to update in this case).
If everything went fine, refresh the Grafana page: you should be able to select the Networking SFC Failure rates dashboard and see the same graphs as on the main site.

Modifying the dashboard

But we did not set up this system just to mimic the existing dashboards, right? Now it's time to add your modifications to the dashboard YAML file and test them.

A small tip on metric names: if you want to be sure "stats_counts.zuul.pipeline.check.job.gate-networking-sfc-python27-db-ubuntu-xenial.FAILURE" is a correct metric, the datasource's web interface is your friend!
It allows you to look for metrics by exact name (Search), with some auto-completion help (Auto-completer), or by browsing a full tree (Tree).
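For reference, a dashboard definition in these YAML files looks roughly like this. The layout follows the project-config grafana files, but the row/panel options and values here are an illustrative sketch, not a copy of the real networking-sfc dashboard:

```yaml
# Hedged sketch of a grafyaml dashboard definition; key names follow
# the project-config grafana files, the values are illustrative.
dashboard:
  title: Networking SFC Failure Rate
  rows:
    - title: Gate Failure Rates
      height: 320px
      panels:
        - title: gate-networking-sfc-python27-db-ubuntu-xenial
          span: 4
          targets:
            - target: stats_counts.zuul.pipeline.check.job.gate-networking-sfc-python27-db-ubuntu-xenial.FAILURE
```

Running "grafana-dashboard update" on such a file creates or refreshes the dashboard named by its title.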

Now that you have your metrics, update the YAML file with new entries; then you can validate (the YAML structure only; for metric names see the previous paragraph) and update your Grafana dashboard with:
[root@grafana ~]# grafana-dashboard validate project-config/grafana/networking-sfc.yaml
[root@grafana ~]# grafana-dashboard update project-config/grafana/networking-sfc.yaml

Refresh your browser and you can see how your modifications worked out!

Next steps

Remember that this is a simple local test setup (default account, api key with admin privileges, manual configuration, …). This can be used as a base guide for a real grafana/grafyaml server, but the next steps are left as an exercise for the reader!

In the meantime, I found it useful to be able to try and visualize my changes before sending the patch for review.

December 08, 2016
Mike Pagano a.k.a. mpagano (homepage, bugs)

Just a quick note that I am walking the patch for CVE-2016-8655 down the gentoo-sources kernels.

Yesterday, I released the following kernels with the patch backported:


Updated: 12/08
Also patched:

Updated 12/09

Updated 12/11

If Alice does not get to the others before me, I will continue to walk down the versions until all of them are patched.


Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have never reviewed a boardgame before, but I thought I would make an exception for once because this particular game has interesting “social” effects, which might not be obvious at first.

The game is Codenames, which I played a few months ago at the office during one of our unofficial board game nights.

The mechanics of the game are better left to the rulebook, as I suck at explaining that, but the summary is that you get a (random) board of words on the table, and two teams are led by captains who need to find word associations so that their team chooses the right words, and avoids choosing the other team's words or the "killer word."

This might sound boring at first unless you really are into word-association games, which I am, and that is why I joined for this game. But what happened was much more interesting. To understand, let me describe the situation.

As I said we played this at the office, which meant that most of the players were software and systems engineers from my office, but there were at least a couple of SO guests, and one or two people from other parts of the company, around the corner where the “non-Eng” colleagues are. Since I work in Dublin, any random subset of engineers means that you get a nice sample of different nationalities — as sort-of expected, the age sampling is a bit more uniform, but even that was not a complete given, at 31 I was nicely in the middle of the spectrum.

This effectively meant that we had the winning combination: explaining the game was easy, but then, at the first board of words, came the first curious question: "What the heck is pewter?" And after that we even got to the point of learning that there is no word in French for pewter; they call it an "alloy of lead and tin." I do wonder how they translated Mistborn.

Things got even more interesting from there. As I said, we were an interesting mix of people: even though we were approximately all white, we all came from different countries, not even all European, and in a word-association game, that counted. Besides trying to figure out the meanings of various words, the native English (or American) speakers would point out when some word had more than one meaning, and it would become a team discussion trying to figure out whether the person giving the hint would know that other meaning or not.

I think the nicest example has been at some point when the captain was a visitor from the States, slightly older than me. He gives the hint “Actress”, on the table there is a “Theatre” (easy), and my team gravitates towards “Model”, while I notice “Temple.” And that was probably the longest discussion we had during the whole game. Why did I point at “Temple”? To me it was obviously a reference to Shirley Temple, but more than half my team never even heard of her! For some, a Shirley Temple is just a cocktail, and they never thought of where the name came from. Even so, would the captain know her? Would he choose such a hint knowing his audience? Or would he realize she’s not that well known across Europe?

In the end, he did see "Temple" in there; he thought it might be flagged, but was not aiming for it. He did intend to point to "Theatre" and "Model", and "Temple" was fine, so when we selected it we were still safe.

I had a go at being the captain, and I did well (I said I love word-association games): my team won without making any mistakes in choosing the words. Having noted the problem of knowing my audience, I did my best to limit the hints to things I could expect my colleagues to know about, rather than looking for very odd or unexpected meanings.

I have not tried experimenting with this again among more people I don't know, to figure out how well you can play when you don't even know which references you can make and which ones you can't. But I thought I would at least share my take on it.

December 02, 2016
10 year anniversary for (December 02, 2016, 18:55 UTC)

December 3rd 2016 marks 10 years since it was first announced on the sks-devel mailing list. The time really has passed by too quickly, driven by a community that is a pleasure to cooperate with. Sadly there is still a long way to go for OpenPGP to be used mainstream.

December 01, 2016

GraphicsMagick is an image processing system.

This is an old memory allocation failure, discovered some time ago. The maintainer, Mr. Bob Friesenhahn, was able to reproduce the issue; I'm quoting his feedback:

The problem is that the embedded JPEG data claims to have dimensions 59395×56833 and
this is only learned after we are in the JPEG reader.

But for some reason (maybe it is not easy to fix) it remained unfixed for a while.
EDIT: the patch was added, but I was not aware of that.

The complete ASan output:

# gm identify $FILE
==12404==ERROR: AddressSanitizer failed to allocate 0xfb8065000 (67511930880) bytes of LargeMmapAllocator (error code: 12)
==12404==Process memory map follows:
	0x000000400000-0x000000522000	/usr/bin/gm
	0x000000722000-0x000000723000	/usr/bin/gm
	0x000000723000-0x000000726000	/usr/bin/gm
	0x7fcc55fbe000-0x7fcc56027000	/usr/lib64/
	0x7fcc56027000-0x7fcc56226000	/usr/lib64/
	0x7fcc56226000-0x7fcc56227000	/usr/lib64/
	0x7fcc56227000-0x7fcc56228000	/usr/lib64/
	0x7fcc56228000-0x7fcc56254000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56254000-0x7fcc56453000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56453000-0x7fcc56454000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56454000-0x7fcc56457000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5645b000-0x7fcc5648c000	/usr/lib64/
	0x7fcc5648c000-0x7fcc5668b000	/usr/lib64/
	0x7fcc5668b000-0x7fcc5668c000	/usr/lib64/
	0x7fcc5668c000-0x7fcc5668d000	/usr/lib64/
	0x7fcc5668d000-0x7fcc5671d000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5671d000-0x7fcc5691d000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5691d000-0x7fcc5691f000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5691f000-0x7fcc56927000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56932000-0x7fcc5cfa4000	/usr/lib64/locale/locale-archive
	0x7fcc5fdff000-0x7fcc5fe08000	/usr/lib64/
	0x7fcc5fe08000-0x7fcc60007000	/usr/lib64/
	0x7fcc60007000-0x7fcc60008000	/usr/lib64/
	0x7fcc60008000-0x7fcc60009000	/usr/lib64/
	0x7fcc60009000-0x7fcc6001e000	/lib64/
	0x7fcc6001e000-0x7fcc6021d000	/lib64/
	0x7fcc6021d000-0x7fcc6021e000	/lib64/
	0x7fcc6021e000-0x7fcc6021f000	/lib64/
	0x7fcc6021f000-0x7fcc6022e000	/lib64/
	0x7fcc6022e000-0x7fcc6042d000	/lib64/
	0x7fcc6042d000-0x7fcc6042e000	/lib64/
	0x7fcc6042e000-0x7fcc6042f000	/lib64/
	0x7fcc6042f000-0x7fcc604d6000	/usr/lib64/
	0x7fcc604d6000-0x7fcc606d6000	/usr/lib64/
	0x7fcc606d6000-0x7fcc606dc000	/usr/lib64/
	0x7fcc606dc000-0x7fcc606dd000	/usr/lib64/
	0x7fcc606dd000-0x7fcc60730000	/usr/lib64/
	0x7fcc60730000-0x7fcc60930000	/usr/lib64/
	0x7fcc60930000-0x7fcc60931000	/usr/lib64/
	0x7fcc60931000-0x7fcc60936000	/usr/lib64/
	0x7fcc60936000-0x7fcc60ac9000	/lib64/
	0x7fcc60ac9000-0x7fcc60cc9000	/lib64/
	0x7fcc60cc9000-0x7fcc60ccd000	/lib64/
	0x7fcc60ccd000-0x7fcc60ccf000	/lib64/
	0x7fcc60cd3000-0x7fcc60ce9000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60ce9000-0x7fcc60ee8000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60ee8000-0x7fcc60ee9000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60ee9000-0x7fcc60eea000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60eea000-0x7fcc60ef0000	/lib64/
	0x7fcc60ef0000-0x7fcc610f0000	/lib64/
	0x7fcc610f0000-0x7fcc610f1000	/lib64/
	0x7fcc610f1000-0x7fcc610f2000	/lib64/
	0x7fcc610f2000-0x7fcc61109000	/lib64/
	0x7fcc61109000-0x7fcc61308000	/lib64/
	0x7fcc61308000-0x7fcc61309000	/lib64/
	0x7fcc61309000-0x7fcc6130a000	/lib64/
	0x7fcc6130e000-0x7fcc6140b000	/lib64/
	0x7fcc6140b000-0x7fcc6160a000	/lib64/
	0x7fcc6160a000-0x7fcc6160b000	/lib64/
	0x7fcc6160b000-0x7fcc6160c000	/lib64/
	0x7fcc6160c000-0x7fcc6160e000	/lib64/
	0x7fcc6160e000-0x7fcc6180e000	/lib64/
	0x7fcc6180e000-0x7fcc6180f000	/lib64/
	0x7fcc6180f000-0x7fcc61810000	/lib64/
	0x7fcc61810000-0x7fcc61e6e000	/usr/lib64/
	0x7fcc61e6e000-0x7fcc6206e000	/usr/lib64/
	0x7fcc6206e000-0x7fcc6209f000	/usr/lib64/
	0x7fcc6209f000-0x7fcc62125000	/usr/lib64/
	0x7fcc621a0000-0x7fcc621c2000	/lib64/
	0x7fcc62322000-0x7fcc62329000	/usr/lib64/gconv/gconv-modules.cache
	0x7fcc62329000-0x7fcc6234c000	/usr/share/locale/it/LC_MESSAGES/
	0x7fcc623c1000-0x7fcc623c2000	/lib64/
	0x7fcc623c2000-0x7fcc623c3000	/lib64/
	0x7ffcfee34000-0x7ffcfee55000	[stack]
	0x7ffcfef4c000-0x7ffcfef4e000	[vvar]
	0x7ffcfef4e000-0x7ffcfef50000	[vdso]
	0xffffffffff600000-0xffffffffff601000	[vsyscall]
==12404==End of process memory map.
==12404==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/ "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
    #0 0x4c9b3d in AsanCheckFailed /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #1 0x4d0673 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/
    #2 0x4d0861 in __sanitizer::ReportMmapFailureAndDie(unsigned long, char const*, char const*, int, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/
    #3 0x4d989a in __sanitizer::MmapOrDie(unsigned long, char const*, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/
    #4 0x421c2f in __sanitizer::LargeMmapAllocator::Allocate(__sanitizer::AllocatorStats*, unsigned long, unsigned long) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1033
    #5 0x421c2f in __sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback>, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >, __sanitizer::LargeMmapAllocator >::Allocate(__sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >*, unsigned long, unsigned long, bool, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1302
    #6 0x421c2f in __asan::Allocator::Allocate(unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #7 0x421c2f in __asan::asan_malloc(unsigned long, __sanitizer::BufferedStackTrace*) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #8 0x4c0201 in malloc /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #9 0x7fcc61c6a3f2 in MagickRealloc /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/memory.c:471:18
    #10 0x7fcc61cbb2b0 in OpenCache /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:3155:7
    #11 0x7fcc61cb98fd in ModifyCache /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:2955:18
    #12 0x7fcc61cbee4c in SetCacheNexus /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:3878:7
    #13 0x7fcc61cbf5e1 in SetCacheViewPixels /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:3957:10
    #14 0x7fcc61cbf5e1 in SetImagePixels /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:4023
    #15 0x7fcc56235483 in ReadJPEGImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/jpeg.c:1344:9
    #16 0x7fcc61ad3a8a in ReadImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13
    #17 0x7fcc566ed13e in ReadOneJNGImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/png.c:3308:17
    #18 0x7fcc566d6f72 in ReadJNGImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/png.c:3516:9
    #19 0x7fcc61ad3a8a in ReadImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13
    #20 0x7fcc61ad1a4b in PingImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9
    #21 0x7fcc61a23240 in IdentifyImageCommand /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17
    #22 0x7fcc61a27786 in MagickCommand /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17
    #23 0x7fcc61a81740 in GMCommandSingle /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10
    #24 0x7fcc61a7fce3 in GMCommand /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16
    #25 0x7fcc6095661f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #26 0x418cd8 in _init (/usr/bin/gm+0x418cd8)

/usr/bin/gm identify: abort due to signal 6 (SIGABRT) "Abort"...

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-10-19: bug discovered and reported privately to upstream
2016-10-21: upstream released a patch
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


graphicsmagick: memory allocation failure in MagickRealloc (memory.c)

libming is a Flash (SWF) output library. It can be used from PHP, Perl, Ruby, Python, C, C++, Java, and probably more on the way.

Fuzzing revealed a NULL pointer access in listswf. The bug does not reside in any shared object, but if you have a web application that calls the listswf binary directly to parse untrusted SWF files, then you are affected.

The complete ASan output:

# listswf $FILE
header indicates a filesize of 7917 but filesize is 187
File version: 100
File size: 187
Frame size: (8452,8981)x(-4096,0)
Frame rate: 67.851562 / sec.
Total frames: 16387
 Stream out of sync after parse of blocktype 2 (SWF_DEFINESHAPE). 166 but expecting 23.

Offset: 21 (0x0015)
Block type: 2 (SWF_DEFINESHAPE)
Block length: 0

 CharacterID: 55319
 RECT:  (-2048,140)x(0,-1548):12
 FillStyleArray:  FillStyleCount:     18  FillStyleCountExtended:      0
 FillStyle:  FillStyleType: 0
 RGBA: ( 0, 1,9a,ff)
 FillStyle:  FillStyleType: 7f
 FillStyle:  FillStyleType: b
 FillStyle:  FillStyleType: fb
 FillStyle:  FillStyleType: 82                                                                                                                                                                 
 FillStyle:  FillStyleType: 24                                                                                                                                                                 
 FillStyle:  FillStyleType: 67                                                                                                                                                                 
 FillStyle:  FillStyleType: 67                                                                                                                                                                 
 FillStyle:  FillStyleType: 18                                                                                                                                                                 
 FillStyle:  FillStyleType: 9d                                                                                                                                                                 
 FillStyle:  FillStyleType: 6d                                                                                                                                                                 
 FillStyle:  FillStyleType: d7                                                                                                                                                                 
 FillStyle:  FillStyleType: 97                                                                                                                                                                 
 FillStyle:  FillStyleType: 1                                                                                                                                                                  
 FillStyle:  FillStyleType: 26                                                                                                                                                                 
 FillStyle:  FillStyleType: 1a                                                                                                                                                                 
 FillStyle:  FillStyleType: 17                                                                                                                                                                 
 FillStyle:  FillStyleType: 9a                                                                                                                                                                 
 LineStyleArray:  LineStyleCount: 19                                                                                                                                                           
 LineStyle:  Width: 1722                                                                                                                                                                       
 RGBA: (7a,38,df,ff)                                                                                                                                                                           
 LineStyle:  Width: 42742                                                                                                                                                                      
 RGBA: ( 0, 0, 0,ff)                                                                                                                                                                           
 LineStyle:  Width: 70                                                                                                                                                                         
 RGBA: (10,91,64,ff)                                                                                                                                                                           
 LineStyle:  Width: 37031                                                                                                                                                                      
 RGBA: (e7,c7,15,ff)                                                                                                                                                                           
 LineStyle:  Width: 9591                                                                                                                                                                       
 RGBA: (dc,ee,81,ff)                                                                                                                                                                           
 LineStyle:  Width: 4249                                                                                                                                                                       
 RGBA: ( 0,ee,ed,ff)                                                                                                                                                                           
 LineStyle:  Width: 60909                                                                                                                                                                      
 RGBA: (ed,ed,ed,ff)                                                                                                                                                                           
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,a7,ff)
 LineStyle:  Width: 42919
 RGBA: (a7,a7,9c,ff)
 LineStyle:  Width: 40092
 RGBA: (9c,9c,9c,ff)
 LineStyle:  Width: 32156
 RGBA: (9c,bc,9c,ff)
 LineStyle:  Width: 33948
 RGBA: (9c,9c,9c,ff)
 LineStyle:  Width: 26404
 RGBA: ( 0, c,80,ff)
 LineStyle:  Width: 42752
 RGBA: (a7, 2, 2,ff)
 LineStyle:  Width: 514
 RGBA: (c6, 2, 0,ff)
 NumFillBits: 11
 NumLineBits: 13
 Curved EdgeRecord: 9 Control(-145,637) Anchor(-735,-1010)
 Curved EdgeRecord: 7 Control(-177,156) Anchor(16,32)
  StateNewStyles: 0 StateLineStyle: 1  StateFillStyle1: 0
  StateFillStyle0: 0 StateMoveTo: 0
   LineStyle: 257

Offset: 23 (0x0017)
Block type: 864 (Unknown Block Type)
Block length: 23

0000: 64 00 00 00 46 4f a3 12  00 00 01 9a 7f 0b fb 82    d...FO.. .......
0010: 24 67 67 18 9d 6d d7                               $gg..m.

Offset: 48 (0x0030)
Block type: 6 (SWF_DEFINEBITS)
Block length: 23

 CharacterID: 6694

Offset: 73 (0x0049)
Block length: 7

==27703==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000059d2ff bp 0x7ffe859e6fc0 sp 0x7ffe859e6f50 T0)
==27703==The signal is caused by a READ memory access.
==27703==Hint: address points to the zero page.
    #0 0x59d2fe in dumpBuffer /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/read.c:441:23
    #1 0x51c305 in outputSWF_UNKNOWNBLOCK /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:2870:3
    #2 0x51c305 in outputBlock /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:2937
    #3 0x527e83 in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:277:4
    #4 0x527e83 in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #5 0x7f0186c4461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #6 0x419b38 in _init (/usr/bin/listswf+0x419b38)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/read.c:441:23 in dumpBuffer

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-24: bug discovered and reported to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libming: listswf: NULL pointer dereference in dumpBuffer (read.c)

libming is a Flash (SWF) output library. It can be used from PHP, Perl, Ruby, Python, C, C++, Java, and probably more on the way.

Fuzzing revealed a heap-based buffer overflow in listswf. The bug does not reside in any shared object, but if you have a web application that calls the listswf binary directly to parse untrusted SWF files, then you are affected.

The complete ASan output:

# listswf $FILE
header indicates a filesize of 18446744072727653119 but filesize is 165
File version: 128
File size: 165
Frame size: (-4671272,-4672424)x(-4703645,4404051)
Frame rate: 142.777344 / sec.
Total frames: 2696

Offset: 25 (0x0019)
Block type: 67 (Unknown Block Type)
Block length: 24

0000: 00 97 6b ba 06 91 6f 98  7a 38 01 00 a6 e3 80 2c    ..k...o. z8.....,
0010: 77 25 d3 d3 1a 19 80 7f                            w%.....

Offset: 51 (0x0033)
Block type: 24 (SWF_PROTECT)
Block length: 1                                                                                                                                                                                
==3132==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eff1 at pc 0x000000499d10 bp 0x7ffc34a55e10 sp 0x7ffc34a555c0                                                       
READ of size 2 at 0x60200000eff1 thread T0                                                                                                                                                     
    #0 0x499d0f in printf_common /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/       
    #1 0x499a9d in printf_common /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/       
    #2 0x49abfa in __interceptor_vfprintf /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/    
    #3 0x509dd7 in vprintf /usr/include/bits/stdio.h:38:10                                                                                                                                     
    #4 0x509dd7 in _iprintf /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:144                                                                                            
    #5 0x51f1f5 in outputSWF_PROTECT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:1873:5                                                                                
    #6 0x51c35b in outputBlock /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:2933:4                                                                                      
    #7 0x527e83 in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:277:4                                                                                              
    #8 0x527e83 in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350                                                                                                     
    #9 0x7f0f1ff6861f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #10 0x419b38 in _init (/usr/bin/listswf+0x419b38)                                                                                                                                          
0x60200000eff1 is located 0 bytes to the right of 1-byte region [0x60200000eff0,0x60200000eff1)                                                                                                
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4d28f8 in malloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/                                                       
    #1 0x59b9ab in readBytes /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/read.c:201:17                                                                                             
    #2 0x592864 in parseSWF_PROTECT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:2668:26                                                                                   
    #3 0x5302cb in blockParse /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/blocktypes.c:145:14                                                                                      
    #4 0x527d4f in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:265:11                                                                                             
    #5 0x527d4f in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350                                                                                                     
    #6 0x7f0f1ff6861f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/ in printf_common                                                                                                                                                                      
Shadow bytes around the buggy address:
  0x0c047fff9da0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9db0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9de0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9df0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[01]fa
  0x0c047fff9e00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-24: bug discovered and reported to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libming: listswf: heap-based buffer overflow in _iprintf (outputtxt.c)

November 29, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Service Function Chaining demo with devstack (November 29, 2016, 14:09 UTC)

After a first high-level post, it is time to actually show networking-sfc in action! Based on a documentation example, we will create a simple demo where we route some HTTP traffic through a few VMs, and check the packets on them with tcpdump:

SFC demo diagram

This will be hosted on a single node devstack installation, and all VMs will use the small footprint CirrOS image, so this should run on “small” setups.

Installing the devstack environment

On your demo system (I used Centos 7), check out devstack on the Mitaka branch (remember to run devstack as a sudo-capable user, not root):

[stack@demo ~]$ git clone -b stable/mitaka

Grab my local configuration file that enables the networking-sfc plugin, rename it to local.conf in your devstack/ directory.
If you prefer to adapt your current configuration file, just make sure your devstack checkout is on the mitaka branch, and add the SFC parts:
enable_plugin networking-sfc

Then run the usual “./” command, and go grab a coffee.

Deploy the demo instances

To speed this step up, I grouped all the following steps into a script. You can check it out (at a revision tested for this demo):
[stack@demo ~]$ git clone -b sfc_mitaka_demo

The script will:

  • Configure security (disable port security, set a few things in security groups, create a SSH key pair)
  • Create source, destination systems (with a basic web server)
  • Create service VMs, configuring the network interfaces and static IP routing to forward the packets
  • Create the SFC items (port pair, port pair group, flow classifier, port chain)

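For reference, the SFC objects in that last step are created with commands along these lines. This is a hedged sketch, not the script's exact invocations: the PP/PG/FC_demo/PC1 names match the ones used later in this post, but the flow-classifier options are assumptions, so check the script itself for the real commands.

```shell
# One port pair per service VM (ingress + egress Neutron ports)
neutron port-pair-create --ingress p1in --egress p1out PP1
neutron port-pair-create --ingress p2in --egress p2out PP2
neutron port-pair-create --ingress p3in --egress p3out PP3

# Group VM1 and VM2 so traffic is distributed between them; VM3 gets its own group
neutron port-pair-group-create --port-pair PP1 --port-pair PP2 PG1
neutron port-pair-group-create --port-pair PP3 PG2

# Classify the HTTP traffic that should enter the chain
neutron flow-classifier-create --protocol tcp --destination-port 80:80 FC_demo

# Chain everything together
neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2 \
    --flow-classifier FC_demo PC1
```
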
I highly recommend reading it: it is mostly straightforward and commented, and it is where most of the interesting commands are hidden. So have a look before running it:
[stack@demo ~]$ ./openstack-scripts/
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
Updated network: private
Created a new port:

route: SIOCADDRT: File exists
WARN: failed: route add -net "" gw ""
You can safely ignore the route errors at the end of the script (they are caused by a duplicate default route on the service VMs).

Remember, from now on, to source the credentials file in your current shell before running CLI commands:
[stack@demo ~]$ source ~/devstack/openrc demo demo

We first get the IP addresses for our source and destination demo VMs:
[vagrant@defiant-devstack ~]$ openstack server show source_vm -f value -c addresses; openstack server show dest_vm -f value -c addresses

private=, fd73:381c:4fa2:0:f816:3eff:fe65:12fd

Now, we look for the tap devices associated with our service VMs:
[stack@demo ~]$ neutron port-list -f table -c id -c name

| name           | id                                   |
| p1in           | 897df85a-26c3-4491-888e-8cc58f19cea1 |
| p1out          | fa838294-317d-46df-b10e-b1734dd62faf |
| p2in           | c86dafc7-bda6-4537-b806-be2282f7e11e |
| p2out          | 12e58ea8-a9ab-4d0b-9fd7-707dc6e99f20 |
| p3in           | ee14f406-e9d6-4047-812b-aa04514f50dd |
| p3out          | 2d86403b-4639-40a0-897e-68fa0c759f01 |

These device names follow the tap<first 11 characters of the port ID> pattern, so for example tap897df85a-26 is the tap device associated with the p1in port here.
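If you want to derive the tap name yourself, a bash substring expansion does the trick:

```shell
# Derive the tap device name from a Neutron port ID:
# "tap" + the first 11 characters of the port ID
port_id=897df85a-26c3-4491-888e-8cc58f19cea1
tap_name="tap${port_id:0:11}"
echo "$tap_name"   # prints tap897df85a-26
```
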

See SFC in action

In this example we run a request loop from source_vm to dest_vm (remember to use the IP addresses found in the previous section):
[stack@demo ~]$ ssh cirros@
$ while true; do curl; sleep 1; done
Welcome to dest-vm
Welcome to dest-vm
Welcome to dest-vm

So we do have access to the web server! But do the packets really go through the service VMs? To confirm that, in another shell, run tcpdump on the tap interfaces:

# On the outgoing interface of VM 3
$ sudo tcpdump port 80 -i tap2d86403b-46
tcpdump: WARNING: tap2d86403b-46: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap2d86403b-46, link-type EN10MB (Ethernet), capture size 65535 bytes
11:43:20.806571 IP > Flags [S], seq 2951844356, win 14100, options [mss 1410,sackOK,TS val 5010056 ecr 0,nop,wscale 2], length 0
11:43:20.809472 IP > Flags [.], ack 3583226889, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 0
11:43:20.809788 IP > Flags [P.], seq 0:136, ack 1, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 136
11:43:20.812226 IP > Flags [.], ack 39, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 0
11:43:20.817599 IP > Flags [F.], seq 136, ack 40, win 3525, options [nop,nop,TS val 5010059 ecr 5008746], length 0

Here are some other examples (skipping the tcpdump output for clarity):
# You can check other tap devices, confirming both VM 1 and VM2 get traffic
$ sudo tcpdump port 80 -i tapfa838294-31
$ sudo tcpdump port 80 -i tap12e58ea8-a9

# Now we remove the flow classifier, and check the tcpdump output
$ neutron port-chain-update --no-flow-classifier PC1
$ sudo tcpdump port 80 -i tap2d86403b-46 # Quiet time

# We restore the classifier, but remove the group for VM3, so tcpdump will only show traffic on other VMs
$ neutron port-chain-update --flow-classifier FC_demo --port-pair-group PG1 PC1
$ sudo tcpdump port 80 -i tap2d86403b-46 # No traffic
$ sudo tcpdump port 80 -i tapfa838294-31 # Packets!

# Now we remove VM1 from the first group
$ neutron port-pair-group-update PG1 --port-pair PP2
$ sudo tcpdump port 80 -i tapfa838294-31 # No more traffic
$ sudo tcpdump port 80 -i tap12e58ea8-a9 # Here it is

# Restore the chain to its initial demo status
$ neutron port-pair-group-update PG1 --port-pair PP1 --port-pair PP2
$ neutron port-chain-update --flow-classifier FC_demo --port-pair-group PG1 --port-pair-group PG2 PC1

Where to go from here

Between these examples, the commands used in the demo script, and the documentation, you should have enough material to try your own commands! So have fun experimenting with these VMs.

Note that in the meantime we released the Newton version (3.0.0), which also includes the initial OpenStackClient (OSC) interface, so I will probably update this to run on Newton and with some shiny “openstack sfc xxx” commands. I also hope to make a nicer-than-tcpdumping-around demo later on, when time permits.

November 21, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.3 (November 21, 2016, 12:40 UTC)

Ok, I slacked by not posting for v3.1 and v3.2, and I should have, since those previous versions were awesome and feature rich.

But v3.3 is another major milestone which was made possible by tremendous contributions from @tobes as usual and also greatly thanks to the hard work of @guiniol and @pferate who I’d like to mention and thank again !

Also, I’d like to mention that @tobes has become the first collaborator of the py3status project !

Instead of doing a changelog review, I’ll highlight some of the key features that got introduced and extended during those versions.

The py3 helper

Writing powerful py3status modules has never been so easy, thanks to the py3 helper !

This magical object is added automatically to modules and provides a lot of useful methods to help normalize and enhance module capabilities. This is a non-exhaustive list of such methods:

  • format_units: to pretty format units (KB, MB etc)
  • notify_user: send a notification to the user
  • time_in: to handle module cache expiration easily
  • safe_format: use the extended formatter to handle the module’s output in a powerful way (see below)
  • check_commands: check if the listed commands are available on the system
  • command_run: execute the given command
  • command_output: execute the command and get its output
  • play_sound: sound notifications !
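To give an idea of how these helpers fit together, here is a minimal module sketch. The uptime example and the command it runs are my own illustration, not one of the shipped modules:

```python
# Illustrative py3status module sketch (not a shipped module).
# py3status injects self.py3 automatically when the module is loaded.
class Py3status:
    cache_timeout = 10

    def uptime(self):
        # run a command and grab its output via the py3 helper
        out = self.py3.command_output("uptime -p").strip()
        return {
            "full_text": self.py3.safe_format("{up}", {"up": out}),
            # expire the cache after cache_timeout seconds
            "cached_until": self.py3.time_in(self.cache_timeout),
        }
```
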

Powerful control over the modules’ output

Using the self.py3.safe_format helper unleashes a feature-rich formatter that one can use to conditionally select the output of a module based on its content.

  • Square brackets [] can be used. Their content will be removed from the output if there is no valid placeholder contained within. They can also be nested.
  • A pipe (vertical bar) | can be used to divide sections; only the first valid section will be shown in the output.
  • A backslash \ can be used to escape a character eg \[ will show [ in the output.
  • \? is special and is used to provide extra commands to the format string, example \?color=#FF00FF. Multiple commands can be given using an ampersand & as a separator, example \?color=#FF00FF&show.
  • {<placeholder>} will be converted, or removed if it is None or empty. Formatting can also be applied to the placeholder eg {number:03.2f}.

Example format_string:

This will show artist - title if artist is present, title if title but no artist, and file if file is present but not artist or title.

"[[{artist} - ]{title}]|{file}"

More code and documentation tests

A lot of effort has been put into py3status' automated CI and feature testing, allowing more confidence in the advanced features we develop while keeping a higher standard of code quality.

This goes as far as testing even modules' docstrings for bad formatting 🙂

Colouring and thresholds

A special effort has been put into normalizing modules' output colouring, with the added refinement of normalized thresholds to give users more power over their output.

New modules, on and on !

  • new clock module to display multiple time and date information in a flexible way, by @tobes
  • new coin_balance module to display balances of diverse crypto-currencies, by Felix Morgner
  • new diskdata module to show both usage data and IO data from disks, by @guiniol
  • new exchange_rate module to check for your favorite currency rates, by @tobes
  • new file_status module to check the presence of a file, by @ritze
  • new frame module to group and display multiple modules inline, by @tobes
  • new gpmdp module for Google Play Music Desktop Player by @Spirotot
  • new kdeconnector module to display information about Android devices, by @ritze
  • new mpris module to control MPRIS enabled music players, by @ritze
  • new net_iplist module to display interfaces and their IPv4 and IPv6 IP addresses, by @guiniol
  • new process_status module to check the presence of a process, by @ritze
  • new rainbow module to enlight your day, by @tobes
  • new tcp_status module to check for a given TCP port on a host, by @ritze


The changelog is very big and the next 3.4 milestone is very promising with amazing new features giving you even more power over your i3bar, stay tuned !

Thank you contributors

Still a lot of new first-time contributors, which I take great pride in, as I see it as a sign that py3status is an accessible project.

  • @btall
  • @chezstov
  • @coxley
  • Felix Morgner
  • Gabriel Féron
  • @guiniol
  • @inclementweather
  • @jakubjedelsky
  • Jan Mrázek
  • @m45t3r
  • Maxim Baz
  • @pferate
  • @ritze
  • @rixx
  • @Spirotot
  • @Stautob
  • @tjaartvdwalt
  • Yuli Khodorkovskiy
  • @ZeiP

November 10, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)

Open Source Conference 2016 Tokyo

Many people came to the Gentoo booth,
mainly students and Open Source users
asking for Gentoo information.

We gave away around 200 flyers, and
many many stickers during the two days.

Unfortunately the stickers we ordered
from unixsticker had some SVG problems.

We also had on display some esoteric
environments like the Sharp IS01,
of course running Gentoo both natively
and as a Prefix install.
Naturally, one of the first things we tried
was the 5-minute-long Gentoo sl command.

image from: @NTSC_J

We also had a Gentoo notebook
running wayland (the one in the middle).

It was an amazing event and I would
like to thank everyone who came to
the Gentoo booth, everyone who helped
set up the Gentoo booth, and the whole
amazing Gentoo community.

November 07, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
What is “Service Function Chaining”? (November 07, 2016, 16:59 UTC)

This is the first article in a series about Service Function Chaining (SFC for short), and its OpenStack implementation, networking-sfc, that I have been working on.

The SFC acronym can easily appear in Software-defined networking (SDN), in a paper about Network function virtualization (NFV), in some IETF documents, … Some of these broader subjects use other names for SFC elements, but this is probably a good topic for another post/blog.
If you already know SFC elements, you can probably skip to the next blog post.


So what is this “Service Function Chaining”? Let me quote the architecture RFC:

The delivery of end-to-end services often requires various service functions. These include traditional network service functions such as firewalls and traditional IP Network Address Translators (NATs), as well as application-specific functions. The definition and instantiation of an ordered set of service functions and subsequent "steering" of traffic through them is termed Service Function Chaining (SFC).

I see SFC as routing at a higher level of abstraction: in a typical network, you route all the traffic coming from the Internet through a firewall box. So you set up the firewall system, with its network interfaces (Internet and intranet sides), and add some IP routes to steer the traffic through it.
SFC uses the same concept, but with logical blocks: if a packet matches some conditions (it is Internet traffic), force it through a series of "functions" (in this case, only one function: a firewall system). And voilà, you have your Service Function Chain!

I like this simple comparison as it introduces most of the SFC elements:

  • service function: a.k.a. “bump in the wire”. This is a transparent system that you want some flows to go through (typical use cases: firewall, load balancer, analyzer).
  • flow classifier: the "entry point"; it determines if a flow should go through the chain. This can be based on IP attributes (source/dest address/port, …), layer 7 attributes, or even metadata in the flow set by a previous chain.
  • port pair: as the name implies, this is a pair of ports (network interfaces) for a service function (the firewall in our example). The traffic is routed to the "in" port and is expected to exit the VM through the "out" port. This can be the same port.
  • port chain: the SFC object itself, a set of flow classifiers and a set of port pairs (that define the chain sequence).

An additional type not mentioned before is the port pair group: if you have multiple service functions of an identical type, you can regroup them to distribute the flows among them.

Use cases and advantages

OK, after seeing all these definitions, you may wonder “what’s the point?” What I have seen so far is that it allows:

  • complex routing made easier: define a sequence of logical steps, and the flow will go through it.
  • HA deployments: add multiple VMs in the same group, and the load will be distributed between them.
  • dynamic inventory: add or remove functions dynamically, either to scale a group (add a load balancer, remove an analyzer), change the order of functions, or add a new function in the middle of a chain.
  • complex classification: flows can be classified based on L7 criteria, or on output from a previous chain (for example a Deep Packet Inspection system).

Going beyond these technical advantages, you can read an RFC that is actually a direct answer to this question: RFC 7498

Going further

To keep a reasonable post length, I did not talk about:

  • How does networking-sfc tag traffic? Hint: MPLS labels
  • Service functions may or may not be SFC-aware: proxies can handle the SFC tagging
  • Upcoming feature: support for Network Service Header (NSH)
  • Upcoming feature: SFC graphs (allowing complex chains and chains of chains)
  • networking-sfc modularity: the reference implementation uses OVS, but this is just one of the possible drivers
  • Also, networking-sfc architecture in general
  • SFC use in VNF Forwarding Graphs (VNFFG)


SFC has abundant documentation, both in the OpenStack project and outside. Here is some additional reading if you are interested (mostly networking-sfc focused):

Denis Dupeyron a.k.a. calchan (homepage, bugs)
SCALE 15x CFP is closing soon (November 07, 2016, 04:07 UTC)

Just a quick reminder that the deadline for proposing a talk to SCALE 15x is on November 15th. More information, including topics of interest, is available on the SCALE website.

SCALE 15x is to be held on March 2-5, 2017 at the Pasadena Convention Center in Pasadena, California, near Los Angeles. This is the same venue as last year and is much nicer than the original one from the years before.

I’ll see you there.

November 06, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-10-01 Gentoo Study Meeting (November 06, 2016, 18:24 UTC)

Gentoo Study Meeting talks (English summary):  
Live broadcast:  

    First Gentoo Study Meeting Tokyo with  
    How to become Gentoo Developer introduction talk.  
            Contributing Ebuilds:  
                - sending Git pull requests  
                - searching for a mentor on proxy-maint  
                - asking in #gentoo-proxy-maint  
                - Using  
        Non-committer developers:  
            - Contributing to Gentoo projects, with work that 
              does not need Gentoo git repository access.  
            - Contributing to the wiki (not only translation; note that 
              translators need the wiki translator permission)  
    How to get help in Japanese:  
        - #gentoo-ja Freenode  
        - ?forum  
        - Gentoo勉強会 (Gentoo Study Meeting)  
    Gentoo News update:  
        Talk about Future EAPI 7 ulm slide  
            Question: When are new EAPIs released?  
                I think there is no set release date for EAPIs  
            New feature  
                - Runtime-switchable useflag  
                - eqwarn  
                - dohtml  
                - package.provided in profiles  
                - DESTTREE and INSDESTTREE  
    Talk about the presence of a Gentoo booth at Open Source Conference 
    2016 Tokyo:  
        - Stickers  
        ask the Foundation:  
        - Banner  
            size and format  
        - Table cover  
            size and format  
        Presenter: Matsuu san  
        Slide: Isucon 6  
            - Team tuning speed contest  
            - This time it was a tuning speed contest on Azure.  
                Only distributions backed by a company can get support on Azure.  
                Debian has a third-party company supporting it on Azure.  
                Gentoo also needs something similar.  
            - It is good to practice past problems to score higher on ISUCON.  
                Vagrant is nice to use for working on previous problems  
            - Go language, varnish+ESI, mysql  
            - access log 
              analyzer for isucon/tuning  
            - sshrc 
              bring your .bashrc, .vimrc, etc. with you when you ssh.  
            - Matsuu-san has been chosen to become staff for the 
              ISUCON presentation in the future.  
        Presenter: @tkshnt  
        Slide: Report on last update  
            - let's make a Gentoo goods shop for Gentoo-JP  
                previous OSC item:  
                    - t-shirt (@matsuu, @naota)  
                    - stickers (@matsuu)  
                next item:  
                    - Gentoo Tenugui (手拭い)  
                OSC booth:  
                    - presentation  
                    - flyer  
                Design repository:  
                    - Github  
                        - project management  
                        - simple file upload  
        Presenter: @d_aki  
        Slide: my chaotic /etc/portage  
            - package.use can become chaotic  
            - /var/lib/portage/world: difficult to 
              remember when you added something and why  
            - let's use the package.use directory and name each file 
              after what you are installing  
            - record not what but why you installed the package  
        Presenter: alicef  
        Slide: How to contribute on Gentoo Github  
            - recently the Gentoo CVS repository has been converted to Git  
            - Using the GitHub mirror it is possible to send pull requests.  
            - Good points of pull requests:  
                - Code comments and reviews from more than one developer  
                - a fast way to send ebuild patches upstream  
                - automatic QA checks  
            - Bad points of pull requests:  
                - the reviews are open for everyone to see  
                - basic git knowledge is needed  
            When cloning the Gentoo repository:  
                Use git clone --depth=50  
                for fast pull requests with less log information  
                git clone and git clone --depth=50 time difference:  
        Presenter: @usaturn  
        Slide: systemd-nspawn & btrfs  
            - On Gentoo using systemd-nspawn  
                - copy on write  
                - using subvolumes we can make snapshots  
                - compression is possible  
                - cannot make a swapfile  
                - a unit is the process file manager  
                - using the systemd stage 3 it is simple to install  
                - not using syslog but journald  
                - network settings via networkd  
                - instead of cron there are timers  
                - instead of ntp there is systemd-timesyncd  
                - grub is not needed; instead systemd-boot 
                  (ex gummiboot) works as the bootloader  
                - docker is not needed; instead systemd-nspawn 
                  via the machinectl command (good for testing Gentoo packages)  
        Presenter: @naota344  
        Slide: automatically resolving conflicts  
            Gentoo developer, btrfs, linux kernel, emacs, T-code  
                resolving conflicts:  
                    - when a USE flag is needed, it will ask to 
                      add the USE flag.  
                    - when a circular dependency is detected, it will 
                      ask to remove a USE flag, for example  
                Why there is a conflict:  
                    - before installing a new package, we 
                      have a package (for example perl-5.20) with all 
                      of its dependency packages set  
                    - when we update world and get a 
                      new package update (for example perl-5.22),
                       some dependencies of perl-5.22 also get new updates  
                    - in this situation it can happen that some dependency
                      on perl-5.20 gets in conflict with perl-5.22  
                How can we fix such a situation:  
                    - we have the option to add --reinstall-atoms="Y"
                      to the emerge command (Y = name of the dependency
                      package that is causing the problem)  
                    - with this option, instead of just updating the
                      packages, emerge will reinstall them as if 
                      they were not installed, solving the dependency conflict
                Why does Portage decide not to fix such 
                dependencies automatically?  
                    maybe because trying to fix all the dependencies would not
                    work correctly  
                When Portage has conflicts for many packages,  
                    it becomes more complicated and we end up with a command
                    similar to this:  
                    --reinstall-atoms="A B C D E F G H I L M N ..."  
                To solve this problem there is emerge --reinstall-atoms  
                    - automatically fixing circular dependencies  
                    - showing the dependency graph  
                    - there is also a function to try out the 
                      dependency graph in a container  
                    - emerge analyzer tool  

November 04, 2016
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
GStreamer and Synchronisation Made Easy (November 04, 2016, 10:16 UTC)

A lesser known, but particularly powerful feature of GStreamer is our ability to play media synchronised across devices with fairly good accuracy.

The way things stand right now, though, achieving this requires some amount of fiddling and a reasonably thorough knowledge of how GStreamer’s synchronisation mechanisms work. While we have had some excellent talks about these at previous GStreamer conferences, getting things to work is still a fair amount of effort for someone not well-versed with GStreamer.

As part of my work with the Samsung OSG, I’ve been working on addressing this problem, by wrapping all the complexity in a library. The intention is that anybody who wants to implement the ability for different devices on a network to play the same stream and have them all synchronised should be able to do so with a few lines of code, and the basic know-how for writing GStreamer-based applications.

I’ve started work on this already, and you can find the code in the creatively named gst-sync-server repo.

Design and API

Let’s make this easier by starting with a picture …

Big picture of the architecture

Let’s say you’re writing a simple application where you have two or more devices that need to play the same video stream, in sync. Your system would consist of two entities:

  • A server: this is where you configure what needs to be played. It instantiates a GstSyncServer object on which it can set a URI that needs to be played. There are other controls available here that I’ll get to in a moment.

  • A client: each device would be running a copy of the client, and would get information from the server telling it what to play, and what clock to use to make sure playback is synchronised. In practical terms, you do this by creating a GstSyncClient object, and giving it a playbin element which you’ve configured appropriately (this usually involves at least setting the appropriate video sink that integrates with your UI).

That’s pretty much it. Your application instantiates these two objects, starts them up, and as long as the clients can access the media URI, you magically have two synchronised streams on your devices.


The keen observers among you would have noticed that there is a control entity in the above diagram that deals with communicating information from the server to clients over the network. While I have currently implemented a simple TCP protocol for this, my goal is to abstract out the control transport interface so that it is easy to drop in a custom transport (Websockets, a REST API, whatever).

The actual sync information is merely a structure marshalled into a JSON string and sent to clients every time something happens. Once your application has some media playing, the next thing you’ll want to do from your server is control playback. This can include

  • Changing what media is playing (like after the current media ends)
  • Pausing/resuming the media
  • Seeking
  • “Trick modes” such as fast forward or reverse playback

The first two of these already work, and seeking is on my short-term to-do list. Trick modes, as the name suggests, can be a bit more tricky, so I’ll likely get to them after other things are done.
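To make the "structure marshalled into a JSON string" idea concrete, here is a purely illustrative sketch; every field name below is an assumption for illustration, not gst-sync-server's actual wire format:

```python
import json

# Hypothetical sync message: the kind of state a server might broadcast
# to clients whenever playback changes (field names are assumptions).
sync_info = {
    "uri": "http://example.com/video.webm",  # what the clients should play
    "base-time": 1478252416000000000,        # hypothetical pipeline base time, in ns
    "paused": False,                         # current playback state
}

# marshalled to a JSON string and sent over the control transport
message = json.dumps(sync_info)
print(message)
```
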

Getting fancy

My hope is to see this library being used in a few other interesting use cases:

  • Video walls: having a number of displays stacked together so you have one giant display — these are all effectively playing different rectangles from the same video

  • Multiroom audio: you can play the same music across different speakers in a single room, or multiple rooms, or even group sets of speakers and play different media on different groups

  • Media sharing: being able to play music or videos on your phone and have your friends be able to listen/watch at the same time (a silent disco app?)

What next

At this point, the outline of what I think the API should look like is done. I still need to create the transport abstraction, but that’s pretty much a matter of extracting out the properties and signals that are part of the existing TCP transport.

What I would like is to hear from you, my dear readers who are interested in using this library — does the API look like it would work for you? Does the transport mechanism I describe above cover what you might need? There is example code that should make it easier to understand how this library is meant to be used.

Depending on the feedback I get, my next steps will be to implement the transport interface, refine the API a bit, fix a bunch of FIXMEs, and then see if this is something we can include in gst-plugins-bad.

Feel free to comment either on the Github repository, on this blog, or via email.

And don’t forget to watch this space for some videos and measurements of how GStreamer synchronisation fares in real life!

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

At the end of August—I know that it’s now November, but time seems to get away from me more often these days—I got the honour of trying the new 2015 vintage of Syncopation red blend (read about the 2014 vintage here) from Mike Ward on Wine! This is the second year that Mike has produced the incredible blend that changed my perspective on Missouri wines, and this year, it was joined by the new Acoustic white blend. Before getting into the new white blend, let’s take a look at the changes for this 2015 release of Syncopation Rhythmic red blend.

2015 Ward on Wine Syncopation Rhythmic Red and Acoustic White

Unlike the 2014 vintage—which was a blend of Chambourcin, Vidal blanc, Seyval blanc, and Traminette—this year was a cuvée of Chambourcin, Vignoles, Norton, and Traminette. So, the primary varietal is still Chambourcin, and the Traminette remains (though is slightly more prominent than last year). The Vidal blanc and Seyval blanc, though, were replaced by Vignoles and Norton. The breakdown in varietals is 70% Chambourcin and 10% each of the remaining three grapes.

Seeing as the Vidal blanc and Seyval blanc, which are both white grapes, were replaced by Vignoles (a complex hybrid) and Norton (a very deep purple grape, somewhat resembling Concords), I didn’t really have any idea what to expect from this new blend. Below are my impressions:

2015 Syncopation Rhythmic Red blend – tasting notes:
With its beautiful ruby-to-garnet colour, this wine shows wonderfully when backlit in the glass. Subdued purples shine through the burgundy in the centre, and it is encompassed by a dark pink ring at the edges. A bouquet of red plum and blueberries is evident, but completely unassuming and lovely in its simplicity. Interestingly, though, those fruits didn’t come through for me in taste. Instead, I found raspberry, strawberry, and forest underbrush (akin to some Pinot noirs from Oregon’s Willamette Valley) to be much more prominent on the palate. Those flavours were further complemented by slight hints of clove and white pepper. Fascinatingly, though this is not a sparkling wine in any way, there was a slight effervescent feel upfront. Like the previous vintage, I found that this Syncopation red blend is best enjoyed with a slight chill on it (14-16°C / 57-61°F).

Mike Ward of Ward on Wine with his 2015 Syncopation wines
Mike and his 2015 Syncopation wines
2015 Syncopation Rhythmic Red blend with a glass and Sommelier knife
Syncopation Rhythmic Red

I was quite confident that I would enjoy this new vintage of Syncopation red, but I wasn’t sure how I would feel about the new Acoustic white blend since this year was its debut. Once again, Mike Ward challenged what I thought I knew about my taste preferences by creating an absolutely outstanding white wine that is sure to please a wide array of tastes! Syncopation Acoustic White is a blend of 70% Vignoles, 20% Vidal Blanc, and 10% Traminette.

2015 Syncopation Acoustic White blend – tasting notes:
A light but vivid yellow in the glass, this brilliant blend demands your attention due to its dazzling vibrancy! On the nose, there is an elegant mix of less pronounced, almost musky fruits like apricot and the mellow sweetness of Bosc pears. There is an ever-so-faint hint of ginger and lemon zest that adds to the wine’s elusive profile. It has a crisp yet completely approachable acidity. The lemon starts to come through, but is almost immediately thwarted by the more rounded flavours of nectarine and apricot.

2015 Ward on Wine Syncopation Acoustic White blend bottle with glass

Overall, I enjoyed both of these wines, especially seeing as Missouri wines are not usually my favourites. Having tasted the 2014 and 2015 Syncopation Rhythmic Red blends side-by-side, I slightly prefer the 2014. That could be caused by any number of factors, but I am willing to bet that it is due to my strong preference for Vidal blanc. Changing out two white grapes (the Vidal blanc and Seyval blanc) for another red grape (the Norton) significantly changed the flavour profile, especially given the almost mordant forwardness of big fruits exhibited by Norton. We are splitting hairs here though, because both years have shown me the intricacies that Missouri wines are capable of producing. Further, I was taken aback by the Acoustic White blend, and find it to rank amongst my favourites of Missouri whites. I am sure that I will enjoy many bottles of these two wines over the upcoming year, and am excited to experience the next incarnation of Mike Ward’s Syncopation!

So, I encourage you to pick up at least a bottle of each and experience them for yourself—even if you were like me in thinking that Missouri wines didn’t hold their own. You can purchase them at several Saint Louis area Schnucks grocery stores, or by stopping in at The Wine Barrel on Lindbergh near Watson. At The Wine Barrel, you can also choose to try Syncopation by the glass, and if you’re lucky, Mike may even be there when you stop by. 🙂


October 31, 2016
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Intel MediaSDK mini-walkthrough (October 31, 2016, 14:24 UTC)

Using hwaccel

It has been a while since I mentioned the topic, and we have made huge progress in this field.

Currently with Libav12 we already have nice support for several different hardware backends for decoding, scaling, deinterlacing and encoding.

The whole thing works nicely but it isn’t foolproof yet so I’ll start describing how to setup and use it for some common tasks.

This post will be about Intel MediaSDK, the next post will be about NVIDIA Video Codec SDK.

Prerequisites



  • A machine with QSV hardware, Haswell, Skylake or better.
  • The ability to compile your own kernel and modules
  • The MediaSDK mfx_dispatch

It works nicely both on Linux and Windows. If you happen to have other platforms, feel free to contact Intel and let them know; they’ll be delighted.


The MediaSDK comes with either the usual Windows setup binary or a Linux bash script that tries its best to install the prerequisites.

# tar -xvf MediaServerStudioEssentials2017.tar.gz

Focus on SDK2017Production16.5.tar.gz.

tar -xvf SDK2017Production16.5.tar.gz


The MediaSDK leverages libva to access the hardware together with a highly extended DRI kernel module.
They support CentOS with rpms and all the other distros with a tarball.

BEWARE: if you use the installer script, the custom libva will override your system one; you might not want that.

I’m using Gentoo so it is intel-linux-media_generic_16.5-55964_64bit.tar.gz for me.

The one bit of this tarball you really want to install on the system, no matter what, is the firmware:


If you are afraid of adding custom stuff to your system, I advise installing the whole thing into an offset prefix and then overriding the LD paths so it is used only for Libav.
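A minimal sketch of that override, assuming the tarball's default /opt/intel/mediasdk/lib64 location (adjust the path if you offset the installation elsewhere):

```shell
# Make the dynamic linker prefer the offset MediaSDK libraries;
# export only in the shell you use to run the Libav tools.
export LD_LIBRARY_PATH=/opt/intel/mediasdk/lib64:$LD_LIBRARY_PATH
```

Prefixing a single invocation (LD_LIBRARY_PATH=/opt/intel/mediasdk/lib64 ./avconv ...) keeps the override from leaking into the rest of the session.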

BEWARE: you must use the custom iHD libva driver with the custom i915 kernel module.

If you want to install using the provided script on Gentoo you should first emerge lsb-release.

emerge lsb-release
source /etc/profile.d/*.sh
echo /opt/intel/mediasdk/lib64/ >> /etc/

Kernel Modules

The patchset resides in:


The current set is 143 patches against Linux 4.4; trying to apply them on a more recent kernel requires patience and care.

Linux 4.4.27 works almost fine (even btrfs does not seem to have many horrible bugs).


In order to use the Media SDK with Libav you should use the mfx_dispatch from yours truly, since it provides a default for Linux so that it behaves in a uniform way compared to Windows.

Building the dispatcher

It is a standard autotools package.

git clone git://
cd mfx_dispatch
autoreconf -ifv
./configure --prefix=/some/where
make -j 8
make install

Building Libav

If you want to use the advanced hwcontext features on Linux you must enable both the vaapi and the mfx support.

git clone git://
cd libav
export PKG_CONFIG_PATH=/some/where/lib/pkgconfig
./configure --enable-libmfx --enable-vaapi --prefix=/that/you/like
make -j 8
make install


Media SDK is sort of temperamental and the setup process requires manual tweaking, so the odds of having to debug and investigate are high.

If something misbehaves, here is a checklist:
  • Make sure you are using the right kernel and that you are loading the module.

uname -a
  • Make sure libva is the correct one and it is loading the right thing.
strace -e open ./avconv -c:v h264_qsv -i test.h264 -f null -
  • Make sure you are using the right rate control and passing all the required parameters
./avconv -v verbose -filter_complex testsrc -c:v h264_qsv {ratecontrol params omitted} out.mkv

See below for some examples of working rate-control settings.
  • Use the MediaSDK examples provided with the distribution to confirm that everything works, in case the SDK is more recent than the updates.


The Media SDK support in Libav covers decoding, encoding, scaling and deinterlacing.

Decoding is straightforward; the rest still has quite a few rough edges, and this blog post has been written mainly to explain them.

Currently the most interesting formats supported are h264 and hevc, but other formats such as vp8 and vc1 are supported as well.

./avconv -codecs | grep qsv


Decoding

The decoders can output directly to system memory, so they can be used as normal decoders and feed a software implementation just fine.

./avconv -c:v h264_qsv -i input.h264 -c:v av1 output.mkv

Or they can decode to opaque (gpu backed) buffers so further processing can happen

./avconv -hwaccel qsv -c:v h264_qsv -vf deinterlace_qsv,hwdownload,format=nv12 -c:v x265

NOTICE: you have to explicitly pass the filterchain hwdownload,format=nv12, otherwise you will get mysterious failures.


Encoding

The encoders are almost as straightforward, besides the fact that the MediaSDK provides multiple rate-control systems, and they do require explicit parameters to work.

./avconv -i input.mkv -c:v h264_qsv -q 20 output.mkv

Failing to set the nominal framerate or the bitrate would make the look-ahead rate control not happy at all.

Rate controls

The rate control is one of the roughest edges of the current MediaSDK support; most of the rate controls require a nominal frame rate, and that requires an explicit -r to be passed.

There isn’t a default bitrate either, so -b:v should also be passed if you want to use a rate control that has a bitrate target.

It is possible to use a look-ahead rate control aiming at a quality metric by passing -global_quality and -la_depth.

The full list is documented.


It is possible to have a full hardware transcoding pipeline with Media SDK.

Deinterlacing


./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf deinterlace_qsv -c:v h264_qsv -r 25 -b:v 2M

Scaling


./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf scale_qsv=640:480 -c:v h264_qsv -r 25 -b:v 2M -la_depth 10

Both at the same time

./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf deinterlace_qsv,scale_qsv=640:480 -c:v h264_qsv -r 25 -b:v 2M -la_depth 10

Hardware filtering caveats

The hardware filtering system is quite new, and introducing it has shown a number of shortcomings in the Libavfilter architecture regarding format autonegotiation, so for hybrid pipelines (those that do not keep using hardware frames all over) it is necessary to explicitly call hwupload and hwdownload in such ways:

./avconv -hwaccel qsv -c:v h264_qsv -i in.mkv -vf deinterlace_qsv,hwdownload,format=nv12 -c:v vp9 out.mkv

Future for MediaSDK in Libav

The Media SDK already supports a good number of interesting codecs (h264, hevc, vp8/vp9) and Intel seems to be quite receptive regarding which codecs to support.
The Libav support for it will improve over time as we improve the hardware acceleration support in the filtering layer and make the libmfx interface richer.

We need more people testing and helping us figure out use-cases and corner-cases that haven’t been thought of yet; your feedback is important!

October 29, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy 19th Birthday, Noah (October 29, 2016, 05:04 UTC)

Happy 19th Birthday, Noah! I hope that, this year, you are able to spend your special day with family, friends and loved ones. Be safe out there, and have a good time! 🙂

I also wanted to let you know how proud I am of all that you’ve accomplished, and I hope that you are too. Juggling undergraduate studies (with classes, lectures, homework, and the likes) along with a job that carries with it a lot of hours is no easy task, but you are managing to do it quite well! Keep it up, and I know that you will go far in this life.

Love you, buddy,

October 27, 2016
Robin Johnson a.k.a. robbat2 (homepage, bugs)

Cross-posting from where I've written up some other pieces:
- How to set up Ceph RGW StaticSites (S3 Website mode). I wrote the code over the course of the last year, and here's the first solid documentation for setting it up now. As for 'using' it, your S3 client with WebsiteConfiguration support should just work.
- Boto S3: how to muck with where it actually connects. Boto S3 tries to be smart about where it's connecting to, such that it takes the hostname you give it and uses that for most things. This makes some testing fun where you want it to request a certain hostname but actually connect somewhere entirely different.


October 24, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-10-07 Gentoo kernel maintainer 4.7.x (October 24, 2016, 06:33 UTC)

Recently I became a Gentoo Kernel Project member, maintaining the Gentoo kernel branch 4.7
Kernel Project
Kernel Project

I have already made some releases, so you can ping me if the Gentoo kernel is not up to date :)

Recently we had the Dirty COW (CVE-2016-5195) kernel vulnerability come out,
and the Gentoo 4.7 branch update followed just a couple of hours after the kernel patch was released.

October 16, 2016
Robin Johnson a.k.a. robbat2 (homepage, bugs)
LVM: convert linear to striped (October 16, 2016, 14:55 UTC)

This requires temporarily having 2x the size of your LVM volume. You need to create a mirror of your data, with the new leg of the mirror striped over the target disks, then drop the old leg of the mirror that was not striped. If you want to stripe over ALL of your disks (including the one that was already used), you also need to specify --alloc anywhere otherwise the mirror code will refuse to use any disk twice.

# convert to a mirror (-m1), with new leg striped over 4 disks: /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde
# --mirrorlog core - use in-memory status during the conversion
# --interval 1: print status every second
lvconvert --interval 1 -m1 $myvg/$mylv --mirrorlog core --type mirror --stripes 4 /dev/sd{b,c,d,e}
# drop the old leg, /dev/sda
lvconvert --interval 1 -m0 $myvg/$mylv  /dev/sda

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Fixing gtk behaviour (October 16, 2016, 13:26 UTC)

Recently I've noticed all gtk2 apps becoming quite ... what's the word ... derpy?
Things like scrollbars not working and stuff. And by "not working" I mean the gtk3 behaviour of not showing up/down arrows and being a grey smudge of stupid.

So accidentally I stumbled over an old gentoo bug where it was required to deviate from defaults to have, like, icons and stuff.
That sounds pretty reasonable to me, but with gtk upstream crippling the Ad-Waiter, err, adwaita theme, because gtk3, this is a pretty sad interaction. And unsurprisingly, by switching to the upstream default theme, Raleigh, gtk2 apps start looking a lot better. (Like, scrollbars and stuff)

The change might make sense to apply to Gentoo globally, locally for each user it is simply:

$ cat ~/.gtkrc-2.0
gtk-theme-name = "Raleigh"
gtk-cursor-theme-name = "Raleigh"

I'm still experimenting with 'gtk-icon-theme-name' and 'gtk-fallback-icon-theme'; maybe those should change too. And as a benefit we can remove the Ad-Waiter from dependencies, possibly drop gnome-themes too, and restore a fair amount of sanity to gtk2.

Changing console fontsize (October 16, 2016, 10:09 UTC)

Recently I accidentally acquired some "HiDPI" hardware. While it is awesome to use, it quickly becomes irritating to be almost unable to read the bootup messages or work in a VT.
The documentation on fixing this is surprisingly sparse, but luckily it is very easy:

  • Get a font that comes in the required sizes. media-fonts/terminus-font was the first choice I found, there may be others that are nice to use. Since terminus works well enough I didn't bother to check.
  • Test the font with "setfont". The default path is /usr/share/consolefonts, and the font 'name' is just the filename without the .psf.gz suffix. If you break things you can revert to sane defaults by just calling "setfont" or rebooting the machine (ehehehehehe)
  • Set the font in /etc/conf.d/consolefont. For a 210dpi notebook display I chose 'ter-v24b', but I'm considering going down a font size or two, maybe 'ter-v20b'? It's all very subjective ...
  • On reboot the consolefont init script will set the required font.
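For reference, the whole configuration is a one-line shell fragment; the font name here is just the one chosen above:

```shell
# /etc/conf.d/consolefont
consolefont="ter-v24b"
```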
Now I'm wondering if such fonts can be embedded into the kernel so that on boot it directly switches to a 'nice' font, but just being able to read the console output is a good start ...

October 12, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
GnuPG: private key suddenly missing? (October 12, 2016, 16:56 UTC)

After updating my workstation, I noticed that keychain reported that it could not load one of the GnuPG keys I passed to it.

 * keychain 2.8.1 ~
 * Found existing ssh-agent: 2167
 * Found existing gpg-agent: 2194
 * Warning: can't find 0xB7BD4B0DE76AC6A4; skipping
 * Known ssh key: /home/swift/.ssh/id_dsa
 * Known ssh key: /home/swift/.ssh/id_ed25519
 * Known gpg key: 0x22899E947878B0CE

I did not modify my key store at all, so what happened?

GnuPG upgrade to 2.1

The update I did also upgraded GnuPG to the 2.1 series. This version has quite a few updates, one of which is a change towards a new private key storage approach. I thought that it might have done a wrong conversion, or that the key which was used was of a particular method or strength that suddenly wasn't supported anymore (PGP-2 is mentioned in the article).

But the key is a relatively standard RSA4096 one. Yet still, when I listed my private keys, I did not see this key. I even tried to re-import the secring.gpg file, but it only found private keys that it already saw previously.

I'm blind - the key never disappeared

Luckily, when I tried to sign something with the key, gpg-agent still asked me for the passphrase that I had used for a while on that key. So it isn't gone. What happened?

Well, the key id is not my private key id, but the key id of one of the subkeys. Previously, gpg-agent sought and found the private key associated with the subkey, but now it no longer does. I don't know if this is a bug in the past that I accidentally used, or if this is a bug in the new version. I might investigate that a bit more, but right now I'm happy that I found it.

All I had to do was use the right key id in keychain, and things worked again.

Good, now I can continue debugging networking issues with an azure-hosted system...

October 11, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Openstack Newton Update (October 11, 2016, 05:00 UTC)

The short of it

Openstack Newton was packaged early last week (when rc2 was still going on upstream) and the tags for the major projects were packaged the day they released (nova and the like).

I've updated the openstack-meta package to 2016.2.9999 and would recommend people use that.

Heat has also been packaged this time around so you are able to use that if you wish.

I'll link to my keywords and use files so you may use them if you wish as well. Please keep in mind that my use file is for my personal setup (static kernel, vxlan/linuxbridge and postgresql).

October 08, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
New job and new blog category (October 08, 2016, 06:49 UTC)

Sorry blog, this announcement comes late for you (I updated sites like Linkedin some time ago), but better late than never!

I got myself a new job in May, joining the Red Hat software developers working on OpenStack. More specifically, I will work mostly on the network parts: Neutron itself (the “networking as a service” main project), but also other related projects like Octavia (load balancer), image building, and more recently Service Function Chaining.

Working upstream on these projects, I plan to write some posts about them, which will be regrouped in a new OpenStack category. I am not sure yet about the format (short popularisation items and tutorials, long advanced technical topics, a mix of both, …), we will see. In all cases, I hope it will be of interest to some people 🙂

PS for Gentoo Universe readers: don’t worry, that does not mean I will switch all my Linux boxes to RHEL/CentOS/Fedora! I still have enough free time to work on Gentoo.

October 05, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-10-05 exam finished and news (October 05, 2016, 08:49 UTC)

School exams are almost finished.
I was able to take 20 classes in one semester and get 37 school points.
In this second semester I need around 10 points to get into the 4th year and start doing mainly research.

Because I had some free time, I did an internship to look for work, and held a Gentoo Study Meeting after almost 6 months.
I also contributed to Gentoo.
In addition, I was able to get into the school's open source research lab,
so in the coming months I will follow a few lessons and do open source research.

September 27, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
We do not ship SELinux sandbox (September 27, 2016, 18:47 UTC)

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to grant applications most of the standard privileges needed for interacting with the user (so that the user can of course still use the application) but removes many permissions that might be abused either to obtain information from the system, or to try and exploit vulnerabilities to gain more privileges. It also hides a number of resources on the system through namespaces.

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not shield off the users' terminal access sufficiently. In the POC post we notice that characters could be sent to the terminal through the ioctl() function (which executes the ioctl system call used for input/output operations against devices) which are eventually executed when the application finishes.

That's bad of course. Hence the CVE-2016-7545 registration, and of course also a possible fix has been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There's some history involved why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command that is called sandbox, installed through the sys-apps/sandbox application. So back in the days that we still shipped with the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for the sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 came up in which the seunshare_mount function had a security issue. And in 2014, CVE-2014-3215 came up with - again - a security issue with seunshare.

At that point, I had enough of this sandbox utility. First of all, it never quite worked well enough on Gentoo as-is (it also requires a policy which is not part of the upstream release), and given its wide-open access approach (it was meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

None of the Gentoo SELinux users ever approached me with the question to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

September 26, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Mounting QEMU images (September 26, 2016, 17:26 UTC)

While working on the second edition of my first book, SELinux System Administration - Second Edition I had to test out a few commands on different Linux distributions to make sure that I don't create instructions that only work on Gentoo Linux. After all, as awesome as Gentoo might be, the Linux world is a bit bigger. So I downloaded a few live systems to run in Qemu/KVM.

Some of these systems however use cloud-init which, while interesting to use, is not set up on my system yet. And without support for cloud-init, how can I get access to the system?

Mounting qemu images on the system

To resolve this, I want to mount the image on my system, and edit the /etc/shadow file so that the root account is accessible. Once that is accomplished, I can log on through the console and start setting up the system further.

Images that are in the qcow2 format can be mounted through the nbd driver, but that would require some updates on my local SELinux policy that I am too lazy to do right now (I'll get to them eventually, but first need to finish the book). Still, if you are interested in using nbd, see these instructions or a related thread on the Gentoo Forums.

Luckily, storage is cheap (even SSD disks), so I quickly converted the qcow2 images into raw images:

~$ qemu-img convert root.qcow2 root.raw

With the image now available in raw format, I can use the loop devices to mount the image(s) on my system:

~# losetup /dev/loop0 root.raw
~# kpartx -a /dev/loop0
~# mount /dev/mapper/loop0p1 /mnt

The kpartx command will detect the partitions and ensure that those are available: the first partition becomes available at /dev/mapper/loop0p1, the second at /dev/mapper/loop0p2 and so forth.

With the image now mounted, let's update the /etc/shadow file.

Placing a new password hash in the shadow file

A quick Google search revealed that the following command generates a shadow-compatible hash for a password:

~$ openssl passwd -1 MyMightyPassword
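One caveat worth noting: openssl generates a random salt on each run, so the output differs every time. Passing an explicit salt (the value below is just an example) makes the hash reproducible, which is handy when scripting the edit:

```shell
# MD5-crypt with a fixed salt: the same salt and password
# always produce the same $1$<salt>$<hash> string
openssl passwd -1 -salt xyzzy MyMightyPassword
```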

The challenge wasn't to find the hash though, but to edit it:

~# vim /mnt/etc/shadow
vim: Permission denied

The image that I downloaded used SELinux (of course), which meant that the shadow file was labeled with shadow_t which I am not allowed to access. And I didn't want to put SELinux in permissive mode just for this (sometimes I /do/ have some time left, apparently).

So I remounted the image, but now with the context= mount option, like so:

~# mount -o context="system_u:object_r:var_t:s0" /dev/mapper/loop0p1 /mnt

Now all files are labeled with var_t which I do have permissions to edit. But I also need to take care that the files that I edited get the proper label again. There are a number of ways to accomplish this. I chose to create a .autorelabel file in the root of the partition. Red Hat based distributions will pick this up and force a file system relabeling operation.

Unmounting the file system

After making the changes, I can now unmount the file system again:

~# umount /mnt
~# kpartx -d /dev/loop0
~# losetup -d /dev/loop0

With that done, I had root access to the image and could start testing out my own set of commands.

It did trigger my interest in the cloud-init setup though...

September 22, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
Few notes on locale craziness (September 22, 2016, 20:13 UTC)

Back in the EAPI 6 guide I briefly noted that we have added a sanitization requirement for locales. Having been informed of another locale issue in Python (in a pre-EAPI 6 ebuild), I have decided to write a short note on locale curiosities that could also serve when reporting issues upstream.

When l10n and i18n are concerned, most developers correctly predict that date and time formats, currencies and number formats are going to change. It’s rather hard to find an application that would fail because of a changed system date format; it is much easier to find one that does not respect the locale and uses hard-coded format strings for user display. You can find applications that unconditionally use a specific decimal separator, but it’s quite rare to find one that chokes itself combining code using a hard-coded separator with system routines respecting locales. Some applications rely on English error messages, but that’s rather obviously perceived as a mistake. However, there are also two hard cases…

Lowercase and uppercase

For a start, if you thought that the ASCII range of lowercase characters would map cleanly to the ASCII range of uppercase characters, you were wrong. The Turkish (tr_TR) locale is different here, and maps lowercase ‘i’ (LATIN SMALL LETTER I) to uppercase ‘İ’ (LATIN CAPITAL LETTER I WITH DOT ABOVE). Similarly, ‘I’ (LATIN CAPITAL LETTER I) maps to ‘ı’ (LATIN SMALL LETTER DOTLESS I). What does this mean in practice? That if you have a Turkish user, then depending on the software used, your Latin ‘i’ may be uppercased to ‘I’ (as you expect it to be), ‘İ’ (as would be correct in free text) or… left as ‘i’.

What’s the solution for this? If you need to uppercase/lowercase an ASCII text (e.g. variable names), either use a function that does not respect locale (e.g. 'i' - ('a' - 'A') in C) or set LC_CTYPE to a sane locale (e.g. C). However, remember that LC_CTYPE affects the character encoding — i.e. if you read UTF-8, you need to use a locale with UTF-8 codeset.
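A small shell sketch of the sanitized approach; with LC_CTYPE pinned to C, the ASCII case mapping is predictable no matter what the user’s locale is:

```shell
# With LC_CTYPE forced to C, ASCII 'i' uppercases to plain 'I'
# regardless of the user's (e.g. Turkish) locale settings
printf '%s' i | LC_CTYPE=C tr '[:lower:]' '[:upper:]'
```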


Collation

The other problem is collation, i.e. sorting. The more obvious part of it is that particular locales enforce a specific ordering of their own diacritic characters. For example, the Polish letter ‘ą’ sorts between ‘a’ and ‘b’ in the Polish locale, and somewhere at the end in the C locale. The intermediately obvious part of it is that some locales order lowercase and uppercase characters differently — the C and German locales sort uppercase characters first (the former because of ASCII codes), while many other locales sort the opposite way.

Now, the non-obvious part is that some locales actually reorder the Latin alphabet. For example, the Estonian (et_EE) locale puts ‘z’ somewhere between ‘s’ and ‘t’. Yep, seriously. What’s even less obvious is that it means that the [a-z] character class suddenly ends halfway through the lowercase characters!

What’s the solution? Again, either use non-locale-sensitive functions or sanitize LC_COLLATE. For regular expressions, the named character classes ([[:lower:]], [[:upper:]]) are always a better choice.
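The uppercase-first behaviour of the sanitized C locale is easy to see from the shell (a sketch; LC_ALL is used here because it overrides LC_COLLATE and the C locale is always available):

```shell
# In the C locale, sorting follows raw ASCII codes,
# so every uppercase letter comes before any lowercase one
printf 'b\nA\nB\na\n' | LC_ALL=C sort
```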

Does anyone know more fun locales?