
. Aaron W. Swenson
. Agostino Sarubbo
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. Luca Barbato
. Marek Szuba
. Mart Raudsepp
. Matt Turner
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
April 29, 2017, 01:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 28, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Project Memory (April 28, 2017, 14:03 UTC)

Through a series of events whose beginning I'm not entirely sure of, I started fixing some links on my old blog posts that had broken, most of which I ended up having to find on the Wayback Machine. While doing that, I ended up finding some content from my very old blog. One that was hosted on Blogspot, written in Italian only, and frankly written by an annoying brat who needed to learn something about life. Which I did, of course. The story of that blog is for a different time and post; for now I'll focus on a different topic.

When I started looking at this, I ended up going through a lot of my blog posts and updated a number of broken links, either by linking to the Wayback Machine or by removing the link. I focused on those links that can easily be grepped for, which turns out to be a very good side effect of having migrated to Hugo.

This meant, among other things, removing references to (which came to my mind because of all the hype I hear about Mastodon nowadays), removing links to my old “Hire me!” page, and so on. And that’s where things started showing a pattern.

I ended up updating or removing links to Berlios, Rubyforge, Gitorious, Google Code, Gemcutter, and so on.

For many of these, it turned out I don't even have a local copy (at hand, at least) of some of my smaller projects (mostly the early Ruby stuff I've done). But I almost certainly have some of that data in my backups, some of which I actually have in Dublin and want to start digging into at some point soon. Again, this is a story for a different time.

The importance of the links to those project management websites is that for many projects, those pages were all that you had about them. And for some of those, all the important information was captured by those services.

Back when I started contributing to free software projects, SourceForge was the badge of honor of being an actual project: it would give you space to host a website, as well as source code repositories. And this was the era before Git, before Mercurial and the other DVCSes, which means either you had SourceForge, or you likely had no source control at all. But SourceForge admins also reviewed (or at least alleged to review) every project that was created, so creating a project on the platform was not straightforward; you would do that only if you really had the time to invest in the project.

A few projects were big enough to have their own servers, and a few were hosted on other, more or less random project management sites, which appeared to sprout up because the Forge software used by SourceForge was (for a while at least) released as free software itself. Some of those websites were specific in nature, others more general. Over time, BerliOS appeared to become the anti-SourceForge, with a streamlined application process and, most importantly, with Subversion years before SourceForge would gain support for it.

Things got a bit more interesting later, when Bazaar, Mercurial, Git and so on started appearing on the horizon, because at that point having proper source control could be achieved without needing special servers (without publishing it, at least, although there were ways around that). This at the same time made some project management websites redundant, and others more appealing.

But, let's take a look at the list of project management websites I have used that are now completely or partly gone, with or without their history:

  • The aforementioned BerliOS, which has been teetering back and forth a couple of times. I had a couple of projects over there, which I ended up importing to GitHub, and I also forked unpaper. The service and the hosting were taken down in 2014, but (all?) the projects hosted on the platform were mirrored on SourceForge. As far as I can tell they were mirrored read-only, so for instance I can’t de-duplicate the unieject projects since I originally wrote it on SourceForge and then migrated to BerliOS.

  • The Danish SunSITE, which hosted a number of open-source projects for reasons that I'm not entirely clear on. NoX-Wizard, an open-source Ultima OnLine server emulator, was hosted there, for reasons that are even murkier to me. The site was later renamed, but they dropped all the hosting services in 2009. I can't seem to find an archive of their data; NoX-Wizard was migrated during my time to SourceForge, so that's okay by me.

  • RubyForge used the same Forge software as SourceForge, and was focused on Ruby module development. It was abruptly terminated in 2014, and as it turns out I made the mistake of not importing my few modules explicitly. I should have them in my backups if I start looking for them; I just haven't done so yet.

  • Gitorious set itself up as an open, free-software competitor to GitHub. Unfortunately it clearly was not profitable enough and it got acquired, twice. The second time by the competing service GitLab, which had no interest in running the software. A brittle mirror of the project repositories only (no user pages) is still online, thanks to Archive Team. I originally used Gitorious for my repositories rather than GitHub, but I came around, and moved everything over before they shut the service down. Well, almost everything: as it turns out some of the LScube repos were not saved, because they were only mirrors... except that the domain for that project expired, so we lost access to the main website and Git repository, too.

  • Google Code was Google's project hosting service, which started by offering Subversion repositories, downloads, issue trackers and so on. Very few of the projects I tracked used Google Code to begin with, and it was finally shut down in 2015, with all projects made read-only except for setting up a redirection to a new homepage. The biggest project I followed on Google Code was libarchive, and they migrated it fully to GitHub, including migrating the issues.

  • Gemcutter used to be a separate repository for Ruby gem packages. I actually forgot why it was started, but for a while it was the alternative repository where a lot of cool kids stored their libraries. Gemcutter eventually got merged back upstream, and the old links now appear to redirect to the right pages. Yay!

With such a list of project hosting websites going the way of the dodo, an obvious conclusion to draw is that hosting things on your own servers is the way to go. I would still argue otherwise. Despite the number of hosting websites going away, it feels to me like the vast majority of the information we lost over the past 13 years comes from blogs and personal websites badly used for documentation. With the exception of RubyForge, all the above examples were properly archived one way or another, so at least the majority of the historical memory is not gone.

Not using project hosting websites is obviously an option. Unfortunately it comes with the usual problems and with even higher risks of losing data. Even GitLab's snafu had a better chance of being fixed than whatever your one-person project has when the owner gets tired, runs out of money, graduates from university, or even dies.

So what can we do to make things more resilient to disappearing? Let me suggest a few points of action, which I think are relevant and possible right now to make things better for everybody.

First of all, let's all make sure that the Internet Archive stays alive, by donating. I set up a €5/month donation which gets matched by my employer. The Archive provides, among other things, the Wayback Machine, which is how I can still fetch some of the content both from my past blogs and from the blogs of people who deleted or moved them, or even passed on. The Internet is our history; we can't let it disappear for lack of effort.

Then, for what concerns the projects themselves, it may be a bit less clear cut. The first thing I'll be much more wary about in the future is relying on support sites when writing comments or commit messages. Issue trackers get lost, or renumbered, and so references to them break too easily. Be verbose in your commit messages, and if needed quote the issue, instead of just writing "Fix issue #1123".

Even mailing lists are not safe. While Gmane is currently supposedly still online, most of the gmane links from my own blog are broken, and I need to find replacements for them.

This brings me to the following problem: documentation. Wikis made documenting things significantly cheaper, as you don't need to learn much, neither in the form of syntax nor in the form of process. Unfortunately, backing up wikis is not easy because a database is involved, and it's very hard, when taking over a project whose maintainers are unresponsive, to find a good way to import the wiki. GitHub makes things easier thanks to GitHub Pages, and that's at least a starting point. Unfortunately it makes the process a little messier than a wiki, but we can survive that, I'm sure.

Myself, I decided to use a hybrid approach. Given that some of my projects, such as unieject, managed to migrate from SourceForge to BerliOS, to Gitorious, to GitHub, I have now set up a number of redirects on my website, so that their official addresses stay stable and redirect to wherever I'm hosting them at the time.

April 27, 2017

imageworsener is a utility for image scaling and processing.

The complete ASan output of the issue:

# imagew $FILE /tmp/out -outfmt bmp
==20314==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fe233b99af8 at pc 0x7fea7f55da64 bp 0x7ffdb4737840 sp 0x7ffdb4737838                                                                         
WRITE of size 4 at 0x7fe233b99af8 thread T0                                                                                                                                                                       
    #0 0x7fea7f55da63 in iw_process_cols_to_intermediate /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:903:75                                                             
    #1 0x7fea7f55da63 in iw_process_one_channel /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:1144                                                                        
    #2 0x7fea7f54ca71 in iw_process_internal /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:1405:7                                                                         
    #3 0x7fea7f520095 in iw_process_image /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:2248:8                                                                            
    #4 0x528de1 in iwcmd_run /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:1400:6                                                                                          
    #5 0x515326 in iwcmd_main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3018:7                                                                                         
    #6 0x515326 in main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3067                                                                                                 
    #7 0x7fea7e5e878f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                        
    #8 0x41b028 in _init (/usr/bin/imagew+0x41b028)                                                                                                                                                               
0x7fe233b99af8 is located 4 bytes to the right of 8003134196-byte region [0x7fe056b37800,0x7fe233b99af4)                                                                                                          
allocated by thread T0 here:                                                                                                                                                                                      
    #0 0x4da6f8 in malloc /tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.0/work/compiler-rt-4.0.0.src/lib/asan/                                                                          
    #1 0x551fc0 in my_mallocfn /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:794:9                                                                                         
    #2 0x7fea7f6a39ae in iw_malloc_ex /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-util.c:48:8                                                                                  
    #3 0x7fea7f6a3dec in iw_malloc_large /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-util.c:77:9                                                                               
    #4 0x7fea7f54c5a0 in iw_process_internal /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:1396:44                                                                        
    #5 0x7fea7f520095 in iw_process_image /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:2248:8                                                                            
    #6 0x528de1 in iwcmd_run /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:1400:6                                                                                          
    #7 0x515326 in iwcmd_main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3018:7                                                                                         
    #8 0x515326 in main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3067                                                                                                 
    #9 0x7fea7e5e878f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-main.c:903:75 in iw_process_cols_to_intermediate
Shadow bytes around the buggy address:
  0x0ffcc676b300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffcc676b310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffcc676b320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffcc676b330: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffcc676b340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0ffcc676b350: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04[fa]
  0x0ffcc676b360: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0ffcc676b370: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0ffcc676b380: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0ffcc676b390: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0ffcc676b3a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2017-04-12: bug discovered and reported to upstream
2017-04-12: upstream released a patch
2017-04-27: blog post about the issue

This bug was found with American Fuzzy Lop.


imageworsener: heap-based buffer overflow in iw_process_cols_to_intermediate (imagew-main.c)

imageworsener: two left shifts (April 27, 2017, 19:37 UTC)

imageworsener is a utility for image scaling and processing.

There are two left shifts visible with UBSan enabled.

# imagew $FILE /tmp/out -outfmt bmp
src/imagew-util.c:415:68: runtime error: left shift of 255 by 24 places cannot be represented in type 'int'
src/imagew-bmp.c:427:10: runtime error: left shift of 1 by 31 places cannot be represented in type 'int'

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2017-04-13: bug discovered and reported to upstream
2017-04-22: upstream released a patch
2017-04-27: blog post about the issue

This bug was found with American Fuzzy Lop.


imageworsener: two left shifts

imageworsener is a utility for image scaling and processing.

There is a memory allocation failure. I will show only the interesting part of the ASan output:

# imagew $FILE /tmp/out -outfmt bmp
    #8 0x551fc0 in my_mallocfn /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:794:9
    #9 0x7f37f140c9ae in iw_malloc_ex /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-util.c:48:8
    #10 0x7f37f140cdec in iw_malloc_large /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-util.c:77:9
    #11 0x7f37f136d66c in bmpr_read_uncompressed /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-bmp.c:665:32
    #12 0x7f37f134ce64 in iwbmp_read_bits /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-bmp.c:910:7
    #13 0x7f37f134ce64 in iw_read_bmp_file /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-bmp.c:980
    #14 0x7f37f1349f94 in iw_read_file_by_fmt /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-allfmts.c:66:12
    #15 0x519304 in iwcmd_run /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:1191:6
    #16 0x515326 in iwcmd_main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3018:7
    #17 0x515326 in main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3067
    #18 0x7f37f035178f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #19 0x41b028 in _init (/usr/bin/imagew+0x41b028)

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2017-04-13: bug discovered and reported to upstream
2017-04-12: upstream released a patch for another issue that fixes this issue too
2017-04-27: blog post about the issue

This bug was found with American Fuzzy Lop.


imageworsener: memory allocation failure in my_mallocfn (imagew-cmd.c)

April 24, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
ELECOM DEFT and the broken descriptor (April 24, 2017, 10:03 UTC)

In my previous post reviewing the ELECOM DEFT I noted that I had to do some work to get the three function buttons on the mouse to work on Linux correctly. Let me try to dig a bit into this one so it can be useful to others in the future.

The symptoms: the three top buttons (Fn1, Fn2, Fn3) of the device are unresponsive on Linux; they do not show up in xev or evtest.

My first guess was that they were using the same technique they use for gaming mice, configuring on the device itself which codes to send when the buttons are pressed. That looked likely because the receiver appears as a big composite device. But that was not the case. After installing the Windows driver and app on my “sacrificial laptop”, and using USBlyzer to figure out what was going on, I couldn't see the app doing anything to the device. Which meant they instead remapped the behaviour of the buttons on the software side.

This left open only the option that the receiver needs a “quirk driver” to do something. Actually, since I have looked into HID (the protocol used for USB mice and keyboards, among others) before, I already suspected that the HID Report Descriptor was declaring something broken, and that the Linux kernel was taking it at face value. I'm not sure if Windows is just ignoring the descriptor, or if there is a custom driver implementing the quirk there. I did not look too much into this.

But what is this descriptor? If you have not looked into HID before, you have to understand that the HID protocol in USB specifies very little by itself; it is mainly a common format both for sending “reports” and for describing said reports. The HID Report Descriptor is effectively bytecode representing the schema that those reports should follow. As it happens, sometimes the reports don't follow the schema at all, and the descriptor itself can even be completely broken and unparsable. But that is not the case here.

The descriptor is fetched by the operating system when you connect the device, and is then used to parse the reports coming in as interrupt transfers. The first byte of each transfer identifies the report used, which is looked up in the descriptor to understand the rest. In most mice, the reports will all look largely the same: state of the buttons, relative X and Y displacement, wheel (one- or two-dimensional) displacement. But, since the presence of one or more wheels is not a given, and the number of buttons can be significantly high, even without going to the ludicrous extent of the OpenOffice mouse, the report descriptor tells you the size of each field in the structure.

So, looking at USBlyzer, I could tell that the report number 1 was clearly the one that gives the mouse data, and even without knowing much about HID and having not seen the report descriptor, I can tell what’s going on:

button1: 01 01 00 00 00 00 00 00
button2: 01 02 00 00 00 00 00 00
button3: 01 04 00 00 00 00 00 00
fn1:     01 20 00 00 00 00 00 00
fn2:     01 40 00 00 00 00 00 00
fn3:     01 80 00 00 00 00 00 00

So quite obviously, the second byte is a bitmask of the buttons being pressed. Note that this is the first of two reports you receive every time you click a button (and everything else is zero because on a trackball you can click the buttons without even touching the ball, so there is no movement indication in the report).

But, when I looked at the Analysis tab, I found out that USBlyzer is going to parse the reports based on the descriptor as well, showing the button number from the bitmask, the X and Y displacement and so on. For the bitmasks of the three buttons at the top of the device, no button is listed in the analysis. Bingo, we have a problem.

The quest items. Thinking of it like a quest in a JRPG, I now needed two items to complete the quest: a way to figure out what the report descriptor of the device is, and a way to figure out what it means. Let's start from the first item.

There are a number of documented ways to dump a USB HID report descriptor on Linux. Most of them rely on unbinding the device from the usbhid driver and then fetching the descriptor by sending the right HID commands. usbhid-dump does that, and does it well, but I'm going to ignore it. Instead I'm going to read the report descriptor as presented by sysfs. This may not be the one reported by the hardware, but rather the one that a quirk may have already “fixed” somehow.

So how can you tell where to find the report descriptor? If you look when you plug in a device:

% dmesg | tail -n 3
[13358.058915] elecom 0003:056E:00FF.000C: Fixing up Elecom DEFT Fn buttons
[13358.059721] input: ELECOM ELECOM TrackBall Mouse as /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2.1/3-2.1:1.0/0003:056E:00FF.000C/input/input45
[13358.111673] elecom 0003:056E:00FF.000C: input,hiddev0,hidraw1: USB HID v1.11 Mouse [ELECOM ELECOM TrackBall Mouse] on usb-0000:00:14.0-2.1/input0
% cp /sys/devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2.1/3-2.1:1.0/0003:056E:00FF.000C/report_descriptor my_report_descriptor.bin

You can tell from this dmesg that I’m cheating, and I’m looking at it after the device has been fixed already. Otherwise it would probably be saying hid-generic rather than elecom.

I have made a copy of the original report descriptor of course, so I can look at it even now, but the binary file is not going to be very useful by itself. But, from the same author as the tool listed above, hidrd makes it significantly easier to understand what’s going on. The full spec output includes a number of report pages that are vendor specific, and may be interesting to either fuzz or figure out if they are used for reporting things such as low battery. But let’s ignore that for the immediate and let’s look at the “Desktop, Mouse” page:

Usage Page (Desktop),               ; Generic desktop controls (01h)
Usage (Mouse),                      ; Mouse (02h, application collection)
Collection (Application),
    Usage (Pointer),                ; Pointer (01h, physical collection)
    Collection (Physical),
        Report ID (1),
        Report Count (5),
        Report Size (1),
        Usage Page (Button),        ; Button (09h)
        Usage Minimum (01h),
        Usage Maximum (05h),
        Logical Minimum (0),
        Logical Maximum (1),
        Input (Variable),
        Report Count (1),
        Report Size (3),
        Input (Constant),
        Report Size (16),
        Report Count (2),
        Usage Page (Desktop),       ; Generic desktop controls (01h)
        Usage (X),                  ; X (30h, dynamic value)
        Usage (Y),                  ; Y (31h, dynamic value)
        Logical Minimum (-32768),
        Logical Maximum (32767),
        Input (Variable, Relative),
    End Collection,
    Collection (Physical),
        Report Count (1),
        Report Size (8),
        Usage Page (Desktop),       ; Generic desktop controls (01h)
        Usage (Wheel),              ; Wheel (38h, dynamic value)
        Logical Minimum (-127),
        Logical Maximum (127),
        Input (Variable, Relative),
    End Collection,
    Collection (Physical),
        Report Count (1),
        Report Size (8),
        Usage Page (Consumer),      ; Consumer (0Ch)
        Usage (AC Pan),             ; AC pan (0238h, linear control)
        Logical Minimum (-127),
        Logical Maximum (127),
        Input (Variable, Relative),
    End Collection,
End Collection,

This is effectively a description of the structure of the reports I showed earlier, starting with the buttons and X/Y displacement, followed by the wheel and the “AC pan” (which I assume is the left/right wheel). All the sizes are given in bits, and the way the language works is a bit strange. The part that interests us is at the start of the first block. Refer to this tutorial for the nitty-gritty details, but I'll try to give a human-readable example.

Report ID is the constant we already know about, and the first byte of the message. Following that, we can see it declaring five (Count = 5) bits (Size = 1) used for Buttons 1 through 5. Ignore the logical maximum/minimum in this case, as the buttons are of course either on or off. The Input (Variable) instruction is effectively saying “these are the useful parts”. Following that, it declares one (Count = 1) 3-bit (Size = 3) constant value. Since it's constant, the HID driver will just ignore it. Unfortunately those three bits are exactly the three bits needed for the top buttons.

The obvious answer is to change the descriptor so that it describes eight one-bit entries for eight buttons, and no constant bits (if you forget to remove the constant bits, the whole message gets misparsed and moving the mouse is taken as clicks; ask me how I know!). How do you do that? Well, you need a quirk driver in the Linux kernel to intercept the device and rewrite the descriptor on the fly. This is not hard, and I know of plenty of other drivers doing so. As it happens, Linux already has a hid-elecom driver, which was fixing a Bluetooth mouse that also had a wrong descriptor; I extended that to fix this descriptor too. But how do you fix a descriptor exactly?

Some drivers check for the size of the descriptor and for some anchor values (usually the ones they are going to change); others replace the descriptor entirely. I prefer the former, as it makes clear that they are trying to just fix something, rather than discard whatever the manufacturer is doing. Particularly because in this case the fix is quite trivial: just three bytes need to be changed. Change the Count and the Usage Maximum for the Buttons input to 8, and make the Count of the constant input zero. hidrd has a mode where it outputs the whole descriptor as a valid C array that you can embed in the kernel source, with comments explaining what each byte combination does. I used that during testing, before changing my code to do the patching instead. The actual diff, in code format, is:

@@ -4,15 +4,15 @@
 0x09, 0x01,         /*      Usage (Pointer),                */
 0xA1, 0x00,         /*      Collection (Physical),          */
 0x85, 0x01,         /*          Report ID (1),              */
-0x95, 0x05,         /*          Report Count (5),           */
+0x95, 0x08,         /*          Report Count (8),           */
 0x75, 0x01,         /*          Report Size (1),            */
 0x05, 0x09,         /*          Usage Page (Button),        */
 0x19, 0x01,         /*          Usage Minimum (01h),        */
-0x29, 0x05,         /*          Usage Maximum (05h),        */
+0x29, 0x08,         /*          Usage Maximum (08h),        */
 0x15, 0x00,         /*          Logical Minimum (0),        */
 0x25, 0x01,         /*          Logical Maximum (1),        */
 0x81, 0x02,         /*          Input (Variable),           */
-0x95, 0x01,         /*          Report Count (1),           */
+0x95, 0x00,         /*          Report Count (0),           */
 0x75, 0x03,         /*          Report Size (3),            */
 0x81, 0x01,         /*          Input (Constant),           */
 0x75, 0x10,         /*          Report Size (16),           */

And that’s enough to make all the buttons work just fine. Yay! So I sent the first patch to the linux-input mailing list… and then I had a doubt: “Am I the first ever Linux user of this device?” As it happens, I’m not, and after sending the patch I searched and found that there was already a patch by Yuxuan Shui, sent last year, that does effectively the same thing, except with a new module altogether (rather than extending the one already there) and by removing the Constant input declaration entirely, which requires a memmove() of the rest of the descriptor. It also contains the USB ID for the wired version of the DEFT, adding the same fix.
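The scan-for-anchors-and-patch approach can be sketched outside the kernel as well; here is a minimal Python illustration, with the byte values taken from the diff above. This is not the actual hid-elecom code, and the function name and anchor choice are my own:

```python
def fix_elecom_descriptor(desc: bytearray) -> bytearray:
    """Patch the DEFT's report descriptor in place, per the diff:
    Report Count (5) -> (8) and Usage Maximum (5) -> (8) for the buttons,
    Report Count (1) -> (0) for the constant padding input."""
    patches = [
        (b"\x95\x05", b"\x95\x08"),  # Report Count (5)  -> Report Count (8)
        (b"\x29\x05", b"\x29\x08"),  # Usage Maximum (5) -> Usage Maximum (8)
        (b"\x95\x01", b"\x95\x00"),  # Report Count (1)  -> Report Count (0)
    ]
    pos = 0
    for old, new in patches:
        # Anchor on the byte pair we are about to change, searching
        # forward so the three patches stay in descriptor order.
        pos = desc.find(old, pos)
        if pos < 0:
            raise ValueError("unexpected descriptor, refusing to patch")
        desc[pos:pos + 2] = new
        pos += 2
    return desc
```

A real quirk driver would do the same kind of patching in its report_fixup callback, after checking the descriptor size so it never touches a device it doesn't recognize.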

So I went and sent another revision (or three) of the patch, including the other ID. Of course I would argue that mine is cleaner for reusing the existing module, but in general I’ll leave it to the maintainers to decide which one to use. One thing I can say for mine, at least, is that I tried to make it very explicit what’s going on, in particular by adding as a comment the side-by-side diff of the Collection stanza that I change in the driver, because I always find it bothersome when I have to look into one of those HID drivers and they seem to just come up with magical constants to save the day. Sigh!

April 21, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Hardware Review: ELECOM DEFT Trackball (April 21, 2017, 17:04 UTC)

I know that by this point it feels like I’m collecting half-done reverse engineering projects, but sometimes you need some time off from one project to figure out how another is going to behave, so I have temporarily shelved my Accu-Chek Mobile and instead took a day to look at a different problem altogether.

I like trackballs, and with the exception of gaming, I consider them superior to mice: no cables that get tangled, no hands to move around the desk, much less space needed to keep clear, and so on. On laptops I prefer TrackPoint™ Style Pointers, which are rare to find, but on desktop I am a happy user of trackballs. Unfortunately my happiness has run into trouble as of late, because finding good trackballs is getting harder. My favourite device would still be Logitech’s Cordless Optical Trackman, but it was discontinued years ago, and it’s effectively impossible to buy. Amazon has second-hand listings for hundreds of dollars! I’m still kicking myself in the face for having dropped mine on the floor (literally) while I packed to move to Dublin, completely destroying it.

The new Logitech offerings appear to be all in the realm of thumb-operated “portable” trackballs, such as the M570, which I have and don’t particularly like. An alternative manufacturer that is easy to find both online and in store is Kensington, and I do indeed own an Orbit with the scroll ring, but it’s in my opinion too avant-garde and inconvenient. So I have been mostly unhappy.

But last year a colleague, also a trackball user, suggested I look into the ELECOM DEFT Trackball (also available wired).

ELECOM, for those who may not be familiar with it, is a Japanese hardware company that sells everything from input devices to ultra-flat network cables. If you have not been to Japan, it may be interesting to know that there is effectively a parallel world of hardware devices you would not find in Europe or the USA, which makes a visit to Yodobashi Camera a must for every self-respecting geek.

I got the DEFT last year, and loved it. I’ve been using it at work all the time, because that’s where I mostly use a non-gaming input device anyway, but recently I started working from home a bit more often (it’s a long story) and got myself a proper setup for it, with a monitor, keyboard and, for a while, the M570 I noted above. I decided then to get myself two more of the DEFT, one to use with my work from home setup, and the other to use with my personal laptop while I work on reverse engineering.

Note here: I made a huge mistake. In both cases I ordered them from eBay directly from Japan, so I had to deal with the boring customs and VAT forms on my end. Which is not terrible, since the An Post depot is about ten minutes away from both my apartment and my office, but it’s still less nice than just receiving the package directly. With the second order I ended up receiving two “Certified Frustration Free” packages, so I checked, and indeed these devices are available on Amazon Japan. As I found out a few weeks ago for a completely different product, there is a feature called AmazonGlobal, which is not available for all products but would have been for these. With AmazonGlobal, the customs and VAT charges are taken care of by Amazon, so there is no need for me to go out of my way and pay cash to An Post. And as it happens, if you don’t want to sign up for an account with Amazon Japan (which somehow is not federated with the others), you can just look for the same product on the USA version of Amazon, and AmazonGlobal applies just the same.

The trackball has a forefinger-operated ball (although ELECOM also makes a thumb-operated trackball), the usual left/middle/right buttons, a scroll wheel that “tilts” (badly) horizontally, and three function buttons at the top of the mouse (which I’ll go back to later). It also has two switches, one on the side, that has a red or blue area showing depending on how you pull it, and one on the bottom that is marked with the power symbol, H and L. Unfortunately, the manual leaflet that comes with the device is all in Japanese, which meant I had to get the help of Google Translate with my phone’s camera.

The switch on the side selects the DPI of the ball tracking (750 for blue, 1500 for red), while the one at the bottom appears to be a “power-assist” for the ball; the manual warns that the H setting will use more battery.

As I said before, the trackball has three function buttons (marked Fn1, Fn2, Fn3) on the top. These are, from what I could tell, configurable through the Windows and Mac application, and they were indeed not seen by Linux at all, neither through xev nor through evtest. So I set myself up to reverse whichever protocol they used to configure this; I expected something similar to the Anker/Holtek gaming mouse I’m also working on, where the software programs the special event ID into the device directly.

The software can be downloaded from ELECOM’s website, although the whole page is in Japanese. On the other hand, the software itself is (badly) translated to English, so fewer yaks to shave there. Unfortunately, when I tried using the app with the USB sniffer open… I could find nothing whatsoever. It turns out the app is actually handling all of that in software, rather than programming the hardware. So why did it not work on Linux? Well, I think that may be the topic for another post, since it turned out to require a kernel patch (which I sent, but can’t currently quite find in the archives). I think that writing a blog post about it is going to be fairly useful, given that I had to assemble documentation that I found on at least a half dozen different sites.

One other thing that may be relevant to know about the ELECOM is that the 2.4GHz connection is somehow a bit unstable. At the office, I cannot connect the receiver to the USB port behind the monitor, because otherwise it skips beats. At home instead I have to put it there, because if I connect it directly to the Anker USB-C adapter I use with my Chromebook, the same problem happens. Ironically, the Microsoft receiver has the opposite problem: if I connect it behind the monitor at home, the keyboard sometimes gets stuck repeating the same key over and over again. But again, that’s a topic for another time.

April 19, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)

A while ago I wanted to report a bug in one of Nextcloud's apps. They use the GitHub issue tracker; after creating a new issue I was welcomed with a long list of things they wanted to know about my installation. I filled in the info to the best of my knowledge, until I was asked for this:

The content of config/config.php:

Which made me stop and wonder: The config file probably contains sensitive information like passwords. I quickly checked, and yes it does. It depends on the configuration of your Nextcloud installation, but in many cases the configuration contains variables for the database password (dbpassword), the smtp mail server password (mail_smtppassword) or both. Combined with other information from the config file (e. g. it also contains the smtp hostname) this could be very valuable information for an attacker.

A few lines later the bug reporting template has a warning (“Without the database password, passwordsalt and secret”), though this is incomplete, as it doesn't mention the smtp password. It also provides an alternative way of getting the content of the config file via the command line.

However... you know, this is the Internet. People don't read the fine print. If you ask them to paste the content of their config file they might just do it.

Users' passwords publicly accessible

The issues on GitHub are all public, and the URLs are of a very simple numbered form (e. g. [number]), so downloading all issues from a project is trivial. Thus with a quick check I could confirm that some users had indeed posted real-looking passwords to the bug tracker.

Nextcloud is a fork of Owncloud, so I checked that as well. The bug reporting template contained exactly the same words; probably Nextcloud just copied it over when they forked. So I reported the issue to both Owncloud and Nextcloud via their HackerOne bug bounty programs. That was in January.

I proposed that both projects should go through their past bug reports and remove everything that looks like a password or another sensitive value. I also said that I think asking for the content of the configuration file is inherently dangerous and should be avoided. To allow users to share configuration options in a safe way I proposed to offer an option similar to the command line tool (which may not be available or usable for all users) in the web interface.
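As an illustration of what such a scrubbing step could look like, here is a hypothetical sketch. This is not an actual Nextcloud or Owncloud tool; the key names are the ones mentioned in this post, and the function name is my own:

```python
import re

# Config keys this post identifies as sensitive in config/config.php.
SENSITIVE_KEYS = ("dbpassword", "mail_smtppassword", "passwordsalt", "secret")

def scrub_config(config_text: str) -> str:
    """Replace the values of known sensitive keys with a placeholder,
    leaving harmless settings (hostnames, paths, flags) intact."""
    pattern = re.compile(
        r"('(?:%s)'\s*=>\s*)'[^']*'" % "|".join(SENSITIVE_KEYS))
    return pattern.sub(r"\1'***REMOVED***'", config_text)
```

Of course anything like this is better offered by the project itself, since a one-off pattern list will miss custom or future sensitive keys.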

The reaction wasn't overwhelming. Apart from both projects acknowledging the problem, nothing happened for quite a while. During FOSDEM I reached out to members of both projects and discussed the issue in person. Shortly after that I announced that I intended to disclose the issue three months after the initial report.

Disclosure deadline was nearing with passwords still public

The deadline was nearing and I hadn't received any report of actions being taken by Owncloud or Nextcloud. I sent out this tweet, which received quite some attention (and I'm sorry that some people got worried about a vulnerability in Owncloud/Nextcloud itself; I got a couple of questions):

will soon disclose a security issue in @ownCloud & @Nextclouders that will harm several users, both projects failed to act properly.

In all fairness to Nextcloud, they had actually started scrubbing data from the existing bug reports, they just hadn't informed me. After the tweet Nextcloud gave me an update, and Owncloud asked for a one-week extension of the disclosure deadline, which I agreed to.

The outcome by now isn't ideal. Both projects have scrubbed all obvious passwords from existing bug reports, although I still find values where it's not entirely clear whether they are replacement values or just very bad passwords (e. g. things like “123456”, but you might argue that people using such passwords have other problems).

Nextcloud has changed the wording of the bug reporting template. The new template still asks for the config file, but it mentions the safer command line option first and has the warning closer to the mentioning of the config. This is still far from ideal and I wouldn't be surprised if people continue pasting their passwords. However Nextcloud developers have indicated in the HackerOne discussion that they might pick up my idea of offering a GUI version to export a scrubbed config file. Owncloud has changed nothing yet.

If you have reported bugs to Owncloud or Nextcloud in the past and are unsure whether you may have pasted your password it's probably best to change it. Even if it's been removed now it may still be available within search engine caches or it might have already been recorded by an attacker.

April 17, 2017

imageworsener is a utility for image scaling and processing.

A fuzz on it discovered a divide-by-zero.

The complete ASan output:

# imagew $FILE /tmp/out -outfmt bmp
==20305==ERROR: AddressSanitizer: FPE on unknown address 0x7f8e57340cd6 (pc 0x7f8e57340cd6 bp 0x7ffc0fee8910 sp 0x7ffc0fee87e0 T0)                                                                                
    #0 0x7f8e57340cd5 in iwgif_record_pixel /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:213:13                                                                           
    #1 0x7f8e57340cd5 in lzw_emit_code /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:312                                                                                   
    #2 0x7f8e57339a94 in lzw_process_code /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:376:3                                                                              
    #3 0x7f8e57339a94 in lzw_process_bytes /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:433                                                                               
    #4 0x7f8e57339a94 in iwgif_read_image /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:669                                                                                
    #5 0x7f8e57339a94 in iwgif_read_main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:724                                                                                 
    #6 0x7f8e5732fb71 in iw_read_gif_file /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:773:6                                                                              
    #7 0x7f8e572e9091 in iw_read_file_by_fmt /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-allfmts.c:61:12                                                                       
    #8 0x519304 in iwcmd_run /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:1191:6                                                                                          
    #9 0x515326 in iwcmd_main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3018:7                                                                                         
    #10 0x515326 in main /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-cmd.c:3067                                                                                                
    #11 0x7f8e562f078f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                       
    #12 0x41b028 in _init (/usr/bin/imagew+0x41b028)                                                                                                                                                              
AddressSanitizer can not provide additional info.                                                                                                                                                                 
SUMMARY: AddressSanitizer: FPE /tmp/portage/media-gfx/imageworsener-1.3.0/work/imageworsener-1.3.0/src/imagew-gif.c:213:13 in iwgif_record_pixel                                                                  

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2017-04-12: bug discovered and reported to upstream
2017-04-14: upstream released a patch
2017-04-17: blog post about the issue
2017-04-19: CVE assigned

This bug was found with American Fuzzy Lop.


imageworsener: divide-by-zero in iwgif_record_pixel (imagew-gif.c)

libcroco is a Generic Cascading Style Sheet (CSS) parsing and manipulation toolkit.

A fuzz on it discovered a heap overflow and an undefined behavior.

The complete ASan output:

# csslint-0.6 $FILE
==9246==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60400000007a at pc 0x7f3771a05074 bp 0x7fff426076a0 sp 0x7fff42607698                                                                          
READ of size 1 at 0x60400000007a thread T0                                                                                                                                                                        
    #0 0x7f3771a05073 in cr_input_read_byte /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-input.c:416:19                                                                                      
    #1 0x7f3771a3c0ba in cr_tknzr_parse_rgb /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-tknzr.c:1295:17                                                                                     
    #2 0x7f3771a3c0ba in cr_tknzr_get_next_token /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-tknzr.c:2127                                                                                   
    #3 0x7f3771ab6688 in cr_parser_parse_any_core /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:1179:18                                                                              
    #4 0x7f3771ab6c1e in cr_parser_parse_any_core /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:1215:34                                                                              
    #5 0x7f3771ab6c1e in cr_parser_parse_any_core /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:1215:34                                                                              
    #6 0x7f3771ab6c1e in cr_parser_parse_any_core /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:1215:34                                                                              
    #7 0x7f3771ab9579 in cr_parser_parse_block_core /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:1005:26                                                                            
    #8 0x7f3771a8882a in cr_parser_parse_atrule_core /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:798:26                                                                            
    #9 0x7f3771ab0644 in cr_parser_parse_stylesheet /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c                                                                                    
    #10 0x7f3771a8131e in cr_parser_parse /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:4381:26                                                                                      
    #11 0x7f3771a804f1 in cr_parser_parse_file /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:2993:18                                                                                 
    #12 0x7f3771b04869 in cr_om_parser_parse_file /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-om-parser.c:956:18                                                                            
    #13 0x51506f in cssom_parse /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/csslint/csslint.c:252:18                                                                                               
    #14 0x51506f in main /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/csslint/csslint.c:997                                                                                                         
    #15 0x7f377041b78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                       
    #16 0x41a9b8 in _init (/usr/bin/csslint-0.6+0x41a9b8)

0x60400000007a is located 0 bytes to the right of 42-byte region allocated by thread T0 here:
    #0 0x4da285 in calloc /tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.0/work/compiler-rt-4.0.0.src/lib/asan/
    #1 0x7f377168a1a0 in g_malloc0 /tmp/portage/dev-libs/glib-2.48.2/work/glib-2.48.2/glib/gmem.c:124
    #2 0x7f3771a00c4d in cr_input_new_from_buf /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-input.c:151:26
    #3 0x7f3771a027d6 in cr_input_new_from_uri /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-input.c:251:26
    #4 0x7f3771a22797 in cr_tknzr_new_from_uri /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-tknzr.c:1642:17
    #5 0x7f3771a8047c in cr_parser_parse_file /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-parser.c:2986:17
    #6 0x7f3771b04869 in cr_om_parser_parse_file /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-om-parser.c:956:18
    #7 0x51506f in cssom_parse /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/csslint/csslint.c:252:18
    #8 0x51506f in main /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/csslint/csslint.c:997
    #9 0x7f377041b78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-input.c:416:19 in cr_input_read_byte
Shadow bytes around the buggy address:
  0x0c087fff7fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c087fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c087fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c087fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c087fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c087fff8000: fa fa 00 00 00 00 00 fa fa fa 00 00 00 00 00[02]
  0x0c087fff8010: fa fa 00 00 00 00 00 00 fa fa fa fa fa fa fa fa
  0x0c087fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

Commit fix:


# csslint-0.6 $FILE
/tmp/portage/dev-libs/libcroco-0.6.12/work/libcroco-0.6.12/src/cr-tknzr.c:1283:15: runtime error: value 9.11111e+19 is outside the range of representable values of type 'long'

Commit fix:

Affected version:
0.6.11 and 0.6.12

Fixed version:
0.6.13 (not released atm)

These bugs were discovered by Agostino Sarubbo of Gentoo.

2017-04-12: bugs discovered and reported to upstream
2017-04-16: upstream released a patch
2017-04-17: blog post about the issues
2017-04-19: CVE assigned

These bugs were found with American Fuzzy Lop.


libcroco: heap overflow and undefined behavior

April 15, 2017
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
GHC as a cross-compiler update (April 15, 2017, 11:05 UTC)


Gentoo’s dev-lang/ghc-8.2.1_rc1 supports both cross-building and cross-compiling modes! It’s useful for cross-compiling Haskell software and for the initial porting of GHC itself to a new Gentoo target.

Building a GHC crosscompiler on Gentoo

Getting ${CTARGET}-ghc (crosscompiler) on Gentoo:

# # convenience variables:
# # Installing a target toolchain: gcc, glibc, binutils
crossdev ${CTARGET}
# # Installing ghc dependencies:
emerge-${CTARGET} -1 libffi ncurses gmp
# # adding 'ghc' symlink to cross-overlay:
ln -s path/to/haskell/overlay/dev-lang/ghc part/to/cross/overlay/cross-${CTARGET}/ghc
# # Building ghc crosscompiler:
emerge -1 cross-${CTARGET}/ghc
powerpc64-unknown-linux-gnu-ghc --info | grep Target
# ,("Target platform","powerpc64-unknown-linux")

Cross-building GHC on Gentoo

Cross-building ghc on ${CTARGET}:

# # convenience variables:
# # Installing a target toolchain: gcc, glibc, binutils
crossdev ${CTARGET}
# # Installing ghc dependencies:
emerge-${CTARGET} -1 libffi ncurses gmp
# # Cross-building ghc crosscompiler:
emerge-${CTARGET} --buildpkg -1 dev-lang/ghc
# # Now built packages can be used on a target to install
# # built ghc as: emerge --usepkg -1 dev-lang/ghc

Building a GHC crosscompiler (generic)

That’s how you get a powerpc64 crosscompiler in a fresh git checkout:

$ ./configure --target=powerpc64-unknown-linux-gnu
$ cat mk/
# to speed things up
$ make -j$(nproc)
$ inplace/bin/ghc-stage1 --info | grep Target
,("Target platform","powerpc64-unknown-linux")


Below are details that have only historical (or backporting) value.

How did we get there?

Cross-compiling support in GHC is not a new thing. The GHC wiki has a detailed section on how to build a crosscompiler. That works quite well. You can even target ghc at m68k: porting example.

What did not work so well was the attempt to install the result! In some places the GHC build system tried to run ghc-pkg built for ${CBUILD}, in some places the one built for ${CHOST}.

I never really tried to install a crosscompiler before, I think mostly because I was usually happy to get the cross-compiler to build at all: making GHC build for a rare target usually required a patch or two.

But one day I decided to give a full install a run. The original motivation was a bit unusual: I wanted to free space on my hard drive.

The build tree for GHC usually takes about 6-8GB. I had about 15 GHC source trees lying around. All in all they took about 10% of the space on my hard drive. Fixing make install would allow me to install only the final result and get rid of all the intermediate files.

I decided to test the make install code on Gentoo‘s dev-lang/ghc as a proper package.

As a result a bunch of minor cleanups happened:

What works?

It allowed me to test various targets. Namely:

Target Bits Endianness Codegen
cross-aarch64-unknown-linux-gnu/ghc 64 LE LLVM
cross-alpha-unknown-linux-gnu/ghc 64 LE UNREG
cross-armv7a-unknown-linux-gnueabi/ghc 32 LE LLVM
cross-hppa-unknown-linux-gnu/ghc 32 BE UNREG
cross-m68k-unknown-linux-gnu/ghc 32 BE UNREG
cross-mips64-unknown-linux-gnu/ghc 32/64 BE UNREG
cross-powerpc64-unknown-linux-gnu/ghc 64 BE NCG
cross-powerpc64le-unknown-linux-gnu/ghc 64 LE NCG
cross-s390x-unknown-linux-gnu/ghc 64 BE UNREG
cross-sparc-unknown-linux-gnu/ghc 32 BE UNREG
cross-sparc64-unknown-linux-gnu/ghc 64 BE UNREG

I am running all of this on x86_64 (a 64-bit LE platform).

Quite a list! With the help of qemu we can even test whether the cross-compiler produces something that works:

$ cat hi.hs 
main = print "hello!"
$ powerpc64le-unknown-linux-gnu-ghc hi.hs -o hi.ppc64le
[1 of 1] Compiling Main             ( hi.hs, hi.o )
Linking hi.ppc64le ...
$ file hi.ppc64le 
hi.ppc64le: ELF 64-bit LSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib64/, for GNU/Linux 3.2.0, not stripped
$ qemu-ppc64le -L /usr/powerpc64le-unknown-linux-gnu/ ./hi.ppc64le 

Many qemu targets are slightly buggy and usually are very easy to fix!

A few recent examples:

  • epoll syscall is not wired properly on qemu-alpha: patch
  • CPU initialization code on qemu-s390x
  • thread creation fails on qemu-sparc32plus due to simple mmap() emulation bug
  • tcg on qemu-sparc64 crashes at runtime in static_code_gen_buffer()

Tweaking qemu is fun 🙂

April 11, 2017

libsndfile is a C library for reading and writing files containing sampled sound.

A fuzz via the sndfile-resample command-line tool of libsamplerate discovered an invalid memory read and an invalid memory write. The upstream author Erik de Castro Lopo (erikd) said that they were fixed in the recent commit 60b234301adf258786d8b90be5c1d437fc8799e0, which addresses CVE-2017-7585. As usual I’m providing the stacktrace and the reproducer so that all release distros can test and check whether their version is affected.

The complete ASan output:

# sndfile-resample -to 24000 -c 1 $FILE out
==959==ERROR: AddressSanitizer: SEGV on unknown address 0x0000013cc000 (pc 0x7fc1ba91251c bp 0x60e000000040 sp 0x7fff95597f70 T0)
==959==The signal is caused by a WRITE memory access.
    #0 0x7fc1ba91251b in flac_buffer_copy /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:264
    #1 0x7fc1ba913404 in flac_read_loop /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:884
    #2 0x7fc1ba913505 in flac_read_flac2f /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:949
    #3 0x7fc1ba907a49 in sf_readf_float /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/sndfile.c:1870
    #4 0x5135c5 in sample_rate_convert /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/examples/sndfile-resample.c:213:29
    #5 0x5135c5 in main /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/examples/sndfile-resample.c:163
    #6 0x7fc1b9a4178f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #7 0x419f88 in _init (/usr/bin/sndfile-resample+0x419f88)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:264 in flac_buffer_copy



# sndfile-resample -to 24000 -c 1 $FILE out
==32533==ERROR: AddressSanitizer: SEGV on unknown address 0x000000004000 (pc 0x7f576a5e8512 bp 0x60e000000040 sp 0x7ffeab4e66d0 T0)
==32533==The signal is caused by a READ memory access.
    #0 0x7f576a5e8511 in flac_buffer_copy /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:263
    #1 0x7f576a5e9404 in flac_read_loop /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:884
    #2 0x7f576a5e9505 in flac_read_flac2f /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:949
    #3 0x7f576a5dda49 in sf_readf_float /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/sndfile.c:1870
    #4 0x5135c5 in sample_rate_convert /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/examples/sndfile-resample.c:213:29
    #5 0x5135c5 in main /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/examples/sndfile-resample.c:163
    #6 0x7f576971778f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #7 0x419f88 in _init (/usr/bin/sndfile-resample+0x419f88)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/media-libs/libsndfile-1.0.27-r1/work/libsndfile-1.0.27/src/flac.c:263 in flac_buffer_copy


Affected version:

Fixed version:

Commit fix:

These bugs were discovered by Agostino Sarubbo of Gentoo.

2017-04-11: bugs discovered and reported to upstream
2017-04-11: blog post about the issues
2017-04-12: CVE assigned

These bugs were found with American Fuzzy Lop.


libsndfile: invalid memory READ and invalid memory WRITE in flac_buffer_copy (flac.c)

libsamplerate is a Sample Rate Converter for audio.

This bug was initially discovered and silently fixed by the upstream author Erik de Castro Lopo (erikd). As usual I’m providing the stacktrace and the reproducer so that all release distros can test and patch their own version of the package.

# sndfile-resample -to 24000 -c 1 $FILE out
==13807==ERROR: AddressSanitizer: global-buffer-overflow on address 0x7f44bc709a3c at pc 0x7f44bc6b1d6b bp 0x7fffec8f5e20 sp 0x7fffec8f5e18                                                                       
READ of size 4 at 0x7f44bc709a3c thread T0                                                                                                                                                                        
    #0 0x7f44bc6b1d6a in calc_output_single /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/src/src_sinc.c:296:48                                                                         
    #1 0x7f44bc6b1d6a in sinc_mono_vari_process /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/src/src_sinc.c:400                                                                        
    #2 0x7f44bc6a3659 in src_process /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/src/samplerate.c:174:11                                                                              
    #3 0x51369a in sample_rate_convert /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/examples/sndfile-resample.c:221:16                                                                 
    #4 0x51369a in main /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/examples/sndfile-resample.c:163                                                                                   
    #5 0x7f44bb55278f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                        
    #6 0x419f88 in _init (/usr/bin/sndfile-resample+0x419f88)                                                                                                                                                     
0x7f44bc709a3c is located 0 bytes to the right of global variable 'slow_mid_qual_coeffs' defined in '/tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/src/mid_qual_coeffs.h:37:3' (0x7f44bc6f3ba0) of size 89756                                                                                                                                                                                             
SUMMARY: AddressSanitizer: global-buffer-overflow /tmp/portage/media-libs/libsamplerate-0.1.8-r1/work/libsamplerate-0.1.8/src/src_sinc.c:296:48 in calc_output_single                                             
Shadow bytes around the buggy address:                                                                                                                                                                            
  0x0fe9178d92f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                 
  0x0fe9178d9300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                 
  0x0fe9178d9310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                 
  0x0fe9178d9320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                 
  0x0fe9178d9330: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                 
=>0x0fe9178d9340: 00 00 00 00 00 00 00[04]f9 f9 f9 f9 f9 f9 f9 f9                                                                                                                                                 
  0x0fe9178d9350: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9                                                                                                                                                 
  0x0fe9178d9360: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9                                                                                                                                                 
  0x0fe9178d9370: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9                                                                                                                                                 
  0x0fe9178d9380: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9                                                                                                                                                 
  0x0fe9178d9390: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9                                                                                                                                                 
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                              
  Addressable:           00                                                                                                                                                                                       
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                     
  Heap left redzone:       fa                                                                                                                                                                                     
  Freed heap region:       fd                                                                                                                                                                                     
  Stack left redzone:      f1                                                                                                                                                                                     
  Stack mid redzone:       f2                                                                                                                                                                                     
  Stack right redzone:     f3                                                                                                                                                                                     
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Erik de Castro Lopo and Agostino Sarubbo.



2017-04-11: bug discovered and reported to upstream
2017-04-11: blog post about the issue
2017-04-12: CVE assigned

This bug was found with American Fuzzy Lop.


libsamplerate: global buffer overflow in calc_output_single (src_sinc.c)

April 10, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.5 (April 10, 2017, 10:19 UTC)

Howdy folks,

I’m obviously slacking a bit on my blog, and I’m ashamed to say that it’s not the only place where I’ve been slacking. py3status is another one of them, and it wouldn’t be the project it is today without @tobes.

In fact, this new 3.5 release has seen him take over the top contributor spot on the project, so I want to extend a warm thank you and lots of congratulations, my friend 🙂

Also, an amazing new contributor from the USA has come around under the nickname @lasers. He has been doing a tremendous job on module normalization, code review and feedback. His high energy is amazing and more than welcome.

This release is mainly his, so thank you @lasers !

What’s new ?

Well, the changelog has never been so large; I don’t even know where to start. I guess the most noticeable change is the gorgeous and brand new documentation of py3status on readthedocs !

Apart from the enhanced guides and sections, what’s amazing behind this new documentation is the level of automation effort that @lasers and @tobes put into it. They even generate the modules’ screenshots programmatically ! I would never have thought it possible 😀

The other main effort in this release is module normalization, where @lasers put a lot of energy into taking advantage of the formatter features and bringing all the modules to a new level of standardization. This long work brought to light some missing features and bugs, which were corrected along the way.

Last but not least, the way py3status notifies you when modules fail to load or execute has changed. Such modules will no longer pop up a notification (i3 nagbar or dbus) but will display directly in the bar where they belong. Users can left-click to show the error and right-click to discard it from their bar !

New modules

Once again, new and recurring contributors helped the project get better and offer a cool set of new modules. Thank you contributors !

  • air_quality module, to display the air quality of your place, by @beetleman and @lasers
  • getjson module to display fields from a json url, by @vicyap
  • keyboard_locks module to display keyboard locks states, by @lasers
  • systemd module to check the status of a systemd unit, by @adrianlzt
  • tor_rate module to display the incoming and outgoing data rates of a Tor daemon instance, by @fmorgner
  • xscreensaver module, by @lasers and @neutronst4r

Special mention to @maximbaz for his continuous efforts and help. And also a special community mention to @valdur55 for his responsiveness and help for other users on IRC !

What’s next ?

The 3.6 version will focus on the following ideas, some sane and some crazy 🙂

  • we will continue to work on the ability to add/remove/move modules in the bar at runtime
  • i3blocks and i3pystatus support, to embed their configurations and modules inside py3status
  • formatter optimizations
  • finish modules normalization
  • write more documentation and clean up the old ones

Stay tuned

April 09, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Switched to Lineage OS (April 09, 2017, 14:40 UTC)

I have been a long time user of Cyanogenmod, which discontinued its services at the end of 2016. Due to lack of (continuous) time, I was not able to switch over to a different ROM right away. Also, I wasn't sure if LineageOS would remain the best choice for me or not. I wanted to review other ROMs for my Samsung Galaxy SIII (the i9300 model) phone.

Today, I made my choice and installed LineageOS.

The requirements list

When looking for new ROMs to use, I had a number of requirements, some must-have, others should-have or would-have (using the MoSCoW method).

First of all, I want the ROM to be installable through ClockworkMod 6.4.0.something. This is a mandatory requirement, because I don't want to venture out into installing a different recovery (like TWRP). Not that I'm scared of it, but it might require me to install tools like Heimdall and update the SELinux policies on my system to allow it to run, and it carries the additional risk that things still fail.

I tried updating the recovery in the past (a year or so ago) using the mobile application approach (which requires root access, which my phone had at the time), but it continuously said that it failed and that I had to revert to the more traditional way of flashing the recovery.

Given that I know I need to upgrade within a day (and have other things planned today), I didn't want to lose too much time in upgrading the recovery first.

Second, the ROM had to allow OTA updates. With CyanogenMod, the OTA didn't fully work on my phone (it downloaded and verified the images correctly, but couldn't install it automatically - I had to reboot in recovery manually and install the ZIP), but it worked sufficiently for me to easily update the phone on a weekly basis. I wanted to keep this luxury, and who knows, move towards an end-to-end working OTA.

Furthermore, the ROM had to support Android 7.1. I want the latest Android to see how long this (nowadays aged) phone can handle things. Once the phone cannot get the latest Android anymore, I'll probably move towards a new phone. But as long as I don't have to, I'll put my money in other endeavours ;-)

Finally, the ROM must be in active development. One of the reasons I want the latest Android is also because I want to keep receiving the necessary security fixes. If a ROM doesn't actively follow the security patches and code, then it might become (too) vulnerable for comfort.

ROMs, ROMs everywhere (?)

First, I visited the Galaxy S3 discussion on the XDA-Developers site. This often contains enough material to find ROMs which have a somewhat active development base.

I was still positively surprised by the activity on this quite old phone (the i9300 was first released in May, 2012, making this phone almost 5 years old).

The Vanir mod seemed to imply that TWRP was required, but past articles on Vanir showed that CWM should also work. However, from the discussion I gathered that it is based on LineageOS. Not that that's bad, but it makes LineageOS the "preferred" ROM to try first (default installed software list, larger upstream community, etc.).

The Resurrection Remix ROM shows a very active discussion with good feedback from the developer(s). It is based on a number of other resources (including CyanogenMod), so it seems to borrow and implement various other features. Although I got the slight impression that it would be a bit more filled with applications I might not want, I kept it on the short-list.

SLIMROM is based on AOSP (the Android Open Source Project). It doesn't seem to support OTA though, and its release history is currently still premature. However, I will keep an eye on this one for future reference.

After a while, I started looking for ROMs based on AOSP, as the majority of ROMs shown are based on LineageOS (abbreviated to LOS). Apparently, for the Samsung S3, LineageOS seems to be one of the most popular sources (and ROMs).

So I turned my attention to LineageOS.

So, why not?

Using LineageOS without root

While deciding whether to use LineageOS or continue looking at other ROMs, I stumbled upon the installation instructions, which showed that the ROM can be installed without automatically enabling rooted Android access. I'm not sure if this was the case with Cyanogenmod (I've been running a rooted Cyanogenmod for too long to remember) but it opened up a possibility for me...

Personally, I don't mind having a rooted phone, as long as it is the user who decides which applications get root access and which don't. For me, the two applications that used root access were an open source ad blocker called AdAway and the Android shell (for troubleshooting purposes, such as killing the media server if it locked my camera).

But some applications seem to think that a rooted phone automatically means that the phone is open access and full of malware. It is hard to find any trustworthy, academic research on the actual security state of rooted versus non-rooted devices. I believe that proper application vetting (don't install applications that aren't popular and long-established, check the application vendors, etc.) and keeping your phone up-to-date is much more important than not rooting.

And although these applications happily function on old, unpatched Android 4.x devices, they refuse to function on my (rooted) Android 7.1 phone. So the ability to install LineageOS without root (rooting actually requires flashing an additional package) is a nice thing: I can start with a non-rooted device first, and switch back to a rooted device if I need it later.

With that, I decided to flash my phone with the latest LineageOS nightly for my phone.

Switching password manager

I tend to use such ROM switches (or, in case of CyanogenMod, major version upgrades) as a time to revisit the mobile application list, and reduce it to what I really used the last few months.

One of the changes I did on my mobile application list is switch the password application. I used to use Remember Passwords but it hasn't seen updates for quite some time, and the backup import failed last time I migrated to a higher CyanogenMod version (possibly Android version related). Because I don't want to synchronize the passwords or see the application have any Internet oriented activity, I now use Keepass2Android Offline.

This is for passwords which I don't auto-generate using SuperGenPass, my favorite password manager. I don't use the bookmarklet approach myself, but download and run it separately when generating passwords - or use a SuperGenPass mobile application.
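The general idea behind deterministic generators like SuperGenPass can be sketched in a few lines. This is an illustration of the concept only, not SuperGenPass's actual algorithm (which uses repeated MD5 hashing with its own base64 variant and validity rules); the function name and inputs are invented:

```python
import base64
import hashlib

def site_password(master: str, domain: str, length: int = 10) -> str:
    # Derive a per-site password from a master secret plus the domain.
    # Same inputs always yield the same output, so nothing needs to be
    # stored or synchronized anywhere.
    digest = hashlib.sha256(f"{master}:{domain}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii")[:length]

pw1 = site_password("correct horse battery", "example.com")
pw2 = site_password("correct horse battery", "example.org")
print(len(pw1), pw1 != pw2)  # 10 True: different domains, different passwords
```

The trade-off is that a leaked master password compromises every derived password, which is why the master secret must be strong.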

First impressions

It is too soon to say if it is fully functional or not. Most standard functionality works OK (phone, SMS, camera) but it is only after a few days that specific issues can come up.

Only the first boot was very slow (probably because it was optimizing the application list in the background), the second boot was well below half a minute. I didn't count it, but it's fast enough for me.

April 07, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)

I want to tell a little story here. I am usually relatively savvy in IT security issues. Yet I was made aware of a quite severe mistake today that caused a security issue in my web page. I want to learn from mistakes, but maybe also others can learn something as well.

I have a private web page. Its primary purpose is to provide a list of links to articles I wrote elsewhere. It's probably not a high value target, but well, being an IT security person I wanted to get security right.

Of course the page uses TLS encryption via HTTPS. It also uses HTTP Strict Transport Security (HSTS), TLS 1.2 with an AEAD and forward secrecy, has a CAA record and even HPKP (although I tend to tell people that they shouldn't use HPKP, because it's too easy to get wrong). Obviously it has an A+ rating on SSL Labs.

Surely I thought about Cross Site Scripting (XSS). While an XSS on the page wouldn't be very interesting - it doesn't have any kind of login or backend and doesn't use cookies - and also quite unlikely – no user supplied input – I've done everything to prevent XSS. I set a strict Content Security Policy header and other security headers. I have an A-rating on (no A+, because after several HPKP missteps I decided to use a short timeout).

I also thought about SQL injection. While an SQL injection would be quite unlikely – you remember, no user supplied input – I'm using prepared statements, so SQL injections should be practically impossible.
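The reason prepared statements shut injection down is that parameter values travel separately from the SQL text. A quick sketch of the same pattern, using Python's sqlite3 as a stand-in for PHP's PDO (the table and values are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (id INTEGER PRIMARY KEY, title TEXT)")

# The placeholder keeps the input out of the SQL text entirely, so even a
# hostile-looking string is stored as plain data, never parsed as SQL.
hostile = "x'); DROP TABLE links; --"
conn.execute("INSERT INTO links (title) VALUES (?)", (hostile,))

count = conn.execute("SELECT COUNT(*) FROM links").fetchone()[0]
print(count)  # 1: the table survived, the hostile string is just a row
```

Had the string been concatenated into the statement instead, the quote and comment characters would have been interpreted as SQL.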

All in all I felt that I have a pretty secure web page. So what could possibly go wrong?

Well, this morning someone sent me this screenshot:
PDO stack trace

And before you ask: Yes, this was the real database password. (I changed it now.)

So what happened? The MySQL server was down for a moment. It had crashed for reasons unrelated to this web page. I had already taken care of that, but hadn't noticed the password leak. The crashed MySQL server subsequently led to an error message from PDO (PDO stands for PHP Data Objects and is the modern way of doing database operations in PHP).

The PDO error message contains a stack trace of the function call including function parameters. And this led to the password leak: The password is passed to the PDO constructor as a parameter.
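This failure mode exists outside PHP too, whenever a stack trace is rendered together with argument or local-variable values. A Python sketch of the principle, using the standard library's optional `capture_locals` (which records frame variables much as PDO's traces record constructor parameters; the credentials are fake):

```python
import traceback

def connect(dsn, user, password):
    # Simulated failing database constructor.
    raise RuntimeError("connection refused")

try:
    connect("mysql:host=localhost", "app", "hunter2-fake")
except RuntimeError as exc:
    trace = "".join(
        traceback.TracebackException.from_exception(
            exc, capture_locals=True
        ).format()
    )

# The rendered trace now contains the failing frame's locals,
# password included: exactly what must never reach an error page.
print("hunter2-fake" in trace)  # True
```

The lesson generalizes: any error reporter verbose enough to show variable values will happily show secrets.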

There are a few things to note here. First of all, for this to happen the PHP option display_errors needs to be enabled. It is recommended to disable this option on production systems, however it is enabled by default. (Interestingly enough, the PHP documentation about display_errors doesn't even tell you what the default is.)

display_errors wasn't enabled by accident. It was actually disabled in the past. I made a conscious decision to enable it. Back when we had display_errors disabled on the server I once tested a new PHP version where our custom config wasn't enabled yet. I noticed several bugs in PHP pages. So my rationale was that disabling display_errors hides bugs, thus I'd better enable it. In hindsight it was a bad idea. But well... hindsight is 20/20.

The second thing to note is that this only happens because PDO throws an exception that is unhandled. To be fair, the PDO documentation mentions this risk. Other kinds of PHP bugs don't show stack traces. If I had used mysqli – the second supported API to access MySQL databases in PHP – the error message would've looked like this:

PHP Warning: mysqli::__construct(): (HY000/1045): Access denied for user 'test'@'localhost' (using password: YES) in /home/[...]/mysqli.php on line 3

While this still leaks the username, it's much less dangerous. This is a subtlety that is far from obvious. PHP functions have different modes of error reporting. Object oriented functions – like PDO – throw exceptions. Unhandled exceptions will lead to stack traces. Other functions will just report error messages without stack traces.

If you wonder about the impact: It's probably minor. People could've seen the password, but I haven't noticed any changes in the database. I obviously changed it immediately after being notified. I'm pretty certain that there is no way that a database compromise could be used to execute code within the web page code. It's far too simple for that.

Of course there are a number of ways this could've been prevented, and I've implemented several of them. I'm now properly handling exceptions from PDO. I also set a general exception handler that will inform me (and not the web page visitor) if any other unhandled exceptions occur. And finally I've changed the server's default to display_errors being disabled.
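The "inform me, not the web page visitor" handler is a pattern worth sketching. In Python terms it would look roughly like this, with `sys.excepthook` standing in for PHP's `set_exception_handler` and an in-memory buffer standing in for a real log file or mail alert (all names here are invented for the example):

```python
import io
import sys

error_log = io.StringIO()  # stand-in for a real log file or mail alert

def quiet_excepthook(exc_type, exc, tb):
    # Full details go to the operator's log; the "visitor" sees only a
    # generic message with no stack trace and no parameter values.
    error_log.write(f"unhandled {exc_type.__name__}: {exc}\n")
    print("Something went wrong. The administrator has been notified.")

sys.excepthook = quiet_excepthook

# Simulate an unhandled exception reaching the hook:
try:
    raise ConnectionError("Access denied for user 'test'@'localhost'")
except ConnectionError:
    sys.excepthook(*sys.exc_info())

print("logged:", error_log.getvalue().strip())
```

The key property is the asymmetry: the sensitive detail is recorded where only the operator can read it.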

While I don't want to shift too much blame here, I think PHP is making this far too easy to happen. There exists a bug report about the leaking of passwords in stack traces from 2014, but nothing happened. I think there are a variety of unfortunate decisions made by PHP. If display_errors is dangerous and discouraged for production systems then it shouldn't be enabled by default.

PHP could avoid sending stack traces by default and make this a separate option from display_errors. It could also introduce a way to make exceptions fatal for functions so that calling those functions is prevented outside of a try/catch block that handles them. (However that obviously would introduce compatibility problems with existing applications, as Craig Young pointed out to me.)

So finally maybe a couple of takeaways:

  • display_errors is far more dangerous than I was aware of.
  • Unhandled exceptions introduce unexpected risks that I wasn't aware of.
  • In the past I was recommending that people should use PDO with prepared statements if they use MySQL with PHP. I wonder if I should reconsider that; given the circumstances, mysqli seems safer (it also supports prepared statements).
  • I'm not sure if there's a general takeaway, but at least for me it was quite surprising that I could have such a severe security failure in a project and code base where I thought I had everything covered.

April 05, 2017

binutils are a collection of binary tools necessary to build programs.

An updated clang version was able to discover two null pointer dereferences in the following simple way:

# echo "int main () { return 0; }" > test.c
# cc test.c -o test
/tmp/portage/sys-devel/binutils-2.28/work/binutils-2.28/bfd/elflink.c:124:12: runtime error: member access within null pointer of type 'struct elf_link_hash_entry'                            

/tmp/portage/sys-devel/binutils-2.28/work/binutils-2.28/bfd/elflink.c:11979:58: runtime error: member access within null pointer of type 'elf_section_list' (aka 'struct elf_section_list')  

Affected version:

Fixed version:

Commit fix:;h=ad32986fdf9da1c8748e47b8b45100398223dba8

This bug was discovered by Agostino Sarubbo of Gentoo.


2017-04-01: bug discovered and reported to upstream
2017-04-04: upstream released a patch
2017-04-05: blog post about the issue
2017-04-09: CVE assigned

This bug was found with clang’s Undefined Behavior Sanitizer.


binutils: two NULL pointer dereference in elflink.c

April 03, 2017

elfutils is a set of libraries/utilities to handle ELF objects (a drop-in replacement for libelf).

A fuzz on eu-elflint showed a memory allocation failure.

The interesting ASan output:

# eu-elflint -d $FILE
==5053==AddressSanitizer CHECK failed: /tmp/portage/sys-devel/gcc-6.3.0/work/gcc-6.3.0/libsanitizer/sanitizer_common/ "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
    #0 0x7faa2335941d  (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #1 0x7faa2335f063 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #2 0x7faa2335f24d  (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #3 0x7faa23368c52  (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #4 0x7faa232ba0b9  (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #5 0x7faa232b249b  (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #6 0x7faa2335040a in calloc (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #7 0x431b8d in xcalloc /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/lib/xmalloc.c:64
    #8 0x41f0bb in check_sections /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:3680
    #9 0x42961f in process_elf_file /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:4697
    #10 0x42961f in process_file /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:242
    #11 0x402d33 in main /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:175
    #12 0x7faa21c6378f in __libc_start_main (/lib64/
    #13 0x403498 in _start (/usr/bin/eu-elflint+0x403498)

Affected version:

Fixed version:
0.169 (not released atm)

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2017-03-27: bug discovered and reported to upstream
2017-04-04: blog post about the issue
2017-04-09: CVE assigned

This bug was found with American Fuzzy Lop.


elfutils: memory allocation failure in xcalloc (xmalloc.c)

elfutils is a set of libraries/utilities to handle ELF objects (a drop-in replacement for libelf).

A fuzz on eu-elflint showed a heap overflow.

The complete ASan output:

# eu-elflint -d $FILE
==14428==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60b00000aff4 at pc 0x00000040b36b bp 0x7ffe1e25ef20 sp 0x7ffe1e25ef18
READ of size 4 at 0x60b00000aff4 thread T0
    #0 0x40b36a in check_sysv_hash /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:2020
    #1 0x40b36a in check_hash /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:2315
    #2 0x422e73 in check_sections /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:4118
    #3 0x42961f in process_elf_file /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:4697
    #4 0x42961f in process_file /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:242
    #5 0x402d33 in main /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:175
    #6 0x7f7a318a878f in __libc_start_main (/lib64/
    #7 0x403498 in _start (/usr/bin/eu-elflint+0x403498)

0x60b00000aff7 is located 0 bytes to the right of 103-byte region [0x60b00000af90,0x60b00000aff7)
allocated by thread T0 here:
    #0 0x7f7a32f95288 in malloc (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/
    #1 0x7f7a32bf1b46 in convert_data /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/libelf/elf_getdata.c:166
    #2 0x7f7a32bf1b46 in __libelf_set_data_list_rdlock /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/libelf/elf_getdata.c:434
    #3 0x7f7a32bf2662 in __elf_getdata_rdlock /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/libelf/elf_getdata.c:541
    #4 0x7f7a32bf2776 in elf_getdata /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/libelf/elf_getdata.c:559
    #5 0x7f7a32c1e035 in elf32_getchdr /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/libelf/elf32_getchdr.c:72
    #6 0x7f7a32c1e55c in gelf_getchdr /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/libelf/gelf_getchdr.c:52
    #7 0x420edf in check_sections /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:3911
    #8 0x42961f in process_elf_file /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:4697
    #9 0x42961f in process_file /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:242
    #10 0x402d33 in main /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:175
    #11 0x7f7a318a878f in __libc_start_main (/lib64/

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/dev-libs/elfutils-0.168/work/elfutils-0.168/src/elflint.c:2020 in check_sysv_hash
Shadow bytes around the buggy address:
  0x0c167fff95a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff95b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff95c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff95d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff95e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c167fff95f0: fa fa 00 00 00 00 00 00 00 00 00 00 00 00[07]fa
  0x0c167fff9600: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff9610: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff9620: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff9630: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c167fff9640: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

Affected version:

Fixed version:
0.169 (not released atm)

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2017-03-27: bug discovered and reported to upstream
2017-04-04: blog post about the issue
2017-04-09: CVE assigned

This bug was found with American Fuzzy Lop.


elfutils: heap-based buffer overflow in check_sysv_hash (elflint.c)

March 30, 2017
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

After a short detour on how Skydive can help debug Service Function Chaining, in this post I will give details on the new RPM files now available in RDO to install networking-sfc. We will go through a complete TripleO demo deployment, and configure the different nodes to enable the networking-sfc components.
Note that most steps are quite generic, so they can be used to install networking-sfc on other OpenStack setups!

Running tripleo-quickstart

To quickly get a TripleO setup up and running, I used tripleo-quickstart with mostly default options. So prepare a big enough machine (I used a 32GB one) that you can SSH into as root with a password, get this helpful script and run it against the master "release":
$ ./ --install-deps
$ ./ -R master -t all ${VIRTHOST}

After some coffee cups, you should get an undercloud node, an overcloud with one compute node and one controller node, all running inside your test system.

Accessing the nodes

A quick tip here: if you want easy access to all nodes and web interfaces, I recommend sshuttle, the “poor man’s VPN”. Once installed (it is available in Fedora and Gentoo at least), run this command (as root):
# sshuttle -e "ssh -F /home/YOUR_USER/.quickstart/ssh.config.ansible" -r undercloud -v

And now, you can directly access both undercloud IP addresses (192.168.x.x) and overcloud ones (10.x)! For example, you can take a look at the tripleo-ui web interface, which should be at (username:admin, password: find with “sudo hiera admin_password” on the undercloud)

As for SSH access, tripleo-quickstart created a configuration file to simplify the commands, so you can use these:
# undercloud login (where we can run CLI commands)
$ ssh -F ~/.quickstart/ssh.config.ansible undercloud
# overcloud compute node
$ ssh -F ~/.quickstart/ssh.config.ansible overcloud-novacompute-0
# overcloud controller node
$ ssh -F ~/.quickstart/ssh.config.ansible overcloud-controller-0

The undercloud has some interesting scripts and credential files (stackrc for the undercloud itself, overcloudrc for… the overcloud indeed). But let’s get back to SFC first.

Enable networking-sfc

First, install the networking-sfc RPM package on each node (we will run CLI and demo scripts from the undercloud, so do it on all three nodes):
# yum install -y python-networking-sfc

That was easy, right? OK, you still have to do the configuration steps manually (for now).

On the controller node(s), modify the neutron-server configuration file /etc/neutron/neutron.conf:

# Look for the service_plugins line, add the SFC ones at the end
# The service plugins Neutron will use (list value)

# At the end of the file, set the backends to use (in this case, the default OVS one)
[sfc]
drivers = ovs

[flowclassifier]
drivers = ovs

The controller is now configured. Next, create the SFC tables in the neutron database, and restart the neutron-server service:
$ sudo neutron-db-manage --subproject networking-sfc upgrade head
$ sudo systemctl restart neutron-server

Now for the compute node(s)! We will enable the SFC extension in the Open vSwitch agent. Note that its configuration file can differ depending on your setup (you can confirm yours by checking the output of “ps aux|grep agent”). In this demo, edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

# This time, look for the extensions line, add sfc to it
# Extensions list to use (list value)
extensions = qos,sfc

And restart the agent:
$ sudo systemctl restart neutron-openvswitch-agent

Demo time

Congratulations, you have successfully deployed a complete OpenStack setup and enabled SFC on it! To confirm it, connect to the undercloud and run some networking-sfc commands against the overcloud; they should complete without errors:
$ source overcloudrc
(OVERCLOUD) $ neutron port-pair-list # Or "port-pair-create --help"

I updated my demo script for this tripleo-quickstart setup: in addition to the SFC-specific parts, it will also create basic networks, images, … and floating IP addresses for the demo VMs (we cannot connect directly to the private addresses, as we are not on the same node this time). Now, still on the undercloud, download and run the script:
$ git clone
$ ./openstack-scripts/

If all went well, you can read back a previous post to see how to test this setup, or go on your own and experiment.

As a short example to end this post, this will confirm that an HTTP request from the source VM does indeed visit a few systems on the way:
$ source overcloudrc
# Get the private IP for the destination VM
(OVERCLOUD) $ openstack server show -f value -c addresses dest_vm
# Get the floating IP for the source VM
(OVERCLOUD) $ openstack server show -f value -c addresses source_vm
(OVERCLOUD) $ ssh cirros@

$ traceroute -n
traceroute to (, 30 hops max, 46 byte packets
1 23.700 ms 0.476 ms 0.320 ms
2 4.239 ms 0.467 ms 0.374 ms
3 0.941 ms 0.599 ms 0.429 ms

Yes, these networking-sfc RPM packages do seem to work 🙂

March 29, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GnuCash: Stock and Currencies (March 29, 2017, 00:03 UTC)

This is probably a post that can be categorized as a first-world problem, but it’s something that I found difficult to figure out by myself, and I thought it may be interesting to other free software users who happen to work for a company that provides stock grants as a form of compensation. As it happens, I’m one of them.

Many years ago I set up my accounting through GnuCash, because I needed something to keep track of the expenses of my company, back when I had one. Being stubborn and pedantic, now four years after closing said company, I still keep running it and I have a nearly complete accounting of my expenses, certainly from 2009 (when I started recording all movements on my credit cards) and some from before — I actually manually imported all operations on my bank account from its opening in 2005!

One thing that was not very clear to me was how to set up proper accounts to track compensation in the form of stock. The as usual unnamed company I work for provides me with part of my compensation in the form of stock grants, as long as I work for them. The tax implications of these grants are a bit complicated, but boil down to needing to know the “fair market value” (FMV) of the stock at vesting, and the exchange rate for that month on the Irish Revenue website, since the stock is valued in dollars, but my taxes are in Euro (and do note that it does not matter when you transfer that money, as far as I can tell, which is both a blessing and a curse of course).

For this description to work, your GnuCash file needs to be set to Use Trading Accounts, as that makes it much nicer to have multiple currencies and securities in the same transaction. You also need to create a new security (symbol), a new income account and a new stock account. For the first you select Tools → Security Editor → Add; the other two can be created by right clicking on the Income and Stock accounts respectively, and choosing New Account…:

Creating a new Security for Free Software Foundation, with symbol GNU and type BAZAAR.

Creating a new Stock Grants income with USD as currency.

Creating a new FSF stock account with GNU as the security.

Once you open the newly-created stock account, it will show you a number of columns as usual for GnuCash, but in particular it’ll have the stock-specific Price, Buy, and Sell columns, which are used for trading. What was definitely not clear to me originally was that the price is always expressed in the default currency of the GnuCash file, which as far as I know is not editable.

You can of course manually calculate the value at time of vesting by using the FMV and the exchange rate, but that becomes quite a bit more complicated in my opinion. So instead, whenever one of my grants vests, I now go to the income account to record it. This account is denominated in dollars, which matches the symbol’s currency, and we’ll use it to add new stock to the stock (or asset) account.

Say for example that as of today I received a grant of two stock units at an FMV of $1337 (note that I expanded the splits of the line; that is on purpose, as it makes it easier to edit):

Adding an Income entry of $1337*2 in the Stock Grants account.

You then add a second split that points to the stock account created previously, and insert the number of stock units (2) in the Charge column. Again, this only works correctly if you’re using trading accounts! Once you do that, the Transfer Funds dialog will pop up, and it will already have the right “Exchange Rate” (which is actually the FMV at vesting time).

Transfer Funds dialog for an income of 2 FSF stock units.

Up to now this is all good: you have some stock, and the Buy field is actually showing the right price in dollars, but the account’s value is not going to be aggregated correctly in the Accounts page. This is because up until now we have made no effort to deal with the exchange rate between USD and EUR.

The easiest way to fix this is to go to Tools → Price Editor, select the symbol, and add a new price (I use Net Asset Value for the type); for the price, divide the dollar value by the right exchange rate.

Price Editor dialog setting the price of GNU to 1337/1.0555 EUR
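If you want to double-check the arithmetic outside GnuCash, the conversion is easy to script. A quick sketch using the example figures from above (the 1.0555 USD/EUR rate is of course made up for the example):

```shell
# Example figures from the post: FMV of $1337 per unit, 2 units vested,
# and a hypothetical USD/EUR exchange rate of 1.0555 for the month.
fmv_usd=1337
units=2
rate=1.0555

# Per-unit price in EUR -- the value to enter in the Price Editor.
awk -v f="$fmv_usd" -v r="$rate" \
    'BEGIN { printf "Price per unit: EUR %.4f\n", f / r }'

# Total grant value in EUR -- the figure needed for tax reporting.
awk -v f="$fmv_usd" -v u="$units" -v r="$rate" \
    'BEGIN { printf "Grant value:    EUR %.4f\n", f * u / r }'
```

The same division is what GnuCash effectively stores when you enter the price as 1337/1.0555 in the Price Editor.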

Et voilà! Now you should have most of the data you need not only to keep an eye on the stock grants, but also to do tax reporting as you have recorded the FMV already converted to Euro, which is what you need there. Unfortunately, as far as I can tell, there is no easy way to export the prices in a nice table that you can just use for reporting. And I’m horrible with Scheme so I have not considered writing my own reporting module yet.

There is also another caveat: I have tried to track the grants already in two different ways in the past, and they did not work out very well. I’m seriously considering making a copy of my GnuCash file, delete the accounts altogether and recreate them following this simplified procedure. But that will probably be a bridge for another time.

March 27, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
cvechecker 3.8 released (March 27, 2017, 17:00 UTC)

A new release is now available for the cvechecker application. This is a stupid yet important bugfix release: the 3.7 release saw all newly released CVEs as already known, so it never added them to the database. As a result, systems would never check for the new CVEs.

March 25, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We want to stabilize Perl 5.24 on Gentoo pretty soon (meaning in a few weeks), and actually do not expect any big surprises there. If you are running a stable installation, are willing to do some testing, and are familiar with our Gentoo bugzilla and with filing bug reports, then you might just be the right volunteer to give it a try in advance!

Here's what to do:

Step 1: Update app-admin/perl-cleaner to current ~arch.
I'm deliberately not supplying any version number here, since I might do another release, but you should at least have perl-cleaner-2.25.

Step 2: Make sure your system is up to date (emerge -uDNav world) and do a depclean step (emerge --depclean --ask).

Step 3: Download the current stabilization list from bug 604602 and place it into your /etc/portage/package.keywords or /etc/portage/package.accept_keywords.

Step 4: Update your world (emerge -uDNav world), which triggers the perl update and the module rebuild.

Step 5: Run "perl-cleaner --all"  (you might also want to try "perl-cleaner --all --delete-leftovers").

... and make sure you file bugs for any problems you encounter, during the update and afterwards! Feedback is also appreciated if all goes fine; in that case, best leave a comment here on the blog post.

March 21, 2017
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)
WireGuard in Google Summer of Code (March 21, 2017, 18:52 UTC)

WireGuard is participating in Google Summer of Code 2017. If you're a student who would like to be funded this summer for writing interesting kernel code, studying cryptography, building networks, or working on a wide variety of interesting problems, then this might be appealing. The program opened to students on March 20th. If you're applying for WireGuard, choose "Linux Foundation" and state in your proposal that you'd like to work on WireGuard with "Jason Donenfeld" as your mentor.

March 17, 2017
Michał Górny a.k.a. mgorny (homepage, bugs)

You should know already that you are not supposed to rely on Portage internals in ebuilds — that is, on any variables, functions and helpers that are not defined by the PMS. You probably know that you are not supposed to touch various configuration files, vdb and other Portage files as well. What most people don’t seem to understand is that you are not supposed to make any assumptions about the ebuild repository either. In this post, I will expand on this and try to explain why.

What PMS specifies, what you can rely on

I think the first confusing point is that PMS actually defines the repository format pretty thoroughly. However, it does not specify that you can rely on that format being visible from within ebuild environment. It just defines a few interfaces that you can reliably use, some of them in fact quite consistent with the repository layout.

You should really look at the PMS-defined repository format as an input specification. This is the format that the developers are supposed to use when writing ebuilds, and that all basic tools are supposed to support. However, it does not prevent the package managers from defining and using other package formats, as long as they provide the environment compliant with the PMS.

In fact, this is how binary packages are implemented in Gentoo. The PMS does not define any specific format for them. It only defines a few basic rules and facilities, and both Portage and Paludis implement their own binary package formats. The package managers expose APIs required by the PMS, and can use them to run the necessary pkg_* phases.

However, the problem is not limited to two currently used binary package formats. This is a generic goal of being able to define any new package format in the future, and make it work out of the box with existing ebuilds. Imagine just a few possibilities: more compact repository formats (i.e. not requiring hundreds of unpacked files), fetching only needed ebuild files…

Sadly, none of this can even start being implemented if developers continuously insist on relying on a specific repository layout.

The *DIR variables

Let’s get into the details and iterate over the few relevant variables here.

First of all, FILESDIR. This is the directory where ebuild support files are provided throughout src_* phases. However, there is no guarantee that this will be exactly the directory you created in the ebuild repository. The package manager just needs to provide the files in some directory, and this directory may not actually exist before the first src_* phase. This implies that the support files may not even exist at all when installing from a binary package, and may be created (copied, unpacked) later when doing a source build.

The next variable listed by the PMS is DISTDIR. While this variable is somewhat similar to the previous one, some developers are actually eager to make the opposite assumption. Once again, the package manager may provide the path to any directory that contains the downloaded files. This may be a ‘shadow’ directory containing only files for this package, or it can be any system downloads directory containing lots of other files. Once again, you can’t assume that DISTDIR will exist before src_*, and that it will exist at all (and contain necessary files) when the build is performed using a binary package.

The two remaining variables I would like to discuss are PORTDIR and ECLASSDIR. Those two are a cause of real mayhem: they are completely unsuited for a multi-repository layout modern package managers use and they enforce a particular source repository layout (they are not available outside src_* phases). They pretty much block any effort on improvement, and sadly their removal is continuously blocked by a few short-sighted developers. Nevertheless, work on removing them is in progress.

Environment saving

While we’re discussing those matters, a short note on environment saving is in order. By environment saving we usually mean the magic that causes the variables set in one phase function to be carried to a phase function following it, possibly over a disjoint sequence of actions (i.e. install followed by uninstall).

A common misunderstanding is to assume the Portage model of environment saving — i.e. basically dumping a whole ebuild environment including functions into a file. However, this is not sanctioned by the PMS. The rules require the package manager to save only variables, and only those that are not defined in global scope. If phase functions define functions, there is no guarantee that those functions will be preserved or restored. If phases redefine global variables, there is no guarantee that the redefinition will be preserved.

In fact, the specific wording used in the PMS allows a completely different implementation to be used. The package manager may just snapshot defined functions after processing the global scope, or even not snapshot them at all and instead re-read the ebuild (and re-inherit eclasses) every time the execution continues. In this case, any functions defined during a phase function are lost.
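A contrived sketch of what this means in practice (a hypothetical ebuild fragment, not taken from any real package):

```shell
src_configure() {
    # A plain, non-global variable: the PMS guarantees this is saved
    # and restored for the later phases.
    MY_CONF_ARGS="--with-foo"

    # A function defined inside a phase: there is NO guarantee it
    # survives to later phases.
    my_helper() { echo "doing something"; }
}

src_compile() {
    echo "${MY_CONF_ARGS}"  # safe under any PMS-compliant package manager
    my_helper               # may be undefined: works in Portage today,
                            # but only by implementation accident
}
```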

Is there a future in this?

I hope this clears up all the misunderstandings on how to write ebuilds so that they will work reliably, both for source and binary builds. If those rules are followed, our users can finally start expecting some fun features to come. However, before that happens we need to fix the few existing violations — and for that to happen, we need a few developers to stop thinking only of their own convenience.

Marek Szuba a.k.a. marecki (homepage, bugs)
Gentoo Linux in a Docker container (March 17, 2017, 14:31 UTC)

I have been using Docker for ebuild development for quite a while and absolutely love it, mostly because of how easy it is to manipulate filesystem state with it. Work on several separate ebuilds in parallel? Just spin up several containers. Clean up once I’m done? Happens automatically when I close the container. Come back to something later? One docker commit invocation and I’m done. I could of course do something similar with virtual machines (and indeed I have to for cross-platform work) – but for native amd64 it is extremely convenient.

There is, however, one catch. By default processes running in a Docker container are fairly restricted privilege-wise and the Gentoo sandbox uses ptrace(). Result? By default, certain ebuilds (sys-libs/glibc and dev-libs/gobject-introspection , to name just two) will fail to emerge. One can of course set FEATURES=”-sandbox -usersandbox” for such ebuilds but it is an absolute no-no for both new ebuilds and any stabilisation work.

In the past working around this issue required messing with Docker security policies, which at least I found rather awkward. Fortunately since version 1.13.0 there has been a considerably easier way – simply pass

--cap-add=SYS_PTRACE

to docker-run. Done! Sandbox can now use ptrace() to its heart’s content.

Big Fat Warning: The reason why by default Docker restricts CAP_SYS_PTRACE is that a malicious program can use ptrace() to break out of the container it runs in. Do not grant this capability to containers unless you know what you are doing. Seriously.

Unfortunately the above is not the end of the story because, at least as of version 1.13.0, Docker does not allow enhancing the capabilities of a docker-build job. Why is this a problem? For my own work I use a custom image which somewhat extends the official gentoo/stage3-amd64-hardened . One of the things my Dockerfile does is rsync the Portage tree and update @world so that my image contains a fully up-to-date stage3 even when the official base image does not. You can guess what happens when Docker tries to emerge an ebuild requiring the sandbox to use ptrace()… and remember, one of the packages containing such ebuilds is sys-libs/glibc . To my current knowledge the only way around this is to spin up a ptrace-enabled container using the latest good intermediate image left behind by docker-build and execute the remaining build steps manually. Not fun… Hope they will fix this some day.
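The manual fallback looks roughly like this (a sketch only; the image name, the placeholder IDs and the steps to finish by hand all depend on your own Dockerfile):

```shell
# Build as far as Docker can get; the emerge that needs ptrace() fails,
# but the intermediate image from the last successful step survives.
docker build -t my-gentoo-image .

# Note the last "---> <image-id>" printed before the failure, then
# resume in a container that is allowed to use ptrace():
docker run -it --cap-add=SYS_PTRACE --name finish-build <last-good-image-id> /bin/bash
# ...run the remaining Dockerfile steps by hand inside the container...

# Finally, commit the result under the tag the build would have produced:
docker commit finish-build my-gentoo-image
```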


Possibly the simplest way of changing the passphrase protecting an SSH key imported into gpg-agent is to use the Assuan passwd command:

echo passwd foo | gpg-connect-agent

where foo is the keygrip of your SSH key, which one can obtain from the file $GNUPGHOME/sshcontrol [1]. So far so good – but how does one know which of the keys listed in that file is the right one, especially if your sshcontrol list is fairly long? Here are the options I am aware of at this point:

Use the key comment. If you remember the contents of the comment field of the SSH key in question you can simply grep for it in all the files stored in $GNUPGHOME/private-keys-v1.d/ . Take the name of the file that matches, strip .key from the end and you’re set! Note that these are binary files so make sure your grep variant does not skip over them.
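That search can be wrapped in a couple of shell lines. A rough sketch (“my-laptop-key” stands in for whatever comment your key actually carries):

```shell
# Look up the keygrip of an SSH key by its comment, per the first option.
# "my-laptop-key" is a placeholder comment string.
comment="my-laptop-key"
keydir="${GNUPGHOME:-$HOME/.gnupg}/private-keys-v1.d"

# -l prints matching file names only; -a forces grep to search the
# binary .key files instead of skipping them.
for f in $(grep -la "$comment" "$keydir"/*.key); do
    basename "$f" .key    # the keygrip, ready for gpg-connect-agent
done
```

The printed keygrip is what you would then feed to the gpg-connect-agent passwd command shown above.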

Use the MD5 fingerprint and the key comment. If for some reason you would rather not do the above you can take advantage of the fact that for SSH keys imported into gpg-agent the normal way, each keygrip line in sshcontrol is preceded by comment lines containing, among other things, the MD5 fingerprint of the imported key. Just tell ssh-add to print MD5 fingerprints for keys known to the agent instead of the default SHA256 ones:

ssh-add -E md5 -l

locate the fingerprint corresponding to the relevant key comment, then find the corresponding keygrip in sshcontrol .

Use the MD5 fingerprint and the public key. A slightly more complex variant of the above can be used if your SSH key pair in question has no comment but you still have the public key lying around. Start by running

ssh-add -L

and note the number of the line in which the public key in question shows up. The output of ssh-add -L and ssh-add -l is in the same order so you should have no trouble locating the corresponding MD5 fingerprint.

Bottom line: use meaningful comments for your SSH keys. It can really simplify key management in the long run.


March 15, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)
Zero Days and Cargo Cult Science (March 15, 2017, 12:25 UTC)

I've complained in the past about the lack of rigorous science in large parts of IT security. However there's no lack of reports and publications that claim to provide data about this space.

Recently RAND Corporation, a US-based think tank, published a report about zero day vulnerabilities. Many people praised it; an article on Motherboard quotes people saying that we finally have “cold hard data”, including people from the zero day business who came to the conclusion that this report clearly confirms what they already believed.

I read the report. I wasn't very impressed. The data is so weak that I think the conclusions are almost entirely meaningless.

The story that is spun around this report needs some context: There's a market for secret security vulnerabilities, often called zero days or 0days. These are vulnerabilities in IT products that some actors (government entities, criminals or just hackers who privately collect them) don't share with the vendor of that product or the public, so the vendor doesn't know about them and can't provide a fix.

One potential problem with this is bug collisions. Actor A may find or buy a security bug and choose to not disclose it and use it for its own purposes. If actor B finds the same bug then he might use it to attack actor A or attack someone else. If A had disclosed that bug to the vendor of the software it could've been fixed and B couldn't have used it, at least not against people who regularly update their software. Depending on who A and B are (more or less democratic nation states, nation states in conflict with each other or simply criminals) one can argue how problematic that is.

One question that arises here is how common that is. If you found a bug – how likely is it that someone else will find the same bug? The argument goes that if this rate is low then stockpiling vulnerabilities is less problematic. This is how the RAND report is framed. It tries to answer that question and comes to the conclusion that bug collisions are relatively rare. Thus many people now use it to justify that zero day stockpiling isn't so bad.

The data is hardly trustworthy

The basis of the whole report is an analysis of 207 bugs by an entity that shared this data with the authors of the report. It is incredibly vague about that source. They name their source with the hypothetical name BUSBY.

We can learn that it's a company in the zero day business and indirectly we can learn how many people work there on exploit development. Furthermore we learn: “Some BUSBY researchers have worked for nation-states (so their skill level and methodology rival that of nation-state teams), and many of BUSBY’s products are used by nation-states.” That's about it. To summarize: We don't know where the data came from.

The authors of the study believe that this is a representative data set. But it is not really explained why they believe so. There are numerous problems with this data:

  • We don't know in which way this data has been filtered. The report states that 20-30 bugs “were removed due to operational sensitivity”. How was that done? Based on what criteria? They won't tell you. Were the 207 bugs plus the 20-30 bugs all the bugs the company had found or was this already pre-filtered? They won't tell you.
  • It is plausible to assume that a certain company focuses on specific bugs, has certain skills, tools or methods that all can affect the selection of bugs and create biases.
  • Oh by the way, did you expect to see the data? Like a table of all the bugs analyzed, with at least the little pieces of information BUSBY was willing to share? Because you were promised to see cold hard data? Of course not. That would mean others could reanalyze the data, and that would be unfortunate. The only thing you get are charts and tables summarizing the data.
  • We don't know the conditions under which this data was shared. Did BUSBY have any influence on the report? Were they allowed to read it and comment on it before publication? Did they have veto rights to the publication? The report doesn't tell us.

Naturally BUSBY has an interest in a certain outcome and interpretation of that data. This creates a huge conflict of interest. It is entirely possible that they only chose to share that data because they expected a certain outcome. And obviously the reverse is also true: Other companies may have decided not to share such data to avoid a certain outcome. It creates an ideal setup for publication bias, where only the data supporting a certain outcome is shared.

It is inexcusable that the problem of conflict of interest isn't even mentioned or discussed anywhere in the whole report.

A main outcome is based on a very dubious assumption

The report emphasizes two main findings. One is that the lifetime of a vulnerability is roughly seven years. With the caveat that the data is likely biased, this claim can be derived from the data available. It can reasonably be claimed that this lifetime estimate is true for the 207 analyzed bugs.

The second claim is about the bug collision rate and is much more problematic:
“For a given stockpile of zero-day vulnerabilities, after a year, approximately 5.7 percent have been discovered by an outside entity.”

Now think about this for a moment. It is absolutely impossible to know that based on the data available. This would only be possible if they had access to all the zero days discovered by all actors in that space in a certain time frame. It might be possible to extrapolate this if you knew how many bugs there are in total on the market - but you don't.

So how does this report solve this? Well, let it speak for itself:

Ideally, we would want similar data on Red (i.e., adversaries of Blue, or other private-use groups), to examine the overlap between Blue and Red, but we could not obtain that data. Instead, we focus on the overlap between Blue and the public (i.e., the teal section in the figures above) to infer what might be a baseline for what Red has. We do this based on the assumption that what happens in the public groups is somewhat similar to what happens in other groups. We acknowledge that this is a weak assumption, given that the composition, focus, motivation, and sophistication of the public and private groups can be fairly different, but these are the only data available at this time. (page 12)

Okay, weak assumption may be the understatement of the year. Let's summarize this: They acknowledge that they can't answer the question they want to answer. So they just answer an entirely different question (bug collision rate between the 207 bugs they have data about and what is known in public) and then claim that's about the same. To their credit they recognize that this is a weak assumption, but you have to read the report to learn that. Neither the summary nor the press release nor any of the favorable blog posts and media reports mention that.

If you wonder what the Red and Blue here means, that's also quite interesting, because it gives some insights about the mode of thinking of the authors. Blue stands for the “own team”, a company or government or anyone else who has knowledge of zero day bugs. Red is “the adversary” and then there is the public. This is of course a gross oversimplification. It's like a world where there are two nation states fighting each other and no other actors that have any interest in hacking IT systems. In reality there are multiple Red, Blue and in-between actors, with various adversarial and cooperative relations between them.

Sometimes the best answer is: We don't know

The line of reasoning here is roughly: If we don't have good data to answer a question, we'll just replace it with bad data.

I can fully understand the call for making decisions based on data. That is usually a good thing. However, it may simply be that this is a scenario where getting reliable data is incredibly hard or simply impossible. In such a situation the best thing one can do is admit that and live with it. I don't think it's helpful to rely on data that's so weak that it's basically meaningless.

The core of the problem is that we're talking about an industry that wants to be secret. This secrecy is in a certain sense in direct conflict with good scientific practice. Transparency and data sharing are cornerstones of good science.

I should mention here that shortly afterwards another study was published by Trey Herr and Bruce Schneier which also tries to answer the question of bug collisions. I haven't read it yet, from a brief look it seems less bad than the RAND report. However I have my doubts about it as well. It is only based on public bug findings, which is at least something that has a chance of being verifiable by others. It has the same problem that one can hardly draw conclusions about the non-public space based on that. (My personal tie in to that is that I had a call with Trey Herr a while ago where he asked me about some of my bug findings. I told him my doubts about this.)

The bigger picture: We need better science

IT security isn't a field that's rich in rigorous scientific data.

There's a lively debate going on right now in many fields of science about the integrity of their methods. Psychologists had to learn that many theories they believed for decades were based on bad statistics and poor methodology and are likely false. Whenever someone tries to replicate other studies, the replication rates are abysmal. Smart people claim that the majority of scientific outcomes are not true.

I don't see this debate happening in computer science. It's certainly not happening in IT security. Almost nobody is doing replications. Meta-analyses, trial registrations or registered reports are mostly unheard of.

Instead we have cargo cult science like this RAND report thrown around as “cold hard data” we should rely upon. This is ridiculous.

I obviously have my own thoughts on the zero days debate. But my opinion on the matter here isn't what this is about. What I do think is this: We need good, rigorous science to improve the state of things. We largely don't have that right now. And bad science is a poor replacement for good science.

March 14, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

As nearly every year, we'll again present a poster on Lab::Measurement at the spring meeting of the German Physical Society. This time the conference is in Dresden - so visit us on 23 March 2017, 15:00-19:00, poster session TT75, poster TT75.7!

March 13, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Vodafone R205: opening it up (March 13, 2017, 00:03 UTC)

I have posted about using an R205 without a Vodafone blessed network, and I wrote a quick software inspection of the device. This time I’m writing some notes about the hardware itself.

I had originally expected to just destroy the device trying to gain access to it, but it turns out it's actually simpler than expected to open: the back of the device is fastened with four Torx T5 screws, and then it takes just a bit of pressure to pry the back away from the front. No screws hide behind the label under the battery. I managed to open and re-close the device without it looking much different, just a bit weathered.

Unfortunately the first thing that becomes obvious is that the device is not designed to turn on without the battery plugged in. If you try to turn it on with just the USB connected – I tried that before disassembly, of course – the device only displays “BATTERY ERROR” and refuses to boot. This appears to be one of the errors coming from the bootloader of the board, as I can find it next to “TEMP INVALID” in the firmware strings.

Keeping the battery connected while the device is open is not possible by design, particularly since all the interesting-looking pads are underneath the battery itself. The answer is thus to solder some cables onto the board and connect them to the battery; I value my life enough not to solder on the battery itself, which can instead be connected with careful placement and tape. This is where I had to make a call and sacrifice the device. Luckily, that was an easy decision: I already have a backup device, another Vodafone Pocket WiFi, this time an R208 that my sister dismissed. Plus, if I really wanted, I could get myself a newer 4G version, as that's what my SIM card supports.

There are two sets of pads that look promising, so I took a look at them with a simple multimeter. The first is a 5-pad group, in which one pad corresponds to ground, two appear to idle around 3V, and one appears to idle around 4V. This is not exactly the usual configuration of a serial port, and I have not managed to get more details yet. The other is a 10-pad group with two grounds and a number of pads idling at 1.85V, which is significantly more consistent with JTAG, but again I have not managed to inspect it further.

Annotated image of the E586 board.

So I decided to get myself some solid-core wire and try to solder leads onto the five-pad group on the board. Unfortunately the end result was destructive, so I had to discard the idea of getting any useful data from that board. Bummer. But then I thought to myself that this device has to be fairly common, since Vodafone sold it in at least Ireland, Italy and Australia. Indeed, a quick look at eBay showed me a seller offering, not the R205, but the Huawei E586, for cheap: a lot of four devices for $10 (plus €20 for postage and customs, sigh!). These were fully Huawei-branded E586 devices, with a quite different chassis and a Wind logo on them. Despite shipping from New York State, this particular Wind-branded company was Canadian (it now goes by Freedom Mobile); I'm not sure about GSM network compatibility, but the package looked promising: four devices, though only one "complete" (battery and back cover). I bought it and it arrived just the other day.

As an aside, it's fun to note that I only recently found out that Wind was used as a brand in Canada. The original brand comes from Italy, and I was a customer of theirs for a number of years there. Indeed, my current Italian phone number is subscribed with Wind. The whole structure of co-owned brands seems to be falling apart though, with the Canadian brand gone, and with the original Italian company having been merged with Tre (Three Italy). I'm not sure who owns what of that, but they appear to still advertise the Veon app, which matches the new name of the Russian company that owned them until now, so who knows.

Opening the Wind devices is also significantly easier as it does not require as much force and does not have as many moving parts. Indeed, the whole chassis is mostly one plastic block, while the front comes away entirely. So indeed after I got home with them I opened one and looked into it, comparing it with the one I had already broken:

If you compare the two boards, you can see that on the top-right (front-facing) there is a small RF connector, which could be a Hirose U.FL or a similar connector; on the R205, this connector is on the back, making it unreachable by the user. On each board, the pads for the other board's RF connector position are visible but unpopulated.

Next to the connector there is also a switch that is not present on the R205; on the chassis it is marked as reset. On the R205 the reset switch is towards the bottom, and there is nothing on the top side. The lower switch is marked as WPS on the chassis of the Wind device, which makes me think these are programmable somehow. I guess if I look at this deeply enough I'll find out that these are just GPIOs on the CPU, mapped differently in the firmware.

I have not managed to power them up yet, partly because I do not trust them that much. They appear to have at least the same bootloader, since the BATTERY ERROR message appears on them just the same. On the other hand, this gives me at least a secondary objective: if I can figure out how to extract the firmware from the resources of the update binary provided by Vodafone, and how the firmware upgrade process works, I should be able to flash a copy of the Vodafone firmware onto the Wind device as well, since they share the same board. That would be a good starting point.

Having already ruined one of the boards also allows me to open up the RF shielding that is omnipresent on these boards and hides every detail. Documenting what is underneath would be interesting, and would help figure out whether there is any chance of running OpenWRT or LEDE on it. I'll follow up with more pictures and more details on the software.

March 11, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Let's have a talk about TOTP (March 11, 2017, 01:03 UTC)

You probably noticed that I'm a firm believer in 2FA, and in particular in the usefulness of U2F against phishing. Unfortunately not all services out there have upgraded to U2F to protect their users, though at least some of them have managed to add a modicum of extra safety against other threats by implementing TOTP 2FA/2SV (two-step verification). Which is good, at least.

Unfortunately this can become a pain. In a previous post I have complained about the problems of SMS-based 2FA/2SV. In this post I’m complaining about the other kind: TOTP. Unlike the SMS, I find this is an implementation issue rather than a systemic one, but let’s see what’s going on.
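To see why migrating an authenticator is really about moving a shared secret, remember that TOTP (RFC 6238) is nothing more than HMAC over a time-based counter keyed with that secret. A minimal generic sketch, not tied to any of the apps discussed here:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time() if timestamp is None else timestamp) // period
    # HOTP (RFC 4226) over the 64-bit big-endian counter
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Whoever holds the base32 secret can generate valid codes forever, which is exactly why how services handle re-keying and de-authorizing old apps matters so much.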

I changed my mobile phone this week, as I upgraded my work phone to one whose battery lasts more than half a day, which is important since my job requires me to be on call and available during shifts. And as usual when this happens, I needed to transfer my authenticator apps to the new phone.

Some of the apps are actually very friendly to this: Facebook and Twitter use a shared secret that, once login is confirmed, is passed on to their main app, which means you just need any logged-in app to log in again and get a new copy. Neat.

Blizzard was even better. I have effectively copied my whole device config and data across with the Android transfer tool. The authenticator copied over the shared key and didn’t require me to do anything at all to keep working. I like things that are magic!

The rest of the apps were not as nice though.

Amazon and allow you to add a new app at any time using the same shared key. The latter has an explicit option to re-key the 2FA, disabling all older apps. Amazon does not tell you anything about it, and does not let you re-key explicitly — my guess is that it re-keys if you disable authentication apps altogether and re-enable them.

WordPress, EA/Origin, Patreon and TunnelBroker don’t allow you to change the app, or get the previously shared key. Instead you have to disable 2FA, then re-enable it. Leaving you “vulnerable” for a (hopefully) limited time. Of these, EA allows you to receive the 2SV code by email, so I decided I’ll take that over having to remember to port this authenticator over every time.

If you remember, in the previous post I complained about the separate authenticator apps that kept piling up for me. I realized that I don't need as many: the login-approval feature that Microsoft and LastPass effectively provide is neat and handy, but it's not worth having two more separate apps for it, so I downgraded them to plain TOTP in the Google Authenticator app, which gets me Android Wear support to see the codes straight on my watch. In particular, I can't remember when I last logged into a Microsoft product except to set up the security parameters.

Steam, on the other hand, was probably the most complete disaster to migrate. Their app, similarly to the one above, is just a specially formatted TOTP with a shared key you're not meant to see. Unfortunately, to move the authenticator to a new phone, you need to disable it first, and you disable it from the logged-in app that has it enabled. Then you can re-enable it on the new phone. I assume there is some recovery path if that does not work, too. But I don't count on it.

What does this all mean? TOTP is difficult, it's hard on users, and it's risky. Not having an obvious way to de-authenticate an app is bad. If you were at ENIGMA, you could have listened to a talk that was not recorded, on the grounds of the risky situations it described: «Privacy and Security Practices of Individuals Coping with Intimate Partner Abuse». Among the various topics the talk touched upon, there was an important note on the power of 2FA/2SV for people being spied upon: it gives them certainty that somebody else is not logging in on their account. Not being able to de-authenticate TOTP apps goes against this certainty, and having to disable your 2FA to move it to a new device makes it risky.

Then there are the features that are a compromise between usability and paranoia. As I said, I love the Android Wear integration of the Authenticator app. But since the watch is not lockable, anybody with access to my watch while I'm not paying attention can read my TOTP codes. That's okay for my use case, but it may not be for everyone. The Google Authenticator app also does not allow you to set a passcode and encrypt the shared keys, which means that if you have developer mode enabled, run your phone unencrypted, or have a phone with known vulnerabilities, your TOTP shared keys can be exposed.

What does this all mean? That there is no perfect, 100% solution that covers you against all possible threat models, but some solutions are better than others (U2F), and the compromises depend on what you're defending against: a relative, or nation-states? If you remember, I already went over this four years ago, again the following year, and the one after that, talking about threat models. It's a topic I'm interested in, if that wasn't clear.

And a very similar concept was expressed by Zeynep Tufekci when talking about "low-profile activists" who want to protect their personal life rather than their activism. That talk was recorded and is worth watching.

March 08, 2017
Marek Szuba a.k.a. marecki (homepage, bugs)
Hello world! (March 08, 2017, 02:12 UTC)

Welcome to Gentoo Blogs. This is your first post. Edit or delete it, then start blogging!

March 06, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Handling certificates in Gentoo Linux (March 06, 2017, 21:20 UTC)

I recently created a new article on the Gentoo Wiki titled Certificates, which talks about how to handle certificate stores on Gentoo Linux. The article (which might still change name later, because it does not cover everything about certificates, mostly how to handle certificate stores) was inspired by the observation that I had to adjust the certificate stores of Chromium and Firefox separately, even though they both use NSS.

March 03, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Overall, I’ve been quite happy with my 2017 Honda Civic EX-T. However, there are some cosmetic changes that I personally think make a world of difference. One of them is debadging it (removing the “Civic” emblem from the back). Another area for improvement is the side markers, which by default (and according to law in most of the United States) are amber in colour. As it is the only area on the car in that gross yellow/orange colour, I thought it could be vastly improved by swapping to either clear or a smoked black look. Initially, I ordered the ASEAN-market OEM Honda clear side markers on eBay. However, I decided that on my White Orchid Pearl Civic, “smoked black” might look better, so I ordered those instead. Here’s a before-and-after:

2017 Honda Civic side marker changed to smoked or clear

Following the great instructions provided by a CivicX forum member, I got started. Though his instructions are spot-on, the procedure for swapping to the non-OEM smoked markers was actually a little easier. Basically, step 4 (cutting the tabs on the socket) was unnecessary. So, a simplified and concise list of the steps required for my particular swap is:

  • Turn the wheels inward to give you more room to access the wheel liner
  • Remove the three screws holding the wheel liner
  • Press on the side marker clip that holds it to the body, whilst simultaneously pushing the marker itself outward away from the body
  • Use a very small flat head screwdriver to depress the tab holding the bulb socket to the harness
  • Swap in a new bulb (if you have one, and I can recommend the Philips 194/T10 white LED bulbs, but realise that since they are white, they will not be “street legal” in many municipalities)
  • Test the polarity once you have inserted the bulb by simply turning on your headlights
  • Place the harness/new bulb/socket into the new side marker (noting that one notch is larger than the rest, which may require rotation of the side marker)
  • Align the new side marker accordingly, and make sure that it snaps into place

The only caveat I found is that the marker on the passenger’s side did not seem to want to snap into place as easily as did the one on the driver’s side. It took a little wiggling, and ultimately required me to press more firmly on the marker itself in order to get it to stay put.

For a process that only took approximately 30 minutes, though, I think that the swap made a world of difference to the overall appearance of the car. I also am happy with my choice to use the white LED bulb, as it shows quite nicely through the smoked lens:

2017 Honda Civic side marker changed to smoked or clear with white LED bulb


March 02, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
cvechecker 3.7 released (March 02, 2017, 09:00 UTC)

After a long time of getting too little attention from me, I decided to make a new cvechecker release. There are few changes in it, but I am planning on making a new release soon with lots of clean-ups.

February 28, 2017
Denis Dupeyron a.k.a. calchan (homepage, bugs)
Gentoo is accepted to GSoC 2017 (February 28, 2017, 00:07 UTC)

There was good news in my mailbox today. The Gentoo Foundation was accepted to be a mentor organization for Google Summer of Code 2017!

What this means is we need you as a mentor, backup mentor or expert mentor. Whether you are a Gentoo developer and have done GSoC before does not matter at this point.

A mentor is somebody who will help during the selection of students, and will mentor a student during the summer. This should take at most one hour of your time on weekdays when students actually work on their projects. What’s in it for you, you ask? A pretty exclusive Google T-shirt, a minion who does things you wouldn’t have the time or energy to do, but most importantly gratification and a lot of fun.

Backup mentors are for when the primary mentor of a student becomes unavailable for an extended period, typically for medical or family reasons. It rarely happens but it does happen. But a backup mentor can also be an experienced mentor (i.e., have done it at least once) who assists a primary mentor who is doing it for the first time.

Expert mentors have a very specific knowledge and are contacted on an as-needed basis to help with technical decisions.

You can be any combination of all that. However, our immediate need in the coming weeks is for people (again, not necessarily mentors or devs) who will help us evaluate student proposals.

If you’re a student, now is the right time to start thinking about what project idea you would want to work on during the summer. You can find ideas on our dedicated page, or you can come up with your own (these are the best!). One note though: you are going to be working on this full-time (i.e., 8 hours a day; we don’t allow another job, even part-time, alongside GSoC, although we do accommodate students who have a limited number of classes or exams) for 3 months, so make sure your idea can keep you busy for that long. Whether you pick one of our ideas or come up with your own, it is strongly recommended to start discussing it with us on IRC.

As usual, we’d love to chat with you or answer your questions in #gentoo-soc on Freenode IRC. Make sure you stay long enough in the channel and give us enough time to respond to you. We are all volunteers and can’t maintain a 24/7 presence. It can take up to a few hours for one of us to see your request.

February 27, 2017
Thoughts on Mr Gates' Robot tax (February 27, 2017, 20:54 UTC)

In an article in Quartz, Bill Gates, co-founder of Microsoft, proposes that "The robot that takes your job should pay taxes". The issue also got a practical discussion outside of newspapers, as the European Parliament rejected a similar proposal on February 16th, 2017. The debate is, however, an interesting one and in many ways an … Continue reading "Thoughts on Mr Gates' Robot tax"

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Reverse Engineering Notes: Accu-Chek Mobile (February 27, 2017, 09:04 UTC)

A couple of years ago, upon suggestion by a reader of this blog, I switched my glucometer to an Accu-Chek Mobile by Roche. I didn't even look into reverse engineering it at the time, as the killer feature of that meter was not needing software at all: all the data is made available over a USB Mass Storage interface as CSV files and graphs.

While there is a secondary interface on the device to connect to software on a PC, you had to order a copy of the download software online and receive it physically to get the data onto a computer, which I still find kind of silly for a device designed around a standard USB connector.

Things changed recently, as Roche joined Abbott (and probably more to come) on the bandwagon of “show us yours”: upload your blood glucose readings to their cloud systems, and they will help you manage your diabetes. I guess this is what happens when people are not waiting. I’m not particularly fond of uploading my health information to the cloud, but signing up for this service also meant being able to capture proper protocol chatter over USB.

The software I’m using right now for snooping on the USB connection is USBlyzer, which is proprietary and closed source — I’m not proud, but it gets the job done. I have been thinking of switching to a hardware snooping solution, and I bought a BeagleBone Black for that purpose, but I have not started working on it due to lack of time.

So instead, I have collected over time a set of libraries and utilities that operate on the CSV files that the software exports (particularly as, with version 2.2, they are actually quite well formed; the older version was messier to post-process). I should look into publishing this collection, and I’ll try to do so before the end of the year.

One of the tools I have prints out the “chatter” as hexdumps that include direction information, to make the stream easier for me to read. The first run was a bit noisy, but a quick check told me that what I’m interested in is bulk transfers (rather than control transfers, which are the most basic), so I filtered for those only, and then the first thing became obvious very quickly.

The maximum length of the bulk transfers in the trace is 64 bytes, which corresponds to the maximum size of bulk transfers for full speed endpoints. But the chatter shows the device sending multiple packets back from a single command, which is not unusual, as you can’t fit much blood sugar data in 64 bytes. And as usual when there is fragmentation, the actual data transfer size is coded somewhere at the beginning of the message.

0023 >>>> 00000000: E3 00 00 2C 00 03 50 79  00 26 80 00 00 00 80 00  ...,..Py.&......
0023 >>>> 00000010: 80 00 00 00 00 00 00 00  80 00 00 00 00 08 00 60  ...............`
0023 >>>> 00000020: 19 00 01 08 00 00 00 00  00 01 01 01 00 00 00 00  ................

0025 <<<< 00000000: E7 00 01 0A 01 08 00 00  01 01 01 02 00 00 FF FF  ................
0025 <<<< 00000010: FF FF 0D 1C 00 F8 50 00  00 05 00 F2 00 06 00 01  ......P.........
0025 <<<< 00000020: 00 04 00 24 09 2F 00 04  00 02 71 BC 0A 46 00 02  ...$./....q..F..
0025 <<<< 00000030: F0 40 09 96 00 02 08 52  0A 55 00 0C 00 02 00 08  .@.....R.U......
0025 <<<< 00000040: 09 90 00 08 0A 4C 00 02  00 06 00 02 00 04 00 24  .....L.........$
0025 <<<< 00000050: 09 2F 00 04 00 02 71 D0  0A 46 00 02 F0 40 09 96  ./....q..F...@..
0025 <<<< 00000060: 00 02 08 52 0A 55 00 0C  00 02 00 08 09 90 00 08  ...R.U..........
0025 <<<< 00000070: 0A 4C 00 02 00 05 00 03  00 03 00 1E 09 2F 00 04  .L.........../..
0025 <<<< 00000080: 00 80 71 D8 0A 46 00 02  F0 40 0A 55 00 0C 00 02  ..q..F...@.U....
0025 <<<< 00000090: 00 08 09 90 00 08 0A 66  00 02 00 05 00 04 00 03  .......f........
0025 <<<< 000000A0: 00 1E 09 2F 00 04 00 80  72 48 0A 46 00 02 F0 48  .../....rH.F...H
0025 <<<< 000000B0: 0A 55 00 0C 00 02 00 08  09 90 00 08 0A 49 00 02  .U...........I..
0025 <<<< 000000C0: 00 3D 00 05 00 08 00 46  0A 4D 00 02 98 20 09 43  .=.....F.M... .C
0025 <<<< 000000D0: 00 02 00 00 09 41 00 04  00 00 17 70 09 44 00 04  .....A.....p.D..
0025 <<<< 000000E0: 00 00 01 E5 09 53 00 02  00 00 0A 57 00 12 00 10  .....S.....W....
0025 <<<< 000000F0: 50 61 74 69 65 6E 74 20  52 65 73 75 6C 74 73 00  Patient Results.
0025 <<<< 00000100: 09 51 00 02 00 04 0A 63  00 04 00 00 5D C0        .Q.....c....].

0047 >>>> 00000000: E7 00 00 0E 00 0C 00 01  01 03 00 06 00 00 00 00  ................
0047 >>>> 00000010: 00 00                                             ..

0049 <<<< 00000000: E7 00 00 F6 00 F4 00 01  02 03 00 EE 00 00 00 08  ................
0049 <<<< 00000010: 00 E8 09 28 00 0E 00 06  52 6F 63 68 65 00 00 04  ...(....Roche...
0049 <<<< 00000020: 31 32 30 35 09 84 00 0A  00 08 00 60 19 04 B5 1B  1205.......`....
0049 <<<< 00000030: DF 5C 0A 44 00 02 40 00  09 2D 00 78 00 04 00 74  .\.D..@..-.x...t
0049 <<<< 00000040: 00 01 00 00 00 18 73 65  72 69 61 6C 2D 6E 75 6D  ......serial-num
0049 <<<< 00000050: 62 65 72 3A 20 30 30 31  38 32 36 36 35 32 00 04  ber: 001826652..
0049 <<<< 00000060: 00 00 00 14 73 77 2D 72  65 76 20 4D 45 3A 20 56  ....sw-rev ME: V
0049 <<<< 00000070: 30 33 2E 31 33 20 20 20  00 05 00 00 00 16 66 77  03.13   ......fw
0049 <<<< 00000080: 2D 72 65 76 69 73 69 6F  6E 3A 20 56 30 33 2E 39  -revision: V03.9
0049 <<<< 00000090: 30 20 20 20 00 06 00 00  00 1A 70 72 6F 74 6F 63  0   ......protoc
0049 <<<< 000000A0: 6F 6C 2D 72 65 76 69 73  69 6F 6E 3A 20 52 50 43  ol-revision: RPC
0049 <<<< 000000B0: 20 31 2E 30 09 87 00 08  20 17 02 24 21 07 00 00   1.0.... ..$!...
0049 <<<< 000000C0: 0A 45 00 10 C0 00 1F 00  FF FF FF FF 00 64 00 00  .E...........d..
0049 <<<< 000000D0: 00 00 00 00 0A 4B 00 16  00 02 00 12 02 01 00 08  .....K..........
0049 <<<< 000000E0: 01 05 00 01 00 02 20 11  02 02 00 02 00 00 0A 5A  ...... ........Z
0049 <<<< 000000F0: 00 08 00 01 00 04 10 11  00 01                    ..........

As you can see in this particular exchange, bytes at offset 2-3 represent a (big-endian) length for the whole transfer. You just keep reading until that is complete.
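Reassembling a response from the 64-byte bulk transfers then amounts to accumulating chunks until the announced length is satisfied. A sketch of how that might look; note the 4-byte header (16-bit type followed by the 16-bit big-endian length) is purely my reading of the dumps above, not a documented format:

```python
import struct

HEADER_LEN = 4  # assumed: 16-bit message type + 16-bit big-endian payload length

def reassemble(chunks):
    """Accumulate 64-byte bulk-transfer chunks into one message, using the
    length at offset 2-3 (my reading of the dumps) to know when to stop."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        if len(buf) >= HEADER_LEN:
            (length,) = struct.unpack_from(">H", buf, 2)
            if len(buf) >= HEADER_LEN + length:
                return buf[:HEADER_LEN + length]
    raise ValueError("transfer ended before the announced length was reached")
```

For the first response above, the header E7 00 01 0A announces 0x10A = 266 payload bytes, so reading would stop after 270 bytes total, which matches the length of the dump.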

While I have not (at the time of writing) figured out what the command and response actually convey, one thing that is obvious is that there is some kind of (type-)length-value encoding at play, although in a bit of a funny way.

All records with type 0xE700 appear to have two-level lengths, as you can see on the two responses and the second command: in red it’s the length of the packet, in magenta the same length minus two (which matches the size of the length itself). There are also a number of strings, some zero terminated (Roche\0) and some not (1205), but still encoded with a 16-bit length in front of them.
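To make the structure concrete, here is how a flat pass over such records could look; the field layout (16-bit tag, 16-bit big-endian length, value) is my guess from the dumps, not taken from any spec:

```python
import struct

def parse_tlv(data):
    """Split a byte string into (tag, value) pairs, assuming each record is
    a 16-bit tag, a 16-bit big-endian length, then that many value bytes."""
    pos, records = 0, []
    while pos + 4 <= len(data):
        tag, length = struct.unpack_from(">HH", data, pos)
        records.append((tag, data[pos + 4:pos + 4 + length]))
        pos += 4 + length
    return records
```

Fed the 09 28 00 0E … record from the response above, this yields tag 0x0928 with a 14-byte value that itself contains the length-prefixed strings Roche\0 and 1205.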

The next thing to figure out in these cases is whether there is a checksum. Effectively all the meters I have reverse engineered up to now, except maybe the cheap Korean one, include a checksum somewhere. I checked the chatter and found a number of packets that appear to carry the same information, yet differ in a few bytes.

Once I dumped the (recomposed) packets to binary files, I noticed a number of packets with the same sizes. hexdump, wdiff and colordiff make it very easy to tell what changed between them. It didn’t quite look like a cryptographic checksum, as changing one byte would replace it with a very different number, but it didn’t quite match up with a “dumb” checksum of all the bytes values.

A couple of diffs later, it became obvious.

[flameeyes@saladin Accu-Chek]$ wdiff <(hexdump -C 0039-0034-in) <(hexdump -C 0049-0046-in) | colordiff
00000000  e7 00 00 f6 00 f4 00 [-2d-] {+01+}  02 03 00 ee 00 00 00 08  [-|.......-........|-]  {+|................|+}
00000010  00 e8 09 28 00 0e 00 06  52 6f 63 68 65 00 00 04  |...(....Roche...|
00000020  31 32 30 35 09 84 00 0a  00 08 00 60 19 04 b5 1b  |1205.......`....|
00000030  df 5c 0a 44 00 02 40 00  09 2d 00 78 00 04 00 74  |.\.D..@..-.x...t|
00000040  00 01 00 00 00 18 73 65  72 69 61 6c 2d 6e 75 6d  |......serial-num|
00000050  62 65 72 3a 20 30 30 31  38 32 36 36 35 32 00 04  |ber: 001826652..|
00000060  00 00 00 14 73 77 2d 72  65 76 20 4d 45 3a 20 56  |....sw-rev ME: V|
00000070  30 33 2e 31 33 20 20 20  00 05 00 00 00 16 66 77  |03.13   ......fw|
00000080  2d 72 65 76 69 73 69 6f  6e 3a 20 56 30 33 2e 39  |-revision: V03.9|
00000090  30 20 20 20 00 06 00 00  00 1a 70 72 6f 74 6f 63  |0   ......protoc|
000000a0  6f 6c 2d 72 65 76 69 73  69 6f 6e 3a 20 52 50 43  |ol-revision: RPC|
000000b0  20 31 2e 30 09 87 00 08  20 17 02 24 21 [-06 53-] {+07 00+} 00  | 1.0.... [-..$!.S.|-] {+..$!...|+}
000000c0  0a 45 00 10 c0 00 1f 00  ff ff ff ff 00 64 00 00  |.E...........d..|
000000d0  00 00 00 00 0a 4b 00 16  00 02 00 12 02 01 00 08  |.....K..........|
000000e0  01 05 00 01 00 02 20 11  02 02 00 02 00 00 0a 5a  |...... ........Z|
000000f0  00 08 00 01 00 04 10 11  00 01                    |..........|
[flameeyes@saladin Accu-Chek]$ wdiff <(hexdump -C 0039-0034-in) <(hexdump -C 0073-0068-in) | colordiff
00000000  e7 00 00 f6 00 f4 00 [-2d-] {+02+}  02 03 00 ee 00 00 00 08  [-|.......-........|-]  {+|................|+}
00000010  00 e8 09 28 00 0e 00 06  52 6f 63 68 65 00 00 04  |...(....Roche...|
00000020  31 32 30 35 09 84 00 0a  00 08 00 60 19 04 b5 1b  |1205.......`....|
00000030  df 5c 0a 44 00 02 40 00  09 2d 00 78 00 04 00 74  |.\.D..@..-.x...t|
00000040  00 01 00 00 00 18 73 65  72 69 61 6c 2d 6e 75 6d  |......serial-num|
00000050  62 65 72 3a 20 30 30 31  38 32 36 36 35 32 00 04  |ber: 001826652..|
00000060  00 00 00 14 73 77 2d 72  65 76 20 4d 45 3a 20 56  |....sw-rev ME: V|
00000070  30 33 2e 31 33 20 20 20  00 05 00 00 00 16 66 77  |03.13   ......fw|
00000080  2d 72 65 76 69 73 69 6f  6e 3a 20 56 30 33 2e 39  |-revision: V03.9|
00000090  30 20 20 20 00 06 00 00  00 1a 70 72 6f 74 6f 63  |0   ......protoc|
000000a0  6f 6c 2d 72 65 76 69 73  69 6f 6e 3a 20 52 50 43  |ol-revision: RPC|
000000b0  20 31 2e 30 09 87 00 08  20 17 02 24 21 [-06 53-] {+07 02+} 00  | 1.0.... [-..$!.S.|-] {+..$!...|+}
000000c0  0a 45 00 10 c0 00 1f 00  ff ff ff ff 00 64 00 00  |.E...........d..|
000000d0  00 00 00 00 0a 4b 00 16  00 02 00 12 02 01 00 08  |.....K..........|
000000e0  01 05 00 01 00 02 20 11  02 02 00 02 00 00 0a 5a  |...... ........Z|
000000f0  00 08 00 01 00 04 10 11  00 01                    |..........|
[flameeyes@saladin Accu-Chek]$ wdiff <(hexdump -C 0039-0034-in) <(hexdump -C 0087-0084-in) | colordiff
00000000  e7 00 00 f6 00 f4 00 [-2d-] {+04+}  02 03 00 ee 00 00 00 08  [-|.......-........|-]  {+|................|+}
00000010  00 e8 09 28 00 0e 00 06  52 6f 63 68 65 00 00 04  |...(....Roche...|
00000020  31 32 30 35 09 84 00 0a  00 08 00 60 19 04 b5 1b  |1205.......`....|
00000030  df 5c 0a 44 00 02 40 00  09 2d 00 78 00 04 00 74  |.\.D..@..-.x...t|
00000040  00 01 00 00 00 18 73 65  72 69 61 6c 2d 6e 75 6d  |......serial-num|
00000050  62 65 72 3a 20 30 30 31  38 32 36 36 35 32 00 04  |ber: 001826652..|
00000060  00 00 00 14 73 77 2d 72  65 76 20 4d 45 3a 20 56  |....sw-rev ME: V|
00000070  30 33 2e 31 33 20 20 20  00 05 00 00 00 16 66 77  |03.13   ......fw|
00000080  2d 72 65 76 69 73 69 6f  6e 3a 20 56 30 33 2e 39  |-revision: V03.9|
00000090  30 20 20 20 00 06 00 00  00 1a 70 72 6f 74 6f 63  |0   ......protoc|
000000a0  6f 6c 2d 72 65 76 69 73  69 6f 6e 3a 20 52 50 43  |ol-revision: RPC|
000000b0  20 31 2e 30 09 87 00 08  20 17 02 24 [-21 06 53-] {+20 55 15+} 00  | 1.0.... [-..$!.S.|-] {+..$ U..|+}
000000c0  0a 45 00 10 c0 00 1f 00  ff ff ff ff 00 64 00 00  |.E...........d..|
000000d0  00 00 00 00 0a 4b 00 16  00 02 00 12 02 01 00 08  |.....K..........|
000000e0  01 05 00 01 00 02 20 11  02 02 00 02 00 00 0a 5a  |...... ........Z|
000000f0  00 08 00 01 00 04 10 11  00 01                    |..........|

It may become much clearer if you go back to the first dump and observe the part highlighted in blue: eight bytes that read 20 17 02 24 21 07 00 00. To help understand this, you should know that I was looking at this around 9pm on February 24th, 2017. Indeed, these bytes represent the date and time in binary-coded decimal, which is not something I was expecting to see, but makes sense.
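Decoding such a timestamp is trivial once you notice the pattern: every nibble is one decimal digit. A quick sketch (the trailing two zero bytes look like padding or seconds, so I ignore them here):

```python
def bcd_datetime(raw):
    """Decode a BCD timestamp like 20 17 02 24 21 07 00 00:
    every nibble of every byte is one decimal digit."""
    digits = []
    for byte in raw:
        digits += [byte >> 4, byte & 0x0F]
    year = digits[0] * 1000 + digits[1] * 100 + digits[2] * 10 + digits[3]
    month, day = digits[4] * 10 + digits[5], digits[6] * 10 + digits[7]
    hour, minute = digits[8] * 10 + digits[9], digits[10] * 10 + digits[11]
    return (year, month, day, hour, minute)
```

Applied to the highlighted bytes, this gives back 2017-02-24 21:07, matching the time of the capture.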

Once you know this, it’s easy to tell that there is no checksum in the messages, which is one less problem to worry about. Indeed, when looking at the “big” packets, I could find the telltale signs of fixed-size records with what looked like (and I confirmed to be) glucometer readings (in mg/dL, even though the device is mmol/l based).

If you’re wondering why the peculiar change in time on the last part of the diff, the reason is quite simple: the software noted that the time on the device didn’t match the time on the computer, and asked me to sync it. Which means I also know the command to set the time now.

Looking at the commands, there are a few more things that are interesting to see:

0037 >>>> 00000000: E7 00 00 0E 00 0C 00 2D  01 03 00 06 00 00 00 00  .......-........
0037 >>>> 00000010: 00 00                                             ..

0047 >>>> 00000000: E7 00 00 0E 00 0C 00 01  01 03 00 06 00 00 00 00  ................
0047 >>>> 00000010: 00 00                                             ..

0071 >>>> 00000000: E7 00 00 0E 00 0C 00 02  01 03 00 06 00 00 00 00  ................
0071 >>>> 00000010: 00 00                                             ..

0081 >>>> 00000000: E7 00 00 1A 00 18 00 03  01 07 00 12 00 00 0C 17  ................
0081 >>>> 00000010: 00 0C 20 17 02 24 20 54  57 00 00 00 00 00        .. ..$ TW.....

0085 >>>> 00000000: E7 00 00 0E 00 0C 00 04  01 03 00 06 00 00 00 00  ................
0085 >>>> 00000010: 00 00                                             ..

0095 >>>> 00000000: E7 00 00 14 00 12 00 05  01 07 00 0C 00 05 0C 0D  ................
0095 >>>> 00000010: 00 06 00 01 00 02 00 00                           ........

The time change command is 0081 (and I highlighted in green the new time, also provided as BCD). The remaining commands appear to query some information about the device. Commands 0037, 0047, 0071 and 0085 are exactly the same, except that, as I found out initially, no two packets were identical. In blue I highlighted what appears to be a packet counter of sorts. I’m not sure why it starts at 0x2D, but after that it appears to increment normally, although only after 0xE7 commands (there appear to be a handful of other command types).
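
Based purely on this reading of the dumps, a hypothetical frame parser would look like the following; every field name here is a guess, not a confirmed spec, but the two length fields do seem to consistently count the bytes remaining after themselves:

```python
import struct

def parse_frame(raw):
    # Guessed layout: 2-byte opcode (0xE700), two 16-bit big-endian
    # length fields, a 2-byte packet counter, then the payload.
    opcode, len1, len2, counter = struct.unpack_from(">HHHH", raw, 0)
    # Both lengths appear to count the bytes following their own field.
    assert len1 == len(raw) - 4 and len2 == len(raw) - 6
    return {"opcode": opcode, "counter": counter, "payload": raw[8:]}

frame = bytes.fromhex("e700000e000c002d01030006000000000000")  # command 0037
parse_frame(frame)["counter"]
# -> 45 (0x2D)
```

The length check holds for both the short query frames (0x0E/0x0C) and the longer time-set frame (0x1A/0x18), which is some evidence for the guessed layout.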

Unfortunately this does not cover enough of the protocol yet, but it’s a good starting point for a few hours spent trying to prod things around on a Friday night (what an exciting life I live). I also managed to find how the device reports the readings, in blocks of records of less than 1KB, but I have not figured out how the software knows when to stop asking for them. In this case it definitely is handy that I have so many readings on the device — this is probably the glucometer I used the most, and I still think it is the best blood-reading glucometer, for handiness and results.

Stay tuned for more details, and hopefully to see a spec for the protocol soon, too.

Steev Klimaszewski a.k.a. steev (homepage, bugs)
And we’re back (February 27, 2017, 08:14 UTC)

Not entirely sure what happened, but hey, we’re back online and such. Everything is gone from the old site (sorry, all you old referrers 🙁 ). But hey, maybe now I’ll actually update and post more often.

Hello world! (February 27, 2017, 01:47 UTC)

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

February 24, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Password Managers and U2F (February 24, 2017, 00:04 UTC)

Yet another blog post that starts with a tweet, or two. Even though I’m clearly not a security person by either trade or training, I have a vested interest in security, as many of you know, and I have written (or ranted) a lot about passwords and security in the past. I have in particular argued for LastPass as a password manager, and at the same time for U2F security keys for additional security (as you can notice in this post, as I added an Amazon link to one).

The other week, a side-by-side of the real PayPal login window and a phishing one that looked way too similar was posted, and I quoted the tweet asking PayPal why they still do not support U2F — technically PayPal does support 2-factor authentication, but as far as I know that is only for US customers, and it is based on a physical OTP token.

After I tweeted that, someone who I would have expected to understand the problem better argued that password managers and random passwords completely solve the phishing problem. I absolutely disagree, and I started arguing it on Twitter, but the 140-character limit makes it very difficult to provide good information, and particularly to refer to it later on. So here is the post.

The argument goes: you can’t phish a password you don’t know, and using password managers, you don’t have to (or have a way to) know that password. It works well in theory, but let’s see how that does not actually hold in practice.

Password managers allow you to generate a random, complex password (well, as long as the websites allow you to), and thanks to features such as autofill, you never actually get to see the password. This is good.

Unfortunately, there are plenty of cases in which you need to either see, read, or copy to clipboard the password. Even LastPass, which has, in my opinion, a well defined way to deal with “equivalent domains”, is not perfect: not all Amazon websites are grouped together, for instance. While they do provide an easy way to add more domains to the list of equivalency, it does mean I have about 30 of them customised for my own account right now.

What this means is that users are actually trained to the idea that sometimes the autofill won’t work because the domain is just not in the right list. And even when it is, sometimes the form has changed, and autofill just does not work. I have seen plenty of those situations myself. And so, even though you may not know the password, phishing works if it convinces you that the reason why autofill is not working is not because the site is malicious, but just because the password manager is broken/not working as intended/whatever else.

This becomes even more likely when you’re using one of the more “open” password managers. Many think LastPass is bad, not just because they store your password server-side (encrypted) but also because it interacts directly with the browser. After all, the whole point of the LostPass vulnerability was that UI is hard. So the replacement for geeks who want to be even more secure is usually a separate app that requires you to copy-paste your password from it to the browser. And in that case, you may not know the password, but you can still be phished.

If you want to make the situation even more ridiculous, go back and read my musings on bank security. My current bank (and one of my former ones) explicitly disallows you from using either autofill or copy-paste. Worse, they ask you to fill in parts of your password, e.g. “the 2nd, 4th, 12th character of your password” — so you either end up having to use a mnemonic password or you have to look at your password manager and count. And that is very easily phishable. Should I point out that they insist that this is done to reduce the chances of phishing?

I have proposed some time ago a more secure way to handle equivalent domains, in which websites can feed back information to the password managers on who’s what. There is some support for things like this on iOS at least, where I think the website can declare which apps are allowed to take over links to themselves. But as far as I know, even now that SmartLock is a thing, Chrome/Android do not support anything like that. Nor does LastPass. I’m sad.

Let me have even more fun about password managers, U2F, and feel okay with having a huge Amazon link at the top of this post:

Edit: full report is available for those who just don’t believe the infographic.

This is not news: I wrote about this over two years ago, after U2F keys were announced publicly — and I had been using one before that, which is clearly no mystery. Indeed, unlike autofill and copy-paste, U2F identification does not rely on the user recognising the site, but is rather tied to the website’s own identity, which means it can’t be phished.

Phishing, as described in the NilePhish case above, applies just as much to SMS, authenticator apps and push notifications, because all the user may perceive is just a little bit more delay. It’s not impossible, though a bit complicated, for a “sophisticated” (or well-motivated) attacker to just render the same exact page as the original site, defeating all those security-theatre measures such as login-request IDs, custom avatars and custom phrases — or systems that ask you for the 4th, 9th and 15th characters of your password.

User certificates, whether as a file enrolled in the operating system or on a smartcard, are of course an even stronger protection than U2F, but having used them, they are not really that friendly. That’s a topic for another day, though.

February 20, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The geeks' wet dreams (February 20, 2017, 18:04 UTC)

As a follow-up to my previous rant about FOSDEM I thought I would talk about what I call the “geeks’ wet dream”: IPv6 and public IPv4.

During the whole Twitter storm I had regarding IPv6 and FOSDEM, I said out loud that I think users should not care about IPv6 to begin with, and that IPv4 is not a legacy protocol but a current one. I stand by my description here: you can live your life on the Internet without access to IPv6 but you can’t do that without access to IPv4, at the very least you need a proxy or NAT64 to reach a wide variety of the network.

Having IPv6 everywhere is, for geeks, something important. But at the same time, it’s not really something consumers care about, because of what I just said. Be it for my mother, my brother-in-law or my doctor, having IPv6 access does not give them any new feature they would otherwise not have access to. So while I can also wish for IPv6 to be more readily available, I don’t really have any better excuse for it than making my personal geek life easier by allowing me to SSH to my hosts directly, or being able to access my Transmission UI from somewhere else.

Yes, there is a theoretical advantage in speed and performance, because NAT is not involved, but to be honest that is not what most people care about: a 36036 Mbit connection is plenty fast enough even when doing double-NAT, and even Ultra HD, 4K, HDR content is well served by it. You could argue that an even lower-latency network may enable more technologies to become available, but that not only sounds to me like a bit of a stretch, it also misses the point once again.

I already linked to Todd’s post, and I don’t need to repeat it here, but if it’s about the technologies that can be enabled, it should be something the service providers care about. Which by the way is what is happening already: indeed the IPv6 forerunners are the big players that are effectively looking for a way to enable better technology. But at the same time, a number of other plans were executed so that better performance can be gained without gating it on the usage of a new protocol that, as we said, really brings nothing to the table.

Indeed, if you look at protocols such as QUIC or HTTP/2, you can notice how these reduce the number of ports that need to be opened, and that has a lot to do with the double-NAT scenario that is more and more common in homes. Right now I’m technically writing behind three layers of NAT: the carrier-grade NAT used by my ISP for deploying DS-Lite, the NAT of my provider-supplied router, and the last NAT, which is set up by my own router, running LEDE. I don’t even have working IPv6 right now for various reasons, and you know what? The bottleneck is not the NATs but rather the WiFi.

As I said before, I’m not doing a full Todd and thinking that ignoring IPv6 is a good idea, or that we should not spend any time fixing things that break with it. I just think that we should get a sense of proportion and figure out what the relative importance of IPv6 is in this world. As I said in the other post, there are plenty of newcomers that do not need to be told to screw themselves if their systems don’t support IPv6.

And honestly the most likely scenario to test for is a dual-stack network in which some of the applications or services don’t work correctly because they misunderstand the system. Like OVH did. So I would have kept the default network dual-stack, and provided a single-stack, NAT64 network as a “perk”, for those who actually do care to test and improve apps and services — and possibly also have a clue not to go and ping a years old bug that was fixed but not fully released.

But there are more reasons why I call these dreams. A lot of the reasoning behind IPv6 appears to be grounded in the idea that geeks want something, and that it has to be good for everybody even when they don’t know about it: IPv6, publicly-routed IPv4, static addresses and unfiltered network access. But that is not always the case.

Indeed, if you look even just at IPv6 addressing, and in particular at how stateless addressing works, you can see how there have been at least three different choices at different times for generating them:

  • Modified EUI-64 was the original addressing option, and for a while the only one supported; it uses the MAC address of the network card that receives the IPv6 address, and that is quite a privacy risk, as it means you can extract a unique identifier of a given machine and identify every single request coming from said machine even when it moves across different IPv6 prefixes.
  • RFC4941 privacy extensions were introduced to address that point. These are usually enabled at some point, but they are not stable: Wikipedia correctly calls them temporary addresses, and they are usually fine to provide unique identifiers when connecting to an external service. This makes passive detection of the same machine across networks impossible — actually, it makes it impossible to detect a given machine even within the same network, because the address changes over time. This is good on one side, but it means that you do need session cookies to keep login sessions active, as you can’t (and you shouldn’t) rely on the persistence of an IPv6 address. It still allows active detection, at least of the presence of a given host within a network, as it does not by default disable the EUI-64 address; it just does not use it to connect to services.
  • RFC7217 adds another alternative for address selection: it provides a stable-within-a-network address, making it possible to keep long-running connections alive, while at the same time ensuring that at least simple active probing does not give up the presence of a known machine in the network. For more details, refer to Lubomir Rintel’s post, as he went into more detail than me on the topic.
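
The tracking problem with Modified EUI-64 becomes obvious once you write the derivation down. A quick sketch (the prefix is from the IPv6 documentation range, and the MAC address is made up):

```python
def mac_to_modified_eui64(mac, prefix="2001:db8::"):
    # Modified EUI-64: flip the universal/local bit of the first MAC
    # octet, then splice ff:fe into the middle of the address.
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

mac_to_modified_eui64("52:54:00:12:34:56")
# -> '2001:db8::5054:00ff:fe12:3456'
```

The interface identifier stays identical under every prefix the machine visits, which is exactly what makes passive cross-network tracking possible.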

Those of you fastest on the uptake will probably notice the connection with all these problems: it all starts by designing the addressing assuming that the most important addressing option is stable and predictable. Which makes perfect sense for servers, and for the geeks who want to run their own home server. But for the average person, these are all additional risks that do not provide any desired additional feature!

There is one more extension to this: static IPv4 addresses suffer from the same problem. If your connection is always coming from the same IPv4 address, it does not matter how private your browser may be, your connections will be very easy to track across servers — passively, even. What is the remaining advantage of a static IP address? Certainly not authentication, as in 2017 you can’t claim ignorance of source address spoofing.

And by the way this is the same reason why providers started issuing dynamic IPv6 prefixes: you don’t want a household (if not a person strictly) to be tied to the same address forever, otherwise passive tracking is effectively a no-op. And yes, this is a pain for the geek in me, but it makes sense.

Static, publicly-routable IP addresses make accessing services running at home much easier, but at the same time they put you at risk. We have all been making fun of the “Internet of Things”, but at the same time it appears everybody wants to be able to make their own devices accessible from the outside, somehow. Even when that is likely the most obvious way for external attackers to access one’s unprotected internal network.

There are of course ways around this that do not require such a publicly routable address, and they are usually more secure. On the other hand, they are not quite a panacea of course. Indeed they effectively require a central call-back server to exist and be accessible, and usually that is tied to a single company, with custom protocols. As far as I know, no such open-source call-back system exists, and that still surprises me.

Conclusions? IPv6, just like static and publicly routable IP addresses, is an interesting tool that is very useful to technologists, and it is true that if you consider the original intentions behind the Internet these are pretty basic necessities; but if you think that the world, and thus the requirements and relative importance of features, has not changed, then I’m afraid you may be out of touch.

February 18, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)


Just a quick tip on how to easily create a Fedora chroot environment from (even a non-Fedora) Linux distribution.

I am going to show the process on Debian stretch but it should not be much different elsewhere.

Since I am going to leverage pip/PyPI, I need it available — that and a few non-Python widespread dependencies:

# apt install python-pip db-util lsb-release rpm yum
# pip install image-bootstrap pychroot

Now for the actual chroot creation, process and usage is very close to debootstrap of Debian:

# directory-bootstrap fedora --release 25 /var/lib/fedora_25_chroot

Done. Now let’s prove we have an actual Fedora 25 in there. For lsb_release we need the package redhat-lsb here, but the chroot was functional before that already.

# pychroot /var/lib/fedora_25_chroot dnf -y install redhat-lsb
# pychroot /var/lib/fedora_25_chroot lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch:[..]:printing-4.1-noarch
Distributor ID: Fedora
Description:    Fedora release 25 (Twenty Five)
Release:        25
Codename:       TwentyFive

Note the use of pychroot, which mainly takes care of bind mounts of /dev and friends out of the box.

directory-bootstrap is part of image-bootstrap and, besides Fedora, also supports creation of chroots for Arch Linux and Gentoo.

See you 🙂

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Reverse Engineering is just the first step (February 18, 2017, 13:03 UTC)

Last year I said that reverse engineering obsolete systems is useful, giving as an example adding Coreboot support for very old motherboards, which are simpler and whose components are more likely to have been described somewhere already. One thing I realize I didn’t make very clear in that post is that there is an important step in reverse engineering: documenting. As you can imagine from this blog, I think that documenting reverse engineering processes and results is important, but I found out that this is definitely not the case for everybody.

On the particularly good side, going to 33c3 made a positive impression on me. Talks such as The Ultimate GameBoy Talk were excellent: Michael Steil did an awesome job at describing a lot of the unknown details of Nintendo’s most popular handheld. He also did a great job at showing practical matters, such as which tricks various games used to implement things that at first sight would look impossible. And this is only one of his talks; he has a series that goes on year after year. I’ve watched his talk about the Commodore 64, and the only reason it’s less enjoyable to watch is that the recording quality suffers from its age.

In other posts I already referenced Micah’s videos. These have also been extremely nice to start watching, as she does a great job at explaining complex concepts, and even the “stream of consciousness” streams are very interesting and a good time to learn new tricks. What attracted me to her content, though, is the following video:

I have been using Wacom tablets for years, and I had no idea how they really worked behind the scenes. Not only does she give a great explanation of the technology in general, but the teardown of the mouse was also awesome, with full schematics and explanation of the small components. No wonder I signed up for her Patreon right away: she deserves to be better known and have a bigger following. And if funding her means spreading more knowledge around, well, then I’m happy to do my bit.

For the free software, open source and hacking community, reverse engineering is only half the process. The endgame is not for one person to know exactly how something works, but rather for the collectivity to gain more insight on things, so that more people have access to the information and can even improve on it. The community needs not only to help with that but also to prioritise projects that share information. And that does not just mean writing blogs about things. I said this before: blogs don’t replace documentation. You can see blogs as Micah’s shop-streaming videos, while documentation is more like her video about the tablets: they synthesize documentation in actually usable form, rather than just throwing information around.

I have a similar problem of course: my blog posts are usually a bit of a stream of consciousness and they do not serve a useful purpose in capturing the factual state of information. Take for example my post about reverse engineering the OneTouch Verio and its rambling on, then compare it with the proper protocol documentation. The latter is the actual important product, compared to my ramblings, and that is the one I can be proud of. I would also argue that documenting these things in an easily consumable form is more important than writing tools implementing them, as those only cover part of the protocol and in particular can only leverage my skills, which do not include statistical, pharmaceutical or data-visualisation skills.

Unfortunately there are obstacles to this idea of course. Sometimes, reverse engineering documentation is attacked by manufacturers even more than code implementing the same information. So for instance, while I have some information I still haven’t posted about a certain gaming mouse, I already know that the libratbag people do not want documentation of the protocols in their repository or wiki, because it causes them more headaches than the code. And then of course there is the problem of hosting this documentation somewhere.

I have been pushing my documentation to GitHub, hoping nobody causes a stink, but the good thing about using git rather than a wiki or similar tools is exactly that you can just move it around without losing information. This is not always the case: a lot of documentation is still nowadays only available either as part of the code itself, or on various people’s homepages. And there are at least two things that can happen with that. The first is the most obvious and morbid one: the author of the documentation dies, and the documentation disappears once their domain registration expires, or whatever else; and if the homepage is hosted at a university or other academic institution, it may very well be that the homepage disappears before the person does anyway.

I know a few other alternatives to store this kind of data have been suggested, including a common wiki akin to Wikipedia but allowing for original research, but I am still uncertain that is going to be very helpful. The most obvious thing I can think of is making sure this information can actually be published in books. And I think that at least No Starch Press has been doing a lot for this, publishing extremely interesting books including Designing BSD Rootkits and more recently Rootkits and Bootkits, which is still in Early Access. A big kudos to Bill for this.

From my side, I promise I’ll try to organize my findings of anything I’ll work on in the best of my ability, and possibly organize it in a different form than just a blog, because the community deserves better.

February 14, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2017-02-14 Blog moved to Gentoo blog (February 14, 2017, 10:36 UTC)

The blog has been moved here:

Still not sure if the move will be temporary or not.
So for now I will keep both blogs.

February 09, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

So, FOSDEM 2017 is over, and as every year it was both fun and interesting. There will for sure be more blog posts, e.g., with photographs from talks by our developers, the booth, the annual Gentoo dinner, or (obviously) the beer event. The Gentoo booth, centrally located just opposite to KDE and Gnome and directly next to CoreOS, was quite popular; it's always great to hear from all the enthusiastic Gentoo fans. Many visitors also prepared, compiled, and installed their own Gentoo buttons at our button machine.
In addition we had a new Gentoo LiveDVD as handout - the "Crispy Belgian Waffle" FOSDEM 2017 edition. For those of you who couldn't make it to Brussels, you can still get it! Download the ISO here and burn it on a DVD or copy it on a USB stick - all done. Many thanks to Fernando Reyes (likewhoa) for all his work!
Finally, for those who are wondering, the "Gentoo Ecosystem" poster from our table can be downloaded as PDF here. It is based on work by Daniel Robbins and mitzip from Funtoo; the source files are available on Github. Of course this poster is continuous work in progress, so tell me if you find something missing!

Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo at Fosdem (February 09, 2017, 06:00 UTC)

At the stand

It was nice to meet everyone and hang out as well. There was an interview with Hacker Public Radio which you can find HERE as well.

Just a short one this time, but it was nice to meet everyone.

February 08, 2017
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Tracking Service Function Chaining with Skydive (February 08, 2017, 11:43 UTC)

Skydive is “an open source real-time network topology and protocols analyzer”. It is a tool (with CLI and web interface) to help analyze and debug your network (OpenStack, OpenShift, containers, …). Dropped packets somewhere? MTU issues? Routing problems? These are some issues where running skydive will help.

So as an update on my previous demo post (this time based on the Newton release), let’s see how we can trace SFC with this analyzer!

devstack installation

Not a lot of changes here: check out devstack on the stable/newton branch, grab the local.conf file I prepared (configured to use the skydive 0.9 release) and run “./”!

For the curious, the SFC/Skydive specific parts are:
enable_plugin networking-sfc stable/newton

# Skydive
enable_plugin skydive refs/tags/v0.9.0
enable_service skydive-agent skydive-analyzer

Skydive web interface and demo instances

Before running the script to configure the SFC demo instances, open the skydive web interface (it listens on port 8082, check your instance firewall if you cannot connect):


The login was configured with devstack, so if you did not change it, use admin/pass123456.
Then add the demo instances as in the previous demo:
$ git clone -b sfc_newton_demo
$ ./openstack-scripts/

And watch as your cloud goes from “empty” to “more crowded”:

Skydive CLI, start traffic capture

Now let’s enable traffic capture on the integration bridge (br-int), and all tap interfaces (more details on the skydive CLI available in the documentation):
$ export SKYDIVE_USERNAME=admin
$ export SKYDIVE_PASSWORD=pass123456
$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client capture create --gremlin "G.V().Has('Name', 'br-int', 'Type', 'ovsbridge')"
$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client capture create --gremlin "G.V().Has('Name', Regex('^tap.*'))"

Note this can be done in the web interface too, but I wanted to show both interfaces.

Track a HTTP request diverted by SFC

Make a HTTP request from the source VM to the destination VM (see previous post for details). We will highlight the nodes where this request has been captured: in the GUI, click on the capture create button, select “Gremlin expression”, and use the query:

This expression reads as “on all captured flows matching IP address and port 80, show nodes”. With the CLI you would get a nice JSON output of these nodes, here in the GUI these nodes will turn yellow:

If you look at our tap interface nodes, you will see that two are not highlighted. If you check their IDs, you will find that they belong to the same service VM, the one in group 1 that did not get the traffic.

If you want to single out a request, in the skydive GUI, select one node where capture is active (for example br-int). In the flows table, select the request, scroll down to get its layer 3 tracking ID “L3TrackingID” and use it as Gremlin expression:

Going further

Now it’s your time to experiment! Modify the port chain, send a new HTTP request, get its L3TrackingID, and see its new path. I find the latest ID quickly with this CLI command (we will see how the skydive experts will react to this):
$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client topology query --gremlin "G.Flows().Has('Network','','Transport','80').Limit(1)" | jq ".[0].L3TrackingID"

You can also check each flow in turn, following the paths from a VM to another one, go further with SFC, or learn more about skydive:

February 07, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
I missed FOSDEM (February 07, 2017, 16:06 UTC)

I sadly had to miss out on the FOSDEM event. The entire weekend was filled with me being apathetic, feverish and overall zombie-like. Yes, sickness can be cruel. It wasn't until today that I had the energy back to fire up my laptop.

Sorry for the crew that I promised to meet at FOSDEM. I'll make it up, somehow.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Stricter JSON parsing with Haskell and Aeson (February 07, 2017, 05:10 UTC)

I’ve been having fun recently, writing a RESTful service using Haskell and Servant. I did run into a problem that I couldn’t easily find a solution to on the magical bounty of knowledge that is the Internet, so I thought I’d share my findings and solution.

While writing this service (and practically any Haskell code), step 1 is of course defining our core types. Our REST endpoint is basically a CRUD app which exchanges these with the outside world as JSON objects. Doing this is delightfully simple:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON = genericParseJSON defaultOptions

That’s all it takes to get the basic type up with free serialization using Aeson and Haskell Generics. This is followed by a few more lines to hook up GET and POST handlers, we instantiate the server using warp, and we’re good to go. All standard stuff, right out of the Servant tutorial.

The POST request accepts a new object in the form of a JSON object, which is then used to create the corresponding object on the server. Standard operating procedure again, as far as RESTful APIs go.

The nice part about doing it like this is that the input is automatically validated based on types. So input like:

{
  "jobInputUrl": 123, // should have been a string
  "jobPriority": 123
}

will result in:

Error in $: expected String, encountered Number

However, as this nice tour of how Aeson works demonstrates, if the input has keys that we don’t recognise, no error will be raised:

  "jobInputUrl": "",
  "jobPriority": 100,
  "junkField": "junkValue"

This behaviour is undesirable in use-cases such as mine — if the client is sending fields we don’t understand, I’d like the server to signal an error so the underlying problem can be caught early.

As it turns out, making the JSON parsing stricter so that it catches extraneous fields is just a little more involved. I didn’t find how this could be done in a single place on the Internet, so here’s the best I could do:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric      #-}

import Data.Aeson
import Data.Aeson.Types (Options(..))
import Data.Data
import Data.HashMap.Strict (keys)
import Data.List (sort)
import Data.Text (unpack)
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Data, Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON json = do
    job <- genericParseJSON defaultOptions json
    if keysMatchRecords json job
      then return job
      else fail "extraneous keys in input"
    where
      -- Make sure the set of JSON object keys is exactly the same as the fields in our object
      keysMatchRecords (Object o) d =
          let objKeys   = sort . fmap unpack . keys
              recFields = sort . fmap (fieldLabelModifier defaultOptions) . constrFields . toConstr
          in objKeys o == recFields d
      keysMatchRecords _ _          = False

The idea is quite straightforward, and likely very easy to make generic. The Data.Data module lets us extract the constructor for the Job type, and the list of fields in that constructor. We just make sure that’s an exact match for the list of keys in the JSON object we parsed, and that’s it.
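As a minimal, self-contained illustration of that mechanism (using a made-up `Person` type rather than the `Job` type from the post), `Data.Data` alone is enough to recover a record’s field names:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Data

-- Hypothetical example type, not from the post
data Person = Person { name :: String, age :: Int }
  deriving (Data, Show)

-- Recover the field names of a record value's constructor
recordFields :: Data a => a -> [String]
recordFields = constrFields . toConstr

main :: IO ()
main = print (recordFields (Person "Ada" 36))  -- prints ["name","age"]
```

The strict parser above does exactly this on the record side, then compares the result against the keys of the parsed JSON object.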

Of course, I’m quite new to the Haskell world, so it’s likely there are better ways to do this. Feel free to drop a comment with suggestions! In the meantime, maybe this will be useful to others facing a similar problem.

Update: I’ve fixed parseJSON to properly use fieldLabelModifier from the default options, so that the comparison actually works when you’re not using Aeson’s default options. Thanks to /u/tathougies for catching that.

I’m also hoping to rewrite this in generic form using Generics, so watch this space for more updates.

February 06, 2017
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Tesseract is one of the best open-source OCR engines available, and I recently took over ebuild maintainership for it. Development is still quite active, and since the last stable release a new OCR engine based on LSTM neural networks has been added. This engine is available in an alpha release, and initial numbers show a much faster OCR pass with fewer errors.

Sounds interesting? If you want to try it, this alpha release is now in tree (along with a live ebuild). I insist on the alpha tag: this is for testing, not for production. The ebuild is therefore masked by default, and you will have to add the corresponding entry to your package.unmask file.
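For reference, a package.unmask entry would look something like the following (the exact atom and version here are hypothetical; check the tree for the current alpha ebuild):

```
=app-text/tesseract-4.00.00_alpha*
```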
The ebuild also includes some additional changes, such as current documentation generated with USE=doc (also available in the stable release) and updated linguas.

Testing with paperwork

The initial reason I took over tesseract is that I also maintain the ebuilds for paperwork, a personal document manager for handling scanned documents and PDFs (and a heavy tesseract user). It recently got a new 1.1 release, if you want to give it a try!