

Last updated:
December 05, 2016, 08:05 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

December 05, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Ethical implications of selling routers (December 05, 2016, 03:04 UTC)

I write this while back in Italy at my mother's. As with many of my peers, visiting the family back in the old country means doing free tech support for them. I loathe it, but out of politeness I usually oblige.

In this particular case, my neighbour asked me to look at his tablet, because it was showing scammy ads every time he visited the website of the University of Venice. I checked, and besides some fake "protection" apps (sigh) the tablet looked fine. I told him to avoid the stock Samsung browser and prefer Chrome or Firefox, but then I realized something else was amiss.

A very brief check on his home router found that the problem was clearly there: the admin password was the default admin, the admin page was accessible from the WAN interface (that is, to the whole Internet), and indeed the DNS servers had been hijacked. The stop-gap solution was to change the default admin password and set Google Public DNS as the new server in DHCP.
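The hijack is easy to spot once you know what to look for: compare the resolvers the router hands out over DHCP against a set you trust. A minimal sketch of that check (the helper and the rogue 203.0.113.7 address are made up for illustration; 203.0.113.0/24 is a documentation range, not the real hijacker):

```python
# Flag DNS servers advertised by the router that are not in a trusted set --
# hijacked resolvers were the symptom on this router.
TRUSTED_DNS = {"8.8.8.8", "8.8.4.4"}  # Google Public DNS, the stop-gap used above

def suspicious_dns(servers, trusted=TRUSTED_DNS):
    """Return the advertised resolvers that are not in the trusted set."""
    return [s for s in servers if s not in trusted]

print(suspicious_dns(["8.8.8.8", "203.0.113.7"]))  # ['203.0.113.7']
```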

Unfortunately the proper solution (disabling remote access to the admin interface) is not viable for this router: this model (TP-Link TD-W8961N v2) has no firmware update to fix the absurd ACL system that should lock you out from the outside, and really doesn't. Indeed, the firmware installed on the device looks newer than the one on TP-Link's website, but that's just because it's the Italian localized version.

Note: make sure you change the default password of your router even if remote access is disabled! While I used not to care and kept admin:admin or admin:password pairs, it's getting way too easy to hijack browsers and sidestep the remote access limitations.

Up to here it would be your usual tale of people who don't (and really shouldn't need to) have a clue about security being caught in the crossfire. Things changed when he told me that he had brought the router in for service to the store he bought it from, because he needed port forwarding enabled for some videogame (he didn't say which one). Which means a store sold this insecure device, serviced it, and left the customer in a horribly insecure state.

Unfortunately there is really not much I can do about that store. Even though I could leave it a negative review, I doubt anybody over here checks those reviews. And because they are friendly, my neighbour is unlikely to stop going to that store, even though I advised him against it. He was also sure he had found a good deal with this router (it was available online for €55, but they sold it for just €29), but I have a hunch that the online version would have been the same model in V3 form (which includes a firmware fixing the vulnerability above), while the store sold off its previous stock of V2.

This goes back to my previous point that technologists have a responsibility towards their users, whether those users are geeks or not. I think OpenWrt was a very good starting point for this; unfortunately, from what I can see, the project stagnated while a number of commercial projects around it flourished, which only helps to a point. Also, while OpenWrt works great if you need a "pure" router, it becomes vastly less useful the moment you live in a country like Italy, where most broadband still arrives in the form of DSL and you need to look for a modem/router.

The FSFE runs a campaign to let you use whichever router you want, but besides being a very local campaign (compulsory routers were never a thing in Italy, for instance, and as far as I can tell the campaign focused only on the German market), it also opens up the possibility that users will choose cheaper, significantly less secure devices because they don't care, or more precisely because they don't realize how bad that is for them and for the Internet as we know it.

Some time ago, someone in the Italian parliament (I completely forgot who, and I don't care right now) proposed a law under which you would need a license to install customer-premises equipment. Most free software people were against the proposal, including me. But I sometimes wonder if it made sense, to a point. Unfortunately I doubt acquiring such a license would provide you with the ethics necessary for this kind of job.

I don’t have easy solutions, but I do think we should be thinking about them. We need devices that are actually secure by default, and where the user has to try to make them insecure. We need ways to reuse devices without having to spend more money for them to be replaced, and after-market ROMs or WRT-style firmwares are that, except, because of targets, too many of those don’t apply to the people who need them the most.

December 02, 2016
10 year anniversary for (December 02, 2016, 18:55 UTC)

December 3rd 2016 marks 10 years since was first announced on the sks-devel mailing list. The time really has passed by too quickly, driven by a community that is a pleasure to cooperate with. Sadly there is still a long way to go before OpenPGP reaches mainstream use, but in this blog post … Continue reading "10 year anniversary for"

December 01, 2016

GraphicsMagick is an image processing system.

This is an old memory allocation failure, discovered some time ago. The maintainer, Mr. Bob Friesenhahn, was able to reproduce the issue; I'm quoting his feedback about it:

The problem is that the embedded JPEG data claims to have dimensions 59395×56833 and
this is only learned after we are in the JPEG reader.

But for some reason (perhaps it is not easy to fix) it is still not fixed.
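As a back-of-the-envelope check, the claimed dimensions do account for the roughly 67 GB allocation that ASan refuses in the output below. The 20 bytes per pixel figure is my assumption for a Q32 build (four 32-bit channels plus a 32-bit index); it is not stated in the log:

```python
# Rough size check for the pixel cache allocation (assumed layout, see above).
width, height = 59395, 56833        # dimensions claimed by the embedded JPEG
bytes_per_pixel = 20                # assumption: Q32 PixelPacket (16) + index (4)
estimated = width * height * bytes_per_pixel
failed_alloc = 0xFB8065000          # 67511930880 bytes, from the ASan log
print(estimated, failed_alloc)      # both come out at about 67.5 GB
```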

The complete ASan output:

# gm identify $FILE
==12404==ERROR: AddressSanitizer failed to allocate 0xfb8065000 (67511930880) bytes of LargeMmapAllocator (error code: 12)
==12404==Process memory map follows:
	0x000000400000-0x000000522000	/usr/bin/gm
	0x000000722000-0x000000723000	/usr/bin/gm
	0x000000723000-0x000000726000	/usr/bin/gm
	0x7fcc55fbe000-0x7fcc56027000	/usr/lib64/
	0x7fcc56027000-0x7fcc56226000	/usr/lib64/
	0x7fcc56226000-0x7fcc56227000	/usr/lib64/
	0x7fcc56227000-0x7fcc56228000	/usr/lib64/
	0x7fcc56228000-0x7fcc56254000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56254000-0x7fcc56453000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56453000-0x7fcc56454000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56454000-0x7fcc56457000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5645b000-0x7fcc5648c000	/usr/lib64/
	0x7fcc5648c000-0x7fcc5668b000	/usr/lib64/
	0x7fcc5668b000-0x7fcc5668c000	/usr/lib64/
	0x7fcc5668c000-0x7fcc5668d000	/usr/lib64/
	0x7fcc5668d000-0x7fcc5671d000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5671d000-0x7fcc5691d000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5691d000-0x7fcc5691f000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc5691f000-0x7fcc56927000	/usr/lib64/GraphicsMagick-1.3.24/modules-Q32/coders/
	0x7fcc56932000-0x7fcc5cfa4000	/usr/lib64/locale/locale-archive
	0x7fcc5fdff000-0x7fcc5fe08000	/usr/lib64/
	0x7fcc5fe08000-0x7fcc60007000	/usr/lib64/
	0x7fcc60007000-0x7fcc60008000	/usr/lib64/
	0x7fcc60008000-0x7fcc60009000	/usr/lib64/
	0x7fcc60009000-0x7fcc6001e000	/lib64/
	0x7fcc6001e000-0x7fcc6021d000	/lib64/
	0x7fcc6021d000-0x7fcc6021e000	/lib64/
	0x7fcc6021e000-0x7fcc6021f000	/lib64/
	0x7fcc6021f000-0x7fcc6022e000	/lib64/
	0x7fcc6022e000-0x7fcc6042d000	/lib64/
	0x7fcc6042d000-0x7fcc6042e000	/lib64/
	0x7fcc6042e000-0x7fcc6042f000	/lib64/
	0x7fcc6042f000-0x7fcc604d6000	/usr/lib64/
	0x7fcc604d6000-0x7fcc606d6000	/usr/lib64/
	0x7fcc606d6000-0x7fcc606dc000	/usr/lib64/
	0x7fcc606dc000-0x7fcc606dd000	/usr/lib64/
	0x7fcc606dd000-0x7fcc60730000	/usr/lib64/
	0x7fcc60730000-0x7fcc60930000	/usr/lib64/
	0x7fcc60930000-0x7fcc60931000	/usr/lib64/
	0x7fcc60931000-0x7fcc60936000	/usr/lib64/
	0x7fcc60936000-0x7fcc60ac9000	/lib64/
	0x7fcc60ac9000-0x7fcc60cc9000	/lib64/
	0x7fcc60cc9000-0x7fcc60ccd000	/lib64/
	0x7fcc60ccd000-0x7fcc60ccf000	/lib64/
	0x7fcc60cd3000-0x7fcc60ce9000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60ce9000-0x7fcc60ee8000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60ee8000-0x7fcc60ee9000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60ee9000-0x7fcc60eea000	/usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/
	0x7fcc60eea000-0x7fcc60ef0000	/lib64/
	0x7fcc60ef0000-0x7fcc610f0000	/lib64/
	0x7fcc610f0000-0x7fcc610f1000	/lib64/
	0x7fcc610f1000-0x7fcc610f2000	/lib64/
	0x7fcc610f2000-0x7fcc61109000	/lib64/
	0x7fcc61109000-0x7fcc61308000	/lib64/
	0x7fcc61308000-0x7fcc61309000	/lib64/
	0x7fcc61309000-0x7fcc6130a000	/lib64/
	0x7fcc6130e000-0x7fcc6140b000	/lib64/
	0x7fcc6140b000-0x7fcc6160a000	/lib64/
	0x7fcc6160a000-0x7fcc6160b000	/lib64/
	0x7fcc6160b000-0x7fcc6160c000	/lib64/
	0x7fcc6160c000-0x7fcc6160e000	/lib64/
	0x7fcc6160e000-0x7fcc6180e000	/lib64/
	0x7fcc6180e000-0x7fcc6180f000	/lib64/
	0x7fcc6180f000-0x7fcc61810000	/lib64/
	0x7fcc61810000-0x7fcc61e6e000	/usr/lib64/
	0x7fcc61e6e000-0x7fcc6206e000	/usr/lib64/
	0x7fcc6206e000-0x7fcc6209f000	/usr/lib64/
	0x7fcc6209f000-0x7fcc62125000	/usr/lib64/
	0x7fcc621a0000-0x7fcc621c2000	/lib64/
	0x7fcc62322000-0x7fcc62329000	/usr/lib64/gconv/gconv-modules.cache
	0x7fcc62329000-0x7fcc6234c000	/usr/share/locale/it/LC_MESSAGES/
	0x7fcc623c1000-0x7fcc623c2000	/lib64/
	0x7fcc623c2000-0x7fcc623c3000	/lib64/
	0x7ffcfee34000-0x7ffcfee55000	[stack]
	0x7ffcfef4c000-0x7ffcfef4e000	[vvar]
	0x7ffcfef4e000-0x7ffcfef50000	[vdso]
	0xffffffffff600000-0xffffffffff601000	[vsyscall]
==12404==End of process memory map.
==12404==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/ "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
    #0 0x4c9b3d in AsanCheckFailed /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #1 0x4d0673 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/
    #2 0x4d0861 in __sanitizer::ReportMmapFailureAndDie(unsigned long, char const*, char const*, int, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/
    #3 0x4d989a in __sanitizer::MmapOrDie(unsigned long, char const*, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/
    #4 0x421c2f in __sanitizer::LargeMmapAllocator::Allocate(__sanitizer::AllocatorStats*, unsigned long, unsigned long) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1033
    #5 0x421c2f in __sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback>, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >, __sanitizer::LargeMmapAllocator >::Allocate(__sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >*, unsigned long, unsigned long, bool, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1302
    #6 0x421c2f in __asan::Allocator::Allocate(unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #7 0x421c2f in __asan::asan_malloc(unsigned long, __sanitizer::BufferedStackTrace*) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #8 0x4c0201 in malloc /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/
    #9 0x7fcc61c6a3f2 in MagickRealloc /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/memory.c:471:18
    #10 0x7fcc61cbb2b0 in OpenCache /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:3155:7
    #11 0x7fcc61cb98fd in ModifyCache /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:2955:18
    #12 0x7fcc61cbee4c in SetCacheNexus /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:3878:7
    #13 0x7fcc61cbf5e1 in SetCacheViewPixels /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:3957:10
    #14 0x7fcc61cbf5e1 in SetImagePixels /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/pixel_cache.c:4023
    #15 0x7fcc56235483 in ReadJPEGImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/jpeg.c:1344:9
    #16 0x7fcc61ad3a8a in ReadImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13
    #17 0x7fcc566ed13e in ReadOneJNGImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/png.c:3308:17
    #18 0x7fcc566d6f72 in ReadJNGImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/png.c:3516:9
    #19 0x7fcc61ad3a8a in ReadImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13
    #20 0x7fcc61ad1a4b in PingImage /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9
    #21 0x7fcc61a23240 in IdentifyImageCommand /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17
    #22 0x7fcc61a27786 in MagickCommand /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17
    #23 0x7fcc61a81740 in GMCommandSingle /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10
    #24 0x7fcc61a7fce3 in GMCommand /tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16
    #25 0x7fcc6095661f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #26 0x418cd8 in _init (/usr/bin/gm+0x418cd8)

/usr/bin/gm identify: abort due to signal 6 (SIGABRT) "Abort"...

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-10-19: bug discovered and reported privately to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


graphicsmagick: memory allocation failure in MagickRealloc (memory.c)

libming is a Flash (SWF) output library. It can be used from PHP, Perl, Ruby, Python, C, C++, Java, and probably more on the way.

Fuzzing revealed a NULL pointer access in listswf. The bug does not reside in any shared object, but if you have a web application that calls the listswf binary directly to parse untrusted SWF files, then you are affected.
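The "header indicates a filesize of 7917 but filesize is 187" warning in the output below comes from the length field of the SWF header (3-byte signature, a version byte, then a little-endian uint32 declared file length). A minimal sketch of reading that field (standard SWF layout, not libming's actual code):

```python
import struct

def swf_declared_length(data):
    """Return the file length an SWF header claims: byte 3 is the version,
    bytes 4-7 are a little-endian uint32 length."""
    if len(data) < 8 or data[:3] not in (b"FWS", b"CWS"):
        raise ValueError("not an SWF header")
    return struct.unpack_from("<I", data, 4)[0]

# A crafted header like the fuzzed sample's: claims 7917 bytes.
header = b"FWS" + bytes([100]) + struct.pack("<I", 7917)
print(swf_declared_length(header))  # 7917, far more than the 187 bytes on disk
```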

The complete ASan output:

# listswf $FILE
header indicates a filesize of 7917 but filesize is 187
File version: 100
File size: 187
Frame size: (8452,8981)x(-4096,0)
Frame rate: 67.851562 / sec.
Total frames: 16387
 Stream out of sync after parse of blocktype 2 (SWF_DEFINESHAPE). 166 but expecting 23.

Offset: 21 (0x0015)
Block type: 2 (SWF_DEFINESHAPE)
Block length: 0

 CharacterID: 55319
 RECT:  (-2048,140)x(0,-1548):12
 FillStyleArray:  FillStyleCount:     18  FillStyleCountExtended:      0
 FillStyle:  FillStyleType: 0
 RGBA: ( 0, 1,9a,ff)
 FillStyle:  FillStyleType: 7f
 FillStyle:  FillStyleType: b
 FillStyle:  FillStyleType: fb
 FillStyle:  FillStyleType: 82
 FillStyle:  FillStyleType: 24
 FillStyle:  FillStyleType: 67
 FillStyle:  FillStyleType: 67
 FillStyle:  FillStyleType: 18
 FillStyle:  FillStyleType: 9d
 FillStyle:  FillStyleType: 6d
 FillStyle:  FillStyleType: d7
 FillStyle:  FillStyleType: 97
 FillStyle:  FillStyleType: 1
 FillStyle:  FillStyleType: 26
 FillStyle:  FillStyleType: 1a
 FillStyle:  FillStyleType: 17
 FillStyle:  FillStyleType: 9a
 LineStyleArray:  LineStyleCount: 19
 LineStyle:  Width: 1722
 RGBA: (7a,38,df,ff)
 LineStyle:  Width: 42742
 RGBA: ( 0, 0, 0,ff)
 LineStyle:  Width: 70
 RGBA: (10,91,64,ff)
 LineStyle:  Width: 37031
 RGBA: (e7,c7,15,ff)
 LineStyle:  Width: 9591
 RGBA: (dc,ee,81,ff)
 LineStyle:  Width: 4249
 RGBA: ( 0,ee,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,ed,ff)
 LineStyle:  Width: 60909
 RGBA: (ed,ed,a7,ff)
 LineStyle:  Width: 42919
 RGBA: (a7,a7,9c,ff)
 LineStyle:  Width: 40092
 RGBA: (9c,9c,9c,ff)
 LineStyle:  Width: 32156
 RGBA: (9c,bc,9c,ff)
 LineStyle:  Width: 33948
 RGBA: (9c,9c,9c,ff)
 LineStyle:  Width: 26404
 RGBA: ( 0, c,80,ff)
 LineStyle:  Width: 42752
 RGBA: (a7, 2, 2,ff)
 LineStyle:  Width: 514
 RGBA: (c6, 2, 0,ff)
 NumFillBits: 11
 NumLineBits: 13
 Curved EdgeRecord: 9 Control(-145,637) Anchor(-735,-1010)
 Curved EdgeRecord: 7 Control(-177,156) Anchor(16,32)
  StateNewStyles: 0 StateLineStyle: 1  StateFillStyle1: 0
  StateFillStyle0: 0 StateMoveTo: 0
   LineStyle: 257

Offset: 23 (0x0017)
Block type: 864 (Unknown Block Type)
Block length: 23

0000: 64 00 00 00 46 4f a3 12  00 00 01 9a 7f 0b fb 82    d...FO.. .......
0010: 24 67 67 18 9d 6d d7                               $gg..m.

Offset: 48 (0x0030)
Block type: 6 (SWF_DEFINEBITS)
Block length: 23

 CharacterID: 6694

Offset: 73 (0x0049)
Block length: 7

==27703==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000059d2ff bp 0x7ffe859e6fc0 sp 0x7ffe859e6f50 T0)
==27703==The signal is caused by a READ memory access.
==27703==Hint: address points to the zero page.
    #0 0x59d2fe in dumpBuffer /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/read.c:441:23
    #1 0x51c305 in outputSWF_UNKNOWNBLOCK /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:2870:3
    #2 0x51c305 in outputBlock /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:2937
    #3 0x527e83 in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:277:4
    #4 0x527e83 in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #5 0x7f0186c4461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #6 0x419b38 in _init (/usr/bin/listswf+0x419b38)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/read.c:441:23 in dumpBuffer

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-24: bug discovered and reported to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libming: listswf: NULL pointer dereference in dumpBuffer (read.c)

libming is a Flash (SWF) output library. It can be used from PHP, Perl, Ruby, Python, C, C++, Java, and probably more on the way.

Fuzzing revealed a heap-based buffer overflow in listswf. The bug does not reside in any shared object, but if you have a web application that calls the listswf binary directly to parse untrusted SWF files, then you are affected.
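The crash is a classic of this bug class: per the ASan trace below, parseSWF_PROTECT reads the block's bytes into a 1-byte heap buffer without a terminating NUL, and outputSWF_PROTECT later prints that buffer as a C string, so the string walk runs off the end of the allocation. An illustration of the mechanism over simulated memory (not libming's actual code):

```python
def c_strlen(memory, start):
    """Walk memory until a NUL byte, like C's strlen -- with no bounds check."""
    i = start
    while memory[i] != 0:
        i += 1
    return i - start

# A 1-byte "string" with no terminator, followed by adjacent heap bytes:
heap = b"\x41" + b"\x42\x42\x00"
print(c_strlen(heap, 0))  # 3: the walk ran past the 1-byte buffer
```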

The complete ASan output:

# listswf $FILE
header indicates a filesize of 18446744072727653119 but filesize is 165
File version: 128
File size: 165
Frame size: (-4671272,-4672424)x(-4703645,4404051)
Frame rate: 142.777344 / sec.
Total frames: 2696

Offset: 25 (0x0019)
Block type: 67 (Unknown Block Type)
Block length: 24

0000: 00 97 6b ba 06 91 6f 98  7a 38 01 00 a6 e3 80 2c    ..k...o. z8.....,
0010: 77 25 d3 d3 1a 19 80 7f                            w%.....

Offset: 51 (0x0033)
Block type: 24 (SWF_PROTECT)
Block length: 1
==3132==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eff1 at pc 0x000000499d10 bp 0x7ffc34a55e10 sp 0x7ffc34a555c0
READ of size 2 at 0x60200000eff1 thread T0
    #0 0x499d0f in printf_common /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #1 0x499a9d in printf_common /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #2 0x49abfa in __interceptor_vfprintf /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/
    #3 0x509dd7 in vprintf /usr/include/bits/stdio.h:38:10
    #4 0x509dd7 in _iprintf /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:144
    #5 0x51f1f5 in outputSWF_PROTECT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:1873:5
    #6 0x51c35b in outputBlock /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/outputtxt.c:2933:4
    #7 0x527e83 in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:277:4
    #8 0x527e83 in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #9 0x7f0f1ff6861f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #10 0x419b38 in _init (/usr/bin/listswf+0x419b38)
0x60200000eff1 is located 0 bytes to the right of 1-byte region [0x60200000eff0,0x60200000eff1)
allocated by thread T0 here:
    #0 0x4d28f8 in malloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x59b9ab in readBytes /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/read.c:201:17
    #2 0x592864 in parseSWF_PROTECT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:2668:26
    #3 0x5302cb in blockParse /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/blocktypes.c:145:14
    #4 0x527d4f in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:265:11
    #5 0x527d4f in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #6 0x7f0f1ff6861f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/ in printf_common
Shadow bytes around the buggy address:
  0x0c047fff9da0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9db0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9de0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9df0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[01]fa
  0x0c047fff9e00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
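The overflow above is reported inside printf_common, reached from readBytes in parseSWF_PROTECT. A common cause of that signature is printing a length-counted, file-supplied string with %s when it was never NUL-terminated. A hedged sketch of the safe pattern (illustrative names, not libming's actual code):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the bug class (not libming's actual code):
 * the PROTECT tag carries a length-counted password that is not
 * NUL-terminated in the file.  Copying it without appending '\0'
 * lets a later printf("%s") read past the heap allocation, which is
 * the printf_common overflow in the trace above.  The +1 and the
 * terminator below are the fix. */
static char *read_string(const unsigned char *buf, size_t len)
{
    char *s = malloc(len + 1);
    if (s == NULL)
        return NULL;
    memcpy(s, buf, len);
    s[len] = '\0';  /* without this: heap-buffer-overflow on print */
    return s;
}
```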

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-24: bug discovered and reported to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libming: listswf: heap-based buffer overflow in _iprintf (outputtxt.c)

libming is a Flash (SWF) output library. It can be used from PHP, Perl, Ruby, Python, C, C++, Java, and probably more on the way.

Fuzzing revealed an overflow in listswf. The bug does not reside in any shared object, but if you have a web application that calls the listswf binary directly to parse untrusted SWF files, then you are affected.

The complete ASan output:

# listswf $FILE
header indicates a filesize of 237 but filesize is 191
File version: 6
File size: 191
Frame size: (3493,-4999)x(-5076,9541)
Frame rate: 39.625000 / sec.
Total frames: 33032
 Stream out of sync after parse of blocktype 18 (SWF_SOUNDSTREAMHEAD). 29 but expecting 27.

Offset: 21 (0x0015)
Block length: 4

  PlaybackSoundRate 5.5 kHz
  PlaybackSoundSize 16 bit
  PlaybackSoundType stereo
  StreamSoundCompression MP3
  StreamSoundRate 44 kHz
  StreamSoundSize error
  StreamSoundType mono
  StreamSoundSampleCount 10838
  LatencySeek 53805

Offset: 27 (0x001b)
Block type: 840 (Unknown Block Type)
Block length: 45

0000: 2c 37 a6 30 3a 29 ab d2  54 6e 8e 88 0a f5 1b 6a    ,7.0:).. Tn.....j
0010: a2 f7 a1 a3 a3 a1 e1 06  70 04 8e 90 82 03 40 47    ........ p.....@G
0020: e0 30 c6 a6 83 57 ac 46  4f 8a 91 76 07             .0...W.F O..v.

Offset: 74 (0x004a)
Block type: 514 (Unknown Block Type)
Block length: 27

0000: b2 05 12 c2 3e 3a 01 20  d8 a7 7d 63 01 11 5c fc    ....>:.  ..}c..\.
0010: 15 8e 90 43 8f 64 8e 58  49 ad 95                   ...C.d.X I..

Offset: 103 (0x0067)
Block type: 297 (Unknown Block Type)
Block length: 20

0000: 27 79 a2 e3 2c 56 2a 2d  d2 2c 37 a6 30 3a 29 ab    'y..,V*- .,7.0:).
0010: d2 54 6e 8e                                        .Tn.

skipping 8 bytes

Offset: 125 (0x007d)
Block length: 8

255 gradients in SWF_MORPHGRADiENT, expected a max of 8=================================================================
==31250==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62400000df10 at pc 0x00000057f342 bp 0x7ffe24b21ef0 sp 0x7ffe24b21ee8
WRITE of size 1 at 0x62400000df10 thread T0
    #0 0x57f341 in parseSWF_RGBA /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:66:12
    #1 0x57f341 in parseSWF_MORPHGRADIENTRECORD /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:746
    #2 0x57f341 in parseSWF_MORPHGRADIENT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:761
    #3 0x57e25a in parseSWF_MORPHFILLSTYLE /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:777:7
    #4 0x58b9b8 in parseSWF_MORPHFILLSTYLES /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:804:7
    #5 0x58b9b8 in parseSWF_DEFINEMORPHSHAPE /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:2098
    #6 0x5302cb in blockParse /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/blocktypes.c:145:14
    #7 0x527d4f in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:265:11
    #8 0x527d4f in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #9 0x7f39cc7da61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #10 0x419b38 in _init (/usr/bin/listswf+0x419b38)

0x62400000df10 is located 0 bytes to the right of 7696-byte region [0x62400000c100,0x62400000df10)
allocated by thread T0 here:
    #0 0x4d2af5 in calloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x58b90a in parseSWF_MORPHFILLSTYLES /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:801:28
    #2 0x58b90a in parseSWF_DEFINEMORPHSHAPE /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:2098
    #3 0x5302cb in blockParse /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/blocktypes.c:145:14
    #4 0x527d4f in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:265:11
    #5 0x527d4f in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #6 0x7f39cc7da61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:66:12 in parseSWF_RGBA
Shadow bytes around the buggy address:
  0x0c487fff9b90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c487fff9ba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c487fff9bb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c487fff9bc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c487fff9bd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c487fff9be0: 00 00[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c487fff9bf0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c487fff9c00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c487fff9c10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c487fff9c20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c487fff9c30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
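The "255 gradients in SWF_MORPHGRADiENT, expected a max of 8" message together with the out-of-bounds write in parseSWF_RGBA suggests a gradient count taken from the file and used against a buffer sized for the spec maximum. A hedged C sketch of this bug class (illustrative names, not libming's actual code):

```c
#include <stddef.h>

#define MAX_GRADIENTS 8  /* spec maximum for a MORPHGRADIENT record */

/* Hypothetical sketch of the bug class (not libming's actual code):
 * the gradient count is read straight from the untrusted file, so it
 * must be validated before filling a buffer sized for the spec
 * maximum.  Omitting the clamp below is what lets a count of 255
 * overflow an 8-entry array of RGBA records. */
static int parse_gradient_count(const unsigned char *buf, size_t len)
{
    if (len < 1)
        return -1;
    unsigned count = buf[0];          /* attacker-controlled, 0..255 */
    if (count > MAX_GRADIENTS)        /* the missing bounds check */
        return -1;
    if (len < 1 + (size_t)count * 4)  /* 4 bytes per RGBA record */
        return -1;
    return (int)count;
}
```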

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-24: bug discovered and reported to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libming: listswf: heap-based buffer overflow in parseSWF_RGBA (parser.c)

libming is a Flash (SWF) output library. It can be used from PHP, Perl, Ruby, Python, C, C++, Java, and probably more on the way.

Fuzzing revealed an overflow in listswf. The bug does not reside in any shared object, but if you have a web application that calls the listswf binary directly to parse untrusted SWF files, then you are affected.

The complete ASan output:

# listswf $FILE
header indicates a filesize of 237 but filesize is 272
File version: 6
File size: 272
Frame size: (-4926252,-2829100)x(-2829100,-2829100)
Frame rate: 166.648438 / sec.
Total frames: 42662

Offset: 25 (0x0019)
Block type: 666 (Unknown Block Type)
Block length: 38

0000: a6 a6 a6 a6 a6 a6 a6 a6  a6 a6 a6 a6 a6 c5 c5 c5    ........ ........
0010: c5 c5 00 02 00 00 19 9a  02 ba 06 80 00 00 fe 38    ........ .......8
0020: 01 00 a6 e3 80 29                                  .....)

Offset: 65 (0x0041)
Block type: 149 (Unknown Block Type)
Block length: 55

0000: dc 20 1c db 31 89 c7 ff  7f 0a d8 97 c5 c5 c5 c5    . ..1... .......
0010: cb c5 ea fc 77 da c5 c5  c5 c5 c5 d3 d3 1a 19 9a    ....w... ........
0020: 7a 38 df f6 a6 e3 80 40  77 a5 e3 00 ba f5 90 6f    z8.....@ w......o
0030: d3 1a 5d f0 59 0e c2                               ..].Y..

Offset: 122 (0x007a)
Block type: 896 (Unknown Block Type)
Block length: 47

0000: 7f 41 41 41 67 67 18 9d  6d ea 3b 3f ff ff ba 06    AAAgg.. m.;?....
0010: 80 00 00 fe 38 01 00 a6  e3 80 29 77 25 dc 20 1c    ....8... ..)w%. .
0020: db 31 89 c7 ff 7f 0a d8  97 c5 c5 c5 c5 a6 2f       .1..... ....../

Offset: 171 (0x00ab)
Block type: 919 (Unknown Block Type)
Block length: 48

0000: ab d2 20 65 ff fe 7f 7f  0b 1c 62 24 67 89 18 79    .. e.. ..b$g..y
0010: a2 e3 2c 61 2a 2d c1 2c  37 a6 2f f0 e5 ab d2 20    ..,a*-., 7./.... 
0020: 65 65 65 65 65 c7 8e cb  0a d8 1b 75 85 c5 c5 03    eeeee... ...u....

Offset: 221 (0x00dd)
Block type: 791 (Unknown Block Type)
Block length: 7

0000: c5 b7 c5 d3 d3 1a 19                               .......

==634==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efb0 at pc 0x00000058582e bp 0x7fff1ed6df60 sp 0x7fff1ed6df58
WRITE of size 2 at 0x60200000efb0 thread T0
    #0 0x58582d in parseSWF_DEFINEFONT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:1656:29
    #1 0x5302cb in blockParse /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/blocktypes.c:145:14
    #2 0x527d4f in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:265:11
    #3 0x527d4f in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #4 0x7fad6007961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #5 0x419b38 in _init (/usr/bin/listswf+0x419b38)

0x60200000efb1 is located 0 bytes to the right of 1-byte region [0x60200000efb0,0x60200000efb1)
allocated by thread T0 here:
    #0 0x4d28f8 in malloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x58532d in parseSWF_DEFINEFONT /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:1655:36
    #2 0x5302cb in blockParse /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/blocktypes.c:145:14
    #3 0x527d4f in readMovie /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:265:11
    #4 0x527d4f in main /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/main.c:350
    #5 0x7fad6007961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/ming-0.4.7/work/ming-0_4_7/util/parser.c:1656:29 in parseSWF_DEFINEFONT
Shadow bytes around the buggy address:
  0x0c047fff9da0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9db0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9de0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9df0: fa fa fa fa fa fa[01]fa fa fa 00 fa fa fa 07 fa
  0x0c047fff9e00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
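A 2-byte write into a 1-byte heap region, as in the trace above (malloc at parser.c:1655, write at 1656), is typical of an allocation sized in bytes while the stores are 16-bit values. A hedged sketch of the class (illustrative names, not libming's actual code):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch of the bug class (not libming's actual code):
 * a table of 16-bit font offsets sized in *bytes* instead of in
 * uint16_t entries.  Writing malloc(n_entries) would allocate half
 * the needed space, so the very first 2-byte store could land past
 * the region, matching the "WRITE of size 2 ... 1-byte region"
 * pattern in the trace.  Multiplying by sizeof(uint16_t) is the fix. */
static uint16_t *alloc_offset_table(size_t n_entries)
{
    return malloc(n_entries * sizeof(uint16_t)); /* not malloc(n_entries) */
}
```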

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-24: bug discovered and reported to upstream
2016-12-01: blog post about the issue

This bug was found with American Fuzzy Lop.


libming: listswf: heap-based buffer overflow in parseSWF_DEFINEFONT (parser.c)

imagemagick is a software suite to create, edit, compose, or convert bitmap images.

A fuzzing run on an updated version, which includes the fix for CVE-2016-9556, revealed that the issue is still present.

The complete ASan output:

# identify $FILE
==30875==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x610000007cc0 at pc 0x7f897b123267 bp 0x7fff44a4ba70 sp 0x7fff44a4ba68
READ of size 4 at 0x610000007cc0 thread T0
    #0 0x7f897b123266 in IsPixelGray /tmp/portage/media-gfx/imagemagick-
    #1 0x7f897b123266 in IdentifyImageGray /tmp/portage/media-gfx/imagemagick-
    #2 0x7f897b123e2d in IdentifyImageType /tmp/portage/media-gfx/imagemagick-
    #3 0x7f897b3ca308 in IdentifyImage /tmp/portage/media-gfx/imagemagick-
    #4 0x7f897ab0e591 in IdentifyImageCommand /tmp/portage/media-gfx/imagemagick-
    #5 0x7f897ab85ee6 in MagickCommandGenesis /tmp/portage/media-gfx/imagemagick-
    #6 0x50a495 in MagickMain /tmp/portage/media-gfx/imagemagick-
    #7 0x50a495 in main /tmp/portage/media-gfx/imagemagick-
    #8 0x7f89797c061f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #9 0x419d28 in _init (/usr/bin/magick+0x419d28)

0x610000007cc0 is located 0 bytes to the right of 128-byte region [0x610000007c40,0x610000007cc0)
allocated by thread T0 here:
    #0 0x4d3685 in posix_memalign /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x7f897b44a619 in AcquireAlignedMemory /tmp/portage/media-gfx/imagemagick-
    #2 0x7f897b15840e in AcquireCacheNexusPixels /tmp/portage/media-gfx/imagemagick-
    #3 0x7f897b15840e in SetPixelCacheNexusPixels /tmp/portage/media-gfx/imagemagick-
    #4 0x7f897b14e891 in GetVirtualPixelsFromNexus /tmp/portage/media-gfx/imagemagick-
    #5 0x7f897b16d90e in GetCacheViewVirtualPixels /tmp/portage/media-gfx/imagemagick-
    #6 0x7f897b122878 in IdentifyImageGray /tmp/portage/media-gfx/imagemagick-
    #7 0x7f897b123e2d in IdentifyImageType /tmp/portage/media-gfx/imagemagick-
    #8 0x7f897b3ca308 in IdentifyImage /tmp/portage/media-gfx/imagemagick-
    #9 0x7f897ab0e591 in IdentifyImageCommand /tmp/portage/media-gfx/imagemagick-
    #10 0x7f897ab85ee6 in MagickCommandGenesis /tmp/portage/media-gfx/imagemagick-
    #11 0x50a495 in MagickMain /tmp/portage/media-gfx/imagemagick-
    #12 0x50a495 in main /tmp/portage/media-gfx/imagemagick-
    #13 0x7f89797c061f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-gfx/imagemagick- in IsPixelGray
Shadow bytes around the buggy address:
  0x0c207fff8f40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c207fff8f50: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c207fff8f60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c207fff8f70: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c207fff8f80: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
=>0x0c207fff8f90: 00 00 00 00 00 00 00 00[fa]fa fa fa fa fa fa fa
  0x0c207fff8fa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c207fff8fb0: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
  0x0c207fff8fc0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c207fff8fd0: 00 00 00 00 00 00 00 00 fa fa fa fa fa fa fa fa
  0x0c207fff8fe0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
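The trace above shows a 4-byte read exactly at the end of a 128-byte pixel region, the usual signature of a scan loop that runs one pixel past the row it fetched. A hedged sketch of that off-by-one class (illustrative names and types, not ImageMagick's actual code):

```c
#include <stddef.h>

/* Hypothetical sketch of the bug class (not ImageMagick's actual
 * code): scanning a row of pixels for grayness.  Writing the loop
 * bound as `x <= width` would read one full pixel past the fetched
 * region, matching the "READ of size 4" exactly at the end of the
 * pixel buffer in the trace; `x < width` is the fix. */
static int count_gray(const float *pixels, size_t width, size_t channels)
{
    int gray = 0;
    for (size_t x = 0; x < width; x++) {       /* not x <= width */
        const float *p = pixels + x * channels;
        if (p[0] == p[1] && p[1] == p[2])      /* R == G == B */
            gray++;
    }
    return gray;
}
```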

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-12-01: bug re-discovered and reported to upstream
2016-12-01: blog post about the issue
2016-12-02: upstream released a patch

This bug was found with American Fuzzy Lop.


imagemagick: heap-based buffer overflow in IsPixelGray (pixel-accessor.h) (Incomplete fix for CVE-2016-9556)

Libav is an open source set of tools for audio and video processing.

Fuzzing the updated stable release with the Undefined Behavior Sanitizer enabled revealed multiple crashes. As of the date I’m releasing this post, upstream hasn’t given any response or feedback.

All issues are reproducible with:

avconv -i $FILE -f null -

More details about:

Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo.c:2381:65: runtime error: left shift of negative value -1
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo.c:2382:65: runtime error: left shift of negative value -1
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo.c:2383:65: runtime error: left shift of negative value -1
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo_motion.c:323:47: runtime error: left shift of negative value -1
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo_motion.c:331:55: runtime error: left shift of negative value -1
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo_motion.c:336:55: runtime error: left shift of negative value -1
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpegvideo_parser.c:91:65: runtime error: signed integer overflow: 28573696 * 400 cannot be represented in type ‘int’
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/mpeg12dec.c:1401:41: runtime error: signed integer overflow: 28573696 * 400 cannot be represented in type ‘int’
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/x86/mpegvideo.c:53:18: runtime error: index -1 out of bounds for type ‘uint8_t [64]’
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libswscale/x86/swscale.c:189:64: runtime error: signed integer overflow: 65463 * 65537 cannot be represented in type ‘int’
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libswscale/utils.c:340:30: runtime error: left shift of negative value -1
Commit fix:
Fixed version:


Affected version / Tested on:

Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/ituh263dec.c:645:34: runtime error: left shift of negative value -16
Commit fix:
Fixed version:


Affected version / Tested on:
/tmp/portage/media-video/libav-11.8/work/libav-11.8/libavcodec/get_bits.h:530:5: runtime error: load of null pointer of type ‘int16_t’ (aka ‘short’)
Commit fix:
Fixed version:
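Most of the reports above fall into two classes: "left shift of negative value", which is undefined behavior in C, and "signed integer overflow" in an int multiply. A minimal sketch of both problems and their usual fixes (my own illustration, not libav's actual patches):

```c
#include <stdint.h>

/* "left shift of negative value" is undefined behavior in C
 * (C11 6.5.7); the usual fix is to shift as unsigned and convert
 * back, which keeps the intended two's-complement result. */
static int shl_signed(int v, unsigned n)
{
    return (int)((unsigned)v << n);
}

/* "signed integer overflow: 28573696 * 400 cannot be represented in
 * type 'int'": widening one operand makes the multiply happen in
 * 64 bits, where the product fits. */
static int64_t mul_wide(int a, int b)
{
    return (int64_t)a * b;
}
```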

These bugs were discovered by Agostino Sarubbo of Gentoo.

2016-11-08: bug discovered and reported to upstream
2016-12-01: blog post about the issue

These bugs were found with American Fuzzy Lop.


libav: multiple crashes from the Undefined Behavior Sanitizer

November 29, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Service Function Chaining demo with devstack (November 29, 2016, 14:09 UTC)

After a first high-level post, it is time to actually show networking-sfc in action! Based on a documentation example, we will create a simple demo, where we route some HTTP traffic through some VMs, and check the packets on them with tcpdump:

SFC demo diagram

This will be hosted on a single node devstack installation, and all VMs will use the small footprint CirrOS image, so this should run on “small” setups.

Installing the devstack environment

On your demo system (I used Centos 7), check out devstack on the Mitaka branch (remember to run devstack as a sudo-capable user, not root):

[stack@demo ~]$ git clone -b stable/mitaka

Grab my local configuration file that enables the networking-sfc plugin, rename it to local.conf in your devstack/ directory.
If you prefer to adapt your current configuration file, just make sure your devstack checkout is on the mitaka branch, and add the SFC parts:
enable_plugin networking-sfc

Then run the usual “./” command, and go grab a coffee.

Deploy the demo instances

To speed this step up, I regrouped all the following items in a script. You can check it out (at a tested revision for this demo):
[stack@demo ~]$ git clone -b sfc_mitaka_demo

The script will:

  • Configure security (disable port security, set a few things in the security groups, create an SSH key pair)
  • Create source, destination systems (with a basic web server)
  • Create service VMs, configuring the network interfaces and static IP routing to forward the packets
  • Create the SFC items (port pair, port pair group, flow classifier, port chain)

I highly recommend reading it; it is mostly straightforward and commented, and it is where most of the interesting commands are hidden. So have a look before running it:
[stack@demo ~]$ ./openstack-scripts/
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
Updated network: private
Created a new port:

route: SIOCADDRT: File exists
WARN: failed: route add -net "" gw ""
You can safely ignore the route errors at the end of the script (they are caused by a duplicate default route on the service VMs).

Remember, from now on, to source the credentials file in your current shell before running CLI commands:
[stack@demo ~]$ source ~/devstack/openrc demo demo

We first get the IP addresses for our source and destination demo VMs:
[vagrant@defiant-devstack ~]$ openstack server show source_vm -f value -c addresses; openstack server show dest_vm -f value -c addresses

private=, fd73:381c:4fa2:0:f816:3eff:fe65:12fd

Now, we look for the tap devices associated with our service VMs:
[stack@demo ~]$ neutron port-list -f table -c id -c name

| name           | id                                   |
| p1in           | 897df85a-26c3-4491-888e-8cc58f19cea1 |
| p1out          | fa838294-317d-46df-b10e-b1734dd62faf |
| p2in           | c86dafc7-bda6-4537-b806-be2282f7e11e |
| p2out          | 12e58ea8-a9ab-4d0b-9fd7-707dc6e99f20 |
| p3in           | ee14f406-e9d6-4047-812b-aa04514f50dd |
| p3out          | 2d86403b-4639-40a0-897e-68fa0c759f01 |

These device names follow the tap<first 11 characters of the port ID> pattern, so for example tap897df85a-26 is the tap device associated with the p1in port here.

See SFC in action

In this example we run a request loop from client_vm to dest_vm (remember to use the IP addresses found in the previous section):
[stack@demo ~]$ ssh cirros@
$ while true; do curl; sleep 1; done
Welcome to dest-vm
Welcome to dest-vm
Welcome to dest-vm

So we do have access to the web server! But do the packets really go through the service VMs? To confirm that, in another shell, run tcpdump on the tap interfaces:

# On the outgoing interface of VM 3
$ sudo tcpdump port 80 -i tap2d86403b-46
tcpdump: WARNING: tap2d86403b-46: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap2d86403b-46, link-type EN10MB (Ethernet), capture size 65535 bytes
11:43:20.806571 IP > Flags [S], seq 2951844356, win 14100, options [mss 1410,sackOK,TS val 5010056 ecr 0,nop,wscale 2], length 0
11:43:20.809472 IP > Flags [.], ack 3583226889, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 0
11:43:20.809788 IP > Flags [P.], seq 0:136, ack 1, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 136
11:43:20.812226 IP > Flags [.], ack 39, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 0
11:43:20.817599 IP > Flags [F.], seq 136, ack 40, win 3525, options [nop,nop,TS val 5010059 ecr 5008746], length 0

Here are some other examples (skipping the tcpdump output for clarity):
# You can check other tap devices, confirming both VM 1 and VM2 get traffic
$ sudo tcpdump port 80 -i tapfa838294-31
$ sudo tcpdump port 80 -i tap12e58ea8-a9

# Now we remove the flow classifier, and check the tcpdump output
$ neutron port-chain-update --no-flow-classifier PC1
$ sudo tcpdump port 80 -i tap2d86403b-46 # Quiet time

# We restore the classifier, but remove the group for VM3, so tcpdump will only show traffic on other VMs
$ neutron port-chain-update --flow-classifier FC_demo --port-pair-group PG1 PC1
$ sudo tcpdump port 80 -i tap2d86403b-46 # No traffic
$ sudo tcpdump port 80 -i tapfa838294-31 # Packets!

# Now we remove VM1 from the first group
$ neutron port-pair-group-update PG1 --port-pair PP2
$ sudo tcpdump port 80 -i tapfa838294-31 # No more traffic
$ sudo tcpdump port 80 -i tap12e58ea8-a9 # Here it is

# Restore the chain to its initial demo status
$ neutron port-pair-group-update PG1 --port-pair PP1 --port-pair PP2
$ neutron port-chain-update --flow-classifier FC_demo --port-pair-group PG1 --port-pair-group PG2 PC1

Where to go from here

Between these examples, the commands used in the demo script, and the documentation, you should have enough material to try your own commands! So have fun experimenting with these VMs.

Note that in the meantime we released the Newton version (3.0.0), which also includes the initial OpenStackClient (OSC) interface, so I will probably update this to run on Newton and with some shiny “openstack sfc xxx” commands. I also hope to make a nicer-than-tcpdumping-around demo later on, when time permits.

November 22, 2016
metapixel: multiple assertion failures (November 22, 2016, 16:49 UTC)

metapixel is a program for generating photomosaics.

Fuzzing metapixel-imagesize revealed multiple assertion failures. The latest upstream release was about ten years ago, so I didn’t file any report. The bugs do not reside in any shared object provided by the package, but if you have a web application that relies on the metapixel-imagesize binary, then you are affected. Since the crashes reside in the command line tool, they may not warrant a CVE at all, but some distros and packagers may want to fix the bugs in their repositories, so I’m sharing them.

Affected version:
metapixel-imagesize: rwgif.c:59: void *open_gif_file(const char *, int *, int *): Assertion `data->file !=0′ failed.
Commit fix:
Fixed version:


Affected version:
metapixel-imagesize: rwgif.c:63: void *open_gif_file(const char *, int *, int *): Assertion `DGifGetRecordType(data->file, &record_type) != 0′ failed.
Commit fix:
Fixed version:


Affected version:
metapixel-imagesize: rwgif.c:68: void *open_gif_file(const char *, int *, int *): Assertion `DGifGetImageDesc(data->file) != 0′ failed.
Commit fix:
Fixed version:


Affected version:
metapixel-imagesize: rwgif.c:102: void *open_gif_file(const char *, int *, int *): Assertion `DGifGetExtension(data->file, &ext_code, &ext) != 0′ failed.
Commit fix:
Fixed version:


Affected version:
metapixel-imagesize: rwgif.c:106: void *open_gif_file(const char *, int *, int *): Assertion `DGifGetExtensionNext(data->file, &ext) != 0′ failed.
Commit fix:
Fixed version:

These bugs were discovered by Agostino Sarubbo of Gentoo.

2016-11-22: bugs discovered
2016-11-22: blog post about the issues

These bugs were found with American Fuzzy Lop.


metapixel: multiple assertion failures

metapixel is a program for generating photomosaics.

Fuzzing metapixel-imagesize revealed an overflow. The latest upstream release was about ten years ago, so I didn’t file any report. The bug does not reside in any shared object provided by the package, but if you have a web application that relies on the metapixel-imagesize binary, then you are affected. Since it is only a “READ of size 1”, it may not warrant a CVE at all, but some distros and packagers may want to fix the bug in their repositories, so I’m sharing it.

The complete ASan output:

# metapixel-imagesize $FILE
==24883==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eff9 at pc 0x00000050edcf bp 0x7ffce3891f90 sp 0x7ffce3891f88
READ of size 1 at 0x60200000eff9 thread T0
    #0 0x50edce in open_gif_file /tmp/portage/media-gfx/metapixel-1.0.2-r1/work/metapixel-1.0.2/rwimg/rwgif.c:132:60
    #1 0x50a4cd in open_image_reading /tmp/portage/media-gfx/metapixel-1.0.2-r1/work/metapixel-1.0.2/rwimg/readimage.c:88:9
    #2 0x50a18b in main /tmp/portage/media-gfx/metapixel-1.0.2-r1/work/metapixel-1.0.2/imagesize.c:37:14
    #3 0x7fcc5c3a861f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #4 0x41a1d8 in _init (/usr/bin/metapixel-imagesize+0x41a1d8)

0x60200000eff9 is located 3 bytes to the right of 6-byte region [0x60200000eff0,0x60200000eff6)
allocated by thread T0 here:
    #0 0x4d3195 in calloc /tmp/portage/sys-devel/llvm-3.9.0-r1/work/llvm-3.9.0.src/projects/compiler-rt/lib/asan/
    #1 0x7fcc5d267392 in GifMakeMapObject /tmp/portage/media-libs/giflib-5.1.4/work/giflib-5.1.4/lib/gifalloc.c:55

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-gfx/metapixel-1.0.2-r1/work/metapixel-1.0.2/rwimg/rwgif.c:132:60 in open_gif_file
Shadow bytes around the buggy address:
  0x0c047fff9da0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9db0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dc0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9dd0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9de0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9df0: fa fa fa fa fa fa fa fa fa fa 00 fa fa fa 06[fa]
  0x0c047fff9e00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff9e40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
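The trace above shows a 1-byte read just past a small region allocated by giflib's GifMakeMapObject, which typically means a pixel value is used as a palette index without being checked against the colormap size. A hedged sketch of the class (illustrative names, not metapixel's actual code):

```c
#include <stddef.h>

struct colormap {
    int count;                 /* number of palette entries */
    const unsigned char *rgb;  /* count * 3 bytes (R, G, B per entry) */
};

/* Hypothetical sketch of the bug class (not metapixel's actual
 * code): a GIF pixel value is used as a palette index, so a
 * malformed file carrying an out-of-range index reads past the
 * small giflib-allocated colormap.  The range check below is the
 * missing piece. */
static unsigned char red_for_index(const struct colormap *map, int idx)
{
    if (idx < 0 || idx >= map->count)  /* the missing validation */
        idx = 0;                       /* clamp instead of OOB read */
    return map->rgb[idx * 3];
}
```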

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.


2016-11-22: bug discovered
2016-11-22: blog post about the issue

This bug was found with American Fuzzy Lop.


metapixel: heap-based buffer overflow in open_gif_file (rwgif.c)

November 21, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.3 (November 21, 2016, 12:40 UTC)

Ok, I slacked by not posting for v3.1 and v3.2, and I should have, since those previous versions were awesome and feature-rich.

But v3.3 is another major milestone, made possible by tremendous contributions from @tobes as usual, and also thanks to the hard work of @guiniol and @pferate, whom I'd like to mention and thank again!

Also, I’d like to mention that @tobes has become the first collaborator of the py3status project!

Instead of doing a changelog review, I’ll highlight some of the key features that got introduced and extended during those versions.

The py3 helper

Writing powerful py3status modules has never been so easy, thanks to the py3 helper!

This magical object is added automatically to modules and provides a lot of useful methods to help normalize and enhance module capabilities. Here is a non-exhaustive list of such methods:

  • format_units: pretty-print units (KB, MB, etc.)
  • notify_user: send a notification to the user
  • time_in: handle module cache expiration easily
  • safe_format: use the extended formatter to handle the module’s output in a powerful way (see below)
  • check_commands: check if the listed commands are available on the system
  • command_run: execute the given command
  • command_output: execute a command and get its output
  • play_sound: play sound notifications!
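To illustrate, a minimal module skeleton using a couple of these helpers might look like the sketch below. The module name, format string and values are hypothetical; `safe_format` and `time_in` are the helper methods listed above, and py3status injects the `py3` helper as `self.py3` at runtime:

```python
# Hypothetical py3status module sketch: shows the conventional shape of a
# module class. Values here are placeholders; a real module would measure.
class Py3status:
    # user-configurable parameters (names made up for this example)
    format = "disk {used}/{total} GB"
    cache_timeout = 10

    def disk_usage(self):
        used, total = 42, 100  # placeholder values
        return {
            # safe_format runs the extended formatter over the format string
            'full_text': self.py3.safe_format(
                self.format, {'used': used, 'total': total}),
            # time_in returns a timestamp cache_timeout seconds in the future
            'cached_until': self.py3.time_in(self.cache_timeout),
        }
```

The returned dict follows the usual py3status convention of a 'full_text' string plus a 'cached_until' expiry timestamp.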

Powerful control over the modules’ output

Using the self.py3.safe_format helper will unleash a feature-rich formatter that one can use to conditionally select the output of a module based on its content.

  • Square brackets [] can be used, and nested; their content will be removed from the output if no valid placeholder is contained within.
  • A pipe (vertical bar) | can be used to divide sections; only the first valid section will be shown in the output.
  • A backslash \ can be used to escape a character, e.g. \[ will show [ in the output.
  • \? is special and is used to pass extra commands to the format string, e.g. \?color=#FF00FF. Multiple commands can be given using an ampersand & as a separator, e.g. \?color=#FF00FF&show.
  • {<placeholder>} will be converted, or removed if it is None or empty. Formatting can also be applied to the placeholder, e.g. {number:03.2f}.

Example format_string:

This will show artist - title if artist is present, title if title is present but not artist, and file if file is present but neither artist nor title.

"[[{artist} - ]{title}]|{file}"
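The bracket and pipe semantics above can be sketched as a tiny re-implementation. This is for illustration only; the real formatter behind self.py3.safe_format is far more capable (\? commands, pipes nested inside brackets, placeholder format specs, and so on):

```python
# Simplified sketch of the formatter rules described above.
def render(fmt, values):
    """Render one section; return (text, section_has_a_valid_placeholder)."""
    out, valid, i = [], False, 0
    while i < len(fmt):
        c = fmt[i]
        if c == '\\':                      # escaped character, e.g. \[
            out.append(fmt[i + 1]); i += 2
        elif c == '[':                     # [...] kept only if valid inside
            depth, j = 1, i + 1
            while depth:                   # find the matching bracket
                depth += {'[': 1, ']': -1}.get(fmt[j], 0)
                j += 1
            text, ok = render(fmt[i + 1:j - 1], values)
            if ok:
                out.append(text); valid = True
            i = j
        elif c == '{':                     # {placeholder}
            j = fmt.index('}', i)
            val = values.get(fmt[i + 1:j])
            if val not in (None, ''):
                out.append(str(val)); valid = True
            i = j + 1
        else:
            out.append(c); i += 1
    return ''.join(out), valid

def mini_safe_format(fmt, values):
    # Simplification: treats every | as a top-level section divider.
    for section in fmt.split('|'):         # first valid section wins
        text, ok = render(section, values)
        if ok:
            return text
    return ''
```

Running the example format string through it reproduces the behaviour described: with artist and title set it yields "artist - title", with only title it yields the title, and with only file it falls through to the second section.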

More code and documentation tests

A lot of effort has been put into py3status’ automated CI and feature testing, allowing more confidence in the advanced features we develop while keeping a higher standard of code quality.

So much so that even modules’ docstrings are now tested for bad formatting 🙂

Colouring and thresholds

A special effort has been put into normalizing modules’ output colouring, with the added refinement of normalized thresholds to give users more power over their output.

New modules, on and on !

  • new clock module to display multiple times and dates in a flexible way, by @tobes
  • new coin_balance module to display balances of diverse crypto-currencies, by Felix Morgner
  • new diskdata module to show both usage data and IO data from disks, by @guiniol
  • new exchange_rate module to check for your favorite currency rates, by @tobes
  • new file_status module to check the presence of a file, by @ritze
  • new frame module to group and display multiple modules inline, by @tobes
  • new gpmdp module for Google Play Music Desktop Player by @Spirotot
  • new kdeconnector module to display information about Android devices, by @ritze
  • new mpris module to control MPRIS enabled music players, by @ritze
  • new net_iplist module to display interfaces and their IPv4 and IPv6 IP addresses, by @guiniol
  • new process_status module to check the presence of a process, by @ritze
  • new rainbow module to brighten your day, by @tobes
  • new tcp_status module to check for a given TCP port on a host, by @ritze


The changelog is very big, and the next 3.4 milestone is very promising, with amazing new features giving you even more power over your i3bar. Stay tuned!

Thank you contributors

Still a lot of first-time contributors, which I take great pride in, as I see it as a sign that py3status is an accessible project.

  • @btall
  • @chezstov
  • @coxley
  • Felix Morgner
  • Gabriel Féron
  • @guiniol
  • @inclementweather
  • @jakubjedelsky
  • Jan Mrázek
  • @m45t3r
  • Maxim Baz
  • @pferate
  • @ritze
  • @rixx
  • @Spirotot
  • @Stautob
  • @tjaartvdwalt
  • Yuli Khodorkovskiy
  • @ZeiP

November 20, 2016

jasper is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.

An intensive fuzzing run against version 1.900.22 with a crafted image revealed a stack-based buffer overflow.

The complete ASan output:

# imginfo -f $FILE
warning: trailing garbage in marker segment (9 bytes)
warning: trailing garbage in marker segment (28 bytes)
warning: trailing garbage in marker segment (40 bytes)
warning: ignoring unknown marker segment (0xffee)
type = 0xffee (UNKNOWN); len = 23;1f 32 ff ff ff 00 10 00 3d 4d 00 01 32 ff 00 e4 00 10 00 00 4f warning: trailing garbage in marker segment (14 bytes)
==9166==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7faf2e200c20 at pc 0x7faf320a985a bp 0x7ffd397b9b10 sp 0x7ffd397b9b08
WRITE of size 4 at 0x7faf2e200c20 thread T0
    #0 0x7faf320a9859 in jpc_tsfb_getbands2 /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_tsfb.c:227:16
    #1 0x7faf320a9009 in jpc_tsfb_getbands2 /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_tsfb.c:223:3
    #2 0x7faf320a8b9f in jpc_tsfb_getbands /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_tsfb.c:187:3
    #3 0x7faf3200eaa6 in jpc_dec_tileinit /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_dec.c:714:4
    #4 0x7faf3200eaa6 in jpc_dec_process_sod /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_dec.c:560
    #5 0x7faf3201c1c3 in jpc_dec_decode /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_dec.c:391:10
    #6 0x7faf3201c1c3 in jpc_decode /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_dec.c:255
    #7 0x7faf31f7e684 in jas_image_decode /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/base/jas_image.c:406:16
    #8 0x509c9a in main /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/appl/imginfo.c:203:16
    #9 0x7faf3108761f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #10 0x419988 in _init (/usr/bin/imginfo+0x419988)

Address 0x7faf2e200c20 is located in stack of thread T0 at offset 3104 in frame
    #0 0x7faf3200dbbf in jpc_dec_process_sod /tmp/portage/media-libs/jasper-1.900.22/work/jasper-1.900.22/src/libjasper/jpc/jpc_dec.c:544

  This frame has 1 object(s):
    [32, 3104) 'bnds.i'
  0x0ff665c38180: 00 00 00 00[f3]f3 f3 f3 f3 f3 f3 f3 f3 f3 f3 f3
  0x0ff665c38190: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff665c381a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff665c381b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff665c381c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff665c381d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb

Affected version:

Fixed version:

Commit fix:

This bug was discovered by Agostino Sarubbo of Gentoo.



2016-11-09: bug discovered and reported to upstream
2016-11-20: upstream released a patch
2016-11-20: blog post about the issue
2016-11-23: CVE assigned

This bug was found with American Fuzzy Lop.


jasper: stack-based buffer overflow in jpc_tsfb_getbands2 (jpc_tsfb.c)

November 17, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Book Review: Life Nomadic (November 17, 2016, 16:04 UTC)

I think it’s fitting that I’m starting the review of this book while sitting in the AirFrance lounge at Charles de Gaulle Airport, coming back from a four days trip to Paris to see Video Games Live playing at Le Grand Rex.

Life Nomadic was one of a set of suggestions that my dentist, of all people, gave me. Since the book is short, and it was available on Kindle Unlimited, I thought I’d start with it, even though it was possibly the one on the list I was least interested in. Turns out my instincts were right, and I shouldn’t have read it at all.

The premise of the book may be interesting, and depending on the cover you see for it, it might be what catches your eye: How to travel the world for less than you pay in rent. I find this quite the clickbait (what do you call clickbait on a book cover?) because that’s not what it talks about at all; it’s not just about travelling the world, the author argues for a complete overhaul of your life to be able to do so, and that, in my opinion, is myopic to say the least.

It might sound “trendy” to say this, but the book clearly reeks of white privilege — while the author never mentions that directly, it becomes very clear by oblique references that he’s white, and he’s clearly male, he says that at the beginning. He’s also healthy, and he’s insisting that this is thanks to his diet, rather than having won the health lottery and having grown up in a rich, healthy environment.

Indeed, the whole premise of the book is that if you’re privileged, in one way or another, travelling around the world is easy. But until you get to the end, when he’s talking about what to do with work (again in a fairly myopic way), you don’t realize that all his suggestions hinge on one important node: you have to be able to risk your job.

Forget about following the advice of this book if (and this is an incomplete list, I’m sure):

  • you have a medical condition, light or heavy, that requires you have a relationship with a medical professional — I find it difficult to make appointments with my diabetologist while travelling for work, and with a relatively fixed schedule; if I were to follow his advice of just taking whichever next flight comes to your mind, I would probably be half-dead with untreated diabetes;
  • you are not American or European, as he’s ignoring all the difficulties of getting visas for most other nationalities; it is true that if you have an American or European passport getting visas for most of the world is just triviality and spending some money, it is less the case if you have other nationalities — and in some cases, it might actually get you outright in trouble;
  • you are not a white male, for whichever part of the world — the book starts with the author recounting doing something quite illegal and being given a slap on the wrist by the authorities; while it is true that I’d fear American police more than the rest of the world right now, plenty of stories from friends and acquaintances tell me this is a privilege; the risk of jail time is real if you’re not white even in Europe, let alone what may happen in some random country in which you happen to be one of the most obscure minorities;
  • your work actually requires presence, or timing, or any kind of (even flexible) schedule — the author does not quite specify what he works on, except by describing himself as a “die hard entrepreneur”; he says he started by writing some software to sell, and points out he was mostly living off royalties of a previous book he published; I’ll get back to this;
  • you have any family ties at all — a relative, parent, close friend that is ill, or that you support directly or indirectly, as a lot of the talk in this book relies on how “cheap” (compared to US dollars) is the life in many developing countries.

To expand a little bit on how myopic his advice is, I’ll also point out how it can’t even apply to me, and I’m a well-off, single, straight, white guy with (loose) family ties. Even seeing my doctor three times a year (which is not much), I have to carry with me a significant amount of “paraphernalia” for my diabetes: pills, insulin, needles, glucometer, etc. This by itself makes it almost impossible to just spend months at a time without a fixed schedule allowing me to refill them. Not only would some of those medications not be available in some parts of the world at all, but even where they are, they come at a significant cost, and I’m not even factoring in the effects of the craziness of the US insurance system on drug prices. Besides, those things don’t really travel well. I have a refrigerated pack for my insulin, and luckily I never had trouble getting through airports before, but I have heard horror stories about insulin pumps and metal detectors. Even the Libre’s simple sensor managed to get me a stern questioning by the Nice airport security guards (and if you want to know, that was before the terrorist attack.)

While at it, I would like to present a thank you to AirFrance; their lounges at CDG are the only ones I’ve seen, up to now, that make it welcoming to take insulin: they have a sharps container in the bathroom, so you don’t have to ask for it (possibly embarrassingly for some people.) They also have signs on their aircrafts pointing you at their cabin crew for the container, which I’ve done before when flying back from Japan.

A particular note I’ll spend on the work section I noted above. As I said the guy defines himself as an entrepreneur, and if you have spent as much time as me around the Silicon Valley crowd, you can easily recognize the type in the book, even when they are from Austin, instead. He’s the kind of person who made “techie” into a bad word. The whole section about “Earning Money as You Travel” takes no consideration of workers outside of “our” (damn, I don’t want to be associated with this guy’s peers!) industry. The first suggestion is to start a business — well, I know how that goes, and it’s not easy at all, indeed it can only work well if you have capital to invest on it to begin with, which was a big problem for me when I did, because I had none, this guy clearly had since early on (as he goes on to say how he used to order random crap off Internet just for the giggles.)

The other suggestion is to go on contracting, or being a remote worker, suggesting that you may work on websites or software, or that (and I quote) «many office jobs can translate into contractor work» which to me sound like this guy has never seen an office worker outside of tech. And even within tech, he clearly has no idea what he’s talking about (emphasis mine):

Jobs that are particularly conducive to going mobile are jobs that require minimal interaction with others, like writing, editing, programming, graphic design and system administration.

If you have ever tried doing system administration remote, you would know right away that this is clearly bullshit — if you have no idea what the customer is doing, and how they are doing it, you are a horrible system administrator. Yes, you can work remote, but there is no way that they only require “minimal interaction”: they need plenty of interaction, maybe even more so when doing it remote.

Want to have even more fun? Again emphasis mine.

Figure out which program you’d like to become proficient at. Buy the software, buy the great tutorials to learn it from [omissis], and spend some time practicing. The money you’ve amassed from selling everything should easily last you through the transition period.

Yes, he did argue selling everything you own at the beginning of the book. He seems to think that you can do that, and amass money, probably while living in an RV and working from the tables of a closed-up restaurant in-between lunch and dinner, as he did. And that should be enough to learn something new you never did before. I’m sure this guy would have been able to, if he wasn’t lucky (because it’s all a matter of luck.)

Okay, enough with the usual SV-style no-work-is-important-but-tech part, let’s see if there is any value in this book otherwise.

He does have a significant amount of useful information on the actual travelling part, although some of it is clearly only important to note to Americans, such as the viability of train travel in Europe. He has good suggestions for the use of ferries (that I may actually try once in a lifetime too, because I am indeed lucky, and despite disliking travel, taking a week or so to traverse the ocean without a flight sounds cool to do.) Unfortunately that makes up only about half of the book.

He’s got a good list of suggestions about gear, too — although a lot of it is already part of privilege: most of the options are bloody expensive and not something you can even consider without an injection of capital at the “transition”, as he calls it. If you are lucky and privileged it might be worth a look; I have been particularly tempted by Smartwool, especially their wool socks, as my diabetes makes it more likely I get blisters if my feet get wet — but before those even got to me, I ended up buying a pair of tights in Paris, because it was very cold while I was there, and The North Face store in the city stocks Smartwool too; they are significantly expensive, but also much more comfortable than others I tried before.

His suggestions for services are also out of touch — among others he suggests the American Express Platinum card, which, admittedly, is a very useful card for a traveller lucky enough to afford one. Not only does this ignore the fact that not everybody can get a credit card, but also that it’s not just the price that makes American Express an elitist card: it is effectively limitless (or rather, has a very flexible limit), which means its credit score requirements can be significantly higher.

In the book he points at his website for a list of gear — which sounded like a cool idea; I do something similar myself for my hardware. But the link is now dead, which is a shame, because that might have been the only useful thing he could have done for the public. Too bad.

Finally, some of his suggestions are downright unethical, including abusing airfare rules to enter lounges he should not have access to (although I hope he took a shower while there, because one of the most annoying things while travelling is a well-off traveller smelling like goats for a four-hour flight.) And his and my points of view are clearly at odds on the general ethical side, too:

[speaking about airfare systems] I like systems like this — they reward the smart and determined at the cost of the lazy or ignorant.

I would rephrase it as “They reward the lucky elite at the cost of the otherwise busy masses.” But clearly my belief system and his are very different. It should not be a surprise, then, that the book the guy got his money from, and that allowed him to start, sounds like a PUA title (or, in his words, “a book on dating for men”).

Travel should widen your horizons — it’s very hard for it not to — but my feeling is that this guy has been looking at the world as if he deserves all of it. Rather than being empathetic to the condition of others who might not have his privilege, he pours contempt on them: friends locked in their lives (whether by choice or lack of opportunities), people living in their own home countries, and even the readers of this book.

Final result: not a fan. I’ll actually synthesise this review in a form that is acceptable to Amazon and GoodReads and post it as a warning.

November 14, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The overengineering of ALSA userland (November 14, 2016, 11:04 UTC)

This is a bit of an interesting corner case of a rant. I did not write this when I came up with it, because I came up with it many years ago when I actively worked on multimedia software, and at the time it would have gained too much unwanted attention from random people — the same kind of people who might have threatened me for removing XMMS from Gentoo so many years ago; I have only given it in person to a few people before. I have, though, spoken about this with at least one of the people working on PulseAudio at the time, and I have repeated it at the office a few times when the topic came up.

For context you may want to read this rant from almost ten years ago by Mike Melanson, who was at the time working for Adobe on Flash Player for Linux. It’s a bit unfortunate that the drawings from the post are missing (but maybe Mike has a copy?), but the whole gist is that the Linux audio APIs were already bloody confusing at the time, and this was before PulseAudio came along to stay. So where are we right now?

Well, the good news is that for the most part things got simpler: aRTs and ESounD are now completely gone, eradicated in favour of PulseAudio, which is essentially the only consumer sound daemon currently in use. Jack2 is still the standard for the pro-audio crowd, but even those people seem to have accepted that multimedia players are unlikely to care for it, and that it should be limited to pro-audio software. On the kernel driver side, the once fairly important out-of-kernel drivers are effectively gone, in favour of development happening in a separate branch of the Linux kernel itself (Git was not a thing at the time — oh how things have changed!), and OSS is effectively gone. I don’t even know if it’s still available in the kernel, but the OSS4 fanboys have been quiet for long enough that I assume they gave up too.

ALSA itself hasn’t really changed much in all this time, either in the kernel or in userland. In the kernel, it got more complex to support things like jack sense, as HDA started supporting soft-switching between speaker and headphone output. In userland, the plugin interface that was barely known before is now a requirement to properly use PulseAudio, both in Gentoo and in most other distributions. Which effectively makes my rant not only still relevant, but possibly more relevant. But before I go into details, I should take a step back and explain what the whole thing with userland and drivers is with ALSA. I’ll try to simplify the history and the details, so if you know this very well you may notice I skip some details, but nobody really cares that much about those.

The ALSA project was born back when Linux was in version 2.4 — and unlike today, that version was the current version for a long time. Indeed, up until version 3.0, a “minor” version would just be around forever; the migration from 2.4 to 2.6 was a massive amount of work and took distributions, developers and users alike a lot of coordination. In Linux 2.4, the audio drivers were based off the OSS interface, which essentially meant you had /dev/dspX and /dev/mixerX, and you were done — most of the time mixer0 matched a number of dspX devices, and most devices would have input and output capabilities, but that’s about all you knew. Access to the device was almost always exclusive to one process, except if the soundcard had multiple hardware mixer channels, in which case you could open the device multiple times. If you needed processes to share the device, your only option was to use a daemon such as the already-named aRTs or ESounD. The ALSA project aimed to replace the OSS interface (which by then had become a piece of proprietary software in its newer versions) with a new, improved interface in the following “minor” version (2.5, which stabilized as 2.6), as well as on the old one through additional kernel modules — the major drawback, from my point of view, is that this new interface became Linux-specific, while OSS has been (and is) supported by most of the BSDs as well. But sometimes you have to do this anyway.

The ALSA approach provides a much more complex device API, but mostly for good reason, because sound cards are (or were) complex interfaces, and are not at all consistent among themselves. To make things simpler for application developers, who previously only had to use open() and similar functions, ALSA provided a userland library, shipped in a package called alsa-lib but more often known by its filename: libasound. While the interface of the library is not simple either, it does provide a bit of wrapping around the otherwise very low-level APIs. It also abstracts away some of the problems of figuring out which cards are present and which mixer refers to which. The project also provided a number of tools and utilities to configure the devices, query for information or play back raw sound — and even a wrapper for applications implementing only OSS access, in the form of a preloadable library catching accesses to /dev/dsp and converting them to ALSA API calls — not different from the similar utilities provided by aRTs, ESounD or PulseAudio.

In the original ALSA model, access to the device was still limited to one process per channel, but as soundcards with more than one hardware channel quickly became obsolete (particularly as soundcards kind-of standardized on AC’97, then HDA), the need for shared access arose again, and since both aRTs and ESounD had their limits (and PulseAudio was far from ready), the dmix interface arrived — in this setup, the first process opening the device would actually have access to it, as well as set up a shared memory area for other processes to provide their audio, which would then be mixed together in userland, specifically in the process space of the first process that opened the device. This had all sorts of problems, particularly when sharing across users, or when sharing with processes that only used sound for a limited amount of time.

What dmix actually used was the ability of ALSA to provide “virtual” devices, which can be configured for alsa-lib to see. Another feature that got more spotlight thanks to the lowered featureset of soundcards, particularly with the HDA standard, is the ability to provide plugins extending the functionality of alsa-lib — for a while the most important one was clearly the libsamplerate-based resampling plugin, which almost ten years ago was the only way to get non-crackling sound out of an HDA soundcard. These plugins included other features, such as a plugin providing a virtual device for encoding to Dolby AC-3, so that you could use S/PDIF pass-through to a surround decoder. Nowadays, the really important plugin is the PulseAudio one, which allows any ALSA-compatible application to talk to PulseAudio by configuring a default virtual device.

Okay, now that the history lesson is complete, let me write down what I think is the problem with our current, modern setup. I’ll exclude from my discussion pro-audio workstations, as these have clearly different requirements from “mainstream” use and would most likely still argue (from a different angle) that the current setup is overengineered. I’ll also exclude most embedded devices, including Android, since I don’t think PA ever won over the phone manufacturers outside of Nokia — although I would expect that a lot of them actually do rely on PulseAudio a bit, in which case the discussion would apply.

In a current Linux desktop, your multimedia applications fall into two main categories: those that implement PulseAudio support and those that implement ALSA support. They may use some wrapper library such as SDL, but at the end of the day, these are the two APIs that allow you to output sound on modern Linux. The few rare cases of (probably proprietary) apps implementing only OSS can be ignored, as they would then use aoss or padsp to preload the right library and support whichever stack you prefer. Whichever distribution you’re using, both of these classes of apps are extremely likely to end up going out of your speakers through PulseAudio. If the app only supports ALSA, the distribution is likely providing a configuration file so that the default ALSA device is a virtual device pointing at the PulseAudio plugin.
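For reference, that distribution-provided configuration is typically just a couple of stanzas in /etc/asound.conf (or the per-user ~/.asoundrc), routing ALSA’s default PCM and control devices through the PulseAudio plugin from alsa-plugins — a common sketch, though the exact file and details vary per distribution:

```
# Route ALSA's default devices through the alsa-plugins PulseAudio plugin
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```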

When the app talks to PulseAudio directly, it’ll use its API through the client library, which then IPCs through its custom protocol to the PulseAudio daemon, which will then use alsa-lib through its API, ignoring all the configured virtual devices, and which in turn will talk to the kernel drivers through their device files. It’s a bit different for Bluetooth devices, but you get the gist. This at first sight should sound just fine.

If you look at an app that only supports the ALSA interface, it’ll use the alsa-lib API to talk to the default device, which uses the PulseAudio client library to IPC to the PulseAudio daemon, and so on as above. In this case you have alsa-lib on both sides: the source application and the sink daemon. So what am I complaining about? Well, here is the thing: the parts of ALSA that the media application uses and the parts of ALSA that the PulseAudio daemon uses are almost entirely distinct: one only provides access to the configured virtual devices, and the other only gives access to the raw hardware. The fact that they share the API barely matters, in my opinion.

From my point of view, a better solution would be for libasound to be provided by PulseAudio directly, implementing a subset of the ALSA API that either shows the devices as the sinks configured in PulseAudio or, if PA wants to maintain the stream/sink abstraction itself, exposes just a single device that is PulseAudio. No configuration files, no virtual devices, no plugins whatsoever; but if the application supports ALSA, it gets automatically promoted to PulseAudio. Then on the daemon side, PulseAudio can either fork alsa-lib, or have alsa-lib provide a simpler library that only grants access to the hardware devices and removes support for configuration files and plugins (after all, PulseAudio already has its own module system.) Last I heard, there actually is an embedded version of libasound that implements only the minimal amount of features needed to access a sound device through ALSA. This would not only reduce the amount of “code at play” (pardon the pun), but also reduce the chance of misconfiguring ALSA to do the wrong thing.

Misconfiguring ALSA is probably the most common reason for your sound not working the way you expect on Linux — the configuration files and options, defaults and so on kept changing, and since things are so different from ten years ago, you’re likely to find very bad, old advice out there. And it’s not always clear that it should not be followed. For instance, for the longest time Adobe Flash, thinking it was doing the right thing, would not actually abide by the default ALSA configuration, and rather tried to access the hardware device itself (mostly because of nasty bugs with dmix), which meant that PulseAudio wouldn’t be able to access it anymore. The architecture quickly sketched above would solve that problem, as the application would not be able to tell the difference between the hardware device and the PulseAudio virtual device — the former would just not be there!

And just to close up my ALSA rant, I would like to remind you all that alsa-lib still comes with its own LISP interpreter: the ALISP dialect was meant to provide even more configurability of the sound access interface, and most distributions, as far as I know, still have it enabled. Gentoo provides a (default-off) alisp USE flag, so you’re at least spared that part in most cases.

November 11, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Technology and society, the cellphone example (November 11, 2016, 10:04 UTC)

After many months without blogging, you may notice I’m blogging a bit more about my own opinions than before. Part of it is because these are things I can write about without risking conflicts of interest with work, which makes them easier to write, and part of it is because my opinions differ from what I perceive as the majority of Free Software advocates’. My hope is that providing my opinions openly may, if not sway the opinion of others, at least show that there are other people sharing them. To make them easier to filter out I’ll be tagging them as Opinions, so you can just ignore them if you use anything like NewsBlur and its Intelligence Trainer (I love that feature.)

Note: I had to implement this in Hugo as this was not available when I went to check if the Intelligence Trainer would have worked. Heh.

Okay, back on topic. You know how technologists, particularly around the Free Software movement, complain about the lack of openness in cellphones and smartphones? Or the lack of encryption, or trustworthy software? Sometimes together, sometimes one more important than the other? It’s very hard to disagree with the objective: if you care about Free Software you want more open platforms, and everybody should (to a point) care about safety and security. What I disagree with is the execution, for the most part.

The big problem I see with this is the lack of one big attribute in their ideal system: affordability. And that does not strictly mean being cheap, it also means being something people can afford to use — Linux desktops are cheap, if you look only at the bottom line of an invoice, but at least when I last had customers as a “sysadmin for hire” Managed Services Provider, none of them could afford Linux desktops: they all had to deal with either proprietary software as part of their main enterprise, or with documents that required Microsoft Office or similar.

If you look at the smartphone field, there have been multiple generations of open source or free software projects trying to get something really open out, and yet what most people are using now is either Android (which is partly but not fully open, and clearly not an open source community) or iOS (which is completely closed and good luck with it.) These experiments were usually bloody expensive high-end devices (mostly with the excuse of being development platforms), or tried to get the blessing of “pure free software” by hiding the binary blobs in non-writeable flash memory, so that they could be shipped with the hardware but not with the operating system.

There is, quite obviously, the argument that of course the early adopters end up paying the higher price for technology: when something is experimental it costs more, and can only become cheaper with enough numbers. But on the other hand, way too many of the choices became such just for the sake of showing off, in my opinion. For instance in cases like Nokia’s N900 and Blackphone.

Nowadays, one of the most common answers when talking about the lack of openness and updates of Android is still CyanogenMod despite some of the political/corporate shenanigans happening in the backstory of that project. Indeed, as an aftermarket solution, CyanogenMod provides a long list of devices with a significantly more up to date (and thus secure) Android version. It’s a great project, and the volunteers (who have been doing the bulk of the reverse engineering and set up for the builds) did a great job all these years. But it comes with a bit of a selection bias. It’s very easy to find builds for a newer flagship Android phone, even in different flavours (I see six separate builds for the Samsung Galaxy S4, since each US provider has different hardware) but it’s very hard to find up to date builds for cheaper phones, like the Huawei Y360 that Three UK offers (or used to offer) for £45 a few months back.

I can hear people saying “Well, of course you check before you buy whether you can put a free ROM on it!” Which kind of makes sense if what constrains your choice is openness, but expecting the majority of people to care about that first and foremost is significantly naïve. Give me a chance to explain my argument for why we should spend a significant amount of time working on the lower end of the scale rather than the upper.

I have a Huawei Y360 because I needed a 3G-compatible phone to connect my (UK) SIM card while in the UK. This is clearly a first world problem: I travel enough that I have separate SIM cards for different countries, and my UK card is handy for more than a few countries (including the US.) On the other hand, since I really just needed a phone for a few days (and going into why is a separate issue) I literally went to the store and asked them “What’s the cheapest compatible phone you sell?” and the Y360 was the answer.

This device is what many people could define craptastic: it’s slow, it has a bad touchscreen, very little memory for apps and company. It comes with a non-stock Android firmware by Huawei, based on Android 4.4. The only positive sides for the device are that it’s cheap, its battery actually tends to last, and for whatever reason it allows you to select GPS as the timesource, which is something I have not seen any other phone doing in a little while. It’s also not fancy-looking, it’s a quite boring plastic shell, but fairly sturdy if it falls. It’s actually fairly well targeted, if what you have is not a lot of money.

The firmware is clearly a problem in more than one way. This not being just a modified firmware by Huawei, but a custom one for the provider means that the updates are more than just unlikely: any modification would have to be re-applied by Three UK, and given the likely null margin they make on these phones, I doubt they would bother. And that is a security risk. At the same time the modifications made by Huawei to the operating system seem to go very far on the cosmetic side, which makes you wonder how much of the base components were modified. Your trust on Huawei, Chinese companies, or companies of any other country is your own opinion, but the fact that it’s very hard to tell if this behaves like any other phone out there is clearly not up for debate.

This phone model also appears to be very common in South America, for whatever reason, which is why googling for it might find you a few threads on Spanish-language forums where people either wondered if custom ROMs are available, or might have been able to get something to run on it. Unfortunately my Spanish is not functional so I have no idea what the status of it is, at this point. But this factoid is useful to make my point.

Indeed my point is that this phone model is likely very common with groups of people who don’t have so much to spend on “good hardware” for phones, and yet may need a smartphone that does Internet decently enough to be usable for email and similar services. These people are also the people who need their phones to last as long as possible, because they can’t afford to upgrade it every few years, so being able to replace the firmware with something more modern and forward looking, or with a slimmed down version, considering the lack of power of the hardware, is clearly a thing that would be very effective. And yet you can’t find a CyanogenMod build for it.

Before going down a bit of a road about the actual technicalities of why these ROMs may be missing, let me write down some effectively strawman answers to two complaints that I have heard before, and that I may have given myself when I was young and stupid (now I’m just stupid.)

If they need long-lasting phones, why not spend more upfront and get a future-proof device? It is very true that if you can afford a higher upfront investment, lots of devices become cheaper in the long term. This is not just the case for personal electronics like phones (and cameras, etc.) but also for home appliances such as dishwashers. When some eight or so years ago my mother’s dishwasher died, we were mostly strapped for cash (but we were, at the time, still a family of four, so the dishwasher was handy for the time saving), so we ended up buying a €300 dishwasher on heavy discount when a new hardware store had just opened. Over the next four years, we had to have it repaired at least three times, which brought its TCO (without accounting for soap and supplies) to at least €650.

At the fourth time it broke, I was just back from my experience in Los Angeles, and thus I had the cash to buy a good dishwasher, for €700. Four years later the dishwasher is working fine, no repair needed. It needs less soap, too, and it has a significantly higher energy rating than the one we had before. Win! But I was lucky I could afford it at the time.

There are ways around this: paying by instalments is one of them, but not everybody is eligible for that either. In my case, at the time I was freelancing, which means that nobody would really give me a loan for it. The best I could have done would have been using my revolving credit card to pay for it, but let me just tell you that the interest compounds much faster on that than on a normal loan. Flexibility costs.

This, by the way, relates to the same toilet paper study I referenced yesterday.

Why do you need such a special device? There are cheaper smartphones out there, change provider! This is a variation of the argument above. Three UK, like most of their Three counterparts across Europe, is a bit peculiar, because you cannot use plain GSM phones with them: you need at least UMTS. For this reason you need more expensive phones than your average SIM-free Nokia. So using a different provider may be warranted if all you care about is calls and texts, but nowadays that is not really the case.

I’m now failing to find a source link for it, but I read not too long ago (likely in the Wall Street Journal or New York Times, as those are the usual newspapers I read when I’m at a hotel) how significant Internet-connected mobile phones are for migrants. The article listed a number of good reasons, among which I remember being able to access the Internet to figure out what kind of documents/information they need, being able to browse available job openings, and of course being able to stay in touch with family and friends who may well be in different countries.

Even without going to the full extreme of migrants who just arrived in a country, there are a number of “unskilled” job positions that are effectively “on call” — this is nothing new: the whole area of Dublin where I live now, one of the most expensive in the city, used to be a dormitory for dock workers, who needed to be as close as possible to the docks themselves so that they could get there quickly in the morning to find work. “Thanks” to technology, physical home proximity has been replaced with reachability. While GSM and SMS are actually fairly reliable, having the ability to use WiFi hotspots to receive messages (which a smartphone allows, but a dumbphone doesn’t) is a significant advantage.

An aside on the term “unskilled” — I really hate the term. I have been told that delivering and assembling furniture is an unskilled job, I would challenge my peers to bring so many boxes inside an apartment as quickly as the folks who delivered my sofa and rest of furniture a few months ago without damaging either the content of the boxes or the apartment, except I don’t want to ruin my apartment. It’s all a set of different skills.

Once you factor in this, the “need” for a smartphone clearly outweighs the cheapness of a SIM-free phone. And once you are in for a smartphone, having a provider that does not nickel and dime your allowances is a plus.

Hopefully now this is enough social philosophy for the post — it’s not really my field and I can only trust my experience and my instincts for most of it.

So why are there not more ROMs for these devices? Well the first problem is that it’s a completely different set of skills, for the most part, between the people who would need those ROMs and the people who can make those ROMs. Your average geek that has access to the knowledge and tools to figure out how the device works and either extract or build the drivers needed is very unlikely to do that on a cheap, underpowered phone, because they would not be using one themselves.

But this is just the tip of the iceberg, as that could be fixed by just convincing a handful of people who know their stuff to maintain the ROM for these. The other problem with cheap devices, and maybe less so with Huawei than others, for various reasons, is that the manufacturer is hard to reach, in case the drivers could be made available but nobody has asked. In Italy there is a “brand” of smartphones that prides itself in advertising material on being the only manufacturer in Italy — turns out the firmware, and thus most likely the boards too, mostly come from random devshops in mainland China, and can be found in fake Samsung phones in that country. Going through the Italian “manufacturer” would lead to nothing if you need specs or source code. After all, I’ve seen that for myself with a different company before.

A possible answer to this would be to mandate better support for firmware over time, fining the manufacturers that refuse to comply with the policy. I have heard this proposed a couple of times, particularly because of the recent wave of IoT-based DDoS attacks that got into the news so easily. I don’t really favour this approach because policies are terrible to enforce, as should be clear by now to most technologists who have dealt with leaks and unhashed passwords. Or with certificate authorities. It also has the negative side effect of possibly increasing costs, as the smaller players might actually have a hard time complying with these requirements, and thus end up paying the highest price or being pushed out of the market.

What I think we should be doing is change our point of view on the Free Software world and really become, as the organization calls itself, software in the public interest. And public interest does not mean limiting it to what the geeks think should be the public interest (that does, by the way, include me.) Enforcing the strict GPL has become a burden to so many companies by now that most of the corporate-sponsored open source software nowadays is released under the Apache 2 license. While I would love an ideal world in which all of the free software out there is always GPL and everybody just contributes back at every chance, I don’t think that is quite so likely, so let’s accept that and be realistic.

Instead of making it harder for manufacturers to build solutions based on free and open source software, make it easier. That is not just a matter of licensing, though that comes into play; it’s a matter of building communities with the intent of supporting enterprises to build upon them. With all the problems it shows, I think the Linux Foundation is at least trying this road already. But there are things that we can all do. My hope is that we stop the talks and accusations for and against “purity” of free software solutions. That we accept when a given proposal (proprietary, or coming out of a proprietary shop) is a good idea, rather than ignore it because we think they are just trying to do vendor lock-in. Sometimes they are and sometimes they aren’t; judge ideas, formats, and protocols on their merits, not on who proposes them.

Be pragmatic: support partially closed source solutions if they can be supported by, or supportive of, Free Software. Don’t buy into the slippery slope argument. But strive to always build better open-source tools whenever there is a chance.

I’ll try to write down some of my preferences of what we should be doing, in the space of interaction between open- and closed-source environments, to make sure that the users are safe, and the software is as free as possible. For the moment, I’ll leave you with a good talk by Harald Welte from 32C3; in particular at the end of the talk there is an important answer from Harald about using technologies that already exist rather than trying to invent new ones that would not scale easily.

November 10, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)

Open Source Conference 2016 Tokyo

Many people came to the Gentoo booth,
mainly students and Open Source users
asking for Gentoo information.

We gave away around 200 flyers, and
many many stickers during the two days.

Unfortunately the stickers we ordered
from unixsticker had some SVG problem.

We also had on display some esoteric
environments like the Sharp IS01,
of course running Gentoo both natively
and via Prefix.
Of course one of the first things we tried
was the five-minute-long Gentoo sl command.

image from: @NTSC_J

We also had a Gentoo notebook
running wayland (the one in the middle).

It was an amazing event and I would
like to thank everyone who came to
the Gentoo booth, everyone who helped
make the Gentoo booth, and the whole
amazing Gentoo community.

November 08, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My opinion on internet ads (November 08, 2016, 09:04 UTC)

You may or may not remember that I posted about my (controversial) privacy policy and some of my thoughts on threat models. A related, though probably separate, topic is how to handle internet advertisers and tools like AdBlock, so I thought I would write down my personal preferences and how I operate.

First of all, I should point out the obvious elephants in the room: not only do I work for a company that sells internet ads, but I also use ads on both this blog and Autotools Mythbuster — mostly to try to reduce the cost of running these operations, which are mostly a personal whim. On the other hand, the opinions I express in this post are all personal, and are not being influenced by this. They have been forged over time and experience, though some of said experience may have been related to these.

Once this is clarified, I should describe my current setup, since that will spark the rest of the content of the post. I (still) use AdBlock Plus extension for Chrome — even with all the possibly shady behaviour that the current owners are behind, I have not found a good replacement; uBlock Origin is not a replacement, as I’ll get to later. I’ve set the extension to behave as an opt-in, rather than opt-out: ads are not blocked anywhere until I ask it to. Chrome for Android does not have AdBlock or similar, so I have nothing really there, on the other hand it’s less of an issue there because I usually just look at the same dozen websites most of the time.

To make ads generally less annoying, I signed up for Google Contributor which allows me to declare a target monthly contribution to use to replace Google Ads with whatever set of images (or nothing at all.) I set it to show me cats, including my own.

As I said above, I set my AdBlock to not block ads by default, so when do I decide to turn it on? Well, to start with I run it on my own websites (except when I’m testing them), since otherwise it’s a bit of a mess with the Terms of Service of AdSense, so this is easier. Other than that, I usually turn it on for various sites when I land on a page and I find it “scammy.” The definition of scammy is of course up to debate, so let me try to explain where I come from.

Also, I need to make this point here, so that if you completely disagree with my idea you can probably stop reading (and please don’t comment either): I don’t believe that advertising and marketing are inherently evil. I know plenty of privacy extremists take issue with that statement, so if you do, feel free to move on and read something else altogether.

Not all internet ads are created equal; I think this is obvious to essentially anybody who has been browsing the Internet for more than a few months. Ads may be more or less intrusive, they may be more or less relevant to your interests, and they may or may not always be legal. While no supplier is immune, most of the big names strive hard to avoid ads that outright lie, or that try to pass for something else. The results are usually mixed, as everybody knows already.

On the other hand, there are suppliers that explicitly go for the scams, and some website operators accept them quite willingly. The reason is usually monetary: these networks pay off much better, as the “advertisers” are happy to pay premium to get their (frequently) malware advertised. To give you a bit of an idea, I suggest you read or watch this presentation from the USENIX Security conference.

This is not all, of course. There are also the self-defined “content discovery networks”, that purport to point people at other content they should be interested in, mixing content from the same site with “sponsored links.” Even I tried it once before I noticed how useless it ended up being. Nowadays a lot of those kind of links are coming from two networks: Taboola and Outbrain; in my experience, the latter actually provides kind-of relevant content, the former has lots of almost definite scams that I do not appreciate.

To give you an idea, if I’m reading an article about Brexit, I find it perfectly reasonable to get links to articles suggesting cheap vacations in the UK, an ad for Transferwise and an ad for (which is, as far as I know, a totally legit trading website I have no affiliation with, but which just seems to spend lots of money on advertisement, as I see it on every other website.) If, on the other hand, a different article on the same topic proposes links such as “This one trick hated by doctors to lose weight” and similar, then I think there is more than a little bit of a problem.

But you can get worse than this! Some months ago I was traveling to London, and an acquaintance of mine shared on Facebook an article he wrote for an Italian newspaper (since he’s still living around where I’m from.) Since I was curious about the topic, I looked at it and … well, you can see it by yourself:

Scammy ads from Italian newspaper site

Two things are kind of obvious when looking at it: “Make ¤NNN a day” scams are freaking common not only in comment spam, and people really seem to believe you can look 30 years younger by buying something. Out of eight “links”, only half actually point back to the newspaper; two point to possibly fake cosmetics (from two “different” sites — which are clearly the same), and two point to outright scams that suggest you can make money without doing anything (these at least reporting the same site name.) It’s also apparent that those two sets are auto-generated by taking a set of stock images, a set of stock headline templates, and throwing in different currency symbols, numbers and country names.

Now you may ask why a newspaper – one for which a friend of mine even writes! – would use such a blatantly scammy ad network. The answer is that they did not realize it was a scammy network until I showed him the screenshot. Indeed, from within Italy their ads are useless, but at least legit; it isn’t until you visit from the outside that they start serving you scams. This is, by the way, why sometimes you may find spam that simply links to a blog post of a newspaper or other site in a non-English language: they still want you to “see” these ads; if they are the only thing you understand on the page, that’s still okay. If you don’t know better, you may still fall for it.

There are more cases, but these are the major ones. So if I see any of these scammy ads, I just go and enable AdBlock for the whole domain. Usually, I also try to stay away from that website altogether, but sometimes it’s not as easy. For instance Wikia – yes, headed by the same Jimmy Wales that keeps insisting he doesn’t want ads on Wikipedia by putting a 50%-height banner of his face on it from time to time – uses the medium-grade scammy Taboola — it’s not quite outright illegal activity, but clearly it’s not something I care to see. So there goes AdBlock.

In addition to the actual scams, I enable AdBlock Plus if I see other ads that, whether legit or not, are just an active pain in the arse. For instance, some sites — particularly around hardware reviews, I have noticed — use ad networks that hook on-hover ads to words. So if you’re like Randall and me and select text to remember where you were reading when you get distracted, you may end up playing one of their stupid (sometimes scammy, sometimes not) ads. Bam. Auto-playing video ads with audio get the AdBlock hammer too. Bam. And so do those sites that just make my CPU spin even though it’s not obvious any ad is playing. Bam.

So with all this explained, let me go back to uBlock Origin, which seems to be the only alternative to AdBlock Plus that is ever suggested. This extension is clearly written by privacy extremists. I already had a couple of times people replying to my complaints about it on twitter trying to be funny with “well, that’s intended” or “I don’t see a problem” — that does not make you smart, that makes you completely tone-deaf.

The extension does not only block ads, but it keeps insisting it wants to block all the client-side tracking. As I said before there is still plenty of space for server-side tracking, particularly for malicious purposes; client-side tracking is usually done for marketing purposes, and so I don’t really mind it.

It goes beyond that. The rulesets in uBlock Origin are designed to block based on regular expressions; some of these expressions are of significantly wide reach, for instance when I tried it I couldn’t even go and check my own AdSense console. Or even access SourceForge! — as much as I really disliked SourceForge’s turning to bundling malware last year, marking the whole site off-limits is crazy.

More bothersome for me was the way the extension decided that the tracking clicks from Skymiles Shopping were ads, and so just decided it was a good thing to block them. For those who don’t know Skymiles Shopping, or one of its many other incarnations for hotels, airlines and other loyalty programs, it’s essentially a way to bridge the referral system of various online shopping venues with your own interests, pretty much the same as Socialvest used to do. When you click on a given offer from the portal, they ask you for your loyalty identifier (in my case a Delta SkyMiles frequent flyer number), then send you to the shopping site with a personalized tracker. After you order from the site, they get a referral commission, and credit you with something — in the case of Socialvest back in the day, you could donate that to non-profits or get it added to your Flattr wallet; in the case of Skymiles Shopping, they give you a number of Delta reward miles.

Am I trading part of my privacy away for some benefit? Yes. I’m okay with that, as I said. And so is, very likely, the majority of people out there. So by not providing an option to disable this behaviour, and insisting that it’s the correct one, the only message such users can take away is that the extension is not for them, and they will fall back to either the (possibly shady) AdBlock Plus, or to no extension whatsoever — and with badvertising being an actual problem, that’s not good either.

For you it might be that your privacy is just that valuable, but there are indeed enough people for which these cash-back, custom tailored offers, or generally legit, non-scammy ads are important. It’s not far from the toilet paper problem.

Indeed, this kind of tone-deaf response from many privacy and Free Software activists is what turned me significantly away from the movement over the past few months. I plan on writing more of it, but I thought this would be a good place to start.

November 07, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
What is “Service Function Chaining”? (November 07, 2016, 16:59 UTC)

This is the first article in a series about Service Function Chaining (SFC for short), and its OpenStack implementation, networking-sfc, that I have been working on.

The SFC acronym can easily appear in Software-defined networking (SDN), in a paper about Network function virtualization (NFV), in some IETF documents, … Some of these broader subjects use other names for SFC elements, but this is probably a good topic for another post/blog.
If you already know SFC elements, you can probably skip to the next blog post.


So what is this “Service Function Chaining”? Let me quote the architecture RFC:

The delivery of end-to-end services often requires various service functions. These include traditional network service functions such as firewalls and traditional IP Network Address Translators (NATs), as well as application-specific functions. The definition and instantiation of an ordered set of service functions and subsequent “steering” of traffic through them is termed Service Function Chaining (SFC).

I see SFC as routing at a higher level of abstraction: in a typical network, you route all the traffic coming from the Internet through a firewall box. So you set up the firewall system, with its network interfaces (Internet and intranet sides), and add some IP routes to steer the traffic through.
SFC uses the same concept, but with logical blocks: if a packet matches some conditions (it is Internet traffic), force it through a series of “functions” (in that case, only one function: a firewall system). And voilà, you have your Service function chain!

I like this simple comparison as it introduces most of the SFC elements:

  • service function: a.k.a. “bump in the wire”. This is a transparent system that you want some flows to go through (typical use cases: firewall, load balancer, analyzer).
  • flow classifier: the “entry point”, it determines if a flow should go through the chain. This can be based on IP attributes (source/destination address/port, …), layer 7 attributes or even on metadata in the flow, set by a previous chain.
  • port pair: as the name implies, this is a pair of ports (network interfaces) for a service function (the firewall in our example). The traffic is routed to the “in” port, and is expected to exit the VM through the “out” port. This can be the same port.
  • port chain: the SFC object itself, a set of flow classifiers and a set of port pairs (that define the chain sequence).

An additional type not mentioned before is the port pair group: if you have multiple service functions of an identical type, you can regroup them to distribute the flows among them.
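To make these objects concrete, here is a minimal Python sketch of the concepts (names and structure are mine, purely illustrative; this is not the networking-sfc API): a classifier decides whether a flow enters the chain, and matching packets are steered through the ordered service functions.

```python
from dataclasses import dataclass
from typing import Callable, List

Packet = dict  # a packet modeled as a plain dict, e.g. {"src": "8.8.8.8"}

@dataclass
class PortChain:
    classifier: Callable[[Packet], bool]          # the flow classifier
    functions: List[Callable[[Packet], Packet]]   # ordered service functions

    def steer(self, pkt: Packet) -> Packet:
        if not self.classifier(pkt):
            return pkt                 # no match: normal routing applies
        for fn in self.functions:      # match: traverse the chain in order
            pkt = fn(pkt)
        return pkt

# The "Internet traffic goes through a firewall" example from the text:
def firewall(pkt: Packet) -> Packet:
    pkt = dict(pkt)
    pkt["inspected"] = True            # mark the packet as inspected
    return pkt

chain = PortChain(
    classifier=lambda p: not p["src"].startswith("10."),  # non-intranet
    functions=[firewall],
)

print(chain.steer({"src": "8.8.8.8"}))   # steered through the firewall
print(chain.steer({"src": "10.0.0.5"}))  # intranet traffic bypasses the chain
```

Adding a second function to the list (a load balancer, an analyzer) is all it takes to extend the chain, which is exactly the appeal of the abstraction.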

Use cases and advantages

OK, after seeing all these definitions, you may wonder “what’s the point?” What I have seen so far is that it allows:

  • complex routing made easier. Define a sequence of logical steps, and the flow will go through it.
  • HA deployments: add multiple VMs to the same group, and the load will be distributed between them.
  • dynamic inventory. Add or remove functions dynamically, either to scale a group (add a load balancer, remove an analyzer), change functions order, add a new function in the middle of some chain, …
  • complex classification. Flows can be classified based on L7 criteria, or on output from a previous chain (for example a Deep Packet Inspection system).

Going beyond these technical advantages, you can read an RFC that is actually a direct answer to this question: RFC 7498
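As a concrete illustration, the firewall chain from the earlier example maps onto the networking-sfc CLI roughly as follows (the port and object names here are hypothetical, and you need a deployment with the networking-sfc extensions enabled):

```shell
# The firewall VM's ingress/egress Neutron ports form a port pair
neutron port-pair-create --ingress fw-in --egress fw-out FW_PP

# Identical service functions can be grouped to distribute flows
neutron port-pair-group-create --port-pair FW_PP FW_PPG

# The flow classifier selects which traffic enters the chain
neutron flow-classifier-create --protocol tcp \
    --destination-port 80:80 FC_WEB

# The port chain ties classifiers and port pair groups together
neutron port-chain-create --port-pair-group FW_PPG \
    --flow-classifier FC_WEB PC_WEB
```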

Going further

To keep a reasonable post length, I did not talk about:

  • How does networking-sfc tag traffic? Hint: MPLS labels
  • Service functions may or may not be SFC-aware: proxies can handle the SFC tagging
  • Upcoming feature: support for Network Service Header (NSH)
  • Upcoming feature: SFC graphs (allowing complex chains and chains of chains)
  • networking-sfc modularity: the reference implementation uses OVS, but this is just one of the possible drivers
  • Also, networking-sfc architecture in general
  • SFC use in VNF Forwarding Graphs (VNFFG)


SFC has abundant documentation, both in the OpenStack project and outside. Here is some additional reading if you are interested (mostly networking-sfc focused):

Denis Dupeyron a.k.a. calchan (homepage, bugs)
SCALE 15x CFP is closing soon (November 07, 2016, 04:07 UTC)

Just a quick reminder that the deadline for proposing a talk to SCALE 15x is on November 15th. More information, including topics of interest, is available on the SCALE website.

SCALE 15x is to be held on March 2-5, 2017 at the Pasadena Convention Center in Pasadena, California, near Los Angeles. This is the same venue as last year, and is much nicer than the original one from the years before.

I’ll see you there.

November 06, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-10-01 Gentoo Study Meeting (November 06, 2016, 18:24 UTC)

Gentoo Study Meeting talking (English Summary):  
Live broadcast:  

    First Gentoo Study Meeting Tokyo with  
    How to become Gentoo Developer introduction talk.  
            Contributing ebuilds:  
                - sending Git pull requests  
                - Searching for a mentor on proxy-maint  
                - Asking in #gentoo-proxy-maint  
                - Using  
        Non-committer developers:  
            - Contributing to Gentoo projects, with work that 
              does not need Gentoo git repository access.  
            - Contributing to the wiki (translation too, although 
              translators need the wiki translator permission)  
    How to get help in Japanese:  
        - #gentoo-ja Freenode  
        - ?forum  
        - Gentoo勉強会 (Gentoo Study Meeting)  
    Gentoo News update:  
        Talk about Future EAPI 7 ulm slide  
            Question: when are new EAPIs released?  
                I think there is no set release date for EAPIs  
            New features  
                - Runtime-switchable useflag  
                - eqwarn  
                - dohtml  
                - package.provided in profiles  
                - DESTTREE and INSDESTTREE  
    Talk about the presence of a Gentoo booth at Open Source Conference 
    2016 Tokyo:  
        - Stickers  
        ask the Foundation:  
        - Banner  
            size and format  
        - Table cover  
            size and format  
        Presenter: Matsuu san  
        Slide: Isucon 6  
            - Team tuning speed contest  
            - This time the tuning speed contest was on  
                Only distributions backed by a company can get support on Azure.  
                Debian has a third-party company supporting it on Azure.  
                Gentoo also needs something similar.  
            - It is good to practice on past problems to score higher at ISUCON.  
                Vagrant is nice to use for running the previous problems  
            - Go language, varnish+ESI, mysql  
            - access log 
              analyzer for isucon/tuning  
            - sshrc 
              bring your .bashrc, .vimrc, etc. with you when you ssh.  
            - Matsuu-san has been chosen to become staff for 
              future ISUCON presentations.  
        Presenter: @tkshnt  
        Slide: Report on last update  
            - let's make a Gentoo goods shop for Gentoo-JP  
                previous OSC items:  
                    - t-shirt (@matsuu, @naota)  
                    - stickers (@matsuu)  
                next item:  
                    - Gentoo Tenugui (手拭い)  
                OSC booth:  
                    - presentation  
                    - flyer  
                Design repository:  
                    - Github  
                        - project management  
                        - simple file upload  
        Presenter: @d_aki  
        Slide: my chaotic /etc/portage  
            - package.use can become chaotic  
            - /var/lib/portage/world: difficult to 
              remember when you added something and why  
            - let's use the package.use directory and name each file 
              after what you are installing  
            - not what but why you installed the package  
        Presenter: alicef  
        Slide: How to contribute on Gentoo Github  
            - the Gentoo CVS repository has recently been converted to Git  
            - Using the GitHub mirror it is possible to send pull requests.  
            - Good points of pull requests:  
                - Code comments and review from more than one developer  
                - fast way to send ebuild patches upstream  
                - QA automatic checks  
            - Bad points of pull requests:  
                - the reviews are open for everyone to see  
                - basic git knowledge is needed  
            When cloning the Gentoo repository:  
                Use git clone --depth=50  
                for fast pull requests with less log information  
                git clone and git clone --depth=50 time difference:  
        Presenter: @usaturn  
        Slide: systemd-nspawn & btrfs  
            - On Gentoo, using systemd-nspawn  
                - copy on write  
                - using subvolumes we can make snapshots  
                - compression is possible  
                - cannot make a swapfile  
                - unit files manage the processes  
                - installation is simple using the systemd stage 3  
                - no syslog, journald instead  
                - network settings via networkd  
                - instead of cron there are timers  
                - instead of ntp there is systemd-timesyncd  
                - grub is not needed; systemd-boot 
                  (formerly gummiboot) works as the bootloader  
                - docker is not needed; systemd-nspawn 
                  via the machinectl command (good for testing Gentoo packages)  
        Presenter: @naota344  
        Slide: automatically resolving conflicts  
            Gentoo developer, btrfs, linux kernel, emacs, T-code  
                resolving conflicts  
                    - when a USE flag is needed it will ask to 
                      add the USE flag.  
                    - when a circular dependency is detected it will 
                      ask to remove a USE flag, for example  
                Why there is a conflict:  
                    - Before installing a new package, we 
                      have a package (for example perl-5.20) with all 
                      its dependency packages set up  
                    - when we update world and get a 
                      new package update (for example perl-5.22),
                      some dependencies of perl-5.22 also get new updates  
                    - in this situation it can happen that some dependency
                      of perl-5.20 gets in conflict with perl-5.22  
                How can we fix such a situation:  
                    - we have the option to add --reinstall-atoms="Y"
                      to the emerge command (Y = name of the dependency
                      package that is causing the problem)  
                    - instead of just updating the package, this will
                      reinstall it as if it were not installed, 
                      solving such dependency conflicts
                Why does Portage decide not to fix 
                dependencies automatically?  
                    maybe because trying to fix all the dependencies would
                    not work correctly  
                When Portage has conflicts for many packages  
                    it becomes more complicated and we end up with a command
                    similar to this:  
                    --reinstall-atoms="A B C D E F G H I L M N ..."  
                To solve such problems there is emerge --reinstall-atoms  
                    - automatically fixing circular dependencies  
                    - showing the dependency graph  
                    - there is also a function to try out the 
                      dependency graph in a container  
                    - emerge analyzer tool  
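The notes above boil down to a single extra option on the world update. As a hypothetical invocation (the atom name is made up for illustration; --reinstall-atoms is a real emerge option):

```shell
# World update where a module built against the old perl conflicts
# with the new dev-lang/perl; force the offending dependency to be
# rebuilt as if it were not installed:
emerge --update --deep --newuse @world \
       --reinstall-atoms="dev-perl/XML-Parser"
```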

November 05, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Running services on Virgin Media (Ireland) (November 05, 2016, 18:04 UTC)

Update: just a week after I wrote this down (and barely after I managed to post this), Virgin Media turned off IPv6-PD on the Hub 3.0. I’m now following up with them and currently without working IPv6 (and no public IPv4) at home, which sucks.

I have not spoken much about my network setup since I moved to Dublin, mostly because there isn’t much to speak of, although admittedly there are a few interesting things that are quite different from before.

The major one is that my provider (Virgin Media Ireland, formerly known as UPC Ireland) supports native IPv6 connectivity through DS-Lite. For those who are not experts of IPv6 deployments, this means that the network has native IPv6 but loses the public IPv4 addressing: the modem/router gets instead a network-local IPv4 address (usually in the RFC1918 or RFC6598 ranges), and one or more IPv6 prefix delegations from which it provides connectivity to the local network.

This means you lose the ability to port-forward a public IPv4 address to a local host, which many P2P users would be unhappy about, as well as having to deal with one more level of NAT (and that almost always involves rate limiting by the provider on the number of ports that can be opened simultaneously.) On the other hand, it gives you direct, native access to the IPv6 network without taking away (outbound) access to the legacy, IPv4 network, in a much more user-friendly way than useless IPv6-only networks that rely on NAT64. But it also brings a few other challenges with it.

Myself, I actually asked to be opted into the DS-Lite trial when it was still not mandatory. The reason is that I don’t really use P2P that much (although a couple of times it was simpler to find a “pirate” copy of a DVD I already own, rather than trying to rip it to watch it now that I effectively have no DVD reader), and so I have very few reasons to need a public IPv4 address. On the other hand, I do have a number of backend-only servers that are only configured over the IPv6 network, so having native access to that network is preferable. At the same time I do sometimes need to SSH into a local box, or reach Transmission or similar software over HTTP.

Anyway, back to my home network: I have a Buffalo router running OpenWRT behind the so-called Virgin Media Hub (which is also the cable modem — and no, it’s not more convenient to just get a modem-mode device, because this is EuroDOCSIS, which is different from the US version, and Virgin Media does not support it.) And yes, this means that IPv4 is actually triple-natted! This device is configured to get an IPv6 prefix delegation from the Hub, and uses that for the local network, as well as for the IPv4 NAT.

Note: for this to work, your Hub needs to have DHCPv6 enabled, which may or may not be the case by default (mine was enabled, but then a “factory restore” disabled it!) To do so, go to the Hub admin page, log in, and under Advanced, DHCP, make sure that IPv6 is set to Stateful. That’s it.

There are two main problems that need to be solved to provide external access to a webapp running on the local server: dynamic addressing and firewalls. These two issues are more intertwined than I would like, making it difficult to explain the solution step by step, so let me first present the problems.

On the machine that needs to serve the web app, the first problem to solve is making sure that it gets at least one stable IPv6 address that can be reached from the outside. This used to be very simple, because except for IPv6 privacy extensions, the IPv6 address was stable and calculated from the prefix and the hardware (MAC) address. Unfortunately this is not the case anymore; RFC 7217 provides “privacy stable addressing”, and NetworkManager implements it. In a relatively normal situation these addresses are by all means stable, and you could use them just fine. Except there is a second dynamic issue at hand, at least with my provider: the prefix is not stable, neither the one assigned to the Hub, nor the one then delegated to the Buffalo router. Which means the network address that the device gets is closer to random than stable.

While this first part is relatively easy to fix by using a service that allows you to dynamically update a host name, and indeed this is part of my setup too (I use, it does not solve the next problem, which is to open the firewall to let the connections in. Indeed, firewalls are particularly important on IPv6 networks, where every device would otherwise be connected and visible to the public. Unfortunately, unless you connect directly to the Hub, there is no way to tell it to allow only a given device, no matter which prefix is assigned. So I started by disabling the IPv6 firewall (since no device besides the OpenWRT router is connected to the Hub directly), and rely exclusively on the OpenWRT-provided firewall. This is the first level passed. There is one more.

Since the prefix that the OpenWRT receives as delegation keeps changing, it’s not possible to just state the IPv6 address you want to allow access to in the firewall config, as it’ll change every time the prefix changes, even without the privacy mode enabled. But there is a solution: when using stable, non-privacy addresses, the suffix of the address is stable, and you can bet that someone already added support in ip6tables for matching against a suffix. Unfortunately the OpenWRT UI does not let you set this up, but you can do it from the config file itself.

On the target host, which I’m assuming is using NetworkManager (because if not, you can just let it use the default address and not have to do anything), you have to set this one property:

# nmcli connection show
[take note of the UUID shown in the list]
# nmcli connection modify ${uuid} ipv6.addr-gen-mode eui64

This re-enables EUI-64 based addressing for IPv6, which is derived from the MAC address of the card. It’ll change the address (and will require reconfiguration in OpenWRT, too) if you change the network card or its MAC address. But it does the job for me.

From the OpenWRT UI, as I said, there is no way to set the right rule. But you can configure it just fine in the firewall configuration file, /etc/config/firewall:

config rule
        option enabled '1'
        option target 'ACCEPT'
        option name 'My service'
        option family 'ipv6'
        option src 'wan'
        option dest 'lan'
        option dest_ip '::0123:45ff:fe67:89AB/::ffff:ffff:ffff:ffff'

You have to replace ::0123:45ff:fe67:89AB with the correct EUI-64 suffix, which is derived by splicing ff:fe into the MAC address and flipping one bit. I never remember how to calculate it, so I just copy-paste it from the machine as I need it. This should give you a way to punch through all the firewalls and get remote access.
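If you'd rather compute the suffix than copy-paste it, here's a small bash sketch of the derivation (flip the universal/local bit of the first octet, insert ff:fe in the middle):

```shell
eui64_suffix() {
    # Derive the EUI-64 IPv6 interface identifier from a MAC address:
    # flip the universal/local bit (0x02) of the first octet and
    # splice ff:fe between the third and fourth octets.
    local IFS=: a b c d e f
    read -r a b c d e f <<< "$1"
    printf '::%02x%s:%sff:fe%s:%s%s\n' $(( 0x$a ^ 0x02 )) "$b" "$c" "$d" "$e" "$f"
}

eui64_suffix "00:11:22:33:44:55"   # prints ::0211:22ff:fe33:4455
```

The output is in the same form as the dest_ip suffix used in the firewall rule above.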

What remains to be solved at this point is having a stable way to contact the service. This is usually easy, as dynamic DNS hosts have existed for over twenty years by now, and indeed it is what the now-notorious Dyn (which found itself at the receiving end of one of the biggest DDoS attacks just a few days ago) built its fame on. Unfortunately, they appear to have vastly dropped the ball when it comes to dynamic DNS hosting, as I couldn’t convince them (at least at the time) to let me update a host with only IPv6. This might be more of a problem with the clients than the service, but the result is the same. So, as I noted earlier, I ended up using, although it took me a while to find the right way to update a v6-only host: the default curl command you can find is actually for IPv4 hosts.

Oh yeah, there was one last remaining problem with this, at least when I started looking into fixing this all up: at the time, Let’s Encrypt did not support IPv6-only hosts when it came to validating domains with HTTP requests, so I spent a few weeks fighting and writing tools, trying to find a decent way to have a hostname that is both dynamic and allows for DNS-based domain control validation for ACME. I will write about that separately, since it takes us on a tangent that has nothing to do with the actual Virgin Media side of things.

In the land of dynamic DNS (November 05, 2016, 18:04 UTC)

In the previous post I talked about my home network and services, and I pointed out how I ended up writing some code while trying to work around lack of pure-IPv6 hosts in Let’s Encrypt. This is something I did post about on Google+, but that’s not really a reliable place to post this for future reference, so it’s time to write it down here.

In the post I referred to the fact that, up until around April this year, Let’s Encrypt did not support IPv6-only hosts, and since I only have DS-Lite connectivity at home, I wanted that. To be precise, it’s the http authentication that was not supported on IPv6, but the ACME protocol (which Let’s Encrypt designed and implements) supports another authentication method: dns-01, at least as a draft.

Since this is a DNS-based challenge, there is no involvement of IPv4 or IPv6 addresses at all. Unfortunately, the original client, now called certbot, does not support this type of validation, among other things because it’s bloody complicated. On the bright side, lego (an alternative client written in Go) does support this validation, including support for a number of DNS providers.

Unfortunately,, which is the dynamic host provider I started using for my home network, is not supported. The main reason is that its API does not allow creating TXT or CNAME records, which are needed for dns-01 validation. I did contact the owner hoping that an undocumented API existed, but I got no answer back.

Gandi, on the other hand, is supported, and is my main DNS provider, so I started looking into that direction. Unlike my previous provider (OVH), Gandi does not appear to provide you any support for delegating to a dynamic host system. So instead I looked for options around it, and I found that Gandi provides some APIs (which, after all, is what lego uses itself.)

I ended up writing two DNS updating tools, if nothing else because they are very similar: one for Gandi and one for (the one for was what I started with — at the time I thought that they didn’t have an endpoint to update IPv6 hosts, since the default endpoint was v4-only.) I got clearance to publish them, and the code is now on GitHub. It can work as a framework for any other dynamic host provider, if you feel like writing one, and it provides some basic helper methods to figure out the current IPv4 or IPv6 address assigned to an interface — while this makes no sense behind NAT, it makes sense with DS-Lite.

But once I got it all up and running, I realized something that should have been obvious from the start: Gandi’s API is not great for this use case at all. In the case of and OVH’s protocols, there is a per-host token, usually randomly generated; you deploy that to the host you want to keep up to date, and that’s it: nothing else can be done with that token, it’s a one-way update of the host.

Gandi’s API is designed to be an all-around provisioning API, so it allows executing any operation whatsoever with your token, including registering or dropping domains, or dropping or reconfiguring the whole zone. It’s a super-user access token, and it sidesteps the 2-factor authentication that you can set up on your Gandi account. If you lose track of this API key, it’s game over.

So at the end of the day, I decided not to use this at all. But since I had already written the tools, I thought it would be a good idea to leave them to the world. It was also a nice way for me to start writing some public Go code.

Playing old games (November 05, 2016, 18:04 UTC)

I already have a proper, beefy gamestation which I use to play games the few days a year I spend at home. It’s there for games like Fallout 4, Skyrim and the lot where actual processing power is needed. I also use it for my photo editing, since I ended up accepting that Adobe tools are actually superior (particularly in long-term compatibility support) to anything I could find open-source.

On the other hand, I spend a significant amount of time “on the road”, as they say, travelling for conferences, or meeting my supported development teams, or just trying to get some time for myself, playing Ingress, or whatever else. The guy who was so scared of flying is now clearly a frequent flyer and one that likes seeing confs and cons.

This means that I spend a significant amount of time in a hotel room, without my gamestation and with some will to play games. Particularly when I’m somewhere for work, and so not spending the evenings out with friends — I do that sometimes when I’m out for work too, but not always. I have for a very long while spent the hotel time writing blog posts, but since the blog went down I didn’t (and even now, because of what I chose to use, it’s going to be awkward since it ends up requiring SSH access to post.) After that I spent some of the time by effectively working overtime, writing design docs and figuring out work-related problems; this is not great, not only because it leaves me with a horrible work/life balance, but also because I wouldn’t want to give the impression to my colleagues that this is something we need to do, particularly those who joined after me.

So on my last US trip, back in April, I was thinking of what I could actually play during my stay. Games on mobile and tablet become… not quite satisfying pretty quickly. I used to have a PSP (I didn’t bring it with me), but except for Monster Hunter Freedom, most of the games I’ve played on it have been JRPGs — I was considering getting myself a PlayStation Vita so that I could play Tales of Hearts R, but then I decided against it, because seriously, the Vita platform clearly failed a long time ago. I briefly considered the latest iteration of Nintendo’s portable (remember, this is before they announced the Switch), but I decided against that too, because I simply don’t like the form factor.

I settled on getting myself an Ideapad 100S, a very cheap Windows laptop, plus a random HP bluetooth mouse; total damage: less than €200. This is a very underpowered device if you want to use it for anything at all, including browsing the net, but the reason why I bought it was actually much simpler: it is powerful enough to play games such as Caesar 3, Pharaoh, The Settlers IV and so on. And while I may have taken a not very ethical approach to these back in the day, these games are easily, and legally, available on

While they are not ported to Linux, some of them do play on Wine; on the other hand, I did not want to spend time trying to get them to work on my Linux laptop, because I want to play to relax, not to get even more aggravated when things stop working. So instead I play them on that otherwise terrible laptop.

I actually did not play on it on my last trip, which included two 12-hour flights (Paris to Shanghai, and Tokyo to Paris), but that was because I was visiting China, and I’m trained to be paranoid; otherwise I have had quite a bit of luck playing Pharaoh and company on it, even in the economy section. The only game I have not managed to play on it yet is NoX; for whatever reason the screen flickers when I try to start it up. I should just try that one on Wine, I’m fairly sure it works.

I’m actually wondering how many people have considered reimplementing these games based on the original assets; I know people have done that over time for Dune 2000 and for Total Annihilation, but I have not dared trying to figure out if anyone else tried for other games. It would definitely be interesting. I have not played any RTS in a while, even though I do have a copy of Age of Empires 2 HD on my gamestation; I only played a couple of deathmatch games online with friends, and even that was difficult to organize, what with all of us working, and me almost always being in different timezones.

On a more technical note, the Lenovo laptop is quite interesting. It has very low specs, but it has some hardware that is rare to find on PCs at all; in particular it comes with an SDIO-based WiFi card. I have not even tried getting Linux to run on it, but if I were bored, I’m sure it would be an interesting set of hardware devices that might or might not work correctly.

Oh well, that’s a story for another time.

My thoughts on Keybase (November 05, 2016, 18:04 UTC)

Keybase is one of a series of new services that appear to have come up in the wake of the publication of Snowden’s documents, and the desire for more and simpler crypto technologies. While I may disagree with the overall message attached to the Church of Snowden (Jürgen phrases it much better than me), easier crypto is something I’m generally happy with.

Unfortunately I’m not sure if Keybase’s promise of making GnuPG easier while at the same time keeping it just as safe is actually being kept. It appears to make it easier, at least under certain conditions, but I disagree that it stays just as secure, particularly if you follow their “default flow.”

The first problem that comes to mind is that they even suggest you upload your private key to their system so that you can use the browser for interacting with it! I hoped they were kidding, but no, it seems like that’s an option, actually the first of three options when you try to do anything at all with the website.

The second is the fact that for a lot of the features to make even remote sense, you have to use the command line, either through their tool or through a combination of curl and gnupg. It might seem strange that I’m complaining at both ends, but it’s because I would have preferred for them to provide, say, a Chrome extension that interfaces with gnupg, rather than a command line tool. Even more so when you realize that the command line tool depends on NodeJS, and it includes a TSR background service.

The command line tool is also not great. Indeed, when you try to log in with it, by default it’ll use pinentry, which, if started with a DISPLAY environment variable set, will use the graphical version (in my case, Qt.) The graphical version does not allow you to paste, which makes sense for the passphrase of a private key, or the PIN of a smartcard (if you save those in a password manager on the same system, there is very little protection provided anyway). But if you’re trying to access a service… significantly less so. I worked around this by unsetting the DISPLAY environment variable so the console pinentry gets used, and just pasting the password in Konsole.

But it gets more interesting when you start noticing things that are significantly broken. Keybase requires you to prove access to the key you want to mark as yours, which is the obvious thing to do, and that’s good. Unfortunately they don’t seem to cope well with the idea of key expiration. From what I read in various related issues, the reason is that they think key expiration is a useless concept thanks to Keybase. That may be the case if you have no other environment, but I’d call that a myopic point of view. By the way, it doesn’t matter if you extend your expiration date in time: you still have to re-prove the key to Keybase, because you don’t seem to be able to provide them with an updated copy of it (like you would with a normal keyserver.)

Once I got access to my account back, I managed to re-prove my website; this was needed because I moved providers (long story) for the blog and everything, and so the proof (which for whatever reason I forgot to add to the git repository I store my website in) went… poof. Unfortunately it was a bit more involved than just generating a new proof, mostly because the fetcher that should verify said proof does not actually respect the HTTP standard requirements and provides no Accept header, which meant ModSecurity kicked it out. You’d expect that a service that is all about security and trust would at least be able to implement the protocol correctly.

To finish this off, I really dislike the “limited invites” option in general. I understand why that’s needed, but it just feels a bit useless to me, particularly when, just because I logged back in, the system granted me more invites — with the “cute and whimsical” notion that it’s the founder of the service who “grants” you those invites. Heh.

All in all, I don’t get much real use out of this system. I signed up because it was suggested as a nice way to prove my identity, but I don’t feel it’s any better than the Web of Trust, and I’m not saying the WoT is good.

Oh well, if it takes off I’ll be there, if not, I have only spent a minimum amount of time on it.

Gentoo Miniconf 2016 (November 05, 2016, 18:04 UTC)

Gentoo Miniconf, Prague, October 2016

As I noted when I resurrected the blog, part of the reason why I managed to come back to “active duty” within Gentoo Linux is that Robin and Amy helped me set up my laptop and my staging servers for signing commits with GnuPG remotely.

And that happened because this year I finally managed to go to the Gentoo MiniConf hosted as part of LinuxDays in Prague, Czech Republic.

The conference track was fairly minimal; Robin gave us an update on the Foundation and on what Infra is doing — I’m really looking forward to the ability to send out changes for review, instead of having to pull and push Git directly. After spending three years using code reviews with a massive repository I feel I like it and want to see significantly more of it.

Ulrich gave us a nice presentation on the new features coming with EAPI 7, which together with Michal’s post on EAPI 6 made it significantly easier to pick up Gentoo again.

And of course, I managed to get my GnuPG key signed by some of the developers over there, so that there is proof that whoever is committing those changes is really me.

But the most important part for me has been seeing my colleagues again, and meeting the new ones. Hopefully this won’t be the last time I get to the Miniconf, although fitting this together with the rest of my work travel is not straightforward.

I’m hoping to be at 33C3 — I have a hotel reservation and flight tickets, but no ticket for the conference yet. If any of you, devs or users, is there, feel free to ping me over Twitter or something. I’ll probably be at FOSDEM next year too, although that is not a certain thing, because I might have some scheduling conflicts with ENIGMA (unless I can get Delta to give me the ticket I have in mind.)

So once again, thank you to CVU and LinuxDays for hosting us, and hopefully see you all in the future!

November 04, 2016
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
GStreamer and Synchronisation Made Easy (November 04, 2016, 10:16 UTC)

A lesser known, but particularly powerful feature of GStreamer is our ability to play media synchronised across devices with fairly good accuracy.

The way things stand right now, though, achieving this requires some amount of fiddling and a reasonably thorough knowledge of how GStreamer’s synchronisation mechanisms work. While we have had some excellent talks about these at previous GStreamer conferences, getting things to work is still a fair amount of effort for someone not well-versed with GStreamer.

As part of my work with the Samsung OSG, I’ve been working on addressing this problem, by wrapping all the complexity in a library. The intention is that anybody who wants to implement the ability for different devices on a network to play the same stream and have them all synchronised should be able to do so with a few lines of code, and the basic know-how for writing GStreamer-based applications.

I’ve started work on this already, and you can find the code in the creatively named gst-sync-server repo.

Design and API

Let’s make this easier by starting with a picture …

Big picture of the architecture

Let’s say you’re writing a simple application where you have two or more devices that need to play the same video stream, in sync. Your system would consist of two entities:

  • A server: this is where you configure what needs to be played. It instantiates a GstSyncServer object on which it can set a URI that needs to be played. There are other controls available here that I’ll get to in a moment.

  • A client: each device would be running a copy of the client, and would get information from the server telling it what to play, and what clock to use to make sure playback is synchronised. In practical terms, you do this by creating a GstSyncClient object, and giving it a playbin element which you’ve configured appropriately (this usually involves at least setting the appropriate video sink that integrates with your UI).

That’s pretty much it. Your application instantiates these two objects, starts them up, and as long as the clients can access the media URI, you magically have two synchronised streams on your devices.


The keen observers among you would have noticed that there is a control entity in the above diagram that deals with communicating information from the server to clients over the network. While I have currently implemented a simple TCP protocol for this, my goal is to abstract out the control transport interface so that it is easy to drop in a custom transport (Websockets, a REST API, whatever).

The actual sync information is merely a structure marshalled into a JSON string and sent to clients every time something happens. Once your application has some media playing, the next thing you’ll want to do from your server is control playback. This can include

  • Changing what media is playing (like after the current media ends)
  • Pausing/resuming the media
  • Seeking
  • “Trick modes” such as fast forward or reverse playback

The first two of these work already, and seeking is on my short-term to-do list. Trick modes, as the name suggests, can be a bit more tricky, so I’ll likely get to them after the other things are done.
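Since the post only says the sync information is “merely a structure marshalled into a JSON string”, here is a hedged sketch of what such a control message could look like; the field names are hypothetical and not the actual gst-sync-server wire format:

```python
import json

def make_sync_message(uri, base_time_ns, paused):
    """Server side: marshal (hypothetical) playback state into JSON."""
    return json.dumps({
        "uri": uri,                # what every client should play
        "base-time": base_time_ns, # shared-clock time playback started at
        "paused": paused,          # current transport state
    })

def apply_sync_message(text):
    """Client side: unmarshal the state; a real client would act on it."""
    return json.loads(text)

wire = make_sync_message("http://example.com/video.mkv", 1234567890, False)
state = apply_sync_message(wire)
print(state["uri"])  # http://example.com/video.mkv
```

Any transport (the current TCP one, Websockets, a REST API) only needs to deliver such a string to every client whenever something changes.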

Getting fancy

My hope is to see this library being used in a few other interesting use cases:

  • Video walls: having a number of displays stacked together so you have one giant display — these are all effectively playing different rectangles from the same video

  • Multiroom audio: you can play the same music across different speakers in a single room, or multiple rooms, or even group sets of speakers and play different media on different groups

  • Media sharing: being able to play music or videos on your phone and have your friends be able to listen/watch at the same time (a silent disco app?)

What next

At this point, the outline of what I think the API should look like is done. I still need to create the transport abstraction, but that’s pretty much a matter of extracting out the properties and signals that are part of the existing TCP transport.

What I would like is to hear from you, my dear readers who are interested in using this library — does the API look like it would work for you? Does the transport mechanism I describe above cover what you might need? There is example code that should make it easier to understand how this library is meant to be used.

Depending on the feedback I get, my next steps will be to implement the transport interface, refine the API a bit, fix a bunch of FIXMEs, and then see if this is something we can include in gst-plugins-bad.

Feel free to comment either on the Github repository, on this blog, or via email.

And don’t forget to watch this space for some videos and measurements of how GStreamer synchronisation fares in real life!

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

At the end of August—I know that it’s now November, but time seems to get away from me more often these days—I got the honour of trying the new 2015 vintage of Syncopation red blend (read about the 2014 vintage here) from Mike Ward on Wine! This is the second year that Mike has produced the incredible blend that changed my perspective on Missouri wines, and this year, it was joined by the new Acoustic white blend. Before getting into the new white blend, let’s take a look at the changes for this 2015 release of Syncopation Rhythmic red blend.

2015 Ward on Wine Syncopation Rhythmic Red and Acoustic White

Unlike the 2014 vintage—which was a blend of Chambourcin, Vidal blanc, Seyval blanc, and Traminette—this year was a cuvée of Chambourcin, Vignoles, Norton, and Traminette. So, the primary varietal is still Chambourcin, and the Traminette remains (though it is slightly more prominent than last year). The Vidal blanc and Seyval blanc, though, were replaced by Vignoles and Norton. The breakdown in varietals is 70% Chambourcin, and 10% each of the remaining three grapes.

Seeing as the Vidal blanc and Seyval blanc, which are both white grapes, were replaced by Vignoles (a complex hybrid) and Norton (a very deep purple grape, somewhat resembling Concords), I didn’t really have any idea what to expect from this new blend. Below are my impressions:

2015 Syncopation Rhythmic Red blend – tasting notes:
With its beautiful ruby-to-garnet colour, this wine shows wonderfully when backlit in the glass. Subdued purples shine through the burgundy in the centre, and it is encompassed by a dark pink ring at the edges. A bouquet of red plum and blueberries is evident, but completely unassuming and lovely in its simplicity. Interestingly, though, those fruits didn’t come through for me in taste. Instead, I found raspberry, strawberry, and forest underbrush (akin to some Pinot noirs from Oregon’s Willamette Valley) to be much more prominent on the palate. Those flavours were further complemented by slight hints of clove and white pepper. Fascinatingly, though this is not a sparkling wine in any way, there was a slight effervescent feel upfront. Like the previous vintage, I found that this Syncopation red blend is best enjoyed with a slight chill on it (14-16°C / 57-61°F).

Mike Ward of Ward on Wine with his 2015 Syncopation wines
Mike and his 2015 Syncopation wines
2015 Syncopation Rhythmic Red blend with a glass and Sommelier knife
Syncopation Rhythmic Red

I was quite confident that I would enjoy this new vintage of Syncopation red, but I wasn’t sure how I would feel about the new Acoustic white blend since this year was its debut. Once again, Mike Ward challenged what I thought I knew about my taste preferences by creating an absolutely outstanding white wine that is sure to please a wide array of tastes! Syncopation Acoustic White is a blend of 70% Vignoles, 20% Vidal Blanc, and 10% Traminette.

2015 Syncopation Acoustic White blend – tasting notes:
A light but vivid yellow in the glass, this brilliant blend demands your attention due to its dazzling vibrancy! On the nose, there is an elegant mix of less pronounced, almost musky fruits like apricot and the mellow sweetness of Bosc pears. There is an ever-so-faint hint of ginger and lemon zest that adds to the wine’s elusive profile. It has a crisp yet completely approachable acidity. The lemon starts to come through, but is almost immediately thwarted by the more rounded flavours of nectarine and apricot.

2015 Ward on Wine Syncopation Acoustic White blend bottle with glass

Overall, I enjoyed both of these wines, which says a lot seeing as Missouri wines are not usually my favourites. Having tasted the 2014 and 2015 Syncopation Rhythmic Red blends side-by-side, I slightly prefer the 2014. That could be caused by any number of factors, but I am willing to bet that it is due to my strong preference for Vidal blanc. Changing out two white grapes (the Vidal blanc and Seyval blanc) for another red grape (the Norton) significantly changed the flavour profile, especially given the almost mordant forwardness of big fruits exhibited by Norton. We are splitting hairs here though, because both years have shown me the intricacies that Missouri wines are capable of producing. Further, I was taken aback by the Acoustic White blend, and find that it ranks amongst my favourites of Missouri whites. I am sure that I will enjoy many bottles of these two wines over the upcoming year, and am excited to experience the next incarnation of Mike Ward’s Syncopation!

So, I encourage you to pick up at least a bottle of each and experience them for yourself—even if you were like me in thinking that Missouri wines didn’t hold their own. You can purchase them at several Saint Louis area Schnucks grocery stores, or by stopping in at The Wine Barrel on Lindbergh near Watson. At The Wine Barrel, you can also choose to try Syncopation by the glass, and if you’re lucky, Mike may even be there when you stop by. 🙂


October 31, 2016
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Intel MediaSDK mini-walkthrough (October 31, 2016, 14:24 UTC)

Using hwaccel

It has been a while since I mentioned the topic, and we have made huge progress in this field.

Currently, Libav12 already has nice support for multiple kinds of hardware for decoding, scaling, deinterlacing and encoding.

The whole thing works nicely, but it isn’t foolproof yet, so I’ll start describing how to set it up and use it for some common tasks.

This post will be about Intel MediaSDK, the next post will be about NVIDIA Video Codec SDK.



  • A machine with QSV hardware, Haswell, Skylake or better.
  • The ability to compile your own kernel and modules
  • The MediaSDK mfx_dispatch

It works nicely both on Linux and Windows. If you happen to have other platforms feel free to contact Intel and let them know, they’ll be delighted.


The MediaSDK comes with either the usual Windows setup binary or a Linux bash script that tries its best to install the prerequisites.

# tar -xvf MediaServerStudioEssentials2017.tar.gz

Focus on SDK2017Production16.5.tar.gz.

tar -xvf SDK2017Production16.5.tar.gz


The MediaSDK leverages libva to access the hardware, together with a highly extended DRI kernel module.
They support CentOS with rpms, and all the other distros with a tarball.

BEWARE: if you use the installer script, the custom libva will override your system one; you might not want that.

I’m using Gentoo so it is intel-linux-media_generic_16.5-55964_64bit.tar.gz for me.

The one bit of this tarball you really want to install on the system no matter what is the firmware:


If you are afraid of adding custom stuff to your system, I advise installing everything into an offset prefix and then overriding the LD paths so it is used only for Libav.

BEWARE: you must use the custom iHD libva driver with the custom i915 kernel module.

If you want to install using the provided script on Gentoo you should first emerge lsb-release.

emerge lsb-release
source /etc/profile.d/*.sh
echo /opt/intel/mediasdk/lib64/ >> /etc/

Kernel Modules

The patchset resides in:


The current set is 143 patches against linux 4.4; trying to apply them to a more recent kernel requires patience and care.

Kernel 4.4.27 works almost fine (even btrfs does not seem to have many horrible bugs).


In order to use the Media SDK with Libav you should use the mfx_dispatch from yours truly, since it provides a default for Linux so it behaves in a uniform way compared to Windows.

Building the dispatcher

It is a standard autotools package.

git clone git://
cd mfx_dispatch
autoreconf -ifv
./configure --prefix=/some/where
make -j 8
make install

Building Libav

If you want to use the advanced hwcontext features on Linux you must enable both the vaapi and the mfx support.

git clone git://
cd libav
export PKG_CONFIG_PATH=/some/where/lib/pkgconfig
./configure --enable-libmfx --enable-vaapi --prefix=/that/you/like
make -j 8
make install


Media SDK is sort of temperamental, and the setup process requires manual tweaking, so the odds of having to debug and investigate are high.

If something misbehaves, here is a checklist:
  • Make sure you are using the right kernel and you are loading the module.

uname -a

  • Make sure libva is the correct one and it is loading the right thing.

strace -e open ./avconv -c:v h264_qsv -i test.h264 -f null -

  • Make sure you aren’t using the wrong rate control or missing required parameters.

./avconv -v verbose -filter_complex testsrc -c:v h264_qsv {ratecontrol params omitted} out.mkv

See below for some examples of working rate-control settings.

  • Use the MediaSDK examples provided with the distribution to confirm that everything works, in case the SDK is more recent than the updates.


The Media SDK support in Libav covers decoding, encoding, scaling and deinterlacing.

Decoding is straightforward; the rest still has quite a few rough edges, and this blog post has been written mainly to explain them.

Currently the most interesting formats supported are h264 and hevc, but other formats such as vp8 and vc1 are supported as well.

./avconv -codecs | grep qsv


The decoders can output directly to system memory, so they can be used as normal decoders and feed a software implementation just fine.

./avconv -c:v h264_qsv -i input.h264 -c:v av1 output.mkv

Or they can decode to opaque (gpu-backed) buffers so further processing can happen:

./avconv -hwaccel qsv -c:v h264_qsv -i input.h264 -vf deinterlace_qsv,hwdownload,format=nv12 -c:v libx265 out.mkv

NOTICE: you have to explicitly pass the filterchain hwdownload,format=nv12 or you will get mysterious failures.


The encoders are almost as straightforward, besides the fact that the MediaSDK provides multiple rate-control systems and they do require explicit parameters to work.

./avconv -i input.mkv -c:v h264_qsv -q 20 output.mkv

Failing to set the nominal framerate or the bitrate will make the look-ahead rate control not happy at all.

Rate controls

The rate control is one of the roughest edges of the current MediaSDK support; most of the rate controls require a nominal frame rate, so an explicit -r must be passed.

There isn’t a default bitrate, so -b:v should also be passed if you want to use a rate control that has a bitrate target.

It is possible to use a look-ahead rate control aiming at a quality metric by passing -global_quality and -la_depth.


It is possible to have a full hardware transcoding pipeline with Media SDK.


./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf deinterlace_qsv -c:v h264_qsv -r 25 -b:v 2M out.mkv


./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf scale_qsv=640:480 -c:v h264_qsv -r 25 -b:v 2M -la_depth 5 out.mkv

Both at the same time

./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf deinterlace_qsv,scale_qsv=640:480 -c:v h264_qsv -r 25 -b:v 2M -la_depth 5 out.mkv

Hardware filtering caveats

The hardware filtering system is quite new, and introducing it exposed a number of shortcomings in the Libavfilter architecture regarding format autonegotiation, so for hybrid pipelines (those that do not keep using hardware frames throughout) it is necessary to call hwupload and hwdownload explicitly, in such ways:

./avconv -hwaccel qsv -c:v h264_qsv -i in.mkv -vf deinterlace_qsv,hwdownload,format=nv12 -c:v vp9 out.mkv

Future for MediaSDK in Libav

The Media SDK already supports a good number of interesting codecs (h264, hevc, vp8/vp9) and Intel seems to be quite receptive regarding which codecs to support.
The Libav support for it will improve over time as we improve the hardware acceleration support in the filtering layer and make the libmfx interface richer.

We need more people testing and helping us figure out use cases and corner cases that haven’t been thought of yet; your feedback is important!

October 29, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy 19th Birthday, Noah (October 29, 2016, 05:04 UTC)

Happy 19th Birthday, Noah! I hope that, this year, you are able to spend your special day with family, friends and loved ones. Be safe out there, and have a good time! 🙂

I also wanted to let you know how proud I am of all that you’ve accomplished, and I hope that you are too. Juggling undergraduate studies (with classes, lectures, homework, and the likes) along with a job that carries with it a lot of hours is no easy task, but you are managing to do it quite well! Keep it up, and I know that you will go far in this life.

Love you, buddy,

October 27, 2016
Robin Johnson a.k.a. robbat2 (homepage, bugs)

Cross-posting from where I've written up some other pieces:
- How to set up Ceph RGW StaticSites (S3 Website mode). I wrote the code over the course of the last year, and here's the first solid documentation for setting it up now. As for 'using' it, your S3 client with WebsiteConfiguration support should just work.
- Boto S3: how to muck with where it actually connects. Boto S3 tries to be smart about where it's connecting to, such that it takes the hostname you give it and uses that for most things. This makes some testing fun where you want it to request a certain hostname but actually connect somewhere entirely different.


October 24, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-10-07 Gentoo kernel maintainer 4.7.x (October 24, 2016, 06:33 UTC)

Recently I became a Gentoo Kernel Project member, maintaining the Gentoo kernel branch 4.7.
Kernel Project

I have already made some releases,

so you can ping me
if the Gentoo kernel is not up to date :)

Recently the Dirty COW (CVE-2016-5195) kernel vulnerability came out,
and the Gentoo 4.7 branch update was released just a couple of hours after the kernel patch was released.

October 16, 2016
Robin Johnson a.k.a. robbat2 (homepage, bugs)
LVM: convert linear to striped (October 16, 2016, 14:55 UTC)

This requires temporarily having 2x the size of your LVM volume. You need to create a mirror of your data, with the new leg of the mirror striped over the target disks, then drop the old leg of the mirror that was not striped. If you want to stripe over ALL of your disks (including the one that was already used), you also need to specify --alloc anywhere otherwise the mirror code will refuse to use any disk twice.

# convert to a mirror (-m1), with new leg striped over 4 disks: /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde
# --mirrorlog core - use in-memory status during the conversion
# --interval 1: print status every second
lvconvert --interval 1 -m1 $myvg/$mylv --mirrorlog core --type mirror --stripes 4 /dev/sd{b,c,d,e}
# drop the old leg, /dev/sda
lvconvert --interval 1 -m0 $myvg/$mylv  /dev/sda

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Fixing gtk behaviour (October 16, 2016, 13:26 UTC)

Recently I've noticed all gtk2 apps becoming quite ... what's the word ... derpy?
Things like scrollbars not working and stuff. And by "not working" I mean the gtk3 behaviour of not showing up/down arrows and being a grey smudge of stupid.

So accidentally I stumbled over an old Gentoo bug where it was required to deviate from defaults to have, like, icons and stuff.
That sounds pretty reasonable to me, but with gtk upstream crippling the Ad-Waiter, err, adwaita theme, because gtk3, this is a pretty sad interaction. And unsurprisingly, by switching to the upstream default theme, Raleigh, gtk2 apps start looking a lot better. (Like, scrollbars and stuff.)

The change might make sense to apply to Gentoo globally, locally for each user it is simply:

$ cat ~/.gtkrc-2.0
gtk-theme-name = "Raleigh"
gtk-cursor-theme-name = "Raleigh"
I'm still experimenting with 'gtk-icon-theme-name' and 'gtk-fallback-icon-theme'; maybe those should change too. And as a benefit we can remove the Ad-Waiter from the dependencies, possibly drop gnome-themes too, and restore a fair amount of sanity to gtk2.

Changing console fontsize (October 16, 2016, 10:09 UTC)

Recently I accidentally acquired some "HiDPI" hardware. While it is awesome to use, it quickly becomes irritating to be almost unable to read the bootup messages or work in a VT.
The documentation on fixing this is surprisingly sparse, but luckily it is very easy:

  • Get a font that comes in the required sizes. media-fonts/terminus-font was the first choice I found, there may be others that are nice to use. Since terminus works well enough I didn't bother to check.
  • Test the font with "setfont". The default path is /usr/share/consolefonts, and the font 'name' is just the filename without the .psf.gz suffix. If you break things you can revert to sane defaults by just calling "setfont" or rebooting the machine (ehehehehehe)
  • Set the font in /etc/conf.d/consolefont. For a 210dpi notebook display I chose 'ter-v24b', but I'm considering going down a font size or two, maybe 'ter-v20b'? It's all very subjective ...
  • On reboot the consolefont init script will set the required font.
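Putting the steps above together, the whole persistent configuration is a single variable in a config fragment (the font name shown is just the one chosen above):

```shell
# /etc/conf.d/consolefont (Gentoo OpenRC)
# Font file from media-fonts/terminus-font: the name is the
# filename in /usr/share/consolefonts without the .psf.gz suffix
consolefont="ter-v24b"
```

Then make sure the service is in the boot runlevel with rc-update add consolefont boot.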
Now I'm wondering if such fonts can be embedded into the kernel so that on boot it directly switches to a 'nice' font, but just being able to read the console output is a good start ...

October 12, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
GnuPG: private key suddenly missing? (October 12, 2016, 16:56 UTC)

After updating my workstation, I noticed that keychain reported that it could not load one of the GnuPG keys I passed on to it.

 * keychain 2.8.1 ~
 * Found existing ssh-agent: 2167
 * Found existing gpg-agent: 2194
 * Warning: can't find 0xB7BD4B0DE76AC6A4; skipping
 * Known ssh key: /home/swift/.ssh/id_dsa
 * Known ssh key: /home/swift/.ssh/id_ed25519
 * Known gpg key: 0x22899E947878B0CE

I did not modify my key store at all, so what happened?

GnuPG upgrade to 2.1

The update I did also upgraded GnuPG to the 2.1 series. This version has quite a few updates, one of which is a change towards a new private key storage approach. I thought that it might have done a wrong conversion, or that the key which was used was of a particular method or strength that suddenly wasn't supported anymore (PGP-2 is mentioned in the article).

But the key is a relatively standard RSA4096 one. Yet still, when I listed my private keys, I did not see this key. I even tried to re-import the secring.gpg file, but it only found private keys that it already saw previously.

I'm blind - the key never disappeared

Luckily, when I tried to sign something with the key, gpg-agent still asked me for the passphrase that I had used for a while on that key. So it isn't gone. What happened?

Well, the key id is not my private key id, but the key id of one of the subkeys. Previously, gpg-agent sought and found the private key associated with the subkey, but now it no longer does. I don't know if the old behaviour was a bug that I accidentally relied on, or if the new behaviour is the bug. I might investigate that a bit more, but right now I'm happy that I found it.

All I had to do was use the right key id in keychain, and things worked again.

Good, now I can continue debugging networking issues with an azure-hosted system...

October 11, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Openstack Newton Update (October 11, 2016, 05:00 UTC)

The short of it

Openstack Newton was packaged early last week (when rc2 was still going on upstream) and the tags for the major projects were packaged the day they released (nova and the like).

I've updated the openstack-meta package to 2016.2.9999 and would recommend people use that.

Heat has also been packaged this time around so you are able to use that if you wish.

I'll link to my keywords and use files so you may use them if you wish as well. Please keep in mind that my use file is for my personal setup (static kernel, vxlan/linuxbridge and postgresql)

October 08, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
New job and new blog category (October 08, 2016, 06:49 UTC)

Sorry blog, this announcement comes late for you (I updated sites like Linkedin some time ago), but better late than never!

I got myself a new job in May, joining the Red Hat software developers working on OpenStack. More specifically, I will work mostly on the network parts: Neutron itself (the “networking as a service” main project), but also other related projects like Octavia (load balancer), image building, and more recently Service Function Chaining.

Working upstream on these projects, I plan to write some posts about them, which will be regrouped in a new OpenStack category. I am not sure yet about the format (short popularisation items and tutorials, long advanced technical topics, a mix of both, …), we will see. In all cases, I hope it will be of interest to some people 🙂

PS for Gentoo Universe readers: don’t worry, that does not mean I will switch all my Linux boxes to RHEL/CentOS/Fedora! I still have enough free time to work on Gentoo

October 05, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-10-05 exam finished and news (October 05, 2016, 08:49 UTC)

School exams are almost finished.
I managed to take 20 classes in one semester and get 37 school points.
In this second semester I need around 10 points to get into the 4th year and start doing mainly research.

Because I had some free time, I did an internship to look for work, and held a Gentoo Study Meeting after almost 6 months,
and contributed to Gentoo.
I was also able to get into the school's open source research lab,
so in the coming months I will follow a few lessons and do open source research.

September 27, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
We do not ship SELinux sandbox (September 27, 2016, 18:47 UTC)

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to allow applications most standard privileges needed for interacting with the user (so that the user can of course still use the application) but removes many permissions that might be abused to either obtain information from the system, or use to try and exploit vulnerabilities to gain more privileges. It also hides a number of resources on the system through namespaces.

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not shield off the users' terminal access sufficiently. In the POC post we notice that characters could be pushed into the terminal through the ioctl() function (which executes the ioctl system call used for input/output operations against devices) and are eventually executed as commands when the application finishes.

That's bad of course. Hence the CVE-2016-7545 registration, and of course also a possible fix has been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There's some history involved why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command that is called sandbox, installed through the sys-apps/sandbox application. So back in the days that we still shipped with the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for the sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 came up in which the seunshare_mount function had a security issue. And in 2014, CVE-2014-3215 came up with - again - a security issue with seunshare.

At that point, I had enough of this sandbox utility. First of all, it never quite worked enough on Gentoo as it is (as it also requires a policy which is not part of the upstream release) and given its wide open access approach (it was meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

None of the Gentoo SELinux users ever approached me with the question to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

September 26, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Mounting QEMU images (September 26, 2016, 17:26 UTC)

While working on the second edition of my first book, SELinux System Administration - Second Edition I had to test out a few commands on different Linux distributions to make sure that I don't create instructions that only work on Gentoo Linux. After all, as awesome as Gentoo might be, the Linux world is a bit bigger. So I downloaded a few live systems to run in Qemu/KVM.

Some of these systems however use cloud-init which, while interesting to use, is not set up on my system yet. And without support for cloud-init, how can I get access to the system?

Mounting qemu images on the system

To resolve this, I want to mount the image on my system, and edit the /etc/shadow file so that the root account is accessible. Once that is accomplished, I can log on through the console and start setting up the system further.

Images that are in the qcow2 format can be mounted through the nbd driver, but that would require some updates on my local SELinux policy that I am too lazy to do right now (I'll get to them eventually, but first need to finish the book). Still, if you are interested in using nbd, see these instructions or a related thread on the Gentoo Forums.

Luckily, storage is cheap (even SSD disks), so I quickly converted the qcow2 images into raw images:

~$ qemu-img convert root.qcow2 root.raw

With the image now available in raw format, I can use the loop devices to mount the image(s) on my system:

~# losetup /dev/loop0 root.raw
~# kpartx -a /dev/loop0
~# mount /dev/mapper/loop0p1 /mnt

The kpartx command will detect the partitions and ensure that those are available: the first partition becomes available at /dev/loop0p1, the second /dev/loop0p2 and so forth.

With the image now mounted, let's update the /etc/shadow file.

Placing a new password hash in the shadow file

A google search quickly revealed that the following command generates a shadow-compatible hash for a password:

~$ openssl passwd -1 MyMightyPassword

The challenge wasn't to find the hash though, but to edit it:

~# vim /mnt/etc/shadow
vim: Permission denied

The image that I downloaded used SELinux (of course), which meant that the shadow file was labeled with shadow_t which I am not allowed to access. And I didn't want to put SELinux in permissive mode just for this (sometimes I /do/ have some time left, apparently).

So I remounted the image, but now with the context= mount option, like so:

~# mount -o context="system_u:object_r:var_t:s0" /dev/loop0p1 /mnt

Now all files are labeled with var_t which I do have permissions to edit. But I also need to take care that the files that I edited get the proper label again. There are a number of ways to accomplish this. I chose to create a .autorelabel file in the root of the partition. Red Hat based distributions will pick this up and force a file system relabeling operation.

Unmounting the file system

After making the changes, I can now unmount the file system again:

~# umount /mnt
~# kpartx -d /dev/loop0
~# losetup -d /dev/loop0

With that done, I had root access to the image and could start testing out my own set of commands.

It did trigger my interest in the cloud-init setup though...

September 22, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
Few notes on locale craziness (September 22, 2016, 20:13 UTC)

Back in the EAPI 6 guide I shortly noted that we have added a sanitization requirement for locales. Having been informed of another locale issue in Python (pre-EAPI 6 ebuild), I have decided to write a short note of locale curiosities that could also serve in reporting issues upstream.

Where l10n and i18n are concerned, most developers correctly predict that date and time formats, currencies and number formats are going to change. It’s rather hard to find an application that would fail because of a changed system date format; however, it is much easier to find one that does not respect the locale and uses hard-coded format strings for user display. You can find applications that unconditionally use a specific decimal separator, but it’s quite rare to find one that chokes when combining code using a hard-coded separator with system routines respecting locales. Some applications rely on English error messages, but that’s rather obviously perceived as a mistake. However, there are also two hard cases…

Lowercase and uppercase

For a start, if you thought that the ASCII range of lowercase characters would map cleanly to the ASCII range of uppercase characters, you were wrong. The Turkish (tr_TR) locale is different here, and maps lowercase ‘i’ (LATIN SMALL LETTER I) to uppercase ‘İ’ (LATIN CAPITAL LETTER I WITH DOT ABOVE). Similarly, ‘I’ (LATIN CAPITAL LETTER I) maps to ‘ı’ (LATIN SMALL LETTER DOTLESS I). What does this mean in practice? That if you have a Turkish user, then depending on the software used, your Latin ‘i’ may be uppercased to ‘I’ (as you expect it to be), ‘İ’ (as would be correct in free text) or… left as ‘i’.

What’s the solution for this? If you need to uppercase/lowercase an ASCII text (e.g. variable names), either use a function that does not respect locale (e.g. 'i' - ('a' - 'A') in C) or set LC_CTYPE to a sane locale (e.g. C). However, remember that LC_CTYPE affects the character encoding — i.e. if you read UTF-8, you need to use a locale with UTF-8 codeset.
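The ASCII-only approach can be sketched in Python. Note that Python 3's str.upper() is already locale-independent (it follows Unicode default casing, not LC_CTYPE), so the explicit translation table below just makes the ASCII-only intent visible; ascii_upper is a hypothetical helper name, not part of any standard library:

```python
# A minimal sketch of locale-independent, ASCII-only uppercasing,
# mirroring the "'i' - ('a' - 'A')" trick from C.
_ASCII_UPPER = str.maketrans(
    'abcdefghijklmnopqrstuvwxyz',
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ')

def ascii_upper(text):
    # Only ASCII a-z are mapped; 'i' always becomes 'I', regardless of
    # locale. A Turkish-aware uppercaser would yield 'İ' instead.
    return text.translate(_ASCII_UPPER)

print(ascii_upper('variable_i'))  # prints VARIABLE_I
```

Any non-ASCII characters pass through untouched, which is exactly what you want when the input is identifiers rather than free text.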


Collation

The other problem is collation, i.e. sorting. The more obvious part of it is that particular locales enforce a specific sorting of their own diacritic characters. For example, the Polish letter ‘ą’ is sorted between ‘a’ and ‘b’ in the Polish locale, and somewhere at the end, after all ASCII letters, in the C locale. The intermediately obvious part of it is that some locales order lowercase and uppercase characters differently — the C and German locales sort uppercase characters first (the former because of ASCII codes), while many other locales sort the opposite way.

Now, the non-obvious part is that some locales actually reorder the Latin alphabet. For example, the Estonian (et_EE) locale puts ‘z’ somewhere between ‘s’ and ‘t’. Yep, seriously. What’s even less obvious is that it means that the [a-z] character class suddenly ends halfway through the lowercase characters!

What’s the solution? Again, either use non-locale-sensitive functions or sanitize LC_COLLATE. For regular expressions, the named character classes ([[:lower:]], [[:upper:]]) are always a better choice.
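The effect of sanitizing LC_COLLATE can be sketched in Python using locale.strxfrm as the sort key. This is an illustration, not from the original post: the sample words are arbitrary, and the Polish branch is guarded because pl_PL.UTF-8 may not be generated on a given system:

```python
import locale

words = ['tam', 'ser', 'ąb', 'za']  # arbitrary sample strings

# In the sanitized C locale, collation follows code points, so the
# non-ASCII 'ą' (U+0105) sorts after every ASCII letter.
locale.setlocale(locale.LC_COLLATE, 'C')
print(sorted(words, key=locale.strxfrm))

# Under a Polish locale (if generated), 'ą' would instead sort just
# after 'a', i.e. before 'ser', 'tam' and 'za'.
try:
    locale.setlocale(locale.LC_COLLATE, 'pl_PL.UTF-8')
    print(sorted(words, key=locale.strxfrm))
except locale.Error:
    pass  # pl_PL.UTF-8 not available on this system
```

The same sanitization applies to anything that shells out to sort(1) or relies on glob ordering.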

Does anyone know more fun locales?

September 18, 2016
Zack Medico a.k.a. zmedico (homepage, bugs)

For I/O-bound tasks, Python coroutines make a nice replacement for threads. Unfortunately, there’s no asynchronous API for reading files, as discussed in the “Best way to read/write files with AsyncIO” thread of the python-tulip mailing list.

Meanwhile, it is essential that a long-running coroutine contain some asynchronous calls, since otherwise it will run all the way to completion before any other event loop tasks are allowed to run. For a long-running coroutine that needs to call a conventional iterator (rather than an asynchronous iterator), I’ve found this converter class to be useful:

import asyncio

class AsyncIteratorExecutor:
    """
    Converts a regular iterator into an asynchronous
    iterator, by executing the iterator in a thread.
    """
    def __init__(self, iterator, loop=None, executor=None):
        self.__iterator = iterator
        self.__loop = loop or asyncio.get_event_loop()
        self.__executor = executor

    def __aiter__(self):
        return self

    async def __anext__(self):
        # next(iterator, sentinel) returns the sentinel (self) at
        # exhaustion, rather than raising StopIteration in the worker.
        value = await self.__loop.run_in_executor(
            self.__executor, next, self.__iterator, self)
        if value is self:
            raise StopAsyncIteration
        return value

For example, it can be used to asynchronously read lines of a text file as follows:

async def cat_file_async(filename):
    with open(filename, 'rt') as f:
        async for line in AsyncIteratorExecutor(f):
            print(line.rstrip())

if __name__ == '__main__':
    import sys
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(cat_file_async(sys.argv[1]))
    finally:
        loop.close()
September 17, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Italy: Travel home days and trip summary (September 17, 2016, 03:00 UTC)

Today was the day that we said goodbye to our Italian vacation and headed back to the US. We awoke to unbelievable rains and, hoping that they would pass before our drive back to FCO airport, we stopped in again at Bar Principe 2 for some breakfast. In full disclosure, it’s basically because I couldn’t resist the Nutella and white chocolate tart one more time before leaving Italy. Unfortunately, though, the rain didn’t let up at all, and it made the drive back to the airport a bit more stressful than anticipated. That being said, we arrived safely, dropped off the car, and made the flights back to the US.

It should be mentioned here that our experience with Hertz car rental was absolutely awful! They charged me multiple times, and I had to contest the duplicates with my credit card company. They also tried to sneak in random service fees and charges even though the rental was prepaid before we even left the United States.

Overall, this was not my favourite trip, and I found it to be one of the most frustrating vacations that I’ve had. However, there were some really great points, and I wouldn’t trade the experiences that we had for anything. Here’s a quick recap of some of those points, with links to the posts about the respective days:

I know that it may seem crazy that my favourite wine ended up being the 2012 Casale del Giglio Madreselva even after experiencing several 1997 Brunellos during our stay at Casa Bolsinina. I need to clarify that it was my favourite because it was just completely unexpected! I didn’t think that Lazio would produce such a stellar wine, especially considering the very reasonable price!


September 16, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
We're very happy to announce that a few days ago one of our manuscripts, "Secondary electron interference from trigonal warping in clean carbon nanotubes", was accepted for publication in Physical Review Letters.

Imagine a graphene "sheet" of carbon atoms rolled into a tube - and you get a carbon nanotube. Carbon nanotubes come in many variants, which strongly influence their electronic properties. They differ not only in diameter but also in "chiral angle", which describes how the pattern of carbon atoms twists around the tube axis. In our work, we show how to extract information on the nanotube structure from measurements of its conductance. At low temperature, electrons travel ballistically through a nanotube and are only scattered at its ends. For the quantum-mechanical electron wavefunction, metallic nanotubes then act analogously to an optical Fabry-Perot interferometer, i.e., a cavity with two semitransparent mirrors at either end, where a wave is partially reflected. Interference patterns are obtained by tuning the wavelength of the electrons; the current through the nanotube oscillates as a function of an applied gate voltage. The twisted graphene lattice then causes a distinct slow current modulation which, as we show, allows a direct estimation of the chiral angle. This is an important step towards solving a highly nontrivial problem: identifying the precise molecular structure of a nanotube from electronic measurements alone.

"Secondary electron interference from trigonal warping in clean carbon nanotubes"
A. Dirnaichner, M. del Valle, K. J. G. Götz, F. J. Schupp, N. Paradiso, M. Grifoni, Ch. Strunk, and A. K. Hüttel
accepted for publication in Physical Review Letters; arXiv:1602.03866 (PDF, supplemental information)

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

After not getting all that much sleep (due to the air conditioner problem in our hotel), we got up and had a pretty casual morning. At this particular bed and breakfast (Vicolo dei Pescatori), the morning meal isn’t served at the hotel. Instead, the owner provided us with vouchers that could be used at three different cafés in the area. We chose Bar Principe 2, and it ended up being quite nice. We had the usual coffee and tea, and I got this amazing tart that had Nutella and white chocolate cream. Honestly, I could eat that thing all day long. My waistline wouldn’t be too happy about it, but my taste buds would be! 🙂

After breakfast, we drove about 35 minutes southwest to the town of Cerveteri in order to see a very cool UNESCO World Heritage Site known as the Necropolis of Banditaccia. It is essentially a “City of the Dead” for the Etruscans, and is comprised of hundreds of tombs, one seemingly more elaborate than the one before it!

Necropolis of Banditaccia tombs
Click to enlarge

It was raining off and on, so getting photos was more difficult than usual. However, the tombs were fascinating. In my opinion, though the full site is quite large, much of it is the same, so allowing 1-2 hours there is sufficient. The land itself was certainly beautiful (as is much of northern Lazio).

Landscapes around the Necropolis of Banditaccia
Click to enlarge

We drove back to Anguillara Sabazia and spent the remainder of the day just walking around town and looking at the calming Lake Bracciano. I spent some time setting up my tripod and other equipment so that I could take multiple exposures in hopes of doing some HDR tone mapping. It’s a photographic technique with which I haven’t had much experience, but in a nutshell, it requires taking many shots (to varying degrees of under and overexposure), and combining them in order to increase the dynamic range of a photograph. I haven’t yet had a chance to play around with the photos that I took, but here is the “correct” exposure of the town:

Anguillara Sabazia and Lake Bracciano
Click to enlarge

It’s beautiful in and of itself!

We again made the drive to the neighbouring town of Bracciano because our dinner last night at Pane e Olio was so incredibly delicious that we wanted to experience it one more time before ending our trip to Italy. We tried ordering a different starter of fried lake fish, and that was a mistake. So, we instead went with the fried pizzas again. I, not wanting to miss my last chance of having real Carbonara (see yesterday’s post), ordered it again. Deb, however, changed it up and went with a pasta that had pork cheek and pistachios. My favourite thing about her dish was the pasta that was used, which was like Pici—a thicker, more doughy pasta with some weight to it.

Pane e Olio in Bracciano - Pici pasta with pistachio and pork cheek
Click to enlarge

We went back to Gran Caffe Principe di Napoli, and this time ordered a pistachio tart to take away with us to the hotel. We enjoyed it immensely, and had some wine on our balcony overlooking the lake. Not too shabby for our last night in Italy.


September 15, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Today was one of the few days on our trip that was, unfortunately, comprised of mostly travel. After we checked out of our hotel (Il Saracino), we made our way back down to the Port of Capraia. A staff member of the hotel took our bags down to the Port, but we decided to make the nice walk one last time. Along the way, we stopped to take in the glorious view of the Port, and the pristine waters of the Ligurian Sea.

Torremar Ferry at the Port of Capraia returning to Livorno
Click to enlarge

We got to the Port well ahead of time. Why? In order to start the day off right with one last gelato from La Gelateria di Capraia. 😉 I went with the coconut again, because why change a good thing? We boarded the ferry and in under three hours, we were back in the port city of Livorno where we had parked the car for our mini-trip to Capraia Isola. In the car, it was about a 3h30m drive down the coast toward Rome. We left Tuscany, and entered back into Lazio (the region that encompasses Rome). About 30km northwest of Rome is the lake town of Bracciano, which I’ve heard can be a common vacation spot for Romans in the summer. Though Bracciano is the bigger city, we chose to stay in the neighbouring town of Anguillara Sabazia in a very small bed and breakfast called Vicolo dei Pescatori.

Al Vicolo dei Pescatori in Anguillara Sabazia - view of Lake Bracciano
Click to enlarge

As you can see, we were about as close to Lake Bracciano as possible. Directly across from our bed and breakfast was the owner’s restaurant, La Nepitella. We decided not to eat there, but he was nice enough to let us purchase some wine to enjoy later on our beautiful balcony (how could you not enjoy that view?). The only problem that we had with the room was that the air conditioning was not working. The owner explained how it worked, but for us, it wasn’t working at all. It was cooler outside, and you could hear it working in the main living space of the house, but not in our individual room. No AC made it very difficult to sleep. 🙁

We got in fairly late in the evening, so after unpacking, we got back in the car and made the short drive to Bracciano. We chose to eat at Pane e Olio which was in the plaza right next to the Castle of Bracciano. Boy, am I glad that we chose this restaurant! Everything from the service to the cool breeze of outside seating, and of course, the food, was impeccable (not to mention, affordable). The food, though, was the absolute star!

We started with the Prosciutto and fresh mozzarella, which came on a bed of shredded lettuce. The combination of the salty Prosciutto and the cool, refreshing mozzarella (the freshest I’ve ever had) was simply divine. We also ordered the Pizza fritta, which were essentially these beautifully fried hunks of dough with Marinara, basil, and some grated cheese.

Pane e Olio in Bracciano - fried pizza
Click to enlarge

For our mains, I had Carbonara, but this was REAL Carbonara—the sauce was primarily egg and NO cream. Deb had the gnocchi, which was lovely, but she claims was still not quite as good as at That’s Amore in Rome.

Pane e Olio in Bracciano - Carbonara
Click to enlarge

After that completely blissful meal, we stopped at Gran Caffe Principe di Napoli, which was a wonderful bakery. I picked up a Tartufo to take away, but this one was interesting in that it was vanilla instead of the typical chocolate. We went back to the hotel, sipped our wine on the patio, and called it a night. Even with all the driving and travelling today, it ended on some really nice notes.


September 14, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

We woke up ready to embrace our only full day on Capraia Isola. We spent most of yesterday learning the layout of the two main areas of the island (the Port and the Village). Today we planned on taking some of the trails into the undeveloped areas of the island (which is most of it). Before getting started, though, we headed back to La Gelateria di Capraia for some gelato (can you think of a better breakfast than ice cream?) 😉 This time, though, I ordered the coconut flavour, and realised that’s what I’ve been missing my whole life. Not only was the flavour incredibly refreshing and tropical, but the shaved coconut added a great textural contrast with the creamy gelato.

After our morning snack, we started out walking some of the shorter trails around the island. Even being fairly close to the hotel, some of the views were outstanding.

Capraia Isola village on the seaside cliff
Click to enlarge

Two things about the island have seemed to captivate me more than anything else. First of all, there are many places where the island just comes to a cliff that drops off into the Ligurian Sea. Second of all, the colours of the sea are variegated or even banded based on depths. They range from a light almost sea foam colour to a brilliant turquoise to a deep blue that borders on navy:

Capraia Isola prickly pears and the Ligurian Sea
Click to enlarge

After moseying around some of the shorter, nearby trails, we walked the road behind San Nicola church to the cemetery. The cemetery was gated and locked, and we wouldn’t have gone in out of respect anyway. That being said, the view of the path back to town gave a lot of perspective regarding the distances and space on the island.

Capraia Isola path from the cemetery to town
Click to enlarge

After talking with one of the staff members at our hotel, we realised that we didn’t have enough time to do the longer trails on the island—some of them are upwards of 8 hours in duration and quite rugged! So, we decided on a much shorter trail that would still give us an idea of the undeveloped terrain. We went on the trail behind the Village that leads to Cala della Zurletto (Zurletto beach). The trail was moderate in spots, but it did give us a better understanding of the landscapes:

Capraia Isola - trail to Zurletto Beach 1
Click to enlarge

Capraia Isola - trail to Zurletto Beach 2
Click to enlarge

Capraia Isola - trail to Zurletto Beach 3
Click to enlarge

This particular trail, as I mentioned, leads to Zurletto beach. There is a fork where one can either head down many steps to the beach itself, or continue on back to the Village. We didn’t go all the way down to the beach because there were a lot of people down there. However, there was a bench at the top of the steps that overlooked the sea, and it was unbelievably serene! I could have sat on this bench and just looked and listened to the sea for hours on end. It was truly the first time on this trip that I’ve felt relaxed and at peace (hard to imagine that it took 12 days for that).

Capraia Isola - Zurletto Beach bench overlooking the Ligurian Sea
Click to enlarge

Unfortunately, though, we had to start heading back toward town before it got dark. We walked back to the hotel, and freshened up a little bit before setting out for the evening. We bought some more wine at the local market, and then walked back down to the Port for dinner. Like I said, we enjoyed our dinner at Al Vecchio Scorfano so much last evening that we made reservations again for tonight. In fact, we liked what we ate so much that we ordered the same things again. See yesterday’s post regarding the excellent Penne con pesto and Penne with squid. This time, though, we decided to share a litre of the house white wine, and were both taken aback by how good it was for such a low price! After our lovely dinner, we took the bus back to the hotel (for the wildly expensive fee of €1 per person 😛 ). We took our wine out by the pool, and again finished the evening sipping on it and listening to the sounds of the waves crashing up against the rocks below. I wish that I had allowed for some more time here, but alas…


September 13, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Today we were up before the sun so that we could get ready for our trip to Capraia Isola, which is a very small Tuscan island (with a population of ~400) off the coast of Livorno. We drove from our hotel to the port city of Livorno, and parked our car at S.T.P (Servizi Turistici Parcheggi), which we booked ahead of time using ParkVia. I didn’t really see any need to have a car on the island, and the price for parking for a couple of days was vastly cheaper than paying to take the car on the ferry with us.

We then walked to Porto Mediceo, which was a little confusing only because I thought that our Torremar ferry departed from Porto Livorno instead. Other than that, it wasn’t that bad of a walk. Last night, we packed all the clothes and things that we would need into one smaller suitcase, which made it even easier to get our luggage onto the ferry. We did that, in part, because of the weight limit of 20kg per person, but found out that that rule wasn’t very strictly enforced anyway.

Torremar Ferry at Porto Mediceo in Livorno
Click to enlarge

The ferry only took 2h45m to get from Livorno to Capraia, and the journey didn’t seem long at all considering how luxurious it was! There was a full bar, café, children’s play area, and extremely comfortable seating inside.

When we arrived at Capraia, we quickly found out that it is separated into two sections: the Port and the Village. Obviously, the ferry arrives in the Port area. There are many shops and restaurants in the Port, but the Village is the location of the few hotels on the island. Upon our arrival, a staff member from our hotel (Il Saracino) loaded our luggage into a car, and drove us up to the Village.

Capraia Isola - Tuscan island - Port
Click to enlarge

After unpacking our bags, and settling in, we basically walked around the Village and Port areas just taking in the beauty of the island. We found that many places close in the afternoon, and reopen later in the evening. One place—La Gelateria di Capraia—stayed open most of the day though. We stopped in for a snack (gelato, of course). This place offered some really interesting flavours, like sweet biscuits (small cookies mixed in), and shaved chocolate. It was really refreshing, and nice, especially seeing as the portions were quite large for only €4! The owner was also really hospitable and friendly (as were most people on the island).

We continued walking around the Port and Village (as 90%+ of the island is undeveloped), and it seemed like every single place was photo-worthy! We saw two of the primary attractions, Forte San Giorgio (but only from the outside), and Torre del Porto. We had a beautiful view of the Tower from our hotel room.

Torre del Porto on Capraia Isola
Click to enlarge

We found the one small store in the Village, which is Minimarket Cerri Stefano, located just a couple blocks east of San Nicola church on Via Carlo Alberto. We picked up some wine there to take back to the room, and I bought some mini Nutella tarts, because apparently I’m addicted to them. After dropping off our wine and desserts at the hotel room, we just took a quick look around the property before going back down to the Port to find a restaurant for dinner. The Il Saracino hotel property was really stunning, and had some wonderful views of the sea. There were also many beautiful places to sit and just overlook the water.

Seaside courtyard at Il Saracino hotel on Capraia Isola
Click to enlarge

We walked back down to the Port to try to decide on a restaurant for dinner. Instead of taking the main road that connects the Port and the Village, we found a small side trail that is only suitable for walking. Not only was it more comfortable to walk on this trail, but it offered some really nice views of the sea and Port.

Capraia Isola trail from the village to port
Click to enlarge

Once we got back down to the Port, we had a look at the menus of the various restaurants. Many of them looked good, but we narrowed it down to Cherie, or Al Vecchio Scorfano. We had also considered Nonno Beppe, but it was only open on the weekends during shoulder season.

We ended up eating at Al Vecchio Scorfano because the menu looked great, they opened earlier than Cherie, and the waitstaff was very welcoming. We were both really happy with our restaurant decision! We started off with some grilled vegetables, which were quite flavourful (we had both been missing them on this trip). I then ordered Penne con pesto, and it was some of the best pesto that I’ve had… ever! The pasta was cooked perfectly, with a very nice chew to it, and the pesto was unbelievably flavourful and bold. Deb got a Penne with grilled squid, and enjoyed it as well. It was more fishy than the squid dish that we had at That’s Amore in Rome, but it was very fresh. We also split a bottle of Falanghina at the restaurant. We enjoyed the meal so much that we asked the owner if we could go ahead and make reservations for tomorrow night as well. She seemed delighted, and welcomed us back the following night.

We walked back to the hotel, and had our wine on the edge of the world (near the hotel’s pool) overlooking the sea. It was truly beautiful, and the staff at Il Saracino was more than accommodating by giving us some of their wine glasses to use, even though we didn’t purchase the wine from them. Our first day in Capraia was very nice, and I’m looking forward to the adventures of hiking some of the island’s trails tomorrow.