
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. Luca Barbato
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
February 21, 2017, 02:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

February 20, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The geeks' wet dreams (February 20, 2017, 18:04 UTC)

As a follow-up to my previous rant about FOSDEM, I thought I would talk about what I call the “geeks’ wet dream”: IPv6 and public IPv4.

During the whole Twitter storm I had regarding IPv6 and FOSDEM, I said out loud that I think users should not care about IPv6 to begin with, and that IPv4 is not a legacy protocol but a current one. I stand by my description here: you can live your life on the Internet without access to IPv6, but you can’t do that without access to IPv4; at the very least you need a proxy or NAT64 to reach large parts of the network.

Having IPv6 everywhere is, for geeks, something important. But at the same time, it’s not really something consumers care about, because of what I just said. Be it for my mother, my brother-in-law or my doctor, having IPv6 access does not give them any new feature they would otherwise not have access to. So while I can also wish for IPv6 to be more readily available, I don’t really have any better excuse for it than making my personal geek life easier by allowing me to SSH to my hosts directly, or being able to access my Transmission UI from somewhere else.

Yes, there is a theoretical advantage in speed and performance, because NAT is not involved, but to be honest that is not what most people care about: a 360/36 Mbit connection is plenty fast even when double-NATed, and even Ultra HD (4K, HDR) content is well served by it. You could argue that an even lower-latency network may enable more technologies to become available, but that not only sounds to me like a bit of a stretch, it also misses the point once again.

I already linked to Todd’s post, and I don’t need to repeat it here, but if it’s about the technologies that can be enabled, that should be something the service providers care about. Which, by the way, is what is happening already: the IPv6 forerunners are the big players, companies that are effectively looking for ways to enable better technology. But at the same time, a number of other approaches have been pursued so that better performance can be gained without gating it on a new protocol that, as I said, really brings nothing to the table.

Indeed, if you look at protocols such as QUIC or HTTP/2, you can see how they reduce the number of ports that need to be opened, and that has a lot to do with the double-NAT scenario that is more and more common in homes. Right now I’m technically writing from behind three layers of NAT: the carrier-grade NAT used by my ISP to deploy DS-Lite, the NAT applied by my provider-supplied router, and the last NAT set up by my own router, running LEDE. I don’t even have working IPv6 right now for various reasons, and you know what? The bottleneck is not the NATs but rather the WiFi.

As I said before, I’m not doing a full Todd and thinking that ignoring IPv6 is a good idea, or that we should not spend any time fixing things that break with it. I just think that we should get a sense of proportion and figure out what the relative importance of IPv6 is in this world. As I said in the other post, there are plenty of newcomers who do not need to be told to screw themselves if their systems don’t support IPv6.

And honestly, the most likely scenario to test for is a dual-stack network in which some of the applications or services don’t work correctly because they misunderstand the system, like OVH did. So I would have kept the default network dual-stack, and provided a single-stack, NAT64 network as a “perk” for those who actually do care to test and improve apps and services, and who also have a clue not to go and ping a years-old bug that was fixed but not fully released.

But there are more reasons why I call these dreams. A lot of the reasoning behind IPv6 appears to be grounded in the idea that if geeks want something, it has to be good for everybody, even when everybody else doesn’t know about it: IPv6, publicly routed IPv4, static addresses and unfiltered network access. But that is not always the case.

Indeed, if you look even just at IPv6 addressing, and in particular at how stateless addressing works, you can see that there have been at least three different choices, at different times, for how to generate addresses:

  • Modified EUI-64 was the original addressing option, and for a while the only one supported; it uses the MAC address of the network card the address is assigned to, and that is quite the privacy risk, as it means you can extract a unique identifier for a given machine and recognize every single request coming from that machine even as it moves across different IPv6 prefixes (a small sketch of the derivation follows this list).
  • RFC4941 privacy extensions were introduced to address that point. These are usually enabled by now, but they are not stable: Wikipedia correctly calls them temporary addresses, and they are usually fine as source addresses when connecting to an external service. This makes passive detection of the same machine across networks impossible; indeed, it makes it impossible to recognize a given machine even within the same network, because the address changes over time. This is good on one side, but it means that you do need session cookies to keep login sessions active, as you can’t (and shouldn’t) rely on the persistence of an IPv6 address. It also still allows active detection, at least of the presence of a given host within a network, as it does not by default disable the EUI-64 address; it just doesn’t use it to connect to services.
  • RFC7217 adds yet another alternative for address selection: it provides a stable-within-a-network address, making it possible to keep long-running connections alive, while at the same time ensuring that at least simple active probing does not give away the presence of a known machine in the network. For more details, refer to Lubomir Rintel’s post, as he went into more detail than I will on the topic.
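
To make the EUI-64 privacy point concrete, here is a minimal sketch in C of how a modified EUI-64 interface identifier is derived from a MAC address; the prefix and MAC below are made-up example values, and this only illustrates the mechanism rather than any particular stack's implementation. Note how anyone who sees the resulting address can trivially recover the MAC:

#include <stdio.h>
#include <stdint.h>

/* Modified EUI-64 derivation (RFC 4291, appendix A), illustrative only:
 * the 48-bit MAC is split in half, ff:fe is inserted in the middle, and the
 * universal/local bit of the first octet is flipped. Reversing these steps
 * recovers the MAC, which is why EUI-64 addressing is a tracking risk. */
static void modified_eui64(const uint8_t mac[6], uint8_t iid[8])
{
    iid[0] = mac[0] ^ 0x02;   /* flip the universal/local bit */
    iid[1] = mac[1];
    iid[2] = mac[2];
    iid[3] = 0xff;            /* fixed ff:fe filler inserted in the middle */
    iid[4] = 0xfe;
    iid[5] = mac[3];
    iid[6] = mac[4];
    iid[7] = mac[5];
}

int main(void)
{
    const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
    uint8_t iid[8];
    modified_eui64(mac, iid);

    /* The same interface identifier follows the machine across any prefix. */
    printf("2001:db8:1::%02x%02x:%02x%02x:%02x%02x:%02x%02x\n",
           iid[0], iid[1], iid[2], iid[3], iid[4], iid[5], iid[6], iid[7]);
    return 0;
}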

Those of you quickest on the uptake will probably notice the common thread in all these problems: it all starts from designing the addressing scheme on the assumption that the most important property of an address is to be stable and predictable. Which makes perfect sense for servers, and for the geeks who want to run their own home server. But for the average person, these are all additional risks that do not bring any additional feature they desire!

There is one more extension to this: static IPv4 addresses suffer from the same problem. If your connection always comes from the same IPv4 address, it does not matter how private your browser may be: your connections will be very easy to track across servers, even passively. What is the remaining advantage of a static IP address? Certainly not authentication, as in 2017 you can’t claim ignorance of source address spoofing.

And by the way, this is the same reason why providers started issuing dynamic IPv6 prefixes: you don’t want a household (if not strictly a person) to be tied to the same address forever, otherwise passive tracking becomes trivial. And yes, this is a pain for the geek in me, but it makes sense.

Static, publicly routable IP addresses make accessing services running at home much easier, but at the same time they put you at risk. We have all been making fun of the “Internet of Things”, yet it appears everybody wants to be able to make their own devices accessible from the outside, somehow, even though that is likely the most obvious way for external attackers to get into one’s unprotected internal network.

There are of course ways around this that do not require such a publicly routable address, and they are usually more secure. On the other hand, they are not quite a panacea: they effectively require a central call-back server to exist and be reachable, and usually that is tied to a single company, with custom protocols. As far as I know, no such open-source call-back system exists, and that still surprises me.

Conclusions? IPv6, just like static and publicly routable IP addresses, is an interesting tool that is very useful to technologists, and it is true that, if you consider the original intentions behind the Internet, these are pretty basic necessities. But if you think that the world, and thus the requirements and relative importance of features, has not changed, then I’m afraid you may be out of touch.

Agostino Sarubbo a.k.a. ago (homepage, bugs)
audiofile: multiple ubsan crashes (February 20, 2017, 16:04 UTC)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered multiple crashes because of undefined behavior.

The complete UBSan output:

# sfconvert @@ out.mp3 format aiff
/tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/WAVE.cpp:289:14: runtime error: index 256 out of bounds for type 'int16_t [256][2]'
/tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/WAVE.cpp:290:14: runtime error: index 256 out of bounds for type 'int16_t [256][2]'

Reproducer:
https://github.com/asarubbo/poc/blob/master/00191-audiofile-indexoob
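
For illustration, an “index out of bounds” report like the one above usually comes from a count or index read straight from the file being used against a fixed-size table without validation. The following is a hypothetical sketch of that pattern and of the missing check, not the actual WAVE.cpp code:

#include <stdint.h>

/* Hypothetical sketch of the bug class flagged above; it is NOT the actual
 * WAVE.cpp code.  A fixed 256-entry coefficient table is filled using a
 * count taken directly from the file header; a crafted header can claim
 * more entries than the table holds, and the loop then indexes past it,
 * which UBSan reports as an out-of-bounds index. */
#define MAX_COEFS 256

static int16_t coefs[MAX_COEFS][2];

static int read_coefs(const int16_t *src, int count)
{
    if (count < 0 || count > MAX_COEFS)   /* the missing validation */
        return -1;
    for (int i = 0; i < count; i++) {
        coefs[i][0] = src[2 * i];
        coefs[i][1] = src[2 * i + 1];
    }
    return 0;
}

int main(void)
{
    int16_t data[2 * MAX_COEFS] = { 0 };
    /* A claimed count of 257 is rejected instead of written past the table. */
    return read_coefs(data, 257) == -1 ? 0 : 1;
}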

##########################################

# sfconvert @@ out.mp3 format aiff
/tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/sfcommands/sfconvert.c:327:42: runtime error: signed integer overflow: 65536 * 252936 cannot be represented in type 'int'

Reproducer:
https://github.com/asarubbo/poc/blob/master/00192-audiofile-signintoverflow-sfconvert
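
The signed-integer-overflow report is a different bug class: two int values influenced by the input file are multiplied and the mathematical result no longer fits in an int. The following is a hypothetical sketch of that pattern and of a wide-type check that avoids it (it is not the actual sfconvert.c code); built with gcc -fsanitize=undefined, the buggy multiplication produces a report of the same shape as the one above:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical illustration of the bug class flagged above; NOT the actual
 * sfconvert.c code.  Two int values derived from the input are multiplied
 * to size a buffer; the product exceeds INT_MAX, which is undefined
 * behaviour for signed int and is what UBSan reports. */
int main(void)
{
    int frames      = 252936;     /* value taken from a crafted file */
    int frame_bytes = 65536;      /* bytes needed per frame chunk */

    /* Buggy: 65536 * 252936 does not fit in a 32-bit int. */
    int buffer_size = frame_bytes * frames;
    (void)buffer_size;

    /* Safer: do the arithmetic in a 64-bit type and sanity-check the result
     * before allocating (the 1 GiB cap is an arbitrary example limit). */
    int64_t wanted = (int64_t)frame_bytes * frames;
    if (wanted <= 0 || wanted > ((int64_t)1 << 30)) {
        fprintf(stderr, "unreasonable buffer size, rejecting input\n");
        return EXIT_FAILURE;
    }
    void *buffer = malloc((size_t)wanted);
    free(buffer);
    return 0;
}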

##########################################

# sfconvert @@ out.mp3 format aiff
/tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/modules/MSADPCM.cpp:115:27: runtime error: signed integer overflow: 5512570 * 409 cannot be represented in type 'int'

Reproducer:
https://github.com/asarubbo/poc/blob/master/00193-audiofile-signintoverflow-MSADPCM

##########################################

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
These bugs were discovered by Agostino Sarubbo of Gentoo.

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
These bugs were found with American Fuzzy Lop; in the command lines above, @@ is the placeholder that AFL replaces with the path of the mutated input file.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-multiple-ubsan-crashes

audiofile: heap-based buffer overflow in Expand3To4Module::run (SimpleModule.h) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a heap overflow.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
==1731==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fd325141800 at pc 0x7fd324dab3e7 bp 0x7fff5fd78e20 sp 0x7fff5fd78e18                                                                                                                                       
WRITE of size 4 at 0x7fd325141800 thread T0                                                                                                                                                                                                                                    
    #0 0x7fd324dab3e6 in void Expand3To4Module::run(unsigned char const*, int*, int) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/SimpleModule.h:268:14                                                                           
    #1 0x7fd324dab3e6 in Expand3To4Module::run(Chunk&, Chunk&) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/SimpleModule.h:241                                                                                                         
    #2 0x7fd324d8105a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14                                                                                                                                             
    #3 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29                                                                                                                                                 
    #4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #5 0x7fd323e5678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
    #6 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                               
0x7fd325141800 is located 0 bytes to the right of 524288-byte region [0x7fd3250c1800,0x7fd325141800)                                                                                                                                                                           
allocated by thread T0 here:                                                                                                                                                                                                                                                   
    #0 0x4d2d08 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64                                                                                                                                       
    #1 0x50bb48 in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:327:17                                                                                                                                                 
    #2 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #3 0x7fd323e5678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
                                                                                                                                                                                                                                                                               
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/SimpleModule.h:268:14 in void Expand3To4Module::run(unsigned char const*, int*, int)                                                 
Shadow bytes around the buggy address:                                                                                                                                                                                                                                         
  0x0ffae4a202b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ffae4a202c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ffae4a202d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ffae4a202e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ffae4a202f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
=>0x0ffae4a20300:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ffae4a20310: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ffae4a20320: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ffae4a20330: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ffae4a20340: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ffae4a20350: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==1731==ABORTING
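
For context, a heap-buffer-overflow report of this shape comes from writing past the end of a malloc'd buffer. The following self-contained sketch is hypothetical (it is not the audiofile code), but compiled with gcc -g -fsanitize=address it aborts with a report very similar to the one above:

#include <stdlib.h>

/* Hypothetical illustration of the bug class reported above; NOT the
 * audiofile code.  The producer writes 'frames + 1' samples into a buffer
 * sized for 'frames', so the last store lands one element past the end of
 * the allocation, and ASan flags the first out-of-bounds write. */
static void expand(short *out, int frames)
{
    for (int i = 0; i <= frames; i++)   /* off-by-one: should be i < frames */
        out[i] = (short)i;
}

int main(void)
{
    int frames = 1024;
    short *buf = malloc((size_t)frames * sizeof *buf);
    if (!buf)
        return EXIT_FAILURE;
    expand(buf, frames);                /* ASan aborts on the write at buf[1024] */
    free(buf);
    return 0;
}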

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00190-audiofile-heapoverflow-Expand3To4Module-run

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-expand3to4modulerun-simplemodule-h

audiofile: divide-by-zero in BlockCodec::reset1 (BlockCodec.cpp) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a division by zero.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
==3538==ERROR: AddressSanitizer: FPE on unknown address 0x7f86a8cffe14 (pc 0x7f86a8cffe14 bp 0x7ffe41d2ae00 sp 0x7ffe41d2adf0 T0)                                                                                                                                              
    #0 0x7f86a8cffe13 in BlockCodec::reset1() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:74:61                                                                                                                        
    #1 0x7f86a8d0b794 in ModuleState::reset(_AFfilehandle*, Track*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/ModuleState.cpp:218:9                                                                                                 
    #2 0x7f86a8d0b794 in ModuleState::setup(_AFfilehandle*, Track*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/ModuleState.cpp:190                                                                                                   
    #3 0x7f86a8ced43c in afGetFrameCount /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/format.cpp:205:41                                                                                                                                        
    #4 0x50bb5c in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:329:29                                                                                                                                                 
    #5 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #6 0x7f86a7dbe78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
    #7 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                               
AddressSanitizer can not provide additional info.                                                                                                                                                                                                                              
SUMMARY: AddressSanitizer: FPE /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:74:61 in BlockCodec::reset1()                                                                                                               
==3538==ABORTING
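
The “FPE” here is an integer division by zero. A hypothetical sketch of the bug class and of the validation that prevents it follows; it is not the actual BlockCodec.cpp code:

#include <stdio.h>

/* Hypothetical illustration of the bug class reported above; NOT the
 * BlockCodec.cpp code.  A block-based codec computes how many blocks a
 * given frame count spans by dividing by a frames-per-block value read
 * straight from the file header.  A crafted file can set that field to
 * zero, and the integer division then raises SIGFPE. */
struct block_header {
    int frames_per_block;   /* attacker-controlled, taken from the file */
    int total_frames;
};

static int block_count(const struct block_header *h)
{
    if (h->frames_per_block <= 0)       /* the missing validation */
        return -1;
    return h->total_frames / h->frames_per_block;
}

int main(void)
{
    struct block_header bad = { 0, 100000 };    /* frames_per_block == 0 */
    printf("blocks: %d\n", block_count(&bad));  /* rejected instead of crashing */
    return 0;
}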

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00189-audiofile-fpe-BlockCodec-reset1

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-divide-by-zero-in-blockcodecreset1-blockcodec-cpp

audiofile: heap-based buffer overflow in ulaw2linear_buf (G711.cpp) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a heap overflow.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
WRITE of size 2 at 0x7fb583d33800 thread T0                                                                                                                                                                                                                                    
    #0 0x7fb58398c8b1 in ulaw2linear_buf(unsigned char const*, short*, int) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:42:13                                                                                                
    #1 0x7fb58398c8b1 in G711::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:206                                                                                                                                     
    #2 0x7fb58397305a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14                                                                                                                                             
    #3 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29                                                                                                                                                 
    #4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #5 0x7fb582a4878f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
    #6 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                               
0x7fb583d33800 is located 0 bytes to the right of 917504-byte region [0x7fb583c53800,0x7fb583d33800)                                                                                                                                                                           
allocated by thread T0 here:                                                                                                                                                                                                                                                   
    #0 0x4d2d08 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64                                                                                                                                       
    #1 0x50bb48 in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:327:17                                                                                                                                                 
    #2 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #3 0x7fb582a4878f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
                                                                                                                                                                                                                                                                               
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:42:13 in ulaw2linear_buf(unsigned char const*, short*, int)                                                                      
Shadow bytes around the buggy address:                                                                                                                                                                                                                                         
  0x0ff73079e6b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ff73079e6c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ff73079e6d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ff73079e6e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0ff73079e6f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
=>0x0ff73079e700:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ff73079e710: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ff73079e720: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ff73079e730: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ff73079e740: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
  0x0ff73079e750: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3                                                                                                                                                                                                                                                  
  Stack partial redzone:   f4                                                                                                                                                                                                                                                  
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==2586==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00188-audiofile-heapoverflow-ulaw2linear_buf

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-ulaw2linear_buf-g711-cpp

audiofile: divide-by-zero in BlockCodec::runPull (BlockCodec.cpp) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a division by zero.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
==2529==ERROR: AddressSanitizer: FPE on unknown address 0x7ff06b121920 (pc 0x7ff06b121920 bp 0x7ffd0ddf2d90 sp 0x7ffd0ddf2d00 T0)                                                                                                                                              
    #0 0x7ff06b12191f in BlockCodec::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:50:46                                                                                                                       
    #1 0x7ff06b15ac20 in RebufferModule::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/RebufferModule.cpp:122:3                                                                                                               
    #2 0x7ff06b10b05a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14                                                                                                                                             
    #3 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29                                                                                                                                                 
    #4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #5 0x7ff06a1e078f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
    #6 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                               
AddressSanitizer can not provide additional info.                                                                                                                                                                                                                              
SUMMARY: AddressSanitizer: FPE /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:50:46 in BlockCodec::runPull()                                                                                                              
==2529==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00187-audiofile-fpe-BlockCodec-runPull

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-divide-by-zero-in-blockcodecrunpull-blockcodec-cpp

audiofile: heap-based buffer overflow in MSADPCM::decodeBlock (MSADPCM.cpp) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a heap overflow.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
==2512==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62d00001c45a at pc 0x7fe7476f387d bp 0x7ffc3b0e3bf0 sp 0x7ffc3b0e3be8
WRITE of size 2 at 0x62d00001c45a thread T0
    #0 0x7fe7476f387c in MSADPCM::decodeBlock(unsigned char const*, short*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/MSADPCM.cpp:222:14
    #1 0x7fe7476c1ac9 in BlockCodec::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:55:3
    #2 0x7fe7476fac20 in RebufferModule::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/RebufferModule.cpp:122:3
    #3 0x7fe7476ab05a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14
    #4 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29
    #5 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
    #6 0x7fe74678078f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #7 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)

0x62d00001c45a is located 0 bytes to the right of 32858-byte region [0x62d000014400,0x62d00001c45a)
allocated by thread T0 here:
    #0 0x4d2d08 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64
    #1 0x7fe746419687 in operator new(unsigned long) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/libstdc++.so.6+0xb2687)
    #2 0x7fe7476af43c in afGetFrameCount /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/format.cpp:205:41
    #3 0x50bb5c in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:329:29
    #4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
    #5 0x7fe74678078f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/MSADPCM.cpp:222:14 in MSADPCM::decodeBlock(unsigned char const*, short*)
Shadow bytes around the buggy address:
  0x0c5a7fffb830: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5a7fffb840: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5a7fffb850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5a7fffb860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5a7fffb870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c5a7fffb880: 00 00 00 00 00 00 00 00 00 00 00[02]fa fa fa fa
  0x0c5a7fffb890: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5a7fffb8a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5a7fffb8b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5a7fffb8c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5a7fffb8d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==2512==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00186-audiofile-heapoverflow-MSADPCM-decodeBlock

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-msadpcmdecodeblock-msadpcm-cpp

audiofile: heap-based buffer overflow in IMA::decodeBlockWAVE (IMA.cpp) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a heap overflow.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
==2486==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62f0000286e8 at pc 0x7fc5db36626e bp 0x7ffcecb1cbf0 sp 0x7ffcecb1cbe8                                                                                                                                       
WRITE of size 2 at 0x62f0000286e8 thread T0                                                                                                                                                                                                                                    
    #0 0x7fc5db36626d in IMA::decodeBlockWAVE(unsigned char const*, short*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:188:13                                                                                                
    #1 0x7fc5db365671 in IMA::decodeBlock(unsigned char const*, short*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:110:10                                                                                                    
    #2 0x7fc5db361ac9 in BlockCodec::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:55:3                                                                                                                        
    #3 0x7fc5db39ac20 in RebufferModule::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/RebufferModule.cpp:122:3                                                                                                               
    #4 0x7fc5db34b05a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14                                                                                                                                             
    #5 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29                                                                                                                                                 
    #6 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #7 0x7fc5da42078f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
    #8 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                               
0x62f0000286e8 is located 0 bytes to the right of 49896-byte region [0x62f00001c400,0x62f0000286e8)                                                                                                                                                                            
allocated by thread T0 here:                                                                                                                                                                                                                                                   
    #0 0x4d2d08 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64                                                                                                                                       
    #1 0x7fc5da0b9687 in operator new(unsigned long) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/libstdc++.so.6+0xb2687)                                                                                                                                                           
    #2 0x7fc5db34f43c in afGetFrameCount /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/format.cpp:205:41                                                                                                                                        
    #3 0x50bb5c in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:329:29                                                                                                                                                 
    #4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17                                                                                                                                                          
    #5 0x7fc5da42078f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                                                                                     
                                                                                                                                                                                                                                                                               
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:188:13 in IMA::decodeBlockWAVE(unsigned char const*, short*)                                                                      
Shadow bytes around the buggy address:                                                                                                                                                                                                                                         
  0x0c5e7fffd080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x0c5e7fffd090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5e7fffd0a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5e7fffd0b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c5e7fffd0c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c5e7fffd0d0: 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa
  0x0c5e7fffd0e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5e7fffd0f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5e7fffd100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5e7fffd110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c5e7fffd120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==2486==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00185-audiofile-heapoverflow-IMA-decodeBlockWAVE

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-imadecodeblockwave-ima-cpp

audiofile: heap-based buffer overflow in alaw2linear_buf (G711.cpp) (February 20, 2017)

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a heap overflow.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff
==2480==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f5eb894d800 at pc 0x7f5eb85a699f bp 0x7ffe19064df0 sp 0x7ffe19064de8
WRITE of size 2 at 0x7f5eb894d800 thread T0
    #0 0x7f5eb85a699e in alaw2linear_buf(unsigned char const*, short*, int) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:54:13
    #1 0x7f5eb85a699e in G711::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:209
    #2 0x7f5eb858d05a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14
    #3 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29
    #4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
    #5 0x7f5eb766278f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #6 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)

0x7f5eb894d800 is located 0 bytes to the right of 393216-byte region [0x7f5eb88ed800,0x7f5eb894d800)
allocated by thread T0 here:
    #0 0x4d2d08 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64
    #1 0x50bb48 in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:327:17
    #2 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
    #3 0x7f5eb766278f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:54:13 in alaw2linear_buf(unsigned char const*, short*, int)
Shadow bytes around the buggy address:
  0x0fec57121ab0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0fec57121ac0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0fec57121ad0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0fec57121ae0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0fec57121af0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0fec57121b00:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0fec57121b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0fec57121b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0fec57121b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0fec57121b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0fec57121b50: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==2480==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00184-audiofile-heapoverflow-alaw2linear_buf

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-alaw2linear_buf-g711-cpp

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz on it discovered a global overflow.

The complete ASan output:

# sfconvert @@ out.mp3 format aiff                                                                                                                                                                                                                                               
==1779==ERROR: AddressSanitizer: global-buffer-overflow on address 0x7f0add7e6a7a at pc 0x7f0add77c221 bp 0x7ffe13caabf0 sp 0x7ffe13caabe8
READ of size 2 at 0x7f0add7e6a7a thread T0
    #0 0x7f0add77c220 in decodeSample(adpcmState&, unsigned char) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:144:13
    #1 0x7f0add77c220 in IMA::decodeBlockWAVE(unsigned char const*, short*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:186
    #2 0x7f0add77b671 in IMA::decodeBlock(unsigned char const*, short*) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:110:10
    #3 0x7f0add777ac9 in BlockCodec::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/BlockCodec.cpp:55:3
    #4 0x7f0add7b0c20 in RebufferModule::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/RebufferModule.cpp:122:3
    #5 0x7f0add76105a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14
    #6 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29
    #7 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
    #8 0x7f0adc83678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #9 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)

0x7f0add7e6a7a is located 6 bytes to the left of global variable 'indexTable' defined in '/tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:116:21' (0x7f0add7e6a80) of size 16
0x7f0add7e6a7a is located 40 bytes to the right of global variable 'stepTable' defined in '/tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:122:22' (0x7f0add7e69a0) of size 178
SUMMARY: AddressSanitizer: global-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/IMA.cpp:144:13 in decodeSample(adpcmState&, unsigned char)
Shadow bytes around the buggy address:
  0x0fe1dbaf4cf0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0fe1dbaf4d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 07 f9 f9
  0x0fe1dbaf4d10: f9 f9 f9 f9 00 00 00 00 00 00 00 04 f9 f9 f9 f9
  0x0fe1dbaf4d20: 00 00 00 00 00 00 01 f9 f9 f9 f9 f9 00 00 01 f9
  0x0fe1dbaf4d30: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0fe1dbaf4d40: 00 00 00 00 00 00 00 00 00 00 02 f9 f9 f9 f9[f9]
  0x0fe1dbaf4d50: 00 00 f9 f9 f9 f9 f9 f9 00 00 03 f9 f9 f9 f9 f9
  0x0fe1dbaf4d60: 00 00 05 f9 f9 f9 f9 f9 00 00 00 00 00 00 00 00
  0x0fe1dbaf4d70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0fe1dbaf4d80: 00 00 00 00 01 f9 f9 f9 f9 f9 f9 f9 00 00 00 00
  0x0fe1dbaf4d90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==1779==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00183-audiofile-globaloverflow-decodeSample

Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-global-buffer-overflow-in-decodesample-ima-cpp

Description:
audiofile is a C-based library for reading and writing audio files in many common formats.

A fuzz with a WAV file as input produced a heap overflow.

The complete ASan output:

# sfinfo $FILE
==6051==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61a00001f708 at pc 0x0000004513de bp 0x7ffc71379b20 sp 0x7ffc713792d0
WRITE of size 2 at 0x61a00001f708 thread T0
    #0 0x4513dd in read /tmp/portage/sys-devel/llvm-3.9.1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:765
    #1 0x7fd944373b2c in bool readValue(File*, short*) /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/FileHandle.cpp:353:12
    #2 0x7fd944373b2c in bool readSwap(File*, short*, int) /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/FileHandle.cpp:375
    #3 0x7fd944373b2c in _init /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/FileHandle.cpp:397
    #4 0x7fd94439ce2f in WAVEFile::parseFormat(Tag const&, unsigned int) /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/WAVE.cpp:289:5
    #5 0x7fd9443a1568 in WAVEFile::readInit(_AFfilesetup*) /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/WAVE.cpp:733:13
    #6 0x7fd9443b4fb9 in _afOpenFile(int, File*, char const*, _AFfilehandle**, _AFfilesetup*) /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/openclose.cpp:356:15
    #7 0x7fd9443b6331 in afOpenFile /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/openclose.cpp:217:6
    #8 0x50a278 in printfileinfo /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/sfcommands/printinfo.c:45:22
    #9 0x509f98 in main /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/sfcommands/sfinfo.c:113:4
    #10 0x7fd94347f78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
    #11 0x419b68 in _init (/usr/bin/sfinfo+0x419b68)

0x61a00001f708 is located 0 bytes to the right of 1160-byte region [0x61a00001f280,0x61a00001f708)
allocated by thread T0 here:
    #0 0x4d2928 in malloc /tmp/portage/sys-devel/llvm-3.9.1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64
    #1 0x7fd942ede687 in operator new(unsigned long) (/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/libstdc++.so.6+0xb2687)
    #2 0x7fd9443b4d63 in _afOpenFile(int, File*, char const*, _AFfilehandle**, _AFfilesetup*) /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/openclose.cpp:337:15
    #3 0x7fd9443b6331 in afOpenFile /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/libaudiofile/openclose.cpp:217:6
    #4 0x50a278 in printfileinfo /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/sfcommands/printinfo.c:45:22
    #5 0x509f98 in main /tmp/portage/media-libs/audiofile-0.3.6-r3/work/audiofile-0.3.6/sfcommands/sfinfo.c:113:4
    #6 0x7fd94347f78f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/sys-devel/llvm-3.9.1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:765 in read
Shadow bytes around the buggy address:
  0x0c347fffbe90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c347fffbea0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c347fffbeb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c347fffbec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c347fffbed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c347fffbee0: 00[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c347fffbef0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c347fffbf00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c347fffbf10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c347fffbf20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c347fffbf30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==6051==ABORTING

Affected version:
0.3.6

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00135-audiofile-heapoverflow-readValue

Timeline:
2017-01-30: bug discovered and reported to upstream
2017-02-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-readvalue-filehandle-cpp

February 18, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hi!

Just a quick tip on how to easily create a Fedora chroot environment from (even a non-Fedora) Linux distribution.

I am going to show the process on Debian stretch, but it should not be much different elsewhere.

Since I am going to leverage pip/PyPI, I need it available — that and a few widespread non-Python dependencies:

# apt install python-pip db-util lsb-release rpm yum
# pip install image-bootstrap pychroot

Now for the actual chroot creation; the process and usage are very close to Debian's debootstrap:

# directory-bootstrap fedora --release 25 /var/lib/fedora_25_chroot

Done. Now let’s prove we have actual Fedora 25 in there. For lsb_release we need the redhat-lsb package here, but the chroot is functional even before that.

# pychroot /var/lib/fedora_25_chroot dnf -y install redhat-lsb
# pychroot /var/lib/fedora_25_chroot lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch:[..]:printing-4.1-noarch
Distributor ID: Fedora
Description:    Fedora release 25 (Twenty Five)
Release:        25
Codename:       TwentyFive

Note the use of pychroot, which, among other things, takes care of bind-mounting /dev and friends out of the box.
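
If you prefer an interactive session rather than one-off commands, the same tool can drop you into a shell too; a minimal example (pick whatever shell you like):

# pychroot /var/lib/fedora_25_chroot /bin/bash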

directory-bootstrap is part of image-bootstrap and, besides Fedora, also supports creation of chroots for Arch Linux and Gentoo.

See you 🙂

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Reverse Engineering is just the first step (February 18, 2017, 13:03 UTC)

Last year I said that reverse engineering obsolete systems is useful, giving as an example adding Coreboot support for very old motherboards, which are simpler and whose components are more likely to have been described somewhere already. One thing that I realized I didn’t make very clear in that post is that there is an important step in reverse engineering: documenting. As you can imagine from this blog, I think that documenting the reverse engineering process and its results is important, but I found out that this is definitely not the case for everybody.

On the particularly good side, going to 33c3 left a positive impression on me. Talks such as The Ultimate GameBoy Talk were excellent: Michael Steil did an awesome job of describing a lot of the unknown details of Nintendo’s most popular handheld. He also did a great job of showing practical matters, such as which tricks various games used to implement things that at first sight would look impossible. And this is only one of his talks; he has a series that goes on year after year. I’ve watched his talk about the Commodore 64, and the only reason it’s less enjoyable to watch is that the recording quality suffers from its age.

In other posts I already referenced Micah’s videos. These have also been extremely nice to start watching, as she does a great job of explaining complex concepts, and even the “stream of consciousness” streams are very interesting and a good way to learn new tricks. What attracted me to her content, though, is the following video:

I have been using Wacom tablets for years, and I had no idea how they really worked behind the scenes. Not only does she give a great explanation of the technology in general, but the teardown of the mouse was also awesome, with full schematics and an explanation of the small components. No wonder I signed up for her Patreon right away: she deserves to be better known and to have a bigger following. And if funding her means spreading more knowledge around, well, then I’m happy to do my bit.

For the free software, open source and hacking community, reverse engineering is only half the process. The endgame is not for one person to know exactly how something works, but rather for the collectivity to gain more insight on things, so that more people have access to the information and can even improve on it. The community needs not only to help with that but also to prioritise projects that share information. And that does not just mean writing blogs about things. I said this before: blogs don’t replace documentation. You can see blogs as Micah’s shop-streaming videos, while documentation is more like her video about the tablets: it synthesizes the information in an actually usable form, rather than just throwing it around.

I have a similar problem of course: my blog posts are usually a bit of a stream of consciousness, and they do not serve a useful purpose in capturing the factual state of information. Take for example my post about reverse engineering the OneTouch Verio and its rambling on, then compare it with the proper protocol documentation. The latter is the actually important product, compared to my ramblings, and it is the one I can be proud of. I would also argue that documenting these things in an easily consumable form is more important than writing tools implementing them, as those only cover part of the protocol and in particular can only leverage my skills, which do not include statistical, pharmaceutical or data visualisation skills.

Unfortunately there are obstacles to this idea, of course. Sometimes reverse engineering documentation is attacked by manufacturers even more than the code implementing the same information. So for instance, while I have some information I still haven’t posted about a certain gaming mouse, I already know that the libratbag people do not want documentation of the protocols in their repository or wiki, because it causes them more headaches than the code. And then of course there is the problem of hosting this documentation somewhere.

I have been pushing my documentation to GitHub, hoping nobody causes a stink, but the good thing about using git rather than a wiki or similar tools is exactly that you can just move it around without losing information. This is not always the case: a lot of documentation is still nowadays only available either as part of the code itself, or on various people’s homepages. And there are at least two things that can happen with that. The first is the most obvious and morbid one: the author of the documentation dies, and the documentation disappears once their domain registration expires, or whatever else; and if the homepage is hosted at a given university or other academic institution, it may very well be that the homepage disappears before the person does anyway.

I know a few other alternatives for storing this kind of data have been suggested, including a common wiki akin to Wikipedia but allowing original research, but I am still uncertain that is going to be very helpful. The most obvious thing I can think of is making sure this information can actually be published in books. And I think that at least No Starch Press has been doing a lot for this, publishing extremely interesting books including Designing BSD Rootkits and, more recently, Rootkits and Bootkits, which is still in Early Access. A big kudos to Bill for this.

From my side, I promise I’ll try to organize my findings on anything I work on to the best of my ability, and possibly present them in a different form than just a blog, because the community deserves better.

February 14, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2017-02-14 Blog moved to Gentoo blog (February 14, 2017, 10:36 UTC)

The blog has been moved here:
https://blogs.gentoo.org/alicef/

Still not sure if the move will be temporary or not.
So for now I will keep both blogs.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GnuPG and CCID support (February 14, 2017, 00:04 UTC)

As many of you probably know already, I use GnuPG with OpenPGP cards not only for encryption but also for SSH access. As some or others probably noticed on Twitter, I recently decided to restore to life my old Dell Latitude laptop, but since it’s a fairly old system I decided to try running something else rather than Gentoo on it, for the first time in many years. I settled on Antergos, which is a more user friendly install option for Arch Linux. Since I know a number of current or past Arch Linux developers, it seemed fitting.

Beside the obvious problem of Arch not being Gentoo, the first big problem I found was being unable to use the smartcard reader in the Dell laptop. The problem is that the BCM5880 device this laptop comes with is not quite the most friendly CCID device out there. Actually, it might have better firmware available, but I need to figure out how to install it, since that requires a Windows install and I don’t have one on this laptop.

But why did this work fine with Gentoo and not with Arch? Well, the answer is that the default install of GnuPG from pacman enables GnuPG’s own CCID driver. In my original drawing on the matter I did not capture the fact that GnuPG has its own CCID driver — although I did capture that in a follow-up diagram. This driver is fairly minimal. The idea is that for simple devices, which implement the standard pretty closely, the driver works fine, and it reduces the number of dependencies needed to get a smartcard or token working. Unfortunately the number of devices that implement the CCID standard correctly is fairly low in my experience, and the end result is that you still end up having to use pcsc-lite to get this to work as intended.

Luckily, Arch Linux wiki has the answer and you do not need to rebuild GnuPG for this to work. Yay.

It may be easy to say “fix GnuPG”, but the list of devices that are not CCID compliant is big, and Ludovic already has workarounds for most of them in the CCID driver. So why should that list of broken devices be repeated somewhere else? There really is no need. If anything, you may ask why the CCID driver is not an interface-independent library that GnuPG can just access directly, and there are actually a few good reasons why this is not the case. The first of these is that it would be pointless to depend on the small driver but not on the pcsclite package that implements the otherwise more widely available interface.

As it turns out, though, the particular Gemalto device I use as my primary card nowadays is actually pretty much CCID compliant, so it could be used with GnuPG’s own CCID driver, sparing me the need to install and maintain pcsclite and ccid on my other laptop. It would also mean I could avoid maintaining the packages in Gentoo altogether. But here is one of the biggest catches: the devices are not accessible to the console user by default. Indeed, even when I made it easier to use pcscd and ccid a whopping six years ago, I only set up the udev rules when the CCID driver was installed.

You could expect systemd to just install, by default, a rule that makes CCID standard devices accessible to the console user, and that would make it possible to use GnuPG and its CCID driver with whichever common standard-compliant devices are available. I hope (but have not tested) that these include the YubiKey 4 (I don’t care about the NEO, since then you also need to make sure you have the right firmware, as the older ones have a nasty PIN bypass vulnerability).
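
To give an idea of what such a default could look like, here is a minimal sketch of a udev rule that grants the console user access to a reader through systemd-logind's uaccess tag; the vendor and product IDs below are placeholders, not those of any specific device:

# /etc/udev/rules.d/70-ccid-example.rules (illustrative sketch, placeholder IDs)
SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", TAG+="uaccess"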

But then again, I wonder if there are any security issues I’m not anticipating that may be impeding a plan like this. Also, given pcsclite’s much wider compatibility with devices, not only for CCID but for other non-standard protocols too, I would rather be interested to know whether you could run pcscd as a user directly, maybe with proper systemd integration — because if there is one good thing about systemd, it is the ability to run proper per-user services in a standardised fashion, rather than having to fake it the way KDE and GNOME have done for so many years.

As for maintaining the packages on Gentoo, it is not bad at all. The releases are usually good, and problems very rarely came up during packaging. Ludovic’s handling of the one security issue with pcsc-lite in the past few years has been a bit so-so, as he did not qualify the importance of the security issue in the changelog of the new release that fixed it, but except for that it was punctual and worked fine. The main problem that I have with those tools is having to deal with Alioth, which has this silly idea of adding a unique numeric ID to each file download, and then serving the file under whichever name you provide when you download it. This effectively means you need to look up the new ID on the Alioth website every time a new release comes out, which is actually annoying.

February 11, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I started drafting this post just before I left Ireland for ENIGMA. While at ENIGMA I realized how important it is to write about this because it is too damn easy to forget about it altogether.

How secure and reliable are our personal infrastructure services, such as our ISPs? My educated guess is: not very.

The start of this story I already talked about: my card got cloned and I had to get it replaced. Among the various services that I needed it replaced in, there were providers in both Italy and Ireland: Wind and Vodafone in Italy, 3 IE in Ireland. As to why I had to use an Irish credit card in Italy, it is because SEPA Direct Debit does not actually work, so my Italian services cannot debit my Irish account directly, as I would like, but they can charge (nearly) any VISA or MasterCard credit card.

Changing the card on Wind Italy was trivial, except that when (three weeks later) I went back to restore the original Tesco card, Chrome 56 reported the site as Not Secure, because the login page is served over a non-secure connection by default (which means it can be hijacked by a MITM attack). I bookmarked the HTTPS copy (which loads non-encrypted resources, making it still unsafe) and will keep using that for the near future.

Vodafone Italy proved more interesting in many ways. The main problem was that I could not actually set up the payment with the temporary card I intended to use (Ulster Bank Gold): the website would just error out on me with a backend error message. After annoying Vodafone Italy over Twitter, I found out that the problem is in the BIN of the credit card, as the Tesco Bank one is whitelisted in their backend, but the Ulster Bank one is not. But that is not all; all the pages of the “Do it yourself” area have mixed-content requests, making it not completely secure. But this is not completely uncommon.

What was uncommon and scary was that while I was trying to force them into accepting the card, I got to the point where Chrome would not auto-fill the form because it was not secure. Uh? Turned out that, unlike news outlets, Vodafone decided that their website with payment information, invoices, and call details does not need to be hardened against MITM, and instead allows stripping HTTPS just fine: non-secure cookies and all.

In particular what happened was that the left-side navigation link to “Payment methods” used an explicit http:// link, and the further “Edit payment method” link is a relative link… so it would bring up the form in a non-encrypted page. I brought it up on Twitter (together with the problems with changing the credit card on file), and they appear to have fixed that particular problem.

But almost a month later when I went out to replace the card with the new Tesco replacement card, I managed to find something else with a similar problem: when going through the “flow” to change the way I receive my bill (I wanted the PDF attached), the completion stage redirects me to an HTTP page. And from there, even though the iframes are then loaded over HTTPS, the security is lost.

Of course there are two other problems: the login pane is rendered on HTTP, which means that Chrome 56 and the latest Firefox consider it not secure, and since the downgrade from HTTPS to HTTP does not log me out, the cookies are not secure either, which makes it possible for an attacker to steal them without much difficulty. Particularly as the site does not seem to send any HTTP headers to make the connection safe (Archive.is of Mozilla Observatory).
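
For what it's worth, checking for this kind of hardening only takes one command; the hostname below is a placeholder rather than the actual portal:

$ curl -sI https://portal.example.com/ | grep -iE 'strict-transport-security|set-cookie'  # placeholder host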

Okay so these two Italian providers have horrible security, but at least I have to say that they mostly worked fine when I was changing the credit cards — despite the very cryptic error that Vodafone decided to give me because my card was foreign. Let’s now see two other (related) providers: Three Ireland and UK — ironically enough, in-between me having to replace the card and writing this post, Wind Italy has completed the merge with Three Italy.

Both Three websites are actually fairly secure, as they have a SAML flow on a separate host for login, and then yet another host for account management, even though they also get a bad grade on Mozilla Observatory.

What is more interesting with these two websites is their reliability, or lack thereof. For almost a month now, the Three Ireland website has not allowed me to check my connected payment cards, or change them. This means the automatic top-up does not work and I have to top up manually. Whenever I try to get to the “Payment Cards” page, it starts loading and then decides to redirect me back to the homepage of the self-service area. It also appears to do the redirection in a way that is not compatible with some Chrome policy, as there is a complicated warning message on the console when that happens.

Three UK is slightly better but not by much. All of this frustrating experience happened just before I left for my trip to the USA for ENIGMA 2017. As I wrote previously I generally use 3 UK roaming there. To use the roaming I need to enable an add-on (after topping up the prepaid account of course), but the add-ons page kept throwing errors. And the documentation suggested calling the wrong number to enable the add-ons over the phone. They gave me the right one over Twitter, though.

Without going into more examples of failures from phone providers, the question for me would be: why is it that all we hear about security and reliability comes either from big companies like Google and Facebook, or from startups like Uber and AirBnb, but not from ISPs?

While ISPs stopped being the default provider of email for most people years and years ago, they are still the one conduit we need to connect to the rest of the Internet. And when they screw up, they screw up big. Why is it that they are not driving the reliability efforts?

Another obvious question would be whether the open source movement can actually improve the reliability of ISPs by building more tools for management and accounting, just as they used to be more useful to ISPs by building mail and news servers. Unfortunately, that would require admitting that sometimes you need to be able to restrict the “freedom” of your users, and that’s not something the open source movement has ever been able to accept.

February 09, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

So, FOSDEM 2017 is over, and as every year it was both fun and interesting. There will for sure be more blog posts, e.g., with photographs from talks by our developers, the booth, the annual Gentoo dinner, or (obviously) the beer event. The Gentoo booth, centrally located just opposite to KDE and Gnome and directly next to CoreOS, was quite popular; it's always great to hear from all the enthusiastic Gentoo fans. Many visitors also prepared, compiled, and installed their own Gentoo buttons at our button machine.
In addition we had a new Gentoo LiveDVD as a handout - the "Crispy Belgian Waffle" FOSDEM 2017 edition. For those of you who couldn't make it to Brussels, you can still get it! Download the ISO here and burn it to a DVD or copy it to a USB stick - all done. Many thanks to Fernando Reyes (likewhoa) for all his work!
Finally, for those who are wondering, the "Gentoo Ecosystem" poster from our table can be downloaded as a PDF here. It is based on work by Daniel Robbins and mitzip from Funtoo; the source files are available on Github. Of course this poster is a continuous work in progress, so tell me if you find something missing!

Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo at Fosdem (February 09, 2017, 06:00 UTC)

At the stand

It was nice to meet everyone and hang out as well. There was an interview with Hacker Public Radio which you can find HERE as well.

Just a short one this time, but it was nice to meet everyone.

February 08, 2017
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Tracking Service Function Chaining with Skydive (February 08, 2017, 11:43 UTC)

Skydive is “an open source real-time network topology and protocols analyzer”. It is a tool (with CLI and web interface) to help analyze and debug your network (OpenStack, OpenShift, containers, …). Dropped packets somewhere? MTU issues? Routing problems? These are some issues where running skydive will help.

So as an update on my previous demo post (this time based on the Newton release), let’s see how we can trace SFC with this analyzer!

devstack installation

Not a lot of changes here: check out devstack on the stable/newton branch, grab the local.conf file I prepared (configured to use the skydive 0.9 release) and run “./stack.sh”!

For the curious, the SFC/Skydive specific parts are:
# SFC
enable_plugin networking-sfc https://git.openstack.org/openstack/networking-sfc stable/newton

# Skydive
enable_plugin skydive https://github.com/skydive-project/skydive.git refs/tags/v0.9.0
enable_service skydive-agent skydive-analyzer

Skydive web interface and demo instances

Before running the script to configure the SFC demo instances, open the skydive web interface (it listens on port 8082, check your instance firewall if you cannot connect):

http://${your_devstack_ip}:8082

The login was configured with devstack, so if you did not change it, use admin/pass123456.
Then add the demo instances as in the previous demo:
$ git clone https://github.com/voyageur/openstack-scripts.git -b sfc_newton_demo
$ ./openstack-scripts/simple_sfc_vms.sh

And watch as your cloud goes from “empty” to “more crowded”:

Skydive CLI, start traffic capture

Now let’s enable traffic capture on the integration bridge (br-int), and all tap interfaces (more details on the skydive CLI available in the documentation):
$ export SKYDIVE_USERNAME=admin
$ export SKYDIVE_PASSWORD=pass123456
$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client capture create --gremlin "G.V().Has('Name', 'br-int', 'Type', 'ovsbridge')"
$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client capture create --gremlin "G.V().Has('Name', Regex('^tap.*'))"

Note this can be done in the web interface too, but I wanted to show both interfaces.

Track an HTTP request diverted by SFC

Make an HTTP request from the source VM to the destination VM (see the previous post for details). We will highlight the nodes where this request has been captured: in the GUI, click on the capture create button, select “Gremlin expression”, and use the query:
G.Flows().Has('Network','10.0.0.18','Transport','80').Nodes()

This expression reads as “on all captured flows matching IP address 10.0.0.18 and port 80, show nodes”. With the CLI you would get a nice JSON output of these nodes; here in the GUI, these nodes will turn yellow:
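
For reference, the same query can also be run from the CLI client configured earlier; this is just a sketch reusing the credentials and configuration file shown above:

$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client topology query --gremlin "G.Flows().Has('Network','10.0.0.18','Transport','80').Nodes()"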

If you look at our tap interface nodes, you will see that two are not highlighted. If you check their IDs, you will find that they belong to the same service VM, the one in group 1 that did not get the traffic.

If you want to single out a request, in the skydive GUI, select one node where capture is active (for example br-int). In the flows table, select the request, scroll down to get its layer 3 tracking ID “L3TrackingID” and use it as Gremlin expression:
G.Flows().Has('L3TrackingID','5a7e4bd292e0ba60385a9cafb22cf37d744a6b46').Nodes()

Going further

Now it’s your time to experiment! Modify the port chain, send a new HTTP request, get its L3TrackingID, and see its new path. I find the latest ID quickly with this CLI command (we will see how the skydive experts will react to this):
$ /opt/stack/go/bin/skydive --conf /tmp/skydive.yaml client topology query --gremlin "G.Flows().Has('Network','10.0.0.18','Transport','80').Limit(1)" | jq ".[0].L3TrackingID"

You can also check each flow in turn, following the paths from one VM to another, go further with SFC, or learn more about skydive.

February 07, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
I missed FOSDEM (February 07, 2017, 16:04 UTC)

I sadly had to miss out on the FOSDEM event. The entire weekend was filled with me being apathetic, feverish and overall zombie-like. Yes, sickness can be cruel. It wasn't until today that I had the energy back to fire up my laptop.

Sorry for the crew that I promised to meet at FOSDEM. I'll make it up, somehow.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Last year I posted about FOSDEM and the IPv6-only network as a technical solution in search of a problem: nobody should be running IPv6-only consumer networks, because there is zero advantage to them and lots of disadvantages. This year, despite me being in California and missing FOSDEM, their IPv6-only experiment expanded into a full-blown security incident (Archive.is), and I heard about it over Twitter and Facebook.

I have criticized this entrenched decision of providing a default IPv6-only network, and last night (at the time of writing) I ended up in a full-blown Twitter rage against IPv6 in general. The FOSDEM Twitter account stepped in to defend their choice, possibly not even reading my article correctly or realising that the @flameeyes they have been replying to is the owner of https://blog.flameeyes.eu/, but that’s possibly beside the point:

Let me try to be clear: I do know that the dual-stack network is available. Last year it was FOSDEM-legacy, and this year is FOSDEM-ancient. How many people do you expect to connect to a network that is called ancient? Would you really expect that the ancient network is the only one running the dual-stack routing, rather than, say, a compatibility mode 2.4GHz 802.11b? Let’s get back to that later.

What actually happened, since the FOSDEM page earlier doesn’t make it too clear: somebody decided that FOSDEM is just as good a place as BlackHat, DEFCON or the Chaos Communication Congress for running a malicious hotspot. So they decided to run a network called “FOSDEM FreeWifi by Google”, with a captive portal asking for your Google account address and password. It was clearly a low-passion effort, as I noticed from the screenshots over Twitter, and from what an unnamed source told me:

  • the login screen looked almost original, but asked for both username and password on the same form, which Google never does;
  • the page was obviously insecure;
  • the page was served from a 10.0.0.0/8 (private) address.

But while these are clear signs of phishing to a tech user, and the page would be reported as “Non Secure” on modern Chrome and Firefox, that does not mean they wouldn’t fool a non-expert user. Of course the obvious answer from those I will from now on refer to as the geek supremacists is that it’s their own fault if they get owned. Which is effectively what FOSDEM said above, paraphrasing: we know nothing of what happened on that network, go and follow Google’s tips on Gmail security.

Well, let me at least suggest that you go and grab yourself a FIDO key, because it would save your skin in cases like that.

But here is a possible way this can fall short of a nice conference experience: there’s a new person interested in Free Software, who has started using Linux or some other FLOSS software and decided to come to what is ostensibly the biggest FLOSS conference in Europe, and probably still the biggest free (as in gratis) open source conference in the world. They are new to this, new to tech, rather than just Linux, and “OpSec” is an unknown term to them.

They arrive at FOSDEM and they try to connect to the default network with their device, which connects and can browse the Internet, but for some reason half the apps don’t work. They ignored the “ancient” network, because their device is clearly not ancient – whether they missed the communication about what it was, or it used the term dual-stack that they had no understanding of – but they see this Google network, let’s do that, even though it requires login… and now someone has their password.

Now, the person or people who have their password may be ethical, and contact HIBP to provide a dump of the usernames involved and notify them that their passwords were exposed, but probably they won’t. Hopefully they won’t use those passwords for anything nefarious either, but at the same time, there is no guarantee that the FreeWifi people are the only ones with a copy of those passwords, because the first unethical person who noticed this phishing going on would have started a WiFi capture to get the clear-text usernames and passwords, in the certainty that if they were to use them, the FreeWifi operators would be the ones taking the blame, oops.

Did I say that all the FOSDEM networks are unencrypted? At least 33c3 tried providing an anonymous 802.1x protected/encrypted connection. But of course, for the geek supremacists, it’s your fault if you use anything unencrypted and do not use a VPN when connecting to public networks. Go and pay the price of not being a geek!

So let’s go back to our new enthusiastic person. If something does happen to the account (it gets compromised, or whatever else), the reaction the operators are expecting is probably one of awe: “Oh! They owned me good! I should learn how not to fall for this again!” — except it is far more likely that the reaction is going to be one of distrust: “What jerks! Why did I even go there? No kidding nobody uses Linux.” And now we would have alienated one more person who could have become an open source contributor.

Now, I have no doubt that the FOSDEM organizers didn’t intend for this malicious network to be set up. But at the same time, they allowed it to happen by being too full of themselves, believing that by making it difficult for users to use the network, they would drive improvements in apps that would otherwise not support IPv6. That’s what they said on Twitter: “we are trying to annoy people”. Great, bug fixes via annoyance; I’m sure that works great in a world of network services that are not under the control of the people using them, even for open source projects! And it sure worked great with the Android bug that, despite being fixed almost a year before, kept receiving “me too” and “why don’t you fix this right now?” comments because most vendors had not released a new version in time for the following FOSDEM (and now, an extra year later, many have not moved on from Android 5 either).

Oh and by the way, the reason why it’s called “ancient” is also to annoy people and force them to re-connect to the non-default network. Because calling it FOSDEM-legacy2017 would have been too friendly and would make less of a statement than “ancient”: look at you, you peasant using an ancient network instead of being the coolest geek on the planet and relying on IPv6!

So yes, if something malicious were to happen, I would blame the FOSDEM organizers for allowing that to happen, and for not even providing a “mea culpa” and admitting that maybe they are stressing this point a bit too much.

To close it off, since I do not want to spend too much time in this post on the technical analysis of IPv6 (I did that last year), I will leave you with Todd Underwood’s words; and yes, that is an 11-year-old post, which I still find relevant. I’m not quite on the same page as Todd, given how I try hard to use IPv6 and use it for backend servers, but his point, if hyperbolic, should be taken into consideration.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Stricter JSON parsing with Haskell and Aeson (February 07, 2017, 05:10 UTC)

I’ve been having fun recently, writing a RESTful service using Haskell and Servant. I did run into a problem that I couldn’t easily find a solution to on the magical bounty of knowledge that is the Internet, so I thought I’d share my findings and solution.

While writing this service (and practically any Haskell code), step 1 is of course defining our core types. Our REST endpoint is basically a CRUD app which exchanges these with the outside world as JSON objects. Doing this is delightfully simple:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON = genericParseJSON defaultOptions

That’s all it takes to get the basic type up with free serialization using Aeson and Haskell Generics. This is followed by a few more lines to hook up GET and POST handlers; we instantiate the server using warp, and we’re good to go. All standard stuff, right out of the Servant tutorial.

The POST request accepts a new object in the form of a JSON object, which is then used to create the corresponding object on the server. Standard operating procedure again, as far as RESTful APIs go.

The nice part about doing it like this is that the input is automatically validated based on types. So input like:

{
  "jobInputUrl": 123, // should have been a string
  "jobPriority": 123
}

will result in:

Error in $: expected String, encountered Number

However, as this nice tour of how Aeson works demonstrates, if the input has keys that we don’t recognise, no error will be raised:

{
  "jobInputUrl": "http://arunraghavan.net",
  "jobPriority": 100,
  "junkField": "junkValue"
}

This behaviour is undesirable in use-cases such as mine — if the client is sending fields we don’t understand, I’d like for the server to signal an error so the underlying problem can be caught early.

As it turns out, making the JSON parsing stricter so that it catches extraneous fields is just a little more involved. I didn’t find how this could be done in a single place on the Internet, so here’s the best I could do:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric      #-}

import Data.Aeson
import Data.Data
import Data.HashMap.Strict (keys)  -- assumed imports for the helpers used below
import Data.List (sort)
import Data.Text (unpack)
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Data, Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON json = do
    job <- genericParseJSON defaultOptions json
    if keysMatchRecords json job
    then
      return job
    else
      fail "extraneous keys in input"
    where
      -- Make sure the set of JSON object keys is exactly the same as the fields in our object
      keysMatchRecords (Object o) d =
        let
          objKeys   = sort . fmap unpack . keys
          recFields = sort . fmap (fieldLabelModifier defaultOptions) . constrFields . toConstr
        in
          objKeys o == recFields d
      keysMatchRecords _ _          = False

The idea is quite straightforward, and likely very easy to make generic. The Data.Data module lets us extract the constructor for the Job type, and the list of fields in that constructor. We just make sure that’s an exact match for the list of keys in the JSON object we parsed, and that’s it.
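
As a rough illustration of the effect (the endpoint path, port and exact error body are invented here, and depend on how the Servant API is wired up), a request carrying an unknown key should now be rejected instead of being silently accepted:

$ curl -s -X POST http://localhost:8080/jobs -H 'Content-Type: application/json' \
    -d '{"jobInputUrl": "http://example.com", "jobPriority": 1, "junkField": "junkValue"}'
# expected: an error response mentioning "extraneous keys in input" (hypothetical endpoint)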

Of course, I’m quite new to the Haskell world so it’s likely there are better ways to do this. Feel free to drop a comment with suggestions! In the mean time, maybe this will be useful to others facing a similar problem.

Update: I’ve fixed parseJSON to properly use fieldLabelModifier from the default options, so that comparison actually works when you’re not using Aeson‘s default options. Thanks to /u/tathougies for catching that.

I’m also hoping to rewrite this in generic form using Generics, so watch this space for more updates.

February 06, 2017
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Tesseract is one of the best open-source OCR tools available, and I recently took over ebuild maintainership for it. Current development is still quite active, and since the last stable release they have added a new OCR engine based on LSTM neural networks. This engine is available in an alpha release, and initial numbers show a much faster OCR pass, with fewer errors.

Sounds interesting? If you want to try it, this alpha release is now in the tree (along with a live ebuild). I insist on the alpha tag: this is for testing, not for production. The ebuild is therefore masked by default, and you will have to add the following to your package.unmask file:
=app-text/tesseract-4.00.00_alpha*
The ebuild also includes some additional changes, like current documentation generated with USE=doc (available in the stable release too), and updated linguas.
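
For example, the whole unmask-and-test cycle could look like the following sketch; the image name is a placeholder, you may additionally need to accept ~arch keywords depending on your setup, and the LSTM engine is assumed to be selected with --oem 1 as in the upstream 4.0 documentation:

# echo "=app-text/tesseract-4.00.00_alpha*" >> /etc/portage/package.unmask
# emerge --ask app-text/tesseract
$ tesseract scan.png out -l eng --oem 1   # scan.png is a placeholder input image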

Testing with paperwork

The initial reason I took over tesseract is that I also maintain the ebuilds for paperwork, a personal document manager that handles scanned documents and PDFs (and is a heavy tesseract user). It recently got a new 1.1 release, if you want to give it a try!

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2017 is starting! (February 06, 2017, 01:53 UTC)

(A previous version of this post recommended #gentoo-soc-mentors on Freenode as the preferred discussion channel for GSoC, please use #gentoo-soc instead as the former is invite-only or ask us to invite you to it)

It’s time to send us your GSoC ideas whether you can/want to mentor or not. We need as many good ideas as possible to make sure Google will select us as an organization again this year. Experience has shown us that we’re not automatically selected. You can submit them yourself on the wiki or let us do it. Don’t waste any time because some polishing typically needs to occur before the deadline (February 27th). You can discuss your ideas with us on Freenode in #gentoo-soc (preferred), or by email at soc-mentors@gentoo.org.

If you’re potentially interested in being a mentor, only want to help during the early phases of discussing and reviewing projects, or are just curious and want to see what goes on there, please let us know and we’ll add you to the mail alias. Everybody from last year was removed so don’t assume you’ll be on the alias because you were last year. Note that you do not have to be a Gentoo developer to be a mentor or help us with GSoC in any way.

Finally, if you’re a student it’s not quite time yet to ask us about projects. Please be patient, we’ll let you know.

Now go and submit that idea!

February 04, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

One of the travel blogs I follow covered today the new EU directive on roaming charges, I complained quickly on Twitter, but as you can imagine, 140 characters are not enough to explain why I’m not actually happy about this particular change.

You can read the press release as reported by LoyaltyLobby, but here is where things get murky for me:

The draft rules will enable all European travellers using a SIM card of a Member State in which they reside or with which they have “stable links” to use their mobile device in any other EU country, just as they would at home.

Emphasis mine, of course.

This effectively undermines the European common market for services: if you do not use a local provider, you’re now stuck paying roaming charges just as before, or more likely even higher ones. Of course this makes perfect sense for those people who barely travel and so only have their local carrier, or those who never left a country and so never had to get a number in a different country while keeping the old one around. But for me, this sucks in two big and somewhat separate ways.

The first problem is one of convenience, admittedly making use of a bit of a loophole. As I write this post I’m in the USA, but my phone is running a British SIM card, by Three UK. The reason is simple: with £10 I can get 1GB of mobile data (and an amount of call minutes and SMS, which I never use), and with £20 I can get 12GB. This works both in the UK, and (as long as I have visited it in the previous 3 months) in a number of other countries, including Ireland, Italy, the USA, and Germany. So I use it when I’m in the USA and I used it when I went to 33c3 in Hamburg.

But I’m not a resident of the UK, and even though I do visit fairly often, I don’t really have “stable ties”.

It’s of course possible that Three UK will not stop their free roaming due to this. After all they include countries like the US and (not a country) Hong Kong in the areas of free roaming, and they are not in Europe at all. Plus the UK may not be part of the EU for that much longer anyway. But it also gives them leverage to raise prices for non-residents.

The other use case I have is my Italian mobile phone number, which has been the same for about ten years or so, changing three separate mobile providers – although quite ironically, I changed from 3 ITA to Wind to get better USA roaming, and now 3 ITA bought Wind up, heh – but keeping the number as it is associated with a number of services, including my Italian bank.

Under the new rules I may be able to pull off a “stable links” indication thanks to being Italian, but that might require me to do paperwork in Italy, where I don’t go very often. If I don’t do that, I expect the roaming to become even more expensive than it is now.

Finally, there is another interesting part to this. In addition to UK, Irish and Italian numbers, I have a billpay subscription in France through free.fr — the reason is that I visit France fairly often, and it’s handy to have a local phone number when I visit. I have no roaming enabled on that contract though, so the directive has no effect on it anyway. That’s okay.

What is not okay, in my opinion, is that the directive says nothing about maintaining quality of service when roaming; it only imposes price limits. And indeed Free.fr sent an update this past July saying that, due to a similar directive within France, their in-country roaming will have reduced speeds:

As a result, the maximum theoretical speeds attainable per subscriber on the partner operator’s network in 2G/3G roaming will be 5 Mbit/s (downstream) and 448 kbit/s (upstream) from 1 September 2016 until 31 December 2016. In 2017 and 2018, these speeds will be 1 Mbit/s (downstream) and 448 kbit/s (upstream). They will then be 768 kbit/s (downstream) and 384 kbit/s (upstream) for 2019, and 384 kbit/s (downstream) and 384 kbit/s (upstream) for 2020.

So sure, you’ll get free roaming, but it’ll have a speed that will make it completely useless.

My opinion on this directive is that it targets a particular set of complaints by a vocal part of the population that got screwed sideways by the horrible roaming tariffs of many European providers while on vacation, and at the same time provides a feel-good response for those consumers that do not actually care, as they barely, if ever, leave their country.

Indeed if you travel, say, a week a year in the summer outside of the border, probably these fixed limits are pretty good: you do not have to figure out which is the most advantageous provider for roaming in your country (which may not be advantageous in other circumstances) and you do not risk ending up with multiple hundreds of euros of bill from your vacation.

On the other hand if you, like me, travel a lot, effectively spend a significant amount of the year outside of your residence country, and even live outside of your native country, well, you’re now very likely worse off. Indeed, with the various 3 companies and their special roaming plans I was very happy not having to have a bunch of separate SIM cards: in Germany, the USA and Austria I just used my usual SIM cards. In the UK, France and Italy I had both a free-roaming card and a local one. Before that, instead, I ended up having Vodafone SIM cards for the Netherlands, Czech Republic, Portugal and very nearly Spain (in that case I used Wind’s roaming package instead).

Don’t get me wrong: I’m not complaining about European meddling with mobile providers. I used to have a tagline for my blog, “Proud to be European”, and I still am. I’m not happy about Brexit, because it actually put a stop to my plans of eventually moving to London. But at the same time I think this regulation is a gut reaction rather than a proper solution.

If I were asked what the solution should be, my suggestion would be to allow companies such as 3 and Vodafone to provide a European number option. Get a new international prefix for the EU, and allow the companies that have wide enough reach to set up their own agreements locally where they do not have a network themselves (3 and Vodafone clearly already have a wide reach), by providing a framework for capping the costs as applied to providers. Then get me a SIM that just travels Europe with no additional costs, and with a phone number that can be called at local rates everywhere (you can do that by ensuring that the international prefix maps to a local one in all the countries). Even if such a contract were more expensive than a normal one, frequent travellers would be pretty happy not to have to switch SIM cards and phone numbers, or have unknown costs appearing out of nowhere.

February 03, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Recently I was on a mission to make my audio experience on my main desktop more enjoyable. I had previously just used some older Bose AE2 headphones from 2010 plugged in directly to the 3.5mm audio output on the back of my desktop. The sound quality was mediocre at best, and I knew that a combination of a Digital-to-Analogue Converter (DAC) and some better headphones would certainly improve the experience. I also knew that the DAC would probably yield the most noticeable improvements, so I purchased the Big Ego USB DAC from one of my favourite audiophile-grade manufacturers, Emotiva. I have several of their monoblock amplifiers and use their amazing XMC1 for my preamp/processor in my home audio system, so I knew that the quality would be outstanding, especially for the price.

Emotiva Big Ego DAC and V-Moda Crossfade M-100 headphones

Now, the Big Ego FAQ on the Emotiva website indicates that it should work with all modern computing devices:

Q: What devices can I use the Ego DACs with?
A: The Ego DACs are basically designed to work with any modern “computer device” which can be used
with an external USB sound card, which includes:
1) All modern Apple computers
2) All modern Windows computers (Windows XP, Vista, 7, 8.0, 8.1, and Windows 10)
3) Many Linux computers (as long as they support USB Audio Class 1 or 2)
4) Some Android tablets and phones (as long as they support UAC1 or UAC2)
5) Apple iPhone 5 and iPhone 6 (with the lightning to USB camera adapter)

For many Linux users, the Big Ego probably works without any manual intervention. However, if it doesn’t, it shouldn’t be that difficult to get it working properly, and I hope that this guide helps if you are running into trouble.

Firstly, let’s get something out of the way, and that’s USB Audio Class 2 (UAC2) support within Linux. With all modern distributions (>=2.6 kernel), UAC2 is readily available. It can be validated by looking at the audio-v2.h file within the kernel source:

# grep 'From the USB Audio' /usr/src/linux/include/linux/usb/audio-v2.h
* From the USB Audio spec v2.0:

Feel free to look at the full file to see the references to the UAC2 specification.

Kernel support:

Secondly, and still speaking of the kernel: if your distribution doesn’t even show the device, you are likely lacking the one needed kernel driver. To see if your system recognises the Emotiva Big Ego, try the following command and look for similar output:

$ lsusb -v | grep 'Emotiva Big Ego'
...
iProduct 3 Emotiva Big Ego
...

The full identifier (Vendor ID and Product ID) from lsusb is 20ee:0021, even though it doesn’t have a description:

# grep -A 3 'New USB device found' /var/log/messages
kernel: usb 9-1: New USB device found, idVendor=20ee, idProduct=0021
kernel: usb 9-1: New USB device strings: Mfr=1, Product=3, SerialNumber=2
kernel: usb 9-1: Product: Emotiva Big Ego
kernel: usb 9-1: Manufacturer: Emotiva

$ lsusb | grep '20ee:0021'
Bus 009 Device 005: ID 20ee:0021

If you don’t get similar output, then you’re lacking kernel support for the Big Ego. The one driver in the kernel that you need is the “USB Audio/MIDI driver” which can be found in the make menuconfig hierarchy as:

Device Drivers --->
  <*> Sound card support --->
    <*> Advanced Linux Sound Architecture --->
      [*] USB sound devices --->
        <*> USB Audio/MIDI driver

You can also check your kernel .config for it, or if you have it as a module, load it:

$ grep -i snd_usb_audio /usr/src/linux/.config
CONFIG_SND_USB_AUDIO=y

OR

# modprobe snd-usb-audio

Emotiva Big Ego DAC and V-Moda Crossfade M-100 headphones

ALSA configurations:

Thirdly, and now that you have the appropriate kernel support, let’s move on to configuring and using the Big Ego with ALSA. You can see a list of device names by using aplay -l, and it’s best to address the device by name instead of number (because the numbering could possibly change upon reboot). This one-liner should show you precisely how it is named (note that your output may be different based on the available sound output devices on your system):

$ aplay -l | awk -F \: '/,/{print $2}' | awk '{print $1}' | uniq
Intel
NVidia
Ego

With that information, you are ready to set the Big Ego as your default sound output device by editing either .asoundrc (in your home directory, for a per-user directive) or the system-wide /etc/asound.conf (which is the one I would recommend for most situations). I tried several variations of my ALSA configuration, but kept ending up with oddities. For instance, I ran into a problem where I had sound in applications like Audacious, mpv, and even ALSA’s own speaker-test, but no sound in other terminal applications like ogg123 or, more importantly, web browsers like Firefox and Chromium. The only configuration that worked fully for me was:

$ cat /etc/asound.conf
defaults.pcm.!card Ego
defaults.pcm.!device 0
defaults.ctl.!card Ego
defaults.ctl.!device 0

After changing your ALSA configuration, you need to reload it, and the method for doing so varies based on your distribution and init system. For me, using Gentoo Linux with OpenRC, I just issued (as root) /etc/init.d/alsasound restart and it reloaded. Worst case, just reboot your system to test the changes.

Now that you have it set as the default card, applications like alsamixer and such should automatically choose the Big Ego for your levels and mixing. One thing that I noticed with alsamixer is that there are two adjustable level sliders:

alsamixer with the Emotiva Big Ego USB DAC

What I am guessing is that, even though they are labelled “Emotiva Big Ego” and “Emotiva Big Ego 1”, they actually correspond to the output that you are using on the DAC. For instance, I am using the 3.5mm headphone jack on the front, and that corresponds to the “Emotiva Big Ego 1” slider, whereas if I were using the line out jack on the back of the DAC (those rhymes are fun 😛 ), I would adjust it using the slider for “Emotiva Big Ego”.
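
If you want to double-check which control maps to which output without guessing, you can also list the card’s mixer controls programmatically. Here is a minimal Python sketch using the pyalsaaudio bindings – an assumption on my part, as that package (dev-python/pyalsaaudio) is not otherwise needed for this setup:

#!/usr/bin/env python
# List the simple mixer controls on the Big Ego and their current volumes.
import alsaaudio

# Find the card named "Ego" (the short name that aplay -l reported above).
card_index = alsaaudio.cards().index('Ego')

# On my unit this prints "Emotiva Big Ego" and "Emotiva Big Ego 1".
for control in alsaaudio.mixers(cardindex=card_index):
    mixer = alsaaudio.Mixer(control=control, cardindex=card_index)
    print('%s: %s' % (control, mixer.getvolume()))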

Additional configurations:

Now that we have configured ALSA to use our USB DAC as the default sound card, there are some additional things that I would like for my convenience. I prefer to not use a full desktop environment (DE), but instead favour a more minimalistic approach. I just use the Openbox window manager (WM). One of the things that I like about Openbox is the ability to set my own key bindings. In this case, I would like to be able to control the volume by using the designated keys on my keyboard, regardless of the application that is using the USB DAC. Here are my key bindings, which are added to ~/.config/openbox/rc.xml:


    <!-- Keybinding for increasing Emotiva Big Ego volume by 1 -->
    <keybind key="XF86AudioRaiseVolume">
      <action name="execute">
        <command>amixer set 'Emotiva Big Ego',1 1+</command>
      </action>
    </keybind>
    <!-- Keybinding for decreasing Emotiva Big Ego volume by 1 -->
    <keybind key="XF86AudioLowerVolume">
      <action name="execute">
        <command>amixer set 'Emotiva Big Ego',1 1-</command>
      </action>
    </keybind>
    <!-- Keybinding for muting/unmuting volume -->
    <keybind key="XF86AudioMute">
      <action name="execute">
        <command>amixer set 'Emotiva Big Ego',1 toggle</command>
      </action>
    </keybind>

Take note that the subdevice is ‘1’ (the “,1” in the amixer commands above). That is because, as shown in the alsamixer output, I’m using the headphone jack (so it corresponds to the secondary volume slider).

Further troubleshooting:

I hope that these instructions help you get your USB DAC working under Linux, but if they don’t, feel free to leave me a comment here. We’ll see what we can do to get it working for you. One last note is that I experienced some rather severe popping and other undesirable sounds when I had the Big Ego plugged into one of the USB2 ports on the back of my tower. Swapping it to its own non-shared USB3 port fixed that problem. So, if you have it plugged into a USB hub or something similar, try isolating it. Remember, it is a sensitive piece of audio equipment, and special considerations may need to be made. 🙂

Cheers,
Zach

February 02, 2017

FOSDEM 2017 logo

As FOSDEM 2017 approaches we are happy to announce there are a total of five Gentoo developers scheduled to give talks!

Developers and their talks include:

Only a few hours remain until the event kicks off. See you at FOSDEM!

February 01, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Emailing receipts (February 01, 2017, 07:03 UTC)

Before starting: I usually avoid taking political stances outside of Italy, since that is the only country I can vote in. But I think it’s clear to most people over here that, despite posting vastly about first-world problems, I do not thrive in the current political climate overall. So while you hear me complaining about things that are petty, don’t assume I don’t have just as many worries that are actually relevant to society as a whole. I just don’t have solutions, and tend to stick to talking about what I know.

I’m visiting the US, maybe for the last time for a while, given the current news. It reminds me of Japan and China, in the sense that it’s a country that mixes extremely high-tech and vintage solutions in the same space. The country had to be brought kicking and screaming into the 20th century some years ago to start issuing chip cards, but on the other hand, thanks to Square, LevelUp and all kinds of other similar mobile payment platforms, email receipts are becoming more and more common.

I find this interesting. I wrote some time ago about my preference for electronic bills, but I did not go into the details of simpler receipts. I touched on the topic when talking about expenses, but I did not go into precise details there either. And I thought that maybe it’s time to write something, if nothing else because this way I can share what my opinion on them is.

For those who may have not been to the States, or at least not in California or Manhattan, here is the deal I’m talking about: when you pay with a given credit card with Square for the first time (but the same applies to other platforms), it asks you if you want to receive a receipt confirmation via email. I usually say yes, because I prefer that to paper (more on that later). Afterwards, any payment made with the same card also gets emailed. You can unsubscribe if you want, but the very important part is that you can’t refuse the receipt at payment time. Which is fun, because after going to a bibimbap restaurant near the office last week, while on business travel, and taking a picture of the printed out receipt for the expense report, I got an email with the full receipt, including tip, straight into my work inbox (I paid with the company card, and I explicitly make the two go to different email addresses). The restaurant didn’t even have to ask.

As it happens, Square and mobile payments are not the only ones doing this. Macy’s, a fairly big department store in North America, also allows you to register a card, although as far as I remember it still lets you opt to only get the paper receipt. This difference in options is interesting, and it kind of makes sense in the context of your spending patterns: if you’re going to Macy’s to buy a gift for your significant other, it makes sense that you may not want to send them a copy of the receipt for the gift. On the other hand, I would not share my email password with an SO — maybe that’s why I’m single. Apple Stores also connect a card with an email address, but I remember the email receipt is also opt-in there, which is not terribly good.

Why do I think it is important that the service allows you to opt in to receipts, but not to opt out of a single transaction? It’s a very good safeguard against fraud. If a criminal were to skim your card and use it through one of those establishments that send you email receipts, they would definitely opt out of the email receipts, so as not to alert you. This is not theoretical, by the way, as it happened to me earlier this month. My primary card got skimmed – I have a feeling this happened in December, at the MIT Coop store in Cambridge, MA, but that’s not important now – and used, twice, at one or two Apple Stores in Manhattan, buying the same item for something above €800, during what, for me, was a Saturday evening. I honestly don’t remember if I had used that card at an Apple Store before, but assuming I had, and had the receipts not been opt-in, I would have known to call my card company right away, rather than having to wait for them to call me on Monday morning.

While real-time alerts are something that a few banks do provide, no bank in Ireland does that, to my knowledge, and even in Italy the banks doing so make you pay extra for the service, which is kind of ludicrous, particularly for credit cards where the money at stake is usually the bank’s. And since accounting of foreign transactions can sometimes easily take days, while the receipts are nearly instantaneous by design, this is very helpful to protect customers. I wish more companies started doing it.

An aside here about Apple: by complete coincidence, a colleague of mine had a different kind of encounter with criminals who tried to buy Apple devices with his card the week before me. In his case, the criminals got access to the card information to use online, and set up a new Apple ID to buy something. He did have the card attached to his real Apple ID account, and had made online purchases from them not long before, so when they tried that, the risk engine on Apple’s side triggered, and they contacted him to verify whether the order was genuine. So in this case neither Apple nor the bank lost money, as the transaction was cancelled in time. He still had to cancel the card, though.

But there is more. Most people will treat receipts, and even more so credit card slips, as trash and just throw them away at the first chance they have. For most people and in most cases this is perfectly okay, but sometimes it is not. Check out this lecture by James Mickens — whom I had the pleasure of listening to in person at LISA 2015… unfortunately not of meeting and greeting, because I went into shock during his talk, as exactly at that time the Bataclan attacks happened in Paris, and I was distraught trying to reach all my Parisian friends.

If you have watched the full video, you now know that the last four digits of a credit card number are powerful. If you like fantasy novels, such as the Dresden Files, you have probably read that “true names have power” — well, as it happens, a credit card number has possibly more power in the real world. And the last four digits of a credit card can be found on most credit card slips, together with a full or partial name, as written on the card. So while it’s probably okay to leave the credit card slip on the table at a random restaurant in the middle of the desert, if you’re the only patron inside… it might not be quite the same if you’re a famous person, or a person at risk of harassment. And let’s be honest, everybody is at risk nowadays.

While it is true that credit card slips and receipts are often separate, particularly when using chip cards, as the POS terminal and the register are usually completely separate, this is not always the case, and it is almost never the case for big stores, both in the United States and abroad. Square cash registers, as well as a number of other similar providers that graduated from mobile-only payments to full-blown one-stop shops of payment processing, tend to print out a single slip of paper (if you have not registered for the email receipts). This at least reduces the chance that you would throw away the receipt right away, as you probably want to bring it home with you for warranty purposes.

And then there is the remaining problem: when you throw paper receipts directly into the trash, dumpster diving makes it possible to find out a lot about your habits, and in particular it makes it significantly easier to target you, just as an opportunity, with the previously mentioned power of the last four digits of your card and a name.

Now, it is true that we have two different security problems here: the payment processing companies can now connect a credit card number with an email address. I would hope that PCI-DSS stops them from actually storing the payment information in cleartext, and that they only store a one-way hash of the credit card number to connect it to the email address. It is still tricky, because even with hashed card numbers, a leak of that database would make the above attacks even easier: you could find out the email address, and from that easily the accounts, of a credit card owner, and take control way too easily.
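
To put the hashing idea in concrete terms: the processor does not need to keep the card number itself to route a receipt, only a stable identifier derived from it. A purely illustrative Python sketch – this is not how Square or anyone else actually does it, and the secret is a made-up placeholder:

import hmac
import hashlib

# Hypothetical server-side secret: with a keyed hash, a database leak alone
# is not enough to recompute identifiers from guessed card numbers.
LOOKUP_SECRET = b'not-a-real-key'

def card_lookup_key(pan):
    """Derive a stable, non-reversible identifier from a card number (PAN)."""
    return hmac.new(LOOKUP_SECRET, pan.encode('ascii'), hashlib.sha256).hexdigest()

# The receipt service only stores (identifier -> email address) pairs...
receipt_emails = {card_lookup_key('4111111111111111'): 'customer@example.com'}

# ...and at payment time it derives the same identifier to find the address.
print(receipt_emails.get(card_lookup_key('4111111111111111')))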

There is also a risk that you’re opening up more details of your personal life to whoever has access to your email account — let’s say your employer, if you’re not properly siloing your email accounts. This is a real problem, but it is only made slightly worse by the usage of email receipts for in-store purchases. Indeed, for stores like CVS you most likely already have an order history straight on the website, which whoever has access to your email account can most likely already reach — which, by the way, is why you should ask for 2FA! As I said above, I only get email sent to my work account if it is undoubtedly work-related; anything I buy with the work credit card is clearly work-only, but for instance taxi receipts, flights or hotel bookings may be personal, and so those accounts are set to mail my personal address only — when needed I forward the messages over, but usually I just need the receipts for expensing.

And hey, even the EFF, to which I renewed my support today, uses Square to take donations, so why not?

January 28, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Logging in offline (January 28, 2017, 05:04 UTC)

Over two years ago, I described some of the advantages of U2F over OTP, when FIDO keys were extremely new and not common at all. I have since moved most of my logins that support it to U2F, but the number of services supporting it is still measly, which is sad. On the other hand, just a couple of days ago Facebook added support for it, which is definitely good news for adoption.

But there is one more problem with the 2-factor authentication or, as some services now more correctly call it, 2-step verification: the current trend of service-specific authentication apps, not following any standard.

Right now my phone has a number of separate apps that are either dedicated authentication apps, or have authentication features built into a bigger app. There are pros and cons to this approach of course, but at this point I have at least four dedicated authenticator apps (Google Authenticator, LastPass Authenticator, Battle.net Authenticator and Microsoft’s), plus a few other apps that simply include authentication features in the same service client application.

Speaking of Microsoft’s authenticator app, which I listed above: as of today, what I had installed and configured was an app called “Microsoft Account” – when I went to look for a link I found out that it’s just not there anymore. It looks like Microsoft simply de-listed the application from the app store. Instead a new Microsoft Authenticator is now available, taking over the same functionality. Judging from the Play Store ID, this app comes from their Azure development team, but more importantly it is the fourth app that appears just as “Authenticator” on my phone.

Spreading the second-factor authentication across different applications kind of makes sense: since the TOTP/HOTP system (from now on I will only write TOTP, but I mean both) relies on a shared key generated when you enroll the app, concentrating all the keys in a single application is clearly a bit of a risk – if someone could easily access the data of that one authentication app and fetch all of its keys, that would hand them access to all the services at once.
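
For reference, the shared-key scheme is not magic: the six-digit codes are just a truncated HMAC of the current 30-second interval, keyed with the secret exchanged at enrollment. A minimal sketch of standard RFC 6238 TOTP in Python (the secret below is a placeholder, not a real one):

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack('>Q', int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret, as it would appear in the enrollment QR code.
print(totp('JBSWY3DPEHPK3PXP'))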

On the other hand, having to install one app for the service and one for the authentication is … cumbersome. Even more so when said authentication app is not using a standard procedure, case in point being the Twitter OTP implementation.

I’m on a plane, with a “new” laptop (one I have not logged into Twitter with). I try to log in, and Twitter nicely asks me for the SMS they sent to my Irish phone number. Oops, I can’t get phone service from high up in the air! But fear not, it tells me, I can give them a code from the backups (on a different laptop, encoded with a GPG key I have with me but can’t reach in the air) or a code from the Twitter app, even if I’m offline.

Except, you first need to have it set up. And that you can’t do offline. But it turns out that if you just visit the same page while online, it does initialize, and then works offline from then on. Guess when you are likely to look for the offline code generator for the first time? Compare this with the Facebook app, which also includes a code generator: once you enable 2-step verification for Facebook, each time you log in to the app a copy of the shared key is provided to it, so every app will generate the same code. And you don’t need to request the code manually in the apps: the first time you need to log in with the phone offline, you’ll just have to follow the new flow.

Of course, both Facebook and Twitter allow you to add the code generator to any TOTP authenticator app. But Facebook effectively sets that up for you transparently, on as many devices as you want, without needing a dedicated authentication app: any logged-in Facebook app can generate the code for you.

The LastPass and Microsoft authenticator apps are dedicated, but both of them also work as generic OTP apps. Except they have a more user-friendly push-approval notification for their own accounts. This is a feature I really liked in Duo, and one that, with Microsoft in particular, makes it actually possible to log in where the app would otherwise fail to log you in (like the Skype app for Android, which kept crashing on me when I tried to put in a code). But the lack of standardization (at least as far as I could find) requires you to have a separate app for each of these.

Two of the remaining apps I have installed (almost) only for authentication are the Battle.net and Steam apps. It’s funny how the gaming communities and companies appear to have been the ones pushing the hardest at first, but I guess that’s not too difficult to imagine when you realize how much disposable money gamers tend to have, and how loud they can be when something’s not to their liking.

At least the Steam app tries to be something else besides an authenticator, although honestly I think it falls short: finding people with it is hard, and except for the chat (which I very rarely use on the desktop either) the remaining functions are so clunky that I only open the app to authenticate requests from the desktop.

Speaking of gaming communities and authentication apps, Humble Bundle added 2FA support some years ago. Unfortunately, instead of implementing standard TOTP they decided to use an alternative approach: you can choose between SMS and a service called Authy. The main difference between the Authy service and a TOTP app is that the service appears to keep a copy of your shared key. They also allow you to add other TOTP keys, and because of that I’m very unhappy with the idea of relying on such a service: now all your keys are concentrated not only in an app on your phone, but also on a remote server. And the whole point of using 2FA, for me, is that my passwords are already all stored in LastPass.

There is one more app in the list of mostly-authenticator apps: my Italian bank’s. I still have not written my follow-up, but let’s just say that my Italian bank used TOTP-token authentication before, and has since moved to a hybrid approach; one such authentication system is their mobile app, which I can use to authenticate my bank operations (as the TOTP token expired over a year ago and I have not replaced it yet). It’s kind of okay, except I really find that bank app too bothersome to use, so I rarely bother with it right now.

The remaining authentication systems either send me an SMS or are configured in Google Authenticator. For SMS, the most important services are set to text my real Irish phone number. The least important ones, such as Humble Bundle itself, and Kickstarter (which also insists on not even letting me read a single page without first logging in), send their authentication code to my Google Voice phone number; that requires an Internet connection, but it also means I can use them while on a flight.

Oh yes, and of course there are a couple of services for which the second factor can be an email address, in addition to, or in place of, a mobile phone. This is actually handy for the same reason why I send codes to Google Voice: the code is sent over the Internet, which means it can reach me when I’m out of mobile connectivity but still online.

As for the OTP app, I’m still vastly using the Google Authenticator app, even though FreeOTP is available: the main reason is that the past couple of years finally made the app usable (no more blind key-overwrites when the username is the only information provided by the service, and the ability to change the sorting of the authenticator entries). But the killer feature in the app for me is the integration with Android Wear. Not having to figure out where I last used the phone to log in on Amazon, and just opening the app on the watch makes it much more user friendly – though it could be friendlier if Amazon supported U2F at this point.

I honestly wonder if a multiple-week-battery device, whose only job would be to keep TOTP running, would be possible. I could theoretically use my old iPod Touch to just keep an authenticator app on, but that would be bothersome (no Android Wear) and probably just as unreliable (very shoddy battery). But a device that is usually disconnected from the network, dedicated only to keeping TOTP running, would definitely be an interesting security level.

What I can definitely suggest is making sure you get yourself a FIDO U2F device, whether it is the full Yubico one, the cheaper version, or the latest BLE-enabled release. The user-friendliness over using SMS or app codes makes up for the small price to pay, and the added security is clearly worth it.

January 27, 2017
Yury German a.k.a. blueknight (homepage, bugs)
WordPress Blogs Maintenance (January 27, 2017, 22:12 UTC)

Changes for blogs.gentoo.org

With the update of WordPress to 4.7.1, a few plug-ins have created instability on the platform.

We have disabled the WordPress Mobile Site plugin and the Picasa Album plugin.

  • WordPress Mobile Site is causing all sorts of issues, and an update just came out today. We will push the update and enable it for some testing. If you were one of the users using it, please let us know so that you can test it when we update it.
  • The Picasa Album is not working and is disabled pending updates.
If you have any questions, please feel free to contact me on IRC: @blueknight

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.4 (January 27, 2017, 09:29 UTC)

Another incredible, community-driven update of py3status has been released!

Our star contributor for this release is without doubt @lasers, who is showing some amazing energy with challenging ideas and some impressive module QA clean-ups!

Thanks a lot, as usual, to @tobes, who is basically leading the development of py3status nowadays, with me being in merge-button mode most of the time.

By looking at the issues and pull requests, I can already say that the 3.5 release will be grand!

Highlights

  • support of python 3.6 thanks to @tobes
  • a major effort in modules standardization, almost all of them support the format parameter now thanks to @lasers
  • modules documentation has been cleaned up
  • new do_not_disturb module to toggle notifications, by @maximbaz
  • new rss_aggregator module to display your unread feed items, by @raspbeguy
  • whatsmyip module: added geolocation support using ip-api.com, by @vicyap with original code from @neutronst4r

See the full changelog here.

Thank you guys !

January 25, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Vodafone R205: prodding at the software (January 25, 2017, 18:53 UTC)

In my previous post on the matter I talked about using a Vodafone-branded Huawei R205 (E586) mobile hotspot with another network operator, without needing any firmware modification or special tool except for a browser and the developer tools in it. I also let it be understood that I have decided to sacrifice the device: I have another, more powerful device that I can use, and in any case I should get a newer generation, as this one has no 4G support at all.

Before I venture into the gory details of yet another personal project that will most likely not get to its end, let me point you all at Juan Carlos Jimenez’s series of posts reverse engineering another Huawei device. In his case, that was a significantly friendlier Huawei ADSL modem. I still consider that almost a prerequisite reading.

In my case, the device is a Vodafone R205 “Pocket Wifi” adapter, and as I said before it is, or was, common in at least Ireland, Italy and Australia, although with different colours (the Australian version is black, while the Irish and Italian ones are white). As I’ll show later (though a bit of Googling would already have shown it), this device is actually a rebranded E586 mobile hotspot, which other providers, such as Three UK, also distributed.

I was honestly afraid I would end up breaking the device once I opened it. That turned out not to be the case, but before doing that I decided anyway to snoop around what I could from the outside, or rather, while connected to it. As shown in the previous post, the device does come with a lot of non-minified JavaScript written by Huawei engineers, which makes it particularly interesting. Indeed, some of the files come with a very short changelog at the top, too:

/*******************************************************************************
File Name   : vendorWifi.js
File Author : s00168237
Create time : 2011-09-27  05:49
Description : support Huawei API interface
Copyright   : Copyright (C) 2008-2010 Huawei Tech.Co.,Ltd
History     : 2011-09-27       1.0      create the file
version     : R205 firmware v1.0 and webUI v0.41
release date: 2011-11-18
version     : R205 firmware v2.0 and webUI v0.5
release date: 2011-12-19
version     : R205 firmware v2.3 and webUI v0.51
release date: 2012-12-21
version     : R205 firmware v2.6 and webUI v0.52
release date: 2012-01-06
version     : R205 firmware v2.7 and webUI v0.52
release date: 2012-01-13
version     : R205 firmware v3.0 and webUI v0.6
release date: 2012-01-16
date         version               author(No.)         description
2012.03.29   FWv4.0 UIv1.12.3389   tangyao t81004060   for vodafone fireware v4.0
2012.04.24   FWv4.1 UIv1.14.3559   tangyao t81004060   for vodafone fireware v4.1
2012.04.25   FWv4.2 UIv1.14.3559   tangyao t81004060   for vodafone fireware v4.2
*******************************************************************************/

Considering that the current Vodafone firmware (or should I say fireware) is 8.0, at least on the Australian website (the Irish one does not seem to have any), mine is significantly behind the times. I have explicitly not updated it, afraid that they may have closed the hole that I have been using to configure the device.

There is also another interesting fact to note at this point: if you start Googling around you’ll find plenty of shady sites (Astalavista-shady, I would say) that try to either sell you, or trick you into installing, what may well be malware. What they promise is a way to de-brand the Vodafone R205 into a standard E586, which is indeed something nice to have, but as I said they are shady, so I have not bothered even considering them.

On the other hand, it means that someone already managed to find a way to extract the firmware files from the Windows executable that Huawei provides, and reverse engineered the protocol that is used to flash the firmware onto the device. And they decided not to publish any of it, but rather to make proprietary software bundled with Norton-knows-what. I find that extremely uncool, and on this I’m definitely more aligned with the CCC crowd than with what looks like your average mobile phone/broadband “hacker”.

Moving on, I decided to check what may be open on the device. For instance, if telnet or SSH were open it would be easier to figure out ways around the device, but that didn’t help either. Indeed, very few ports are open on the device: DNS, DHCP, HTTP and HTTPS (more on that in a second), plus ports 1900 UDP and 50000 TCP for UPnP/IGD.
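
If you want to repeat this kind of quick look without pulling out nmap, a trivial TCP connect probe of the interesting ports is enough; here is a minimal Python sketch. The device address is a placeholder (use whatever your hotspot hands out as gateway), and note that DNS/DHCP and SSDP are UDP, so they are not covered by it:

import socket

DEVICE_IP = '192.168.0.1'     # placeholder, not necessarily the R205 default
TCP_PORTS = [80, 443, 50000]  # HTTP, HTTPS and the UPnP/SOAP port

for port in TCP_PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1.0)
    try:
        state = 'open' if sock.connect_ex((DEVICE_IP, port)) == 0 else 'closed/filtered'
        print('%5d/tcp %s' % (port, state))
    finally:
        sock.close()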

Nmap reports the following for the HTTPS configuration, which shows a long-expired certificate: it is not valid after 2008, even though the firmware is clearly more recent (the changelog above reads 2012).

ssl-cert: Subject: commonName=ipwebs.interpeak.com/organizationName=Interpeak/stateOrProvinceName=Stockholm/countryName=SE
Issuer: commonName=Test CA/organizationName=Interpeak/stateOrProvinceName=Stockholm/countryName=SE
Public Key type: rsa
Public Key bits: 1024
Signature Algorithm: md5WithRSAEncryption
Not valid before: 2003-09-22T11:33:43
Not valid after:  2008-09-20T11:33:43
MD5:   2fcc e6cc bac8 8ea2 ca80 287f 2b8d 7d75
SHA-1: 7c6f 422e 37cb 83bf c3ef b004 f050 2c6f deba 6be2
ssl-date: 1970-01-01T00:20:18+00:00; -47y4d22h09m00s from scanner time.
sslv2:
  SSLv2 supported
  ciphers:
    SSL2_RC4_128_WITH_MD5
    SSL2_RC2_128_CBC_WITH_MD5
    SSL2_DES_192_EDE3_CBC_WITH_MD5
    SSL2_RC4_128_EXPORT40_WITH_MD5
    SSL2_RC2_128_CBC_EXPORT40_WITH_MD5
    SSL2_DES_64_CBC_WITH_MD5

Both the HTTP and the HTTPS ports report themselves as IPWEBS/1.4.0 (nmap considers this “Huawei broadband router http admin”, so I suppose they use it for their non-mobile routers too), and that matches the commonName found in the certificate.

Port 50000 (which is used by UPnP but responds to obvious HTTP requests, thanks to SOAP) seems to have a different server altogether, with all headers shouted (all-capitals), reporting itself as:

SERVER: PACKAGE_VERSION  WIND version 2.8, UPnP/1.0, WindRiver SDK for UPnP devices/

Both servers actually point to the same owners: Interpeak appears to have been bought by Wind River Systems (as you can see if you go to interpeak.com), I assume some time between 2003 and 2009, as the certificate is still pointing at the Swedish company, and in 2009, Wind River itself was bought by Intel.

My first thought (which came before realizing Interpeak was bought by Wind River) was that maybe the Intel UPnP SDK was derived off Wind River’s, but said SDK was last released in 2007, so it appears the two are unrelated. What I did not realize until I checked the Wikipedia page I linked, is that Wind River is actually the company behind VxWorks, which points almost straight at this device using VxWorks internally.

It turns out there is an easy way to confirm this. You can find the firmware update packages online, on various Vodafone websites (different versions for different countries); these are Windows executables that look like installers, but are actually flasher applications. You can also find the update packages for the E586 (which is the same device, just without the Vodafone branding).

Though I have not found a way to extract the firmware from those files yet, I knew it was not overly complicated. There are scammy-looking sites that provide tools to flash unbranded firmware onto branded device (which is very similar to what I’m trying to look for anyway), but I would not trust those to the point of running them on any of my systems. So I turned to what every bored person with a little bit of understanding of reverse engineering would: binwalk.

While it has not been able to clearly identify something like “start of VxWorks firmware at address 0xdeadbeef”, it did produce a very long list of random things, starting with copyright strings and finishing with a lot of HTML fragments. This looked promising, so I decided to run strings on the same file. The results were very promising and interesting, starting with the discovery that the E586 firmware updater is written using Qt, and you can even find an XPM cursor in it (XPM, in a binary file, really? Sigh!).

]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
     ]]]]]]]]]]]  ]]]]     ]]]]]]]]]]       ]]              ]]]]         (R)
]     ]]]]]]]]]  ]]]]]]     ]]]]]]]]       ]]               ]]]]            
]]     ]]]]]]]  ]]]]]]]]     ]]]]]] ]     ]]                ]]]]            
]]]     ]]]]] ]    ]]]  ]     ]]]] ]]]   ]]]]]]]]]  ]]]] ]] ]]]]  ]]   ]]]]]
]]]]     ]]]  ]]    ]  ]]]     ]] ]]]]] ]]]]]]   ]] ]]]]]]] ]]]] ]]   ]]]]  
]]]]]     ]  ]]]]     ]]]]]      ]]]]]]]] ]]]]   ]] ]]]]    ]]]]]]]    ]]]] 
]]]]]]      ]]]]]     ]]]]]]    ]  ]]]]]  ]]]]   ]] ]]]]    ]]]]]]]]    ]]]]
]]]]]]]    ]]]]]  ]    ]]]]]]  ]    ]]]   ]]]]   ]] ]]]]    ]]]] ]]]]    ]]]]
]]]]]]]]  ]]]]]  ]]]    ]]]]]]]      ]     ]]]]]]]  ]]]]    ]]]]  ]]]] ]]]]]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]       Development System
 %s%s %s
]]]]]]]]]]]]]]]]]]]]]]]]]]]       
]]]]]]]]]]]]]]]]]]]]]]]]]]       KERNEL: 
 %s%s
]]]]]]]]]]]]]]]]]]]]]]]]]       Copyright Wind River Systems, Inc., 1984-2005

Yes, I guess this firmware is based on VxWorks, if anyone still had doubts. The strings are also quite telling, including the fact that the device comes with wpa_supplicant, and thus is likely able to operate as a wireless client rather than just as an access point, which is going to be nice if I ever want to implement my proof of concept.

The firmware itself appears to contain a lot of juicy information: in addition to the JavaScript (which does appear to be minified in the new firmware version), it provides a good deal of information on the webapp itself, and on the possible commands for the AT interface of the device. All of this is generally going to be more interesting if I can find out how to extract the actual firmware image for the device. I just need to find a working PE dumper tool and figure out which resource is exactly 32MiB in size.
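
As a starting point for that, the resource directory of the updater executable can be walked with the pefile Python module; a rough sketch follows. The file name is a placeholder, and whether the image really sits in a resource entry (rather than, say, being appended to the executable) is still an assumption on my part:

import pefile

pe = pefile.PE('R205_firmware_updater.exe')  # placeholder file name

# Walk the type -> id -> language levels of the resource directory and
# print every leaf's size, looking for the suspiciously large one.
for res_type in pe.DIRECTORY_ENTRY_RESOURCE.entries:
    for res_id in res_type.directory.entries:
        for res_lang in res_id.directory.entries:
            size = res_lang.data.struct.Size
            print('type=%s id=%s lang=%s size=%d' %
                  (res_type.id, res_id.id, res_lang.id, size))
            if size > 16 * 1024 * 1024:
                # Dump anything big enough to plausibly be the firmware image.
                blob = pe.get_data(res_lang.data.struct.OffsetToData, size)
                open('dumped_resource.bin', 'wb').write(blob)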

Hopefully, by the time I get to publish the next post in this series, I will have something more useful for everybody: either a way to flash the new firmware without the original tools, or access to the serial port on board the device (spoilers!).

During my December trip to the United States, I did what I almost always do when I go there: I went to a pharmacy and looked through the current models of glucometers on sale. Part of the reason is that I have got to the point where I enjoy the challenge of figuring out what differences in the protocol the various manufacturers introduce version after version, and part of it is that I think it is a kind of public service to provide tools not only for the nice and fancy meters but also for those that most people would end up buying in a store. I have one more device that I got during one of my various trips that I have not written about, too, but let’s not go there.

This time around the device I got is an Abbott FreeStyle Precision Neo. The factors I used to choose it, as usual, were price (I didn’t want to spend much; this one was on sale for less than $10) and the ability to download data over a direct USB connection (the device I mentioned above requires a custom cradle which I have not managed to acquire yet). In addition, while the device did not come with any strips at all, I (mistakenly) thought it was a US version of the Optium Neo, which looks very similar and is sold in Europe and Canada. That meter, I knew, used the same strips that I use for the FreeStyle Libre, which means I wouldn’t have had to pay extra for them at all. As I said, I was wrong, but it still accepted the same strips; more on that later.

The device itself is fairly nice: at first it may appear to share a similar shape with the Libre, but it really seems to share nothing with it. It’s significantly flatter, although a bit wider. The display is not an LCD at all, but rather appears to be e-ink, and reminded me of the Motorola FONE. The display is technically a “touchscreen”, but rather than having a resistive or capacitive layer, the device appears to just have switches hidden behind the screen.

Given the minimal size of the device, there is not much space for a battery and charging circuitry like there is in the Libre; instead the device is powered by a single CR2032 battery, which is very nice as these are cheap enough and easy to carry around, although you’re not meant to leave them in checked-in luggage.

Overall, the device is flatter than my current smartphones, and extremely light, which gives it a significant advantage over a number of other meters I have tried in the past few years. The e-ink display has very big and easily readable numbers, so it’s ideal for the elderly or for those who need to take the test even without their glasses.

Putting aside the physical size factor, the package I bought contained a small case for the meter, a new lancing device, and some lancets. No strips, and I guess that explains the very low price point. I would say that it’s the perfect package for the users of previous generations’ FreeStyle devices that need an upgrade.

As I said at the start, the main reason why I buy these devices is that I want to reverse engineer their protocol, so of course I decided to find the original software from Abbott to download the data out of it. Unfortunately it didn’t quite work the way I was expecting. I expected the device to be a rebranded version of the Optium Neo, but it isn’t. So the software from the Irish Abbott website (Auto-Assist Neo, v1.22) tried connecting to the meter but reported a failure. So did the same software downloaded from the Canadian website (v1.32).

Given that this is a US device, I went to the US website (again, thank you to TunnelBear, as Abbott keeps insisting on GeoIP-locking). The US version does not offer a download of the simple desktop software I was looking for, instead pointing you to the client for their online diabetes management software. What is it with online systems for health data access? I decided not to try that one, particularly as I would be afraid of its interaction with the FreeStyle Libre software. I really should just set up a number of virtual machines, particularly given that the computer I’m using for this appears to be dying.

On the bright side, it appears that, even though the software declares the device not compatible, the USB capture shows that all the commands are sent and responded to, so I have a clear trace for at least the data download. The protocol is almost identical to the InsuLinx one that Xavier already reverse engineered in part, and both seem to match the basic format that Pascal figured out how to dump from the Libre, so it was easy to implement. I’ll write more about that separately.
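
In the meantime, here is only a very rough sketch of what the framing in the capture looks like: short ASCII commands wrapped into fixed-size 64-byte HID reports. The vendor/product IDs, the message-type byte and the command string below are illustrative placeholders rather than authoritative values, and a real response usually spans several reports:

import hid  # hidapi bindings

VENDOR_ID = 0x1a61    # placeholder: check your own lsusb output
PRODUCT_ID = 0x0000   # placeholder
TEXT_COMMAND = 0x60   # placeholder message-type byte

def send_text_command(device, command):
    """Wrap a short text command into a single 64-byte HID report."""
    payload = command.encode('ascii')
    report = bytes([0x00, TEXT_COMMAND, len(payload)]) + payload
    device.write(report.ljust(65, b'\x00'))  # leading 0x00 is the report ID
    return bytes(device.read(64))

dev = hid.device()
dev.open(VENDOR_ID, PRODUCT_ID)
print(send_text_command(dev, '$serlnum?'))  # the command string is also an assumption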

So what about the strips? Since the device came with no strips, I assumed I would just use the strips I had at home, which were bought for my Libre device and are of the Optium series, and they worked fine. But when I looked into completing the reverse engineering of the protocol by figuring out which marking is used for β-ketone readings, the device reported E-2 on the screen. So I looked into it and found out that the Precision Neo is meant to be used with Precision series strips. Somehow the Optium blood glucose strips are compatible (I would guess they are effectively the same strips) while the β-ketone strips are not, so I still don’t have data on how those readings are reported. But this put the final nail in the coffin of the idea that this is the same device as the one sold outside of the USA.

Having this device was very useful for understanding and documenting better the shared HID protocol that is used by Abbott FreeStyle devices, which made it very easy to implement the basic info and data dump from the Libre, as well as an untested InsuLinx driver, in my glucometerutils project. So I would say it was $10 well spent.

January 20, 2017
Michał Górny a.k.a. mgorny (homepage, bugs)
The Tale of Pythonia (January 20, 2017, 16:33 UTC)

Developers, gather round for I am about to tell thee a story. A story of a far away kingdom, great kings and their affairs. No dragons included.

With special dedication to William L. Thomson Jr.

Once upon a time, in a far away kingdom of Gentoo there was a small state called Pythonia. The state was widely known throughout the land for the manufacture of Python packages.

The state of Pythonia was ruled by Arfruvro the Magnificent. He was recognized as a great authority in the world of Python. Furthermore, he was doing a great deal of work himself, not leaving much to do for his fellow citizens. He had two weaknesses, though: he was an idealist, and he wanted Python packages to be perfect.

Arfruvro frequently changed the design of Python packages manufactured by his state to follow the best practices in the art. Frequently came to the neighbouring states edicts from Pythonia telling their citizens that the Python package design is changing and their own packages need to be changed in 6 or 12 months, or just new packages that broke everything. So did neighbouring states complain to Pythonia, yet Arfruvro did not heed their wishes.

One day, Arfruvro re-issued yet another broken set of Python packages. The uproar was so great that the King Flamessio of the Empowered State of Qualassia decided to invade Pythonia. He removed Arfruvro from the throne and caused him to flee the state. Then, he let the citizens of Pythonia elect a new king.

The new king was a fair and just ruler. However, he stayed in the shadow of his predecessor. The state was no longer able to keep up with the established standards, and the quality and quantity of Python packages decreased. When the progress demanded reforms, nobody in the whole kingdom was capable of doing them. In fact, nobody really knew how all the machinery worked.

At this point, you may think that the state of Pythonia would have surely fallen without Arfruvro. However, it eventually rose again. The old directions were abandoned and major reforms were done. Many have complained that the Python packages are changing again. However, today the state is shining once again. Many citizens are working together, and many of them have the knowledge to lead the state if necessary.

The moral of this story is: it does not matter how great a deal of work you did if it is not self-sustainable. What matters is what you leave after you. Arfruvro did a great deal of work, but Pythonia fell into decay when he left. Today Pythonia is no longer dependent on a single person.

Disclaimer: the characters and events in this story are fictional. Any resemblance between the characters in this story and any persons is purely coincidental.

January 19, 2017
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)

I’ve written a bit in my last two blog posts about the work I’ve been doing in inter-device synchronised playback using GStreamer. I introduced the library and then demonstrated its use in building video walls.

The important thing in synchronisation, of course, is how much in-sync are the streams? The video in my previous post gave a glimpse into that, and in this post I’ll expand on that with a more rigorous, quantifiable approach.

Before I start, a quick note: I am currently providing freelance consulting around GStreamer, PulseAudio and open source multimedia in general. If you’re looking for help with any of these, do get in touch.

The sync measurement setup

Quantifying what?

What is it that we are trying to measure? Let’s look at this in terms of the outcome — I have two computers, on a network. Using the gst-sync-server library, I play a stream on both of them. The ideal outcome is that the same video frame is displayed at exactly the same time, and the audio sample being played out of the respective speakers is also identical at any given instant.

As we saw previously, the video output is not a good way to measure what we want. This is because video displays are updated in sync with the display clock, over which consumer hardware generally does not have control. Besides, our eyes are not that sensitive to minor differences in timing unless the images are side by side. After all, we’re fooling them with static pictures that change every 16.67 ms or so.

Using audio, though, we should be able to do better. Digital audio streams for music/videos typically consist of 44100 or 48000 samples a second, so we have much finer granularity than video provides. The human ear is also fairly sensitive to timing when it comes to sound: if you hear the same sound twice, more than about 10 ms apart, you will hear two distinct sounds, and the echo will annoy you to no end.

Measuring audio is also good enough because once you’ve got audio in sync, GStreamer will take care of A/V sync itself.

Setup

Okay, so now we know what we want to measure, but how do we measure it? The setup is illustrated below:

Sync measurement setup illustrated

As before, I’ve set up my desktop PC and laptop to play the same stream in sync. The stream being played is a local audio file — I’m keeping the setup simple by not adding network streaming to the equation.

The audio itself is just a tick sound every second. The tick is a simple 440 Hz sine wave (A₄ for the musically inclined) that runs for 1600 samples. It sounds something like this:
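
The tick is also easy to generate yourself if you want to reproduce the test; here is a minimal numpy sketch, assuming a 44.1 kHz sample rate (either of the rates mentioned above would do):

import numpy as np
from scipy.io import wavfile

RATE = 44100          # assumed sample rate
TICK_SAMPLES = 1600   # length of the 440 Hz burst, as described above

t = np.arange(TICK_SAMPLES) / float(RATE)
tick = 0.8 * np.sin(2 * np.pi * 440.0 * t)

# One tick at the start of every second, repeated for a minute.
second = np.zeros(RATE)
second[:TICK_SAMPLES] = tick
samples = np.tile(second, 60)

wavfile.write('tick.wav', RATE, (samples * 32767).astype(np.int16))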

I’ve connected the 3.5mm audio output of both the computers to my faithful digital oscilloscope (a Tektronix TBS 1072B if you wanted to know). So now measuring synchronisation is really a question of seeing how far apart the leading edge of the sine wave on the tick is.

Of course, this assumes we’re not more than 1s out of sync (that’s the periodicity of the tick itself), and I’ve verified that by playing non-periodic sounds (any song or video) and making sure they’re in sync as well. You can trust me on this, or better yet, get the code and try it yourself! :)

The last piece to worry about — the network. How well we can sync the two streams depends on how well we can synchronise the clocks of the pipeline we’re running on each of the two devices. I’ll talk about how this works in a subsequent post, but my measurements are done on both a wired and wireless network.
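
As an aside, if you do not have an oscilloscope at hand, a rough software alternative (not what I used for the numbers below) is to record both outputs into the two channels of a single stereo line-in and cross-correlate them; the lag of the correlation peak is the offset between the streams. A minimal numpy sketch, with a placeholder file name:

import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read('both_outputs.wav')  # left = device A, right = device B
left = data[:, 0].astype(np.float64)
right = data[:, 1].astype(np.float64)

# Correlate only the first second or so to keep the O(n^2) correlation cheap.
n = rate
corr = np.correlate(left[:n], right[:n], mode='full')
lag = int(np.argmax(corr)) - (n - 1)
print('offset: %d samples (%.1f us)' % (lag, lag * 1e6 / rate))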

Measurements

Before we get into it, we should keep in mind that due to how we synchronise streams — using a network clock — how in-sync our streams are will vary over time depending on the quality of the network connection.

If this variation is small enough, it won’t be noticeable. If it is large (tens of milliseconds), then we may start to notice it as echo, or as glitches when the pipeline tries to correct for the lack of sync.

In the first setup, my laptop and desktop are connected to each other directly via a LAN cable. The result looks something like this:

The first two images show the best case — we need to zoom in real close to see how out of sync the audio is, and it’s roughly 50µs.

The next two images show the “worst case”. This time, the zoomed out (5ms) version shows some out-of-sync-ness, and on zooming in, we see that it’s in the order of 500µs.

So even our bad case is actually quite good — sound travels at about 340 m/s, so 500µs is the equivalent of two speakers about 17cm apart.

Now let’s make things a little more interesting. With both my laptop and desktop connected to a wifi network:

On average, the sync can be quite okay. The first pair of images show sync to be within about 300µs.

However, the wifi on my desktop is flaky, so you can see it go off up to 2.5ms in the next pair. In my setup, it even goes off up to 10-20ms, before returning to the average case. The next two images show it go back and forth.

Why does this happen? Well, let’s take a quick look at what ping statistics from my desktop to my laptop look like:

Ping from desktop to laptop on wifi

That’s not good — you can see that the minimum, average and maximum RTT are very different. Our network clock logic probably needs some tuning to deal with this much jitter.

Conclusion

These measurements show that we can get some (in my opinion) pretty good synchronisation between devices using GStreamer. I wrote the gst-sync-server library to make it easy to build applications on top of this feature.

The obvious area to improve is how we cope with jittery networks. We’ve added some infrastructure to capture and replay clock synchronisation messages offline. What remains is to build a large enough body of good and bad cases, and then tune the sync algorithm to work as well as possible with all of these.

Also, Florent over at Ubicast pointed out a nice tool they’ve written to measure A/V sync on the same device. It would be interesting to modify this to allow for automated measurement of inter-device sync.

In a future post, I’ll write more about how we actually achieve synchronisation between devices, and how we can go about improving it.

January 18, 2017
Michal Hrusecky a.k.a. miska (homepage, bugs)
Running for re-election (January 18, 2017, 08:00 UTC)

As you might have noticed, I’m running for re-election. I served my first term as an openSUSE Board member, learned a lot, and I think I could represent you well for another two years. This year’s elections will be tough, though, as we have quite a few strong candidates in the end. Honestly, I have no worries regarding the result of the elections, as it can’t end badly. Compare it to real-world politics and elections, where the results can be either bad or even worse… But even though our elections are quite friendly, it is still a competition. So what would I do if I get elected? Why should you vote for me? I’ll try to answer that in this post.

What does the board do?

I was a board member for two years. During that time, I learned more about what the board actually does, and I would like to describe that first. Even if you decide not to vote for me, it can help you pick the best candidates. I believe the following roles are the main responsibilities that the board has.

Judge

The board is the last resort when there is some conflict, and there are conflicts from time to time. Our task is to listen to both sides of the story and help them reach some solution, peacefully if possible, and to de-escalate things. Sometimes there are quite some emotions involved, and you might even know one or both parties in the argument. It can sometimes be hard to stay objective and to resolve things in a way that is defensible, and if there are consequences, it has to be plainly visible what the cause was.

Budget keeper

We have the power to influence how SUSE spends money on openSUSE. Our responsibility is to help decide what to support and how. When there is a need for money, the board asks SUSE and SUSE gives us money. Part of this role is being reasonable: if we start asking for Lamborghinis for everybody, they might start saying no. We also need to be somewhat predictable, so SUSE can plan the budget for openSUSE. But lately part of that job was handed over to Andrew – keeping our books.

Point of contact

We are the single point of contact for people from outside the project and for companies. Our task is to tell them how our community works and, whenever they have an interesting proposal for our community, to put them in contact with the right people. We are also in charge of our trademark – the openSUSE name and logo. From time to time somebody wants to do something with the openSUSE label on top of it: mostly producing merchandise, a cool new spin-off, a port of openSUSE to some exotic architecture… In these cases, our task is to decide whether it would benefit the openSUSE community or whether it is an attempt to exploit it. Mostly, these requests are good ideas though, and we just say yes.

Yes man

Last but not least, one of the most important tasks the board has is to encourage people to do stuff. The board itself has no power over technical decisions. In openSUSE, whoever does the work decides. But sometimes people still ask whether they can do something; our job is to tell them that they can. Sometimes people ask us to change or implement something; our job in that case is, again, to tell them that they can do it themselves. We don’t have a pack of code monkeys to implement whatever anybody wishes, but we do have the power to encourage people to scratch their own itch, and we can help promote the idea and try to find more people to help.

How do I fit the board

So why should you vote for me in the upcoming elections? I’m a calm person by nature; it is really hard to upset or anger me. So if you ever get into a conflict with somebody, you want me to be part of your jury, as I will try to be as objective as possible. If you are a villain, you probably don’t want me there, though. Regarding the budget, I’m quite frugal. I was a student for a long time and learned to think twice before spending money, but I’m working on it and learning how to spend money. Instinctively, I always consider whether the goal justifies the expense. So don’t expect those Lamborghinis for release parties.

Regarding communication, I have worked in two big companies (one of them being SUSE) and I learned what is troublesome for such companies and what is easy. Quite often it is counterintuitive. Understanding how this works can help find a better deal for both sides. Regarding encouraging people to do stuff, I try to do it whenever I speak somewhere about openSUSE.

I think I would fit into the board nicely, but so would the others running. Your task is to choose who you think fits best and who matches your own world view the most.

About me

For those who don’t know me, I’ll sum up who I am. As you probably noticed, I was an openSUSE Board member for the last two years. Apart from that, I try to promote openSUSE whenever possible, so you might have met me at some conferences, and together with Tomas Chvatal I give lessons at a local school teaching kids Linux (on openSUSE). What I’m lately most known for is that I wrote a bot that tried to kick almost every eligible voter out of the openSUSE membership. But even that bot was fair: it tried to kick people regardless of whether I consider them friends or have never heard of them. There was a bug, I found it, and you can look forward to the next round after the election. The goal is to know who is still around. It will help us interpret how interested people are in elections. In the future there might be even more important things to decide, and if there is ever a need for a community-wide decision taken by a majority of our contributors, we should know whether people just don’t care, or whether the votes we get roughly represent the people we still have and we simply carry too many inactive members. It can also help to decide whether a package is still actively maintained – if its maintainer got kicked out, he probably isn’t around anymore to fix your issues, and it’s time for somebody to step up. So it can be useful, but I’m sorry for all those falsely accusing emails. And it will be finished after the elections regardless of whether I get elected or not, so not voting for me will not stop it 🙂

What would I do if I get elected? Will I try to kick out more people? Probably not. I will represent you the best I can and, given the power the board has, I will encourage you to do whatever crazy projects you like. But I’m not going to promise to solve all the bugs or make you rich; that is not within the board’s powers.

Endorsements

Real-world politicians usually mention which famous artists support them. I don’t have any, and I think those endorsements don’t matter. What I would like to do instead is endorse one of my competitors. Well, I could easily endorse all of them, but then you wouldn’t vote for me; with one endorsement, there is still the other seat 🙂 I would like to endorse Sarah. I have known her for some time. During conferences you can find her at the openSUSE booth promoting our awesome project; between conferences she helps with Leap releases and the openSUSE infrastructure. I know she would represent openSUSE well (she already does), and I believe that as a board member she will always act in openSUSE’s best interest.

January 15, 2017
Gentoo at FOSDEM 2017 (January 15, 2017, 00:00 UTC)

FOSDEM 2017 logo

On February 4th and 5th, Gentoo will be attending FOSDEM 2017 in Brussels, Belgium.

This year one of our own, Jason A Donenfeld (zx2c4), will be speaking on WireGuard: a next generation secure kernel network tunnel.

Similar to last year, the event will be hosted at Université libre de Bruxelles. Gentoo developers will be taking rotating shifts at the Gentoo stand with gadgets, swag, and a new 2017 LiveDVD. You can visit this wiki article to see which developer will be manning the stand when you drop by.

We are looking forward to seeing those in the community who have been hard at work on their quizzes!

January 09, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: Security (or lack of) at Number26 (January 09, 2017, 22:57 UTC)

Hi!

I would like to share a talk that I attended at 33c3. It’s about a company with a banking license and accounts with actual money. Some people downplay these issues as “yeah, but the issues were fixed” and “every major bank probably has something like this”. I would like to reply:

  • With a bit of time and interest, any moderately skilled hobby security researcher, myself included, could have found what he found.
  • The issues uncovered are not mere product issues; they are issues of process and culture.

When I checked earlier, Number26 did not have open positions for security professionals. They do now:

Senior Security Engineer (f/m)
https://n26.com/jobs/547526/?gh_jid=547526

The video: Shut Up and Take My Money! (33c3)

January 05, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Last week I received an unexpected but great email that my new car had arrived way ahead of schedule. I had ordered a 2017 Honda Civic EX-T back in November, but it wasn’t slated to come in until February 2017. I had to order one from the factory because I wanted to get one that perfectly matched my needs and wants, and apparently, that’s rare. When the 2016 came out, I was uninspired because I wanted the 1.5L turbocharged engine instead of the 2.0L naturally aspirated one, and I also wanted some of the bells and whistles (like the larger display, more speakers, et cetera), but above all else, I wanted a manual transmission. For 2017, Honda came through in that I could get the manual transmission a trim higher than the base model. The EX-T had everything I wanted, and I picked it up on Friday, 30 December 2016.

As with all of my previous vehicles, there are things that I want to change about this car, but far fewer than ever before. I’m quite happy with the performance, the smooth ride, and the niceties that come with the higher trim level. Not even a week later, though, and I took on my first modification to the car (albeit a minor one). Though it wasn’t something as involved as swapping a JDM K20a / Y2M3 (from the ITR), it did make a noticeable difference with the car (just a cosmetic one this time). 🙂

Though I love the looks of the 2017 Civic, I think that the “Civic” emblem/badge makes the car look asymmetrical and a little less classy. So, I thought it best to remove it completely.

2017 Honda Civic emblem removed - debadged letters

The process was relatively straightforward, but I understand that it can be a little unnerving to remove a badge on a brand new car. What happens if I scratch the paint? What if the adhesive is really strong and leaves a full residue? Those are legitimate concerns, but this little project turned out to be pretty easy. Here’s what I did:

  • Used a hair dryer to heat the adhesive behind each letter for ~60-90 seconds
  • Used a piece of floss in a seesaw motion behind each letter until they came off
  • Used an old credit card to remove some of the excess adhesive
  • Applied Goo Gone Automotive Spray Gel to the remaining residue
  • Held a rag under the letters to catch the excess Goo Gone that would otherwise drip
  • Used my handy-dandy AmazonBasics microfiber cloth to get rid of the remaining residue
  • Washed the spot with some soap and water
  • Dried the spot with another microfiber cloth
  • Basked in the glory of having a much cleaner look to the rear of the car 🙂

I think that the results were well worth the minimal amount of time and effort:

2017 Honda Civic emblem removed - debadged before and after

2017 Honda Civic emblem removed - debadged before and after - wide
Click each image to enlarge

The only thing that I would note is that I did need to apply a good amount of pressure when getting rid of the excess adhesive with the old credit card, and especially when using the microfiber cloth & Goo Gone to clean the remaining residue. I was a bit nervous to press that hard at first, but soon realised that it was necessary and that, as long as I was careful, it wouldn’t damage the clearcoat or the paint. I thought that it would take about 10 minutes, and it ended up taking about 45 to do it in my OCD manner. That being said, it could have been a lot worse. My friend Mike always used to say that to estimate the time needed for a project, especially one involving a car, you should take your initial guess, multiply it by 2, and go up one unit of measure. In that case, I’m glad that it didn’t take 20 hours. 😛

Cheers,
Zach

December 31, 2016
Domen Kožar a.k.a. domen (homepage, bugs)
Reflecting on 2016 (December 31, 2016, 18:00 UTC)

Haven't blogged in 2016, but a lot has happened.

A quick summary of highlighted events:

2016 was a functional programming year, as I had planned by the end of 2015.

I greatly miss the Python community, and in that spirit I attended EuroPython 2016 and helped organize DragonSprint in Ljubljana. I don't think there's a place for me in OOP anymore, but I'll surely keep attending community events as nostalgia kicks in.

2017 looks extremely exciting; plans will unveil as I go, starting with some exciting news for the Nix community in January.

December 28, 2016
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Synchronised Playback and Video Walls (December 28, 2016, 18:01 UTC)

Hello again, and I hope you’re having a pleasant end of the year (if you are, maybe don’t check the news until next year).

I’d written about synchronised playback with GStreamer a little while ago, and work on that has been continuing apace. Since I last wrote about it, a bunch of work has gone in:

  • Landed support for sending a playlist to clients (instead of a single URI)

  • Added the ability to start/stop playback

  • The API has been cleaned up considerably to allow us to consider including this upstream

  • The control protocol implementation was made an interface, so you don’t have to use the built-in TCP server (different use-cases might want different transports)

  • Made a bunch of robustness fixes and documentation improvements

  • Introduced API for clients to send the server information about themselves

  • Also added API for the server to send video transformations for specific clients to apply before rendering

While the other bits are exciting in their own right, in this post I’m going to talk about the last two items.

Video walls

For those of you who aren’t familiar with the term, a video wall is just an array of displays stacked to make a larger display. These are often used in public installations.

One way to set up a video wall is to have each display connected to a small computer (such as the Raspberry Pi), and have them play a part of the entire video, cropped and scaled for the display that is connected. This might look something like:

A 4×4 video wall
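
For a single tile, the cropping and scaling part (leaving synchronisation aside) boils down to a plain GStreamer pipeline. Here is a rough sketch, using videotestsrc as a stand-in for the real stream and illustrative numbers for a 1920×800 source split into quarters:

gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=800 ! \
    videocrop right=960 bottom=400 ! videoscale ! \
    video/x-raw,width=1920,height=800 ! autovideosink

Each device in the wall would run something like this with different crop values — which is exactly the kind of per-client transformation described below.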

The tricky part, of course, is synchronisation — which is where gst-sync-server comes in. Since we’re able to play a given stream in sync across devices on a network, the only missing piece was the ability to distribute a set of per-client transformations so that clients could apply those, and that is now done.

In order to keep things clean from an API perspective, I took the following approach:

  • Clients now have the ability to send a client ID and a configuration (which is just a dictionary) when they first connect to the server

  • The server API emits a signal with the client ID and configuration, which allows you to know when a client connects, what kind of display it’s running, and where it is positioned

  • The server now has additional fields to send a map of client ID to a set of video transformations

This allows us to do fancy things like having each client manage its own information with the server dynamically adapting the set of transformations based on what is connected. Of course, the simpler case of having a static configuration on the server also works.

Demo

Since seeing is believing, here’s a demo of the synchronised playback in action:

The setup is my laptop, which has an Intel GPU, and my desktop, which has an NVidia GPU. These are connected to two monitors (thanks go out to my good friends from Uncommon for lending me their thin-bezelled displays).

The video resolution is 1920×800, and I’ve adjusted the crop parameters to account for the bezels, so the video actually does look continuous. I’ve uploaded the text configuration if you’re curious about what that looks like.

As I mention in the video, the synchronisation is not as tight as I would like it to be. This is most likely because of the differing device configurations. I’ve been working with Nicolas to try to address this shortcoming by using some timing extensions that the Wayland protocol allows for. More news on this as it breaks.

More generally, I’ve done some work to quantify the degree of sync, but I’m going to leave that for another day.

p.s. the reason I used kmssink in the demo was that it was the quickest way I know of to get a full-screen video going — I’m happy to hear about alternatives, though

Future work

Make it real

My demo was implemented quite quickly by allowing the example server code to load and serve up a static configuration. What I would like is to have a proper working application that people can easily package and deploy on the kinds of embedded systems used in real video walls. If you’re interested in taking this up, I’d be happy to help out. Bonus points if we can dynamically calculate transformations based on client configuration (position, display size, bezel size, etc.)
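
As a starting point for those bonus points, here is a rough sketch of how per-tile crop rectangles could be derived from the wall layout, panel size and bezel size; this is plain illustrative Python, not gst-sync-server API:

# Rough sketch: compute the crop rectangle, in video pixels, for one tile of a
# cols x rows video wall. The bezel is treated as hidden picture so the image
# lines up across displays.
def tile_crop(video_w, video_h, cols, rows, col, row, display_mm, bezel_mm):
    display_w_mm, display_h_mm = display_mm
    # physical footprint of one display including its bezel on both sides
    span_w_mm = display_w_mm + 2 * bezel_mm
    span_h_mm = display_h_mm + 2 * bezel_mm
    # map the whole video onto the whole physical wall
    px_per_mm_x = video_w / float(cols * span_w_mm)
    px_per_mm_y = video_h / float(rows * span_h_mm)
    # visible area of this tile, offset past the bezel
    left = (col * span_w_mm + bezel_mm) * px_per_mm_x
    top = (row * span_h_mm + bezel_mm) * px_per_mm_y
    width = display_w_mm * px_per_mm_x
    height = display_h_mm * px_per_mm_y
    return (int(round(left)), int(round(top)), int(round(width)), int(round(height)))

# Example: top-left tile of a 2x2 wall of 520x290mm panels with 10mm bezels,
# playing a 1920x800 video -> (18, 13, 924, 374)
print(tile_crop(1920, 800, 2, 2, 0, 0, (520, 290), 10))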

Hardware acceleration

One thing that’s bothering me is that the video transformations are applied in software using GStreamer elements. This works fine(ish) for the hardware I’m developing on, but in real life, we would want to use OpenGL(ES) transformations, or platform specific elements to have hardware-accelerated transformations. My initial thoughts are for this to be either API on playbin or a GstBin that takes a set of transformations as parameters and internally sets up the best method to do this based on whatever sink is available downstream (some sinks provide cropping and other transformations).

Why not audio?

I’ve only written about video transformations here, but we can do the same with audio transformations too. For example, multi-room audio systems allow you to configure the locations of wireless speakers — so you can set which one’s on the left, and which on the right — and the speaker will automatically play the appropriate channel. Implementing this should be quite easy with the infrastructure that’s currently in place.

Merry Happy *.*

I hope you enjoyed reading that — I’ve had great responses from a lot of people about how they might be able to use this work. If there’s something you’d like to see, leave a comment or file an issue.

Happy end of the year, and all the best for 2017!

December 22, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux System Administration, 2nd Edition (December 22, 2016, 18:26 UTC)

While still working on a few other projects, one of the time consumers of the past half year (haven't you noticed? my blog was quite silent) has come to an end: the SELinux System Administration - Second Edition book is now available. With almost double the amount of pages and a serious update of the content, the book can now be bought either through Packt Publishing itself, or the various online bookstores such as Amazon.

With the holidays now approaching, I hope to be able to execute a few tasks within the Gentoo community (and of the Gentoo Foundation) and get back on track. Luckily, my absence was not jeopardizing the state of SELinux in Gentoo thanks to the efforts of Jason Zaman.

December 21, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

OpenStack has a Grafana dashboard with infrastructure metrics, including CI job history (failure rates, …). These dashboards are configured via YAML files, hosted in the project-config repo, with the help of grafyaml.

As a part of the Neutron stadium, projects like networking-sfc are expected to have a working grafana dashboard for failure rates in gates. I updated the configuration file for networking-sfc recently, but wanted to locally test these changes before sending them for review.

The documentation describes the steps using Puppet, but I wanted to try to configure a local test server. Here are my notes on the process!

Installing the Grafana server

I run this on a CentOS 7 VM, with some of the usual development packages already installed (git, gcc, python, pip, …). Some steps will be distribution-specific, like the grafana install here.

Grafana has some nice documentation, but for my test server, I just installed it from the packagecloud repository:
[root@grafana ~]# wget https://packagecloud.io/install/repositories/grafana/stable/script.rpm.sh
[root@grafana ~]# vi script.rpm.sh # Never blindly run a downloaded script ;)
[root@grafana ~]# bash script.rpm.sh

Then start the server:
[root@grafana ~]# systemctl start grafana-server
(optionally, run “systemctl enable grafana-server” if you want it to start at boot)
And check that you can connect to http://${SERVER_IP}:3000; the default login and password are admin / admin

Install and configure grafyaml

Seeing the main dashboard? Good, now open the API keys menu, and generate a key with the Admin role (required, as we will change the data source).

Now install grafyaml via pip (some distributions have a package for it, but not CentOS):
[root@grafana ~]# pip install grafyaml

Create the configuration file /etc/grafyaml/grafyaml.conf with the following content (use the API key you just generated):

[grafana]
url = http://localhost:3000
apikey = generated_admin_key

Configure a dashboard

Now get the current configuration for OpenStack dashboards, and add one of them:
[root@grafana ~]# git clone https://git.openstack.org/openstack-infra/project-config # or sync from your local copy
[root@grafana ~]# grafana-dashboard update project-config/grafana/datasource.yaml
[root@grafana ~]# grafana-dashboard update project-config/grafana/networking-sfc.yaml

The first update command will add the OpenStack graphite datasource, the second one adds the current networking-sfc dashboard (the one I wanted to update in this case).
If everything went fine, refresh the grafana page; you should be able to select the Networking SFC Failure rates dashboard and see the same graphs as on the main site.

Modifying the dashboard

But we did not set up this system just to mimic the existing dashboards, right? Now it’s time to add your modifications to the dashboard YAML file, and test them.

A small tip on metric names: if you want to be sure “stats_counts.zuul.pipeline.check.job.gate-networking-sfc-python27-db-ubuntu-xenial.FAILURE” is a correct metric, http://graphite.openstack.org is your friend!
This is a web interface to the datasource, and allows you to look for metrics by exact name (Search), with some auto-completion help (Auto-completer), or browsing a full tree (Tree).
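
For illustration, a new graph using that metric would look roughly like the snippet below in the dashboard YAML; the structure mirrors the existing project-config files, so treat it as a sketch and check networking-sfc.yaml for the exact schema:

dashboard:
  title: Networking SFC failure rates (local test)
  rows:
    - title: Check pipeline
      height: 300px
      panels:
        - title: py27 job failures
          type: graph
          targets:
            - target: alias(stats_counts.zuul.pipeline.check.job.gate-networking-sfc-python27-db-ubuntu-xenial.FAILURE, 'py27 failures')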

Now that you have your metrics, update the YAML file with new entries. Then you can validate it (the YAML structure only; for metric names see the previous paragraph) and update your grafana dashboard with:
[root@grafana ~]# grafana-dashboard validate project-config/grafana/networking-sfc.yaml
[root@grafana ~]# grafana-dashboard update project-config/grafana/networking-sfc.yaml

Refresh your browser and you can see how your modifications worked out!

Next steps

Remember that this is a simple local test setup (default account, API key with admin privileges, manual configuration, …). It can be used as a base guide for a real grafana/grafyaml server, but the next steps are left as an exercise for the reader!

In the meantime, I found it useful to be able to try and visualize my changes before sending the patch for review.

December 08, 2016
Mike Pagano a.k.a. mpagano (homepage, bugs)

Just a quick note that I am walking the patch for CVE-2016-8655 down the gentoo-sources kernels.

Yesterday, I released the following kernels with the patch backported:

sys-kernel/gentoo-sources-4.8.12-r1
sys-kernel/gentoo-sources-4.4.36
sys-kernel/gentoo-sources-4.1.36-r1

Updated: 12/08
Also patched:
sys-kernel/gentoo-sources-3.18.45-r1
sys-kernel/gentoo-sources-3.12.68-r1

Updated 12/09
sys-kernel/gentoo-sources-3.10.104-r1
sys-kernel/gentoo-sources-3.4.113-r1

Updated 12/11
sys-kernel/gentoo-sources-3.14.79-r1

If Alice does not get to the others before me, I will continue to walk down the versions until all of them are patched.

Done.

December 02, 2016
10 year anniversary for sks-keyservers.net (December 02, 2016, 18:55 UTC)

December 3rd 2016 marks 10 years since sks-keyservers.net was first announced on the sks-devel mailing list. The time really has passed by too quickly, driven by a community that is a pleasure to cooperate with. Sadly there is still a long way to go for OpenPGP to be used mainstream, but in this blog post … Continue reading "10 year anniversary for sks-keyservers.net"

November 29, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Service Function Chaining demo with devstack (November 29, 2016, 14:09 UTC)

After a first high-level post, it is time to actually show networking-sfc in action! Based on a documentation example, we will create a simple demo, where we route some HTTP traffic through some VMs, and check the packets on them with tcpdump:

SFC demo diagram

This will be hosted on a single node devstack installation, and all VMs will use the small footprint CirrOS image, so this should run on “small” setups.

Installing the devstack environment

On your demo system (I used CentOS 7), check out devstack on the mitaka branch (remember to run devstack as a sudo-capable user, not root):

[stack@demo ~]$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/mitaka

Grab my local configuration file that enables the networking-sfc plugin, rename it to local.conf in your devstack/ directory.
If you prefer to adapt your current configuration file, just make sure your devstack checkout is on the mitaka branch, and add the SFC parts:
# SFC
enable_plugin networking-sfc https://git.openstack.org/openstack/networking-sfc
SFC_UPDATE_OVS=False

Then run the usual “./stack.sh” command, and go grab a coffee.

Deploy the demo instances

To speed this step up, I regrouped all the following items in a script. You can check it out (at a tested revision for this demo):
[stack@demo ~]$ git clone https://github.com/voyageur/openstack-scripts.git -b sfc_mitaka_demo

The script simple_sfc_vms.sh will:

  • Configure security (disable port security, set a few things in security groups, create a SSH key pair)
  • Create source, destination systems (with a basic web server)
  • Create service VMs, configuring the network interfaces and static IP routing to forward the packets
  • Create the SFC items (port pair, port pair group, flow classifier, port chain), sketched in the commands below
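
For reference, the SFC objects created in that last step boil down to commands along these lines; the object names match the ones used later in this post, the port and IP values are the demo’s, and the exact invocations live in the script:

$ neutron port-pair-create --ingress p1in --egress p1out PP1    # same idea for PP2 and PP3
$ neutron port-pair-group-create --port-pair PP1 --port-pair PP2 PG1    # and PG2 with PP3
$ neutron flow-classifier-create --protocol tcp --destination-port 80:80 --destination-ip-prefix 10.0.0.10/32 FC_demo
$ neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2 --flow-classifier FC_demo PC1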

I highly recommend reading it: it is mostly straightforward and commented, and it is where most of the interesting commands are hidden. So have a look before running it:
[stack@demo ~]$ ./openstack-scripts/simple_sfc_vms.sh
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
Updated network: private
Created a new port:
[...]

route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "192.168.0.1"
You can safely ignore the route errors at the end of the script (they are caused by a duplicate default route on the service VMs).

Remember, from now on, to source the credentials file in your current shell before running CLI commands:
[stack@demo ~]$ source ~/devstack/openrc demo demo

We first get the IP addresses for our source and destination demo VMs:
[vagrant@defiant-devstack ~]$ openstack server show source_vm -f value -c addresses; openstack server show dest_vm -f value -c addresses

private=fd73:381c:4fa2:0:f816:3eff:fe96:de8f, 10.0.0.9
private=10.0.0.10, fd73:381c:4fa2:0:f816:3eff:fe65:12fd

Now, we look for the tap devices associated to our service VMs:
[stack@demo ~]$ neutron port-list -f table -c id -c name

+----------------+--------------------------------------+
| name           | id                                   |
+----------------+--------------------------------------+
| p1in           | 897df85a-26c3-4491-888e-8cc58f19cea1 |
| p1out          | fa838294-317d-46df-b10e-b1734dd62faf |
| p2in           | c86dafc7-bda6-4537-b806-be2282f7e11e |
| p2out          | 12e58ea8-a9ab-4d0b-9fd7-707dc6e99f20 |
| p3in           | ee14f406-e9d6-4047-812b-aa04514f50dd |
| p3out          | 2d86403b-4639-40a0-897e-68fa0c759f01 |
[...]

These device names follow the tap<first 11 characters of the port ID> pattern, so for example tap897df85a-26 is the tap device associated with the p1in port here.
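
A quick way to double-check that naming rule, using the p1in port ID from the table above (plain bash substring expansion):
[stack@demo ~]$ port_id=897df85a-26c3-4491-888e-8cc58f19cea1
[stack@demo ~]$ echo "tap${port_id:0:11}"
tap897df85a-26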

See SFC in action

In this example we run a request loop from source_vm to dest_vm (remember to use the IP addresses found in the previous section):
[stack@demo ~]$ ssh cirros@10.0.0.9
$ while true; do curl 10.0.0.10; sleep 1; done
Welcome to dest-vm
Welcome to dest-vm
Welcome to dest-vm
[...]

So we do have access to the web server! But do the packets really go through the service VMs? To confirm that, run tcpdump on the tap interfaces in another shell:

# On the outgoing interface of VM 3
$ sudo tcpdump port 80 -i tap2d86403b-46
tcpdump: WARNING: tap2d86403b-46: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap2d86403b-46, link-type EN10MB (Ethernet), capture size 65535 bytes
11:43:20.806571 IP 10.0.0.9.50238 > 10.0.0.10.http: Flags [S], seq 2951844356, win 14100, options [mss 1410,sackOK,TS val 5010056 ecr 0,nop,wscale 2], length 0
11:43:20.809472 IP 10.0.0.9.50238 > 10.0.0.10.http: Flags [.], ack 3583226889, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 0
11:43:20.809788 IP 10.0.0.9.50238 > 10.0.0.10.http: Flags [P.], seq 0:136, ack 1, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 136
11:43:20.812226 IP 10.0.0.9.50238 > 10.0.0.10.http: Flags [.], ack 39, win 3525, options [nop,nop,TS val 5010057 ecr 5008744], length 0
11:43:20.817599 IP 10.0.0.9.50238 > 10.0.0.10.http: Flags [F.], seq 136, ack 40, win 3525, options [nop,nop,TS val 5010059 ecr 5008746], length 0
[...]

Here are some other examples (skipping the tcpdump output for clarity):
# You can check other tap devices, confirming both VM 1 and VM2 get traffic
$ sudo tcpdump port 80 -i tapfa838294-31
$ sudo tcpdump port 80 -i tap12e58ea8-a9

# Now we remove the flow classifier, and check the tcpdump output
$ neutron port-chain-update --no-flow-classifier PC1
$ sudo tcpdump port 80 -i tap2d86403b-46 # Quiet time

# We restore the classifier, but remove the group for VM3, so tcpdump will only show traffic on other VMs
$ neutron port-chain-update --flow-classifier FC_demo --port-pair-group PG1 PC1
$ sudo tcpdump port 80 -i tap2d86403b-46 # No traffic
$ sudo tcpdump port 80 -i tapfa838294-31 # Packets!

# Now we remove VM1 from the first group
$ neutron port-pair-group-update PG1 --port-pair PP2
$ sudo tcpdump port 80 -i tapfa838294-31 # No more traffic
$ sudo tcpdump port 80 -i tap12e58ea8-a9 # Here it is

# Restore the chain to its initial demo status
$ neutron port-pair-group-update PG1 --port-pair PP1 --port-pair PP2
$ neutron port-chain-update --flow-classifier FC_demo --port-pair-group PG1 --port-pair-group PG2 PC1

Where to go from here

Between these examples, the commands used in the demo script, and the documentation, you should have enough material to try your own commands! So have fun experimenting with these VMs.

Note that in the meantime we released the Newton version (3.0.0), which also includes the initial OpenStackClient (OSC) interface, so I will probably update this to run on Newton and with some shiny “openstack sfc xxx” commands. I also hope to make a nicer-than-tcpdumping-around demo later on, when time permits.

November 21, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.3 (November 21, 2016, 12:40 UTC)

Ok, I slacked by not posting for v3.1 and v3.2, and I should have, since those previous versions were awesome and feature-rich.

But v3.3 is another major milestone, made possible by tremendous contributions from @tobes as usual, and also greatly thanks to the hard work of @guiniol and @pferate, whom I’d like to mention and thank again!

Also, I’d like to mention that @tobes has become the first collaborator of the py3status project !

Instead of doing a changelog review, I’ll highlight some of the key features that got introduced and extended during those versions.

The py3 helper

Writing powerful py3status modules has never been so easy, thanks to the py3 helper!

This magical object is added automatically to modules and provides a lot of useful methods to help normalize and enhance module capabilities. This is a non-exhaustive list of such methods:

  • format_units: to pretty format units (KB, MB etc)
  • notify_user: send a notification to the user
  • time_in: to handle module cache expiration easily
  • safe_format: use the extended formatter to handle the module’s output in a powerful way (see below)
  • check_commands: check if the listed commands are available on the system
  • command_run: execute the given command
  • command_output: execute the command and get its output
  • play_sound: sound notifications !

Powerful control over the modules’ output

Using the self.py3.safe_format helper unleashes a feature-rich formatter that one can use to conditionally select the output of a module based on its content.

  • Square brackets [] can be used. The content of them will be removed from the output if there is no valid placeholder contained within. They can also be nested.
  • A pipe (vertical bar) | can be used to divide sections; only the first valid section will be shown in the output.
  • A backslash \ can be used to escape a character eg \[ will show [ in the output.
  • \? is special and is used to provide extra commands to the format string, example \?color=#FF00FF. Multiple commands can be given using an ampersand & as a separator, example \?color=#FF00FF&show.
  • {<placeholder>} will be converted, or removed if it is None or empty. Formatting can also be applied to the placeholder eg {number:03.2f}.

Example format_string:

"[[{artist} - ]{title}]|{file}"

This will show “artist - title” if artist is present, just the title if there is a title but no artist, and file if file is present but neither artist nor title.

More code and documentation tests

A lot of effort has been put into py3status’ automated CI and feature testing, allowing more confidence in the advanced features we develop while keeping a higher standard of code quality.

This goes as far as testing even the modules’ docstrings for bad formatting 🙂

Colouring and thresholds

A special effort has been put into normalizing the modules’ output colouring, with the added refinement of normalized thresholds to give users more power over their output.

New modules, on and on !

  • new clock module to display multiple times and dates in a flexible way, by @tobes
  • new coin_balance module to display balances of diverse crypto-currencies, by Felix Morgner
  • new diskdata module to show both usage data and IO data from disks, by @guiniol
  • new exchange_rate module to check for your favorite currency rates, by @tobes
  • new file_status module to check the presence of a file, by @ritze
  • new frame module to group and display multiple modules inline, by @tobes
  • new gpmdp module for Google Play Music Desktop Player by @Spirotot
  • new kdeconnector module to display information about Android devices, by @ritze
  • new mpris module to control MPRIS enabled music players, by @ritze
  • new net_iplist module to display interfaces and their IPv4 and IPv6 IP addresses, by @guiniol
  • new process_status module to check the presence of a process, by @ritze
  • new rainbow module to brighten your day, by @tobes
  • new tcp_status module to check for a given TCP port on a host, by @ritze

Changelog

The changelog is very big and the next 3.4 milestone is very promising with amazing new features giving you even more power over your i3bar, stay tuned !

Thank you contributors

Still a lot of new first-time contributors, which I take great pride in, as I see it as a sign that py3status is an accessible project.

  • @btall
  • @chezstov
  • @coxley
  • Felix Morgner
  • Gabriel Féron
  • @guiniol
  • @inclementweather
  • @jakubjedelsky
  • Jan Mrázek
  • @m45t3r
  • Maxim Baz
  • @pferate
  • @ritze
  • @rixx
  • @Spirotot
  • @Stautob
  • @tjaartvdwalt
  • Yuli Khodorkovskiy
  • @ZeiP

November 10, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)

Open Source Conference 2016 Tokyo

Many people came to the Gentoo booth, mainly students and open source users asking for information about Gentoo.

We gave away around 200 flyers and many, many stickers during the two days. Unfortunately, the stickers we ordered from unixstickers had an SVG problem.

We also had on display some esoteric environments like the Sharp IS01, of course running Gentoo both natively and as a Prefix. Naturally, one of the first things we tried was the five-minute-long Gentoo sl command.

image from: @NTSC_J

We also had a Gentoo notebook running Wayland (the one in the middle).

It was an amazing event and I would like to thank everyone who came to the Gentoo booth, everyone who helped put the booth together, and the whole amazing Gentoo community.

November 07, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
What is “Service Function Chaining”? (November 07, 2016, 16:59 UTC)

This is the first article in a series about Service Function Chaining (SFC for short), and its OpenStack implementation, networking-sfc, that I have been working on.

The SFC acronym can easily appear in Software-defined networking (SDN) discussions, in a paper about Network function virtualization (NFV), in some IETF documents, … Some of these broader subjects use other names for SFC elements, but that is probably a good topic for another post/blog.
If you already know SFC elements, you can probably skip to the next blog post.

Definitions

So what is this “Service Function Chaining”? Let me quote the architecture RFC:

The delivery of end-to-end services often requires various service functions. These include traditional network service functions such as firewalls and traditional IP Network Address Translators (NATs), as well as application-specific functions. The definition and instantiation of an ordered set of service functions and subsequent “steering” of traffic through them is termed Service Function Chaining (SFC).

I see SFC as routing at a higher level of abstraction: in a typical network, you route all the traffic coming from the Internet through a firewall box. So you set up the firewall system, with its network interfaces (Internet and intranet sides), and add some IP routes to steer the traffic through it.
SFC uses the same concept, but with logical blocks: if a packet matches some conditions (it is Internet traffic), force it through a series of “functions” (in this case, only one function: a firewall system). And voilà, you have your service function chain!

I like this simple comparison as it introduces most of the SFC elements:

  • service function: a.k.a. “bump in the wire”. This is a transparent system that you want some flows to go through (typical use cases: firewall, load balancer, analyzer).
  • flow classifier: the “entry point”, it determines whether a flow should go through the chain. This can be based on IP attributes (source/destination address/port, …), layer 7 attributes, or even metadata in the flow set by a previous chain.
  • port pair: as the name implies, this is a pair of ports (network interfaces) for a service function (the firewall in our example). The traffic is routed to the “in” port and is expected to exit the VM through the “out” port. These can be the same port.
  • port chain: the SFC object itself, a set of flow classifiers and a set of port pairs (that define the chain sequence).

An additional type not mentioned before is the port pair group: if you have multiple service functions of an identical type, you can regroup them to distribute the flows among them.

Use cases and advantages

OK, after seeing all these definitions, you may wonder “what’s the point?” What I have seen so far is that it allows:

  • complex routing made easier. Define a sequence of logical steps, and the flow will go through it.
  • HA deployments: add multiple VMs to the same group, and the load will be distributed between them.
  • dynamic inventory. Add or remove functions dynamically, either to scale a group (add a load balancer, remove an analyzer), change functions order, add a new function in the middle of some chain, …
  • complex classification. Flows can be classified based on L7 criteria, or on output from a previous chain (for example a Deep Packet Inspection system).

Going beyond these technical advantages, you can read an RFC that is actually a direct answer to this question: RFC 7498

Going further

To keep a reasonable post length, I did not talk about:

  • How does networking-sfc tag traffic? Hint: MPLS labels
  • Service functions may or may not be SFC-aware: proxies can handle the SFC tagging
  • Upcoming feature: support for Network Service Header (NSH)
  • Upcoming feature: SFC graphs (allowing complex chains and chains of chains)
  • networking-sfc modularity: the reference implementation uses OVS, but this is just one of the possible drivers
  • Also, networking-sfc architecture in general
  • SFC use in VNF Forwarding Graphs (VNFFG)

Links

SFC has abundant documentation, both in the OpenStack project and outside. Here is some additional reading if you are interested (mostly networking-sfc focused):

Denis Dupeyron a.k.a. calchan (homepage, bugs)
SCALE 15x CFP is closing soon (November 07, 2016, 04:07 UTC)

Just a quick reminder that the deadline for proposing a talk to SCALE 15x is on November 15th. More information, including topics of interest, is available on the SCALE website.

SCALE 15x is to be held on March 2-5, 2017 at the Pasadena Convention Center in Pasadena, California, near Los Angeles. This is the same venue as last year, and it is much nicer than the original one from the years before.

I’ll see you there.