
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Yury German
. Zack Medico

Last updated:
September 30, 2016, 13:06 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

September 27, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
We do not ship SELinux sandbox (September 27, 2016, 18:47 UTC)

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to grant applications the standard privileges needed for interacting with the user (so that the user can of course still use the application), but removes many permissions that might be abused either to obtain information from the system or to exploit vulnerabilities and gain more privileges. It also hides a number of resources on the system through namespaces.

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not sufficiently shield off the user's terminal access. In the PoC post we see that characters could be pushed into the terminal through the ioctl() function (which issues the ioctl system call used for input/output operations against devices), and these characters are eventually executed when the application finishes.

That's bad, of course. Hence the CVE-2016-7545 assignment, and a fix has also been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There is some history behind why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command called sandbox, installed through the sys-apps/sandbox package. So back in the days when we still shipped the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for the sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 came up, in which the seunshare_mount function had a security issue. And in 2014, CVE-2014-3215 came up with, again, a security issue in seunshare.

At that point, I had had enough of this sandbox utility. First of all, it never quite worked out of the box on Gentoo (as it also requires policy that is not part of the upstream release), and given its wide-open access approach (it is meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

None of the Gentoo SELinux users ever asked me to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

September 26, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Mounting QEMU images (September 26, 2016, 17:26 UTC)

While working on the second edition of my first book, SELinux System Administration - Second Edition, I had to test out a few commands on different Linux distributions to make sure that I don't create instructions that only work on Gentoo Linux. After all, as awesome as Gentoo might be, the Linux world is a bit bigger. So I downloaded a few live systems to run in QEMU/KVM.

Some of these systems however use cloud-init which, while interesting to use, is not set up on my system yet. And without support for cloud-init, how can I get access to the system?

Mounting qemu images on the system

To resolve this, I want to mount the image on my system, and edit the /etc/shadow file so that the root account is accessible. Once that is accomplished, I can log on through the console and start setting up the system further.

Images in the qcow2 format can be mounted through the nbd driver, but that would require some updates to my local SELinux policy that I am too lazy to make right now (I'll get to them eventually, but first I need to finish the book). Still, if you are interested in using nbd, see these instructions or a related thread on the Gentoo Forums.
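For reference, the nbd route typically looks something like the following sketch (this is a privileged procedure, assuming the nbd kernel module is available and you have root access; device and partition names may differ on your system):

```shell
# Load the nbd module with partition support, then expose the image
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 root.qcow2
mount /dev/nbd0p1 /mnt

# ... work on the mounted file system, then tear everything down
umount /mnt
qemu-nbd --disconnect /dev/nbd0
```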

Luckily, storage is cheap (even SSD disks), so I quickly converted the qcow2 images into raw images:

~$ qemu-img convert root.qcow2 root.raw

With the image now available in raw format, I can use the loop devices to mount the image(s) on my system:

~# losetup /dev/loop0 root.raw
~# kpartx -a /dev/loop0
~# mount /dev/mapper/loop0p1 /mnt

The kpartx command will detect the partitions and make them available: the first partition becomes available at /dev/mapper/loop0p1, the second at /dev/mapper/loop0p2, and so forth.

With the image now mounted, let's update the /etc/shadow file.

Placing a new password hash in the shadow file

A quick Google search revealed that the following command generates a shadow-compatible (MD5-crypt) hash for a password:

~$ openssl passwd -1 MyMightyPassword
$1$BHbMVz9i$qYHmULtXIY3dqZkyfW/oO.
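Note that -1 produces an MD5-crypt hash; if the target system's shadow format uses SHA-512 (the common default nowadays), openssl can generate that too, assuming OpenSSL 1.1.1 or newer for the -6 option:

```shell
# SHA-512 crypt hash (note the $6$ prefix instead of $1$);
# the salt is random, so the output differs on every run
openssl passwd -6 MyMightyPassword
```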

The challenge wasn't to generate the hash though, but to edit the file:

~# vim /mnt/etc/shadow
vim: Permission denied

The image that I downloaded used SELinux (of course), which meant that the shadow file was labeled with shadow_t which I am not allowed to access. And I didn't want to put SELinux in permissive mode just for this (sometimes I /do/ have some time left, apparently).

So I remounted the image, but now with the context= mount option, like so:

~# mount -o context="system_u:object_r:var_t:s0" /dev/mapper/loop0p1 /mnt

Now all files are labeled with var_t which I do have permissions to edit. But I also need to take care that the files that I edited get the proper label again. There are a number of ways to accomplish this. I chose to create a .autorelabel file in the root of the partition. Red Hat based distributions will pick this up and force a file system relabeling operation.

Unmounting the file system

After making the changes, I can now unmount the file system again:

~# umount /mnt
~# kpartx -d /dev/loop0
~# losetup -d /dev/loop0

With that done, I had root access to the image and could start testing out my own set of commands.

It did trigger my interest in the cloud-init setup though...

September 25, 2016

Description:
Mujstest, which is part of mupdf, is a scriptable tester for mupdf + js.

Fuzzing revealed a strcpy-param-overlap.

The complete ASan output:

# mujstest $FILE
==26843==ERROR: AddressSanitizer: strcpy-param-overlap: memory ranges [0x0000013c5d40,0x0000013c62ed) and [0x0000013c6285, 0x0000013c6832) overlap
    #0 0x473129 in __interceptor_strcpy /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:545
    #1 0x4f7910 in main /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/platform/x11/jstest_main.c:353:6
    #2 0x7f8af37a961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #3 0x41ade8 in _init (/usr/bin/mujstest+0x41ade8)

0x0000013c6140 is located 0 bytes to the right of global variable 'filename' defined in 'platform/x11/jstest_main.c:15:13' (0x13c5d40) of size 1024
0x0000013c6285 is located 5 bytes inside of global variable 'getline_buffer' defined in 'platform/x11/jstest_main.c:24:13' (0x13c6280) of size 4096
SUMMARY: AddressSanitizer: strcpy-param-overlap /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:545 in __interceptor_strcpy
==26843==ABORTING

Affected version:
1.9a

Fixed version:
2.0 (not yet released)

Commit fix:
http://git.ghostscript.com/?p=mupdf.git;h=cfe8f35bca61056363368c343be36812abde0a06

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-04: bug discovered
2016-08-05: bug reported to upstream
2016-09-22: upstream released a patch
2016-09-25: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

mupdf: mujstest: strcpy-param-overlap in main (jstest_main.c)

September 24, 2016

Description:
Mujstest, which is part of mupdf, is a scriptable tester for mupdf + js.

Fuzzing revealed a global buffer overflow (write).

The complete ASan output:

# mujstest $FILE
=================================================================
==2244==ERROR: AddressSanitizer: global-buffer-overflow on address 0x0000013c6140 at pc 0x000000473526 bp 0x7fff866f77d0 sp 0x7fff866f6f80
WRITE of size 1181 at 0x0000013c6140 thread T0
    #0 0x473525 in __interceptor_strcpy /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:547
    #1 0x4f7910 in main /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/platform/x11/jstest_main.c:353:6
    #2 0x7f3a6c18661f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #3 0x41ade8 in _init (/usr/bin/mujstest+0x41ade8)

0x0000013c6140 is located 0 bytes to the right of global variable 'filename' defined in 'platform/x11/jstest_main.c:15:13' (0x13c5d40) of size 1024
SUMMARY: AddressSanitizer: global-buffer-overflow /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:547 in __interceptor_strcpy
Shadow bytes around the buggy address:
  0x000080270bd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270be0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270bf0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270c10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x000080270c20: 00 00 00 00 00 00 00 00[f9]f9 f9 f9 f9 f9 f9 f9
  0x000080270c30: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080270c40: f9 f9 f9 f9 f9 f9 f9 f9 04 f9 f9 f9 f9 f9 f9 f9
  0x000080270c50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270c60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270c70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==2244==ABORTING

Affected version:
1.9a

Fixed version:
2.0 (not yet released)

Commit fix:
http://git.ghostscript.com/?p=mupdf.git;h=cfe8f35bca61056363368c343be36812abde0a06

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-04: bug discovered
2016-08-05: bug reported to upstream
2016-09-22: upstream released a patch
2016-09-24: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

mupdf: mujstest: global-buffer-overflow in main (jstest_main.c)

Description:
Mujstest, which is part of mupdf, is a scriptable tester for mupdf + js.

Fuzzing revealed a global buffer overflow (write).

The complete ASan output:

# mujstest $FILE
==1278==ERROR: AddressSanitizer: global-buffer-overflow on address 0x0000013c7280 at pc 0x0000004fa432 bp 0x7ffea75837d0 sp 0x7ffea75837c8
WRITE of size 1 at 0x0000013c7280 thread T0
    #0 0x4fa431 in my_getline /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/platform/x11/jstest_main.c:214:5
    #1 0x4fa431 in main /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/platform/x11/jstest_main.c:335
    #2 0x7fb62229661f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #3 0x41ade8 in _init (/usr/bin/mujstest+0x41ade8)

0x0000013c7280 is located 0 bytes to the right of global variable 'getline_buffer' defined in 'platform/x11/jstest_main.c:24:13' (0x13c6280) of size 4096
SUMMARY: AddressSanitizer: global-buffer-overflow /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/platform/x11/jstest_main.c:214:5 in my_getline
Shadow bytes around the buggy address:
  0x000080270e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270e10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270e20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270e30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x000080270e40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x000080270e50:[f9]f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080270e60: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080270e70: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080270e80: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080270e90: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
  0x000080270ea0: f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9 f9
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==1278==ABORTING

Affected version:
1.9a

Fixed version:
2.0 (not yet released)

Commit fix:
http://git.ghostscript.com/?p=mupdf.git;h=446097f97b71ce20fa8d1e45e070f2e62676003e

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-04: bug discovered
2016-08-05: bug reported to upstream
2016-09-22: upstream released a patch
2016-09-24: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

mupdf: mujstest: global-buffer-overflow in my_getline (jstest_main.c)

September 22, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
Few notes on locale craziness (September 22, 2016, 20:13 UTC)

Back in the EAPI 6 guide I briefly noted that we have added a sanitization requirement for locales. Having been informed of another locale issue in Python (in a pre-EAPI 6 ebuild), I have decided to write a short note on locale curiosities that could also be useful when reporting issues upstream.

Where l10n and i18n are concerned, most developers correctly predict that date and time formats, currencies, and number formats are going to change. It's rather hard to find an application that would fail because of a changed system date format; it's much easier to find one that does not respect the locale and uses hard-coded format strings for user display. You can find applications that unconditionally use a specific decimal separator, but it's quite rare to find one that chokes on combining code using a hard-coded separator with system routines that respect locales. Some applications rely on English error messages, but that's rather obviously perceived as a mistake. However, there are also two hard cases…

Lowercase and uppercase

For a start, if you thought that the ASCII range of lowercase characters maps cleanly to the ASCII range of uppercase characters, you were wrong. The Turkish (tr_TR) locale is different here: it maps lowercase ‘i’ (LATIN SMALL LETTER I) to uppercase ‘İ’ (LATIN CAPITAL LETTER I WITH DOT ABOVE). Similarly, ‘I’ (LATIN CAPITAL LETTER I) maps to ‘ı’ (LATIN SMALL LETTER DOTLESS I). What does this mean in practice? That if you have a Turkish user, then depending on the software used, your Latin ‘i’ may be uppercased to ‘I’ (as you expect it to be), to ‘İ’ (as would be correct in free text) or… left as ‘i’.

What’s the solution for this? If you need to uppercase/lowercase ASCII text (e.g. variable names), either use a function that does not respect the locale (e.g. 'i' - ('a' - 'A') in C) or set LC_CTYPE to a sane locale (e.g. C). However, remember that LC_CTYPE also affects the character encoding; if you read UTF-8 input, you need a locale with a UTF-8 codeset.
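As a tiny illustration of the sanitized approach, pinning the locale makes ASCII case mapping predictable no matter what the user's environment says (a minimal sketch using coreutils tr):

```shell
# Uppercase an ASCII identifier under a fixed locale; with the user's
# own locale (e.g. tr_TR.UTF-8) the handling of 'i' could otherwise vary
printf '%s\n' 'config_item' | LC_ALL=C tr '[:lower:]' '[:upper:]'
# prints CONFIG_ITEM
```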

Collation

The other problem is collation, i.e. sorting. The more obvious part of it is that particular locales enforce a specific ordering of their own diacritic characters. For example, the Polish letter ‘ą’ sorts between ‘a’ and ‘b’ in the Polish locale, and somewhere at the end in the C locale. The intermediately obvious part is that some locales order lowercase and uppercase characters differently: the C and German locales sort uppercase characters first (the former because of the ASCII codes), while many other locales sort them the other way around.

Now, the non-obvious part is that some locales actually reorder the Latin alphabet. For example, the Estonian (et_EE) locale puts ‘z’ somewhere between ‘s’ and ‘t’. Yep, seriously. What’s even less obvious is that it means that the [a-z] character class suddenly ends halfway through the lowercase characters!

What’s the solution? Again, either use non-locale-sensitive functions or sanitize LC_COLLATE. For regular expressions, the named character classes ([[:lower:]], [[:upper:]]) are always a better choice.
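Both points can be demonstrated with coreutils under a sanitized locale (a minimal sketch; C collation behaves identically everywhere):

```shell
# With the locale sanitized to C, ordering follows the ASCII codes,
# so uppercase letters sort before lowercase ones
printf 'banana\nApple\ncherry\n' | LC_ALL=C sort
# prints: Apple, banana, cherry (one per line)

# The named class matches exactly the lowercase letters of the active
# locale, avoiding the surprises that ranges like [a-z] can produce
printf 'abc\nABC\n' | grep '^[[:lower:]]*$'
# prints: abc
```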

Does anyone know more fun locales?

mupdf: use-after-free in pdf_to_num (pdf-object.c) (September 22, 2016, 15:33 UTC)

Description:
mupdf is a lightweight PDF viewer and toolkit written in portable C.

Fuzzing through mutool revealed a use-after-free.

The complete ASan output:

# mutool info $FILE
==5430==ERROR: AddressSanitizer: heap-use-after-free on address 0x60300000ea42 at pc 0x7fbc4c3824e5 bp 0x7ffee68ead70 sp 0x7ffee68ead68                                                                                                                                       
READ of size 1 at 0x60300000ea42 thread T0                                                                                                                                                                                                                                    
    #0 0x7fbc4c3824e4 in pdf_to_num /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-object.c:375:35                                                                                                                                                       
    #1 0x53f042 in gatherfonts /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:259:46                                                                                                                                                             
    #2 0x53f042 in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:595                                                                                                                                                         
    #3 0x53913a in gatherpageinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:661:2                                                                                                                                                           
    #4 0x53913a in showinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:957                                                                                                                                                                   
    #5 0x537d46 in pdfinfo_info /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:1029:3                                                                                                                                                            
    #6 0x537d46 in pdfinfo_main /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:1077                                                                                                                                                              
    #7 0x4f8ace in main /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/mutool.c:104:12                                                                                                                                                                     
    #8 0x7fbc4ae1f61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                                                                                                       
    #9 0x41f9c8 in _init (/usr/bin/mutool+0x41f9c8)                                                                                                                                                                                                                           
                                                                                                                                                                                                                                                                              
0x60300000ea42 is located 2 bytes inside of 24-byte region [0x60300000ea40,0x60300000ea58)                                                                                                                                                                                    
freed by thread T0 here:                                                                                                                                                                                                                                                      
    #0 0x4c6c10 in free /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:38                                                                                                                                    
    #1 0x7fbc4bf33830 in fz_free /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/memory.c:187:2                                                                                                                                                              
                                                                                                                                                                                                                                                                              
previously allocated by thread T0 here:                                                                                                                                                                                                                                       
    #0 0x4c6f18 in malloc /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:52                                                                                                                                  
    #1 0x7fbc4bf2a86f in do_scavenging_malloc /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/memory.c:17:7                                                                                                                                                  
    #2 0x7fbc4bf2a86f in fz_malloc /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/memory.c:57                                                                                                                                                               
    #3 0x7fbc4c37f94d in pdf_new_indirect /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-object.c:186:8                                                                                                                                                  
                                                                                                                                                                                                                                                                              
SUMMARY: AddressSanitizer: heap-use-after-free /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-object.c:375:35 in pdf_to_num                                                                                                                              
Shadow bytes around the buggy address:                                                                                                                                                                                                                                        
  0x0c067fff9cf0: fd fd fa fa fd fd fd fa fa fa fd fd fd fa fa fa                                                                                                                                                                                                             
  0x0c067fff9d00: fd fd fd fd fa fa fd fd fd fa fa fa fd fd fd fa                                                                                                                                                                                                             
  0x0c067fff9d10: fa fa fd fd fd fd fa fa fd fd fd fa fa fa fd fd                                                                                                                                                                                                             
  0x0c067fff9d20: fd fa fa fa fd fd fd fd fa fa fd fd fd fa fa fa
  0x0c067fff9d30: fd fd fd fa fa fa fd fd fd fa fa fa fd fd fd fa
=>0x0c067fff9d40: fa fa 00 00 00 fa fa fa[fd]fd fd fa fa fa fd fd
  0x0c067fff9d50: fd fd fa fa 00 00 00 fa fa fa 00 00 00 fa fa fa
  0x0c067fff9d60: 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00 00 fa
  0x0c067fff9d70: fa fa 00 00 00 fa fa fa 00 00 00 06 fa fa 00 00
  0x0c067fff9d80: 01 fa fa fa 00 00 05 fa fa fa 00 00 00 fa fa fa
  0x0c067fff9d90: 00 00 00 00 fa fa 00 00 00 00 fa fa 00 00 00 fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==5430==ABORTING

Affected version:
1.9a

Fixed version:
1.10 (not yet released)

Commit fix:
http://git.ghostscript.com/?p=mupdf.git;h=1e03c06456d997435019fb3526fa2d4be7dbc6ec

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:

Timeline:
2016-08-05: bug discovered
2016-08-05: bug reported privately to upstream
2016-09-22: upstream released a patch
2016-09-22: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

mupdf: use-after-free in pdf_to_num (pdf-object.c)

Description:
mupdf is a lightweight PDF viewer and toolkit written in portable C.

Fuzzing through mutool revealed an infinite loop in gatherresourceinfo when mutool tries to get info from a crafted PDF.

The output is truncated here because the original is ~1300 lines long (because of the loop):

# mutool info $FILE
[cut here]
warning: not a font dict (0 0 R)
ASAN:DEADLYSIGNAL
=================================================================
==8763==ERROR: AddressSanitizer: stack-overflow on address 0x7ffeb34e6f6c (pc 0x7f188e685b2e bp 0x7ffeb34e7410 sp 0x7ffeb34e6ea0 T0)
    #0 0x7f188e685b2d in _IO_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1266
    #1 0x7f188e6888c0 in buffered_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:2346
    #2 0x7f188e685cd4 in _IO_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1292
    #3 0x49927f in __interceptor_vfprintf /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:1111
    #4 0x499352 in fprintf /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:1156
    #5 0x7f188f70f03c in fz_flush_warnings /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/error.c:18:3
    #6 0x7f188f70f03c in fz_throw /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/error.c:168
    #7 0x7f188fac98d5 in pdf_parse_ind_obj /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-parse.c:565:3
    #8 0x7f188fb5fe6b in pdf_cache_object /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-xref.c:2029:13
    #9 0x7f188fb658d2 in pdf_resolve_indirect /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-xref.c:2155:12
    #10 0x7f188fbc0a0d in pdf_is_dict /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-object.c:268:2
    #11 0x53ea6a in gatherfonts /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:257:8
    #12 0x53ea6a in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:595
    #13 0x53f31b in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:603:5
    [cut here]
    #253 0x53f31b in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:603:5

SUMMARY: AddressSanitizer: stack-overflow /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1266 in _IO_vfprintf
==8763==ABORTING
1152.crashes:
PDF-1.4
Pages: 1
Retrieving info from pages 1-1...
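The trace shows gatherresourceinfo calling itself (pdfinfo.c:603) until the stack is exhausted. A common mitigation for recursive resource walkers is a depth cap; the following is a minimal sketch of that technique in Python with hypothetical names, not mupdf's actual fix:

```python
MAX_DEPTH = 100  # arbitrary cap, for illustration only

def gather_resource_info(resources, depth=0):
    """Walk a nested resource dictionary, refusing to recurse past MAX_DEPTH."""
    if depth > MAX_DEPTH:
        raise RecursionError("resource tree too deep (possible cycle)")
    found = []
    for name, value in resources.items():
        if isinstance(value, dict):  # nested resource dictionary
            found.extend(gather_resource_info(value, depth + 1))
        else:
            found.append((name, value))
    return found

# A self-referencing resource tree (like the crafted PDF) is rejected
# instead of overflowing the stack.
cyclic = {"Font": {}}
cyclic["Font"]["XObject"] = cyclic
try:
    gather_resource_info(cyclic)
except RecursionError as e:
    print("refused:", e)
```

The cap turns an attacker-controlled unbounded recursion into a clean error on malformed input.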

Affected version:
1.9a

Fixed version:
1.10 (not yet released)

Commit fix:
http://git.ghostscript.com/?p=mupdf.git;h=fdf71862fe929b4560e9f632d775c50313d6ef02

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

Timeline:
2016-08-05: bug discovered
2016-08-05: bug reported to upstream
2016-09-22: upstream released a patch
2016-09-22: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

mupdf: mutool: infinite loop in gatherresourceinfo (pdfinfo.c)

September 21, 2016

Description:
Libav is an open source set of tools for audio and video processing.

Fuzzing with an mp3 file as input uncovered a divide-by-zero in sbr_make_f_master.

The complete ASan output:

# avconv -i $FILE -f null -
avconv version 11.7, Copyright (c) 2000-2016 the Libav developers
  built on Aug 16 2016 15:34:42 with clang version 3.8.1 (tags/RELEASE_381/final)
[mpeg @ 0x61a00001f280] Format detected only with low score of 25, misdetection possible!
[aac @ 0x619000000580] Sample rate index in program config element does not match the sample rate index configured by the container.
[aac @ 0x619000000580] SBR was found before the first channel element.
ASAN:DEADLYSIGNAL
=================================================================
==29103==ERROR: AddressSanitizer: FPE on unknown address 0x7fbd80295491 (pc 0x7fbd80295491 bp 0x7ffde63eb2f0 sp 0x7ffde63eafa0 T0)
    #0 0x7fbd80295490 in sbr_make_f_master /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacsbr.c:338:57
    #1 0x7fbd80295490 in sbr_reset /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacsbr.c:1045
    #2 0x7fbd80295490 in ff_decode_sbr_extension /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacsbr.c:1093
    #3 0x7fbd801efe1b in decode_extension_payload /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacdec.c:2196:15
    #4 0x7fbd801efe1b in aac_decode_frame_int /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacdec.c:2866
    #5 0x7fbd801d3bbb in aac_decode_frame /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacdec.c:2959:15
    #6 0x7fbd823ed42a in avcodec_decode_audio4 /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/utils.c:1657:15
    #7 0x7fbd83f00b20 in try_decode_frame /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavformat/utils.c:1914:19
    #8 0x7fbd83ef76e2 in avformat_find_stream_info /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavformat/utils.c:2276:9
    #9 0x50d195 in open_input_file /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv_opt.c:726:11
    #10 0x50b625 in open_files /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv_opt.c:2127:15
    #11 0x50af81 in avconv_parse_options /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv_opt.c:2164:11
    #12 0x541414 in main /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2630:11
    #13 0x7fbd7e77f61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #14 0x41d098 in _init (/usr/bin/avconv+0x41d098)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/aacsbr.c:338:57 in sbr_make_f_master
==29103==ABORTING
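The FPE fires at aacsbr.c:338, where a value derived from the (attacker-controlled) SBR header ends up as a divisor. The general defensive pattern is to validate such a divisor before dividing; a minimal sketch with hypothetical names, not libav's code:

```python
def sbr_band_spacing(k2, k0, num_bands):
    """Sketch: validate a header-derived divisor before dividing,
    the class of check missing at the crash site."""
    if num_bands <= 0:
        raise ValueError("invalid SBR header: non-positive band count")
    return (k2 - k0) / num_bands

# A zero band count from a malformed stream is rejected up front
# instead of triggering a hardware divide-by-zero.
try:
    sbr_band_spacing(64, 8, 0)
except ValueError as e:
    print("rejected:", e)
```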

Affected version:
11.7

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2016-7499

Timeline:
2016-08-15: bug discovered
2016-08-16: bug reported to upstream
2016-09-21: blog post about the issue
2016-09-21: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libav: divide-by-zero in sbr_make_f_master (aacsbr.c)

September 20, 2016

Description:
Libav is an open source set of tools for audio and video processing.

Fuzzing with an mp3 file as input uncovered a NULL pointer access in ff_put_pixels8_xy2_mmx.

The complete ASan output:

# avconv -i $FILE -f null -
avconv version 11.7, Copyright (c) 2000-2016 the Libav developers
  built on Aug 16 2016 15:34:42 with clang version 3.8.1 (tags/RELEASE_381/final)
[h263 @ 0x61a00001f280] Format detected only with low score of 25, misdetection possible!
[h263 @ 0x619000000580] warning: first frame is no keyframe
[h263 @ 0x619000000580] cbpc damaged at 2 0
[h263 @ 0x619000000580] Error at MB: 2
[h263 @ 0x619000000580] concealing 6336 DC, 6336 AC, 6336 MV errors
[h263 @ 0x61a00001f280] Estimating duration from bitrate, this may be inaccurate
Input #0, h263, from '70.crashes':
  Duration: N/A, bitrate: N/A
    Stream #0.0: Video: h263, yuv420p, 1408x1152 [PAR 12:11 DAR 4:3], 25 fps, 25 tbn, 29.97 tbc
Output #0, null, to 'pipe:':
  Metadata:
    encoder         : Lavf56.1.0
    Stream #0.0: Video: rawvideo, yuv420p, 1408x1152 [PAR 12:11 DAR 4:3], q=2-31, 200 kb/s, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc56.1.0 rawvideo
Stream mapping:
  Stream #0:0 -> #0:0 (h263 (native) -> rawvideo (native))
Press ctrl-c to stop encoding
[h263 @ 0x61900001cc80] warning: first frame is no keyframe
[h263 @ 0x61900001cc80] cbpc damaged at 2 0
[h263 @ 0x61900001cc80] Error at MB: 2
[h263 @ 0x61900001cc80] concealing 6336 DC, 6336 AC, 6336 MV errors
[h263 @ 0x61900001cc80] warning: first frame is no keyframe
[h263 @ 0x61900001cc80] cbpc damaged at 0 0
[h263 @ 0x61900001cc80] Error at MB: 0
[h263 @ 0x61900001cc80] concealing 99 DC, 99 AC, 99 MV errors
Input stream #0:0 frame changed from size:1408x1152 fmt:yuv420p to size:176x144 fmt:yuv420p
[h263 @ 0x61900001cc80] warning: first frame is no keyframe
ASAN:DEADLYSIGNAL
=================================================================
==28973==ERROR: AddressSanitizer: SEGV on unknown address 0x7f22da99ac95 (pc 0x7f22e80d8892 bp 0x7ffcd7c28e90 sp 0x7ffcd7c28e20 T0)
    #0 0x7f22e80d8891 in ff_put_pixels8_xy2_mmx /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/x86/rnd_template.c:37:5
    #1 0x7f22e7217de0 in hpel_motion /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:224:5
    #2 0x7f22e7217de0 in apply_8x8 /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:798
    #3 0x7f22e7217de0 in mpv_motion_internal /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:877
    #4 0x7f22e7217de0 in ff_mpv_motion /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:981
    #5 0x7f22e714459b in mpv_decode_mb_internal /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo.c:2223:21
    #6 0x7f22e714459b in ff_mpv_decode_mb /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo.c:2358
    #7 0x7f22e6056c95 in decode_slice /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/h263dec.c:273:13
    #8 0x7f22e60522cd in ff_h263_decode_frame /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/h263dec.c:575:11
    #9 0x7f22e79dd906 in avcodec_decode_video2 /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/utils.c:1600:19
    #10 0x5647eb in decode_video /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:1259:11
    #11 0x5647eb in process_input_packet /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:1398
    #12 0x550e63 in process_input /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2440:11
    #13 0x550e63 in transcode /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2488
    #14 0x550e63 in main /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2647
    #15 0x7f22e3d7261f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #16 0x41d098 in _init (/usr/bin/avconv+0x41d098)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/x86/rnd_template.c:37:5 in ff_put_pixels8_xy2_mmx
==28973==ABORTING
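Note the log line just before the crash: the stream's frame size changes mid-decode (1408x1152 to 176x144), and motion compensation then reads through a reference that no longer matches. A common defensive pattern is to validate the reference frame and the motion block's bounds before reading; a hypothetical Python sketch of those checks, not libav's actual code:

```python
def apply_motion(ref_frame, x, y, block=8):
    """Sketch: refuse motion compensation when the reference frame is
    missing or the requested block falls outside its bounds."""
    if ref_frame is None:
        raise ValueError("no reference frame: first frame was not a keyframe")
    height, width = len(ref_frame), len(ref_frame[0])
    if x < 0 or y < 0 or x + block > width or y + block > height:
        raise ValueError("motion vector points outside the reference frame")
    # Copy the block out of the reference (the safe path).
    return [row[x:x + block] for row in ref_frame[y:y + block]]
```

With these checks, a damaged stream produces a decode error rather than a read through an invalid pointer.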

Affected version:
11.7

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2016-7477

Timeline:
2016-08-15: bug discovered
2016-08-16: bug reported to upstream
2016-09-20: blog post about the issue
2016-09-21: CVE assigned

Note:
This bug was found with American Fuzzy Lop.

Permalink:

libav: NULL pointer dereference in ff_put_pixels8_xy2_mmx (rnd_template.c)

September 18, 2016
Zack Medico a.k.a. zmedico (homepage, bugs)

For I/O bound tasks, Python coroutines make a nice replacement for threads. Unfortunately, there’s no asynchronous API for reading files, as discussed in the Best way to read/write files with AsyncIO thread of the python-tulip mailing list.

Meanwhile, it is essential that a long-running coroutine contain some asynchronous calls, since otherwise it will run all the way to completion before any other event loop tasks are allowed to run. For a long-running coroutine that needs to call a conventional iterator (rather than an asynchronous iterator), I’ve found this converter class to be useful:

import asyncio

class AsyncIteratorExecutor:
    """
    Converts a regular iterator into an asynchronous
    iterator, by executing the iterator in a thread.
    """
    def __init__(self, iterator, loop=None, executor=None):
        self.__iterator = iterator
        self.__loop = loop or asyncio.get_event_loop()
        self.__executor = executor

    def __aiter__(self):
        return self

    async def __anext__(self):
        value = await self.__loop.run_in_executor(
            self.__executor, next, self.__iterator, self)
        if value is self:
            raise StopAsyncIteration
        return value

For example, it can be used to asynchronously read lines of a text file as follows:

async def cat_file_async(filename):
    with open(filename, 'rt') as f:
        async for line in AsyncIteratorExecutor(f):
            print(line.rstrip())

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(
            cat_file_async('/path/of/file.txt'))
    finally:
        loop.close()

September 16, 2016

Description:
Libav is an open source set of tools for audio and video processing.

Fuzzing with an mp3 file as input uncovered a NULL pointer access in put_no_rnd_pixels8_xy2_mmx.

The complete ASan output:

# avconv -i $FILE -f null -
avconv version 11.7, Copyright (c) 2000-2016 the Libav developers
  built on Aug 16 2016 15:34:42 with clang version 3.8.1 (tags/RELEASE_381/final)
[h263 @ 0x61a00001f280] Format detected only with low score of 25, misdetection possible!
[IMGUTILS @ 0x7ff589955420] Picture size 0x0 is invalid
[h263 @ 0x619000000580] header damaged
[h263 @ 0x619000000580] Syntax-based Arithmetic Coding (SAC) not supported
[h263 @ 0x619000000580] Independent Segment Decoding not supported
[h263 @ 0x619000000580] warning: first frame is no keyframe
[h263 @ 0x619000000580] cbpc damaged at 0 0
[h263 @ 0x619000000580] Error at MB: 0
[h263 @ 0x619000000580] concealing 1584 DC, 1584 AC, 1584 MV errors
[h263 @ 0x61a00001f280] Estimating duration from bitrate, this may be inaccurate
Input #0, h263, from '9.crashes':
  Duration: N/A, bitrate: N/A
    Stream #0.0: Video: h263, yuv420p, 704x576 [PAR 12:11 DAR 4:3], 25 fps, 25 tbn, 18.73 tbc
Output #0, null, to 'pipe:':
  Metadata:
    encoder         : Lavf56.1.0
    Stream #0.0: Video: rawvideo, yuv420p, 704x576 [PAR 12:11 DAR 4:3], q=2-31, 200 kb/s, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc56.1.0 rawvideo
Stream mapping:
  Stream #0:0 -> #0:0 (h263 (native) -> rawvideo (native))
Press ctrl-c to stop encoding
[h263 @ 0x61900001ea80] warning: first frame is no keyframe
ASAN:DEADLYSIGNAL
=================================================================
==26790==ERROR: AddressSanitizer: SEGV on unknown address 0x7ff584ddb77f (pc 0x7ff5910cdeee bp 0x7ffdc464d7f0 sp 0x7ffdc464d780 T0)
    #0 0x7ff5910cdeed in put_no_rnd_pixels8_xy2_mmx /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/x86/rnd_template.c:37:5
    #1 0x7ff590209de0 in hpel_motion /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:224:5
    #2 0x7ff590209de0 in apply_8x8 /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:798
    #3 0x7ff590209de0 in mpv_motion_internal /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:877
    #4 0x7ff590209de0 in ff_mpv_motion /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo_motion.c:981
    #5 0x7ff59013659b in mpv_decode_mb_internal /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo.c:2223:21
    #6 0x7ff59013659b in ff_mpv_decode_mb /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/mpegvideo.c:2358
    #7 0x7ff58f048c95 in decode_slice /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/h263dec.c:273:13
    #8 0x7ff58f0442cd in ff_h263_decode_frame /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/h263dec.c:575:11
    #9 0x7ff5909cf906 in avcodec_decode_video2 /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/utils.c:1600:19
    #10 0x5647eb in decode_video /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:1259:11
    #11 0x5647eb in process_input_packet /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:1398
    #12 0x550e63 in process_input /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2440:11
    #13 0x550e63 in transcode /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2488
    #14 0x550e63 in main /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/avconv.c:2647
    #15 0x7ff58cd6461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #16 0x41d098 in _init (/usr/bin/avconv+0x41d098)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-video/libav-11.7/work/libav-11.7/libavcodec/x86/rnd_template.c:37:5 in put_no_rnd_pixels8_xy2_mmx
==26790==ABORTING

Affected version:
11.7

Fixed version:
N/A

Commit fix:
https://git.libav.org/?p=libav.git;a=commit;h=136f55207521f0b03194ef5b55ba70f1635d6aee

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2016-7424

Timeline:
2016-08-15: bug discovered
2016-08-16: bug reported to upstream
2016-09-16: upstream released a patch
2016-09-17: blog post about the issue
2016-09-17: CVE Assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was reported by F4B3CD@STARLAB on 2016-09-12 via libav-security, although it had already been public on the upstream bugtracker since 2016-08-15.

Permalink:

libav: NULL pointer dereference in put_no_rnd_pixels8_xy2_mmx (rnd_template.c)

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

http://www.akhuettel.de/publications/fabryperot.pdf
We're very happy to announce that a few days ago one of our manuscripts, "Secondary electron interference from trigonal warping in clean carbon nanotubes", was accepted for publication in Physical Review Letters.

Imagine a graphene "sheet" of carbon atoms rolled into a tube - and you get a carbon nanotube. Carbon nanotubes come in many variants, which strongly influence their electronic properties. They have different diameter, but also different "chiral angle", describing how the pattern of the carbon atoms twists around the tube axis. In our work, we show how to extract information on the nanotube structure from measurements of its conductance. At low temperature, electrons travel ballistically through a nanotube and are only scattered at its ends. For the quantum-mechanical electron wavefunction, metallic nanotubes then act analogous to an optical Fabry-Perot interferometer, i.e., a cavity with two semitransparent mirrors at either end, where a wave is partially reflected. Interference patterns are obtained by tuning the wavelength of the electrons; the current through the nanotube oscillates as a function of an applied gate voltage. The twisted graphene lattice then causes a distinct slow current modulation, which, as we show, allows a direct estimation of the chiral angle. This is an important step towards solving a highly nontrivial problem, namely identifying the precise molecular structure of a nanotube from electronic measurements alone.

"Secondary electron interference from trigonal warping in clean carbon nanotubes"
A. Dirnaichner, M. del Valle, K. J. G. Götz, F. J. Schupp, N. Paradiso, M. Grifoni, Ch. Strunk, and A. K. Hüttel
accepted for publication in Physical Review Letters; arXiv:1602.03866 (PDF, supplemental information)

September 15, 2016

Description:
Graphicsmagick is an Image Processing System.

After a first round of fuzzing, during which I discovered some slowness issues that made fuzzing difficult, a second round revealed a memory allocation failure.

The complete ASan output:

# gm identify $FILE
==20592==ERROR: AddressSanitizer failed to allocate 0x7fff03000 (34358702080) bytes of LargeMmapAllocator (error code: 12)
==20592==Process memory map follows:
        0x000000400000-0x000000522000   /usr/bin/gm
        0x000000722000-0x000000723000   /usr/bin/gm
        0x000000723000-0x000000726000   /usr/bin/gm
        0x000000726000-0x000001399000
        0x00007fff7000-0x00008fff7000
        0x00008fff7000-0x02008fff7000
        0x02008fff7000-0x10007fff8000
        0x600000000000-0x602000000000
        0x602000000000-0x602000010000
        0x602000010000-0x603000000000
        0x603000000000-0x603000010000
        0x603000010000-0x604000000000
        0x604000000000-0x604000010000
        0x604000010000-0x606000000000
        0x606000000000-0x606000010000
        0x606000010000-0x607000000000
        0x607000000000-0x607000010000
        0x607000010000-0x608000000000
        0x608000000000-0x608000010000
        0x608000010000-0x60a000000000
        0x60a000000000-0x60a000010000
        0x60a000010000-0x60b000000000
        0x60b000000000-0x60b000010000
        0x60b000010000-0x60c000000000
        0x60c000000000-0x60c000010000
        0x60c000010000-0x60d000000000
        0x60d000000000-0x60d000010000
        0x60d000010000-0x60f000000000
        0x60f000000000-0x60f000010000
        0x60f000010000-0x610000000000
        0x610000000000-0x610000010000
        0x610000010000-0x611000000000
        0x611000000000-0x611000010000
        0x611000010000-0x612000000000
        0x612000000000-0x612000010000
        0x612000010000-0x614000000000
        0x614000000000-0x614000020000
        0x614000020000-0x616000000000
        0x616000000000-0x616000020000
        0x616000020000-0x618000000000
        0x618000000000-0x618000020000
        0x618000020000-0x619000000000
        0x619000000000-0x619000020000
        0x619000020000-0x61a000000000
        0x61a000000000-0x61a000020000
        0x61a000020000-0x61b000000000
        0x61b000000000-0x61b000020000
        0x61b000020000-0x61d000000000
        0x61d000000000-0x61d000020000
        0x61d000020000-0x61e000000000
        0x61e000000000-0x61e000020000
        0x61e000020000-0x621000000000
        0x621000000000-0x621000020000
        0x621000020000-0x623000000000
        0x623000000000-0x623000020000
        0x623000020000-0x624000000000
        0x624000000000-0x624000020000
        0x624000020000-0x625000000000
        0x625000000000-0x625000020000
        0x625000020000-0x640000000000
        0x640000000000-0x640000003000
        0x7f889986d000-0x7f889988b000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/sgi.so
        0x7f889988b000-0x7f8899a8a000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/sgi.so
        0x7f8899a8a000-0x7f8899a8b000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/sgi.so
        0x7f8899a8b000-0x7f8899a8c000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/sgi.so
        0x7f8899a8c000-0x7f8899a8e000
        0x7f8899a8e000-0x7f88a0100000   /usr/lib64/locale/locale-archive
        0x7f88a0100000-0x7f88a0200000
        0x7f88a0300000-0x7f88a0400000
        0x7f88a049b000-0x7f88a27ed000
        0x7f88a27ed000-0x7f88a27f6000   /usr/lib64/libltdl.so.7.3.1
        0x7f88a27f6000-0x7f88a29f5000   /usr/lib64/libltdl.so.7.3.1
        0x7f88a29f5000-0x7f88a29f6000   /usr/lib64/libltdl.so.7.3.1
        0x7f88a29f6000-0x7f88a29f7000   /usr/lib64/libltdl.so.7.3.1
        0x7f88a29f7000-0x7f88a2a0c000   /lib64/libz.so.1.2.8
        0x7f88a2a0c000-0x7f88a2c0b000   /lib64/libz.so.1.2.8
        0x7f88a2c0b000-0x7f88a2c0c000   /lib64/libz.so.1.2.8
        0x7f88a2c0c000-0x7f88a2c0d000   /lib64/libz.so.1.2.8
        0x7f88a2c0d000-0x7f88a2c1c000   /lib64/libbz2.so.1.0.6
        0x7f88a2c1c000-0x7f88a2e1b000   /lib64/libbz2.so.1.0.6
        0x7f88a2e1b000-0x7f88a2e1c000   /lib64/libbz2.so.1.0.6
        0x7f88a2e1c000-0x7f88a2e1d000   /lib64/libbz2.so.1.0.6
        0x7f88a2e1d000-0x7f88a2ec4000   /usr/lib64/libfreetype.so.6.12.3
        0x7f88a2ec4000-0x7f88a30c4000   /usr/lib64/libfreetype.so.6.12.3
        0x7f88a30c4000-0x7f88a30ca000   /usr/lib64/libfreetype.so.6.12.3
        0x7f88a30ca000-0x7f88a30cb000   /usr/lib64/libfreetype.so.6.12.3
        0x7f88a30cb000-0x7f88a311f000   /usr/lib64/liblcms2.so.2.0.6
        0x7f88a311f000-0x7f88a331e000   /usr/lib64/liblcms2.so.2.0.6
        0x7f88a331e000-0x7f88a331f000   /usr/lib64/liblcms2.so.2.0.6
        0x7f88a331f000-0x7f88a3324000   /usr/lib64/liblcms2.so.2.0.6
        0x7f88a3324000-0x7f88a34b7000   /lib64/libc-2.22.so
        0x7f88a34b7000-0x7f88a36b7000   /lib64/libc-2.22.so
        0x7f88a36b7000-0x7f88a36bb000   /lib64/libc-2.22.so
        0x7f88a36bb000-0x7f88a36bd000   /lib64/libc-2.22.so
        0x7f88a36bd000-0x7f88a36c1000
        0x7f88a36c1000-0x7f88a36d7000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7f88a36d7000-0x7f88a38d6000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7f88a38d6000-0x7f88a38d7000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7f88a38d7000-0x7f88a38d8000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7f88a38d8000-0x7f88a38de000   /lib64/librt-2.22.so
        0x7f88a38de000-0x7f88a3ade000   /lib64/librt-2.22.so
        0x7f88a3ade000-0x7f88a3adf000   /lib64/librt-2.22.so
        0x7f88a3adf000-0x7f88a3ae0000   /lib64/librt-2.22.so
        0x7f88a3ae0000-0x7f88a3af7000   /lib64/libpthread-2.22.so
        0x7f88a3af7000-0x7f88a3cf6000   /lib64/libpthread-2.22.so
        0x7f88a3cf6000-0x7f88a3cf7000   /lib64/libpthread-2.22.so
        0x7f88a3cf7000-0x7f88a3cf8000   /lib64/libpthread-2.22.so
        0x7f88a3cf8000-0x7f88a3cfc000
        0x7f88a3cfc000-0x7f88a3df9000   /lib64/libm-2.22.so
        0x7f88a3df9000-0x7f88a3ff8000   /lib64/libm-2.22.so
        0x7f88a3ff8000-0x7f88a3ff9000   /lib64/libm-2.22.so
        0x7f88a3ff9000-0x7f88a3ffa000   /lib64/libm-2.22.so
        0x7f88a3ffa000-0x7f88a3ffc000   /lib64/libdl-2.22.so
        0x7f88a3ffc000-0x7f88a41fc000   /lib64/libdl-2.22.so
        0x7f88a41fc000-0x7f88a41fd000   /lib64/libdl-2.22.so
        0x7f88a41fd000-0x7f88a41fe000   /lib64/libdl-2.22.so
        0x7f88a41fe000-0x7f88a4a0d000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7f88a4a0d000-0x7f88a4c0d000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7f88a4c0d000-0x7f88a4c3e000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7f88a4c3e000-0x7f88a4cc4000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7f88a4cc4000-0x7f88a4d3f000
        0x7f88a4d3f000-0x7f88a4d61000   /lib64/ld-2.22.so
        0x7f88a4eab000-0x7f88a4ec0000
        0x7f88a4ec0000-0x7f88a4ec7000   /usr/lib64/gconv/gconv-modules.cache
        0x7f88a4ec7000-0x7f88a4eea000   /usr/share/locale/it/LC_MESSAGES/libc.mo
        0x7f88a4eea000-0x7f88a4f54000
        0x7f88a4f54000-0x7f88a4f60000
        0x7f88a4f60000-0x7f88a4f61000   /lib64/ld-2.22.so
        0x7f88a4f61000-0x7f88a4f62000   /lib64/ld-2.22.so
        0x7f88a4f62000-0x7f88a4f63000
        0x7ffe83ea9000-0x7ffe83eca000   [stack]
        0x7ffe83f49000-0x7ffe83f4b000   [vvar]
        0x7ffe83f4b000-0x7ffe83f4d000   [vdso]
        0xffffffffff600000-0xffffffffff601000   [vsyscall]
==20592==End of process memory map.
==20592==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:183 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
    #0 0x4c9aed in AsanCheckFailed /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_rtl.cc:67
    #1 0x4d0623 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:159
    #2 0x4d0811 in __sanitizer::ReportMmapFailureAndDie(unsigned long, char const*, char const*, int, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:183
    #3 0x4d984a in __sanitizer::MmapOrDie(unsigned long, char const*, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_posix.cc:122
    #4 0x421bdf in __sanitizer::LargeMmapAllocator::Allocate(__sanitizer::AllocatorStats*, unsigned long, unsigned long) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1033
    #5 0x421bdf in __sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback>, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >, __sanitizer::LargeMmapAllocator >::Allocate(__sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >*, unsigned long, unsigned long, bool, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1302
    #6 0x421bdf in __asan::Allocator::Allocate(unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType, bool) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_allocator.cc:368
    #7 0x421bdf in __asan::asan_malloc(unsigned long, __sanitizer::BufferedStackTrace*) /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_allocator.cc:718
    #8 0x4c01b1 in malloc /var/tmp/portage/sys-devel/llvm-3.8.1-r2/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:53
    #9 0x7f88a479e12d in MagickMalloc /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/memory.c:156:10
    #10 0x7f88a479e12d in MagickMallocArray /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/memory.c:347
    #11 0x7f8899872d7a in ReadSGIImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/coders/sgi.c:498:19
    #12 0x7f88a4558b13 in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/constitute.c:1607:13
    #13 0x7f88a4556a94 in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/constitute.c:1370:9
    #14 0x7f88a446bb25 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:8375:17
    #15 0x7f88a447197c in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:8865:17
    #16 0x7f88a44e96fe in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:17379:10
    #17 0x7f88a44e7926 in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:17432:16
    #18 0x7f88a334461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #19 0x418c88 in _init (/usr/bin/gm+0x418c88)
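The trace shows MagickMallocArray asked for ~34 GB on the basis of image dimensions read straight from the crafted SGI header. The usual hardening is an overflow-checked array allocation combined with a resource cap; a Python sketch of that pattern with hypothetical names (the cap value is arbitrary, and this is not GraphicsMagick's actual code):

```python
RESOURCE_LIMIT = 1 << 30  # 1 GiB cap, arbitrary for illustration

def malloc_array(count, size):
    """Sketch: overflow-checked array allocation with a resource cap,
    the pattern behind MagickMallocArray-style helpers."""
    total = count * size
    # Python ints cannot overflow; the C-style multiplication check is
    # shown for completeness of the pattern.
    if count != 0 and total // count != size:
        raise MemoryError("allocation size overflow")
    if total > RESOURCE_LIMIT:
        raise MemoryError("allocation exceeds resource limit")
    return bytearray(total)
```

Capping the request means a header claiming absurd dimensions fails with a clean error before the allocator is ever asked for tens of gigabytes.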

Affected version:
1.3.25

Fixed version:
1.3.26 (not yet released)

Commit fix:
http://hg.code.sf.net/p/graphicsmagick/code/rev/c53725cb5449

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-09-09: bug discovered
2016-09-09: bug reported privately to upstream
2016-09-10: no upstream response
2016-09-15: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

graphicsmagick: memory allocation failure in MagickMalloc (memory.c)

Description:
Graphicsmagick is an Image Processing System.

After a first round of fuzzing, during which I discovered some slowness issues that made fuzzing difficult, a second round revealed a memory allocation failure.

The complete ASan output:

# gm identify $FILE
==10139==ERROR: AddressSanitizer failed to allocate 0x4cd6a6000 (20626169856) bytes of LargeMmapAllocator (error code: 12)
==10139==Process memory map follows:
        0x000000400000-0x00000051f000   /usr/bin/gm
        0x00000071e000-0x00000071f000   /usr/bin/gm
        0x00000071f000-0x000000722000   /usr/bin/gm
        0x000000722000-0x000001394000
        0x00007fff7000-0x00008fff7000
        0x00008fff7000-0x02008fff7000
        0x02008fff7000-0x10007fff8000
        0x600000000000-0x602000000000
        0x602000000000-0x602000010000
        0x602000010000-0x603000000000
        0x603000000000-0x603000010000
        0x603000010000-0x604000000000
        0x604000000000-0x604000010000
        0x604000010000-0x606000000000
        0x606000000000-0x606000010000
        0x606000010000-0x607000000000                                                                                                                                                                                                                                          
        0x607000000000-0x607000010000                                                                                                                                                                                                                                          
        0x607000010000-0x608000000000                                                                                                                                                                                                                                          
        0x608000000000-0x608000010000                                                                                                                                                                                                                                          
        0x608000010000-0x60a000000000                                                                                                                                                                                                                                          
        0x60a000000000-0x60a000010000                                                                                                                                                                                                                                          
        0x60a000010000-0x60b000000000                                                                                                                                                                                                                                          
        0x60b000000000-0x60b000010000                                                                                                                                                                                                                                          
        0x60b000010000-0x60c000000000                                                                                                                                                                                                                                          
        0x60c000000000-0x60c000010000                                                                                                                                                                                                                                          
        0x60c000010000-0x60f000000000                                                                                                                                                                                                                                          
        0x60f000000000-0x60f000010000                                                                                                                                                                                                                                          
        0x60f000010000-0x610000000000                                                                                                                                                                                                                                          
        0x610000000000-0x610000010000                                                                                                                                                                                                                                          
        0x610000010000-0x611000000000                                                                                                                                                                                                                                          
        0x611000000000-0x611000010000                                                                                                                                                                                                                                          
        0x611000010000-0x612000000000                                                                                                                                                                                                                                          
        0x612000000000-0x612000010000                                                                                                                                                                                                                                          
        0x612000010000-0x614000000000                                                                                                                                                                                                                                          
        0x614000000000-0x614000020000                                                                                                                                                                                                                                          
        0x614000020000-0x616000000000                                                                                                                                                                                                                                          
        0x616000000000-0x616000020000                                                                                                                                                                                                                                          
        0x616000020000-0x618000000000                                                                                                                                                                                                                                          
        0x618000000000-0x618000020000                                                                                                                                                                                                                                          
        0x618000020000-0x619000000000                                                                                                                                                                                                                                          
        0x619000000000-0x619000020000                                                                                                                                                                                                                                          
        0x619000020000-0x61a000000000
        0x61a000000000-0x61a000020000
        0x61a000020000-0x61b000000000
        0x61b000000000-0x61b000020000
        0x61b000020000-0x61d000000000
        0x61d000000000-0x61d000020000
        0x61d000020000-0x61e000000000
        0x61e000000000-0x61e000020000
        0x61e000020000-0x621000000000
        0x621000000000-0x621000020000
        0x621000020000-0x623000000000
        0x623000000000-0x623000020000
        0x623000020000-0x624000000000
        0x624000000000-0x624000020000
        0x624000020000-0x625000000000
        0x625000000000-0x625000020000
        0x625000020000-0x640000000000
        0x640000000000-0x640000003000
        0x7ff8e8877000-0x7ff8e888c000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/pcx.so
        0x7ff8e888c000-0x7ff8e8a8c000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/pcx.so
        0x7ff8e8a8c000-0x7ff8e8a8d000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/pcx.so
        0x7ff8e8a8d000-0x7ff8e8a8e000   /usr/lib64/GraphicsMagick-1.3.25/modules-Q32/coders/pcx.so
        0x7ff8e8a8e000-0x7ff8ef100000   /usr/lib64/locale/locale-archive
        0x7ff8ef100000-0x7ff8ef200000
        0x7ff8ef300000-0x7ff8ef400000
        0x7ff8ef4ab000-0x7ff8f17fd000
        0x7ff8f17fd000-0x7ff8f1806000   /usr/lib64/libltdl.so.7.3.1
        0x7ff8f1806000-0x7ff8f1a05000   /usr/lib64/libltdl.so.7.3.1
        0x7ff8f1a05000-0x7ff8f1a06000   /usr/lib64/libltdl.so.7.3.1
        0x7ff8f1a06000-0x7ff8f1a07000   /usr/lib64/libltdl.so.7.3.1
        0x7ff8f1a07000-0x7ff8f1a1c000   /lib64/libz.so.1.2.8
        0x7ff8f1a1c000-0x7ff8f1c1b000   /lib64/libz.so.1.2.8
        0x7ff8f1c1b000-0x7ff8f1c1c000   /lib64/libz.so.1.2.8
        0x7ff8f1c1c000-0x7ff8f1c1d000   /lib64/libz.so.1.2.8
        0x7ff8f1c1d000-0x7ff8f1c2c000   /lib64/libbz2.so.1.0.6
        0x7ff8f1c2c000-0x7ff8f1e2b000   /lib64/libbz2.so.1.0.6
        0x7ff8f1e2b000-0x7ff8f1e2c000   /lib64/libbz2.so.1.0.6
        0x7ff8f1e2c000-0x7ff8f1e2d000   /lib64/libbz2.so.1.0.6
        0x7ff8f1e2d000-0x7ff8f1ed4000   /usr/lib64/libfreetype.so.6.12.3
        0x7ff8f1ed4000-0x7ff8f20d4000   /usr/lib64/libfreetype.so.6.12.3
        0x7ff8f20d4000-0x7ff8f20da000   /usr/lib64/libfreetype.so.6.12.3
        0x7ff8f20da000-0x7ff8f20db000   /usr/lib64/libfreetype.so.6.12.3
        0x7ff8f20db000-0x7ff8f212f000   /usr/lib64/liblcms2.so.2.0.6
        0x7ff8f212f000-0x7ff8f232e000   /usr/lib64/liblcms2.so.2.0.6
        0x7ff8f232e000-0x7ff8f232f000   /usr/lib64/liblcms2.so.2.0.6
        0x7ff8f232f000-0x7ff8f2334000   /usr/lib64/liblcms2.so.2.0.6
        0x7ff8f2334000-0x7ff8f24c7000   /lib64/libc-2.22.so
        0x7ff8f24c7000-0x7ff8f26c7000   /lib64/libc-2.22.so
        0x7ff8f26c7000-0x7ff8f26cb000   /lib64/libc-2.22.so
        0x7ff8f26cb000-0x7ff8f26cd000   /lib64/libc-2.22.so
        0x7ff8f26cd000-0x7ff8f26d1000
        0x7ff8f26d1000-0x7ff8f26e7000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7ff8f26e7000-0x7ff8f28e6000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7ff8f28e6000-0x7ff8f28e7000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7ff8f28e7000-0x7ff8f28e8000   /usr/lib64/gcc/x86_64-pc-linux-gnu/4.9.3/libgcc_s.so.1
        0x7ff8f28e8000-0x7ff8f28ee000   /lib64/librt-2.22.so
        0x7ff8f28ee000-0x7ff8f2aee000   /lib64/librt-2.22.so
        0x7ff8f2aee000-0x7ff8f2aef000   /lib64/librt-2.22.so
        0x7ff8f2aef000-0x7ff8f2af0000   /lib64/librt-2.22.so
        0x7ff8f2af0000-0x7ff8f2b07000   /lib64/libpthread-2.22.so
        0x7ff8f2b07000-0x7ff8f2d06000   /lib64/libpthread-2.22.so
        0x7ff8f2d06000-0x7ff8f2d07000   /lib64/libpthread-2.22.so
        0x7ff8f2d07000-0x7ff8f2d08000   /lib64/libpthread-2.22.so
        0x7ff8f2d08000-0x7ff8f2d0c000
        0x7ff8f2d0c000-0x7ff8f2e09000   /lib64/libm-2.22.so
        0x7ff8f2e09000-0x7ff8f3008000   /lib64/libm-2.22.so
        0x7ff8f3008000-0x7ff8f3009000   /lib64/libm-2.22.so
        0x7ff8f3009000-0x7ff8f300a000   /lib64/libm-2.22.so
        0x7ff8f300a000-0x7ff8f300c000   /lib64/libdl-2.22.so
        0x7ff8f300c000-0x7ff8f320c000   /lib64/libdl-2.22.so
        0x7ff8f320c000-0x7ff8f320d000   /lib64/libdl-2.22.so
        0x7ff8f320d000-0x7ff8f320e000   /lib64/libdl-2.22.so
        0x7ff8f320e000-0x7ff8f387c000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7ff8f387c000-0x7ff8f3a7b000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7ff8f3a7b000-0x7ff8f3aa3000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7ff8f3aa3000-0x7ff8f3afd000   /usr/lib64/libGraphicsMagick.so.3.15.1
        0x7ff8f3afd000-0x7ff8f3b01000
        0x7ff8f3b01000-0x7ff8f3b23000   /lib64/ld-2.22.so
        0x7ff8f3c79000-0x7ff8f3c8e000
        0x7ff8f3c8e000-0x7ff8f3c95000   /usr/lib64/gconv/gconv-modules.cache
        0x7ff8f3c95000-0x7ff8f3cb8000   /usr/share/locale/it/LC_MESSAGES/libc.mo
        0x7ff8f3cb8000-0x7ff8f3d16000
        0x7ff8f3d16000-0x7ff8f3d22000
        0x7ff8f3d22000-0x7ff8f3d23000   /lib64/ld-2.22.so
        0x7ff8f3d23000-0x7ff8f3d24000   /lib64/ld-2.22.so
        0x7ff8f3d24000-0x7ff8f3d25000
        0x7fffd09c8000-0x7fffd09e9000   [stack]
        0x7fffd09f0000-0x7fffd09f2000   [vvar]
        0x7fffd09f2000-0x7fffd09f4000   [vdso]
        0xffffffffff600000-0xffffffffff601000   [vsyscall]
==10139==End of process memory map.
==10139==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:183 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
    #0 0x4c973d in AsanCheckFailed /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_rtl.cc:67
    #1 0x4d0273 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:159
    #2 0x4d0461 in __sanitizer::ReportMmapFailureAndDie(unsigned long, char const*, char const*, int, bool) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:183
    #3 0x4d949a in __sanitizer::MmapOrDie(unsigned long, char const*, bool) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_posix.cc:122
    #4 0x42182f in __sanitizer::LargeMmapAllocator::Allocate(__sanitizer::AllocatorStats*, unsigned long, unsigned long) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1033
    #5 0x42182f in __sanitizer::CombinedAllocator<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback>, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >, __sanitizer::LargeMmapAllocator >::Allocate(__sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator64<105553116266496ul, 4398046511104ul, 0ul, __sanitizer::SizeClassMap, __asan::AsanMapUnmapCallback> >*, unsigned long, unsigned long, bool, bool) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_allocator.h:1302
    #6 0x42182f in __asan::Allocator::Allocate(unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType, bool) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_allocator.cc:368
    #7 0x42182f in __asan::asan_malloc(unsigned long, __sanitizer::BufferedStackTrace*) /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_allocator.cc:718
    #8 0x4bfe01 in malloc /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:53
    #9 0x7ff8e887beba in ReadPCXImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/coders/pcx.c:467:16
    #10 0x7ff8f34a4c4e in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/constitute.c:1607:13
    #11 0x7ff8f34a4294 in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/constitute.c:1370:9
    #12 0x7ff8f33f5836 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:8375:17
    #13 0x7ff8f33f9e23 in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:8865:17
    #14 0x7ff8f344fc3e in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:17379:10
    #15 0x7ff8f344e5d1 in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.25/work/GraphicsMagick-1.3.25/magick/command.c:17432:16
    #16 0x7ff8f235461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #17 0x4188d8 in _init (/usr/bin/gm+0x4188d8)
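
The trace shows the failing allocation at pcx.c:467 being sized directly from header fields of the untrusted input file. The usual fix for this class of bug is to sanity-check those dimensions before allocating; a minimal sketch in Python (the function name and size cap are illustrative, not GraphicsMagick's actual code):

```python
# Illustrative sketch of the fix class: validate untrusted PCX header
# fields before sizing an allocation from them. Names and the limit
# are hypothetical, not GraphicsMagick's actual API.
MAX_PIXELS = 64 * 1024 * 1024  # example sanity cap

def scanline_alloc_size(width, height, bits_per_pixel, planes):
    if min(width, height, bits_per_pixel, planes) <= 0:
        raise ValueError("corrupt PCX header")
    if width * height > MAX_PIXELS:
        raise ValueError("implausible image dimensions")
    # bytes per scanline, rounded up to whole bytes, times colour planes
    return ((width * bits_per_pixel + 7) // 8) * planes
```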

Affected version:
1.3.25

Fixed version:
1.3.26 (not yet released)

Commit fix:
http://hg.code.sf.net/p/graphicsmagick/code/rev/b9edafd479b9

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-09-09: bug discovered
2016-09-09: bug reported privately to upstream
2016-09-10: no upstream response
2016-09-15: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

graphicsmagick: memory allocation failure in ReadPCXImage (pcx.c)

September 11, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)
4 Weeks Left to Gentoo Miniconf (September 11, 2016, 17:50 UTC)

4 weeks are left until LinuxDays and Gentoo Miniconf 2016 in Prague.

Are you excited to see Gentoo in action? Here is something to look forward to:

  • Gentoo amd64-fbsd on an ASRock E350M1
  • Gentoo arm64 on a Raspberry Pi 3 (yes, running a 64 bit kernel and userland) with systemd
  • Gentoo mipsel on a MIPS Creator CI20 with musl libc
  • Gentoo arm on an Orange Pi PC, with Clang as system compiler

[photo of the demo hardware]

September 06, 2016
Greg KH a.k.a. gregkh (homepage, bugs)
4.9 == next LTS kernel (September 06, 2016, 07:59 UTC)

As I briefly mentioned a few weeks ago on my G+ page, the plan is for the 4.9 Linux kernel release to be the next “Long Term Supported” (LTS) kernel.

Last year, at the Linux Kernel Summit, we discussed just how to pick the LTS kernel. Many years ago, we tried to let everyone know ahead of time what the kernel version would be, but that caused a lot of problems as people threw crud in there that really wasn’t ready to be merged, just to make it easier for their “day job”. That was many years ago, and people insist they aren’t going to do this again, so let’s see what happens.

I reserve the right to not pick 4.9 and support it for two years, if it’s a major pain because people abused this notice. If so, I’ll possibly drop back to 4.8, or just wait for 4.10 to be released. I’ll let everyone know by updating the kernel.org releases page when it’s time (many months from now.)

If people have questions about this, email me and I will be glad to discuss it.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
GStreamer on Android and universal builds (September 06, 2016, 03:34 UTC)

This is a quick PSA for those of you using the GStreamer binary builds for Android.

With the Android NDK r12, the default behaviour while building native code changed from building for armeabi to building for all ABIs. So if your app doesn’t specify APP_ABI in its Application.mk, you will now get an error about unsupported architectures. This was tracked as bug 770631.
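
If you do want to pin the ABIs explicitly, the fix is a one-line addition to Application.mk (the ABI names below are the standard NDK ones; trim the list to what you actually ship):

```makefile
# Application.mk: declare the ABIs you build for;
# "APP_ABI := all" is the shorthand for every ABI the NDK supports.
APP_ABI := armeabi armeabi-v7a arm64-v8a x86 x86_64
```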

The idea behind this change is that your Android app should ship versions of your native code for all supported architectures as a “universal” build, so it is accessible to as many devices as possible.

To deal with this, we now provide a universal tarball which contains binaries for all architectures that we support. This is currently ARM, ARMv7-A, ARMv8-A (64-bit), x86, and x86-64. That leaves MIPS and MIPS64, which are not currently supported.

If you’ve been using the GStreamer Android binaries before GStreamer 1.9.2, then you should start using the universal tarball rather than the architecture-specific tarball. You will need minor updates to your native build, like we made to the player example. You probably want to put the gstAndroidRoot variable in ~/.gradle/gradle.properties instead, though.
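
For reference, the gstAndroidRoot override is just a one-line property (the path below is an example; point it at wherever you unpacked the universal tarball):

```properties
# ~/.gradle/gradle.properties: example path, adjust to your setup
gstAndroidRoot=/home/user/gstreamer-1.0-android-universal-1.9.2
```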

As Sebastian announced, assuming all goes well with the universal tarballs, we will stop shipping the per-arch tarballs — they are redundant, and just take up CI and disk resources.

There are some things that I’d like for us to be able to do better. The first is that Android Studio doesn’t pick up native code with our current build approach. This is a limitation of the Android Gradle NDK plugin, which doesn’t support a custom build. This should change with Android Studio 2.2.

I would also like to integrate better with Android Studio — either be able to specify the GStreamer Android binary path in the UI (like you do for the SDK/NDK), or better yet, have it be possible to specify the dependency in Gradle, and have it be automatically pulled from the Internet. If any of you are familiar with how we can do this, please shout out!

September 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
RethinkDB on Gentoo Linux (September 02, 2016, 06:56 UTC)


It was about time I added a new package to portage, and I'm very glad it's RethinkDB and its python driver!

  • dev-db/rethinkdb
  • dev-python/python-rethinkdb

For those of you who have never heard about this database, I urge you to head over to their excellent website and have a good read.

Packaging RethinkDB

RethinkDB has been under my radar for quite a long time now and when they finally got serious enough about high availability I also got serious about using it at work… and obviously “getting serious” + “work” means packaging it for Gentoo Linux :)

Quick notes on packaging for Gentoo Linux:

  • This is a C++ project so it feels natural and easy to grasp
  • The configure script already offers a way of using system libraries instead of the bundled ones which is in line with Gentoo’s QA policy
  • The only grey zone in the above statement is the web UI, which is shipped precompiled

RethinkDB has a few QA violations which the ebuild is addressing by modifying the sources:

  • There is a configure.default which tries to force some configure options
  • The configure is missing some options to avoid auto installing some docs and init scripts
  • The build system does its best to guess the CXX compiler but it should offer an option to set it directly
  • The build system does not respect the user's CXXFLAGS and tries to force the usage of -O3

Getting our hands into RethinkDB

At work, we finally found the excuse to get our hands into RethinkDB when we challenged ourselves with developing a quiz game for our booth as a sponsor of EuroPython 2016.

It was a simple game where you were presented a question and four possible answers, and you had 60 seconds to answer as many of them as you could. The trick is that we wanted an interactive game: the participant played on a tablet, while the rest of the audience could see who was currently playing and follow their score progression, plus their ranking for the day and the week, in real time on another screen!

Another challenge in creating this game was that we only used technologies that were new to us, and we even swapped roles so the backend python guys would be doing the frontend javascript and vice versa. The stack ended up like this:

  • Game quiz frontend : Angular2 (TypeScript)
  • Game questions API : Go
  • Real time scores frontend : Angular2 + autobahn
  • Real time scores API : python 3.5 asyncio + autobahn
  • Database : RethinkDB

As you can see on the stack we chose RethinkDB for its main strength : real time updates pushed to the connected clients. The real time scores frontend and API were bonded together using autobahn while the API was using the changefeeds (realtime updates coming from the database) and broadcasting them to the frontend.
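
The fan-out the API did can be sketched with plain asyncio, independent of RethinkDB and autobahn (the queue-based feed below is a stand-in for a real changefeed cursor, and it uses current asyncio idioms rather than the 3.5-era syntax from that time):

```python
import asyncio

async def broadcast(feed, clients):
    # Consume updates from the (stand-in) changefeed and fan each one
    # out to every connected client, as the scores API did over autobahn.
    while True:
        change = await feed.get()
        if change is None:  # sentinel: feed closed
            break
        for client in clients:
            await client.put(change)

async def demo():
    feed = asyncio.Queue()
    clients = [asyncio.Queue(), asyncio.Queue()]
    task = asyncio.create_task(broadcast(feed, clients))
    await feed.put({"player": "alice", "score": 42})
    await feed.put(None)
    await task
    # every client received its own copy of the update
    return [c.get_nowait() for c in clients]
```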

What we learnt about RethinkDB

  • We're sure that we want to use it in production!
  • The ReQL query language is a pipeline, so its syntax is quite tricky to get familiar with (even more so when coming from mongoDB like us); it is as powerful as it can be disconcerting
  • Realtime changefeeds have limitations which are sometimes not easy to understand or discover (especially the order_by / secondary index part)
  • Changefeed limitations are a constraint you have to take into account in your data modeling!
  • Changefeeds + order_by can do the ordering for you when using the include_offsets option, which is amazing
  • The administration web UI is awesome
  • Proper python 3.5 asyncio support is still not merged, which is a pain!

Try it out

Now that you can emerge rethinkdb I encourage you to try this awesome database.

Be advised that the ebuild also provides a way of configuring your rethinkdb instance by running emerge --config dev-db/rethinkdb!

I’ll now try to get in touch with upstream to get Gentoo listed on their website.

August 23, 2016
In Memory of Jonathan “avenj” Portnoy (August 23, 2016, 00:00 UTC)

The Gentoo project mourns the loss of Jonathan Portnoy, better known amongst us as Jon, or avenj.

Jon was an active member of the International Gentoo community, almost since its founding in 1999. He was still active until his last day.

His passing has struck us deeply and with disbelief. We all remember him as a vivid and enjoyable person, easy to reach out to and energetic in all his endeavors.

On behalf of the entire Gentoo Community, all over the world, we would like to convey our deepest sympathy for his family and friends. As per his wishes, the Gentoo Foundation has made a donation in his memory to the Perl Foundation.

Please join the community in remembering Jon on our forums.

August 18, 2016
Events: FrOSCon 11 (August 18, 2016, 00:00 UTC)

This weekend, the University of Applied Sciences Bonn-Rhein-Sieg will host the Free and Open Source Software Conference, better known as FrOSCon. Gentoo will be present there on 20 and 21 August with a chance for you to meet devs and other users, grab merchandise, and compile your own Gentoo button badges.

See you there!

August 17, 2016
OpenPGP: Duplicate keyids - short vs long (August 17, 2016, 16:40 UTC)

Lately there seems to be a lot of discussion regarding the use of short keyids, as a large number of duplicates/collisions were uploaded to the keyserver network, as seen in the chart below: The problem with most of these posts is that they are plain wrong. But let's look at it from a few different … Continue reading "OpenPGP: Duplicate keyids - short vs long"
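
Part of the confusion is that neither keyid form is independent data: both are just suffixes of the key's full fingerprint, which is why a 32-bit short id is trivially collidable. A quick illustration in Python (the fingerprint is made up):

```python
def keyids(fingerprint):
    # For an OpenPGP v4 key, the long (64-bit) keyid is the last 16 hex
    # digits of the 160-bit fingerprint, and the short (32-bit) keyid is
    # the last 8, small enough to brute-force a colliding key.
    fpr = fingerprint.replace(" ", "").upper()
    assert len(fpr) == 40, "v4 fingerprints are 160 bits"
    return fpr[-16:], fpr[-8:]

long_id, short_id = keyids("0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567")
```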

August 16, 2016
Robin Johnson a.k.a. robbat2 (homepage, bugs)

Some quick notes on upgrading a Hammer-era Ceph RGW setup to Jewel, because the upstream notes don't cover it well. The multisite docs are the closest there is, but here's what I put together instead.

  • The Zone concept has remained the same.
  • A Region is now a Zonegroup.
  • The top-level RegionMap is moved inside the content of a Period
  • Only one Period can be live at a time, and changes are made to a non-live Period
  • The Realm describes which Period is live.
  • Additionally, there can be a default Zonegroup and Zone inside the period, as well as a default Zone inside a Zonegroup.

Initial state, if you were to look on Hammer:
# radosgw-admin region list
{
  "default_info": {
    "default_region": "default"
  },
  "regions": [
    "default"
  ]
}
# radosgw-admin region-map get
{
  "master_region": "default",
  "bucket_quota": {
    "max_objects": -1,
    "enabled": false,
    "max_size_kb": -1
  },
  "user_quota": {
    "max_objects": -1,
    "enabled": false,
    "max_size_kb": -1
  },
  "regions": [
    {
      "val": {
        "zones": [
          {
            "name": "default",
            "log_meta": "false",
            "endpoints": [

            ],
            "bucket_index_max_shards": 31,
            "log_data": "false"
          }
        ],
        "name": "default",
        "endpoints": [
	      "https://CENSORED-1.EXAMPLE.COM",
	      "https://CENSORED-2.EXAMPLE.COM"
        ],
        "api_name": "CENSORED",
        "default_placement": "default-placement",
        "is_master": "true",
        "hostnames": [
	      "CENSORED-1.EXAMPLE.COM",
	      "CENSORED-2.EXAMPLE.COM"
        ],
        "placement_targets": [
          {
            "name": "default-placement",
            "tags": [

            ]
          }
        ],
        "master_zone": ""
      },
      "key": "default"
    }
  ]
}
# radosgw-admin region get --rgw-region=default
{
  "zones": [
    {
      "log_meta": "false",
      "name": "default",
      "bucket_index_max_shards": 31,
      "endpoints": [

      ],
      "log_data": "false"
    }
  ],
  "master_zone": "",
  "is_master": "true",
  "placement_targets": [
    {
      "name": "default-placement",
      "tags": [

      ]
    }
  ],
  "default_placement": "default-placement",
  "name": "default",
  "hostnames": [
	"CENSORED-1.EXAMPLE.COM",
	"CENSORED-2.EXAMPLE.COM"
  ],
  "endpoints": [
    "https://CENSORED-1.EXAMPLE.COM",
    "https://CENSORED-2.EXAMPLE.COM"
  ],
  "api_name": "CENSORED"
}
# radosgw-admin zone get --rgw-region=default --rgw-zone=default
{
  "log_pool": ".log",
  "user_swift_pool": ".users.swift",
  "placement_pools": [
    {
      "val": {
        "data_pool": ".rgw.buckets",
        "data_extra_pool": ".rgw.buckets.extra",
        "index_pool": ".rgw.buckets.index"
      },
      "key": "default-placement"
    }
  ],
  "user_keys_pool": ".users",
  "control_pool": ".rgw.control",
  "domain_root": ".rgw",
  "usage_log_pool": ".usage",
  "gc_pool": ".rgw.gc",
  "system_key": {
    "access_key": "",
    "secret_key": ""
  },
  "intent_log_pool": ".intent-log",
  "user_uid_pool": ".users.uid",
  "user_email_pool": ".users.email"
}


Initial state, if you were to look on Jewel:
# radosgw-admin zone list
{
    "default_info": "",
    "zones": [
        "default"
    ]
}
# radosgw-admin zonegroup list
{
    "default_info": "",
    "zonegroups": [
        "default"
    ]
}
# TODO: fill the rest of this up.

# Now changing stuff up:
# export SYSTEM_ACCESS_KEY=... SYSTEM_SECRET_KEY=...
# radosgw-admin user create \
  --system \
  --uid=zone.user \
  --display-name="Zone User" \
  --access-key=$SYSTEM_ACCESS_KEY \
  --secret=$SYSTEM_SECRET_KEY
{
  "user_id": "zone.user",
  "display_name": "Zone User",
  "email": "",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
      {
          "user": "zone.user",
          "access_key": "...",
          "secret_key": "..."
      }
  ],
  "swift_keys": [],
  "caps": [],
  "op_mask": "read, write, delete",
  "system": "true",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": {
      "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1
  },
  "user_quota": {
      "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1
  },
  "temp_url_keys": []
}


# radosgw-admin realm create --rgw-realm gold
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "name": "gold",
    "current_period": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "epoch": 1
}


# radosgw-admin realm list
{
    "default_info": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realms": [
        "gold"
    ]
}


# radosgw-admin realm get
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "name": "gold",
    "current_period": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "epoch": 1
}


# radosgw-admin period list
{
    "periods": [
        "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb"
    ]
}


# radosgw-admin period get
{
    "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "epoch": 1,
    "predecessor_uuid": "",
    "sync_status": [],
    "period_map": {
        "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
        "zonegroups": [],
        "short_zone_ids": []
    },
    "master_zonegroup": "",
    "master_zone": "",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 1
}


# radosgw-admin period update --master-zone=default --master-zonegroup=default
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103:staging",
    "epoch": 1,
    "predecessor_uuid": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "sync_status": [],
    "period_map": {
        "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
        "zonegroups": [],
        "short_zone_ids": []
    },
    "master_zonegroup": "",
    "master_zone": "",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 2
}


# radosgw-admin period prepare
{
    "id": "8fb1cfbc-ad63-4d92-886a-d939cc52862b",
    "epoch": 1,
    "predecessor_uuid": "",
    "sync_status": [],
    "period_map": {
        "id": "8fb1cfbc-ad63-4d92-886a-d939cc52862b",
        "zonegroups": [],
        "short_zone_ids": []
    },
    "master_zonegroup": "",
    "master_zone": "",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 1
}

# radosgw-admin zone get --rgw-zonegroup=default --rgw-zone=default >zone.json
# radosgw-admin zonegroup get --rgw-zonegroup=default --rgw-zone=default >zonegroup.json
# $EDITOR zonegroup.json zone.json
## Add the following data:
## both files: Set realm_id
## zone.json: Set system_user.access_key, Set system_user.secret_key
## zonegroup.json: Set master_zone to "default", Set is_master to "true".
# radosgw-admin zone set --rgw-zone=default --rgw-zonegroup=default \
  --realm-id=1ac4fd8d-9e77-4fd2-ad54-b591f1734103 \
  --infile zone.json \
  --master --default
# radosgw-admin zonegroup set --rgw-zonegroup=default \
  --realm-id=1ac4fd8d-9e77-4fd2-ad54-b591f1734103 \
  --infile zonegroup.json \
  --master --default


# radosgw-admin period update
{
    "id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103:staging",
    "epoch": 1,
    "predecessor_uuid": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "sync_status": [],
    "period_map": {
        "id": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
        "zonegroups": [
            {
                "id": "default",
                "name": "default",
                "api_name": "CENSORED",
                "is_master": "true",
                "endpoints": [
                    "https:\/\/CENSORED-1.EXAMPLE.COM",
                    "https:\/\/CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames": [
                    "CENSORED-1.EXAMPLE.COM",
                    "CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames_s3website": [],
                "master_zone": "default",
                "zones": [
                    {
                        "id": "default",
                        "name": "default",
                        "endpoints": [],
                        "log_meta": "true",
                        "log_data": "false",
                        "bucket_index_max_shards": 31,
                        "read_only": "false"
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": []
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103"
            }
        ],
        "short_zone_ids": [
            {
                "key": "default",
                "val": 2610307010
            }
        ]
    },
    "master_zonegroup": "default",
    "master_zone": "default",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 2
}


# radosgw-admin period commit
2016-08-16 17:51:22.324368 7f8562da6900  0 error read_lastest_epoch .rgw.root:periods.8d0d4955-592c-48b5-93d1-3fa1cec17579.latest_epoch
2016-08-16 17:51:22.347375 7f8562da6900  1 Set the period's master zonegroup default as the default
{
    "id": "8d0d4955-592c-48b5-93d1-3fa1cec17579",
    "epoch": 1,
    "predecessor_uuid": "f8fafae9-b6d2-41f6-b7aa-7b03fea57bfb",
    "sync_status": [
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        ""
    ],
    "period_map": {
        "id": "8d0d4955-592c-48b5-93d1-3fa1cec17579",
        "zonegroups": [
            {
                "id": "default",
                "name": "default",
                "api_name": "CENSORED",
                "is_master": "true",
                "endpoints": [
                    "https:\/\/CENSORED-1.EXAMPLE.COM",
                    "https:\/\/CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames": [
                    "CENSORED-1.EXAMPLE.COM",
                    "CENSORED-2.EXAMPLE.COM"
                ],
                "hostnames_s3website": [],
                "master_zone": "default",
                "zones": [
                    {
                        "id": "default",
                        "name": "default",
                        "endpoints": [],
                        "log_meta": "true",
                        "log_data": "false",
                        "bucket_index_max_shards": 31,
                        "read_only": "false"
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": []
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103"
            }
        ],
        "short_zone_ids": [
            {
                "key": "default",
                "val": 2610307010
            }
        ]
    },
    "master_zonegroup": "default",
    "master_zone": "default",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }
    },
    "realm_id": "1ac4fd8d-9e77-4fd2-ad54-b591f1734103",
    "realm_name": "gold",
    "realm_epoch": 2
}


August 14, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)
Gentoo Miniconf 2016 Call for Papers closed (August 14, 2016, 21:08 UTC)

The Call for Papers for the Gentoo Miniconf is now closed and the acceptance notices have been sent out.
Missed the deadline? Don’t despair, the LinuxDays CfP is still open and you can still submit talk proposals there until the end of August.

July 31, 2016
Zack Medico a.k.a. zmedico (homepage, bugs)

Suppose that you host a Gentoo rsync mirror on your company intranet, and you want it to gracefully handle bursts of many connections from clients, queuing connections as long as necessary for all of the clients to be served (if they don’t time out first). However, you don’t want to allow unlimited rsync processes, since that would risk overloading your server. In order to solve this problem, I’ve created socket-burst-dampener, an inetd-like daemon for handling bursts of connections.

It’s a very simple program, which only takes command-line arguments (no configuration file). For example:

socket-burst-dampener 873 \
--backlog 8192 --processes 128 --load-average 8 \
-- rsync --daemon

This will allow up to 128 concurrent rsync processes, while automatically backing off on processes if the load average exceeds 8. Meanwhile, the --backlog 8192 setting means that the kernel will queue up to 8192 connections (until they are served or they time out). You need to adjust the net.core.somaxconn sysctl in order for the kernel to queue that many connections, since net.core.somaxconn defaults to 128 connections (cat /proc/sys/net/core/somaxconn).
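One detail worth illustrating (Linux semantics; the helper below is a made-up sketch, not part of socket-burst-dampener): listen() silently truncates the requested backlog to net.core.somaxconn, so the effective queue depth is the minimum of the two values.

```python
def effective_backlog(requested, somaxconn=128):
    """Effective listen() queue depth: Linux silently truncates the
    requested backlog to net.core.somaxconn (128 by default)."""
    return min(requested, somaxconn)

# With the default sysctl, asking for 8192 only gets you 128:
print(effective_backlog(8192))                  # -> 128
# After raising the sysctl (sysctl -w net.core.somaxconn=8192):
print(effective_backlog(8192, somaxconn=8192))  # -> 8192
```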

July 18, 2016
Sebastian Pipping a.k.a. sping (homepage, bugs)
Gimp 2.9.4 now in Gentoo (July 18, 2016, 16:24 UTC)

Hi there!

Just a quick heads up that Gimp 2.9.4 is now available in Gentoo.

Upstream has an article on what’s new with Gimp 2.9.4: GIMP 2.9.4 Released

Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)

The call for papers for the Gentoo Miniconf 2016 (8th+9th October 2016 in Prague) will close in two weeks, on 1 August 2016. Time to get your submission ready!

July 15, 2016
Hanno Böck a.k.a. hanno (homepage, bugs)
Insecure updates in Joomla before 3.6 (July 15, 2016, 17:35 UTC)

In early April I reported security problems with the update process to the security contact of Joomla. While the issue has been fixed in Joomla 3.6, the communication process was far from ideal.

The issue itself is pretty simple: Up until recently Joomla fetched information about its updates over unencrypted and unauthenticated HTTP without any security measures.

The update process works in three steps. First, the Joomla backend fetches a file list.xml from update.joomla.org that contains information about current versions. If a newer version than the one installed is found, the user gets a button that allows him to update Joomla. The file list.xml references a URL for each version with further information about the update, called extension_sts.xml. Interestingly, this file is fetched over HTTPS, while - in version 3.5 - the file list.xml is not. However this does not help, as the attacker can already intervene at the first step and serve a malicious list.xml that references whatever he wants. In extension_sts.xml there is a download URL for a zip file that contains the update.

Exploiting this as a man-in-the-middle attacker is trivial: requests to update.joomla.org need to be redirected to an attacker-controlled host. Then the attacker can place his own list.xml, which will reference his own extension_sts.xml, which will contain a link to a backdoored update. I have created a trivial proof of concept for this (just place that on the HTTP host that update.joomla.org gets redirected to).

I think it should be obvious that software updates are a security sensitive area and need to be protected. Using HTTPS is one way of doing that. Using any kind of cryptographic signature system is another way. Unfortunately it seems common web applications are only slowly learning that. Drupal only switched to HTTPS updates earlier this year. It's probably worth checking other web applications that have integrated update processes if they are secure (Wordpress is secure fwiw).

Now here's how the Joomla developers handled this issue: I contacted Joomla via their webpage on April 6th. Their webpage form didn't have a way to attach files, so I offered them to contact me via email so I could send them the proof of concept. I got a reply to that shortly after asking for it. This was the only communication from their side. Around two months later, on June 14th, I asked about the status of this issue and warned that I would soon publish it if I don't get a reaction. I never got any reply.

In the meantime Joomla had published beta versions of the then-upcoming version 3.6. I checked that and noted that they had changed the update URL from http://update.joomla.org/ to https://update.joomla.org/. So while they weren't communicating with me, it seemed a fix was on its way. I then found that there was a pull request and a GitHub discussion that started even before I first contacted them. Joomla 3.6 was released recently, therefore the issue is fixed. However, the release announcement doesn't mention it.

So all in all I contacted them about a security issue they were already in the process of fixing. The problem itself is therefore solved. But the lack of communication about the issue certainly doesn't cast a good light on Joomla's security process.

July 11, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
Common filesystem I/O pitfalls (July 11, 2016, 12:41 UTC)

Filesystem I/O is one of the key elements of the standard library in many programming languages. Most of them derive it from the interfaces provided by the standard C library, potentially wrapped in some portability and/or OO sugar. Most of them share an impressive set of pitfalls for careless programmers.

In this article, I would like to briefly go over a few more or less common pitfalls that come to mind.

Overwriting the file in-place

This one will be remembered by me as the ‘setuptools screwup’ for quite some time. Consider the following snippet:

if not self.dry_run:
    ensure_directory(target)
    f = open(target,"w"+mode)
    f.write(contents)
    f.close()

This is the code that setuptools used to install scripts. At a first glance, it looks good — and seems to work well, too. However, think of what happens if the file at target exists already.

The obvious answer would be: it is overwritten. The more commonly noticed pitfall here is that the old contents are discarded before the new ones are written. If the user happens to run the script before it is completely written, he’ll get unexpected results. If the writes fail for some reason, the user will be left with a partially written new script.

While in the case of installations this is not very significant (after all, a failure in the middle of installation is never a good thing, mid-file or not), this becomes very important when dealing with data. Imagine that a program would update your data this way — a failure to add new data (as well as unexpected program termination, power loss…) would instantly cause all previous data to be erased.

However, there is another problem with this concept. In fact, it does not strictly overwrite the file — it opens it in-place and implicitly truncates it. This causes more important issues in a few cases:

  • if the file is hardlinked to another file(s) or is a symbolic link, then the contents of all the linked files are overwritten,
  • if the file is a named pipe, the program will hang waiting for the other end of the pipe to be open for reading,
  • other special files may cause other unexpected behavior.

This is exactly what happened in Gentoo. Package-installed script wrappers were symlinked to python-exec, and setuptools used by pip attempted to install new scripts on top of those wrappers. But instead of overwriting the wrappers, it overwrote python-exec and broke everything relying on it.

The lesson is simple: don’t overwrite files like this. The easy way around it is to unlink the file first — ensuring that any links are broken, and special files are removed. The more correct way is to use a temporary file (created safely), and use the atomic rename() call to replace the target with it (no unlinking needed then). However, it should be noted that the rename can fail and a fallback code with unlink and explicit copy is necessary.
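As a sketch of the temporary-file-plus-rename approach described above (the helper name and this Python implementation are mine, not setuptools'; POSIX rename semantics assumed):

```python
import os
import tempfile

def replace_atomically(target, contents, mode=0o755):
    """Hypothetical helper: write contents to a temp file created
    safely in the target's directory, then rename() it over the
    target.  Readers see either the old file or the new one, never
    a half-written mix, and a symlink or hardlink at target is
    replaced rather than followed."""
    dirname = os.path.dirname(target) or "."
    # mkstemp() creates the file with O_EXCL under the hood.
    fd, tmppath = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(contents)
        os.chmod(tmppath, mode)
        # Atomic as long as tmppath and target are on one filesystem.
        os.rename(tmppath, target)
    except BaseException:
        os.unlink(tmppath)
        raise
```

As the paragraph notes, rename() can still fail (for example across filesystems), so robust code also needs a fallback that unlinks and copies explicitly.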

Path canonicalization

For some reason, many programmers have taken a fancy to canonicalize paths. While canonicalization itself is not that bad, it’s easy to do it wrongly, and that can cause a major headache. Let’s take a look at the following path:

//foo/../bar/example.txt

You could say it’s ugly. It has a double slash and a parent directory reference. It almost itches to canonicalize it into something prettier:

/bar/example.txt

However, this path is not necessarily the same as the original.

For a start, let’s imagine that foo is actually a symbolic link to baz/ooka. In this case, its parent directory referenced by .. is actually /baz, not /, and the obvious canonicalization fails.

Furthermore, double slashes can be meaningful. For example, on Windows double slash (yes, yes, backslashes are used normally) would mean a network resource. In this case, stripping the adjacent slash would change the path to a local one.

So, if you are really into canonicalization, first make sure to understand all the rules governing your filesystem. On POSIX systems, you really need to take symbolic links into consideration — usually you start with the left-most path component and expand all symlinks recursively (you need to take into consideration that link target path may carry more symlinks). Once all symbolic links are expanded, you can safely start interpreting the .. components.

However, if you are going to do that, think of another path:

/usr/lib/foo

If you expand it on common Gentoo old-style multilib system, you’ll get:

/usr/lib64/foo

However, now imagine that the /usr/lib symlink is replaced with a directory, and the appropriate files are moved to it. At this point, the path recorded by your program is no longer correct, since it relies on a canonicalization done using a different directory structure.

To summarize: think twice before canonicalizing. While it may seem beneficial to have pretty paths or use real filesystem paths, you may end up discarding the user’s preferences (if I set a symlink somewhere, I don’t want the program automagically switching to another path). If you really insist on it, consider all the consequences and make sure you do it correctly.
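A small demonstration of the symlink pitfall above (the directory layout mirrors the foo → baz/ooka example; POSIX symlinks assumed): lexical normalization and symlink-aware resolution give different answers for the same path.

```python
import os
import tempfile

# Recreate the example: foo is a symlink to baz/ooka, so in
# foo/../bar the '..' really refers to baz, not to the root.
base = os.path.realpath(tempfile.mkdtemp())
os.makedirs(os.path.join(base, "baz", "ooka"))
os.makedirs(os.path.join(base, "bar"))
os.symlink(os.path.join("baz", "ooka"), os.path.join(base, "foo"))

path = os.path.join(base, "foo", "..", "bar")

lexical = os.path.normpath(path)    # collapses '..' purely textually
resolved = os.path.realpath(path)   # expands the symlink first
print(lexical)    # ends in .../bar
print(resolved)   # ends in .../baz/bar
assert lexical != resolved
```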

Relying on xattr as an implementation for ACL/caps

Since common C libraries do not provide proper file copying functions, many people attempted to implement their own with better or worse results. While copying the data is a minor problem, preserving the metadata requires a more complex solution.

The simpler programs focused on copying the properties retrieved via stat() — modes, ownership and times. The more correct ones added also support for copying extended attributes (xattrs).

Now, it is a known fact that Linux filesystems implement many metadata extensions using extended attributes — ACLs, capabilities, security contexts. Sadly, this causes many people to assume that copying extended attributes is guaranteed to copy all of that extended metadata as well. This is a bad assumption to make, even though it is correct on Linux. It will cause your program to work fine on Linux but silently fail to copy ACLs on other systems.

Therefore: always use explicit APIs, and never rely on implementation details. If you want to work on ACLs, use the ACL API (provided by libacl on Linux). If you want to use capabilities, use the capability API (libcap or libcap-ng).

Using incompatible APIs interchangeably

Now for something less common. There are at least three different file locking mechanisms on Linux — the somewhat portable, non-standardized flock() function, the POSIX lockf() function, and the (also POSIX) fcntl() locking commands. The Linux manpage notes that these interfaces are commonly implemented on top of fcntl(). However, this is not guaranteed, and mixing the mechanisms can result in unpredictable behavior on different systems.

Dealing with the two standard file APIs is even more curious. On one hand, we have the high-level stdio interfaces, including FILE* and DIR*. On the other, we have all the fd-oriented interfaces from unistd. Now, POSIX officially supports converting between the two — using fileno(), dirfd(), fdopen() and fdopendir().

However, it should be noted that the result of such a conversion reuses the same underlying file descriptor (rather than duplicating it). Two major points, however:

  1. There is no well-defined way to destroy a FILE* or DIR* without closing the descriptor, nor any guarantee that fclose() or closedir() will work correctly on a closed descriptor. Therefore, you should not create more than one FILE* (or DIR*) for a fd, and if you have one, always close it rather than the fd itself.
  2. The stdio streams are explicitly stateful, buffered and have some extra magic on top (like ungetc()). Once you start using stdio I/O operations on a file, you should not try to use low-level I/O (e.g. read()) or the other way around since the results are pretty much undefined. Supposedly fflush() + rewind() could help but no guarantees.

So, if you want to do I/O, decide whether you want stdio or fd-based I/O. Convert between the two types only when you need to use additional routines not available for the other one; but if those routines involve some kind of content-related operations, avoid using the other type for I/O. If you need to do separate I/O, use dup() to get a clone of the file descriptor.

To summarize: avoid combining different APIs. If you really insist on doing that, check if it is supported and what are the requirements for doing so. You have to be especially careful not to run into undefined results. And as usual — remember that different systems may implement things differently.
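A small Python illustration of the dup() advice (my example, not from the article): the duplicated descriptor can be wrapped and closed independently of the original, although note that dup() shares the underlying file offset.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")

# Wrap a *duplicate* in a buffered file object.  dup() clones the
# descriptor (but the two still share the file offset).
f = os.fdopen(os.dup(fd), "rb")
f.seek(0)
data = f.read()   # buffered, stdio-style I/O happens on the clone
f.close()         # closes only the duplicate...

os.write(fd, b"more\n")  # ...so the original fd is still usable
os.close(fd)
print(data)
```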

Atomicity of operations

For the end, something commonly known, and even more commonly repeated — race conditions due to non-atomic operations. Long story short, all the unexpected results resulting from the assumption that nothing can happen to the file between successive calls to functions.

I think the most common mistake is the ‘does the file exist?’ problem. It is awfully common for programs to use some wrappers over stat() (like os.path.exists() in Python) to check if a file exists, and then immediately proceed with opening or creating it. For example:

def do_foo(path):
    if not os.path.exists(path):
        return False

    f = open(path, 'r')

Usually, this will work. However, if the file gets removed between the precondition check and the open(), the program will raise an exception instead of returning False. For example, this can practically happen if the file is part of a large directory tree being removed via rm -r.

The bug could easily be fixed by introducing explicit error handling, which also renders the precondition check unnecessary:

import errno

def do_foo(path):
    try:
        f = open(path, 'r')
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise

The new snippet ensures that the file will be open if it exists at the point of open(). If it does not, errno will indicate an appropriate error. For other errors, we are re-raising the exception. If the file is removed post open(), the fd will still be valid.

We could extend this to a few generic rules:

  1. Always check for errors, even if you asserted that they should not happen. Proper error checks make many (unsafe) precondition checks unnecessary.
  2. Open file descriptors will remain valid even when the underlying files are removed; paths can become invalid (i.e. reference non-existing files or directories) or start pointing to another file (created using the same path). So, prefer opening the file as soon as necessary, and prefer fstat(), fchown(), futimes()… over stat(), chown(), utimes().
  3. Open directory descriptors will continue to reference the same directory even when the underlying path is removed or replaced; paths may start referencing another directory. When performing operations on multiple files in a directory, prefer opening the directory and using openat(), unlinkat()… However, note that the directory can still be removed and therefore further calls may return ENOENT.
  4. If you need to atomically overwrite a file with another one, use rename(). To atomically create a new file, use open() with O_EXCL. Usually, you will want to use the latter to create a temporary file, then the former to replace the actual file with it.
  5. If you need to use temporary files, use mkstemp() or mkdtemp() to create them securely. The former can be used when you only need an open fd (the file is removed immediately), the latter if you need visible files. If you want to use tmpnam(), put it in a loop and try opening with O_EXCL to ensure you do not accidentally overwrite something.
  6. When you can’t guarantee atomicity, use locks to prevent simultaneous operations. For file operations, you can lock the file in question. For directory operations, you can create and lock lock files (however, do not rely on existence of lock files alone). Note though that the POSIX locks are non-mandatory — i.e. only prevent other programs from acquiring the lock explicitly but do not block them from performing I/O ignoring the locks.
  7. Think about the order of operations. If you create a world-readable file, and afterwards chmod() it, it is possible for another program to open it before the chmod() and retain the open handle while secure data is being written. Instead, restrict the access via mode parameter of open() (or umask()).
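Rule 4 can be sketched like this in Python (a hypothetical helper; POSIX open() semantics assumed):

```python
import errno
import os

def create_exclusive(path, data):
    """Atomically create path; return False instead of clobbering
    an existing file.  O_EXCL makes open() fail with EEXIST if the
    file already exists, atomically with respect to other
    processes."""
    try:
        # Restrictive mode from the start (rule 7): no chmod window.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False
        raise
    with os.fdopen(fd, "w") as f:
        f.write(data)
    return True
```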

July 08, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Lab::Measurement 3.512 released (July 08, 2016, 16:03 UTC)

Immediately at the heels of the previous post, I've just uploaded Lab::Measurement 3.512. It fixes some problems in the Yokogawa GS200 driver introduced in the previous version. Enjoy!

July 05, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

It's been some time since the last Lab::Measurement blog post; we are at Lab::Measurement version 3.511 by now. Here are the most important changes since 3.31:

  • One big addition "under the hood", which is still in flux, was a generic input/output framework for status and error messages. 
  • The device cache code has seen updates and bugfixes. 
  • Agilent multimeter drivers have been cleaned up and rewritten. 
  • Minimal support has been added for the Agilent E8362A network analyzer.
  • The Oxford Instruments IPS driver has been sprinkled with consistency checks and debug output, the ITC driver has seen bugfixes.
  • Controlling an Oxford Instruments Triton system is work in progress.
  • The Stanford Research SR830 lock-in now supports using the auxiliary inputs as "multimeters" and the auxiliary outputs as voltage sources.
  • Support for the Keithley 2400 multimeter, the Lakeshore 224 temperature monitor, and the Rohde&Schwarz SMB100A rf-source  has been added.
  • Work on generic SCPI parsing utilities has started.
  • Sweeps can now also vary pulse length and pulse repetition rate; the "time sweep" has been enhanced.
  • Test routines (both with instruments attached and software-only) are work in progress.
Lab::VISA has also seen a new bugfix release, 3.04. Changes since version 3.01 are:
  • Support for VXI_SERVANT resources has been removed; these are NI-specific and not available in 64bit VISA.
  • The documentation, especially on compiling and installing on recent Windows installations, has been improved. No need for Visual Studio and similar giga-downloads anymore!
  • Compiling on both 32bit and 64bit Windows 10 should now work without manual modifications in the Makefile.PL.
Enjoy!

July 04, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)

After quite a bit of hard work, I've at long last launched WireGuard, a secure network tunnel that uses modern crypto, is extremely fast, and is easy and pleasurable to use. You can read about it at the website, but in short, it's based on the simple idea of an association between public keys and permitted IP addresses. Along the way it uses some nice crypto tricks to achieve its goal. For performance it lives in the kernel, though cross-platform versions in safe languages like Rust, Go, etc. are on their way.

The launch was wildly successful. About 10 minutes after I ran /etc/init.d/nginx restart, somebody had already put it on Hacker News and Twitter, and within 24 hours I had received 150,000 unique IPs. The reception has been very warm, and the mailing list has already started to get some patches. Distro maintainers have stepped up and packages are being prepared. There are currently packages for Gentoo, Arch, Debian, and OpenWRT, which is very exciting.

Although it's still experimental and not yet in final stable/secure form, I'd be interested in general feedback from experimenters and testers.

June 29, 2016
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Beamforming in PulseAudio (June 29, 2016, 05:22 UTC)

In case you missed it — we got PulseAudio 9.0 out the door, with the echo cancellation improvements that I wrote about. Now is probably a good time for me to make good on my promise to expand upon the subject of beamforming.

As with the last post, I’d like to shout out to the wonderful folks at Aldebaran Robotics who made this work possible!

Beamforming

Beamforming as a concept is used in various aspects of signal processing including radio waves, but I’m going to be talking about it only as applied to audio. The basic idea is that if you have a number of microphones (a mic array) in some known arrangement, it is possible to “point” or steer the array in a particular direction, so sounds coming from that direction are made louder, while sounds from other directions are rendered softer (attenuated).

Practically speaking, it should be easy to see the value of this on a laptop, for example, where you might want to focus a mic array to point in front of the laptop, where the user probably is, and suppress sounds that might be coming from other locations. You can see an example of this in the webcam below. Notice the grilles on either side of the camera — there is a microphone behind each of these.

Webcam with 2 mics

This raises the question of how this effect is achieved. The simplest approach is called “delay-sum beamforming”. The key idea is that if we steer an array of microphones at a particular angle, the sound we want to pick up will reach each microphone at a slightly different time. This is illustrated below. The image is taken from this great article describing the principles and math in a lot more detail.

Delay-sum beamforming

In this figure, you can see that the sound from the source we want to listen to reaches the top-most microphone slightly before the next one, which in turn captures the audio slightly before the bottom-most microphone. If we know the distance between the microphones and the angle to which we want to steer the array, we can calculate the additional distance the sound has to travel to each microphone.

The speed of sound in air is roughly 340 m/s, and thus we can also calculate how much of a delay occurs between the same sound reaching each microphone. The signal at the first two microphones is delayed using this information, so that we can line up the signal from all three. Then we take the sum of the signal from all three (actually the average, but that’s not too important).

The signal from the direction we’re pointing in is going to be strongly correlated, so it will turn out loud and clear. Signals from other directions will end up being attenuated because they will only occur in one of the mics at a given point in time when we’re summing the signals — look at the noise wavefront in the illustration above as an example.

Implementation

(this section is a bit more technical than the rest of the article, feel free to skim through or skip ahead to the next section if it’s not your cup of tea!)

The devil is, of course, in the details. Given the microphone geometry and steering direction, calculating the expected delays is relatively easy. We capture audio at a fixed sample rate — let’s assume this is 32000 samples per second, or 32 kHz. That translates to one sample every 31.25 µs. So if we want to delay our signal by 125 µs, we can just add a buffer of 4 samples (4 × 31.25 = 125). Sound travels about 4.25 cm in that time, so this is not an unrealistic example.

Now, instead, assume the signal needs to be delayed by 80 µs. This translates to 2.56 samples. We’re working in the digital domain — the mic has already converted the analog vibrations in the air into digital samples that have been provided to the CPU. This means that our buffer delay can either be 2 samples or 3, not 2.56. We need another way to add a fractional delay (else we’ll end up with errors in the sum).
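The arithmetic above can be sketched in a few lines of Python. This is a toy illustration of the geometry-to-samples calculation for a simple linear array, not code from PulseAudio:

```python
import math

SPEED_OF_SOUND = 340.0  # m/s, as used in the article
RATE = 32000            # samples per second (32 kHz)

def delay_samples(mic_spacing_m, steering_angle_deg):
    """Return the (possibly fractional) number of samples by which sound
    from the steering direction arrives later at the next microphone of a
    linear array, given the spacing between adjacent mics."""
    extra_distance = mic_spacing_m * math.sin(math.radians(steering_angle_deg))
    delay_seconds = extra_distance / SPEED_OF_SOUND
    return delay_seconds * RATE

# 4.25 cm of extra travel corresponds to 125 us, i.e. exactly 4 samples;
# 2.72 cm corresponds to 80 us, i.e. the awkward fractional 2.56 samples.
```

Whenever the result is not an integer, we hit exactly the fractional-delay problem described next.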

There is a fair amount of academic work describing methods to filter a signal so as to provide a fractional delay. One common way is to apply an FIR filter. However, to keep things simple, the method I chose was the Thiran approximation — the literature suggests that it performs the task reasonably well, and it has the advantage of not having to spend a whole lot of CPU cycles first transforming to the frequency domain (which an FIR filter requires) (edit: converting to the frequency domain isn’t necessary; thanks to the folks who pointed this out).
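For the curious, the Thiran allpass denominator coefficients can be computed directly from the desired fractional delay. This is a from-scratch sketch of the textbook formula, not the code in the PulseAudio module:

```python
from math import comb

def thiran_coeffs(order, delay):
    """Denominator coefficients a[0..order] of an order-N Thiran allpass
    approximating a delay of `delay` samples (the approximation behaves
    best when `delay` is close to the filter order).  The allpass
    numerator is simply the denominator reversed."""
    a = []
    for k in range(order + 1):
        ak = (-1) ** k * comb(order, k)
        for n in range(order + 1):
            ak *= (delay - order + n) / (delay - order + k + n)
        a.append(ak)
    return a
```

For a first-order filter this reduces to the well-known a1 = (1 − D)/(1 + D), which is an easy way to sanity-check the implementation.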

I’ve implemented all of this as a separate module in PulseAudio as a beamformer filter module.

Now it’s time for a confession. I’m a plumber, not a DSP ninja. My delay-sum beamformer doesn’t do a very good job. I suspect part of it is the limitation of the delay-sum approach, partly the use of an IIR filter (which the Thiran approximation is), and it’s also entirely possible there is a bug in my fractional delay implementation. Reviews and suggestions are welcome!

A Better Implementation

The astute reader has, by now, realised that we are already doing a bunch of processing on incoming audio during voice calls — I’ve written in the previous article about how the webrtc-audio-processing engine provides echo cancellation, automatic gain control, voice activity detection, and a bunch of other features.

Another feature that the library provides is — you guessed it — beamforming. The engineers at Google (who clearly are DSP ninjas) have a pretty good beamformer implementation, and this is also available via module-echo-cancel. You do need to configure the microphone geometry yourself (which means you have to manually load the module at the moment). Details are on our wiki (thanks to Tanu for that!).
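From memory of those wiki instructions (so treat the exact argument names as unverified), manually loading the module for two mics 4 cm apart looks roughly like this; mic_geometry takes x,y,z coordinates in metres for each microphone:

```shell
pactl load-module module-echo-cancel aec_method=webrtc \
    aec_args="beamforming=1 mic_geometry=-0.02,0,0,0.02,0,0"
```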

How well does this work? Let me show you. The image below is me talking to my laptop, which has two microphones about 4cm apart, on either side of the webcam, above the screen. First I move to the right of the laptop (about 60°, assuming straight ahead is 0°). Then I move to the left by about the same amount (the second speech spike). And finally I speak from the center (a couple of times, since I get distracted by my phone).

The upper section represents the microphone input — you’ll see two channels, one corresponding to each mic. The bottom part is the processed version, with echo cancellation, gain control, noise suppression, etc. and beamforming.

WebRTC beamforming

You can also listen to the actual recordings …

… and the processed output.

Feels like black magic, doesn’t it?

Finishing thoughts

The webrtc-audio-processing-based beamforming is already available for you to use. The downside is that you need to load the module manually, rather than have this automatically plugged in when needed (because we don’t have a way to store and retrieve the mic geometry). At some point, I would really like to implement a configuration framework within PulseAudio to allow users to set configuration from some external UI and have that be picked up as needed.

Nicolas Dufresne has done some work to wrap the webrtc-audio-processing library functionality in a GStreamer element (and this is in master now). Adding support for beamforming to the element would also be good to have.

The module-beamformer bits should be a good starting point for folks who want to wrap their own beamforming library and have it used in PulseAudio. Feel free to get in touch with me if you need help with that.

June 25, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.0 (June 25, 2016, 15:31 UTC)

Oh boy, this new version is so amazing in terms of improvements and contributions that it’s hard to sum it up !

Before going into more explanations I want to dedicate this release to tobes whose contributions, hard work and patience have permitted this ambitious 3.0 : THANK YOU !

This is the graph of contributed commits since 2.9 just so you realise how much this version is thanks to him:
I can’t continue without also thanking Horgix, who started this madness by splitting the code base into modular files, and pydsigner for his everlasting contributions and code reviews !

The git stat since 2.9 also speaks for itself:

 73 files changed, 7600 insertions(+), 3406 deletions(-)

So what’s new ?

  • the monolithic code base has been split into modules, each responsible for one of the tasks py3status performs
  • major improvements in how module output is orchestrated and executed, resulting in considerably lower CPU consumption and better i3bar responsiveness
  • refactoring of user notifications with added dbus support and rate limiting
  • improved modules error reporting
  • py3status can now survive an i3status crash and will try to respawn it
  • a new ‘container’ module output type gives the ability to group modules together
  • refactoring of the time and tztime modules brings support for all the time macros (%d, %Z etc)
  • support for stopping py3status and its modules when i3bar hide mode is used
  • refactoring of general, contribution and most noticeably modules documentation
  • more details can be found in the rest of the changelog

Modules

Along with a cool list of improvements on the existing modules, these are the new modules:

  • new group module to cycle display of several modules (check it out, it’s insanely handy !)
  • new fedora_updates module to check for your Fedora packages updates
  • new github module to check a github repository and notifications
  • new graphite module to check metrics from graphite
  • new insync module to check your current insync status
  • new timer module to have a simple countdown displayed
  • new twitch_streaming module to check if a Twitch streamer is online
  • new vpn_status module to check your VPN status
  • new xrandr_rotate module to rotate your screens
  • new yandexdisk_status module to display Yandex.Disk status
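To give a flavour of the new group module, here is a config sketch cycling the display between two timezone clocks; the option names are recalled from the documentation, so double-check them against your version:

```
order += "group tz"

group tz {
    cycle = 10

    tztime la {
        format = "LA %H:%M"
        timezone = "America/Los_Angeles"
    }

    tztime paris {
        format = "Paris %H:%M"
        timezone = "Europe/Paris"
    }
}
```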

Contributors

And of course thank you to all the others who made this version possible !

  • @egeskow
  • Alex Caswell
  • Johannes Karoff
  • Joshua Pratt
  • Maxim Baz
  • Nathan Smith
  • Themistokle Benetatos
  • Vladimir Potapev
  • Yongming Lai

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Time to retire (June 25, 2016, 07:08 UTC)

I’m sad to say it’s the end of the road for me with Gentoo, after 13 years volunteering my time (my “anniversary” is tomorrow). My time and motivation to commit to Gentoo have steadily declined over the past couple of years and eventually stopped entirely. It was an enormous part of my life for more than a decade, and I’m very grateful to everyone I’ve worked with over the years.

My last major involvement was running our participation in the Google Summer of Code, which is now fully handed off to others. Prior to that, I was involved in many things from migrating our X11 packages through the Big Modularization and maintaining nearly 400 packages to serving 6 terms on the council and as desktop manager in the pre-council days. I spent a long time trying to change and modernize our distro and culture. Some parts worked better than others, but the inertia I had to fight along the way was enormous.

No doubt I’ve got some packages floating around that need reassignment, and my retirement bug is already in progress.

Thanks, folks. You can reach me by email using my nick at this domain, or on Twitter, if you’d like to keep in touch.


Tagged: gentoo, x.org

June 24, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2016-06-24 (June 24, 2016, 04:55 UTC)

Development has been slowed down a bit because of university exams in July and August.
It will continue soon.

June 23, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

I think there has been more than enough news about nextcloud’s origins as a fork of owncloud, so I will keep this post to the technical bits.

Installing nextcloud on Gentoo

For now, the differences between owncloud 9 and nextcloud 9 are mostly cosmetic, so a few quick edits to the owncloud ebuild resulted in a working nextcloud one. And thanks to the different package names, you can install both in parallel (as long as they do not use the same database, of course).

So if you want to test nextcloud, it’s just a command away:

# emerge -a nextcloud

With the default webapp parameters, it will install alongside owncloud.

Migrating owncloud data

Nothing official again here, but as I run a small instance (not that much data) with simple sqlite backend, I could copy the data and configuration to nextcloud and test it while keeping owncloud.
Adapt the paths and web user to your setup: I have these webapps in the default /var/www/localhost/htdocs/ path, with www-server as the web server user.

First, clean the test data and configuration (if you have logged in to nextcloud):

# rm /var/www/localhost/htdocs/nextcloud/config/config.php
# rm -r /var/www/localhost/htdocs/nextcloud/data/*

Then clone owncloud’s data and config. If you feel adventurous (or short on available disk space), you can move these files instead of copying them:

# cp -a /var/www/localhost/htdocs/owncloud/data/* /var/www/localhost/htdocs/nextcloud/data/
# cp -a /var/www/localhost/htdocs/owncloud/config/config.php /var/www/localhost/htdocs/nextcloud/config/

Change all owncloud occurrences in config.php to nextcloud (there should be only one, for ‘datadirectory’). Then run the (nextcloud) updater. You can do it via the web interface, or (safer) with the CLI occ tool:

# sudo -u www-server php /var/www/localhost/htdocs/nextcloud/occ upgrade

As with “standard” owncloud upgrades, you will have to reactivate additional plugins after logging in. Also check the nextcloud log for potential warnings and errors.

In my test, the only non-official plugin that I use, files_reader (for ebooks), installed fine in nextcloud, and the rest worked just as well as in owncloud, with a lighter default theme 🙂
For now, owncloud-client works if you point it to the new /nextcloud URL on your server, but this can (and probably will) change in the future.

More migration tips and news can be found in this nextcloud forum post, including some nice detailed backup steps for mysql-backed systems migration.

June 22, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

In my previous post I described a number of pitfalls regarding Gentoo dependency specifications. However, I missed a minor point: the correctness of various dependency types in specific dependency classes. I am going to address this in this short post.

There are three classes of dependencies in Gentoo: build-time dependencies that are installed before the source build happens, runtime dependencies that should be installed before the package is installed to the live system and ‘post’ dependencies which are pretty much runtime dependencies whose install can be delayed if necessary to avoid dependency loops. Now, there are some fun relationships between dependency classes and dependency types.

Blockers

Blockers are the dependencies used to prevent a specific package from being installed, or to force its uninstall. In modern EAPIs, there are two kinds of blockers: weak blockers (single !) and strong blockers (!!).

Weak blockers indicate that if the blocked package is installed, its uninstall may be delayed until the blocking package is installed. This is mostly used to solve file collisions between two packages — e.g. it allows the package manager to replace colliding files, then unmerge remaining files of the blocked package. It can also be used if the blocked package causes runtime issues on the blocking package.

Weak blockers make sense only in RDEPEND. While they’re technically allowed in DEPEND (making it possible for DEPEND=${RDEPEND} assignment to be valid), they are not really meaningful in DEPEND alone. That’s because weak blockers can be delayed post build, and therefore may not influence the build environment at all. In turn, after the build is done, build dependencies are no longer used, and unmerging the blocker does not make sense anymore.

Strong blockers indicate that the blocked package must be uninstalled before the dependency specification is considered satisfied. Therefore, they are meaningful both for build-time dependencies (where they indicate the blocker must be uninstalled before source build starts) and for runtime dependencies (where they indicate it must be uninstalled before install starts).

This leaves PDEPEND which is a bit unclear. Again, technically both blocker types are valid. However, weak blockers in PDEPEND would be pretty much equivalent to those in RDEPEND, so there is no reason to use that class. Strong blockers in PDEPEND would logically be equivalent to weak blockers — since the satisfaction of this dependency class can be delayed post install.
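To sum up the above with a hypothetical ebuild fragment (the package names are invented):

```shell
# Weak blocker: the file collision with app-misc/oldtool can be resolved
# after this package is merged, so RDEPEND is the right place for it.
RDEPEND="!app-misc/oldtool"

# Strong blocker: app-misc/badtool must be uninstalled before the source
# build starts, hence DEPEND.
DEPEND="!!app-misc/badtool"
```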

Any-of dependencies and :* slot operator

This is just going to be a short reminder: those types of dependencies are valid in all dependency classes, but no binding between those occurrences is provided.

An any-of dependency in DEPEND indicates that at least one of the packages will be installed before the build starts. An any-of dependency in RDEPEND (or PDEPEND) indicates that at least one of them will be installed at runtime. There is no guarantee that the dependency used to satisfy DEPEND will be the same as the one used to satisfy RDEPEND, and the latter is fully satisfied when one of the listed packages is replaced by another.

A similar case occurs for the :* operator — only that slots are used instead of separate packages.

:= slot operator

Now, the ‘equals’ slot operator is a fun one. Technically, it is valid in all dependency classes — for the simple reason of DEPEND=${RDEPEND}. However, it does not make sense in DEPEND alone, as it is used to force rebuilds of an installed package while build-time dependencies apply only during the build.

The fun part is that for the := slot operator to be valid, the matching package needs to be installed when the metadata for the package is recorded — i.e. when a binary package is created or the built package is installed from source. For this to happen, a dependency guaranteeing this must be in DEPEND.

So, the common rule would be that a package dependency using := operator would have to be both in RDEPEND and DEPEND. However, strictly speaking the dependencies can be different as long as a package matching the specification from RDEPEND is guaranteed by DEPEND.
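In ebuild terms, the common rule reduces to a pattern like the following (package name invented):

```shell
# := in RDEPEND makes the package manager record the slot/sub-slot of the
# installed dev-libs/foo; repeating the dependency in DEPEND guarantees
# that a matching version is installed when that metadata is recorded.
RDEPEND="dev-libs/foo:="
DEPEND="${RDEPEND}"
```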

June 21, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

During my work on Gentoo, I have seen many types of dependency pitfalls that developers fell into. Sad to say, their number is increasing with new EAPI features — we are constantly introducing new ways to fail rather than working on simplifying things. I can’t say the learning curve is getting much steeper, but it is considerably easier to make a mistake.

In this article, I would like to point out a few common misunderstandings and pitfalls regarding slots, slot operators and any-of (|| ()) deps. All of those constructs are used to express dependencies that can usually be satisfied by multiple packages or package versions that can be installed in parallel, and missing this point is often the cause of trouble.

Separate package dependencies are not combined into a single slot

One of the most common mistakes is to assume that multiple package dependency specifications listed in one package are going to be combined somehow. However, there is no such guarantee, and when a package becomes slotted this fact actually becomes significant.

Of course, some package managers take various precautions to prevent the following issues. However, such precautions not only can not be relied upon but may also violate the PMS.

For example, consider the following dependency specification:

>=app-misc/foo-2
<app-misc/foo-5

It is a common way of expressing version ranges (in this case, versions 2*, 3* and 4* are acceptable). However, if app-misc/foo is slotted and there are versions satisfying the dependencies in different slots, there is no guarantee that the dependency could not be satisfied by installing foo-1 (satisfies <foo-5) and foo-6 (satisfies >=foo-2) in two slots!

Similarly, consider:

app-misc/foo[foo]
bar? ( app-misc/foo[baz] )

This one is often used to apply multiple sets of USE flags to a single package. Once again, if the package is slotted, there is no guarantee that the dependency specifications will not be satisfied by installing two slots with different USE flag configurations.

However, those problems mostly apply to fully slotted packages such as sys-libs/db where multiple slots are actually meaningfully usable by a package. With the more common use of multiple slots to provide incompatible versions of the package (e.g. binary compatibility slots), there is a more important problem: that even a single package dependency can match the wrong slot.

For non-truly multi-slotted packages, the solution to all those problems is simple: always specify the correct slot. For truly multi-slotted packages, there is no easy solution.

For example, a version range has to be expressed using an any-of dep:

|| (
     =sys-libs/db-5*
     =sys-libs/db-4*
)

Multiple sets of USE flags? Well, if you really insist, you can combine them for each matching slot separately…

|| (
    ( sys-libs/db:5.3 tools? ( sys-libs/db:5.3[cxx] ) )
    ( sys-libs/db:5.1 tools? ( sys-libs/db:5.1[cxx] ) )
    …
)

The ‘equals’ slot operator and multiple slots

A similar problem applies to the use of the EAPI 5 ‘equals’ slot operator. The PMS notes that:

=
Indicates that any slot value is acceptable. In addition, for runtime dependencies, indicates that the package will break unless a matching package with slot and sub-slot equal to the slot and sub-slot of the best installed version at the time the package was installed is available.

[…]

To implement the equals slot operator, the package manager will need to store the slot/sub-slot pair of the best installed version of the matching package. […]

PMS, 8.2.6.3 Slot Dependencies

The significant part is that the slot and sub-slot are recorded for the best package version matched by the specification containing the operator. So again, if the operator is used on multiple dependencies that can match multiple slots, multiple slots can actually be recorded.

Again, this becomes really significant in truly slotted packages:

|| (
     =sys-libs/db-5*
     =sys-libs/db-4*
)
sys-libs/db:=

While one may expect the code to record the slot of sys-libs/db used by the package, this may actually record any newer version that is installed while the package is being built. In other words, this may implicitly bind to db-6* (and pull it in too).

For this to work, you need to ensure that the dependency with the slot operator can not match any version newer than the two requested:

|| (
     =sys-libs/db-5*
     =sys-libs/db-4*
)
<sys-libs/db-6:=

In this case, the dependency with the operator could still match earlier versions. However, the other dependency enforces (as long as it’s in DEPEND) that at least one of the two versions specified is installed at build-time, and therefore is used by the operator as the best version matching it.

The above block can easily be extended by a single set of USE dependencies (being applied to all the package dependencies including the one with slot operator). For multiple conditional sets of USE dependencies, finding a correct solution becomes harder…

The meaning of any-of dependencies

Since I have already started using the any-of dependencies in the examples, I should point out yet another problem. Many of Gentoo developers do not understand how any-of dependencies work, and make wrong assumptions about them.

In an any-of group, at least one immediate child element must be matched. A blocker is considered to be matched if its associated package dependency specification is not matched.

PMS, 8.2.3 Any-of Dependency Specifications

So, PMS guarantees that if at least one of the immediate child elements (package dependencies, nested blocks) of the any-of block is matched, the dependency is considered satisfied. This is the only guarantee PMS gives you. The two common mistakes are to assume that the order is significant and that any kind of binding between packages installed at build time and at run time is provided.

Consider an any-of dependency specification like the following:

|| (
    A
    B
    C
)

In this case, it is guaranteed that at least one of the listed packages is installed at the point appropriate for the dependency class. If none of the packages are installed already, it is customary to assume the Package Manager will prefer the first one — while this is not specified and may depend on satisfiability of the dependencies, it is a reasonable assumption to make.

If multiple packages are installed, it is undefined which one is actually going to be used. In fact, the package may even provide the user with explicit run time choice of the dependency used, or use multiple of them. Assuming that A will be preferred over B, and B over C is simply wrong.

Furthermore, if one of the packages is uninstalled, while one of the remaining ones is either already installed or being installed, the dependency is still considered satisfied. It is wrong to assume that in any case the Package Manager will bind to the package used at install time, or cause rebuilds when switching between the packages.

The ‘equals’ slot operator in any-of dependencies

Finally, I am reaching the point of lately recurring debates. Let me make it clear: our current policy states that under no circumstances may := appear anywhere inside any-of dependency blocks.

Why? Because it is meaningless, it is contradictory. It is not even undefined behavior, it is a case where requirements put for the slot operator can not be satisfied. To explain this, let me recall the points made in the preceding sections.

First of all, the implementation of the ‘equals’ slot operator requires the Package Manager to explicitly bind the slot/subslot of the dependency to the installed version. This can only happen if the dependency is installed — and an any-of block only guarantees that one of them will actually be installed. Therefore, an any-of block may trigger a case when PMS-enforced requirements can not be satisfied.

Secondly, the definition of an any-of block allows replacing one of the installed packages with another at run time, while the slot operator disallows changing the slot/subslot of one of the packages. The two requested behaviors are contradictory and do not make sense. Why bind to a specific version of one package, while any version of the other package is allowed?

Thirdly, the definition of an any-of block does not specify any particular order/preference of packages. If the listed packages do not block one another, you could end up having multiple of them installed, and bound to specific slots/subslots. Therefore, the Package Manager should allow you to replace A:1 with B:2 but not with B:1 nor with A:2. We’re reaching insanity now.

Now, all of the above is purely theoretical. The Package Manager can do pretty much anything given invalid input, and that is why many developers wrongly assume that slot operators work inside any-of. The truth is: they do not, the developer just did not test all the cases correctly. The Portage behavior varies from allowing replacements with no rebuilds, to requiring both of mutually exclusive packages to be installed simultaneously.

June 19, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

Since GLEP 67 was approved, bug assignment became easier. However, many metadata.xml files still made this suboptimal. Today, I have fixed most of them, and I would like to provide this short guide on how to write good metadata.xml files.

The bug assignment procedure

To understand the points that I am going to make, let’s take a look at how bug assignment happens these days. Assuming a typical case of bug related to a specific package (or multiple packages), the procedure for assigning the bug involves, for each package:

  1. reading all <maintainer/> elements from the package’s metadata.xml file, in order;
  2. filtering the maintainers based on restrict="" attributes (if any);
  3. filtering and reordering the maintainers based on <description/>s;
  4. assigning the bug to the first maintainer left, and CC-ing the remaining ones.

I think the procedure is quite clear. Since we no longer have <herd/> elements with special meaning applied to them, the assignment is mostly influenced by maintainer occurrence order. Restrictions can be used to limit maintenance to specific versions of a package, and descriptions to apply special rules and conditions.

Now, for semi-automatic bug assignment, only the first one or two of the above steps can be clearly automated. Applying restrictions correctly requires understanding whether the bug can be correctly isolated to a specific version range, as some bugs (e.g. invalid metadata) may require being fixed in multiple versions of the package. Descriptions, in turn, are written for humans and require a human to interpret them.

What belongs in a good description

Now, many existing metadata.xml files had either useless or even problematic maintainer descriptions. This is a problem, since it increases the time needed for bug assignment and makes automation harder. Common examples of bad maintainer descriptions include:

  1. Assign bugs to him; CC him on bugs — this is either redundant or contradictory. Ensure that maintainers are listed in correct order, and bugs will be assigned correctly. Those descriptions only force a human to read them and possibly change the automatically determined order.
  2. Primary maintainer; proxied maintainer — this is some information, but it does not change anything. If the maintainer comes first, he’s obviously the primary one. If the maintainer has a non-Gentoo e-mail address and there are proxies listed, he’s obviously proxied. And even if we did not know that, does it change anything? Again, we are forced to read information we do not need.

Good maintainer descriptions include:

  1. Upstream; CC on bugs concerning upstream, Only CC on bugs that involve USE="d3d9" — useful information that influences bug assignment;
  2. Feel free to fix/update, All modifications to this package must be approved by the wxwidgets herd. — important information for other developers.

So, before adding another description, please answer two questions: will the information benefit anyone? Can’t it be expressed in machine-readable form?

Proxy-maintained packages

Since a lot of the affected packages are maintained by proxied maintainers, I’d like to explicitly point out how proxy-maintained packages are to be described. This overlaps with the current Proxy maintainers team policy.

For proxy-maintained packages, the maintainers should be listed in the following order:

  1. actual package maintainers, in appropriate order — including developers maintaining or co-maintaining the package, proxied maintainers and Gentoo projects;
  2. developer proxies, preferably described as such — i.e. developers who do not actively maintain the package but only proxy for the maintainers;
  3. Proxy-maintainers project — serving as the generic fallback proxy.

I would like to put more emphasis on the key point here — the maintainers should be listed in an order making it clearly possible to distinguish packages that are maintained only by a proxied maintainer (with developers acting as proxies) from packages that are maintained by Gentoo developers and co-maintained by a proxied maintainer.
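Putting that ordering into a sketch of a proxy-maintained package's metadata.xml (names and e-mail addresses here are placeholders; proxy-maint@gentoo.org is the project's actual alias):

```xml
<pkgmetadata>
  <!-- 1. the actual (proxied) maintainer comes first -->
  <maintainer type="person">
    <email>contributor@example.com</email>
    <name>Some Contributor</name>
  </maintainer>
  <!-- 2. a developer acting as a proxy, described as such -->
  <maintainer type="person">
    <email>somedev@gentoo.org</email>
    <description>proxy</description>
  </maintainer>
  <!-- 3. the Proxy Maintainers project as the generic fallback -->
  <maintainer type="project">
    <email>proxy-maint@gentoo.org</email>
    <name>Proxy Maintainers</name>
  </maintainer>
</pkgmetadata>
```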

Third-party repositories (overlays)

As a last point, I would like to point out the special case of unofficial Gentoo repositories. Unlike the core repositories, metadata.xml files can not be fully trusted there. The reason for this is quite simple — many users copy (fork) packages from Gentoo along with metadata.xml files. If we were to trust those files — we would be assigning overlay bugs to Gentoo developers maintaining the original package!

For this reason, all bugs on unofficial repository packages are assigned to the repository owners.

June 17, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
bugs.gentoo.org: bug assignment UserJS (June 17, 2016, 12:48 UTC)

Since time does not permit me to write at more length, just a short note: yesterday, I published a Gentoo Bugzilla bug assignment UserJS. When enabled, it automatically tries to find package names in the bug summary, fetches maintainers for them (from packages.g.o) and displays them in a table with quick assignment/CC checkboxes.

Note that it’s still early work. If you find any bugs, please let me know. Patches will be welcome too. And some redesign would be welcome as well, since it currently looks pretty bad: standard Bugzilla style applied to plain HTML.

Update: now on GitHub as bug-assign-user-js

June 15, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Comparing Hadoop with mainframe (June 15, 2016, 18:55 UTC)

At my work, I have the pleasure of being involved in a big data project that uses Hadoop as the primary platform for several services. As an architect, I try to get to know the platform's capabilities, its potential use cases, its surrounding ecosystem, etc. And although the implementation at work is not in its final form (yay agile infrastructure releases) I do start to get a grasp of where we might be going.

For many analysts and architects, this Hadoop platform is a new kid on the block so I have some work explaining what it is and what it is capable of. Not for the fun of it, but to help the company make the right decisions, to support management and operations, to lift the fear of new environments. One thing I've once said is that "Hadoop is the poor man's mainframe", because I notice some high-level similarities between the two.

Somehow, it stuck, and I was asked to elaborate. So why not bring these points into a nice blog post :)

The big fat disclaimer

Now, before embarking on this comparison, I would like to state that I am not saying that Hadoop offers the same services, or even the quality and functionality, of what can be found in mainframe environments. Considering how much time, effort and experience has already been put into the mainframe platform, it would be strange if Hadoop could match it. This post seeks out some similarities and, who knows, perhaps teaches a few more tricks from one side or the other.

Second, I am not deeply knowledgeable about mainframes. I've been involved as an IT architect in database and workload automation technical domains, which also spanned their mainframe parts, but most of the effort was within the distributed world. Mainframes remain somewhat opaque to me. Still, that shouldn't prevent me from making comparisons for those areas that I do have some grasp of.

And if my current understanding is just wrong, I'm sure that I'll learn from the comments that you can leave behind!

With that being said, here it goes...

Reliability, Availability, Serviceability

Let's start with some of the promises that both platforms make - and generally are also able to deliver. Those promises are of reliability, availability and serviceability.

For the mainframe platform, these quality attributes are shown as the mainframe strengths. The platform's hardware has extensive self-checking and self-recovery capabilities, the systems can recover from failed components without service interruption, and failures can be quickly determined and resolved. On the mainframes, this is done through a good balance and alignment of hardware and software, design decisions and - in my opinion - tight control over the various components and services.

I notice the same promises on Hadoop. Various components are checking the state of the hardware and other components, and when something fails, it is often automatically recovered without impacting services. Instead of tight control over the components and services, Hadoop uses a service architecture and APIs with Java virtual machine abstractions.

Let's consider hardware changes.

For hardware failure and component substitutions, both platforms are capable of dealing with those without service disruption.

  • Mainframe probably has a better reputation in this matter, as its components have a very high Mean Time Between Failure (MTBF), and many - if not all - of the components are set up in a redundant fashion. Lots of error detection and failure detection processes try to detect if a component is close to failure, and ensure proper transitioning of any workload towards the other components without impact.
  • Hadoop uses redundancy on a server level. If a complete server fails, Hadoop is usually able to deal with this without impact. Either the sensor-like services disable a node before it goes haywire, or the workload and data that was running on the failed node is restarted on a different node.

Hardware (component) failures on the mainframe side will not impact the services and running transactions. Component failures on Hadoop might have a noticeable impact (especially if it is OLTP-like workload), but will be quickly recovered.

Failures are more likely to happen on Hadoop clusters though, as it was designed to work with many systems that have a worse MTBF design than a mainframe. The focus within Hadoop is on resiliency and fast recoverability. Depending on the service that is being used, active redundancy can be in use (so disruptions are not visible to the user).

If the Hadoop workload includes anything that resembles online transactional processing, you're still better off with enterprise-grade hardware such as ECC memory to at least allow improved hardware failure detection (and perform proactive workload management). CPU failures are not that common (at least not those without any upfront Machine Check Exception - MCE), and disk/controller failures are handled through the abstraction of HDFS anyway.

For system substitutions, I think both platforms can deal with this in a dynamic fashion as well:

  • For the mainframe side (and I'm guessing here) it is possible to switch machines with no service impact if the services are running on LPARs that are joined together in a Parallel Sysplex setup (sort-of clustering through the use of the Coupling Facilities of mainframe, which is supported through high-speed data links and services for handling data sharing and IPC across LPARs). My company switched to the z13 mainframe last year, and was able to keep core services available during the migration.
  • For Hadoop systems, the redundancy on system level is part of its design. Extending clusters, removing nodes, moving services, ... can be done with no impact. For instance, switching the active HiveServer2 instance means de-registering it in the ZooKeeper service. New client connects are then no longer served by that HiveServer2 instance, while active client connections remain until finished. There are also in-memory data grid solutions such as through the Ignite project, allowing for data sharing and IPC across nodes, as well as building up memory-based services with Arrow, allowing for efficient memory transfers.
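
The de-registration pattern described for HiveServer2 can be sketched as a toy model in plain Python. This is an illustrative caricature of ZooKeeper-style service discovery, not actual ZooKeeper or HiveServer2 code, and all names are invented:

```python
class ServiceRegistry:
    """Toy model of ZooKeeper-style service discovery: clients pick
    an instance from the registry at connect time, so de-registering
    an instance only stops *new* connections; existing ones drain."""

    def __init__(self):
        self.instances = []

    def register(self, instance):
        self.instances.append(instance)

    def deregister(self, instance):
        self.instances.remove(instance)

    def pick(self):
        # A new client picks a currently registered instance.
        if not self.instances:
            raise RuntimeError("no instances available")
        return self.instances[0]

registry = ServiceRegistry()
registry.register("hs2-node1:10000")
registry.register("hs2-node2:10000")

conn = registry.pick()                 # existing client session on hs2-node1
registry.deregister("hs2-node1:10000") # node is drained for maintenance

print(conn)             # the existing session still points at hs2-node1
print(registry.pick())  # new connections now land on hs2-node2
```

The point of the model: removing the registry entry is enough to steer new work away, without forcibly killing sessions that are already in flight.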

Of course, application-level code failures also tend to disrupt only that application, and not the other users. Be it because of different address spaces and tight runtime control (mainframe) or the use of different containers / JVMs for the applications (Hadoop), this is a good trait to have (even though it is not something that differentiates these platforms from other platforms or operating systems).

Let's talk workloads

When we look at a mainframe setup, we generally look at different workload patterns as well. There are basically two main workload approaches for the mainframe: batch, and On-Line Transactional Processing (OLTP) workload. In the OLTP type, there is often an additional distinction between synchronous OLTP and asynchronous OLTP (usually message-based).

Well, we have the same on Hadoop. It was once a pure batch-driven platform (and many of its components are still using batches or micro-batches in their underlying designs) but now also provides OLTP workload capabilities. Most of the OLTP workload on Hadoop is in the form of SQL-like or NoSQL database management systems with transaction manager support though.

To manage these (different) workloads, and to deal with prioritization of the workload, both platforms offer the necessary services to make things both managed as well as business (or "fit for purpose") focused.

  • Using the Workload Manager (WLM) on the mainframe, policies can be set on the workload classes so that an over-demand of resources (across LPARs) results in the "right" amount of allocations for the "right" workload. To manage the jobs themselves, z/OS uses the Job Entry Subsystem (JES) to receive jobs and schedule them for processing. For transactional workload, WLM provides the right resources to, for instance, the involved IMS regions.
  • On Hadoop, workload management is done through Yet Another Resource Negotiator (YARN), which uses (logical) queues for the different workloads. Workload (Application Containers) running through these queues can be controlled, resource-wise, both on the queue level (high-level resource control) and on the process level (low-level resource control) through the use of Linux Control Groups (CGroups - when using Linux-based systems, of course).
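
As a toy illustration of queue-based capacity sharing (my own simplification, not YARN's actual scheduler code): each queue first receives its configured share of the cluster, and capacity left unused by idle queues can be lent to busier ones.

```python
def allocate(cluster_capacity, queues):
    """queues: dict of name -> (configured_share, demand).
    Each queue first gets min(its share, its demand); capacity
    left over is then lent to queues whose demand exceeds
    their configured share."""
    grants = {}
    leftover = cluster_capacity
    for name, (share, demand) in queues.items():
        base = min(cluster_capacity * share, demand)
        grants[name] = base
        leftover -= base
    # Lend leftover capacity to still-hungry queues.
    for name, (share, demand) in queues.items():
        extra = min(demand - grants[name], leftover)
        grants[name] += extra
        leftover -= extra
    return grants

# 100 units of capacity: the batch queue is nearly idle,
# so the ad-hoc queue can borrow beyond its configured 40%.
print(allocate(100, {"batch": (0.6, 10), "adhoc": (0.4, 80)}))
# batch gets 10, adhoc gets 80 (40 configured + 40 borrowed)
```

Real YARN schedulers add preemption, per-user limits and hierarchy on top, but the elasticity idea is the same.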

If I were to compare the two directly, one might say that YARN queues are like WLM's service classes, and, for batch applications, the initiators on the mainframe are like the Application Containers within YARN queues. The latter can also be somewhat compared to IMS regions in the case of long-running Application Containers.

The comparison will not hold completely though. WLM can be tuned based on goals and will do dynamic decision making on the workloads depending on its parameters, and even do live adjustments on the resources (through the System Resources Manager - SRM). Heavy focus on workload management on mainframe environments is feasible because extending the available resources on mainframes is usually expensive (additional Million Service Units - MSU). On Hadoop, large cluster users who notice resource contention just tend to extend the cluster further. It's a different approach.

Files and file access

Another thing that tends to confuse some new users on Hadoop is its approach to files. But when you know some things about the mainframe, this does remain understandable.

Both platforms have a sort-of master repository where data sets (mainframe) or files (Hadoop) are registered in.

  • On the mainframe, the catalog translates data set names into the right location (or points to other catalogs that do the same)
  • On Hadoop, the Hadoop Distributed File System (HDFS) NameNode is responsible for tracking where files (well, blocks) are located across the various systems
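
The NameNode's bookkeeping can be pictured as a mapping from file paths to block lists, and from blocks to the DataNodes holding replicas. This is a deliberately naive model with invented names; real HDFS metadata is far richer:

```python
# Toy model of NameNode metadata: files are split into fixed-size
# blocks, and each block is mapped to the nodes holding a replica.
BLOCK_SIZE = 128  # MB, the common HDFS default

def split_into_blocks(path, size_mb):
    """Return block IDs for a file of the given size."""
    n_blocks = -(-size_mb // BLOCK_SIZE)  # ceiling division
    return [f"{path}#blk{i}" for i in range(n_blocks)]

namespace = {}   # file path -> list of block IDs
block_map = {}   # block ID -> list of DataNodes holding a replica

blocks = split_into_blocks("/data/events.log", 300)
namespace["/data/events.log"] = blocks
for i, blk in enumerate(blocks):
    # Place three replicas round-robin (real HDFS is rack-aware).
    block_map[blk] = [f"node{(i + j) % 3}" for j in range(3)]

print(len(blocks))           # 3 blocks for a 300 MB file
print(block_map[blocks[0]])  # ['node0', 'node1', 'node2']
```

A client asking for /data/events.log first consults this metadata, then reads the blocks directly from the DataNodes, which is roughly analogous to resolving a data set through a mainframe catalog before accessing it.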

Considering the use of the repository, both platforms thus require the allocation of files and offer the necessary APIs to work with them. But this small comparison does not end here.

Depending on what you want to store (or access), the file format you use is important as well.

  • On the mainframe, Virtual Storage Access Method (VSAM) provides both the methods (think of it as an API) and the format for a particular data organization. Inside a VSAM data set, multiple data entries can be stored in a structured way. Besides VSAM, there is also Partitioned Data Set/Extended (PDSE), which is more like a directory of sorts. Regular files are Physical Sequential (PS) data sets.
  • On Hadoop, a number of file formats are supported which optimize the use of the files across the services. One is Avro, which holds both methods and format (not unlike VSAM); another is Optimized Row Columnar (ORC). HDFS also has a number of options that can be enabled or set on certain locations (HDFS uses a folder-like structure), such as encryption, or on files themselves, such as the replication factor.

Although I won't claim VSAM and Avro are very similar (Hadoop focuses more on the concept of files and then the file structure, whereas the mainframe focuses on the organization and allocation aspect, if I'm not mistaken), they seem similar enough to bring people's attention back to the table.

Services all around

What makes a platform tick is its multitude of supported services. And even here we can find similarities between the two platforms.

On the mainframe, DBMS services can be offered by a multitude of software products. Relational DBMS services can be provided by IBM DB2, CA Datacom/DB, NOMAD, ... while other database types are rendered by titles such as CA IDMS and ADABAS. All these titles build upon the capabilities of the underlying components and services to extend the platform's abilities.

On Hadoop, several database technologies exist as well. Hive offers a SQL layer on top of Hadoop managed data (so does Drill btw), HBase is a non-relational database (mainly columnar store), Kylin provides distributed analytics, MapR-DB offers a column-store NoSQL database, etc.

When we look at transaction processing, the mainframe platform shows its decades of experience with solutions such as CICS and IMS. Hadoop is still very much in its infancy here, but with projects such as Omid and commercial software solutions such as Splice Machine, transactional processing is coming here as well. Most of these are based on underlying database management systems which are extended with transactional properties.

And services that offer messaging and queueing are also available on both platforms: mainframe can enjoy Tibco Rendezvous and IBM WebSphere MQ, while Hadoop is hitting the news with projects such as Kafka and Ignite.

Services extend even beyond the ones that are directly user facing. For instance, both platforms can easily be orchestrated using workload automation tooling. Mainframe has a number of popular schedulers up its sleeve (such as IBM TWS, BMC Control-M or CA Workload Automation) whereas Hadoop is generally easily extended with the scheduling and workload automation software of the distributed world (which, given its market, is dominated by the same vendors, although many smaller ones exist as well). Hadoop also has its "own" little scheduling infrastructure called Oozie.

Programming for the platforms

Platforms, however, are more than just the sum of the services and properties they provide. Platforms are used to build solutions on, and that is true for both the mainframe and Hadoop.

Let's first look at scripting - using interpreted languages. On the mainframe, you can use the Restructured Extended Executor (REXX) or CLIST (Command LIST). Hadoop gives you Tez and Pig, as well as Python and R (through PySpark and SparkR).

If you want to directly interact with the systems, mainframe offers the Time Sharing Option/Extensions (TSO/E) and Interactive System Productivity Facility (ISPF). For Hadoop, regular shells can be used, as well as service-specific ones such as Spark shell. However, for end users, web-based services such as Ambari UI (Ambari Views) are generally better suited.

If you're more fond of compiled code, mainframe supports you with COBOL, Java (okay, it's "a bit" interpreted, but also compiled - don't shoot me here), C/C++ and all the other popular programming languages. Hadoop builds on top of Java, but supports other languages such as Scala and allows you to run native applications as well - it's all about using the right APIs.

To support development efforts, Integrated Development Environments (IDEs) are provided for both platforms as well. You can use Cobos, Micro Focus Enterprise Developer, Rational Developer for System z, Topaz Workbench and more for mainframe development. Hadoop has you covered with web-based notebook solutions such as Zeppelin and JupyterHub, as well as client-level IDEs such as Eclipse (with the Hadoop Development Tools plugins) and IntelliJ.

Governing and managing the platforms

Finally, there is also the aspect of managing the platforms.

When working on the mainframe, management tooling such as the Hardware Management Console (HMC) and z/OS Management Facility (z/OSMF) cover operations for both hardware and system resources. On Hadoop, central management software such as Ambari, Cloudera Manager or Zettaset Orchestrator try to cover the same needs - although most of these focus more on the software side than on the hardware level.

Both platforms also have a reasonable use for multiple roles: application developers, end users, system engineers, database administrators, operators, system administrators, production control, etc., who all need some kind of access to the platform to support their day-to-day duties. And when you talk roles, you talk authorizations.

On the mainframe, the Resource Access Control Facility (RACF) provides access control and auditing facilities, and supports a multitude of services on the mainframe (such as DB2, MQ, JES, ...). Many major Hadoop services, such as HDFS, YARN, Hive and HBase support Ranger, providing a single pane for security controls on the Hadoop platform.
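
A single-pane policy model like Ranger's can be caricatured as one shared policy store consulted by every service, instead of per-service ACLs. Again, this is a toy sketch with invented names and structure, not Ranger's actual API:

```python
# One central policy store consulted by every service (HDFS, Hive, ...),
# instead of each service keeping its own access-control lists.
policies = [
    # (service, resource, user, allowed actions)
    ("hdfs", "/data/finance", "alice", {"read", "write"}),
    ("hive", "db.finance",    "alice", {"select"}),
    ("hive", "db.finance",    "bob",   {"select", "update"}),
]

def is_allowed(service, resource, user, action):
    """Check the shared store: every service asks the same question."""
    return any(s == service and r == resource and u == user and action in acts
               for (s, r, u, acts) in policies)

print(is_allowed("hive", "db.finance", "alice", "select"))  # True
print(is_allowed("hive", "db.finance", "alice", "update"))  # False
```

The benefit of the single pane is that granting or auditing access happens in one place, much like RACF acting as the common security backend for the various mainframe subsystems.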

Both platforms also offer the necessary APIs or hooks through which system developers can fine-tune the platform to fit the needs of the business, or develop new integrated solutions - including security oriented ones. Hadoop's extensive plugin-based design (not explicitly named) or mainframe's Security Access Facility (SAF) are just examples of this.

Playing around

Going for a mainframe or a Hadoop platform will always be a management decision. Both platforms play specific roles and need particular profiles to support them. They are both, in my opinion, also difficult to migrate away from once you are really using them actively (lock-in), although migrating away from Hadoop is more digestible given the financial implications.

Once you want to start meddling with them, getting access to a full platform used to be hard (though the coming age of cloud services means this is no longer the case), so both had some potential "small deployment" uses. Mainframe experience could be gained through the Hercules 390 emulator, whereas most Hadoop distributions have a single-VM sandbox available for download.

A full-scale roll-out, however, is much harder to do on your own. You'll need quite some experience or even expertise on so many levels that you will soon see you need teams (plural) to get things done.

This concludes my (apparently longer than expected) write-down of this matter. If you don't agree, or are interested in some insights, be sure to comment!

Jan Kundrát a.k.a. jkt (homepage, bugs)

Trojitá, a fast Qt IMAP e-mail client, has a shiny new release. A highlight of the 0.7 version is support for OpenPGP ("GPG") and S/MIME ("X.509") encryption -- in a read-only mode for now. Here's a short summary of the most important changes:

  • Verification of OpenPGP/GPG and S/MIME (CMS/X.509) signatures, and support for decryption of these messages
  • IMAP, MIME, SMTP and general bugfixes
  • GUI tweaks and usability improvements
  • Zooming of e-mail content and improvements for vision-impaired users
  • New set of icons matching the Breeze theme
  • Reworked e-mail header display
  • This release now needs Qt 5 (5.2 or newer, 5.6 is recommended)

As usual, the code is available in our git as a "v0.7" tag. You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

The Trojitá developers

June 10, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Comet Coffee and Microbakery, St. Louis, MO

As with most cities, Saint Louis has a plethora of places to get a cup of coffee and some pastries or treats. The vast majority of those places are good, some of them are great, even fewer are exceptional standouts, and the top tier is comprised of those that are truly remarkable. In my opinion, Comet Coffee and Microbakery finds its way into that heralded top tier. What determines whether or not a coffee shop or café earns marks that high, you ask? Well, to some degree, that’s up to individual preference. For me, the criteria are:

  • A relaxing and inviting environment
  • Friendly and talented people
  • Exceptional food and beverage quality

First and foremost, I look for the environment that the coffee shop provides. Having a rather hectic work schedule, I like the idea of a place that I can go to unwind and just enjoy some of the more simplistic pleasures of life. Comet Coffee offers the small, intimate café-style setting that fits the bill for me. There are five or six individual tables and some longer benches inside, and a handful of tables outside on the front patio. Though the smaller space sometimes leads to crowding and a bit of a “hustle and bustle” feel, it doesn’t ever seem distracting. Also, during non-peak times, it tends to be quiet and peaceful.

Secondly, a café—or any other eatery, really—is about more than just the space itself. The employees make all the difference! At Comet Coffee, everyone is exceptionally talented in their craft, and it’s apparent that they deeply care about not only the customers they’re serving, but also the food and drinks that they’re making!

Comet Coffee employee Gretchen making a latte
Gretchen starting a latte
Comet Coffee employee Daniel making a pour-over
Daniel making a pour-over coffee

Thirdly, it should go without say that the food and drink quality are incredibly important factors for any café. At Comet, the coffee choices are seemingly limitless, so there are options that will satisfy any taste. Just in pour-overs alone, there are several different roasters (like Kuma, Sweet Bloom, Intelligentsia, Saint Louis’s own Blueprint, and others), from whom Comet offers an ever-changing list of varieties based on region (South American, African, et cetera). In addition to the pour-overs, there are many of the other coffee shop standards like lattes, espressos, macchiatos, cappuccinos, flat whites, and so on. Coffee’s not your thing? That’s fine too because they have an excellent and extensive selection of teas, ranging from the standard blacks, whites, and Darjeelings, to less common Oolongs, and my personal favourite green tea— the Genmaicha, which combines the delicate green tea flavours with toasted rice.

Comet Coffee's extensive tea selection

So between the coffees, espressos, and teas, you shouldn’t have any problem finding a beverage for any occasion or mood. But it isn’t just called “Comet Coffee”, it’s “Comet Coffee and Microbakery.” Though it almost sounds like an afterthought to the coffee, I assure you that the pastries and other baked goods share the stage as costars, and ones that often steal the show! There really isn’t a way for me to describe them that will do them justice. As someone who follows a rather rigid eating regimen, I won’t settle for anything less than stellar desserts and treats. That said, I’ve been blown away by every… single… one of them.

 

Comet Coffee Microbakery - Oat Cookies
Oat Cookies
Comet Coffee Microbakery - Strawberry Rhubarb Pie
Strawberry Rhubarb Pie
Comet Coffee Microbakery - Strawberry and Chocolate Ganache Macarons
Strawberry & Chocolate Ganache Macarons
Comet Coffee Microbakery - Buckwheat Muffin - Strawberry and Pistachio
Buckwheat muffin – Strawberry & Pistachio
Comet Coffee Microbakery - Tomato, Basil, and Mozzarella Quiche
Tomato, Basil, & Mozzarella Quiche
Comet Coffee Microbakery - Chocolate Chip Cookies and Cocoa Nibblers
Chocolate Chip Cookies & Cocoa Nibblers

You should definitely click on each image to view them in full size

 
Though I like essentially all of the treats, I do have my favourites. I tend to get the Oat Cookies most often because they are simple and fulfilling. One time, though, I went to Comet on a Sunday afternoon and the only thing left was a Buckwheat Muffin. Knowing that they simply don’t make any bad pastries, I went for it. Little did I know that it would become my absolute favourite! The baked goods vary with the availability of seasonal and local ingredients. For instance, the spring iteration of the Buckwheat Muffin is Strawberry Pistachio, but the previous one (which was insanely delicious) was Milk Chocolate & Hazelnut (think Nutella done in the best possible way). :)

One other testament to the quality of the treats is that Comet makes a few items that I have never liked anywhere else. For instance, I’m not a big fan of scones because they tend to be dry and often have the texture of coarse-grit sandpaper. However, the Lemon Poppy seed Scone and the Pear, Walnut & Goat Cheese Scone are both moist and satisfying. Likewise, I don’t really think much of Macarons, because they’re so light. These ones, however, have some substance to them and don’t make me think of overpriced cotton candy.

Okay, so now that I’ve sold you on Comet’s drinks and baked goods, here’s a little background about this great place. I recently had the opportunity to sit down with owners Mark and Stephanie, and talk with them about Comet’s past, current endeavours, and their future plans.

 

Coffee is about subtle nuances, and it can be continually improved upon. With all those nuances, I like it when one particular flavour note pops out.–Mark

 

Comet Coffee first opened its doors in August of 2012, and Mark immediately started renovating in order to align the space with his visions for the perfect shop. He and his fiancée Stephanie had worked together at Kaldi’s Coffee beforehand, but were inspired to open their own place. Between the two of them—Mark holding a degree in Economics, and Stephanie with degrees in both Hotel & Restaurant Management as well as Baking & Pastry Arts—the decision to foray into the industry together seemed like a given.

Mark had originally anticipated calling the shop “Demitasse,” after the small Turkish coffee cup by the same French name. Stephanie, though, did some research on the area of Saint Louis in which the shop is located (where the Forest Park Highlands amusement parks used to stand), and eventually found out about The Comet roller coaster. The name pays homage to those parks, and may have even been a little hopeful foreshadowing that the shop would become a well-established staple of the community.

When asked what separates Comet from other coffee shops, Mark readily mentioned that they themselves do not roast their own beans. He explained that doing so “requires purchasing [beans] in large quantities,” and that would disallow them from varying the coffee choices day-to-day. Similar to Mark’s comment about the quest of continuously improving the coffee experience, Stephanie indicated that the key to baking is to constantly modify the recipe based on the freshest available ingredients.

 

There are no compromises when baking. You must be meticulous with measurements, and you have to taste throughout the process to make adjustments.–Stephanie

 

Mark and Stephanie are currently in the process of opening an ice cream and bake shop in the Kirkwood area, and plan on carrying many of those items at Comet as well. Looking further to the future, Mark would like to open a doughnut shop where everything is made to order. His rationale—which, being a doughnut connoisseur myself, I find to be completely sound—is that everything fried needs to be as fresh as possible.

I, for one, can’t wait to try the new ice creams that Mark and Stephanie will offer in their Kirkwood location. For now, though, I will continue to enjoy the outstanding brews and unparalleled pastries at Comet Coffee. It has become a weekly go-to spot for me, and one that I look forward to greatly for unwinding after those difficult “Monday through Friday” stretches.

Comet Coffee Macchiato and seltzer
Macchiato and Seltzer
Comet Coffee pour-over brew
Pour-over brew

Cheers,
Zach

 
P.S. No time to leisurely enjoy the excellent café atmosphere? No worries, at least grab one to go. It will definitely beat what you can get from any of those chain coffee shops. :)

Comet Coffee to-go cup

June 06, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo, Openstack and OSIC (June 06, 2016, 05:00 UTC)

What to use it for

I recently applied for, and received, an allocation from https://osic.org/ to extend support for running OpenStack on Gentoo. The end goal is to allow Gentoo to become a gated job within the OpenStack test infrastructure. To do that, we need to add support for building an image that can be used.

(pre)work

To speed up the work on adding support for generating an OpenStack infra Gentoo image, I had already completed work on adding Gentoo to diskimage-builder. You can see images at http://gentoo.osuosl.org/experimental/amd64/openstack/

(actual)work

The actual work has been slow going, unfortunately; working with upstreams to add Gentoo support has tended to surface other issues that need fixing along the way. The main thing that slowed me down, though, was the OpenStack summit (Newton). It went on at the same time, and reviews were delayed by at least a week, usually two.

Since then, though, I've been able to work through some of the issues and have started testing the final image build in diskimage-builder.

More to do

The main things left to do are to add Gentoo support to the bindep element within diskimage-builder and to smooth out any other rough edges in other elements (if they exist). After that, OpenStack Infra can start caching a Gentoo image and the real work can begin: adding Gentoo support to the OpenStack-Ansible project to allow for better deployments.

May 27, 2016
New Gentoo LiveDVD "Choice Edition" (May 27, 2016, 00:00 UTC)

We’re happy to announce the availability of an updated Gentoo LiveDVD. As usual, you can find it on our downloads page.

May 26, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Akonadi for e-mail needs to die (May 26, 2016, 10:48 UTC)

So, I'm officially giving up on kmail2 (i.e., the Akonadi-based version of kmail) on the last one of my PCs now. I have tried hard and put in a lot of effort to get it working. However, it costs me a significant amount of time and effort just to be able to receive and read e-mail - hanging IMAP resources every few minutes, the feared "Multiple merge candidates" bug popping up again and again, and other surprise events. That is plainly not acceptable in the workplace, where I need to rely on e-mail as a means of communication. By leaving kmail2 I seem to be following many, many other people... Even dedicated KDE enthusiasts that I know have by now migrated to Trojitá or Thunderbird.

My conclusion after all these years, based on my personal experience, is that the usage of Akonadi for e-mail is a failed experiment. It was a nice idea in theory, and may work fine for some people. I am certain that a lot of effort has been put into improving it, I applaud the developers of both kmail and Akonadi for their tenaciousness and vision and definitely thank them for their work. Sadly, however, if something doesn't become robust and error-tolerant after over 5 (five) years of continuous development effort, the question pops up whether the initial architectural idea wasn't a bad one in the first place - in particular in terms of unhandleable complexity.

I am not sure why precisely in my case things turn out so badly. One possible candidate is the university mail server that I'm stuck with, running Novell Groupwise. I've seen rather odd behaviour in the IMAP replies in the past there. That said, there's the robustness principle for software to consider, and even if Groupwise were to do silly things, other IMAP clients seem to get along with it fine.

Recently I've heard some rumors about a new framework called Sink (or Akonadi-Next), which seems to be currently under development... I hope it'll be less fragile and less overcomplexified. The choice of name is not really that convincing, though (where did my e-mails go again?).

Now for the question and answer session...

Question: Why do you post such negative stuff? You are only discouraging our volunteers.
Answer: Because the motto of the drowned god doesn't apply to software. What is dead should better remain dead, and not suffer continuous revival efforts while users run away and the brand is damaged. Also, I'm a volunteer myself and invest a lot of time and effort into Linux. I've been seeing the resulting fallout. It likely scared off other prospective help.

Question: Have you tried restarting Akonadi? Have you tried clearing the Akonadi cache? Have you tried starting with a fresh database?
Answer: Yes. Yes. Yes. Many times. And yes to many more things. Did I mention that I spent a lot of time with that? I'll miss the akonadiconsole window. Or maybe not.

Question: Do you think kmail2 (the Akonadi-based kmail) can be saved somehow?
Answer: Maybe. One could suggest an additional agent as replacement to the usual IMAP module. Let's call it IMAP-stupid, and mandate that it uses only a bare minimum of server features and always runs in disconnected mode... Then again, I don't know the code, and don't know if that is feasible. Also, for some people kmail2 seems to work perfectly fine.

Question: So what e-mail program will you use now?
Answer: I will use kmail. I love kmail. Precisely, I will use Pali Rohar's noakonadi fork, which is based on kdepim 4.4. It is neither perfect nor bug-free, but accesses all my e-mail accounts reliably. This is what I've been using on my home desktop all the time (never upgraded) and what I downgraded my laptop to some time ago after losing many mails.

Question: So can you recommend running this ages-old kmail1 variant?
Answer: Yes and no. Yes, because (at least in my case) it seems to get the basic job done much more reliably. Yes, because it feels a lot snappier and produces far less random surprises. No, because it is essentially unmaintained, has some bugs, and is written for KDE 4, which is slowly going away. No, because Qt5-based kmail2 has more features and does look sexier. No, because you lose the useful Akonadi integration of addressbook and calendar.
That said, here are the two bugs of kmail1 that I find most annoying right now: 1) PGP/MIME cleartext signing is broken (at random, some signatures are not verified correctly and/or bad signatures are produced), and 2) in a Qt5 / Plasma environment only, attachments no longer open on click but can only be saved. (Which is odd, since e.g. Okular as viewer is launched but never appears on screen, and the temporary file is written but immediately disappears... need to investigate.)

Question: I have bugfixes / patches for kmail1. What should I do?
Answer: Send them!!! I'll be happy to test and forward.

Question: What will you do when Qt4 / kdelibs goes away?
Answer: Dunno. Luckily I'm involved in packaging myself. :)


May 25, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Today I saw the cover of the 30 May 2016 edition of The New Yorker, which was designed by artist R. Kikuo Johnson, and it really hit home for me. The illustration depicts the graduating class of 2016 walking out of their commencement ceremony whilst a member of the 2015 graduating class is working as a groundskeeper:

Graduating Class 2016 - The New Yorker - R. Kikuo Johnson
Click for full quality

I won’t go into a full tirade here about my thoughts of higher education within the United States throughout recent years, but I do think that this image sums up a few key points nicely:

  • Many graduates (either from baccalaureate or higher-level programmes) are not working in their respective fields of study
  • A vast majority of students have accrued a nearly insurmountable amount of debt
  • Those two points may be inextricably linked to one another

I know that, for me, I am not able to work in my field of study (child and adolescent development / elementary education) for those very reasons—the corresponding jobs (which I find incredibly rewarding), unfortunately, do not yield high enough salaries for me to even make ends meet. Though the cover artwork doesn’t necessarily offer any suggestion as to a solution to the problem, I think that it very poignantly brings further attention to it.

Cheers,
Zach

May 24, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a brief call for help.

Is there anyone out there who uses a recent kmail (I'm running 16.04.1 since yesterday, before that it was the latest KDE4 release) with a Novell Groupwise IMAP server?

I'm trying hard, I really like kmail and would like to keep using it, but for me right now it's extremely unstable (to the point of being unusable) - and I suspect by now that the server's IMAP implementation is at least partially to blame. In the past I've seen definitively broken server behaviour (like negative IMAP uids), the feared "Multiple merge candidates" error keeps popping up again and again, and the IMAP resource becomes unresponsive every few minutes...

So any datapoints of other kmail plus Groupwise imap users would be very much appreciated.

For reference, the server here is Novell Groupwise 2014 R2, version 14.2.0 11.3.2016, build number 123013.

Thanks!!!

May 21, 2016
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
balde internals, part 1: Foundations (May 21, 2016, 15:25 UTC)

For those of you who don't know (I never actually announced the project here), I've been working on a microframework for developing web applications in C since 2013. It is called balde, and I consider its code ready for a formal release now, despite not having all the features I planned for it. Unfortunately its documentation is not good enough yet.

I haven't worked on it for quite some time, so I don't remember how everything works and can't write proper documentation. To make this easier, I'm starting a series of posts on this blog describing the internals of the framework and the design decisions I made when creating it, so I can gradually recall how it works. Hopefully, at the end of the series I'll be able to integrate the posts into the official documentation of the project and release it! \o/

Until the release, users wanting to try balde must install it manually from GitHub or from my Gentoo overlay (the package is called net-libs/balde there). The previously released versions are very old and deprecated at this point.

So, I'll start with the foundations of the framework. It is based on GLib, the base library used by Gtk+ and GNOME applications. balde uses it as a utility library, without implementing classes or relying on the library's advanced features. That's because I plan to migrate away from GLib in the future, reimplementing the required functionality in a BSD-licensed library. The wiki has a list of the functions that must be implemented to achieve this, but it is not a high priority for now.

Another important foundation of the framework is the template engine. Instead of parsing templates at runtime, balde parses them at build time, generating C code that is compiled into the application binary. The template engine is based on a recursive-descent parser built from a parsing expression grammar. The grammar is simple enough to be easily extended, and implements most of the features needed by a basic template engine. The engine is implemented as a binary that reads the templates and generates the C source files. It is called balde-template-gen and will be the subject of a dedicated post in this series.
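To make the build-time idea concrete, here is a toy scanner for `{{ name }}`-style placeholders, the kind of first step a template-to-C generator performs before emitting source code. This is an illustrative sketch only; balde-template-gen's actual grammar and syntax may differ, and `first_placeholder` is an invented helper, not part of balde.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Find the first "{{ name }}" placeholder in tmpl and copy its name
 * into out (at most outsz - 1 characters, always NUL-terminated).
 * Returns 1 if a placeholder was found, 0 otherwise. A real generator
 * would iterate over the whole template, emitting C code that prints
 * literal segments verbatim and substitutes variables at runtime. */
static int first_placeholder(const char *tmpl, char *out, size_t outsz)
{
    const char *open = strstr(tmpl, "{{");
    if (open == NULL)
        return 0;
    const char *close = strstr(open, "}}");
    if (close == NULL)
        return 0;

    const char *p = open + 2;
    while (*p == ' ')                    /* skip spaces after "{{" */
        p++;
    size_t n = 0;
    while (p < close && *p != ' ' && n + 1 < outsz)
        out[n++] = *p++;                 /* copy the variable name */
    out[n] = '\0';
    return 1;
}
```

The appeal of doing this at build time is that the application binary never pays for parsing: the generated C code is just a sequence of writes and variable substitutions.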

A notable deficiency of the template engine is the lack of iterators, like for and while loops. This is a side effect of another basic characteristic of balde: all the data parsed from requests and sent to responses is stored as strings in the internal structures, and all the public interfaces follow the same principle. This means the current architecture does not allow passing a list of items to a template. It also means that users must handle conversions to and from strings as needed by their applications.
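Those conversions are ordinary C string handling on the application side. A minimal sketch of the pattern, assuming a request parameter arrives as a string and a number must go back out as one (the function names here are hypothetical helpers, not balde API):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a decimal string parameter, falling back to a default when the
 * value is missing or not a clean integer (e.g. "42abc" is rejected). */
static long param_to_long(const char *value, long fallback)
{
    if (value == NULL)
        return fallback;
    char *end = NULL;
    long n = strtol(value, &end, 10);
    if (end == value || *end != '\0')    /* nothing parsed, or junk left */
        return fallback;
    return n;
}

/* Convert a number back to a string for the response side. */
static void long_to_param(long n, char *buf, size_t bufsz)
{
    snprintf(buf, bufsz, "%ld", n);
}
```

Pushing this burden onto the application keeps the framework's internal structures uniform, at the cost of a little boilerplate per typed value.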

Static files are also converted to C code and compiled into the application binary, but here balde just relies on GLib's GResource infrastructure. This is something that should be reworked in the future too. Integrating templates and static resources into a concept of themes is something I want to do as soon as possible.
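For readers unfamiliar with GResource: static files are listed in a small XML manifest, which `glib-compile-resources` turns into a generated C source file that gets compiled into the binary. The manifest below is a standard GResource example; the prefix and file names are hypothetical, not balde's actual layout.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Example GResource manifest: each <file> is embedded into the
     binary and later looked up by its "/app/static/..." path. -->
<gresources>
  <gresource prefix="/app/static">
    <file>style.css</file>
    <file>logo.png</file>
  </gresource>
</gresources>
```

Running `glib-compile-resources --generate-source` on the manifest produces the C file to compile in, and the embedded data is then retrieved at runtime through the GResource lookup API.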

To make it easier for newcomers to get started with balde, it ships with a binary that creates a skeleton project using GNU Autotools, complete with basic unit-test infrastructure. The binary is called balde-quickstart and will be the subject of a dedicated post here as well.

That's all for now.

In the next post I'll talk about how URL routing works.