
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Erik Mackdanz
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jauhien Piatlicki
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Victor Ostorga
. Vikraman Choudhury
. Vlastimil Babka
. Yury German
. Zack Medico

Last updated:
September 28, 2016, 18:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

September 27, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
We do not ship SELinux sandbox (September 27, 2016, 18:47 UTC)

A few days ago a vulnerability was reported in the SELinux sandbox user space utility. The utility is part of the policycoreutils package. Luckily, Gentoo's sys-apps/policycoreutils package is not vulnerable - and not because we were clairvoyant about this issue, but because we don't ship this utility.

What is the SELinux sandbox?

The SELinux sandbox utility, aptly named sandbox, is a simple C application which executes its arguments, but only after ensuring that the task it launches is going to run in the sandbox_t domain.

This domain is specifically crafted to allow applications most of the standard privileges needed for interacting with the user (so that the user can of course still use the application) but removes many permissions that might be abused, either to obtain information from the system or to exploit vulnerabilities and gain more privileges. It also hides a number of resources on the system through namespaces.

It was developed in 2009 for Fedora and Red Hat. Given the necessary SELinux policy support though, it was usable on other distributions as well, and thus became part of the SELinux user space itself.

What is the vulnerability about?

The SELinux sandbox utility used an execution approach that did not sufficiently shield off the user's terminal access. In the PoC post we see that characters can be sent to the terminal through the ioctl() function (which executes the ioctl system call used for input/output operations against devices) and are eventually executed as commands when the application finishes.

That's bad of course. Hence the CVE-2016-7545 registration; a possible fix has also been committed upstream.

Why isn't Gentoo vulnerable / shipping with SELinux sandbox?

There's some history involved in why Gentoo does not ship the SELinux sandbox (anymore).

First of all, Gentoo already has a command called sandbox, installed through the sys-apps/sandbox package. So back in the days when we still shipped the SELinux sandbox, we continuously had to patch policycoreutils to use a different name for its sandbox application (we used sesandbox then).

But then we had a couple of security issues with the SELinux sandbox application. In 2011, CVE-2011-1011 came up in which the seunshare_mount function had a security issue. And in 2014, CVE-2014-3215 came up with - again - a security issue with seunshare.

At that point, I had had enough of this sandbox utility. First of all, it never quite worked well enough on Gentoo as-is (it also requires a policy which is not part of the upstream release), and given its wide-open access approach (it was meant to contain various types of workloads, so security concessions had to be made), I decided to no longer support the SELinux sandbox in Gentoo.

None of the Gentoo SELinux users ever asked me to add it back.

And that is why Gentoo is not vulnerable to this specific issue.

September 22, 2016

Description:
mupdf is a lightweight PDF viewer and toolkit written in portable C.

Fuzzing through mutool revealed an infinite loop in gatherresourceinfo when mutool tries to get info from a crafted PDF.

The output below is trimmed, because the original is ~1300 lines long (due to the loop).

# mutool info $FILE
[cut here]
warning: not a font dict (0 0 R)
ASAN:DEADLYSIGNAL
=================================================================
==8763==ERROR: AddressSanitizer: stack-overflow on address 0x7ffeb34e6f6c (pc 0x7f188e685b2e bp 0x7ffeb34e7410 sp 0x7ffeb34e6ea0 T0)
    #0 0x7f188e685b2d in _IO_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1266
    #1 0x7f188e6888c0 in buffered_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:2346
    #2 0x7f188e685cd4 in _IO_vfprintf /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1292
    #3 0x49927f in __interceptor_vfprintf /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:1111
    #4 0x499352 in fprintf /var/tmp/portage/sys-devel/llvm-3.8.0-r3/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:1156
    #5 0x7f188f70f03c in fz_flush_warnings /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/error.c:18:3
    #6 0x7f188f70f03c in fz_throw /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/fitz/error.c:168
    #7 0x7f188fac98d5 in pdf_parse_ind_obj /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-parse.c:565:3
    #8 0x7f188fb5fe6b in pdf_cache_object /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-xref.c:2029:13
    #9 0x7f188fb658d2 in pdf_resolve_indirect /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-xref.c:2155:12
    #10 0x7f188fbc0a0d in pdf_is_dict /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/pdf/pdf-object.c:268:2
    #11 0x53ea6a in gatherfonts /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:257:8
    #12 0x53ea6a in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:595
    #13 0x53f31b in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:603:5
    [cut here]
    #253 0x53f31b in gatherresourceinfo /var/tmp/portage/app-text/mupdf-1.9a/work/mupdf-1.9a/source/tools/pdfinfo.c:603:5

SUMMARY: AddressSanitizer: stack-overflow /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/stdio-common/vfprintf.c:1266 in _IO_vfprintf
==8763==ABORTING
1152.crashes:
PDF-1.4
Pages: 1
Retrieving info from pages 1-1...

Affected version:
1.9a

Fixed version:
N/A

Commit fix:
http://git.ghostscript.com/?p=mupdf.git;h=fdf71862fe929b4560e9f632d775c50313d6ef02

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

Timeline:
2016-08-05: bug discovered
2016-08-05: bug reported to upstream
2016-09-22: upstream released a patch
2016-09-22: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

mupdf: mutool: stack-based buffer overflow after an infinite loop in gatherresourceinfo (pdfinfo.c)

September 11, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)
4 Weeks Left to Gentoo Miniconf (September 11, 2016, 17:50 UTC)

4 weeks are left until LinuxDays and Gentoo Miniconf 2016 in Prague.

Are you excited to see Gentoo in action? Here is something to look forward to:

  • Gentoo amd64-fbsd on an ASRock E350M1
  • Gentoo arm64 on a Raspberry Pi 3 (yes, running a 64 bit kernel and userland) with systemd
  • Gentoo mipsel on a MIPS Creator CI20 with musl libc
  • Gentoo arm on an Orange Pi PC, with Clang as system compiler

dsc00015

September 07, 2016

Description:
Graphicsmagick is an Image Processing System.

Fuzzing revealed a NULL pointer access in the TIFF parser.

The complete ASan output:

# gm identify $FILE
ASAN:DEADLYSIGNAL
=================================================================
==19028==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fbd36dd6c3c bp 0x7ffe3c007090 sp 0x7ffe3c006d10 T0)
    #0 0x7fbd36dd6c3b in MagickStrlCpy /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:3
    #1 0x7fbd2b714c26 in ReadTIFFImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/tiff.c:2048:13
    #2 0x7fbd36ac506a in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13
    #3 0x7fbd36ac46ac in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9
    #4 0x7fbd36a165a0 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17
    #5 0x7fbd36a1affb in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17
    #6 0x7fbd36a6fee3 in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10
    #7 0x7fbd36a6eb78 in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16
    #8 0x7fbd3597761f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #9 0x4188d8 in _init (/usr/bin/gm+0x4188d8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:3 in MagickStrlCpy
==19028==ABORTING

Affected version:
1.3.24 (and possibly earlier)

Fixed version:
1.3.25

Commit fix:
http://hg.code.sf.net/p/graphicsmagick/code/rev/eb58028dacf5

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-17: bug discovered
2016-08-18: bug reported privately to upstream
2016-08-19: upstream released a patch
2016-09-05: upstream released 1.3.25
2016-09-07: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

graphicsmagick: NULL pointer dereference in MagickStrlCpy (utility.c)

September 06, 2016

Description:
ettercap is a comprehensive suite for man in the middle attacks.

Etterlog, which is part of the package, fails to handle malformed data produced by the fuzzer and overflows.

Since there are three issues, I'm trimming the ASan output a bit to keep this short.

# etterlog $FILE
==10077==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60e00000dac0 at pc 0x00000047b8cf bp 0x7ffdc6850580 sp 0x7ffdc684fd30                                                      
READ of size 43780 at 0x60e00000dac0 thread T0                                                                                                                                                 
    #0 0x47b8ce in memcmp /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:418            
    #1 0x50e26e in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:106:12                                                             
    #2 0x4f3d3a in create_hosts_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_analyze.c:128:7                                                              
    #3 0x4fcf0c in display_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:218:4                                                                   
    #4 0x4fcf0c in display /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:52                                                                           
    #5 0x507818 in main /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_main.c:94:4                                                                               
    #6 0x7faa8656161f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #7 0x41a408 in _start (/usr/bin/etterlog+0x41a408)                                                                                                                                         

0x60e00000dac0 is located 0 bytes to the right of 160-byte region [0x60e00000da20,0x60e00000dac0)                                                                                              
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4c1ae0 in calloc /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:66                                              
    #1 0x50e215 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:99:4                                                               


==10144==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60600000efa0 at pc 0x00000047b8cf bp 0x7ffe22881090 sp 0x7ffe22880840
READ of size 256 at 0x60600000efa0 thread T0
    #0 0x47b8ce in memcmp /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:418
    #1 0x50ff6f in update_user_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:251:12
    #2 0x50e555 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:91:10
    #3 0x4f3d3a in create_hosts_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_analyze.c:128:7
    #4 0x4fcf0c in display_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:218:4
    #5 0x4fcf0c in display /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:52
    #6 0x507818 in main /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_main.c:94:4
    #7 0x7f53a781461f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #8 0x41a408 in _start (/usr/bin/etterlog+0x41a408)

0x60600000efa0 is located 0 bytes to the right of 64-byte region [0x60600000ef60,0x60600000efa0)
allocated by thread T0 here:
    #0 0x4c1ae0 in calloc /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:66
    #1 0x50ffea in update_user_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:256:4


==10212==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60e00000de40 at pc 0x00000047b8cf bp 0x7ffe0a6b9460 sp 0x7ffe0a6b8c10
READ of size 37636 at 0x60e00000de40 thread T0
    #0 0x47b8ce in memcmp /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:418
    #1 0x50e1a3 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:89:12
    #2 0x4f3d3a in create_hosts_list /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_analyze.c:128:7
    #3 0x4fcf0c in display_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:218:4
    #4 0x4fcf0c in display /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_display.c:52
    #5 0x507818 in main /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_main.c:94:4
    #6 0x7f562119261f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #7 0x41a408 in _start (/usr/bin/etterlog+0x41a408)

0x60e00000de40 is located 0 bytes to the right of 160-byte region [0x60e00000dda0,0x60e00000de40)
allocated by thread T0 here:
    #0 0x4c1ae0 in calloc /var/tmp/temp/portage/sys-devel/llvm-3.8.0-r2/work/llvm-3.8.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:66
    #1 0x50e215 in profile_add_info /tmp/portage/net-analyzer/ettercap-9999/work/ettercap-9999/utils/etterlog/el_profiles.c:99:4

Affected version:
0.8.2

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-10: bug discovered
2016-08-11: bug reported to upstream
2016-09-06: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

ettercap: etterlog: multiple (three) heap-based buffer overflow (el_profiles.c)

Greg KH a.k.a. gregkh (homepage, bugs)
4.9 == next LTS kernel (September 06, 2016, 07:59 UTC)

As I briefly mentioned a few weeks ago on my G+ page, the plan is for the 4.9 Linux kernel release to be the next “Long Term Supported” (LTS) kernel.

Last year, at the Linux Kernel Summit, we discussed just how to pick the LTS kernel. Many years ago, we tried to let everyone know ahead of time what the kernel version would be, but that caused a lot of problems as people threw crud in there that really wasn’t ready to be merged, just to make it easier for their “day job”. That was many years ago, and people insist they aren’t going to do this again, so let’s see what happens.

I reserve the right not to pick 4.9 and support it for two years, if it turns out to be a major pain because people abused this notice. If so, I'll possibly drop back to 4.8, or just wait for 4.10 to be released. I'll let everyone know by updating the kernel.org releases page when it's time (many months from now).

If people have questions about this, email me and I will be glad to discuss it.

September 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
RethinkDB on Gentoo Linux (September 02, 2016, 06:56 UTC)


It was about time I added a new package to portage, and I'm very glad it is RethinkDB and its Python driver!

  • dev-db/rethinkdb
  • dev-python/python-rethinkdb

For those of you who have never heard about this database, I urge you to head over to their excellent website and have a good read.

Packaging RethinkDB

RethinkDB has been under my radar for quite a long time now and when they finally got serious enough about high availability I also got serious about using it at work… and obviously “getting serious” + “work” means packaging it for Gentoo Linux :)

Quick notes on packaging for Gentoo Linux:

  • This is a C++ project so it feels natural and easy to grasp
  • The configure script already offers a way of using system libraries instead of the bundled ones which is in line with Gentoo’s QA policy
  • The only grey zone in the above statement is the web UI, which is shipped precompiled

RethinkDB has a few QA violations which the ebuild is addressing by modifying the sources:

  • There is a configure.default which tries to force some configure options
  • The configure is missing some options to avoid auto installing some docs and init scripts
  • The build system does its best to guess the CXX compiler but it should offer an option to set it directly
  • The build system does not respect the user's CXXFLAGS and tries to force the usage of -O3

Getting our hands into RethinkDB

At work, we finally found the excuse to get our hands into RethinkDB when we challenged ourselves with developing a quiz game for our booth as a sponsor of EuroPython 2016.

It was a simple game where you were presented with a question and four possible answers, and you had 60 seconds to answer as many of them as you could. The trick is that we wanted to create an interactive game where the participant played on a tablet while the rest of the audience got to see who was currently playing and follow their score progression, plus their ranking for the day and the week, in real time on another screen!

Another challenge for us in the creation of this game was that we only used technologies that were new to us, and we even switched jobs so the backend Python guys would be doing the frontend JavaScript and vice versa. The stack finally looked like this:

  • Game quiz frontend: Angular2 (TypeScript)
  • Game questions API: Go
  • Real-time scores frontend: Angular2 + autobahn
  • Real-time scores API: Python 3.5 asyncio + autobahn
  • Database: RethinkDB

As you can see from the stack, we chose RethinkDB for its main strength: real-time updates pushed to the connected clients. The real-time scores frontend and API were bonded together using autobahn, while the API consumed the changefeeds (real-time updates coming from the database) and broadcast them to the frontend.

What we learnt about RethinkDB

  • We’re sure that we want to use it in production !
  • The ReQL query language is a pipeline, so its syntax is quite tricky to get familiar with (even more so when coming from MongoDB, like us); it is as powerful as it can be disconcerting
  • Realtime changefeeds have limitations which are sometimes not so easy to understand or discover (especially the order_by / secondary index part)
  • Changefeed limitations are a constraint you have to take into account in your data modeling!
  • Changefeeds + order_by can do the ordering for you when using the include_offsets option, which is amazing
  • The administration web UI is awesome
  • Proper Python 3.5 asyncio support is still not merged, which is a pain!

Try it out

Now that you can emerge rethinkdb I encourage you to try this awesome database.

Be advised that the ebuild also provides a way of configuring your rethinkdb instance by running emerge --config dev-db/rethinkdb!

I’ll now try to get in touch with upstream to get Gentoo listed on their website.

August 29, 2016
potrace: memory allocation failure (August 29, 2016, 17:23 UTC)

Description:
potrace is a utility that transforms bitmaps into vector graphics.

A crafted image, found through fuzz testing, causes a memory allocation failure.

Since the ASan symbolizer didn't start up correctly, I'm reporting only what it prints at the end.

# potrace $FILE
potrace: warning: 2.hangs: premature end of file
==13660==ERROR: AddressSanitizer failed to allocate 0x200003000 (8589946880) bytes of LargeMmapAllocator (error code: 12)
==13660==AddressSanitizer CHECK failed: /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/sanitizer_common/sanitizer_common.cc:183 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)

Affected version:
1.13

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-26: bug discovered
2016-08-27: bug reported privately to upstream
2016-08-29: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

potrace: memory allocation failure

Description:
potrace is a utility that transforms bitmaps into vector graphics.

A crafted image, found through fuzz testing, revealed a NULL pointer access.

The complete ASan output:

# potrace $FILE
potrace: warning: 48.crashes: premature end of file                                                                                                                                            
ASAN:DEADLYSIGNAL                                                                                                                                                                              
=================================================================                                                                                                                              
==13940==ERROR: AddressSanitizer: SEGV on unknown address 0x7fd7b865b800 (pc 0x7fd7ec5bcbf4 bp 0x7fff9ebad590 sp 0x7fff9ebad360 T0)                                                            
    #0 0x7fd7ec5bcbf3 in findnext /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/decompose.c:436:11                                                                             
    #1 0x7fd7ec5bcbf3 in getenv /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/decompose.c:478                                                                                  
    #2 0x7fd7ec5c3ed9 in potrace_trace /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/potracelib.c:76:7                                                                         
    #3 0x4fea6e in process_file /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/main.c:1102:10                                                                                   
    #4 0x4f872b in main /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/main.c:1250:7                                                                                            
    #5 0x7fd7eb4d961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #6 0x418fc8 in getenv (/usr/bin/potrace+0x418fc8)                                                                                                                                          
                                                                                                                                                                                               
AddressSanitizer can not provide additional info.                                                                                                                                              
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.13/work/potrace-1.13/src/decompose.c:436:11 in findnext                                                                   
==13940==ABORTING

Affected version:
1.13

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-26: bug discovered
2016-08-27: bug reported privately to upstream
2016-08-29: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

potrace: NULL pointer dereference in findnext (decompose.c)

August 23, 2016

Description:
Graphicsmagick is an Image Processing System.

Fuzzing revealed two minor issues in the TIFF parser. The two issues are triggered from different lines in tiff.c, but the underlying problem seems to be the same.

The complete ASan output:

# gm identify $FILE
==6321==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000eb12 at pc 0x7fa98ca1fcf4 bp 0x7fff957069a0 sp 0x7fff95706998                                                       
READ of size 1 at 0x60200000eb12 thread T0                                                                                                                                                     
    #0 0x7fa98ca1fcf3 in MagickStrlCpy /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:10                                                    
    #1 0x7fa98135de5a in ReadTIFFImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/tiff.c:2060:13                                                       
    #2 0x7fa98c70e06a in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13                                                     
    #3 0x7fa98c70d6ac in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9                                                      
    #4 0x7fa98c65f5a0 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17                                             
    #5 0x7fa98c663ffb in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17                                                    
    #6 0x7fa98c6b8ee3 in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10                                                 
    #7 0x7fa98c6b7b78 in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16                                                       
    #8 0x7fa98b5c061f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #9 0x4188d8 in _init (/usr/bin/gm+0x4188d8)                                                                                                                                                
                                                                                                                                                                                               
0x60200000eb12 is located 0 bytes to the right of 2-byte region [0x60200000eb10,0x60200000eb12)                                                                                                
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4c01a8 in realloc /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:71                                                     
    #1 0x7fa9810ebe5b in _TIFFCheckRealloc /var/tmp/portage/media-libs/tiff-4.0.6/work/tiff-4.0.6/libtiff/tif_aux.c:73

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4567:10 in MagickStrlCpy


# gm identify $FILE
==26025==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60300000ecf2 at pc 0x7f07a3aaab3c bp 0x7ffc558602c0 sp 0x7ffc558602b8                                                      
READ of size 1 at 0x60300000ecf2 thread T0                                                                                                                                                     
    #0 0x7f07a3aaab3b in MagickStrlCpy /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4557:7                                                     
    #1 0x7f07983e851c in ReadTIFFImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/coders/tiff.c:2048:13                                                       
    #2 0x7f07a3797a62 in ReadImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1607:13                                                     
    #3 0x7f07a3796f18 in PingImage /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/constitute.c:1370:9                                                      
    #4 0x7f07a36e6648 in IdentifyImageCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8372:17                                             
    #5 0x7f07a36eb01b in MagickCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:8862:17                                                    
    #6 0x7f07a3740a3e in GMCommandSingle /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17370:10                                                 
    #7 0x7f07a373f5bb in GMCommand /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/command.c:17423:16                                                       
    #8 0x7f07a264961f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289                                                                        
    #9 0x4188d8 in _init (/usr/bin/gm+0x4188d8)                                                                                                                                                
                                                                                                                                                                                               
0x60300000ecf2 is located 0 bytes to the right of 18-byte region [0x60300000ece0,0x60300000ecf2)                                                                                               
allocated by thread T0 here:                                                                                                                                                                   
    #0 0x4bfe28 in malloc /var/tmp/portage/sys-devel/llvm-3.8.1/work/llvm-3.8.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:52                                                      
    #1 0x7f0798178fd4 in setByteArray /var/tmp/portage/media-libs/tiff-4.0.6/work/tiff-4.0.6/libtiff/tif_dir.c:51                                                                              
                                                                                                                                                                                               
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/graphicsmagick-1.3.24/work/GraphicsMagick-1.3.24/magick/utility.c:4557:7 in MagickStrlCpy
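As a hedged illustration of the bug class (hypothetical code, not GraphicsMagick's actual MagickStrlCpy): TIFF string attributes handed out by libtiff are plain byte arrays that need not be NUL-terminated, so a strlcpy-style copy that scans the source for a terminator can read one byte past the allocation, which is exactly the over-read ASan flags above. A copy bounded by an explicit source length avoids it:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: copy at most src_len bytes from a source buffer
 * that may not be NUL-terminated, always NUL-terminating dst. A loop
 * that scans src for '\0' without the src_len bound would read past the
 * end of an unterminated region (the heap over-read reported above). */
static size_t bounded_copy(char *dst, const char *src,
                           size_t dst_size, size_t src_len)
{
    size_t n = 0;
    if (dst_size == 0)
        return 0;
    /* Stop at dst capacity, at src_len, or at an embedded NUL;
     * never touch bytes beyond src_len. */
    while (n < dst_size - 1 && n < src_len && src[n] != '\0') {
        dst[n] = src[n];
        n++;
    }
    dst[n] = '\0';
    return n;
}
```

The actual upstream fix may differ; the point is that the source length, not a terminator scan, must bound the read.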

Affected version:
1.3.24 (and possibly earlier versions)

Fixed version:
1.3.25

Commit fix:
http://hg.code.sf.net/p/graphicsmagick/code/rev/eb58028dacf5

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2016-08-17: bug discovered
2016-08-18: bug reported privately to upstream
2016-08-19: upstream released a patch
2016-08-23: blog post about the issue
2016-09-05: upstream released 1.3.25

Note:
This bug was found with American Fuzzy Lop.

Permalink:

graphicsmagick: two heap-based buffer overflows in ReadTIFFImage (tiff.c)

In Memory of Jonathan “avenj” Portnoy (August 23, 2016, 00:00 UTC)

The Gentoo project mourns the loss of Jonathan Portnoy, better known amongst us as Jon, or avenj.

Jon was an active member of the International Gentoo community, almost since its founding in 1999. He was still active until his last day.

His passing has struck us deeply and with disbelief. We all remember him as a vivid and enjoyable person, easy to reach out to and energetic in all his endeavors.

On behalf of the entire Gentoo Community, all over the world, we would like to convey our deepest sympathy for his family and friends. As per his wishes, the Gentoo Foundation has made a donation in his memory to the Perl Foundation.

Please join the community in remembering Jon on our forums.

August 20, 2016

Description:
Libav is an open source set of tools for audio and video processing.

A crafted file causes a stack-based buffer overflow. The ASan report may be confusing because it mentions get_bits, but the issue is in aac_sync.
I discovered this issue last year and reported it privately to Luca Barbato, but did not follow up on its status.
Before I made the report, Janne Grunau had already noticed the bug because a FATE test reported a failure, and he fixed it; at that time, however, no stable release included the fix.
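As a hedged sketch of the bug class (hypothetical code, not libav's actual get_bits implementation): bit readers commonly fetch input in word-sized chunks from a caller-supplied buffer, so a parser that requests more bits than a small stack buffer holds reads past its end, as in the 'hdr' frame object in the report below. Validating each request against the buffer size prevents that:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bit reader over a fixed-size buffer. Without the bounds
 * check in read_bits, a request running past size_in_bits would read
 * beyond the buffer (the stack over-read reported below). */
typedef struct {
    const uint8_t *buf;
    size_t size_in_bits;
    size_t pos; /* current bit position */
} BitReader;

static int read_bits(BitReader *br, unsigned n, uint32_t *out)
{
    uint32_t v = 0;
    if (n > 32 || br->pos + n > br->size_in_bits)
        return -1; /* would read past the end of the buffer */
    for (unsigned i = 0; i < n; i++) {
        size_t p = br->pos + i;
        v = (v << 1) | ((br->buf[p >> 3] >> (7 - (p & 7))) & 1);
    }
    br->pos += n;
    *out = v;
    return 0;
}
```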

The complete ASan output:

~ # avconv -i $FILE -f null -
==20736==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd3bd34f4a at pc 0x7f0805611189 bp 0x7ffd3bd34e20 sp 0x7ffd3bd34e18
READ of size 4 at 0x7ffd3bd34f4a thread T0
    #0 0x7f0805611188 in get_bits /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/get_bits.h:244:5
    #1 0x7f0805611188 in avpriv_aac_parse_header /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aacadtsdec.c:58
    #2 0x7f080560f19e in aac_sync /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aac_parser.c:43:17
    #3 0x7f080560a87b in ff_aac_ac3_parse /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aac_ac3_parser.c:48:25
    #4 0x7f0806fcd8e6 in av_parser_parse2 /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/parser.c:157:13
    #5 0x7f0808efd4dd in parse_packet /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:794:15
    #6 0x7f0808edae64 in read_frame_internal /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:960:24
    #7 0x7f0808ee8783 in avformat_find_stream_info /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:2156:15
    #8 0x4f62f6 in open_input_file /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv_opt.c:726:11
    #9 0x4f474f in open_files /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv_opt.c:2127:15
    #10 0x4f3f62 in avconv_parse_options /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv_opt.c:2164:11
    #11 0x528727 in main /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/avconv.c:2629:11
    #12 0x7f0803c83aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #13 0x43a5d6 in _start (/usr/bin/avconv+0x43a5d6)

Address 0x7ffd3bd34f4a is located in stack of thread T0 at offset 170 in frame
    #0 0x7f080560ee3f in aac_sync /var/tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/aac_parser.c:31

  This frame has 3 object(s):
    [32, 64) 'bits'
    [96, 116) 'hdr'
    [160, 168) 'tmp'
Shadow bytes around the buggy address:
  0x10002779e9e0: 00 00 04 f2 f2 f2 f2 f2 00[f3]f3 f3 00 00 00 00
  0x10002779e9f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x10002779ea00: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1                                                                                                                                                                                                              
  0x10002779ea10: 00 f2 f2 f2 04 f2 04 f3 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x10002779ea20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
  0x10002779ea30: f1 f1 f1 f1 00 f3 f3 f3 00 00 00 00 00 00 00 00                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3                                                                                                                                                                                                                                                  
  Stack partial redzone:   f4                                                                                                                                                                                                                                                  
  Stack after return:      f5                                                                                                                                                                                                                                                  
  Stack use after scope:   f8                                                                                                                                                                                                                                                  
  Global redzone:          f9                                                                                                                                                                                                                                                  
  Global init order:       f6                                                                                                                                                                                                                                                  
  Poisoned by user:        f7                                                                                                                                                                                                                                                  
  Container overflow:      fc                                                                                                                                                                                                                                                  
  Array cookie:            ac                                                                                                                                                                                                                                                  
  Intra object redzone:    bb                                                                                                                                                                                                                                                  
  ASan internal:           fe                                                                                                                                                                                                                                                  
  Left alloca redzone:     ca                                                                                                                                                                                                                                                  
  Right alloca redzone:    cb                                                                                                                                                                                                                                                  
==20736==ABORTING                                                                                                                                                                                                                                                              

Affected version:
11.3 (and possibly earlier versions)

Fixed version:
11.5

Commit fix:
https://git.libav.org/?p=libav.git;a=commit;h=fb1473080223a634b8ac2cca48a632d037a0a69d

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.
This bug was also discovered by Janne Grunau.

CVE:
N/A

Timeline:
2015-07-27: bug discovered
2015-07-28: bug reported privately to upstream
2016-08-20: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug does not affect ffmpeg.
The same fix was also applied to a similar part of the code in the ac3_parser.c file.

Permalink:

libav: stack-based buffer overflow in aac_sync (aac_parser.c)

August 18, 2016
Events: FrOSCon 11 (August 18, 2016, 00:00 UTC)

This weekend, the University of Applied Sciences Bonn-Rhein-Sieg will host the Free and Open Source Software Conference, better known as FrOSCon. Gentoo will be present there on 20 and 21 August with a chance for you to meet devs and other users, grab merchandise, and compile your own Gentoo button badges.

See you there!

August 17, 2016
Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)
The Meson build system at GUADEC 2016 (August 17, 2016, 16:10 UTC)

For the third year in a row, Centricular was at GUADEC, and this year we sponsored the evening party on the final day at Hoepfner’s Burghof! Hopefully everyone enjoyed it as much as we hoped. :)

The focus for me this year was to try and tell people about the work we've been doing on porting GStreamer to Meson and to that end, I gave a talk on the second day about how to build your GNOME app ~2x faster than before.

The talk title itself was a bit of a lie, since most of the talk was about how Autotools is a mess and how Meson has excellent features (better syntax!) and in-built support for most of the GNOME infrastructure to make it easy for people to use it. But for some people the attraction is also that Meson provides better support on platforms such as Windows, and improves build times on all platforms massively; ranging from 2x on Linux to 10-15x on Windows.

Thanks to the excellent people at c3voc.de, the talks were all live-streamed, and you can see my talk at their relive website for GUADEC 2016.

It was heartening to see that over the past year people have warmed up to the idea of using Meson as a replacement for Autotools. Several people said kind and encouraging words to me and Jussi over the course of the conference (it helps that GNOME is filled with a friendly community!). We will continue to improve Meson and with luck we can get rid of Autotools over time.

The best approach, as always, is to start with the simple projects, get familiar with the syntax, and report any bugs you find! We look forward to your bugs and pull requests. ;)

August 14, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)
Gentoo Miniconf 2016 Call for Papers closed (August 14, 2016, 21:08 UTC)

The Call for Papers for the Gentoo Miniconf is now closed and the acceptance notices have been sent out.
Missed the deadline? Don’t despair, the LinuxDays CfP is still open and you can still submit talk proposals there until the end of August.

August 08, 2016

Description:
potrace is a utility that transforms bitmaps into vector graphics.

Crafted BMP images revealed, through fuzz testing, the presence of three NULL pointer dereferences.
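As a hedged sketch of the bug class (hypothetical code, not potrace's actual bm_readbody_bmp): a reader that allocates a bitmap from header-supplied dimensions and writes to it without checking the result can dereference NULL when a crafted header makes the allocation fail or degenerate. Validating the sizes and the allocation avoids the SEGV class reported below:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: allocate and zero a w*h pixel map from
 * header-supplied dimensions. The two NULL returns are the checks
 * whose absence leads to a NULL dereference on crafted input. */
static unsigned char *read_rows(size_t w, size_t h)
{
    if (w == 0 || h == 0 || w > SIZE_MAX / h)
        return NULL;           /* reject degenerate or overflowing sizes */
    unsigned char *map = malloc(w * h);
    if (map == NULL)
        return NULL;           /* the missing check: never write through NULL */
    memset(map, 0, w * h);     /* safe only after the NULL check */
    return map;
}
```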

The complete ASan output:

ASAN:SIGSEGV
=================================================================
==13806==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004f027c bp 0x7ffd8442c190 sp 0x7ffd8442bfc0 T0)
    #0 0x4f027b in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717:4
    #1 0x4f027b in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f2f77104aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717 bm_readbody_bmp
==13806==ABORTING


ASAN:SIGSEGV
=================================================================
==13812==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004f0958 bp 0x7ffd1e689a50 sp 0x7ffd1e689880 T0)
    #0 0x4f0957 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744:4
    #1 0x4f0957 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7fbc3b936aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744 bm_readbody_bmp
==13812==ABORTING


ASAN:SIGSEGV
=================================================================
==13885==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000004f10b8 bp 0x7ffdf745fff0 sp 0x7ffdf745fe20 T0)
    #0 0x4f10b7 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651:11
    #1 0x4f10b7 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7fc675763aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651 bm_readbody_bmp
==13885==ABORTING

Affected version:
1.12

Fixed version:
1.13

Commit fix:
There is no public git/svn repository. If you need the individual patches, feel free to ask.

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2015-07-04: bug discovered
2015-07-05: bug reported privately to upstream
2015-10-22: upstream released 1.13
2016-08-08: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

potrace: multiple (three) NULL pointer dereferences in bm_readbody_bmp (bitmap_io.c)

potrace: divide-by-zero in bm_new (bitmap.h) (August 08, 2016, 14:51 UTC)

Description:
potrace is a utility that transforms bitmaps into vector graphics.

A crafted BMP image revealed, through fuzz testing, the presence of a division by zero.
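As a hedged sketch of the bug class (hypothetical code, not potrace's actual bm_new): BMP readers often derive a divisor from header fields, for instance pixels-per-byte computed from the bits-per-pixel field. A crafted header can drive that divisor to zero, raising the FPE reported below; rejecting nonsensical header values before dividing avoids it:

```c
/* Hypothetical sketch: bytes needed for one scanline of w pixels at
 * bpp bits per pixel. With bpp taken straight from the file, bpp > 8
 * makes ppb = 8 / bpp == 0, and the later division faults (SIGFPE on
 * x86) — the crash class reported below. */
static int row_bytes(unsigned w, unsigned bpp, unsigned *out)
{
    if (bpp == 0 || bpp > 8 || (8 % bpp) != 0)
        return -1;              /* only 1/2/4/8 bpp handled here */
    unsigned ppb = 8 / bpp;     /* pixels per byte, now >= 1 */
    *out = (w + ppb - 1) / ppb; /* safe: ppb cannot be zero */
    return 0;
}
```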

The complete ASan output:

# potrace $FILE.bmp
ASAN:DEADLYSIGNAL
=================================================================
==25102==ERROR: AddressSanitizer: FPE on unknown address 0x000000508d52 (pc 0x000000508d52 bp 0x7ffc381edff0 sp 0x7ffc381ede20 T0)
    #0 0x508d51 in bm_new /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap.h:63:24
    #1 0x508d51 in bm_readbody_bmp /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:548
    #2 0x508d51 in bm_read /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #3 0x4fe12d in process_file /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #4 0x4f82af in main /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #5 0x7f8d6729e61f in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.22-r4/work/glibc-2.22/csu/libc-start.c:289
    #6 0x419018 in getenv (/usr/bin/potrace+0x419018)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap.h:63:24 in bm_new
==25102==ABORTING

Affected version:
1.12

Fixed version:
1.13

Commit fix:
There is no public git/svn repository. If you need the individual patches, feel free to ask.

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2015-07-04: bug discovered
2015-07-05: bug reported privately to upstream
2015-10-22: upstream released 1.13
2016-08-08: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

potrace: divide-by-zero in bm_new (bitmap.h)

Description:
potrace is a utility that transforms bitmaps into vector graphics.

Crafted BMP images revealed, through fuzz testing, the presence of six heap-based buffer overflows.

To keep this post from becoming too long, I trimmed the ASan output down to the relevant traces.

==13565==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61100000a2fc at pc 0x0000004f370a bp 0x7ffd81d22f90 sp 0x7ffd81d22f88
READ of size 4 at 0x61100000a2fc thread T0
    #0 0x4f3709 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717:4
    #1 0x4f3709 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f9a1c8f4aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:717 bm_readbody_bmp


==13663==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efe4 at pc 0x0000004f3729 bp 0x7fff07737d30 sp 0x7fff07737d28
READ of size 4 at 0x60200000efe4 thread T0
    #0 0x4f3728 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651:11
    #1 0x4f3728 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f3adde99aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:651 bm_readbody_bmp


==13618==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60300000f00c at pc 0x0000004f37a9 bp 0x7ffc33e306b0 sp 0x7ffc33e306a8
READ of size 4 at 0x60300000f00c thread T0
    #0 0x4f37a8 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:652:11
    #1 0x4f37a8 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f20147f7aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:652 bm_readbody_bmp


==13624==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efe8 at pc 0x0000004f382a bp 0x7fff60b8bed0 sp 0x7fff60b8bec8
READ of size 4 at 0x60200000efe8 thread T0
    #0 0x4f3829 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:690:4
    #1 0x4f3829 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129                                                                                       
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9                                                                                    
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7                                                                                            
    #4 0x7f35633d5aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289                                                                        
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)                                                                                                                                          
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:690 bm_readbody_bmp                                                  
                                                                                                                                                                                               
                                                                                                                                                                                               
==13572==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000f018 at pc 0x0000004f38d5 bp 0x7ffc994b68d0 sp 0x7ffc994b68c8                                                      
READ of size 4 at 0x60200000f018 thread T0                                                                                                                                                     
    #0 0x4f38d4 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744:4                                                                             
    #1 0x4f38d4 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f11b6253aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:744 bm_readbody_bmp


==13753==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000efe8 at pc 0x0000004f3948 bp 0x7fff4f6df6b0 sp 0x7fff4f6df6a8
READ of size 4 at 0x60200000efe8 thread T0
    #0 0x4f3947 in bm_readbody_bmp /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:601:2
    #1 0x4f3947 in bm_read /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:129
    #2 0x4e6377 in process_file /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1056:9
    #3 0x4e0e08 in main /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/main.c:1212:7
    #4 0x7f26d3d28aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
    #5 0x435f56 in _start (/usr/bin/potrace+0x435f56)
SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/media-gfx/potrace-1.12/work/potrace-1.12/src/bitmap_io.c:601 bm_readbody_bmp

Affected version:
1.12

Fixed version:
1.13

Commit fix:
There is no public git/svn repository. If you need the individual patches, feel free to ask.

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Timeline:
2015-07-04: bug discovered
2015-07-05: bug reported privately to upstream
2015-10-22: upstream released 1.13
2016-08-08: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.

Permalink:

potrace: multiple (six) heap-based buffer overflows in bm_readbody_bmp (bitmap_io.c)

July 31, 2016
Zack Medico a.k.a. zmedico (homepage, bugs)

Suppose that you host a Gentoo rsync mirror on your company intranet, and you want it to gracefully handle bursts of many connections from clients, queuing connections as long as necessary for all of the clients to be served (if they don’t time out first). However, you don’t want to allow unlimited rsync processes, since that would risk overloading your server. In order to solve this problem, I’ve created socket-burst-dampener, an inetd-like daemon for handling bursts of connections.

It’s a very simple program, which only takes command-line arguments (no configuration file). For example:

socket-burst-dampener 873 \
--backlog 8192 --processes 128 --load-average 8 \
-- rsync --daemon

This will allow up to 128 concurrent rsync processes, while automatically backing off on processes if the load average exceeds 8. Meanwhile, the --backlog 8192 setting means that the kernel will queue up to 8192 connections (until they are served or they time out). You need to adjust the net.core.somaxconn sysctl in order for the kernel to queue that many connections, since net.core.somaxconn defaults to 128 connections (cat /proc/sys/net/core/somaxconn).

July 30, 2016
Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)
GStreamer and Meson: A New Hope (July 30, 2016, 20:52 UTC)

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate XCode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Update: I wrote a new post with detailed steps on how to build using Cerbero and generate Visual Studio project files.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.

July 27, 2016
Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is still all unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

First, you need to set up the environment on Windows by installing a bunch of external tools: Python 2, Python 3, Git, etc. You can find the instructions for that here:

https://github.com/centricular/cerbero#windows

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone https://github.com/centricular/cerbero.git

This will clone and check out the meson-1.8 branch, which will build GStreamer 1.8.x. Next, we bootstrap it:

https://github.com/centricular/cerbero#bootstrap

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0

This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins including their external dependencies. This comes to about 76 recipes. Out of all these recipes, only the following are ported to Meson and are built with MSVC:

bzip2.recipe
orc.recipe
libffi.recipe (only 32-bit)
glib.recipe
gstreamer-1.0.recipe
gst-plugins-base-1.0.recipe
gst-plugins-good-1.0.recipe
gst-plugins-bad-1.0.recipe
gst-plugins-ugly-1.0.recipe

The rest still mostly use Autotools, plain GNU make or cmake. Almost all of these are still built with MinGW. The only exception is libvpx, which uses its custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe which builds gstreamer.git went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi because of complicated reasons), you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

July 18, 2016
Sebastian Pipping a.k.a. sping (homepage, bugs)
Gimp 2.9.4 now in Gentoo (July 18, 2016, 16:24 UTC)

Hi there!

Just a quick heads up that Gimp 2.9.4 is now available in Gentoo.

Upstream has an article on what’s new with Gimp 2.9.4: GIMP 2.9.4 Released

Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)

The call for papers for the Gentoo Miniconf 2016 (8th+9th October 2016 in Prague) will close in two weeks, on 1 August 2016. Time to get your submission ready!

July 04, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)

After quite a bit of hard work, I've at long last launched WireGuard, a secure network tunnel that uses modern crypto, is extremely fast, and is easy and pleasurable to use. You can read about it at the website, but in short, it's based on the simple idea of an association between public keys and permitted IP addresses. Along the way it uses some nice crypto tricks to achieve its goal. For performance it lives in the kernel, though cross-platform versions in safe languages like Rust, Go, etc. are on their way.

The launch was wildly successful. About 10 minutes after I ran /etc/init.d/nginx restart, somebody had already put it on Hacker News and the Twitter sphere, and within 24 hours I had received 150,000 unique IPs. The reception has been very warm, and the mailing list has already started to get some patches. Distro maintainers have stepped up and packages are being prepared. There are currently packages for Gentoo, Arch, Debian, and OpenWRT, which is very exciting.

Although it's still experimental and not yet in final stable/secure form, I'd be interested in general feedback from experimenters and testers.

June 25, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.0 (June 25, 2016, 15:31 UTC)

Oh boy, this new version is so amazing in terms of improvements and contributions that it’s hard to sum it up !

Before going into more explanations I want to dedicate this release to tobes whose contributions, hard work and patience have permitted this ambitious 3.0 : THANK YOU !

This is the graph of contributed commits since 2.9 just so you realise how much this version is thanks to him:
I can’t continue without also thanking Horgix, who started this madness by splitting the code base into modular files, and pydsigner for his everlasting contributions and code reviews !

The git stat since 2.9 also speaks for itself:

 73 files changed, 7600 insertions(+), 3406 deletions(-)

So what’s new ?

  • the monolithic code base has been split into modules, each responsible for a given task py3status performs
  • major improvements in module output orchestration and execution, resulting in considerably lower CPU consumption and better i3bar responsiveness
  • refactoring of user notifications with added dbus support and rate limiting
  • improved modules error reporting
  • py3status can now survive an i3status crash and will try to respawn it
  • a new ‘container’ module output type gives the ability to group modules together
  • refactoring of the time and tztime modules brings support for all the time macros (%d, %Z etc)
  • support for stopping py3status and its modules when i3bar hide mode is used
  • refactoring of general, contribution and most noticeably modules documentation
  • more details on the rest of the changelog

Modules

Along with a cool list of improvements on the existing modules, these are the new modules:

  • new group module to cycle display of several modules (check it out, it’s insanely handy !)
  • new fedora_updates module to check for your Fedora packages updates
  • new github module to check a github repository and notifications
  • new graphite module to check metrics from graphite
  • new insync module to check your current insync status
  • new timer module to have a simple countdown displayed
  • new twitch_streaming module to check if a Twitch streamer is online
  • new vpn_status module to check your VPN status
  • new xrandr_rotate module to rotate your screens
  • new yandexdisk_status module to display Yandex.Disk status

Contributors

And of course thank you to all the others who made this version possible !

  • @egeskow
  • Alex Caswell
  • Johannes Karoff
  • Joshua Pratt
  • Maxim Baz
  • Nathan Smith
  • Themistokle Benetatos
  • Vladimir Potapev
  • Yongming Lai

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Time to retire (June 25, 2016, 07:08 UTC)

I’m sad to say it’s the end of the road for me with Gentoo, after 13 years volunteering my time (my “anniversary” is tomorrow). My time and motivation to commit to Gentoo have steadily declined over the past couple of years and eventually stopped entirely. It was an enormous part of my life for more than a decade, and I’m very grateful to everyone I’ve worked with over the years.

My last major involvement was running our participation in the Google Summer of Code, which is now fully handed off to others. Prior to that, I was involved in many things from migrating our X11 packages through the Big Modularization and maintaining nearly 400 packages to serving 6 terms on the council and as desktop manager in the pre-council days. I spent a long time trying to change and modernize our distro and culture. Some parts worked better than others, but the inertia I had to fight along the way was enormous.

No doubt I’ve got some packages floating around that need reassignment, and my retirement bug is already in progress.

Thanks, folks. You can reach me by email using my nick at this domain, or on Twitter, if you’d like to keep in touch.


Tagged: gentoo, x.org

June 23, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

I think there has been more than enough news on nextcloud origins, the fork from owncloud, … so I will keep this post to the technical bits.

Installing nextcloud on Gentoo

For now, the differences between owncloud 9 and nextcloud 9 are mostly cosmetic, so a few quick edits to the owncloud ebuild resulted in a working nextcloud one. And thanks to the different package names, you can install both in parallel (as long as they do not use the same database, of course).

So if you want to test nextcloud, it’s just a command away:

# emerge -a nextcloud

With the default webapp parameters, it will install alongside owncloud.

Migrating owncloud data

Nothing official again here, but as I run a small instance (not that much data) with simple sqlite backend, I could copy the data and configuration to nextcloud and test it while keeping owncloud.
Adapt the paths and web user to your setup: I have these webapps in the default /var/www/localhost/htdocs/ path, with www-server as the web server user.

First, clean the test data and configuration (if you logged in nextcloud):

# rm /var/www/localhost/htdocs/nextcloud/config/config.php
# rm -r /var/www/localhost/htdocs/nextcloud/data/*

Then clone owncloud’s data and config. If you feel adventurous (or short on available disk space), you can move these files instead of copying them:

# cp -a /var/www/localhost/htdocs/owncloud/data/* /var/www/localhost/htdocs/nextcloud/data/
# cp -a /var/www/localhost/htdocs/owncloud/config/config.php /var/www/localhost/htdocs/nextcloud/config/

Change all owncloud occurrences in config.php to nextcloud (there should be only one, for ‘datadirectory’). Then run the (nextcloud) updater. You can do it via the web interface, or (safer) with the CLI occ tool:

# sudo -u www-server php /var/www/localhost/htdocs/nextcloud/occ upgrade

As with “standard” owncloud upgrades, you will have to reactivate additional plugins after logging in. Also check the nextcloud log for potential warnings and errors.

In my test, the only non-official plugin that I use, files_reader (for ebooks), installed fine in nextcloud, and the rest worked just as well as in owncloud, with a lighter default theme 🙂
For now, owncloud-client works if you point it to the new /nextcloud URL on your server, but this can (and probably will) change in the future.

More migration tips and news can be found in this nextcloud forum post, including some nice detailed backup steps for mysql-backed systems migration.

June 22, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

In my previous post I have described a number of pitfalls regarding Gentoo dependency specifications. However, I have missed a minor point of correctness of various dependency types in specific dependency classes. I am going to address this in this short post.

There are three classes of dependencies in Gentoo: build-time dependencies that are installed before the source build happens, runtime dependencies that should be installed before the package is installed to the live system and ‘post’ dependencies which are pretty much runtime dependencies whose install can be delayed if necessary to avoid dependency loops. Now, there are some fun relationships between dependency classes and dependency types.

Blockers

Blockers are the dependencies used to prevent a specific package from being installed, or to force its uninstall. In modern EAPIs, there are two kinds of blockers: weak blockers (single !) and strong blockers (!!).

Weak blockers indicate that if the blocked package is installed, its uninstall may be delayed until the blocking package is installed. This is mostly used to solve file collisions between two packages — e.g. it allows the package manager to replace colliding files, then unmerge remaining files of the blocked package. It can also be used if the blocked package causes runtime issues on the blocking package.

Weak blockers make sense only in RDEPEND. While they’re technically allowed in DEPEND (making it possible for DEPEND=${RDEPEND} assignment to be valid), they are not really meaningful in DEPEND alone. That’s because weak blockers can be delayed post build, and therefore may not influence the build environment at all. In turn, after the build is done, build dependencies are no longer used, and unmerging the blocker does not make sense anymore.

Strong blockers indicate that the blocked package must be uninstalled before the dependency specification is considered satisfied. Therefore, they are meaningful both for build-time dependencies (where they indicate the blocker must be uninstalled before source build starts) and for runtime dependencies (where they indicate it must be uninstalled before install starts).

This leaves PDEPEND which is a bit unclear. Again, technically both blocker types are valid. However, weak blockers in PDEPEND would be pretty much equivalent to those in RDEPEND, so there is no reason to use that class. Strong blockers in PDEPEND would logically be equivalent to weak blockers — since the satisfaction of this dependency class can be delayed post install.
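To make the distinction concrete, here is how the two blocker kinds would typically be placed, using a hypothetical app-misc/foo-legacy package (illustrative only, not taken from the article):

```
# weak blocker in RDEPEND: foo-legacy's remaining files may be unmerged
# after the blocking package is installed (e.g. to resolve file collisions)
RDEPEND="!app-misc/foo-legacy"

# strong blocker in DEPEND: foo-legacy must be uninstalled before the
# source build starts
DEPEND="!!app-misc/foo-legacy"
```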

Any-of dependencies and :* slot operator

This is just going to be a short reminder: those types of dependencies are valid in all dependency classes, but no binding between those occurrences is provided.

An any-of dependency in DEPEND indicates that at least one of the packages will be installed before the build starts. An any-of dependency in RDEPEND (or PDEPEND) indicates that at least one of them will be installed at runtime. There is no guarantee that the dependency used to satisfy DEPEND will be the same as the one used to satisfy RDEPEND, and the latter is fully satisfied when one of the listed packages is replaced by another.

A similar case occurs for :* operator — only that slots are used instead of separate packages.

:= slot operator

Now, the ‘equals’ slot operator is a fun one. Technically, it is valid in all dependency classes — for the simple reason of DEPEND=${RDEPEND}. However, it does not make sense in DEPEND alone, as it is used to force rebuilds of an installed package while build-time dependencies apply only during the build.

The fun part is that for the := slot operator to be valid, the matching package needs to be installed when the metadata for the package is recorded — i.e. when a binary package is created or the built package is installed from source. For this to happen, a dependency guaranteeing this must be in DEPEND.

So, the common rule would be that a package dependency using := operator would have to be both in RDEPEND and DEPEND. However, strictly speaking the dependencies can be different as long as a package matching the specification from RDEPEND is guaranteed by DEPEND.
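In ebuild terms, the common rule above reduces to something like the following sketch (dev-libs/bar is a hypothetical package):

```
# := records the slot/subslot of dev-libs/bar when this package is
# installed, forcing a rebuild when that slot/subslot changes ...
RDEPEND="dev-libs/bar:="
# ... and listing it in DEPEND guarantees it is installed at build time,
# so there is a matching version whose slot/subslot can be recorded
DEPEND="${RDEPEND}"
```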

June 21, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

During my work on Gentoo, I have seen many types of dependency pitfalls that developers fell into. Sad to say, their number is increasing with new EAPI features — we are constantly introducing new ways to fail rather than working on simplifying things. I can’t say the learning curve is getting much steeper, but it is considerably easier to make a mistake.

In this article, I would like to point out a few common misunderstandings and pitfalls regarding slots, slot operators and any-of (|| ()) deps. All of those constructs are used to express dependencies that can usually be satisfied by multiple packages or package versions that can be installed in parallel, and missing this point is often the cause of trouble.

Separate package dependencies are not combined into a single slot

One of the most common mistakes is to assume that multiple package dependency specifications listed in one package are going to be combined somehow. However, there is no such guarantee, and when a package becomes slotted this fact actually becomes significant.

Of course, some package managers take various precautions to prevent the following issues. However, such precautions not only can not be relied upon but may also violate the PMS.

For example, consider the following dependency specification:

>=app-misc/foo-2
<app-misc/foo-5

It is a common way of expressing version ranges (in this case, versions 2*, 3* and 4* are acceptable). However, if app-misc/foo is slotted and there are versions satisfying the dependencies in different slots, there is no guarantee that the dependency could not be satisfied by installing foo-1 (satisfies <foo-5) and foo-6 (satisfies >=foo-2) in two slots!

Similarly, consider:

app-misc/foo[foo]
bar? ( app-misc/foo[baz] )

This one is often used to apply multiple sets of USE flags to a single package. Once again, if the package is slotted, there is no guarantee that the dependency specifications will not be satisfied by installing two slots with different USE flag configurations.

However, those problems mostly apply to fully slotted packages such as sys-libs/db where multiple slots are actually meaningfully usable by a package. With the more common use of multiple slots to provide incompatible versions of the package (e.g. binary compatibility slots), there is a more important problem: that even a single package dependency can match the wrong slot.

For non-truly multi-slotted packages, the solution to all those problems is simple: always specify the correct slot. For truly multi-slotted packages, there is no easy solution.

For example, a version range has to be expressed using an any-of dep:

|| (
     =sys-libs/db-5*
     =sys-libs/db-4*
)

Multiple sets of USE flags? Well, if you really insist, you can combine them for each matching slot separately…

|| (
    ( sys-libs/db:5.3 tools? ( sys-libs/db:5.3[cxx] ) )
    ( sys-libs/db:5.1 tools? ( sys-libs/db:5.1[cxx] ) )
    …
)

The ‘equals’ slot operator and multiple slots

A similar problem applies to the use of the EAPI 5 ‘equals’ slot operator. The PMS notes that:

=
Indicates that any slot value is acceptable. In addition, for runtime dependencies, indicates that the package will break unless a matching package with slot and sub-slot equal to the slot and sub-slot of the best installed version at the time the package was installed is available.

[…]

To implement the equals slot operator, the package manager will need to store the slot/sub-slot pair of the best installed version of the matching package. […]

PMS, 8.2.6.3 Slot Dependencies

The significant part is that the slot and subslot are recorded for the best package version matched by the specification containing the operator. So again, if the operator is used in multiple dependencies that can match multiple slots, multiple slots can actually be recorded.

Again, this becomes really significant in truly slotted packages:

|| (
     =sys-libs/db-5*
     =sys-libs/db-4*
)
sys-libs/db:=

While one may expect the code to record the slot of sys-libs/db used by the package, this may actually record any newer version that is installed while the package is being built. In other words, this may implicitly bind to db-6* (and pull it in too).

For this to work, you need to ensure that the dependency with the slot operator can not match any version newer than the two requested:

|| (
     =sys-libs/db-5*
     =sys-libs/db-4*
)
<sys-libs/db-6:=

In this case, the dependency with the operator could still match earlier versions. However, the other dependency enforces (as long as it’s in DEPEND) that at least one of the two versions specified is installed at build-time, and therefore is used by the operator as the best version matching it.

The above block can easily be extended by a single set of USE dependencies (being applied to all the package dependencies including the one with slot operator). For multiple conditional sets of USE dependencies, finding a correct solution becomes harder…

The meaning of any-of dependencies

Since I have already started using any-of dependencies in the examples, I should point out yet another problem. Many Gentoo developers do not understand how any-of dependencies work, and make wrong assumptions about them.

In an any-of group, at least one immediate child element must be matched. A blocker is considered to be matched if its associated package dependency specification is not matched.

PMS, 8.2.3 Any-of Dependency Specifications

So, PMS guarantees that if at least one of the immediate child elements (package dependencies, nested blocks) of the any-of block is matched, the dependency is considered satisfied. This is the only guarantee PMS gives you. The two common mistakes are to assume that the order is significant and that any kind of binding between packages installed at build time and at run time is provided.

Consider an any-of dependency specification like the following:

|| (
    A
    B
    C
)

In this case, it is guaranteed that at least one of the listed packages is installed at the point appropriate for the dependency class. If none of the packages are installed already, it is customary to assume the Package Manager will prefer the first one — while this is not specified and may depend on satisfiability of the dependencies, it is a reasonable assumption to make.

If multiple packages are installed, it is undefined which one is actually going to be used. In fact, the package may even provide the user with explicit run time choice of the dependency used, or use multiple of them. Assuming that A will be preferred over B, and B over C is simply wrong.

Furthermore, if one of the packages is uninstalled, while one of the remaining ones is either already installed or being installed, the dependency is still considered satisfied. It is wrong to assume that in any case the Package Manager will bind to the package used at install time, or cause rebuilds when switching between the packages.

The ‘equals’ slot operator in any-of dependencies

Finally, I am reaching the point of lately recurring debates. Let me make it clear: our current policy states that under no circumstances may := appear anywhere inside any-of dependency blocks.

Why? Because it is meaningless, it is contradictory. It is not even undefined behavior, it is a case where requirements put for the slot operator can not be satisfied. To explain this, let me recall the points made in the preceding sections.

First of all, the implementation of the ‘equals’ slot operator requires the Package Manager to explicitly bind the slot/subslot of the dependency to the installed version. This can only happen if the dependency is installed — and an any-of block only guarantees that one of them will actually be installed. Therefore, an any-of block may trigger a case when PMS-enforced requirements can not be satisfied.

Secondly, the definition of an any-of block allows replacing one of the installed packages with another at run time, while the slot operator disallows changing the slot/subslot of one of the packages. The two requested behaviors are contradictory and do not make sense. Why bind to a specific version of one package, while any version of the other package is allowed?

Thirdly, the definition of an any-of block does not specify any particular order/preference of packages. If the listed packages do not block one another, you could end up having multiple of them installed, and bound to specific slots/subslots. Therefore, the Package Manager should allow you to replace A:1 with B:2 but not with B:1 nor with A:2. We’re reaching insanity now.

Now, all of the above is purely theoretical. The Package Manager can do pretty much anything given invalid input, and that is why many developers wrongly assume that slot operators work inside any-of. The truth is: they do not, the developer just did not test all the cases correctly. The Portage behavior varies from allowing replacements with no rebuilds, to requiring both of mutually exclusive packages to be installed simultaneously.

June 19, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

Since GLEP 67 was approved, bug assignment became easier. However, there were still many metadata.xml files which made this suboptimal. Today, I have fixed most of them and I would like to provide this short guide on how to write good metadata.xml files.

The bug assignment procedure

To understand the points that I am going to make, let’s take a look at how bug assignment happens these days. Assuming a typical case of bug related to a specific package (or multiple packages), the procedure for assigning the bug involves, for each package:

  1. reading all <maintainer/> elements from the package’s metadata.xml file, in order;
  2. filtering the maintainers based on restrict="" attributes (if any);
  3. filtering and reordering the maintainers based on <description/>s;
  4. assigning the bug to the first maintainer left, and CC-ing the remaining ones.

I think the procedure is quite clear. Since we no longer have <herd/> elements with special meaning applied to them, the assignment is mostly influenced by maintainer occurrence order. Restrictions can be used to limit maintenance to specific versions of a package, and descriptions to apply special rules and conditions.

Now, for semi-automatic bug assignment, only the first or the first two of the above steps can be clearly automated. Applying restrictions correctly requires understanding whether the bug can be correctly isolated to a specific version range, as some bugs (e.g. invalid metadata) may require being fixed in multiple versions of the package. Descriptions, in turn, are written for humans and require a human to interpret them.

What belongs in a good description

Now, many of existing metadata.xml files had either useless or even problematic maintainer descriptions. This is a problem since it increases time needed for bug assignment, and makes automation harder. Common examples of bad maintainer descriptions include:

  1. Assign bugs to him; CC him on bugs — this is either redundant or contradictory. Ensure that maintainers are listed in correct order, and bugs will be assigned correctly. Those descriptions only force a human to read them and possibly change the automatically determined order.
  2. Primary maintainer; proxied maintainer — this is some information but it does not change anything. If the maintainer comes first, he’s obviously the primary one. If the maintainer has non-Gentoo e-mail and there are proxies listed, he’s obviously proxied. And even if we did not know that, does it change anything? Again, we are forced to read information we do not need.

Good maintainer descriptions include:

  1. Upstream; CC on bugs concerning upstream, Only CC on bugs that involve USE="d3d9" — useful information that influences bug assignment;
  2. Feel free to fix/update, All modifications to this package must be approved by the wxwidgets herd. — important information for other developers.

So, before adding another description, please answer two questions: will the information benefit anyone? Can’t it be expressed in machine-readable form?
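For illustration, a hypothetical maintainer entry combining a version restriction with a genuinely useful description might look like this (the package atom and address are invented):

```xml
<maintainer type="person" restrict="&gt;=app-foo/bar-2">
  <email>larry@gentoo.org</email>
  <description>Upstream; CC on bugs concerning upstream</description>
</maintainer>
```

The restrict="" attribute carries the version range in machine-readable form, while the description is reserved for information only a human can act on.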

Proxy-maintained packages

Since a lot of the affected packages are maintained by proxied maintainers, I’d like to explicitly point out how proxy-maintained packages are to be described. This overlaps with the current Proxy maintainers team policy.

For proxy-maintained packages, the maintainers should be listed in the following order:

  1. actual package maintainers, in appropriate order — including developers maintaining or co-maintaining the package, proxied maintainers and Gentoo projects;
  2. developer proxies, preferably described as such — i.e. developers who do not actively maintain the package but only proxy for the maintainers;
  3. Proxy-maintainers project — serving as the generic fallback proxy.

I would like to put more emphasis on the key point here — the maintainers should be listed in an order that makes it possible to clearly distinguish packages that are maintained only by a proxied maintainer (with developers acting as proxies) from packages that are maintained by Gentoo developers and co-maintained by a proxied maintainer.
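Put together, a hypothetical metadata.xml for a purely proxy-maintained package following the above ordering could look like this (all names and addresses are invented):

```xml
<pkgmetadata>
  <!-- 1. the actual (proxied) maintainer comes first -->
  <maintainer type="person">
    <email>jrandom@example.com</email>
    <name>J. Random Contributor</name>
  </maintainer>
  <!-- 2. a developer proxy, described as such -->
  <maintainer type="person">
    <email>larry@gentoo.org</email>
    <description>Proxy; do not assign bugs</description>
  </maintainer>
  <!-- 3. the generic fallback proxy -->
  <maintainer type="project">
    <email>proxy-maint@gentoo.org</email>
    <name>Proxy Maintainers</name>
  </maintainer>
</pkgmetadata>
```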

Third-party repositories (overlays)

As a last point, I would like to mention the special case of unofficial Gentoo repositories. Unlike in the official repositories, metadata.xml files there cannot be fully trusted. The reason is quite simple: many users copy (fork) packages from Gentoo along with their metadata.xml files. If we were to trust those files, we would be assigning overlay bugs to the Gentoo developers maintaining the original package!

For this reason, all bugs on unofficial repository packages are assigned to the repository owners.

June 17, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
bugs.gentoo.org: bug assignment UserJS (June 17, 2016, 12:48 UTC)

Since time does not permit me to write at more length, just a short note: yesterday I published a Gentoo Bugzilla bug assignment UserJS. When enabled, it automatically tries to find package names in the bug summary, fetches the maintainers for them (from packages.g.o) and displays them in a table with quick assignment/CC checkboxes.

Note that it’s still early work. If you find any bugs, please let me know. Patches will be welcome too, and so would some redesign, since it currently looks pretty bad: just the standard Bugzilla style applied to plain HTML.

Update: now on GitHub as bug-assign-user-js

June 15, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)

Trojitá, a fast Qt IMAP e-mail client, has a shiny new release. A highlight of the 0.7 version is support for OpenPGP ("GPG") and S/MIME ("X.509") encryption -- in a read-only mode for now. Here's a short summary of the most important changes:

  • Verification of OpenPGP/GPG and S/MIME/CMS/X.509 signatures, and support for decryption of these messages
  • IMAP, MIME, SMTP and general bugfixes
  • GUI tweaks and usability improvements
  • Zooming of e-mail content and improvements for vision-impaired users
  • New set of icons matching the Breeze theme
  • Reworked e-mail header display
  • This release now needs Qt 5 (5.2 or newer, 5.6 is recommended)

As usual, the code is available in our git as a "v0.7" tag. You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

The Trojitá developers

June 06, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo, Openstack and OSIC (June 06, 2016, 05:00 UTC)

What to use it for

I recently applied for, and received, an allocation from https://osic.org/ to extend support for running OpenStack on Gentoo. The end goal of this is to allow Gentoo to become a gated job within the OpenStack test infrastructure. To do that, we need to add support for building an image that can be used.

(pre)work

To speed up the work on adding support for generating an OpenStack Infra Gentoo image, I have already completed work on adding Gentoo support to diskimage-builder. You can see images at http://gentoo.osuosl.org/experimental/amd64/openstack/

(actual)work

The actual work has been slow going, unfortunately; working with upstreams to add Gentoo support has tended to uncover other issues that need fixing along the way. The main thing that slowed me down, though, was the OpenStack summit (Newton), which went on at the same time; reviews were delayed by at least a week, usually two.

Since then, though, I've been able to work through some of the issues and have started testing the final image build in diskimage-builder.

More to do

The main things left to do are to add Gentoo support to the bindep element within diskimage-builder and to smooth out any other rough edges in other elements (if they exist). After that, OpenStack Infra can start caching a Gentoo image and the real work can begin: adding Gentoo support to the OpenStack-Ansible project to allow for better deployments.

May 27, 2016
New Gentoo LiveDVD "Choice Edition" (May 27, 2016, 00:00 UTC)

We’re happy to announce the availability of an updated Gentoo LiveDVD. As usual, you can find it on our downloads page.

May 26, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Akonadi for e-mail needs to die (May 26, 2016, 10:48 UTC)

So, I'm officially giving up on kmail2 (i.e., the Akonadi-based version of kmail) on the last one of my PCs now. I have tried hard and put in a lot of effort to get it working. However, it costs me a significant amount of time and effort just to be able to receive and read e-mail - meaning hanging IMAP resources every few minutes, the feared "Multiple merge candidates" bug popping up again and again, and other surprise events. That is plainly not acceptable in the workplace, where I need to rely on e-mail as means of communication. By leaving kmail2 I seem to be following many many other people... Even dedicated KDE enthusiasts that I know have by now migrated to Trojita or Thunderbird.

My conclusion after all these years, based on my personal experience, is that the usage of Akonadi for e-mail is a failed experiment. It was a nice idea in theory, and may work fine for some people. I am certain that a lot of effort has been put into improving it, I applaud the developers of both kmail and Akonadi for their tenaciousness and vision and definitely thank them for their work. Sadly, however, if something doesn't become robust and error-tolerant after over 5 (five) years of continuous development effort, the question pops up whether the initial architectural idea wasn't a bad one in the first place - in particular in terms of unhandleable complexity.

I am not sure why precisely in my case things turn out so badly. One possible candidate is the university mail server that I'm stuck with, running Novell Groupwise. I've seen rather odd behaviour in the IMAP replies in the past there. That said, there's the robustness principle for software to consider, and even if Groupwise were to do silly things, other IMAP clients seem to get along with it fine.

Recently I've heard some rumors about a new framework called Sink (or Akonadi-Next), which seems to be currently under development... I hope it'll be less fragile, and less overcomplexified. The choice of name is not really that convincing though (where did my e-mails go again?).

Now for the question and answer session...

Question: Why do you post such negative stuff? You are only discouraging our volunteers.
Answer: Because the motto of the drowned god doesn't apply to software. What is dead should better remain dead, and not suffer continuous revival efforts while users run away and the brand is damaged. Also, I'm a volunteer myself and invest a lot of time and effort into Linux. I've been seeing the resulting fallout. It likely scared off other prospective help.

Question: Have you tried restarting Akonadi? Have you tried clearing the Akonadi cache? Have you tried starting with a fresh database?
Answer: Yes. Yes. Yes. Many times. And yes to many more things. Did I mention that I spent a lot of time with that? I'll miss the akonadiconsole window. Or maybe not.

Question: Do you think kmail2 (the Akonadi-based kmail) can be saved somehow?
Answer: Maybe. One could suggest an additional agent as a replacement for the usual IMAP module. Let's call it IMAP-stupid, and mandate that it uses only a bare minimum of server features and always runs in disconnected mode... Then again, I don't know the code, and don't know if that is feasible. Also, for some people kmail2 seems to work perfectly fine.

Question: So what e-mail program will you use now?
Answer: I will use kmail. I love kmail. Precisely, I will use Pali Rohar's noakonadi fork, which is based on kdepim 4.4. It is neither perfect nor bug-free, but accesses all my e-mail accounts reliably. This is what I've been using on my home desktop all the time (never upgraded) and what I downgraded my laptop to some time ago after losing many mails.

Question: So can you recommend running this ages-old kmail1 variant?
Answer: Yes and no. Yes, because (at least in my case) it seems to get the basic job done much more reliably. Yes, because it feels a lot snappier and produces far less random surprises. No, because it is essentially unmaintained, has some bugs, and is written for KDE 4, which is slowly going away. No, because Qt5-based kmail2 has more features and does look sexier. No, because you lose the useful Akonadi integration of addressbook and calendar.
That said, here are the two bugs of kmail1 that I find most annoying right now: 1) PGP/MIME cleartext signature is broken (at random some signatures are not verified correctly and/or bad signatures are produced), and 2), only in a Qt5 / Plasma environment, attachments don't open on click anymore, but can only be saved. (Which is odd since e.g. Okular as viewer is launched but never appears on screen, and the temporary file is written but immediately disappears... need to investigate.)

Question: I have bugfixes / patches for kmail1. What should I do?
Answer: Send them!!! I'll be happy to test and forward.

Question: What will you do when Qt4 / kdelibs goes away?
Answer: Dunno. Luckily I'm involved in packaging myself. :)


May 24, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a brief call for help.

Is there anyone out there who uses a recent kmail (I'm running 16.04.1 since yesterday, before that it was the latest KDE4 release) with a Novell Groupwise IMAP server?

I'm trying hard, I really like kmail and would like to keep using it, but for me right now it's extremely unstable (to the point of being unusable) - and I suspect by now that the server IMAP implementation is at least partially to blame. In the past I've seen definitive broken server behaviour (like negative IMAP uids), the feared "Multiple merge candidates" keeps popping up again and again, and the IMAP resource becomes unresponsive every few minutes...

So any datapoints of other kmail plus Groupwise imap users would be very much appreciated.

For reference, the server here is Novell Groupwise 2014 R2, version 14.2.0 11.3.2016, build number 123013.

Thanks!!!

May 21, 2016
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
balde internals, part 1: Foundations (May 21, 2016, 15:25 UTC)

For those of you who don't know (I never actually announced the project here), I have been working on a microframework for developing web applications in C since 2013. It is called balde, and I consider its code ready for a formal release now, despite it not having all the features I planned for it. Unfortunately its documentation is not good enough yet.

I haven't worked on it for quite some time, so I don't remember how everything works and can't write proper documentation. To make this easier, I'm starting a series of posts on this blog describing the internals of the framework and the design decisions I made when creating it, so I can gradually remember how it works. Hopefully at the end of the series I'll be able to integrate the posts with the official documentation of the project and release it! \o/

Before the release, users willing to try balde must install it manually from GitHub or using my Gentoo overlay (the package is called net-libs/balde there). The previously released versions are very old and deprecated at this point.

So, I'll start by talking about the foundations of the framework. It is based on GLib, the base library used by Gtk+ and GNOME applications. balde uses it as a utility library, without implementing classes or relying on advanced features of the library. That's because I plan to migrate away from GLib in the future, reimplementing the required functionality in a BSD-licensed library. I have a list of functions that must be implemented to achieve this objective in the wiki, but this is not a high priority for now.

Another important foundation of the framework is the template engine. Instead of parsing templates at runtime, balde parses them at build time, generating C code that is compiled into the application binary. The template engine is based on a recursive-descent parser, built with a parsing expression grammar. The grammar is simple enough to be easily extended, and implements most of the features needed by a basic template engine. The template engine is implemented as a binary that reads the templates and generates the C source files. It is called balde-template-gen and will be the subject of a dedicated post in this series.

A notable deficiency of the template engine is the lack of iterators, like for and while loops. This is a side effect of another basic characteristic of balde: all the data parsed from requests and sent to responses is stored as strings in the internal structures, and all the public interfaces follow the same principle. That means that the current architecture does not allow passing a list of items to a template. It also means that users must handle type conversions from and to strings, as needed by their applications.

Static files are also converted to C code and compiled into the application binary, but here balde just relies on GLib's GResource infrastructure. This is something that should be reworked in the future too. Integrating templates and static resources, and implementing a concept of themes, is something that I want to do as soon as possible.

To make it easier for newcomers to get started with balde, it comes with a binary that can create a skeleton project using GNU Autotools, and with basic unit test infrastructure. The binary is called balde-quickstart and will be the subject of a dedicated post here as well.

That's all for now.

In the next post I'll talk about how URL routing works.

May 16, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
How LINGUAS are thrice wrong! (May 16, 2016, 06:34 UTC)

The LINGUAS environment variable serves two purposes in Gentoo. On one hand, it’s the USE_EXPAND flag group for USE flags controlling the installation of localizations. On the other, it’s a gettext-specific environment variable controlling the installation of localizations in some of the build systems supporting gettext. The fun fact is, both uses are simply wrong.

Why LINGUAS as an environment variable is wrong?

Let’s start with the upstream-blessed LINGUAS environment variable. If set, it limits localization files installed by autotools+gettext-based build systems (and some more) to the subset matching specified locales. At first, it may sound like a useful feature. However, it is an implicit feature, and therefore one causing a lot of confusion for the package manager.

Long story short, in this context the package manager does not know anything about LINGUAS. It’s just a random environment variable that has some value and possibly may be saved somewhere in package metadata. However, this value can actually affect the installed files in a hardly predictable way. So even if package managers added some special meaning to LINGUAS (which would not be PMS-compliant), it still would not be good enough.

What does this mean in practice? It means that if I set LINGUAS to some value on my system, then most of the binary packages produced on it suddenly have files stripped, as compared to non-LINGUAS builds. If I installed such a binary package on some other system, it would match the LINGUAS of the build host rather than the install host. And this means the binary packages are simply incorrect.

Even worse, a change to LINGUAS cannot be propagated correctly. Even if the package manager decided to rebuild packages based on changes in LINGUAS, it has no way of knowing which locales were supported by a package, or whether LINGUAS was used at all. So you end up rebuilding all installed packages, just in case.

Why LINGUAS USE flags are wrong?

So, how do we solve all those problems? Of course, we introduce explicit LINGUAS flags. This way, the developer is expected to list all supported locales in IUSE, and the package manager can determine the enabled localizations and match binary packages correctly. All seems fine. Except that there are two problems.

The first problem is that it is cumbersome. Figuring out supported localizations and adding a dozen flags on a number of packages is time-consuming. What’s even worse, those flags need to be maintained once added. Which means you have to check supported localizations for changes on every version bump. Not all developers do that.

The second problem is that it is… a QA violation, most of the time. We already have quite a clear policy that USE flags are not supposed to control installation of small files with no explicit dependencies — and most of the localization files are exactly that!

Let me remind you why we have that policy. There are two reasons: rebuilds and binary packages.

Rebuilds are bad because every time you change LINGUAS, you end up rebuilding relevant packages, and those can be huge. You may think it uncommon — but just imagine you’ve finally finished building your new shiny Gentoo install, and noticed that you forgot to enable the localization. And guess what! You have to build a number of packages, again.

Binary packages are even worse since they are tied to a specific USE flag combination. If you build a binary package with specific LINGUAS, it can only be installed on hosts with exactly the same LINGUAS. While it would be trivial to strip localizations from an installed binary package, you have to build a fresh one instead. And with a dozen lingua-flags… you end up having thousands of possible binary package variants, if not more.

Why EAPI 5 makes things worse… or better?

Reusing the LINGUAS name for the USE_EXPAND group looked like a good idea. After all, the value would end up in the ebuild environment for use by the build system, and in most of the affected packages, LINGUAS worked out of the box with no ebuild changes! Except that it wasn’t really guaranteed to before EAPI 5.

In earlier EAPIs, LINGUAS could contain pretty much anything, since no special behavior was reserved for it. However, starting with EAPI 5 the package manager guarantees that it will only contain those values that correspond to enabled flags. This is a good thing, after all, since it finally makes LINGUAS work reliably. It has one side effect though.

Since LINGUAS is reduced to enabled USE flags, and enabled USE flags can only contain defined USE flags… it means that in any ebuild missing LINGUAS flags, LINGUAS should be effectively empty (yes, I know Portage does not do that currently, and it is a bug in Portage). To make things worse, this means set to an empty value rather than unset. In other words, disabling localization completely.

This way, a small QA issue of implicitly affecting installed localization files turned into a bigger issue of localizations suddenly not being installed. Which, in turn, can’t be fixed without introducing a proper set of LINGUAS flags everywhere, causing other kinds of QA issues and additional maintenance burden.

What would be the good solution, again?

First of all, kill LINGUAS. Either make it unset for good (and this won’t be easy since PMS kinda implies making all PM-defined variables read-only), or disable any special behavior associated with it. Make the build system compile and install all localizations.

Then, use INSTALL_MASK. It’s designed to handle this. It strips files from installed systems while preserving them in binary packages. Which means your binary packages are now more portable, and every system you install them to will get correct localizations stripped. Isn’t that way better than rebuilding things?
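As a sketch, stripping unwanted localizations via INSTALL_MASK could look like this in /etc/portage/make.conf (the listed locales are just examples):

```
# /etc/portage/make.conf
# Files matching these paths are stripped at install time on this host,
# while binary packages built here keep all localizations intact.
INSTALL_MASK="/usr/share/locale/de /usr/share/locale/fr /usr/share/locale/pl"
```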

Now, is that going to happen? I doubt it. People are rather going to focus on claiming that buggy Portage behavior was good, that QA issues are fine as long as I can strip some small files from my system in the ‘obvious’ way, that the specification should be changed to allow a corner case…

May 13, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Whew, I know that’s a long title for a post, but I wanted to make sure that I mentioned every term so that people having the same problem could readily find the post that explains what solved it for me. For some time now (ever since the 346.x series [340.76, which was the last driver that worked for me, was released on 27 January 2015]), I have had a problem with the NVIDIA Linux Display Drivers (known as nvidia-drivers in Gentoo Linux). The problem that I’ve experienced is that the newer drivers would, upon starting an X session, immediately clock up to Performance Level 2 or 3 within PowerMizer.

Before using these newer drivers, the Performance Level would only increase when it was really required (3D rendering, HD video playback, et cetera). I probably wouldn’t have even noticed that the Performance Level was changing, except that it would cause the GPU fan to spin faster, which was noticeably louder in my office.

After scouring the interwebs, I found that I was not the only person to have this problem. For reference, see this article, and this one about locking to certain Performance Levels. However, I wasn’t able to find a solution for the exact problem that I was having. If you look at the screenshot below, you’ll see that the Performance Level is set at 2, which was causing the card to run quite hot (79°C) even when it wasn’t being pushed.

NVIDIA Linux drivers stuck on high Performance Level in Xorg
Click to enlarge

It turns out that I needed to add some options to my X Server Configuration. Unfortunately, I was originally making changes in /etc/X11/xorg.conf, but they weren’t being honoured. I added the following lines to /etc/X11/xorg.conf.d/20-nvidia.conf, and the changes took effect:


Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

The portion in bold (the RegistryDwords option) was what ultimately fixed the problem for me. More information about the NVIDIA drivers can be found in their README and Installation Guide, and in particular, these settings are described on the X configuration options page. The PowerMizerDefaultAC setting may seem like it is for laptops that are plugged in to AC power, but as this system was a desktop, I found that it was always seen as being “plugged in to AC power.”

As you can see from the screenshots below, these settings did indeed fix the PowerMizer Performance Levels and subsequent temperatures for me:

NVIDIA Linux drivers Performance Level after Xorg settings
Click to enlarge

Whilst I was adding X configuration options, I also noticed that Coolbits (search for “Coolbits” on that page) was supported by the Linux driver. Here’s the excerpt about Coolbits for version 364.19 of the NVIDIA Linux driver:

Option “Coolbits” “integer”
Enables various unsupported features, such as support for GPU clock manipulation in the NV-CONTROL X extension. This option accepts a bit mask of features to enable.

WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer’s design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer’s operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.

When “2” (Bit 1) is set in the “Coolbits” option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.

When “4” (Bit 2) is set in the “Coolbits” option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.

When “8” (Bit 3) is set in the “Coolbits” option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs. Not all clock domains or performance levels may be modified.

When “16” (Bit 4) is set in the “Coolbits” option value, the nvidia-settings command line interface allows setting GPU overvoltage. This is allowed on certain GeForce GPUs.

When this option is set for an X screen, it will be applied to all X screens running on the same GPU.

The default for this option is 0 (unsupported features are disabled).

I found that I would personally like to have the options enabled by “4” and “8”, and that one can combine Coolbits by simply adding them together. For instance, the ones I wanted (“4” and “8”) added up to “12”, so that’s what I put in my configuration:


Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "Coolbits" "12"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

and that resulted in the following options being available within the nvidia-settings utility:

NVIDIA Linux drivers Performance Level after Xorg settings
Click to enlarge

Though the Coolbits portions aren’t required to fix the problems that I was having, I find them to be helpful for maintenance tasks and configurations. I hope, if you’re having problems with the NVIDIA drivers, that these instructions help give you a better understanding of how to work around any issues you may face. Feel free to comment if you have any questions, and we’ll see if we can work through them.

Cheers,
Zach

May 10, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)
New Company: Edge Security (May 10, 2016, 14:48 UTC)

I've just launched a website for my new information security consulting company, Edge Security. We're expert hackers, with a fairly diverse skill set and a lot of experience. I mention this here because in a few months we plan to release an open-source kernel module for Linux called WireGuard. No details yet, but keep your eyes open in this space.

May 08, 2016
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
How to deal with portage blockers (May 08, 2016, 21:27 UTC)

TL;DR:

  • use --autounmask=n
  • use --backtrack=1000 (or more)
  • package.mask all outdated packages that cause conflicts (usually requires more iterations)
  • run world update
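The steps above can be sketched as a command sequence; the masked atom here is only a placeholder for whatever your conflict output blames:

```
# 1.+2. run the world update with autounmask disabled and deep backtracking
emerge --ask --update --deep --newuse --autounmask=n --backtrack=1000 @world

# 3. mask an outdated package named in the conflict output, then retry;
#    repeat until emerge finds an upgrade path (or shows why none exists)
echo "<x11-libs/gtk+-3.20.0" >> /etc/portage/package.mask
```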

The problem

Occasionally (more frequently with Haskell packages) portage starts taking a long time only to tell you that it was not able to figure out the upgrade path.

Some people suggest the "wipe blockers and reinstall" solution. This post will try to explain how to actually upgrade (or find out why it’s not possible) without destroying your system.

Real-world example

I’ll take a real-world example from Gentoo’s Bugzilla: bug 579520, where the original upgrade error looked like this:

!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

x11-libs/gtk+:3

  (x11-libs/gtk+-3.18.7:3/3::gentoo, ebuild scheduled for merge) pulled in by
    (no parents that aren't satisfied by other packages in this slot)

  (x11-libs/gtk+-3.20.0:3/3::gnome, installed) pulled in by
    >=x11-libs/gtk+-3.19.12:3[introspection?] required by (gnome-base/nautilus-3.20.0:0/0::gnome, installed)
    ^^              ^^^^^^^^^
    >=x11-libs/gtk+-3.20.0:3[cups?] required by (gnome-base/gnome-core-libs-3.20.0:3.0/3.0::gnome, installed)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.4:3[introspection?] required by (media-video/totem-3.20.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.0:3[introspection?] required by (app-editors/gedit-3.20.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.5:3 required by (gnome-base/dconf-editor-3.20.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.6:3[introspection?] required by (x11-libs/gtksourceview-3.20.0:3.0/3::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.3:3[introspection,X] required by (media-gfx/eog-3.20.0:1/1::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.8:3[X,introspection?] required by (x11-wm/mutter-3.20.0:0/0::gnome, installed)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.12:3[X,wayland?] required by (gnome-base/gnome-control-center-3.20.0:2/2::gnome, installed)
    ^^              ^^^^^^^^^
    >=x11-libs/gtk+-3.19.11:3[introspection?] required by (app-text/gspell-1.0.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^^

x11-base/xorg-server:0

  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-video-nouveau-1.0.12:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-video-fbdev-0.4.4:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-input-evdev-2.10.1:0/0::gentoo, installed)
                        ^^^^^^^^^^

  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-video-vesa-2.3.4:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-input-synaptics-1.8.2:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-input-mouse-1.9.1:0/0::gentoo, installed)
                        ^^^^^^^^^^

app-text/poppler:0

  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
    >=app-text/poppler-0.32:0/51=[cxx,jpeg,lcms,tiff,xpdf-headers(+)] required by (net-print/cups-filters-1.5.0:0/0::gentoo, installed)
                           ^^^^^^
    >=app-text/poppler-0.16:0/51=[cxx] required by (app-office/libreoffice-5.0.5.2:0/0::gentoo, installed)
                           ^^^^^^
    >=app-text/poppler-0.12.3-r3:0/51= required by (app-text/texlive-core-2014-r4:0/0::gentoo, installed)
                                ^^^^^^

  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by
    >=app-text/poppler-0.33[cairo] required by (app-text/evince-3.20.0:0/evd3.4-evv3.3::gnome, ebuild scheduled for merge)
    ^^                 ^^^^

net-fs/samba:0

  (net-fs/samba-4.2.9:0/0::gentoo, installed) pulled in by
    (no parents that aren't satisfied by other packages in this slot)

  (net-fs/samba-3.6.25:0/0::gentoo, ebuild scheduled for merge) pulled in by
    net-fs/samba[smbclient] required by (media-sound/xmms2-0.8-r2:0/0::gentoo, ebuild scheduled for merge)
                 ^^^^^^^^^


It may be possible to solve this problem by using package.mask to
prevent one of those packages from being selected. However, it is also
possible that conflicting dependencies exist such that they are
impossible to satisfy simultaneously.  If such a conflict exists in
the dependencies of two different packages, then those packages can
not be installed simultaneously.

For more information, see MASKED PACKAGES section in the emerge man
page or refer to the Gentoo Handbook.


emerge: there are no ebuilds to satisfy ">=dev-libs/boost-1.55:0/1.57.0=".
(dependency required by "app-office/libreoffice-5.0.5.2::gentoo" [installed])
(dependency required by "@selected" [set])
(dependency required by "@world" [argument])

A lot of text! Let’s trim it down to the essential detail first (AKA how to actually read it). I’ve dropped the "cause" of the conflicts from the previous listing and left only the problematic packages:

!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

x11-libs/gtk+:3
  (x11-libs/gtk+-3.18.7:3/3::gentoo, ebuild scheduled for merge) pulled in by
  (x11-libs/gtk+-3.20.0:3/3::gnome, installed) pulled in by

x11-base/xorg-server:0
  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by

app-text/poppler:0
  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by

net-fs/samba:0
  (net-fs/samba-4.2.9:0/0::gentoo, installed) pulled in by
  (net-fs/samba-3.6.25:0/0::gentoo, ebuild scheduled for merge) pulled in by

emerge: there are no ebuilds to satisfy ">=dev-libs/boost-1.55:0/1.57.0=".

That is more manageable. We have 4 "conflicts" here and one "missing" package.

Note: all the listed requirements under "conflicts" (the previous listing) suggest these are >= depends. Thus the dependencies themselves can’t block the upgrade path, so the error message is misleading.

For us it means that portage leaves old versions of gtk+ (and others) for some unknown reason.

To get an idea of how to explore this situation we need to somehow hide the outdated packages from portage and retry the update. It has the same effect as uninstalling and reinstalling a package, without actually doing it :)

package.mask does exactly that. To keep the package truly hidden we also need to disable autounmask: --autounmask=n (the default is y).

Let’s hide outdated x11-libs/gtk+-3.18.7 package from portage:

# /etc/portage/package.mask
<x11-libs/gtk+-3.20.0:3
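
With the mask in place, we retry the update with autounmask disabled. A sketch of the invocation (the flags besides --autounmask=n are just the usual update set and may differ from what was actually run):

```
emerge --ask --update --deep --newuse --autounmask=n @world
```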

The blocker list became shorter but still does not make sense:

x11-base/xorg-server:0
  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by
  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
                        ^^^^^^^^^^

app-text/poppler:0
  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by

Blocking more old stuff:

# /etc/portage/package.mask
<x11-libs/gtk+-3.20.0:3
<x11-base/xorg-server-1.18.3
<app-text/poppler-0.42.0

The output is:

[blocks B      ] <dev-util/gdbus-codegen-2.48.0 ("<dev-util/gdbus-codegen-2.48.0" is blocking dev-libs/glib-2.48.0)

* Error: The above package list contains packages which cannot be
* installed at the same time on the same system.

 (dev-libs/glib-2.48.0:2/2::gentoo, ebuild scheduled for merge) pulled in by

 (dev-util/gdbus-codegen-2.46.2:0/0::gentoo, ebuild scheduled for merge) pulled in by

That’s our blocker! The stable <dev-util/gdbus-codegen-2.48.0 is blocking the unstable dev-libs/glib-2.48.0.

The solution is to ~arch keyword dev-util/gdbus-codegen-2.48.0:

# /etc/portage/package.accept_keywords
dev-util/gdbus-codegen

And run world update.

Simple!


April 28, 2016
GSoC 2016: Five projects accepted (April 28, 2016, 00:00 UTC)

We are excited to announce that 5 students have been selected to participate with Gentoo during the Google Summer of Code 2016!

You can follow our students’ progress on the gentoo-soc mailing list and chat with us regarding our GSoC projects via IRC in #gentoo-soc on freenode.
Congratulations to all the students. We look forward to their contributions!

GSoC logo

Accepted projects

Clang native support - Lei Zhang: Bring native clang/LLVM support in Gentoo.

Continuous Stabilization - Pallav Agarwal: Automate the package stabilization process using continuous integration practices.

kernelconfig - André Erdmann: Consistently generate custom Linux kernel configurations from curated sources.

libebuild - Denys Romanchuk: Create a common shared C-based implementation for package management and other ebuild operations in the form of a library.

Gentoo-GPG - Angelos Perivolaropoulos: Code the new Meta-Manifest system for Gentoo and improve Gentoo Keys capabilities.

Events: Gentoo Miniconf 2016 (April 28, 2016, 00:00 UTC)

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University.

Want to participate? The call for papers is open until 1 August 2016.

April 25, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University in Prague (FIT ČVUT).

The call for papers is now open where you can submit your session proposal until 1 August 2016. Want to have a meeting, discussion, presentation, workshop, do ebuild hacking, or anything else? Tell us!

miniconf-2016

April 15, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

Those of you who use my Gentoo repository mirrors may have noticed that the repositories are constructed of original repository commits automatically merged with cache updates. While the original commits are signed (at least in the official Gentoo repository), the automated cache updates and merge commits are not. Why?

Actually, I considered signing them more than once, and even discussed it a bit with Kristian. Each time, however, I decided against it. I was seriously concerned that those automatic signatures would not be able to provide a sufficient level of security — and could cause users to believe the commits are authentic even when they are not. I think it is worth explaining why.

Verifying the original commits

While this may not be entirely clear, by signing the merge commits I would implicitly approve the original commits as well. This might be worked around via some kind of policy requiring the developer to perform additional verification, but such a policy would be impractical and confusing. Therefore, it only seems reasonable to verify the original commits before signing merges.

The problem with that is that we still do not have an official verification tool for repository commits. There’s the whole Gentoo-keys project that aims to eventually solve the problem but it’s not there yet. Maybe this year’s Summer of Code will change that…

Not having official verification routines, I would have to implement my own. I’m not saying it would be that hard — but it would always be semi-official, at best. Of course, I could spend a day or two contributing the needed code to Gentoo-keys and prevent some student from getting the $5500 of Google money… but that would be the non-enterprise way of solving the urgent problem.

Protecting the signing key

The other important point is the security of the key used to sign commits. For the whole effort to make any sense, the key needs to be strongly protected against compromise. Keeping the key (or even a subkey) unencrypted on the server really diminishes the whole effort (I’m not pointing fingers here!).

Basic rules first: the primary key is kept off-line and used only to generate the signing subkey. The signing subkey is stored encrypted on the server and used via gpg-agent, so that it is never kept unencrypted outside of memory. All nice and shiny.

The problem is — this means someone needs to type the password in. Which means there needs to be an interactive bootstrap process. Which means every time server reboots for some reason, or gpg-agent dies, or whatever, the mirrors stop and wait for me to come and type the password in. Hopefully when I’m around some semi-secure device.

Protecting the software

Even with all those points considered and solved satisfactorily, there’s one more issue: the software. I won’t be running all those scripts in my home. So it’s not just me you have to trust — you have to trust all the other people with administrative access to the machine that’s running the scripts, and you have to trust the employees of the hosting company who have physical access to the machine.

I mean, any one of them could go and attempt to alter the data somehow. Even if I tried hard, I wouldn’t be able to protect my scripts from this. In the worst case, they are going to add a valid, verified signature to data that has been altered externally. What’s the value of this signature then?

And this is the exact reason why I don’t do automatic signatures.

How to verify the mirrors then?

So if automatic signatures are not the way, how can you verify the commits on repository mirrors? The answer is not that complex.

As I’ve mentioned, the mirrors use merge commits to combine metadata updates with original repository commits. What’s important is that this preserves the original commits, along with their valid signatures and therefore provides a way to verify them. What’s the use of that?

Well, you can look for the last merge commit to find the matching upstream commit. Then you can use the usual procedure to verify the upstream commit. And then, you can diff it against the mirror HEAD to see that only caches and other metadata have been altered. While this doesn’t guarantee that the alterations are genuine, the danger coming from them is rather small (if any).
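
The checks above can be sketched with plain git commands. Below is a toy demonstration under the assumption of the mirror layout described in the post (merge commits joining upstream history with metadata-only commits); it fabricates such a repository rather than using a real mirror, and it skips the `git verify-commit` step on the upstream parent, since a throwaway repo has no signatures to verify:

```shell
# Build a toy "mirror" repo: an upstream commit, a metadata-only
# commit, and a merge commit combining them.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo 'ebuild v1' > pkg.ebuild
git add pkg.ebuild
git commit -qm 'upstream: add pkg'
upstream=$(git rev-parse HEAD)

# The mirror adds a cache regeneration commit and merges it in.
git checkout -qb cache
mkdir metadata
echo cache > metadata/md5-cache
git add metadata
git commit -qm 'mirror: regenerate cache'
git checkout -q -
git merge -q --no-ff -m 'merge cache update' cache

# Step 1: the merge's parents include the matching upstream commit.
git log -1 --format='%P'
# Step 2 (on a real mirror): git verify-commit "$upstream"
# Step 3: only caches/metadata differ between upstream and HEAD.
git diff --name-only "$upstream"..HEAD
```

The last command prints only the metadata path, confirming that nothing outside the cache was altered relative to the verified upstream commit.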

April 11, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

A couple weeks ago, I decided to update my primary laptop’s kernel from 4.0 to 4.5. Everything went smoothly with the exception of my wireless networking. This particular laptop uses a wifi chipset that is controlled by the Intel Wireless DVM firmware:


# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

According to the Intel Linux support for wireless networking page, I need kernel support for the ‘iwlwifi’ driver. I remembered this requirement from building the previous kernel, so I included it in the new 4.5 kernel. The new kernel had some additional options, though:


[*] Intel devices
...
< > Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
< > Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->

As previously mentioned, the Kernel page for iwlwifi indicates that I need the DVM module for my particular chipset, so I selected it. Previously, I chose to build support for the driver into the kernel, and then use the firmware for the device. However, this time, I noticed that it wasn’t loading:


[ 3.962521] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 3.970843] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2
[ 3.976457] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
[ 3.996628] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG enabled
[ 3.996640] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[ 3.996647] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[ 3.996656] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[ 3.996828] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 4.306206] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
[ 9.632778] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633025] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633133] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 9.898531] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898803] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898906] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.605734] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.605983] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.606082] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.873465] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873831] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873971] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0

The strange thing, though, is that the firmware was right where it should be:


# ls -lh /lib/firmware/
total 664K
-rw-r--r-- 1 root root 662K Mar 26 13:30 iwlwifi-6000g2a-6.ucode

After digging around for a while, I finally figured out the problem. The kernel was trying to load the firmware for this device/driver before it was actually available. There are definitely ways to build the firmware into the kernel image as well, but instead of going that route, I just chose to rebuild my kernel with this driver as a module (which is actually the recommended method anyway):


[*] Intel devices
...
<M> Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
<M> Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->
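
A quick way to see how such options ended up in a given kernel config is to grep for them. Here is a small sketch (the classify_opt helper and the sample file are mine, not from the post; on a live system you would point it at /usr/src/linux/.config, or decompress /proc/config.gz first):

```shell
# Classify a kernel CONFIG_* option as built-in (=y), module (=m),
# or unset, by grepping a kernel .config file.
classify_opt() {
    # $1 = path to kernel .config, $2 = option name without CONFIG_
    if grep -q "^CONFIG_$2=m" "$1"; then echo "module"
    elif grep -q "^CONFIG_$2=y" "$1"; then echo "built-in"
    else echo "not set"
    fi
}

# Demo with a fabricated sample config matching the menu above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_IWLWIFI=m
CONFIG_IWLDVM=m
# CONFIG_IWLMVM is not set
EOF

for opt in IWLWIFI IWLDVM IWLMVM; do
    printf '%s: %s\n' "$opt" "$(classify_opt "$cfg" "$opt")"
done
# prints:
# IWLWIFI: module
# IWLDVM: module
# IWLMVM: not set
```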

If I had fully read the page instead of just skimming it, I could have saved myself a lot of time. Hopefully this post will help anyone getting the “Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2” error message.

Cheers,
Zach

March 31, 2016
Anthony Basile a.k.a. blueness (homepage, bugs)

I’ll be honest, this is a short post because the aggregation on planet.gentoo.org is failing for my account!  So, Jorge (jmbsvicetto) is debugging it and I need to push out another blog entry to trigger venus, the aggregation program.  Since I don’t like writing trivial stuff, I’m going to write something short, but hopefully important.

C standard libraries, like glibc, uClibc, musl and the like, were born out of a world in which every UNIX vendor had their own set of useful C functions.  Code portability put pressure on the various libc to incorporate these functions from one another, first leading to a mess and then to standards like POSIX, XOPEN, SUSv4 and so on.  Chapter 1 of Kerrisk’s The Linux Programming Interface has a nice write-up on this history.

We still live in the shadows of that world today.  If you look through the code base of uClibc you’ll see lots of macros like __GLIBC__, __UCLIBC__, __USE_BSD, and __USE_GNU.  These are used in #ifdef … #endif blocks meant to shield features unless you explicitly want a glibc- or uClibc-only feature.

musl has stubbornly and correctly refused to include a __MUSL__ macro.  Consider the approach to portability taken by GNU autotools.  Macros such as AC_CHECK_LIBS(), AC_CHECK_FUNC() or AC_CHECK_HEADERS() unambiguously target the feature in question without making use of __GLIBC__ or __UCLIBC__.  Whereas the previous approach lumps functions together into sets, the latter simply asks: do you have this function or not?
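
As a sketch, the autotools approach looks like this in configure.ac (the function and header names here are arbitrary examples):

```
AC_CHECK_FUNCS([mempcpy strlcpy])
AC_CHECK_HEADERS([sys/epoll.h])
```

The generated config.h then defines HAVE_MEMPCPY and friends, so source code guards on #ifdef HAVE_MEMPCPY rather than guessing the feature set from __GLIBC__ or __UCLIBC__.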

Now consider how uClibc makes use of both __GLIBC__ and __UCLIBC__.  If a function is provided by the former but not by the latter, then it expects a program to use

#if defined(__GLIBC__) && !defined(__UCLIBC__)

This is getting a bit ugly and syntactically ambiguous.  Someone not familiar with this could easily misinterpret it, or reject it.

So I’ve hit bugs like these.  I hit one in gdk-pixbuf and was not able to convince upstream to consistently use __GLIBC__ and __UCLIBC__.   Alternatively, I hit this in geocode-glib and geoclue, and they did accept it.  I went with the wrong-minded approach because that’s what was already there, and I didn’t feel like sifting through their code base and revamping their build system.  This isn’t just laziness, it’s historical weight.

So kudos to musl.  And for all the faults of GNU autotools, at least its approach to portability is correct.

March 28, 2016
Anthony Basile a.k.a. blueness (homepage, bugs)

RBAC is a security feature of the hardened-sources kernels.  As its name suggests, it’s a role-based access control system which allows you to define policies restricting access to files, sockets and other system resources.   Even root is restricted, so attacks that escalate privilege are not going to get far even if they do obtain root.  In fact, you should be able to give out remote root access to anyone on a well configured system running RBAC and still remain confident that you are not going to be owned!  I wouldn’t recommend it just in case, but it should be possible.

It is important to understand what RBAC will give you and what it will not.  RBAC has to be part of a more comprehensive security plan and is not a single security solution.  In particular, if one can compromise the kernel, then one can proceed to compromise the RBAC system itself and undermine whatever security it offers.  Or put another way, protecting root is pretty much a moot point if an attacker is able to get ring 0 privileges.  So, you need to start with an already hardened kernel, that is a kernel which is able to protect itself.  In practice, this means configuring most of the GRKERNSEC_* and PAX_* features of a hardened-sources kernel.  Of course, if you’re planning on running RBAC, you need to have that option on too.

Once you have a system up and running with a properly configured kernel, the next step is to set up the policy file which lives at /etc/grsec/policy.  This is where the fun begins because you need to ask yourself what kind of a system you’re going to be running and decide on the policies you’re going to implement.  Most of the existing literature is about setting up a minimum privilege system for a server which runs only a few simple processes, something like a LAMP stack.  I did this for years when I ran a moodle server for D’Youville College.  For a minimum privilege system, you want to deny-by-default and only allow certain processes to have access to certain resources as explicitly stated in the policy file.  RBAC is ideally suited for this.  Recently, however, I was asked to set up a system where the opposite was the case, so this article is going to explore the situation where you want to allow-by-default; however, for completeness let me briefly cover deny-by-default first.

The easiest way to proceed is to get all your services running as they should and then turn on learning mode for about a week, or at least until you have one cycle of, say, log rotations and other cron based jobs.  Basically, each of your services should have attempted to access each resource at least once so the event gets logged.  You then distill those logs into a policy file describing only what should be permitted and tweak as needed.  In outline, you proceed as follows:

1. gradm -P  # Create a password to enable/disable the entire RBAC system
2. gradm -P admin  # Create a password to authenticate to the admin role
3. gradm -F -L /etc/grsec/learning.log # Turn on system wide learning
4. # Wait a week.  Don't do anything you don't want to learn.
5. gradm -F -L /etc/grsec/learning.log -O /etc/grsec/policy  # Generate the policy
6. gradm -E # Enable RBAC system wide
7. # Look for denials.
8. gradm -a admin  # Authenticate to admin to do extraordinary things, like tweak the policy file
9. gradm -R # reload the policy file
10. gradm -u # Drop those privileges to do ordinary things
11. gradm -D # Disable RBAC system wide if you have to

Easy right?  This will get you pretty far but you’ll probably discover that some things you want to work are still being denied because those particular events never occurred during the learning.  A typical example here, is you might have ssh’ed in from one IP, but now you’re ssh-ing in from a different IP and you’re getting denied.  To tweak your policy, you first have to escape the restrictions placed on root by transitioning to the admin role.  Then using dmesg you can see what was denied, for example:

[14898.986295] grsec: From 192.168.5.2: (root:U:/) denied access to hidden file / by /bin/ls[ls:4751] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:4327] uid/euid:0/0 gid/egid:0/0

This tells you that root, logged in via ssh from 192.168.5.2, tried to ls / but was denied.  As we’ll see below, this is a one line fix, but if there are a cluster of denials to /bin/ls, you may want to turn on learning on just that one subject for root.  To do this you edit the policy file and look for subject /bin/ls under role root.  You then add an ‘l’ to the subject line to enable learning for just that subject.

role root uG
…
# Role: root
subject /bin/ls ol {  # Note the 'l'

You restart RBAC using gradm -E -L /etc/grsec/partial-learning.log and obtain the new policy for just that subject by running gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy.  That single subject block can then be spliced into the full policy file to change the restrictions on /bin/ls when run by root.

It’s pretty obvious that RBAC is designed to deny by default: if access is not explicitly granted to a subject (an executable) to access some object (some system resource) when it’s running in some role (as some user), then access is denied.  But what if you want to create a policy which is mostly allow-by-default and then just add a few denials here and there?  While RBAC is more suited for the opposite case, we can do something like this on a per account basis.

Let’s start with a fairly permissive policy file for root:

role admin sA
subject / rvka {
	/			rwcdmlxi
}

role default
subject / {
	/			h
	-CAP_ALL
	connect disabled
	bind    disabled
}

role root uG
role_transitions admin
role_allow_ip 0.0.0.0/0
subject /  {
	/			r
	/boot			h
#
	/bin			rx
	/sbin			rx
	/usr/bin		rx
	/usr/libexec		rx
	/usr/sbin		rx
	/usr/local/bin		rx
	/usr/local/sbin		rx
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
	/usr/lib32		rx
	/usr/lib64		rx
	/usr/local/lib32	rx
	/usr/local/lib64	rx
#
	/dev			hx
	/dev/log		r
	/dev/urandom		r
	/dev/null		rw
	/dev/tty		rw
	/dev/ptmx		rw
	/dev/pts		rw
	/dev/initctl		rw
#
	/etc/grsec		h
#
	/home			rwcdl
	/root			rcdl
#
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
#
	/run/lock		rwcdl
	/sys			h
	/tmp			rwcdl
	/var			rwcdl
#
	+CAP_ALL
	-CAP_MKNOD
	-CAP_NET_ADMIN
	-CAP_NET_BIND_SERVICE
	-CAP_SETFCAP
	-CAP_SYS_ADMIN
	-CAP_SYS_BOOT
	-CAP_SYS_MODULE
	-CAP_SYS_RAWIO
	-CAP_SYS_TTY_CONFIG
	-CAP_SYSLOG
#
	bind 0.0.0.0/0:0-32767 stream dgram tcp udp igmp
	connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp igmp raw_sock raw_proto
	sock_allow_family all
}

The syntax is pretty intuitive. The only thing not illustrated here is that a role can, and usually does, have multiple subject blocks which follow it. Those subject blocks belong only to the role that they are under, and not another.

The notion of a role is critical to understanding RBAC. Roles are like UNIX users and groups but within the RBAC system. The first role above is the admin role. It is ‘special’ meaning that it doesn’t correspond to any UNIX user or group, but is only defined within the RBAC system. A user will operate under some role but may transition to another role if the policy allows it. Transitioning to the admin role is reserved only for root above; but in general, any user can transition to any special role provided it is explicitly specified in the policy. No matter what role the user is in, he only has the UNIX privileges for his account. Those are not elevated by transitioning, but the restrictions applied to his account might change. Thus transitioning to a special role can allow a user to relax some restrictions for some special reason. This transitioning is done via gradm -a somerole and can be password protected using gradm -P somerole.

The second role above is the default role. When a user logs in, RBAC determines the role he will be in by first trying to match the user name to a role name. Failing that, it will try to match the group name to a role name and failing that it will assign the user the default role.

The third role above is the root role and it will be the main focus of our attention below.

The flags following the role name specify the role’s behavior. The ‘s’ and ‘A’ in the admin role line say, respectively, that it is a special role (ie, one not to be matched by a user or group name) and that it has extra powers that a normal role doesn’t have (eg, it is not subject to ptrace restrictions). It’s good to have the ‘A’ flag in there, but it’s not essential for most uses of this role. It’s really its subject block which makes it useful for administration. Of course, you can change the name if you want to practice a little bit of security by obscurity. As long as you leave the rest alone, it’ll still function the same way.

The root role has the ‘u’ and the ‘G’ flags. The ‘u’ flag says that this role is to match a user by the same name, obviously root in this case. Alternatively, you can have the ‘g’ flag instead, which says to match a group by the same name. The ‘G’ flag gives this role permission to authenticate to the kernel, ie, to use gradm. Policy information is automatically added that allows gradm to access /dev/grsec, so you don’t need to add those permissions yourself. Finally, the default role doesn’t and shouldn’t have any flags: if it’s not a ‘u’ or ‘g’ or ‘s’ role, then it’s a default role.

Before we jump into the subject blocks, you’ll notice a couple of lines after the root role. The first says ‘role_transitions admin’ and permits the root role to transition to the admin role. Any special roles you want this role to transition to can be listed on this line, space delimited. The second line says ‘role_allow_ip 0.0.0.0/0’. So when root logs in remotely, it will be assigned the root role provided the login comes from an IP address matching 0.0.0.0/0; in this example, that means any IP is allowed. But if you had something like 192.168.3.0/24, then only root logins from the 192.168.3.0 network would get user root assigned role root. Otherwise RBAC would fall back on the default role. If you don’t have this line in there, get used to logging in on the console, because you’ll cut yourself off!
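
As a concrete variant (my own example, not part of the policy above), limiting the root role to logins from a local network would look like:

```
role root uG
role_transitions admin
role_allow_ip 192.168.3.0/24
```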

Now we can look at the subject blocks. These define the access controls restricting processes running in the role to which those subjects belong. The name following the ‘subject’ keyword is either a path to a directory containing executables or to an executable itself. When a process is started from an executable in that directory, or from the named executable itself, then the access controls defined in that subject block are enforced. Since all roles must have the ‘/’ subject, all processes started in a given role will at least match this subject. You can think of this as the default if no other subject matches. However, additional subject blocks can be defined which further modify restrictions for particular processes. We’ll see this towards the end of the article.

Let’s start by looking at the ‘/’ subject for the default role since this is the most restrictive set of access controls possible. The block following the subject line lists the objects that the subject can act on and what kind of access is allowed. Here we have ‘/ h’ which says that every file in the file system starting from ‘/’ downwards is hidden from the subject. This includes read/write/execute/create/delete/hard link access to regular files, directories, devices, sockets, pipes, etc. Since pretty much everything is forbidden, no process running in the default role can look at or touch the file system in any way. Don’t forget that, since the only role that has a corresponding UNIX user or group is the root role, this means that every other account is simply locked out. However the file system isn’t the only thing that needs protecting since it is possible to run, say, a malicious proxy which simply bounces evil network traffic without ever touching the filesystem. To control network access, there are the ‘connect’ and ‘bind’ lines that define what remote addresses/ports the subject can connect to as a client, or what local addresses/ports it can listen on as a server. Here ‘disabled’ means no connections or bindings are allowed. Finally, we can control what Linux capabilities the subject can assume, and -CAP_ALL means they are all forbidden.

Next, let’s look at the ‘/’ subject for the admin role. This, in contrast to the default role, is about as permissive as you can get. First thing we notice is the subject line has some additional flags ‘rvka’. Here ‘r’ means that we relax ptrace restrictions for this subject, ‘a’ means we do not hide access to /dev/grsec, ‘k’ means we allow this subject to kill protected processes and ‘v’ means we allow this subject to view hidden processes. So ‘k’ and ‘v’ are interesting and have counterparts ‘p’ and ‘h’ respectively. If a subject is flagged as ‘p’ it means its processes are protected by RBAC and can only be killed by processes belonging to a subject flagged with ‘k’. Similarly processes belonging to a subject marked ‘h’ can only be viewed by processes belonging to a subject marked ‘v’. Nifty, eh? The only object line in this subject block is ‘/ rwcdmlxi’. This says that this subject can ‘r’ead, ‘w’rite, ‘c’reate, ‘d’elete, ‘m’ark as setuid/setgid, hard ‘l’ink to, e’x’ecute, and ‘i’nherit the ACLs of the subject which contains the object. In other words, this subject can do pretty much anything to the file system.

Finally, let’s look at the ‘/’ subject for the root role. It is fairly permissive, but not quite as permissive as the previous subject. It is also more complicated and many of the object lines are there because gradm does a sanity check on policy files to help make sure you don’t open any security holes. Notice that here we have ‘+CAP_ALL’ followed by a series of ‘-CAP_*’. Each of these were included otherwise gradm would complain. For example, if ‘CAP_SYS_ADMIN’ is not removed, an attacker can mount filesystems to bypass your policies.

So I won’t go through this entire subject block in detail, but let me highlight a few points. First consider these lines

	/			r
	/boot			h
	/etc/grsec		h
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
	/sys			h

The first line gives ‘r’ead access to the entire file system but this is too permissive and opens up security holes, so we negate that for particular files and directories by ‘h’iding them. With these access controls, if the root user in the root role does ls /sys you get

# ls /sys
ls: cannot access /sys: No such file or directory

but if the root user transitions to the admin role using gradm -a admin, then you get

# ls /sys/
block  bus  class  dev  devices  firmware  fs  kernel  module

Next consider these lines:

	/bin			rx
	/sbin			rx
	...
	/lib32			rx
	/lib64			rx
	/lib64/modules		h

Since the ‘x’ flag is inherited by all the files under those directories, this allows processes like your shell to execute, for example, /bin/ls or /lib64/ld-2.21.so. The ‘r’ flag further allows processes to read the contents of those files, so one could do hexdump /bin/ls or hexdump /lib64/ld-2.21.so. Dropping the ‘r’ flag on /bin would stop you from hexdumping the contents, but it would not prevent execution, nor would it stop you from listing the contents of /bin. If we wanted to make this subject a bit more secure, we could drop ‘r’ on /bin and not break our system. This, however, is not the case with the library directories. Dropping ‘r’ on them would break the system, since library files need to have readable contents to be loaded, as well as be executable.

Now consider these lines:

        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw

The ‘h’ flag will hide /dev and its contents, but the ‘x’ flag will still allow processes to enter into that directory and access /dev/log for reading, /dev/null for reading and writing, etc. The ‘h’ is required to hide the directory and its contents because, as we saw above, ‘x’ is sufficient to allow processes to list the contents of the directory. As written, the above policy yields the following result in the root role

# ls /dev
ls: cannot access /dev: No such file or directory
# ls /dev/tty0
ls: cannot access /dev/tty0: No such file or directory
# ls /dev/log
/dev/log

In the admin role, all those files are visible.

Let’s end our study of this subject by looking at the ‘bind’, ‘connect’ and ‘sock_allow_family’ lines. Note that the addresses/ports include a list of allowed transport protocols from /etc/protocols. One gotcha here is make sure you include port 0 for icmp! The ‘sock_allow_family’ allows all socket families, including unix, inet, inet6 and netlink.
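To make the icmp gotcha concrete, here are some hedged sample lines (the addresses and ports are placeholders of my choosing, not taken from the original policy):

	bind	192.168.5.2:22 stream tcp
	connect	192.168.5.1:53 dgram udp
	# a port 0 entry so icmp traffic (e.g. ping) is not denied
	connect	192.168.5.1:0 dgram icmp
	sock_allow_family unix inet inet6 netlink
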

Now that we understand this policy, we can proceed to add isolated restrictions to our mostly permissive root role. Remember that the system is totally restricted for all UNIX users except root, so if you want to allow some ordinary user access, you can simply copy the entire role, including the subject blocks, and just rename ‘role root’ to ‘role myusername’. You’ll probably want to remove the ‘role_transitions’ line since an ordinary user should not be able to transition to the admin role. Now, suppose for whatever reason, you don’t want this user to be able to list any files or directories. You can simply add a line to his ‘/’ subject block which reads ‘/bin/ls h’ and ls becomes completely unavailable to him! This particular example might not be that useful in practice, but you can use this technique, for example, if you want to restrict access to your compiler suite. Just ‘h’ all the directories and files that make up your suite and it becomes unavailable.
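A sketch of such a copied role might look like the following (‘myusername’ and the hidden path are placeholders; the elided lines are whatever you copied from ‘role root’):

	role myusername u
	# subject blocks copied from 'role root', minus 'role_transitions',
	# with one 'h' line added:
	subject / {
		...
		/bin/ls		h
		...
	}
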

A more complicated and useful example might be to restrict a user’s listing of a directory to just his home. To do this, we’ll have to add a new subject block for /bin/ls. If you’re not sure where to start, you can always begin with an extremely restrictive subject block, tack it at the end of the subjects for the role you want to modify, and then progressively relax it until it works. Alternatively, you can do partial learning on this subject as described above. Let’s proceed manually and add the following:

subject /bin/ls o {
        /  h
        -CAP_ALL
        connect disabled
        bind    disabled
}

Note that this is identical to the extremely restrictive ‘/’ subject for the default role, except that the subject is ‘/bin/ls’, not ‘/’. There is also a subject flag ‘o’ which tells RBAC to override the previous policy for /bin/ls. We have to override it because that policy was too permissive. Now, in one terminal execute gradm -R in the admin role, while in another terminal trigger a denial by running ls /home/myusername. Checking dmesg, we see:

[33878.550658] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/ld-2.21.so by /bin/ls[bash:7861] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7164] uid/euid:0/0 gid/egid:0/0

Well that makes sense. We’ve started afresh denying everything, but /bin/ls requires access to the dynamic linker/loader, so we’ll restore read access to it by adding a line ‘/lib64/ld-2.21.so r’. Repeating our test, we get a seg fault! Obviously, we don’t just need read access to ld.so, but also execute privileges. We add ‘x’ and try again. This time the denial is

[34229.335873] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /etc/ld.so.cache by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
[34229.335923] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/libacl.so.1.1.0 by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

Of course! We need ‘rx’ for all the libraries that /bin/ls links against, and read access to the linker cache file. So we add lines for libc, libattr, libacl, and ld.so.cache. Our final denial is

[34481.933845] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /home/myusername by /bin/ls[ls:7982] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

All we need now is ‘/home/myusername r’ and we’re done! Our final subject block looks like this:

subject /bin/ls o {
        /                         h
        /home/myusername          r
        /etc/ld.so.cache          r
        /lib64/ld-2.21.so         rx
        /lib64/libc-2.21.so       rx
        /lib64/libacl.so.1.1.0    rx
        /lib64/libattr.so.1.1.0   rx
        -CAP_ALL
        connect disabled
        bind    disabled
}

Proceeding in this fashion, we can add isolated restrictions to our mostly permissive policy.

References:

The official documentation is The_RBAC_System. A good reference for the role, subject and object flags can be found in these Tables.

March 23, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Of OpenStack and uwsgi (March 23, 2016, 05:00 UTC)

Why use uwsgi

Not all OpenStack services support uwsgi. However, in the Liberty timeframe it is supported as the primary way to run Keystone API services and the recommended way of running Horizon (if you use it). Going forward, other OpenStack services will be moving to support it as well; for instance, I know that Neutron is working on it or has it completed for the Mitaka release.

Basic Setup

  • Install >=www-servers/uwsgi-2.0.11.2-r1 with the python USE flag as it has an updated init script.
  • Make sure you note the group you want the webserver to use to access the uwsgi sockets; I chose nginx.
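The install step might look like this sketch (the package.use filename is my choice, not from the post):

# /etc/portage/package.use/uwsgi
www-servers/uwsgi python

# then:
#   emerge -av ">=www-servers/uwsgi-2.0.11.2-r1"
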

Configs and permissions

When defaults are available I will only note what needs to change.

uwsgi configs

/etc/conf.d/uwsgi

UWSGI_EMPEROR_PATH="/etc/uwsgi.d/"
UWSGI_EMPEROR_GROUP=nginx
UWSGI_EXTRA_OPTIONS='--need-plugins python27'

/etc/uwsgi.d/keystone-admin.ini

[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_admin.socket
pidfile = /run/uwsgi/keystone_admin.pid
logger = file:/var/log/keystone/uwsgi-admin.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/admin

/etc/uwsgi.d/keystone-main.ini

[uwsgi]
master = true
plugins = python27
processes = 4
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_main.socket
pidfile = /run/uwsgi/keystone_main.pid
logger = file:/var/log/keystone/uwsgi-main.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/main

I have horizon in use via a virtual environment, so I enabled vacuum in this config.

/etc/uwsgi.d/horizon.ini

[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660
vacuum = true

socket = /run/uwsgi/horizon.sock
pidfile = /run/uwsgi/horizon.pid
logger = file:/var/log/horizon/horizon.log

name = horizon
uid = horizon
gid = nginx

chdir = /var/www/horizon/
wsgi-file = /var/www/horizon/horizon.wsgi
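The post doesn't show the webserver side; a minimal nginx sketch, assuming the socket paths above and a listen port of my choosing, might look like:

# nginx server block fragment (assumed; adjust ports to your deployment)
server {
    listen 5000;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/keystone_main.socket;
    }
}
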

wsgi scripts

The directories are owned by the service they contain, keystone:keystone or horizon:horizon.

/var/www/keystone/admin perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

/var/www/keystone/main perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

Note that this has paths to where I have my horizon virtual environment.

/var/www/horizon/horizon.wsgi perms are 0750 horizon:horizon

#!/usr/bin/env python
import os
import sys


activate_this = '/home/horizon/horizon/.venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.insert(0, '/home/horizon/horizon')
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'

import django.core.wsgi
application = django.core.wsgi.get_wsgi_application()

March 22, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's what a signed message looks like in a typical scenario:

A random OpenPGP-signed e-mail

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could just as well send a faked but unsigned e-mail message. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially the TOFU, Trust On First Use, trust model. It is pretty fresh code not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside of scope for an e-mail client like Trojitá. We might add some buttons for key retrieval and launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

Mail with a trusted signature

We are also making a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of an e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the guy who sent the e-mail and the one who made the signature the same person?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:

Something fishy is going on!

In some environments, S/MIME signatures using traditional X.509 certificates are more common than OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

All the gory details about an X.509 trust chain

Encrypted messages are of course supported, too:

An encrypted message

We had to start somewhere, so right now, Trojitá supports only read-only operations such as signature verification and decrypting of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in the near future (and patches are welcome for sure).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API interface was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to the QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operation. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least communication to the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end, it was a bit more straightforward to look into C++11's toolset, and reuse the std::async infrastructure for launching background tasks along with a std::future for synchronization. You can take a look at the resulting code in src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!