
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
April 04, 2013, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 04, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

If you’re using dev-db/postgresql-server, update now.

CVE-2013-1899 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13}
------------------------------------------------------------
A connection request containing a database name that begins
with "-" may be crafted to damage or destroy files within a server's data directory.

CVE-2013-1900 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13,8.4.17}
-------------------------------------------------------------------
Random numbers generated by contrib/pgcrypto functions may be easy for another
database user to guess.

CVE-2013-1901 <dev-db/postgresql-server-{9.2.4,9.1.9}
-----------------------------------------------------
An unprivileged user can run commands that could interfere with in-progress backups.
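
The fix is a straight package update; a minimal sketch of what that looks like (not from the original advisory; --oneshot keeps the rebuild out of your world file):

#pull in the fixed version (9.2.4/9.1.9/9.0.13, or 8.4.17 for the oldest branch)
emerge --sync
emerge --oneshot --update dev-db/postgresql-server
#then restart the slotted service, e.g. /etc/init.d/postgresql-9.2 restart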

Sven Vermeulen a.k.a. swift (homepage, bugs)
Matching packages with CVEs (April 04, 2013, 19:44 UTC)

I’ve come across a few posts on forums (Gentoo and elsewhere) asking why Gentoo doesn’t publish security-related patches in the tree. Some people think this is the case because they do not notice (m)any GLSAs, which are Gentoo’s security advisories. However, it isn’t that Gentoo doesn’t push out security fixes – it is a matter of putting the necessary human resources on writing the GLSAs.

Gentoo is often quick with creating the necessary ebuilds for newer versions of software. And newer versions often contain security fixes that mitigate problems detected in earlier versions. So by keeping your system up to date, you get those security fixes as well. But without GLSAs, it is difficult to really know which updates are security-relevant and which aren’t, let alone be aware that there are potential problems with your system.

I already captured one of those needs through the cvechecker application, so I took it a step further and wrote an extremely ugly script (it’s so ugly, it would spontaneously become a joke of itself when published) which compiles a list of potential CPEs (identifiers for products used in CVEs) from the Gentoo package list (ugliness 1: it assumes that the package name is the product name). It then tries to guess the version of that software based on the ebuild version (ugliness 2: it just takes the a.b.c number). Then, it lists the CVEs affiliated with a particular package, and checks this list against the list of CVEs for an earlier version (ugliness 3: it requires the previous, vulnerable version to still be in the tree). If one of the CVEs has “disappeared”, it will report that the given package might fix that CVE. Oh, and if the CVE has a CPE that contains more than just a version, the script ignores it (ugliness 4). And it probably ignores a lot of other things as well, while not checking the input (ugliness 5 and higher).
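
The script itself stays unpublished, but the CPE construction step it describes looks roughly like this — a sketch of my own, assuming qlist from portage-utils, with the same uglinesses applied:

#build candidate CPEs from the installed package list; the package name
#doubles as CPE vendor and product (ugliness 1), and only the leading
#a.b.c part of the ebuild version is kept (ugliness 2)
qlist -ICv | while read -r atom; do
    pv="${atom##*/}"                     # e.g. zfs-0.6.1
    name="${pv%%-[0-9]*}"                # e.g. zfs
    version=$(printf '%s\n' "${pv#${name}-}" | grep -Eo '^[0-9]+(\.[0-9]+){0,2}')
    printf 'cpe:/a:%s:%s:%s\n' "${name}" "${name}" "${version}"
done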

But if we ignore all that, what does that give for the Gentoo portage tree for the last 7 days? In other words, what releases have been made on the tree that might contain security fixes (and that do comply with the above ugliness)?

app-editors/emacs-23.4-r5 might fix CVE-2010-0825
app-editors/emacs-24.2-r1 might fix CVE-2012-0035
app-editors/emacs-24.2-r1 might fix CVE-2012-3479
dev-lang/python-2.6.8-r1 might fix CVE-2010-3492
dev-lang/python-2.6.8-r1 might fix CVE-2011-1521
dev-lang/python-2.6.8-r1 might fix CVE-2012-0845
dev-lang/python-2.6.8-r1 might fix CVE-2012-1150
dev-lang/python-2.6.8-r1 might fix CVE-2008-5983
dev-php/smarty-2.6.27 might fix CVE-2009-5052
dev-php/smarty-2.6.27 might fix CVE-2009-5053
dev-php/smarty-2.6.27 might fix CVE-2009-5054
dev-php/smarty-2.6.27 might fix CVE-2010-4722
dev-php/smarty-2.6.27 might fix CVE-2010-4723
dev-php/smarty-2.6.27 might fix CVE-2010-4724
dev-php/smarty-2.6.27 might fix CVE-2010-4725
dev-php/smarty-2.6.27 might fix CVE-2010-4726
dev-php/smarty-2.6.27 might fix CVE-2010-4727
dev-php/smarty-2.6.27 might fix CVE-2012-4277
dev-php/smarty-2.6.27 might fix CVE-2012-4437
media-sound/rhythmbox-2.97 might fix CVE-2012-3355
net-im/empathy-3.6.3 might fix CVE-2011-3635
net-im/empathy-3.6.3 might fix CVE-2011-4170
sys-cluster/glusterfs-3.3.1-r2 might fix CVE-2012-4417
www-client/seamonkey-2.17 might fix CVE-2013-0788
www-client/seamonkey-2.17 might fix CVE-2013-0789
www-client/seamonkey-2.17 might fix CVE-2013-0791
www-client/seamonkey-2.17 might fix CVE-2013-0792
www-client/seamonkey-2.17 might fix CVE-2013-0793
www-client/seamonkey-2.17 might fix CVE-2013-0794
www-client/seamonkey-2.17 might fix CVE-2013-0795
www-client/seamonkey-2.17 might fix CVE-2013-0796
www-client/seamonkey-2.17 might fix CVE-2013-0797
www-client/seamonkey-2.17 might fix CVE-2013-0800

As you can see, there is still a lot of work to remove bad matches (and add matches for non-default ones), but at least it gives an impression (especially the entries with CVEs from 2012 or even 2013 are noteworthy), which is the purpose of this post.

It would be very neat if ebuilds, or the package metadata, could give pointers on the CPEs. That way, it would be much easier to check a system for known vulnerabilities through the (publicly) available CVE databases as we then only have to do simple matching. A glsa-check-ng (what’s in a name) script would then construct the necessary CPEs based on the installed package list (and the metadata on it), check if there are CVEs against it, and if there are, see if a newer version of the same package is available that has no (or fewer) CVEs assigned to it.

Perhaps someone can create a GSoC proposal out of that?

April 03, 2013
Matthew Thode a.k.a. prometheanfire (homepage, bugs)

Disclaimer

  1. Keep in mind that ZFS on Linux is supported upstream, for differing values of support
  2. I do not care much for hibernate, normal suspending works.
  3. This is for a laptop/desktop, so I chose multilib.
  4. If you patch the kernel to add in ZFS support directly, you cannot share the binary; the CDDL and GPLv2 are not compatible in that way.

Initialization

Make sure your installation media supports ZFS on Linux and can install whatever bootloader is required (UEFI needs media that supports it as well). I uploaded an ISO that works for me at this link. Live DVDs newer than 12.1 should also have support, but the previous link has the stable version of zfsonlinux. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.

Formatting

I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary. Most newer drives are 4k advanced format drives; because of this you need ashift=12, and some/most newer SSDs need ashift=13. Setting compression to lz4 will make your system incompatible with upstream (Oracle) ZFS; if you want to stay compatible, just set compression=on.

General Setup

#setup encrypted partition
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=lz4 rpool/ROOT
#rootfs
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
#portage
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
#homedirs
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root
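
If you want to double-check the alignment after creating the pool, zdb can read it back from the pool configuration (a quick check of my own, not part of the original steps):

#ashift: 12 means 2^12 = 4096-byte sectors
zdb -C rpool | grep ashift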

cd /mnt/gentoo

#Download the latest stage3 and extract it.
wget ftp://gentoo.osuosl.org/pub/gentoo/releases/amd64/autobuilds/current-stage3-amd64-hardened/stage3-amd64-hardened-*.tar.bz2
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /tmp/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache
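
From here the usual handbook chroot dance applies; for completeness, a sketch of it (standard Gentoo procedure, nothing ZFS-specific):

#make DNS, /dev, /proc and /sys available inside the new root, then enter it
cp -L /etc/resolv.conf /mnt/gentoo/etc/
mount -t proc proc /mnt/gentoo/proc
mount --rbind /dev /mnt/gentoo/dev
mount --rbind /sys /mnt/gentoo/sys
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile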

Kernel Config

If you are compiling the modules into the kernel statically, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources                #or hardened-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-0.6.1/work/spl-0.6.1 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-0.6.1/work/zfs-zfs-0.6.1/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff; zfs pulls in spl automatically
mkdir -p /etc/portage/profile                                                   
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask      
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use                    
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal; your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.
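
For reference, with the stock stage3 fstab that simply means commenting out the placeholder lines, since ZFS mounts its own datasets (sketch):

#/dev/BOOT   /boot  ext2  noauto,noatime  1 2
#/dev/ROOT   /      ext3  noatime         0 1
#/dev/SWAP   none   swap  sw              0 0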

You should now have a working encrypted ZFS install.

April 02, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Linux Sea and ePub update (April 02, 2013, 18:16 UTC)

I just “published” a small update on the Linux Sea online book. Nothing major, some path updates (like the move to /etc/portage for the make.conf file). But I wouldn’t put a blog post online if there wasn’t anything else to say ;-)

Recently I was made aware that the ePub versions I publish were broken. I don’t use ePub readers myself, so all I do is read the ePubs through a Firefox plug-in, and it’s been a while since I did that with my own ePubs. Apparently, the stylesheets I use to convert the DocBook to ePub changed behavior (or my scripts abused an error in the previous stylesheets that is fixed now). So right now the ePub version should work again, and the code snippet below is what I use now to build it:

#generate the ePub3 chunks from the DocBook source
xsltproc --stringparam base.dir linuxsea-epub/OEBPS/ /usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl LINUXSEA.xml;
cp -r /path/to/src/linux_sea/images linuxsea-epub/OEBPS;
cd linuxsea-epub;
#the mimetype file must be the first entry and stored uncompressed (-X0)
zip -X0 linux_sea.epub mimetype;
zip -r -X9 linux_sea.epub META-INF OEBPS;
mv linux_sea.epub ../;

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The WebP experiment (April 02, 2013, 17:58 UTC)

You might have noticed over the last few days that my blog underwent some surgery, and that even now, on some browsers, the home page does not really look all that good. In particular, I’ve removed all but one of the background images and replaced them with CSS3 linear gradients. Users browsing the site with the latest version of Chrome, or with Firefox, will have no problem and will see a “shinier” and faster website; others will see something “flatter”. I’m debating whether I want to provide them with a better-looking fallback or not; for now, not.

But this was also a plan B — the original plan I had in mind was to leverage HTTP content negotiation to provide WebP variants of the images of the website. This was a win-win situation because, ludicrous as it was when WebP was announced, it turns out that with its dual-mode, lossy and lossless, it can in one case or the other outperform both PNG and JPEG without a substantial loss of quality. In particular, lossless behaves like a charm with “art” images, such as the CC logos, or my diagrams, while lossy works great for logos, like the Autotools Mythbuster one you see on the sidebar, or the (previous) gradient images you’d see on backgrounds.

So my obvious instinct was to set up content negotiation — I’ve used it before for multiple-language websites, and I expected it to work for multiple types as well, as it’s designed to… but after setting it all up, it turns out that most modern web browsers still do not support WebP at all… and they don’t handle content negotiation as intended. For this to work we need either of two options.

The first, best option would be for browsers to only Accept the image formats they support, or at least to prefer them — this is what Opera for Android does: Accept: text/html, application/xml;q=0.9, application/xhtml+xml, multipart/mixed, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1 but that seems to be the only browser doing it properly. In particular, in this listing you’ll see that it supports PNG, WebP, JPEG, GIF and bitmap — and then it accepts whatever else with a lower preference. If WebP were not in the list, even if it had a higher preference on the server, it would not be sent to the client. Unfortunately, this is not going to work, as most browsers send Accept: */* without explicitly providing the list of supported image formats. This includes Safari, Chrome, and MSIE.

Point of interest: Firefox does explicitly prefer one image format over others: PNG.

The other alternative is for the server to default to the “classic” image formats (PNG, JPEG, GIF) and then expect the browsers supporting WebP to prioritize it over the other image formats. Again this is not the case; as shown above, Opera lists it but does not prioritize it, and again, Firefox prioritizes PNG over anything else, and makes no special exception for WebP.
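
You can replay the negotiation by hand to see who would get what; a quick sketch with curl (hypothetical URL, assuming the server has both a PNG and a WebP variant to choose from):

#a client advertising WebP with a high preference, like Opera above:
curl -sI -H 'Accept: image/webp,image/png;q=0.9,*/*;q=0.1' http://blog.example/images/logo | grep -i '^Content-Type'
#a client sending a blanket Accept, like Chrome or Safari — the server
#can only fall back on its own preference order:
curl -sI -H 'Accept: */*' http://blog.example/images/logo | grep -i '^Content-Type'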

Issues are open at Chrome and Mozilla to improve the support, but the changes haven’t reached mainstream yet. Google’s own suggested solution is to use mod_pagespeed instead — but this module – which I already named in passing in my post about unfriendly projects – is doing something else: it changes the provided content on the fly, based on the reported User-Agent.

Given that I’ve spent some time on user agents, I would say I have the experience to say that this is a huge Pandora’s box. If I have trouble with some low-development browsers reporting themselves as Chrome to fake their way into sites that check the user agent field in JavaScript, you can guess how many of those are going to actually support the features that PageSpeed thinks they support.

I’m going to go back to PageSpeed in another post; for now I’ll just say that WebP has the numbers to become the next generation format out there, but unless browser developers, as well as web app developers, start to get their act together, we’re going to have hacks over hacks over hacks for years to come… Currently, my blog is using a CSS3 feature with the standardized syntax — not all browsers understand it, and they’ll see a flat website without gradients; I don’t care and I won’t start adding workarounds for that just because (although I might use SCSS, which would fix it for Safari)… new browsers will fix the problem, so just upgrade, or use a sane browser.

I’m a content publisher, whether I like it or not. This blog is relatively well followed, and I write quite a lot in it. While my hosting provider does not give me grief for my bandwidth usage, optimizing it is something I’m always keen on, especially since I have been Slashdotted once before. This is one of the reasons why my ModSecurity Ruleset validates and filters crawlers as much as spammers.

Blogs’ feeds, be they RSS or Atom (this blog only supports the latter), are a very neat way to optimize bandwidth: they get you the content of the articles without styles, scripts or images. But they can also be quite big. The average feed for my blog’s articles is 100KiB, which is a fairly big page if you consider that feed readers are supposed to keep pinging the blog to check for new items. Luckily for everybody, the authors of HTTP did consider this problem, and solved it with two main features: conditional requests and compressed responses.

Okay there’s a sense of déjà-vu in all of this, because I already complained about software not using the features even when it’s designed to monitor web pages constantly.

By using conditional requests, even if you poke my blog every fifteen minutes, you won’t use more than 10KiB an hour, if no new article has been posted. By using compressed responses, instead of a 100KiB response you’ll just have to download 33KiB. With Google Reader, things were even better: instead of 113 requests for the feed, a single request was made by the FeedFetcher, and that was it.
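
Both features are trivial to use from the client side; a sketch with curl (hypothetical feed URL):

#--compressed asks for a gzipped response; -z turns feed.atom's mtime into
#an If-Modified-Since header, and -R stamps the saved copy with the
#server's Last-Modified, so the next poll stays conditional
curl -s --compressed -R -z feed.atom -o feed.atom http://blog.example/articles.atom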

But now Google Reader is no more (almost). What happens now? Well, of the 113 subscribers, a few will most likely not re-subscribe to my blog at all. Others have migrated to NewsBlur (35 subscribers), the rest seem to have installed their own feed reader or aggregator, including tt-rss, owncloud, and so on. This was obvious looking at the statistics from either AWStats or Munin, both showing a higher volume of requests and delivered content compared to last month.

I’ve then decided to look into improving the bandwidth a bit more than before, among other things, by providing WebP alternative for images, but that does not really work as intended — I have enough material for a rant post or two so I won’t discuss it now. But while doing so I found out something else.

One of the changes I made while hoping to use WebP is to serve the image files from a different domain (assets.flameeyes.eu) which meant that the access log for the blog, while still not perfect, is decidedly cleaner than before. From there I noticed that a new feed reader started requesting my blog’s feed every half an hour. Without compression. In full every time. That’s just shy of 5MiB of traffic per day, but that’s not the worst part. The worst part is that said 5MiB are for a single reader as the requests come from a commercial, proprietary feed reader webapp.

And this is not the only one! Gwene also does the same, even though I sent a pull request to get it to use compressed responses, which hasn’t had a single reply. Even Yandex’s new product has the same issue.

While 5MiB/day is not too much taken singularly, my blog’s traffic averages 50-60 MiB/day, so that’s basically 10% of the traffic for less than 1% of the users, just because they do not follow best practices when writing web software. I’ve now added these crawlers to the list of stealth robots, which means that they will receive a “406 Not Acceptable” unless they finally implement at least compressed response support (which is the easy part in all this).

This has an unfortunate implication on users of those services that were reading me, who won’t get any new updates. If I were a commercial entity, I couldn’t afford this at all. The big problem, to me, is that with Google Reader going away, I expect more and more of this kind of issue to crop up repeatedly. Even NewsBlur, which is now my feed reader of choice, hasn’t fixed its crawlers yet, which I commented upon before — the code is open source, but I don’t want to deal with Python just yet.

Seriously, why are there so many people who expect to be able to deal with web software and yet have no idea how the web works at all? And I wonder if somebody expected this kind of fallout from the simple shut down of a relatively minor service like Google Reader.

March 31, 2013
David Abbott a.k.a. dabbott (homepage, bugs)
udev-200 interface names (March 31, 2013, 00:59 UTC)

Just updated to udev-200 and figured it was time to read the news item and deal with the Predictable Network Interface Names. I only have one network card and connect with a static IP address. It looked to me like more trouble to keep net.eth0 than to just go with the flow, paddle downstream and not fight it, so here is what I did.

First I read the news item :) then found out what my new name would be.

eselect news read
udevadm test-builtin net_id /sys/class/net/eth0 2> /dev/null

That returned enp0s25 ...

Next remove the old symlink and create the new one.

cd /etc/init.d/
rm net.eth0
ln -s net.lo net.enp0s25

I removed all the files from /etc/udev/rules.d/

Next set up /etc/conf.d/net for my static address.

# Static
 
config_enp0s25="192.168.1.68/24"
routes_enp0s25="default via 192.168.1.254"
dns_servers_enp0s25="192.168.1.254 8.8.8.8"

That was it, rebooted, held my breath, and everything seems just fine, YES!

 ifconfig
enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.68  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::21c:c0ff:fe91:5798  prefixlen 64  scopeid 0x20<link>
        ether 00:1c:c0:91:57:98  txqueuelen 1000  (Ethernet)
        RX packets 3604  bytes 1310220 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2229  bytes 406258 (396.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xd3400000-d3420000  
 
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 16436
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Update
I had to edit /etc/vnstat.conf and change eth0 to enp0s25. I use vnstat with conky.

rm /var/lib/vnstat/*
vnstat -u -i enp0s25

March 30, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

The article’s title is a play on the phrase “don’t open that door”, and makes more sense in Italian as we use the same word for ‘door’ and ‘port’…

So you left your hero (me) working on setting up a Raspberry Pi with at least a partial base of cross-compilation. The whole thing worked to a decent extent, but it wasn’t really as feasible as I hoped. Too many things, including Python, cannot cross-compile without further tricks, and the time it takes to figure out how to cross-compile them tends to be more than that needed to just wait for them to build on the board itself. I guess this is why there is so little interest in getting cross-compilation supported.

But after getting a decent root, or stage4 as you prefer to call it, I needed to get a kernel to boot the device. This wasn’t easy; there is no official configuration file published — what they tell you, if you want to build a new custom kernel, is to zcat /proc/config.gz from Raspbian. I didn’t want to use Raspbian, so I looked further. The next step is to check out the defconfig settings that the kernel repository includes; a few different ones exist.

You’d expect them to be actually thought out to enable exactly what the RaspberryPi provides, and nothing more or less. Some leeway can be expected for things like network options, but at least the “cutdown” version should not include all of IrDA, Amateur Radio, Wireless, Bluetooth, USB network, PPP, … After disabling a bunch of options, since the system I need to run will have very few devices connected – in particular, only the Davis Vantage Pro station, maybe a printer – I built the kernel and copied it over to the SD card. It booted; it crashed: the kernel panicked right away, due to a pointer dereference.

After some rebuild-copy-test cycles I was able to find out what the problem was. It’s a problem that is not unique to the RPi actually, as I found the same trace from an OMAP3 user reporting it somewhere else. The trick was disabling the (default-enabled) in-kernel debugger – which I couldn’t access anyway, as I don’t have a USB keyboard at hand right now – so that it would print the full trace of the error. That pointed at the l4_init function, which is the initialization of the Lightning 4 gameport controller — an old style, MIDI game port.

My hunch is that this expansion card is an old-style ISA card, since it does not rely on PCI structures to probe for the device — I cannot confirm it because googling for “lightning 4” only comes up with images of iPad and accessories. What it does is simply poke at the 0x201 address, and the moment it does, you get a bad dereference from the kernel exactly at that address. I’ve sent a (broken, unfortunately) patch to the LKML to see if there is an easy way to solve this.

To be honest and clear, if you just take a defconfig and build it exactly as-is, you won’t be hitting that problem. The problem happens to me because in this kernel, like in almost every other one I build, I do one particular thing: I disable modules, so that I get a single, statically built kernel. This in turn means that all the drivers are initialized when you start the kernel, and the moment the L4 driver is started, it crashes the kernel. Possibly it’s not the only one.
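
Until a proper fix lands, the workaround is simply to drop the driver from the static config before rebuilding; a sketch from the kernel source tree (GAMEPORT_L4 is the Lightning 4 option):

#disable the Lightning 4 probe (and gameports altogether, the RPi has none),
#then let kconfig sort out the remaining defaults
./scripts/config --disable GAMEPORT_L4 --disable GAMEPORT
make oldconfig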

This is most likely not strictly limited to the RaspberryPi but it doesn’t help that there is no working minimal configuration – mine is, by the way, available here – and I’m pretty sure there are other similar situations even when the arch is x86… I guess it’s just a matter of reporting them when you encounter them.

Flattr for comments (March 30, 2013, 08:27 UTC)

You probably know already that my blog is using Flattr for micro-donation, both to the blog as a whole and to the single articles posted here. For those who don’t know, Flattr is a microdonation platform that splits a monthly budget into equal parts to share with your content creators of choice.

I’ve been using, and musing about, Flattr for a while, and sometimes I ranted a little bit about how things have been moving in their camp. One of the biggest problems with the service is its relatively scarce adoption. I’ve got a ton of “pending flattrs” as described on their blog, for Twitter and Flickr users mostly.

Driving up adoption of the service is key for it to be useful for both content creators and consumers: the former can only get something out of the system if their content is liked by enough people, and the latter will only care about adding money to the system if they find great content to donate to. Or if they use Socialvest to get the money while they spend it somewhere else.

So last night I did my part in trying to increase the usefulness of Flattr: I added it to the comments of my blog. If you do leave a comment and fill the email field, that email will be used, hashed, to create a new “thing” on Flattr, whether you’re already registered or not — if you’re not registered, the things will be kept pending until you register and associate the email address. This is not much different from what I’ve been doing already with gravatar, which uses the same method (the hashed email address).

Even though the parameters needed to integrate Flattr for comments are described in the partnership interface, there doesn’t seem to be a need to be registered as a partner – indeed you can see in the pages’ sources that there is no revenue key present – and assuming you are already loading the Flattr script for your articles’ buttons, all you have to add is the following code to the comment template (for Typo; other languages and engines will differ slightly, of course!):

<% if comment.email != "" -%>
  <div class="comment_flattr right">
    <a class="FlattrButton" style="display:none;"
       title="Comment on <%= comment.article.title %>"
       data-flattr-tags="text, comment"
       data-flattr-category="text"
       data-flattr-owner="email:<%= Digest::MD5.hexdigest(comment.email.strip) %>"
       href="<%= comment.article.permalink_url %>#comment-<%= comment.id %>">
    </a>
  </div>
<% end -%>

So if I’m not making money with the partner site idea, why am I bothering with adding these extra buttons? Well, I often had people help me out a lot in comments, pointing out obvious mistakes I made or things I missed… and I’d like to be able to easily thank the commenters when they help me out… and now I can. Also, since this requires a valid email field, I hope for more people to fill it in, so that I can contact them if I want to ask or tell them something in private (sometimes I wished to contact people who didn’t really leave an easy way to contact them).

At any rate, I encourage you all to read the comments on the posts, and Flattr those you find important, interesting or useful. Think of it like a +1 or a “Like”. And of course, if you’re not subscribed with Flattr, do so! You’ll never know what other people could like, that you posted!

March 29, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Predictable persistently (non-)mnemonic names (March 29, 2013, 20:09 UTC)

This is part two of a series of articles looking into the new udev “predictable” names. Part one is here and talks about the path-based names.

As Steve also asked in the comments on the last post, isn’t it possible to just use the MAC address of an interface to point at it? Sure it’s possible! You just need to enable the MAC-based name generator. But what does that mean? It means that your new interface names will be enx0026b9d7bf1f and wlx0023148f1cc8 — do you see yourself typing them?
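
(If you are curious what yours would be, udev’s builtin prints every candidate name it derives; the interface name below is an assumption:)

#ID_NET_NAME_MAC is the enx/wlx variant discussed here
udevadm test-builtin net_id /sys/class/net/wlan0 2>/dev/null | grep ID_NET_NAME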

Myself, I’m not going to type them. My favourite suggestion to solve the issue is to rely on rules similar to the previous persistent naming, but not re-using the eth prefix, to avoid collisions (which will no longer be resolved by future versions of udev). I instead use the names wan0 and lan0 (and so on) when the two interfaces sit straddling a private and a public network. How do I achieve that? Simple:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:17:31:c6:4a:ca", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:07:e9:12:07:36", NAME="wan0"

Yes these simple rules are doing all the work you need if you just want to make sure not to mix the two interfaces by mistake. If your server or vserver only has one interface, and you want to have it as wan0 no matter what its mac address is (easier to clone, for instance), then you can go for

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="*", NAME="wan0"

As long as you only have a single network interface, that will work just fine. For those who use Puppet, I also published a module that you can use to create the file, and ensure that the other methods to achieve “sticky” names are not present.
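
If you’re not a Puppet user, dropping the rules in by hand is just as quick (the file name is my own pick; anything under /etc/udev/rules.d/ ending in .rules works):

#install the custom naming rules and make udev re-read them; the new names
#apply to devices added from now on, or at the next boot
cp mynet.rules /etc/udev/rules.d/70-net-custom-names.rules
udevadm control --reload-rules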

My reasoning for actually using this kind of names is relatively simple: the rare places where I do need to specify the interface name are usually ACLs, the firewall, and so on. In these, the most important thing to me is knowing whether the interface is public or not, so the wan/lan distinction is the most useful. I don’t intend trying to remember whether enp5s24k1f345totheright4nextothebaker is the public or the private interface.

Speaking of which, one of the things that appears obvious even from Lennart’s comment on the previous post is that there is no real assurance that the names are set in stone — he says that an udev upgrade won’t change them, but I guess most people would be sceptical, remembering the track record that udev and systemd have had over the past few months alone. In this situation my personal, informed opinion is that all this work on “predictable” names is a huge waste of time for almost everybody.

If you do care about stable interface names, you most definitely expect them to be more meaningful than 10-digits strings of paths or mac addresses, so you almost certainly want to go through with custom naming, so that at least you attach some sense into the names themselves.

On the other hand, if you do not care about interface names themselves, for instance because instead of running commands or scripts, you just use NetworkManager… well what the heck are you doing playing around with paths? If it doesn’t bother you that the interface for an USB device changes considerably between one port and another, how can it matter to you whether it’s called wwan0 or wwan123? And if the name of the interface does not matter to you, why are you spending useless time trying to get these “predictable” names working?

All in all, I think this is just a nice but useless trick, one that will only cause more headaches than it can possibly solve. Bahumbug!

Pacho Ramos a.k.a. pacho (homepage, bugs)
Gnome 3.8 released (March 29, 2013, 17:08 UTC)

Gnome 3.8 has been released, and it is already available in the main tree, hard masked, for adventurous people willing to help with getting it fixed for stable "soon" ;)

Thanks for your help!

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Predictably non-persistent names (March 29, 2013, 10:51 UTC)

This is going to be fun. The Gentoo “udev team”, in the person of Samuli – who seems to suffer from 0-day bump syndrome – decided to now enable by default the new predictable names feature that is supposed to make things so much nicer in Linux land where, especially for people coming from FreeBSD, things have been pretty much messed up. This replaces the old “persistent” names, which were often enough too fragile to work, as they did in-place renaming of interfaces and would all too often cause conflicts at boot time, since swapping two devices’ names is not an atomic operation, for obvious reasons.

So what’s this predictable name all about? Well, it’s mostly a merge of the previous persistent naming system and the BIOS label naming project, which was developed by RedHat for a few years already so that the names of interfaces for server hardware in the operating system match the documentation of said server; that way you can be sure that if you’re connecting the port marked with “1” on the chassis, out of four on the motherboard, it will bring up eth2.

But why were those two technologies needed? Let’s start first with explaining how (more or less) the kernel naming scheme works: unlike the BSD systems, where the interfaces are named after the kernel driver (en0, dc0, etc.), the Linux kernel uses generic names, mostly eth, wlan and wwan, and maybe a couple more for tunnels and so on. This causes the first problem: if you have multiple devices of the same class (ethernet, wlan, wwan) coming from different drivers, the order of the interfaces may very well vary between reboots, either because of changes in the kernel, if the drivers are built-in, or simply because of the locking and execution order of module loading (which is much more common for binary distributions).

The reason why changes in the kernel can change the order is that the order in which drivers are initialized has changed before and might change again in the future. A driver could also decide to change the order with which its devices are initialized (PCI tree scanning order, PCI ID order, MAC address order, …) and so on, causing it to change the order of interfaces even for the same driver. More about this later.

But here’s where my first doubt arises: how common is it for people to have more than one interface of the same class from vendors different enough to use different drivers? Well, it depends on the class of device; on a laptop you’d have to search hard for a model with more than one Ethernet or wireless interface, unless you add an ExpressCard or PCMCIA expansion card (and even those are not that common). On a desktop, I’ve seen a few very recent motherboards with more than one network port, and I have yet to see one with different chips for the two. Servers, that’s a different story.

Indeed, it’s not that uncommon to have multiple on-board and expansion card ports on a server. For instance you could use the two onboard ports as public and private interfaces for the host… and then add a 4-port card to split between virtual machines. In this situation, having a persistent naming of the interfaces is indeed something you would be glad of. How can you tell which one of eth{0..5} is your onboard port #2, otherwise? This would be problem number two.

Another situation in which having a persistent naming of interfaces is almost a requirement is if you’re setting up a router: you definitely don’t want to switch the LAN and WAN interface names around, especially where the firewall is involved.

This background is why the persistent-net rules were devised quite a few years ago for udev. Unfortunately almost everybody has had at least one nasty experience with them. Sometimes the in-place rename would fail, and you’d end up with the temporary names at the end of boot. In a few cases the name was not persistent at all: if the kernel driver for the device changed, or at least changed name, the rules wouldn’t match and your eth0 would become eth1 (this was the case when Intel split the e1000 and e1000e drivers, but it’s definitely more common with wireless drivers, especially when they move from staging to main).

So the old persistent net rules were flawed. What about the new predictable rules? Well, not only do they incorporate the BIOS naming scheme (which is actually awesome when it works — SuperMicro servers such as Excelsior do not expose the label; my Dell laptop only exposes a label for the Ethernet port but doesn’t for either the wireless adapter or the 3G one), but they have two “fallbacks” that are supposed to be used when the labels fail: one based on the MAC address of the interface, and the other based on the “path” — which for most PCI, PCI-E, onboard, and ExpressCard ports is basically the PCI address; for USB… we’ll see in a moment.

So let’s see, from my laptop:

# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6200 (rev 35)
# ifconfig | grep wlp3
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

Why “wlp3s0”? It’s the Wireless adapter (wl) PCI (p) card at bus 3, slot 0 (s0): 03:00.0. Matches lspci properly. But let’s see the WWAN interface on the same laptop:

# ifconfig -a | grep ww
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Much longer name! What’s going on then? Let’s see, it’s reporting its card at bus 0, slot 29 (0x1d) — lspci will use hexadecimal numbers for the addresses:

# lspci | grep '00:1d'
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

Okay so it’s an USB device, even though the physical form factor is a mini-PCIE card. It’s common. Does it match lsusb?

# lsusb | grep Broadband
Bus 002 Device 004: ID 413c:8184 Dell Computer Corp. F3607gw v2 Mobile Broadband Module

Note the Bus/Device specification there, which is no good: the device number will increase every time you pop something in or out of the port, so it’s not persistent across reboots at all. What udev uses instead is the path to the device through the USB ports, which is a tad more complex, but basically means it matches /sys/bus/usb/devices/2-1.6:1.6/ (I don’t pretend to know how the thing works exactly, but it describes to which physical port the device is connected).
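
To see the chain the name encodes, you can follow the interface’s sysfs link; a sketch from my setup (output from memory, so take it as indicative):

#PCI address first (0000:00:1d.0), then the USB port chain (2-1.6:1.6)
readlink /sys/class/net/wwp0s29u1u6i6
# ../../devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.6/2-1.6:1.6/net/wwp0s29u1u6i6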

In my laptop’s case, the situation is actually quite nice: I cannot move either the WLAN or WWAN device to a different slot, so the name assigned by the slot is persistent as well as predictable. But what if you’re on a desktop with an add-on WLAN card? What happens if you decide to change your video card for a more powerful one that occupies the space of two slots, one of which happens to be where your WLAN card is? You move it, reboot, and… you just changed the interface name! If you’ve been using NetworkManager, you’ll just have to reconfigure the network, I suppose.

Let’s take a different example. My laptop, with its integrated WWAN card, is a rare example; most people I know use USB “keys”, as the providers give them away for free, at least in Italy. I happen to have one as well, so let me try to plug it in one of the ports of my laptop:

# lsusb | grep modem
Bus 002 Device 014: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u2i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Okay, great: this is a different USB device, connected to the same USB controller as the onboard one, but at different ports. Neat. Now, what if I had all my usual ports busy, and I decided to connect it to the USB3 add-on ExpressCard I got for the laptop?

# lsusb | grep modem
Bus 003 Device 004: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wws1u1i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500

What’s this? Well, the USB3 controller provides slot information, so udev magically uses that to rename the interface, so it avoids using the otherwise longer wwp6s0u1i1 name (the USB3 controller is on the PCI bus 6).

Let’s go back to the on-board ports:

# lsusb | grep modem
Bus 002 Device 016: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u3i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Seems the same, but it’s not. Now it’s u3 not u2. Why? I used a different port on the laptop. And the interface name changed. Yes, any port change will produce a different interface name, predictably. But what happens if the kernel decides to change the way the ports are enumerated? What happens if the USB 2 driver is buggy and is supposed to provide slot information, and they fix it? You got it, even in these cases, the interface names are changed.

I’m not saying that the kernel naming scheme is perfect. But if you’re expected to always have only an Ethernet port, a WLAN card and a WWAN USB stick, with it you’ll be sure to have eth0, wlan0 and wwan0, as long as the drivers are not completely broken (like the WLAN appearing as eth1), and as long as you don’t muck with the interface names in userspace.

Next up, I’ll talk about the MAC addresses based naming and my personal preference when setting up servers and routers. Have fun in the mean time figuring out what your interface names will be.

March 25, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Why Puppet? (March 25, 2013, 10:56 UTC)

Seems like the only comment everybody had on my previous post was to ask me why I haven’t used $this, $that and ${the kitchen sink}. Or, to be precise, they asked about cfengine, chef and bcfg2. I have to say I don’t really like being forced into justifying myself, but at this point I couldn’t just avoid answering, or I would keep getting the same requests over and over again.

So first of all, why a configuration management system? I have three production vservers at IOS (one hosts this blog, another xine, and a third belongs to a customer of mine). I have a standby backup server at OVH. And then there’s excelsior, which has four “spacedocks” (containers that I use for building binpkgs for the IOS servers), three tinderboxes (but only two usually running), and a couple of “testing” containers (for x32 and musl), beside the actual container I use to maintain stuff.

That’s a lot of systems, and while they are very similar between themselves, they are not identical. To begin with, they are in three different countries. And they use three different CPUs. And this is without adding the RaspberryPi I set up with the weather station for a friend of mine. The result is that trying to maintain all those systems manually is a folly, even though I already reduced the number of hosts, since the print shop customer – the one I wrote so often about – moved on and found someone else to pick up their sysadmin tasks (luckily for both of us, since it was a huge time sink).

But the reason why I focused almost exclusively on Puppet is easy to understand: people I know have been using it for a while. Even though this might sound stupid, I do follow the crowd of friends of mine when I have to figure out what to use. This is because the moment I have no idea how to do something, it’s easier to ask a friend than to go through the support chain at the upstream project. Gentoo infra people are using and working on Puppet, so that’s a heavy factor for me. I don’t know why they chose Puppet, but at this point I really don’t care.

But there is another thing, a lesson I learned with Munin: I need to judge the implementation language. The reason is simple: I’ll find bugs, for sure. I have this bad knack for finding bugs in the stuff I use… which is the main reason why I got interested in open source development: I could then fix the bugs I found! But to do so I have to understand what’s written. And even though learning Perl was easy, understanding Munin’s code… was, and is, tricky. I was still able to get some degree of stuff done. Puppet being written in Ruby is a positive note.

I know, chef is also written in Ruby. But I do have a reason for not wanting to deal with chef: its maintainer in Gentoo. Half the bugs I find have to do with the way things are packaged, which is the reason why I became a developer in the first place. This means, though, that I have to be able to collaborate with the remaining developers, and sometimes that’s just not possible. Sometimes it’s due to upstream developers, but in the case of chef the problem is the Gentoo developer, who’s definitely not somebody I want to work with, since he’s been “fiddling” with Ruby ebuilds for chef, messing up a lot of the work that the Ruby team, me included, kept pouring into improving the quality of the Ruby packages.

So basically these are the reason why I decided to start using Puppet and writing Puppet modules.

March 23, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Book review — Counting From Zero (March 23, 2013, 01:49 UTC)

I might be a masochist, I don’t know. Certainly I didn’t find it enjoyable to read this book, but at least I got to the end of it, unlike the previous one which I also found difficult to digest.

The book, Counting from Zero by Alan B. Johnson, is one of the worst books I’ve read in a while, to be entirely honest. It’s another cyber-thriller, if we want to use this name, akin to Russinovich’s Zero Day (review) and Trojan Horse (review), which I read last year and found… not so thrilling — but in comparison, they are masterpieces.

So the authors, in both Russinovich’s and Johnson’s cases, are actually IT professionals; the former works at Microsoft, the latter has been co-author of the ZRTP protocol for encrypting audio/video conversations. Those who had to deal with that and Zfone before are probably already facepalming. While Russinovich’s world is made up of nuclear plants running Windows on their control systems, and connecting them to the Internet, Johnson’s is a world that is… possibly more messed up.

Let’s start with what I found obnoxious almost immediately: the affectations. The cover of the book already shows a Ø sign — while I’m not a typographer, and I didn’t feel like asking one of my many friends who are, it looks like a bold Bodoni or something similar. It’s not referring to the Scandinavian letter though, and that’s the sad news. In the whole text, the character zero (0) has been replaced with this (wrong) character. For a person who can get angry when he has to replace ò with o for broken systems to accept his name, this is irksome enough. The reasoning for this is declared in the second half of the book as all programmers write it this way to not mistake it for an ‘o’ vowel — bad news for the guy, I don’t know people who do that consistently.

Even if I’m writing a password where the letters and numbers can be mistaken – which is not common, as I usually use one or the other – my preferred notation for zeros is a dot at the center. Why a dot and not the slash the author likes so much? To not confuse it with the many similar symbols, some of which are actually used in mathematics, where zeros are common (and this is indeed something that my math teacher in high school convinced me of). Furthermore – as Wikipedia notes – the slashed zero’s slash does not go over the circle, for the same reason as me using the dot: it would be too easy to mistake for an empty set, or a diameter sign.

Used once, this fake slashed zero is cute; done as a sed replacement all over the book? Bleah.

It’s not the only affectation though; another one is that the chapters have been numbered… in hexadecimal. And before somebody asks, no, they were not 0x-prefixed, which would probably have made more sense. And finally, there are emails quoted in almost every chapter, and they have a “PGP” block at the end for the signature (even though it is implied that they are actually encrypted, and not just signed). I’m pretty sure that there is some meaning behind those blocks but I can’t be bothered searching. There are also a bunch of places where words are underlined as if they were hyperlinks — if they were, they were lost in translation on the Kindle Paperwhite (which I bought last week after breaking my Kindle Keyboard), as they are not clickable.

Stylistically, the book stinks. I’m sorry, I know it’s not very polite to criticize something this harshly, but it really does. It reads like something I was trying to write in middle school: infodumps a-plenty – not only in computer stuff but even on motorbike details – and not in a long-winded, descriptive, “look how cool” kind of way, just in a paragraph of dumping info on the reader, most of which is really not important to the story – action driven, and repeating the subject, the protagonist’s name, every line – Mick did this. Mick did that. Mick went somewhere – and in general very few descriptions of environments, people, or anything at all.

But, style is an acquired skill. I didn’t like the first Harry Potter book, and I enjoyed the later ones. In Russinovich’s case, the style issues on the first book were solved on the second (even though the story went from so-so to bad). So let’s look into the story instead. It’s something already seen: unknowns find zero-days, you got the self-employed wizkid who gets to find a fix, and save the world. With nothing new to add there, two things remain to save a book: characters and, since this is a cyberthriller, a realistic approach to computers.

These should actually be the strong points of the book, judging by the Praise between ToC and Prologue — Vint Cerf describes it as “credible and believable”, while Phil Zimmermann calls it a “believable cast of characters”. That sets the expectations high.

The main protagonist is your stereotypical nerd’s wet dream: a young self-employed professional, full of money, with a crew of friends, flying around the world. This might actually be something Johnson sees in himself, given that his biography in both the book and on Amazon points out that he’s a “Million Miler” with American Airlines. Honestly, I don’t dream of travelling that much — but you all know how I hate flying. Not only is he a perfect security expert and bike rider, he’s also a terrific mechanic, a sailor, and so many more things. His only defect in the whole book? He only speaks English. I’m not kidding you, he doesn’t go as far as shouting at a woman in the whole book! Have you ever met a guy like that at a security or FLOSS conference? I certainly haven’t, including myself. Seriously, no defects… sigh… I like characters when they have defects, because they need to compensate to become lovable.

Scratch the protagonist then. Given the continuous turmoil in the IT scene about sexism and the limited showcase of women in a positive light, you’d expect that somebody writing about IT might want to tip the scale a little bit in their favor — or at least that’s what I would do, and what I’d like to see. How many female characters are there in the book? The protagonist’s sister and his niece (her daughter); the protagonist’s “on-again, off-again”; a new woman joining the crew at the beginning of the book; and… spoiler… a one-off, one-chapter hacker who falls for one of the oldest tricks in the book (after being said to be brilliant — even though her solutions are said not to be elegant).

The on-again, off-again, who’s supposed to be one of the crew of security experts, is neither seen nor said to be doing anything useful at all in the story, besides helping out in the second-chapter crisis where the protagonist and his friends save a conference by super-humanly cloning a whole battery of servers and routers in a few hours from scratch, dissecting a zero-day vulnerability in a web server, fixing it, and doing an “anonymous commit” (whatever the heck that should be!). Did you say “stereotype!”, expecting the protagonist to be madly in love with his long-time friend? No, worse: she’s the one who wants him, but he’s just not there.

The newly-joining gal? Works for a company that would have otherwise been badmouthed at the conference, and has a completely platonic relationship with the protagonist all over the book. Her only task is to “push papers” from the protagonist to her company’s techs — Daryl from Russinovich’s books is more proactive, and if you read them, you know that’s a hard record to beat.

Family-wise … the parents are dead, and the sister is married with a child. Said child, even though she comes up many times during the book, is almost always called “Sam” — a play at a tomboyish girl? I’d say more like an interchangeable character, as she could easily have been a boy instead of a girl for all the book’s concerned. The sister is, by the way, a librarian — this is only noted once, and the reason is to do yet another infodump on RFID.

If you want to know what kind of pile of infodumps this book is: the author goes out of his way to comment on “obsolete” measure units, including an explanation of what nautical knots are modeled after, explains the origins of “reboot” and the meaning of “order of magnitude”, rants about credit card companies “collecting databases of purchasing habits and data”, notes that you use dig to run a “DNS trace”, that Tube is the “unofficial name for London’s underground railway” (unofficial? TfL calls it the Tube!), that there is a congestion charge in London, that Škoda is a Czech brand, and what the acronym RAM stands for!

If anything, the rest of the “crew” does even less than all these people; all the work is done by the protagonist… even though all the important pieces are handed to him by others! Sigh.

Before closing the review (which you can guess is not positive at this point), let’s look at the tech side. Given that the author is a colleague, and given the kind of praise coming from other people “in the scene”, you’d expect a very realistic approach, wouldn’t you? Well, the kind of paranoia the protagonist is subject to (not accepting unencrypted email, phone calls or video) is known to be rampant, although I’ve found it is often more common among wannabes than actual professionals.

But (and I did take notes, thanks to the Kindle), even accepting that in the fury of disconnecting a possibly infected or to-be-infected network from the Internet you can identify in a nanosecond which are the (multiple) cables to the internet, and damage them at the same time (without even damaging the connectors)… since when do you need a “makeshift soldering iron to repair the broken Ethernet connector”? If it was equipment-side, a soldering iron is not going to be enough; if it was the cable… WTF are you using a soldering iron for?!

Ah! At some point the protagonist is given some “magnetic GPS trackers” by “an uncle in Australia” to use against the bad guys. How the uncle could have guessed that he’d need them is already a good question. The fact that the ones used toward the end are of no use at all is something I don’t want to spend time on. My question is: do you call realistic a throwable magnetic bug that receives GPS signals from the underside of a car and can be traced by a cellphone in real time?

Oh and of course, this is the world-famous, filthy-rich security expert who uses a single password for every service and changes it every week. If somebody thinks this is a good idea, let me remind you that this vastly extends the surface on which you’re vulnerable to MITM or sniffing attacks! And they even steal his private key, not once, but twice! It seems he knows everything about PGP and encryption, but not about the existence of SmartCards.

Even though the guy has an impressive collection of SIM cards and mobile phones that work all over the world, including in the middle of the Atlantic Ocean. And when he buys a new phone, he can just download and compile the operating system. While we have to fight to get Android sources for our phones…

Okay, the review is getting longer than I expected, so I’ll just note that the guy “performed a disk wipe on the solid state storage” — and yes, he’s referring to the 37-or-however-many-passes wiping that was debunked by the paper’s own author, as most people misinterpreted it altogether. And that is completely irrelevant to solid state storage (and most modern non-solid-state storage as well!). Oh, and he doesn’t buy off-the-shelf systems because they could have keyloggers or malware in them, but he trusts computer parts bought at the first store he finds on his phone.

Of course he can find components for a laptop in a store, and just fit them in his custom CNC-milled case without an issue. He can also fit a state-of-the-art, days-long battery that he was given earlier, without any charger designed for it! Brilliant, just brilliant. Nothing for a guy who “did a mental calculation of how much lighter it would be in a titanium case… and how much more expensive”. I don’t even know the current dollar exchange rate; he can calculate the weight difference and price of a titanium case in his head.

Last pieces before the bombshell: the guy ends up on the TSA’s No-fly List; they actually spell out the full TSA name. Then he’s worried he can’t take a plane from London to Kiev. Message for somebody who spent too much time in the USA even though he’s Australian (the author): the TSA’s competence stops at the US border! And even in the situation where somebody left their passport in the side pocket of somebody else’s carry-on bag (how fortunate, deus ex machina knows no borders!), you don’t have to find the same glasses as in the photo… they let you change glasses from time to time. And if you do have to find them, you don’t need real prescription glasses, if those give you headaches.

Sorry, I know, these are nitpicks — there is much more in the book, though. These are just the ones that had me wondering out loud why I was still reading. But the bombshell I referred to above is the following dialogue line:


“Sir, he uses ZRTP encryption for all his calls, and strong encryption on all his messaging. We know who he communicates with but we haven’t been able to break any yet…”


Thanks, Randall! XKCD #538

I know the guy is a co-author of ZRTP. But…

March 22, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

OpenSSH releases are always interesting; they never go by without introducing at least one feature that makes paranoid, security-conscious people happy, and OpenSSH 6.2, which was just released, is no exception. In particular, in this case I was very happy to read:


sshd(8): Added support for multiple required authentication in SSH
protocol 2 via an AuthenticationMethods option. This option lists
one or more comma-separated lists of authentication method names.
Successful completion of all the methods in any list is required for
authentication to complete. This allows, for example, requiring a
user having to authenticate via public key or GSSAPI before they
are offered password authentication.

What’s going on is not extremely obvious, but basically it means that you can now chain authentication methods instead of replacing one with another. Okay, step back. Have you ever noticed that when an SSH connection fails, often enough you get a message saying public-key,keyboard-interactive? That means that it tried, in sequence, the two options, and when both failed, it refused the connection. Either of the two would have been enough for a successful connection. This is akin to the sufficient option in PAM, where a single positive result is enough to produce a valid login. The new option in this version of OpenSSH is equivalent to PAM’s requisite option: if any of the configured login methods fails, the connection is dropped.
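To make this concrete, here is a minimal sketch of what such a chained setup could look like in sshd_config (my own illustration, not taken from the release notes):

# Require public key authentication first, then keyboard-interactive
# (e.g. PAM) on top of it; both must succeed.
AuthenticationMethods publickey,keyboard-interactive

Multiple space-separated lists can be given as well, in which case completing any one list in full is enough.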

So why is this important? Well, this allows two-factor – or actually multi-factor – authentication, as now just having access to your laptop with an unprotected SSH key might not be enough to enter your critical servers. In my case, it would become at least three-factor authentication, as I could make it go through my SmartCard (which requires the physical card and knowledge of its PIN), and then ask me for the password itself.

Since the second factor can be PAM itself, there is no reason why you cannot add more than one factor there. In particular, you might remember that just last year I looked into two-factor authentication options, and in particular into DuoSecurity — unfortunately, using DuoSecurity as-is with PAM is a bit of a mess. I did send a patchset upstream to make the PAM module more useful and reliable – among other things, making it feasible to use with sudo, so that you get asked for the second factor only when executing commands as root – but upstream does not care. It doesn’t fit their marketing or their main consulting target, so they basically left me dangling there with a patchset that now probably needs forward-porting.
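For reference, the kind of stacking I have in mind would look roughly like this (an illustrative sketch only; the module path and options depend on your system and on how duo_unix was built):

# /etc/pam.d/sudo (illustrative)
auth  required  pam_unix.so
auth  required  pam_duo.so

With both modules marked required, a password alone is not enough: the Duo factor is asked on top of it, and only when sudo is invoked.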

If you’re interested in getting a version of duo_unix with my patchset applied available in Portage (and available in source form for other distributions), I suggest you poke Jon and Doug at DuoSecurity and ask them about it. If they were to ask me, I would be glad to forward-port the patches, and change them as needed if they don’t like the precise way I wrote them. But I’ve reached out enough myself, trying to help them with the packaging – including proposing to just chat on the phone about what they want to do with it while I was in the US – at this point it’s their turn.

Alternatively, I’m pretty sure you can use a YubiKey, but I never ended up having one of them in my hands.

March 20, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Managing configuration (March 20, 2013, 19:53 UTC)

So I’ve finally bitten the bullet and decided to look into installing and setting up Puppet to manage the configuration of my servers. The reason is to be found in my transfer to Dublin: I expect I won’t have the same time I had before, which means that any streamlining in the administration of my servers is a net improvement.

In particular, just the other day I spent a lot of time fighting just to set up SSL properly on the servers, and I kept scp’ing files around — it was obvious I wasn’t doing it right.
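This is exactly the kind of thing Puppet’s file resource takes care of; a minimal sketch of what replaces the scp dance might look like this (the path and module name are made up for illustration):

file { '/etc/ssl/private/server.pem':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0400',
  source => 'puppet:///modules/ssl/server.pem',
}

Once the certificate lives in a module, every server that includes the class gets the same file, and a change becomes a single commit instead of another round of scp.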

It goes deeper than this, though; since Puppet is obviously trying to get me to standardize the configurations between different servers, I’ve ended up uncovering a number of situations where the configuration of different servers was, well, different. Most of the time without a real reason. For instance, the configured Munin plugins didn’t match, even those that are not specific to a service — of three vservers, one uses PostgreSQL, another uses MySQL and the third, being the standby backup for the two, has both.

Certainly there’s a conflict between your average Gentoo Linux way of doing things and the way Puppet expects things to be done. The latter requires you to make configurations very similar, while the former tends to make you install each system like its own snowflake — but if you are even partially sane, you know that to manage more than one Gentoo system, you’ll have to standardize at least some configurations.

The other big problem with using Puppet on Gentoo is a near-showstopper lack of modules that support our systems. While Theo and Adrien are maintaining a very nice Portage module, there is nothing that allows us to set up the OpenRC oldnet-style network configuration, for instance. For other services, the support is often written with only CentOS or Debian in mind, and the only way to get them to work on Gentoo is to fix the module.

To solve this problem, I started submitting pull requests to modules such as timezone and ntp so that they work on Gentoo. It’s usually relatively easy to do, but it can get tricky when the CentOS and Gentoo ways of setting something up are radically different. By the way, the ntp module is sweet, because I can finally no longer forget that we have two places to set the NTP server pools.
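Using such a module then boils down to declaring the class with the right parameters; something along these lines (the server names are just an example, and the exact parameter name may vary between module versions):

class { 'ntp':
  servers => [
    '0.gentoo.pool.ntp.org',
    '1.gentoo.pool.ntp.org',
  ],
}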

I also decided to create a module to hold whatever is Gentoo-specific enough, although this is not yet the kind of stuff you want to rely upon forever — it would have to be done through a real parsed file to set it up properly. On the other hand, it allows me to set up all my servers’ networks, so it should be okay. And another module allows me to set environment variables on different systems.

You can probably expect me to publish a few more Puppet modules – and to edit even more – in the next few weeks, while I transition as much configuration as I can from custom files to Puppet. In particular – but that’s worthy of a separate blog post – I’ll have to work hard to get a nice, easy, and dependable Munin module.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Fiddling with puppet apply (March 20, 2013, 10:31 UTC)

As part of a larger exercise, I am switching my local VM set from a more-or-less scripted manual configuration towards a fully Puppet-powered one. Of course, it still uses a lot of custom modules and is most likely too ugly to expose to the wider internet, but it does seem to improve my ability to quickly rebuild images if I corrupt them somehow.

One of the tricks I am using is to do a local apply instead of using a Puppet master server – mainly because that master server would again be a VM that might need to be built up and would consume resources that I’d rather have free for other VMs. So what I do now is akin to the following:

~# puppet apply --modulepath /mnt/puppet/modules /mnt/puppet/manifests/site.pp

All I have to do is make sure that the /mnt/puppet location is a shared resource (in my case, an NFSv4 read-only mount) which I can just mount on a fresh image.

As part of this exercise I noticed that Puppet by default uses the regular gentoo provider for the services. I’d like to use the openrc provider instead, as I can easily tweak that one to work with SELinux (I need to prepend run_init to the rc-service calls, otherwise SELinux wants to authenticate the user and Puppet doesn’t like that; I have a pam_rootok.so statement in the run_init PAM file to allow unattended calls towards rc-service).
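For the record, the relevant part of that PAM setup looks roughly like this (a sketch; the rest of the run_init service file stays as shipped):

# /etc/pam.d/run_init (auth section, illustrative)
auth  sufficient  pam_rootok.so
auth  include     system-auth

With pam_rootok.so marked sufficient, calls made as root succeed without any interaction, which is exactly what an unattended Puppet run needs.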

A quick Google search revealed that all I had to do was add provider => openrc in the service definitions, like so:

service { "net.eth0":
  provider => openrc,
  ensure => running,
}

As mentioned, I still manually patch the openrc provider (located in /usr/lib64/ruby/site_ruby/1.9.1/puppet/provider/service) so that the run_init command is known as well, and all invocations of rc-service are prepended with run_init:

...
  # declare both commands so Puppet resolves and validates their paths
  commands :runinit => '/usr/sbin/run_init'
  commands :rcservice => '/sbin/rc-service'
...
  # and in the start method, prepend run_init to the rc-service invocation
  [command(:runinit), command(:rcservice), @resource[:name], :start]

And the same for the stop and status definitions. I might use Portage’s postinst hook to automatically apply the patch so I don’t need to do this manually each time.

March 19, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Opportunities for Gentoo (March 19, 2013, 15:36 UTC)

When I’ve wanted to play in some new areas lately, it’s been a real frustration because Gentoo hasn’t had a complete set of packages ready in any of them. I feel like these are some opportunities for Gentoo to be awesome and gain access to new sets of users (or at least avoid chasing away existing users who want better tools):

  • Data science. Package Hadoop. Package streaming options like Storm. How about related tools like Flume? RabbitMQ is in Gentoo, though. I’ve heard anecdotally that a well-optimized Hadoop-on-Gentoo installation showed double-digit performance increases over the usual Hadoop distributions (i.e., not Linux distributions, but companies specializing in providing Hadoop solutions). I just heard from Tim Harder (radhermit) that he’s got some packages in progress for a lot of this, which is great news.
  • DevOps. This is an area where Gentoo historically did pretty well, in part because our own infrastructure team and the group at the Open Source Lab have run tools like CFEngine and Puppet. But we’re lagging behind the times. We don’t have Jenkins or Travis. Seriously? Although we’ve got Vagrant packaged, for example, we don’t have Veewee. We could be integrating the creation of Vagrant boxes into our release-engineering process.
  • Relatedly: Monitoring. Look at some of the increasingly popular open-source tools today, things like Graphite, StatsD, Logstash, Lumberjack, ElasticSearch, Kibana, Sensu, Tasseo, Descartes, and Riemann. None of those are there.
  • Cloud. Public cloud and on-premise IaaS/PaaS. How about IaaS: OpenStack, CloudStack, Eucalyptus, or OpenNebula? Not there, although some work is happening for OpenStack according to Matthew Thode (prometheanfire). How about a PaaS like Cloud Foundry or OpenShift? Nope. None of the Netflix open-source tools are there. On the public side, things are a bit better — we’ve got lots of AWS tools packaged, even stretching to things like Boto. We could be integrating the creation of AWS images into our release engineering to ensure AWS users always have a recent, official Gentoo image.
  • NoSQL. We’ve got a pretty decent set here, with some holes. We’ve got Redis, Mongo, and CouchDB, not to mention Memcached, but how about graph databases like Neo4j, or other key-value stores like Riak, Cassandra, or Voldemort?
  • Android development. Gentoo is perfect as a development environment. We should be pushing it hard for mobile development, especially Android given its Linux base. There’s a couple of halfhearted wiki pages but that does not an effort make. If the SDKs and related packages are there, the docs need to be there too.

Where does Gentoo shine? As a platform for developers, as a platform for flexibility, as a platform to eke every last drop of performance out of a system. All of the above use cases are relevant to at least one of those areas.

I’m writing this post because I would love it if anyone else who wants to help Gentoo be more awesome would chip in with packaging in these specific areas. Let me know!

Update: Michael Stahnke suggested I point to some resources on Gentoo packaging, for anyone interested, so take a look at the Gentoo Development Guide. The Developer Handbook contains some further details on policy as well as info on how to get commit access by becoming a Gentoo developer.


Tagged: development, gentoo, greatness

Josh Saddler a.k.a. nightmorph (homepage, bugs)
fonts (March 19, 2013, 10:18 UTC)

i think i’ve sorted out some of my desktop font issues, and created a few more in the process.

for a long time, i’ve had to deal with occasionally jagged, hard-to-read fonts when viewing webpages, because i ran my xfce desktop without any font antialiasing.

i’ve always hated the way modern desktop environments try to “fool” my eyes with antialiasing and subpixel hinting to convince me that a group of square pixels can be smoothed into round shapes. turning off antialiasing tends to make the rounder fonts, especially serif fonts, look pretty bad at large sizes, as seen here:

display issues

my preferred font for the desktop and the web is verdana, which looks pretty good without antialiasing. but most websites use other fonts, so rather than force one size of verdana everywhere (which causes flow/layout issues), i turned on antialiasing for my entire desktop, including my preferred browser, and started disabling antialiasing where needed.

before and after font settings:

before/after settings

i tried the infinality patchset for freetype, but unfortunately none of the eselect configurations produced the crisply rounded antialiased text the patches are known for. i rebuilt freetype without the patchset, and went into /etc/fonts to do some XML hacking.

while eselect-fontconfig offers painless management of existing presets, the only way to customize one’s setup is to get into nitty-gritty text editing, and font configs are in XML format. this is what i ended up with:

$ cat ~/.fonts.conf

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<match target="font">
    <edit name="antialias" mode="assign">
        <bool>false</bool>
    </edit>
</match>
<match target="font" >
    <test name="size" qual="any" compare="more">
        <double>11</double>
    </test>
    <edit name="antialias" mode="assign">
        <bool>true</bool>
    </edit>
</match>
<match target="font" >
    <test name="pixelsize" qual="any" compare="more">
        <double>16</double>
    </test>
    <edit name="antialias" mode="assign">
        <bool>true</bool>
    </edit>
</match>
<match target="pattern">
    <test qual="any" name="family"><string>Helvetica</string></test>
    <edit name="antialias" mode="assign">
      <bool>true</bool>
    </edit>
</match>
</fontconfig>

let’s step through the rules:

first, all antialiasing is disabled. then, any requested font size over 11, or anything that would display more than 16 pixels high, is antialiased. finally, since the common helvetica font really needs to be antialiased at all sizes, a rule turns that on. in theory, that is — firefox and xfce both seem to be ignoring this. unless antialiasing really is enabled at the smallest sizes with no visible effect, since there are only so many pixel spaces available at that scale to “fake” rounded corners.

a test webpage shows the antialiasing effect on different fonts and sizes:

desktop and browser fonts

besides the helvetica issue, there are a few xfce font display problems. xfce is known for mostly ignoring the “modern” xorg font config files, and each app in the desktop environment follows its own aliasing and hinting rules. gvim’s monospace font is occasionally antialiased, resulting in hard-to-read code. the terminal, which uses the exact same font and size, is not antialiased, since it has its own control for text display.

the rest of the gtk+ apps in the above screenshot are size 10 verdana, so they have no antialiasing, being under the “size 11” rule. firefox doesn’t always obey the system’s font smoothing and hinting settings, even with the proper options set in about:config. unlike user stylesheets, there’s no way to enforce desktop settings with something like !important CSS code. i haven’t found any pattern in what firefox ignores or respects.

also, i haven’t found a workable fontconfig rule that enables antialiasing only for specific fonts at certain sizes. i’m not sure it’s even possible to set such a rule, despite putting together well-formed XML to do just that.
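for illustration, a combined rule that i would expect to work looks like this (font name picked at random; this is the shape of rule that gets ignored on my system):

<match target="pattern">
    <test name="family"><string>DejaVu Serif</string></test>
    <test name="size" compare="more"><double>12</double></test>
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
</match>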

* * *

to sum up: font management on linux can be needlessly complicated, even if you don’t have special vision needs. my environment is overall a bit better, but i’m not ready to move entirely to antialiased text, not until it’s less blurry. i need crispy, sharp text.

fonts on my android phone’s screen look pretty good despite the antialiasing used everywhere, but the thing’s pixel density is so much higher than laptop and desktop LCDs that the display server doesn’t need to resort to complicated smoothing/hinting techniques to achieve that look.

as a general resource, the arch linux wiki page has very useful information on font configuration. there are some great ideas in there, even if they don’t all work on my system. the gentoo linux wiki page on fontconfig is a bit more basic; i didn’t use anything from it.

March 18, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux tutorial series, update (March 18, 2013, 21:22 UTC)

Just a small update – the set of SELinux tutorials has been enhanced since my last blog post about it with information on SELinux booleans, customizable types, run-time modes (enforcing versus permissive), some bits about unconfined domains, information on policy loading, the purpose of SELinux roles, SELinux users, and an example of how a policy works regarding init scripts.

The near future will bring more information about the multi-level security aspect, about multi-category support, a review of the SELinux context (as we will then have handled each field in the context string), and I’ll also start with the second series, which focuses more on policy enhancements and policy building.

And probably a few dozen more. Happy reading!

March 16, 2013
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
a haskell dev survey (March 16, 2013, 20:58 UTC)

Ladies and gentlemen!

If you happen to be involved in using/developing haskell-powered software you might like to answer our poll on that matter.

Thanks in advance!


Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
PostgreSQL 8.3 Has Reached End of Life (March 16, 2013, 13:48 UTC)

Today I’ll be masking PostgreSQL 8.3 for removal. If you haven’t already, you should move to a more recent version of PostgreSQL.

March 14, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux tutorial series (March 14, 2013, 22:34 UTC)

As we get a growing number of SELinux users within Gentoo Hardened and because the SELinux usage at the firm I work at is most likely going to grow as well, I decided to join the bunch of documents on SELinux that are “out there” and start a series of my own. After all, too much documentation probably doesn’t hurt, and SELinux definitely deserves a lot of documentation.

I decided to use the Gentoo Wiki for this endeavour instead of a GuideXML approach (which is the format used for Gentoo documentation on the main site). The set of tutorials that I already wrote can be found under the SELinux : Gentoo Hardened SELinux Tutorials location. Although of course meant to support the Gentoo Hardened SELinux users, I’m hoping to keep the initial set of tutorial articles deliberately distribution-independent so I can refer to them at work as well.

For now (this is a week’s work, so don’t expect this amount of tutorials to double in the next few days) I wrote about the security context of a process, how SELinux controls file and directory accesses, where to find SELinux permission denial details, controlling file contexts yourself and how a process gets into a certain context.

I hope I can keep the articles in good shape and with a gradual step-up in complexity. That does mean that most articles are not complete (for instance, when talking about domain transitions, I don’t talk about constraints that might prohibit them, or about the role and type mismatches (invalid context) that you might get, etc.) and that those details will follow in later articles. Hopefully that allows users to learn step by step.

At the end of each tutorial, you will find a “What you need to remember” section. This is a very short overview of what was said in the tutorial and that you will need to know in future articles. If you ever read a tutorial article, then this section might be sufficient for you to remember again what it was about – no need to reread the entire article.

Consider it an attempt at a tl;dr for articles ;-) Enjoy your reading, and if you have any remarks, don’t hesitate to contribute on the wiki or talk through the “Talk” pages.

March 11, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
openSUSE 12.3 Release party in Nürnberg (March 11, 2013, 16:35 UTC)

Everybody probably already knows that openSUSE 12.3 is going to be released this Wednesday. I’m currently in the SUSE offices in Nuremberg, helping to polish the last bits and pieces for the upcoming release. But more importantly, as with every release, we need to celebrate it! And this time, due to lucky circumstances, I’ll be here for the Nuremberg release party!

The Nuremberg release party will take place on release day at Artefakt, in Nuremberg’s city centre, from 19:00 (local time, of course). It’s an open event, so everybody is welcome.

You can meet plenty of fellow Geekos there, and there will be some food and also openSUSE beer available (some charges may apply). Most of the openSUSE Team at SUSE (the former Boosters and Jos) will be there, and we hope to meet every openSUSE enthusiast, supporter or user from Nuremberg.

There will be a demo computer running 12.3 and hopefully even a public Google Hangout for people who want to join us remotely – follow the +openSUSE G+ page to see it, if we manage to set it up ;-)

So see you in great numbers on Wednesday at Artefakt!

PS: If you expected an announcement of the Prague release party from me, don’t worry, I haven’t forgotten about it; we are planning it, so expect an announcement soon and the party in a few weeks ;-)

March 09, 2013
David Abbott a.k.a. dabbott (homepage, bugs)
Open links with urxvt stopped working (March 09, 2013, 00:55 UTC)

INCOMPATIBLE CHANGE: renamed urlLauncher resource to url-launcher

so .Xdefaults becomes:

URxvt.perl-ext-common: default,matcher
URxvt.url-launcher: /usr/bin/firefox
URxvt.matcher.button: 1

https://bugzilla.redhat.com/show_bug.cgi?id=901544

March 08, 2013
Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)
Prague Installfest results (March 08, 2013, 13:37 UTC)

Last weekend (2.-3.3. 2013) we had a lovely conference here in Prague. People could attend quite a few very cool talks and even play an OpenArena tournament :-) Anyway, that isn’t so interesting for Gentoo users. The cool part for us is the Gentoo track that I tried to assemble there, and which I will try to describe here.

Setup of the venue

This was an easy task, as I borrowed a computer room in the dormitories’ basement, large enough to hold around 30 students. I just carried in my laptop and checked that the beamer worked. I made sure the chairs were not falling apart and replaced the broken ones, verified the wifi worked (which it did not, but the admins got it working just in time), and finally brought some drinks over from the main track so we would not dry out.

The classroom was in a slightly different area than the main track, so I tried to put up some arrows for people to find the place. But when people started getting in and calling me asking where the hell the place was, I figured out something was wrong. The signage was then adjusted, but it still shows that we should rather not split off from the main track, or else ensure there are HUGE and clear arrows pointing in the direction where people can find us.

Talks

During the day there were only three talks: two held by me and one, not on the plan, done by Theo.

Hardened talk

I was supposed to start this talk at 10:00, but given the issue with the arrows people only showed up around 10:20, so I had to cut back some information and live examples.
Anyway, I hope it was an interesting hardened overview, and at least Petr Krcmar took lots of notes, so maybe we will see some articles about it in the Czech media (something like “How I failed to install hardened Gentoo” :P).

Gentoo global stuff

This was more a discussion about features than a talk. The users were pointing out what they would like to see happening in Gentoo and what their largest issues have been lately.

Among the issues, people pointed out the broken udev update which rendered some boxes non-bootable (yes, there was a message, but those are quite easy to overlook; I forgot to act on it on one machine myself). Some suggestions went for genkernel to actually trigger a rebuild of the kernel right away in the post stage for users with the required options enabled. This sounds like quite a nice idea: since you are using genkernel, you probably want your kernel automatically adjusted and updated for the cases where the apps require additional options. As I am not familiar with the genkernel internals, I told the users to open a bug about this.

The second big thing we talked about was binary packages. The idea was to have some tinderbox producing generic binary packages available for the most common useflag variants. You could then specify -K and it would use the binary form, or compile locally if no binary was provided. For this, most of the work would need to be done on the Portage side, because we would have to somehow handle multiple versions of the same package with different enabled USE flags.

Infra talk

Theo did an awesome job explaining how infra uses puppet and what services and servers we have. This was an on-demand talk which the people on-site wanted.

Hacking — aka stuff that we somehow did

Martin “plusky” Pluskal (SU) went over our prehistoric bugs from 2k5 and 2k6 and created a list of cantfix ones which are no longer applicable, or which are new package requests with a dead upstream. I still have to close them or give him editbugz privs (this sounds more like it, as I am lazy like hell; or better yet, make him a developer :P).
Ondrej Sukup (ACR), attending over hangout, worked on python-r1 porting, and I committed his work to cvs.
Cyril “metan” Hrubis (SU) worked on crossdev, on some magic avr bug I don’t want to hear much about, but he seems optimistic that he might finish the work in the near future.
David Heidelberger worked first on fixing bugs with his lappy and then helped with the bug wrangling with Martin.
Jan “yac” Matejka (SU) finished his quizzes, and thus he got a shiny bug and is now in the lovely hands of our recruiters to become our newest addition to the team.
Michal “miska” Hrusecky (SU) worked on an update of the osc tools to match the latest we have in the openSUSE buildservice, and he plans to commit them soonish to cvs.
Pavel “pavlix” Simerda (RH), who is the guy responsible for the latest networkmanager bugs, expressed his intention to become a dev, and I agreed with him.
Tampakrap (SU) worked on breaking one laptop with a fresh install of Gentoo, which I then picked up and finished with some nice KDE love :-)
Amy Winston helped me a lot with the setup of the venue and also kept Theo and me busy breaking her laptop, which I hope she is still happily using and does not want to kill us for; other than that, she focused on our sweet bugzie and wrangling. She seems unwilling to finish her quizzes to become a full developer, so we will have to work hard on that in the future :-)
And lastly, I (SU) helped users with the issues they had on their local machines and explained how to avoid them or report them directly to bugzie with the relevant information, and so on.

In case you wonder SU = SUSE ; RH = RedHat; ACR = Armed forces CR.

For future events we have to keep in mind that we need to set these up better and have small, prepared buglists rather than wide-range ones where people spend more time picking the ideal work than actually working on it :-)

Lunch/Afterparty

The lunch and the afterparty took place in a nice pub nearby which had decent food and plenty of beer, so everyone was happy. The only problem was that it took some waiting to get the food, as there were suddenly 40 people in the pub (I still think this could have been prepared for somehow, so that they had only a limited subset of foods available really fast and you could choose between waiting a bit or picking something quick).

During the night one of the Gentoo attendees got quite drunk and had to be delivered home by the other organizers, as I had to leave a bit early (being up from 5 am is not something I fancy).
The big problem was figuring out where to take him, because he was not able to talk and his ID contained residency info for a different city. So the next time you go to a Linux event where you don’t know many people, put a paper with your address in your pocket. It is super convenient, and we won’t have to bother your parents at 1 am to find out what to do with their “sweet” child.

Endword

I would like to say huge thanks to all attendees for making the event possible, and also apologize for anything I forgot to mention here.

March 07, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)

Another month has passed, so time for a new progress meeting…

Toolchain

GCC v4.7 has been unmasked, allowing a large set of users to test out the new GCC. It is also expected that GCC 4.8-rc1 will hit the tree next week. In the hardened-dev overlay, hardened support for x86, amd64 and arm has been added (SPEC updates) and the remainder of architectures will be added by the end of the week.

Kernel and grSecurity/PaX

Kernel 3.7.5 had a security issue (a local root privilege escalation), so 3.7.5-r1, which held a fix for it, was stabilized quickly. However, other (non-security) problems have been reported, such as one with dovecot regarding the VSIZE memory size. This should be fixed in the 3.8 series, so those kernels are candidates for a faster stabilization. Faster stabilization is never fun, as it increases the likelihood that we miss other things, but it is needed when the vulnerability in the previous stable kernel is too severe.

Regarding XATTR_PAX, we are getting pretty close to the migration. The eclass is ready and will be announced for review on the appropriate mailing lists later this week. A small problem still remains on Paludis-using systems (Paludis does not record NEEDED.ELF.2 – linkage – information, so it is hard to get all the linkage information on such a system). A different revdep-pax and migrate-pax toolset will be built that detects the necessary linkage information, albeit much more slowly than on a Portage-running system.

SELinux

The 11th revision of the policies is now stable, and work is under way for the 12th revision, which will hit the tree soon. Some work is also under way for setools and policycoreutils (one due to a new release – setools – and the other due to a build failure if PAM is not set). Both packages will hit the hardened-dev overlay soon.

A new “edition” of the selinuxnode virtual image has been pushed to the mirror system, providing a SELinux-enabled (enforcing) Gentoo Hardened system with grSecurity and PaX, as well as IMA and EVM enabled.

Profiles

The 13.0 profiles have been running fine for a while at a few of our developer systems. No changes have been needed (yet) so things are looking good.

System Integrity

The necessary userland utilities have been moved to the main tree. The documentation for IMA/EVM has been updated as well to reflect the current state of IMA/EVM within Gentoo Hardened. IMA, even with the custom policies, seems to be working well. EVM on the other hand has some issues, so you might need to run with EVM=fix for now. Debugging of this issue is under way.

Documentation

Some of the user-oriented documentation (integrity and SELinux) has been moved to the Gentoo Wiki for easier user contributions and simplified management. Other documents will follow soon.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Having fun with integer factorization (March 07, 2013, 02:45 UTC)

Given the input

 # yafu "factor(10941738641570527421809707322040357612003732945449205990913842131476349984288934784717997257891267332497625752899781833797076537244027146743531593354333897)" -threads 4 -v -noecm

which, if one is patient enough, gives this output:

sqrtTime: 1163
NFS elapsed time = 3765830.4643 seconds.
pretesting / nfs ratio was 0.00
Total factoring time = 3765830.6384 seconds


***factors found***

PRP78 = 106603488380168454820927220360012878679207958575989291522270608237193062808643
PRP78 = 102639592829741105772054196573991675900716567808038066803341933521790711307779
What does that mean?
The input number is conveniently chosen from the RSA challenge numbers and was the "world record" until 2003. Advances in algorithms, compilers and hardware have made it possible for me to redo that record attempt in about a month of walltime on a single machine (a 4-core AMD64).

Want to try it yourself?

emerge yafu

That's the "easiest" tool to manage. The dependencies are a bit fiddly, but it works well for inputs up to ~512 bits, maybe a bit more. It depends on msieve, which is quite impressive, and gmp-ecm, which I find even more intriguing.
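To get a feel for the tool without waiting a month, feed it something tiny first. The session below is illustrative, showing the well-known factorization of 2^64+1; output abbreviated, and the exact format may differ between yafu versions:

 # yafu "factor(18446744073709551617)"
 ...
 P6 = 274177
 P14 = 67280421310721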

If you feel like more of a challenge:

emerge cado-nfs

This tool even supports multi-machine setups out of the box using ssh, but it's slightly intimidating and might not be obvious to figure out. Also, for a "small" input in the 120-decimal-digit range it was about 25% slower than yafu - but it's still impressive what these tools can do.

February 28, 2013
Jan Kundrát a.k.a. jkt (homepage, bugs)

There's a lot of people who are very careful never to delete a single line from an e-mail they are replying to, always quoting the complete history. There's also a lot of people who believe that it wastes time to eyeball such long, useless texts. One of the fancy features introduced in this release of Trojitá, a fast Qt IMAP e-mail client, is automatic quote collapsing. I won't show you an example of an annoying mail for obvious reasons :), but this feature is useful even for e-mails which employ a reasonable quoting strategy. It looks like this in action:

When you click on the ... symbols, the first level expands to reveal the following:

When everything is expanded, the end result looks like this:

This concept is extremely effective especially when communicating with a top-posting community.

We had quite some internal discussion about how to implement this feature. For those not familiar with Trojitá's architecture, we use a properly restricted QtWebKit instance for e-mail rendering. The restrictions which are active include click-wrapped loading of remote content for privacy (so that a spammer cannot know whether you have read their message), no plugins, no HTML5 local storage, and also no JavaScript. With JavaScript, it would be easy to do nice, click-controlled interactive collapsing of nested citations. However, enabling JavaScript might have quite some security implications (or maybe "only" keeping your CPU busy and draining your battery by a malicious third party). We could have enabled JavaScript for plaintext contents only, but that would not be as elegant as the solution we chose in the end.

Starting with Qt 4.8, WebKit ships with support for the :checked CSS3 pseudoclass. Using this feature, it's possible to change the style based on whether an HTML checkbox is checked or not. In theory, that's everything one might possibly need, but there's a small catch -- the usual way of showing/hiding contents based on the state of a checkbox hits a WebKit bug (quick summary: it's tough to have it working without the ~ adjacent-sibling selector unless you use it in one particular way). Long story short, I now know more about CSS3 than I thought I would ever want to know, and it works (unless you're on Qt5 already, where it assert-fails and crashes WebKit).
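The general idea behind this kind of JavaScript-free collapsing looks roughly like the following (a generic illustration of the :checked technique, not Trojitá's actual markup):

<style>
  /* the checkbox itself stays invisible; the label acts as the toggle */
  input.expander { display: none; }
  /* the quoted block right after the checkbox is hidden by default... */
  input.expander + blockquote { display: none; }
  /* ...and shown once the checkbox is checked */
  input.expander:checked + blockquote { display: block; }
</style>
<label for="q1">...</label>
<input type="checkbox" id="q1" class="expander">
<blockquote>quoted text goes here</blockquote>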

Speaking of WebKit, the way we use it in Trojitá is a bit unusual. The QWebView class contains full support for scrolling, so it is not necessary to put it inside a QScrollArea. However, when working with e-mails, one has to account for messages containing multiple body parts which have to be shown separately (again, for both practical and security reasons). In addition, the e-mail header, which is typically implemented as a custom QWidget for flexibility, is usually intended to combine with the message bodies into a single entity to be scrolled together. With WebKit, this is doable (after some size-hint magic, and I really mean magic -- thanks to Thomas Lübking of KWin fame for patches), but there's a catch -- internal methods like findText, which normally scroll the contents of the web page to the matching place, no longer work when the whole web view is embedded into a QScrollArea. I've dived into the source code of WebKit, and the interesting thing is that there is code for exactly this case, but it is only implemented in Apple's version of WebKit. The source code even says that Apple needed this for its own Mail.app -- an interesting coincidence, I guess.

Compared with the last release, Trojitá has also gained support for "smart replying". It will now detect that a message comes from a mailing list, and Ctrl+R will default to replying to the list. Thomas has added support for saving drafts, so you should no longer lose your work when you accidentally kill Trojitá. There's also been the traditional round of bug fixes and compatibility improvements. It is entertaining to see that Trojitá is apparently triggering certain code paths in various IMAP server implementations, proprietary and free software alike, for the first time.

The work on support for multiple IMAP accounts is getting closer to being ready for prime time. It isn't present in the current release, though -- the GUI integration in particular needs some polishing before it hits the masses.

I'm happy to observe that Trojitá is getting features which are missing from other popular e-mail clients. I'm especially fond of my pet contribution, the quote collapsing. Does your favorite e-mail application offer a similar feature?

In the coming weeks, I'd like to focus on getting the multiaccounts branch merged into master, adding better integration with the address book (Trojitá can already offer tab completion with data coming from Mutt's abook) and general GUI improvements. It would also be great to make it possible to let Trojitá act as a handler for the mailto: URLs so that it gets invoked when you click on an e-mail address in your favorite web browser, for example.

And finally, to maybe lure a reader or two into trying Trojitá, here's a short quote from a happy user who came to our IRC channel a few days ago:

17:16 < Sir_Herrbatka> i had no idea that it's possible for mail client to be THAT fast

One cannot help but be happy when reading this. Thanks!

If you're on Linux, you can get the latest version of Trojitá from the OBS or the usual place.

Cheers,
Jan

Greg KH a.k.a. gregkh (homepage, bugs)
Linux 3.8 is NOT a longterm kernel (February 28, 2013, 00:15 UTC)

I said this last week on Google+ when I was at a conference and needed to get it out there quickly, but as I keep getting emails and other queries about this, I might as well make it "official" here, for no other reason than that it provides a single place for me to point people at.

Anyway, I would like to announce that the 3.8 Linux kernel series is NOT going to be a longterm stable kernel release. I will NOT be maintaining it for a long time; in fact, I will stop maintaining it right after the 3.9 kernel is released.

The 3.0 and 3.4 kernel releases are both longterm, and both are going to be maintained by me for at least 2 years. If I were to pick 3.8 right now, that would mean I would be maintaining 3 longterm kernels, plus whatever "normal" stable kernels are happening at that time. That is something that I cannot do without losing even more hair than I currently have. To attempt it would be insane.

Hopefully this puts to rest all of the rumors.

February 25, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Uploading selinuxnode test VM (February 25, 2013, 01:05 UTC)

At the time of writing (but I’ll delay the publication of this post a few hours), I’m uploading a new SELinux-enabled KVM guest image. This is not an update on the previous image though (it’s a reinstalled system – after all, I use VMs for testing, so it makes sense to reinstall from time to time to check if the installation instructions are still accurate). However, the focus remains the same:

  • A minimal Gentoo Linux installation for amd64 (x86_64) as guest within a KVM hypervisor. The image is about 190 Mb in size compressed, and 1.6 Gb in size uncompressed. The file format is Qemu’s QCOW2 so expect the image to grow as you work with it. The file systems are, in total, sized to about 50 Gb.
  • The installation has SELinux enabled (strict policy, enforcing mode), various grSecurity settings enabled (including PaX and TPE), but now also includes IMA (Integrity Measurement Architecture) and EVM (Extended Verification Module) although EVM is by default started in fix mode.
  • The image will not start any network-facing daemons by default (unlike the previous image), for security reasons (if I let this image stay around as long as I did with the previous one, it’s prone to have some vulnerabilities in the future, although I’m hoping I can update the image more frequently). This includes SSH, so you’ll need access to the image console first, after which you can configure the network and start SSH (run_init rc-service sshd start does the trick).
  • A couple of default accounts are created, and the image will display those accounts and their passwords on the screen (it is a test/play VM, not a production VM).

There are still a few minor issues with it, that I hope to fix by the next upload:

  • Bug 457812 is still applicable to the image, so you’ll notice lots of SELinux denials on the mknod capability. They seem to be cosmetic though.
  • At shutdown, udev fails somewhere with a SELinux initial context problem. I thought I had it covered, but I noticed after compressing the image that it is still there. I’ll fix it – I promise ;)
  • EVM is enabled in fix mode, because otherwise EVM is prohibiting mode changes on files in /run. I still have to investigate this further though – I had to use the EVM=fix workaround due to time pressure.

When uploaded, I’ll ask the Gentoo infrastructure team to synchronise the image with our mirrors so you can enjoy it. It’ll be on the distfiles, under experimental/amd64/qemu-selinux (it has the 20130224 date in the name, so you can see for yourself if the sync has already occurred or not).

February 23, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Working on a new selinuxnode VM (February 23, 2013, 12:04 UTC)

A long time ago, I made a SELinux enabled VM for people to play with, displaying a minimal Gentoo installation, including the hardening features it supports (PIE/PIC toolchain, grSecurity, PaX and SELinux). I’m currently trying to create a new one, which also includes IMA/EVM, but it looks like I still have many things to investigate further…

First of all, I notice that many SELinux domains want to use the mknod capability, even domains for which I have no idea whatsoever why they would need it. I don’t notice any downsides though, and running in permissive mode doesn’t change the domains’ behavior. But still, I’m reluctant to mark them dontaudit as long as I’m not 100% sure.

Second, the gettys (I think it is the getty) result in a “Cannot change SELinux context: permission denied” error, even though everything is running in the right SELinux context. I still have to confirm whether it really is the getty process or something else (during the last run I had the impression it was a udev-related process). But there are no denials and no SELinux errors in the logs.

Third, during shutdown, many domains have problems accessing their PID files in /var/run (which is a symlink to /run). I most likely need to allow all domains that have access to var_run_t to read the var_t symlinks. It isn’t a problem per se (the processes still run correctly), but it is ugly as hell, and if you introduce monitoring it’ll go haywire (as PID files are either not found, or stale).

Also, EVM is giving me a hard time, not allowing me to change mode and ownership in files on /var/run. I have received some feedback from the IMA user list on this so it is still very much a work-in-progress.

Finally, the first attempt to generate a new VM resulted in a download of 817 MB (instead of the 158 MB of the previous release), so I still have to correct my USE flags and double-check the installed applications. Anyway, definitely to be continued. Too bad time is a scarce resource :-(

February 17, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
LightZone in Gentoo betagarden (February 17, 2013, 19:08 UTC)

If you are running Gentoo, heard about the release of the LightZone source code and got curious to see it for yourself:

sudo layman -a betagarden
sudo emerge -av media-gfx/LightZone

What you get is LightZone 100% built from sources, no more shipped .jar files included.

One word of warning: the software has not seen much testing in this form yet. So if your pictures mean a lot to you, make backups first. Better safe than sorry.

February 15, 2013
LinuxCrazy Podcasts a.k.a. linuxcrazy (homepage, bugs)
Podcast 97 Interview with WilliamH (February 15, 2013, 00:46 UTC)

Interview with WilliamH, Gentoo Linux Developer

Links

OpenRC
http://en.wikipedia.org/wiki/OpenRC
http://git.overlays.gentoo.org/gitweb/?p=proj/openrc.git
udev
http://en.wikipedia.org/wiki/Udev
espeak
http://espeak.sourceforge.net/
speakup
http://www.linux-speakup.org/
espeakup
https://github.com/williamh/espeakup
Gentoo Accessibility
http://www.gentoo.org/proj/en/desktop/accessibility/

Download

ogg

February 14, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

http://pyfound.blogspot.de/2013/02/python-trademark-at-risk-in-europe-we.html

Greg KH a.k.a. gregkh (homepage, bugs)
A year in my life. (February 14, 2013, 17:58 UTC)

I've now been with the Linux Foundation for just over a year. When I started, I posted a list of how you can watch to see what I've been doing. But, given that people like to see year-end-summary reports, the excellent graphic designers at the Linux Foundation have put together an image summarizing my past year, in numbers:

Year in the life of a kernel maintainer

February 13, 2013
Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Gentoo Bugday Strikes Back (February 13, 2013, 09:38 UTC)


In an attempt to revive the Gentoo Bugday, I wrote this article to give some guidelines and encourage both users and developers to join. I think it would be great to get this event back and collaborate. Of course, everyone can open/close bugs silently, but this type of event is a good way to close bugs, attract new developers/users, and improve community relations. There is no need to be a Gentoo expert. So here is some information about the event.

About:

Bugday is a monthly online event that takes place on the first Saturday of every month in #gentoo-bugs on the Freenode network. Its goal is to have users and developers collaborate to close/open bugs, update current packages, and improve documentation.

Location:

Gentoo Bugday takes place in our official IRC channel, #gentoo-bugs @ Freenode. You can talk about almost anything: your ebuilds, version bumps, bugs that you choose to fix, etc. This is a 24h event, so don’t worry about timezone differences.

Requirements:

  1. A Gentoo installation (real hardware) or in a Virtual Machine.
  2. An IRC client to join #gentoo-bugs, #gentoo-dev-help (ebuild help) and #gentoo-wiki (wiki help)
  3. Bugzilla account.
  4. Positive energy / Will to help.
  5. (bonus): Coffee ;)

Goals:

  1. Improve quality of Bugzilla
  2. Improve Wiki’s documentation.
  3. Improve community relations.
  4. Attract new developers and users.
  5. Promote Gentoo.

Tasks:

  1. Fix bugs (users/developers)
  2. Triage incoming bugs (users/developers) (Good to start!)
  3. Version bumps (users/developers) (Good to start!)
  4. Improve wiki articles (users/developers) (Good to start!)
  5. Add new wiki articles (users/developers)
  6. Close old fixed bugs (developers-only)

A good way to start is to take a look at the ‘maintainer-needed’ list. In addition, try picking up a bug from the maintainer-wanted alias at Bugzilla.

TIP: You should DOUBLE/TRIPLE check everything before submitting a new bug/patch/ebuild.

TIP2: Please avoid 0day bump requests.

And do not forget: every day is a bugday!!

Organize your schedule and join us every first Saturday of every month @ #gentoo-bugs.

Consider starting today by reading the following docs; they should help you get going.

Useful Docs:

  1. Gentoo Bugday
  2. Get Involved in Gentoo Linux
  3. How to contribute to Gentoo
  4. Gentoo Dev Manual
  5. Contributing Ebuilds
  6. Gentoo Bug Reporting Guide
  7. Beautiful bug reports
  8. Gentoo’s Bugzilla User’s Guide
  9. How to get meaningful backtraces in Gentoo
  10. The Basics of Autotools

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

As one of my four talks at FOSDEM, I gave one on Gentoo titled “Package management and creation in Gentoo Linux.” The basic idea was: what could packagers and developers of other, non-Gentoo distros learn from Gentoo’s packaging format, and from how we’ve iterated on that format multiple times over the years? It’s got some slides, but the interesting part is where we run through actual ebuilds to see how they’ve changed as we’ve advanced through EAPIs (Ebuild APIs), starting at 16:39.

If you click through to YouTube, the larger (but not fullscreen) version seems to be the easiest to read.

It was scaled from 720×576 to a 480p video, so if you find the code too hard to read, you can view the original WebM here.


Tagged: development, gentoo

February 12, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Transforming GuideXML to wiki (February 12, 2013, 18:12 UTC)

The Gentoo project has had its own official wiki for some time now, and we are going to use it more and more in the next few months. For instance, in the last Gentoo Hardened meeting, we already agreed that most user-oriented documentation should be put on the wiki, and I’ve heard there are ideas about moving Gentoo project pages at large towards the wiki. And for the regular Gentoo documentation as well, I will be moving those guides that we cannot easily maintain ourselves anymore towards the wiki.

To support migrations of documents, I created a gxml2wiki.xsl stylesheet. Such a stylesheet can be used, together with tools like xsltproc, to transform GuideXML documents into text output somewhat suitable for the wiki. It isn’t perfect (far from it, actually), but at least it allows for a simpler migration of documents, with minor editing afterwards.

Currently, using it is as simple as invoking it against the GuideXML document you want to transform:

~$ xsltproc gxml2wiki.xsl /path/to/document.xml

The output shown on the screen can then be used as a page. The following things still need to be corrected manually:

  • Whitespace is broken; sometimes there are too many newlines. I decided to err on the side of too many newlines rather than too few, since superfluous newlines are easy to delete while missing ones are harder to find and add back in.
  • Links need to be double/triple checked, but I’ll try to fix that in later versions of the stylesheet.
  • Commands will have “INTERNAL” in them – you’ll need to move the commands themselves into the proper location and only put the necessary output in the pre-tags. This is because the wiki format has more structure than GuideXML in this matter, so the transformations are more difficult to write in this regard.

The stylesheet currently automatically adds a link to the Server and security category, but of course you’ll need to change that to the proper category for the document you are converting.

Happy documentation hacking!

February 09, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I guess many people may hit similar problems, so here is my experience of the upgrades. Generally it was pretty smooth, but it required paying attention to the details and some documentation/forum lookups.

udev-171 -> udev-197 upgrade

  1. Make sure you have CONFIG_DEVTMPFS=y in kernel .config, otherwise the system becomes unbootable for sure (I think the error message during boot mentions that config option, which is good).
  2. The ebuild also asks for CONFIG_BLK_DEV_BSG=y; not sure if that's strictly needed, but I'm including it here for completeness. (A quick check for both options is sketched after this list.)
  3. Things work fine for me without DEVTMPFS_MOUNT. I haven't tried with it enabled, I guess it's optional.
  4. I do not have a split /usr. YMMV then if you do.
  5. Make sure to run "rc-update del udev-postmount".
  6. Expect network device names to change (I guess this is a non-issue for systems with a single network card). This can really mess up things in quite surprising ways. It seems /etc/udev/rules.d/70-persistent-net.rules no longer works (bug #453494). Note that the "new way" to do the same thing (http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames) is disabled by default in Gentoo (see /etc/udev/rules.d/80-net-name-slot.rules). For now I've adjusted my firewall and other configs, but I think I'll need to figure out the new persistent net naming system.
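
For convenience, here's a minimal sketch of the pre-reboot checks (assuming the usual /usr/src/linux symlink points at the kernel you actually boot):

~# grep -E 'CONFIG_(DEVTMPFS|BLK_DEV_BSG)=' /usr/src/linux/.config
CONFIG_DEVTMPFS=y
CONFIG_BLK_DEV_BSG=y
~# rc-update del udev-postmount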

iptables-1.4.13 -> iptables-1.4.16.3

* Loading iptables state and starting firewall ...
WARNING: The state match is obsolete. Use conntrack instead.
iptables-restore v1.4.16.3: state: option "--state" must be specified

It can be really non-obvious what to do with this one. Change your rules from e.g. "-m state --state RELATED" to "-m conntrack --ctstate RELATED". See http://forums.gentoo.org/viewtopic-t-940302.html for more info.
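
For example, a before/after sketch (a hypothetical excerpt from a saved ruleset; on Gentoo the init script saves rules to /var/lib/iptables/rules-save):

# old, obsolete "state" match:
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# equivalent "conntrack" match:
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
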
Also note that iptables-restore doesn't really provide good error messages, e.g. "iptables-restore: line 48 failed". I didn't find a way to make it say what exactly was wrong (the line in question was just a COMMIT line; it didn't actually identify the real offending line). These mysterious errors are usually caused by missing kernel support for some firewall features/targets.

two upgrades together

Actually, what adds to the confusion is having these two upgrades done simultaneously. This makes it harder to identify which upgrade is responsible for which breakage. For a smoother ride, I'd recommend upgrading iptables first, making sure the updated rules work, and then proceeding with udev.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We've generated a new set of profiles for Gentoo installations. These are now called 13.0 instead of 10.0, e.g., "default/linux/amd64/10.0/desktop" becomes "default/linux/amd64/13.0/desktop".
Everyone should upgrade as soon as possible. This brings (nearly) no user-visible changes. Some new files have been added to the profile directories that make it possible for the developers to do more fine-grained use flag masking (see PMS-5 for the details), and this formally requires a new profile tree with EAPI=5 (and a recent portage version, but anything since sys-apps/portage-2.1.11.31 should work and anything since sys-apps/portage-2.1.11.50 should be perfect).
Since the 10.0 profiles will be deprecated immediately and removed in a year, emerge will suggest a replacement on every run. I strongly suggest you just follow that recommendation.
One additional change comes with the new profiles: the "server" profiles have been removed; they do not exist in the 13.0 tree anymore. If you have used a server profile so far, you should migrate to its parent, i.e. from "default/linux/amd64/10.0/server" to "default/linux/amd64/13.0". This may change the default value of some use flags (the setting in "server" was USE="-perl -python snmp truetype xml"), so you may want to check the setting of these flags after switching profiles, but otherwise nothing changes.
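
For reference, a minimal sketch of the profile switch using eselect (the exact names and numbers in the list depend on your arch and setup, so check the list output first):

~# eselect profile list
~# eselect profile set default/linux/amd64/13.0/desktop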

February 08, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

While on my machine KDE 4.10.0 runs perfectly fine, unfortunately a lot of Gentoo users see immediate crashes of plasma-desktop - which makes the graphical desktop environment completely unusable. We know more or less what happened in the meantime, just not how to properly fix it...
The problem:

  • plasma-desktop uses a new code path in 4.10, which triggers a Qt bug leading to immediate SIGSEGV. 
  • The Qt bug only becomes fatal for some compiler options, and only on 64bit systems (amd64).
  • The Qt bug may be a fundamental architectural problem that needs proper thought.
The bugfixing situation:
  • Reverting the commit to plasma-workspace that introduced the problem makes the crash go away, but plasma-desktop starts hogging 100% CPU after a while. (This is done in plasma-workspace-4.10.0-r1 as a stopgap measure.) Kinda makes sense since the commit was there to fix a problem - now we hit the original problem.
  • The bug seems not to occur if Qt is compiled with CFLAGS="-Os". Cause unknown. 
  • David E. Narváez aka dmaggot wrote a patch for Qt that fixes this particular codepath but likely does not solve the global problem.
  • So far comments from Qt upstream indicate that this is in their opinion not the right way to fix the problem.
  • Our Gentoo Qt team understandably only wants to apply a patch if it has been accepted upstream.
Right now, the only option we (as the Gentoo KDE team) have is to wait for someone to pick up the phone. Either from KDE (to properly use the old codepath or provide some alternative), or from Qt (to fix the bug or apply a workaround)...

Sorry & stay tuned.

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

Update! Update! Read all about it! You can find the recent updates in a tree near you. They are currently keyworded, but will be stabilized as soon as the arch teams find time to do so. You may not want to wait that long. The vulnerability is a denial of service (DoS), which is not as severe as it sounds in this case: the user would have to be logged in to cause it.
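
For the impatient, a minimal sketch of pulling in the update right away (while stabilization is pending, you may need to accept the testing keyword, e.g. via /etc/portage/package.accept_keywords):

~# emerge --sync
~# emerge --ask --oneshot dev-db/postgresql-server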

There have been some other updates to the PostgreSQL ebuilds as well. PostgreSQL will no longer restart if you restart your system logger. The ebuilds install PAM service files unique to each slot, so you don’t have to worry about them being removed when you uninstall an old slot. And, finally, you can write your PL/Python in Python 3.

Greg KH a.k.a. gregkh (homepage, bugs)
AF_BUS, D-Bus, and the Linux kernel (February 08, 2013, 18:37 UTC)

There's been a lot of information scattered around the internet about these topics recently, so here's my attempt to put it all in one place to (hopefully) settle things down and give my inbox a break.

Last week I spent a number of days at the GNOME Developer Hackfest in Brussels, with the goal of improving the way applications written for GNOME (and, more generally, Linux) can be distributed. A great summary of what happened there can be found in this H-Online article. Also please read Alexander Larsson's great summary of what we discussed and worked on for another view of this.

Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it today.

Our goal (and I use "goal" in a very rough sense; I have 8 pages of scribbled notes describing what we want to try to implement here) is to provide a reliable multicast and point-to-point messaging system for the kernel that will work quickly and securely. On top of this kernel feature, we will try to provide a "libdbus" interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system.


"But Greg!" some of you will shout, "What about the existing AF_BUS kernel patches that have been floating around for a while and that you put into the LTSI 3.4 kernel release?"

The existing AF_BUS patches are great for users who need a very low-latency, high-speed, D-Bus protocol on their system. This includes the crazy automotive Linux developers, who try to shove tens of thousands of D-Bus messages through their system at boot time, all while using extremely underpowered processors. For this reason, I included the AF_BUS patches in the LTSI kernel release, as that limited application can benefit from them.

Please remember the LTSI kernel is just like a distro kernel: it has no relation to upstream kernel development other than being a consumer of it. Patches are in this kernel because the LTSI member groups need them; they aren't always upstream, just as with all Linux distro kernels.

However, given that the AF_BUS patches have been rejected by the upstream Linux kernel developers, I advise that anyone relying on them be very careful about their usage, and be prepared to move away from them sometime in the future when this new "kernel dbus" code is properly merged.

As for when this new kernel code will be finished, I can only respond with the traditional "when it is done" mantra. I can't provide any deadlines, and at this point in time I don't need any additional help with it; we have enough people working on it at the moment. It's available publicly if you really want to see it, but I'll not link to it as it's nothing you really want to see or watch right now. When it gets to a usable state, I'll announce it in the usual places (the linux-kernel mailing list), where it will be torn to the usual shreds and I will rewrite it all again to get it into a mergeable state.

In the meantime, if you see me at any of the many Linux conferences I'll be attending around the world this year, and you are curious about the current status, buy me a beer and I'll be glad to discuss it in person.

If there's anything else people are wondering about this topic, feel free to comment on it here on google+, or email me.

February 07, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened goes onward (aka project meeting) (February 07, 2013, 21:40 UTC)

It’s been a while again, so time for another Gentoo Hardened online progress meeting.

Toolchain

GCC 4.8 is in development stage 4, so the hardened patches will be worked on next week. Some help is needed to test the patches on ARM, PPC and MIPS, though. For those interested, keep a close eye on the hardened-dev overlay, as it will contain the latest fixes. When GCC 4.9 starts development phase 1, Zorry will again try to upstream the patches.

With the coming fixes, we will probably (need to) remove the various hardenedno* GCC profiles from the hardened Gentoo profiles. This shouldn’t impact too many users, as ebuilds add the correct flags anyhow (for instance when PIE/PIC needs to be turned off).

Kernel, grSecurity and PaX

The kernel release 3.7.0 that we have stable in our tree has seen a few setbacks, but no higher version is stable yet (mainly due to the stabilization period needed). 3.7.4-r1 and 3.7.5 are prime candidates with a good track record, so we might be stabilizing 3.7.5 in the very near future (next week, probably).

On the PaX flag migration (you know, from ELF-header-based markings to extended-attribute markings), the documentation has seen the necessary updates, and the userland utilities have been updated to reflect the use of xattr markings. The eclass we use for the markings will use the correct utility based on the environment.

One issue faced when trying to support both markings is that some actions (like “paxctl -Cc”, which creates the PT_PAX header if it is missing) make no sense for the other marking style (as there is no header when using XATTR_PAX). The eclass will be updated to ignore these flags when XATTR_PAX is selected.
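
To illustrate the difference between the two styles, a minimal sketch (the binary path is just an example; XATTR_PAX stores the flags in the user.pax.flags extended attribute):

~# paxctl -Cm /usr/bin/example                        # PT_PAX: create the header if missing, disable MPROTECT
~# setfattr -n user.pax.flags -v m /usr/bin/example   # the XATTR_PAX equivalent
~# getfattr -n user.pax.flags /usr/bin/example        # verify the marking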

SELinux

Revision 10 is stable in the tree, and revision 11 is awaiting the end of its stabilization period. A few more changes have been put in the policy repository already (these are installed when using the live ebuilds) and will of course be part of revision 12.

A change in the userland utilities was also pushed out to allow permissive domains (running a single domain in permissive mode instead of the entire system).
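
A minimal sketch with the standard SELinux userland tools (the domain name is just an example):

~# semanage permissive -a mozilla_t   # run only this domain in permissive mode
~# semanage permissive -l             # list all permissive domains
~# semanage permissive -d mozilla_t   # put the domain back in enforcing mode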

Finally, the SELinux eclass has been updated to remove SELinux modules from all defined SELinux module stores when the SELinux policy package is removed from the system. Before that, users had to remove the modules from the stores manually, which is error-prone and easily forgotten, especially for the non-default SELinux policy stores.

Profiles

All hardened subprofiles are marked as deprecated now (you’ve probably seen the discussions about this on the mailing list), so we now have a sane set of hardened profiles to manage. The subprofiles were used for things like “desktop” or “server”, whereas users can easily stack their profiles as they see fit anyhow – so there was little reason for the project to continue managing those subprofiles.

Also, now that Gentoo has released its 13.0 profile, we will need to migrate our profiles to the 13.0 ones as well. So, the idea is to temporarily support 13.0 in a subprofile, test it thoroughly, and then remove the subprofile and switch the main one to 13.0.

System Integrity

The documentation for IMA and EVM is available on the Gentoo Hardened project site. It currently still refers to the IMA and EVM subsystems as development-only, but they are available in the stable kernels now. The default policy that is available in the kernel is especially useful. If you want to consider custom policies (for instance with SELinux integration), you’ll need a kernel patch that has already been upstreamed but is not applied to the stable kernels yet.
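
As a rough sketch, the built-in IMA policies can be selected on the kernel command line (parameter names as documented in the kernel's Documentation/kernel-parameters.txt for kernels of this era; double-check them against your kernel version):

# appended to the kernel line in the bootloader configuration:
ima_tcb ima_appraise_tcb ima_appraise=fix
# ima_tcb          - enable the default measurement policy
# ima_appraise_tcb - additionally appraise file integrity
# ima_appraise=fix - "fix" mode: record hashes instead of enforcing them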

To support IMA/EVM, a package called ima-evm-utils is available in the hardened-dev overlay, which will be moved to the main tree soon.

Documentation

As mentioned before, the PaX documentation has seen quite a lot of updates. Other documents that have seen updates are the Hardened FAQ, the Integrity subproject documentation and the SELinux documentation, although most of the changes were small.

Another suggestion was to clean up the Hardened project page; however, there has been some talk within Gentoo of moving project pages to the Gentoo wiki. Such a move might make the suggestion easier to handle. And while on the subject of the wiki, we might want to move user guides to the wiki already.

Bugs

Bug 443630 refers to segmentation faults with libvirt when starting Qemu domains on an SELinux-enabled host. Sadly, I’m not able to test libvirt myself, so either someone with SELinux and libvirt expertise can chime in, or we will need to troubleshoot it through the bug report (using gdb, more strace output, …), which might take quite some time and is not user friendly…

Media

Various talks were held at FOSDEM regarding Gentoo Hardened, and a lot of people attended those talks. The round table was also quite effective, with many users interacting with developers all around. For next year, chances are very high that we’ll give a “What has changed since last year” session and hold a round table again.

With many thanks to the usual suspects: Zorry, blueness, prometheanfire, lejonet, klondike and the several dozen contributors that are going to kill me for not mentioning their (nick)names.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
January in review: Istanbul, Dubai (February 07, 2013, 17:33 UTC)

Preface: It appears that I have fallen behind in my writings. It’s a shame really because I think of things that I should write in the moment and then forget. However, as I’m embracing slowish travel, sometimes I just don’t really do anything that is interesting to write about every day/week.

My last post was about my time in Greece. Since then I have been to Istanbul, Dubai, and (now) Sri Lanka. I was in Istanbul for about 10 days. My lasting impressions of Istanbul were:

  • +: Turkey was the first Muslim country I’ve been to. This is a positive because it opened up some thoughts of what to expect as I continue east. To see all the impressive mosques, to hear the azan (call to prayer) in the streets, and to talk to some Turks about religion really made it a new experience for me.
  • +: Istanbul receives many visitors per year, which makes it such that it is easy to converse, find stuff you need, etc
  • -: Istanbul receives many visitors per year, which makes it very touristy in some parts.
  • +: Istanbul is a huge city and there is much to see. I stepped on Asia for the first time. There are many old, old buildings that leave you in awe. The oldest shopping area in the world, the Grand Bazaar, stuff like that.
  • -: Istanbul is a huge city and the public transit is not well connected, I thought.
  • –: Every shop owner harasses you to come in the store! The best defense that I can recommend is to walk with a purpose (like you are running an errand) but not in a hurry. This will bring the least amount of attention to yourself at risk of “missing” the finer details as you meander.

[Photo: Turkey, January 2013]

Let’s not joke anyone: Dubai was a skydiving trip, for sure. I spent 15 days in Dubai and made 30 jumps. It was a blast. I was at the dropzone most every day, and on the weather days my generous hosts showed me around the city. I didn’t feel the need to take any pictures of the sights because, while impressive, they seemed too “fake” to me (outrageous, silly, etc.). I went to the largest mall in the world, ate brunch in the shadow of the largest building in the world, saw the largest aquarium and an indoor ski hill in a desert; eventually it was just…meh. However, I will never forget “The Palm”.

When deciding where to go onwards, I knew I shouldn’t stay in Dubai too long (money matters, of course; I would spend my whole lot on fun, and there is so much more to see). I ended up in Sri Lanka because Skyscanner told me there was a direct flight there on a budget airline. I don’t see the point in accepting layovers in my flight details at my pace. Then I found someone on HelpX who wanted an English teacher in exchange for accommodation. While I’m not a teacher, I am a native speaker, and that was acceptable at this level of classes. I did a week’s stint of that in a small village and now I’m relaxing at the beach… I’ll write more about Sri Lanka later and post pics. A fun photo so far:

[Photo, 9 February 2013]

January 31, 2013
LinuxCrazy Podcasts a.k.a. linuxcrazy (homepage, bugs)
Podcast 96 OpenRC | SystemD | Pulseaudio (January 31, 2013, 22:38 UTC)

LC

In this podcast, comprookie talks about Gentoo and the OpenRC, udev, SystemD debate, his slacking abilities and so much less ...

Links

SystemD
http://www.freedesktop.org/wiki/Software/systemd
http://0pointer.de/blog/projects/the-biggest-myths.html
OpenRC
http://www.gentoo.org/proj/en/base/openrc/
eudev
http://www.gentoo.org/proj/en/eudev/
Gentoo udev
http://wiki.gentoo.org/wiki/Udev

Download

ogg

Markos Chandras a.k.a. hwoarang (homepage, bugs)
What happened to all the mentors? (January 31, 2013, 19:07 UTC)

I had this post in the Drafts for a while, but now it’s time to publish it since the situation does not seem to be improving at all.

As you probably know, if you want to become a Gentoo developer, you need to find yourself a mentor[1]. This used to be easy. I mean, all you had to do was contact the teams you were interested in contributing to as a developer, and one of the team members would step up and help you with your quizzes. However, lately I find myself in the weird situation of having to become a mentor myself, because potential recruits come back to recruiters and say that they could not find someone from the teams to help them. This is sub-optimal for a couple of reasons. First of all, time constraints: mentoring someone can take days, weeks or months. Recruiting someone after they have been trained (properly or not) can also take days, weeks or months. So somehow, I ended up spending twice as much time as I used to, and we are back to those good old days where someone needed to wait months before we could fully recruit them. Secondly, a mentor and a recruiter should be different persons. This is necessary for recruits to get wider and more efficient training, as different people will focus on different areas during this training period.

One may wonder why teams are not willing to spend time training new developers. I guess this is because training people takes quite a lot of someone’s time, and people tend to prefer fixing bugs and writing code to spending time training people. Another reason could be that teams are short on manpower, so they are mostly busy with other stuff and just can’t do both at the same time. Others just don’t feel ready to become mentors, which is rather weird, because every developer was once a mentee. So it’s not like they haven’t done something similar before. The truth is that this seems to be a vicious circle: no manpower to train people -> fewer people are trained -> not enough manpower in the teams.

In my opinion, getting more people on board is absolutely crucial for Gentoo. I strongly believe that people must spend time training new people because a) they could offload work to them ;) and b) it’s a bit sad to have quite a few interested and motivated people out there and not spend the time to train them properly and get them on board. I sincerely hope this is a temporary situation and things will become better in the future.

ps: I will be at FOSDEM this weekend. If you are there and you would like to discuss the Gentoo recruitment process or anything else, come and find me ;)


[1] http://www.gentoo.org/proj/en/devrel/handbook/handbook.xml?part=1&chap=2#doc_chap3

January 30, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: iO Tillett Wright: Fifty shades of gay (January 30, 2013, 23:01 UTC)

Since the TED player seems to skip the last few seconds, I’m linking to the TED talk page but embedding a version from YouTube:

January 29, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)