
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
May 19, 2013, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

May 19, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

I was very sceptical for a long time. Then, I slowly started to trust the kmail2/akonadi combination. I've been using it on my office desktop for a long time, and it works well and is very stable and fast there. (Might be related to the fact that the IMAP server is just across the lawn.) Some time ago, when I deemed things solid enough, I even upgraded my laptop again, despite earlier problems. In Gentoo, we've been keeping kdepim-4.4 around all the time, and as you may have read, internal discussions indeed led to the decision to finally drop it some time ago.
What happened in the meantime?
1) One of the more annoying bugs mentioned in my last blog post was fixed with some help from Kevin Kofler. Seems like Debian stumbled into the same issue long ago.
2) I was on vacation. Which was fun, but mostly unrelated to the issue at hand. None of my Gentoo colleagues went ahead with the removal in the meantime. A lot of e-mails accumulated in my account.
3) Coming back, I was on the train with my laptop, sorting the mail. The train was full, the onboard WLAN slightly overstressed, the 4G network just about more reliable. The network comes and goes, sometimes because of a tunnel; no problem. Or so I thought.
4) Half an hour before arriving back home I realized that silently a large part of the e-mails that I had (I thought) moved (using kmail2-4.10.3 / akonadi-1.9.2) from one folder to another over ~3 hours had disappeared on one side, and not re-appeared on the other. Restarting kmail2 and akonadi did not help. A quick check of the webmail interface of my provider confirmed that also on the IMAP server the mails were gone in both folders. &%(/&%(&/$/&%$§&/
I wasn't happy. Luckily there were daily server backup snapshots, and after a few days' delay I had all the documents back. Nevertheless... Now, I am considering what to do next. (Needless to say, in my opinion we should forget about dropping kmail1 in Gentoo for now.) Options...
a) migrate the laptop back to kmail1, which is way more resistant to dropped connections and flaky internet connections - doable but takes a bit of time
b) install OfflineIMAP and Dovecot on the laptop, and let kmail2/akonadi access the localhost Dovecot server - probably the most elegant solution but for the fact that OfflineIMAP seems to have trouble mirroring our Novell Groupwise IMAP server
c) other e-mail client? I've heard good things about trojita...
Summarizing... still no idea how to go ahead, no good solution available. And I actually like the kdepim integration idea, so I'll never be the first one to completely migrate away from it! I am sincerely sorry that this post is surely disheartening to all the people who put a lot of effort into improving kmail2 and akonadi. It has become a whole lot better. However, I am just getting more and more convinced that the complexity of this combined system is too much to handle and that kmail should never have gone the akonadi way.

Sven Vermeulen a.k.a. swift (homepage, bugs)
The weird “audit_access” permission (May 19, 2013, 01:50 UTC)

While writing up the posts on capabilities, one thing I had in mind was to give some additional information on frequently occurring denials, such as the dac_override and dac_read_search capabilities, and when they are triggered. For the DAC-related capabilities, policy developers often notice that these capabilities are triggered without a real need for them. So in the majority of cases, the policy developer wants to disable auditing of this:

dontaudit <somedomain> self:capability { dac_read_search dac_override };

When an application wants to search through directories not owned by the user that the application runs as, both capabilities will be checked – first the dac_read_search one and, if that is denied (it will be audited though), then dac_override is checked. If that one is denied as well, it too will be audited. That is why many developers automatically dontaudit both capability calls if the application itself doesn't really need the permission.

Let's say you allow this because the application needs it. But then another issue comes up when the application checks file attributes or access permissions (a second frequently occurring denial that developers come across). Such applications use access() or faccessat() to get information about files, but other than that don't do anything with the files. When this occurs and the domain does not have read, write or execute permissions on the target, the denial is shown even though the application doesn't really read, write or execute the file.

/* check.c - report the results of access(2) probes on a file */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char ** argv) {
  if (argc < 2)
    return 1;
  printf("%s: Exists (%d), Readable (%d), Writeable (%d), Executable (%d)\n", argv[1],
    access(argv[1], F_OK), access(argv[1], R_OK),
    access(argv[1], W_OK), access(argv[1], X_OK));
  return 0;
}
$ check /var/lib/logrotate.status
/var/lib/logrotate.status: Exists (0), Readable (-1), Writeable (-1), Executable (-1)

$ tail -1 /var/log/audit.log
...
type=AVC msg=audit(1367400559.273:5224): avc:  denied  { read } for  pid=12270 comm="test" name="logrotate.status" dev="dm-3" ino=2849 scontext=staff_u:staff_r:staff_t tcontext=system_u:object_r:logrotate_var_lib_t tclass=file

This gives the impression that the application is doing nasty stuff, even when it is merely checking permissions. One way would be to dontaudit read as well, but if the application does the check against several files of various types, that might mean you need to include dontaudit statements for various domains. That by itself isn’t wrong, but perhaps you do not want to audit such checks but do want to audit real read attempts. This is what the audit_access permission is for.

The audit_access permission is meant to be used only for dontaudit statements: it has no effect on the security of the system itself, so using it in allow statements has no effect. The purpose of the permission is to allow policy developers to not audit access checks without really dontauditing other, possibly malicious, attempts. In other words, checking the access can be dontaudited while actually attempting to use the access (reading, writing or executing the file) will still result in the proper denial.
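As an illustration (a sketch; mydomain_t is a placeholder for your own domain), silencing the access() probes against the logrotate state file seen above, while still auditing real read attempts, would look like:

# Silence access(2)-style probes only; actual read/write/execute
# attempts will still be audited and denied as usual.
dontaudit mydomain_t logrotate_var_lib_t:file audit_access;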

May 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo CUPS-1.6 status (May 18, 2013, 21:02 UTC)

We've had CUPS 1.6 in the Gentoo portage tree for a while now. It has even been keyworded by most of the arches (hooray!), and judging from the bug reports quite a few people use it. Sometime in the foreseeable future we'll stabilize it; until then, however, quite a few bugs still have to be resolved.
CUPS 1.6 brings changes. The move to Apple has messed up the project priorities, and backward compatibility was kicked out of the window with a bang. As I've already detailed in a short previous blog post, CUPS 1.6 per se does not "talk" the printer browsing protocol of previous versions anymore but relies solely on zeroconf (which is implemented in Gentoo by net-dns/avahi). Some other features were dropped as well...
Luckily, CUPS was and is open source, and the fact that the people at Apple removed the code from the main CUPS distribution did not mean that it was actually gone. In the end, all these features just made their way from the main CUPS package to a new package, net-print/cups-filters, maintained at The Linux Foundation. There, the code is evolving fast, bugs are fixed and features are introduced. Even network browsing with the CUPS-1.5 protocol has been restored by now: cups-filters includes a daemon called cups-browsed which can generate print queues on the fly and accepts configuration directives similar to CUPS-1.5. As far as we in Gentoo (and any other Linux distribution) are concerned, we can get along without zeroconf just fine.
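As an illustration (a sketch only – the directive names are assumptions based on the CUPS-1.5 browsing options, so check the cups-browsed documentation of your version), a minimal /etc/cups/cups-browsed.conf polling a legacy CUPS server could look like:

# poll a remote CUPS server (hostname is an example) and create
# local queues for the printers it advertises
BrowseRemoteProtocols cups
BrowsePoll printserver.example.com:631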
The main thing hindering CUPS-1.6 stabilization at the moment is that the CUPS website is down, kind of. Their server had a hardware failure, and for nearly a month (!!!) only minimal, static pages have been up. In particular, what's missing is the CUPS bugtracker (no, I won't sign up for an Apple ID to submit CUPS bugs) and access to the Subversion repository of the source. (Remind me to git-svn clone the code history as soon as it's back and push it to gitorious.)
So... feel free to try out CUPS-1.6, testing and submitting bugs for sure helps. However, it may take some time to get these fixed...

Sven Vermeulen a.k.a. swift (homepage, bugs)

To work on SELinux policies, I use a couple of functions that I can call on the shell (command line): seshowif, sefindif, seshowdef and sefinddef. The idea behind the methods is that I want to search (find) for an interface (if) or definition (def) that contains a particular method or call. Or, if I know what the interface or definition is, I want to see it (show).

For instance, to find the name of the interface that allows us to define file transitions from the postfix_etc_t label:

$ sefindif filetrans.*postfix_etc
contrib/postfix.if: interface(`postfix_config_filetrans',`
contrib/postfix.if:     filetrans_pattern($1, postfix_etc_t, $2, $3, $4)

Or to show the content of the corenet_tcp_bind_http_port interface:

$ seshowif corenet_tcp_bind_http_port
interface(`corenet_tcp_bind_http_port',`
        gen_require(`
                type http_port_t;
        ')

        allow $1 http_port_t:tcp_socket name_bind;
        allow $1 self:capability net_bind_service;
')

For the definitions, this is quite similar:

$ sefinddef socket.*create
obj_perm_sets.spt:define(`create_socket_perms', `{ create rw_socket_perms }')
obj_perm_sets.spt:define(`create_stream_socket_perms', `{ create_socket_perms listen accept }')
obj_perm_sets.spt:define(`connected_socket_perms', `{ create ioctl read getattr write setattr append bind getopt setopt shutdown }')
obj_perm_sets.spt:define(`create_netlink_socket_perms', `{ create_socket_perms nlmsg_read nlmsg_write }')
obj_perm_sets.spt:define(`rw_netlink_socket_perms', `{ create_socket_perms nlmsg_read nlmsg_write }')
obj_perm_sets.spt:define(`r_netlink_socket_perms', `{ create_socket_perms nlmsg_read }')
obj_perm_sets.spt:define(`client_stream_socket_perms', `{ create ioctl read getattr write setattr append bind getopt setopt shutdown }')

$ seshowdef manage_files_pattern
define(`manage_files_pattern',`
        allow $1 $2:dir rw_dir_perms;
        allow $1 $3:file manage_file_perms;
')

I have these defined in my ~/.bashrc (they are simple functions) and use them on a daily basis here ;-) If you want to learn a bit more about developing SELinux policies for Gentoo, make sure you read the Gentoo Hardened SELinux Development guide.
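The functions themselves are not shown in this post; a minimal sketch of two of them could look like the following (the POLSRC location is an assumption – point it at your own checkout of the policy sources; sefinddef/seshowdef would do the same against the policy/support/*.spt files):

POLSRC="${HOME}/refpolicy/policy"

sefindif() {
  # Print the enclosing interface() header plus every line of the
  # interface files matching the given expression.
  awk -v pat="$1" '
    /^(interface|template)\(/ { iface = $0 }
    $0 ~ pat {
      if (iface != "") { print FILENAME ": " iface; iface = "" }
      print FILENAME ": " $0
    }' "${POLSRC}"/modules/*/*.if
}

seshowif() {
  # Print the full definition of the named interface, from its
  # header line down to the closing ') marker.
  awk -v name="$1" -v q="'" '
    index($0, "interface(`" name q) == 1 { show = 1 }
    show { print }
    show && index($0, q ")") == 1 { show = 0 }' "${POLSRC}"/modules/*/*.if
}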

May 17, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

It is a common request in squid to have it block downloading certain files based on their extension in the URL path. A quick look at Google's results on the subject apparently gives us an easy way to get this done in squid.

The common solution is to create an ACL file listing regular expressions of the extensions you want to block and then apply this to your http_access rules.

blockExtensions.acl

\.exe$

squid.conf

acl blockExtensions urlpath_regex -i "/etc/squid/blockExtensions.acl"

[...]

http_access allow localnet !blockExtensions

Unfortunately this is not enough to prevent users from downloading .exe files. The mistake here is that we assume that the URL will strictly end with the extension we want to block; consider the two examples below:

http://download.com/badass.exe     // will be DENIED as expected

http://download.com/badass.exe?    // WON'T be denied as it does not match the regex !

Squid uses the extended regex processor, which is the same as egrep. So we need to change our blockExtensions.acl file to handle the possible ?whatever string which may be trailing our url_path. Here's the solution to handle all the cases:

blockExtensions.acl

\.exe(\?.*)?$
\.msi(\?.*)?$
\.msu(\?.*)?$
\.torrent(\?.*)?$
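To verify that the new expressions also catch URLs with a trailing query string, you can test through the proxy (the proxy host and port below are examples, adjust them to your setup):

# both requests should now be denied by squid
curl -sI -x http://127.0.0.1:3128 "http://download.com/badass.exe" | head -n 1
curl -sI -x http://127.0.0.1:3128 "http://download.com/badass.exe?" | head -n 1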

You will still be hated for limiting people's need to download and install shit on their Windows machines, but you implemented it the right way, and no script kiddie can brag about bypassing you ;)

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Life in the new city (May 17, 2013, 19:54 UTC)

Okay, so it's now been over a month that I've been staying in Dublin, and over a month at my new job, and it is shaping up as a very good new experience for me. But even more than the job, the new experiences come with having an apartment. Last year I was living within the office where I was working, and before that I had been living with my mother, so finally having a place of mine is a new world entirely. Well, I'll admit it: only partially.

Even though I've been living with my mother, like the stereotype of Italian guys suggests, it's not like I've been a parasite. Indeed, I've been paying all the bills for the past four years, and I'm still paying them from here. I've also been doing my share of grocery shopping, cleaning and maintenance tasks, but at least I did avoid the washing machine most of the time. So yeah, it wasn't a complete revolution in my life, but it was a partial one. So right now I do feel slightly worse for wear, especially because I had a very bad experience with the kitchen, which was not cleaned before I moved in.

Thankfully, Ikea exists everywhere. And their plastic mats for drawers and cabinets are a lifesaver. Too bad I already finished the roll and haven't completed half the kitchen yet. I think I'll go back to Ikea in two weeks (not next week because my sister's visiting). By now I have bought the same identical lamp three times: originally in Italy, then again in Los Angeles, and now in Dublin — the only difference is that the American version has a loop to be able to orient it, probably because health and safety does not require having enough common sense as to not touch the hot cone…

The bottom line is that I'm very happy about having moved to Dublin. I love the place, and I love the people. My new job is also quite interesting, even if not as open-source focused as my previous ones (which does not mean it is completely out of the way of open source anyway), and the colleagues are terrific… hey, some even read my blog before, thanks guys!

While settling down took most of my time and left me no time for real Gentoo contributions or blogging (luckily Sven seems to have taken my place on Planet Gentoo), things are getting much better (among other things, I finally have a desk in the apartment, and tomorrow I'm going to get a TV as well, which I know will boost my ability to keep the house clean — because it won't require me to stick to the monitor to watch something). So expect more presence from me soon enough!

Greg KH a.k.a. gregkh (homepage, bugs)

A few years ago, I gave a history of the 2.6.32 stable kernel, and mentioned the previous stable kernels as well. I'd like to apologize for not acknowledging the work of Adrian Bunk in maintaining the 2.6.16 stable kernel for 2 years after I gave up on it, allowing it to be used by many people for a very long time.

I've updated the previous post with this information in it at the bottom, for the archives. Again, many apologies, I never meant to ignore the work of this developer.

Sven Vermeulen a.k.a. swift (homepage, bugs)

There have been a few posts already on the local Linux kernel privilege escalation which has received the CVE-2013-2094 ID. Ars Technica has a write-up with links to good resources on the Internet, but I definitely want to point readers to the explanation that Brad Spengler made of the vulnerability.

In short, the vulnerability is an out-of-bound access to an array within the Linux perf code (which is a performance measuring subsystem enabled when CONFIG_PERF_EVENTS is enabled). This subsystem is often enabled as it offers a wide range of performance measurement techniques (see its wiki for more information). You can check on your own system through the kernel configuration (zgrep CONFIG_PERF_EVENTS /proc/config.gz if you have the latter pseudo-file available – it is made available through CONFIG_IKCONFIG_PROC).
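A quick way to run this check on a running system (the second form is a fallback, assuming your kernel configuration lives at /usr/src/linux/.config):

$ zgrep CONFIG_PERF_EVENTS /proc/config.gz 2>/dev/null \
    || grep CONFIG_PERF_EVENTS /usr/src/linux/.config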

The public exploit maps memory in userland, fills it with known data, then triggers an out-of-bound decrement that tricks the kernel into decrementing this data (mapped in userland). By looking at where the decrement occurred, the exploit now knows the base address of the array. Next, it targets (through the same vulnerability) the IDT base (Interrupt Descriptor Table) and targets the overflow interrupt vector. It increments the top part of the address that the vector points to (which is 0xffffffff, becoming 0x00000000, thus pointing to userland), maps this memory region itself with shellcode, and then triggers the overflow. The shellcode used in the public exploit modifies the credentials of the current task, sets uid/gid to root and gives full capabilities, and then executes a shell.

As Brad mentions, UDEREF (an option in a grSecurity enabled kernel) should mitigate the attempt to get to the userland. On my system, the exploit fails with the following (start of) oops (without affecting the system further) when it tries to close the file descriptor returned from the syscall that invokes the decrement:

[ 1926.226678] PAX: please report this to pageexec@freemail.hu
[ 1926.227019] BUG: unable to handle kernel paging request at 0000000381f5815c
[ 1926.227019] IP: [] sw_perf_event_destroy+0x1a/0xa0
[ 1926.227019] PGD 58a7c000 
[ 1926.227019] Thread overran stack, or stack corrupted
[ 1926.227019] Oops: 0002 [#4] PREEMPT SMP 
[ 1926.227019] Modules linked in: libcrc32c
[ 1926.227019] CPU 0 
[ 1926.227019] Pid: 4267, comm: test Tainted: G      D      3.8.7-hardened #1 Bochs Bochs
[ 1926.227019] RIP: 0010:[]  [] sw_perf_event_destroy+0x1a/0xa0
[ 1926.227019] RSP: 0018:ffff880058a03e08  EFLAGS: 00010246
...

The exploit also finds that the decrement didn’t succeed:

test: semtex.c:76: main: Assertion 'i<0x0100000000/4' failed.

A second mitigation is KERNEXEC (also offered through grSecurity), which prevents the kernel from executing data that is writable (including userland data). So modifying the IDT would be mitigated as well.

Another important mitigation is TPE – Trusted Path Execution. This feature prevents users who are not in a trusted group (which on my system is 10 = wheel) from executing binaries that are not located in a root-owned directory. So users attempting to execute such code will fail with a Permission denied error, and the following is shown in the logs:

[ 3152.165780] grsec: denied untrusted exec (due to not being in trusted group and file in non-root-owned directory) of /home/user/test by /home/user/test[bash:4382] uid/euid:1000/1000 gid/egid:100/100, parent /bin/bash[bash:4352] uid/euid:1000/1000 gid/egid:100/100

However, even though a nicely hardened system should be fairly immune against the currently circulating public exploit, it should be noted that it is not immune against the vulnerability itself. The methods mentioned above make it so that this particular way of gaining root access is not possible, but the vulnerability still allows an attacker to decrement and increment memory in specific locations, so other exploits might be found that modify the system.

Out-of-bound vulnerabilities are not new. Recently (February this year), a vulnerability in the networking code also provided an attack vector for a local privilege escalation. A mandatory access control system like SELinux has little impact on such vulnerabilities if you allow users to execute their own code. Even confined users can modify the exploit to disable SELinux (since the shellcode is run with ring 0 privileges, it can access and modify the SELinux state information in the kernel).

Many thanks to Brad for the excellent write-up, and to the Gentoo Hardened team for providing the grSecurity PaX/TPE protections in its hardened-sources kernel.

May 16, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened spring notes (May 16, 2013, 20:54 UTC)

We got back together on the #gentoo-hardened chat channel to discuss the progress of Gentoo Hardened, so it’s time for another write-up of what was said.

Toolchain

GCC 4.8.1 will be out soon, although nothing major has occurred with it since the last meeting. There is a plugin header install problem in 4.8 and it's not certain that the (trivial) fix is in 4.8.1, but it certainly is in Gentoo's release.

Blueness is also (still, and hopefully for a long time ;-) maintaining the uclibc hardened related toolchain aspects.

Kernel and grSecurity/PaX

Further progress on the XATTR_PAX migration was put on the back burner the past few weeks due to busy, busy… very busy weeks (but this was announced and known in advance). We still need to do XATTR copying on install for packages that do pax markings before src_install(), and include the user.pax XATTR patch in the gentoo-sources kernel. This will silence the errors for non-hardened users and fix the loss of XATTR markings for those packages that pax-mark before install.

The set then needs to be documented further and tested on vanilla and hardened systems.

Zorry asked if a separate script can be provided for those ebuilds that directly call paxctl. These ebuilds might want to switch to the eclass, but if they need to call paxctl or similar directly (for instance because the result is immediately used for further building), a separate script or tool should be made available. Blueness will look into this.

On hardened-sources, we are now at stable 2.6.32-r160, 3.2.42-r1 and 3.8.6 due to some vulnerabilities in earlier versions (in networking code). There is still an nfs-related bug that is fixed in 3.2.44, so that part might need a bump soon as well.

SELinux

The selocal command is now available for Gentoo SELinux users, allowing them to easily enhance the policy without having to maintain their own SELinux policy modules (the script is a wrapper that does all that).
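As an illustration of its use (a sketch from memory – the exact option names are assumptions, so check selocal --help on your system):

# add a rule to the local policy module, then rebuild and load it
selocal --add "corenet_tcp_connect_http_port(mozilla_t)" --comment "browser fetch"
selocal --build --load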

The setools package now also uses the SLOT’ed swig, so no more dependency breakage.

Both the SELinux userspace and the policy have seen a new release last month, and both are already in the Gentoo portage tree.

Finally, the SELinux policy ebuilds now also call epatch_user so users can customize the policies even further without having to copy ebuilds to their overlay.

Now that tar supports XATTR well, we might want to look into SELinux stages again. Jmbsvicetto did some work on that, but the builds failed during stage1. We’ll look into that later.

Integrity

Nothing much to say, we’re waiting a bit until the patches proposed by the IMA team are merged in the main kernel.

Profiles

Two no-multilib fixes have been applied to the hardened/amd64/no-multilib profiles. One was a QA issue and was quickly resolved; the other is due to the profile stacking within Gentoo profiles, where we missed a profile and thus were missing a few masks defined in that (missed) profile. But including the profile creates a lot of duplicates again, so we are going to copy the masks across until the duplicates are resolved in the other profiles.

Blueness will also clean up the experimental 13.0 directory since all hardened profiles now follow 13.0.

Docs

The latest changes on SELinux have been added to the Gentoo SELinux handbook. Also, I’ve been slowly (but surely) adding topics to the SELinux tutorials listing on the Gentoo wiki.

The grSecurity 2 document is very much out of date; blueness hopes to put some time into fixing that soon.

So that’s about it for the short write-up. Zorry will surely post the log later on the appropriate channels. Good work done (again) by all team members!

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Fujifilm GF670W (May 16, 2013, 20:12 UTC)

It's been so long since I switched to film-only photography that I decided a few months ago to sell all my digital equipment. I already own a Nikon FM2 camera, which I love, but I have to admit that I was and still am totally amazed by the pictures taken with my girlfriend's Rolleiflex 3.5F. Medium format gives the kind of rendering I was craving, and I knew that sooner or later I'd step into the medium format world. Well, I didn't have to wait long: when we were in Tokyo to celebrate new year 2013, I fell in love with what was the perfect match between my love for wide angles and medium format film photography: the Fujifilm GF670W!

For my soon-to-come birthday, I got myself my new toy in advance so I could use it on my upcoming roadtrip around France (I'll talk about it soon, it was awesome). Oddly, the only places in the world where you can get this camera are the UK and Japan, so I bought it from the very nice guys at Dale Photographic. Here is the beast (literally):

[Photo: the Fujifilm GF670W (IMG_20130412_215344)]

Yes, this is a big camera, and it comes with a very nice leather case and a lens hood. It is a rangefinder camera with a comfortable viewfinder; it accepts 120 and 220 films and is capable of shooting in standard 6×6 and 6×7!

In the medium format world, the 55mm lens is actually a wide-angle one, as it is comparable to a 28mm in the usual 24×36 world. Its specifications are not crazy on paper, with a 4.5 aperture and shutter speeds going from 4s to 1/500s (as fast as a 1956 Rolleiflex), but the quality is just stunning: it is sharp and shows practically no chromatic aberration.

Want proof? These are some of my first roll's shots, uploaded at full resolution:

[Two full-resolution sample photos: 07760003, 07760006]

Sven Vermeulen a.k.a. swift (homepage, bugs)
Public support channels: irc (May 16, 2013, 01:50 UTC)

I've said it before – support channels for free software are often (imo) superior to the commercial support that you might get from vendors. And although those vendors often try to use "modern" techniques, I fail to see why the old but proven/stable methods would be wrong.

Consider the "Chat with Support" feature that many vendors have on their site. Often, these services use a webbrowser-based, AJAX-driven method for talking with support engineers. The problem I see with this is that it is difficult to keep track of the feedback you got over time (unless you manually copy/paste the information), and again that it isn't public. In free software communities, we still often redirect such "online" support requests to IRC.

Internet Relay Chat has been around for ages (since 1988, according to Wikipedia) and is still quite active. Gentoo has all of its support channels on the freenode IRC network: a community-driven, active #gentoo channel which often crosses 1000 users, a #gentoo-dev development-related channel where many developers communicate, the #gentoo-hardened channel for all questions and support regarding Gentoo Hardened specifics, etc.

Using IRC has many advantages. One is that logs can be kept (either individually or by the project itself) that can be queried later by the people who want to provide support (to see if questions have been popping up already, what the common questions of the last few days are, etc.) or get support (to see if their question was already answered in the past). Of course, these logs can be made public through web interfaces quite easily. For users, such log functionality is offered through the IRC client. Another very simple yet interesting feature is highlighting: give the set of terms for which you want to be notified (usually through a highlight and a specific notification in the client), making it easier to be on multiple channels without having to constantly follow up on all discussions.

Another advantage is that there is such a thing as "bots". Most Gentoo-related channels do not allow active bots except for the project-approved ones (such as willikens). These bots can provide project-specific help to users and developers alike:

  • Give one-line information about bugs reported on bugzilla (id, assignee, status, but also the URL where the user/developer can view the bug etc.)
  • Give meta information about a package (maintainer, herd, etc.), herd (members), GLSA details, dependency information, etc.
  • Allow users to query if a developer is away or not
  • Create notes (messages) for users that are not online yet but who you know will come online later (and whose nickname or registered username you know)
  • Notify when commits are made, or when tweets are sent that match a particular expression, etc.

Furthermore, the IRC protocol has many features that are very interesting for free software communities as well. You can still do private chats (when potentially confidential data is exchanged), for instance, or even exchange files (although that is less common in free software communities). There is also still some hierarchy in case of abuse (channel operators can remove users from the chat or even ban them for a while), and one can even quiet a channel when, for instance, online team meetings are held (although using a different channel for that might be an alternative).

IRC also has the advantage that connecting to the IRC channels has very low requirements (software-wise): one can use console-only chat clients (in case users cannot get their graphical environment to work – an example is irssi) or even webbrowser-based ones (if one wants to chat from other systems). Even smartphones have good IRC applications, like AndChat for Android.

IRC is also distributed: an IRC network consists of many interconnected servers which pass on all IRC traffic. If one node goes down, users can access a different node and continue. That makes IRC quite highly available. IRC network operators do need to try and keep the network from splitting ("netsplit"), which occurs when one part of the distributed network gets segregated from the other part and thus two "independent" IRC networks are formed. When that occurs, IRC operators will try to join them back as fast as possible. I'm not going to explain the details of this – it suffices to understand that IRC is distributed in nature and thus often much more available than the "support chat" sites that vendors provide.

So although IRC looks archaic, it is a very good match for support channel requirements.

May 15, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Overriding the default SELinux policies (May 15, 2013, 01:50 UTC)

Extending SELinux policies with additional rules is easy. As SELinux uses a deny by default approach, all you need to do is to create a policy module that contains the additional (allow) rules, load that and you’re all set. But what if you want to remove some rules?

Well, sadly, SELinux does not support deny rules. Once an allow rule is loaded in memory, it cannot be overturned anymore. Yes, you can disable the module itself that provides the rules, but you cannot selectively disable rules. So what to do?

Generally, you can disable the module that contains the rules you want to disable, and load a custom module that defines everything the original module did, minus those rules you don't like. For instance, if you do not want the skype_t domain to be able to read/write to the video device, create your own skype-providing module (myskype) with the exact same content as the original skype module (except for the module name on the first line), leaving out the video device rules:

dev_read_sound(skype_t)
# dev_read_video_dev(skype_t)
dev_write_sound(skype_t)
# dev_write_video_dev(skype_t)

Load this policy, and you now have the skype_t domain without the video access. You will get post-install failures when Gentoo pushes out an update to the policy though, since it will attempt to reload the skype.pp file (through the selinux-skype package) and fail because that declares types and attributes already provided (by myskype). You can exclude the package from being updated, which works as long as no packages depend on it. Or live with the post-install failure ;-) But there might be a simpler approach: epatch_user.

Recently, I added support for epatch_user to the policy ebuilds. This allows users to create patches against the policy source code that we use and put them in /etc/portage/patches in the directory of the right category/package. For module patches, the working directory used is within the policy/modules directory of the policy checkout. For base, it is below the policy checkout (in other words, the patch will need to use the refpolicy/ directory base). But because of how epatch_user works, any patch taken from the base will work, as it will strip leading directories up to the fourth one.

This approach is also needed if you want to exclude rules from interfaces rather than from the .te file: create a small patch and put it in /etc/portage/patches for the sec-policy/selinux-base package (as this provides the interfaces).
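As a sketch (the hunk offsets and surrounding context are illustrative, not taken from the real policy sources), the skype example above could be handled with a patch like this, after which re-emerging sec-policy/selinux-skype picks it up automatically:

# /etc/portage/patches/sec-policy/selinux-skype/no-video.patch
--- refpolicy/policy/modules/contrib/skype.te
+++ refpolicy/policy/modules/contrib/skype.te
@@ -30,4 +30,2 @@
 dev_read_sound(skype_t)
-dev_read_video_dev(skype_t)
 dev_write_sound(skype_t)
-dev_write_video_dev(skype_t)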

May 14, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
Spring Europen 2013 (May 14, 2013, 17:00 UTC)

This Monday I was for the first time a guest and speaker at the (contrary to its name) local Czech conference Europen. It was an interesting experience, and I would like to share a bit of what I experienced. What made it different from the conferences I usually speak at was the audience: not many Linux guys, and quite a few Windows guys. I was told that this conference is for various IT professionals and people from academia interested in Open Source.

I was asked to speak there about something techy, low-levelly, genericy, and not SUSE-only stuff. I offered an OBS and Studio introduction, as these are the crown jewels of the openSUSE environment, but I was told that they would prefer something more generic and a little bit more hardcore. So in the end I decided to speak about packaging, as that is something I have been doing for a long time. And to make it neither a workshop nor a SUSE-specific talk, I put in two more packaging systems that I have worked with apart from rpm – Portage (from Gentoo) and BitBake (from OpenEmbedded).

Whenever I visit an open source event in the Czech Republic, I always know quite a few people there already. I know the most prominent people from Linux magazines, other distributions and some other people who are big open source enthusiasts. At this conference, I knew something like six attendees in total (and all of them were there to give a talk and not sure what to expect from the audience). Almost everybody was running MS Windows, with a few MacOS exceptions. Really quite a different world.

As I said, in the end I spoke about why we do software packages in Linux and how we do it. I spoke about rpm and spec files, and about Portage and BitBake, showing how nice it is to have inheritance. And at the end I put in a part about how great OBS is anyway.

Of the almost full day I was at the conference, the most questions and feedback went to the LibUCW library; Martin Mareš gave an amazing presentation and he had a really interesting topic. LibUCW is cool. If I find some free time, I'll write something about it separately. Otherwise, the audience was quite calm and quiet. For my presentation, I got a question about cross-compilation of rpms, so in the end, after the talk, I could recommend OBS once more ;-)

It was definitely an interesting experience, as these people were mostly outside our usual scope. If you are interested in browsing the slides, you can – the sources are on my github – but note that they contain quite a few pages of example recipes that I commented on live.

Agostino Sarubbo a.k.a. ago (homepage, bugs)

I have been using procmail for a long time, and since it works pretty well for me, I want to share how to handle the spam for your @gentoo.org address with it.

First, you need to say that procmail will filter your email(s):
echo "| /usr/bin/procmail" > /home/${USER}/.forward

Then create a simple /home/${USER}/.procmailrc with this content:
:0:
* ^X-Spam-Status: Yes
/dev/null

:0:
* ^X-Spam-Level: \*\*\*
/dev/null

:0:
* ! ^List-Id
* ^X-Spam-Level: \*\*
/dev/null

:0:
* ^Subject:.*viagra*
/dev/null

:0:
* ^Subject:.*cialis*
/dev/null

:0:
* ^Subject:.*money*
/dev/null

:0:
* ^Subject:.*rolex*
/dev/null

:0:
* ^Subject:.*scount*
/dev/null

:0:
* ^Subject:.*Viagra*
/dev/null

:0:
* ^Subject:.*Cialis*
/dev/null

:0:
* ^Subject:.*Marketing*
/dev/null

:0:
* ^Subject:.*marketing*
/dev/null

:0:
* ^Subject:.*Money*
/dev/null

:0:
* ^Subject:.*Rolex*
/dev/null

:0:
* ^Subject:.*Scount*
/dev/null

:0:
* ^Subject:.*glxgug*
/dev/null

:0:
* ^Subject:.*offizielle sieger*
/dev/null

:0:
* ^Subject:.*educational*
/dev/null

# the WS (whitespace) variable used in the next rule is not predefined
# by procmail; defining it as a space-and-tab character class is an
# assumption based on common anti-attachment recipes
WS="[ 	]"

:0 B:
* $ content-[^:]+:${WS}*.+(\<)*(file)?name${WS}*=${WS}*\/.+\.(pif|scr|com|cpl|vbs|mim|hqx|bhx|uue|uu|b64)\"?$
/dev/null

:0 B:
* ^Content-Type: .*;$[ ]*(file)?name=\"?.*\.(pif|scr|com|cpl|vbs)\"?$
/dev/null

:0 B:
* ^Content-Type: .*; [ ]*(file)?name=\"?.*\.(pif|scr|com|cpl|vbs)\"?$
/dev/null

With the filters for X-Spam-Status and X-Spam-Level you will avoid the majority of the incoming spam.
Some mails that do not have any spam flag contain subjects like viagra, cialis (which I absolutely don't need :D), rolex and scount.
Yes, I could use the (v|V)iagra syntax, but I had problems with it, so I prefer to write the rules twice instead of having any sort of trouble.
Note: with this email address I'm not subscribed to any newsletter or any sort of offers/catalogs, so I filtered scount, marketing, money.
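For reference, such a case-insensitive rule would look like this (a sketch – test it before relying on it):

:0:
* ^Subject:.*(v|V)iagra
/dev/null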

Sometimes I receive mails from people that are not spammers with the X-Spam-Level flag at one star, so I decided to move these emails into a folder; they will be double-checked with the naked eye:

:0:
* ^X-Spam-Level: \*
/home/ago/.maildir/.INBOX.pspam/

To avoid confusion I always prefer to use a complete path here.

After a stabilization, you will always see the annoying mails from Bugzilla which contain "${arch} stable", so if you want to drop them:

:0 B
* (alpha|amd64|arm|hppa|ia64|m68k|ppc|ppc64|s390|sh|sparc|x86) stable
/dev/null

Now, if you are using multiple email clients on multiple computers, you may want to set the filters here instead of in all the clients you are using, so for example:

:0
* ^From.*bugzilla-daemon@gentoo.org
* ^TO.*amd64@gentoo.org
/home/ago/.maildir/.INBOX.amd64/

And so on….
These hints obviously are valid on all postfix-based mail servers; if you are using e.g. qmail, you will need to adjust how the .procmailrc is invoked, but the rules themselves are still valid.
I hope this will help :)

EDIT:
If you need a particular set of rules, you can write them by taking a look at the source/headers of the message. If, for example, I don't want to see the mails from Bugzilla for the bugs that I reported:

the header says: X-Bugzilla-Reporter: ago@gentoo.org
so:

:0
* ^From.*bugzilla-daemon@gentoo.org
* ^X-Bugzilla-Reporter.*ago@gentoo.org
/dev/null

Sven Vermeulen a.k.a. swift (homepage, bugs)

With all the reports surrounding Cdorked, I took a look at whether SELinux and/or other Gentoo Hardened technologies could reduce the likelihood that this infection occurs on your system.

First of all, we don't know yet how the malware gets installed on the server. We do know that the Apache binaries themselves are modified, so the first thing to look at is whether this risk can be reduced. Of course, using an intrusion detection system like AIDE helps, but even with Gentoo's qcheck command you can test the integrity of the files:

# qcheck www-servers/apache
Checking www-servers/apache-2.2.24 ...
  * 424 out of 424 files are good

If the binary is modified, this would result in something equivalent to:

Checking www-servers/apache-2.2.24 ...
 MD5-DIGEST: /usr/sbin/apache2
  * 423 out of 424 files are good

I don't know if the modified binary would otherwise work just fine; I have not been able to find exact details on the infected binary to (in a sandbox environment, of course) analyze this further. Also, because we don't know how they are installed, it is not easy to know whether binaries that you built yourself are equally likely to be modified/substituted, or whether the attack checks checksums of the binaries against a known list.

Assuming that it would run, the infecting malware would need to set the proper SELinux context on the file (if it overwrites the existing binary, the context is retained; otherwise it gets the default context of bin_t). If the context is wrong, then starting Apache results in:

apache2: Syntax error on line 61 of /etc/apache2/httpd.conf: Cannot load /usr/lib64/apache2/modules/mod_actions.so into server: /usr/lib64/apache2/modules/mod_actions.so: cannot open shared object file: Permission denied

This is because the modified binary stays in the calling domain's context (initrc_t). If you use a targeted policy, this will not present itself, as initrc_t is an unconfined domain. But with strict policies, initrc_t is not allowed to read httpd_modules_t. Even worse, the remainder of the SELinux protections no longer apply, since with unconfined domains all bets are off. That is why Gentoo focuses this hard on using a strict policy.
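To quickly compare the actual context of a binary with what the policy expects, you can use (paths as in the example above):

# show the current context of the binary
ls -Z /usr/sbin/apache2
# verify it against the policy's file context definitions
matchpathcon -V /usr/sbin/apache2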

So, what if the binary runs in the proper domain? Well then, from the articles I read, the malware can do a reverse connect. That means that the domain will attempt to connect to an IP address provided by the attacker (in a specifically crafted URL). For SELinux, this means that the name_connect permission is checked:

# sesearch -s httpd_t -c tcp_socket -p name_connect -ACTS
Found 20 semantic av rules:
   allow nsswitch_domain dns_port_t : tcp_socket { name_connect } ; 
DT allow httpd_t port_type : tcp_socket { name_connect } ; [ httpd_can_network_connect ]
DT allow httpd_t ftp_port_t : tcp_socket { name_connect } ; [ httpd_can_network_relay ]
DT allow httpd_t smtp_port_t : tcp_socket { name_connect } ; [ httpd_can_sendmail ]
DT allow httpd_t postgresql_port_t : tcp_socket { name_connect } ; [ httpd_can_network_connect_db ]
DT allow httpd_t oracledb_port_t : tcp_socket { name_connect } ; [ httpd_can_network_connect_db ]
DT allow httpd_t squid_port_t : tcp_socket { name_connect } ; [ httpd_can_network_relay ]
DT allow httpd_t mssql_port_t : tcp_socket { name_connect } ; [ httpd_can_network_connect_db ]
DT allow httpd_t kerberos_port_t : tcp_socket { name_connect } ; [ allow_kerberos ]
DT allow nsswitch_domain ldap_port_t : tcp_socket { name_connect } ; [ authlogin_nsswitch_use_ldap ]
DT allow httpd_t http_cache_port_t : tcp_socket { name_connect } ; [ httpd_can_network_relay ]
DT allow httpd_t http_port_t : tcp_socket { name_connect } ; [ httpd_can_network_relay ]
DT allow httpd_t http_port_t : tcp_socket { name_connect } ; [ httpd_graceful_shutdown ]
DT allow httpd_t mysqld_port_t : tcp_socket { name_connect } ; [ httpd_can_network_connect_db ]
DT allow httpd_t ocsp_port_t : tcp_socket { name_connect } ; [ allow_kerberos ]
DT allow nsswitch_domain kerberos_port_t : tcp_socket { name_connect } ; [ allow_kerberos ]
DT allow httpd_t pop_port_t : tcp_socket { name_connect } ; [ httpd_can_sendmail ]
DT allow nsswitch_domain ocsp_port_t : tcp_socket { name_connect } ; [ allow_kerberos ]
DT allow httpd_t gds_db_port_t : tcp_socket { name_connect } ; [ httpd_can_network_connect_db ]
DT allow httpd_t gopher_port_t : tcp_socket { name_connect } ; [ httpd_can_network_relay ]

So by default, the Apache (httpd_t) domain is allowed to connect to DNS port (to resolve hostnames). All other name_connect calls depend on SELinux booleans (mentioned after it) that are by default disabled (at least on Gentoo). Disabling hostname resolving is not really feasible, so if the attacker uses a DNS port as port that the malware needs to connect to, SELinux will not deny it (unless you use additional networking constraints).

Now, the reverse connect is an interesting feature of the malware, but not the main one. The main focus of the malware is to redirect visitors to particular sites that can trick the user into downloading additional (client) malware. Because this is done internally within Apache, SELinux cannot deal with it. As a user, make sure you configure your browser not to trust non-local iframes and such (always do this, not just because there is a possible threat right now). The configuration of Cdorked lives in a shared memory segment of Apache itself. Of course, since Apache uses shared memory, the malware embedded within will also have access to that shared memory. However, if this shared memory were to be accessed by third-party applications (the malware seems to grant read/write rights on the segment to everybody), SELinux would prevent this:

# sesearch -t httpd_t -c shm -ACTS
Found 2 semantic av rules:
   allow unconfined_domain_type domain : shm { create destroy getattr setattr read write associate unix_read unix_write lock } ; 
   allow httpd_t httpd_t : shm { create destroy getattr setattr read write associate unix_read unix_write lock } ; 

Only unconfined domains and the httpd_t domain itself have access to httpd_t labeled shared memory.

So what about IMA/EVM? Well, those will not help here, since IMA checks the integrity of files that were modified offline. As the modification of the Apache binaries is most likely done online, IMA would just accept it.

For now, it seems that a good system integrity approach is the most effective until we know more about how the malware-infected binary is written to the system in the first place (as this is better protected by MAC controls like SELinux).

May 13, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
SECMARK and SELinux (May 13, 2013, 01:50 UTC)

When using SECMARK, the administrator configures iptables or netfilter rules to add a label to the packet data structure (on the host itself) that can be governed through SELinux policies. Unlike peer labeling, the labels assigned to the network traffic here are completely locally defined. Consider the following command:

# iptables -t mangle -A INPUT -p tcp --src 192.168.1.2 --dport 443 \
  -j SECMARK --selctx system_u:object_r:myauth_packet_t

With this command, packets that originate from the 192.168.1.2 host and arrive on port 443 (typically used for HTTPS traffic) are marked as myauth_packet_t. SELinux policy writers can then allow domains to receive (or send) this type of packets through the packet class:

# Allow sockets with mydomain_t context to receive packets labeled myauth_packet_t
allow mydomain_t myauth_packet_t:packet recv;
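Note that a custom type such as myauth_packet_t must itself be declared as a packet type in the policy. A minimal module sketch (corenet_packet() is, I believe, the reference policy interface for this – verify against your policy version):

policy_module(myauth, 1.0)

# declare the packet type used by the SECMARK rule above and register
# it as a packet type (corenet_packet is assumed to exist in refpolicy)
type myauth_packet_t;
corenet_packet(myauth_packet_t)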

The SELinux policy modules enable this through the corenet_sendrecv_<type>_{client,server}_packets interfaces:

corenet_sendrecv_http_client_packets(mybrowser_t)
# allow mybrowser_t http_client_packet_t:packet { send recv };

As a common rule, packets are marked as client packets or server packets, depending on the role of the domain. In the above example, the domain is a browser, so it acts as a web client; hence it needs to send and receive http_client_packet_t. A web server, on the other hand, would need to send and receive http_server_packet_t. Note that the packets that are sent over the wire do not have any labels assigned to them – this is all local to the system. So even when both the source and the destination use SELinux with SECMARK, on the source server the packets might be labeled as http_client_packet_t whereas on the target they are seen as http_server_packet_t.

As far as I know, when you want to use SECMARK, you will need to set the contexts with iptables yourself (there is no default labeling), so knowing about the above convention is important.

Again, Paul Moore has more information about this.

May 12, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Lab::Measurement 3.11 released (May 12, 2013, 10:34 UTC)

Lab::Measurement 3.11 has been uploaded to CPAN. This is a minor maintenance release, with small bug fixes in the voltage source handling (gate protect and sweep functionality) and the Yokogawa drivers (output voltage range settings).

Sven Vermeulen a.k.a. swift (homepage, bugs)
Peer labeling in SELinux policy (May 12, 2013, 01:50 UTC)

Allow me to start with an important warning: I don't have much hands-on experience with the remainder of this post. It's based on the few resources I found on the Internet and on a few tests done locally, which I've investigated in my attempt to understand SELinux policy writing for networking stuff.

So, with that out of the way, let’s look into peer labeling. As mentioned in my previous post, SELinux supports some more advanced networking security features than the default socket restrictions. I mentioned SECMARK and NetLabel before, but NetLabel is actually part of the family of peer labeling technologies.

With this technology approach, all participating systems in the network must support the same labeling method. NetLabel supports CIPSO (Commercial IP Security Option), where hosts label their network traffic as being part of a particular "Domain of Interpretation". The labels are used by the hosts to identify whom a packet is meant for. NetLabel, within Linux, is then used to translate those CIPSO labels. SELinux itself labels the incoming sockets based on the NetLabel information and the context of the listening socket, resulting in a context that is governed policy-wise through the peer class. Since this is based on the information in the packet instead of being defined on the system itself, it allows remote systems to have a say in how the packets are labeled.

Another peer technology is the Labeled IPSec one. In this case the labels are fully provided by the remote system. I think they are based on the security association within the IPSec setup.

In both cases, in the SELinux policies, three definitions are important to keep an eye out on: interface definitions, node definitions and peer definitions.

Interface definitions allow users to (mainly) set the sensitivity that is allowed to pass the interface. Using semanage interface, this can be controlled by the user. One can also assign a different context to the interface – by default, this is netif_t. The permissions that are checked on the traffic are ingress (incoming) and egress (outgoing), and most policies set this through the following call (the comment shows the underlying SELinux rules, where tcp_send and tcp_recv are – I think – obsolete):

corenet_tcp_sendrecv_generic_if(something_t)
# allow something_t netif_t:netif { tcp_send tcp_recv egress ingress };

Node definitions define which targets (nodes, which can be IP addresses or subnets) traffic meant for a particular socket is allowed to originate from (recvfrom) or be sent to (sendto). Again, users can define their own node types and manage them using semanage node. The default node type (node_t) I already covered in the previous post; it is allowed by most policies by default through the following call (where tcp_send and tcp_recv are probably deprecated as well):

corenet_tcp_sendrecv_generic_node(something_t)
# allow something_t node_t:node { tcp_send tcp_recv sendto recvfrom };

Finally, peer definitions are based on the labels from the traffic. If the system uses NetLabel, the target label will always be netlabel_peer_t, since the workings of CIPSO are mainly (only?) mapped onto sensitivity labels (in the MLS policy). As a result, SELinux always displays the peer as netlabel_peer_t. In the case of Labeled IPSec this isn't so, as the peer label is transmitted by the peer itself.

For NetLabel support, policies generally include two methods – one is to support unlabeled traffic (only needed the moment you have support for labeled traffic) and one is to allow the NetLabel’ed traffic:

corenet_all_recvfrom_unlabeled(something_t)
# allow something_t unlabeled_t:peer recv;
corenet_all_recvfrom_netlabel(something_t)
# allow something_t netlabel_peer_t:peer recv;

In the case of IPSec, for instance, the peer will have a provided label, as shown by the call for accepting hadoop traffic:

hadoop_recvfrom(something_t)
# allow something_t hadoop_t:peer recv;

However, this alone is not sufficient for labeled IPSec. We also need the domain to be allowed to send anything towards an IPSec security association. There is an interface called corenet_tcp_recvfrom_labeled that takes two arguments and, amongst other things, enables sendto towards its association.

corenet_tcp_recvfrom_labeled(some_t, thing_t)
# allow { some_t thing_t} self:association sendto;
# allow some_t thing_t:peer recv;
# allow thing_t some_t:peer recv;
# corenet_tcp_recvfrom_netlabel(some_t)
# corenet_tcp_recvfrom_netlabel(thing_t)

This interface is usually called within a *_tcp_connect() interface for a particular domain, like with the mysql_tcp_connect example:

interface(`mysql_tcp_connect',`
        gen_require(`
                type mysqld_t;
        ')

        corenet_tcp_recvfrom_labeled($1, mysqld_t)
        corenet_tcp_sendrecv_mysqld_port($1) # deprecated
        corenet_tcp_connect_mysqld_port($1)
        corenet_sendrecv_mysqld_client_packets($1)
')

When using peer labeling, the domain that is allowed something is based on the socket context of the application. Also, the peer labeling rules come in addition to the rules mentioned before ("standard" networking controls): name_bind and name_connect are always checked.

For more information, make sure you check Paul Moore’s blog, such as the egress/ingress information. And if you know of resources that show this in a more practical setting (above is mainly to work with the SELinux policy) I’m all ears.

May 11, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux policy and network controls (May 11, 2013, 01:50 UTC)

Let’s talk about how SELinux governs network streams (and how it reflects this into the policy).

When you don't do fancy stuff like SECMARK or netlabeling, the classes that you should keep an eye on are tcp_socket and udp_socket (depending on the protocol). There used to be node and netif as well, but the support (enforcement) for these has been removed a while ago for the "old style" network control enforcement. The concepts are still available though, and I believe they take effect when netlabeling is used. But let's first look at the regular networking aspects.

The idea behind the regular network-related permissions is that you define either daemon-like behavior (which "binds" to a port) or client-like behavior (which "connects" to a port). Consider an FTP daemon (domain ftpd_t) versus an FTP client (example domain ncftp_t).

In case of a daemon, the policy would contain the following (necessary) rules:

corenet_tcp_bind_generic_node(ftpd_t) # Somewhat legacy but still needed
corenet_tcp_bind_ftp_port(ftpd_t)
corenet_tcp_bind_ftp_data_port(ftpd_t)
corenet_tcp_bind_all_unreserved_ports(ftpd_t) # In case of passive mode

This gets translated to the following “real” SELinux statements:

allow ftpd_t node_t:tcp_socket node_bind;
allow ftpd_t ftp_port_t:tcp_socket name_bind;
allow ftpd_t ftp_data_port_t:tcp_socket name_bind;
allow ftpd_t unreserved_port_type:tcp_socket name_bind;

I mentioned corenet_tcp_bind_generic_node as being somewhat legacy. When you use netlabeling, you can define different nodes (a “node” in that case is a label assigned to an IP address or IP subnet) and as such define policy-wise where daemons can bind (or clients can connect to). However, without netlabel, the only node that you get to work with is node_t, which represents any possible node. Also, the use of passive mode within the ftp policy is governed through the ftpd_use_passive_mode boolean.
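
As an aside, if you wonder which port numbers a port type like ftp_port_t covers, you can list the assignments with semanage from policycoreutils (a sketch; the output varies per policy and local modifications):

$ semanage port -l | grep ftp
# shows the tcp/udp ports labeled ftp_port_t and ftp_data_port_t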

For a client, the following policy line would suffice:

corenet_tcp_connect_ftp_port(ncftp_t)
# allow ncftp_t ftp_port_t:tcp_socket name_connect;

Well, I lied. Because of how FTP works, if you use active connections, you need to allow the client to bind to an unreserved port, and allow the server to connect to unreserved ports (cf. the code snippet below), but you get the idea.

corenet_tcp_connect_all_unreserved_ports(ftpd_t)

corenet_tcp_bind_generic_node(ncftp_t)
corenet_tcp_bind_all_unreserved_ports(ncftp_t)

In the past, policy developers also had to include other lines, but these have over time become obsolete (corenet_tcp_sendrecv_ftp_port for instance). These methods defined the ability to send and receive messages on the port, but this is no longer controlled this way. If you need such controls, you will need to look at SELinux and SECMARK (which labels packets with the packet class) or netlabel (which uses the peer class and peer types to control whom messages can be sent to or received from).
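
To give an idea of the SECMARK approach, here is a hedged sketch: packets get labeled through an iptables rule, and the policy then controls the packet class. The ftp_server_packet_t type exists in the reference policy, but do check your own policy before relying on it:

# label incoming FTP traffic with a packet type (mangle table)
iptables -t mangle -A INPUT -p tcp --dport 21 \
  -j SECMARK --selctx system_u:object_r:ftp_server_packet_t:s0
# the policy side would then need something like:
# allow ftpd_t ftp_server_packet_t:packet { send recv };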

And that’ll be for a different post.

Sebastian Pipping a.k.a. sping (homepage, bugs)

When working on (the still on-going) migration of the Gentoo java project repositories from SVN to Git I ran into bugs with svn2git 1.0.8 and my own svneverever 1.2.1.

The bug with svn2git 1.0.8 was a regression that broke support for (non-ASCII) UTF-8 author names in identity maps. That’s fixed in dev-vcs/svn2git-1.0.8-r1 in Gentoo. I sent the patch upstream and to the Debian package maintainer, too.

For svneverever, a directory that re-appeared after deletion was reported as living only once. For example, the output was

(2488; 9253)  /projects
(2490; 9253)      /java-config-2
(2490; 2586)          /trunk

if directory /projects/java-config-2/trunk/ got deleted at revision 2586, no matter whether it was re-created later. With 9253 revisions in total, the correct output (with svneverever 1.2.2) is:

(2488; 9253)  /projects
(2490; 9253)      /java-config-2
(2490; 9253)          /trunk

That’s fixed in svneverever 1.2.2.

If svneverever is of help to you, please support me on Flattr. Thanks!

May 10, 2013
Gentoo at LinuxTag 2013 in Berlin (May 10, 2013, 01:03 UTC)

LinuxTag 2013 runs from May 22nd to May 25th in Berlin, Germany. With more than 10,000 visitors last year, it is one of the biggest Linux and open source events in Europe.

You will find the Gentoo booth at Hall 7.1c, Booth 179. Come and visit us! You will meet many of our developers and users, talk with us, plus get some of the Gentoo merchandise you have always wanted.

May 07, 2013
Jan Kundrát a.k.a. jkt (homepage, bugs)
On Innovation, NIH, Trojita and KDE PIM (May 07, 2013, 08:03 UTC)

Jos wrote a blog post yesterday commenting on the complexity of the PIM problem. He raises an interesting concern about whether we would all be better off if there were no Trojitá and I had just improved KMail instead. As usual, the matter is more complicated than it might seem at first sight.

Executive Summary: I tried working with KDEPIM. The KDEPIM IMAP stack required a total rewrite in order to be useful. At the time I started, Akonadi did not exist. The rewrite has been done, and Trojitá is the result. It is up to the Akonadi developers to use Trojitá's IMAP implementation if they are interested; it is modular enough.

People might wonder why Trojitá exists at all. I started working on it because I wasn't happy with how the mail clients performed back in 2006. The supported features were severely limited, the speed was horrible. After studying the IMAP protocol, it became obvious that the reason for this slowness is the rather stupid way in which the contemporary clients treated the remote mail store. Yes, it's really a very dumb idea to load tens of thousands of messages when opening a mailbox for the first time. Nope, it does not make sense to block the GUI until you fetch that 15MB mail over a slow and capped cell phone connection. Yes, you can do better with IMAP, and the possibility has been there for years. The problem is that the clients were not using the IMAP protocol in an efficient manner.

It is not easy to retrofit a decent IMAP support into an existing client. There could be numerous code paths which just assume that everything happens synchronously and block the GUI when the data are stuck on the wire for some reason. Doing this properly, fetching just the required data and doing all that in an asynchronous manner is not easy -- but it's doable nonetheless. It requires huge changes to the overall architecture of the legacy applications, however.

Give Trojitá a try now and see how fast it is. I'm serious here -- Trojitá opens a mailbox with tens of thousands of messages in a fraction of a second. Try to open a big e-mail with vacation pictures from your relatives over a slow link -- you will see the important textual part pop up immediately, with the images being loaded in the background, not disturbing your work. Now try to do the same in your favorite e-mail client -- if it's as fast as Trojitá, congratulations. If not, perhaps you should switch.

Right now, the IMAP support in Trojitá is way more advanced than what is shipped in Geary or KDE PIM -- and it is this solid foundation which leads to Trojitá's performance. What needs work now is polishing the GUI and making it play well with the rest of a user's system. I don't care whether this polishing means improving Trojitá's GUI iteratively or whether its IMAP support gets used as a library in, say, KMail -- both would be very successful outcomes. It would be terrific to somehow combine the nice, polished UI of the more established e-mail clients with the IMAP engine from Trojitá. There is a GSoC proposal for integrating Trojitá into KDE's Kontact -- but for it to succeed, people from other projects must get involved as well. I have put seven years of my time into making the IMAP support rock; I would not be able to achieve the same if I were improving KMail instead. I don't need a fast KMail, I need a great e-mail client. Trojitá works well enough for me.

Oh, and there's also a currently running fundraiser for better address book integration in Trojitá. We are not asking for $100k, we are asking for $199. Let's see how many people are willing to put the money where their mouth is and actually do something to help PIM on a free desktop. Patches and donations are both equally welcome. Actually, not really -- great patches are much more appreciated. Because Jos is right -- it takes a lot of work to produce great software, and things get better when there are more people working towards their common goal together.

Update: it looks like my choice of kickstarter platform was rather poor, catincan apparently doesn't accept PayPal :(. There's the possibility of direct donations over SourceForge/PayPal -- please keep in mind that these will be charged even if fewer donors pledge to the idea.

May 05, 2013

For a long time I've found it a pain that, every time I keyword a package, I receive a repoman failure for dependency.bad (mostly) that does not concern the arch I'm changing.
So, checking the repoman manual, I realized that --ignore-arches looks wrong for my case, and I decided to request a new feature: --include-arches.
This feature, as explained in the bug, checks only the arches that you pass as argument and should be used only when you are keywording/stabilizing.

Some examples/usage:

First, it saves time; the following example runs repoman full in the kdelibs directory:
$ time repoman full > /dev/null 2>&1
real 0m12.434s

$ time repoman full --include-arches "amd64" > /dev/null 2>&1
real 0m3.880s

Second, kdelibs suffers from a dependency.bad on amd64-fbsd, so:
$ repoman full
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs
dependency.bad 2
kde-base/kdelibs/kdelibs-4.10.2.ebuild: PDEPEND: ~amd64-fbsd(default/bsd/fbsd/amd64/9.0) ['>=kde-base/nepomuk-widgets-4.10.2:4[aqua=]']

$ repoman full --include-arches "amd64"
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs

Now, when I keyword packages, I can check only the specific arches and skip the useless checks, since in this case they cause only a waste of time.
Thanks to Zac for the work on it.

May 03, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
May 3rd = Day Against DRM (May 03, 2013, 14:23 UTC)

Learn more at dayagainstdrm.org (and drm.info).

Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)

Almost a year ago, I worked with Pooja on transliterating a Hindi poem to Bharati Braille for a Type installation at Amar Jyoti School, an institute for the visually impaired in Delhi. You can read more about that in her blog post about it. While working on that, we were surprised to discover that there were no free (or open source) tools to do the conversion! All we could find was expensive proprietary software or horribly wrong websites. We had to sit down and manually transliterate each character while keeping in mind the idiosyncrasies of the conversion.

Now, like all programmers who love what they do, I have an urge to reduce the amount of drudgery and repetitive work in my life with automation ;). In addition, we both felt that a free tool to do such a transliteration would be useful for those who work in this field. And so, we decided to work on a website to convert from Devanagari (Hindi & Marathi) to Bharati Braille.

Now, after tons of research and design/coding work, we are proud to announce the first release of our Devanagari to Bharati Braille converter! You can read more about the converter here, and download the source code on Github.

If you know anyone who might find this useful, please tell them about it!

May 01, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

If you’re a university student, time is running out! You could get paid to hack on Gentoo or other open-source software this summer, but you’ve gotta act now. The deadline to apply for the Google Summer of Code is this Friday.

If this sounds like your dream come true, you can find some Gentoo project ideas here and Gentoo’s GSoC homepage here. For non-Gentoo projects, you can scan through the GSoC website to find the details.


Tagged: gentoo, gsoc

April 28, 2013
Raúl Porcel a.k.a. armin76 (homepage, bugs)
The new BeagleBone Black and Gentoo (April 28, 2013, 18:02 UTC)

Hi all, long time no see.

Some weeks ago I got an early version of the BeagleBone Black from the people at Beagleboard.org to create the documentation I always create with every device I get.

As always, I'd like to announce the guide for installing Gentoo on the BeagleBone Black. Have a look at: http://dev.gentoo.org/~armin76/arm/beagleboneblack/install.xml . Feel free to send any corrections my way.

This board is a new version of the original BeagleBone, known in the community as the BeagleBone white, for which I wrote a post: http://armin762.wordpress.com/2012/01/01/beaglebone-and-gentoo/

This new version differs in some aspects from the previous one:

  • Cheaper: $45 vs $89 for the BeagleBone white
  • 512MB DDR3L RAM vs 256MB DDR2 RAM on the BeagleBone white
  • 1GHz processor speed vs 720MHz on the BeagleBone white, both when using an external PSU for power

It also has features the old BeagleBone didn't have:

  • miniHDMI output
  • 2GB eMMC

However, the new version is missing:

  • Serial port and JTAG through the miniUSB interface

The reason for dropping this feature is cost cutting, as can be read in the reference manual.

The full specs of the BeagleBone Black are:
  • ARMv7-A 1GHz TI AM3358/9 ARM Cortex-A8 processor
  • 512MB DDR3L RAM
  • SMSC LAN8710 Ethernet card
  • 1x microSDHC slot
  • 1x USB 2.0 Type-A port
  • 1x mini-USB 2.0 OTG port
  • 1x RJ45
  • 1x 6-pin 3.3V TTL header for serial
  • Reset, power and user-defined buttons
More info about the specs in BeagleBone Black’s webpage.

For those as curious as me, here are the bootlog and the cpuinfo.

I’ve found two issues while working on it:

  1. The USB port doesn't have working hotplug detection. That means that if you plug a USB device into the USB port, it will only be detected once; if you remove the USB device, the USB port will stop working. I've been told that they are working on it. I haven't been able to find a workaround for it.
  2. The BeagleBone Black doesn't detect a microSD card inserted after it has booted from the eMMC. If you want to use a microSD card for additional storage, it must be inserted before it boots.

I’d like to thank the people at Beagleboard.org for providing me with a BeagleBone Black to document this.

Have fun!


April 26, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB and Pacemaker recent bumps (April 26, 2013, 14:23 UTC)

mongoDB 2.4.3

Yet another bugfix release; this new stable branch is surely one of the most quickly iterated I’ve ever seen. I guess we’ll wait a bit longer at work before migrating to 2.4.x.

pacemaker 1.1.10_rc1

This is the release of pacemaker we’ve been waiting for, fixing, among other things, the ACL problem which was introduced in 1.1.9. Andrew and others are working hard to get a proper 1.1.10 out soon, thanks guys.

Meanwhile, we (the Gentoo cluster herd) have been contacted by @Psi-Jack, who has offered his help to follow and keep some of our precious clustering packages up to date. I hope our work together will benefit everyone!

All of this is live on portage, enjoy.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My time abroad: loyalty cards (April 26, 2013, 12:56 UTC)

Compared to most people around me now, and probably most of the people who read my blog, my life is not that extraordinary in terms of travel and moving around. I’ve been, after all, scared of planes for years, and it wasn’t until last year that I got out of the continent — in a year, though, I more than doubled the number of flights I’ve been on, with 18 last year, and more than doubled the number of countries I’ve been to, counting Luxembourg even though I only landed there and got on a bus to get back to Brussels after Alitalia screwed up.

On the other hand, compared to most of the people I know in Italy, I’ve been going around quite a bit, as I spent a considerable amount of time last year in Los Angeles, and I’ve now moved to Dublin, Ireland. And there are quite a few differences between these places and Italy. I’ve already written a bit about the differences I found during my time in the USA, but this time I want to focus on something which is quite a triviality, yet still a remarkable difference between the three countries I have got to know up to now. As the title suggests, I’m referring to stores’ loyalty cards.

Interestingly enough, just this week there was an article in the Irish Times about the “privacy invasion” of loyalty cards. I honestly don’t see it as big a deal as many others do. Yes, they do profile your shopping habits. Yes, if you do not keep private the kind of offers they send you, they might tell others something about you as well — the newspaper actually brought up the example of a father who discovered the pregnancy of his daughter because of the kind of coupons the supermarket was sending, based on her change of spending habits; I’m sorry but I cannot really feel bad about it. After all, absolute privacy and relevant offers sit at opposite ends of a range… and I’m usually happy enough when companies are relevant to me.

So of course stores want to know the habits of a single person, or of a single household, and for that they give you loyalty cards… but for you to use them, they have to give you something in return, don’t they? This is where the big difference on this topic appears clearly, if you look at the three countries:

  • in both Italy and Ireland, you get “points” with your shopping; in the USA, instead, the card gives you immediate discounts; I’m pretty sure that this gives not-really-regular-shoppers a good reason to get the card as well: you can easily save a few dollars on a single grocery run by getting the loyalty card at the till;
  • in Italy you redeem the points to get prizes – this works not so differently than with airlines after all – sometimes by adding a contribution, sometimes for free; in my experience the contribution is never worth it, so either you get something for free or just forget about it;
  • in Ireland I still haven’t seen a single prize system; instead they work with coupons: you get a certain amount of points for each euro you spend (usually, one point per euro), and when you reach a certain amount of points, they get a value (usually, one cent per point), and a coupon redeemable for that value is sent to you.

Of course, the “European” method (only by contrast with the American one, since I don’t know what other countries do) is a real loyalty scheme: you need a critical mass of points for them to be useful, which means that you’ll try to go to the same store as much as you can. This is true for airlines as well, after all. On the other hand, people who shop occasionally are less likely to request the card at all, so even if there is some kind of data to be found in their shopping trends, they will be completely ignored by this kind of scheme.

I’m honestly not sure which method I prefer. At this point I still have one or two loyalty cards from my time in Los Angeles, and I’m now collecting a number of loyalty cards here in Dublin. Some are definitely a good choice for me, like the Insomnia card (I love getting coffee at a decent place where I can spend time reading on weekends); others, like Dunnes, make me wonder: the distance from the supermarket to where I’m going to live most likely offsets the usefulness of their coupons compared to the (otherwise quite more expensive) Spar at the corner.

At any rate, I just wanted to write down my take on the topic, which is definitely not of interest to most of you…

April 25, 2013
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Recently, I have been toying around with GateOne, a web-based SSH client/terminal emulator. However, installing it on my server proved to be a bit challenging: it requires tornado as a webserver and uses websockets, while I already have an Apache 2.2 instance running with a few sites on it (and my authentication system configured to my tastes).

So, I looked into how to configure a reverse proxy for GateOne, but websockets were not officially supported by Apache... until recently! Jim Jagielski added the proxy_wstunnel module to trunk a few weeks ago. From what I have seen on the mailing list, backporting to 2.4 is easy (and was suggested as an official backport), but 2.2 required a few additional changes to the original patch (and current upstream trunk).

A few fixes later, I got a working patch (based on Apache 2.2.24), available here: http://cafarelli.fr/gentoo/apache-2...

Recompile with this patch, and you will get a nice and shiny mod_proxy_wstunnel.so module file!

Now just load it (in /etc/apache2/httpd.conf in Gentoo):
<IfDefine PROXY>
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
</IfDefine>

and add a location pointing to your GateOne installation:

<Location /gateone/ws>
    ProxyPass wss://127.0.0.1:1234/gateone/ws
    ProxyPassReverse wss://127.0.0.1:1234/gateone/ws
</Location>

<Location /gateone>
    Order deny,allow
    Deny from all
    Allow from #your favorite rule

    ProxyPass http://127.0.0.1:1234/gateone
    ProxyPassReverse http://127.0.0.1:1234/gateone
</Location>


Reload Apache, and you now have GateOne running behind your Apache server :) If it does not work, first check GateOne's log and configuration, especially the "origins" variable.

For other websocket applications, Jim Jagielski comments here:

ProxyPass /whatever ws://websocket-srvr.example/com/

Basically, the new submodule adds the 'ws' and 'wss' scheme to the allowed protocols between the client and the backend, so you tell Apache that you'll be talking 'ws' with the backend (same as ajp://whatever sez that httpd will be talking ajp to the backend).

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Tarsnap and backup strategies (April 25, 2013, 13:52 UTC)

After having had a quite traumatic experience with a customer’s service running on one of the virtual servers I run last November, I made sure to have a very thorough backup for all my systems. Unfortunately, it turned out to be a bit too thorough, so let me explore with you what was going on.

First of all, the software I use to run the backup is tarsnap — you might have heard of it or not, but it’s basically a very smart service, that uses an open-source client, based upon libarchive, and then a server system that stores content (de-duplicated, compressed and encrypted with a very flexible key system). The author is a FreeBSD developer, and he’s charging an insanely small amount of money.

But the most important part to know when you use tarsnap is that you just always create a new archive: it doesn’t really matter what you changed, just get everything together, and it will automatically de-duplicate the content that didn’t change, so why bother? My first dumb method of backups, which is still running as of this time, is to simply, every two hours, dump a copy of the databases (one server runs PostgreSQL, the other MySQL — I no longer run MongoDB but I start to wonder about it, honestly), and then use tarsnap to generate an archive of the whole /etc, /var and a few more places where important stuff is. The archive is named after date and time of the snapshot. And I haven’t deleted any snapshot since I started, for most servers.

It was a mistake.

The moment when I went to recover the data out of earhart (the host that still hosts this blog, a customer’s app, and a couple more sites, like the assets for the blog and even Autotools Mythbuster — but all the static content, as it’s managed by git, is now also mirrored and served active-active from another server called pasteur), the time it took to extract the backup was unsustainable. The reason was obvious when I thought about it: since it has been de-duplicating for almost a year, it would have to scan hundreds if not thousands of archives to get all the small bits and pieces.

I still haven’t replaced this backup system, which is very bad for me, especially since it takes a long time to delete the older archives even after extracting them. On the other hand, it’s largely a matter of tradeoffs in the expenses as well, as going through all the older archives to remove the old crap drained my credits with tarsnap quickly. Since the data is de-duplicated and encrypted, the archives’ data needs to be downloaded to be decrypted before it can be deleted.

My next preference is going to be to set it up so that the script is executed in different modes: 24 times in 48 hours (every two hours), 14 times in 14 days (daily), and 8 times in two months (weekly). The problem is actually doing the rotation properly with a script, but I’ll probably publish a Puppet module to take care of that, since it’s the easiest thing for me to do, to make sure it executes as intended.
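
For the impatient, here is a minimal sketch of the two-hourly tier in shell; the archive naming and retention window are assumptions, there is no error handling, and the daily/weekly tiers would work analogously:

#!/bin/bash
# create the new snapshot, named host-YYYYMMDD-HHMM
host=$(hostname -s)
tarsnap -c -f "${host}-$(date +%Y%m%d-%H%M)" /etc /var

# prune archives older than 48 hours; tarsnap has no date filter,
# so we compare the timestamp embedded in the archive name
cutoff=$(date -d '48 hours ago' +%Y%m%d-%H%M)
for archive in $(tarsnap --list-archives); do
    [[ $archive == ${host}-* ]] || continue
    stamp=${archive#${host}-}
    [[ $stamp < $cutoff ]] && tarsnap -d -f "$archive"
done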

The essence of this post is basically to warn you all that, no matter how cheap it is to keep around the whole set of backups since the start of time, it’s still a good idea to rotate them… especially for content that does not change that often! Think about it whenever you set up any kind of backup strategy…

April 24, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Hello Gentoo Planet (April 24, 2013, 08:51 UTC)

Hey Gentoo folks !

I finally followed a friend’s advice and stepped into the Gentoo Planet and Universe feeds. I hope my modest contributions will help and be of interest to some of you readers.

As you’ll see, I don’t talk only about Gentoo but also about photography and technology more generally. I also often post about the packages I maintain or I have an interest in to highlight their key features or bug fixes.

April 21, 2013

Those of you who don't live under a rock will have learned by now that AMD has published VDPAU code to use the Radeon UVD engine for accelerated video decode with the free/open source drivers.

In case you want to give it a try, mesa-9.2_pre20130404 has been added (under package.mask) to the portage tree for your convenience. Additionally you will need a patched kernel and new firmware.

Kernel

For kernel 3.9, grab the 10 patches from the dri-devel mailing list thread (recommended). [UPDATE]I put the patches into a tarball and attached them to Gentoo bug 466042[/UPDATE] For kernel 3.8 I have collected the necessary patches here, but be warned that kernel 3.8 is not officially supported. It works on my Radeon 6870, YMMV.

Firmware

The firmware is part of radeon-ucode-20130402, but has not yet reached the linux-firmware tree. If you require other firmware from the linux-firmware package, remove the radeon files from the savedconfig file and build the package with USE="savedconfig" to allow installation together with radeon-ucode. [UPDATE]linux-firmware-20130421 now contains the UVD firmware, too.[/UPDATE]

The new firmware files are
radeon/RV710_uvd.bin: Radeon 4350-4670, 4770.
radeon/RV770_uvd.bin: Not useful at this time. Maybe later for 4200, 4730, 4830-4890.
radeon/CYPRESS_uvd.bin: Evergreen cards.
radeon/SUMO_uvd.bin: Northern Islands cards and Zacate/Llano APUs.
radeon/TAHITI_uvd.bin: Southern Islands cards and Trinity APUs.

Testing it

If your kernel is properly patched and finds the correct firmware, you will see this message at boot:
[drm] UVD initialized successfully.
If mesa was correctly built with VDPAU support, vdpauinfo will list the following codecs:
Decoder capabilities:

name level macbs width height
-------------------------------------------
MPEG1 16 1048576 16384 16384
MPEG2_SIMPLE 16 1048576 16384 16384
MPEG2_MAIN 16 1048576 16384 16384
H264_BASELINE 16 9216 2048 1152
H264_MAIN 16 9216 2048 1152
H264_HIGH 16 9216 2048 1152
VC1_SIMPLE 16 9216 2048 1152
VC1_MAIN 16 9216 2048 1152
VC1_ADVANCED 16 9216 2048 1152
MPEG4_PART2_SP 16 9216 2048 1152
MPEG4_PART2_ASP 16 9216 2048 1152
If mplayer and its dependencies were correctly built with VDPAU support, running it with the "-vc ffh264vdpau," parameter will output something like the following when playing back an H.264 file:
VO: [vdpau] 1280x720 => 1280x720 H.264 VDPAU acceleration
To make mplayer use acceleration by default, uncomment the [vo.vdpau] section in /etc/mplayer/mplayer.conf.
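
For reference, the section in question looks roughly like this (a sketch from memory; compare with the example config shipped by your mplayer version):

# /etc/mplayer/mplayer.conf
vo=vdpau
# profile applied automatically whenever the vdpau vo is in use
[vo.vdpau]
vc=ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau,ffodivxvdpau,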

Gallium3D Head-up display

Another cool new feature is the Gallium3D HUD (link via Phoronix), which can be enabled with the GALLIUM_HUD environment variable. This supposedly works with all the Gallium drivers (i915g, radeon, nouveau, llvmpipe).

An example screenshot of Supertuxkart using GALLIUM_HUD="cpu0+cpu1+cpu2:100,cpu:100,fps;draw-calls,requested-VRAM+requested-GTT,pixels-rendered"

If you have any questions or problems setting up UVD on Gentoo, stop by #gentoo-desktop on freenode IRC.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is a follow-up on my last post for autotools introduction. I’m trying to keep these posts bite sized both because it seems to work nicely, and because this way I can avoid leaving the posts rotting in the drafts set.

So after creating a simple autotools build system in the previous post, you might now want to know how to build a library — this is where the first bit of complexity kicks in. The complexity is not, though, in using libtool, but in making a proper library. So the question is “do you really want to use libtool?”

Let’s start from a fundamental rule: if you’re not going to install a library, you don’t want to use libtool. Some projects that only ever deal with programs still use libtool because that way they can rely on .la files for static linking. My suggestion is (very simply) not to rely on them as much as you can. Doing it this way means that you no longer have to care about using libtool for non-library-providing projects.

But in the case you are building said library, using libtool is important. Even if the library is internal only, trying to build it without libtool is just going to be a big headache for the packager that looks into your project (trust me, I’ve seen such projects). Before entering the details of how to use libtool, though, let’s look into something else: what you need to think about in your library.

First of all, make sure to have a unique prefix for your public symbols, be they constants, variables or functions. You might also want to have one for symbols that you use within your library across different translation units — my suggestion in this example is going to be that symbols starting with foo_ are public, while symbols starting with foo__ are private to the library. You’ll soon see why this is important.
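
As a tiny illustration of that convention (hypothetical names):

/* public API: exported from the shared library */
int foo_parse_file(const char *path);

/* internal helper, shared between translation units but hidden
   from library users by the export regex shown further down */
int foo__read_chunk(int fd, char *buf);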

Reducing the amount of symbols that you expose is not only a good performance consideration, but it also means that you avoid the off-chance of symbol collisions, which are a big problem to debug. So do pay attention.

There is another thing that you should consider when building a shared library, and that’s the way the library’s ABI is versioned, but it’s a topic that, in and by itself, takes more time to discuss than I want to spend in this post. I’ll leave that up to my full guide.

Once you have these details sorted out, you should start by slightly changing the configure.ac file from the previous post so that it initializes libtool as well:

AC_INIT([myproject], [123], [flameeyes@flameeyes.eu], [http://blog.flameeyes.eu/tag/autotoolsmythbuster])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])
LT_INIT

AC_PROG_CC

AC_OUTPUT([Makefile])

Now it is possible to provide a few options to LT_INIT, for instance to disable the generation of static archives by default. My personal recommendation is not to touch those options in most cases. Packagers will disable static linking when it makes sense, and if the user does not know much about static and dynamic linking, they are better off getting everything by default on a manual install.
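
For completeness, such an option would look like this (again, my suggestion is to leave it alone):

LT_INIT([disable-static])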

On the Makefile.am side, the changes are very simple. Libraries built with libtool have a different class than programs and static archives, so you declare them as lib_LTLIBRARIES with a .la extension (at build time this is unavoidable). The only real difference between _LTLIBRARIES and _PROGRAMS is that the former gets its additional links from _LIBADD rather than _LDADD like the latter.

bin_PROGRAMS = fooutil1 fooutil2 fooutil3
lib_LTLIBRARIES = libfoo.la

libfoo_la_SOURCES = lib/foo1.c lib/foo2.c lib/foo3.c
libfoo_la_LIBADD = -lz
libfoo_la_LDFLAGS = -export-symbols-regex '^foo_[^_]'

fooutil1_LDADD = libfoo.la
fooutil2_LDADD = libfoo.la
fooutil3_LDADD = libfoo.la -ldl

pkginclude_HEADERS = lib/foo1.h lib/foo2.h lib/foo3.h

The _HEADERS variable is used to define which header files to install and where. In this case, it goes into ${prefix}/include/${PACKAGE}, as I declared it a pkginclude install.

The use of -export-symbols-regex – further documented in the guide – ensures that only the symbols that we want to have publicly available are exported, and does so in an easy way.

This is about it for now — one thing that I haven’t added in the previous post, but which I’ll expand on in the next iteration or the one after, is that the only command you need to regenerate the autotools files is autoreconf -fis, and that still applies after introducing libtool support.

April 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Bitrot is accumulating, and while we've tried to keep kdepim-4.4 running in Gentoo as long as possible, the time is slowly coming to say goodbye. In effect this is triggered by annoying problems like these:

There are probably many more such bugs around, where incompatibilities between kdepim-4.4 and kdepimlibs of more recent releases occur or other software updates have led to problems. Slowly it's getting painful, and definitely more painful than running a recent kdepim-4.10 (which has in my opinion improved quite a lot over the last major releases).
Please be prepared for the following steps:
  • end of April 2013, all kdepim-4.4 packages in the Gentoo portage tree will be package.masked 
  • end of May 2013, all kdepim-4.4 packages in the Gentoo portage tree will be removed
  • afterwards, we will finally be able to simplify the eclasses a lot by removing the special handling
We still have the kdepim-4.7 upgrade guide around, and it also applies to the upgrade from kdepim-4.4 to any later version. Feel free to improve it or suggest improvements.

R.I.P. kmail1.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.9 (April 18, 2013, 17:41 UTC)

First of all, py3status is on pypi! You can now install it with the simple and usual:

$ pip install py3status

This new version features my first pull request, from @Fandekasp, who kindly wrote a pomodoro module which helps adepts of this technique by displaying a counter on their bar. I also fixed a few glitches in module injection and some documentation.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I’ve been asked over on Twitter if I had any pointer to an easy one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, with the name autotools, we include quite a bit of different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two: autoconf and automake. While you could use the former without the latter, you really don’t want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you’ll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting and I’m afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [flameeyes@flameeyes.eu], [http://blog.flameeyes.eu/tag/autotoolsmythbuster])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])

AC_PROG_CC

AC_OUTPUT([Makefile])

Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is being told the name and version of the project, the place to report bugs, and an URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; even though I found at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise to the reader. Finally, you got to tell it to output Makefile (it’ll use Makefile.in but we’ll create Makefile.am instead soon).

To build a program, you need then to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose sources are by default hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a piece of documentation. Yes, this is really enough as a very basic Makefile.am.

If you were to have two programs, hellow and hellou, and a convenience library between the two you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step), and with the new LTO options it should improve the optimization as well.
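
If you take that suggestion, the convenience library disappears completely and you simply list the shared sources twice; a sketch based on the example above:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c lib/libhello.c lib/libhello.h
hellou_SOURCES = src/hellou.c lib/libhello.c lib/libhello.h

dist_doc_DATA = README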

As you can see, this is really easy at the basic level… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this into Autotools Mythbuster and update the ebook with an “easy how to” as an appendix.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB v2.4.2 released (April 18, 2013, 10:53 UTC)

After the security-related bumps of the previous releases over the last weeks, it was about time 10gen released a 2.4.x version fixing the following issues:

  • Fix for upgrading sharded clusters
  • TTL assertion on replica set secondaries
  • Several V8 memory leak and performance fixes
  • High volume connection crash

I guess everything listed above would have affected our cluster at work, so I’m glad we’ve been patient in following up on this release :) See the changelog for details.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
I’ve been in Australia for two months (April 18, 2013, 08:05 UTC)

Well, the title says it. I’ve now been here for two months. I’m working at Skydive Maitland, which is 40 minutes from the coast and 2+ hours from Sydney. So far, I’ve broken even on my Australian travel/living expenses AND I’m skydiving 3-4 days a week, what could be better? I did 99 jumps in March; normally I do 400 per year. Australia is pretty nice, it is easy to live here and there is plenty to see, but it is hard to get places since the country is so big and I need a few days’ break to go someplace.

How did I end up here? I knew I would go to Australia at some point during my trip since I would be passing by and it is a long way from home. (Sidenote: of all the travelers at hostels in Europe, about 40-50% that I met were Aussies.) In December, I bought my right to work in Australia by getting a working holiday visa. That required $270 and 10 minutes to fill out a form on the internet; overnight I had my approval. So, that was settled: I could now work for 12 months in Australia and show up there within a year. I knew I would be working in Australia because it is a rather expensive country to live/travel in. I thought about picking fruit in an orchard since they always hire backpackers, but skydiving sounded more fun in the end (of course!). So, in January, I emailed a few dropzones stating that I would be in Australia in the near future and looking for work. Crickets… I didn’t hear back from anyone. Fair enough, most businesses will have adequate staffing in the middle of the busy season. But one place did get back to me some weeks later. Then, it took one Skype convo to come to a friendly agreement, and I was looking for flights right after. Due to some insane price scheming, there was one flight in two days that was half the price of the others (thank you, skyscanner.net). That sealed my decision, and I was off…

Looking onward: full-time instructor for March and April, then part-time in May and June so I can see more of Australia. I have a few road trips in the works, I just need my own vehicle to make that happen. Working on it. After Australia, I’m probably going to Japan or SE Asia like I planned.

Since my sister already asked, Yes, I do see kangaroos nearly everyday..

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : streets (April 18, 2013, 06:02 UTC)


April 17, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Bundling libraries for trouble (April 17, 2013, 12:01 UTC)

You might remember that I’ve been very opinionated against bundling libraries and, to a point, static linking of libraries for Gentoo. My reasons have been mostly geared toward security, but there have been a few more instances I wrote about of problems with bundled libraries and stability, for instance the moment when you get symbol collisions between a bundled library and a different version of said library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it’s much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, there are very few people speaking against them outside your average distribution speaker.

One such rare gem came from Steve McIntyre a few weeks ago, and it actually makes two different topics I wrote about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly code for performance-critical code, which would have to be ported for the new 64-bit ARM architecture (Aarch64). And this has mostly reminded me of x32.

In many ways, there are so many problems in common between Aarch64 and x32, and they mostly gear toward the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture but is not identical. The biggest difference, apart from the implementations themselves, is in the way the two have been conceived: as I said before, Intel’s public documentation for the ABI’s inception noted explicitly the way that it was designed for closed systems, rather than open ones (the definition of open or closed system has nothing to do with open- or closed-source software, and has to be found more in the expectations of what the users will be able to add to the system). The recent stretching of x32 onto open system environments is, in my opinion, not really a positive thing, but if that’s what people want …

I think Steve’s report is worth a read for those who are interested in seeing what it takes to introduce a new architecture (or ABI), and in particular for those who maintained that my complaint about x32 breaking assembly code all over the place was a moot point — people with a clue about how GCC works know that sometimes you cannot get away with its optimizations, and you actually need to handwrite code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where the handwritten assembly gets imported due to bundling and direct inclusion… this tends to be relatively common because handwritten assembly is usually tied to performance-critical code… which for many is the same code they bundle because a dynamic link is “not fast enough” — I disagree.

So anyway, give a read to Steve’s report, and then compare with some of the points made in my series of x32-related articles and tell me if I was completely wrong.

April 16, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : chinatown (April 16, 2013, 20:41 UTC)


Jeremy Olexa a.k.a. darkside (homepage, bugs)
Sri Lanka in February (April 16, 2013, 06:16 UTC)

I wrote about how I ended up in Sri Lanka in my last post, here. I came down with a GI sickness during my second week, from a bad meal or water, and it spoiled the last week that I was there, but I had my own room, a bathroom, a good book, and a resort on the beach. Overall, the first week was fun: teaching English, living in a small village and being immersed in the culture staying with a host family. Hats off to volunteers who can live there long term. I was craving “western culture” after a short time. I didn’t see as much as I wanted to, like the wild elephants, Buddhist temples or surf lessons. There will be other places or times for that stuff though.

Sri Lanka pics

April 15, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

It is now over a week since the announcement of Blink, a rendering engine for the Chromium project.

I hope it is useful to provide links to the best articles about it, the ones with good, technical content.

Thoughts on Blink from HTML5 Test is a good summary of the history of Chrome and WebKit, and puts this recent announcement in context. For even more context (nothing about Blink) you can read Paul Irish's excellent WebKit for Developers post.

Peter-Paul Koch (probably best known for quirksmode.org) has good articles about Blink: Blink and Blinkbait.

I also found it interesting to read Krzysztof Kowalczyk's Thoughts on Blink.

Highly recommended Google+ posts by Chromium developers:

If you're interested in the technical details or want to participate in the discussions, why not follow blink-dev, the mailing list of the project?

Gentoo at FOSSCOMM 2013 (April 15, 2013, 19:03 UTC)

What? FOSSCOMM 2013

Free and Open Source Software COMmunities Meeting(FOSSCOMM) 2013

When? 20th, April 2013 - 21st, April 2013

Where? Harokopio University, Athens, Greece

Website? http://hua.fosscomm.gr

FOSSCOMM 2013 is almost here, and Gentoo will be there!

We will have a booth with Gentoo promo stuff, stickers, flyers, badges, live DVDs and much more! Whether you're a developer, user, or simply curious, be sure to stop by. We are also going to represent Gentoo in a round table with other FOSS communities. See you there!

Pavlos Ratis contributed the draft for this announcement.

Rolling out systemd (April 15, 2013, 10:43 UTC)


We started to roll out systemd today.
But don’t panic! Your system will still boot with openrc and everything is expected to be working without troubles.
We are aiming to support both init systems, at least for some time (a long time, I believe), and having systemd replace udev (note: systemd is a superset of udev) is a good way to make systemd users happy in Sabayon land. From my testing, the slowest part of the boot is now the genkernel initramfs, in particular the modules autoload code, which, as you may expect, I’m going to try to improve.

Please note that we are not willing to accept systemd bugs yet, because we’re still fixing up service units and adding the missing ones, the live media scripts haven’t been migrated, and the installer is not systemd aware. So, please be patient ;-)

Having said this, if you are brave enough to test systemd out, you’re lucky: in Sabayon, it’s just 2 commands away, thanks to eselect-sysvinit and eselect-settingsd. And since I expect those brave people to know how to use eselect, I won’t waste more time on them now.


April 14, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.8 (April 14, 2013, 20:33 UTC)

I went on a coding frenzy to implement most of the stuff I was not happy with in py3status so far. Here comes py3status, code name: San Francisco (more photos to come).

PEP8

I always had the habit of using tabulators to indent my code. @Lujeni pointed out that this is not a PEP8-recommended method and that we should start respecting more of it in the near future. Well, he’s right, and I guess it was time to move on, so I switched to using spaces and corrected a lot of other coding style issues, which got my code a pylint score going from around -1/10 to around 9.5/10!

Threaded modules’ execution

This was the major thing I was not happy with: when a user-written module was executed for injection, the time it took to get its response would cause py3status to stop updating the bar. This means that if you had a database call to make to get some stuff you need displayed on the bar and it took 10 seconds, py3status was sleeping for those 10 seconds before updating the bar! This behavior could cause some delays in the clock ticking, for example.

I decided to offload all of the modules’ detection and execution to a thread to solve this problem. To be frank, this also helped rationalize the code better. No more delays and cleaner handling is what you get; modules’ output will append itself to the bar whenever it is ready, whatever the time it takes to execute!
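
To illustrate the idea with a toy example (a sketch of the approach, not py3status’s actual code):

import threading
import time

results = []  # shared list the bar-update loop reads from

def run_module(module):
    # executed in a thread: only this thread waits for slow modules
    results.append(module())

def slow_module():
    time.sleep(10)  # imagine a 10 second database call
    return {'full_text': 'db: ok'}

threading.Thread(target=run_module, args=(slow_module,)).start()

# meanwhile the main loop keeps updating the bar every second and
# picks up the module's output whenever it becomes available
for _ in range(3):
    print('bar update:', results)
    time.sleep(1)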

Python3

It was about time the examples available in py3status also worked using python3.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

For a long time, I've been extraordinarily happy with both NVIDIA graphics hardware and the vendor-supplied binary drivers. Functionality, stability, speed. However, things are changing and I'm frustrated. Let me tell you why.

Part of my job is to do teaching and presentations. I have a trusty thinkpad with a VGA output which can in principle supply just about every projector with a decent signal. Most of these projectors do not display the native 1920x1200 resolution of the built-in display. This means, if you configure the second display to clone the first, you will end up seeing only part of the screen. In the past, I solved this by using nvidia-settings, setting the display to a lower resolution supported by the projector (nvidia-settings told me which ones I could use), and then letting it clone things. Not so elegant, but everything worked fine, and this amount of fiddling is still something that can be done at the front of a seminar room while someone is introducing you and the audience gets impatient.

Now consider my surprise when, suddenly after a driver upgrade, the built-in display was completely glued to the native resolution. The only setting possible: 1920x1200. The first time I saw that I was completely clueless about what to do; starting the talk took a bit longer than expected. A simple but completely crazy workaround exists: disable the built-in display and only enable the projector output. Then your X session is displayed there and resized accordingly. You'll have to look at the silver screen while talking, but that's not such a problem. A bigger pain actually is that you may have to leave the podium in a hurry and then have no video output at all...

Now, googling. Obviously a lot of other people have the same problem as well. Hacks like this one just don't work; I ended up with nice random screen distortions. Here's a thread on the nvidia devtalk forum from which I can quote: "The way it works now is more "correct" than the old behavior, but what the user sees is that the old way worked and the new does not." It seems like nVidia now expects each application to handle any mode switching internally. My use case does not even exist from their point of view. Here's another thread, and in general users are not happy about it.

Finally, I found this link where the following reply is given: "The driver supports all of the scaling features that older drivers did, it's just that nvidia-settings hasn't yet been updated to make it easy to configure those scaling modes from the GUI." Just great.

Gentlemen, this is a serious annoyance. Please fix it. Soon. Not everyone is willing to read up on xrandr command line options and fiddle with ViewPortIn, ViewPortOut, MetaModes and other technical stuff. Especially while the audience is waiting.

April 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So it starts, my time in Ireland (April 13, 2013, 19:58 UTC)

As of today, it’s been a full week since I survived my move to Dublin. Word’s out on who my new employer is (but as usual, since this blog is personal and should not be tied to my employer, I’m not even going to name it), and I started the introductory courses. One thing I can be sure of: I will be eating healthily and compatibly with my taste — thankfully chicken, especially spicy chicken, seems to be available everywhere in Ireland, yay!

I have spent almost all my life in Venice, never staying away from it for long periods of time, with the exception of last year, which I spent for the most part, as you probably know, in Los Angeles — 2012 was a funny year like that: I had never partied for the new year, but on 31st December 2011 I was at a friend’s place with friends, after which some of us ended up leaving at around 3am… for the first time in my life I ended up sleeping on a friend’s couch. Then it was time for my first week-long vacation ever with the same group of friends in the Venetian Alps.

With this premise, it’s obvious that Dublin looks a bit alien to me. It helps that I’ve spent a few weeks over the past years in London, so I was already used to at least a few of the customs shared between the British and the Irish — they probably don’t like to be reminded that they share some customs with the British, but there it goes. But it’s definitely more similar to Italy than Los Angeles.

The funny episode of the day was me going to Boots and, after searching the aisles for a while, asking one of the workers if they kept hydrogen peroxide, which I used almost daily both in Italy and the US as a disinfectant – I cut or scrape very easily – and after being looked at in a very strange way I was informed that it is not possible to sell it anymore in Ireland… I’d guess it has something to do with its use in the London bombings of ‘05. Luckily they didn’t call the police.

I have to confess though that I like the restaurants better in the touristy, commercial areas than those in the upscale modern new districts — I love Nando’s for instance, which is nowhere near Irish, but I love its spiciness (and this time around I could buy the freaking salt!). But also most pubs have very good chicken.

I still don’t have a permanent place though. I need to look for one soonish, I suppose, but the job introduction took priority for the moment. Then again, if the guests in the next apartment throw another party at 4.30am, I might decide to find something sooner rather than later.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

GnuPG is an excellent tool for encryption and signing; however, while breaking encryption or forging signatures of large key size is likely somewhere between painful and impossible even for agencies on a significant budget, all this is only ever as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here, but someone hacking your computer, installing a keylogger and grabbing the key file is more likely. While there are no preventive measures that work for all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses along the way. I'm writing up my personal experiences here, as a kind of guide. Also, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts since that shop is referenced in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit, see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found was the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the code contributors to the smartcard part of GnuPG. In general there are two types of readers: those with a stand-alone pinpad and those without. The extra pinpad ensures that for normal operations like signing and encryption the PIN for unlocking the keys never enters the computer itself; without tampering with the reader hardware, it is pretty hard to sniff. I bought an SCM SPG532 reader, one of the first devices ever supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now, you'll want to activate the USE flag "smartcard" and maybe "pkcs11", and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new emerge.
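On Gentoo, this boils down to something like the following (a sketch; adjust to how you manage your USE flags):
echo "app-crypt/gnupg smartcard pkcs11" >> /etc/portage/package.use
emerge --oneshot app-crypt/gnupg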
Several different standards for card reader access exist. One in particular is the USB standard for integrated circuit card interface devices, CCID for short; the driver for that one is built directly into GnuPG, and the SCM SPG532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; that will be used by GnuPG if the built-in stuff fails, but it requires a daemon to be running (pcscd, just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.
These drivers do not need much (or any) configuration and should in principle work out of the box. Testing is easy: plug in the reader, insert a card, and issue the command
gpg --card-status
If it works, you should see a message about (among other things) the manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check then (especially for CCID) is whether the device permissions are OK; just repeat the above test as root. If you can now see your card, you know you have permission trouble.
Fiddling with the device file permissions was a serious pain, since all the online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore, the other does not work alone and tries to do things in unnecessarily complicated ways.) At some point I just gave up on things like user groups and told udev to hardwire the device to my user account: I created the following file as /etc/udev/rules.d/gnupg-ccid.rules:
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", OWNER:="huettel", MODE:="600"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", OWNER:="huettel", MODE:="600"
With similar settings it should in principle be possible to solve all the permission problems. (You may want to change the USB IDs and the OWNER for your needs.) Then, a quick
udevadm control --reload-rules
followed by unplugging and re-plugging the reader. Now you should be able to check the contents of your card.
If you still have problems, check the following: for accessing the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after removing a card. Just kill it (you need SIGKILL)
killall -9 scdaemon
and try again accessing the card afterwards; the daemon is re-started by gnupg. A lot of improvements in smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.
Here's what a successful card status command looks like on a blank card:
huettel@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
huettel@pinacolada ~ $

That's it for now; part 2 will be about setting up the basic card data and GnuPG functions, then we'll eventually proceed to ssh and pam...