
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
May 10, 2013, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

May 10, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo metadata support for CPE (May 10, 2013, 01:50 UTC)

Recently, the metadata.xml file syntax definition (the DTD for those that know a bit of XML) has been updated to support CPE definitions. A CPE (Common Platform Enumeration) is an identifier that describes an application, operating system or hardware device using its vendor, product name, version, update, edition and language. This CPE information is used in the CVE releases (Common Vulnerabilities and Exposures) – announcements about vulnerabilities in applications, operating systems or hardware. Not all security vulnerabilities are assigned a CVE number, but this is as close as you get towards a (public) elaborate dictionary of vulnerabilities.

By allowing Gentoo package maintainers to enter (part of) the CPE information in the metadata.xml file, applications that parse the CVE information can now more easily match if software installed on Gentoo is related to a CVE. I had a related post to this not that long ago on my blog and I’m glad this change has been made. With this information at hand, we can start feeding CPE information to the packages and then easily match this with CVEs.

I had a request to “provide” the scripts I used for the previous post. Mind you, these make too many assumptions (and probably wrong ones) for now (and I’m not really planning on updating them, as I have different methods for getting information related to CVEs), but I am planning on integrating CPE data into Gentoo’s packages further and then creating a small script that generates a “watchlist” that I can feed to cvechecker. But anyway, here are the scripts.

First, I took all CVE information and put it in a simple CSV file. The CSV is the same one used by cvechecker, so check out the application to see where it fetches the data from (there is a CVE RSS feed and a simple XSL transformation). Second, I create a “hitlist” which generates the CPEs. With the recent change to metadata.xml this step can be simplified a lot. Third, I try to match the CPE data with the CVE data, depending on a given time delay of commits. In other words, you can ask for possible CVE fixes for commits made in the last XXX days.

Gentoo at LinuxTag 2013 in Berlin (May 10, 2013, 01:03 UTC)

LinuxTag 2013 runs from May 22nd to May 25th in Berlin, Germany. With more than 10,000 visitors last year, it is one of the biggest Linux and open source events in Europe.

You will find the Gentoo booth at Hall 7.1c, Booth 179. Come and visit us! You will meet many of our developers and users, talk with us, plus get some of the Gentoo merchandise you have always wanted.

May 09, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Enabling Kernel Samepage Merging (KSM) (May 09, 2013, 01:50 UTC)

When using virtualization extensively, you will pretty soon hit the limits of your system (at least, the resources on it). When the virtualization is used primarily for testing (such as in my case), the limit is memory. So it makes sense to seek memory optimization strategies on such systems. The first thing to enable is KSM or Kernel Samepage Merging.

This Linux feature looks for memory pages that the applications have marked as being a possible candidate for optimization (sharing) which are then reused across multiple processes. The idea is that, especially for virtualized environments (but KSM is not limited to that), some processes will have the same contents in memory. Without any sharing abilities, these memory pages will be unique (meaning at different locations in your system’s memory). With KSM, such memory pages are consolidated to a single page which is then referred to by the various processes. When one process wants to modify the page, it is “unshared” so that there is no corruption or unwanted modification of data for the other processes.

Such features are not new – VMWare has it named TPS (Transparent Page Sharing) and Xen calls it “Memory CoW” (Copy-on-Write). One advantage of KSM is that it is simple to setup and advantageous for other processes as well. For instance, if you host multiple instances of the same service (web service, database, tomcat, whatever) there is a high chance that several of its memory pages are prime candidates for sharing.

Now, I do need to mention that this sharing is only enabled when the application has marked its memory as such. This is done through the madvise() method, where applications mark the memory with MADV_MERGEABLE, meaning that the applications explicitly need to support KSM in order for it to be successful. There is work on the way to support transparent KSM (such as UKSM and PKSM) where no madvise calls would be needed anymore. But beyond quickly reading the home pages (or translated home pages in the case of UKSM ;-) I have no experience with those projects.
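To show what that looks like from an application’s point of view, here is a minimal sketch (not taken from any real program; the buffer size and message are made up) of marking an anonymous mapping as mergeable:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
  size_t len = 16 * 4096;  /* a handful of pages */
  /* anonymous, private mapping; KSM only scans areas marked as mergeable */
  void * buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (buf == MAP_FAILED) {
    perror("mmap");
    return 1;
  }
  /* tell the kernel these pages are candidates for samepage merging */
  if (madvise(buf, len, MADV_MERGEABLE) != 0)
    perror("madvise(MADV_MERGEABLE)");
  else
    printf("marked %zu bytes as mergeable\n", len);
  return 0;
}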

So let’s get back to KSM. I am currently running three virtual machines (all configured to take at most 1.5 Gb of memory). Together, they take just a little over 1 Gb of memory (sum of their resident set sizes). When I consult KSM, I get the following information:

 # grep -H '' /sys/kernel/mm/ksm/pages_*
/sys/kernel/mm/ksm/pages_shared:48911
/sys/kernel/mm/ksm/pages_sharing:90090
/sys/kernel/mm/ksm/pages_to_scan:100
/sys/kernel/mm/ksm/pages_unshared:123002
/sys/kernel/mm/ksm/pages_volatile:1035

The pages_shared tells me that 48911 pages are shared (which means about 191 Mb) through 90090 references (pages_sharing – meaning the various processes have in total 90090 references to pages that are being shared). That means a gain of 41179 pages (160 Mb). Note that the resident set sizes do not take into account shared pages, so the sum of the RSS has to be subtracted with this to find the “real” memory consumption. The pages_unshared value tells me that 123002 pages are marked with the MADV_MERGEABLE advise flag but are not used by other processes.

If you want to use KSM yourself, configure your kernel with CONFIG_KSM and start KSM by echo’ing the value “1” into /sys/kernel/mm/ksm/run. That’s all there is to it.
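For reference, enabling and tuning it at runtime then looks like this (the pages_to_scan value below is only an example, pick whatever suits your system):

 # echo 1 > /sys/kernel/mm/ksm/run
 # echo 200 > /sys/kernel/mm/ksm/pages_to_scan
 # cat /sys/kernel/mm/ksm/run
1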

May 08, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
The Linux “.d” approach (May 08, 2013, 01:50 UTC)

Many services on a Linux system use a *.d directory approach to make their configuration easily extensible by other applications. This is a remarkably simple yet efficient method for exposing services towards other applications. Let’s look into how this .d approach works.

Take a look at the /etc/pam.d structure: services that are PAM-aware can place their PAM configuration files in this location, without needing any additional configuration steps or registration. Same with /etc/cron.d: applications that need specific cronjobs do not need to edit /etc/crontab directly (with the problem of concurrent access, overwriting changes, etc.) but instead can place their definitions in the cron.d directory.

This approach is getting more traction, as can be seen from the available “dot-d” directories on a system:

$ ls -d /etc/*.d
/etc/bash_completion.d  /etc/ld.so.conf.d  /etc/pam.d          /etc/sysctl.d
/etc/conf.d             /etc/local.d       /etc/profile.d      /etc/wgetpaste.d
/etc/dracut.conf.d      /etc/logrotate.d   /etc/request-key.d  /etc/xinetd.d
/etc/env.d              /etc/makedev.d     /etc/sandbox.d      /etc/cron.d
/etc/init.d             /etc/modprobe.d    /etc/sudoers.d

An application can place its configuration files in these directories, automatically “plugging” it into the operating system and the services that it provides. And the more services adopt this approach, the easier it is for applications to be pluggable within the operating system. Even complex systems such as database systems can easily configure themselves this way. And for larger organizations, this is a very interesting approach.

Consider the need to deploy a database server on a Linux system in a larger organization. Each organization has its standards for file system locations, policies for log file management, etc. With the *.d approach, these organizations only need to put files on the file system (a rather primitive feature that every organization supports) and manage these files instead of using specific, proprietary interfaces to configure the environment. But to properly control this flexibility, a few attention points need to be taken into account.

The first is to use a proper naming convention. If the organization has a data management structure, it might have specific names for services. These names are then used throughout the organization to properly identify owners or responsibilities. When using the *.d directories, these naming conventions also allow administrators to easily know whom to contact if a malfunctioning definition is placed. For instance, if a log rotation definition has a wrong entry, a file called mylogrotation does not reveal much information. However, CDBM-postgres-querylogs might reveal that the file is placed there by the customer database management team for a postgresql database. And it isn’t only about knowing whom to contact (because that could easily be done through comments as well), but also about ensuring no conflicts occur. On a shared database system, it is much more likely that two different teams place a file named postgresql (one overwriting the other’s) unless they use a proper naming convention.

The second is to use something identifying where the file comes from. A best practice when using Puppet for instance is to add in a comment to the file such as the following:

# This file is managed by Puppet through the org-pgsl-def module
# Please do not modify manually

This informs the administrator how the file is put there; you might even want to include version information.

A third one concerns the order of configuration entries, which is sometimes important. Most *.d-supporting tools do not really care about ordering, but some, like udev, do. When that is the case, the common consensus is to use numbers at the beginning of the file names. The numbers then provide a good ordering of the files.
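For instance, a udev rules directory following that convention could look like this (the file names are made up for illustration):

$ ls /etc/udev/rules.d
10-local-net.rules  40-gentoo.rules  99-site-overrides.rules

The snippets are then processed in the order given by their leading numbers.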

Not all services already offer *.d functionality, although it isn’t that difficult to provide it as well. Consider the Linux audit daemon, whose rules are managed in the /etc/audit/audit.rules file. Not that flexible, is it? But one can create a /etc/audit/audit.rules.d location and have the audit init script read these files (in alphanumeric order), creating the same functionality.

Given enough service adoption, software distribution can be sufficient to configure an application completely and integrate it with all services used by the operating system. And even services that do not support *.d directories can still be easily wrapped around so that their configuration file itself is generated based on the information in such directories. Consider a hypothetical AIDE configuration, where the aide.conf is generated based on the aide.conf.head, aide.d/* and aide.conf.tail files (similar to how resolv.conf is sometimes managed). The generation is triggered right before aide itself is called (perhaps all in a single script).
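A minimal sketch of such a wrapper script could look like the following (the /etc/aide location is an assumption; only the head, tail and aide.d file names come from the example above):

#!/bin/sh
# Rebuild aide.conf from a head file, a drop-in directory and a tail file,
# then hand control over to aide itself.
confdir=/etc/aide

{
    cat "${confdir}/aide.conf.head"
    # include the drop-in snippets in alphanumeric order, like other *.d directories
    for f in "${confdir}/aide.d"/*; do
        [ -f "${f}" ] && cat "${f}"
    done
    cat "${confdir}/aide.conf.tail"
} > "${confdir}/aide.conf"

exec aide "$@"

The same trick works for the audit rules example mentioned earlier: concatenate the snippets and feed the result to the tool.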

Such an approach allows full integration:

  • A PAM configuration file is placed, allowing the service authentication to be easily managed by administrators. Changes to the authentication (for instance, a switch to LDAP authentication or the introduction of some trust relation) are done by placing an updated file.
  • A log rotation configuration file is placed, making sure that the log files for the service do not eventually fill the partitions
  • A syslog configuration is provided, allowing for some events to be sent to a different server instead of keeping it local – or perhaps both
  • A cron configuration is stored so that statistics and other house-cleaning jobs for the service can run at night
  • An audit configuration snippet is added to ensure critical commands and configuration files are properly checked
  • Intrusion detection rules are added when needed
  • Monitoring information is placed on the file system, causing additional monitoring metrics to be automatically picked up
  • Firewall definitions are extended based on the snippets placed on the system

etc. And all this by only placing files on the file system. Keep It Simple, and efficient ;-)

May 07, 2013
Jan Kundrát a.k.a. jkt (homepage, bugs)
On Innovation, NIH, Trojita and KDE PIM (May 07, 2013, 08:03 UTC)

Jos wrote a blog post yesterday commenting on the complexity of the PIM problem. He raises an interesting concern about whether we would all be better off if there was no Trojitá and I had just improved KMail instead. As usual, the matter is more complicated than it might seem at first sight.

Executive Summary: I tried working with KDEPIM. The KDEPIM IMAP stack required a total rewrite in order to be useful. At the time I started, Akonadi did not exist. The rewrite has been done, and Trojitá is the result. It is up to the Akonadi developers to use Trojitá's IMAP implementation if they are interested; it is modular enough.

People might wonder why Trojitá exists at all. I started working on it because I wasn't happy with how the mail clients performed back in 2006. The supported features were severely limited, the speed was horrible. After studying the IMAP protocol, it became obvious that the reason for this slowness is the rather stupid way in which the contemporary clients treated the remote mail store. Yes, it's really a very dumb idea to load tens of thousands of messages when opening a mailbox for the first time. Nope, it does not make sense to block the GUI until you fetch that 15MB mail over a slow and capped cell phone connection. Yes, you can do better with IMAP, and the possibility has been there for years. The problem is that the clients were not using the IMAP protocol in an efficient manner.

It is not easy to retrofit a decent IMAP support into an existing client. There could be numerous code paths which just assume that everything happens synchronously and block the GUI when the data are stuck on the wire for some reason. Doing this properly, fetching just the required data and doing all that in an asynchronous manner is not easy -- but it's doable nonetheless. It requires huge changes to the overall architecture of the legacy applications, however.

Give Trojitá a try now and see how fast it is. I'm serious here -- Trojitá opens a mailbox with tens of thousands of messages in a fraction of a second. Try to open a big e-mail with vacation pictures from your relatives over a slow link -- you will see the important textual part pop up immediately, with the images being loaded in the background, not disturbing your work. Now try to do the same in your favorite e-mail client -- if it's as fast as Trojitá, congratulations. If not, perhaps you should switch.

Right now, the IMAP support in Trojitá is way more advanced than what is shipped in Geary or KDE PIM -- and it is this solid foundation which leads to Trojitá's performance. What needs work now is polishing the GUI and making it play well with the rest of a user's system. I don't care whether this polishing means improving Trojitá's GUI iteratively or whether its IMAP support gets used as a library in, say, KMail -- both would be very successful outcomes. It would be terrific to somehow combine the nice, polished UI of the more established e-mail clients with the IMAP engine from Trojitá. There is a GSoC proposal for integrating Trojitá into KDE's Kontact -- but for it to succeed, people from other projects must get involved as well. I have put seven years of my time into making the IMAP support rock; I would not be able to achieve the same if I was improving KMail instead. I don't need a fast KMail, I need a great e-mail client. Trojitá works well enough for me.

Oh, and there's also a currently running fundraiser for better address book integration in Trojitá. We are not asking for $100k, we are asking for $199. Let's see how many people are willing to put their money where their mouth is and actually do something to help PIM on a free desktop. Patches and donations are both equally welcome. Actually, not really -- great patches are much more appreciated. Because Jos is right -- it takes a lot of work to produce great software, and things get better when there are more people working towards their common goal together.

Update: it looks like my choice of crowdfunding platform was rather poor; catincan apparently doesn't accept PayPal :(. There's the possibility of direct donations over SourceForge/PayPal -- please keep in mind that these will be charged even if fewer donors pledge to the idea.

Sven Vermeulen a.k.a. swift (homepage, bugs)

Being long overdue – like many of our documentation-reported bugs :-( – I worked on bug 466262 to update the Gentoo Handbook with information about Network Interface Naming. Of course, the installation instructions have also seen the necessary updates to refer to this change.

With some luck (read: time) I might be able to fix various other documentation-related bugs soon. I had some problems with the new SELinux userspace that I wanted to get fixed first, and then I worked on the new SELinux policies as well as trying to figure out how SELinux deals with network-related aspects. Hence I saw time fly by at the speed of a neutrino…

BTW, the 20130424 policies are in the tree.

May 06, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Overview of Linux capabilities, part 3 (May 06, 2013, 01:50 UTC)

In previous posts I talked about capabilities and gave an introduction to how this powerful security feature within Linux can be used (and also exploited). I also covered a few capabilities, so let’s wrap this up with the remainder of them.

CAP_AUDIT_CONTROL
Enable and disable kernel auditing; change auditing filter rules; retrieve auditing status and filtering rules
CAP_AUDIT_WRITE
Write records to kernel auditing log
CAP_BLOCK_SUSPEND
Employ features that can block system suspend
CAP_MAC_ADMIN
Override Mandatory Access Control (implemented for the SMACK LSM)
CAP_MAC_OVERRIDE
Allow MAC configuration or state changes (implemented for the SMACK LSM)
CAP_NET_ADMIN
Perform various network-related operations:

  • interface configuration
  • administration of IP firewall, masquerading and accounting
  • modify routing tables
  • bind to any address for transparent proxying
  • set type-of-service (TOS)
  • clear driver statistics
  • set promiscuous mode
  • enabling multicasting
  • use setsockopt() for privileged socket operations
CAP_NET_BIND_SERVICE
Bind a socket to Internet domain privileged ports (less than 1024)
CAP_NET_RAW
Use RAW and PACKET sockets, and bind to any address for transparent proxying
CAP_SETPCAP
Allow the process to add any capability from the calling thread’s bounding set to its inheritable set, and drop capabilities from the bounding set (using prctl()) and make changes to the securebits flags.
CAP_SYS_ADMIN
Very powerful capability, includes:

  • Running quota control, mount, swap management, set hostname, …
  • Perform VM86_REQUEST_IRQ vm86 command
  • Perform IPC_SET and IPC_RMID operations on arbitrary System V IPC objects
  • Perform operations on trusted.* and security.* extended attributes
  • Use lookup_dcookie

and many, many more. man capabilities gives a good overview of them.

CAP_SYS_BOOT
Use reboot() and kexec_load()
CAP_SYS_CHROOT
Use chroot()
CAP_SYS_MODULE
Load and unload kernel modules
CAP_SYS_RESOURCE
Another capability with many consequences, including:

  • Use reserved space on ext2 file systems
  • Make ioctl() calls controlling ext3 journaling
  • Override disk quota limits
  • Increase resource limits
  • Override RLIMIT_NPROC resource limits

and many more.

CAP_SYS_TIME
Set system clock and real-time hardware clock
CAP_SYS_TTY_CONFIG
Use vhangup() and employ various privileged ioctl() operations on virtual terminals
CAP_SYSLOG
Perform privileged syslog() operations and view kernel addresses exposed with /proc and other interfaces (if kptr_restrict is set)
CAP_WAKE_ALARM
Trigger something that will wake up the system

Now when you look through the capabilities manual page, you’ll notice it talks about securebits as well. This is an additional set of flags that govern how capabilities are used, inherited, etc. System administrators don’t set these flags – they are governed by the applications themselves (when creating threads, forking, etc.). These flags are set on a per-thread level, and govern the following behavior:

SECBIT_KEEP_CAPS
Allow a thread with UID 0 to retain its capabilities when it switches its UIDs to a nonzero (non-root) value. By default, this flag is not set, and even if it is set, it is cleared on an execve call, reducing the likelihood that capabilities are “leaked”.
SECBIT_NO_SETUID_FIXUP
When set, the kernel will not adjust the capability sets when the thread’s effective and file system UIDs are switched between zero (root) and non-zero values.
SECBIT_NOROOT
If set, the kernel does not grant capabilities when a setuid-root program is executed, or when a process with an effective or real UID of 0 (root) calls execve.

Manipulating these bits requires the CAP_SETPCAP capability. Except for the SECBIT_KEEP_CAPS security bit, the others are preserved on an execve() call, and all bits are inherited by child processes (such as when fork() is used).
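As an illustration of how an application would deal with these flags, here is a minimal sketch (not taken from any real program) that reads the current securebits of the calling thread and tries to turn on SECBIT_KEEP_CAPS; as noted, doing so through PR_SET_SECUREBITS requires CAP_SETPCAP:

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/securebits.h>

int main(void) {
  /* read the securebits of the current thread */
  int bits = prctl(PR_GET_SECUREBITS, 0, 0, 0, 0);
  if (bits < 0) {
    perror("prctl(PR_GET_SECUREBITS)");
    return 1;
  }
  printf("current securebits: 0x%x\n", bits);
  /* keep capabilities across a UID switch; needs CAP_SETPCAP when done this way */
  if (prctl(PR_SET_SECUREBITS, bits | SECBIT_KEEP_CAPS, 0, 0, 0) != 0)
    perror("prctl(PR_SET_SECUREBITS)");
  return 0;
}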

As a user or admin, you can also see capability-related information through the /proc file system:

 # grep ^Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 0000001fffffffff
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff

$ grep ^Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000001fffffffff

The capabilities listed therein are bitmasks for the various capabilities. The mask 1FFFFFFFFF holds 37 positions, which match the 37 capabilities known (again, see uapi/linux/capability.h in the kernel sources to see the values of each of the capabilities). Again, the pscap tool can be used to get information about the enabled capabilities of running processes in a more human-readable format. But another tool provided by sys-libs/libcap is interesting to look at as well: capsh. The tool offers many capability-related features, including decoding the status fields:

$ capsh --decode=0000001fffffffff
0x0000001fffffffff=cap_chown,cap_dac_override,cap_dac_read_search,
cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,
cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,
cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,
cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,
cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,
cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,
cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,
cap_syslog,35,36

Besides fancy decoding, capsh can also launch a shell with reduced capabilities. This makes it a good utility for jailing chroots even further.
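For instance, something along these lines (a hypothetical invocation; the dropped capability and the command are arbitrary) starts a shell whose bounding set no longer contains cap_net_raw, so a ping launched from it should fail to open its raw socket:

 # capsh --drop=cap_net_raw -- -c 'ping -c 1 127.0.0.1'
ping: icmp open socket: Operation not permitted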

May 05, 2013

For a long time I have found it a pain that, every time I keyword a package, I receive a repoman failure (mostly dependency.bad) that does not concern the arch I’m changing.
So, checking the repoman manual, I realized that --ignore-arches looks bad for my case and I decided to request a new feature: --include-arches.
This feature, as explained in the bug, checks only the arches that you pass as argument and should be used only when you are keywording/stabilizing.

Some examples/usage:

First, it saves time; the following example runs repoman full in the kdelibs directory:
$ time repoman full > /dev/null 2>&1
real 0m12.434s

$ time repoman full --include-arches "amd64" > /dev/null 2>&1
real 0m3.880s

Second, kdelibs suffers for a dependency.bad on amd64-fbsd, so:
$ repoman full
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs
dependency.bad 2
kde-base/kdelibs/kdelibs-4.10.2.ebuild: PDEPEND: ~amd64-fbsd(default/bsd/fbsd/amd64/9.0) ['>=kde-base/nepomuk-widgets-4.10.2:4[aqua=]']

$ repoman full --include-arches "amd64"
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs

Now when I keyword packages I can check specific arches and skip the useless checks, since they cause, in this case, only a waste of time.
Thanks to Zac for the work on it.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Overview of Linux capabilities, part 2 (May 05, 2013, 01:50 UTC)

As I have (at a very high level) described capabilities and talked a bit about how to work with them, I started with a small overview of file-related capabilities. So next up are process-related capabilities (note, this isn’t standard terminology, more a categorization that I make myself).

CAP_IPC_LOCK
Allow the process to lock memory
CAP_IPC_OWNER
Bypass the permission checks for operations on System V IPC objects (similar to the CAP_DAC_OVERRIDE for files)
CAP_KILL
Bypass permission checks for sending signals
CAP_SETUID
Allow the process to make arbitrary manipulations of process UIDs and forge UIDs when passing socket credentials via UNIX domain sockets
CAP_SETGID
Same, but then for GIDs
CAP_SYS_NICE
This capability governs several permissions/abilities, namely to allow the process to

  • change the nice value of itself and other processes
  • set real-time scheduling priorities for itself, and set scheduling policies and priorities for arbitrary processes
  • set the CPU affinity for arbitrary processes
  • apply migrate_pages to arbitrary processes and allow processes to be migrated to arbitrary nodes
  • apply move_pages to arbitrary processes
  • use the MPOL_MF_MOVE_ALL flag with mbind() and move_pages()

The abilities related to page moving, migration and nodes are of importance for NUMA systems, not something most workstations have or need.

CAP_SYS_PACCT
Use acct(), to enable or disable system resource accounting for the process
CAP_SYS_PTRACE
Allow the process to trace arbitrary processes using ptrace(), apply get_robust_list() against arbitrary processes and inspect processes using kcmp().
CAP_SYS_RAWIO
Allow the process to perform I/O port operations, access /proc/kcore and employ the FIBMAP ioctl() operation.

Capabilities such as CAP_KILL and CAP_SETUID are very important to govern correctly, but this post would be rather dull (given that the definitions of the above capabilities can be found in the manual page) if I didn’t talk a bit more about how this works out in practice. Take a look at the following C application code:

#define _GNU_SOURCE /* needed for setresuid() with glibc */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/capability.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char ** argv) {
  /* query the capability bounding set through prctl() */
  printf("cap_setuid and cap_setgid: %d\n", prctl(PR_CAPBSET_READ, CAP_SETUID|CAP_SETGID, 0, 0, 0));
  /* print the capabilities of the file and of the running process */
  printf(" %s\n", cap_to_text(cap_get_file(argv[0]), NULL));
  printf(" %s\n", cap_to_text(cap_get_proc(), NULL));
  if (setresuid(0, 0, 0));
    /* note the stray ';' above: the message is printed whether or not setresuid() failed */
    printf("setresuid() failed: %s\n", strerror(errno));
  execve("/bin/sh", NULL, NULL);
}

At first sight, it looks like an application to get root privileges (setresuid()) and then spawn a shell. If that application would be given CAP_SETUID and CAP_SETGID effectively, it would allow anyone who executed it to automatically get a root shell, wouldn’t it?

$ gcc -o test -lcap test.c
# setcap cap_setuid,cap_setgid+ep test
$ ./test
cap_setuid and cap_setgid: 1
 = cap_setgid,cap_setuid+ep
 =
setresuid() failed: Operation not permitted

So what happened? After all, the two capabilities are set with the +ep flags given. Then why aren’t these capabilities enabled? Well, this binary was stored on a file system that is mounted with the nosuid option. As a result, this capability is not enabled and the application didn’t work. If I move the file to another file system that doesn’t have the nosuid option:

$ /usr/local/bin/test
cap_setuid and cap_setgid: 1
 = cap_setgid,cap_setuid+ep
 = cap_setgid,cap_setuid+ep
setresuid() failed: Operation not permitted

So the capabilities now do get enabled, so why does this still fail? This now is due to SELinux:

type=AVC msg=audit(1367393377.342:4778): avc:  denied  { setuid } for  pid=21418 comm="test" capability=7  scontext=staff_u:staff_r:staff_t tcontext=staff_u:staff_r:staff_t tclass=capability

And if you enable grSecurity’s TPE, we can’t even start the binary to begin with:

$ ./test
-bash: ./test: Permission denied
$ /lib/ld-linux-x86-64.so.2 /home/test/test
/home/test/test: error while loading shared libraries: /home/test/test: failed to map segment from shared object: Permission denied

# dmesg
...
[ 5579.567842] grsec: From 192.168.100.1: denied untrusted exec (due to not being in trusted group and file in non-root-owned directory) of /home/test/test by /home/test/test[bash:4221] uid/euid:1002/1002 gid/egid:100/100, parent /bin/bash[bash:4195] uid/euid:1002/1002 gid/egid:100/100

When all these “security obstacles” are not enabled, then the call succeeds:

$ /usr/local/bin/test
cap_setuid and cap_setgid: 1
 = cap_setgid,cap_setuid+ep
 = cap_setgid,cap_setuid+ep
setresuid() failed: Success
root@hpl tmp # 

This again shows how important it is to regularly review capability-enabled files on the file system, as this is a major security problem that cannot be detected by only looking for setuid binaries, but also that securing a system is not limited to one or a few settings: one always has to take the entire setup into consideration, hardening the system so it becomes more difficult for malicious users to abuse it.

# filecap -a
file                 capabilities
/usr/local/bin/test     setgid, setuid

May 04, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Overview of Linux capabilities, part 1 (May 04, 2013, 01:50 UTC)

In the previous posts, I talked about capabilities and how they can be used to allow processes to run in a privileged fashion without granting them full root access to the system. An example given was how capabilities can be leveraged to run ping without granting it setuid root rights. But what are the various capabilities that Linux is, well, capable of?

There are many, and as time goes by, more capabilities are added to the set. The last capability added to the main Linux kernel tree was the CAP_BLOCK_SUSPEND in the 3.5 series. An overview of all capabilities can be seen with man capabilities or by looking at the Linux kernel source code, include/uapi/linux/capability.h. But because you are all lazy, and because it is a good exercise for myself, I’ll go through many of them in this and the next few posts.

For now, let’s look at file related capabilities. As a reminder, if you want to know which SELinux domains are “granted” a particular capability, you can look this up using sesearch. The capability is either in the capability or capability2 class, and is named after the capability itself, without the CAP_ prefix:

$ sesearch -c capability -p chown -A
CAP_CHOWN
Allow making changes to the file UIDs and GIDs.
CAP_DAC_OVERRIDE
Bypass file read, write and execute permission checks. I came across a reddit post that was about this capability not that long ago.
CAP_DAC_READ_SEARCH
Bypass file read permission and directory read/search permission checks.
CAP_FOWNER
This capability governs five abilities in one:

  • Bypass permission checks on operations that normally require the file system UID of the process to match the UID of the file (unless already granted through CAP_DAC_READ_SEARCH and/or CAP_DAC_OVERRIDE)
  • Allow to set extended file attributes
  • Allow to set access control lists
  • Ignore directory sticky bit on file deletion
  • Allow specifying O_NOATIME for files in open() and fcntl() calls
CAP_FSETID
Do not clear the setuid/setgid permission bits when a file is modified
CAP_LEASE
Allow establishing leases on files
CAP_LINUX_IMMUTABLE
Allow setting the FS_APPEND_FL and FS_IMMUTABLE_FL inode flags
CAP_MKNOD
Allow creating special files with mknod
CAP_SETFCAP
Allow setting file capabilities (what I did with the anotherping binary in the previous post)

When working with SELinux (especially when writing applications), you’ll find that the CAP_DAC_READ_SEARCH and CAP_DAC_OVERRIDE capabilities come up often. This is the case when applications are written to run as root yet want to scan through, read or even execute non-root-owned files. Without SELinux, because these run as root, this is all granted. However, when you start confining those applications, it becomes apparent that they require this capability. Another example is when you run user applications as root, like when trying to play a movie or music file with mplayer when this file is owned by a regular user:

type=AVC msg=audit(1367145131.860:18785): avc:  denied  { dac_read_search } for
pid=8153 comm="mplayer" capability=2  scontext=staff_u:sysadm_r:mplayer_t
tcontext=staff_u:sysadm_r:mplayer_t tclass=capability

type=AVC msg=audit(1367145131.860:18785): avc:  denied  { dac_override } for
pid=8153 comm="mplayer" capability=1  scontext=staff_u:sysadm_r:mplayer_t
tcontext=staff_u:sysadm_r:mplayer_t tclass=capability

Notice the time stamp: both checks are triggered at the same time. What happens is that the Linux security hooks first check for DAC_READ_SEARCH (the “lesser” grant of the two) and then for DAC_OVERRIDE (which contains DAC_READ_SEARCH and more). In both cases, the check failed in the above example.

The CAP_LEASE capability is one that I had not heard about before (actually, I had not heard of getting “file leases” on Linux either). A file lease allows the lease holder (which requires this capability) to be notified when another process tries to open or truncate the file. When that happens, the call itself is blocked and the lease holder is notified (usually using SIGIO) about the access. It is not really meant to lock a file (since, if the lease holder doesn’t properly release it, the lease is forcefully “broken” and the other process can continue its work) but rather to give the lease holder a chance to properly close the file descriptor, flush caches, etc.

BTW, on my system, only 5 SELinux domains hold the lease capability.

There are 37 capabilities known by the Linux kernel at this time. The above list has 9 file related ones. So perhaps next I can talk about process capabilities.

May 03, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
May 3rd = Day Against DRM (May 03, 2013, 14:23 UTC)

Learn more at dayagainstdrm.org (and drm.info).

Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)

Almost a year ago, I worked with Pooja on transliterating a Hindi poem to Bharati Braille for a Type installation at Amar Jyoti School, an institute for the visually impaired in Delhi. You can read more about that on her blog post about it. While working on that, we were surprised to discover that there were no free (or open source) tools to do the conversion! All we could find was expensive proprietary software or horribly wrong websites. We had to sit down and manually transliterate each character while keeping in mind the idiosyncrasies of the conversion.

Now, like all programmers who love what they do, I have an urge to reduce the amount of drudgery and repetitive work in my life with automation ;). In addition, we both felt that a free tool to do such a transliteration would be useful for those who work in this field. And so, we decided to work on a website to convert from Devanagari (Hindi & Marathi) to Bharati Braille.

Now, after tons of research and design/coding work, we are proud to announce the first release of our Devanagari to Bharati Braille converter! You can read more about the converter here, and download the source code on Github.

If you know anyone who might find this useful, please tell them about it!

Sven Vermeulen a.k.a. swift (homepage, bugs)
Restricting and granting capabilities (May 03, 2013, 01:50 UTC)

As capabilities are a way of running processes with some privileges, without needing to grant them root privileges, it is important to understand that they exist, not only if you are a system administrator, but also if you are an auditor or hold another security-related function. Having processes run as a non-root user is no longer sufficient to assume that they do not hold any rights to mess up the system or read files they shouldn’t be able to read.

The grsecurity kernel patch set, which is applied to the Gentoo hardened kernel sources, contains for instance CONFIG_GRKERNSEC_CHROOT_CAPS which, as per its documentation, “restricts the capabilities on all root processes within a chroot jail to stop module insertion, raw i/o, system and net admin tasks, rebooting the system, modifying immutable files, modifying IPC owned by another, and changing the system time.” But other implementations might even use capabilities to restrict the users. Consider LXC (Linux Containers). When a container is started, CAP_SYS_BOOT (the ability to shutdown/reboot the system/container) is removed so that users cannot abuse this privilege.

You can also grant capabilities to users selectively, using pam_cap.so (the Capabilities Pluggable Authentication Module). For instance, to allow some users to ping, instead of granting cap_net_raw immediately (+ep), we can assign the capability to some users through PAM, and have the ping binary pick up and use this capability instead (+p). That doesn’t mean that the capability is in effect, but rather that it is in a sort of permitted set. Applications that are granted a certain permission this way can use this capability if the user is allowed to have it, and won’t otherwise.

# setcap cap_net_raw+p anotherping

# vim /etc/pam.d/system-login
... add in something like
auth     required     pam_cap.so

# vim /etc/security/capability.conf
... add in something like
cap_net_raw           user1

The logic used with capabilities can be described as follows (it is not as difficult as it looks):

        pI' = pI
  (***) pP' = fP | (fI & pI)
        pE' = pP' & fE          [NB. fE is 0 or ~0]

  I=Inheritable, P=Permitted, E=Effective // p=process, f=file
  ' indicates post-exec().

So, for instance, the second line reads “The permitted set of capabilities of the newly executed process is set to the permitted set of capabilities of its executable file, together with the result of the AND operation between the inheritable set of the file and the inheritable set of the parent process.”
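To make that concrete, here is a hypothetical walk-through (the sets are invented purely for illustration). Suppose the file carries fP = {}, fI = {cap_net_raw} and fE = ~0, and pam_cap has put cap_net_raw in the user’s inheritable set, so pI = {cap_net_raw}. Then:

  pP' = fP | (fI & pI) = {} | ({cap_net_raw} & {cap_net_raw}) = {cap_net_raw}
  pE' = pP' & fE       = {cap_net_raw}

so the new process ends up with cap_net_raw permitted and effective, but only for users whose PAM configuration grants the inheritable capability; for everyone else both sets stay empty.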

As an admin, you might want to keep an eye out for binaries that have particular capabilities set. With filecap you can list which capabilities are in the effective set of files found on the file system (for instance, +ep).

# filecap 
file                 capabilities
/bin/anotherping     net_raw

Similarly, with pscap you can see the capabilities set on running processes.

# pscap -a
ppid  pid   name        command           capabilities
6148  6152  root        bash              full

It might be wise to take this up in the daily audit reports.

May 02, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Capabilities, a short intro (May 02, 2013, 01:50 UTC)

Capabilities. You probably have heard of them already, but when you start developing SELinux policies, you’ll notice that you come in closer contact with them than before. This is because SELinux, when applications want to do something “root-like”, checks the capability of that application. Without SELinux, this either requires the binary to have the proper capability set, or the application to run as root. With SELinux, the capability also needs to be granted to the SELinux context (the domain in which the application runs).

But forget about SELinux for now, and let’s focus on capabilities. Capabilities in Linux are flags that tell the kernel what the application is allowed to do, but unlike file access, capabilities for an application are system-wide: there is no “target” to which they apply. Think of them as “abilities” of an application. See for yourself through man capabilities. If you have no additional security mechanism in place, the Linux root user has all capabilities assigned to it. You can remove capabilities from the root user if you want to, but generally, capabilities are used to grant applications that tiny bit more privileges, without needing to grant them root rights.

Consider the ping utility. It is marked setuid root on some distributions, because the utility requires the (cap)ability to send raw packets. This capability is known as CAP_NET_RAW. However, thanks to capabilities, you can now mark the ping application with this capability and drop the setuid from the file. As a result, the application does not run with full root privileges anymore, but with the restricted privileges of the user plus one capability, namely the CAP_NET_RAW.

Let’s take this ping example to the next level: copy the binary (possibly relabel it as ping_exec_t if you run with SELinux), make sure it does not hold the setuid and try it out:

# cp ping anotherping
# chcon -t ping_exec_t anotherping

Now as a regular user:

$ ping -c 1 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms

$ anotherping -c 1 127.0.0.1
ping: icmp open socket: Operation not permitted

Let’s assign the binary with the CAP_NET_RAW capability flag:

# setcap cap_net_raw+ep anotherping

And tadaa:

$ anotherping -c 1 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms

What setcap did was place an extended attribute on the file, which is a binary representation of the capabilities assigned to the application. The additional information (+ep) means that the capability is permitted and effective.
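You can read that extended attribute back with getcap, which ships with libcap as well (the exact output format differs a bit between libcap versions, so take this as an approximation):

# getcap anotherping
anotherping = cap_net_raw+ep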

So much for the primer; I’ll talk about the various capabilities in a later post.

May 01, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

If you’re a university student, time is running out! You could get paid to hack on Gentoo or other open-source software this summer, but you’ve gotta act now. The deadline to apply for the Google Summer of Code is this Friday.

If this sounds like your dream come true, you can find some Gentoo project ideas here and Gentoo’s GSoC homepage here. For non-Gentoo projects, you can scan through the GSoC website to find the details.


Tagged: gentoo, gsoc

Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux mount options (May 01, 2013, 01:50 UTC)

When you read through the Gentoo Hardened SELinux handbook, you’ll notice that we sometimes update /etc/fstab with some SELinux-specific settings. So, what are these settings about and are there more of them?

First of all, let’s look at a particular example from the installation instructions so you see what I am talking about:

tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,rootcontext=system_u:object_r:tmp_t  0 0

What the rootcontext= option does here is set the context of the “root” of that file system (meaning, the context of /tmp in the example) to the specified context before the file system is made visible to userspace. Because this is done so early, the file system is known as tmp_t throughout its life cycle (not just some time after the mount).

Another option that you’ll frequently see on the Internet is the context= option. This option is most frequently used for file systems that do not support extended attributes, and as such cannot store the context of files on the file system. With the context= mount option set, all files on that file system get the specified context. For instance, context=system_u:object_r:removable_t.
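As an illustration, a (hypothetical) fstab entry for a FAT-formatted USB stick, which cannot store extended attributes, could look like this:

/dev/sdb1  /mnt/usb  vfat  noauto,user,context=system_u:object_r:removable_t  0 0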

If the file system does support extended attributes, you might find some benefit in using the defcontext= option. When set, the context of files and directories (and other resources on that file system) that do not have a SELinux context set yet will use this default context. However, once a context is set, it will use that context instead.

The last context-related mount option is fscontext=. With this option, you set the context of the “filesystem” class object of the file system rather than the mount itself (or the files). Within SELinux, “filesystem” is one of the resource classes that can get a context. Remember the /tmp mount example from before? Well, even though the files are labeled tmp_t, the file system context itself is still tmpfs_t.

It is important to know that, if you use these mount options, context= is mutually exclusive with the other options, as it “forces” the context on all resources (including the filesystem class).

April 28, 2013
Raúl Porcel a.k.a. armin76 (homepage, bugs)
The new BeagleBone Black and Gentoo (April 28, 2013, 18:02 UTC)

Hi all, long time no see.

Some weeks ago I got an early version of the BeagleBone Black from the people at Beagleboard.org to create the documentation I always create with every device I get.

As always, I’d like to announce the guide for installing Gentoo on the BeagleBone Black. Have a look at: http://dev.gentoo.org/~armin76/arm/beagleboneblack/install.xml . Feel free to send any corrections my way.

This board is a new version of the original BeagleBone, known in the community as the BeagleBone white, for which I wrote a post: http://armin762.wordpress.com/2012/01/01/beaglebone-and-gentoo/

This new version differs in some aspects from the previous version:

  • Cheaper: 45$ vs 89$ of the BeagleBone white
  • 512MB DDR3L RAM vs 256MB DDR2 RAM of the BeagleBone white
  • 1GHz of processor speed vs 720MHz of the BeagleBone white, both when using an external PSU for power

Also, it has some features that the old BeagleBone didn’t have:

  • miniHDMI output
  • 2GB eMMC

However, the new version is missing:

  • Serial port and JTAG through the miniUSB interface

The reason this feature is missing is cost-cutting, as can be read in the reference manual.

The full specs of the BeagleBone Black are:
  • ARMv7-A 1GHz TI AM3358/9 ARM Cortex-A8 processor
  • 512MB DDR3L RAM
  • SMSC LAN8710 Ethernet card
  • 1x microSDHC slot
  • 1x USB 2.0 Type-A port
  • 1x mini-USB 2.0 OTG port
  • 1x RJ45
  • 1x 6 pin 3.3V TTL Header for serial
  • Reset, power and user-defined button

More info about the specs in BeagleBone Black’s webpage.

For those as curious as me, here are the bootlog and the cpuinfo.

I’ve found two issues while working on it:

  1. The USB port doesn’t have working hotplug detection. That means that if you plug a USB device into the USB port, it will only be detected once; if you remove the USB device, the USB port will stop working. I’ve been told that they are working on it. I haven’t been able to find a workaround for it.
  2. The BeagleBone Black doesn’t detect a microSD card that is plugged in after it has booted from the eMMC. If you want to use a microSD card for additional storage, it must be inserted before the board boots.

I’d like to thank the people at Beagleboard.org for providing me a Beaglebone Black to document this.

Have fun!


April 26, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB and Pacemaker recent bumps (April 26, 2013, 14:23 UTC)

mongoDB 2.4.3

Yet another bugfix release, this new stable branch is surely one of the most quickly iterated I’ve ever seen. I guess we’ll wait a bit longer at work before migrating to 2.4.x.

pacemaker 1.1.10_rc1

This is the release of pacemaker we’ve been waiting for, fixing, among other things, the ACL problem which was introduced in 1.1.9. Andrew and others are working hard to get a proper 1.1.10 out soon, thanks guys.

Meanwhile, we (the Gentoo cluster herd) have been contacted by @Psi-Jack, who has offered his help to follow and keep some of our precious clustering packages up to date. I hope our work together will benefit everyone!

All of this is live on portage, enjoy.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My time abroad: loyalty cards (April 26, 2013, 12:56 UTC)

Compared to most people around me now, and probably most of the people who read my blog, my life is not that extraordinary in terms of travel and moving around. I’ve been, after all, scared of planes for years, and it wasn’t until last year that I got out of the continent — in a year, though, I more than doubled the number of flights I’ve been on, with 18 last year, and more than doubled the number of countries I’ve been to, counting Luxembourg even though I only landed there and got on a bus to get back to Brussels after Alitalia screwed up.

On the other hand, compared to most of the people I know in Italy, I’ve been going around quite a bit, as I spent a considerable amount of time last year in Los Angeles, and I’ve now moved to Dublin, Ireland. And there are quite a few differences between these places and Italy. I’ve already written a bit about the differences I found during my time in the USA, but this time I want to focus on something which is quite a triviality, but still a remarkable difference between the three countries I have got to know up to now. As the title suggests, I’m referring to stores’ loyalty cards.

Interestingly enough, there was just this week an article in the Irish Times about the “privacy invasion” of loyalty cards. I honestly don’t see it as big a deal as many others do. Yes, they do profile your shopping habits. Yes, if you do not keep private the kind of offers they send you, they might tell others something about you as well — the newspaper actually brought up the example of a father who discovered his daughter’s pregnancy because of the kind of coupons the supermarket was sending, based on her change of spending habits; I’m sorry but I cannot really feel bad about it. After all, absolute privacy and relevant offers are kinda at opposite ends of a range… and I’m usually happy enough when companies are relevant to me.

So of course stores want to know the habits of a single person, or of a single household, and for that they give you loyalty cards… but for you to use them, they have to give you something in return, don’t they? This is where the big difference on this topic appears clearly, if you look at the three countries:

  • in both Italy and Ireland, you get “points” with your shopping; in the USA, instead, the card gives you immediate discounts; I’m pretty sure that this gives not-really-regular-shoppers a good reason to get the card as well: you can easily save a few dollars on a single grocery run by getting the loyalty card at the till;
  • in Italy you redeem the points to get prizes – this works not so differently than with airlines after all – sometimes by adding a contribution, sometimes for free; in my experience the contribution is never worth it, so either you get something for free or just forget about it;
  • in Ireland I still haven’t seen a single prize system; instead they work with coupons: you get a certain amount of points for each euro you spend (usually, one point per euro), and when you reach a certain amount of points, they get a value (usually, one cent per point), and a coupon redeemable for that value is sent to you.

Of course, the “European” method (only by contrast with American, since I don’t know what other countries do), is a real loyalty scheme: you need a critical mass of points for them to be useful, which means that you’ll try to get on the same store as much as you can. This is true for airlines as well, after all. On the other hand, people who shop occasionally are less likely to request the card at all, so even if there is some kind of data to be found in their shopping trends, they will be completely ignored by this kind of scheme.

I’m honestly not sure which method I prefer, at this point I still have one or two loyalty cards from my time in Los Angeles, and I’m now collecting a number of loyalty cards here in Dublin. Some are definitely a good choice for me, like the Insomnia card (I love getting coffee at a decent place where I can spend time to read, in the weekends), others, like Dunnes, make me wonder.. the distance from the supermarket to where I’m going to live is most likely offsetting the usefulness of their coupons compared to the (otherwise quite more expensive) Spar at the corner.

At any rate, I just want to write my take on the topic, which is definitely not of interest to most of you…

April 25, 2013
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Recently, I have been toying around with GateOne, a web-based SSH client/terminal emulator. However, installing it on my server proved to be a bit challenging: it requires tornado as a webserver, and uses websockets, while I have an Apache 2.2 instance already running with a few sites on it (and my authentication system configured for my tastes)

So, I looked at how to configure a reverse proxy for GateOne, but websockets were not officially supported by Apache... until recently! Jim Jagielski added the proxy_wstunnel module in trunk a few weeks ago. From what I have seen on the mailing list, backporting to 2.4 is easy to do (and was suggested as an official backport), but 2.2 required a few additional changes to the original patch (and current upstream trunk).

A few fixes later, I got a working patch (based on Apache 2.2.24), available here: http://cafarelli.fr/gentoo/apache-2...

Recompile with this patch, and you will get a nice and shiny mod_proxy_wstunnel.so module file!

Now just load it (in /etc/apache2/httpd.conf in Gentoo):
<IfDefine PROXY>
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
</IfDefine>

and add a location pointing to your GateOne installation:

<Location /gateone/ws>
    ProxyPass wss://127.0.0.1:1234/gateone/ws
    ProxyPassReverse wss://127.0.0.1:1234/gateone/ws
</Location>

<Location /gateone>
    Order deny,allow
    Deny from all
    Allow from #your favorite rule

    ProxyPass http://127.0.0.1:1234/gateone
    ProxyPassReverse http://127.0.0.1:1234/gateone
</Location>


Reload Apache, and you now have GateOne running behind your Apache server :) If it does not work, first check GateOne's log and configuration, especially the "origins" variable.

For other websocket applications, Jim Jagielski comments here:

ProxyPass /whatever ws://websocket-srvr.example.com/

Basically, the new submodule adds the 'ws' and 'wss' scheme to the allowed protocols between the client and the backend, so you tell Apache that you'll be talking 'ws' with the backend (same as ajp://whatever sez that httpd will be talking ajp to the backend).

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Tarsnap and backup strategies (April 25, 2013, 13:52 UTC)

After having had a quite traumatic experience with a customer’s service running on one of the virtual servers I run last November, I made sure to have a very thorough backup for all my systems. Unfortunately, it turns out to be a bit too thorough, so let me explore with you what was going on.

First of all, the software I use to run the backup is tarsnap — you might have heard of it or not, but it’s basically a very smart service, that uses an open-source client, based upon libarchive, and then a server system that stores content (de-duplicated, compressed and encrypted with a very flexible key system). The author is a FreeBSD developer, and he’s charging an insanely small amount of money.

But the most important part to know when you use tarsnap is that you just always create a new archive: it doesn’t really matter what you changed, just get everything together, and it will automatically de-duplicate the content that didn’t change, so why bother? My first dumb method of backups, which is still running as of this time, is to simply, every two hours, dump a copy of the databases (one server runs PostgreSQL, the other MySQL — I no longer run MongoDB but I start to wonder about it, honestly), and then use tarsnap to generate an archive of the whole /etc, /var and a few more places where important stuff is. The archive is named after date and time of the snapshot. And I haven’t deleted any snapshot since I started, for most servers.
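
For concreteness, that “dumb” method boils down to something like the following sketch, run from cron every two hours (the database user, paths and archive name format here are my own assumptions):

# dump the database, then archive /etc and /var under a timestamped name
pg_dumpall -U postgres > /var/backups/db.sql
tarsnap -c -f "snapshot-$(date +%Y%m%d-%H%M)" /etc /var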

It was a mistake.

When I went to recover the data out of earhart (the host that still hosts this blog, a customer’s app, and a couple more sites, like the assets for the blog and even Autotools Mythbuster — but all the static content, as it’s managed by git, is now also mirrored and served active-active from another server called pasteur), the time it took to extract the backup was unsustainable. The reason was obvious when I thought about it: since it has been de-duplicating for almost a year, it would have to scan hundreds if not thousands of archives to get all the small bits and pieces.

I still haven’t replaced this backup system, which is very bad for me, especially since it takes a long time to delete the older archives even after extracting them. On the other hand it’s also largely a matter of tradeoff in expenses, as going through all the older archives to remove the old crap drained my tarsnap credits quickly. Since the data is de-duplicated and encrypted, the archives’ data needs to be downloaded and decrypted before it can be deleted.

My next step is going to be to set it up so that the script is executed in different modes: 24 times in 48 hours (every two hours), 14 times in 14 days (daily), and 8 times in two months (weekly). The problem is actually doing the rotation properly with a script, but I’ll probably publish a Puppet module to take care of that, since that’s the easiest way for me to make sure it executes as intended.
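
A rough sketch of such a tiered rotation, as a shell script called from cron with the tier name as argument (the archive naming, retention counts and the use of GNU head -n -N are my own assumptions, not a finished tool):

#!/bin/sh
# create a new archive for the given tier, then drop the oldest archives beyond the retention count
tier="$1"                      # hourly, daily or weekly
case "$tier" in
  hourly) keep=24 ;;
  daily)  keep=14 ;;
  weekly) keep=8  ;;
  *) echo "usage: $0 hourly|daily|weekly" >&2; exit 1 ;;
esac
tarsnap -c -f "${tier}-$(date +%Y%m%d-%H%M)" /etc /var
tarsnap --list-archives | grep "^${tier}-" | sort | head -n "-${keep}" | \
  while read old; do tarsnap -d -f "$old"; done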

The essence of this post is basically to warn you all that, no matter how cheap it is to keep around the whole set of backups since the start of time, it’s still a good idea to rotate them… especially for content that does not change that often! Think about it whenever you set up any kind of backup strategy…

April 24, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Hello Gentoo Planet (April 24, 2013, 08:51 UTC)

Hey Gentoo folks !

I finally followed a friend’s advice and stepped into the Gentoo Planet and Universe feeds. I hope my modest contributions will help and be of interest to some of you readers.

As you’ll see, I don’t talk only about Gentoo but also about photography and technology more generally. I also often post about the packages I maintain or have an interest in, to highlight their key features or bug fixes.

April 21, 2013

Those of you who don't live under a rock will have learned by now that AMD has published VDPAU code to use the Radeon UVD engine for accelerated video decode with the free/open source drivers.

In case you want to give it a try, mesa-9.2_pre20130404 has been added (under package.mask) to the portage tree for your convenience. Additionally you will need a patched kernel and new firmware.
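
If you want to give the masked snapshot a try, something along these lines should do it (a sketch; the exact atom and keyword you need may differ on your system):

echo "=media-libs/mesa-9.2_pre20130404 ~amd64" >> /etc/portage/package.accept_keywords
echo "=media-libs/mesa-9.2_pre20130404" >> /etc/portage/package.unmask
emerge -1 media-libs/mesa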

Kernel

For kernel 3.9, grab the 10 patches from the dri-devel mailing list thread (recommended). [UPDATE] I put the patches into a tarball and attached it to Gentoo bug 466042. [/UPDATE] For kernel 3.8 I have collected the necessary patches here, but be warned that kernel 3.8 is not officially supported. It works on my Radeon 6870, YMMV.
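
Applying the patch set is the usual drill, sketched here under the assumption that you unpacked the patches into /path/to/uvd-patches and your kernel sources live in /usr/src/linux:

cd /usr/src/linux
for p in /path/to/uvd-patches/*.patch; do patch -p1 < "$p"; done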

Firmware

The firmware is part of radeon-ucode-20130402, but has not yet reached the linux-firmware tree. If you require other firmware from the linux-firmware package, remove the radeon files from the savedconfig file and build the package with USE="savedconfig" to allow installation together with radeon-ucode. [UPDATE]linux-firmware-20130421 now contains the UVD firmware, too.[/UPDATE]
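
A sketch of that savedconfig dance (the savedconfig path follows the usual Gentoo layout; the exact file name depends on your linux-firmware version):

echo "sys-kernel/linux-firmware savedconfig" >> /etc/portage/package.use
emerge -1 sys-kernel/linux-firmware
# edit /etc/portage/savedconfig/sys-kernel/linux-firmware-<version>, remove the radeon/* lines,
# then rebuild so the radeon files are left to radeon-ucode
emerge -1 sys-kernel/linux-firmware radeon-ucode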

The new firmware files are
radeon/RV710_uvd.bin: Radeon 4350-4670, 4770.
radeon/RV770_uvd.bin: Not useful at this time. Maybe later for 4200, 4730, 4830-4890.
radeon/CYPRESS_uvd.bin: Evergreen cards.
radeon/SUMO_uvd.bin: Northern Islands cards and Zacate/Llano APUs.
radeon/TAHITI_uvd.bin: Southern Islands cards and Trinity APUs.

Testing it

If your kernel is properly patched and finds the correct firmware, you will see this message at boot:
[drm] UVD initialized successfully.
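If you missed it scrolling by, a trivial check after boot is:
dmesg | grep -i uvd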
If mesa was correctly built with VDPAU support, vdpauinfo will list the following codecs:
Decoder capabilities:

name level macbs width height
-------------------------------------------
MPEG1 16 1048576 16384 16384
MPEG2_SIMPLE 16 1048576 16384 16384
MPEG2_MAIN 16 1048576 16384 16384
H264_BASELINE 16 9216 2048 1152
H264_MAIN 16 9216 2048 1152
H264_HIGH 16 9216 2048 1152
VC1_SIMPLE 16 9216 2048 1152
VC1_MAIN 16 9216 2048 1152
VC1_ADVANCED 16 9216 2048 1152
MPEG4_PART2_SP 16 9216 2048 1152
MPEG4_PART2_ASP 16 9216 2048 1152
If mplayer and its dependencies were correctly built with VDPAU support, running it with the "-vc ffh264vdpau," parameter will output something like the following when playing back an H.264 file:
VO: [vdpau] 1280x720 => 1280x720 H.264 VDPAU acceleration
To make mplayer use acceleration by default, uncomment the [vo.vdpau] section in /etc/mplayer/mplayer.conf.

Gallium3D Head-up display

Another cool new feature is the Gallium3D HUD (link via Phoronix), which can be enabled with the GALLIUM_HUD environment variable. This supposedly works with all the Gallium drivers (i915g, radeon, nouveau, llvmpipe).

An example screenshot of Supertuxkart using GALLIUM_HUD="cpu0+cpu1+cpu2:100,cpu:100,fps;draw-calls,requested-VRAM+requested-GTT,pixels-rendered"
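
Enabling it is just a matter of exporting the variable before starting any GL application, for example (the game binary here is only an example):

GALLIUM_HUD="fps,cpu" supertuxkart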

If you have any questions or problems setting up UVD on Gentoo, stop by #gentoo-desktop on freenode IRC.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is a follow-up on my last post for autotools introduction. I’m trying to keep these posts bite sized both because it seems to work nicely, and because this way I can avoid leaving the posts rotting in the drafts set.

So after creating a simple autotools build system in the previous post, now you might want to know how to build a library — this is where the first part of complexity kicks in. The complexity is not, though, in using libtool, but in making a proper library. So the question is “do you really want to use libtool?”

Let’s start from a fundamental rule: if you’re not going to install a library, you don’t want to use libtool. Some projects that only ever deal with programs still use libtool because that way they can rely on .la files for static linking. My suggestion is (very simply) not to rely on them as much as you can. Doing it this way means that you no longer have to care about using libtool for non-library-providing projects.

But in the case you are building said library, using libtool is important. Even if the library is internal only, trying to build it without libtool is just going to be a big headache for the packager that looks into your project (trust me, I’ve seen such projects). Before entering the details on how you use libtool, though, let’s look into something else: what you need to make sure you think about in your library.

First of all, make sure to have a unique prefix for your public symbols, be they constants, variables or functions. You might also want to have one for symbols that you use within your library across different translation units — my suggestion in this example is going to be that symbols starting with foo_ are public, while symbols starting with foo__ are private to the library. You’ll soon see why this is important.

Reducing the number of symbols that you expose is not only a good performance consideration, but it also means that you avoid the off-chance of symbol collisions, which are a big problem to debug. So do pay attention.

There is another thing that you should consider when building a shared library and that’s the way the library’s ABI is versioned but it’s a topic that, in and by itself, takes more time to discuss than I want to spend in this post. I’ll leave that up to my full guide.

Once you have these details sorted out, you should start by slightly changing the configure.ac file from the previous post so that it initializes libtool as well:

AC_INIT([myproject], [123], [flameeyes@flameeyes.eu], [http://blog.flameeyes.eu/tag/autotoolsmythbuster])
AM_INIT_AUTOMAKE([foreign no-dist-gzip dist-xz])
LT_INIT

AC_PROG_CC

AC_OUTPUT([Makefile])

Now it is possible to provide a few options to LT_INIT for instance to disable by default the generation of static archives. My personal recommendation is not to touch those options in most cases. Packagers will disable static linking when it makes sense, and if the user does not know much about static and dynamic linking, they are better off getting everything by default on a manual install.

On the Makefile.am side, the changes are very simple. Libraries built with libtool have a different class than programs and static archives, so you declare them as lib_LTLIBRARIES with a .la extension (at build time this is unavoidable). The only real difference between _LTLIBRARIES and _PROGRAMS is that the former gets its additional links from _LIBADD rather than _LDADD like the latter.

bin_PROGRAMS = fooutil1 fooutil2 fooutil3
lib_LTLIBRARIES = libfoo.la

libfoo_la_SOURCES = lib/foo1.c lib/foo2.c lib/foo3.c
libfoo_la_LIBADD = -lz
libfoo_la_LDFLAGS = -export-symbols-regex '^foo_[^_]'

fooutil1_LDADD = libfoo.la
fooutil2_LDADD = libfoo.la
fooutil3_LDADD = libfoo.la -ldl

pkginclude_HEADERS = lib/foo1.h lib/foo2.h lib/foo3.h

The _HEADERS variable is used to define which header files to install and where. In this case, it goes into ${prefix}/include/${PACKAGE}, as I declared it a pkginclude install.

The use of -export-symbols-regex – further documented in the guide – ensures that only the symbols that we want to have publicly available are exported, and does so in an easy way.

This is about it for now — one thing that I haven’t added in the previous post, but which I’ll expand in the next iteration or the one after, is that the only command you need to regenerate autotools is autoreconf -fis and that still applies after introducing libtool support.
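
In other words, the full cycle after these changes is still just the usual sequence (sketched here for a plain user install):

autoreconf -fis
./configure
make && make install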

April 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Bitrot is accumulating, and while we've tried to keep kdepim-4.4 running in Gentoo as long as possible, the time is slowly coming to say goodbye. In effect this is triggered by annoying problems like these:

There are probably many more such bugs around, where incompatibilities between kdepim-4.4 and kdepimlibs of more recent releases occur or other software updates have led to problems. Slowly it's getting painful, and definitely more painful than running a recent kdepim-4.10 (which has in my opinion improved quite a lot over the last major releases).
Please be prepared for the following steps:
  • end of April 2013, all kdepim-4.4 packages in the Gentoo portage tree will be package.masked 
  • end of May 2013, all kdepim-4.4 packages in the Gentoo portage tree will be removed
  • afterwards, we will finally be able to simplify the eclasses a lot by removing the special handling
We still have the kdepim-4.7 upgrade guide around, and it also applies to the upgrade from kdepim-4.4 to any later version. Feel free to improve it or suggest improvements.

R.I.P. kmail1.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.9 (April 18, 2013, 17:41 UTC)

First of all, py3status is on PyPI! You can now install it with the simple and usual:

$ pip install py3status

This new version features my first pull request, from @Fandekasp, who kindly wrote a pomodoro module which helps practitioners of the technique by displaying a counter on their bar. I also fixed a few glitches in module injection and some documentation.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I’ve been asked over on Twitter whether I had a recommendation for an easy, one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, with the name autotools, we include quite a bit of different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two: autoconf and automake. While you could use the former without the latter, you really don’t want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you’ll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting and I’m afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [flameeyes@flameeyes.eu], [http://blog.flameeyes.eu/tag/autotoolsmythbuster])
AM_INIT_AUTOMAKE([foreign no-dist-gzip dist-xz])

AC_PROG_CC

AC_OUTPUT([Makefile])

Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is being told the name and version of the project, the place to report bugs, and an URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; even though I found at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise to the reader. Finally, you’ve got to tell it to output Makefile (it’ll use Makefile.in, but we’ll create Makefile.am instead soon).

To build a program, you need then to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose sources are by default hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a piece of documentation. Yes, this really is enough for a very basic Makefile.am.

If you were to have two programs, hellow and hellou, and a convenience library between the two you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step), and with the new LTO options it should improve the optimization as well.

As you can see, this is really easy at this basic level… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this in Autotools Mythbuster and update the ebook with an “easy how to” as an appendix.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB v2.4.2 released (April 18, 2013, 10:53 UTC)

After the security-related bumps of the previous releases over the last few weeks, it was about time 10gen released a 2.4.x fixing the following issues:

  • Fix for upgrading sharded clusters
  • TTL assertion on replica set secondaries
  • Several V8 memory leak and performance fixes
  • High volume connection crash

I guess everything listed above would have affected our cluster at work, so I’m glad we’ve been patient in following up on this release :) See the changelog for details.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
I’ve been in Australia for two months (April 18, 2013, 08:05 UTC)

Well, the title says it. I’ve now been here for two months. I’m working at Skydive Maitland, which is 40 minutes from the coast and 2+ hours from Sydney. So far, I’ve broken even on my Australian travel/living expenses AND I’m skydiving 3-4 days a week, what could be better? I did 99 jumps in March; normally I do 400 per year. Australia is pretty nice, it is easy to live here and there is plenty to see, but it is hard to get places since the country is so big and I need a few days’ break to go someplace.

How did I end up here? I knew I would go to Australia at some point during my trip since I would be passing by and it is a long way from home. (Sidenote: of all the travelers at hostels in Europe, about 40-50% that I met were Aussie.) In December, I bought my right to work in Australia by getting a working holiday visa. That required $270 and 10 minutes to fill out a form on the internet; overnight I had my approval. So, that was settled: I could now work for 12 months in Australia and show up there within a year. I knew I would be working in Australia because it is a rather expensive country to live and travel in. I thought about picking fruit in an orchard since they always hire backpackers, but skydiving sounded more fun in the end (of course!). So, in January, I emailed a few dropzones stating that I would be in Australia in the near future and looking for work. Crickets… I didn’t hear back from anyone. Fair enough, most businesses will have adequate staffing in the middle of the busy season. But one place did get back to me some weeks later. Then it took one Skype convo to come to a friendly agreement, and I was looking for flights right after. Due to some insane price scheming, there was one flight in two days that was 1/2 the price of the others (thank you skyscanner.net). That sealed my decision, and I was off…

Looking onward: full-time instructor for March and April, then part-time in May and June so I can see more of Australia. I have a few road trips in the works, I just need my own vehicle to make that happen. Working on it. After Australia, I’m probably going to Japan or SE Asia like I planned.

Since my sister already asked: yes, I do see kangaroos nearly every day…

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : streets (April 18, 2013, 06:02 UTC)

(photos: 05440019, 05440016, 05440020, 000018)

April 17, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Bundling libraries for trouble (April 17, 2013, 12:01 UTC)

You might remember that I’ve been very opinionated against bundling libraries and, to a point, static linking of libraries for Gentoo. My reasons have been mostly geared toward security, but there have been a few more instances I wrote about of problems with bundled libraries and stability, for instance the moment when you get symbol collisions between a bundled library and a different version of said library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it’s much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, there are very few people speaking against them outside your average distribution speaker.

One such rare gem came from Steve McIntyre a few weeks ago, and it actually makes two different topics I wrote about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly code for performance-critical code, which would have to be ported for the new 64-bit ARM architecture (Aarch64). And this has mostly reminded me of x32.

In many ways, there are so many problems in common between Aarch64 and x32, and they mostly gear toward the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture but is not identical. The biggest difference, apart from the implementations themselves, is in the way the two have been conceived: as I said before, Intel’s public documentation for the ABI’s inception noted explicitly that it was designed for closed systems, rather than open ones (the definition of open or closed system has nothing to do with open- or closed-source software, and has to be found more in the expectations of what users will be able to add to the system). The recent stretching of x32 onto open system environments is, in my opinion, not really a positive thing, but if that’s what people want …

I think Steve’s report is worth a read for those who are interested in seeing what it takes to introduce a new architecture (or ABI), and in particular for those who maintained that my complaining about x32 breaking assembly code all over the place was a moot point — people with a clue about how GCC works know that sometimes you cannot get away with its optimizations and you actually need to handwrite code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where the handwritten assembly gets imported due to bundling and direct inclusion… this tends to be relatively common because handwritten assembly is usually tied to performance-critical code… which for many is the same code you bundle because a dynamic link is “not fast enough” — I disagree.

So anyway, give a read to Steve’s report, and then compare with some of the points made in my series of x32-related articles and tell me if I was completely wrong.

April 16, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : chinatown (April 16, 2013, 20:41 UTC)

(photos: 05440005, 05440002)

Jeremy Olexa a.k.a. darkside (homepage, bugs)
Sri Lanka in February (April 16, 2013, 06:16 UTC)

I wrote about how I ended up in Sri Lanka in my last post, here. I ended up with a GI sickness during my second week, from a bad meal or water, and it spoiled the last week that I was there, but I had my own room, a bathroom, a good book, and a resort on the beach. Overall, the first week was fun: teaching English, living in a small village and being immersed in the culture staying with a host family. Hats off to volunteers that can live there long term. I was craving “western culture” after a short time. I didn’t see as much as I wanted to, like the wild elephants, Buddhist temples or surf lessons. There will be other places or times to do that stuff though.

Sri Lanka pics

April 15, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

It is now over a week since the announcement of Blink, a rendering engine for the Chromium project.

I hope it will be useful to provide links to the best articles about it, ones with good, technical content.

Thoughts on Blink from HTML5 Test is a good summary of the history of Chrome and WebKit, and puts this recent announcement in context. For even more context (nothing about Blink) you can read Paul Irish's excellent WebKit for Developers post.

Peter-Paul Koch (probably best known for quirksmode.org) has good articles about Blink: Blink and Blinkbait.

I also found it interesting to read Krzysztof Kowalczyk's Thoughts on Blink.

Highly recommended Google+ posts by Chromium developers:

If you're interested in the technical details or want to participate in the discussions, why not follow blink-dev, the mailing list of the project?

Gentoo at FOSSCOMM 2013 (April 15, 2013, 19:03 UTC)

What? FOSSCOMM 2013

Free and Open Source Software COMmunities Meeting (FOSSCOMM) 2013

When? 20th - 21st April 2013

Where? Harokopio University, Athens, Greece

Website? http://hua.fosscomm.gr

FOSSCOMM 2013 is almost here, and Gentoo will be there!

We will have a booth with Gentoo promo stuff, stickers, flyers, badges, live DVDs and much more! Whether you're a developer, user, or simply curious, be sure to stop by. We are also going to represent Gentoo in a round table with other FOSS communities. See you there!

Pavlos Ratis contributed the draft for this announcement.

Rolling out systemd (April 15, 2013, 10:43 UTC)


We started to roll out systemd today.
But don’t panic! Your system will still boot with openrc and everything is expected to be working without troubles.
We are aiming to support both init systems, at least for some time (a long time, I believe), and having systemd replace udev (note: systemd is a superset of udev) is a good way to make systemd users happy in Sabayon land. From my testing, the slowest part of the boot is now the genkernel initramfs, in particular the modules autoload code which, as you may expect, I’m going to try to improve.

Please note that we are not willing to accept systemd bugs yet, because we’re still fixing up service units and adding the missing ones, the live media scripts haven’t been migrated and the installer is not systemd aware. So, please be patient ;-)

Having said this, if you are brave enough to test systemd out, you’re lucky: in Sabayon, it’s just 2 commands away, thanks to eselect-sysvinit and eselect-settingsd. And since I expect those brave people to know how to use eselect, I won’t waste more time on them now.


April 14, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.8 (April 14, 2013, 20:33 UTC)

I went on a coding frenzy to implement most of the stuff I was not happy with in py3status so far. Here comes py3status, code name: San Francisco (more photos to come).

PEP8

I always had the habit of using tabs to indent my code. @Lujeni pointed out that this is not the PEP8 recommended method and that we should start respecting more of it in the near future. Well, he’s right, and I guess it was time to move on, so I switched to using spaces and corrected a lot of other coding style issues, which took my code from a pylint score of around -1/10 to around 9.5/10!

Threaded modules’ execution

This was the major thing I was not happy with: when a user-written module was executed for injection, the time it took to get its response would cause py3status to stop updating the bar. This means that if you had to make a database call to get some stuff you need displayed on the bar and it took 10 seconds, py3status would sleep for those 10 seconds before updating the bar! This behavior could cause some delays in the clock ticking, for example.

I decided to offload all of the modules’ detection and execution to a thread to solve this problem. To be frank, this also helped rationalize the code. No more delays and cleaner handling is what you get: modules will append themselves to the bar whenever they finish, however long they take to execute!

Python3

It was about time the examples available in py3status also worked with Python 3.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

For a long time, I've been extraordinarily happy with both NVIDIA graphics hardware and the vendor-supplied binary drivers. Functionality, stability, speed. However, things are changing and I'm frustrated. Let me tell you why.

Part of my job is to do teaching and presentations. I have a trusty thinkpad with a VGA output which can in principle supply about every projector with a decent signal. Most of these projectors do not display the native 1920x1200 resolution of the built-in display. This means, if you configure the second display to clone the first, you will end up seeing only part of the screen. In the past, I solved this by using nvidia-settings and setting the display to a lower resolution supported by the projector (nvidia-settings told me which ones I could use) and then letting it clone things. Not so elegant, but everything worked fine - and this amount of fiddling is still something that can be done in the front of a seminar room while someone is introducing you and the audience gets impatient.

Now consider my surprise when suddenly after a driver upgrade the built-in display was completely glued to the native resolution. Only setting possible - 1920x1200. The first time I saw that I was completely clueless what to do; starting the talk took a bit longer than expected. A simple, but completely crazy solution exists; disable the built-in display and only enable the projector output. Then your X session is displayed there and resized accordingly. You'll have to look at the silver screen while talking, but that's not such a problem. A bigger pain actually is that you may have to leave the podium in a hurry and then have no video output at all...

Now, googling. Obviously a lot of other people have the same problem as well. Hacks like this one just don't work, I've ended up with nice random screen distortions. Here's a thread on the nvidia devtalk forum from where I can quote, "The way it works now is more "correct" than the old behavior, but what the user sees is that the old way worked and the new does not." It seems like now nVidia expects that each application handles any mode switching internally. My usecase does not even exist from their point of view. Here's another thread, and in general users are not happy about it.

Finally, I found this link where the following reply is given: "The driver supports all of the scaling features that older drivers did, it's just that nvidia-settings hasn't yet been updated to make it easy to configure those scaling modes from the GUI." Just great.

Gentlemen, this is a serious annoyance. Please fix it. Soon. Not everyone is willing to read up on xrandr command line options and fiddle with ViewPortIn, ViewPortOut, MetaModes and other technical stuff. Especially while the audience is waiting.

April 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So it starts my time in Ireland (April 13, 2013, 19:58 UTC)

Today makes it a full week since I survived my move to Dublin. Word’s out on who my new employer is (but as usual, since this blog is personal and should not be tied to my employer, I’m not even going to name it), and I started the introductory courses. One thing I can be sure of: I will be eating healthily and compatibly with my taste — thankfully, chicken, especially spicy chicken, seems to be available everywhere in Ireland, yai!

I have spent almost all my life in Venice, and never stayed away from it for long periods of time, with the exception of last year, most of which, as you probably know, I spent in Los Angeles — 2012 was a funny year like that: I had never partied for the new year, but on 31st December 2011 I was at a friend’s place with friends, after which some of us ended up leaving at around 3am… for the first time in my life I ended up sleeping on a friend’s couch. Then it was time for my first ever week-long vacation with the same group of friends in the Venetian Alps.

With this premise, it’s obvious that Dublin is looking a bit alien to me. It helps that I’ve spent a few weeks over the past years in London, so at least a few of the customs shared between the British and the Irish I was already used to — they probably don’t like being reminded that they share some customs with the British, but there it goes. But it’s definitely more similar to Italy than Los Angeles.

Funny episode of the day was me going to Boots and, after searching the aisles for a while, asking one of the workers if they kept hydrogen peroxide, which I used almost daily both in Italy and the US as a disinfectant – I cut or scrape very easily – and after being looked at in a very strange way I was informed that it is not possible to sell it anymore in Ireland… I’d guess it has something to do with its use in the London bombings of ‘05. Luckily they didn’t call the police.

I have to confess though that I like the restaurants in the touristy, commercial areas better than those in the upscale modern new districts — I love Nando’s for instance, which is nowhere near Irish, but I love its spiciness (and this time around I could buy the freaking salt!). But also most pubs have very good chicken.

I still don’t have a permanent place though. I need to look into one soonish I suppose, but the job introduction took the priority for the moment. Even though, if the guests in the next apartment are going to throw another party at 4.30am I might decide to find something sooner, rather than later.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

GnuPG is an excellent tool for encryption and signing. However, while breaking encryption or forging signatures with a large key size is likely somewhere between painful and impossible even for agencies with a significant budget, all this is only ever as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here, but someone hacking your computer, installing a keylogger and grabbing the key file is more likely. While there are no preventive measures that work for all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses on the way. I'm writing up my personal experiences here, as a kind of guide. Also, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts since that shop is referred to in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit, see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found was the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the code contributors to the smartcard part of GnuPG. In general there are two types of readers, those with a stand-alone pinpad and those without. The extra pinpad ensures that for normal operations like signing and encryption the PIN for unlocking the keys never enters the computer itself - so without tampering with the reader hardware it is pretty hard to sniff. I bought an SCM SPG532 reader, one of the first devices supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now, you'll want to activate the USE flag "smartcard" and maybe "pkcs11", and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new emerge.
Several different standards for card reader access exist. One in particular is the USB standard for integrated circuit card interface devices, CCID for short; the driver for that one is built directly into GnuPG, and the SCM SPG532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; that will be used by GnuPG if the built-in support fails, but it requires a daemon to be running (pcscd, just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.
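A quick sketch of the Gentoo side of the above (package.use and the OpenRC commands are the standard ones; skip the pcscd lines if the built-in CCID driver works for you):
echo "app-crypt/gnupg smartcard pkcs11" >> /etc/portage/package.use
emerge -1 app-crypt/gnupg
# only needed if you go the pcsc-lite route:
rc-update add pcscd default
/etc/init.d/pcscd start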
These drivers do not need much (or any) configuration and should in principle work out of the box. Testing is easy: plug in the reader, insert a card, and issue the command
gpg --card-status
If it works, you should see a message about (among other things) manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check is then (especially for CCID) if the device permissions are OK; just repeat above test as root. If you can now see your card, you know you have permission trouble.
Fiddling with the device file permissions was a serious pain, since all online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore, the other does not work alone and tries to do things in unnecessarily complicated ways.) At some point in time I just gave up on things like user groups and told udev to hardwire the device to my user account: I created the following file into /etc/udev/rules.d/gnupg-ccid.rules:
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", OWNER:="huettel", MODE:="600"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", OWNER:="huettel", MODE:="600"
With similar settings it should in principle be possible to solve all the permission problems. (You may want to change the USB id's and the OWNER for your needs.) Then, a quick
udevadm control --reload-rules
followed by unplugging and re-plugging the reader. Now you should be able to check the contents of your card.
If you still have problems, check the following: for accessing the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after removing a card. Just kill it (you need SIGKILL)
killall -9 scdaemon
and try again accessing the card afterwards; the daemon is re-started by gnupg. A lot of improvements in smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.
Here's what a successful card status command looks like on a blank card:
huettel@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
huettel@pinacolada ~ $

That's it for now, part 2 will be about setting up the basic card data and gnupg functions, then we'll eventually proceed to ssh and pam...



April 11, 2013
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio in GSoC 2013 (April 11, 2013, 11:34 UTC)

That’s right — PulseAudio will be participating in the Google Summer of Code again this year! We had a great set of students and projects last year, and you’ve already seen some of their work in the last release.

There are some more details on how to get involved on the mailing list. We’re looking forward to having another set of smart and enthusiastic new contributors this year!

p.s.: Mentors and students from organisations (GStreamer and BlueZ, for example), do feel free to get in touch with us if you have ideas for projects related to PulseAudio that overlap with those other projects.

April 10, 2013
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
GCC 4.8 - building everything? (April 10, 2013, 13:49 UTC)

The last few days I've spent a few hundred CPU-hours building things with gcc 4.8. So far, alphabetically up to app-office/, it's been really boring.
The number of failing packages is definitely lower than with 4.6 or 4.7. And most of the current troubles are unrelated - for example the whole info page generation madness.
At the current rate of filing and fixing bugs we should be able to unleash this new version on the masses really soon - maybe in about a month? (Or am I just too optimistic?)

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Forking ebuilds (April 10, 2013, 00:14 UTC)

Here’s a response to an email thread I sent recently. This was on a private alias but I’m not exposing the context or quoting anybody, so I’m not leaking anything but my own opinion which has no reason to be secret.

GLEP39 explicitly states that projects can be competing. I don’t see how you can exclude competing ebuilds from that since nothing prevents anybody from starting a project dedicated to maintaining an ebuild.

So, if you want to prevent devs from pushing competing ebuilds to the tree you have to change GLEP 39 first. No arguing or “hey all, hear my opinion” emails on whatever list will be able to change that.

Some are against forking ebuilds and object to the duplication of effort and lack of manpower. I will bluntly declare those people shortsighted. Territoriality is exactly what prevents us from getting more manpower. I’m interested in improving package X but developer A who maintains it is an ass and won’t yield on anything. At best I’ll just fork it in an overlay (with all the issues that having a package in an overlay entails, i.e. no QA, it’ll die pretty quickly, etc…), at worst I’m moving to Arch, or Exherbo, or else… What have we gained by not duplicating effort? We have gained negative manpower.

As long as forked ebuilds can cohabit peacefully in the tree using say a virtual (note: not talking about the devs here but about the packages) we should see them as progress. Gentoo is about choice. Let consumers, i.e. users and devs depending on the ebuild in various ways, have that choice. They’ll quickly make it known which one is best, at which point the failing ebuild will just die by itself. Let me say it again: Gentoo is about choice.

If it ever happened that devs of forked ebuilds could not cohabit peacefully on our lists or channels, then I would consider that a deliberate intention of not cooperating. As with any deliberate transgression of our rules if I were devrel lead right now I would simply retire all involved developers on the spot without warning. Note the use of the word “deliberate” here. It is important we allow devs to make mistakes, even encourage it. But we are adults. If one of us knowingly chooses to not play by the rules he or she should not be allowed to play. “Do not be an ass” is one of those rules. We’ve been there before with great success and it looks like we are going to have to go there again soon.

There you have it. You can start sending me your hate mail in 3… 2… 1…

April 09, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So there, I'm in Ireland (April 09, 2013, 21:50 UTC)

Just wanted to let everybody know that I’m in Ireland: I landed at Dublin Airport on Saturday, and have been roaming around the city for a few days now. Time looks like it’s running faster than usual, so I haven’t had much time to work on Gentoo stuff.

My current plan is to work, by the end of the week, on a testing VM, as there’s an LVM2 bug fix that I owe Enrico, and possibly work on the Autotools Mythbuster guide as well; there’s work to do there.

But today, I’m a bit too tired to keep going, it’s 11pm… I’ll doze off!

April 08, 2013
What’s cookin’ on the BBQ (April 08, 2013, 16:27 UTC)

While Spring has yet to come here, the rainy days are giving me some time to think about the future of Sabayon and summarize what’s been done during the last months.

donations

As far as I can see, donations are going surprisingly well. The foundation now has enough money (see the pledgie.com campaign at sabayon.org) to guarantee 24/7 operations, new hardware purchases and travel expenses for several months. Of course, the more the better (paranoia mode on), but I cannot really complain, given that’s our sole source of funds. Here is a list of stuff we’ve been able to buy during the last year (including prices; we’re in the EU, prices in the US are much lower, sigh):

  • one Odroid X2 (for Sabayon on ARM experiments) – 131€
  • one PandaBoard ES (for Sabayon on ARM experiments) – 160€
  • two 2TB Seagate Barracuda HDDs (one for Joost’s experiments, one for the Entropy tinderbox) – 185€
  • two 480GB Vertex3 OCZ SSDs for the Entropy tinderbox (running together with the Samsung 830 SSDs in a LVM setup) – 900€
  • one Asus PIKE 2008 SAS controller for the Entropy tinderbox – 300€
  • other 16GB of DDR3 for the Entropy tinderbox (now running with 64G) – 128€
  • mirror.de.sabayon.org @ hetzner.de maintenance (33€/mo for 1 year) – 396€
  • my personal FOSDEM 2013 travel expenses – 155€

Plus, travel expenses to data centers whenever there is a problem that cannot be fixed remotely. That’s more or less from 40€ to 60€ each depending on the physical distance.
As you may understand, this is just a part of the “costs”, because the time donated by individual developers is not accounted for there, and I believe that it’s much more important than a piece of silicon.

monthly releases, entropy

Besides the money part, I spent the past months on Sabayon 11 (of course) and on advancing the automation agenda for 2013. Ideally, I would like to have stable releases automatically produced and tested monthly, and eventually pushed to mirrors. This required me to migrate to a different bittorrent tracker, one that scrapes a directory containing .torrents and publishes them automatically: you can see the outcome at http://torrents.sabayon.org. Furthermore, a first, not yet advertised, set of monthly ISO images is available on our mirrors in the iso/monthly/ sub-directory. You can read more about them here. This may (eheh) indicate that the next Sabayon release will be versioned something like 13.05, who knows…
In the Entropy camp, nothing much has changed besides the usual set of bug fixes, little improvements and the migration to an .ini-like repository configuration file syntax for both the Entropy Server and Client modules, see here. You may start realizing that all the good things I do are communicated through the devel mailing list.

leh systemd

I spent a week working on a Sabayon systemd system to see how it works and performs compared to openrc. Long story short, I am about to arrange some ideas on making the systemd migration come true at some point in the (near) future. Joost and I are experimenting with a private Entropy repository (thus chroot) that’s been migrated to systemd, from openrc. While I don’t want to start yet another flamewar about openrc vs systemd, I do believe in science, facts and benchmarks. Even though I don’t really like the vertical architecture of systemd, I am starting to appreciate its features and most importantly, its performance. The first thing I would like to sort out is to be able to switch between systemd and openrc at runtime, this may involve the creation of an eselect module (trivial) and patching some ebuilds. I think that’s the best thing to do, if we really want to design and deploy a migration path for current openrc users (I would like to remind people that Gentoo is about choice, after all). If you’re a Gentoo developer that hasn’t been bugged by me yet, feel free to drop a line to lxnay@g.o (expand the domain, duh!) if you’re interested.


April 07, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
FOSDEM 2013 & etc-update (April 07, 2013, 16:00 UTC)

FOSDEM 2013

FOSDEM 2013

I started writing this post after FOSDEM, but never actually managed to finish it. But as I plan to blog about something again “soon”, I wanted to get this one out first. So let’s start with FOSDEM – it is an awesome event and every open source hacker is there unless they have some really huge reason not to come (like being dead, in prison or locked down in psychiatric care). I was there together with a bunch of openSUSE/SUSE folks. It was a lot of fun and we even managed to get some work done during the event. So how was it?

FOSDEM

We had a lot of fun on the way already. You know, every year, we rent a bus just for us and we go from Nuremberg to Brussels and back all together by bus. And we talk and drink and talk and drink some more…. So although it sounds crazy – 8 hours drive – it’s not as bad as it sounds.

etc-update

What the hack is etc-update and what does it have to do with me, openSUSE or FOSDEM? Isn’t it a Gentoo tool? Yes, it is. It is a Gentoo tool (actually part of Portage, the Gentoo package manager) that is used in Gentoo to merge updates to configuration files. When you install a package, Portage is not going to overwrite the configuration files that you have spent days and nights tuning. It will create a new file with the new upstream configuration and it is up to you to merge them. But you know, rpm does the same thing. In almost all cases rpm is not going to overwrite your configuration file, but will install the new one as config_file.rpmnew. And it is up to you to merge the changes. But it’s not fun: searching for all the files, comparing them manually and choosing what to merge and how. And here comes etc-update to the rescue ;-)

How does it work? Simple. You need to install it (I will speak about that later) and run it. It’s a command line tool and it doesn’t need any special parameters. All you need to do is run etc-update as root (to actually be able to do something with these files). And the result?

# etc-update 
Scanning Configuration files...
The following is the list of files which need updating, each
configuration file is followed by a list of possible replacement files.
1) /etc/camsource.conf (1)
2) /etc/ntp.conf (1)
Please select a file to edit by entering the corresponding number.
              (don't use -3, -5, -7 or -9 if you're unsure what to do)
              (-1 to exit) (-3 to auto merge all files)
                           (-5 to auto-merge AND not use 'mv -i')
                           (-7 to discard all updates)
                           (-9 to discard all updates AND not use 'rm -i'):

What I usually do is that I select configuration files I do care about, review changes and merge them somehow and later just use -5 for everything else. It looks really simple, doesn’t it? And in fact it is!

Somebody asked a question on how to merge updates of configuration files while visiting our openSUSE booth at FOSDEM. When I learned that from Richard, we talked a little bit about how easy it is to do something like that and later during one of the less interesting talks, I took this Gentoo tool, patched it to work on rpm distributions, packaged it and now it is in Factory and it will be part of openSUSE 13.1 ;-) If you want to try it, you can get it either from my home project – home:-miska-:arm (even for x86 ;-) ) or from utilities repository.

Hope you will like it and that it will make many sysadmins happy ;-)

April 06, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.7 (April 06, 2013, 15:30 UTC)

Some cool bugfixes happened since v0.5 and py3status broke 20 GitHub stars; I hope people are enjoying it.

changelog

  • clear the user class cache when receiving SIGUSR1
  • specify default folder for user defined classes
  • fix time transformation thx to @Lujeni
  • add Pingdom checks latency example module
  • fix issue #2 reported by @Detegr which caused the clock to drift on some use cases

April 04, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

If you’re using dev-db/postgresql-server, update now.

CVE-2013-1899 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13}
------------------------------------------------------------
A connection request containing a database name that begins
with "-" may be crafted to damage or destroy files within a server's data directory.

CVE-2013-1900 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13,8.4.17}
-------------------------------------------------------------------
Random numbers generated by contrib/pgcrypto functions may be easy for another
database user to guess.

CVE-2013-1901 <dev-db/postgresql-server-{9.2.4,9.1.9}
-----------------------------------------------------
An unprivileged user can run commands that could interfere with in-progress backups.
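
On Gentoo, updating is the usual one-liner; a sketch, pick whichever slot/version you actually run:

emerge --sync && emerge -u1 dev-db/postgresql-server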

April 03, 2013
Matthew Thode a.k.a. prometheanfire (homepage, bugs)

Disclaimer

  1. Keep in mind that ZFS on Linux is supported upstream, for differing values of support
  2. I do not care much for hibernate, normal suspending works.
  3. This is for a laptop/desktop, so I choose multilib.
  4. If you patch the kernel to add in ZFS support directly, you cannot share the binary; the CDDL and GPL-2 are not compatible in that way.

Initialization

Make sure your installation media supports ZFS on Linux and can install whatever bootloader is required (UEFI needs media that supports it as well). I uploaded an ISO that works for me at this link. Live DVDs newer than 12.1 should also have support, but the previous link has the stable version of zfsonlinux. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.

Formatting

I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary. Most newer drives are 4k advanced format drives; because of this you need ashift=12, and some/most newer SSDs need ashift=13. Setting compression to lz4 will make your pool incompatible with upstream (Oracle) ZFS; if you want to stay compatible, just set compression=on.
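
For example, a layout matching the assumptions above could be created with sgdisk (sizes and type codes here are only examples):

sgdisk -n 1:0:+512M -t 1:8300 /dev/sda    # /boot, default start is already 1M-aligned
sgdisk -n 2:0:0     -t 2:8300 /dev/sda    # cryptroot, the rest of the disk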

General Setup

#setup encrypted partition
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=lz4 rpool/ROOT
#rootfs
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
#portage
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
#homedirs
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
wget ftp://gentoo.osuosl.org/pub/gentoo/releases/amd64/autobuilds/current-stage3-amd64-hardened/stage3-amd64-hardened-*.tar.bz2
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /tmp/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources                #or hardened-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-0.6.1/work/spl-0.6.1 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-0.6.1/work/zfs-zfs-0.6.1/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel, making sure to enable dm-crypt support
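
The exact options depend on your hardware, but for the encrypted root the kernel needs at least device-mapper support and the crypto algorithms used by the luksFormat command above; a sketch of the relevant .config fragment (my own summary, not from the original post):

#device-mapper and dm-crypt, plus the ciphers used for the LUKS container
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_SHA512=y
#and, only if you patched SPL/ZFS into the tree as above:
CONFIG_SPL=y
CONFIG_ZFS=y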

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild
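
module-rebuild keeps a database of module packages and re-emerges them against the current kernel tree; the only command you normally need after a kernel upgrade is the one the genkernel callback further down already uses:

#rebuild out-of-tree modules (spl, zfs) against the kernel in /usr/src/linux
module-rebuild rebuild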

#install the SPL and ZFS userland; zfs pulls in spl automatically
#(if you already added the kernel-builtin entries above, there is no need to add them again)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown
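
A quick check that the service really ended up in the right runlevel (my own addition):

#zfs should now show up in the boot runlevel
rc-update show boot | grep zfs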

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal. Your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2; libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0
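
How the bootloader itself gets onto the disk is the standard procedure; a minimal sketch for the BIOS/GPT case (my own addition; UEFI installs differ, and if you generate grub.cfg with grub2-mkconfig, make sure the resulting entry carries the same real_root/crypt_root/dozfs parameters as the lines above):

#install GRUB2 to the disk; the menu entry can simply be written by hand using the two lines above
grub2-install /dev/sda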

In /etc/fstab, make sure the BOOT, ROOT and SWAP lines are commented out, then finish the install.
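
Assuming the stock stage3 fstab still has its /dev/BOOT, /dev/ROOT and /dev/SWAP placeholder lines (an assumption on my part; check your file first), commenting them out can be done in one go:

#comment out the placeholder lines; the ZFS datasets mount themselves
sed -i -e '/^\/dev\/BOOT/s/^/#/' -e '/^\/dev\/ROOT/s/^/#/' -e '/^\/dev\/SWAP/s/^/#/' /etc/fstab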

You should now have a working encrypted ZFS install.

April 02, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The WebP experiment (April 02, 2013, 17:58 UTC)

You might have noticed over the last few days that my blog underwent some surgery, and in particular that, even now, on some browsers the home page does not really look all that good. In particular, I’ve removed all but one of the background images and replaced them with CSS3 linear gradients. Users browsing the site with the latest version of Chrome, or with Firefox, will have no problem and will see a “shinier” and faster website; others will see something “flatter”. I’m debating whether I want to provide them with a better-looking fallback or not; for now, not.

But this was also a plan B; the original plan I had in mind was to leverage HTTP content negotiation to provide WebP variants of the images on the website. This was a win-win situation because, however ludicrous it sounded when WebP was announced, it turns out that with its dual mode, lossy and lossless, it can in one case or the other outperform both PNG and JPEG without a substantial loss of quality. In particular, lossless behaves like a charm with “art” images, such as the CC logos or my diagrams, while lossy works great for logos like the Autotools Mythbuster one you see on the sidebar, or the (previous) gradient images you’d see on backgrounds.

So my obvious instinct was to set up content negotiation; I’ve used it before for multiple-language websites, and I expected it to work for multiple image types as well, as it’s designed to… but after setting it all up, it turns out that most modern web browsers still do not support WebP at all… and they don’t handle content negotiation as intended. For this to work, we need either of two options.

The first, and best, option would be for browsers to only Accept the image formats they support, or at least to prefer them. This is what Opera for Android does: Accept: text/html, application/xml;q=0.9, application/xhtml+xml, multipart/mixed, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1. That seems to be the only browser doing it properly, though. In particular, in this listing you’ll see that it supports PNG, WebP, JPEG, GIF and X bitmap, and then accepts whatever else with a lower preference. If WebP were not in the list, even if the server gave it a higher preference, it would not be sent to the client. Unfortunately, this is not going to work, as most browsers send Accept: */* without explicitly providing the list of supported image formats. This includes Safari, Chrome, and MSIE.

Point of interest: Firefox does explicitly prefer one image format over the others: PNG.

The other alternative is for the server to default to the “classic” image formats (PNG, JPEG, GIF) and expect the browsers that support WebP to prioritize it over the other image formats. Again, this is not the case; as shown above, Opera lists it but does not prioritize it, and Firefox prioritizes PNG over anything else, making no special exception for WebP.
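
Incidentally, this is easy to observe from a terminal; a quick sketch with curl (my own, not from the post; the URL is just a placeholder), pretending to be a client that advertises WebP support and checking which content type the server negotiates:

curl -sI -H 'Accept: image/webp,image/png;q=0.9,*/*;q=0.5' http://example.com/image.png | grep -i '^content-type'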

Issues are open at Chrome and Mozilla to improve the support, but the fixes haven’t reached mainstream yet. Google’s own suggested solution is to use mod_pagespeed instead, but this module (which I already named in passing in my post about unfriendly projects) is doing something else: it changes the served content on the fly, based on the reported User-Agent.

Given that I’ve spent some time on user agents, I would say I have enough experience to call this a huge Pandora’s box. If I already have trouble with some barely-maintained browsers reporting themselves as Chrome to fake their way into sites that check the user agent field in JavaScript, you can guess how many of those are going to actually support the features that PageSpeed thinks they support.

I’m going to come back to PageSpeed in another post; for now I’ll just say that WebP has the numbers to become the next-generation format out there, but unless browser developers, as well as web app developers, start to get their act together, we’re going to have hacks over hacks over hacks for years to come… Currently, my blog is using a CSS3 feature with the standardized syntax; not all browsers understand it, and they’ll see a flat website without gradients. I don’t care, and I won’t start adding workarounds for that just because (although I might use SCSS, which will fix it for Safari)… new browsers will fix the problem, so just upgrade, or use a sane browser.