
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Christopher Harvey
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matthias Geerdsen
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thilo Bangert
. Thomas Anderson
. Thomas Kahle
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Victor Ostorga
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
October 25, 2012, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

October 25, 2012
Markos Chandras a.k.a. hwoarang (homepage, stats, bugs)
Gentoo Recruitment: How do we perform? (October 25, 2012, 18:53 UTC)

A couple of days ago, Tomas and I gave a presentation at the Gentoo Miniconf. The subject of the presentation was to give an overview of the current recruitment process, how we are performing compared to previous years, and what other ways there are for users to help us improve our beloved distribution. In this blog post I am going to get into some details that I did not have time to address during the presentation regarding our recruitment process.

 

Recruitment Statistics

Recruitment Statistics from 2008 to 2012

Looking at the graph above, two things are obvious. First of all, the number of people who want to become developers has been decreasing every year. Second, we have a significant number of people who did not manage to become developers. Let me express my personal thoughts on these two things.

For the first one, my opinion is that these numbers are directly related to Gentoo’s reputation and its “infiltration” among power users. It is not a secret that Gentoo is not as popular as it used to be. Some people think this is because of the quality of our packages, or because of the frequency with which we cause headaches to our users. Other people think that the “I want to compile every bit of my linux box” trend belongs to the past, and that nowadays people want to spend less time maintaining/updating their boxes and more time doing actual work. Either way, for the past few years we have been losing people, or to state it better, we are not “hiring” as many as we used to. Ignoring those who did not manage to become developers, we must admit that the absolute numbers are not in our favor. One may say that 16 developers for 2011–2012 is not bad at all, but we aim for the best, right? What bothers me the most is not the number of people we recruit, but that this number has been falling constantly for the last 5 years…

As for the second observation, we see that, every year, around 4–5 people give up and decide not to become developers after all. Why is that? The answer is obvious. Our long, painful, exhausting recruitment process drives people away. From my experience, it takes about 2 months from the time your mentor opens your bug until a recruiter picks you up. This obviously kills someone’s motivation, makes them lose interest and get busy with other stuff, and they eventually disappear. We tried to improve this process by creating a webapp two years ago, but it did not work out well. So we are now back to square one. We really can’t afford to lose developers because of our recruitment process. It is embarrassing, to say the least.

Again, is there anything that can be done? Definitely yes. I’d say we need an improved or a brand new web application that will focus on two things:

1) make the review process between mentor <-> recruit easier

2) make the final review process between recruit <-> recruiter an enjoyable learning process

Ideas are always welcome. Volunteers and practical solutions even more so ;) In the meantime, I am considering using Google+ Hangouts for the face-to-face interview sessions with upcoming recruits. This should bring some fresh air to the process ;)

The entire presentation can be found here

October 24, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Munin, sensors and IPMI (October 24, 2012, 15:06 UTC)

In my previous post about Munin I said that I was still working on making sure that the async support would reach Gentoo in a way that actually worked. With version 2.0.7-r5 this is now largely the case, and it’s documented on the Wiki for you all to use.

Unfortunately, while testing it, I found out that one of the boxes I’m monitoring, the office’s firewall, was going crazy if I used the async spooled node, reporting fan speeds way too low (87 RPM) or way too high (300K RPM), with similar effects on the temperatures as well. This also seems to have caused the fans to go out of control and run constantly at their full 4K RPM instead of their usual 2K RPM. The kernel log showed that something was going wrong with the i2c access, which is what the sensors program relies on.

I started looking into the sensors_ plugin that comes with Munin, which I already knew a bit, as I had fixed it to match some of my systems before… and the problem is that for each box I was monitoring, it would have to execute sensors six times: twice for each graph (fan speed, temperature, voltages), once for config and once for fetching the data. And since there is no way to tell it to fetch only some of the data instead of all of it, it meant many transactions had to go over the i2c bus, all at the same time (when using munin async, the plugins are fetched in parallel). Understanding that the situation was next to unsolvable with the original code, and having one day “half off” at work, I decided to write a new plugin.

This time, instead of using the sensors program, I decided to just access /sys directly. This is quite a bit faster and allows you to pinpoint exactly what data you need to fetch. In particular, during the config step there is no reason to fetch the actual values, which saves many i2c transactions just there. While at it, I also made it a multigraph plugin instead of the old wildcard one, so that you only need to call it once, and it’ll prepare, serially, all the available graphs: in addition to those that were supported before, which included power (as it’s exposed by the CPUs on Excelsior), I added a few that I haven’t been able to try but that are documented by the hwmon sysfs interface, namely current and humidity.
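
To give an idea of what this direct access looks like, here is a minimal shell sketch of the same approach (not the actual plugin, which is in Perl; also note that depending on the kernel version, the attributes may live under hwmonX/device/ rather than hwmonX/):

#!/bin/sh
# Walk the hwmon class and print every temperature channel.
# Reading sysfs directly avoids the i2c round-trips caused by
# running `sensors` over and over.
for hwmon in /sys/class/hwmon/hwmon*; do
    name=$(cat "$hwmon/name" 2>/dev/null)
    for input in "$hwmon"/temp*_input; do
        [ -e "$input" ] || continue
        chan=${input##*/}; chan=${chan%_input}
        # hwmon exposes temperatures in millidegrees Celsius
        echo "$name/$chan: $(( $(cat "$input") / 1000 )) C"
    done
done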

The new plugin is available in the contrib repository (which I haven’t found a decent way to package yet) as sensors/hwmon and is still written in Perl. It’s definitely faster, has fewer dependencies, and is definitely more reliable, at least on my firewall. Unfortunately, there is one feature that is missing: sensors would sometimes report an explicit label for temperature data… but that’s entirely handled in userland. Since we’re reading the data straight from the kernel, most of those labels are lost. For drivers that do expose those labels, such as coretemp, they are used, though.

We also lose the ability to ignore values from the get-go, as I described before, but you can’t always win. You’ll have to ignore the graph data from the master instead, or find a way to tell the kernel not to report that data. The same is probably true for the names, although unfortunately…


[temp*_label] Should only be created if the driver has hints about what this temperature channel is being used for, and user-space doesn’t. In all other cases, the label is provided by user-space.

But I wouldn’t be surprised if it was possible to change that a tiny bit. Also, while it does forfeit some of the labeling that the sensors program does, I was able to make it nicer when anonymous data is present: it wasn’t so rare to have more than one temp1 value, as it was the first temperature channel for each of the (multiple) controllers, such as the Super I/O, ACPI Thermal Zone, and video card. My plugin outputs the controller and the channel name, instead of just the channel name.

After I completed and tested my hwmon plugin, I moved on to rewrite the IPMI plugin once more. If you remember the saga, I first rewrote the original ipmi_ wildcard plugin as freeipmi_, including support for the same wildcards as ipmisensor_, so that instead of using OpenIPMI (and gawk), it would use FreeIPMI (and awk). The reason was that FreeIPMI can cache SDR information automatically, whereas OpenIPMI does have support for that, but you have to handle it manually. The new plugin was also designed to work for virtual nodes, akin to the various SNMP plugins, so that I could monitor some of the servers we have in production where I can’t install Munin or FreeIPMI. I have replaced the original IPMI plugin, which I was never able to get working on any of my servers, with my version in Gentoo for Munin 2.0. I expect Munin 2.1 to come with the FreeIPMI-based plugin by default.

Unfortunately, like the sensors_ plugin, my plugin was calling the command six times per host, although this at least allows you to filter for the type of sensors you want to receive data for. And it became even worse when you have to monitor foreign virtual nodes. How did I solve that? I decided to rewrite it to be multigraph as well… but that was difficult to handle as a shell script, which means it’s now also written in Perl. The new freeipmi, non-wildcard, virtual-node-capable plugin is available in the same repository and directory as hwmon. My network switch thanks me for that.

Of course, unfortunately, the async node still does not support multiple hosts; that’s something for later on. In the meantime, though, it does spare me lots of grief, and I’m happy I took the time to work on these two plugins.

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Gentoo Miniconf 2012 (October 24, 2012, 11:07 UTC)

The Gentoo Miniconf is over now, but it was a great success. 30+ developers attended, and I met quite a few users too. Thanks to Theo (tampakrap) and Michal (miska) for organizing the event (and to the others who helped), and thanks to openSUSE for sponsoring and letting the Gentoo Linux guys hang out there. Thanks to the other sponsors too: Google, Aeroaccess, et al.

More pics at the Google+ event page.

It was excellent to meet all of you.

October 23, 2012
Launching Gentoo VMs on okeanos.io (October 23, 2012, 13:50 UTC)

Long time, no post.

For about a year now, I’ve been working at GRNET on its (OpenStack API compliant) open source IaaS cloud platform Synnefo, which powers the ~okeanos service.

Since ~okeanos is mainly aimed at the Greek academic community (and thus has restrictions on who can use the service), we set up a ‘playground’, ‘bleeding-edge’ installation (okeanos.io) of Synnefo, where anyone can get a free trial account, experiment with the Web UI, and have fun scripting with the kamaki API client. So, you get to try the latest features of Synnefo, while we get valuable feedback. Sounds like a fair deal. :)

Unfortunately, being the only one on our team who actually uses Gentoo Linux, up until recently Gentoo VMs were not available. So, a couple of days ago I decided it was about time to get a serious distro running on ~okeanos (the load on our servers had been ridiculously low, after all :P ). For future reference, and in case anyone wants to upload their own image to okeanos.io or ~okeanos, I’ll briefly describe the steps I followed.

1) Launch a Debian-base (who needs a GUI?) VM on okeanos.io

Everything from here on is done inside our Debian-base VM.

2) Use fallocate or dd seek= to create an (empty) file large enough to hold our image (5GB)

fallocate -l $((5 * 1024 * 1024 * 1024)) gentoo.img

3) Losetup the image, partition and mount it

losetup -f gentoo.img
parted /dev/loop0 mklabel msdos
parted /dev/loop0 mkpart primary ext4 2048s 5G
kpartx -a /dev/loop0
mkfs.ext4 /dev/mapper/loop0p1
losetup /dev/loop1 /dev/mapper/loop0p1 (trick needed for grub2 installation later on)
mount /dev/loop1 /mnt/gentoo -t ext4 -o noatime,nodiratime

4) Chroot and install Gentoo in /mnt/gentoo. Just follow the handbook. At a minimum you’ll need to extract the base system and portage, and set up some basic configs, like networking. It’s up to you how much you want to customize the image. For the Linux kernel, I just copied the Debian /boot/[vmlinuz|initrd|System.map] and /lib/modules/ of the VM directly (and it worked! :) ).
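
For reference, the extraction step looks roughly like this (a sketch: the tarball names are placeholders for whatever stage3 and portage snapshot you download):

cd /mnt/gentoo
tar xjpf /path/to/stage3-amd64-<date>.tar.bz2 (p preserves permissions)
tar xjf /path/to/portage-latest.tar.bz2 -C usr/ (snapshot unpacks into usr/portage)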

5) Install sys-boot/grub-2.00 (I had some *minor* issues with grub-0.97 :P ).

6) Install grub2 in /dev/loop0 (this should help). Make sure your device.map inside the Gentoo chroot looks like this:

(hd0) /dev/loop0
(hd1) /dev/loop1

and make sure you have a sane grub.cfg (I’d suggest replacing all references to UUIDs in grub.cfg and /etc/fstab with /dev/vda[1]).
Now, outside the chroot, run:

grub-install --root-directory=/mnt --grub-mkdevicemap=/mnt/boot/grub/device.map /dev/loop0

Clean up everything (umount, losetup -d, kpartx -d, etc.), and we’re ready to upload the image with snf-image-creator.
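
For reference, the teardown mirrors the setup from step 3 in reverse (device names assume the loop0/loop1 mapping used above):

umount /mnt/gentoo
losetup -d /dev/loop1 (detach the partition-level loop device first)
kpartx -d /dev/loop0 (remove the partition mappings)
losetup -d /dev/loop0 (finally free the image-level device)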

snf-image-creator takes a diskdump as input, launches a helper VM, cleans up the diskdump / image (removal of sensitive data, etc.), and optionally uploads and registers our image with ~okeanos.

For more information on how snf-image-creator and Synnefo image registry works, visit the relevant docs [1][2][3].

0) Since snf-image-creator will use qemu/kvm to spawn a helper VM, and we’re inside a VM, let’s make sure that nested virtualization (OSDI ’10 Best Paper award btw :) ) works.

First, we need to make sure that kvm_[amd|intel] is modprobe’d on the host machine / hypervisor with the nested=1 parameter, and that the vcpu that qemu/kvm creates thinks it has ‘virtual’ virtualization extensions (that’s actually our responsibility, and it’s enabled on the okeanos.io servers).

Inside our Debian VM, let’s verify that everything is ok.

egrep '(vmx|svm)' /proc/cpuinfo
modprobe -v kvm kvm_intel

1) Clone snf-image-creator repo

git clone https://code.grnet.gr/git/snf-image-creator

2) Install snf-image-creator using setuptools (./setup.py install) and optionally virtualenv. You’ll need to install (pip install / aptitude install etc) setuptools, (python-)libguestfs and python-dialog manually. setuptools will take care of the rest of the deps.

3) Use snf-image-creator to prepare and upload / register the image:

snf-image-creator -u gentoo.diskdump -r "Gentoo Linux" -a [okeanos.io username] -t [okeanos.io user token] gentoo.img -o gentoo.img --force

If everything goes as planned, after snf-image-creator terminates, you should be able to see your newly uploaded image in https://pithos.okeanos.io, inside the Images container. You should also be able to choose your image to create a new VM (either via the Web UI, or using the kamaki client).

And, let’s install kamaki to spawn some Gentoo VMs:

git clone https://code.grnet.gr/git/kamaki

and install it using setuptools (just like snf-image-creator). Alternatively, you could use our Debian repo (you can find the GPG key here).

Modify .kamakirc to match your credentials:

[astakos]
enable = on
url = https://astakos.okeanos.io
[compute]
cyclades_extensions = on
enable = on
url = https://cyclades.okeanos.io/api/v1.1
[global]
colors = on
token = [token]
[image]
enable = on
url = https://cyclades.okeanos.io/plankton
[storage]
account = [username]
container = pithos
enable = on
pithos_extensions = on
url = https://pithos.okeanos.io/v1

Now, let’s create our first Gentoo VM:

kamaki server create LarryTheCow 37 `kamaki image list | grep Gentoo | cut -f1 -d ' '` --personality /root/.ssh/authorized_keys

That’s all for now. Hopefully, I’ll return soon with another more detailed post on scripting with kamaki (vkoukis has a nice script using kamaki python lib to create from scratch a small MPI cluster on ~okeanos :) ).

Cheers!


October 22, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
May I have a network connection, please? (October 22, 2012, 15:31 UTC)

If you’re running ~arch, you probably noticed by now that the latest OpenRC release no longer allows services to “need net” in their init scripts. This change has caused quite a bit of grief because some services, including Apache, no longer start after a reboot or a restart. Edit: this only happens if you have corner-case configurations such as an LXC guest. As William points out, the real change is simply that net.lo no longer provides the net virtual, but the other network interfaces do.

While it’s impossible to say that this is not annoying as hell, it could be much worse. Among other reasons, because it’s really trivial to work around until the init scripts themselves are properly fixed. How? You just need to append the line rc_need="!net" to /etc/conf.d/$SERVICENAME — if the configuration file does not exist, simply create it.
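
For example, for Apache (assuming the service name apache2, as in Gentoo):

echo 'rc_need="!net"' >> /etc/conf.d/apache2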

Interestingly enough, knowing this workaround also allows you to do something even more useful, that is, making sure that services requiring a given interface to be up depend on that interface. Okay, it’s a bit complex; let me backtrack a little.

Most of the server daemons out there don’t really care about how many interfaces you have, which ones, or what they are named. They either bind to the “catch-all” address (0.0.0.0 or :: depending on the version of the IP protocol; the latter can also be used to catch both IPv4 and IPv6, but that’s a different story altogether), or to a particular IP address, or they can bind to a particular interface, but that’s quite rare and usually only has to do with the actual physical interface, such as RADVD or DHCP.

Now, to bind to a particular IP address, you really need to have the address assigned to the local computer or the binding will fail. So in these cases you have to stagger the service start until the network interface with that address is up. Unfortunately, it’s extremely hard to do so automatically: you’d have to parse the configuration file of the service (which is sometimes easy and most of the time not), and then you’d have to figure out which interface will come up with that address… which is not really possible for networks that get their addresses automatically.

So how do you solve this conundrum? There are two ways and both involve manual configuration, but so do defined-address listening sockets for daemons.

The first option is to keep the daemon listening on the catch-all addresses, then use iptables to set up filtering per-interface or per-address. This is quite easy to deal with, and quite safe as well. It also has the nice side effect that you only have one place to handle all the IP address specifications. If you ever had to restructure a network because the sysadmin before you used the wrong subnet mask, you know how big a difference that makes. I’ve found that some people think iptables also needs the interfaces to be up to work. This is fortunately not the case: it’ll accept any interface name as long as it could possibly be valid, and will then only match it when the interface actually comes up (that’s why it’s usually a better idea to whitelist rather than blacklist there).
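
As a minimal sketch of the whitelist approach (the interface name lan0 and the port are hypothetical):

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lan0 -p tcp --dport 22 -j ACCEPT (loads fine even if lan0 does not exist yet)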

The other option requires changing the configuration on the OpenRC side. As I showed above, you can easily manipulate the dependencies of the init scripts without having to change the scripts at all. So if you’re running a DHCP server on the LAN served by the interface named lan0 (named this way because a certain udev no longer allows you to swap the interface names with the persistent rules that it itself first introduced), and you want to make sure that this network interface is up before dhcp starts, you can simply add rc_need="net.lan0" to your /etc/conf.d/dhcpd. This way you can make sure that the services’ dependencies match what you expect — I use this to make sure that if I restart things like mysql, php-fpm is also restarted.

So, I’ve given you two ways to work around the current not-really-working-well status; but why did I not complain about the change itself? Well, the reason so many init scripts have that “need net” line is simply cargo-culting. And the big problem is that there is no good definition of what “net” is supposed to be. I’ve seen it used (and used it myself!) for at least the following notions:

  • there are enough modules loaded that you can open sockets; this is not really a situation that I’d like to have to work around; while it’s possible to build both ipv4 and ipv6 as modules, I doubt that most things would work at all that way;
  • there is at least one network interface present on the system; this is usually better achieved by making sure that net.lo is started instead, especially since in most cases what you’re really looking for in situations like this is whether 127.0.0.1 is usable;
  • there is an external interface connected; okay sure, but what are you doing with that interface? Because I can assure you that you’ll find eth0 up… but with no cable connected; what about it then?
  • there is Internet connectivity available; this would make sense if it wasn’t for the not-insignificant detail that you can’t really know that from the init system; this would be like having a “need userpresence” that makes sure the init script is started only after the webcam is turned on and the user’s face is identified.

While some of these particular notions have use cases, the fact that there is no clear identification of what “need net” is supposed to mean makes it extremely unreliable, and at this point, especially considering all the various options (oldnet, newnet, NetworkManager, connman, flimflam, LXC, vserver, …), it’s definitely a better idea to get rid of it altogether. Unfortunately, this is leading us into a relative world of pain, but sometimes you have to get through it.

October 21, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Asynchronous Munin (October 21, 2012, 17:52 UTC)

If you’re a Munin user in Gentoo and you look at ChangeLogs, you probably noticed that yesterday I committed quite a few changes to the latest ~arch ebuild of it. The main topic for these changes was async support, which unfortunately I think is still not ready, but let’s take a step back. Munin 2.0 brought one feature that was clamored for, and one that is simply extremely interesting: the former is the native SSH transport, the latter is what is called “Asynchronous Nodes”.

On a classic node, whenever you run an update you actually have to connect to each monitored node (real or virtual), get the list of plugins, get the config of each plugin (which is not cached by the node), and then get the data for said plugin. For plugins whose data is easy to get, because it only requires reading a file, this is okay, but when you have to contact services that take time to respond, it’s a huge pain in the neck. This gets even worse when SNMP is involved, because then you have to make multiple requests (for multiple values), both to get the configuration and to get the values.

To the mix you have to add that the default timeout on the node, for various reasons, is 10 seconds, which, as I wrote before, makes it impossible to use the original IPMI plugin on most of the servers out there (my plugin instead seems to work just fine, thanks to FreeIPMI). You can increase the timeout, even though this is not really documented to begin with (unfortunately, like most things about Munin), but that does not help in many cases.

So here’s how the Asynchronous node solves this issue. On a standard node, the requests to the single node are serialized, so you’re waiting for each to complete before the next one is fetched, as I said; since this can make the connection to the node take, all in all, a few minutes, if the connection is severed in the meantime you lose your data. The Asynchronous node, instead, has a different service polling the actual node on the same host and saving the data in its spool file. The master in this case connects via SSH (it could theoretically work using xinetd, but neither me nor Steve care about that), launches the asynchronous client, and then requests all the data that was fetched since the last request.
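
On the master side this looks roughly like the following munin.conf fragment (a sketch only: the hostname is hypothetical and the path to munin-async varies between distributions):

[mybox.example.com]
    address ssh://munin-async@mybox.example.com /usr/libexec/munin/munin-async --spoolfetch
    use_node_name yes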

This has two side effects. The first is that your foreign network connection is much faster (there is no waiting for the plugins to config and fetch the data), which in turn means that the overall munin-update transaction is faster; but also, if for whatever reason the connection fails at some point (a VPN connection crashes, a network cable is unplugged, …), the spooled data will cover the time the network was unreachable as well, removing the “holes” in the monitoring that I’ve been seeing way too often lately. The second side effect is that you can spool data every five minutes, but only request it every, let’s say, 15, for hosts which do not require constant monitoring, even though you want to keep the granularity.

Unfortunately, the async support is not as tested as it should be and there are quite a few things that are not ironed out yet, which is why the support for it in the ebuild has been in flux up to this point. Some things have changed upstream as well: before, you had only one user, and that was used both for the SSH connections and for the plugins to fetch data — unfortunately, one of the side effects of this is that you might have given your munin user more access (usually read-only, but often there’s no way to ensure that’s the case!) to devices, configurations or the like… and you definitely don’t want to allow direct access to said user. Now we have two users, munin and munin-async, and the latter needs to have an actual shell.

I toyed with the idea of using the munin-async client as a shell, but the problem is that there is no way to pass options to it that way, so you can’t use --spoolfetch, which makes it vastly useless. On the other hand, I was able to make the SSH support a bit more reliable without having to handle configuration files on the Gentoo side (so that it works for other distributions as well; I need that because I have a few CentOS servers at this point), including the ability to use this without requiring netcat on the other side of the SSH connection (using one old trick with OpenSSH). But this is not ready yet; it’ll have to wait a little longer.

Anyway, as usual, you can expect updates to the Munin page on the Gentoo Wiki when the new code is fully deployed. The big problem I’m having right now is making sure I don’t screw up work’s monitors while I’m playing with improving and fixing Munin itself.

Gentoo on the OLPC XO-1.75 (October 21, 2012, 10:00 UTC)

Currently at the Gentoo Miniconf 2012 in Prague, we have two OLPC XO-1.75 devices and are working to install Gentoo on them.

The XO-1.75 is based on the Marvell Armada 610 SoC (armv7l, non-NEON), which promises countless hours of fun enumerating and obtaining the obscure pieces of software which are needed to make the laptop work.

One of these is the xf86-video-dove DDX for the Vivante(?) GPU: the most recent version, 0.3.5, seems to be available only as an SRPM in the OLPC rpmdropbox. Extracting it reveals a "source" tarball containing this:

.:
total 1364
-rw-r--r-- 1 chithanh users 423968 12. Sep 14:39 aclocal.m4
drwxr-xr-x 1 chithanh users 80 12. Sep 14:39 autom4te.cache
-rwxr-xr-x 1 chithanh users 981 12. Sep 14:37 build_no_dpkg_env.sh
-rw-r--r-- 1 chithanh users 0 12. Sep 14:37 ChangeLog
lrwxrwxrwx 1 chithanh users 37 12. Sep 14:39 config.guess -> /usr/share/automake-1.12/config.guess
-rw-r--r-- 1 chithanh users 2120 12. Sep 14:40 config.h
-rw-r--r-- 1 chithanh users 1846 12. Sep 14:40 config.h.in
-rw-r--r-- 1 chithanh users 43769 12. Sep 14:40 config.log
-rwxr-xr-x 1 chithanh users 65749 12. Sep 14:40 config.status
lrwxrwxrwx 1 chithanh users 35 12. Sep 14:39 config.sub -> /usr/share/automake-1.12/config.sub
-rwxr-xr-x 1 chithanh users 440014 12. Sep 14:40 configure
-rw-r--r-- 1 chithanh users 2419 12. Sep 14:37 configure.ac
-rwxr-xr-x 1 chithanh users 1325 12. Sep 14:37 COPYING
drwxr-xr-x 1 chithanh users 262 12. Sep 14:37 debian
lrwxrwxrwx 1 chithanh users 32 12. Sep 14:39 depcomp -> /usr/share/automake-1.12/depcomp
drwxr-xr-x 1 chithanh users 252 12. Sep 14:37 etc
drwxr-xr-x 1 chithanh users 44 12. Sep 14:37 fedora
lrwxrwxrwx 1 chithanh users 35 12. Sep 14:39 install-sh -> /usr/share/automake-1.12/install-sh
-rwxr-xr-x 1 chithanh users 293541 12. Sep 14:40 libtool
lrwxrwxrwx 1 chithanh users 35 12. Sep 14:39 ltmain.sh -> /usr/share/libtool/config/ltmain.sh
-rw-r--r-- 1 chithanh users 27005 12. Sep 14:40 Makefile
-rw-r--r-- 1 chithanh users 1167 12. Sep 14:37 Makefile.am
-rw-r--r-- 1 chithanh users 25708 12. Sep 14:40 Makefile.in
drwxr-xr-x 1 chithanh users 76 12. Sep 14:40 man
lrwxrwxrwx 1 chithanh users 32 12. Sep 14:39 missing -> /usr/share/automake-1.12/missing
-rw-r--r-- 1 chithanh users 4169 12. Sep 14:37 README
drwxr-xr-x 1 chithanh users 1192 12. Sep 21:48 src
-rw-r--r-- 1 chithanh users 23 12. Sep 14:40 stamp-h1

src/:
total 688
-rw-r--r-- 1 chithanh users 3555 12. Sep 14:41 compat-api.h
-rw-r--r-- 1 chithanh users 805 12. Sep 14:37 datatypes.h
-rw-r--r-- 1 chithanh users 55994 12. Sep 21:22 dovefb.c
-rw-r--r-- 1 chithanh users 32160 12. Sep 15:11 dovefb_cursor.c
-rw-r--r-- 1 chithanh users 278 12. Sep 17:09 dovefb_cursor.lo
-rw-r--r-- 1 chithanh users 6052 12. Sep 14:41 dovefb_driver.h
-rw-r--r-- 1 chithanh users 974 12. Sep 17:09 dovefb_drv.la
-rw-r--r-- 1 chithanh users 13856 12. Sep 14:37 dovefb.h
-rw-r--r-- 1 chithanh users 264 12. Sep 17:09 dovefb.lo
-rw-r--r-- 1 chithanh users 128733 12. Sep 15:11 dovefb_xv.c
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 dovefb_xv.lo
-rw-r--r-- 1 chithanh users 2548 12. Sep 14:53 list.h
-rw-r--r-- 1 chithanh users 22242 12. Sep 17:08 Makefile
-rw-r--r-- 1 chithanh users 2121 12. Sep 14:37 Makefile.am
-rw-r--r-- 1 chithanh users 2134 12. Sep 14:37 Makefile.am.sw
-rw-r--r-- 1 chithanh users 21742 12. Sep 14:40 Makefile.in
-rw-r--r-- 1 chithanh users 18584 12. Sep 15:11 mrvl_crtc.c
-rw-r--r-- 1 chithanh users 856 12. Sep 14:37 mrvl_crtc.h
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 mrvl_crtc.lo
-rw-r--r-- 1 chithanh users 851 12. Sep 14:37 mrvl_cursor.h
-rw-r--r-- 1 chithanh users 2509 12. Sep 15:11 mrvl_debug.c
-rw-r--r-- 1 chithanh users 2284 12. Sep 14:37 mrvl_debug.h
-rw-r--r-- 1 chithanh users 272 12. Sep 17:09 mrvl_debug.lo
-rw-r--r-- 1 chithanh users 32528 12. Sep 15:11 mrvl_edid.c
-rw-r--r-- 1 chithanh users 5794 12. Sep 14:37 mrvl_edid.h
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 mrvl_edid.lo
-rw-r--r-- 1 chithanh users 84262 12. Sep 17:07 mrvl_exa_driver.c
-rw-r--r-- 1 chithanh users 282 12. Sep 17:09 mrvl_exa_driver.lo
-rw-r--r-- 1 chithanh users 10388 12. Sep 15:11 mrvl_exa_fence_pool.c
-rw-r--r-- 1 chithanh users 290 12. Sep 17:09 mrvl_exa_fence_pool.lo
-rw-r--r-- 1 chithanh users 9189 12. Sep 14:51 mrvl_exa.h
-rw-r--r-- 1 chithanh users 4258 12. Sep 14:37 mrvl_exa_profiling.h
-rw-r--r-- 1 chithanh users 46583 12. Sep 15:11 mrvl_exa_utils.c
-rw-r--r-- 1 chithanh users 3768 12. Sep 15:06 mrvl_exa_utils.h
-rw-r--r-- 1 chithanh users 280 12. Sep 17:09 mrvl_exa_utils.lo
-rw-r--r-- 1 chithanh users 20622 12. Sep 15:11 mrvl_heap.c
-rw-r--r-- 1 chithanh users 3256 12. Sep 14:53 mrvl_heap.h
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 mrvl_heap.lo
-rw-r--r-- 1 chithanh users 1774 12. Sep 15:11 mrvl_offscreen_memory.c
-rw-r--r-- 1 chithanh users 235 12. Sep 14:37 mrvl_offscreen_memory.h
-rw-r--r-- 1 chithanh users 294 12. Sep 17:09 mrvl_offscreen_memory.lo
-rw-r--r-- 1 chithanh users 47286 12. Sep 15:11 mrvl_output.c
-rw-r--r-- 1 chithanh users 274 12. Sep 17:09 mrvl_output.lo

More pictures of the Gentoo Miniconf can be found at the Google+ Event page.

October 19, 2012
Miniconf: Gentoo on the OLPC XO-1.75 (October 19, 2012, 21:02 UTC)

At the Gentoo Miniconf 2012 in Prague we will install Gentoo on the OLPC XO-1.75, an ARM based laptop designed as an educational tool for children. If you are interested in joining us, come to the Gentoo booth and start hacking with us!

—Chí-Thanh Christopher Nguyễn

October 18, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
ModSecurity news, rules, and future (October 18, 2012, 05:32 UTC)

So the day started with looking into getting a new version of ModSecurity into shape for a new stable ebuild in Gentoo, for bug #438724 (a security issue in ModSecurity 2.6.8 and earlier). Unfortunately, this also meant that I had to get a new CRS in, and that requires more testing than I was expecting.

The problem is that the ModSecurity 2.7 release is stricter about what it accepts for rules. In particular, rules are now mandated to have a unique ID. And that ID has to be numeric only. And that also means that if you publish your ruleset like I do, you have to register for a reserved ID range with the ModSecurity developers. I did, and I have my proper range. I had already developed a tool some time ago to validate my rules’ compliance with the new policy, but it turned out to require some tweaking anyway, as a few conditions weren’t reported properly.
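
For the record, a 2.7-compliant rule carries an explicit numeric id action, like this minimal sketch (the id and pattern here are made up; a real id must come from your reserved range):

SecRule REQUEST_HEADERS:User-Agent "@contains sqlmap" "id:1234001,phase:1,t:none,deny,status:403,msg:'Scanner detected'"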

Unfortunately, the Core Rule Set (which is actually developed as a separate project by Ryan Barnett, whereas ModSecurity is maintained by Breno Silva) was not ready for this yet. Oh yes, the base rules, which are the only ones usually enabled by Gentoo, are fine, but the optional, experimental, and newly introduced SpiderLabs Research rules are not. Some rules lack an ID, some IDs are duplicated, and some rules go well outside the ranges designated for them.

I already pointed the guys at SpiderLabs/TrustWave at my script — hopefully we’ll soon get a 2.2.7 release that covers those issues. Until then we’ll have to do with what we have. My rules are all fixed to work properly with the new ModSecurity, though; this blog is using them already.

On a different note, I’ve considered making my validation of browsers’ user agents stronger than before, as spammers and exploit tools are becoming more advanced and more capable. In particular, I’ve found Mozilla’s docs as well as Microsoft’s, which include a description for IE8 and one for IE9 (I haven’t looked one up for IE10 yet, but I’m sure they have it). This should be enough to actually validate that there aren’t extraneous addons installed that could be a signal of a spambot.

In particular, it seems like many of the posters in the recent wave of spam I’ve been hit with lately, which looks exactly like it comes from a standard browser, report themselves as Firefox with WebMoney Advisor installed. Turns out that WebMoney is one of the many anonymous, electronic currencies that are so often used by spammers, carders, and the rest of the low-life scum that causes us so much grief as email users and bloggers. I wouldn’t be surprised if these were actually mechanical turks used to post spam bypassing various filters, who are then paid through that service.

Anyway, as usual, if you can’t post on the blog, please just send me an email. It shouldn’t happen, but sometimes I have been overly excited with the rules themselves. On the other hand, I’ve tested most of the browsers we have lined up here at the office and they are fine — we don’t use or support Opera, but that should be fine as well. The infamous Opera Turbo issues should be fixed now; it would have been nice if Opera actually sent the proper HTTP parameters as required by the RFC when using that feature, but it’s okay.

October 17, 2012
2012 Gentoo Screenshot Contest Results (October 17, 2012, 20:57 UTC)

Gentoo - Still alive and kicking ...

As the quantity and quality of this year's entries will attest, Gentoo is alive, well, and taking no prisoners!

We had 70 entries for the 2012 Gentoo screenshot contest, representing 11 different window managers / desktop environments. Thanks to all who participated, the judges, and likewhoa for the screenshot site.

The Winners!

New subproject: kde-stable (October 17, 2012, 18:53 UTC)

If you are a KDE user, you may be interested in this new subproject:
http://www.gentoo.org/proj/en/desktop/kde/kde-stable/

Feel free to ask if you have any doubts.

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The latest news (October 17, 2012, 10:27 UTC)

Overview of What Happened

In the last few weeks, the conference team has worked hard to prepare the conference. The main news items you should be aware of are the FAQ which has been published, the party locations and times, the call to organize BoF sessions, and of course the sponsors who help make the event possible. We’re happy to tell you that we will provide live video streams from the main rooms during the event (!!!), and we have announced the Round Table sessions during the Future Media track. Last but not least, there have been some interviews with interesting speakers from the schedule!

Sneak Peek of the Conference Schedule

Let’s start with the interviews. During the last weeks, a number of interesting speakers have been interviewed, both by text and over video chat. You can find the interviews in our first sneak peek article, and more in this extensive follow-up article about the Future Media track. You can also find the video interviews on our YouTube channel and on our blip.tv channel.

Video!

Speaking of video interviews, there will be more videos in those channels: the openSUSE Video team is gearing up to tape the talks at the event. They will even provide a live stream of the event, which you can watch via Flash and on a smartphone at bambuser, and via these three links as ogv feeds: Room Kirk, Room McCoy and Room Scotty. Keep an eye on the wiki page, as the team will add feeds for more rooms if we can get some more volunteers to help us out.

Round Table Sessions!

We’ve mentioned the special feature track ‘Future Media’ already and we’ve got an extra bite for you all: the track will feature two round table discussions, one about the value of Free and Open for our Society and one about the practicalities of doing ‘open’ projects. Find more in the schedule: Why open matters and How do you DO open?.

We need YOU!

Despite all our work, this event would be nothing without YOUR help. We’re still looking for volunteers to sign up, but there’s another thing we need you for: be pro-active and get the most out of this event! That means not only sitting in on the talks but also stepping up and participating in the BoF sessions. And organize a BoF if you think there’s something to discuss!

Party time!

Of course, we’re also thinking about the social side of the event. Yes, there will surely be an extensive “hallway track” as we feature a nice area with booths and the university has lots of hallways… But sometimes it’s just nice to sit down with someone over a good beer, and this is where our parties come in. As this article explains, there will be two parties: one on Friday, as warming-up (and pre-registration) and one on Saturday, rockin’ in the city center of Prague. Note that you will need your badge to enter this party, which means you have to be registered!

Sponsors

As we wrote a few days ago, all this would not be possible without our sponsors, and we’d like to thank them A LOT for their support!

Big hugs to Platinum Sponsor SUSE, Gold Sponsor Aeroaccess, Silver Sponsor Google, Bronze Sponsor B1Systems, supporters ownCloud and Univention and of course our media partners LinuxMagazine and Root.cz. Last but not least, a big shout-out to the university which is providing this location to us!

FAQ

On a practical level, we have also published our Conference FAQ answering a bunch of questions you might have about the event. If you aren’t sure about something, check it out!

More

There will be more news in the coming days; be sure to keep an eye on news.opensuse.org for articles leading up to and during the event. As one teaser: we’ve got the Speedy Geeko and Lightning Talks schedule coming soon!

Be there!

Gentoo Miniconf, oSC12 and LinuxDays will take place at the Czech Technical University in Prague. The campus is located in the Dejvice district and is next to an underground station that gets you directly to the historic city center – an opportunity you can’t miss!

We expect to welcome about 700 open source developers, testers, usability experts, artists and professional attendees to the co-hosted conferences! We are working together to make one big, smashing event! Admission to the conference is completely free. However, for oSC a professional attendee ticket is available that offers some additional benefits.

All the co-hosted conferences will start on October 20th. Gentoo Miniconf and LinuxDays end on October 21st, while the openSUSE Conference ends on October 23rd. See you there!

Dane Smith a.k.a. c1pher (homepage, stats, bugs)
New Tricks, Goals, and Ideas (October 17, 2012, 01:06 UTC)

It’s been a while since I’ve done anything visible to anyone but myself. So, what the heck have I been doing?

Well, for starters, in the past year I’ve done a serious amount of work in Python. This work was one of the reasons for my lack of motivation for Gentoo. I went from doing little programming / maintenance at work to doing it 40+ hours a week. It meant I didn’t really feel up to doing more of it in my limited spare time. So I took up a few new hobbies. I got into photography (feel free to look under links for the photo website). I feel weird with the self-promotion for that type of thing, but c’est la vie.

As the programming at work died down some, I started to find odd projects. I spent some serious time learning Go [1] and did a few small projects of my own in it. One of those projects will be open sourced soon. I know a fair few different languages, and I know C, Python, and Java pretty decently. While I like all of the ones on that list, I can’t say that I truly buy into their philosophies. Python is great. It’s simple, it’s clean, and it “just works.” However, I find that, like OpenSSL, it gives you enough rope to hang yourself and everyone else in the room. The lack of strict typing, coupled with the fact that it’s a scripting language, are downsides (in my eyes). C, for all that it is awesome at low-level work, requires so much verbosity to accomplish the simplest tasks that I tend to shy away from it for anything other than what must be done at that level. Java… is, well, Java. It’s a decent enough language I suppose, but being run in a VM is silly in my eyes. It, like C, suffers from being too verbose as well (again, merely my humble opinion).

Enter Go. Go has duck-typed interfaces, unlike Java’s explicit ones. It’s compiled and strictly typed. It has other modern niceties (like proper strings), along with a strong tie to web development (another area C struggles with). It has numerous interesting concepts (check out defer), along with what I find to be a MUCH better approach to error handling than what exists in any of C, Java, or Python. Add in that it is concurrent by design, and you have one serious language. I must say that I am thoroughly impressed. Serious kudos to those Google guys for one awesome language.

I also picked up a Nexus 7 and started looking into how Android is built and how it works. I got my own custom ROM and kernel working, along with a nice Gentoo image on the SD card. Can anyone say “Go compiler on my Nexus 7?” This work also led me to do some work on getting Gentoo booting on Amazon’s Elastic Compute Cloud. Building Android takes for-freaking-ever, so I figured… why not do it in the cloud!? It works splendidly, and it is fast.

So that covers new tricks. You mentioned goals and ideas?!

First, it's time to get myself off the slacker wagon and back to doing something useful. I no longer recoil at the idea of developing when I get home. That helps =p. One of the first things I want to spend some time addressing is disk encryption in Gentoo. I wrote here pertaining to the state of loop-aes. Both Loop-AES and Truecrypt need to spend a little time under the microscope as to how they should be handled within Gentoo. I’ll write more on this later when I have all my ducks in a row. I have no doubt that this will be a fun topic.

I also want to look into how a language like Go fits into Gentoo. Go has its own build system (no Makefiles, configure scripts, or anything else) that DOES have a notion of things like CFLAGS. It also has the ability to “go get” a package and install it. To those curious: check out their website. All of these lead to interesting questions from a package management point of view. I am inclined to think that Go is here to stay. I hope it is. So we may as well start looking into this now rather than later. As my father used to tell me all the time, “Proper Prior Planning Prevents Piss Poor Performance.” Time to plan =).

That is, right after I sort out the fiasco that is my bug queue. *facepalm*

[1] http://golang.com

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Sophistication can be bad (October 17, 2012, 00:06 UTC)

Everybody has heard about the KISS principle, I guess — the idea that the less complex a moving part is, the better. This is true in software as much as in mechanics. Unix in particular, and all the Unix-like projects including GNU, also tended to follow that principle, as can be seen in the huge amount of small utilities that each do only one particular text or file editing function — that is, until you introduce sed, awk and find.

Now, we all know that the main sophistication afoot in the Linux world nowadays is Lennart’s systemd. I have no intention of discussing it now, or at any later time I’d say. I really don’t care as long as I have a choice not to use it, and judging from a given thread I think we’ll always have an alternative, no matter what some people said before and keep saying.

No, my problem today is not with udev deciding it’s time to stop using the same persistent rules that people had to fight with for years and that are now no longer usable. Instead, it’s a problem with util-linux, and in particular with the losetup utility that manages loop devices. See, loop devices have been quite a big deal in the past, mostly because they started out as a fixed amount, then the kernel let you decide how many, and then finally code was added that lets you change the number of available loop devices dynamically. Great, but it required a newer version of util-linux, and at the time this was introduced, there wasn’t one that actually worked as intended.

Anyway, in the past week I’ve been working on building a new firmware image for the device I’m working on, and when it came down to running the script that generates the image to burn on the SSD, it locked up at 100% CPU usage (luckily the system is multicore, so I could get in to kill it). The problem was to be found in losetup, so today, with enough time on my hands, I went to check it out. Turns out that the reason why it failed was a joint issue between my setup, OpenRC updates, and util-linux updates, but let’s proceed in order.

The build happens in a container for which I was not mounting /sys — or at least so I intended, although it is possible that OpenRC mounted it on its own; this has changed recently, but I don’t think those changes have hit stable yet, so I’m not sure that’s the case. I had created static nodes for the loop devices and for /dev/loop-control — but the latter was nowhere to be found at first today. Maybe I deleted it by mistake or something along those lines. But the point is, it worked before, and nothing changed besides an emerge -avuDN.

So, what happens is that the script runs something along the lines of losetup --find --show file, which is intended to find the first available loop device, set the file up on it, and then print the loop device that was used. It’s a bit more complex than this, as I’m explicitly setting up only the partition on the loop device (getting partitioned loop devices to play nice with LXC is a pain), but the point stands. Unfortunately, when both /dev/loop-control and /sys are unreachable, the loop that should yield the first available device keeps testing the same device over and over and over again, never trying the next one. This causes the problem noted above: losetup locking up at 100% CPU usage.

And it’s definitely not the only problem! If you just execute losetup --find, which should give you the first available device, it reports /dev/loop0 even if that device is already in use. Not content with these problems? losetup -a lists no devices, even when some are present, and still returns a valid, zero exit status. Which is definitely not correct!

Okay, you could say that losetup is already trying its best by using not one but three different sources (the third one is /proc/partitions) to find the data to use, and that when the primary two are not usable, you shouldn’t expect it to give you proper information. Well, that’s not the point. The big problem is that it should tell me “man, I can’t get you the data you requested because I need more sources, give me the sources!” instead of trying its best, failing, and locking up.

The next question is obviously “why are you ranting, instead of fixing it?” — the answer is that I tried, but the code I was reading made me cry. The problem is that nowadays losetup is just a shallow interface to some shared code in util-linux… and the design of said code makes it very difficult to tell whether a non-zero return value from a function means “we reached the end of the list” or “I couldn’t see anything because I lack my sources”. And it really didn’t feel like a good idea for me to start throwing away that code to replace it with something more KISS-compliant.

So at the end of the day, I fixed my container to mount /sys, and everything works — but util-linux is still broken upstream.
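
For the record, with LXC the fix boils down to one line in the container configuration (a sketch, using the standard lxc.mount.entry fstab-like syntax; the path "sys" is relative to the container's rootfs):

lxc.mount.entry = sysfs sys sysfs defaults 0 0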

October 15, 2012
Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
box down (October 15, 2012, 07:08 UTC)

my main gentoo workstation is down. no more documentation updates from me for awhile.

it seems the desktop computer’s video card has finally bitten the dust. the monitor comes up as “no input detected” despite repeated reboots. so now i’m faced with a decision: throw in a cheap, low-end GFX card as a stopgap measure, or wash my hands of 3 to 6 years of progressive hardware failure, and do a complete rebuild. last time i put anything new in the box was probably back in 2009…said (dead) GFX card, and a side/downgraded AMD CPU. might be worth building an entirely new machine from scratch at this point.

i haven’t bothered to pay attention to the AMD-vs-Intel race for the last few years, so i’m a bit at a loss. i’ll check TechReport, SPCR, NewEgg, and all those sites, but…not being at all caught up on the bang-for-buck parts…is a bit disconcerting. i used to follow the latest trends and reviews like a true technoweenie.

and now, of course, i’m thinking in terms of what hardware lends itself to music production — USB/Firewire ports, bus latency, linux driver status for crucial bits; things like that. all very challenging to juggle after being out of it for so long.

so, who’s built their own PC lately? what’d ya use?

October 14, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Gentoo Hardened progress meeting (October 14, 2012, 13:00 UTC)

Not that long ago we had our monthly Gentoo Hardened project meeting (on October 3rd to be exact). On these meetings, we discuss the progress of the project since the last meeting.

For our toolchain domain, Zorry reported that the PIE patchset for GCC has been updated, fixing bug #436924. Blueness also mentioned that he will most likely create a separate subproject for the alternative hardened systems (such as mips and arm). This is mostly for management reasons (as the information is currently scattered throughout the Gentoo project at large).

For the kernel domain: since version 3.5.4-r2 (and higher), the kernexec and uderef settings (for grSecurity) should no longer impact performance on virtualized platforms (when hardware acceleration is used, of course), something that had been bothering Intel-based systems for quite some time. Also, the problem with guest systems immediately reserving (committing) all memory on the host should be fixed with recent kernels as well. Of course, this is only true as long as you don’t sanitize your memory; otherwise all memory gets allocated regardless.

In the SELinux subproject, we now have live ebuilds allowing users to pull in the latest policy changes directly from the git repository where we keep our policy. Also, we will see a high commit frequency in the next few weeks (or perhaps even months) as Fedora’s changes are being merged with upstream. Another change is that our patchbundles no longer contain all individual patches, but a single merged patch. This cuts the deployment time of a SELinux policy package considerably (up to 30% faster, since patching now takes only a second or less). And finally, the latest userspace utilities are in the hardened-dev overlay, ready for broader testing.

grSecurity is still focusing on the XATTR-based PaX flags. The eclass (pax-utils) has been updated, and we will now be looking at supporting the PaX extended attributes for file systems such as tmpfs.

For profiles, people will notice that in the next few weeks we will be dropping the (extremely) old SELinux profiles, as the current ones were marked stable a long time ago.

In the system integrity domain, IMA is being worked on (packages and documentation) after which we’ll move to the EVM support to protect extended attributes.

And finally, klondike gave a well-received talk about Gentoo Hardened at the FLOSSK conference in Kosovo.

All in all a good month of work, again with many thanks to the volunteers that are keeping Gentoo Hardened alive and kicking!

Matthew Thode a.k.a. prometheanfire (homepage, stats, bugs)
VLAN trunking to KVM VMs (October 14, 2012, 05:00 UTC)

Why this is needed

While testing Linux bridging I noticed a problem that took me much longer to figure out than I feel comfortable admitting. You cannot break out the VLANs from a physical device and also use that physical device (attached to a bridge) to forward the entire trunk to a set of VMs. The reason this occurs is that once Linux starts inspecting an interface for VLANs in order to split them out, it discards all those you do not have defined, so you have to trick it.

Setup

I had my trunk on eth1. What you need to do is directly attach eth1 to a bridge (vmbr1). This bridge now has the entire trunk associated with it. Here's the fun part: you can break out VLANs on the bridge, so you would have an interface for VLAN 13 named vmbr1.13, and you can then attach that to a bridge, allowing you to have a group of machines only exposed to VLAN 13.

The networking goes like this.

               /-> vmbr1.13 -> vmbr13 -> VM2
eth1 -> vmbr1 ---> VM1
               \-> vmbr1.42 -> vmbr42 -> VM3

Example

Here is the script I used with proxmox (you can set up the bridge in proxmox, but not the source of the bridge's data, the 'input'). This covers VLANs 2 through 13 and assumes you have the target bridges set up (for Vyatta, in my case). I had it run at boot (via rc.local).

vconfig add vmbr1 2
vconfig add vmbr1 3
vconfig add vmbr1 4
vconfig add vmbr1 5
vconfig add vmbr1 6
vconfig add vmbr1 7
vconfig add vmbr1 8
vconfig add vmbr1 9
vconfig add vmbr1 10
vconfig add vmbr1 11
vconfig add vmbr1 12
vconfig add vmbr1 13
ifconfig eth1 up
ifconfig vmbr1 up
ifconfig vmbr1.2 up
ifconfig vmbr1.3 up
ifconfig vmbr1.4 up
ifconfig vmbr1.5 up
ifconfig vmbr1.6 up
ifconfig vmbr1.7 up
ifconfig vmbr1.8 up
ifconfig vmbr1.9 up
ifconfig vmbr1.10 up
ifconfig vmbr1.11 up
ifconfig vmbr1.12 up
ifconfig vmbr1.13 up
brctl addif vmbr1 eth1
brctl addif vmbr2 vmbr1.2
brctl addif vmbr3 vmbr1.3
brctl addif vmbr4 vmbr1.4
brctl addif vmbr5 vmbr1.5
brctl addif vmbr6 vmbr1.6
brctl addif vmbr7 vmbr1.7
brctl addif vmbr8 vmbr1.8
brctl addif vmbr9 vmbr1.9
brctl addif vmbr10 vmbr1.10
brctl addif vmbr11 vmbr1.11
brctl addif vmbr12 vmbr1.12
brctl addif vmbr13 vmbr1.13
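
As an aside, the same wiring can be done with iproute2 instead of the older vconfig/brctl tools; a rough, untested sketch for just VLAN 13, using the same interface and bridge names:

# create the VLAN 13 sub-interface on top of the trunk bridge
ip link add link vmbr1 name vmbr1.13 type vlan id 13
ip link set vmbr1.13 up
# attach it to the per-VLAN bridge that the VMs use
brctl addif vmbr13 vmbr1.13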

October 13, 2012
Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
Reanimating #gentoo-commits (October 13, 2012, 13:58 UTC)

Today I got annoyed with the silence in #gentoo-commits and spent a few hours fixing that. We have a bot reporting ... well, I hope all commits, but I haven't tested it enough.

So let me explain how it works so you can be very amused ...

First stage: Get notifications
Difficulty: I can't install postcommit hooks on cvs.gentoo.org
Workaround: gentoo-commits@lists.gentoo.org emails
Code (procmailrc):

:0:
* ^TO_gentoo-commits@lists.gentoo.org
{
  :0 c
  .maildir/.INBOX.gentoo-commits/

  :0
  | bash ~/irker-wrapper.sh
}
So this runs all mails that come from the ML through a script, and puts a copy into a subfolder.

Second stage: Extracting the data
Difficulty: Email is not a structured format
Workaround: bashing things with bash until happy
Code (irker-wrapper.sh):
#!/bin/bash
# irker wrapper helper thingy

# pull the interesting VCS headers out of the mail arriving on stdin
while read line; do
        # echo $line # debug
        echo $line | grep -q "X-VCS-Repository:" && REPO=${line/X-VCS-Repository: /}
        echo $line | grep -q "X-VCS-Committer:"  && AUTHOR=${line/X-VCS-Committer:/}
        echo $line | grep -q "X-VCS-Directories:"  &&  DIRECTORIES=${line/X-VCS-Directories:/}
        echo $line | grep -q "Subject:"  && SUBJECT=${line/Subject:/}
        EVERYTHING+=$line
        EVERYTHING+="\n"
done

# the commit message is the line following "Log:" in the mail body
COMMIT_MSG=`echo -e $EVERYTHING | grep "Log:" -A1 | grep -v "Log:"`

ssh commitbot@lolcode.gentooexperimental.org "{\"to\": [\"irc://chat.freenode.net/#gentoo-commits\"], \"privmsg\": \"$REPO: ${AUTHOR} ${DIRECTORIES}: $COMMIT_MSG \"}"
Why the ssh stuff? Well, the server where the mails arrive is a bit restricted, and it's hard to run a daemon there 'n stuff, so let's just pipe things somewhere more liberal.

Third stage: Sending the notifications
Difficulty: How to communicate with irkerd?
Workaround: nc, a hammer, a few thumbs
Code:
#!/bin/bash

echo $@ | nc --send-only  127.0.0.1 6659
And that's how the magic works.

Bonus trick: using command="" in ~/.ssh/authorized_keys
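
For those who haven't seen it: a forced command in ~/.ssh/authorized_keys pins a key to a single command, so the commitbot key can't be used for anything else. Roughly like this (key shortened, wrapper path invented for illustration):

command="/home/commitbot/irker-pipe.sh",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... commitbot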

... and now I really need a beer :)

October 12, 2012
Raúl Porcel a.k.a. armin76 (homepage, stats, bugs)
Beaglebone documentation updated (October 12, 2012, 17:06 UTC)

Hi all,

I’ve got some reports that my Beaglebone guide is outdated and causing some trouble regarding the bootloader and kernel.

While the vanilla kernel, as of 3.6.1, doesn’t support the Beaglebone, U-Boot 2012.10-rc3 does, so I’ve tested all the changes and updated the guide accordingly.

You can find it at http://dev.gentoo.org/~armin76/arm/beaglebone/install.xml
Some changes I’ve noticed in the almost a year since I wrote the documentation:

  • The bug (by design they said) which made the USB port stop working after unplugging a device (check my post about the Beaglebone) is now fixed
  • CPU scaling is working, although the default governor is ‘userspace’. The default speed with this governor is:

a) 600MHz if powering it using a PSU through the 5V power connector; remember that the maximum speed of the Beaglebone is 720MHz

b) 500MHz if powering it using the mini-USB port
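
If you'd rather not stay on the userspace governor, switching is a one-liner through sysfs; a sketch assuming the standard cpufreq layout and that the ondemand governor is compiled into your kernel:

# echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq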

Have fun


October 08, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The Keynote speaker (October 08, 2012, 12:22 UTC)

The keynote speaker for the Bootstrapping Awesome co-hosted conferences is going to be Agustin Benito Bethencourt. Agustin is currently working in Nuremberg, Germany as the openSUSE Team Lead at SUSE, and in the Free Software community he’s mostly known for his contributions to KDE and especially to KDE e.V. He is a very interesting guy with a lot of experience in FOSS, from both the community and the enterprise point of view, which is also the reason I asked him to do the keynote. I enjoy working with him on organizing this conference a lot; his experience is valuable. In this interview he talks a bit about himself, and a lot about the subject of his keynote, the conference, openSUSE and SUSE, and Free Software. The interview was done inside the SUSE office in Prague, with me being the “journalist” and Michal being the “camera-man”. Post-processing was done by Jos. More interviews with other speakers are on the way, so stay tuned! Enjoy!

I’m writing this post in Italian because it is intended only for Italian people.

For some time now we have had the idea of moving to git for the translation of the Gentoo documentation from English to Italian.
There are already quite a few of us, but with more translators we could produce much more.
No specific technical skills are required, just a minimal knowledge of English.

References:
http://dev.gentoo.org/~ago/trads-it.xml
http://dev.gentoo.org/~ago/howtohelp.xml
http://www.gentoo.org/doc/it/xml-guide.xml

If anything in these documents is unclear, don’t hesitate to contact me.

Anyone interested in collaborating can write to me at ago@gentoo.org, preferably adding the [docs-it] tag at the beginning of the subject, or simply by clicking here.

October 07, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
New ancient content (October 07, 2012, 18:05 UTC)

This is a very brief post, which is just going to tell you that all the old content which was originally written on the b2evolution install at Planet Gentoo, and which was then imported into the WordPress install at Gentoo Blogs, has now been imported into this site instead.

This basically means that Google will start indexing the old posts as well, which will then appear in the custom search’s results, which is why I went through the trouble. In particular, yesterday’s post was actually linked to a very old problem which I encountered almost exactly seven years ago (time flies!), but finding it was a bit of a mess.

So I saved a backup of the WordPress data, and wrote a script to create articles in Typo starting from it. The result is that there are now 324 extra posts in this blog; unfortunately some of them are noise. Some of that is because the original images they linked to are gone, and some because at some point b2evolution went crazy and started dropping anything where UTF-8 was involved, as can be seen by comparing the original and the truncated version from the Wayback Machine. I’ll probably re-import the two articles I found truncated.

After the import completed, I also deleted the WordPress site on Gentoo’s infra. The reason is simple: I don’t want to have two copies around, with all the broken links it had (nobody set up a remapping of the old planet links to blogs links), the truncated content and so on. There are a number of things in the content that need to be cleared up, and links from this blog to the old one need to be fixed as well, but that’ll happen over time; I don’t want to sweat it now.

I’m happy that my content is back with me.

UTF-8 Forever (October 07, 2012, 17:56 UTC)

Ok, I know I’m being quite an ass for posting so many blog entries in such a short time, but I had some of them in mind over the last few days and just hadn’t had time to actually write them.

This entry has a title that will probably turn out to be misleading, as I’m going to summarize a couple of things here…

Starting from the topic in the title, I’m really loving UTF-8 these days. With the surname I have (“Pettenò”) it’s quite important to be sure that the final ò is handled correctly by computers.. it has already happened that I received (snail) mails with the surname mangled by the wrong encoding, and sometimes I just get mails addressed to “Petteno” (which is a completely different surname).
Luckily, using UTF-8 it is possible to represent my surname as well as kloeri’s (Østergaard; sorry for naming you, but yours was the only other name with special characters that came to mind) without having to mangle encodings, and at the same time also write ばか without having to install special support for extra encodings or forcibly disable latin-extended characters (for those wondering what is written there, it’s “baka” .. and if you don’t know what that means, just google :P).
Unfortunately, UTF-8 is not a magic solution, as there is also someone who fails to write ChangeLogs with my name spelt right.. eh Mike? (I am referring to recent ChangeLogs from vapier where I had to fix my surname, as it was written with a broken encoding; nothing personal :) ).

Then I must say that lately my (real) life has been really fouled up and strange. Maybe it’s just the weather, maybe the time that’s passing, but I’ve really felt depressed these last days. More probably, it’s the knowledge that the someone I care about is happy, but with someone else, that’s making me feel strange. While I’m happy for her being happy, I’m totally sad as I know I won’t be able to be at her side for all the time we have in front of us. This feeling is really messing me up, so I don’t really know if I’ll be present or not on IRC, if I’ll look at bugs, or if I’ll complete the GNOME porting without pauses… I think, and hope, I’ll remain the same as usual, also because it helps me not to think of her, but I can’t really say what I’m going to do.

On a slightly happier note, after being published on NewsForge, becoming the Developer of the Week (of three weeks ago) and now becoming Deputy Lead for the G/BSD project, I’m starting to think that I’m not wasting others’ time every day all day, so I feel relieved about the “professional” part of my life.. I just hope I’ll be able to continue like this after I find a job, as I haven’t been paid yet for the translation, and I don’t have any other income. It sucks not being able to test G/FBSD 6 just because the only other machine requires a damn PC100 memory stick to work (the memory I had was faulty, I had to trash it).

October 04, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
FreeRADIUS and 802.1x authentication (October 04, 2012, 01:02 UTC)

Sometimes my work requires me to do stuff that is so interesting that I work overtime, without actually telling anybody, to make it work better, like I’ve done for Munin and Icinga. Most of the time, though, my work is just boring and repetitive, but that’s okay; excitement goes a long way to ruin a person’s health. Especially when said excitement hits you on the jawbone with a network where Ethernet ports are authenticated with 802.1x…

You might not know it, but there is a way to authenticate clients on actual wired Ethernet more or less the same way you do on WiFi. This is done through 802.1x and RADIUS. What is RADIUS? Well, it’s basically an authentication and accounting protocol, which I guess was originally developed for dial-up Internet access… remember those 56k modem days? No? Sigh, I feel old now. At any rate, for that reason you’ll find the FreeRADIUS server in net-dialup/freeradius at the moment.. it really should be moved to sys-auth/freeradius-server, but I don’t want to bother with that right now.

So what happens during 802.1x is simple: the switch (the authenticator, in 802.1x terms) acts as a proxy between the client and the RADIUS server, passing through the authentication messages, which in most cases are EAP-based. Until authentication succeeds, all other packets sent over the network are simply dropped. Depending on how you set up the network and the capabilities of your switch, you can make it so that if authentication does not happen within a given time you land in a guest VLAN, or the switch just keeps dropping packets while waiting for authentication. Unfortunately, if you go with the default DHCP configuration, with its default timeouts, it’s likely that you won’t get any network at all, which is the problem we hit with our device (for comparison, OS X had the same issue, and you had to renew the DHCP lease after about a minute of connecting the Ethernet cable).

So I bought the cheapest 802.1x-capable switch I could find on Amazon (an eight-port Cisco SG-200, if you’re interested) and started setting things up to simulate an 802.1x network. This switch does not support a guest VLAN as far as I can tell, which actually bothers me quite a bit, but on the whole it looks okay for our testing. I found out after the whole work was done that it was technically possible to authenticate against a local database instead of having to deal with an external authenticator, but I’ve not been able to get it running that way, so it’s okay.

For what concerns the main topic of this discussion from the Gentoo point of view, I was quite lucky actually: there is good documentation for it on nothing less than TLDP. It’s a 2004 howto, but it’s still almost perfect. The only difference in syntax for FreeRADIUS’s configuration is the way the password is defined in the users configuration file. I’ve not checked the client-side configuration, of course, since that is probably completely out of date nowadays thanks to WPA Supplicant and NetworkManager.
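
For reference, the syntax change boils down to something like this in the users file (a sketch from memory; check the example configuration shipped with the package):

# 2004-era syntax, as found in the TLDP howto
testuser  Auth-Type := Local, User-Password == "secret"
# current syntax
testuser  Cleartext-Password := "secret"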

The big hurdle was getting FreeRADIUS into a decent shape: simply emerging it and trying to start it would make it fail silently, so I had to kick it into submission, hard. It so happened that a new version of the server was released just this September, so I decided to update to that version and get it working as a proper ebuild. The new ebuild in the tree should work quite nicely; the problem is that if you look at it, it’s hideous. The big problem is that their build system is a complete fsckup: you have to resort to deleting whole subdirectories to configure the features to build, and it has quite a few of them, with over half a dozen database backends and so many integrations that dealing with the optional dependencies is really not funny.

If you used the older ebuilds, or compare the new one to the old ones, you can probably notice that I dropped a number of USE flags, especially those that were so specific to FreeRADIUS that they had an fr- prefix. This is because I’ve followed my usual general idea: USE flags are great if you’re turning on or off features that are heavy or that have external dependencies, but if they just enable or disable codepaths that are barely noticeable, they only add to the noise. For this reason there is now only one local USE flag, for pcap (which is actually a global flag with a more detailed description).

Also, when you build with SSL (which you want to do when doing 802.1x!) you need a CA to sign the users’ certificates. While you can set up your own CA relatively easily, like you already do for OpenVPN, I’ve made it much easier by wiring the originally-provided script to the --config option for the package (so you just need to run emerge --config freeradius for it to work).

As I said, the build system is extremely bad, to the point that they actually commit the whole set of autotools-generated files to their git repository, which is not a good idea. At least this time around I was able to free up the files directory, as all the patches are handled as tarballed patchsets on my devspace; if you want to see the patches in a more friendly way, I also keep a copy of the repository on Gentoo’s GitHub account, where you can also find a number of other projects that I patched the same way, including Munin.

Due to security issues, the new version of FreeRADIUS I put in the tree is now stable on the two arches that were stable before, and all the old versions are gone, together with their patches (it cleaned up nicely), for the love of our rsyncs. Hopefully that doesn’t screw with anybody’s plans; if somebody has a problem with my changes, feel free to prod me.

September 29, 2012
Mike Gilbert a.k.a. floppym (homepage, stats, bugs)
Slot-operator deps for V8 (September 29, 2012, 03:11 UTC)

The recently approved EAPI 5 adds a feature called "slot-operator dependencies" to the package manager specification. Once these dependencies are implemented in the portage tree, the package manager will be able to automatically trigger package rebuilds when library ABI changes occur. Long-term, this will greatly reduce the need for revdep-rebuild.

If you are a Chromium user on Gentoo and you don't use portage-2.2, you have probably noticed that we are using the "preserve_old_lib" kludge so that your web browser doesn't break every time you upgrade the V8 JavaScript library. This leaves old versions of V8 installed on your system until you manually clean them up. With slot-operator deps, we can eliminate this kludge, since portage will have enough information to know it needs to rebuild chromium automatically. It's pretty neat.
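
To illustrate, the slot-operator version of the dependency is a tiny change; a sketch of the relevant ebuild fragment (not the literal ebuild from my overlay):

# EAPI=5: the := operator records v8's (sub-)slot at build time, so the
# package manager knows to rebuild chromium when v8 moves to a new sub-slot
DEPEND="dev-lang/v8:="
RDEPEND="${DEPEND}"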

I have forked the dev-lang/v8 and www-client/chromium ebuilds into my overlay to test this new feature; we can't really apply it in the main portage tree until a new enough version of portage has been stabilized. I will be maintaining the latest chromium dev channel release, plus a couple of versions of v8 in my overlay.

If you would like to try it out, you can install my overlay with layman -a floppym. Once you've upgraded to the versions in my overlay, upgrading/downgrading dev-lang/v8 should automatically trigger a chromium rebuild.

If you run into any issues, please file a bug.

September 28, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Debugging SELinux file context mismatches (September 28, 2012, 08:52 UTC)

I originally posted the question on gentoo-hardened ML, but Sven Vermeulen advised me to file a bug, so there it is: bug #436474.

The problem I hit is that my ~/.config/chromium/ directory should have unconfined_u:object_r:chromium_xdg_config_t context, but it has unconfined_u:object_r:xdg_config_home_t instead.

I could manually force the "right" context, but it turned out that even removing the directory in question and letting the browser re-create it still results in the wrong context. It looks like something deeper is broken (maybe just on my system), and fixing the root cause is always better. After all, other people may hit this problem too.
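
If you want to poke at a similar mismatch yourself, a good first step is to ask the loaded policy what context it expects for the path, and then relabel accordingly (standard SELinux userspace tools, with my path filled in):

$ matchpathcon ~/.config/chromium
$ restorecon -Rv ~/.config/chromium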

Here are the error messages that appear on chromium launch:


$ chromium
[2557:2557:1727940797:ERROR:process_singleton_linux.cc(263)] Failed to
create /home/ph/.config/chromium/SingletonLock: Permission denied
[2557:2557:1727941544:ERROR:chrome_browser_main.cc(1552)] Failed to
create a ProcessSingleton for your profile directory. This means that
running multiple instances would start multiple browser processes rather
than opening a new window in the existing process. Aborting now to avoid
profile corruption.

And SELinux messages:

# audit2allow -d
#============= chromium_t ==============
allow chromium_t xdg_config_home_t:file create;
allow chromium_t xdg_config_home_t:lnk_file { read create };

[ 107.872466] type=1400 audit(1348505952.982:67): avc: denied { read
} for pid=2166 comm="chrome" name="SingletonLock" dev="sda1" ino=522327
scontext=unconfined_u:unconfined_r:chromium_t
tcontext=unconfined_u:object_r:xdg_config_home_t tclass=lnk_file
[ 107.873916] type=1400 audit(1348505952.983:68): avc: denied {
create } for pid=2178 comm="Chrome_FileThre"
name=".org.chromium.Chromium.ZO3dGF"
scontext=unconfined_u:unconfined_r:chromium_t
tcontext=unconfined_u:object_r:xdg_config_home_t tclass=file

If you have any ideas how to further debug it, or how to solve it, please share (e.g. comment on the bug or send me an e-mail). Thanks!

September 27, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
IPv6 in the workplace (September 27, 2012, 23:27 UTC)

I noted last week that, for some reason I couldn’t understand, access times for some websites were quite a bit lower over IPv6 than over IPv4. This seems to be consistent within the network as well, even though I’m still not sure whether it’s a matter of smaller overhead in IPv6 itself, or mostly because the router doesn’t have to do the same level of connection tracking for NAT and PAT in that case.

But it’s not all smooth sailing: while NetworkManager is pretty happy finding both the address and the DNS server advertised with radvd, neither Mac OS X (10.5, 10.6 and 10.8) nor Windows 7 could get the DNS server. This is a known limitation, and the only solution is a hybrid network using stateless autoconfiguration (radvd) together with DHCPv6 for the extra information (NTP and DNS servers, among others).

So I first tried to set up ISC DHCP to serve out the v6 information, since that was the DHCP server I was already using. But this is extremely cumbersome. The first problem is that you can’t have a single dhcpd process serving both DHCP and DHCPv6, even though they use different ports, so you have to make use of dhcpd’s init script multiplexing support. Okay, not that big a deal, is it? Strike two: the configuration file can’t be shared either, and the option names are different between the two implementations. What?

Okay, so: multiplexed init scripts and separate configuration files. Is that all? It should be, but honestly I’ve been unable to get it to work. I’m not sure if I just screwed up the configuration or what else, but it was trouble. Add to that the fact that with the current init script you have no way to just reload the configuration, you actually have to restart the service (and there is no configuration check on stop, which means you might take your DHCP down), plus the fact that the man page for dhcpd.conf does not list most of the IPv6 options, and I got tired.

Luckily for me, net-dns/dnsmasq (which we’re already using as our local DNS server; I used unbound before, but in this case dnsmasq seemed much easier as we need local hostnames, whereas at my house I simply used public IPv6 addresses) supports both DHCP and DHCPv6, responds to both with the same process, and supports a reload command. More interestingly, it seems it could also take over the router advertisement job that radvd currently handles, but I haven’t tried that yet.
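
For the curious, the dnsmasq side of such a hybrid setup is only a couple of lines. A sketch based on the dnsmasq documentation (the interface name is an assumption, and ra-stateless needs a reasonably recent dnsmasq):

# answer stateless DHCPv6 information-requests (DNS, NTP, ...) from hosts
# that autoconfigure their addresses via router advertisements
dhcp-range=::,constructor:eth0,ra-stateless
# optionally have dnsmasq send the router advertisements too, replacing radvd
enable-ra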

With this change, I was finally able to get Windows 7 and Mac OS X to make DNS requests to the router’s IPv6 address, in the hope that this improves the general network’s responsiveness (at first glance it seems to be working). So I started checking what supports what across the various systems we have in the office, testing also with test-ipv6:

  • Windows 7 now gets both IPv6 addresses (temporary and mac-based) and DNS servers; test results 10/10;
  • Mac OS X Mountain Lion gets the stateless IPv6 address as well as the DNS server; test results 10/10;
  • Mac OS X Snow Leopard gets the IPv6 address but doesn’t see the DNS server either way; test results 10/10;
  • Linux gets the IPv6 address and the DNS server; test results 10/10;
  • Windows XP (after adding the protocol manually, of course) does not let you see which IP addresses it has, so I don’t know if it gets the DNS right, but it seems to work; test results 10/10;
  • Kindle Fire (first generation) does not show you the addresses it got, but tests pass 10/10 so I assume it’s working;
  • iPhone, running iOS 5 (colleague of mine) doesn’t show the addresses but tests also pass 10/10;
  • iPad, running iOS 6 (mine) shows the IPv6 DNS address, but tests don’t pass, 0/10;
  • Desire HD (CyanogenMod 7) doesn’t show any address, and tests don’t pass 0/10.

Something seems to be extremely wrong with these results honestly, but I’m not yet sure what.

Unfortunately, I haven’t had time to experiment with Flash and Red5 to see whether there is any reason for us to work on supporting IPv6 in our products yet (if those two components don’t support it, there’s no real reason for us to look into it for now), but in the meantime the advantages of starting the move to IPv6 are showing themselves pretty clearly..

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: FAQ (September 27, 2012, 12:04 UTC)

All the common questions regarding travelling, transportation, event details, sightseeing and much more are answered in this Frequently Asked Questions page. Feel free to ask more questions, so we can include them in the FAQ and make it more complete.

David Abbott a.k.a. dabbott (homepage, stats, bugs)
epatch_user to the rescue ! (September 27, 2012, 09:38 UTC)

I was updating one of my boxens and ran into Bug 434686. In the bug, Martin describes the simple way we as users can apply bug-fix patches to packages that fail to build. This post is, more than anything, a reminder for me on how to do it. epatch_user has been blogged about before; dilfridge talks about it and says "A neat trick for testing patches in Gentoo (source-based distros are great!)".

As Martin explained in the bug and with the patch supplied by Liongene, here is how it works!

# mkdir -p /etc/portage/patches/net-print/cups-filters-1.0.24
# wget -O /etc/portage/patches/net-print/cups-filters-1.0.24/cups-filters-1.0.24-c++11.patch 'https://434686.bugs.gentoo.org/attachment.cgi?id=323788'
# emerge -1 net-print/cups-filters

Now that is cool :)

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)

I’m not talking about web links but symlinks on the system. I’ve got to fix up the tinderbox as we speak (the dev-libs/mpc update broke GCC; I’m thrilled that soon enough a C++ library upgrade will be able to break the compiler entirely!) but before doing so I was checking what was around on that system. And it turns out that way too many packages use absolute symlinks, both when installing from makefiles and when creating ebuilds.

And there are more problems as well: a ton of world-writeable directories are also to be found on my tinderbox as it is, mostly in /var but not limited to it. How many of these could be exploited for security issues, and how many of them are much less warranted than the stuff the hyped security guys focus on? Probably lots. Should I start reporting all of these as bugs? Possibly, yes, but I’m not sure I’d be listened to, and for many packages the solution is going to be a very simple “drop the package, as nobody’s interested”.
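
If you want to scan your own system for the same kind of debris, a couple of one-liners go a long way (adjust the -xdev filesystem boundary to taste):

# absolute symlinks, candidates for being made relative
find / -xdev -type l -lname '/*'
# world-writeable directories lacking the sticky bit
find / -xdev -type d -perm -0002 ! -perm -1000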

Beyond this, there is a lot of debris that exists because upstream doesn’t understand how to properly install files on a Linux system. Sometimes there’s little you can do about it, as you need to keep things around for compatibility, but many other times it’s extremely helpful to actually move things around in such a way that you don’t pollute the directories that, for instance, the linker has to search through to find the runtime libraries.

One very quick example of this latter situation is moving daemons’ binaries out of /usr/sbin into /usr/libexec or whatever else you prefer (/usr/lib/${PN} works just as well for me, it’s just a bit more arcane), so that they don’t pollute the path space for the root user. Especially when said daemons have no way to be executed in the foreground for debugging purposes.

At any rate, this is yet another piece of the puzzle that the tinderbox is helping to solve, so thanks again to all the people who contributed to it, the work is in progress (and yes, I did solve the mpc issue; bugs will resume flowing).

September 26, 2012
Hans de Graaff a.k.a. graaff (homepage, stats, bugs)

I've just updated the text on the Gentoo Wiki page on Ruby 1.9 to indicate that we now support eselecting ruby19 as the default ruby interpreter. This has not been tested extensively, so there may still be some problems with it. Please open bugs if you run into problems.
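
For reference, switching is the usual eselect dance:

# eselect ruby list
# eselect ruby set ruby19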

Most packages are now ready for ruby 1.9. If your favorite packages are not ready yet, please file a bug as well. We expect to make ruby 1.9 the default ruby interpreter in a few months time at the most. Your bug reports can help speed that up.

On a related note, we will be masking Ruby Enterprise Edition (ree18) shortly. With Ruby 1.9 now stable and well-supported, we no longer see the need to also provide Ruby Enterprise Edition. This is also upstream's advice. On top of this, the last few releases of ree18 never worked properly on Gentoo due to threading issues, and these are currently already hard-masked.

Since we realize people may depend on ree18 and migration to ruby19 may not be straightforward, we intend to move slowly here. Expect a package mask within a month or so, and instead of the customary month we probably won't remove ree18 until after three months or so. That should give everyone plenty of time to migrate.

Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5-hdepend (September 26, 2012, 05:04 UTC)

In portage-2.1.11.22 and 2.2.0_alpha133 there’s support for the experimental EAPI 5-hdepend, which adds the HDEPEND variable, used to represent build-time host dependencies. For build-time target dependencies, use DEPEND (if the host is the target, then both HDEPEND and DEPEND will be installed on it). There’s a special “targetroot” USE flag that will be automatically enabled for packages that are built for installation into a target ROOT, and will otherwise be automatically disabled. This flag may be used to control conditional dependencies, and ebuilds that use it need to add it to IUSE unless it happens to be included in the profile’s IUSE_IMPLICIT variable.
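
To illustrate, here is a hypothetical ebuild fragment (not from any real package) using the new variable and flag:

EAPI="5-hdepend"
IUSE="targetroot"
# build tools that must run on the build host
HDEPEND="dev-lang/perl"
# build-time dependencies for the target; the conditional part is
# only pulled in when building into a target ROOT
DEPEND="sys-libs/zlib targetroot? ( sys-libs/ncurses )"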

For those who may not be familiar with the history of HDEPEND, it was originally suggested in bug #317337. That was in 2010, and later that year there was some discussion about it on the chromium-os-dev mailing list. Recently, I suggested on the gentoo-dev mailing list that it be included in EAPI 5, but it didn’t make it in. Since then there’s been some renewed effort, and now the patch is included in mainline Portage.

September 24, 2012
Richard Freeman a.k.a. rich0 (homepage, stats, bugs)
Gentoo EC2 Tutorial / Bootstrapping (September 24, 2012, 14:20 UTC)

I want to accomplish a few things with this post.

First, I’d like to give more attention to the work recently done by edowd on Bootstrapping Gentoo in EC2.

Second, I’d like to introduce a few enhancements I’ve made on these (some being merged upstream already).

Third, I’d like to turn this into a bit of a tutorial into getting started with EC2 as well since these scripts make it brain-dead simple.

I’ve previously written on building a Gentoo EC2 image from scratch, but those instructions do not work on EBS instances without adjustment, and they’re fairly manual. Edowd extended this work by porting it to EBS and writing scripts to build a Gentoo install from a stage3 on EC2. I’ve further extended this by adding a rudimentary plugin framework, so that this can be used to bootstrap servers for various purposes. I’ve been inspired by some of the things I’ve seen done with Chef, and while that tool doesn’t fit perfectly with the Gentoo design, this is a step in that direction.

What follows is a step-by-step howto that assumes you’re reading this on Gentoo and little else, and ends up with you at a shell on your own server on EC2. Those familiar with EC2 can safely skim over the early parts until you get to the git clone step.

  1. To get started, go to aws.amazon.com, and go through the steps of creating an account if you don’t already have one. You’ll need to specify payment details/etc. If you buy stuff from amazon just use your existing account (if you want), and there isn’t much more than enabling AWS.
  2. Log into aws.amazon.com, and from the top right corner drop-down under either your name or My Account/Console choose “Security Credentials”.
  3. Browse down to access credentials, click on the X.509 certificate tab, generate a certificate, and then download both the certificate and private key files. The web services require these to do just about anything on AWS.
  4. On your gentoo system run as root emerge ec2-ami-tools ec2-api-tools. This installs the tools needed to script actions on EC2.
  5. Export EC2_CERT and EC2_PRIVATE_KEY into your environment (likely via .bashrc; see the sketch after this list). These should contain the paths to the certificate and private key files you downloaded earlier. Congratulations: any of the ec2-api-tools should now work.
  6. We’re now going to checkout the scripts to build your server. Go to an empty directory and run git clone git://github.com/rich0/rich0-gentoo-bootstrap.git -b rich0-changes.
  7. chdir to the repository directory if necessary, and within it run ./setup_build_gentoo.sh. This creates security zones and ssh keys automatically for you, and at the end outputs command lines that will build a 32- or 64-bit server. The default security zone will accept inbound connections from anywhere, but unless you’re worried about an ssh zero-day, that really isn’t a big deal.
  8. Run either command line that was generated by the setup script. The parameters tell the script what region to build the server in, what security zone to use, what ssh public key to use, and where to find the private key file for that public key (it created it for you in the current directory).
  9. Go grab a cup of coffee – here is what is happening:
    1. A spot request is created for a half-decent server to be used to build your Gentoo image. This is done to save money: Amazon can kill your bootstrap server if they need it, and you’ll pay the prevailing spot rate. You can tweak the price you’re willing to pay in the script; lower prices mean more waiting. Right now I’ve set it pretty high for testing purposes.
    2. The script waits for an instance to be created and boot. The build server right now uses an amazon image – not Gentoo-based. That could be easily tweaked – you don’t need anything in particular to bootstrap gentoo as long as it can extract a stage3 tarball.
    3. A few build scripts are scp’ed to the server and run. The server formats an EBS partition for gentoo and mounts it.
    4. A stage3 and portage snapshot are downloaded and extracted. Portage config files (world, make.conf, etc) are populated. A script is created inside the EBS volume, and executed via chroot.
    5. That script basically does the typical handbook install: emerge --sync, update world (which has all the essentials in it, like dhcpcd and so on), build a kernel, configure rc files, etc.
    6. The bootstrap server terminates, leaving behind the EBS volume containing the new gentoo image. A snapshot is created of this image and registered as an AMI.
    7. A micro instance of the AMI is launched to test it. After successful testing it is terminated.
  10. After the script is finished check the output to see that the server worked. If you want it outputs a command line to make the server public – otherwise only you can see/run it.
  11. To run your server go to aws.amazon.com, sign in if necessary, browse to the EC2 dashboard. Click on AMIs on the left side, select your new gentoo AMI, and launch it (micro instances are cheap for testing purposes). Go to instances on the left side and hit refresh until your instance is running. Click on it and look down in the details for the public DNS entry.
  12. To connect to your instance run ssh -i <path to pem file in your bootstrap directory> ec2-user@<public DNS name of your server>. You can sudo to root (no password).
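
For step 5, the exports look something like this (file names are examples; use the paths where you saved your certificate and private key):

# ~/.bashrc
export EC2_CERT=$HOME/.ec2/cert-XXXXXXXX.pem
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-XXXXXXXX.pem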

That’s it – you have a server in the cloud. When you’re done be sure to clean up to avoid excessive charges (a few cents an hour can add up). Check the instances section and TERMINATE (not stop) any instances that are there. You will be billed by the month for storage so de-register AMIs you don’t need and go to the snapshot section and delete their corresponding snapshots.

Now, all that is useful, but you probably want to tailor your instance. You can of course do that interactively, but if you want to script it check out the plugins in the plugin directory. Just add a path to a plugin file at the end of the command line to build the instance and it will tailor your image accordingly. I plan to clean up the scripts a bit more to move anything discretionary into the plugins (you don’t NEED fcron or atop on a server).

The plugins/desktop plugin is a work in progress, but I think it should work now (takes the better part of a day to build). It only works 32-bit right now due to the profile line. However, if you run it you should be able to connect with x2goclient and have a KDE virtual desktop. A word of warning – a micro instance is a bit underpowered for this.

And on a side note, if somebody could close bugs 427722 and 423855 that would eliminate two hacks in my plugin. The stable NX doesn’t work with x2go (I don’t know if it works for anything else), and the stable gst-plugins-xvideo is missing a dependency. The latter bug will bite anybody who tries to install a clean stage3 and emerge kde-meta.

All of this is very much a work in progress. Patches or pull requests are welcome, and edowd is maintaining a nice set of up-to-date gentoo images for public use based on his scripts.


September 22, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)
preserve-libs now available in Portage 2.1 branch (September 22, 2012, 05:22 UTC)

EAPI 5 includes support for automatic rebuilds via the slot-operator and sub-slots, which has potential to make @preserved-rebuild unnecessary (see Diego’s blog post regarding symbol collisions and bug #364425 for some examples of @preserved-rebuild shortcomings). Since this support for automatic rebuilds has potential to greatly improve the user-friendliness of preserve-libs, I have decided to make preserve-libs available in the 2.1 branch of portage (beginning with portage-2.1.11.20). It’s not enabled by default, so you’ll have to set FEATURES=”preserve-libs” in make.conf if you want to enable it. After EAPI 5 and automatic rebuilds have gained widespread adoption, I might consider enabling preserve-libs by default.

September 20, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)

In portage-2.1.11.19 and 2.2.0_alpha130 there’s support for EAPI 5, which implements all of the features that were approved by the Gentoo Council for EAPI 5. There are no differences since EAPI 5_pre2.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Stabilization hiccup with dev-perl/net-server-2.6.0 (September 20, 2012, 15:35 UTC)

What happened?

Sep 13th I stabilized net-analyzer/munin-2.0.5-r1 (security bug #412881). I use automated repoman checks and USE="-ipv6", and everything was fine at the time I committed the stabilization (also, note that there is no mention of net-server in that security bug).

Sep 14th Seraphim Mellos filed bug #434978 about munin pulling in ~arch net-server.

Sep 16th the x86@ team was re-added to security bug #412881. Meanwhile, Mr_Bones_ pinged me on IRC. Also, Diego Elio Pettenò (flameeyes) filed bug #435242 against repoman for not catching the dependency problem.

Sep 17th I stabilized dev-perl/net-server-2.6.0 on x86, fixing the immediate problem.

Sep 18th the repoman fix has been released in portage-2.1.11.18 and 2.2.0_alpha129.

Now the only remaining thing to do is to push the portage/repoman fix to stable. I especially like how quickly the fix for the root cause (the repoman check) was produced and released.

September 18, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Gentoo: IPSec, L2TP VPN for iOS (September 18, 2012, 13:07 UTC)

There are thousands of guides out there on this subject, yet I still struggled to set up an IPsec VPN at first. This is a HOWTO for my own benefit; maybe someone else will find it useful too. I struggled because most of the guides involved setting up the VPN on a NAT’d host and connecting to the VPN from inside the network. I didn’t do that on my Linode, which has a static public IP.

My objectives were clear:

  1. Create a connection point that was semi-secure while connecting to open wifi networks
  2. Bypass some “You are not in the US” restrictions while on the road

Step 1: Install the applications: net-misc/openswan and net-dialup/xl2tpd
Step 2: Configure openswan:

# cat /etc/ipsec.conf 
config setup
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:!10.152.2.0/24
    oe=off
    protostack=auto

conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    rekey=no
    ikelifetime=8h
    keylife=1h
    type=transport
    left=1.1.1.1
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any
    dpddelay=15
    dpdtimeout=30
    dpdaction=clear
# cat /etc/ipsec.secrets
1.1.1.1 %any: PSK "TestSecret"

Where 1.1.1.1 is your public eth0 address and 10.152.2.0/24 is the subnet that xl2tpd will assign IPs from (it can be anything; I picked this on the advice of a guide because it is unlikely to be assigned by a router on a public network).

Step 3: Configure xl2tpd:

# cat /etc/xl2tpd/xl2tpd.conf
[global]
ipsec saref = no

[lns default]
ip range = 10.152.2.2-10.152.2.254
local ip = 10.152.2.1
require chap = yes
refuse pap = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

The local IP must be inside the subnet but outside the IP range above.

# cat /etc/ppp/options.xl2tpd
refuse-mschap-v2
refuse-mschap
ms-dns 8.8.8.8
ms-dns 8.8.4.4
asyncmap 0
auth
lock
hide-password
local
#debug
name l2tpd
proxyarp
lcp-echo-interval 30
lcp-echo-failure 4

The ms-dns lines are configurable to any DNS server you have access to.

# cat /etc/ppp/chap-secrets
# Format:
# client server secret IP-addresses
#
# Two lines are needed since it is two-sided auth
test l2tpd testpass *
l2tpd test testpass *

Step 4: Configure kernel parameters (sysctl)

# cat /etc/sysctl.conf
# only values specific for ipsec/l2tp functioning are shown here. merge with
# existing file
# iPad VPN
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1

Remember that sysctl.conf is evaluated at boot so run sysctl -p to get the settings enabled now as well.

Step 5: Configure firewall (iptables):
This is the critical step that I wasn’t grokking from the existing guides in the wild. Even when bringing the firewall down to test, you need the NAT/forwarding rules:

# iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -A FORWARD -s 10.152.2.0/24 -j ACCEPT
# iptables -A FORWARD -j REJECT
# iptables -t nat -A POSTROUTING -s 10.152.2.0/24 -o eth0 -j MASQUERADE
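
One thing the steps above leave implicit is starting the daemons and making the firewall rules survive a reboot; on Gentoo/OpenRC that would look roughly like this (assuming the init scripts installed by the packages are named ipsec and xl2tpd):

# /etc/init.d/ipsec start && rc-update add ipsec default
# /etc/init.d/xl2tpd start && rc-update add xl2tpd default
# /etc/init.d/iptables save && rc-update add iptables default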

Step 6: Configure the device/client:
Settings -> General -> Network -> VPN -> Add VPN Configuration

L2TP
Description: Description
Server: 1.1.1.1 (or the hostname)
Account: test
RSA SecurID=OFF
Password: testpass
Secret: TestSecret
Send All Traffic=On

Step 7: Verify it works by going to some IP-display webpage; it should show 1.1.1.1

Conclusion: The above examples should be enough to get the VPN working. There are some tweaking opportunities that I didn’t document or elaborate on; there are plenty of examples out there to look at and research, however. I originally set all this up without the firewall configuration, and the client would connect but there would be no onward internet activity. It acted just as if an invalid DNS server were configured; at that point I looked into setting up a NAT, dnsmasq on the local interface, and other weird things. In the end, I just needed to forward the traffic properly.

With that knowledge of the firewall issue, the ultimate instructions would probably be this page: https://www.openswan.org/projects/openswan/wiki/L2TPIPsec_configuration_using_openswan_and_xl2tpd

September 14, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: room names (September 14, 2012, 16:36 UTC)

As you have probably seen in the schedule, we have multiple rooms that carry ugly university names like 107, 155 or 349. We would like to rename them during the conference so people can remember them more easily. So exercise your creativity and send us some ideas!

September 13, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The schedule (September 13, 2012, 14:47 UTC)

The Call for Papers has ended and the schedule is now up for the four-in-one event that is going to take place soon in Prague. The full schedule of all the co-hosted conferences can be found here! Don’t forget to register!

Gentoo Miniconf: It will take place on Saturday and Sunday with a plethora of amazing talks by experienced Developers and Contributors, all around Gentoo, targeting both desktop and server environments!

On Saturday morning Fabian Groffen, Gentoo Council member, along with Robin H. Johnson, member of the Board of Trustees, will give us a quick view of how those two highest authorities manage the whole project. Afterwards there are going to be a few talks on various topics, like managing your home directory, the KDE team workflow, the important topic of security, and a benchmarking suite, all given by people important to the project. A cool Catalyst workshop comes next, followed by a workshop on Gentoo Prefix, and at the end we’re going to take part in BoFs on the Infrastructure and Gentoo PR teams, which will cover hot topics like the Git migration and our website.

On Sunday we’ll see how a large company (IsoHunt) uses Gentoo, the tools it has developed and the problems it has encountered. Then a cool talk about 3D games and graphics performance takes place, followed by a presentation on SHA1 and OpenPGP, which is the precursor to the Key Signing Party!! The second part of the Catalyst workshop comes next, along with a Puppet workshop. At the end there are again two BoFs, the first about automated testing and the second about how we can attract more contributors and enlarge our cool project.

And a sneak peek on the other co-hosted conferences:

Future Media, which will be held on Saturday, is a special feature track about the influence of developments in technology, social media and design on society. It will have talks like the future of Wikipedia and Open Data in general by Lydia Pintscher, and using FOSS and open hardware for disaster relief by Shane Couglan.

On the first day of the openSUSE Conference, Michael Meeks will tell you all about what’s new in LibreOffice, Klaas Freitag will give everyone a peek under the hood of ownCloud, and for the more technical users, Stefan Seyfried will show you how to crash the Linux kernel for fun and backtraces. Saturday night there’ll be a good party, and the next day musician Sam Aaron will talk about Zen and how to live-program music like he did during the party. Later, Libor Pecháček will explain the process of getting software from the community into commercial enterprises, and at the end of the day Miguel Angel Barajas Watson will show us how a computer could win Jeopardy using SUSE, Power and Hadoop. The openSUSE event continues on Monday and Tuesday with many workshops and BoF sessions planned, as well as a few large-room discussions about the future of the openSUSE development and release process.

On Saturday the LinuxDays track features a number of Czech talks, like an introduction to Gentoo by Tomáš Chvátal with his talk titled “If it moves, compile it!” (‘Pokud se to hýbe, zkompiluj to!’). Fedora is represented by Jiří Eischmann & Jaroslav Řezník later in the day. There are also a few real ninja-style talks, like Petr Baudiš on low-level programming and Thomas Renninger on modern CPU power usage monitoring (these two are in English). During Saturday there will also be a track of graphics workshops in Czech (Gimp, Inkscape, Scribus) followed by a 3D printing workshop (reprap!). Sunday is kicked off by Vojtěch Trefný explaining how to use Canonical’s Launchpad as a place to host your project (CZ). Those interested in networking will be taken care of by Pavel Šimerda (news from Linux networking) and Radek Neužil, who explains how to use networks securely (both CZ). You can also learn all about how to set up a Linux desktop/server solution for educational purposes (EN) and follow Vladimír Čunát talking about NixOS and the unique package manager this OS is built on. The LinuxDays track will be closed by Petr Krčmář (chief editor of root.cz) and Tomáš Matějíček (author of Slax) talking about the future of Slax (CZ).

Find your way to your favorite talks. Come on, it’s easy!

September 12, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5_pre2 (September 12, 2012, 08:47 UTC)

In portage-2.1.11.16 and 2.2.0_alpha127 there’s support for EAPI 5_pre2, which implements all of the features that were approved for EAPI 5 in the Gentoo Council meeting on September 11. The only difference from EAPI 5_pre1 is that the “user patches” feature has been removed.

September 11, 2012
Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
initramfs documentation updates (September 11, 2012, 23:31 UTC)

i just finished hacking on our XML for the month. several months ago, sven mentioned the changes needed to get the handbooks updated with initramfs/initrd instructions for separate /usr partitions. it took me a few hours, but i finally closed bug numbers 415175, 434550, 434554, and 434732. thanks to raúl for the patches.

i initially started putting in the patches as-is, but then i noticed that the initramfs descriptions were just copied from the x86+amd64 handbook. so, i stripped them out, and rewrote them as an included section common to all affected architecture handbooks. that <include> is then dynamically inserted by our XML processor, dropping the instructions into the appropriate place, so that there’s no extraneous text duplication.

the raw handbook XML looks something like this:

<pre caption="Installing the kernel">
# <i>cp arch/<keyval id="arch-sub"/>/boot/bzImage /boot/<keyval id="kernel-name"
/></i>
</pre>

</body>
</subsection>
<subsection>
<include href="hb-install-initramfs.xml"/>
</subsection>

</section>

that bit about include href="hb-install-initramfs.xml" fills in the next subsection with whatever we put in the hb-install-initramfs.xml include, which is never viewed by itself. little tricks like this make it much easier to maintain the documentation…we make one change to an include, and it’s propagated to all documents that use it. same goes for things like <keyval> — that variable is set elsewhere in our documentation, so that as kernel versions or ISO sizes change, we can update that value in one place (handbook-$ARCH.xml). every instance of the variable is automatically filled in when you view the handbook in your web browser.

not to say everything was smooth sailing while updating the handbooks…i ran into a few snags. i figured out why my initial commit attempts were blocked by our pre-commit hooks: it’s not that the xml interpreter was giving me spurious errors on each check. (“why you blocking me? i’m head of the project! DON’T YOU KNOW WHO I AM?!”) instead, i forgot a slash in a </body> element. THAT ruined the next 300 lines of code. solution: fix, re-run xmllint --valid --noout, add commit message, push to CVS.

the handbooks are now all set for the new initramfs/initrd mojo for those poor, poor souls mounting /usr on a separate partition/disk. my own partition layout is much simpler; i’ve never needed an initramfs.

September 10, 2012
Steve Dibb a.k.a. beandog (homepage, stats, bugs)

I regularly use monit to monitor services and restart them if needed (and possible).  An issue I’ve run into with Gentoo, though, is that openrc doesn’t act as I expect it to.  openrc keeps its own record of the state of a service, and doesn’t look at the actual PID to see if it’s running or not.  In this post, I’m talking about apache.

For context, it’s necessary to share what my monit configuration looks like for apache.  It’s just a simple ‘start’ command for startup and a ‘stop’ command for shutdown:

check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 start" with timeout 60 seconds
  stop program = "/etc/init.d/apache2 stop"

When apache gets started, there are two things that happen on the system: openrc flags it as started, and apache creates a PID file.

The problem I run into is when apache dies unexpectedly, for whatever reason.  Monit will notice that the PID doesn’t exist anymore and will try to restart it using openrc.  This is where things start to go wrong.

To illustrate what happens, I’ll duplicate the scenario by running the command myself.  Here’s openrc starting it, me killing it manually, then openrc trying to start it back up using ‘start’.

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 start
* WARNING: apache2 has already been started

You can see that ‘status’ properly returns that it has crashed, but when running ‘start’, openrc thinks otherwise.  So, even though an openrc status check reports that the process is dead, ‘start’ only consults openrc’s own internal state to decide.

This gets a little weirder: if I run ‘stop’, the init script will recognize that the process is not running and will reset openrc’s status to stopped.  That is actually a good thing, and it makes running ‘stop’ a reliable command.

Resuming the same state as above, here’s what happens when I run ‘stop’:

# /etc/init.d/apache2 stop
* apache2 not running (no pid file)

Now if I run it again, it checks both the process and the openrc status, and gives a different message, the same one it would give if the service had already been stopped.

# /etc/init.d/apache2 stop
* WARNING: apache2 is already stopped

So, the problem this creates for me is that if a process has died, monit will not run the stop command, because the process is already dead and there’s no reason to run it.  It will run ‘start’, which will insist that the service is already running.  Monit (depending on your configuration) will try a few more times, and then just give up completely, leaving your process completely dead.

The solution I’m using is to tell monit to run ‘restart’ as the start command instead of ‘start’.  The reason is that restart doesn’t care whether the service is stopped or started; it will successfully get it started again.
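
In monit terms it's a one-word change to the stanza from the top of this post:

check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 restart" with timeout 60 seconds
  stop program = "/etc/init.d/apache2 stop"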

I’ll repeat my original test case, to demonstrate how this works:

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 restart
* apache2 not running (no pid file)
* Starting apache2 …

I don’t know if my expectations of openrc are wrong or not, but it seems to me like it relies on its internal status in some cases instead of checking whether the actual process is running.  Monit takes on that responsibility, of course, and it’s good to have multiple things working together, but I wish openrc did a bit more strict checking.

I don’t know how to fix it, either.  openrc has arguments for displaying debug and verbose output.  It will display messages on the first run, but not the second, so I don’t know where it’s calling stuff.

# /etc/init.d/apache2 -d -v start
<lots of output>
# /etc/init.d/apache2 -d -v start
* WARNING: apache2 has already been started

No extra output on the second one.  Is this even a ‘problem’ that should be fixed, or not?  That’s kinda where I’m at right now, so I’m just tweaking my monit configuration so it works for me.


Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
ffmpeg saves the day (.mts files) (September 10, 2012, 07:17 UTC)

If you need to convert .mts files to .mov (so that e.g. iMovie can import them), I found ffmpeg to be the best tool for the task (I don't want to install and run "free format converters" that are usually Windows-only and come from untrusted sources). This post is inspired by the iMovie and MTS blog post.

First I tried just changing the container:

for x in *.MTS; do ffmpeg -i ${x} -c copy ${x/.MTS/.mov}; done


But QuickTime could not play sound from those files because of the AC-3 codec. Also, the quality of the video playback was very poor. The other command I tried was:

for x in *.MTS; do ffmpeg -i ${x} -vcodec copy -acodec mp2 -ac 2 ${x/.MTS/.mov}; done

Now QuickTime was able to play the sound, but the problems with video remained, and iMovie was still unable to import the resulting files (silently: I got no error message, just nothing happened when trying to import).

The final command, which proved to work well, is this:

for x in *.MTS; do ffmpeg -i ${x} -vcodec mpeg1video -acodec mp2 -ac 2 -sameq ${x/.MTS/.mov}; done

The video converted perfectly, and iMovie successfully imported the movies. Note the useful bash substitution of the extension, ${x/.MTS/.mov}. Enjoy!
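
P.S. If that parameter expansion is new to you, here is a quick shell illustration (the filename is made up):

$ x=clip001.MTS
$ echo "${x/.MTS/.mov}"
clip001.mov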




September 08, 2012
Anthony Basile a.k.a. blueness (homepage, stats, bugs)

Hi everyone,

I’d like to announce a new initiative within the mips arch team. We are now supporting an xfce4-based desktop system for the Lemote Yeeloong netbook.  The images can be found on any Gentoo mirror, under gentoo/experimental/mips/desktop-loongson2f.  The installation instructions can be found here.  The Yeeloong netbook is particularly interesting because it only uses “free” hardware, i.e. hardware which doesn’t require any proprietary code.  It is manufactured by Lemote in China, and distributed and promoted in the US by “Freedom Included”.  It is how Richard Stallman does his computing.

I’m blogging because I thought it was important for Planet Gentoo to know that mips devices are currently being manufactured and used in netbooks as well as embedded systems.  The Gentoo mips team has risen to the challenge of targeting these systems and maintaining natively compiled stage4s for them.  Why stage4s?  And why a full desktop for the Yeeloong?  These processors are slow, so going from a stage3 to a desktop takes about three days on the Yeeloong.  Also, the Yeeloong sports a little-endian mips64 processor, the loongson2f, and we support three ABIs: o32, n32 and n64, with n32 being the preferred one.  This significantly increases the time to build glibc and other core packages.  I provide two images, a vanilla one and a hardened one.  The latter adds full hardening (pie, ssp, _FORTIFY_SOURCE=2, bind now, relro) to the toolchain and userland binaries, as we do for amd64 and i686 in hardened Gentoo.  I have not ported over the hardened kernel, however.

I allude above to “other” targeted devices.  I am also maintaining some mips uclibc systems (both hardened and vanilla), which are on the Gentoo mirrors under experimental/mips/uclibc.  But I will say more about these later, as part of an initiative to maintain hardened uclibc systems on “alternative” architectures such as arm, mips and ppc, as well as amd64 and i686.

You can read the full installation instructions, but here’s a quick summary, since it doesn’t follow the usual Gentoo method of starting from a stage3:

  • Prepare either a pen drive or a tftp server with a rescue image: netboot-yeeloong.img
  • Turn on the yeeloong and hit the Del key repeatedly until you get the firmware prompt: PMON>
  • If netbooting, add an IP address and point to netboot-yeeloong.img.  If using a pen drive, point to the image on the drive.  Either way, boot into the rescue environment.
  • Partition and format the drive.
  • Download the desktop image from a mirror via http or ftp.  It’s about 350 MB in size.
  • Unpack the image (a rough sketch follows this list).  It contains not only the userland, but also a kernel.
  • Reboot to the PMON> prompt.  Point it at the kernel on the drive.  PMON will remember your choice, so you will not have to repeat this step.
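
Here is that sketch of the download-and-unpack step from the rescue shell; the mirror URL and image name below are placeholders, and I assume the new root is mounted at /mnt/gentoo:

# placeholders only: substitute a real mirror and the current image name
cd /mnt/gentoo
wget http://<mirror>/gentoo/experimental/mips/desktop-loongson2f/<image>.tar.bz2
tar xpf <image>.tar.bz2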

Once installed, you will log in as an ordinary user with sudo rights; the username and password are both “gentoo”.  The root password is set to “root”.  It is an ordinary Gentoo system, so edit your make.conf, run emerge --sync and add whatever packages you like!  File bugs to: blueness@gentoo.org with a CC to mips@gentoo.org.

If you have a Yeeloong or go out and buy one, consider trying out this image.

September 04, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Another report from rarely updated system (September 04, 2012, 11:05 UTC)

This is the second post about updating a system I rarely updated; if you're interested, read the first post. I recommend more frequent updates, but I also want to show that it's possible to update without re-installing, and how to solve common problems.

Read more »

September 03, 2012
Doug Goldstein a.k.a. cardoe (homepage, stats, bugs)
Unofficial NVidia bugzilla? (September 03, 2012, 04:52 UTC)

The idea for this really comes from the Unofficial ATI Bugzilla at http://ati.cchtml.com, which appears to be successful. For NVidia issues, the official way has been to email linux-bugs@nvidia.com; the unofficial method is posting on http://nvnews.net and hoping for a reply. Unfortunately I don’t find forums terribly useful for bug reports, and their search functionality is less than ideal for tracking issues.

I’ve been thinking of spinning up an Unofficial NVidia Bugzilla instance and inviting all distros to use it, as well as the NVidia Linux engineers. But obviously I’d need some user/developer interest in this.

Would you use it?


Tagged: bug tracker, bugzilla, NVIDIA, nvidia-drivers

Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5_pre1 (September 03, 2012, 00:25 UTC)

In portage-2.1.11.13 and 2.2.0_alpha124 there’s support for EAPI 5_pre1, which implements all of the features that are currently in the eapi-5 branch of PMS (including the features from EAPI 4-slot-abi, which I’ve blogged about before). For additional references about the upcoming EAPI 5, see the “EAPI 5 tentative features” wiki page.

If you’d like to experiment with EAPI 5_pre1, refer to the corresponding portage documentation, and pay special attention to the new “Profile IUSE Injection” feature. Since the profiles aren’t configured for this feature yet, you’ll have to set these variables yourself if your experimental ebuilds reference special flags (like x86, kernel_linux, elibc_glibc, and userland_GNU) without listing them explicitly in IUSE. Here’s an abbreviated example of what the variables should look like, which you can put in make.conf:

IUSE_IMPLICIT="prefix selinux"
USE_EXPAND="ELIBC KERNEL USERLAND"
USE_EXPAND_UNPREFIXED="ARCH"
USE_EXPAND_IMPLICIT="ARCH ELIBC KERNEL USERLAND"
USE_EXPAND_VALUES_ARCH="amd64 ppc ppc64 x86 x86-fbsd x86-solaris"
USE_EXPAND_VALUES_ELIBC="FreeBSD glibc"
USE_EXPAND_VALUES_KERNEL="FreeBSD linux SunOS"
USE_EXPAND_VALUES_USERLAND="BSD GNU"

I have not populated all of the above variables exhaustively, but these values should be enough to get you started. If you need a more complete set of ARCH values to list in USE_EXPAND_VALUES_ARCH, then you can grab the exhaustive set of values from arch.list.
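
To make the requirement concrete, here is a hypothetical ebuild fragment of the kind that needs Profile IUSE Injection: it references kernel_linux in a dependency without listing it in IUSE, so the flag has to come from the implicit IUSE that the variables above provide.

# hypothetical fragment, not a real ebuild
EAPI=5_pre1
IUSE="doc"
DEPEND="kernel_linux? ( sys-kernel/linux-headers )"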

August 31, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: Need a Gentoo force! (August 31, 2012, 10:01 UTC)

The schedule of all the events will be published soon, so stay tuned!

P.S. To avoid confusion, I’m reminding everyone that the Gentoo Miniconf and the Czech LinuxDays conference will be held on 20-21 October, while the openSUSE Conference has two extra days, so it will be held on 20-23 October

P.S.2 Thanks a lot to Joanna Malkogianni and Triantafyllia Androulidaki for the pacman banner

P.S.3 Thanks a lot to Anna Mineeva for the animated banner

August 27, 2012
Eray Aslan a.k.a. eras (homepage, stats, bugs)
Squid-3.2.1 in the tree (August 27, 2012, 14:15 UTC)

Squid-3.2.1 - the first non-beta release of the Squid web proxy server 3.2 branch - is in the tree.  The big news is SMP scalability: we can finally utilize multiple CPU cores natively instead of running multiple squid instances.


There are a lot of changes from previous versions.  In particular, some changes to existing directives may affect your existing traffic behaviour, so please be sure to read the release notes at [1] and [2] before upgrading.

There are two new USE flags (a package.use sketch for enabling them follows the list):

  • ssl-crtd:  Adds support for dynamic SSL certificate generation in SslBump environments, which allows ICAP inspection of SSL traffic with no (or at least fewer) certificate mismatch errors in browsers.  See [3] for further info.
  • qos:  Adds support for Quality of Service by allowing one to select a TOS / DSCP / Netfilter mark value for outgoing connections, based on where the reply was sourced.  It also turns on the zero-penalty-hit config option, which used to be a separate patch but is now included with squid itself.  Please see the qos_flows directive for further info [4].
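
A minimal package.use sketch for enabling both flags (the file name under package.use is your choice):

# /etc/portage/package.use/squid
net-proxy/squid ssl-crtd qos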


One note regarding squid.conf:  By default, Gentoo used to provide a huge squid.conf file with lots of comments.  Upstream provides a small, condensed squid.conf, which we will install as the default from squid-3.2.1 onwards.  I always found it difficult to see the overall squid configuration in the previous huge file; hopefully this change will make life easier for squid admins.  The old commented file is still available as squid.conf.documented under the /etc/squid directory.  Please do try to migrate your settings to the new squid.conf for ease of future upgrades.

August 26, 2012
Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)
Running owncloud on Gentoo stable (August 26, 2012, 18:51 UTC)

As I migrated to a clean data layout (see previous post), I decided to be a cool & trendy guy and fire up my own lovely cloud service.

At first my thinking was a bit off the regular setup: even though we have an in-tree ebuild of owncloud, it hard-requires apache, which I find overkill here.

So let me introduce you to a secret approach to making it work with nginx and sqlite3. Before you say that I should use *insertothercooldbname*, please consider that my deployment is only for a handful of users; I tested it with 5 users connected at once, each of them having access to a 1 TB shared datastore, and it proved fast enough.

Preparing keywords/useflags/etc

Well, owncloud is in testing, so unmask it:

scarabeus@htpc: /etc/portage $ cat package.keywords/own-cloud
www-apps/owncloud

We need dav for direct access and the php bits for the setup (some USE flags might be useless or redundant):

scarabeus@htpc: /etc/portage $ cat package.use/own-cloud
dev-lang/php pdo sqlite3 curl xmlwriter gd truetype cgi force-cgi-redirect fpm
www-servers/nginx nginx_modules_http_dav

Now silently punt apache away, as we love nginx:

scarabeus@htpc: /etc/portage $ cat profile/package.provided
virtual/httpd-php-5.4

And put all this to good use by emerging required stuff:

emerge -v www-servers/nginx www-apps/owncloud

Setting up the stuff

As nginx does not bundle any FastCGI runner, we will use PHP’s fpm directly. For that we need to add it to the default runlevel (rc-update add php-fpm default) and tweak the default number of spawned servers a bit (the config is in /etc/php/fpm-php5.4/php-fpm.conf). Also remember to set the proper user/group there, or you won’t be able to store content in your cloud, only read from it.
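
For illustration, the relevant knobs in that file might look like this; treat the values as a starting point for a small box, not a recommendation:

; /etc/php/fpm-php5.4/php-fpm.conf -- illustrative values only
user = nginx
group = nginx
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3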

Then we set up nginx (/etc/nginx/nginx.conf and /etc/nginx/fastcgi_params). To keep this short and easy, I will just post the config I used and let you google for other nginx variables.
First the conf file:

        server {
                listen 80;
                server_name hostname;
                rewrite ^ https://$server_name$request_uri? permanent;  # enforce https
        }

        server {
                listen 443;
                server_name hostname;

                ssl on;
                ssl_certificate /etc/ssl/nginx/nginx.crt;
                ssl_certificate_key /etc/ssl/nginx/nginx.key;

                access_log /var/log/nginx/htpc.access_log main;
                error_log /var/log/nginx/htpc.error_log info;

                root /var/www/htpc/htdocs/owncloud/;

                client_max_body_size 8M;
                create_full_put_path on;
                dav_access user:rw group:rw all:r;

                index index.php;

                location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
                        deny all;
                }

                location / {
                        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
                        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
                        rewrite ^/apps/calendar/caldav.php /remote.php/caldav/ last;
                        rewrite ^/apps/contacts/carddav.php /remote.php/carddav/ last;
                        rewrite ^/apps/([^/]*)/(.*\.(css|php))$ /index.php?app=$1&getfile=$2 last;
                        rewrite ^/remote/(.*) /remote.php/$1 last;

                        try_files $uri $uri/ @webdav;
                }

                location @webdav {
                        fastcgi_split_path_info ^(.+\.php)(/.*)$;
                        fastcgi_pass 127.0.0.1:9000;
                        include fastcgi_params;
                        fastcgi_param HTTPS on;
                }

                location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
                        expires 30d;
                        access_log off;
                }

                location ~ \.php$ {
                        fastcgi_split_path_info ^(.+\.php)(/.*)$;
                        fastcgi_pass 127.0.0.1:9000;
                        include fastcgi_params;
                        fastcgi_index index.php;
                        fastcgi_intercept_errors on;
                        try_files $uri =404;
                }
        }
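
The config above references a certificate and key under /etc/ssl/nginx/. If you don’t have them yet, a self-signed pair (fine for personal use, though browsers will whine about it) can be generated like this:

mkdir -p /etc/ssl/nginx
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout /etc/ssl/nginx/nginx.key -out /etc/ssl/nginx/nginx.crt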

For the FastCGI side we also need some params to make webdav work:

fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param   SCRIPT_NAME     $fastcgi_script_name;
fastcgi_param   PATH_INFO       $fastcgi_path_info;

That should be it. Now we just deploy owncloud to our webserver with webapp-config:

/usr/sbin/webapp-config -I -h htpc -u root -d /owncloud owncloud 4.0.7

After we start the webserver and the FastCGI provider, we should be up and running and able to open it in a web browser.
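
Concretely, on OpenRC that start-up is just the following (php-fpm was already added to the default runlevel earlier):

rc-update add nginx default
/etc/init.d/php-fpm start
/etc/init.d/nginx start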

A few issues I didn’t manage to sort out in owncloud

  • The external module for loading all system users into owncloud does not pass authentication.
  • Google sync just times out every time I try it (maybe I just have damn huge content here).
  • External storage support from within owncloud didn’t work for me. I just symlinked the data folders to the proper places under each user, logged into each account in the browser, waited about 3 hours (1 TB of data to index), and then they were able to access everything.