
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Christopher Harvey
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thilo Bangert
. Thomas Anderson
. Thomas Kahle
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Victor Ostorga
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
November 19, 2012, 23:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

November 19, 2012
Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
GPG Key Signing Party (November 19, 2012, 08:19 UTC)

Last Thursday we had a GPG Key & CAcert signing party at the SUSE office, inviting anybody who wanted to get their key signed. I would say it went quite well: about 20 people showed up, we had some fun, and we now trust each other some more!

GPG Key Signing

We started with GPG key signing. You know, the usual stuff: two rows moving against each other, people exchanging paper slips.

Signing keys

For actually signing keys at home, we recommended that people use the signing-party package, and caff in particular. It's an easy-to-use tool as long as you can send mail from the command line (there are some options to talk to an SMTP server directly, but I ran into some issues). All you need to do is call

caff HASH

and it will download the key, show you the identities and fingerprint, sign it for you, and send each signed identity to the owner by itself via e-mail. And all that with a nice wizard. It couldn't be simpler.
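For completeness: caff reads its settings from ~/.caffrc, which is a snippet of Perl. A minimal sketch could look like this (all values are placeholders; check the signing-party documentation for the details):

cat > ~/.caffrc <<'EOF'
$CONFIG{'owner'} = 'Jane Doe';                 # name used in the outgoing mails
$CONFIG{'email'} = 'jane@example.org';         # sender address
$CONFIG{'keyid'} = [ qw{0123456789ABCDEF} ];   # long key ID(s) to sign with
EOF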

Importing signatures

When my signed keys started coming back, I wondered how to process them; it was simply too many e-mails. I searched a little bit, but I got lazy quite soon, so since I have all my mail stored locally in a Maildir by offlineimap, I just wrote the following one-liner to import them all.

   # find the mails caff sent back, and import the signed keys they carry
   grep -Rl 'Your signed' INBOX | while read i; do
        gpg -d "$i" | gpg --import
   done

Maybe somebody will find it useful as well, or maybe somebody more experienced will tell me in the comments how to do it correctly ;-)

CAcert

One friend of mine – Theo – really wanted to be able to issue CAcert certificates, so we added a CAcert assurance to the program. For those who don't know, CAcert is a nonprofit certification authority based on a web of trust. You get verified by volunteers, and when enough of them trust you enough, you are trusted by the authority itself. When people verify you, they give you some points, based on how much they themselves are trusted and on how well they can verify you. Once you get 50 points, you are trusted enough to get your certificate signed, and once you have 100, you are trusted enough to start verifying other people (after a little quiz to make sure you know what you are doing).

I knew that my colleague Michal Čihař was able and willing to issue some points, but as he was starting out issuing 10 and I 15, I also asked a few nearby assurers from the CAcert website. Unfortunately I got no reply, but then we were organizing everything quite quickly. We did have another colleague – Martin Vidner – show up who was able to issue some points. I assured another 11 people at the party, and now I can give out 25 points, as can Michal, and I guess Martin is somewhere around 20 as well. So if you need to be able to issue CAcert certificates, visiting just the SUSE office in Prague is enough! But still, contact us beforehand; sometimes we do take a vacation ;-)

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Ah, LXC! Isn't that good now? (November 19, 2012, 02:55 UTC)

If you’re using LXC, you might have noticed that there was a 0.8.0 release lately, finally, after two release candidates, one of which was never really released. Would you expect everything to go well with it? Hah!

Well, you might remember that over time I found that the way you’re supposed to mount directories in the configuration files changed, from using the path to be used as root, to the default root path used by LXC, and every time that happened, no error message told you that you were now trying to mount directories outside of the tree the containers run in.

Last time, the problem I hit was that if you try to mount a volume instead of a path, LXC expected you to use the full realpath as a base, which in the case of LVM volumes is quite hard to know by heart. Yes, you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts it to (/usr/lib/lxc/rootfs). With the new release this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs — again, a change in a micro bump (rc2 to final) which is not documented… sigh. This would be enough to irk me, but there is more.
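To make it concrete, here is a sketch of how a bind-mount entry in the container configuration had to change across those releases (container name and paths are illustrative):

# rc2 with my patch: target based on the path LXC mounts the rootfs to
lxc.mount.entry = /srv/data usr/lib/lxc/rootfs/srv/data none bind 0 0
# 0.8.0 final: target based on the container's own rootfs path
lxc.mount.entry = /srv/data /var/lib/lxc/mycontainer/rootfs/srv/data none bind 0 0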

The new version also seems to have a bad interaction with the kernel when stopping a container — the virtual ethernet device (veth pair) is not cleaned up properly, which causes the process to stall, with something insisting on calling into the kernel and failing. The result is an unhappy Diego.

Even without adding the fact that the interactions between LXC and systemd are not clear yet – the maintainers of the two projects are trying to sort out the differences between them, so at least I don’t have to care about it anytime soon – this should be enough to make it explicit that LXC is not ready for prime time, so please don’t ask.

On a different, interesting note: the vulnerability publicized today that can bypass KERNEXEC? Well, unless you disable the net_admin capability in your containers (which also means you can’t set the network parameters, or use iptables), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to give untrusted users root on your containers.

Oh well, time to wait for the next release and see if they can fix a few more issues.

November 18, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
A matter of copyrights (November 18, 2012, 16:55 UTC)

One of the issues that came up with the recent drama about the n-th udev fork is the matter of assigning copyright to the Gentoo Foundation. This topic is not often explored, mostly because it really is a minefield, and – be ready to be surprised – I think the last person who actually said something sane on the topic was Ciaran.

Let’s look for a moment at what’s going on: all ebuilds and eclasses in the main tree, and in most of the overlays, report “Gentoo Foundation” as the holder of copyright. This is so much a requirement that we don’t commit to the tree anything that reports anyone else’s copyright, and for the most part we refuse the contribution in that case. While it’s cargo-culted at this point, it is also an extremely irresponsible thing to do.

First of all, nobody ever signed a copyright assignment form for the Gentoo Foundation, as far as I can tell. I certainly didn’t. And this matters especially as we go along getting more and more proxied maintainers, as they almost always are not Gentoo Foundation members (Foundation membership comes after a year as a developer, if I’m not mistaken — or something along those lines; I honestly forgot, because I’m not following the Foundation’s doings at all).

Edit: Robin made me notice that a number of people did sign a copyright assignment, first to Gentoo Technologies, which was then re-assigned to the Foundation. I didn’t know that — and I would be surprised if a majority of the currently active developers knew about it either. As far as I can tell, copyright assignment was no longer part of the standard recruitment procedure when I joined, since, as I said, I didn’t sign one. Even assuming I was the first guy who didn’t sign it, 44% of the total active developers wouldn’t have signed it, and that’s 78% of the currently active developers (give or take). Make up your mind on these numbers.

But even if we had all signed said copyright assignment, it would be for a vast part invalid. The problem with copyright assignments is that they are just that, copyright assignments… which means they only work where the law regime concerning authors’ works is that of copyright. For most (all?) of Europe, the regime is actually that of author’s rights, and as VideoLAN shows, it’s a bit more complex, as the authors have no real way to “assign” those rights.

Edit²: Robin also pointed out that the FSFE, Google (and I’d add Sun, at the very least) have a legal document, usually called a Contributor License Agreement (when it basically replaces a full-blown assignment) or a Fiduciary Licence Agreement (the more “free software friendly” version). This solves only half the problem, as the Foundation would still not own the copyright, which means you still have to come up with a different way to identify the contributors: they retain their rights even though they leave any decision regarding their contributions to the entity they sign the CLA/FLA with.

So the whole thing stinks of half-understood problem.

This has actually gotten more complex recently, because the sci team borrowed an eclass (or the logic for an eclass) from Exherbo — which actually keeps track of the individual contributors’ copyright. This is a much more sensible approach on the legal side, although I find the idea of having to list, let’s say, 20 contributors at the top of every 15-line ebuild a bit of overkill.

My proposal would then be to have a COPYRIGHTS.gentoo file in every package directory, where we list the contributors to the ebuild. This way even proxied maintainers, and one-time contributors, get their credit. The ebuild can then refer to “see the file” for the actual authors. A similar problem also applies to files that are added to the package, including, but not limited to, the init scripts; making the file formatted, instead of freeform, would probably allow crediting those as well.
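To make the sketch a bit more concrete, such a file could be as simple as this; the format below is purely illustrative, not a settled proposal:

# COPYRIGHTS.gentoo — contributors to the ebuilds and support files
2010-2012 Jane Hacker <jane@example.org>       # ebuilds, init script
2012      John Doe <john@example.net>          # version bumps (proxied maintainer)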

Now, this is just a sketch of an idea — unlike Fabio, whose design methodology I do understand and respect, I prefer posting as soon as I have something in mind, to see if somebody can easily shoot it down or if it has wings to fly, and also in the vain hope that, if I don’t have the time, somebody else will pick up my plan — but if you have comments on it, I’d be happy to hear them. Maybe after a round of comments, and another round of thinking about it, I’ll propose it as a real GLEP.

Fabio Erculiani a.k.a. lxnay (homepage, stats, bugs)
Secretly({Plan, Code, Think}) && PublishLater() (November 18, 2012, 12:19 UTC)

During the last years I started several open source projects. Some turned out to be useful, maybe successful, many were just rubbish. Nothing new until here.

Every time I start a new project, I usually don’t really know where I am headed and what my long-term goals are. My excitement and motivation typically come from solving simple everyday, personal problems or just addressing {short,mid}-term goals. This is actually enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It’s just me and my compiler/interpreter having fun together. I call this the “initial grace period”.

During this period, I usually never share my idea with other people, ever. I keep the project in a locked pod, away from hostile eyes. Should I share my idea at this time, the project might get seriously injured and my excitement severely affected. People would only see the outcome of my thought, but not the thought process itself, nor the detailed plans behind it, because I just don’t have them! While this might be considered to go against both basic Software Engineering rules and some exotic “free software” principles, it works for me.

I don’t want my idea to be polluted as long as I don’t have something that resembles it in the form of a consistent codebase. And until that time, I don’t want others to see my work and judge its usefulness based on incomplete or just inconsistent pieces of information.

At the very same time, writing documents about my idea and its goals beforehand is also a no-go, because I have “no clue” myself as mentioned earlier.

This is why revision control systems, and the implicit development model they force on individuals, are so important, especially for me.
Giving you the ability to work on your stuff, changes, improvements, without caring about the external world until you are really, really done with it, is what I ended up needing so much.
Every time I forgot to follow this “secrecy” strategy, I had to spend more time discussing my (still confused?) idea, the {why,what,how} of what I am doing, rather than coding. Round trips are always expensive, no matter what you’re talking about!
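In git terms, the whole strategy is just a repository that has no remote until the grace period is over; a minimal sketch, with a placeholder URL:

git init secret-project && cd secret-project
# ...hack, commit, iterate: nothing leaves this machine...
git add -A && git commit -m "secret goals, iteration 0"
# only when you're satisfied does the code fly away to some public site:
git remote add origin git@example.org:me/secret-project.git
git push -u origin master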

Many of the internal tools we at Sabayon successfully use have gone through this development process. Other staffers sometimes say things like “he’s been quiet in the last few days, he must be working on some new features”, and it turns out that most of the time this is true.

This is what I wanted to share with you today, though. Don’t wait for your idea to become clearer in your mind; it won’t happen by itself. Just take a piece of paper (or your text editor), start writing down your own secret goals (don’t make the mistake of calling them “functional requirements” like I did sometimes), divide them into modest/expected and optimistic/crazy, and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked, and go back to coding again. Iterate until you’re satisfied with the result, and then, eventually, let your code fly away to some public site.

But, until then, don’t tell anybody what you’re doing! Don’t expect any constructive feedback during the “initial grace period”; it is very likely that it will just be destructive.

Git, I love ya!


Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Multi-level bundling, with a twist (November 18, 2012, 05:51 UTC)

I spent half my Saturday afternoon working on Blender, to get the new version (2.64a) into Portage. This is never an easy task, but in this case it was a bit more tedious because, thanks to the new release of libav (version 9), I had to make a few more modifications… while making sure it would still work with the old libav (0.8).

FFmpeg support is not guaranteed — if you care, you can submit a patch. I’m one of the libav developers, thus that’s what I work, and test, with.

Funnily enough, while I was doing that work, a new bug for Blender was reported in Gentoo, so I looked into it and found out that it was actually caused by one of the bundled dependencies — luckily, one that was already available as its own ebuild, so I just decided to get rid of it. The interesting part was that it wasn’t listed in the “still bundled libraries” list that the ebuild’s own diagnostic prints… since it was actually a bundled library of the bundled libmv!

So you reach the point where you get one package (Blender) bundling a library (libmv) bundling a bunch of libraries, multi-level.

Looking into it, I found out that not only was the dependency causing the bug bundled (ldl), but there were at least two more that, I knew for sure, were available in Gentoo (glog and gflags). This meant I could shave some more code out of the package by adding a few more dependencies… which is always a good thing in my book (and I know that my book is not the same as many others’).

While looking for other libraries to unbundle, I found another one, mostly because its name (eltopo) was funny — it has a website, and from there you can find the sources — neither of which is linked in the Blender package. When I looked at the sources, I was dismayed to see that there was no real build system, just a half-broken Makefile building two completely different PIC-enabled static archives, for debug and release. Not really something that distributions could take much interest in packaging.

So I set about building my usual autotools-based build system (which, no matter what people say, is extremely fast if you know how to do it), fixed the package to build correctly with GCC 4.7 (how did it work for Blender? I assume they patched it somehow, but they don’t write down what they do!), and… uh, where’s the license file?
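For the curious, the skeleton of such an autotools conversion is tiny. This is only a sketch of the approach (contents and names are illustrative, not the actual files I wrote):

cat > configure.ac <<'EOF'
AC_INIT([eltopo], [20121118])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
LT_INIT
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF

cat > Makefile.am <<'EOF'
lib_LTLIBRARIES = libeltopo.la
libeltopo_la_SOURCES = eltopo.cpp
EOF

autoreconf -i && ./configure && make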

Turns out that while the homepage says the library is “public domain”, there is no license statement anywhere in the source code, making it to all effects the exact opposite: proprietary software. I’ve opened an issue for it, and hopefully upstream will fix that up so I can send him my fixes and package it in Gentoo.

Interestingly enough, the libmv software that Blender packages is much better in its way of bundling libraries. While they don’t seem to give you an easy way to disable the bundled copies (which might or might not be the fault of Blender’s build system), they make it clear where each library comes from, and they have scripts to “re-bundle” said libraries. When they make changes, they also keep a log of them, so that you can identify what changed and either ignore it, patch it, or send it upstream. If all projects bundling stuff did it that way, unbundling would be a much easier job…

In the meantime, if you have some free time and feel like doing something to improve the bundled-libraries situation in Gentoo Linux, or you care about Blender and you’d like to have a better Gentoo experience with it, we could use some ebuilds for ceres-solver and SSBA as well as fast-C (this last one has no build system at all, unfortunately), all used by libmv, or maybe carve, libredcode (for which I don’t even have a URL at hand), and recastnavigation (which has no releases), which are instead used directly by Blender.

P.S.: don’t expect to see me around this Sunday; I’m actually going to see the Shuttle, so I won’t be back till late, most likely — or at least I hope so. You’ll probably see a photo set on my Flickr page on Monday if you want a treat.

November 17, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
The hardened project continues going forward… (November 17, 2012, 19:34 UTC)

This Wednesday, the Gentoo Hardened team held its monthly online meeting, discussing the things that have been done in the last few weeks and the ideas that are being worked out for the next. As I did with the last few meetings, allow me to summarize it for all interested parties…

Toolchain

Upstream GCC development on the 4.8 version has progressed into the 3rd stage of its development cycle. Sadly, many of our hardened patches didn’t make the release. Zorry will continue working on these things, hopefully still being able to merge a few – and otherwise they’ll be for the next release.

For the MIPS platform, we might not be able to support the hardenedno* GCC profiles [1] in time. However, this is not seen as a blocker (we’re mostly interested in the hardened ones, not the ones without hardening ;-) so this could be done later on.

Blueness is migrating the stage building for the uclibc stages towards catalyst, providing cleaner stages. For the amd64 and i686 platforms, the uclibc-hardened and uclibc-vanilla stages are already done, and mips32r2/uclibc is on the way. Later, ARM stages will be looked at. Other platforms, like little-endian MIPS, are also on the roadmap.

Kernel

The latest hardened-sources (~arch) package contains a patch supporting the user.* namespace for extended attributes in tmpfs, as needed for the XATTR_PAX support [2]. However, this patch has not been properly investigated or tested yet, so input is definitely welcome. During the meeting, it was suggested to cap the length of the attribute value and to only allow the user.pax attribute, as we would otherwise be allowing unprivileged applications to “grow data” in kernel memory space (the tmpfs).

Prometheanfire confirmed that recent-enough kernels (3.5.4-r1 and later) with nested paging do not exhibit the performance issues reported earlier.

SELinux

The 20120725 upstream policies are stabilized at revision 5. Although a next revision is already available in the hardened-dev overlay, it will not be pushed to the main tree due to a broken admin interface. Revision 7 is slated to be made available later the same day to fix this, and is the next candidate for being pushed to the main tree.

The newer SELinux userspace utilities released in September are also going to be stabilized in the next few days (at the time of writing this post, they already are ;-). These also support epatch_user, so that users and developers can easily add patches to try out stuff without having to repackage the application themselves.

grSecurity and PaX

The toolchain support for PT_PAX (the ELF-header based PaX markings) is due to be removed soon, meaning that the XATTR_PAX support will need to have matured by then. This has a few consequences for available packages (which will need a bump and fix), such as elfix, but also for the pax-utils.eclass file (interested parties are kindly requested to test the new eclass before it reaches “production”). Of course, it also means that the new PaX approach needs to be properly documented for end users and developers.

pipacs also mentioned that he is working on a paxctld daemon. Just like SELinux’s restorecond daemon, this daemon will look for files and check them against a known database of binaries with their appropriate PaX markings. If the markings are set differently (or not set), the paxctld daemon will rectify the situation. For Gentoo this is less of a concern, as we already set the proper information through the ebuilds.

Profiles

The old SELinux profiles, which had already been deprecated for a while, have been removed from the Portage tree. That means that all SELinux-using profiles use the features/selinux inclusion rather than a fully built (yet difficult to maintain) profile definition.

System Integrity

A few packages, needed to support or work with ima/evm, have been pushed to the hardened-dev overlay.

Documentation

The SELinux handbook has been updated with the latest policy changes (such as supporting the named init scripts). We also documented SELinux policy constraints, which was long overdue.

So, again, a nice month of (volunteer) work on the security state of Gentoo Hardened. Thanks again to all (developers, contributors and users) for making Gentoo Hardened what it is today. Zorry will send the meeting log to the mailing list later, so you can look at the more gory details of the meeting if you want.

  • [1] GCC profiles are a set of parameters passed on to GCC as a “default” setting. Gentoo Hardened uses GCC profiles to support using non-hardening features if the user wants to (through the gcc-config application).
  • [2] XATTR_PAX is a new way of handling PaX markings on binaries. Previously, we kept the PaX markings (i.e. flags telling the kernel PaX code to allow or deny specific behavior, or to enable certain memory-related hardening features for a specific application) as flags in the binary itself (inside the ELF header). With XATTR_PAX, this is moved to an extended attribute called “user.pax”.
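For illustration, such an attribute can be inspected and set with the standard extended-attribute tools; the flag letter below is only an example, and on Gentoo the ebuilds normally take care of the markings for you:

getfattr -n user.pax /path/to/binary           # show the current PaX marking
setfattr -n user.pax -v "m" /path/to/binary    # e.g. disable MPROTECT for this binary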

Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)

A few days ago I finished fiddling with the Open Build Service (OBS) packages in our main tree. Now, anyone who wants to mess with OBS just has to emerge dev-util/osc and have fun with it.

What the hell is obs?

OBS is a pretty cool service that lets you specify in one .spec file how to build your package and its dependencies, and then delivers the results to multiple archs/distros (Debian, SUSE, Fedora, CentOS, Arch Linux) without you having to care about how it happens.

The primary instance is the one run for SUSE, and it is free for anyone to use (e.g. you don’t have to build SUSE packages there if you don’t want to :P). There are two ways to interact with the whole tool: one is the web application, which is a real PITA, and the other is the osc command line tool I finished fiddling with.

Okay so why did you do it?

Well, I work at SUSE, and we are free to use whatever distro we want as long as we are able to complete our tasks. I like to improve stuff, and I want to be able to fix bugs in SLE/openSUSE without having any chroot/virtual machine with said system installed; for such a task this works pretty well :-)
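A typical round-trip with osc looks something like this (project and package names are placeholders):

emerge --ask dev-util/osc
osc checkout home:user mypackage        # grab a working copy from the service
cd home:user/mypackage
osc build openSUSE_12.2 x86_64          # local test build for one target
osc commit -m "Fix build on 12.2"       # send the changes back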

How -g0 may be useful (November 17, 2012, 13:35 UTC)

Usually I use -g0 in my CFLAGS/CXXFLAGS; it is useful for spotting wrong buildsystem behaviour.

Here is an example where the buildsystem seds away only the -g and leaves the 0 behind, causing a compile failure:

x86_64-pc-linux-gnu-gcc -DNDEBUG -march=native -O2 0 -m64 -O3 -Wall -DREGINA_SHARE_DIRECTORY=\"/usr/share/regina\" -DREGINA_VERSION_DATE=\""31 Dec 2011"\" -DREGINA_VERSION_MAJOR=\"3\" -DREGINA_VERSION_MINOR=\"6\" -DREGINA_VERSION_SUPP=\"\" -DHAVE_CONFIG_H -DHAVE_GCI -I./gci -I. -I. -I./contrib -o funcs.o -c ./funcs.c
x86_64-pc-linux-gnu-gcc: 0: No such file or directory
./funcs.c: In function '__regina_convert_date':
./funcs.c:772:14: warning: array subscript is above array bounds
make: *** [funcs.o] Error 1
emake failed

So adding it to your CFLAGS/CXXFLAGS may be a good idea.
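For example, in /etc/portage/make.conf (the other flags are just an example):

CFLAGS="-march=native -O2 -g0"
CXXFLAGS="${CFLAGS}"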

November 14, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
RIP recruiting.gentoo.org (November 14, 2012, 13:28 UTC)

The recruiters team announced a few months ago that they had decided not to use the recruiting webapp any more and to move back to the txt quizzes instead. Additionally, the webapp started showing random Ruby exceptions, and since nobody is willing to fix them, we found it a good opportunity to shut down the service completely. There were people still working in it though (including me), so if you are a mentor, mentee or someone who had answers in there, please let me know so I can extract your data and send it to you.
And now I’d like to state my personal thoughts regarding the webapp and the recruiters’ decision to move back to the quizzes. First of all, I used this webapp as a mentor a lot from the very first moment it came up, and I mentored about 15 people through it. It was a really nice idea, but not properly implemented. With the txt quizzes, the mentees were sending me the txt files by mail, then we had to schedule an IRC meeting to review the answers, or I had to send the mail back, etc. It was hell for both me and the mentee. I was ending up with hundreds of attachments, trying to find the most recent one (or the previous one, to compare answers), and the mentee had to dig through IRC logs and mails to find my feedback.
The webapp solved that issue, since the mentee was putting his answers in a central place, and I could easily leave comments there. But it had a bunch of issues too, mostly UI related. It required too many clicks for simple actions, the notification system was broken by design, and I had no easy way to see diffs or the progress of my mentee (answers replied / answers left). For example, in order to approve an answer, I had to press “Edit”, which transferred me to a new page, where I had to tick “Approve” and press save. Too much; I just wanted to press “Approve”! When I decided to start filing bugs, I surprisingly found out that all my UI complaints had already been reported; clearly I was not alone in this world.
In short: cool idea, but annoying UI. That was not the problem though; the real problem is that nobody was willing to fix those issues, which led to the recruiters’ decision to move back to txt quizzes. But I am not going back to the txt quizzes, no way. Instead, I will start a Google doc and tell my mentees to put their answers there. This allows me to write my comments below their answers in a different font/colour, so I can have async communication with them. I was present during the recruitment interview session of my last mentee, Pavlos, and his recruiter Markos fired up a Google doc for some coding answers, and it worked pretty well. So I decided to do the same. If the recruiters want the answers in plain text, fine, I can extract them easily.
I’d like to thank Joachim Bartosik a lot for his work on the webapp and the interesting ideas he put into it (it saved me a lot of time, and made the mentoring process fun again), and Petteri Räty, who mentored Joachim in creating the recruiting webapp as a GSoC project and helped deploy it to infra servers. I am kind of sad that I had to shut it down, and I really hope that someone steps up and revives it or creates an alternative. There was some discussion regarding the webapp during the Gentoo Miniconf; I hope it doesn’t sink.

Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
An informal comparison (November 14, 2012, 03:14 UTC)

A few people asked me to write this down so that they can reference it - so here it is.
A completely unscientific comparison between Linux flavours and how they behave:

CentOS 5 (because upgrading is impossible):

             total       used       free     shared    buffers     cached
Mem:          3942       3916         25          0        346       2039
-/+ buffers/cache:       1530       2411

And on the same hardware, doing the same jobs, a Gentoo:
             total       used       free     shared    buffers     cached
Mem:          3947       3781        166          0        219       2980
-/+ buffers/cache:        582       3365
So we use roughly 1/3rd the memory to get the same things done (fileserver), and an informal performance analysis gives us roughly double the IO throughput.
On the same hardware!
(The IO difference could be attributed to the ext3 -> ext4 upgrade and the kernel 2.6.18 -> 3.2.1 upgrade)

Another random data point: A really clumsy mediawiki (php+mysql) setup.
Since PHP is single-threaded, the performance is pretty much CPU-bound; and as we have a small enough dataset, it all fits into RAM.
So we have two processes (mysql+php) that are serially doing things.

Original CentOS install: ~900 qps peak in mysql, ~60 seconds walltime to render a pathological page
Default-y Gentoo: ~1200 qps peak, ~45-50 seconds walltime to render the same page
Gentoo with -march=native in CFLAGS: ~1800 qps peak, ~30 seconds render time (this one was unexpected for me!)

And a "move data around" comparison: 63GB in 3.5h vs. 240GB in 4.5h - or roughly 4x the throughput

So, to summarize: for the same workload on the same hardware we're seeing substantial improvements of between a few percent and roughly three times the throughput, for IO-bound as well as CPU-bound tasks. Memory use goes down for most workloads while still getting the exact same results, only a lot faster.

Oh yeah, and you can upgrade without a reinstall.

November 13, 2012
Donnie Berkholz a.k.a. dberkholz (homepage, stats, bugs)

App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.


Tagged: development, gentoo

November 12, 2012
Fabio Erculiani a.k.a. lxnay (homepage, stats, bugs)
Equo code refactoring: mission accomplished (November 12, 2012, 20:34 UTC)

Apparently it’s been a while since my last blog post. This, however, does mean that I’ve been busy on the coding side, which is what you may prefer, I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first. The old codebase was ugly, as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years. It wasn’t modular, object-oriented, bash-completion friendly, or man-page friendly, and most importantly, it did not use any standard argument parsing library (because at the time there was no argparse module, and optparse was about to be deprecated).

Modularity

Equo subcommands are now just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands now sport new arguments (have a look at “equo match”, for example).

Man pages

All the equo subcommands are provided with a man page, available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself, and the man page is automatically generated via some (Python + a2x)-fu. As you can understand, maintaining both the code and its documentation becomes easier this way.

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method, “bashcomp(last_argument_str)”, overridden from SoloCommand and returning a list. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.
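Putting the pieces together, a quick tour of the refactored interface (the package name is just an example):

equo match firefox     # subcommands now accept richer arguments
man equo-match         # every subcommand ships its own man page
# and typing "equo mat<TAB>" in bash completes subcommands and their options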

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea of placing tests directly in the subcommand module code.
Testing! Please install entropy 149 and play with it, try to break it, and report bugs!


Jan Kundrát a.k.a. jkt (homepage, stats, bugs)

I'm sitting on the first day of the Qt Developer Days in Berlin and am pretty impressed by the event so far -- the organizers have done an excellent job and everything feels very, very smooth here. Congratulations on that; I have first-hand experience with organizing a workshop and can imagine the huge pile of work these people have invested into making it rock. Well done, I say.

It's been some time since I blogged about Trojitá, a fast and lightweight IMAP e-mail client. A lot of work has found its way in since the last release; Trojitá now supports almost all of the useful IMAP extensions, including QRESYNC and CONDSTORE for blazingly fast mailbox synchronization, or CONTEXT=SEARCH for live-updated search results, to name just a few. There have also been roughly 666 tons of bugfixes, optimizations, new features and tweaks. Trojitá is finally showing evidence of getting ready to be usable as a regular e-mail client, and it's exciting to see that process after 6+ years of working on it in my spare time. People are taking part in the development process; there has been a series of commits from Thomas Lübking of kwin fame dealing with tricky QWidget issues, for example -- and it's great to see many usability glitches getting addressed.

The last nine months were rather hectic for me -- I got my Master's degree (the thesis was about Trojitá, of course), I started a new job (this time using Qt) and implemented quite some interesting stuff with Qt -- if you have always wondered how to integrate Ragel, a parser generator, with qmake, stay tuned for future posts.

Anyway, in case you are interested in using an extremely fast e-mail client implemented in pure Qt, give Trojitá a try. If you'd like to chat about it, feel free to drop me a mail or just stop me anywhere. We're always looking for contributors, so if you hit some annoying behavior, please do chime in and start hacking.

Cheers,
Jan

November 08, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Boosting my morale? Nope, still not. (November 08, 2012, 05:43 UTC)

I’m not sure if you’re following the development of this particular package in Gentoo, but with some discussion, quite a few developers reached a consensus last week that the slotted dev-libs/boost that we’ve had for the past couple of years had to go, replaced with a single-slot package like we have for most other libraries.

The main reason for this is that the previous slotting was not really doing what the implementers expected it to do — the idea for many was that you could always depend on the highest version of Boost you support, and if you don’t support the latest, no problem, you’d get an older one. Unfortunately, this clashes with the fact that only the newest version of Boost is supported by upstream on modern configurations, so it happens that a new C library, or a new compiler, can (and does) make older versions non-buildable.

This is what happened with the new GLIBC 2.16, which is partially described in the previous post of the same series, and lately summarized, where there’s no way to rebuild boost-1.49 with the new glibc (the “patch” that could be used would change the API, making it similar to boost-1.50, which…), but since I did report build failures with 1.50, people “fixed” them by depending on an older version… which is now not installable. D’oh!

So what did I do to sort this out? We dropped the slot altogether. Now all Boost versions install as slot zero and each replaces the others. This makes it much easier for both developers and users, as you know that the one version you have installed is the one you’re building against, instead of “whatever has been eselected”, or “whatever was installed last”, or “whatever is the first one the build system finds”, as it was before — usually a mix of all three.

But this wasn’t enough, because unfortunately libraries, headers and tools were all slotted, so they all had different names based on the version. This was handled in the new 1.52 release, which I unmasked today, by going back to the default install layout that Boost uses for Unix: the system layout. This is designed to allow one and only one version of each Boost library in the system, and provides neither a version nor a variant suffix. This meant we needed another change.

Before going back to the system layout, each Boost version installed two sets of libraries: one that was multithread-safe and one that wasn’t. Software using threads would have to link to the mt variant, while software not using threads could link to the (theoretically lower-overhead) single-thread variant, which happened to be the default. Unfortunately, this also meant that a ton of software out there, even when using threads, simply linked to the Boost library it wanted without caring about the variant. Oopsie.

Even worse, it was very well possible, and indeed was the case for Blender, that both variants were brought into the process’s address space, possibly causing extremely hard-to-debug issues due to symbol collisions (which I know, unfortunately, very well).

An easy way to see (using older versions of the Boost ebuilds) whether your program is linking to the wrong variant is to check whether it links to libboost_thread-mt and at the same time to some other library such as libboost_system (not the mt variant). Since our very pleasant former maintainer decided to link the mt variant of libboost_thread to the non-mt one, quite a few ways to check for multithreaded Boost simply… failed.
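A quick way to check a binary by hand (the path is just an example):

ldd /usr/bin/blender | grep libboost
# a mixed result like the following would be the red flag:
#   libboost_thread-mt.so.1.49.0 => ...
#   libboost_system.so.1.49.0    => ...   (non-mt right next to -mt)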

Now the decision on whether to build threadsafe or not is made through a USE flag, like most other ebuilds do, and since only one variant is installed, everybody gets, by default and in most cases, the multithread-safe version, and all is good. Packages requiring threads might already want to start using dev-libs/boost[threads(+)] to make sure that they are not installed with a non-threadsafe version of Boost, but there are symlinks in place right now so that even if they look for the mt variant they get the one installed version of Boost anyway (only with USE=threads, of course).

One question that was raised was “how broken will people’s systems be after upgrading from one Boost to another?”, and the answer is “quite”… unless you’re using a modern enough Portage (the last few versions of the 2.1 series are okay, and most of the 2.2), which can use preserve-libs. In that case, it’ll just require you to run a single emerge command to get back on the new version; if not, you’ll have to wait for revdep-rebuild to finish.

And to make things sweeter, with this change the time it takes for Boost to build is halved (4 minutes vs 8 on my laptop), while the final package is 30MB smaller (here, at least), since only one set of libraries is installed instead of two — without counting the time and space you’d waste by having to install multiple Boost versions together.

And for developers, this also means that you can forget about the ruddy boost-utils.eclass, since now everything is supposed to work without any trickery. A win-win situation, for once.

November 03, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Tinderbox and manual intervention (November 03, 2012, 20:39 UTC)

So after my descriptive post, you might be wondering what’s so complex or time-consuming about running a tinderbox. That’s because I haven’t spoken about the actual manual labor that goes into handling it.

The major work is, of course, scouring the logs to make sure that I file only valid bugs (and often enough that’s not enough, as things hide beneath the surface), but there are quite a number of tasks that are not related to the bug filing, at least not directly.

First of all, there is the matter of making sure that the packages are available for installation. This used to be more complex but, luckily, thanks to REQUIRED_USE and USE deps, this task is slightly easier than before. The tinderbox.py script (which generates the list of visible packages that need to be tested) also generates a list of USE conflicts, dependencies, etc. This list I have to look at manually, and then update the package.use file so that they are satisfied. If a package’s dependencies or REQUIRED_USE are not satisfied, the package is not visible, which means it won’t be tested.

This sounds extremely easy, but there are quite a few situations, which I discussed previously, where there is no real way to satisfy the requirements of all the packages in the tree. In particular, there are situations where you can’t enable the same USE flag all over the tree — for instance, if you enable icu for libxml2, you can’t enable it for qt-webkit (well, you can, but then you have to disable gstreamer, which is required by other packages). Handling all the conflicting requirements takes a bit of trial and error.
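The result is a hand-maintained package.use full of entries like these (the atoms and flags are only illustrative of the kind of conflict described):

dev-libs/libxml2    icu
x11-libs/qt-webkit  -icu    # enabling it here would force gstreamer off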

Then there is a much worse problem, and that is tests that can get stuck, so that things like this happen:

localhost ~ # qlop -c
 * dev-python/mpi4py-1.3
     started: Sat Nov  3 12:29:39 2012
     elapsed: 9 hours, 11 minutes, 12 seconds

And I’ve got to keep growing the list of packages whose tests are unreliable — I wonder if the maintainers ever try running their tests, sometimes.
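One way to keep a flaky package’s tests out of a run is Portage’s package.env; a sketch, using the package from the log above:

echo 'FEATURES="-test"' > /etc/portage/env/no-test.conf
echo 'dev-python/mpi4py no-test.conf' >> /etc/portage/package.env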

This task used to be easier, because the tinderbox supports sending out tweets or dents through bti, so that it would tell me what it was doing — unfortunately, identi.ca kept marking the tinderbox’s account as spam, and while they did unlock it three times, it meant I had to ask support to do so every other week. I grew tired of that and stopped caring about it. Unfortunately, that means I have to connect to the instance(s) from time to time to make sure they are still crunching.

Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
komplete audio 6 on gentoo: first impressions (November 03, 2012, 05:36 UTC)

i received my native instruments komplete audio 6 in the mail today. i wasted no time plugging it in. i have a few first impressions:

build quality

this thing is heavy. not unduly so — just two or three times heavier than the audiofire 2 it replaces. it’s solidly built, so i imagine it can take a fair amount of beating on-the-go. knobs are sturdy, stiff rather than loose, without much wiggle. the big top volume knob is a little looser, with more wiggle, but it’s also made out of metal, rather than the tough plastic of the front trim knobs. the input ports grip 1/4″ jacks pretty tightly, so there’s no worry that cables will fall out.

i haven’t tested the main outputs yet, but the headphone output works correctly, offering more volume than my ears can take, and it seems to be very quiet — i couldn’t hear any background hiss even when turning up the gain.

JACK support

i have mixed first impressions here. according to ALSA upstream, and one of my buddies who’s done some kernel driver code for NI interfaces, it should work perfectly, as it’s class-compliant to the USB2.0 spec (no, really, there is a spec for 2.0, and the KA6 complies with it, separating it from the vast majority of interfaces that only comply with the common 1.1 spec).

i set up some slightly more aggressive settings on this USB interface than for my FireWire audiofire 2, which seems to have been discontinued in favor of echo’s new USB interface (though the audiofire 4 is still available, and is mostly the same). i went with 64 frames/period, 48000 sample rate, 3 periods/buffer . . . which got me 4ms latency. that’s just under half the 8ms+ latency i had with the firewire-based af2.
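for reference, those settings boil down to a jackd invocation like this (qjackctl just wraps it, and the alsa device name here is a guess):

jackd -d alsa -d hw:K6 -r 48000 -p 64 -n 3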

at these settings, qjackctl reported about 18-20% CPU usage, idling around 0.39-5.0% with no activity. i only have a 1.5ghz core2duo processor from 2007, so any time the CPU clocks down to 1.0ghz, i expect the utilization numbers to jump up. switching from the ondemand to performance governor helps a bit, raising the processor speed all the way up.

playing a raw .wav file through mplayer’s JACK output worked just fine. next, i started ardour 3, and that’s where the troubles began. ardour has shown a distressing tendency to crash jackd and/or the interface, sometimes without any explanation in the logs. one second the ardour window is there, the next it’s gone.

i tried renoise next, and loaded up an old tracker project, from my creative one-a-day: day 316, beta decay. this piece isn’t too demanding: it’s sample-based, with a few audio channels, a send, and a few FX plugins on each track.

playing this song resulted in 20-32% CPU utilization, though at least renoise crashed less often than ardour. renoise feels noticeably more stable than the snapshot of ardour3 i built on july 9th.

i wasn’t very thrilled with how much work my machine was doing, since the CPU load was noticeably better with the af2. though this is to be expected: with firewire, the CPU doesn’t have to do so much processing of the audio streams, since the work is offloaded onto the firewire bus. with usb, all traffic goes through the CPU, so that takes more valuable DSP resources.

still, time to up the ante. i raised the sample rate to 96000, restarted JACK, and reloaded the renoise project. now i had 2ms latency…much lower than i ever ran with the af2. this low latency took more cycles to run, though: CPU utilization was between 20% and 36%, usually around 30-33%.

i haven’t yet tested the device on my main workstation, since that desktop computer is still dead. i’m planning to rebuild it, moving from an old AMD dualcore CPU to a recent Intel Ivy Bridge chip. that should free up enough resources to create complex projects while simultaneously playing back and recording high-quality audio.

first thoughts

i’m a bit concerned that for a $200 best-in-class USB2.0 class-compliant device, it’s not working as perfectly as i’d hoped. all 6/6 inputs and outputs present themselves correctly in the JACK window, but the KA6 doesn’t show up as a valid ALSA mixer device if i wanted to just listen to music through it, without running JACK.

i’m also concerned that the first few times i plug it in and start it, it’s mostly rock-solid, with no xruns (even at 4ms) appearing unless i run certain (buggy) applications. however, it’s xrun/crash-prone at a sample rate of 96000, forcing me to step down to 48000. i normally work at that latter rate anyway, but still…i should be able to get the higher quality rates. perhaps a few more reboots might fix this.

it could be that one of the three USB ports on this laptop shares a bus with another high-traffic device, which means there could be bandwidth and/or IRQ conflicts. i’m also running kernel 3.5.3 (ck-sources), with alsa-lib 1.0.25, and there might have been driver fixes in the 3.6 kernel and alsa-lib 1.0.26. i’m also using JACK1, version 0.121.3, rather than the newer JACK2. after some upgrades, i’ll do some more testing.

early verdict: the KA6 should work perfectly on linux, but higher sample rates and lowest possible latency are still out of reach. sound quality is good, build quality is great. ALSA backend support is weak to nonexistent; i may have to do considerable triage and hacking to get it to work as a regular audio playback device.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
How to run a tinderbox with my scripts (November 03, 2012, 03:57 UTC)

Hello there everybody, today’s episode is dedicated to setting up a tinderbox instance like mine, which builds and installs every visible package in the tree, runs its tests, and so on.

So the first step is to have a system on which to run the tinderbox. A virtual system is much preferred, since the tinderbox can easily install very insecure code, although nothing prevents you from running it straight on the metal. My choice for this, after Tiziano pointed me in that direction, was to get LXC to handle it, as a chroot on steroids (the original implementation used chroot and was much less reliable).

Now, there are a number of degrees you could be running the tinderbox at; most of the basics are designed to work with almost every package in the system broken — there are only a few packages that are needed for this system to work. Here’s my world file on the two tinderboxes:

app-misc/screen
app-portage/gentoolkit
app-portage/portage-utils
dev-java/java-dep-check
dev-lang/python:2.7
net-analyzer/netcat6
net-misc/curl

But let’s do things in order. What do I do when I run the tinderbox? I connect over SSH on IPv6 – the tinderbox has very limited Internet connectivity, as everything is proxied by a Squid instance, like I described in this two-year-old post – directly as root, unfortunately (but only with key auth). Then I either start or reconnect to a screen instance, which is where the tinderbox is running (or will be running).

The tinderbox’s scripts are on git and are written partially by me and partially by Zac (following my harassment for the most part, and he’s done a terrific job). The key script is tinderbox-continuous.sh, which simply keeps executing the tinderbox on 200 packages at a time, either ad infinitum or going through a file given as a parameter (this way there is an emerge --sync from time to time, so that the tree doesn’t get stale). There is also fetch-reverse-deps.sh which, as the name says, fetches the reverse dependencies of a given package; it pairs with the continuous script above when I do a targeted run.

On the configuration side, /etc/portage/make.conf has to refer to /root/flameeyes-tinderbox/tinderbox.make.conf, which comes from the repository and sets up features, verbosity levels, and the fetch/resume commands to use curl. These are also set up so that if there is a TINDERBOX_PROXY environment variable set, it’ll go through it. Setting TINDERBOX_PROXY and a couple more variables is done in /etc/portage/make.tinderbox.private.conf; you can use it for setting GENTOO_MIRRORS to something that is easily and quickly reachable, as there’s a lot to download!
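In practice that wiring is just Portage’s source directive; a sketch of the relevant make.conf lines (the exact split between the files is up to you):

source /root/flameeyes-tinderbox/tinderbox.make.conf
source /etc/portage/make.tinderbox.private.conf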

But what does this get us? Just a bunch of files in /var/log/portage/build. How do I analyze them? Originally I did this by using grep within Emacs, looking at them file by file. Since I was opening the bugs with Firefox running on the same system, I could very easily attach the logs. This is no longer possible, so that’s why I wrote a log collector, which is also available, and which is designed as two components: a script that receives (over IPv6 only, and within the virtual network of the host) the log being sent with netcat and tar, removes colour escape sequences, and writes it out as an HTML file (in a way that Chrome does not explode on) on Amazon’s S3, also counting how many of the known warnings are observed and whether the build, or the tests, failed — this data is saved to SimpleDB.
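A rough sketch of the two ends of that pipe (host, port and file names are placeholders):

# sending side: ship a build log to the collector over the virtual network
tar cf - build.log | nc6 collector 28011
# receiving side: strip the colour escape sequences before uploading to S3
sed -e 's/\x1b\[[0-9;]*m//g' raw.log > clean.html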

Then there is a simple Sinatra-based interface that can be run on any computer (I run it locally on my laptop), which fetches the data from SimpleDB and displays it in a table with links to the build logs. This also has a link to the pre-filled bug template (it uses a local file where emerge --info is saved as comment #0).

Okay, so this is the general gist of it. If I have some more time this weekend I’ll draw some cute diagram for it, and you can all tell me that it’s overcomplicated and that if I had done it in $whatever it would have been much easier, but at the same time you won’t be providing any replacement, or if you do start working on it, you’ll spend months designing the schema of the database, with a target of next year, which will not be met. I’ve been there.

November 01, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Slock 1.1 background colour (November 01, 2012, 13:43 UTC)

If you use the slock application, like I do, you may have noticed a subtle change with the latest release (which is version 1.1). That change is that the background colour is now teal-like when you start typing your password in order to disable slock, and get back to using your system. This change came from a dual-colour patch that was added to version 1.1.

I personally don’t like the change, and would rather have my screen simply stay black until the correct password is entered. Is it a huge deal? No, of course not. However, I think of it as just one additional piece of security via obscurity. In any case, I wanted it back the way it was pre-1.1. There are a couple of ways to accomplish this goal. The first is to build the package from source. If your distribution doesn’t come with a packaged version of slock, you can do this easily by downloading the slock-1.1 tarball, unpacking it, and modifying config.mk accordingly. The config.mk file looks like this:


# slock version
VERSION = 1.0-tip

# Customize below to fit your system

# paths
PREFIX = /usr/local

X11INC = /usr/X11R6/include
X11LIB = /usr/X11R6/lib

# includes and libs
INCS = -I. -I/usr/include -I${X11INC}
LIBS = -L/usr/lib -lc -lcrypt -L${X11LIB} -lX11 -lXext

# flags
CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"
CFLAGS = -std=c99 -pedantic -Wall -Os ${INCS} ${CPPFLAGS}
LDFLAGS = -s ${LIBS}

# On *BSD remove -DHAVE_SHADOW_H from CPPFLAGS and add -DHAVE_BSD_AUTH
# On OpenBSD and Darwin remove -lcrypt from LIBS

# compiler and linker
CC = cc

# Install mode. On BSD systems MODE=2755 and GROUP=auth
# On others MODE=4755 and GROUP=root
#MODE=2755
#GROUP=auth

With the line applicable to background colour being:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"

In order to change it back to the pre-1.1 background colour scheme, simply modify -DCOLOR2 to be the same as -DCOLOR1:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"black\"

but note that you do not need the extra set of escaping backslashes when you are using the colour name instead of the hex representation.

If you use Gentoo, though, and you’re already building each package from source, how can you make this change yet still install the package through the system package manager (Portage)? Well, you could try to edit the file, tar it up, and place the modified tarball in the /usr/portage/distfiles/ directory. However, you will quickly find that issuing another emerge slock results in that file getting overwritten, and you’re back to where you started. Instead, the package maintainer (Jeroen Roovers) was kind enough to add the ‘savedconfig’ USE flag to slock on 29 October 2012. To take advantage of this great USE flag, you first need to have Portage build slock with the USE flag enabled by putting it in /etc/portage/package.use:

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use

Then, you are free to edit the saved config.mk, which is located at /etc/portage/savedconfig/x11-misc/slock-1.1. After recompiling with the ‘savedconfig’ USE flag and the modifications of your choice, slock should exhibit the behaviour that you expect, as sketched below.
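Putting the whole ‘savedconfig’ workflow together, it looks roughly like this (a sketch; the editor step is the manual COLOR2 change described above):

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use
emerge --oneshot x11-misc/slock    # first build saves config.mk under /etc/portage/savedconfig
${EDITOR} /etc/portage/savedconfig/x11-misc/slock-1.1    # change -DCOLOR2 to \"black\"
emerge --oneshot x11-misc/slock    # rebuild using the edited config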

Hope that helps!

Cheers,
Zach

October 31, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)

I guess it’s time for a new post on what’s the status with Gentoo Linux right now. First of all, the tinderbox is munching as I write. Things are going mostly smooth but there are still hiccups due to some developers not accepting its bug reports because of the way logs are linked (as in, not attached).

Like last time that I wrote about it, four months ago, this is targeting GCC 4.7, GLIBC 2.16 (which is coming out of masking next week!) and GnuTLS 3. Unfortunately, there are a few (biggish) problems with this situation, mostly related to the Boost problem I noted back in July.

What happens is this:

  • you can’t use any version of boost older than 1.48 with GCC 4.7 or later;
  • you can’t use any version of boost older than 1.50 with GLIBC 2.16;
  • many packages don’t build properly with boost 1.50 and later;
  • a handful of packages require boost 1.46;
  • boost 1.50-r2 and later (in Gentoo) no longer support eselect boost, which makes most of the packages using Boost fail to build at all.

This kind of screwup is a major setback, especially since Mike (understandably) won’t wait any longer to unmask GLIBC 2.16 (he waited a month; the Boost maintainers had all the time to fix their act, which they didn’t — it’s now time somebody with common sense took over). So the plan right now is for me and Tomáš to pick up the can of worms and un-slot Boost quite soon. This is going to solve enough problems that we’ll all be very happy about it, as most of the automated checks for Boost will then work out of the box. It’s also going to reduce the disk space used by your install, although it might require you to rebuild some C++ packages; I’m sorry about that.

As for GnuTLS, version 3.1.3 is going to hit unstable users at the same time as glibc-2.16, and hopefully the same will be true for stable when that happens. Unfortunately there are still a number of packages not fixed to work with GnuTLS 3, so if you see a package you use (with GnuTLS) in the tracker, it’s time to jump on fixing it!

Speaking of GnuTLS, we’ve also had a smallish screwup this morning when libtasn1 version 3 also hit the tree unmasked — it wasn’t supposed to happen, and it’s now masked, as only GnuTLS 3 builds fine with it. Since upstream really doesn’t care about GnuTLS 2 at this point, I’m not interested in trying to get that to work nicely, and since I don’t see any urgency in pushing libtasn1 v3 as is, I’ll keep it masked until GNOME 3.6 (as gnome-keyring also does not build with that version, yet).

Markos has correctly noted that the QA team – i.e., me – is no longer maintaining the DevManual. It is now a separate project under QA (though I’d rather say it’s shared between QA and Recruiters), and the Git repository is now writable by any developer. Of course, if you play around with it on master without knowing what you’re doing, you’ll be terminated.

There’s also the need to convert the DevManual to something that makes sense. Right now it’s a bunch of files all called text.xml, which makes editing a nightmare. I did start working on that two years ago, but it’s tedious work and I don’t want to do it in my free time; I’d rather not have to do it even while being paid for it, really. If somebody feels they can handle the conversion, I’d actually consider paying them to do the job. How much? I’d say around $50. The desirable format is something that doesn’t make a person feel like taking their eyes out when trying to edit it with Emacs (and vim, if you feel generous): my branch used DocBook 5, which I rather fancy, as I’ve used it for Autotools Mythbuster, but RST or Sphinx would probably be okay as well, as long as no formatting is lost along the way. Update: Ben points out he already volunteered to convert it to RST; I’ll wait for that before saying anything more.

Also, we’re looking for a new maintainer for ICU (and I’m pressing Davide to take the spot), as things like the bump to 50 should have been handled more carefully. Especially now that it appears to be breaking a quarter of its dependencies when used with GCC 4.7 — both the API and ABI of the library change entirely depending on whether you’re using GCC 4.6 or 4.7, as it leverages C++11 support in the latter. I’m afraid this is just the first of a series of libraries making this kind of change, and we’re all going to suffer through it.

I guess this is all for now.

October 30, 2012
Greg KH a.k.a. gregkh (homepage, stats, bugs)
Help Wanted (October 30, 2012, 19:03 UTC)

I'm looking for someone to help me out with the stable Linux kernel release process. Right now I'm drowning in trees and patches, and could use someone to help me sanity-check the releases I'm doing.

Specifically, I'm looking for someone to help with:

  • test boot the -rc stable kernels to make sure I didn't do anything foolish.
  • dig through the Linux kernel distro trees and send me the git commit ids, or the backported patches, of things they are shipping that are not in the stable and longterm kernel releases.
  • do code review of the patches going into the stable releases.

If you can help out with this, I'd really appreciate it.

Note, this is not a long-term position, only 6 months or so; I figure you'll be tired of it by then and want to move on to something else, which is fine.

In return, you get:

  • your name in the stable releases as someone who has signed-off-by on patches going into it.
  • better knowledge of more kernel subsystems than you ever had in the past, and probably more than you really want.
  • free beverages of your choice at any Linux conference you attend that I am at (given my travel schedule, seems to be just about all of them.)

If anyone is interested in this, here are the 5 steps you need to do to "apply" for the position:

  • email me with the subject line starting with "[Stable tree help]"
  • email me "proof" you are running the latest stable -rc kernel at the moment.
  • send a link to some kernel patches you have done that were accepted into Linus's tree.
  • send a link to any Linux distro kernel tree where they keep their patches.
  • say why you want to do this type of thing, and what amount of time you can spend on it per week.

I'll close the application process in a week, on November 7, 2012. After that I'll contact everyone who applied and follow up with some questions over email. I'll also post something here to say what the response was like.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Munin, sensors and IPMI (October 30, 2012, 17:47 UTC)

In my previous post about Munin I said that I was still working on making sure that the async support would reach Gentoo in a way that actually worked. With version 2.0.7-r5 this is now largely the case, and it’s documented on the Wiki for you all to use.

Unfortunately, while testing it, I found out that one of the boxes I’m monitoring, the office’s firewall, was going crazy if I used the async spooled node, reporting fan speeds way too low (87 RPM) or way too high (300K RPM), with similar effects on the temperatures as well. This also seems to have caused the fans to go out of control and run constantly at 4K RPM instead of their usual 2K RPM. The kernel log showed that something was going wrong with the i2c access, which is what the sensors program uses.

I started looking into the sensors_ plugin that comes with Munin, which I already knew a bit as I had fixed it to match some of my systems before… and the problem is that for each box being monitored, it would have to execute sensors six times: twice for each of the three graphs (fan speed, temperature, voltages), once for config and once for fetching the data. And since there is no way to tell it to fetch only some of the data instead of all of it, many transactions had to go over the i2c bus, all at the same time (when using munin async, the plugins are fetched in parallel). Understanding that the situation was next to unsolvable with the original code, and having one day “half off” at work, I decided to write a new plugin.

This time, instead of using the sensors program, I decided to just access /sys directly. This is quite a bit faster and lets you pinpoint exactly which data you need to fetch. In particular, during the config step there is no reason to fetch the actual values, which saves many i2c transactions just there. While at it, I also made it a multigraph plugin instead of the old wildcard one, so that you only need to call it once and it’ll prepare, serially, all the available graphs: in addition to those that were supported before, which included power – as it’s exposed by the CPUs on Excelsior – I added a few that I haven’t been able to try but are documented by the hwmon sysfs interface, namely current and humidity.
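As a rough illustration of what reading hwmon data straight from /sys looks like, here’s a minimal shell sketch (not the actual plugin code, which is in Perl; on some kernels the attributes live under a device/ subdirectory instead):

for hw in /sys/class/hwmon/hwmon*; do
        name=$(cat "$hw/name")    # the controller, e.g. coretemp
        for f in "$hw"/temp*_input "$hw"/fan*_input; do
                [ -e "$f" ] || continue
                chan=$(basename "$f" _input)
                # temperatures are reported in millidegrees Celsius, fan speeds in RPM
                echo "$name.$chan = $(cat "$f")"
        done
done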

The new plugin is available in the contrib repository – which I haven’t found a decent way to package yet – as sensors/hwmon, and is still written in Perl. It’s definitely faster, has fewer dependencies, and is definitely more reliable, at least on my firewall. Unfortunately, there is one feature that is missing: sensors would sometimes report an explicit label for temperature data… but that’s entirely handled in userland. Since we’re reading the data straight from the kernel, most of those labels are lost. For drivers that do expose those labels, such as coretemp, they are used, though.

We also lose the ability to ignore values from the get-go, as I described before, but you can’t always win. You’ll have to ignore the graph data from the master instead; otherwise you might want to find a way to tell the kernel not to report that data. The same is probably true for the names, although unfortunately…


[temp*_label] Should only be created if the driver has hints about what this temperature channel is being used for, and user-space doesn’t. In all other cases, the label is provided by user-space.

But I wouldn’t be surprised if it was possible to change that a tiny bit. Also, while it does forfeit some of the labeling that the sensors program does, I was able to make it nicer when anonymous data is present — it wasn’t so rare to have more than one temp1 value, as that was the first temperature channel for each of the (multiple) controllers, such as the Super I/O, ACPI thermal zone, and video card. My plugin outputs the controller and the channel name, instead of just the channel name.

After I completed and tested my hwmon plugin, I moved on to re-rewrite the IPMI plugin. If you remember the saga, I first rewrote the original ipmi_ wildcard plugin as freeipmi_, including support for the same wildcards as ipmisensor_, so that instead of using OpenIPMI (and gawk) it would use FreeIPMI (and awk). The reason was that FreeIPMI can cache SDR information automatically, whereas OpenIPMI does have support for caching, but you have to handle it manually. The new plugin was also designed to work for virtual nodes, akin to the various SNMP plugins, so that I could monitor some of the servers we have in production where I can’t install Munin, or can’t install FreeIPMI. I have replaced the original IPMI plugin, which I was never able to get working on any of my servers, with my version in Gentoo for Munin 2.0. I expect Munin 2.1 to come with the FreeIPMI-based plugin by default.

Unfortunately, like the sensors_ plugin, my plugin was calling the command six times per host — although this does allow you to filter for the type of sensors you want to receive data for. And it became even worse when you have to monitor foreign virtual nodes. How did I solve that? I decided to rewrite it to be multigraph as well… but that was difficult to handle in shell script, which means it’s now also written in Perl. The new freeipmi, non-wildcard, virtual-node-capable plugin is available in the same repository and directory as hwmon. My network switch thanks me for that.

Of course, the async node unfortunately still does not support multiple hosts; that’s something for later on. In the meantime, though, it does spare me lots of grief, and I’m happy I took the time to work on these two plugins.

Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
grsec and PulseAudio (and Gentoo) (October 30, 2012, 08:49 UTC)

This problem seems to bite some of our hardened users a couple of times a year, so I thought I’d blog about it. If you are using grsec and PulseAudio, you must not enable CONFIG_GRKERNSEC_SYSFS_RESTRICT in your kernel, or autodetection of your cards will fail.

PulseAudio’s module-udev-detect needs to access /sys to discover what cards are available on the system, and that kernel option disallows this for anyone but root.
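If you’re not sure whether your kernel was built with that option, here’s a quick check (a sketch assuming CONFIG_IKCONFIG_PROC is enabled, so the config is exposed at /proc/config.gz):

zgrep GRKERNSEC_SYSFS_RESTRICT /proc/config.gz
# or, against the sources the kernel was built from:
grep GRKERNSEC_SYSFS_RESTRICT /usr/src/linux/.config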

October 26, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
moving services around (October 26, 2012, 15:53 UTC)

A few days ago the box that was hosting our low-risk webapps (barbet.gentoo.org) died. The services affected are get.gentoo.org, planet.gentoo.org, packages.gentoo.org, devmanual.gentoo.org, infra-status.gentoo.org and bouncer.gentoo.org. We quickly migrated the services to another box (brambling.gentoo.org). Brambling had issues with its RAM in the past, but we replaced the modules with new ones a couple of months ago; additionally, this machine was used for testing only. Unfortunately the machine started to malfunction as soon as those services were transferred there, which means that it has more hardware issues than just the RAM. The resulting error messages stopped when we disabled packages.gentoo.org temporarily. The truth is that the packages webapp is old, unmaintained, uses deprecated interfaces and is a real pain to debug.

In this year’s GSoC we had a really nice replacement by Slava Bacherikov, written in Django. Additionally, we were recently given a Ganeti cluster hosted at OSUOSL. Thus we decided not to bring the old packages.gentoo.org instance back up, and instead to create 4 virtual machines in our Ganeti cluster and migrate the above webapps there, along with the new and shiny packages.gentoo.org website. Furthermore, we will also deploy another GSoC webapp, gentoostats, and start providing our developers with virtual machines. We will not give public IPv4 addresses to the dev VMs, though; we will probably use IPv6 only, so that developers can access them through woodpecker (the box where the developers have their shell accounts), but this is still under discussion. We have already started working on the above, and we expect to be fully finished next week, with the new webapps live and rocking. Special thanks to Christian and Alec, who took care of the migrations before and during the Gentoo Miniconf.

October 25, 2012
Markos Chandras a.k.a. hwoarang (homepage, stats, bugs)
Gentoo Recruitment: How do we perform? (October 25, 2012, 18:53 UTC)

A couple of days ago, Tomas and I gave a presentation at the Gentoo Miniconf. The subject of the presentation was to give an overview of the current recruitment process, how we are performing compared to previous years, and what other ways there are for users to help us improve our beloved distribution. In this blog post I am gonna get into some details that I did not have time to address during the presentation regarding our recruitment process.

 

Recruitment Statistics

Recruitment Statistics from 2008 to 2012

Looking at the previous graph, two things are obvious. First of all, the number of people who want to become developers has decreased every year. Second, we have a significant number of people who did not manage to become developers. Let me express my personal thoughts on these two things.

For the first one, my opinion is that these numbers are directly related to Gentoo’s reputation and its “infiltration” among power users. It is not a secret that Gentoo is not as popular as it used to be. Some people think this is because of the quality of our packages, or because of how frequently we cause headaches for our users. Other people think that the “I want to compile every bit of my Linux box” trend belongs to the past, and that people nowadays want to spend less time maintaining and updating their boxes and more time doing actual work. Either way, for the past few years we have been losing people, or to state it better, we are not “hiring” as many as we used to. Ignoring those who did not manage to become developers, we must admit that the absolute numbers are not in our favor. One may say that 16 developers for 2011-2012 is not bad at all, but we aim for the best, right? What bothers me the most is not the number of people we recruit, but that this number has been falling constantly for the last 5 years…

As for the second observation, we see that every year around 4-5 people give up and decide not to become developers after all. Why is that? The answer is obvious: our long, painful, exhausting recruitment process drives people away. From my experience, it takes about 2 months from the time your mentor opens your bug until a recruiter picks you up. This obviously kills someone’s motivation; they lose interest, get busy with other stuff, and eventually disappear. We tried to improve this process by creating a webapp two years ago, but it did not work out well, so we are now back to square one. We really can’t afford to lose developers because of our recruitment process. It is embarrassing, to say the least.

Again, is there anything that can be done? Definitely yes. I’d say, we need an improved or a brand new web application that will focus on two things:

1) make the review process between mentor <-> recruit easier

2) make the final review process between recruit <-> recruiter an enjoyable learning process

Ideas are always welcome. Volunteers and practical solutions even more so ;) In the meantime, I am considering using Google+ Hangouts for the face-to-face interview sessions with the upcoming recruits. This should bring some fresh air to this process ;)

The entire presentation can be found here.

October 24, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Gentoo Miniconf 2012 (October 24, 2012, 11:07 UTC)

The Gentoo Miniconf is over now, but it was a great success. There were 30+ developers attending, and I met quite a few users too. Thanks to Theo (tampakrap) and Michal (miska) for organizing the event (along with others), thanks to openSUSE for sponsoring it and letting the Gentoo Linux guys hang out there. Thanks to the other sponsors too: Google, Aeroaccess, et al.

More pics at the Google+ event page.

It was excellent to meet all of you.

October 23, 2012
Launching Gentoo VMs on okeanos.io (October 23, 2012, 13:50 UTC)

Long time, no post.

For about a year now, I’ve been working at GRNET on its (OpenStack API compliant) open source IaaS cloud platform Synnefo, which powers the ~okeanos service.

Since ~okeanos is mainly aimed towards the Greek academic community (and thus has restrictions on who can use the service), we set up a ‘playground’ ‘bleeding-edge’ installation (okeanos.io) of Synnefo, where anyone can get a free trial account, experiment with the Web UI, and have fun scripting with the kamaki API client. So, you get to try the latest features of Synnefo, while we get valuable feedback. Sounds like a fair deal. :)

Unfortunately, as the only one on our team who actually uses Gentoo Linux, up until recently I had no Gentoo VMs available. So, a couple of days ago I decided it was about time to get a serious distro running on ~okeanos (the load on our servers had been ridiculously low after all :P ). For future reference, and in case anyone wants to upload their own image to okeanos.io or ~okeanos, I’ll briefly describe the steps I followed.

1) Launch a Debian-base (who needs a GUI?) VM on okeanos.io

Everything from here on is done inside our Debian-base VM.

2) Use fallocate or dd seek= to create an (empty) file large enough to hold our image (5GB)

fallocate -l $((5 * 1024 * 1024 * 1024)) gentoo.img
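The dd seek= variant mentioned above creates the same sparse file; a sketch relying on GNU dd size suffixes:

# write no data, just extend the file to 5 GiB
dd if=/dev/zero of=gentoo.img bs=1 count=0 seek=5G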

3) Losetup the image, partition and mount it

losetup -f gentoo.img
parted /dev/loop0 mklabel msdos
parted /dev/loop0 mkpart primary ext4 2048s 5G
kpartx -a /dev/loop0
mkfs.ext4 /dev/mapper/loop0p1
losetup /dev/loop1 /dev/mapper/loop0p1    # trick needed for the grub2 installation later on
mount /dev/loop1 /mnt/gentoo -t ext4 -o noatime,nodiratime

4) Chroot and install Gentoo in /mnt/gentoo. Just follow the handbook. At a minimum you’ll need to extract the base system and portage, and set up some basic configs, like networking. It’s up to you how much you want to customize the image. For the Linux kernel, I just copied the Debian /boot/[vmlinuz|initrd|System.map] and /lib/modules/ directly from the VM (and it worked! :) ).

5) Install sys-boot/grub-2.00 (I had some *minor* issues with grub-0.97 :P ).

6) Install grub2 in /dev/loop0 (this should help). Make sure your device.map inside the Gentoo chroot looks like this:

(hd0) /dev/loop0
(hd1) /dev/loop1

and make sure you have a sane grub.cfg (I’d suggest replacing all references to UUIDs in grub.cfg and /etc/fstab with /dev/vda[1]).
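For illustration, a minimal menuentry along those lines might look like this (a sketch; the kernel and initrd file names are whatever you copied over in step 4):

menuentry "Gentoo" {
    set root=(hd0,msdos1)
    linux /boot/vmlinuz root=/dev/vda1
    initrd /boot/initrd.img
}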
Now, outside the chroot, run:

grub-install --root-directory=/mnt --grub-mkdevicemap=/mnt/boot/grub/device.map /dev/loop0

Cleanup everything (umount, losetup -d, kpartx -d etc), and we’re ready to upload the image, with snf-image-creator.

snf-image-creator takes a diskdump as input, launches a helper VM, cleans up the diskdump / image (cleanup of sensitive data etc), and optionally uploads and registers our image with ~okeanos.

For more information on how snf-image-creator and Synnefo image registry works, visit the relevant docs [1][2][3].

0) Since snf-image-creator will use qemu/kvm to spawn a helper VM, and we’re inside a VM, let’s make sure that nested virtualization (OSDI ’10 Best Paper award btw :) ) works.

First, we need to make sure that kvm_[amd|intel] is modprobe’d on the host machine / hypervisor with the nested = 1 parameter, and that the vCPU that qemu/kvm creates thinks it has ‘virtual’ virtualization extensions (that’s actually our responsibility, and it’s enabled on the okeanos.io servers).

Inside our Debian VM, let’s verify that everything is ok.

egrep '(vmx|svm)' /proc/cpuinfo
modprobe -v kvm kvm_intel
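To double-check that the nested parameter actually made it through, the module parameters are exposed under /sys (a quick sanity check; on AMD hardware substitute kvm_amd):

cat /sys/module/kvm_intel/parameters/nested    # should print Y (or 1)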

1) Clone snf-image-creator repo

git clone https://code.grnet.gr/git/snf-image-creator

2) Install snf-image-creator using setuptools (./setup.py install) and optionally virtualenv. You’ll need to install (pip install / aptitude install etc) setuptools, (python-)libguestfs and python-dialog manually. setuptools will take care of the rest of the deps.

3) Use snf-image-creator to prepare and upload / register the image:

snf-image-creator -u gentoo.diskdump -r "Gentoo Linux" -a [okeanos.io username] -t [okeanos.io user token] gentoo.img -o gentoo.img --force

If everything goes as planned, after snf-image-creator terminates, you should be able to see your newly uploaded image in https://pithos.okeanos.io, inside the Images container. You should also be able to choose your image to create a new VM (either via the Web UI, or using the kamaki client).

And, let’s install kamaki to spawn some Gentoo VMs:

git clone https://code.grnet.gr/git/kamaki

and install it using setuptools (just like snf-image-creator). Alternatively, you could use our Debian repo (you can find the GPG key here).

Modify .kamakirc to match your credentials:

[astakos]
enable = on
url = https://astakos.okeanos.io
[compute]
cyclades_extensions = on
enable = on
url = https://cyclades.okeanos.io/api/v1.1
[global]
colors = on
token = [token]
[image]
enable = on
url = https://cyclades.okeanos.io/plankton
[storage]
account = [username]
container = pithos
enable = on
pithos_extensions = on
url = https://pithos.okeanos.io/v1

Now, let’s create our first Gentoo VM:

kamaki server create LarryTheCow 37 `kamaki image list | grep Gentoo | cut -d ' ' -f 1` --personality /root/.ssh/authorized_keys

That’s all for now. Hopefully, I’ll return soon with another, more detailed post on scripting with kamaki (vkoukis has a nice script using the kamaki Python lib to create a small MPI cluster from scratch on ~okeanos :) ).

Cheers!


October 22, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
May I have a network connection, please? (October 22, 2012, 15:31 UTC)

If you’re running ~arch, you probably noticed by now that the latest OpenRC release no longer allows services to “need net” in their init scripts. This change has caused quite a bit of grief because some services, including Apache, no longer start after a reboot or a restart. Edit: this only happens if you have corner-case configurations such as an LXC guest. As William points out, the real change is simply that net.lo no longer provides the net virtual, but the other network interfaces do.

While it’s impossible to say that this is not annoying as hell, it could be much worse. Among other reasons, because it’s really trivial to work around until the init scripts themselves are properly fixed. How? You just need to append the line rc_need="!net" to /etc/conf.d/$SERVICENAME — if the configuration file does not exist, simply create it.

Interestingly enough, knowing this workaround also allows you to do something even more useful, that is making sure that services requiring a given interface being up depend on that interface. Okay it’s a bit complex, let me backtrack a little.

Most of the server daemons out there don’t really care how many interfaces you have, which ones they are, or what they are named. They either bind to the “catch-all” address (0.0.0.0 or :: depending on the version of the IP protocol — the latter can also be used to catch both IPv4 and IPv6, but that’s a different story altogether), bind to a particular IP address, or bind to a particular interface; the last is quite rare and usually only has to do with the actual physical address, as with RADVD or DHCP.

Now, to bind to a particular IP address, you really need to have the address assigned to the local computer, or the binding will fail. So in these cases you have to stagger the service start until the network interface with that address is up. Unfortunately, it’s extremely hard to do so automatically: you’d have to parse the configuration file of the service (which is sometimes easy and most of the time not), and then you’d have to figure out which interface will come up with that address… which is not really possible for networks that get their addresses automatically.

So how do you solve this conundrum? There are two ways and both involve manual configuration, but so do defined-address listening sockets for daemons.

The first option is to keep the daemon listening on the catch-all addresses, then use iptables to set up filtering per interface or per address, as sketched below. This is quite easy to deal with, and quite safe as well. It also has the nice side effect that you only have one place where all the IP address specifications are handled. If you ever had to restructure a network because the sysadmin before you used the wrong subnet mask, you know how big a difference that makes. I’ve found that some people think iptables also needs the interfaces to be up to work. Fortunately, this is not the case: it’ll accept any interface name as long as it could possibly be valid, and will only match it when the interface actually comes up (that’s why it’s usually a better idea to whitelist rather than blacklist there).
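As a concrete sketch of this first option (the interface name and port are made-up examples; the daemon keeps listening on the catch-all address while iptables filters per interface):

# allow MySQL only from the internal interface, drop it everywhere else
iptables -A INPUT -i lan0 -p tcp --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP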

The other option requires changing the configuration on the OpenRC side. As I showed above, you can easily manipulate the dependencies of the init scripts without having to change those scripts at all. So if you’re running a DHCP server on the LAN served by the interface named lan0 (named this way because a certain udev no longer allows you to swap the interface names with the persistent rules that it first introduced), and you want to make sure that this interface is up before dhcpd starts, you can simply add rc_need="net.lan0" to your /etc/conf.d/dhcpd. This way you can actually make sure that the services’ dependencies match what you expect — I use this to make sure that if I restart things like mysql, php-fpm is also restarted; see the sketch after this paragraph.
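That last trick is just another one-line conf.d entry; a sketch with example service names:

# /etc/conf.d/php-fpm: make php-fpm depend on mysql,
# so restarting mysql also restarts php-fpm
rc_need="mysql"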

So after giving you two ways to work around the current not-really-working-well status, why did I not complain about the situation itself? Well, the reason so many init scripts have that “need net” line is simply cargo-culting. And the big problem is that there is no good definition of what “net” is supposed to mean. I’ve seen it used (and used it myself!) for at least the following notions:

  • there are enough modules loaded that you can open sockets; this is not really a situation that I’d like to find myself to have to work around; while it’s possible to build both ipv4 and ipv6 as modules, I doubt that most things would work at all that way;
  • there is at least one network interface present on the system; this usually is better achieved by making sure that net.lo is started instead; especially since in most cases for situations like this what you’re looking for is really whether 127.0.0.1 is usable;
  • there is an external interface connected; okay sure, so what are you doing with that interface? Because I can assure you that you’ll find eth0 up… but with no cable connected, so what about it now?
  • there is Internet connectivity available; this would make sense if it wasn’t for the not-insignificant detail that you can’t really know that from the init system; this would be like having a “need userpresence” that makes sure that the init script is started only after the webcam is turned on and the user face is identified.

While some of these particular notions have use cases, the fact that there is no clear identification of what “need net” is supposed to mean makes it extremely unreliable, and at this point, especially considering all the various options (oldnet, newnet, NetworkManager, connman, flimflam, LXC, vserver, …), it’s definitely a better idea to get rid of it and not rely on it anymore. Unfortunately, this is leading us into a relative world of pain, but sometimes you have to get through it.

October 21, 2012
Gentoo on the OLPC XO-1.75 (October 21, 2012, 10:00 UTC)

Currently at the Gentoo Miniconf 2012 in Prague, we have two OLPC XO-1.75 devices and are working to install Gentoo on them.

These XO-1.75s are based on the Marvell Armada 610 SoC (armv7l, non-NEON), which promises countless hours of fun enumerating and obtaining the obscure pieces of software needed to make the laptop work.

One of these is the xf86-video-dove DDX for the Vivante(?) GPU: the most recent version, 0.3.5, seems to be available only as an SRPM in the OLPC rpmdropbox. Extracting it reveals a "source" tarball containing this:

.:
total 1364
-rw-r--r-- 1 chithanh users 423968 12. Sep 14:39 aclocal.m4
drwxr-xr-x 1 chithanh users 80 12. Sep 14:39 autom4te.cache
-rwxr-xr-x 1 chithanh users 981 12. Sep 14:37 build_no_dpkg_env.sh
-rw-r--r-- 1 chithanh users 0 12. Sep 14:37 ChangeLog
lrwxrwxrwx 1 chithanh users 37 12. Sep 14:39 config.guess -> /usr/share/automake-1.12/config.guess
-rw-r--r-- 1 chithanh users 2120 12. Sep 14:40 config.h
-rw-r--r-- 1 chithanh users 1846 12. Sep 14:40 config.h.in
-rw-r--r-- 1 chithanh users 43769 12. Sep 14:40 config.log
-rwxr-xr-x 1 chithanh users 65749 12. Sep 14:40 config.status
lrwxrwxrwx 1 chithanh users 35 12. Sep 14:39 config.sub -> /usr/share/automake-1.12/config.sub
-rwxr-xr-x 1 chithanh users 440014 12. Sep 14:40 configure
-rw-r--r-- 1 chithanh users 2419 12. Sep 14:37 configure.ac
-rwxr-xr-x 1 chithanh users 1325 12. Sep 14:37 COPYING
drwxr-xr-x 1 chithanh users 262 12. Sep 14:37 debian
lrwxrwxrwx 1 chithanh users 32 12. Sep 14:39 depcomp -> /usr/share/automake-1.12/depcomp
drwxr-xr-x 1 chithanh users 252 12. Sep 14:37 etc
drwxr-xr-x 1 chithanh users 44 12. Sep 14:37 fedora
lrwxrwxrwx 1 chithanh users 35 12. Sep 14:39 install-sh -> /usr/share/automake-1.12/install-sh
-rwxr-xr-x 1 chithanh users 293541 12. Sep 14:40 libtool
lrwxrwxrwx 1 chithanh users 35 12. Sep 14:39 ltmain.sh -> /usr/share/libtool/config/ltmain.sh
-rw-r--r-- 1 chithanh users 27005 12. Sep 14:40 Makefile
-rw-r--r-- 1 chithanh users 1167 12. Sep 14:37 Makefile.am
-rw-r--r-- 1 chithanh users 25708 12. Sep 14:40 Makefile.in
drwxr-xr-x 1 chithanh users 76 12. Sep 14:40 man
lrwxrwxrwx 1 chithanh users 32 12. Sep 14:39 missing -> /usr/share/automake-1.12/missing
-rw-r--r-- 1 chithanh users 4169 12. Sep 14:37 README
drwxr-xr-x 1 chithanh users 1192 12. Sep 21:48 src
-rw-r--r-- 1 chithanh users 23 12. Sep 14:40 stamp-h1

src/:
total 688
-rw-r--r-- 1 chithanh users 3555 12. Sep 14:41 compat-api.h
-rw-r--r-- 1 chithanh users 805 12. Sep 14:37 datatypes.h
-rw-r--r-- 1 chithanh users 55994 12. Sep 21:22 dovefb.c
-rw-r--r-- 1 chithanh users 32160 12. Sep 15:11 dovefb_cursor.c
-rw-r--r-- 1 chithanh users 278 12. Sep 17:09 dovefb_cursor.lo
-rw-r--r-- 1 chithanh users 6052 12. Sep 14:41 dovefb_driver.h
-rw-r--r-- 1 chithanh users 974 12. Sep 17:09 dovefb_drv.la
-rw-r--r-- 1 chithanh users 13856 12. Sep 14:37 dovefb.h
-rw-r--r-- 1 chithanh users 264 12. Sep 17:09 dovefb.lo
-rw-r--r-- 1 chithanh users 128733 12. Sep 15:11 dovefb_xv.c
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 dovefb_xv.lo
-rw-r--r-- 1 chithanh users 2548 12. Sep 14:53 list.h
-rw-r--r-- 1 chithanh users 22242 12. Sep 17:08 Makefile
-rw-r--r-- 1 chithanh users 2121 12. Sep 14:37 Makefile.am
-rw-r--r-- 1 chithanh users 2134 12. Sep 14:37 Makefile.am.sw
-rw-r--r-- 1 chithanh users 21742 12. Sep 14:40 Makefile.in
-rw-r--r-- 1 chithanh users 18584 12. Sep 15:11 mrvl_crtc.c
-rw-r--r-- 1 chithanh users 856 12. Sep 14:37 mrvl_crtc.h
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 mrvl_crtc.lo
-rw-r--r-- 1 chithanh users 851 12. Sep 14:37 mrvl_cursor.h
-rw-r--r-- 1 chithanh users 2509 12. Sep 15:11 mrvl_debug.c
-rw-r--r-- 1 chithanh users 2284 12. Sep 14:37 mrvl_debug.h
-rw-r--r-- 1 chithanh users 272 12. Sep 17:09 mrvl_debug.lo
-rw-r--r-- 1 chithanh users 32528 12. Sep 15:11 mrvl_edid.c
-rw-r--r-- 1 chithanh users 5794 12. Sep 14:37 mrvl_edid.h
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 mrvl_edid.lo
-rw-r--r-- 1 chithanh users 84262 12. Sep 17:07 mrvl_exa_driver.c
-rw-r--r-- 1 chithanh users 282 12. Sep 17:09 mrvl_exa_driver.lo
-rw-r--r-- 1 chithanh users 10388 12. Sep 15:11 mrvl_exa_fence_pool.c
-rw-r--r-- 1 chithanh users 290 12. Sep 17:09 mrvl_exa_fence_pool.lo
-rw-r--r-- 1 chithanh users 9189 12. Sep 14:51 mrvl_exa.h
-rw-r--r-- 1 chithanh users 4258 12. Sep 14:37 mrvl_exa_profiling.h
-rw-r--r-- 1 chithanh users 46583 12. Sep 15:11 mrvl_exa_utils.c
-rw-r--r-- 1 chithanh users 3768 12. Sep 15:06 mrvl_exa_utils.h
-rw-r--r-- 1 chithanh users 280 12. Sep 17:09 mrvl_exa_utils.lo
-rw-r--r-- 1 chithanh users 20622 12. Sep 15:11 mrvl_heap.c
-rw-r--r-- 1 chithanh users 3256 12. Sep 14:53 mrvl_heap.h
-rw-r--r-- 1 chithanh users 270 12. Sep 17:09 mrvl_heap.lo
-rw-r--r-- 1 chithanh users 1774 12. Sep 15:11 mrvl_offscreen_memory.c
-rw-r--r-- 1 chithanh users 235 12. Sep 14:37 mrvl_offscreen_memory.h
-rw-r--r-- 1 chithanh users 294 12. Sep 17:09 mrvl_offscreen_memory.lo
-rw-r--r-- 1 chithanh users 47286 12. Sep 15:11 mrvl_output.c
-rw-r--r-- 1 chithanh users 274 12. Sep 17:09 mrvl_output.lo

More pictures of the Gentoo Miniconf can be found at the Google+ Event page.

October 19, 2012
Miniconf: Gentoo on the OLPC XO-1.75 (October 19, 2012, 21:02 UTC)

At the Gentoo Miniconf 2012 in Prague we will install Gentoo on the OLPC XO-1.75, an ARM based laptop designed as an educational tool for children. If you are interested in joining us, come to the Gentoo booth and start hacking with us!

—Chí-Thanh Christopher Nguyễn

October 17, 2012
2012 Gentoo Screenshot Contest Results (October 17, 2012, 20:57 UTC)

Gentoo - Still alive and kicking ...

As the quantity and quality of this year's entries will attest, Gentoo is alive, well, and taking no prisoners!

We had 70 entries for the 2012 Gentoo screenshot contest, representing 11 different window managers / desktop environments. Thanks to all who participated, to the judges, and to likewhoa for the screenshot site.

The Winners!

New subproject: kde-stable (October 17, 2012, 18:53 UTC)

If you are a KDE user, you may be interested in this new subproject:
http://www.gentoo.org/proj/en/desktop/kde/kde-stable/

Feel free to ask if you have any doubts.

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The latest news (October 17, 2012, 10:27 UTC)

Overview of What Happened

In the last few weeks, the conference team has worked hard to prepare the conference. The main news items you should be aware of are the FAQ which has been published, the party locations and times, the call to organize BoF sessions and of course the sponsors who help make the event possible. And we’re happy to tell you that we will provide live video streams from the main rooms during the event (!!!) and we have announced the Round Table sessions during the Future Media track. Last but not least, there have been some interviews with interesting speakers on the schedule!

Sneak Peek of the Conference Schedule

Let’s start with the interviews. During the last weeks, a number of interesting speakers have been interviewed, both by text and over video chat. You can find the interviews in our first sneak peek article, and more in this extensive follow-up article about the Future Media track. You can also find the video interviews on our YouTube channel and on our blip.tv channel.

Video!

Talking about video interviews, there will be more videos in those channels: the openSUSE Video team is gearing up to tape the talks at the event. They will even provide a live stream of the event, which you can watch via Flash or on a smartphone at bambuser, and via these three links as ogv feeds: Room Kirk, Room McCoy and Room Scotty. Keep an eye on the wiki page, as the team will add feeds for more rooms if we can get some more volunteers to help us out.

Round Table Sessions!

We’ve mentioned the special feature track ‘Future Media’ already and we’ve got an extra bite for you all: the track will feature two round table discussions, one about the value of Free and Open for our Society and one about the practicalities of doing ‘open’ projects. Find more in the schedule: Why open matters and How do you DO open?.

We need YOU!

Despite all our work, this event would be nothing without YOUR help. We’re still looking for volunteers to sign up, but there’s another thing we need you for: be pro-active and get the most out of this event! That means not only sitting in the talks but also stepping up and participating in the BoF sessions. And organize a BoF if you think there’s something to discuss!

Party time!

Of course, we’re also thinking about the social side of the event. Yes, there will surely be an extensive “hallway track” as we feature a nice area with booths, and the university has lots of hallways… But sometimes it’s just nice to sit down with someone over a good beer, and this is where our parties come in. As this article explains, there will be two parties: one on Friday as a warm-up (and pre-registration), and one on Saturday, rockin’ in the city center of Prague. Note that you will need your badge to enter the Saturday party, which means you have to be registered!

Sponsors

As we wrote a few days ago, all this would not be possible without our sponsors, and we’d like to thank them A LOT for their support!

Big hugs to Platinum Sponsor SUSE, Gold Sponsor Aeroaccess, Silver Sponsor Google, Bronze Sponsor B1Systems, supporters ownCloud and Univention and of course our media partners LinuxMagazine and Root.cz. Last but not least, a big shout-out to the university which is providing this location to us!

FAQ

On a practical level, we also published our Conference FAQ, answering a bunch of questions you might have about the event. If you aren’t sure about something, check it out!

More

There will be more news in the coming days; be sure to keep an eye on news.opensuse.org for articles leading up to and during the event. As one teaser: we’ve got the Speedy Geeko and Lightning Talks schedule coming soon!

Be there!

Gentoo Miniconf, oSC12 and LinuxDays will take place at the Czech Technical University in Prague. The campus is located in the Dejvice district and is next to an underground station that gets you directly to the historic city center – an opportunity you can’t miss!

We expect to welcome about 700 open source developers, testers, usability experts, artists and professional attendees to the co-hosted conferences! We work together to make one big, smashing event! Admission to the conference is completely free. However, for oSC a professional attendee ticket is available that offers some additional benefits.

All the co-hosted conferences will start on October 20th. Gentoo Miniconf and Linuxdays end on October 21st, while the openSUSE Conference ends on October 23rd. See you there!

Dane Smith a.k.a. c1pher (homepage, stats, bugs)
New Tricks, Goals, and Ideas (October 17, 2012, 01:06 UTC)

It’s been a while since I’ve done anything visible to anyone but myself. So, what the heck have I been doing?

Well, for starters, in the past year I’ve done a serious amount of work in Python. This work was one of the reasons for my lack of motivation for Gentoo: I went from doing little programming / maintenance at work to doing it 40+ hours a week, which meant I didn’t really feel up to doing more of it in my limited spare time. So I took up a few new hobbies. I got into photography (feel free to look under links for the photo website). I feel weird with the self-promotion for that type of thing, but c’est la vie.

As the programming at work died down some, I started to find odd projects. I spent some serious time learning Go [1] and did a few small projects of my own in it. One of those projects will be open sourced soon. I know a fair few different languages, and I know C, Python, and Java pretty decently. While I like all of the ones on that list, I can’t say that I truly buy into their philosophies. Python is great: it’s simple, it’s clean, and it “just works.” However, I find that, like OpenSSL, it gives you enough rope to hang yourself and everyone else in the room. The lack of strict typing, coupled with the fact that it’s a scripting language, are downsides in my eyes. C, for all that it is awesome at low-level work, requires so much verbosity to accomplish the simplest tasks that I tend to shy away from it for anything other than what must be done at that level. Java… is, well, Java. It’s a decent enough language I suppose, but being run in a VM is silly in my eyes. It, like C, suffers from being too verbose (again, merely my humble opinion).

Enter Go. Go has duck typed interfaces, unlike Java’s explicit ones. It’s compiled and strictly typed. It has other modern niceties (like proper strings), along with a strong tie to web development (another area C struggles with). It has numerous interesting concepts (check out defer), along with what I find to be a MUCH better approach to error handling than what exists in any of C, Java, or Python. Add in that it is concurrent by design and you have one serious language. I must say that I am thoroughly impressed. Serious Kudos to those Google guys for one awesome language.

I also picked up a Nexus 7 and started looking into how Android is built and works. I got my own custom ROM and Kernel working along with a nice Gentoo image on the SD Card. Can anyone say “Go compiler on my Nexus 7?” This work also led me to do some work as far as getting Gentoo booting on Amazon’s Elastic Compute Cloud. Building Android takes for-freaking-ever, so I figured.. why not do it in the cloud!? It works splendidly, and it is fast.

So that covers new tricks. You mentioned goals and ideas?!

First, it’s time to get myself off the slacker wagon and back to doing something useful. I no longer recoil at the idea of developing when I get home. That helps =p. One of the first things I want to spend some time addressing is disk encryption in Gentoo. I wrote here about the state of loop-aes. Both Loop-AES and Truecrypt need to spend a little time under the microscope as to how they should be handled within Gentoo. I’ll write more on this later when I have all my ducks in a row. I have no doubt that this will be a fun topic.

I also want to look into how a language like Go fits into Gentoo. Go has its own build system (no Makefiles, configure scripts, or anything else) that DOES have a notion of things like CFLAGS. It also has the ability to “go get” a package and install it (a rough sketch follows below). To those curious, check out their website. All of this leads to interesting questions from a package management point of view. I am inclined to think that Go is here to stay. I hope it is. So we may as well start looking into this now rather than later. As my father used to tell me all the time, “Proper Prior Planning Prevents Piss Poor Performance.” Time to plan =).
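For those who haven’t seen it, the workflow in question is roughly this (a sketch; the import path is a made-up example):

go get example.org/some/package    # fetch, build and install a package and its deps into $GOPATH
go build                           # build the package in the current directory
go test                            # run its tests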

That is, right after I sort out the fiasco that is my bug queue. *facepalm*

[1] http://golang.org

October 15, 2012
Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
box down (October 15, 2012, 07:08 UTC)

my main gentoo workstation is down. no more documentation updates from me for awhile.

it seems the desktop computer’s video card has finally bitten the dust. the monitor comes up as “no input detected” despite repeated reboots. so now i’m faced with a decision: throw in a cheap, low-end GFX card as a stopgap measure, or wash my hands of 3 to 6 years of progressive hardware failure, and do a complete rebuild. last time i put anything new in the box was probably back in 2009…said (dead) GFX card, and a side/downgraded AMD CPU. might be worth building an entirely new machine from scratch at this point.

i haven’t bothered to pay attention to the AMD-vs-Intel race for the last few years, so i’m a bit at a loss. i’ll check TechReport, SPCR, NewEgg, and all those sites, but…not being at all caught up on the bang-for-buck parts…is a bit disconcerting. i used to follow the latest trends and reviews like a true technoweenie.

and now, of course, i’m thinking in terms of what hardware lends itself to music production — USB/Firewire ports, bus latency, linux driver status for crucial bits; things like that. all very challenging to juggle after being out of it for so long.

so, who’s built their own PC lately? what’d ya use?

October 14, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Gentoo Hardened progress meeting (October 14, 2012, 13:00 UTC)

Not that long ago we had our monthly Gentoo Hardened project meeting (on October 3rd, to be exact). At these meetings, we discuss the progress of the project since the last meeting.

For our toolchain domain, Zorry reported that the PIE patchset for GCC has been updated, fixing bug #436924. Blueness also mentioned that he will most likely create a separate subproject for the alternative hardened systems (such as MIPS and ARM). This is mostly for management reasons (as the information is currently scattered throughout the Gentoo project at large).

For the kernel domain: since version 3.5.4-r2 (and higher), the kernexec and uderef settings (for grsecurity) should no longer impact performance on virtualized platforms (when hardware acceleration is used, of course), something that had been bothering Intel-based systems for quite some time. Also, the problem with guest systems immediately reserving (committing) all memory on the host should be fixed with recent kernels as well. Of course, this is only true as long as you don’t sanitize your memory; otherwise all memory gets allocated regardless.

In the SELinux subproject, we now have live ebuilds allowing users to pull in the latest policy changes directly from the git repository where we keep our policy. Also, we will see a high commit frequency in the next few weeks (or perhaps even months) as Fedora’s changes are being merged with upstream. Another change is that our patchbundles no longer contain all individual patches, but a single merged patch. This reduces the deployment time of a SELinux policy package considerably (up to 30% faster, since patching now takes only a second or less). And finally, the latest userspace utilities are in the hardened-dev overlay, ready for broader testing.

grSecurity is still focusing on the XATTR-based PaX flags. The eclass (pax-utils) has been updated, and we will now be looking at supporting the PaX extended attributes for file systems such as tmpfs.

For profiles, people will notice that in the next few weeks we will be dropping the (extremely) old SELinux profiles, as the current ones were marked stable a long time ago.

In the system integrity domain, IMA is being worked on (packages and documentation) after which we’ll move to the EVM support to protect extended attributes.

And finally, klondike gave a good talk about Gentoo Hardened at the FLOSSK conference in Kosovo.

All in all a good month of work, again with many thanks to the volunteers that are keeping Gentoo Hardened alive and kicking!

Matthew Thode a.k.a. prometheanfire (homepage, stats, bugs)
VLAN trunking to KVM VMs (October 14, 2012, 05:00 UTC)

Why this is needed

In testing Linux bridging I noticed a problem that took me much longer to figure out than I feel comfortable admitting. You cannot break out the VLANs from a physical device and also use that physical device (attached to a bridge) to forward the entire trunk to a set of VMs. The reason this occurs is that once Linux starts inspecting an interface for VLANs to split out, it discards all those you have not defined, so you have to trick it.

Setup

I had my trunk on eth1. What you need to do is directly attach eth1 to a bridge (vmbr1). This bridge now has the entire trunk associated with it. Here's the fun part: you can break out VLANs on the bridge itself, so you would have an interface for VLAN 13 named vmbr1.13, and you can then attach that to another bridge, allowing you to have a group of machines exposed only to VLAN 13.

The networking goes like this.

               /-> vmbr1.13 -> vmbr13 -> VM2
eth1 -> vmbr1 ---> VM1
               \-> vmbr1.42 -> vmbr42 -> VM3

Example

Here is the script I used with Proxmox (you can set up the bridge in Proxmox, but not the source of the bridge's data, i.e. the 'input'). This is for VLANs 2-13 and assumes you have already set up the target bridges for Vyatta. I had this run at boot (via rc.local).

vconfig add vmbr1 2
vconfig add vmbr1 3
vconfig add vmbr1 4
vconfig add vmbr1 5
vconfig add vmbr1 6
vconfig add vmbr1 7
vconfig add vmbr1 8
vconfig add vmbr1 9
vconfig add vmbr1 10
vconfig add vmbr1 11
vconfig add vmbr1 12
vconfig add vmbr1 13
ifconfig eth1 up
ifconfig vmbr1 up
ifconfig vmbr1.2 up
ifconfig vmbr1.3 up
ifconfig vmbr1.4 up
ifconfig vmbr1.5 up
ifconfig vmbr1.6 up
ifconfig vmbr1.7 up
ifconfig vmbr1.8 up
ifconfig vmbr1.9 up
ifconfig vmbr1.10 up
ifconfig vmbr1.11 up
ifconfig vmbr1.12 up
ifconfig vmbr1.13 up
brctl addif vmbr1 eth1
brctl addif vmbr2 vmbr1.2
brctl addif vmbr3 vmbr1.3
brctl addif vmbr4 vmbr1.4
brctl addif vmbr5 vmbr1.5
brctl addif vmbr6 vmbr1.6
brctl addif vmbr7 vmbr1.7
brctl addif vmbr8 vmbr1.8
brctl addif vmbr9 vmbr1.9
brctl addif vmbr10 vmbr1.10
brctl addif vmbr11 vmbr1.11
brctl addif vmbr12 vmbr1.12
brctl addif vmbr13 vmbr1.13
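
The same wiring can be done with iproute2 instead of the deprecated vconfig; a minimal sketch for a single VLAN (13 here):

# create a VLAN subinterface on top of the trunk bridge and bring it up
ip link add link vmbr1 name vmbr1.13 type vlan id 13
ip link set vmbr1.13 up
# attach it to the per-VLAN bridge that the VMs connect to
brctl addif vmbr13 vmbr1.13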

October 13, 2012
Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
Reanimating #gentoo-commits (October 13, 2012, 14:58 UTC)

Today I got annoyed with the silence in #gentoo-commits and spent a few hours fixing that. We have a bot reporting ... well, I hope all commits, but I haven't tested it enough.

So let me explain how it works so you can be very amused ...

First stage: Get notifications
Difficulty: I can't install postcommit hooks on cvs.gentoo.org
Workaround: gentoo-commits@lists.gentoo.org emails
Code (procmailrc):

:0:
* ^TO_gentoo-commits@lists.gentoo.org
{
  :0 c
  .maildir/.INBOX.gentoo-commits/

  :0
  | bash ~/irker-wrapper.sh
}
So this runs all mails that come from the ML through a script, and puts a copy into a subfolder.

Second stage: Extracting the data
Difficulty: Email is not a structured format
Workaround: bashing things with bash until happy
Code (irker-wrapper.sh):
#!/bin/bash
# irker wrapper helper thingy: read the commit mail on stdin and pick
# out the interesting headers

while read -r line; do
        # echo "$line" # debug
        echo "$line" | grep -q "X-VCS-Repository:" && REPO=${line/X-VCS-Repository: /}
        echo "$line" | grep -q "X-VCS-Committer:" && AUTHOR=${line/X-VCS-Committer: /}
        echo "$line" | grep -q "X-VCS-Directories:" && DIRECTORIES=${line/X-VCS-Directories: /}
        echo "$line" | grep -q "Subject:" && SUBJECT=${line/Subject: /}
        EVERYTHING+=$line
        EVERYTHING+="\n"
done

COMMIT_MSG=$(echo -e "$EVERYTHING" | grep -A1 "Log:" | grep -v "Log:")

ssh commitbot@lolcode.gentooexperimental.org "{\"to\": [\"irc://chat.freenode.net/#gentoo-commits\"], \"privmsg\": \"$REPO: ${AUTHOR} ${DIRECTORIES}: $COMMIT_MSG \"}"
Why the ssh stuff? Well, the server where the mails arrive is a bit restricted, and it's hard to run a daemon there 'n stuff, so let's just pipe it somewhere more liberal.

Third stage: Sending the notifications
Difficulty: How to communicate with irkerd?
Workaround: nc, a hammer, a few thumbs
Code:
#!/bin/bash

echo "$@" | nc --send-only 127.0.0.1 6659
And that's how the magic works.

Bonus trick: using command="" in ~/.ssh/authorized_keys
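
For the curious, a minimal sketch of that trick (key material elided): the forced command runs no matter what the client asked for, and the original command line is available in SSH_ORIGINAL_COMMAND:

# ~/.ssh/authorized_keys on the receiving side, all on one line
command="echo $SSH_ORIGINAL_COMMAND | nc --send-only 127.0.0.1 6659",no-pty,no-port-forwarding ssh-rsa AAAA... commitbot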

... and now I really need a beer :)

October 12, 2012
Raúl Porcel a.k.a. armin76 (homepage, stats, bugs)
Beaglebone documentation updated (October 12, 2012, 17:06 UTC)

Hi all,

I’ve got some reports that my Beaglebone guide is outdated and causing some trouble regarding the bootloader and kernel.

While vanilla kernel 3.6.1 still doesn’t support the Beaglebone, U-Boot 2012.10-rc3 does support it, so I’ve tested all the changes and updated the guide accordingly.

You can find it at http://dev.gentoo.org/~armin76/arm/beaglebone/install.xml
Some changes I’ve noticed in the almost a year since I wrote the documentation:

  • The bug (by design, they said) that made the USB port stop working after unplugging a device (check my post about the Beaglebone) is now fixed
  • CPU scaling is working, although the default governor is ‘userspace’. The default speed with this governor is:

a) 600MHz if powering it using a PSU through the 5V power connector; remember that the maximum speed of the Beaglebone is 720MHz

b) 500MHz if powering it using the mini-USB port
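
If you want to inspect or change the governor yourself, the standard cpufreq sysfs interface applies (as root):

# show the current governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# switch to on-demand scaling instead of the fixed userspace speed
echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor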

Have fun


October 08, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The Keynote speaker (October 08, 2012, 12:22 UTC)

The keynote speaker for the Bootstrapping Awesome co-hosted conferences is going to be Agustin Benito Bethencourt. Agustin is currently working in Nuremberg, Germany as the openSUSE Team Lead at SUSE, and in the Free Software community he's mostly known for his contributions to KDE, and especially to the KDE e.V. He is a very interesting guy with a lot of experience of FOSS, both from the community and the enterprise point of view, which is also the reason I asked him to do the keynote. I enjoy working with him on organizing this conference a lot; his experience is valuable. In this interview he talks a bit about himself, and a lot about the subject of his keynote, the conference, openSUSE and SUSE, and Free Software in general. The interview was done at the SUSE office in Prague, with me being the "journalist" and Michal being the "camera-man". Post-processing was done by Jos. More interviews with other speakers are on the way, so stay tuned! Enjoy!

This post is intended mainly for Italian readers; it was originally written in Italian.

For some time now we have been working on the idea of using git for the translation of the Gentoo documentation from English into Italian.
There are already quite a few of us, but with more translators we could produce much more.
No technical skills are required, only a minimal knowledge of English.

References:
http://dev.gentoo.org/~ago/trads-it.xml
http://dev.gentoo.org/~ago/howtohelp.xml
http://www.gentoo.org/doc/it/xml-guide.xml

If anything in these documents is unclear, don't hesitate to contact me.

Anyone interested in contributing can write to me at ago@gentoo.org, preferably adding the [docs-it] tag at the beginning of the subject, or simply by clicking here.

September 29, 2012
Mike Gilbert a.k.a. floppym (homepage, stats, bugs)
Slot-operator deps for V8 (September 29, 2012, 03:11 UTC)

The recently approved EAPI 5 adds a feature called "slot-operator dependencies" to the package manager specification. Once these dependencies are implemented in the portage tree, the package manager will be able to automatically trigger package rebuilds when library ABI changes occur. Long-term, this will greatly reduce the need for revdep-rebuild.

If you are a Chromium user on Gentoo and you don't use portage-2.2, you have probably noticed that we are using the "preserve_old_lib" kludge so that your web browser doesn't break every time you upgrade the V8 Javascript library. This leaves old versions of V8 installed on your system until you manually clean them up. With slot-operator deps, we can eliminate this kludge since portage will have enough information to know it needs to rebuild chromium automatically. It's pretty neat.
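
To make this concrete, here is a minimal sketch of what the dependency looks like in an EAPI 5 ebuild (heavily simplified; the real chromium ebuild declares much more):

EAPI=5
# the := slot operator records which v8 slot/sub-slot chromium was built
# against, so the package manager can rebuild chromium when v8's ABI changes
RDEPEND="dev-lang/v8:="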

I have forked the dev-lang/v8 and www-client/chromium ebuilds into my overlay to test this new feature; we can't really apply it in the main portage tree until a new enough version of portage has been stabilized. I will be maintaining the latest chromium dev channel release, plus a couple of versions of v8 in my overlay.

If you would like to try it out, you can install my overlay with layman -a floppym. Once you've upgraded to the versions in my overlay, upgrading/downgrading dev-lang/v8 should automatically trigger a chromium rebuild.

If you run into any issues, please file a bug.

September 28, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Debugging SELinux file context mismatches (September 28, 2012, 08:52 UTC)

I originally posted the question on gentoo-hardened ML, but Sven Vermeulen advised me to file a bug, so there it is: bug #436474.

The problem I hit is that my ~/.config/chromium/ directory should have unconfined_u:object_r:chromium_xdg_config_t context, but it has unconfined_u:object_r:xdg_config_home_t instead.

I could manually force the "right" context, but it turned out that even removing the directory in question and letting the browser re-create it still results in the wrong context. It looks like something deeper is broken (maybe just on my system), and fixing the root cause is always better. After all, other people may hit this problem too.
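
Two standard commands help when debugging this kind of mismatch: matchpathcon shows what the loaded policy thinks the context of a path should be, and restorecon relabels it accordingly:

# what does the policy say the context of this path should be?
matchpathcon /home/ph/.config/chromium
# relabel the directory tree to the policy's defaults
restorecon -Rv /home/ph/.config/chromium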

Here are the error messages that appear on chromium launch:

$ chromium
[2557:2557:1727940797:ERROR:process_singleton_linux.cc(263)] Failed to
create /home/ph/.config/chromium/SingletonLock: Permission denied
[2557:2557:1727941544:ERROR:chrome_browser_main.cc(1552)] Failed to
create a ProcessSingleton for your profile directory. This means that
running multiple instances would start multiple browser processes rather
than opening a new window in the existing process. Aborting now to avoid
profile corruption.

And SELinux messages:

# audit2allow -d
#============= chromium_t ==============
allow chromium_t xdg_config_home_t:file create;
allow chromium_t xdg_config_home_t:lnk_file { read create };

[ 107.872466] type=1400 audit(1348505952.982:67): avc: denied { read
} for pid=2166 comm="chrome" name="SingletonLock" dev="sda1" ino=522327
scontext=unconfined_u:unconfined_r:chromium_t
tcontext=unconfined_u:object_r:xdg_config_home_t tclass=lnk_file
[ 107.873916] type=1400 audit(1348505952.983:68): avc: denied {
create } for pid=2178 comm="Chrome_FileThre"
name=".org.chromium.Chromium.ZO3dGF"
scontext=unconfined_u:unconfined_r:chromium_t
tcontext=unconfined_u:object_r:xdg_config_home_t tclass=file

If you have any ideas how to further debug it, or how to solve it, please share (e.g. comment on the bug or send me an e-mail). Thanks!

September 27, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: FAQ (September 27, 2012, 12:04 UTC)

All common questions regarding travelling, transportation, event details, sightseeing and much more are answered in this Frequently Asked Questions page. Feel free to ask more questions, so we can include them in the FAQ and make it more complete.

David Abbott a.k.a. dabbott (homepage, stats, bugs)
epatch_user to the rescue ! (September 27, 2012, 09:38 UTC)

I was updating one of my boxens and ran into Bug 434686. In the bug, Martin describes the simple way we as users can apply patches from bug reports to packages that fail to build. This post is, more than anything, a reminder for me on how to do it. epatch_user has been blogged about before; dilfridge talks about it and says "A neat trick for testing patches in Gentoo (source-based distros are great!)".

As Martin explained in the bug and with the patch supplied by Liongene, here is how it works!

# mkdir -p /etc/portage/patches/net-print/cups-filters-1.0.24
# wget -O /etc/portage/patches/net-print/cups-filters-1.0.24/cups-filters-1.0.24-c++11.patch 'https://434686.bugs.gentoo.org/attachment.cgi?id=323788'
# emerge -1 net-print/cups-filters

Now that is cool :)

September 26, 2012
Hans de Graaff a.k.a. graaff (homepage, stats, bugs)

I've just updated the text on the Gentoo Wiki page on Ruby 1.9 to indicate that we now support eselecting ruby19 as the default ruby interpreter. This has not been tested extensively, so there may still be some problems with it. Please open bugs if you run into problems.
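
Switching is a one-liner with the ruby eselect module:

# list the installed interpreters, then make ruby19 the system default
eselect ruby list
eselect ruby set ruby19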

Most packages are now ready for ruby 1.9. If your favorite packages are not ready yet, please file a bug as well. We expect to make ruby 1.9 the default ruby interpreter in a few months time at the most. Your bug reports can help speed that up.

On a related note, we will be masking Ruby Enterprise Edition (ree18) shortly. With Ruby 1.9 now stable and well-supported, we no longer see the need to also provide Ruby Enterprise Edition. This is also upstream's advice. On top of that, the last few releases of ree18 never worked properly on Gentoo due to threading issues, and those releases are already hard-masked.

Since we realize people may depend on ree18, and migration to ruby19 may not be straightforward, we intend to move slowly here. Expect a package mask within a month or so, and instead of the customary month, we probably won't remove ree18 until after three months or so. That should give everyone plenty of time to migrate.

Bumped Piwik and Plex Media Server (September 26, 2012, 12:51 UTC)

For those interested, I just pushed the latest Piwik 1.8.4 and Plex Media Server 0.9.6.9 (based on the megacoffee.net overlay) ebuilds to my portage overlay on github.

Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5-hdepend (September 26, 2012, 05:04 UTC)

portage-2.1.11.22 and 2.2.0_alpha133 add support for the experimental EAPI 5-hdepend, which introduces the HDEPEND variable for representing build-time host dependencies. For build-time target dependencies, use DEPEND (if the host is the target, then both HDEPEND and DEPEND will be installed on it). There's a special "targetroot" USE flag that is automatically enabled for packages being built for installation into a target ROOT, and automatically disabled otherwise. This flag may be used to control conditional dependencies, and ebuilds that use it need to add it to IUSE unless it happens to be included in the profile's IUSE_IMPLICIT variable.
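
A minimal sketch of how an ebuild might use this (the package atoms are made-up examples): host tools go in HDEPEND, target libraries in DEPEND, and targetroot can gate dependencies that only matter when building into a separate ROOT:

EAPI=5-hdepend
# targetroot is set automatically by the package manager, but must still be
# listed here unless the profile's IUSE_IMPLICIT covers it
IUSE="targetroot"
# tools that must run on the build host
HDEPEND="virtual/pkgconfig"
# libraries that must be present in the target ROOT at build time;
# the second atom is a made-up example of a cross-build-only dependency
DEPEND="sys-libs/zlib
	targetroot? ( dev-libs/example-shim )"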

For those who may not be familiar with the history of HDEPEND, it was originally suggested in bug #317337. That was in 2010, and later that year there was some discussion about it on the chromium-os-dev mailing list. Recently, I suggested on the gentoo-dev mailing list that it be included in EAPI 5, but it didn't make it in. Since then, there's been some renewed effort, and now the patch is included in mainline Portage.

September 24, 2012
Richard Freeman a.k.a. rich0 (homepage, stats, bugs)
Gentoo EC2 Tutorial / Bootstrapping (September 24, 2012, 14:20 UTC)

I want to accomplish a few things with this post.

First, I’d like to give more attention to the work recently done by edowd on Bootstrapping Gentoo in EC2.

Second, I’d like to introduce a few enhancements I’ve made on these (some being merged upstream already).

Third, I’d like to turn this into a bit of a tutorial on getting started with EC2, since these scripts make it brain-dead simple.

I’ve previously written about building a Gentoo EC2 image from scratch, but those instructions do not work on EBS instances without adjustment, and they’re fairly manual. Edowd extended this work by porting it to EBS and writing scripts that build a Gentoo install from a stage3 on EC2. I’ve further extended this by adding a rudimentary plugin framework so that it can be used to bootstrap servers for various purposes. I’ve been inspired by some of the things I’ve seen done with Chef, and while that tool doesn’t fit perfectly with the Gentoo design, this is a step in that direction.

What follows is a step-by-step howto that assumes you’re reading this on Gentoo and little else, and ends up with you at a shell on your own server on EC2. Those familiar with EC2 can safely skim over the early parts until you get to the git clone step.

  1. To get started, go to aws.amazon.com and go through the steps of creating an account if you don’t already have one. You’ll need to specify payment details and so on. If you buy stuff from Amazon you can just use your existing account (if you want); there isn’t much more to it than enabling AWS.
  2. Log into aws.amazon.com, and from the top right corner drop-down under either your name or My Account/Console choose “Security Credentials”.
  3. Browse down to access credentials, click on the X.509 certificate tab, generate a certificate, and then download both the certificate and private key files. The web services require these to do just about anything on AWS.
  4. On your gentoo system run as root emerge ec2-ami-tools ec2-api-tools. This installs the tools needed to script actions on EC2.
  5. Export EC2_CERT and EC2_PRIVATE_KEY into your environment (likely via .bashrc; see the sketch after this list). These should contain the paths to the files you downloaded in step 3. Congratulations: any of the ec2-api-tools should now work.
  6. We’re now going to checkout the scripts to build your server. Go to an empty directory and run git clone git://github.com/rich0/rich0-gentoo-bootstrap.git -b rich0-changes.
  7. chdir to the repository directory if necessary, and within it run ./setup_build_gentoo.sh. This creates security zones and ssh keys automatically for you, and at the end it outputs command lines that will build a 32- or 64-bit server. The default security zone will accept inbound connections from anywhere, but unless you’re worried about an ssh zero-day that really isn’t a big deal.
  8. Run either command line that was generated by the setup script. The parameters tell the script what region to build the server in, what security zone to use, what ssh public key to use, and where to find the private key file for that public key (it created it for you in the current directory).
  9. Go grab a cup of coffee – here is what is happening:
    1. A spot request is created for a half-decent server to be used to build your Gentoo image. This is done to save money: Amazon can kill your bootstrap server if they need the capacity, and you’ll pay the prevailing spot rate. You can tweak the price you’re willing to pay in the script; lower prices mean more waiting. Right now I set it pretty high for testing purposes.
    2. The script waits for an instance to be created and boot. The build server right now uses an amazon image – not Gentoo-based. That could be easily tweaked – you don’t need anything in particular to bootstrap gentoo as long as it can extract a stage3 tarball.
    3. A few build scripts are scp’ed to the server and run. The server formats an EBS partition for gentoo and mounts it.
    4. A stage3 and portage snapshot are downloaded and extracted. Portage config files (world, make.conf, etc) are populated. A script is created inside the EBS volume, and executed via chroot.
    5. That script basically does the typical handbook install: emerge --sync, update world (which has all the essentials in it, like dhcpcd and so on), build a kernel, configure rc files, etc.
    6. The bootstrap server terminates, leaving behind the EBS volume containing the new gentoo image. A snapshot is created of this image and registered as an AMI.
    7. A micro instance of the AMI is launched to test it. After successful testing it is terminated.
  10. After the script is finished check the output to see that the server worked. If you want it outputs a command line to make the server public – otherwise only you can see/run it.
  11. To run your server go to aws.amazon.com, sign in if necessary, browse to the EC2 dashboard. Click on AMIs on the left side, select your new gentoo AMI, and launch it (micro instances are cheap for testing purposes). Go to instances on the left side and hit refresh until your instance is running. Click on it and look down in the details for the public DNS entry.
  12. To connect to your instance run ssh -i <path to pem file in your bootstrap directory> ec2-user@<public DNS name of your server>. You can sudo to root (no password).
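
As promised in step 5, the environment setup is just two exports (the file names below are examples; use the ones you downloaded):

# ~/.bashrc: point the EC2 tools at your credentials
export EC2_CERT="$HOME/.ec2/cert-EXAMPLE.pem"
export EC2_PRIVATE_KEY="$HOME/.ec2/pk-EXAMPLE.pem"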

That’s it – you have a server in the cloud. When you’re done be sure to clean up to avoid excessive charges (a few cents an hour can add up). Check the instances section and TERMINATE (not stop) any instances that are there. You will be billed by the month for storage so de-register AMIs you don’t need and go to the snapshot section and delete their corresponding snapshots.

Now, all that is useful, but you probably want to tailor your instance. You can of course do that interactively, but if you want to script it check out the plugins in the plugin directory. Just add a path to a plugin file at the end of the command line to build the instance and it will tailor your image accordingly. I plan to clean up the scripts a bit more to move anything discretionary into the plugins (you don’t NEED fcron or atop on a server).

The plugins/desktop plugin is a work in progress, but I think it should work now (it takes the better part of a day to build). It only works 32-bit right now due to the profile line. However, if you run it you should be able to connect with x2goclient and get a KDE virtual desktop. A word of warning: a micro instance is a bit underpowered for this.

And on a side note, if somebody could close bugs 427722 and 423855 that would eliminate two hacks in my plugin. The stable NX doesn’t work with x2go (I don’t know if it works for anything else), and the stable gst-plugins-xvideo is missing a dependency. The latter bug will bite anybody who tries to install a clean stage3 and emerge kde-meta.

All of this is very much a work in progress. Patches or pull requests are welcome, and edowd is maintaining a nice set of up-to-date gentoo images for public use based on his scripts.


Filed under: foss, gentoo, linux

September 22, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)
preserve-libs now available in Portage 2.1 branch (September 22, 2012, 05:22 UTC)

EAPI 5 includes support for automatic rebuilds via the slot-operator and sub-slots, which has the potential to make @preserved-rebuild unnecessary (see Diego's blog post regarding symbol collisions and bug #364425 for some examples of @preserved-rebuild's shortcomings). Since this support for automatic rebuilds has the potential to greatly improve the user-friendliness of preserve-libs, I have decided to make preserve-libs available in the 2.1 branch of portage (beginning with portage-2.1.11.20). It's not enabled by default, so you'll have to set FEATURES="preserve-libs" in make.conf if you want to enable it. After EAPI 5 and automatic rebuilds have gained widespread adoption, I might consider enabling preserve-libs by default.
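
Enabling it is a single line in make.conf (FEATURES is incremental, so this adds to the defaults rather than replacing them):

# /etc/portage/make.conf
FEATURES="preserve-libs"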