
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
November 23, 2012, 23:08 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

November 23, 2012
Ian Whyman a.k.a. thev00d00 (homepage, stats, bugs)
Test Post #1 (November 23, 2012, 13:31 UTC)

Hello Guys,

This is just a test post to make sure the new WordPress is working correctly.

November 22, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
I'm happy I didn't replace my phone! (November 22, 2012, 19:44 UTC)

Since I’ve been to the US, I’ve been thinking of replacing my cellphone, which right now still is my HTC Desire HD (which I was supposed not to pay, as I got it with an operator contract, but which I ended up paying dearly to avoid having to pay a contract that wouldn’t do me any good outside of Italy). The reasons were many, including the fact that it doesn’t get to HSDPA speed here in the US, but the most worrisome was definitely the fact that I had to charge it at least twice a day, and it was completely unreasonable for me to expect it to work for a full day out of the office.

After Google’s failure to provide a half-decent experience with the Nexus 4 orders (I did try to get one, the price was just too sweet, but for me it went straight from “Coming Soon” to “Out of stock”), I was considering going for a Sony (Xperia S), or even (if it wasn’t for the pricetag), a Galaxy Note II with a bluetooth headset. Neither option was a favourite of mine, but beggars can’t be choosers, can they?

The other day, as most of my Twitter/Facebook/Google+ followers would have noticed, my phone also decided to give up: it crashed completely while at lunch, and after removing the battery it lost all settings, due to a corruption of the ext4 filesystem on the SD card (the phone’s memory is just too limited for installing a decent amount of apps). After a complete re-set and reinstall, during which I also updated from the latest CyanogenMod version that would work on it to the latest nightly (still CM7, no CM10 for me yet, although the same chipset is present on modern, ICS-era phones from HTC), I had a very nice surprise. The battery has been now running for 29 hours, I spoke for two and something hours on the phone, used it for email, Facebook messages, and Foursquare check-ins, and it’s still running (although it is telling me to connect my charger).

So what could have triggered this wide difference in battery life? Well there are a number of things that changed, and a number that were kept the same:

  • I did reset the battery statistics, but unlike most of the guides I did so when the phone was 100% charged instead of completely discharged — simply because I had it connected to the computer and charged while I was in Clockwork Recovery, so I just took the chance to do it then.
  • I didn’t install many of the apps I had before, including a few that are basically TSRs – and if you’re old enough you know what I mean! – including Advanced Call Manager (no more customers, no more calls to filter!), and, the most likely culprit, an auto-login app for Starbucks wifi.
  • While I kept Volume Ace installed, as it’s extremely handy with its scheduler (think of it like a “quiet hours” on steroids, as it can be programmed with many different profiles, depending on the day of the week as well), I decided to disable the “lock volume” feature (as it says it can be a battery drain) and replaced it with simply disabling the volume buttons when the screen is locked (which is why I enabled the lock volume feature to begin with).
  • I also replaced Zeam Launcher, although I doubt that might be the issue, with the new ADW Launcher (the free version — which unfortunately is not replacing the one in CyanogenMod as far as I can tell) — on the other hand I have to say that the new version is very nice, it has a configurable application drawer which is exactly what I wanted, and it’s quite faster than anything else I tried in a long time.
  • Since I recently ended up replacing my iPod Classic with an iPod Touch (the harddrive in the former clicked and neither Windows nor Linux could access it), I didn’t need to re-install DoggCatcher either, and that one might have been among the power drains, since it also schedules operations in the background and, again as far as I can tell, it does not use the “sync” options that Android provides.

In all of this, I fell pretty much in love again with my phone. Having put in a 16GB microSD card a few months ago means I have quite a bit of space for all kinds of stuff (applications as well as data), and thanks to the new battery life I can’t really complain about it that much. Okay, so the lack of 3G while in the US is a bit of a pain, but I’m moving to London soon anyway so that won’t be a problem (I know it works in HSDPA there just fine). And I certainly can’t complain about the physical strength of the device… the chassis is made of metal – I’d venture to say it’s aluminum, but I wouldn’t be sure – which makes it strong enough to survive falling onto a stone pavement (twice) and onto concrete (only once) — yes, I mistreat my gadgets; either they cope with me or they can get the heck out of my life.

Pavlos Ratis a.k.a. dastergon (homepage, stats, bugs)
Gentoo Miniconf 2012: Review (November 22, 2012, 17:36 UTC)

A month later, I think it is time to write my review of the Gentoo Miniconf. :-)

On 20 and 21 October I attended the Gentoo Miniconf, which was part of the bootstrapping-awesome project: four conferences (openSUSE Conference/Gentoo Miniconf/LinuxDays/SUSE Labs) that took place at the Czech Technical University in Prague.

Photo by Martin Stehno

Day 0: After our flight arrived at Prague’s airport, we went straight to the pre-conference welcome party in a cafe near the university where the conference took place. There we met the other Greeks who had arrived in the previous days, and I also had the chance to meet a lot of Gentoo developers and talk with them.

Day 1: The first day started early in the morning. Dimitris and I went to the venue before the conference started in order to prepare the room for the miniconf. The day started with Theo, as host, welcoming us. There were plenty of interesting presentations that covered a lot of aspects of Gentoo: the Trustees/Council, Public Relations, the Gentoo KDE team, Gentoo Prefix, Security, Catalyst and Benchmarking. The highlight of the day was when Robin Johnson introduced the Infrastructure team and started a very interesting BoF about the state of the Infra team, the currently running web apps and the burning issue of the git migration. The first day ended with lots of beers at the big party of the conference in the center of Prague, next to the famous Charles Bridge.

Gentoo Developers group photo
Photo by Jorge Manuel B. S. Vicetto

 

Day 2: The second day was more relaxed. There were presentations about Gentoo@IsoHunt, 3D and Linux graphics, and Open/GnuPG. After the lunch break an Open/GnuPG key signing party began outside of the miniconf’s room. After the key signing party we continued with a workshop regarding Puppet, a presentation about how to use testing on Gentoo to improve QA, and finally a last presentation with Markos and Tomáš talking about how to get involved in the development of Gentoo. In the end Theo and Michal closed the session of the miniconf.

 

I really liked Prague especially the beers and the Czech cuisine.

Gentoo Miniconf was a great experience for me. I could write lots of pages about the miniconf, because I was in the room both days and saw all the presentations.

I had also the opportunity to get in touch and talk with lots of Gentoo developers and contributors from other FOSS projects. Thanks to Theo and Michal for organizing this awesome event.

More about the presentations and the videos of the miniconf can be found here.
Looking forward to the next Gentoo miniconf (why not a conference?).

November 21, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Monitoring HP servers (November 21, 2012, 23:21 UTC)

Sometimes this blog has something like “columns” for long-term topics that keep re-emerging (no pun intended) from time to time. Since I came back to the US last July you can see that one of the big issues I fight with daily is HP servers.

Why is the company I’m working for using HP servers? Mostly because they didn’t have a resident system administrator before I came on board, and just recently they hired an external consultant to set up new servers … the one who set up my nightmare: Apple OS X Server — so I’m not sure which of the two options I prefer.

Anyway, as you probably know if you follow my blog, I’ve been busy setting up Munin and Icinga to monitor the status of services and servers — and that helped quite a bit over time. Unfortunately, monitoring HP servers is not easy. You probably remember I wrote a plugin so I could monitor them through IPMI — it worked nicely until I actually got Albert to expose the thresholds in the ipmi-sensors output, then it broke because HP’s default thresholds are totally messed up and unusable, and it’s not possible to commit new thresholds.

After spending quite some time playing with this, I ended up with write access to Munin’s repositories (thanks, Steve!) and I can now gloat be worried about having authored quite a few new Munin plugins (the second generation FreeIPMI multigraph plugin is an example, but I also have a sysfs-based hwmon plugin that can get all the sensors in your system in one sweep, a new multigraph-capable Apache plugin, and a couple of SNMP plugins to add to the list). These actually make my work much easier, as they send me warnings when a problem happens without having to worry about it too much, but of course are not enough.

After finally being able to replace the RHEL5 (without a current subscription) with CentOS 5, I’ve started looking into what tools HP makes available to us — and found out that there are mainly two that I care about: one is hpacucli, which is also available in Gentoo’s tree, and the other is called hp-health and is basically a custom interface to the IPMI features of the server. The latter actually has a working, albeit not really polished, plugin in the Munin contrib repository – which I guess I’ll soon look at transforming into a multigraph-capable one; I really like multigraph – and that’s how I ended up finding it.

At any rate, at that point I realized that I did not add one of the most important checks: the SMART status of the harddrives — originally because I couldn’t get smartctl installed. So I went and checked for it — the older servers are almost all running as IDE (because that’s the firmware’s default.. don’t ask), so those are a different story altogether; the newer servers running CentOS are using an HP controller with SAS drives, using the CCISS (block-layer) driver from the kernel, while one is running Gentoo Linux, and uses the newer, SCSI-layer driver. None of them can use smartctl directly; they have to use a special option: smartctl -d cciss,0 — and then point it either at /dev/cciss/c0d0 or /dev/sda depending on which of the two kernel drivers you’re using. They don’t provide all the data that they provide for SATA drives, but they provide enough for Munin’s hddtemp_smartctl and they do provide a health status…
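
To give an idea, querying the first two physical drives behind the controller looks something like this (the device node depends on which of the two drivers you’re running, as said above):

    # first physical drive behind the controller (block-layer cciss driver)
    smartctl -d cciss,0 -a /dev/cciss/c0d0
    # second physical drive: same block device, different index
    smartctl -d cciss,1 -a /dev/cciss/c0d0
    # with the newer SCSI-layer driver the block device is /dev/sda instead
    smartctl -d cciss,0 -a /dev/sda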

For what concerns Munin, your configuration would then be something like this in /etc/munin/plugin-conf.d/hddtemp_smartctl:

[hddtemp_smartctl]
user root
env.drives hd1 hd2
env.type_hd1 cciss,0
env.type_hd2 cciss,1
env.dev_hd1 cciss/c0d0
env.dev_hd2 cciss/c0d0

Depending on how many drives you have and which driver you’re using you will have to edit it of course.

But when I tried to use the default check_smart.pl script from the nagios-plugins package I had two bad surprises: the first is that it tries to validate the device-type parameter passed to smartctl, refusing to work for a cciss type, and the other is that it didn’t consider the status message printed by this particular driver. I was so pissed that, instead of trying to fix that plugin – which still comes with special handling for IDE-based harddrives! – I decided to write my own, using the Nagios::Plugin Perl module, and releasing it under the MIT license.
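
Just to give an idea of what the check boils down to, here’s a rough shell sketch of the logic — my actual plugin is written in Perl with Nagios::Plugin and does more than this, and the device and index below are just the ones from the example earlier:

    #!/bin/sh
    # rough sketch of a SMART health check for a drive behind a cciss controller
    DEVICE=${1:-/dev/cciss/c0d0}   # or /dev/sda with the SCSI-layer driver
    INDEX=${2:-0}                  # physical drive index behind the controller
    OUTPUT=$(smartctl -d cciss,"$INDEX" -H "$DEVICE")
    if echo "$OUTPUT" | grep -qE 'OK|PASSED'; then
        echo "SMART OK - drive $INDEX on $DEVICE reports healthy"
        exit 0    # Nagios/Icinga: OK
    else
        echo "SMART CRITICAL - drive $INDEX on $DEVICE reports a problem"
        exit 2    # Nagios/Icinga: CRITICAL
    fi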

You can find my new plugin in my github repository where I think you’ll soon find more plugins — as I’ve had a few things to keep under control anyway. The next step is probably using the hp-health status to get a good/bad report, hopefully for something that I don’t get already through standard IPMI.

The funny thing about HP’s utilities is that they for the most part just have to present data that is already available from the IPMI interface, but there are a few differences. For instance, the fan speed reported by IPMI is exposed in RPMs — which is the usual way to expose the speed of fans. But on the HP utility, fan speed is actually exposed as a percentage of the maximum fan speed. And that’s how their thresholds are exposed as well (as I said, the thresholds for fan speed are completely messed up on my HP servers).

Oh well, with everything else that can happen lately, this will be enough for now.

November 20, 2012
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, stats, bugs)
Project homepages for slackers (November 20, 2012, 03:50 UTC)

Creating a homepage and documentation for a project is a boring task. I have a few projects that have not been released yet due to lack of time and motivation to create a simple webpage and write down some Sphinx-based documentation.

To fix this issue I did a quick hack based on my favorite pieces of software: Flask, docutils and Mercurial. It is a single file web application that creates homepages automatically for my projects, using data gathered from my Mercurial repositories. It uses the tags, the README file, and a few variables declared on the repository's .hgrc file to build an interesting homepage for each project. I just need to improve my READMEs! :)

It works similarly to the PyPI Package Index, but accepts any project hosted on a Mercurial repository, including my non-Python and Gentoo-only projects.

My instance of the application lives here:

http://projects.rafaelmartins.eng.br/

The application is highly tied to my workflow, e.g. the way I handle tags and the directory structure of my repositories on my server, but the code is available in a Mercurial repository:

http://hg.rafaelmartins.eng.br/projects/

Most of my projects aren't listed yet, and I'll start enabling them as soon as I fix their READMEs.

November 19, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)

This past Saturday (17 November 2012), I participated in the St. Jude Children’s Hospital Give Thanks Walk. This year was a bit different than the previous ones, as it also had a competitive 5k run (which was actually a 6k). I woke up Saturday morning, hoping that the weather report was incorrect, and that it would be much warmer than they had anticipated. However, it was not. When I arrived at the race site (which was the beautiful Creve Coeur Lake Park [one of my absolute favourites in the area]), it was a bit nippy at 6°C. However, the sun came out and warmed up everything a bit. Come race time, it wasn’t actually all that bad, and at least it wasn’t raining or snowing. :)

When I started the race, I was still a bit cold even with my stocking cap. However, by about halfway through the 6k, I had to roll up my sleeves because I was sweating pretty badly. It was an awesome run, and I felt great at the end of it. I think that the best part was being outside with a bunch of people that were also there to support an outstanding cause like Saint Jude Children’s Hospital. There were some heartfelt stories from families of patients, and nice conversations with fellow runners.

I actually finished the race in 24’22″, which wasn’t all that bad of a time:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - runner placement list

In fact, it put me in first place, with 2’33″ between me and the runner-up! Though coming in first place wasn’t a goal of mine, I was in competition with myself. I had set a personal goal of completing the 6k in 26’30″ and actually came in under it! My placement earned me both a medal and a great certificate:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - first-place medal and certificate

After the announcements of the winners and thanks to all of the sponsors, the female first-place runner (Lisa Schmitz) and I had our photo taken together in front of the finish line:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - male and female first-place runners

Thank you to everyone that sponsored and supported me for this run! The children and families of Saint Jude received tens-of-thousands of dollars just from the Saint Louis race alone!

Cheers,
Nathan Zachary (“Zach”)

Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
GPG Key Signing Party (November 19, 2012, 08:19 UTC)

Last Thursday we had a GPG Key & CAcert signing party at the SUSE office, inviting anybody who wanted to get their key signed. I would say that it went quite well: we had about 20 people showing up, we had some fun, and we now trust each other some more!

GPG Key Signing

We started with GPG key signing. You know, the usual stuff: two rows moving against each other, people exchanging paper slips…

Signing keys

For actually signing keys at home, we recommended people use the signing-party package and caff in particular. It’s an easy-to-use tool, as long as you can send mails from the command line (there are some options to set it up against SMTP directly, but I ran into some issues). All you need to do is call

caff HASH

and it will download the key, show you the identities and fingerprint, sign it for you and send each signed identity to the owner by itself via e-mail. And all that with a nice wizard. It can’t be simpler than that.

Importing signatures

When my signed keys started coming back, I was wondering how to process them — it was simply too many emails. I searched a little bit, but I got lazy quite soon, so as I have all my mails stored locally in a Maildir by offlineimap, I just wrote the following one-liner to import them all.

   grep -Rl 'Your signed' INBOX | while read i; do 
        gpg -d "$i" | gpg --import -a;
   done

Maybe somebody will find it useful as well, maybe somebody more experienced will tell me in comments how to do it correctly ;-)

CAcert

One friend of mine – Theo – really wanted to be able to issue CAcert certificates, so we added a CAcert assurance to the program. For those who don’t know, CAcert is a nonprofit certification authority based on a web of trust. You get verified by volunteers, and when enough of them trust you enough, you are trusted by the authority itself. When people verify you, they give you some points based on how much they are trusted and how much they trust you. Once you get 50 points, you are trusted enough to get your certificate signed, and once you have 100, you are trusted enough to start verifying other people (after a little quiz to make sure you know what you are doing).

I knew that my colleague Michal Čihař was able and willing to issue some points, but as he was starting with issuing 10 and I with 15, I also asked a few nearby assurers from the CAcert website. Unfortunately I got no reply, but we were organizing everything quite quickly. We did, however, have another colleague – Martin Vidner – showing up and being able to issue some points. I assured another 11 people at the party and can now give out 25 points, as can Michal, and I guess Martin is now somewhere around 20 as well. So it means that if you need to be able to issue CAcert certificates, visiting just the SUSE office in Prague is enough! But still, contact us beforehand; sometimes we do have a vacation ;-)

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Ah, LXC! Isn't that good now? (November 19, 2012, 02:55 UTC)

If you’re using LXC, you might have noticed that there was a 0.8.0 release lately, finally, after two release candidates, one of which was never really released. Do you expect everything would go well with it? Hah!

Well, you might remember that over time I found that the way you’re supposed to mount directories in the configuration files changed, from using the path that is to be used as root, to the default root path used by LXC, and every time that happened, no error message was issued even though you were trying to mount directories outside of the tree the containers are running in.

Last time the problem I hit was that if you try to mount a volume instead of a path, LXC expected you to use as a base path the full realpath, which in the case of LVM volumes is quite hard to know by heart. Yes you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts it to (/usr/lib/lxc/rootfs). With the new release, this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs — again, it’s a change in a micro bump (rc2 to final) which is not documented…. sigh. This would be enough to irk me, but there is more.
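
To make the change concrete, a volume mount line under the new convention would look more or less like this (the container name, volume and mount point are made up for the example):

    # what my patch allowed: target under LXC's own mount point
    lxc.mount.entry = /dev/vg0/data /usr/lib/lxc/rootfs/srv/data ext4 defaults 0 0
    # what 0.8.0 final expects: full path under the container's rootfs
    lxc.mount.entry = /dev/vg0/data /var/lib/lxc/mycontainer/rootfs/srv/data ext4 defaults 0 0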

The new version also seems to have a bad interaction with the kernel on stopping a container — the virtual ethernet device (veth pair) is not cleaned up properly, and it causes the process to stall, with something insisting on calling the kernel and failing. The result is a not happy Diego.

Without even having to add the fact that the interactions between LXC and SystemD are not clear yet – with the maintainers of the two projects trying to sort out the differences between them, at least I don’t have to care about it anytime soon – this should be enough to make it explicit that LXC is not ready for prime time so please don’t ask.

On a different, interesting note: the vulnerability publicized today that can bypass KERNEXEC? Well, unless you disable the net_admin capability in your containers (which also means you can’t set the network parameters, or use iptables), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to have untrusted users having root on your containers.
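
For reference, dropping that capability is a one-line change in the container’s configuration — at the cost, as said, of not being able to set network parameters or use iptables from inside:

    # drop CAP_NET_ADMIN for the container; root inside can no longer touch
    # network configuration or iptables, and can't leverage that hole either
    lxc.cap.drop = net_admin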

Oh well, time to wait for the next release and see if they can fix a few more issues.

November 18, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
A matter of copyrights (November 18, 2012, 16:55 UTC)

One of the issues that came through with the recent drama about the n-th udev fork is the matter of assigning copyright to the Gentoo Foundation. This topic is not often explored, mostly because it really is a minefield, and – be ready to be surprised – I think the last person who actually said something sane on the topic has been Ciaran.

Let’s see a moment what’s going on: all ebuilds and eclasses in the main tree, and in most of the overlays, report “Gentoo Foundation” as the holder of copyright. This is so much a requirement that we’re not committing to the tree anything that reports anyone else’s copyright, and we refuse the contribution in that case for the most part. While it’s cargo-culted at this point, it is also an extremely irresponsible thing to do.

First of all, nobody ever signed a copyright assignment form to the Gentoo Foundation, as far as I can tell. I certainly didn’t do it. And that’s especially true as we go along with getting more and more proxied maintainers, as they almost always are not Gentoo Foundation members (Foundation membership comes after a year as a developer, if I’m not mistaken — or something along those lines; I honestly forgot because, honestly, I’m not following the Foundation’s doings at all).

Edit: Robin made me notice that a number of people did sign copyright assignments, first to Gentoo Technologies, which were then re-assigned to the Foundation. I didn’t know that — I would be surprised if a majority of the currently active developers knew about that either. As far as I can tell, copyright assignment was no longer part of the standard recruitment procedure when I joined, as, as I said, I didn’t sign one. Even assuming I was the first guy who didn’t sign it, 44% of the total active developers wouldn’t have signed it, and that’s 78% of the currently active developers (give or take). Make up your mind on these numbers.

But even if we all signed said copyright assignment, it’s for a vast part invalid. The problem with copyright assignments is that they are just that, copyright assignments… which means they only work where the law regime concerning authors’ work is that of copyright. For most (all?) of Europe, the regime is actually that of author’s rights and, as VideoLAN shows, it’s a bit more complex, as the authors have no real way to “assign” those rights.

Edit²: Robin also pointed out the fact that the FSFe, Google (and I’d add Sun, at the very least) have a legal document, usually called a Contributor License Agreement (when it’s basically replacing a full-blown assignment) or Fiduciary Licence Agreement (the more “free software friendly” version). This solves only half the problem, as the Foundation would still not own the copyright, which means that you still have to come up with a different way to identify the contributors, as they still have their rights even though they leave any decision regarding their contributions to the entity they sign the CLA/FLA with.

So the whole thing stinks of half-understood problem.

This has actually gotten more complex recently, because the sci team borrowed an eclass (or the logic for an eclass) from Exherbo — which actually handles the individual contributors’ copyright. This is a much more sensible approach on the legal side, although I find the idea of having to list, let’s say, 20 contributors at the top of every 15-line ebuild a bit of an overkill.

My proposal would then be to have a COPYRIGHTS.gentoo file in every package directory, where we list the contributors to the ebuild. This way even proxied maintainers, and one-time contributors, get their credit. The ebuild can then refer to “see the file” for the actual authors. A similar problem also applies to files that are added to the package, including, but not limited to, the init scripts, and making the file formatted, instead of freeform, would probably allow crediting those as well.
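
Just to make it a bit more concrete, I’m thinking of something along these lines — the format and the names are purely illustrative, nothing is set in stone:

    # COPYRIGHTS.gentoo — contributors to the ebuilds and support files in this directory
    ebuild foo-1.2.3.ebuild
      2011-2012 John Doe <jdoe@example.org>
      2012 Jane Proxied <jane@example.org> (proxied maintainer)
    file files/foo.initd
      2012 Jane Proxied <jane@example.org>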

Now, this is just a sketch of an idea — unlike Fabio, whose design methodology I do understand and respect, I prefer posting as soon as I have something in mind, to see if somebody can easily shoot it down or if it has wings to fly, and also in the vain hope that if I don’t have the time, somebody else would pick up my plan — but if you have comments on it, I’d be happy to hear them. Maybe after a round of comments, and another round of thinking about it, I’ll propose it as a real GLEP.

Secretly({Plan, Code, Think}) && PublishLater() (November 18, 2012, 12:19 UTC)

Over the last few years I have started several open source projects. Some turned out to be useful, maybe successful; many were just rubbish. Nothing new so far.

Every time I start a new project, I usually don’t really know where I am headed and what my long-term goals are. My excitement and motivation typically come from solving simple everyday and personal problems or just addressing {short,mid}-term goals. This is actually enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It’s just me and my compiler/interpreter having fun together. I call this the “initial grace period”.

During this period, I usually never share my idea with other people, ever. I kind of keep my project in a locked pod, away from hostile eyes. Should I share my idea at this time, the project might get seriously injured and my excitement severely affected. People would only see the outcome of my thought, but not the thought process itself nor the detailed plans behind it, because I just don’t have them! Even though this might be considered against basic Software Engineering rules or against some exotic “free software” principles, it works for me.

I don’t want my idea to be polluted as long as I don’t have something that resembles it in the form of a consistent codebase. And until that time, I don’t want others to see my work and judge its usefulness based on incomplete or just inconsistent pieces of information.

At the very same time, writing documents about my idea and its goals beforehand is also a no-go, because I have “no clue” myself as mentioned earlier.

This is why revision control systems and the implicit development model they force on individuals are so important, especially for me.
Giving you the ability to code on your stuff, changes, improvements, without caring about the external world until you are really really done with it, is what I ended up needing so so much.
Every time I forgot to follow this “secrecy” strategy, I had to spend more time discussing my (still confused?) idea — the {why,what,how} of what I am doing — than coding itself. Round trips are always expensive, no matter what you’re talking about!

Many internal tools we at Sabayon successfully use have gone through this development process. Other staffers sometimes say things like “he’s been quiet in the last few days, he must be working on some new features”, and it turns out that most of the time this is true.

This is what I wanted to share with you today, though. Don’t wait for your idea to become clearer in your mind; it won’t happen by itself. Just take a piece of paper (or your text editor), start writing down your own secret goals (don’t make the mistake of calling them “functional requirements” like I did sometimes), divide them into modest/expected and optimistic/crazy, and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked, and go back to coding again. Iterate until you’re satisfied with the result, and then, eventually, let your code fly away to some public site.

But, until then, don’t tell anybody what you’re doing! Don’t expect any constructive feedback during the “initial grace period”; it is very likely that it will just be destructive.

Git, I love ya!


Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Multi-level bundling, with a twist (November 18, 2012, 05:51 UTC)

I spent half my Saturday afternoon working on Blender, to get the new version (2.64a) in Portage. This is never an easy task but in this case it was a bit more tedious because thanks to the new release of libav (version 9) I had to make a few more modifications … making sure it would still work with the old libav (0.8).

FFmpeg support is not guaranteed — If you care you can submit a patch. I’m one of the libav developers thus that’s what I work, and test, with.

Funnily enough, while I was doing that work, a new bug for blender was reported in Gentoo, so I looked into it and found out that it was actually caused by one of the bundled dependencies — luckily, one that was already available as its own ebuild, so I just decided to get rid of it. The interesting part was that it wasn’t listed in the “still bundled libraries” list that the ebuild’s own diagnostic prints… since it was actually a bundled library of the bundled libmv!

So you reach the point where you get one package (Blender) bundling a library (libmv) bundling a bunch of libraries, multi-level.

Looking into it I found out that not only the dependency that was causing the bug was bundled (ldl) but there were at least two more that, I knew for sure, were available in Gentoo (glog and gflags). Which meant I could shave some more code out of the package, by adding a few more dependencies… which is always a good thing in my book (and I know that my book is not the same as many others’).

While looking for other libraries to unbundle, I found another one, mostly because its name (eltopo) was funny — it has a website and from there you can find the sources — neither is linked in the Blender package. When I looked at the sources, I was dismayed to see that there was no real build system but just a half-broken Makefile building two completely different PIC-enabled static archives, for debug and release. Not really something that distributions could get much interest in packaging.

So I set about building my usual autotools-based build system (which, no matter what people say, is extremely fast, if you know how to do it), fixing the package to build with gcc 4.7 correctly (how did it work for Blender? I assume they patched it somehow but they don’t write down what they do!), and .. uh, where’s the license file?

Turns out that while the homepage says that the library is “public domain”, there is no license statement anywhere in the source code, making it, in effect, the exact opposite: proprietary software. I’ve opened an issue for it and hopefully upstream will fix that one up so I can send him my fixes and package it in Gentoo.

Interestingly enough, the libmv software that Blender packages is much better in its way of bundling libraries. While they don’t seem to give you an easy way to disable the bundled copies (which might or might not be Blender’s build system’s fault), they make it clear where each library comes from, and they have scripts to “re-bundle” said libraries. When they make changes, they also keep a log of them so that you can identify what changed and either ignore it, patch it or send it upstream. If all projects bundling stuff did it that way, it would be a much easier job to unbundle…

In the meantime, if you have some free time and feel like doing something to improve the bundled libraries situation in Gentoo Linux, or you care about Blender and you’d like to have a better Gentoo experience with it, we could use some ebuilds for ceres-solver and SSBA as well as fast-C (this last one has no buildsystem at all, unfortunately), all used by libmv, or maybe carve, libredcode (for which I don’t even have a URL at hand) and recastnavigation (which has no releases), which are instead used directly by Blender.

P.S.: don’t expect to see me around this Sunday, I’m actually going to see the Shuttle, and so I won’t be back till late, most likely, or at least I hope so. You’ll probably see a photo set on Monday on my Flickr page if you want to have a treat.

November 17, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
The hardened project continues going forward… (November 17, 2012, 19:34 UTC)

This Wednesday, the Gentoo Hardened team held its monthly online meeting, discussing the things that have been done in the last few weeks and the ideas that are being worked out for the next. As I did with the last few meetings, allow me to summarize it for all interested parties…

Toolchain

The upstream GCC development on the 4.8 version progressed into the third stage of its development cycle. Sadly, many of our hardened patches didn’t make the release. Zorry will continue working on these things, hopefully still being able to merge a few – and otherwise it’ll be for the next release.

For the MIPS platform, we might not be able to support the hardenedno* GCC profiles [1] in time. However, this is not seen as a blocker (we’re mostly interested in the hardened ones, not the ones without hardening ;-) so this could be done later on.

Blueness is migrating the stage building for the uclibc stages towards catalyst, providing more clean stages. For the amd64 and i686 platforms, the uclibc-hardened and uclibc-vanilla stages are already done, and mips32r2/uclibc is on the way. Later, ARM stages will be looked at. Other platforms, like little endian MIPS, are also on the roadmap.

Kernel

The latest hardened-sources (~arch) package contains a patch supporting the user.* namespace for extended attributes in tmpfs, as needed for the XATTR_PAX support [2]. However, this patch has not been properly investigated nor tested, so input is definitely welcome. During the meeting, it was suggested to cap the length of the attribute value and only allow the user.pax attribute, as we are otherwise allowing unprivileged applications to “grow data” in the kernel memory space (the tmpfs).

Prometheanfire confirmed that recent-enough kernels (3.5.4-r1 and later) with nested paging do not exhibit the performance issues reported earlier.

SELinux

The 20120725 upstream policies are stabilized on revision 5. Although a next revision is already available in the hardened-dev overlay, it will not be pushed to the main tree due to a broken admin interface. Revision 7 is slated to be made available later the same day to fix this, and is the next candidate for being pushed to the main tree.

The september-released newer userspace utilities for SELinux are also going to be stabilized in the next few days (at the time of writing this post, they are ;-). These also support epatch_user so that users and developers can easily add in patches to try out stuff without having to repackage the application themselves.

grSecurity and PaX

The toolchain support for PT_PAX (the ELF-header based PaX markings) is due to be removed soon, meaning that the XATTR_PAX support will need to be matured by then. This has a few consequences on available packages (which will need a bump and fix) such as elfix, but also on the pax-utils.eclass file (interested parties are kindly requested to test out the new eclass before it reaches “production”). Of course, it will also mean that the new PaX approach needs to be properly documented for end users and developers.

pipacs also mentioned that he is working on a paxctld daemon. Just like SELinux’ restorecond daemon, this daemon will look for files and check them against a known database of binaries with their appropriate PaX markings. If the markings are set differently (or not set), the paxctld daemon will rectify the situation. For Gentoo, this is less of a concern as we already set the proper information through the ebuilds.

Profiles

The old SELinux profiles, which had already been deprecated for a while, have been removed from the portage tree. That means that all SELinux-using profiles use the features/selinux inclusion rather than a fully built (yet difficult to maintain) profile definition.

System Integrity

A few packages, needed to support or work with ima/evm, have been pushed to the hardened-dev overlay.

Documentation

The SELinux handbook has been updated with the latest policy changes (such as supporting the named init scripts). We also documented SELinux policy constraints which was long overdue.

So again a nice month of (volunteer) work on the security state of Gentoo Hardened. Thanks again to all (developers, contributors and users) for making Gentoo Hardened what it is today. Zorry will send out the meeting log to the mailing list later, so you can look at the gorier details of the meeting if you want.

  • [1] GCC profiles are a set of parameters passed on to GCC as a “default” setting. Gentoo hardened uses GCC profiles to support using non-hardening features if the users wants to (through the gcc-config application).
  • [2] XATTR_PAX is a new way of handling PaX markings on binaries. Previously, we kept the PaX markings (i.e. flags telling the kernel PaX code to allow or deny specific behavior or enable certain memory-related hardening features for a specific application) as flags in the binary itself (inside the ELF header). With XATTR_PAX, this is moved to an extended attribute called “user.pax”.

Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)

A few days ago I finished fiddling with the Open Build Service (OBS) packages in our main tree. Now, when anyone wants to mess with OBS, they just have to emerge dev-util/osc and have fun with it.

What the hell is obs?

OBS is a pretty cool service that allows you to specify how to build your package and its dependencies in one .spec file, and to deliver the results to multiple archs/distros without caring about how it happens (Debian, SUSE, Fedora, CentOS, Arch Linux).

The primary implementation is running for SUSE and it is free for anyone to use (e.g. you don’t have to build SUSE packages there if you don’t want to :P). There are two ways to interact with the whole tool: one is the web application, which is a real PITA, and the other is the osc command line tool I finished fiddling with.

Okay so why did you do it?

Well, I work at SUSE and we are free to use whatever distro we want while still being able to complete our tasks. I like to improve stuff: I want to be able to fix bugs in SLE/openSUSE without having a chroot/virtual machine with the named system installed, and for such tasks this works pretty well :-)
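
For the curious, a typical round trip with osc looks roughly like this (the project, package and repository names are just examples):

    # check out a package from the build service
    osc checkout openSUSE:Factory/some-package
    cd openSUSE:Factory/some-package
    # hack on the .spec file and sources, then do a local test build
    osc build openSUSE_12.2 x86_64
    # review and submit the changes back
    osc diff
    osc commit -m "fix build with gcc 4.7"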

How -g0 may be useful (November 17, 2012, 13:35 UTC)

Usually I use -g0 in my CFLAGS/CXXFLAGS; it is useful for finding wrong buildsystem behaviour.

Here is an example where the buildsystem seds out only the -g and leaves the 0, causing a compile failure:

x86_64-pc-linux-gnu-gcc -DNDEBUG -march=native -O2 0 -m64 -O3 -Wall -DREGINA_SHARE_DIRECTORY=\"/usr/share/regina\" -DREGINA_VERSION_DATE=\""31 Dec 2011"\" -DREGINA_VERSION_MAJOR=\"3\" -DREGINA_VERSION_MINOR=\"6\" -DREGINA_VERSION_SUPP=\"\" -DHAVE_CONFIG_H -DHAVE_GCI -I./gci -I. -I. -I./contrib -o funcs.o -c ./funcs.c
x86_64-pc-linux-gnu-gcc: 0: No such file or directory
./funcs.c: In function '__regina_convert_date':
./funcs.c:772:14: warning: array subscript is above array bounds
make: *** [funcs.o] Error 1
emake failed

So adding it to your CFLAGS/CXXFLAGS may be a good idea.
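
For example, in your make.conf it would be something like this (the other flags are just an example):

    CFLAGS="-O2 -pipe -g0"
    CXXFLAGS="${CFLAGS}"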

November 16, 2012
Fwd: “Apple Now Owns the Page Turn” (November 16, 2012, 22:58 UTC)

Article: http://bits.blogs.nytimes.com/2012/11/16/apple-now-owns-the-page-turn/

(Heard about it from LWN https://lwn.net/Articles/525493/rss)

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
One month “in” – some sort of status report (November 16, 2012, 12:27 UTC)

I’d like to write some sort of public status report or brain dump of what’s going on. I’ve been on-the-road for one month of the planned 12 months and just “Living the Dream” as many of the fellow travelers would say. I’ve met so many people so far, some have been really inspiring, some are not. I’m embracing the idea of slow travel and/or home base travel. I really don’t care how you travel, but the Eurorail, every capital city for two days is not what I want to do. I’ve learned that already from talking to people and my preconceived values. So far, I’m on track by only visiting two countries so far, Netherlands and Czech. I’m really diving into Czech Republic – mind you, I didn’t really plan on that but it somehow happened and I’m very ok with that. However, the bad side of that is that I’m staying still while people are moving by every 2-5 days. Since the hostel gives a free beer token to every guest, I see new people everyday for just long enough to say the smalltalk – I haven’t been in that position before so it’s new for this computer guy from Minnesota his whole life… (Self-reflection, yay) Annnyway, I’m having fun, I’m enjoying myself, I don’t like to “not-work”, I am forcing myself to take the unbeaten path, I’m getting more comfortable with myself and my environment, I’m relaxed, I can go with the flow, I know “it” will work out, I drink tea daily, I started to enjoy coffee, I’m living life, I am balanced. Go me.

As of this writing, I was in Netherlands for 7 days and spent $55usd per day and Czech for 28 days and spent $28usd per day. With my pre-trip expenses, etc, I’ve spent $65usd per day.

I’m doing fine, read my posts about where I’ve been, look at my pictures on Flickr, interact with me on Twitter for what I am doing, and check back often for what I’ve been doing. Ciao.

(After thought: considering that I’ve been at (or lived at) a dropzone for nearly every weekend this past summer (and the past 6 years), I’m really missing skydiving. Not going to lie, I can’t wait to jump out of a plane, most places around me are closing for the winter and I’m not properly prepared to jump in the cold even if they were open :( poor planning on my part. I didn’t think it would be so bad, taking a hiatus, but that sport is such a part of my life. I miss it.)

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)

Many people have written about Apple’s screwups in the past — recently, the iPad Mini seems to be the major focus for about anybody, and I can see why. While I don’t want to argue I know their worst secret, I’m definitely going to show you one contender that could actually be fit to be called Apple’s Biggest Screwup: OS X Server.

Okay, those who read my blog daily already know that because I blogged this last night:

OS X Server combines UNIX’s friendliness, with Windows’s remote management capabilities, Solaris’s hardware support, and AIX software availability.

So let’s try to dissect this.

UNIX friendliness. This can be argued both positively and negatively — we all know that UNIX in general is not very friendly (I’m using the trademarked name because OS X is actually using FreeBSD, which is UNIX), but it’s friendlier to have a UNIX server than a Windows one. So if you want to argue it negatively, you don’t have all the Windows-style point-and-click tools for every possible service. If you want to argue it positively, you’re still running solid (to a point) software such as Apache, BIND, and so on.

Windows’s remote management capabilities. This is an extremely interesting point. While, as I just said, OS X Server provides you with BIND as DNS server, you’re supposed not to edit the files by hand but leave it to Apple’s answer to the Microsoft Management Console — ServerAdmin. Unfortunately, doing so remotely is hard.

Yes, because even though it’s supposed to be usable from a remote host, it requires that you’re using the same version on both sides, and that is impractical if your server is running 10.6 and your only client at hand is updated to 10.8. So this option has to be dropped entirely in most cases — you don’t want to keep updating your server to the latest OS, but you do so for your client, especially if you’re doing development on said client. Whoops.

So can you launch it through an SSH session? Of course not. Despite all the people complaining about X11, the X protocol and SSH X11 forwarding are a godsend for remote management: if you have things like very old versions of libvirt and friends, or some other tool that can only be executed in a graphical environment, you only need another X server with an SSH client and you’re done.

Okay so what can you do? Well, the first option would be to do it locally on the box, but that’s not possible, so the second best would be to use one of the many remote desktop techniques — OS X Server comes with Apple’s Remote Desktop server by default. While this is using the VNC standard 5900 port… it seems like it does not work with a standard VNC client such as KRDC. You really need Apple’s Remote Desktop Client, which is a paid-for proprietary app. Of course you can set up one of many third party apps to connect to it, but if you didn’t think about that when installing the server, you’re basically stuck.

And I’m pretty sure that this does not limit itself to the DNS server, but Apache, and other servers, will probably have the same issues.

Solaris’s hardware support. This should be easy, if you ever tried to run Solaris on real hardware, rather than just virtualized – and even then … – you know that it’s extremely picky. Last time I tried it, it wouldn’t run on a system with SATA drives, to give you an idea.

What hardware can OS X Server run on? Obviously, only Apple hardware. If you’re talking about a server, you have to remove from the equation all their laptops, obviously. If it’s a local server you could use an iMac, but the problem I’ve got is that it’s used not locally but at a co-location. The XServe, which was the original host for OS X Server, is now gone forever, and that leaves us with only two choices: Mac Pro and Mac Mini. Which are the only ones that are sold with that version of OS X anyway.

The former hasn’t been updated in quite a long time. It’s quite bulky to put at a co-location, even though I’ve seen enough messy racks to know that somebody could actually think about bringing it there. The latter actually just recently got an update that makes it sort of interesting, by giving you a two-HDDs option…

But you still get a system that has 2.5", 5400 RPM disks at most, with no RAID, and that’s telling you to use external storage if you need anything different. And since this is a server edition, it comes with no mouse or keyboard; just adding those means adding another $120. Tell me again why anybody in their sane mind would use one of those for a server? And no, don’t remind me that I could have an answer on the tip of my tongue.

For those who might object that you can fit two Mac Minis on 1U – you really can’t, you need a tray and you end up using 2U most of the time anyway – you can easily use something like SuperMicro’s Twins that fits two completely independent nodes on a single 1U chassis. And the price is not really different.

The model I linked is quoted, from a quick googling, at around eighteen hundred dollars ($1800); add $400 for four 1TB hard disks (WD Caviar Black — that’s their going price, as I’ve ordered eight of them since last April: four for Excelsior, four for work), and you get to $2200 — two Apple Mac Minis? $2234, with the mouse and keyboard that you need (the Twin system has IPMI support and remote KVM, so you don’t need them).

AIX’s software availability. So yes, you can have MacPorts, or Gentoo Prefix, or Fink, or probably a number of other similar projects. The same is probably true for AIX. How much software is actually tested on OS X Server? Probably not much. While Gentoo Prefix and MacPorts cover most of the basic utilities you’d use on your UNIX workstation, I doubt that you’ll find the complete software coverage that you currently find for Linux, and that’s often enough a dealbreaker.

For example, I happen to have these two Apple servers (don’t ask!). How do I monitor them? Neither Munin nor NRPE are easy to set up on OS X so they are yet unmonitored, and I’m not sure if I’ll ever actually monitor them. I’d honestly replace them just for the sake of not having to deal with OS X Server anymore, but it’s not my call.

I think Apple did quite a feat, to make me think that our crappy HP servers are not the worst out there…

November 15, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Revenge of the HP Updates (November 15, 2012, 03:58 UTC)

Just shy of three months ago I was fighting with updating the iLO firmware (IPMI and extras) and as I recounted, even when you select downloads for RHEL (which is a supported operating system on those boxes), you’re given a Windows executable file, which you have to extract. But at least, you can use the file you extract, to update the IPMI firmware remotely.

Well, if it wasn’t for a small little issue: the fans are going to get stuck at 14 KRPM until you also update the BIOS. It wasn’t obvious how much of a problem that is until we got to the co-location last week and… “What on Earth is this noise?” “I think it’s our servers!” screamed from the backside of the cabinet.

Since one of the servers also had some other hardware issues (one of the loops that keep the chipset’s heatsink in place gave way — I glued it back and applied a new layer of thermal paste after scraping off the old one), we ended up bringing it back to the office, where today, after repairing it and booting, it became obvious that we couldn’t leave it running at any time with that kind of noise. So it was time to update the BIOS. Which is easier said than done.

Step one is finding the correct download — the first one I found turned out to be wrong, but it took me some time to understand that, because the BIOS update has to be done with DOS. And that brought me back to a very old post of mine (well, not that old; it’s just a year and a half ago, now that I see), and its follow-up, which came with a downloadable 383KB compressed, 2GB uncompressed bootable FreeDOS image — since getting sysrescuecd’s FreeDOS to do anything other than booting and playing its own demos was impossible.

So when I actually get to run the executable in the FreeDOS image … what I come to is an extremely stupid tool (that, I remember you, will not work on Windows XP, Vista or 7) to create an USB drive to update the BIOS… lol, whut?

The correct download is, once again, for Windows even when you select RHEL4, and it auto-extracts in a multitude of files that include the BIOS itself some four different times, and would provide some sort of network update, as well as “flat files” (which you can use with FreeDOS), a Windows updater, an ISO file, and an utility to build an USB stick to update the BIOS itself.

If you count the fact that this is for a server running Linux, you just involved two more operating systems. And for the next trip to the co-lo we’ve got our work cut out for us, updating the BIOS and the IPMI firmware server by server (hoping that the new firmware actually has a reliable SOL connection, among other things).

But to avoid being all too negative with HP, it’s still better than trying to do standard sysadmin work on an Apple OS X Server install on a Mac Mini. OS X Server combines UNIX’s friendliness, with Windows’s remote management capabilities, Solaris’s hardware support, and AIX software availability. But that’s a topic for another post.

November 14, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Kutná Hora / Olomouc weekend trip (November 14, 2012, 20:24 UTC)

I took a weekend trip to Kutná Hora and Olomouc. Kutná Hora was on the way via train so I got off there (with a small connection train) and visited the Bone Church, a common gravesite of over 40,000 people. I feel like it is one of those things that will just disappear someday – bones won’t last forever in the open air like that.

Prague - Oct 2012-121

Otherwise, Kutná Hora was just a small town and I didn’t do much else there besides get on the train again for the city of Olomouc (a-la-moats). I probably missed something in Kutná Hora, but it wasn’t obvious to me and I had only heard about the church. Olomouc is the 6th largest city in the Czech Republic, and largely a university town. I stayed in a lovely small hostel, the Poet’s Corner (highly recommended), for a few nights. Most students go home on the weekends, which I think is odd, but I did get to talk to some students (from a different city, home for the weekend) and went out to enjoy the student bars. Good times. I recommend seeing Olomouc if you have a few days open in your itinerary and are not doing the crazy whirlwind capital-city Europe tour. There are some nice things to see; I just had to watch the country’s ‘other’ astronomical clock. Also, a few microbreweries, which were delicious, and I even did a beer spa for fun (why not?).

Prague - Oct 2012-136

Kutná Hora Pics
Olomouc Pics

Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
openSUSE Connect Survey results (November 14, 2012, 15:46 UTC)

Last week I posted a survey about openSUSE Connect. Although some answers are still coming in and you are still welcome to provide more feedback, let’s take a look at some results. Some numbers first: openSUSE Connect is not a really busy website — it gets about 80 distinct visitors per day. Not much, but not a total wasteland either. Related to this number is another one: more than half of the people responding to the survey had never heard of openSUSE Connect. So it sounds like we should talk about it more…

Now for the feedback itself. Most people think that it is a good idea and that it either is already useful or can become quite useful. But even though the feedback was positive, a lot of people made suggestions on how to improve it. So what can be done to make it better? Most of the feedback centred around the following two topics.

Social aspects

One frequently mentioned topic was the social aspect of Connect. It is a social network where you can’t post status messages and where it is not easy to follow what people are up to — so it’s kind of an antisocial social network. There were people asking for the ability to share what they are doing – status messages, chat and the other things they know from Facebook or Google+. On the other hand, there were people who complained that they don’t want yet another social network to maintain. And the third opinion, which I think sits somewhere in between, was to provide easier integration with already existing social networks like Facebook, Twitter or Google+. That, I would say, sounds like the most reasonable solution.

More polishing

This came up for most aspects of the site. openSUSE Connect is a good thing and it contains many great ideas, but somehow they are not polished enough — nor is Connect itself. People complained that the UI could be nicer and more user-friendly, and that widgets miss some finishing touches. So what is needed here? Probably some designers to step in and fix the UI ;-) But apart from that, some widgets could use some coding touches as well. So if you don’t like how something is done, feel free to submit a patch ;-)

Conclusion?

People didn’t know about openSUSE Connect and there are things to be polished. We had some good ideas and we implemented them when we started with Connect. But there is still quite some work left before Connect will be perfect. Work that can be picked up by anybody as openSUSE Connect is open source, written in PHP and we even have a documentation mentioning among other things how to work on it. We can off course just let it live as it is and use it for membership and elections for which it works well. But looks like my survey got people at least a little bit interested and for example victorhck submitted logo proposal for openSUSE Connect! So maybe we will get some other contributors as well ;-) And let’s see how will I spend my next Hackweek :-D

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
RIP recruiting.gentoo.org (November 14, 2012, 13:28 UTC)

The recruiters team announced a few months ago that they decided not to use the recruiting webapp any more, and to move back to the txt quizzes instead. Additionally, the webapp started throwing random Ruby exceptions, and since nobody is willing to fix them, we found it a good opportunity to shut down the service completely. There were people still working on it, though (including me), so if you are a mentor, mentee or someone who had answers in there, please let me know so I can extract your data and send it to you.
And now I’d like to state my personal thoughts regarding the webapp and the recruiter’s decision to move back to the quizes. First of all, I used this webapp as mentor a lot from the very first point it came up, and I mentored about 15 people through it. It was a really nice idea, but not properly implemented. With the txt quizes, the mentees were sending me the txt files by mail, then we had to schedule an IRC meeting to review the answers, or I had to send the mail back etc. It was a hell for both me and the mentee. I was ending up with hundreds of attachments, trying to find out the most recent one (or the previous one to compare answers), and the mentee had to dig between irc logs and mails to find my feedback.
The webapp solved that issue, since the mentee was putting his answers in a central place and I could easily leave comments there. But it had a bunch of issues, mostly UI related. It required too many clicks for simple actions, the notification system was broken by design, and I had no easy way to see diffs or the progress of my mentee (answers replied / answers left). For example, in order to approve an answer, I had to press “Edit”, which transferred me to a new page, where I had to tick “Approve” and press save. Too much — I just wanted to press “Approve”! When I decided to start filing bugs, I was surprised to find that all my UI complaints had already been reported; clearly I was not alone in this world.
In short, cool idea but annoying UI. That was not the problem though; the real problem is that nobody was willing to fix those issues, which led to the recruiters’ decision to move back to txt quizzes. But I am not going back to the txt quizzes, no way. Instead, I will start a Google doc and tell my mentees to put their answers there. This allows me to write my comments below their answers in a different font/colour, so I can have async communication with them. I was present during the recruitment interview session of my last mentee Pavlos, and his recruiter Markos fired up a Google doc for some coding answers, and it worked pretty well. So I decided to do the same. If the recruiters want the answers in plain text, fine, I can extract them easily.
I’d like to thank a lot Joachim Bartosik, for his work on the webapp and his interesting ideas he put on this (it saved me a lot of time, and made the mentoring process fun again), and Petteri Räty who mentored Joachim creating the recruiting webapp as GSoC project, and helped in deploying it to infra servers. I am kinda sad that I had to shut it down, and I really hope that someone steps up and revives it or creates an alternative. There has been some discussion regarding that webapp during the Gentoo Miniconf, I hope it doesn’t sink.

Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
An informal comparison (November 14, 2012, 03:14 UTC)

A few people asked me to write this down so that they can reference it - so here it is.
A completely unscientific comparison between Linux flavours and how they behave:

CentOS 5 (because upgrading is impossible):

             total       used       free     shared    buffers     cached
Mem:          3942       3916         25          0        346       2039
-/+ buffers/cache:       1530       2411

And on the same hardware, doing the same jobs, a Gentoo:
             total       used       free     shared    buffers     cached
Mem:          3947       3781        166          0        219       2980
-/+ buffers/cache:        582       3365
So we use roughly 1/3rd the memory to get the same things done (fileserver), and an informal performance analysis gives us roughly double the IO throughput.
On the same hardware!
(The IO difference could be attributed to the ext3 -> ext4 upgrade and the kernel 2.6.18 -> 3.2.1 upgrade)

Another random data point: A really clumsy mediawiki (php+mysql) setup.
Since php is singlethreaded the performance is pretty much CPU-bound; and as we have a small enough dataset it all fits into RAM.
So we have two processes (mysql+php) that are serially doing things.

Original CentOS install: ~900 qps peak in mysql, ~60 seconds walltime to render a pathological page
Default-y Gentoo: ~1200 qps peak, ~45-50 seconds walltime to render the same page
Gentoo with -march=native in CFLAGS: ~1800qps peak, ~30 seconds render time (this one was unexpected for me!)
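
For reference, the -march=native run only differs by the CFLAGS line in make.conf. A minimal sketch of what that looks like (the -O2 -pipe part is a common default I’m assuming here, not something measured above):

# /etc/make.conf (or /etc/portage/make.conf) -- illustrative only
CFLAGS="-march=native -O2 -pipe"
CXXFLAGS="${CFLAGS}"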

And a "move data around" comparison: 63GB in 3.5h vs. 240GB in 4.5h - or roughly 4x the throughput

So, to summarize: for the same workload on the same hardware we're seeing substantial improvements, from a few percent up to roughly three times the throughput, for IO-bound as well as CPU-bound tasks. Memory use goes down for most workloads while still producing the exact same results, only a lot faster.

Oh yeah, and you can upgrade without a reinstall.

November 13, 2012
Donnie Berkholz a.k.a. dberkholz (homepage, stats, bugs)

App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.


Tagged: development, gentoo

November 12, 2012
Equo code refactoring: mission accomplished (November 12, 2012, 20:34 UTC)

Apparently it’s been a while since my last blog post. This however does mean that I’ve been too busy on the coding side, which is what you may prefer I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first. The old codebase was ugly — as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years. It wasn’t modular, object-oriented, bash-completion friendly, or man-page friendly, and most importantly, it did not use any standard argument-parsing library (because at the time there was no argparse module and optparse was about to be deprecated).

Modularity

Equo subcommands are now just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands now sport new arguments (have a look at “equo match”, for example).

Man pages

All the equo subcommands come with a man page, available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself, and the page is automatically generated via some (Python + a2x)-fu. As you can understand, maintaining both the code and its documentation becomes easier this way.
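
If you have never used a2x (it is part of asciidoc), the text-to-man-page step on its own looks roughly like this — the input file name is made up for the example, and the real Equo build glues this step together with Python:

a2x --doctype manpage --format manpage equo-match.txt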

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method called “list bashcomp(last_argument_str)”, overridden from SoloCommand. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea to place tests directly in the subcommand module code.
Testing! Please install entropy 149 and play with it, try to break it and report bugs!


Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
WordPress FLV plugin WP OS FLV slow (November 12, 2012, 19:56 UTC)

Over the past few weeks, I’ve been designing a basic site (in WordPress) for a new client. This client needs some embedded FLVs on the site, and doesn’t want them (for good reason) to be directly linked to YouTube. As such, and seeing as I didn’t want to make the client write the HTML for embedding a flash video, I installed a very simple FLV plugin called WP OS FLV.

The plugin worked exactly as I had hoped, cleanly showing the FLV with just a few basic options. However, I noticed that pages with FLVs embedded via the plugin were significantly slower to load than pages without FLVs. Doing some fun experimentation with cURL, I found that those pages made some external calls. Hmmmmmm, now what would the plugin need from an external site? Doing a little more digging, I found the following line hardcoded twice in the plugin’s wposflv.php file:


<param name="movie" value="http://flv-player.net/medias/player_flv_maxi.swf" />

That line means that if the site flv-player.net is down or slow, any page on your blog that uses the FLV plugin will also be slow. In order to fix this problem, you simply need to download the player_flv_maxi.swf file from that site, upload it somewhere on your server, and edit the line to point to the location on your server instead. For instance, if your site is my-site.com, and you put the SWF file in a directory called static, you would change the absolute URL to:


<param name="movie" value="http://my-site.com/static/player_flv_maxi.swf" />

If you too were having problems with this plugin being a bit slow, I hope that this suggestion helps!

Cheers,
Zach

Jan Kundrát a.k.a. jkt (homepage, stats, bugs)

I'm sitting in the first day of the Qt Developer Days in Berlin and am pretty impressed by the event so far -- the organizers have done an excellent job and everything feels very, very smooth here. Congratulations for that; I have first-hand experience with organizing a workshop and can imagine the huge pile of work these people have invested into making it rock. Well done, I say.

It's been some time since I blogged about Trojitá, a fast and lightweight IMAP e-mail client. A lot of work has found its way in since the last release; Trojitá now supports almost all of the useful IMAP extensions, including QRESYNC and CONDSTORE for blazingly fast mailbox synchronization, or CONTEXT=SEARCH for live-updated search results, to name just a few. There have also been roughly 666 tons of bugfixes, optimizations, new features and tweaks. Trojitá is finally showing evidence of being ready for use as a regular e-mail client, and it's exciting to see that process after 6+ years of working on it in my spare time. People are taking part in the development process; there has been a series of commits from Thomas Lübking of kwin fame dealing with tricky QWidget issues, for example -- and it's great to see many usability glitches getting addressed.

The last nine months were rather hectic for me -- I got my Master's degree (the thesis was about Trojitá, of course), I started a new job (this time using Qt) and implemented quite some interesting stuff with Qt -- if you have always wondered how to integrate Ragel, a parser generator, with qmake, stay tuned for future posts.

Anyway, in case you are interested in using an extremely fast e-mail client implemented in pure Qt, give Trojitá a try. If you'd like to chat about it, feel free to drop me a mail or just stop me anywhere. We're always looking for contributors, so if you hit some annoying behavior, please do chime in and start hacking.

Cheers,
Jan

November 11, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Local policy management script (November 11, 2012, 11:37 UTC)

I’ve written a small script that I call selocal which manages locally needed SELinux rules. It allows me to add or remove SELinux rules from the command line and have them loaded up without needing to edit a .te file and building the .pp file manually. If you are interested, you can download it from my github location.

Its usage is as follows:

  • You can add a rule to the policy with selocal -a “rule”
  • You can list the current rules with selocal -l
  • You can remove entries by referring to their number (in the listing output), like selocal -d 19.
  • You can ask it to build (-b) and load (-L) the policy when you think it is appropriate

It even supports multiple modules in case you don’t want to have all local rules in a single module set.

So when I wanted to give a presentation on Tor, I had to allow the torbrowser to connect to an unreserved port. The torbrowser runs in the mozilla domain, so all I did was:

~# selocal -a "corenet_tcp_connect_all_unreserved_ports(mozilla_t)" -b -L

At the end of the presentation, I removed the line from the policy:

~# selocal -l | grep mozilla_t
19. corenet_tcp_connect_all_unreserved_ports(mozilla_t)
~# selocal -d 19 -b -L

I can also add in comments in case I would forget why I added it in the first place:

~# selocal -a "allow mplayer_t self:udp_socket create_socket_perms;" \
  -c "MPlayer plays HTTP resources" -b -L

This then also comes up when listing the current local policy rules:

~# selocal -l
...
40: allow mplayer_t self:udp_socket create_socket_perms; # MPlayer plays HTTP resources

November 09, 2012
Hanno Böck a.k.a. hanno (homepage, stats, bugs)
Languages and translation technology (November 09, 2012, 21:53 UTC)

Chinese timetable

Just recently, Microsoft research has made some progress in developing a device to do live translations from English into Mandarin. I'd like to share some thoughts with you about that.

If you read my blog on a regular basis, you will know that I traveled through Russia, Mongolia and China last year. If there's one big thing I learned on this trip, it's this: the English language is - on a worldwide scale - much less prevalent than I thought. Call me a fool, but I just wasn't aware of that. I thought, okay, maybe many people won't understand English, but at least I'll always be able to find someone nearby who's able to translate. That just wasn't the case. I spent days in cities where I met nobody who shared any language with me.

I'm pretty sure that translation technologies will become really important in the not-so-distant future. For many people, they already are. I've learned about the opinions of swedish initiatives without any knowledge of swedish just by using Google translate. Google Chrome and the free variant Chromium show directly the option to send something through Google translate if it detects that it's not in your language (although that wasn't working with Mongolian when I was there last year). I was in hotels where the staff pointed me to their PC with an instance of Yandex translate or Baidu translate where I should type in my questions in English (Yandex is something like the russian Google, Baidu is something like the chinese Google). Despite all the shortcomings of today's translation services, people use them to circumvent language barriers.

Young people in those countries are often learning English today, but it's a matter of fact that this will only very slowly translate into a real change. Lots of barriers exist. Many countries have their own language plus another language that's used as the "international communication language", and it's not English. For example, you'll probably get along pretty well in most post-soviet countries with Russian, no matter whether the countries have their own native language or not. This also happens within single countries with more than one language: people have their native language and learn the country's language as their first foreign language.
Some people think their language is especially important and this stops the adoption of English (France is especially known for that). Some people have the strange idea that supporting English language knowledge is equivalent to supporting US politics and therefore oppose it.

Yes, one can try to learn more languages (I'm trying it with Mandarin myself, and if I ever feel I can try a fourth language it'll probably be Russian), but if you look at the world scale, it's a losing battle. To get along worldwide, you'd probably have to learn at least five languages. If you are fluent in English, Mandarin, Russian, Arabic and Spanish, you're probably quite good, but I doubt there are many people on this planet able to do that. If you're one of them, you have my deepest respect (please leave a comment if you are).

If you'd pick two completely random people of the world population, it's quite likely that they don't share a common language.

I see no reason in principle why technology can't solve that. We're probably far away from a Star Trek-like universal translator and sadly evolution hasn't brought us the Babelfish yet, but I'm pretty confident that we will see rapid improvements in this area and that will change a lot. This may sound somewhat pathetic, but I think this could be a crucial piece in fixing some of the big problems of our world - hate, racism, war. It's just plain simple: if you have friends in China, you're less likely to think that "the chinese people are bad" (I'm using this example because I feel this thought is especially prevalent amongst the left-alternative people who would never admit any racist thoughts - but that's probably a topic for a blog entry of its own). If you have friends in Iran, you're less likely to support your country fighting a war against Iran. But having friends requires being able to communicate with them. Being able to have friends without the necessity of a common language is a fascinating thought to me.

November 08, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Boosting my morale? Nope, still not. (November 08, 2012, 05:43 UTC)

I’m not sure if you’re following the development of this particular package in Gentoo, but with some discussion, quite a few developers reached a consensus last week that the slotted dev-libs/boost that we’ve had for the past couple of years had to go, replaced with a single-slot package like we have for most other libraries.

The main reason for this is that the previous slotting was not really doing what its implementers expected it to do — the idea for many was that you could always depend on the highest version of Boost you support, and if you don’t support the latest, no problem, you’ll get an older one. Unfortunately, this clashes with the fact that only the newest version of Boost is supported upstream with modern configurations, so it happens that a new C library, or a new compiler, can (and does) make older versions non-buildable.

Like what happened with the new glibc 2.16, which is partially described in the previous post of the same series, and lately summarized: there’s no way to rebuild boost-1.49 with the new glibc (the “patch” that could be used would change the API, making it similar to boost-1.50 which ..), but since I did report build failures with 1.50, people “fixed” them by depending on an older version… which is now not installable. D’oh!

So what did we do to sort this out? We dropped the slot altogether. Now all Boost versions install as slot zero and each replaces the others. This makes it much easier for both developers and users, as you know that the one version you have installed is the one you’re building against, instead of “whatever has been eselected”, “whatever was installed last”, or “whatever the upstream build system finds first”, which was the case before — usually a mix of all three.

But this wasn’t enough because unfortunately, libraries, headers and tools were all slotted so they all had different names based on the version. This was handled in the new 1.52 release which I unmasked today, by going back to the default install layout that Boost uses for Unix: the system layout. This is designed to allow one and only one version of each Boost library in the system, and does neither provide a version nor a variant suffix. This meant we needed another change.

Before going back to the system layout, each Boost version installed two sets of libraries: one that was multithread-safe and one that wasn’t. Software using threads would have to link to the mt variant, while software not using threads could link to the (theoretically lower-overhead) single-thread variant — which happened to be the default. Unfortunately, this also meant that a ton of software out there, even when using threads, simply linked to the Boost library it wanted without caring about the variant. Oopsie.

Even worse, it was very well possible — and indeed was the case for Blender — that both variants were brought into the process’s address space, possibly causing extremely hard to debug issues due to symbol collisions (which I know, unfortunately, very well).

An easy way to see (using older versions of the Boost ebuilds) whether your program is linking to the wrong variant is to check whether it links to libboost_threads-mt and at the same time to some other library such as libboost_system (not the mt variant). Since our very pleasant former maintainer decided to link the mt variant of libboost_threads to the non-mt one, quite a few ways to check for multithreaded Boost simply … failed.

Now the decision on whether to build threadsafe or not is done through a USE flag, like most other ebuilds do, and since only one variant is installed, everybody gets, by default and in most cases, the multithread-safe version, and all is good. Packages requiring threads might already want to start using dev-libs/boost[threads(+)] to make sure that they are not installed with a non-threadsafe version of Boost, but there are symlinks in place right now so that even if they look for the mt variant they get the one installed version of Boost anyway (only with USE=threads, of course).

One question that was raised was “how broken will people’s systems be after upgrading from one Boost to another?” and the answer is “quite” … unless you’re using a modern enough Portage (the last few versions of the 2.1 series are okay, and most of the 2.2 ones), which can use preserve-libs. In that case, it’ll just take a single emerge command to get back on the new version, and if not, you’ll have to wait for revdep-rebuild to finish.
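
If you want the concrete commands, something along these lines should do — a rough sketch rather than an official upgrade guide, and the -av flags are just my habit:

emerge -1av dev-libs/boost
# with a preserve-libs aware Portage, relink whatever still uses the old libraries:
emerge -av @preserved-rebuild
# on older Portage versions, fall back to:
revdep-rebuild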

And to make things sweeter, with this change the time it takes to build Boost is halved (4 minutes vs 8 on my laptop), while the final package is 30MB smaller (here, at least), since only one set of libraries is installed instead of two — and that’s without counting the time and space you’d waste by having to install multiple Boost versions together.

And for developers, this also means that you can forget about the ruddy boost-utils.eclass, since now everything is supposed to work without any trickery. A win-win situation, for once.

November 07, 2012
gcc / ld madness (November 07, 2012, 17:53 UTC)

So, I started reading [The Definitive Guide to the Xen Hypervisor] (again :P ), and I thought it would be fun to start with the example guest kernel provided by the author, and extend it a bit (yes, there’s mini-os already in extras/, but I wanted to struggle with all the peculiarities of extended inline asm, x86_64 asm, linker scripts, C macros etc. myself :P ).

After doing some reading about x86_64 asm, I ‘ported’ the example kernel to 64-bit, and gave it a try. And of course, it crashed. While I was responsible for the first couple of crashes (about which, btw, I could write at least 2-3 additional blog posts :P ), I got stuck with this error:

traps.c:470:d100 Unhandled bkpt fault/trap [#3] on VCPU 0 [ec=0000]
RIP:    e033:<0000000000002271>

when trying to boot the example kernel as a domU (under xen-unstable).

0x2000 is the address where Xen maps the hypercall page inside the domU’s address space. The guest crashed when trying to issue any hypercall (HYPERCALL_console_io in this case). At first, I thought I had screwed up the x86_64 extended inline asm used to perform the hypercall, so I checked how the hypercall macros are implemented both in the Linux kernel (wow btw, it’s pretty scary) and in the mini-os kernel. But I got the same crash with both of them.

After some more debugging, I made it work. In my Makefile, I used gcc to link all of the object files into the guest kernel. When I switched to ld, it worked. Apparently, when using gcc to link object files, it calls the linker with a lot of options you might not want. Invoking gcc with the -v option will reveal that gcc calls collect2 (a wrapper around the linker), which then calls ld with various options (certainly not only the ones I was passing to my ‘linker’). One of them was --build-id, which generates a .note.gnu.build-id ELF note section in the output file, containing a hash to identify the linked file.

Apparently, this note changes the layout of the resulting ELF file, and ‘shifts’ the .text section to 0x30 from 0x0, so hypercall_page ends up at 0x2030 instead of 0x2000. Thus, when I ‘called’ into the hypercall page, I ended up at some arbitrary location instead of the start of the specific hypercall handler I was going for. But it took me quite some time of debugging before I did an objdump -dS [kernel] (and objdump -x [kernel]), and found out what was going on.
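
If you hit something similar, a couple of quick checks help — the image and linker script file names below are placeholders, not the ones from the book:

# is there a build-id note, and where did .text end up?
readelf -n kernel.elf
objdump -h kernel.elf | grep -E '\.text|\.note'
# if you'd rather keep linking through gcc, the note can be suppressed:
gcc -nostdlib -Wl,--build-id=none -Wl,-T,kernel.lds -o kernel.elf *.o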

The code from bootstrap.x86_64.S looks like this (notice the .org 0x2000 before the hypercall_page global symbol):

        .text
        .code64
	.globl	_start, shared_info, hypercall_page
_start:
	cld
	movq stack_start(%rip),%rsp
	movq %rsi,%rdi
	call start_kernel

stack_start:
	.quad stack + 8192
	
	.org 0x1000
shared_info:
	.org 0x2000

hypercall_page:
	.org 0x3000	

One solution, mentioned earlier, is to switch to ld (which probably makes more sense) instead of using gcc. The other solution is to tweak the ELF file layout through the linker script (actually this is pretty much what the Linux kernel does to work around this):

OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64")
OUTPUT_ARCH(i386:x86-64)
ENTRY(_start)

PHDRS {
	text PT_LOAD FLAGS(5);		/* R_E */
	data PT_LOAD FLAGS(7);		/* RWE */
	note PT_NOTE FLAGS(0);		/* ___ */
}

SECTIONS
{
	. = 0x0;			/* Start of the output file */
	_text = .;			/* Text and ro data */
	.text : {
		*(.text)
	} :text = 0x9090 

	_etext = .;			/* End of text section */

	.rodata : {			/* ro data section */
		*(.rodata)
		*(.rodata.*)
	} :text

	.note : { 
		*(.note.*)
	} :note

	_data = .;
	.data : {			/* Data */
		*(.data)
	} :data

	_edata = .;			/* End of data section */	
}

And now that my kernel boots, I can go back to copy-pasting code from the book … erm hacking. :P

Disclaimer: I’m not very familiar with lds scripts or x86_64 asm, so don’t trust this post too much. :P


Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)

_MG_4471

You might remember that many years ago (actually, it’s just shy of four years ago) I wrote a post about a disconcerting label I found on the box of a pair of Shure earphones I got to try to sleep better during the night when noise was coming from the outside. This was a Californian notice about the danger of carcinogenic chemicals, most likely related to the PVC in the earphones’ cord — which didn’t even last six full months! I had to trash the extremely expensive pair of earphones, because the cables ruptured behind my ears; the stupid plastic was just too rigid, I’m afraid.

Well, now that I’ve been in California for a while, I was expecting to see many more similar notices, but at least here in Hermosa Beach where I’m based, I haven’t seen one … until Starbucks was forced to put on. I actually did find out something more about those notices before, as Amazon has a page which is linked in your order when you’re shipping something in California that should have the label attached.

Now, the title of this post is obviously inflammatory — I know that, and it’s half-intended — but my problem with all of this is that when I wrote about that stupid label, I didn’t really know much about the whole thing. I was told right away in those comments that the labels are extremely common in California, and a few months ago I finally found out that it was a popular ballot that actually put the law into place… and now I feel like something’s extremely wrong in this place.

Really, I feel this is one of the stupidest warnings people can put on things, and somehow, for once, it makes me feel better to think that in Italy referendums are only used to vote laws off, not in…

November 06, 2012
Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
PulseConf 2012: Report (November 06, 2012, 11:04 UTC)

For those of you who missed my previous updates, we recently organised a PulseAudio miniconference in Copenhagen, Denmark last week. The organisation of all this was spearheaded by ALSA and PulseAudio hacker, David Henningsson. The good folks organising the Ubuntu Developer Summit / Linaro Connect were kind enough to allow us to colocate this event. A big thanks to both of them for making this possible!

The room where the first PulseAudio conference took place

The conference was attended by the four current active PulseAudio developers: Colin Guthrie, Tanu Kaskinen, David Henningsson, and myself. We were joined by long-time contributors Janos Kovacs and Jaska Uimonen from Intel, Luke Yelavich, Conor Curran and Michał Sawicz.

We started the conference at around 9:30 am on November 2nd, and actually managed to keep to the final schedule(!), so I’m going to break this report down into sub-topics for each item which will hopefully make for easier reading than an essay. I’ve also put up some photos from the conference on the Google+ event.

Mission and Vision

We started off with a broad topic — what each of our personal visions/goals for the project are. Interestingly, two main themes emerged: having the most seamless desktop user experience possible, and making sure we are well-suited to the embedded world.

Most of us expressed interest in making sure that users of various desktops had a smooth, hassle-free audio experience. In the ideal case, they would never need to find out what PulseAudio is!

Orthogonally, a number of us are also very interested in making PulseAudio a strong contender in the embedded space (mobile phones, tablets, set top boxes, cars, and so forth). While we already find PulseAudio being used in some of these, there are areas where we can do better (more in later topics).

There was some reservation expressed about other, less-used features such as network playback being ignored because of this focus. The conclusion after some discussion was that this would not be the case, as a number of embedded use-cases do make use of these and other “fringe” features.

Increasing patch bandwidth

Contributors to PulseAudio will be aware that our patch queue has been growing for the last few months due to lack of developer time. We discussed several ways to deal with this problem, the most promising of which was a periodic triage meeting.

We will be setting up a rotating schedule where each of us will organise a meeting every 2 weeks (the period might change as we implement things) where we can go over outstanding patches and hopefully clear backlog. Colin has agreed to set up the first of these.

Routing infrastructure

Next on the agenda was a presentation by Janos Kovacs about the work they’ve been doing at Intel on enhancing PulseAudio’s routing infrastructure. This is being built from the perspective of IVI systems (i.e., cars), which typically have fairly complex use cases involving multiple concurrent devices and users. The slides for the talk will be put up here shortly (edit: slides are now available).

The talk was mingled with a Q&A-type discussion with Janos and Jaska. The first item of discussion was consolidating Colin’s priority-based routing ideas into the proposed infrastructure. The general thinking was that the ideas were broadly compatible and should be implementable in the new model.

There was also some discussion on merging the module-combine-sink functionality into PulseAudio’s core, in order to make 1:N routing easier. Some alternatives using the module-filter-* modules were proposed. Further discussion will likely be required before this is resolved.

The next steps for this work are for Jaska and Janos to break up the code into smaller logical bits so that we can start to review the concepts and code in detail and work towards eventually merging as much as makes sense upstream.

Low latency

This session was taken up against the background of improving latency for games on the desktop (although it does have other applications). The indicated required latency for games was given as 16 ms (corresponding to a frame rate of 60 fps). A number of ideas to deal with the problem were brought up.

Firstly, it was suggested that the maxlength buffer attribute when setting up streams could be used to signal a hard limit on stream latency — the client signals that it will prefer an underrun, over a latency above maxlength.

Another long-standing item was to investigate the cause of underruns as we lower latency on the stream — David has already begun taking this up on the LKML.

Finally, another long-standing issue is the buffer attribute adjustment done during stream setup. This is not very well-suited to low-latency applications. David and I will be looking at this in coming days.

Merging per-user and system modes

Tanu led the topic of finding a way to deal with use-cases such as mpd or multi-user systems, where access to the PulseAudio daemon of the active user by another user might be desired. Multiple suggestions were put forward, though a definite conclusion was not reached, as further thought is required.

Tanu’s suggestion was a split between a per-user daemon to manage tasks such as per-user configuration, and a system-wide daemon to manage the actual audio resources. The rationale being that the hardware itself is a common resource and could be handled by a non-user-specific daemon instance. This approach has the advantage of having a single entity in charge of the hardware, which keeps a part of the implementation simpler. The disadvantage is that we will either sacrifice security (arbitrary users can “eavesdrop” using the machine’s mic), or security infrastructure will need to be added to decide what users are allowed what access.

I suggested that since these are broadly fringe use-cases, we should document how users can configure the system by hand for these purposes, the crux of the argument being that our architecture should be dictated by the main use-cases, and not the ancillary ones. The disadvantage of this approach is, of course, that configuration is harder for the minority that wishes multi-user access to the hardware.

Colin suggested a mechanism for users to be able to request access from an “active” PulseAudio daemon, which could trigger approval by the corresponding “active” user. The communication mechanism could be the D-Bus system bus between user daemons, and Ștefan Săftescu’s Google Summer of Code work to allow desktop notifications to be triggered from PulseAudio could be used to request authorisation.

David suggested that we could use the per-user/system-wide split, modified somewhat to introduce the concept of a “system-wide” card. This would be a device that is configured as being available to the whole system, and thus explicitly marked as not having any privacy guarantees.

In both the above cases, discussion continued about deciding how the access control would be handled, and this remains open.

We will be continuing to look at this problem until consensus emerges.

Improving (laptop) surround sound

The next topic was dealing with laptops that have a built-in 2.1 channel setup. The background here is that there are a number of laptops with stereo speakers and a subwoofer. These are usually used as stereo devices, with the subwoofer implicitly being fed data by the audio controller in some hardware-dependent way.

The possibility of exposing this hardware more accurately was discussed. Some investigation is required to see how things are currently exposed for various hardware (my MacBook Pro exposes the subwoofer as a surround control, for example). We need to deal with correctly exposing the hardware at the ALSA layer, and then using that correctly in PulseAudio profiles.

This led to a discussion of how we could handle profiles for these. Ideally, we would have a stereo profile with the hardware dealing with upmixing, and a 2.1 profile that would be automatically triggered when a stream with an LFE channel was presented. This is a general problem while dealing with surround output on HDMI as well, and needs further thought as it complicates routing.

Testing

I gave a rousing speech about writing more tests using some of the new improvements to our testing framework. Much cheering and acknowledgement ensued.

Ed.: some literary liberties might have been taken in this section

Unified cross-distribution ALSA configuration

I missed a large part of this unfortunately, but the crux of the discussion was around unifying cross-distribution sound configuration for those who wish to disable PulseAudio.

Base volumes

The next topic we took up was base volumes, and whether they are useful to most end users. For those unfamiliar with the concept: we sometimes see sinks/sources which support volume controls going above 0 dB (0 dB being the no-attenuation point). We provide the maximum allowed gain in ALSA as the maximum volume, and suggest that UIs show a marker for the base volume.

It was felt that this concept was irrelevant, and probably confusing to most end users, and that we suggest that UIs do not show this information any more.

Relatedly, it was decided that having a per-port maximum volume configuration would be useful, so as to allow users to deal with hardware where the output might get too loud.

Devices with dynamic capabilities (HDMI)

Our next topic of discussion was finding a way to deal with devices such as those HDMI ports where the capabilities of the device could change at run time (for example, when you plug out a monitor and plug in a home theater receiver).

A few ideas to deal with this were discussed, and the best one seemed to be David’s proposal to always have a separate card for each HDMI device. The addition of dynamic profiles could then be exploited to only make profiles available when an actual device is plugged in (and conversely removed when the device is plugged out).

Splitting of configuration

It was suggested that we could split our current configuration files into three categories: core, policy and hardware adaptation. This was met with approval all-around, and the pre-existing ability to read configuration from subdirectories could be reused.

Another feature that was desired was the ability to ship multiple configurations for different hardware adaptations in a single package and have the correct one selected based on the hardware being run on. We did not know of a standard, architecture-independent way to determine the hardware adaptation, so it was felt that the first step toward solving this problem would be to find or create such a mechanism. This could then either be used to set up configuration correctly in early boot, or by PulseAudio to do runtime configuration selection.

Relatedly, moving all distributed configuration to /usr/share/..., with overrides in /etc/pulse/... and $HOME, was suggested.

Better drain/underrun reporting

David volunteered to implement a per-sink-input timer for accurately determining when drain was completed, rather than waiting for the period of the entire buffer as we currently do. Unsurprisingly, no objections were raised to this solution to the long-standing issue.

In a similar vein, redefining the underflow event to mean a real device underflow (rather than the client-side buffer running empty) was suggested. After some discussion, we agreed that a separate event for device underruns would likely be better.

Beer

We called it a day at this point and dispersed beer-wards.

PulseConf Hackers

Our valiant attendees after a day of plotting the future of PulseAudio

User experience

David very kindly invited us to spend a day after the conference hacking at his house in Lund, Sweden, just a short hop away from Copenhagen. We spent a short while in the morning talking about one last item on the agenda — helping to build a more seamless user experience. The idea was to figure out some tools to help users with problems quickly converge on what problem they might be facing (or help developers do the same). We looked at the Ubuntu apport audio debugging tool that David has written, and will try to adopt it for more general use across distributions.

Hacking

The rest of the day was spent in more discussions on topics from the previous day, poring over code for some specific problems, and rolling out the first release candidate for the upcoming 3.0 release.

And cut!

I am very happy that this conference happened, and am looking forward to being able to do it again next year. As you can see from the length of this post, there are lot of things happening in this part of the stack, and lots more yet to come. It was excellent meeting all the fellow PulseAudio hackers, and my thanks to all of them for making it.

Finally, I wouldn’t be sitting here writing this report without support from Collabora, who sponsored my travel to the conference, so it’s fitting that I end this with a shout-out to them. :)

November 05, 2012
Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
openSUSE Connect Survey (November 05, 2012, 12:42 UTC)

You might remember that some time ago our team (the openSUSE Boosters) created openSUSE Connect. It was meant as a replacement for users.opensuse.org, which nobody knew about and nobody used. We hoped that it would attract more users and be a more user-friendly way to manage personal data. Apart from that, we wanted to include more interesting widgets so it could become your landing page for all your efforts in the openSUSE project. To that end we created a bugzilla widget, a fate widget, a build status widget and some more. We hoped that it would make a difference, help people, and that they would enjoy using the new site. During this summer my GSoC student created an amazing Karma widget as well, to make it more fun. And as Connect has been up for some time already, it’s now time to collect some feedback. Did it work? Do you like it? Or did it become just a wasteland? Do you think such a site makes sense?

I’m not promising anything right now, but it would be nice to know, what our users think about it and whether it could make sense to put some effort in it and how much and where to concentrate it ;-) So please, fill in this little survey and let me know your opinion. I’ll publish results later ;-)

November 03, 2012
Stuart Longland a.k.a. redhatter (homepage, stats, bugs)
I dub thee… iKarma (November 03, 2012, 23:53 UTC)

Mexico to Apple: You WILL NOT use the name ‘iPhone’ here

We don’ need no stinkin’ badge lawsuits

Apple has lost the right to use the word “iPhone” in Mexico after its trademark lawsuit against Mexican telco iFone backfired.

http://www.theregister.co.uk/2012/11/02/iphone_ifone_mexico_trademark/

Not so nice when the shoe’s on the other foot now is it, Apple? Now if only other law courts had such common sense.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Tinderbox and manual intervention (November 03, 2012, 20:39 UTC)

So after my descriptive post you might be wondering what’s so complex or time-consuming about running a tinderbox. That’s because I haven’t spoken about the actual manual labor that goes into handling the tinderbox.

The major work is of course scouring the logs to make sure that I file only valid bugs (and often enough that’s not enough, as things hide beneath the surface), but there are quite a number of tasks that are not related to the bug filing, at least not directly.

First of all, there is the matter of making sure that the packages are available for installation. This used to be more complex, but luckily, thanks to REQUIRED_USE and USE deps, this task is slightly easier than before. The tinderbox.py script (which generates the list of visible packages that need to be tested) also generates a list of USE conflicts, unmet dependencies and so on. This list I have to look at manually, and then update the package.use file so that they are satisfied. If a package’s dependencies or REQUIRED_USE are not satisfied, the package is not visible, which means it won’t be tested.

This sounds extremely easy, but there are quite a few situations, which I discussed previously, where there is no real way to satisfy the requirements of all the packages in the tree. In particular, there are situations where you can’t enable the same USE flag all over the tree — for instance, if you enable icu for libxml2, you can’t enable it for qt-webkit (well, you can, but then you have to disable gstreamer, which is required by other packages). Handling all the conflicting requirements takes a bit of trial and error.
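
As an illustration — these are not the tinderbox’s actual entries, and the category/package names are from memory — the icu case above ends up as something like this in package.use:

# /etc/portage/package.use (illustrative)
dev-libs/libxml2     icu
x11-libs/qt-webkit   -icu gstreamer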

Then there is a much worse problem and that is with tests that can get stuck, so that things like this happen:

localhost ~ # qlop -c
 * dev-python/mpi4py-1.3
     started: Sat Nov  3 12:29:39 2012
     elapsed: 9 hours, 11 minutes, 12 seconds

And I’ve got to keep growing the list of packages whose tests are unreliable — I wonder if the maintainers ever try running their tests, sometimes.

This task used to be easier because the tinderbox supports sending out tweets or dents through bti, so that it would tell me what it was doing — unfortunately identi.ca kept marking the tinderbox’s account as spam, and while they did unlock it three times, it meant I had to ask support to do so every other week. I grew tired of that and stopped caring about it. Unfortunately that means I have to connect to the instance(s) from time to time to make sure they are still crunching.

Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
komplete audio 6 on gentoo: first impressions (November 03, 2012, 05:36 UTC)

i received my native instruments komplete audio 6 in the mail today. i wasted no time plugging it in. i have a few first impressions:

build quality

this thing is heavy. not unduly so — just two or three times heavier than the audiofire 2 it replaces. it’s solidly built, so i imagine it can take a fair amount of beating on-the-go. knobs are sturdy, stiff rather than loose, without much wiggle. the big top volume knob is a little looser, with more wiggle, but it’s also made out of metal, rather than the tough plastic of the front trim knobs. the input ports grip 1/4″ jacks pretty tightly, so there’s no worry that cables will fall out.

i haven’t tested the main outputs yet, but the headphone output works correctly, offering more volume than my ears can take, and it seems to be very quiet — i couldn’t hear any background hiss even when turning up the gain.

JACK support

i have mixed first impressions here. according to ALSA upstream, and one of my buddies who’s done some kernel driver code for NI interfaces, it should work perfectly, as it’s class-compliant to the USB2.0 spec (no, really, there is a spec for 2.0, and the KA6 complies with it, separating it from the vast majority of interfaces that only comply with the common 1.1 spec).

i set up some slightly more aggressive settings on this USB interface than for my FireWire audiofire 2, which seems to have been discontinued in favor of echo’s new USB interface (though the audiofire 4 is still available, and is mostly the same). i went with 64 frames/period, 48000 sample rate, 3 periods/buffer . . . which got me 4ms latency. that’s just under half the 8ms+ latency i had with the firewire-based af2.
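
for reference, those settings map onto a jackd command line roughly like this (hw:1 is a placeholder — check aplay -l for the real identifier on your system):

jackd -d alsa -d hw:1 -r 48000 -p 64 -n 3
# 64 frames x 3 periods / 48000 Hz = 4ms of buffering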

at these settings, qjackctl reported about 18-20% CPU usage, idling around 0.39-5.0% with no activity. i only have a 1.5ghz core2duo processor from 2007, so any time the CPU clocks down to 1.0ghz, i expect the utilization numbers to jump up. switching from the ondemand to performance governor helps a bit, raising the processor speed all the way up.

playing a raw .wav file through mplayer’s JACK output worked just fine. next, i started ardour 3, and that’s where the troubles began. ardour has shown a distressing tendency to crash jackd and/or the interface, sometimes without any explanation in the logs. one second the ardour window is there, the next it’s gone.

i tried renoise next, and loaded up an old tracker project, from my creative one-a-day: day 316, beta decay. this piece isn’t too demanding: it’s sample-based, with a few audio channels, a send, and a few FX plugins on each track.

playing this song resulted in 20-32% CPU utilization, though at least renoise crashed less often than ardour. renoise feels noticeably more stable than the snapshot of ardour3 i built on july 9th.

i wasn’t very thrilled with how much work my machine was doing, since the CPU load was noticeably better with the af2. though this is to be expected; the CPU doesn’t have to do so much processing of the audio streams; the work is offloaded onto the firewire bus. with usb, all traffic goes through the CPU, so that takes more valuable DSP resources.

still, time to up the ante. i raised the sample rate to 96000, restarted JACK, and reloaded the renoise project. now i had 2ms latency…much lower than i ever ran with the af2. this low latency took more cycles to run, though: CPU utilization was between 20% and 36%, usually around 30-33%.

i haven’t yet tested the device on my main workstation, since that desktop computer is still dead. i’m planning to rebuild it, moving from an old AMD dualcore CPU to a recent Intel Ivy Bridge chip. that should free up enough resources to create complex projects while simultaneously playing back and recording high-quality audio.

first thoughts

i’m a bit concerned that for a $200 best-in-class USB2.0 class-compliant device, it’s not working as perfectly as i’d hoped. all 6/6 inputs and outputs present themselves correctly in the JACK window, but the KA6 doesn’t show up as a valid ALSA mixer device if i wanted to just listen to music through it, without running JACK.

i’m also concerned that the first few times i plug it in and start it, it’s mostly rock-solid, with no xruns (even at 4ms) appearing unless i run certain (buggy) applications. however, it’s xrun/crash-prone at a sample rate of 96000, forcing me to step down to 48000. i normally work at that latter rate anyway, but still…i should be able to get the higher quality rates. perhaps a few more reboots might fix this.

it could be that one of the three USB ports on this laptop shares a bus with another high-traffic device, which means there could be bandwidth and/or IRQ conflicts. i’m also running kernel 3.5.3 (ck-sources), with alsa-lib 1.0.25, and there might have been driver fixes in the 3.6 kernel and alsa-lib 1.0.26. i’m also using JACK1, version 0.121.3, rather than the newer JACK2. after some upgrades, i’ll do some more testing.

early verdict: the KA6 should work perfectly on linux, but higher sample rates and lowest possible latency are still out of reach. sound quality is good, build quality is great. ALSA backend support is weak to nonexistent; i may have to do considerable triage and hacking to get it to work as a regular audio playback device.

November 02, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Crossfit Praha: new home gym for November (November 02, 2012, 21:19 UTC)

(I’d like to first give a global shout out to my first Crossfit home, The Athlete Lab)

Prague - Oct 2012-113

Since I’m in Prague for a month, I became a member of Crossfit Praha instead of just being a drop-in client. The gym is quite small, but centrally located in Prague. The lifting days are separate than the normal days (probably unless you are a trusted regular). The premise is, you show up during a block of time, warm up on your own, proceed with WOD, then cool down on your own which is pretty standard across gyms from what I can tell, exception being that everyone is starting the WOD at their own time (not structured times). Now I’ve put my money where my mouth is and have to keep a good diet, drink not so much beer, etc to be able to function the next day(s) after a WOD. “Tomorrow will not be any easier”

Prague - Oct 2012-110
(Myself and Zdeněk)

Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Lenovo laptops now feature what? (November 02, 2012, 15:32 UTC)

Each month, the online discount retailer Working Advantage has a sweepstakes for some hot item. For November 2012, it is a Lenovo IdeaPad Z580. I received the following email about it yesterday:

Working Advantage Lenovo IdeaPad Z580 November Giveaway features top sirloin steaks

Last time I checked, the IdeaPad Z580 had some neat features, but definitely did not come with top sirloin steaks! :razz:

Cheers,
Zach

November 01, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Slock 1.1 background colour (November 01, 2012, 13:43 UTC)

If you use the slock application, like I do, you may have noticed a subtle change with the latest release (which is version 1.1). That change is that the background colour is now teal-like when you start typing your password in order to disable slock, and get back to using your system. This change came from a dual-colour patch that was added to version 1.1.

I personally don’t like the change, and would rather have my screen simply stay black until the correct password is entered. Is it a huge deal? No, of course not. However, I think of it as just one additional piece of security via obscurity. In any case, I wanted it back to the way it was pre-1.1. There are a couple of ways to accomplish this. The first is to build the package from source. If your distribution doesn’t come with a packaged version of slock, you can do this easily by downloading the slock-1.1 tarball, unpacking it, and modifying config.mk accordingly. The config.mk file looks like this:


# slock version
VERSION = 1.0-tip

# Customize below to fit your system

# paths
PREFIX = /usr/local

X11INC = /usr/X11R6/include
X11LIB = /usr/X11R6/lib

# includes and libs
INCS = -I. -I/usr/include -I${X11INC}
LIBS = -L/usr/lib -lc -lcrypt -L${X11LIB} -lX11 -lXext

# flags
CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"
CFLAGS = -std=c99 -pedantic -Wall -Os ${INCS} ${CPPFLAGS}
LDFLAGS = -s ${LIBS}

# On *BSD remove -DHAVE_SHADOW_H from CPPFLAGS and add -DHAVE_BSD_AUTH
# On OpenBSD and Darwin remove -lcrypt from LIBS

# compiler and linker
CC = cc

# Install mode. On BSD systems MODE=2755 and GROUP=auth
# On others MODE=4755 and GROUP=root
#MODE=2755
#GROUP=auth

With the line applicable to background colour being:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"

In order to change it back to the pre-1.1 background colour scheme, simply modify -DCOLOR2 to be the same as -DCOLOR1:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"black\"

but note that you do not need the extra escaping backslash (the one before the #) when you use a colour name instead of the hex representation.
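
For reference, the whole build-from-source route is roughly the following (just a sketch; the download URL and install prefix may differ for you, and $EDITOR stands for whatever editor you prefer):

wget http://dl.suckless.org/tools/slock-1.1.tar.gz  # grab the 1.1 tarball (location may vary)
tar xzf slock-1.1.tar.gz
cd slock-1.1
$EDITOR config.mk   # change -DCOLOR2 as described above
make
make install        # run as root; installs under PREFIX (/usr/local by default)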

If you use Gentoo, though, and you’re already building each package from source, how can you make this change and still install the package through the system package manager (Portage)? Well, you could try to edit the file, tar it up, and place the modified tarball in the /usr/portage/distfiles/ directory. However, you will quickly find that issuing another emerge slock results in that file getting overwritten, and you’re back to where you started. Instead, the package maintainer (Jeroen Roovers) was kind enough to add the ‘savedconfig’ USE flag to slock on 29 October 2012. To take advantage of this great USE flag, you first need to have Portage build slock with the flag enabled by putting it in /etc/portage/package.use:

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use

Then you are free to edit the saved config.mk, which is located at /etc/portage/savedconfig/x11-misc/slock-1.1. After recompiling with the ‘savedconfig’ USE flag and the modifications of your choice, slock should exhibit the behaviour you expect.
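
Putting it all together, the Portage route looks roughly like this (a sketch; the paths are the standard Portage locations, and $EDITOR is whatever editor you prefer):

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use
emerge --oneshot x11-misc/slock                      # first build saves the default config.mk
$EDITOR /etc/portage/savedconfig/x11-misc/slock-1.1  # change -DCOLOR2 to match -DCOLOR1
emerge --oneshot x11-misc/slock                      # rebuild using the modified config.mk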

Hope that helps!

Cheers,
Zach

October 30, 2012
Liam McLoughlin a.k.a. hexxeh (homepage, stats, bugs)
512MB Pi + Adafruit Budget Pack = win (October 30, 2012, 22:00 UTC)

The kind folks over at Element 14 emailed me last week asking if I’d like to review the new Raspberry Pi 512MB edition and the Adafruit Budget Pack. Whilst I already have a rather large collection of Pi, I thought it’d be fun to write a review since it’s not something I’ve really done before.

So, yesterday the kit arrived, and today I got the chance to unpack it and have a play around. The kit doesn’t come with a Raspberry Pi; you have to buy that separately. Here’s a breakdown of what the kit includes:

  • Pi box (a clear acrylic case for the Pi)
  • Cobbler and GPIO ribbon cable (breakout board to split the GPIO cable out onto a breadboard)
  • Half-size breadboard with a bundle of breadboarding wires
  • 4GB microSD card with SD adaptor
  • 5V/1A USB power supply and cable

Firstly, the Pi box. The clear plastic looks pretty awesome once it’s assembled, and the laser-engraved labels are an excellent touch. However, I tend to swap my Pis in and out of cases a lot, and assembling the case is kinda fiddly, so I think I’ll be keeping whichever Pi goes in this case in there.

The USB power supply, cable and SD card: there isn’t really a whole lot to say about these; you need them to use your Pi. The power supply is supposedly specced to the hilt and rated slightly high at 5.25V to account for the voltage drop caused by the cable. However, given that it’s got a US two-pin plug and I live in the UK (and don’t have the appropriate adaptor handy), I’ve not been able to test this out. That said, if Adafruit say it’s the case, I’m totally inclined to believe it’s the bee’s knees like they say it is. The SD card is a class 4 Dane-Elec, which will work just fine, but probably isn’t the fastest (note: I haven’t benchmarked this; I’m going off my general experience using various cards in the Pi). That said, this is the budget pack, so if you want a fast, expensive card, you’re best buying that separately.

My favourite part of this whole kit is the Cobbler and the GPIO ribbon cable. Very often when I’m developing with the Pi I need a serial console for debugging, and plugging the rather tiny cables that come with my USB serial adaptor into a Pi each time is somewhat of a pain. I must’ve done it a few hundred times now and I still don’t remember which cable goes to which pin. With the Cobbler I can just leave the serial adaptor connected to the breadboard and use the ribbon cable to connect the Pi of my choice: very nice!
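
(Once the adaptor is wired up on the breadboard, getting at the Pi’s serial console is just a matter of pointing a terminal program at it; for example, assuming the adaptor enumerates as /dev/ttyUSB0 and the Pi’s default 115200 baud console:)

screen /dev/ttyUSB0 115200   # open the Pi's serial console through the USB serial adaptor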

Lastly, the 512MB Raspberry Pi itself. Personally, I think this is huge. 512MB of RAM on an ARM board with a fairly bitchin’ GPU for $35? Never before has “shut up and take my money” been so appropriate. As the Foundation have said, hardware-accelerated X is being worked on, which, combined with a 512MB Pi, should make for an impressively capable machine for the money, in my opinion.

The hardware alone is useless without cool software, though, and that’s the most amazing part. In the past twelve months the Raspberry Pi has rocketed into the mainstream and amassed a huge community of fans, many of whom are developing and showing off new and cool things for the Pi. If you’ve made something cool, I’d love to see it; tweet me a link, and if I think it’s awesome I’ll retweet it and share it on.

Want to find more cool projects? Check out the Raspberry Pi and Element 14 forums, which are both very active and have much of this stuff being shared about.


Greg KH a.k.a. gregkh (homepage, stats, bugs)
Help Wanted (October 30, 2012, 19:03 UTC)

I'm looking for someone to help me out with the stable Linux kernel release process. Right now I'm drowning in trees and patches, and could use someone to help me sanity-check the releases I'm doing.

Specifically, I'm looking for someone to help with:

  • test boot the -rc stable kernels to make sure I didn't do anything foolish.
  • dig through the Linux kernel distro trees and send me the git commit ids, or the backported patches, of things they are shipping that are not in the stable and longterm kernel releases.
  • do code review of the patches going into the stable releases.

If you can help out with this, I'd really appreciate it.

Note: this is not a long-term position, only 6 months or so; I figure you'll be tired of it by then and want to move on to something else, which is fine.

In return, you get:

  • your name in the stable releases as someone who has a Signed-off-by on the patches going into them.
  • better knowledge of more kernel subsystems than you have ever had in the past, and probably more than you really want.
  • free beverages of your choice at any Linux conference you attend that I am at (given my travel schedule, that seems to be just about all of them).

If anyone is interested in this, here are the 5 steps you need to follow to "apply" for the position:

  • email me with the subject line starting with "[Stable tree help]"
  • email me "proof" you are running the latest stable -rc kernel at the moment.
  • send a link to some kernel patches you have done that were accepted into Linus's tree.
  • send a link to any Linux distro kernel tree where they keep their patches.
  • say why you want to do this type of thing, and what amount of time you can spend on it per week.

I'll close the application process in a week, on November 7, 2012. After that I'll contact everyone who applied and follow up with some questions over email. I'll also post something here to say what the response was like.

Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
grsec and PulseAudio (and Gentoo) (October 30, 2012, 08:49 UTC)

This problem seems to bite some of our hardened users a couple of times a year, so I thought I’d blog about it. If you are using grsec and PulseAudio, you must not enable CONFIG_GRKERNSEC_SYSFS_RESTRICT in your kernel, else autodetection of your cards will fail.

PulseAudio’s module-udev-detect needs to access /sys to discover what cards are available on the system, and that kernel option disallows this for anyone but root.
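
If you are not sure whether your kernel has the option enabled, a quick check (assuming either /proc/config.gz support or a configured source tree under /usr/src/linux) would be:

zgrep GRKERNSEC_SYSFS_RESTRICT /proc/config.gz         # running kernel, if CONFIG_IKCONFIG_PROC is set
grep GRKERNSEC_SYSFS_RESTRICT /usr/src/linux/.config   # or the source tree the kernel was built from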

October 29, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Happy 15th, Noah! (October 29, 2012, 13:40 UTC)

Just wanted to wish you a very happy 15th birthday, Noah! I hope that you have an awesome day, filled with fun and excitement, and surrounded by your friends, family, and loved ones. Those are the best elements of a special day, but maybe, just maybe, you’ll get some cool stuff too! :cool: I also can’t believe that it’s just one more year until you’ll have your license; bet you can’t wait!

Anyway, thinking about you, and hope that everything in your life is going superbly well.

With love,
Zach

Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
PulseConf Schedule (October 29, 2012, 12:45 UTC)

David has now published a tentative schedule for the PulseAudio Mini-conference (I’m just going to call it PulseConf — so much easier on the tongue).

For the lazy, these are some of the topics we’ll be covering:

  • Vision and mission — where we are and where we want to be
  • Improving our patch review process
  • Routing infrastructure
  • Improving low latency behaviour
  • Revisiting system- and user-modes
  • Devices with dynamic capabilities
  • Improving surround sound behaviour
  • Separating configuration for hardware adaptation
  • Better drain/underrun reporting behaviour

Phew — and there are more topics that we probably will not have time to deal with!

For those of you who cannot attend, the Linaro Connect folks (who are graciously hosting us) are planning on running Google+ Hangouts for their sessions. Hopefully we should be able to do the same for our proceedings. Watch this space for details!

p.s.: A big thank you to my employer Collabora for sponsoring my travel to the conference.

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Unexpected turn of events in Prague (October 29, 2012, 11:57 UTC)

This adventure of mine is really turning into an adventure..

I’m staying in Prague for another month. I’m working at a hostel as a bartender, getting my own private room and one or two meals per day in return. I have two consecutive days off per week, and I plan on going on overnight trips to other cities in the Czech Republic. I’ve basically invalidated the rest of my planning for the next month or two, but I’ll figure that out later…

Welcome to my office…
Camera Roll-32

October 28, 2012
Liam McLoughlin a.k.a. hexxeh (homepage, stats, bugs)
Android? Meet Chromium OS (October 28, 2012, 03:42 UTC)

It’s been too long since I’ve cracked out the Jolt and spent the wee hours hacking away on something. So tonight, I picked up a device from my collection and did the inevitable:

Nexus 7 running Chromium OS

More details soon to a tech blog near you. Image release date? Whenever I get around to neatening this up for widespread consumption. Mad props to the Queen for that extra hour tonight, really handy as I’m sure you’ll all agree.

October 27, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Prague, Czech Republic (October 27, 2012, 10:26 UTC)

I’ve been in Prague since Oct 17, 10 days now. I really like the city and hope to soon explore more of the country beyond the capital. The city’s architecture is nice because it was virtually untouched during WW2. The culture is somewhat interesting because the country was communist until 1989. Now the city is preserving what was left to decay during that era.

Prague - Oct 2012-33

The food is good, the beer is good, and the city is cheap to live in. Since it’s a continental country, the weather is marginal, but that just reminds me of home anyway.

Prague pics

October 26, 2012
Sean Amoss a.k.a. ackle (homepage, stats, bugs)
Happy Halloween, Gentoo! (October 26, 2012, 16:32 UTC)

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
moving services around (October 26, 2012, 15:53 UTC)

A few days ago the box that was hosting our low-risk webapps (barbet.gentoo.org) died. The services that were affected are get.gentoo.org, planet.gentoo.org, packages.gentoo.org, devmanual.gentoo.org, infra-status.gentoo.org and bouncer.gentoo.org. We quickly migrated the services to another box (brambling.gentoo.org). Brambling had issues with its RAM in the past, but we replaced it with new modules a couple of months ago. Additionally, this machine was used for testing only. Unfortunately the machine started to malfunction as soon as those services were transferred there, which means it has more hardware issues than just the RAM. The resulting error messages stopped when we temporarily disabled packages.gentoo.org.

The truth is that the packages webapp is old, unmaintained, uses deprecated interfaces and is a real pain to debug. In this year’s GSoC we had a really nice replacement by Slava Bacherikov, written in Django. Additionally, we were recently given a Ganeti cluster hosted at OSUOSL. Thus we decided not to bring the old packages.gentoo.org instance back up, and instead to create 4 virtual machines in our Ganeti cluster and migrate the above webapps there, along with the new and shiny packages.gentoo.org website. Furthermore, we will also deploy another GSoC webapp, gentoostats, and start providing our developers with virtual machines. We will not give public IPv4 addresses to the dev VMs, though; we will probably make them IPv6-only so that developers can access them through woodpecker (the box where the developers have their shell accounts), but that is still under discussion.

We have already started working on the above, and we expect to be fully finished next week, with the new webapps live and rocking. Special thanks to Christian and Alec, who took care of the migrations before and during the Gentoo Miniconf.

October 25, 2012
Markos Chandras a.k.a. hwoarang (homepage, stats, bugs)
Gentoo Recruitment: How do we perform? (October 25, 2012, 18:53 UTC)

A couple of days ago, Tomas and I gave a presentation at the Gentoo Miniconf. The subject of the presentation was to give an overview of the current recruitment process, how we are performing compared to previous years, and what other ways there are for users to help us improve our beloved distribution. In this blog post I am going to get into some details about our recruitment process that I did not have time to address during the presentation.


Recruitment Statistics

Recruitment Statistics from 2008 to 2012

Looking at the graph above, two things are obvious. First of all, the number of people who want to become developers decreases every year. Second, we have a significant number of people who did not manage to become developers. Let me express my personal thoughts on these two things.

For the first one, my opinion is that these numbers are directly related to Gentoo’s reputation and its “infiltration” among power users. It is not a secret that Gentoo is not as popular as it used to be. Some people think this is because of the quality of our packages, or because of how frequently we cause headaches for our users. Other people think that the “I want to compile every bit of my linux box” trend belongs to the past, and that nowadays people want to spend less time maintaining/updating their boxes and more time doing actual work. Either way, for the past few years we have been losing people, or to state it better, we are not “hiring” as many as we used to. Ignoring those who did not manage to become developers, we must admit that the absolute numbers are not in our favor. One may say that 16 developers for 2011-2012 is not bad at all, but we aim for the best, right? What bothers me most is not the number of people we recruit, but that this number has been falling constantly for the last 5 years…

As for the second observation, we see that, every year, around 4-5 people give up and decide not to become developers after all. Why is that? The answer is obvious: our long, painful, exhausting recruitment process drives people away. From my experience, it takes about 2 months from the time your mentor opens your bug until a recruiter picks you up. This obviously kills a candidate’s motivation; they lose interest, get busy with other stuff, and eventually disappear. We tried to improve this process by creating a webapp two years ago, but it did not work out well, so we are now back to square one. We really can’t afford to lose developers because of our recruitment process. It is embarrassing, to say the least.

Again, is there anything that can be done? Definitely yes. I’d say we need an improved or a brand-new web application that will focus on two things:

1) make the review process between mentor <-> recruit easier

2) make the final review process between recruit <-> recruiter an enjoyable learning process

Ideas are always welcome. Volunteers and practical solutions even more so ;) In the meantime, I am considering using Google+ Hangouts for the face-to-face interview sessions with the upcoming recruits. This should bring some fresh air to the process ;)

The entire presentation can be found here