
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
November 17, 2012, 23:07 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

November 17, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
The hardened project continues going forward… (November 17, 2012, 19:34 UTC)

This Wednesday, the Gentoo Hardened team held its monthly online meeting, discussing what has been done over the last few weeks and the ideas being worked out for the next. As with the last few meetings, allow me to summarize it for all interested parties…

Toolchain

Upstream GCC development on the 4.8 version has progressed into the third stage of its development cycle. Sadly, many of our hardened patches didn’t make the release. Zorry will continue working on these, hopefully still being able to merge a few – otherwise they’ll be for the next release.

For the MIPS platform, we might not be able to support the hardenedno* GCC profiles [1] in time. However, this is not seen as a blocker (we’re mostly interested in the hardened profiles, not the ones without hardening ;-)), so this could be done later on.

Blueness is migrating the stage building for the uclibc stages towards catalyst, providing cleaner stages. For the amd64 and i686 platforms, the uclibc-hardened and uclibc-vanilla stages are already done, and mips32r2/uclibc is on the way. Later, ARM stages will be looked at. Other platforms, like little-endian MIPS, are also on the roadmap.
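For the curious, a catalyst stage build is driven roughly like this (a sketch only; the datestamp and spec file name are placeholders, while -s and -f are the standard snapshot/spec-file options):

catalyst -s 20121117                       # snapshot the portage tree
catalyst -f stage3-uclibc-hardened.spec    # build the stage described by the spec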

Kernel

The latest hardened-sources (~arch) package contains a patch supporting the user.* namespace for extended attributes in tmpfs, as needed for the XATTR_PAX support [2]. However, this patch has not been properly investigated nor tested, so input is definitely welcome. During the meeting, it was suggested to cap the length of the attribute value and only allow the user.pax attribute, as we are otherwise allowing unprivileged applications to “grow data” in the kernel memory space (the tmpfs).
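As a rough sketch of what this looks like from userspace (assuming the setfattr/getfattr tools from sys-apps/attr; the target path and attribute value are illustrative, not authoritative):

setfattr -n user.pax -v "em" /tmp/some-binary    # set a marking by hand
getfattr -n user.pax /tmp/some-binary            # inspect it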

Prometheanfire confirmed that recent-enough kernels (3.5.4-r1 and later) with nested paging do not exhibit the performance issues reported earlier.

SELinux

The 20120725 upstream policies are stabilized on revision 5. Although a next revision is already available in the hardened-dev overlay, it will not be pushed to the main tree due to a broken admin interface. Revision 7 is slated to be made available later the same day to fix this, and is the next candidate for being pushed to the main tree.

The newer SELinux userspace utilities released in September are also going to be stabilized in the next few days (at the time of writing this post, they already are ;-)). These also support epatch_user, so users and developers can easily add patches to try things out without having to repackage the application themselves.
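For those who haven’t used it: epatch_user picks up patches from /etc/portage/patches, so trying out a fix is roughly this (the patch name is a placeholder):

mkdir -p /etc/portage/patches/sys-libs/libselinux
cp my-fix.patch /etc/portage/patches/sys-libs/libselinux/
emerge -1 sys-libs/libselinux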

grSecurity and PaX

The toolchain support for PT_PAX (the ELF-header based PaX markings) is due to be removed soon, meaning that the XATTR_PAX support will need to be matured by then. This has a few consequences on available packages (which will need a bump and fix) such as elfix, but also on the pax-utils.eclass file (interested parties are kindly requested to test out the new eclass before it reaches “production”). Of course, it will also mean that the new PaX approach needs to be properly documented for end users and developers.

pipacs also mentioned that he is working on a paxctld daemon. Just like SELinux’ restorecond daemon, this deamon will look for files and check them against a known database of binaries with their appropriate PaX markings. If the markings are set differently (or not set), the paxctld daemon will rectify the situation. For Gentoo, this is less of a concern as we already set the proper information through the ebuilds.

Profiles

The old SELinux profiles, which were already deprecated for a while, have been removed from the portage tree. That means that all SELinux-using profiles use the features/selinux inclusion rather than a fully build (yet difficult to maintain) profile definition.

System Integrity

A few packages, needed to support or work with ima/evm, have been pushed to the hardened-dev overlay.

Documentation

The SELinux handbook has been updated with the latest policy changes (such as supporting the named init scripts). We also documented SELinux policy constraints which was long overdue.

So again a nice month of (volunteer) work on the security state of Gentoo Hardened. Thanks again to all (developers, contributors and users) for making Gentoo Hardened where it is today. Zorry will send out the meeting log later to the mailinglist, so you can look at the more gory details of the meeting if you want.

  • [1] GCC profiles are a set of parameters passed on to GCC as a “default” setting. Gentoo Hardened uses GCC profiles to support using non-hardening features if the user wants to (through the gcc-config application; see the sketch below).
  • [2] XATTR_PAX is a new way of handling PaX markings on binaries. Previously, we kept the PaX markings (i.e. flags telling the kernel PaX code to allow or deny specific behavior or enable certain memory-related hardening features for a specific application) as flags in the binary itself (inside the ELF header). With XATTR_PAX, this is moved to an extended attribute called “user.pax”.
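To illustrate [1], switching profiles goes through gcc-config; a minimal sketch (the profile name is illustrative, the actual list depends on your toolchain):

gcc-config -l                                        # list the available profiles
gcc-config x86_64-pc-linux-gnu-4.7.2-hardenednopie   # select a non-default one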

Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)

A few days ago I finished fiddling with the Open Build Service (OBS) packages in our main tree. Now when anyone wants to mess around with OBS, they just have to emerge dev-util/osc and have fun with it.

What the hell is obs?

OBS is a pretty cool service that lets you specify how to build your package and its dependencies in one .spec file, and then delivers the results to multiple arches/distros (Debian, SUSE, Fedora, CentOS, Arch Linux) without you caring about how it happens.

The primary implementation runs for SUSE and is free for anyone to use (e.g. you don’t have to build SUSE packages there if you don’t want to :P). There are two ways to interact with the whole tool: one is the web application, which is a real PITA, and the other is the osc command line tool I finished fiddling with.

Okay so why did you do it?

Well, I work at SUSE and we are free to use whatever distro we want while being able to complete our tasks. I like to improve stuff: I want to be able to fix bugs in SLE/openSUSE without having any chroot/virtual machine with the named system installed, and for such a task this works pretty well :-)
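For the curious, a minimal osc session looks something like this (project and package names are made up):

osc checkout home:user my-package       # fetch the sources and the .spec file
cd home:user/my-package
osc build openSUSE_12.2 x86_64          # local test build against one target
osc commit -m "fix build"               # send the changes back to OBS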

How -g0 may be useful (November 17, 2012, 13:35 UTC)

Usually I use -g0 in my CFLAGS/CXXFLAGS; it is useful for spotting broken buildsystem behaviour.

Here is an example where the buildsystem seds away only the -g and leaves the 0 behind, causing a compile failure:

x86_64-pc-linux-gnu-gcc -DNDEBUG -march=native -O2 0 -m64 -O3 -Wall -DREGINA_SHARE_DIRECTORY=\"/usr/share/regina\" -DREGINA_VERSION_DATE=\""31 Dec 2011"\" -DREGINA_VERSION_MAJOR=\"3\" -DREGINA_VERSION_MINOR=\"6\" -DREGINA_VERSION_SUPP=\"\" -DHAVE_CONFIG_H -DHAVE_GCI -I./gci -I. -I. -I./contrib -o funcs.o -c ./funcs.c
x86_64-pc-linux-gnu-gcc: 0: No such file or directory
./funcs.c: In function '__regina_convert_date':
./funcs.c:772:14: warning: array subscript is above array bounds
make: *** [funcs.o] Error 1
emake failed

So adding it to your CFLAGS/CXXFLAGS may be a good idea.
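In make.conf terms, that would look something like this (the other flags are just an example, not a recommendation):

# /etc/portage/make.conf
CFLAGS="-march=native -O2 -g0"
CXXFLAGS="${CFLAGS}"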

November 16, 2012
Fwd: “Apple Now Owns the Page Turn” (November 16, 2012, 22:58 UTC)

Article: http://bits.blogs.nytimes.com/2012/11/16/apple-now-owns-the-page-turn/

(Heard about it from LWN https://lwn.net/Articles/525493/rss)

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
One month “in” – some sort of status report (November 16, 2012, 12:27 UTC)

I’d like to write some sort of public status report or brain dump of what’s going on. I’ve been on-the-road for one month of the planned 12 months and just “Living the Dream” as many of the fellow travelers would say. I’ve met so many people so far, some have been really inspiring, some are not. I’m embracing the idea of slow travel and/or home base travel. I really don’t care how you travel, but the Eurorail, every capital city for two days is not what I want to do. I’ve learned that already from talking to people and my preconceived values. So far, I’m on track by only visiting two countries so far, Netherlands and Czech. I’m really diving into Czech Republic – mind you, I didn’t really plan on that but it somehow happened and I’m very ok with that. However, the bad side of that is that I’m staying still while people are moving by every 2-5 days. Since the hostel gives a free beer token to every guest, I see new people everyday for just long enough to say the smalltalk – I haven’t been in that position before so it’s new for this computer guy from Minnesota his whole life… (Self-reflection, yay) Annnyway, I’m having fun, I’m enjoying myself, I don’t like to “not-work”, I am forcing myself to take the unbeaten path, I’m getting more comfortable with myself and my environment, I’m relaxed, I can go with the flow, I know “it” will work out, I drink tea daily, I started to enjoy coffee, I’m living life, I am balanced. Go me.

As of this writing, I was in the Netherlands for 7 days at $55 USD per day, and in the Czech Republic for 28 days at $28 USD per day. With my pre-trip expenses and so on, I’ve spent $65 USD per day overall.

I’m doing fine, read my posts about where I’ve been, look at my pictures on Flickr, interact with me on Twitter for what I am doing, and check back often for what I’ve been doing. Ciao.

(After thought: considering that I’ve been at (or lived at) a dropzone for nearly every weekend this past summer (and the past 6 years), I’m really missing skydiving. Not going to lie, I can’t wait to jump out of a plane, most places around me are closing for the winter and I’m not properly prepared to jump in the cold even if they were open :( poor planning on my part. I didn’t think it would be so bad, taking a hiatus, but that sport is such a part of my life. I miss it.)

November 14, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Kutná Hora / Olomouc weekend trip (November 14, 2012, 20:24 UTC)

I took a weekend trip to Kutná Hora and Olomouc. Kutná Hora was on the way via train, so I got off there (with a small connecting train) and visited the Bone Church, a common gravesite of over 40,000 people. I feel like it is one of those things that will just disappear someday – bones won’t last forever in the open air like that.


Otherwise, Kutná Hora was just a small town and I didn’t do much else there besides getting on the train again to Olomouc (a-la-moats). I probably missed something in Kutná Hora, but it wasn’t obvious to me and I had only heard about the church. Olomouc is the sixth-largest city in the Czech Republic, and largely a university town. I stayed in a lovely small hostel, the Poet’s Corner (highly recommended), for a few nights. Most students go home on the weekends, which I think is odd, but I did get to talk to some students (from a different city, who were home for the weekend) and went out to enjoy the student bars. Good times. I recommend seeing Olomouc if you have a few days open in your itinerary and are not doing the crazy whirlwind capital-city Europe tour. There are some nice things to see; I just had to see the country’s ‘other’ astronomical clock. Also a few microbreweries, which were delicious, and I even did a beer spa for fun (why not?).


Kutná Hora Pics
Olomouc Pics

Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
openSUSE Connect Survey results (November 14, 2012, 15:46 UTC)

Last week I posted a survey about openSUSE Connect. Although some answers are still coming in and you are still welcome to provide more feedback, let’s take a look at some results. Some numbers first: openSUSE Connect is not a really busy website; it gets about 80 unique visitors per day. Not much, but not a total wasteland either. Related to this is another number: more than half of the people responding to the survey had never heard about openSUSE Connect. So it sounds like we should speak about it more…

Now for the feedback itself. Most people think that it is a good idea and that it either is already useful or can become quite useful. But even though the feedback was positive, a lot of people made suggestions on how to improve it. So what can be done to make it better? Most of the feedback centred around the following two topics.

Social aspects

One frequently mentioned topic was the social aspect of Connect. It is a social network where you can’t post status messages and where it is not easy to follow what people are up to. So it’s kind of an antisocial social network. There were people asking for the ability to share what they are doing: status messages, chat and the other things they know from Facebook or Google+. On the other hand, there were people who complained that they don’t want yet another social network to maintain. And a third opinion, which I think is somewhere in between, was to provide easier integration with already existing social networks like Facebook, Twitter or Google+. That, I would say, sounds like the most reasonable solution.

More polishing

This was mentioned for most aspects of the site. openSUSE Connect is a good thing and it contains many great ideas, but somehow they are not polished enough – as with Connect itself. People complained that the UI could be nicer and more user-friendly, and that the widgets miss some finishing touches. So what is needed here? Probably some designers to step in and fix the UI ;-) But apart from that, some widgets could use some coding touches as well. So if you don’t like how something is done, feel free to submit a patch ;-)

Conclusion?

People didn’t know about openSUSE Connect and there are things to be polished. We had some good ideas and we implemented them when we started with Connect. But there is still quite some work left before Connect will be perfect. Work that can be picked up by anybody as openSUSE Connect is open source, written in PHP and we even have a documentation mentioning among other things how to work on it. We can off course just let it live as it is and use it for membership and elections for which it works well. But looks like my survey got people at least a little bit interested and for example victorhck submitted logo proposal for openSUSE Connect! So maybe we will get some other contributors as well ;-) And let’s see how will I spend my next Hackweek :-D

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
RIP recruiting.gentoo.org (November 14, 2012, 13:28 UTC)

The recruiters team announced a few months ago that they decided not to use the recruiting webapp any more, and to move back to the txt quizzes instead. Additionally, the webapp started showing random Ruby exceptions, and since nobody is willing to fix them, we found it a good opportunity to shut down the service completely. There are people who were still working on it, though (including me), so if you are a mentor, mentee or someone who had answers in there, please let me know so I can extract your data and send it to you.
And now I’d like to state my personal thoughts regarding the webapp and the recruiter’s decision to move back to the quizes. First of all, I used this webapp as mentor a lot from the very first point it came up, and I mentored about 15 people through it. It was a really nice idea, but not properly implemented. With the txt quizes, the mentees were sending me the txt files by mail, then we had to schedule an IRC meeting to review the answers, or I had to send the mail back etc. It was a hell for both me and the mentee. I was ending up with hundreds of attachments, trying to find out the most recent one (or the previous one to compare answers), and the mentee had to dig between irc logs and mails to find my feedback.
The webapp solved that issue, since the mentee was putting his answers in a central place and I could easily leave comments there. But it had a bunch of issues, mostly UI related. It required too many clicks for simple actions, the notification system was broken by design, and I had no easy way to see diffs or the progress of my mentee (answers replied / answers left). For example, in order to approve an answer, I had to press “Edit”, which transferred me to a new page, where I had to tick “Approve” and press save. Too much; I just wanted to press “Approve”! When I decided to start filing bugs, I was surprised to find that all my UI complaints had already been reported; clearly I was not alone in this world.
In short: cool idea, but annoying UI. That was not the real problem though; the real problem is that nobody was willing to fix those issues, which led to the recruiters’ decision to move back to txt quizzes. But I am not going back to the txt quizzes, no way. Instead, I will start a Google doc and tell my mentees to put their answers there. This will allow me to write my comments below their answers in a different font/color, so I can have async communication with them. I was present during the recruitment interview session of my last mentee Pavlos, and his recruiter Markos fired up a Google doc for some coding answers, and it worked pretty well. So I decided to do the same. If the recruiters want the answers in plain text, fine, I can extract them easily.
I’d like to thank a lot Joachim Bartosik, for his work on the webapp and his interesting ideas he put on this (it saved me a lot of time, and made the mentoring process fun again), and Petteri Räty who mentored Joachim creating the recruiting webapp as GSoC project, and helped in deploying it to infra servers. I am kinda sad that I had to shut it down, and I really hope that someone steps up and revives it or creates an alternative. There has been some discussion regarding that webapp during the Gentoo Miniconf, I hope it doesn’t sink.

Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
An informal comparison (November 14, 2012, 03:14 UTC)

A few people asked me to write this down so that they can reference it - so here it is.
A completely unscientific comparison between Linux flavours and how they behave:

CentOS 5 (because upgrading is impossible):

             total       used       free     shared    buffers     cached
Mem:          3942       3916         25          0        346       2039
-/+ buffers/cache:       1530       2411

And on the same hardware, doing the same jobs, a Gentoo:
             total       used       free     shared    buffers     cached
Mem:          3947       3781        166          0        219       2980
-/+ buffers/cache:        582       3365
So we use roughly 1/3rd the memory to get the same things done (fileserver), and an informal performance analysis gives us roughly double the IO throughput.
On the same hardware!
(The IO difference could be attributed to the ext3 -> ext4 upgrade and the kernel 2.6.18 -> 3.2.1 upgrade)

Another random data point: A really clumsy mediawiki (php+mysql) setup.
Since php is singlethreaded the performance is pretty much CPU-bound; and as we have a small enough dataset it all fits into RAM.
So we have two processes (mysql+php) that are serially doing things.

Original CentOS install: ~900 qps peak in mysql, ~60 seconds walltime to render a pathological page
Default-y Gentoo: ~1200 qps peak, ~45-50 seconds walltime to render the same page
Gentoo with -march=native in CFLAGS: ~1800qps peak, ~30 seconds render time (this one was unexpected for me!)

And a "move data around" comparison: 63GB in 3.5h vs. 240GB in 4.5h - or roughly 4x the throughput

So, to summarize: for the same workload on the same hardware we're seeing substantial improvements, from a few percent up to roughly triple the throughput, for IO-bound as well as CPU-bound tasks. Memory use goes down for most workloads while still getting the exact same results, only a lot faster.

Oh yeah, and you can upgrade without a reinstall.

November 13, 2012
Donnie Berkholz a.k.a. dberkholz (homepage, stats, bugs)

App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.


Tagged: development, gentoo

November 12, 2012
Equo code refactoring: mission accomplished (November 12, 2012, 20:34 UTC)

Apparently it’s been a while since my last blog post. This however does mean that I’ve been too busy on the coding side, which is what you may prefer I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first. The old codebase was ugly, as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years. It wasn’t modular, object oriented, bash-completion friendly or man-page friendly, and most importantly, it did not use any standard argument parsing library (because at the time there was no argparse module and optparse was about to be deprecated).

Modularity

Equo subcommands are just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands are now sporting new arguments (have a look at “equo match”, for example).

Man pages

All the equo subcommands are provided with a man page, which is available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself, and the page is automatically generated via some (Python + a2x)-fu. As you can understand, maintaining both the code and its documentation becomes easier this way.
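For example, to poke at the match subcommand mentioned earlier and then read its documentation (the atom is just an example):

equo match app-editors/vim
man equo-match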

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method called “list bashcomp(last_argument_str)”, overridden from SoloCommand. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea of placing tests directly in the subcommand module code.
Also: testing! Please install entropy 149 and play with it, try to break it, and report bugs!


Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
WordPress FLV plugin WP OS FLV slow (November 12, 2012, 19:56 UTC)

Over the past few weeks, I’ve been designing a basic site (in WordPress) for a new client. This client needs some embedded FLVs on the site, and doesn’t want them (for good reason) to be directly linked to YouTube. As such, and seeing as I didn’t want to make the client write the HTML for embedding a flash video, I installed a very simple FLV plugin called WP OS FLV.

The plugin worked exactly as I had hoped it would, by cleanly showing the FLV with just a few basic options. However, I noticed that the pages with FLVs embedded in them using the plugin were significantly slower to load than were pages without FLVs. Doing some fun experimentation with cURL, I found that those pages had some external calls on them. Hmmmmmm, now what would the plugin need from an external site? Doing a little more digging, I found the following line hardcoded twice in the plugin’s wposflv.php file:


<param name="movie" value="http://flv-player.net/medias/player_flv_maxi.swf" />

That line means that if the site flv-player.net is down or slow, the page with the FLV plugin on your blog will also be slow. In order to fix this problem, you simply need to download the player_flv_maxi.swf file from that site, upload it somewhere on your server, and edit the line to call the location on your server instead. For instance, if your site is my-site.com, and you put the SWF file in a directory called static, you would change the absolute URL to:


<param name="movie" value="http://my-site.com/static/player_flv_maxi.swf" />

If you too were having problems with this plugin being a bit slow, I hope that this suggestion helps!

Cheers,
Zach

Jan Kundrát a.k.a. jkt (homepage, stats, bugs)

I'm sitting in the first day of the Qt Developer Days in Berlin and am pretty impressed by the event so far -- the organizers have done an excellent job and everything feels very, very smooth here. Congratulations for that; I have first-hand experience with organizing a workshop and can imagine the huge pile of work these people have invested into making it rock. Well done, I say.

It's been some time since I blogged about Trojitá, a fast and lightweight IMAP e-mail client. A lot of work has found its way in since the last release; Trojitá now supports almost all of the useful IMAP extensions, including QRESYNC and CONDSTORE for blazingly fast mailbox synchronization, or CONTEXT=SEARCH for live-updated search results, to name just a few. There have also been roughly 666 tons of bugfixes, optimizations, new features and tweaks. Trojitá is finally showing evidence of getting ready to be usable as a regular e-mail client, and it's exciting to see that process after 6+ years of working on it in my spare time. People are taking part in the development process; there has been a series of commits from Thomas Lübking of kwin fame dealing with tricky QWidget issues, for example -- and it's great to see many usability glitches getting addressed.

The last nine months were rather hectic for me -- I got my Master's degree (the thesis was about Trojitá, of course), I started a new job (this time using Qt) and implemented quite some interesting stuff with Qt -- if you have always wondered how to integrate Ragel, a parser generator, with qmake, stay tuned for future posts.

Anyway, in case you are interested in using an extremely fast e-mail client implemented in pure Qt, give Trojitá a try. If you'd like to chat about it, feel free to drop me a mail or just stop me anywhere. We're always looking for contributors, so if you hit some annoying behavior, please do chime in and start hacking.

Cheers,
Jan

November 11, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Local policy management script (November 11, 2012, 11:37 UTC)

I’ve written a small script that I call selocal which manages locally needed SELinux rules. It allows me to add or remove SELinux rules from the command line and have them loaded up without needing to edit a .te file and building the .pp file manually. If you are interested, you can download it from my github location.

Its usage is as follows:

  • You can add a rule to the policy with selocal -a “rule”
  • You can list the current rules with selocal -l
  • You can remove entries by referring to their number (in the listing output), like selocal -d 19.
  • You can ask it to build (-b) and load (-L) the policy when you think it is appropriate

It even supports multiple modules in case you don’t want to have all local rules in a single module set.

So when I wanted to give a presentation on Tor, I had to allow the torbrowser to connect to an unreserved port. The torbrowser runs in the mozilla domain, so all I did was:

~# selocal -a "corenet_tcp_connect_all_unreserved_ports(mozilla_t)" -b -L

At the end of the presentation, I removed the line from the policy:

~# selocal -l | grep mozilla_t
19. corenet_tcp_connect_all_unreserved_ports(mozilla_t)
~# selocal -d 19 -b -L

I can also add in comments in case I would forget why I added it in the first place:

~# selocal -a "allow mplayer_t self:udp_socket create_socket_perms;" \
  -c "MPlayer plays HTTP resources" -b -L

This then also comes up when listing the current local policy rules:

~# selocal -l
...
40: allow mplayer_t self:udp_socket create_socket_perms; # MPlayer plays HTTP resources

November 09, 2012
Hanno Böck a.k.a. hanno (homepage, stats, bugs)
Languages and translation technology (November 09, 2012, 21:53 UTC)

Just recently, Microsoft Research has made some progress in developing a device to do live translations from English into Mandarin. I'd like to share some thoughts with you about that.

If you read my blog on a regular basis, you will know that I traveled through Russia, Mongolia and China last year. If there's one big thing I learned on this trip, it's this: English language is - on a worldwide scale - much less prevalent than I thought. Call me a fool, but I just wasn't aware of that. I thought, okay, maybe many people won't understand English, but at least I'll always be able to find someone nearby who's able to translate. That just wasn't the case. I spent days in cities where I met nobody that shared any language knowledge with me.

I'm pretty sure that translation technologies will become really important in the not-so-distant future. For many people, they already are. I've learned about the opinions of Swedish initiatives without any knowledge of Swedish, just by using Google Translate. Google Chrome and the free variant Chromium directly show an option to send something through Google Translate if they detect that it's not in your language (although that wasn't working with Mongolian when I was there last year). I was in hotels where the staff pointed me to their PC with an instance of Yandex Translate or Baidu Translate where I should type in my questions in English (Yandex is something like the Russian Google, Baidu is something like the Chinese Google). Despite all the shortcomings of today's translation services, people use them to circumvent language barriers.

Young people in those countries are often learning English today, but it's a matter of fact that this will only very slowly translate into a real change. Lots of barriers exist. Many countries have their own language and another language that's used as the "international communication language" that's not English. For example, you'll probably get along pretty well in most post-Soviet countries with Russian, no matter if the countries have their own native language or not. This also happens in single countries with more than one language: people have their native language and learn the country's language as their first foreign language.
Some people think their language is especially important and this stops the adoption of English (France is especially known for that). Some people have the strange idea that supporting English language knowledge is equivalent to supporting US politics and therefore oppose it.

Yes, one can try to learn more languages (I'm trying it with Mandarin myself, and if I ever feel I can try a fourth language it'll probably be Russian), but if you look at the world scale, it's a losing battle. To get along worldwide, you'd probably have to learn at least five languages. If you are fluent in English, Mandarin, Russian, Arabic and Spanish, you're probably quite good, but I doubt there are many people on this planet able to do that. If you're one of them, you have my deepest respect (please leave a comment if you are).

If you'd pick two completely random people of the world population, it's quite likely that they don't share a common language.

I see no reason in principle why technology can't solve that. We're probably far away from a Star Trek-like universal translator and sadly evolution hasn't brought us the Babelfish yet, but I'm pretty confident that we will see rapid improvements in this area and that will change a lot. This may sound somewhat pathetic, but I think this could be a crucial issue in fixing some of the big problems of our world - hate, racism, war. It's just plain simple: If you have friends in China, you're less likely to think that "the Chinese people are bad" (I'm using this example because I feel this thought is especially prevalent amongst the left-alternative people who would never admit any racist thoughts - but that's probably a topic for a blog entry on its own). If you have friends in Iran, you're less likely to support your country fighting a war against Iran. But having friends requires being able to communicate with them. Being able to have friends without the necessity of a common language is a fascinating thought to me.

November 08, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Boosting my morale? Nope, still not. (November 08, 2012, 05:43 UTC)

I’m not sure if you’re following the development of this particular package in Gentoo, but with some discussion, quite a few developers reached a consensus last week that the slotted dev-libs/boost that we’ve had for the past couple of years had to go, replaced with a single-slot package like we have for most other libraries.

The main reason for this is that the previous slotting was not really doing what the implementers expected it to do — the idea for many was that you could always depend on the highest version of Boost you support, and if you don’t support the latest, no problem, you’ll get an older one. Unfortunately, this clashes with the fact that only the newest version of Boost is supported upstream with modern configurations, so it happens that a new C library or a new compiler can (and does) make older versions non-buildable.

Like what happened with the new GLIBC 2.16, which is partially described in the previous post of the same series, and lately summarized, where there’s no way to rebuild boost-1.49 with the new glibc (the “patch” that could be used would change the API, making it similar to boost-1.50 which ..), but since I did report build failures with 1.50, people “fixed” them by depending on an older version… which is now not installable. D’oh!

So what did I do to sort this out? We dropped the slot altogether. Now all Boost versions install as slot zero and each replace the others. This makes it much easier for both developers and users, as you know that the one version you got installed is the one you’re building against, instead of “whatever has been eselected” or “whatever was installed last” or “whatever is the first one that the upstream user is finding” which was before — usually a mix of all.

But this wasn’t enough because unfortunately, libraries, headers and tools were all slotted so they all had different names based on the version. This was handled in the new 1.52 release which I unmasked today, by going back to the default install layout that Boost uses for Unix: the system layout. This is designed to allow one and only one version of each Boost library in the system, and does neither provide a version nor a variant suffix. This meant we needed another change.

Before going back to the system layout, each Boost version installed two sets of libraries, one that was multithread-safe and one that wasn’t. Software using threads would have to link to the mt variant, while those not using threads could link to the (theoretically lower-overhead) single-thread variant. Which happened to be the default. Unfortunately, this also meant that a ton of software out there, even when using threads, simply linked to the Boost library it wanted without caring about the variant. Oopsie.

Even worse, it was very well possible, and indeed was the case for Blender, that both variants were brought in, in the process’s address space, possibly causing extremely hard to debug issues due to symbol collisions (which I know, unfortunately, very well).

An easy way to see (using older versions of the Boost ebuilds) whether your program is linking to the wrong variant is to check whether it links to libboost_thread-mt and at the same time to some other library such as libboost_system (not the mt variant). Since our very pleasant former maintainer decided to link the mt variant of libboost_thread to the non-mt one, quite a few ways to check for multithreaded Boost simply … failed.
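A quick, hedged way to check this on your own binaries (blender is just an example target):

ldd /usr/bin/blender | grep boost
# seeing both a libboost_thread-mt.so and a plain (non-mt) libboost_*.so entry
# in the output indicates the mixed-variant problem described above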

Now the decision on whether to build threadsafe or not is made through a USE flag, as most other ebuilds do, and since only one variant is installed, everybody gets, by default and in most cases, the multithread-safe version, and all is good. Packages requiring threads might already want to start using dev-libs/boost[threads(+)] to make sure that they are not installed with a non-threadsafe version of Boost, but there are symlinks in place right now so that even if they are looking for the mt variant they get the one installed version of Boost anyway (only with USE=threads, of course).
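In ebuild terms, that dependency would look roughly like this (a sketch, not a drop-in):

DEPEND="dev-libs/boost[threads(+)]"
RDEPEND="${DEPEND}"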

One question raised was “how broken will people’s systems be after upgrading from one Boost to another?” and the answer is “quite” … unless you’re using a modern enough Portage (the last few versions of the 2.1 series are okay, and most of 2.2), which can use preserve-libs. In that case, it’ll just require you to run a single emerge command to get back on the new version; if not, you’ll have to wait for revdep-rebuild to finish.
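With a preserve-libs aware Portage, that single command is presumably the preserved-rebuild set; without it, revdep-rebuild does the walking:

emerge -1v @preserved-rebuild    # Portage with preserve-libs
revdep-rebuild                   # fallback on older Portage versions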

And to make things sweeter, with this change, the time it takes for Boost to build is halved (4 minutes vs 8 on my laptop), while the final package is 30MB less (here at least), since only one set of libraries is installed instead of two — without counting the time and space you’d waste by having to install multiple boost versions together.

And for developers, this also mean that you can forget about the ruddy boost-utils.eclass, since now everything is supposed to work without any trickery. Win-win situation, for once.

November 07, 2012
gcc / ld madness (November 07, 2012, 17:53 UTC)

So, I started reading [The Definitive Guide to the Xen Hypervisor] (again :P ), and I thought it would be fun to start with the example guest kernel, provided by the author, and extend it a bit (ye, there’s mini-os already in extras/, but I wanted to struggle with all the peculiarities of extended inline asm, x86_64 asm, linker scripts, C macros etc, myself :P ).

After doing some reading about x86_64 asm, I ‘ported’ the example kernel to 64bit, and gave it a try. And of course, it crashed. While I was responsible for the first couple of crashes (for which btw, I can write at least 2-3 additional blog posts :P ), I got stuck with this error:

traps.c:470:d100 Unhandled bkpt fault/trap [#3] on VCPU 0 [ec=0000]
RIP:    e033:<0000000000002271>

when trying to boot the example kernel as a domU (under xen-unstable).

0x2000 is the address where Xen maps the hypercall page inside the domU’s address space. The guest crashed when trying to issue any hypercall (HYPERCALL_console_io in this case). At first, I thought I had screwed up with the x86_64 extended inline asm used to perform the hypercall, so I checked how the hypercall macros were implemented both in the Linux kernel (wow btw, it’s pretty scary) and in the mini-os kernel. But I got the same crash with both of them.

After some more debugging, I made it work. In my Makefile, I used gcc to link all of the object files into the guest kernel. When I switched to ld, it worked. Apparently, when using gcc to link object files, it calls the linker with a lot of options you might not want. Invoking gcc with the -v option will reveal that gcc calls collect2 (a wrapper around the linker), which then calls ld with various options (certainly not only the ones I was passing to my ‘linker’). One of them was --build-id, which generates a “.note.gnu.build-id” ELF note section in the output file, which contains some hash to identify the linked file.

Apparently, this note changes the layout of the resulting ELF file, and ‘shifts’ the .text section to 0x30 from 0x0, and hypercall_page ends up at 0x2030 instead of 0x2000. Thus, when I ‘called’ into the hypercall page, I ended up at some arbitrary location instead of the start of the specific hypercall handler I was going for. But it took me quite some time of debugging before I did an objdump -dS [kernel] (and objdump -x [kernel]) and found out what was going on.
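To make the difference concrete, here is a hedged sketch of the link commands involved (object and linker script names are made up):

gcc -nostdlib -T kernel.lds -o kernel bootstrap.o kernel.o    # pulls in linker defaults, including --build-id
ld -T kernel.lds -o kernel bootstrap.o kernel.o               # only the flags you actually pass
gcc -nostdlib -Wl,--build-id=none -T kernel.lds -o kernel bootstrap.o kernel.o   # or suppress the note explicitly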

The code from bootstrap.x86_64.S looks like this (notice the .org 0x2000 before the hypercall_page global symbol):

        .text
        .code64
	.globl	_start, shared_info, hypercall_page
_start:
	cld
	movq stack_start(%rip),%rsp
	movq %rsi,%rdi
	call start_kernel

stack_start:
	.quad stack + 8192
	
	.org 0x1000
shared_info:
	.org 0x2000

hypercall_page:
	.org 0x3000	

One solution, mentioned earlier, is to switch to ld (which probably makes more sense) instead of using gcc. The other solution is to tweak the ELF file layout through the linker script (actually, this is pretty much what the Linux kernel does to work around this):

OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64")
OUTPUT_ARCH(i386:x86-64)
ENTRY(_start)

PHDRS {
	text PT_LOAD FLAGS(5);		/* R_E */
	data PT_LOAD FLAGS(7);		/* RWE */
	note PT_NOTE FLAGS(0);		/* ___ */
}

SECTIONS
{
	. = 0x0;			/* Start of the output file */
	_text = .;			/* Text and ro data */
	.text : {
		*(.text)
	} :text = 0x9090 

	_etext = .;			/* End of text section */

	.rodata : {			/* ro data section */
		*(.rodata)
		*(.rodata.*)
	} :text

	.note : { 
		*(.note.*)
	} :note

	_data = .;
	.data : {			/* Data */
		*(.data)
	} :data

	_edata = .;			/* End of data section */	
}

And now that my kernel boots, I can go back to copy-pasting code from the book … erm hacking. :P

Disclaimer: I’m not very familiar with lds scripts or x86_64 asm, so don’t trust this post too much. :P


Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)


You might remember that many years ago (actually, it’s just shy of four years ago) I wrote a post about a disconcerting label I found on the box of a pair of Shure earphones I got to try to sleep better during the night when noise was coming from the outside. This was a Californian notice about the danger of carcinogenic chemicals, most likely related to the PVC in the earphones’ cord — which didn’t even last six full months! I had to trash the extremely expensive pair of earphones, because the cables ruptured behind my ears; the stupid plastic was just too rigid I’m afraid.

Well, now that I’ve been in California for a while, I was expecting to see many more similar notices, but at least here in Hermosa Beach, where I’m based, I hadn’t seen one … until Starbucks was forced to put one up. I actually did find out something more about those notices before: Amazon has a page which is linked from your order when you’re shipping something to California that should have the label attached.

Now the title of this post is obviously inflammatory, I know that and it’s half-intended, but my problem with all of this is that when I wrote about that stupid label, I didn’t really know much about the whole thing — I’ve been told right away in those comments that the labels are extremely common in California, a few months ago I finally found that it was a popular ballot that actually put the law into place… and now I feel like something’s extremely wrong in this place.

Really, I feel this is one of the stupidest warnings people can put on things, and somehow, for once, it makes me feel better to think that in Italy, referendums are only used to vote laws off, not in…

November 06, 2012
Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
PulseConf 2012: Report (November 06, 2012, 11:04 UTC)

For those of you who missed my previous updates, we recently organised a PulseAudio miniconference in Copenhagen, Denmark last week. The organisation of all this was spearheaded by ALSA and PulseAudio hacker, David Henningsson. The good folks organising the Ubuntu Developer Summit / Linaro Connect were kind enough to allow us to colocate this event. A big thanks to both of them for making this possible!

The room where the first PulseAudio conference took place

The conference was attended by the four current active PulseAudio developers: Colin Guthrie, Tanu Kaskinen, David Henningsson, and myself. We were joined by long-time contributors Janos Kovacs and Jaska Uimonen from Intel, Luke Yelavich, Conor Curran and Michał Sawicz.

We started the conference at around 9:30 am on November 2nd, and actually managed to keep to the final schedule(!), so I’m going to break this report down into sub-topics for each item which will hopefully make for easier reading than an essay. I’ve also put up some photos from the conference on the Google+ event.

Mission and Vision

We started off with a broad topic — what each of our personal visions/goals for the project are. Interestingly, two main themes emerged: having the most seamless desktop user experience possible, and making sure we are well-suited to the embedded world.

Most of us expressed interest in making sure that users of various desktops had a smooth, hassle-free audio experience. In the ideal case, they would never need to find out what PulseAudio is!

Orthogonally, a number of us are also very interested in making PulseAudio a strong contender in the embedded space (mobile phones, tablets, set top boxes, cars, and so forth). While we already find PulseAudio being used in some of these, there are areas where we can do better (more in later topics).

There was some reservation expressed about other, less-used features such as network playback being ignored because of this focus. The conclusion after some discussion was that this would not be the case, as a number of embedded use-cases do make use of these and other “fringe” features.

Increasing patch bandwidth

Contributors to PulseAudio will be aware that our patch queue has been growing for the last few months due to lack of developer time. We discussed several ways to deal with this problem, the most promising of which was a periodic triage meeting.

We will be setting up a rotating schedule where each of us will organise a meeting every 2 weeks (the period might change as we implement things) where we can go over outstanding patches and hopefully clear backlog. Colin has agreed to set up the first of these.

Routing infrastructure

Next on the agenda was a presentation by Janos Kovacs about the work they’ve been doing at Intel on enhancing PulseAudio’s routing infrastructure. It is being built from the perspective of IVI systems (i.e., cars), which typically have fairly complex use cases involving multiple concurrent devices and users. The slides for the talk will be put up here shortly (edit: slides are now available).

The talk was mingled with a Q&A-style discussion with Janos and Jaska. The first item of discussion was consolidating Colin’s priority-based routing ideas into the proposed infrastructure. The general thinking was that the ideas were broadly compatible and should be implementable in the new model.

There was also some discussion on merging the module-combine-sink functionality into PulseAudio’s core, in order to make 1:N routing easier. Some alternatives using the module-filter-* modules were proposed. Further discussion will likely be required before this is resolved.

The next steps for this work are for Jaska and Janos to break up the code into smaller logical bits so that we can start to review the concepts and code in detail and work towards eventually merging as much as makes sense upstream.

Low latency

This session was taken up against the background of improving latency for games on the desktop (although it does have other applications). The indicated required latency for games was given as 16 ms (corresponding to a frame rate of 60 fps). A number of ideas to deal with the problem were brought up.

Firstly, it was suggested that the maxlength buffer attribute when setting up streams could be used to signal a hard limit on stream latency — the client signals that it will prefer an underrun, over a latency above maxlength.

Another long-standing item was to investigate the cause of underruns as we lower latency on the stream — David has already begun taking this up on the LKML.

Finally, another long-standing issue is the buffer attribute adjustment done during stream setup. This is not very well-suited to low-latency applications. David and I will be looking at this in coming days.

Merging per-user and system modes

Tanu led the topic of finding a way to deal with use-cases such as mpd or multi-user systems, where access to the PulseAudio daemon of the active user by another user might be desired. Multiple suggestions were put forward, though a definite conclusion was not reached, as further thought is required.

Tanu’s suggestion was a split between a per-user daemon to manage tasks such as per-user configuration, and a system-wide daemon to manage the actual audio resources. The rationale being that the hardware itself is a common resource and could be handled by a non-user-specific daemon instance. This approach has the advantage of having a single entity in charge of the hardware, which keeps a part of the implementation simpler. The disadvantage is that we will either sacrifice security (arbitrary users can “eavesdrop” using the machine’s mic), or security infrastructure will need to be added to decide what users are allowed what access.

I suggested that since these are broadly fringe use-cases, we should document how users can configure the system by hand for these purposes, the crux of the argument being that our architecture should be dictated by the main use-cases, and not the ancillary ones. The disadvantage of this approach is, of course, that configuration is harder for the minority that wishes multi-user access to the hardware.

Colin suggested a mechanism for users to be able to request access from an “active” PulseAudio daemon, which could trigger approval by the corresponding “active” user. The communication mechanism could be the D-Bus system bus between user daemons, and Ștefan Săftescu’s Google Summer of Code work, which allows desktop notifications to be triggered from PulseAudio, could be used to request authorisation.

David suggested that we could use the per-user/system-wide split, modified somewhat to introduce the concept of a “system-wide” card. This would be a device that is configured as being available to the whole system, and thus explicitly marked as not having any privacy guarantees.

In both the above cases, discussion continued about deciding how the access control would be handled, and this remains open.

We will be continuing to look at this problem until consensus emerges.

Improving (laptop) surround sound

The next topic was handling laptops with a built-in 2.1-channel setup. The background is that there are a number of laptops with stereo speakers and a subwoofer. These are usually used as stereo devices, with the subwoofer implicitly being fed data by the audio controller in some hardware-dependent way.

The possibility of exposing this hardware more accurately was discussed. Some investigation is required to see how things are currently exposed for various hardware (my MacBook Pro exposes the subwoofer as a surround control, for example). We need to deal with correctly exposing the hardware at the ALSA layer, and then using that correctly in PulseAudio profiles.

This led to a discussion of how we could handle profiles for these. Ideally, we would have a stereo profile with the hardware dealing with upmixing, and a 2.1 profile that would be automatically triggered when a stream with an LFE channel was presented. This is a general problem while dealing with surround output on HDMI as well, and needs further thought as it complicates routing.

Testing

I gave a rousing speech about writing more tests using some of the new improvements to our testing framework. Much cheering and acknowledgement ensued.

Ed.: some literary liberties might have been taken in this section

Unified cross-distribution ALSA configuration

I missed a large part of this, unfortunately, but the crux of the discussion was around unifying cross-distribution sound configuration for those who wish to disable PulseAudio.

Base volumes

The next topic we took up was base volumes, and whether they are useful to most end users. For those unfamiliar with the concept: we sometimes see sinks/sources which support volume controls going above 0 dB (which is the no-attenuation point). We provide the maximum allowed gain in ALSA as the maximum volume, and suggest that UIs show a marker for the base volume.

It was felt that this concept was irrelevant and probably confusing to most end users, and that we should suggest that UIs not show this information any more.

Relatedly, it was decided that having a per-port maximum volume configuration would be useful, so as to allow users to deal with hardware where the output might get too loud.

Devices with dynamic capabilities (HDMI)

Our next topic of discussion was finding a way to deal with devices such as those HDMI ports where the capabilities of the device could change at run time (for example, when you plug out a monitor and plug in a home theater receiver).

A few ideas to deal with this were discussed, and the best one seemed to be David’s proposal to always have a separate card for each HDMI device. The addition of dynamic profiles could then be exploited to only make profiles available when an actual device is plugged in (and conversely removed when the device is plugged out).

Splitting of configuration

It was suggested that we could split our current configuration files into three categories: core, policy and hardware adaptation. This was met with approval all-around, and the pre-existing ability to read configuration from subdirectories could be reused.

Another desired feature was the ability to ship multiple configurations for different hardware adaptations in a single package and have the correct one selected based on the hardware being run on. We did not know of a standard, architecture-independent way to determine the hardware adaptation, so it was felt that the first step toward solving this problem would be to find or create such a mechanism. This could then either be used to set up the configuration correctly in early boot, or by PulseAudio to do runtime configuration selection.

Relatedly, moving all distributed configuration to /usr/share/..., with overrides in /etc/pulse/... and $HOME, was suggested.
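
As a sketch of the proposed precedence (paths illustrative only; nothing here has been implemented yet):

~/.pulse/default.pa                 # per-user override, checked first
/etc/pulse/default.pa               # system administrator override
/usr/share/pulseaudio/default.pa    # distribution-shipped default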

Better drain/underrun reporting

David volunteered to implement a per-sink-input timer for accurately determining when drain was completed, rather than waiting for the period of the entire buffer as we currently do. Unsurprisingly, no objections were raised to this solution to the long-standing issue.

In a similar vein, redefining the underflow event to mean a real device underflow (rather than the client-side buffer running empty) was suggested. After some discussion, we agreed that a separate event for device underruns would likely be better.

Beer

We called it a day at this point and dispersed beer-wards.

PulseConf Hackers

Our valiant attendees after a day of plotting the future of PulseAudio

User experience

David very kindly invited us to spend a day after the conference hacking at his house in Lund, Sweden, just a short hop away from Copenhagen. We spent a short while in the morning talking about one last item on the agenda — helping to build a more seamless user experience. The idea was to figure out some tools to help users with problems quickly converge on what problem they might be facing (or to help developers do the same). We looked at the Ubuntu apport audio debugging tool that David has written, and will try to adapt it for more general use across distributions.

Hacking

The rest of the day was spent in more discussions on topics from the previous day, poring over code for some specific problems, and rolling out the first release candidate for the upcoming 3.0 release.

And cut!

I am very happy that this conference happened, and am looking forward to being able to do it again next year. As you can see from the length of this post, there are a lot of things happening in this part of the stack, and lots more yet to come. It was excellent meeting all the fellow PulseAudio hackers, and my thanks to all of them for making it.

Finally, I wouldn’t be sitting here writing this report without support from Collabora, who sponsored my travel to the conference, so it’s fitting that I end this with a shout-out to them. :)

November 05, 2012
Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
openSUSE Connect Survey (November 05, 2012, 12:42 UTC)

You might remember that in our team (openSUSE Boosters), we created openSUSE Connect some time ago. It was meant as a replacement for users.opensuse.org, which nobody knew about and nobody used. We hoped that it would attract more users and be a more user-friendly way to manage personal data. Apart from that, we wanted to include more interesting widgets so it could become your landing page for all your efforts in the openSUSE project. To that end we created a bugzilla widget, a fate widget, a build status widget and some more. We hoped that it would make a difference, help people, and that they would enjoy using the new site. During this summer my GSoC student created an amazing Karma widget as well to make it more fun. And as Connect has been in operation for some time now, it’s time to collect some feedback. Did it work? Do you like it? Or did it become just a wasteland? Do you think such a site makes sense?

I’m not promising anything right now, but it would be nice to know what our users think about it, whether it makes sense to put more effort into it, and how much and where to concentrate that effort ;-) So please, fill in this little survey and let me know your opinion. I’ll publish the results later ;-)

November 03, 2012
Stuart Longland a.k.a. redhatter (homepage, stats, bugs)
I dub thee… iKarma (November 03, 2012, 23:53 UTC)

Mexico to Apple: You WILL NOT use the name ‘iPhone’ here

We don’ need no stinkin’ badge lawsuits

Apple has lost the right to use the word “iPhone” in Mexico after its trademark lawsuit against Mexican telco iFone backfired.

http://www.theregister.co.uk/2012/11/02/iphone_ifone_mexico_trademark/

Not so nice when the shoe’s on the other foot now is it, Apple? Now if only other law courts had such common sense.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Tinderbox and manual intervention (November 03, 2012, 20:39 UTC)

So after my descriptive post you might be wondering what’s so complex or time-consuming about running a tinderbox. That’s because I haven’t spoken about the actual manual labor that goes into handling the tinderbox.

The major work is of course scouring the logs to make sure that I file only valid bugs (and often enough even that’s not enough, as things hide beneath the surface), but there are quite a number of tasks that are not related to the bug filing, at least not directly.

First of all, there is the matter of making sure that the packages are available for installation. This used to be more complex, but luckily thanks to REQUIRED_USE and USE deps, this task is slightly easier than before. The tinderbox.py script (that generates the list of visible packages that need to be tested) also generates a list of use conflicts, dependencies etc. This list I have to look at manually, and then update the package.use file so that they are satisfied. If their dependencies or REQUIRED_USE are not satisfied, the package is not visible, which means it won’t be tested.

This sounds extremely easy, but there are quite a few situations, which I discussed previously, where there is no real way to satisfy the requirements for all the packages in the tree. In particular there are situations where you can’t enable the same USE flag all over the tree — for instance if you do enable icu for libxml2, you can’t enable it for qt-webkit (well, you can, but then you have to disable gstreamer, which is required by other packages). Handling all the conflicting requirements takes a bit of trial and error.
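
To give an idea, a purely illustrative fragment of the tinderbox’s package.use for the icu example above would read something like:

# enable icu where it conflicts with nothing...
dev-libs/libxml2 icu
# ...but keep it off qt-webkit, since icu there would force gstreamer off,
# and gstreamer is required by other packages
x11-libs/qt-webkit -icu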

Then there is a much worse problem, and that is tests that can get stuck, so that things like this happen:

localhost ~ # qlop -c
 * dev-python/mpi4py-1.3
     started: Sat Nov  3 12:29:39 2012
     elapsed: 9 hours, 11 minutes, 12 seconds

And I’ve got to keep growing the list of packages whose tests are unreliable — I wonder if the maintainers ever try running their tests, sometimes.

This task used to be easier because the tinderbox supports sending out tweets or dents through bti, so that it would tell me what it was doing — unfortunately identi.ca kept marking the tinderbox’s account as spam, and while they did unlock it three times, it meant I had to ask support to do so every other week. I grew tired of that and stopped caring about it. Unfortunately that means I have to connect to the instance(s) from time to time to make sure they are still crunching.

Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
komplete audio 6 on gentoo: first impressions (November 03, 2012, 05:36 UTC)

i received my native instruments komplete audio 6 in the mail today. i wasted no time plugging it in. i have a few first impressions:

build quality

this thing is heavy. not unduly so — just two or three times heavier than the audiofire 2 it replaces. it’s solidly built, so i imagine it can take a fair amount of beating on-the-go. knobs are sturdy, stiff rather than loose, without much wiggle. the big top volume knob is a little looser, with more wiggle, but it’s also made out of metal, rather than the tough plastic of the front trim knobs. the input ports grip 1/4″ jacks pretty tightly, so there’s no worry that cables will fall out.

i haven’t tested the main outputs yet, but the headphone output works correctly, offering more volume than my ears can take, and it seems to be very quiet — i couldn’t hear any background hiss even when turning up the gain.

JACK support

i have mixed first impressions here. according to ALSA upstream, and one of my buddies who’s done some kernel driver code for NI interfaces, it should work perfectly, as it’s class-compliant to the USB2.0 spec (no, really, there is a spec for 2.0, and the KA6 complies with it, separating it from the vast majority of interfaces that only comply with the common 1.1 spec).

i set up some slightly more aggressive settings on this USB interface than for my FireWire audiofire 2, which seems to have been discontinued in favor of echo’s new USB interface (though the audiofire 4 is still available, and is mostly the same). i went with 64 frames/period, 48000 sample rate, 3 periods/buffer . . . which got me 4ms latency. that’s just under half the 8ms+ latency i had with the firewire-based af2.
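
for reference, the nominal latency is just frames × periods ÷ rate: 64 × 3 ÷ 48000 = 4ms, the number reported above. started by hand rather than through qjackctl, those settings would look roughly like this (hw:KA6 is a guess at the device name; check aplay -l for yours):

jackd -d alsa -d hw:KA6 -r 48000 -p 64 -n 3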

at these settings, qjackctl reported about 18-20% CPU usage, idling around 0.39-5.0% with no activity. i only have a 1.5ghz core2duo processor from 2007, so any time the CPU clocks down to 1.0ghz, i expect the utilization numbers to jump up. switching from the ondemand to performance governor helps a bit, raising the processor speed all the way up.

playing a raw .wav file through mplayer’s JACK output worked just fine. next, i started ardour 3, and that’s where the troubles began. ardour has shown a distressing tendency to crash jackd and/or the interface, sometimes without any explanation in the logs. one second the ardour window is there, the next it’s gone.

i tried renoise next, and loaded up an old tracker project, from my creative one-a-day: day 316, beta decay. this piece isn’t too demanding: it’s sample-based, with a few audio channels, a send, and a few FX plugins on each track.

playing this song resulted in 20-32% CPU utilization, though at least renoise crashed less often than ardour. renoise feels noticeably more stable than the snapshot of ardour3 i built on july 9th.

i wasn’t very thrilled with how much work my machine was doing, since the CPU load was noticeably better with the af2. though this is to be expected: with firewire, the CPU doesn’t have to do as much processing of the audio streams, since the work is offloaded onto the firewire bus. with usb, all traffic goes through the CPU, so that takes away more valuable DSP resources.

still, time to up the ante. i raised the sample rate to 96000, restarted JACK, and reloaded the renoise project. now i had 2ms latency…much lower than i ever ran with the af2. this low latency took more cycles to run, though: CPU utilization was between 20% and 36%, usually around 30-33%.

i haven’t yet tested the device on my main workstation, since that desktop computer is still dead. i’m planning to rebuild it, moving from an old AMD dualcore CPU to a recent Intel Ivy Bridge chip. that should free up enough resources to create complex projects while simultaneously playing back and recording high-quality audio.

first thoughts

i’m a bit concerned that for a $200 best-in-class USB2.0 class-compliant device, it’s not working as perfectly as i’d hoped. all 6/6 inputs and outputs present themselves correctly in the JACK window, but the KA6 doesn’t show up as a valid ALSA mixer device if i wanted to just listen to music through it, without running JACK.

i’m also concerned that the first few times i plug it in and start it, it’s mostly rock-solid, with no xruns (even at 4ms) appearing unless i run certain (buggy) applications. however, it’s xrun/crash-prone at a sample rate of 96000, forcing me to step down to 48000. i normally work at that latter rate anyway, but still…i should be able to get the higher quality rates. perhaps a few more reboots might fix this.

it could be that one of the three USB ports on this laptop shares a bus with another high-traffic device, which means there could be bandwidth and/or IRQ conflicts. i’m also running kernel 3.5.3 (ck-sources), with alsa-lib 1.0.25, and there might have been driver fixes in the 3.6 kernel and alsa-lib 1.0.26. i’m also using JACK1, version 0.121.3, rather than the newer JACK2. after some upgrades, i’ll do some more testing.
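
checking for that kind of sharing is easy enough (usbutils provides lsusb):

lsusb -t                        # tree view: which devices hang off which bus
grep -i usb /proc/interrupts    # which IRQs the usb host controllers sit on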

early verdict: the KA6 should work perfectly on linux, but higher sample rates and lowest possible latency are still out of reach. sound quality is good, build quality is great. ALSA backend support is weak to nonexistent; i may have to do considerable triage and hacking to get it to work as a regular audio playback device.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
How to run a tinderbox with my scripts (November 03, 2012, 03:57 UTC)

Hello there everybody, today’s episode is dedicated to setting up a tinderbox instance like mine, which builds and installs every visible package in the tree, runs its tests, and so on.

So the first step is to have a system on which to run the tinderbox. A virtual system is much preferred, since the tinderbox can easily install very insecure code, although nothing prevents you from running it straight on the metal. My choice for this, after Tiziano pointed me in that direction, was LXC, as a chroot on steroids (the original implementation used chroot and was much less reliable).

Now there are a number of degrees at which you could run the tinderbox; most of the basics are designed to work with almost every package in the system broken — only a few packages are needed for the system to work. Here’s my world file on the two tinderboxes:

app-misc/screen
app-portage/gentoolkit
app-portage/portage-utils
dev-java/java-dep-check
dev-lang/python:2.7
net-analyzer/netcat6
net-misc/curl

But let’s do stuff in order. What do I do when I run the tinderbox? I connect via SSH over IPv6 – the tinderbox has very limited Internet connectivity, as everything is proxied by a Squid instance, like I described in this two-year-old post – directly as root unfortunately (but only with key auth). Then I either start or reconnect to a screen instance, which is where the tinderbox is running (or will be running).

The tinderbox’s scripts are on git and are written partially by me and partially by Zac (following my harassment for the most part, and he’s done a terrific job). The key script is tinderbox-continuous.sh, which keeps executing the tinderbox on 200 packages at a time, either ad infinitum or working through a file given as a parameter (this way there is an emerge --sync from time to time, so that the tree doesn’t get stale). There is also a fetch-reverse-deps.sh which is used to, as the name says, fetch the reverse dependencies of a given package; it pairs with the continuous script above when I do a targeted run.

On the configuration side, /etc/portage/make.conf has to refer to /root/flameeyes-tinderbox/tinderbox.make.conf, which comes from the repository and sets up features, verbosity levels, and the fetch/resume commands to use curl. These are also set up so that if there is a TINDERBOX_PROXY environment variable set, they’ll go through it. Setting TINDERBOX_PROXY and a couple more variables is done in /etc/portage/make.tinderbox.private.conf; you can use it for setting GENTOO_MIRRORS to something that is easily and quickly reachable, as there’s a lot to download!
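
Just to give an idea of the shape of that file (the values here are made up):

# /etc/portage/make.tinderbox.private.conf
TINDERBOX_PROXY="http://192.168.0.1:3128"
GENTOO_MIRRORS="http://192.168.0.1/gentoo/"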

But what does this get us? Just a bunch of files in /var/log/portage/build. How do I analyze them? Originally I did this by using grep within Emacs and looking at them file by file. Since I was opening the bugs with Firefox running on the same system, I could very easily attach the logs. This is no longer possible, so that’s why I wrote a log collector, which is also available, and which is designed in two components: a script that receives (over IPv6 only, and within the virtual network of the host) the log being sent with netcat and tar, removes colour escape sequences, and writes it down as an HTML file (in a way that Chrome does not explode on) on Amazon’s S3, also counting how many of the observed warnings are found, and whether the build, or tests, failed — this data is saved to SimpleDB.

Then there is a simple sinatra-based interface that can be run on any computer — I run it locally on my laptop — which fetches the data from SimpleDB and displays it in a table with links to the build logs. This also has a link to the pre-filled bug template (it uses a local file where emerge --info is saved as comment #0).

Okay so this is the general gist of it, if I have some more time this weekend I’ll draw some cute diagram for it, and you can all tell me that it’s overcomplicated and that if I did it in $whatever it would have been much easier, but at the same time you’ll not be providing any replacement, or if you will start working on it, you’ll spend months designing the schema of the database, with a target of next year, which will not be met. I’ve been there.

November 02, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Crossfit Praha: new home gym for November (November 02, 2012, 21:19 UTC)

(I’d like to first give a global shout out to my first Crossfit home, The Athlete Lab)

Prague - Oct 2012-113

Since I’m in Prague for a month, I became a member of Crossfit Praha instead of just being a drop-in client. The gym is quite small, but centrally located in Prague. The lifting days are separate from the normal days (probably unless you are a trusted regular). The premise is, you show up during a block of time, warm up on your own, proceed with the WOD, then cool down on your own, which is pretty standard across gyms from what I can tell, the exception being that everyone starts the WOD at their own time (not at structured times). Now I’ve put my money where my mouth is and have to keep a good diet, drink not so much beer, etc., to be able to function the next day(s) after a WOD. “Tomorrow will not be any easier”

Prague - Oct 2012-110
(Myself and Zdeněk)

Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Lenovo laptops now feature what? (November 02, 2012, 15:32 UTC)

Each month, the online discount retailer Working Advantage has a sweepstakes for some hot item. For November 2012, it is a Lenovo IdeaPad Z580. I received the following email about it yesterday:

Working Advantage Lenovo IdeaPad Z580 November Giveaway features top sirloin steaks

Last time I checked, the IdeaPad Z580 had some neat features, but definitely did not come with top sirloin steaks! :razz:

Cheers,
Zach

November 01, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Slock 1.1 background colour (November 01, 2012, 13:43 UTC)

If you use the slock application, like I do, you may have noticed a subtle change with the latest release (which is version 1.1). That change is that the background colour is now teal-like when you start typing your password in order to disable slock, and get back to using your system. This change came from a dual-colour patch that was added to version 1.1.

I personally don’t like the change, and would rather have my screen simply stay black until the correct password is entered. Is it a huge deal? No, of course not. However, I think of it as just one additional piece of security via obscurity. In any case, I wanted it back to the way that it was pre-1.1. There are a couple ways to accomplish this goal. The first way is to build the package from source. If your distribution doesn’t come with a packaged version of slock, you can do this easily by downloading the slock-1.1 tarball, unpacking it, and modifying config.mk accordingly. The config.mk file looks like this:


# slock version
VERSION = 1.0-tip

# Customize below to fit your system

# paths
PREFIX = /usr/local

X11INC = /usr/X11R6/include
X11LIB = /usr/X11R6/lib

# includes and libs
INCS = -I. -I/usr/include -I${X11INC}
LIBS = -L/usr/lib -lc -lcrypt -L${X11LIB} -lX11 -lXext

# flags
CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"
CFLAGS = -std=c99 -pedantic -Wall -Os ${INCS} ${CPPFLAGS}
LDFLAGS = -s ${LIBS}

# On *BSD remove -DHAVE_SHADOW_H from CPPFLAGS and add -DHAVE_BSD_AUTH
# On OpenBSD and Darwin remove -lcrypt from LIBS

# compiler and linker
CC = cc

# Install mode. On BSD systems MODE=2755 and GROUP=auth
# On others MODE=4755 and GROUP=root
#MODE=2755
#GROUP=auth

With the line applicable to background colour being:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"

In order to change it back to the pre-1.1 background colour scheme, simply modify -DCOLOR2 to be the same as -DCOLOR1:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"black\"

but note that you do not need the extra set of escaping backslashes when you are using the colour name instead of the hex representation.
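
For reference, the whole from-source route boils down to a handful of commands (I believe the tarball lives at the usual suckless location, but double-check the URL):

wget http://dl.suckless.org/tools/slock-1.1.tar.gz
tar xzf slock-1.1.tar.gz
cd slock-1.1
# edit config.mk as described above, then:
make
make install    # as root; installs under /usr/local per the PREFIX above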

If you use Gentoo, though, and you’re already building each package from source, how can you make this change yet still install the package through the system package manager (Portage)? Well, you could try to edit the file, tar it up, and place the modified tarball in the /usr/portage/distfiles/ directory. However, you will quickly find that issuing another emerge slock will result in that file getting overwritten, and you’re back to where you started. Instead, the package maintainer (Jeroen Roovers) was kind enough to add the ‘savedconfig’ USE flag to slock on 29 October 2012. In order to take advantage of this great USE flag, you first need to have Portage build slock with the USE flag enabled by putting it in /etc/portage/package.use:

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use

Then, you are free to edit the saved config.mk which is located at /etc/portage/savedconfig/x11-misc/slock-1.1. After recompiling with the ‘savedconfig’ USE flag, and the modifications of your choice, slock should now exhibit the behaviour that you anticipated.
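
Putting it all together, the whole Portage workflow looks like this (the saved config.mk appears after the first build with the flag enabled):

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use
emerge x11-misc/slock    # first build saves config.mk under /etc/portage/savedconfig/
# edit /etc/portage/savedconfig/x11-misc/slock-1.1, changing -DCOLOR2 as above
emerge x11-misc/slock    # rebuild picks up your modified config.mk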

Hope that helps!

Cheers,
Zach

October 31, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)

I guess it’s time for a new post on the status of Gentoo Linux right now. First of all, the tinderbox is munching as I write. Things are going mostly smoothly, but there are still hiccups due to some developers not accepting its bug reports because of the way logs are linked (as in, not attached).

Like last time that I wrote about it, four months ago, this is targeting GCC 4.7, GLIBC 2.16 (which is coming out of masking next week!) and GnuTLS 3. Unfortunately, there are a few (biggish) problems with this situation, mostly related to the Boost problem I noted back in July.

What happens is this:

  • you can’t use any version of boost older than 1.48 with GCC 4.7 or later;
  • you can’t use any version of boost older than 1.50 with GLIBC 2.16;
  • many packages don’t build properly with boost 1.50 and later;
  • a handful of packages require boost 1.46;
  • boost 1.50-r2 and later (in Gentoo) no longer support eselect boost making most of the packages using boost not build at all.

This kind of screwup is a major setback, especially since Mike (understandably) won’t wait any more to unmask GLIBC 2.16 (he waited a month, the Boost maintainers had all the time to fix their act, which they didn’t — it’s now time somebody with common sense takes over). So the plan right now is for me and Tomáš to pick up the can of worms, and un-slot Boost, quite soon. This is going to solve enough problems that we’ll all be very happy about it, as most of the automated checks for Boost will then work out of the box. It’s also going to reduce the disk space being used by your install, although it might require you to rebuild some C++ packages — I’m sorry about that.

For what concerns GnuTLS, version 3.1.3 is going to hit unstable users at the same time as glibc-2.16, and hopefully the same will be true for stable when that happens. Unfortunately there are still a number of packages not fixed to work with GnuTLS 3, so if you see a package you use (with GnuTLS) in the tracker, it’s time to jump on fixing it!

Speaking of GnuTLS, we’ve also had a smallish screwup this morning when libtasn1 version 3 also hit the tree unmasked — it wasn’t supposed to happen, and it’s now masked, as only GnuTLS 3 builds fine with it. Since upstream really doesn’t care about GnuTLS 2 at this point, I’m not interested in trying to get that to work nicely, and since I don’t see any urgency in pushing libtasn1 v3 as is, I’ll keep it masked until GNOME 3.6 (as gnome-keyring also does not build with that version, yet).

Markos has correctly noted that the QA team – i.e., me – is not maintaining the DevManual anymore. It is now a separate project, under QA (but I’d rather say it’s shared between QA and Recruiters), and the GIT Repository is now writable by any developer. Of course if you play around with it without knowing what you’re doing, on master, you’ll be terminated.

There’s also the need to convert the DevManual to something that makes sense. Right now it’s a bunch of files all called text.xml which makes editing a nightmare. I did start working on that two years ago but it’s tedious work and I don’t want to do it on my free time. I’d rather not have to do it while being paid for it really. If somebody feels like they can handle the conversion, I’d actually consider paying somebody to do that job. How much? I’d say around $50. Desirable format is something that doesn’t make a person feel like taking their eyes out when trying to edit it with Emacs (and vim, if you feel generous): my branch used DocBook 5, which I rather fancy, as I’ve used it for Autotools Mythbuster but RST or Sphinx would probably be okay as well, as long as no formatting is lost along the way. Update: Ben points out he already volunteered to convert it to RST, I’ll wait for that before saying anything more.

Also, we’re looking for a new maintainer for ICU (and I’m pressing Davide to take the spot), as things like the bump to 50 should have been handled more carefully. Especially now that it appears to be breaking basically a quarter of its dependencies when using GCC 4.7 — both the API and ABI of the library change entirely depending on whether you’re using GCC 4.6 or 4.7, as it’ll leverage C++11 support in the latter. I’m afraid this is just going to be the first of a series of libraries making this kind of change, and we’re all going to suffer through it.

I guess this is all for now.

October 30, 2012
Liam McLoughlin a.k.a. hexxeh (homepage, stats, bugs)
512MB Pi + Adafruit Budget Pack = win (October 30, 2012, 22:00 UTC)

The kind folks over at Element 14 emailed me last week asking if I’d like to review the new Raspberry Pi 512MB edition and the Adafruit Budget Pack. Whilst I already have a rather large collection of Pi, I thought it’d be fun to write a review since it’s not something I’ve really done before.

So, yesterday the kit arrived and I got the chance today to unpack it and have a play around. The kit doesn’t come with a Raspberry Pi; you have to buy that separately. Here’s a breakdown of what the kit includes:

  • Pi box (a clear acrylic case for the Pi)
  • Cobbler and GPIO ribbon cable (breakout board to split the GPIO cable out onto a breadboard)
  • Half-size breadboard with a bundle of breadboarding wires
  • 4GB microSD card with SD adaptor
  • 5V/1A USB power supply and cable

Firstly, the Pi box. The clear plastic looks pretty awesome once it’s assembled, and the laser engraved labels are an excellent touch. However I tend to swap my Pis in and out of cases a lot, and assembling the case is kinda fiddly, so I think I’ll be keeping whichever Pi goes in this case in there.

The USB power supply, cable and SD card: there isn’t really a whole lot to say about these, you need them to use your Pi. The power supply is supposedly specced to the hilt and overrated at 5.25V to account for the voltage drop caused by the cable. However, given that it’s got a US two pin plug and I live in the UK (and don’t have the appropriate adaptor handy) I’ve not been able to test this out. That said, if Adafruit have said it’s the case, I’m totally inclined to believe that it’s the bees knees like they say it is. The SD card is a class 4 Dane-Elec, which will work just fine, but probably isn’t the fastest (note: I haven’t benchmarked this, I’m going off my general experience using various cards in the Pi). That said, this is the budget pack, so if you want a fast, expensive card, you’re best buying that separately.

My favourite part of this whole kit is the Cobbler and the GPIO ribbon cable. Very often when I’m developing with the Pi I need to use a serial console for debugging, and plugging the rather tiny cables that come with my USB serial adaptor into a Pi each time is somewhat of a pain. I must’ve done it a few hundred times now and I still don’t remember which cable goes to which pin. With the Cobbler I can just leave the serial adaptor connected to the breadboard and use the ribbon cable to connect the Pi of my choice: very nice!

Lastly, the 512MB Raspberry Pi itself. Personally, I think this is huge. 512MB of RAM on an ARM board with a fairly bitchin’ GPU for $35? Never before has “shut up and take my money” been so appropriate. As the foundation have said, hardware accelerated X is being worked on, which combined with a 512MB Pi should make for an impressively capable machine for the money in my opinion.

The hardware alone is useless without cool software though; that’s the most amazing part. In the past twelve months the Raspberry Pi has rocketed into the mainstream and amassed a huge community of fans, many of whom are developing and showing off new and cool things for the Pi. If you’ve made something cool, I’d love to see it; tweet me a link and if I think it’s awesome I’ll retweet it and share it on.

Want to find more cool projects? Check out the Raspberry Pi and Element 14 forums, which are both very active and have much of this stuff being shared about.

 

Greg KH a.k.a. gregkh (homepage, stats, bugs)
Help Wanted (October 30, 2012, 19:03 UTC)

I'm looking for someone to help me out with the stable Linux kernel release process. Right now I'm drowning in trees and patches, and could use someone to help me sanity-check the releases I'm doing.

Specifically, I'm looking for someone to help with:

  • test boot the -rc stable kernels to make sure I didn't do anything foolish.
  • dig through the Linux kernel distro trees and send me the git commit ids, or the backported patches, of things they are shipping that are not in the stable and longterm kernel releases.
  • do code review of the patches going into the stable releases.

If you can help out with this, I'd really appreciate it.

Note, this is not a long-term position, only 6 months or so, I figure you'll be tired of it by then and want to move on to something else, which is fine.

In return, you get:

  • your name in the stable releases as someone who has signed-off-by on patches going into it.
  • better knowledge of more kernel subsystems than you ever had in the past, and probably more than you really want.
  • free beverages of your choice at any Linux conference you attend that I am at (given my travel schedule, seems to be just about all of them.)

If anyone is interested in this, here are the 5 steps you need to do to "apply" for the position:

  • email me with the subject line starting with "[Stable tree help]"
  • email me "proof" you are running the latest stable -rc kernel at the moment.
  • send a link to some kernel patches you have done that were accepted into Linus's tree.
  • send a link to any Linux distro kernel tree where they keep their patches.
  • say why you want to do this type of thing, and what amount of time you can spend on it per week.

I'll close the application process in a week, on November 7, 2012; after that I'll contact everyone who applied and follow up with some questions through email. I'll also post something here to say what the response was like.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Munin, sensors and IPMI (October 30, 2012, 17:47 UTC)

In my previous post about Munin I said that I was still working on making sure that the async support would reach Gentoo in a way that actually worked. Now with version 2.0.7-r5 this is vastly possible, and it’s documented on the Wiki for you all to use.

Unfortunately, while testing it, I found out that one of the boxes I’m monitoring, the office’s firewall, was going crazy if I used the async spooled node, reporting fan speeds way too low (87 RPMs) or way too high (300K), and with similar effects on the temperatures as well. This also seems to have caused the fans to go out of control and run constantly at their 4KRPM instead of their usual 2KRPM. The kernel log showed that there was something going wrong with the i2c access, which is what the sensors program uses.

I started looking into the sensors_ plugin that comes with Munin, which I knew already a bit, as I fixed it to match some of my systems before… and the problem is that for each box I was monitoring, it would have to execute sensors six times: twice for each graph (fan speed, temperature, voltages), once for config and once for fetching the data. And since there is no way to tell it to fetch just some of the data instead of all of it, it meant many transactions had to go over the i2c bus, all at the same time (when using munin async, the plugins are fetched in parallel). Understanding that the situation is next to unsolvable with that original code, and having one day “half off” at work, I decided to write a new plugin.

This time, instead of using the sensors program, I decided to just access /sys directly. This is quite a bit faster and allows me to pinpoint what data to fetch. In particular, during the config step there is no reason to fetch the actual value, which saves many i2c transactions just there. While I was at it, I also made it a multigraph plugin, instead of the old wildcard one, so that you only need to call it once, and it’ll prepare, serially, all the available graphs: in addition to those that were supported before, which included power – as it’s exposed by the CPUs on Excelsior – I added a few that I haven’t been able to try but are documented by the hwmon sysfs interface, namely current and humidity.
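
To give an idea of the approach (this is not the plugin itself, just a minimal shell sketch of the same hwmon sysfs interface; on some drivers the attributes sit one level down, under hwmon*/device/):

for hw in /sys/class/hwmon/hwmon*; do
    name=$(cat "$hw/name")
    for t in "$hw"/temp*_input; do
        # temperatures are exposed in millidegrees Celsius
        [ -e "$t" ] && echo "$name ${t##*/}: $(( $(cat "$t") / 1000 )) C"
    done
    for f in "$hw"/fan*_input; do
        # fan speeds are exposed directly in RPM
        [ -e "$f" ] && echo "$name ${f##*/}: $(cat "$f") RPM"
    done
done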

The new plugin is available in the contrib repository – which I haven’t found a decent way to package yet – as sensors/hwmon and is still written in Perl. It’s definitely faster, has fewer dependencies, and is definitely more reliable, at least on my firewall. Unfortunately, there is one feature that is missing: sensors would sometimes report an explicit label for temperature data… but that’s entirely handled in userland. Since we’re reading the data straight from the kernel, most of those labels are lost. For drivers that do expose those labels, such as coretemp, they are used, though.

We also lose the ability to ignore the values from the get-go, like I described before, but you can’t always win. You’ll have to ignore the graph data from the master instead. Otherwise you might want to find a way to tell the kernel not to report that data. The same is probably true for the names, although unfortunately…


[temp*_label] Should only be created if the driver has hints about what this temperature channel is being used for, and user-space doesn’t. In all other cases, the label is provided by user-space.

But I wouldn’t be surprised if it was possible to change that a tiny bit. Also, while it does forfeit some of the labeling that the sensors program does, I was able to make it nicer when anonymous data is present — it wasn’t so rare to have more than one temp1 value, as that was the first temperature channel for each of the (multiple) controllers, such as the Super I/O, ACPI Thermal Zone, and video card. My plugin outputs the controller and the channel name, instead of just the channel name.

After I completed and tested my hwmon plugin, I moved on to re-rewrite the IPMI plugin. If you remember the saga, I first rewrote the original ipmi_ wildcard plugin as freeipmi_, including support for the same wildcards as ipmisensor_, so that instead of using OpenIPMI (and gawk), it would use FreeIPMI (and awk). The reason was that FreeIPMI can cache SDR information automatically, whereas OpenIPMI does have support for that, but you have to tackle it manually. The new plugin was also designed to work for virtual nodes, akin to the various SNMP plugins, so that I could monitor some of the servers we have in production, where I can’t install Munin, or can’t install FreeIPMI. I have replaced the original IPMI plugin, which I was never able to get working on any of my servers, with my version in Gentoo for Munin 2.0. I expect Munin 2.1 to come with the FreeIPMI-based plugin by default.

Unfortunately, like the sensors_ plugin, my plugin was calling the command six times per host — although this allows you to filter for the type of sensors you want to receive data for. And that became even worse when you have to monitor foreign virtual nodes. How do I solve that? I decided to rewrite it to be multigraph as well… but the shell script then became difficult to handle, which means that it’s now also written in Perl. The new freeipmi, non-wildcard, virtual-node-capable plugin is available in the same repository and directory as hwmon. My network switch thanks me for that.

Of course, unfortunately, the async node still does not support multiple hosts; that’s something for later on. In the meantime though, it does spare me lots of grief, and I’m happy I took the time to work on these two plugins.

Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
grsec and PulseAudio (and Gentoo) (October 30, 2012, 08:49 UTC)

This problem seems to bite some of our hardened users a couple of times a year, so I thought I’d blog about it. If you are using grsec and PulseAudio, you must not enable CONFIG_GRKERNSEC_SYSFS_RESTRICT in your kernel, else autodetection of your cards will fail.

PulseAudio’s module-udev-detect needs to access /sys to discover what cards are available on the system, and that kernel option disallows this for anyone but root.
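
If you’re not sure whether your kernel was built with it, a quick check (assuming CONFIG_IKCONFIG_PROC is enabled; otherwise grep the .config in your kernel sources):

zgrep GRKERNSEC_SYSFS_RESTRICT /proc/config.gz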

October 29, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Happy 15th, Noah! (October 29, 2012, 13:40 UTC)

Just wanted to wish you a very happy 15th birthday, Noah! I hope that you have an awesome day, filled with fun and excitement, and surrounded by your friends, family, and loved ones. Those are the best elements of a special day, but maybe, just maybe, you’ll get some cool stuff too! :cool: I also can’t believe that it’s just one more year until you’ll have your license; bet you can’t wait!

Anyway, thinking about you, and hope that everything in your life is going superbly well.

With love,
Zach

Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
PulseConf Schedule (October 29, 2012, 12:45 UTC)

David has now published a tentative schedule for the PulseAudio Mini-conference (I’m just going to call it PulseConf — so much easier on the tongue).

For the lazy, these are some of the topics we’ll be covering:

  • Vision and mission — where we are and where we want to be
  • Improving our patch review process
  • Routing infrastructure
  • Improving low latency behaviour
  • Revisiting system- and user-modes
  • Devices with dynamic capabilities
  • Improving surround sound behaviour
  • Separating configuration for hardware adaptation
  • Better drain/underrun reporting behaviour

Phew — and there are more topics that we probably will not have time to deal with!

For those of you who cannot attend, the Linaro Connect folks (who are graciously hosting us) are planning on running Google+ Hangouts for their sessions. Hopefully we should be able to do the same for our proceedings. Watch this space for details!

p.s.: A big thank you to my employer Collabora for sponsoring my travel to the conference.

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Unexpected turn of events in Prague (October 29, 2012, 11:57 UTC)

This adventure of mine is really turning into an adventure..

I’m staying in Prague for another month. I’m working at a hostel as a bartender, getting my own private room and one/two meals per day. I have two consecutive days off per week and I plan on going on overnight trips to other cities in the Czech Republic. I’ve basically invalidated the rest of my planning for the next month or two, but I’ll figure that out later..

Welcome to my office…
Camera Roll-32

October 28, 2012
Liam McLoughlin a.k.a. hexxeh (homepage, stats, bugs)
Android? Meet Chromium OS (October 28, 2012, 03:42 UTC)

It’s been too long since I’ve cracked out the Jolt and spent the wee hours hacking away on something. So tonight, I picked up a device from my collection and did the inevitable:

Nexus 7 running Chromium OS

More details soon to a tech blog near you. Image release date? Whenever I get around to neatening this up for widespread consumption. Mad props to the Queen for that extra hour tonight, really handy as I’m sure you’ll all agree.

October 27, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Prague, Czech Republic (October 27, 2012, 10:26 UTC)

I’ve been in Prague since Oct 17, 10 days now. I really like the city and hope to explore more of the country soon, besides the capital city. The city’s architecture is nice because it was virtually untouched during WW2. The culture is somewhat interesting because it was communist until 1989. Now the city is preserving what was left to decay during that era.

Prague - Oct 2012-33

The food is good, the beer is good, and the city is cheap to live in. As it’s a continental country, the weather is marginal, but that just reminds me of home anyway.

Prague pics

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
My Time in the USA: About Tipping (October 27, 2012, 06:22 UTC)

I’m afraid I don’t have a suitable photo for this post!

Coming from Italy to the US for the first time, it’s important to note a few very different customs. One of these is the already-noted bigger portions, which can cause you to overeat if you don’t remember to ask for a box when you’re stuffed. Another big one is tipping. While it’s not unheard of in Italy as well, tipping is not as regular, or regulated, as it is here. For what I know, tips (mance) are not declared at all, even though they are supposed to be, since they are only possible on cash transactions, as there are no lines on the receipts where you can add tips. Even though Wikipedia says that this requires a citation (maybe I should just take a picture of my next receipt when I go back to Italy).

The reason for this is that the service, i.e., the wage for the waiting staff, is usually included in the bill (explicitly, most of the time — on some rare occasions it’s included in the price of the food itself, but that was rare until a few hours ago). The same is true, as far as I know, in England for the most part, while in France it seems like they are happy to get some.

Anyway, I have to say that up to now, my experience with tipping staff has actually been quite positive. It’s not like it changes much of how I go around — even in Italy I tend to always go to the same places — but I guess it helps that I tip well enough that the waitresses remember me, and they almost never bring me the menu nowadays, unless I ask for it (they already know what I’m getting).

A quick check of my past receipts shows that my average tip is around 22%, the exception being the breakfasts I get in the morning, which are well over that (but simply because they would be less than eight dollars), at around 50%. This actually paid off, since I didn’t even have to ask about the local diner’s “Breakfast Club” — the waiter brought me the card, already stamped twice, after seeing me one morning after the other; and the one time I forgot my card at the office, he stamped it twice the next visit. Also, once I actually used the fidelity card, which got me free pancakes, they threw in the coffee with it (which is not supposed to be included).

I guess that for most of the waiting staff, having to survive on tips is far from easy. On the other hand, it feels like the waiting staff here cares more about the single customer’s experience (since their living depends on it), rather than the frenetic “serve as many customers as possible in the shortest time possible” that most Italian restaurants (as in, in Italy) focus on. Even places I like, where I’ve known the owner forever, don’t have the same friendly service.

Googling around, it seems like there is a lot of angst and grief around the concept of tipping – I was looking to see how much to tip a cab driver, since today I went to Santa Monica to see The Oatmeal – and on the one hand I can understand why; on the other hand, it’s also easy to use tips as a way to make sure that you’re offered a decent service. Like the cab driver who brought me back, who insisted that I get cash from an ATM, which meant I had to walk three blocks over and pay another $3 in fees; he got less than a 10% tip (if he had accepted the credit card, he would have gotten 20% — yes, that means waiting and paying the extra fee, but it’s still more than he got).

I guess one of the reasons why I’m not having much of a problem, as a customer, with tipping, is that Free Software works the same way. We’re for the most part not paid, or paid (as far as open source is concerned) a minimum wage, and all we do is compensated for the most part in tips… which are actually rarely enough to cover our side of the expenses — I could actually write quite a bit on the subject, as I recently found out how much it cost me, in power alone, to run Yamato and the tinderbox at my house.

So in all of this, I can actually say that it’s one of the things that I have really no problem whatsoever with, during my stay here.

October 26, 2012
Sean Amoss a.k.a. ackle (homepage, stats, bugs)
Happy Halloween, Gentoo! (October 26, 2012, 16:32 UTC)

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
moving services around (October 26, 2012, 15:53 UTC)

A few days ago the box that was hosting our low-risk webapps (barbet.gentoo.org) died. The services that were affected are get.gentoo.org, planet.gentoo.org, packages.gentoo.org, devmanual.gentoo.org, infra-status.gentoo.org and bouncer.gentoo.org. We quickly migrated the services to another box (brambling.gentoo.org). Brambling had issues in the past with its RAM, but we replaced the modules with new ones a couple of months ago. Additionally, this machine was used for testing only. Unfortunately the machine started to malfunction as soon as those services were transferred there, which means that it has more hardware issues than just the RAM. The resulting error messages stopped when we disabled packages.gentoo.org temporarily.

The truth is that the packages webapp is old, unmaintained, uses deprecated interfaces and is a real pain to debug. In this year’s GSoC we had a really nice replacement by Slava Bacherikov, written in Django. Additionally, we were recently given a Ganeti cluster hosted at OSUOSL. Thus we decided not to bring the old packages.gentoo.org instance back up, and instead to create 4 virtual machines in our Ganeti cluster and migrate the above webapps there, along with the new and shiny packages.gentoo.org website. Furthermore, we will also deploy another GSoC webapp, gentoostats, and start providing our developers with virtual machines. We will not give public IPv4 addresses to the dev VMs though, but probably use IPv6 only, so that developers can access them through woodpecker (the box where the developers have their shell accounts); this is still under discussion.

We have already started working on the above, and we expect to be fully finished next week, with the new webapps live and rocking. Special thanks to Christian and Alec, who took care of the migrations before and during the Gentoo Miniconf.

October 25, 2012
Markos Chandras a.k.a. hwoarang (homepage, stats, bugs)
Gentoo Recruitment: How do we perform? (October 25, 2012, 18:53 UTC)

A couple of days ago, Tomas and I gave a presentation at the Gentoo Miniconf. The subject of the presentation was to give an overview of the current recruitment process, how we are performing compared to previous years, and what other ways there are for users to help us improve our beloved distribution. In this blog post I am gonna get into some details regarding our recruitment process that I did not have the time to address during the presentation.

 

Recruitment Statistics

Recruitment Statistics from 2008 to 2012

Looking at the previous graph, two things are obvious. First of all, the number of people who want to become developers has decreased every year. Second, we have a significant number of people who did not manage to become developers. Let me express my personal thoughts on these two things.

For the first one, my opinion is that these numbers are directly related to Gentoo’s reputation and its “infiltration” among power users. It is not a secret that Gentoo is not as popular as it used to be. Some people think this is because of the quality of our packages, or because of the frequency with which we cause headaches for our users. Other people think that the “I want to compile every bit of my linux box” trend belongs to the past, and that people nowadays want to spend less time maintaining/updating their boxes and more time doing actual work. Either way, for the past few years we have been losing people, or to state it better, we are not “hiring” as many as we used to. Ignoring those who did not manage to become developers, we must admit that the absolute numbers are not in our favor. One may say that 16 developers for 2011-2012 is not bad at all, but we aim for the best, right? What bothers me the most is not the number of people we recruit, but that this number has been falling constantly for the last 5 years…

As for the second observation, we see that every year around 4-5 people give up and decide not to become developers after all. Why is that? The answer is obvious: our long, painful, exhausting recruitment process drives people away. From my experience, it takes about 2 months from the time your mentor opens your bug until a recruiter picks you up. This obviously kills someone’s motivation; he loses interest, gets busy with other stuff, and eventually disappears. We tried to improve this process by creating a webapp two years ago, but it did not work out well. So we are now back to square one. We really can’t afford to lose developers because of our recruitment process. It is embarrassing, to say the least.

Again, is there anything that can be done? Definitely yes. I’d say we need an improved or a brand-new web application that will focus on two things:

1) make the review process between mentor <-> recruit easier

2) make the final review process between recruit <-> recruiter an enjoyable learning process

Ideas are always welcome. Volunteers and practical solutions even more ;) In the meantime, I am considering using Google+ hangouts for the face-to-face interview sessions with the upcoming recruits. This should bring some fresh air to this process ;)

The entire presentation can be found here

Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Lorell 86200 mesh-back office chair (October 25, 2012, 17:23 UTC)

When I moved back to Saint Louis with my current job, and started working from home, it became readily apparent that I would need a decent office chair (sitting on one of my chairs from the less-than-great dining room table would certainly not be ideal). After looking at a bunch of different options, and realising that I’m not going to spend $1000+ USD on a Herman Miller Aeron, I found some great choices on Amazon.

I finally settled on the 86200 model Executive Mesh-back chair from Lorell:

Lorell 86200 Executive Mesh-back chair

For the price, the chair is actually incredibly well-built. Is it an Aeron? No, of course not, but it also doesn’t carry nearly the same price tag. That being said, it also doesn’t feel like a cheaply-made knock-off. The only part of the build quality that is somewhat questionable is the armrest construction. The armrests have plastic shields and are rubber-stamped on the top, but they do serve their purpose nicely. I would like a little more adjustability in them, but they are what they are. The only other qualm that I have is that the chair makes a bit of noise when moving around or leaning back. I believe that these sounds are related to the two adjustable nuts near the chair’s base, but I haven’t thoroughly tested that idea.

Assembly of the chair was incredibly easy and straightforward. I did find it a lot easier to do with the help of one other person (for holding the back of the chair in place whilst attaching it to the base, et cetera). If you don’t have help, though, it would be easy enough to do by one’s self. There was one piece of plastic that served no useful purpose, only an aesthetic one. I chose not to screw that piece into the backing of the chair (maybe that’s the engineer in me).

More important than the build quality and the ease of assembly, the seat is very comfortable, even for the 8-10 hours per day that I am in it. I don’t find that I struggle to stay comfortable during that time. Also, the lumbar support and backing are both stronger than in other chairs that I have used in the past. Given that I have had trouble with my middle back before, I’m pleasantly surprised that I don’t experience any discomfort in that area throughout the day.

So, if you are in the market for a good office chair, but don’t want to spend a huge amount of money, I recommend that you at least look into the Lorell 86200. It is nicely built, easy to assemble, and I find it to be one of the most comfortable chairs in the price range.

Cheers,
Zach

October 24, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Addie’s Thai House – Saint Louis, MO (October 24, 2012, 18:45 UTC)

Several weeks ago, a good friend and I went to Addie’s Thai House in Saint Louis, MO. Though it is a bit far from where we live (and when travelling that distance we would usually head north to Thai Kitchen), we decided to try a new place (and they had a special at the time). Upon entering the restaurant, I immediately noticed that it was a little more posh than most of the Thai restaurants in the area. The décor and seating arrangements both lent themselves to a higher-scale dining experience.

We started off with an appetiser, and seeing as we wanted to try one that was unique to their menu, we opted for the sweet potatoes. They were cut in a thick string style, deep-fried, and came out with coconut flakes and a sweet and sour dipping sauce. To me, the coconut taste was so subtle that one really had to try to notice it. I found that to be disappointing, because otherwise, they ended up just tasting a lot like regular sweet potato chips.

For dinner, I had the green curry with fresh tofu. It was pleasant, but lacked a lot of the heat that I’m used to with green curry. Also, I found that there were not many vegetables (or much tofu, for that matter) in the pot, but rather that it was primarily sauce. That being said, one of my favourite things to do with curry is to soak some rice in the remainder of the sauce. As such, I did enjoy that aspect of the dish.

My friend had the Praram Long Song, which is a common Siamese dish that generally comes with carrots, spinach, and your choice of protein with a peanut sauce atop it. The peanut sauce wasn’t all that great (especially compared to Thai Kitchen’s, which is some of the best I’ve ever eaten), and overall, the dish was rather bland.

Though Addie’s Thai House appeared to be a more upscale restaurant in terms of atmosphere, the quality of the food was fairly disappointing. Given that, I would much rather go to one of the restaurants in the area that focuses more on the preparation of the food, especially seeing as Addie’s was a bit more expensive as well. For those reasons, I can’t recommend Addie’s over other nearby Thai places.

Cheers,
Zach

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Munin, sensors and IPMI (October 24, 2012, 15:06 UTC)

In my previous post about Munin I said that I was still working on making sure that the async support would reach Gentoo in a way that actually worked. Now with version 2.0.7-r5 this is vastly possible, and it’s documented on the Wiki for you all to use.

Unfortunately, while testing it, I found out that one of the boxes I’m monitoring, the office’s firewall, was going crazy if I used the async spooled node, reporting fan speeds way too low (87 RPMs) or way too high (300K), and with similar effects on the temperatures as well. This also seems to have caused the fans to go out of control and run constantly at their 4KRPM instead of their usual 2KRPM. The kernel log showed that there was something going wrong with the i2c access, which is what the sensors program uses.

I started looking into the sensors_ plugin that comes with Munin, which I knew already a bit as I fixed it to match some of my systems before… and the problem is that for each box I was monitoring, it would have to execute sensors six times: twice for each graph (fan speed, temperature, voltages), one for config and one for fetching the data. And since there is no way to tell it to just fetch some of the data instead of all of it, it meant many transactions had to go over the i2c bus, all at the same time (when using munin async, the plugins are fetched in parallel). Understanding that the situation is next to unsolvable with that original code, and having one day “half off” at work, I decided to write a new plugin.

This time, instead of using the sensors program, I decided to just access /sys directly. This is much faster and lets you pinpoint exactly which data you need to fetch. In particular, during the config step there is no reason to fetch the actual values, which saves many i2c transactions right there. While I was at it, I also made it a multigraph plugin instead of the old wildcard one, so that you only need to call it once and it’ll prepare, serially, all the available graphs: in addition to those that were supported before (which included power, as it’s exposed by the CPUs on Excelsior), I added a few that I haven’t been able to try but that are documented by the hwmon sysfs interface, namely current and humidity.
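
To give an idea of how simple the direct approach is, here’s a minimal shell sketch of the same idea for temperatures (the real plugin is in Perl, and note that on some drivers the attributes live under hwmon*/device/ instead):

for hw in /sys/class/hwmon/hwmon*; do
  name=$(cat "$hw/name")
  for input in "$hw"/temp*_input; do
    [ -e "$input" ] || continue
    chan=$(basename "$input" _input)
    # temp*_input values are exposed in millidegrees Celsius
    echo "${name}_${chan}.value $(($(cat "$input") / 1000))"
  done
done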

The new plugin is available in the contrib repository (which I haven’t found a decent way to package yet) as sensors/hwmon, and is still written in Perl. It’s definitely faster, has fewer dependencies, and is definitely more reliable, at least on my firewall. Unfortunately, there is one missing feature: sensors would sometimes report an explicit label for temperature data… but that’s entirely handled in userland. Since we’re reading the data straight from the kernel, most of those labels are lost. For drivers that do expose those labels, such as coretemp, they are used, though.

We also lose the ability to ignore values from the get-go, as I described before, but you can’t always win; you’ll have to ignore the graph data from the master instead, or find a way to tell the kernel not to report that data. The same is probably true for the names, although unfortunately…


[temp*_label] Should only be created if the driver has hints about what this temperature channel is being used for, and user-space doesn’t. In all other cases, the label is provided by user-space.

But I wouldn’t be surprised if it was possible to change that a tiny bit. Also, while it does forfeit some of the labeling that the sensors program does, I was able to make it nicer when anonymous data is present: it wasn’t so rare to have more than one temp1 value, as that is the first temperature channel for each of the (multiple) controllers, such as the Super I/O, the ACPI Thermal Zone, and the video card. My plugin outputs the controller and the channel name, instead of just the channel name.
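
As an illustration of that naming, the fetch output for the temperature graph ends up looking something like this (the multigraph name and the values here are hypothetical):

multigraph hwmon_temp
coretemp_temp1.value 42.0
acpitz_temp1.value 38.5
radeon_temp1.value 51.0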

After I completed and tested my hwmon plugin, I moved on to rewrite the IPMI plugin once again. If you remember the saga, I first rewrote the original ipmi_ wildcard plugin as freeipmi_, including support for the same wildcards as ipmisensor_, so that instead of using OpenIPMI (and gawk) it would use FreeIPMI (and awk). The reason was that FreeIPMI can cache SDR information automatically, whereas OpenIPMI does support caching, but you have to handle it manually. The new plugin was also designed to work for virtual nodes, akin to the various SNMP plugins, so that I could monitor some of the servers we have in production where I can’t install Munin or FreeIPMI. On Gentoo, I have replaced the original IPMI plugin, which I was never able to get working on any of my servers, with my version for Munin 2.0. I expect Munin 2.1 to ship with the FreeIPMI-based plugin by default.

Unfortunately, like the sensors_ plugin, my plugin was calling the command six times per host (although it did allow you to filter for the type of sensors you wanted data for), and that became even worse when monitoring foreign virtual nodes. How did I solve that? I decided to rewrite it to be multigraph as well… but that proved difficult to handle in shell script, which means it’s now also written in Perl. The new freeipmi, non-wildcard, virtual-node-capable plugin is available in the same repository and directory as hwmon. My network switch thanks me for that.

Of course, the async node unfortunately still does not support multiple hosts; that’s something for later on. In the meantime, though, it does spare me lots of grief, and I’m happy I took the time to work on these two plugins.

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Gentoo Miniconf 2012 (October 24, 2012, 11:07 UTC)

The Gentoo Miniconf is over now, but it was a great success. There were 30+ developers in attendance, and I met quite a few users too. Thanks to Theo (tampakrap) and Michal (miska), among others, for organizing the event; thanks to openSUSE for sponsoring it and letting the Gentoo Linux guys hang out there. Thanks to the other sponsors too: Google, Aeroaccess, et al.

More pics at the Google+ event page.

It was excellent to meet all of you.

October 23, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Dordrecht, Kinderdijk, Delft (October 23, 2012, 22:52 UTC)

I went to Dordrecht, a very small town, for just a short time. We made a mistake on the waterbus that left us walking around the town for a few hours before we could get to our intended goal of Kinderdijk. Kinderdijk is home to the famous windmills that Holland is known for. The windmills are preserved and still working, though unused since the invention of the electric pump. We had to go see the windmills and get the picture…

Rotterdam 10/2012-189

Then I went to Delft for one night, where I just relaxed at the hostel and bummed around inside while it was raining. Delft is home of the famous hand-painted blue and white china, “delftware”. I did manage to stroll around the town briefly (though there is not much to see by foot). Delft has all the canals and architecture that Amsterdam has, but on a much smaller scale and with a different culture.

Dordrecht pics
Kinderdijk pics
Delft pics

Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Ronald Jenkees – Disorganized Fun review (October 23, 2012, 16:15 UTC)

Earlier this month, I reviewed the self-titled first album by Ronald Jenkees. Now that I’ve listened to his second full-length studio album, Disorganized Fun, several times, I can share my thoughts on it.

Ronald Jenkees - Disorganized Fun album cover

1. Disorganized Fun: 9 / 10
Coming in full-force with his mix of disjointed synth elements and smooth beats, this first track lives up nicely to its title. Jenkees played around a lot with pitch bending, and it worked really well with his choices of sounds. In the middle of the track, there’s a great bridge followed by a keyboard solo. Not only does the style live up to the title of the track, but it serves as a great start to his second full-length album.

2. Fifteen Fifty: 8 / 10
Unlike the previous song, this one is a bit more fluid. As such, however, it doesn’t have as much of a stylistic edge, and I found it to drag a bit in spots. There is a neat bass line that comes in around 1’15″ or so, but unfortunately, it doesn’t carry through the rest of the tune. Whilst not a bad song at all, it just doesn’t have the energy of its predecessor (even with the wild solo at the very end).

3. Guitar Sound: 10 / 10
It’s really impressive to me that Jenkees is able to emulate an 80s-style guitar sound as well as he does. The opening portion of this track sounds a lot like some of Eric Johnson’s work, especially in the vein of Cliffs of Dover. There are some great hard-hitting riffs in there that, when coupled with the up-tempo beats and breakdown/variety of the bridge, make for a fantastic track all around! Even at just over 7 minutes, the song doesn’t drag at all.

4. Synth One: 6 / 10
This song has a little stronger emphasis on the drums and beats than the previous tracks, and as such, they stand out more prominently than do some of the synth parts. There are a lot of sound effects in this track that have an old NES feel to them, which is a bit nostalgic. However, I don’t really find this to be one of the stronger songs on the album.

5. Throwing Fire: 8 / 10
I stand corrected about the throwback to old Nintendo games, as this song starts out in a way that almost makes me feel like I just put in the cartridge and fired up Blaster Master. Unlike the former track, however, Throwing Fire has a really upbeat and lively feel to it. There are a couple parts around the 2-minute mark, though, where it seems like Jenkees stumbles a bit on the notes, but they add a nice human element.

6. Minimal MC: 8 / 10
On this track, Jenkees plays a lot with throwing sounds back and forth between the left and right stereo channels, which makes for a very cool effect whilst listening on headphones. Significantly more subdued, and containing a lot fewer effects than some of the previous tracks, Minimal MC adheres to its name. After the halfway mark, there are some great dramatic elements and a little bit of an Asian influence.

7. Stay Crunchy: 10 / 10
Stay Crunchy was actually the song that prompted me to buy both of his albums after I originally heard it on Pandora. I think that it is an incredible mix of funky beats and rhythm, great synth work, and some techno/club elements. This is my clear favourite on the album (though that could be related to the Serial Position Preference Effect)!

8. Inverted Mean: 8 / 10
With the intro of this track, I expected someone like Jay-Z to come in with some dramatic near-spoken-word lyrics; it just presents a very theatrical sound right from the start. This song also has a stronger hip-hop feel than many of the others, but it is a nice way to increase the dynamic nature of the album. My favourite part of the piece comes in around the 3’15″ mark, with a great piano solo which fades out nicely.

9. Outer Space: 8 / 10
With a much stronger emphasis on synth sounds and chaotic melody than the previous track, Outer Space combines techno and dance beats with sci-fi effects. Again, tracks like these really highlight the versatility of his musical vision. Though it isn’t the most appealing track to my ears, it showcases technical aptitude within the genre.

10. Let’s Ride (rap): 6 / 10
As with the raps on his previous album, this one is fairly entertaining, regardless of whether or not the technical expertise is as high as on his non-rap tracks. The reference to passing the DQ is fairly funny as well.

11. It’s Gettin Rowdy (rap): 6 / 10
For some reason, this rap makes me think of Regulate by Warren G, but with a little bit of a silly element to it. Ahhh, the delusions of grandeur…

That makes for a total of 87 / 110 or ~79%. That comes out to a very strong 8 stars:

★★★★★★★★☆☆

Cheers,
Zach

Launching Gentoo VMs on okeanos.io (October 23, 2012, 13:50 UTC)

Long time, no post.

For about a year now, I’ve been working at GRNET on its (OpenStack API compliant) open source IaaS cloud platform Synnefo, which powers the ~okeanos service.

Since ~okeanos is mainly aimed towards the Greek academic community (and thus has restrictions on who can use the service), we set up a ‘playground’, bleeding-edge installation (okeanos.io) of Synnefo, where anyone can get a free trial account, experiment with the Web UI, and have fun scripting with the kamaki API client. So, you get to try the latest features of Synnefo, while we get valuable feedback. Sounds like a fair deal. :)

Unfortunately, since I’m the only one on our team who actually uses Gentoo Linux, up until recently Gentoo VMs were not available. So, a couple of days ago I decided it was about time to get a serious distro running on ~okeanos (the load on our servers had been ridiculously low, after all :P ). For future reference, and in case anyone wants to upload their own image to okeanos.io or ~okeanos, I’ll briefly describe the steps I followed.

1) Launch a Debian-base (who needs a GUI?) VM on okeanos.io

Everything from here on is done inside our Debian-base VM.

2) Use fallocate or dd seek= to create an (empty) file large enough to hold our image (5GB)

fallocate -l $((5 * 1024 * 1024 * 1024)) gentoo.img
(or, with dd: dd if=/dev/zero of=gentoo.img bs=1 count=0 seek=5G)

3) Losetup the image, partition and mount it

losetup -f gentoo.img
parted /dev/loop0 mklabel msdos
parted /dev/loop0 mkpart primary ext4 2048s 5GB
kpartx -a /dev/loop0
mkfs.ext4 /dev/mapper/loop0p1
losetup /dev/loop1 /dev/mapper/loop0p1 (trick needed for the grub2 installation later on)
mkdir -p /mnt/gentoo
mount -t ext4 -o noatime,nodiratime /dev/loop1 /mnt/gentoo

4) Chroot and install Gentoo in /mnt/gentoo. Just follow the handbook. At a minimum, you’ll need to extract the base system and portage, and set up some basic configs, like networking. It’s up to you how much you want to customize the image. For the Linux kernel, I just copied the Debian /boot/[vmlinuz|initrd|System.map] files and /lib/modules/ directly from the VM (and it worked! :) ).
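
For reference, the skeleton of that step looks roughly like this (tarball names are placeholders; the handbook remains the authoritative guide):

cd /mnt/gentoo
tar xjpf /path/to/stage3-*.tar.bz2
tar xjf /path/to/portage-latest.tar.bz2 -C usr
cp -L /etc/resolv.conf etc/
mount -t proc proc proc
mount --rbind /dev dev
mount --rbind /sys sys
chroot . /bin/bash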

5) Install sys-boot/grub-2.00 (I had some *minor* issues with grub-0.97 :P ).

6) Install grub2 in /dev/loop0 (this should help). Make sure your device.map inside the Gentoo chroot looks like this:

(hd0) /dev/loop0
(hd1) /dev/loop1

and make sure you have a sane grub.cfg (I’d suggest replacing all references to UUIDs in grub.cfg and /etc/fstab with /dev/vda[1]).
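
A sane menuentry could look something like this (file names assumed from the Debian kernel copied in step 4):

menuentry 'Gentoo' {
    insmod part_msdos
    insmod ext2
    set root=(hd0,msdos1)
    linux /boot/vmlinuz root=/dev/vda1
    initrd /boot/initrd.img
}
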
Now, outside the chroot, run:

grub-install --root-directory=/mnt/gentoo --grub-mkdevicemap=/mnt/gentoo/boot/grub/device.map /dev/loop0

Clean up everything (umount, losetup -d, kpartx -d etc), and we’re ready to upload the image with snf-image-creator.
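
Spelled out, the cleanup looks like this (order matters: release the inner loop device before tearing down the partition mappings):

umount /mnt/gentoo
losetup -d /dev/loop1
kpartx -d /dev/loop0
losetup -d /dev/loop0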

snf-image-creator takes a diskdump as input, launches a helper VM, cleans up the diskdump / image (cleanup of sensitive data etc), and optionally uploads and registers our image with ~okeanos.

For more information on how snf-image-creator and Synnefo image registry works, visit the relevant docs [1][2][3].

0) Since snf-image-creator will use qemu/kvm to spawn a helper VM, and we’re inside a VM, let’s make sure that nested virtualization (OSDI ’10 Best Paper award btw :) ) works.

First, we need to make sure that kvm_[amd|intel] is modprobe’d on the host machine / hypervisor with the nested=1 parameter, and that the vCPU that qemu/kvm creates thinks it has ‘virtual’ virtualization extensions (that’s actually our responsibility, and it’s enabled on the okeanos.io servers).

Inside our Debian VM, let’s verify that everything is ok.

egrep '(vmx|svm)' /proc/cpuinfo
modprobe -v kvm_intel (or kvm_amd; the kvm module gets pulled in automatically)
cat /sys/module/kvm_intel/parameters/nested (should report that nesting is enabled)

1) Clone snf-image-creator repo

git clone https://code.grnet.gr/git/snf-image-creator

2) Install snf-image-creator using setuptools (./setup.py install) and optionally virtualenv. You’ll need to install (pip install / aptitude install etc) setuptools, (python-)libguestfs and python-dialog manually. setuptools will take care of the rest of the deps.

3) Use snf-image-creator to prepare and upload / register the image:

snf-image-creator -u gentoo.diskdump -r "Gentoo Linux" -a [okeanos.io username] -t [okeanos.io user token] gentoo.img -o gentoo.img --force

If everything goes as planned, after snf-image-creator terminates, you should be able to see your newly uploaded image in https://pithos.okeanos.io, inside the Images container. You should also be able to choose your image to create a new VM (either via the Web UI, or using the kamaki client).

And, let’s install kamaki to spawn some Gentoo VMs:

git clone https://code.grnet.gr/git/kamaki

and install it using setuptools (just like snf-image-creator). Alternatively, you could use our Debian repo (you can find the GPG key here).

Modify .kamakirc to match your credentials:

[astakos]
enable = on
url = https://astakos.okeanos.io
[compute]
cyclades_extensions = on
enable = on
url = https://cyclades.okeanos.io/api/v1.1
[global]
colors = on
token = [token]
[image]
enable = on
url = https://cyclades.okeanos.io/plankton
[storage]
account = [username]
container = pithos
enable = on
pithos_extensions = on
url = https://pithos.okeanos.io/v1
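
A quick way to check that the credentials work (and to look up the id of the freshly registered image) is the same command we’ll use in a moment:

kamaki image list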

Now, let’s create our first Gentoo VM:

kamaki server create LarryTheCow 37 `kamaki image list | grep Gentoo | cut -d ' ' -f 1` --personality /root/.ssh/authorized_keys

That’s all for now. Hopefully, I’ll return soon with another, more detailed post on scripting with kamaki (vkoukis has a nice script using the kamaki Python lib to create a small MPI cluster on ~okeanos from scratch :) ).

Cheers!


October 22, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Ernesto’s Wine Bar, Saint Louis, MO (October 22, 2012, 16:21 UTC)

Several months back, there was a Groupon for a restaurant named Ernesto’s Wine Bar in Saint Louis, MO. This restaurant and bar is located in the Benton Park neighbourhood, which is just off of the 55 motorway near the Anheuser-Busch brewery.

Though their food menu isn’t very extensive (consisting primarily of appetisers, flatbreads, salads, and a couple of larger plates), the food was fairly tasty for the price. We started with the House Chips (which were actually crisps, not chips), and they were quite nice. They were cut from Russet potatoes, and were lightly coated in truffle oil and Parmigiano-Reggiano. As I’m highly allergic to cheese, I had to be careful, but avoiding it wasn’t all that big of a deal. For dinner, I had grilled chicken and vegetable linguine, which was nice. The sauce was a bit thick for my liking, but it was easy enough to simply use less of it. She had the fancied-up grilled cheese, which was apparently quite good (for obvious reasons, I couldn’t try it). For our wine offering, we went with a 2010 Pinot Grigio from Lagaria. Though overpriced for the vintage, it nicely complemented our entrées.

The best part, in my opinion, was neither the food nor the wine, though. Instead, the atmosphere is what made the evening fantastic. It was a slightly cool night, and we were sitting out on the back patio near the fireplace. The heat from the fire was just enough to take the chill out of the air, but not so hot as to be uncomfortable. The service was a bit slow, but that was to be expected on a Friday evening, and sitting out enjoying the light breeze made the time pass quickly.

Overall, Ernesto’s is a nice change of pace from the typical dinner, but the cost seems to be out of alignment with the quality of the food and drink. That being said, it isn’t so outrageously off-balanced as to be off-putting. I would like to go back another time to try some of the flatbreads and another bottle (but this time, of a rustic red).

Cheers,
Zach

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Rotterdam Thoughts (October 22, 2012, 15:51 UTC)

After spending a few days in Amsterdam, it was very refreshing to go to Rotterdam. Rotterdam, a 1h20m train ride from Amsterdam, was interesting to me because it is essentially a new town by European standards. There are many, many new buildings in Rotterdam, since it was bombed and essentially destroyed during WW2; however, being the largest port in Europe (formerly the largest in the world), it was rebuilt pretty fast. I stayed in Rotterdam for 4 days and 3 nights, and I could have stayed longer and still felt entertained. It was still an expensive city, but marginally less expensive than Amsterdam. There were many English speakers there, though somewhat fewer than in Amsterdam.

You can view my Rotterdam pictures online. Take note of the different buildings.