
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
December 06, 2012, 23:07 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

December 06, 2012
Nirbheek Chauhan a.k.a. nirbheek (homepage, stats, bugs)
Recording VoIP calls using pulseaudio and avconv (December 06, 2012, 15:58 UTC)

For ages, I've wanted an option in Skype or Empathy to record my video and voice calls [1]. Text is logged constantly because it doesn't cost much in the form of resources, but voice and video are harder.

In lieu of integrated support inside Empathy, and also because I mostly use Skype (for various reasons), the workaround I have is to do an X11 screen grab and encode it to a file. This is not hard at all. A cursory glance at the man page of avconv will tell you how to do it:

avconv -s:v [screen-size] -f x11grab -i "$DISPLAY" output_file.mkv

[screen-size] is in the form of 1366x768 (Width x Height), etc., and you can extend this to record audio by passing the -f pulse -i default flags to avconv [2]. But that's not quite right, is it? Those flags will only record your own voice! You want to record both your own voice and the voices of the people you're talking to. As far as I know, avconv cannot record from multiple audio sources, and hence we must use Pulseaudio to combine all the voices into a single audio source!

As a side note, I really love Pulseaudio for the very flexible way in which you can manipulate audio streams. I'm baffled by the prevailing sense of dislike that people have towards it! The level of script-level control you get with Pulseaudio is unparalleled compared to any other general-purpose audio server [3]. One would expect geeks to like such a tool—especially since all the old bugs with it are now fixed.

So, the aim is to take my voice coming in through the microphone, and the voices of everyone else coming out of my speakers, and mix them into one audio stream which can be passed to avconv, and encoded into the video file. In technical terms, the voice coming in from the microphone is exposed as an audio source, and the audio for the speakers is going to an audio sink. Pulseaudio allows applications to listen to the audio going into a sink through a monitor source. So in effect, every sink also has a source attached to it. This will be very useful in just a minute.

The work now boils down to combining two sources together into one single source for avconv. Now, apparently, there's a Pulseaudio module to combine sinks but there isn't any in-built module to combine sources. So we route both the sources to a module-null-sink, and then monitor it! That's it.


pactl load-module module-null-sink sink_name=combined
pactl load-module module-loopback sink=combined source=[voip-source-id]
pactl load-module module-loopback sink=combined source=[mic-source-id]
avconv -s:v [screen-size] -f x11grab -i "$DISPLAY" -f pulse -i combined.monitor output_file.mkv

Here's a script that does this and more (it also does auto setup and cleanup). Run it, and it should Just Work™.
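
For reference, here is a minimal sketch of what such a script could look like (this is not the author's actual script; the source names and output file are placeholders you would adapt to your own setup):

#!/bin/sh
# Combine the VoIP audio and the microphone into one source, record, then clean up.

MIC_SOURCE="alsa_input.pci-0000_00_1b.0.analog-stereo"            # placeholder: your microphone source
VOIP_SOURCE="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor"  # placeholder: monitor of your speakers
SCREEN_SIZE="1366x768"

# pactl load-module prints the index of each loaded module; keep them so we can unload later
NULL_ID=$(pactl load-module module-null-sink sink_name=combined)
LOOP1_ID=$(pactl load-module module-loopback sink=combined source="$VOIP_SOURCE")
LOOP2_ID=$(pactl load-module module-loopback sink=combined source="$MIC_SOURCE")

cleanup() {
    pactl unload-module "$LOOP2_ID"
    pactl unload-module "$LOOP1_ID"
    pactl unload-module "$NULL_ID"
}
trap cleanup EXIT

avconv -s:v "$SCREEN_SIZE" -f x11grab -i "$DISPLAY" \
       -f pulse -i combined.monitor call_recording.mkv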

Cheers!

1. It goes without saying that doing so is a breach of the general expectation of privacy, and must be done with the consent of all parties involved. In some countries, not getting consent may even be illegal.
2. If you don't use Pulseaudio, see the man page of avconv for other options, and stop reading now. The cool stuff requires Pulseaudio. :)
3. I don't count JACK as a general-purpose audio system. It's specialized for a unique pro-audio use case.

Richard Freeman a.k.a. rich0 (homepage, stats, bugs)
The Dark Side of Quality (December 06, 2012, 15:48 UTC)

Voltaire once said that the best is the enemy of the good. I think that there are few places where one can see as many abuses of quality as you’ll find in many FOSS projects, including Gentoo.

Often FOSS errs on the side of insufficient quality. Developers who are scratching itches don’t always have incentive to polish their work, and as a result many FOSS projects result in a sub-optimal user experience. In these cases “good enough” is standing in the way of “the best.”

However, I’d like to briefly comment on an opposite situation, where “the best” stands in the way of “good enough.” As an illustrative example, consider the excellent practice of removing bundled libraries from upstream projects. I won’t go on about why this is a good thing – others have already done so more extensively. And make no mistake – I agree that this is a good thing, the following notwithstanding.

The problem comes when things like bundled libraries become a reason to not package software at all. Two examples I’m aware of where this has happened recently are media-sound/logitechmediaserver-bin and media-gfx/darktable. In the former there is a push to remove the package due to the inclusion of bundled libraries. In the latter the current version is lagging somewhat because while upstream actually created an ebuild, it bundles libraries. Another example is www-client/chromium, which still bundles libraries despite a very impressive campaign by the chromium team to remove them.

The usual argument for banning packages containing bundled libraries is that they can contain security problems. However, I think this is misleading at best. If upstream bundles zlib in their package, we cry about potential security bugs (and rightly so); however, if upstream simply writes their own compression functions and includes them in the code, we don’t bat an eyelash, even though this is more likely to cause security problems. The only reason we can complain about zlib is BECAUSE it is extensively audited, making it easy to spot the security problems. We’re not reacting to the severity of problems, but only to the detectability of them.

Security is a very important aspect of quality, but any reasonable treatment of security has to consider the threat model. While software that bundles a library is rightfully considered “lower” in quality than one that does not, what matters more is whether this is a quality difference that is meaningful to end users, and what their alternatives are. If the alternative for the user is to just install the same software with the same issues, but from an even lower quality source with no commitment to security updates, then removing a package from Gentoo actually increases the risks to our users. This is not unlike the situation that exists with SSL, where an unencrypted connection is presented to the user as being more secure than an SSL connection with a self-signed certificate, when this is not true at all. If somebody uses darktable to process photos that they take, then they’re probably not concerned with a potential buffer overflow in a bundled version of dcraw. If another user operates a service that accepts files from strangers on the internet, then they might be more concerned.

What is the solution? A policy that gives users reasonably secure software from a reputable source, with clear disclosure. We should encourage devs to unbundle libraries, consider bugs pointing out bundled libraries valid, accept patches to unbundle libraries when they are available, and add an elog notice to packages containing bundled libraries in the interest of disclosure. Packages with known security vulnerabilities would be subject to the existing security policy. However, developers would still be free to place packages in the tree that contain bundled libraries, unmasked, and they could be stabilized. Good enough for upstream should be good enough for Gentoo (again, barring specific known vulnerabilities), but that won’t stop us from improving further.


Filed under: gentoo

gstreamer 1.0 (December 06, 2012, 00:03 UTC)

It has been a while since I have last written here but I am not dead and I still somehow manage to contribute to Gentoo.

In the past weeks, I have been working on making Gnome 3.6 ready for inclusion in portage. It rapidly appeared that Gnome 3.6 would have to use both gstreamer 0.10 and gstreamer 1.0; however, the gstreamer team is badly understaffed, and only Alexandre (tetromino), who is not even a gstreamer team member, had tried to start bumping ebuilds to gstreamer 1.0.

But then Alexandre got busy and this development stalled a bit. After I finished bumping the overlay to Gnome 3.6.1, I took on the challenge of rewriting the gstreamer eclasses to make them easier to use and understand. They were, in my opinion, quite scary, with version checks everywhere, and I think that is one of the reasons so few people want to work in the gstreamer team :)

If you do not follow gentoo-dev: most of the code moved to gst-plugins10.eclass, which received some magic touches that basically make 99% of the version-dependent code go away. As an added bonus, the eclasses are now documented and support EAPI 1 to 5. EAPI 0 support got dropped because it lacks slot operators, which are really needed with gstreamer right now.
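
To illustrate why that matters (a sketch only; the package atoms are real, but these dependency lines are illustrative and not taken from the new eclass), gstreamer 0.10 and 1.0 are slotted so both can be installed in parallel, and with EAPI 5 a consumer can use a slot operator to request a rebuild whenever the matched slot/subslot changes:

# hypothetical consumer ebuild, EAPI=5
RDEPEND="media-libs/gstreamer:1.0=
	media-libs/gst-plugins-base:1.0="
# EAPI 0 cannot express the trailing "=", which is why its support was dropped

That keeps the question of which gstreamer stream a package is built against, and when it needs a rebuild, inside the dependency syntax instead of in eclass version checks.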

So if you hit some gstreamer compilation problems in the last few days, please forgive me; the upgrade road was a bit bumpy but, overall, it was not so bad. And now, I am happy to say that gstreamer 1.0 is in portage, which clears the road for Gnome 3.6 inclusion.

On a final note, I also continued Alexandre’s work of bumping the last 0.10 releases, so we are up-to-date on that front as well.

Happy compiling !

December 05, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
nginx as reverse SMTP proxy (December 05, 2012, 22:03 UTC)

I’ve noticed that not many online resources tell you how to use nginx as a reverse SMTP proxy. Using a reverse SMTP proxy makes sense even if you have just one mail server back-end, either because you can easily switch towards another one, or because you want to put additional checks in place before handing off the mail to the back-end.

In the example below, a back-end mail server is running on localhost (in my case it’s a Postfix back-end, but that doesn’t matter). Mail received by Nginx will be forwarded to this server.

user nginx nginx;
worker_processes 1;

error_log /var/log/nginx/error_log debug;

events {
        worker_connections 1024;
        use epoll;
}
http {

        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';


        server {
                listen 127.0.0.1:8008;
                server_name localhost;
                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log info;

                root /var/www/localhost/htdocs;

                location ~ \.php$ {
                        add_header Auth-Server 127.0.0.1;
                        add_header Auth-Port 25;
                        return 200;
                }
        }
}

mail {
        server_name localhost;

        auth_http localhost:8008/auth-smtppass.php;

        server {
                listen 192.168.100.102:25;
                protocol smtp;
                timeout 5s;
                proxy on;
                xclient off;
                smtp_auth none;
        }
}

If you first look at the mail section, you notice that I include an auth_http directive. Nginx needs this, as it will consult this back-end service about what to do with the mail (the moment it receives the recipient information). The URL I use is arbitrarily chosen here, as I don’t really run a PHP service in the background (yet).

In the http section, I create the same resource that the mail section’s auth_http wants to connect to. I then declare the two return headers that Nginx needs (Auth-Server and Auth-Port) with the back-end information (127.0.0.1:25). If I ever need to do load balancing or other tricks, I’ll write up a simple PHP script and serve it from PHP-FPM or so.
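
As a quick sanity check of that auth endpoint (the header values below are made up for illustration; nginx itself sends a larger set of Auth-* request headers than shown here), you can ask it directly and confirm that the two headers come back:

curl -i http://localhost:8008/auth-smtppass.php \
     -H "Auth-Method: none" \
     -H "Auth-Protocol: smtp" \
     -H "Auth-SMTP-From: MAIL FROM:<sender@example.com>" \
     -H "Auth-SMTP-To: RCPT TO:<recipient@example.com>"

The response should be a 200 carrying the Auth-Server and Auth-Port headers configured above, which is what the mail proxy reads to decide where to hand the session off.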

Next on the list is to enable SSL (not difficult) with client authentication, which sadly isn’t supported by Nginx’s mail module (yet), so I’ll need to look at a different approach for that.

BTW, this is all on a simple Gentoo Hardened with SELinux enabled. The following booleans were set to true: nginx_enable_http_server, nginx_enable_smtp_server and nginx_can_network_connect_http.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
A perfect use case for IPv6 (December 05, 2012, 19:37 UTC)

You probably remember, or encountered at least once, my modsecurity ruleset which I use to filter spam comments on this blog, among other things. One of the things that the ruleset does is filtering based on a number of DNSBL which covers among other open proxies and infected nodes — this is great, because most of the comment spam you’ll ever receive passes through open proxies, or through computers that have been infected by malware.

Unfortunately, this has a side effect: public networks such as airports’, Starbucks shops’, and the gogo in-flight wifi that I’m using now use a very wide NAT, and the sheer number of devices connected means that there is no way the IP address wouldn’t be counted as an infected node. This would normally mean that I wouldn’t be able to blog from within the plane, so how am I doing it right now? I simply opened a VPN connection to the office in LA and route all accesses to my server through that. It works, but it really feels wrong.

Well, it turns out that there is a very easy way to deal with it: you just need to assign a unique IP address to each of the connected devices — easy, isn’t it? And since you don’t want them reused, you probably want a single per-device address that is unique among all the possible devices… wait, isn’t this what IPv6 is designed for? Yes it is.

Indeed, I would say that even more so than a private entity, be it a person or a company, public wireless networks are a perfect reason to get more IPv6 service out there, and I’m very surprised that none of these companies seem to have smartened up and started providing IPv6, especially in light of the recent switch-on for services like Facebook, Google, and so on.

And it’s funny that the company that provides the in-flight wireless and the one that provides IPv6 services have such similar names, while being totally unrelated… gogo and gogo6.

On a different note, I have to say that the staff for Delta Airlines in LAX today has been the most friendly, prepared and fast I have ever experienced. Even in the face of an hour’s delay on the plane, they communicated clearly and defused a situation that could have been very tense. Congrats!

December 04, 2012
Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
music made with gentoo: debris (December 04, 2012, 07:29 UTC)

a new song: debris by ioflow

reworking music from three netlabel releases, for the 48th disquiet junto, fraternité, dérivé.

a last-minute contribution to this junto. i was in a car wreck a couple days ago, so abruptly my planned participation time was reduced to just a day and a half. i could only spend a little while per session sitting at the DAW. the track’s title is a reference to that event.

everything was sequenced with renoise, as seen in the screenshot.

the three source tracks were very hard to work with; this was easily the hardest junto i’ve attempted. i had to make several passes through the tracks, pulling out tiny sub-one-second sections here and there, building up percussion, or finding droney passages that would work for background material.

for the percussion, i zoomed in and grabbed pieces of non-tonal audio, gated them to remove incidental noise, and checked playback at other speeds for useful sounds. some of the samples were doubly useful with different filter and speed settings. most of the percussion sounds were created after isolating one channel or mixing down to mono; this gave a sharper, clickier sound. occasionally, some of the hits/sticks were left in stereo for a slightly fuller sound.

the melody/drone passages were all pulled from the “unloop” track. i chopped out a short section of mostly percussion-free sound at the beginning of the song, isolated one channel, and ran this higher-pitched drone into paulstretch, stretched to 50x. i played with the bandwidth and noise/tone sliders to get the distinctive crystalline sound, rendering it a few more times. by playing this tone at different speeds using renoise’s basic sample editor, i was able to layer octaves, fading different copies of the sample in and out for some evolving harmonics as desired.

a signal follower attached to the low-passed kick drum flexed the drone’s volume on the beat, adding some liveliness, resulting in a pleasant low-key “bloom pads” effect. i don’t go for huge sidechain compression; just a touch is all that’s needed to reinforce the rhythm. a slow LFO set to “random” mode, attached to a bitcrusher, downgraded the clap sounds with some pleasant crunch.

calf reverb and vintage tape delay plugins rounded out the FX, with the percussion patterns treated liberally, resulting in some complex sounds despite simple arrangement. the only other effect was a tape warmth plugin on the master channel; everything was kept quite minimal, for aesthetic and time reasons. given that i only had a day or so to work on the track, i knew i couldn’t try for too many complicated tricks or melodies.

December 02, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
I'm doing it for you (December 02, 2012, 07:44 UTC)

Okay, this is not going to be a very fun post to read, and the title can already make you think that I’m being an arrogant bastard this time around, but I get the feeling that lately people are missing the point: even when I’m grumpy, I’m not usually grumpy just because; I’m usually grumpy because I’m trying to get things to improve rather than stagnate or get worse.

So let’s take an example right now. Tomáš posted about some of the changes that are to be expected in LibreOffice 4 — one of these is that the LDAP client libraries are no longer an optional dependency but have to be present. I wasn’t happy about that.

I actually stumbled across that just the other day when installing the new laptop: while installing KDE components with the default USE flags, OpenLDAP would have been installed. The reason is obviously that the ldap USE flag is enabled by default, which makes sense, as it’s (unfortunately) the most common “shared address book” database available. But why should I get an LDAP server if I explicitly selected a desktop profile?

So the first task at hand was to make sure that the minimal USE flag was present on the package (it was), and that it did what was intended, i.e., not install the LDAP server — and that is indeed the case. Good, so we can install only the client libraries. Unfortunately the default dependencies were slightly wrong with said USE flag, as some things like libtool (for libltdl) are only really used by the server components. This was easy to fix, together with a couple more fixes.

But when I proposed on the mailing list to change the defaults for the desktop profile to have the minimal USE flag enabled, hell broke loose — now, the good point that came out of it is that the minimal USE flag is definitely being over-used — and I’m afraid I’m at fault there as well, since both NRPE and NSCA have a minimal USE flag. I guess it’s time for me to reel back on that as well. And now I have a patch to give openldap a server USE flag, enabled by default – except, hopefully, on the desktop profile – to replace the old minimal flag. Incidentally, looking into it I also found that said USE flag was actually clashing with the cxx one, for no good reason as far as I could tell. But Robin doesn’t even like the idea of going with a server USE flag for OpenLDAP!

On a different note, let’s take hwids — I originally created the package to reduce the amount of code our units’ firmware required, but while at it I ended up with a problematic file on my hands: as I wrote before, the oui.txt file downloaded from IEEE has been redistributed for a number of years, but when I contacted them to make sure I could redistribute it, they told me that it wasn’t possible. Unfortunately the new versions of systemd/udev use that file to generate a hardware database — finally implementing my suggestion from four years ago, better late than never!

Well, I ended up having to take some flak, and some risk, and now the new hwids package fetches that file (as well as the iab.txt file) and also fully implements re-building the hardware database, so that we can keep it up to date from Portage, without having to get people to re-build their udev package over and over.

So, excuse me if I’m quite hard to work with sometimes, but the amount of crap I have to take when doing my best to make Gentoo better, for users and developers, is so high that sometimes I’d just like to say “screw it” and leave it to someone else to fix the mess. But I’m not doing that — and if you don’t see me around much in the next few days, it’s because I’m leaving LA on Wednesday, and I can’t post on the blog while flying to New York (the gogonet IP addresses are in virtually every possible blacklist, now and in the future, so there is no way I can post to the blog, unless I figure out a way to set up a VPN and route traffic to my blog through said VPN…).

And believe it or not, but I do have other concerns in my life beside Gentoo.

December 01, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Tinderbox and expenses (December 01, 2012, 17:44 UTC)

I’ve promised some insight into how much running the tinderbox actually cost me. And since today marks two months since Google AdSense’s crazy blacklisting of my website, I guess it’s as good a time as any.

So let’s start with the obvious first expense: the hardware itself. My original tinderbox was running on the box I called Yamato, which cost me some €1700 and change, without the harddrives, back in 2008 — and about half the cost was paid with donations from users. Over time, Yamato had to have its disks replaced a couple of times (and sometimes the cost came out of donations). That computer has been used for other purposes, including as my primary desktop for a long time, so I can’t really complain about the parts that I had to pay for myself. Other devices, connectivity, and all those things ended up being shared between my tinderbox efforts and my freelancing job, so I don’t complain about those in the least either.

The new tinderbox host is Excelsior, which has been bought with the Pledgie that left me paying only some $1200 out of my own pocket, the rest coming in from the contributors. The space, power and bandwidth have been offered by my employer, which solved quite a few problems. Since I no longer have to pay for the power, and last time I went back to Italy (in June) I turned off, and got rid of, most of my hardware (the router was already having some trouble; Yamato’s motherboard was having trouble anyway, so I saved the harddrives to decide what to do with them, and sold the NAS to a friend of mine), I can now assess how much I was spending on the power bill for it.

My usual power bill was somewhere around €270 — which obviously includes all the usual house power consumption as well as my hardware and, due to the way power is billed in Italy, an advance on the next bill. The bill for the months between July and September, the first one where I was fully out of my house, was for -€67 — and no, it’s not a typo, it was a negative bill! Calculator at hand, the actual difference between the previous bills and the new one is around €50 a month — assuming that only a third of that was the tinderbox hardware, that makes it around €17 per month spent on the power bill. It’s not much but it adds up. Connectivity — that’s hard to assess, so I’d rather not even go there.

With the current setup, there is of course one expense that wasn’t there before: AWS. The logs that the tinderbox generates are stored on S3, since they need to be accessible, and there are lots of them. And one of the reasons why Mike is behaving like a child about me just linking the build logs instead of attaching them is that he expects me to delete them because they are too expensive to keep indefinitely. So, how much does the S3 storage cost me? Right now, it costs me a whopping $0.90 a month. Yes, you got it right: it costs me less than one dollar a month for all the storage. I guess the reason is that they are not stored for high reliability or high speed access, and they are highly compressible (even though they are not compressed by default).

You can probably guess at this point that I’m not going to clear out the logs from AWS for a very long time. Although I would like for some logs not to be so big for nothing — like the sdlmame one, which used to use the -v switch to GCC, causing all the calls to print a long bunch of internal data that is rarely useful in a default log output.

Luckily for me (and for the users relying on the tinderbox output!) those expenses are well covered by the Flattr revenue from my blog’s posts — and thanks to Socialvest I no longer have to have doubts about whether I should keep the money or use it to flattr others — I currently have over €100 ready for the next six/seven months’ worth of flattrs! Before this, between my freelancer’s jobs, Flattr, and the ads on the blog, I was also able to cover at least the cost of the server (and barely the cost of the domains — but that’s partly my fault for having.. a number).

Unfortunately, as I said at the top of the post, there are no longer any ads served by Google on my blog. Why? Well, a month and a half ago I received a complaint from Google, saying that one post of mine, in which I namechecked a famous adult website in the context of a then-recent perceived security issue, is adult material, and that it goes against the AdSense policies to have ads served on a website with adult content. I would still argue that just namechecking a website shouldn’t be considered adult content, but while I did submit an appeal to Google, a month and a half later I have no response at hand. They didn’t blacklist the whole domain though, they only blacklisted my blog, so the ads are still shown on Autotools Mythbuster (which I plan to resume working on almost full time pretty soon), but the result is bleak: I went down from €12-€16 a month to a low €2 a month because of this, and that is no longer able to cover the server expense by itself.

This does not mean that anything will change in the future, immediate or not. This blog has more value for me than the money I can get back from it, as it’s a way for me to showcase my ability and, to a point, get employment — but you can understand that the way they handled that particular issue still upsets me a liiiittle bit.

Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)
Libreoffice 4.0 and other cool stuff (December 01, 2012, 12:47 UTC)

During the following week there will be a hard feature freeze on libreoffice and the 4.0 branch will be created. This means that we can finally start to do some sensible stuff, like testing it like hell in Gentoo.

This release is packed with new features so let me list at least some relevant to our Gentoo stuff:

  • repaired nsplugin interface (who the hell uses it :P), fixed by Stephan Bergmann, for which you ALL should send him some cookies :-)
  • liblangtag enables direct po/mo usage, which makes translations easier to handle because they are no longer converted into the internal sdf format
  • liborcus library debut, which moves some features out of calc into a nice small lib so anyone can reuse them, plus it is easier to maintain; cookies to Kohei Yoshida
  • bluetooth remote control that lets you drive your presentations over bluetooth; there is also an Android remote app that does the same over the network ;-)
  • telepathy collaboration framework inclusion that allows you to work on one document with multiple other people in a semi-realtime manner (it is mostly a tech preview and you don’t see what the other guy is doing, it just appears in the doc)
  • binfilter is gone! Which is awesome, as it was a huge load of code that was really stinky

For more changes you can just read the wiki article; keep in mind that this wiki page will be updated until the release, so it does not contain all the stuff yet.

Build related stuff

  • We are going to require a new library that allows us to parse the mspub format. Fridrich Strba was obviously bored, so he wrote yet another format parser :-)
  • Pdfimport is no longer a pseudo-extension but is built in directly behind a normal useflag, which saves quite a lot of copy&paste code, and it looks like it operates faster now.
  • The openldap schema provider is now hard-required so you can use address books (the Mork driver handles that). I bet some of you lads won’t like this much, but ldap itself does not have too many deps and it is useful for quite a few business cases.
  • There are also some nice removals: glib and librsvg are goners from the default reqs (no surprise for gnomers that they will still need them). It also no longer needs sys-libs/db, which I finally removed from my system.
  • The gcc requirement was raised to 4.6, because otherwise boost acts like *censored* and I have better stuff to do than fix it all the time.
  • Saxon bundling has been dealt with and removed completely.
  • Parallel build is sorted out, so it will use the correct number of cpus and will fork gcc only the required number of times, not n^n times.
  • And last, but most probably worst, the plugin foundation that was in java is slowly migrating to python, and it needs python:3.3 or later. This did not make even me happy :-)

Other fancy libreoffice stuff

Michael Meeks is running merges against Apache OpenOffice so we try hard to get even the fixes that are not in our codebase (thankfully the license allows it this way). So with lots of effort we review all their code changes and try to merge them over into our implementation. This will grow more and more complex over time, because in libo we actually try to use new stuff like the new C++ std/Boost/… so there are more and more collisions. Let’s see how long it will be worth it (of course one-liners are easy to pick up :P).

What is going in stable?

We at last got libreoffice-3.6 and its binary counterpart stable. After this, an svg bug with librsvg was found (see above, it’s gone from 4.0), so the binaries will be rebuilt and the next version bump will lose the svg useflag. This was caused by how I wrote the detection of new switches and by an oversight on my side: I simply tried to launch libreoffice with -svg and didn’t dig further. Other than that the whole package is production ready and there should not be many new regressions.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Further notes about UEFI booting (December 01, 2012, 08:23 UTC)

So after having to get one laptop to boot via UEFI, I got the Latitude to work with it as well, if anything as a point of pride. Last time I was able to get Windows booting via UEFI, but I ended up using GRUB 2 with legacy BIOS booting instead of UEFI for Linux, and promised myself to find a way to set this up properly.

Well, after the comments on my previous post I made sure to update my copy of SysRescueCD, as I only had a version 2.x on my USB key the other day, and that does not support EFI booting — but the new version (3.0) actually supports it out of the box, which makes it much easier, as I no longer need the EFI shell to boot an EFI stub kernel. To be precise, there is also no need to use the EFI stub at all, except to help in recovery situations.

So, after booting into SysRescueCD, I zeroed out the Master Boot Record (to remove the old-style GRUB setup), re-typed the first partition to EF00 (it was set to EF02, which is what GRUB2 uses to install its modules on non-EFI systems), and formatted it as vfat. Then I chrooted into the second partition (which is my Gentoo Linux root partition), rebuilt GRUB2 to support efi-64, and just used grub2-install. Done!
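
For the record, the whole dance boils down to just a handful of commands; this is a sketch assuming the partition layout described above (/dev/sda1 as the EFI System Partition, /dev/sda2 as root), not a verbatim transcript of the session:

# from the SysRescueCD shell
sgdisk --typecode=1:EF00 /dev/sda       # re-type the first partition as an EFI System Partition
mkfs.vfat -F 32 /dev/sda1
mount /dev/sda2 /mnt/gentoo
mount /dev/sda1 /mnt/gentoo/boot/efi
chroot /mnt/gentoo /bin/bash            # bind mounts of /proc, /sys and /dev omitted here

# inside the chroot: build GRUB2 with the efi-64 platform, then install it
echo 'GRUB_PLATFORMS="efi-64"' >> /etc/portage/make.conf
emerge --oneshot sys-boot/grub:2
grub2-install --target=x86_64-efi --efi-directory=/boot/efi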

Yes, the new SysRescueCD makes it an absolute piece of cake. And I have now actually disabled non-UEFI booting on that laptop and, not sure if it’s just my impression, it feels like it’s actually a second or two faster.

Still on the UEFI topic, it turns out that Fabio ordered the same laptop I got (and I’m writing from right now), which means that soon Sabayon will have to support UEFI booting. On the other hand, I got Gentoo working fine on this laptop and the battery life is great, so I’m not complaining about it too much. I’ll actually write something about the laptop and how it feels soon, but tonight I’m just too tired for it.

November 30, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Inside the life of an embedded developer (November 30, 2012, 04:22 UTC)

Now that the new ultrabook is working almost fine (I’m still tweaking the kernel to make sure it works as intended, especially for what concerns the brightness, and tweaking the power usage, for once, to give me some more power), I think it’s time for me to get used to the new keyboard, which is definitely different from the one I’m used to on the Dell, and more similar to the one I’ve used on the Mac instead. To do so, I think I’ll resume writing a lot on different things that are not strictly related to what I’m doing at the moment but talk about random topic I know about. Here’s the first of this series.

So, you probably all know me as a Gentoo Linux developer; some of you will know me as a multimedia-related developer thanks to my work on xine, libav, and a few other projects; a few would know me as a Ruby and Rails developer, but that’s not something I’m usually boasting or happy about. In general, I think that I’m doing my best not to work on Rails if I can. One of the things I’ve done over the years has been working on a few embedded projects, a couple of which have been Linux related. Despite what some people thought of me before, I’m not an electrical engineer, so I’m afraid I haven’t had the hands-on experience that many other people I know have had, which, I have to say, sometimes bothers me a bit.

Anyway, the projects I worked on over time have never been huge, very well funded, or well managed projects, which meant that despite Greg’s hope as expressed on the gentoo-dev mailing list, I never had a company train me in the legal landscape of Free Software licenses. Actually, in most cases I ended up being the one who had to explain to my manager how these licenses interact. I take licenses very seriously, and I do my best to make sure that they are respected, even when the task falls on the shoulders of someone else, and that someone might not actually do everything by the book.

So, while I’m not a lawyer and I really don’t want to go near that option, I always take the chance to understand licenses correctly and, when my ass is on the line, I make sure to verify the licenses of what I’m using for my job. One such example is the project I’ve been working on since last March, for which I’ve prepared the firmware, based on Gentoo Linux. To make sure that all the licenses were respected properly, I had to come up with the list of packages that the firmware is composed of, and then verify the license of each. Doing this I ended up finding a problem with PulseAudio due to it linking in the (GPL-only) GDBM library.

What happens is that if the company you’re working for has no experience with licenses, and/or does not want to pay to involve lawyers to review what is being done, it’s extremely easy for mistakes to happen unless you are very careful. And in many cases, if you, as a developer, pay more attention to the licenses than your manager does, it’s also seen as a negative point, as that’s not how they’d like you to employ your time. Of course you can say that you shouldn’t be working for that company then, but sometimes it’s not like you have tons of options.

But this is by far not the only problem. Sometimes, what happens is a classic 801 — that is, instead of writing custom code for the embedded platform you’re developing for, the company wants you to re-use previous code, which has a high likelihood of being written in a language that is completely unsuitable for the embedded world: C++, Java, COBOL, PHP….

Speaking of which, here’s an anecdote from my previous experiences in the field: at one point I was working on the firmware for an ARM7 device, that had to run an application written in Java. Said application was originally written to use PostgreSQL and Tomcat, with a full-fledged browser, but had to run on a tiny resistive display with SQLite. But since at the time IcedTea was nowhere to be found, and the device wouldn’t have had enough memory for it anyway, the original implementation used a slightly patched GCJ to build the application to ELF, and used JNI hooks to link to SQLite. The time (and money, when my time was involved) spent making sure the system wouldn’t run out of memory would probably have sufficed to rewrite the whole thing in C. And before you ask, the code bases between the Tomcat and the GCJ versions started drifting almost immediately, so code sharing was not a good option anyway.

Now, to finish this mostly pointless, anecdotal post of mine, I’d like to write a few words of commentary about embedded systems, systemd, and openrc. Whenever I hear one or the other side saying that embedded people love the other, I think they don’t know how different embedded development can be from one company to the next; even between good companies there is such big variation that it makes them stand lightyears apart from bad companies like some of the ones I described above. Both sides have good points for the embedded world; what you choose depends vastly on what you’re doing.

If memory and timing are your highest constraints, then it’s very likely that you’re looking into systemd. If you don’t have those kinds of constraints, but you’re building a re-usable or highly customizable platform, it’s likely you’re not going to choose it. The reason? While cross-compilation shouldn’t be a problem if you’re half-decent at your development, the reality is that in many places it is. What happens then is that you want to be able to make last-minute changes, especially in the boot process, for debugging purposes; using shell scripts makes that vastly easier, and for some people doing it more easily is the target, rather than doing it right (and this is far from saying that I find the whole set of systemd ideas and designs “right”).

But this is a discussion that is more of a flamebait than anything else, and if I want to go into details I should probably spend more time on it than I’m comfortable doing now. In general, the only thing I’m afraid of is that too many people make assumptions about how people do things, or take for granted that companies, big and small, care about doing things “right” — in my experience, that’s not really the case that often.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)

If you're seeing a message like "Failed to move to new PID namespace: Cannot allocate memory" when running Chrome, this is actually a problem with the Linux kernel.

For more context, see http://code.google.com/p/chromium/issues/detail?id=110756 . In case you wonder what the fix is, the patch is available at http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=976a702ac9eeacea09e588456ab165dc06f9ee83, and it should be in Linux-3.7-rc6.

November 28, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Gentoo: Graphing the Developer Web of Trust (November 28, 2012, 13:57 UTC)

“Nothing gets people’s interest peaked like colorful graphics. Therefore, graphing the web of trust in your local area as you build it can help motivate people to participate as well as giving everyone a clear sense of what’s being accomplished as things progress.”

I graphed the Gentoo Developer Web of Trust, as motivated by the (outdated) Debian Web of Trust.

Graph (same as link above) – Redrawn weekly : http://qa-reports.gentoo.org/output/wot-graph.png
Stats per Node : http://qa-reports.gentoo.org/output/wot-stats.html
Source : http://git.overlays.gentoo.org/gitweb/?p=proj/qa-scripts.git;a=blob;f=gen-dev-wot.sh;hb=HEAD

Enjoy.

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
UEFI booting (November 28, 2012, 05:46 UTC)

Last Friday (Black Friday, since I was in the US this year), I ended up buying myself an early birthday present; I finally got the ZenBook UX31A that I had been looking at since September, after seeing the older model being used by J-B of VLC fame. Today it arrived, and I decided to go the easy route: I had already prepared a DVD with Sabayon, and after updating the “BIOS” from Windows (since you never know), I wiped it out and installed the new OS on it. Which couldn’t be booted.

Now before you run around screaming “conspiracy”, I ask you to watch Jo’s video (Jo did you really not have anything with a better capture? My old Nikon P50 had a better iris!) and notice that Secure Boot over there works just fine. Other than that, this “ultrabook” is not using SecureBoot because it’s not certified for Windows 8 anyway.

The problem is not that it requires Secure Boot or anything like that; much more simply, it has no legacy boot. Which is what I’m using on the other laptop (the Latitude E6510), since my first attempt at using EFI for booting failed badly. Anyway, this simply meant that I had to figure out how to get this to boot.

What I knew from the previous attempt is this:

  • grub2 supports UEFI in both 32- and 64-bit mode, which is good — both my systems run 64-bit EFI anyway;
  • grub2 requires efibootmgr to set up the boot environment;
  • efibootmgr requires access to the EFI variables, so it requires a kernel with support for EFI variables;
  • but there is no way to access those variables when using legacy boot.

This chicken-and-egg problem is what blew it for me last time — I did try before the kernel added EFI stub support anyway. So what did I do this time? Well, since Sabayon did not work out of the box I decided to scratch it and went with good old fashioned Gentoo. And as usual, to install it I started from SysRescueCD — which to this day, as far as I can tell, still does not support booting as EFI either. It’s a good thing then that Asus actually supports legacy boot… from USB drives as well as CDs.

So I boot from SysRescueCD and partition the SSD into three parts: a 200MB vfat EFI partition; a root-and-everything partition; and a /home partition. Note that I don’t split out either /boot or /usr, so I’m usually quite easy to please in the boot process. The EFI partition I mount as /mnt/gentoo/boot/efi, and inside it I create an EFI directory (it’s actually case-insensitive but I prefer keeping it uppercase anyway).

Now it’s time to configure and build the kernel — make sure to enable EFI stub support. Pre-configure the boot parameters in the kernel, and make sure not to use modules for anything you need during boot. This way you don’t have to care about an initrd at all. Build and install the kernel. Then copy /boot/vmlinuz-* to /boot/efi/EFI/kernel.efi — make sure to give it a .efi suffix, otherwise it won’t work; the name you use now is not really important, as you’ll only be using it once.
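
As a rough sketch of what that kernel configuration amounts to (the command line below is an example matching the root partition described above, not necessarily the exact one used here):

CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=/dev/sda2 rootfstype=ext4 ro"

With the command line baked into the kernel, the stub can be started from the EFI shell with no boot loader and no initrd at all.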

Now you need an EFI shell. The Zenbook requires the shell to be available somewhere, but I know that at least the device we use at work has some basic support for an internal shell in its firmware. The other Gentoo wiki has a link on where to download the file; just put it in the root of the SysRescueCD USB stick. Then you can select to boot it from the “BIOS” configuration screen, which is what you’ll get after a reboot.

At this point, you just need to execute the kernel stub: FS1:\EFI\kernel.efi will be enough for it to start. After the boot has completed, you’re in an EFI-capable kernel, booted in EFI mode. And the only thing that you’re left to do is grub-install --efi-directory=/boot/efi. And… you’re done!

When you reboot, grub2 will start in EFI mode, boot your kernel, and be done with it. Pretty painless, isn’t it?

November 27, 2012
Pacho Ramos a.k.a. pacho (homepage, stats, bugs)
About maintainer-needed (November 27, 2012, 18:35 UTC)

As you can see at:
http://euscan.iksaif.net/maintainers/maintainer-needed@gentoo.org/

there are a lot of packages assigned to maintainer-needed. These packages lack an active maintainer and their bugs are usually solved by people on the maintainer-needed alias (like pinkbyte, hasufell, kensington and me). Even if we are still able to keep the bug list "short" (when excluding "enhancement" and "qa" tagged bugs), any help with this task is really appreciated, so:
1. If you are already a Gentoo dev and would like to help us, simply join the team by adding yourself to the mail alias. There is no need to go to the bug list and fix any specific number of bugs out of obligation. For example, I simply try to fix maintainer-needed bugs when I have a bit of time after taking care of other things.
2. If you are a user, you can:
- Step up as maintainer using proxy-maintainers project:
http://www.gentoo.org/proj/en/qa/proxy-maintainers/index.xml
- Go to bugs:
http://tinyurl.com/cssc95v
and provide fixes, patches... for them ;)

Thanks a lot for your contribution!

November 25, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Why you need the real_* thing with genkernel (November 25, 2012, 19:05 UTC)

Today it bit me. I rebooted my workstation, and all hell broke loose. Well, actually, it froze. Literally, if you consider my root file system. When the system tried to remount the root file system read-write, it gave me this:

mount: / not mounted or bad option

So I did the first thing that always helps me, and that is to disable the initramfs booting and boot straight from the kernel. Now, for those wondering why I boot with an initramfs while it still works directly from a kernel: it’s a safety measure. Ever since the talks, rumours, fear, uncertainty and doubt about supporting a separate /usr file system started, I have kept an initramfs on my system in case an update really breaks the regular boot cycle. The same goes because I use lvm on most file systems, and software RAID on all of them. If I didn’t have an initramfs lying around, I would be screwed the moment userspace decides not to support this straight from a kernel boot. Luckily, this isn’t the case (yet), so I could continue working without an initramfs. But I digress. Back to the situation.

Booting without initramfs worked without errors of any kind. Next thing is to investigate why it fails. I reboot back with the initramfs, get my read-only root file system and start looking around. In my dmesg output, I notice the following:

EXT4-fs (md3): Cannot change data mode on remount

So that’s weird, no? What is this data mode? Well, the data mode tells the file system (ext4 for me) how to handle writing data to disk. As you are all aware, ext4 is a journaled file system, meaning it writes changes into a journal before applying them, allowing changes to be replayed when the system suddenly crashes. By default, ext4 uses ordered mode, writing the metadata (information about files and such, like inode information, timestamps, block maps, extended attributes, … but not the data itself) to the journal right after writing the data to the disk, after which the metadata is then written to disk as well.

On my system though, I use data=journal so data too is written to the journal first. This gives a higher degree of protection in case of a system crash (or immediate powerdown – my laptop doesn’t recognize batteries anymore and with a daughter playing around, I’ve had my share of sudden powerdowns). I do boot with the rootflags=data=journal and I have data=journal in my fstab.

But the above error tells me otherwise. It tells me that the mode is not what I want it to be. So after fiddling a bit with the options and (of course) using Google to find more information, I found out that my initramfs doesn’t check the rootflags parameter, so it mounts the root file system with the standard (ordered) mode. Trying to remount it later then fails, as my fstab contains the data=journal flag, and running mount -o remount,rw,data=ordered for fun doesn’t give many smiles either.

The man page for genkernel, however, showed me that it uses real_rootflags. So I rebooted with that parameter set to real_rootflags=data=journal and all is okay again.
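
To make that concrete, a boot entry using genkernel's initramfs then looks roughly like this (a sketch for grub legacy; the kernel and device names are examples, the real_* parameters are the point):

title Gentoo Linux (genkernel)
root (hd0,0)
kernel /kernel-genkernel-x86_64 root=/dev/ram0 real_root=/dev/md3 real_rootflags=data=journal domdadm dolvm
initrd /initramfs-genkernel-x86_64

The plain rootflags= parameter is read by the kernel itself, while the real_* variants are what genkernel's init script reads and passes along when it mounts the real root file system.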

Edit: I wrote that even changing the default mount options in the file system itself (using tune2fs /dev/md3 -o journal_data) didn’t help. However, that seems to be an error on my part, I didn’t reboot after toggling this, which is apparently required. Thanks to Xake for pointing that out.

November 24, 2012
Gentoo Haskell Herd a.k.a. haskell (homepage, stats, bugs)
EAPI=5, ghc-7.6 and other goodies (November 24, 2012, 20:53 UTC)

Today I have unmasked ghc-7.6.1 in gentoo’s haskell overlay. Quite a few things are broken (like the not-yet-bumped gtk2hs), but major things (like darcs) seem to work fine. Feel free to drop a line on #gentoo-haskell to get things fixed.

Some notes and events in the overlay:

  • ghc-7.6.1 is available for all major arches we try to support
  • a few ebuilds in the overlay were converted to EAPI=5 to use subslot depends (see below)
  • we’ve got a working ghc-9999 ebuild with shared libraries by default! (see below)

ghc-7.6

That beast brought two major problems to its users:

  1. Prelude.catch went away and is called ‘System.IO.Error.catchIOError’ now
  2. the directory package broke the interface of the existing function ‘getModificationTime’ without providing an old compatible variant.

The first breakage is easy to fix with something like:

#if MIN_VERSION_base(4,6,0)
catch :: IO a -> (IOError -> IO a) -> IO a
catch = System.IO.Error.catchIOError
#endif

(or just switch to the extensible-exceptions package if you need support for really old ghc versions).

The second one is literally a disaster

-getModificationTime :: FilePath -> IO ClockTime
+getModificationTime :: FilePath -> IO UTCTime

It is not as straightforward, and the "fixes" in various packages break the PVP in a very funny way.

Look at this example.

Now that package has a random signature type depending on which directory version it decided to build against.

TODO: find a nice and simple ‘:: ClockTime -> IO UTCTime’ compatibility function to end that creeping mess. (I wish the directory package provided it.)

Okay. Enough ranting.

EAPI=5

Some experienced gentoo haskell users already know about the magic haskell-updater tool written by Ivan to fix the mess after a ghc upgrade or a base library upgrade.

A typical symptom of broken libraries is a ghc-pkg check result similar to this:

There are problems in package data-accessor-monads-fd-0.2.0.3:
  dependency "monads-fd-0.1.0.4-830f79a91000e99707aac145b972f786" doesn't exist
There are problems in package LibZip-0.10.2:
  dependency "mtl-2.0.1.0-b1b6de8085e5ea10cc0eb01054b69110" doesn't exist
There are problems in package jail-0.0.1.1:
  dependency "monads-fd-0.1.0.4-830f79a91000e99707aac145b972f786" doesn't exist

Why does it happen?

Well, ghc’s library ABI depends on the ABIs of all the libraries it uses. This has quite nasty consequences.

Once you upgrade a library you need to:

  1. rebuild all the reverse dependencies
  2. and their reverse dependencies (recursively)

The first point can be solved by EAPI 5’s so-called SUBSLOT feature.

The second one is not solved yet, but I was told it is planned for EAPI=6. Thus you will still need to use haskell-updater from time to time.
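
In rough terms (a sketch only; the package names are real, but the exact lines are illustrative rather than copied from the overlay), the subslot carries the library version whose ABI consumers were built against, and the := operator records it at install time:

# in dev-haskell/binary's ebuild: the subslot tracks the installed library version
SLOT="0/${PV}"

# in a consumer such as dev-haskell/sha:
RDEPEND="dev-haskell/binary:="

When the subslot of dev-haskell/binary changes on an upgrade, portage schedules a rebuild of every package whose := dependency recorded the old subslot — which covers exactly the first of the two points above.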

Anyway, I’ve bumped the binary package today; to show how portage picks up all its immediate users:

# emerge -av1 dev-haskell/binary

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  r  U ~] dev-haskell/binary-0.6.4.0:0/0.6.4.0::gentoo-haskell [0.6.2.0:0/0.6.2.0::gentoo-haskell] USE="doc hscolour {test} -hoogle -profile" 0 kB
[ebuild  r  U ~] dev-haskell/sha-1.6.1:0/1.6.1::gentoo-haskell [1.6.0:0/1.6.0::gentoo-haskell] USE="doc hscolour -hoogle -profile" 2,651 kB
[ebuild  r  U ~] dev-haskell/zip-archive-0.1.2.1-r2:0/0.1.2.1::gentoo-haskell [0.1.2.1-r1:0/0.1.2.1::gentoo-haskell] USE="doc hscolour {test} -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/data-binary-ieee754-0.4.3:0/0.4.3::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/dyre-0.8.11:0/0.8.11::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/hxt-9.3.1.1:0/9.3.1.1::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/hashed-storage-0.5.10:0/0.5.10::gentoo-haskell  USE="doc hscolour {test} -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/dbus-core-0.9.3-r1:0/0.9.3::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/hoogle-4.2.14:0/4.2.14::gentoo-haskell  USE="doc fetchdb hscolour -fetchdb-ghc -hoogle -localdb -profile" 0 kB
[ebuild  rR   ~] www-apps/gitit-0.10.0.2-r1:0/0.10.0.2::gentoo-haskell  USE="doc hscolour plugins -hoogle -profile" 0 kB
[ebuild  r  U ~] dev-haskell/yesod-auth-1.1.1.7:0/1.1.1.7::gentoo-haskell [1.1.1.6:0/1.1.1.6::gentoo-haskell] USE="doc hscolour -hoogle -profile" 17 kB
[ebuild  rR   ~] dev-haskell/yesod-1.1.4:0/1.1.4::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB

Total: 12 packages (4 upgrades, 8 reinstalls), Size of downloads: 2,668 kB

Would you like to merge these packages? [Yes/No]

I would like to rebuild all the sha (and so on) revdeps as well, but EAPI can’t express that kind of dependency yet.

EAPI=5 ebuilds are slowly drifting into the main portage tree as well.

ghc-9999

The most interesting thing!

With Mark’s great help we now have a live ghc ebuild built right out of the git tree!

One of the most notable things is dynamic linking by default.

# ldd `which happy` # ghc-7.7.20121116
    linux-vdso.so.1 (0x00007fffb0bff000)
    libHScontainers-0.5.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/containers-0.5.0.0/libHScontainers-0.5.0.0-ghc7.7.20121116.so (0x00007fe616972000)
    libHSarray-0.4.0.1-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/array-0.4.0.1/libHSarray-0.4.0.1-ghc7.7.20121116.so (0x00007fe6166d0000)
    libHSbase-4.6.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/base-4.6.0.0/libHSbase-4.6.0.0-ghc7.7.20121116.so (0x00007fe615df9000)
    libHSinteger-gmp-0.5.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/integer-gmp-0.5.0.0/libHSinteger-gmp-0.5.0.0-ghc7.7.20121116.so (0x00007fe615be6000)
    libHSghc-prim-0.3.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/ghc-prim-0.3.0.0/libHSghc-prim-0.3.0.0-ghc7.7.20121116.so (0x00007fe615976000)
    libHSrts-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/rts-1.0/libHSrts-ghc7.7.20121116.so (0x00007fe615715000)
    libc.so.6 => /lib64/libc.so.6 (0x00007fe61536c000)
    libHSdeepseq-1.3.0.1-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/containers-0.5.0.0/../deepseq-1.3.0.1/libHSdeepseq-1.3.0.1-ghc7.7.20121116.so (0x00007fe615162000)
    libgmp.so.10 => /usr/lib64/libgmp.so.10 (0x00007fe614ef4000)
    libffi.so.6 => /usr/lib64/libffi.so.6 (0x00007fe614cec000)
    libm.so.6 => /lib64/libm.so.6 (0x00007fe6149f2000)
    librt.so.1 => /lib64/librt.so.1 (0x00007fe6147ea000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007fe6145e6000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fe616d41000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fe6143ca000)

$ ls -lh `which pandoc` # ghc-7.7.20121116
-rwxr-xr-x 1 root root 6.3M Nov 16 16:38 /usr/bin/pandoc
$ ls -lh `which pandoc` # ghc-7.4.2
-rwxr-xr-x 1 root root 27M Nov 18 17:46 /usr/bin/pandoc

Actually, the whole ghc-9999 installation is 150MB smaller than ghc-7.4.1 on amd64.

Quite a win!

And as a side effect, revdep-rebuild (or portage’s preserve-libs FEATURES together with the @preserved-rebuild set) can notice (and fix) breakages introduced by upgrades!
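For reference, the usual cleanup pass after such an upgrade would be something like this (these are the stock portage/gentoolkit tools, nothing Haskell-specific):

# rescan installed binaries for libraries that no longer exist and rebuild their consumers
revdep-rebuild -- --ask

# or, with FEATURES="preserve-libs", rebuild everything still linked against preserved libraries
emerge --ask @preserved-rebuild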

Work on ghc cross-compilation in the ebuild continues slowly (it needs some upstream fixes to support toolchains inferred from build/host/target triplets).

Have fun!


November 23, 2012
Ian Whyman a.k.a. thev00d00 (homepage, stats, bugs)
Test Post #1 (November 23, 2012, 13:31 UTC)

Hello Guys,

This is just a test post to make sure the new WordPress is working correctly.

November 22, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
I'm happy I didn't replace my phone! (November 22, 2012, 19:44 UTC)

Since I’ve been in the US, I’ve been thinking of replacing my cellphone, which right now still is my HTC Desire HD (which I wasn’t supposed to pay for, as I got it with an operator contract, but which I ended up paying dearly for to avoid being stuck with a contract that wouldn’t do me any good outside of Italy). The reasons were many, including the fact that it doesn’t get to HSDPA speed here in the US, but the most worrisome was definitely the fact that I had to charge it at least twice a day, and it was completely unreasonable for me to expect it to work for a full day out of the office.

After Google’s failure to provide a half-decent experience with the Nexus 4 orders (I did try to get one, the price was just too sweet, but for me it went straight from “Coming Soon” to “Out of stock”), I was considering going for a Sony (Xperia S), or even (if it wasn’t for the pricetag), a Galaxy Note II with a bluetooth headset. Neither option was a favourite of mine, but beggars can’t be choosers, can they?

The other day, as most of my Twitter/Facebook/Google+ followers will have noticed, my phone also decided to give up: it crashed completely while I was at lunch, and after removing the battery it lost all its settings, due to a corruption of the ext4 filesystem on the SD card (the phone’s memory is just too limited for installing a decent amount of apps). After a complete reset and reinstall, during which I also updated from the latest CyanogenMod version that would work on it to the latest nightly (still CM7, no CM10 for me yet, although the same chipset is present on modern, ICS-era phones from HTC), I had a very nice surprise. The battery has now been running for 29 hours, I spoke for two-and-something hours on the phone, used it for email, Facebook messages, and Foursquare check-ins, and it’s still running (although it is telling me to connect my charger).

So what could have triggered this wide difference in battery life? Well there are a number of things that changed, and a number that were kept the same:

  • I did reset the battery statistics, but unlike most of the guides I did so when the phone was 100% charged instead of completely discharged — simply because I had it connected to the computer and charging while I was in Clockwork Recovery, so I just took the chance to do it.
  • I didn’t install many of the apps I had before, including a few that are basically TSRs – and if you’re old enough you know what I mean! – such as Advanced Call Manager (no more customers, no more calls to filter!) and, the most likely culprit, an auto-login app for Starbucks wifi.
  • While I kept Volume Ace installed, as it’s extremely handy with its scheduler (think of it like a “quiet hours” on steroids, as it can be programmed with many different profiles, depending on the day of the week as well), I decided to disable the “lock volume” feature (as it says it can be a battery drain) and replaced it with simply disabling the volume buttons when the screen is locked (which is why I enabled the lock volume feature to begin with).
  • I also replaced Zeam Launcher, although I doubt that might be the issue, with the new ADW Launcher (the free version — which unfortunately is not replacing the one in CyanogenMod as far as I can tell) — on the other hand I have to say that the new version is very nice, it has a configurable application drawer, which is exactly what I wanted, and it’s quite a bit faster than anything else I’ve tried in a long time.
  • Since I recently ended up replacing my iPod Classic with an iPod Touch (the harddrive in the former clicked and neither Windows nor Linux could access it), I didn’t need to re-install DoggCatcher either, and that one might have been among the power drains, since it also schedules operations in the background and, again as far as I can tell, does not use the “sync” options that Android provides.

In all of this, I fell pretty much in love with my phone again. Having put in a 16GB microSD card a few months ago means I have quite a bit of space for all kinds of stuff (applications as well as data), and thanks to the new battery life I can’t really complain about it that much. Okay, the lack of 3G while in the US is a bit of a pain, but I’m moving to London soon anyway so that won’t be a problem (I know it works in HSDPA there just fine). And I certainly can’t complain about the physical strength of the device… the chassis is made of metal (I’d venture to say it’s aluminum, but I wouldn’t be sure), which makes it strong enough to survive falling onto a stone pavement (twice) and onto concrete (only once) — yes I mistreat my gadgets, either they cope with me or they can get the heck out of my life.

Pavlos Ratis a.k.a. dastergon (homepage, stats, bugs)
Gentoo Miniconf 2012: Review (November 22, 2012, 17:36 UTC)

After one month, I think it is time to write my review of the Gentoo Miniconf. :-)

On 20 and 21 October I attended the Gentoo Miniconf, which was part of the bootstrapping-awesome project: 4 conferences (openSUSE Conference/Gentoo Miniconf/LinuxDays/SUSE Labs) that took place at the Czech Technical University in Prague.

Photo by Martin Stehno

Day 0: After our flight arrived at Prague’s airport, we went straight to the pre-conference welcome party in a cafe near the university where the conference took place. There we met the other Greeks who had arrived in the previous days, and I also had the chance to meet a lot of Gentoo developers and talk with them.

Day 1: The first day started early in the morning. Dimitris and I went to the venue before the conference started in order to prepare the room for the miniconf. The day started with Theo as host welcoming us. There were plenty of interesting presentations that covered a lot of aspects of Gentoo: the Trustees/Council, Public Relations, the Gentoo KDE team, Gentoo Prefix, Security, Catalyst and Benchmarking. The highlight of the day was when Robin Johnson introduced the Infrastructure team and started a very interesting BoF about the state of the Infra team, the currently running web apps and the burning issue of the git migration. The first day ended with lots of beers at the big conference party in the center of Prague, next to the famous Charles Bridge.

Gentoo Developers group photo
Photo by Jorge Manuel B. S. Vicetto

 

Day 2: The second day was more relaxed. There were presentations about Gentoo @ IsoHunt, 3D and Linux graphics, and OpenPGP/GnuPG. After the lunch break an OpenPGP/GnuPG key signing party took place outside the miniconf’s room. After the key signing party we continued with a workshop on Puppet, a presentation about how to use testing on Gentoo to improve QA, and finally Markos and Tomáš talking about how to get involved in Gentoo development. In the end Theo and Michal closed the miniconf session.

 

I really liked Prague, especially the beers and the Czech cuisine.

The Gentoo Miniconf was a great experience for me. I could write many pages about it, because I was in the room both days and saw all the presentations.

I also had the opportunity to get in touch and talk with lots of Gentoo developers and contributors from other FOSS projects. Thanks to Theo and Michal for organizing this awesome event.

More about the presentations and the videos of the miniconf can be found here.
Looking forward to the next Gentoo Miniconf (why not a full conference?).

November 21, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Monitoring HP servers (November 21, 2012, 23:21 UTC)

Sometimes this blog has something like “columns” for long-term topics that keep re-emerging (no pun intended) from time to time. Since I came back to the US last July you can see that one of the big issues I fight with daily is HP servers.

Why is the company I’m working for using HP servers? Mostly because they didn’t have a resident system administrator before I came on board, and just recently they hired an external consultant to set up new servers … the one who set up my nightmare, Apple OS X Server, so I’m not sure which of the two options I prefer.

Anyway, as you probably know if you follow my blog, I’ve been busy setting up Munin and Icinga to monitor the status of services and servers — and that helped quite a bit over time. Unfortunately, monitoring HP servers is not easy. You probably remember I wrote a plugin so I could monitor them through IPMI — it worked nicely until I actually got Albert to expose the thresholds in the ipmi-sensors output, then it broke because HP’s default thresholds are totally messed up and unusable, and it’s not possible to commit new thresholds.

After spending quite some time playing with this, I ended up with write access to Munin’s repositories (thanks, Steve!) and I can now gloat – or rather worry – about having authored quite a few new Munin plugins (the second generation FreeIPMI multigraph plugin is an example, but I also have a sysfs-based hwmon plugin that can get all the sensors in your system in one sweep, a new multigraph-capable Apache plugin, and a couple of SNMP plugins to add to the list). These actually make my work much easier, as they send me warnings when a problem happens without my having to worry about it too much, but of course they are not enough.

After finally being able to replace the RHEL5 (without a current subscription) with CentOS 5, I’ve started looking into what tools HP makes available to us — and found out that there are mainly two that I care about: one is hpacucli, which is also available in Gentoo’s tree, and the other is called hp-health and is basically a custom interface to the IPMI features of the server. The latter actually has a working, albeit not really polished, plugin in the Munin contrib repository – which I guess I’ll soon look to transform into a multigraph-capable one; I really like multigraph – and that’s how I ended up finding it.
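For reference, these are the kind of queries the two tools answer (the exact output varies between server generations):

# hpacucli: show the Smart Array controller configuration with its logical and physical drives
hpacucli ctrl all show config

# hp-health ships hpasmcli, which exposes fans, temperatures, power supplies and so on
hpasmcli -s "show fans; show temp"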

At any rate, at that point I realized that I had not added one of the most important checks: the SMART status of the harddrives — originally because I couldn’t get smartctl installed. So I went and checked for it — the older servers are almost all running their disks as IDE (because that’s the firmware’s default.. don’t ask), so those are a different story altogether; the newer servers running CentOS are using an HP controller with SAS drives, using the CCISS (block-layer) driver from the kernel, while one is running Gentoo Linux and uses the newer, SCSI-layer driver. None of them can use smartctl directly; they have to use a special command: smartctl -d cciss,0 — and then either point it to /dev/cciss/c0d0 or /dev/sda depending on which of the two kernel drivers you’re using. They don’t provide all the data that SATA drives provide, but they provide enough for Munin’s hddtemp_smartctl, and they do provide a health status…
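In practice the invocations look something like this (pick the device node matching whichever kernel driver is in use):

# CCISS block-layer driver (the CentOS machines)
smartctl -a -d cciss,0 /dev/cciss/c0d0

# newer SCSI-layer driver (the Gentoo machine)
smartctl -a -d cciss,0 /dev/sda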

As far as Munin is concerned, your configuration would then be something like this in /etc/munin/plugin-conf.d/hddtemp_smartctl:

[hddtemp_smartctl]
user root
env.drives hd1 hd2
env.type_hd1 cciss,0
env.type_hd2 cciss,1
env.dev_hd1 cciss/c0d0
env.dev_hd2 cciss/c0d0

Depending on how many drives you have and which driver you’re using you will have to edit it of course.

But when I tried to use the default check_smart.pl script from the nagios-plugins package I had two bad surprises: the first is that it tries to validate the parameter passed to identify the device type to smartctl, refusing to work for a cciss type, and the other is that it doesn’t consider the status message that is printed by this particular driver. I was so pissed that, instead of trying to fix that plugin – which still comes with special handling for IDE-based harddrives! – I decided to write my own, using the Nagios::Plugin Perl module, and releasing it under the MIT license.

You can find my new plugin in my github repository where I think you’ll soon find more plugins — as I’ve had a few things to keep under control anyway. The next step is probably using the hp-health status to get a good/bad report, hopefully for something that I don’t get already through standard IPMI.
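Just to illustrate the logic involved (this is not the actual Nagios::Plugin-based plugin from the repository, only a rough shell sketch; the health strings it matches are an assumption on my part):

#!/bin/sh
# illustrative only: report the SMART health of drive N behind a CCISS controller
# usage: check_cciss_smart.sh <drive-number> <device>, e.g.: check_cciss_smart.sh 0 /dev/cciss/c0d0
OUT=$(smartctl -H -d cciss,"$1" "$2")
if echo "$OUT" | grep -qE 'OK|PASSED'; then
    echo "OK - drive $1 on $2 reports a healthy SMART status"
    exit 0
else
    echo "CRITICAL - drive $1 on $2 reports SMART problems"
    exit 2
fi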

The funny thing about HP’s utilities is that they for the most part just have to present data that is already available from the IPMI interface, but there are a few differences. For instance, the fan speed reported by IPMI is exposed in RPMs — which is the usual way to expose the speed of fans. But on the HP utility, fan speed is actually exposed as a percentage of the maximum fan speed. And that’s how their thresholds are exposed as well (as I said, the thresholds for fan speed are completely messed up on my HP servers).

Oh well, whatever else happens lately, this will be enough for now.

November 20, 2012
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, stats, bugs)
Project homepages for slackers (November 20, 2012, 03:50 UTC)

Creating a homepage and documentation for a project is a boring task. I have a few projects that have not been released yet due to lack of time and motivation to create a simple webpage and write down some Sphinx-based documentation.

To fix this issue I did a quick hack based on my favorite pieces of software: Flask, docutils and Mercurial. It is a single-file web application that creates homepages automatically for my projects, using data gathered from my Mercurial repositories. It uses the tags, the README file, and a few variables declared in the repository's .hgrc file to build an interesting homepage for each project. I just need to improve my READMEs! :)

It works similarly to the PyPI Package Index, but accepts any project hosted in a Mercurial repository, including my non-Python and Gentoo-only projects.

My instance of the application lives here:

http://projects.rafaelmartins.eng.br/

The application is highly tied to my workflow, e.g. the way I handle tags and the directory structure of my repositories on my server, but the code is available in a Mercurial repository:

http://hg.rafaelmartins.eng.br/projects/

Most of my projects aren't listed yet, and I'll start enabling them as soon as I fix their READMEs.

November 19, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)

This past Saturday (17 November 2012), I participated in the St. Jude Children’s Hospital Give Thanks Walk. This year was a bit different than the previous ones, as it also had a competitive 5k run (which was actually a 6k). I woke up Saturday morning, hoping that the weather report was incorrect, and that it would be much warmer than they had anticipated. However, it was not. When I arrived at the race site (which was the beautiful Creve Coeur Lake Park [one of my absolute favourites in the area]), it was a bit nippy at 6°C. However, the sun came out and warmed up everything a bit. Come race time, it wasn’t actually all that bad, and at least it wasn’t raining or snowing. :)

When I started the race, I was still a bit cold even with my stocking cap. However, by about halfway through the 6k, I had to roll up my sleeves because I was sweating pretty badly. It was an awesome run, and I felt great at the end of it. I think that the best part was being outside with a bunch of people that were also there to support an outstanding cause like Saint Jude Children’s Hospital. There were some heartfelt stories from families of patients, and nice conversations with fellow runners.

I actually finished the race in 24’22″, which wasn’t all that bad of a time:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - runner placement list
Click to enlarge

In fact, it put me in first place, with 2’33″ between me and the runner-up! Though coming in first place wasn’t a goal of mine, I was in competition with myself. I had set a personal goal of completing the 6k in 26’30″ and actually came in under it! My placement earned me both a medal and a great certificate:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - first-place medal and certificate
Click to enlarge

After the announcements of the winners and thanks to all of the sponsors, the female first-place runner (Lisa Schmitz) and I had our photo taken together in front of the finish line:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - male and female first-place runners
Click to enlarge

Thank you to everyone that sponsored and supported me for this run! The children and families of Saint Jude received tens-of-thousands of dollars just from the Saint Louis race alone!

Cheers,
Nathan Zachary (“Zach”)

Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
GPG Key Signing Party (November 19, 2012, 08:19 UTC)

Last Thursday we had a GPG Key & CAcert signing party at the SUSE office, inviting anybody who wanted to get their key signed. I would say that it went quite well: we had about 20 people showing up, we had some fun, and we now trust each other some more!

GPG Key Signing

We started with the GPG key signing. You know, the usual stuff: two rows moving against each other, people exchanging paper slips…

Signing keys

For actually signing keys at home, we recommended people use the signing-party package, and caff in particular. It’s an easy-to-use tool as long as you can send mails from the command line (there are some options to set it up against SMTP directly, but I ran into some issues). All you need to do is to call

caff HASH

and it will download the key, show you the identities and fingerprint, sign it for you and send each signed identity to the owner by itself via e-mail. And all that with a nice wizard. It couldn’t be simpler than that.

Importing signatures

When my signed keys started coming back, I was wondering how to process them. There were simply too many emails. I searched a little bit, but I got lazy quite soon, so, as I have all my mails stored locally in a Maildir by offlineimap, I just wrote the following one-liner to import them all.

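   # decrypt every caff mail ('Your signed ...') found in the local Maildir
   # and import the signatures it carries: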
   grep -Rl 'Your signed' INBOX | while read i; do 
        gpg -d "$i" | gpg --import -a;
   done

Maybe somebody will find it useful as well, and maybe somebody more experienced will tell me in the comments how to do it correctly ;-)

CAcert

One friend of mine – Theo – really wanted to be able to issue CAcert certificates, so we added a CAcert assurance to the program. For those who don’t know, CAcert is a nonprofit certification authority based on a web of trust. You get verified by volunteers, and when enough of them trust you enough, you are trusted by the authority itself. When people verify you, they give you some points based on how trusted they are and how much they trust you. Once you get 50 points, you are trusted enough to get your certificate signed, and once you have 100, you are trusted enough to start verifying other people (after a little quiz to make sure you know what you are doing).

I knew that my colleague Michal Čihař was able and willing to issue some points, but as he was starting by issuing 10 and I 15, I also asked a few nearby-living assurers from the CAcert website. Unfortunately I got no reply, but then again we were organizing everything quite quickly. Luckily we had another colleague – Martin Vidner – show up who was able to issue some points. I assured another 11 people at the party and can now give out 25 points, as can Michal, and I guess Martin is now somewhere around 20 as well. So it means that if you need to be able to issue CAcert certificates, just visiting the SUSE office in Prague is enough! But still, contact us beforehand, sometimes we do have a vacation ;-)

Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
Ah, LXC! Isn't that good now? (November 19, 2012, 02:55 UTC)

If you’re using LXC, you might have noticed that there was a 0.8.0 release lately, finally, after two release candidates, one of which was never really released. Do you expect everything would go well with it? Hah!

Well, you might remember that over time I found that the way you’re supposed to mount directories in the configuration files changed, from using the path to be used as root, to the default root path used by LXC, and every time that happened, no error message was issued even though you were trying to mount directories outside of the tree the containers run in.

Last time the problem I hit was that if you try to mount a volume instead of a path, LXC expected you to use as a base path the full realpath, which in the case of LVM volumes is quite hard to know by heart. Yes you can look it up, but it’s a bit of a bother. So I ended up patching LXC to allow using the old format, based on the path LXC mounts it to (/usr/lib/lxc/rootfs). With the new release, this changed again, and the path you’re supposed to use is /var/lib/lxc/${container}/rootfs — again, it’s a change in a micro bump (rc2 to final) which is not documented…. sigh. This would be enough to irk me, but there is more.

The new version also seems to have a bad interaction with the kernel when stopping a container — the virtual ethernet device (veth pair) is not cleaned up properly, and that causes the process to stall, with something insistently calling into the kernel and failing. The result is a not-happy Diego.

Without even having to add the fact that the interactions between LXC and SystemD are not clear yet – with the maintainers of the two projects trying to sort out the differences between them, at least I don’t have to care about it anytime soon – this should be enough to make it explicit that LXC is not ready for prime time so please don’t ask.

On a different, interesting note, the vulnerability publicized today that can bypass KERNEXEC? Well, unless you disable the net_admin capability in your containers (which also means you can’t set the network parameters, or use iptables), a root user within a container could leverage that particular vulnerability… which is one extremely good reason not to have untrusted users with root on your containers.

Oh well, time to wait for the next release and see if they can fix a few more issues.

November 18, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, stats, bugs)
A matter of copyrights (November 18, 2012, 16:55 UTC)

One of the issues that came through with the recent drama about the n-th udev fork is the matter of assigning copyright to the Gentoo Foundation. This topic is not often explored, mostly because it really is a minefield, and – be ready to be surprised – I think the last person who actually said something sane on the topic has been Ciaran.

Let’s see a moment what’s going on: all ebuilds and eclasses in the main tree, and in most of the overlays, report “Gentoo Foundation” as the holder of copyright. This is so much a requirement that we’re not committing to the tree anything that reports anyone else’s copyright, and we refuse the contribution in that case for the most part. While it’s cargo-culted at this point, it is also an extremely irresponsible thing to do.

First of all, nobody ever signed a copyright assignment form for the Gentoo Foundation, as far as I can tell. I certainly didn’t. And this matters especially as we get more and more proxied maintainers, as they almost always are not Gentoo Foundation members (Foundation membership comes after a year as a developer, if I’m not mistaken — or something along those lines; I honestly forgot because, honestly, I’m not following the Foundation’s doings at all).

Edit: Robin made me aware that a number of people did sign a copyright assignment, first to Gentoo Technologies, which was then re-assigned to the Foundation. I didn’t know that — I would be surprised if a majority of the currently active developers knew about it either. As far as I can tell, copyright assignment was no longer part of the standard recruitment procedure when I joined, as, as I said, I didn’t sign one. Even assuming I was the first guy who didn’t sign it, 44% of the total active developers wouldn’t have signed it, and that’s 78% of the currently active developers (give or take). Make up your mind on these numbers.

But even if we had all signed said copyright assignment, it would be for a vast part invalid. The problem with copyright assignments is that they are just that, copyright assignments… which means they only work where the legal regime concerning authors’ works is that of copyright. For most (all?) of Europe, the regime is actually that of author’s rights, and as VideoLAN shows it’s a bit more complex, as the authors have no real way to “assign” those rights.

Edit²: Robin also pointed at the fact that FSFe, Google (and I’d add Sun, at the very least) have a legal document, usually called a Contributor License Agreement (when it’s basically replacing a full-blown assignment) or Fiduciary Licence Agreement (the more “free software friendly” version). This only solves half the problem, as the Foundation would still not own the copyright, which means that you still have to come up with a different way to identify the contributors, as they still retain their rights even though they leave any decision regarding their contributions to the entity they sign the CLA/FLA with.

So the whole thing stinks of half-understood problem.

This has actually gotten more complex recently, because the sci team borrowed an eclass (or the logic for an eclass) from Exherbo — which actually handles the individual contributors’ copyrights. This is a much more sensible approach on the legal side, although I find the idea of having to list, let’s say, 20 contributors at the top of every 15-line ebuild a bit of overkill.

My proposal would then be to have a COPYRIGHTS.gentoo file in every package directory, where we list the contributors to the ebuild. This way even proxied maintainers, and one-time contributors, get their credit. The ebuild can then refer to “see the file” for the actual authors. A similar problem also applies to files that are added to the package, including, but not limited to, the init scripts, and making the file formatted, instead of freeform, would probably allow crediting those as well.
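To make that a bit more concrete, here is one possible shape for such a file (purely illustrative; the post only sketches the idea, so names and format below are entirely hypothetical):

# COPYRIGHTS.gentoo (hypothetical format)
# years          contributor                              contribution
2010-2012        A. Developer <adev@example.org>          ebuild, init script
2012             B. Contributor <bcontrib@example.org>    version bumps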

Now, this is just a sketch of an idea — unlike Fabio, whose design methodology I do understand and respect, I prefer posting as soon as I have something in mind, to see if somebody can easily shoot it down or if it has wings to fly, and also in the vain hope that if I don’t have the time, somebody else would pick up my plan — but if you have comments on it, I’d be happy to hear them. Maybe after a round of comments, and another round of thinking about it, I’ll propose it as a real GLEP.

Secretly({Plan, Code, Think}) && PublishLater() (November 18, 2012, 12:19 UTC)

During the last years I started several open source projects. Some turned out to be useful, maybe successful, many were just rubbish. Nothing new until here.

Every time I start a new project, I usually don’t really know where I am headed and what my long-term goals are. My excitement and motivation typically come from solving simple everyday personal problems or just addressing {short,mid}-term goals. This is actually enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It’s just me and my compiler/interpreter having fun together. I call this the “initial grace period”.

During this period, I usually never share my idea with other people, ever. I kind of keep my project in a locked pod, away from hostile eyes. Should I share my idea at this time, the project might get seriously injured and my excitement severely affected. People would only see the outcome of my thought, not the thought process itself nor the detailed plans behind it, because I just don’t have them! Although this might be considered against basic Software Engineering rules or against some exotic “free software” principles, it works for me.

I don’t want my idea to be polluted as long as I don’t have something that resembles it in the form of a consistent codebase. And until that time, I don’t want others to see my work and judge its usefulness based on incomplete or just inconsistent pieces of information.

At the very same time, writing documents about my idea and its goals beforehand is also a no-go, because I have “no clue” myself as mentioned earlier.

This is why revision control systems and the implicit development model they force on individuals are so important, especially for me.
Giving you the ability to code on your stuff, changes, improvements, without caring about the external world until you are really really done with it, is what I ended up needing so so much.
Every time I forgot to follow this “secrecy” strategy, I had to spend more time discussing my (still confused?) ideas about the {why,what,how} of what I am doing than actually coding. Round trips are always expensive, no matter what you’re talking about!

Many of the internal tools we at Sabayon successfully use have gone through this development process. Other staffers sometimes say things like “he’s been quiet in the last few days, he must be working on some new features”, and it turns out that most of the time this is true.

This is what I wanted to share with you today, though. Don’t wait for your idea to become clearer in your mind, it won’t happen by itself. Just take a piece of paper (or your text editor), start writing down your own secret goals (don’t make the mistake of calling them “functional requirements” like I did sometimes), divide them into modest/expected and optimistic/crazy, and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked, and go back to coding again. Iterate until you’re satisfied with the result, and then, eventually, let your code fly away to some public site.

But, until then, don’t tell anybody what you’re doing! Don’t expect any constructive feedback during the “initial grace period”; it is very likely that it will just be destructive.

Git, I love ya!


November 17, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
The hardened project continues going forward… (November 17, 2012, 19:34 UTC)

This Wednesday, the Gentoo Hardened team held its monthly online meeting, discussing the things that have been done in the last few weeks and the ideas that are being worked out for the next. As I did with the last few meetings, allow me to summarize it for all interested parties…

Toolchain

Upstream GCC development on the 4.8 version progressed into the 3rd stage of its development cycle. Sadly, many of our hardened patches didn’t make the release. Zorry will continue working on these things, hopefully still being able to merge a few – and otherwise it’ll be for the next release.

For the MIPS platform, we might not be able to support the hardenedno* GCC profiles [1] in time. However, this is not seen as a blocker (we’re mostly interested in the hardened ones, not the ones without hardening ;-) so this could be done later on.

Blueness is migrating the stage building for the uclibc stages towards catalyst, providing more clean stages. For the amd64 and i686 platforms, the uclibc-hardened and uclibc-vanilla stages are already done, and mips32r2/uclibc is on the way. Later, ARM stages will be looked at. Other platforms, like little endian MIPS, are also on the roadmap.

Kernel

The latest hardened-sources (~arch) package contains a patch supporting the user.* namespace for extended attributes in tmpfs, as needed for the XATTR_PAX support [2]. However, this patch has not been properly investigated nor tested, so input is definitely welcome. During the meeting, it was suggested to cap the length of the attribute value and only allow the user.pax attribute, as we are otherwise allowing unprivileged applications to “grow data” in the kernel memory space (the tmpfs).
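For the curious, the attribute itself can be poked at with the regular xattr tools; this is just a sketch, with the attribute name taken from footnote [2] and a purely illustrative flag value:

# mark a binary through the extended attribute instead of the ELF header
setfattr -n user.pax -v "m" /tmp/some-binary
# and read the marking back
getfattr -n user.pax /tmp/some-binary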

Prometheanfire confirmed that recent-enough kernels (3.5.4-r1 and later) with nested paging do not exhibit the performance issues reported earlier.

SELinux

The 20120725 upstream policies are stabilized on revision 5. Although a next revision is already available in the hardened-dev overlay, it will not be pushed to the main tree due to a broken admin interface. Revision 7 is slated to be made available later the same day to fix this, and is the next candidate for being pushed to the main tree.

The newer userspace utilities for SELinux, released in September, are also going to be stabilized in the next few days (at the time of writing this post, they already are ;-). These also support epatch_user so that users and developers can easily add in patches to try out stuff without having to repackage the application themselves.

grSecurity and PaX

The toolchain support for PT_PAX (the ELF-header based PaX markings) is due to be removed soon, meaning that the XATTR_PAX support will need to be matured by then. This has a few consequences on available packages (which will need a bump and fix) such as elfix, but also on the pax-utils.eclass file (interested parties are kindly requested to test out the new eclass before it reaches “production”). Of course, it will also mean that the new PaX approach needs to be properly documented for end users and developers.

pipacs also mentioned that he is working on a paxctld daemon. Just like SELinux’s restorecond daemon, this daemon will look for files and check them against a known database of binaries with their appropriate PaX markings. If the markings are set differently (or not set), the paxctld daemon will rectify the situation. For Gentoo, this is less of a concern as we already set the proper information through the ebuilds.

Profiles

The old SELinux profiles, which had already been deprecated for a while, have been removed from the portage tree. That means that all SELinux-using profiles use the features/selinux inclusion rather than a fully built (yet difficult to maintain) profile definition.

System Integrity

A few packages, needed to support or work with ima/evm, have been pushed to the hardened-dev overlay.

Documentation

The SELinux handbook has been updated with the latest policy changes (such as supporting the named init scripts). We also documented SELinux policy constraints, which was long overdue.

So again, a nice month of (volunteer) work on the security state of Gentoo Hardened. Thanks again to all (developers, contributors and users) for making Gentoo Hardened what it is today. Zorry will send out the meeting log to the mailing list later, so you can look at the more gory details of the meeting if you want.

  • [1] GCC profiles are a set of parameters passed on to GCC as a “default” setting. Gentoo Hardened uses GCC profiles to support using non-hardening features if the user wants to (through the gcc-config application).
  • [2] XATTR_PAX is a new way of handling PaX markings on binaries. Previously, we kept the PaX markings (i.e. flags telling the kernel PaX code to allow or deny specific behavior or enable certain memory-related hardening features for a specific application) as flags in the binary itself (inside the ELF header). With XATTR_PAX, this is moved to an extended attribute called “user.pax”.

Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)

A few days ago I finished fiddling with the Open Build Service (OBS) packages in our main tree. Now when anyone wants to mess with OBS, they just have to emerge dev-util/osc and have fun with it.

What the hell is obs?

OBS is a pretty cool service that allows you to specify how to build your package and its dependencies in one .spec file, and to deliver the results to multiple archs/distros (Debian, SUSE, Fedora, CentOS, Arch Linux) without caring about how it happens.

The primary instance is run for SUSE and is free for anyone to use (e.g. you don’t have to build SUSE packages there if you don’t want to :P). There are two ways to interact with the whole tool: one is the web application, which is a real PITA, and the other is the osc command line tool I finished fiddling with.
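A few typical osc invocations, just to give a feel for it (the project, package and repository names below are placeholders):

# check out a package from the build service
osc checkout home:myuser mypackage

# inside the checkout, build it locally against a given target and architecture
osc build openSUSE_12.2 x86_64

# and send the changes back
osc commit -m "fix the build"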

Okay so why did you do it?

Well, I work at SUSE and we are free to use whatever distro we want as long as we can complete our tasks. I like to improve stuff: I want to be able to fix bugs in SLE/openSUSE without having any chroot/VM with the named system installed, and for such a task this works pretty well :-)

How -g0 may be useful (November 17, 2012, 13:35 UTC)

Usually I use -g0 in my CFLAGS/CXXFLAGS; it is useful for spotting broken buildsystem behavior.
ago@arcadia ~ $ portageq envvar CFLAGS
-march=native -O2 -g0

Here is an example where the buildsystem seds away only ‘-g‘, leaving the ‘0‘ behind and causing a compile failure:

x86_64-pc-linux-gnu-gcc -DNDEBUG -march=native -O2 0 -m64 -O3 -Wall -DREGINA_SHARE_DIRECTORY=\"/usr/share/regina\" -DREGINA_VERSION_DATE=\""31 Dec 2011"\" -DREGINA_VERSION_MAJOR=\"3\" -DREGINA_VERSION_MINOR=\"6\" -DREGINA_VERSION_SUPP=\"\" -DHAVE_CONFIG_H -DHAVE_GCI -I./gci -I. -I. -I./contrib -o funcs.o -c ./funcs.c
x86_64-pc-linux-gnu-gcc: 0: No such file or directory
./funcs.c: In function '__regina_convert_date':
./funcs.c:772:14: warning: array subscript is above array bounds
make: *** [funcs.o] Error 1
emake failed

So adding it to your CFLAGS/CXXFLAGS may be a good idea.
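In make.conf terms that is simply (the other flags are of course whatever you already use):

# /etc/portage/make.conf
CFLAGS="-march=native -O2 -g0"
CXXFLAGS="${CFLAGS}"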

November 16, 2012
Fwd: “Apple Now Owns the Page Turn” (November 16, 2012, 22:58 UTC)

Article: http://bits.blogs.nytimes.com/2012/11/16/apple-now-owns-the-page-turn/

(Heard about it from LWN https://lwn.net/Articles/525493/rss)

Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
One month “in” – some sort of status report (November 16, 2012, 12:27 UTC)

I’d like to write some sort of public status report or brain dump of what’s going on. I’ve been on the road for one month of the planned 12 months and just “Living the Dream”, as many of the fellow travelers would say. I’ve met so many people so far; some have been really inspiring, some not. I’m embracing the idea of slow travel and/or home-base travel. I really don’t care how you travel, but the Eurorail, every-capital-city-for-two-days thing is not what I want to do. I’ve learned that already from talking to people and from my preconceived values. So far I’m on track, having visited only two countries: the Netherlands and the Czech Republic. I’m really diving into the Czech Republic – mind you, I didn’t really plan on that, but it somehow happened and I’m very OK with that. However, the bad side of that is that I’m staying still while people are moving on every 2-5 days. Since the hostel gives a free beer token to every guest, I see new people every day just long enough for small talk – I haven’t been in that position before, so it’s new for this lifelong computer guy from Minnesota… (Self-reflection, yay) Annnyway, I’m having fun, I’m enjoying myself, I don’t like to “not-work”, I am forcing myself to take the unbeaten path, I’m getting more comfortable with myself and my environment, I’m relaxed, I can go with the flow, I know “it” will work out, I drink tea daily, I started to enjoy coffee, I’m living life, I am balanced. Go me.

As of this writing, I have been in the Netherlands for 7 days spending $55 USD per day, and in the Czech Republic for 28 days at $28 USD per day. With my pre-trip expenses and so on, I’ve spent $65 USD per day overall.

I’m doing fine, read my posts about where I’ve been, look at my pictures on Flickr, interact with me on Twitter for what I am doing, and check back often for what I’ve been doing. Ciao.

(After thought: considering that I’ve been at (or lived at) a dropzone for nearly every weekend this past summer (and the past 6 years), I’m really missing skydiving. Not going to lie, I can’t wait to jump out of a plane, most places around me are closing for the winter and I’m not properly prepared to jump in the cold even if they were open :( poor planning on my part. I didn’t think it would be so bad, taking a hiatus, but that sport is such a part of my life. I miss it.)

November 14, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Kutná Hora / Olomouc weekend trip (November 14, 2012, 20:24 UTC)

I took a weekend trip to Kutná Hora and Olomouc. Kutná Hora was on the way via train so I got off there (with a small connection train) and visited the Bone Church, a common gravesite of over 40,000 people. I feel like it is one of those things that will just disappear someday – bones won’t last forever in the open air like that.

Prague - Oct 2012-121

Otherwise, Kutná Hora was just a small town and I didn’t do much else there besides get on the train again for the city of Olomouc (a-la-moats). I probably missed something in Kutná Hora, but it wasn’t obvious to me and I had only heard about the church. Olomouc is the 6th largest city in the Czech Republic, and largely a university town. I stayed in a lovely small hostel, the Poet’s Corner (highly recommended), for a few nights. Most students go home on the weekends, which I think is odd, but I did get to talk to some students (from a different city, who were home for the weekend) and went out to enjoy the student bars. Good times. I recommend seeing Olomouc if you have a few days open in your itinerary and are not doing the crazy whirlwind capital-city Europe tour. There are some nice things to see; I just had to watch the country’s ‘other’ astronomical clock. Also, a few microbreweries, which were delicious, and I even did a beer spa for fun (why not?).

Prague - Oct 2012-136

Kutná Hora Pics
Olomouc Pics

Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
openSUSE Connect Survey results (November 14, 2012, 15:46 UTC)

Last week I posted a survey about openSUSE Connect. Although some answers are still coming in and you are still welcome to provide more feedback, let’s take a look at some results. Some numbers first: openSUSE Connect is not a really busy website, it gets about 80 distinct visitors per day. Not much, but not a total wasteland either. Related to this is another number: more than half of the people responding to the survey have never heard about openSUSE Connect. So it sounds like we should speak about it more…

Now something regarding the feedback. Most people think that it is a good idea and that it either is already useful or can become quite useful. But even though the feedback was positive, a lot of people made various suggestions on how to improve it. So what can be done to make it better? Most of the feedback centred around the following two topics.

Social aspects

One frequently mentioned topic was the social aspect of Connect. It is a social network where you can’t post status messages and where it is not easy to follow what people are up to. So it’s kind of an antisocial social network. There were people asking for the ability to share what they are up to – status messages, chat and the stuff they know from Facebook or Google+. On the other hand, there were people who complained that they don’t want yet another social network to maintain. And the third opinion, which I think is somewhere in between, was to provide easier integration with already existing social networks like Facebook, Twitter or Google+. That, I would say, sounds like the most reasonable solution.

More polishing

This was mentioned for most of the site’s aspects. openSUSE Connect is a good thing, it contains many great ideas, but somehow they are not polished enough – nor is Connect itself. People complained that the UI could be nicer and more user-friendly, and that widgets miss some finishing touches. So what is needed here? Probably some designers to step in and fix the UI ;-) But apart from that, some widgets could use some coding touches as well. So if you don’t like how something is done, feel free to submit a patch ;-)

Conclusion?

People didn’t know about openSUSE Connect and there are things to be polished. We had some good ideas and we implemented them when we started with Connect. But there is still quite some work left before Connect is perfect. Work that can be picked up by anybody, as openSUSE Connect is open source, written in PHP, and we even have documentation mentioning, among other things, how to work on it. We can of course just let it live as it is and use it for membership and elections, for which it works well. But it looks like my survey got people at least a little bit interested; for example, victorhck submitted a logo proposal for openSUSE Connect! So maybe we will get some other contributors as well ;-) And let’s see how I will spend my next Hackweek :-D

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
RIP recruiting.gentoo.org (November 14, 2012, 13:28 UTC)

The recruiters team announced a few months ago that they had decided not to use the recruiting webapp any more, and to move back to the txt quizzes instead. Additionally, the webapp started showing random Ruby exceptions, and since nobody is willing to fix them, we found it a good opportunity to shut down the service completely. There have been people still working in it though (including me), so if you are a mentor, mentee or someone who had answers in there, please let me know so I can extract your data and send it to you.
And now I’d like to state my personal thoughts regarding the webapp and the recruiters’ decision to move back to the quizzes. First of all, I used this webapp as a mentor a lot from the very first moment it came up, and I mentored about 15 people through it. It was a really nice idea, but not properly implemented. With the txt quizzes, the mentees were sending me the txt files by mail, then we had to schedule an IRC meeting to review the answers, or I had to send the mail back, etc. It was hell for both me and the mentee. I was ending up with hundreds of attachments, trying to find the most recent one (or the previous one to compare answers), and the mentee had to dig through IRC logs and mails to find my feedback.
The webapp solved that issue, since the mentee was putting their answers in a central place, and I could easily leave comments there. But it had a bunch of issues, mostly UI related. It required too many clicks for simple actions, the notification system was broken by design, and I had no easy way to see diffs or to see the progress of my mentee (answers replied / answers left). For example, in order to approve an answer, I had to press “Edit”, which transferred me to a new page, where I had to tick “Approve” and press save. Too much, I just wanted to press “Approve”! When I decided to start filing bugs, I was surprised to find out that all my UI complaints had already been reported; clearly I was not alone in this world.
In short: cool idea, but annoying UI. That was not the problem though; the real problem is that nobody was willing to fix those issues, which led to the recruiters’ decision to move back to txt quizzes. But I am not going back to the txt quizzes, no way. Instead, I will start a Google doc and tell my mentees to put their answers there. This will allow me to write my comments below their answers in a different font/color, so I can have async communication with them. I was present during the recruitment interview session of my last mentee, Pavlos, and his recruiter Markos fired up a Google doc for some coding answers, and it worked pretty well. So I decided to do the same. If the recruiters want the answers in plain text, fine, I can extract them easily.
I’d like to thank Joachim Bartosik a lot for his work on the webapp and the interesting ideas he put into it (it saved me a lot of time, and made the mentoring process fun again), and Petteri Räty, who mentored Joachim in creating the recruiting webapp as a GSoC project and helped in deploying it to the infra servers. I am kind of sad that I had to shut it down, and I really hope that someone steps up and revives it or creates an alternative. There was some discussion regarding the webapp during the Gentoo Miniconf; I hope it doesn’t sink.

Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
An informal comparison (November 14, 2012, 03:14 UTC)

A few people asked me to write this down so that they can reference it - so here it is.
A completely unscientific comparison between Linux flavours and how they behave:

CentOS 5 (because upgrading is impossible):

             total       used       free     shared    buffers     cached
Mem:          3942       3916         25          0        346       2039
-/+ buffers/cache:       1530       2411

And on the same hardware, doing the same jobs, a Gentoo:
             total       used       free     shared    buffers     cached
Mem:          3947       3781        166          0        219       2980
-/+ buffers/cache:        582       3365
So we use roughly 1/3rd the memory to get the same things done (fileserver), and an informal performance analysis gives us roughly double the IO throughput.
On the same hardware!
(The IO difference could be attributed to the ext3 -> ext4 upgrade and the kernel 2.6.18 -> 3.2.1 upgrade)

Another random data point: A really clumsy mediawiki (php+mysql) setup.
Since php is singlethreaded the performance is pretty much CPU-bound; and as we have a small enough dataset it all fits into RAM.
So we have two processes (mysql+php) that are serially doing things.

Original CentOS install: ~900 qps peak in mysql, ~60 seconds walltime to render a pathological page
Default-y Gentoo: ~1200 qps peak, ~45-50 seconds walltime to render the same page
Gentoo with -march=native in CFLAGS: ~1800qps peak, ~30 seconds render time (this one was unexpected for me!)

And a "move data around" comparison: 63GB in 3.5h vs. 240GB in 4.5h - or roughly 4x the throughput

So, to summarize: For the same workload on the same hardware we're seeing substantial improvements between a few percent and roughly four times the throughput, for IO-bound as well as for CPU-bound tasks. The memory use goes down for most workloads while still getting the exact same results, only a lot faster.

Oh yeah, and you can upgrade without a reinstall.

November 13, 2012
Donnie Berkholz a.k.a. dberkholz (homepage, stats, bugs)

App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.


Tagged: development, gentoo

November 12, 2012
Equo code refactoring: mission accomplished (November 12, 2012, 20:34 UTC)

Apparently it’s been a while since my last blog post. This, however, does mean that I’ve been very busy on the coding side, which is what you may prefer, I guess.

The new Equo code is hitting the main Sabayon Entropy repository as I write. But what’s it about?

Refactoring

First things first. The old codebase was ugly, as in, really ugly. Most of it was originally written in 2007 and maintained throughout the years. It wasn’t modular, object oriented, bash-completion friendly, or man-page friendly, and most importantly, it did not use any standard argument parsing library (because there was no argparse module at the time and optparse was about to be deprecated).

Modularity

Equo subcommands are just stand-alone modules. This means that adding new functionality to Equo is only a matter of writing a new module containing a subclass of “SoloCommand” and registering it against the “command dispatcher” singleton object. Also, the internal Equo library now has its own name: Solo.

Backward compatibility

In terms of the command line exposed to the user, there are no substantial changes. During the refactoring process I tried not to break the current “equo” syntax. However, syntax that was deprecated more than 3 years ago is gone (for instance, stuff like “equo world”). In addition, several commands now sport new arguments (have a look at “equo match”, for example).

Man pages

All the equo subcommands are provided with a man page, which is available through “man equo-<subcommand name>”. The information required to generate the man page is tightly coupled with the module code itself and automatically generated via some (Python + a2x)-fu. As you can understand, maintaining both the code and its documentation becomes easier this way.

Bash completion

Bash completion code lives together with the rest of the business logic. Each subcommand exposes its bash completion options through a class instance method called “list bashcomp(last_argument_str)”, overridden from SoloCommand. In layman’s terms, you’ve got working bashcomp awesomeness for every equo command available.

Where to go from here

Tests, we need more tests (especially regression tests). And I have this crazy idea to place tests directly in the subcommand module code.
Testing! Please install entropy 149 and play with it, try to break it and report bugs!


Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
WordPress FLV plugin WP OS FLV slow (November 12, 2012, 19:56 UTC)

Over the past few weeks, I’ve been designing a basic site (in WordPress) for a new client. This client needs some embedded FLVs on the site, and doesn’t want them (for good reason) to be directly linked to YouTube. As such, and seeing as I didn’t want to make the client write the HTML for embedding a flash video, I installed a very simple FLV plugin called WP OS FLV.

The plugin worked exactly as I had hoped it would, cleanly showing the FLV with just a few basic options. However, I noticed that the pages with FLVs embedded using the plugin were significantly slower to load than pages without FLVs. Doing some fun experimentation with cURL, I found that those pages made some external calls. Hmmmmmm, now what would the plugin need from an external site? Doing a little more digging, I found the following line hardcoded twice in the plugin’s wposflv.php file:


<param name="movie" value="http://flv-player.net/medias/player_flv_maxi.swf" />

That line means that if the site flv-player.net is down or slow, any page on your blog using the FLV plugin will also be slow. In order to fix this problem, you simply need to download the player_flv_maxi.swf file from that site, upload it somewhere on your server, and edit the line to point to the location on your server instead. For instance, if your site is my-site.com, and you put the SWF file in a directory called static, you would change the absolute URL to:


<param name="movie" value="http://my-site.com/static/player_flv_maxi.swf" />
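The whole fix can also be done in a couple of shell commands (the paths below are placeholders for wherever your WordPress install and document root live):

# grab a local copy of the player and drop it under your own document root
wget -O /var/www/my-site.com/htdocs/static/player_flv_maxi.swf http://flv-player.net/medias/player_flv_maxi.swf

# from inside the plugin's directory, point both hardcoded occurrences at your copy
sed -i 's|http://flv-player.net/medias/player_flv_maxi.swf|http://my-site.com/static/player_flv_maxi.swf|g' wposflv.php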

If you too were having problems with this plugin being a bit slow, I hope that this suggestion helps!

Cheers,
Zach

Jan Kundrát a.k.a. jkt (homepage, stats, bugs)

I'm sitting on the first day of the Qt Developer Days in Berlin and am pretty impressed about the event so far -- the organizers have done an excellent job and everything feels very, very smooth here. Congratulations for that; I have a first-hand experience with organizing a workshop and can imagine the huge pile of work which these people have invested into making it rock. Well done I say.

It's been some time since I blogged about Trojitá, a fast and lightweight IMAP e-mail client. A lot of work has found the way in since the last release; Trojitá now supports almost all of the useful IMAP extensions including QRESYNC and CONDSTORE for blazingly fast mailbox synchronization or the CONTEXT=SEARCH for live-updated search results to name just a few. There've also been roughly 666 tons of bugfixes, optimizations, new features and tweaks. Trojitá is finally showing evidence of getting ready for being usable as a regular e-mail client, and it's exciting to see that process after 6+ years of working on that in my spare time. People are taking part in the development process; there has been a series of commits from Thomas Lübking of the kwin fame dealing with tricky QWidget issues, for example -- and it's great to see many usability glitches getting addressed.

The last nine months were rather hectic for me -- I got my Master's degree (the thesis was about Trojitá, of course), I started a new job (this time using Qt) and implemented quite some interesting stuff with Qt -- if you have always wondered how to integrate Ragel, a parser generator, with qmake, stay tuned for future posts.

Anyway, in case you are interested in using an extremely fast e-mail client implemented in pure Qt, give Trojitá a try. If you'd like to chat about it, feel free to drop me a mail or just stop me anywhere. We're always looking for contributors, so if you hit some annoying behavior, please do chime in and start hacking.

Cheers,
Jan

November 11, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Local policy management script (November 11, 2012, 11:37 UTC)

I’ve written a small script that I call selocal which manages locally needed SELinux rules. It allows me to add or remove SELinux rules from the command line and have them loaded up without needing to edit a .te file and building the .pp file manually. If you are interested, you can download it from my github location.

Its usage is as follows:

  • You can add a rule to the policy with selocal -a “rule”
  • You can list the current rules with selocal -l
  • You can remove entries by referring to their number (in the listing output), like selocal -d 19.
  • You can ask it to build (-b) and load (-L) the policy when you think it is appropriate

It even supports multiple modules in case you don’t want to have all local rules in a single module set.

So when I wanted to give a presentation on Tor, I had to allow the torbrowser to connect to an unreserved port. The torbrowser runs in the mozilla domain, so all I did was:

~# selocal -a "corenet_tcp_connect_all_unreserved_ports(mozilla_t)" -b -L

At the end of the presentation, I removed the line from the policy:

~# selocal -l | grep mozilla_t
19. corenet_tcp_connect_all_unreserved_ports(mozilla_t)
~# selocal -d 19 -b -L

I can also add comments, in case I forget why I added a rule in the first place:

~# selocal -a "allow mplayer_t self:udp_socket create_socket_perms;" \
  -c "MPlayer plays HTTP resources" -b -L

This then also comes up when listing the current local policy rules:

~# selocal -l
...
40: allow mplayer_t self:udp_socket create_socket_perms; # MPlayer plays HTTP resources

November 09, 2012
Hanno Böck a.k.a. hanno (homepage, stats, bugs)
Languages and translation technology (November 09, 2012, 21:53 UTC)

Chinese timetable

Just recently, Microsoft Research has made some progress in developing a device to do live translations from English into Mandarin. I'd like to share some thoughts with you about that.

If you read my blog on a regular basis, you will know that I traveled through Russia, Mongolia and China last year. If there's one big thing I learned on this trip, it's this: the English language is - on a worldwide scale - much less prevalent than I thought. Call me a fool, but I just wasn't aware of that. I thought, okay, maybe many people won't understand English, but at least I'll always be able to find someone nearby who's able to translate. That just wasn't the case. I spent days in cities where I met nobody who shared a common language with me.

I'm pretty sure that translation technologies will become really important in the not-so-distant future. For many people, they already are. I've learned about the opinions of Swedish initiatives, without any knowledge of Swedish, just by using Google Translate. Google Chrome and the free variant Chromium directly show the option to send a page through Google Translate if they detect that it's not in your language (although that wasn't working with Mongolian when I was there last year). I was in hotels where the staff pointed me to their PC with an instance of Yandex Translate or Baidu Translate where I should type in my questions in English (Yandex is something like the Russian Google, Baidu is something like the Chinese Google). Despite all the shortcomings of today's translation services, people use them to circumvent language barriers.

Young people in those countries often learn English today, but the fact is that this will only very slowly translate into real change. Lots of barriers exist. Many countries have their own language plus another language that's used as the "international communication language", and it's not English. For example, you'll probably get along pretty well in most post-Soviet countries with Russian, no matter whether those countries have their own native language or not. The same thing happens within single countries that have more than one language: people have their native language and learn the country's language as their first foreign language.
Some people think their language is especially important, and this slows the adoption of English (France is especially known for that). Some people have the strange idea that supporting English language knowledge is equivalent to supporting US politics and therefore oppose it.

Yes, one can try to learn more languages (I'm trying it with Mandarin myself, and if I ever feel I can take on a fourth language it'll probably be Russian), but if you look at the world scale, it's a losing battle. To get along worldwide, you'd probably have to learn at least five languages. If you are fluent in English, Mandarin, Russian, Arabic and Spanish, you're probably quite good, but I doubt there are many people on this planet able to do that. If you're one of them, you have my deepest respect (please leave a comment if you are).

If you'd pick two completely random people of the world population, it's quite likely that they don't share a common language.

I see no reason in principle why technology can't solve that. We're probably far away from a Star Trek-like universal translator, and sadly evolution hasn't brought us the Babelfish yet, but I'm pretty confident that we will see rapid improvements in this area, and that will change a lot. This may sound somewhat pathetic, but I think this could be a crucial issue in fixing some of the big problems of our world - hate, racism, war. It's just plain simple: if you have friends in China, you're less likely to think that "the Chinese people are bad" (I'm using this example because I feel this thought is especially prevalent amongst the left-alternative people who would never admit to any racist thoughts - but that's probably a topic for a blog entry of its own). If you have friends in Iran, you're less likely to support your country fighting a war against Iran. But having friends requires being able to communicate with them. Being able to have friends without the necessity of a common language is a fascinating thought to me.

November 07, 2012
gcc / ld madness (November 07, 2012, 17:53 UTC)

So, I started reading [The Definitive Guide to the Xen Hypervisor] (again :P ), and I thought it would be fun to start with the example guest kernel, provided by the author, and extend it a bit (ye, there’s mini-os already in extras/, but I wanted to struggle with all the peculiarities of extended inline asm, x86_64 asm, linker scripts, C macros etc, myself :P ).

After doing some reading about x86_64 asm, I ‘ported’ the example kernel to 64bit, and gave it a try. And of course, it crashed. While I was responsible for the first couple of crashes (for which btw, I can write at least 2-3 additional blog posts :P ), I got stuck with this error:

traps.c:470:d100 Unhandled bkpt fault/trap [#3] on VCPU 0 [ec=0000]
RIP:    e033:<0000000000002271>

when trying to boot the example kernel as a domU (under xen-unstable).

0x2000 is the address where XEN maps the hypercall page inside the domU’s address space. The guest crashed when trying to issue any hypercall (HYPERCALL_console_io in this case). At first, I thought I had screwed up with the x86_64 extended inline asm, used to perform the hypercall, so I checked how the hypercall macros were implemented both in the Linux kernel (wow btw, it’s pretty scary), and in the mini-os kernel. But, I got the same crash with both of them.

After some more debugging, I made it work. In my Makefile, I used gcc to link all of the object files into the guest kernel. When I switched to ld, it worked. Apparently, when using gcc to link object files, it calls the linker with a lot of options you might not want. Invoking gcc using the -v option will reveal that gcc calls collect2 (a wrapper around the linker), which then calls ld with various options (certainly not only the ones I was passing to my 'linker'). One of them was --build-id, which generates a ".note.gnu.build-id" ELF note section in the output file, which contains some hash to identify the linked file.

Apparently, this note changes the layout of the resulting ELF file, and 'shifts' the .text section to 0x30 from 0x0, and hypercall_page ends up at 0x2030 instead of 0x2000. Thus, when I 'called' into the hypercall page, I ended up at some arbitrary location instead of the start of the specific hypercall handler I was going for. But it took me quite some time of debugging before I did an objdump -dS [kernel] (and objdump -x [kernel]), and found out what was going on.
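For anyone hitting something similar, a couple of quick checks make the shift easy to spot; the file names here are just placeholders for whatever your Makefile actually produces:

objdump -h kernel.elf                        # section headers: look for .note.gnu.build-id and the address of .text
objdump -t kernel.elf | grep hypercall_page  # where the symbol actually ended up
readelf -n kernel.elf                        # dump the note sections themselves

And although I haven't tried it, passing -Wl,--build-id=none to gcc should tell the linker not to emit the note in the first place, e.g.:

gcc -nostdlib -Wl,--build-id=none -T kernel.lds -o kernel.elf bootstrap.o kernel.o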

The code from bootstrap.x86_64.S looks like this (notice the .org 0x2000 before the hypercall_page global symbol):

        .text
        .code64
	.globl	_start, shared_info, hypercall_page
_start:
	cld
	movq stack_start(%rip),%rsp
	movq %rsi,%rdi
	call start_kernel

stack_start:
	.quad stack + 8192
	
	.org 0x1000
shared_info:
	.org 0x2000

hypercall_page:
	.org 0x3000	

One solution, mentioned earlier, is to switch to ld (which probably makes more sense), instead of using gcc. The other solution is to tweak the ELF file layout, through the linker script (actually this is pretty much what the Linux kernel does, to work around this):

OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64")
OUTPUT_ARCH(i386:x86-64)
ENTRY(_start)

PHDRS {
	text PT_LOAD FLAGS(5);		/* R_E */
	data PT_LOAD FLAGS(7);		/* RWE */
	note PT_NOTE FLAGS(0);		/* ___ */
}

SECTIONS
{
	. = 0x0;			/* Start of the output file */
	_text = .;			/* Text and ro data */
	.text : {
		*(.text)
	} :text = 0x9090 

	_etext = .;			/* End of text section */

	.rodata : {			/* ro data section */
		*(.rodata)
		*(.rodata.*)
	} :text

	.note : { 
		*(.note.*)
	} :note

	_data = .;
	.data : {			/* Data */
		*(.data)
	} :data

	_edata = .;			/* End of data section */	
}

And now that my kernel boots, I can go back to copy-pasting code from the book … erm hacking. :P

Disclaimer: I’m not very familiar with lds scripts or x86_64 asm, so don’t trust this post too much. :P


November 06, 2012
Arun Raghavan a.k.a. ford_prefect (homepage, stats, bugs)
PulseConf 2012: Report (November 06, 2012, 11:04 UTC)

For those of you who missed my previous updates, we recently organised a PulseAudio miniconference in Copenhagen, Denmark last week. The organisation of all this was spearheaded by ALSA and PulseAudio hacker, David Henningsson. The good folks organising the Ubuntu Developer Summit / Linaro Connect were kind enough to allow us to colocate this event. A big thanks to both of them for making this possible!

The room where the first PulseAudio conference took place

The conference was attended by the four current active PulseAudio developers: Colin Guthrie, Tanu Kaskinen, David Henningsson, and myself. We were joined by long-time contributors Janos Kovacs and Jaska Uimonen from Intel, Luke Yelavich, Conor Curran and Michał Sawicz.

We started the conference at around 9:30 am on November 2nd, and actually managed to keep to the final schedule(!), so I’m going to break this report down into sub-topics for each item which will hopefully make for easier reading than an essay. I’ve also put up some photos from the conference on the Google+ event.

Mission and Vision

We started off with a broad topic — what each of our personal visions/goals for the project are. Interestingly, two main themes emerged: having the most seamless desktop user experience possible, and making sure we are well-suited to the embedded world.

Most of us expressed interest in making sure that users of various desktops had a smooth, hassle-free audio experience. In the ideal case, they would never need to find out what PulseAudio is!

Orthogonally, a number of us are also very interested in making PulseAudio a strong contender in the embedded space (mobile phones, tablets, set top boxes, cars, and so forth). While we already find PulseAudio being used in some of these, there are areas where we can do better (more in later topics).

There was some reservation expressed about other, less-used features such as network playback being ignored because of this focus. The conclusion after some discussion was that this would not be the case, as a number of embedded use-cases do make use of these and other “fringe” features.

Increasing patch bandwidth

Contributors to PulseAudio will be aware that our patch queue has been growing for the last few months due to lack of developer time. We discussed several ways to deal with this problem, the most promising of which was a periodic triage meeting.

We will be setting up a rotating schedule where each of us will organise a meeting every 2 weeks (the period might change as we implement things) where we can go over outstanding patches and hopefully clear backlog. Colin has agreed to set up the first of these.

Routing infrastructure

Next on the agenda was a presentation by Janos Kovacs about the work they’ve been doing at Intel on enhancing PulseAudio’s routing infrastructure. This is being built from the perspective of IVI systems (i.e., cars), which typically have fairly complex use cases involving multiple concurrent devices and users. The slides for the talk will be put up here shortly (edit: slides are now available).

The talk was interspersed with a Q&A-style discussion with Janos and Jaska. The first item of discussion was consolidating Colin’s priority-based routing ideas into the proposed infrastructure. The general feeling was that the two sets of ideas were broadly compatible and should be implementable in the new model.

There was also some discussion on merging the module-combine-sink functionality into PulseAudio’s core, in order to make 1:N routing easier. Some alternatives using the module-filter-* modules were proposed. Further discussion will likely be required before this is resolved.

The next steps for this work are for Jaska and Janos to break up the code into smaller logical bits so that we can start to review the concepts and code in detail and work towards eventually merging as much as makes sense upstream.

Low latency

This session was held against the background of improving latency for games on the desktop (although the work has other applications too). The required latency for games was given as 16 ms (corresponding to a frame rate of 60 fps). A number of ideas to deal with the problem were brought up.

Firstly, it was suggested that the maxlength buffer attribute when setting up streams could be used to signal a hard limit on stream latency — the client signals that it will prefer an underrun, over a latency above maxlength.

Another long-standing item was to investigate the cause of underruns as we lower latency on the stream — David has already begun taking this up on the LKML.

Finally, another long-standing issue is the buffer attribute adjustment done during stream setup. This is not very well-suited to low-latency applications. David and I will be looking at this in coming days.

Merging per-user and system modes

Tanu led the topic of finding a way to deal with use-cases such as mpd or multi-user systems, where access to the PulseAudio daemon of the active user by another user might be desired. Multiple suggestions were put forward, though a definite conclusion was not reached, as further thought is required.

Tanu’s suggestion was a split between a per-user daemon to manage tasks such as per-user configuration, and a system-wide daemon to manage the actual audio resources. The rationale being that the hardware itself is a common resource and could be handled by a non-user-specific daemon instance. This approach has the advantage of having a single entity in charge of the hardware, which keeps a part of the implementation simpler. The disadvantage is that we will either sacrifice security (arbitrary users can “eavesdrop” using the machine’s mic), or security infrastructure will need to be added to decide what users are allowed what access.

I suggested that since these are broadly fringe use-cases, we should document how users can configure the system by hand for these purposes, the crux of the argument being that our architecture should be dictated by the main use-cases, and not the ancillary ones. The disadvantage of this approach is, of course, that configuration is harder for the minority that wishes multi-user access to the hardware.

Colin suggested a mechanism for users to be able to request access from an "active" PulseAudio daemon, which could trigger approval by the corresponding "active" user. The communication mechanism could be the D-Bus system bus between user daemons, and Ștefan Săftescu’s Google Summer of Code work to allow desktop notifications to be triggered from PulseAudio could be used to request authorisation.

David suggested that we could use the per-user/system-wide split, modified somewhat to introduce the concept of a “system-wide” card. This would be a device that is configured as being available to the whole system, and thus explicitly marked as not having any privacy guarantees.

In both the above cases, discussion continued about deciding how the access control would be handled, and this remains open.

We will be continuing to look at this problem until consensus emerges.

Improving (laptop) surround sound

The next topic was how to deal with laptops that have a built-in 2.1 channel setup. The background here is that there are a number of laptops with stereo speakers and a subwoofer. These are usually used as stereo devices, with the subwoofer implicitly being fed data by the audio controller in some hardware-dependent way.

The possibility of exposing this hardware more accurately was discussed. Some investigation is required to see how things are currently exposed for various hardware (my MacBook Pro exposes the subwoofer as a surround control, for example). We need to deal with correctly exposing the hardware at the ALSA layer, and then using that correctly in PulseAudio profiles.

This led to a discussion of how we could handle profiles for these. Ideally, we would have a stereo profile with the hardware dealing with upmixing, and a 2.1 profile that would be automatically triggered when a stream with an LFE channel was presented. This is a general problem while dealing with surround output on HDMI as well, and needs further thought as it complicates routing.

Testing

I gave a rousing speech about writing more tests using some of the new improvements to our testing framework. Much cheering and acknowledgement ensued.

Ed.: some literary liberties might have been taken in this section

Unified cross-distribution ALSA configuration

I missed a large part of this unfortunately, but the crux of the discussion was around unifying cross-distribution sound configuration for those who wish to disable PulseAudio.

Base volumes

The next topic we took up was base volumes, and whether they are useful to most end users. For those unfamiliar with the concept, we sometimes see sinks/sources which support volume controls going above 0 dB (0 dB being the no-attenuation point). We provide the maximum allowed gain in ALSA as the maximum volume, and suggest that UIs show a marker for the base volume.

It was felt that this concept was irrelevant, and probably confusing to most end users, and that we suggest that UIs do not show this information any more.

Relatedly, it was decided that having a per-port maximum volume configuration would be useful, so as to allow users to deal with hardware where the output might get too loud.

Devices with dynamic capabilities (HDMI)

Our next topic of discussion was finding a way to deal with devices such as those HDMI ports where the capabilities of the device could change at run time (for example, when you plug out a monitor and plug in a home theater receiver).

A few ideas to deal with this were discussed, and the best one seemed to be David’s proposal to always have a separate card for each HDMI device. The addition of dynamic profiles could then be exploited to only make profiles available when an actual device is plugged in (and conversely removed when the device is plugged out).

Splitting of configuration

It was suggested that we could split our current configuration files into three categories: core, policy and hardware adaptation. This was met with approval all-around, and the pre-existing ability to read configuration from subdirectories could be reused.

Another feature that was desired was the ability to ship multiple configurations for different hardware adaptations in a single package and have the correct one selected based on the hardware being run on. We did not know of a standard, architecture-independent way to determine the hardware adaptation, so it was felt that the first step toward solving this problem would be to find or create such a mechanism. This could then either be used to set up configuration correctly in early boot, or by PulseAudio to do runtime configuration selection.

Relatedly, moving all distributed configuration to /usr/share/..., with overrides in /etc/pulse/... and $HOME, was suggested.

Better drain/underrun reporting

David volunteered to implement a per-sink-input timer for accurately determining when drain was completed, rather than waiting for the period of the entire buffer as we currently do. Unsurprisingly, no objections were raised to this solution to the long-standing issue.

In a similar vein, redefining the underflow event to mean a real device underflow (rather than the client-side buffer running empty) was suggested. After some discussion, we agreed that a separate event for device underruns would likely be better.

Beer

We called it a day at this point and dispersed beer-wards.

PulseConf Hackers

Our valiant attendees after a day of plotting the future of PulseAudio

User experience

David very kindly invited us to spend a day after the conference hacking at his house in Lund, Sweden, just a short hop away from Copenhagen. We spent a short while in the morning talking about one last item on the agenda — helping to build a more seamless user experience. The idea was to figure out some tools to help users with problems quickly converge on what problem they might be facing (or help developers do the same). We looked at the Ubuntu apport audio debugging tool that David has written, and will try to adopt it for more general use across distributions.

Hacking

The rest of the day was spent in more discussions on topics from the previous day, poring over code for some specific problems, and rolling out the first release candidate for the upcoming 3.0 release.

And cut!

I am very happy that this conference happened, and am looking forward to being able to do it again next year. As you can see from the length of this post, there are a lot of things happening in this part of the stack, and lots more yet to come. It was excellent meeting all the fellow PulseAudio hackers, and my thanks to all of them for making it.

Finally, I wouldn’t be sitting here writing this report without support from Collabora, who sponsored my travel to the conference, so it’s fitting that I end this with a shout-out to them. :)

November 05, 2012
Michal Hrusecky a.k.a. miska (homepage, stats, bugs)
openSUSE Connect Survey (November 05, 2012, 12:42 UTC)

You might remember that in our team (openSUSE Boosters), we created openSUSE Connect some time ago. It was meant as a replacement for users.opensuse.org, which nobody knew about and nobody used. We hoped that it would attract more users and be a more user-friendly way to manage personal data. Apart from that, we wanted to include more interesting widgets so it could become your landing page for all your efforts in the openSUSE project. To that end we created a Bugzilla widget, a FATE widget, a build status widget and some more. We hoped that it would make a difference, help people, and that they would enjoy using the new site. During this summer my GSoC student created an amazing Karma widget as well, to make it more fun. And as Connect has now been around for some time, it's time to collect some feedback. Did it work? Do you like it? Or did it become just a wasteland? Do you think such a site makes sense?

I’m not promising anything right now, but it would be nice to know what our users think about it, whether it makes sense to put more effort into it, and if so, how much and where to concentrate it ;-) So please, fill in this little survey and let me know your opinion. I’ll publish the results later ;-)

November 03, 2012
Stuart Longland a.k.a. redhatter (homepage, stats, bugs)
I dub thee… iKarma (November 03, 2012, 23:53 UTC)

Mexico to Apple: You WILL NOT use the name ‘iPhone’ here

We don’ need no stinkin’ badge lawsuits

Apple has lost the right to use the word “iPhone” in Mexico after its trademark lawsuit against Mexican telco iFone backfired.

http://www.theregister.co.uk/2012/11/02/iphone_ifone_mexico_trademark/

Not so nice when the shoe’s on the other foot, is it, Apple? Now if only other law courts had such common sense.

Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
komplete audio 6 on gentoo: first impressions (November 03, 2012, 05:36 UTC)

i received my native instruments komplete audio 6 in the mail today. i wasted no time plugging it in. i have a few first impressions:

build quality

this thing is heavy. not unduly so — just two or three times heavier than the audiofire 2 it replaces. it’s solidly built, so i imagine it can take a fair amount of beating on-the-go. knobs are sturdy, stiff rather than loose, without much wiggle. the big top volume knob is a little looser, with more wiggle, but it’s also made out of metal, rather than the tough plastic of the front trim knobs. the input ports grip 1/4″ jacks pretty tightly, so there’s no worry that cables will fall out.

i haven’t tested the main outputs yet, but the headphone output works correctly, offering more volume than my ears can take, and it seems to be very quiet — i couldn’t hear any background hiss even when turning up the gain.

JACK support

i have mixed first impressions here. according to ALSA upstream, and one of my buddies who’s done some kernel driver code for NI interfaces, it should work perfectly, as it’s class-compliant to the USB2.0 spec (no, really, there is a spec for 2.0, and the KA6 complies with it, separating it from the vast majority of interfaces that only comply with the common 1.1 spec).

i setup some slightly more aggressive settings on this USB interface than for my FireWire audiofire 2, which seems to have been discontinued in favor of echo’s new USB interface (though the audiofire 4 is still available, and is mostly the same). i went with 64 frames/period, 48000 sample rate, 3 periods/buffer . . . which got me 4ms latency. that’s just under half the 8ms+ latency i had with the firewire-based af2.
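for reference, those settings correspond to a jackd command line roughly like this (i actually drive jack from qjackctl, and the alsa device name below is a guess; check aplay -l for yours):

jackd -d alsa -d hw:KA6 -r 48000 -p 64 -n 3    # 64 frames x 3 periods / 48000 Hz = 4ms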

at these settings, qjackctl reported about 18-20% CPU usage, idling around 0.39-5.0% with no activity. i only have a 1.5ghz core2duo processor from 2007, so any time the CPU clocks down to 1.0ghz, i expect the utilization numbers to jump up. switching from the ondemand to performance governor helps a bit, raising the processor speed all the way up.

playing a raw .wav file through mplayer’s JACK output worked just fine. next, i started ardour 3, and that’s where the troubles began. ardour has shown a distressing tendency to crash jackd and/or the interface, sometimes without any explanation in the logs. one second the ardour window is there, the next it’s gone.

i tried renoise next, and loaded up an old tracker project, from my creative one-a-day: day 316, beta decay. this piece isn’t too demanding: it’s sample-based, with a few audio channels, a send, and a few FX plugins on each track.

playing this song resulted in 20-32% CPU utilization, though at least renoise crashed less often than ardour. renoise feels noticeably more stable than the snapshot of ardour3 i built on july 9th.

i wasn’t very thrilled with how much work my machine was doing, since the CPU load was noticeably better with the af2. though this is to be expected: with firewire, the CPU doesn’t have to do so much processing of the audio streams, since the work is offloaded onto the firewire bus. with usb, all traffic goes through the CPU, so that takes more valuable DSP resources.

still, time to up the ante. i raised the sample rate to 96000, restarted JACK, and reloaded the renoise project. now i had 2ms latency…much lower than i ever ran with the af2. this low latency took more cycles to run, though: CPU utilization was between 20% and 36%, usually around 30-33%.

i haven’t yet tested the device on my main workstation, since that desktop computer is still dead. i’m planning to rebuild it, moving from an old AMD dualcore CPU to a recent Intel Ivy Bridge chip. that should free up enough resources to create complex projects while simultaneously playing back and recording high-quality audio.

first thoughts

i’m a bit concerned that for a $200 best-in-class USB2.0 class-compliant device, it’s not working as perfectly as i’d hoped. all 6/6 inputs and outputs present themselves correctly in the JACK window, but the KA6 doesn’t show up as a valid ALSA mixer device if i wanted to just listen to music through it, without running JACK.

i’m also concerned that the first few times i plug it in and start it, it’s mostly rock-solid, with no xruns (even at 4ms) appearing unless i run certain (buggy) applications. however, it’s xrun/crash-prone at a sample rate of 96000, forcing me to step down to 48000. i normally work at that latter rate anyway, but still…i should be able to get the higher quality rates. perhaps a few more reboots might fix this.

it could be that one of the three USB ports on this laptop shares a bus with another high-traffic device, which means there could be bandwidth and/or IRQ conflicts. i’m also running kernel 3.5.3 (ck-sources), with alsa-lib 1.0.25, and there might have been driver fixes in the 3.6 kernel and alsa-lib 1.0.26. i’m also using JACK1, version 0.121.3, rather than the newer JACK2. after some upgrades, i’ll do some more testing.

early verdict: the KA6 should work perfectly on linux, but higher sample rates and lowest possible latency are still out of reach. sound quality is good, build quality is great. ALSA backend support is weak to nonexistent; i may have to do considerable triage and hacking to get it to work as a regular audio playback device.

November 02, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Crossfit Praha: new home gym for November (November 02, 2012, 21:19 UTC)

(I’d like to first give a global shout out to my first Crossfit home, The Athlete Lab)

Prague - Oct 2012-113

Since I’m in Prague for a month, I became a member of Crossfit Praha instead of just being a drop-in client. The gym is quite small, but centrally located in Prague. The lifting days are separate from the normal days (probably unless you are a trusted regular). The premise is, you show up during a block of time, warm up on your own, proceed with the WOD, then cool down on your own, which is pretty standard across gyms from what I can tell, the exception being that everyone starts the WOD at their own time (not at structured times). Now I’ve put my money where my mouth is and have to keep a good diet, not drink so much beer, etc. to be able to function the next day(s) after a WOD. “Tomorrow will not be any easier”

Prague - Oct 2012-110
(Myself and Zdeněk)

Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Lenovo laptops now feature what? (November 02, 2012, 15:32 UTC)

Each month, the online discount retailer Working Advantage has a sweepstakes for some hot item. For November 2012, it is a Lenovo IdeaPad Z580. I received the following email about it yesterday:

Working Advantage Lenovo IdeaPad Z580 November Giveaway features top sirloin steaks

Last time I checked, the IdeaPad Z580 had some neat features, but definitely did not come with top sirloin steaks! :razz:

Cheers,
Zach

November 01, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)
Slock 1.1 background colour (November 01, 2012, 13:43 UTC)

If you use the slock application, like I do, you may have noticed a subtle change with the latest release (which is version 1.1). That change is that the background colour is now teal-like when you start typing your password in order to disable slock, and get back to using your system. This change came from a dual-colour patch that was added to version 1.1.

I personally don’t like the change, and would rather have my screen simply stay black until the correct password is entered. Is it a huge deal? No, of course not. However, I think of it as just one additional piece of security via obscurity. In any case, I wanted it back to the way that it was pre-1.1. There are a couple ways to accomplish this goal. The first way is to build the package from source. If your distribution doesn’t come with a packaged version of slock, you can do this easily by downloading the slock-1.1 tarball, unpacking it, and modifying config.mk accordingly. The config.mk file looks like this:


# slock version
VERSION = 1.0-tip

# Customize below to fit your system

# paths
PREFIX = /usr/local

X11INC = /usr/X11R6/include
X11LIB = /usr/X11R6/lib

# includes and libs
INCS = -I. -I/usr/include -I${X11INC}
LIBS = -L/usr/lib -lc -lcrypt -L${X11LIB} -lX11 -lXext

# flags
CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"
CFLAGS = -std=c99 -pedantic -Wall -Os ${INCS} ${CPPFLAGS}
LDFLAGS = -s ${LIBS}

# On *BSD remove -DHAVE_SHADOW_H from CPPFLAGS and add -DHAVE_BSD_AUTH
# On OpenBSD and Darwin remove -lcrypt from LIBS

# compiler and linker
CC = cc

# Install mode. On BSD systems MODE=2755 and GROUP=auth
# On others MODE=4755 and GROUP=root
#MODE=2755
#GROUP=auth

With the line applicable to background colour being:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"\#005577\"

In order to change it back to the pre-1.1 background colour scheme, simply modify -DCOLOR2 to be the same as -DCOLOR1:

CPPFLAGS = -DVERSION=\"${VERSION}\" -DHAVE_SHADOW_H -DCOLOR1=\"black\" -DCOLOR2=\"black\"

but note that you do not need the extra set of escaping backslashes when you are using the colour name instead of hex representation.
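If you go the from-source route, the whole procedure looks roughly like the following (the tarball URL here is the usual suckless.org download location, so double-check it, and note that make install will want root privileges):

wget http://dl.suckless.org/tools/slock-1.1.tar.gz
tar xzf slock-1.1.tar.gz && cd slock-1.1
$EDITOR config.mk        # set -DCOLOR2 to \"black\" as shown above
make
make install             # installs under /usr/local by default (see PREFIX in config.mk)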

If you use Gentoo, though, and you’re already building each package from source, how can you make this change yet still install the package through the system package manager (Portage)? Well, you could try to edit the file, tar it up, and place the modified tarball in the /usr/portage/distfiles/ directory. However, you will quickly find that issuing another emerge slock will result in that file getting overwritten, and you’re back to where you started. Instead, the package maintainer (Jeroen Roovers) was kind enough to add the ‘savedconfig’ USE flag to slock on 29 October 2012. In order to take advantage of this great USE flag, you first need to have Portage build slock with the USE flag enabled by putting it in /etc/portage/package.use:

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use

Then, you are free to edit the saved config.mk which is located at /etc/portage/savedconfig/x11-misc/slock-1.1. After recompiling with the ‘savedconfig’ USE flag, and the modifications of your choice, slock should now exhibit the behaviour that you anticipated.
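Putting it all together, the Portage workflow ends up looking something like this (I use --oneshot here simply so that slock isn’t added to the world file; as far as I understand the ‘savedconfig’ eclass, the first build saves the stock config.mk for you):

echo "x11-misc/slock savedconfig" >> /etc/portage/package.use
emerge --oneshot x11-misc/slock
$EDITOR /etc/portage/savedconfig/x11-misc/slock-1.1    # change -DCOLOR2 to \"black\"
emerge --oneshot x11-misc/slock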

Hope that helps!

Cheers,
Zach