
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
February 09, 2013, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

February 09, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I guess many people may hit similar problems, so here is my experience of the upgrades. Generally it was pretty smooth, but it required paying attention to detail, plus some documentation and forum lookups.

udev-171 -> udev-197 upgrade

  1. Make sure you have CONFIG_DEVTMPFS=y in your kernel .config, otherwise the system will not boot at all (I think the error message during boot mentions that config option, which is helpful). See the quick check sketched after this list.
  2. The ebuild also asks for CONFIG_BLK_DEV_BSG=y, not sure if that's strictly needed but I'm including it here for completeness.
  3. Things work fine for me without DEVTMPFS_MOUNT. I haven't tried with it enabled, I guess it's optional.
  4. I do not have a split /usr; if you do, your mileage may vary.
  5. Make sure to run "rc-update del udev-postmount".
  6. Expect network device names to change (I guess this is a non-issue for systems with a single network card). This can really mess up things in quite surprising ways. It seems /etc/udev/rules.d/70-persistent-net.rules no longer works (bug #453494). Note that the "new way" to do the same thing (http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames) is disabled by default in Gentoo (see /etc/udev/rules.d/80-net-name-slot.rules). For now I've adjusted my firewall and other configs, but I think I'll need to figure out the new persistent net naming system.
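
For reference, here is a minimal sketch of how one might verify points 1, 2 and 5 before rebooting. The /usr/src/linux path is an assumption (adjust it to wherever your kernel sources live), and /proc/config.gz is only present if the kernel exposes its config:

  # check the two kernel options against the running kernel's config, if exposed
  zgrep -E 'CONFIG_DEVTMPFS|CONFIG_BLK_DEV_BSG' /proc/config.gz
  # or against the source tree the kernel was built from
  grep -E 'CONFIG_DEVTMPFS|CONFIG_BLK_DEV_BSG' /usr/src/linux/.config
  # drop the obsolete udev-postmount service, as in point 5
  rc-update del udev-postmount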

iptables-1.4.13 -> iptables-1.4.16.3

* Loading iptables state and starting firewall ...
WARNING: The state match is obsolete. Use conntrack instead.
iptables-restore v1.4.16.3: state: option "--state" must be specified

It can be really non-obvious what to do with this one. Change your rules from e.g. "-m state --state RELATED" to "-m conntrack --ctstate RELATED". See http://forums.gentoo.org/viewtopic-t-940302.html for more info.
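
As a hedged illustration of that conversion, here is what a saved rule might look like before and after (the chain and the states are just examples):

  # old form, using the obsolete state match
  -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
  # equivalent form using the conntrack match
  -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
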
Also note that iptables-restore doesn't really provide good error messages, e.g. "iptables-restore: line 48 failed". I didn't find a way to make it say what exactly was wrong (the line in question was just a COMMIT line, so it didn't actually identify the real offending line). These mysterious errors are usually caused by missing kernel support for some firewall features/targets.

two upgrades together

What actually adds to the confusion is doing these two upgrades simultaneously, which makes it harder to identify which upgrade is responsible for which breakage. For a smoother ride, I'd recommend upgrading iptables first, making sure the updated rules work, and then proceeding with udev.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We've generated a new set of profiles for Gentoo installation. These are now called 13.0 instead of 10.0, e.g., "default/linux/amd64/10.0/desktop" becomes "default/linux/amd64/13.0/desktop".
Everyone should upgrade as soon as possible. This brings (nearly) no user-visible changes. Some new files have been added to the profile directories that make it possible for the developers to do more fine-grained use flag masking (see PMS-5 for the details), and this formally requires a new profile tree with EAPI=5 (and a recent portage version, but anything since sys-apps/portage-2.1.11.31 should work and anything since sys-apps/portage-2.1.11.50 should be perfect).
Since the 10.0 profiles will be deprecated immediately and removed in a year, emerge will suggest a replacement on every run. I strongly suggest you just follow that recommendation.
One additional change: the "server" profiles are going away; they do not exist in the 13.0 tree anymore. If you have used a server profile so far, you should migrate to its parent, i.e. from "default/linux/amd64/10.0/server" to "default/linux/amd64/13.0". This may change the default value of some USE flags (the setting in "server" was USE="-perl -python snmp truetype xml"), so you may want to check these flags after switching profiles, but otherwise nothing changes.
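
For the record, a minimal sketch of how one would make the switch with eselect (the profile name and number will differ on your system):

  # list the available profiles and note the number of the 13.0 variant you want
  eselect profile list
  # switch by name (or number); the desktop profile here is just an example
  eselect profile set default/linux/amd64/13.0/desktop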

February 08, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

While KDE 4.10.0 runs perfectly fine on my machine, unfortunately a lot of Gentoo users see immediate crashes of plasma-desktop - which makes the graphical desktop environment completely unusable. In the meantime we've figured out more or less what happened, just not how to properly fix it...
The problem:

  • plasma-desktop uses a new code path in 4.10, which triggers a Qt bug leading to immediate SIGSEGV. 
  • The Qt bug only becomes fatal for some compiler options, and only on 64bit systems (amd64).
  • The Qt bug may be a fundamental architectural problem that needs proper thought.
The bugfixing situation:
  • Reverting the commit to plasma-workspace that introduced the problem makes the crash go away, but plasma-desktop starts hogging 100% CPU after a while. (This is done in plasma-workspace-4.10.0-r1 as a stopgap measure.) Kinda makes sense since the commit was there to fix a problem - now we hit the original problem.
  • The bug seems not to occur if Qt is compiled with CFLAGS="-Os". Cause unknown. 
  • David E. Narváez aka dmaggot wrote a patch for Qt that fixes this particular codepath but likely does not solve the global problem.
  • So far comments from Qt upstream indicate that this is in their opinion not the right way to fix the problem.
  • Our Gentoo Qt team understandably only wants to apply a patch if it has been accepted upstream.
Right now, the only option we (as the Gentoo KDE team) have is to wait for someone to pick up the phone - either from KDE (to properly use the old codepath or provide some alternative), or from Qt (to fix the bug or apply a workaround)...

Sorry & stay tuned.

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

Update! Update! Read all about it! You can find the recent PostgreSQL updates in a tree near you. They are currently keyworded, but will be stabilized as soon as the arch teams find time to do so. You may not want to wait that long, as the fix addresses a Denial of Service vulnerability - which is not as severe as it sounds in this case, since the user would have to be logged in to cause a DoS.

There have been some other updates to the PostgreSQL ebuilds as well. PostgreSQL will no longer restart if you restart your system logger. The ebuilds now install PAM service files unique to each slot, so you don't have to worry about them being removed when you uninstall an old slot. And, finally, you can write your PL/Python in Python 3.

Greg KH a.k.a. gregkh (homepage, bugs)
AF_BUS, D-Bus, and the Linux kernel (February 08, 2013, 18:37 UTC)

There's been a lot of information scattered around the internet about these topics recently, so here's my attempt to put it all in one place to (hopefully) settle things down and give my inbox a break.

Last week I spent a number of days at the GNOME Developer Hackfest in Brussels, with the goal of helping improve the way applications written for GNOME (and, more generally, Linux) are distributed. A great summary of what happened there can be found in this H-Online article. Also please read Alexander Larsson's great summary of what we discussed and worked on for another view of this.

Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it today.

Our goal (and I use "goal" very loosely - I have 8 pages of scribbled notes describing what we want to try to implement here) is to provide a reliable multicast and point-to-point messaging system for the kernel that works quickly and securely. On top of this kernel feature, we will try to provide a "libdbus" interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system.

nothing blocks

"But Greg!" some of you will shout, "What about the existing AF_BUS kernel patches that have been floating around for a while and that you put into the LTSI 3.4 kernel release?"

The existing AF_BUS patches are great for users who need a very low-latency, high-speed, D-Bus protocol on their system. This includes the crazy automotive Linux developers, who try to shove tens of thousands of D-Bus messages through their system at boot time, all while using extremely underpowered processors. For this reason, I included the AF_BUS patches in the LTSI kernel release, as that limited application can benefit from them.

Please remember the LTSI kernel is just like a distro kernel: it has no relation to upstream kernel development other than being a consumer of it. Patches are in this kernel because the LTSI member groups need them; they aren't always upstream, just as with every other Linux distro kernel.

However, given that the AF_BUS patches have been rejected by the upstream Linux kernel developers, I advise that anyone relying on them be very careful about their usage, and be prepared to move away from them sometime in the future when this new "kernel dbus" code is properly merged.

As for when this new kernel code will be finished, I can only respond with the traditional "when it is done" mantra. I can't provide any deadlines, and at this point in time I don't need any additional help with it; we have enough people working on it at the moment. It's available publicly if you really want to see it, but I'll not link to it, as it's nothing you really want to see or watch right now. When it gets to a usable state, I'll announce it in the usual places (the linux-kernel mailing list), where it will be torn to the usual shreds and I will rewrite it all again to get it into a mergeable state.

In the meantime, if you see me at any of the many Linux conferences I'll be attending around the world this year, and you are curious about the current status, buy me a beer and I'll be glad to discuss it in person.

If there's anything else people are wondering about this topic, feel free to comment on it here on google+, or email me.

February 07, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened goes onward (aka project meeting) (February 07, 2013, 21:40 UTC)

It’s been a while again, so time for another Gentoo Hardened online progress meeting.

Toolchain

GCC 4.8 is in development stage 4, so the hardened patches will be worked on next week. Some help is needed to test the patches on ARM, PPC and MIPS, though. For those interested, keep a close eye on the hardened-dev overlay, as it will contain the latest fixes. When GCC 4.9 starts development phase 1, Zorry will again try to upstream the patches.

With the coming fixes, we will probably (need to) remove the various hardenedno* GCC profiles from the hardened Gentoo profiles. This shouldn't impact too many users, as ebuilds add in the correct flags anyhow (for instance when they need to turn off PIE/PIC).
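
If you are unsure which compiler profile your system is currently using, here is a quick sketch of how to check and switch with gcc-config (the profile name shown is only an example; pick one from the listing):

  # list the GCC profiles installed for the active toolchain; any hardenedno* variants show up here
  gcc-config -l
  # select the plain hardened profile by name or number
  gcc-config x86_64-pc-linux-gnu-4.7.2
  source /etc/profile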

Kernel, grSecurity and PaX

The kernel release 3.7.0 that we have stable in our tree has seen a few setbacks, but no higher version is stable yet (mainly due to the stabilization period needed). 3.7.4-r1 and 3.7.5 are prime candidates with a good track record, so we might be stabilizing 3.7.5 in the very near future (probably next week).

On the PaX flag migration (you know, from ELF-header based marking to extended attributes marking), the documentation has seen its necessary upgrades and the userland utilities have been updated to reflect the use of xattr markings. The eclass we use for the markings will use the correct utility based on the environment.

One issue faced when trying to support both markings is that some actions (like "paxctl -Cc", which creates the PT_PAX header if it is missing) make no sense for the other marking style (there is no header to create when using XATTR_PAX). The eclass will be updated to ignore these flags when XATTR_PAX is selected.
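
As a hedged aside, here is one way to see which marking a given binary actually carries; the attribute name user.pax.flags is my assumption of what the xattr-based scheme uses, so treat this as a sketch rather than gospel:

  # ELF-based marking: look for a PT_PAX (PAX_FLAGS) program header
  readelf -l /usr/bin/some-binary | grep -i pax
  # xattr-based marking: read the extended attribute, if present (attribute name is an assumption)
  getfattr -n user.pax.flags /usr/bin/some-binary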

SELinux

Revision 10 is stable in the tree, and revision 11 is in its stabilization period. A few more changes have been put in the policy repository already (they are installed when using the live ebuilds) and will of course be part of revision 12.

A change in the userland utilities was also pushed out to allow permissive domains (so a single domain can run in permissive mode instead of the entire system).
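
As an illustration of what that buys you, marking one domain permissive with the stock tooling looks roughly like this (the domain name is just an example):

  # run only the named domain in permissive mode, leaving the rest of the policy enforcing
  semanage permissive -a mozilla_t
  # revert that domain to enforcing mode later
  semanage permissive -d mozilla_t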

Finally, the SELinux eclass has been updated to remove SELinux modules from all defined SELinux module stores when the SELinux policy package is removed from the system. Before that, the user had to remove the modules from the stores manually, which is error-prone and easily forgotten, especially for the non-default SELinux policy stores.

Profiles

All hardened subprofiles are marked as deprecated now (you've probably seen the discussions on this on the mailing list), so we now have a sane set of hardened profiles to manage. The subprofiles were used for things like "desktop" or "server", whereas users can easily stack their profiles as they see fit anyhow – so there was little reason for the project to continue managing those subprofiles.

Also, now that Gentoo has released its 13.0 profile, we will need to migrate our profiles to the 13.0 ones as well. So, the idea is to temporarily support 13.0 in a subprofile, test it thoroughly, and then remove the subprofile and switch the main one to 13.0.

System Integrity

The documentation for IMA and EVM is available on the Gentoo Hardened project site. They currently still refer to the IMA and EVM subsystems as development-only, but they are available in the stable kernels now. Especially the default policy that is available in the kernel is pretty useful. When you want to consider custom policies (for instance with SELinux integration) you’ll need a kernel patch that is already upstreamed but not applied to the stable kernels yet.

To support IMA/EVM, a package called ima-evm-utils is available in the hardened-dev overlay, which will be moved to the main tree soon.

Documentation

As mentioned before, the PaX documentation has seen quite a lot of updates. Other documents that have seen updates are the Hardened FAQ, the Integrity subproject and the SELinux documentation, although most of the changes were small.

Another suggestion given is to clean up the Hardened project page; however, there has been some talk within Gentoo to move project pages to the Gentoo wiki. Such a move might make the suggestion easier to handle. And while on the subject of the wiki, we might want to move user guides to the wiki already.

Bugs

Bug 443630 refers to segmentation faults with libvirt when starting Qemu domains on a SELinux-enabled host. Sadly, I'm not able to test libvirt myself, so either someone with SELinux and libvirt expertise can chime in, or we will need to troubleshoot it through the bug report (using gdb, more strace'ing, …), which might take quite some time and is not user friendly…

Media

Various talks were held at FOSDEM regarding Gentoo Hardened, and a lot of people attended them. The round table was also quite effective, with many users interacting with developers all around. For next year, chances are very high that we'll give a "What has changed since last year" session and hold a round table again.

With many thanks to the usual suspects: Zorry, blueness, prometheanfire, lejonet, klondike and the several dozen contributors that are going to kill me for not mentioning their (nick)names.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
January in review: Istanbul, Dubai (February 07, 2013, 17:33 UTC)

Preface: It appears that I have fallen behind in my writings. It’s a shame really because I think of things that I should write in the moment and then forget. However, as I’m embracing slowish travel, sometimes I just don’t really do anything that is interesting to write about every day/week.

My last post was about my time in Greece. Since then I have been to Istanbul, Dubai, and (now) Sri Lanka. I was in Istanbul for about 10 days. My lasting impressions of Istanbul were:

  • +: Istanbul was my first time in a Muslim country. This is a positive because it opened up some thoughts of what to expect as I continue east. Seeing all the impressive mosques, hearing the azan (call to prayer) in the streets, and talking to some Turks about religion really made it a new experience for me.
  • +: Istanbul receives many visitors per year, which makes it such that it is easy to converse, find stuff you need, etc
  • -: Istanbul receives many visitors per year, which makes it very touristy in some parts.
  • +: Istanbul is a huge city and there is much to see. I stepped on Asia for the first time. There are many old, old, buildings that leave you in awe. Oldest shopping area in the world, the Grand Bazaar, stuff like that.
  • -: Istanbul is a huge city and the public transit is not well connected, I thought.
  • –: Every shop owner harasses you to come in the store! The best defense that I can recommend is to walk with a purpose (like you are running an errand) but not in a hurry. This will bring the least amount of attention to yourself at risk of “missing” the finer details as you meander.

Turkey - Jan 2013-67

Let's not kid anyone, Dubai was a skydiving trip, for sure. I spent 15 days in Dubai and made 30 jumps. It was a blast. I was at the dropzone most every day, and on the weather days my generous hosts showed me around the city. I didn't feel the need to take any pictures of the sights because, while impressive, they seemed too "fake" to me (outrageous, silly, etc). I went to the largest mall in the world, ate brunch in the shadow of the largest building in the world, saw the largest aquarium and an indoor ski hill in a desert - eventually it was just…meh. However, I will never forget "The Palm".

When deciding where to go next, I knew I shouldn't stay in Dubai too long (money matters, of course - I would spend my whole lot on fun, and there is so much more to see). I ended up in Sri Lanka, because Skyscanner told me there was a direct flight there on a budget airline; at my pace, I don't see the point in accepting layovers. Then I found someone on HelpX who wanted an English teacher in exchange for accommodation. While I'm not a teacher, I am a native speaker, and that was acceptable at this level of classes. I did a week-long stint of that in a small village and now I'm relaxing at the beach… I'll write more about Sri Lanka later and post pics; a fun photo so far:

20130209-134926.jpg

February 06, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The future of Autotools Mythbuster (February 06, 2013, 00:34 UTC)

You might have noticed after yesterday’s post that I have done a lot of visual changes to Autotools Mythbuster over the weekend. The new style is just a bunch of changes over the previous one (even though I also made use of sass to make the stylesheet smaller), and for the most part is to give it something recognizable.

I need to spend another day or two working on the content itself at the very least, as the automake 1.13 porting notes are still not correct, due to further changes done on Automake side (more on this in a future post, as it’s a topic of its own). I’m also thinking about taking a few days off Gentoo Linux maintenance, Munin development, and other tasks, and just work on the content on all the non-work time, as it could use some documentation of install and uninstall procedures for instance.

But leaving the content side alone, let me address a different point first. More and more people lately have been asking for a way to have the guide available offline, either as ebook (ePub or PDF) or packaged. Indeed I was asked by somebody if I could drop the NonCommercial part of the license so that it can be packaged in Debian (at some point I was actually asked why I’m not contributing this to the main manuals; the reason is that I really don’t like the GFDL, and furthermore I’m not contributing to automake proper because copyright assignment is becoming a burden in my view).

There's an important note here: while you can easily see that I'm not pouring into it the amount of time needed to bring this to book quality, it does take a lot of time to work on it. It's not just a matter of gluing together the posts from my blog that talk about autotools; it's a whole lot of editing, which is indeed a whole lot of work. While I do hope that the guide is helpful, as I wrote before, it's mostly more work than I can pour into it in my free time, especially in-between jobs like now (and no, I don't need to find a job — I'm waiting to hear from one, and got a few others lined up if it falls through). While Flattr helps, it seems to be drying up, at least for what concerns my content; even Socialvest is giving me some grief, probably because I'm no longer connecting from the US. Besides that, the only "monetization" (I hate that word) strategy I've got for the guide is AdSense – which, I remind you, kicked my blog out for naming an adult website in a post – and making the content available offline would defeat even the very small returns of that.

At this point, I’m really not sure what to do; on one side I’m happy to receive more coverage just because it makes my life easier to have fewer broken build systems around. On the other hand, while not expecting to get rich off it, I would like to know that the time I spend on it is at least partly compensated – token gestures are better than nothing as well – and that precludes a simple availability of the content offline, which is what people at this point are clamoring for.

So let’s look into the issues more deeply: why the NC clause on the guide? Mostly I want to have a way to stop somebody else exploiting my work for gain. If I drop the NC clause, nothing can stop an asshole from picking up the guide, making it available on Amazon, and get the money for it. Is it likely? Maybe not, but it’s something that can happen. Given the kind of sharks that infest Amazon’s self-publishing business, I wouldn’t be surprised. On the other hand, it would probably make it easier for me to accept non-minor contributions and still be able to publish it at some point, maybe even in real paper, so it is not something I’m excluding altogether at this point.

Getting the guide packaged by distributions is also not entirely impossible right now: Gentoo generally doesn't have the same kind of issues as Debian regarding NC clauses, and since I'm already using Gentoo to build and publish it, making an ebuild for it is tremendously simple. Since the content is also available on Git – right now on Gitorious, but read on – it would be trivial to do. But again, this would be cannibalizing the only compensation I get for the time spent on the guide, which makes me very doubtful about what to do.

About the sources, there is another issue: while Gitorious was handier than GitHub at the time I started all this, over time Gitorious' interface didn't improve, while GitHub's improved a lot, to the point that right now it would be my choice to host the guide: easier pull requests, and easier coverage. On the other hand, I'm not sure if the extra coverage is a good thing, as stated above. Yes, it is already available offline through Gitorious, but GitHub would make it effectively easier to get the guide offline than to consult it online. Is that what I want to do? Again, I don't know.

You probably also remember an older post of mine from one and a half years ago where I discussed the reasons why I hadn't published Autotools Mythbuster, at least through Amazon; the main reason was that, at the time, Amazon had no easy way to update the book for buyers without having them buy a new copy. Luckily, this has changed recently, so that obstacle has actually fallen away. With this in mind, I'm considering making it available as a Kindle book for those of you who are interested. To do so I first have to create it as an ePub though — which would solve the question I've been asked about eBook availability… but at the same time we're back to the compensation issue.

Indeed, if I decide to set up ePub generation and start selling it on the Kindle store, I’d be publishing the same routines on the Git repository, making it available to everybody else as well. Are people going to buy the eBook, even if I priced it at $0.99? I’d suppose not. Which brings me to not be sure what the target would be, on the Kindle store: price it down so that the convenience to just buy it from Amazon overweights the work to rolling your own ePub, or googling for a copy, – considering that just one person rolling the ePub can easily make it available to everybody else – or price it at a higher point, say $5, hoping that a few, interested users would fund the improvements? Either bet sounds bad to me honestly, even considering that Calcote’s book is priced at $27 at Amazon (hardcopy) and $35 at O’Reilly (eBook) — obviously, his book is more complete, although it is not a “living” edition like Autotools Mythbuster is.

Basically, I'm not sure what to do at all. And I'm pretty sure that some people (who will comment) will feel disgusted that I'm trying to make money out of this. On the whole, I guess one way to solve the issue is to drop the NC clause, stick it into a Git repository somewhere, maybe keep it running on my website, maybe not, but not waste energy on it anymore… the fact that, with its much more focused topic, it has just 65 flattrs is probably an indication that there is no need for it — which explains why I couldn't find any publisher interested in having me write a book on the topic before. Too bad.

February 05, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A story of bad suggestions (February 05, 2013, 23:44 UTC)

You might have noticed that my blog has been down for a little while today. The reason that happened is that I was trying to get Google Webmaster Tools to work again, as I’ve been spending some more time lately to clean up my web presence — I’ll soon post more about news related to Autotools Mythbuster and the direction it’s going to take.

How did that cause my blog’s not coming up though? Well, the new default for GWT’s validation of the authorship of the website is to use DNS TXT records, instead of the old header on the homepage, or file on the root. Unfortunately, it doesn’t work as well.

First, it actually tries to be smart by checking which DNS servers are assigned to the domain — which meant it showed instructions on how to log in to my OVH account (great). On the other hand, it told me to create the new TXT record without setting a subdomain — too bad it will not accept a validation on flameeyes.eu for blog.flameeyes.eu.

The other problem is that the moment I added a TXT record for blog.flameeyes.eu, the resolution of the host didn’t lead to the CNAME anymore, which meant that the host was unreachable altogether. I’ve not checked the DNS documentation to learn whether this is a bug in OVH or if the GWT suggestion is completely broken. In either case it was a bad suggestion.
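
For what it's worth, DNS itself forbids that combination: a name that carries a CNAME record cannot carry any other record type, so a zone along these lines (the names and the verification token are purely illustrative) is invalid, which would go some way towards explaining the breakage:

  ; blog already points elsewhere via a CNAME...
  blog    IN  CNAME  my-blog-host.example.net.
  ; ...so a TXT record at the same name is not allowed
  blog    IN  TXT    "google-site-verification=example-token"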

Also, if you happen to not be able to reach posts and always end up on the homepage, please flush your cache; I made a mess when I was fixing the redirects to repair more links all over the Internet — it should all be fine now, and links should all work, even those that were mangled beforehand due to non-ASCII-compatible URLs.

Finally, I've updated the few posts where a YouTube video was linked, and they now use the iframe-based embed strategy, which means they are viewable without Adobe Flash, via HTML5. But that's all fine; no issues should come of that.

The status of Blender (February 05, 2013, 16:21 UTC)

So after my recent complaints about the way Blender is packaged upstream, it's probably a good idea to see what the current status on the topic is.

First of all, upstream has at least been discussing how to deal with this kind of complaint: while some commenters talked about leaving Gentoo because of our decision not to bump to 2.65a (yet), with the idea that it'd be much easier to have Blender on Debian, Arch Linux or whatever else, it turns out that Gentoo was not alone in having trouble with Blender, and indeed Matteo asked for our help with patching, at least for libav-9 support.

As far as Gentoo is concerned, while I keep getting bugs requesting an update to version 2.65a, I've basically been closing them every time, as none of the reporters seem to care about getting it right — and I really don't want to get a crappy ebuild in, as I'd be the one picking up the pieces anyway. Mostly, what we need is a Blender ebuild that does use CMake, but also does not use the bundled copies of libraries we already have in the system. The main issue here is Bullet, which requires a version bump, possibly to a pre-release snapshot of 2.82, due to the patches that are applied on top of the copy that comes with Blender.

Today I actually had to shoot down a request for a live ebuild; due to the quantity of patches that we end up having to apply, we’re not going to get a live ebuild, full stop.

Unfortunately, this also left us for a long time with the old, buggy, and bitrotting version 2.49b, which was marked stable. That stopped today as, with the agreement of at least some of the arch team members, I masked Blender altogether and removed version 2.49b-r2 and its patches from the tree. If you do want to use Blender now, you've got to unmask it. While this could be considered dropping the ball on it, it just makes explicit that we haven't been supporting version 2.49b for a long time already.

No, don’t ask for it to be re-added slotted. Upstream is not maintaining a 2.4x branch, so we won’t be doing that either.

So right now, if you want to help, start by preparing (upstreamable, or even better, upstreamed) patches that make it possible to select, via CMake, system libraries in place of most of the bundled ones. Another thing that would be very useful at this point is a separate ebuild for libmv, even one with bundled libraries to start with; that would at least stop the multi-level madness, and we would end up with good old single-level bundled libraries.

February 03, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)

Warning, this is a long post written over some hours. It is a brain dump of my thoughts regarding user interfaces and IT in general.

Technology is a funny beast, largely because people so often get confused about what "technology" really is. Wind back the clock a few hundred millennia, and the concept of stone tools was all the rage. Then came metallurgy, mechanisation and industrialisation; with each successive era comes a new wave of "technology".

Today though, apparently it’s only these electronic gadgets that need apply. Well, perhaps a bit unfair, but the way some behave, you could be forgiven for thinking this.

What is amusing, though, is when some innovation gets dreamt up, becomes widespread (or perhaps not), then gets forgotten about, and is later re-discovered. Nowhere have I noticed this more than in the field of user interfaces.

My introduction to computing

Now I'll admit I'm hardly an old hand in the computing world. Not by a long shot. My days of computing go back no further than about the late '80s. My father, working for Telecom Australia (as they were then known), brought home a laptop computer.

A "luggable" by today's standards, it had no battery, required 240V AC, and had a smallish monochrome plasma display with CGA graphics. The machine sported an Intel 80286 with an 80287 maths co-processor, maybe 2MB RAM tops, maybe a 10MB HDD, and a 3.5″ floppy drive. The machine was about 4 inches high when folded up.

It, of course, ran the DOS operating system. Not sure what version, maybe MS-DOS 5. My computing life began with simple games that you launched by booting the machine up, sticking a floppy disk (a device you now only ever see in pictorial form next to the word "Save") into the drive, firing up X-Tree Gold, and hunting down the actual .exe file to launch stuff.

Later on I think we did end up setting up QuikMenu but for a while, that’s how it was. I seem to recall at one point my father bringing home something of a true laptop, something that had a monochrome LCD screen and an internal battery. A 386 of some sort, but too little RAM to run those shiny panes of glass from Redmond.

Windows

I didn't get to see this magical "Windows" until about 1992 or '93 or so, when my father brought home a brand-new desktop: an Intel 486DX running at 33MHz, 8MB RAM, something like a 150MB HDD, and a new luxury, a colour VGA monitor. It also had Windows 3.1.

So, as many may have gathered, I've barely known computers without a command line. Through my primary school years I moved from just knowing the basics of DOS to knowing how to maintain the old CONFIG.SYS and AUTOEXEC.BAT boot scripts, dealing with WIN.INI, and fiddling around with the PIF editor to get cantankerous DOS applications working.

Eventually I graduated to QBasic and learning to write software. Initially with only the commands PRINT and PLAY, baby steps that just spewed rubbish on the screen and made lots of noise with the PC speaker, but it was a start. I eventually learned how to make it do useful things, and even dabbled with other variants like CA Realizer BASIC.

My IT understanding revolved entirely around DOS and the IBM PC clone, however. I did, from time to time, get to look at Apple's offerings: at school there was the odd Apple computer, I think one Macintosh and a few Apple IIs. With the exception of old-world MacOS, I had not experienced a desktop computer OS lacking a command line.

Around this time frame, a second computer appeared. This new one was an AMD Am486DX4 100MHz with, I think, 16MB or 32MB RAM - I can't recall exactly (it was 64MB and an Am5x86 133MHz by the time the box officially retired). It was running a similar, but different, OS: Windows NT Workstation 3.1.

At this point we had a decent little network set up with the two machines connected via RG58 BNC-terminated coax. My box moved to Windows for Workgroups 3.11, and we soon had network file sharing and basic messaging (Chat and WinPopup).

Windows 95

Mid 1996, and I graduated to a new computer. This one was a Pentium 133MHz, 16MB RAM, 1GB HDD, and it ran the latest consumer OS of the day, Windows 95. Well, throw out everything I knew about the Program Manager. It took me a good month or more to figure out how to make icons on the desktop to launch DOS applications without resorting to the Start → Run → Browse dance.

After much rummaging through the Help, looking at various tutorials, I stumbled across it quite by accident — the right-click on the desktop, and noticing a menu item called “New”, with a curious item called “Shortcut”.

I found out some time later that yes, Windows 95 did in fact have a Program Manager, although the way Windows 95 renders minimised MDI windows meant it didn't have the old feel of the earlier user interface. I also later found out how to actually get at that Start menu and re-arrange it to my liking.

My father’s box had seen a few changes too. His box moved from NT 3.1, to 3.5, to 3.51 and eventually 4.0, before the box got set aside and replaced by a dual Pentium PRO 200MHz with 64MB RAM.

Linux

It wasn't until my father was going to university for a post-grad IT degree that I got introduced to Linux. In particular, Red Hat Linux 4.0. Now, if people think Ubuntu is hard to use, my goodness, you're in for a shock.

This got tossed onto the old 486DX4/100MHz box, where I first came to experiment.

Want a GUI? Well, after configuring XFree86, type 'startx' at the prompt. I toiled with a few distributions; we had these compilation sets which came with Red Hat, Slackware and Debian (potato I think). The first thing I noticed was the desktop: it sorta looked like Windows 95.

The window borders were different, but I instantly recognised the "Start" button. It was FVWM2 with the FVWMTaskBar module. My immediate reaction was "Hey, they copied that!", but then it was pointed out to me that this desktop environment predated the early "Chicago" releases by at least a year.

The machines at the uni were slightly different again; these ones did have more Win95-ish borders on them: FVWM95.

What attracted me to this OS initially was the games. Not the modern first person shooters, but games like Xbilliard, Xpool, hextris, games that you just didn’t see on DOS. Even then, I discovered there were sometimes ports of the old favourites like DOOM.

The OS dance

The years that followed were, for me, an oscillation between Windows 3.1/PC DOS 7, Windows 95, Slackware Linux, Red Hat Linux, and a little later on Mandrake Linux, Caldera OpenLinux, SuSE Linux, SCO OpenServer 5, and even OS/2.

Our choice was mainly versions of SuSE or Red Hat, as the computer retailer near us sold boxed sets of them. At the time our Internet connection was via a beloved 28.8kbps dial-up modem link with a charge of about $2/hr, so downloading distributions was simply out of the question.

During this time I became a lot more proficient with Linux, in particular when I used Slackware. I experimented with many different window managers including: twm, ctwm, fvwm, fvwm2, fvwm95, mwm, olvwm, pmwm (SCO’s WM), KDE 1.0 (as distributed with SuSE 5.3), Gnome + Enlightenment (as distributed with Red Hat 6.0), qvwm, WindowMaker, AfterStep.

I got familiar with the long-winded xconfigurator tool, and even got good at making an educated guess at modelines when I couldn't find the specs in the monitor documentation. In the early days it was not just necessary to know what video card you had, but also what precise RAMDAC chip it had!

Over time I settled on KDE as my desktop of choice under Linux. KDE 1.0 had a lot of flexibility and ease of use that many of its contemporaries lacked. Gnome+Enlightenment looked alright at first, but the inability to change how the desktop looked without making your own themes bothered me; the point-and-click of KDE's control panel just suited me, as it was what I was used to in Windows 3.1 and 95. Not having to fuss around with the .fvwm2rc (or equivalent) was a nice change too. Even adding menu items to the K menu was easy.

One thing I had grown used to on Linux was how applications install themselves in the menu in a logical manner. Games got stashed under Games, utilities under Utilities, internet stuff under Internet, office productivity tools under Office. Every Linux install I had, the menu was neatly organised. Even the out-of-the-box FVWM configuration had some logical structure to it.

As a result, whenever I did use Windows on my desktop, a good amount of time was spent re-arranging the Start menu to make the menu more logical. Many a time I’d open the Start menu on someone else’s computer, and it’d just spew its guts out right across the screen, because every application thinks itself deserving of a top-level place below “Programs”.

This was a hang-over from the days of Windows 3.1. The MDI-style interface that was Program Manager couldn't manage anything other than program groups at the top level, with program items below that. Add to this a misguided belief that their product was more important than anyone else's, and application vendors simply repeated the status quo when Windows 95/NT4 turned up.

This was made worse if someone installed Internet Explorer 4.0. It invaded like a cancer. Okay, now your screenful of Start menu didn't spew out across the screen; it just crammed itself into a single column with tiny little arrows at the top and bottom to scroll past the program groups one by one.

Windows 95 Rev C even came with IE4; however, there was one trick. If you left the Windows 95 CD in the drive on the first boot, it'd pop up a box telling you that the install was not done; you'd click that away and IE4 Setup would do its damage. Eject the CD first, and you were left with pristine Windows 95. Then when IE5 came around, it could safely be installed without it infecting everything.

Windows 2000

I never got hold of Windows 98 on my desktop, but at some point towards the very end of last century I got my hands on a copy of Windows 2000 Release Candidate 2. My desktop was still a Pentium 133MHz, although it had 64MB RAM now and a few more GB of disk space.

I loaded it on, and it surprised me just how quick the machine seemed to run. It felt faster than Windows 95. That said, it wasn’t all smooth sailing. Where was Network Neighbourhood? That was a nice feature of Windows 95. Ohh no, we have this thing called “My Network Places” now. I figured out how to kludge my own with the quick-launch, but it wasn’t as nice since applications’ file dialogues knew nothing of it.

The other shift was that Internet Explorer still lurked below the surface, and unlike Windows 95, there was no getting rid of it. My time using Linux had made me a Netscape user, so for me it was unnecessary bloat. Windows 2000 did similar Start menu tricks, including "hiding" applications that it thought I didn't use very often.

If it’s one thing that irritates me, it’s a computer hiding something from me arbitrarily.

Despite this, it didn't take me as long to adapt as the move from Windows 3.1 to 95 had, as the UI was still much the same. A few tweaks here and there.

In late 2001, my dinky old Pentium box got replaced. In fact, we replaced both our desktops. Two new dual Pentium III 1GHz boxes, 512MB RAM. My father’s was the first, with a nVidia Riva TNT2 32MB video card and a CD-ROM drive. Mine came in December as a Christmas/18th birthday present, with a ATI Radeon 7000 64MB video card and a DVD drive.

I was to run the old version of Windows NT 4.0 we had. Fun ensued with Windows NT not knowing anything about >8GB HDDs, but Service Pack 6a sorted that out, and the machine ran. I got Linux on there as well (SuSE initially) and apart from the need to distribute ~20GB of data between many FAT16 partitions of about 2GB each (Linux at this time couldn’t write NTFS), it worked. I had drive letters A through to O all occupied.

We celebrated by watching a DVD for the first time (it was our first DVD player in the house).

NT 4 wasn't too bad to use; it was more like Windows 95, and I quickly settled into it. That said, its tenure was short-lived. The moment anything went wrong with the installation, I found I was right back to square one, as the Emergency Repair Disk did not recognise the 40GB HDD. I rustled up that old copy of Windows 2000 RC2 and found it worked okay, but it wouldn't accept the Windows 2000 drivers for the video card. So I nicked my father's copy of Windows 2000 and ran that for a little while.

Windows XP was newly released, and so I did enquire about a student-license upgrade, but being a high-school student, Microsoft’s resellers would have none of that. Eventually we bought a new “Linux box” with an OEM copy of Windows 2000 with SP2, and I used that. All legal again, and everything worked.

At this point, I was dual-booting Linux and Windows 2000. Just before the move to ADSL, I had about 800 hours to use up (our dial-up account was one that accumulated unused hours) and so I went on a big download-spree. Slackware 8.0 was one of the downloaded ISOs, and so out went SuSE (which I was running at the time) and in went Slackware.

Suddenly I felt right at home. Things had changed a little, and I even had KDE there, but I felt more in control of my computer than I had in a long while. In addition to using an OS that just lets you do your thing, I had also returned from my OS travels, having gained an understanding of how this stuff works.

I came to realise that point-and-click UIs are fine when they work, hell when they don’t. When they work, any dummy can use them. When they don’t, they cry for the non-dummies to come and sort them out. Sometimes we can, sometimes it’s just reload, wash-rinse-repeat.

Never was this brought home to me more than when we got hold of a copy of Red Hat 8.0. I tried it for a while, but was immediately confronted by the inability to play the MP3s I had acquired (mostly via the sneakernet). Ogg/Vorbis was in its infancy, and I noticed that at the time there didn't seem to be any song metadata like what ID3 tags provided - or at least XMMS didn't show it.

A bit of time back on Slackware had taught me how to download sources, read the INSTALL file and compile things myself. So I just did what I always did. Over time I ran afoul of the Red Hat Package Manager, and found myself going in circles, running rpm over and over trying to resolve dependency hell.

On top of this, there was now the need to man-handle the configuration tools that expected things the way the distribution packagers intended them.

Urgh, back to Slackware I go. I downloaded Slackware 9.0 and stayed with that a while. Eventually I really did go my own way with Linux From Scratch, which was good, but a chore.

These days I use Gentoo, and while I do have my fights with Portage (ohh slot-conflict, how I love you!!!), it does usually let me do what I want.

A time for experimentation and learning

During this time I was mostly a pure KDE user: KDE 2.0, then 3.0. I was learning all sorts of tricks reading HOWTO guides on the Linux Documentation Project. I knew absolutely no one around me who used Linux; in fact, on Linux matters, I was the local "expert". Where my peers (in 2002) might have seen it once or twice, I had been using it since 1996.

I had acquired some more computers by this time, and I was experimenting with setting up dial-up routers with proxy servers (Squid, then IP Masquerade), turning my old 386 into a dumb terminal with XDMCP, and getting the SCO OpenServer box (our old 486DX4/100MHz) to interoperate with the rest.

The ability of the Windows boxes to play along steadily improved over this time, from ethernet frames that passed like ships in the night (circa 1996; NetBEUI and IPX/SPX on the Windows 3.1 side, TCP/IP on Linux) through to begrudging communication over TCP/IP with newer releases of Windows.

Andrew Tridgell’s SAMBA package made its debut in my experimentation, and suddenly Windows actually started to talk sensible things to the Linux boxes and vice versa.

Over time the ability of Linux machines and Windows boxes to interoperate improved, each year climbing another layer of the OSI model. I recall some time in 1998 getting hold of an office suite called ApplixWare, but in general when I wanted word processing I turned to Netscape Composer and Xpaint as my nearest equivalents.

It wasn’t until 2000 or so that I got hold of StarOffice, and finally had an office suite that could work on Windows, Linux and OS/2 that was comparable to what I was using at school (Microsoft Office 97).

In 2002 I acquired an old Pentium 120MHz laptop, and promptly loaded that with Slackware 8 and OpenOffice 1.0. KDE 3.0 chugged with 48MB RAM, but one thing the machine did well was suspend and resume. A little while later we discovered eBay and upgraded to a second-hand Pentium II 266MHz, a machine that served me well into the following year.

For high-school work, this machine was fine. OpenOffice served the task well, and I was quite proficient at using Linux and KDE. I even was a trend-setter… listening to MP3s on the 15GB HDD a good year before the invention of the iPod.

Up to this point, it is worth mentioning that in the Microsoft world the UI hadn't changed all that much between Windows 95 and 2000/ME. Network Neighbourhood was probably the thing I noticed the most. At this time I was unusual amongst my peers in that I had more than one computer at home, and they all talked to each other. Hence why Windows 95/98 through to 2000/ME didn't create such an uproar.

What people DID notice was how poorly Windows ME (and the first release of 98) performed “under the hood”. More so for the latter than the former.

Windows XP

Of course, I did mention trying to get hold of Windows XP earlier. It wasn't until later in 2002, when the school was replacing a lab of old AMD K6 machines with brand-new boxes (and their old Windows NT 4 server with a Novell one), that I got to see Windows XP up close.

The boxes were to run Windows 2000 actually, but IBM had just sent us the boxes preloaded with XP, and we were to set up Zenworks to image them with Windows 2000. This was during my school holidays and I was assisting in the transition. So I fired up a box and had a poke around.

Up came the initial set up wizard, with the music playing, animations, and a little question mark character which at first did its silly little dance telling you how it was there to help you. Okay, if I had never used Windows before, I’d be probably thankful this was there, but this just felt like a re-hash of that sodding paperclip. At least it did eventually shut up and sit in the corner where I could ignore it. That wasn’t the end of it though.

Set up the user account, logged in, and bam, another bubble telling me to take the tour. In fact, to this day that bubble is one of the most annoying things about Windows XP because 11 years on, on a new install it insists on bugging you for the next 3 log-ins as if you’ve never used the OS before!

The machines had reasonable 17″ CRT monitors; the first thing I noticed was just how much extra space was taken up by the nice rounded corners, and the absence of the My Computer and My Network Places icons on the desktop. No, these were in the Start menu now. Where are all the applications? Hidden under All Programs, of course.

Hidden and not even sorted in any logical order, so if you’ve got a lot of programs it will take you a while to find the one you want, and even longer to find it’s not actually installed.

I took a squiz at the control panel. Now the control panel hadn’t changed all that much from Windows 3.1 days. It was still basically a window full of icons in Windows 2000/ME, albeit since Windows 95, it now used Explorer to render itself rather than a dedicated application.

Do we follow the tradition so that old hands can successfully guide the novices? No, we’ll throw everyone in the dark with this Category View nonsense! Do the categories actually help the novices? Well for some tasks, maybe, but for anything advanced, most definitely not!

Look and feel? Well if you want to go back to the way Windows used to look, select Classic theme, and knock yourself out. Otherwise, you’ve got the choice of 3 different styles, all hard-coded. Of course somewhere you can get additional themes. Never did figure out where, but it’s Gnome 1.0-style visual inflexibility all over again unless you’re able to hack the theme files yourself.

No thank-you, I’ll stick with the less pixel-wasting Classic theme if you don’t mind!

Meanwhile in Open Source land

As we know, it was a long break between releases of Windows XP, and over the coming years we heard much hype about what was to become Vista. For years to come though, I’d be seeing Windows XP everywhere.

My university work horses (I had a few) all exclusively ran Linux however. If I needed Windows there was a plethora of boxes at uni to use, and most of the machines I had were incapable of running anything newer than Windows 2000.

I was now more proficient in front of a Linux machine than any version of Windows. During this time I was using KDE most of the time. Gnome 2.0 was released and I gave it a try, but it didn't really grab me. One day I recall accidentally breaking the KDE installation on my laptop. Needing a desktop, I just reached for whatever I had, and found XFCE3.

I ran XFCE for about a month or two. I don't recall exactly what brought me back to KDE; perhaps the idea of a dock for launching applications didn't grab me. AfterStep, after all, did something similar.

In 2003, one eBay purchase landed us with a Cobalt Qube2 clone, a Gateway Microserver. Experimenting with it, I managed to brick the (ancient) OS and turned the thing into a lightish door stop. I had become accustomed to commands like `uname`, which could tell me the CPU amongst other things.

I was used to seeing i386, i486, i586 and i686, but this thing came back with 'mips'. What's this? I did some research and found that there was an entire Linux port for MIPS. I also found some notes on bootstrapping an SGI Indy. Well, this thing isn't an Indy, but maybe the instructions had something going for them. I toiled, but didn't get far…

Figuring it might be an idea to actually try these instructions on an Indy, we hit up eBay again, and after a few bids we were the proud owners of a used SGI Indy R4600 133MHz with 256MB RAM running IRIX 6.5. I toiled a bit with IRIX; the 4DWM seemed okay to use, but certain parts of the OS were broken. Sound never worked, and there was a port of Doom, but it'd run for about 10 seconds and then die.

We managed to get some of the disc set for IRIX, but never did manage to get the Foundation discs needed to install. My research however, led me onto the Debian/MIPS port. I followed the instructions, installed it, and hey presto, Linux on a SGI box, and almost everything worked. VINO (the video capture interface) was amongst those things that didn’t at the time, but never mind. Sound was one of the things that did, and my goodness, does it sound good for a machine of that vintage!

Needless to say, the IRIX install was history. I still have copies of IRIX 6.5.30 stashed in my archives, lacking the foundation discs. The IRIX install didn’t last very long, so I can’t really give much of a critique of the UI. I didn’t have removable media so didn’t get to try the automounting feature. The shut down procedure was a nice touch, just tap the OFF button, the computer does the rest. The interface otherwise looked a bit like MWM. The machine however was infinitely more useful to me running Linux than it ever was under IRIX.

As I toiled with Debian/MIPS on the Indy, I discovered there was a port of this for the Qube2. Some downloads later and suddenly the useless doorstop was a useful server again.

Debian was a new experience for me, and I quite liked APT. The version I installed was evidently the unstable release, so it had modern software. Liking this, I tried it on one of the other machines and was met with Debian Stab^Hle. Urgh… at the time I didn't know enough about the releases, and on my own desktop I was already using Linux From Scratch by this point.

I was considering my own distribution that would target the Indy, amongst other systems. I was already formulating ideas, and at one point I had a mishmash of about three different distributions on my laptop.

Eventually I discovered Gentoo, along with its MIPS port. Okay, not as much freedom as LFS, but very close to it. In fact, it gives you the same freedom if you can be arsed to write your own portage tree. One by one the machines got moved over, and that’s what I’ve used.

The primary desktop environment was KDE for most of them. Build times for this varied, but for most of my machines it was an overnight build. Especially for the Indy. Once installed though, it worked quite well. It took a little longer to start on the older machines, but was still mostly workable.

Up to this point, I had my Linux desktop set up just the way I liked it. Over the years the placement of desktop widgets and panels has moved around as I borrowed ideas I had seen elsewhere. KDE was good in that it was flexible enough for me to fundamentally change many aspects of the interface.

My keybindings were set up to be able to perform most window operations without the need of a mouse (useful when juggling the laptop in one’s hands whilst figuring out where the next lecture was), notification icons and the virtual desktop pager were placed to the side for easy access. The launcher and task bar moved around.

Initially down the bottom, it eventually wound up on the top of the screen much like OS/2 Warp 4, as that’s also where the menu bar appears for applications — up the top of the window. Thus minimum mouse movement. Even today, the Windows 7 desktop at work has the task bar up the top.

One thing that frustrated me with Windows at the time was the complete inability to change many aspects of the UI. Yes you could move the task bar around and add panels to it, but if you wanted to use some other keystroke to close the window? Tough. ALT-F4 is it. Want to bring up the menu items? Hit the logo key, or failing that, CTRL-ESC. Want to maximise a window? Either hit ALT-Space, then use the arrows to hit Maximise, or start reaching for the rodent. Far cry from just pressing Logo-Shift-C or Logo-Shift-X.

Ohh, and virtual desktops? Well, people have implemented crude hacks to achieve something like it. In general, anything I’ve used has felt like a bolt-on addition rather than a seamless integration.

I recall commenting about this, and someone pointing out this funny thing called “standardisation”. Yet, I seem to recall the P in PC standing for personal. i.e. this is my personal computer, no one else uses it, thus it should work the way I choose it to. Not what some graphic designer in Redmond or Cupertino thinks!

The moment you talk about standardisation or pining for things like Group Policy objects, you’ve crossed the boundary that separates a personal computer from a workstation.

Windows Vista

Eventually, after much fanfare, Microsoft did cough up a new OS. And cough up would be about the right description for it. It was behind schedule, and as a result, saw many cut backs. The fanciful new WinFS? Gone. Palladium? Well probably a good thing that did go, although I think I hear its echoes in Secure Boot.

What was delivered is widely considered a disaster today. And it came at just the wrong time: just as the market for low-end "netbook" computers exploded, just the sort of machine that Windows Vista ran worst on.

Back in the day Microsoft recommended 8MB RAM for Windows 95 (and I can assure you it will even run in 4MB), but no one in their right mind would tolerate the constant rattle from the paging disk. The same could be said for Windows NT's requirement of 12MB RAM and a 486. Consumers soon learned that "Windows Vista Basic ready" was a warning label: steer clear, or insist on the machine coming with Windows XP instead.

A new security feature, UAC, ended up making more of a nuisance of itself than anything, prompting the knee-jerk reaction of shooting the messenger.

The new Aero interface wastes even more screen pixels than the “Luna” interface of Windows XP. And GPU cycles to boot. The only good thing about it was that the GPU did all the hard work putting windows on the screen. It looked pretty, when the CPU wasn’t overloaded, but otherwise the CPU had trouble keeping up and the whole effect was lost. Exactly what productivity gains one has by having a window do three somersaults before landing in the task bar on minimise is lost on me.

Windows Vista was the last release that could do the old Windows 95 style start menu. The newer Vista menu was even more painful than the one in XP. In XP, the All Programs sub-menu still opened out much like previous editions did (complete with the annoying "compress myself into a single column" behaviour); in Vista, that menu was entrapped inside a small scrolling pane.

Most of Vista's problems were below the surface. Admittedly Service Pack 1 fixed a lot of them, but by then it was already too late. No one wanted to know.

Even with the service packs, it still didn't perform up to par on the netbooks that were common for the period. The original netbook, for what it's worth, was never intended to run any version of Windows; the entire concept came out of the One Laptop Per Child project, which was always going to be Linux based.

Asus developed the EeePC as one of the early candidates for the OLPC project. When another design got selected, Asus simply beefed up the spec, loaded on Xandros and pushed it out the door. Later models came with Windows XP, and soon other manufacturers pitched in. This was a form factor with specs that ran Windows XP well; unfortunately Vista's Aero interface was too much for the integrated graphics typically installed, and the memory requirements had the disk drive rattling constantly, sapping the machine of valuable kilojoules when running from the battery.

As to my run-in with Vista? For my birthday one year I was given a new laptop. This machine came pre-loaded with it, and of course, the very first task I did was spend a good few hours making the recovery discs then uninstalling every piece of imaginable crap that manufacturers insist on prebloating their machines with.

For the work I needed to do, I actually needed to run Linux. The applications I use and depend on for university work, whilst compatible with Windows, run as second class citizens due to their Unix heritage. Packages like The Gimp, gEDA, LaTeX, git, to name a few, never quite run as effortlessly on Windows. The Gimp had a noticeable lag when using my Wacom tablet, something that put my handwriting way off.

Linux ran on it, but with no support for the video card, GUI related tasks were quite choppy. In the end, it proved to be of little use to me. My father at the time was struggling along with his now aging laptop, using applications and hardware that did not support Windows Vista. I found a way to exorcise Windows Vista from the new machine, putting Windows XP in its place.

The bloat becomes infectious

What was not lost on me was that each new iteration of full desktops like KDE brought in more dependencies. During my latter years at university, I bought myself a little netbook. I was doing work for Gentoo/MIPS with their port of Linux, and thus a small machine that would run what I needed for university, and could serve as a test machine during my long trips between The Gap and Laidley (where I was doing work experience for Eze Corp), would go down nicely. So I fired off an email and a telegraphic money transfer over to Lemote in China, and on the doorstep turned up a Yeeloong netbook.

I dual booted Debian and Gentoo on this machine, in fact I still do. Just prior to buying this machine, I was limping along with an old Pentium II 300MHz laptop. I did have a Pentium 4M laptop, but a combination of clumsiness and age slowly caused the machine’s demise. Eventually it failed completely, and so I just had to make do with the PII which had been an earlier workhorse.

One thing I noticed: KDE 3.0 was fine on this laptop. Even 3.5 was okay. But when you've only got 300MHz of CPU and 160MB RAM, the modern KDE releases were just a bit too much. Parts of KDE were okay, but the main desktop chugged along. Looking around for a workable desktop, I installed FVWM. I found the lack of a system tray annoyed me, so in went stalonetray. Then came maintaining the menu. Well, modern FVWM comes with a Perl script that automates this, so in that went.

Finally, a few visual touches, a desktop background loader and some keybinding experiments, and I pretty much had what KDE gave me, except that it started in a fraction of the time and built much faster. When the Yeeloong turned up and I got Gentoo onto it, this FVWM configuration was the first thing to be installed, and so I had a sensible desktop there too.

Eventually I did get KDE 4 working on the Yeeloong, sort of. It was glitchy on MIPS. KDE 3.5 used to work without issue but 4.0 never ran quite right. I found myself using FVWM with just the bits of KDE that worked.

As time went on, university finished, and the part-time industrial experience became full-time work. My work at the time revolved around devices that needed Windows and a parallel port to program them. We had another spare P4 laptop, so I grabbed that, tweaked Windows XP on there to my liking, and got to work. The P4 "lived" at Laidley and was my workstation of sorts; the Yeeloong came with me to and from there. Eventually that work finished, and through those connections I came to another company (Jacques Electronics). In the new position, it was Linux development on ARM.

The Windows installation wasn't so useful any more. So in went a copy of the GParted LiveCD to tell Windows to shove, followed by a Gentoo disc and a Stage 3 tarball. For a while my desktop was just the Linux command line, then I got X and FVWM going, and finally, as I worked, KDE.

I was able to configure KDE once again, and on i686 hardware it ran as it should. It felt like home, so it stayed. Over time the position at Jacques finished up, and I came to work at VRT, where I am to this day. The P4 machine stayed at the workplace, with the netbook being my personal machine away from work.

It's worth pointing out that at this point, although Windows 7 had been around for some time, I had yet to use it first hand.

My first Apple

My initial time at VRT was spent working on a Python-based application to ferry metering data from various energy meters to various proprietary systems. The end result was a package called Metermaster that slotted in alongside MacroView, and it forms one of the core components in VRT's Wages Hub system. While MacroView can run on Windows, and does for some (even on Cygwin), VRT mainly deploys it on Ubuntu Linux. So my first project was all Linux based.

During this time, my new work colleagues were assessing my skills and looking at what I could work on next. One of the discussions revolved around working on some of their 3D modelling work using Unity3D. Unity3D at the time ran on just two desktop OSes: Windows and MacOS X.

My aging P4 laptop had a nVidia GeForce 420Go video device with 32MB memory. In short, if I hit that thing with a modern 3D games engine, it’d most likely crap itself. So I was up for a newer laptop. That got me thinking, did I want to try and face Windows again, or did I want to try something new?

MacOS was something I had only had fleeting contact with. MacOS X I had actually never used myself. I knew a bit about it, such as its being based on the FreeBSD userland and the Mach microkernel. I saw a 2008 model MacBook with a 256MB video device inbuilt going cheap, so I figured I'd take the plunge.

My first impressions of MacOS X 10.5 were okay. I had a few technical glitches at first; namely, MacOS X would sometimes hang during boot, showing nothing more than an Apple logo and a swirling icon. Some updates refused to download: they'd just hang and the time estimate would blow out. Worst of all, they wouldn't resume; they'd just start at the beginning.

In the end I wandered down to the NextByte store in the city and bought a copy of MacOS X 10.6. I bought the disc on April 1st, 2011, and it's the one and only disc the DVD drive in the MacBook won't accept. The day I bought it I was waiting at the bus stop and figured I'd have a look and see what docs there were. I put the disc in, heard a few noises, and it spat the disc out again. So I put it back in, and out it came. Figuring this was a defective disc, I put it back in once more and marched back down to the shop, receipt in one hand, cantankerous laptop in the other. So much for Apple kit "just working".

Then the laptop decided it liked the pussy cat disc so much it wouldn't give it back! Cue about 10 minutes in the service bay getting the disc to eject. Finally the machine relented and spat the disc out. That night it tried the same tricks, so I grabbed an external DVD drive and did the install that way. Apart from this, OSX 10.6 has given me no problems in that regard.

As for the interface? I noticed a few features that I appreciated from KDE, such as the ability to modify some of the standard shortcuts, although not all of them. Virtual desktops get called Spaces in MacOS X, but it's essentially the same deal.

My first problem was identifying what the symbols on the key shortcuts meant. Command and Shift were simple enough, but the symbol used to denote "Option" was not intuitive, and I can see some people getting confused by the one for Control. That said, once I found where the Terminal lived, I was right at home.

File browsing? Much like what I’m used to elsewhere. Stick a disc in, and it appears on the desktop. But then to eject? The keyboard eject button didn’t seem to work. Then I remembered a sarcastic comment one of my uncles made about using a Macintosh, requiring you to “throw your expensive software in the bin to eject”. So click the CD icon, drag to the rubbish bin icon, voilà, out it comes.

Apple's applications have always put the menu bar of the application right up the top of the screen. I found this somewhat awkward when working with multiple applications, since you find yourself clicking on (or command-tabbing over to) one window, accessing the menu there, then clicking (or command-tabbing) over to the other and accessing the menu up the top of the screen again.

Switching applications with Command-Tab swaps between entire applications. That's okay if you're working with two completely separate applications, not so great if you're working on many instances of the same application. Exposé works, and probably works quite well if the windows are visually distinct when zoomed out, but if they look similar, one is reminded of Forrest Gump: "Life's like a box of chocolates, you never know what you're gonna get!"

The situation is better if you hit Command-Tab, then press the down arrow: that gives you an Exposé of just the windows belonging to that application. Still a far cry from hitting Alt-Tab in FVWM to bring up the Window List and just cycling through. Switching between MacVim instances was a pain.

As for the fancy animations? Exposé looks good, but when the CPU's busy (and I do give it a hiding), the animation just creeps along at a snail's pace. I'll tolerate it if it's over and done with within a second, but when it takes 10 seconds to slowly zoom out, I'm left sitting there going "Just get ON with it!" I'd be fine if it just skipped the animation and switched from normal view to Exposé in a single frame. Unfortunately there's nowhere to turn this off that I've found.

The dock works to an extent. It suffers a bit if you have a lot of applications running all at once; there's only so much screen real-estate. A nice feature though is the way it auto-hides and zooms.

When the mouse cursor is away from the dock, it drops off the edge of the screen. Since the user configures this and sets up which edge it clings to, this is a reasonable option. As the mouse is brought near the space where the dock resides, it slowly pops out to meet the cursor. Not straight away, but progressively as the cursor gets closer.

When fully extended, the icons nearest the cursor enlarge, allowing the rest to remain visible without occupying too much screen real-estate. The user is free to move the cursor amongst them, the ones closest zooming in, the ones furthest away zooming out. Moving the cursor away causes the dock to slip away again.

Linux on the MacBook

And it had to happen: eventually Linux did wind up on there. Again KDE initially, but again I found that KDE was just getting too bloated for my liking. It took about 6 months of running KDE before I started looking at other options.

FVWM was of course where I turned first; in fact, it was what I used before KDE had finished compiling. I came to the realisation that I was mostly just using windows full-screen. So I thought, what about a tiling window manager?

Looking at a couple, I settled on Awesome. At first I tried it for a bit, didn’t like it, reverted straight back to FVWM. But then I gave it a second try.

Awesome works okay; it's perhaps not the most attractive to look at, but it's functional. At the end of the day looks aren't what matter, functionality is. Awesome was promising in that it uses Lua for its configuration. It had a lot of the modern window manager features for interacting with today's X11 applications. I did some reading of the handbook, did some tweaking of the configuration file and soon had a workable desktop.

The default keybindings were actually a lot like what I already used, so that was a plus. In fact, it worked pretty well. Where it let me down was in window placement: in particular, floating windows, and dividing the screen.

Awesome of course works with a number of canned window layouts. It can make a window full screen (hiding the Awesome bar) or near full-screen, or show two windows above/below each other or side by side. Windows are given numerical tags which cause those windows to appear whenever a particular tag is selected, much like virtual desktops, only multiple tags can be active on a screen.

What irritated me most was trying to find a layout scheme that worked for me. I couldn’t seem to re-arrange the windows in the layout, and so if Awesome decided to plonk a window in a spot, I was stuck with it there. Or I could try cycling through the layouts to see if one of the others was better. I spent much energy arguing with it.

Floating windows were another hassle. Okay, modal dialogues need to float, but there was no way to manually override the floating status of a window. The Gimp was one prime example. Okay, you can tell it to not float its windows, but it still took some jiggery to get each window to sit where you wanted it. And not all applications give you this luxury.

Right now I’m back with the old faithful, FVWM.

FVWM

FVWM, as I have it set up on Gentoo

Windows 7

When one of my predecessors at VRT left to work for a financial firm down in Sydney, I wound up inheriting his old projects, and the laptop he used to develop them on. The machine dual-boots Ubuntu (with KDE) and Windows 7, and seeing as I already have the MacBook set up as I want it, I use that as my main workstation and leave the other booted into Windows 7 for those Windows-based tasks.

Windows 7 is much like Windows Vista in the UI. Behind the scenes, it runs a lot better. People aren't kidding when they say Windows 7 is "Vista done right". However, recall I mentioned Windows Vista being the last to be able to do the classic Start menu? Maybe I'm dense, but I'm yet to spot the option in Windows 7. It isn't where they put it in Windows XP or Vista.

So I’m stuck with a Start menu that crams itself into a small bundle in one corner of the screen. Aero has been turned off in favour of a plain “classic” desktop. I have the task bar up the top of the screen.

One new feature of Windows 7 is that the buttons of running applications by default only show the icon of the application. Clicking that reveals tiny wee screenshots with wee tiny title text. More than once I’ve been using a colleague’s computer, he’ll have four spreadsheets open, I’ll click the icon to switch to one of them, and neither of us can figure out which one we want.

Thankfully you can tell it to not group the icons, showing a full title next to the icon on the task bar, but it’s an annoying default.

Being able to hit Logo-Left or Logo-Right to tile a window on the screen is nice, but I find more often than not I wind up hitting that when I mean to hit one of the other meta keys, and thus I have to reach for the rodent and maximise the window again. This is more to do with the key layout of the laptop than Windows 7, but it's Windows 7's behaviour and the inability to configure it that exacerbates the problem.

The new start menu, I'd wager, is why Microsoft saw so many people pinning applications to the task bar. It's not just for quick access; in some cases it's the only bleeding hope they'd ever find their application again! Sure, you can type the name of the application, but circumstance doesn't always favour that option. Great if you know exactly what the program is called, not so great if it's someone else's computer and you need to know if something is even there.

Thankfully most of the effects can be turned off, and so I'm left with a mostly Spartan desktop that just gets the job done. I liken using Windows to a business trip abroad: you're not there for pleasure, and there's nothing quite like home sweet home.

Windows 8

Now I get to the latest instalment of desktop operating systems. I have yet to actually use it myself, but looking at a few screenshots, a few thoughts:

  • “Modern”: apart from being a silly name for a UI paradigm (what do you call it when it isn’t so “modern” anymore?), looks like it could really work well on the small screen. However, it relies quite heavily on gestures and keystrokes to navigate. All very well if you set these up to suit how you operate yourself, but not so great when forced upon you.
  • Different situations will call for different interface methods. Sometimes it is convenient to reach out and touch the screen, other times it’ll be easier to grab the rodent, other times it’ll be better to use the keyboard. Someone should be able to achieve most tasks (within reason) with any of the above, and seamlessly swap between these input methods as need arises.
  • “Charms” and “magic corners” makes the desktop sound like it belongs on the set of a Harry Potter film
  • Hidden menus that jump out only when you hit the relevant corner or edge of the screen by default without warning will likely startle and confuse
  • A single flat hierarchy of icons^Wtiles for all one’s applications? Are we back to MS-DOS Executive again?
  • “Press the logo key to access the Start screen”, and so what happens if the keyboard isn’t in convenient reach but the mouse is?
  • In a world where laptops are out-numbering desktops and monitors are getting wider faster than they’re getting taller, are extra-high ribbons really superior to drop-down menus for anyone other than touch users?

Apparently there's a tutorial when you start up Windows 8 for the first time. Comments have been made about how people have been completely lost working with the UI until they saw this tutorial. That should be a clue at least. Keystrokes are really just a shortened form of command line. Even the Windows 7 start menu, with its search bar, is looking more like a stylised command line (albeit one with minimal capability).

Are we really back to typing commands into pseudo command line interfaces?

The Command line: what is old is new again


I recall the ad campaigns for Windows 7: on billboards, some attractive woman posing with the caption, "I'm a PC and Windows 7 was my idea".

Mmm mmm, so whose idea was Windows 8 then? There are no rounded rectangles to be seen, so clearly not Apple's. I guess how well it goes remains to be seen.

It apparently has some good improvements behind the scenes, but anecdotal evidence at the workplace suggests that the ability to co-operate with a Samba 3.5-based Windows Domain is not among them. One colleague recently bought herself a new ultrabook running Windows 8.

I'm guessing sooner or later I'll be asked to assist with setting up the Cisco VPN client and links to file shares. Another colleague, despite getting the machine to successfully connect to the office Wifi, couldn't manage to bring up a login prompt to connect to the file server; the machine instead just assumed the local username and password matched the credentials to be used on the remote server. I will have to wait and see.

Where to now?

Well I guess I’m going to stick with FVWM a bit longer, or maybe pull my finger out and go my own way. I think Linus has a point when he describes KDE as a bit “cartoony”. Animations make something easy to sell, but at the end of the day, it actually has to do the work. Some effects can add value to day-to-day tasks, but most of what I’ve seen over the years doesn’t seem to add much at all.

User interfaces are not one-size-fits-all. Never have been. Touch screen interfaces have to deal with problems like fat fingers, and so there’s a balancing act between how big to make controls and how much to fit on a screen. Keyboard interfaces require a decent area for a keypad, and in the case of standard computer keyboards, ideally, two hands free. Mice work for selecting individual objects, object groups and basic gestures, but make a poor choice for entering large amounts of data into a field.

For some, physical disability can make some interfaces a complete no-go. I'm not sure how I'd go trying to use a mouse or touch screen if I lost my eyesight, for example. I have no idea how someone minus arms would go with a tablet — if you think fat fingers are a problem, think about toes! I'd imagine the screens on those devices would often be too small to read when using such a device with your feet, unless you happen to have very short legs.

Even for those who have full physical ability, there are times when one input method will be more appropriate at a given time than another. Forcing one upon a user is just not on.

Hiding information from a user has to be carefully considered. One of my pet peeves is when you can't see some feature on screen because it is hidden from view. It is one thing if you yourself set up the computer to hide something, but quite another when it happens by default. Having a small screen area that activates and reveals a panel is fine, if the area is big enough and there is some warning that the panel is about to fly out.

As for organising applications? I’ve never liked the way everything just gets piled into the “Programs” directory of the Start Menu in Windows. It is just an utter mess. MacOS X isn’t much better.

The way things are in Linux might take someone a little discovery to find where an application has been put, but once discovered, it’s then just a memory exercise to get at it, or shortcuts can be created. Much better than hunting through a screen-full of unsorted applications.

Maybe Microsoft can improve on this with their Windows Store, if they can tempt a few application makers from the lucrative iOS and Android markets.

One thing is clear: the computer is a tool, and as such it must be able to be adapted to how the user needs to use it at any particular time if it is to maintain maximum utility.

January 31, 2013
LinuxCrazy Podcasts a.k.a. linuxcrazy (homepage, bugs)
Podcast 96 OpenRC | SystemD | Pulseaudio (January 31, 2013, 22:38 UTC)


In this podcast, comprookie talks about Gentoo and the OpenRC, udev, SystemD debate, his slacking abilities and so much less ...

Links

SystemD
http://www.freedesktop.org/wiki/Software/systemd
http://0pointer.de/blog/projects/the-biggest-myths.html
OpenRC
http://www.gentoo.org/proj/en/base/openrc/
eudev
http://www.gentoo.org/proj/en/eudev/
Gentoo udev
http://wiki.gentoo.org/wiki/Udev

Download

ogg

Markos Chandras a.k.a. hwoarang (homepage, bugs)
What happened to all the mentors? (January 31, 2013, 19:07 UTC)

I had this post in the Drafts for a while, but now it’s time to publish it since the situation does not seem to be improving at all.

As you probably know, if you want to become a Gentoo developer, you need to find yourself a mentor[1]. This used to be easy. I mean, all you had to do was contact the teams you were interested in contributing to as a developer, and then one of the team members would step up and help you with your quizzes. However, lately I find myself in the weird situation of having to become a mentor myself, because potential recruits come back to recruiters and say that they could not find someone from the teams to help them. This is sub-optimal for a couple of reasons. First of all, time constraints: mentoring someone can take days, weeks or months. Recruiting someone after they have been trained (properly or not) can also take days, weeks or months. So somehow, I ended up spending twice as much time as I used to, and we are back to those good old days where someone needed to wait months before we fully recruited them. Secondly, a mentor and a recruiter should be different people. This is necessary for recruits to gain wider and more efficient training, as different people will focus on different areas during this training period.

One may wonder why teams are not willing to spend time training new developers. I guess this is because training people takes quite a lot of someone's time, and people tend to prefer fixing bugs and writing code to spending time training people. Another reason could be that teams are short on manpower, so they are mostly busy with other stuff and just can't do both at the same time. Others just don't feel ready to become mentors, which is rather weird because every developer was once a mentee, so it's not like they haven't done something similar before. Truth is that this seems to be a vicious circle: no manpower to train people -> fewer people are trained -> not enough manpower in the teams.

In my opinion, getting more people on board is absolutely crucial for Gentoo. I strongly believe that people must spend time training new people because a) They could offload work to them ;) and b) it’s a bit sad to have quite a few interested and motivated people out there and not spend time to train them properly and get them on board. I sincerely hope this is a temporary situation and things will become better in the future.

ps: I will be at FOSDEM this weekend. If you are there and you would like to discuss the Gentoo recruitment process or anything else, come and find me ;)

 

[1] http://www.gentoo.org/proj/en/devrel/handbook/handbook.xml?part=1&chap=2#doc_chap3

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Dealing with insecure runpaths (January 31, 2013, 15:36 UTC)

Somehow, lately I’ve had a few more inquiries about runpaths than usual — all for different packages, which makes it quite a bit more interesting. Even though not all the questions regarded Gentoo’s “insecure runpaths” handling, I think it might be well worth it to write a bit more about it. Before going into details about it, though, I’ll point you to my previous post for a description of what runpaths (and rpath) are, instead of repeating it all here.

So, what is an insecure runpath? Basically any runpath set to a directory where an attacker may have control is insecure, because it allows an attacker to load an arbitrary shared object which can then run arbitrary code. The definition of attacker, as I already discussed, is flexible depending on the situation.

Since runpaths are, as the name suggests, paths on the filesystem, there are two main starting points that would cause a path to become insecure: runpaths derived from the current working directory, and runpaths derived from any world-writable directory. The ability for an attacker to place the correct object in the correct path varies considerably, but it's a good rule of thumb to consider both of them just as bad. What happens often enough for packages built in Gentoo is for the runpath to include the build directory, which could be either /var/tmp or /tmp (if you, for instance, build in tmpfs — please tell me you're not using /dev/shm for that!).
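To make those two cases concrete, here is a minimal sketch of how such entries could be flagged, assuming readelf from binutils is available; it is only an illustration, not Portage's own QA check, and the suspect prefixes below are merely examples:

# Minimal sketch: flag ELF objects whose DT_RPATH/DT_RUNPATH entries look
# insecure. Assumes readelf (binutils) is installed; the list of suspect
# prefixes is illustrative, not exhaustive.
import re
import subprocess
import sys

SUSPECT_PREFIXES = ("/var/tmp/", "/tmp/", "/dev/shm/")

def runpaths(path):
    """Return the rpath/runpath entries recorded in an ELF file."""
    try:
        out = subprocess.check_output(["readelf", "-d", path]).decode("utf-8", "replace")
    except subprocess.CalledProcessError:
        return []  # not an ELF object, or readelf failed
    entries = []
    for match in re.finditer(r"\((?:RPATH|RUNPATH)\)\s+Library r(?:un)?path: \[([^\]]*)\]", out):
        entries.extend(match.group(1).split(":"))
    return entries

def insecure(entry):
    """Empty or relative entries (other than $ORIGIN-based ones) and entries
    rooted in a scratch directory are treated as insecure here."""
    if entry.startswith("$ORIGIN"):
        return False
    return entry == "" or not entry.startswith("/") or entry.startswith(SUSPECT_PREFIXES)

if __name__ == "__main__":
    for binary in sys.argv[1:]:
        bad = [e for e in runpaths(binary) if insecure(e)]
        if bad:
            print("%s: insecure runpath(s): %s" % (binary, ", ".join(bad)))

Anything a script like this prints is worth a closer look; as noted below, Portage will usually have stripped such entries at install time already.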

Depending on how your system is set up, this might not be as insecure as it might sound at first, because, for instance, on my laptop /var/tmp/portage is always present and always owned by the portage user, which makes it less vulnerable to a random user wanting to drop a particularly nasty shared object in there… on the other hand, assuming that it's secure enough is the wrong move, full stop.

Before I start panicking people — Portage is smarter than you think and it has been, by default, stripping insecure runpaths for a while. Which means that you don't have to fret about fixing the issues when you see the warning — but you really should look into fixing them for good. I would also argue that since Portage already strips the runpaths at install time, you shouldn't just use chrpath to remove the insecure paths after the build, but you should either fix it properly or leave it be, in my opinion. This opinion might be a bit too personal, as I don't know if pkgcore or paludis support the same kind of fixing.

So let's go back to the two sources of insecurity in the runpaths. In the case of the current directory being the base for the insecure path, which means having an rpath of either . or ../something, this is usually bad logic in the way the package tries to set its rpath: instead of the current working directory, the developers most likely wanted to refer to where the executable itself is installed, which is, instead, the literal $ORIGIN. Another common situation is for that literal to be treated as a variable name, so that $ORIGIN/../mylibs becomes something like /../mylibs, which is also wrong.

But a much more common situation arises when our build directory is injected into the runpath. Something along the lines of /var/tmp/portage/pkg-category/package-0.0.0/image/usr/lib64/mypkgslibs — more rarely it would point to the build directory rather than the image directory. In many cases this happens because the upstream build system does not know about, or mishandles, the DESTDIR variable.

The DESTDIR variable is commonly used to install a software package at a given offset — binary distributions will then generate a binary package out of it, while source distributions like Gentoo will merge the installed copy to the live filesystem after recording which files have been installed. In either case, the understanding behind this variable is that the final location of the executables will not include it. Unfortunately not all build systems support it, so in some cases we end up doing something a bit more hackish by replacing /usr with ${D}/usr in the definition of the install prefix. The prefix, though, is commonly used to identify where the executables will be at the end, so it would be possible for a build system to have, in the parameters, -rpath ${prefix}/lib/mylibs, which would then inject ${D} into the runpath.

As you can see, for most common situations it’s a matter of getting upstream to fix their build system. In other cases, the problem is that the ebuild is installing files without going through the build system’s install phase, which, at least with libtool, would often re-link the object files to make sure the rpath is handled correctly.

Beside this, there isn’t much more I can add, I’m afraid.

Marcus Hanwell a.k.a. cryos (homepage, bugs)
FOSDEM: Open Science and Open Chemistry (January 31, 2013, 15:14 UTC)

I will be talking about the Open Chemistry Project at FOSDEM this year in the FOSS for scientists devroom at 12:30pm on Saturday. I will discuss the development of a suite of tools for computational chemists and related disciplines, which includes the development of three desktop applications addressing 3D molecular structure editing, input preparation, output analysis, cheminformatics and integration with high-performance computing resources.

Open Chemistry

On Sunday Bill Hoffman will be speaking in the main track about Open Science, Open Software, and Reproducible Code at 3pm. Bill and Alexander Neundorf will also be talking about Modern CMake in the cross desktop devroom on Saturday.

FOSDEM is one of the first conferences I attended (possibly the first, I can't remember if I went to a science conference before this). It will be great to return after so many years, and hopefully meet old colleagues and a few new ones. Please find me, Bill or Alex if you would like to discuss any of this work with us. I fly out tomorrow, and hope to get over jet lag quickly. Once FOSDEM is over we will be visiting Kitware SAS in Lyon, France for a couple of days (this is my first trip to our new office).

Then I have a few days in England visiting friends and family before heading back to the US.

January 30, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: iO Tillett Wright: Fifty shades of gay (January 30, 2013, 23:01 UTC)

Since the TED player seems to skip the last few seconds, I’m linking to the TED talk page but embedding a version from YouTube:

January 29, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

January 28, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
State of Chromium Open Source packages (January 28, 2013, 14:56 UTC)

Let me present an informal and unofficial state of Chromium Open Source packages as I see it. Note a possible bias: I'm a Chromium developer (and this post represents my views, not the project's), and a Gentoo Linux developer (and Chromium package maintenance lead - this is a team effort, and the entire team deserves credit, especially for keeping stable and beta ebuilds up to date).

  1. Gentoo Linux - ships stable, beta and dev channels. Security updates are promptly pushed to stable. NaCl (NativeClient) is enabled, although pNaCl (Portable NaCl) is disabled. Up to 23 use_system_... gyp switches are enabled (depending on USE flags).
  2. Arch Linux - ships stable channel, promptly reacts to security updates. NaCl is enabled, following Gentoo closely - I consider that good, and I'm glad people find that code useful. :) 5 use_system_... gyp switches are enabled. A notable thing is that the PKGBUILD is one of the shortest and simplest among Chromium packages - this seems to follow from The Arch Way. There is also chromium-dev on AUR - it is more heavily based on the Gentoo package, and tracks the upstream dev channel. Uses 19 use_system_... gyp switches.
  3. FreeBSD / OpenBSD - ship stable channel, and are doing pretty well, especially when taking amount of BSD-specific patching into account. NaCl is disabled.
  4. ALT Linux - ships stable channel. NaCl seems to be disabled by default; I'm not sure what's actually shipped in the compiled package. Uses 11 use_system_... gyp switches.
  5. Debian - ancient 6.x version in Squeeze, 22.x in sid at the time of this writing. This is two major milestones behind, and is missing security updates. Not recommended at this moment. :( If you are on Debian, my advice is to use Google Chrome, since official debs should work, and monitor state of the open source Chromium package. You can always return to it when it gets updated.
  6. Fedora - not in official repositories, but Tom "spot" Callaway has an unofficial repo. Note: currently the version in that repo is 23.x, one major version behind on stable. Tom wrote an article in 2009 called Chromium: Why it isn't in Fedora yet as a proper package, so there is definitely an interest to get it packaged for Fedora, which I appreciate. Many of the issues he wrote about are now fixed, and I hope to work on getting the remaining ones fixed. Please stay tuned!
This is not intended to be an exhaustive list. I'm aware of openSUSE packages, there seems to be something happening for Ubuntu, and I've heard of Slackware, Pardus, PCLinuxOS and CentOS packaging. I do not follow these closely enough though to provide a meaningful "review".

Some conclusions: different distros package Chromium differently. Pay attention to the packaging lag: with a roughly six-week upstream release cycle and each major update being a security one, this matters. Support for NativeClient is another point. There are extension and Web Store apps that use it, and as more and more sites start to use it, this will become increasingly important. It is also interesting that on some distros bundled libraries are used even though upstream provides an option to use a system library that is known to work on other distros.

Finally, I like how different maintainers look at each other's packages, and how patches and bugs are frequently being sent upstream.

Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Openstack on Gentoo (January 28, 2013, 06:00 UTC)

Just a simple announcement for now. It's a bit messy, but should work :D

I have packaged Openstack for Gentoo and it is now in tree; the most complete packaging is probably for Openstack Swift. Nova and some of the others are missing init scripts (being worked on). If you have problems or bugs, report as normal.

January 27, 2013
Looking for KDE users on ARM (January 27, 2013, 15:11 UTC)

I received a few requests to make KDE stable for ARM. Unfortunately I can't do a complete test, but I'm able to compile on both armv5 and armv7.

Before stabilizing, I may set up a virtual machine on qemu to test better, but I'd prefer to receive some feedback from the users.

So, if you are running KDE on ARM, feel free to comment here, send me an e-mail or add a comment in the stabilization bug.

If you want to participate, look at the kde-stable project.

January 26, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
I think I'll keep away from Python still (January 26, 2013, 13:39 UTC)

Last night I ended up in Bizarro World, hacking at Jürgen's gmaillabelpurge (which he actually wrote on my request, thanks once more Jürgen!). Why? Well, the first reason was that I found out that it hadn't been running for the past two and a half months because, for whatever reason, the default Python interpreter on the system where it was running was changed from 2.7 to 3.2.

So I tried first to get it to work with Python 3 while keeping it working with Python 2 at the same time; some of the syntax changed ever so slightly and was easy to fix, but the 2to3 script that it comes with is completely bogus. Among other things, it adds parentheses on all the print calls… which would be correct if it checked that said parentheses weren't there already. In a script like the one aforementioned, the noise in the output is so high that there is really no signal worth reading.

You might be asking how come I didn't notice this before. The answer is "because I'm an idiot": I found out only yesterday that my firewall configuration was such that postfix was not reachable from the containers within Excelsior, which meant I never got the fcron notifications that the job was failing.

While I wasn't able to fix the Python 3 compatibility, I was able to at least understand the code a little by reading it, and after remembering something about the IMAP4 specs I read a long time ago, I was able to optimize its execution quite a bit, more than halving the runtime on big folders (like most of the ones I have here) by using batch operations and by peeking at, instead of "seeing", the headers. In the end, I spent some three hours on the script, give or take.

But at the same time, I ended up having to work around limitations in Python's imaplib (which is still nice to have by default), such as reporting fetched data as an array, where each odd entry is a pair of strings (tag and unparsed headers) and each even entry is a string with a closed parenthesis (coming from the tag). Since I wasn't able to sleep, at 3.30am I started re-writing the script in Perl (which at this point I know much better than I'll ever know Python, even if I'm a newbie in it); by 5am I had all the features of the original one, and I was supporting non-English locales for GMail — remember my old complaint about natural language interfaces? Well, it turns out that the solution is to use the Special-Use Extension for IMAP folders; I don't remember seeing this explanation page when we first worked on that script.
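For illustration, here is a minimal sketch of that kind of batched, peeking header fetch with imaplib, and of the parsing its response format requires; the server name, folder and header fields are placeholders, not the ones the actual script uses:

# Minimal sketch (written against Python 3's imaplib) of a batched,
# "peeking" header fetch. Server, credentials, folder and header fields
# are placeholders; a real script also needs proper error handling.
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user", "password")
conn.select("INBOX", readonly=True)

# One UID SEARCH, then a single UID FETCH for the whole set (a batch
# operation), using BODY.PEEK[] so the messages are not marked as \Seen.
typ, data = conn.uid("SEARCH", None, "ALL")
uids = data[0].split()
if uids:
    typ, data = conn.uid("FETCH", b",".join(uids),
                         "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
    # The response alternates between (tag, raw header block) tuples and
    # lone b")" strings closing each entry, so only the tuples carry data.
    for item in data:
        if isinstance(item, tuple):
            tag, headers = item
            print(headers.decode("utf-8", "replace").strip())

conn.logout()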

But this entry is about Python and not the script per se (you can find the Perl version on my fork if you want). I have said before I dislike Python, and my feeling is still unchanged at this point. It is true that the script in Python required no extra dependency, as the standard library already covered all the bases … but at the same time that's about it: the basics are all it has; for something more complex you still need some new modules. Perl modules are generally easier to find, easier to install, and less error-prone — don't try to argue this; I've got a tinderbox that reports Python test errors more often than even Ruby's (which are lots), and most of the time for the same reasons, such as the damn unicode errors "because LC_ALL=C is not supported".

I also still hate the fact that Python forces me to indent code to have blocks. Yes, I agree that indented code is much better than non-indented code, but why on earth should the indentation mandate the blocks rather than the other way around? What I usually do in Emacs when I'm getting stuff in and out of loops (which is what I had to do a lot on the script, as I was replacing per-message operations with bulk operations) is basically adding the curly brackets in a different place, then selecting the region and hitting C-M-\ — which means that it's re-indented following my brackets' placement. If I see an indent I don't expect, it means I made a mistake with the blocks and I'm quick to fix it.

With Python, I end up having to manage the spacing to have it behave as I want, and it's quite a bit more bothersome, even with the C-c < and C-c > shortcuts in Emacs. I find the whole thing obnoxious. The other problem is that, while Python does provide basic access to a lot more functionality than Perl, its documentation is… spotty at best. In the case of imaplib, for instance, the only real way to know what it's going to give you is to print the returned value and check against the RFC — and it does not seem to have a half-decent way to return the UIDs without having to parse them. This is simply wrong.

The obvious question for people who know me would be "why did you not write it in Ruby?" — well… recently I've started second-guessing my choice of Ruby, at least for simple one-off scripts. For instance, the deptree2dot tool that I wrote for OpenRC – available here – was originally written as a Ruby script … then I converted it to a Perl script half the size and twice the speed. Part of it I'm sure is just a matter of age (Perl has been optimized over a long time, much more than Ruby), part of it is down to them being different tools for different targets: Ruby is nowadays mostly a long-running software language (due to webapps and so on), and it's much more object oriented, while Perl is streamlined, top-down execution style…

I do expect to find the time to convert even my scan2pdf script to Perl (funnily enough, gscan2pdf, which inspired it, is written in Perl), although I have no idea yet when… in the meantime, though, I doubt I'll write many more Ruby scripts for this kind of processing.

Hanno Böck a.k.a. hanno (homepage, bugs)

Based on the XKCD comic "Up Goer Five", someone made a nice little tool: an online text editor that lets you use only the 1000 most common words in English, and asks you to explain a hard idea with it.

Nice idea. I gave it a try. The most obvious example to use was my diploma thesis (on RSA-PSS and provable security), where I always had a hard time explaining to anyone what it was all about.

Well, obviously math, proof, algorithm, encryption etc. all are forbidden, but I had a hard time with the fact that even words like "message" (or anything equivalent) don't seem to be in the top 1000.

Here we go:

When you talk to a friend, she or he knows you are the person in question. But when you do this with a friend far away through computers, you can not be sure.
That's why computers have ways to let you know if the person you are talking to is really the right person.

The ways we use today have one problem: We are not sure that they work. It may be that a bad person knows a way to be able to tell you that he is in fact your friend. We do not think that there are such ways for bad persons, but we are not completely sure.

This is why some people try to find ways that are better. Where we can be sure that no bad person is able to tell you that he is your friend. With the known ways today this is not completely possible. But it is possible in parts.

I have looked at those better ways. And I have worked on bringing these better ways to your computer.


So - do you now have an idea what I was talking about?

I found this nice tool through Ben Goldacre, who tried to explain randomized trials, blinding, systematic review and publication bias - go there and read it. Knowing what publication bias and systematic reviews are is much more important for you than knowing what RSA-PSS is. You can leave cryptography to the experts, but you should care about your health. And for the record, I recently tried to explain publication bias myself (German only).

January 25, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A personal update (January 25, 2013, 17:32 UTC)

While I’m sure that it’s not of interest to the vast majority of readers coming from Gentoo Universe, I’m sure that some of you won’t mind some updates on my personal situation, at least to help you understand my current availability and what you can ask me to do for you, realistically.

First of all, I’m not currently in the USA — since I didn’t have a work visa, my stay was always supposed to be limited to three months at a time. The three months expired in early December, so before the expiration I traveled back to Europe — in particular to the bureaucracy of Italy and to the swamp of Venice; you can guess I don’t really like my motherland.

I’m not planning at this point to go back to the US anytime soon. Among other reasons, during 2012 I spent over six months there, and they have been very clear the last time I entered: I’m not welcome back right away — a few months would be enough, but that also means that the line of work I started back in February last year couldn’t proceed properly. While the original plan was for me to get an office in, or nearby, London, I haven’t seen any progress for said plan, which meant I went back to my old freelancing. I suppose this currently puts me in a consulting capacity more than anything.

Unfortunately, as you can guess, after a hiatus of a full year, most of my customers found already someone else to take care of them, and I’m currently only following one last customer in their project — for something they paid already for, which means that there isn’t any money to be made there. I am already trying to get a new position, this time as a full-time employee, as the life of a freelancer (in Italy!) really made me long for more stability. For the moment I have no certain news for my future employment, but you can probably guess that, in the case I do accept a full-time position, the time I have to spend on Gentoo is likely going to be reduced, unless said position requires me to use Gentoo — and I wouldn’t bet on that if I was you.

Furthermore I do expect that, whatever position I’m going to accept next, I’m going to move out of Italy — the political scene in Italy has never been good, but it reached my limit with the current populist promises from both sides of the aisles, and from the small challengers alike; and my freelancing experience makes me wonder how on earth it’s possible that only one (small) party is actually trying to fight the crisis and increase productivity … but this all is for a different time. Anyway, wherever am I going to end up (I’m aiming for one of the few English-speaking countries in Europe), it’s going to take a while for me to settle down (find a place to live, get it so that it’s half-decently convenient to me, etc.), which is going to eat away some of the time I spend on Gentoo.

Time is already being eaten away, to be honest. Among other things, here at home I've got a bunch of paperwork to take care of: not only the general taxes that need to be paid and accounted for, but the bank also took some of my time just to make sure I have money to cover the expenses (during the year I accrued some debts here in Italy, as I was living off the American account), and so on. I'm also trying to reduce the expenses as much as possible. Most of the hardware I had before has been dropped already, back in June, when the original plan was for me to get an H1B visa and jump out of here, so it's less bothersome than it can seem at first.

The one thing that really bothers me the most is that since last year I’ve been feeling like wherever I am, I’m “borrowing” my space — it’s not something I like. While some people, such as Luca, feel comfortable with just carrying their things in a suitcase, and as long as they have a place to sleep and wash their clothes they are happy, I’ve always been quite the sedentary guy: I like having my space, personalized for my needs and so on. Even now back at home I don’t feel entirely stable because I do not know how long I’m going to stay here.

I’m afraid I have overindulged during the months in the US, relying too much on the promises made then. Hopefully, I’ll come out of the recent mess on my feet, and possibly with a less foul mood than I have been having recently.

Michal Hrusecky a.k.a. miska (homepage, bugs)
MySQL, MariaDB & openSUSE 12.3 (January 25, 2013, 12:22 UTC)

openSUSE 12.3 is getting closer and closer, and probably one of the last changes I pushed for MySQL was switching the default MySQL implementation. So in openSUSE 12.3 we will have MariaDB as the default.

If you are following what is going on in openSUSE in regard to MySQL, you probably already know that we started shipping MariaDB together with openSUSE starting with version 11.3 back in 2010. It is now almost three years since we started providing it. There were some little issues on the way to resolve all conflicts and to make everything work nicely together. But I believe we polished everything and smoothed all rough edges. And now everything is working nice and fine, so it's time to change something, isn't it? :-D So let's take a look at the change I made…

MariaDB as default, what does it mean?

First of all, for those who don't know, MariaDB is a MySQL fork – a drop-in replacement for MySQL. Still the same API, still the same protocol, even the same utilities. And mostly the same datafiles. So unless you have some deep optimizations depending on your current version, you should see no difference. And what will the switch mean?

Actually, switching the default doesn't mean much in openSUSE. Do you remember the time when we set KDE as a default? And we still provide a great Gnome experience with Gnome Shell. In openSUSE we believe in freedom of choice, so even now you can install either MySQL or MariaDB quite simply. And if you are interested, you can try testing beta versions of both – we have MySQL 5.6 and MariaDB 10.0 in the server:database repo. So where is the change of default?

Actually, the only thing that changed is that everything now links against MariaDB and uses MariaDB libraries – no change from the user's point of view. And if you try to update from a system that used to have just one package called 'mysql', you'll end up with MariaDB. And it will be the default in the LAMP pattern. But generally, you can still easily replace MariaDB with MySQL, if you like Oracle ;-) Yes, it is hard to make a splash with a default change if you are supporting both sides well…

What happens to MySQL?

Oracle's MySQL will not go away! I'll keep packaging their version and it will be available in openSUSE. It's just not going to be the default, but nothing prevents you from installing it. And if you had it in the past and you are going to do just a plain upgrade, you'll stick with it – we are not going to tell you what to use if you know what you want.

Why?

As mentioned before, being the default doesn't have many consequences. So why the switch? Wouldn't it break stuff? Is MariaDB safe enough? Well, I've personally been using MariaDB since 2010 with a few switches to MySQL and back, so it is the better tested one from my point of view. I originally switched for the kicks of living on the edge, but in the end I found MariaDB boringly stable (even though I run their latest alpha). I never had any serious issue with it. It also has some interesting goodies that it can offer its users over MySQL. Even Wikipedia decided to switch. And our friends at Fedora are considering it too, but AFAIK they don't have MariaDB in their distribution yet…

Don’t take this as a complaint about the MySQL guys and girls at Oracle; I know they are doing a great job, one that even MariaDB builds on, as they do periodic merges to pick up the newest MySQL and “just” add some more tweaks, engines and other stuff.

So, as I like MariaDB and I think it’s time to move, I, as a maintainer of both, proposed to change the default. There were no strong objections, so we are doing it!

Overview

So overall, yes, we are changing the default MySQL provider, but you probably won’t even notice.

Marcus Hanwell a.k.a. cryos (homepage, bugs)
Avogadro Paper Published Open Access (January 25, 2013, 10:29 UTC)

In January of last year I was invited to attend the Semantic Physical Science Workshop in Cambridge, England. That was a great meeting where I met like-minded scientists and developers working on adding semantic structure to data in the physical sciences. Peter managed to bring together a varied group with many backgrounds, and so the discussions were especially useful. I was there to think about how our work with Avogadro, and the wider Open Chemistry project might benefit from and contribute to this area.

Avogadro graphical abstract

My thanks go out to Peter Murray-Rust for inviting me to the Semantic Physical Science meeting and helping us get the Avogadro paper published in the Journal of Cheminformatics as part of the Semantic Physical Science collection. Noel O'Boyle wrote up a blog post summarizing the Avogadro paper accesses in the first month (shown below - thanks Noel) compared to the Blue Obelisk paper and the Open Babel paper. We only just got the final version of the PDF/HTML published in early January, but the paper already has 12 citations according to Google Scholar, and is showing as the second most viewed article in the last 30 days and the most viewed article in the last year. The paper made the Chemistry Central most accessed articles list in October and November.

[chart: Avogadro paper accesses in the first month, compared to the Blue Obelisk and Open Babel papers]

I made a guest blog post talking about open access and the Avogadro paper, which was later republished for a different audience. I would like to thank Geoffrey Hutchison, Donald Curtis, David Lonie, Tim Vandermeersch and Eva Zurek for the work they put into the article, along with our contributors, collaborators and the users of Avogadro. If you use Avogadro in your work please cite our paper, and get in touch to let us know what you are doing with it. As we develop the next generation of Avogadro we would appreciate your input, feedback and suggestions on how we can make it more useful to the wider community.

January 24, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We are currently working on integrating carbon nanotube nanomechanical systems into superconducting radio-frequency electronics. The overall objective is the detection and control of nanomechanical motion towards its quantum limit. In this project, we've got a PhD position with the project working title "Gigahertz nanomechanics with carbon nanotubes" available immediately.

You will design and fabricate superconducting on-chip structures suitable as both carbon nanotube contact electrodes and gigahertz circuit elements. In addition, you will build up and use - together with your colleagues - two ultra-low temperature measurement setups to conduct cutting-edge measurements.

Good knowledge of electrodynamics and possibly superconductivity is required. Certainly helpful are low temperature physics, some programming experience, and basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Interested? Contact Andreas K. Hüttel (e-mail: andreas.huettel@ur.de, web: http://www.physik.uni-r.de/forschung/huettel/ ) for more information!

The combination of localized states within carbon nanotubes and superconducting contact materials leads to a manifold of fascinating physical phenomena and is a very active area of current research. An additional bonus is that the carbon nanotube can be suspended, i.e. the quantum dot between the contacts forms a nanomechanical system. In this research field a PhD position is immediately available; the working title of the project is "A carbon nanotube as a moving weak link".

You will develop and fabricate chip structures combining various superconductor contact materials with ultra-clean, as-grown carbon nanotubes. Together with your colleagues, you will optimize material, chip geometry, nanotube growth process, and measurement electronics. Measurements will take place in one of our ultra-low temperature setups.

Good knowledge of superconductivity is required. Certainly helpful is knowledge of semiconductor nanostructures and low temperature physics, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Interested? Contact Andreas K. Hüttel (e-mail: andreas.huettel@ur.de, web: http://www.physik.uni-r.de/forschung/huettel/ ) for more information!

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Two days ago, Luca asked me to help him figure out what’s going on with a patch for libav which he knew to be the right thing, but was acting up in a fashion he didn’t understand: on his computer, it increased the size of the final shared object by 80KiB — while this number is certainly not outlandish for a library such as libavcodec, it does seem odd at a first glance that a patch removing source code is increasing the final size of the executable code.

My first wild guess, which (spoiler alert) turned out to be right, was that removing branches out of the functions let GCC optimize the function further and decide to inline it. But how to actually be sure? It’s time to get the right tools for the job: dev-ruby/ruby-elf, dev-util/dwarves and sys-devel/binutils enter the battlefield.

We’ve built libav with and without the patch on my server, and then rbelf-size told us more or less the same story:

% rbelf-size --diff libav-{pre,post}/avconv
        exec         data       rodata        relro          bss     overhead    allocated   filename
     6286266       170112      2093445       138872      5741920       105740     14536355   libav-pre/avconv
      +19456           +0         -592           +0           +0           +0       +18864 

Yes, there’s a bug in the command’s output, I noticed. So there is a total increase of around 20KiB; how is it split? Given this is a build that includes debug info, it’s easy to find out through codiff:

% codiff -f libav-{pre,post}/avconv
[snip]

libavcodec/dsputil.c:
  avg_no_rnd_pixels8_9_c    | -163
  avg_no_rnd_pixels8_10_c   | -163
  avg_no_rnd_pixels8_8_c    | -158
  avg_h264_qpel16_mc03_10_c | +4338
  avg_h264_qpel16_mc01_10_c | +4336
  avg_h264_qpel16_mc11_10_c | +4330
  avg_h264_qpel16_mc31_10_c | +4330
  ff_dsputil_init           | +4390
 8 functions changed, 21724 bytes added, 484 bytes removed, diff: +21240

[snip]

If you wonder why it’s adding more code than we expected, it’s because the patch also deleted functions in other files, causing some reductions there. Now we know that the three functions the patch deleted did remove some code, but five other functions grew by over 4KiB each. It’s time to find out why.

A common way to do this is to have GCC emit the intermediate assembly file (which it normally does not keep) and compare the two versions — due to the size of the dsputil translation unit, this turned out to be completely pointless, as just the changes in the jump labels cause the whole file to be rewritten. So we rely instead on objdump, which gives us a full disassembly of the executable section of the object file:

% objdump -d libav-pre/libavcodec/dsputil.o > dsputil-pre.s
% objdump -d libav-post/libavcodec/dsputil.o > dsputil-post.s
% diff -u dsputil-{pre,post}.s | diffstat
 unknown |245013 ++++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 125163 insertions(+), 119850 deletions(-)

As you can see, trying a diff between these two files is going to be pointless: first of all because of the size of the disassembled files, and secondly because each instruction is prefixed with its address offset, which means that every single line will be different. So what to do? Well, first of all it’s useful to isolate just one of the functions, to reduce the scope of the changes to check — I found out that there is a nice way to do so, relying on the way each function is laid out in the disassembly:

% fgrep -A3 avg_h264_qpel16_mc03_10_c dsputil-pre.s
00000000000430f0 <avg_h264_qpel16_mc03_10_c>:
   430f0:       41 54                   push   %r12
   430f2:       49 89 fc                mov    %rdi,%r12
   430f5:       55                      push   %rbp
--
[snip]

While it takes a while to come up with the correct syntax, it’s a simple sed command that can get you the data you need:

% sed -n -e '/\<avg_h264_qpel16_mc03_10_c/, /^$/ s|^\s\+[0-9a-f]\+:|| p' dsputil-pre.s > dsputil-func-pre.s
% sed -n -e '/\<avg_h264_qpel16_mc03_10_c/, /^$/ s|^\s\+[0-9a-f]\+:|| p' dsputil-post.s > dsputil-func-post.s
% diff -u dsputil-func-{pre,post}.s | diffstat
 dsputil-func-post.s | 1430 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 1376 insertions(+), 54 deletions(-)

Okay that’s much better — but it’s still a lot of code to sift through, can’t we reduce it further? Well, actually… yes. My original guess was that some function was inlined; so let’s check for that. If a function is not inlined, it has to be called, the instruction for which, in this context, is callq. So let’s check if there are changes in the calls that happen:

% diff -u =(fgrep callq dsputil-func-pre.s) =(fgrep callq dsputil-func-post.s)
--- /tmp/zsh-flamehIkyD2        2013-01-24 05:53:33.880785706 -0800
+++ /tmp/zsh-flamebZp6ts        2013-01-24 05:53:33.883785509 -0800
@@ -1,7 +1,6 @@
-       e8 fc 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 e5 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 c6 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 a7 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 cd 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
-       e8 a3 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
-       e8 00 00 00 00          callq  43261 <avg_h264_qpel16_mc03_10_c+0x171>
+       e8 00 00 00 00          callq  8e670 <avg_h264_qpel16_mc03_10_c>
+       e8 71 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 52 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 33 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 14 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 00 00 00 00          callq  8f8d3 <avg_h264_qpel16_mc03_10_c+0x1263>

Yes, I do use zsh — on the other hand, now that I look at the code above I note that there’s a bug: it does not respect $TMPDIR as it should have used /tmp/.private/flame as base path, dang!

So the quick check shows that avg_pixels8_l2_10 is no longer called — but does that account for the whole size? Let’s see if it changed:

% nm -S libav-{pre,post}/libavcodec/dsputil.o | fgrep avg_pixels8_l2_10
00000000000072e0 0000000000000112 t avg_pixels8_l2_10
00000000000072e0 0000000000000112 t avg_pixels8_l2_10

The size is the same, 0x112 = 274 bytes. The increase is 4330 bytes, around 15 times the size of this single function — so what does that mean? Well, a quick look around shows this piece of code:

        41 b9 20 00 00 00       mov    $0x20,%r9d
        41 b8 20 00 00 00       mov    $0x20,%r8d
        89 d9                   mov    %ebx,%ecx
        4c 89 e7                mov    %r12,%rdi
        c7 04 24 10 00 00 00    movl   $0x10,(%rsp)
        e8 cd 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
        48 8d b4 24 80 00 00    lea    0x80(%rsp),%rsi
        00 
        49 8d 7c 24 10          lea    0x10(%r12),%rdi
        41 b9 20 00 00 00       mov    $0x20,%r9d
        41 b8 20 00 00 00       mov    $0x20,%r8d
        89 d9                   mov    %ebx,%ecx
        48 89 ea                mov    %rbp,%rdx
        c7 04 24 10 00 00 00    movl   $0x10,(%rsp)
        e8 a3 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
        48 8b 84 24 b8 04 00    mov    0x4b8(%rsp),%rax
        00 
        64 48 33 04 25 28 00    xor    %fs:0x28,%rax
        00 00 
        75 0c                   jne    4325c <avg_h264_qpel16_mc03_10_c+0x16c>

This is just a fragment, but you can see that there are two calls to the function in a row, with different arguments; the xor against %fs:0x28 and the jne that follow them are the stack-protector check at the end of the caller. The caller invokes this small helper multiple times over its body. Knowing that this function is involved in 10-bit processing, it becomes likely that the function gets called twice per bit, or something along those lines — remove the call overhead (as the function is inlined) and you can see how some twenty copies of that small function per caller account for the 4KiB.

So my guess was right, but incomplete: GCC not only inlined the function, but it also unrolled the loop, probably doing constant propagation in the process.

Is this it? Almost — the next step was to get some benchmark data when using the code, which was mostly Luca’s work (and I have next to no info on how he did that, to be entirely honest); the results on my server have been inconclusive, as the 2% loss that he originally registered was gone in further testing and would, anyway, be vastly within the margin of error of a non-dedicated system — no, we weren’t using full-blown profiling tools for that.

While we don’t have any sound numbers about it, what we’re worried about are cache-starved architectures, such as Intel Atom, where the unrolling and inlining can easily cause a performance loss rather than a gain — which is why all of us developers facepalm in front of people using -funroll-all-loops and similar. I guess we’ll have to find an Atom system to do this kind of run on…

Richard Freeman a.k.a. rich0 (homepage, bugs)
MythTV 0.26 In Portage (January 24, 2013, 01:31 UTC)

Well, all of MythTV 0.26 is now in portage, masked for testing for a few days.

If anyone is interested now is a good time to give it a try and report any issues you find. If all is quiet the masks will come off and we’ll be up-to-date (including all patches up to a few days ago).

Thanks to all who have contributed to the 0.26 bug. I can also happily report that I’m running Gentoo on my mythtv front-end, which should help me with maintaining things. MiniMyth is a great distro, but it has made it difficult to keep the front- and back-ends in sync.



January 23, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The usual Typo update report (January 23, 2013, 21:18 UTC)

You probably got used to reading about me updating Typo at this point — the last update I wrote about was almost a year ago, when I updated to Typo 6, using Rails 3 instead of 2. Then you probably remember my rant about what I would like from my blog …

Well, yesterday I was finally able to get rid of the last Rails 2.3 application that was running on my server, as a nuisance of a customer’s contract finally expired, which meant I could finally update Typo without having to worry about the Ruby 1.8 compatibility that was dropped upstream. Indeed, since the other two Ruby applications running on this server are Harvester for Planet Multimedia and a custom application I wrote for a customer (the first not using Rails at all, and the second written to work on both 1.8 and 1.9 alike), I was able to move from having three separate Rails slots installed (2.3, 3.0 and 3.1) to having only the latest 3.2, which means that security issues are no longer a problem for the short term either.

The new Typo version solves some of the smaller issues I’ve got with it before — starting from the way it uses Rails (now no longer requiring a single micro-version, but accepting any version after 3.2.11), and the correct dependency on the new addressable. At the same time it does not solve some of the most long-standing issues, as it insists on using the obsolete coderay 0.9 instead of the new 1.0 series.

So let’s go in order: the new version of Typo brings in another bunch of gems — which means I have to package a few more. One of them is fog, which pulls in a long list of dependencies, most of them from the same author, and reminds me of how bad the dependency situation is with Ruby packages. Luckily for me, even though the dependency is declared mandatory, a quick hack got rid of it just fine — okay, hack might be too much, it really is just a matter of removing it from the Gemfile and then removing the require statement for it, done.
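
For reference, the whole “hack” boils down to something like the following (a sketch: the paths assume a stock Typo checkout, and the exact location of the require may differ):

cd /var/www/typo                                   # wherever the Typo checkout lives
sed -i "/gem ['\"]fog['\"]/d" Gemfile              # drop the dependency declaration
grep -rl "require 'fog'" app lib config | xargs -r sed -i "/require 'fog'/d"
bundle install                                     # regenerate Gemfile.lock without it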

For the moment I used the gem command to install the required packages — some of them are actually available in Hans’s overlay, and I’ll be reviewing them soon (I was supposed to do that tonight, but my job got in the way) to add them to the main tree. A few more require me to write them from scratch, so I’ll spend a few days on that soon. I have other things in my TODO pipeline, but I’ll try to cover as many bases as I can.

While I’m not sure if this update finally solves the issue of posts being randomly marked as password-protected, at least this version fixes the header in the content admin view, which means that I can finally see what drafts I have pending — and the default view also changed to show me the available drafts to finish, which is great for my workflow. I haven’t checked yet whether the planning of future-published posts works, but I’ll wait for that.

My idea of forking Typo is still on, even though it might be more like a set of changes over it instead of being a full-on fork.. we’ll see.

Marcus Hanwell a.k.a. cryos (homepage, bugs)
The Roller Coaster of 2012 (January 23, 2013, 00:20 UTC)

It has been a long time since I wrote anything on here, I am still alive and kicking! 2012 was another roller coaster of a year, with many good and bad things happening. Louise and I got our green cards early on in the year (massive thanks to my employer), which was great after having lived in the US for over five years now. We started house hunting a few months after that, which was an adventure and a half.

As we were in the process of looking for a house I was promoted to technical leader at Kitware, and I continue to work on our Open Chemistry project. We ended up falling in love with the first house we found, and found a great realtor who took us back there for a second look. We then learned how different buying a house in the US is compared to England, but after several rounds of negotiations we came to an agreement. We had a very long wait for completion, but it all proceeded well in the end.

As we moved out of the place we had been renting for the last three years we found out just how bad some landlords can be about returning security deposits...that is still ongoing and has not been a fun process. We never rented in England, but many friends have assured us that this isn't that unusual. Our move actually went very smoothly though, and we have some great friends who helped us with some of the heavy lifting. We have been learning what it is like to own a home in the country, with a well, septic, large garden etc. The learning curve has been a little steep at times! We attended two weddings (I was a groomsman in one) with two amazing groups of friends - it was a pleasure to be part of the day for two great friends.

I made a few guest blog posts, which I will try to talk more about in another post, and attended some great conferences including the ACS, Semantic Physical Science and Supercomputing. Our Avogadro paper was published, and recently appeared in its final form (I will write more about this too). I finally cancelled my dedicated server (an old Gentoo box), which I originally took on when I was consulting in England; this was very disruptive in the end, and I didn't have a complete backup of all the data when it was taken offline. This caused lots of disruption to email (sorry if I never got back to you). I moved to a cloud server with Rackspace in the end, after playing with a few alternatives. I was also retired as a Gentoo developer (totally missed those emails); it was a great experience being a developer and I still value many of the friendships formed during that time. My passion for packaging has waned in recent years, and I tend to use Arch Linux more now (although I still love lots of things about Gentoo).

Just before Xmas our ten year old German Shepherd developed a sudden paralysis in his back legs and had to be put down. It was pretty devastating, after having him from when he was 12 weeks old. He joined our little family just after we got our own place in England, he had five great years in England and another five in the US. He was with me for so much of my life (a degree, loss of my brother, marriage, loss of my sister, moving to another country, birth of our first child, getting a "real" job). We had family over for the holidays as we call them over here (Xmas and New Year back home), which was great but we may not have been the best of company after having just lost our dog.

I think I skipped lots of stuff too, but it was quite a year! Hoping for more of a steady ride this year to say the least.

January 22, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Crashes and DoS, what is it with them anyway? (January 22, 2013, 16:38 UTC)

During the recent Gentoo mudslinging about libav and FFmpeg, one of the contention points was the fact that FFmpeg boasts more “security fixes” than libav over time. Any security-conscious developer would know that assessing the general reliability of a piece of software requires much more than just counting CVEs — they only get assigned when bugs are reported as security issues by somebody.

I ended up learning this first-hand. In August 2005 I was just fixing a few warnings in xine-ui, with nothing in mind but cleaning up the build log — that patch ended up in Gentoo, but no new release was made of xine-ui itself. Come April 2006, a security researcher marked those warnings as a security issue — we were already covered, for the most part, but other distros weren’t. The bug was fixed upstream, but not released, simply because nobody had considered it a security issue up to that point. My lesson was that issues that might lead to security problems are always better looked at by a security expert — that’s why I originally started working with oCERT to verify issues within xine.

So which kinds of issues are considered security issues? In this case the problem was a format string — this is obvious, as it can theoretically allow an attacker, under given conditions, to write to arbitrary memory. The same is true for buffer overflows, obviously. But what about unbounded reads, which in my experience form the vast majority of crashes out there? I would say that there are two widely different problems with them which can be categorized as security issues: information disclosure (if the attacker can decide where to read and can get useful information out of said read — such as the current base address of the executable or the libraries of the process, which can be used later), and good old crashes — which for security purposes are called DoS: Denial of Service.

Not all DoS are crashes, and not all crashes are DoS, though! In particular, you can DoS an app without crashing it, by deadlocking it or otherwise exhausting some scarce resource — this is the preferred method for DoS on servers; indeed this is how the Slowloris attack on Apache worked: it used up all the connection handlers and caused the server to stop answering legitimate clients; a crash would be much easier to identify and recover from, which is why DoS on servers are rarely full-blown crashes. Crashes cannot realistically be called DoS when they are user-initiated without a third party intervening. It might sound silly, and remind you of an old joke – “Doctor, doctor, if I do this it hurts!” “Stop doing that, then!” – but it’s the case: if going to the app’s preferences and clicking something causes the app to crash, then there’s a bug which is a crash but is not a DoS.

This brings us to one of the biggest problems with calling something a DoS: it might be a DoS in one use case and not in another — let’s use libav as an example. It’s easy to argue that any crash in the libraries while decoding a stream is a DoS, as it’s common to download a file and try to play it; said file is the element in the equation that comes from a possible attacker, and anything that can happen due to its decoding is a security risk. Is it possible to argue that a crash in an encoding path is a DoS? Well, from a client’s perspective, it’s not common — it’s still very possible that an attacker can trick you into downloading a file and re-encoding it, but it’s a less common situation, and in my experience most of the encoding-related crashes are triggered only with a given subset of parameters, which makes them more difficult for an attacker to exploit than a decoder-side DoS. If the crash only happens when using avconv, it’s also hard to declare it a DoS, taking into consideration that at most it should crash the encoding process, and that’s about it.

Let’s now turn the tables, and instead of being the average user downloading movies from The Pirate Bay, we’re a video streaming service, such as YouTube, Vimeo or the like — but without the right expertise, which means that a DoS on your application is actually a big deal. In this situation, assuming your users control the streams that get encoded, you’re dealing with an input source that is untrusted, which means that you’re vulnerable to crashes in both the decoder and the encoder as real-world DoS attacks. As you can see, what earlier required explicit user interaction and was hard to consider a full-blown DoS now becomes much more important.

This kind of issue is why languages like Ada were created, and why many people out there insist that higher-level languages like Java, Python and Ruby are more secure than C, thanks to the existence of exceptions for error handling, making it easier to have fail-safe conditions, which should solve the problem of DoS — the fact that there are just as many security issues in software written in high-level languages as in low-level ones shows how false that concept is nowadays. While it does save you from some kinds of crashes, it also creates issues by increasing the sheer area of exposure: the more layers, the more code is involved in execution, and that can actually increase the chance of somebody finding an issue in them.

Area of exposure is important also for software like libav: if you enable every possible format under the sun for input and output, you’re enabling a whole lot of code, and you can suffer from a whole lot of vulnerabilities — if you’re targeting a much more reduced audience, like for instance you’re using it on a device that has to output only H.264 and Speex audio, you can easily turn everything else off, and reduce your exposure many times. You can probably see now why even when using libav or ffmpeg as backend, Chrome does not support all the input files that they support; it would just be too difficult to validate all the possible code out there, while it’s feasible to validate a subset of them.

This should have established the terms of what to consider a DoS and when — so how do you handle this? Well, the first problem is to identify the crashes; you can either wait for an attack to happen and react to it, or proactively try to identify crash situations, and obviously the latter is what you should do most of the time. Unfortunately, this requires the use of many different techniques, none of which yields a 100% positive result; even the combined results are rarely sufficient to argue that a piece of software is 100% safe from crashes and other possible security issues.

One obvious thing is that you just have to make sure the code is not allowing things that should not happen, like incredibly high values or negative ones. This requires manual work and analysis of code, which is usually handled through code reviews – on the topic there is a nice article by Mozilla’s David Humphrey – at least for what concerns libav. But this by itself is not enough, as many times it’s values that are allowed by the specs, but are not handled properly, that cause the crashes. How to deal with them? A suggestion would be to use fuzzing, which is a technique in which a program is executed receiving, as input, a file that is corrupted starting from a valid one. A few years ago, a round of FFmpeg/VLC bugs were filed after Sam Hocevar released, and started using, his zzuf tool (which should be in Portage, if you want to look at it).
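
Just to give an idea of what such a run looks like (a sketch, not a recipe: sample.mkv and the corruption ratios are arbitrary, and in practice you would wrap this in a loop that keeps the fuzzed input whenever the child process crashes):

# feed avconv progressively corrupted copies of a known-good file, discarding the output
zzuf -c -s 0:1000 -r 0.001:0.01 avconv -i sample.mkv -f null -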

Unfortunately fuzzing, just like using particular exemplars of attacks in the wild, has one big drawback – one that we could call “zenish” – you can easily forget that you’re looking at a piece of code that is crashing on invalid input, and just go and resolve that one small issue. Do you remember the calibre security shenanigan? It’s the same thing: if you only fix the one bit that is crashing on you without looking at the whole situation, an attacker, or a security researcher, can just look around and spot the next piece that is going to break on you. This is the one issue that Luca, the others in the libav project and I get vocal about when we’re told that we don’t pay attention to security just because it takes us a little longer to come up with a (proper) fix — well, this, and the fact that most of the CVEs marked as resolved by FFmpeg we have had no way to verify for ourselves, because we weren’t given access to the samples needed to reproduce the crashes; this changed after the last VDD, at least for those coming from Google. If I’m not mistaken, at least one of them ended up with a different, complete fix rather than the partial bandaid put in by our peers at FFmpeg.

Test suites of valid configurations and valid files are not useful for identifying these problems, as those are valid files and should not cause a DoS anyway. On the other hand, a completely shot-in-the-dark fuzzing technique like zzuf may or may not help, depending on how much time you can pour into looking at the failures. Some years ago I read an interesting book, Fuzzing: Brute Force Vulnerability Discovery by Sutton, Greene and Amini. It was a very interesting read, although last I checked, the software it pointed to was mostly dead in the water. I should probably get back to it and see whether there are new forks of that software that we can use to help us get there.

It’s also important to note that it’s not just a matter of causing a crash: you need to save the sample that caused the issue, and you need to make sure that it’s actually crashing. Even an “all okay” result might not actually be a pass, as in some cases a corrupted file could cause a buffer overflow that, in a standard setup, lets the software keep running — hardened setups, and other tools, make it nicer to deal with that kind of issue at least…

Josh Saddler a.k.a. nightmorph (homepage, bugs)

a new song: walking home alone through moonlit streets by ioflow

for the 55th disquiet junto, two screws.

the task was to combine do and re by nils frahm into a new work. i chopped “re” into loops, and rearranged sections by sight and sound for a deliberately loose feel. the resulting piece is entirely unquantized, with percussion generated from the piano/pedal action sounds of “do” set under the “re” arrangement. the perc was performed with an mpd18 midi controller in real time, and then arranged by dragging individual hits with a mouse. since the original piano recordings were improvised, tempo fluctuates at around 70bpm, and i didn’t want to lock myself into anything tighter when creating the downtempo beats.

beats performed live on the mpd18, arranged in ardour3.

normally i’d program everything to a strict grid with renoise, but for this project, i used ardour3 (available in my overlay) almost exclusively, except for a bit of sample preparation in renoise and audacity. the faint background pads/strings were created with paulstretch. my ardour3 session was filled with hundreds of samples, each one placed by hand and nudged around to keep the jazzy feel, as seen in this screenshot:

ardour3 session

this is a very rough rework — no FX, detailed mixing/mastering, or complicated tricks. i ran outta time to do all the subtle things i usually do. instead, i spent all my time & effort on the arrangement and vibe. the minimal treatment worked better than everything i’d planned.

January 20, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)
RolandDG DXY-800A under Linux (January 20, 2013, 09:35 UTC)

Many moons ago, we acquired an old RolandDG DXY-800A plotter.  This is an early A3 plotter which took 8 pens, driven via either the parallel port or the serial port.

It came with software to use with an old DOS-version of AutoCAD.  I also remember using it with QBasic.  We had the handbook, still do, somewhere, if only I could lay my hands on it.  Either that, or on the QBasic code I used to use with the thing, as that QBasic code did exercise most of the functionality.

Today I dusted it off, wondering if I could get it working again.  I had a look around.  The thing was not difficult to drive from what I recalled, and indeed, I found the first pointer in a small configuration file for Eagle PCB.

The magic commands:

H           Go home
Jn          Select pen n (1..8)
Mxxxx,yyyy  Move (with pen up) to position xxx.x, yyy.y mm from the lower left corner
Dxxxx,yyyy  Draw (with pen down) a line to position xxx.x, yyy.y mm

Okay, this misses the text features, drawing circles and hatching, but it’s a good start.  Everything else can be emulated with the above anyway.  Something I’d have to do, since there was only one font, and I seem to recall, no ability to draw ellipses.

Inkscape has the ability to export HPGL, so I had a look at what the format looks like.  Turns out, the two are really easy to convert, and Inkscape HPGL is entirely line drawing commands.

hpgl2roland.pl is a quick and nasty script which takes Inkscape-generated HPGL, and outputs RolandDG plotter language. It’s crude, only understands a small subset of HPGL, but it’s a start.

It can be used as follows:

$ perl hpgl2roland.pl < drawing.hpgl > /dev/lp0
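
For the curious, the core of that transformation can be sketched in a few lines of awk. This is not the actual script: it only handles single-coordinate-pair PU/PD commands, and it assumes Inkscape emits standard HPGL units of 1/40 mm, which the Roland's 1/10 mm steps require dividing by 4.

# sketch only: convert the line-drawing subset of Inkscape HPGL to RolandDG commands
awk 'BEGIN { RS = ";" }
     { gsub(/[[:space:]]/, "") }                    # strip stray whitespace/newlines
     /^SP[1-8]$/ { printf "J%s\n", substr($0, 3) }  # pen select  -> Jn
     /^PU[0-9]/  { split(substr($0, 3), c, ",");    # pen-up move -> Mxxxx,yyyy
                   printf "M%d,%d\n", c[1]/4, c[2]/4 }
     /^PD[0-9]/  { split(substr($0, 3), c, ",");    # pen-down    -> Dxxxx,yyyy
                   printf "D%d,%d\n", c[1]/4, c[2]/4 }' \
    drawing.hpgl > drawing.roland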

January 19, 2013
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
GHC as a cross-compiler (January 19, 2013, 23:34 UTC)

Another small breakthrough today for those who would like to see Haskell programs running on more unusual targets.

Here is a small, incomplete HOWTO for Gentoo users on how to build a crosscompiler that runs on an x86_64 host and targets the ia64 platform.

It is just an example. You can pick any target.

First of all you need to enable haskell overlay and install host compiler:

# GHC_IS_UNREG=yeah emerge -av =ghc-7.6.1

The GHC_IS_UNREG=yeah bit is critical. If we don’t set it, the GHC build system will try to build a registerised stage1 (which is already a crosscompiler).

Not setting GHC_IS_UNREG will break the build for a number of reasons:

  • GHC will try to optimize generated bitcode with llvm‘s optimizer which will produce x86_64 instructions, not ia64.

  • GHC will try to run (broken on ia64) object splitter perl script: ghc-split.lprl.

The rest is rather simple:

# crossdev ia64-unknown-linux-gnu
# ia64-unknown-linux-gnu-emerge sys-libs/ncurses virtual/libffi dev-libs/gmp
# ln -s ${haskell_overlay}/haskell/dev-lang/ghc ${cross_overlay}/ia64-unknown-linux-gnu/ghc
# cd ${cross_overlay}/ia64-unknown-linux-gnu/ghc
# EXTRA_ECONF=--enable-unregisterised USE=ghcmakebinary ebuild ghc-9999.ebuild compile

It will fail, as the following command tries to run an ia64 binary on the x86_64 host:

libraries/integer-gmp/cbits/mkGmpDerivedConstants > libraries/integer-gmp/cbits/GmpDerivedConstants.h

I logged in to an ia64 box and ran mkGmpDerivedConstants to get a GmpDerivedConstants.h, added the result to ${WORKDIR}, and reran the last command.
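
In practice that step looks roughly like this (a sketch: ia64box is a placeholder host name, and the paths are relative to the GHC source tree under ${WORKDIR}):

# build the constants header on a real ia64 machine and capture it locally
scp libraries/integer-gmp/cbits/mkGmpDerivedConstants ia64box:
ssh ia64box ./mkGmpDerivedConstants > libraries/integer-gmp/cbits/GmpDerivedConstants.h
# then rerun the previous ebuild command to continue the build
EXTRA_ECONF=--enable-unregisterised USE=ghcmakebinary ebuild ghc-9999.ebuild compile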

After the build finished I had a crosscompiler:

sf ghc-9999 # "inplace/bin/ghc-stage1" --info
 [("Project name","The Glorious Glasgow Haskell Compilation System")
 ,("GCC extra via C opts"," -fwrapv")
 ,("C compiler command","/usr/bin/ia64-unknown-linux-gnu-gcc")
 ,("C compiler flags"," -fno-stack-protector  -Wl,--hash-size=31 -Wl,--reduce-memory-overheads")
 ,("ld command","/usr/bin/ia64-unknown-linux-gnu-ld")
 ,("ld flags","     --hash-size=31     --reduce-memory-overheads")
 ,("ld supports compact unwind","YES")
 ,("ld supports build-id","YES")
 ,("ld is GNU ld","YES")
 ,("ar command","/usr/bin/ar")
 ,("ar flags","q")
 ,("ar supports at file","YES")
 ,("touch command","touch")
 ,("dllwrap command","/bin/false")
 ,("windres command","/bin/false")
 ,("perl command","/usr/bin/perl")
 ,("target os","OSLinux")
 ,("target arch","ArchUnknown")
 ,("target word size","8")
 ,("target has GNU nonexec stack","True")
 ,("target has .ident directive","True")
 ,("target has subsections via symbols","False")
 ,("Unregisterised","YES")
 ,("LLVM llc command","llc")
 ,("LLVM opt command","opt")
 ,("Project version","7.7.20130118")
 ,("Booter version","7.6.1")
 ,("Stage","1")
 ,("Build platform","x86_64-unknown-linux")
 ,("Host platform","x86_64-unknown-linux")
 ,("Target platform","ia64-unknown-linux")
 ,("Have interpreter","NO")
 ,("Object splitting supported","NO")
 ,("Have native code generator","NO")
 ,("Support SMP","NO")
 ,("Tables next to code","NO")
 ,("RTS ways","l debug  thr thr_debug thr_l thr_p ")
 ,("Dynamic by default","NO")
 ,("Leading underscore","NO")
 ,("Debug on","False")
 ,("LibDir","/var/tmp/portage/cross-ia64-unknown-linux-gnu/ghc-9999/work/ghc-9999/inplace/lib")
 ,("Global Package DB","/var/tmp/portage/cross-ia64-unknown-linux-gnu/ghc-9999/work/ghc-9999/inplace/lib/package.conf.d")
 ]

# cat a.hs
main = print 1
# "inplace/bin/ghc-stage1" a.hs -fforce-recomp -o a
[1 of 1] Compiling Main             ( a.hs, a.o )
Linking a ...
# file a
a: ELF 64-bit LSB executable, IA-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, not stripped
# LANG=C ls -lh a
-rwxr-xr-x 1 root portage 24M Jan 20 02:24 a
on ia64:
$ ./a
1

Results:

  • It’s not that hard to build a GHC for some exotic target if you have gcc there.

  • mkGmpDerivedConstants needs to be more cross-compiler friendly. It should be really simple to implement, as it only queries data sizes/offsets; I think autotools is already able to do this.

  • GHC should be able to run llvm with the correct -mtriple in the crosscompiler case. That way we would get a registerised crosscompiler.

Some TODOs:

In order to coexist with the native compiler, ghc should stop mangling the --target=ia64-unknown-linux-gnu option passed by the user, and name the resulting compiler ia64-unknown-linux-gnu-ghc, not ia64-unknown-linux-ghc.

That way I could have many flavours of compiler for one target. For example I would like to have x86_64-pc-linux-gnu-ghc as a registerised compiler and x86_64-unknown-linux-gnu-ghc as an unreg one.

And yes, they will all be tracked by gentoo’s package manager.


January 18, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What we need from daemons (January 18, 2013, 18:04 UTC)

In my post of yesterday I noted some things about init scripts, small niceties that init scripts in Gentoo should implement to work properly and to solve the issue of migrating pid files to /run. Today I’d like to add a few more notes on what I wish all daemons out there implemented, at the very least.

First of all, while some people prefer the daemon not to fork and background by itself, I honestly prefer that it does — it makes so many things so much easier. But if you fork, wait until the forked process has completed initialization before exiting! The reason I’m saying this is that, unfortunately, it’s common for a daemon to start up, fork, then load its configuration file and find out there’s a mistake … leading to an init script that thinks the daemon started properly, while no process is left running. In init scripts, --wait allows you to tell the start-stop-daemon tool to wait a moment and check whether the daemon could start at all, but it’s not so nice, because you have to find the correct wait time empirically, and in almost every case you’re going to wait longer than needed.
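
For the record, the init-script side of that workaround looks more or less like this (mydaemon and the one-second figure are made up, which is exactly the problem):

start-stop-daemon --start --wait 1000 --pidfile /run/mydaemon.pid \
    --exec /usr/sbin/mydaemon -- --config /etc/mydaemon.conf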

If you do background by yourself, please make sure you create a pidfile telling the init system which PID to signal to stop — and if you do have such a pidfile, please do not make it configurable in the configuration file; set a compiled-in default and allow an override at runtime. The runtime override is especially welcome if your software is supposed to have multiple instances configured on the same box, as a single pidfile would then conflict. Not having it configured in a file means that you no longer need to hack up a parser for the configuration file to know what the user wants: you can rely on either the default or your override.

Also, if you do intend to support multiple instances of the same daemon, make sure you allow the configuration file to be passed in on the command line. This greatly simplifies the whole handling of multiple instances, and should be mandatory in that situation. Make sure you don’t re-use paths in that case either.
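
Putting the last two points together, a hypothetical well-behaved daemon would let you do something like this (all names and paths made up):

mydaemon                                          # compiled-in defaults: /etc/mydaemon.conf, /run/mydaemon.pid
mydaemon --config /etc/mydaemon/site-b.conf \
         --pidfile /run/mydaemon/site-b.pid       # second instance, both paths overridden at runtime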

If you have messages, you should also make sure they are sent to syslog — please do not force, or even default, everything to log files! We have tons of syslog implementations, and at least that way the user does not have to guess which of many files holds the messages from your last service start — at this point you have probably guessed that there are a few things I hope to rectify in Munin 2.1.

I’m pretty sure that there are other concerns that could come up, but for now I guess this would be enough for me to have a much simpler life as an init script maintainer.

Luca Barbato a.k.a. lu_zero (homepage, bugs)
The case of defaults (Libav vs FFmpeg) (January 18, 2013, 17:18 UTC)

I tried not to get into this discussion, mostly because it would degenerate into a mudslinging contest.

Alexis did not take well to the fact that Tomáš changed the default provider for libavcodec and the related libraries.

Before we start, one point:

I am as biased as Alexis, as we are both involved in the projects themselves. The same goes for Diego, but it does not apply to Tomáš: he is just a downstream by transition (libreoffice uses gstreamer, which uses *only* Libav).

Now the question at hand: which should be the default? FFmpeg or Libav?

How to decide?

- Libav has a strict review policy: every patch goes through review and has to be polished enough before landing in the tree.

- FFmpeg merges daily what had been done in Libav and has a more lax approach on what goes in the tree and how.

- Libav has FATE running on most architectures; many of those runs are on Gentoo, usually on real hardware.

- FFmpeg has an older FATE with fewer architectures, many of them qemu emulations.

- Libav defines the API

- FFmpeg follows adding bits here and there to “diversify”

- Libav has a major release per season, minor releases when needed

- FFmpeg releases a lot touting a lot of *Security*Fixes* (usually old code from the ancient times eventually fixed)

- Libav does care about crashes and fixes them, but does not claim every crash is a Security issue.

- FFmpeg goes by leaps to add MORE features, no matter what (including picking wip branches from my personal github and merging them before they are ready…)

- Libav is more careful, thus having fewer fringe features and focusing more on polishing before landing new stuff.

So if you are a downstream you can pick what you want, but if you want something working everywhere you should target Libav.

If you are missing a feature from Libav that is in FFmpeg, feel free to point me to it and I’ll try my best to get it to you.

Alexis Ballier a.k.a. aballier (homepage, bugs)

It’s been a while since I wanted to write about this, and the recent sort-of-hijack, without any kind of discussion, that made libav the default implementation for Gentoo finally motivated me.

Exactly two years ago, a group consisting of the majority of FFmpeg developers took over its maintainership. While I didn’t like the methods, I’m not an insider so my opinion stops here, especially if you pay attention to who was involved: Luca was part of it. Luca has been a Gentoo developer since before most of us even used Gentoo, and I must admit I’ve never seen him heat up any discussion, rather the contrary, and it’s always been a pleasure to work with him. What happened next, after a lot of turmoil, is that the developers split into two groups: libav, formed by the “secessionists”, and FFmpeg.

Good, so what do we choose now? One of the first things that was done on the libav side was to “clean up” the API with the 0.7 release, meaning we had to fix almost all its consumers: a bad idea if you want wide adoption of a library that has a history of frequently changing its API and breaking all its consumers. Meanwhile, FFmpeg maintained two branches: the 0.7 branch compatible with the old API and the 0.8 one with the new API. The two branches were supposed to be identical except for the API change. On my side the choice was easy: thanks but no thanks, sir, I’ll stay with FFmpeg.
FFmpeg, while having its own development and improvements, has been doing daily merges of all libav changes, often with an extra pass of review and checks, so I can even benefit from all the improvements from libav while using FFmpeg.

So why should we use libav? I don’t know. Some projects use libav within their release process, so they are likely to be much better tested with libav than with FFmpeg. However, until I see real bugs, I consider this pure supposition and I have yet to see real facts. On the other hand, I can see lots of false claims, usually without any kind of reference: Tomáš claims that there is no failure that is libav-specific, well, some bugs tend to say the contrary and have been open for some time (I’ll get back to XBMC later). Another false claim is that FFmpeg-1.1 will have the same failures as libav-9: since Diego made a tinderbox run for libav-9, I ran the tests for FFmpeg 1.1 and made the failures block our old FFmpeg 0.11 tracker. If you click the links, you will see that the number of blockers is much smaller (something like 2/3) for the FFmpeg tracker. Another false claim I have seen is that there will be libav-only packages: I have yet to see one; the example I got as an answer is gst-plugins-libav, which unfortunately is in the same shape for both implementations.

In theory FFmpeg-1.1 and libav-9 should be on par, but in practice, after almost two years of disjoint development, small differences have started to accumulate. One of them is the audio resampling library: while libswresample has been in FFmpeg since the 0.9 series, libav developers did not want it and made another one, with a very similar API, called libavresample, which appeared in libav-9. This smells badly of NIH syndrome, but to be fair, it’s not the first time such things happen: libav and FFmpeg developers tend to write their own codecs instead of wrapping external libraries, and usually achieve better results. The audio resampling library is why XBMC being broken with libav is, at least partly, my fault: while cleaning up its use of the FFmpeg/libav API, I made it use the public API for audio resampling, initially with libswresample, but made sure it worked with libavresample from libav. At that time, this would have meant requiring libav git master, since libav-9 was not even close to being released, so there was no point in trying to make it compatible with such a moving target. libswresample from FFmpeg had been present since the 0.9 series, released more than one year ago. Meanwhile, XBMC-12 has entered its release process, meaning it will probably not work with libav easily. Too late, too bad.

Another important issue I’ve raised is security holes: nowadays, we are much more exposed to them. Instead of having to send a specially crafted video to my victim and make him open it with the right program, I only have to embed it in an HTML5 webpage and wait. This is why I am a bit concerned that security issues fixed 7 months ago in FFmpeg have only been fixed with the recently released libav-0.8.5. I’ve been told that these issues are just crashes and have been fixed in a better way in libav: this is probably true, but I still consider the delay huge for such an important component of modern systems, and, thanks to FFmpeg merging from libav, the better fix will also land in FFmpeg. I have also been told that this will improve on the libav side, but again, I want to see facts rather than claims.

As a conclusion: why is the default implementation being changed? Some people seem to like the other one better and use false claims to force their preference. Is it a good idea for our users? Today, I don’t think so (remember: FFmpeg merges from libav and adds its own improvements), maybe later, when we have some clear evidence that libav is better (the improvements might be buggy or the merges might lead to subtle bugs). Will I fight to get the default back to FFmpeg? No. I use it, will continue to use and maintain it, and will support people who want the default back to FFmpeg, but that’s about it.


January 17, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Perl::Critic CERT Theme (January 17, 2013, 18:18 UTC)

So, Brian d Foy has compiled the CERT recommendations for securely programming in Perl. I’ve whipped up a perlcriticrc for it.

I’ve checked out the Subversion repository for Perl::Critic and will submit the simple patch… if somebody else hasn’t beaten me to it.

January 15, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Right at the start the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all the experiments. Some time ago we picked Pd0.3Ni0.7 as contact material since the palladium generates only a low resistance between nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties.
Three methods are used to obtain data - SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"
D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. Strunk
Journal of Applied Physics 113, 034303 (2013); arXiv:1208.2163 (PDF[*])
[*] Copyright American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics.

Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)

UPDATE: Added some basic migration instructions to the bottom.
UPDATE2: Removed mplayer incompatibility mention. Mplayer-1.1 works with system libav.

As the summary says the default media codec provider for new installs will be libav instead of ffmpeg.

This change is being done for various reasons, like matching the default with Fedora and Debian, or the fact that some high-profile projects (eg sh*tload of people use them) will probably be libav-only. One example is gst-libav, which in turn is required by libreoffice-4, which is due for release in about a month. To go for the least pain for the user we decided to move from ffmpeg as the default to libav as the default library.

This change won’t affect your current installs at all, but we would like to ask you to try to migrate to libav, test it, and report any issues. So if stuff happens in the future and we are forced to make libav the only implementation for everyone, you are not left in the dark screaming for your suddenly missing features.

What to do when some package does not build with libav but ffmpeg is fine

There are no such packages left around if I am searching correctly (might be my blindness so do not take my word for it).

So if you encounter any package not building with libav, just open a bug report on Bugzilla, assign it to the media-video team, and add lu_zero[at]gentoo.org to CC to be sure he really takes a sneaky look and fixes it. If you want to fix the issue yourself it gets even better: you write the patch, open the bug in our bugzie, and someone will include it. The patch should also be sent upstream for inclusion, so we don’t have to keep patches in the tree for a long time.

What should I do when I have some issues with libav and I require features that are in ffmpeg but not in libav

It’s easier than fixing bugs about failing packages. Just nag lu_zero (mail hidden somewhere in this post ;-)) and read this.

So when is this stuff going to ruin my day?

The switch in the tree, and a news item informing all users of media-video/ffmpeg, will happen at the end of January or early February, unless something really bad happens while you guys test it now.

I feel lucky and I want to switch right away so I can ruin your day by reporting bugs

Great, I am really happy you want to contribute. The libav switch is pretty easy to do, as there are only two things to keep in mind.

You have to sync your USE flags between virtual/ffmpeg and the newly-to-be-switched media-video/libav. The easiest way is to just edit your package.use and replace the media-video/ffmpeg line with a media-video/libav one.
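
In practice it is a one-line change, for example (the USE flags here are just an illustration):

# /etc/portage/package.use, before:
media-video/ffmpeg  X encode mp3 theora x264
# after:
media-video/libav   X encode mp3 theora x264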

Then one would go straight for emerge libav, but there is one more caveat: libav has split out the libpostproc library, while ffmpeg still uses the internal one. Code-wise they are most probably equal, but you have to account for it, so just call emerge with both libraries.
emerge -1v libav libpostproc

If this succeeds, you have to revdep-rebuild the affected packages, or use @preserved-rebuild from portage-2.2 to rebuild all the reverse dependencies of libav.
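
That last step is, depending on your portage version, roughly one of the following:

revdep-rebuild                 # older portage: rebuild whatever now links against missing libraries
emerge -av @preserved-rebuild  # portage-2.2: rebuild consumers of the preserved libraries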

Good luck and happy bug hunting.

January 14, 2013

Many times, when I had to set up make.conf on systems with particular architectures, I was unsure which is the best --jobs value.
The handbook suggests ${core} + 1, but since I’m curious I wanted to test it myself to be sure this is right.

To make a good test we need a package with a respectable build system that honors make parallelization and takes at least a few minutes to compile; otherwise, with packages that compile in a few seconds, we are unable to see any real difference.
kde-base/kdelibs is, in my opinion, perfect.

If you are on an architecture where kde-base/kdelibs is unavailable, just switch to another cmake-based package.

Now, download best_makeopts from my overlay. Below is an explanation of what the script does, along with various suggestions.

  • You need to compile the package on a tmpfs filesystem; I’m assuming you have /tmp mounted as tmpfs too.
  • You need to have the tarball of the package on a tmpfs too, because a slow disk may skew the timings.
  • You need to switch your governor to performance.
  • You need to be sure you don’t have strange EMERGE_DEFAULT_OPTS.
  • You need to add ‘-B’ because we don’t want to include the time of the installation.
  • You need to drop the existing caches before compiling.

As you can see, the loop will emerge the same package with MAKEOPTS from -j1 to -j10. If you have, for example, a single-core machine, running the loop from 1 to 4 is enough.
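
The core of best_makeopts is essentially this loop (a simplified sketch; the real script also takes care of the preconditions listed above):

for jobs in $(seq 1 10); do
    echo 3 > /proc/sys/vm/drop_caches      # drop the kernel caches before each run
    export MAKEOPTS="-j${jobs}"
    time emerge -B kde-base/kdelibs        # -B: build only, do not install
done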

Please, during the test, don’t use the CPU for other purposes, and if you can, stop all services and run the test from a tty; you will see the time for every merge.

The following is an example on my machine:
-j1 : real 29m56.527s
-j2 : real 15m24.287s
-j3 : real 13m57.370s
-j4 : real 12m48.465s
-j5 : real 12m55.894s
-j6 : real 13m5.421s
-j7 : real 13m13.322s
-j8 : real 13m23.414s
-j9 : real 13m26.657s

The hardware is:
Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz, which has 2 cores and 4 threads.
After -j4 you can see the regression.

Another example from an Intel Itanium with 4 CPUs.
-j1 : real 4m24.930s
-j2 : real 2m27.854s
-j3 : real 1m47.462s
-j4 : real 1m28.082s
-j5 : real 1m29.497s

I tested this script on ~20 different machines, and in the majority of cases the best value was ${core}, or more precisely ${threads}, of your CPU.

Conclusion:
From the handbook:

A good choice is the number of CPUs (or CPU cores) in your system plus one, but this guideline isn’t always perfect.

I don’t know who suggested ${core} + 1 in the handbook years ago, and I don’t want to trigger a flame war. I’m just saying that ${core} + 1 is not the best value for me, and the test confirms the part: “but this guideline isn’t always perfect”.

In all my cases ${threads} + ${X} was slower than just ${threads}, so don’t use -j20 if you have a dual-core CPU.
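For reference, a quick way to check how many threads your CPU exposes and then pin MAKEOPTS to that value (the -j4 below is only an example; use whatever your own test showed):

nproc    # number of available threads (or: grep -c ^processor /proc/cpuinfo)
# then set the chosen value in /etc/portage/make.conf, for example:
MAKEOPTS="-j4"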

Also, I’m not saying you must use ${threads}; I’m just saying feel free to run your own tests and see which value works best for you.

If you have suggestions to improve the functionality of the script, or you think the script is wrong, feel free to comment or send me an email.

January 13, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

I’m late with this, but… If you have not seen this talk yet, you might want to. As usual with Jacob, very interesting and inspiring.

On Aaron Swartz (January 13, 2013, 18:43 UTC)

Through both LWN and netzpolitik.org I just heard that Aaron Swartz has committed suicide. While watching his speech “How we stopped SOPA”, his name rang a bell with me; I looked into my inbox and found that he and I once had a brief chat about html2text, a piece of free software of his that I was in touch with in the context of Gentoo Linux. So there is this software, his website, these past mails, this amazing talk, his political work that I didn’t know about… and he’s dead. It only takes a few minutes of watching the talk to get the feeling that this is a great loss to society.

January 10, 2013
Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Make django-staticfiles follow the DRY principle (January 10, 2013, 07:03 UTC)

When you work with Django, and especially with static files or other template tags, you realize that you have to include {% load staticfiles %} in all your template files. This violates the DRY principle, because we have to repeat the {% load staticfiles %} template tag in each template file.

Let’s give an example.

We have a base.html file which links some JavaScript and CSS files from our static folder.

{% load staticfiles %}
<!DOCTYPE html>
<html>
    <head>
        <title>Webapp</title>
        <link rel="stylesheet" type="text/css" href="{% static "css/random-css.css" %}">
        <script type="text/javascript" src="{% static "js/random-javascript.js" %}"></script>
        {% block extra_js_top %}{% endblock %}
    </head>
...
</html>

We also have index.html, which extends base.html and additionally loads some extra JavaScript.

{% extends "base.html" %}
{% load staticfiles %}
{% block extra_js_top %}
    <script type="text/javascript" src="{% static "js/extra-javascript.js" %}"></script>
{% endblock %}

As you can see, I load staticfiles again in index.html. If I remove it, I get this error: “TemplateSyntaxError at /, Invalid block tag ‘static’”. Unfortunately, even though index.html extends base.html, it does not inherit the load template tag, so staticfiles is not loaded in index.html and our extra JavaScript file is not included.
The truth is that there is a hack-y way around this. After a bit of research I finally found a way to follow the DRY principle and avoid repeating the {% load staticfiles %} template tag in every template file.

Open one of the files that is loaded automatically at startup (settings.py, urls.py or models.py). I will use settings.py.
So we add the following to settings.py:

from django import template
 
# django-staticfiles DRY principle: register the staticfiles template tags as
# built-ins, so {% static %} works everywhere without {% load staticfiles %}
template.add_to_builtins('django.contrib.staticfiles.templatetags.staticfiles')

With that snippet of code we load staticfiles “globally”, and we don’t have to load staticfiles in every template (not even in base.html), because it is loaded from the beginning.

PS: On big projects this approach may not be considered ‘correct’; it is an unconventional technique and may confuse other developers.

I hope it will be useful.

Happy django-ing.

January 07, 2013
Alex Alexander a.k.a. wired (homepage, bugs)

Passwords. No one likes them, but everybody needs them. If you are concerned about your online safety, you probably have unique passwords for your critical accounts and some common pattern for all the almost-useless accounts you create when browsing the web.

At first I used to save my passwords in a GPG-encrypted file. Over time, however, I began using Firefox’s and Chrome’s password managers, mostly because of their awesome syncing capabilities and form auto-filling.

Unfortunately, convenience comes at a price. I ended up relying on the password managers a bit too much, using my password pattern all over the place.

Then it hit me: I had strayed too much. Although my main accounts were relatively safe (strong passwords, two factor authentication), I had way too many weak passwords, synced on way too many devices, over syncing protocols of questionable security.

Looking for a better solution, I stumbled upon LastPass. Although LastPass uses an interesting security model, with passwords encrypted locally and a password generator that helps you maintain strong passwords for all your accounts, I didn’t like depending on an external service for something so critical. Its UI also left something to be desired.

Meet “pass”.

A Unix command line tool that takes advantage of commonly used tools like gnupg and git to provide safe storage for your passwords and other critical information.

Pass’ concept is simple: it creates one file for each of your passwords, which it then encrypts using gpg and your key. You can provide your own passwords or ask it to generate strong passwords for you automatically.

When you need a password you can ask pass to print it on screen or copy it to the clipboard, ready for you to paste in the desired password field.

Pass can optionally use git, allowing you to track the history of your passwords and sync them easily among your systems. I have a Linode server, so I use that + gitolite to keep things synced.
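As a quick illustration of the workflow (the store entry names below are just examples):

pass init "My GPG key ID"          # create the password store, encrypted to your key
pass generate web/example.com 16   # generate and store a 16-character password
pass -c web/example.com            # copy that password to the clipboard
pass git init                      # put the store under git
pass git push                      # sync to your own remote, once you have configured one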

Installation and usage of the tool is straightforward, with clean instructions and bash completion support that makes it even easier to use.

All this does come with a cost, since you lose the ability to auto-save passwords and fill out forms, but that is a small price to pay compared to the security benefits gained. I also love the fact that you can access your passwords with standard Unix tools in case of emergency. The system is also useful for securely storing other critical information, like credit cards.

Pass is not for everyone and most people would be fine using something like LastPass or KeePass, but if you’re a Unix guy looking for a solid password management solution, pass may be what you’re looking for :)

Pass was written by zx2c4 (thanks!) and is available in Gentoo’s Portage tree:

emerge -av pass

For more information visit the project’s website at http://zx2c4.com/projects/password-store/

Jeremy Olexa a.k.a. darkside (homepage, bugs)
My holidays in Greece were excellent (January 07, 2013, 09:57 UTC)

No, the country is not in flames or rioting everyday, bad media, bad.

I spent 12 days in Greece. The Greek hospitality is superb; I could not ask for better friends in Greece. I first arrived in Thessaloniki and stayed there for a few nights. Then I went to Larissa and stayed with my friend and his family. There was a small communication barrier with his parents in this smaller town, since they don’t get too many tourists. However, I had a very nice Christmas there and it was nice to be with such great people over the holidays. I went to a namesday celebration. Even though I couldn’t understand most of the conversations, they still welcomed me, gave me food and wine, and exchanged cultural information. Then I went to Athens, stayed in a hostel, and spent New Year’s watching the fireworks over the Acropolis and the Parthenon. Cool experience! It was so great to be walking around the birthplace of “western ideals” – not the oldest civilization, but close. Some takeaway thoughts: 1) Greek hospitality is unlike anything I’ve experienced, really. I made sure I told everyone that they have an open door with me whenever we meet in “my new home” (meaning, I don’t know when or where), 2) you cannot go hungry in Greece, especially when they are cooking for you! 3) the cafe culture is great, 4) I want to go back during the summer.

Of course, you will always find the not so nice parts. I got fooled by the old man scam, as seen here. Luckily, they only got 30€ from me, compared to some of the stories I’ve heard. Looking back on it, I just laugh at myself. Maybe I’ll be jaded towards a genuine experience in the future but, lesson learned. I don’t judge Athens by this one mishap, however.

Greece - Dec 2012-22

I only have pictures of Athens since I had to buy a new camera… Pics here

January 06, 2013
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: ice is given (January 06, 2013, 09:31 UTC)

a new song: ice is given by ioflow

piano improvisation and ambient recordings for the 53rd disquiet junto, ice for 2013.

the assignment was to record the sound of ice in a glass, and make something of it.

the track picture shows my lo-fi setup for the field recording segment. i balanced a logitech USB microphone (which came with the Rock Band game) on a box of herbal tea (to keep it off the increasingly wet kitchen table), and started dropping ice cubes into a glass tumbler. audible is the initial crack and flex of the tray, scrabbling for cubes, tossing them into the cup. i made a point of recording the different tone of cubes dropped into a glass of hot water. i also filled the cup with ice, then recorded the sound of water running into it from the kitchen tap. i liked this sound enough to begin the song with it.

i decided that my first song of 2013 should incorporate the piano, so with the ice cubes recorded, i sat down to improvise an appropriately wintry melody. the result is a simple two-minute minor motif. i turned to the ardour3 beta to integrate the field recordings and the piano improvisation.

it’s been a while since i last used my strymon bluesky reverb pedal, so i figured i should use it for this project. i set up a feedback-free hardware effects loop using my NI Komplete Audio6 interface with the help of the #ardour IRC channel, and listened to the piano recording as it ran through fairly spacious settings on the BSR (normal mode, room type, decay @ 3:00, predelay @ 11:00, low damp @ 4:00, high damp @ 8:00). with just a bit of “send” to the reverb unit, the piano really came to life.

i added a few more tracks in ardour for the ice cube snippets, with even more subtle audio sends to the BSR, and laid out the field recordings. i pulled them apart in several places, copying and pasting segments throughout the song; minimal treatment was needed to get a good balance of piano and ice.

ardour3 session

working environment in ardour3. laying out hardware FX and tracks.

title reference: Job 37:10

January 04, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)
DIY Project: Gatsby cap (January 04, 2013, 22:13 UTC)

Those who have met me, might notice I have a somewhat unusual taste in clothing. One thing I despise is having clothes that are heavily branded, especially when the local shops then charge top dollar for them.

Where hats are concerned, I’m fussy. I don’t like the boring old varieties that abound in $2 shops everywhere. I prefer something unique.

The mugshot of me with my Vietnamese coolie hat is probably the one most people on the web know me by. I was all set to try and make one, and I had an idea how I might achieve it, bought some materials I thought might work, but then I happened to be walking down Brunswick Street in Brisbane’s Fortitude Valley and saw a shop selling them for $5 each.

I bought one and have been wearing it on and off ever since. Or rather, I bought one, it wore out, I was given one as a present, wore that out, got given two more. The one I have today is #4.

I find them quite comfortable, lightweight, and most importantly, they’re cool and keep the sun off well. They are also one of the few full-brim designs that can accommodate wearing a pair of headphones or headset underneath. Being cheap is a bonus. The downside? One is I find they’re very divisive, people either love them or hate them — that said I get more compliments than complaints. The other, is they try to take off with the slightest bit of wind, and are quite bulky and somewhat fragile to stow.

I ride a bicycle to and from work, and so it’s just not practical to transport. Hanging around my neck, I can guarantee it’ll try to break free the moment I exceed 20km/hr… if I try and sit it on top of the helmet, it’ll slide around and generally make a nuisance.

Caps stow much more easily. Not as good sun protection, but they can still look good. I’ve got a few baseball caps, but they’re boring and a tad uncomfortable. I particularly like the old vintage gatsby caps — often worn by the 1930s working class. A few years back, on my way to uni, I happened to stop by a St. Vinnies shop near Brisbane Arcade (sadly, they have since closed and moved on) and saw a gatsby-style denim cap going for about $10. I bought it, and people commented that the style suited me. This one was a little big on me, but I was able to tweak it a bit to make it fit.

Fast forward to today: it is worn out — the stitching is good, but there are significant tears in the panelling and the embedded plastic in the peak is broken in several places. I looked around for a replacement, but alas, they’re as rare as hens’ teeth here in Brisbane, and no, I don’t care for ordering overseas.

Down the road from where I live, I saw the local sports/fitness shop was selling those flat neoprene sun visors for about $10 each. That gave me an idea — could I buy one of these and use it as the basis of a new cap?

These things basically consist of a peak and headband, attached to a dome consisting of 8 panels.  I took apart the old faithful and traced out the shape of one of the panels.

Now I already had the headband and peak sorted out from the sun visor I bought, these aren’t hard to manufacture from scratch either.  I just needed to cut out some panels from suitable material and stitch them together to make the dome.

There are a couple of parameters one can experiment with that change the visual properties of the cap. Gatsby caps could be viewed as an early precursor of the modern baseball cap. The prime difference is the shape of the panels.

Measurements of panel from old cap

The above graphic is also available as a PDF or SVG image.  The key measurements to note are A, which sets the head circumference, C which tweaks the amount of overhang, and D which sets the height of the dome.

The head circumference is calculated as ${panels} × ${A}, so in the above case, 8 panels with a measurement of 80mm means a head circumference of 640mm. Hence why it never quite fitted me (58cm is about my size). I figured a measurement of about 75mm would do the trick.

B and C are actually two of the three parameters that separate a gatsby from the more modern baseball cap. The other parameter is the length of the peak. A baseball cap sets these to make the overall shape much more triangular, increasing B to about half of D and tweaking C to make the shape more spherical.

As for the overhang, I decided I’d increase this a bit, increasing C to about 105mm.  I left measurements B and D alone, making a fairly flattish dome.

For each of these measurements, once you come up with values that you’re happy with, add about 10mm to A, C and D for the actual template measurements to give yourself a fabric margin with which to sew the panels together.

As for material, I didn’t have any denim around, but on my travels I saw an old towel that someone had left by the side of the road — likely an escapee. These caps back in the day would have been made with whatever material the maker had to hand. Brushed cotton, denim, suede leather and wool are all common materials. I figured this would be a cheap way to try the pattern out, and if it worked out, I’d then see about procuring some better material.

Below are the results; click on the images to enlarge. Because this was my first attempt, and I just roughly cut the panels from a hand-drawn template, the panels didn’t quite meet in the middle. This is hidden by a small circular patch where the panels normally meet. Traditionally a button is sewn here. I sewed the patch from the underside so as to hide its edges.

Hand-made gatsby / Hand-made gatsby (underside)

Not bad for a first try. I note I didn’t quite get the panels aligned dead centre; the seam between the front two is slightly off-centre by about 15mm. The design looks alright to my eye, so I might look around for some suede leather and see if I can make a dressier one for more formal occasions.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
Signal handler safety, re-entering malloc (January 04, 2013, 20:23 UTC)

This is a story from real-world development. From signal(7):


   Async-signal-safe functions
       A  signal  handler  function must be very careful,
       since processing elsewhere may be interrupted at some
       arbitrary point in the execution of the program.
       POSIX has the concept of "safe function".  If a signal
       interrupts the execution of an  unsafe  function,
       and handler calls an unsafe function, then the behavior
       of the program is undefined.


After that, a list of safe functions follows, and one notable thing is that malloc and free are async-signal-unsafe!

I hit this issue while enabling tcmalloc's debugallocation for Chromium Debug builds. We have a StackDumpSignalHandler for tests, which prints a stack trace on various crashing signals for easier debugging. It's very useful, and worked fine for a pretty long while (which means that "but it works!" is not a valid argument for doing unsafe things).

Now when I enabled debugallocation, I noticed hangs triggered by the stack trace display. In one example, this stack trace:

@0  0x00000000019c6c85 in tcmalloc::Abort () at third_party/tcmalloc/chromium/src/base/abort.cc:15
@1 0x00000000019b39c1 in LogPrintf (severity=-4,
pat=0x32aeb18 "memory allocation/deallocation mismatch at %p: allocated with %s being deallocated with %s", ap=0x7fff52c379e8)
at third_party/tcmalloc/chromium/src/base/logging.h:210
@2 0x00000000019b3a8b in RAW_LOG (lvl=-4,
pat=0x32aeb18 "memory allocation/deallocation mismatch at %p: allocated with %s being deallocated with %s")
at third_party/tcmalloc/chromium/src/base/logging.h:230
@3 0x00000000019c3fb1 in MallocBlock::CheckLocked (this=0x7fd18f143400, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:461
@4 0x00000000019c3c42 in MallocBlock::CheckAndClear (this=0x7fd18f143400, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:401
@5 0x00000000019c436a in MallocBlock::Deallocate (this=0x7fd18f143400, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:557
@6 0x00000000019c1929 in DebugDeallocate (ptr=0x7fd18f143420, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:998
@7 0x00000000028d1482 in tc_delete (p=0x7fd18f143420) at ./third_party/tcmalloc/chromium/src/debugallocation.cc:1232
@8 0x000000000097dc04 in cc::ResourceProvider::deleteResourceInternal (this=0x7fd191827da0, it=...) at cc/resource_provider.cc:242
@9 0x000000000097daaf in cc::ResourceProvider::deleteResource (this=0x7fd191827da0, id=1) at cc/resource_provider.cc:230
@10 0x00000000006f9824 in (anonymous namespace)::ResourceProviderTest_Basic_Test::TestBody (this=0x7fd18dc5abf0)
at cc/resource_provider_unittest.cc:328
@11 0x00000000008ec801 in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fd18dc5abf0,
method=&virtual testing::Test::TestBody(), location=0x29463ab "the test body") at testing/gtest/src/gtest.cc:2071
@12 0x00000000008e9665 in testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fd18dc5abf0,
method=&virtual testing::Test::TestBody(), location=0x29463ab "the test body") at testing/gtest/src/gtest.cc:2123
@13 0x00000000008dee0d in testing::Test::Run (this=0x7fd18dc5abf0) at testing/gtest/src/gtest.cc:2143
@14 0x00000000008df3ea in testing::TestInfo::Run (this=0x7fd191823020) at testing/gtest/src/gtest.cc:2319
@15 0x00000000008df8dc in testing::TestCase::Run (this=0x7fd19181f0d0) at testing/gtest/src/gtest.cc:2426
@16 0x00000000008e3eea in testing::internal::UnitTestImpl::RunAllTests (this=0x7fd19829dd60) at testing/gtest/src/gtest.cc:4249

generates SIGSEGV (tcmalloc::Abort). This is just debugallocation having stricter checks about the usage of dynamically allocated memory. Now the StackDumpSignalHandler kicks in, and internally calls malloc. But we're already inside malloc code, as you can see in the above stack trace (see frame @7), and re-entering it tries to take locks that are already held, resulting in a hang.

The fix required several changes:
  • no dynamic memory, and that includes std::string and std::vector, which use it internally
  • no buffered stdio or iostreams, since they are not async-signal-safe (that includes fflush)
  • custom code for number-to-string conversion that doesn't need dynamically allocated memory (snprintf is not on the list of safe functions as of POSIX.1-2008; it seems to work on a glibc-2.15-based system, but as said before this is not a good assumption to make); in this code I've named it itoa_r, and it supports both base-10 and base-16 conversions, and also negative numbers for base-10
  • warming up backtrace(3): now this is really tricky, and backtrace(3) itself is not whitelisted for being safe; in fact, on the very first call it does some memory allocations; for now I've just added a call to backtrace() from a context that is safe and happens before the signal handler may be executed; implementing backtrace(3) in a known-safe way would be another fun thing to do
Note that for the above, I've also added a unit test that triggers the deadlock scenario. This will hopefully catch cases where calling backtrace(3) leads to trouble.

For more info, feel free to read the articles below: