
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
February 18, 2013, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

February 18, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Who consumes the semantic web? (February 18, 2013, 08:19 UTC)

In my previous post I’ve noted that I was adding support for the latest fad method for semantic tagging of data on web pages, but it was obviously not clear who actually consumes that data. So let’s see.

In the midst of the changes to Typo that I’ve been sending to support a fully SSL-compatible blog install (mine is not entirely there yet, mostly because most of the internal links from one post to the next are not currently protocol-relative), I’ve added one commit to provide a bit more OpenGraph data — OpenGraph is used almost exclusively by Facebook. The only metadata that I provide through that protocol, though, is an image for the blog – since I don’t have a logo, I’m sending my gravatar – the title of the single page, and the global site title.
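
For readers who have not used OpenGraph before, the markup boils down to a handful of <meta> tags in the page head; a minimal sketch (with placeholder values, not the ones my blog actually emits) looks like this:

<head>
  <meta property="og:image" content="https://example.org/gravatar.png" />
  <meta property="og:title" content="Title of the single page" />
  <meta property="og:site_name" content="Title of the whole blog" />
</head>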

Why that? Well, mostly because this way, if you do post a link to my blog on Facebook, it will appear with the title of the post itself instead of the one that is visible on the page. This solves the problem of whether the title of the blog itself should be dropped from the <title> tag.

As for Google, the most important piece of metadata you can provide them seems to be authorship tagging, which uses Google+ to connect content from the same author. Is this going to be useful? Not sure yet, but at least it shows up in a less anonymous way in the search results, and that can’t be bad. Unlike what they say on the linked page, it’s possible to use an invisible <link> tag to connect the two, which is why you don’t find a G+ logo anywhere on my blog.
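
For reference, the invisible variant is just a link element pointing at the Google+ profile; a sketch with a placeholder profile ID would be:

<link rel="author" href="https://plus.google.com/000000000000000000000" />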

What else do search engines do with the remaining semantic data? Not sure; they don’t seem to explain it, and since I don’t know what happens behind the scenes it’s hard for me to give a proper answer. But I can guess, and hope, that they use it to reduce the redundancy of the current index. For instance, pages that are actually a list of posts, such as the main index, the categories/tags and the archives, will now properly declare that they are describing a blog posting whose URL is, well, somewhere else. My hope would be for the search engines to then know to link to the declared blog post’s URL instead of the index page, and possibly boost the results for the posts that turn out to be more popular (given they can then count the comments). What I’m surely counting on is for descriptions in search results to be more human-centered.

Now, in the case of Google, you can use their Rich Snippet testing tool, which gives you an idea of what it finds. I’m pretty sure that they take all this data with a grain of salt though, seeing how many players there are in the “SEO” world, with people trying to game the system altogether. But at least I can hope that things will move in the right direction.

Interestingly, when I first implemented the new semantic data, Readability did not support it, and would show my blog’s title instead of the post’s title when reading the articles from there — after some feedback on their site they added a workaround for my case, so you can enjoy their app with my content just fine. Hopefully, with time, the microformat will be supported in the general case.

On the other hand, Flattr still makes no use of metadata, as far as I can see. They require that you add a button manually, including repeating the kind of metadata (content type, language, tags) that is already easily inferred from the microformat given. So I’d like to reiterate my plea to the Flattr developers to listen to OpenGraph and other microformat data, and at least use that to augment the manually-inserted buttons. Supporting the schema.org format, by the way, should make it relatively easy to add per-fragment buttons — i.e., I wouldn’t mind having a per-comment Flattr button to reward constructive comments, like they have on their own blog, but without the overhead it takes to do so manually.

Right now this is all the semantic data that I could figure out is actually being consumed. Hopefully things will become more useful in the future.

February 17, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
LightZone in Gentoo betagarden (February 17, 2013, 19:08 UTC)

If you are running Gentoo, heard about the release of the LightZone source code and got curious to see it for yourself:

sudo layman -a betagarden
sudo emerge -av media-gfx/LightZone

What you get is LightZone built 100% from sources, with no more pre-built .jar files included.

One word of warning: the software has not seen much testing in this form yet. So if your pictures mean a lot to you, make backups first. Better safe than sorry.

Stuart Longland a.k.a. redhatter (homepage, bugs)
Well, it finally happened (February 17, 2013, 09:54 UTC)

Well, I was half expecting that it’d happen one day. Gentoo Bug 89744, the bug that saw me promoted to developer on the Gentoo/MIPS team, is now a retirement bug in full swing.

To those in the Gentoo community, I say, thank-you for putting up with me for so long. It is probably time that I moved on though. Real life has meant I’ve got practically no time during my working week to do anything meaningful, and after a week of arguing with computers (largely Ubuntu-based) I come home on a Friday evening not feeling like even looking at a computer. Some weekends, the computer has stayed in my backpack, and not been removed until the following Monday when I return to work.

Thus the time has come, I must be going.

That said, I mostly did enjoy the time I had as a developer. I still remain a Gentoo user, as that seems to be the OS that best fits my usage patterns, and I might pop up from time to time, but I’ll probably maintain a fairly low profile from now on.

I actually didn’t notice the account being shut down; in fact, I only discovered today, through an amateur radio colleague, that the redhatter@gentoo.org email address was no longer working. That’s when I thought to have a quick gander and found out what had happened.

This does leave me with two Lemote boxes that technically no longer belong here.

Remember these? They’re looking for a home now!

I shall enquire, and find out where to send the boxes themselves, or a donation to cover their cost. It is not right that they remain here without some sort of compensation.

This leaves some people without a means of contacting me.  I don’t bother with the instant messengers these days, and definitely not Skype.

Plain old email still works though, you can contact me at stuartl at longlandclan dot yi dot org from now on.  As for the old links on dev.gentoo.org, terribly sorry but that’s outside my control now.


Update:

The Lemote boxes now have a home. Thanks Anthony!

February 16, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
ModSecurity and my ruleset, a release (February 16, 2013, 15:44 UTC)

After the recent Typo update I had some trouble with Akismet not working properly to mark comments as spam, at least the very few spam comments that could get past my ModSecurity Ruleset — so I set off to deal with it a couple of days ago to find out why.

Well, to be honest, I didn’t really want to focus on why at first. The first thing I found out while looking at the way Typo uses Akismet is that it still used a bundled, hacked, ancient akismet library. Given that the API key I got was valid, I jumped to the conclusion, right or wrong as it was, that the code was simply using an ancient API that had been retired, and decided to look around for a newer Akismet version; lo and behold, a 1.0.0 gem was released not many months ago.

After fiddling with it a bit, the new Akismet library worked like a charm, and spam comments passing through ModSecurity were again marked as such. A pull request and its comments later, I got a perfectly working Typo which marks comments as spam as good as before, with one less library bundled within it (and I also got the gem into Portage so there is no problem there).

But this left me with the problem that some spam comments were still passing through my filters! Why did that happen? Well, if you remember, my idea behind it was validating the User-Agent header content… and it turns out that the latest Firefox versions have such a short header that almost every spammer seems to have been able to copy it just fine, so they weren’t killed off as intended. So, more digging in the requests.

Some work later, and I was able to find two rules with which to validate Firefox, and a bunch of other browsers; the first relies on checking the Connection: keep-alive header that is always sent by Firefox (tried in almost every possible combination), and the other relies on checking the Content-Type on the POST request for a charset being defined: browsers will have it, but whatever the spammers are using nowadays doesn’t.
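
To give an idea of the shape of such rules, here is a rough sketch of the approach (not the actual rules from the ruleset, and the rule IDs are arbitrary): the first chain denies a request that claims to be Firefox but lacks the keep-alive header, the second denies a POST whose Content-Type carries no charset.

# sketch only, not the actual modsec-flameeyes rules
SecRule REQUEST_HEADERS:User-Agent "@contains Firefox" \
    "id:900101,phase:2,deny,status:403,msg:'UA claims Firefox but no keep-alive',chain"
    SecRule REQUEST_HEADERS:Connection "!@contains keep-alive"

SecRule REQUEST_METHOD "@streq POST" \
    "id:900102,phase:2,deny,status:403,msg:'POST without a charset',chain"
    SecRule REQUEST_HEADERS:Content-Type "!@contains charset="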

Of course, the problem is that once I actually describe and upload the rules, spammers will just improve their tools so they don’t make these mistakes, but in the meantime I’ll have a calm, spamless blog. I still won’t give in to captchas!

At any rate, besides adding these validations, thanks to another round of testing I was able to fix Opera Turbo users (now they can comment just fine), and that led me to the choice of tagging the ruleset and… releasing it! Now you can download it from GitHub or, if you use Gentoo, just install it as www-apache/modsec-flameeyes — there’s also a live ebuild for the most brave.

Me and a RaspberryPi: Introduction (February 16, 2013, 11:05 UTC)

People who follow me on Google+ might have noticed last night that I ordered a RaspberryPi board. This might sound strange, but the reason is simple: I needed a very small, very low power computer to set up a friend of mine with, for a project that we’ve decided to work on together, but let’s put this in order.

My friend owns a Davis Vantage Pro2 weather station, and he publishes the data coming from it on his website — up to now he’s been doing that with the software that comes with the station itself, which runs on Windows, and in particular on his laptop. So there are no updates when he’s not at home.

So, what has this to do with a RaspberryPi board? Well, the station connects to a PC via a USB cable, connected in turn to a USB-to-serial adapter, which means that there is no low-level protocol to reverse engineer; not only that, but Davis publishes the protocol specifications as well as a number of other documents and SDKs for Windows and Macintosh.

It is strange that Davis does not publish anything for Linux themselves, but I found an old project and a newer one that seems to do exactly what my friend needs — the latter in particular does exactly what we want, which means that my task in all this is to set up a Gentoo Linux install (cross-compiled, of course) to run on that Pi and to get wview to actually work on Gentoo — it requires some packaging and, more likely than not, some fixing.

Thankfully, I don’t have to start from scratch; Elias pointed out that we have a good page on the wiki with instructions, even though they do not include cross-compilation pointers and other details that could be extremely useful. I’ll probably extend from there with whatever I find useful besides the general cross-compilation. Depending on whether I need non-cross-compilable software, I might end up experimenting with qemu-user for ARM chroots…
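
For the cross-compilation part, the usual starting point on Gentoo is crossdev; something along these lines should produce a toolchain for the tuple commonly used with the first-generation Pi (worth double-checking against the wiki page, of course):

# hypothetical example: build a stable cross toolchain for the armv6 hard-float target
crossdev -S -t armv6j-hardfloat-linux-gnueabi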

At any rate, this post serves as an introduction of what you might end up reading in this blog in the future, which might or might not end up on Planet Gentoo depending on what the topic of the single post is.

February 15, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Book review — Instant Munin Plugin Starter (February 15, 2013, 16:12 UTC)

This is going to be a bit of a different review than usual, if anything because I actually already reviewed the book, in the work-in-progress sense. So bear with me.

Today, Packt published Bart ten Brinkle’s Instant Munin Plugin Starter, which I reviewed early this year. Bart has done an outstanding job of expanding from the sparsely-available documentation to a comprehensive and, especially, coherent book.

If you happen to use Munin, or are interested in using it, I would say it’s a read well worth the $8 that it’s priced at!

LinuxCrazy Podcasts a.k.a. linuxcrazy (homepage, bugs)
Podcast 97 Interview with WilliamH (February 15, 2013, 00:46 UTC)

Interview with WilliamH, Gentoo Linux Developer

Links

OpenRC
http://en.wikipedia.org/wiki/OpenRC
http://git.overlays.gentoo.org/gitweb/?p=proj/openrc.git
udev
http://en.wikipedia.org/wiki/Udev
espeak
http://espeak.sourceforge.net/
speakup
http://www.linux-speakup.org/
espeakup
https://github.com/williamh/espeakup
Gentoo Accessibility
http://www.gentoo.org/proj/en/desktop/accessibility/

Download

ogg

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The issue with the split HTML/XHTML serialization (February 15, 2013, 00:10 UTC)

Not everybody knows that HTML 5 has been released in two flavours: HTML 5 proper, which uses the old serialization, similarly to HTML 4, and what is often incorrectly called XHTML 5, which uses the XML serialization, like XHTML and XHTML 1.1 did. The two serializations have different degrees of strictness, and the browsers deal with them accordingly.

It so happens that DocBook’s default output for XHTML 1 is compatible with the HTML serialization, which means that even if the files have a .html extension, locally, they will load correctly in Chrome, for instance. The same can’t be said for the XHTML 1.1 or XHTML5 output; one particularly nasty problem is that the generated code will contain XML-style tags such as <a id="foo" />, which throw off the browsers entirely unless the page is properly loaded as XHTML … and on the other hand, IE still has trouble when served properly-typed XHTML (i.e. you have to serve it as application/xml rather than application/xhtml+xml).

So I have two choices: redirect all the .html requests to .xhtml, make it use XHTML 5 and work around the IE8 (and earlier) limitations, or forget about XHTML 5 altogether. This starts to get tricky! So for the moment I decided not to go with XHTML 5, and at the same time I’m going to keep building ePub 2 books and publish them as they are, instead of using ePub 3 (even though, as I said, O’Reilly got it working for their workflow).

Unfortunately, even if I went through with fixing that on the server side, it wouldn’t be enough on its own! I would also have to change the CSS, since many things that were always <div> before now use proper semantic types, including <section> (with the exception of the table of contents on the first landing page, obviously, damn). This actually makes it easier in one way, as it lets me drop the stupid nth-child CSS3 trick I used to set the style of the main div as opposed to the header and footer. Hopefully this should let me fix the nasty IE 3 style beveled border that Chrome put around the Flattr button when using XHTML 5.

In the mean time I have a few general fixes to the style, now I just need to wait for the cover image to come from my designer friend, and then I can update both the website and the eBook versions all around the stores.

To close the post… David, you deserve a public apology: while you were listed as <editor> on the DocBook sources before, and the XSL was supposed to emit that on the homepage, for whatever reason it fails to. I’ve upgraded you to <author> until I can find out why the XSL is misbehaving so I can fix it properly.

In the mean time, tomorrow I’ll write a few more words about automake and then

February 14, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

http://pyfound.blogspot.de/2013/02/python-trademark-at-risk-in-europe-we.html

Greg KH a.k.a. gregkh (homepage, bugs)
A year in my life. (February 14, 2013, 17:58 UTC)

I've now been with the Linux Foundation for just over a year. When I started, I posted a list of how you can watch to see what I've been doing. But, given that people like to see year-end-summary reports, the excellent graphic designers at the Linux Foundation have put together an image summarizing my past year, in numbers:

Year in the life of a kernel maintainer

February 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Adding semantic data to the blog (February 13, 2013, 15:42 UTC)

You probably remember I’ve been a bit of a semantic data nerd, as I’ve added support for RDF and quite a bit of other extra metadata to both my blog and FSWS a long time ago. Well, since I was already looking into updating Autotools Mythbuster to make the website use XHTML5, I went to look at which options are available for it, since the old RDFa syntax is no longer available.

After some reading around, even though RDFa is still supported, with a new syntax, it turns out the currently preferred way to declare semantic data on a webpage is through schema.org, which is not a standards body but just a cooperation between the “big guys” in the web search business, mostly Google and Microsoft, not unlike sitemaps.org, with which it also shares its design. The idea is that instead of having two dozen vocabularies to express metadata, you get one big vocabulary that includes most of the important metadata useful for search engines to get the right data out of a page — after all, what is that data there to do, besides making it easier for search engines to find your important bits?

So I started with my blog, ripping out the old RDF metadata and setting up the new one. For the most part, it turned out to be easy, although one of the biggest problems was avoiding too-redundant metadata. For instance, both the blog itself and the single article have me as author (it’s not strictly correct, as there could be more authors on the blog, but in my case there are none, so…), and I was going crazy trying to use the itemref attribute to get it to reuse the author data already expressed at the top level — the trick is that the correct way to express this is:

<html itemscope="itemscope" itemtype="http://schema.org/Blog">
  <body>
    <div id="author" itemprop="author" itemtype="http://schema.org/Person" itemscope="itemscope">
      <meta itemprop="name" content="John Smith" />
    </div>
    <article itemprop="blogPost" itemtype="http://schema.org/BlogPosting" itemscope="itemscope" itemref="author"></article>
  </body>
</html>

The trick here is that you don’t have to define an author property for the article, but just import the global one at the article level; the documentation for that out there is quite lacking, and the result has been that I wasted the morning trying to get Google to process the data correctly.

At any rate, the experiment turned out decently enough, which means that the next day or two are going to be spent getting FSWS to emit the same kind of data; then I should have a good starting point to make Autotools Mythbuster use the same syntax.

Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Gentoo Bugday Strikes Back (February 13, 2013, 09:38 UTC)

bugday gentoo

In an attempt to revive the Gentoo Bugday, I wrote this article to give some guidelines and encourage both users and developers to join. I think it would be great to get this event back and collaborate. Of course everyone can open/close bugs silently, but this type of event is a good way to close bugs, attract new developers/users and improve community relations. There is no need to be a Gentoo expert. So I will give you some information about the event.

About:

Bugday is an online event that takes place on the first Saturday of every month in #gentoo-bugs on the Freenode network. Its goal is to have users and developers collaborate to close/open bugs, update current packages and improve documentation.

Location:

Gentoo Bugday takes place in our official IRC channel, #gentoo-bugs @ Freenode. You can talk about almost everything: your ebuilds, version bumps, bugs that you choose to fix, etc. This is a 24h event, so don’t worry about the timezone difference.

Requirements:

  1. A Gentoo installation (on real hardware or in a virtual machine).
  2. An IRC client to join #gentoo-bugs, #gentoo-dev-help (ebuild help) and #gentoo-wiki (wiki help)
  3. Positive energy / Will to help.
  4. (bonus): Coffee ;)

Goals:

  1. Improve quality of Bugzilla
  2. Improve Wiki’s documentation.
  3. Improve community relations.
  4. Attract new developers and users.
  5. Promote Gentoo.

Tasks:

  1. Fix bugs (users/developers)
  2. Triage incoming bugs (users/developers) (Good to start!)
  3. Version bumps (users/developers) ( Good to start!)
  4. Improve wiki articles (users/developers) (Good to start!)
  5. Add new wiki articles (users/developers)
  6. Close old fixed bugs (developers-only)

A good way to start is to take a look at the ‘maintainer-needed’ list. In addition, try picking up a bug from the maintainer-wanted alias at Bugzilla.

TIP: You should DOUBLE/TRIPLE check everything before submitting a new bug/patch/ebuild.

TIP2: Please avoid 0day bump requests.

And do not forget every day is a bugday!!

Organize your schedule and join us every first Saturday of every month @ #gentoo-bugs.

Consider starting today by reading the following docs to help you get going.

Useful Docs:

  1. Gentoo Bugday
  2. Get Involved in Gentoo Linux
  3. How to contribute to Gentoo
  4. Gentoo Dev Manual
  5. Contributing Ebuilds
  6. Gentoo Bug Reporting Guide
  7. Beautiful bug reports
  8. Gentoo’s Bugzilla User’s Guide
  9. How to get meaningful backtraces in Gentoo
  10. The Basics of Autotools

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

As one of my four talks at FOSDEM, I gave one on Gentoo titled "Package management and creation in Gentoo Linux." The basic idea was: what could packagers and developers of other, non-Gentoo distros learn from Gentoo’s packaging format, and how has it iterated on that format multiple times over the years? It’s got some slides, but the interesting part is where we run through actual ebuilds to see how they’ve changed as we’ve advanced through EAPIs (Ebuild APIs), starting at 16:39.

If you click through to YouTube, the larger (but not fullscreen) version seems to be the easiest to read.

It was scaled from 720×576 to a 480p video, so if you find it too hard to read the code, you can view the original WebM here.


Tagged: development, gentoo

February 12, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Transforming GuideXML to wiki (February 12, 2013, 18:12 UTC)

The Gentoo project has had its own official wiki for some time now, and we are going to use it more and more in the next few months. For instance, in the last Gentoo Hardened meeting, we already discussed that most user-oriented documentation should be put on the wiki, and I’ve heard that there are ideas about moving Gentoo project pages at large towards the wiki. And also for the regular Gentoo documentation, I will be moving those guides that we can no longer easily maintain ourselves towards the wiki.

To support migrations of documents, I created a gxml2wiki.xsl stylesheet. Such a stylesheet can be used, together with tools like xsltproc, to transform GuideXML documents into text output somewhat suitable for the wiki. It isn’t perfect (far from it, actually), but at least it allows for a simpler migration of documents, with minor editing afterwards.

Currently, using it is as simple as invoking it against the GuideXML document you want to transform:

~$ xsltproc gxml2wiki.xsl /path/to/document.xml

The output shown on the screen can then be used as a page. The following things still need to be corrected manually:

  • Whitespace is broken; sometimes there are too many newlines. I made the decision to insert newlines whenever they might be needed (which produces too many) rather than too few (which would make it harder to find where to add them in).
  • Links need to be double/triple checked, but I’ll try to fix that in later editions of the stylesheet.
  • Commands will have “INTERNAL” in them – you’ll need to move the commands themselves into the proper location and only put the necessary output in the pre-tags. This is because the wiki format has more structure than GuideXML in this matter, so transformations are more difficult to write in this regard.

The stylesheet currently adds a link to the Server and security category automatically, but of course you’ll need to change that to the proper category for the document you are converting.

Happy documentation hacking!

February 11, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
It's that time of the year again... (February 11, 2013, 21:58 UTC)

Which time of the year? The time when Google announces the new Summer of Code!

Okay, so you know I’m not always very positive about the outcome of Summer of Code work, even though I’m extremely grateful to Constanze (and Mike, who got it in tree now!) for the work on filesystem-based capabilities — I’m pretty sure at this point that it has also been instrumental for the Hardened team to have their xattr-based PaX marking (I’m tempted to reconsider Hardened for my laptops now that Skype is no longer a no-go, by the way). Other projects (many of which centred around continuous integration, with no results) ended up in much worse shape.

But since being always overly negative is not a good way to go through life, I’m going to propose a few possible things that could be useful to have, both for Gentoo Linux and for libav/VLC (whichever is going to be part of GSoC this year). Hopefully, if something comes out of them, it’s going to be good.

First of all, a reiteration of something I’ve been asking of Gentoo for a while: a real alternatives-like system. Debian has a very well implemented tool for selecting among alternative packages that provide the same tool. In Gentoo we have eselect — and a bunch of modules. My laptop counts 10 different eselect packages installed, and for most of them, the overhead of having another package installed is bigger than the eselect module itself! This also does not really work that well: for instance, you cannot choose the tar command, and pkg-config vs pkgconf requires you to make a single selection by installing one or the other (or disabling the flag on pkgconf, but that defeats the point, doesn’t it?).

Speaking of eselect and similar tools, we still have gcc-config and binutils-config as their own tools, without using the framework that we use for about everything else. Okay, the last guy who tried to tackle these bit off more than he could chew, and the result has been abysmal, but the reason there is likely that the target was set too high: redo the whole compiler handling so that it could support non-GCC compilers… this might actually be too small a project for GSoC, but it might work as a qualification task, similar to the ones we’ve had for libav in the past.

Moving on to libav, one thing that I was discussing with Luca, J-B and other VLC developers was the possibility of integrating at least part of the DVD handling that is currently split between libdvdread and libdvdnav into libav itself. VLC already forked the two libraries (and I rewrote the build system) — Luca and I were already looking into merging them back into a single libdvd library… but a rewrite, and especially one that can reuse code from libav, or introduce new code that can be shared, would probably be a good thing. I haven’t looked into it, but I wouldn’t be surprised if libdvbpsi could follow the same treatment.

Finally, another project that could sound cool for libav would be to create a library, API- and ABI-compatible with xine, that only uses libav. I’m pretty sure that if most of the internals of xine are dropped (including the configuration file and the plugin system), it would be possible to have a shallow wrapper around libav instead of a full-blown project. It might lose support for some files, such as modules, and DVDs, but it would probably be a nice proof of concept and would show what we still need… and the moment we can deal with those formats straight in libav, we know we have something better than plain libav.

On a similar note, one of the last things I worked on in xine was the “audio out conversion branch”, see for instance this very old post — it is no more, no less than what we now know as libavresample, just done much worse. Indeed, libavresample actually has the SIMD-optimized routines I never found out how to write, which makes it much nicer. Since xine, at the moment I left it, was actually quite nicely using libavutil already, it might be interesting to see what happens if all the audio conversion code is killed and replaced with libavresample.

So these are my suggestions for this season of GSoC, at least for the projects I’m involved on… maybe I’ll even have time to mentor them this year, as with a bit of luck I’ll have stable employment when the time comes for this to happen (more on this to follow, but not yet).

The odyssey of making an eBook (February 11, 2013, 14:41 UTC)

Please note, if you’re reading this post on Gentoo Universe, that this blog is syndicated there in its full English content, including posts like this one, which is at this point a status update on a project that I have to call commercial. So don’t complain that you read this on an “official Gentoo website”, as Universe is quite far from being an official website. I could understand the complaint if it were posted on Planet Gentoo.

I mused last week about the possibility of publishing Autotools Mythbuster as an eBook — after posting the article I decided to look into which options I had for self-publishing, and, long story short, I ended up putting it for sale on Amazon and on Lulu (which nowadays handles eBooks as well). I’ve actually sent it to Kobo and Google Play as well, but they haven’t finished publishing it yet; Lulu is also taking care of iBooks and Barnes & Noble.

So let’s first get the question out of the way: the pricing of the eBook has been set to $4.99 (or equivalent) on all stores; some stores apply extra taxes (Google Play would apply 23% VAT in most European countries; books are usually 4% VAT here in Italy, but eBooks are not!), and I’ve been told already that at least from the Netherlands and the Czech Republic, the Kindle edition almost doubles in price — which is suboptimal for both me and you all, as when that happens, my share is reduced from 70% to 35% (after expenses, of course).

Much more interesting than this, though, is the technical aspect of publishing the guide as an eBook. The DocBook stylesheets I’ve been using (app-text/docbook-xsl-ns-stylesheets) provide two ways to build an ePub file: one is through a pure XSLT that bases itself on the XHTML5 output and only creates the files (leaving it to the user to zip them up); the other is a one-call-everything-done Ruby script. The two options produce quite different files, in ePub 3 and ePub 2 format respectively. It is possible to produce an ePub 3 book that is compatible with older readers, as an interesting post from O’Reilly explains, but doing so with the standard DocBook chain is not really possible, which is a bummer.

In the end, while my original build was ePub 3 (which was fine for both Amazon and Google Play), I had to re-build it for Lulu, which requires ePub 2 — it might be worth noting that Lulu says that’s because their partners, the iBookstore and the Nook store, would refuse the invalid file, as they check it with epubcheck version 1… but as O’Reilly says, iBooks has one of the best implementations of ePub 3, so it’s mostly an artificial limitation, most likely caused by their toolchain or BN’s. So I think from the next update forward I’ll stick with ePub 2 for a little while longer.

On the other hand, getting these two to work also gave me a working upgrade path to XHTML 5, which failed for me last time. The method I had been using to decide exactly which chapters and sections to break onto their own pages in the output was manual explicit chunking through the chunk.toc file — this is not available for XHTML5, but it turns out there is a nicer method: just including the processing instructions in the main DocBook files, which works with both the old XHTML1 and the new XHTML5 output, as well as ePub 2 and ePub 3. While the version of the stylesheet that last generated the website is not using XHTML5 yet, it will soon do so, as I’m working on a few more changes (among which the overdue Credits section).

One of the things that I had to be more careful with, with ePub 2, were the “dangling links” to sections I planned but haven’t written yet. There are a few in both the website and the Kindle editions, but they are gone in the Lulu (and Kobo, whenever they make it available) editions. I’ve been working a lot last week to fill in these blanks and extend the sections, especially for what concerns libtool and pkg-config. This week I’ll work a bit more on the presentation as well, since I still lack a real cover (which is important for an eBook at least), and there are a few things to fix on the published XHTML stylesheet as well. Hopefully, before next week there will be a new update for both the website and the eBooks that covers most of this, and more.

The final word has to clarify one thing: both Amazon and Google Books put the review on hold the moment they found the content already available online (mostly on my website and at Gitorious), and asked me to confirm how that was possible. Amazon unlocked the review just a moment later and published by the next day; Google is still processing the book (maybe it’ll be easier when I make the update and it becomes an ePub 2 everywhere, with the same exact content and a cover!). It doesn’t seem to me like Lulu is doing anything like that, but it might just have noticed that the content is published on the same domain as the email address I registered with, who knows?

Anyway, to finish it off, once again, the eBook version is available at Amazon and Lulu — both versions will come with free updates: I know Amazon allows me to update it on the fly and just requires a re-download from their pages (or devices); I’ll try to get them to notify the buyers, otherwise I’ll just notify people here. Lulu also allows me to revise a book, but I have no idea whether they will warn the buyers and whether they’ll provide the update… but if that’s not the case, just contact me with the Lulu order identifier and I’ll set things up so that you get the updates.

February 10, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Multiple SSL implementations (February 10, 2013, 14:36 UTC)

If you run ~arch you probably noticed a few days ago that all Telepathy-based applications failed to connect to Google Talk. The reason for this was a change in GnuTLS 3.1.7 that made it stricter when checking security parameters’ compliance, and in particular required 816-bit primes on default connections, whereas the GTalk servers provided only 768. The change has since been reverted, and version 3.1.8 connects to GTalk by default once again.

Since I hit this the first time I tried to set up KTP on KDE 4.10, I was quite appalled by the incompatibility, and was tempted to stay with Pidgin… but in the end, I found a way around this. The telepathy-gabble package was not doing anything to choose the TLS/SSL backend, but deep inside it (both telepathy-gabble and telepathy-salut bundle the wocky XMPP library — I haven’t checked whether it’s the same code, and thus whether it has to be unbundled), it’s possible to select between GnuTLS (the default) and OpenSSL. I’ve changed it to OpenSSL and everything went fine. Now this is exposed as a USE flag.

But this made me wonder: does it matter at runtime which backend one’s using? To be honest, one of the reasons why people use GnuTLS over OpenSSL is licensing concerns, as OpenSSL’s license is, by itself, incompatible with the GPL. But the moment you can ignore the licensing issues, does it make any difference to choose one or the other? It’s a hard question to answer, especially when you consider that we’re talking about crypto code, which tends to be written in such a way as to be optimized for execution as well as memory. Without going into details of which one is faster in execution, I would assume that OpenSSL’s code is faster simply due to its age and spread (GnuTLS is not too bad; I have some more reservations about libgcrypt, but never mind that now, since it’s no longer used). Furthermore, Nikos himself noted that sometimes a better algorithm has been discarded before, because of the FSF copyright assignment shenanigans which I commented on a couple of months ago.

But more importantly than this, my current line of thought is wondering whether it’s better to have everything GnuTLS or everything OpenSSL — and whether it makes any difference to have mixed libraries. The size of the libraries themselves is not too different:

        exec         data       rodata        relro          bss     overhead    allocated   filename
      964847        47608       554299       105360        15256       208805      1896175   /usr/lib/libcrypto.so
      241354        25304        97696        12456          240        48248       425298   /usr/lib/libssl.so
       93066         1368        57934         2680           24        14640       169712   /usr/lib/libnettle.so
      753626         8396       232037        30880         2088        64893      1091920   /usr/lib/libgnutls.so

OpenSSL’s two libraries are around 2.21MB of allocated memory, whereas GnuTLS is 1.21 (more like 1.63 when adding GMP, which OpenSSL can optionally use). So in general, GnuTLS uses less memory, and it also has much higher shared-to-private ratios than OpenSSL, which is a good thing, as it means it creates smaller private memory areas. But what about loading both of them together?

On my laptop, after changing the two telepathy backends to use OpenSSL, there is no running process using GnuTLS at all. There are still libraries and programs making use of GnuTLS (and most of them without an OpenSSL backend, at least as far as the ebuilds go) — even a dependency of the two backends (libsoup), which still does not load GnuTLS at all here.

While this does not mean that I can get rid of GnuTLS on my system (NetworkManager, GStreamer, VLC, and even the gabble backend use it), it means that I don’t have to keep that library loaded into memory at all times. While 1.2MB is not that big a saving, it’s still a drop in the ocean of the memory usage.

So, while sometimes what I call “software biodiversity” is a good thing, other times it only means we end up with a bunch of libraries all doing the same thing, all loaded at the same time for different software… oh well, it’s not like it’s the first time this happens…

February 09, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I guess many people may hit similar problems, so here is my experience of the upgrades. Generally it was pretty smooth, but required paying attention to the details and some documentation/forums lookups.

udev-171 -> udev-197 upgrade

  1. Make sure you have CONFIG_DEVTMPFS=y in the kernel .config, otherwise the system becomes unbootable for sure (I think the error message during boot mentions that config option, which is good). A quick way to check this is sketched after this list.
  2. The ebuild also asks for CONFIG_BLK_DEV_BSG=y, not sure if that's strictly needed but I'm including it here for completeness.
  3. Things work fine for me without DEVTMPFS_MOUNT. I haven't tried with it enabled, I guess it's optional.
  4. I do not have a split /usr. YMMV then if you do.
  5. Make sure to run "rc-update del udev-postmount".
  6. Expect network device names to change (I guess this is a non-issue for systems with a single network card). This can really mess up things in quite surprising ways. It seems /etc/udev/rules.d/70-persistent-net.rules no longer works (bug #453494). Note that the "new way" to do the same thing (http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames) is disabled by default in Gentoo (see /etc/udev/rules.d/80-net-name-slot.rules). For now I've adjusted my firewall and other configs, but I think I'll need to figure out the new persistent net naming system.
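
As a quick sanity check for the kernel options mentioned in the first two items, something like this should show both options enabled before you reboot into the new udev (assuming your kernel exposes its config, or that /usr/src/linux points at the right tree):

zgrep -E 'CONFIG_DEVTMPFS=|CONFIG_BLK_DEV_BSG=' /proc/config.gz \
  || grep -E 'CONFIG_DEVTMPFS=|CONFIG_BLK_DEV_BSG=' /usr/src/linux/.config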

iptables-1.4.13 -> iptables-1.4.16.3

* Loading iptables state and starting firewall ...
WARNING: The state match is obsolete. Use conntrack instead.
iptables-restore v1.4.16.3: state: option "--state" must be specified

It can be really non-obvious what to do with this one. Change your rules from e.g. "-m state --state RELATED" to "-m conntrack --ctstate RELATED". See http://forums.gentoo.org/viewtopic-t-940302.html for more info.
  Also note that iptables-restore doesn't really provide good error messages, e.g. "iptables-restore: line 48 failed". I didn't find a way to make it say what exactly was wrong (the line in question was just a COMMIT line, it didn't actually identify the real offending line). These mysterious errors are usually caused by missing kernel support for some firewall features/targets.
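
If the rules file is long, a one-liner along these lines may save some of the manual conversion described above (this assumes Gentoo's default location for the saved rules; it is only a sketch, so back the file up first):

cp /var/lib/iptables/rules-save /var/lib/iptables/rules-save.bak
sed -i 's/-m state --state/-m conntrack --ctstate/g' /var/lib/iptables/rules-save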

two upgrades together

Actually, what adds to the confusion is having these two upgrades done simultaneously. This makes it harder to identify which upgrade is responsible for which breakage. For an even smoother ride, I'd recommend upgrading iptables first, making sure the updated rules work, and then proceeding with udev.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

When I wrote it last time I wasn’t really planning on making this a particularly regular column, especially since the health of Gentoo is rarely good, and I rant enough about it that just adding one more series of articles is not going to be very helpful.

I haven’t really ranted much about the KDE 4.10 update, on my blog at least, although I did complain about it on Google+ — it wasn’t, though, a problem with Gentoo for the most part, but rather a QA failure within the KDE project coupled with a smaller failure in GnuTLS which caused headaches to just a bunch of users in ~arch, which is not a big deal.

On the other hand, Paweł wrote about the bigger failures of the past year or so (I would say the worst upgrade failures since we got rid of Arfrever!) — the problem did not lie in the packages themselves; if upstream decides to change the rules of the game, it’s their prerogative, but we should have handled them properly, among other things by releasing news items for both of them.

It’s not like these have been the only screwups; for instance, miredo is still broken after ifconfig and ip were moved to /bin, because Mike decided that adding a compatibility symlink was just a workaround…

The current status of ~arch is actually quite decent. The new Boost is masked (and will likely stay that way for a little while); we got rid of Ruby Enterprise (which was not maintained), and Rubinius never really entered the tree due to their choices of not making releases and not following LLVM closely enough, so even that is clean. We still lack a decent, recent JRuby, but the problem there is twofold: I don’t know enough about packaging Java, which is bothersome enough when I’m trying to get epubcheck3 in tree, and there are new dependencies that need to be packaged. Given my job direction, I doubt I’ll have much time for this though.

As for the output of the tinderbox, the situation is… okayish. The stable testing actually entered a lull of looping around the same packages over and over again, which means it can relax for a little while, until a new stable package is released. The ~arch testing is currently running the reverse dependencies of pkgconf, at least as many as it can hit, to see if it’s possible at all to discuss changing the default. I’m not sure if that’s going to be a good idea, to be honest, but I’ll leave that to the rest of the developers to decide.

So anyway, this is the bottom line for the moment, I would say. I’m still hoping for things to improve over time instead of getting worse, but that kind of hope is considering taking a long hiatus — especially since there are build, test, or QA failures that haven’t been worked on in over three years…

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We've generated a new set of profiles for Gentoo installation. These are now called 13.0 instead of 10.0, e.g., "default/linux/amd64/10.0/desktop" becomes "default/linux/amd64/13.0/desktop".
Everyone should upgrade as soon as possible. This brings (nearly) no user-visible changes. Some new files have been added to the profile directories that make it possible for the developers to do more fine-grained use flag masking (see PMS-5 for the details), and this formally requires a new profile tree with EAPI=5 (and a recent portage version, but anything since sys-apps/portage-2.1.11.31 should work and anything since sys-apps/portage-2.1.11.50 should be perfect).
Since the 10.0 profiles will be deprecated immediately and removed in a year, emerge will suggest a replacement on every run. I strongly suggest you just follow that recommendation.
One additional change comes with this: the "server" profiles will be removed; they do not exist in the 13.0 tree anymore. If you have used a server profile so far, you should migrate to its parent, i.e. from "default/linux/amd64/10.0/server" to "default/linux/amd64/13.0". This may change the default value of some use flags (the setting in "server" was USE="-perl -python snmp truetype xml"), so you may want to check these flags after switching profiles, but otherwise nothing happens.
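
For reference, switching is just a matter of listing the available profiles and picking the matching 13.0 entry, for example (the exact profile name depends on your arch and current selection):

eselect profile list
eselect profile set default/linux/amd64/13.0/desktop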

February 08, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

While on my machine KDE 4.10.0 runs perfectly fine, unfortunately a lot of Gentoo users see immediate crashes of plasma-desktop - which makes the graphical desktop environment completely unusable. We know more or less what happened in the meantime, just not how to properly fix it...
The problem:

  • plasma-desktop uses a new code path in 4.10, which triggers a Qt bug leading to immediate SIGSEGV. 
  • The Qt bug only becomes fatal for some compiler options, and only on 64bit systems (amd64).
  • The Qt bug may be a fundamental architectural problem that needs proper thought.
The links:
The bugfixing situation:
  • Reverting the commit to plasma-workspace that introduced the problem makes the crash go away, but plasma-desktop starts hogging 100% CPU after a while. (This is done in plasma-workspace-4.10.0-r1 as a stopgap measure.) Kinda makes sense since the commit was there to fix a problem - now we hit the original problem.
  • The bug seems not to occur if Qt is compiled with CFLAGS="-Os". Cause unknown. 
  • David E. Narváez aka dmaggot wrote a patch for Qt that fixes this particular codepath but likely does not solve the global problem.
  • So far comments from Qt upstream indicate that this is in their opinion not the right way to fix the problem.
  • Our Gentoo Qt team understandably only wants to apply a patch if it has been accepted upstream.
Right now, the only option we (as the Gentoo KDE team) have is to wait for someone to pick up the phone. Either from KDE (to properly use the old codepath or provide some alternative), or from Qt (to fix the bug or apply a workaround)...

Sorry & stay tuned.

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

Update! Update! Read all about it! You can find the recent updates in a tree near you. They are currently keyworded, but will be stabilized as soon as the arch teams find time to do so. You may not want to wait that long, as the issue fixed is a Denial of Service, which is not as severe as it sounds in this case: the user would have to be logged in to cause a DoS.

There have been some other updates to the PostgreSQL ebuilds as well. PostgreSQL will no longer restart if you restart your system logger. The ebuilds install PAM service files unique to each slot, so you don’t have to worry about them being removed when you uninstall an old slot. And, finally, you can write your PL/Python in Python 3.
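
For those curious about that last point, PL/Python on Python 3 is exposed as the plpython3u language; a minimal example (the function itself is just an illustration):

CREATE EXTENSION plpython3u;
CREATE FUNCTION pymax (a integer, b integer) RETURNS integer AS $$
    return max(a, b)
$$ LANGUAGE plpython3u;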

Greg KH a.k.a. gregkh (homepage, bugs)
AF_BUS, D-Bus, and the Linux kernel (February 08, 2013, 18:37 UTC)

There's been a lot of information scattered around the internet about these topics recently, so here's my attempt to put it all in one place to (hopefully) settle things down and give my inbox a break.

Last week I spent a number of days at the GNOME Developer Hackfest in Brussels, with the goal of helping to improve the way applications written for GNOME (and, even more generally, Linux) can be distributed. A great summary of what happened there can be found in this H-Online article. Also please read Alexander Larsson's great summary of what we discussed and worked on for another view of this.

Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it today.

Our goal (and I use "goal" in a very rough sense, I have 8 pages of scribbled notes describing what we want to try to implement here) is to provide a reliable multicast and point-to-point messaging system for the kernel, that will work quickly and securely. On top of this kernel feature, we will try to provide a "libdbus" interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system.

nothing blocks

"But Greg!" some of you will shout, "What about the existing AF_BUS kernel patches that have been floating around for a while and that you put into the LTSI 3.4 kernel release?"

The existing AF_BUS patches are great for users who need a very low-latency, high-speed, D-Bus protocol on their system. This includes the crazy automotive Linux developers, who try to shove tens of thousands of D-Bus messages through their system at boot time, all while using extremely underpowered processors. For this reason, I included the AF_BUS patches in the LTSI kernel release, as that limited application can benefit from them.

Please remember the LTSI kernel is just like a distro kernel: it has no relation to upstream kernel development other than being a consumer of it. Patches are in this kernel because the LTSI member groups need them; they aren't always upstream, just as with all Linux distro kernels.

However, given that the AF_BUS patches have been rejected by the upstream Linux kernel developers, I advise that anyone relying on them be very careful about their usage, and be prepared to move away from them sometime in the future when this new "kernel dbus" code is properly merged.

As for when this new kernel code will be finished, I can only respond with the traditional "when it is done" mantra. I can't provide any deadlines, and at this point in time, don't need any additional help with it; we have enough people working on it at the moment. It's available publicly if you really want to see it, but I'll not link to it as it's nothing you really want to see or watch right now. When it gets to a usable state, I'll announce it in the usual places (the linux-kernel mailing list), where it will be torn to the usual shreds and I will rewrite it all again to get it into a mergeable state.

In the meantime, if you see me at any of the many Linux conferences I'll be attending around the world this year, and you are curious about the current status, buy me a beer and I'll be glad to discuss it in person.

If there's anything else people are wondering about this topic, feel free to comment on it here on google+, or email me.

February 07, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened goes onward (aka project meeting) (February 07, 2013, 21:40 UTC)

It’s been a while again, so time for another Gentoo Hardened online progress meeting.

Toolchain

GCC 4.8 is in development stage 4, so the hardened patches will be worked on next week. Some help is needed to test the patches on ARM, PPC and MIPS though. For those interested, keep a close eye on the hardened-dev overlay, as it will contain the latest fixes. When GCC 4.9 starts development phase 1, Zorry will again try to upstream the patches.

With the coming fixes, we might probably (need to) remove the various hardenedno* GCC profiles from the hardened Gentoo profiles. This shouldn’t impact too many users as ebuilds add in the correct flags anyhow (for instance when needing to turn off PIE/PIC).

Kernel, grSecurity and PaX

The kernel release 3.7.0 that we have stable in our tree has seen a few setbacks, but no higher version is stable yet (mainly due to the stabilization period needed). 3.7.4-r1 and 3.7.5 are prime candidates with a good track record, so we might be stabilizing 3.7.5 in the very near future (next week probably).

On the PaX flag migration (you know, from ELF-header based marking to extended attributes marking), the documentation has seen its necessary upgrades and the userland utilities have been updated to reflect the use of xattr markings. The eclass we use for the markings will use the correct utility based on the environment.

One issue faced when trying to support both markings is that some actions (like the “paxctl -Cc” which creates the PT_PAX header if it is missing) make no sense with the other (as there is no header when using XATTR_PAX). The eclass will be updated to ignore these flags when XATTR_PAX is selected.
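
To illustrate the difference between the two marking styles, here is a rough sketch (the exact flags depend on what the package needs) of the header-based and the xattr-based markings:

# legacy PT_PAX marking: create the header if missing and disable MPROTECT
paxctl -Cm /path/to/binary
# xattr-based marking stores the flags in the user.pax.flags attribute instead
setfattr -n user.pax.flags -v m /path/to/binary
getfattr -n user.pax.flags /path/to/binary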

SELinux

Revision 10 is stable in the tree, and revision 11 is in its stabilization period. A few more changes have been put in the policy repository already (which are installed when using the live ebuilds) and will of course be part of revision 12.

A change in the userland utilities was also pushed out to allow permissive domains (so you can run a single domain in permissive mode instead of the entire system).
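
In practice that means a single domain can be toggled with semanage, roughly like this (the domain name is just an example):

# put only one domain in permissive mode, leaving the rest of the system enforcing
semanage permissive -a mozilla_t
# and revert it once done debugging
semanage permissive -d mozilla_t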

Finally, the SELinux eclass has been updated to remove SELinux modules from all defined SELinux module stores if the SELinux policy package is removed from the system. Before that, the user had to remove the modules from the store himself manually, but this is error-prone and easily forgotten, especially for the non-default SELinux policy stores.

Profiles

All hardened subprofiles are now marked as deprecated (you've probably seen the discussions on this on the mailing list), so we now have a sane set of hardened profiles to manage. The subprofiles were used for things like "desktop" or "server", whereas users can easily stack their profiles as they see fit anyhow – so there was little reason for the project to continue managing those subprofiles.

Also, now that Gentoo has released its 13.0 profile, we will need to migrate our profiles to the 13.0 ones as well. The idea is to temporarily support 13.0 in a subprofile, test it thoroughly, and then remove the subprofile and switch the main profile to 13.0.
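
For users, trying such a subprofile once it shows up would look roughly like this; the profile name below is made up for illustration, so check the eselect output for the real one:

    eselect profile list                              # shows all available (sub)profiles
    eselect profile set hardened/linux/amd64/13.0     # hypothetical 13.0 test subprofile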

System Integrity

The documentation for IMA and EVM is available on the Gentoo Hardened project site. It currently still refers to the IMA and EVM subsystems as development-only, but they are available in the stable kernels now. The default policy that is available in the kernel is especially useful. If you want to use custom policies (for instance with SELinux integration), you'll need a kernel patch that is already upstream but not yet applied to the stable kernels.

To support IMA/EVM, a package called ima-evm-utils is available in the hardened-dev overlay; it will be moved to the main tree soon.
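
As a minimal sketch of trying the default policy on such a kernel (the kernel options and the boot parameter are the ones I'd expect from the guide, so verify them against the documentation):

    # kernel built with CONFIG_IMA=y (and CONFIG_EVM=y for EVM), booted with "ima_tcb"
    # on the kernel command line to enable the default measurement policy
    mount -t securityfs securityfs /sys/kernel/security 2>/dev/null
    head /sys/kernel/security/ima/ascii_runtime_measurements
    # the userland helpers come from the ima-evm-utils package mentioned above
    evmctl --help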

Documentation

As mentioned before, the PaX documentation has seen quite a lot of updates. Other documents that have seen updates are the Hardened FAQ, the Integrity subproject pages and the SELinux documentation, although most of those changes were small.

Another suggestion was to clean up the Hardened project page; however, there has been some talk within Gentoo about moving project pages to the Gentoo wiki. Such a move might make this easier to handle. And while on the subject of the wiki, we might want to move the user guides to the wiki already.

Bugs

Bug 443630 refers to segmentation faults with libvirt when starting QEMU domains on an SELinux-enabled host. Sadly, I'm not able to test libvirt myself, so either someone with SELinux and libvirt expertise can chime in, or we will need to troubleshoot it through the bug report (using gdb, more strace'ing, …), which might take quite some time and is not user friendly…

Media

Various talks were held at FOSDEM regarding Gentoo Hardened, and a lot of people attended them. The round table was also quite effective, with many users interacting with developers all around. For next year, chances are very high that we'll give a "What has changed since last year" session and hold a round table again.

With many thanks to the usual suspects: Zorry, blueness, prometheanfire, lejonet, klondike and the several dozen contributors that are going to kill me for not mentioning their (nick)names.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
January in review: Istanbul, Dubai (February 07, 2013, 17:33 UTC)

Preface: It appears that I have fallen behind in my writings. It's a shame really, because I think of things I should write in the moment and then forget them. However, as I'm embracing slowish travel, sometimes I just don't do anything interesting enough to write about every day/week.

My last post was about my time in Greece. Since then I have been to Istanbul, Dubai, and (now) Sri Lanka. I was in Istanbul for about 10 days. My lasting impressions of Istanbul were:

  • +: Istanbul was my first visit to a Muslim country. This is a positive because it opened up some thoughts of what to expect as I continue east. Seeing all the impressive mosques, hearing the azan (call to prayer) in the streets, and talking to some Turks about religion really made it a new experience for me.
  • +: Istanbul receives many visitors per year, which makes it such that it is easy to converse, find stuff you need, etc
  • -: Istanbul receives many visitors per year, which makes it very touristy in some parts.
  • +: Istanbul is a huge city and there is much to see. I set foot in Asia for the first time. There are many old, old buildings that leave you in awe: the oldest shopping area in the world, the Grand Bazaar, stuff like that.
  • -: Istanbul is a huge city and the public transit is not well connected, I thought.
  • –: Every shop owner harasses you to come into their store! The best defense I can recommend is to walk with a purpose (like you are running an errand) but not in a hurry. This will draw the least attention to yourself, at the risk of "missing" the finer details as you meander.

(photo: Turkey - Jan 2013-67)

Let's not kid anyone, Dubai was a skydiving trip, for sure. I spent 15 days in Dubai and made 30 jumps. It was a blast. I was at the dropzone most every day, and on the weather days my generous hosts showed me around the city. I didn't feel the need to take any pictures of the sights because, while impressive, they seemed too "fake" to me (outrageous, silly, etc). I went to the largest mall in the world, ate brunch in the shadow of the largest building in the world, saw the largest aquarium and an indoor ski hill in a desert; eventually it was just…meh. However, I will never forget "The Palm".

When deciding where to go onwards, I knew I shouldn't stay in Dubai too long (money matters, of course; I would spend my whole lot on fun, and there is so much more to see). I ended up in Sri Lanka because Skyscanner told me there was a direct flight there on a budget airline. I don't see the point in accepting layovers in my flight plans at my pace. Then I found someone on HelpX who wanted an English teacher in exchange for accommodation. While I'm not a teacher, I am a native speaker, and that was acceptable for this level of classes. I did a week-long stint of that in a small village and now I'm relaxing at the beach… I'll write more about Sri Lanka later and post pics. A fun photo so far:

(photo: 20130209-134926.jpg)

February 03, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)

Warning, this is a long post written over some hours. It is a brain dump of my thoughts regarding user interfaces and IT in general.

Technology is a funny beast, largely because people so often get confused about what "technology" really is. Wind back the clock a few hundred millennia, and the concept of stone tools was all the rage. Then came metallurgy, mechanisation and industrialisation; with successive years come new waves of "technology".

Today though, apparently it's only these electronic gadgets that need apply. Well, perhaps that's a bit unfair, but from the way some people behave, you could be forgiven for thinking so.

What is amusing though, is when some innovation gets dreamt up, becomes widespread (or perhaps not), then gets forgotten about, and is later re-discovered. Nowhere have I noticed this more than in the field of user interfaces.

My introduction to computing

Now I'll admit I'm hardly an old hand in the computing world. Not by a long shot. My days of computing go back no further than about the late '80s. My father, working for Telecom Australia (as they were then known), brought home a laptop computer.

A "luggable" by today's standards, it had no battery and required 240V AC, and it had a smallish monochrome plasma display with CGA graphics. The machine sported an Intel 80286 with an 80287 maths co-processor, maybe 2MB RAM tops, maybe a 10MB HDD and a 3.5″ floppy drive. The machine was about 4 inches high when folded up.

It ran, of course, the DOS operating system. Not sure what version, maybe MS-DOS 5. My computing life began with simple games that you launched by booting the machine up, sticking a floppy disk (a device you now only ever see in pictorial form next to the word "Save") into the drive, firing up X-Tree Gold, and hunting down the actual .exe file to launch stuff.

Later on I think we did end up setting up QuikMenu, but for a while, that's how it was. I seem to recall at one point my father bringing home something of a true laptop, something with a monochrome LCD screen and an internal battery. A 386 of some sort, but with too little RAM to run those shiny panes of glass from Redmond.

Windows

I didn't get to see this magical "Windows" until about 1992 or '93, when my father brought home a brand-new desktop. An Intel 486DX running at 33MHz, 8MB RAM, something like a 150MB HDD, and a new luxury, a colour VGA monitor. It also had Windows 3.1.

So, as many may have gathered, I've barely known computers without a command line. Through my primary school years I moved from just knowing the basics of DOS to knowing how to maintain the old CONFIG.SYS and AUTOEXEC.BAT boot scripts, dealing with WIN.INI, and fiddling around with the PIF editor to get cantankerous DOS applications working.

Eventually I graduated to QBasic and learning to write software. Initially with only the commands PRINT and PLAY, baby steps that just spewed rubbish on the screen and made lots of noise with the PC speaker, but it was a start. I eventually learned how to make it do useful things, and even dabbled with other variants like CA Realizer BASIC.

My IT understanding, however, revolved entirely around DOS and the IBM PC clone. I did from time to time get to look at Apple's offerings: at school there was the odd Apple computer, I think one Macintosh, and a few Apple IIs. With the exception of old-world MacOS, I had not experienced a desktop computer OS lacking a command line.

Around this time frame, a second computer appeared. This new one was an AMD Am486DX4 100MHz with, I think, 16MB or 32MB RAM, I can't recall exactly (it was 64MB and an Am5x86 133MHz by the time the box officially retired). It was running a similar, but different OS, Windows NT Workstation 3.1.

At this point we had a decent little network set up with the two machines connected via RG58 BNC-terminated coax. My box moved to Windows for Workgroups 3.11, and we soon had network file sharing and basic messaging (Chat and WinPopup).

Windows 95

Mid 1996, and I graduated to a new computer. This one was a Pentium 133MHz, 16MB RAM, 1GB HDD, and it ran the latest consumer OS of the day, Windows 95. Well, throw out everything I knew about the Program Manager. It took me a good month or more to figure out how to make icons on the desktop to launch DOS applications without resorting to the Start → Run → Browse dance.

After much rummaging through the Help, looking at various tutorials, I stumbled across it quite by accident — the right-click on the desktop, and noticing a menu item called “New”, with a curious item called “Shortcut”.

I found out some time later that yes, Windows 95 did in fact have a Program Manager, although the way Windows 95 renders minimised MDI windows meant it didn't have the old feel of the earlier user interface. I also later found out how to actually get at that Start menu, and re-arrange it to my liking.

My father’s box had seen a few changes too. His box moved from NT 3.1, to 3.5, to 3.51 and eventually 4.0, before the box got set aside and replaced by a dual Pentium PRO 200MHz with 64MB RAM.

Linux

It wasn't until my father was doing a post-grad IT degree at university that I got introduced to Linux. In particular, Red Hat Linux 4.0. Now, if people think Ubuntu is hard to use, my goodness, you're in for a shock.

This got tossed onto the old 486DX4/100MHz box, where I first came to experiment.

Want a GUI? Well, after configuring XFree86, type 'startx' at the prompt. I toiled with a few distributions; we had these compilation sets which came with Red Hat, Slackware and Debian (potato, I think). The first thing I noticed was the desktop: it sort of looked like Windows 95.

The window borders were different, but I instantly recognised the "Start" button. It was FVWM2 with the FVWMTaskBar module. My immediate reaction was "Hey, they copied that!", but then it was pointed out to me that this desktop environment was somewhat older than the early "Chicago" releases by at least a year.

The machines at the uni were slightly different again, these ones did have more Win95-ish borders on them. FVWM95.

What attracted me to this OS initially was the games. Not the modern first person shooters, but games like Xbilliard, Xpool, hextris, games that you just didn’t see on DOS. Even then, I discovered there were sometimes ports of the old favourites like DOOM.

The OS dance

The years that followed for me were an oscillation between Windows 3.1/PC DOS 7, Windows 95, Slackware Linux, Red Hat Linux, a little later on Mandrake Linux, Caldera OpenLinux, SuSE Linux, SCO OpenServer 5, and even OS/2.

Our choice was mainly versions of SuSE or Red Hat, as the computer retailer near us sold boxed sets of them. At the time our Internet connection was via a beloved 28.8kbps dial-up modem link with a charge of about $2/hr, so downloading distributions was simply out of the question.

During this time I became a lot more proficient with Linux, in particular when I used Slackware. I experimented with many different window managers including: twm, ctwm, fvwm, fvwm2, fvwm95, mwm, olvwm, pmwm (SCO’s WM), KDE 1.0 (as distributed with SuSE 5.3), Gnome + Enlightenment (as distributed with Red Hat 6.0), qvwm, WindowMaker, AfterStep.

I got familiar with the long-winded xconfigurator tool, and even got good at making an educated guess at modelines when I couldn't find the specs in the monitor documentation. In the early days it was also not just necessary to know what video card you had, but also what precise RAMDAC chip it had!

Over time I settled on KDE as the desktop of choice under Linux. KDE 1.0 had a lot of flexibility and ease of use that many of its contemporaries lacked. Gnome+Enlightenment looked alright at first, but the inability to change how the desktop looked without making your own themes bothered me; the point-and-click of KDE's control panel just suited me, as it was what I was used to in Windows 3.1 and 95. Not having to fuss around with the .fvwm2rc (or equivalent) was a nice change too. Even adding menu items to the K menu was easy.

One thing I had grown used to on Linux was how applications install themselves in the menu in a logical manner. Games got stashed under Games, utilities under Utilities, internet stuff under Internet, office productivity tools under Office. Every Linux install I had, the menu was neatly organised. Even the out-of-the-box FVWM configuration had some logical structure to it.

As a result, whenever I did use Windows on my desktop, a good amount of time was spent re-arranging the Start menu to make the menu more logical. Many a time I’d open the Start menu on someone else’s computer, and it’d just spew its guts out right across the screen, because every application thinks itself deserving of a top-level place below “Programs”.

This was a hang-over from the days of Windows 3.1. The MDI-style interface that was Program Manager couldn't manage anything other than program groups at the top level, with program items below that. Add to this a misguided belief that their product was more important than anyone else's, and application vendors got used to this and just repeated the status quo when Windows 95/NT4 turned up.

This was made worse if someone installed Internet Explorer 4.0. It invaded like a cancer. Okay now your screenful of Start menu didn’t spew out across the screen, it just crammed itself into a single column with tiny little arrows on the top and bottom to scroll past the program groups one by one.

Windows 95 Rev C even came with IE4; however, there was one trick. If you left the Windows 95 CD in the drive on the first boot, it'd pop up a box telling you that the install was not done; you'd click that away and IE4 Setup would do its damage. Eject the CD first, and you were left with pristine Windows 95. Then when IE5 came around, it could safely be installed without it infecting everything.

Windows 2000

I never got hold of Windows 98 for my desktop, but at some point towards the very end of last century, I got my hands on a copy of Windows 2000 Release Candidate 2. My desktop was still a Pentium 133MHz, although it had 64MB RAM now and a few more GB of disk space.

I loaded it on, and it surprised me just how quick the machine seemed to run. It felt faster than Windows 95. That said, it wasn’t all smooth sailing. Where was Network Neighbourhood? That was a nice feature of Windows 95. Ohh no, we have this thing called “My Network Places” now. I figured out how to kludge my own with the quick-launch, but it wasn’t as nice since applications’ file dialogues knew nothing of it.

The other shift was that Internet Explorer still lurked below the surface, and unlike Windows 95, there was no getting rid of it. My time using Linux had made me a Netscape user, so for me it was unnecessary bloat. Windows 2000 did similar Start Menu tricks, including "hiding" applications that it thought I didn't use very often.

If it’s one thing that irritates me, it’s a computer hiding something from me arbitrarily.

Despite this, it didn't take me as long to adapt to it as the move from Windows 3.1 to 95 did, as the UI was still much the same. A few tweaks here and there.

In late 2001, my dinky old Pentium box got replaced. In fact, we replaced both our desktops with two new dual Pentium III 1GHz boxes with 512MB RAM. My father's was the first, with an nVidia Riva TNT2 32MB video card and a CD-ROM drive. Mine came in December as a Christmas/18th birthday present, with an ATI Radeon 7000 64MB video card and a DVD drive.

I was to run the old version of Windows NT 4.0 we had. Fun ensued with Windows NT not knowing anything about >8GB HDDs, but Service Pack 6a sorted that out, and the machine ran. I got Linux on there as well (SuSE initially) and apart from the need to distribute ~20GB of data between many FAT16 partitions of about 2GB each (Linux at this time couldn’t write NTFS), it worked. I had drive letters A through to O all occupied.

We celebrated by watching a DVD for the first time (it was our first DVD player in the house).

NT 4 wasn't too bad to use, it was more like Windows 95, and I quickly settled into it. That said, its tenure was short-lived. The moment anything went wrong with the installation, I found I was right back to square one, as the Emergency Repair Disk did not recognise the 40GB HDD. I rustled up that old copy of Windows 2000 RC2 and found it worked okay, but it wouldn't accept the Windows 2000 drivers for the video card. So I nicked my father's copy of Windows 2000 and ran that for a little while.

Windows XP was newly released, and so I did enquire about a student-license upgrade, but as I was only a high-school student, Microsoft's resellers would have none of that. Eventually we bought a new "Linux box" with an OEM copy of Windows 2000 with SP2, and I used that. All legal again, and everything worked.

At this point, I was dual-booting Linux and Windows 2000. Just before the move to ADSL, I had about 800 hours to use up (our dial-up account was one that accumulated unused hours) and so I went on a big download-spree. Slackware 8.0 was one of the downloaded ISOs, and so out went SuSE (which I was running at the time) and in went Slackware.

Suddenly I felt right at home. Things had changed a little, and I even had KDE there, but I felt more in control of my computer than I had in a long while. In addition to using an OS that just lets you do your thing, I had also returned from my OS travels, having gained an understanding of how this stuff works.

I came to realise that point-and-click UIs are fine when they work, hell when they don’t. When they work, any dummy can use them. When they don’t, they cry for the non-dummies to come and sort them out. Sometimes we can, sometimes it’s just reload, wash-rinse-repeat.

Never was this brought home to me more than when we got hold of a copy of Red Hat 8.0. I tried it for a while, but was immediately confronted by the inability to play the MP3s that I had acquired (mostly via the sneakernet). Ogg/Vorbis was in its infancy, and I noticed that at the time there didn't seem to be any song metadata such as what ID3 tags provided, or at least XMMS didn't show it.

A bit of time back on Slackware had taught me how to download sources, read the INSTALL file and compile things myself. So I just did what I always did. Over time I ran afoul of the Red Hat Package Manager, and found myself going in circles through RPM dependency hell.

On top of this, there was now the need to man-handle the configuration tools, which expected things to be set up the way the distribution packagers intended.

Urgh, back to Slackware I go. I downloaded Slackware 9.0 and stayed with that a while. Eventually I really did go my own way with Linux From Scratch, which was good, but a chore.

These days I use Gentoo, and while I do have my fights with Portage (ohh slot-conflict, how I love you!!!), it does usually let me do what I want.

A time for experimentation and learning

During this time I was mostly a pure KDE user. KDE 2.0, then 3.0. I was learning all sorts of tricks reading HOWTO guides on the Linux Documentation Project. I knew absolutely no one around me that used Linux, in fact on Linux matters, I was the local “expert”. Where my peers (in 2002) might have seen it once or twice, I had been using it since 1996.

I had acquired some more computers by this time, and I was experimenting with setting up dial-up routers with proxy servers (Squid, then IP Masquerade), turning my old 386 into a dumb terminal with XDMCP, and getting the SCO OpenServer box (our old 486DX4/100MHz) to interoperate with the rest.

The ability of the Windows boxes to play along steadily improved over this time, from ethernet frames that passed like ships in the night (circa 1996; NetBEUI and IPX/SPX on the Windows 3.1 side, TCP/IP on Linux) through to begrudging communications over TCP/IP with newer releases of Windows.

Andrew Tridgell’s SAMBA package made its debut in my experimentation, and suddenly Windows actually started to talk sensible things to the Linux boxes and vice versa.

Over time the ability of Linux machines and Windows boxes to interoperate has improved, each year climbing another layer of the OSI model. I recall some time in 1998 getting hold of an office suite called ApplixWare, but in general when I wanted word processing I turned to Netscape Composer and Xpaint as my nearest equivalents.

It wasn’t until 2000 or so that I got hold of StarOffice, and finally had an office suite that could work on Windows, Linux and OS/2 that was comparable to what I was using at school (Microsoft Office 97).

In 2002 I acquired an old Pentium 120MHz laptop, and promptly loaded that with Slackware 8 and OpenOffice 1.0. KDE 3.0 chugged with 48MB RAM, but one thing the machine did well was suspend and resume. A little while later we discovered eBay and upgraded to a second-hand Pentium II 266MHz, a machine that served me well into the following year.

For high-school work, this machine was fine. OpenOffice served the task well, and I was quite proficient at using Linux and KDE. I was even a trend-setter… listening to MP3s on the 15GB HDD a good year before the invention of the iPod.

Up to this point, it is worth mentioning that in the Microsoft world the UI hadn't changed all that much between Windows 95 and 2000/ME. Network Neighbourhood was probably the change I noticed the most. At this time I was unusual amongst my peers in that I had more than one computer at home, and they all talked to each other. Hence Windows 95/98 through to 2000/ME didn't create such an uproar.

What people DID notice was how poorly Windows ME (and the first release of 98) performed “under the hood”. More so for the latter than the former.

Windows XP

Of course I did mention trying to get hold of Windows XP earlier. It wasn't until later in 2002, when the school was replacing a lab of old AMD K6 machines with brand new boxes (and their old Windows NT 4 server with a Novell one), that I got to see Windows XP up close.

The boxes were actually meant to run Windows 2000, but IBM had just sent us the boxes preloaded with XP, and we were to set up Zenworks to image them with Windows 2000. This was during my school holidays and I was assisting in the transition. So I fired up a box and had a poke around.

Up came the initial set up wizard, with the music playing, animations, and a little question mark character which at first did its silly little dance telling you how it was there to help you. Okay, if I had never used Windows before, I’d be probably thankful this was there, but this just felt like a re-hash of that sodding paperclip. At least it did eventually shut up and sit in the corner where I could ignore it. That wasn’t the end of it though.

Set up the user account, logged in, and bam, another bubble telling me to take the tour. In fact, to this day that bubble is one of the most annoying things about Windows XP because 11 years on, on a new install it insists on bugging you for the next 3 log-ins as if you’ve never used the OS before!

The machines had reasonable 17″ CRT monitors; the first thing I noticed was just how much extra space was taken up with the nice rounded corners and the absence of the My Computer and My Network Places icons on the desktop. No, these were in the Start menu now. Where are all the applications? Hidden under All Programs, of course.

Hidden, and not even sorted in any logical order, so if you've got a lot of programs it will take you a while to find the one you want, and even longer to find out it's not actually installed.

I took a squiz at the control panel. Now, the control panel hadn't changed all that much since Windows 3.1 days. It was still basically a window full of icons in Windows 2000/ME, albeit since Windows 95 it had used Explorer to render itself rather than a dedicated application.

Do we follow the tradition so that old hands can successfully guide the novices? No, we’ll throw everyone in the dark with this Category View nonsense! Do the categories actually help the novices? Well for some tasks, maybe, but for anything advanced, most definitely not!

Look and feel? Well, if you want to go back to the way Windows used to look, select the Classic theme and knock yourself out. Otherwise, you've got the choice of 3 different styles, all hard-coded. Of course somewhere you can get additional themes; I never did figure out where, but it's Gnome 1.0-style visual inflexibility all over again unless you're able to hack the theme files yourself.

No thank-you, I’ll stick with the less pixel-wasting Classic theme if you don’t mind!

Meanwhile in Open Source land

As we know, it was a long break between releases of Windows XP, and over the coming years we heard much hype about what was to become Vista. For years to come though, I’d be seeing Windows XP everywhere.

My university workhorses (I had a few) all exclusively ran Linux, however. If I needed Windows there was a plethora of boxes at uni to use, and most of the machines I had were incapable of running anything newer than Windows 2000.

I was now more proficient in front of a Linux machine than any version of Windows. During this period I used KDE most of the time. Gnome 2.0 was released and I gave it a try, but it didn't really grab me. One day I recall accidentally breaking the KDE installation on my laptop. Needing a desktop, I just reached for whatever I had, and found XFCE3.

I ran XFCE for about a month or two. I don't recall exactly what brought me back to KDE; perhaps the idea of a dock for launching applications didn't grab me. AfterStep, after all, did something similar.

In 2003, one eBay purchase landed us with a Cobalt Qube2 clone, a Gateway Microserver. Experimenting with it, I managed to brick the (ancient) OS, and turned the thing into a lightish door stop. I had become accustomed to commands like `uname` which could tell me the CPU type amongst other things.

I was used to seeing i386, i486, i586 and i686, but this thing came back with 'mips'. What's this? I did some research and found that there was an entire Linux port for MIPS. I also found some notes on bootstrapping an SGI Indy. Well, this thing isn't an Indy, but maybe the instructions have something going for them. I toiled, but didn't get far…

Figuring it might be an idea to actually try these instructions on an Indy, we hit up eBay again, and after a few bids we were the proud owners of a used SGI Indy R4600 133MHz with 256MB RAM running IRIX 6.5. I toiled a bit with IRIX; the 4DWM seemed okay to use, but certain parts of the OS were broken. Sound never worked, and there was a port of Doom, but it'd run for about 10 seconds then die.

We managed to get some of the disc set for IRIX, but never did manage to get the Foundation discs needed to install. My research, however, led me to the Debian/MIPS port. I followed the instructions, installed it, and hey presto, Linux on an SGI box, and almost everything worked. VINO (the video capture interface) was amongst the things that didn't at the time, but never mind. Sound was one of the things that did, and my goodness, does it sound good for a machine of that vintage!

Needless to say, the IRIX install was history. I still have copies of IRIX 6.5.30 stashed in my archives, lacking the foundation discs. The IRIX install didn’t last very long, so I can’t really give much of a critique of the UI. I didn’t have removable media so didn’t get to try the automounting feature. The shut down procedure was a nice touch, just tap the OFF button, the computer does the rest. The interface otherwise looked a bit like MWM. The machine however was infinitely more useful to me running Linux than it ever was under IRIX.

As I toiled with Debian/MIPS on the Indy, I discovered there was a port of this for the Qube2. Some downloads later and suddenly the useless doorstop was a useful server again.

Debian was a new experience for me; I quite liked APT. The version I installed was evidently the unstable release, so it had modern software. Liking this, I tried it on one of the other machines, and was met with… Debian Stab^Hle. Urgh… at the time I didn't know enough about the releases, and on my own desktop I was already using Linux From Scratch by this time.

I was considering making my own distribution that would target the Indy, amongst other systems. I was already formulating ideas, and at one point I had a mishmash of about 3 different distributions on my laptop.

Eventually I discovered Gentoo, along with its MIPS port. Okay, not as much freedom as LFS, but very close to it. In fact, it gives you the same freedom if you can be arsed to write your own portage tree. One by one the machines got moved over, and that's what I've used since.

The primary desktop environment was KDE for most of them. Build times for this varied, but for most of my machines it was an overnight build. Especially for the Indy. Once installed though, it worked quite well. It took a little longer to start on the older machines, but was still mostly workable.

Up to this point, I had my Linux desktop set up just the way I liked it. Over the years the placement of desktop widgets and panels has moved around as I borrowed ideas I had seen elsewhere. KDE was good in that it was flexible enough for me to fundamentally change many aspects of the interface.

My keybindings were set up to be able to perform most window operations without the need of a mouse (useful when juggling the laptop in one’s hands whilst figuring out where the next lecture was), notification icons and the virtual desktop pager were placed to the side for easy access. The launcher and task bar moved around.

Initially down the bottom, it eventually wound up on the top of the screen much like OS/2 Warp 4, as that’s also where the menu bar appears for applications — up the top of the window. Thus minimum mouse movement. Even today, the Windows 7 desktop at work has the task bar up the top.

One thing that frustrated me with Windows at the time was the complete inability to change many aspects of the UI. Yes, you could move the task bar around and add panels to it, but if you wanted to use some other keystroke to close a window? Tough. ALT-F4 is it. Want to bring up the menu items? Hit the logo key, or failing that, CTRL-ESC. Want to maximise a window? Either hit ALT-Space, then use the arrows to hit Maximise, or start reaching for the rodent. A far cry from just pressing Logo-Shift-C or Logo-Shift-X.
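
For comparison, the FVWM bindings I keep referring to are only a couple of lines appended to the config; something roughly like this, from memory, so treat it as a sketch rather than my exact setup:

    cat >> ~/.fvwm/config <<'EOF'
    # Logo (mod4) + Shift + C closes the focused window, Logo + Shift + X maximises it
    Key C A 4S Close
    Key X A 4S Maximize 100 100
    EOF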

Ohh, and virtual desktops? Well, people have implemented crude hacks to achieve something like it. In general, anything I’ve used has felt like a bolt-on addition rather than a seamless integration.

I recall commenting about this, and someone pointing out this funny thing called "standardisation". Yet I seem to recall the P in PC standing for personal, i.e. this is my personal computer, no one else uses it, thus it should work the way I choose it to. Not what some graphic designer in Redmond or Cupertino thinks!

The moment you talk about standardisation or pining for things like Group Policy objects, you’ve crossed the boundary that separates a personal computer from a workstation.

Windows Vista

Eventually, after much fanfare, Microsoft did cough up a new OS. And cough up would be about the right description for it. It was behind schedule, and as a result, saw many cut backs. The fanciful new WinFS? Gone. Palladium? Well probably a good thing that did go, although I think I hear its echoes in Secure Boot.

What was delivered, is widely considered today a disaster. And it came at just the wrong time. Just as the market for low-end “netbook” computers exploded, just the sort of machine that Windows Vista runs the worst on.

Back in the day Microsoft recommended 8MB RAM for Windows 95 (and I can assure you it will even run in 4MB), but no one in their right mind would tolerate the constant rattle from the paging disk. The same could be said for Windows NT's requirement of 12MB RAM and a 486. Consumers soon learned that "Windows Vista Basic ready" was a warning label to steer clear of, or to insist on the machine coming with Windows XP.

A new security feature, UAC, ended up making more of a nuisance of itself, causing people to do the knee-jerk reaction of shooting the messenger.

The new Aero interface wastes even more screen pixels than the “Luna” interface of Windows XP. And GPU cycles to boot. The only good thing about it was that the GPU did all the hard work putting windows on the screen. It looked pretty, when the CPU wasn’t overloaded, but otherwise the CPU had trouble keeping up and the whole effect was lost. Exactly what productivity gains one has by having a window do three somersaults before landing in the task bar on minimise is lost on me.

Windows Vista was the last release that could do the old Windows 95 style start menu. The newer Vista one was even more painful than the one in XP. The All Programs sub-menu opened out much like previous editions did (complete with the annoying “compress myself into a single column”). In Vista, this menu was now entrapped inside this small scrolling window.

Most of Vista's problems were below the surface. Admittedly, Service Pack 1 fixed a lot of them, but by then it was already too late. No one wanted to know.

Even with the service packs, it still didn't perform up to par on the netbooks that were common for the period. The original netbook, for what it's worth, was never intended to run any version of Windows; the entire concept came out of the One Laptop Per Child project, which was always going to be Linux-based.

Asus developed the EeePC as one of the early candidates for the OLPC project. When another design got selected, Asus simply beefed up the spec, loaded on Xandros and pushed it out the door. Later models came with Windows XP, and soon other manufacturers pitched in. This was a form factor with specs that ran Windows XP well; unfortunately Vista's Aero interface was too much for the integrated graphics typically installed, and the memory requirements had the disk drive rattling constantly, sapping the machine of valuable kilojoules when running from the battery.

As to my run-in with Vista? For my birthday one year I was given a new laptop. This machine came pre-loaded with it, and of course, the very first task I did was spend a good few hours making the recovery discs then uninstalling every piece of imaginable crap that manufacturers insist on prebloating their machines with.

For what I needed to do, I actually needed Linux to run. The applications I use and depend on for university work, whilst compatible with Windows, run as second-class citizens due to their Unix heritage. Packages like The Gimp, gEDA, LaTeX and git, to name a few, never quite run as effortlessly on Windows. The Gimp had a noticeable lag when using my Wacom tablet, something that put my handwriting way off.

Linux ran on it, but with no support for the video card, GUI related tasks were quite choppy. In the end, it proved to be little use to me. My father at the time was struggling along with his now aging laptop using applications and hardware that did not support Windows Vista. I found a way to exorcise Windows Vista from the machine, putting Windows XP in its place.

The bloat becomes infectious

What was not lost on me, was that each new iteration of full desktops like KDE brought in more dependencies. During my latter years at University, I bought myself a little netbook. I was doing work for Gentoo/MIPS with their port of Linux, and thus a small machine that would run what I needed for university, and could serve as a test machine during my long trips between The Gap and Laidley (where I was doing work experience for Eze Corp) would go down nicely. So I fired off an email and a telegraphic money transfer over to Lemote in China, and on the doorstep turned up a Yeeloong netbook.

I dual booted Debian and Gentoo on this machine, in fact I still do. Just prior to buying this machine, I was limping along with an old Pentium II 300MHz laptop. I did have a Pentium 4M laptop, but a combination of clumsiness and age slowly caused the machine’s demise. Eventually it failed completely, and so I just had to make do with the PII which had been an earlier workhorse.

One thing though: KDE 3.0 was fine on this laptop. Even 3.5 was okay. But when you've only got 300MHz of CPU and 160MB RAM, the modern KDE releases were just a bit too much. Parts of KDE were okay, but the main desktop chugged along. Looking around, I needed a workable desktop, so I installed FVWM. I found the lack of a system tray annoyed me, so in went stalonetray. Then came maintaining the menu. Well, modern FVWM comes with a Perl script that automates this, so in that went.

Finally, a few visual touches, a desktop background loader, some keybinding experiments, and I pretty much had what KDE gave me, but starting in a fraction of the time and building much faster. When the Yeeloong turned up and I got Gentoo onto it, this FVWM configuration was the first thing to be installed, and so I had a sensible desktop there too.
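
On Gentoo, the whole lightweight setup boils down to very little. A rough sketch (the package atoms and the menu-generation line are from memory, so double-check them before relying on it):

    # install the window manager and the standalone system tray
    emerge --ask x11-wm/fvwm x11-misc/stalonetray
    # start the tray with FVWM and let the bundled script build the application menu
    cat >> ~/.fvwm/config <<'EOF'
    AddToFunc StartFunction
    + I Exec exec stalonetray
    PipeRead 'fvwm-menu-desktop'
    EOF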

Eventually I did get KDE 4 working on the Yeeloong, sort of. It was glitchy on MIPS. KDE 3.5 used to work without issue but 4.0 never ran quite right. I found myself using FVWM with just the bits of KDE that worked.

As time went on, university finished, and the part-time industrial experience became full-time work. My work at the time revolved around devices that needed Windows and a parallel port to program them. We had another spare P4 laptop, so grabbed that, tweaked Windows XP on there to my liking, and got to work. The P4 “lived” at Laidley and was my workstation of sorts, the Yeeloong came with me to and from there. Eventually that work finished, and through the connections I came to another company (Jacques Electronics). In the new position, it was Linux development on ARM.

The Windows installation wasn’t so useful any more. So in went a copy of the gPartED LiveCD, told Windows to shove, followed by a Gentoo disc and a Stage 3 tarball. For a while my desktop was just the Linux command line, then I got X and FVWM going, finally as I worked, KDE.

I was able to configure KDE once again, and on i686 hardware, it ran as it should. It felt like home, so it stayed. Over time the position at Jacques finished up, I came to work at VRT where I am to this day. The P4 machine stayed at the workplace, with the netbook being my personal machine away from work.

It’s worth pointing out that at this point, although Windows 7 had been around for some time, I was yet to actually come to use it first hand.

My first Apple

My initial time at VRT was spent working on a Python-based application to ferry metering data from various energy meters to various proprietary systems. The end result was a package called Metermaster that slotted in alongside MacroView, and it forms one of the core components in VRT's Wages Hub system. While MacroView can run on Windows, and does for some (even Cygwin), VRT mainly deploys it on Ubuntu Linux. So my first project was all Linux based.

During this time, my new work colleagues were assessing my skills, and were looking at what I could work on next. One of the discussions revolved around working on some of their 3D modelling work using Unity3D. Unity3D at the time ran on just two desktop OSes: Windows and MacOS X.

My aging P4 laptop had an nVidia GeForce 420Go video device with 32MB memory. In short, if I hit that thing with a modern 3D games engine, it'd most likely crap itself. So I was up for a newer laptop. That got me thinking: did I want to try and face Windows again, or did I want to try something new?

MacOS was something I had only had fleeting contact with; MacOS X I had actually never used myself. I knew a bit about it, such as it being based on the FreeBSD userland and the Mach microkernel. I saw a 2008-model MacBook with a 256MB video device built in going cheap, so I figured I'd take the plunge.

My first impressions of MacOS X 10.5 were okay. I had a few technical glitches at first; namely, MacOS X would sometimes hang during boot, showing nothing more than an Apple logo and a swirling icon. Some updates refused to download: they'd just hang and the time estimate would blow out. Worst of all, the downloads wouldn't resume, they'd just start again from the beginning.

In the end I wandered down to the NextByte store in the city and bought a copy of MacOS X 10.6. I bought the disc on April 1st, 2011, and it's the one and only disc the DVD drive in the MacBook won't accept. The day I bought it I was waiting at the bus stop and figured I'd have a look and see what docs there were. I put the disc in, heard a few noises, and it spat the disc out again. So I put it back in, and out it came. Figuring this was a defective disc, I put the disc back in and marched back down to the shop, receipt in one hand, cantankerous laptop in the other. So much for Apple kit "just working".

Then the laptop decided it liked the pussy cat disc so much it wouldn't give it back! Cue about 10 minutes in the service bay getting the disc to eject. Finally the machine relented and spat the disc out. That night it tried the same tricks, so I grabbed an external DVD drive and did the install that way. Apart from this, OS X 10.6 has given me no problems in that regard.

As for the interface? I noticed a few features that I appreciated from KDE, such as the ability to modify some of the standard shortcuts, although not all of them. Virtual desktops are called Spaces in MacOS X, but it's essentially the same deal.

My first problem was identifying what the symbols on the key shortcuts meant. Command and Shift were simple enough, but the symbol used to denote “Option” was not intuitive, and I can see some people getting confused for the one for Control. That said, once I found where the Terminal lived, I was right at home.

File browsing? Much like what I’m used to elsewhere. Stick a disc in, and it appears on the desktop. But then to eject? The keyboard eject button didn’t seem to work. Then I remembered a sarcastic comment one of my uncles made about using a Macintosh, requiring you to “throw your expensive software in the bin to eject”. So click the CD icon, drag to the rubbish bin icon, voilà, out it comes.

Apple's applications have always put the menu bar of the application right up the top of the screen. I found this somewhat awkward when working with multiple applications, since you find yourself clicking on (or command-tabbing over to) one window, accessing the menu there, then clicking (or command-tabbing) over to the other and accessing the menu up the top of the screen.

Switching applications with Command-Tab works by swapping between completely separate applications. Okay if you’re working with two completely separate applications, not so great if you’re working on many instances of the same application. Exposé works, probably works quite well if the windows are visually distinct when zoomed out, but if they look similar, one is reminded of Forrest Gump: “Life’s like a box of chocolates, you never know what you’re gonna get!”

The situation is better if you hit Command-Tab, then press the down arrow, which gives you an Exposé of just the windows belonging to that application. Still a far cry from hitting Alt-Tab in FVWM to bring up the Window List and just cycling through. Switching between MacVim instances was a pain.

As for the fancy animations: Exposé looks good, but when the CPU's busy (and I do give it a hiding), the animation just creeps along at a snail's pace. I'll tolerate it if it's over and done with within a second, but when it takes 10 seconds to slowly zoom out, I'm left sitting there going "Just get ON with it!" I'd be fine if it just skipped the animation and switched from normal view to Exposé in a single frame. Unfortunately there's nowhere to turn this off that I've found.

The dock works to an extent. It suffers a bit if you have a lot of applications running all at once, there’s only so much screen real-estate. A nice feature though is in the way it auto-hides and zooms.

When the mouse cursor is away from the dock, it drops off the edge of the screen. Since the user configures this and sets which edge it clings to, this is a reasonable option. As the mouse is brought near the space where the dock resides, it slowly pops out to meet the cursor. Not straight away, but progressively, as the cursor gets closer.

When fully extended, the icons nearest the cursor enlarge, allowing the rest to remain visible without occupying too much screen real-estate. The user is free to move the cursor amongst them, the ones closest zooming in, the ones furthest away zooming out. Moving the cursor away causes the dock to slip away again.

Linux on the MacBook

And it had to happen: eventually Linux did wind up on there. Again KDE initially, but I again found that KDE was just getting too bloated for my liking. It took about 6 months of running KDE before I started looking at other options.

FVWM was of course where I turned to first, in fact, it was what I used before KDE was finished compiling. I came to the realisation that I was mostly just using windows full-screen. So I thought, what about a tiling window manager?

Looking at a couple, I settled on Awesome. At first I tried it for a bit, didn’t like it, reverted straight back to FVWM. But then I gave it a second try.

Awesome works okay; it's perhaps not the most attractive to look at, but it's functional. At the end of the day looks aren't what matter, it's functionality. Awesome was promising in that it uses Lua for its configuration. It had a lot of the modern window manager features for interacting with today's X11 applications. I did some reading of the handbook, did some tweaking of the configuration file and soon had a workable desktop.

The default keybindings were actually a lot like what I already used, so that was a plus. In fact, it worked pretty well. Where it let me down was in window placement; in particular, floating windows and dividing the screen.

Awesome of course works with a number of canned window layouts. It can make a window full screen (hiding the Awesome bar) or near full-screen, or show two windows above/below each other or side by side. Windows are given numerical tags which cause them to appear whenever a particular tag is selected, much like virtual desktops, only multiple tags can be active on a screen.

What irritated me most was trying to find a layout scheme that worked for me. I couldn’t seem to re-arrange the windows in the layout, and so if Awesome decided to plonk a window in a spot, I was stuck with it there. Or I could try cycling through the layouts to see if one of the others was better. I spent much energy arguing with it.

Floating windows were another hassle. Okay, modal dialogues need to float, but there was no way to manually override the floating status of a window. The Gimp was one prime example. Okay, you can tell it to not float its windows, but it still took some jiggling to get each window to sit where you wanted it. And not all applications give you that luxury.

Right now I’m back with the old faithful, FVWM.

FVWM

(screenshot: FVWM, as I have it set up on Gentoo)

Windows 7

When one of my predecessors at VRT left to work for a financial firm down in Sydney, I wound up inheriting his old projects, and the laptop he used to develop them on. The machine dual-boots Ubuntu (with KDE) and Windows 7, and seeing as I already have the MacBook set up as I want it, I use that as my main workstation and leave the other booted into Windows 7 for those Windows-based tasks.

Windows 7 is much like Windows Vista in the UI. Behind the scenes, it runs a lot better. People aren't kidding when they say Windows 7 is "Vista done right". However, recall I mentioned Windows Vista being the last release able to do the classic Start menu? Maybe I'm dense, but I'm yet to spot the option in Windows 7. It isn't where they put it in Windows XP or Vista.

So I’m stuck with a Start menu that crams itself into a small bundle in one corner of the screen. Aero has been turned off in favour of a plain “classic” desktop. I have the task bar up the top of the screen.

One new feature of Windows 7 is that the buttons of running applications by default only show the icon of the application. Clicking that reveals tiny wee screenshots with wee tiny title text. More than once I’ve been using a colleague’s computer, he’ll have four spreadsheets open, I’ll click the icon to switch to one of them, and neither of us can figure out which one we want.

Thankfully you can tell it to not group the icons, showing a full title next to the icon on the task bar, but it’s an annoying default.

Being able to hit Logo-Left or Logo-Right to tile a window on the screen is nice, but I find more often than not I wind up hitting that when I mean to hit one of the other meta keys, and thus I have to reach for the rodent and maximise the window again. This is more to do with the key layout of the laptop than Windows 7, but it's Windows 7's behaviour, and the inability to configure it, that exacerbates the problem.

The new Start menu, I'd wager, is why Microsoft saw so many people pinning applications to the task bar. It's not just for quick access; in some cases it's the only bleeding hope they'd ever have of finding their application again! Sure, you can type the name of the application, but circumstance doesn't always favour that option. Great if you know exactly what the program is called, not so great if it's someone else's computer and you need to know if something is even there.

Thankfully most of the effects can be turned off, and so I'm left with a mostly Spartan desktop that just gets the job done. I liken using Windows to a business trip abroad: you're not there for pleasure, and there's nothing quite like home sweet home.

Windows 8

Now, I get to this latest instalment of desktop Operating Systems. I have yet to actually use it myself, but looking at a few screenshots, a few thoughts:

  • “Modern”: apart from being a silly name for a UI paradigm (what do you call it when it isn’t so “modern” anymore?), looks like it could really work well on the small screen. However, it relies quite heavily on gestures and keystrokes to navigate. All very well if you set these up to suit how you operate yourself, but not so great when forced upon you.
  • Different situations will call for different interface methods. Sometimes it is convenient to reach out and touch the screen, other times it’ll be easier to grab the rodent, other times it’ll be better to use the keyboard. Someone should be able to achieve most tasks (within reason) with any of the above, and seamlessly swap between these input methods as need arises.
  • “Charms” and “magic corners” makes the desktop sound like it belongs on the set of a Harry Potter film
  • Hidden menus that jump out only when you hit the relevant corner or edge of the screen by default without warning will likely startle and confuse
  • A single flat hierarchy of icons^Wtiles for all one’s applications? Are we back to MS-DOS Executive again?
  • “Press the logo key to access the Start screen”, and so what happens if the keyboard isn’t in convenient reach but the mouse is?
  • In a world where laptops are out-numbering desktops and monitors are getting wider faster than they’re getting taller, are extra-high ribbons really superior to drop-down menus for anyone other than touch users?

Apparently there's a tutorial when you start up Windows 8 for the first time. Comments have been made about how people have been completely lost working with the UI until they saw this tutorial. That should be a clue at least. Keystrokes are really just a shortened form of command line. Even the Windows 7 Start menu, with its search bar, is looking more like a stylised command line (albeit one with minimal capability).

Are we really back to typing commands into pseudo command line interfaces?

The Command line: what is old is new again

I recall the ad campaigns for Windows 7: on billboards, some attractive woman posing with the caption "I'm a PC and Windows 7 was my idea".

Mmm mmm, so whose idea was Windows 8 then? There are no rounded rectangles to be seen, so clearly not Apple's. I guess how well it goes remains to be seen.

It apparently has some good improvements behind the scenes, but anecdotal evidence at the workplace suggests that the ability to co-operate with a Samba 3.5-based Windows Domain is not among them. One colleague recently bought herself a new ultrabook running Windows 8.

I'm guessing sooner or later I'll be asked to assist with setting up the Cisco VPN client and links to the file shares. Another colleague, despite getting the machine to connect to the office WiFi successfully, couldn't manage to bring up a login prompt to connect to the file server; the machine instead just assumed the local username and password matched the credentials to be used on the remote server. I will have to wait and see.

Where to now?

Well I guess I’m going to stick with FVWM a bit longer, or maybe pull my finger out and go my own way. I think Linus has a point when he describes KDE as a bit “cartoony”. Animations make something easy to sell, but at the end of the day, it actually has to do the work. Some effects can add value to day-to-day tasks, but most of what I’ve seen over the years doesn’t seem to add much at all.

User interfaces are not one-size-fits-all. Never have been. Touch screen interfaces have to deal with problems like fat fingers, and so there’s a balancing act between how big to make controls and how much to fit on a screen. Keyboard interfaces require a decent area for a keypad, and in the case of standard computer keyboards, ideally, two hands free. Mice work for selecting individual objects, object groups and basic gestures, but make a poor choice for entering large amounts of data into a field.

For some, physical disability can make some interfaces a complete no-go. I’m not sure how I’d go trying to use a mouse or touch screen if I lost my eyesight for example. I have no idea how someone minus arms would go with a tablet — if you think fat fingers is a problem, think about toes! I’d imagine the screens on those devices often would be too small to read when using such a device with your feet, unless you happen to have very short legs.

Even for those who have full physical ability, there are times when one input method will be more appropriate at a given time than another. Forcing one upon a user is just not on.

Hiding information from a user has to be carefully considered. One of my pet peeves is when you can't see some feature on screen because it is hidden from view. It is one thing if you yourself set up the computer to hide something, but quite another when it happens by default. Having a small screen area that activates and reveals a panel is fine, if the area is big enough and there is some warning that the panel is about to fly out.

As for organising applications? I’ve never liked the way everything just gets piled into the “Programs” directory of the Start Menu in Windows. It is just an utter mess. MacOS X isn’t much better.

The way things are in Linux might take someone a little discovery to find where an application has been put, but once discovered, it’s then just a memory exercise to get at it, or shortcuts can be created. Much better than hunting through a screen-full of unsorted applications.

Maybe Microsoft can improve on this with their Windows Store, if they can tempt a few application makers from the lucrative iOS and Android markets.

One thing is clear, the computer is a tool, and as such, must be able to be adapted for how the user needs to use that tool at any particular time for it to maintain maximum utility.

January 31, 2013
LinuxCrazy Podcasts a.k.a. linuxcrazy (homepage, bugs)
Podcast 96 OpenRC | SystemD | Pulseaudio (January 31, 2013, 22:38 UTC)

LC

In this podcast, comprookie talks about Gentoo and the OpenRC, udev, SystemD debate, his slacking abilities and so much less ...

Links

SystemD
http://www.freedesktop.org/wiki/Software/systemd
http://0pointer.de/blog/projects/the-biggest-myths.html
OpenRC
http://www.gentoo.org/proj/en/base/openrc/
eudev
http://www.gentoo.org/proj/en/eudev/
Gentoo udev
http://wiki.gentoo.org/wiki/Udev

Download

ogg

Markos Chandras a.k.a. hwoarang (homepage, bugs)
What happened to all the mentors? (January 31, 2013, 19:07 UTC)

I had this post in the Drafts for a while, but now it’s time to publish it since the situation does not seem to be improving at all.

As you probably know, if you want to become a Gentoo developer, you need to find yourself a mentor[1]. This used to be easy. I mean, all you had to do was contact the teams you were interested in contributing to as a developer, and one of the team members would step up and help you with your quizzes. However, lately I find myself in the weird situation of having to become a mentor myself, because potential recruits come back to recruiters and say that they could not find someone from the teams to help them. This is sub-optimal for a couple of reasons. First of all, time constraints. Mentoring someone can take days, weeks or months. Recruiting someone after they have been trained (properly or not) can also take days, weeks or months. So somehow, I ended up spending twice as much time as I used to, and we are back to those good old days where someone needed to wait months before we could fully recruit them. Secondly, a mentor and a recruiter should be different people. This is necessary for recruits to get broader and more effective training, as different people will focus on different areas during this training period.

One may wonder why teams are not willing to spend time training new developers. I guess this is because training people takes quite a lot of someone’s time, and people tend to prefer fixing bugs and writing code to training people. Another reason could be that teams are short on manpower, so they are mostly busy with other stuff and just can’t do both at the same time. Others just don’t feel ready to become mentors, which is rather weird because every developer was once a mentee, so it’s not like they haven’t done something similar before. Truth is, this seems to be a vicious circle. No manpower to train people -> fewer people are trained -> not enough manpower in the teams.

In my opinion, getting more people on board is absolutely crucial for Gentoo. I strongly believe that people must spend time training new people because a) They could offload work to them ;) and b) it’s a bit sad to have quite a few interested and motivated people out there and not spend time to train them properly and get them on board. I sincerely hope this is a temporary situation and things will become better in the future.

ps: I will be at FOSDEM this weekend. If you are there and would like to discuss the Gentoo recruitment process or anything else, come and find me ;)

 

[1] http://www.gentoo.org/proj/en/devrel/handbook/handbook.xml?part=1&chap=2#doc_chap3

Marcus Hanwell a.k.a. cryos (homepage, bugs)
FOSDEM: Open Science and Open Chemistry (January 31, 2013, 15:14 UTC)

I will be talking about the Open Chemistry Project at FOSDEM this year in the FOSS for scientists devroom at 12:30pm on Saturday. I will discuss the development of a suite of tools for computational chemists and related disciplines, which includes the development of three desktop applications addressing 3D molecular structure editing, input preparation, output analysis, cheminformatics and integration with high-performance computing resources.

Open Chemistry

On Sunday at 3pm Bill Hoffman will be speaking in the main track about Open Science, Open Software, and Reproducible Code. Bill and Alexander Neundorf will also be talking about Modern CMake in the cross desktop devroom on Saturday.

FOSDEM is one of the first conferences I attended (possibly the first, I can't remember if I went to a science conference before this). It will be great to return after so many years, and hopefully meet old colleagues and a few new ones. Please find me, Bill or Alex if you would like to discuss any of this work with us. I fly out tomorrow, and hope to get over jet lag quickly. Once FOSDEM is over we will be visiting Kitware SAS in Lyon, France for a couple of days (this is my first trip to our new office).

Then I have a few days in England visiting friends and family before heading back to the US.

January 30, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: iO Tillett Wright: Fifty shades of gay (January 30, 2013, 23:01 UTC)

Since the TED player seems to skip the last few seconds, I’m linking to the TED talk page but embedding a version from YouTube:

January 29, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)

January 28, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
State of Chromium Open Source packages (January 28, 2013, 14:56 UTC)

Let me present an informal and unofficial state of Chromium Open Source packages as I see it. Note a possible bias: I'm a Chromium developer (and this post represents my views, not the projects'), and a Gentoo Linux developer (and Chromium package maintenance lead - this is a team effort, and the entire team deserves credit, especially for keeping stable and beta ebuilds up to date).

  1. Gentoo Linux - ships stable, beta and dev channels. Security updates are promptly pushed to stable. NaCl (NativeClient) is enabled, although pNaCl (Portable NaCl) is disabled. Up to 23 use_system_... gyp switches are enabled (depending on USE flags).
  2. Arch Linux - ships stable channel, promptly reacts to security updates. NaCl is enabled, following Gentoo closely - I consider that good, and I'm glad people find that code useful. :) 5 use_system_... gyp switches are enabled. A notable thing is that the PKGBUILD is one of the shortest and simplest among Chromium packages - this seems to follow from The Arch Way. There is also chromium-dev on AUR - it is more heavily based on the Gentoo package, and tracks the upstream dev channel. Uses 19 use_system_... gyp switches.
  3. FreeBSD / OpenBSD - ship stable channel, and are doing pretty well, especially when taking amount of BSD-specific patching into account. NaCl is disabled.
  4. ALT Linux - ships stable channel. NaCl seems to be disabled by default, I'm not sure what's actually shipped in compiled package. Uses 11 use_system_... gyp switches.
  5. Debian - ancient 6.x version in Squeeze, 22.x in sid at the time of this writing. This is two major milestones behind, and is missing security updates. Not recommended at this moment. :( If you are on Debian, my advice is to use Google Chrome, since official debs should work, and monitor state of the open source Chromium package. You can always return to it when it gets updated.
  6. Fedora - not in official repositories, but Tom "spot" Callaway has an unofficial repo. Note: currently the version in that repo is 23.x, one major version behind on stable. Tom wrote an article in 2009 called Chromium: Why it isn't in Fedora yet as a proper package, so there is definitely an interest to get it packaged for Fedora, which I appreciate. Many of the issues he wrote about are now fixed, and I hope to work on getting the remaining ones fixed. Please stay tuned!
This is not intended to be an exhaustive list. I'm aware of openSUSE packages, there seems to be something happening for Ubuntu, and I've heard of Slackware, Pardus, PCLinuxOS and CentOS packaging. I do not follow these closely enough though to provide a meaningful "review".

Some conclusions: different distros package Chromium differently. Pay attention to the packaging lag: with an upstream release cycle of about 6 weeks, and each major update being a security one, this matters. Support for NativeClient is another point. There are extensions and Web Store apps that use it, and as more and more sites start to use it, this will become increasingly important. It is also interesting why some distros use bundled libraries even though upstream provides an option to use a system library that is known to work on other distros.
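For reference, here is a rough sketch of what flipping such a switch looks like when generating the Chromium build files; the exact variable names and build commands vary between releases, so treat this as illustrative only:

# ask gyp to link against system libraries instead of the bundled copies
./build/gyp_chromium -Duse_system_libxml=1 -Duse_system_zlib=1
ninja -C out/Release chrome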

Finally, I like how different maintainers look at each other's packages, and how patches and bugs are frequently being sent upstream.

Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Openstack on Gentoo (January 28, 2013, 06:00 UTC)

Just a simple announcement for now. It's a bit messy, but should work :D

I have packaged Openstack for Gentoo and it is now in the tree; the most complete packaging is probably for Openstack Swift. Nova and some of the others are missing init scripts (being worked on). If you have problems or bugs, report them as normal.

January 27, 2013
Looking for KDE users on ARM (January 27, 2013, 15:11 UTC)

I received a few requests to make KDE stable for ARM. Unfortunately I can’t do a complete test, but I’m able to compile on both armv5 and armv7.

Before stabilizing, I may set up a virtual machine on qemu to test better, but I’d prefer to receive some feedback from the users.

So, if you are running KDE on ARM, feel free to comment here, send me an e-mail or add a comment to the stabilization bug.

If you want to participate, look at the kde-stable project.

January 26, 2013
Hanno Böck a.k.a. hanno (homepage, bugs)

Based on the XKCD comic "Up Goer Five", someone made a nice little tool: an online text editor that lets you use only the 1000 most common words in English, and asks you to explain a hard idea with it.

Nice idea. I gave it a try. The most obvious example to use was my diploma thesis (on RSA-PSS and provable security), where I always had a hard time explaining to anyone what it was all about.

Well, obviously math, proof, algorithm, encryption etc. are all forbidden, but I had a hard time with the fact that even words like "message" (or anything equivalent) don't seem to be in the top 1000.

Here we go:

When you talk to a friend, she or he knows you are the person in question. But when you do this with a friend far away through computers, you can not be sure.
That's why computers have ways to let you know if the person you are talking to is really the right person.

The ways we use today have one problem: We are not sure that they work. It may be that a bad person knows a way to be able to tell you that he is in fact your friend. We do not think that there are such ways for bad persons, but we are not completely sure.

This is why some people try to find ways that are better. Where we can be sure that no bad person is able to tell you that he is your friend. With the known ways today this is not completely possible. But it is possible in parts.

I have looked at those better ways. And I have worked on bringing these better ways to your computer.


So - do you now have an idea what I was talking about?

I found this nice tool through Ben Goldacre, who tried to explain randomized trials, blinding, systematic review and publication bias - go there and read it. Knowing what publication bias and systematic reviews are is much more important for you than knowing what RSA-PSS is. You can leave cryptography to the experts, but you should care about your health. And for the record, I recently tried myself to explain publication bias (German only).

January 25, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
MySQL, MariaDB & openSUSE 12.3 (January 25, 2013, 12:22 UTC)

openSUSE 12.3 is getting closer and closer, and probably one of the last changes I pushed for MySQL was switching the default MySQL implementation. So in openSUSE 12.3 we will have MariaDB as the default.

If you are following what is going on in openSUSE with regard to MySQL, you probably already know that we started shipping MariaDB together with openSUSE starting with version 11.3 back in 2010. It is now almost three years since we started providing it. There were some little issues along the way to resolve all conflicts and to make everything work nicely together. But I believe we polished everything and smoothed all the rough edges. And now everything is working nice and fine, so it’s time to change something, isn’t it? :-D So let’s take a look at the change I made…

MariaDB as default, what does it mean?

First of all, for those who don’t know, MariaDB is a MySQL fork – a drop-in replacement for MySQL. Still the same API, still the same protocol, even the same utilities. And mostly the same datafiles. So unless you have some deep optimizations depending on your current version, you should see no difference. And what will the switch mean?

Actually, switching the default doesn’t mean much in openSUSE. Do you remember the time when we set KDE as the default? We still provide a great Gnome experience with Gnome Shell. In openSUSE we believe in freedom of choice, so even now you can install either MySQL or MariaDB quite simply. And if you are interested, you can try testing beta versions of both – we have MySQL 5.6 and MariaDB 10.0 in the server:database repo. So where is the change of default?

Actually, the only thing that changed is that everything now links against MariaDB and uses MariaDB libraries – no change from the user's point of view. And if you update from a system that used to have just one package called ‘mysql’, you’ll end up with MariaDB. It will also be the default in the LAMP pattern. But generally, you can still easily replace MariaDB with MySQL, if you like Oracle ;-) Yes, it is hard to make a splash with a default change if you are supporting both sides well…
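As a rough illustration, picking one or the other stays a one-liner; the package names below are what I expect in the openSUSE repositories, so double-check them on your system:

# install the new default
zypper install mariadb
# or stick with Oracle's MySQL instead
zypper install mysql-community-server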

What happens to MySQL?

Oracle's MySQL will not go away! I’ll keep packaging their version and it will be available in openSUSE. It’s just not going to be the default, but nothing prevents you from installing it. And if you had it in the past and you are going to do just a plain upgrade, you’ll stick with it – we are not going to tell you what to use if you know what you want.

Why?

As mentioned before, being the default doesn’t have many consequences. So why the switch? Wouldn’t it break stuff? Is MariaDB safe enough? Well, I’ve personally been using MariaDB since 2010, with a few switches to MySQL and back, so it is better tested from my point of view. I originally switched for the kicks of living on the edge, but in the end I found MariaDB boringly stable (even though I run their latest alpha). I never had any serious issue with it. It also has some interesting goodies that it can offer its users over MySQL. Even Wikipedia decided to switch. And our friends at Fedora are considering it too, but AFAIK they don’t have MariaDB yet in their distribution…

Don’t take this as a complaint about the MySQL guys and girls at Oracle. I know that they are doing a great job, which even MariaDB is based on, as the MariaDB developers do periodic merges to get the newest MySQL and “just” add some more tweaks, engines and stuff.

So, as I like MariaDB and I think it’s time to move, I, as a maintainer of both, proposed to change the default. There were no strong objections, so we are doing it!

Overview

So overall, yes, we are changing the default MySQL provider, but you probably wouldn’t even notice.

Marcus Hanwell a.k.a. cryos (homepage, bugs)
Avogadro Paper Published Open Access (January 25, 2013, 10:29 UTC)

In January of last year I was invited to attend the Semantic Physical Science Workshop in Cambridge, England. That was a great meeting where I met like-minded scientists and developers working on adding semantic structure to data in the physical sciences. Peter managed to bring together a varied group with many backgrounds, and so the discussions were especially useful. I was there to think about how our work with Avogadro, and the wider Open Chemistry project might benefit from and contribute to this area.

Avogadro graphical abstract

My thanks go out to Peter Murray-Rust for inviting me to the Semantic Physical Science meeting and helping us to get the Avogadro paper published in the Journal of Cheminformatics as part of the Semantic Physical Science collection. Noel O'Boyle wrote up a blog post summarizing the Avogadro paper accesses in the first month (shown below - thanks Noel) compared to the Blue Obelisk paper and the Open Babel paper. We only just got the final version of the PDF/HTML published in early January, but already have 12 citations according to Google scholar, showing as the second most viewed article in the last 30 days, and the most viewed article in the last year. The paper made the Chemistry Central most accessed articles list in October and November.


I made a guest blog post talking about open access and the Avogadro paper, which was later republished for a different audience. I would like to thank Geoffrey Hutchison, Donald Curtis, David Lonie, Tim Vandermeersch and Eva Zurek for the work they put into the article, along with our contributors, collaborators and the users of Avogadro. If you use Avogadro in your work please cite our paper, and get in touch to let us know what you are doing with it. As we develop the next generation of Avogadro we would appreciate your input, feedback and suggestions on how we can make it more useful to the wider community.

January 24, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We are currently working on integrating carbon nanotube nanomechanical systems into superconducting radio-frequency electronics. The overall objective is the detection and control of nanomechanical motion towards its quantum limit. In this project, we have a PhD position with the project working title "Gigahertz nanomechanics with carbon nanotubes" available immediately.

You will design and fabricate superconducting on-chip structures suitable as both carbon nanotube contact electrodes and gigahertz circuit elements. In addition, you will build up and use - together with your colleagues - two ultra-low temperature measurement setups to conduct cutting-edge measurements.

Good knowledge of electrodynamics and possibly superconductivity is required. Certainly helpful are low temperature physics, some sort of programming experience, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Interested? Contact Andreas K. Hüttel (e-mail: andreas.huettel@ur.de, web: http://www.physik.uni-r.de/forschung/huettel/ ) for more information!

The combination of localized states within carbon nanotubes and superconducting contact materials leads to a manifold of fascinating physical phenomena and is a very active area of current research. An additional bonus is that the carbon nanotube can be suspended, i.e. the quantum dot between the contacts forms a nanomechanical system. In this research field a PhD position is immediately available; the working title of the project is "A carbon nanotube as a moving weak link".

You will develop and fabricate chip structures combining various superconductor contact materials with ultra-clean, as-grown carbon nanotubes. Together with your colleagues, you will optimize material, chip geometry, nanotube growth process, and measurement electronics. Measurements will take place in one of our ultra-low temperature setups.

Good knowledge of superconductivity is required. Certainly helpful are knowledge of semiconductor nanostructures and low temperature physics, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Interested? Contact Andreas K. Hüttel (e-mail: andreas.huettel@ur.de, web: http://www.physik.uni-r.de/forschung/huettel/ ) for more information!

Richard Freeman a.k.a. rich0 (homepage, bugs)
MythTV 0.26 In Portage (January 24, 2013, 01:31 UTC)

Well, all of MythTV 0.26 is now in portage, masked for testing for a few days.

If anyone is interested, now is a good time to give it a try and report any issues you find. If all is quiet, the masks will come off and we’ll be up-to-date (including all patches up to a few days ago).
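For anyone who wants to try it while it is still masked, the usual unmasking dance looks roughly like this; the atom below is illustrative, so check the tree for the exact mask entries, and a stable system may also need a package.accept_keywords entry:

# accept the masked ebuilds, then build them
echo "~media-tv/mythtv-0.26.0" >> /etc/portage/package.unmask
emerge -av media-tv/mythtv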

Thanks to all who have contributed to the 0.26 bug. I can also happily report that I’m running Gentoo on my mythtv front-end, which should help me with maintaining things. MiniMyth is a great distro, but it has made it difficult to keep the front- and back-ends in sync.


Filed under: foss, gentoo, mythtv

January 23, 2013
Marcus Hanwell a.k.a. cryos (homepage, bugs)
The Roller Coaster of 2012 (January 23, 2013, 00:20 UTC)

It has been a long time since I wrote anything on here, I am still alive and kicking! 2012 was another roller coaster of a year, with many good and bad things happening. Louise and I got our green cards early on in the year (massive thanks to my employer), which was great after having lived in the US for over five years now. We started house hunting a few months after that, which was an adventure and a half.

As we were in the process of looking for a house I was promoted to technical leader at Kitware, and I continue to work on our Open Chemistry project. We ended up falling in love with the first house we found, and found a great realtor who took us back there for a second look. We then learned how different buying a house in the US is versus England, but after several rounds of negotiations we came to an agreement. We had a very long wait for completion, but it all proceeded well in the end.

As we moved out of the place we had been renting for the last three years we found out just how bad some landlords can be about returning security deposits...that is still ongoing and has not been a fun process. We never rented in England, but many friends have assured us that this isn't that unusual. Our move actually went very smoothly though, and we have some great friends who helped us with some of the heavy lifting. We have been learning what it is like to own a home in the country, with a well, septic, large garden etc. The learning curve has been a little steep at times! We attended two weddings (I was a groomsman in one) with two amazing groups of friends - it was a pleasure to be part of the day for two great friends.

I made a few guest blog posts, which I will try to talk more about in another post, and attended some great conferences including the ACS, Semantic Physical Science and Supercomputing. Our Avogadro paper was published, and was recently published in final form (I will write more about this too). I finally cancelled my dedicated server (an old Gentoo box), which I originally took on when I was consulting in England; this was very disruptive in the end, and I didn't have a complete backup of all data when it was taken offline. This caused lots of disruption to email (sorry if I never got back to you). I moved to a cloud server with Rackspace in the end, after playing with a few alternatives. I was retired as a Gentoo developer too (totally missed those emails); it was a great experience being a developer and I still value many of the friendships formed during that time. My passion for packaging has waned in recent years, and I tend to use Arch Linux more now (although I still love lots of things about Gentoo).

Just before Xmas our ten year old German Shepherd developed a sudden paralysis in his back legs and had to be put down. It was pretty devastating, after having him from when he was 12 weeks old. He joined our little family just after we got our own place in England, he had five great years in England and another five in the US. He was with me for so much of my life (a degree, loss of my brother, marriage, loss of my sister, moving to another country, birth of our first child, getting a "real" job). We had family over for the holidays as we call them over here (Xmas and New Year back home), which was great but we may not have been the best of company after having just lost our dog.

I think I skipped lots of stuff too, but it was quite a year! Hoping for more of a steady ride this year to say the least.

January 22, 2013
Josh Saddler a.k.a. nightmorph (homepage, bugs)

a new song: walking home alone through moonlit streets by ioflow

for the 55th disquiet junto, two screws.

the task was to combine do and re by nils frahm into a new work. i chopped “re” into loops, and rearranged sections by sight and sound for a deliberately loose feel. the resulting piece is entirely unquantized, with percussion generated from the piano/pedal action sounds of “do” set under the “re” arrangement. the perc was performed with an mpd18 midi controller in real time, and then arranged by dragging individual hits with a mouse. since the original piano recordings were improvised, tempo fluctuates at around 70bpm, and i didn’t want to lock myself into anything tighter when creating the downtempo beats.

beats performed live on the mpd18, arranged in ardour3.

normally i’d program everything to a strict grid with renoise, but for this project, i used ardour3 (available in my overlay) almost exclusively, except for a bit of sample preparation in renoise and audacity. the faint background pads/strings were created with paulstretch. my ardour3 session was filled with hundreds of samples, each one placed by hand and nudged around to keep the jazzy feel, as seen in this screenshot:

ardour3 session

this is a very rough rework — no FX, detailed mixing/mastering, or complicated tricks. i ran outta time to do all the subtle things i usually do. instead, i spent all my time & effort on the arrangement and vibe. the minimal treatment worked better than everything i’d planned.

January 20, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)
RolandDG DXY-800A under Linux (January 20, 2013, 09:35 UTC)

Many moons ago, we acquired an old RolandDG DXY-800A plotter.  This is an early A3 plotter which took 8 pens, driven via either the parallel port or the serial port.

It came with software to use with an old DOS-version of AutoCAD.  I also remember using it with QBasic.  We had the handbook, still do, somewhere, if only I could lay my hands on it.  Either that, or on the QBasic code I used to use with the thing, as that QBasic code did exercise most of the functionality.

Today I dusted it off, wondering if I could get it working again.  I had a look around.  The thing was not difficult to drive from what I recalled, and indeed, I found the first pointer in a small configuration file for Eagle PCB.

The magic commands:

H Go home
Jn Select Pen n (1..8)
Mxxxx,yyyy Move (with pen up) to position xxx.x, yyy.y mm from lower left corner.
Dxxxx,yyyy Draw (with pen down) a line to position xxx.x, yyy.y mm

Okay, this misses the text features, drawing circles and hatching, but it’s a good start.  Everything else can be emulated with the above anyway.  Something I’d have to do, since there was only one font, and I seem to recall, no ability to draw ellipses.
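With just those four commands you can already do a quick smoke test from the shell; this is a hedged sketch assuming the plotter is on /dev/lp0 and that coordinates are in tenths of a millimetre, as in the command summary above:

{
  echo "H"           # go home
  echo "J1"          # select pen 1
  echo "M100,100"    # pen up: move to 10.0 mm, 10.0 mm
  echo "D100,1000"   # pen down: draw a line to 10.0 mm, 100.0 mm
  echo "H"           # park the pen again
} > /dev/lp0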

Inkscape has the ability to export HPGL, so I had a look at what the format looks like.  Turns out, the two are really easy to convert, and Inkscape HPGL is entirely line drawing commands.

hpgl2roland.pl is a quick and nasty script which takes Inkscape-generated HPGL, and outputs RolandDG plotter language. It’s crude, only understands a small subset of HPGL, but it’s a start.

It can be used as follows:

$ perl hpgl2roland.pl < drawing.hpgl > /dev/lp0

January 19, 2013
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
GHC as a cross-compiler (January 19, 2013, 23:34 UTC)

Another small breakthrough today for those who would like to see haskell programs running.

Here is a small, incomplete HOWTO for Gentoo users on how to build a cross-compiler running on an x86_64 host and targeting the ia64 platform.

It is just an example. You can pick any target.

First of all you need to enable the haskell overlay and install the host compiler:

# GHC_IS_UNREG=yeah emerge -av =ghc-7.6.1

The GHC_IS_UNREG=yeah bit is critical. If we don’t set it, the GHC build system will try to build a registerised stage1 (which is already a crosscompiler).

Not setting GHC_IS_UNREG will break things for a number of reasons:

  • GHC will try to optimize the generated bitcode with llvm’s optimizer, which will produce x86_64 instructions, not ia64.

  • GHC will try to run the object splitter perl script (broken on ia64): ghc-split.lprl.

The rest is rather simple:

# crossdev ia64-unknown-linux-gnu
# ia64-unknown-linux-gnu-emerge sys-libs/ncurses virtual/libffi dev-libs/gmp
# ln -s ${haskell_overlay}/haskell/dev-lang/ghc ${cross_overlay}/ia64-unknown-linux-gnu/ghc
# cd ${cross_overlay}/ia64-unknown-linux-gnu/ghc
# EXTRA_ECONF=--enable-unregisterised USE=ghcmakebinary ebuild ghc-9999.ebuild compile

It will fail, as the following command tries to run an ia64 binary on the x86_64 host:

libraries/integer-gmp/cbits/mkGmpDerivedConstants > libraries/integer-gmp/cbits/GmpDerivedConstants.h

I logged in to an ia64 box and ran mkGmpDerivedConstants to get a GmpDerivedConstants.h, added the result to ${WORKDIR}, and reran the last command.
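In shell terms the workaround looks roughly like this (the host name and exact work directory are illustrative):

# on the ia64 box, generate the header:
./mkGmpDerivedConstants > GmpDerivedConstants.h

# back on the x86_64 host, drop it into the failing build tree:
scp ia64box:GmpDerivedConstants.h \
    "${WORKDIR}"/ghc-9999/libraries/integer-gmp/cbits/
# ...then rerun the EXTRA_ECONF=... ebuild compile command from above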

After the build finished I had a cross-compiler:

sf ghc-9999 # "inplace/bin/ghc-stage1" --info
 [("Project name","The Glorious Glasgow Haskell Compilation System")
 ,("GCC extra via C opts"," -fwrapv")
 ,("C compiler command","/usr/bin/ia64-unknown-linux-gnu-gcc")
 ,("C compiler flags"," -fno-stack-protector  -Wl,--hash-size=31 -Wl,--reduce-memory-overheads")
 ,("ld command","/usr/bin/ia64-unknown-linux-gnu-ld")
 ,("ld flags","     --hash-size=31     --reduce-memory-overheads")
 ,("ld supports compact unwind","YES")
 ,("ld supports build-id","YES")
 ,("ld is GNU ld","YES")
 ,("ar command","/usr/bin/ar")
 ,("ar flags","q")
 ,("ar supports at file","YES")
 ,("touch command","touch")
 ,("dllwrap command","/bin/false")
 ,("windres command","/bin/false")
 ,("perl command","/usr/bin/perl")
 ,("target os","OSLinux")
 ,("target arch","ArchUnknown")
 ,("target word size","8")
 ,("target has GNU nonexec stack","True")
 ,("target has .ident directive","True")
 ,("target has subsections via symbols","False")
 ,("Unregisterised","YES")
 ,("LLVM llc command","llc")
 ,("LLVM opt command","opt")
 ,("Project version","7.7.20130118")
 ,("Booter version","7.6.1")
 ,("Stage","1")
 ,("Build platform","x86_64-unknown-linux")
 ,("Host platform","x86_64-unknown-linux")
 ,("Target platform","ia64-unknown-linux")
 ,("Have interpreter","NO")
 ,("Object splitting supported","NO")
 ,("Have native code generator","NO")
 ,("Support SMP","NO")
 ,("Tables next to code","NO")
 ,("RTS ways","l debug  thr thr_debug thr_l thr_p ")
 ,("Dynamic by default","NO")
 ,("Leading underscore","NO")
 ,("Debug on","False")
 ,("LibDir","/var/tmp/portage/cross-ia64-unknown-linux-gnu/ghc-9999/work/ghc-9999/inplace/lib")
 ,("Global Package DB","/var/tmp/portage/cross-ia64-unknown-linux-gnu/ghc-9999/work/ghc-9999/inplace/lib/package.conf.d")
 ]

# cat a.hs
main = print 1
# "inplace/bin/ghc-stage1" a.hs -fforce-recomp -o a
[1 of 1] Compiling Main             ( a.hs, a.o )
Linking a ...
# file a
a: ELF 64-bit LSB executable, IA-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, not stripped
# LANG=C ls -lh a
-rwxr-xr-x 1 root portage 24M Jan 20 02:24 a
on ia64:
$ ./a
1

Results:

  • It’s not that hard to build a ghc with some exotic target if you have gcc there.

  • mkGmpDerivedConstants needs to be more cross-compiler friendly. It should be really simple to implement, as it only queries data sizes/offsets. I think autotools is already able to do it.

  • GHC should be able to run llvm with the correct -mtriple in the cross-compiler case. That way we would get a registerised cross-compiler.

Some TODOs:

In order to coexist with a native compiler, ghc should stop mangling the --target=ia64-unknown-linux-gnu option passed by the user, and name the resulting compiler ia64-unknown-linux-gnu-ghc and not ia64-unknown-linux-ghc.

That way I could have many flavours of compiler for one target. For example I would like to have x86_64-pc-linux-gnu-ghc as a registerised compiler and x86_64-unknown-linux-gnu-ghc as an unreg one.

And yes, they will all be tracked by gentoo’s package manager.


January 18, 2013
Luca Barbato a.k.a. lu_zero (homepage, bugs)
The case of defaults (Libav vs FFmpeg) (January 18, 2013, 17:18 UTC)

I tried not to get into this discussion, mostly because it will degenerate into a mudslinging contest.

Alexis did not take well the fact that Tomáš changed the default provider for libavcodec and related libraries.

Before we start, one point:

I am as biased as Alexis, as we are both involved in the projects themselves. The same goes for Diego, but it does not apply to Tomáš; he is just a downstream by transitivity (libreoffice uses gstreamer, which uses *only* Libav).

Now the question at hand: which should be the default? FFmpeg or Libav?

How to decide?

- Libav has a strict review policy: every patch goes through a review and has to be polished enough before landing in the tree.

- FFmpeg merges daily what has been done in Libav and has a more lax approach to what goes into the tree and how.

- Libav has fate running on most architectures, many of them running Gentoo, usually on real hardware.

- FFmpeg has an old fate with fewer architectures, many of them qemu emulations.

- Libav defines the API

- FFmpeg follows adding bits here and there to “diversify”

- Libav has a major release per season, minor releases when needed

- FFmpeg releases a lot touting a lot of *Security*Fixes* (usually old code from the ancient times eventually fixed)

- Libav does care about crashes and fixes them, but does not claim every crash is a Security issue.

- FFmpeg goes by leaps to add MORE features, no matter what (including picking wip branches from my personal github and merging them before they are ready…)

- Libav is more careful, thus having fewer fringe features and focusing more on polishing before landing new stuff.

So if you are a downstream you can pick what you want, but if you want something working everywhere you should target Libav.

If you are missing a feature from Libav that is in FFmpeg, feel free to point me to it and I’ll try my best to get it to you.

Alexis Ballier a.k.a. aballier (homepage, bugs)

It’s been a while since I wanted to write about this, and since there has recently been a sort of hijack, without any kind of discussion, to make libav the default implementation for Gentoo, this finally motivated me.

Exactly two years ago, a group consisting of the majority of FFmpeg developers took over its maintainership. While I didn’t like the methods, I’m not an insider so my opinion stops here, especially if you pay attention to who was involved: Luca was part of it. Luca has been a Gentoo developer since before most of us even used Gentoo, and I must admit I’ve never seen him heating up any discussion, rather the contrary, and it’s always been a pleasure to work with him. What happened next, after a lot of turmoil, is that the developers split into two groups: libav, formed by the “secessionists”, and FFmpeg.

Good, so what do we choose now? One of the first things that was done on the libav side was to “clean up” the API with the 0.7 release, meaning we had to fix almost all its consumers: a bad idea if you want wide adoption of a library that has a history of frequently changing its API and breaking all its consumers. Meanwhile, FFmpeg maintained two branches: the 0.7 branch compatible with the old API and the 0.8 one with the new API. The two branches were supposed to be identical except for the API change. On my side the choice was easy: thanks but no thanks sir, I’ll stay with FFmpeg.
FFmpeg, while having its own development and improvements, has been doing daily merges of all libav changes, often with an extra pass of review and checks, so I can even benefit from all the improvements from libav while using FFmpeg.

So why should we use libav? I don’t know. Some projects use libav within their release process, so they are likely to be much better tested with libav than with FFmpeg. However, until I see real bugs, I consider this pure supposition; I have yet to see real facts. On the other hand, I can see lots of false claims, usually without any kind of reference: Tomáš claims that there’s no failure that is libav-specific; well, some bugs tend to say the contrary and have been open for some time (I’ll get back to XBMC later). Another false claim is that FFmpeg-1.1 will have the same failures as libav-9: since Diego made a tinderbox run for libav-9, I made the tests for FFmpeg 1.1 and made the failures block our old FFmpeg 0.11 tracker. If you click the links, you will see that the number of blockers is much smaller (something like 2/3) for the FFmpeg tracker. Another false claim I have seen is that there will be libav-only packages: I have yet to see one; the example I was given as an answer is gst-plugins-libav, which unfortunately is in the same shape for both implementations.

In theory FFmpeg-1.1 and libav-9 should be on par, but in practice, after almost two years of disjoint development, small differences have started to accumulate. One of them is the audio resampling library: while libswresample has been in FFmpeg since the 0.9 series, the libav developers did not want it and made another one, with a very similar API, called libavresample, which appeared in libav-9. This smells badly of NIH syndrome, but to be fair, it’s not the first time such things happen: libav and FFmpeg developers tend to write their own codecs instead of wrapping external libraries and usually achieve better results. The audio resampling library is why XBMC being broken with libav is, at least partly, my fault: while cleaning up its API usage of FFmpeg/libav, I made it use the public API for audio resampling, initially with libswresample, but made sure it worked with libavresample from libav. At that time, this would have meant requiring libav git master, since libav-9 was not even close to being released, so there was no point in trying to make it compatible with such a moving target. libswresample from FFmpeg had been present since the 0.9 series, released more than one year ago. Meanwhile, XBMC-12 has entered its release process, meaning it will probably not work with libav easily. Too late, too bad.

Another important issue I’ve raised is security holes: nowadays, we are much more exposed to them. Instead of having to send a specially crafted video to my victim and make him open it with the right program, I only have to embed it in an HTML5 webpage and wait. This is why I am a bit concerned that security issues fixed 7 months ago in FFmpeg have only been fixed with the recently released libav-0.8.5. I’ve been told that these issues are just crashes and have been fixed in a better way in libav: this is probably true, but I still consider the delay huge for such an important component of modern systems, and, thanks to FFmpeg merging from libav, the better fix will also land in FFmpeg. I have also been told that this will improve on the libav side, but again, I want to see facts rather than claims.

As a conclusion: why is the default implementation being changed? Some people seem to like it better and use false claims to force their preference. Is it a good idea for our users? Today, I don’t think so (remember: FFmpeg merges from libav and adds its own improvements); maybe later, when we have some clear evidence that libav is better (the improvements might be buggy or the merges might lead to subtle bugs). Will I fight to get the default back to FFmpeg? No. I use it, will continue to use and maintain it, and will support people who want the default back to FFmpeg, but that’s about it.


January 17, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Perl::Critic CERT Theme (January 17, 2013, 18:18 UTC)

So, Brian d Foy has compiled the CERT recommendations for securely programming in Perl. I’ve whipped up a perlcriticrc for it.

I’ve checked out the Subversion repository for Perl::Critic and will submit the simple patch…if somebody else hasn’t beaten me to it.
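Once it lands, using it should be nothing more exotic than pointing perlcritic at the profile; a hedged example, with made-up profile and module names:

perlcritic --profile cert-perlcriticrc lib/My/Module.pm

Or drop the settings into ~/.perlcriticrc so they are picked up automatically.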

January 15, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Right at the start the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all the experiments. Some time ago we picked Pd0.3Ni0.7 as contact material since the palladium generates only a low resistance between nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties.
Three methods are used to obtain data - SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"
D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. Strunk
Journal of Applied Physics 113, 034303 (2013); arXiv:1208.2163 (PDF[*])
[*] Copyright American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics.

Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)

UPDATE: Added some basic migration instructions to the bottom.
UPDATE2: Removed mplayer incompatibility mention. Mplayer-1.1 works with system libav.

As the summary says the default media codec provider for new installs will be libav instead of ffmpeg.

This change is being done for various reasons, like matching the default with Fedora and Debian, or the fact that some high-profile projects (eg. a sh*tload of people use them) will probably be libav only. One example is gst-libav, which is in turn required by libreoffice-4, which is due for release in about a month. To cause the least pain for the user, we decided to move from ffmpeg as the default library to libav.

This change won’t affect your current installs at all, but we would like to ask you to try to migrate to libav, test it, and report any issues. That way, if things change in the future and we are forced to make libav the only implementation for everyone, you are not left in the dark screaming about your suddenly missing features.

What to do when some package does not build with libav but ffmpeg is fine

There are no such packages left around, if I am searching correctly (it might be my blindness, so do not take my word for it).

So if you encounter any package not building with libav, just open a bug report on bugzilla, assign it to the media-video team, and add lu_zero[at]gentoo.org to CC to be sure he really takes a sneaky look at fixing it. If you want to fix the issue yourself it gets even better: you write the patch, open the bug in our bugzilla, and someone will include it. The patch should also be sent upstream for inclusion, so we don’t have to keep the patches in the tree for a long time.

What should I do when I have some issues with libav and I require features that are in ffmpeg but not in libav?

It’s easier than fixing bugs about failing packages. Just nag lu_zero (mail hidden somewhere in this post ;-)) and read this.

So when is this stuff going to ruin my day?

The switch in the tree, and a news item informing all users of media-video/ffmpeg, will be created at the end of January or in early February, unless something really bad happens while you guys test it now.

I feel lucky and I want to switch right away so I can ruin your day by reporting bugs

Great, I am really happy you want to contribute. The libav switch is pretty easy to do, as there are only 2 things to keep in mind.

You have to sync your useflags between virtual/ffmpeg and the newly-to-be-switched media-video/libav. The easiest way is probably to just edit your package.use and replace the media-video/ffmpeg line with a media-video/libav one.

Then one would go straight for emerge libav, but there is one more caveat. Libav has split out the libpostproc library, while ffmpeg is still using the internal one. Code-wise they are most probably equal, but you have to take account of it, so just call emerge with both libraries:
emerge -1v libav libpostproc

If this succeeds, you have to revdep-rebuild the packages you have, or use @preserved-rebuild from portage-2.2 to rebuild all the reverse dependencies of libav.
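Condensed into commands, the whole migration is roughly the following; this is a sketch that assumes package.use is a single file rather than a directory:

sed -i 's:^media-video/ffmpeg:media-video/libav:' /etc/portage/package.use
emerge -1v libav libpostproc
revdep-rebuild    # or: emerge -av @preserved-rebuild with portage-2.2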

Good luck and happy bug hunting.

January 14, 2013

Many times, when I had to set make.conf on systems with particular architectures, I was in doubt about which is the best --jobs value.
The handbook suggests ${core} + 1, but since I’m curious I wanted to test it myself to be sure this is right.

To make a good test we need a package with a respectable build system that honours make parallelization and takes at least a few minutes to compile. Otherwise, with packages that compile in a few seconds, we are unable to track the effective difference.
kde-base/kdelibs is, in my opinion, perfect.

If you are on an architecture where kde-base/kdelibs is unavailable, just switch to another cmake-based package.

Now, download best_makeopts from my overlay. Below is an explanation of what the script does, plus various suggestions.

  • You need to compile the package on a tmpfs filesystem; I’m assuming you have /tmp mounted as a tmpfs too.
  • You need to have the tarball of the package on a tmpfs, because if you have a slow disk it may take more time.
  • You need to switch your governor to performance.
  • You need to be sure you don’t have strange EMERGE_DEFAULT_OPTS.
  • You need to add ‘-B’ because we don’t want to include the time of the installation.
  • You need to drop the existing caches before compiling.

As you can see, the for loop will emerge the same package with makeopts from 1 to 10. If you have, for example, a single-core machine, just trying the loop from 1 to 4 is enough.
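The core of it boils down to a loop along these lines (a minimal sketch, not the actual script):

for j in $(seq 1 10); do
    sync; echo 3 > /proc/sys/vm/drop_caches    # drop caches before each run
    time MAKEOPTS="-j${j}" emerge -B kde-base/kdelibs
done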

Please, during the test, don’t use the cpu for other purposes, and if you can, stop all services and run the test from a tty; you will see the time for every merge.

The following is an example on my machine:
-j1 : real 29m56.527s
-j2 : real 15m24.287s
-j3 : real 13m57.370s
-j4 : real 12m48.465s
-j5 : real 12m55.894s
-j6 : real 13m5.421s
-j7 : real 13m13.322s
-j8 : real 13m23.414s
-j9 : real 13m26.657s

The hardware is:
Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz which has 2 CPUs and 4 threads.
After -j4 you can see the regression.

Another example from an Intel Itanium with 4 CPUs.
-j1 : real 4m24.930s
-j2 : real 2m27.854s
-j3 : real 1m47.462s
-j4 : real 1m28.082s
-j5 : real 1m29.497s

I tested this script on ~20 different machines, and in the majority of cases the best optimization was ${core}, or more exactly ${threads}, of your CPU.

Conclusion:
From the handbook:

A good choice is the number of CPUs (or CPU cores) in your system plus one, but this guideline isn’t always perfect.

I don’t know who, years ago, suggested ${core} + 1 in the handbook, and I don’t want to trigger a flame. I’m just saying that ${core} + 1 is not the best optimization for me, and the test confirms the part: “but this guideline isn’t always perfect”.

In all cases ${threads} + ${X} is slower than only ${threads}, so don’t use -j20 if you have a dual-core cpu.

Also, I’m not saying to use ${threads}; I’m just saying feel free to run your own tests to see what the best optimization is.

If you have suggestions to improve the functionality of the script or you think that this script is wrong, feel free to comment or leave an email.