
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
January 25, 2013, 23:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

January 25, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A personal update (January 25, 2013, 17:32 UTC)

While I’m sure that it’s not of interest to the vast majority of readers coming from Gentoo Universe, I’m sure that some of you won’t mind some updates on my personal situation, at least to help you understand my current availability and what you can ask me to do for you, realistically.

First of all, I’m not currently in the USA — since I didn’t have a work visa, my stay was always supposed to be limited to three months at a time. The three months expired in early December, so before the expiration I traveled back to Europe — in particular to the bureaucracy of Italy and to the swamp of Venice; you can guess I don’t really like my motherland.

I’m not planning at this point to go back to the US anytime soon. Among other reasons, during 2012 I spent over six months there, and they have been very clear the last time I entered: I’m not welcome back right away — a few months would be enough, but that also means that the line of work I started back in February last year couldn’t proceed properly. While the original plan was for me to get an office in, or nearby, London, I haven’t seen any progress for said plan, which meant I went back to my old freelancing. I suppose this currently puts me in a consulting capacity more than anything.

Unfortunately, as you can guess, after a hiatus of a full year, most of my customers have already found someone else to take care of them, and I'm currently only following one last customer on their project — for something they already paid for, which means that there isn't any money to be made there. I am already trying to get a new position, this time as a full-time employee, as the life of a freelancer (in Italy!) really made me long for more stability. For the moment I have no certain news about my future employment, but you can probably guess that, if I do accept a full-time position, the time I have to spend on Gentoo is likely going to be reduced, unless said position requires me to use Gentoo — and I wouldn't bet on that if I were you.

Furthermore, I do expect that, whatever position I accept next, I'm going to move out of Italy — the political scene in Italy has never been good, but it reached my limit with the current populist promises from both sides of the aisle, and from the small challengers alike; and my freelancing experience makes me wonder how on earth it's possible that only one (small) party is actually trying to fight the crisis and increase productivity … but this is all for a different time. Anyway, wherever I end up (I'm aiming for one of the few English-speaking countries in Europe), it's going to take a while for me to settle down (find a place to live, get it so that it's half-decently convenient for me, etc.), which is going to eat away some of the time I spend on Gentoo.

Time is being eaten away already, to be honest. Among other things, here at home I've got a bunch of paperwork to take care of: not only the general taxes that need to be paid and accounted for, but the bank took some of my time just to make sure I have money to cover the expenses (during the year I accrued some debts here in Italy, as I was living off the American account), and so on. I'm also trying to reduce expenses as much as possible. Most of the hardware I had before has been dropped already anyway, back in June when the original plan was for me to get an H1B visa and jump out of here, so it's less bothersome than it may seem at first.

The one thing that really bothers me the most is that since last year I’ve been feeling like wherever I am, I’m “borrowing” my space — it’s not something I like. While some people, such as Luca, feel comfortable with just carrying their things in a suitcase, and as long as they have a place to sleep and wash their clothes they are happy, I’ve always been quite the sedentary guy: I like having my space, personalized for my needs and so on. Even now back at home I don’t feel entirely stable because I do not know how long I’m going to stay here.

I’m afraid I have overindulged during the months in the US, relying too much on the promises made then. Hopefully, I’ll come out of the recent mess on my feet, and possibly with a less foul mood than I have been having recently.

Michal Hrusecky a.k.a. miska (homepage, bugs)
MySQL, MariaDB & openSUSE 12.3 (January 25, 2013, 12:22 UTC)

openSUSE 12.3 is getting closer and closer, and probably one of the last changes I pushed for MySQL was switching the default MySQL implementation. So in openSUSE 12.3 we will have MariaDB as the default.

If you are following what is going on in openSUSE with regard to MySQL, you probably already know that we started shipping MariaDB together with openSUSE starting with version 11.3 back in 2010. It is now almost three years since we started providing it. There were a few small issues along the way to resolve all conflicts and to make everything work nicely together. But I believe we polished everything and smoothed all the rough edges. And now that everything is working nice and fine, it's time to change something, isn't it? :-D So let's take a look at the change I made…

MariaDB as default, what does it mean?

First of all, for those who don't know, MariaDB is a MySQL fork – a drop-in replacement for MySQL. Still the same API, still the same protocol, even the same utilities. And mostly the same data files. So unless you have some deep optimizations depending on your current version, you should see no difference. And what will the switch mean?

Actually, switching the default doesn't mean much in openSUSE. Do you remember the time when we set KDE as the default? We still provide a great GNOME experience with GNOME Shell. In openSUSE we believe in freedom of choice, so even now you can install either MySQL or MariaDB quite simply. And if you are interested, you can try testing beta versions of both – we have MySQL 5.6 and MariaDB 10.0 in the server:database repo. So where is the change of default?

Actually, the only thing that changed is that everything now links against MariaDB and uses MariaDB libraries – no change from the user's point of view. And if you try to update from a system that used to have just one package called 'mysql', you'll end up with MariaDB. It will also be the default in the LAMP pattern. But generally, you can still easily replace MariaDB with MySQL, if you like Oracle ;-) Yes, it is hard to make a splash with a default change if you are supporting both sides well…

What happens to MySQL?

Oracle's MySQL will not go away! I'll keep packaging their version and it will be available in openSUSE. It's just not going to be the default, but nothing prevents you from installing it. And if you had it in the past and you are going to do just a plain upgrade, you'll stick with it – we are not going to tell you what to use if you know what you want.
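For example, switching back is roughly a two-liner on the command line. This is a minimal sketch assuming the usual openSUSE package names (mariadb for MariaDB, mysql-community-server for Oracle's MySQL); double-check them with zypper search mysql on your release before running anything:

# zypper remove mariadb
# zypper install mysql-community-server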

Why?

As mentioned before, being the default doesn't have many consequences. So why the switch? Wouldn't it break stuff? Is MariaDB safe enough? Well, I've personally been using MariaDB since 2010, with a few switches to MySQL and back, so from my point of view it is the better tested of the two. I originally switched for the kicks of living on the edge, but in the end I found MariaDB boringly stable (even though I run their latest alpha). I never had any serious issue with it. It also has some interesting goodies that it can offer its users over MySQL. Even Wikipedia decided to switch. And our friends at Fedora are considering it too, but AFAIK they don't have MariaDB in their distribution yet…

Don’t take it as a complain about MySQL guys and girls at Oracle, I know that they are doing a great job that even MariaDB is based on as they do periodical merges to get newest MySQL and they “just” add some more tweaks, engines and stuff.

So, as I like MariaDB and I think it’s time to move, I, as a maintainer of both, proposed to change the default. There were no strong objections, so we are doing it!

Overview

So overall, yes, we are changing the default MySQL provider, but you probably won't even notice.

Marcus Hanwell a.k.a. cryos (homepage, bugs)
Avogadro Paper Published Open Access (January 25, 2013, 10:29 UTC)

In January of last year I was invited to attend the Semantic Physical Science Workshop in Cambridge, England. That was a great meeting where I met like-minded scientists and developers working on adding semantic structure to data in the physical sciences. Peter managed to bring together a varied group with many backgrounds, and so the discussions were especially useful. I was there to think about how our work with Avogadro, and the wider Open Chemistry project might benefit from and contribute to this area.

Avogadro graphical abstract

My thanks go out to Peter Murray-Rust for inviting me to the Semantic Physical Science meeting and helping us to get the Avogadro paper published in the Journal of Cheminformatics as part of the Semantic Physical Science collection. Noel O'Boyle wrote up a blog post summarizing the Avogadro paper's accesses in its first month (shown below - thanks Noel) compared to the Blue Obelisk paper and the Open Babel paper. We only just got the final version of the PDF/HTML published in early January, but the paper already has 12 citations according to Google Scholar, is showing as the second most viewed article in the last 30 days, and is the most viewed article in the last year. The paper made the Chemistry Central most accessed articles list in October and November.

[Chart: accesses of the Avogadro paper in its first month, compared to the Blue Obelisk and Open Babel papers]

I made a guest blog post talking about open access and the Avogadro paper, which was later republished for a different audience. I would like to thank Geoffrey Hutchison, Donald Curtis, David Lonie, Tim Vandermeersch and Eva Zurek for the work they put into the article, along with our contributors, collaborators and the users of Avogadro. If you use Avogadro in your work please cite our paper, and get in touch to let us know what you are doing with it. As we develop the next generation of Avogadro we would appreciate your input, feedback and suggestions on how we can make it more useful to the wider community.

January 24, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We are currently working on integrating carbon nanotube nanomechanical systems into superconducting radio-frequency electronics. The overall objective is the detection and control of nanomechanical motion towards its quantum limit. In this project, we've got a PhD position with the project working title "Gigahertz nanomechanics with carbon nanotubes" available immediately.

You will design and fabricate superconducting on-chip structures suitable as both carbon nanotube contact electrodes and gigahertz circuit elements. In addition, you will build up and use - together with your colleagues - two ultra-low temperature measurement setups to conduct cutting-edge measurements.

Good knowledge of electrodynamics and possibly superconductivity is required. Certainly helpful are low temperature physics, some sort of programming experience, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Interested? Contact Andreas K. Hüttel (e-mail: andreas.huettel@ur.de, web: http://www.physik.uni-r.de/forschung/huettel/ ) for more information!

The combination of localized states within carbon nanotubes and superconducting contact materials leads to a manifold of fascinating physical phenomena and is a very active area of current research. An additional bonus is that the carbon nanotube can be suspended, i.e. the quantum dot between the contacts forms a nanomechanical system. In this research field a PhD position is immediately available; the working title of the project is "A carbon nanotube as a moving weak link".

You will develop and fabricate chip structures combining various superconductor contact materials with ultra-clean, as-grown carbon nanotubes. Together with your colleagues, you will optimize material, chip geometry, nanotube growth process, and measurement electronics. Measurements will take place in one of our ultra-low temperature setups.

Good knowledge of superconductivity is required. Certainly helpful is knowledge of semiconductor nanostructures and low temperature physics, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.

Interested? Contact Andreas K. Hüttel (e-mail: andreas.huettel@ur.de, web: http://www.physik.uni-r.de/forschung/huettel/ ) for more information!

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Two days ago, Luca asked me to help him figure out what was going on with a patch for libav which he knew to be the right thing, but which was acting up in a fashion he didn't understand: on his computer, it increased the size of the final shared object by 80KiB — while this number is certainly not outlandish for a library such as libavcodec, it does seem odd at first glance that a patch removing source code increases the final size of the executable code.

My first wild guess, which (spoiler alert) turned out to be right, was that removing branches out of the functions let GCC optimize the function further and decide to inline it. But how to actually be sure? It's time to get the right tools for the job: dev-ruby/ruby-elf, dev-util/dwarves and sys-devel/binutils enter the battlefield.

We’ve built libav with and without the patch on my server, and then rbelf-size told us more or less the same story:

% rbelf-size --diff libav-{pre,post}/avconv
        exec         data       rodata        relro          bss     overhead    allocated   filename
     6286266       170112      2093445       138872      5741920       105740     14536355   libav-pre/avconv
      +19456           +0         -592           +0           +0           +0       +18864 

Yes there’s a bug in the command, I noticed. So there is a total increase of around 20KiB, where is it split? Given this is a build that includes debug info, it’s easy to find it through codiff:

% codiff -f libav-{pre,post}/avconv
[snip]

libavcodec/dsputil.c:
  avg_no_rnd_pixels8_9_c    | -163
  avg_no_rnd_pixels8_10_c   | -163
  avg_no_rnd_pixels8_8_c    | -158
  avg_h264_qpel16_mc03_10_c | +4338
  avg_h264_qpel16_mc01_10_c | +4336
  avg_h264_qpel16_mc11_10_c | +4330
  avg_h264_qpel16_mc31_10_c | +4330
  ff_dsputil_init           | +4390
 8 functions changed, 21724 bytes added, 484 bytes removed, diff: +21240

[snip]

If you wonder why it’s adding more code than we expected, it’s because there are other places where functions have been deleted by the patch, causing some reductions in other places. Now we know that the three functions that the patch deleted did remove some code, but five other functions added 4KiB each. It’s time to find out why.

A common way to do this is to generate the assembly file (which GCC usually does not represent explicitly) to compare the two — due to the size of the dsputil translation unit, this turned out to be completely pointless — just the changes in the jump labels cause the whole file to be rewritten. So we rely instead on objdump, which allows us to get a full disassembly of the executable section of the object file:

% objdump -d libav-pre/libavcodec/dsputil.o > dsputil-pre.s
% objdump -d libav-post/libavcodec/dsputil.o > dsputil-post.s
% diff -u dsputil-{pre,post}.s | diffstat
 unknown |245013 ++++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 125163 insertions(+), 119850 deletions(-)

As you can see, trying a diff between these two files is going to be pointless, first of all because of the size of the disassembled files, and secondarily because each instruction has its address-offset prefixed, which means that every single line will be different. So what to do? Well, first of all it’s useful to just isolate one of the functions so that we reduce the scope of the changes to check — I found out that there is a nice way to do so, and it involves relying on the way the function is declared in the file:

% fgrep -A3 avg_h264_qpel16_mc03_10_c dsputil-pre.s
00000000000430f0 <avg_h264_qpel16_mc03_10_c>:
   430f0:       41 54                   push   %r12
   430f2:       49 89 fc                mov    %rdi,%r12
   430f5:       55                      push   %rbp
--
[snip]

While it takes a while to come up with the correct syntax, it’s a simple sed command that can get you the data you need:

% sed -n -e '/\<avg_h264_qpel16_mc03_10_c/, /^$/ s|^\s\+[0-9a-f]\+:|| p' dsputil-pre.s > dsputil-func-pre.s
% sed -n -e '/\<avg_h264_qpel16_mc03_10_c/, /^$/ s|^\s\+[0-9a-f]\+:|| p' dsputil-post.s > dsputil-func-post.s
% diff -u dsputil-func-{pre,post}.s | diffstat
 dsputil-func-post.s | 1430 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 1376 insertions(+), 54 deletions(-)

Okay that’s much better — but it’s still a lot of code to sift through, can’t we reduce it further? Well, actually… yes. My original guess was that some function was inlined; so let’s check for that. If a function is not inlined, it has to be called, the instruction for which, in this context, is callq. So let’s check if there are changes in the calls that happen:

% diff -u =(fgrep callq dsputil-func-pre.s) =(fgrep callq dsputil-func-post.s)
--- /tmp/zsh-flamehIkyD2        2013-01-24 05:53:33.880785706 -0800
+++ /tmp/zsh-flamebZp6ts        2013-01-24 05:53:33.883785509 -0800
@@ -1,7 +1,6 @@
-       e8 fc 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 e5 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 c6 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 a7 71 fc ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
-       e8 cd 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
-       e8 a3 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
-       e8 00 00 00 00          callq  43261 <avg_h264_qpel16_mc03_10_c+0x171>
+       e8 00 00 00 00          callq  8e670 <avg_h264_qpel16_mc03_10_c>
+       e8 71 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 52 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 33 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 14 bc f7 ff          callq  a390 <put_h264_qpel8_v_lowpass_10>
+       e8 00 00 00 00          callq  8f8d3 <avg_h264_qpel16_mc03_10_c+0x1263>

Yes, I do use zsh — on the other hand, now that I look at the code above I note that there’s a bug: it does not respect $TMPDIR as it should have used /tmp/.private/flame as base path, dang!

So the quick check shows that avg_pixels8_l2_10 is no longer called — but does that account for the whole size? Let’s see if it changed:

% nm -S libav-{pre,post}/libavcodec/dsputil.o | fgrep avg_pixels8_l2_10
00000000000072e0 0000000000000112 t avg_pixels8_l2_10
00000000000072e0 0000000000000112 t avg_pixels8_l2_10

The size is the same and it’s 274 bytes. The increase is 4330 bytes, which is around 15 times more than the size of the single function — what does that mean then? Well, a quick look around shows this piece of code:

        41 b9 20 00 00 00       mov    $0x20,%r9d
        41 b8 20 00 00 00       mov    $0x20,%r8d
        89 d9                   mov    %ebx,%ecx
        4c 89 e7                mov    %r12,%rdi
        c7 04 24 10 00 00 00    movl   $0x10,(%rsp)
        e8 cd 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
        48 8d b4 24 80 00 00    lea    0x80(%rsp),%rsi
        00 
        49 8d 7c 24 10          lea    0x10(%r12),%rdi
        41 b9 20 00 00 00       mov    $0x20,%r9d
        41 b8 20 00 00 00       mov    $0x20,%r8d
        89 d9                   mov    %ebx,%ecx
        48 89 ea                mov    %rbp,%rdx
        c7 04 24 10 00 00 00    movl   $0x10,(%rsp)
        e8 a3 40 fc ff          callq  72e0 <avg_pixels8_l2_10>
        48 8b 84 24 b8 04 00    mov    0x4b8(%rsp),%rax
        00 
        64 48 33 04 25 28 00    xor    %fs:0x28,%rax
        00 00 
        75 0c                   jne    4325c <avg_h264_qpel16_mc03_10_c+0x16c>

This is just a fragment but you can see that there are two calls to the function, followed by a pair of xor and jne instructions — which is the basic harness of a loop. Which means the function gets called multiple times. Knowing that this function is involved in 10-bit processing, it becomes likely that the function gets called twice per bit, or something along those lines — remove the call overhead (as the function is inlined) and you can see how twenty copies of that small function per caller account for the 4KiB.

So my guess was right, but incomplete: GCC not only inlined the function, but it also unrolled the loop, probably doing constant propagation in the process.

Is this it? Almost — the next step was to get some benchmark data when using the code, which was mostly Luca's work (and I have next to no info on how he did that, to be entirely honest); the results on my server have been inconclusive, as the 2% loss that he originally registered was gone in further testing and would, anyway, be vastly within the margin of error of a non-dedicated system — no, we weren't using full-blown profiling tools for that.

While we don’t have any sound numbers about it, what we’re worried about is for cache-starved architectures, such as Intel Atom, where the unrolling and inlining can easily cause performance loss, rather than gain — which is why all us developers facepalm in front of people using -funroll-all-loops and similar. I guess we’ll have to find an Atom system to do this kind of runs on…

Richard Freeman a.k.a. rich0 (homepage, bugs)
MythTV 0.26 In Portage (January 24, 2013, 01:31 UTC)

Well, all of MythTV 0.26 is now in portage, masked for testing for a few days.

If anyone is interested now is a good time to give it a try and report any issues you find. If all is quiet the masks will come off and we’ll be up-to-date (including all patches up to a few days ago).

Thanks to all who have contributed to the 0.26 bug. I can also happily report that I’m running Gentoo on my mythtv front-end, which should help me with maintaining things. MiniMyth is a great distro, but it has made it difficult to keep the front- and back-ends in sync.


Filed under: foss, gentoo, mythtv

January 23, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The usual Typo update report (January 23, 2013, 21:18 UTC)

You probably got used to reading about me updating Typo at this point — the last update I wrote about was almost a year ago, when I updated to Typo 6, using Rails 3 instead of 2. Then you probably remember my rant about what I would like from my blog …

Well, yesterday I was finally able to get rid of the last Rails 2.3 application that was running on my server, as a nuisance of a customer's contract finally expired, so I was finally able to update Typo without having to worry about the Ruby 1.8 compatibility that was dropped upstream. Indeed, since the other two Ruby applications running on this server are Harvester for Planet Multimedia and a custom application I wrote for a customer — the first not using Rails at all, and the second written to work on both 1.8 and 1.9 alike — I was able to move from having three separate Rails slots installed (2.3, 3.0 and 3.1) to having only the latest 3.2, which means that security issues are no longer a problem for the short term either.

The new Typo version solves some of the smaller issues I’ve got with it before — starting from the way it uses Rails (now no longer requiring a single micro-version, but accepting any version after 3.2.11), and the correct dependency on the new addressable. At the same time it does not solve some of the most long-standing issues, as it insists on using the obsolete coderay 0.9 instead of the new 1.0 series.

So let’s go in order: the new version of Typo brings in another bunch of gems — which means I have to package a few more. One of them is fog which includes a long list of dependencies, most of which from the same author, and reminds me of how bad the dependencies issue is with Ruby packages. Luckily for me, even though the dependency is declared mandatory, a quick hacking around got rid of it just fine — okay hacking might be too much, it really is just a matter of removing it from the Gemfile and then removing the require statement for it, done.

For the moment I used the gem command to install the required packages — some of them are actually available in Hans's overlay and I'll be reviewing them soon (I was supposed to do that tonight, but my job got in the way) to add them to the main tree. A few more require me to write them from scratch, so I'll spend a few days on that soon. I have other things in my TODO pipeline, but I'll try to cover as many bases as I can.

While I’m not sure if this update finally solves the issue of posts being randomly marked as password-protected, at least this version solves the header in the content admin view, which means that I can finally see what drafts I have pending — and the default view also changed to show me the available drafts to finish, which is great for my workflow. I haven’t looked yet if the planning for future-published posts work, but I’ll wait for that.

My idea of forking Typo is still on, even though it might be more like a set of changes over it instead of being a full-on fork.. we’ll see.

Marcus Hanwell a.k.a. cryos (homepage, bugs)
The Roller Coaster of 2012 (January 23, 2013, 00:20 UTC)

It has been a long time since I wrote anything on here, I am still alive and kicking! 2012 was another roller coaster of a year, with many good and bad things happening. Louise and I got our green cards early on in the year (massive thanks to my employer), which was great after having lived in the US for over five years now. We started house hunting a few months after that, which was an adventure and a half.

As we were in the process of looking for a house I was promoted to technical leader at Kitware, and I continue to work on our Open Chemistry project. We ended up falling in love with the first house we found, and found a great realtor who took us back there for a second look. We then learned how different buying a house in the US is versus England, but after several rounds of negotiations came to an agreement. We had a very long wait for completion, but that all proceeded well in the end.

As we moved out of the place we had been renting for the last three years we found out just how bad some landlords can be about returning security deposits...that is still ongoing and has not been a fun process. We never rented in England, but many friends have assured us that this isn't that unusual. Our move actually went very smoothly though, and we have some great friends who helped us with some of the heavy lifting. We have been learning what it is like to own a home in the country, with a well, septic, large garden etc. The learning curve has been a little steep at times! We attended two weddings (I was a groomsman in one) with two amazing groups of friends - it was a pleasure to be part of the day for two great friends.

I made a few guest blog posts, which I will try to talk more about in another post, and attended some great conferences including the ACS, Semantic Physical Science and Supercomputing. Our Avogadro paper was published, and was recently published in final form (I will write more about this too). I finally cancelled my dedicated server (an old Gentoo box), which I originally took on when I was consulting in England; this was very disruptive in the end, and I didn't have a complete backup of all data when it was taken offline. This caused lots of disruption to email (sorry if I never got back to you). I moved to a cloud server with Rackspace in the end, after playing with a few alternatives. I was retired as a Gentoo developer too (totally missed those emails); it was a great experience being a developer and I still value many of the friendships formed during that time. My passion for packaging has waned in recent years, and I tend to use Arch Linux more now (although I still love lots of things about Gentoo).

Just before Xmas our ten year old German Shepherd developed a sudden paralysis in his back legs and had to be put down. It was pretty devastating, after having him from when he was 12 weeks old. He joined our little family just after we got our own place in England, he had five great years in England and another five in the US. He was with me for so much of my life (a degree, loss of my brother, marriage, loss of my sister, moving to another country, birth of our first child, getting a "real" job). We had family over for the holidays as we call them over here (Xmas and New Year back home), which was great but we may not have been the best of company after having just lost our dog.

I think I skipped lots of stuff too, but it was quite a year! Hoping for more of a steady ride this year to say the least.

January 22, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Crashes and DoS, what is it with them anyway? (January 22, 2013, 16:38 UTC)

During the recent Gentoo mudslinging about libav and FFmpeg, one of the contention points is the fact that FFmpeg boasts more "security fixes" than libav over time. Any security-conscious developer would know that assessing the general reliability of a piece of software requires much more than just counting CVEs — as they only get assigned when bugs are reported as security issues by somebody.

I ended up learning this first-hand. In August 2005 I was just fixing a few warnings out of xine-ui, with nothing in mind but cleaning up the build log — that patch ended up in Gentoo, but no new release was made for xine-ui itself. Come April 2006, a security researcher marked those warnings as a security issue — we were already covered, for the most part, but other distros weren't. The bug was fixed upstream, but not released, simply because nobody had considered the warnings security issues up to that point. My lesson was that issues that might lead to security problems are always better looked at by a security expert — that's why I originally started working with ocert on verifying issues within xine.

So which kind of issues are considered security issues? In this case the problem was a format string — this is obvious, as it can theoretically allow, under given conditions, to write to arbitrary memory. The same is true for buffer overflows obviously. But what about unbound reads, which in my experience form the vast majority of crashes out there? I would say that there are two widely different problems with them, which can be categorized as security issues: information disclosure (if the attacker can decide where to read and can get useful information out of said read — such as the current base address for the executable or libraries of the process, which can be used later), and good old crashes — which for security purposes are called DoS: Denial of Service.

Not all DoS are crashes, and not all crashes are DoS, though! In particular, you can DoS an app without it crashing, by deadlocking it or otherwise exhausting one scarce resource — this is the preferred method for DoS on servers; indeed this is how the Slowloris attack on Apache worked: it used up all the connection handlers and caused the server to not answer legitimate clients; a crash would be much easier to identify and recover from, which is why DoS on servers are rarely full-blown crashes. Crashes cannot realistically be called DoS when they are user-initiated without a third party intervening. It might sound silly, and it reminds me of an old joke – "Doctor, doctor, if I do this it hurts!" "Stop doing that, then!" – but that's the case: if going into the app's preferences and clicking something causes the app to crash, then there's a bug which is a crash but is not a DoS.

This brings us to one of the biggest problems with calling something a DoS: it might be a DoS in one use-case, and not in another — let's use libav as an example. It's easy to argue that any crash in the libraries while decoding a stream is a DoS, as it's common to download a file and try to play it; said file is the element in the equation that comes from a possible attacker, and anything that can happen due to its decoding is a security risk. Is it possible to argue that a crash in an encoding path is a DoS? Well, from a client's perspective, it's not common — it's still very possible that an attacker can trick you into downloading a file and re-encoding it, but it's a less common situation, and in my experience most of the encoding-related crashes are triggered only by a given subset of parameters, which makes them more difficult for an attacker to exploit than a decoder-side DoS. Also, if the crash only happens when using avconv, it's hard to declare it a DoS, considering that at most it should crash the encoding process, and that's about it.

Let’s now turn the table, and instead of being the average user downloading movies from The Pirate Bay, we’re a video streaming service, such as YouTube, Vimeo or the like — but without the right expertise, which means that a DoS on your application is actually a big deal. In this situation, assuming your users control the streams that get encoded, you’re dealing with an input source that is untrusted, which means that you’re vulnerable to both crashes in the decoder and in the encoder as real-world DoS attacks. As you see what earlier required explicit user interaction and was hard to consider a full-blown DoS now gets much more important.

This kind of issue is why languages like Ada were created, and why many people out there insist that higher-level languages like Java, Python and Ruby are more secure than C, thanks to the existence of exceptions for error handling, making it easier to have fail-safe conditions which should solve the problem of DoS — the fact that there are just as many security issues in software written in high-level languages as in low-level ones shows how false that concept is nowadays. While it does save you from some kinds of crashes, it also creates issues by increasing the sheer area of exposure: the more layers, the more code is involved in execution, and that can actually increase the chance of somebody finding an issue in them.

Area of exposure is important also for software like libav: if you enable every possible format under the sun for input and output, you're enabling a whole lot of code, and you can suffer from a whole lot of vulnerabilities — if you're targeting a much narrower use case, for instance using it on a device that has to output only H.264 and Speex audio, you can easily turn everything else off and reduce your exposure many times over. You can probably see now why, even when using libav or ffmpeg as a backend, Chrome does not support all the input files that they support; it would just be too difficult to validate all the possible code out there, while it's feasible to validate a subset of it.
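As a concrete illustration, a build along these lines disables whole component classes and re-enables only the narrow path the device needs; the flag and component names below follow the libav/ffmpeg configure conventions but are assumptions for this hypothetical device, so check ./configure --help before relying on them:

% ./configure --disable-encoders --disable-decoders \
      --disable-muxers --disable-demuxers --disable-protocols \
      --enable-gpl --enable-libx264 --enable-encoder=libx264 \
      --enable-libspeex --enable-encoder=libspeex \
      --enable-muxer=mp4 --enable-protocol=file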

This should have established the terms of what to consider a DoS and when — so how do you handle this? Well, the first problem is to identify the crashes; you can either wait for an attack to happen and react to it, or proactively try to identify crash situations, and obviously the latter is what you should do most of the time. Unfortunately, this requires the use of many different techniques, none of which yields a 100% positive result, and even the combined results are rarely sufficient to argue that a piece of software is 100% safe from crashes and other possible security issues.

One obvious thing is that you just have to make sure the code does not allow things that should not happen, like incredibly high values or negative ones. This requires manual work and analysis of code, which is usually handled through code reviews – on the topic there is a nice article by Mozilla's David Humphrey – at least for what concerns libav. But this by itself is not enough, as many times it's values that are allowed by the specs, but not handled properly, that cause the crashes. How to deal with them? A suggestion would be to use fuzzing, a technique in which a program is executed receiving, as input, a file that is corrupted starting from a valid one. A few years ago, a round of FFmpeg/VLC bugs was filed after Sam Hocevar released, and started using, his zzuf tool (which should be in Portage, if you want to look at it).
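A quick way to get started, sketched here with a placeholder sample file and seed range, is to let zzuf corrupt only the input file named on the command line while running a decode-only pass over it, once per seed, and watch for children dying on a signal:

% zzuf -c -s 0:200 -r 0.004 avconv -i sample.mkv -f null -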

Unfortunately, fuzzing, just like using particular exemplars of attacks in the wild, has one big drawback – one that we could call "zenish" – you can easily forget that you're looking at a piece of code that is crashing on invalid input, and just go and resolve that one small issue. Do you remember the calibre security shenanigan? It's the same thing: if you only fix the one bit that is crashing on you, without looking at the whole situation, an attacker, or a security researcher, can just look around and spot the next piece that is going to break on you. This is the one issue that Luca, I and the others in the libav project get vocal about when we're told that we don't pay attention to security only because it takes us a little longer to come up with a (proper) fix — well, this, and the fact that most of the CVEs that are marked as resolved by FFmpeg we have had no way to verify for ourselves, because we weren't given access to the samples for reproducing the crashes; this changed after the last VDD, at least for those coming from Google. If I'm not mistaken, at least one of them ended up with a different, complete fix rather than the partial bandaid put in by our peers at FFmpeg.

Testsuites for valid configurations and valid files are not useful to identify these problems, as those are valid files and should not cause a DoS anyway. On the other hand, just using a completely shot-in-the-dark fuzzing technique like zzuf may or may not help, depending on how much time you can pour into looking at the failures. Some years ago, I read an interesting book, Fuzzing: Brute Force Vulnerability Discovery by Sutton, Greene and Amini. It was a very interesting read, although last I checked, the software they pointed to was mostly dead in the water. I should probably get back to it and see if there are new forks of that software that we can use to help get there.

It’s also important to note that it’s not just a matter of causing a crash, you need to save the sample that caused the issue, and you need to make sure that it’s actually crashing. Even a “all okay” result might not be actually a pass, as in some cases, a corrupted file could cause a buffer overflow that, in a standard setup, could let the software keep running — hardened, and other tools, make it nicer to deal with that kind of issues at least…

Josh Saddler a.k.a. nightmorph (homepage, bugs)

a new song: walking home alone through moonlit streets by ioflow

for the 55th disquiet junto, two screws.

the task was to combine do and re by nils frahm into a new work. i chopped “re” into loops, and rearranged sections by sight and sound for a deliberately loose feel. the resulting piece is entirely unquantized, with percussion generated from the piano/pedal action sounds of “do” set under the “re” arrangement. the perc was performed with an mpd18 midi controller in real time, and then arranged by dragging individual hits with a mouse. since the original piano recordings were improvised, tempo fluctuates at around 70bpm, and i didn’t want to lock myself into anything tighter when creating the downtempo beats.

normally i’d program everything to a strict grid with renoise, but for this project, i used ardour3 (available in my overlay) almost exclusively, except for a bit of sample preparation in renoise and audacity. the faint background pads/strings were created with paulstretch. my ardour3 session was filled with hundreds of samples, each one placed by hand and nudged around to keep the jazzy feel, as seen in this screenshot:

ardour3 session

this is a very rough rework — no FX, detailed mixing/mastering, or complicated tricks. i ran outta time to do all the subtle things i usually do. instead, i spent all my time & effort on the arrangement and vibe. the minimal treatment worked better than everything i’d planned.

January 20, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)
RolandDG DXY-800A under Linux (January 20, 2013, 09:35 UTC)

Many moons ago, we acquired an old RolandDG DXY-800A plotter.  This is an early A3 plotter which took 8 pens, driven via either the parallel port or the serial port.

It came with software to use with an old DOS-version of AutoCAD.  I also remember using it with QBasic.  We had the handbook, still do, somewhere, if only I could lay my hands on it.  Either that, or on the QBasic code I used to use with the thing, as that QBasic code did exercise most of the functionality.

Today I dusted it off, wondering if I could get it working again.  I had a look around.  The thing was not difficult to drive from what I recalled, and indeed, I found the first pointer in a small configuration file for Eagle PCB.

The magic commands:

H Go home
Jn Select Pen n (1..8)
Mxxxx,yyyy Move (with pen up) to position xxx.x, yyy.y mm from lower left corner.
Dxxxx,yyyy Draw (with pen down) a line to position xxx.x, yyy.y mm

Okay, this misses the text features, drawing circles and hatching, but it’s a good start.  Everything else can be emulated with the above anyway.  Something I’d have to do, since there was only one font, and I seem to recall, no ability to draw ellipses.
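As a quick smoke test, the commands can be fed straight to the device. Here is a minimal sketch, assuming the plotter answers on /dev/lp0 and accepts newline-terminated commands: it homes the head, picks pen 1 and traces a 100 mm square with its corner 20 mm from the origin (coordinates are tenths of a millimetre, as per the command list above):

$ printf 'H\nJ1\nM0200,0200\nD1200,0200\nD1200,1200\nD0200,1200\nD0200,0200\nH\n' > /dev/lp0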

Inkscape has the ability to export HPGL, so I had a look at what the format looks like.  Turns out, the two are really easy to convert, and Inkscape HPGL is entirely line drawing commands.

hpgl2roland.pl is a quick and nasty script which takes Inkscape-generated HPGL, and outputs RolandDG plotter language. It’s crude, only understands a small subset of HPGL, but it’s a start.

It can be used as follows:

$ perl hpgl2roland.pl < drawing.hpgl > /dev/lp0

January 19, 2013
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
GHC as a cross-compiler (January 19, 2013, 23:34 UTC)

Another small breakthrough today for those who would like to see haskell programs running.

Here is a small, incomplete HOWTO for Gentoo users on how to build a crosscompiler running on an x86_64 host and targeting the ia64 platform.

It is just an example. You can pick any target.

First of all you need to enable haskell overlay and install host compiler:

# GHC_IS_UNREG=yeah emerge -av =ghc-7.6.1

The GHC_IS_UNREG=yeah bit is critical. If we don't set it, the GHC build system will try to build a registerised stage1 (which is already a crosscompiler).

Not setting GHC_IS_UNREG would break things for a couple of reasons:

  • GHC will try to optimize generated bitcode with llvm‘s optimizer which will produce x86_64 instructions, not ia64.

  • GHC will try to run the (broken on ia64) object splitter perl script: ghc-split.lprl.

The rest is rather simple:

# crossdev ia64-unknown-linux-gnu
# ia64-unknown-linux-gnu-emerge sys-libs/ncurses virtual/libffi dev-libs/gmp
# ln -s ${haskell_overlay}/haskell/dev-lang/ghc ${cross_overlay}/ia64-unknown-linux-gnu/ghc
# cd ${cross_overlay}/ia64-unknown-linux-gnu/ghc
# EXTRA_ECONF=--enable-unregisterised USE=ghcmakebinary ebuild ghc-9999.ebuild compile

It will fail, as the following command tries to run an ia64 binary on the x86_64 host:

libraries/integer-gmp/cbits/mkGmpDerivedConstants > libraries/integer-gmp/cbits/GmpDerivedConstants.h

I’ve logged-in to ia64 box and ran mkGmpDerivedConstants to get a GmpDerivedConstants.h. Added the result to the ${WORKDIR} and reran last command.

After the build finished I had a crosscompiler:

sf ghc-9999 # "inplace/bin/ghc-stage1" --info
 [("Project name","The Glorious Glasgow Haskell Compilation System")
 ,("GCC extra via C opts"," -fwrapv")
 ,("C compiler command","/usr/bin/ia64-unknown-linux-gnu-gcc")
 ,("C compiler flags"," -fno-stack-protector  -Wl,--hash-size=31 -Wl,--reduce-memory-overheads")
 ,("ld command","/usr/bin/ia64-unknown-linux-gnu-ld")
 ,("ld flags","     --hash-size=31     --reduce-memory-overheads")
 ,("ld supports compact unwind","YES")
 ,("ld supports build-id","YES")
 ,("ld is GNU ld","YES")
 ,("ar command","/usr/bin/ar")
 ,("ar flags","q")
 ,("ar supports at file","YES")
 ,("touch command","touch")
 ,("dllwrap command","/bin/false")
 ,("windres command","/bin/false")
 ,("perl command","/usr/bin/perl")
 ,("target os","OSLinux")
 ,("target arch","ArchUnknown")
 ,("target word size","8")
 ,("target has GNU nonexec stack","True")
 ,("target has .ident directive","True")
 ,("target has subsections via symbols","False")
 ,("Unregisterised","YES")
 ,("LLVM llc command","llc")
 ,("LLVM opt command","opt")
 ,("Project version","7.7.20130118")
 ,("Booter version","7.6.1")
 ,("Stage","1")
 ,("Build platform","x86_64-unknown-linux")
 ,("Host platform","x86_64-unknown-linux")
 ,("Target platform","ia64-unknown-linux")
 ,("Have interpreter","NO")
 ,("Object splitting supported","NO")
 ,("Have native code generator","NO")
 ,("Support SMP","NO")
 ,("Tables next to code","NO")
 ,("RTS ways","l debug  thr thr_debug thr_l thr_p ")
 ,("Dynamic by default","NO")
 ,("Leading underscore","NO")
 ,("Debug on","False")
 ,("LibDir","/var/tmp/portage/cross-ia64-unknown-linux-gnu/ghc-9999/work/ghc-9999/inplace/lib")
 ,("Global Package DB","/var/tmp/portage/cross-ia64-unknown-linux-gnu/ghc-9999/work/ghc-9999/inplace/lib/package.conf.d")
 ]

# cat a.hs
main = print 1
# "inplace/bin/ghc-stage1" a.hs -fforce-recomp -o a
[1 of 1] Compiling Main             ( a.hs, a.o )
Linking a ...
# file a
a: ELF 64-bit LSB executable, IA-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, not stripped
# LANG=C ls -lh a
-rwxr-xr-x 1 root portage 24M Jan 20 02:24 a
on ia64:
$ ./a
1

Results:

  • It’s not that hard to build a ghc with some exotic target if you have gcc there.

  • mkGmpDerivedConstants needs to be more cross-compiler friendly. It should be really simple to implement; it only queries data sizes/offsets. I think autotools is already able to do it.

  • GHC should be able to run llvm with the correct -mtriple in the crosscompiler case. That way we would get a registerised crosscompiler.

Some TODOs:

In order to coexist with the native compiler, ghc should stop mangling the --target=ia64-unknown-linux-gnu option passed by the user and name the resulting compiler ia64-unknown-linux-gnu-ghc and not ia64-unknown-linux-ghc.

That way I could have many flavours of compiler for one target. For example I would like to have x86_64-pc-linux-gnu-ghc as a registerised compiler and x86_64-unknown-linux-gnu-ghc as an unreg one.

And yes, they will all be tracked by gentoo’s package manager.


January 18, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What we need from daemons (January 18, 2013, 18:04 UTC)

In yesterday's post I noted some things about init scripts: small niceties that init scripts should implement in Gentoo for them to work properly and to solve the issue of migrating pid files to /run. Today I'd like to add a few notes on what I wish all daemons out there implemented at the very least.

First of all, while some people prefer for the daemon not to fork and background by itself, I honestly prefer it to — it makes so many things so much easier. But if you fork, wait until the forked process has completed initialization before exiting! The reason why I'm saying this is that, unfortunately, it's common for a daemon to start up, fork, then load its configuration file and find out there's a mistake … leading to a script that thinks the daemon started properly, while no process is left running. In init scripts, --wait allows you to tell the start-stop-daemon tool to wait for a moment to see if the daemon could start at all, but it's not so nice, because you have to find the correct wait time empirically, and in almost every case you're going to wait longer than needed.

If you will background by yourself, please make sure that you create a pidfile to tell the init system which PID to signal to stop — and if you do have such a pidfile, please do not make it configurable in the configuration file, but set a compiled-in default and possibly allow an override at runtime. The runtime override is especially welcome if your software is supposed to have multiple instances configured on the same box — as then a single pidfile would conflict. Not having it configured in a file means that you no longer need to hack up a parser for the configuration file to be able to know what the user wants; you can rely on either the default or your override.
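To make the init-script side concrete, here is a minimal OpenRC sketch; the daemon name, the paths, the 1000 ms --wait value and the daemon's own --pidfile/--config flags are all made up for illustration:

#!/sbin/runscript

pidfile="/run/mydaemon.pid"

start() {
    ebegin "Starting mydaemon"
    # --wait gives the daemon a moment to parse its configuration and die
    # before we report success; --pidfile tells start-stop-daemon what to
    # check (and signal) later
    start-stop-daemon --start --exec /usr/sbin/mydaemon \
        --pidfile "${pidfile}" --wait 1000 \
        -- --pidfile "${pidfile}" --config /etc/mydaemon.conf
    eend $?
}

stop() {
    ebegin "Stopping mydaemon"
    start-stop-daemon --stop --pidfile "${pidfile}"
    eend $?
}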

Also, if you do intend to support multiple instances of the same daemon, make sure that you allow multiple configuration files to be passed in on the command line. This simplifies the whole handling of multiple instances a lot, and should be mandatory in that situation. Make sure you don't re-use paths in that case either.

If you have messages you should also make sure that they are sent to syslog — please do not force, or even default, everything to log files! We have tons of syslog implementations, and at least the user does not have to guess which one of many files is going to be used for the messages from your last service start — at this point you probably guessed that there are a few things I hope to rectify in Munin 2.1.

I’m pretty sure that there are other concerns that could come up, but for now I guess this would be enough for me to have a much simpler life as an init script maintainer.

Luca Barbato a.k.a. lu_zero (homepage, bugs)
The case of defaults (Libav vs FFmpeg) (January 18, 2013, 17:18 UTC)

I tried not to get into this discussion, mostly because it will degenerate into a mudslinging contest.

Alexis did not take well the fact that Tomáš changed the default provider for libavcodec and related libraries.

Before we start, one point:

I am as biased as Alexis, as we are both involved in the projects themselves. The same goes for Diego, but it does not apply to Tomáš; he is just a downstream by transitivity (libreoffice uses gstreamer, which uses *only* Libav).

Now the question at hand: which should be the default? FFmpeg or Libav?

How to decide?

- Libav has a strict review policy: every patch goes through a review and has to be polished enough before landing in the tree.

- FFmpeg merges daily what had been done in Libav and has a more lax approach on what goes in the tree and how.

- Libav has fate running on most architectures, many of them running Gentoo, usually on real hardware.

- FFmpeg has an old fate with fewer architectures, many of them qemu emulations.

- Libav defines the API

- FFmpeg follows, adding bits here and there to “diversify”

- Libav has a major release per season, minor releases when needed

- FFmpeg releases a lot, touting a lot of *Security*Fixes* (usually old code from ancient times eventually getting fixed)

- Libav does care about crashes and fixes them, but does not claim every crash is a Security issue.

- FFmpeg goes by leaps to add MORE features, no matter what (including picking wip branches from my personal github and merging them before they are ready…)

- Libav is more careful, thus having fewer fringe features and focusing more on polishing before landing new stuff.

So if you are a downstream you can pick what you want, but if you want something working everywhere you should target Libav.

If you are missing a feature from Libav that is in FFmpeg, feel free to point me to it and I’ll try my best to get it to you.

Alexis Ballier a.k.a. aballier (homepage, bugs)

It’s been a while since I wanted to write about this and since there recently has been a sort of hijack without any kind of discussion to let libav be the default implementation for Gentoo, this motivated me.

Exactly two years ago, a group consisting of the majority of FFmpeg developers took over its maintainership. While I didn't like the methods, I'm not an insider, so my opinion stops here, especially since, if you pay attention to who was involved, Luca was part of it. Luca has been a Gentoo developer since before most of us even used Gentoo, and I must admit I've never seen him heating up any discussion, rather the contrary, and it's always been a pleasure to work with him. What happened next, after a lot of turmoil, is that the developers split into two groups: libav, formed by the “secessionists”, and FFmpeg.

Good, so what do we choose now? One of the first things that was done on the libav side was to “clean up” the API with the 0.7 release, meaning we had to fix almost all its consumers: a bad idea if you want wide adoption of a library that has a history of frequently changing its API and breaking all its consumers. Meanwhile, FFmpeg maintained two branches: the 0.7 branch compatible with the old API and the 0.8 one with the new API. The two branches were supposed to be identical except for the API change. On my side the choice was easy: thanks but no thanks, sir, I'll stay with FFmpeg.
FFmpeg, while having its own development and improvements, has been doing daily merges of all libav changes, often with an extra pass of review and checks, so I can even benefit from all the improvements from libav while using FFmpeg.

So why should we use libav? I don't know. Some projects use libav within their release process, so they are likely to be much better tested with libav than with FFmpeg. However, until I see real bugs, I consider this pure supposition and I have yet to see real facts. On the other hand, I can see lots of false claims, usually without any kind of reference: Tomáš claims that there's no failure that is libav-specific; well, some bugs tend to say the contrary and have been open for some time (I'll get back to XBMC later). Another false claim is that FFmpeg-1.1 will have the same failures as libav-9: since Diego made a tinderbox run for libav-9, I made the tests for FFmpeg 1.1 and made the failures block our old FFmpeg 0.11 tracker. If you click the links, you will see that the number of blockers is much smaller (something like 2/3) for the FFmpeg tracker. Another false claim I have seen is that there will be libav-only packages: I have yet to see one; the example I got as an answer is gst-plugins-libav, which unfortunately is in the same shape for both implementations.

In theory FFmpeg-1.1 and libav-9 should be on par, but in practice, after almost two years of disjoint development, small differences have started to accumulate. One of them is the audio resampling library: while libswresample has been in FFmpeg since the 0.9 series, libav developers did not want it and made another one, with a very similar API, called libavresample, which appeared in libav-9. This smells badly of NIH syndrome, but to be fair, it’s not the first time such things happen: libav and FFmpeg developers tend to write their own codecs instead of wrapping external libraries and usually achieve better results. The audio resampling library is why XBMC being broken with libav is, at least partly, my fault: while cleaning up its API usage of FFmpeg/libav, I made it use the public API for audio resampling, initially with libswresample, but made sure it worked with libavresample from libav. At that time, this would have meant requiring libav git master, since libav-9 was not even close to being released, so there was no point in trying to make it compatible with such a moving target. libswresample from FFmpeg has been present since the 0.9 series, released more than a year ago. Meanwhile, XBMC-12 has entered its release process, meaning it will probably not work with libav easily. Too late, too bad.

Another important issue I’ve raised is security holes: nowadays, we are much more exposed to them. Instead of having to send a specially crafted video to my victim and make him open it with the right program, I only have to embed it in an HTML5 webpage and wait. This is why I am a bit concerned that security issues fixed 7 months ago in FFmpeg have only been fixed with the recently released libav-0.8.5. I’ve been told that these issues are just crashes and have been fixed in a better way in libav: this is probably true, but I still consider the delay huge for such an important component of modern systems, and, thanks to FFmpeg merging from libav, the better fix will also land in FFmpeg. I have also been told that this will improve on the libav side, but again, I want to see facts rather than claims.

As a conclusion: why was the default implementation changed? Some people seem to like it better and use false claims to force their preference. Is it a good idea for our users? Today, I don’t think so (remember: FFmpeg merges from libav and adds its own improvements); maybe later, when we have some clear evidence that libav is better (the improvements might be buggy or the merges might lead to subtle bugs). Will I fight to get the default back to FFmpeg? No. I use it, will continue to use and maintain it, and will support people who want the default back to FFmpeg, but that’s about it.


January 17, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The unsolved problem of the init scripts (January 17, 2013, 23:31 UTC)

Probably one of the biggest problems with maintaining software in Gentoo where a daemon is involved is dealing with init scripts. And it’s not really a problem just with Gentoo, as almost every distribution or operating system has its own way to handle init scripts. I guess this is one of the nice ideas behind systemd: having a single standard for daemons to start, stop and reload is definitely a positive goal.

Even if I’m not sure myself whether I want the whole init system to be collapsed into a single one for every single operating system out there, there at least is a chance that upstream developers will provide a standard command line for daemons so that init scripts no longer have to carry a hundred lines of pre-start setup commands. Unfortunately I don’t have much faith that this is going to change any time soon.

Anyway, let’s leave the daemons themselves alone, as that’s a topic for a post of its own and I don’t care to write it now. What remains is the init script itself. Now, while it seems quite a few people didn’t know about this before, OpenRC has supported almost forever a more declarative approach to init scripts: you set just a few variables, such as command, pidfile and similar, and the script works, as long as the daemon follows the most generic approach. The whole documentation for this kind of script is in the runscript man page and I won’t bore you with the details of it here.
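
To give a flavour of it, a minimal declarative script for a hypothetical daemon (let’s call it food — the daemon name, paths and arguments here are made up, but the variable names are the ones runscript actually understands) could look like this:

#!/sbin/runscript
# Hypothetical declarative init script: OpenRC's default start/stop
# functions use these variables, so no start()/stop() has to be written.
command="/usr/sbin/food"
command_args="--config /etc/food.conf"
pidfile="/run/food.pid"

depend() {
    need net
}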

Besides the declaration of what to start, there are a few more issues that are now handled, to different degrees, per init script rather than in a more comprehensive and seamless fashion. Unfortunately, I’m afraid that this is likely going to stay the same way for a long time, as I’m sure that some of my fellow developers won’t care to implement the trickiest parts that can be implemented, but at least I can try to give a few ideas of what I found out while spending time on said init scripts.

So the number one issue is of course the need to create the directories the daemon will use beforehand, if they are to be stored on temporary filesystems. What happened is that one of the first changes that came with the whole systemd movement was to create /run and use that to store pidfiles, locks and other stateless runtime files, mounting it as tmpfs at runtime. This was something I was very interested in to begin with, because I was doing something similar before, on the router with a CF card (through an EIDE adapter) as hard disk, to avoid writing to it at runtime. Unfortunately, more than a year later, we still have lots of ebuilds out there that expect /var/run paths to be maintained from the merge to the start of the daemon. At least now there’s enough consensus about it that I can easily open bugs for them instead of just ignoring them.

For daemons that need /var/run it’s relatively easy to deal with the missing path; while a few scripts do use mkdir, chown and chmod to handle the creation of the missing directories, there is a really neat helper to take care of it, checkpath — which is also documented in the aforementioned man page for runscript. But there are many other places where the two directories are used which are not initiated by an init script at all. One of these happens to be my dear Munin’s cron script used by the master — what to do then?
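
For the init-script case, a checkpath call in start_pre is all it takes; a sketch, with the directory, owner and mode picked purely as an example:

start_pre() {
    # recreate the runtime directory on the tmpfs-backed /run at every start
    checkpath -d -o munin:munin -m 0755 /run/munin
}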

The case of paths not managed by an init script has actually been among the biggest issues regarding the transition. It was the original reason why screen was changed to save its sockets in the users’ home directories instead of the previous /var/run/screen path — with relatively bad results all over, including me deciding to just move to tmux. In Munin, I decided to solve the issue by installing a script in /etc/local.d so that on start the /var/run/munin directory would be created … but this is far from a decent, standard way to handle things. Luckily, there actually is a way to solve this that has been standardised, to some extent — it’s called tmpfiles.d and was also introduced by systemd. While OpenRC implements the same basics, because of the differences in the two init systems not all of the features are implemented, in particular the automatic cleanup of the files on a running system — on the other hand, that feature is not fundamental for the needs of either Munin or screen.
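
The format is simple enough that a single line covers a case like Munin’s; something along these lines (a sketch, using Munin’s directory as the example):

# /usr/lib/tmpfiles.d/munin.conf — create /run/munin at boot
# type  path        mode  user   group  age
d       /run/munin  0755  munin  munin  -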

There is an issue with the way these files should be installed, though. For most packages, the correct path to install to would be /usr/lib/tmpfiles.d, but the problem with this is that on a multilib system you could easily end up with both /usr/lib and /usr/lib64 as directories, causing Portage’s symlink protection to kick in. I’d like to have a good solution to this, but honestly, right now I don’t.

So we have the tools at our disposal; what remains to be done then? Well, there’s still one issue: which path should we use? Should we keep /var/run to be compatible, or should we just decide that /run is a good idea and run with it? My gut says the latter at this point, but it means that we have to migrate quite a few things over time. I actually started now on porting my packages to use /run directly, starting from pcsc-lite (since I had to bump it to 1.8.8 yesterday anyway) — Munin will come with support for tmpfiles.d in 2.0.11 (unfortunately, it’s unlikely I’ll be able to add support for it upstream in that release, but in Gentoo it’ll be there). Some more of my daemons will be updated as I bump them, as I already spent quite a lot of time on those init scripts to hone them down on some more issues that I’ll delineate in a moment.

For some, but not all!, of the daemons it’s actually possible to decide the pidfile location on the command line — for those, the solution to handle the move to the new path is dead easy, as you just make sure to pass something equivalent to -p ${pidfile} in the script, then change the pidfile variable, and you’re done. Unfortunately that’s not always an option, as the pidfile can either be hardcoded into the compiled program, or read from a configuration file (the latter is the case for Munin). In the first case, no big deal: you change the configuration of the package, or worst case you patch the software, make it use the new path, update the init script and you’re done… in the latter case, though, we have trouble at hand.

If the location of the pidfile is to be found in a configuration file, even if you change the configuration file that gets installed, you can’t count on the user actually updating it, which means your init script can easily get out of sync with the configuration file. Of course there’s a way to work around this, and that is to actually get the pidfile path from the configuration file itself, which is what I do in the munin-node script. To do so, you need to look at the syntax of the configuration file. In the case of Munin, the file is just a set of key-value pairs separated by whitespace, which means a simple awk call can give you the data you need. In some other cases, the configuration file syntax is so messed up that getting the data out of it is impossible without writing a full-blown parser (which is not worth it). In that case you have to rely on the user to actually tell you where the pidfile is stored, and that’s quite unreliable, but okay.
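
For the record, the extraction for a whitespace-separated file like munin-node.conf boils down to something like this (the pid_file key and the fallback path are how I recall Munin’s configuration — treat them as an example rather than gospel):

# read the pidfile location from the daemon's own configuration,
# falling back to a sane default if the key is not set
pidfile=$(awk '$1 == "pid_file" { print $2 }' /etc/munin/munin-node.conf)
: ${pidfile:=/run/munin-node.pid}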

There is of course one thing that needs to be said now: what happens when the pidfile changes in the configuration between start and stop? If you’re reading the pidfile out of a configuration file, it is possible that the user, or the ebuild, changed it in between, causing quite big headaches when trying to restart the service. Unfortunately my users experienced this when I changed Munin’s default from /var/run/munin/munin-node.pid to /var/run/munin-node.pid — the change was possible because the node itself runs as root, and then drops privileges when running the plugins, so there is no reason to wait for the subdirectory, and since most nodes will not have the master running, /var/run/munin wouldn’t be useful there at all. As I said, though, it caused the started node to use one pidfile path, and the init script another, failing to stop the service before starting it anew.

Luckily, William corrected it, although the fix is not out yet — the next OpenRC release will save some of the variables used at start time, allowing this kind of problem to be nipped in the bud without having to add tons of workarounds in the init scripts. It will require some changes in the functions for graceful reloading, but that’s, in retrospect, a minor detail.

There are a few more niceties that you could add to init scripts in Gentoo to make them more foolproof and more reliable, but I suppose this covers the main points that we’re hitting nowadays. I suppose for me it’s just going to be time to list and review all the init scripts I maintain, which are quite a few.

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Perl::Critic CERT Theme (January 17, 2013, 18:18 UTC)

So, Brian d Foy has compiled the CERT recommendations for securely programming in Perl. I’ve whipped up a perlcriticrc for it.

I’ve checked out the Subversion repository of Perl::Critic and will submit the simple patch… if somebody else hasn’t beaten me to it.

January 15, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Right at the start, the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into the Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all the experiments. Some time ago we picked Pd0.3Ni0.7 as contact material since the palladium generates only a low resistance between the nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties.
Three methods are used to obtain data - SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"
D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. Strunk
Journal of Applied Physics 113, 034303 (2013); arXiv:1208.2163 (PDF[*])
[*] Copyright American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics.

Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)

UPDATE: Added some basic migration instructions to the bottom.
UPDATE2: Removed mplayer incompatibility mention. Mplayer-1.1 works with system libav.

As the summary says the default media codec provider for new installs will be libav instead of ffmpeg.

This change is being done for various reasons, like matching the default with Fedora and Debian, or due to the fact that some high-profile projects (eg a sh*tload of people use them) will probably be libav-only. One example is gst-libav, which in turn is required by libreoffice-4, due for release in about a month. To cause the least pain for users, we decided to move from ffmpeg to libav as the default library.

This change won’t affect your current installs at all, but we would like to ask you to try migrating to libav, test it, and report any issues. That way, if something happens in the future and we are forced to make libav the only implementation for everyone, you are not left in the dark screaming about your suddenly missing features.

What to do when some package does not build with libav but ffmpeg is fine

There are no such packages left around if I am searching correctly (might be my blindness so do not take my word for it).

So if you encounter any package not building with libav, just open a bug report on Bugzilla, assign it to the media-video team and add lu_zero[at]gentoo.org to CC to be sure he really takes a sneaky look to fix it. If you want to fix the issue yourself, it gets even better: you write the patch, open the bug in our bugzie, and someone will include it. The patch should also be sent upstream for inclusion, so we don’t have to keep patches in the tree for a long time.

What should I do when I have some issues with libav and I require features that are in ffmpeg but not in libav

It’s easier than fixing bugs about failing packages. Just nag lu_zero (mail hidden somewhere in this post ;-)) and read this.

So when is this stuff going to ruin my day?

The switch in the tree, and a news item informing all users of media-video/ffmpeg, will happen at the end of January or early February, unless something really bad happens while you guys test it now.

I feel lucky and I want to switch right away so I can ruin your day by reporting bugs

Great, I am really happy you want to contribute. The libav switch is pretty easy to do, as there are only two things to keep in mind.

You have to sync your USE flags between virtual/ffmpeg and the soon-to-be-switched media-video/libav. This is probably best done by editing your package.use and replacing the media-video/ffmpeg line with a media-video/libav one, for example:
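
(The USE flags here are just illustrative; keep whatever you had on the ffmpeg line.)

# before
media-video/ffmpeg  X encode theora threads truetype
# after
media-video/libav   X encode theora threads truetype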

Then one would go straight for emerge libav, but there is one more caveat: libav has split out the libpostproc library, while ffmpeg is still using the internal one. Code-wise they are most probably equal, but you have to account for it, so just call emerge with both libraries:
emerge -1v libav libpostproc

If this succeeds, you have to revdep-rebuild the packages you have, or use @preserved-rebuild from portage-2.2, to rebuild all the reverse dependencies of libav. For example:
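
Either of the following should do the trick:

revdep-rebuild                  # gentoolkit: rebuild anything linked against the old libraries
emerge -av @preserved-rebuild   # portage-2.2: rebuild consumers of preserved libraries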

Good luck and happy bug hunting.

January 14, 2013

Many times, when I had to set up make.conf on systems with particular architectures, I was in doubt about the best --jobs value.
The handbook suggests ${core} + 1, but since I’m curious I wanted to test it myself to be sure this is right.

To make a good test we need a package with a respectable build system, one that respects make parallelization and takes at least a few minutes to compile. Otherwise, with packages that compile in a few seconds, we are unable to measure any real difference.
kde-base/kdelibs is, in my opinion, perfect.

If you are on an architecture where kde-base/kdelibs is unavailable, just switch to another cmake-based package.

Now, download best_makeopts from my overlay. Below is an explanation of what the script does, along with various suggestions.

  • You need to compile the package on a tmpfs filesystem; I’m assuming you have /tmp mounted as tmpfs too.
  • You need to have the tarball of the package on tmpfs as well, because a slow disk may add extra time.
  • You need to switch your governor to performance.
  • You need to be sure you don’t have strange EMERGE_DEFAULT_OPTS.
  • You need to add ‘-B’ because we don’t want to include the time of the installation.
  • You need to drop the existing caches before each compile.

As you can see, the for loop will emerge the same package with MAKEOPTS from -j1 to -j10. If you have, for example, a single-core machine, running the loop from 1 to 4 is enough.
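
The core of the script is nothing more than a loop along these lines (a simplified sketch, not the actual best_makeopts code):

for j in $(seq 1 10); do
    sync; echo 3 > /proc/sys/vm/drop_caches               # drop the kernel caches first
    time MAKEOPTS="-j${j}" emerge -1vB kde-base/kdelibs   # -B: build only, don't install
done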

Please, during the test, don’t use the CPU for other purposes, and if you can, stop all services and run the test from a tty; you will see the time for every merge.

The following is an example on my machine:
-j1 : real 29m56.527s
-j2 : real 15m24.287s
-j3 : real 13m57.370s
-j4 : real 12m48.465s
-j5 : real 12m55.894s
-j6 : real 13m5.421s
-j7 : real 13m13.322s
-j8 : real 13m23.414s
-j9 : real 13m26.657s

The hardware is:
Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz, which has 2 cores and 4 threads.
After -j4 you can see the regression.

Another example from an Intel Itanium with 4 CPUs.
-j1 : real 4m24.930s
-j2 : real 2m27.854s
-j3 : real 1m47.462s
-j4 : real 1m28.082s
-j5 : real 1m29.497s

I tested this script on ~20 different machines, and in the majority of cases the best value was ${core}, or more precisely ${threads}, of your CPU.

Conclusion:
From the handbook:

A good choice is the number of CPUs (or CPU cores) in your system plus one, but this guideline isn’t always perfect.

I don’t know who, years ago, suggested ${core} + 1 in the handbook, and I don’t want to trigger a flame. I’m just saying that ${core} + 1 is not the best value for me, and the test confirms the part: “but this guideline isn’t always perfect”.

In all cases ${threads} + ${X} is slower than just ${threads}, so don’t use -j20 if you have a dual-core CPU.

Also, I’m not saying you must use ${threads}; I’m just saying feel free to run your own tests to see what the best value is.
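
If you just want the thread count to start your own tests from, either of these works on a recent system:

nproc                                # hardware threads seen by the kernel (coreutils)
grep -c '^processor' /proc/cpuinfo   # the same, without relying on nproc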

If you have suggestions to improve the functionality of the script, or you think the script is wrong, feel free to comment or send me an email.

January 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Book review — Amusing Ourselves to Death (January 13, 2013, 21:35 UTC)

This is a tricky review to write because I’m having a very bad time finishing this book. Indeed, while it did start well, and I was actually interested in the idea behind the book, it easily got nasty, in my mind. But let’s start from the top, and let me try to write a review of a book I’m not sure I’ll be able to finish without feeling ill.

I found the book, Amusing Ourselves to Death, through a blog post in one of the Planets I follow, and I found the premise extremely interesting: has the coming of the show-business era meant that people are so submersed in entertainment that they lose sight of the significance of news? Unfortunately, as I said, the book itself, to me, does not make the point properly, as it exaggerates to the point of no return. While the book was written in 1985 – which means it has no way to account for how the Web changed media once again – the introduction, written by the author’s son, proposes that it is still relevant today. I find that proposition unrealistic. It goes as far as stating that most of the students who were told to read the book agreed with it — I would venture a guess that most of them didn’t want to disagree with their teacher.

First of all, the author is a typography snob, and that can easily be seen when he spends pages and pages telling all the nice things about the printed word — at the same time taking swipes at the earlier “medium” of the spoken word. But while I do agree with one of the big points in the book (the fact that different forms make discourse “change” — after all, my blog posts have a different tone from Autotools Mythbuster, and from my LWN articles), I do not think that a different tone makes it more or less “valid”. Indeed this is why I find it extremely absurd that, for Wikipedia, I’m unreliable when writing on this blog, but I’m perfectly reliable the moment I write Autotools Mythbuster.

Now, if you were to take the first half of the book and title it something like “History of the printed word in early American history”, it would be a very good and enlightening read. It helps a lot to put the history of America into context, especially compared to Europe — I’m not much of an expert in history, but it’s interesting to note how in America the religious organisations themselves sponsored literacy, while in Europe Catholicism tried its best to keep people within the confines of illiteracy.

Unfortunately, he then starts telling how evil the telegraph was for bringing in news from remote places that people, in the author’s opinion, have no interest in and should have no right to know… and the same kind of evilness is pointed out in photography (including the idea that photography has no context because there is no way to take a photograph out of context… which is utterly false, as many of us have seen during the reporting of recent wars; okay, it’s all gotten much easier thanks to Photoshop, but in no way was it impossible in the ’80s).

Honestly, while I can understand having a foregone conclusion in mind, after explaining how people changed the way they speak with the advent of TV, no longer caring about syntax frills and similar, trying to say that on TV the messages are drowned in a bunch of irrelevant frills is… a bit senseless. In the same way it is senseless to me to say that typography is “pure message” — without even acknowledging that presentation is an issue for typography as much as for TV; after all, we wouldn’t have font designers otherwise.

While some things are definitely interesting to read – like the note about the use of pamphlets in early American history, which can easily be compared to blogs today – the book itself is a bust, because there is no premise of objectivity: it’s just a long text looking for reasons to reach the conclusion the author already had in mind… and that’s not what I like to read.

Hopefully it’ll go better with my next read.

Sebastian Pipping a.k.a. sping (homepage, bugs)

I’m late with this, but… If you have not seen this talk yet, you might want to. As usual with Jacob, very interesting and inspiring.

On Aaron Swartz (January 13, 2013, 18:43 UTC)

Through both LWN and netzpolitik.org I just heard that Aaron Swartz has committed suicide. While watching his speech “How we stopped SOPA” his name rang a bell with me, so I looked into my inbox and found that he and I once had a brief chat about html2text, a piece of free software of his that I was in touch with in the context of Gentoo Linux. So there is this software, his website, these past mails, this amazing talk, his political work that I didn’t know about… and he’s dead. It only takes a few minutes of watching the talk to get the feeling that this is a great loss to society.

January 10, 2013
Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Make django-staticfiles to follow DRY principle (January 10, 2013, 07:03 UTC)

When you work with Django, and especially with static files or other template tags, you realize that you have to include {% load staticfiles %} in all your template files. This violates the DRY principle because we have to repeat the {% load staticfiles %} template tag in each template file.

Let’s give an example.

We have a base.html file which links some JavaScript and CSS files from our static folder.

{% load staticfiles %}
<!DOCTYPE html>
<html>
    <head>
        <title>Webapp</title>
        <link rel="stylesheet" type="text/css" href="{% static "css/random-css.css" %}">
        <script type="text/javascript" src="{% static "js/random-javascript.js" %}"></script>
        {% block extra_js_top %}{% endblock %}
    </head>
...
</html>

We also have index.html, which extends base.html and, in addition, loads some extra JavaScript.

{% extends "base.html" %}
{% load staticfiles %}
{% block extra_js_top %}
    <script type="text/javascript" src="{% static "js/extra-javascript.js" %}"></script>
{% endblock %}

As you can see, I load staticfiles again in index.html. If I remove it, I get this error: “TemplateSyntaxError at /, Invalid block tag ‘static’”. Unfortunately, even though index.html extends base.html, it does not inherit the load template tag from that file, so staticfiles is not loaded in index.html, which means our extra JavaScript file will not be loaded either.
The truth is that there is a hack-y way to avoid this. After a little research I finally found a way to follow the DRY principle and avoid repeating the {% load staticfiles %} template tag in every template file.

Open one of the files that are loaded automatically at startup (settings.py, urls.py or models.py). I will use settings.py.
So we add the following to settings.py:

from django import template
 
#django-staticfiles DRY principle
template.add_to_builtins('django.contrib.staticfiles.templatetags.staticfiles')

With that snippet of code we load staticfiles “globally” and we don’t have to load staticfiles in every template (not even in base.html), because it is loaded from the start.

PS: On big projects this approach may not always be considered ‘correct’; it is a somewhat unconventional technique and may confuse other developers.

I hope it will be useful.

Happy django-ing.

Further reading:

January 07, 2013
Alex Alexander a.k.a. wired (homepage, bugs)

Passwords. No one likes them, but everybody needs them. If you are concerned about your online safety, you probably have unique passwords for your critical accounts and some common pattern for all the almost-useless accounts you create when browsing the web.

At first I used to save my passwords in a gpg encrypted file. Over time however, I began using Firefox’s and Chrome’s password managers, mostly because of their awesome synching capabilities and form auto-filling.

Unfortunately, convenience comes at a price. I ended up relying on the password managers a bit too much, using my password pattern all over the place.

Then it hit me: I had strayed too much. Although my main accounts were relatively safe (strong passwords, two factor authentication), I had way too many weak passwords, synced on way too many devices, over syncing protocols of questionable security.

Looking for a better solution, I stumbled upon LastPass. Although LastPass uses an interesting security model, with passwords encrypted locally and a password generator that helps you maintain strong passwords for all your accounts, I didn’t like depending on an external service for something so critical. Its UI also left something to be desired.

Meet “pass“.

A Unix command line tool that takes advantage of commonly used tools like gnupg and git to provide safe storage for your passwords and other critical information.

Pass’ concept is simple. It creates one file for each of your passwords, which it then encrypts using gpg and your key. You can provide your own passwords or ask it to generate strong ones for you automatically.

When you need a password you can ask pass to print it on screen or copy it to the clipboard, ready for you to paste in the desired password field.

Pass can optionally use git, allowing you to track the history of your passwords and sync them easily among your systems. I have a Linode server, so I use that + gitolite to keep things synced.
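
A typical session looks roughly like this (the store entries and length are just examples):

pass init "My GPG key ID"          # create the store, encrypted to your key
pass git init                      # optional: version the store with git
pass generate web/example.com 16   # generate and store a 16-character password
pass -c web/example.com            # copy it to the clipboard, cleared after 45 seconds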

Installation and usage of the tool is straightforward, with clean instructions and bash completion support that makes it even easier to use.

All this does come at a cost, since you lose the ability to auto-save passwords and fill out forms. But this is a small price to pay compared to the security benefits gained. I also love the fact that you can access your passwords with standard Unix tools in case of emergencies. The system is also useful for securely storing other critical information, like credit cards.

Pass is not for everyone and most people would be fine using something like LastPass or KeePass, but if you’re a Unix guy looking for a solid password management solution, pass may be what you’re looking for :)

Pass was written by zx2c4 (thanks!) and is available in Gentoo’s Portage:

emerge -av pass

For more information visit the project’s website at http://zx2c4.com/projects/password-store/

Jeremy Olexa a.k.a. darkside (homepage, bugs)
My holidays in Greece were excellent (January 07, 2013, 09:57 UTC)

No, the country is not in flames or rioting every day. Bad media, bad.

I spent 12 days in Greece. The Greek hospitality is superb; I cannot ask for better friends in Greece. I first arrived in Thessaloniki and stayed there for a few nights. Then I went to Larissa and stayed with my friend and his family. There was a small communication barrier with his parents in this smaller town; they don’t get too many tourists. However, I had a very nice Christmas there and it was nice to be with such great people over the holidays. I went to a namesday celebration. Even though I couldn’t understand most of the conversations, they still welcomed me, gave me food and wine, and exchanged cultural information. Then I went to Athens, stayed in a hostel, and spent New Year’s watching the fireworks over the Acropolis and the Parthenon. Cool experience! It was so great to be walking around the birthplace of “western ideals” – not the oldest civilization, but close. Some takeaway thoughts: 1) Greek hospitality is unlike anything I’ve experienced, really. I made sure that I told everyone that they have an open door with me whenever we meet in “my new home” (meaning, I don’t know when or where), 2) you cannot go hungry in Greece, especially when they are cooking for you! 3) the cafe culture is great, 4) I want to go back during the summer.

Of course, you will always find the not so nice parts. I got fooled by the old man scam, as seen here. Luckily, they only got 30€ from me, compared to some of the stories I’ve heard. Looking back on it, I just laugh at myself. Maybe I’ll be jaded towards a genuine experience in the future but, lesson learned. I don’t judge Athens by this one mishap, however.

Greece - Dec 2012-22

I only have pictures of Athens since I had to buy a new camera… Pics here

January 06, 2013
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: ice is given (January 06, 2013, 09:31 UTC)

a new song: ice is given by ioflow

piano improvisation and ambient recordings for the 53rd disquiet junto, ice for 2013.

the assignment was to record the sound of ice in a glass, and make something of it.

the track picture shows my lo-fi setup for the field recording segment. i balanced a logitech USB microphone (which came with the Rock Band game) on a box of herbal tea (to keep it off the increasingly wet kitchen table), and started dropping ice cubes into a glass tumbler. audible is the initial crack and flex of the tray, scrabbling for cubes, tossing them into the cup. i made a point of recording the different tone of cubes dropped into a glass of hot water. i also filled the cup with ice, then recorded the sound of water running into it from the kitchen tap. i liked this sound enough to begin the song with it.

i decided that my first song of 2013 should incorporate the piano, so with the ice cubes recorded, i sat down to improvise an appropriately wintry melody. the result is a simple two-minute minor motif. i turned to the ardour3 beta to integrate the field recordings and the piano improvisation.

it’s been a while since i last used my strymon bluesky reverb pedal, so i figured i should use it for this project. i set up a feedback-free hardware effects loop using my NI Komplete Audio6 interface with the help of the #ardour IRC channel, and listened to the piano recording as it ran through fairly spacious settings on the BSR. (normal mode, room type, decay @ 3:00, predelay @ 11:00, low damp @ 4:00, high damp @ 8:00). with just a bit of “send” to the reverb unit, the piano really came to life.

i added a few more tracks in ardour for the ice cube snippets, with even more subtle audio sends to the BSR, and laid out the field recordings. i pulled them apart in several places, copying and pasting segments throughout the song; minimal treatment was needed to get a good balance of piano and ice.

ardour3 session

working environment in ardour3. laying out hardware FX and tracks.

title reference: Job 37:10

January 04, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)
DIY Project: Gatsby cap (January 04, 2013, 22:13 UTC)

Those who have met me, might notice I have a somewhat unusual taste in clothing. One thing I despise is having clothes that are heavily branded, especially when the local shops then charge top dollar for them.

Where hats are concerned, I’m fussy. I don’t like the boring old varieties that abound in $2 shops everywhere. I prefer something unique.

The mugshot of me with my Vietnamese coolie hat is probably the one most people on the web know me by. I was all set to try and make one, and I had an idea how I might achieve it, bought some materials I thought might work, but then I happened to be walking down Brunswick Street in Brisbane’s Fortitude Valley and saw a shop selling them for $5 each.

I bought one and have been wearing it on and off ever since. Or rather, I bought one, it wore out, I was given one as a present, wore that out, got given two more. The one I have today is #4.

I find them quite comfortable, lightweight, and most importantly, they’re cool and keep the sun off well. They are also one of the few full-brim designs that can accommodate wearing a pair of headphones or a headset underneath. Being cheap is a bonus. The downside? One is that I find them very divisive: people either love them or hate them — that said, I get more compliments than complaints. The other is that they try to take off with the slightest bit of wind, and are quite bulky and somewhat fragile to stow.

I ride a bicycle to and from work, and so it’s just not practical to transport. Hanging around my neck, I can guarantee it’ll try to break free the moment I exceed 20km/h… if I try to sit it on top of the helmet, it’ll slide around and generally make a nuisance of itself.

Caps stow much more easily. Not as good sun protection, but they can still look good. I’ve got a few baseball caps, but they’re boring and a tad uncomfortable. I particularly like the old vintage gatsby caps — often worn by the 1930s working class. A few years back, on my way to uni, I happened to stop by a St. Vinnies shop near Brisbane Arcade (sadly, they have since closed and moved on) and saw a gatsby-style denim cap going for about $10. I bought it, and people commented that the style suited me. This one was a little big on me, but I was able to tweak it a bit to make it fit.

Fast forward to today: it is worn out — the stitching is good, but there are significant tears in the panelling and the embedded plastic in the peak is broken in several places. I looked around for a replacement, but alas, they’re as rare as hens’ teeth here in Brisbane, and no, I don’t care for ordering from overseas.

Down the road from where I live, I saw that the local sports/fitness shop was selling those flat neoprene sun visors for about $10 each. That gave me an idea — could I buy one of these and use it as the basis of a new cap?

These things basically consist of a peak and headband, attached to a dome consisting of 8 panels.  I took apart the old faithful and traced out the shape of one of the panels.

Now I already had the headband and peak sorted out from the sun visor I bought, these aren’t hard to manufacture from scratch either.  I just needed to cut out some panels from suitable material and stitch them together to make the dome.

There are a couple of parameters one can experiment with that change the visual properties of the cap. Gatsby caps could be viewed as an early precursor to the modern baseball cap. The prime difference is the shape of the panels.

Measurements of panel from old cap

The above graphic is also available as a PDF or SVG image.  The key measurements to note are A, which sets the head circumference, C which tweaks the amount of overhang, and D which sets the height of the dome.

The head circumference is calculated as ${panels}×${A}, so in the above case 8 panels with a measurement of 80mm means a head circumference of 640mm — hence why it never quite fitted me (58cm is about my size). I figured a measurement of about 75mm would do the trick.

B and C are actually two of the three parameters that separate a gatsby from the more modern baseball cap. The other parameter is the length of the peak. A baseball cap sets these to make the overall shape much more triangular, increasing B to about half of D, and tweaking C to make the shape more spherical.

As for the overhang, I decided I’d increase this a bit, increasing C to about 105mm.  I left measurements B and D alone, making a fairly flattish dome.

For each of these measurements, once you come up with values that you’re happy with, add about 10mm to A, C and D for the actual template measurements to give yourself a fabric margin with which to sew the panels together.

As for material, I didn’t have any denim around, but on my travels I saw an old towel that someone had left by the side of the road — likely an escapee.  These caps back in the day would have been made with whatever material the maker had to hand.  Brushed cotton, denim, suede leather, wool all are common materials.  I figured this would be a cheap way to try the pattern out, and if it worked out, I’d then see about procuring some better material.

Below are the results; click on the images to enlarge. I found that, because this was my first attempt and I just roughly cut the panels from a hand-drawn template, the panels didn’t quite meet in the middle. This is hidden by a small circular patch where the panels normally meet. Traditionally a button is sewn here. I sewed the patch from the underside so as to hide its edges.

Hand-made gatsbyHand-made gatsby (Underside)

Not bad for a first try. I note I didn’t quite get the panels aligned dead centre; the seam between the front two is off centre by about 15mm. The design looks alright to my eye, so I might look around for some suede leather and see if I can make a dressier one for more formal occasions.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
Signal handler safety, re-entering malloc (January 04, 2013, 20:23 UTC)

This is a story from real-world development. From signal(7):


   Async-signal-safe functions
       A  signal  handler  function must be very careful,
       since processing elsewhere may be interrupted at some
       arbitrary point in the execution of the program.
       POSIX has the concept of "safe function".  If a signal
       interrupts the execution of an  unsafe  function,
       and handler calls an unsafe function, then the behavior
       of the program is undefined.


After that a list of safe functions follows, and one notable thing is that malloc and free are async-signal-unsafe!

I hit this issue while enabling tcmalloc's debugallocation for Chromium Debug builds. We have a StackDumpSignalHandler for tests, which prints a stack trace on various crashing signals for easier debugging. It's very useful, and worked fine for a pretty long while (which means that "but it works!" is not a valid argument for doing unsafe things).

Now when I enabled debugallocation, I noticed hangs triggered by the stack trace display. In one example, this stack trace:

@0  0x00000000019c6c85 in tcmalloc::Abort () at third_party/tcmalloc/chromium/src/base/abort.cc:15
@1 0x00000000019b39c1 in LogPrintf (severity=-4,
pat=0x32aeb18 "memory allocation/deallocation mismatch at %p: allocated with %s being deallocated with %s", ap=0x7fff52c379e8)
at third_party/tcmalloc/chromium/src/base/logging.h:210
@2 0x00000000019b3a8b in RAW_LOG (lvl=-4,
pat=0x32aeb18 "memory allocation/deallocation mismatch at %p: allocated with %s being deallocated with %s")
at third_party/tcmalloc/chromium/src/base/logging.h:230
@3 0x00000000019c3fb1 in MallocBlock::CheckLocked (this=0x7fd18f143400, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:461
@4 0x00000000019c3c42 in MallocBlock::CheckAndClear (this=0x7fd18f143400, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:401
@5 0x00000000019c436a in MallocBlock::Deallocate (this=0x7fd18f143400, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:557
@6 0x00000000019c1929 in DebugDeallocate (ptr=0x7fd18f143420, type=-21308287)
at ./third_party/tcmalloc/chromium/src/debugallocation.cc:998
@7 0x00000000028d1482 in tc_delete (p=0x7fd18f143420) at ./third_party/tcmalloc/chromium/src/debugallocation.cc:1232
@8 0x000000000097dc04 in cc::ResourceProvider::deleteResourceInternal (this=0x7fd191827da0, it=...) at cc/resource_provider.cc:242
@9 0x000000000097daaf in cc::ResourceProvider::deleteResource (this=0x7fd191827da0, id=1) at cc/resource_provider.cc:230
@10 0x00000000006f9824 in (anonymous namespace)::ResourceProviderTest_Basic_Test::TestBody (this=0x7fd18dc5abf0)
at cc/resource_provider_unittest.cc:328
@11 0x00000000008ec801 in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fd18dc5abf0,
method=&virtual testing::Test::TestBody(), location=0x29463ab "the test body") at testing/gtest/src/gtest.cc:2071
@12 0x00000000008e9665 in testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fd18dc5abf0,
method=&virtual testing::Test::TestBody(), location=0x29463ab "the test body") at testing/gtest/src/gtest.cc:2123
@13 0x00000000008dee0d in testing::Test::Run (this=0x7fd18dc5abf0) at testing/gtest/src/gtest.cc:2143
@14 0x00000000008df3ea in testing::TestInfo::Run (this=0x7fd191823020) at testing/gtest/src/gtest.cc:2319
@15 0x00000000008df8dc in testing::TestCase::Run (this=0x7fd19181f0d0) at testing/gtest/src/gtest.cc:2426
@16 0x00000000008e3eea in testing::internal::UnitTestImpl::RunAllTests (this=0x7fd19829dd60) at testing/gtest/src/gtest.cc:4249

generates SIGSEGV (tcmalloc::Abort). This is just debugallocation having stricter checks about usage of dynamically allocated memory. Now the StackDumpSignalHandler kicks in, and internally calls malloc. But we're already inside malloc code as you can see on the above stack trace (see frame @7, bold font), and re-entering it tries to take locks that are already held, resulting in a hang.

The fix required several changes:
  • no dynamic memory, and that includes std::string and std::vector, which use it internally
  • no buffered stdio or iostreams, they are not async-signal-safe (that includes fflush)
  • custom code for number-to-string conversion that doesn't need dynamically allocated memory (snprintf is not on the list of safe functions as of POSIX.1-2008; it seems to work on a glibc-2.15-based system, but as said before this is not a good assumption to make); in this code I've named it itoa_r, and it supports both base-10 and base-16 conversions, and also negative numbers for base-10
  • warming up backtrace(3): now this is really tricky, and backtrace(3) itself is not whitelisted for being safe; in fact, on the very first call it does some memory allocations; for now I've just added a call to backtrace() from a context that is safe and happens before the signal handler may be executed; implementing backtrace(3) in a known-safe way would be another fun thing to do
Note that for the above, I've also added a unit test that triggers the deadlock scenario. This will hopefully catch cases where calling backtrace(3) leads to trouble.

For more info, feel free to read the articles below:

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Munin and IPv6 (January 04, 2013, 16:48 UTC)

Okay, here comes another post about Munin for those who are using this awesome monitoring solution (okay, I think I’ve been involved in upstream development more than I expected when Jeremy pointed me at it). While the main topic of this post is going to be IPv6 support, I’d first like to spend a few words on the context of what’s going on.

Munin in Gentoo has been slightly patched in the 2.0 series — most of the patches were sent upstream the moment they were introduced, and most of them have been merged for the following release. Some of them, though — including the one bringing in my FreeIPMI plugin (or at least its first version) to replace the OpenIPMI plugins, and those dealing with changes that wouldn’t have been kosher for other distributions (namely, Debian) at this point — were not merged in the 2.0 branch upstream.

But now Steve has opened a new branch for 2.0, which means that the development branch (Munin does not use the master branch, for the simple logistic reason of having a master/ directory in Git, I suppose) is directed toward the 2.1 series instead. This means not only that I can finally push some of my recent plugin rewrites, but also that I can make some deeper changes, including rewriting the seven Asterisk plugins into a single one, and working hard on the HTTP-based plugins (for web servers and web services) so that they use a shared backend, like SNMP. This actually completely solved an issue that, in Gentoo, we had only partially solved before: my ModSecurity ruleset blacklists the default libwww-perl user agent, so with both the partial and the complete fix Munin advertises itself in the request; with the new code it also includes the plugin that is currently making the request, so that it’s possible to know which requests belong to what.

Speaking of Asterisk, by the way, I have to thank Sysadminman for lending me a test server for working on said plugins — this not only got us the current new Asterisk plugin (7-in-1!) but also let me modify the old seven plugins just a tad, so that instead of using Net::Telnet they just use IO::Socket::INET. This has been merged for 2.0, which in turn means that the next ebuild will have one dependency fewer, and one USE flag fewer — the asterisk flag for said ebuild only added the Net::Telnet dependency.

To the main topic — how did I get to IPv6 in Munin? Well, I was looking at which other plugins need to be converted to “modernity” – which to me means re-using as much code as possible, collapsing multiple plugins into one through multigraph, and supporting virtual nodes – and I found the squid plugins. This was interesting to me because I actually have one squid instance running, on the tinderbox host, to avoid direct connections to the network from the tinderboxes themselves. These plugins do not use libwww-perl like the other HTTP plugins, I suppose (but I can’t be sure, for reasons I’m going to explain in a moment), because the cache://objects request that has to be made might or might not work with that library. Since, as I said, I have a squid instance, and these (multiple) plugins look exactly like the kind of target I was looking for to rewrite, I started looking into them.

But once I started, I had a nasty surprise: my Squid instance only replies over IPv6, and that’s intended (the tinderboxes are only assigned IPv6 addresses, which makes it easier for me to access them, and have no NAT to the outside, as I want to make sure that all network access is filtered through said proxy). Unfortunately, by default, libwww-perl does not support accessing IPv6. And indeed, neither do most of the other plugins, including the Asterisk one I just rewrote, since they use IO::Socket::INET (instead of IO::Socket::INET6). A quick search around, and this article turned up — and then this also turned up, which relates to IPv6 support in Perl core itself.

Unfortunately, even with the core itself supporting IPv6, libwww-perl seems to be of a different mind, and that is a showstopper for me, I’m afraid. At the very least, I need to find a way to get libwww-perl to play nicely if I want to use it over IPv6 (yes, I’m going to work around this for the moment and just write the new squid plugins against IPv4). On the other hand, using IO::Socket::IP would probably solve the issue for the remaining parts of the node, and that will for sure give us at least some better support. Even better, it might be possible to abstract this and have a Munin::Plugin::Socket that falls back to whatever we need. As it is, right now it’s a big question mark what we can do there.

So what can be said about the current status of IPv6 support in Munin? Well, the Node uses Net::Server, and that in turn is not using IO::Socket::IP, but rather IO::Socket::INET, or INET6 if installed — which basically means that the node itself will support IPv6 as long as INET6 is installed, and would call for using it as well, instead of IO::Socket::IP — but the latter is the future and, for most people, will be part of the system anyway… The async support, in 2.0, will always use IPv4 to connect to the local node. This is not much of a problem, as Steve is working on merging the node and the async daemon into a single entity, which makes the most sense. Basically it means that in 2.1, all nodes will be spooled, instead of what we have right now.

The master, of course, also uses IPv6 — via IO::Socket::INET6 (yet another nail in the coffin of IO::Socket::IP? Maybe) — which covers all the communication between the two main components of Munin, and could be enough to declare it fully IPv6 compatible — and that’s what 2.0 is saying. But alas, this is not the case yet. On an interesting note, the fact that right now Munin supports arbitrary commands as transports, as long as they provide an I/O interface to the socket, makes its IPv6 support quite moot: not only do you just need an IPv6-capable SSH to handle it, but you could probably use SCTP instead of TCP simply by using a hacked-up netcat! I’m not sure if monitoring would gain any improvement from using SCTP, although I guess it might overcome some of the overhead related to establishing the connection, but… well, it’s a different story.

Of course, Munin’s own framework is only half of what has to support IPv6 for it to be properly supported; the heart of Munin is the plugins, which means that if they don’t support IPv6, we’re dead in the water. Perl plugins, as noted above, have quite a few issues with finding the right combination of modules for supporting IPv6. Bash plugins, and indeed plugins in any other language that could be used, support IPv6 only as well as the underlying tools do — indeed, even though libwww-perl does not work with IPv6, plugins written with wget would work out of the box on an IPv6-capable wget… but of course, the gains we get from using Perl are major enough that you don’t want to go that route.

All in all, I think what’s going to happen is that as soon as I’m done with the weekend’s work (which is quite a bit, since Friday was filled with a couple of server failures, and me finding out that one of my backups was not working as intended) I’ll prepare a branch and see how much of IO::Socket::IP we can leverage, and whether wrapping around that would help us with the new plugins. We’ll see where this leads us; maybe 2.1 will really be 100% IPv6 compatible…

January 02, 2013
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Cat stuck in the Christmas tree (January 02, 2013, 16:52 UTC)

Since the holidays are over, I decided to go back through some of the emails that I had received. Though I got a bunch of them with really funny cartoons, I found this one to be the best:

Long story, just pull - cat in the Christmas tree

The whole situation is hysterical to me, but the photo in the background makes it. The expression on the kid’s face fits perfectly; it’s the look of “Oh well, the cat’s in the tree again!”

Cheers,
Zach

Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Get Involved in Gentoo Linux (January 02, 2013, 13:31 UTC)

Nowadays I see lots of new blog posts about how to contribute to open source projects, so I decided to write one about how to contribute to Gentoo Linux and become a vital part of the project.

Every time we talk about Gentoo, my colleagues at university tell me that they cannot install it because it is too difficult for them, or that they are not ready to install and configure it because they don’t have the experience, and so they finally give up. Some other colleagues tell me that they want to contribute to Gentoo but don’t know how to start. That’s why I wrote this blog post: to give some guidelines for those who want to contribute.

In order to help and contribute to Gentoo you don’t have to know how to code or be a super-duper Linux guru. Of course code is the core of open source projects, but there are ways to contribute without knowing how to program. The requirements are two things: a Gentoo installation and the will to help.

Community

Gentoo, like the rest of the FOSS projects, is based on volunteer effort. The pillar of every FOSS project is its community; without its community, Gentoo wouldn’t exist. Even someone who doesn’t know how to code can contribute to and learn from the project’s community.

Forums: Join our forums and help other users with their problems. It is also a good opportunity to learn more about Gentoo.

Mailing Lists: Subscribe to our mailing lists and learn about the latest community and development news of the project. You can also help users on the relevant mailing lists or discuss with Gentoo developers.

IRC: Join our IRC channels. Help new users with their issues. Discuss with users and developers and express your opinion about new features and the technical issues of the project. Make sure you read our Code of Conduct first.

Planets: Follow our planets and read Gentoo-related blog posts from developers. There are often interesting conversations (via comments) between users and developers after a post.

Promote: After you get some experience with the project, promote your favourite distro (Gentoo, of course) by writing blog posts and articles in forums and on sites related to open source. You can also spread the word in your local Linux users group and at your university.

Participate in Events: Every month most of the Gentoo project teams hold meetings, which take place in #gentoo-meetings. There is an ‘open floor’ at the end of each meeting where users can express their opinion.

Documentation

Gentoo has always been known for its wide variety and quality of documentation. It covers lots of aspect of Linux. Topics about desktop, software, security and most of them are not totally Gentoo based.  That’s the reason Gentoo documentation is successful and that’s why users from other Linux distributions using it. So you can be a part of this effort and improve the documentation.

Wiki: The wiki is our freshest project, and there are lots of ways to help here. Add new articles about topics you would like to see covered (and have knowledge of, of course) and want to share with other Gentoo users. Improving and expanding existing wiki articles is another good way to help the project (avoid copy-pasting from other sources on the net). All users are encouraged to help; the wiki is open to everyone. Use it responsibly, because your articles will affect the Gentoo users who try to follow them.

Translations: If English is not your native language, translating the wiki and documentation is a very good way to help users who don't know English and want to join the community. Translation is a good way to contribute and to expand the Gentoo community.

(Bonus) Write articles on your own blog: if you found a configuration, a tool or a solution to a problem that saved your life in the Gentoo world, don't be afraid to share it with other users.

Development (Code)

As I said, code is the core of any software project. If you have some knowledge of shell scripting and programming, you are welcome to join in. With small steps you can gain more experience with the project and contribute your own features and patches.

Bugs: Every FOSS project has its own bug tracking system, and Gentoo has its own Bugzilla. That is where we report issues: build and run time failures, kernel problems, problems with Gentoo tools, stabilization requests. You can start contributing by confirming and reproducing bugs, then trying to offer solutions and fixes (patches are welcome), so feel free to report new bugs to our Bugzilla. In addition, there are requests to add new ebuilds or update existing ones (version bumps*). Instead of requesting new ebuilds and version bumps, you can also write and submit your own ebuilds to our Bugzilla (a rough skeleton is sketched below) so that a Gentoo developer can add them to the Portage tree. Try picking up a bug from the maintainer-wanted alias. If you need a review of your ebuild, #gentoo-dev-help is the right place to ask.

* Please avoid 0day bump requests.
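
For those who have never seen one, here is a rough idea of what a minimal ebuild looks like; the package name, URIs and metadata below are made up, and a standard autotools build is assumed so that the default phase functions are sufficient. The Gentoo Development Guide (see the further reading list at the end) covers the real details.

# foo-1.0.ebuild -- hypothetical skeleton, not a real package
EAPI=4

DESCRIPTION="One-line description of the package"
HOMEPAGE="http://www.example.org/foo"
SRC_URI="http://www.example.org/releases/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"
IUSE=""

DEPEND=""
RDEPEND="${DEPEND}"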

Arch Tester: An Arch Tester (a.k.a. AT) is a trustworthy user capable of testing an application to determine its stability. Arch Testers should have a good understanding of how ebuilds work and of bash scripting, and should test lots of packages on their arch. You can become an AT for the x86 and amd64 arches; the only requirement is a stable Gentoo box. Your goal will be to install and test packages from the testing branch (~arch) and see whether they work on the stable arch. Then you can open a stabilization request on Bugzilla.
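
As a rough illustration (the package and arch are just examples, and paths may differ on your setup), testing a ~arch version of a single package on an otherwise stable box looks something like this:

# Accept the testing keyword for one package only, then build and test it.
echo "app-portage/eix ~amd64" >> /etc/portage/package.accept_keywords
emerge --ask --oneshot app-portage/eix
# If it works fine, report your results on the stabilization bug.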

Sunrise Project: Sunrise is a starting point for Gentoo users who want to contribute. The Sunrise team encourages users to write ebuilds and makes sure they follow Gentoo QA standards; Sunrise's goal is to allow non-developers to maintain ebuilds themselves. For questions you can ask in #gentoo-sunrise on Freenode.

Proxy-maintaining: The goal of this team is to maintain abandoned (orphaned) packages in order to prevent the treecleaners from removing them. Pick some packages from the maintainer-needed list and begin to maintain them. For questions you can join #gentoo-dev-help.

Bugday: Bugday is an event which takes place in #gentoo-bugs on Freenode on the first weekend of every month. You can join, pick a bug and fix it. But keep in mind that every day is a bugday, so you don't have to wait for the official one to add your ebuilds and fix bugs.

Advanced community projects: Portage, the Gentoolkit Portage tools, the kernel team, the Infrastructure team, the Security team and the Hardened team. These projects are especially important to Gentoo, so a good level of knowledge is necessary in order to contribute to them. If you have the skills, join the party. :)

Become a developer: After you have built up a good amount of contributions and you think you can be an active and vital member of the project, you can start the process of becoming a developer. Ask a Gentoo developer to mentor you and help you fill in the ebuild and staff quizzes; the recruitment process is then completed with a live interview with a recruiter.

There are lots of Gentoo project teams that need new members and help. Everyone can contribute to Gentoo, whether they know how to code or not, and every piece of help is useful to the project.

I think I have covered the biggest part of Gentoo and how to contribute to it. I'll wait for your comments; if you think I missed something, let me know. Fixes are always welcome.

Start contributing today.

Gentoo: If it moves, compile it ;)

Further reading:

  1. Gentoo Handbook
  2. Gentoo Development Guide
  3. Gentoo Projects Listing
  4. Benefits of Gentoo
  5. Easy way to assist us  by Markos Chandras
  6. How to contribute to Gentoo
  7. Beautiful bug reports
  8. Sunrise Project ( lots of good tutorials inside )
  9. Always looking for Arch Testers by Agostino Sarubbo

Thanks, it’s time to push Sabayon farther (January 02, 2013, 10:00 UTC)

I want to take a few moments out of my well-deserved Christmas break to say thanks to all the donors who have contributed to our last fundraiser. After 1.5 years, we've been able to hit our €5000 goal. This is a big, I mean really big, achievement for such a small (well, I'm not so sure about small anymore) but awesome distro like ours.

We've always wanted to bring Gentoo to everyone and make this awesome distro available on laptops, servers and, of course, desktops without the need to compile anything, without even needing a compiler! It turns out that we're getting there.

So, the biggest part of the "getting there" strategy was to implement a proper binary package manager and to start automating the distro's development, maintenance and release processes.
Even though Entropy is in continuous development, it has reached the point where it's reliable enough. Now we must push Sabayon even farther.

Let me keep the development ideas I had for a separate blog post and tell you here what’s been done, what we’re going to do and what we still need in 2013.

First things first: last year we bought a new and shiny build server, kindly hosted by the University of Trento, Italy, featuring a 2U rack chassis, two octa-core Opteron 6128 CPUs, 48GB of RAM and, added earlier last year, 2x240GB Samsung 830 SSDs. In order to save (a lot of) money, I built the server myself and spent something like €2500 (including the SSDs). Take into consideration that hardware prices in the EU are much higher than in the US.

Now we're left with something like €3000 or more, and we're planning to do another round of infra upgrades, save some money for hardware replacements in case of failures, buy t-shirts and DVDs to give out at local events, and so on.

So far, the whole Sabayon infrastructure is spread across three Italian universities and TOP-IX (see the bottom of http://www.sabayon.org for more details) and consists of four 1U rack servers and one 2U server.
Whenever there's a problem, I jump in a car and fix the issue myself (PSU, RAM, HDD/SSD failures and the like) or kindly delegate the task to friends who live closer.

As you can imagine, it's easy to burn through €200-300 whenever there's a problem, and while we have failover plans (to EC2), these come with a cost as well.
As you may have already realized, free software does not really come for free, especially for those who actually maintain it. Automation and scaling out across multiple people (the individuals involved in the development of this distro) are the key, and in particular the former, because it reduces the impact of human error on the whole workflow.

As I mentioned above, I will prepare a separate blog post about what I mean by "automation". For now, enjoy your Christmas holidays, the NYE celebrations and, why not, some gaming with Steam on Sabayon.


During the last few weeks, I spent several nights playing with UEFI and its extension called UEFI SecureBoot. I must admit that I have mixed feelings about UEFI in general: on one hand, you have a nice and modern "BIOS replacement" that can boot .efi files with no need for a bootloader like GRUB; on the other hand, some hardware, not even the most exotic, is not yet glitch-free. But that's what happens with new technology in general. I cannot go into much detail without drifting away from the main topic, but a simple Google search about UEFI and Linux will point you to the problems I just mentioned.

But hey, what does it all mean for our beloved Gentoo-based distro named Sabayon? Since the DAILY ISO images dated 20121224, Sabayon can boot on UEFI systems, from DVD and USB (thanks to isohybrid --uefi) and, surprise surprise, with SecureBoot turned on! I am almost sure that we're the first Linux distro supporting SecureBoot out of the box (update: using shim!), and I am very proud of it. This is of course thanks to Matthew Garrett's shim UEFI loader, which chainloads our signed UEFI GRUB2 image.

The process is simple and works like this: you boot a UEFI-compatible Sabayon ISO image from DVD or USB; if SecureBoot is turned on, shim launches MokManager, which you can use to enroll our distro key, called sabayon.der and available on our image under the "SecureBoot" directory. Once you have enrolled the key, on some systems you are forced to reboot (I had to on my shiny new Asus Zenbook UX32VD), but then the magic happens.

There is a tricky part, however. Due to the way GRUB2 .efi images are generated (at install time, with settings depending on your partition layout and platform details), I had to implement a somewhat nasty way to ensure that SecureBoot can still accept such platform-dependent images: our installer, Anaconda, now generates a hardware-specific SecureBoot keypair (private and public key), and our modified grub2-install then automatically signs every .efi image it generates with that key, which is placed into the EFI boot partition under EFI/boot/sabayon, ready to be enrolled by shim at the next boot.
This is sub-optimal, but after several days of messing around it turned out to be the most reliable, cleanest and easiest way to support SecureBoot after installation without disclosing the private key we use to sign our install media. Another advantage is that our distro keypair, once enrolled, will allow any Sabayon image to boot, while our users still keep full control over the installed system (thanks to the platform-specific private key generated at install time).
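
For the curious, the flow described above boils down to something like the following sketch; the file names are illustrative and Sabayon's actual grub2-install integration differs in its details:

# 1. Generate a machine-specific keypair at install time.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Sabayon SecureBoot (this machine)/" \
    -keyout secureboot.key -out secureboot.crt

# 2. Sign the freshly generated GRUB2 EFI image with that key (sbsigntools).
sbsign --key secureboot.key --cert secureboot.crt \
    --output grubx64.efi grubx64.efi.unsigned

# 3. Export the certificate in DER form so that shim/MokManager can enroll it.
openssl x509 -in secureboot.crt -outform DER -out EFI/boot/sabayon/secureboot.der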

SecureBoot is not that evil after all: my laptop came with Windows 8 (which I completely wiped) and SecureBoot disabled by default, and it lets anyone sign their own .efi binaries from the "BIOS". I don't see how my freedom could be affected by this, though.


January 01, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Autotools Mythbuster: automake pains (January 01, 2013, 17:42 UTC)

And we start the new year with more Autotools Mythbusting — although in this case it's not with the help of upstream, which actually seems to be making things more difficult. What's going on? Well, there have been two releases already, 1.13 and 1.13.1, and the changes are quite "interesting" — or, to use a different word, worrisome.

First of all, there are two releases because the first one (1.13) removed two macros (AM_CONFIG_HEADER and AM_PROG_CC_STDC) that had not been deprecated in the previous release. After a complaint from Paolo Bonzini related to a patch to sed to get rid of the old macros, Stefano decided to re-introduce the macros as deprecated in 1.13.1. What does this tell me? Two things, mainly: the first is that this release was rushed out without enough testing (the beta for it was released on December 19th!); the second is that there is still no proper process for deprecating features, with clear deadlines for when they are to disappear.

This impression is further strengthened by some of the deprecations that appear in this new release, and by some of the removals that did not happen at all.

This release was supposed to be the first one not supporting the old-style configure.in name for the autoconf input script — if you have any project still using that name, you should update it now. For some reason – none of which has been discussed on the automake mailing list, unsurprisingly – it was decided to postpone this to the next release. It is still a perfectly good idea to rename the file now, but you can understandably get annoyed if you felt pressured into getting ready for the new release, only to have the requirement dropped without further notice.

Another removal that was supposed to happen with this release was the three-parameter form of the AM_INIT_AUTOMAKE call, which substitutes for the parameters of AC_INIT instead of providing the automake options. This form is still common for packages that calculate their version number dynamically, for instance from the git repository itself, as it's not possible to pass a variable version to AC_INIT. Now, instead of marking the feature as deprecated but keeping it around, the situation is that the syntax is no longer documented but still usable. Which means I have to document it myself, as I find it extremely stupid to have a feature that is not documented anywhere but is found in the wild. It's exactly because of bad decisions like this that I started Autotools Mythbuster.
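
To make the problem concrete, here is a rough sketch (hypothetical project, details vary between packages) of why the old calling convention is still attractive for dynamically-versioned packages: the version is computed by shell code at configure run time, which AC_INIT cannot accept:

dnl Old-style: AC_INIT carries no version; AM_INIT_AUTOMAKE does.
AC_INIT([src/main.c])

FOO_VERSION=`git describe --always 2>/dev/null || echo 9999`
AM_INIT_AUTOMAKE([foo], [$FOO_VERSION])

dnl The new-style equivalent fixes the version at autoreconf time instead:
dnl   AC_INIT([foo], [1.2.3])
dnl   AM_INIT_AUTOMAKE([foreign])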

This is not much different from what happened with the AM_PROG_MKDIR_P macro, which was supposed to be deprecated/removed in 1.12, with its variables kept around for a little longer. First it ended up completely messed up in 1.12, to the point that the first two releases of that series dropped the variables that were supposed to stay around; and the removal of the macro (but not of the variables) is now scheduled for 1.14 because, among others, GNU gettext still uses it — the issue has been reported, and I think it has also been fixed in git already, but there is no new release, nor a date for when a release with the fix will appear.

All of this is already documented in Autotools Mythbuster even though there is more work to do.

Then there are things that changed, or were introduced, in this release. First of all, silent rules are no longer optional — this basically means that the silent-rules option to the automake initialization is now a no-op, and the generated makefiles all include the silent-rules harness (though, as usual, it is not enabled by default). For me this meant rewriting the related section, as there is now one more variant of automake to support. There is also, finally, support in aclocal for picking up the macro directory declared in configure.ac — unfortunately this meant I had to rewrite another section of my guide to account for it, and now both the old and the new method are documented there.

There are more notes in the NEWS file, and more things that are scheduled to appear in the next release, and I'll try to cover them in Autotools Mythbuster over the next week or so — I expect that this time I'll need to get into the details of Makefile.am, which I have tried to avoid up to now. It's quite a bit of work, but it might be what makes the difference for many autotools users out there, so I really can't avoid the task at this point. In the meantime, I welcome all support, be it through patches, suggestions, Flattr, Amazon or whatever else — the easiest way is to show the guide around: not only will it reduce the headaches for me and the other distribution packagers by having people who actually know how to work with autotools, but the more people know about it, the more contributions are likely to come in. Writing Autotools Mythbuster is far from easy, and sometimes it's not enjoyable at all, but I guess it's for the best.

Finally, a word about the status of automake in Gentoo — I'm leaving it to Mike to bump the package in the tree; once he's done that, I'll prepare to run a tinderbox with it — hopefully just rebuilding the reverse dependencies of automake will be enough, thanks to autotools.eclass. By the time the tinderbox is running, I hope to have all the possible failures covered in the guide, as that will make the job of my Gentoo peers much easier.

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy New Year – 2013 (January 01, 2013, 15:42 UTC)

Just wanted to take a quick moment and wish everyone a Happy New Year! It’s that day where we can all start anew, and make resolutions to do this or that (or to not do this or that :razz: ). My resolution is to get back to updating my blog on a regular basis. I don’t know that it will be nearly every day like it was before I moved, but I’m going to try to post often (the backlog of topics is getting quite large).

Anyway, Happy 2013 to all!

Cheers,
Zach

December 31, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Why would paid-for support be better? (December 31, 2012, 20:46 UTC)

Last Saturday evening, I sent an e-mail to a low-volume mailing list regarding IMA problems that I'm facing. I wasn't expecting an answer very fast, of course, it being the holidays, a weekend, and a low-volume mailing list. But hey – it is the free software world, so I should expect some slack on this, right?

Well, not really. I got a reply on Sunday – and not just an acknowledgement e-mail, but a to-the-point answer. It was immediately correct, explained why, and helped me figure things out further. And this is not a unique case in the free software world: because you are dealing with the developers and users who have written the code that you are running and testing, you get a bunch of very motivated souls, all looking at your request when they can and giving input when they can.

Compare that to commercial support from bigger vendors: in those cases, your request probably gets read by a single person whose state of mind is difficult to know (though from the communication you often get the impression that they either couldn't care less or are swamped with requests and cannot devote enough time to yours). In most cases, they check whether the request contains the right amount of information in the right format in the right fields, or even ignore that you did all of that right and just ask you for (the same) information again. And who knows how many times I have had to "state your business impact".

Now, I know that commercial support from bigger vendors carries the burden of a huge volume of requests, but is that truly so different in the free software world? Mailing lists such as the Linux kernel mailing list (for kernel development) get hundreds (thousands?) of mails a day, and those with requests for feedback or with questions get a reply quite swiftly. Mailing lists for distribution users get a lot of traffic as well, and each and every request is handled with due care and responded to within a very good timeframe (24 hours or less most of the time, sometimes a few days if the user is running a strange or exotic environment that not everyone knows how to handle).

I think one of the biggest advantages of the free software world is that the requests are public. That both teaches the many users on those mailing lists and forums how to handle problems they haven't seen before, and allows users to search for a problem before reporting it. Everybody wins. And because it is public, many users happily answer more and more questions, since they get the visibility (and the acknowledgements) they deserve: they earn a position of respect in that particular area, because everyone can see how much effort (and how many good results) they have put in earlier on.

So kudos to the free software world, a happy new year – and keep going forward.

December 30, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Finding IDs to submit (December 30, 2012, 18:02 UTC)

I have written a lot about the hardware IDs, but I haven't said much about submitting new entries to the upstream databases. Indeed, the package just mirrors the data collected by the USB and PCI ID databases, which are managed by Stephen, Martin and Michal.

As an example, I’ll show you how I’ve been submitting the so-called Subsystem IDs for PCI devices from computers I either own, or fix up for customers and friends.

First off, you have to find a system or device whose subsystem IDs have not been submitted yet. Unfortunately, I don't have any computer at hand that I haven't already submitted to the database. But fear not — it so happens I had an interesting opening: I recently rented a server from OVH, as I've had some trouble with one of my production hosts lately and I'm entertaining the idea of moving everything to a new server and service altogether. But that whole thing is a topic for a completely different time. In any case, let's see what we can do about these IDs now that I have an interesting system at hand.

First of all, while I don't have the server at hand to see what's in it, OVH does tell me what hardware it carries — in particular, it's an Intel D425KT board (yes, I got a Kimsufi Atom; I took the three-month lease for now and I'll see if it can perform decently enough), so that's a start. Alternatively, I could have asked dmidecode — but I just don't have it installed on that server right now.

The first step is to look at what lspci -v says:

00:00.0 Host bridge: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx DMI Bridge
        Subsystem: Intel Corporation Device 544b
        Flags: bus master, fast devsel, latency 0
        Capabilities: [e0] Vendor Specific Information: Len=08 <?>

This is of course only the first entry in the list, but it's still something. You can see on the second line that it says "Subsystem: Intel Corporation Device 544b" — that means it knows the subsystem vendor (ID 8086, I can tell you by heart — they have been funny about that), but it doesn't know the subsystem device. So that's what we're looking for: an unknown subsystem! Time to compare this with the output of lspci -vn — that one does not resolve the IDs, and we need the raw numbers to submit to the PCI database; if you're not registered there already, do register, so that you can submit entries to begin with.

00:00.0 0600: 8086:a000
        Subsystem: 8086:544b
        Flags: bus master, fast devsel, latency 0
        Capabilities: [e0] Vendor Specific Information: Len=08 <?>

Okay, so now we know that our first device is Intel's (VID 8086) and has a000 as its device ID, which brings us to https://pci-ids.ucw.cz/read/PC/8086/a000; easy, isn't it? At the end of that page there's a list of the known subsystem IDs; pending submissions don't show their name yet, but they do show up in the table with a darker gray background. All PCI ID entries are moderated by hand by the database's maintainers. By the time you read this, the entry for my board will already be in, but right now it isn't — if it wasn't obvious, I'm looking for an entry that reads 8086 544b (which is under "Subsystem" above).

Now the form requires just a few fields: the ID itself – which is 8086 544b with a space, not a colon – and a name. The Note field is for something that needs to be written in pci.ids itself, so in most cases it should be left empty. Discussion is where you want to comment on the certainty of your submission; for my laptop, for instance, we had some trouble with "Intel Corporation Device 0153" — which is now officially "3rd Gen Core Processor Thermal Subsystem".

The name I'm going to submit is "Desktop Board D425KT", as that's the format the other entry in the database for this device uses — okay, it actually uses "DeskTop", but I'd rather not capitalize another T and see a kitten cry.

Now it's time to go through all the other entries in the system — yes, there are many of them, and most of the time the IDs are not listed in the order of the PCI connections, so be careful. More interestingly, not all the subsystems are going to be listed on the same line. Indeed, the third entry that I have is this:

00:1c.0 0604: 8086:27d0 (rev 01) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00001000-00001fff
        Memory behind bridge: e0f00000-e12fffff
        Prefetchable memory behind bridge: 00000000e0000000-00000000e00fffff
        Capabilities: [40] Express Root Port (Slot+), MSI 00
        Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [90] Subsystem: 8086:544b
        Capabilities: [a0] Power Management version 2
        Capabilities: [100] Virtual Channel
        Capabilities: [180] Root Complex Link
        Kernel driver in use: pcieport

The subsystem ID is listed under "Capabilities" instead — but it's still the same one. This is actually critical: if the subsystem does not match, it means it's coming from a different component — for instance, if you build your own computer, the subsystems of the internal CPU devices and those of the motherboard will not match, as they come from different vendors. The same goes for add-on cards (PCI, PCI-E, AGP, …).
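
If you want a quick overview of every subsystem ID in the box before starting, a throwaway one-liner along these lines can help; it is only a convenience that reformats the lspci -vn output already shown above:

# Print each device line followed by its numeric subsystem line
# (including the ones hidden under "Capabilities").
lspci -vn | awk '/^[0-9a-f]/ { dev = $0 } /Subsystem:/ { print dev "\n" $0 }'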

Sometimes a different subsystem also shows up on internal components that get different names from the motherboard itself — in this case, the Realtek network card on this motherboard reports a completely different ID, and I really don't know how to submit it:

01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
        Subsystem: Intel Corporation Device d626
        Flags: bus master, fast devsel, latency 0, IRQ 44
        I/O ports at 1000 [size=256]
        Memory at e0004000 (64-bit, prefetchable) [size=4K]
        Memory at e0000000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable- Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 01-00-00-00-36-4c-e0-00
        Kernel driver in use: r8169

If for whatever reason you make a mistake, you can click on the "Discuss" link on the submitted entry and edit the name you want to submit. I did make such a mistake while submitting the IDs for this system.

So these are the tricks… happy submitting!

Unfortunately, every time we have a big list to keyword or stabilize, repoman complains about missing packages. So, in this post I will show you a way to avoid this problem.

First, please download the batch-pretend script from my overlay.
I'm not a Python programmer, but I was able to edit the script made by Paweł Hajdan: I just deleted the Bugzilla commit part and made the script print the repoman full output if the list is not complete.
This script works only with =www-client/pybugz-0.9.3

Now, to check if repoman will complain about your list, you need to do:
./batch-pretend.py --arch amd64 --repo /home/ago/gentoo-x86 -i /tmp/yourlist

where:

  • Batch-pretend.py is the script (obviously);
  • amd64 is the arch that you want to check. You will use ~amd64 for the keywordreq;
  • /home/ago/gentoo-x86 is your local CVS checkout of gentoo-x86;
  • /tmp/yourlist is the list containing the packages.

A few useful notes:

If you want to check several arches, you can use a simple for loop:
for i in amd64 x86 sparc ppc ; do
./batch-pretend.py --arch "${i}" --repo /home/ago/gentoo-x86 -i /tmp/yourlist
done

The script will run ekeyword, so it will touch your local CVS copy of gentoo-x86. If this is not your intention, please make another copy and work there, or don't forget to run cvs up -C afterwards.

Before doing this work, you need to run cvs up in the root of your local gentoo-x86 CVS checkout.

The list must be structured like this:
# bug #445900
=app-portage/eix-0.27.4
=www-client/pybugz-0.9.3
=dev-vcs/cvs-1.12.12-r6
#and so on..

December 29, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
IMA and EVM on Gentoo, part 2 (December 29, 2012, 21:42 UTC)

I have been playing with Linux IMA/EVM on a Gentoo Hardened (with SELinux) system for a while and have been documenting what I think is interesting or necessary for Gentoo Linux users who want to use IMA/EVM as well. Note that the Linux IMA/EVM project's own documentation is very decent. It's all on a single wiki page, but I learned a lot from it.

That being said, I do have the impression that the method they suggest for generating IMA hashes for the entire system does not always work properly. It might be because of SELinux on my system, but for now I'm searching for another method that does seem to work well (I'm currently trying my luck with a find … -exec evmctl based command; a rough sketch of what I mean is below). But once the hashes are registered, it works pretty well (well, there's probably a small SELinux problem where loading a new policy or updating the existing policies seems to generate stale rules and I have to reboot my system, but I'll find the culprit of that soon ;-)
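
For reference, the kind of command I mean is roughly the following; treat it as a sketch only (run it as root, and check the evmctl man page and the IMA/EVM wiki before pointing it at a real system):

# Walk one filesystem and store an IMA hash (the security.ima xattr)
# for every regular file on it.
find / -xdev -type f -exec evmctl ima_hash '{}' \;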

The IMA Guide has been updated to reflect recent findings – including how to load a custom policy – and I have also started on the EVM Guide. I think it'll take me a day or three to finish off the rough edges, and then I'll start creating a new SELinux node (KVM) image that users can use with the various Gentoo Hardened-supported technologies enabled (PaX, grSecurity, SELinux, IMA and EVM).

So if you’re curious about IMA/EVM and willing to try it out on Gentoo Linux, please have a look at those documents and see if they assist you (or confuse you even more).

Steve Dibb a.k.a. beandog (homepage, bugs)
znurt.org cleanup (December 29, 2012, 05:36 UTC)

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again.  It was a combination of the portage metadata location moving and a small bit of sloppy code in part of the import script that made me roll my eyes.  It's fixed now, but the site still isn't importing everything correctly.

I've been putting off working on it for so long, just because it's a hard project to get to.  Since I started working full-time as a sysadmin about two years ago, that job has killed off my hobby of tinkering with computers.  My attitude shifted from "this is fun" to "I want this to work and not have to worry about it."  Comes with the territory, I guess.  Not to say I don't have fun — I do a lot of research at work, either related to existing projects or new stuff.  There's always something cool to look into.  But then I come home and I'd rather just focus on other things.

I got rid of my desktops, too, because soon afterwards I didn’t really have anything to hack on.  Znurt went down, but I didn’t really have a good development environment anymore.  On top of that, my interest in the site had waned, and the whole thing just adds up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason.  Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff.  Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix.  Derp.  That’s what I get for putting stuff off.

One thing I've found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me.  At work, I only write shell scripts now (bash) and we use MySQL across the board.  Postgres is an amazing database replacement, and it's amazing how, even after not using it regularly in a while, it all comes back to me.  I love that database.  Everything about it is intuitive.

Anyway, I was looking through the import code and doing some testing.  I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts.  Looking into it, I found that the MDB2 PEAR package has a memory leak, which kills the scripts because they run so many queries.  So, I'm in the process of moving the code to use PDO instead.  I've wanted to look into PDO for a while, and so far I like it, for the most part.  Its fetch helper functions are pretty lame and could use some obvious features, like fetching a single value or returning result sets as associative arrays, but it's good.  I'm going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn't gonna change at all.  It'll be faster, and importing the data from portage will be more accurate.  I've still got bugs on the frontend I need to fix, but they're all minor and, to be honest, I probably won't look at them for now.  Well, maybe I will, I dunno.

Either way, it's kinda cool to get into the code again and see what's going on.  I know I say this a lot about my projects, but it always amazes me when I go back and realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database.  I thought it'd be a simple case of reading metadata and throwing it in there, but there are all kinds of things I originally wrote, like using regular expressions to get the package components from an ebuild version string.  Fortunately, there are easier ways to query that stuff now, so the goal is to get it more up to date.

It’s kinda cool working on a big code project again.  I’d forgotten what it was like.


December 27, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened IMA support (December 27, 2012, 20:40 UTC)

Adventurous users, contributors and developers can enable the Integrity Measurement Architecture subsystem in the Linux kernel with appraisal (since Linux kernel 3.7). In an attempt to support IMA (and EVM and other technologies) properly, the System Integrity subproject within Gentoo Hardened was launched a few months ago. And now that Linux kernel 3.7 is out (and stable) you can start enjoying this additional security feature.
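
As an indication of what "enabling" means here (option names as of Linux 3.7; double-check them against the System Integrity documentation), the relevant kernel configuration looks roughly like this:

# Kernel configuration fragment for IMA with appraisal support.
CONFIG_INTEGRITY=y
CONFIG_IMA=y
CONFIG_IMA_APPRAISE=y
CONFIG_EVM=y
# The first boot(s) are typically done with ima_appraise=fix on the kernel
# command line, so that existing files get their hashes recorded before
# switching to enforcing mode.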

With IMA (and IMA appraisal), you are able to protect your system from offline tampering: modifications made to your files while the system is offline will be detected, as their hash values will not match the hash values stored in extended attributes (and the extended attributes themselves are protected through digitally signed values using the EVM technology).

I'm working on integrating IMA (and later EVM) properly, which of course includes the necessary documentation: concepts and an IMA guide for starters, with more to follow. Be aware that the integration is still in its infancy, but any questions and feedback are greatly appreciated, and bug reports (like bug 448872) are definitely welcome.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
My personal KDEPIM upgrade (again): laptop (December 27, 2012, 11:40 UTC)

One year after my last blog post on this topic I encountered some minor difficulties with combining KDEPIM-4.4 (i.e. kmail1) and the KDE 4.10 betas. These difficulties are fixed now, and the combination seems to work fine again. Anyway, I became curious about the level of stability of Akonadi-based kmail2 once more. After all, I've been running it continuously over the year on my office desktop with a constant-on fast internet connection, and that works quite well. So, I gave it a fresh try on my laptop too. I deleted my Akonadi configuration and cache, switched to Akonadi mysql backend, updated kmail and the rest of KDEPIM without migrating to 4.9.4, and re-added my IMAP account from scratch (with "Enable offline mode"). The overall use case description is "laptop with large amount of cached files from IMAP account, fluctuating internet connectivity". Now, here are my impressions...

  • Reaction time is occasionally sluggish, but overall OK.
  • The progress indicator behaves a bit oddly; it checks the mail folders in seemingly random order and only knows 0% and 100% completion.
  • Random warning messages. It seems that kmail2 uses some features that "my" IMAP server does not understand. So, I'm getting frequent warning notifications that don't tell me anything and that I cannot do anything about. SET ANNOTATION, UID, ... Please either handle the errors, inform the user what exactly goes wrong, or ignore them in case they are irrelevant. Filed as a wish, bug 311265.
  • Network activity sometimes stops working. This sounds worse than it actually is, since in 99% of all cases Akonadi now detects fine that the connection to the server is broken (e.g., after suspend/resume, after switching to a different WLAN, or after enabling a VPN tunnel) and reconnects immediately. In the few remaining cases, re-starting the Akonadi server does the trick. You just have to know what to kick.
  • More problematic is that, while you're in online mode, any problem with connectivity will make kmail "hang". Clicking on a message leads to an attempt to retrieve it, which requires some response from the network. As far as I can tell, all such requests are queued up for Akonadi to handle, and if one does not get a reply, the pending requests are stuck in the queue... OK, you might say that this is a typical use case for offline mode, but then I would have to be able to predict when exactly my train enters the tunnel... Compare this to kmail1 disconnected IMAP accounts, where regular syncing would be delayed, but local work remained unaffected.
  • Offline mode is a nice concept, and half a solution for the last problem, but unfortunately it does not work as expected. For mysterious reasons, a considerable part of the messages is not cached locally. I switch my account to offline mode, click on a message, and obtain an error message "Cannot fetch this in offline mode". Well, bummer. Bug 285935.
  • This may just be my personal taste, but once something goes wrong (e.g., non-kde related crash, battery empty, ...) and the cache becomes corrupted somehow, I'd like to be able to do something from kmail2 without having to fiddle with akonadiconsole. A nice addition would be "Invalidate cache" in the context menu of a mail folder, or some sort of maintenance menu with semi-safe options.
  • Finally... something is definitely going wrong with PGP signatures; the signatures do not always verify on other mail clients. Tracking this down, it seems that CRLF is not preserved in messages, see bug 306005.
On the whole, for the laptop use case the "new" KDEPIM is now (4.9.4) more mature than the last time I tried. I'll keep it now and not downgrade again, but there are still some significant rough edges. The good thing is, the KDEPIM developers are aware of the above issues and debugging is going on, as you can see for example from this blog post by Alex Fiestas (whose use case pretty much mirrors my own).

December 26, 2012
Gnome 3.6 (December 26, 2012, 23:35 UTC)

Alexandre (tetromino) and I had a marathon over the last two weeks to get the Gnome 3.6 ebuilds using the python-r1 eclass variants, EAPI=5 and gstreamer-1. And now it is finally in gentoo-x86, unmasked.

You have probably read, heard or seen stuff about EAPI=5 and the new Python eclasses before but, in short, here is what they will give you:

  • the package manager will finally know for real which Python version is used by which package and be able to act accordingly (no more python-updater once all ebuilds are migrated)
  • EAPI=5 sub-slots will hopefully put an end to revdep-rebuild usage. I already saw this in action while bumping some of the telepathy packages, discovering that empathy was automatically rebuilt without any further action than emerge -1 telepathy-logger (a sketch of what such a dependency looks like is below).
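
For illustration, a sub-slot dependency in an EAPI=5 ebuild looks roughly like this (hypothetical package names; the ":=" operator is what records which sub-slot the consumer was built against, so the package manager knows when a rebuild is needed):

EAPI=5

# Rebuild this package automatically whenever libfoo's sub-slot
# (for example its library ABI) changes.
RDEPEND="dev-libs/libfoo:="
DEPEND="${RDEPEND}"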

No doubt lots of people are going to love this.

Gnome 3.6 probably still has a few rough edges, so please check Bugzilla before filing new reports.

December 23, 2012
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: Why privacy matters (December 23, 2012, 22:13 UTC)

I am sharing this video because it makes a few interesting points on the value of privacy, especially some that are helpful for explaining privacy to others. Two examples:

Cory Doctorow (at 00:13):

“Privacy is the right to make a mistake.”

Christopher Soghoian (at 03:07):

“Everyone has something to hide. We have curtains on our windows, we wear clothes, we don’t broadcast our salaries or our medications [..].”

PS: This video was brought to my attention by a post at Netzpolitik.org.

December 22, 2012
Stuart Longland a.k.a. redhatter (homepage, bugs)
End of the world predictions (December 22, 2012, 07:21 UTC)

This is a little old; it has been kicking around on my computer for over 10 years now, but it seems especially relevant given what some thought of the Mayan calendar…

December 21, 2012


Fig. 1: End of World banner

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers, ready to rock the end of the world (or at least mid-winter/Southern Solstice)! The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.6.8, Xorg 1.12.4, KDE 4.9.4, Gnome 3.4.2, XFCE 4.10, Fluxbox 1.3.2, Firefox 17.0.1, LibreOffice 3.6.4.3, Gimp 2.8.2-r1, Blender 2.64a, Amarok 2.6.0, Mplayer 2.2.0, Chromium 24.0.1312.35 and much more ...
  • If you want to see if your package is included we have generated both the x86 package list and the amd64 package list. There is no new FAQ or artwork for the 20121221 release, but you can still get the 12.0 artwork plus DVD cases and covers for the 12.0 release, and view the 12.1 FAQ (persistence mode is not available in 20121221).
  • Special Features:
    • ZFSOnLinux
    • Writable file systems using AUFS so you can emerge new packages!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20121221 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit user land while using the provided 32-bit user land. The livedvd-amd64-multilib-20121221 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Creating a tumblelog with blohg (December 21, 2012, 05:39 UTC)

Warning: This post relies on unreleased blohg features. You will need to install blohg from the Mercurial repository or use the live ebuild (=www-apps/blohg-9999) if you are a Gentoo user. Please ignore this warning after the blohg-1.0 release.

Tumblelogs are old stuff, but services like Tumblr have popularized them a lot recently. Tumblelogs are a quick and simple way to share random content with readers. They can be used to share a link, a photo, a video, a quote, a chat log, etc.

blohg is a good blogging engine, we know, but what about tumblelogs?!

You can already share videos from YouTube and Vimeo, and you can share most of the other stuff manually, but that is tedious and diverges from the main objective of tumblelogs: simplicity.

To solve this issue, I developed a blohg extension (Yeah, blohg-1.0 supports extensions! \o/ ) that adds some cool reStructuredText directives:

quote

This directive is used to share quotes. It will create a blockquote element with the quote and add a signature with the author name, if provided.

Usage example:

.. quote::
   :author: Myself

   This is a random quote!

chat

This directive is used to share chat logs. It will add a div with the chat log, highlighted with Pygments.

Usage example:

.. chat::

   [00:56:38] <rafaelmartins> I'm crazy.
   [00:56:48] <rafaelmartins> I chat alone.

You can see the directives in action on my shiny new tumblelog:

http://rafael.martins.im/

The source code of the tumblelog, including the blohg extension and the mobile-friendly templates, is available here:

http://hg.rafaelmartins.eng.br/blogs/rafael.martins.im/

I have no plans to release this extension as part of blohg, but feel free to use it if you find it useful!

That's all!

December 20, 2012
Jeremy Olexa a.k.a. darkside (homepage, bugs)

I was in Budapest for 11 days. I couchsurfed there, which is far longer than I normally stay at someone's house. So, thanks Paul! Budapest was nice and reminded me a lot of Prague. While I was there I visited a Turkish bath, which was a very interesting experience: imagine a social, public "hot tub & sauna" with naturally hot water. I found a newly minted CrossFit gym, RC Duna, that opened up its doors for a traveller, so gracious. Even though I didn't get to see the opera in Vienna, I went to the opera house in Budapest. It was my first time seeing a ballet, The Nutcracker. There were Christmas markets in Budapest too; I actually liked them more than the Viennese ones. I also helped to organize the first (known) Hungarian Gentoo Linux Beer Meeting :)

Then I took a train to Belgrade, Serbia. The train ride was 8+ hours. I couchsurfed again, for 3 nights, and had some wonderful chats with my host, Ljubica. She learned about US things and I learned about Serbian things, which is just what you could hope for: a cultural exchange via couchsurfing. I was her first US guest. Later on, an Argentinian fellow stayed there too and we had conversations about worldly topics, like "why are borders so important and do we need them?" and speculation about why Belgium's lack of a government even worked. Then, perhaps the best part, I got to try authentic mate. In my opinion there wasn't much to actually see in Belgrade during the winter; I did walk around and went to the fortress. Otherwise, I nursed the head cold I had caught on the train.

I took the bus to Skopje, FYROM. I stayed in Skopje for 3 nights at a nice independent hostel, Shanti Hostel (recommended). I walked around the center (not much to see), walked through the old bazaar, and ate some good food. The dishes in Central Europe include lots of meat. I embarked on a mission to find the semi-finalist entry for the next 7 wonders of the world, Vrelo Cave, but I got lost and ended up on a 10km hike along the river instead. It was spectacular! And peaceful. Perfect, really. I wanted to see what was at the end of the trail, but eventually turned around because it didn't end. On the way back, I slipped and came within feet of going in the drink. As my legs straddled a tree and my feet went through branches that were clearly meant to handle no weight, I used that split second to be thankful. I used the next second to watch something black go bounce, …, bounce, SPLASH. It is funny how quickly you can go from being thankful to cursing about your camera in the river. I got up, looked around and thought about how I had gotten off the path, dang. Being the frugal man I am, I continued off the path and went searching for my camera. Well, that was bad, because I slipped again. As I was sliding on my ass and grabbing branches, I eventually stopped. At that point I knew my camera was gone, since I could see the battery had popped out and was in the water. Le sigh. C'est la vie.

So, no pictures, friends. I had a few hundred pictures that I hadn't uploaded, and they are gone. I might buy a camera again, but for now you will just have to take my word for it. My Mom says she will send me a disposable camera :D ha.

I’m off to Greece at 6am…