
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
January 01, 2013, 23:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

January 01, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Right at the start, the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into the Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all the experiments. Some time ago we picked Pd0.3Ni0.7 as contact material, since the palladium generates only a low resistance between the nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties.
Three methods are used to obtain data - SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"
D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. Strunk
accepted for publication by Journal of Applied Physics, arXiv:1208.2163 (PDF)

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy New Year – 2013 (January 01, 2013, 15:42 UTC)

Just wanted to take a quick moment and wish everyone a Happy New Year! It’s the day when we can all start anew, and make resolutions to do this or that (or to not do this or that :razz: ). My resolution is to get back to updating my blog on a regular basis. I don’t know that it will be nearly every day like it was before I moved, but I’m going to try to post often (the backlog of topics is getting quite large).

Anyway, Happy 2013 to all!

Cheers,
Zach

December 31, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Why would paid-for support be better? (December 31, 2012, 20:46 UTC)

Last Saturday evening, I sent an e-mail to a low-volume mailing list regarding IMA problems that I’m facing. I wasn’t expecting an answer very fast, of course: it was the holidays, a weekend, and a low-volume mailing list. But hey – it is the free software world, so I should expect some slack on this, right?

Well, not really. I got a reply on Sunday – and not just an acknowledgement e-mail, but a to-the-point answer. It was immediately correct, described why, and helped me figure things out further. And this is not a unique case in the free software world: because you are dealing with the developers and users who have written the code that you are running/testing, you get a bunch of very motivated souls, all looking at your request when they can, and giving input when they can.

Compare that to commercial support from bigger vendors: in those cases, your request probably gets read by a single person whose state of mind is difficult to know (but from the communication you often get the impression that they either couldn’t care less, or are so swamped with requests that they cannot devote enough time to yours). In most cases, they check whether the request contains the right amount of information in the right format in the right fields, or even ignore that you did all that right and just ask you for (the same) information again. And who knows how many times I had to “state your business impact”.

Now, I know that commercial support from bigger vendors carries the burden of a huge overload of requests, but is that truly so different in the free software world? Mailing lists such as the Linux kernel mailing list (for kernel development) get hundreds (thousands?) of mails a day, and those with requests for feedback or with questions get a reply quite swiftly. Mailing lists for distribution users get a lot of traffic as well, and each and every request is handled with due care and responded to within a very good timeframe (24 hours or less most of the time, sometimes a few days if the user is using a strange or exotic environment that not everyone knows how to handle).

I think one of the biggest advantages of the free software world is that the requests are public. That both teaches the many users on those mailing lists and fora how to handle problems they haven’t seen before, and allows users to search for a problem before reporting it. Everybody wins. And because it is public, many users happily answer more and more questions, because they get the visibility (with acknowledgements) they deserve: they gain a position in that particular area that others respect, because we can see how much effort (and what good results) they gave earlier on.

So kudos to the free software world, a happy new year – and keep going forward.

December 30, 2012
Agostino Sarubbo a.k.a. ago (homepage, bugs)

Unfortunately, every time we have a big list to keyword or stabilize, repoman complains about missing packages. So, in this post I will give you a way to avoid this problem.

First, please download the batch-pretend script from my overlay.
I’m not a python programmer but I was able to edit the script made by Paweł Hajdan. I just deleted the bugzilla commit part, and I make the script able to print repoman full if the list is not complete.
This script works only with =www-client/pybugz-0.9.3

Now, to check if repoman will complain about your list, you need to do:
./batch-pretend.py --arch amd64 --repo /home/ago/gentoo-x86 -i /tmp/yourlist

where:

  • Batch-pretend.py is the script (obviously);
  • amd64 is the arch that you want to check. You will use ~amd64 for the keywordreq;
  • /home/ago/gentoo-x86 is the local copy of the CVS;
  • /tmp/yourlist is the list which contains the packages;

Few useful notes:

If you want to check on some arches, you can use a simple for:
for i in amd64 x86 sparc ppc ; do
./batch-pretend.py --arch "${i}" --repo /home/ago/gentoo-x86 -i /tmp/yourlist
done

The script will run ekeyword, so it will touch your local CVS copy of gentoo-x86. If this is not your intention, please make another copy and work there or don’t forget to run cvs up -C.

Before doing this work, you need to run cvs up in the root of your gentoo-x86 local CVS.

The list must be structured like this:
# bug #445900
=app-portage/eix-0.27.4
=www-client/pybugz-0.9.3
=dev-vcs/cvs-1.12.12-r6
#and so on..

December 29, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Flashing a Kindle Fire with CyanogenMod (December 29, 2012, 22:07 UTC)

Those of you that follow me on Google Plus (or Facebook) already know this, but the other day I was wondering about whether I should have flashed my Kindle Fire (first generation) with CyanogenMod instead of keeping it with the original Amazon operating system. This is the tale of what I did, which includes a big screwup on my part.

But first, a small introduction. I’m the first person to complain about people “jailbreaking” iPhones and the like, as I think that if you buy something that you have to modify to make useful, then you shouldn’t have bought it in the first place. Especially if you use the name “jailbreak” to justify an act that almost all of the public uses to pirate software — I firmly maintain that if we want Free Software licenses to be respected, we have to consider EULAs just as worthy of respect; that is, you can argue that they are evil, but you can’t call for disrespecting them.

But I have made exceptions before, mostly when the original manufacturer “forgets” to provide updates, or fails to follow through with promised features. An example of this for me was when I bought an AppleTV, hoping that Apple would keep their promise of entering the European market for TV series and movies so that the device would become useful. While they do have something now, they still offer no way to buy content to watch in the original English (which makes it useless to me), and even that came only after I had decided to drop the device because it wasn’t keeping up with the rest of the world. At the time, to avoid having to throw the device away, I ended up using the hacking procedure to turn it into an XBMC device.

So in this case the problem was that after coming back home from Los Angeles, I barely touched the Kindle Fire at all. Why? Well, even though I did buy season passes for some TV Series (Castle, Bones, NCIS), which would allow me to stream them on Linux (unlike Apple’s store that only works on their device or with their software, and unlike Netflix that does not work on Linux), and download to the Kindle Fire, neither option works when outside of the United States — so to actually download the content I paid for, I have to use a VPN.

While it’s not straight forward, it’s possible to set up a VPN connection from Linux to the iPad, and have it connect to Amazon through said VPN, there is no way to do so on the Kindle Fire (there’s no VPN support at all). So I ended up leaving it untouched, and after a month I was concerned about my purchase. So I started considering what were the compelling features of the Kindle Fire compared to any other Android-based tablet. Which mostly came down to the integration with Amazon: the books, the music and the videos (TV series and movies).

For what concerns the books, the Kindle app for Android is just as good as the native one — the only thing that is missing is the “Kindle Owners’ Lending Library”, but since I rarely read books on the Fire, that’s not a big deal (I have a Kindle Keyboard that I read books on). For the music, while I did use the Fire a few times to listen to that, it’s not a required feature, as I have an iPod Touch for that, that also comes with an Amazon MP3 application.

There is also the integration with the Amazon App Store, but that’s something that tries to make up for the lack of Google Play support — and in general there isn’t that much content in there. Lots of applications, even when available, are compatible with my HTC Desire HD but not with the Kindle Fire, so what’s the point? Audiobooks are not native — they are handled through the Audible application, which is available on Google Play, and also on my iPod Touch, so that’s no argument for the Fire either.

So about the videos — that’s actually the sole reason why I ordered it. While it is possible to watch the streamed videos on Linux, Flash would use my monitor and not let me work when watching something, so I wanted a device I could stream the videos to and watch on… a couple of months after I bought the Fire, though, Amazon released an Instant Video application for the iPad, making it quite moot. Especially since the iPad has the VPN access I noted before, and I can connect the HDMI adapter to it and watch the streams on my 32" TV.

All this considered, the videos were the only thing that was really lost if I stopped using the Amazon firmware. So I looked it up and found three guides – 1 2 3 – that would have got me set up with an Android 4.1, CyanogenMod 10 based ROM. Since the device is very simple (no bluetooth, no GPS, no baseband, no NFC) supporting it should be relatively easy, the only problem, as usual, is to make sure you can root and flash it.

Unfortunately, when I went to flash it, I made a fatal mistake: instead of flashing the bootloader’s image (a modified u-boot), I flashed its zip file. And the device wouldn’t boot up anymore. Thankfully, there are people like Christopher and Vladimir who pointed me at the fact that the CPU in that tablet (TI OMAP) has a USB boot option — but it requires shorting one very tiny, nigh-microscopic pad on the main board to ground, so that it tries to boot from there. Lo and behold, thanks to a friend of mine with less shaky hands who happened to be around, I was able to follow the guide to unbrick the device, and got the CM10 ROM onto it.

Now that I finally have an Android 4 device (the HTC is still running the latest available CM7 — if somebody has a suggestion for a CM10 ROM that does not add tons of customization, and that doesn’t breach the Google license by bundling the Google Apps, I’d be happy to update), I’ve been able to test Chrome for Android, and VLC as well — and I have to say they’re improving tons. Of course there are still quite a few things that are not really polished (for example there is no Flickr application that can run there!), but it’s improving.

If I were to buy a new tablet tomorrow, though, I would probably buy a Samsung Galaxy Note 10 — why? Well, because I finally got hold of a test unit at the local Mediamarkt Mediaworld, and the pen accessory is very nice to use, especially if you’re used to Wacom tablets; that would give a 10" tablet a purpose for me. I’m a bit upset with my iPad’s inability to do precise drawing, to be honest. And since it’s not very commonly known: the Galaxy Notes don’t use capacitive pens, but magnetic ones, just like the above-noted Wacoms — that’s why they are so precise.

Sven Vermeulen a.k.a. swift (homepage, bugs)
IMA and EVM on Gentoo, part 2 (December 29, 2012, 21:42 UTC)

I have been playing with Linux IMA/EVM on a Gentoo Hardened (with SELinux) system for a while and have been documenting what I think is interesting/necessary for Gentoo Linux users when they want to use IMA/EVM as well. Note that the documentation of the Linux IMA/EVM project itself is very decent. It’s all on a single wiki page, but it’s decent and I learned a lot from it.

That being said, I do have the impression that the method they suggest for generating IMA hashes for the entire system does not always work properly. It might be because of SELinux on my system, but for now I’m searching for another method that does seem to work well (I’m currently trying my luck with a find … -exec evmctl based command; see the sketch below). But once the hashes are registered, it works pretty well (well, there’s probably a small SELinux problem where loading a new policy or updating the existing policies seems to generate stale rules so that I have to reboot my system, but I’ll find the culprit of that soon ;-)
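
A rough sketch of what I mean — the exact command is still a work in progress, so take this as the shape of the approach rather than the final invocation (evmctl is part of ima-evm-utils):

# hash every regular file on the root file system only (-xdev),
# writing the security.ima extended attribute; must be run as root
find / -xdev -type f -exec evmctl ima_hash '{}' \;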

The IMA Guide has been updated to reflect recent findings – including how to load a custom policy, and I have also started on the EVM Guide. I think it’ll take me a day or three to finish off the rough edges and then I’ll start creating a new SELinux node (KVM) image that users can use with various Gentoo Hardened-supported technologies enabled (PaX, grSecurity, SELinux, IMA and EVM).

So if you’re curious about IMA/EVM and willing to try it out on Gentoo Linux, please have a look at those documents and see if they assist you (or confuse you even more).

Steve Dibb a.k.a. beandog (homepage, bugs)
znurt.org cleanup (December 29, 2012, 05:36 UTC)

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again.  It was a combination of the portage metadata location moving, and a small bit of sloppy code in part of the import script that made me roll my eyes.  It’s fixed now, but the site still isn’t importing everything correctly.

I’ve been putting off working on it for so long, just because it’s a hard project to get to.  Since I started working full-time as a sysadmin about two years ago, it killed off my hobby of tinkering with computers.  My attitude shifted from “this is fun” to “I want this to work and not have me worry about it.”  Comes with the territory, I guess.  Not to say I don’t have fun — I do a lot of research at work, either related to existing projects or new stuff.  There’s always something cool to look into.  But then I come home and I’d rather just focus on other things.

I got rid of my desktops, too, because soon afterwards I didn’t really have anything to hack on.  Znurt went down, but I didn’t really have a good development environment anymore.  On top of that, my interest in the site had waned, and the whole thing just adds up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason.  Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff.  Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix.  Derp.  That’s what I get for putting stuff off.

One thing I’ve found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me.  At work, I only write shell scripts now (bash) and we use MySQL across the board.  Postgres is an amazing database replacement, and it’s amazing how, even not using it regularly in awhile, it all comes back to me.  I love that database.  Everything about it is intuitive.

Anyway, I was looking through the import code and doing some testing.  I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts.  Looking into it, I found that the MDB2 PEAR package has a memory leak, which kills the scripts because the import runs so many queries.  So, I’m in the process of moving it to PDO instead.  I’ve wanted to look into using it for a while, and so far I like it, for the most part.  Its fetch helper functions are pretty lame, and could use some obvious features like fetching a single value or returning result sets as associative arrays, but it’s good.  I’m going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn’t gonna change at all.  It’ll be faster, and importing the data from portage will be more accurate.  I’ve got bugs on the frontend I need to fix still, but they are all minor and I probably won’t look at them for now, to be honest.  Well, maybe I will, I dunno.

Either way, it’s kinda cool to get into the code again, and see what’s going on.  I know I say this a lot with my projects, but it always amazes me when I go back and I realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database.  I thought it’d be a simple case of reading metadata and throwing it in there, but there’s all kinds of things that I originally wrote, like using regular expressions to get the package components from an ebuild version string.  Fortunately, there’s easier ways to query that stuff now, so the goal is to get it more up to date.

It’s kinda cool working on a big code project again.  I’d forgotten what it was like.


December 27, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened IMA support (December 27, 2012, 20:40 UTC)

Adventurous users, contributors and developers can enable the Integrity Measurement Architecture subsystem in the Linux kernel with appraisal (since Linux kernel 3.7). In an attempt to support IMA (and EVM and other technologies) properly, the System Integrity subproject within Gentoo Hardened was launched a few months ago. And now that Linux kernel 3.7 is out (and stable) you can start enjoying this additional security feature.

With IMA (and IMA appraisal), you are able to protect your system from offline tampering: modifications made to your files while the system is offline will be detected, as their hash values will not match the hash values stored in extended attributes (while the extended attributes themselves are protected through digitally signed values using the EVM technology).
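
For orientation, the kernel options involved look more or less like this — a summary sketch on my part, so check your kernel version for the exact set:

# integrity subsystem with IMA measurement and appraisal (Linux 3.7+)
CONFIG_INTEGRITY=y
CONFIG_IMA=y
CONFIG_IMA_APPRAISE=y
# EVM protects the security extended attributes themselves
CONFIG_EVM=y

A first boot is then typically done with ima_appraise=fix on the kernel command line, so that the hashes can be generated before switching to enforcing mode.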

I’m working on integrating IMA (and later EVM) properly, which of course includes the necessary documentation: concepts and a ima guide for starters, with more to follow. Be aware though that the integration is still in its infancy, but any questions and feedback is greatly appreciated, and bugreports (like bug 448872) are definitely welcome.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Restarting a tinderbox (December 27, 2012, 15:52 UTC)

So after my post about glibc 2.17 we got the ebuild in tree, and I’m now re-calibrating the ~amd64 tinderbox to use it. This sounds like an easy task but it really isn’t so. The main problem is that with the new C library you want to make sure to start afresh: no pre-compiled dependencies should be in, or they won’t be found: you want the highest coverage as possible, and that takes some work.

So how do you re-calibrate the tinderbox? First off you stop the build, and then you have to clean it up. The cleanup sometimes is as easy as emerge --depclean — but in some cases, like this time, the Ruby packages’ dependencies cause a bit of a stir, so I had to remove them altogether with qlist -I dev-ruby virtual/ruby dev-lang/ruby | xargs emerge -C, after which the depclean command actually starts working.
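
Spelled out as a sequence, the cleanup comes down to the same commands given above:

# remove the Ruby packages whose dependencies block the cleanup
qlist -I dev-ruby virtual/ruby dev-lang/ruby | xargs emerge -C
# after which the regular cleanup works again
emerge --depclean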

Of course it’s not a two minutes command like on any other system, especially when going through the “Checking for lib consumers” step — the tinderbox has a 181G of data in its partition (a good deal of which is old logs which I should actually delete at this point — and no that won’t delete the logs in the reported bugs, as those are stored on s3!), without counting the distfiles (which are shared with its host).

In this situation, if there were automagic dependencies on system/world packages, it would actually bail out and I’d have to go manually clean them up. Luckily for me, there’s no problem today, but I have had this kind of problem before. This is actually one of the reasons why I want to keep the world set in the tinderbox as small as possible — right now it consists basically of: portage-utils, gentoolkit (for revdep-rebuild), java-dep-check, Python 2.7 (it’s an old thing, it might be droppable now, not sure), and netcat6 for sending the logs back to the analysis script. I would have liked to remove netcat6 from the list but last time the busybox nc implementation didn’t work as expected with IPv6.

The unmerge step should be straightforward, but unfortunately it seems to be causing more grief than expected, in many cases. What happens is that Portage has special handling for symlinked directories — and after we migrated to use /run instead of /var/run, all the packages that have not been migrated away from using keepdir on it, ebuild-side, will spend much more time at the unmerge stage making sure nothing gets broken. This is why we have a tracker bug, and why I’ve been reporting ebuilds creating the directory, rather than just packages that do not re-create it in the init script. Also, this is when I’m thankful I decided to get rid of XFS, as file deletion there was just way too slow.

Even though Portage takes care of verifying the link-time dependencies, I’ve noticed that sometimes things are broken nonetheless, so depending on what one’s target is, it might be a good idea to just run revdep-rebuild to make sure that the system is consistent. In this case I’m not going to waste the time, as I’ll be rebuilding the whole system in the next step, after glibc gets updated. This way we’re sure that we’re running with a stable base. If packages are broken at this level, we’re in quite the pinch, but it’s not a huge deal.

Even though I’m keeping my world file to the minimum, the world and system set is quite huge, when you add up all the dependencies. The main reason is that the tinderbox enables lots and lots of flags – as I want to test most code – so things like gtk is brought in (by GCC, nonetheless), and the cascade effect can be quite nasty. The system rebuild can easily take a day or two. Thankfully, the design of the tinderbox scripts make it so that the logs are send through the bashrc file, and not through the tinderbox harness itself, which means that even if I get failures at this stage, I’ll get a log for them in the usual place.

After this is completed, it’s finally possible to resume the tinderbox building, and hopefully then some things will work more as intended — for instance, I might be able to get PHP to work again… and I’ll probably change the tinderbox harness to try building things without USE=doc if they fail, as too many packages right now fail with it enabled or, as Michael Mol pointed out, because there are circular dependencies.

So expect me working on the tinderbox for the next couple of days, and then start reporting bugs against glibc-2.17, the tracker for which I opened already, even though it’s empty at the time of writing.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
My personal KDEPIM upgrade (again): laptop (December 27, 2012, 11:40 UTC)

One year after my last blog post on this topic, I encountered some minor difficulties combining KDEPIM-4.4 (i.e. kmail1) and the KDE 4.10 betas. These difficulties are fixed now, and the combination seems to work fine again. Anyway, I became curious about the level of stability of Akonadi-based kmail2 once more. After all, I've been running it continuously over the year on my office desktop with a constant-on fast internet connection, and that works quite well. So, I gave it a fresh try on my laptop too. I deleted my Akonadi configuration and cache, switched to the Akonadi MySQL backend, updated kmail and the rest of KDEPIM to 4.9.4 without migrating, and re-added my IMAP account from scratch (with "Enable offline mode"). The overall use case description is "laptop with a large amount of cached files from an IMAP account, fluctuating internet connectivity". Now, here are my impressions...

  • Reaction time is occasionally sluggish, but overall OK.
  • The progress indicator behaves a bit oddly: it checks the mail folders in seemingly random order and only knows 0% and 100% completion.
  • Random warning messages. It seems that kmail2 uses some features that "my" IMAP server does not understand. So, I'm getting frequent warning notifications that don't tell me anything and that I cannot do anything about: SET ANNOTATION, UID, ... Please either handle the errors, tell the user what exactly goes wrong, or ignore them in case they are irrelevant. Filed as a wish, bug 311265.
  • Network activity sometimes stops working. This sounds worse than it actually is, since in 99% of all cases Akonadi now detects fine that the connection to the server is broken (e.g., after suspend/resume, after switching to a different WLAN, or after enabling a VPN tunnel) and reconnects immediately. In the few remaining cases, re-starting the Akonadi server does the trick. You just have to know what to kick.
  • More problematic: while you're in online mode, any problem with connectivity will make kmail "hang". Clicking on a message leads to an attempt to retrieve it, which requires some response from the network. As it seems to me, all such requests are queued up for Akonadi to handle, and if one does not get a reply, pending requests are stuck in the queue... OK, you might say that this is a typical use case for offline mode, but then I would have to be able to predict exactly when my train enters the tunnel... Compare this to kmail1 disconnected IMAP accounts, where regular syncing would be delayed, but local work remained unaffected.
  • Offline mode is a nice concept, and half a solution to the last problem, but unfortunately it does not work as expected. For mysterious reasons, a considerable part of the messages is not cached locally. I switch my account to offline mode, click on a message, and get an error message "Cannot fetch this in offline mode". Well, bummer. Bug 285935.
  • This may just be my personal taste, but once something goes wrong (e.g., a non-KDE-related crash, battery empty, ...) and the cache becomes corrupted somehow, I'd like to be able to do something from kmail2 without having to fiddle with akonadiconsole. A nice addition would be "Invalidate cache" in the context menu of a mail folder, or some sort of maintenance menu with semi-safe options.
  • Finally... something is definitely going wrong with PGP signatures; the signatures do not always verify in other mail clients. Tracking this down, it seems that CRLF is not preserved in messages; see bug 306005.
On the whole, for the laptop use case the "new" KDEPIM (4.9.4) is now more mature than the last time I tried. I'll keep it now and not downgrade again, but there are still some significant rough edges. The good thing is, the KDEPIM developers are aware of the above issues and debugging is going on, as you can see for example from this blog post by Alex Fiestas (whose use case pretty much mirrors my own).

December 26, 2012
Gnome 3.6 (December 26, 2012, 23:35 UTC)

We had a marathon with Alexandre (tetromino) over the last two weeks to get the Gnome 3.6 ebuilds using the python-r1 eclass variants, EAPI=5 and gstreamer-1. And now it is finally in gentoo-x86, unmasked.

You probably read, heard or have seen stuff about EAPI=5 and new python eclasses before but, in short, here is what it will give you:

  • the package manager will finally know for real which Python version is used by which package and be able to act on it accordingly (no more python-updater once all ebuilds are migrated)
  • EAPI=5 subslots will hopefully put an end to revdep-rebuild usage (see the sketch after this list for how this looks in an ebuild). I already saw it in action while bumping some of the telepathy packages, discovering that empathy was now automatically rebuilt with no further action than emerge -1 telepathy-logger.
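
To illustrate the mechanism, here is a minimal sketch of the EAPI=5 slot operator syntax — the values are illustrative, not copied from the actual ebuilds. In the library's ebuild, the part of SLOT after the slash is the subslot, bumped on every ABI change:

# library ebuild: subslot tracks the library ABI
EAPI=5
SLOT="0/4"

In a consumer's ebuild, the := slot operator records the subslot seen at build time, so the package manager rebuilds the consumer automatically whenever the library's subslot changes:

# consumer ebuild: rebuild me when the dependency's subslot moves
RDEPEND="net-im/telepathy-logger:="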

No doubt lots of people are going to love this.

Gnome 3.6 probably still has a few rough edges, so please check Bugzilla before filing new reports.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GLIBC 2.17: what's going to be a trouble? (December 26, 2012, 11:27 UTC)

So LWN reports just today on the release of GLIBC 2.17, which solves a security issue and looks like it was released mostly to support the new AArch64 architecture – i.e. arm64 – but the last entry in the reported news is possibly going to be a major headache, and I’d better post about it already so that we have a reference for it.

I’m referring to this:


The `clock_*' suite of functions (declared in <time.h>) is now available directly in the main C library. Previously it was necessary to link with -lrt to use these functions. This change has the effect that a single-threaded program that uses a function such as `clock_gettime' (and is not linked with -lrt) will no longer implicitly load the pthreads library at runtime and so will not suffer the overheads associated with multi-thread support in other code such as the C++ runtime library.

This is in my opinion the most important change, not only because, as it’s pointed out, C++ software gets quite an improvement from not linking to the pthreads library, but also because it’s the only change listed there that I can already foresee trouble with. And why is that? Well, that’s easy. Most of the software out there will do something along these lines to see what library to link to when using clock_gettime (always passing -lrt was not a good idea because librt does not exist on most other operating systems out there, including FreeBSD and Mac OS X).

AC_SEARCH_LIBS([clock_gettime], [rt])

This is good, because it’ll try either librt, or no library at all (“none required”), which means that it’ll work on old GLIBC systems, new GLIBC systems, FreeBSD, and OS X — there is something else on Solaris if I’m not mistaken, which can be added up there, but I honestly forgot its name. Unfortunately, this can easily end up in trouble when software is underlinked.

With the old GLIBC, it was possible to link software with just librt and have it use the threading functions. Once librt is dropped automatically by the configuration, threading libraries will no longer be brought in by it, and that might break quite a few packages. Of course, most of these would already have been failing with gold, but as you may remember, I wasn’t able to get through the whole tree with it, and I haven’t set up a tinderbox for it again yet (I should, but it’s trouble enough with two!).

What about --as-needed in this picture? A full hard --as-needed implementation would fail on the underlinking, where pthreads should have been linked explicitly, but it would also make sure not to link librt when it’s not needed, which would make it possible to improve the performance of the code (by skipping over pthreads) even when the configure scripts are not written properly (for instance when they use AC_CHECK_LIB instead of AC_SEARCH_LIBS). But since it’s not the linkage of librt that causes the performance issue, but rather the one for pthreads, it actually works out quite well, even if some packages might keep an extra linkage to librt which is not used.

There is a final note that I need to write about, and it honestly worries me quite a bit more than all those above. The librt library has not been dropped — only the clock functions have been moved over to the main C library; the library keeps the asynchronous and list-based I/O operation interfaces (AIO and LIO), the POSIX message queue interfaces, the shared memory interfaces, and the timer interfaces. This means that if you’re relying on a clock_gettime test to bring in librt, you’ll end up with a failing package. Luckily for me, I’ve avoided that situation already on feng (which uses the message queue interface), but as I said, I foresee trouble for at least some packages (see the sketch below).
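
To make the pitfall concrete, here is a minimal configure.ac sketch — my own illustration, not feng’s actual code:

# Fragile after glibc 2.17: clock_gettime is now found in libc itself,
# so "none required" wins, -lrt is never added, and any use of the
# message queue functions fails to link.
AC_SEARCH_LIBS([clock_gettime], [rt])

# Robust: test for the symbol you actually use; mq_open still lives
# in librt.
AC_SEARCH_LIBS([mq_open], [rt],
    [], [AC_MSG_ERROR([POSIX message queues are required])])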

Well, I guess I’ll just have to wait for the ebuild for 2.17 to be in the tree, and run a new tinderbox from scratch… we’ll see what gets us there!

December 25, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Trouble in GNU: an opportunity for improving? (December 25, 2012, 18:54 UTC)

I have posted a note about the way the FSF (America) started acting like a dictator with the GNU project and the software maintained under its umbrella, which led to the splitting of GnuTLS — something that Nikos is not currently commenting on, simply because he’s now negotiating what’s going to happen with it.

Well, the next step has been Paolo stepping down as a GNU maintainer, after releasing a new version of sed. This actually made me think a bit more. What’s going on with sed, grep and the like? Well, most likely they’ll get a new maintainer and keep going that way. But should we see this as an opportunity? You probably remember that some time ago I suggested we could be less GNU — or at least, less reliant on GNU.

So while I’m definitely not going to fork sed myself ­– I have enough trouble with unpaper especially considering that while in America I didn’t have a scanner, which is a necessity to develop it – but there definitely is room for improvement with it. First of all, it would be a good choice to start with, to get rid of the damn gnulib and eventually implementing what is an extension of glibc itself as an external library (something like libgsupc). Even if this didn’t work on anything but FreeBSD and Linux, it would still be an improvement, and I’m pretty sure it would be feasible without needing that hairy mess of code that, in the source code for sed takes five times as much as the sed sources themselves — 200KiB are the sources for the program, 1.1MiB is the gnulib copies.

Having a new, much less political project to oversee the development of core system utilities would also most likely consolidate some projects that are currently being developed outside of GNU altogether, or that simply don’t fit its scope because they are Linux-specific, which would probably make for a better end-user experience. Plus, things like keeping man pages actually up to date, instead of relying on the info manuals, would almost certainly help!

So if any of you can think of other ways to improve the GNU utilities by breaking out of GNU’s boundaries (which is what Nikos and Paolo seem to be striving for), maybe it is possible to get something that is better for everybody and Free at the same time. Myself, I know I need to spend some time removing the dependency on readline that is present in GnuTLS just for the utilities…

December 24, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Why can't I get easy hardware (December 24, 2012, 15:12 UTC)

When I bought my Latitude, I complained that it seemed to me more and more like a mistake — until the kernel started shipping with the correct (and fixed) drivers, and the things that originally didn’t work right (the SD card reader, the shutdown process, the touchpad, …) started working quite nicely. As of September 2011 (a year and a quarter after I bought it), between Linux and firmware updates from Dell and Broadcom, the laptop worked almost completely — the only part still missing is the fingerprint reader, which I really don’t care that much about.

Recently, you probably have seen my UEFI post, where I complained that I couldn’t install Sabayon on the new Zenbook (which is where I’m writing from, right now, on Gentoo). Well, that wasn’t the only problem I had with this laptop, and I should really start reporting the issues upstream to the kernel, but in the meantime let me write down some notes here.

First off, the keyboard backlight is nice and all, but I don’t need it – I learnt to touch-type when I was eight – so it would just be a waste of battery. While the keys are reported correctly, and upower supports setting the backlight, at least the stable version of KDE doesn’t seem to support the backlight setting. I should ask my KDE friends if they can point me in the right direction. Another interesting point is that while the backlight is turned on at boot, it stays off after suspension — which is probably a bug in the kernel, but it works fine for me.

Speaking about things not turning back on after suspension, the WLAN LED on the keyboard does not turn back on at resume. And related to that, the rfkill key doesn’t seem to work that well either. It’s not a big deal, but it’s a bit bothersome, especially since I would like to turn off the bluetooth adapter only (and since that’s supposedly hardware-controlled, it should get me some more battery life).

The monitor’s backlight is even more troublesome: first problem is deciding who should be handling it — i’s either the ACPI video driver (by default), the ASUS WMI driver, or the Intel driver — of the three, the only one that make it work is the Intel driver, and I’m not even sure if that’s actually controlling the backlight or just the tint on the screen, even though, when set to zero, it turns the screen OFF, not just display it as black. It does make it bearable though.

The brightness keys on the keyboard don’t work, by the way, nor does the one that should turn the light sensor on and off — the latter isn’t even recognized as a key by the asus-wmi driver, and I can’t be sure of the correct device ID that I should use to turn said light sensor on/off. After I hacked the driver to not expose either the ACPI or the WMI brightness interfaces, I’m able to set the brightness from KDE at least — but it does not seem to stick: if I turn it down, after some time it goes back to the maximum (when the power is connected, at least).

And finally, there is the matter of the SD card reader. Yesterday I went to use it, and found out that… it didn’t work. Even though it’s a USB device, it’s not mass-storage — it’s a Realtek USB MMC device, which does not use the standard USB interface for MMC readers at all! After some googling around, I found that Realtek actually released a driver for it, and after some more digging I found out that said driver is currently (3.7) in the staging drivers’ tree as a virtual SCSI driver (with its own MMC stack) — together with a PCI-E peer, which has already been rewritten for the next release (3.8) as three split drivers (an MFD base, an MMC driver, and a MemoryStick driver). I tried looking into porting the USB one as well, but it seems to be a lot of work, and Realtek (or rather, Realsil) seems to already be working on porting it to the real kernel, so it might be worth waiting.

To be fair, what drove the idea of working on the SD card driver away from me is that to get an idea of what’s going on I have to run 3.8 — and RC1 panics as soon as I re-connect the power cable. So even though I would like to find enough time to work on some kernel code, this is unlikely to happen now. I guess I’ll spend the next three days working on Gentoo bugs; then I have a customer to take care of, so this is going to drop off my list quite quickly.

December 23, 2012
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: Why privacy matters (December 23, 2012, 22:13 UTC)

I am sharing this video because it makes a few interesting points on the value of privacy, especially some that are helpful in explaining privacy to others. Two examples:

Cory Doctorow (at 00:13):

“Privacy is the right to make a mistake.”

Christopher Soghoian (at 03:07):

“Everyone has something to hide. We have curtains on our windows, we wear clothes, we don’t broadcast our salaries or our medications [..].”

PS: This video was brought to my attention by a post at Netzpolitik.org.

December 22, 2012
Stuart Longland a.k.a. redhatter (homepage, bugs)
End of the world predictions (December 22, 2012, 07:21 UTC)

This is a little old — it’s been kicking around on my computer for over 10 years now — but it seems especially relevant given what some thought of the Mayan calendar…

December 21, 2012


Fig. 1: End of World banner

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers, ready to rock the end of the world (or at least mid-winter/Southern Solstice)! The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.6.8, Xorg 1.12.4, KDE 4.9.4, Gnome 3.4.2, XFCE 4.10, Fluxbox 1.3.2, Firefox 17.0.1, LibreOffice 3.6.4.3, Gimp 2.8.2-r1, Blender 2.64a, Amarok 2.6.0, Mplayer 2.2.0, Chromium 24.0.1312.35 and much more ...
  • If you want to see if your package is included, we have generated both the x86 package list and the amd64 package list. There is no new FAQ or artwork for the 20121221 release, but you can still get the 12.0 artwork plus DVD cases and covers for the 12.0 release, and view the 12.1 FAQ (persistence mode is not available in 20121221).
  • Special Features:
    • ZFSOnLinux
    • Writable file systems using AUFS so you can emerge new packages!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20121221 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit userland while using the provided 32-bit userland. The livedvd-amd64-multilib-20121221 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Creating a tumblelog with blohg (December 21, 2012, 05:39 UTC)

Warning: This post relies on unreleased blohg features. You will need to install blohg from the Mercurial repository or use the live ebuild (=www-apps/blohg-9999), if you are a Gentoo user. Please ignore this warning after blohg-1.0 release.

Tumblelogs are old stuff, but services like Tumblr have popularized them a lot recently. Tumblelogs are a quick and simple way to share random content with readers: they can be used to share a link, a photo, a video, a quote, a chat log, etc.

blohg is a good blogging engine, we know, but what about tumblelogs?!

You can already share videos from Youtube and Vimeo, and you can share most of the other stuff manually, but that is boring, and diverges from the main objective of tumblelogs: simplicity.

To solve this issue, I developed a blohg extension (Yeah, blohg-1.0 supports extensions! \o/ ) that adds some cool reStructuredText directives:

quote

This directive is used to share quotes. It will create a blockquote element with the quote and add a signature with the author name, if provided.

Usage example:

.. quote::
   :author: Myself

   This is a random quote!

chat

This directive is used to share chat logs. It will add a div with the chat log, highlighted with Pygments.

Usage example:

.. chat::

   [00:56:38] <rafaelmartins> I'm crazy.
   [00:56:48] <rafaelmartins> I chat alone.

You can see the directives in action on my shiny new tumblelog:

http://rafael.martins.im/

The source code of the tumblelog, including the blohg extension and the mobile-friendly templates, is available here:

http://hg.rafaelmartins.eng.br/blogs/rafael.martins.im/

I have no plans to release this extension as part of blohg, but feel free to use it if you find it useful!

That's all!

December 20, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Why my Munin plugins are now written in Perl (December 20, 2012, 21:52 UTC)

This post is an interlude between Gentoo-related posts. The reason is that I have one in drafts that requires me to produce some results that I don’t have yet, so it’ll have to wait for the weekend or so.

You might remember that my original IPMI plugin was written in POSIX sh and awk, rather than bash and gawk like the original one. Since then, the new plugin (which, as it turns out, might become part of the 2.1 series, though not to replace both of the old ones, since RHEL and Fedora don’t package a new enough version of FreeIPMI) has been rewritten in Perl, using neither sh nor awk. Similarly, I’ve written a new plugin for sensors, which I also wrote in Perl (although in this case the original one used it too).

So why did I learn a new language (I never programmed in Perl before six months ago) just to get these plugins running? Well, as I said in the other post, the problem was calling the same command so many times, which is why I wanted to go multigraph — but when dealing with variables, sticking to POSIX sh is a huge headache. One common way to handle this is to save the output of a command to a temporary directory and parse it multiple times, but that’s quite a pain, as it might require I/O to disk, and it also means executing more and more commands. Doing the processing in Perl means that you can keep things in variables, or even parse the output once and split it into multiple objects to be used for output later, which is what I’ve been doing for parsing FreeIPMI’s output.

But why Perl? Well, Munin itself is written in Perl, so while my usual language of choice is Ruby, the plugins are much more usable if written in Perl. Yes, there are some alternative nodes written in C and shell, but in general it’s a safe bet that these plugins will be executed on a system that at least supports Perl — the only system I can think of that wouldn’t be able to do so would be OpenWRT, but that’s a whole different story.

There are a number of plugins written in Python and Ruby, some in the official package, but most in the contrib repository and they could use some rewriting. Especially those that use net-snmp or other SNMP libraries, instead of Munin’s Net::SNMP wrapper.

But while the language is of slight concern, some of the plugins could use rewriting simply to improve their behaviour. As I’ve said, using multigraph it’s possible to reduce the number of times a plugin is executed, and thus the number of calls to the backend, whatever that is (a program, or access to /sys). So in many cases, plugins that support multiple “modes” or targets through wildcarding can be improved by making them a single plugin. In some cases, it’s even possible to merge multiple plugins into one, as I did with the various apache_* plugins shipping with Munin itself, replaced on my system by apache_status as provided by the contrib repository, which fetches the server status page only once and then parses it to produce the three graphs that were, before that, created by three different plugins with three different fetches.

Another important trick up our sleeves while working on Munin plugins is dirty config (sketched in shell below), which basically means that (when the node indicates support for it) the plugin may output the values as well as the configuration during the config execution — this saves you one full round trip to the node (to fetch the data), and usually also means one less call to the backend. In particular, with these changes my IPMI plugin went from requiring six calls to ipmi-sensors per update, for the three graphs, to just one. And since it’s either IPMI on the local bus (which might take some time to access) or over LAN (which takes even more time), the difference is definitely visible both in timing and in traffic — one of the servers at my day job is monitoring another seven servers (which can’t be monitored through the plugin locally), which means we went from 42 to 7 calls per update cycle.
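
To show the idea in shell terms, here is a minimal sketch of a plugin honoring dirty config — the field and command names are illustrative, and the real plugins discussed here are in Perl and considerably more involved:

#!/bin/sh
backend_query() {
    # stand-in for the expensive call to the monitored backend
    echo 42
}

case "$1" in
config)
    echo "graph_title Example backend metric"
    echo "metric.label example"
    # The node advertises dirty config support via this variable; when
    # set, printing the values right here saves a second plugin run
    # (and thus a second call to the backend).
    if [ "${MUNIN_CAP_DIRTYCONFIG:-0}" = "1" ]; then
        echo "metric.value $(backend_query)"
    fi
    ;;
*)
    echo "metric.value $(backend_query)"
    ;;
esac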

So if you use Munin, and either have had timeout issues or have some time at hand to improve some plugins, you might want to follow what I’ve been doing, and start improving or rewriting plugins to support multigraph or dirty config, and thus improve their performance.

Jeremy Olexa a.k.a. darkside (homepage, bugs)

I was in Budapest for 11 days. I couchsurfed there, and it was a longer stay at someone’s house than I normally have, by far. So, thanks Paul! Budapest was nice; it reminded me a lot of Prague. While I was there I visited a Turkish bath, which was a very interesting experience. Imagine a social, public “hot tub & sauna” with naturally hot water. I found a newly minted Crossfit gym, RC Duna, that opened up its doors for a traveller — so gracious. Even though I didn’t get to see the opera in Vienna, I went to the Opera House in Budapest. It was my first time seeing a ballet, The Nutcracker. There were Christmas markets in Budapest too; I actually liked the Budapest ones more than the Viennese markets. I also helped to organize the first (known) Hungarian Gentoo Linux Beer Meeting :)

Then I took a train to Belgrade, Serbia. The train took 8+ hours. I couchsurfed again, for 3 nights. Had some wonderful chats with my host, Ljubica. She learned about US things, I learned about Serbian things — just what you could hope for: a cultural exchange via couchsurfing. I was her first US guest. Later on, an Argentinian fellow stayed there too, and we had conversations about worldly topics, like “why are borders so important and do we need them?”, and speculated on why Belgium’s lack of government even worked. Then, perhaps the best part, I got to try authentic mate. In my opinion there wasn’t much to actually see in Belgrade during the winter; I did walk around and went to the fortress. Otherwise, I nursed the head cold I caught on the train.

I took the bus to Skopje, FYROM. I stayed in Skopje for 3 nights at a nice independent hostel, Shanti Hostel (recommended). I walked around the center (not much to see), walked through the old bazaar, and ate some good food. The dishes in Central Europe include lots of meat. I embarked on a mission to find the semi-finalist entry for the next 7 wonders of the world, Vrelo Cave, but I got lost and took a 10km hike along the river, it was spectacular! And peaceful. Perfect really. I wanted to see what was at the end of the trail, but eventually turned around because it didn’t end. On the way back, I slipped and came within feet of going in the drink. As my legs straddled a tree and my feet went through the branches that were clearly meant to handle no weight, I used that split second to be thankful. I used the next second to watch something black go bounce, …, bounce, SPLASH. It is funny how you can go from thankful to cursing about your camera in the river so quickly. I got up, looked around and thought about how I got off the path, dang. Being the frugal man I am, I continued off the path and went searching for my camera. Well, that was bad because I slipped again. As I was sliding on my ass and grabbing branches, I eventually stopped. It was at this point, I knew my camera was gone since I could see the battery popped out and was in the water. Le sigh. C’est la vie.

So, no pictures, friends. I had a few hundred pictures that I didn’t upload and they are gone. I might buy a camera again but for now, you will just have to take my word for it. My Mom says she will send me a disposable camera :D ha.

I’m off to Greece at 6am…

Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching policy types in Gentoo/SELinux (December 20, 2012, 09:31 UTC)

When you are running Gentoo with SELinux enabled, you will be running with a particular policy type, which you can derive from either /etc/selinux/config or from the output of the sestatus command. As a user on our IRC channel had some issues converting his strict-policy system to mcs, I thought about testing it out myself. Below are the steps I took and the reasoning why (and I will update the docs to reflect this accordingly).

Let’s first see if the type I am running at this moment is indeed strict, and that the mcs type is defined in the POLICY_TYPES variable. This is necessary because the sec-policy/selinux-* packages will then build the policy modules for the other types referenced in this variable as well.

test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             strict
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              disabled
Policy deny_unknown status:     denied
Max kernel policy version:      28
 
test ~ # grep POLICY_TYPES /etc/portage/make.conf
POLICY_TYPES="targeted strict mcs"

If you notice that this is not the case, update the POLICY_TYPES variable and rebuild all SELinux policy packages using emerge $(qlist -IC sec-policy) first.
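A sketch of that sequence (qlist is provided by app-portage/portage-utils):

test ~ # vim /etc/portage/make.conf
[... add the missing type to POLICY_TYPES ...]
test ~ # emerge $(qlist -IC sec-policy)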

Let’s see if I indeed have policies for the other types available and that they are recent (modification date):

test ~ # ls -l /etc/selinux/*/policy
/etc/selinux/mcs/policy:
total 408
-rw-r--r--. 1 root root 417228 Dec 19 21:01 policy.27
 
/etc/selinux/strict/policy:
total 384
-rw-r--r--. 1 root root 392168 Dec 19 21:15 policy.27
 
/etc/selinux/targeted/policy:
total 396
-rw-r--r--. 1 root root 402931 Dec 19 21:01 policy.27

Great, we’re now going to switch to permissive mode and edit the SELinux configuration file to reflect that we are going to boot (later) into the mcs policy. Only change the type – I will not boot in permissive mode, so SELINUX=enforcing can stay.

test ~ # setenforce 0
 
test ~ # vim /etc/selinux/config
[... set SELINUXTYPE=mcs ...]

You can run sestatus to verify the changes, but be aware that, while the command does say that the mcs policy is loaded, this is not (yet) the case. The mcs policy is merely defined as the policy to load:

test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              disabled
Policy deny_unknown status:     denied
Max kernel policy version:      28

So let’s load the mcs policy, shall we?

test ~ # cd /usr/share/selinux/mcs/
test mcs # semodule -b base.pp -i $(ls *.pp | grep -v base | grep -v unconfined)
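As an optional sanity check, you can list the modules now present in the active policy store:

test mcs # semodule -l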

Next we are going to relabel all files on the file system, because the mcs policy adds in another component in the context (a sensitivity label – always set to 0 for mcs). We will also re-do the setfiles steps done initially while setting up SELinux on our system. This is because we need to relabel files that are “hidden” from the current file system because other file systems are mounted on top of it.

test mcs # rlpkg -a -r
Relabeling filesystem types: btrfs ext2 ext3 ext4 jfs xfs
Scanning for shared libraries with text relocations...
0 libraries with text relocations, 0 not relabeled.
Scanning for PIE binaries with text relocations...
0 binaries with text relocations detected.
 
test mcs # mount -o bind / /mnt/gentoo
test mcs # setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/dev
test mcs # setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/lib64
test mcs # umount /mnt/gentoo

Finally, edit /etc/fstab and change all rootcontext= parameters to include a trailing :s0, otherwise the root contexts of these file systems will be illegal (in the mcs-sense) as they do not contain the sensitivity level information.

test mcs # vim /etc/fstab
[... edit rootcontext's to now include ":s0" ...]
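As an illustration, a tmpfs mount with a rootcontext would change roughly as follows (the mount point and filesystem here are just placeholders):

# before
tmpfs  /tmp  tmpfs  defaults,rootcontext=system_u:object_r:tmp_t  0 0
# after
tmpfs  /tmp  tmpfs  defaults,rootcontext=system_u:object_r:tmp_t:s0  0 0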

There ya go. Now reboot and notice that all is okay, and we’re running with the mcs policy loaded.

test ~ # id -Z
root:sysadm_r:sysadm_t:s0-s0:c0.c1023
test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     denied
Max kernel policy version:      28

December 18, 2012
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: lost letters (December 18, 2012, 09:17 UTC)

a new song: lost letters by ioflow

prepared improvisation for the 50th disquiet junto, morse beat.

the assignment was to encode a word or phrase with the Morse method, and then translate that sequence into the song’s underlying rhythm.

i chose the meaning of my name, “the Lord is salvation.” i looked at the resulting dashes and dots and treated them as sheet music, improvising a minor-key motif for piano, using just my right hand.

with the basic sketch recorded, i duplicated an excerpt and ran it through a vintage tape delay effect, putting it in the background almost like a loop. i set to work adding a few notes here and there, some of them reversed, running into more tape delays; contrasting their sonic character with the main melody. the loop excerpt repeats a few times, occasionally transformed by offset placement with the main theme, or reinforced by single note chord changes.

from a very few audio fragments, a mournful story emerged. echoing piano lines and uncovered memories. i did my best to vary the structure while keeping the mood and emotions, but this is still pretty hasty work; i only had a few minutes to arrange this piece before the deadline, due to software issues with ardour 3 beta. ardour crashes every time i attempt to process an audio clip, such as reversing or stretching it. i had to separately render those segments with renoise, then import them to ardour.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio 3.0 (December 18, 2012, 07:57 UTC)

Yay, we just released PulseAudio 3.0! I’m not going to rehash the changelog; you can find it in the release announcement as well as the longer release notes.

I would like to thank the 36 contributors over the last 6 months who have made this release what it is and continue to demonstrate what a vibrant community we have!

December 17, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The boot process (December 17, 2012, 12:24 UTC)

One thing that is obvious from both the mailing lists and the comments on my previous post is that there are quite different expectations of what the boot process involves — which is to be expected, since in Gentoo the boot process, like many other things, is totally customized on a per-user basis.

As Greg and William said before, the whole point of supporting (or not) a split /usr approach is not something that is tied that much to udev itself, but is more a matter of what is involved in the boot process at all. Reimar pointed that out in the comments to the other post, and I guess that’s the one thing we now have to consider a bit more thoroughly. So let’s see if I can analyse it a bit more closely.

Let me put a foreword here. The biggest problem regarding udev and split /usr is that, while it’s still possible to select whether to search for rules in the rootfs or in /usr, it didn’t, and maybe still doesn’t, search both paths at the same time. That is probably the only thing that I count as total nonsense: it’s breakage for breakage’s sake. And it realistically is one of the things that made many Gentoo users upset with Lennart and Kay: the migration of rules is easy for binary distributions – you just rebuild all the packages installing in the old path – but it’s a pain in the neck for Gentoo users; and the cost of searching both paths is unlikely to be noticeable.

So what do we consider as part of the boot? Well, as I said in the other post, if you expect to be able to log in without /usr, you’re probably out of luck, if you use PAM — while the modules are still available on the rootfs, many of them require libraries in /usr — ConsoleKit, Kerberos, PKCS#11, … This is also one of the reasons why I’m skeptical about just teaching Portage to move dependencies to the rootfs: it’ll probably move a good deal of libraries to the rootfs, especially for a desktop, which will in turn make the “lightweight rootfs” option moot.

Another reason why I don’t think that the automatic move is going to solve the problem, is that while it’s possible to teach Portage to move the libraries, it’s impossible to teach it to move plugins, or the datafiles that those libraries use. More about that in the next paragraphs.

So let’s drop the login issue: we don’t expect to be able to log in the system without /usr so it’s not an option. The next thing that is going to be a problem is coldplugging (I’ll consider hotplugging during boot as hotplugging but it might actually be more complex). The idea of coldplugging is that you want to start a given piece of software if, at boot, you find a given device connected. As an example you might want to start pcscd if a smartcard reader (be it a CCID one or another driver) is found, or ekeyd if an EntropyKey is connected, without the user having added them to the runlevels manually.

What’s the problem with this then? Well, the coldplugged services might require /usr for both the service and the libraries, which means you can’t run them without /usr. The udev-postmount service was, if I recall correctly, created just to deal with that: udev kept track of which rules failed to execute, and re-executed them after /usr was mounted. But this relied on udev’s own handling of rule re-execution, which I forget whether it still exists. If not, then that’s a big deal, but not something I want to care about, to be honest. An easy way out is to declare coldplugging unsupported if your coldplugged services need /usr and you have it split, but that’s still quite hacky.

This blog post was supposed to be a bit longer, and provide among other things a visual representation of the boot-time service dependencies. It turns out now that I left it open for a whole week without being able to complete it as I intended. In particular, the graphical representation is messy because there are so many involved services, that on my laptop it’s seriously unreadable. I’ve been using the representation as a debug method to improve on my service files though, and I’ll write about that. It’s going to enter OpenRC’s git soon.

This said, this “half” post is good enough to read as it is. I’ll write more about it later on.

December 16, 2012
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
The difference between Ubuntu and Gentoo ;) (December 16, 2012, 22:34 UTC)

This gem comes from the xda developers forums; thanks barry99705!

"Using/installing Ubuntu is like buying a car. It may have a few features you'll never need or use, and might need to have a couple features added as aftermarket parts.

Using/installing Gentoo is like buying a pile of sheet metal, a few rubber trees, a small pile of copper, a pile of sand, and an oil well. Then you have to cut and fabricate the car's body from the sheet metal, extract the rubber from the trees, then use that to make the tires and all the seals on the car. Use the pile of copper to make all the wires, and use the leftover rubber (you did save the scraps, didn't you?) to make the insulation. Melt down the pile of sand to make the windshield, side and back windows, also the headlights and lights themselves. Then you need to extract the crude oil from the well to refine your own engine oil and gas. In the end, you have a car created to your exact specifications (if you know what the hell you're doing) that may or may not be any better than just buying a car off the lot."

Of course I should additionally mention that Gentoo provides awesome documentation for all the steps and most of the actual assembly work is done single-handedly by portage!

December 15, 2012
Richard Freeman a.k.a. rich0 (homepage, bugs)
Gentoo and Copyright Assignments (December 15, 2012, 13:43 UTC)

A topic that has been fairly quiet for years has roared into life on a few separate occasions in the last month within the Gentoo community: copyright assignments. The goal of this post is to talk a little about the issues around these as I see them. I’ll state upfront that I’m not married to any particular approach.

But first, I think it is helpful to consider why this topic is flaring up. The two situations I’m aware of where this has come up in the last month or so both concern contributions (willing or not) from outside of Gentoo. One concerns a desire to be able to borrow eclass code from downstream distros like Exherbo, and the other is the eudev fork. In both cases the issue is with the general Gentoo policy that all Gentoo code have a statement at the top to the effect of “Copyright 2012 Gentoo Foundation.”

Now, Diego has already blogged about some of the issues created by this policy, and I want to set that aside for the moment. Regardless of whether the Foundation can lay claim to ownership of copyright on past contributions, the question remains: should Gentoo aim to have copyright ownership (or something similar) of all Gentoo work rest with the Foundation?

Right now I’m reaching out to other free software organizations to understand their own policies in this area. Regardless of whether we want to have Gentoo own our copyrights or not there are still legal questions around what to put on that copyright line, especially when a file is an amalgamation of code originated both inside and outside of Gentoo, perhaps even by parties who are hostile to the effort. I can’t speak for the Trustees as a whole, but I suspect that after gathering info we’ll try to have some open discussion on the lists, and perhaps even have a community-wide vote before making new policy. I don’t want to promise that – in fact I’d recommend that any community-wide vote be advisory only unless a requirement for supermajority were set, as I don’t want half the community up in arms because a 50.1% majority passed some highly unpopular policy.

So, what are some of the directions in which Gentoo might go? Why might we choose to go in these directions? Below I outline some of the options I’m aware of:

Maintain the status quo
We could just leave the issue of copyright assignment somewhat ambiguous, as has been done. If Gentoo were forced to litigate over copyright ownership right now, an argument could be made that because contributors willingly allowed us to stick that copyright notice on our files, and made their contributions with knowledge of our policies, they have given implicit consent to our doing so.

I’m not a big fan of this approach – it has the virtue of requiring less work, but really has no benefits one way or the other (and as you’ll read below, there are benefits from declaring a position one way or the other).

This requires us to come up with a policy around what goes on the copyright notice line. I suspect that there won’t be much controversy for Gentoo-originated work like most ebuilds, as there isn’t much controversy over them now. However, for stuff like eudev or code borrowed from other projects this could get quite messy. With no one organization owning much of the code in any file the copyright line could become quite a mess.

Do not require copyright assignment
We could just make it a policy that Gentoo would aim to own the name Gentoo, but not the actual code we distribute. This would mean that we could freely accept any code we wished (assuming it was GPL or CC BY-SA compatible per our social contract). This would also mean that Gentoo as an organization would find it difficult to pursue license violations, and future relicensing would be rather difficult.

From the standpoint of being able to merge outside code this is clearly the preferred solution. This approach still carries all the difficulties of managing the copyright notice, since again no one organization is likely to hold the majority of copyright ownership of our files. Also, if we were to go this route we should strongly consider requiring that all contributions be licensed under GPL v2+, and not just GPL v2. Since Gentoo would not own the copyright, if we ever wanted to move to a newer GPL version we would not have the option to do so unless this were done.

Gentoo would still own the name Gentoo, so from a branding/community standpoint we’d have a clear identity. If somebody else copied our code wholesale the Foundation couldn’t do much to prevent this unless we retroactively asked a bunch of devs to sign agreements allowing us to do so, but we could keep an outside group from using the name Gentoo, or any of our other trademarks.

Require copyright assignment
We could make it a policy that all contributions to Gentoo be made in conjunction with some form of copyright assignment, or contributor licensing agreement. I’ll set aside for now the question of how exactly this would be implemented.

In this model Gentoo would have full legal standing to pursue license violations, and to re-license our code. In practice I’m not sure how likely we’d actually be to do either. The copyright notice line would be easy to manage, even if we made the occasional exception to the policy, since the exceptions could of course be managed as exceptions as well. Most likely the majority of the code in any file would only be owned by a few entities at most.

The downside to this approach is that it basically requires turning away code, or making exceptions. Want to fork udev? Good luck getting them to assign copyright to Gentoo.

There could probably be blanket exceptions for small contributions which aren’t likely to create questions of copyright ownership. And we could of course have a transition policy where we accept outside code but all modifications must be Gentoo-owned. Again, I don’t see that as a good fit for something like eudev if the goal is to keep it aligned with upstream.

I think the end result of this would be that work that is outside of Gentoo would tend to stay outside of Gentoo. The eudev project could do its thing, but not as a Gentoo project. This isn’t necessarily a horrible thing – OpenRC wasn’t really a “Gentoo project” for much of its life (I’m not quite sure where it stands at the moment).

Alternatives
There are in-between options as well, such as encouraging the voluntary assignment/licensing of copyright (which is what KDE does), or dividing Gentoo up into projects we aim to own or not. So, we might aim to own our ebuilds and the essential eclasses and portage, but maybe there is the odd eclass or side project like eudev that we don’t care about owning. Maybe we aim to own new contributions (either all or most).

There are good things to be said for a KDE-like approach. It gives us some of the benefits of attribution, and all of the benefits of not requiring attribution. We could probably pursue license violations vigorously as we’d likely hold control of copyright over the majority of our work (aside from things like eudev – which obviously aren’t our work to begin with). Relicensing would be a bit of a pain – for anything we have control over we could of course relicense it, but for anything else we’d have to at least make some kind of effort to get approval. Legally that all becomes a murky area. If we were to go with this route again I’d probably suggest that we require all code to be licensed GPL v2+ or similar just to give us a little bit of automatic flexibility.

I’m certainly interested in feedback from the Gentoo community around these options, things I hadn’t thought of, etc. Feel free to comment here or on gentoo-nfp.



December 14, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My take on the separate /usr issue (December 14, 2012, 19:18 UTC)

This is a blog post I would have definitely preferred not to write — it’s a topic that honestly does not touch me that much, for a few reasons I’ll explore in a moment, and at the same time it’s one that is quite controversial, as it has quite a few meanings layered one on top of the other. Since I’m writing this anyway, I would first like to make sure that readers know who I am, and why I’m probably going to just delete comments that tell me that I don’t care about compatibility with older systems and other operating systems.

My first project within Gentoo has been Gentoo/FreeBSD — I have a (sometimes insane) interest in portability with operating systems that are by far not mainstream. I’m a supporter of what I define “software biodiversity”, and I think that even crazy experiments have the right to exist, if anything to learn tricks and issues to avoid. So please don’t give me that kind of crap I noted above.

So, let’s see — I generally have little interest in keeping things around just for the sake of it, and as I wrote a long time ago I don’t use a separate /boot in most cases. I also generally dislike legacies for the sake of legacies. I guess it’s thus a good idea to start looking at which legacies bring us to the point of discussing whether /usr should be split. If it’s not to be split, there’s no point debating supporting split /usr, no?

The first legacy, which is specific to Gentoo, is tied to the fact that our default portage tree is set to /usr/portage, and that the ebuild tree itself, the source files (distfiles), and the built binary packages are all stored there. This particular tree is hungry both in disk space and, even more so, in inodes. Since both the tree and, in general, the open source projects we package keep growing, the amount of these two resources we need increases as well; and since they live by default under /usr, it’s entirely possible that, if this tree’s resources are allocated statically when partitioning, a point is reached where there isn’t enough space, or there aren’t enough inodes, to allocate anything in it. If /usr/portage resides in the root filesystem, it’s also very possible, if not very likely, that the system would stop working entirely because there is not enough space available on it.

One solution to this problem is to allocate /usr/portage its own partition — I still don’t like that much as an option, because /usr is supposed to be, according to the FHS/LSB, for read-only data. Most other distributions use subdirectories of /var, as that’s what it is designed for. So why are we using /usr? Well, it turns out that this is something that was inspired by FreeBSD, where /usr is used for just about everything, including temporary directories and other similar uses. Indeed, /usr/portage finds its peer in /usr/ports, which is where Daniel seems to have found the inspiration to write Portage in the first place. It should be an easy legacy to overcome, but migrating it is probably tricky enough that nobody has done so yet. Too bad.
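For what it’s worth, relocating the tree mostly boils down to pointing Portage at new paths in make.conf and moving the data over; a minimal sketch, with the target paths being purely an example:

# /etc/portage/make.conf
PORTDIR="/var/portage"
DISTDIR="/var/portage/distfiles"
PKGDIR="/var/portage/packages"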

Before somebody asks: yes, until a while ago splitting off the whole /var – which is generally considered much more sensible – was a pain in the neck, among other things because things were using /var/run and other similar paths before the partition could be mounted. The situation is now much better thanks to the fact that /run is available much earlier in the boot process — this is not yet properly handled by all the init scripts out there, but we’re reaching that point, slowly.

Okay, so to the next issue: when do you want to split /usr at all? Well, this all depends on a number of factors, but I guess the first question is whether you’re installing a new system or maintaining an old one. If you’re installing a new one, I really can’t think of any good reason to split /usr out — the only one that comes to mind is if you want to have it in LVM and keep the rootfs as a standalone partition — and I don’t see why. I’d rather, at that point, put the rootfs in LVM as well, and just use an initrd to accomplish that — if that’s too difficult, well, it’s a reason to fix the way initrd or LVM are handled, not to keep insisting on splitting /usr! Interestingly enough, such a situation calls for the same /boot split I resented five years ago. I still use LVM without having the rootfs in it, and without needing to split /usr at all.

Speaking of which, most ready-to-install distributions offer only the option of using LVM — it makes sense, as you need to cater for as many systems as possible at once. This is where Gentoo Linux generally parts ways with the rest: the power of building things for exactly what you want to use them for makes it generally possible to skip the overgeneralization, and that’s why we’re virtually the only distribution out there able to work without an initrd.

Another point that came up often is a system where the space in the rootfs was badly allocated, and /usr is being split because there is not enough space. I’m sorry that this is a common issue, and I do know that it’s a pain to re-partition such a system, as it involves at least a minimal downtime. But this is why we have workarounds, including the whole initrd thing. I mean, it’s not that difficult to manage with an initrd, and yes, I can understand that it’s more work than just having the whole system boot without /usr — but it’s a sensible way to handle it, in my opinion. It’s work either way: work for everybody under the sun to get split /usr working properly, or work for those who got the estimate wrong and now need the split /usr; you can guess whom I prefer to do the work (hint: like everybody in this line of business, I’m lazy).

Some people have said that /usr is often provided on NFS, and a very simple, lightweight rootfs is used in these circumstances — I understand this need, but the current solution to support split /usr is causing the rootfs to not be as simple and lightweight as before — the initrd route in that sense is probably the best option: you just get an initrd to be able to mount the root through NFS, and you’re done. The only problem with this solution is handling if /etc needs to be different from one system to the next, but I’m pretty sure it’s something that can be more easily fixed as well.

I have to be honest, there is one part of /usr that I end up splitting away very often: /usr/lib/debug — the reason is simple: it keeps increasing with the size of the sources, rather than with the size of the compiled code, and with the new versions of the compilers, which add more debug information. I got to a point where the debug files occupied four/five times the size of the rest of the rootfs. But this is quite the exception.

But why would it have to be that much of a problem to keep a split /usr? Well, it’s mostly a matter of what you’re supposed to be able to use without /usr being mounted. For many cases, udev was and is the only problem, as people really don’t want much in the way of an early-boot environment besides being able to start LVM and mount /usr; but the big problem happens if you want to be able to have even a single login with /usr not mounted — because the PAM chain has quite a few dependencies that wouldn’t be available until it’s mounted. Moving PAM itself is not much of an option, and it gets worse: start-stop-daemon can technically also use chains that partially need /usr to be available, and if that happens, no init script using s-s-d would be able to run. And that’s bad.

So, do I like the collapsing of everything in /usr? Maybe not that much because it’s a lot of work to support multiple locations, and to migrate configurations. But at the same time I’m not going to bother, I’ll just keep the rootfs and /usr in the same partition for the time being, and if I have to split something out, it’ll be /var.

December 13, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GNU is actually a totalitarian regime (December 13, 2012, 22:43 UTC)

You probably remember that I’m not one to fall in line with the Free Software Foundation — a different story goes for the FSFe, which I support, and where I look forward to the moment when I can come back as a supporter; for the moment I’m afraid that I have to contribute only as a developer.

Well, it seems like more people are joining the club. After Werner complained about the handling of GNU copyright assignments – not long after my coverage of Gentoo’s assignments, which should probably make those suggesting a GNUish approach to said copyright assignment think a lot – Nikos of GnuTLS decided to split off from the GNU project.

Why did Nikos decide this? Well, it seems like the problem is that both Werner and Nikos are tired of the secrecy in the GNU project and even more of the inability to discuss, even in a private setting, some topics because they are deemed taboo by the FSF, in the person of Richard Stallman.

So, Nikos decided to move the lists, source code and website onto his own hosting, and then declared GnuTLS no longer part of the GNU project. Do you think this would have put the FSF in a “what are we doing wrong?” mood? Hah, naïve are you! Indeed, the response from the FSF (in the person of Richard Stallman, see a pattern?) was to tell Nikos (who wrote the project, contributed it to GNU, and maintained it) that he can’t take his own project out of GNU, and that if he wants he can resign from the maintainer’s post.

Well, now it seems like we might end up with a “libreTLS” package, as Nikos is open to renaming the project… it’s going to be quite a bit of a problem, I’d say, if anything because I want to track Nikos’s development more than GNU’s, and thus I would hope for the “reverse fork” within the GNU project to just die off; especially considering I also had to sign the assignment paperwork (and in the time that said paperwork was being handled, I lost the time and motivation for the contributions I had in mind — lovely, isn’t it?).

Well, what this makes very clear to me is that I still don’t like the way the GNU project, and the FSF are managed, and that my respect for Stallman’s behaviour is, once again, zero.

Markos Chandras a.k.a. hwoarang (homepage, bugs)
Proxy Maintainers – How do we perform? (December 13, 2012, 20:14 UTC)

Following my recent recruitment performance post, here comes the second part of my Gentoo Miniconf 2012 presentation. The following two graphs aim to demonstrate the performance of proxy maintainers, i.e. how Gentoo users help us improve and push new ebuilds to the Portage tree.

[Graphs: Orphaned Packages, 2012/10 and 2012/12]

One may notice the increased number of maintainer-needed@ packages, but this is because we “retired” a lot of inactive developers in the last two months. I do not expect this number to increase further in the near future.

I would like to thank all of you who are actively participating in this team. Keep up the good work!

Steve Dibb a.k.a. beandog (homepage, bugs)
another semester done (December 13, 2012, 08:25 UTC)

I just finished my Fall semester for 2012 today at UVU.  This was, by far, the hardest semester I’ve ever had since I’ve been in school.  It was brutal.  I had three classes which carried with them more work than I was expecting, and I spent a lot of time in the past four months doing nothing but homework.  I was talking to my cousin about it tonight (while we were doing some late-night skateboarding in the winter; it’s actually really nice out here right now), and I mentioned that the stress was a huge burden on me.  Stress is normal, but I’ve learned that if something heavy is really going on, I stop being cheery.  I don’t really get somber; it’s more like I’m just focused and serious all the time.  Which can be a real bummer.

But the semester is finished, and that has freed up a lot of time and taken that huge burden off of me.  I got good grades, and that, along with some great friends who really stepped up at the last minute and helped me out, has left me humbled and grateful to God and everyone who stood by me.  I’m really glad this semester is done.

One thing I learned from this last jaunt around is that I’ve decided I’m never taking online classes again.  I had two this semester, and one on campus.  Looking back, I’ve always had a range of issues with online courses.  Either I don’t understand the material very well because I can’t chat with the professor one on one, or I slack the whole time (I did 50% of the coursework in one day.  I’m not kidding).  The worst one though is I never really feel like I “get” the material.  I jump through hoops, get a grade, and move on, but it doesn’t seem like I learned anything.

So, I’m sticking to just two classes from here on out, and doing them all on-campus.  That’ll be manageable.

For now I’m really looking forward to not so much having more time, but having less stress.  I’ve been wanting to work on some cool side projects, and I also have been itching to go skating … a lot.  So tonight I went on a two-hour run with my cousin down Main Street in Bountiful, and it was really cool.  We call it a “mort run” since we start at the top of a hill and go all the way down to the mortuary.  It’s smooth all the way down and  you can just push around and then either skate back up hill or walk.  It’s a good workout.

The best part tonight though was debating whether or not we should go to the drive-through at Del Taco, knock on the window and ask for something.  We didn’t, but we circled the place like eight times and probably freaked out the employees while we debated it.  Eventually, we realized he didn’t have enough cash to buy something on the dollar menu (he was a penny short), so we spent half an hour wandering around downtown looking for lost change.  It was pretty fun. :)

Soooooooooooo ….. projects.  One thing I have time to look into now is znurt.org.  It’s broken.  I’ve known it’s been broken.  It would take me probably less than an hour to fix it.  I haven’t made the time, for a lot of reasons.  It’s actually been on my calendar reminding me over and over that I need to get it done.  I’m debating what to do about the site.  I could just fix the one error and move on, but it’s still kind of living in a state of neglect.  Ideally, I should hand the project over to someone else and let them maintain it.  I dunno yet.  Part of me doesn’t wanna let it go, but I guess a bigger part doesn’t care enough to actually fix it so … yah.  Gotta make a decision there.

Other than that, not much going on.  I moved to a new apartment, back into a complex.  I like it here.  I have a dishwasher now, which I’m really grateful for (I haven’t had one in the last three apartments).  The funny thing about that is I seriously have so few dishes that after loading every one I own, the thing is only half full.

Anyhoo, I am really looking forward to moving on.  My big thing is I wanna get some serious skating time in while I’ve got the time.  That and enjoy the holidays with friends and family.  I’m looking forward to next semester too.  I’ve got a class on meteorology and another on U.S. history.  I’m almost done with generals.  The crazy part about all of this?  Since I went back to school two years ago, I’ve put in 30 credit hours.  Insane, for someone working full time.  I tell you what.


Sven Vermeulen a.k.a. swift (homepage, bugs)
Another hardened month has passed… (December 13, 2012, 08:02 UTC)

… so it’s time for a new update ;-)

Toolchain

GCC 4.8 is still in its stage 3 development phase, so Zorry will send out the patches to the GCC development community when this phase is done. For Gentoo hardened itself, we now support all architectures except for IA64 (which never had SSP).

Full uclibc support is now in place for amd64, i686 and mips32r2: not only is the technological support okay, but stages are now also automatically built, supporting installations through the regular installation instructions. The next target to get automatically built stages is armv7a.

Kernel and grSecurity/PaX

Stabilization on 3.6.x is still showing some difficulties. Until those are resolved, we stay stable on 3.5.4. We have a couple of panics in some odd cases, and these will need to be resolved before we can stabilize further.

glibc-2.16 will drop the declarations for PT_PAX (in elf.h), and binutils will not cover the PT_PAX phdr anymore either. So, we will standardize fully on xattr-based PaX flags. This will get proper focus in the next period to ensure it is done correctly. Most work on this support is focusing on communication towards users and on the pax-utils eclass support.
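For the curious, xattr-based PaX marking boils down to storing the flags in the user.pax.flags extended attribute; a quick hypothetical example (the binary path and the flag value are made up for illustration):

# mark a (hypothetical) JIT-using binary so MPROTECT is disabled for it
setfattr -n user.pax.flags -v m /usr/bin/some-jit-app
# and read the marking back
getfattr -n user.pax.flags /usr/bin/some-jit-app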

There was some confusion about whether the tmpfs-xattr patch properly restricts access, but it looks like the PaX patch on mm/shmem.c was based upon the Gentoo patch and enhanced with the needed restrictions, so we can just keep the PaX code.

Regarding USE=”pax_kernel”, which should enable some updates to userland utilities when applications are run under a PaX-enabled kernel: prometheanfire tried to get this accepted as a global USE flag (as many applications might eventually want to trigger on it). However, due to some confusion about the meaning of the USE flag, and the potential need to depend on additional tools, we’re going to stick with a local flag for now.

SELinux

schmitt953 will help in the testing and possible development of SELinux policies for Samba 4.

Furthermore, the userspace utilities have been stabilized (except for the setools-3.3.7-r5+ due to some swig problems, but those have been worked around in setools-3.3.7-r6). Also, the rev8 policies are in the tree and no big problems were reported on them. They are currently still ~arch, but will be stabilized in the next few days. A new rev9 release will be pushed to the hardened-dev overlay soon as well.

Profiles

nvidia is unmasked for the hardened profiles, but still has X and tools USE flags masked, and is only supported on kernels 3.0.x and higher.

Also, the hardened/linux/uclibc/arm/armv7a profile is now available as a development profile. Profiles will be updated as the architectures for ARM are getting supported, so expect more in the next month.

System Integrity

We were waiting for kernel 3.7, which just got released, so we can now start integrating this further. Expect more updates by next meeting.

Docs

For SELinux, some information on USE=”unconfined” has been added to the SELinux handbook. Blueness will also start documenting the xattr pax stuff.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
How app-office/libreoffice-bin is made (December 13, 2012, 00:08 UTC)

While usually Gentoo users compile all their packages on their own computers, LibreOffice tends to be too big a bite for that. This is why we provide for amd64 and x86 app-office/libreoffice-bin and app-office/libreoffice-bin-debug, two packages with a precompiled binary installation and its debug information. In the beginning we just used the binaries from the official LibreOffice distribution. Turns out, however, that these binaries bundle a large number of libraries that we have in Gentoo anyway (bug 361695), and for a lot of reasons bundled libraries are bad. So, we decided to roll our own binaries for stable Gentoo installations. Let me describe a bit how it is done.

For reference, this is the machine doing the builds:

Linux pinacolada 3.4.9-gentoo #2 SMP Thu Oct 11 00:05:55 CEST 2012 x86_64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux

On this machine, two chroots are dedicated to the package build process: one a plain amd64 chroot, the other an x86 chroot entered via linux32. Neither has any ~arch packages installed; only stable keywords are accepted. Both have a very minimal world file listing only a few packages useful for a maintainer, e.g. gentoolkit or eix. The procedure is identical for both. In addition, in both chroots the compiler flags are chosen for as wide compatibility as possible. This means
# for x86
CFLAGS="-march=i586 -mtune=generic -O2 -pipe -g"
# for amd64
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -g"
and obviously the same for CXXFLAGS. Both chroots also use the portage features splitdebug and compressdebug to make debug information available in a separate directory tree. Prior to build, the existing packages are updated, unnecessary packages are cleaned, and dynamic linking is checked:
emerge --sync
emerge -uDNav world
emerge --depclean --ask
revdep-rebuild

In case any problems occur, these are checked and solved, and the procedure is repeated until all of the operations become no-ops.
The next step is adapting the (rather simplistic) build script to the new LibreOffice version. This mainly means checking for new or discarded USE flags, and deciding which value these should have in the binary build. Since LibreOffice-3.6 we also have to decide which bundled extensions to build. The choice of USE flags is influenced by several factors. For example, pdfimport is disabled because the resulting dependency on poppler might lead to broken binaries rather too often.
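Purely as an illustration (the actual flag set is decided anew for each release, so this line is hypothetical), such a decision ends up in the chroot's make.conf as something like:

# hypothetical USE selection for the binary build chroot
USE="branding cups dbus gnome gstreamer java kde -pdfimport"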
Then, well, then it's running the build. Generating all 12 flavours (base, kde, gnome, with and without java, for both amd64 and x86) takes roughly a weekend. Time to go out to the Christmas market and sip a Glühwein.
In the meantime, we can also adapt the libreoffice-bin ebuilds for the new version. The defined phase functions are mostly boring, since they only have to copy files into the system. Normally, they can be taken over from the previous version. The dependency declarations, however, have to be copied anew each time from the corresponding app-office/libreoffice ebuild, taking into account the chosen use-flag values. DEPEND is set empty since we're not actually building anything during installation.
Finally, COMMON_DEPEND is extended by an additional block named BIN_COMMON_DEPEND, specific for the binary package. Here, we specify any dependencies that need to be stricter now, where a library upgrade would for a normal package require revdep-rebuild - which is not possible for a binary package. Typical candidates where we have to fix the minimum or exact library version are glibc, icu, or libcmis.
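A sketch of what such a block could look like (the version bounds here are invented for illustration, not taken from the real ebuild):

# hypothetical excerpt from app-office/libreoffice-bin
BIN_COMMON_DEPEND="
	=sys-libs/glibc-2.15*
	=dev-libs/icu-49.1*
"
COMMON_DEPEND+=" ${BIN_COMMON_DEPEND}"
# nothing is compiled at install time, so build-time deps stay empty
DEPEND=""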
Once the build has finished, 8.8G of files have to be uploaded to the Gentoo server, added to the mirror system, and then given some time to propagate. Then, we can commit the new ebuild, and open a stabilization request bug. Finished!
(Oh and in case you're wondering, new packages are coming tomorrow. :)

December 12, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What I'd like from my blog (December 12, 2012, 21:18 UTC)

My blog is, at this point, a vital part of my routine. I use my blog to write about my personal projects, I write about the non-restricted parts of my jobs, and I write about the work that goes into Gentoo Linux and other projects I follow.

I have accumulated over 2100 posts over time, especially thanks to the recent import of my original blog on Gentoo infrastructure. I don’t really know if that’s a lot, but sometimes Typo seems to struggle with it. Unfortunately I’m also running an older version of Typo, because I haven’t switched that virtual server to Ruby 1.9 yet, as one of my customers is running a version of Radiant that is not going to work otherwise.

Said customer also bitched so hard, and screamed not to keep the site on my server, but as it happens the new webmasters that are supposed to pick up the website, and should have been cheaper and faster than me… have been working since June and still delivered nothing. Hopefully they’ll be done soon and I can kick said customer from the server.

Anyway, at this point there are a few things that I’d like to get out of my blogging platform in the future, which might require me to fork Typo and create my own version, likely a stripped-down one, as Typo has gained many features I really don’t care about, like the short URLs. I might just export those (I think I used them at some point) and then handle them through mod_rewrite rather than on the Rails side.

So let’s see what I don’t like about the current Typo I’m using:

  • The database access is more than a bit messed up; it probably has to do with the fact that upstream only cares about MySQL, while I want to run it on PostgreSQL, and this causes more than a couple of problems — have you noticed that sometimes my posts end up password-protected? What happens is that the settings for single posts are serialized in YAML and de-serialized, but sometimes something bad happens and the YAML becomes invalid, causing the password protection to kick in. I know there is an ActiveRecord extension that allows key-value pairs to be stored in PostgreSQL-specific column types instead of having to (de)serialize them all the time, but again, this wouldn’t be something upstream would use.
  • Alternatively, I’ve been toying with the idea of using MongoDB as a backend. Even with the issues that I have pointed out before, I think it might work well for a blog, especially since then the comments would be tied to the post itself, rather than kept in the currently connected tables.
  • There is a problem with the tags handling, again something upstream doesn’t seem to care about – at some point I remember reading they were mostly interested in making every single word in a post a tag, to cross-connect posts with the same word; it’s one of the reasons why I’m not sure whether I want to update. If I change the title of one of my tags to make it more descriptive, and then edit a post carrying that tag, it creates one more tag for each word in that title instead of preserving the older tag. I really should clean up the tags I have right now.
  • I would also like that when I get to the “new post” page it would create it already and then get me back to editing it — this is important to me because sometimes if I have to restart Chromium, or suspend the laptop, something goes very wrong and it creates multiple drafts for the same post. And cleaning them up is a long task.
  • A better implementation of notification for new posts, and integration with Flattr, would be also very good. While IFTTT makes it easy to post the new entries to Twitter and LinkedIn, its lack of integration for Flattr is a major pain, and the fact that right now, to use auto-submit, I have to duplicate part of the content in the HTML of the pages, is also a problem. So being able to create a “Flattr thing” the moment when I actually post something would be a major plus for me.
  • Since I’m actually quite paranoid, another thing I would like to have would be either two-factor authentication with Google Authenticator on a cellphone, or (actually, in addition to it) certificate-based authentication for the admin interface. Having a safe way to make sure that I’m the only one logging in would let me remove some of the administrative interface rules in ModSecurity, which would in turn let me write posts from public WiFi networks, sidestepping the problem I posted about the other day.
  • Scheduled posting. This used to be supported, but it’s been completely broken for years at this point; it was very useful to me a long time ago, since I would just write a bunch of posts and schedule them to be published once a day. I suppose this would now have to be changed so that planned posts are only actually published when a process runs to check which of them have come due… but again, this is something that I’d like to have, and you readers would probably enjoy it, as it would probably make for more and better content overall.

I definitely do not want to go with WordPress. I just wish I had the time to write my own Typo fork and make it more usable for what I do, rather than hoping that upstream development of Typo does not go in a direction I don’t like at all. Maybe somebody else has the same requirements and would like to join me in this project; if so, send me an email… maybe it’ll finally be the time I decide to start on the fork itself.

December 11, 2012
Matthew Thode a.k.a. prometheanfire (homepage, bugs)

Disclaimer

  1. Keep in mind that ZFS on Linux is not fully supported, for differing values of support
  2. I don't care much for hibernate, normal suspending works.
  3. This is for a laptop/desktop, so I choose multilib.
  4. If you patch the kernel to add in ZFS support directly, you cannot share the binary, the cddl and gpl2 are not compatible in that way.

Initialization

Make sure your installation media supports ZFS on Linux and can install whatever bootloader is required (UEFI needs media that supports it as well). You can use the Gentoo LiveDVD; look for 12.1 or newer. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.

Formatting

I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary.
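For example, with parted this could look like the following (assuming /dev/sda and a 512M /boot; adjust sizes to taste):

# create a GPT label and two aligned partitions: /boot and the LUKS container
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 513MiB
parted -s /dev/sda mkpart primary 513MiB 100%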

General Setup

#setup encrypted partition; -s 512 selects a 512-bit key (two 256-bit halves for XTS)
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile= -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=on rpool/ROOT
#rootfs
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
#portage
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
#homedirs
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
wget ftp://gentoo.osuosl.org/pub/gentoo/releases/amd64/autobuilds/current-stage3-amd64-hardened/stage3-amd64-hardened-*.tar.bz2
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /etc/zfs/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-9999.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-9999/work/zfs-/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff zfs pulls in spl automatically
echo "=sys-kernel/spl-0.6.0_rc12 ~amd64       #needed for zfs support" >> /etc/portage/package.accept_keywords
echo "=sys-fs/zfs-0.6.0_rc12-r1 ~amd64           #needed for zfs support" >> /etc/portage/package.accept_keywords
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal; your kernel line should look like the following, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.
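In other words, the stage3 placeholder entries end up commented out, roughly like this (the exact lines in your fstab may differ):

#/dev/BOOT  /boot  ext2  noauto,noatime  1 2
#/dev/ROOT  /      ext3  noatime         0 1
#/dev/SWAP  none   swap  sw              0 0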

You should now have a working encrypted ZFS install.

Disclaimer

  1. Keep in mind that ZFS on Linux is not fully supported, for differing values of support
  2. I don't care much for hibernate, normal suspending works.
  3. This is for a laptop/desktop, so I choose multilib.
  4. If you patch the kernel to add in ZFS support directly, you cannot share the binary, the cddl and gpl2 are not compatible in that way.

Initialization

Make sure your installation media supports zfs on linux and installing whatever bootloader is required (uefi needs media that supports it as well). You can use the Gentoo LiveDVD, look for 12.1 or newer. If you need to install the bootloader via uefi, you can use one of the latest Fedora CDs, though the gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.

Formatting

I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundry

General Setup

#setup encrypted partition
cryptsetup luksFormat -l 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile= -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=on rpool/ROOT
#rootfs
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
#portage
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
#homedirs
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
wget ftp://gentoo.osuosl.org/pub/gentoo/releases/amd64/autobuilds/current-stage3-amd64-hardened/stage3-amd64-hardened-*.tar.bz2
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /etc/zfs/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel staticly, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-9999.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-9999/work/zfs-/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS userland; zfs pulls in spl automatically
echo "=sys-kernel/spl-0.6.0_rc12 ~amd64       #needed for zfs support" >> /etc/portage/package.accept_keywords
echo "=sys-fs/zfs-0.6.0_rc12-r1 ~amd64           #needed for zfs support" >> /etc/portage/package.accept_keywords
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal; your kernel line should look like the following, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure the BOOT, ROOT and SWAP lines are commented out, and finish the install.
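For reference, a sketch of the commented-out lines (placeholder device names as in the stock Gentoo fstab; ZFS mounts its own datasets):

#/dev/BOOT   /boot   ext2    noauto,noatime  1 2
#/dev/ROOT   /       ext3    noatime         0 1
#/dev/SWAP   none    swap    sw              0 0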

You should now have a working encrypted ZFS install.

December 10, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using pam_selinux to switch contexts (December 10, 2012, 20:11 UTC)

With SELinux managing the access controls of applications towards the resources on the system, an important and easily forgotten component on any Unix/Linux system is the authentication part. Most systems use or support PAM, the Pluggable Authentication Modules, and this plays an important role for SELinux.

Applications that are PAM-enabled use PAM for the authentication of user activities. If this includes setting up an authenticated session, then the “session” part of the PAM configuration is also handled. And for SELinux, this is a nice-to-have, since this means applications that are not SELinux-aware can still enjoy transitions towards specified domains depending on the user that is authenticated.

The “not SELinux-aware” part here is important. By default, applications keep running in one security context for their lifetime. If they invoke an execve or similar call (which is used to start another application or command when used in combination with a fork), then the SELinux policy might trigger an automatic transition if the holy grail of fourfold rules is set:

  1. a transition from the current context to the new one is allowed
  2. the label of the executed command/file is marked as an entrypoint for the new context
  3. the current context is allowed to execute that application
  4. an automatic transition rule is made from the current context to the new one over the command label

Or, in SELinux policy terms, assuming the domains are source_t and destination_t with the label of the executed file being file_exec_t:

allow source_t destination_t:process transition;
allow destination_t file_exec_t:file entrypoint;
allow source_t file_exec_t:file execute;
type_transition source_t file_exec_t : process destination_t;

If those four settings are valid, then (and only then) can the automatic transition be active.

Sadly, for applications that run user actions (like cron systems, remote logon services and more) this is not sufficient, since there are two major downsides to this “flexibility”:

  1. The rules to transition are static and do not depend on the identity of the user for which activities are launched. The policy can not deduce this identity from a file context either.
  2. The policy is statically defined: different transitions based on different user identities are not possible.

To overcome this problem, applications can be made SELinux-aware, linking with the libselinux library and invoking the necessary switches themselves (or running the commands with runcon). Luckily, this is where the PAM system comes into play to aid us in setting up this policy behavior.

When an application is PAM-enabled, it will invoke PAM calls to authenticate and possibly set up the user session. The actions that PAM invokes are defined by the PAM configuration files. For instance, for the at daemon:

## /etc/pam.d/atd
#
# The PAM configuration file for the at daemon
#

auth    required        pam_env.so
auth    include         system-services
account include         system-services
session include         system-services

I am not going to dive into the details of PAM in this blog post, so let’s just jump to the session management part. In the above example file, if PAM sets up (or shuts down) a user session for the service (at in our case), it will go through the PAM services that are listed in the system-services definition, which looks like so:

## /etc/pam.d/system-services
auth            sufficient      pam_permit.so
account         include         system-auth
session         optional        pam_loginuid.so
session         required        pam_limits.so 
session         required        pam_env.so 
session         required        pam_unix.so 
session         optional        pam_permit.so

Until now, nothing SELinux-specific is enabled. But if we change the session section of the at service to the following, then the SELinux pam module will be called as well:

session optional        pam_selinux.so close
session include         system-services
session optional        pam_selinux.so multiple open

Now that the SELinux module is called, pam_selinux will try to switch the context of the process based on the definitions in the /etc/selinux/strict/contexts location (substitute strict with the policy type you use). The outcome of this switching can be checked with the getseuser application:

~# getseuser root system_u:system_r:crond_t
seuser:  root, level (null)
Context 0       root:sysadm_r:cronjob_t
Context 1       root:staff_r:cronjob_t

By providing the contexts in configurable files in /etc/selinux/strict/contexts, a non-SELinux-aware application suddenly becomes SELinux-aware (through the PAM support it already has) without needing to patch or even rebuild the application. All that is needed is to allow the security context of the application to switch ids and roles (as that is not allowed by default), which I believe is offered through the following statements:

domain_subj_id_change_exemption(atd_t)
domain_role_change_exemption(atd_t)

selinux_validate_context(atd_t)
selinux_compute_access_vector(atd_t)
selinux_compute_create_context(atd_t)
selinux_compute_relabel_context(atd_t)
selinux_compute_user_contexts(atd_t)

seutil_read_config(atd_t)
seutil_read_default_contexts(atd_t)

Jeremy Olexa a.k.a. darkside (homepage, bugs)
November 2012 wrap up (December 10, 2012, 13:39 UTC)

To wrap up my November, I finished up my stay in Prague. The below were two-day trips, where I was embracing home-base travel – meaning I would go somewhere then come back.

Before I left the Czech Republic, I also went to Cesky Krumlov, an amazing medieval UNESCO town with a castle, a brewery, and winding streets; I’m very glad I went there. I’m thinking about how to get back there during the summer. Cesky Krumlov is the second most visited city in the Czech Republic. I took the train there and the bus back. The train was quite nice but there were a few connections; at one point I was following the herd as we went from train to bus to train and I was confused, but it worked out in the end. I got to Krumlov, walked to the hostel Krumlov House (recommended), ate at the delicious Two Marys restaurant, hung out with the staff, and went to a local bar. Then I walked around the castle, went on a brewery tour, relaxed for a few days, and took it all in. I took the bus back to Prague because it was quicker and cheaper.

Czech Republic (Prague, Olomouc, Cesky Krumlov) Oct/Nov 2012-243
(The view of the city from the castle)
Cesky Krumlov Pics

Dresden, Germany for a few days. I carpooled here with 3 Germans as they were going home for the weekend, and then couchsurfed. The generosity of people in this world is amazing. I was only there for a few nights; the first night, I walked around and then ate out with my host. The next day, I went to the botanical gardens (many pictures for my Grandpa) and the VW factory (no pictures allowed) – I’d recommend the glass factory tour to those that are engineering types, it is quite nice – then I walked around the city some. Went into a church, climbed to the top viewing point, and went out to eat again and chatted about worldly topics with my host. She had never had a guest from the USA before. The unique thing about Dresden is that even though it looks old, it is not, since it was rebuilt after the war. I also carpooled back; the Germans love to be efficient.

Dresden Pics

Then we can fast forward to December 1, when I got on the bus for Vienna. I lost my camera on November 30th, so there are only mental pictures of Vienna. I stayed there for 3 nights. It is an expensive city relative to the Czech Republic and farther east, but I liked it. I stayed at an independent hostel, Hostel Ruthersteiner (recommended as well). I met with my friend Marijn and we walked around the city with his family and colleague. I tried to go to a Viennese opera but there was only standing room, and I didn’t feel like standing still for 2.5 hours, so of course I went to the Viennese Christmas markets instead and enjoyed many a glühwein (hot wine). I also toured the UN headquarters in Vienna and had lunch with my friend there. I could imagine myself going back there later in life to soak in the cultural activities that are more suited for older people or families.

Now, I am in Budapest. More on that later…

December 09, 2012
How to find issues related to LINGUAS (December 09, 2012, 18:11 UTC)

Usually, I want to find all possible issues with the LINGUAS variable, so in my arch testing environment I have enabled all the linguas that the main tree uses.
To keep my make.conf cleaner, I’m using source together with a separate file called linguas.conf.

So, this is my /etc/portage/linguas.conf:
LINGUAS="am fil zh af ca cs da de el es et gl hu nb nl pl pt ro ru sk sl sv uk bg cy en eo fo ga he id ku lt lv mk ms nn sw tn zu ja zh_TW en_GB pt_BR ko zh_CN ar en_CA fi kk oc sr tr fa wa nds as be bn bn_BD bn_IN en_US es_AR es_CL es_ES es_MX eu fy fy_NL ga_IE gu gu_IN hi hi_IN is ka kn ml mr nn_NO or pa pa_IN pt_PT rm si sq sv_SE ta ta_LK te th vi ast dz km my om sh ug uz ca@valencia sr@ijekavian sr@ijekavianlatin sr@latin csb hne mai se es_LA fr_CA zh_HK br la no es_CR et_EE sr_CS bo hsb hy mn sr@Latn lb ne bs tg uz@cyrillic xh be_BY brx ca_XV dgo en_ZA gd kok ks ky lo mni nr ns pap ps rw sa_IN sat sd ss st sw_TZ ti ts ve mt ia az me tl ak hy_AM lg nso son ur_PK it fr nb nb_NO hr nan ur tk cs_CZ da_DK de_1901 de_CH en_AU lt_LT pl_PL sa sk_SK th_TH ta_IN tt sco ha mi ven ar_SY el_GR ro_RO ru_RU sl_SI uk_UA vi_VN ar_SY te_IN de_DE es_VE fa_IR fr_FR hu_HU id_ID it_IT ja_JP ka_GE nl_NL sr_BA sr_RS ca_ES fi_FI he_IL jv ru_gold yi eu_ES"

Now you need to add the following to your make.conf:
source /etc/portage/linguas.conf
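To check that portage actually picks the variable up (a quick verification):

portageq envvar LINGUAS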

I will update this post if new linguas/languages appear in the future.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
g-octave news: the octave overlay (December 09, 2012, 16:13 UTC)

After having lots of problems with people who can't use g-octave properly, sometimes because they don't seem to be able to read documentation, elog messages and/or just ask, and after a suggestion from Sebastien Fabbro (bicatali), I wrote some simple scripts to update the g-octave package database and an overlay, using g-octave and a cronjob.

I built a virtual machine on my own server and set up a weekly cronjob that will hopefully keep the packages up-to-date.

The overlay is available on Github:

https://github.com/rafaelmartins/octave-overlay

To install it, follow the instructions available in the README file. The overlay is also available via layman, named octave.
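For instance, adding it via layman (assuming layman is already installed and configured):

layman -a octave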

Packages with unresolvable dependencies, e.g. packages with dependencies unavailable in gentoo-x86, aren't available in the overlay. If you find a package that is supposed to work but isn't available in the overlay, please open an issue on Github, and I'll take a look ASAP.

As a bonus, g-octave code itself was moved to Github:

https://github.com/rafaelmartins/g-octave

Feel free to submit pull requests if you think that something is broken and you know how to fix it.

And as another bonus, the g-octave website (http://g-octave.org/) is now running on the Read the Docs service, which is way more reliable than my own server. This should avoid the recent documentation downtimes.

December 08, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using stunnel for mutual authentication (December 08, 2012, 12:24 UTC)

Sometimes services do not support SSL/TLS, or if they do, they do not support using mutual authentication (i.e. requesting that the client also provides a certificate which is trusted by the service). If that is a requirement in your architecture, you can use stunnel to provide this additional SSL/TLS layer.

As an example, I have a mail server running on localhost, and I want to provide SSMTP services with mutual authentication on top of this service, using stunnel. First of all, I provide two certificates and private keys that are both signed by the same CA, and keep the CA certificate close as well:

  • client.key is the private key for the client
  • client.pem is the certificate for the client (which contains the public key and CA signature)
  • server.key and server.pem are the same but for the server
  • root-genfic.crt is the certificate of the signing CA

Next, we set up stunnel, listening on port 1465 (as 465 would require the stunnel service to run as root, which I’d rather avoid) and forwarding towards 127.0.0.1:25:

cert = /etc/ssl/services/stunnel/server.pem
key = /etc/ssl/services/stunnel/server.key
setuid = stunnel
setgid = stunnel
pid = /var/run/stunnel/stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
verify = 2 # This enables the mutual authentication
CAfile = /etc/ssl/certs/root-genfic.crt

[smtp]
accept = 1465
connect = 127.0.0.1:25

To test mutual authentication this way, I used the following command-line snippet. The delays between the lines are there because the mail client is supposed to wait for the mail server to give its reply; otherwise, the data gets lost. I’m sure this can be made easier (with netcat I could just use “-i 1” to print a line with a one-second delay), but it works ;-)

~$  (sleep 1; echo "EHLO localdomain"; sleep 1; echo "MAIL FROM:remote@test.localdomain"; \
sleep 1; echo "RCPT TO:user@localhost"; sleep 1; echo "DATA"; sleep 1; cat TEMPFILE) | \
openssl s_client -connect 192.168.100.102:1465 -crlf -ign_eof -ssl3 -key client.key -cert client.pem

The TEMPFILE file contains the email content (you know, Subject, From, To, other headers, data, …).

If the provided certificate isn’t trusted, you’ll find the following in the log file (on Gentoo, that’s /var/log/daemon.log by default, but you can set up logging in stunnel as well):

Dec  8 13:17:32 testsys stunnel: LOG7[20237:2766895953664]: Starting certificate verification: depth=0, /C=US/ST=California/L=Santa Barbara/O=SSL Server/OU=For Testing Purposes Only/CN=localhost/emailAddress=root@localhost
Dec  8 13:17:32 testsys stunnel: LOG4[20237:2766895953664]: CERT: Verification error: unable to get local issuer certificate
Dec  8 13:17:32 testsys stunnel: LOG4[20237:2766895953664]: Certificate check failed: depth=0, /C=US/ST=California/L=Santa Barbara/O=SSL Server/OU=For Testing Purposes Only/CN=localhost/emailAddress=root@localhost
Dec  8 13:17:32 testsys stunnel: LOG7[20237:2766895953664]: SSL alert (write): fatal: bad certificate
Dec  8 13:17:32 testsys stunnel: LOG3[20237:2766895953664]: SSL_accept: 140890B2: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned

When a trusted certificate is shown, the connection goes through.

Finally, if you not only want to validate that the certificate is trusted, but also want to accept only a specific set of certificates, you can set the stunnel variable verify to 3. If you set it to 4, stunnel will not check the CA chain and will only allow a connection through if the presented certificate is one of stunnel’s locally trusted certificates.
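A sketch of what that could look like in the configuration (the CApath directory is a hypothetical location; as far as I know stunnel expects the individually trusted certificates there, with hashed file names):

verify = 3
CApath = /etc/ssl/services/stunnel/trusted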

Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita becomes a part of the KDE project (December 08, 2012, 07:58 UTC)

I'm happy to announce that Trojitá, a fast IMAP e-mail client, has become part of the KDE project. You can find it under extragear/pim/trojita.

Why moving under the KDE umbrella?

After reading the KDE manifesto, it became obvious that the KDE project's values align quite well with what we want to achieve in Trojitá. Becoming part of a bigger community is a logical next step -- it will surely make Trojitá more visible, and the KDE community will get a competing e-mail client for those who might not be happy with the more established offerings. Competition is good, people say.

But I don't want to install KDE!

You don't have to. Trojitá will remain usable without KDE; you won't need it for running Trojitá, nor for compiling the application. We don't use any KDE-specific classes, so we do not link to kdelibs at all. In the future, I hope we will be able to offer an optional feature to integrate with KDE more closely, but there are no plans to make Trojitá require the KDE libraries.

How is it going?

Extremely well! Five new people have already contributed code to Trojitá, and the localization team behind KDE did a terrific job providing translations into eleven languages (and I had endless hours of fun hacking together an lconvert-based setup to make sure that Trojitá's Qt-based translations work well with KDE's gettext-based workflow -- oh boy, was that fun!). Trojitá also takes part in the Google Code-in project; Mohammed Nafees has already added a feature for multiple sender identities. I also had a great chat with the KDE PIM maintainers about sharing our code in the future.

What's next?

A lot of work is still in front of us -- from boring housekeeping like moving to KDE's Bugzilla for issue tracking to adding exciting (and complicated!) new features like support for multiple accounts. But the important part is that Trojitá is alive and progressing swiftly -- features are being added, bugs are getting fixed, and other people besides me are actually using the application on a daily basis. According to Ohloh's statistics, we have a well established, mature codebase maintained by a large development team with increasing year-over-year commits.

Interested?

If you are interested in helping out, check out the instructions and just start hacking!

Cheers,
Jan

December 07, 2012
Kernel: vanilla-sources maintenance (December 07, 2012, 12:01 UTC)

Lately I have been helping the kernel team with bumping vanilla-sources.

It does not take much time because I’m doing it with a script. So, personally, I will continue to bump the following series:

  • 2.6.32
  • 3.0
  • 3.2
  • 3.4
  • 3.6

I will remove the EOL series as soon as possible.

If you have requests, please let me know.

December 06, 2012
Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)
Recording VoIP calls using pulseaudio and avconv (December 06, 2012, 15:58 UTC)

For ages, I've wanted an option in Skype or Empathy to record my video and voice calls [1]. Text is logged constantly because it doesn't cost much in the form of resources, but voice and video are harder.

In lieu of integrated support inside Empathy, and also because I mostly use Skype (for various reasons), the workaround I have is to do an X11 screen grab and encode it to a file. This is not hard at all. A cursory glance at the man page of avconv will tell you how to do it:

avconv -s:v [screen-size] -f x11grab -i "$DISPLAY" output_file.mkv

[screen-size] is in the form of 1366x768 (Width x Height), etc., and you can extend this to record audio by passing the -f pulse -i default flags to avconv [2]. But that's not quite right, is it? Those flags will only record your own voice! You want to record both your own voice and the voices of the people you're talking to. As far as I know, avconv cannot record from multiple audio sources, and hence we must use Pulseaudio to combine all the voices into a single audio source!
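As an aside, if you don't want to hard-code [screen-size], you can query it from the X server (a sketch, assuming xdpyinfo is installed):

#the 'dimensions' line of xdpyinfo holds the WidthxHeight of the display
avconv -s:v $(xdpyinfo | awk '/dimensions:/ {print $2}') -f x11grab -i "$DISPLAY" output_file.mkv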

As a side note, I really love Pulseaudio for the very flexible way in which you can manipulate audio streams. I'm baffled by the prevailing sense of dislike that people have towards it! The level of script-level control you get with Pulseaudio is unparalleled compared to any other general-purpose audio server [3]. One would expect geeks to like such a tool—especially since all the old bugs with it are now fixed.

So, the aim is to take my voice coming in through the microphone, and the voices of everyone else coming out of my speakers, and mix them into one audio stream which can be passed to avconv, and encoded into the video file. In technical terms, the voice coming in from the microphone is exposed as an audio source, and the audio for the speakers is going to an audio sink. Pulseaudio allows applications to listen to the audio going into a sink through a monitor source. So in effect, every sink also has a source attached to it. This will be very useful in just a minute.

The work now boils down to combining two sources into one single source for avconv. Now, apparently, there's a Pulseaudio module to combine sinks, but there isn't any built-in module to combine sources. So we route both the sources to a module-null-sink, and then monitor it! That's it.


pactl load-module module-null-sink sink_name=combined
pactl load-module module-loopback sink=combined source=[voip-source-id]
pactl load-module module-loopback sink=combined source=[mic-source-id]
avconv -s:v [screen-size" -f x11grab -i "$DISPLAY" -f pulse -i combined.monitor output_file.mkv

Here's a script that does this and more (it also does auto setup and cleanup). Run it, and it should Just Work™.

Cheers!

1. It goes without saying that doing so is a breach of the general expectation of privacy, and must be done with the consent of all parties involved. In some countries, not getting consent may even be illegal.
2. If you don't use Pulseaudio, see the man page of avconv for other options, and stop reading now. The cool stuff requires Pulseaudio. :)
3. I don't count JACK as a general-purpose audio system. It's specialized for a unique pro-audio use case.

Richard Freeman a.k.a. rich0 (homepage, bugs)
The Dark Side of Quality (December 06, 2012, 15:48 UTC)

Voltaire once said that the best is the enemy of the good. I think that there are few places where one can see as many abuses of quality as you’ll find in many FOSS projects, including Gentoo.

Often FOSS errs on the side of insufficient quality. Developers who are scratching itches don’t always have incentive to polish their work, and as a result many FOSS projects result in a sub-optimal user experience. In these cases “good enough” is standing in the way of “the best.”

However, I’d like to briefly comment on an opposite situation, where “the best” stands in the way of “good enough.” As an illustrative example, consider the excellent practice of removing bundled libraries from upstream projects. I won’t go on about why this is a good thing – others have already done so more extensively. And make no mistake – I agree that this is a good thing, the following notwithstanding.

The problem comes when things like bundled libraries become a reason to not package software at all. Two examples I’m aware of where this has happened recently are media-sound/logitechmediaserver-bin and media-gfx/darktable. In the former there is a push to remove the package due to the inclusion of bundled libraries. In the latter the current version is lagging somewhat because while upstream actually created an ebuild, it bundles libraries. Another example is www-client/chromium, which still bundles libraries despite a very impressive campaign by the chromium team to remove them.

The usual argument for banning packages containing bundled libraries is that they can contain security problems. However, I think this is misleading at best. If upstream bundles zlib in their package, we cry about potential security bugs (and rightly so); however, if upstream simply writes their own compression functions and includes them in the code, we don’t bat an eyelash, even though this is more likely to cause security problems. The only reason we can complain about zlib is BECAUSE it is extensively audited, making it easy to spot the security problems. We’re not reacting to the severity of problems, but only to the detectability of them.

Security is a very important aspect of quality, but any reasonable treatment of security has to consider the threat model. While software that bundles a library is rightfully considered “lower” in quality than one that does not, what matters more is whether this is a quality difference that is meaningful to end users, and what their alternatives are. If the alternative for the user is to just install the same software with the same issues, but from an even lower quality source with no commitment to security updates, then removing a package from Gentoo actually increases the risks to our users. This is not unlike the situation that exists with SSL, where an unencrypted connection is presented to the user as being more secure than an SSL connection with a self-signed certificate, when this is not true at all. If somebody uses darktable to process photos that they take, then they’re probably not concerned with a potential buffer overflow in a bundled version of dcraw. If another user operates a service that accepts files from strangers on the internet, then they might be more concerned.

What is the solution? A policy that gives users reasonably secure software from a reputable source, with clear disclosure. We should encourage devs to unbundle libraries, consider bugs pointing out bundled libraries valid, accept patches to unbundle libraries when they are available, and add an elog notice to packages containing bundled libraries in the interest of disclosure. Packages with known security vulnerabilities would be subject to the existing security policy. However, developers would still be free to place packages in the tree that contain bundled libraries, unmasked, and they could be stabilized. Good enough for upstream should be good enough for Gentoo (again, barring specific known vulnerabilities), but that won’t stop us from improving further.



gstreamer 1.0 (December 06, 2012, 00:03 UTC)

It has been a while since I last wrote here, but I am not dead and I still somehow manage to contribute to Gentoo.

In the past weeks, I have been working on making Gnome 3.6 ready for inclusion in portage. It rapidly became apparent that Gnome 3.6 would have to use both gstreamer 0.10 and gstreamer 1.0; however, the gstreamer team is badly understaffed, and only Alexandre (tetromino), who is not even a gstreamer team member, had tried to start bumping ebuilds to gstreamer 1.0.

But then Alexandre got busy and this development stalled a bit. After I finished bumping the overlay to Gnome 3.6.1, I took up the challenge of rewriting the gstreamer eclasses to make them easier to use and understand. They were, in my opinion, quite scary, with version checks everywhere, and I think that is one of the reasons so few people want to work in the gstreamer team :)

If you do not follow gentoo-dev: most of the code moved to gst-plugins10.eclass, which received some magic touches that basically make 99% of the version-dependent code go away. As an added bonus, the eclasses are now documented and support EAPI 1 to 5. EAPI 0 support was dropped because it lacks slot operators, which are really needed with gstreamer right now.

So if you hit some gstreamer compilation problems in the last few days, please forgive me; the upgrade road was a bit bumpy but, overall, it was not so bad. And now, I am happy to say that gstreamer 1.0 is in portage, which clears the road for gnome 3.6 inclusion.

On a final note, I also continued Alexandre’s work of bumping the last 0.10 releases, so we are up-to-date on that front as well.

Happy compiling!

December 05, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
nginx as reverse SMTP proxy (December 05, 2012, 22:03 UTC)

I’ve noticed that not many resources are online telling you how to use nginx as a reverse SMTP proxy. Using a reverse SMTP proxy makes sense even if you have just one mail server back-end, either because you can easily switch towards another one, or because you want to put additional checks before handing off the mail to the back-end.

In the below example, a back-end mail server is running on localhost (in my case it’s a Postfix back-end, but that doesn’t matter). Mails received by Nginx will be forwarded to this server.

user nginx nginx;
worker_processes 1;

error_log /var/log/nginx/error_log debug;

events {
        worker_connections 1024;
        use epoll;
}
http {

        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';


        server {
                listen 127.0.0.1:8008;
                server_name localhost;
                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log info;

                root /var/www/localhost/htdocs;

                location ~ \.php$ {
                        add_header Auth-Server 127.0.0.1;
                        add_header Auth-Port 25;
                        return 200;
                }
        }
}

mail {
        server_name localhost;

        auth_http localhost:8008/auth-smtppass.php;

        server {
                listen 192.168.100.102:25;
                protocol smtp;
                timeout 5s;
                proxy on;
                xclient off;
                smtp_auth none;
        }
}

If you first look at the mail setting, you notice that I include an auth_http directive. This is needed by Nginx, as it will consult this back-end service on what to do with the mail (the moment it receives the recipient information). The URL I use is arbitrarily chosen here, as I don’t really run a PHP service in the background (yet).

In the http section, I create the same resource that the mail section’s auth_http wants to connect to. I then declare the two return headers that Nginx needs (Auth-Server and Auth-Port) with the back-end information (127.0.0.1:25). If I ever need to do load balancing or other tricks, I’ll write up a simple PHP script and serve it from PHP-FPM or so.

Next on the list is to enable SSL (not difficult) with client authentication (which sadly isn’t supported by Nginx’s mail module (yet), so I’ll need to look at a different approach for that).

BTW, this is all on a simple Gentoo Hardened with SELinux enabled. The following booleans were set to true: nginx_enable_http_server, nginx_enable_smtp_server and nginx_can_network_connect_http.
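For reference, a sketch of setting those booleans persistently with the standard SELinux tooling:

setsebool -P nginx_enable_http_server on
setsebool -P nginx_enable_smtp_server on
setsebool -P nginx_can_network_connect_http on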

December 04, 2012
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: debris (December 04, 2012, 07:29 UTC)

a new song: debris by ioflow

reworking music from three netlabel releases, for the 48th disquiet junto, fraternité, dérivé.

a last-minute contribution to this junto. i was in a car wreck a couple days ago, so abruptly my planned participation time was reduced to just a day and a half. i could only spend a little while per session sitting at the DAW. the track’s title is a reference to that event.

everything was sequenced with renoise, as seen in the screenshot.

the three source tracks were very hard to work with; this was easily the hardest junto i’ve attempted. i had to make several passes through the tracks, pulling out tiny sub-one-second sections here and there, building up percussion, or finding droney passages that would work for background material.

for the percussion, i zoomed in and grabbed pieces of non-tonal audio, gated them to remove incidental noise, and checked playback at other speeds for useful sounds. some of the samples were doubly useful with different filter and speed settings. most of the percussion sounds were created after isolating one channel or mixing down to mono; this gave a sharper, clickier sound. occasionally, some of the hits/sticks were left in stereo for a slightly fuller sound.

the melody/drone passages were all pulled from the “unloop” track. i chopped out a short section of mostly percussion-free sound at the beginning of the song, isolated one channel, and ran this higher-pitched drone into paulstretch, stretched to 50x. i played with the bandwidth and noise/tone sliders to get the distinctive crystalline sound, rendering it a few more times. by playing this tone at different speeds using renoise’s basic sample editor, i was able to layer octaves, fading different copies of the sample in and out for some evolving harmonics as desired.

a signal follower attached to the low-passed kick drum flexed the drone’s volume on the beat, adding some liveliness, resulting in a pleasant low-key “bloom pads” effect. i don’t go for huge sidechain compression; just a touch is all that’s needed to reinforce the rhythm. a slow LFO set to “random” mode, attached to a bitcrusher, downgraded the clap sounds with some pleasant crunch.

calf reverb and vintage tape delay plugins rounded out the FX, with the percussion patterns treated liberally, resulting in some complex sounds despite simple arrangement. the only other effect was a tape warmth plugin on the master channel; everything was kept quite minimal, for aesthetic and time reasons. given that i only had a day or so to work on the track, i knew i couldn’t try for too many complicated tricks or melodies.

December 01, 2012
Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)
Libreoffice 4.0 and other cool stuff (December 01, 2012, 12:47 UTC)

During the following week there will be a hard feature freeze on libreoffice and the 4.0 branch will be created. This means that we can finally start doing some sensible stuff, like testing it like hell in Gentoo.

This release is packed with new features, so let me list at least some that are relevant to our Gentoo stuff:

  • repaired nsplugin interface (who the hell uses it :P), fixed by Stephan Bergmann, for which you ALL should send him some cookies :-)
  • liblangtag enables direct po/mo usage, which makes translations easier to handle because they are no longer converted into the internal sdf format
  • liborcus library debut, which splits some features out of calc into a nice small lib so anyone can reuse them; plus it is easier to maintain. Cookies to Kohei Yoshida
  • bluetooth remote control that allows you to control your presentations over bluetooth; there is also an android remote app that does the same over the network ;-)
  • telepathy collaboration framework inclusion that allows you to work with multiple other people on one document in a semi-realtime manner (it is mostly a tech preview and you don’t see what the other guy is doing, it just appears in the doc)
  • binfilter is gone! Which is awesome, as it was a huge load of really stinky code

For more changes you can just read the wiki article; keep in mind that this wiki page will be updated until the release, so it does not yet contain all the stuff.

Build related stuff

  • We are going to require a new library that allows us to parse the mspub format. Fridrich Strba was obviously bored, so he wrote yet another format parser :-)
  • Pdfimport is no longer a pseudo-extension but is built in directly behind a normal useflag, which saves quite a lot of copy&paste code, and it looks like it operates faster now.
  • The openldap schema provider is now hard-required so you can use address books (the Mork driver handles that). I bet some of you lads won’t like this much, but ldap itself does not have too many deps and it is useful for quite a few business cases.
  • There are also some nice removals: glib and librsvg are goners from the default reqs (no surprise for gnomers, who will still need them). It also no longer needs sys-libs/db, which I finally removed from my system.
  • The gcc requirement was raised to 4.6, because otherwise boost acts like *censored* and I have better stuff to do than fix it all the time.
  • Saxon bundling has been dealt with and removed completely.
  • Parallel build is sorted out, so it will use the correct number of cpus and will fork gcc only the required number of times, not n^n times.
  • And last but probably worst, the plugin foundation that was in java is slowly migrating to python, and it needs python:3.3 or later. This did not make even me happy :-)

Other fancy libreoffice stuff

Michael Meeks is running merges against Apache OpenOffice so we try hard to get even the fixes that are not in our codebase (thankfully the license allows merging in this direction). So with lots of effort we review all their code changes and try to merge them over into our implementation. This will grow more and more complex over time, because in libo we actually try to use the new stuff like the newer C++ std/Boost/… so there are more and more collisions. Let’s see how long it will be worth it (of course oneliners are easy to pick up :P).

What is going into stable?

We at last got libreoffice-3.6 and its binary counterpart stable. Shortly afterwards an svg bug with librsvg was found (see above, it’s gone from 4.0), so the binaries will be rebuilt and the next version bump will lose the svg useflag. This was caused by how I wrote the detection of new switches and an oversight on my side: I simply tried to launch libreoffice with -svg and didn’t dig further. Other than that, the whole package is production ready and there should not be many new regressions.

November 30, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

If you're seeing a message like "Failed to move to new PID namespace: Cannot allocate memory" when running Chrome, this is actually a problem with the Linux kernel.

For more context, see http://code.google.com/p/chromium/issues/detail?id=110756 . In case you wonder what the fix is, the patch is available at http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=976a702ac9eeacea09e588456ab165dc06f9ee83, and it should be in Linux-3.7-rc6.

November 28, 2012
Jeremy Olexa a.k.a. darkside (homepage, bugs)
Gentoo: Graphing the Developer Web of Trust (November 28, 2012, 13:57 UTC)

“Nothing gets people’s interest peaked like colorful graphics. Therefore, graphing the web of trust in your local area as you build it can help motivate people to participate as well as giving everyone a clear sense of what’s being accomplished as things progress.”

I graphed the Gentoo Developer Web of Trust, as motivated by the (outdated) Debian Web of Trust.

Graph (same as link above) – Redrawn weekly : http://qa-reports.gentoo.org/output/wot-graph.png
Stats per Node : http://qa-reports.gentoo.org/output/wot-stats.html
Source : http://git.overlays.gentoo.org/gitweb/?p=proj/qa-scripts.git;a=blob;f=gen-dev-wot.sh;hb=HEAD

Enjoy.