
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
December 22, 2012, 23:07 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

December 22, 2012
Stuart Longland a.k.a. redhatter (homepage, bugs)
End of the world predictions (December 22, 2012, 07:21 UTC)

This is a little old; it has been kicking around on my computer for over 10 years now, but it seems especially relevant given what some thought of the Mayan calendar…

December 21, 2012


[Figure: End of World banner]

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers, ready to rock the end of the world (or at least mid-winter/Southern Solstice)! The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.6.8, Xorg 1.12.4, KDE 4.9.4, Gnome 3.4.2, XFCE 4.10, Fluxbox 1.3.2, Firefox 17.0.1, LibreOffice 3.6.4.3, Gimp 2.8.2-r1, Blender 2.64a, Amarok 2.6.0, Mplayer 2.2.0, Chromium 24.0.1312.35 and much more ...
  • If you want to see if your package is included we have generated both the x86 and amd64 package lists. There is no new FAQ or artwork for the 20121221 release, but you can still get the 12.0 artwork plus DVD cases and covers for the 12.0 release; and view the 12.1 FAQ (persistence mode is not available in 20121221).
  • Special Features:
    • ZFSOnLinux
    • Writable file systems using AUFS so you can emerge new packages!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20121221 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit userland while using the provided 32-bit userland. The livedvd-amd64-multilib-20121221 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Creating a tumblelog with blohg (December 21, 2012, 05:39 UTC)

Warning: This post relies on unreleased blohg features. You will need to install blohg from the Mercurial repository or use the live ebuild (=www-apps/blohg-9999) if you are a Gentoo user. Please ignore this warning after the blohg-1.0 release.
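For Gentoo users, a hedged sketch of accepting and installing the live ebuild (this assumes package.accept_keywords is a single file on your system; live ebuilds carry no keywords, so they need the ** entry):

# accept the unkeyworded live ebuild, then install it
echo "=www-apps/blohg-9999 **" >> /etc/portage/package.accept_keywords
emerge --ask =www-apps/blohg-9999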

Tumblelogs are old stuff, but services like Tumblr have popularized them a lot recently. Tumblelogs are a quick and simple way to share random content with readers. They can be used to share a link, a photo, a video, a quote, a chat log, etc.

blohg is a good blogging engine, we know, but what about tumblelogs?!

You can already share videos from YouTube and Vimeo, and you can share most of the other stuff manually, but that is boring, and diverges from the main objective of tumblelogs: simplicity.

To solve this issue, I developed a blohg extension (Yeah, blohg-1.0 supports extensions! \o/ ) that adds some cool reStructuredText directives:

quote

This directive is used to share quotes. It will create a blockquote element with the quote and add a signature with the author name, if provided.

Usage example:

.. quote::
   :author: Myself

   This is a random quote!

chat

This directive is used to share chat logs. It will add a div with the chat log, highlighted with Pygments.

Usage example:

.. chat::

   [00:56:38] <rafaelmartins> I'm crazy.
   [00:56:48] <rafaelmartins> I chat alone.

You can see the directives in action on my shiny new tumblelog:

http://rafael.martins.im/

The source code of the tumblelog, including the blohg extension and the mobile-friendly templates, is available here:

http://hg.rafaelmartins.eng.br/blogs/rafael.martins.im/

I have no plans to release this extension as part of blohg, but feel free to use it if you find it useful!

That's all!

December 20, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Why my Munin plugins are now written in Perl (December 20, 2012, 21:52 UTC)

This post is an interlude between Gentoo-related posts. The reason is that I have one in drafts that requires me to produce some results I don't have yet, so it'll have to wait for the weekend or so.

You might remember that my original IPMI plugin was written in POSIX sh and awk, rather than bash and gawk like the original one. Since then, the new plugin (which, as it turns out, might become part of the 2.1 series, though not as a replacement for both of the old ones, since RHEL and Fedora don't package a new enough version of FreeIPMI) has been rewritten in Perl, so it uses neither sh nor awk. Similarly, I've written a new plugin for sensors, also in Perl (although in this case the original one used it too).

So why did I learn a new language (I had never programmed in Perl before six months ago) just to get these plugins running? Well, as I said in the other post, the problem was calling the same command so many times, which is why I wanted to go multigraph — but when dealing with variables, sticking to POSIX sh is a huge headache. One of the common ways to handle this is to save the output of a command to a temporary directory and parse it multiple times, but that's quite a pain, as it might require I/O to disk, and it also means that you have to execute more and more commands. Doing the processing in Perl means that you can save things in variables, or even just parse the output once and split it into multiple objects to be used later, which is what I've been doing for parsing FreeIPMI's output.

But why Perl? Well, Munin itself is written in Perl, so while my usual language of choice is Ruby, the plugins are much more usable if doing it in Perl. Yes, there are some alternative nodes written in C and shell, but in general it’s a safe bet that these plugins will be executed on a system that at least supports Perl — the only system I can think of that wouldn’t be able to do so would be OpenWRT, but that’s a whole different story.

There are a number of plugins written in Python and Ruby, some in the official package, but most in the contrib repository, and they could use some rewriting, especially those that use net-snmp or other SNMP libraries instead of Munin's Net::SNMP wrapper.

But while the language is of slight concern, some of the plugins could use some rewriting simply to improve their behaviour. As I've said, using multigraphs, it's possible to reduce the number of times the plugin is executed, and thus the number of calls to the backend, whatever that is (a program, or access to /sys), so in many cases plugins that support multiple “modes” or targets through wildcarding can be improved by merging them into a single plugin. In some cases it's even possible to collapse multiple plugins into one, as I did with the various apache_* plugins shipping with Munin itself: on my system they are replaced by apache_status from the contrib repository, which fetches the server status page only once and then parses it to produce the three graphs that were previously created by three different plugins with three different fetches.

Another important trick up our sleeves while working on Munin plugins is dirty config, which basically means that (when the node indicates support for it) you can make the plugin output the values as well as the configuration during the config execution — this saves you one full trip to the node (to fetch the data), and usually that also means it saves you one more call to the backend. In particular, with these changes my IPMI plugin went from requiring six calls to ipmi-sensors per update, for the three graphs, to just one. And since it's either IPMI on the local bus (which might require some time to access) or over LAN (which takes more time), the difference is definitely visible both in timing and in traffic; one of the servers at my day job is monitoring another seven servers (which can't be monitored through the plugin locally), which means that we went from 42 to 7 calls per update cycle.
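To make the two tricks concrete, here is a minimal sketch of a multigraph plugin with dirtyconfig support. It is written in plain shell for brevity (even though the post argues Perl is the better fit), the graph and field names are made up, and a stub variable stands in for the single backend call (e.g. one ipmi-sensors invocation):

#!/bin/sh
# Minimal sketch: one backend "call" feeds both config and values.
# The literal data below stands in for e.g. a single ipmi-sensors run.
DATA="temp1 42
temp2 38"

print_values() {
    echo "multigraph example_temps"
    echo "$DATA" | while read -r name value; do
        echo "${name}.value ${value}"
    done
}

case "$1" in
config)
    echo "multigraph example_temps"
    echo "graph_title Example temperatures"
    echo "graph_vlabel degrees C"
    echo "$DATA" | while read -r name value; do
        echo "${name}.label ${name}"
    done
    # dirtyconfig: when the node advertises the capability, print the
    # values during config and skip the separate fetch round trip.
    if [ "${MUNIN_CAP_DIRTYCONFIG:-0}" = "1" ]; then
        print_values
    fi
    ;;
*)
    print_values
    ;;
esac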

So if you use Munin, and either have had timeout issues or have some time at hand to improve some plugins, you might want to follow what I've been doing, and start improving or rewriting plugins to support multigraph or dirtyconfig, and thus improve their performance.

Jeremy Olexa a.k.a. darkside (homepage, bugs)

I was in Budapest for 11 days. I couchsurfed there, which is longer than I normally stay at someone's house, by far. So, thanks Paul! Budapest was nice; it reminded me much of Prague. While I was there I visited a Turkish bath, which was a very interesting experience. Imagine a social, public “hot tub & sauna” with naturally hot water. I found a newly minted Crossfit gym, RC Duna, that opened up its doors for a traveller, so gracious. Even though I didn't get to see the Opera in Vienna, I went to the Opera house in Budapest. It was my first time seeing a ballet, The Nutcracker. There were Christmas markets in Budapest too. I actually liked the Budapest ones more than the Viennese markets. I also helped to organize the first (known) Hungarian Gentoo Linux Beer Meeting :)

Then I took a train to Belgrade, Serbia. The train ride was 8+ hours. I couchsurfed again for 3 nights. I had some wonderful chats with my host, Ljubica. She learned about US things, I learned about Serbian things; just what you could hope for, a cultural exchange via couchsurfing. I was her first US guest. Later on, an Argentinian fellow stayed there too and we had conversations about worldly topics, like “why are borders so important and do we need them?” and “speculating why Belgium's lack of government even worked.” Then, perhaps the best part, I got to try authentic mate. In my opinion there wasn't much to actually see in Belgrade during the winter, but I did walk around and went to the fortress. Otherwise, I nursed the head cold which I caught on the train.

I took the bus to Skopje, FYROM. I stayed in Skopje for 3 nights at a nice independent hostel, Shanti Hostel (recommended). I walked around the center (not much to see), walked through the old bazaar, and ate some good food. The dishes in Central Europe include lots of meat. I embarked on a mission to find the semi-finalist entry for the next 7 wonders of the world, Vrelo Cave, but I got lost and took a 10km hike along the river instead. It was spectacular! And peaceful. Perfect, really. I wanted to see what was at the end of the trail, but eventually turned around because it didn't end. On the way back, I slipped and came within feet of going in the drink. As my legs straddled a tree and my feet went through branches that were clearly meant to handle no weight, I used that split second to be thankful. I used the next second to watch something black go bounce, …, bounce, SPLASH. It is funny how you can go from thankful to cursing about your camera in the river so quickly. I got up, looked around and thought about how I got off the path, dang. Being the frugal man I am, I continued off the path and went searching for my camera. Well, that was bad, because I slipped again. Sliding on my ass and grabbing at branches, I eventually stopped. It was at this point I knew my camera was gone, since I could see the battery had popped out and was in the water. Le sigh. C'est la vie.

So, no pictures, friends. I had a few hundred pictures that I didn’t upload and they are gone. I might buy a camera again but for now, you will just have to take my word for it. My Mom says she will send me a disposable camera :D ha.

I’m off to Greece at 6am…

Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching policy types in Gentoo/SELinux (December 20, 2012, 09:31 UTC)

When you are running Gentoo with SELinux enabled, you will be running with a particular policy type, which you can derive from either /etc/selinux/config or from the output of the sestatus command. As a user on our IRC channel had some issues converting his strict-policy system to mcs, I thought about testing it out myself. Below are the steps I took and the reasoning why (and I will update the docs to reflect this accordingly).

Let’s first see if the type I am running at this moment is indeed strict, and that the mcs type is defined in the POLICY_TYPES variable. This is necessary because the sec-policy/selinux-* packages will then build the policy modules for the other types referenced in this variable as well.

test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             strict
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              disabled
Policy deny_unknown status:     denied
Max kernel policy version:      28
 
test ~ # grep POLICY_TYPES /etc/portage/make.conf
POLICY_TYPES="targeted strict mcs"

If you notice that this is not the case, update the POLICY_TYPES variable and rebuild all SELinux policy packages using emerge $(qlist -IC sec-policy) first.
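A sketch of that check-and-rebuild step (the POLICY_TYPES value shown matches this article's setup):

test ~ # grep POLICY_TYPES /etc/portage/make.conf
POLICY_TYPES="targeted strict mcs"    # edit this line if mcs is missing
test ~ # emerge --oneshot $(qlist -IC sec-policy)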

Let’s see if I indeed have policies for the other types available and that they are recent (modification date):

test ~ # ls -l /etc/selinux/*/policy
/etc/selinux/mcs/policy:
total 408
-rw-r--r--. 1 root root 417228 Dec 19 21:01 policy.27
 
/etc/selinux/strict/policy:
total 384
-rw-r--r--. 1 root root 392168 Dec 19 21:15 policy.27
 
/etc/selinux/targeted/policy:
total 396
-rw-r--r--. 1 root root 402931 Dec 19 21:01 policy.27

Great, we’re now going to switch to permissive mode and edit the SELinux configuration file to reflect that we are going to boot (later) into the mcs policy. Only change the type – I will not boot in permissive mode so the SELINUX=enforcing can stay.

test ~ # setenforce 0
 
test ~ # vim /etc/selinux/config
[... set SELINUXTYPE=mcs ...]

You can run sestatus to verify the changes, but be aware that, while the command does say that the mcs policy is loaded, this is not the case yet. The mcs policy is just defined as the policy to load:

test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              disabled
Policy deny_unknown status:     denied
Max kernel policy version:      28

So let’s load the mcs policy shall we?

test ~ # cd /usr/share/selinux/mcs/
test mcs # semodule -b base.pp -i $(ls *.pp | grep -v base | grep -v unconfined)

Next we are going to relabel all files on the file system, because the mcs policy adds another component to the context (a sensitivity label, always s0 for mcs). We will also redo the setfiles steps we did initially while setting up SELinux on our system. This is because we need to relabel files that are “hidden” from the current file system because other file systems are mounted on top of them.

test mcs # rlpkg -a -r
Relabeling filesystem types: btrfs ext2 ext3 ext4 jfs xfs
Scanning for shared libraries with text relocations...
0 libraries with text relocations, 0 not relabeled.
Scanning for PIE binaries with text relocations...
0 binaries with text relocations detected.
 
test mcs # mount -o bind / /mnt/gentoo
test mcs # setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/dev
test mcs # setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/lib64
test mcs # umount /mnt/gentoo

Finally, edit /etc/fstab and change all rootcontext= parameters to include a trailing :s0, otherwise the root contexts of these file systems will be illegal (in the mcs sense) as they do not contain the sensitivity level information.

test mcs # vim /etc/fstab
[... edit rootcontext's to now include ":s0" ...]
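As an illustration, an adjusted entry might end up looking like this (device, mount point and context are hypothetical):

tmpfs   /tmp   tmpfs   defaults,rootcontext=system_u:object_r:tmp_t:s0   0 0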

There ya go. Now reboot and notice that all is okay, and we’re running with the mcs policy loaded.

test ~ # id -Z
root:sysadm_r:sysadm_t:s0-s0:c0.c1023
test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     denied
Max kernel policy version:      28

December 18, 2012
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: lost letters (December 18, 2012, 09:17 UTC)

a new song: lost letters by ioflow

prepared improvisation for the 50th disquiet junto, morse beat.

the assignment was to encode a word or phrase with the Morse method, and then translate that sequence into the song’s underlying rhythm.

i chose the meaning of my name, “the Lord is salvation.” i looked at the resulting dashes and dots and treated them as sheet music, improvising a minor-key motif for piano, using just my right hand.

with the basic sketch recorded, i duplicated an excerpt and ran it through a vintage tape delay effect, putting it in the background almost like a loop. i set to work adding a few notes here and there, some of them reversed, running into more tape delays; contrasting their sonic character with the main melody. the loop excerpt repeats a few times, occasionally transformed by offset placement with the main theme, or reinforced by single note chord changes.

from a very few audio fragments, a mournful story emerged. echoing piano lines and uncovered memories. i did my best to vary the structure while keeping the mood and emotions, but this is still pretty hasty work; i only had a few minutes to arrange this piece before the deadline, due to software issues with ardour 3 beta. ardour crashes every time i attempt to process an audio clip, such as reversing or stretching it. i had to separately render those segments with renoise, then import them to ardour.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio 3.0 (December 18, 2012, 07:57 UTC)

Yay, we just released PulseAudio 3.0! I'm not going to rehash the changelog, which you can find in the release announcement as well as in the longer release notes.

I would like to thank the 36 contributors over the last 6 months who have made this release what it is and continue to demonstrate what a vibrant community we have!

December 16, 2012
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
The difference between Ubuntu and Gentoo ;) (December 16, 2012, 22:34 UTC)

This gem comes from the xda developers forums; thanks barry99705!

"Using/installing Ubuntu is like buying a car. It may have a few features you'll never need or use, and might need to have a couple features added as aftermarket parts.

Using/installing Gentoo is like buying a pile of sheet metal, a few rubber trees, a small pile of copper, a pile of sand, and an oil well. Then you have to cut and fabricate the car's body from the sheet metal, extract the rubber from the trees, then use that to make the tires and all the seals on the car. Use the pile of copper to make all the wires, and use the leftover rubber (you did save the scraps, didn't you?) to make the insulation. Melt down the pile of sand to make the windshield, side and back windows, also the headlights and the lights themselves. Then you need to extract the crude oil from the well to refine your own engine oil and gas. In the end, you have a car created to your exact specifications (if you know what the hell you're doing) that may or may not be any better than just buying a car off the lot."

Of course I should additionally mention that Gentoo provides awesome documentation for all the steps, and most of the actual assembly work is done single-handedly by portage!

December 15, 2012
Richard Freeman a.k.a. rich0 (homepage, bugs)
Gentoo and Copyright Assignments (December 15, 2012, 13:43 UTC)

A topic that has been fairly quiet for years has roared into life on a few separate occasions in the last month within the Gentoo community: copyright assignments. The goal of this post is to talk a little about the issues around these as I see them. I’ll state upfront that I’m not married to any particular approach.

But first, I think it is helpful to consider why this topic is flaring up. The two situations I’m aware of where this has come up in the last month or so both concern contributions (willing or not) from outside of Gentoo. One concerns a desire to be able to borrow eclass code from downstream distros like Exherbo, and the other is the eudev fork. In both cases the issue is with the general Gentoo policy that all Gentoo code have a statement at the top to the effect of “Copyright 2012 Gentoo Foundation.”

Now, Diego has already blogged about some of the issues created by this policy, and I want to set that aside for the moment. Regardless of whether the Foundation can lay claim to ownership of copyright on past contributions, the question remains: should Gentoo aim to have copyright ownership (or something similar) of all Gentoo work rest with the Foundation?

Right now I’m reaching out to other free software organizations to understand their own policies in this area. Regardless of whether we want to have Gentoo own our copyrights or not there are still legal questions around what to put on that copyright line, especially when a file is an amalgamation of code originated both inside and outside of Gentoo, perhaps even by parties who are hostile to the effort. I can’t speak for the Trustees as a whole, but I suspect that after gathering info we’ll try to have some open discussion on the lists, and perhaps even have a community-wide vote before making new policy. I don’t want to promise that – in fact I’d recommend that any community-wide vote be advisory only unless a requirement for supermajority were set, as I don’t want half the community up in arms because a 50.1% majority passed some highly unpopular policy.

So, what are some of the directions in which Gentoo might go? Why might we choose to go in these directions? Below I outline some of the options I’m aware of:

Maintain the status quo
We could just leave the issue of copyright assignment somewhat ambiguous as has been done. If Gentoo were forced to litigate over copyright ownership right now, an argument could be made that because contributors willingly allowed us to stick that copyright notice on our files and made their contributions with knowledge of our policies, they have given implicit consent to our doing so.

I’m not a big fan of this approach – it has the virtue of requiring less work, but really has no benefits one way or the other (and as you’ll read below their are benefits from declaring a position one way or the other).

This still requires us to come up with a policy around what goes on the copyright notice line. I suspect that there won't be much controversy for Gentoo-originated work like most ebuilds, as there isn't much controversy over them now. However, for stuff like eudev or code borrowed from other projects this could get quite messy. With no one organization owning much of the code in any file, the copyright line could become quite a mess.

Do not require copyright assignment
We could just make it a policy that Gentoo would aim to own the name Gentoo, but not the actual code we distribute. This would mean that we could freely accept any code we wished (assuming it was GPL or CC BY-SA compatible per our social contract). This would also mean that Gentoo as an organization would find it difficult to pursue license violations, and future relicensing would be rather difficult.

From the standpoint of being able to merge outside code this is clearly the preferred solution. This approach still carries all the difficulties of managing the copyright notice, since again no one organization is likely to hold the majority of copyright ownership of our files. Also, if we were to go this route we should strongly consider requiring that all contributions be licensed under GPL v2+, and not just GPL v2: since Gentoo would not own the copyright, if we ever wanted to move to a newer GPL version we would not have the option to do so unless this were done.

Gentoo would still own the name Gentoo, so from a branding/community standpoint we’d have a clear identity. If somebody else copied our code wholesale the Foundation couldn’t do much to prevent this unless we retroactively asked a bunch of devs to sign agreements allowing us to do so, but we could keep an outside group from using the name Gentoo, or any of our other trademarks.

Require copyright assignment
We could make it a policy that all contributions to Gentoo be made in conjunction with some form of copyright assignment, or contributor licensing agreement. I’ll set aside for now the question of how exactly this would be implemented.

In this model Gentoo would have full legal standing to pursue license violations, and to re-license our code. In practice I’m not sure how likely we’d actually be to do either. The copyright notice line would be easy to manage, even if we made the occasional exception to the policy, since the exceptions could of course be managed as exceptions as well. Most likely the majority of the code in any file would only be owned by a few entities at most.

The downside to this approach is that it basically requires turning away code, or making exceptions. Want to fork udev? Good luck getting them to assign copyright to Gentoo.

There could probably be blanket exceptions for small contributions which aren’t likely to create questions of copyright ownership. And we could of course have a transition policy where we accept outside code but all modifications must be Gentoo-owned. Again, I don’t see that as a good fit for something like eudev if the goal is to keep it aligned with upstream.

I think the end result of this would be that work that is outside of Gentoo would tend to stay outside of Gentoo. The eudev project could do its thing, but not as a Gentoo project. This isn’t necessarily a horrible thing – OpenRC wasn’t really a “Gentoo project” for much of its life (I’m not quite sure where it stands at the moment).

Alternatives
There are in-between options as well, such as encouraging the voluntary assignment/licensing of copyright (which is what KDE does), or dividing Gentoo up into projects we aim to own or not. So, we might aim to own our ebuilds and the essential eclasses and portage, but maybe there is the odd eclass or side project like eudev that we don’t care about owning. Maybe we aim to own new contributions (either all or most).

There are good things to be said for a KDE-like approach. It gives us some of the benefits of requiring assignment, and all of the benefits of not requiring it. We could probably pursue license violations vigorously, as we'd likely hold control of copyright over the majority of our work (aside from things like eudev – which obviously aren't our work to begin with). Relicensing would be a bit of a pain – for anything we have control over we could of course relicense it, but for anything else we'd have to at least make some kind of effort to get approval. Legally that all becomes a murky area. If we were to go with this route, again I'd probably suggest that we require all code to be licensed GPL v2+ or similar, just to give us a little bit of automatic flexibility.

I’m certainly interested in feedback from the Gentoo community around these options, things I hadn’t thought of, etc. Feel free to comment here or on gentoo-nfp.


Filed under: foss, gentoo, gentoo foundation

December 14, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My take on the separate /usr issue (December 14, 2012, 19:18 UTC)

This is a blog post I would definitely have preferred not to write — it's a topic that honestly does not touch me that much, for a few reasons I'll explore in a moment, and at the same time it's one that is quite controversial, as it has quite a few meanings layered one on top of the other. Since I'm writing this, though, I would first like to make sure that readers know who I am and why I'm probably going to just delete comments telling me that I don't care about compatibility with older systems and other operating systems.

My first project within Gentoo has been Gentoo/FreeBSD — I have a (sometimes insane) interest in portability with operating systems that are by far not mainstream. I’m a supporter of what I define “software biodiversity”, and I think that even crazy experiments have the right to exist, if anything to learn tricks and issues to avoid. So please don’t give me that kind of crap I noted above.

So, let’s see — I generally have little interest in keeping things around just for the sake of it, and as I wrote a long time ago I don’t use a separate /boot in most cases. I also generally dislike legacies for the sake of legacies. I guess it’s thus a good idea to start looking at which legacies bring us to the point of discussing whether /usr should be split. If it’s not to be split, there’s no point debating supporting split /usr, no?

The first legacy, which is specific to Gentoo, is tied to the fact that our default portage tree is set to /usr/portage, and that the ebuilds' tree itself, the source files (distfiles), and the built binary packages are all stored there. This particular tree is hungry for disk space and even more so for inodes. Since both the tree and, in general, the open source projects we package keep growing, the amount of these two resources we need increases as well; and since they are by default on /usr, it's entirely possible that, if this tree's resources are allocated statically when partitioning, it'll reach a point where there won't be enough space, or inodes, to allocate anything in it. If /usr/portage resides in the root filesystem, it's also very possible, if not very likely, that the system would stop working entirely because there is not enough space available on it.

One solution to this problem is to allocate /usr/portage its own partition — I still don't like that much as an option, because /usr is supposed to be, according to the FHS/LSB, for read-only data. Most other distributions use subdirectories of /var, as that's what it is designed for. So why are we using /usr? Well, it turns out that this was inspired by FreeBSD, where /usr is used for just about everything including temporary directories and other similar uses. Indeed, /usr/portage finds its peer in /usr/ports, which seems to be where Daniel got the inspiration to write Portage in the first place. It should be an easy legacy to overcome, but migrating it is probably tricky enough that nobody has done so yet. Too bad.
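For what it's worth, an individual system can already relocate the tree along those lines. A hedged make.conf sketch (paths are illustrative, not a recommendation from this post, and the existing data would have to be moved over first):

# keep the tree, distfiles and binary packages under /var instead of /usr
PORTDIR="/var/portage"
DISTDIR="${PORTDIR}/distfiles"
PKGDIR="${PORTDIR}/packages"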

Before somebody asks: yes, until a while ago splitting the whole /var – which is generally considered much more sensible – was a pain in the neck, among other things because things were using /var/run and other similar paths before the partition could be mounted. The situation is now much better thanks to the fact that /run is available much earlier in the boot process — this is not yet properly handled by all the init scripts out there, but we're reaching that point, slowly.

Okay, so to the next issue: when do you want to split /usr at all? Well, this all depends on a number of factors, but I guess the first question is whether you're installing a new system or maintaining an old one. If you're installing a new one, I really can't think of any good reason to split /usr out — the only one that comes to mind is if you want to have it in LVM and keep the rootfs as a standalone partition, and I don't see why. I'd rather, at that point, put the rootfs in LVM as well, and just use an initrd to accomplish that — if that's too difficult, well, it's a reason to fix the way initrds or LVM are handled, not to keep insisting on splitting /usr! Interestingly enough, such a situation calls for the same /boot split I resented five years ago. I still use LVM without having the rootfs in it, and without needing to split /usr at all.
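As a hedged illustration (the post does not prescribe any particular tool): dracut is one way to get such an initrd with LVM support included; paths and kernel version are examples only.

# build an initramfs for the running kernel, with the lvm module added
emerge --ask sys-kernel/dracut
dracut --add lvm /boot/initramfs-$(uname -r).img $(uname -r)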

Speaking of which, most ready-to-install distributions only offer the option of using LVM — it makes sense, as you need to cater to as many systems as possible at once. This is why Gentoo Linux is generally disconnected from the rest: the power of doing things for what you want to use them for makes it generally possible to skip the overgeneralization, and that's why we're virtually the only distribution out there able to work without an initrd.

Another point that comes up often is a system where the space in the rootfs was badly allocated, and /usr is being split because there is not enough space. I'm sorry that this is a common issue, and I do know that it's a pain to re-partition such a system, as it involves at least a minimal downtime. But this is why we have workarounds, including the whole initrd thing. I mean, it's not that difficult to manage with the initrd, and yes, I can understand that it's more work than just having the whole system boot without /usr — but it's a sensible way to handle it, in my opinion. It's work either way: work for everybody under the sun to get split /usr working properly, or work for those who got the estimate wrong and now need the split /usr — and you can guess who I prefer doing the work anyway (hint: like everybody in this line of business, I'm lazy).

Some people have said that /usr is often provided over NFS, with a very simple, lightweight rootfs used in those circumstances — I understand this need, but the current solution to support split /usr causes the rootfs to not be as simple and lightweight as before. The initrd route, in that sense, is probably the best option: you just get an initrd that is able to mount the root through NFS, and you're done. The only problem with this solution is handling the case where /etc needs to differ from one system to the next, but I'm pretty sure that can be fixed fairly easily as well.

I have to be honest, there is one part of /usr that I end up splitting away very often: /usr/lib/debug — the reason is simple: it keeps increasing with the size of the sources, rather than with the size of the compiled code, and with the new versions of the compilers, which add more debug information. I got to a point where the debug files occupied four/five times the size of the rest of the rootfs. But this is quite the exception.

But why would it have to be that much of a problem to keep a split /usr? Well, it's mostly a matter of what you're supposed to be able to use without /usr mounted. In many cases, udev was and is the only problem, as people really don't want much in the way of an early-boot environment besides being able to start LVM and mount /usr; but the big problem happens if you want to be able to have even a single login with /usr not mounted — because the PAM chain has quite a few dependencies that wouldn't be available until it's mounted. Moving PAM itself is not much of an option, and it gets worse, because start-stop-daemon can technically also use chains that partially need /usr to be available, and if that happens, no init script using s-s-d would be able to run. And that's bad.

So, do I like the collapsing of everything in /usr? Maybe not that much because it’s a lot of work to support multiple locations, and to migrate configurations. But at the same time I’m not going to bother, I’ll just keep the rootfs and /usr in the same partition for the time being, and if I have to split something out, it’ll be /var.

December 13, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GNU is actually a totalitarian regime (December 13, 2012, 22:43 UTC)

You probably remember that I'm not one to fall in line with the Free Software Foundation — a different story goes for the FSFe, which I support, and I look forward to the moment when I can come back as a supporter; for the moment I'm afraid that I have to contribute only as a developer.

Well, it seems like more people are joining the club. After Werner complained about the handling of GNU copyright assignments – not long after my coverage of Gentoo's assignments, which should probably make those suggesting a GNUish approach to said copyright assignments think hard – Nikos of GnuTLS decided to split off from the GNU project.

Why did Nikos decide this? Well, it seems like the problem is that both Werner and Nikos are tired of the secrecy in the GNU project and even more of the inability to discuss, even in a private setting, some topics because they are deemed taboo by the FSF, in the person of Richard Stallman.

So, Nikos decided to move the lists, source code and website to his own hosting, and then declared GnuTLS no longer part of the GNU project. Do you think that this would have put the FSF in a “what are we doing wrong?” mood? Hah, naïve are you! Indeed the response from the FSF (in the person of Richard Stallman, see a pattern?) was to tell Nikos (who wrote, contributed to GNU, and maintained the project) that he can't take his own project out of GNU, and that if he wants he can resign from the maintainer's post.

Well, now it seems like we might end up with a “libreTLS” package, as Nikos is open to renaming the project… it's going to be quite a bit of a problem I'd say, if anything because I want to track Nikos's development rather than GNU's, and thus I would hope for the “reverse fork” from the GNU project to just die off. Consider also that I had to sign the assignment paperwork, and in the time said paperwork was being handled, I lost the time and motivation for the contributions I had in mind. Lovely, isn't it?

Well, what this makes very clear to me is that I still don’t like the way the GNU project, and the FSF are managed, and that my respect for Stallman’s behaviour is, once again, zero.

Markos Chandras a.k.a. hwoarang (homepage, bugs)
Proxy Maintainers – How do we perform? (December 13, 2012, 20:14 UTC)

Following my recent recruitment performance post, here comes the second part of my Gentoo Miniconf 2012 presentation. The following two graphs aim to demonstrate the performance of proxy maintainers, i.e. how Gentoo users help us improve and push new ebuilds to the portage tree.

[Graphs: Orphaned Packages, 2012/10 and 2012/12]

One may notice the increased number of maintainer-needed@ packages, but this is because we “retired” a lot of inactive developers in the last 2 months. I expect this number not to increase further in the near future.

I would like to thank all of you who are actively participating in this team. Keep up the good work!

Steve Dibb a.k.a. beandog (homepage, bugs)
another semester done (December 13, 2012, 08:25 UTC)

I just finished my Fall semester for 2012 today at UVU. This was, by far, the hardest semester I've ever had since I've been in school. It was brutal. I had three classes which carried with them more work than I was expecting, and I spent a lot of time in the past four months doing nothing but homework. I was talking to my cousin tonight about it (while we were doing some late-night skateboarding in the winter, and it's actually really nice out here right now), and I mentioned that the stress was a huge burden on me. Stress is normal, but I've learned that if something heavy is really going on, I stop being cheery. I don't really get somber; it's more like I'm just focused and serious all the time. Which can be a real bummer.

But, the semester is finished, and it’s freed up a lot of time and has taken that huge burden off of me.  I got good grades, and along with that, and some great friends that really stepped up at the last minute and helped me out, it’s really gotten me humbled and grateful to God and everyone that stood by me.  I’m really glad this semester is done.

One thing I learned from this last jaunt around is that I'm never taking online classes again. I had two this semester, and one on campus. Looking back, I've always had a range of issues with online courses. Either I don't understand the material very well because I can't chat with the professor one on one, or I slack the whole time (I did 50% of the coursework in one day. I'm not kidding). The worst part though is I never really feel like I “get” the material. I jump through hoops, get a grade, and move on, but it doesn't seem like I learned anything.

So, I’m sticking to just two classes from here on out, and doing them all on-campus.  That’ll be manageable.

For now I’m really looking forward to not so much having more time, but having less stress.  I’ve been wanting to work on some cool side projects, and I also have been itching to go skating … a lot.  So tonight I went on a two-hour run with my cousin down Main Street in Bountiful, and it was really cool.  We call it a “mort run” since we start at the top of a hill and go all the way down to the mortuary.  It’s smooth all the way down and  you can just push around and then either skate back up hill or walk.  It’s a good workout.

The best part tonight though was debating whether or not we should go to the drive-through at Del Taco, knock on the window and ask for something.  We didn’t, but we circled the place like eight times and probably freaked out the employees while we debated it.  Eventually, we realized he didn’t have enough cash to buy something on the dollar menu (he was a penny short), so we spent half an hour wandering around downtown looking for lost change.  It was pretty fun. :)

Soooooooooooo ….. projects.  One thing I have time to look into now is znurt.org.  It’s broken.  I’ve known it’s been broken.  It would take me probably less than an hour to fix it.  I haven’t made the time, for a lot of reasons.  It’s actually been on my calendar reminding me over and over that I need to get it done.  I’m debating what to do about the site.  I could just fix the one error and move on, but it’s still kind of living in a state of neglect.  Ideally, I should hand the project over to someone else and let them maintain it.  I dunno yet.  Part of me doesn’t wanna let it go, but I guess a bigger part doesn’t care enough to actually fix it so … yah.  Gotta make a decision there.

Other than that, not much going on. I moved to a new apartment, back into a complex. I like it here. I have a dishwasher now, which I'm really grateful for (I haven't had one in the last three apartments). The funny thing about that is I seriously have so few dishes that, with all of mine loaded, the entire thing is only half full.

Anyhoo, I am really looking forward to moving on.  My big thing is I wanna get some serious skating time in while I’ve got the time.  That and enjoy the holidays with friends and family.  I’m looking forward to next semester too.  I’ve got a class on meteorology and another on U.S. history.  I’m almost done with generals.  The crazy part about all of this?  Since I went back to school two years ago, I’ve put in 30 credit hours.  Insane, for someone working full time.  I tell you what.


Sven Vermeulen a.k.a. swift (homepage, bugs)
Another hardened month has passed… (December 13, 2012, 08:02 UTC)

… so it’s time for a new update ;-)

Toolchain

GCC 4.8 is still in its stage 3 development phase, so Zorry will send out the patches to the GCC development community when this phase is done. For Gentoo hardened itself, we now support all architectures except for IA64 (which never had SSP).

Full uclibc support is now in place for amd64, i686 and mips32r2: not only is the technological support there, stages are now also automatically built, so installations work through the regular installation instructions. The next target to get automatically built stages is armv7a.

Kernel and grSecurity/PaX

Stabilization of 3.6.x is still showing some difficulties. Until those are resolved, we stay stable on 3.5.4. We have a couple of panics in some odd cases, and these will need to be resolved before we can stabilize further.

glibc-2.16 will drop the declarations for PT_PAX (in elf.h), and binutils will not cover the PT_PAX phdr anymore either. So, we will standardize fully on xattr-based PaX flags. This will get proper focus in the next period to ensure it is done correctly. Most of the work on this support is focused on communication towards users and on pax-utils eclass support.
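As a hedged illustration of what xattr-based markings look like in practice: the markings live in the user.pax.flags extended attribute, while the flag letter and path below are examples only.

# disable MPROTECT for a hypothetical JIT-using binary via xattr
setfattr -n user.pax.flags -v m /usr/bin/some-jit-binary
getfattr -n user.pax.flags /usr/bin/some-jit-binary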

There was some confusion about whether the tmpfs-xattr patch properly restricts access, but it looks like the PaX patch on mm/shmem.c was based upon the Gentoo patch and enhanced with the needed restrictions, so we can just keep the PaX code.

Regarding USE="pax_kernel", which should enable some updates to userland utilities when applications are run under a PaX-enabled kernel: prometheanfire tried to get this accepted as a global USE flag (as many applications might eventually want to trigger on it). However, due to some confusion about the meaning of the USE flag, and the potential need to depend on additional tools, we're going to stick with a local flag for now.

SELinux

schmitt953 will help in the testing and possible development of SELinux policies for Samba 4.

Furthermore, the userspace utilities have been stabilized (except for setools-3.3.7-r5 and later, due to some SWIG problems, which have been worked around in setools-3.3.7-r6). Also, the rev8 policies are in the tree and no big problems were reported with them. They are currently still ~arch, but will be stabilized in the next few days. A new rev9 release will be pushed to the hardened-dev overlay soon as well.

Profiles

nvidia is unmasked for the hardened profiles, but still has X and tools USE flags masked, and is only supported on kernels 3.0.x and higher.

Also, the hardened/linux/uclibc/arm/armv7a profile is now available as a development profile. Profiles will be updated as more ARM architectures get supported, so expect more in the next month.

System Integrity

We were waiting for kernel 3.7, which just got released, so we can now start integrating this further. Expect more updates by next meeting.

Docs

For SELinux, some information on USE="unconfined" has been added to the SELinux handbook. Blueness will also start documenting the xattr PaX stuff.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
How app-office/libreoffice-bin is made (December 13, 2012, 00:08 UTC)

While usually Gentoo users compile all their packages on their own computers, LibreOffice tends to be too big a bite for that. This is why, for amd64 and x86, we provide app-office/libreoffice-bin and app-office/libreoffice-bin-debug, two packages with a precompiled binary installation and its debug information. In the beginning we just used the binaries from the official LibreOffice distribution. It turns out, however, that these binaries bundle a large number of libraries that we have in Gentoo anyway (bug 361695), and for a lot of reasons bundled libraries are bad. So, we decided to roll our own binaries for stable Gentoo installations. Let me describe a bit how it is done.

Linux pinacolada 3.4.9-gentoo #2 SMP Thu Oct 11 00:05:55 CEST 2012 x86_64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux
On the machine doing the build, two chroots are dedicated to the package build process: one a plain amd64 chroot, the other an x86 chroot entered via linux32. Neither has any ~arch packages installed at all (only stable keywords are accepted), and both have a very minimal world file listing only a few packages useful for a maintainer, e.g. gentoolkit or eix. The procedure is identical for both. In addition, in both chroots the compiler flags are chosen for as wide compatibility as possible. This means
# for x86
CFLAGS="-march=i586 -mtune=generic -O2 -pipe -g"
# for amd64
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -g"
and obviously the same for CXXFLAGS. Both chroots also use the portage features splitdebug and compressdebug to make debug information available in a separate directory tree. Prior to build, the existing packages are updated, unnecessary packages are cleaned, and dynamic linking is checked:
emerge --sync
emerge -uDNav world
emerge --depclean --ask

revdep-rebuild
In case any problems occur, these are checked, solved, and the procedure is repeated until all the operations become a no-op.
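For reference, a sketch of what the relevant make.conf entries in the amd64 chroot might look like (assumed for illustration, not copied from the actual build host):

# /etc/portage/make.conf (amd64 chroot) - assumed sketch
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -g"
CXXFLAGS="${CFLAGS}"
FEATURES="splitdebug compressdebug"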
Next step is adapting the (rather simplistic) build script to the new LibreOffice version. This mainly means checking for new or discarded useflags and deciding which value these should have in the binary build. Since LibreOffice-3.6 we also have to decide which bundled extensions to build. The choice of useflags is influenced by several factors. For example, pdfimport is disabled because the resulting dependency on poppler might lead to broken binaries rather too often.
Then, well, then it's running the build. Generating all 12 flavours (base, kde, gnome, with and without java, for both amd64 and x86) takes roughly a weekend. Time to go out to the Christmas market and sip a Glühwein.
In the meantime, we can also adapt the libreoffice-bin ebuilds for the new version. The defined phase functions are mostly boring, since they only have to copy files into the system. Normally, they can be taken over from the previous version. The dependency declarations, however, have to be copied anew each time from the corresponding app-office/libreoffice ebuild, taking into account the chosen use-flag values. DEPEND is set empty since we're not actually building anything during installation.
Finally, COMMON_DEPEND is extended by an additional block named BIN_COMMON_DEPEND, specific for the binary package. Here, we specify any dependencies that need to be stricter now, where a library upgrade would for a normal package require revdep-rebuild - which is not possible for a binary package. Typical candidates where we have to fix the minimum or exact library version are glibc, icu, or libcmis.
Once the build has finished, 8.8G of files have to be uploaded to the Gentoo server, added to the mirror system, and then given some time to propagate. Then, we can commit the new ebuild, and open a stabilization request bug. Finished!
(Oh and in case you're wondering, new packages are coming tomorrow. :)

December 12, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What I'd like from my blog (December 12, 2012, 21:18 UTC)

My blog is, at this point, a vital part of my routine. I use my blog to write about my personal projects, I write about the non-restricted parts of my jobs, and I write about the work that goes into Gentoo Linux and other projects I follow.

I have accumulated over 2100 posts over time, especially thanks to the recent import of my original blog on Gentoo infrastructure. I don't really know if that's a lot, but sometimes Typo seems to stumble over it. Unfortunately I'm also running an older version of Typo, because I haven't switched that virtual server to Ruby 1.9 yet, as one of my customers is running a version of Radiant that is not going to work otherwise.

Said customer also bitched hard, and screamed at me not to keep the site on my server — but as it happens, the new webmasters who are supposed to pick up the website, and who should have been cheaper and faster than me, have been working since June and have still delivered nothing. Hopefully they'll be done soon and I can kick said customer off the server.

Anyway, at this point there are a few things that I'd like to get out of my blogging platform in the future, which might require me to fork Typo and create my own version — one likely to be stripped down, as there are many things added here that I really don't care about, like the short URLs. Those I might just export, as I think I used them at some point, but I would then handle them through mod_rewrite rather than on the Rails side.

So let’s see what I don’t like about the current Typo I’m using:

  • The database access is more than a bit messed up; it probably has to do with the fact that upstream only cares about MySQL, while I want to run it on PostgreSQL, and this causes more than a couple of problems — have you noticed that sometimes my posts end up password-protected? What happens is that the settings for single posts are serialized in YAML and de-serialized, but sometimes something bad happens and the YAML becomes invalid, causing the password protection to kick in. I know there is an ActiveRecord extension that allows key-value pairs to be stored in PostgreSQL-specific column types instead of having to (de)serialize them all the time, but again, this wouldn't be something upstream would use.
  • Alternatively I’ve been toying with the idea of using MongoDB as a backend. Even with the issues that I have pointed out before, I think it might work well for a blog, especially since then the comments would be tied tot he post itself, rather than have the current connected tables.
  • There is a problem with the tags handling, again something upstream doesn't seem to care about – at some point I remember reading that they were mostly interested in making every single word in a post a tag, to cross-connect posts with the same word; it's one of the reasons why I'm not sure if I want to update. If I change the title of one of the tags to make it more descriptive, and then edit a post that has that tag, it creates one more tag for each word in that title, instead of preserving the older tags. I really should clean up the tags I have right now.
  • I would also like that when I get to the “new post” page it would create it already and then get me back to editing it — this is important to me because sometimes if I have to restart Chromium, or suspend the laptop, something goes very wrong and it creates multiple drafts for the same post. And cleaning them up is a long task.
  • A better implementation of notification for new posts, and integration with Flattr, would also be very good. While IFTTT makes it easy to post new entries to Twitter and LinkedIn, its lack of Flattr integration is a major pain, and the fact that right now, to use auto-submit, I have to duplicate part of the content in the HTML of the pages is also a problem. Being able to create a “Flattr thing” the moment I actually post something would be a major plus for me.
  • Since I’m actually quite paranoid, another thing I would like to have would be either two-factor authentication with Google Authenticator on a cellphone, or (actually, in addition to it) certificate-based authentication for the admin interface. Having a safe way to make sure that I’m the only one logging in would let me remove some of the administrative-interface rules in ModSecurity, which would in turn let me write posts from public WiFi networks, sidestepping the problem I posted about the other day.
  • Scheduled posting. This used to be supported, but it’s been completely broken for years at this point; it was very useful to me a long time ago, since I would just write a bunch of posts and schedule them to be published once a day. I suppose this should now be changed so that scheduled posts only actually go live once a process runs and verifies that their publication time has elapsed. Again, this is something I’d like to have, and you readers would probably enjoy it too, as it would likely make for more and better content overall.

I definitely do not want to go with WordPress; I just wish I had the time to write my own Typo fork and make it more usable for what I do, rather than hoping that upstream development for Typo does not go in a direction I don’t like at all. Maybe somebody else has the same requirements and would like to join me in this project; if so, send me an email. Maybe it’ll finally be the time I decide to start on the fork itself.

Bloody upstream (December 12, 2012, 19:33 UTC)

Please note, this post is likely to be interpreted as a rant. From one point of view, it is. It’s mostly a general rant aimed at those upstreams that are generally impossible to talk into helping us distributions out.

The first one is the IEEE: you might remember that back in April I was troubled by their refusal to apply a permissive license to their OUI database, and by the fact that they denied allowing redistribution of said database. A few weeks ago I had to bite the bullet and add both the OUI and the IAB databases to the hwids package that we’re using in Gentoo, so that we can use them in different software packages, including bluez and udev.

I’m trying not to bump the package as often as before, simply because the two new files quadruple its size. But I am updating the repository more often, so that I can see whether something has changed that would make an earlier bump worthwhile. And what I noticed is that the two files are managed very badly by the IEEE.

At some point, while adding one entry to the OUI list, the charset of the file was screwed up, replacing the UTF-8 with mojibake; then somebody fixed it; then somebody decided that UTF-8 was too good for them and went back to pure ASCII, doing some near-equivalent replacements (although whoever changed ß to b probably ought to learn some German); then somebody fixed it up again… then somebody broke it once more while adding an entry, another person tried to go back to ASCII, and someone else fixed it up yet again.

How much noise is this in the history of the file? Lots. I really wish they would write a decent app to manage those databases, so they stop breaking them every other time they have to add something to the list.

The other upstream is Blender. You probably remember I was complaining about their multi-level bundling and the fact that license information is missing for at least one of the bundled libraries. Well, we now have another problem. I was working on the bump to 2.65, but now either I go back to bundling Bullet, or I have to patch Blender, because they added new APIs to their copy of the library.

So right now we have in tree a package that:

  • we need to patch to be able to build against a modern version of libav;
  • we need to patch to make sure it doesn’t crash;
  • we need to patch to make it use over half a dozen system libraries that it otherwise bundles;
  • we need to patch to avoid it becoming a security nightmare for users by auto-executing scripts in downloaded files;
  • bundles libraries with unclear licensing terms;
  • has two build systems, with different features available, neither of which is really suitable for a distribution.

Honestly, I have reached the point where I’m considering p.masking the package for removal and dealing with the consequences, rather than dealing with Blender. I know it has quite a few users, especially in Gentoo, but if upstream is unwilling to work with us to make it fit properly, I’d like users to speak to them so that they get their act together at this point. Debian is also suffering from issues related to the libav updates and the like, without even going into the license issues.

So if you have contacts with Blender developers, please ask them to start reducing the number of bundled libraries, to decide which of the two build systems should be used, and possibly to start clearing up the licensing terms of the package as a whole (including the libraries!). Unfortunately, I expect them not to listen; maybe when distributions, as a whole, decide to drop Blender for these same reasons, they will question the sanity of their development model.

December 11, 2012
Matthew Thode a.k.a. prometheanfire (homepage, bugs)

Disclaimer

  1. Keep in mind that ZFS on Linux is not fully supported, for differing values of support.
  2. I don't care much for hibernate; normal suspending works.
  3. This is for a laptop/desktop, so I chose multilib.
  4. If you patch the kernel to add ZFS support directly, you cannot share the resulting binary; the CDDL and GPLv2 are not compatible in that way.

Initialization

Make sure your installation media supports ZFS on Linux and can install whatever bootloader is required (UEFI needs media that supports it as well). You can use the Gentoo LiveDVD; look for 12.1 or newer. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.

Formatting

I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary.
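
A sketch of such a layout using parted (the disk, partition sizes and names are illustrative; adjust them to your setup):

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 513MiB   # /dev/sda1: /boot
parted -s /dev/sda mkpart primary 513MiB 100%   # /dev/sda2: LUKS container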

General Setup

#setup encrypted partition
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile= -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=on rpool/ROOT
#rootfs
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
#portage
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
#homedirs
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
wget ftp://gentoo.osuosl.org/pub/gentoo/releases/amd64/autobuilds/current-stage3-amd64-hardened/stage3-amd64-hardened-*.tar.bz2
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /etc/zfs/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-9999.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-9999/work/zfs-/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel, making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff; zfs pulls in spl automatically
echo "=sys-kernel/spl-0.6.0_rc12 ~amd64       #needed for zfs support" >> /etc/portage/package.accept_keywords
echo "=sys-fs/zfs-0.6.0_rc12-r1 ~amd64           #needed for zfs support" >> /etc/portage/package.accept_keywords
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal; your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.
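
For reference, the stock stage3 fstab entries to comment out look roughly like this (device names are placeholders):

#/dev/BOOT   /boot   ext2   noauto,noatime   1 2
#/dev/ROOT   /       ext3   noatime          0 1
#/dev/SWAP   none    swap   sw               0 0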

You should now have a working encrypted ZFS install.

December 10, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using pam_selinux to switch contexts (December 10, 2012, 20:11 UTC)

With SELinux managing the access controls of applications towards the resources on the system, an important component on any Unix/Linux system that should not be forgotten is authentication. Most systems use or support PAM, the Pluggable Authentication Modules, and for SELinux this plays an important role.

Applications that are PAM-enabled use PAM for the authentication of user activities. If this includes setting up an authenticated session, then the “session” part of the PAM configuration is also handled. And for SELinux, this is a nice-to-have, since this means applications that are not SELinux-aware can still enjoy transitions towards specified domains depending on the user that is authenticated.

The “not SELinux-aware” part here is important. By default, applications keep running in one security context for their lifetime. If they invoke an execve or similar call (which, combined with a fork, is used to start another application or command), then the SELinux policy might trigger an automatic transition, if the holy grail of the fourfold rules is satisfied:

  1. a transition from the current context to the new one is allowed
  2. the label of the executed command is marked as an entrypoint for the new context
  3. the current context is allowed to execute that application
  4. an automatic transition rule is made from the current context to the new one over the command label

Or, in SELinux policy terms, assuming the domains are source_t and destination_t with the label of the executed file being file_exec_t:

allow source_t destination_t:process transition;
allow destination_t file_exec_t:file entrypoint;
allow source_t file_exec_t:file execute;
type_transition source_t file_exec_t : process destination_t;

If those four settings are valid, then (and only then) can the automatic transition be active.
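
To verify that all four rules are actually present for a given transition, one can query the loaded policy; a sketch using sesearch from setools, with the example domains above:

# 1. process transition allowed?
sesearch --allow -s source_t -t destination_t -c process -p transition
# 2. entrypoint set on the file label?
sesearch --allow -s destination_t -t file_exec_t -c file -p entrypoint
# 3. execute permission on the file label?
sesearch --allow -s source_t -t file_exec_t -c file -p execute
# 4. automatic transition rule defined?
sesearch --type -s source_t -t file_exec_t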

Sadly, for applications that run user actions (like cron systems, remote logon services and more) this is not sufficient, since there are two major downsides to this “flexibility”:

  1. The rules to transition are static and do not depend on the identity of the user for which activities are launched. The policy can not deduce this identity from a file context either.
  2. The policy is statically defined: different transitions based on different user identities are not possible.

To overcome this problem, applications can be made SELinux-aware, linking with the libselinux library and invoking the necessary switches themselves (or running the commands with runcon). Luckily, this is where the PAM system comes into play to aid us in setting up this policy behavior.

When an application is PAM-enabled, it will invoke PAM calls to authenticate and possibly set up the user session. The actions that PAM invokes are defined by the PAM configuration files. For instance, for the at daemon:

## /etc/pam.d/atd
#
# The PAM configuration file for the at daemon
#

auth    required        pam_env.so
auth    include         system-services
account include         system-services
session include         system-services

I am not going to dive into the details of PAM in this blog post, so let’s just jump to the session management part. In the above example file, if PAM sets up (or shuts down) a user session for the service (at in our case), it will go through the PAM services that are listed in the system-services definition, which looks like so:

## /etc/pam.d/system-services
auth            sufficient      pam_permit.so
account         include         system-auth
session         optional        pam_loginuid.so
session         required        pam_limits.so 
session         required        pam_env.so 
session         required        pam_unix.so 
session         optional        pam_permit.so

Until now, nothing SELinux-specific is enabled. But if we change the session section of the at service to the following, then the SELinux pam module will be called as well:

session optional        pam_selinux.so close
session include         system-services
session optional        pam_selinux.so multiple open

Now that the SELinux module is called, pam_selinux will try to switch the context of the process based on the definitions in the /etc/selinux/strict/contexts location (substitute strict with the policy type you use). The outcome of this switching can be checked with the getseuser application:

~# getseuser root system_u:system_r:crond_t
seuser:  root, level (null)
Context 0       root:sysadm_r:cronjob_t
Context 1       root:staff_r:cronjob_t
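
These mappings come from files such as contexts/default_contexts (with per-user overrides under contexts/users/) inside the policy directory. A line covering the example above would look roughly like this, listing candidate contexts in order of preference (a sketch, not the literal file contents):

system_r:crond_t        sysadm_r:cronjob_t staff_r:cronjob_t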

By providing the contexts in configurable files under /etc/selinux/strict/contexts, a non-SELinux-aware application suddenly becomes SELinux-aware (through the PAM support it already has) without needing to patch or even rebuild the application. All that is needed is to allow the security context of the application to switch identities and roles (as that is not allowed by default), which I believe is offered through the following statements:

domain_subj_id_change_exemption(atd_t)
domain_role_change_exemption(atd_t)

selinux_validate_context(atd_t)
selinux_compute_access_vector(atd_t)
selinux_compute_create_context(atd_t)
selinux_compute_relabel_context(atd_t)
selinux_compute_user_contexts(atd_t)

seutil_read_config(atd_t)
seutil_read_default_contexts(atd_t)

Jeremy Olexa a.k.a. darkside (homepage, bugs)
November 2012 wrap up (December 10, 2012, 13:39 UTC)

To wrap up my November, I finished up my stay in Prague. The trips below were two-day trips, embracing home-base travel, meaning I would go somewhere and then come back.

Before I left the Czech Republic, I also went to Cesky Krumlov, an amazing medieval UNESCO town with a castle, a brewery and winding streets; I am very glad I went there. I’m thinking about how to get back there during the summer. Cesky Krumlov is the second most visited city in the Czech Republic. I took the train there and the bus back. The train was quite nice, but there were a few connections; at one point I was following the herd as we went from train to bus to train, and I was confused, but it worked out in the end. I got to Krumlov, walked to the hostel Krumlov House (recommended), ate at the delicious Two Marys restaurant, hung out with the staff, and went to a local bar. Then I walked around the castle, took a brewery tour, relaxed for a few days, and took it all in. I took the bus back to Prague because it was quicker and cheaper.

Czech Republic (Prague, Olomouc, Cesky Krumlov) Oct/Nov 2012-243
(The view of the city from the castle)
Cesky Krumlov Pics

Next was Dresden, Germany for a few days. I carpooled there with three Germans who were going home for the weekend, and then couchsurfed. The generosity of people in this world is amazing. I was only there for a few nights; the first night I walked around, then ate out with my host. The next day, I went to the botanical gardens (many pictures for my Grandpa) and the VW factory (no pictures allowed) – I’d recommend the glass factory tour to the engineering types, it is quite nice – then I walked around the city some more. I went into a church, climbed to the top viewing point, and went out to eat again, chatting about worldly topics with my host. She had never had a guest from the USA before. The unique thing about Dresden is that even though it looks old, it is not, since it was rebuilt after the war. I carpooled back as well; the Germans love to be efficient.

Dresden Pics

Then we can fast-forward to December 1, when I got on the bus for Vienna. I lost my camera on November 30th, so there are only mental pictures of Vienna. I stayed there for 3 nights. It is an expensive city relative to the Czech Republic and farther east, but I liked it. I stayed at an independent hostel, Hostel Ruthersteiner (recommended as well), where I met my friend Marijn, and we walked around the city with his family and a colleague. I wanted to go to the Viennese Opera, but there was only standing room and I didn’t feel like standing still for 2.5 hours, so of course I went to the Viennese Christmas Markets instead and enjoyed many a glühwein (hot wine). I also toured the UN headquarters in Vienna and had lunch with my friend there. I could imagine going back there later in life to soak in the cultural activities that are more suited for older people or families.

Now, I am in Budapest. More on that later…

December 09, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Debunking EFI myths (December 09, 2012, 18:31 UTC)

I’m somehow hoping, although I won’t count on it too much, that this is going to be the last post in the trilogy of myths in need of debunking; the previous two posts involved ccache and x32, and both of them have been quite controversial. Unfortunately, seeing how much some news sites decided to mangle my words on the topic, I expect this to happen again.

So here is a list of myths that I have heard, especially in relation to my recent posts on the topic.

I didn’t write about Secure Boot. I actually linked to a video from Jo Shields of Debian fame. If you wish, you can pair that with Greg’s video on the same subject. Looks like people need to see computers booting to actually know it is possible. Stupid times we’re in.

I didn’t say that it’s impossible to boot with UEFI! Some (pretend) news site decided to claim that my post – which explained how to solve the chicken-and-egg problem of needing EFI variables support to set up GRUB 2 to boot off EFI – was actually saying that it’s impossible to boot Linux on UEFI systems.

The only thing in my post that referred to an inability to boot was regarding the default install that Sabayon had: the DVD boots in BIOS legacy mode, but there is no way to boot off the hard drive that way. This is not unexpected, as Sabayon didn’t support EFI at that point at all. Fabio has now fixed that, and he has also implemented some (maybe naïve, at this point) Secure Boot support.

UEFI is not there just to let you use a mouse. Some people expect that the only thing UEFI is good for is supporting mice and graphical setup interfaces. This is a faulty assumption, because I have had BIOS-based firmware that used graphical interfaces and mice before, and my ZenBook has a “classic” textual interface. Sure, the new UEFI system allows for a cleaner way to write such interfaces, as it provides drivers for the video card as well as for the input devices, but that’s far from all there is to it.

Basically, UEFI does not make that much of a difference for the final user, even though the new Secure Boot can actually be a much more interesting technology for the end user, as it allows you (to some degree, depending on the vendor) to trust only your own key and disallow other operating systems from being booted. If you’re a device manufacturer this has its own importance, even though it is what people refer to as “TiVoization”; it makes it extremely easy to set up a signed EFI stub as the only one that can be booted. If you have a server or a desktop that you don’t want other people, even with physical access, to get into, you probably want your key to be the only trusted one. Sure, there is some “Trusted GRUB” project that many see as a response to the Secure Boot feature; from Matthew’s comments, I wouldn’t want to go near it (storing the whole kernel in the TPM? Are you kidding me?).

SystemRescueCD does not support UEFI. This was my mistake. Indeed, the version of SysRescueCD that I had available was 2.x, which did not support UEFI. The new SysRescueCD 3 works perfectly fine. And since it boots a UEFI-capable kernel through UEFI boot, you no longer need to do anything in particular besides running grub2-install, which will take care of efibootmgr.
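
For the record, on a machine booted in EFI mode that boils down to something like the following sketch (the EFI system partition mount point is an assumption, not a requirement):

# with the EFI system partition mounted at /boot/efi
grub2-install --target=x86_64-efi --efi-directory=/boot/efi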

By the way, I wouldn’t mind having a KDE-based interface that would let me choose what to boot on the next reboot, akin to what OS X has had for a long time… of course, that would mean I would also have to find one for Windows; my Dell laptop dual-boots with an external HDD, as I described before.

How to find issues related to LINGUAS (December 09, 2012, 18:11 UTC)

Usually, I want to find all possible issues with the LINGUAS variable, so in my arch testing environment I have enabled all linguas that the main tree uses.
To keep my make.conf more ‘clean’, I use source together with another file called linguas.conf.

So, this is my /etc/portage/linguas.conf:
LINGUAS="am fil zh af ca cs da de el es et gl hu nb nl pl pt ro ru sk sl sv uk bg cy en eo fo ga he id ku lt lv mk ms nn sw tn zu ja zh_TW en_GB pt_BR ko zh_CN ar en_CA fi kk oc sr tr fa wa nds as be bn bn_BD bn_IN en_US es_AR es_CL es_ES es_MX eu fy fy_NL ga_IE gu gu_IN hi hi_IN is ka kn ml mr nn_NO or pa pa_IN pt_PT rm si sq sv_SE ta ta_LK te th vi ast dz km my om sh ug uz ca@valencia sr@ijekavian sr@ijekavianlatin sr@latin csb hne mai se es_LA fr_CA zh_HK br la no es_CR et_EE sr_CS bo hsb hy mn sr@Latn lb ne bs tg uz@cyrillic xh be_BY brx ca_XV dgo en_ZA gd kok ks ky lo mni nr ns pap ps rw sa_IN sat sd ss st sw_TZ ti ts ve mt ia az me tl ak hy_AM lg nso son ur_PK it fr nb nb_NO hr nan ur tk cs_CZ da_DK de_1901 de_CH en_AU lt_LT pl_PL sa sk_SK th_TH ta_IN tt sco ha mi ven ar_SY el_GR ro_RO ru_RU sl_SI uk_UA vi_VN ar_SY te_IN de_DE es_VE fa_IR fr_FR hu_HU id_ID it_IT ja_JP ka_GE nl_NL sr_BA sr_RS ca_ES fi_FI he_IL jv ru_gold yi eu_ES"

Now you need to set in your make.conf:
source /etc/portage/linguas.conf

I will update this post if new linguas/languages show up in the future.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
g-octave news: the octave overlay (December 09, 2012, 16:13 UTC)

After having lots of problems with people who can't use g-octave properly, sometimes because they don't seem able to read documentation and elog messages, and/or to just ask, and after a suggestion from Sebastien Fabbro (bicatali), I wrote some simple scripts that update the g-octave package database and an overlay, using g-octave and a cronjob.

I built a virtual machine on my own server and set up a weekly cronjob that will hopefully keep the packages up-to-date.

The overlay is available on Github:

https://github.com/rafaelmartins/octave-overlay

To install it, follow the instructions available in the README file. The overlay is also available through layman, named octave.
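
With layman installed, that should presumably come down to:

layman -a octave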

Packages with unresolvable dependencies, e.g. packages with dependencies unavailable on gentoo-x86, aren't available in the overlay. If you find some package that is supposed to work and isn't available on the overlay please open an issue on Github, and I'll take a look ASAP.

As a bonus, g-octave code itself was moved to Github:

https://github.com/rafaelmartins/g-octave

Feel free to submit pull requests if you think that something is broken and you know how to fix it.

And as another bonus, the g-octave website (http://g-octave.org/) is now running on the Read the Docs service, that is way more reliable than my own server. This should avoid the recent documentation downtimes.

December 08, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using stunnel for mutual authentication (December 08, 2012, 12:24 UTC)

Sometimes services do not support SSL/TLS, or if they do, they do not support using mutual authentication (i.e. requesting that the client also provides a certificate which is trusted by the service). If that is a requirement in your architecture, you can use stunnel to provide this additional SSL/TLS layer.

As an example, I have a mail server running on localhost, and I want to provide SSMTP services with mutual authentication on top of this service, using stunnel. First of all, I provide two certificates and private keys that are both signed by the same CA, and keep the CA certificate close as well:

  • client.key is the private key for the client
  • client.pem is the certificate for the client (which contains the public key and CA signature)
  • server.key and server.pem are the same but for the server
  • root-genfic.crt is the certificate of the signing CA

First of all, we set up stunnel, listening on port 1465 (as 465 would require the stunnel service to run as root, which I’d rather avoid) and forwarding towards 127.0.0.1:25:

cert = /etc/ssl/services/stunnel/server.pem
key = /etc/ssl/services/stunnel/server.key
setuid = stunnel
setgid = stunnel
pid = /var/run/stunnel/stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
verify = 2 # This enables the mutual authentication
CAfile = /etc/ssl/certs/root-genfic.crt

[smtp]
accept = 1465
connect = 127.0.0.1:25

To test out mutual authentication this way, I used the following command-line snippet. The delays between the lines are there because the mail client is supposed to wait for the mail server to give its reply, and if it doesn’t, the data gets lost. I’m sure this can be made easier (with netcat I could just use "-i 1" to print a line with a one-second delay), but it works ;-)

~$  (sleep 1; echo "EHLO localdomain"; sleep 1; echo "MAIL FROM:remote@test.localdomain"; \
sleep 1; echo "RCPT TO:user@localhost"; sleep 1; echo "DATA"; sleep 1; cat TEMPFILE) | \
openssl s_client -connect 192.168.100.102:1465 -crlf -ign_eof -ssl3 -key client.key -cert client.pem

The TEMPFILE file contains the email content (you know, Subject, From, To, other headers, data, …).
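
For completeness, a minimal TEMPFILE might look like the following sketch (the lone dot terminates the SMTP DATA phase):

From: remote@test.localdomain
To: user@localhost
Subject: stunnel mutual authentication test

Just a test message.
.
QUIT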

If the provided certificate isn’t trusted, then you’ll find the following in the log file (on Gentoo, that’s /var/log/daemon.log by default, but you can set up logging in stunnel as well):

Dec  8 13:17:32 testsys stunnel: LOG7[20237:2766895953664]: Starting certificate verification: depth=0, /C=US/ST=California/L=Santa Barbara/O=SSL Server/OU=For Testing Purposes Only/CN=localhost/emailAddress=root@localhost
Dec  8 13:17:32 testsys stunnel: LOG4[20237:2766895953664]: CERT: Verification error: unable to get local issuer certificate
Dec  8 13:17:32 testsys stunnel: LOG4[20237:2766895953664]: Certificate check failed: depth=0, /C=US/ST=California/L=Santa Barbara/O=SSL Server/OU=For Testing Purposes Only/CN=localhost/emailAddress=root@localhost
Dec  8 13:17:32 testsys stunnel: LOG7[20237:2766895953664]: SSL alert (write): fatal: bad certificate
Dec  8 13:17:32 testsys stunnel: LOG3[20237:2766895953664]: SSL_accept: 140890B2: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned

When a trusted certificate is shown, the connection goes through.

Finally, if you not only want to validate that the certificate is trusted, but also want to accept only a given set of certificates, you can set the stunnel variable verify to 3. If you set it to 4, stunnel will not check the CA at all and will only allow a connection through if the presented certificate is one of stunnel’s locally installed trusted certificates.
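
A minimal sketch of that stricter setup, assuming the trusted client certificates are concatenated into a single file (the path is illustrative):

verify = 3
CAfile = /etc/ssl/services/stunnel/trusted-clients.pem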

Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita becomes a part of the KDE project (December 08, 2012, 07:58 UTC)

I'm happy to announce that Trojitá, a fast IMAP e-mail client, has become part of the KDE project. You can find it under extragear/pim/trojita.

Why moving under the KDE umbrella?

After reading KDE's manifesto, it became obvious that the KDE project's values align quite well with what we want to achieve in Trojitá. Becoming part of a bigger community is a logical next step -- it will surely make Trojitá more visible, and the KDE community will get a competing e-mail client for those who might not be happy with the more established offerings. Competition is good, people say.

But I don't want to install KDE!

You don't have to. Trojitá will remain usable without KDE; you won't need it for running Trojitá, nor for compiling the application. We don't use any KDE-specific classes, so we do not link to kdelibs at all. In the future, I hope we will be able to offer an optional feature to integrate more closely with KDE, but there are no plans to make Trojitá require the KDE libraries.

How is it going?

Extremely well! Five new people have already contributed code to Trojitá, and the localization team behind KDE did a terrific job providing translations into eleven languages (and I had endless hours of fun hacking together an lconvert-based setup to make sure that Trojitá's Qt-based translations work well with KDE's gettext-based workflow -- oh boy, was that fun!). Trojitá also takes part in the Google Code-in project; Mohammed Nafees has already added a feature for multiple sender identities. I also had a great chat with the KDE PIM maintainers about sharing our code in the future.

What's next?

A lot of work is still in front of us -- from boring housekeeping like moving to KDE's Bugzilla for issue tracking, to adding exciting (and complicated!) new features like support for multiple accounts. But the important part is that Trojitá is alive and progressing swiftly -- features are being added, bugs are getting fixed, and other people besides me are actually using the application on a daily basis. According to Ohloh's statistics, we have a well established, mature codebase maintained by a large development team with increasing year-over-year commits.

Interested?

If you are interested in helping out, check out the instructions and just start hacking!

Cheers,
Jan

December 07, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The myth of the perfectionist QA (December 07, 2012, 20:48 UTC)

There is a bad trend going on that purports Gentoo’s QA (which for the past couple of years has meant mostly me alone) to be a perfectionist hindering getting stuff done, and I call all of this utter bull feces. It’s probably telling that the same people seem to expect every single issue to have an explicit written rule, with no leeway for different situations.

So let me give you some insight, so that you can get a realistic idea of what’s going on. Rich is complaining that I made it a task of mine to make sure that software in Portage doesn’t use bundled libraries; for some reason, he seems to assume that I have no clue how cumbersome it is to deal with said bundled libraries, and he couldn’t be more wrong. You know what my first bundled-libraries project was? xine. And it took me quite a long time to get rid of (almost all of) the bundled libraries, even more so because many of those libraries were more or less heavily modified internally. Was it worth it? Totally. It actually made the xine package much more resilient to security issues (and there have been quite a few), as well as solving a number of other issues, including the infamous symbol collisions between the system’s libfaad (brought in by ffmpeg) and the internal copy of it.

So, I know it’s a lot of work, and I know it’s not always a task for the faint of heart, and most of the time there is no immediate improvement from fixing these things. Why am I insisting on it as a point of policy? Because we have quite a few examples of bundled software in which vulnerabilities were found that could be leveraged by an attacker, especially where zlib is concerned and in software that either downloads or receives compressed content over the network.

So from what Rich wrote, we’re trying to hinder getting stuff in tree by refusing it if it bundles libraries. That’s so much of a lie that it’s not even funny. We have had packages entering the tree with bundled libraries, and I’m pretty sure there still is a ton of it. Heck, do you remember what I wrote about Blender just last month? Blender has a number of bundled libraries, and yet it’s in tree, I maintain it, and it’s going stable from time to time.

What is important in that picture? That I’m trying to get rid of said libraries. The same applies to Chromium and most other packages that have a ton of bundled libraries; most maintainers are responsible enough, and generally know enough about the package, that they can work on getting rid of said libraries, if that’s feasible at all. In the case of Chromium it’s an extremely difficult task, I’m sure, mostly because upstream does not care in the least, and that was clear at the last VDD when we were discussing the patches applied over ffmpeg/libav.

So let’s get into the specific details of the complaints, as Rich’s “just an example” is not an example of what happens at all. There is a server software package written by Logitech for their SqueezeBox devices, now called logitechmediaserver-bin in Gentoo. Said package has been known to bundle for years: Logitech ships a large set of CPAN modules with it, as it seems the bulk of the code is written in Perl or something like that. I’ve known it, at quite a few versions, to bundle a CPAN module that, in turn, bundled an old copy of zlib, one that is vulnerable to a few different issues. Right now, it’s not only bundling a bunch of modules, but I found out that it installs a number of files for architectures that are completely incompatible with your system (i.e. a PowerPC binary on amd64). This bothered me quite a lot more. Why? Because it means that the (proxy) maintainer is not discriminating at all, and is just installing whatever the upstream package comes with. Guess what? That’s not a good way to package software.

When the proxy maintainer is not doing work that gets even near the quality level of most of the rest of the tree, and the Gentoo developer who should be proxying him is ignoring the problems, things are messy enough. But it gets worse when you add a person known for bad patches sold as fixes from over three years ago, who still expects to be able to call the shots.

If you look at bug #251494, he’s actually suggesting marking the package stable because it’s not going to run with the current unstable Perl, which is a completely backward reason (if it can’t work with the latest Perl, it means it hasn’t been tested in testing for a long time) and is going to create more trouble later on (the moment the new Perl goes stable, we have a known-broken, unfixable package in stable). But he’s Daniel Robbins, and some people expect that to be enough for him to be right; if that were the case, I suppose he’d still be the Gentoo Linux lead, and not just the lead of a wannabe project.

Anyway, here’s the deal: QA policies are more like guidelines, in general. On the other hand, there is no reason why we should be forced to mark things stable if they do not follow the common QA policies. Especially for proprietary software, and for software requiring particular hardware, marking things stable is not that great an idea, as such packages tend to be orphaned quite easily if a single developer retires. We already have quite enough packages stable on x86, ppc and sparc that are not really able to run, because they were always broken but were stabilized in ancient times. Sometimes we even have packages keyworded that cannot be used on a given arch; they do build, and the failure would only happen at runtime, so they were, again in ancient times, keyworded.

Maybe this is what the people who want to follow Daniel expect: coming back to the “it builds, ship it!” mentality that made Gentoo a joke in the eyes of almost everybody at the time. For sure, it’s not what I want, and I don’t think it’s what users as a whole want or need.

Kernel: vanilla-sources maintenance (December 07, 2012, 12:01 UTC)

Lately I have been helping the kernel team with bumping vanilla-sources.

It does not take much time because I’m doing it with a script. So, personally, I will continue to bump the following series:

  • 2.6.32
  • 3.0
  • 3.2
  • 3.4
  • 3.6

I will remove the EOL series as soon as possible.

If you have requests, please let me know.

December 06, 2012
Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)
Recording VoIP calls using pulseaudio and avconv (December 06, 2012, 15:58 UTC)

For ages, I've wanted an option in Skype or Empathy to record my video and voice calls1. Text is logged constantly because it doesn't cost much in the form of resources, but voice and video are harder.

In lieu of integrated support inside Empathy, and also because I mostly use Skype (for various reasons), the workaround I have is to do an X11 screen grab and encode it to a file. This is not hard at all. A cursory glance at the man page of avconv will tell you how to do it:

avconv -s:v [screen-size] -f x11grab -i "$DISPLAY" output_file.mkv

[screen-size] is in the form of 1366x768 (Width x Height), etc., and you can extend this to record audio by passing the -f pulse -i default flags to avconv2. But that's not quite right, is it? Those flags will only record your own voice! You want to record both your own voice and the voices of the people you're talking to. As far as I know, avconv cannot record from multiple audio sources, and hence we must use Pulseaudio to combine all the voices into a single audio source!

As a side note, I really love Pulseaudio for the very flexible way in which you can manipulate audio streams. I'm baffled by the prevailing sense of dislike that people have towards it! The level of script-level control you get with Pulseaudio is unparalleled compared to any other general-purpose audio server3. One would expect geeks to like such a tool, especially since all the old bugs with it are now fixed.

So, the aim is to take my voice coming in through the microphone, and the voices of everyone else coming out of my speakers, and mix them into one audio stream which can be passed to avconv, and encoded into the video file. In technical terms, the voice coming in from the microphone is exposed as an audio source, and the audio for the speakers is going to an audio sink. Pulseaudio allows applications to listen to the audio going into a sink through a monitor source. So in effect, every sink also has a source attached to it. This will be very useful in just a minute.

The work now boils down to combining two sources together into one single source for avconv. Now, apparently, there's a Pulseaudio module to combine sinks but there isn't any in-built module to combine sources. So we route both the sources to a module-null-sink, and then monitor it! That's it.


pactl load-module module-null-sink sink_name=combined
pactl load-module module-loopback sink=combined source=[voip-source-id]
pactl load-module module-loopback sink=combined source=[mic-source-id]
avconv -s:v [screen-size] -f x11grab -i "$DISPLAY" -f pulse -i combined.monitor output_file.mkv
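
To fill in the two source IDs, and to undo the routing once you're done, something along these lines should work (a sketch; pactl prints the index of each module it loads):

# list all sources: the microphone is typically an alsa_input.*,
# and the VoIP audio is the .monitor source of your output sink
pactl list short sources

# capture the module indices so the setup can be undone afterwards
NULLSINK=$(pactl load-module module-null-sink sink_name=combined)
LOOP1=$(pactl load-module module-loopback sink=combined source=[voip-source-id])
LOOP2=$(pactl load-module module-loopback sink=combined source=[mic-source-id])

# ... record with avconv as above, then clean up:
pactl unload-module "$LOOP2"
pactl unload-module "$LOOP1"
pactl unload-module "$NULLSINK"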

Here's a script that does this and more (it also does auto setup and cleanup). Run it, and it should Just Work™.

Cheers!

1. It goes without saying that doing so is a breach of the general expectation of privacy, and must be done with the consent of all parties involved. In some countries, not getting consent may even be illegal.
2. If you don't use Pulseaudio, see the man page of avconv for other options, and stop reading now. The cool stuff requires Pulseaudio. :)
3. I don't count JACK as a general-purpose audio system. It's specialized for a unique pro-audio use case.

Richard Freeman a.k.a. rich0 (homepage, bugs)
The Dark Side of Quality (December 06, 2012, 15:48 UTC)

Voltaire once said that the best is the enemy of the good. I think that there are few places where one can see as many abuses of quality as you’ll find in many FOSS projects, including Gentoo.

Often FOSS errs on the side of insufficient quality. Developers who are scratching itches don’t always have incentive to polish their work, and as a result many FOSS projects result in a sub-optimal user experience. In these cases “good enough” is standing in the way of “the best.”

However, I’d like to briefly comment on an opposite situation, where “the best” stands in the way of “good enough.” As an illustrative example, consider the excellent practice of removing bundled libraries from upstream projects. I won’t go on about why this is a good thing – others have already done so more extensively. And make no mistake – I agree that this is a good thing, the following notwithstanding.

The problem comes when things like bundled libraries become a reason to not package software at all. Two examples I’m aware of where this has happened recently are media-sound/logitechmediaserver-bin and media-gfx/darktable. In the former there is a push to remove the package due to the inclusion of bundled libraries. In the latter the current version is lagging somewhat because, while upstream actually created an ebuild, it bundles libraries. Another example is www-client/chromium, which still bundles libraries despite a very impressive campaign by the chromium team to remove them.

The usual argument for banning packages containing bundled libraries is that they can contain security problems. However, I think this is misleading at best. If upstream bundles zlib in their package, we cry about potential security bugs (and rightly so); however, if upstream simply writes their own compression functions and includes them in the code, we don’t bat an eyelash, even though this is more likely to cause security problems. The only reason we can complain about zlib is BECAUSE it is extensively audited, making it easy to spot the security problems. We’re not reacting to the severity of problems, but only to the detectability of them.

Security is a very important aspect of quality, but any reasonable treatment of security has to consider the threat model. While software that bundles a library is rightfully considered “lower” in quality than software that does not, what matters more is whether this is a quality difference that is meaningful to end users, and what their alternatives are. If the alternative for the user is to install the same software, with the same issues, from an even lower-quality source with no commitment to security updates, then removing a package from Gentoo actually increases the risks to our users. This is not unlike the situation that exists with SSL, where an unencrypted connection is presented to the user as more secure than an SSL connection with a self-signed certificate, when this is not true at all. If somebody uses darktable to process photos that they take, then they’re probably not concerned with a potential buffer overflow in a bundled version of dcraw. If another user operated a service that accepted files from strangers on the internet, then they might be more concerned.

What is the solution? A policy that gives users reasonably secure software from a reputable source, with clear disclosure. We should encourage devs to unbundle libraries, consider bugs pointing out bundled libraries valid, accept patches to unbundle libraries when they are available, and add an elog notice to packages containing bundled libraries in the interest of disclosure. Packages with known security vulnerabilities would be subject to the existing security policy. However, developers would still be free to place packages in the tree that contain bundled libraries, unmasked, and they could be stabilized. Good enough for upstream should be good enough for Gentoo (again, barring specific known vulnerabilities), but that won’t stop us from improving further.



gstreamer 1.0 (December 06, 2012, 00:03 UTC)

It has been a while since I last wrote here, but I am not dead, and I still somehow manage to contribute to Gentoo.

In the past weeks, I have been working on making Gnome 3.6 ready for inclusion in portage. It rapidly became apparent that Gnome 3.6 would have to use both gstreamer 0.10 and gstreamer 1.0; however, the gstreamer team is badly understaffed, and only Alexandre (tetromino), who is not even a gstreamer team member, had tried to start bumping ebuilds to gstreamer 1.0.

But then Alexandre got busy and this development stalled a bit. After I finished bumping the overlay to Gnome 3.6.1, I took on the challenge of rewriting the gstreamer eclasses to make them easier to use and understand. They were, in my opinion, quite scary, with version checks everywhere, and I think that is one of the reasons so few people want to work in the gstreamer team :)

If you do not follow gentoo-dev: most of the code moved to gst-plugins10.eclass, which received some magic touches that basically make 99% of the version-dependent code go away. As an added bonus, the eclasses are now documented and support EAPI 1 to 5. EAPI 0 support was dropped because of its missing slot operators, which are really annoying to do without with gstreamer right now (see the sketch below).
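
For those unfamiliar with slot operators, a hedged illustration of the kind of dependency they enable (the atom names are just examples, not taken from the eclass):

# EAPI=5 slot-operator dependency: ':1.0=' pins slot 1.0 and records the
# subslot at build time, so the consumer is rebuilt when the library's
# subslot changes
RDEPEND="media-libs/gstreamer:1.0=
	media-libs/gst-plugins-base:1.0="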

So if you hit some gstreamer compilation problems in the last few days, please forgive me; the upgrade road was a bit bumpy, but overall it was not so bad. And now I am happy to say that gstreamer 1.0 is in portage, which clears the road for Gnome 3.6 inclusion.

On a final note, I also continued Alexandre’s work of bumping the last 0.10 releases, so we are up-to-date on that front as well.

Happy compiling!

December 05, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
nginx as reverse SMTP proxy (December 05, 2012, 22:03 UTC)

I’ve noticed that there aren’t many resources online telling you how to use nginx as a reverse SMTP proxy. Using a reverse SMTP proxy makes sense even if you have just one mail server back-end, either because you can easily switch towards another one, or because you want to put additional checks in place before handing the mail off to the back-end.

In the below example, a back-end mail server is running on localhost (in my case it’s a Postfix back-end, but that doesn’t matter). Mails received by Nginx will be forwarded to this server.

user nginx nginx;
worker_processes 1;

error_log /var/log/nginx/error_log debug;

events {
        worker_connections 1024;
        use epoll;
}
http {

        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';


        server {
                listen 127.0.0.1:8008;
                server_name localhost;
                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log info;

                root /var/www/localhost/htdocs;

                location ~ \.php$ {
                        add_header Auth-Server 127.0.0.1;
                        add_header Auth-Port 25;
                        return 200;
                }
        }
}

mail {
        server_name localhost;

        auth_http localhost:8008/auth-smtppass.php;

        server {
                listen 192.168.100.102:25;
                protocol smtp;
                timeout 5s;
                proxy on;
                xclient off;
                smtp_auth none;
        }
}

If you first look at the mail setting, you notice that I include an auth_http directive. This is needed by Nginx as it will consult this back-end service on what to do with the mail (the moment that it receives the recipient information). The URL I use is arbitrarily chosen here, as I don’t really run a PHP service in the background (yet).

In the http section, I create the resource that the mail section’s auth_http directive points to. There I declare the two return headers that Nginx needs (Auth-Server and Auth-Port) with the back-end information (127.0.0.1:25). If I ever need to do load balancing or other tricks, I’ll write up a simple PHP script and serve it from PHP-FPM or so.
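
For reference, nginx’s mail auth protocol is header-based; a successful reply from the auth service is expected to look roughly like the following (if the hand-off fails, an Auth-Status header may need to be added to the add_header list above as well):

HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: 127.0.0.1
Auth-Port: 25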

Next on the list is to enable SSL (not difficult) with client authentication, which sadly isn’t supported by Nginx’s mail module (yet), so I’ll need to look at a different approach for that.

BTW, this is all on a simple Gentoo Hardened with SELinux enabled. The following booleans were set to true: nginx_enable_http_server, nginx_enable_smtp_server and nginx_can_network_connect_http.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A perfect use case for IPv6 (December 05, 2012, 19:37 UTC)

You probably remember, or encountered at least once, my modsecurity ruleset, which I use to filter spam comments on this blog, among other things. One of the things the ruleset does is filter based on a number of DNSBLs, which cover, among others, open proxies and infected nodes. This is great, because most of the comment spam you’ll ever receive passes through open proxies, or through computers that have been infected by malware.

Unfortunately, this has a side effect: public networks such as airports’, Starbucks shops’, and the gogo in-flight wifi that I’m using now use a very wide NAT, and the sheer number of devices connected means that there is no way the IP address wouldn’t be counted as an infected node. This would normally mean that I wouldn’t be able to blog from within the plane, so how am I doing that right now? I simply opened a VPN connection to the office in LA and route all accesses to my server through that. It works, but it really feels wrong.

Well, it turns out that there is a very easy way to deal with it: you just need to assign a unique IP address to each of the connected devices. Easy, isn’t it? And since you don’t want them reused, you probably want a single per-device address that is unique among all the possible devices… wait, isn’t this what IPv6 is designed to be? Yes, it is.

Indeed, I would say that, even more so than for a private entity, be it a person or a company, public wireless networks are a perfect reason to get more IPv6 service out there, and I'm very surprised that none of these companies seems to have smartened up and started providing IPv6, especially in light of the recent switch-on for services like Facebook, Google, and so on.

And it’s funny that the companies that make available the in-flight wireless, and provide IPv6, have such a similar name, while being totally unrelated… gogo and gogo6.

On a different note, I have to say that the staff for Delta Airlines at LAX today were the most friendly, prepared and fast I have ever experienced. Even in the face of an hour's delay on the plane, they communicated clearly and defused a situation that could have become very tense. Congrats!

December 04, 2012
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: debris (December 04, 2012, 07:29 UTC)

a new song: debris by ioflow

reworking music from three netlabel releases, for the 48th disquiet junto, fraternité, dérivé.

a last-minute contribution to this junto. i was in a car wreck a couple days ago, so my planned participation time was abruptly reduced to just a day and a half. i could only spend a little while per session sitting at the DAW. the track's title is a reference to that event.

everything was sequenced with renoise, as seen in the screenshot.

the three source tracks were very hard to work with; this was easily the hardest junto i've attempted. i had to make several passes through the tracks, pulling out tiny sub-one-second sections here and there, building up percussion, or finding droney passages that would work for background material.

for the percussion, i zoomed in and grabbed pieces of non-tonal audio, gated them to remove incidental noise, and checked playback at other speeds for useful sounds. some of the samples were doubly useful with different filter and speed settings. most of the percussion sounds were created after isolating one channel or mixing down to mono; this gave a sharper, clickier sound. occasionally, some of the hits/sticks were left in stereo for a slightly fuller sound.

the melody/drone passages were all pulled from the “unloop” track. i chopped out a short section of mostly percussion-free sound at the beginning of the song, isolated one channel, and ran this higher-pitched drone into paulstretch, stretched to 50x. i played with the bandwidth and noise/tone sliders to get the distinctive crystalline sound, rendering it a few more times. by playing this tone at different speeds using renoise’s basic sample editor, i was able to layer octaves, fading different copies of the sample in and out for some evolving harmonics as desired.

a signal follower attached to the low-passed kick drum flexed the drone’s volume on the beat, adding some liveliness, resulting in a pleasant low-key “bloom pads” effect. i don’t go for huge sidechain compression; just a touch is all that’s needed to reinforce the rhythm. a slow LFO set to “random” mode, attached to a bitcrusher, downgraded the clap sounds with some pleasant crunch.

calf reverb and vintage tape delay plugins rounded out the FX, with the percussion patterns treated liberally, resulting in some complex sounds despite simple arrangement. the only other effect was a tape warmth plugin on the master channel; everything was kept quite minimal, for aesthetic and time reasons. given that i only had a day or so to work on the track, i knew i couldn’t try for too many complicated tricks or melodies.

December 02, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
I'm doing it for you (December 02, 2012, 07:44 UTC)

Okay, this is not going to be a very fun post to read, and the title may already make you think that I'm being an arrogant bastard this time around. But I get the feeling that people have lately been missing the point: even if I'm grumpy, I'm not usually grumpy just because; I'm usually grumpy because I'm trying to get things to improve rather than stagnate or get worse.

So let’s take an example right now. Thomáš postd about some of the changes that are to be expected on LibreOffice 4 — one of these is that the LDAP client libraries are no longer an optional dependency but have to be present. I wasn’t happy about that.

I actually stumbled across that just the other day when installing the new laptop: while installing KDE components with the default USE flags, OpenLDAP would have been installed. The reason is obviously that the ldap USE flag is enabled by default, which makes sense, as LDAP is (unfortunately) the most common "shared address book" database available. But why should I get an LDAP server when I explicitly selected a desktop profile?

So the first task at hand was to make sure that the minimal USE flag was present on the package (it was), and that it did what was intended, i.e., not install the LDAP server; and that is indeed the case. Good, so we can install only the client libraries. Unfortunately the default dependencies were slightly wrong with said USE flag, as some things like libtool (for libltdl) are only really used by the server components. This was easy to fix, together with a couple more fixes.
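
In the meantime, a desktop user can already opt out of the server bits by hand (a minimal sketch using the flag discussed here):

    # /etc/portage/package.use
    net-nds/openldap minimal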

But when I proposed on the mailing list to change the defaults for the desktop profile to have the minimal USE flag enabled, hell broke loose. Now, the good point coming out of it is that the minimal USE flag is definitely being over-used, and I'm afraid I'm at fault there as well, since both NRPE and NSCA have a minimal USE flag; I guess it's time for me to reel back on that too. I now have a patch to give openldap a server USE flag, enabled by default (except, hopefully, on the desktop profile), to replace the old minimal flag. Incidentally, looking into it I also found that said USE flag was actually clashing with the cxx one, for no good reason as far as I could tell. But Robin doesn't even like the idea of going with a server USE flag for OpenLDAP!

On a different note, let’s take hwids — I originally created the package to reduce the amount of code our units’ firmware required, but while at it I ended up with a problematic file on my hands, as I wrote the oui.txt file downloaded from IEEE has been redistributed for a number of years, but when I contacted them to make sure I could redistribute it, they told me that it wasn’t possible. Unfortunately the new versions of systemd/udev use that file to generate some hardware database — finally implementing my suggestion from four years ago better late than never!

Well, I ended up having to take some flak, and some risk, and now the new hwids package fetches that file (as well as the iab.txt file) and also fully implements re-building the hardware database, so that we can keep it up to date from Portage, without having to get people to re-build their udev package over and over.

So, excuse me if I’m quite hard to work with sometimes, but the amount of crap I have to take when doing my best to make Gentoo better, for users and developers, is so high that sometimes I’d just like to say “screw it” and leave it to someone else to fix the mess. But I’m not doing that — if you don’t see me around much in the next few days, it’s because I’m leaving LA on Wednesday, and I can’t post on the blog while flying to New York (because the gogonet IP addresses are in virtually every possible blacklist, now and in the future - so no way I can post to the blog, unless I figure out a way to set up a VPN and route traffic to my blog to said VPN …).

And believe it or not, I do have other concerns in my life besides Gentoo.

December 01, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Tinderbox and expenses (December 01, 2012, 17:44 UTC)

I’ve promised some insight into how much running the tinderbox actually costed me. And since today marks two months from Google AdSense’s crazy blacklisting of my website, I guess it’s a good a time as any other.

SO let’s start with the obvious first expense: the hardware itself. My original Tinderbox was running on the box I called Yamato, which costed me some €1700 and change, without the harddrives, this was back in 2008 — and about half the cost was paid with donation from users. Over time, Yamato had to have its disks replaced a couple of times (and sometimes the cost came out of donations). That computer has been used for other purposes, including as my primary desktop for a long time, so I can’t really complain about the parts that I had to pay myself. Other devices, and connectivity, and all those things, ended up being shared between my tinderbox efforts and my freelancing job, so I also don’t complain about those in the least.

The new tinderbox host is Excelsior, which was bought with the Pledgie that left me paying only some $1200 out of my own pocket, the rest coming in from the contributors. The space, power and bandwidth have been offered by my employer, which solved quite a few problems. Since I no longer have to pay for the power, and the last time I went back to Italy (in June) I turned off, and got rid of, most of my hardware (the router was already having some trouble; Yamato's motherboard was having trouble anyway, so I saved the harddrive to decide what to do with it, and sold the NAS to a friend of mine), I can now assess how much I was spending on the power bill.

My usual power bill was somewhere around €270, which obviously includes all the usual house power consumption as well as my hardware and, due to the way power is billed in Italy, an advance on the next bill. The bill for the months between July and September, the first one where I was fully out of my house, was for -€67. And no, that's not a typo: it was a negative bill! Calculator at hand, the actual difference between the previous bills and the new one is around €50 a month; assuming that only a third of that was due to the tinderbox hardware, that makes it around €17 per month spent on the power bill. It's not much, but it adds up. Connectivity is hard to assess, so I'd rather not even go there.

With the current setup, there is of course one expense that wasn't there before: AWS. The logs that the tinderbox generates are stored on S3, since they need to be accessible, and there are lots of them. And one of the reasons why Mike is behaving like a child about me just linking the build logs instead of attaching them is that he expects me to delete them because they are too expensive to keep indefinitely. So, how much does the S3 storage cost me? Right now, it costs me a whopping $0.90 a month. Yes, you got that right: it costs me less than one dollar a month for all that storage. I guess the reason is that they are not stored for high reliability or high-speed access, and they are highly compressible (even though they are not compressed by default).

You can probably guess that I'm not going to clear out the logs from AWS for a very long time. Although I would like some logs not to be so big for nothing, like the sdlmame one, which used to pass the -v switch to GCC, making every call print a long dump of internal data that is rarely useful in a default log output.

Luckily for me (and for the users relying on the tinderbox output!) those expenses are well covered by the Flattr revenue from my blog's posts, and thanks to Socialvest I no longer have doubts about whether I should keep the money or use it to flattr others: I currently have over €100 ready for the next six or seven months' worth of flattrs! Before this, between my freelance jobs, Flattr, and the ads on the blog, I was also able to cover at least the cost of the server (and barely the cost of the domains, but that's partly my fault for having… a number).

Unfortunately, as I said at the top of the post, there are no longer any ads served by Google on my blog. Why? Well, a month and a half ago I received a complaint from Google, saying that one post of mine, in which I namechecked a famous adult website in the context of an (at the time) recent perceived security issue, was adult material, and that it goes against the AdSense policies to have ads served on a website with adult content. I would still argue that just namechecking a website shouldn't be considered adult content, but while I did submit an appeal to Google, a month and a half later I have no response at hand. They didn't blacklist the whole domain though; they only blacklisted my blog, so the ads are still shown on Autotools Mythbuster (on which I plan to resume working almost full time pretty soon), but the result is bleak: I went down from €12-€16 a month to a low €2 a month, which is no longer able to cover the server expense by itself.

This does not mean that anything will change in the future, immediate or not. This blog has more value for me than the money I can get back from it, as it's a way for me to showcase my ability and, to a point, get employment. But you can understand that the way they handled that particular issue still upsets me a liiiittle bit.

Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)
Libreoffice 4.0 and other cool stuff (December 01, 2012, 12:47 UTC)

During the following week there will be a hard feature freeze on libreoffice and the 4.0 branch will be created. This means that we can finally start to do some sensible stuff, like testing it like hell in Gentoo.

This release is packed with new features, so let me list at least some that are relevant to our Gentoo stuff:

  • repaired nsplugin interface (who the hell uses it :P), fixed by Stephan Bergmann, for which you ALL should send him some cookies :-)
  • liblangtag with direct po/mo usage, which makes translations easier to handle because they are no longer converted into the internal sdf format
  • liborcus library debut, which splits some features out of calc into a nice small lib so anyone can reuse them, plus it is easier to maintain; cookies to Kohei Yoshida
  • bluetooth remote control that allows you to just mess with your presentations over bluetooth; there is also an android remote app that does the same over the network ;-)
  • telepathy collaboration framework inclusion that allows you to mess with multiple other people on one document in a semi-realtime manner (it is mostly a tech preview and you don't see what the other guy is doing, it just appears in the doc)
  • binfilter is gone! Which is awesome, as it was a huge load of code that was really stinky

For more changes you can just read the wiki article; keep in mind that this wiki page will be updated until the release, so it does not yet contain all the stuff.

Build related stuff

  • We are going to require a new library that allows us to parse the mspub format. Fridrich Strba was obviously bored, so he wrote yet another format parser :-)
  • Pdfimport is no longer a pseudo-extension; it is built in directly behind a normal USE flag, which saves quite a lot of copy&paste code, and it looks like it operates faster now.
  • The openldap schema provider is now hard-required so you can use address books (the Mork driver handles that). I bet some of you lads won't like this much, but ldap itself does not have too many deps and it is useful for quite a few business cases.
  • There are also some nice removals: glib and librsvg are goners from the default requirements (no surprise for gnomers, who will still need them). Among others it no longer needs sys-libs/db, which I finally removed from my system.
  • The gcc requirement was raised to 4.6, because otherwise boost acts like *censored* and I have better stuff to do than fix it all the time.
  • Saxon bundling has been dealt with and removed completely.
  • Parallel build is sorted out, so it will use the correct number of cpus and will fork gcc only as many times as required, not n^n times.
  • And last but probably worst, the plugin foundation that was in java is slowly migrating to python, and it needs python:3.3 or later. This did not make even me happy :-)

Other fancy libreoffice stuff

Michael Meeks is running merges against Apache OpenOffice so we try hard to pick up even fixes that are not in our codebase (thankfully the license allows this direction). So with lots of effort we review all their code changes and try to merge them over into our implementation. This will grow more and more complex over time, because in libo we actually try to use the new stuff like the new C++ std/Boost/… so there are more and more collisions. Let's see how long it will be worth it (of course one-liners are easy to pick up :P).

What is going in stable?

We at last got libreoffice-3.6 and its binary counterpart stable. After that, an svg bug with librsvg was found (see above; it's gone from 4.0), so the binaries will be rebuilt and the next version bump will lose the svg USE flag. This was caused by how I wrote the detection of the new switches, an oversight on my side: I simply tried to launch libreoffice with -svg and didn't dig further. Other than that, the whole package is production-ready and there should not be many new regressions.

November 30, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

If you're seeing a message like "Failed to move to new PID namespace: Cannot allocate memory" when running Chrome, this is actually a problem with the Linux kernel.

For more context, see http://code.google.com/p/chromium/issues/detail?id=110756 . In case you wonder what the fix is, the patch is available at http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commitdiff;h=976a702ac9eeacea09e588456ab165dc06f9ee83, and it should be in Linux-3.7-rc6.

November 28, 2012
Jeremy Olexa a.k.a. darkside (homepage, bugs)
Gentoo: Graphing the Developer Web of Trust (November 28, 2012, 13:57 UTC)

“Nothing gets people’s interest peaked like colorful graphics. Therefore, graphing the web of trust in your local area as you build it can help motivate people to participate as well as giving everyone a clear sense of what’s being accomplished as things progress.”

I graphed the Gentoo Developer Web of Trust, as motivated by the (outdated) Debian Web of Trust.

Graph (same as link above) – Redrawn weekly : http://qa-reports.gentoo.org/output/wot-graph.png
Stats per Node : http://qa-reports.gentoo.org/output/wot-stats.html
Source : http://git.overlays.gentoo.org/gitweb/?p=proj/qa-scripts.git;a=blob;f=gen-dev-wot.sh;hb=HEAD
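
If you want to try something similar on your own keyring, the classic recipe looks roughly like this (a sketch, assuming the sig2dot tool from the signing-party package and graphviz; the actual script used for the graph above is linked as Source):

    gpg --no-default-keyring --keyring ./devs.gpg --list-sigs | sig2dot > wot.dot
    neato -Tpng wot.dot -o wot-graph.png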

Enjoy.

November 27, 2012
Pacho Ramos a.k.a. pacho (homepage, bugs)
About maintainer-needed (November 27, 2012, 18:35 UTC)

As you can see at:
http://euscan.iksaif.net/maintainers/maintainer-needed@gentoo.org/

there are a lot of packages assigned to maintainer-needed. These packages lack an active maintainer, and their bugs are usually solved by people on the maintainer-needed alias (like pinkbyte, hasufell, kensington and me). Even if we are still able to keep the bug list "short" (when excluding "enhancement" and "qa" tagged bugs), any help with this task is really appreciated, so:
1. If you are already a Gentoo dev and would like to help us, simply join the team by adding yourself to the mail alias. There is no need to go through the bug list and fix any specified amount of bugs out of obligation. For example, I simply try to fix maintainer-needed bugs when I have a bit of time after taking care of other things.
2. If you are a user, you can:
- Step up as maintainer using the proxy-maintainers project:
http://www.gentoo.org/proj/en/qa/proxy-maintainers/index.xml
- Go to bugs:
http://tinyurl.com/cssc95v
and provide fixes, patches... for them ;)

Thanks a lot for your contribution!

November 25, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Why you need the real_* thing with genkernel (November 25, 2012, 19:05 UTC)

Today it bit me. I rebooted my workstation, and all hell broke loose. Well, actually, it froze. Literally, if you consider my root file system. When the system tried to remount the root file system read-write, it gave me this:

mount: / not mounted or bad option

So I did the first thing that always helps me, and that is to disable the initramfs booting and boot straight from the kernel. Now, for those wondering why I boot with an initramfs while booting directly from the kernel still works: it's a safety measure. Ever since the talks, rumours, fear, uncertainty and doubt about supporting a separate /usr file system started, I have kept an initramfs on my system in case an update really breaks the regular boot cycle. The same goes for my use of lvm on most file systems, and software RAID on all of them. If I didn't have an initramfs lying around, I would be screwed the moment userspace decides not to support this straight from a kernel boot. Luckily, this isn't the case (yet), so I could continue working without an initramfs. But I digress. Back to the situation.

Booting without the initramfs worked without errors of any kind. The next thing is to investigate why it fails. I reboot with the initramfs, get my read-only root file system and start looking around. In my dmesg output, I notice the following:

EXT4-fs (md3): Cannot change data mode on remount

So that’s weird, not? What is this data mode? Well, the data mode tells the file system (ext4 for me) how to handle writing data to disk. As you are all aware, ext4 is a journaled file system, meaning it writes changes into a journal before applying, allowing changes to be replayed when the system suddenly crashes. By default, ext4 uses ordered mode, writing the metadata (information about files and such, like inode information, timestamps, block maps, extended attributes, … but not the data itself) to the journal right after writing data to the disk, after which the metadata is then written to disk as well.

On my system though, I use data=journal, so the data too is written to the journal first. This gives a higher degree of protection in case of a system crash (or immediate powerdown – my laptop doesn't recognize batteries anymore, and with a daughter playing around I've had my share of sudden powerdowns). I boot with rootflags=data=journal and I have data=journal in my fstab.

But the above error tells me otherwise. It tells me that the mode is not what I want it to be. So after fiddling a bit with the options and (of course) using Google to find more information, I found out that my initramfs doesn’t check the rootflags parameter, so it mounts the root file system with the standard (ordered) mode. Trying to remount it later will fail, as my fstab contains the data=journal tag, and running mount -o remount,rw,data=ordered for fun doesn’t give many smiles.

The genkernel man page however showed me that it uses real_rootflags instead. So I rebooted with that parameter set to real_rootflags=data=journal, and all was okay again.
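
Put together, the relevant bits look roughly like this (a sketch based on the setup described here, with /dev/md3 as the root device; adjust device names to your own system):

    # kernel parameters for a genkernel initramfs:
    real_root=/dev/md3 real_rootflags=data=journal

    # matching /etc/fstab entry:
    /dev/md3   /   ext4   data=journal   0 1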

Edit: I wrote earlier that even changing the default mount options in the file system itself (using tune2fs /dev/md3 -o journal_data) didn't help. However, that turns out to have been an error on my part: I didn't reboot after toggling it, which is apparently required. Thanks to Xake for pointing that out.

November 24, 2012
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
EAPI=5, ghc-7.6 and other goodies (November 24, 2012, 20:53 UTC)

Today I have unmasked ghc-7.6.1 in gentoo's haskell overlay. Quite a few things are broken (like the not-yet-bumped gtk2hs), but major things (like darcs) seem to work fine. Feel free to drop a line in #gentoo-haskell to get things fixed.

Some notes and events in the overlay:

  • ghc-7.6.1 is available for all major arches we try to support
  • a few of the overlay's ebuilds were converted to EAPI=5 to use subslot depends (see below)
  • we've got a working ghc-9999 ebuild with shared libraries by default! (see below)

ghc-7.6

That beast brought two major problems to its users:

  1. Prelude.catch has gone away and is called ‘System.IO.Error.catchIOError’ now
  2. the directory package changed the interface of the existing function ‘getModificationTime’ without keeping a backwards-compatible variant.

While the first breakage is easy to fix with something like:

#if MIN_VERSION_base(4,6,0)
catch :: IO a -> (IOError -> IO a) -> IO a
catch = System.IO.Error.catchIOError
#endif

(or just switch to the extensible-exceptions package if you need support for really old ghc versions).

The second one is literally a disaster:

-getModificationTime :: FilePath -> IO ClockTime
+getModificationTime :: FilePath -> IO UTCTime

It is not as straightforward to fix, and the "fixes" in various packages break the PVP in a very funny way.

Look at this example.

Now that package has a random type signature depending on which directory version it happened to build against.

TODO: find a nice and simple ‘:: ClockTime -> IO UTCTime’ compatibility function to end that ever-creeping mess. (I wish the directory package would provide that.)

Okay. Enough ranting.

EAPI=5

Some experienced gentoo haskell users already know about the magic haskell-updater tool written by Ivan to fix the mess after a ghc upgrade or an upgrade of some base library.

The typical symptom of broken libraries is a ghc-pkg check result similar to this:

There are problems in package data-accessor-monads-fd-0.2.0.3:
  dependency "monads-fd-0.1.0.4-830f79a91000e99707aac145b972f786" doesn't exist
There are problems in package LibZip-0.10.2:
  dependency "mtl-2.0.1.0-b1b6de8085e5ea10cc0eb01054b69110" doesn't exist
There are problems in package jail-0.0.1.1:
  dependency "monads-fd-0.1.0.4-830f79a91000e99707aac145b972f786" doesn't exist

Why does it happen?

Well, ghc’s library ABI depends on ABIs on all the libraries it uses. It has quite nasty consequences.

Once you upgrade a library you need to:

  1. rebuild all the reverse dependencies
  2. and their reverse dependencies (recursively)

The first point can be solved by EAPI 5's so-called SUBSLOT feature.

The second one is not solved yet, but I'm told it is planned for EAPI=6. Thus you will still need to run haskell-updater from time to time.
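
For the curious, a subslot dependency in an ebuild is just the := slot operator (a minimal sketch; dev-haskell/binary is the package bumped below):

    # fragment of a consumer's ebuild, EAPI=5:
    RDEPEND="dev-haskell/binary:="
    # portage records the installed subslot (e.g. 0/0.6.4.0) and schedules
    # a rebuild of this consumer whenever that subslot changes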

Anyway, I’ve bumped binary package today and to show how portage picks all it’s immediate users:

# emerge -av1 dev-haskell/binary

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  r  U ~] dev-haskell/binary-0.6.4.0:0/0.6.4.0::gentoo-haskell [0.6.2.0:0/0.6.2.0::gentoo-haskell] USE="doc hscolour {test} -hoogle -profile" 0 kB
[ebuild  r  U ~] dev-haskell/sha-1.6.1:0/1.6.1::gentoo-haskell [1.6.0:0/1.6.0::gentoo-haskell] USE="doc hscolour -hoogle -profile" 2,651 kB
[ebuild  r  U ~] dev-haskell/zip-archive-0.1.2.1-r2:0/0.1.2.1::gentoo-haskell [0.1.2.1-r1:0/0.1.2.1::gentoo-haskell] USE="doc hscolour {test} -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/data-binary-ieee754-0.4.3:0/0.4.3::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/dyre-0.8.11:0/0.8.11::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/hxt-9.3.1.1:0/9.3.1.1::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/hashed-storage-0.5.10:0/0.5.10::gentoo-haskell  USE="doc hscolour {test} -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/dbus-core-0.9.3-r1:0/0.9.3::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB
[ebuild  rR   ~] dev-haskell/hoogle-4.2.14:0/4.2.14::gentoo-haskell  USE="doc fetchdb hscolour -fetchdb-ghc -hoogle -localdb -profile" 0 kB
[ebuild  rR   ~] www-apps/gitit-0.10.0.2-r1:0/0.10.0.2::gentoo-haskell  USE="doc hscolour plugins -hoogle -profile" 0 kB
[ebuild  r  U ~] dev-haskell/yesod-auth-1.1.1.7:0/1.1.1.7::gentoo-haskell [1.1.1.6:0/1.1.1.6::gentoo-haskell] USE="doc hscolour -hoogle -profile" 17 kB
[ebuild  rR   ~] dev-haskell/yesod-1.1.4:0/1.1.4::gentoo-haskell  USE="doc hscolour -hoogle -profile" 0 kB

Total: 12 packages (4 upgrades, 8 reinstalls), Size of downloads: 2,668 kB

Would you like to merge these packages? [Yes/No]

I would like to rebuild all of sha's (and so on) revdeps as well, but EAPI can't express that kind of dependency yet.

The EAPI=5 ebuilds are slowly drifting into the main portage tree as well.

ghc-9999

The most interesting thing!

With great Mark’s help we now have live ghc ebuild right out of gti tree!

One of the most notable things is the dynamic linking by default.

# ldd `which happy` # ghc-7.7.20121116
    linux-vdso.so.1 (0x00007fffb0bff000)
    libHScontainers-0.5.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/containers-0.5.0.0/libHScontainers-0.5.0.0-ghc7.7.20121116.so (0x00007fe616972000)
    libHSarray-0.4.0.1-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/array-0.4.0.1/libHSarray-0.4.0.1-ghc7.7.20121116.so (0x00007fe6166d0000)
    libHSbase-4.6.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/base-4.6.0.0/libHSbase-4.6.0.0-ghc7.7.20121116.so (0x00007fe615df9000)
    libHSinteger-gmp-0.5.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/integer-gmp-0.5.0.0/libHSinteger-gmp-0.5.0.0-ghc7.7.20121116.so (0x00007fe615be6000)
    libHSghc-prim-0.3.0.0-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/ghc-prim-0.3.0.0/libHSghc-prim-0.3.0.0-ghc7.7.20121116.so (0x00007fe615976000)
    libHSrts-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/rts-1.0/libHSrts-ghc7.7.20121116.so (0x00007fe615715000)
    libc.so.6 => /lib64/libc.so.6 (0x00007fe61536c000)
    libHSdeepseq-1.3.0.1-ghc7.7.20121116.so => /usr/lib64/ghc-7.7.20121116/containers-0.5.0.0/../deepseq-1.3.0.1/libHSdeepseq-1.3.0.1-ghc7.7.20121116.so (0x00007fe615162000)
    libgmp.so.10 => /usr/lib64/libgmp.so.10 (0x00007fe614ef4000)
    libffi.so.6 => /usr/lib64/libffi.so.6 (0x00007fe614cec000)
    libm.so.6 => /lib64/libm.so.6 (0x00007fe6149f2000)
    librt.so.1 => /lib64/librt.so.1 (0x00007fe6147ea000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007fe6145e6000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fe616d41000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fe6143ca000)

$ ls -lh `which pandoc` # ghc-7.7.20121116
-rwxr-xr-x 1 root root 6.3M Nov 16 16:38 /usr/bin/pandoc
$ ls -lh `which pandoc` # ghc-7.4.2
-rwxr-xr-x 1 root root 27M Nov 18 17:46 /usr/bin/pandoc

Actually, the whole ghc-9999 installation is 150MB smaller than ghc-7.4.1 on amd64.

Quite a win!

And as a side effect, revdep-rebuild (or portage's preserve-libs feature together with the @preserved-rebuild set) can notice (and fix) breakages introduced by upgrades!
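
That route looks roughly like this (a sketch; with preserve-libs, portage keeps the old shared objects around until their consumers have been rebuilt):

    # /etc/portage/make.conf
    FEATURES="preserve-libs"

    # after an upgrade, rebuild whatever still links against preserved libs:
    emerge @preserved-rebuild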

Work on the ghc cross-compilation in the ebuild slowly continues (needs some upstream fixes to support toolchains inferred from build/host/target triplets).

Have fun!


November 23, 2012
Ian Whyman a.k.a. thev00d00 (homepage, bugs)
Test Post #1 (November 23, 2012, 13:31 UTC)

Hello Guys,

This is just a test post to make sure the new WordPress is working correctly.

November 22, 2012
Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Gentoo Miniconf 2012: Review (November 22, 2012, 17:36 UTC)

After one month, I think it is time to write my review of the Gentoo Miniconf. :-)

On 20 and 21 October I attended the Gentoo Miniconf, which was part of the bootstrapping-awesome project: 4 conferences (openSUSE Conference/Gentoo Miniconf/LinuxDays/SUSE Labs) that took place at the Czech Technical University in Prague.

Photo by Martin Stehno

Day 0: After our flight arrived at Prague's airport, we went straight to the pre-conference welcome party in a cafe near the university where the conference took place. There we met the other Greeks who had arrived in the previous days, and I also had the chance to meet a lot of Gentoo developers and talk with them.

Day 1: The first day started early in the morning. Dimitris and I went to the venue before the conference started in order to prepare the room for the miniconf. The day started with Theo as host welcoming us. There were plenty of interesting presentations that covered a lot of aspects of Gentoo: the Trustees/Council, Public Relations, the Gentoo KDE team, Gentoo Prefix, Security, Catalyst and Benchmarking. The highlight of the day was when Robin Johnson introduced the Infrastructure team and started a very interesting BoF about the state of the Infra team, the currently running web apps and the burning issue of the git migration. The first day ended with lots of beers at the conference's big party in the center of Prague, next to the famous Charles Bridge.

Gentoo Developers group photo
Photo by Jorge Manuel B. S. Vicetto

 

Day 2: The second day was more relaxed. There were presentations about Gentoo @ IsoHunt, 3D and Linux graphics, and Open/GnuPG. After the lunch break an Open/GnuPG key signing party took place outside the miniconf's room. After the key signing party we continued with a workshop on Puppet, then a presentation about how to use testing on Gentoo to improve QA, and finally the last presentation had Markos and Tomáš talking about how to get involved in the development of Gentoo. At the end, Theo and Michal closed the miniconf session.

 

I really liked Prague, especially the beers and the Czech cuisine.

Gentoo Miniconf was a great experience for me. I could write many pages about it, because I was in the room for both whole days and saw all the presentations.

I also had the opportunity to get in touch and talk with lots of Gentoo developers and contributors from other FOSS projects. Thanks to Theo and Michal for organizing this awesome event.

More about the presentations, and the videos of the miniconf, can be found here.
Looking forward to the next Gentoo Miniconf (why not a full conference?).

November 20, 2012
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Project homepages for slackers (November 20, 2012, 03:50 UTC)

Creating a homepage and documentation for a project is a boring task. I have a few projects that haven't been released yet due to lack of time and motivation to create a simple webpage and write down some Sphinx-based documentation.

To fix this issue I did a quick hack based on my favorite pieces of software: Flask, docutils and Mercurial. It is a single-file web application that creates homepages automatically for my projects, using data gathered from my Mercurial repositories. It uses the tags, the README file, and a few variables declared in the repository's .hgrc file to build an interesting homepage for each project. I just need to improve my READMEs! :)
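
Conceptually, the data it gathers from each repository is what you would get from commands like these (a rough sketch of the idea; the repository path is made up):

    cd ~/repos/myproject
    hg tags                # release tags become the version/download list
    hg cat -r tip README   # reStructuredText, rendered through docutils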

It works similarly to the PyPI Package Index, but accepts any project hosted on a Mercurial repository, including my non-Python and Gentoo-only projects.

My instance of the application lives here:

http://projects.rafaelmartins.eng.br/

The application is highly tied to my workflow, e.g. the way I handle tags and the directory structure of my repositories on my server, but the code is available in a Mercurial repository:

http://hg.rafaelmartins.eng.br/projects/

Most of my projects aren't listed yet, and I'll start enabling them as soon as I fix their READMEs.

November 19, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

This past Saturday (17 November 2012), I participated in the St. Jude Children’s Hospital Give Thanks Walk. This year was a bit different than the previous ones, as it also had a competitive 5k run (which was actually a 6k). I woke up Saturday morning, hoping that the weather report was incorrect, and that it would be much warmer than they had anticipated. However, it was not. When I arrived at the race site (which was the beautiful Creve Coeur Lake Park [one of my absolute favourites in the area]), it was a bit nippy at 6°C. However, the sun came out and warmed up everything a bit. Come race time, it wasn’t actually all that bad, and at least it wasn’t raining or snowing. :)

When I started the race, I was still a bit cold even with my stocking cap. However, by about halfway through the 6k, I had to roll up my sleeves because I was sweating pretty badly. It was an awesome run, and I felt great at the end of it. I think that the best part was being outside with a bunch of people that were also there to support an outstanding cause like Saint Jude Children’s Hospital. There were some heartfelt stories from families of patients, and nice conversations with fellow runners.

I actually finished the race in 24’22″, which wasn’t all that bad of a time:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - runner placement list
Click to enlarge

In fact, it put me in first place, with 2’33″ between me and the runner-up! Though coming in first place wasn’t a goal of mine, I was in competition with myself. I had set a personal goal of completing the 6k in 26’30″ and actually came in under it! My placement earned me both a medal and a great certificate:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - first-place medal and certificate
Click to enlarge

After the announcements of the winners and thanks to all of the sponsors, the female first-place runner (Lisa Schmitz) and I had our photo taken together in front of the finish line:

2012 St. Jude Give Thanks Walk - Saint Louis, MO - male and female first-place runners
Click to enlarge

Thank you to everyone that sponsored and supported me for this run! The children and families of Saint Jude received tens of thousands of dollars from the Saint Louis race alone!

Cheers,
Nathan Zachary (“Zach”)

Michal Hrusecky a.k.a. miska (homepage, bugs)
GPG Key Signing Party (November 19, 2012, 08:19 UTC)

Last Thursday we had a GPG Key & CAcert signing party at the SUSE office, inviting anybody who wanted to get their key signed. I would say that it went quite well: we had about 20 people showing up, we had some fun, and we now trust each other some more!

GPG Key Signing

We started with GPG key signing. You know, the usual stuff: two rows moving against each other, people exchanging paper slips.

Signing keys

For actually signing keys at home, we recommended people use the signing-party package, and caff in particular. It's an easy-to-use tool as long as you can send mails from the command line (there are some options for talking to an SMTP server directly, but I ran into some issues). All you need to do is call

caff HASH

and it will download the key, show you the identities and fingerprint, sign it for you and send each signed identity to its owner by itself via e-mail. And all that with a nice wizard. It can't get simpler than that.

Importing signatures

When my signed keys started coming back, I wondered how to process them; it was simply too many emails. I searched a little bit, but got lazy quite soon, so since I have all my mail stored locally in a Maildir by offlineimap, I just wrote the following one-liner to import them all:

   # find caff's "Your signed key" mails in the Maildir,
   # decrypt each one and feed the signatures to gpg
   grep -Rl 'Your signed' INBOX | while read i; do
        gpg -d "$i" | gpg --import -a;
   done

Maybe somebody will find it useful as well, or maybe somebody more experienced will tell me in the comments how to do it properly ;-)

CAcert

One friend of mine – Theo – really wanted to be able to issue CAcert certificates, so we added CAcert assurance to the program. For those who don't know, CAcert is a nonprofit certification authority based on a web of trust. You get verified by volunteers, and when enough of them trust you enough, you are trusted by the authority itself. When people verify you, they give you some points based on how much they are trusted and how much they trust you. Once you get 50 points, you are trusted enough to get your certificate signed, and once you have 100, you are trusted enough to start verifying other people (after a little quiz to make sure you know what you are doing).

I knew that my colleague Michal Čihař was able and willing to issue some points, but as he was starting out issuing 10 and I 15, I also asked a few nearby-living assurers from the CAcert website. Unfortunately I got no reply, but then we were organizing everything quite quickly. We did have another colleague – Martin Vidner – show up and issue some points, though. I assured another 11 people at the party and now I can give out 25 points, as can Michal, and I guess Martin is now somewhere around 20 as well. So it means that if you need to be able to issue CAcert certificates, visiting just the SUSE office in Prague is enough! But still, contact us beforehand; sometimes we do have a vacation ;-)

November 18, 2012
Secretly({Plan, Code, Think}) && PublishLater() (November 18, 2012, 12:19 UTC)

During the last few years I have started several open source projects. Some turned out to be useful, maybe successful; many were just rubbish. Nothing new so far.

Every time I start a new project, I usually don't really know where I am headed and what my long-term goals are. My excitement and motivation typically come from solving simple everyday and personal problems, or just addressing {short,mid}-term goals. This is actually enough for me to just hack hack hack all night long. There is no big picture, no pressure from the outside world, no commitment requirements. It's just me and my compiler/interpreter having fun together. I call this the "initial grace period".

During this period, I usually never share my idea with other people, ever. I kind of keep my project in a locked pod, away from hostile eyes. Should I share my idea at this time, the project might get seriously injured and my excitement severely affected. People would only see the outcome of my thought, but not the thought process itself nor the detailed plans behind it, because I just don't have them! While this might be considered to go against basic Software Engineering rules, or against some exotic "free software" principles, it works for me.

I don’t want my idea to be polluted as long as I don’t have something that resembles it in the form of a consistent codebase. And until that time, I don’t want others to see my work and judge its usefulness basing on incomplete or just inconsistent pieces of information.

At the very same time, writing documents about my idea and its goals beforehand is also a no-go, because I have “no clue” myself as mentioned earlier.

This is why revision control systems, and the implicit development model they force on individuals, are so important, especially for me.
Giving you the ability to work on your stuff, changes, improvements, without caring about the external world until you are really, really done with it, is what I ended up needing so very much.
Every time I forgot to follow this "secrecy" strategy, I had to spend more time discussing my (still confused?) idea and the {why,what,how} of what I am doing than actually coding. Round trips are always expensive, no matter what you're talking about!

Many of the internal tools we at Sabayon successfully use have gone through this development process. Other staffers sometimes say things like "he's been quiet in the last few days, he must be working on some new features", and it turns out that most of the time this is true.

This is what I wanted to share with you today, though. Don't wait for your idea to become clearer in your mind; it won't happen by itself. Just take a piece of paper (or your text editor), start writing down your own secret goals (don't make the mistake of calling them "functional requirements" like I did sometimes), divide them into modest/expected and optimistic/crazy, and start coding as soon as possible on your own version/branch of the repo. Then go back to your list of goals, see if they need to be tweaked, and go back to coding again. Iterate until you're satisfied with the result, and then, eventually, let your code fly away to some public site.

But, until then, don't tell anybody what you're doing! Don't expect any constructive feedback during the "initial grace period"; it is very likely that it will just be destructive.

Git, I love ya!


November 17, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
The hardened project continues going forward… (November 17, 2012, 19:34 UTC)

This Wednesday, the Gentoo Hardened team held its monthly online meeting, discussing the things that have been done in the last few weeks and the ideas that are being worked out for the next. As I did for the last few meetings, allow me to summarize it for all interested parties…

Toolchain

Upstream GCC development on the 4.8 version progressed into the 3rd stage of its development cycle. Sadly, many of our hardened patches didn't make the release. Zorry will continue working on these things, hopefully still being able to merge a few – and otherwise it'll be for the next release.

For the MIPS platform, we might not be able to support the hardenedno* GCC profiles [1] in time. However, this is not seen as a blocker (we’re mostly interested in the hardened ones, not the ones without hardening ;-) so this could be done later on.

Blueness is migrating the stage building for the uclibc stages towards catalyst, providing cleaner stages. For the amd64 and i686 platforms, the uclibc-hardened and uclibc-vanilla stages are already done, and mips32r2/uclibc is on the way. Later, ARM stages will be looked at. Other platforms, like little-endian MIPS, are also on the roadmap.

Kernel

The latest hardened-sources (~arch) package contains a patch supporting the user.* namespace for extended attributes in tmpfs, as needed for the XATTR_PAX support [2]. However, this patch has not been properly investigated or tested yet, so input is definitely welcome. During the meeting, it was suggested to cap the length of the attribute value and to only allow the user.pax attribute, as we would otherwise be allowing unprivileged applications to "grow data" in kernel memory space (the tmpfs).

Prometheanfire confirmed that recent-enough kernels (3.5.4-r1 and later) with nested paging do not exhibit the performance issues reported earlier.

SELinux

The 20120725 upstream policies are stabilized on revision 5. Although a next revision is already available in the hardened-dev overlay, it will not be pushed to the main tree due to a broken admin interface. Revision 7 is slated to be made available later the same day to fix this, and is the next candidate for being pushed to the main tree.

The newer SELinux userspace utilities released in September are also going to be stabilized in the next few days (at the time of writing this post, they already are ;-). These also support epatch_user, so users and developers can easily add patches to try out stuff without having to repackage the application themselves.

grSecurity and PaX

The toolchain support for PT_PAX (the ELF-header based PaX markings) is due to be removed soon, meaning that the XATTR_PAX support will need to have matured by then. This has a few consequences for available packages such as elfix (which will need a bump and fixes), but also for the pax-utils.eclass file (interested parties are kindly requested to test the new eclass before it reaches "production"). Of course, it also means that the new PaX approach needs to be properly documented for end users and developers.

pipacs also mentioned that he is working on a paxctld daemon. Just like SELinux's restorecond daemon, this daemon will look for files and check them against a known database of binaries with their appropriate PaX markings. If the markings are set differently (or not set at all), the paxctld daemon will rectify the situation. For Gentoo, this is less of a concern, as we already set the proper information through the ebuilds.

Profiles

The old SELinux profiles, which had already been deprecated for a while, have been removed from the portage tree. That means that all SELinux-using profiles now use the features/selinux inclusion rather than a fully built (yet difficult to maintain) profile definition.

System Integrity

A few packages, needed to support or work with ima/evm, have been pushed to the hardened-dev overlay.

Documentation

The SELinux handbook has been updated with the latest policy changes (such as supporting the named init scripts). We also documented SELinux policy constraints, which was long overdue.

So, again a nice month of (volunteer) work on the security state of Gentoo Hardened. Thanks again to all (developers, contributors and users) for making Gentoo Hardened what it is today. Zorry will send the meeting log to the mailing list later, so you can look at the gorier details of the meeting if you want.

  • [1] GCC profiles are a set of parameters passed on to GCC as a "default" setting. Gentoo Hardened uses GCC profiles to support using non-hardening features if the user wants to (through the gcc-config application).
  • [2] XATTR_PAX is a new way of handling PaX markings on binaries. Previously, we kept the PaX markings (i.e. flags telling the kernel PaX code to allow or deny specific behavior or enable certain memory-related hardening features for a specific application) as flags in the binary itself (inside the ELF header). With XATTR_PAX, this is moved to an extended attribute called “user.pax”.
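
As an illustration of [2], markings kept as extended attributes can be inspected and tweaked with the standard attr tools (a sketch; the flag value shown is a made-up example, and on Gentoo the ebuilds normally take care of this for you):

    # read the current PaX marking of a binary:
    getfattr -n user.pax /path/to/binary
    # set a marking by hand (the value here is hypothetical):
    setfattr -n user.pax -v "m" /path/to/binary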