
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thilo Bangert
. Thomas Anderson
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
January 09, 2013, 23:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

January 07, 2013
Alex Alexander a.k.a. wired (homepage, bugs)

Passwords. No one likes them, but everybody needs them. If you are concerned about your online safety, you probably have unique passwords for your critical accounts and some common pattern for all the almost-useless accounts you create when browsing the web.

At first I saved my passwords in a gpg-encrypted file. Over time, however, I began using Firefox’s and Chrome’s password managers, mostly because of their awesome syncing capabilities and form auto-filling.

Unfortunately, convenience comes at a price. I ended up relying on the password managers a bit too much, using my password pattern all over the place.

Then it hit me: I had strayed too far. Although my main accounts were relatively safe (strong passwords, two-factor authentication), I had way too many weak passwords, synced on way too many devices, over syncing protocols of questionable security.

Looking for a better solution, I stumbled upon LastPass. Although LastPass uses an interesting security model, with passwords encrypted locally and a password generator that helps you maintain strong passwords for all your accounts, I didn’t like depending on an external service for something so critical. Its UI also left something to be desired.

Meet “pass”.

A Unix command line tool that takes advantage of commonly used tools like gnupg and git to provide safe storage for your passwords and other critical information.

Pass’ concept is simple. It creates one file for each of your passwords, which it then encrypts using gpg and your key. You can provide your own passwords or ask it to generate strong passwords for you automatically.

When you need a password, you can ask pass to print it on screen or copy it to the clipboard, ready for you to paste into the desired password field.

Pass can optionally use git, allowing you to track the history of your passwords and sync them easily among your systems. I have a Linode server, so I use that + gitolite to keep things synced.
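
To give a feel for the workflow, here is a quick sketch (the entry names and remote URL are made up; pass --help lists the real commands):

# create the store, encrypted to your GPG key
pass init "you@example.com"
# add an entry by hand, or let pass generate a strong 16-character one
pass insert email/example.com
pass generate web/some-site 16
# copy a password to the clipboard when you need it
pass -c web/some-site

# optional: version and sync the store with git
pass git init
pass git remote add origin git@example.org:password-store.git
pass git push -u origin master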

Installation and usage of the tool is straightforward, with clean instructions and bash completion support that makes it even easier to use.

All this does come at a cost: you lose the ability to auto-save passwords and fill out forms. But that is a small price to pay compared to the security benefits gained. I also love the fact that you can access your passwords with standard Unix tools in case of emergency. The system is also useful for securely storing other critical information, like credit cards.

Pass is not for everyone and most people would be fine using something like LastPass or KeePass, but if you’re a Unix guy looking for a solid password management solution, pass may be what you’re looking for :)

Pass was written by zx2c4 (thanks!) and is available in Gentoo’s Portage:

emerge -av pass

For more information visit the project’s website at http://zx2c4.com/projects/password-store/

Jeremy Olexa a.k.a. darkside (homepage, bugs)
My holidays in Greece were excellent (January 07, 2013, 09:57 UTC)

No, the country is not in flames or rioting every day. Bad media, bad.

I spent 12 days in Greece. The Greek hospitality is superb; I cannot ask for better friends in Greece. I first arrived in Thessaloniki and stayed there for a few nights. Then I went to Larissa and stayed with my friend and his family. There was a small communication barrier with his parents in this smaller town — they don’t get too many tourists. However, I had a very nice Christmas there, and it was nice to be with such great people over the holidays. I went to a namesday celebration. Even though I couldn’t understand most of the conversations, they still welcomed me, gave me food and wine, and exchanged cultural information. Then I went to Athens, stayed in a hostel, and spent New Year’s watching the fireworks over the Acropolis and the Parthenon. Cool experience! It was so great to be walking around the birthplace of “western ideals” — not the oldest civilization, but close. Some takeaway thoughts: 1) Greek hospitality is unlike anything I’ve experienced, really. I made sure I told everyone that they have an open door with me whenever we meet in “my new home” (meaning, I don’t know when or where). 2) You cannot go hungry in Greece, especially when they are cooking for you! 3) The cafe culture is great. 4) I want to go back during the summer.

Of course, you will always find the not-so-nice parts. I got fooled by the old man scam, as seen here. Luckily, they only got 30€ from me, compared to some of the stories I’ve heard. Looking back on it, I just laugh at myself. Maybe I’ll be jaded towards a genuine experience in the future, but lesson learned. I don’t judge Athens by this one mishap, however.

Greece - Dec 2012-22

I only have pictures of Athens, since I had to buy a new camera… Pics here

January 06, 2013
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: ice is given (January 06, 2013, 09:31 UTC)

a new song: ice is given by ioflow

piano improvisation and ambient recordings for the 53rd disquiet junto, ice for 2013.

the assignment was to record the sound of ice in a glass, and make something of it.

the track picture shows my lo-fi setup for the field recording segment. i balanced a logitech USB microphone (which came with the Rock Band game) on a box of herbal tea (to keep it off the increasingly wet kitchen table), and started dropping ice cubes into a glass tumbler. audible is the initial crack and flex of the tray, scrabbling for cubes, tossing them into the cup. i made a point of recording the different tone of cubes dropped into a glass of hot water. i also filled the cup with ice, then recorded the sound of water running into it from the kitchen tap. i liked this sound enough to begin the song with it.

i decided that my first song of 2013 should incorporate the piano, so with the ice cubes recorded, i sat down to improvise an appropriately wintry melody. the result is a simple two-minute minor motif. i turned to the ardour3 beta to integrate the field recordings and the piano improvisation.

it’s been a while since i last used my strymon bluesky reverb pedal, so i figured i should use it for this project. i set up a feedback-free hardware effects loop using my NI Komplete Audio6 interface with the help of the #ardour IRC channel, and listened to the piano recording as it ran through fairly spacious settings on the BSR (normal mode, room type, decay @ 3:00, predelay @ 11:00, low damp @ 4:00, high damp @ 8:00). with just a bit of “send” to the reverb unit, the piano really came to life.

i added a few more tracks in ardour for the ice cube snippets, with even more subtle audio sends to the BSR, and laid out the field recordings. i pulled them apart in several places, copying and pasting segments throughout the song; minimal treatment was needed to get a good balance of piano and ice.

ardour3 session

working environment in ardour3. laying out hardware FX and tracks.

title reference: Job 37:10

January 04, 2013
Stuart Longland a.k.a. redhatter (homepage, bugs)
DIY Project: Gatsby cap (January 04, 2013, 22:13 UTC)

Those who have met me, might notice I have a somewhat unusual taste in clothing. One thing I despise is having clothes that are heavily branded, especially when the local shops then charge top dollar for them.

Where hats are concerned, I’m fussy. I don’t like the boring old varieties that abound in $2 shops everywhere. I prefer something unique.

The mugshot of me with my Vietnamese coolie hat is probably the one most people on the web know me by. I was all set to try to make one — I had an idea of how I might achieve it and bought some materials I thought might work — but then I happened to be walking down Brunswick Street in Brisbane’s Fortitude Valley and saw a shop selling them for $5 each.

I bought one and have been wearing it on and off ever since. Or rather, I bought one, it wore out, I was given one as a present, wore that out, got given two more. The one I have today is #4.

I find them quite comfortable and lightweight, and most importantly, they’re cool and keep the sun off well. They are also one of the few full-brim designs that can accommodate wearing a pair of headphones or a headset underneath. Being cheap is a bonus. The downsides? One is that I find they’re very divisive: people either love them or hate them — that said, I get more compliments than complaints. The other is that they try to take off with the slightest bit of wind, and are quite bulky and somewhat fragile to stow.

I ride a bicycle to and from work, and so it’s just not practical to transport. Hanging around my neck, I can guarantee it’ll try to break free the moment I exceed 20km/h… and if I try to sit it on top of the helmet, it’ll slide around and generally make a nuisance of itself.

Caps stow much more easily. Not as good for sun protection, but they can still look good. I’ve got a few baseball caps, but they’re boring and a tad uncomfortable. I particularly like the old vintage gatsby caps — often worn by the 1930s working class. A few years back, on my way to uni, I happened to stop by a St. Vinnies shop near Brisbane Arcade (sadly, they have closed and moved on) and saw a gatsby-style denim cap going for about $10. I bought it, and people commented that the style suited me. It was a little big on me, but I was able to tweak it a bit to make it fit.

Fast forward to today: it is worn out — the stitching is good, but there are significant tears in the panelling, and the embedded plastic in the peak is broken in several places. I looked around for a replacement, but alas, they’re as rare as hen’s teeth here in Brisbane, and no, I don’t care for ordering overseas.

Down the road from where I live, I saw that the local sports/fitness shop was selling those flat neoprene sun visors for about $10 each. That gave me an idea — could I buy one of these and use it as the basis of a new cap?

These caps basically consist of a peak and headband attached to a dome made of 8 panels. I took apart the old faithful and traced out the shape of one of the panels.

Now, I already had the headband and peak sorted out from the sun visor I bought — and these aren’t hard to manufacture from scratch either. I just needed to cut out some panels from suitable material and stitch them together to make the dome.

There are a couple of parameters one can experiment with that change the visual properties of the cap. Gatsby caps could be viewed as an early precursor to the modern baseball cap. The prime difference is the shape of the panels.

Measurements of panel from old cap

The above graphic is also available as a PDF or SVG image.  The key measurements to note are A, which sets the head circumference, C which tweaks the amount of overhang, and D which sets the height of the dome.

The head circumference is calculated as ${panels}×${A}, so in the above case, 8 panels with a measurement of 80mm means a head circumference of 640mm. Hence it never quite fitted me (58cm is about my size). I figured a measurement of about 75mm would do the trick.

B and C are actually two of the three parameters that separate a gatsby from the more modern baseball cap. The other parameter is the length of the peak. A baseball cap sets these to make the overall shape much more triangular, increasing B to about half of D, and tweaking C to make the shape more spherical.

As for the overhang, I decided to increase it a bit, taking C to about 105mm. I left measurements B and D alone, making a fairly flattish dome.

For each of these measurements, once you come up with values that you’re happy with, add about 10mm to A, C and D for the actual template measurements to give yourself a fabric margin with which to sew the panels together.

As for material, I didn’t have any denim around, but on my travels I saw an old towel that someone had left by the side of the road — likely an escapee. These caps, back in the day, would have been made with whatever material the maker had to hand: brushed cotton, denim, suede leather and wool are all common materials. I figured this would be a cheap way to try the pattern out, and if it worked out, I’d then see about procuring some better material.

Below are the results; click on the images to enlarge. Because this was my first attempt, and I just roughly cut the panels from a hand-drawn template, the panels didn’t quite meet in the middle. This is hidden by making a small circular patch where the panels normally meet. Traditionally a button is sewn here. I sewed the patch from the underside so as to hide its edges.

Hand-made gatsby / Hand-made gatsby (underside)

Not bad for a first try, though I note I didn’t quite get the panels aligned dead centre — the seam between the front two is off centre by about 15mm. The design looks alright to my eye, so I might look around for some suede leather and see if I can make a dressier one for more formal occasions.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
Signal handler safety, re-entering malloc (January 04, 2013, 20:23 UTC)

This is a story from real-world development. From signal(7):


   Async-signal-safe functions
       A  signal  handler  function must be very careful,
       since processing elsewhere may be interrupted at some
       arbitrary point in the execution of the program.
       POSIX has the concept of "safe function".  If a signal
       interrupts the execution of an  unsafe  function,
       and handler calls an unsafe function, then the behavior
       of the program is undefined.


After that, a list of safe functions follows, and one notable thing is that malloc and free are async-signal-unsafe!

I hit this issue while enabling tcmalloc's debugallocation for Chromium Debug builds. We have a StackDumpSignalHandler for tests, which prints a stack trace on various crashing signals for easier debugging. It's very useful, and worked fine for a pretty long while (which means that "but it works!" is not a valid argument for doing unsafe things).

Now when I enabled debugallocation, I noticed hangs triggered by the stack trace display. In one example, this stack trace:

#0  0x00000000019c6c85 in tcmalloc::Abort () at third_party/tcmalloc/chromium/src/base/abort.cc:15
#1  0x00000000019b39c1 in LogPrintf (severity=-4,
    pat=0x32aeb18 "memory allocation/deallocation mismatch at %p: allocated with %s being deallocated with %s", ap=0x7fff52c379e8)
    at third_party/tcmalloc/chromium/src/base/logging.h:210
#2  0x00000000019b3a8b in RAW_LOG (lvl=-4,
    pat=0x32aeb18 "memory allocation/deallocation mismatch at %p: allocated with %s being deallocated with %s")
    at third_party/tcmalloc/chromium/src/base/logging.h:230
#3  0x00000000019c3fb1 in MallocBlock::CheckLocked (this=0x7fd18f143400, type=-21308287)
    at ./third_party/tcmalloc/chromium/src/debugallocation.cc:461
#4  0x00000000019c3c42 in MallocBlock::CheckAndClear (this=0x7fd18f143400, type=-21308287)
    at ./third_party/tcmalloc/chromium/src/debugallocation.cc:401
#5  0x00000000019c436a in MallocBlock::Deallocate (this=0x7fd18f143400, type=-21308287)
    at ./third_party/tcmalloc/chromium/src/debugallocation.cc:557
#6  0x00000000019c1929 in DebugDeallocate (ptr=0x7fd18f143420, type=-21308287)
    at ./third_party/tcmalloc/chromium/src/debugallocation.cc:998
#7  0x00000000028d1482 in tc_delete (p=0x7fd18f143420) at ./third_party/tcmalloc/chromium/src/debugallocation.cc:1232
#8  0x000000000097dc04 in cc::ResourceProvider::deleteResourceInternal (this=0x7fd191827da0, it=...) at cc/resource_provider.cc:242
#9  0x000000000097daaf in cc::ResourceProvider::deleteResource (this=0x7fd191827da0, id=1) at cc/resource_provider.cc:230
#10 0x00000000006f9824 in (anonymous namespace)::ResourceProviderTest_Basic_Test::TestBody (this=0x7fd18dc5abf0)
    at cc/resource_provider_unittest.cc:328
#11 0x00000000008ec801 in testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fd18dc5abf0,
    method=&virtual testing::Test::TestBody(), location=0x29463ab "the test body") at testing/gtest/src/gtest.cc:2071
#12 0x00000000008e9665 in testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void> (object=0x7fd18dc5abf0,
    method=&virtual testing::Test::TestBody(), location=0x29463ab "the test body") at testing/gtest/src/gtest.cc:2123
#13 0x00000000008dee0d in testing::Test::Run (this=0x7fd18dc5abf0) at testing/gtest/src/gtest.cc:2143
#14 0x00000000008df3ea in testing::TestInfo::Run (this=0x7fd191823020) at testing/gtest/src/gtest.cc:2319
#15 0x00000000008df8dc in testing::TestCase::Run (this=0x7fd19181f0d0) at testing/gtest/src/gtest.cc:2426
#16 0x00000000008e3eea in testing::internal::UnitTestImpl::RunAllTests (this=0x7fd19829dd60) at testing/gtest/src/gtest.cc:4249

generates a SIGSEGV (tcmalloc::Abort). This is just debugallocation having stricter checks on the usage of dynamically allocated memory. Now the StackDumpSignalHandler kicks in, and internally calls malloc. But we’re already inside malloc code, as you can see in the stack trace above (frame #7), and re-entering it tries to take locks that are already held, resulting in a hang.

The fix required several changes:
  • no dynamic memory, and that includes std::string and std::vector, which use it internally
  • no buffered stdio or iostreams, they are not async-signal-safe (that includes fflush)
  • custom code for number-to-string conversion that doesn't need dynamically allocated memory (snprintf is not on the list of safe functions as of POSIX.1-2008; it seems to work on a glibc-2.15-based system, but as said before this is not a good assumption to make); in this code I've named it itoa_r, and it supports both base-10 and base-16 conversions, and also negative numbers for base-10
  • warming up backtrace(3): now this is really tricky, and backtrace(3) itself is not whitelisted for being safe; in fact, on the very first call it does some memory allocations; for now I've just added a call to backtrace() from a context that is safe and happens before the signal handler may be executed; implementing backtrace(3) in a known-safe way would be another fun thing to do
Note that for the above, I've also added a unit test that triggers the deadlock scenario. This will hopefully catch cases where calling backtrace(3) leads to trouble.

For more info, feel free to read the articles below:

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Munin and IPv6 (January 04, 2013, 16:48 UTC)

Okay, here comes another post about Munin for those who are using this awesome monitoring solution (okay, I think I’ve been involved in upstream development more than I expected when Jeremy pointed me at it). While the main topic of this post is going to be IPv6 support, I’d first like to spend a few words on the context of what’s going on.

Munin in Gentoo has been slightly patched in the 2.0 series — most of the patches were sent upstream the moment they were introduced, and most of them have been merged for the following release. Some of them, though — including the first version of the patch bringing in my FreeIPMI plugin to replace the OpenIPMI plugins, and those dealing with changes that wouldn’t have been kosher for other distributions (namely, Debian) at this point — were not merged into the 2.0 branch upstream.

But now Steve has opened a new branch for 2.0, which means that the development branch (Munin does not use the master branch, for the simple logistic reason of having a master/ directory in Git, I suppose) is directed toward the 2.1 series instead. This meant not only that I could finally push some of my recent plugin rewrites, but also that I could make some deeper changes, including rewriting the seven Asterisk plugins into a single one, and working hard on the HTTP-based plugins (for web servers and web services) so that they use a shared backend, like SNMP. This actually completely solved an issue that, in Gentoo, we had only solved partially before: my ModSecurity ruleset blacklists the default libwww-perl user agent; with the partial fix, Munin advertises itself in the request, and with the new code it also includes the name of the plugin currently making the request, so that it’s possible to know which requests belong to what.

Speaking of Asterisk, by the way, I have to thank Sysadminman for lending me a test server for working on said plugins — this not only got us the current new Asterisk plugin (7-in-1!) but also let me modify said seven plugins just a tad, so that instead of using Net::Telnet, they just use IO::Socket::INET. This has been merged for 2.0, which in turn means that the next ebuild will have one less dependency, and one less USE flag — the asterisk flag for said ebuild only added the Net::Telnet dependency.

To the main topic — how did I get to IPv6 in Munin? Well, I was looking at which other plugins needed to be converted to “modernity” – which to me means re-using as much code as possible, collapsing multiple plugins into one through multigraph, and supporting virtual nodes – and I found the squid plugins. This was interesting to me because I actually have one squid instance running, on the tinderbox host, to avoid direct connections to the network from the tinderboxes themselves. These plugins do not use libwww-perl like the other HTTP plugins, I suppose (though I can’t be sure, for reasons I’m going to explain in a moment) because the cache://objects request that has to be done might or might not work with said library. Since, as I said, I have a squid instance, and these (multiple) plugins look exactly like the kind of target I was looking to rewrite, I started looking into them.

But once I started, I had a nasty surprise: my Squid instance only replies over IPv6, and that’s intended (the tinderboxes are only assigned IPv6 addresses, which makes it easier for me to access them, and have no NAT to the outside, as I want to make sure that all network access is filtered through said proxy). Unfortunately, by default, libwww-perl does not support accessing IPv6. And indeed, neither do most of the other plugins, including the Asterisk one I just rewrote, since they use IO::Socket::INET (instead of IO::Socket::INET6). A quick search around, and this article turned up — although then this also turned up, which relates to IPv6 support in the Perl core itself.

Unfortunately, even with the core itself supporting IPv6, libwww-perl seems to be of a different opinion, and that is a showstopper for me, I’m afraid. At the least, I need to find a way to get libwww-perl to play nicely if I want to use it over IPv6 (yes, I’m going to work around this for the moment and just write the new squid plugins against IPv4). On the other hand, using IO::Socket::IP would probably solve the issue for the remaining parts of the node, and that would for sure at least give us some better support. Even better, it might be possible to abstract this and have a Munin::Plugin::Socket that falls back to whatever we need. As it is, right now it’s a big question mark what we can do there.

So what can be said about the current status of IPv6 support in Munin? Well, the node uses Net::Server, which in turn does not use IO::Socket::IP, but rather IO::Socket::INET, or INET6 if installed — that basically means that the node itself will support IPv6 as long as INET6 is installed, and would call for using INET6 as well, instead of IO::Socket::IP — but the latter is the future and, for most people, will be part of the system anyway… The async support, in 2.0, will always use IPv4 to connect to the local node. This is not much of a problem, as Steve is working on merging the node and the async daemon into a single entity, which makes the most sense. Basically it means that in 2.1, all nodes will be spooled, instead of what we have right now.

The master, of course, also uses IPv6 — via IO::Socket::INET6 (yet another nail in the coffin of IO::Socket::IP? maybe) — and this covers all the communication between the two main components of Munin, which could be enough to declare it fully IPv6 compatible — that’s what 2.0 is saying. But alas, this is not the case yet. On an interesting note, the fact that Munin supports arbitrary commands as transports, as long as they provide an I/O interface to the socket, makes its native IPv6 support quite moot. Not only do you just need an IPv6-capable SSH to handle it, you could probably even use SCTP instead of TCP simply by using a hacked-up netcat! I’m not sure whether monitoring would gain anything from using SCTP, although I guess it might overcome some of the overhead related to establishing the connection — but that’s a different story.

Of course, Munin’s own framework is only half of what has to support IPv6 for it to be properly supported; the heart of Munin is the plugins, which means that if they don’t support IPv6, we’re dead in the water. Perl plugins, as noted above, have quite a few issues with finding the right combination of modules for supporting IPv6. Bash plugins, and indeed plugins in any other language, support IPv6 only as well as the underlying tools do — even though libwww-perl does not work with IPv6, plugins written with wget work out of the box on an IPv6-capable wget… but of course, the gains we get from using Perl are major enough that you don’t want to go that route.
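
As an illustration of that last point, here is a minimal shell plugin sketch of mine (not something shipped with Munin; the URL and field name are invented). It speaks the usual config/fetch protocol, and IPv6 comes for free as long as the local wget supports it:

#!/bin/sh
# Hypothetical Munin shell plugin: scrapes a mod_status-style page.
URL="http://[2001:db8::1]/server-status?auto"

case $1 in
config)
    # describe the graph when munin-node asks for the configuration
    echo 'graph_title Apache accesses'
    echo 'graph_vlabel accesses per second'
    echo 'accesses.label accesses'
    echo 'accesses.type DERIVE'
    echo 'accesses.min 0'
    exit 0
    ;;
esac

# default invocation: print the current counter value
wget -q -O - "$URL" | awk '/^Total Accesses:/ { print "accesses.value " $3 }'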

All in all, I think what’s going to happen is that as soon as I’m done with the weekend’s work (which is quite a bit, since Friday was filled with a couple of server failures, and me finding out that one of my backups was not working as intended), I’ll prepare a branch and see how much of IO::Socket::IP we can leverage, and whether wrapping around it would help us with the new plugins. We’ll see where this leads us — maybe 2.1 will really be 100% IPv6 compatible…

January 02, 2013
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Cat stuck in the Christmas tree (January 02, 2013, 16:52 UTC)

Since the holidays are over, I decided to go back through some of the emails that I had received. Though I got a bunch of them with really funny cartoons, I found this one to be the best:

Long story, just pull - cat in the Christmas tree

The whole situation is hysterical to me, but the photo in the background makes it. The expression on the kid’s face fits perfectly; it’s the look of “Oh well, the cat’s in the tree again!”

Cheers,
Zach

Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Get Involved in Gentoo Linux (January 02, 2013, 13:31 UTC)

Nowadays I see lots of new blog posts about how to contribute to open source projects, so I decided to write one about how to contribute to Gentoo Linux and become a vital part of the project.

Every time my colleagues at university and I talk about Gentoo, they tell me that they cannot install Gentoo because it is too difficult for them, or that they are not ready to install and configure it because they don’t have the experience, so they finally give up. Some other colleagues tell me that they want to contribute to Gentoo but don’t know where to start. That’s why I wrote this blog post: to give some guidelines for those who want to contribute.

To help and contribute to Gentoo, you don’t have to know how to code or be a super-duper Linux guru. Of course code is the core of open source projects, but there are ways to contribute without programming. The requirements are two things: a Gentoo installation and the will to help.

Community

Gentoo, like the rest of the FOSS world, is built on volunteer effort. The pillar of every FOSS project is its community; without its community, Gentoo wouldn’t exist. Even someone who doesn’t know how to code can contribute to, and learn from, the project’s community.

Forums: Join our forums and help other users with their problems. It is also a good opportunity to learn more about Gentoo.

Mailing Lists: Subscribe to our mailing lists and follow the latest community and development news of the project. Everyone can also help users on the relevant mailing lists or discuss with Gentoo developers.

IRC: Join our IRC channels. Help new users with their issues. Discuss with users and developers and express your opinion about new features and the technical issues of the project. Make sure you read our Code of Conduct first.

Planets: Follow our planet and read Gentoo-related blog posts from developers. Interesting conversations between users and developers often develop in the comments.

Promote: After you get some experience with the project, promote your favourite distro (Gentoo, of course) by writing blog posts and articles in forums and on sites related to open source. You can also spread the word in your local Linux users group and at your university.

Participate in Events: Most of the Gentoo project teams hold monthly meetings, which take place in #gentoo-meetings. There is an ‘open floor’ at the end of each meeting where users can express their opinions.

Documentation

Gentoo has always been known for the breadth and quality of its documentation. It covers lots of aspects of Linux — topics about the desktop, software and security — and most of it is not totally Gentoo-specific. That’s the reason Gentoo documentation is successful, and that’s why users of other Linux distributions use it too. So you can be a part of this effort and improve the documentation.

Wiki: The wiki is our freshest project, and there are lots of ways to help here. Add new articles about the topics you would like to see (and have knowledge of, of course) and want to share with the other Gentoo users. Improving and expanding wiki articles is a good way to help the project (avoid copy-pasting from other sources on the net). All users are encouraged to help; the wiki is open to everyone. Use it responsibly, because your posts will affect the Gentoo users who try to follow your guides.

Translations: If English is not your native language, translating the wiki and documentation is a very good way to help users who don’t know English and want to join the community. Translation is a good way to contribute and to expand the Gentoo community.

(bonus) Write articles on your blog: If you find a configuration, a tool, or a new solution to a problem that saved your life in the Gentoo world, don’t be afraid to share it with the other users.

Development ( Code )

As I said, code is the core of any software project. So if you have some knowledge of shell scripting and programming, you are welcome to join the team. With small steps you can gain more experience with the project and contribute your own features and patches.

Bugs: Every FOSS project has its own bug tracking system, and Gentoo likewise has its own Bugzilla. That is where we report our issues: build and run time failures, kernel problems, Gentoo tool issues, stabilization requests. You can also start contributing by confirming and reproducing bugs, and then trying to offer solutions and fix them (patches are welcome). So feel free to report new bugs to our Bugzilla. In addition, there are requests to add or update (version bump*) ebuilds. Instead of requesting new ebuilds and version bumps, you can also write your own ebuilds and submit them to our Bugzilla to be added to the Portage tree by a Gentoo developer (a minimal skeleton is sketched after the note below). Try picking up a bug from the maintainer-wanted alias. If you need a review for your ebuild, #gentoo-dev-help is the right place to ask.

* Please avoid 0day bump requests.
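
Not part of the original advice, but to give a rough idea of what you would be writing, here is a minimal ebuild skeleton (the package name, URLs and everything else in it are hypothetical; the Gentoo Development Guide has the authoritative rules):

# Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2

EAPI=4

DESCRIPTION="Short description of the hypothetical package foo"
HOMEPAGE="http://www.example.org/foo"
SRC_URI="http://www.example.org/releases/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"

DEPEND=""
RDEPEND="${DEPEND}"

# For a well-behaved autotools package, the default src_configure,
# src_compile and src_install phases are often enough; add phase
# functions only when the build needs tweaking.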

Arch Tester: An Arch Tester (a.k.a. AT) is a trustworthy user capable of testing an application to determine its stability. Arch Testers should have a good understanding of how ebuilds work and of bash scripting, and should test lots of packages on their arch. You can become an AT on the x86 and amd64 arches. The requirement is a stable Gentoo box. Your goal will be to install and test packages from the testing branch (~arch) and see if they are ready for the stable branch. Then you can open a stabilization request in Bugzilla.
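
Roughly, the day-to-day flow looks like this (a sketch; the package atom is made up, and newer Portage versions also accept package.accept_keywords as the file name):

# accept the ~arch (testing) version of one package on a stable box
echo "app-misc/foo" >> /etc/portage/package.keywords
emerge -av app-misc/foo
# exercise the package and its reverse dependencies, then report your
# findings on the stabilization bug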

Sunrise Project: Sunrise is a starting point for Gentoo users who want to contribute. The Sunrise team encourages users to write ebuilds and makes sure that they follow Gentoo QA standards. Sunrise’s goal is to allow non-developers to maintain ebuilds. For questions, you can ask in #gentoo-sunrise on Freenode.

Proxy-maintaining: The goal of this team is to maintain abandoned (orphaned) packages in order to prevent the treecleaners from removing them. Pick up some packages from the maintainer-needed list and begin to maintain them. For questions, you can join #gentoo-dev-help.

Bugday: Bugday is an event which takes place in #gentoo-bugs on Freenode on the first weekend of every month. You can join, pick a bug, and fix it. But keep in mind that every day is a bugday, so it doesn’t have to be an official bugday for you to add your ebuild and fix bugs.

Advanced Community Projects: Portage and Gentoolkit (the Portage tools), the Kernel team, the Infrastructure team, the Security team and the Hardened team. These projects are very special and important to Gentoo, so a good level of knowledge is necessary in order to contribute to them. If you have the skills, join the party. :)

Become a developer: After you reach a good amount of contributions and you think you can be an active and vital member of the project, you can start the process of becoming a developer. Ask a Gentoo developer to mentor you and help you fill in the ebuild and staff quizzes; the recruiting process is then completed with a live interview with a recruiter.

There are lots of Gentoo project teams that need new members and help. Everyone can contribute to Gentoo, whether they know how to code or not. Every piece of help is useful for the project.

I think I covered the biggest part of Gentoo and how to contribute to it. I’ll wait for your comments; if you think I missed something, let me know. Fixes are always welcome.

Start contributing from today.

Gentoo: If it moves, compile it ;)

Further reading:

  1. Gentoo Handbook
  2. Gentoo Development Guide
  3. Gentoo Projects Listing
  4. Benefits of Gentoo
  5. Easy way to assist us by Markos Chandras
  6. How to contribute to Gentoo
  7. Beautiful bug reports
  8. Sunrise Project (lots of good tutorials inside)
  9. Always looking for Arch Testers by Agostino Sarubbo

Thanks, it’s time to push Sabayon farther (January 02, 2013, 10:00 UTC)

I want to take a few moments from my deserved Christmas break to say thanks to all the donors who have contributed to our last fundraiser. After 1.5 years, we’ve been able to hit our €5000 goal. This is a big, I mean really big, achievement for such a small (I am not so sure now) but awesome distro like ours.

We’ve always wanted to bring Gentoo to everyone, to make this awesome distro available on laptops, servers and, of course, desktops without the need to compile — without the need for a compiler! It turns out that we’re getting there.

So, the biggest part of the “getting there” strategy was to implement a proper binary package manager and to start automating the distro development, maintenance and release process.
Even though Entropy is in continuous development mode, we’ve got to the point where it’s reliable enough. Now, we must push Sabayon even farther.
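
For those who have never seen it, the client side of Entropy looks roughly like this (a sketch; the package name is just an example):

equo update                        # refresh the binary repositories
equo install media-sound/audacity  # install a prebuilt package
equo upgrade                       # full system upgrade, no compiler needed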

Let me keep the development ideas I had for a separate blog post and tell you here what’s been done, what we’re going to do and what we still need in 2013.

First things first: last year we bought a new and shiny build server, which is kindly hosted by the University of Trento, Italy, featuring a Rack 2U dual octa-core Opteron 6128, 48GB of RAM and, since earlier last year, 2x240GB Samsung 830 SSDs. In order to save (a lot of) money, I built the server myself and spent something like 2500€ (including the SSDs). Take into consideration that prices for hardware in the EU are much higher than in the US.

Now we’re left with something like 3000€ or more and we’re planning to do another round of infra upgrades, save some money for hardware replacement in case of failures, buy t-shirts and DVDs to give out at local events, etc.

So far, the whole Sabayon infrastructure is spread across 3 Italian universities and TOP-IX (see the bottom of http://www.sabayon.org for more details) and consists of four Rack 1U servers and one Rack 2U. Whenever there’s a problem, I jump in a car and fix the issue myself (PSU, RAM, HDD/SSD failures and the like) or kindly delegate the task to friends living closer.

As you can imagine, it’s easy to burn 200-300€ whenever there’s a problem, and while we have failover plans (to EC2), these come with a cost as well.
As you may have already realized, free software does not really come for free, especially for those who actually maintain it. Automation, and scaling out across the multiple people involved in the development of this distro, are the key — the former in particular, because it reduces the impact of “human error” on the whole workflow.

As I mentioned above, I will prepare a separate blog post about what I mean by “automation”. For now, enjoy your Christmas holidays, the NYE celebrations and, why not, some gaming with Steam on Sabayon.


During the last few weeks, I spent several nights playing with UEFI and its extension called UEFI SecureBoot. I must admit that I have mixed feelings about UEFI in general: on one hand, you have a nice and modern “BIOS replacement” that can boot .efi files with no need for a bootloader like GRUB; on the other hand, some hardware, not even the most exotic, is not yet glitch-free. But that’s what happens with new stuff in general. I cannot go into much detail without drifting away from the main topic, but surely enough, a simple Google search about UEFI and Linux will point you to the problems I just mentioned.

But hey, what does it all mean for our beloved Gentoo-based distro named Sabayon? Since the DAILY ISO images dated 20121224, Sabayon can boot on UEFI systems, from DVD and USB (thanks to isohybrid --uefi) and, surprise surprise, with SecureBoot turned on! I am almost sure that we’re the first Linux distro supporting SecureBoot out of the box (update: using shim!) and I am very proud of it. This is of course thanks to Matthew Garrett’s shim UEFI loader, which chainloads our signed UEFI GRUB2 image.

The process is simple and works like this: you boot a UEFI-compatible Sabayon ISO image from DVD or USB; if SecureBoot is turned on, shim will launch MokManager, which you can use to enroll our distro key, called sabayon.der and available on our image under the “SecureBoot” directory. Once you’ve enrolled the key, on some systems you’re forced to reboot (I had to on my shiny new Asus Zenbook UX32VD), but then the magic happens.

There is a tricky part, however. Due to the way GRUB2 .efi images are generated (at install time, with settings depending on your partition layout and platform details), I had to implement a slightly nasty way to ensure that SecureBoot can still accept such platform-dependent images: our installer, Anaconda, now generates a hardware-specific SecureBoot keypair (private and public key), and our modified grub2-install automatically signs every .efi image it generates with that key, which is placed into the EFI boot partition under EFI/boot/sabayon, ready to be enrolled by shim at the next boot.
This is sub-optimal, but after several days of messing around, it turned out to be the most reliable, cleanest and easiest way to support SecureBoot after installation without disclosing the private key we use to sign our install media. Another advantage is that our distro keypair, once enrolled, will allow any Sabayon image to boot, while we still give our users full control over the installed system (by generating a platform-specific private key at install time).
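
For the curious, the signing step can be reproduced by hand along these lines (my sketch using the sbsigntools utilities, not Sabayon’s actual code; all file names are illustrative):

# generate a keypair and export the certificate in the DER format that
# MokManager expects for enrollment
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Example SecureBoot key/" \
    -keyout SecureBoot.key -out SecureBoot.crt
openssl x509 -in SecureBoot.crt -outform DER -out SecureBoot.der
# sign a GRUB2 EFI image with the generated key
sbsign --key SecureBoot.key --cert SecureBoot.crt \
    --output grubx64.efi.signed grubx64.efi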

SecureBoot is not that evil after all: my laptop came with Windows 8 (which I ripped off completely) and SecureBoot disabled by default, and it lets anyone sign their own .efi binaries from the “BIOS”. I don’t see how my freedom could be affected by this, though.


January 01, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Right at the start, the new year 2013 brings the pleasant news that our manuscript "Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips" has found its way into the Journal of Applied Physics. The background of this work is - once again - spin injection and spin-dependent transport in carbon nanotubes. (To be more precise, the manuscript resulted from our ongoing SFB 689 project.) Control of the contact magnetization is the first step for all these experiments. Some time ago we picked Pd0.3Ni0.7 as contact material, since the palladium generates only a low resistance between the nanotube and its leads. The behaviour of the contact strips fabricated from this alloy turned out to be rather complex, though, and this manuscript summarizes our results on their magnetic properties.
Three methods are used to obtain data: SQUID magnetization measurements of a large ensemble of lithographically identical strips, anisotropic magnetoresistance measurements of single strips, and magnetic force microscopy of the resulting domain pattern. All measurements are consistent with the rather non-intuitive result that the magnetically easy axis is perpendicular to the geometrically long strip axis. We can explain this by magneto-elastic coupling, i.e., stress imprinted during fabrication of the strips leads to preferential alignment of the magnetic moments orthogonal to the strip direction.

"Transversal Magnetic Anisotropy in Nanoscale PdNi-Strips"
D. Steininger, A. K. Hüttel, M. Ziola, M. Kiessling, M. Sperl, G. Bayreuther, and Ch. Strunk
accepted for publication by Journal of Applied Physics, arXiv:1208.2163 (PDF)

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Autotools Mythbuster: automake pains (January 01, 2013, 17:42 UTC)

And we start the new year with more Autotools Mythbusting — although in this case it’s not with the help of upstream, who actually seem to have made it more difficult. What’s going on? Well, there have been two releases already, 1.13 and 1.13.1, and the changes are quite “interesting” — or to use a different word, worrisome.

First of all, there are two releases because the first one (1.13) removed two macros (AM_CONFIG_HEADER and AM_PROG_CC_STDC) that were not deprecated in the previous release. After a complaint from Paolo Bonzini related to a patch to sed to get rid of the old macros, Stefano decided to re-introduce the macros as deprecated in 1.13.1. What does this tell me? Well, two things mainly: the first is that this release was rushed out without enough testing (the beta for it was released on December 19th!). The second is that there is still no proper process for deprecating features, with clear deadlines for when they are to disappear.

This impression is further strengthened by some of the deprecations that appear in this new release, and some of the removals that did not happen at all.

This release was supposed to mark the first one not supporting the old-style name of configure.in for the autoconf input script — if you have any project still using that name, you should update now. For some reason – none of which was discussed on the automake mailing list, unsurprisingly – it was decided to postpone this to the next release. It is still a perfectly good idea to rename the files now, but you can probably get pissed easily if you felt pressured into getting ready for the new release, and then the requirement was dropped without further notice.
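
If you maintain a package that is still on the old name, the switch itself is a one-liner (a sketch, assuming a git-managed tree):

git mv configure.in configure.ac
autoreconf -fi   # regenerate the build system and check nothing broke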

Another removal that was supposed to happen with this release was the three-parameter AM_INIT_AUTOMAKE call, which substitutes for the parameters of AC_INIT instead of providing the automake options. This macro is, though, still commonly used by packages that calculate their version number dynamically, such as from the Git repository itself, as it’s not possible to pass a variable version to AC_INIT. Now, instead of just marking the feature as deprecated but keeping it around, the situation is that the syntax is no longer documented but is still usable. Which means I have to document it myself, as I find it extremely stupid to have a feature that is not documented anywhere, but is found in the wild. It’s exactly for bad decisions like this that I started Autotools Mythbuster.

This is not much different from what has happened with the AM_PROG_MKDIR macro, which was supposed to be deprecated/removed in 1.12, with the variables being kept around for a little longer — first it ended up completely messed up in 1.12, to the point that the first two releases of that series dropped the variables that were supposed to stay around, and now the removal of the macro (but not of the variables) is scheduled for 1.14 because, among others, GNU gettext is still using it — the issue has been reported, and I also think it has been fixed in Git already, but there is no new release, nor a date for when it will be fixed in a release.

All of this is already documented in Autotools Mythbuster even though there is more work to do.

Then there are things that changed, or were introduced, in this release. First of all, silent rules are no longer optional — this basically means that the silent-rules option to the automake initialization is now a no-op, and the generated makefiles all have the silent rules harness included (though not enabled by default, as usual). For me this meant rewriting the related section, as now there is one more variant of automake to support. Then there is finally support in aclocal for selecting the macro directory in configure.ac — unfortunately this meant I had to rewrite another section of my guide to account for it, and now both the old and the new method are documented there.

There are more notes in the NEWS file, and more things that are scheduled to appear in the next release, and I’ll try to cover them in Autotools Mythbuster over the next week or so — I expect this time I’ll need to get into the details of Makefile.am, something I have tried to avoid up to now. It’s quite a bit of work, but it might be what makes the difference for the many autotools users out there, so I really can’t avoid the task at this point. In the meantime, I welcome all support, be it through patches, suggestions, Flattr, Amazon or whatever else — the easiest way is to show the guide around: not only will it reduce the headaches for me and the other distribution packagers to have people actually knowing how to work with autotools, but the more people know about it, the more contributions are likely to come in. Writing Autotools Mythbuster is far from easy, and sometimes it’s not enjoyable at all, but I guess it’s for the best.

Finally, a word about the status of automake in Gentoo — I’m leaving it to Mike to bump the package in the tree; once he’s done that, I’ll prepare to run a tinderbox with it — hopefully just rebuilding the reverse dependencies of automake will be enough, thanks to autotools.eclass. By the time the tinderbox is running, I hope to have all the possible failures covered in the guide, as it’ll make the job of my Gentoo peers much easier.

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy New Year – 2013 (January 01, 2013, 15:42 UTC)

Just wanted to take a quick moment and wish everyone a Happy New Year! It’s that day where we can all start anew, and make resolutions to do this or that (or to not do this or that :razz: ). My resolution is to get back to updating my blog on a regular basis. I don’t know that it will be nearly every day like it was before I moved, but I’m going to try to post often (the backlog of topics is getting quite large).

Anyway, Happy 2013 to all!

Cheers,
Zach

December 31, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Why would paid-for support be better? (December 31, 2012, 20:46 UTC)

Last Saturday evening, I sent an e-mail to a low-volume mailing list regarding IMA problems that I’m facing. I wasn’t expecting an answer very fast, of course — holidays, a weekend, and a low-volume mailing list. But hey, it is the free software world, so I should expect some slack on this, right?

Well, not really. I got a reply on Sunday — and not just an acknowledgement e-mail, but a to-the-point answer. It was immediately correct, described why, and helped me figure things out further. And this is not a unique case in the free software world: because you are dealing with the developers and users who have written the code that you are running/testing, you get a bunch of very motivated souls, all looking at your request when they can, and giving input when they can.

Compare that to commercial support from bigger vendors: in those cases, your request probably gets read by a single person whose state of mind is difficult to know (though from the communication you often get the impression that they either couldn’t care less or are swamped with requests, so they cannot devote enough time to yours). In most cases, they check that the request contains the right amount of information in the right format in the right fields, or even ignore that you did all that right and just ask you for (the same) information again. And who knows how many times I’ve had to “state your business impact”.

Now, I know that commercial support from bigger vendors has the burden of a huge overload in requests, but is that truly so different in the free software world? Mailing lists such as the Linux kernel mailing list (for kernel development) get hundreds (thousands?) of mails a day, and those with requests for feedback or with questions get a reply quite swiftly. Mailing lists for distribution users get a lot of traffic as well, and each and every request is handled with due care and responded to within a very good timeframe (24 hours or less most of the time, sometimes a few days if the user is using a strange or exotic environment that not everyone knows how to handle).

I think one of the biggest advantages of the free software world is that the requests are public. That both teaches the many users on those mailing lists and forums how to handle problems they haven’t seen before, and allows users to look for an existing answer before reporting a problem. Everybody wins with this. And because it is public, many users happily answer more and more questions, because they get the visibility (with acknowledgements) they deserve: they gain a position in that particular area that others respect, because we can see how much effort (and what good results) they contributed earlier on.

So kudos to the free software world, a happy new year – and keep going forward.

December 30, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Finding IDs to submit (December 30, 2012, 18:02 UTC)

I have written a lot about the hardware IDs, but I haven’t said much about submitting new entries to the upstream databases. Indeed, the package just mirrors the data collected by the USB and PCI databases, which are managed by Stephen, Martin and Michal.

As an example, I’ll show you how I’ve been submitting the so-called Subsystem IDs for PCI devices from computers I either own, or fix up for customers and friends.

First off, you have to find a system or device whose subsystem IDs have not been submitted yet. Unfortunately I don’t have any computer at hand that I haven’t already submitted to the database. But fear not — it so happens I have an interesting opening. I recently rented a server from OVH, as I’ve had some trouble with one of my production hosts lately, and I’m entertaining the idea of moving everything to a new server and service altogether. But the whole thing is a topic for a completely different time. In any case, let’s see what we can do about these IDs now that I have an interesting system at hand.

First of all, while I don’t have the server at hand to know what’s in it, OVH does tell me what hardware is in it — in particular, they tell me it’s an Intel D425KT board (yes, I got a Kimsufi Atom; I have a three-month lease for now and I’ll see if it can perform decently enough), so that’s a start. Alternatively, I could have asked dmidecode — but I just don’t have it installed on that server right now.

The first step is to look at what lspci -v says:

00:00.0 Host bridge: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx DMI Bridge
        Subsystem: Intel Corporation Device 544b
        Flags: bus master, fast devsel, latency 0
        Capabilities: [e0] Vendor Specific Information: Len=08 <?>

This is of course only the first entry in the list, but it’s still something. You can see on the second line that it says “Subsystem: Intel Corporation Device 544b” — that means it knows the subsystem vendor (ID 8086, I can tell you by heart — they have been funny at that), but it doesn’t know the subsystem device. So it’s what we’re looking for: an unknown system! Time to compare with the output of lspci -vn — that one does not resolve the IDs, which we’ll need in order to submit to the PCI database (if you’re not registered already, do register, so that you can submit entries to begin with).

00:00.0 0600: 8086:a000
        Subsystem: 8086:544b
        Flags: bus master, fast devsel, latency 0
        Capabilities: [e0] Vendor Specific Information: Len=08 <?>
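
By the way, if you want to spot every unresolved subsystem on a box in one pass, a quick grep over the verbose output does the trick (a rough sketch; lspci falls back to printing “Device xxxx” whenever pci.ids has no name for it):

lspci -v | grep -B 2 'Subsystem: .* Device [0-9a-f]\{4\}$'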

Okay, so now we know that our first device is Intel’s (VID 8086) and has a000 as its device ID — which brings us to https://pci-ids.ucw.cz/read/PC/8086/a000 — easy, isn’t it? At the end of the page there’s a list of the known subsystem IDs; pending submissions do not show their name, but they show up in the table with a darker gray background. All PCI ID entries are moderated by hand by the database’s maintainers. When you read this, the entry for my board will already be in, but right now it isn’t — if it wasn’t obvious, I’m looking for an entry that reads 8086 544b (which is under “Subsystem” above).

Now the form requires just a few words: the ID itself – which is 8086 544b with a space, not a colon – and a name. The note field is for something that needs to be written in the pci.ids file itself, so in most cases it should be left empty. The discussion field is for when you want to comment on the certainty of your submission; for my laptop, for instance, we had some trouble with “Intel Corporation Device 0153” — which is now officially “3rd Gen Core Processor Thermal Subsystem”.

The name I’m going to submit is “Desktop Board D425KT” as that’s what the other entry in the database for that device uses as a format — okay it actually uses DeskTop but I’d rather not capitalize another T and see a kitten cry.

Now it’s time to go through all the other entries in the system — yes, there are many of them, and most of the time the IDs are not listed in the order of the PCI connections, so be careful. More interestingly, not all the subsystems are going to be listed on the same line. Indeed, the third entry that I have is this:

00:1c.0 0604: 8086:27d0 (rev 01) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00001000-00001fff
        Memory behind bridge: e0f00000-e12fffff
        Prefetchable memory behind bridge: 00000000e0000000-00000000e00fffff
        Capabilities: [40] Express Root Port (Slot+), MSI 00
        Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [90] Subsystem: 8086:544b
        Capabilities: [a0] Power Management version 2
        Capabilities: [100] Virtual Channel
        Capabilities: [180] Root Complex Link
        Kernel driver in use: pcieport

The subsystem ID is listed under “Capabilities” instead — but it’s always the same. This is actually critical: if the subsystem does not match, it means that it’s coming from a different component — for instance, if you’re building your own computer, the subsystem of the internal CPU devices and those of the motherboard will not match, as they come from different vendors. The same goes for add-on cards (PCI, PCI-E, AGP, …).
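If you want to collect all the subsystem IDs of a machine in one go, wherever they appear in the output, something along these lines works (a sketch):

lspci -vn | grep -o 'Subsystem: [0-9a-f]*:[0-9a-f]*' | sort -u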

Sometimes, a different subsystem is also available on internal components that get different names from the motherboard itself — in this case, the Realtek network card on this motherboard reports a completely different ID and I really don’t know how to submit it:

01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
        Subsystem: Intel Corporation Device d626
        Flags: bus master, fast devsel, latency 0, IRQ 44
        I/O ports at 1000 [size=256]
        Memory at e0004000 (64-bit, prefetchable) [size=4K]
        Memory at e0000000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable- Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number 01-00-00-00-36-4c-e0-00
        Kernel driver in use: r8169

If for whatever reason you make a mistake, you can click on the “Discuss” link on the submitted content and edit the name that you want to submit. I did make such a mistake while submitting the IDs for this system.

So those are the tricks… happy submitting!

Unfortunately, every time we have a big list to keyword or stabilize, repoman complains about missing packages. So, in this post I will show you how to avoid this problem.

First, please download the batch-pretend script from my overlay.
I’m not a Python programmer, but I was able to edit the script made by Paweł Hajdan: I just deleted the Bugzilla commit part, and I made the script print the repoman full output if the list is not complete.
This script works only with =www-client/pybugz-0.9.3.

Now, to check if repoman will complain about your list, you need to do:
./batch-pretend.py --arch amd64 --repo /home/ago/gentoo-x86 -i /tmp/yourlist

where:

  • batch-pretend.py is the script (obviously);
  • amd64 is the arch that you want to check. You will use ~amd64 for the keywordreq;
  • /home/ago/gentoo-x86 is the local copy of the CVS;
  • /tmp/yourlist is the list which contains the packages;

A few useful notes:

If you want to check several arches, you can use a simple for loop:
for i in amd64 x86 sparc ppc ; do
./batch-pretend.py --arch "${i}" --repo /home/ago/gentoo-x86 -i /tmp/yourlist
done

The script will run ekeyword, so it will touch your local CVS copy of gentoo-x86. If this is not your intention, please make another copy and work there, or don’t forget to run cvs up -C.

Before doing this work, you need to run cvs up in the root of your gentoo-x86 local CVS.

The list must be structured like this:
# bug #445900
=app-portage/eix-0.27.4
=www-client/pybugz-0.9.3
=dev-vcs/cvs-1.12.12-r6
#and so on..

December 29, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Flashing a Kindle Fire with CyanogenMod (December 29, 2012, 22:07 UTC)

Those of you that follow me on Google Plus (or Facebook) already know this, but the other day I was wondering about whether I should have flashed my Kindle Fire (first generation) with CyanogenMod instead of keeping it with the original Amazon operating system. This is the tale of what I did, which includes a big screwup on my part.

But first, a small introduction. I’m the first person to complain about people “jailbreaking” iPhones and the like, as I think that if you have to modify something you bought just to make it useful, then you shouldn’t have bought it in the first place. Especially if you use the name “jailbreak” to justify an act that most of the public performs simply to pirate software — I firmly maintain that if we want Free Software licenses to be respected, we have to consider EULAs just as worthy of respect; that is, you can argue that they are evil, but you can’t call for disrespecting them.

But I have made exceptions before, and this mostly happens when the original manufacturer “forgets” to provide updates, or fails to follow through with promised features. An example of this for me was when I bought an AppleTV, hoping that Apple would keep their promise of entering the European market for TV series and movies so that the device would become useful. While they now do have something, they don’t offer the ability to buy content to watch in the original English (which makes it useless to me), and even that came only after I had decided to drop the device because it wasn’t keeping up with the rest of the world. At the time, to avoid having to throw the device away, I ended up using the hacking procedure to turn it into an XBMC device.

So in this case the problem was that, after coming back home from Los Angeles, I barely touched the Kindle Fire at all. Why? Well, even though I did buy season passes for some TV series (Castle, Bones, NCIS), which would allow me to stream them on Linux (unlike Apple’s store, which only works on their devices or with their software, and unlike Netflix, which does not work on Linux) and download them to the Kindle Fire, neither option works outside of the United States — so to actually download the content I paid for, I have to use a VPN.

While it’s not straightforward, it’s possible to set up a VPN connection from Linux for the iPad, and have it connect to Amazon through said VPN; there is no way to do so on the Kindle Fire (there’s no VPN support at all). So I ended up leaving it untouched, and after a month I was concerned about my purchase. So I started considering what the compelling features of the Kindle Fire were compared to any other Android-based tablet. It mostly came down to the integration with Amazon: the books, the music and the videos (TV series and movies).

For what concerns the books, the Kindle app for Android is just as good as the native one — the only thing missing is the “Kindle Owners’ Lending Library”, but since I rarely read books on the Fire, that’s not a big deal (I have a Kindle Keyboard that I read books on). For the music, while I did use the Fire a few times to listen to it, it’s not a required feature, as I have an iPod Touch for that, which also comes with an Amazon MP3 application.

There is also the integration with the Amazon App Store, but that’s something that tries to make up for the lack of Google Play support — and in general there isn’t that much content in there. Lots of applications, even when available, are compatible with my HTC Desire HD but not with the Kindle Fire, so what’s the point? Audiobooks are not handled natively — they go through the Audible application, which is available on Google Play, and also on my iPod Touch, so that’s no argument for the Fire either.

So, about the videos — that’s actually the sole reason why I ordered it. While it is possible to watch the streamed videos on Linux, Flash would take over my monitor and not let me work while watching something, so I wanted a device I could stream the videos to and watch on… a couple of months after I bought the Fire, though, Amazon released an Instant Video application for the iPad, making it quite moot. Especially since the iPad has the VPN access I noted before, and I can connect the HDMI adapter to it and watch the streams on my 32" TV.

All this considered, the videos were the only thing that would really be lost if I stopped using the Amazon firmware. So I looked it up and found three guides – 1 2 3 – that would get me set up with an Android 4.1, CyanogenMod 10 based ROM. Since the device is very simple (no bluetooth, no GPS, no baseband, no NFC), supporting it should be relatively easy; the only problem, as usual, is to make sure you can root and flash it.

Unfortunately, when I went to flash it, I made a fatal mistake: instead of flashing the bootloader’s image (a modified u-boot), I flashed its zip file. And the device wouldn’t boot up anymore. Thankfully, there are people like Christopher and Vladimir who pointed me at the fact that the CPU in that tablet (TI OMAP) has a USB boot option — but it requires shorting one very tiny, nigh-microscopic pad on the main board to ground, so that it tries to boot from there. Lo and behold, thanks to a friend of mine with less shaky hands who happened to be around, I was able to follow the guide to unbrick the device, and got the CM10 ROM onto it.

Now that I finally have an Android 4 device (the HTC is still running the latest available CM7 — if somebody has a suggestion for a CM10 ROM that does not add tons of customization, and that doesn’t breach the Google license by bundling the Google Apps, I’d be happy to update), I’ve been able to test Chrome for Android, and VLC as well — and I have to say they’re improving tons. Of course there are still quite a few things that are not really polished (for example there is no Flickr application that can run there!), but it’s improving.

If I were to buy a new tablet tomorrow, though, I would probably buy a Samsung Galaxy Note 10 — why? Well, because I finally got hold of a demo unit at the local Mediamarkt Mediaworld, and the pen accessory is very nice to use, especially if you’re used to Wacom tablets; that would give a 10" tablet a purpose for me. I’m a bit upset with my iPad’s inability to do precise drawing, to be honest. And since it’s not very commonly known: the Galaxy Notes don’t use capacitive pens, but magnetic ones just like the above-noted Wacoms — that’s why they are so precise.

Sven Vermeulen a.k.a. swift (homepage, bugs)
IMA and EVM on Gentoo, part 2 (December 29, 2012, 21:42 UTC)

I have been playing with Linux IMA/EVM on a Gentoo Hardened (with SELinux) system for a while and have been documenting what I think is interesting/necessary for Gentoo Linux users when they want to use IMA/EVM as well. Note that the documentation of the Linux IMA/EVM project itself is very decent. It’s all on a single wiki page, but it’s decent and I learned a lot from it.

That being said, I do have the impression that the method they suggest for generating IMA hashes for the entire system does not always work properly. It might be because of SELinux on my system, but for now I’m searching for another method that does seem to work well (I’m currently trying my luck with a find … -exec evmctl based command). But once the hashes are registered, it works pretty well (well, there’s probably a small SELinux problem where loading a new policy or updating the existing policies seems to generate stale rules, so I have to reboot my system, but I’ll find the culprit of that soon ;-)
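Just to give an idea of the shape of such a command (illustrative only, not my final invocation; the filters will differ per system):

find / -xdev -type f -exec evmctl ima_hash '{}' \;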

The IMA Guide has been updated to reflect recent findings – including how to load a custom policy – and I have also started on the EVM Guide. I think it’ll take me a day or three to finish off the rough edges, and then I’ll start creating a new SELinux node (KVM) image that users can use with various Gentoo Hardened-supported technologies enabled (PaX, grSecurity, SELinux, IMA and EVM).

So if you’re curious about IMA/EVM and willing to try it out on Gentoo Linux, please have a look at those documents and see if they assist you (or confuse you even more).

Steve Dibb a.k.a. beandog (homepage, bugs)
znurt.org cleanup (December 29, 2012, 05:36 UTC)

So, I finally managed to get around to fixing the backend of znurt.org so that the keywords would import again.  It was a combination of the portage metadata location moving, and a small bit of sloppy code in part of the import script that made me roll my eyes.  It’s fixed now, but the site still isn’t importing everything correctly.

I’ve been putting off working on it for so long, just because it’s a hard project to get to.  Since I started working full-time as a sysadmin about two years ago, it killed off my hobby of tinkering with computers.  My attitude shifted from “this is fun” to “I want this to work and not have me worry about it.”  Comes with the territory, I guess.  Not to say I don’t have fun — I do a lot of research at work, either related to existing projects or new stuff.  There’s always something cool to look into.  But then I come home and I’d rather just focus on other things.

I got rid of my desktops, too, because soon afterwards I didn’t really have anything to hack on.  Znurt went down, but I didn’t really have a good development environment anymore.  On top of that, my interest in the site had waned, and the whole thing just adds up to a pile of indifference.

I contemplated giving the site away to someone else so that they could maintain it, as I’ve done in the past with some of my projects, but this one, I just wanted to hang onto it for some reason.  Admittedly, not enough to maintain it, but enough to want to retain ownership.

With this last semester behind me, which was brutal, I’ve got more time to do other stuff.  Fixing Znurt had *long* been on my todo list, and I finally got around to poking it with a stick to see if I could at least get the broken imports working.

I was anticipating it would be a lot of work, and hard to find the issue, but the whole thing took under two hours to fix.  Derp.  That’s what I get for putting stuff off.

One thing I’ve found interesting in all of this is how quickly my memory of working with code (PHP) and databases (PostgreSQL) has come back to me.  At work, I only write shell scripts now (bash) and we use MySQL across the board.  Postgres is an amazing replacement database, and it’s amazing how, even after not using it regularly in a while, it all comes back to me.  I love that database.  Everything about it is intuitive.

Anyway, I was looking through the import code, and doing some testing.  I flushed the entire database contents and started a fresh import, and noticed it was breaking in some parts.  Looking into it, I found that the MDB2 PEAR package has a memory leak in it, which kills the scripts because it just runs so many queries.  So, I’m in the process of moving it to use PDO instead.  I’ve wanted to look into using it for a while, and so far I like it, for the most part.  Their fetch helper functions are pretty lame, and could use some obvious features like fetching one value and returning result sets in associative arrays, but it’s good.  I’m going through the backend and doing a lot of cleanup at the same time.

Feature-wise, the site isn’t gonna change at all.  It’ll be faster, and importing the data from portage will be more accurate.  I’ve got bugs on the frontend I need to fix still, but they are all minor and I probably won’t look at them for now, to be honest.  Well, maybe I will, I dunno.

Either way, it’s kinda cool to get into the code again, and see what’s going on.  I know I say this a lot with my projects, but it always amazes me when I go back and I realize how complex the process is — not because of my code, but because there are so many factors to take into consideration when building this database.  I thought it’d be a simple case of reading metadata and throwing it in there, but there’s all kinds of things that I originally wrote, like using regular expressions to get the package components from an ebuild version string.  Fortunately, there’s easier ways to query that stuff now, so the goal is to get it more up to date.

It’s kinda cool working on a big code project again.  I’d forgotten what it was like.


December 27, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened IMA support (December 27, 2012, 20:40 UTC)

Adventurous users, contributors and developers can enable the Integrity Measurement Architecture subsystem in the Linux kernel with appraisal (since Linux kernel 3.7). In an attempt to support IMA (and EVM and other technologies) properly, the System Integrity subproject within Gentoo Hardened was launched a few months ago. And now that Linux kernel 3.7 is out (and stable) you can start enjoying this additional security feature.

With IMA (and IMA appraisal), you are able to protect your system from offline tampering: modifications made to your files while the system is offline will be detected as their hash values do not match the hash values stored in extended attributes (whereas the extended attributes are then protected through digitally signed values using the EVM technology).
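As a rough idea of what is involved kernel-side, these are the main options of the kernel’s integrity subsystem (a sketch; see the guides mentioned below for the authoritative list):

CONFIG_INTEGRITY=y
CONFIG_IMA=y
CONFIG_IMA_APPRAISE=y
CONFIG_EVM=y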

I’m working on integrating IMA (and later EVM) properly, which of course includes the necessary documentation: concepts and an IMA guide for starters, with more to follow. Be aware though that the integration is still in its infancy; any questions and feedback are greatly appreciated, and bug reports (like bug 448872) are definitely welcome.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Restarting a tinderbox (December 27, 2012, 15:52 UTC)

So after my post about glibc 2.17 we got the ebuild in tree, and I’m now re-calibrating the ~amd64 tinderbox to use it. This sounds like an easy task, but it really isn’t. The main problem is that with the new C library you want to make sure to start afresh: no pre-compiled dependencies should remain, or problems won’t be found; you want the highest coverage possible, and that takes some work.

So how do you re-calibrate the tinderbox? First off you stop the build, and then you have to clean it up. Sometimes the cleanup is as easy as emerge --depclean — but in some cases, like this time, the Ruby packages’ dependencies cause a bit of a stir, so I had to remove them altogether with qlist -I dev-ruby virtual/ruby dev-lang/ruby | xargs emerge -C, after which the depclean command actually starts working.
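Spelled out as separate steps, this particular cleanup boils down to the following (a sketch of this run, not a general recipe):

qlist -I dev-ruby virtual/ruby dev-lang/ruby | xargs emerge -C
emerge --depclean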

Of course it’s not a two-minute command like on any other system, especially when going through the “Checking for lib consumers” step — the tinderbox has 181G of data in its partition (a good deal of which is old logs that I should actually delete at this point — and no, that won’t delete the logs in the reported bugs, as those are stored on S3!), without counting the distfiles (which are shared with its host).

In this situation, if there were automagic dependencies on system/world packages, it would actually bail out and I’d have to go manually clean them up. Luckily for me, there’s no problem today, but I have had this kind of problem before. This is actually one of the reasons why I want to keep the world set in the tinderbox as small as possible — right now it consists basically of: portage-utils, gentoolkit (for revdep-rebuild), java-dep-check, Python 2.7 (it’s an old thing, it might be droppable now, not sure), and netcat6 for sending the logs back to the analysis script. I would have liked to remove netcat6 from the list but last time the busybox nc implementation didn’t work as expected with IPv6.

The unmerge step should be straightforward, but unfortunately it seems to cause more grief than expected, in many cases. What happens is that Portage has special handling for symlinked directories — and after we migrated to use /run instead of /var/run, all the packages that have not been migrated away from using keepdir on it, ebuild-side, will spend much more time at the unmerge stage to make sure nothing gets broken. This is why we have a tracker bug, and why I’ve been reporting ebuilds creating the directory, rather than just packages that do not re-create it in the init script. Also, this is when I’m thankful I decided to get rid of XFS, as file deletion there was just way too slow.

Even though Portage takes care of verifying the link-time dependencies, I’ve noticed that sometimes things are broken nonetheless, so depending on what one’s target is, it might be a good idea to just run revdep-rebuild to make sure that the system is consistent. In this case I’m not going to waste the time, as I’ll be rebuilding the whole system in the next step, after glibc gets updated. This way we’re sure that we’re running with a stable base. If packages are broken at this level, we’re in quite the pinch, but it’s not a huge deal.

Even though I’m keeping my world file to the minimum, the world and system set is quite huge when you add up all the dependencies. The main reason is that the tinderbox enables lots and lots of flags – as I want to test most code – so things like GTK are brought in (by GCC, no less), and the cascade effect can be quite nasty. The system rebuild can easily take a day or two. Thankfully, the design of the tinderbox scripts makes it so that the logs are sent through the bashrc file, and not through the tinderbox harness itself, which means that even if I get failures at this stage, I’ll get a log for them in the usual place.

After this is completed, it’s finally possible to resume the tinderbox building, and hopefully then some things will work more as intended — for instance I might be able to get PHP to work again… and I’ll probably change the tinderbox harness to retry building things without USE=doc if they fail, as too many packages right now fail with it enabled or, as Michael Mol pointed out, because there are circular dependencies.

So expect me working on the tinderbox for the next couple of days, and then start reporting bugs against glibc-2.17, the tracker for which I opened already, even though it’s empty at the time of writing.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
My personal KDEPIM upgrade (again): laptop (December 27, 2012, 11:40 UTC)

One year after my last blog post on this topic, I encountered some minor difficulties with combining KDEPIM-4.4 (i.e. kmail1) and the KDE 4.10 betas. These difficulties are fixed now, and the combination seems to work fine again. Anyway, I became curious about the level of stability of Akonadi-based kmail2 once more. After all, I've been running it continuously over the year on my office desktop with a constant-on fast internet connection, and that works quite well. So, I gave it a fresh try on my laptop too. I deleted my Akonadi configuration and cache, switched to the Akonadi mysql backend, updated kmail and the rest of KDEPIM to 4.9.4 without migrating, and re-added my IMAP account from scratch (with "Enable offline mode"). The overall use case description is "laptop with large amount of cached files from IMAP account, fluctuating internet connectivity". Now, here are my impressions...

  • Reaction time is occasionally sluggish, but overall OK.
  • The progress indicator behaves a bit oddly: it checks the mail folders in seemingly random order and only knows 0% and 100% completion.
  • Random warning messages. It seems that kmail2 uses some features that "my" IMAP server does not understand. So, I'm getting frequent warning notifications that don't tell me anything and that I cannot do anything about. SET ANNOTATION, UID, ... Please either handle the errors, inform the user what exactly goes wrong, or ignore them in case they are irrelevant. Filed as a wish, bug 311265.
  • Network activity sometimes stops working. This sounds worse than it actually is, since in 99% of all cases Akonadi now detects just fine that the connection to the server is broken (e.g., after suspend/resume, after switching to a different WLAN, or after enabling a VPN tunnel) and reconnects immediately. In the few remaining cases, re-starting the Akonadi server does the trick (see the command below the list). You just have to know what to kick.
  • More problematic is that, while you're in online mode, any problem with connectivity will make kmail "hang". Clicking on a message leads to an attempt to retrieve it, which requires some response from the network. As it seems to me, all such requests are queued up for Akonadi to handle, and if that does not get a reply, pending requests are stuck in the queue... OK, you might say that this is a typical use case for offline mode, but then I would have to be able to predict when exactly my train enters the tunnel... Compare this to kmail1 disconnected IMAP accounts, where regular syncing would be delayed, but local work remained unaffected.
  • Offline mode is a nice concept, and half a solution for the last problem, but unfortunately it does not work as expected. For mysterious reasons, a considerable part of the messages is not cached locally. I switch my account to offline mode, click on a message, and obtain an error message "Cannot fetch this in offline mode". Well, bummer. Bug 285935.
  • This may just be my personal taste, but once something goes wrong (e.g., non-kde related crash, battery empty, ...) and the cache becomes corrupted somehow, I'd like to be able to do something from kmail2 without having to fiddle with akonadiconsole. A nice addition would be "Invalidate cache" in the context menu of a mail folder, or some sort of maintenance menu with semi-safe options.
  • Finally... something is definitely going wrong with PGP signatures; the signatures do not always verify on other mail clients. Tracking this down, it seems that CRLF is not preserved in messages, see bug 306005.
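For the record, the "kick" mentioned in the list above is just restarting the server, assuming the standard akonadictl tool that ships with Akonadi:

akonadictl restart
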
On the whole, for the laptop use case the "new" KDEPIM is now (4.9.4) more mature than the last time I tried. I'll keep it now and not downgrade again, but there are still some significant rough edges. The good thing is, the KDEPIM developers are aware of the above issues and debugging is going on, as you can see for example from this blog post by Alex Fiestas (whose use case pretty much mirrors my own).

December 26, 2012
Gnome 3.6 (December 26, 2012, 23:35 UTC)

We had a marathon with Alexandre (tetromino) over the last 2 weeks to get Gnome 3.6 ebuilds using the python-r1 eclass variants, EAPI=5 and gstreamer-1. And now it is finally in gentoo-x86, unmasked.

You probably read, heard or have seen stuff about EAPI=5 and new python eclasses before but, in short, here is what it will give you:

  • the package manager will finally know for real which Python version is used by which package and be able to act on it accordingly (no more python-updater once all ebuilds are migrated)
  • EAPI=5 subslots will hopefully put an end to revdep-rebuild usage. I already saw it in action while bumping some of the telepathy packages, discovering that empathy was now automatically being rebuilt with no further action than emerge -1 telepathy-logger (see the snippet below the list).
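As a quick illustration of the subslot mechanism: a dependency in an ebuild only needs the := slot operator to request an automatic rebuild whenever the provider's subslot (typically tracking the soname) changes. The package name here is made up:

RDEPEND="dev-libs/libfoo:="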

No doubt lots of people are going to love this.

Gnome 3.6 probably still has a few rough edges so please, check bugzilla before filing new reports.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
GLIBC 2.17: what's going to be a trouble? (December 26, 2012, 11:27 UTC)

So LWN reports just today on the release of GLIBC 2.17, which solves a security issue and looks like it was released mostly to support the new AArch64 architecture – i.e. arm64 – but the last entry in the reported news is possibly going to be a major headache, and I’d better post about it already so that we have a reference for it.

I’m referring to this:


The `clock_*' suite of functions (declared in <time.h>) is now available directly in the main C library. Previously it was necessary to link with -lrt to use these functions. This change has the effect that a single-threaded program that uses a function such as `clock_gettime' (and is not linked with -lrt) will no longer implicitly load the pthreads library at runtime and so will not suffer the overheads associated with multi-thread support in other code such as the C++ runtime library.

This is in my opinion the most important change, not only because, as it’s pointed out, C++ software gets quite an improvement from not linking to the pthreads library, but also because it’s the only change listed there that I can already foresee trouble with. And why is that? Well, that’s easy. Most of the software out there will do something along these lines to see what library to link to when using clock_gettime (always passing -lrt was not a good idea because the library doesn’t exist on most other operating systems out there, including FreeBSD and Mac OS X):

AC_SEARCH_LIBS([clock_gettime], [rt])

This is good, because it’ll try either librt, or no library at all (“none required”), which means that it’ll work on old GLIBC systems, new GLIBC systems, FreeBSD, and OS X — there is something else on Solaris if I’m not mistaken, which can be added up there, but I honestly forgot its name. Unfortunately, this can easily end up causing more trouble when software is underlinked.
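To make the mechanics concrete, here is a hand-rolled equivalent of what such a probe does (a sketch, not the code autoconf actually generates):

cat > conftest.c <<'EOF'
#include <time.h>
int main(void) { struct timespec ts; return clock_gettime(CLOCK_REALTIME, &ts); }
EOF
# first try without any extra library ("none required")...
if gcc conftest.c -o conftest 2>/dev/null; then
    echo "none required"
# ...then fall back to -lrt, as on pre-2.17 GLIBC systems
elif gcc conftest.c -lrt -o conftest 2>/dev/null; then
    echo "-lrt"
fi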

With the old GLIBC, it was possible to link software with just librt and have it use the threading functions. Once librt is dropped automatically by the configuration, threading libraries will no longer be brought in by it, and that might break quite a few packages. Of course, most of these would already have been failing with gold, but as you may remember, I wasn’t able to get through the whole tree with it, and I haven’t set up a tinderbox for it again yet (I should, but it’s trouble enough with two!).

What about --as-needed in this picture? A full “hard” implementation would fail on the underlinking, where pthreads should have been linked explicitly, but would also make sure not to link librt when it’s not needed, which would make it possible to improve the performance of the code (by skipping over pthreads) even when the configure scripts are not written properly (for instance if they are using AC_CHECK_LIB instead of AC_SEARCH_LIBS). But since it’s not the linkage of librt that causes the performance issue, but rather the one for pthreads, it actually works out quite well, even if some packages might keep an extra linkage to librt which is not used.

There is a final note that I need to write about, and it honestly worries me quite a bit more than all those above. The librt library has not been dropped — only the clock functions have been moved over to the main C library; the library keeps the asynchronous and list-based I/O operation interfaces (AIO and LIO), the POSIX message queues interfaces, the shared memory interfaces, and the timer interfaces. This means that if you’re relying on a clock_gettime test to bring in librt for those, you’ll end up with a failing package. Luckily for me, I’ve avoided that situation already on feng (which uses the message queues interface), but as I said, I foresee trouble for at least some packages.

Well, I guess I’ll just have to wait for the ebuild for 2.17 to be in the tree, and run a new tinderbox from scratch… we’ll see what gets us there!

December 25, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Trouble in GNU: an opportunity for improving? (December 25, 2012, 18:54 UTC)

I have posted a note about the way the FSF (America) started acting like a dictator with the GNU project and the software maintained under its umbrella, which led to the splitting of GnuTLS — something that Nikos is not currently commenting on, simply because he’s now negotiating what’s going to happen with it.

Well, the next step has been Paolo stepping down as GNU maintainer, after releasing a new version of sed. This actually made me think a bit more. What’s going on with sed, grep and the like? Well, most likely they’ll get a new maintainer and they’ll keep going that way. But should we see this as an opportunity? You probably remember that some time ago I suggested we could be less GNU — or at least, less reliant on GNU.

So while I’m definitely not going to fork sed myself – I have enough trouble with unpaper, especially considering that while in America I didn’t have a scanner, which is a necessity to develop it – there definitely is room for improvement with it. First of all, it would be a good choice to start with, to get rid of the damn gnulib, eventually implementing what is an extension of glibc itself as an external library (something like libgsupc). Even if this didn’t work on anything but FreeBSD and Linux, it would still be an improvement, and I’m pretty sure it would be feasible without needing that hairy mess of code which, in sed’s source tree, takes five times as much space as the sed sources themselves — 200KiB for the program’s sources, 1.1MiB for the gnulib copy.

Having a new, much less political project to oversee the development of core system utilities would also most likely consolidate some projects that are currently being developed outside of GNU altogether, or that simply don’t fit within its scope because they are Linux-specific, which would probably make for a better end-user experience. Plus, things like keeping man pages actually up to date instead of relying on the info manuals would almost certainly help!

So, can any of you think of other ways to improve the GNU utilities by breaking out of GNU’s boundaries (which is what Nikos and Paolo seem to be striving for)? Maybe it is possible to get something that is better for everybody and Free at the same time. Myself, I know I need to spend some time to fix the dependency upon readline that is present in GnuTLS just for the utilities…

December 24, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Why can't I get easy hardware (December 24, 2012, 15:12 UTC)

When I bought my Latitude, I complained that it seemed to me more and more like a mistake — until the kernel started shipping with the correct (and fixed) drivers, so the things that originally didn’t work right (the SD card reader, the shutdown process, the touchpad, …) started working quite nicely. As of September 2011 (one year and a quarter after I bought it), between Linux and firmware updates from Dell and Broadcom, the laptop worked almost completely — the only part still missing is the fingerprint reader, which I really don’t care that much about.

Recently, you probably have seen my UEFI post where I complained that I couldn’t install Sabayon on the new Zenbook (which is where I’m writing from, right now, on Gentoo). Well, that wasn’t the only problem I got with this laptop, and I should really start reporting issues to the kernel itself, but in the mean time let me write down some notes here.

First off, the keyboard backlight is nice and all, but I don’t need it – I learnt to touch-type when I was eight – so it would just be a waste of battery. While the keys are reported correctly, and upower supports setting the backlight, at least the stable version of KDE doesn’t seem to support the backlight setting. I should ask my KDE friends if they can point me in the right direction. Another interesting point is that while the backlight is turned on at boot, it’s off after suspension — which is probably a bug in the kernel, but one that works in my favor.

Speaking of things not turning back on after suspension, the WLAN LED on the keyboard does not turn back on at resume. And related to that, the rfkill key doesn’t seem to work that well either. It’s not a big deal, but it’s a bit bothersome, especially since I would like to turn off the bluetooth adapter only (and since that’s supposedly hardware-controlled, it should get me some more battery life).

The monitor’s backlight is even more troublesome: the first problem is deciding who should be handling it — it’s either the ACPI video driver (by default), the ASUS WMI driver, or the Intel driver — and of the three, the only one that makes it work is the Intel driver, and I’m not even sure if that’s actually controlling the backlight or just the tint on the screen, even though, when set to zero, it turns the screen OFF, rather than just displaying it as black. It does make it bearable though.

The brightness keys on the keyboard don’t work, by the way, nor does the one that should turn the light sensor on and off — the latter isn’t even recognized as a key by the asus-wmi driver, and I can’t be sure of the correct device ID that I should use to turn said light sensor on and off. After I hacked the driver to not expose either the ACPI or the WMI brightness interfaces, I’m able to set the brightness from KDE at least — but it does not seem to stick if I turn it down: after some time it goes back to the maximum (when the power is connected, at least).

And finally, there is the matter of the SD card reader. Yesterday I went to use it, and I found out that… it didn’t work. Even though it’s a USB device, it’s not mass-storage — it’s a Realtek USB MMC device, which does not use the standard USB interface for MMC readers at all! After some googling around, I found that Realtek actually released a driver for it, and after some more digging I found out that said driver is currently (3.7) in the staging drivers’ tree as a virtual SCSI driver (with its own MMC stack) — together with a PCI-E peer, which has already been rewritten for the next release (3.8) as three split drivers (an MFD base, an MMC driver, and a MemoryStick driver). I looked into porting the USB one as well, but it seems to be a lot of work, and Realtek (or rather, Realsil) seems to be already working on porting it to the kernel proper, so it might be worth waiting.

To be fair, what really made me drop the idea of working on the SD card driver is that, to have an idea of what’s going on, I have to run 3.8 — and as of RC1 it panics as soon as I re-connect the power cable. So even though I would like to find enough time to work on some kernel code, this is unlikely to happen now. I guess I’ll spend the next three days working on Gentoo bugs, then I have a customer to take care of, so this is just going to drop off my list quite quickly.

December 23, 2012
Sebastian Pipping a.k.a. sping (homepage, bugs)
Fwd: Why privacy matters (December 23, 2012, 22:13 UTC)

I am sharing this video because it has a few interesting points on the value of privacy, especially some that are helpful explaining privacy to others. Two examples:

Cory Doctorow (at 00:13):

“Privacy is the right to make a mistake.”

Christopher Soghoian (at 03:07):

“Everyone has something to hide. We have curtains on our windows, we wear clothes, we don’t broadcast our salaries or our medications [..].”

PS: This video was brought to my attention by a post at Netzpolitik.org.

December 22, 2012
Stuart Longland a.k.a. redhatter (homepage, bugs)
End of the world predictions (December 22, 2012, 07:21 UTC)

This is a little old, it has been kicking around on my computer for over 10 years now, but it seems especially relevant given what some thought of the Mayan calendar…

December 21, 2012


Fig. 1: End of World banner

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers, ready to rock the end of the world (or at least mid-winter/Southern Solstice)! The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.6.8, Xorg 1.12.4, KDE 4.9.4, Gnome 3.4.2, XFCE 4.10, Fluxbox 1.3.2, Firefox 17.0.1, LibreOffice 3.6.4.3, Gimp 2.8.2-r1, Blender 2.64a, Amarok 2.6.0, Mplayer 2.2.0, Chromium 24.0.1312.35 and much more ...
  • If you want to see if your package is included we have generated both the x86 package list, and amd64 package list. There is no new FAQ or artwork for the 20121221 release, but you can still get the 12.0 artwork plus DVD cases and covers for the 12.0 release, and view the 12.1 FAQ (persistence mode is not available in 20121221).
  • Special Features:
    • ZFSOnLinux
    • Writable file systems using AUFS so you can emerge new packages!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20121221 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit userland while using the provided 32-bit userland. The livedvd-amd64-multilib-20121221 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Creating a tumblelog with blohg (December 21, 2012, 05:39 UTC)

Warning: This post relies on unreleased blohg features. You will need to install blohg from the Mercurial repository or use the live ebuild (=www-apps/blohg-9999), if you are a Gentoo user. Please ignore this warning after blohg-1.0 release.

Tumblelogs are old stuff, but services like Tumblr have popularized them a lot recently. Tumblelogs are a quick and simple way to share random content with readers. They can be used to share a link, a photo, a video, a quote, a chat log, etc.

blohg is a good blogging engine, we know, but what about tumblelogs?!

You can already share videos from Youtube and Vimeo, and can share most of the other stuff manually, but it is boring, and diverges from the main objective of the tumblelogs: simplicity.

To solve this issue, I developed a blohg extension (Yeah, blohg-1.0 supports extensions! \o/ ) that adds some cool reStructuredText directives:

quote

This directive is used to share quotes. It will create a blockquote element with the quote and add a signature with the author name, if provided.

Usage example:

.. quote::
   :author: Myself

   This is a random quote!

chat

This directive is used to share chat logs. It will add a div with the chat log, highlighted with Pygments.

Usage example:

.. chat::

   [00:56:38] <rafaelmartins> I'm crazy.
   [00:56:48] <rafaelmartins> I chat alone.

You can see the directives in action on my shiny new tumblelog:

http://rafael.martins.im/

The source code of the tumblelog, including the blohg extension and the mobile-friendly templates, is available here:

http://hg.rafaelmartins.eng.br/blogs/rafael.martins.im/

I have no plans to release this extension as part of blohg, but feel free to use it if you find it useful!

That's all!

December 20, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Why my Munin plugins are now written in Perl (December 20, 2012, 21:52 UTC)

This post is an interlude between Gentoo-related posts. The reason is that I have one in drafts that requires me to produce some results that I don’t have yet, so it’ll have to wait for the weekend or so.

You might remember that my original IPMI plugin was written in POSIX sh and awk, rather than bash and gawk like the original one. Since then, the new plugin (which, as it turns out, might become part of the 2.1 series, though not as a replacement for both of the old ones, since RHEL and Fedora don’t package a new enough version of FreeIPMI) has been rewritten in Perl, so it uses neither sh nor awk. Similarly, I’ve written a new plugin for sensors, which I also wrote in Perl (although in this case the original one also used it).

So why did I learn a new language (since I never programmed in Perl before six months ago) just to get these plugins running? Well, as I said in the other post, the problem was calling the same command so many times, which is why I wanted to go multigraph — but when dealing with variables, sticking to POSIX sh is a huge headache. One of the common ways to handle this is to save to a temporary directory the output of a command and parse that multiple times, but that’s quite a pain, as it might require I/O to disk, and it also means that you have to execute more and more commands. Doing the processing in Perl means that you can save things in variables, or even just parse it once and split it into multiple objects, to be later used for output, which is what I’ve been doing for parsing FreeIPMI’s output.

But why Perl? Well, Munin itself is written in Perl, so while my usual language of choice is Ruby, the plugins are much more usable if doing it in Perl. Yes, there are some alternative nodes written in C and shell, but in general it’s a safe bet that these plugins will be executed on a system that at least supports Perl — the only system I can think of that wouldn’t be able to do so would be OpenWRT, but that’s a whole different story.

There are a number of plugins written in Python and Ruby, some in the official package, but most in the contrib repository and they could use some rewriting. Especially those that use net-snmp or other SNMP libraries, instead of Munin’s Net::SNMP wrapper.

But while the language is of slight concern, some of the plugins could use some rewriting simply to improve their behaviour. As I’ve said, using multigraphs it’s possible to reduce the number of times that the plugin is executed, and thus the number of calls to the backend, whatever that is (a program, or access to /sys), so in many cases plugins that support multiple “modes” or targets through wildcarding can be improved by making them a single plugin. In some cases, it’s even possible to reduce multiple plugins into one, as I did to the various apache_* plugins shipping with Munin itself, replaced on my system with apache_status as provided by the contrib repository, that fetches the server status page only once and then parses it to produce the three graphs that were, before that, created by three different plugins with three different fetches.

Another important trick up our sleeves while working on Munin plugins is dirty config, which basically means that (upon indication from the node itself) you can make the plugin output the values as well as the configuration during the config execution — this saves you one full trip to the node (to fetch the data), and usually that also means it saves one more call to the backend. In particular, with these changes my IPMI plugin went from requiring six calls to ipmi-sensors per update, for the three graphs, to just one. And since it’s either IPMI on the local bus (which might require some time to access) or over LAN (which takes more time), the difference is definitely visible both in timing and in traffic — in particular, one of the servers at my day job is monitoring another seven servers (which can’t be monitored through the plugin locally), which means that we went from 42 to 7 calls per update cycle.
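To show what this looks like in practice, here is a minimal shell plugin honoring dirty config. This is a sketch: MUNIN_CAP_DIRTYCONFIG is the capability flag the node exports to plugins, and the load average stands in for whatever expensive backend call a real plugin would make:

#!/bin/sh
fetch_values() {
    # a single call to the backend, shared by config and fetch
    echo "load.value $(cut -d' ' -f1 /proc/loadavg)"
}
case "$1" in
config)
    echo "graph_title Load average (sketch)"
    echo "load.label load"
    # with dirty config, the values can be emitted right here,
    # saving the separate fetch round-trip
    if [ "$MUNIN_CAP_DIRTYCONFIG" = "1" ]; then
        fetch_values
    fi
    ;;
*)
    fetch_values
    ;;
esac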

So if you use Munin, and either have had timeout issues, or have some time at hand to improve some plugins, you might want to follow what I’ve been doing, and start improving or re-writing plugins to support multigraph or dirty config, and thus improve Munin’s performance.

Jeremy Olexa a.k.a. darkside (homepage, bugs)

I was in Budapest for 11 days. I couchsurfed there, and that is longer than I normally stay at someone’s house, by far. So, thanks Paul! Budapest was nice, and reminded me much of Prague. While I was there I visited a Turkish bath, which was a very interesting experience. Imagine a social, public “hot tub & sauna” with naturally hot water. I found a newly minted Crossfit gym, RC Duna, that opened up its doors for a traveller, so gracious. Even though I didn’t get to see the Opera in Vienna, I went to the Opera house in Budapest. It was my first time seeing a ballet, The Nutcracker. There were Christmas markets in Budapest too. I actually liked the Budapest ones more than the Viennese markets. I also helped to organize the first (known) Hungarian Gentoo Linux Beer Meeting :)

Then I took a train to Belgrade, Serbia. The train took 8+ hours. I couchsurfed again for 3 nights. Had some wonderful chats with my host, Ljubica. She learned about US things, I learned about Serbian things, just what you could hope for: a cultural exchange via couchsurfing. I was her first US guest. Later on, an Argentinian fellow stayed there too, and we had conversations about worldly topics, like “why are borders so important and do we need them?” and speculating why Belgium’s lack of government even worked. Then, perhaps the best part, I got to try authentic mate. In my opinion there wasn’t much to actually see in Belgrade during the winter; I did walk around and went to the fortress. Otherwise, I nursed the head cold I got on the train.

I took the bus to Skopje, FYROM. I stayed in Skopje for 3 nights at a nice independent hostel, Shanti Hostel (recommended). I walked around the center (not much to see), walked through the old bazaar, and ate some good food. The dishes in Central Europe include lots of meat. I embarked on a mission to find the semi-finalist entry for the next 7 wonders of the world, Vrelo Cave, but I got lost and took a 10km hike along the river instead. It was spectacular! And peaceful. Perfect really. I wanted to see what was at the end of the trail, but eventually turned around because it didn’t end. On the way back, I slipped and came within feet of going in the drink. As my legs straddled a tree and my feet went through branches that were clearly meant to handle no weight, I used that split second to be thankful. I used the next second to watch something black go bounce, …, bounce, SPLASH. It is funny how you can go from thankful to cursing about your camera in the river so quickly. I got up, looked around and thought about how I got off the path, dang. Being the frugal man I am, I continued off the path and went searching for my camera. Well, that was bad, because I slipped again. As I was sliding on my ass and grabbing branches, I eventually stopped. It was at this point I knew my camera was gone, since I could see the battery had popped out and was in the water. Le sigh. C’est la vie.

So, no pictures, friends. I had a few hundred pictures that I didn’t upload and they are gone. I might buy a camera again but for now, you will just have to take my word for it. My Mom says she will send me a disposable camera :D ha.

I’m off to Greece at 6am…

Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching policy types in Gentoo/SELinux (December 20, 2012, 09:31 UTC)

When you are running Gentoo with SELinux enabled, you will be running with a particular policy type, which you can derive from either /etc/selinux/config or from the output of the sestatus command. As a user on our IRC channel had some issues converting his strict-policy system to mcs, I thought about testing it out myself. Below are the steps I took and the reasoning why (and I will update the docs to reflect this accordingly).

Let’s first see if the type I am running at this moment is indeed strict, and that the mcs type is defined in the POLICY_TYPES variable. This is necessary because the sec-policy/selinux-* packages will then build the policy modules for the other types referenced in this variable as well.

test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             strict
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              disabled
Policy deny_unknown status:     denied
Max kernel policy version:      28
 
test ~ # grep POLICY_TYPES /etc/portage/make.conf
POLICY_TYPES="targeted strict mcs"

If you notice that this is not the case, update the POLICY_TYPES variable and rebuild all SELinux policy packages using emerge $(qlist -IC sec-policy) first.

Let’s see if I indeed have policies for the other types available and that they are recent (modification date):

test ~ # ls -l /etc/selinux/*/policy
/etc/selinux/mcs/policy:
total 408
-rw-r--r--. 1 root root 417228 Dec 19 21:01 policy.27
 
/etc/selinux/strict/policy:
total 384
-rw-r--r--. 1 root root 392168 Dec 19 21:15 policy.27
 
/etc/selinux/targeted/policy:
total 396
-rw-r--r--. 1 root root 402931 Dec 19 21:01 policy.27

Great, we’re now going to switch to permissive mode and edit the SELinux configuration file to reflect that we are going to boot (later) into the mcs policy. Only change the type – I will not boot in permissive mode so the SELINUX=enforcing can stay.

test ~ # setenforce 0
 
test ~ # vim /etc/selinux/config
[... set SELINUXTYPE=mcs ...]

You can run sestatus to verify the changes, but be aware that, while the command does say that the mcs policy is loaded, this is not the case. The mcs policy is just defined as the policy to load:

test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              disabled
Policy deny_unknown status:     denied
Max kernel policy version:      28

So let’s load the mcs policy, shall we?

test ~ # cd /usr/share/selinux/mcs/
test mcs # semodule -b base.pp -i $(ls *.pp | grep -v base | grep -v unconfined)

Next we are going to relabel all files on the file system, because the mcs policy adds in another component in the context (a sensitivity label – always set to 0 for mcs). We will also re-do the setfiles steps done initially while setting up SELinux on our system. This is because we need to relabel files that are “hidden” from the current file system because other file systems are mounted on top of it.

test mcs # rlpkg -a -r
Relabeling filesystem types: btrfs ext2 ext3 ext4 jfs xfs
Scanning for shared libraries with text relocations...
0 libraries with text relocations, 0 not relabeled.
Scanning for PIE binaries with text relocations...
0 binaries with text relocations detected.
 
test mcs # mount -o bind / /mnt/gentoo
test mcs # setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/dev
test mcs # setfiles -r /mnt/gentoo /etc/selinux/mcs/contexts/files/file_contexts /mnt/gentoo/lib64
test mcs # umount /mnt/gentoo

Finally, edit /etc/fstab and change all rootcontext= parameters to include a trailing :s0, otherwise the root contexts of these file systems will be illegal (in the mcs-sense) as they do not contain the sensitivity level information.

test mcs # vim /etc/fstab
[... edit rootcontext's to now include ":s0" ...]
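For illustration, such an entry changes like this (the device and context here are made up):

# before:
tmpfs  /tmp  tmpfs  defaults,rootcontext=system_u:object_r:tmp_t  0 0
# after (mcs requires the sensitivity level):
tmpfs  /tmp  tmpfs  defaults,rootcontext=system_u:object_r:tmp_t:s0  0 0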

There ya go. Now reboot and notice that all is okay, and we’re running with the mcs policy loaded.

test ~ # id -Z
root:sysadm_r:sysadm_t:s0-s0:c0.c1023
test ~ # sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             mcs
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     denied
Max kernel policy version:      28

December 18, 2012
Josh Saddler a.k.a. nightmorph (homepage, bugs)
music made with gentoo: lost letters (December 18, 2012, 09:17 UTC)

a new song: lost letters by ioflow

prepared improvisation for the 50th disquiet junto, morse beat.

the assignment was to encode a word or phrase with the Morse method, and then translate that sequence into the song’s underlying rhythm.

i chose the meaning of my name, “the Lord is salvation.” i looked at the resulting dashes and dots and treated them as sheet music, improvising a minor-key motif for piano, using just my right hand.

with the basic sketch recorded, i duplicated an excerpt and ran it through a vintage tape delay effect, putting it in the background almost like a loop. i set to work adding a few notes here and there, some of them reversed, running into more tape delays; contrasting their sonic character with the main melody. the loop excerpt repeats a few times, occasionally transformed by offset placement with the main theme, or reinforced by single note chord changes.

from a very few audio fragments, a mournful story emerged. echoing piano lines and uncovered memories. i did my best to vary the structure while keeping the mood and emotions, but this is still pretty hasty work; i only had a few minutes to arrange this piece before the deadline, due to software issues with ardour 3 beta. ardour crashes every time i attempt to process an audio clip, such as reversing or stretching it. i had to separately render those segments with renoise, then import them to ardour.

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio 3.0 (December 18, 2012, 07:57 UTC)

Yay, we just released PulseAudio 3.0! I’m not going to rehash the changelog, which you can find in the release announcement as well as in the longer release notes.

I would like to thank the 36 contributors over the last 6 months who have made this release what it is and continue to demonstrate what a vibrant community we have!

December 17, 2012
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The boot process (December 17, 2012, 12:24 UTC)

One thing that is obvious, from both the mailing lists and the comments on my previous post, is that there are quite different expectations of what the boot process involves — which is to be expected, since in Gentoo the boot process, like many other things, is totally customized on a per-user basis.

As Greg and William said before, the whole point of supporting (or not) a split /usr approach is not something that is tied that much to udev itself, but more a matter of what is involved in the boot process at all. Reimar pointed that out in the comments to the other post, and I guess that’s the one thing that right now we have to consider a bit more thoroughly. So let’s see if I can analyse it a bit more closely.

Let me put a foreword here. The biggest problem regarding udev and a split /usr is that, while it’s still possible to select whether to search for rules in the rootfs or in /usr, it didn’t, and maybe still doesn’t, search both paths at the same time. That is probably the only thing that I count as total non-sense: it’s breaking things for breakage’s sake. And it realistically is one of the things that made many Gentoo users upset with Lennart and Kay: the migration of rules is easy for binary distributions – you just rebuild all the packages installing in the old path – but it’s a pain in the neck for Gentoo users; and the cost of searching both paths is unlikely to be noticeable.

So what do we consider as part of the boot? Well, as I said in the other post, if you expect to be able to log in without /usr, you’re probably out of luck if you use PAM — while the modules are still available on the rootfs, many of them require libraries in /usr — ConsoleKit, Kerberos, PKCS#11, … This is also one of the reasons why I’m skeptical about just teaching Portage to move dependencies to the rootfs: it would probably move a good deal of libraries to the rootfs, especially for a desktop, which would in turn make the “lightweight rootfs” option moot.

Another reason why I don’t think that the automatic move is going to solve the problem, is that while it’s possible to teach Portage to move the libraries, it’s impossible to teach it to move plugins, or the datafiles that those libraries use. More about that in the next paragraphs.

So let’s drop the login issue: we don’t expect to be able to log in the system without /usr so it’s not an option. The next thing that is going to be a problem is coldplugging (I’ll consider hotplugging during boot as hotplugging but it might actually be more complex). The idea of coldplugging is that you want to start a given piece of software if, at boot, you find a given device connected. As an example you might want to start pcscd if a smartcard reader (be it a CCID one or another driver) is found, or ekeyd if an EntropyKey is connected, without the user having added them to the runlevels manually.
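
To make this concrete: a coldplug hook of this kind is typically just a udev rule with a RUN key. A minimal sketch, where the vendor ID and the init script path are purely illustrative and not taken from the actual pcscd rules:

# /etc/udev/rules.d/99-pcscd.rules (illustrative)
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="08e6", RUN+="/etc/init.d/pcscd start"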

What’s the problem with this then? Well, the coldplugged services might require /usr for both the service and the libraries, which means you can’t run them without /usr. The udev-postmount service was, if I recall correctly, created just to deal with that: udev kept score of which rules failed to execute, and re-executed them after /usr was mounted; but this relied on udev’s own handling of rule re-execution, which I don’t remember whether it still exists. If it doesn’t, then that’s a big deal, but not something I want to care about, to be honest. An easy way out of this is to say that coldplugging is not supported if your coldplugged services need /usr and you have it split, but that’s still quite hacky.

This blog post was supposed to be a bit longer, and to provide, among other things, a visual representation of the boot-time service dependencies. It turns out that I left it open for a whole week without being able to complete it as I intended. In particular, the graphical representation is messy because there are so many services involved that on my laptop it’s seriously unreadable. I’ve been using the representation as a debugging aid to improve my service files though, and I’ll write about that. It’s going to enter OpenRC’s git soon.

This said, this “half” post is good enough to read as it is. I’ll write more about it later on.

December 16, 2012
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
The difference between Ubuntu and Gentoo ;) (December 16, 2012, 22:34 UTC)

This gem comes from the xda developers forums; thanks barry99705!

"Using/installing Ubuntu is like buying a car. It may have a few features you'll never need or use, and might need to have a couple features added as aftermarket parts.

Using/installing Gentoo is like buying a pile of sheet metal, a few rubber trees, small pile of copper, a pile of sand, and an oil well. Then you have to cut and fabricate the car's body from the sheet metal, extract the rubber from the trees, then use that to make the tires and all the seals on the car. Use the pile of copper to make all the wires, and use the leftover rubber (you did save the scraps didn't you) to make the insulation. Melt down the pile of sand to make the windshield, side and back windows, also the headlights and lights themselves. Then you need to extract the crude oil from the well to refine your own engine oil and gas. In the end, you have a car created to your exact specifications (if you know what the hell you're doing) that may or may not be any better than just buying a car off the lot."

Of course I should additionally mention that Gentoo provides awesome documentation for all the steps and most of the actual assembly work is done single-handedly by portage!

December 15, 2012
Richard Freeman a.k.a. rich0 (homepage, bugs)
Gentoo and Copyright Assignments (December 15, 2012, 13:43 UTC)

A topic that has been fairly quiet for years has roared into life on a few separate occasions in the last month within the Gentoo community: copyright assignments. The goal of this post is to talk a little about the issues around these as I see them. I’ll state upfront that I’m not married to any particular approach.

But first, I think it is helpful to consider why this topic is flaring up. The two situations I’m aware of where this has come up in the last month or so both concern contributions (willing or not) from outside of Gentoo. One concerns a desire to be able to borrow eclass code from downstream distros like Exherbo, and the other is the eudev fork. In both cases the issue is with the general Gentoo policy that all Gentoo code have a statement at the top to the effect of “Copyright 2012 Gentoo Foundation.”

Now, Diego has already blogged about some of the issues created by this policy, and I want to set that aside for the moment. Regardless of whether the Foundation can lay claim to ownership of copyright on past contributions, the question remains: should Gentoo aim to have copyright ownership (or something similar) for all Gentoo work be held by the Foundation?

Right now I’m reaching out to other free software organizations to understand their own policies in this area. Regardless of whether we want to have Gentoo own our copyrights or not there are still legal questions around what to put on that copyright line, especially when a file is an amalgamation of code originated both inside and outside of Gentoo, perhaps even by parties who are hostile to the effort. I can’t speak for the Trustees as a whole, but I suspect that after gathering info we’ll try to have some open discussion on the lists, and perhaps even have a community-wide vote before making new policy. I don’t want to promise that – in fact I’d recommend that any community-wide vote be advisory only unless a requirement for supermajority were set, as I don’t want half the community up in arms because a 50.1% majority passed some highly unpopular policy.

So, what are some of the directions in which Gentoo might go? Why might we choose to go in these directions? Below I outline some of the options I’m aware of:

Maintain the status quo
We could just leave the issue of copyright assignment somewhat ambiguous as has been done. If Gentoo were forced to litigate over copyright ownership right now, an argument could be made that, because contributors willingly allowed us to stick that copyright notice on our files and made their contributions with knowledge of our policies, they have given implicit consent to our doing so.

I’m not a big fan of this approach – it has the virtue of requiring less work, but really has no benefits one way or the other (and as you’ll read below there are benefits from declaring a position one way or the other).

This requires us to come up with a policy around what goes on the copyright notice line. I suspect that there won’t be much controversy for Gentoo-originated work like most ebuilds, as there isn’t much controversy over them now. However, for stuff like eudev or code borrowed from other projects this could get quite messy. With no one organization owning much of the code in any file the copyright line could become quite a mess.

Do not require copyright assignment
We could just make it a policy that Gentoo would aim to own the name Gentoo, but not the actual code we distribute. This would mean that we could freely accept any code we wished (assuming it was GPL or CC BY-SA compatible per our social contract). This would also mean that Gentoo as an organization would find it difficult to pursue license violations, and future relicensing would be rather difficult.

From the standpoint of being able to merge outside code this is clearly the preferred solution. This approach still carries all the difficulties of managing the copyright notice, since again no one organization is likely to hold the majority of copyright ownership of our files. Also, if we were to go this route we should strongly consider requiring that all contributions be licensed under GPL v2+, and not just GPL v2. Since Gentoo would not own the copyright, if we ever wanted to move to a newer GPL version we would not have the option to do so unless this were done.

Gentoo would still own the name Gentoo, so from a branding/community standpoint we’d have a clear identity. If somebody else copied our code wholesale the Foundation couldn’t do much to prevent this unless we retroactively asked a bunch of devs to sign agreements allowing us to do so, but we could keep an outside group from using the name Gentoo, or any of our other trademarks.

Require copyright assignment
We could make it a policy that all contributions to Gentoo be made in conjunction with some form of copyright assignment, or contributor licensing agreement. I’ll set aside for now the question of how exactly this would be implemented.

In this model Gentoo would have full legal standing to pursue license violations, and to re-license our code. In practice I’m not sure how likely we’d actually be to do either. The copyright notice line would be easy to manage, even if we made the occasional exception to the policy, since any exceptions could of course be tracked as such. Most likely the majority of the code in any file would be owned by only a few entities at most.

The downside to this approach is that it basically requires turning away code, or making exceptions. Want to fork udev? Good luck getting them to assign copyright to Gentoo.

There could probably be blanket exceptions for small contributions which aren’t likely to create questions of copyright ownership. And we could of course have a transition policy where we accept outside code but all modifications must be Gentoo-owned. Again, I don’t see that as a good fit for something like eudev if the goal is to keep it aligned with upstream.

I think the end result of this would be that work that is outside of Gentoo would tend to stay outside of Gentoo. The eudev project could do its thing, but not as a Gentoo project. This isn’t necessarily a horrible thing – OpenRC wasn’t really a “Gentoo project” for much of its life (I’m not quite sure where it stands at the moment).

Alternatives
There are in-between options as well, such as encouraging the voluntary assignment/licensing of copyright (which is what KDE does), or dividing Gentoo up into projects we aim to own or not. So, we might aim to own our ebuilds and the essential eclasses and portage, but maybe there is the odd eclass or side project like eudev that we don’t care about owning. Maybe we aim to own new contributions (either all or most).

There are good things to be said for a KDE-like approach. It gives us some of the benefits of attribution, and all of the benefits of not requiring attribution. We could probably pursue license violations vigorously, as we’d likely hold control of copyright over the majority of our work (aside from things like eudev – which obviously aren’t our work to begin with). Relicensing would be a bit of a pain – anything we have control over we could of course relicense, but for anything else we’d have to at least make some kind of effort to get approval. Legally that all becomes a murky area. If we were to go this route, again I’d probably suggest that we require all code to be licensed GPL v2+ or similar, just to give us a little bit of automatic flexibility.

I’m certainly interested in feedback from the Gentoo community around these options, things I hadn’t thought of, etc. Feel free to comment here or on gentoo-nfp.


Filed under: foss, gentoo, gentoo foundation

December 13, 2012
Markos Chandras a.k.a. hwoarang (homepage, bugs)
Proxy Maintainers – How do we perform? (December 13, 2012, 20:14 UTC)

Following my recent recruitment performance post, here comes the second part of my Gentoo Miniconf 2012 presentation. The following two graphs aim to demonstrate the performance of proxy-maintainers, aka how Gentoo users help us improve and push new ebuilds to the Portage tree.

[Graphs: Orphaned Packages 2012/10 and Orphaned Packages 2012/12]

One can notice the increased number of maintainer-needed@ packages, but this is because we “retired” a lot of inactive developers in the last 2 months. I expect this number not to increase further in the near future.

I would like to thank all of you who are actively participating in this team. Keep up the good work!

Steve Dibb a.k.a. beandog (homepage, bugs)
another semester done (December 13, 2012, 08:25 UTC)

I just finished my Fall semester for 2012 today at UVU.  This was, by far, the hardest semester I’ve ever had since I’ve been in school.  It was brutal.  I had three classes which carried with it more work than I was expecting, and I spent a lot of time in the past four months doing nothing but homework.  I was talking to my cousin tonight about it (while we were doing some late-night skateboarding in the winter, which, it’s actually really nice out here right now), and I mentioned that the stress was a huge burden on me.  Stress is normal, but I’ve learned that if something heavy is really going on, I notice I will stop being cheery.  I don’t really get somber, but it’s more like, just focused and serious all the time.  Which can be a real bummer.

But, the semester is finished, and it’s freed up a lot of time and has taken that huge burden off of me.  I got good grades, and along with that, and some great friends that really stepped up at the last minute and helped me out, it’s really gotten me humbled and grateful to God and everyone that stood by me.  I’m really glad this semester is done.

One thing I learned from this last jaunt around is that I’ve decided I’m never taking online classes again.  I had two this semester, and one on campus.  Looking back, I’ve always had a range of issues with online courses.  Either I don’t understand the material very well because I can’t chat with the professor one on one, or I slack the whole time (I did 50% of the coursework in one day.  I’m not kidding).  The worst one though is I never really feel like I “get” the material.  I jump through hoops, get a grade, and move on, but it doesn’t seem like I learned anything.

So, I’m sticking to just two classes from here on out, and doing them all on-campus.  That’ll be manageable.

For now I’m really looking forward to not so much having more time, but having less stress.  I’ve been wanting to work on some cool side projects, and I also have been itching to go skating … a lot.  So tonight I went on a two-hour run with my cousin down Main Street in Bountiful, and it was really cool.  We call it a “mort run” since we start at the top of a hill and go all the way down to the mortuary.  It’s smooth all the way down and  you can just push around and then either skate back up hill or walk.  It’s a good workout.

The best part tonight though was debating whether or not we should go to the drive-through at Del Taco, knock on the window and ask for something.  We didn’t, but we circled the place like eight times and probably freaked out the employees while we debated it.  Eventually, we realized he didn’t have enough cash to buy something on the dollar menu (he was a penny short), so we spent half an hour wandering around downtown looking for lost change.  It was pretty fun. :)

Soooooooooooo ….. projects.  One thing I have time to look into now is znurt.org.  It’s broken.  I’ve known it’s been broken.  It would take me probably less than an hour to fix it.  I haven’t made the time, for a lot of reasons.  It’s actually been on my calendar reminding me over and over that I need to get it done.  I’m debating what to do about the site.  I could just fix the one error and move on, but it’s still kind of living in a state of neglect.  Ideally, I should hand the project over to someone else and let them maintain it.  I dunno yet.  Part of me doesn’t wanna let it go, but I guess a bigger part doesn’t care enough to actually fix it so … yah.  Gotta make a decision there.

Other than that, not much going on.  I moved to a new apartment, back into a complex.  I like it here.  I have a dishwasher now, which I’m really grateful for (I haven’t had one in the last three apartments).  The funny thing about that is I seriously have so few dishes that filling the entire thing with all of mine leaves it half full.

Anyhoo, I am really looking forward to moving on.  My big thing is I wanna get some serious skating time in while I’ve got the time.  That and enjoy the holidays with friends and family.  I’m looking forward to next semester too.  I’ve got a class on meteorology and another on U.S. history.  I’m almost done with generals.  The crazy part about all of this?  Since I went back to school two years ago, I’ve put in 30 credit hours.  Insane, for someone working full time.  I tell you what.


Sven Vermeulen a.k.a. swift (homepage, bugs)
Another hardened month has passed… (December 13, 2012, 08:02 UTC)

… so it’s time for a new update ;-)

Toolchain

GCC 4.8 is still in its stage 3 development phase, so Zorry will send out the patches to the GCC development community when this phase is done. For Gentoo hardened itself, we now support all architectures except for IA64 (which never had SSP).

Full uclibc support is now in place for amd64, i686 and mips32r2: not only is the technological support OK, but stages are now also built automatically to support installations through the regular installation instructions. The next target to get stages automatically built for is armv7a.

Kernel and grSecurity/PaX

Stabilization of 3.6.x is still showing some difficulties. Until those are resolved, we remain stable on 3.5.4. We have a couple of panics in some odd cases, but these will need to be resolved before we can stabilize further.

glibc-2.16 will drop the declarations for PT_PAX (in elf.h), and binutils will no longer cover the PT_PAX phdr either. So we will standardize fully on xattr-based PaX flags. This will get proper focus in the next period to ensure the migration is done correctly. Most of the work on this support focuses on communication towards users and on the pax-utils eclass support.
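
As an aside (not part of the original status report): xattr-based PaX flags live in the user.pax.flags extended attribute, so they can be inspected and set with the regular attr tools. A sketch, with the path purely illustrative and the flag letters following the paxctl convention (e.g. "m" to disable MPROTECT):

getfattr -n user.pax.flags /usr/bin/some-binary
setfattr -n user.pax.flags -v "m" /usr/bin/some-binary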

There was some confusion about whether the tmpfs-xattr patch properly restricts access, but it looks like the PaX patch on mm/shmem.c was based upon the Gentoo patch and enhanced with the needed restrictions, so we can just keep the PaX code.

On USE=”pax_kernel”, which should enable some updates to userland utilities when applications are run under a PaX-enabled kernel, prometheanfire tried to get this accepted as a global USE flag (as many applications might eventually want to trigger on it). However, due to some confusion about the meaning of the USE flag, and the potential need to depend on additional tools, we’re going to stick with a local flag for now.

SELinux

schmitt953 will help in the testing and possible development of SELinux policies for Samba 4.

Furthermore, the userspace utilities have been stabilized (except for the setools-3.3.7-r5+ due to some swig problems, but those have been worked around in setools-3.3.7-r6). Also, the rev8 policies are in the tree and no big problems were reported on them. They are currently still ~arch, but will be stabilized in the next few days. A new rev9 release will be pushed to the hardened-dev overlay soon as well.

Profiles

nvidia is unmasked for the hardened profiles, but still has X and tools USE flags masked, and is only supported on kernels 3.0.x and higher.

Also, the hardened/linux/uclibc/arm/armv7a profile is now available as a development profile. Profiles will be updated as the architectures for ARM are getting supported, so expect more in the next month.

System Integrity

We were waiting for kernel 3.7, which just got released, so we can now start integrating this further. Expect more updates by next meeting.

Docs

For SELinux, some information on USE=”unconfined” has been added to the SELinux handbook. Blueness will also start documenting the xattr PaX support.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
How app-office/libreoffice-bin is made (December 13, 2012, 00:08 UTC)

While usually Gentoo users compile all their packages on their own computers, LibreOffice tends to be too big a bite for that. This is why we provide for amd64 and x86 app-office/libreoffice-bin and app-office/libreoffice-bin-debug, two packages with a precompiled binary installation and its debug information. In the beginning we just used the binaries from the official LibreOffice distribution. Turns out, however, that these binaries bundle a large number of libraries that we have in Gentoo anyway (bug 361695), and for a lot of reasons bundled libraries are bad. So, we decided to roll our own binaries for stable Gentoo installations. Let me describe a bit how it is done.

The build machine, for reference:

Linux pinacolada 3.4.9-gentoo #2 SMP Thu Oct 11 00:05:55 CEST 2012 x86_64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux

On this machine, two chroots are dedicated to the package build process: one a plain amd64 chroot, the other an x86 chroot entered via linux32. Both have no ~arch packages installed at all; only stable keywords are accepted. Both have a very minimal world file, listing only a few packages useful for a maintainer, e.g. gentoolkit or eix. The procedure is identical for both. In addition, in both chroots the compiler flags are chosen for as wide compatibility as possible. This means
# for x86
CFLAGS="-march=i586 -mtune=generic -O2 -pipe -g"
# for amd64
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -g"
and obviously the same for CXXFLAGS. Both chroots also use the portage features splitdebug and compressdebug to make debug information available in a separate directory tree. Prior to build, the existing packages are updated, unnecessary packages are cleaned, and dynamic linking is checked:
emerge --sync
emerge -uDNav world
emerge --depclean --ask

revdep-rebuild
In case any problems occur, these are checked, solved, and the procedure is repeated until all the operations become a no-op.
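For reference, the splitdebug setup mentioned above is just a one-line FEATURES setting in each chroot's make.conf (a sketch):
FEATURES="splitdebug compressdebug"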
The next step is adapting the (rather simplistic) build script to the new LibreOffice version. This mainly means checking for new or discarded USE flags and deciding which value these should have in the binary build. Since LibreOffice 3.6 we also have to decide which bundled extensions to build. The choice of USE flags is influenced by several factors. For example, pdfimport is disabled because the resulting dependency on poppler might lead to broken binaries rather too often.
Then, well, then it's running the build. Generating all 12 flavours (base, kde, gnome, with and without java, for both amd64 and x86) takes roughly a weekend. Time to go out to the Christmas market and sip a Glühwein.
In the meantime, we can also adapt the libreoffice-bin ebuilds for the new version. The defined phase functions are mostly boring, since they only have to copy files into the system. Normally, they can be taken over from the previous version. The dependency declarations, however, have to be copied anew each time from the corresponding app-office/libreoffice ebuild, taking into account the chosen USE flag values. DEPEND is set empty since we're not actually building anything during installation.
Finally, COMMON_DEPEND is extended by an additional block named BIN_COMMON_DEPEND, specific for the binary package. Here, we specify any dependencies that need to be stricter now, where a library upgrade would for a normal package require revdep-rebuild - which is not possible for a binary package. Typical candidates where we have to fix the minimum or exact library version are glibc, icu, or libcmis.
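Schematically the pattern looks like this; the package atoms below are purely illustrative, not the real dependency list:
BIN_COMMON_DEPEND="
	>=sys-libs/glibc-2.15
	=dev-libs/icu-49*
"
COMMON_DEPEND="${BIN_COMMON_DEPEND}
	dev-libs/expat
"
DEPEND=""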
Once the build has finished, 8.8G of files have to be uploaded to the Gentoo server, added to the mirror system, and then given some time to propagate. Then, we can commit the new ebuild, and open a stabilization request bug. Finished!
(Oh and in case you're wondering, new packages are coming tomorrow. :)

December 11, 2012
Matthew Thode a.k.a. prometheanfire (homepage, bugs)

Disclaimer

  1. Keep in mind that ZFS on Linux is not fully supported, for differing values of support
  2. I don't care much for hibernate, normal suspending works.
  3. This is for a laptop/desktop, so I chose multilib.
  4. If you patch the kernel to add ZFS support directly, you cannot share the resulting binary; the CDDL and GPLv2 are not compatible in that way.

Initialization

Make sure your installation media supports ZFS on Linux and installing whatever bootloader is required (UEFI needs media that supports it as well). You can use the Gentoo LiveDVD; look for 12.1 or newer. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.

Formatting

I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary.

General Setup

#setup encrypted partition
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile= -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=on rpool/ROOT
#rootfs
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
#portage
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
#homedirs
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root
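
#sanity-check the dataset layout before proceeding (a quick check, not part of the original procedure)
zfs list -o name,mountpoint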

cd /mnt/gentoo

#Download the latest stage3 and extract it.
wget ftp://gentoo.osuosl.org/pub/gentoo/releases/amd64/autobuilds/current-stage3-amd64-hardened/stage3-amd64-hardened-*.tar.bz2
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /etc/zfs/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-9999.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-9999/work/spl-9999 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-9999/work/zfs-/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff zfs pulls in spl automatically
echo "=sys-kernel/spl-0.6.0_rc12 ~amd64       #needed for zfs support" >> /etc/portage/package.accept_keywords
echo "=sys-fs/zfs-0.6.0_rc12-r1 ~amd64           #needed for zfs support" >> /etc/portage/package.accept_keywords
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal; your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.
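
For reference, these are the default stage3 placeholder lines (your fstab may differ), commented out:

#/dev/BOOT   /boot   ext2   noauto,noatime   1 2
#/dev/ROOT   /       ext3   noatime          0 1
#/dev/SWAP   none    swap   sw               0 0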

You should now have a working encrypted ZFS install.

December 10, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using pam_selinux to switch contexts (December 10, 2012, 20:11 UTC)

With SELinux managing the access controls of applications towards the resources on the system, an important, not-to-be-forgotten component on any Unix/Linux system is the authentication part. Most systems use or support PAM, the Pluggable Authentication Modules, and for SELinux this plays an important role.

Applications that are PAM-enabled use PAM for the authentication of user activities. If this includes setting up an authenticated session, then the “session” part of the PAM configuration is also handled. And for SELinux, this is a nice-to-have, since this means applications that are not SELinux-aware can still enjoy transitions towards specified domains depending on the user that is authenticated.

The “not SELinux-aware” part here is important. By default, applications keep running in one security context for their lifetime. If they invoke an execve or similar call (which is used to start another application or command when used in combination with a fork), then the SELinux policy might trigger an automatic transition if the holy grail of four rules is satisfied:

  1. a transition from the current context to the new one is allowed
  2. the label of the executed command is marked as an entrypoint for the new context
  3. the current context is allowed to execute that application
  4. an automatic transition rule is made from the current context to the new one over the command label

Or, in SELinux policy terms, assuming the domains are source_t and destination_t with the label of the executed file being file_exec_t:

allow source_t destination_t:process transition;
allow destination_t file_exec_t:file entrypoint;
allow source_t file_exec_t:file execute;
type_transition source_t file_exec_t : process destination_t;

If those four settings are valid, then (and only then) can the automatic transition be active.
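
If you want to check that all four rules are present in the loaded policy, the sesearch tool from setools can query them; a sketch using the same placeholder types:

~# sesearch --allow -s source_t -t destination_t -c process -p transition
~# sesearch --allow -s destination_t -t file_exec_t -c file -p entrypoint
~# sesearch --allow -s source_t -t file_exec_t -c file -p execute
~# sesearch --type -s source_t -t file_exec_t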

Sadly, for applications that run user actions (like cron systems, remote logon services and more) this is not sufficient, since there are two major downsides to this “flexibility”:

  1. The rules to transition are static and do not depend on the identity of the user for which activities are launched. The policy cannot deduce this identity from a file context either.
  2. The policy is statically defined: different transitions based on different user identities are not possible.

To overcome this problem, applications can be made SELinux-aware, linking with the libselinux library and invoking the necessary switches themselves (or running the commands with runcon). Luckily, this is where the PAM system comes into play to aid us in setting up this policy behavior.

When an application is PAM-enabled, it will invoke PAM calls to authenticate and possibly set up the user session. The actions that PAM invokes are defined by the PAM configuration files. For instance, for the at daemon:

## /etc/pam.d/atd
#
# The PAM configuration file for the at daemon
#

auth    required        pam_env.so
auth    include         system-services
account include         system-services
session include         system-services

I am not going to dive into the details of PAM in this blog post, so let’s just jump to the session management part. In the above example file, if PAM sets up (or shuts down) a user session for the service (at in our case), it will go through the PAM services that are listed in the system-services definition, which looks like so:

## /etc/pam.d/system-services
auth            sufficient      pam_permit.so
account         include         system-auth
session         optional        pam_loginuid.so
session         required        pam_limits.so 
session         required        pam_env.so 
session         required        pam_unix.so 
session         optional        pam_permit.so

Until now, nothing SELinux-specific is enabled. But if we change the session section of the at service to the following, then the SELinux pam module will be called as well:

session optional        pam_selinux.so close
session include         system-services
session optional        pam_selinux.so multiple open

Now that the SELinux module is called, pam_selinux will try to switch the context of the process based on the definitions in the /etc/selinux/strict/contexts location (substitute strict with the policy type you use). The outcome of this switching can be checked with the getseuser application:

~# getseuser root system_u:system_r:crond_t
seuser:  root, level (null)
Context 0       root:sysadm_r:cronjob_t
Context 1       root:staff_r:cronjob_t

By providing the contexts in configurable files in /etc/selinux/strict/contexts, a non-SELinux-aware application suddenly becomes SELinux-aware (through the PAM support it already has) without the need to patch or even rebuild the application. All that is needed is to allow the security context of the application to switch IDs and roles (as that is by default not allowed), which I believe is offered through the following statements:

domain_subj_id_change_exemption(atd_t)
domain_role_change_exemption(atd_t)

selinux_validate_context(atd_t)
selinux_compute_access_vector(atd_t)
selinux_compute_create_context(atd_t)
selinux_compute_relabel_context(atd_t)
selinux_compute_user_contexts(atd_t)

seutil_read_config(atd_t)
seutil_read_default_contexts(atd_t)

Jeremy Olexa a.k.a. darkside (homepage, bugs)
November 2012 wrap up (December 10, 2012, 13:39 UTC)

To wrap up my November, I finished up my stay in Prague. The below were two-day trips, where I was embracing home-base travel – meaning I would go somewhere then come back.

Before I left the Czech Republic, I also went to Cesky Krumlov, an amazing medieval UNESCO town: castle, brewery, winding streets; very glad I went there. I’m thinking about how to get back there during the summer. Cesky Krumlov is the second most visited city in the Czech Republic. I took the train there and the bus back. The train was quite nice, but there were a few connections; at one point I was following the herd as we went from train to bus to train and I was confused, but it worked out in the end. I got to Krumlov, walked to the hostel Krumlov House (recommended), ate at the delicious Two Marys restaurant, hung out with the staff, and went to a local bar. Then I walked around the castle, went on a brewery tour, relaxed for a few days, and took it all in. I took the bus back to Prague because it was quicker and cheaper.

[Photo: Czech Republic (Prague, Olomouc, Cesky Krumlov), Oct/Nov 2012: the view of the city from the castle]
Cesky Krumlov Pics

Dresden, Germany for a few days. I carpooled here with 3 other Germans who were going home for the weekend, and then couchsurfed. The generosity of people in this world is amazing. I was only there for a few nights. The first night, I walked around and then ate out with my host. The next day, I went to the Botanical Gardens (many pictures for my Grandpa) and the VW factory (no pictures allowed); I’d recommend the glass factory tour to engineering types, it is quite nice. Then I walked around the city some, went into a church, climbed to the top viewing point, and went out to eat again, chatting about worldly topics with my host. She had never had a guest from the USA before. The unique thing about Dresden is that even though it looks old, it is not, since it was rebuilt after the war. I also carpooled back; the Germans love to be efficient.

Dresden Pics

Then we can fast-forward to December 1, when I got on the bus for Vienna. I lost my camera on November 30th, so there are only mental pictures of Vienna. I stayed there for 3 nights. It is an expensive city relative to the Czech Republic and farther east, but I liked it. I stayed at an independent hostel, Hostel Ruthersteiner (recommended as well). I met with my friend Marijn and we walked around the city with his family and colleague. I tried to go to a Viennese opera, but there was only standing room and I didn’t feel like standing still for 2.5 hours, so of course I went to the Viennese Christmas markets instead and enjoyed many a glühwein (hot wine). I also toured the UN headquarters in Vienna and had lunch with my friend there. I could imagine myself going back there later in life to soak in the cultural activities that are more suited for older people or families.

Now, I am in Budapest. More on that later…

December 09, 2012
How to find issues related to LINGUAS (December 09, 2012, 18:11 UTC)

Usually, I want to find all possible issues with the LINGUAS variable, so in my arch testing environment I have enabled all linguas that the main tree uses.
To keep my make.conf cleaner, I use source together with another file, called linguas.conf.

So, this is my /etc/portage/linguas.conf:
LINGUAS="am fil zh af ca cs da de el es et gl hu nb nl pl pt ro ru sk sl sv uk bg cy en eo fo ga he id ku lt lv mk ms nn sw tn zu ja zh_TW en_GB pt_BR ko zh_CN ar en_CA fi kk oc sr tr fa wa nds as be bn bn_BD bn_IN en_US es_AR es_CL es_ES es_MX eu fy fy_NL ga_IE gu gu_IN hi hi_IN is ka kn ml mr nn_NO or pa pa_IN pt_PT rm si sq sv_SE ta ta_LK te th vi ast dz km my om sh ug uz ca@valencia sr@ijekavian sr@ijekavianlatin sr@latin csb hne mai se es_LA fr_CA zh_HK br la no es_CR et_EE sr_CS bo hsb hy mn sr@Latn lb ne bs tg uz@cyrillic xh be_BY brx ca_XV dgo en_ZA gd kok ks ky lo mni nr ns pap ps rw sa_IN sat sd ss st sw_TZ ti ts ve mt ia az me tl ak hy_AM lg nso son ur_PK it fr nb nb_NO hr nan ur tk cs_CZ da_DK de_1901 de_CH en_AU lt_LT pl_PL sa sk_SK th_TH ta_IN tt sco ha mi ven ar_SY el_GR ro_RO ru_RU sl_SI uk_UA vi_VN ar_SY te_IN de_DE es_VE fa_IR fr_FR hu_HU id_ID it_IT ja_JP ka_GE nl_NL sr_BA sr_RS ca_ES fi_FI he_IL jv ru_gold yi eu_ES"

Now you need to add the following to your make.conf:
source /etc/portage/linguas.conf
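
To double-check that the variable is actually picked up, emerge --info should show the full list:

emerge --info | grep LINGUAS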

I will update this post if there will be new linguas/languages in the future.

Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
g-octave news: the octave overlay (December 09, 2012, 16:13 UTC)

After having lots of problems with people who can't use g-octave properly, sometimes because they don't seem able to read documentation or elog messages and/or to just ask, and after a suggestion from Sebastien Fabbro (bicatali), I wrote some simple scripts that update the g-octave package database and an overlay, using g-octave and a cronjob.

I built a virtual machine on my own server and set up a weekly cronjob, that will hopefully keep the packages up-to-date.

The overlay is available on Github:

https://github.com/rafaelmartins/octave-overlay

To install it, follow the instructions available in the README file. The overlay is available on layman, named octave.
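
With layman, that boils down to (assuming layman itself is already installed and configured):

layman -a octave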

Packages with unresolvable dependencies, e.g. packages with dependencies unavailable on gentoo-x86, aren't available in the overlay. If you find some package that is supposed to work and isn't available on the overlay please open an issue on Github, and I'll take a look ASAP.

As a bonus, g-octave code itself was moved to Github:

https://github.com/rafaelmartins/g-octave

Feel free to submit pull requests if you think that something is broken and you know how to fix it.

And as another bonus, the g-octave website (http://g-octave.org/) is now running on the Read the Docs service, which is way more reliable than my own server. This should avoid the recent documentation downtimes.

December 08, 2012
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using stunnel for mutual authentication (December 08, 2012, 12:24 UTC)

Sometimes services do not support SSL/TLS, or if they do, they do not support using mutual authentication (i.e. requesting that the client also provides a certificate which is trusted by the service). If that is a requirement in your architecture, you can use stunnel to provide this additional SSL/TLS layer.

As an example, I have a mail server running on localhost, and I want to provide SSMTP services with mutual authentication on top of this service, using stunnel. First of all, I provide two certificates and private keys that are both signed by the same CA, and keep the CA certificate close as well:

  • client.key is the private key for the client
  • client.pem is the certificate for the client (which contains the public key and CA signature)
  • server.key and server.pem are the same but for the server
  • root-genfic.crt is the certificate of the signing CA

First of all, we set up the stunnel, listening on port 1465 (as 465 requires the stunnel service to run as root, which I’d rather not) and forwarding towards 127.0.0.1:25:

cert = /etc/ssl/services/stunnel/server.pem
key = /etc/ssl/services/stunnel/server.key
setuid = stunnel
setgid = stunnel
pid = /var/run/stunnel/stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
verify = 2 # This enables the mutual authentication
CAfile = /etc/ssl/certs/root-genfic.crt

[smtp]
accept = 1465
connect = 127.0.0.1:25

To test out mutual authentication this way, I used the following command-line snippet. The delays between the lines are there because the mail client is supposed to wait for the mail server to give its reply; if it doesn’t, the data gets lost. I’m sure this can be made easier (with netcat I could just use "-i 1" to print a line with a one-second delay), but it works ;-)

~$  (sleep 1; echo "EHLO localdomain"; sleep 1; echo "MAIL FROM:remote@test.localdomain"; \
sleep 1; echo "RCPT TO:user@localhost"; sleep 1; echo "DATA"; sleep 1; cat TEMPFILE) | \
openssl s_client -connect 192.168.100.102:1465 -crlf -ign_eof -ssl3 -key client.key -cert client.pem

The TEMPFILE file contains the email content (you know, Subject, From, To, other headers, data, …).

If the provided certificate isn’t trusted, then you’ll find the following in the log file (on Gentoo, that’s /var/log/daemon.log by default, but you can set up logging in stunnel as well):

Dec  8 13:17:32 testsys stunnel: LOG7[20237:2766895953664]: Starting certificate verification: depth=0, /C=US/ST=California/L=Santa Barbara/O=SSL Server/OU=For Testing Purposes Only/CN=localhost/emailAddress=root@localhost
Dec  8 13:17:32 testsys stunnel: LOG4[20237:2766895953664]: CERT: Verification error: unable to get local issuer certificate
Dec  8 13:17:32 testsys stunnel: LOG4[20237:2766895953664]: Certificate check failed: depth=0, /C=US/ST=California/L=Santa Barbara/O=SSL Server/OU=For Testing Purposes Only/CN=localhost/emailAddress=root@localhost
Dec  8 13:17:32 testsys stunnel: LOG7[20237:2766895953664]: SSL alert (write): fatal: bad certificate
Dec  8 13:17:32 testsys stunnel: LOG3[20237:2766895953664]: SSL_accept: 140890B2: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned

When a trusted certificate is shown, the connection goes through.

Finally, if you not only want to validate that the certificate is trusted, but also only want to accept a given set of certificates, you can set the stunnel variable verify to 3. If you set it to 4, it will not check the CA and will only let a connection through if the presented certificate is one of stunnel’s trusted certificates.
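
As a sketch (the file name is illustrative), the relevant part of the configuration would then become:

verify = 4
CAfile = /etc/ssl/services/stunnel/trusted-clients.pem

where trusted-clients.pem holds the concatenated certificates you are willing to accept.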

Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita becomes a part of the KDE project (December 08, 2012, 07:58 UTC)

I'm happy to announce that Trojitá, a fast IMAP e-mail client, has become part of the KDE project. You can find it under extragear/pim/trojita.

Why moving under the KDE umbrella?

After reading KDE's manifesto, it became obvious that the KDE project's values align quite well with what we want to achieve in Trojitá. Becoming part of a bigger community is a logical next step -- it will surely make Trojitá more visible, and the KDE community will get a competing e-mail client for those who might not be happy with the more established offerings. Competition is good, people say.

But I don't want to install KDE!

You don't have to. Trojitá will remain usable without KDE; you won't need it for running Trojitá, nor for compiling the application. We don't use any KDE-specific classes, so we do not link to kdelibs at all. In future, I hope we will be able to offer an optional feature to integrate with KDE more closely, but there are no plans to make Trojitá require the KDE libraries.

How is it going?

Extremely well! Five new people have already contributed code to Trojitá, and the localization team behind KDE did a terrific job providing translations into eleven languages (and I had endless hours of fun hacking together an lconvert-based setup to make sure that Trojitá's Qt-based translations work well with KDE's gettext-based workflow -- oh boy, was that fun!). Trojitá also takes part in the Google Code-in project; Mohammed Nafees has already added a feature for multiple sender identities. I also had a great chat with the KDE PIM maintainers about sharing our code in the future.

What's next?

A lot of work is still in front of us -- from boring housekeeping like moving to KDE's Bugzilla for issue tracking to adding exciting (and complicated!) new features like support for multiple accounts. But the important part is that Trojitá is alive and progressing swiftly -- features are being added, bugs are getting fixed, and other people besides me are actually using the application on a daily basis. According to Ohloh's statistics, we have a well-established, mature codebase maintained by a large development team with increasing year-over-year commits.

Interested?

If you are interested in helping out, check out the instructions and just start hacking!

Cheers,
Jan