
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Erik Mackdanz
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jauhien Piatlicki
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Peter Wilmott
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Victor Ostorga
. Vikraman Choudhury
. Vlastimil Babka
. Yury German
. Zack Medico

Last updated:
May 25, 2016, 18:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

May 24, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a brief call for help.

Is there anyone out there who uses a recent kmail (I'm running 16.04.1 since yesterday, before that it was the latest KDE4 release) with a Novell Groupwise IMAP server?

I'm trying hard, I really like kmail and would like to keep using it, but for me right now it's extremely unstable (to the point of being unusable) - and I suspect by now that the server IMAP implementation is at least partially to blame. In the past I've seen definitively broken server behaviour (like negative IMAP uids), the feared "Multiple merge candidates" keeps popping up again and again, and the IMAP resource becomes unresponsive every few minutes...

So any datapoints from other kmail plus Groupwise IMAP users would be very much appreciated.

For reference, the server here is Novell Groupwise 2014 R2, version 14.2.0 11.3.2016, build number 123013.

Thanks!!!

May 23, 2016
Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)
GStreamer and Meson: A New Hope (May 23, 2016, 10:19 UTC)

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate Xcode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.
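
For an even quicker taste, a bare-minimum meson.build for a single-file C program (my own sketch, not taken from the linked tutorial) is just:

project('hello', 'c')
executable('hello', 'hello.c')

Configuring and building is then a matter of meson builddir && ninja -C builddir.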

May 21, 2016
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
balde internals, part 1: Foundations (May 21, 2016, 15:25 UTC)

For those of you that don't know, as I never actually announced the project here: since 2013 I have been working on a microframework for developing web applications in C. It is called balde, and I consider its code ready for a formal release now, despite not having all the features I planned for it. Unfortunately its documentation is not good enough yet.

I haven't worked on it for quite some time, so I don't remember how everything works and can't write proper documentation. To make this easier, I'm starting a series of posts here in this blog, describing the internals of the framework and the design decisions I made when creating it, so I can gradually remember how it works. Hopefully at the end of the series I'll be able to integrate the posts with the official documentation of the project and release it! \o/

Before the release, users wanting to try balde must install it manually from GitHub or using my Gentoo overlay (the package is called net-libs/balde there). The previously released versions are very old and deprecated at this point.

So, I'll start by talking about the foundations of the framework. It is based on GLib, which is the base library used by Gtk+ and GNOME applications. balde uses it as a utility library, without implementing classes or relying on advanced features of the library. That's because I plan to migrate away from GLib in the future, reimplementing the required functionality in a BSD-licensed library. I have a list of functions that must be implemented to achieve this objective in the wiki, but this is not something with high priority for now.
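
To give an idea of the flavor of the framework, here is a hello-world reconstructed from memory of the project's examples; treat the exact function names and signatures as approximate, since the official documentation is authoritative:

#include <balde.h>

/* request handler returning a plain-text response */
static balde_response_t*
hello(balde_app_t *app, balde_request_t *request)
{
    return balde_make_response("Hello, World!");
}

int
main(int argc, char **argv)
{
    balde_app_t *app = balde_app_new();
    balde_app_add_url_rule(app, "hello", "/", BALDE_HTTP_GET, hello);
    balde_app_run(app, argc, argv);
    balde_app_free(app);
    return 0;
}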

Another important foundation of the framework is the template engine. Instead of parsing templates at runtime, balde parses templates at build time, generating C code that is compiled into the application binary. The template engine is based on a recursive-descent parser, built with a parsing expression grammar. The grammar is simple enough to be easily extended, and implements most of the features needed by a basic template engine. The template engine is implemented as a binary that reads the templates and generates the C source files. It is called balde-template-gen and will be the subject of a dedicated post in this series.

A notable deficiency of the template engine is the lack of iterators, like for and while loops. This is a side effect of another basic characteristic of balde: all the data parsed from requests and sent to responses is stored as strings in the internal structures, and all the public interfaces follow the same principle. That means that the current architecture does not allow passing a list of items to a template. And that also means that users must handle type conversions from and to strings, as needed by their applications.

Static files are also converted to C code and compiled into the application binary, but here balde just relies on GLib's GResource infrastructure. This is something that should be reworked in the future too. Integrating templates and static resources, implementing a concept of themes, is something that I want to do as soon as possible.

To make it easier for newcomers to get started with balde, it comes with a binary that can create a skeleton project using GNU Autotools, and with basic unit test infrastructure. The binary is called balde-quickstart and will be the subject of a dedicated post here as well.

That's all for now.

In the next post I'll talk about how URL routing works.

May 16, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
How LINGUAS are thrice wrong! (May 16, 2016, 06:34 UTC)

The LINGUAS environment variable serves two purposes in Gentoo. On one hand, it’s the USE_EXPAND flag group for USE flags controlling installation of localizations. On the other, it’s a gettext-specific environment variable controlling installation of localizations in some of the build systems supporting gettext. Fun fact is, both uses are simply wrong.

Why LINGUAS as an environment variable is wrong?

Let’s start with the upstream-blessed LINGUAS environment variable. If set, it limits localization files installed by autotools+gettext-based build systems (and some more) to the subset matching specified locales. At first, it may sound like a useful feature. However, it is an implicit feature, and therefore one causing a lot of confusion for the package manager.
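
For reference, on Gentoo this variable typically ends up set system-wide in make.conf:

# /etc/portage/make.conf
LINGUAS="en de"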

Long story short, in this context the package manager does not know anything about LINGUAS. It’s just a random environment variable that has some value and possibly may be saved somewhere in package metadata. However, this value can actually affect the installed files in a hardly predictable way. So, even if package managers actually added some special meaning to LINGUAS (which would be non-PMS compliant), it still would not be good enough.

What does this practically mean? It means that if I set LINGUAS to some value on my system, then most of the binary packages produced on it suddenly have files stripped, as compared to non-LINGUAS builds. If I installed such a binary package on some other system, it would match the LINGUAS of the build host rather than the install host. And this means the binary packages are simply incorrect.

Even worse, any change to LINGUAS cannot be propagated correctly. Even if the package manager decided to rebuild packages based on changes in LINGUAS, it has no way of knowing which locales were supported by a package, and whether LINGUAS was used at all. So you end up rebuilding all installed packages, just in case.

Why LINGUAS USE flags are wrong?

So, how do we solve all those problems? Of course, we introduce explicit LINGUAS flags. This way, the developer is expected to list all supported locales in IUSE, the package manager can determine the enabled localizations and match binary packages correctly. All seems fine. Except, there are two problems.

The first problem is that it is cumbersome. Figuring out supported localizations and adding a dozen flags on a number of packages is time-consuming. What’s even worse, those flags need to be maintained once added. Which means you have to check supported localizations for changes on every version bump. Not all developers do that.

The second problem is that it is… a QA violation, most of the time. We already have quite a clear policy that USE flags are not supposed to control installation of small files with no explicit dependencies — and most of the localization files are exactly that!

Let me remind you why we have that policy. There are two reasons: rebuilds and binary packages.

Rebuilds are bad because every time you change LINGUAS, you end up rebuilding relevant packages, and those can be huge. You may think it uncommon — but just imagine you’ve finally finished building your new shiny Gentoo install, and noticed that you forgot to enable the localization. And guess what! You have to build a number of packages, again.

Binary packages are even worse since they are tied to a specific USE flag combination. If you build a binary package with specific LINGUAS, it can only be installed on hosts with exactly the same LINGUAS. While it would be trivial to strip localizations from an installed binary package, you have to build a fresh one. And with a dozen LINGUAS flags… you end up having thousands of possible binary package variants, if not more.

Why EAPI 5 makes things worse… or better?

Reusing the LINGUAS name for the USE_EXPAND group looked like a good idea. After all, the value would end up in the ebuild environment for use by the build system, and in most of the affected packages, LINGUAS worked out of the box with no ebuild changes! Except that… it wasn’t really guaranteed to work before EAPI 5.

In earlier EAPIs, LINGUAS could contain pretty much anything, since no special behavior was reserved for it. However, starting with EAPI 5 the package manager guarantees that it will only contain those values that correspond to enabled flags. This is a good thing, after all, since it finally makes LINGUAS work reliably. It has one side effect though.

Since LINGUAS is reduced to enabled USE flags, and enabled USE flags can only contain defined USE flags… it means that in any ebuild missing LINGUAS flags, LINGUAS should be effectively empty (yes, I know Portage does not do that currently, and it is a bug in Portage). To make things worse, this means set to an empty value rather than unset. In other words, it means disabling localization completely.

This way, a small QA issue of implicitly affecting installed localization files turned into a bigger issue of localizations suddenly not being installed at all. Which in turn can’t be fixed without introducing a proper set of LINGUAS flags everywhere, causing other kinds of QA issues and additional maintenance burden.

What would be the good solution, again?

First of all, kill LINGUAS. Either make it unset for good (and this won’t be easy since PMS kinda implies making all PM-defined variables read-only), or disable any special behavior associated with it. Make the build system compile and install all localizations.

Then, use INSTALL_MASK. It’s designed to handle this. It strips files from installed systems while preserving them in binary packages. Which means your binary packages are now more portable, and every system you install them to will get correct localizations stripped. Isn’t that way better than rebuilding things?
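
For example, a single make.conf entry is enough to strip all gettext localizations from the live filesystem while keeping them in binary packages:

# /etc/portage/make.conf
# matching files are not merged to the live filesystem,
# but remain part of the binary package
INSTALL_MASK="/usr/share/locale"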

Now, is that going to happen? I doubt it. People are rather going to focus on claiming that buggy Portage behavior was good, that QA issues are fine as long as I can strip some small files from my system in the ‘obvious’ way, that the specification should be changed to allow a corner case…

May 13, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


Whew, I know that’s a long title for a post, but I wanted to make sure that I mentioned every term so that people having the same problem could readily find the post that explains what solved it for me. For some time now (ever since the 346.x series [340.76, which was the last driver that worked for me, was released on 27 January 2015]), I have had a problem with the NVIDIA Linux Display Drivers (known as nvidia-drivers in Gentoo Linux). The problem that I’ve experienced is that the newer drivers would, upon starting an X session, immediately clock up to Performance Level 2 or 3 within PowerMizer.

Before using these newer drivers, the Performance Level would only increase when it was really required (3D rendering, HD video playback, et cetera). I probably wouldn’t have even noticed that the Performance Level was changing, except that it would cause the GPU fan to spin faster, which was noticeably louder in my office.

After scouring the interwebs, I found that I was not the only person to have this problem. For reference, see this article, and this one about locking to certain Performance Levels. However, I wasn’t able to find a solution for the exact problem that I was having. If you look at the screenshot below, you’ll see that the Performance Level is set at 2, which was causing the card to run quite hot (79°C) even when it wasn’t being pushed.

NVIDIA Linux drivers stuck on high Performance Level in Xorg

It turns out that I needed to add some options to my X server configuration. Unfortunately, I was originally making changes in /etc/X11/xorg.conf, but they weren’t being honoured. I added the following lines to /etc/X11/xorg.conf.d/20-nvidia.conf, and the changes took effect:


Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

The RegistryDwords option was what ultimately fixed the problem for me. More information about the NVIDIA drivers can be found in their README and Installation Guide, and in particular, these settings are described on the X configuration options page. The PowerMizerDefaultAC setting may seem like it is for laptops that are plugged in to AC power, but as this system was a desktop, I found that it was always seen as being “plugged in to AC power.”
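
As a side note, if you’d like to confirm the current performance level from a terminal rather than from the nvidia-settings GUI, the attribute can be queried on the command line. The attribute name below is from the NV-CONTROL documentation as best I recall, so verify it against your driver version:

nvidia-settings -q '[gpu:0]/GPUCurrentPerfLevel'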

As you can see from the screenshots below, these settings did indeed fix the PowerMizer Performance Levels and subsequent temperatures for me:

NVIDIA Linux drivers Performance Level after Xorg settings

Whilst I was adding X configuration options, I also noticed that Coolbits (search for “Coolbits” on that page) was supported with the Linux driver. Here’s the excerpt about Coolbits for version 364.19 of the NVIDIA Linux driver:

Option “Coolbits” “integer”
Enables various unsupported features, such as support for GPU clock manipulation in the NV-CONTROL X extension. This option accepts a bit mask of features to enable.

WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer’s design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer’s operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.

When “2” (Bit 1) is set in the “Coolbits” option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.

When “4” (Bit 2) is set in the “Coolbits” option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.

When “8” (Bit 3) is set in the “Coolbits” option value, the PowerMizer page in the nvidia-settings control panel will display a table that allows setting per-clock domain and per-performance level offsets to apply to clock values. This is allowed on certain GeForce GPUs. Not all clock domains or performance levels may be modified.

When “16” (Bit 4) is set in the “Coolbits” option value, the nvidia-settings command line interface allows setting GPU overvoltage. This is allowed on certain GeForce GPUs.

When this option is set for an X screen, it will be applied to all X screens running on the same GPU.

The default for this option is 0 (unsupported features are disabled).

I found that I would personally like to have the options enabled by “4” and “8”, and that one can combine Coolbits by simply adding them together. For instance, the ones I wanted (“4” and “8”) added up to “12”, so that’s what I put in my configuration:


Section "Device"
     Identifier    "Device 0"
     Driver        "nvidia"
     VendorName    "NVIDIA Corporation"
     BoardName     "GeForce GTX 470"
     Option        "Coolbits" "12"
     Option        "RegistryDwords" "PowerMizerEnable=0x1; PowerMizerDefaultAC=0x3;"
EndSection

and that resulted in the following options being available within the nvidia-settings utility:

NVIDIA Linux drivers Performance Level after Xorg settings

Though the Coolbits portions aren’t required to fix the problems that I was having, I find them to be helpful for maintenance tasks and configurations. I hope, if you’re having problems with the NVIDIA drivers, that these instructions help give you a better understanding of how to work around any issues you may face. Feel free to comment if you have any questions, and we’ll see if we can work through them.

Cheers,
Zach

May 10, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)
New Company: Edge Security (May 10, 2016, 14:48 UTC)

I've just launched a website for my new information security consulting company, Edge Security. We're expert hackers, with a fairly diverse skill set and a lot of experience. I mention this here because in a few months we plan to release an open-source kernel module for Linux called WireGuard. No details yet, but keep your eyes open in this space.

May 08, 2016
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
How to deal with portage blockers (May 08, 2016, 21:27 UTC)

TL;DR:

  • use --autounmask=n
  • use --backtrack=1000 (or more)
  • package.mask all outdated packages that cause conflicts (usually requires more iterations)
  • run world update

The problem

Occasionally (more frequently on Haskell packages) portage starts taking a long time only to tell you it was not able to figure out the upgrade path.

Some people suggest a "wipe-blockers-and-reinstall" solution. This post will try to explain how to actually upgrade (or find out why it’s not possible) without destroying your system.

Real-world example

I’ll take a real-world example from Gentoo’s bugzilla: bug 579520, where the original upgrade error looked like this:

!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

x11-libs/gtk+:3

  (x11-libs/gtk+-3.18.7:3/3::gentoo, ebuild scheduled for merge) pulled in by
    (no parents that aren't satisfied by other packages in this slot)

  (x11-libs/gtk+-3.20.0:3/3::gnome, installed) pulled in by
    >=x11-libs/gtk+-3.19.12:3[introspection?] required by (gnome-base/nautilus-3.20.0:0/0::gnome, installed)
    ^^              ^^^^^^^^^
    >=x11-libs/gtk+-3.20.0:3[cups?] required by (gnome-base/gnome-core-libs-3.20.0:3.0/3.0::gnome, installed)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.4:3[introspection?] required by (media-video/totem-3.20.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.0:3[introspection?] required by (app-editors/gedit-3.20.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.5:3 required by (gnome-base/dconf-editor-3.20.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.6:3[introspection?] required by (x11-libs/gtksourceview-3.20.0:3.0/3::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.3:3[introspection,X] required by (media-gfx/eog-3.20.0:1/1::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.8:3[X,introspection?] required by (x11-wm/mutter-3.20.0:0/0::gnome, installed)
    ^^              ^^^^^^^^
    >=x11-libs/gtk+-3.19.12:3[X,wayland?] required by (gnome-base/gnome-control-center-3.20.0:2/2::gnome, installed)
    ^^              ^^^^^^^^^
    >=x11-libs/gtk+-3.19.11:3[introspection?] required by (app-text/gspell-1.0.0:0/0::gnome, ebuild scheduled for merge)
    ^^              ^^^^^^^^^

x11-base/xorg-server:0

  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-video-nouveau-1.0.12:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-video-fbdev-0.4.4:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-input-evdev-2.10.1:0/0::gentoo, installed)
                        ^^^^^^^^^^

  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-video-vesa-2.3.4:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-input-synaptics-1.8.2:0/0::gentoo, installed)
                        ^^^^^^^^^^
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-input-mouse-1.9.1:0/0::gentoo, installed)
                        ^^^^^^^^^^

app-text/poppler:0

  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
    >=app-text/poppler-0.32:0/51=[cxx,jpeg,lcms,tiff,xpdf-headers(+)] required by (net-print/cups-filters-1.5.0:0/0::gentoo, installed)
                           ^^^^^^
    >=app-text/poppler-0.16:0/51=[cxx] required by (app-office/libreoffice-5.0.5.2:0/0::gentoo, installed)
                           ^^^^^^
    >=app-text/poppler-0.12.3-r3:0/51= required by (app-text/texlive-core-2014-r4:0/0::gentoo, installed)
                                ^^^^^^

  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by
    >=app-text/poppler-0.33[cairo] required by (app-text/evince-3.20.0:0/evd3.4-evv3.3::gnome, ebuild scheduled for merge)
    ^^                 ^^^^

net-fs/samba:0

  (net-fs/samba-4.2.9:0/0::gentoo, installed) pulled in by
    (no parents that aren't satisfied by other packages in this slot)

  (net-fs/samba-3.6.25:0/0::gentoo, ebuild scheduled for merge) pulled in by
    net-fs/samba[smbclient] required by (media-sound/xmms2-0.8-r2:0/0::gentoo, ebuild scheduled for merge)
                 ^^^^^^^^^


It may be possible to solve this problem by using package.mask to
prevent one of those packages from being selected. However, it is also
possible that conflicting dependencies exist such that they are
impossible to satisfy simultaneously.  If such a conflict exists in
the dependencies of two different packages, then those packages can
not be installed simultaneously.

For more information, see MASKED PACKAGES section in the emerge man
page or refer to the Gentoo Handbook.


emerge: there are no ebuilds to satisfy ">=dev-libs/boost-1.55:0/1.57.0=".
(dependency required by "app-office/libreoffice-5.0.5.2::gentoo" [installed])
(dependency required by "@selected" [set])
(dependency required by "@world" [argument])

A lot of text! Let’s trim it down to the essential detail first (AKA how to actually read it). I’ve dropped the "cause" of the conflicts from the previous listing and left only the problematic packages:

!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

x11-libs/gtk+:3
  (x11-libs/gtk+-3.18.7:3/3::gentoo, ebuild scheduled for merge) pulled in by
  (x11-libs/gtk+-3.20.0:3/3::gnome, installed) pulled in by

x11-base/xorg-server:0
  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by

app-text/poppler:0
  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by

net-fs/samba:0
  (net-fs/samba-4.2.9:0/0::gentoo, installed) pulled in by
  (net-fs/samba-3.6.25:0/0::gentoo, ebuild scheduled for merge) pulled in by

emerge: there are no ebuilds to satisfy ">=dev-libs/boost-1.55:0/1.57.0=".

That is more manageable. We have 4 "conflicts" here and one "missing" package.

Note: all the listed requirements under "conflicts" (in the previous listing) suggest these are >= depends. Thus the dependencies themselves can’t block the upgrade path, and the error message is misleading.

For us it means that portage leaves old versions of gtk+ (and others) for some unknown reason.

To get an idea of how to explore that situation, we need to somehow hide the outdated packages from portage and retry the update. It will be the same as uninstalling and reinstalling a package, but without actually doing it. :)

package.mask does exactly that. To make the package hidden for real we will also need to disable autounmask: --autounmask=n (the default is y).

Let’s hide the outdated x11-libs/gtk+-3.18.7 package from portage:

# /etc/portage/package.mask
<x11-libs/gtk+-3.20.0:3

The blocker list became shorter but still does not make sense:

x11-base/xorg-server:0
  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by
  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
                        ^^^^^^^^^^

app-text/poppler:0
  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by

Blocking more old stuff:

# /etc/portage/package.mask
<x11-libs/gtk+-3.20.0:3
<x11-base/xorg-server-1.18.3
<app-text/poppler-0.42.0

The output is:

[blocks B      ] <dev-util/gdbus-codegen-2.48.0 ("<dev-util/gdbus-codegen-2.48.0" is blocking dev-libs/glib-2.48.0)

* Error: The above package list contains packages which cannot be
* installed at the same time on the same system.

 (dev-libs/glib-2.48.0:2/2::gentoo, ebuild scheduled for merge) pulled in by

 (dev-util/gdbus-codegen-2.46.2:0/0::gentoo, ebuild scheduled for merge) pulled in by

That’s our blocker! The stable <dev-util/gdbus-codegen-2.48.0 is blocking the unstable dev-libs/glib-2.48.0.

The solution is to ~arch keyword dev-util/gdbus-codegen-2.48.0:

# /etc/portage/package.accept_keywords
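# with no keywords listed, this accepts ~arch for the current architecture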
dev-util/gdbus-codegen

And run world update.
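
With the masks and keyword in place, the world update from the TL;DR looks something like:

emerge --ask --update --deep --newuse --autounmask=n --backtrack=1000 @world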

Simple!


April 28, 2016
GSoC 2016: Five projects accepted (April 28, 2016, 00:00 UTC)

We are excited to announce that 5 students have been selected to participate with Gentoo during the Google Summer of Code 2016!

You can follow our students’ progress on the gentoo-soc mailing list and chat with us regarding our GSoC projects via IRC in #gentoo-soc on freenode.
Congratulations to all the students. We look forward to their contributions!


Accepted projects

Clang native support - Lei Zhang: Bring native clang/LLVM support in Gentoo.

Continuous Stabilization - Pallav Agarwal: Automate the package stabilization process using continuous integration practices.

kernelconfig - André Erdmann: Consistently generate custom Linux kernel configurations from curated sources.

libebuild - Denys Romanchuk: Create a common shared C-based implementation for package management and other ebuild operations in the form of a library.

Gentoo-GPG - Angelos Perivolaropoulos: Code the new Meta-Manifest system for Gentoo and improve Gentoo Keys capabilities.

Events: Gentoo Miniconf 2016 (April 28, 2016, 00:00 UTC)

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University.

Want to participate? The call for papers is open until 1 August 2016.

April 25, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University in Prague (FIT ČVUT).

The call for papers is now open; you can submit your session proposal until 1 August 2016. Want to have a meeting, discussion, presentation, workshop, do ebuild hacking, or anything else? Tell us!


April 15, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

Those of you who use my Gentoo repository mirrors may have noticed that the repositories are constructed of original repository commits automatically merged with cache updates. While the original commits are signed (at least in the official Gentoo repository), the automated cache updates and merge commits are not. Why?

Actually, I have wondered about signing them more than once, and even discussed it a bit with Kristian. However, each time I decided against it. I was seriously concerned that those automatic signatures would not be able to provide a sufficient level of security — and could cause users to believe the commits are authentic even if they were not. I think it would be useful to explain why.

Verifying the original commits

While this may not be entirely clear, by signing the merge commits I would implicitly approve the original commits as well. While this might be worked around via some kind of policy requesting the developer to perform additional verification, such a policy would be impractical and confusing. Therefore, it only seems reasonable to verify the original commits before signing merges.

The problem with that is that we still do not have an official verification tool for repository commits. There’s the whole Gentoo-keys project that aims to eventually solve the problem but it’s not there yet. Maybe this year’s Summer of Code will change that…

Not having official verification routines, I would have to implement my own. I’m not saying it would be that hard — but it would always be semi-official, at best. Of course, I could spend a day or two contributing the needed code to Gentoo-keys and preventing some student from getting the $5500 of Google money… but that would be the non-enterprise way of solving the urgent problem.

Protecting the signing key

The other important point is the security of the key used to sign commits. For the whole effort to make any sense, it needs to be strongly protected against being compromised. Keeping the key (or even a subkey) unencrypted on the server really diminishes the whole effort (I’m not pointing fingers here!).

Basic rules first: the primary key is kept off-line and used only to generate a signing subkey. The signing subkey is stored encrypted on the server and used via gpg-agent, so that it is never kept unencrypted outside memory. All nice and shiny.

The problem is — this means someone needs to type the password in. Which means there needs to be an interactive bootstrap process. Which means every time the server reboots for some reason, or gpg-agent dies, or whatever, the mirrors stop and wait for me to come and type the password in. Hopefully when I’m around some semi-secure device.

Protecting the software

Even all those points considered and solved satisfiably, there’s one more issue: the software. I won’t be running all those scripts in my home. So it’s not just me you have to trust — you have to trust all other people with administrative access to the machine that’s running the scripts, you have to trust the employees of the hosting company that have physical access to the machine.

I mean, any one of them can go and attempt to alter the data somehow. Even if I tried hard, I wouldn’t be able to protect my scripts from this. In the worst case, they are going to add a valid, verified signature to data that has been altered externally. What’s the value of that signature then?

And this is the exact reason why I don’t do automatic signatures.

How to verify the mirrors then?

So if automatic signatures are not the way, how can you verify the commits on repository mirrors? The answer is not that complex.

As I’ve mentioned, the mirrors use merge commits to combine metadata updates with original repository commits. What’s important is that this preserves the original commits, along with their valid signatures and therefore provides a way to verify them. What’s the use of that?

Well, you can look for the last merge commit to find the matching upstream commit. Then you can use the usual procedure to verify the upstream commit. And then, you can diff it against the mirror HEAD to see that only caches and other metadata have been altered. While this doesn’t guarantee that the alterations are genuine, the danger coming from them is rather small (if any).
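
In command form, the procedure sketched above boils down to roughly this (my sketch, assuming a git checkout of the mirror and git 2.1+ for verify-commit; which parent of the merge is the upstream commit depends on the merge order, so replace <upstream-commit> with the relevant parent hash):

git log --merges -n 1 --format='%H %P'   # find the last merge commit and its parents
git verify-commit <upstream-commit>      # verify the upstream parent's signature
git diff <upstream-commit> HEAD          # only caches and other metadata should differ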

April 11, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


A couple weeks ago, I decided to update my primary laptop’s kernel from 4.0 to 4.5. Everything went smoothly with the exception of my wireless networking. This particular laptop uses a wifi chipset that is controlled by the Intel Wireless DVM Firmware:


# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

According to Intel’s Linux support for wireless networking page, I need kernel support for the ‘iwlwifi’ driver. I remembered this requirement from building the previous kernel, so I included it in the new 4.5 kernel. The new kernel had some additional options, though, and they were:


[*] Intel devices
...
< > Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
< > Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->

As previously mentioned, the Kernel page for iwlwifi indicates that I need the DVM module for my particular chipset, so I selected it. Previously, I chose to build support for the driver into the kernel, and then use the firmware for the device. However, this time, I noticed that it wasn’t loading:


[ 3.962521] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 3.970843] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2
[ 3.976457] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
[ 3.996628] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG enabled
[ 3.996640] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[ 3.996647] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[ 3.996656] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[ 3.996828] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 4.306206] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
[ 9.632778] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633025] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633133] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 9.898531] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898803] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898906] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.605734] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.605983] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.606082] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.873465] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873831] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873971] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0

The strange thing, though, is that the firmware was right where it should be:


# ls -lh /lib/firmware/
total 664K
-rw-r--r-- 1 root root 662K Mar 26 13:30 iwlwifi-6000g2a-6.ucode

After digging around for a while, I finally figured out the problem. The kernel was trying to load the firmware for this device/driver before it was actually available. There are definitely ways to build the firmware into the kernel image as well, but instead of going that route, I just chose to rebuild my kernel with this driver as a module (which is actually the recommended method anyway):


[*] Intel devices
...
<M> Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
<M> Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->
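
Once the rebuilt kernel is booted, the module and firmware hand-off can be confirmed with a few standard commands (the module name and log strings are as shown in the output above):

# modprobe iwlwifi
# lsmod | grep iwlwifi
# dmesg | grep -i 'loaded firmware'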

If I had fully read the page instead of just skimming it, I could have saved myself a lot of time. Hopefully this post will help anyone getting the “Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2” error message.

Cheers,
Zach

March 31, 2016
Anthony Basile a.k.a. blueness (homepage, bugs)

I’ll be honest, this is a short post because the aggregation on planet.gentoo.org is failing for my account!  So, Jorge (jmbsvicetto) is debugging it and I need to push out another blog entry to trigger venus, the aggregation program.  Since I don’t like writing trivial stuff, I’m going to write something short, but hopefully important.

C standard libraries, like glibc, uClibc, musl and the like, were born out of a world in which every UNIX vendor had their own set of useful C functions.  Code portability put pressure on the various libcs to incorporate these functions from other libcs, first leading to a mess and then to standards like POSIX, XOPEN, SUSv4 and so on.  Chapter 1 of Kerrisk’s The Linux Programming Interface has a nice write-up on this history.

We still live in the shadows of that world today.  If you look through the code base of uClibc you’ll see lots of macros like __GLIBC__, __UCLIBC__, __USE_BSD, and __USE_GNU.  These are used in #ifdef … #endif blocks which are meant to shield features unless you want a glibc- or uClibc-only feature.

musl has stubbornly and correctly refused to include a __MUSL__ macro.  Consider the approach to portability taken by GNU autotools.  Macros such as AC_CHECK_LIBS(), AC_CHECK_FUNC() or AC_CHECK_HEADERS() unambiguously target the feature in question without making use of __GLIBC__ or __UCLIBC__.  Whereas the previous approach globs functions together into sets, the latter simply asks: do you have this function or not?

Now consider how uClibc makes use of both __GLIBC__ and __UCLIBC__.  If a function is provided by the former but not by the latter, then it expects a program to use

#if defined(__GLIBC__) && !defined(__UCLIBC__)

This is getting a bit ugly and syntactically ambiguous.  Someone not familiar with this could easily misinterpret it, or reject it.
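
For contrast, here is what the feature-test approach looks like in practice, using strlcpy (a BSD function glibc lacks) as the example.  This is my own minimal sketch, not code from any of the projects mentioned:

/* configure.ac declares:  AC_CHECK_FUNCS([strlcpy])
 * which defines HAVE_STRLCPY in config.h when the function exists. */
#include "config.h"

#ifdef HAVE_STRLCPY
/* the libc provides strlcpy(); use it directly */
#else
/* fall back to a bundled implementation or snprintf() */
#endif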

So I’ve hit bugs like these.  I hit one in gdk-pixbuf and I was not able to convince upstream to consistently use __GLIBC__ and __UCLIBC__.  Alternatively, I hit this in geocode-glib and geoclue, and they did accept it.  I went with the wrong-minded approach because that’s what was already there, and I didn’t feel like sifting through their code base and revamping their build system.  This isn’t just laziness, it’s historical weight.

So kudos to musl.  And for all the faults of GNU autotools, at least its approach to portability is correct.

March 28, 2016
Anthony Basile a.k.a. blueness (homepage, bugs)

RBAC is a security feature of the hardened-sources kernels.  As its name suggests, it’s a role-based access control system which allows you to define policies restricting access to files, sockets and other system resources.  Even root is restricted, so attacks that escalate privilege are not going to get far even if they do obtain root.  In fact, you should be able to give out remote root access to anyone on a well-configured system running RBAC and still remain confident that you are not going to be owned!  I wouldn’t recommend it just in case, but it should be possible.

It is important to understand what RBAC will give you and what it will not.  RBAC has to be part of a more comprehensive security plan and is not a single security solution.  In particular, if one can compromise the kernel, then one can proceed to compromise the RBAC system itself and undermine whatever security it offers.  Or put another way, protecting root is pretty much a moot point if an attacker is able to get ring 0 privileges.  So, you need to start with an already hardened kernel, that is a kernel which is able to protect itself.  In practice, this means configuring most of the GRKERNSEC_* and PAX_* features of a hardened-sources kernel.  Of course, if you’re planning on running RBAC, you need to have that option on too.

Once you have a system up and running with a properly configured kernel, the next step is to set up the policy file which lives at /etc/grsec/policy.  This is where the fun begins because you need to ask yourself what kind of a system you’re going to be running and decide on the policies you’re going to implement.  Most of the existing literature is about setting up a minimum-privilege system for a server which runs only a few simple processes, something like a LAMP stack.  I did this for years when I ran a Moodle server for D’Youville College.  For a minimum-privilege system, you want to deny-by-default and only allow certain processes to have access to certain resources as explicitly stated in the policy file.  RBAC is ideally suited for this.  Recently, however, I was asked to set up a system where the opposite was the case, so this article is going to explore the situation where you want to allow-by-default; for completeness, though, let me briefly cover deny-by-default first.

The easiest way to proceed is to get all your services running as they should and then turn on learning mode for about a week, or at least until you have one cycle of, say, log rotations and other cron-based jobs.  Basically your services should have attempted to access each resource at least once so the event gets logged.  You then distill those logs into a policy file describing only what should be permitted and tweak as needed.  You proceed roughly as follows:

1. gradm -P  # Create a password to enable/disable the entire RBAC system
2. gradm -P admin  # Create a password to authenticate to the admin role
3. gradm -F -L /etc/grsec/learning.log # Turn on system-wide learning
4. # Wait a week.  Don't do anything you don't want to learn.
5. gradm -F -L /etc/grsec/learning.log -O /etc/grsec/policy  # Generate the policy
6. gradm -E # Enable RBAC system wide
7. # Look for denials.
8. gradm -a admin  # Authenticate to admin to do extraordinary things, like tweak the policy file
9. gradm -R # reload the policy file
10. gradm -u # Drop those privileges to do ordinary things
11. gradm -D # Disable RBAC system wide if you have to

Easy, right?  This will get you pretty far but you’ll probably discover that some things you want to work are still being denied because those particular events never occurred during the learning.  A typical example here is that you might have ssh’ed in from one IP, but now you’re ssh-ing in from a different IP and you’re getting denied.  To tweak your policy, you first have to escape the restrictions placed on root by transitioning to the admin role.  Then, using dmesg, you can see what was denied, for example:

[14898.986295] grsec: From 192.168.5.2: (root:U:/) denied access to hidden file / by /bin/ls[ls:4751] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:4327] uid/euid:0/0 gid/egid:0/0

This tells you that root, logged in via ssh from 192.168.5.2, tried to ls / but was denied.  As we’ll see below, this is a one-line fix, but if there is a cluster of denials to /bin/ls, you may want to turn on learning for just that one subject for root.  To do this you edit the policy file and look for subject /bin/ls under role root.  You then add an ‘l’ to the subject line to enable learning for just that subject.

role root uG
...
# Role: root
subject /bin/ls ol {  # note the 'l'

You restart RBAC using gradm -E -L /etc/grsec/partial-learning.log and obtain the new policy for just that subject by running gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy.  That single subject block can then be spliced into the full policy file to change the restrictions on /bin/ls when run by root.
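
Spelled out as commands, that partial-learning cycle is:

gradm -E -L /etc/grsec/partial-learning.log  # enable RBAC, learning only the flagged subject
# run /bin/ls as root the way you normally would, then...
gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy  # distill the policy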

It’s pretty obvious that RBAC is designed to deny-by-default.  If access is not explicitly granted to a subject (an executable) to access some object (some system resource) when it’s running in some role (as some user), then access is denied.  But what if you want to create a policy which is mostly allow-by-default and then you just add a few denials here and there?  While RBAC is more suited for the opposite case, we can do something like this on a per-account basis.

Let’s start with a fairly permissive policy file for root:

role admin sA
subject / rvka {
	/			rwcdmlxi
}

role default
subject / {
	/			h
	-CAP_ALL
	connect disabled
	bind    disabled
}

role root uG
role_transitions admin
role_allow_ip 0.0.0.0/0
subject /  {
	/			r
	/boot			h
#
	/bin			rx
	/sbin			rx
	/usr/bin		rx
	/usr/libexec		rx
	/usr/sbin		rx
	/usr/local/bin		rx
	/usr/local/sbin		rx
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
	/usr/lib32		rx
	/usr/lib64		rx
	/usr/local/lib32	rx
	/usr/local/lib64	rx
#
	/dev			hx
	/dev/log		r
	/dev/urandom		r
	/dev/null		rw
	/dev/tty		rw
	/dev/ptmx		rw
	/dev/pts		rw
	/dev/initctl		rw
#
	/etc/grsec		h
#
	/home			rwcdl
	/root			rcdl
#
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
#
	/run/lock		rwcdl
	/sys			h
	/tmp			rwcdl
	/var			rwcdl
#
	+CAP_ALL
	-CAP_MKNOD
	-CAP_NET_ADMIN
	-CAP_NET_BIND_SERVICE
	-CAP_SETFCAP
	-CAP_SYS_ADMIN
	-CAP_SYS_BOOT
	-CAP_SYS_MODULE
	-CAP_SYS_RAWIO
	-CAP_SYS_TTY_CONFIG
	-CAP_SYSLOG
#
	bind 0.0.0.0/0:0-32767 stream dgram tcp udp igmp
	connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp igmp raw_sock raw_proto
	sock_allow_family all
}

The syntax is pretty intuitive. The only thing not illustrated here is that a role can, and usually does, have multiple subject blocks which follow it. Those subject blocks belong only to the role that they are under, and not another.

The notion of a role is critical to understanding RBAC. Roles are like UNIX users and groups but within the RBAC system. The first role above is the admin role. It is ‘special’ meaning that it doesn’t correspond to any UNIX user or group, but is only defined within the RBAC system. A user will operate under some role but may transition to another role if the policy allows it. Transitioning to the admin role is reserved only for root above; but in general, any user can transition to any special role provided it is explicitly specified in the policy. No matter what role the user is in, he only has the UNIX privileges for his account. Those are not elevated by transitioning, but the restrictions applied to his account might change. Thus transitioning to a special role can allow a user to relax some restrictions for some special reason. This transitioning is done via gradm -a somerole and can be password protected using gradm -P somerole.

The second role above is the default role. When a user logs in, RBAC determines the role he will be in by first trying to match the user name to a role name. Failing that, it will try to match the group name to a role name and failing that it will assign the user the default role.

The third role above is the root role and it will be the main focus of our attention below.

The flags following the role name specify the role’s behavior. The ‘s’ and ‘A’ in the admin role line say, respectively, that it is a special role (ie, one not to be matched by a user or group name) and that it has extra powers that a normal role doesn’t have (eg, it is not subject to ptrace restrictions). It’s good to have the ‘A’ flag in there, but it’s not essential for most uses of this role. It’s really its subject block which makes it useful for administration. Of course, you can change the name if you want to practice a little bit of security by obfuscation. As long as you leave the rest alone, it’ll still function the same way.

The root role has the ‘u’ and the ‘G’ flags. The ‘u’ flag says that this role is to match a user by the same name, obviously root in this case. Alternatively, you can have the ‘g’ flag instead, which says to match a group by the same name. The ‘G’ flag gives this role permission to authenticate to the kernel, ie, to use gradm. Policy information is automatically added that allows gradm to access /dev/grsec, so you don’t need to add those permissions yourself. Finally, the default role doesn’t and shouldn’t have any flags. If it’s not a ‘u’ or ‘g’ or ‘s’ role, then it’s a default role.

Before we jump into the subject blocks, you’ll notice a couple of lines after the root role. The first says ‘role_transitions admin’ and permits the root role to transition to the admin role. Any special roles you want this role to transition to can be listed on this line, space-delimited. The second line says ‘role_allow_ip 0.0.0.0/0’. So when root logs in remotely, it will be assigned the root role provided the login is from an IP address matching 0.0.0.0/0. In this example, this means any IP is allowed. But if you had something like 192.168.3.0/24, then only root logins from the 192.168.3.0 network would get user root assigned role root. Otherwise RBAC would fall back on the default role. If you don’t have the line in there, get used to logging in on the console because you’ll cut yourself off!

Now we can look at the subject blocks. These define the access controls restricting processes running in the role to which those subjects belong. The name following the ‘subject’ keyword is either a path to a directory containing executables or to an executable itself. When a process is started from an executable in that directory, or from the named executable itself, then the access controls defined in that subject block are enforced. Since all roles must have the ‘/’ subject, all processes started in a given role will at least match this subject. You can think of this as the default if no other subject matches. However, additional subject blocks can be defined which further modify restrictions for particular processes. We’ll see this towards the end of the article.

Let’s start by looking at the ‘/’ subject for the default role since this is the most restrictive set of access controls possible. The block following the subject line lists the objects that the subject can act on and what kind of access is allowed. Here we have ‘/ h’ which says that every file in the file system starting from ‘/’ downwards is hidden from the subject. This includes read/write/execute/create/delete/hard link access to regular files, directories, devices, sockets, pipes, etc. Since pretty much everything is forbidden, no process running in the default role can look at or touch the file system in any way. Don’t forget that, since the only role that has a corresponding UNIX user or group is the root role, this means that every other account is simply locked out. However the file system isn’t the only thing that needs protecting since it is possible to run, say, a malicious proxy which simply bounces evil network traffic without ever touching the filesystem. To control network access, there are the ‘connect’ and ‘bind’ lines that define what remote addresses/ports the subject can connect to as a client, or what local addresses/ports it can listen on as a server. Here ‘disabled’ means no connections or bindings are allowed. Finally, we can control what Linux capabilities the subject can assume, and -CAP_ALL means they are all forbidden.

Next, let’s look at the ‘/’ subject for the admin role. This, in contrast to the default role, is about as permissive as you can get. First thing we notice is the subject line has some additional flags ‘rvka’. Here ‘r’ means that we relax ptrace restrictions for this subject, ‘a’ means we do not hide access to /dev/grsec, ‘k’ means we allow this subject to kill protected processes and ‘v’ means we allow this subject to view hidden processes. So ‘k’ and ‘v’ are interesting and have counterparts ‘p’ and ‘h’ respectively. If a subject is flagged as ‘p’ it means its processes are protected by RBAC and can only be killed by processes belonging to a subject flagged with ‘k’. Similarly processes belonging to a subject marked ‘h’ can only be viewed by processes belonging to a subject marked ‘v’. Nifty, eh? The only object line in this subject block is ‘/ rwcdmlxi’. This says that this subject can ‘r’ead, ‘w’rite, ‘c’reate, ‘d’elete, ‘m’ark as setuid/setgid, hard ‘l’ink to, e’x’ecute, and ‘i’nherit the ACLs of the subject which contains the object. In other words, this subject can do pretty much anything to the file system.

Finally, let’s look at the ‘/’ subject for the root role. It is fairly permissive, but not quite as permissive as the previous subject. It is also more complicated, and many of the object lines are there because gradm does a sanity check on policy files to help make sure you don’t open any security holes. Notice that here we have ‘+CAP_ALL’ followed by a series of ‘-CAP_*’. Each of these was included because otherwise gradm would complain. For example, if ‘CAP_SYS_ADMIN’ is not removed, an attacker can mount filesystems to bypass your policies.

So I won’t go through this entire subject block in detail, but let me highlight a few points. First consider these lines:

	/			r
	/boot			h
	/etc/grsec		h
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
	/sys			h

The first line gives ‘r’ead access to the entire file system, but this is too permissive and opens up security holes, so we negate that for particular files and directories by ‘h’iding them. With these access controls, if the root user in the root role does ls /sys, you get

# ls /sys
ls: cannot access /sys: No such file or directory

but if the root user transitions to the admin role using gradm -a admin, then you get

# ls /sys/
block  bus  class  dev  devices  firmware  fs  kernel  module

Next consider these lines:

	/bin			rx
	/sbin			rx
	...
	/lib32			rx
	/lib64			rx
	/lib64/modules		h

Since the ‘x’ flag is inherited by all the files under those directories, this allows processes like your shell to execute, for example, /bin/ls or /lib64/ld-2.21.so. The ‘r’ flag further allows processes to read the contents of those files, so one could do hexdump /bin/ls or hexdump /lib64/ld-2.21.so. Dropping the ‘r’ flag on /bin would stop you from hexdumping the contents, but it would not prevent execution nor would it stop you from listing the contents of /bin. If we wanted to make this subject a bit more secure, we could drop ‘r’ on /bin and not break our system. This, however, is not the case with the library directories. Dropping ‘r’ on them would break the system, since library files need to have readable contents in order to be loaded, as well as be executable.

Now consider these lines:

        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw

The ‘h’ flag will hide /dev and its contents, but the ‘x’ flag will still allow processes to enter into that directory and access /dev/log for reading, /dev/null for reading and writing, etc. The ‘h’ is required to hide the directory and its contents because, as we saw above, ‘x’ is sufficient to allow processes to list the contents of the directory. As written, the above policy yields the following result in the root role

# ls /dev
ls: cannot access /dev: No such file or directory
# ls /dev/tty0
ls: cannot access /dev/tty0: No such file or directory
# ls /dev/log
/dev/log

In the admin role, all those files are visible.

Let’s end our study of this subject by looking at the ‘bind’, ‘connect’ and ‘sock_allow_family’ lines. Note that the addresses/ports include a list of allowed transport protocols from /etc/protocols. One gotcha here is to make sure you include port 0 for icmp! The ‘sock_allow_family’ line allows all socket families, including unix, inet, inet6 and netlink.
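
As a purely hypothetical illustration (these exact lines are not from the policy above, so check the gradm documentation for the precise syntax on your version), such lines might look like:

        bind    0.0.0.0/0:22 stream tcp
        connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp
        sock_allow_family unix inet inet6 netlink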

Now that we understand this policy, we can proceed to add isolated restrictions to our mostly permissive root role. Remember that the system is totally restricted for all UNIX users except root, so if you want to allow some ordinary user access, you can simply copy the entire role, including the subject blocks, and just rename ‘role root’ to ‘role myusername’. You’ll probably want to remove the ‘role_transitions’ line, since an ordinary user should not be able to transition to the admin role. Now, suppose for whatever reason, you don’t want this user to be able to list any files or directories. You can simply add a line to his ‘/’ subject block which reads ‘/bin/ls h’ and ls becomes completely unavailable to him! This particular example might not be that useful in practice, but you can use this technique, for example, if you want to restrict access to your compiler suite. Just ‘h’ all the directories and files that make up your suite and it becomes unavailable.
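
A sketch of such a copied role might look like the following, where everything inside the subject is identical to the root role’s ‘/’ subject except for the one added line:

role myusername u
role_allow_ip 0.0.0.0/0
subject / {
        # ... same object, capability, connect and bind lines as role root ...
        /bin/ls         h
}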

A more complicated and useful example might be to restrict a user’s listing of a directory to just his home. To do this, we’ll have to add a new subject block for /bin/ls. If you’re not sure where to start, you can always begin with an extremely restrictive subject block, tack it onto the end of the subjects for the role you want to modify, and then progressively relax it until it works. Alternatively, you can do partial learning on this subject as described above. Let’s proceed manually and add the following:

subject /bin/ls o {
        /  h
        -CAP_ALL
        connect disabled
        bind    disabled
}

Note that this is identical to the extremely restrictive ‘/’ subject for the default role, except that the subject is ‘/bin/ls’, not ‘/’. There is also a subject flag ‘o’ which tells RBAC to override the previous policy for /bin/ls. We have to override it because that policy was too permissive. Now, in one terminal execute gradm -R in the admin role, while in another terminal trigger a denial by running ls /home/myusername. Checking dmesg we see:

[33878.550658] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/ld-2.21.so by /bin/ls[bash:7861] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7164] uid/euid:0/0 gid/egid:0/0

Well that makes sense. We’ve started afresh denying everything, but /bin/ls requires access to the dynamic linker/loader, so we’ll restore read access to it by adding a line ‘/lib64/ld-2.21.so r’. Repeating our test, we get a segfault! Obviously, we don’t just need read access to ld.so, we also need execute privileges. We add ‘x’ and try again. This time the denial is

[34229.335873] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /etc/ld.so.cache by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
[34229.335923] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/libacl.so.1.1.0 by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

Of course! We need ‘rx’ for all the libraries that /bin/ls links against, as well as the linker cache file. So we add lines for libc, libattr, libacl and ld.so.cache. Our final denial is

[34481.933845] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /home/myusername by /bin/ls[ls:7982] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

All we need now is ‘/home/myusername r’ and we’re done! Our final subject block looks like this:

subject /bin/ls o {
        /                         h
        /home/myusername          r
        /etc/ld.so.cache          r
        /lib64/ld-2.21.so         rx
        /lib64/libc-2.21.so       rx
        /lib64/libacl.so.1.1.0    rx
        /lib64/libattr.so.1.1.0   rx
        -CAP_ALL
        connect disabled
        bind    disabled
}

Proceeding in this fashion, we can add isolated restrictions to our mostly permissive policy.

References:

The official documentation is The_RBAC_System. A good reference for the role, subject and object flags can be found in these Tables.

March 23, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Of OpenStack and uwsgi (March 23, 2016, 05:00 UTC)

Why use uwsgi

Not all OpenStack services support uwsgi. However, in the Liberty timeframe it is supported as the primary way to run Keystone API services and the recommended way of running Horizon (if you use it). Going forward, other OpenStack services will be moving to support it as well; for instance, I know that Neutron is working on it or has it completed for the Mitaka release.

Basic Setup

  • Install >=www-servers/uwsgi-2.0.11.2-r1 with the python USE flag, as it has an updated init script.
  • Make sure you note the group you want the webserver to use when accessing the uwsgi sockets; I chose nginx.

Configs and permissions

When defaults are available I will only note what needs to change.

uwsgi configs

/etc/conf.d/uwsgi

UWSGI_EMPEROR_PATH="/etc/uwsgi.d/"
UWSGI_EMPEROR_GROUP=nginx
UWSGI_EXTRA_OPTIONS='--need-plugins python27'

/etc/uwsgi.d/keystone-admin.ini

[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_admin.socket
pidfile = /run/uwsgi/keystone_admin.pid
logger = file:/var/log/keystone/uwsgi-admin.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/admin

/etc/uwsgi.d/keystone-main.ini

[uwsgi]
master = true
plugins = python27
processes = 4
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_main.socket
pidfile = /run/uwsgi/keystone_main.pid
logger = file:/var/log/keystone/uwsgi-main.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/main

I have horizon in use via a virtual environment, so I enabled vacuum in this config.

/etc/uwsgi.d/horizon.ini

[uwsgi]
master = true  
plugins = python27
processes = 10  
threads = 2  
chmod-socket = 660
vacuum = true

socket = /run/uwsgi/horizon.sock  
pidfile = /run/uwsgi/horizon.pid  
log-syslog = file:/var/log/horizon/horizon.log

name = horizon
uid = horizon
gid = nginx

chdir = /var/www/horizon/
wsgi-file = /var/www/horizon/horizon.wsgi
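
The original post leaves out the webserver side. For completeness, a minimal sketch of a matching nginx configuration might look like the following; it assumes the keystone-main socket path above and Keystone's conventional public port 5000, so adjust both for your deployment.

server {
    listen 5000;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/keystone_main.socket;
    }
}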

wsgi scripts

The directories are owned by the service they contain, keystone:keystone or horizon:horizon.

/var/www/keystone/admin perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

/var/www/keystone/main perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

Note that this uses the paths where I have my horizon virtual environment.

/var/www/horizon/horizon.wsgi perms are 0750 horizon:horizon

#!/usr/bin/env python
import os
import sys


activate_this = '/home/horizon/horizon/.venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.insert(0, '/home/horizon/horizon')
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'

import django.core.wsgi
application = django.core.wsgi.get_wsgi_application()

March 22, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's what a signed message looks like in a typical scenario:

A random OpenPGP-signed e-mail

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could just as well send a faked but unsigned e-mail message. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially the TOFU, Trust On First Use, trust model. It is pretty fresh code not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside the scope of an e-mail client like Trojitá. We might add some buttons for key retrieval and launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

Mail with a trusted signature

We also perform a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of an e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the guy who sent the e-mail and the one who made the signature the same person?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:

Something fishy is going on!

In some environments, S/MIME signatures using traditional X.509 certificates are more common than the OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

All the gory details about an X.509 trust chain

Encrypted messages are of course supported, too:

An encrypted message

We had to start somewhere, so right now, Trojitá supports only read-only operations such as signature verification and decrypting of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in the near future (and patches are welcome for sure).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API interface was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to the QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operation. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least communication to the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end, it was a bit more straightforward to look into the C++11's toolset, and reuse the std::async infrastructure for launching background tasks along with a std::future for synchronization. You can take a look at the resulting code in the src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!

March 06, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)
Hasp HL Library (March 06, 2016, 13:10 UTC)

Hasp HL Library

git clone https://git.zx2c4.com/hasplib

The Hasp HL is a copy protection dongle that ships with horrible closed-source drivers.

This is a very simple OSS library based on libusb for accessing MemoHASP functions of the Hasp HL USB dongle. It currently can view the ID of a dongle, validate the password, read from memory locations, and write to memory locations.

This library allows use of the dongle without any drivers!

API

Include hasplib.h, and compile your application alongside hasplib.c and optionally hasplib-simple.c.

Main Functions

Get a list of all connected dongles:

size_t hasp_find_dongles(hasp_dongle ***dongles);

Login to that dongle using the password, and optionally view the memory size:

bool hasp_login(hasp_dongle *dongle, uint16_t password1, uint16_t password2, uint16_t *memory_size);

Instead of the first two steps, you can also retrieve the first connected dongle that fits your password:

hasp_dongle *hasp_find_login_first_dongle(uint16_t password1, uint16_t password2);

Read the ID of a dongle:

bool hasp_id(hasp_dongle *dongle, uint32_t *id);

Read from a memory location:

bool hasp_read(hasp_dongle *dongle, uint16_t location, uint16_t *value);

Write to a memory location:

bool hasp_write(hasp_dongle *dongle, uint16_t location, uint16_t value);

Free the list of dongles opened earlier:

void hasp_free_dongles(hasp_dongle **dongles);

Free a single dongle:

void hasp_free_dongle(hasp_dongle *dongle);
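
As a small sketch tying the calls above together, the following program logs in to the first matching dongle, reads its ID and one memory word, then frees it. The password pair here is only a placeholder; substitute your dongle's real values.

#include <stdio.h>
#include <stdint.h>
#include "hasplib.h"

int main(void)
{
	/* placeholder password pair -- substitute your dongle's real one */
	hasp_dongle *dongle = hasp_find_login_first_dongle(0x1234, 0x5678);
	uint32_t id;
	uint16_t value;

	if (!dongle) {
		fprintf(stderr, "no matching dongle found\n");
		return 1;
	}
	if (hasp_id(dongle, &id))
		printf("dongle id: %u\n", (unsigned)id);
	if (hasp_read(dongle, 0, &value))
		printf("memory[0] = %u\n", (unsigned)value);
	hasp_free_dongle(dongle);
	return 0;
}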

Simple Functions

The simple API wraps the main API and provides access to a default dongle, which is the first connected dongle that responds to the given passwords. It handles dongle disconnects and reconnections.

Create a hasp_simple * object for a given password pair:

hasp_simple *hasp_simple_login(uint16_t password1, uint16_t password2);

Free this object:

void hasp_simple_free(hasp_simple *simple);

Read an ID, returning 0 if an error occurred:

uint32_t hasp_simple_id(hasp_simple *simple);

Read a memory location, returning 0 if an error occurred:

uint16_t hasp_simple_read(hasp_simple *simple, uint16_t location);

Write to a memory location, returning its success:

bool hasp_simple_write(hasp_simple *simple, uint16_t location, uint16_t value);

Licensing

This is released under the GPLv3. See COPYING for more information. If you need a less restrictive license, please contact me.

March 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.9 (March 02, 2016, 08:23 UTC)

py3status v2.9 is out with a good bunch of new modules, exciting improvements and fixes !

Thanks

This release is made of their stuff, thank you contributors !

  • @4iar
  • @AnwariasEu
  • @cornerman
  • Alexandre Bonnetain
  • Alexis ‘Horgix’ Chotard
  • Andrwe Lord Weber
  • Ben Oswald
  • Daniel Foerster
  • Iain Tatch
  • Johannes Karoff
  • Markus Weimar
  • Rail Aliiev
  • Themistokle Benetatos

New modules

  • arch_updates module, by Iain Tatch
  • deadbeef module to show current track playing, by Themistokle Benetatos
  • icinga2 module, by Ben Oswald
  • scratchpad_async module, by johannes karoff
  • wifi module, by Markus Weimar

Fixes and enhancements

  • Rail Aliiev implemented a flake8 check via travis-ci; we now have a new build-passing badge
  • fix: handle format_time tztime parameter thx to @cornerman, fix issue #177
  • fix: respect ordering of the ipv6 i3status module even on empty configuration, fix #158 as reported by @nazco
  • battery_level module: add multiple battery support, by 4iar
  • battery_level module: added formatting options, by Alexandre Bonnetain
  • battery_level module: added option hide_seconds, by Andrwe Lord Weber
  • dpms module: added color support, by Andrwe Lord Weber
  • spotify module: added format_down option, by Andrwe Lord Weber
  • spotify module: fixed color & playbackstatus check, by Andrwe Lord Weber
  • spotify module: workaround broken dbus, removed PlaybackStatus query, by christian
  • weather_yahoo module: support woeid, add more configuration parameters, by Rail Aliiev

What’s next ?

Some major core enhancements and code clean up are coming up thanks to @cornerman, @Horgix and @pydsigner. The next release will be faster than ever and consume even less CPU !

Meanwhile, this 2.9 release is available on pypi and Gentoo portage, have fun !

February 29, 2016
Gentoo accepted to GSoC 2016 (February 29, 2016, 00:00 UTC)

Students are encouraged to start working now on their project proposals. You can peruse the list of ideas or come up with your own. In any case, it is highly recommended you talk to a mentor sooner rather than later. The official application period for student proposals starts on March 14th.

Do not hesitate to join us in the #gentoo-soc channel on freenode. We will be happy to answer your questions there.
More information on Gentoo’s GSoC effort is also available on our Wiki.

February 28, 2016
Richard Freeman a.k.a. rich0 (homepage, bugs)
Gentoo Ought to be About Choice (February 28, 2016, 02:07 UTC)

“Gentoo is about choice.”  We’ve said it so often that it seems like we just don’t bother to say it any more.  However, with some of the recent conflicts on the lists (which I’ve contributed to) and indeed across the FOSS community at large, I think this is a message that is worth repeating…

Ok, bear with me because I’m going to talk about systemd.  This post isn’t really about systemd, but it would probably not be nearly as important in its absence.  So, we need to talk about why I’m bringing this up.

How we got here

Systemd has brought a wave of change in the Linux community, and most of the popular distros have decided to adopt it.  This has created a bit of a vacuum for those who strongly prefer to avoid it, and many of these have adopted Gentoo (the only other large-ish option is Slackware), and indeed some have begun to contribute back.  The resulting shift in demographics has caused tensions in the community, and I believe this has created a tendency for us to focus too much on what makes us different.

Where we are now

Every distro has a niche of some kind – a mission that gives it a purpose for existence.  It is the thing that its community coalesces around.  When a distro loses this sense of purpose, it will die or fork, whether by the forces of lost contributors or lost profits.  This purpose can certainly evolve over time, but ultimately it is this purpose which holds everything together.

For many years in Gentoo our purpose has been about providing choices, and enabling the user.  Sometimes we enable them to shoot their own feet, and we often enable them to break things in ways that our developers would prefer not to troubleshoot.  We tend to view the act of suppressing choices as contrary to our values, even if we don’t always have the manpower to support every choice that can possibly exist.

The result of this philosophy is what we all see around us.  Gentoo is a distro that can be used to build the most popular desktop linux-based operating system (ChromeOS), and which reportedly is also used as the basis of servers that run NASDAQ[1].  It shouldn’t be surprising that Gentoo works with no fewer than 7 device-manager implementations and 4 service managers.

Still, many in the Linux community struggle to understand us.  They mistake our commitment to providing a choice for some kind of endorsement of that choice.  Gentoo isn’t about picking winners.  We’re not an anti-systemd distro, even if many who dislike systemd may be found among us and it is straightforward to install Gentoo without “systemd” appearing anywhere in the filesystem.  We’re not a pro-systemd distro, even if (IMHO) we offer one of the best and most undiluted systemd experiences around.  We’re a distro where developers and users with a diverse set of interests come together to contribute using a set of tools that makes it practical for each of us to reach in and pull out the system that we want to have.

Where we need to be

Ultimately, I think a healthy Gentoo is one which allows us all to express our preferences and exchange our knowledge, but where in the end we all get behind a shared goal of empowering our users to make the decisions.  There will always be conflict when we need to pick a default, but we must view defaults as conveniences and not endorsements.  Our defaults must be reasonably well-supported, but not litmus tests against which packages and maintainers are judged.  And, in the end, we all benefit when we are exposed to those who disagree and are able to glean from them the insights that we might have otherwise missed on our own.

When we stop making Gentoo about a choice, and start making it about having a choice, we find our way.

1 – http://www.computerworld.com/article/2510334/financial-it/how-linux-mastered-wall-street.html


Filed under: foss, gentoo, linux, Uncategorized

February 26, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Setting USE_EXPAND flags in package.use (February 26, 2016, 17:32 UTC)

This has apparently been supported in Portage for some time, but I only learned it recently from a gentoo-dev mail: you do not have to write down the expanded USE-flags in package.use anymore (or set them in make.conf)!

For example, if I wanted to set some APACHE2_MODULES and a custom APACHE2_MPM, the standard package.use entry would be something like:

www-servers/apache apache2_modules_proxy apache2_modules_proxy_http apache2_mpms_event ssl

Not as pretty/convenient as an ‘APACHE2_MODULES="proxy proxy_http"’ line in make.conf. Here is the best-of-both-worlds syntax (also supported in Paludis apparently):

www-servers/apache ssl APACHE2_MODULES: proxy proxy_http APACHE2_MPMS: event

Or if you use Python 2.7 as your main Python interpreter, set Python 3.4 for libreoffice-5.1 😉

app-office/libreoffice PYTHON_SINGLE_TARGET: python3_4

Have fun cleaning your package.use file

February 23, 2016
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)

ctmg - extremely simple encrypted container system

ctmg is an encrypted container manager for Linux using cryptsetup and various standard file system utilities. Containers have the extension .ct and are mounted at a directory of the same name, but without the extension. Very simple to understand, and very simple to implement; ctmg is a simple bash script.

Usage

Usage: ctmg [ new | delete | open | close | list ] [arguments...]
  ctmg new    container_path container_size[units_suffix]
  ctmg delete container_path
  ctmg open   container_path
  ctmg close  container_path
  ctmg list

Calling ctmg with no arguments will call list if there are any containers open, and otherwise show the usage screen. Calling ctmg with a filename argument will call open if it is not already open and otherwise will call close.

Examples

Create a 100MiB encrypted container called "example"

zx2c4@thinkpad ~ $ ctmg create example 100MiB
[#] truncate -s 100MiB /home/zx2c4/example.ct
[#] cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --batch-mode luksFormat /home/zx2c4/example.ct
Enter passphrase:
[#] chown 1000:1000 /home/zx2c4/example.ct
[#] cryptsetup luksOpen /home/zx2c4/example.ct ct_example
Enter passphrase for /home/zx2c4/example.ct:
[#] mkfs.ext4 -q -E root_owner=1000:1000 /dev/mapper/ct_example
[+] Created new encrypted container at /home/zx2c4/example.ct
[#] cryptsetup luksClose ct_example

Open a container, add a file, and then close it

zx2c4@thinkpad ~ $ ctmg open example
[#] cryptsetup luksOpen /home/zx2c4/example.ct ct_example
Enter passphrase for /home/zx2c4/example.ct: 
[#] mkdir -p /home/zx2c4/example
[#] mount /dev/mapper/ct_example /home/zx2c4/example
[+] Opened /home/zx2c4/example.ct at /home/zx2c4/example
zx2c4@thinkpad ~ $ echo "super secret" > example/mysecretfile.txt
zx2c4@thinkpad ~ $ ctmg close example
[#] umount /home/zx2c4/example
[#] cryptsetup luksClose ct_example
[#] rmdir /home/zx2c4/example
[+] Closed /home/zx2c4/example.ct

Installation

$ git clone https://git.zx2c4.com/ctmg
$ cd ctmg
$ sudo make install

Or, use the package from your distribution:

Gentoo

# emerge ctmg

Git Daemon Dummy: 301 Redirects for git:// (February 23, 2016, 02:35 UTC)

Git Daemon Dummy: 301 Redirects for git://

With the wide deployment of HTTPS, the plaintext nature of git:// is becoming less and less desirable. In order to inform users of git://-based URIs that they should switch to https://-based URIs, while still being able to shut down aging git-daemon infrastructure, git-daemon-dummy is an extremely lightweight daemon that simply serves connecting git:// users an informative error message containing the new URI.

It drops all privileges, chroots, sets rlimits, and uses seccomp-bpf to limit the set of available syscalls. To remain high performance, it makes use of epoll.
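
The daemon's actual code lives in the repository; purely as an illustrative sketch of the first two hardening steps (privilege dropping and chrooting), and assuming an unprivileged nobody account and an empty /var/empty directory, the idea looks roughly like this in C:

#define _GNU_SOURCE
#include <pwd.h>
#include <stdlib.h>
#include <unistd.h>

static void drop_privileges(void)
{
	/* look up the target account before chrooting away /etc/passwd */
	struct passwd *user = getpwnam("nobody");

	if (!user)
		exit(EXIT_FAILURE);
	/* lock the process into an (assumed) empty directory */
	if (chroot("/var/empty") || chdir("/"))
		exit(EXIT_FAILURE);
	/* permanently drop root: group first, then user */
	if (setresgid(user->pw_gid, user->pw_gid, user->pw_gid) ||
	    setresuid(user->pw_uid, user->pw_uid, user->pw_uid))
		exit(EXIT_FAILURE);
}

int main(void)
{
	drop_privileges(); /* must start as root for chroot to succeed */
	/* ... set rlimits, install the seccomp-bpf filter, run the epoll loop ... */
	return 0;
}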

Example

zx2c4@thinkpad ~ $ git clone git://git.zx2c4.com/cgit
Cloning into 'cgit'...
fatal: remote error: 
******************************************************

  This git repository has moved! Please clone with:

      $ git clone https://git.zx2c4.com/cgit

******************************************************

Installation

$ git clone https://git.zx2c4.com/git-daemon-dummy
$ cd git-daemon-dummy
$ make
$ ./git-daemon-dummy

Usage

Usage: ./git-daemon-dummy [OPTION]...
  -d, --daemonize              run as a background daemon
  -f, --foreground             run in the foreground (default)
  -P FILE, --pid-file=FILE     write pid of listener process to FILE
  -p PORT, --port=PORT         listen on port PORT (default=9418)
  -h, --help                   display this message

February 18, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Creating Gentoo VM Images (February 18, 2016, 06:00 UTC)

Initial Setup and Info

This guide uses OpenStack's Diskimage-builder tool for generating images. While you can use this for OpenStack, you can also create generic images with it.

Setting up Diskimage-builder is fairly simple; note that it expects to be run as root.

All you need to do is follow this guide; at its simplest, it's just a couple of git clones and PATH setup.

You will need app-emulation/qemu for generation of qcow2 files.

The current setup utilizes the stage4 images being generated; see this link for more details.

There are currently only 4 profiles supported; however, I hope to support musl and selinux profiles 'soon'.

  • default/linux/amd64/13.0
  • default/linux/amd64/13.0/no-multilib
  • hardened/linux/amd64
  • hardened/linux/amd64/no-multilib

Generating an Openstack image

To use a profile other than default/linux/amd64/13.0 set the GENTOO_PROFILE environment variable to one of the other supported profiles.

disk-image-create -a amd64 -t qcow2 --image-size 2 gentoo simple-init growroot vm is all you need to start. It will output a file named image.qcow2.

For OpenStack there are two ways you could go for initial setup (post-VM start). The first and most common is cloud-init, but that includes a few python deps that I don't think are really needed. The other is simple-init (glean), which is more limited, but as its name suggests, simple.

Here is a link to glean (simple-init) for those wanting more info: glean
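
Once the build finishes, one way to consume the result (not covered in the original steps; this uses the era's glance client, so adjust names and flags to your cloud) is to upload image.qcow2 to Glance:

glance image-create --name gentoo --disk-format qcow2 --container-format bare --file image.qcow2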

Generating a Generic Image You Can Log Into

Using the devuser element you can set up custom users. You will need to set up some more environment variables though.

Docs can be found here

An example invocation follows. simple-init may be needed so that interfaces get DHCP addresses, though you may want to set that up manually; your choice.

DIB_DEV_USER_PASSWORD=foobar DIB_DEV_USER_USERNAME=gentoo DIB_DEV_USER_PWDLESS_SUDO=yes DIB_DEV_USER_AUTHORIZED_KEYS=/dev/null disk-image-create -a amd64 -t qcow2 --image-size 2 gentoo simple-init growroot devuser vm

February 17, 2016
Yury German a.k.a. blueknight (homepage, bugs)
Gentoo Blogs – Announcement for Developers (February 17, 2016, 17:33 UTC)

== Announcement Gentoo Blogs: ==

We have upgraded the WordPress install, the themes and the plugins for blogs.gentoo.org.

There are a few announcements as some things have changed with the
latest version:

1. “Twenty Fifteen” Theme – PROBLEMS / Not Working
“Twenty Fifteen” was the default theme for the previous version of WordPress, so if you accepted the default theme it is your theme as well.

After the upgrade, some sites are not displaying correctly using that theme. Please take a look at your site and feel free to pick another theme. WordPress has introduced “Twenty Sixteen”, which looks cleaner and might be a good choice.

2. “Twenty Thirteen” theme is currently broken.
The new WordPress update also brought with it a broken theme: “Twenty Thirteen” no longer works correctly either. Please take a look at alternative themes for your web site, as within seven (7) days we will be turning off that theme.

3. Picasa Albums Plugin
The Picasa Albums plugin has not been updated in two (2) years and no longer functions with this version. If you are using this plug-in, please let me know, as we would have to find a replacement.

If you have any questions please feel free to contact me directly or
planet@gentoo.org

February 16, 2016

Description:
Portage-utils is a set of small and fast portage helper tools written in C.

I discovered that a crafted file is able to cause a stack-based buffer overflow.

The complete ASan output:

~ # qfile -f qfile-OOB-crash.log                                                                                                                                                                                                                                          
=================================================================                                                                                                                                                                                                              
==12240==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd067c1ac1 at pc 0x000000495bdc bp 0x7ffd067bd6f0 sp 0x7ffd067bceb0                                                                                                                                     
READ of size 4095 at 0x7ffd067c1ac1 thread T0                                                                                                                                                                                                                                  
    #0 0x495bdb in strncpy /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:632:5                                                                                                                                  
    #1 0x4fb5b9 in prepare_qfile_args /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qfile.c:297:3                                                                                                                                                      
    #2 0x4fb5b9 in qfile_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qfile.c:530                                                                                                                                                                
    #3 0x4e7f22 in q_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./q.c:79:10                                                                                                                                                                      
    #4 0x4e7afe in main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/main.c:1405:9                                                                                                                                                                      
    #5 0x7f5ccc29e854 in __libc_start_main /tmp/portage/sys-libs/glibc-2.21-r1/work/glibc-2.21/csu/libc-start.c:289                                                                                                                                                            
    #6 0x4192f8 in _init (/usr/bin/q+0x4192f8)                                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                               
Address 0x7ffd067c1ac1 is located in stack of thread T0 at offset 17345 in frame                                                                                                                                                                                               
    #0 0x4f8b3f in qfile_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qfile.c:394                                                                                                                                                                
                                                                                                                                                                                                                                                                               
  This frame has 10 object(s):                                                                                                                                                                                                                                                 
    [32, 4128) 'pkg.i'                                                                                                                                                                                                                                                         
    [4256, 8353) 'rpath.i'                                                                                                                                                                                                                                                     
    [8624, 8632) 'fullpath.i'                                                                                                                                                                                                                                                  
    [8656, 8782) 'slot.i'                                                                                                                                                                                                                                                      
    [8816, 8824) 'slot_hack.i'                                                                                                                                                                                                                                                 
    [8848, 8856) 'slot_len.i'                                                                                                                                                                                                                                                  
    [8880, 12977) 'tmppath.i'                                                                                                                                                                                                                                                  
    [13248, 17345) 'abspath.i'                                                                                                                                                                                                                                                 
    [17616, 17736) 'state' <== Memory access at offset 17345 partially underflows this variable                                                                                                                                                                                
    [17776, 17784) 'p'
  0x100020cf0350: 00 00 00 00 00 00 00 00[01]f2 f2 f2 f2 f2 f2 f2
  0x100020cf0360: f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2
  0x100020cf0370: f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 00 00 00 00 00 00
  0x100020cf0380: 00 00 00 00 00 00 00 00 00 f2 f2 f2 f2 f2 00 f3
  0x100020cf0390: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
  0x100020cf03a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==12240==ABORTING

Affected version:
All versions.

Fixed version:
0.61

Commit fix:
https://gitweb.gentoo.org/proj/portage-utils.git/commit/?id=070f64a84544f74ad633f08c9c07f99a06aea551

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Not Assigned.

Timeline:
2016-02-01: bug discovered
2016-02-01: bug reported to upstream
2016-02-04: upstream released a fix
2016-02-16: advisory release

Note:
This bug was found with American Fuzzy Lop.
As the commit clearly states, the ability to read directly from a file was removed.

Permalink:

portage-utils: stack-based buffer overflow in qfile.c

Description:
Portage-utils is a set of small and fast portage helper tools written in C.

I discovered that a crafted file is able to cause a heap-based buffer overflow.

The complete ASan output:

~ # qlop -f $CRAFTED_FILE -s
Mon Jan 25 11:38:31 2016 >>> gentoo
=================================================================
==14281==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61900001e44a at pc 0x000000425676 bp 0x7fff2b3f3970 sp 0x7fff2b3f3130
READ of size 1 at 0x61900001e44a thread T0
    #0 0x425675 in __interceptor_strncmp /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:218:3
    #1 0x50d5b1 in show_sync_history /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qlop.c:350:7
    #2 0x50d5b1 in qlop_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qlop.c:687
    #3 0x4e7f22 in q_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./q.c:79:10
    #4 0x4e7afe in main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/main.c:1405:9
    #5 0x7fafd8594854 in __libc_start_main /tmp/portage/sys-libs/glibc-2.21-r1/work/glibc-2.21/csu/libc-start.c:289
    #6 0x4192f8 in _init (/usr/bin/q+0x4192f8)

0x61900001e44a is located 0 bytes to the right of 970-byte region [0x61900001e080,0x61900001e44a)
allocated by thread T0 here:
    #0 0x4a839e in realloc /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:61:3
    #1 0x7fafd85dc95f in getdelim /tmp/portage/sys-libs/glibc-2.21-r1/work/glibc-2.21/libio/iogetdelim.c:106

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:218:3 in __interceptor_strncmp
Shadow bytes around the buggy address:
  0x0c327fffbc30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c327fffbc80: 00 00 00 00 00 00 00 00 00[02]fa fa fa fa fa fa
  0x0c327fffbc90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c327fffbca0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c327fffbcb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c327fffbcc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c327fffbcd0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3                                                                                                                                                                                                                                                  
  Stack partial redzone:   f4                                                                                                                                                                                                                                                  
  Stack after return:      f5                                                                                                                                                                                                                                                  
  Stack use after scope:   f8                                                                                                                                                                                                                                                  
  Global redzone:          f9                                                                                                                                                                                                                                                  
  Global init order:       f6                                                                                                                                                                                                                                                  
  Poisoned by user:        f7                                                                                                                                                                                                                                                  
  Container overflow:      fc                                                                                                                                                                                                                                                  
  Array cookie:            ac                                                                                                                                                                                                                                                  
  Intra object redzone:    bb                                                                                                                                                                                                                                                  
  ASan internal:           fe                                                                                                                                                                                                                                                  
  Left alloca redzone:     ca                                                                                                                                                                                                                                                  
  Right alloca redzone:    cb                                                                                                                                                                                                                                                  
==14281==ABORTING

Affected version:
All versions.

Fixed version:
0.61

Commit fix:
https://gitweb.gentoo.org/proj/portage-utils.git/commit/?id=7aff0263204d80304108dbe4f0061f44ed8f189f

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Not Assigned.

Timeline:
2016-01-26: bug discovered
2016-01-27: bug reported to upstream
2016-01-29: upstream released a fix
2016-02-16: advisory release

Note:
This bug was found with American Fuzzy Lop.

Permalink:

portage-utils: heap-based buffer overflow in qlop.c

February 14, 2016
I love free software but I love you more (February 14, 2016, 22:07 UTC)

The Free Software Foundation Europe is running its campaign once again this year, and I quote: In the Free Software society we exchange a lot of criticism. We write bug reports, tell others how they can improve the software, ask them for new features, and generally are not shy about criticising others. There is nothing … Continue reading "I love free software but I love you more"

February 10, 2016
Denis Dupeyron a.k.a. calchan (homepage, bugs)
It is GSoC season again (February 10, 2016, 17:54 UTC)

Google Summer of Code 2016 is starting.

If you are a student please be patient. Your time will come soon, at which point we’ll be able to answer all your questions. In this initial phase the audience is project mentors. Note that you do not need to be a Gentoo developer to be a mentor.

While we are finalizing the application, we need all of you to submit your project ideas before the end of next week (February 19th). To do so you should go to this year’s idea page and follow the instructions in the “Ideas” section. If you proposed an idea last year and would like to propose it again for this year, you can look it up and add it. Or you can just tell us and we will do it for you. Don’t hesitate to add an idea even if it isn’t totally fleshed out; we will help you fill in the blanks. A project represents 3 months of full-time work for a student. A good rule of thumb is that it should take you between 2 and 4 weeks to complete depending on how expert you are in the field. You can also have a look at last year’s idea page for examples.

If you would like to be a mentor for this year please tell us sooner rather than later. You can be a mentor following up with students while they’re working on their project, or you can be an expert-mentor in a specific field who will be called in to advise on an as-needed basis (or maybe never). We will also need people to help us review project proposals. In case you can help in any capacity don’t hesitate to contact us. GSoC is seriously fun!

If you want to reach us or have any questions about any of the above, the easiest way is to ping Calchan or rafaelmartins in the #gentoo-soc channel on Freenode.

February 08, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
A quick note on portable shebangs (February 08, 2016, 12:57 UTC)

While at first shebangs may seem pretty obvious and well supported, there are a number of not-so-well-known portability issues affecting them. During my recent development work alone, I have hit more than one of them. For this reason, I’d like to write a quick note summarizing how to stay on the safe side and keep your scripts working across various systems.

Please note I will only cover the basic solutions to the most important portability issues. If you’d like to know more about shebang handling in various systems, I can recommend the excellent article ‘The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours’ by Sven Mascheck.

So, in order to stay portable you should note that:

  1. Many systems (Linux included!) have limits on shebang length. If you exceed this length, the kernel will cut the shebang in the middle of a path component, and usually try to execute the script with the partial path! To stay safe you need to keep shebang short. Since you can’t really control where the programs are installed (think of Prefix!), you should always rely on PATH lookups.
  2. Shebangs do not have built-in PATH lookups. Instead, you have to use the /usr/bin/env tool which performs the lookup on its argument (the exact path is mostly portable, with a few historical exceptions).
  3. Different systems split parameters in shebangs differently. In particular, Linux splits on the first space only, passing everything following it as a single parameter. To stay portable, you can not pass more than one parameter, and it can not contain whitespace. Which — considering the previous points — means the parameter is reserved for the program name passed to env, and you can not pass any actual parameters.
  4. Shebang nesting (i.e. referencing an interpreted script inside a shebang) is supported only by some systems, and only to some extent. For this reason, shebangs need to reference actual executable programs. However, using env effectively works around the issue since env is the immediate interpreter.

A few quick examples:

#!/usr/bin/env python  # GOOD!

#!python  # BAD: won't work

#!/usr/bin/env python -b  # BAD: it may try to spawn program named 'python -b'

#!/usr/bin/python  # BAD: absolute path is non-portable, also see below

#!/foo/bar/baz/usr/bin/python  # BAD: prefix can easily exceed length limit

#!/usr/lib/foo/foo.sh  # BAD: calling interpreted scripts is non-portable
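
The last two examples also show where env helps with shebang nesting: env itself is a real executable, which satisfies the kernel, and it then executes its argument as a regular program, so the kernel processes that script’s own shebang from scratch. A minimal sketch (foo.sh is a made-up script that must live in PATH):

#!/usr/bin/env foo.sh  # GOOD: env is the immediate interpreter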

January 31, 2016
Gentoo at FOSDEM 2016 (January 31, 2016, 13:00 UTC)

Gentoo Linux was present at this year's Free and Open Source Developer European Meeting (FOSDEM). For those not familiar with FOSDEM it is a conference that consists of more than 5,000 developers and more than 600 presentations over a two-day span at the premises of the Université libre de Bruxelles. The presentations are both streamed … Continue reading "Gentoo at FOSDEM 2016"

January 30, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo at FOSDEM: Posters (systemd, arches) (January 30, 2016, 15:24 UTC)

Especially after Lennart Poettering made some publicity for Gentoo Linux in his keynote talk (unfortunately I missed it due to other commitments :), we've had a lot of visitors at our FOSDEM booth. So, because of popular demand, here are again the files for our posters. They are based on the great "Gentoo Abducted" design by Matteo Pescarin.  Released under CC BY-SA 2.5 as the original. Enjoy!



PDF SVG


PDF SVG

January 29, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Stage4 tarballs, minimal and cloud (January 29, 2016, 06:00 UTC)

Where are they

The tarballs can be found in the normal place.

Minimal

This is meant to be just what you need to boot: the disk won't expand itself, and it won't even get networking info or set any passwords for you (no default password).

This tarball is supposed to be the base you generate more complex images from; it is what is going to be used by Openstack's diskimage-builder.

The primary things it gets you are a kernel, a bootloader and sshd.

stage4-minimal spec

Cloud

This was primarily targeted for use with openstack, but it should work with amazon as well; both use cloud-init.

Network interfaces are expected to use dhcp; a couple of other useful things are installed as well: syslog, logrotate, etc.

By default cloud-init will take data (keys mainly) and set them up for the 'gentoo' user.

stage4-cloud spec

Next

I'll be posting about the work being done to take these stages and build bootable images. At the moment I do have images available here.

openstack images

January 26, 2016
Hanno Böck a.k.a. hanno (homepage, bugs)

Update: When I wrote this blog post it was an open question for me whether using Address Sanitizer in production is a good idea. A recent analysis posted on the oss-security mailing list explains in detail why using Asan in its current form is almost certainly not a good idea. Having any suid binary built with Asan enables a local root exploit - and there are various other issues. Therefore using Gentoo with Address Sanitizer is only recommended for developing and debugging purposes.

Address Sanitizer is a remarkable feature that is part of the gcc and clang compilers. It can be used to find many typical C bugs - invalid memory reads and writes, use after free errors etc. - while running applications. It has found countless bugs in many software packages. I'm often surprised that many people in the free software community seem to be unaware of this powerful tool.

Address Sanitizer is mainly intended to be a debugging tool. It is usually used to test single applications, often in combination with fuzzing. But as Address Sanitizer can prevent many typical C security bugs - why not use it in production? It doesn't come for free. Address Sanitizer takes significantly more memory and slows down applications by 50 - 100 %. But for some security sensitive applications this may be a reasonable trade-off. The Tor project is already experimenting with this with its Hardened Tor Browser.

One project I've been working on in the past months is to allow a Gentoo system to be compiled with Address Sanitizer. Today I'm publishing this and want to allow others to test it. I have created a page in the Gentoo Wiki that should become the central documentation hub for this project. I published an overlay with several fixes and quirks on Github.
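
For the curious: at its core, enabling ASan comes down to compiler and linker flags; the overlay and wiki page deal with the many quirks on top of that. A rough make.conf-style sketch (an illustration, not necessarily the project's exact setup):

CFLAGS="${CFLAGS} -fsanitize=address -g"
CXXFLAGS="${CXXFLAGS} -fsanitize=address -g"
LDFLAGS="${LDFLAGS} -fsanitize=address"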

I see this work as part of my Fuzzing Project. (I'm posting it here because the Gentoo category of my personal blog gets indexed by Planet Gentoo.)

I am not sure if using Gentoo with Address Sanitizer is reasonable for a production system. One thing that makes me uneasy in suggesting this for high security requirements is that it's currently incompatible with Grsecurity. But just creating this project already caused me to find a whole number of bugs in several applications. Some notable examples include Coreutils/shred, Bash ([2], [3]), man-db, Pidgin-OTR, Courier, Syslog-NG, Screen, Claws-Mail ([2], [3]), ProFTPD ([2], [3]), ICU, TCL ([2]), Dovecot. I think it was worth the effort.

I will present this work in a talk at FOSDEM in Brussels this Saturday, 14:00, in the Security Devroom.

January 25, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
Mangling shell options in ebuilds (January 25, 2016, 10:46 UTC)

A long time ago eutils.eclass was gifted with a set of terribly ugly functions to push/pop various variables and shell options. Those functions were written very badly, and committed without any review. As a result, a number of eclasses and ebuilds are now using that code without even understanding how bad it is.

In this post, I would like to shortly summarize how to properly and reliably save states of shell options. While the resulting code is a little bit longer than use of e*_push and e*_pop functions, it is much more readable, does not abuse eval, does not abuse global variables and is more reliable.

Preferable solution: subshell scope

Of course, the preferable way of altering shell options is to do that in a subshell. This is the only way that reliably isolates the alterations from the parent ebuild environment. However, subshells are rarely desired — so this is something you’d rather reuse if it’s already there than introduce just for the sake of shell option mangling.
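
For illustration, a minimal sketch of the subshell approach, mirroring the snippets below:

my_function() {
	(
		# option changes are confined to the subshell
		shopt -s nullglob
		set -o noglob
		# ...
	)
	# options in the parent shell are untouched here
}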

Mangling shopt options

Most of the ‘new’ bash options are mangled using the shopt builtin. In this case, the -s and -u switches are used to change the option state, while the -p option can be used to get the current value. The current value is output in the form of shopt command syntax that can be called directly to restore the previous value.

my_function() {
	local prev_shopt=$(shopt -p nullglob)
	# prev_shopt='shopt -u nullglob' now
	shopt -s nullglob
	# ...
	${prev_shopt}
}

Mangling set options

The options set using the set builtin can be manipulated in a similar way. While the builtin supports both short and long options, I strongly recommend using long options for readability. In fact, the long option names can be used through shopt with the additional -o parameter.

my_function() {
	local prev_shopt=$(shopt -p -o noglob)
	# prev_shopt='set +o noglob' now
	set -o noglob  # or shopt -s -o noglob
	# ...
	${prev_shopt}
}

Mangling umask

The umask builtin returns the current octal umask when called with no parameters. Furthermore, the -p parameter can be used to get the full command, much like the shopt -p output.

my_function() {
	local prev_umask=$(umask)
	# prev_umask=0022 now
	umask 077
	# ...
	umask "${prev_umask}"
}

alternative_function() {
	local prev_umask=$(umask -p)
	# prev_umask='umask 0022' now
	umask 077
	# ...
	${prev_umask}
}

Mangling environment variables

The eutils hackery went as far as to reinvent local variables using… global stacks. Not that it makes any sense. Whenever you want to change a variable’s value or attributes, or just unset it temporarily, just use local variables. If the change needs to apply only to part of a function, create a sub-function and put the local variable inside it.

While at it, please remember that bash does not support local functions. Therefore, you need to namespace your functions to avoid collisions and unset them after use.

my_function() {
	# unset FOO in local scope (this also prevents it from being exported)
	local FOO
	# 'localize' bar for modifications, preserving value
	local bar="${bar}"

	#...

	my_sub_func() {
		# export LC_ALL=POSIX in function scope
		local -x LC_ALL=POSIX
		#...
	}
	my_sub_func
	# unset the function after use
	unset -f my_sub_func
}

Update: mangling shell options without a single subshell

(added on 2016-01-28)

izabera has brought it to my attention that the shopt builtin supports a -q option to suppress output, and uses its exit status to indicate the original flag state. This makes it possible to set and unset the flags without using a single subshell or executing returned commands.

Since I do not expect most shell script writers to use such a long replacement, I present it merely as a curiosity.

my_setting_function() {
	shopt -q nullglob
	local prev_shopt=${?}
	shopt -s nullglob

	#...

	[[ ${prev_shopt} -eq 0 ]] || shopt -u nullglob
}

my_unsetting_function() {
	shopt -q extquote
	local prev_shopt=${?}
	shopt -u extquote

	#...

	[[ ${prev_shopt} -eq 0 ]] && shopt -s extquote
}

January 24, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita 0.6 is released (January 24, 2016, 11:14 UTC)

Hi all,
we are pleased to announce version 0.6 of Trojitá, a fast Qt IMAP e-mail client. This release brings several new features as well as the usual share of bugfixes:

  • Plugin-based infrastructure for the address book, which will allow better integration with other applications
  • Usability improvements in the message composer on several fronts
  • Better keyboard-only usability for those of us who do not touch the mouse that often
  • More intuitive message tagging, and support for standardized actions for junk mail
  • Optional sharing of authentication data between IMAP and SMTP
  • Change to using Qt5 by default. This is the last release which still supports Qt4.
  • Improved robustness on unstable network connections
  • The old status bar is now gone to save screen real estate
  • IMAP interoperability fixes
  • Speed improvements

This release has been tagged in git as "v0.6". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

This release is named after the Aegean island Λέσβος (Lesvos). Jan was there for the past five weeks, and he insisted on mentioning this challenging experience.

The Trojitá developers

January 22, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Please test www-apache/mod_perl-2.0.10_pre201601 (January 22, 2016, 18:39 UTC)

We're trying to get both Perl 5.22 and Apache 2.4 stable on Gentoo these days. One thing that would be really useful is to have a www-apache/mod_perl that works with all current Perl and Apache versions... and there's a candidate for that: a snapshot of what is hopefully going to be mod_perl-2.0.10 pretty soon. So...

Please keyword (if necessary) and test www-apache/mod_perl-2.0.10_pre201601!!! Feedback for all Perl and Apache versions is very much appreciated. Gentoo developers can directly edit our compatibility table with the results, everyone else please comment on this blog post or file bugs in case of problems!

Please always include exact www-servers/apache, dev-lang/perl, and www-apache/mod_perl versions!

January 18, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Of OpenStack and SSL (January 18, 2016, 06:00 UTC)

SSL in vanila OpenStack

The nature of OpenStack projects is largely like projects in Gentoo. Even though they are all under the OpenStack umbrella that doesn't mean they all have to work the same, or even work together.

For instance, nova has the ability to do ssl itself: you can define a CA and a public/private keypair. Glance (last time I checked) doesn't do ssl itself, so you must offload it. Other services might do ssl themselves, but not in the same way nova does it.

This means that the most 'standard' setup would be to not run ssl at all, but that isn't exactly desirable. So run an ssl reverse proxy instead.

Basic Setup

  • OpenStack services are set up on one host (just in this example).
  • OpenStack services are configured to listen on localhost only.
  • Public, Internal and Admin URLs need to be defined with https.
  • Some tuning needs to be done so services work properly, primarily to glance and nova-novnc.
  • Nginx is used as the reverse proxy.

Configs and Tuning

General Config for All Services/Sites

This is the basic setup for each of the openstack services, the only difference between them will be what goes in the location subsection.

server {
    listen LOCAL_PUBLIC_IPV4:PORT;
    listen [LOCAL_PUBLIC_IPV6]:PORT;
    server_name name.subdomain.example.com;
    access_log /var/log/nginx/keystone/access.log;
    error_log /var/log/nginx/keystone/error.log;

    ssl on;
    ssl_certificate /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    ssl_certificate_key /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    add_header Public-Key-Pins 'pin-sha256="PUB_KEY_PIN_SHA"; max-age=2592000; includeSubDomains';
    ssl_dhparam /etc/nginx/params.4096;
    resolver TRUSTED_DNS_SERVER;
    resolver_timeout 5s;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    add_header X-XSS-Protection "1; mode=block";
    add_header Content-Security-Policy "default-src 'self' https: wss:;";
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        # this changes depending on the service
        proxy_pass http://127.0.0.1:PORT;
    }
}

Keystone and Uwsgi

It turns out keystone has switched to uwsgi for its service backend. This is good because it means we can have the web server connect to that; no more trying to do it all by itself. I'll leave the setting up of uwsgi itself as an exercise to the reader :P

This config has a few extra things, but it is currently what I know to be 'secure' (a similar config on this blog gets an A+ on all those ssl test things). It's the last location piece that changes the most between services.

location / {
    uwsgi_pass unix:///run/uwsgi/keystone_admin.socket;
    include /etc/nginx/uwsgi_params;
    uwsgi_param SCRIPT_NAME admin;
}

Glance

Glance just needs one thing on top of the general proxying: client_max_body_size 0; in the main server stanza, so that you can upload images without being cut off at some low size.

client_max_body_size 0;
location / {
    proxy_pass http://127.0.0.1:9191;
}

Nova

The services for nova just need the basic proxy_pass line. The only exception is novnc, which needs some proxy headers passed.

location / {
    proxy_pass http://127.0.0.1:6080;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
}

Rabbitmq

Rabbit is fairly simple: you just need to enable ssl and disable the plaintext port (setting up the rest of your config, of course).

[
    {ssl, [{versions, ['tlsv1.2', 'tlsv1.1']}]},
    {rabbit, [
        {tcp_listeners, []},
        {ssl_listeners, [5671]},
        {ssl_options, [{cacertfile,"/etc/rabbitmq/ssl/CA_CERT.pem"},
                       {certfile,  "/etc/rabbitmq/ssl/PUB_KEY.pem"},
                       {keyfile,   "PRIV_KEY.key"},
                       {versions, ['tlsv1.2', 'tlsv1.1']}
                      ]
        }]
    }
].

Openstack Configs

The openstack configs can differ slightly, but they are mostly the same now that they all use the same libraries (the oslo stuff).

General Config

[keystone_authtoken]
auth_uri = https://name.subdomain.example.com:5000
auth_url = https://name.subdomain.example.com:35357

[oslo_messaging_rabbit]
rabbit_host = name.subdomain.example.com
rabbit_port = 5671
rabbit_use_ssl = True

Nova

osapi_compute_listen = 127.0.0.1
metadata_listen = 127.0.0.1
novncproxy_host = 127.0.0.1
enabled_apis = osapi_compute, metadata
[vnc]
novncproxy_base_url = https://name.subdomain.example.com:6080/vnc_auto.html
# the following only on the 'master' host
vncserver_proxyclient_address = 1.2.3.4
vncserver_listen = 1.2.3.4

[glance]
host = name.subdomain.example.com
protocol = https
api_servers = https://name.subdomain.example.com:9292

[neutron]
url = https://name.subdomain.example.com:9696
auth_url = https://name.subdomain.example.com:35357

Cinder

# api-servers get this
osapi_volume_listen = 127.0.0.1

# volume-servers and api-servers get this
glance_api_servers=https://name.subdomain.example.com:9292

Glance

# api
bind_host = 127.0.0.1
registry_host = name.subdomain.example.com
registry_port = 9191
registry_client_protocol = https


# cache
registry_host = name.subdomain.example.com
registry_port = 9191


# registry
bind_host = 127.0.0.1
rabbit_host = name.subdomain.example.com
rabbit_port = 5671
rabbit_use_ssl = True


# scrubber
registry_host = name.subdomain.example.com
registry_port = 9191

Neutron

# neutron.conf
bind_host = 127.0.0.1
nova_url = https://name.subdomain.example.com:8774/v2

[nova]
auth_url = https://name.subdomain.example.com:35357


# metadata_agent.ini
nova_metadata_ip = name.subdomain.example.com
nova_metadata_protocol = https

January 16, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

You might have seen the word TEXTREL thrown around in security or hardening circles, or used in Gentoo Linux installation warnings, but the documentation around this term is not very helpful in explaining why text relocations are a problem, so I've been asked to write something about it.

Let's start with taking apart the terminology. TEXTREL is jargon for "text relocation", which is once again more jargon, as "text" in this case means "code portion of an executable file." Indeed, in ELF files, the .text section is the one that contains all the actual machine code.

As for "relocation", the term is related to dynamic loaders. It is the process of modifying the data loaded from the loaded file to suit its placement within memory. This might also require some explanation.

When you build code into executables, any named reference is translated into an address instead. This includes, among others, variables, functions, constants and labels — and also some unnamed references such as branch destinations on statements such as if and for.

These references fall into two main typologies: relative and absolute references. This is the easiest part to explain: a relative reference takes some address as "base" and then adds or subtracts from it. Indeed, many architectures have a "base register" which is used for relative references. In case of executable code, particularly with the reference to labels and branch destinations, relative references translate into relative jumps, which are relative to the current instruction pointer. An absolute reference is instead a fully qualified pointer to memory, well at least to the address space of the running process.

While absolute addresses are kinda obvious as a concept, they are not very practical for a compiler to emit in many cases. For instance, when building shared objects, there is no way for the compiler to know which addresses to use, particularly because a single process can load multiple objects, and they need to all be loaded at different addresses. So instead of writing to the file the actual final (unknown) address, what gets written by the compiler first – and by the link editor afterwards – is a placeholder. It might sound ironic, but an absolute reference is then emitted as a relative reference based upon the loading address of the object itself.

When the loader takes an object and loads it to memory, it'll be mapped at a given "start" address. After that, the absolute references are inspected, and the relative placeholders resolved to the final absolute addresses. This is the process of relocation. Different types of relocation (or displacements) exist, but they are not the topic of this post.

Relocations as described up until now can apply to both data and code, but we single out code relocations as TEXTRELs. The reason for this is to be found in mitigation (or hardening) techniques. In particular, what is called W^X, NX or PaX. The basic idea of this technique is to disallow modification to executable areas of memory, by forcing the mapped pages to either be writable or executable, but not both (W^X reads "writable xor executable".) This has a number of drawbacks, which are most clearly visible with JIT (Just-in-Time) compilation processes, including most JavaScript engines.

But besides the JIT problem, there is also the problem of relocations happening in the code section of an executable. Since the relocated entries need to be written to, it is not feasible (or at least not easy) to keep those pages exclusively writable or executable. Well, there are theoretical ways to produce that result, but it complicates memory management significantly, so the short version is that, generally speaking, TEXTRELs and W^X techniques don't go well together.
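
A practical aside: checking a file for text relocations is easy. On Gentoo, scanelf from app-misc/pax-utils reports them, and they also show up as a TEXTREL entry in the ELF dynamic section. The library path below is just a placeholder:

$ scanelf -qt /usr/lib/libfoo.so
$ readelf -d /usr/lib/libfoo.so | grep TEXTREL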

This is further complicated by another mitigation strategy: ASLR, Address Space Layout Randomization. In particular, ASLR fully defeats prelinking as a strategy for dealing with TEXTRELs — theoretically on a system that allows TEXTREL but has the address space to map every single shared object at a fixed address, it would not be necessary to relocate at runtime. For stronger ASLR you also want to make sure that the executables themselves are mapped at different addresses, so you use PIE, Position Independent Executable, to make sure they don't depend on a single stable loading address.

Usage of PIE was for a long while limited to a few select hardened distributions, such as Gentoo Hardened, but it's getting more common, as ASLR is a fairly effective mitigation strategy even for binary distributions where otherwise function offsets would be known to an attacker.

At the same time, SELinux also implements protection against text relocation, so you no longer need to have a patched hardened kernel to provide this protection.

Similarly, Android 6 is now disallowing the generation of shared objects with text relocations, although I have no idea if binaries built to target this new SDK version gain any more protection at runtime, since it's not really my area of expertise.

Michał Górny a.k.a. mgorny (homepage, bugs)

The way packages are maintained in Gentoo has been evolving for quite some time already. So far, all of that has been happening on top of old file formats which slowly started to diverge from the needs of Gentoo developers and to become partially broken. The concept of herds has become blurry, with definitions varying between developers and a partial assumption that they were deprecated. Maintenance of herds by projects was broken by moving projects to the Wiki. Some projects have stopped using herds, others have been declaring them in metadata.xml in different ways.

The problem has finally reached the Gentoo Council and was discussed at the 2015-10-25 meeting (note: still no summary…). The Council attempted to address the different problems by votes, and to create a new solution by combining the results of those votes. However, it finally decided that it is not possible to create a good specification this way. Instead, the meeting brought two major conclusions. Firstly, herds are definitely deprecated. Secondly, someone needs to provide a complete, consistent replacement in GLEP form.

This is how GLEP 67 came to be. It was based on results of previous discussion, Council votes and thorough analysis of different problems. It provides a complete, consistent system for maintaining packages and expressing the maintenance information. It has been approved by the Council on 2016-01-10, with two week deadline on preparing to the switch.

Therefore, on 2016-01-24 Gentoo is going to switch to the new maintenance structure described in GLEP 67 completely. The announcement with transition details has been sent already. Instead, I’d like to focus on describing how things are going to work starting from the day GLEP 67 becomes implemented.

Who is going to maintain packages?

Before getting into technical details, GLEP 67 starts by limiting possible package maintainer entries. Until now, metadata files were allowed to list practically any e-mail addresses for package maintainers. From now on, only real people (either developers or proxied maintainers) and projects (meeting the requirements of GLEP 39, in particular having a Wiki page) are allowed to be maintainers. All maintainers are identified by e-mail addresses which are required to be unique (i.e. sharing the same address between multiple projects is forbidden) and registered on bugs.g.o.

This supports the two major goals behind maintainer specifications: bug assignment and responsibility assignment. The former is rather obvious — Bugzilla is the most reliable communication platform for both Gentoo developers and users. Therefore, it is important that the bugs can be assigned to appropriate entities without any issues. The latter aims to address the problem of unclear ‘ownership’ of some packages, and packages maintained by ‘dead’ e-mail aliases.

In other words, from now on we require that for every maintained package in Gentoo, we need to be able to obtain a complete, clear list of people maintaining it (directly and via projects). We no longer accept ‘dumb’ e-mail aliases that make it impossible to distinguish real maintainers from people who are simply following bugs. This gives three important advantages:

  1. we can actually ping the correct people on IRC without having to go through hoops,
  2. we can determine whether a package is actually maintained by someone, rather than assigned to an alias from which nobody reads bug mail anymore,
  3. we can clearly determine who is responsible for a package and who is the appropriate person to acknowledge changes.

Changes for maintainer-needed packages

The new requirements brought a new issue: maintainer-needed@g.o. The specific use of this e-mail alias did not really seem to suit a project. Creating a ‘maintainer-needed project’ would either imply creating a dead entity or assigning actual maintainers to supposedly-unmaintained packages. On the other hand, I did not really want to introduce special cases in the specification.

Instead, I have decided that the best way forward is to remove it. In other words, unmaintained packages now explicitly list no maintainers. The assignment to maintainer-needed@g.o will be done implicitly, and is a rule specific to the Gentoo repository. Other repositories may use different policies for packages with no explicit maintainers (like assigning to the repository owner).

The metadata.xml and projects.xml files

The changes to metadata.xml are really minimal, and backwards compatible. The <herd/> element is no longer used, and will be prohibited. Instead, <maintainer/> elements are going to be used for all kinds of maintainers. There are no incompatible changes in those elements, therefore existing tools will not be broken.

The <maintainer/> element gets a new obligatory type attribute that needs to either be person or project. The latter value may cause tools to look the project up (by e-mail) in projects.xml.
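
To illustrate, a post-GLEP 67 metadata.xml could carry maintainer entries like the following (addresses and names are placeholders):

<maintainer type="person">
	<email>larry@gentoo.org</email>
	<name>Larry the Cow</name>
</maintainer>
<maintainer type="project">
	<email>example-project@gentoo.org</email>
	<name>Example Project</name>
</maintainer>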

The projects.xml file (unlike the past herds.xml) is clearly defined in the repository scope. In particular, the tools must not assume it always comes out of the Gentoo repository. Other repositories are allowed to define their own projects (though overriding projects is not allowed), and project lookup needs to respect the masters= setting in repositories.

For the Gentoo repository, projects.xml is generated automatically from the Wiki project pages, and distributed both via api.gentoo.org and via the usual repository distribution means (the rsync and git mirrors).

Summary

A quick summary for Gentoo developers of how things look with GLEP 67:

  1. Only people and projects can maintain packages. If you want to maintain packages in an organized group, you have to create a project — with Wiki page, unique e-mail address and bugs.g.o account.
  2. Only the explicitly listed project members (and subproject members whenever member inheritance is enabled) are considered maintainers of the package. Other people subscribed to the e-mail alias are not counted.
  3. Packages with no maintainers have no <maintainer/> elements. The bugs are still implicitly assigned to maintainer-needed@g.o but this e-mail alias is no longer used in metadata.xml.
  4. <herd/> elements are no longer used, <maintainer/>s are used instead.
  5. <maintainer/> requires a new type attribute that takes either person or project value.

January 05, 2016
Sebastian Pipping a.k.a. sping (homepage, bugs)

Demo start:
https://www.youtube.com/watch?v=2A7V3GLWF6U&feature=youtu.be&t=37s

Where “OpenRC 0.19.1 is starting up Gentoo Linux (x86_64)” scrolls into display:
https://www.youtube.com/watch?v=2A7V3GLWF6U&feature=youtu.be&t=1m21s

January 03, 2016

The new year kicks off with two large events with Gentoo participation: The Southern California Linux Expo SCALE14x and FOSDEM 2016, both featuring a Gentoo booth.

SCALE14x

SCALE14x logo

First we have the Southern California Linux Expo SCALE in its 14th edition. The Pasadena Convention Center will host the event this year from January 21 to January 24.

Gentoo will be present as an exhibitor, like in the previous years.

Thanks to the organizers, we can share a special promotional code for attending SCALE with our community, valid for full access passes. Using the code GNTOO on the registration page will get you a 50% discount.

FOSDEM 2016

FOSDEM 2016 logo

Then, on the last weekend of January, we’ll be on the other side of the pond in Brussels, Belgium where FOSDEM 2016 will take place on January 30 and 31.

Located at the Université libre de Bruxelles, it doesn’t just offer interesting talks, but also the finest Belgian beers when the sun sets. :)

This year, Gentoo will also be manning a stand with gadgets, swag, and LiveDVDs.

Booth locations

We’ll update this news item with more detailed information on how to find our booths at both conferences once we have information from the organizers.

January 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux on DELL XPS 13 9350 (January 02, 2016, 10:43 UTC)

As I found little help about this online I figured I’d write a summary piece about my recent experience in installing Gentoo Linux on a DELL XPS 13 9350.

UEFI or MBR ?

This machine ships with an NVMe SSD, so don’t think twice: UEFI is the only sane way to go.

BIOS configuration

I advise using the pre-installed Windows 10 to update the XPS to the latest BIOS (1.1.7 at the time of writing). Then you need to change some settings to boot and get the NVMe SSD disk discovered by the live CD.

  • Turn off Secure Boot
  • Set SATA Operation to AHCI (will break your Windows boot but who cares)

Live CD

Go for the latest SystemRescueCD (it’s Gentoo based, you won’t be lost) as it’s more up to date and supports booting on UEFI. Make it a Live USB, for example using unetbootin and the ISO on a vfat formatted USB stick.

NVME SSD disk partitioning

We’ll be using GPT with UEFI. I found that using gdisk was the easiest. The disk itself is found at /dev/nvme0n1. Here is the partition table I used:

  • 500MB UEFI boot partition (type EF00)
  • 16GB swap partition
  • 60GB Linux root partition
  • 400GB home partition

The corresponding gdisk commands :

# gdisk /dev/nvme0n1

Command: o ↵
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y ↵

Command: n ↵
Partition Number: 1 ↵
First sector: ↵
Last sector: +500M ↵
Hex Code: EF00 ↵

Command: n ↵
Partition Number: 2 ↵
First sector: ↵
Last sector: +16G ↵
Hex Code: 8200 ↵

Command: n ↵
Partition Number: 3 ↵
First sector: ↵
Last sector: +60G ↵
Hex Code: ↵

Command: n ↵
Partition Number: 4 ↵
First sector: ↵
Last sector: ↵ (for rest of disk)
Hex Code: ↵

Command: w ↵
Do you want to proceed? (Y/N): Y ↵

No WiFi on the Live CD? No panic

If your live CD is old (pre 4.4 kernel), the integrated broadcom 4350 wifi card won’t be available !

My trick was to use my Android phone connected to my local WiFi as a USB modem which was detected directly by the live CD.

  • get your Android phone connected on your local WiFi (unless you want to use your cellular data)
  • plug in your phone using USB to your XPS
  • on your phone, go to Settings / More / Tethering & portable hotspot
  • enable USB tethering

Running ip addr will show the network card enp0s20f0u2 (for me at least), then if no IP address is set on the card, just ask for one :

# dhcpcd enp0s20f0u2

Et voilà, you now have access to the internet.

Proceed with installation

The only thing to worry about is to format the UEFI boot partition as FAT32.

# mkfs.vfat -F 32 /dev/nvme0n1p1

Then follow the Gentoo handbook as usual for the next steps of the installation process until you arrive at the kernel and the bootloader / grub part.

From this moment I can already say that NO, we won’t be using GRUB at all, so don’t bother installing it. Why? Because at the time of writing, the efi-64 support of GRUB was simply not working: it failed to discover the NVMe SSD disk on boot.

Kernel sources and consideration

The trick here is that we’ll set up the boot ourselves directly from the BIOS later, so we only need to build a standalone kernel (meaning one able to boot without an initramfs).

EDIT: as of Jan. 10 2016, kernel 4.4 is available on portage so you don’t need the patching below any more !

Make sure you install and use at least a 4.3.x kernel (4.3.3 at the time of writing). Add sys-kernel/gentoo-sources to your /etc/portage/package.keywords file if needed. If you have a 4.4 kernel available, you can skip patching it below.

Patching 4.3.x kernels for Broadcom 4350 WiFi support

To get the broadcom 4350 WiFi card working on 4.3.x, we need to patch the kernel sources. This is very easy to do thanks to Gentoo’s user patches support. Do this before installing gentoo-sources (or reinstall it afterwards).

This example is for gentoo-sources-4.3.3, adjust your version accordingly :

(chroot) # mkdir -p /etc/portage/patches/sys-kernel/gentoo-sources-4.3.3
(chroot) # cd /etc/portage/patches/sys-kernel/gentoo-sources-4.3.3
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/0001-bcm4350.patch

When emerging the gentoo-sources package, you should see the patch being applied. Check that it worked by issuing :

(chroot) # grep BRCM_CC_4350 /usr/src/linux/drivers/net/wireless/brcm80211/brcmfmac/chip.c
case BRCM_CC_4350_CHIP_ID:

The resulting kernel module will be called brcmfmac, make sure to load it on boot by adding it in your /etc/conf.d/modules :

modules="brcmfmac"

EDIT: as of Jan. 7 2016, version 20151207 of linux-firmware ships with the needed files so you don’t need to download those any more !

Then we need to download the WiFi card’s firmware files which are not part of the linux-firmware package at the time of writing (20150012).

(chroot) # emerge '>=sys-kernel/linux-firmware-20151207'

# DO THIS ONLY IF YOU DONT HAVE >=sys-kernel/linux-firmware-20151207 available !
(chroot) # cd /lib/firmware/brcm/
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/BCM-0a5c-6412.hcd
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/brcmfmac4350-pcie.bin

Kernel config & build

I used genkernel to build my kernel. I made only a few adjustments, but these are the things to mind in this pre-built kernel:

  • support for NVME SSD added as builtin
  • it is builtin for ext4 only (other FS are not compiled in)
  • support for DM_CRYPT and LUKS ciphers for encrypted /home
  • the root partition is hardcoded in the kernel as /dev/nvme0n1p3 so if yours is different, you’ll need to change CONFIG_CMDLINE and compile it yourself
  • the CONFIG_CMDLINE above is needed because you can’t pass kernel parameters using UEFI so you have to hardcode them in the kernel itself
  • support for the intel graphic card DRM and framebuffer (there’s a kernel bug with skylake CPUs which will spam the logs but it still works good)
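
For reference, the points above translate into kernel config fragments roughly like these (a sketch; the complete config downloaded below is authoritative):

CONFIG_BLK_DEV_NVME=y
CONFIG_EXT4_FS=y
CONFIG_DM_CRYPT=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="root=/dev/nvme0n1p3"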

Get the kernel config and compile it :

(chroot) # mkdir -p /etc/kernels
(chroot) # cd /etc/kernels
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/kernel-config-x86_64-4.3.3-gentoo
(chroot) # genkernel kernel

The proposed kernel config here is for gentoo-sources-4.3.3 so make sure to rename the file for your current version.

This kernel is far from perfect but it works very well so far; sound, webcam and suspend all work smoothly!

make.conf settings for intel graphics

I can recommend using the following on your /etc/portage/make.conf :

INPUT_DEVICES="evdev synaptics"
VIDEO_CARDS="intel i965"

fstab for SSD

Don’t forget to make sure the noatime option is used on your fstab for / and /home !

/dev/nvme0n1p1    /boot    vfat    noauto,noatime    1 2
/dev/nvme0n1p2    none     swap    sw                0 0
/dev/nvme0n1p3    /        ext4    noatime   0 1
/dev/nvme0n1p4    /home    ext4    noatime   0 1

As pointed out by stefantalpalaru in the comments, it is recommended to schedule an SSD TRIM in your crontab once in a while; see the Gentoo Wiki on SSD for more details.

encrypted /home auto-mounted at login

I advise adding cryptsetup to your USE variable in /etc/portage/make.conf and then updating your @world with emerge -NDuq @world.

I assume you haven’t created your user yet, so your unmounted /home is empty. Make sure that:

  • your /dev/nvme0n1p4 home partition is not mounted
  • you removed the corresponding /home line from your /etc/fstab (we’ll configure pam_mount to get it auto-mounted on login)

AFAIK, the LUKS password you’ll set on the first slot when issuing luksFormat below should be the same as your user’s password !

(chroot) # cryptsetup luksFormat -s 512 /dev/nvme0n1p4
(chroot) # cryptsetup luksOpen /dev/nvme0n1p4 crypt_home
(chroot) # mkfs.ext4 /dev/mapper/crypt_home
(chroot) # mount /dev/mapper/crypt_home /home
(chroot) # useradd -m -G wheel,audio,video,plugdev,portage,users USERNAME
(chroot) # passwd USERNAME
(chroot) # umount /home
(chroot) # cryptsetup luksClose crypt_home

We’ll use sys-auth/pam_mount to manage the mounting of our /home partition when a user logs in successfully, so make sure you emerge pam_mount first, then configure the following files :

  • /etc/security/pam_mount.conf.xml (only line added is the volume one)
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<!--
	See pam_mount.conf(5) for a description.
-->

<pam_mount>

		<!-- debug should come before everything else,
		since this file is still processed in a single pass
		from top-to-bottom -->

<debug enable="0" />

		<!-- Volume definitions -->

<volume user="USERNAME" fstype="auto" path="/dev/nvme0n1p4" mountpoint="/home" options="fsck,noatime" />

		<!-- pam_mount parameters: General tunables -->

<!--
<luserconf name=".pam_mount.conf.xml" />
-->

<!-- Note that commenting out mntoptions will give you the defaults.
     You will need to explicitly initialize it with the empty string
     to reset the defaults to nothing. -->
<mntoptions allow="nosuid,nodev,loop,encryption,fsck,nonempty,allow_root,allow_other" />
<!--
<mntoptions deny="suid,dev" />
<mntoptions allow="*" />
<mntoptions deny="*" />
-->
<mntoptions require="nosuid,nodev" />

<!-- requires ofl from hxtools to be present -->
<logout wait="0" hup="no" term="no" kill="no" />


		<!-- pam_mount parameters: Volume-related -->

<mkmountpoint enable="1" remove="true" />


</pam_mount>
  • /etc/pam.d/system-auth (only lines added are the ones with pam_mount.so)
auth		required	pam_env.so 
auth		required	pam_unix.so try_first_pass likeauth nullok 
auth		optional	pam_mount.so
auth		optional	pam_permit.so

account		required	pam_unix.so 
account		optional	pam_permit.so

password	optional	pam_mount.so
password	required	pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3 
password	required	pam_unix.so try_first_pass use_authtok nullok sha512 shadow 
password	optional	pam_permit.so

session		optional	pam_mount.so
session		required	pam_limits.so 
session		required	pam_env.so 
session		required	pam_unix.so 
session		optional	pam_permit.so

That’s it, easy heh ?! When you login as your user, pam_mount will decrypt your home partition using your user’s password and mount it on /home !

UEFI booting your Gentoo Linux

The best (and weird ?) way I found for booting the installed Gentoo Linux and its kernel is to configure the UEFI boot directly from the XPS BIOS.

The idea is that the BIOS can read the files from the EFI boot partition since it is formatted as FAT32. All we have to do is create a new boot option from the BIOS and configure it to use the kernel file stored in the EFI boot partition.

  • reboot your machine
  • get on the BIOS (hit F2)
  • get on the General / Boot Sequence menu
  • click Add
  • set a name (like Gentoo 4.3.3) and find + select the kernel file (use the integrated file finder)


  • remove all unwanted boot options


  • save it and reboot

Your Gentoo kernel and OpenRC will be booting now !
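
If you would rather create the boot entry from Linux than from the BIOS menus, the efibootmgr tool can do the same job. A sketch, using the partition layout from above and an assumed kernel file name on the EFI partition:

# efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Gentoo 4.3.3" -l '\kernel-4.3.3-gentoo'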

Suggestions, corrections, enhancements ?

As I said, I wrote all this quickly to save some time for whoever it could help. I’m sure there are still a lot of improvements to be made, so I’ll surely update this article later on.

December 31, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
uWSGI v2.0.12 (December 31, 2015, 14:18 UTC)

It’s been a long time since I made a blog post about a uWSGI release, but this one is special to me because it contains some features I asked a colleague of mine for.

For his first contributions to a big Open Source project, our fellow @shir0kamii added two features (spooler_get_task and -if-hostname-match) which we had needed at work for quite a long time and which were backported in this release: congratulations again :)

Highlights

  • official PHP 7 support
  • uwsgi.spooler_get_task API to easily read and inspect a spooler file from your code
  • -if-hostname-match regexp on uWSGI configuration files to allow a more flexible configuration based on the hostname of the machine

Of course, all of this is already available on Gentoo Linux !

Full changelog here as usual.

December 30, 2015
32C3 (December 30, 2015, 21:18 UTC)

This year I participated in the Chaos Computer Club's annual congress for the first time, despite it being the 32nd such event arranged, hence its name 32c3. This year's event has the subtitle "Gated Communities" and, like last year, was located in Hamburg after having been in Berlin for a while. By this … Continue reading "32C3"

December 28, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)
SCALE 14x (December 28, 2015, 20:30 UTC)

I don’t know about you but I’m going to SCALE 14x.

I'm going to SCALE 14x!

SCALE is always a lot of fun and last year I particularly liked their new embedded track. This year’s schedule looks equally exciting. Among the speakers you may recognize former Gentoo developers as well as a few OSS celebrities.

We’ll have a booth again this year. Feel free to come talk to us or help us at the booth. One of the new things this year is that we’re getting a Gentoo-specific promotional code for a 50% discount on full-access passes for the entire event. The code is GNTOO. It’s not only for Gentoo developers, but also for all Gentoo users, potential users, new-year-resolution future users, friends, etc… Use it! Spread it! I’m looking forward to seeing you all there.

December 19, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
XScreenSaver unlock dialog tuning (December 19, 2015, 19:42 UTC)

I have a bit of trouble accepting that a dialog presented to me as frequently as the XScreenSaver unlock window below is by far the least shiny part of my daily Linux desktop experience.


Tuning just the knobs that XScreenSaver already comes with, I eventually got to this point:


The logo is still too much noise and the font still lacks anti-aliasing. However, most of the text noise, the pre-90s aesthetics and the so-called thermometer are gone.

To bring it to your desktop, use this content for ~/.Xdefaults

xscreensaver.dateFormat:
xscreensaver.passwd.body.label:
xscreensaver.passwd.heading.label:
xscreensaver.passwd.login.label:
xscreensaver.passwd.thermometer.width: 2
xscreensaver.passwd.uname: False 
xscreensaver.passwd.unlock.label:

xscreensaver.Dialog.background:         #000000
xscreensaver.Dialog.foreground:         #ffffff
xscreensaver.Dialog.Button.background:  #000000
xscreensaver.Dialog.Button.foreground:  #ffffff
xscreensaver.Dialog.text.background:    #000000
xscreensaver.Dialog.text.foreground:    #ffffff

xscreensaver.Dialog.shadowThickness: 1
xscreensaver.Dialog.topShadowColor:     #000000
xscreensaver.Dialog.bottomShadowColor:  #000000

and run

xrdb < ~/.Xdefaults  && xscreensaver-command -restart

as advised by the XScreenSaver Manual.

For other approaches, I’m only aware of this one: xscreensaver lock window themes. Please comment below if you know about other approaches. Thank you!

PS: The screensaver in the background is Fireflies. For a Debian package, you can run make deb from a Git clone.

December 16, 2015
Anthony Basile a.k.a. blueness (homepage, bugs)

If you’ve read some of my previous posts, you know that I’ve been maintaining this hardened-Gentoo-derived, uClibc-based, micro Linux distribution called “tor-ramdisk”. It’s a small image, less than 7 MB in size, whose only purpose is to host a Tor relay or exit node in a ramdisk environment which vanishes when the system is shut down, leaving no traces behind. My student Melissa Carlson and I started the project in 2008 and I’ve been pushing out an update with every major release of Tor. Over the years, I automated the build system and put the scripts on a git repository at gitweb.torproject.org that Roger Dingledine gave me. If you want to build your own image, all you need do is run the scripts in the chroot of a hardened amd64 or i686 uClibc stage3 image which you can get off the mirrors.

Recently, upstream switched the tor-0.2.7.x branch to depend on openssl’s elliptic curves code. Unfortunately, this code is patented and can’t be distributed in tor-ramdisk. If you read the tor ebuilds, you’ll see that the 0.2.7.x version has to DEPEND on dev-libs/openssl[-bindist] while 0.2.6 only depends on dev-libs/openssl. Luckily, there’s libressl, which aims to be a free drop-in replacement for openssl. It’s in the tree now and almost ready to go — I say almost because we’re still in the transition phase. Libressl in Gentoo has been primarily hasufel‘s project, but I’ve been helping out here and there. I got a few upstream commits to make sure it works in both uClibc and musl as well as with our hardened compiler.

So, this is the first tor-ramdisk release with libressl.  Of course I tested both the amd64 and i686 images and they work as expected, but libressl is new-ish so I’m still not sure what to expect.  Hopefully we’ll get better security than we did from openssl, but I’ve got my eyes out for any CVE’s.  At least we’re getting patent free code.

Tor-ramdisk has been one of those mature projects where all I have to do is tweak a few version numbers in the scripts, press go, and out pops a new image. But I do have some plans for improvements: 1) I need to clean up the text user interface a bit. It’s menu driven and I need to consolidate some of the menu items, like the ‘resource’ menu and the ‘entropy’ menus. 2) I want to enable IPv6 support. For the longest time Tor was IPv4 only, but for a couple of years now it has supported IPv6. 3) Tor can run in one of several modes. It’s ideal as a relay or exit node, but can be configured as a client or hidden service for processes running on a different box. However, tor-ramdisk can’t be used as a bridge. I want to see if I can add bridge support. 4) Finally, I’m thinking of switching from uClibc to musl and building static PIE executables. This would make a tighter image than the current one.

If you’re interested in running tor-ramdisk, here are some links:

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download: http://opensource.dyc.edu/tor-ramdisk-downloads
ChangeLog: http://opensource.dyc.edu/tor-ramdisk-changelog

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download: http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads
ChangeLog: same as i686.

November 29, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
lxc, ipv6 and iproute2 (November 29, 2015, 21:19 UTC)

Not so recently I got a soyoustart system, since it comes with an option to install Gentoo out of the box.

The machine comes with a single ipv4 and a /64 amount of ipv6 addresses.

LXC

I want to use the box to host some of my flask applications (plaid mainly), keep some continuous integration instances for libav, and run some other experiments with compilers and libraries (such as musl, cparser and others).

Since Diego had been telling me about lxc, I picked it. It is simple, requires little effort, and in Gentoo we have at least some documentation.

Setting up

I followed the documentation provided and it worked quite well up to a point. The btrfs integration works as explained and creating new Gentoo instances just worked, but setting up the network… required some effort.

Network woes

I have just a single ipv4 and some ipv6, so why not leverage them? I decided to partition my /64 and use some of it, configured the bridge to take ::::1::1 and set up the container configuration like this:

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.4/16
lxc.network.ipv4.gateway = auto
lxc.network.ipv6 = ::::1::4/80
lxc.network.ipv6.gateway = auto
lxc.network.hwaddr = 02:00:ee:cb:8a:04

But the route to my container wasn’t advertised.

Having no idea why, I just kept poking around sysctl and iproute2 until I got:

  • sysctl.conf:
  net.ipv6.conf.all.forwarding = 1
  net.ipv6.conf.eth0.proxy_ndp = 1

And

ip -6 neigh add proxy ::::1::4 dev eth0

In my container runner script.
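
Since proxy-NDP entries are not persistent, the runner script can simply add one per container address; a sketch (the second address is made up):

for addr in ::::1::4 ::::1::5; do
	ip -6 neigh add proxy "${addr}" dev eth0
done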

I know that at least some other people have had this problem, hence this mini-post.

November 28, 2015
Erik Mackdanz a.k.a. stasibear (homepage, bugs)
Roll a custom bare metal installer MacGyver-style (November 28, 2015, 00:00 UTC)

Unsolicited Context (you can skip this)

At my day job I develop a big data platform that runs primarily on OpenStack. Often we're on the hook to install the OpenStack layer before we can install our platform on it.

While OpenStack is a remarkable platform, installing it is a big deal. The OpenStack project publishes very good manual installation docs, but it's still measured in days, not hours, even after they've published packages for a couple of the most mainstream distros.

So, third-party installers have stepped in. PackStack is community-driven but is meant only for toy installs. Red Hat claims to have a serious Director-based offering for enterprise, but after scrapping two previous offerings the longevity comes into question. RackSpace Private Cloud worked nicely for us when it used Chef and targeted both Ubuntu and EL distros, but they scrapped it for a Puppet-based, Ubuntu-only offering (we're a Red Hat shop, it is what it is). Several other contenders exist, each with some problem: it supports only certain distros, it fails to keep up with the rapid OpenStack release cycle, it limits the features offered, and it may not be around in a few months.

This was fun for the first year. Now, after two years of the same questions and discussions with no real progress, my heart sinks with sorrow and fatigue when I hear yet another question about "How are we going to install OpenStack?"

I commented to a co-worker, "You can do 98% of the installation with Kickstart scripts". Enterprises don't want your scripts, though; they must use a product. There's a time for products, which are presumably well-documented and well-supported. There's also a time when you could've scripted the whole thing in the time it took the organization to overcome paralysis in making a decision about a product.

An exercise

This post is an exercise, borne from that last condition, which tries to answer: what would it look like if I scripted my own bare-metal installer?

And what if I used my preferred distro, Gentoo, for installing? Packages exist which I've been curious to try, and which are maintained by devs who are officially part of both the OpenStack and Gentoo projects.

And what if I installed Gentoo on the targets? I can use any filesystem, any kernel flags, any anything.

The trend is toward "bare-metal installers", which work like this:

  • A small provisioner host runs an installation app which includes a PXE server.
  • The target machines (which may be physical or virtual) boot from the network to an in-memory "discovery" image.
  • The discovery image sniffs out details about the target host's hardware (RAM, disk, NICs) and reports back to the provisioner app.
  • The user then maps roles e.g. host A will be an OpenStack neutron host with its disk configured like so and its NICs on these networks.
  • It is made so: the target reboots into something like Kickstart which performs the installation and configuration.
  • The provisioner instance is ideally left running, so that machines can later be added or repurposed with ease.

Note that nothing there is specific to OpenStack. This pattern can install any distributed system, even a hybrid physical-virtualized environment, as long as the system shares a broadcast domain.

I want to keep those nice properties, but strip out some of the product-ness. First of all, my experience has been that I already know what the hardware is long before I'm setting up a provisioner instance. Let's throw out the discovery phase, because we should already have a plan for what networks are required and which hosts are suited for which tasks based on things like disk capacity.

A design

The simplest thing that could work is to have a provisioner host (I'm using a container) which serves a kernel over PXE. That kernel has an initramfs embedded in it (did you know you could do that?) The ramdisk can install everything, which is how installer images work all around the world. This is easier to do than you thought.
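
The embedding itself is a single kernel option: CONFIG_INITRAMFS_SOURCE accepts a directory, a cpio archive or a description file. The path here is an assumption:

CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs"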

Then, the ramdisk can switch, based on a host's MAC address, to have pre-determined (baked-in) knowledge of whether a host should be, for example, a controller node instead of a compute node. We could even bake binary packages right into the ramdisk, with our only real constraint being that the ramdisk must, er, fit into RAM.
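
That switch can be as dumb as a case statement early in the ramdisk's init script; a sketch with made-up MAC addresses:

mac=$(cat /sys/class/net/eth0/address)
case "${mac}" in
	52:54:00:aa:bb:01) role=controller ;;
	52:54:00:aa:bb:02) role=compute ;;
	*) role=unknown ;;
esac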

If we don't want to sweat the RAM limitation, we can have our ramdisk load its payload (configuration, packages) from the provisioner node. This is what Kickstart does, but with a custom DSL (domain-specific language) that we don't need (and anyway Gentoo doesn't have one).
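
In plain shell, that payload step shrinks to a couple of lines; a sketch, with the provisioner URL and target mount point assumed:

wget -O /tmp/payload.tar "http://provisioner/payloads/${role}.tar"
tar -xf /tmp/payload.tar -C /mnt/gentoo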

By the way, the venerable System Rescue CD provides a pretty good system like this already, which can install Gentoo, using a feature called autorun scripts. Who needs Kickstart?

Hands Dirty