
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Christopher Harvey
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jauhien Piatlicki
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Victor Ostorga
. Vikraman Choudhury
. Zack Medico

Last updated:
November 24, 2014, 02:11 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

November 20, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
RIP ns2 (November 20, 2014, 12:39 UTC)

Today we shut down our oldest running Gentoo Linux production server: ns2.

Obviously this machine was happily spreading our DNS records around the world, but what's remarkable about it is that it had been doing so for 2717 straight days!

$ uptime
 13:00:45 up 2717 days,  2:20,  1 user,  load average: 0.13, 0.04, 0.01

As I mentioned when we shut down stabber, our beloved firewall, our company has been running Gentoo Linux servers in production for a long time now, and we're always a bit sad when we have to power off one of them.

As usual, I want to take this chance to thank everyone contributing to Gentoo Linux! Without our collective work, none of this would have been possible.

November 19, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Request Tracker (November 19, 2014, 15:52 UTC)

So, I’ve kind of taken over Request Tracker (bestpractical.com).

Initially I took it because I'm interested in using RT at work to track customer service emails. All I did at the time was bump the version and remove old, insecure versions from the tree.

However, as I've finally gotten around to getting it set up, I've discovered there were a lot of issues that had gone unreported.

The intention is for RT to run out of its virtual host root, like /var/www/localhost/rt-4.2.9/bin/rt, configured by /var/www/localhost/rt-4.2.9/etc/RT_SiteConfig.pm, and for it to reference any supplementary packages with ${VHOST_ROOT} as its root. However, because of a broken install process and a broken hook script used by webapp-config, that didn't happen. Further, the rt_apache.conf we included was several years out of date, which in itself isn't a bad thing, except that it was wrong for RT 4+.

I spent much longer than I care to admit trying to figure out why my settings weren't sticking when I edited RT_SiteConfig.pm. I was trying to run RT under its own path rather than on a subdomain, but Set($WebPath, '/rt') wasn't doing what it should.

It also complained about not being able to write to /usr/share/webapps/rt/rt-4.2.9/data/mason_data/obj, which clearly wasn’t right.

Once I tried moving RT_SiteConfig.pm to /usr/share/webapps/rt/rt-4.2.9/etc/, and chmod and chown on ../data/mason_data/obj, everything worked as it should.

Knowing this was wrong and that it would prevent anyone using our package from having multiple installations, a.k.a. vhosts, I set out to fix it.

It was a descent into madness. Things I expected to happen did not. Things that shouldn’t have been a problem were. Much of the trouble I had circled around webapp-config and webapp.eclass.

But, I prevailed, and now you can really have multiple RT installations side-by-side. Also, I’ve added an article (wiki.gentoo.org) to our wiki with updated instructions on getting RT up and running.

Caveat: I didn’t use FastCGI, so that part may be wrong still, but mod_perl is good to go.

November 16, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

After several weeks of downtime (primarily my own fault), rsync1.de.gentoo.org is now back online.
As before, the complete repository is served from a RAM disk, so the mirror is relatively fast.

# rsync --list-only rsync://rsync1.de.gentoo.org/gentoo-portage/
drwxr-xr-x          3,480 2014/11/16 16:01:19 .
-rw-r--r--            121 2014/01/01 01:31:01 header.txt
-rw-r--r--          3,658 2014/08/18 21:01:02 skel.ChangeLog
-rw-r--r--          8,119 2014/08/30 12:01:02 skel.ebuild
-rw-r--r--          1,231 2014/08/18 21:01:02 skel.metadata.xml
drwxr-xr-x            860 2014/11/16 16:01:02 app-accessibility
drwxr-xr-x          4,800 2014/11/16 16:01:03 app-admin
drwxr-xr-x            100 2014/11/16 16:01:03 app-antivirus
[..]
drwxr-xr-x          1,240 2014/11/16 16:01:21 x11-wm
drwxr-xr-x            340 2014/11/16 16:01:21 xfce-base
drwxr-xr-x          1,340 2014/11/16 16:01:21 xfce-extra

The hardware underneath is sponsored by Manitu.

Introducing Gambit to Gentoo (November 16, 2014, 14:50 UTC)

Hi!

I would like to introduce you to Gambit, a rather young Qt-based chess UI with excellent usability and its very own engine.

It has been living in the betagarden overlay while maturing and just hit the Gentoo main repository.
Install through

emerge -av games-board/gambit

as usual.

November 15, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
RDepending on Perl itself (November 15, 2014, 17:36 UTC)

Writing correct dependency specifications is an art in itself. So, here's a small guide for Gentoo developers how to specify runtime dependencies on dev-lang/perl. First, the general rule.
Check the following two things: 1) does your package link anywhere to libperl.so, and 2) does your package install any Perl modules into Perl's vendor directory (e.g., /usr/lib64/perl5/vendor_perl/5.20.1/)? If at least one of these two questions is answered with yes, you need a slot operator in your dependency string, i.e. "dev-lang/perl:=". Obviously, your ebuild will have to be EAPI=5 for that. If neither 1) nor 2) is the case, "dev-lang/perl" is enough.
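As a minimal sketch (with no particular package in mind, just to show the shape of the dependency strings), an ebuild falling under case 1) or 2) would carry something like:

EAPI=5

RDEPEND="dev-lang/perl:="
DEPEND="${RDEPEND}"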
Now, with eclasses. If you use perl-module.eclass or perl-app.eclass, two variables control automatic adding of dependencies. GENTOO_DEPEND_ON_PERL sets whether the eclass automatically adds a dependency on Perl, and defaults to yes in both cases. GENTOO_DEPEND_ON_PERL_SUBSLOT controls whether the slot operator ":=" is used. It defaults to yes in perl-module.eclass and to no in perl-app.eclass. (This is actually the only difference between the eclasses.) The idea behind that is that a Perl module package always installs modules into vendor_dir, while an application can have its own separate installation path for its modules or not install any modules at all.
In many cases, if a package installs Perl modules you'll need Perl at build time as well since the module build system is written in Perl. If a package links to Perl, that is obviously needed at build time too.

So, summarizing:
  • no eclass, 1) or 2) true: "dev-lang/perl:=" needed in RDEPEND and most likely also DEPEND
  • no eclass, 1) and 2) false: "dev-lang/perl" needed in RDEPEND, maybe also in DEPEND
  • perl-module.eclass, 1) or 2) true: no need to do anything
  • perl-module.eclass, 1) and 2) false: GENTOO_DEPEND_ON_PERL_SUBSLOT=no possible before inherit
  • perl-app.eclass, 1) or 2) true: GENTOO_DEPEND_ON_PERL_SUBSLOT=yes needed before inherit
  • perl-app.eclass, 1) and 2) false: no need to do anything
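For example (again only a sketch, without a concrete package in mind), an application ebuild based on perl-app.eclass that links against libperl.so would set the variable before the inherit line:

EAPI=5

GENTOO_DEPEND_ON_PERL_SUBSLOT=yes
inherit perl-app

With perl-module.eclass the same situation needs nothing at all, since the subslot dependency is already the default there.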

November 14, 2014
Gentoo Monthly Newsletter: October 2014 (November 14, 2014, 19:30 UTC)

Gentoo News

Council News

The council addressed a number of issues this month. The change with the biggest long-term significance was clearing the way to proceed with the git migration once infra is ready. This included removing changelogs from future git commits, removing cvs headers, and simplifying our news repository format. The infra and git migration projects will coordinate the actual migration hopefully in the not-so-distant future.

The council also endorsed getting rid of herds, but acknowledged that there are some details that need to be worked out before pulling the plug. The bikeshedding was moved back to the lists so all could share in the fun.

There are still some concerns with the games team. The council decided to give the team more time to sort things out internally before interfering. It was acknowledged that most of the serious issues were already resolved by the decision to let maintainers choose whether or not their packages are part of the games herd. Some QA concerns with some games were brought up, but it was felt that this is best dealt with on a per-package basis with QA/treecleaners and that games shouldn’t receive any special treatment one way or the other.

Other decisions include removing einstall from EAPI6, and approving GLEP64 (VDB caching / API). There was also a status update on multilib (nearly done), and migrating project pages to the wiki (sadly we can’t just get rid of unmigrated projects like the x86 and amd64 arches).

PYTHON_SINGLE_TARGETS updates

(by Ian Stakenvicius)

On November 7th, packages inheriting python-single-r1 got a whole lot easier for end-users to manage.

It used to be that any package using python-single-r1 required a python_single_target_* USE flag to be set to choose its Python implementation, even if the package was only compatible with one Python in the first place. Since November 7th, if a package is only compatible with a single supported Python version (say, python-2.7), then it no longer uses python_single_target_* USE flags and relies instead on that implementation being enabled in PYTHON_TARGETS.

The most visible change from this is package rebuilds due to the removal of a lot of PYTHON_SINGLE_TARGET flags, especially on python-2.7-only packages. However, the removal of these flags also means that setting PYTHON_SINGLE_TARGET to something other than python2_7 no longer requires all of those packages to be listed in package.use.
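To illustrate (with a hypothetical package name, purely as a sketch), an entry like the following in /etc/portage/package.use used to be necessary for a python-2.7-only package whenever PYTHON_SINGLE_TARGET was set to something else, and can now simply be dropped:

# /etc/portage/package.use
# no longer needed for a package that only supports python2_7
dev-python/examplepkg python_single_target_python2_7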

Portage users are also likely to notice that exceptions to PYTHON_SINGLE_TARGET that would require package.use changes are now also calculated properly by --autounmask, instead of solely being reported as an illegible REQUIRED_USE error.

Gentoo Developer Moves

Summary

Gentoo is made up of 243 active developers, of which 39 are currently away.
Gentoo has recruited a total of 804 developers since its inception.

Changes

  • Yixun Lan joined the electronics team

Additions

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 163
Packages 17876
Ebuilds 38009
Architecture Stable Testing Total % of Packages
alpha 3663 592 4255 23.80%
amd64 10926 6462 17388 97.27%
amd64-fbsd 0 1580 1580 8.84%
arm 2709 1812 4521 25.29%
arm64 565 46 611 3.42%
hppa 3103 502 3605 20.17%
ia64 3218 629 3847 21.52%
m68k 624 99 723 4.04%
mips 0 2423 2423 13.55%
ppc 6869 2479 9348 52.29%
ppc64 4381 988 5369 30.03%
s390 1445 376 1821 10.19%
sh 1625 461 2086 11.67%
sparc 4160 921 5081 28.42%
sparc-fbsd 0 319 319 1.78%
x86 11576 5402 16978 94.98%
x86-fbsd 0 3245 3245 18.15%

gmn-portage-stats-2014-11

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201410-02 perl-core/Locale-Maketext (and 1 more) Perl, Perl Locale-Maketext module: Multiple vulnerabilities 446376
201410-01 app-shells/bash Bash: Multiple vulnerabilities 523742

Package Removals/Additions

Removals

Package Developer Date
media-sound/cowbell k_f 06 Oct 2014
x11-plugins/msn-pecan voyageur 08 Oct 2014
x11-plugins/pidgin-facebookchat voyageur 08 Oct 2014
dev-perl/IO-Socket-IP dilfridge 11 Oct 2014
dev-perl/Template-Latex dilfridge 13 Oct 2014
app-emulation/emul-linux-x86-compat ulm 14 Oct 2014
app-doc/djbdns-man mjo 15 Oct 2014
app-text/unix2dos mjo 18 Oct 2014
app-text/regex idella4 29 Oct 2014
games-board/chessdb mr_bones_ 30 Oct 2014
dev-ml/async_core aballier 30 Oct 2014

Additions

Package Developer Date
net-analyzer/openvas-tools jlec 01 Oct 2014
net-p2p/bitcoin-cli blueness 02 Oct 2014
app-benchmarks/wrk vikraman 02 Oct 2014
dev-perl/Net-IPv4Addr mjo 04 Oct 2014
dev-ruby/compass-core graaff 05 Oct 2014
dev-ruby/compass-import-once graaff 05 Oct 2014
media-sound/apulse jauhien 05 Oct 2014
dev-perl/Test-Warnings zlogene 05 Oct 2014
x11-misc/rofi jer 06 Oct 2014
dev-python/parse alunduil 06 Oct 2014
dev-python/clint alunduil 07 Oct 2014
app-admin/lastpass robbat2 08 Oct 2014
dev-perl/XML-Entities dilfridge 09 Oct 2014
dev-python/Numdifftools jlec 10 Oct 2014
app-text/krop dilfridge 10 Oct 2014
net-voip/vidyodesktop prometheanfire 10 Oct 2014
kde-misc/kcm-touchpad mrueg 11 Oct 2014
dev-perl/Unicode-Normalize dilfridge 11 Oct 2014
dev-perl/Net-IDN-Encode dilfridge 11 Oct 2014
dev-perl/tkispell dilfridge 11 Oct 2014
perl-core/IO-Socket-IP dilfridge 11 Oct 2014
virtual/perl-IO-Socket-IP dilfridge 11 Oct 2014
dev-python/pyhamcrest alunduil 11 Oct 2014
dev-python/enum34 alunduil 11 Oct 2014
dev-db/postgresql titanofold 11 Oct 2014
dev-python/doublex alunduil 11 Oct 2014
dev-python/pycallgraph alunduil 12 Oct 2014
dev-python/python-termstyle alunduil 12 Oct 2014
dev-python/rednose alunduil 12 Oct 2014
dev-python/PyQt5 pesa 13 Oct 2014
net-analyzer/ipguard jer 13 Oct 2014
dev-perl/Template-Plugin-Latex dilfridge 13 Oct 2014
dev-perl/LaTeX-Driver dilfridge 14 Oct 2014
dev-perl/Pod-LaTeX dilfridge 14 Oct 2014
dev-perl/LaTeX-Encode dilfridge 14 Oct 2014
dev-perl/MooseX-FollowPBP dilfridge 14 Oct 2014
dev-perl/LaTeX-Table dilfridge 14 Oct 2014
virtual/perl-Term-ReadLine dilfridge 14 Oct 2014
dev-python/python-etcd zmedico 15 Oct 2014
dev-db/etcd zmedico 15 Oct 2014
dev-libs/extra-cmake-modules kensington 15 Oct 2014
kde-frameworks/kglobalaccel kensington 15 Oct 2014
kde-frameworks/kwallet kensington 15 Oct 2014
kde-frameworks/kjobwidgets kensington 15 Oct 2014
kde-frameworks/kxmlgui kensington 15 Oct 2014
kde-frameworks/plasma kensington 15 Oct 2014
kde-frameworks/kcrash kensington 15 Oct 2014
kde-frameworks/kdesignerplugin kensington 15 Oct 2014
kde-frameworks/frameworkintegration kensington 15 Oct 2014
kde-frameworks/kf-env kensington 15 Oct 2014
kde-frameworks/kdesu kensington 15 Oct 2014
kde-frameworks/ki18n kensington 15 Oct 2014
kde-frameworks/kitemmodels kensington 15 Oct 2014
kde-frameworks/kguiaddons kensington 15 Oct 2014
kde-frameworks/knewstuff kensington 15 Oct 2014
kde-frameworks/kcoreaddons kensington 15 Oct 2014
kde-frameworks/kapidox kensington 15 Oct 2014
kde-frameworks/kactivities kensington 15 Oct 2014
kde-frameworks/kdelibs4support kensington 15 Oct 2014
kde-frameworks/kcmutils kensington 15 Oct 2014
kde-frameworks/sonnet kensington 15 Oct 2014
kde-frameworks/kconfig kensington 15 Oct 2014
kde-frameworks/kidletime kensington 15 Oct 2014
kde-frameworks/kunitconversion kensington 15 Oct 2014
kde-frameworks/kio kensington 15 Oct 2014
kde-frameworks/kdbusaddons kensington 15 Oct 2014
kde-frameworks/kconfigwidgets kensington 15 Oct 2014
kde-frameworks/kauth kensington 15 Oct 2014
kde-frameworks/kcompletion kensington 15 Oct 2014
kde-frameworks/kcodecs kensington 15 Oct 2014
kde-frameworks/kpty kensington 15 Oct 2014
kde-frameworks/solid kensington 15 Oct 2014
kde-frameworks/kplotting kensington 15 Oct 2014
kde-frameworks/kbookmarks kensington 15 Oct 2014
kde-frameworks/knotifyconfig kensington 15 Oct 2014
kde-frameworks/kemoticons kensington 15 Oct 2014
kde-frameworks/kinit kensington 15 Oct 2014
kde-frameworks/kross kensington 15 Oct 2014
kde-frameworks/kwidgetsaddons kensington 15 Oct 2014
kde-frameworks/kimageformats kensington 15 Oct 2014
kde-frameworks/kdewebkit kensington 15 Oct 2014
kde-frameworks/kdeclarative kensington 15 Oct 2014
kde-frameworks/attica kensington 15 Oct 2014
kde-frameworks/kservice kensington 15 Oct 2014
kde-frameworks/kiconthemes kensington 15 Oct 2014
kde-frameworks/kdnssd kensington 15 Oct 2014
kde-frameworks/kmediaplayer kensington 15 Oct 2014
kde-frameworks/knotifications kensington 15 Oct 2014
kde-frameworks/kded kensington 15 Oct 2014
kde-frameworks/kjsembed kensington 15 Oct 2014
kde-frameworks/kjs kensington 15 Oct 2014
kde-frameworks/ktexteditor kensington 15 Oct 2014
kde-frameworks/kdoctools kensington 15 Oct 2014
kde-frameworks/krunner kensington 15 Oct 2014
kde-frameworks/kitemviews kensington 15 Oct 2014
kde-frameworks/karchive kensington 15 Oct 2014
kde-frameworks/khtml kensington 15 Oct 2014
kde-frameworks/kwindowsystem kensington 15 Oct 2014
kde-frameworks/kparts kensington 15 Oct 2014
kde-frameworks/ktextwidgets kensington 15 Oct 2014
kde-frameworks/threadweaver kensington 15 Oct 2014
kde-base/oxygen-fonts kensington 15 Oct 2014
dev-libs/sni-qt mrueg 15 Oct 2014
dev-db/etcdctl zmedico 15 Oct 2014
dev-db/go-etcd zmedico 16 Oct 2014
sys-fs/etcd-fs zmedico 16 Oct 2014
dev-python/mamba alunduil 16 Oct 2014
virtual/podofo-build zmedico 16 Oct 2014
dev-games/goatee hasufell 16 Oct 2014
games-board/goatee-gtk hasufell 16 Oct 2014
app-crypt/etcd-ca zmedico 16 Oct 2014
dev-python/expects alunduil 17 Oct 2014
app-emacs/rust-mode jauhien 18 Oct 2014
app-vim/rust-mode jauhien 18 Oct 2014
app-shells/rust-zshcomp jauhien 18 Oct 2014
dev-lang/rust-bin jauhien 18 Oct 2014
dev-python/args alunduil 18 Oct 2014
sys-process/xjobs mjo 19 Oct 2014
dev-python/parse-type alunduil 19 Oct 2014
dev-perl/Devel-CheckCompiler dilfridge 19 Oct 2014
dev-perl/Cwd-Guard dilfridge 19 Oct 2014
dev-perl/Module-Build-XSUtil dilfridge 19 Oct 2014
dev-perl/File-Find-Rule-Perl dilfridge 19 Oct 2014
dev-perl/PPI-PowerToys dilfridge 19 Oct 2014
dev-util/jenkins-bin mrueg 20 Oct 2014
dev-python/sphinxcontrib-cheeseshop alunduil 21 Oct 2014
dev-perl/BZ-Client dilfridge 21 Oct 2014
dev-perl/Data-Serializer dilfridge 21 Oct 2014
dev-perl/Math-NumberCruncher dilfridge 21 Oct 2014
dev-python/behave alunduil 22 Oct 2014
dev-python/django-opensearch ercpe 22 Oct 2014
app-admin/lastpass-cli zx2c4 22 Oct 2014
dev-python/simpleeval cedk 22 Oct 2014
net-misc/xrdp mgorny 23 Oct 2014
dev-libs/collada-dom aballier 23 Oct 2014
sci-libs/libccd aballier 23 Oct 2014
dev-ml/ocaml-re aballier 24 Oct 2014
dev-ml/cudf aballier 24 Oct 2014
dev-perl/File-ShareDir-Install dilfridge 24 Oct 2014
dev-perl/POSIX-strftime-Compiler dilfridge 24 Oct 2014
dev-perl/Apache-LogFormat-Compiler dilfridge 24 Oct 2014
dev-python/doublex-expects alunduil 25 Oct 2014
app-crypt/libu2f-host flameeyes 25 Oct 2014
app-crypt/libykneomgr flameeyes 25 Oct 2014
app-crypt/yubikey-neo-manager flameeyes 25 Oct 2014
dev-perl/Redis dilfridge 25 Oct 2014
dev-perl/Types-Serialiser dilfridge 25 Oct 2014
net-analyzer/ospd jlec 26 Oct 2014
dev-perl/Cache-FastMmap dilfridge 26 Oct 2014
dev-python/dockerpty alunduil 27 Oct 2014
app-text/restview radhermit 27 Oct 2014
dev-ml/parmap aballier 27 Oct 2014
dev-ml/camlbz2 aballier 27 Oct 2014
net-misc/x11rdp mgorny 27 Oct 2014
app-emulation/fig alunduil 27 Oct 2014
dev-perl/Algorithm-ClusterPoints dilfridge 27 Oct 2014
dev-ml/dose3 aballier 28 Oct 2014
x11-libs/libQGLViewer aballier 28 Oct 2014
dev-ml/cmdliner aballier 29 Oct 2014
dev-ml/uutf aballier 29 Oct 2014
dev-ml/jsonm aballier 29 Oct 2014
dev-ml/opam aballier 29 Oct 2014
sci-libs/octomap aballier 29 Oct 2014
app-text/regex idella4 29 Oct 2014
dev-python/regex idella4 29 Oct 2014
games-rpg/soltys calchan 30 Oct 2014
sci-libs/orocos_kdl aballier 30 Oct 2014
dev-cpp/metslib aballier 31 Oct 2014
media-libs/libsixel hattya 31 Oct 2014
app-crypt/libscrypt blueness 31 Oct 2014
sec-policy/selinux-android swift 31 Oct 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 October 2014 and 01 November 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-11

Bug Activity Number
New 1881
Closed 1153
Not fixed 171
Duplicates 168
Total 6198
Blocker 4
Critical 18
Major 65

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period.

Rank Team/Developer Bug Count
1 Gentoo Linux Gnome Desktop Team 50
2 Gentoo Perl team 43
3 Gentoo Games 42
4 Gentoo KDE team 39
5 Gentoo's Team for Core System packages 39
6 Netmon Herd 32
7 Python Gentoo Team 27
8 PHP Bugs 25
9 Gentoo Toolchain Maintainers 21
10 Others 834

gmn-closed-2014-11

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 107
2 Gentoo Linux Gnome Desktop Team 69
3 Gentoo's Team for Core System packages 65
4 Gentoo Security 58
5 Gentoo KDE team 53
6 Python Gentoo Team 49
7 Gentoo Games 47
8 Gentoo Perl team 44
9 Default Assignee for New Packages 43
10 Others 1345

gmn-opened-2014-11

 

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

November 09, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
PyPy is back, and for real this time! (November 09, 2014, 23:17 UTC)

As you may recall, I was looking for a dedicated PyPy maintainer for quite some time. Sadly, all the people who helped (and who I’d like to thank a lot) ended up lacking time soon enough. So finally I’ve decided to look into the hacks reducing build-time memory use and take care of the necessary ebuild and packaging work myself.

So first of all, you may notice that the new PyPy (source-code) ebuilds have a new USE flag called low-memory. When this flag is enabled, the translation process is done using PyPy with some memory-reducing adjustments suggested by upstream. The net result is that it finally is possible to build PyPy with 3.5G RAM (on amd64) and 1G of swap (the latter only being used once the compiler is spawned and the memory used during translation is no longer needed), at the cost of a slightly increased build time.

As noted above, the low-memory option requires using PyPy to perform the translation. Since I had to enforce that anyway, I went a bit further and made the ebuild default to using PyPy whenever it is available. In fact, even for a first PyPy build you are recommended to install dev-python/pypy-bin first and let the ebuild use it to bootstrap your own PyPy.
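A minimal sketch of that bootstrap path, using the flag and package names mentioned above:

# /etc/portage/package.use
dev-python/pypy low-memory

emerge -1v dev-python/pypy-bin   # provides a PyPy to run the translation with
emerge -av dev-python/pypy       # source build, bootstrapped via pypy-bin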

Next, I have cleaned up the ebuilds a bit and enforced more consistency. Changing maintainers and binary package builders have resulted in the ebuilds being a bit inconsistent. Now you can finally expect pypy-bin to install exactly the same set of files as source-built pypy.

I have also cleaned up the remaining libpypy-c symlinks. The library is not packaged upstream currently, and therefore has no proper public name. Using libpypy-c.so is just wrong, and packages can't reliably refer to it. I'd rather wait to install it until there is some precedent for its naming. The shared library is still built, but it's kept inside the PyPy home directory.

All those changes were followed by a proper version bump to 2.4.0. While you still may have issues upgrading PyPy, Zac already committed a patch to Portage and the next release should be able to handle PyPy upgrades seamlessly. I have also built all the supported binary package variants, so you can choose those if you don’t want to spend time building PyPy.

Finally, I have added the ebuilds for PyPy 3. They are a little bit more complex than regular PyPy, especially because the build process and some of the internal modules still require Python 2. Sadly, PyPy 3 is based on Python 3.2 with small backports, so I don’t expect package compatibility much greater than CPython 3.2 had.

If you want to try building some packages with PyPy 3, you can use the convenience PYTHON_COMPAT_OVERRIDE hack:

PYTHON_COMPAT_OVERRIDE='pypy3' emerge -1v mypackage

Please note that it is only a hack, and as such it doesn’t set proper USE flags (PYTHON_TARGETS are simply ignored) or enforce dependencies.

If someone wants to help PyPy on Gentoo a bit, there are still unsolved issues needing a lot of specialist work. More specifically:

  1. #465546; PyPy needs to be modified to support the /usr prefix properly (right now, it requires the prefix to be /usr/lib*/pypy, which breaks distutils packages assuming otherwise).
  2. #525940; non-SSE2 JIT does not build.
  3. #429372; we lack proper sandbox install support.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
gentooJoin 2004/04/11 (November 09, 2014, 11:06 UTC)

How time flies!
gentooJoin: 2004/04/11

Now I feel ooold

November 05, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Just a simple webapp, they said ... (November 05, 2014, 08:38 UTC)

The complexity of modern software is quite insanely insane. I just realized ...
Writing a small webapp with flask, I've had to deal with the following technologies/languages:

  • System package manager, in this case portage
  • SQL DBs, both SQLite (local testing) and PostgreSQL (production)
  • python/flask, the core of this webapp
  • jinja2, the template language usually used with it
  • HTML, because the templates don't just appear magically
  • CSS (mostly hidden in Bootstrap) to make it look sane
  • JavaScript, because dynamic shizzle
  • (flask-)sqlalchemy, ORMs are easier than writing SQL by hand when you're in a hurry
  • alembic, for DB migrations and updates
  • git, because version control
So that's about a dozen things that each would take years to master. And for a 'small' project there's not much time to learn them deeply, so we staple together what we can, learning as we go along ...

And there's an insane amount of context switching going on, you go from mangling CSS to rewriting SQL in the span of a few minutes. It's an impressive polyglot marathon, but how is this supposed to generate sustainable and high-quality results?

And then I go home in the evening and play around with OpenCL and such things. Learning never ends - but how are we going to build things that last for more than 6 months? Too many moving parts, too much change, and never enough time to really understand what we're doing :)

November 02, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

I just finished updating 102 packages. The change? Removing the following from the ebuilds:

DEPEND="selinux? ( sec-policy/selinux-${packagename} )"

In the past, we needed this construction in both DEPEND and RDEPEND. Recently however, the SELinux eclass got updated with some logic to relabel files after the policy package is deployed. As a result, the DEPEND variable no longer needs to refer to the SELinux policy package.
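In other words, the affected ebuilds went from something like

DEPEND="selinux? ( sec-policy/selinux-${packagename} )"
RDEPEND="selinux? ( sec-policy/selinux-${packagename} )"

to just

RDEPEND="selinux? ( sec-policy/selinux-${packagename} )"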

This change also means that those moving from a regular Gentoo installation to an SELinux installation will have far fewer packages to rebuild. In the past, setting USE="selinux" (through the SELinux profiles) would rebuild all packages that have a DEPEND dependency on the SELinux policy package. No more – only packages that depend on the SELinux libraries (like libselinux) or utilities are rebuilt. The rest will just pull in the proper policy package.

October 31, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
EVE Online on Gentoo Linux (October 31, 2014, 16:56 UTC)

Good news, everyone! I’m finally rid of Windows.

A couple weeks ago my Windows installation corrupted itself on the 5 minute trip home from the community theatre. I didn't command it to go to sleep; I just unplugged it and closed the lid. Somehow, it managed to screw up its startup files, and the restore process didn't do what it was supposed to, so I was greeted with a blank screen. No errors. Just staring into the void.

I’ve been using Windows as the sole OS on this machine with Gentoo running in VirtualBox for various reasons related to minor annoyances of unsupported hardware, but as I needed a working machine sooner rather than later and the only tools I could find to solve my Windows problem appeared to be old, defunct, and/or suspicious, I downloaded an ISO of SystemRescueCd (www.sysresccd.org) and installed Gentoo in the sliver of space left on the drive.

There were only two real reasons why I was intent on keeping Windows: Netflix (netflix.com) and EVE Online (eveonline.com). I intended to get Windows up and running once the show was over at the theatre, but then I read about Netflix being supported in Linux (www.mpagano.com). That left me with just one reason to keep Windows: EVE. I turned to Wine (www.winehq.org) and discovered reports of it running EVE quite well (appdb.winehq.org). I also learned that the official Mac OS release of EVE runs on Cider (www.transgaming.com), which is based on Wine.

I had another hitch: I chose the no-multilib stage3 for that original sliver thinking I wouldn’t be running anything other than 64 bit software, and drive space was at a premium. EVE Online is 32 bit.

So I had to begin my adventure with switching to multilib. This didn’t involve me reinstalling Gentoo thanks to a handy, but unsupported and unofficial, guide (jkroon.blogs.uls.co.za) by Jaco Kroon.

As explained on Multilib System without emul-linux Packages (wiki.gentoo.org), I decided it's better to build my own 32 bit libraries. So, the next step is to mask the emulation packages:

# /etc/portage/package.mask
app-emulation/emul-linux-x86-*

Because I didn’t want to build a 32 bit variant for everything on my system, I iterated through what Portage wanted and marked several packages to build their 32 bit variant via use flags. This is what I wound up with:

# /etc/portage/package.use
app-arch/bzip2 abi_x86_32
app-emulation/wine mono abi_x86_32
dev-libs/elfutils static-libs abi_x86_32
dev-libs/expat abi_x86_32
dev-libs/glib abi_x86_32
dev-libs/gmp abi_x86_32
dev-libs/icu abi_x86_32
dev-libs/libffi abi_x86_32
dev-libs/libgcrypt abi_x86_32
dev-libs/libgpg-error abi_x86_32
dev-libs/libpthread-stubs abi_x86_32
dev-libs/libtasn1 abi_x86_32
dev-libs/libxml2 abi_x86_32
dev-libs/libxslt abi_x86_32
dev-libs/nettle abi_x86_32
dev-util/pkgconfig abi_x86_32
media-libs/alsa-lib abi_x86_32
media-libs/fontconfig abi_x86_32
media-libs/freetype abi_x86_32
media-libs/glu abi_x86_32
media-libs/libjpeg-turbo abi_x86_32
media-libs/libpng abi_x86_32
media-libs/libtxc_dxtn abi_x86_32
media-libs/mesa abi_x86_32
media-libs/openal abi_x86_32
media-sound/mpg123 abi_x86_32
net-dns/avahi abi_x86_32
net-libs/gnutls abi_x86_32
net-print/cups abi_x86_32
sys-apps/dbus abi_x86_32
sys-devel/llvm abi_x86_32
sys-fs/udev gudev abi_x86_32
sys-libs/gdbm abi_x86_32
sys-libs/ncurses abi_x86_32
sys-libs/zlib abi_x86_32
virtual/glu abi_x86_32
virtual/jpeg abi_x86_32
virtual/libffi abi_x86_32
virtual/libiconv abi_x86_32
virtual/libudev abi_x86_32
virtual/opengl abi_x86_32
virtual/pkgconfig abi_x86_32
x11-libs/libX11 abi_x86_32
x11-libs/libXau abi_x86_32
x11-libs/libXcursor abi_x86_32
x11-libs/libXdamage abi_x86_32
x11-libs/libXdmcp abi_x86_32
x11-libs/libXext abi_x86_32
x11-libs/libXfixes abi_x86_32
x11-libs/libXi abi_x86_32
x11-libs/libXinerama abi_x86_32
x11-libs/libXrandr abi_x86_32
x11-libs/libXrender abi_x86_32
x11-libs/libXxf86vm abi_x86_32
x11-libs/libdrm abi_x86_32
x11-libs/libvdpau abi_x86_32
x11-libs/libxcb abi_x86_32
x11-libs/libxshmfence abi_x86_32
x11-proto/damageproto abi_x86_32
x11-proto/dri2proto abi_x86_32
x11-proto/dri3proto abi_x86_32
x11-proto/fixesproto abi_x86_32
x11-proto/glproto abi_x86_32
x11-proto/inputproto abi_x86_32
x11-proto/kbproto abi_x86_32
x11-proto/presentproto abi_x86_32
x11-proto/randrproto abi_x86_32
x11-proto/renderproto abi_x86_32
x11-proto/xcb-proto abi_x86_32 python_targets_python3_4
x11-proto/xextproto abi_x86_32
x11-proto/xf86bigfontproto abi_x86_32
x11-proto/xf86driproto abi_x86_32
x11-proto/xf86vidmodeproto abi_x86_32
x11-proto/xineramaproto abi_x86_32
x11-proto/xproto abi_x86_32

Now emerge both Wine — the latest and greatest of course — and the questionable library so textures will be rendered:

emerge -av media-libs/libtxc_dxtn =app-emulation/wine-1.7.29

You may get some messages along the lines of:

emerge: there are no ebuilds to satisfy ">=sys-libs/zlib-1.2.8-r1".

This was a bit of a head scratcher for me. I have sys-libs/zlib-1.2.8-r1 installed. I didn’t have to accept its keyword. It’s already stable! I haven’t really looked into why, but you have to accept its keyword to press forward:

# echo '=sys-libs/zlib-1.2.8-r1' >> /etc/portage/package.accept_keywords

You’ll have to do the above several times for other packages when you try to emerge Wine. Most of the time the particular version it wants is something you already have installed. Check what you do have installed with eix or another favorite tool so you don’t downgrade anything. Once Wine is installed, run as your user:

$ winecfg

Download the EVE Online Windows installer and run it using Wine:

$ wine EVE_Online_Installer_*.exe

Once that’s done, invoke the launcher as:

$ force_s3tc_enable=true wine 'C:\Program Files (x86)\CCP\EVE\eve.exe'

force_s3tc_enable=true is needed to enable texture rendering. Without it, EVE will freeze during start up. (If you didn’t emerge media-libs/libtxc_dxtn, EVE will start, but none of the textures will load, and you’ll have a lot of black on black objects.) I didn’t have to do any of the other things I’ve found, such as disabling DirectX 11.
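A tiny launcher script (just a sketch; the file name and location are made up) saves retyping the environment variable every time:

#!/bin/sh
# ~/bin/eve (example): start EVE under Wine with S3TC texture support forced on
export force_s3tc_enable=true
exec wine 'C:\Program Files (x86)\CCP\EVE\eve.exe'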

As for my Linux setup: I have a Radeon HD6480G (SUMO/r600) in my ThinkPad Edge E525, and I’m using the open source radeon (www.x.org) drivers with graphics on high and medium anti-aliasing with Mesa and OpenGL. For the most part, I find the game play to be smooth and indistinguishable from my experience on Windows.

There are a few things that don’t work well. Psychedelic rendering artifacts appear galore when I open the in-game browser (IGB) or switch to another application, but that’s resolved without logging out of EVE by changing the graphics quality to something else. It may be related to resource caching, but I need to do more testing. I haven’t tried going into the Captain’s Quarters (other users have reported crashes entering there) as back on Windows that brings my system to a crawl, and there isn’t anything particularly interesting about going in there…yet.

Overall, I’m quite happy with the EVE/Wine experience on Gentoo. It was quite easy and there wasn’t any real troubleshooting for me to do.

If you’re a fellow Gentoo-er in EVE, drop me a line. If you want to give EVE a go, have an extra week on me.

Update: I’ve been informed by Aatos Taavi that running EVE in windowed mode works quite well. I’ve also been informed that we need to declare stable packages in package.accept_keywords because abi_x86_32 is use masked.

October 30, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have been trying my best not to comment on systemd one way or another for a while. For the most part because I don't want to have a trollfest on my blog, because moderating it is something I hate and I'm sure would be needed. On the other hand it seems like people start to bring me in the conversation now from time to time.

What I would like to point out at this point is that both extreme sides of the vision are, in my opinion, behaving childishly and being totally unprofessional. Whether it is name-calling of the people or the software, death threats, insults, satirical websites, labeling of 300 people for a handful of them, etc.

I don't think I have been as happy to have a job that allows me not to care about open source as much as I did before as in the past few weeks as things keep escalating and escalating. You guys are the worst. And again I refer to both supporters and detractors, devs of systemd, devs of eudev, Debian devs and Gentoo devs, and so on so forth.

And the reason why I say this is because you both want to bring this to extremes that I think are totally uncalled for. I don't see the world in black and white and I think I said that before. Gray is nuanced and interesting, and needs skills to navigate, so I understand it's easier to just take a stand and never revise your opinion, but the easy way is not what I care about.

Myself, I decided to migrate my non-server systems to systemd a few months ago. It works fine. I've considered migrating my servers, and I decided for the moment to wait. The reason is technical for the most part: I don't think I trust the stability promises for the moment and I don't reboot servers that often anyway.

There are good things to the systemd design. And I'm sure that very few people will really miss sysvinit as is. Most people, especially in Gentoo, have not been using sysvinit properly, but rather through OpenRC, which shares more spirit with systemd than sysv, either by coincidence or because they are just the right approach to things (declarativeness to begin with).

At the same time, I don't like Lennart's approach on this to begin with, and I don't think it's uncalled for to criticize the product based on the person in this case, as the two are tightly coupled. I don't like moderating people away from a discussion, because it just ends up making the discussion even more confrontational on the next forum you stumble across them — this is why I never blacklisted Ciaran and friends from my blog even after a group of them started pasting my face on pictures of nazi soldiers from WW2. Yes I agree that Gentoo has a good chunk of toxic supporters, I wish we got rid of them a long while ago.

At the same time, if somebody were to try to categorize me the same way as the people who decided to fork udev without even thinking of what they were doing, I would want to point out that I was reproaching them from day one for their absolutely insane (and inane) starting announcement and first few commits. And I have not been using it ever, since for the moment they seem to have made good on the promise of not making it impossible to run udev without systemd.

I don't agree with the complete direction right now, and especially with the one-size-fit-all approach (on either side!) that tries to reduce the "software biodiversity". At the same time there are a few designs that would be difficult for me to attack given that they were ideas of mine as well, at some point. Such as the runtime binary approach to hardware IDs (that Greg disagreed with at the time and then was implemented by systemd/udev), or the usage of tmpfs ACLs to allow users at the console to access devices — which was essentially my original proposal to get rid of pam_console (that played with owners instead, making it messy when having more than one user at console), when consolekit and its groups-fiddling was introduced (groups can be used for setgid, not a good idea).

So why am I posting this? Mostly to tell everybody out there that if you plan on using me for either side point to be brought home, you can forget about it. I'll probably get pissed off enough to try to prove the exact opposite, and then back again.

Neither of you is perfectly right. You both make mistakes. And you both are unprofessional. Try to grow up.

Edit: I mistyped eudev in the original article and it read euscan. Sorry Corentin, was thinking one thing and typing another.

Sven Vermeulen a.k.a. swift (homepage, bugs)

In a few moments, SELinux users who have the ~arch KEYWORDS set (either globally or for the SELinux utilities in particular) will notice that the SELinux userspace will upgrade to version 2.4 (release candidate 5 for now). This upgrade comes with a manual step that needs to be performed after the upgrade. The information is mentioned in the post-installation message of the policycoreutils package, and basically says that you need to execute:

~# /usr/libexec/selinux/semanage_migrate_store

The reason is that the SELinux utilities expect the SELinux policy module store (and the semanage related files) to be in /var/lib/selinux and no longer in /etc/selinux. Note that this does not mean that the SELinux policy itself is moved outside of that location, nor is the basic configuration file (/etc/selinux/config). It is what tools such as semanage manage that is moved outside that location.

I tried to automate the migration as part of the packages themselves, but this would require the portage_t domain to be able to move, rebuild and load policies, which it can’t (and to be honest, shouldn’t). Instead of augmenting the policy or making updates to the migration script as delivered by the upstream project, we currently decided to have the migration done manually. It is a one-time migration anyway.

If for some reason end users forget to do the migration, that does not mean that the system breaks or becomes unusable. SELinux still works, and SELinux-aware applications still work; the only thing that will fail is updating the SELinux configuration through tools like semanage or setsebool – the latter when you want to persist boolean changes.

~# semanage fcontext -l
ValueError: SELinux policy is not managed or store cannot be accessed.
~# setsebool -P allow_ptrace on
Cannot set persistent booleans without managed policy.

If you get those errors or warnings, all that is left to do is the migration. Note in the following that there is a warning about ‘else’ blocks that are no longer supported: that’s okay; as far as I know (and it was mentioned on the upstream mailing list as well as not being something to worry about) it does not have any impact.

~# /usr/libexec/selinux/semanage_migrate_store
Migrating from /etc/selinux/mcs/modules/active to /var/lib/selinux/mcs/active
Attempting to rebuild policy from /var/lib/selinux
sysnetwork: Warning: 'else' blocks in optional statements are unsupported in CIL. Dropping from output.

You can also add in -c so that the old policy module store is cleaned up. You can also rerun the command multiple times:

~# /usr/libexec/selinux/semanage_migrate_store -c
warning: Policy type mcs has already been migrated, but modules still exist in the old store. Skipping store.
Attempting to rebuild policy from /var/lib/selinux

You can manually clean up the old policy module store like so:

~# rm -rf /etc/selinux/mcs/modules

So… don’t worry – the change is small and does not break stuff. And for those wondering about CIL I’ll talk about it in one of my next posts.

October 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have already posted a howto on how to set up the YubiKey NEO and YubiKey NEO-n for U2F, and I promised I would write a bit more on the adventure to get the software packaged in Gentoo.

You have to realize at first that my relationship with Yubico has not always been straightforward. I have at least once decided against working on the Yubico set of libraries in Gentoo because I could not get hold of a device as I wanted to use it. But luckily now I was able to place an order with them (for some two thousand euros) and I have my devices.

But Yubico's code is usually quite well written, and designed to be packaged much more easily than most other device-specific middleware, so I cannot complain too much. Indeed, they split and release separately different libraries with different goals, so that you don't need to wait for enough magnitude to be pulled for them to make a new release. They also actively maintain their code in GitHub, and then push proper make dist releases on their website. They are in many ways a packager's dream company.

But let's get back to the devices themselves. The NEO and NEO-n come with three different interfaces: OTP (old-style YubiKey, just much longer keys), CCID (smartcard interface) and U2F. By default the devices are configured as OTP only, which I find a bit strange to be honest. It is also the case that at the moment you cannot enable both U2F and OTP modes, I assume because there is a conflict in how the "touch" interaction behaves: indeed, there is a touch-based interaction in the CCID mode that gets entirely disabled once you enable either U2F or OTP, but the two can't share it.

What is not obvious from the website is that to enable U2F (or CCID) modes, you need to use yubikey-neo-manager, an open-source app that can reconfigure the basics of the Yubico device. So I had to package the app for Gentoo of course, together with its dependencies, which turned out to be two libraries (okay actually three, but the third one sys-auth/ykpers was already packaged in Gentoo — and actually originally committed by me with Brant proxy-maintaining it, the world is small, sometimes). It was not too bad but there were a few things that might be worth noting down.

First of all, I had to deal with dev-libs/hidapi that allows programmatic access to raw HID USB devices: the ebuild failed for me, both because it was not depending on udev, and because it was unable to find the libusb headers — turned out to be caused by bashisms in the configure.ac file, which became obvious as I moved to dash. I have now fixed the ebuild and sent a pull request upstream.

This was the only real hard part at first, since the rest of the ebuilds, for app-crypt/libykneomgr and app-crypt/yubikey-neo-manager, were mostly straightforward — only I had to figure out how to install a Python package, as I had never done so before. It's actually fun how distutils will error out with a violation of install paths if easy_install tries to bring in a non-installed package such as nose, way before the Portage sandbox triggers.

The problems started when trying to use the programs, doubly so because I don't keep a copy of the Gentoo tree on the laptop, so I wrote the ebuilds on the headless server and then tried to run them on the actual hardware. First of all, you need to have access to the devices to be able to set them up; the libu2f-host package will install udev rules to allow the plugdev group access to the hidraw devices — but it also needed a pull request to fix them. I also added an alternative version of the rules for systemd users that does not rely on the group but rather uses the ACL support (I was surprised, I essentially suggested the same approach to replace pam_console years ago!)

Unfortunately that only works once the device is already set in U2F mode, which does not work when you're setting up the NEO for the first time, so I originally set it up using kdesu. I have since decided that the better way is to use the udev rules I posted in my howto post.

After this, I switched off OTP, and enabled U2F and CCID interfaces on the device — and I couldn't make it stick: the manager kept telling me that the CCID interface was disabled, even though the USB descriptor properly called it "Yubikey NEO U2F+CCID". It took me a while to figure out that the problem was in the app-crypt/ccid driver, and indeed the change log for the latest version points out support for specifically the U2F+CCID device.

I have updated the ebuilds afterwards, not only to depend on the right version of the CCID driver – the README for libykneomgr does tell you to install pcsc-lite but not about the CCID driver you need – but also to check for the HIDRAW kernel driver, as otherwise you won't be able to either configure or use the U2F device for non-Google domains.
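The kernel side of that check boils down to something like the following ebuild fragment (a sketch; whether the real ebuilds use linux-info.eclass in exactly this way is my assumption):

inherit linux-info

# warn at install time if the kernel was built without hidraw support
CONFIG_CHECK="~HIDRAW"

# plus a minimum-version dependency on app-crypt/ccid new enough to know the U2F+CCID descriptor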

Now there is one more part of the story that needs to be told, but in a different post: getting GnuPG to work with the OpenPGP applet on the NEO-n. It was not as straightforward as it could have been and it did lead to disappointment. It'll be a good post for next week.

October 25, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we've been waiting for the day for a while. I've been using this for a while already, and my hope is for it to be easy enough for my mother and my sister, as well as my friends, to start using it.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let's start with the hardware, as there are four different options of hardware that you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative for the same features — I have not witnessed whether it is as sturdy as the Yubico one, so I can't vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.

I got the NEO, but mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type it back from a computer – and a NEO-n to leave installed on one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I'll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. What happens, though, is that you need access to the device, which the Linux kernel by default makes accessible only to root, for good reasons.

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal hacky choice is to make all the Yubico devices accessible — the main reason being that I don't know all of the compatible USB Product IDs, as some of them are not really available to buy but come, for instance, from developer mode devices that I may or may not end up using.

If you're using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"

If you're not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"

(These rules originally did not include support for the Plug-up because I had no idea what their VID/PID pairs were; I asked Janne, who got one, so I could amend this later.) Edit: added the rules for the Plug-up device. Cute their use of f1d0 as device id.

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I'll leave it to the systemd devs to figure out how to include them in the default ruleset.

These rules will not only allow your user to access /dev/hidraw0 but also to the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium, the open-source version works as well) use the U2F devices in two different modes: one is through a built-in extension that works with Google assets, and it accesses the low-level device as /dev/bus/usb/*, the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter is the actually standardized specification and how you're supposed to use it right now. I don't know if the former workflow is going to be deprecated at some point, but I wouldn't be surprised.

For those like me who bought the NEO devices, you'll have to enable the U2F mode — while Yubico provides the linked step-by-step guide, it was not really completely correct for me on Gentoo, but it should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid required to use the CCID interface on U2F-enabled NEOs. And if you already created the udev rules file as I noted above, it'll work without you using root privileges. Just remember that if you are interested in the OpenPGP support you'll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).
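If pcscd does not come up automatically on your setup, you can enable it by hand (assuming the service and socket names shipped by sys-apps/pcsc-lite):

# OpenRC
rc-update add pcscd default
rc-service pcscd start

# systemd
systemctl enable pcscd.socket
systemctl start pcscd.socket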

I'll recount separately the issues with packaging the software. In the mean time make sure you keep your accounts safe, and let's all hope that more sites will start protecting your accounts with U2F — I'll also write a separate opinion piece on why U2F is important and why it is better than OTP; this is just meant as documentation on how to set up the U2F devices on your Linux systems.

Gentoo Monthly Newsletter: September 2014 (October 25, 2014, 09:10 UTC)

Gentoo News

Council News

The September council meeting was quite uneventful. The only outcome of note was that the dohtml function for ebuilds will be deprecated now and banned in a later EAPI, with some internal consequences for, e.g., einstalldocs.

Releases

A new LiveDVD, the Iron Penguin Edition, is out thanks to the Gentoo Infrastructure team and Fernando Reyes. If you haven’t yet checked it out, what are you waiting for? Go get it on your closest mirror.

Gentoo Miniconf 2014

(shameless copy of Tomas Chvatal’s report on the gentoo-project mailing list)

Hello guys,

First I would like to say big thank you to Amy (amynka) for prodding and nudging people and working on the booth. Next in line is Christopher (chithead) whom also handled our booth and even brought with him fancy MIPS machine and monitor all the way from Berlin. Kudos for that. And last I want to commend all the people giving the talks during the day. In the end we did bit Q&A with users, which was short so rest I spent asking how we should do the miniconf and what would be desired. So first lets take look on what we had and what we can do there to make it even cooler for next time:

Booth

Place where we share/sell SWAG chat with community. People stopped by, took some stickers here and there and watched the MIPS boxie we had there. I have to admit that I screwed up with our materials a bit and we didn’t have much on the stand. I thought we have more leftover stickers/brochures, but we had just few and super plan to get Gentoo t-shirts failed me miserably…

Future possibilities

Someone from Gentoo ev. could arrive too and actually sell some stuff like cups/tshirts as we seem unable to get something working here in Czech republic. With that we would have really pretty booth. People were quite interested in our merchandise and are even willing to buy it.

Track

We had one day of talks, and basically everything went smoothly and videos will be available in near future on youtube. I will try to remember to post link here as reply when it is done (if it is not here in a week, prod me on irc because that means I forgot).

Future possibilities

We should make the thing 2 days, so it is worth for people to go to Prague, for one day I guess it is not that motivating. We should start looking for talks sooner than couple of months in advance so people can plan for it.

Overall state/possibilities

First here are photos:
http://www.root.cz/galerie/linuxdays-2014-sobota/
http://www.root.cz/galerie/linuxdays-2014-nedele/

Linuxdays people are more than happy to provide us with the room if we have the content. Most of the people attending to the conference speak english, so even tho quite parts of the tracks are czech, we can talk with the people around. We could do it yearly/bi-yearly, my take would be to create 2 days miniconf each two year, so next one could be done 2016 unless of course you want it next year again and tell me right now

Gentoo Developer Moves

Summary

Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.

Changes

  • Chris Reffett joined the Wiki team
  • Alex Brandt joined the Python and OpenStack teams
  • Brian Evans joined the PHP team
  • Alec Warner left the ComRel and Infrastructure teams
  • Michał Górny left the Portage team
  • Denis Dupeyron left the ComRel team
  • Robin H. Johnson left the ComRel team

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17722
Ebuilds 37899
Architecture Stable Testing Total % of Packages
alpha 3661 582 4243 23.94%
amd64 10915 6318 17233 97.24%
amd64-fbsd 0 1573 1573 8.88%
arm 2701 1773 4474 25.25%
arm64 569 34 603 3.40%
hppa 3097 490 3587 20.24%
ia64 3213 627 3840 21.67%
m68k 612 98 710 4.01%
mips 0 2419 2419 13.65%
ppc 6866 2460 9326 52.62%
ppc64 4369 969 5338 30.12%
s390 1458 355 1813 10.23%
sh 1646 432 2078 11.73%
sparc 4156 916 5072 28.62%
sparc-fbsd 0 316 316 1.78%
x86 11564 5361 16925 95.50%
x86-fbsd 0 3238 3238 18.27%


Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201409-10 app-shells/bash Bash: Code Injection (Updated fix for GLSA 201409-09) 523592
201409-09 app-shells/bash Bash: Code Injection 523592
201409-08 dev-libs/libxml2 libxml2: Denial of Service 509834
201409-07 net-proxy/c-icap c-icap: Denial of Service 455324
201409-06 www-client/chromium Chromium: Multiple vulnerabilities 522484
201409-05 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 522448
201409-04 dev-db/mysql MySQL: Multiple vulnerabilities 460748
201409-03 net-misc/dhcpcd dhcpcd: Denial of service 518596
201409-02 net-analyzer/net-snmp Net-SNMP: Denial of Service 431752
201409-01 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 519014
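If you want to check whether one of your own boxes is affected by any of these, glsa-check from app-portage/gentoolkit can do it (a quick sketch):

$ glsa-check -t all          # list GLSAs that still apply to this system
$ glsa-check -p 201409-09    # show what fixing a specific GLSA would involve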

Package Removals/Additions

Removals

Package Developer Date
dev-python/amara dev-zero 07 Sep 2014
dev-python/Bcryptor pacho 07 Sep 2014
dev-python/Yamlog pacho 07 Sep 2014
app-crypt/opencdk pacho 07 Sep 2014
net-dialup/gnome-ppp pacho 07 Sep 2014
media-plugins/vdr-dxr3 pacho 07 Sep 2014
media-video/dxr3config pacho 07 Sep 2014
media-video/em8300-libraries pacho 07 Sep 2014
media-video/em8300-modules pacho 07 Sep 2014
net-misc/xsupplicant pacho 07 Sep 2014
www-apache/mod_lisp2 pacho 07 Sep 2014
dev-python/py-gnupg pacho 07 Sep 2014
media-sound/decibel-audio-player pacho 07 Sep 2014
sys-power/gtk-cpuspeedy pacho 07 Sep 2014
app-emulation/emul-linux-x86-glibc-errno-compat pacho 07 Sep 2014
sys-fs/chironfs pacho 07 Sep 2014
net-p2p/giftui pacho 07 Sep 2014
app-misc/discomatic pacho 07 Sep 2014
x11-misc/uf-view pacho 07 Sep 2014
games-action/minetest_build hasufell 09 Sep 2014
games-action/minetest_common hasufell 09 Sep 2014
games-action/minetest_survival hasufell 09 Sep 2014
www-client/opera-next jer 15 Sep 2014
www-apps/swish-e dilfridge 19 Sep 2014
dev-qt/qcustomplot jlec 29 Sep 2014

Additions

Package Developer Date
dev-ruby/typhoeus graaff 01 Sep 2014
dev-python/toolz patrick 02 Sep 2014
dev-python/cytoolz patrick 02 Sep 2014
dev-python/unicodecsv patrick 02 Sep 2014
dev-python/characteristic idella4 02 Sep 2014
dev-python/service_identity idella4 02 Sep 2014
dev-libs/gom pacho 02 Sep 2014
games-roguelike/mazesofmonad hasufell 02 Sep 2014
dev-ruby/ast mrueg 04 Sep 2014
dev-ruby/cliver mrueg 04 Sep 2014
dev-ruby/parser mrueg 04 Sep 2014
dev-ruby/astrolabe mrueg 04 Sep 2014
net-ftp/pybootd vapier 04 Sep 2014
net-analyzer/nbwmon jer 04 Sep 2014
net-misc/megatools dlan 05 Sep 2014
dev-python/placefinder idella4 06 Sep 2014
dev-python/flask-cors idella4 09 Sep 2014
app-crypt/crackpkcs12 vapier 10 Sep 2014
dev-qt/linguist-tools pesa 11 Sep 2014
dev-qt/qdbus pesa 11 Sep 2014
dev-qt/qdoc pesa 11 Sep 2014
dev-qt/qtconcurrent pesa 11 Sep 2014
dev-qt/qtdiag pesa 11 Sep 2014
dev-qt/qtgraphicaleffects pesa 11 Sep 2014
dev-qt/qtimageformats pesa 11 Sep 2014
dev-qt/qtnetwork pesa 11 Sep 2014
dev-qt/qtpaths pesa 11 Sep 2014
dev-qt/qtprintsupport pesa 11 Sep 2014
dev-qt/qtquick1 pesa 11 Sep 2014
dev-qt/qtquickcontrols pesa 11 Sep 2014
dev-qt/qtserialport pesa 11 Sep 2014
dev-qt/qttranslations pesa 11 Sep 2014
dev-qt/qtwebsockets pesa 11 Sep 2014
dev-qt/qtwidgets pesa 11 Sep 2014
dev-qt/qtx11extras pesa 11 Sep 2014
dev-qt/qtxml pesa 11 Sep 2014
www-client/otter jer 13 Sep 2014
dev-util/pycharm-community xmw 14 Sep 2014
dev-util/pycharm-professional xmw 14 Sep 2014
media-libs/libgltf dilfridge 14 Sep 2014
www-client/opera-beta jer 15 Sep 2014
dev-libs/libbase58 blueness 15 Sep 2014
net-libs/courier-unicode hanno 16 Sep 2014
dev-libs/bareos-fastlzlib mschiff 16 Sep 2014
sys-libs/nss-usrfiles ryao 17 Sep 2014
sys-cluster/poolmon mschiff 18 Sep 2014
dev-python/pyClamd xmw 20 Sep 2014
sci-libs/htslib jlec 20 Sep 2014
dev-python/pika xarthisius 21 Sep 2014
games-rpg/wasteland2 hasufell 21 Sep 2014
app-backup/holland-lib-common alunduil 21 Sep 2014
app-backup/holland-backup-sqlite alunduil 21 Sep 2014
app-backup/holland-backup-pgdump alunduil 21 Sep 2014
app-backup/holland-backup-example alunduil 21 Sep 2014
app-backup/holland-backup-random alunduil 21 Sep 2014
app-backup/holland-lib-lvm alunduil 21 Sep 2014
app-backup/holland-lib-mysql alunduil 21 Sep 2014
app-backup/holland-backup-mysqldump alunduil 21 Sep 2014
app-backup/holland-backup-mysqlhotcopy alunduil 21 Sep 2014
app-backup/holland-backup-mysql-lvm alunduil 21 Sep 2014
app-backup/holland-backup-mysql-meta alunduil 21 Sep 2014
app-backup/holland alunduil 21 Sep 2014
net-libs/libndp pacho 22 Sep 2014
dev-python/keystonemiddleware prometheanfire 22 Sep 2014
media-libs/libbdplus beandog 22 Sep 2014
dev-python/texttable alunduil 23 Sep 2014
dev-perl/IMAP-BodyStructure chainsaw 25 Sep 2014
net-libs/uhttpmock pacho 25 Sep 2014
dev-perl/Data-Validate-IP chainsaw 25 Sep 2014
dev-perl/Data-Validate-Domain chainsaw 25 Sep 2014
dev-perl/Template-Plugin-Cycle chainsaw 25 Sep 2014
dev-perl/XML-Directory chainsaw 25 Sep 2014
dev-python/treq ryao 25 Sep 2014
dev-python/eliot ryao 25 Sep 2014
dev-python/xcffib idella4 26 Sep 2014
dev-qt/qtsensors pesa 26 Sep 2014
dev-python/path-py floppym 27 Sep 2014
dev-perl/Archive-Extract dilfridge 27 Sep 2014
dev-python/requests-mock alunduil 27 Sep 2014
dev-libs/appstream-glib eva 27 Sep 2014
dev-qt/qtpositioning pesa 28 Sep 2014
dev-qt/qcustomplot jlec 28 Sep 2014
dev-perl/Data-Structure-Util dilfridge 28 Sep 2014
dev-perl/IO-Event dilfridge 28 Sep 2014
dev-libs/qcustomplot jlec 29 Sep 2014
dev-python/webassets yngwin 30 Sep 2014
dev-python/google-apputils idella4 30 Sep 2014
dev-python/pyinsane voyageur 30 Sep 2014
dev-python/pyocr voyageur 30 Sep 2014
app-text/paperwork voyageur 30 Sep 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 September 2014 and 01 October 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1196
Closed 769
Not fixed 175
Duplicates 136
Total 6132
Blocker 5
Critical 17
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period.

Rank Team/Developer Bug Count
1 Gentoo Security 49
2 Gentoo Linux Gnome Desktop Team 38
3 Python Gentoo Team 21
4 Qt Bug Alias 20
5 Perl Devs @ Gentoo 20
6 Gentoo KDE team 20
7 Portage team 19
8 Gentoo Games 17
9 Netmon Herd 16
10 Others 548


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 92
2 Gentoo Security 62
3 Gentoo Linux Gnome Desktop Team 59
4 Gentoo's Team for Core System packages 39
5 Gentoo Games 37
6 Portage team 33
7 Python Gentoo Team 32
8 Gentoo KDE team 32
9 Perl Devs @ Gentoo 27
10 Others 782


 

Tip of the month

(thanks to Thomas D. for the link to the blog post)

In case you like messing with your kernel Kconfig options to tweak the kernel image for your Gentoo boxes, you may want to know that menuconfig accepts regular expressions for searching symbols. You can start the search by typing ‘/’. For example, if you want to find all symbols ending with PCI do something like this after pressing ‘/’.

PCI$

You get a bunch of results, and then you can press the number listed on the left to jump directly to that symbol.
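Putting that together on a typical Gentoo box (just a sketch; the regular expression is only an example):

cd /usr/src/linux
make menuconfig
# press '/' and type a regular expression such as PCI$ or ^IPV6,
# then press the number shown next to a match to jump straight to it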

Related references:

http://michaelmk.blogspot.de/2014/08/jumping-directly-into-found-results-in.html

https://plus.google.com/101327154101389327284/posts/MyrhGjng1rQ

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

October 23, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)
Tor-ramdisk 20141022 released (October 23, 2014, 21:40 UTC)

Following the latest and greatest exploit in openssl, CVE-2014-3566, aka the POODLE issue, the tor team released version 0.2.4.25.  For those of you not familiar, tor is a system for online anonymity which encrypts and bounces your traffic through relays so as to obfuscate the origin.  Back in 2008, I started a uClibc-based micro Linux distribution, called tor-ramdisk, whose only purpose is to host a tor relay in a hardened Gentoo environment purely in RAM.

While the POODLE bug is an openssl issue and is resolved by the latest release 1.0.1j, the tor team decided to turn off the affected protocol, SSLv3, requiring TLS 1.0 or later.  They also fixed tor to avoid a crash when built using openssl 0.9.8zc, 1.0.0o, or 1.0.1j with the 'no-ssl3' configuration option.  These important fixes to two major components of tor-ramdisk warranted a new release.  Take a look at the upstream ChangeLog for more information.

Since I was upgrading stuff, I also upgraded the kernel to vanilla 3.17.1 + Gentoo’s hardened-patches-3.17.1-1.extras.  All the other components remain the same as in the previous release.

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download:  http://opensource.dyc.edu/tor-ramdisk-downloads

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download:  http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads

October 19, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a small piece of advice for all who want to upgrade their Perl to the very newest available, but still keep running an otherwise stable Gentoo installation: These three lines are exactly what needs to go into /etc/portage/package.keywords:
dev-lang/perl
virtual/perl-*
perl-core/*
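With those entries in place, the upgrade itself is the usual emerge followed by a perl-cleaner run to rebuild installed modules against the new Perl (a sketch, not part of the original three lines):

# emerge --ask --update --deep dev-lang/perl
# perl-cleaner --all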
Of course, as always, bugs may be present; what you get as a Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our bugzilla.

October 18, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Tracking patches (October 18, 2014, 11:53 UTC)

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I’m quite fond of improving the tools I use. And that’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I already discussed lldb and asan/valgrind; now my current focus is on patch trackers. In part it is due to the current effort to improve the libav one.

Contributors

Before talking about patches and their tracking, I’d like to digress a little on who produces them: the mythical Contributor. Without contributions an open-source project would not exist.

You might have recurring contributions and unique/seldom contributions. Both are quite important.
In general you should try to turn occasional contributors into recurring contributors.

A recurring contributor can accept spending some additional time to set up the environment to actually provide their contribution back to the community; a sporadic contributor could easily be put off if the effort required to send their patch is larger than writing the patch itself.

The project maintainers should make the life of contributors as simple as possible.

Patches and Revision Control

Lately most open-source projects have seen the light and started to use decentralized revision control systems, and thanks to GitHub and many others the concept of issuing pull requests is becoming part of our culture. With it comes, hopefully, a wider acceptance of the fact that code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario new code is usually developed in topic branches, routinely rebased against master until the set is ready; then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to Bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review, bugs are usually spotted and ways to improve the code are suggested. Patches might be split or merged together, and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is not so much, and reviewing someone else's code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that slip through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably ignore it; otherwise you should feel somewhat concerned that what you wrote might turn some people's life into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome).

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merge, or to just make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool for that called git send-email, and it is quite common to send sets of patches (usually called a series) to a mailing list. You get feedback by email and you can update the set using the --in-reply-to option and the message id.
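A minimal sketch of that workflow (the list address and message id here are placeholders):

git format-patch -3 --cover-letter -o outgoing/
git send-email --to=project-devel@example.org outgoing/*.patch
# for a reworked v2 of the same series, threaded under the original discussion:
git format-patch -v2 -3 --cover-letter -o outgoing-v2/
git send-email --to=project-devel@example.org \
    --in-reply-to='<message-id-of-the-original-cover-letter>' outgoing-v2/*.patch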

Platforms such as GitHub and similar ones are more web centric and require you to use the web interface to issue and review the request. No additional tools are required besides git and a browser.

gerrit and reviewboard provide custom scripts to set up ephemeral branches in some staging area; the review process then requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am. And if the In-Reply-To field is used properly, updates appear sorted in a good way.

This method is the simplest for people used to having the email client always open next to a console (if they are using a well configured emacs or vim they literally do not move away from the editor).

On the other hand, people using webmail or a basic email client might find the approach more cumbersome than a web based one.

If your only method of tracking contributions is a mailing list, it gets quite easy to lose track of the status of a set. Patches can be neglected, and even the people who wrote them might forget about them for a long time.

Patchwork approach

Patchwork tracks which patches hit a mailing list and tries to figure out automatically whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list and there is no concept of a series.

As basic as it is, it works as a reminder of pending patches, but it tends to get cluttered easily and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (Chrome and Mozilla can be made to work as a decent IDE lately) might like it much better.

Reviewing small series or single patches is usually nicer but the current UIs do not scale for larger (5+) patchsets.

People not living in a browser find it quite annoying to switch context, and it requires additional effort to contribute, since you have to register on a website and the process of issuing a patch requires many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than their GitHub counterparts. That can be good or bad, since they aren’t as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment since you need some custom scripts.

The series are tracked with additional precision, but for all practical purposes the usage is the same as GitHub, with an additional burden for the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow like patchwork does.

It already provides additional features such as the ability to manage series of patches and to track updates to them. It sports a view giving a breakdown of which series require a review and which have been pending for a long time waiting for an update.

What’s pending is the ability to review directly in the browser, sending the review email from the web to the mailing list, and some more.

I might complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!

October 14, 2014
Jan Kundrát a.k.a. jkt (homepage, bugs)

Some of the recent releases of Trojitá, a fast Qt e-mail client, mentioned an ongoing work towards bringing the application to the Ubuntu Touch platform. It turns out that this won't be happening.

The developers who were working on the Ubuntu Touch UI decided that they would prefer to stop working with upstream and instead focus on a standalone long-term fork of Trojitá called Dekko. The fork lives within the Launchpad ecosystem and we agreed that there's no point in keeping unmaintained and dead code in our repository anymore -- hence it's being removed.

October 13, 2014
Raúl Porcel a.k.a. armin76 (homepage, bugs)
S390 documentation in the Gentoo Wiki (October 13, 2014, 08:44 UTC)

Hi all,

One of the projects I had last year that I ended up suspending due to lack of time was S390 documentation and installation materials. For some reason there weren’t any materials available to install Gentoo on an S390 system without having to rely on an already installed distribution.

Thanks to Marist College, IBM and the Linux Foundation we were able to get two VMs for building the release materials, and thanks to Dave Jones @ V/Soft Software I was able to document the installation in a z/VM environment. Also thanks to the Debian project, since I based the materials on their procedure.

So for most of last year and the last few weeks I’ve been polishing and finishing the documentation I had around. What I’ve documented: Gentoo S390 on the Hercules emulator and Gentoo S390 on z/VM. Both follow the same pattern.

Gentoo S390 on the Hercules emulator

This is probably the guide that will be more interesting, because everyone can run the Hercules emulator, while not everyone has access to a z/VM instance. Hercules emulates an S390 system, much like QEMU. However QEMU, from what I can tell, is unable to emulate an S390 system on a non-S390 system, while Hercules can.

So if you want to have some fun, emulate an S390 machine on your computer, and install and use Gentoo in it, then follow the guide: https://wiki.gentoo.org/wiki/S390/Hercules
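On the Gentoo side, the emulator itself is just one package away (assuming it still lives under app-emulation):

# emerge --ask app-emulation/hercules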

Gentoo S390 on z/VM

For those that have access to z/VM and want to install Gentoo, the guide explains all the steps needed to get a Gentoo system working. Thanks to Dave Jones I was able to create the guide and test the release materials; he even gave a presentation at the 2013 VM Workshop! Link to the PDF. Keep in mind that some of the instructions given there are now outdated, mainly the links.

The link to the documentation is: https://wiki.gentoo.org/wiki/S390/Install

I have also written some tips and tricks for z/VM: https://wiki.gentoo.org/wiki/S390/z/VM_tips_and_tricks They’re really basic and were the ones I needed for creating the guide.

Installation materials

Lastly, we already had the autobuild stage3s for s390, but we lacked a boot environment for installing Gentoo. This boot environment/release material is simply a kernel and an initramfs built with Gentoo’s genkernel, based on busybox. It builds an environment using busybox like the LiveCD on amd64/x86 or other architectures. I’ve integrated the build of this boot environment with the autobuilds, so each week there should be an updated installation environment.

Have fun!


October 11, 2014
Mike Pagano a.k.a. mpagano (homepage, bugs)
Netflix on Gentoo (October 11, 2014, 13:11 UTC)

Contrary to some articles you may read on the internet, Netflix is working great on Gentoo.

Here’s a snapshot of my system running 3.12.30-gentoo sources and Google Chrome version 39.0.2171.19_p1.

[Screenshot: Netflix playing on Gentoo]

 

$ equery l google-chrome-beta
* Searching for google-chrome-beta …
[IP-] [ ] www-client/google-chrome-beta-39.0.2171.19_p1:0

 

 

October 08, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v1.6 (October 08, 2014, 09:01 UTC)

Back from holidays, this new version of py3status was long overdue, as it features a lot of great contributions !

This version is dedicated to the amazing @ShadowPrince who contributed 6 new modules :)

Changelog

  • core : rename the ‘examples’ folder to ‘modules’
  • core : Fix include_paths default wrt issue #38, by Frank Haun
  • new vnstat module, by Vasiliy Horbachenko
  • new net_rate module, alternative module for tracking network rate, by Vasiliy Horbachenko
  • new scratchpad-counter module and window-title module for displaying current windows title, by Vasiliy Horbachenko
  • new keyboard-layout module, by Vasiliy Horbachenko
  • new mpd_status module, by Vasiliy Horbachenko
  • new clementine module displaying the current “artist – title” playing in Clementine, by François LASSERRE
  • module clementine.py: Make python3 compatible, by Frank Haun
  • add optional CPU temperature to the sysdata module, by Rayeshman

Contributors

Huge thanks to this release’s contributors :

  • @ChoiZ
  • @fhaun
  • @rayeshman
  • @ShadowPrince

What’s next ?

The next 1.7 release of py3status will bring a neat and cool feature which I’m sure you’ll love, stay tuned !

October 06, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
How to stop Bleeding Hearts and Shocking Shells (October 06, 2014, 21:35 UTC)

The free software community was recently shattered by two security bugs called Heartbleed and Shellshock. While technically these bugs were quite different, I think they still share a lot.

Heartbleed hit the news in April this year: a bug in OpenSSL that allowed attackers to extract private keys from encrypted connections. When a bug in Bash called Shellshock hit the news I was at first hesitant to call it bigger than Heartbleed. But now I am pretty sure it is. While Heartbleed was big, there were some things that alleviated the impact. It took some days till people found out how to practically extract private keys - and it still wasn't fast. And the most likely attack scenario - stealing a private key and pulling off a Man-in-the-Middle attack - seemed something that'd still pose some difficulties to an attacker. It seemed that people who update their systems quickly (like me) weren't in any real danger.

Shellshock was different. It's astonishingly simple to use and real attacks started hours after it became public. If circumstances had been unfortunate there would've been a very real chance that my own servers could've been hit by it. I usually feel the IT stuff under my responsibility is pretty safe, so things like this scare me.

What OpenSSL and Bash have in common

Shortly after Heartbleed something became very obvious: The OpenSSL project wasn't in good shape. The software that pretty much everyone on the Internet uses to do encryption was run by a small number of underpaid people. People trying to contribute and submit patches were often ignored (I know that, I tried it). The truth about Bash looks even grimmer: It's a project mostly run by a single volunteer. And yet almost every large Internet company out there uses it. Apple installs it on every laptop. OpenSSL and Bash are crucial pieces of software and run on the majority of the servers that run the Internet. Yet they are very small projects backed by few people. Besides, they are both quite old; you'll find tons of legacy code in them written more than a decade ago.

People like to rant about the code quality of software like OpenSSL and Bash. However, I am not that concerned about these two projects. This is the upside of events like these: OpenSSL is probably much more secure than it ever was, and after the dust settles Bash will be a better piece of software. If you want to ask yourself where the next Heartbleed/Shellshock-alike bug will happen, ask this: What projects are there that are installed on almost every Linux system out there? And how many of them have a healthy community and received a good security audit lately?

Software installed on almost any Linux system

Let me propose a little experiment: Take your favorite Linux distribution, make a minimal installation without anything and look at what's installed. These are the software projects you should worry about. To make things easier I did this for you. I took my own system of choice, Gentoo Linux, but the results wouldn't be very different on other distributions. The results are at the bottom of this text. (I removed everything Gentoo-specific.) I admit this is oversimplifying things. Some of these provide more attack surface than others; we should probably worry more about the ones that are directly involved in providing network services.
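If you want to repeat the experiment on your own Gentoo system, listing what is installed is a one-liner (a sketch; qlist comes from app-portage/portage-utils, the second variant needs no extra tools):

qlist -I                  # list all installed packages
ls -d /var/db/pkg/*/*     # or read the package database directly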

After Heartbleed some people already asked questions like these. How could it happen that a project so essential to IT security is so underfunded? Some large companies acted and the result is the Core Infrastructure Initiative by the Linux Foundation, which has already helped improve OpenSSL development. This is a great start and an example of the kind of initiative we should have more of. We should ask the large IT companies who are not part of that initiative what they are doing to improve overall Internet security.

Just to put this into perspective: A thorough security audit of a project like Bash would probably require a five figure number of dollars. For a small, volunteer driven project this is huge. For a company like Apple - the one that installed Bash on all their laptops - it's nearly nothing.

There's another recent development I find noteworthy. Google started Project Zero where they hired some of the brightest minds in IT security and gave them a single job: Search for security bugs. Not in Google's own software. In every piece of software out there. This is not merely an altruistic project. It makes sense for Google. They want the web to be a safer place - because the web is where they earn their money. I like that approach a lot and I have only one question to ask about it: Why doesn't every large IT company have a Project Zero?

Sparking interest

There's another aspect I want to talk about. After Heartbleed people started having a closer look at OpenSSL and found a number of small and one other quite severe issue. After Bash people instantly found more issues in the function parser and we now have six CVEs for Shellshock and friends. When a piece of software is affected by a severe security bug people start to look for more. I wonder what it'd take to have people looking at the projects that aren't in the spotlight.

I was brainstorming if we could have something like a "free software audit action day". A regular call where an important but neglected project is chosen and the security community is asked to have a look at it. This is just a vague idea for now, if you like it please leave a comment.

That's it. I refrain from having discussions whether bugs like Heartbleed or Shellshock disprove the "many eyes"-principle that free software advocates like to cite, because I think these discussions are a pointless waste of time. I'd like to discuss how to improve things. Let's start.

Here's the promised list of Gentoo packages in the standard installation:

bzip2
gzip
tar
unzip
xz-utils
nano
ca-certificates
mime-types
pax-utils
bash
build-docbook-catalog
docbook-xml-dtd
docbook-xsl-stylesheets
openjade
opensp
po4a
sgml-common
perl
python
elfutils
expat
glib
gmp
libffi
libgcrypt
libgpg-error
libpcre
libpipeline
libxml2
libxslt
mpc
mpfr
openssl
popt
Locale-gettext
SGMLSpm
TermReadKey
Text-CharWidth
Text-WrapI18N
XML-Parser
gperf
gtk-doc-am
intltool
pkgconfig
iputils
netifrc
openssh
rsync
wget
acl
attr
baselayout
busybox
coreutils
debianutils
diffutils
file
findutils
gawk
grep
groff
help2man
hwids
kbd
kmod
less
man-db
man-pages
man-pages-posix
net-tools
sed
shadow
sysvinit
tcp-wrappers
texinfo
util-linux
which
pambase
autoconf
automake
binutils
bison
flex
gcc
gettext
gnuconfig
libtool
m4
make
patch
e2fsprogs
udev
linux-headers
cracklib
db
e2fsprogs-libs
gdbm
glibc
libcap
ncurses
pam
readline
timezone-data
zlib
procps
psmisc
shared-mime-info

October 04, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

It has been four months since my last major build and release of Lilblue Linux, a pet project of mine [1].  The name is a bit pretentious, I admit, since Lilblue is not some other Linux distro.  It is Gentoo, but Gentoo with a twist.  It’s a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  I use it on some of my workstations at the College and at home, like any other desktop, and I know other people that use it too, but the main reason for its existence is that I wanted to push uClibc to its limits and see where things break.  Back in 2011, I got bored of working with the usual set of embedded packages.  So, while my students were writing their exams in Modern OS, I entertained myself by just adding more and more packages to a stage3-amd64-hardened system [2] until I had a decent desktop.  After playing with it on and off, I finally polished it to where I thought others might enjoy it too and started pushing out releases.  Recently, I found out that the folks behind uselessd [3] used Lilblue as their testing ground. uselessd is another response to systemd [4], something like eudev [5], which I maintain, so the irony here is too much not to mention!  But that’s another story …

There was only one interesting issue about this release.  Generally I try to keep all releases about the same.  I’m not constantly updating the list of packages in @world.  I did remove pulseaudio this time around because it never did work right and I don’t use it.  I’ll fix it in the future, but not yet!  Instead, I concentrated on a much more interesting problem with a new release of e2fsprogs [6].   The problem started when upstream’s commit 58229aaf removed a broken fallback syscall for fallocate64() on systems where the latter is unavailable [7].  There was nothing wrong with this commit, in fact, it was the correct thing to do.  e4defrag.c used to have the following code:

#ifndef HAVE_FALLOCATE64
#warning Using locally defined fallocate syscall interface.

#ifndef __NR_fallocate
#error Your kernel headers dont define __NR_fallocate
#endif

/*
 * fallocate64() - Manipulate file space.
 *
 * @fd: defrag target file's descriptor.
 * @mode: process flag.
 * @offset: file offset.
 * @len: file size.
 */
static int fallocate64(int fd, int mode, loff_t offset, loff_t len)
{
    return syscall(__NR_fallocate, fd, mode, offset, len);
}
#endif /* ! HAVE_FALLOCATE */

The idea was that, if a configure test for fallocate64() failed because it isn’t available in your libc, but there is a system call for it in the kernel, then e4defrag would just make the syscall via your libc’s indirect syscall() function.  Seems simple enough, except that how system calls are dispatched is architecture and ABI dependent, and the above is broken on 32-bit systems [8].  Of course, uClibc didn’t have fallocate() so e4defrag failed to build after that commit.  To my surprise, musl does have fallocate() so this wasn’t a problem there, even though it is a Linux-specific function and not in any standard.

My first approach was to patch e2fsprogs to use posix_fallocate() which is supposed to be equivalent to fallocate() when invoked with mode = 0.  e4defrag calls fallocate() in mode = 0, so this seemed like a simple fix.  However, this was not acceptable to Ts’o since he was worried that some libc might implement posix_fallocate() by brute force writing 0′s.  That could be horribly slow for large allocations!  This wasn’t the case for uClibc’s implementation but that didn’t seem to make much difference upstream.  Meh.

Rather than fight e2fsprogs, I sat down and hacked fallocate() into uClibc.  Since both fallocate() and posix_fallocate(), and their LFS counterparts fallocate64() and posix_fallocate64(), make the same syscall, it was sufficient to isolate that in an internal function which both could make use of.  That, plus a test suite, and Bernhard was kind enough to commit it to master [10].  Then a couple of backports, and uClibc’s 0.9.33 branch now has the fix as well.  Because there hasn’t been a release of uClibc in about two years, I’m using the 0.9.33 branch HEAD for Lilblue, so the problem there was solved — I know it’s a little problematic, but it was either that or try to juggle dozens of patches.

The only thing that remains is to backport those fixes to vapier’s patchset that he maintains for the uClibc ebuilds.  Since my uClibc stage3s don’t use the 0.9.33 branch HEAD, but the stable tree ebuilds which use the vanilla 0.9.33.2 release plus Mike’s patchset, upgrading e2fsprogs is blocked for those stages.

This whole process may seem like a real pita, but this is exactly the sort of issue I like uncovering and cleaning up.  So far, the feedback on the latest release is good.  If you want to play with Lilblue and you don’t have a free box, fire up VirtualBox or your emulator of choice and give it a try.  You can download it from experimental/amd64/uclibc on any mirror [11].

October 03, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

Two years ago, I took on the maintenance of thttpd, a web server written by Jef Poskanzer at ACME Labs [1].  The code hadn’t been updated in about 10 years and there were dozens of accumulated patches in the Gentoo tree, many of which addressed serious security issues.  I emailed upstream and was told the project was “done”, whatever that meant, so I was going to tree-clean it.  When I expressed my intentions on the upstream mailing list I got a bunch of “please don’t!” from users.  So rather than maintain a ton of patches, I forked the code, rewrote the build system to use autotools, and applied all the patches.  I dubbed the fork sthttpd.  There was no particular meaning to the “s”.  Maybe “still kicking”?

I put a git repo up on my server [2], got a mail list going [3], and set up bugzilla [4].  There hasn’t been much activity but there was enough because it got noticed by someone who pushed it out in OpenBSD ports [5].

Today, I finally pushed out 2.27.0 after two years.  This release takes care of a couple of new security issues: I fixed the world-readable log problem, CVE-2013-0348 [6], and Vitezslav Cizek <vcizek@suse.com> from OpenSUSE fixed a possible DoS triggered by a specially crafted .htpasswd.  Bob Tennent added some code to correct headers for .svgz content, and Jean-Philippe Ouellet did some code cleanup.  So it was time.

Web servers are not my style, but its tiny size and speed make it perfect for embedded systems, which are near and dear to my heart.  I also make sure it compiles on *BSD and Linux with glibc, uClibc or musl.  Not bad for a codebase which is over 10 years old!  Kudos to Jef.

Hanno Böck a.k.a. hanno (homepage, bugs)
New laptop Lenovo Thinkpad X1 Carbon 20A7 (October 03, 2014, 21:05 UTC)

While I got along well with my Thinkpad T61 laptop, for quite some time I had the plan to get a new one soon. It wasn't an easy decision and I looked in detail at the models available in recent months. I finally decided to buy one of Lenovo's Thinkpad X1 Carbon laptops in its 2014 edition. The X1 Carbon was introduced in 2012, however a completely new variant which is very different from the first one was released in early 2014. To distinguish it from other models it is the 20A7 model.

Judging from the first days of use I think I made the right decision. I hadn't seen the device before I bought it because it seems shops rarely keep this device in stock. I assume this is due to the relatively high price.

I was a bit worried because Lenovo made some unusual decisions for the keyboard, however having used it for a few days I don't feel that it has any severe downsides. The most unusual thing about it is that it doesn't have normal F1-F12 keys, instead it has what Lenovo calls an adaptive keyboard: a touch-sensitive line which can display different kinds of keys. The idea is that different applications can have their own set of special keys there. However, just letting them display the normal F-keys works well and not having "real" keys there doesn't feel like a big disadvantage. Besides that, Lenovo removed the Caps Lock and placed Pos1/End there, which is a bit unusual but also nothing I worried about. I also hadn't seen any pictures of the German keyboard before I bought the device. The ^/°-key is not where it used to be (small downside), but the </>/| key is where it belongs (big plus, many laptop vendors get that wrong).

Good things:
* Lightweight, Ultrabook, no unnecessary stuff like CD/DVD drive
* High resolution (2560x1440)
* Hardware is up-to-date (Haswell chipset)

Downsides:
* Due to ultrabook / integrated design, no easy changing of battery, RAM or HD
* No SD card reader
* Have some trouble getting used to the touchpad (however there are lots of possibilities to configure it, I assume by playing with it that'll get better)

It used to be the case that people wrote docs on how to get all the hardware in a laptop running on Linux, which I did for my previous laptops. These days this usually boils down to "run a recent Linux distribution with the latest kernels and xorg packages and most things will be fine". However, I thought having a central place where I collect relevant information would be nice, so I created one again. As usual I'm running Gentoo Linux.

For people who plan to run Linux without a dual boot it may be worth mentioning that there seem to be troublesome errors in earlier versions of the BIOS and the SSD firmware. You may want to update them before removing Windows. On my device they were already up-to-date.

September 28, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Unblocking F-keys (e.g. F9 for htop) in Guake 0.5.0 (September 28, 2014, 18:36 UTC)

I noticed that I couldn’t kill a process in htop today; F9 did not seem to be working, and actually most of the F-keys did not.

The reason turned out to be that Guake 0.5.0 takes over keys F1 to F10 for direct access to tabs 1 to 10.
That may work for most terminal applications, but for htop it’s a killer.

So how can I prevent Guake from taking F9 over?
The preferences dialog allows me to assign a different key, but not no key at all. Really? There is no context menu, and backspace and delete didn’t help. For now I assume it’s not possible.
So I fire up gconf-editor, menu > Edit > Find… > “guake” — there it is. However, upon “Edit key…” gconf-editor says to me:

Currently pairs and schemas can’t be edited. This will be changed in a later version.

Very nice.

In the end what did work was to run

gconftool-2 --set /schemas/apps/guake/keybindings/local/switch_tab9 \
	--type string ''

and to restart Guake.

I just opened a bug for this. If you like, you can follow it at https://github.com/Guake/guake/issues/376 .

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What does #shellshock mean for Gentoo? (September 28, 2014, 10:56 UTC)

Gentoo Penguins with chicks at Jougla Point, Antarctica
Photo credit: Liam Quinn

This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I'll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.

What remains to be done is to figure out how to avoid a repeat of this. And that's a difficult topic, because a 25-year-old bug is not easy to avoid, especially because there are probably plenty of siblings of it around that we have not found yet, just like this last week. But there are things that we can do as a whole environment to reduce the chances of problems like this happening, or at least to keep them from escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh for Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I pushed for that four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. It is indeed, among all the reasons, one I find very good in Lennart's design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If either all your init scripts are non-bash-requiring or you're using systemd (like me on the laptops), then it's mostly safe to switch to use dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash
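A quick way to verify that the switch took effect (just a sanity check, not part of the original commands):

# readlink -f /bin/sh    # should now point at dash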

That will change your /bin/sh and make it much less likely that you'd be vulnerable to this particular problem. Unfortunately, as I said, it's only mostly safe. I even found that some of the init scripts I wrote, which I had checked with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage in the output — this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
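If you want to run the same check over your own scripts before switching, something along these lines works (a sketch; checkbashisms originates from Debian's devscripts):

for f in /etc/init.d/*; do checkbashisms "$f"; done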

Interestingly it would be simpler for me to use zsh, as then both the init script and lsb_release would have worked. Unfortunately when I tried doing that, Emacs tramp-mode froze when trying to open files, both with sshx and sudo modes. The same was true for using BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately it does not mean you'll be perfectly safe or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I'm actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been considered a programming language. That's bad. Not only because it only has one reference implementation, but because it also seems to convince other people, new to coding, that it's a good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that's hard to deny. I've stopped being active since I finally accepted stable employment – I'm almost thirty, it was time to stop playing around, I needed to make a living, even if I don't really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a good deal. On the other hand, even though it's going to be probably too late to be relevant, I'll push for having a Summer of Code next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that through Puppet to all my servers. It should be enough for the moment. I expect more scrutiny to be spent on dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to think twice about all of them, so I'll keep my options open.

This is actually why I like software biodiversity: it allows you to select a different option when one component fails, and that is what worries me the most with systemd right now. I also hope that showing how bad bash has been all this time with its closed development will make it possible to have a better syntax-compatible shell with a proper parser, even better with a properly librarised implementation. But that's probably hoping too much.

September 27, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)
Tor-ramdisk 20140925 released (September 27, 2014, 16:35 UTC)

I’ve been blogging about my non-Gentoo work using my drupal site at http://opensource.dyc.edu/ but since I may be losing that server sometime in the future, I’m going to start duplicating those posts here.  This work should be of interest to readers of Planet Gentoo because it draws a lot from Gentoo, but it doesn’t exactly fall under the category of a “Gentoo Project.”

Anyhow, today I’m releasing tor-ramdisk 20140925.  As you may recall from a previous post, tor-ramdisk is a uClibc-based micro Linux distribution I maintain whose only purpose is to host a Tor server in an environment that maximizes security and privacy.  Security is enhanced using Gentoo’s hardened toolchain and kernel, while privacy is enhanced by forcing logging to be off at all levels.  Also, tor-ramdisk runs in RAM, so no information survives a reboot, except for the configuration file and the private RSA key, which may be exported/imported by FTP or SCP.

A few days ago, the Tor team released 0.2.4.24 with one major bug fix according to their ChangeLog. Clients were apparently sending the wrong address for their chosen rendezvous points for hidden services, which sounds like it shouldn’t work, but it did because they also sent the identity digest. This fix should improve surfing of hidden services. The other minor changes involved updating geoip information and the address of a v3 directory authority, gabelmoo.

I took this opportunity to also update busybox to version 1.22.1, openssl to 1.0.1i, and the kernel to 3.16.3 + Gentoo’s hardened-patches-3.16.3-1.extras. Both the x86 and x86_64 images were tested using node “simba” and showed no issues.

You can get tor-ramdisk from the following urls (at least for now!)

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download: http://opensource.dyc.edu/tor-ramdisk-downloads

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download: http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads

 

September 24, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

Almost an entire year ago (just a few days apart) I announced my first published book, called SELinux System Administration. The book covered SELinux administration commands and focused on Linux administrators that need to interact with SELinux-enabled systems.

An important part of SELinux was only covered very briefly in the book: policy development. So in the spring this year, Packt approached me and asked if I was interested in authoring a second book for them, called SELinux Cookbook. This book focuses on policy development and tuning of SELinux to fit the needs of the administrator or engineer, and as such is a logical follow-up to the previous book. Of course, given my affinity with the wonderful Gentoo Linux distribution, it is mentioned in the book (and even the reference platform) even though the book itself is checked against Red Hat Enterprise Linux and Fedora as well, ensuring that every recipe in the book works on all distributions. Luckily (or perhaps not surprisingly) the approach is quite distribution-agnostic.

Today, I got word that the SELinux Cookbook is now officially published. The book uses a recipe-based approach to SELinux development and tuning, so it is quickly hands-on. It gives my view on SELinux policy development while keeping the methods and processes aligned with the upstream policy development project (the reference policy).

It’s been a pleasure (but also somewhat a pain, as this is done in free time, which is scarce already) to author the book. Unlike the first book, where I struggled a bit to keep the page count to the requested amount, this book was not limited. Also, I think the various stages of the book development contributed well to the final result (something that I overlooked a bit the first time, so I re-re-reviewed changes over and over again this time – after the first editorial reviews, then after the content reviews, then after the language reviews, then after the code reviews).

You’ll see me blog a bit more about the book later (as the marketing phase is now starting) but for me, this is a major milestone which allowed me to write down more of my SELinux knowledge and experience. I hope it is as good a read for you as I hope it to be.

September 21, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
bcache (September 21, 2014, 12:59 UTC)

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD that has become the new rootfs initially.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference to a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I can make use of it. Since creating new bcache devices is destructive I copied all data away, reformatted the relevant partitions and then set up bcache. So the SSD is now 20GB rootfs, 40GB cache. The raid5 stays as it is, but gets reformatted with bcache.
In code:

wipefs -a /dev/md0 # remove old headers to unconfuse bcache
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh, what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018
Tadaah!
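To double-check that the cache really is attached and running in writeback mode, the bcache sysfs nodes can be queried (a small sketch; the paths assume the device is bcache0):

cat /sys/block/bcache0/bcache/state        # "clean" or "dirty" once a cache is attached
cat /sys/block/bcache0/bcache/cache_mode   # the active mode is shown in brackets, e.g. [writeback]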

So what about performance? Well ... without any proper benchmarks, just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it's only writing every ~5sec for a second.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsivity while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe 2/3rds of the SSD natively, but I have 1TB storage with that speed now - for a $25 update that's quite amazing.

Another interesting thing is that bcache is chunking up IO, so the harddisks are no longer making an angry purring noise with random IO, instead it's a strange chirping as they only write a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: This is definitely worth setting up for new machines that require good IO performance, the only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native sata disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms


Definitely awesome!

September 13, 2014
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
ghc 7.8.3 and rare architectures (September 13, 2014, 09:03 UTC)

After some initially positive experience with ghc-7.8-rc1 I’ve decided to upstream most of the Gentoo fixes.

On rare arches ghc-7.8.3 behaves a bit badly:

  • ia64 build stopped being able to link itself after ghc-7.4 (gprel overflow)
  • on sparc, ia64 and ppc ghc was not able to create working shared libraries
  • integer-gmp library on ia64 crashed, and we had to use integer-simple

I have written a small story of those fixes here if you are curious.

TL;DR:

To get ghc-7.8.3 working nicer for exotic arches you will need to backport at least the following patches:

Thank you!


September 10, 2014
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Unifying PostgreSQL Ebuilds (September 10, 2014, 14:01 UTC)

After an excruciating wait and years of learning PostgreSQL, it’s time to unify the PostgreSQL ebuilds. I’m not sure what the original motivation was to split the ebuilds, but, from the history I’ve seen on Gentoo, it has always been that way. That’s a piss-poor reason for continuing to do things a certain way. Especially when that way is wrong and makes things more tedious and difficult than they ought to be.

I’m to blame for pressing forward with splitting the ebuilds into -docs, -base, and -server when I first got started in Gentoo. I knew from the outset that having them split was not a good idea. I just didn’t know as much as I do now to defend one way or the other. To be fair, Patrick (bonsaikitten) would have gone with whatever I decided to do, but I thought I understood the advantages. Now I look at it and just see disadvantages.

Let’s first look at the build times for building the split ebuilds:

1961.35user 319.42system 33:59.44elapsed 111%CPU (0avgtext+0avgdata 682896maxresident)k
46696inputs+2000640outputs (34major+34350937minor)pagefaults 0swaps
1955.12user 325.01system 33:39.86elapsed 112%CPU (0avgtext+0avgdata 682896maxresident)k
7176inputs+1984960outputs (33major+34349678minor)pagefaults 0swaps
1942.90user 318.89system 33:53.70elapsed 111%CPU (0avgtext+0avgdata 682928maxresident)k
28496inputs+1999688outputs (124major+34343901minor)pagefaults 0swaps

And now the unified ebuild:

1823.57user 278.96system 30:29.20elapsed 114%CPU (0avgtext+0avgdata 683024maxresident)k
32520inputs+1455688outputs (100major+30199771minor)pagefaults 0swaps
1795.63user 282.55system 30:35.92elapsed 113%CPU (0avgtext+0avgdata 683024maxresident)k
9848inputs+1456056outputs (30major+30225195minor)pagefaults 0swaps
1802.47user 275.66system 30:08.30elapsed 114%CPU (0avgtext+0avgdata 683056maxresident)k
13800inputs+1454880outputs (49major+30193986minor)pagefaults 0swaps

So, the unified ebuild is about 10% faster than the split ebuilds.

There are also a few bugs open that will be resolved by moving to a unified ebuild. Whenever someone changes anything in their flags, Portage tends to only pick up dev-db/postgresql-server as needing to be recompiled rather than the appropriate dev-db/postgresql-base, which results in broken setups and failures to even build. I’ve even been accused of pulling the rug out from under people. I swear, it’s not me…it’s Portage…who I lied to. Kind of.

There should be little interruption, though, to the end user. I’ll be keeping all the features that splitting brought us. Okay, feature. There’s really just one feature: Proper slotting. Portage will be informed of the package moves, and everything should be hunky-dory with one caveat: A new ‘server’ USE flag is being introduced to control whether to build everything or just the clients and libraries.
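For example, someone who only needs the client programs and libraries could then simply turn the flag off in package.use, roughly like this (a sketch; the unified ebuilds are not in the tree yet):

# /etc/portage/package.use
dev-db/postgresql -server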

No, I don’t want to do a lib-only flag. I don’t want to work on another hack.

You can check out the progress on my overlay. I’ll be working on updating the dependent packages as well so they’re all ready to go in one shot.

September 08, 2014
Gentoo Monthly Newsletter: August 2014 (September 08, 2014, 21:20 UTC)

Gentoo News

Council News

Concerning the handling of bash-completion and of phase functions in eclasses in general, the council decided to take no action. The former should be handled by the shell-tools team, the latter needs more discussion on the mailing lists.

Then we had two hot topics. The first was the games team policy; the council clarified that the games team has in no way authority over game ebuilds maintained by other developers. In addition, the games team should elect a lead in the near future; if it doesn’t, it will be considered dysfunctional. Tim Harder (radhermit) acts as interim lead and organizes the elections.

Next, rumors about the handling of dynamic dependencies in Portage had sparked quite a stir. The council asked the Portage team not to remove dynamic dependency handling before they have worked out and presented a good plan for how Gentoo would work without it. Portage tree policies, and the handling of eclasses and virtuals in particular, need to be clarified.

Finally, the list of planned features for EAPI 6 was amended with two items, namely additional options for configure and a non-runtime-switchable ||= () or-dependency.

Gentoo Developer Moves

Summary

Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.

Changes

  • Ian Stakenvicius (axs) joined the multilib project
  • Michał Górny (mgorny) joined the QA team
  • Kristian Fiskerstrand (k_f) joined the Security team
  • Richard Freeman (rich0) joined the systemd team
  • Pavlos Ratis (dastergon) joined the Gentoo Infrastructure team
  • Patrice Clement (monsieur) and Ian Stakenvicius (axs) joined the perl team
  • Chris Reffett (creffett) joined the Wiki team
  • Pavlos Ratis (dastergon) left the KDE project
  • Dirkjan Ochtman (djc) left the ComRel project

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17653
Ebuilds 37397
Architecture Stable Testing Total % of Packages
alpha 3661 574 4235 23.99%
amd64 10895 6263 17158 97.20%
amd64-fbsd 0 1573 1573 8.91%
arm 2692 1755 4447 25.19%
arm64 570 32 602 3.41%
hppa 3073 496 3569 20.22%
ia64 3196 626 3822 21.65%
m68k 614 98 712 4.03%
mips 0 2410 2410 13.65%
ppc 6841 2475 9316 52.77%
ppc64 4332 971 5303 30.04%
s390 1464 349 1813 10.27%
sh 1650 427 2077 11.77%
sparc 4135 922 5057 28.65%
sparc-fbsd 0 317 317 1.80%
x86 11572 5297 16869 95.56%
x86-fbsd 0 3241 3241 18.36%

gmn-portage-stats-2014-09

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201408-19 app-office/openoffice-bin (and 3 more) OpenOffice, LibreOffice: Multiple vulnerabilities 283370
201408-18 net-analyzer/nrpe NRPE: Multiple Vulnerabilities 397603
201408-17 app-emulation/qemu QEMU: Multiple vulnerabilities 486352
201408-16 www-client/chromium Chromium: Multiple vulnerabilities 504328
201408-15 dev-db/postgresql-server PostgreSQL: Multiple vulnerabilities 456080
201408-14 net-misc/stunnel stunnel: Information disclosure 503506
201408-13 dev-python/jinja Jinja2: Multiple vulnerabilities 497690
201408-12 www-servers/apache Apache HTTP Server: Multiple vulnerabilities 504990
201408-11 dev-lang/php PHP: Multiple vulnerabilities 459904
201408-10 dev-libs/libgcrypt Libgcrypt: Side-channel attack 519396
201408-09 dev-libs/libtasn1 GNU Libtasn1: Multiple vulnerabilities 511536
201408-08 sys-apps/file file: Denial of Service 505534
201408-07 media-libs/libmodplug ModPlug XMMS Plugin: Multiple vulnerabilities 480388
201408-06 media-libs/libpng libpng: Multiple vulnerabilities 503014
201408-05 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 519790
201408-04 dev-util/catfish Catfish: Multiple Vulnerabilities 502536
201408-03 net-libs/libssh LibSSH: Information disclosure 503504
201408-02 media-libs/freetype FreeType: Arbitrary code execution 504088
201408-01 dev-php/ZendFramework Zend Framework: SQL injection 369139

Package Removals/Additions

Removals

Package Developer Date
virtual/perl-Class-ISA dilfridge 02 Aug 2014
virtual/perl-Filter dilfridge 02 Aug 2014
dev-vcs/gitosis robbat2 04 Aug 2014
dev-vcs/gitosis-gentoo robbat2 04 Aug 2014
virtual/python-argparse mgorny 11 Aug 2014
virtual/python-unittest2 mgorny 11 Aug 2014
app-emacs/sawfish ulm 19 Aug 2014
virtual/ruby-test-unit graaff 20 Aug 2014
games-action/d2x mr_bones_ 25 Aug 2014
games-arcade/koules mr_bones_ 25 Aug 2014
dev-lang/libcilkrts ottxor 26 Aug 2014

Additions

Package Developer Date
dev-python/oslotest prometheanfire 01 Aug 2014
dev-db/tokumx chainsaw 01 Aug 2014
sys-boot/gummiboot mgorny 02 Aug 2014
app-admin/supernova alunduil 03 Aug 2014
dev-db/mysql-cluster robbat2 03 Aug 2014
net-libs/txtorcon mrueg 04 Aug 2014
dev-ruby/prawn-table mrueg 06 Aug 2014
sys-apps/cv zx2c4 06 Aug 2014
media-libs/openctm amynka 07 Aug 2014
sci-libs/levmar amynka 07 Aug 2014
media-gfx/printrun amynka 07 Aug 2014
dev-python/alabaster idella4 10 Aug 2014
dev-haskell/regex-pcre slyfox 11 Aug 2014
dev-python/gcs-oauth2-boto-plugin vapier 12 Aug 2014
dev-python/astropy-helpers jlec 12 Aug 2014
dev-perl/Math-ModInt chainsaw 13 Aug 2014
dev-ruby/classifier-reborn mrueg 13 Aug 2014
media-gfx/meshlab amynka 14 Aug 2014
dev-libs/librevenge scarabeus 15 Aug 2014
www-apps/jekyll-coffeescript mrueg 15 Aug 2014
www-apps/jekyll-gist mrueg 15 Aug 2014
www-apps/jekyll-paginate mrueg 15 Aug 2014
www-apps/jekyll-watch mrueg 15 Aug 2014
sec-policy/selinux-salt swift 15 Aug 2014
www-apps/jekyll-sass-converter mrueg 15 Aug 2014
dev-ruby/rouge mrueg 15 Aug 2014
dev-ruby/ruby-beautify graaff 16 Aug 2014
sys-firmware/nvidia-firmware idl0r 17 Aug 2014
media-libs/libmpris2client ssuominen 20 Aug 2014
xfce-extra/xfdashboard ssuominen 20 Aug 2014
www-client/opera-developer jer 20 Aug 2014
dev-libs/openspecfun patrick 21 Aug 2014
dev-libs/marisa dlan 22 Aug 2014
media-sound/dcaenc beandog 22 Aug 2014
sci-mathematics/geogebra amynka 23 Aug 2014
dev-python/crumbs alunduil 25 Aug 2014
media-gfx/kxstitch kensington 26 Aug 2014
media-gfx/symboleditor kensington 26 Aug 2014
dev-perl/Sort-Key chainsaw 26 Aug 2014
dev-perl/Sort-Key-IPv4 chainsaw 26 Aug 2014
sci-visualization/yt xarthisius 26 Aug 2014
dev-ruby/globalid graaff 27 Aug 2014
dev-python/certifi idella4 27 Aug 2014
www-apps/jekyll-sitemap mrueg 27 Aug 2014
sys-apps/tuned dlan 29 Aug 2014
app-portage/g-sorcery jauhien 29 Aug 2014
app-portage/gs-elpa jauhien 29 Aug 2014
app-portage/gs-pypi jauhien 29 Aug 2014
app-admin/eselect-rust jauhien 29 Aug 2014
sys-block/raid-check chutzpah 29 Aug 2014
dev-python/python3-openid maksbotan 30 Aug 2014
dev-python/python-social-auth maksbotan 30 Aug 2014
dev-python/websocket-client alunduil 31 Aug 2014
dev-ruby/ethon graaff 31 Aug 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 August 2014 and 31 August 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-08

Bug Activity Number
New 1575
Closed 981
Not fixed 187
Duplicates 145
Total 6023
Blocker 5
Critical 19
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 102
2 Gentoo's Team for Core System packages 39
3 Gentoo KDE team 37
4 Default Assignee for Orphaned Packages 32
5 Julian Ospald (hasufell) 26
6 Gentoo Games 25
7 Portage team 25
8 Netmon Herd 24
9 Python Gentoo Team 23
10 Others 647

gmn-closed-2014-08

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 160
2 Gentoo Security 61
3 Default Assignee for Orphaned Packages 60
4 Gentoo KDE team 45
5 Gentoo's Team for Core System packages 45
6 Gentoo Linux Gnome Desktop Team 37
7 Gentoo Games 28
8 Portage team 28
9 Python Gentoo Team 26
10 Others 1084

gmn-opened-2014-08

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

September 05, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
Bash pitfalls: globbing everywhere! (September 05, 2014, 08:31 UTC)

Bash has many subtle pitfalls, some of them able to live unnoticed for a very long time. A common example of that kind of pitfall is ubiquitous filename expansion, or globbing. What many script writers fail to notice is that practically anything that looks like a pattern and is not quoted is subject to globbing, including unquoted variables.

There are two extra snags that add to this. Firstly, many people forget that not only asterisks (*) and question marks (?) make up patterns — square brackets ([) do as well. Secondly, by default bash (and POSIX shell) treats failed expansions literally. That is, if your glob does not match any file, you may not even know that you are globbing.
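A minimal illustration of both snags, run in a directory that happens to contain a file named a:

$ v='[abc]'
$ touch a
$ echo ${v}       # unquoted: the bracket pattern is globbed and matches the file a
$ echo "${v}"     # quoted: prints the literal [abc]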

It's all just a matter of running in the proper directory for the result to change. Of course, it's often unlikely — maybe even close to impossible. You can work towards preventing that by running in a safe directory. But in the end, writing predictable software is a fine quality.

How to notice mistakes?

Bash provides two major facilities that can help you catch such mistakes — the shopt options nullglob and failglob.

The nullglob option is a good default choice for your script. After enabling it, failing filename expansions result in no parameters rather than the verbatim pattern itself. This has two important implications.

Firstly, it makes iterating over optional files easy:

for f in a/* b/* c/*; do
    some_magic "${f}"
done

Without nullglob, the above may actually return a/* if no file matches the pattern. For this reason, you would need an additional check for the existence of the file inside the loop. With nullglob, the unmatched patterns are simply omitted. In fact, if none of the patterns match, the loop won't be run even once.

Secondly, it turns every accidental glob into null. While this isn't the most friendly warning and in fact it may have very undesired results, you're more likely to notice that something is going wrong.

The failglob option is better if you can assume you don't need to match files in its scope. In this case, bash treats every failing filename expansion as a fatal error and terminates execution with an appropriate message.

The main advantage of failglob is that it makes you aware of any mistake before someone hits it the hard way. Of course, that assumes the pattern doesn't accidentally match something already.
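A quick illustration (the exact error message may vary between bash versions):

$ shopt -s failglob
$ echo /nonexistent/*.conf
bash: no match: /nonexistent/*.conf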

There is also a choice of noglob. However, I wouldn't recommend it since it works around mistakes rather than fixing them, and makes the code rely on a non-standard environment.

Word splitting without globbing

One of the pitfalls I myself noticed lately is the attempt to use unquoted variable substitution to do word splitting. For example:

for i in ${v}; do
    echo "${i}"
done

At a first glance, everything looks fine. ${v} contains a whitespace-separated list of words and we iterate over each word. The pitfall here is that words in ${v} are subject to filename expansion. For example, if a lone asterisk would happen to be there (like v='10 * 4'), you'd actually get all files in the current directory. Unexpected, isn't it?

I am aware of three solutions that can be used to accomplish word splitting without implicit globbing:

  1. setting shopt -s noglob locally,
  2. setting GLOBIGNORE='*' locally,
  3. using the swiss army knife of read to perform word splitting.

Personally, I dislike the first two since they require set-and-restore magic, and the GLOBIGNORE one also has the penalty of doing the globbing and then discarding the result. Therefore, I will expand on using read:

read -r -d '' -a words <<<"${v}"
for i in "${words[@]}"; do
    echo "${i}"
done

While normally read is used to read from files, we can use the here string syntax of bash to feed the variable into it. The -r option disables backslash escape processing that is undesired here. -d '' causes read to process the whole input and not stop at any delimiter (like newline). -a words causes it to put the split words into array ${words[@]} — and since we know how to safely iterate over an array, the underlying issue is solved.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
32bit Madness (September 05, 2014, 07:41 UTC)

This week I ran into a funny issue doing backups with rsync:

rsnapshot/weekly.3/server/storage/lost/of/subdirectories/some-stupid.file => rsnapshot/daily.0/server/storage/lost/of/subdirectories/some-stupid.file
ERROR: out of memory in make_file [generator]
rsync error: error allocating core memory buffers (code 22) at util.c(117) [generator=3.0.9]
rsync error: received SIGUSR1 (code 19) at main.c(1298) [receiver=3.0.9]
rsync: connection unexpectedly closed (2168136360 bytes received so far) [sender]
rsync error: error allocating core memory buffers (code 22) at io.c(605) [sender=3.0.9]
Oopsiedaisy, rsync ran out of memory. But ... this machine has 8GB RAM, plus 32GB Swap ?!
So I re-ran this and started observing, and BAM, it fails again. With ~4GB RAM free.

4GB you say, eh? That smells of ... 2^32 ...
For doing the copying I was using sysrescuecd, and then it became obvious to me: All binaries are of course 32bit!

So now I'm doing a horrible hack of "linux64 chroot /mnt/server" so that I have a 64bit environment that does not randomly run out of address space. Plus 3 new bugs for the Gentoo livecd, which fails to appreciate USB and other things.
Who would have thought that a 16TB partition could make rsync stumble over address space limits ...


Fig. 1: Iron Penguin

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers. The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team and likewhoa. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.15.6, Xorg 1.16.0, KDE 4.13.3, Gnome 3.12.2, XFCE 4.10, Fluxbox 1.3.5, LXQT Desktop 0.7.0, i3 Desktop 2.8, Firefox 31.0, LibreOffice 4.2.5.2, Gimp 2.8.10-r1, Blender 2.71-r1, Amarok 2.8.0-r2, Chromium 37.0.2062.35 and much more ...
  • If you want to see if your package is included, we have generated both the x86 package list and the amd64 package list. The FAQ is located at FAQ. DVD cases and covers for the 20140826 release are located at Artwork. Persistence mode is back in the 20140826 release!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20140826 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit userland while using the provided 32-bit userland. The livedvd-amd64-multilib-20140826 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

September 03, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
AMD HSA (September 03, 2014, 07:25 UTC)

With the release of the "Kaveri" APUs AMD has released some quite intriguing technology. The idea of the "APU" is a blend of CPU and GPU, which AMD calls "HSA" - Heterogeneous System Architecture.
What does this mean for us? In theory, once software catches up, it'll be a lot easier to use GPU-acceleration (e.g. OpenCL) within normal applications.

One big advantage seems to be that CPU and GPU share the system memory, so with the right drivers you should be able to do zero-copy GPU processing. No more host-to-GPU copy and other waste of time.

So far there hasn't been any driver support to take advantage of that. Here's the good news: As of a week or two ago there is driver support. Still very alpha, but ... at last, drivers!

On the kernel side there's the kfd driver, which piggybacks on radeon. It's available in a slightly very patched kernel from AMD. During bootup it looks like this:

[    1.651992] [drm] radeon kernel modesetting enabled.
[    1.657248] kfd kfd: Initialized module
[    1.657254] Found CRAT image with size=1440
[    1.657257] Parsing CRAT table with 1 nodes
[    1.657258] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657260] CU CPU: cores=4 id_base=16
[    1.657261] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657262] CU GPU: simds=32 id_base=-2147483648
[    1.657263] Found memory entry in CRAT table with proximity_domain=0
[    1.657264] Found memory entry in CRAT table with proximity_domain=0
[    1.657265] Found memory entry in CRAT table with proximity_domain=0
[    1.657266] Found memory entry in CRAT table with proximity_domain=0
[    1.657267] Found cache entry in CRAT table with processor_id=16
[    1.657268] Found cache entry in CRAT table with processor_id=16
[    1.657269] Found cache entry in CRAT table with processor_id=16
[    1.657270] Found cache entry in CRAT table with processor_id=17
[    1.657271] Found cache entry in CRAT table with processor_id=18
[    1.657272] Found cache entry in CRAT table with processor_id=18
[    1.657273] Found cache entry in CRAT table with processor_id=18
[    1.657274] Found cache entry in CRAT table with processor_id=19
[    1.657274] Found TLB entry in CRAT table (not processing)
[    1.657275] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657277] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657280] Found TLB entry in CRAT table (not processing)
[    1.657286] Creating topology SYSFS entries
[    1.657316] Finished initializing topology ret=0
[    1.663173] [drm] initializing kernel modesetting (KAVERI 0x1002:0x1313 0x1002:0x0123).
[    1.663204] [drm] register mmio base: 0xFEB00000
[    1.663206] [drm] register mmio size: 262144
[    1.663210] [drm] doorbell mmio base: 0xD0000000
[    1.663211] [drm] doorbell mmio size: 8388608
[    1.663280] ATOM BIOS: 113
[    1.663357] radeon 0000:00:01.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[    1.663359] radeon 0000:00:01.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[    1.663360] [drm] Detected VRAM RAM=1024M, BAR=256M
[    1.663361] [drm] RAM width 128bits DDR
[    1.663471] [TTM] Zone  kernel: Available graphics memory: 7671900 kiB
[    1.663472] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[    1.663473] [TTM] Initializing pool allocator
[    1.663477] [TTM] Initializing DMA pool allocator
[    1.663496] [drm] radeon: 1024M of VRAM memory ready
[    1.663497] [drm] radeon: 1024M of GTT memory ready.
[    1.663516] [drm] Loading KAVERI Microcode
[    1.667303] [drm] Internal thermal controller without fan control
[    1.668401] [drm] radeon: dpm initialized
[    1.669403] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    1.685757] [drm] PCIE GART of 1024M enabled (table at 0x0000000000277000).
[    1.685894] radeon 0000:00:01.0: WB enabled
[    1.685905] radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff880429c5bc00
[    1.685908] radeon 0000:00:01.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff880429c5bc04
[    1.685910] radeon 0000:00:01.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff880429c5bc08
[    1.685912] radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff880429c5bc0c
[    1.685914] radeon 0000:00:01.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff880429c5bc10
[    1.686373] radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000076c98 and cpu addr 0xffffc90012236c98
[    1.686375] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    1.686376] [drm] Driver supports precise vblank timestamp query.
[    1.686406] radeon 0000:00:01.0: irq 83 for MSI/MSI-X
[    1.686418] radeon 0000:00:01.0: radeon: using MSI.
[    1.686441] [drm] radeon: irq initialized.
[    1.689611] [drm] ring test on 0 succeeded in 3 usecs
[    1.689699] [drm] ring test on 1 succeeded in 2 usecs
[    1.689712] [drm] ring test on 2 succeeded in 2 usecs
[    1.689849] [drm] ring test on 3 succeeded in 2 usecs
[    1.689856] [drm] ring test on 4 succeeded in 2 usecs
[    1.711523] tsc: Refined TSC clocksource calibration: 3393.828 MHz
[    1.746010] [drm] ring test on 5 succeeded in 1 usecs
[    1.766115] [drm] UVD initialized successfully.
[    1.767829] [drm] ib test on ring 0 succeeded in 0 usecs
[    2.268252] [drm] ib test on ring 1 succeeded in 0 usecs
[    2.712891] Switched to clocksource tsc
[    2.768698] [drm] ib test on ring 2 succeeded in 0 usecs
[    2.768819] [drm] ib test on ring 3 succeeded in 0 usecs
[    2.768870] [drm] ib test on ring 4 succeeded in 0 usecs
[    2.791599] [drm] ib test on ring 5 succeeded
[    2.812675] [drm] Radeon Display Connectors
[    2.812677] [drm] Connector 0:
[    2.812679] [drm]   DVI-D-1
[    2.812680] [drm]   HPD3
[    2.812682] [drm]   DDC: 0x6550 0x6550 0x6554 0x6554 0x6558 0x6558 0x655c 0x655c
[    2.812683] [drm]   Encoders:
[    2.812684] [drm]     DFP2: INTERNAL_UNIPHY2
[    2.812685] [drm] Connector 1:
[    2.812686] [drm]   HDMI-A-1
[    2.812687] [drm]   HPD1
[    2.812688] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
[    2.812689] [drm]   Encoders:
[    2.812690] [drm]     DFP1: INTERNAL_UNIPHY
[    2.812691] [drm] Connector 2:
[    2.812692] [drm]   VGA-1
[    2.812693] [drm]   HPD2
[    2.812695] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
[    2.812695] [drm]   Encoders:
[    2.812696] [drm]     CRT1: INTERNAL_UNIPHY3
[    2.812697] [drm]     CRT1: NUTMEG
[    2.924144] [drm] fb mappable at 0xC1488000
[    2.924147] [drm] vram apper at 0xC0000000
[    2.924149] [drm] size 9216000
[    2.924150] [drm] fb depth is 24
[    2.924151] [drm]    pitch is 7680
[    2.924428] fbcon: radeondrmfb (fb0) is primary device
[    2.994293] Console: switching to colour frame buffer device 240x75
[    2.999979] radeon 0000:00:01.0: fb0: radeondrmfb frame buffer device
[    2.999981] radeon 0000:00:01.0: registered panic notifier
[    3.008270] ACPI Error: [\_SB_.ALIB] Namespace lookup failure, AE_NOT_FOUND (20131218/psargs-359)
[    3.008275] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATC0] (Node ffff88042f04f028), AE_NOT_FOUND (20131218/psparse-536)
[    3.008282] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATCS] (Node ffff88042f04f000), AE_NOT_FOUND (20131218/psparse-536)
[    3.509149] kfd: kernel_queue sync_with_hw timeout expired 500
[    3.509151] kfd: wptr: 8 rptr: 0
[    3.509243] kfd kfd: added device (1002:1313)
[    3.509248] [drm] Initialized radeon 2.37.0 20080528 for 0000:00:01.0 on minor 0
It is recommended to add udev rules:
# cat /etc/udev/rules.d/kfd.rules 
KERNEL=="kfd", MODE="0666"
(this might not be the best way to do it, but we're just here to test if things work at all ...)

AMD has provided a small shell script to test if things work:
# ./kfd_check_installation.sh 

Kaveri detected:............................Yes
Kaveri type supported:......................Yes
Radeon module is loaded:....................Yes
KFD module is loaded:.......................Yes
AMD IOMMU V2 module is loaded:..............Yes
KFD device exists:..........................Yes
KFD device has correct permissions:.........Yes
Valid GPU ID is detected:...................Yes

Can run HSA.................................YES
So that's a good start. Then you need some support libs ... which I've ebuildized in the most horrible ways
These ebuilds can be found here

Since there's at least one binary file with undeclared license and some other inconsistencies I cannot recommend installing these packages right now.
And of course I hope that AMD will release the sourcecode of these libraries ...

There's an example "vector_copy" program included; it mostly works, but appears to go into an infinite loop. Output looks like this:
# ./vector_copy 
Initializing the hsa runtime succeeded.
Calling hsa_iterate_agents succeeded.
Checking if the GPU device is non-zero succeeded.
Querying the device name succeeded.
The device name is Spectre.
Querying the device maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
Creating the brig module from vector_copy.brig succeeded.
Creating the hsa program succeeded.
Adding the brig module to the program succeeded.
Finding the symbol offset for the kernel succeeded.
Finalizing the program succeeded.
Querying the kernel descriptor address succeeded.
Creating a HSA signal succeeded.
Registering argument memory for input parameter succeeded.
Registering argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Registering the argument buffer succeeded.
Dispatching the kernel succeeded.
^C
Big thanks to AMD for giving us geeks some new toys to work with, and I hope it becomes a reliable and efficient platform to do some epic numbercrunching :)

August 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Showing return code in PS1 (August 30, 2014, 23:14 UTC)

If you do daily management on Unix/Linux systems, then checking the return code of a command is something you’ll do often. If you do SELinux development, you might not even notice that a command has failed without checking its return code, as policies might prevent the application from showing any output.

To make sure I don’t miss out on application failures, I wanted to add the return code of the last executed command to my PS1 (i.e. the prompt displayed on my terminal).
I wasn’t able to add it to the prompt easily – in fact, I had to use a bash feature called the prompt command.

When the PROMPT_COMMAND variable is defined, bash will execute its content (which I declare as a function) to generate the prompt. Inside the function, I obtain the return code of the last command ($?) and then add it to the PS1 variable. This results in the following code snippet inside my ~/.bashrc:

export PROMPT_COMMAND=__gen_ps1
 
function __gen_ps1() {
  local EXITCODE="$?";
  # Enable colors for ls, etc.  Prefer ~/.dir_colors #64489
  if type -P dircolors >/dev/null ; then
    if [[ -f ~/.dir_colors ]] ; then
      eval $(dircolors -b ~/.dir_colors)
    elif [[ -f /etc/DIR_COLORS ]] ; then
      eval $(dircolors -b /etc/DIR_COLORS)
    fi
  fi
 
  if [[ ${EUID} == 0 ]] ; then
    PS1="RC=${EXITCODE} \[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] "
  else
    PS1="RC=${EXITCODE} \[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
  fi
}

With it, my prompt now nicely shows the return code of the last executed command. Neat.

Edit: Sean Patrick Santos showed me my utter failure in that this can be accomplished with the PS1 variable immediately, without using the overhead of the PROMPT_COMMAND. Just make sure to properly escape the $ sign which I of course forgot in my late-night experiments :-(.
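For reference, a minimal sketch of that approach (colors left out): with single quotes the $? is only expanded each time the prompt is displayed, not once at assignment time.

PS1='RC=$? \u@\h \w \$ '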

Luca Barbato a.k.a. lu_zero (homepage, bugs)
PowerPC is back (and little endian) (August 30, 2014, 17:32 UTC)

Yesterday I fixed a PowerPC issue that had been around for ages; it is an endianness issue, and it is (funnily enough) on the little-endian flavour of the architecture.

PowerPC

I have some ties with this architecture, since my interest in it (and Altivec/VMX in particular) is what made me start contributing to MPlayer while fixing issues on Gentoo, and from there hack on the FFmpeg of the time, meet the VLC people, and decide to part ways with Michael Niedermayer and, together with the other main contributors of FFmpeg, create Libav. Quite a long way back in time.

Big endian, Little Endian

It is a bit surprising that IBM decided to use little endian (since big endian is MUCH nicer for I/O processing such as networking) but they might have their reasons.

PowerPC has traditionally always been bi-endian, with the ability to switch on the fly between the two (this made foreign-endian simulators slightly less annoying to manage), but the main endianness had always been big.

This brings us to a quite interesting problem: some, if not most, of the PowerPC code had been written with big-endian in mind. Luckily, since most of the code was written using C intrinsics (bless whoever made the Altivec intrinsics not as terrible as the other ones around), it won't be that hard to recycle most of it.

More will follow.

August 29, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened august meeting (August 29, 2014, 14:43 UTC)

Another month has passed, so we had another online meeting to discuss the progress within Gentoo Hardened.

Lead elections

The yearly lead elections within Gentoo Hardened were up again. Zorry (Magnus Granberg) was re-elected as project lead so doesn’t need to update his LinkedIn profile yet ;-)

Toolchain

blueness (Anthony G. Basile) has been working on the uclibc stages for some time. Due to the configurable nature of these setups, many /etc/portage files were provided as part of the stages, which shouldn’t happen. Work is under way to update this accordingly.

For the musl setup, blueness is also rebuilding the stages to use a symbolic link to the dynamic linker (/lib/ld-linux-arch.so) as recommended by the musl maintainers.

Kernel and grsecurity with PaX

A bug has been submitted which shows that large binary files (in the bug, a chrome binary with debug information is shown to be more than 2 GB in size) cannot be pax-mark’ed, with paxctl informing the user that the file is too big. The problem occurs when the PaX marks are in the ELF header (as the application mmaps the binary) – users of extended-attribute-based PaX markings do not have this problem. blueness is working on making things a bit more intelligent, and on fixing this.

SELinux

I have been making a few changes to the SELinux setup:

  • The live ebuilds (those with version 9999 which use the repository policy rather than snapshots of the policies) are now being used as “master” in case of releases: the ebuilds can just be copied to the right version to support the releases. The release script inside the repository is adjusted to reflect this as well.
  • The SELinux eclass now supports two variables, SELINUX_GIT_REPO and SELINUX_GIT_BRANCH, which allows users to use their own repository, and developers to work in specific branches together. By setting the right value in the users’ make.conf switching policy repositories or branches is now a breeze.
  • Another change in the SELinux eclass is that, after the installation of SELinux policies, we will check the reverse dependencies of the policy package and relabel the files of these packages. This allows us to only have RDEPEND dependencies towards the SELinux policy packages (if the application itself does not otherwise link with libselinux), making the dependency tree within the package manager more correct. We still need to update these packages to drop the DEPEND dependency, which is something we will focus on in the next few months.
  • In order to support improved cooperation between SELinux developers in the Gentoo Hardened team – perfinion (Jason Zaman) is in the queue for becoming a new developer in our midst – a coding style for SELinux policies is being drafted up. This is of course based on the coding style of the reference policy, but with some Gentoo-specific improvements and more clarifications.
  • perfinion has been working on improving the SELinux support in OpenRC (release 0.13 and higher), making some of the additions that we had to make in the past – such as the selinux_gentoo init script – obsolete.

The meeting also discussed a few bugs in more detail, but if you really want to know, just hang on and wait for the IRC logs ;-) Other usual sections (system integrity and profiles) did not have any notable topics to describe.

August 22, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

As of today, more than 50% of the 37527 ebuilds in the Gentoo portage tree use the newest ebuild API (EAPI) version, EAPI=5!
The details of the various EAPIs can be found in the package manager specification (PMS); the most notable new feature of EAPI 5, which has sped up acceptance a lot, is the introduction of so-called subslots. A package A can specify a subslot, and another package B that depends on it can specify that it needs to be rebuilt when the subslot of A changes. This leads to much more elegant solutions for many of the link or installation path problems that revdep-rebuild, emerge @preserved-rebuild, or e.g. perl-cleaner try to solve... Another useful new feature in EAPI=5 is the masking of USE flags specifically for stable-marked ebuilds.
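A minimal sketch of what this looks like in an ebuild (package name, version and subslot are made up):

# dev-libs/libfoo-1.2.ebuild: the part after "/" is the subslot,
# bumped whenever the library changes its ABI
SLOT="0/1.2"

# a consumer ebuild: the ":=" slot operator records libfoo's subslot at build time,
# so the package manager schedules a rebuild whenever that subslot changes
RDEPEND="dev-libs/libfoo:="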
You can follow the adoption of EAPIs in the portage tree on an automatically updated graph page.

August 19, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching to new laptop (August 19, 2014, 20:11 UTC)

I’m slowly but surely starting to switch to a new laptop. The old one hasn’t completely died (yet), but given that I had to force its CPU frequency to the lowest setting or the CPU would burn up (and the system would suddenly shut down due to heat issues), and that the connection between the battery and the laptop fails (so even a new battery didn’t help), I couldn’t really use it as a laptop… well, let’s say the new laptop is welcome ;-)

Building Gentoo isn’t an issue (having only a few hours per day to work on it is) and while I’m at it, I’m also experimenting with EFI (currently still without secure boot, but with EFI) and such. Considering that the Gentoo Handbook needs quite a few updates (and I’m thinking of doing more than just small updates), knowing how EFI works is a Good Thing ™.

For those interested – the EFI stub kernel instructions in the article on the wiki, and also in Greg’s wonderful post on booting a self-signed Linux kernel (which I will do later) work pretty well. I didn’t try out the “Adding more kernels” section in it, as I need to be able to (sometimes) edit the boot options (which isn’t easy to accomplish with EFI stub-supporting kernels afaics). So I installed Gummiboot (and created a wiki article on it).

Lots of things still planned, so little time. But at least building chromium is now a bit faster – instead of 5 hours and 16 minutes, I can now enjoy the newer versions after a little less than 40 minutes.

August 14, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

Foreword

Let’s say we have to design an application that should span across multiple datacenters while being able to scale as easily as firing up a new vm/container without the need to update any kind of configuration.

Facing this kind of challenge is exciting and requires us to address a few key scaffolding points before actually starting to code something :

  • having a robust and yet versatile application container to run our application
  • having a datacenter aware, fault detecting and service discovery service

Seeing the title of this article, the two components I’ll demonstrate are obviously uWSGI and Consul which can now work together thanks to the uwsgi-consul plugin.

While this article's example is written in python, you can benefit from the same features in all the languages supported by uWSGI, which include go, ruby, perl and php !

Our first service discovering application

The application will demonstrate how simple it is for a client to discover all the available servers running a specific service on a given port. The best part is that the services will be registered and deregistered automatically by uWSGI as they’re loaded and unloaded.

The demo application logic is as follows :

  1. uWSGI will load two server applications which are each responsible for providing the specified service on the given port
  2. uWSGI will automatically register the configured service into Consul
  3. uWSGI will also automatically register a health check for the configured service into Consul so that Consul will also be able to detect any failure of the service
  4. Consul will then respond to any client requesting the list of the available servers (nodes) providing the specified service
  5. The client will query Consul for the service and get either an empty response (no server available / loaded) or the list of the available servers

Et voilà, the client can dynamically detect new/obsolete servers and start working !
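As a rough illustration of the discovery step (5), a client can simply ask the local Consul agent over its standard HTTP catalog API (output trimmed and illustrative):

$ curl -s http://127.0.0.1:8500/v1/catalog/service/consul-demo-server
[{"Node":"drakar","Address":"xx.xx.xx.xx","ServiceName":"consul-demo-server","ServicePort":2001}]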

Setting up uWSGI and its Consul plugin

On Gentoo Linux, you’ll just have to run the following commands to get started (other users refer to the uWSGI documentation or your distro’s package manager). The plugin will be built by hand as I’m still not sure how I’ll package the uWSGI external plugins…

$ sudo ACCEPT_KEYWORDS="~amd64" emerge uwsgi
$ cd /usr/lib/uwsgi/
$ sudo uwsgi --build-plugin https://github.com/unbit/uwsgi-consul
$ cd -

 

You’ll have installed the uwsgi-consul plugin which you should see here :

$ ls /usr/lib/uwsgi/consul_plugin.so
/usr/lib/uwsgi/consul_plugin.so

 

That’s all we need to have uWSGI working with Consul.

Setting up a Consul server

Gentoo users will need to add the ultrabug overlay (use layman) and then install consul (other users refer to the Consul documentation or your distro’s package manager).

$ sudo layman -a ultrabug
$ sudo ACCEPT_KEYWORDS="~amd64" USE="web" emerge consul

 

Running the server and its UI is also quite straightforward. For this example, we will run it directly from a dedicated terminal so you can also enjoy the logs and see what’s going on (Gentoo users have an init script and conf.d ready for them should they wish to go further).

Open a new terminal and run :

$ consul agent -data-dir=/tmp/consul-agent -server -bootstrap -ui-dir=/var/lib/consul/ui -client=0.0.0.0

 

You’ll see consul running and waiting for work. You can already enjoy the web UI by pointing your browser to http://127.0.0.1:8500/ui/.

Running the application

To get this example running, we’ll use the uwsgi-consul-demo code that I prepared.

First of all we’ll need the consulate python library (available on pypi via pip). Gentoo users can just install it (also from the ultrabug overlay added before) :

$ sudo ACCEPT_KEYWORDS="~amd64" emerge consulate

 

Now let’s clone the demo repository and get into the project’s directory.

$ git clone git@github.com:ultrabug/uwsgi-consul-demo.git
$ cd uwsgi-consul-demo

 

First, we’ll run the client which should report that no server is available yet. We will keep this terminal open to see the client detecting in real time the appearance and disappearance of the servers as we start and stop uwsgi :

$ python client.py 
no consul-demo-server available
[...]
no consul-demo-server available

 

Open a new terminal and get inside the project’s directory. Let’s have uWSGI load the two servers and register them in Consul :

$ uwsgi --ini uwsgi-consul-demo.ini --ini uwsgi-consul-demo.ini:server1 --ini uwsgi-consul-demo.ini:server2
[...]
* server #1 is up on port 2001


* server #2 is up on port 2002

[consul] workers ready, let's register the service to the agent
[consul] service consul-demo-server registered succesfully
[consul] workers ready, let's register the service to the agent
[consul] service consul-demo-server registered succesfully

 

Now let’s check back our client terminal, hooray it has discovered the two servers on the host named drakar (that’s my local box) !

consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2002
consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2001

Expanding our application

Ok it works great on our local machine but we want to see how to add more servers to the fun and scale dynamically.

Let’s add another machine (named cheetah here) to the fun and have servers running there also while our client is still running on our local machine.

On cheetah :

  • install uWSGI as described earlier
  • install Consul as described earlier

Run a Consul agent (no need for a server) and tell it to work with your already running consul server on your box (drakar in my case) :

$ /usr/bin/consul agent -data-dir=/tmp/consul-agent -join drakar -ui-dir=/var/lib/consul/ui -client=0.0.0.0

The -join <your host or IP> is the important part.

 

Now run uWSGI so it starts and registers two new servers on cheetah :

$ uwsgi --ini uwsgi-consul-demo.ini --ini uwsgi-consul-demo.ini:server1 --ini uwsgi-consul-demo.ini:server2

 

And check the miracle on your client terminal still running on your local box, the new servers have appeared and will disappear if you stop uwsgi on the cheetah node :

consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2001
consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2002
consul-demo-server found on node cheetah (yy.yy.yy.yy) using port 2001
consul-demo-server found on node cheetah (yy.yy.yy.yy) using port 2002

Go mad

Check the source code, it’s so simple and efficient you’ll cry ;)

I hope this example has given you some insights and ideas for your current or future application designs !

August 11, 2014
Gentoo Monthly Newsletter: July 2014 (August 11, 2014, 00:00 UTC)

Gentoo News

Trustee Election Results

The two open seats for the Gentoo Trustees for the 2014-2016 term will be:

  • Alec Warner (antarus) First Term
  • Roy Bamford (neddyseagoon) Fourth Term

Since there were only two nominees for the two seats up for election, there was no official election. They were appointed uncontested.

Council Election Results

The Gentoo Council for the 2014-2015 term will be:

  • Anthony G. Basile (blueness)
  • Ulrich Müller (ulm)
  • Andreas K. Hüttel (dilfridge)
  • Richard Freeman (rich0)
  • William Hubbs (williamh)
  • Donnie Berkholz (dberkholz)
  • Tim Harder (radhermit)

Official announcement here.

Gentoo Developer Moves

Summary

Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.

Changes

The following developers have recently changed roles:

  • Projects:
    • mgorny joined Portage
    • k_f joined Gentoo-keys
    • zlogene joined Proxy maintainers
    • civil joined Qt
    • pesa replaced pinkbyte as Qt lead
    • TomWij removed himself from Bug-wranglers
    • Gentoo sound migrated to wiki
    • Artwork migrated to wiki
    • Desktop-util migrated to wiki
    • Accessibility migrated to wiki
    • Enlightenment migrated to wiki
  • Herds:
    • eselect herd was added
    • zlogene joined s390
    • twitch153 joined tools-portage
    • pinkbyte left leechcraft
    • k_f joined crypto

Additions

The following developers have recently joined the project:

  • Xavier Miller (xaviermiller)
  • Patrice Clement (monsieurp)
  • Amy Winston (amynka)
  • Kristian Fiskerstrand (k_f)

Returning Dev

  • Tom Gall (tgall)

Moves

The following developers recently left the Gentoo project:
None this month

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17595
Ebuilds 37628
Architecture Stable Testing Total % of Packages
alpha 3658 561 4219 23.98%
amd64 10863 6239 17102 97.20%
amd64-fbsd 0 1577 1577 8.96%
arm 2681 1743 4424 25.14%
arm64 559 32 591 3.36%
hppa 3061 482 3543 20.14%
ia64 3189 612 3801 21.60%
m68k 618 87 705 4.01%
mips 0 2402 2402 13.65%
ppc 6838 2353 9191 52.24%
ppc64 4326 866 5192 29.51%
s390 1477 331 1808 10.28%
sh 1670 403 2073 11.78%
sparc 4114 898 5012 28.49%
sparc-fbsd 0 317 317 1.80%
x86 11535 5288 16823 95.61%
x86-fbsd 0 3237 3237 18.40%

gmn-portage-stats-2014-08

Security

Package Removals/Additions

Removals

Package Developer Date
perl-core/Class-ISA dilfridge 05 Jul 2014
dev-python/argparse mgorny 06 Jul 2014
dev-python/ordereddict mgorny 06 Jul 2014
perl-core/Filter dilfridge 07 Jul 2014
app-text/qgoogletranslator grozin 09 Jul 2014
dev-lisp/openmcl grozin 09 Jul 2014
dev-lisp/openmcl-build-tools grozin 09 Jul 2014
net-libs/cyassl blueness 15 Jul 2014
dev-ruby/text-format graaff 18 Jul 2014
dev-ruby/jruby-debug-base graaff 18 Jul 2014
games-util/rubygfe graaff 18 Jul 2014
perl-core/PodParser dilfridge 20 Jul 2014
virtual/perl-PodParser dilfridge 21 Jul 2014
perl-core/digest-base dilfridge 22 Jul 2014
virtual/perl-digest-base dilfridge 22 Jul 2014
perl-core/i18n-langtags dilfridge 22 Jul 2014
virtual/perl-i18n-langtags dilfridge 22 Jul 2014
perl-core/locale-maketext dilfridge 23 Jul 2014
virtual/perl-locale-maketext dilfridge 23 Jul 2014
perl-core/net-ping dilfridge 23 Jul 2014
virtual/perl-net-ping dilfridge 23 Jul 2014
virtual/perl-Switch dilfridge 25 Jul 2014
perl-core/Switch dilfridge 25 Jul 2014
x11-misc/keytouch pacho 27 Jul 2014
x11-misc/keytouch-editor pacho 27 Jul 2014
media-video/y4mscaler pacho 27 Jul 2014
dev-python/manifestdestiny pacho 27 Jul 2014
dev-cpp/libsexymm pacho 27 Jul 2014

Additions

Package Developer Date
www-client/vimb radhermit 01 Jul 2014
dev-util/libsparse jauhien 01 Jul 2014
dev-python/docker-py chutzpah 01 Jul 2014
dev-util/ext4_utils jauhien 01 Jul 2014
dev-haskell/base16-bytestring gienah 02 Jul 2014
dev-haskell/boxes gienah 02 Jul 2014
dev-haskell/chell gienah 02 Jul 2014
dev-haskell/conduit-extra gienah 02 Jul 2014
dev-haskell/cryptohash-conduit gienah 02 Jul 2014
dev-haskell/ekg-core gienah 02 Jul 2014
dev-haskell/equivalence gienah 02 Jul 2014
dev-haskell/hastache gienah 02 Jul 2014
dev-haskell/options gienah 02 Jul 2014
dev-haskell/patience gienah 02 Jul 2014
dev-haskell/prelude-extras gienah 02 Jul 2014
dev-haskell/tf-random gienah 02 Jul 2014
dev-haskell/quickcheck-instances gienah 02 Jul 2014
dev-haskell/streaming-commons gienah 02 Jul 2014
dev-haskell/vector-th-unbox gienah 02 Jul 2014
dev-haskell/tasty-th gienah 02 Jul 2014
dev-haskell/dlist-instances gienah 02 Jul 2014
dev-haskell/temporary-rc gienah 02 Jul 2014
dev-haskell/stmonadtrans gienah 02 Jul 2014
dev-haskell/data-hash gienah 02 Jul 2014
dev-haskell/yesod-auth-hashdb gienah 02 Jul 2014
sci-mathematics/agda-lib-ffi gienah 02 Jul 2014
dev-haskell/lifted-async gienah 02 Jul 2014
dev-haskell/wai-conduit gienah 02 Jul 2014
dev-haskell/shelly gienah 02 Jul 2014
dev-haskell/chell-quickcheck gienah 03 Jul 2014
dev-haskell/tasty-ant-xml gienah 03 Jul 2014
dev-haskell/lcs gienah 03 Jul 2014
dev-haskell/tasty-golden gienah 03 Jul 2014
sec-policy/selinux-tcsd swift 04 Jul 2014
dev-perl/Class-ISA dilfridge 05 Jul 2014
net-wireless/gqrx zerochaos 06 Jul 2014
dev-perl/Filter dilfridge 07 Jul 2014
app-misc/abduco xmw 10 Jul 2014
virtual/perl-Math-BigRat dilfridge 10 Jul 2014
virtual/perl-bignum dilfridge 10 Jul 2014
dev-perl/Net-Subnet chainsaw 11 Jul 2014
dev-java/opencsv ercpe 11 Jul 2014
dev-java/trident ercpe 11 Jul 2014
dev-java/htmlparser-org ercpe 11 Jul 2014
dev-java/texhyphj ercpe 12 Jul 2014
dev-util/vmtouch dlan 12 Jul 2014
sys-block/megactl robbat2 14 Jul 2014
dev-python/fexpect jlec 14 Jul 2014
mail-filter/postfwd mschiff 15 Jul 2014
dev-python/wheel djc 15 Jul 2014
dev-ruby/celluloid-io mrueg 15 Jul 2014
sys-process/tiptop patrick 16 Jul 2014
dev-ruby/meterpreter_bins zerochaos 17 Jul 2014
sys-power/thermald dlan 17 Jul 2014
net-analyzer/check_mk dlan 17 Jul 2014
app-admin/fleet alunduil 19 Jul 2014
perl-core/Pod-Parser dilfridge 20 Jul 2014
virtual/perl-Pod-Parser dilfridge 21 Jul 2014
sci-libs/libcerf ottxor 21 Jul 2014
games-fps/enemy-territory-omnibot ottxor 22 Jul 2014
dev-libs/libflatarray slis 22 Jul 2014
perl-core/Digest dilfridge 22 Jul 2014
virtual/perl-Digest dilfridge 22 Jul 2014
net-libs/stem mrueg 22 Jul 2014
perl-core/I18N-LangTags dilfridge 22 Jul 2014
virtual/perl-I18N-LangTags dilfridge 22 Jul 2014
perl-core/Locale-Maketext dilfridge 22 Jul 2014
virtual/perl-Locale-Maketext dilfridge 23 Jul 2014
perl-core/Net-Ping dilfridge 23 Jul 2014
virtual/perl-Net-Ping dilfridge 23 Jul 2014
dev-libs/libbson ultrabug 23 Jul 2014
sci-libs/silo slis 24 Jul 2014
dev-python/pgpdump jlec 24 Jul 2014
net-libs/libasr zx2c4 25 Jul 2014
dev-libs/npth zx2c4 25 Jul 2014
net-wireless/bladerf-firmware zerochaos 25 Jul 2014
net-wireless/bladerf-fpga zerochaos 25 Jul 2014
net-wireless/bladerf zerochaos 25 Jul 2014
sci-libs/cgnslib slis 25 Jul 2014
sci-visualization/visit slis 25 Jul 2014
dev-perl/Switch dilfridge 25 Jul 2014
dev-util/objconv slyfox 28 Jul 2014
app-crypt/monkeysign k_f 29 Jul 2014
virtual/bitcoin-leveldb blueness 29 Jul 2014
dev-db/percona-server robbat2 29 Jul 2014
sys-cluster/galera robbat2 30 Jul 2014
dev-db/mariadb-galera robbat2 30 Jul 2014
net-im/corebird dlan 30 Jul 2014
dev-libs/libpfm slis 31 Jul 2014
dev-perl/ExtUtils-Config civil 31 Jul 2014
dev-libs/papi slis 31 Jul 2014
dev-perl/ExtUtils-Helpers civil 31 Jul 2014
sys-cluster/hpx slis 31 Jul 2014
dev-perl/ExtUtils-InstallPaths civil 31 Jul 2014
dev-perl/Module-Build-Tiny civil 31 Jul 2014
www-plugins/pipelight ryao 31 Jul 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 July 2014 and 31 July 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-07

Bug Activity Number
New 1405
Closed 958
Not fixed 164
Duplicates 180
Total 5912
Blocker 5
Critical 19
Major 69

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo KDE team 41
2 Gentoo Security 38
3 Java team 29
4 Gentoo's Team for Core System packages 28
5 Gentoo Linux Gnome Desktop Team 24
6 Gentoo Games 24
7 Netmon Herd 23
8 Qt Bug Alias 22
9 Perl Devs @ Gentoo 22
10 Others 706

gmn-closed-2014-07

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 85
2 Gentoo Linux Gnome Desktop Team 64
3 Gentoo Security 56
4 Gentoo's Team for Core System packages 53
5 Julian Ospald (hasufell) 48
6 Netmon Herd 47
7 Gentoo KDE team 47
8 Python Gentoo Team 31
9 media-video herd 30
10 Others 943

gmn-opened-2014-07

Tip of the month

(by Sven Vermeulen)
Launching commands in background once (instead of scheduled through cron)

  • Have sys-process/at installed.
  • Have /etc/init.d/atd started.

Use things like:
~$ echo "egencache --update --repo=gentoo --jobs=4" | at now + 10 minutes
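And if you change your mind, the standard at(1) tools let you review or cancel the queued job:
~$ atq
~$ atrm 1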

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

August 09, 2014
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Introducing pyoembed (August 09, 2014, 21:46 UTC)

Warning: This is a (very) delayed announcement! ;-)

oEmbed is an open standard for embedded content. It allows users to embed some resource, like a picture or a video, in a web page using only the resource URL, without knowing the details of how to embed the resource in a web page.

oEmbed isn't new stuff. It was created around 2008, and despite not being widely supported by content providers, it is supported by some big players, like YouTube, Vimeo, Flickr and Instagram, making its usage highly viable.

To support the oEmbed standard, the content provider just needs to provide a simple API endpoint that receives a URL and a few other parameters, like the maximum allowed height/width, and returns a JSON or XML object with ready-to-use embeddable code.

The content provider API endpoint can be previously known by the oEmbed client, or auto-discovered using some meta tags added to the resource's HTML page. This is the point where the standard isn't precise enough: not all providers support auto-discovery of the API endpoint, nor are all providers properly listed in the oEmbed specification. Proper oEmbed clients should try both approaches, looking for known providers first and falling back to auto-discovered endpoints, if possible.

Each of the existing Python libraries for oEmbed decided to follow only one of the mentioned approaches, without caring about the other, and thus fails to support relevant providers. This is the reason why I decided to start writing pyoembed!

pyoembed is a simple and easy to use implementation of the oEmbed standard for Python that supports both auto-discovered and explicitly defined providers, covering most (if not all) of the relevant providers.

pyoembed's architecture makes it easy to add new providers and supports most of the existing providers out of the box.

To install it, just type:

$ pip install pyoembed

Gentoo users can install it from gentoo-x86:

# emerge -av pyoembed

pyoembed is developed and managed using Github, the repository is publicly available:

https://github.com/rafaelmartins/pyoembed

A Jenkins instance runs the unit tests and the integration tests automatically; you can check the results here:

https://ci.rgm.io/view/pyoembed/

The integration tests are expected to fail from time to time, because they rely on external URLs that may be unavailable while the tests are running.

pyoembed is released under a 3-clause BSD license.

Enjoy!

Sven Vermeulen a.k.a. swift (homepage, bugs)
Some changes under the hood (August 09, 2014, 19:45 UTC)

In between conferences, technical writing jobs and traveling, we did a few changes under the hood for SELinux in Gentoo.

First of all, new policies have been bumped and also stabilized (2.20130411-r3 is now stable, 2.20130411-r5 is ~arch). These have a few updates (merges from upstream), and r5 also has preliminary support for tmpfiles (at least the OpenRC implementation of it), which is made part of the selinux-base-policy package.

The ebuilds to support new policy releases now are relatively simple copies of the live ebuilds (which always contain the latest policies) so that bumping (either by me or other developers) is easy enough. There’s also a release script in our policy repository which tags the right git commit (the point at which the release is made), creates the necessary patches, uploads them, etc.

One of the changes made is to “drop” the BASEPOL variable. In the past, BASEPOL was a variable inside the ebuilds that pointed to the right patchset (and base policy) as we initially supported policy modules of different base releases. However, that was a mistake and we quickly moved to bumping all policies with every release, but kept the BASEPOL variable in it. Now, BASEPOL is “just” the ${PVR} value of the ebuild so no longer needs to be provided. In the future, I’ll probably remove BASEPOL from the internal eclass and the selinux-base* packages as well.

A more important change to the eclass is support for the SELINUX_GIT_REPO and SELINUX_GIT_BRANCH variables (for live ebuilds, i.e. those with the 9999 version). If set, they pull from the given repository (and branch) instead of the default hardened-refpolicy.git repository. This allows developers to do some testing on a different branch easily, or other users to use their own policy repository while still enjoying the SELinux integration support in Gentoo through the sec-policy/* packages.
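
For example, assuming the eclass picks these variables up from the build environment (e.g. via /etc/portage/make.conf), something like the following should make the live ebuilds pull from your own branch; the repository URL and branch name below are placeholders:

SELINUX_GIT_REPO="https://example.com/your/hardened-refpolicy.git"
SELINUX_GIT_BRANCH="my-testing-branch"

followed by re-emerging the 9999 version of the sec-policy packages you want to test.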

Finally, I wrote up a first attempt at our coding style, heavily based on the coding style from the reference policy of course (as our policy is still following this upstream project). This should allow the team to work better together and to decide on naming autonomously (instead of spending hours discussing and settling on something as silly as an interface or boolean name ;-)

August 07, 2014
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
Can your distro compile Chromium? (August 07, 2014, 07:20 UTC)

Chromium is moving towards using C++11. Even more, it's going to require either gcc-4.8 or clang.

Distros like Ubuntu, Mageia, Fedora, openSUSE, Arch, CentOS, and Slackware are already using gcc-4.8 or later in their latest stable release.

On the other hand, Debian Wheezy (7.0) has gcc-4.7.2. Gentoo is using gcc-4.7.3 in stable.

I started a thread on gentoo-dev, gcc-4.8 may be needed in stable for www-client/chromium-38.x. There is a tracker for gcc-4.8 stabilization, bug #516152. There is also a gcc-4.8 porting tracker, bug #461954.

Please consider testing gcc-4.8 on your stable Gentoo system, and file bugs for any package that fails to compile or needs a newer version stabilized to work with the new gcc. I have recompiled all packages, the kernel, and GRUB without problems.
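
For reference, testing it on an otherwise stable box is roughly the usual keyword, switch and rebuild routine; the exact gcc version below is only illustrative:

# echo "sys-devel/gcc:4.8" >> /etc/portage/package.accept_keywords
# emerge --ask --oneshot sys-devel/gcc:4.8
# gcc-config -l
# gcc-config x86_64-pc-linux-gnu-4.8.3
# env-update && source /etc/profile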

The title of this post is deliberately a bit similar to my earlier post Is your distro fast enough for Chromium? This browser project is pushing hard towards shorter release cycles and the latest software. I consider that a good thing. Now we just need to keep up with the updates, and any help is welcome.

August 03, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

When portage installs a package onto your system, it caches information about that package in a directory at /var/db/pkg/<cat>/<pkg>/, where <cat> is the category (i.e. ${CATEGORY}) and <pkg> is the package name, version number and revision number (i.e. ${PF}). This information can then be used at a later time to tell portage what’s installed on a system: what packages were installed, what USE flags are set on each package, what CFLAGS were used, etc. Even the ebuild itself is cached, so that if it is removed from the tree, and consequently from your system upon `emerge --sync`, you have a local copy in VDB to uninstall or otherwise continue working with the package.

If you take a look under /var/db/pkg, you’ll find some interesting and some not so interesting files for each <cat>/<pkg>. Among the less interesting are files like DEPEND, RDEPEND, FEATURES, IUSE, USE, which just contain the same values as the ebuild variables of the same name. This is redundant, because that information is in the ebuild itself (which is also cached), but it is more readily available this way, since one doesn’t have to re-parse the ebuild to obtain it. More interesting is information gathered about the package as it is installed, like CONTENTS, which contains a list of all the regular files, directories, and symlinks which belong to the package, along with their MD5SUMs. This list is used to remove files from the system when uninstalling the package. Environment information is also cached, like CBUILD, CHOST, CFLAGS, CXXFLAGS and LDFLAGS, which affect the build of compiled packages, and environment.bz2, which contains the entire shell environment that portage ran in, including all shell variables and functions from inherited eclasses.

But perhaps the most interesting information, and the most expensive to recalculate, is cached in NEEDED and NEEDED.ELF.2. The latter supersedes the former, which is only kept for backward compatibility, so let’s just concentrate on NEEDED.ELF.2. It’s a list of every ELF object that is installed for a package, along with its ARCH/ABI information, its SONAME if it is a shared object (readelf -d <obj> | grep SONAME, or scanelf -S), any RPATH used to search for its needed shared objects (readelf -d <obj> | grep RPATH, or scanelf -r), and any NEEDED shared objects (the SONAMEs of libraries) that it links against (readelf -d <obj> | grep NEEDED, or scanelf -n). [1] Unless you’re working with some exotic system, like an embedded image where everything is statically linked, your userland utilities and applications depend on dynamic linking, meaning that when a process is loaded from the executable on your hard drive, the dynamic linker has to make sure that its needed libraries are also loaded and then do some relocation magic to make sure that unresolved symbols in your executable get mapped to appropriate memory locations in the libraries.
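
As a quick illustration of how readily available this cached information is, here is a minimal sketch using portage’s Python API (the same vartree dbapi the post comes back to below); the curl atom is just an example of an installed package:

import portage

vardb = portage.db[portage.root]["vartree"].dbapi

# Look up an installed package and print a few of its cached VDB fields.
for cpv in vardb.match("net-misc/curl"):
    slot, use, cflags = vardb.aux_get(cpv, ["SLOT", "USE", "CFLAGS"])
    print(cpv)
    print("  SLOT:   %s" % slot)
    print("  USE:    %s" % use)
    print("  CFLAGS: %s" % cflags)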

The subtleties of linking are beyond the scope of this blog posting [2], but I think it’s clear from the above that one can construct a “directed linkage graph” [3] of dependencies between all the ELF objects on a system. An executable can link to a library which in turn links to another, and so on, usually back to your libc [4]. `readelf -d <obj> | grep NEEDED` only gives you the immediate dependencies, but if you follow these through recursively, you’ll get all the needed libraries that an executable needs to run. `ldd <obj>` is a shell script which provides this information, as does ldd.py from the pax-utils package, which also does some pretty indentation to show the depth of each dependency. If this is sounding vaguely familiar, it’s because portage’s dependency rules “mimic” the underlying linking which is needed at both compile time and at run time. Let’s take an example: curl compiled with polarssl as its SSL backend:

# ldd /usr/bin/curl | grep ssl
        libpolarssl.so.6 => /usr/lib64/libpolarssl.so.6 (0x000003a3d06cd000)
# ldd /usr/lib64/libpolarssl.so.6
        linux-vdso.so.1 (0x0000029c1ae12000)
        libz.so.1 => /lib64/libz.so.1 (0x0000029c1a929000)
        libc.so.6 => /lib64/libc.so.6 (0x0000029c1a56a000)
        /lib64/ld-linux-x86-64.so.2 (0x0000029c1ae13000)

Now let’s see this dependency reflected in the ebuild:

# cat net-misc/curl/curl-7.36.0.ebuild
RDEPEND="
        ...
        ssl? (
                ...
                curl_ssl_polarssl? ( net-libs/polarssl:= app-misc/ca-certificates )
                ...
        )
        ...

Nothing surprising. However, there is one subtlety. What happens if you update polarssl to a version which is not exactly backwards compatible? Then curl, which properly linked against the old version of polarssl, doesn’t quite work with the new version. This can happen when the library changes its public interface, by adding new functions, removing older ones and/or changing the behavior of existing functions. Usually upstream indicates this change in the library itself by bumping the SONAME:

# readelf -d /usr/lib64/libpolarssl.so.1.3.7 | grep SONAME
0x000000000000000e (SONAME) Library soname: [libpolarssl.so.6]

But how does curl know about the change when emerging an updated version of polarssl? That’s where subslotting comes in. To communicate the reverse dependency, the RDEPEND string in curl’s ebuild uses := as the slot indicator for polarssl. This means that upgrading polarssl to a new subslot will trigger a recompile of curl:

# emerge =net-libs/polarssl-1.3.8 -vp

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild r U ] net-libs/polarssl-1.3.8:0/7 [1.3.7:0/6] USE="doc sse2 static-libs threads%* zlib -havege -programs {-test}" ABI_X86="(64) (-32) (-x32)" 1,686 kB
[ebuild rR ] net-misc/curl-7.36.0 USE="ipv6 ldap rtmp ssl static-libs threads -adns -idn -kerberos -metalink -ssh {-test}" CURL_SSL="polarssl -axtls -gnutls -nss -openssl" 0 kB

Here the onus is on the downstream maintainer to know when the API breaks backwards compatibility and to subslot accordingly. Going through with this build and then checking the new SONAME, we find:

# readelf -d /usr/lib/libpolarssl.so.1.3.8 | grep SONAME
0x000000000000000e (SONAME) Library soname: [libpolarssl.so.7]

Aha! Notice the SONAME jumped from .6 for polarssl-1.3.7 to .7 for 1.3.8. Notice also that the SONAME version number follows the subslot value. I’m sure this was a conscious effort by hasufell and tommyd, the ebuild maintainers, to make life easy.

So I hope my example has shown the importance of tracing forward and reverse linkage between the ELF objects on a system [5]. Subslotting is relatively new, but the need to trace linking has always been there. There was, and still is, revdep-rebuild (from gentoolkit), which uses output from ldd to construct a “directed linkage graph” [6], but it is relatively slow. Unfortunately, it recalculates all the NEEDED.ELF.2 information on the system in order to reconstruct and invert the directed linkage graph. Subslotting has partially obsoleted revdep-rebuild because portage can now track the reverse dependencies, but it has not completely obsoleted it. revdep-rebuild falls back on the SONAMEs in the shared objects themselves — an error here is an upstream error, in which the maintainers of the library overlooked updating the value of CURRENT in the build system, usually in a line of some Makefile.am that looks like

LDFLAGS += -version-info $(CURRENT):$(REVISION):$(AGE)

But an error in subslotting is a downstream error, where the maintainers didn’t properly subslot their package and any dependencies to reflect upstream’s changing API. So in some ways, these tools complement each other.

Now we come to the real point of this blog post: there is no reason for revdep-rebuild to run ldd on every ELF object on the system when it can obtain that information from VDB. This doesn’t save time on inverting the directed graph, but it does save time on running ldd (effectively /lib64/ld-linux-x86-64.so.2 --list) on every ELF object in the system. So guess what the python version, revdep-rebuild.py, does? You guessed it: it uses the VDB information, which is exported by portage via something like

import portage
vardb = portage.db[portage.root]["vartree"].dbapi

So what’s the difference in time? On my system right now, we’re looking at a difference between approximately 5 minutes for revdep-rebuild versus about 20 seconds for revdep-rebuild.py. [7] Since this information is gathered at build time, there is no reason for any Package Management System (PMS) not to export it via some standardized API. portage does so in an awkward fashion, but it does export it. paludis does not export NEEDED.ELF.2, although it does export other VDB stuff. I can’t speak to future PMS’s, but I don’t see why they should not be held to a standard.

Above I argued that exporting VDB is useful for utilities that maintain consistency between executables and the shared objects that they consume. I suspect one could counter-argue that it doesn’t need to be exported because “revdep-rebuild” can be made part of portage or whatever your PMS, but I hope my next point will show that exporting NEEDED.ELF.2 information has other uses besides “consistent linking”. So a stronger point is that not only should a PMS export this information, but it should provide some well documented API for use by other tools. It would be nice for every PMS to have the same API, preferably via python bindings, but as long as it is well documented, it will be useful. (E.g. webapp-config supports both portage and paludis. WebappConfig/wrapper.py has a simple little switch between “import portage; ... portage.settings['CONFIG_PROTECT'] ... ” and “cave print-id-environment-variable -b --format '%%v\n' --variable-name CONFIG_PROTECT %s/%s ...”.)

So besides consistent linking, what else could make use of NEEDED.ELF.2? In the world of Hardened Gentoo, to increase security, a PaX-patched kernel holds processes to much higher standards with respect to their use of memory. [8] Unfortunately, this breaks some packages which want to employ insecure methods, like RWX mmap-ings. Code is compiled “on-the-fly” by JIT compilers, which typically create such mappings as an area to which they first write and then execute. However, this is dangerous because it can open up pathways by which arbitrary code can be injected into a running process. So, PaX does not allow RWX mmap-ings — it doesn’t allow them unless the kernel is told otherwise. This is where the PaX flags come in. In the JIT example, marking the executable with `paxctl-ng -m` will turn off PaX’s MPROTECT and allow the RWX mmap-ing. The issue of consistent PaX markings between executables and their libraries arises when it is the library that needs the markings. But when loaded, it is the markings of the executable, not the library, which set the PaX restrictions on the running process. [9] So if it’s the library that needs the markings, you have to migrate the markings from the library to the executable. Aha! Here we go again: we need to answer the question “what are all the consumers of a particular library, so we can migrate its flags to them?” We can, as revdep-rebuild does, re-read all the ELF objects on the system, reconstruct the directed linkage graph, then invert it; or we can just start from the already gathered VDB information and save some time. Like revdep-rebuild and revdep-rebuild.py, I wrote two utilities. The original, revdep-pax, did forward and reverse migration of PaX flags by gathering information with ldd. It was horribly slow, 5 to 10 minutes depending on the number of objects in $PATH and shared objects reported by `ldconfig -p`. I then rewrote it to use the VDB information, and it accomplished the same task in a fraction of the time [10]. Since constructing and inverting the directed linkage graph is such a useful operation, I figured I’d abstract the bare essential code into a python class which you can get at [11]. The data structure containing the entire graph is a compound python dictionary of the form

{
        abi1 : { path_to_elf1 : [ soname1, soname2, ... ], ... },
        abi2 : { path_to_elf2 : [ soname3, soname4, ... ], ... },
        ...
}

whereas the inverted graph has the form

{
        abi1 : { soname1 : [ path_to_elf1, path_to_elf2, ... ], ... },
        abi2 : { soname2 : [ path_to_elf3, path_to_elf4, ... ], ... },
        ...
}

Simple!
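
For the curious, here is a bare-bones sketch of how such a graph can be built straight from VDB. This is not the link_graph.py class referenced at [11]; it reads /var/db/pkg directly and assumes, as far as I can tell, the semicolon-separated NEEDED.ELF.2 layout of abi;object;soname;rpath;needed, with the needed field itself being a comma-separated list of SONAMEs:

import glob
import os

VDB = "/var/db/pkg"

def forward_graph():
    # { abi : { path_to_elf : [ soname1, soname2, ... ] } }
    graph = {}
    for needed_file in glob.glob(os.path.join(VDB, "*", "*", "NEEDED.ELF.2")):
        with open(needed_file) as f:
            for line in f:
                fields = line.strip().split(";")
                if len(fields) < 5:
                    continue
                abi, elf, soname, rpath, needed = fields[:5]
                graph.setdefault(abi, {})[elf] = [s for s in needed.split(",") if s]
    return graph

def inverted_graph(graph):
    # { abi : { soname : [ path_to_elf1, path_to_elf2, ... ] } }
    inverted = {}
    for abi, objects in graph.items():
        for elf, sonames in objects.items():
            for soname in sonames:
                inverted.setdefault(abi, {}).setdefault(soname, []).append(elf)
    return inverted

inverted = inverted_graph(forward_graph())
# e.g. list every consumer of polarssl for each ABI found in VDB:
# for abi in inverted:
#     print(abi, inverted[abi].get("libpolarssl.so.6", []))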

Okay, up to now I have concentrated on exporting NEEDED.ELF.2 information. So what about the rest of the VDB information? Is it useful? A lot of questions regarding Gentoo packages can be answered by “grepping the tree.” If you use portage as your PMS, then the same sort of grep-sed-awk foo magic can be performed on /var/db/pkg to answer similar questions. However, this assumes that the PMS’s cached information is in plain ASCII format. If a PMS decides to use something like Berkeley DB or sqlite, then we’re going to need a tool to read the db format, which the PMS itself should provide. Because I do a lot of release engineering of uclibc and musl stages, one need that often comes up is to compare what’s installed in the stage3 tarballs for the various arches and alternative libc’s. So, I run some variation of the following script

#!/usr/bin/env python

import portage, re

portdb = portage.db[portage.root]["vartree"].dbapi

arm_stable = open('arm-stable.txt', 'w')
arm_testing = open('arm-testing.txt', 'w')

for pkg in portdb.cpv_all():
        keywords = portdb.aux_get(pkg, ["KEYWORDS"])[0]

        arches = re.split('\s+', keywords)
        for a in arches:
                if re.match('^arm$', a):
                        arm_stable.write("%s\n" % pkg)
                if re.match('^~arm$', a):
                        arm_testing.write("%s\n" % pkg)

arm_stable.close()
arm_testing.close()

in a stage3-amd64-uclibc-hardened chroot to see what stable packages in the amd64 tarball are ~arm. [12]  I run similar scripts in other chroots to do pairwise comparisons. This gives me some clue as to what may be falling behind in which arches, and helps me keep some consistency between my various stage3 tarballs. Of course there are other utilities to do the same, like eix, gentoolkit etc., but then one still has to resort to parsing the output of those utilities to get the answers you want. An API for VDB information allows you to write your own custom utility to answer precisely the questions you need answered. I’m sure you can multiply these examples.
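
The pairwise comparison itself can be as simple as running comm over two sorted package lists taken from different chroots (the file names here are hypothetical):

$ comm -3 <(sort amd64-uclibc-pkgs.txt) <(sort armv7a-uclibc-pkgs.txt)

comm -3 suppresses the lines common to both files, leaving only the packages present in one stage but not the other.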

Let me close with a confession. The above is propaganda for the upcoming GLEP 64 which I just wrote [13]. The purpose of the GLEP is to delineate what information should be exported by all PMS’s with particular emphasis on NEEDED.ELF.2 for the reasons stated above.  Currently portage does provide NEEDED.ELF.2 but paludis does not.  I’m not sure what future PMS’s might or might not provide, so let’s set a standard now for an important feature.

 

Notes:

[1] You can see where NEEDED.ELF.2 is generated for details. Take a look at line ~520 of /usr/lib/portage/bin/misc-functions.sh, or search for the comment “Create NEEDED.ELF.2 regardless of RESTRICT=binchecks”.

[2] A simple hands on tutorial can be found at http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html. It also includes dynamic linking via dlopen() which complicates the nice neat graph that can be constructed from NEEDED.ELF.2.

[3] I’m using the term “directed graph” as defined in graph theory. See http://en.wikipedia.org/wiki/Directed_graph. The nodes of the graph are each ELF object and the directed edges are from the consumer of the shared object to the shared object.

[4] Well, not quite. If you run readelf -d on /lib/libc.so.6 you’ll see that it links back to /lib/ld-linux-x86-64.so.2, which doesn’t NEED anything else. The former is strictly your standard C library (man 7 libc) while the latter is the dynamic linker/loader (man 8 ld.so).

[5] I should mention parenthetically that there are other executable/library file formats, such as Mach-O used on MacOS X. The above arguments translate over to any executable format which permits shared libraries and dynamic linking. My prejudice for ELF is because it is the primary executable format used on Linux and BSD systems.

[6] I’m coining this term here. If you read the revdep-rebuild code, you won’t see reference to any graph there. Bash doesn’t readily lend itself to the neat data structures that python does.

[7] Just a word of caution: revdep-rebuild.py is still in development and does warn when you run it: “This is a development version, so it may not work correctly. The original revdep-rebuild script is installed as revdep-rebuild.sh”.

[8] See https://wiki.gentoo.org/wiki/Hardened/PaX_Quickstart for an explanation of what PaX does as well as how it works.

[9] grep the contents of fs/binfmt_elf.c for PT_PAX_FLAGS and CONFIG_PAX_XATTR_PAX_FLAGS to see how these markings are used when the process is loaded from the ELF object. You can see the PaX protection on a running process by using `cat /proc/<pid>/maps | grep ^PaX` or `pspax` from the pax-utils package.

[10] The latest version off the git repo is at http://git.overlays.gentoo.org/gitweb/?p=proj/elfix.git;a=blob;f=scripts/revdep-pax.

[11] http://git.overlays.gentoo.org/gitweb/?p=proj/elfix.git;a=blob;f=pocs/link-graph/link_graph.py.

[12] These stages are distributed at http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-uclibc-hardened/ and http://distfiles.gentoo.org/experimental/arm/uclibc/.

[13] https://bugs.gentoo.org/show_bug.cgi?id=518630