
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Zack Medico

Last updated:
September 01, 2014, 11:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

August 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Showing return code in PS1 (August 30, 2014, 23:14 UTC)

If you do daily management on Unix/Linux systems, then checking the return code of a command is something you’ll do often. If you do SELinux development, you might not even notice that a command has failed without checking its return code, as policies might prevent the application from showing any output.

To make sure I don’t miss out on application failures, I wanted to add the return code of the last executed command to my PS1 (i.e. the prompt displayed on my terminal).
I wasn’t able to add it to the prompt easily – in fact, I had to use a bash feature called the prompt command.

When the PROMPT_COMMAND variable is defined, bash will execute its content (which I declare as a function) to generate the prompt. Inside the function, I obtain the return code of the last command ($?) and then add it to the PS1 variable. This results in the following code snippet inside my ~/.bashrc:

export PROMPT_COMMAND=__gen_ps1
 
function __gen_ps1() {
  local EXITCODE="$?";
  # Enable colors for ls, etc.  Prefer ~/.dir_colors #64489
  if type -P dircolors >/dev/null ; then
    if [[ -f ~/.dir_colors ]] ; then
      eval $(dircolors -b ~/.dir_colors)
    elif [[ -f /etc/DIR_COLORS ]] ; then
      eval $(dircolors -b /etc/DIR_COLORS)
    fi
  fi
 
  if [[ ${EUID} == 0 ]] ; then
    PS1="RC=${EXITCODE} \[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] "
  else
    PS1="RC=${EXITCODE} \[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
  fi
}

With it, my prompt now nicely shows the return code of the last executed command. Neat.

Edit: Sean Patrick Santos showed me my utter failure in that this can be accomplished with the PS1 variable immediately, without using the overhead of the PROMPT_COMMAND. Just make sure to properly escape the $ sign which I of course forgot in my late-night experiments :-(.
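
For reference, a minimal sketch of that direct approach (the single quotes matter: they defer the expansion of $? until bash renders the prompt):

PS1='RC=$? \u@\h \w \$ '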

Luca Barbato a.k.a. lu_zero (homepage, bugs)
PowerPC is back (and little endian) (August 30, 2014, 17:32 UTC)

Yesterday I fixed a PowerPC issue that had been there for ages; it is an endianness issue, and it is (funnily enough) on the little-endian flavour of the architecture.

PowerPC

I have some ties with this architecture: my interest in it (and Altivec/VMX in particular) is what made me start contributing to MPlayer while fixing issues on Gentoo, and from there hack on the FFmpeg of the time, meet the VLC people, and eventually decide to part ways with Michael Niedermayer and, with the other main contributors of FFmpeg, create Libav. Quite a long way back in time.

Big endian, Little Endian

It is a bit surprising that IBM decided to use little endian (since big endian is MUCH nicer for I/O processing such as networking), but they might have their reasons.

PowerPC has traditionally been bi-endian, with the ability to switch on the fly between the two (this made having foreign-endian simulators slightly less annoying to manage), but the main endianness had always been big.

This brings us to a quite interesting problem: some, if not most, of the PowerPC code had been written thinking in big-endian. Luckily, since most of the code written was using C intrinsics (bless whoever made the Altivec intrinsics not as terrible as the other ones around), it won't be that hard to recycle most of the code.
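
To see what actually changes between the two flavours, here is a quick probe (illustrative only, nothing Libav-specific):

import struct

# pack a 32-bit word in native byte order and inspect the first byte:
# big-endian machines store 0x01 first, little-endian ones store 0x04
first = struct.pack('=I', 0x01020304)[:1]
print('big endian' if first == b'\x01' else 'little endian')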

More will follow.

August 29, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened august meeting (August 29, 2014, 14:43 UTC)

Another month has passed, so we had another online meeting to discuss the progress within Gentoo Hardened.

Lead elections

The yearly lead elections within Gentoo Hardened were up again. Zorry (Magnus Granberg) was re-elected as project lead so doesn’t need to update his LinkedIn profile yet ;-)

Toolchain

blueness (Anthony G. Basile) has been working on the uclibc stages for some time. Due to the configurable nature of these setups, many /etc/portage files were provided as part of the stages, which shouldn't happen. Work is under way to fix this.

For the musl setup, blueness is also rebuilding the stages to use a symbolic link to the dynamic linker (/lib/ld-linux-arch.so) as recommended by the musl maintainers.

Kernel and grsecurity with PaX

A bug has been submitted which shows that large binary files (in the bug, a chrome binary with debug information is shown to be more than 2 GB in size) cannot be pax-mark'ed, with paxctl informing the user that the file is too big. The problem occurs when the PaX marks are stored in the ELF file itself (as the application mmaps the binary) – users of extended-attribute-based PaX markings do not have this problem. blueness is working on making things a bit more intelligent, and on fixing this.

SELinux

I have been making a few changes to the SELinux setup:

  • The live ebuilds (those with version 9999 which use the repository policy rather than snapshots of the policies) are now being used as “master” in case of releases: the ebuilds can just be copied to the right version to support the releases. The release script inside the repository is adjusted to reflect this as well.
  • The SELinux eclass now supports two variables, SELINUX_GIT_REPO and SELINUX_GIT_BRANCH, which allow users to use their own repository, and developers to work together in specific branches. By setting the right values in the user's make.conf, switching policy repositories or branches is now a breeze (see the sketch after this list).
  • Another change in the SELinux eclass is that, after the installation of SELinux policies, we will check the reverse dependencies of the policy package and relabel the files of these packages. This allows us to only have RDEPEND dependencies towards the SELinux policy packages (if the application itself does not otherwise link with libselinux), making the dependency tree within the package manager more correct. We still need to update these packages to drop the DEPEND dependency, which is something we will focus on in the next few months.
  • In order to support improved cooperation between SELinux developers in the Gentoo Hardened team – perfinion (Jason Zaman) is in the queue for becoming a new developer in our midst – a coding style for SELinux policies is being drafted. This is of course based on the coding style of the reference policy, but with some Gentoo-specific improvements and more clarifications.
  • perfinion has been working on improving the SELinux support in OpenRC (release 0.13 and higher), making some of the additions that we had to make in the past – such as the selinux_gentoo init script – obsolete.
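
As an illustration, overriding the policy source would look like this in the user's make.conf (the repository URL and branch name below are hypothetical):

SELINUX_GIT_REPO="https://git.example.com/my/hardened-refpolicy.git"
SELINUX_GIT_BRANCH="my-feature-branch"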

The meeting also discussed a few bugs in more detail, but if you really want to know, just hang on and wait for the IRC logs ;-) Other usual sections (system integrity and profiles) did not have any notable topics to describe.

August 28, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Did Apple lose its advantage? (August 28, 2014, 22:59 UTC)

Readers of my blog for a while probably know already that I've been an Apple user over time. What is not obvious is that I have scaled down my (personal) Apple usage over the past two years, mostly because of my habits, and partly because of Android and Linux getting better and better. One component is, though, that some of the advantages to be found when using Apple started to disappear for me.

I think that for me the start of the problems is to be found in the release of iOS 7. Besides my distaste for the new flashy UI, what I found is that it did not perform as well as previous releases. I think this is the same effect others have had. In particular, the biggest problem with it for me had to do with the way I started using my iPad while in Ireland. Since I now have access to a high-speed connection, I started watching more content in streaming. In particular, thanks to my multiple trips to the USA over the past year, I got access to more video content on the iTunes store, so I wanted to watch some of the new TV series through it.

Turned out that for a few versions, and I mean a few months, iOS was keeping the streamed content in the cache, not accounting for it anywhere, and never cleaning it up. The result was that after streaming half a series, I would get errors telling me the iPad storage was full, but there was no way from the device itself to clear the cache. Either you had to do a factory reset to drop all the content off the device, or you had to use a Windows application to remove the cache files manually. Not very nice.

Another very interesting problem with streaming the content: it can be slow. Not always, but it can. One night I wanted to watch The LEGO Movie, since I did not see it at the cinema. It's not available on the Irish Netflix, so I decided to rent it off iTunes. It took the iPad four hours to download it. It made no sense. And no, the connection was not hogged by something else, and running a SpeedTest from the tablet itself showed it had all the network capacity it needed.

The iPad is not, though, the only Apple device I own; I also bought an iPod Touch back in LA when my Classic died, even though I was not really happy with downgrading from 80GB down to 64GB. But it's mostly okay, as my main use for the iPod is to listen to audiobooks and podcasts when I sleep — which recently I have been doing through Creative D80 Bluetooth speakers, which are honestly not great but at least don't force me to wear earphones all night long.

I had no problem before switching the iPod from one computer to the next, as I moved from the iMac to a Windows disk for my laptop. When I decided to just use iTunes on the one Windows desktop I keep around (mostly to play games), a few things stopped working as intended. It might have been related to me dropping the iTunes Match subscription, but I'm not sure about that. But what happens is that only a single track for each of the albums was being copied to the iPod and nothing else.

I tried factory reset, cable and wireless sync, I tried deleting the iTunes data on my computer to force it to figure out the iPod is new, and the current situation I'm in is only partially working: the audiobooks have been synced, but without cover art and without the playlists — some of the audiobooks I have are part of a series, or are split in multiple files if I bought them before Audible started providing single-file downloads. This is of course not very good when the audio only lasts three hours, and then I start having nightmares.

It does not help that I can't listen to my audiobooks with VLC for Android, because it thinks that the chapter art is a video stream, and thus pauses the stream as soon as I turn off the screen. I should probably write a separate rant about the lack of proper audiobook tools for Android. Audible has an app, but it does not allow you to sideload audiobooks (i.e. stuff I ripped from my original CDs, or that I bought on iTunes), nor does it allow you to build a playlist of books, say for all the books in a series.

As I write this, I asked iTunes again to sync all the music to my iPod Touch as 128kbps AAC files (as otherwise it does not fit into the device); iTunes is now copying 624 files; I'm sure my collection contains more than 600 albums — and I would venture to say more than half I have in physical media. Mostly because no store allows me to buy metal in FLAC or ALAC. And before somebody suggests Jamendo or other similar services: yes, great, I actually bought lots of Jazz on Magnatune before it became a subscription service and I loved it, but that is not a replacement for mainstream content. Also, Magnatune has terrible security practices, don't use it.

Sorry Apple, but given these small-but-not-so-small issues with your software recently, I'm not going to buy any more devices from you. If any of the two devices I have fails, I'll just get someone to build a decent audiobook software for me one way or the other…

August 25, 2014
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo on the Odroid-U3 (August 25, 2014, 05:00 UTC)

Arm cross compiler setup and stuffs

This will set up a way to compile things for arm on your native system (amd64 for me)

emerge dev-embedded/u-boot-tools sys-devel/crossdev
crossdev -S -s4 -t armv7a-hardfloat-linux-gnueabi

Building the kernel

This assumes you have kernel sources; I'm testing 3.17-rc2 since support for the Odroid-U3 just landed upstream.

Also, I tend to build without modules, so keep that in mind.

# get the base config (for me, on an Odroid-U3)
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make exynos_defconfig
# change it to add what I want/need
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make menuconfig
# build the kernel
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make -j10

Setting up the SD Card

I tend to be generous, 10M for the bootloader

parted /dev/sdb mklabel msdos y
parted /dev/sdb mkpart p fat32 10M 200M
parted /dev/sdb mkpart p 200M 100%
parted /dev/sdb toggle 1 boot

mkfs.vfat /dev/sdb1
mkfs.ext4 /dev/sdb2

Building uboot

This may differ between boards, but should generally look like the following (I hear vanilla uboot works now).

I used the odroid-v2010.12 branch. One thing to note is that if it sees a zImage on the boot partition it will ONLY use that, which is kind of annoying.

git clone git://github.com/hardkernel/u-boot.git
cd u-boot
sed -i -e "s/soft-float/float-abi=hard -mfpu=vfpv3/g" arch/arm/cpu/armv7/config.mk
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make smdk4412_config
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make -j1
sudo "sh /home/USER/dev/arm/u-boot/sd_fuse/sd_fusing.sh /dev/sdb"

Copying the kernel/userland

sudo -i
mount /dev/sdb2 /mnt/gentoo
mount /dev/sdb1 /mnt/gentoo/boot
cp /home/USER/dev/linux/arch/arm/boot/dts/exynos4412-odroidu3.dtb /mnt/gentoo/boot/
cp /home/USER/dev/linux/arch/arm/boot/zImage /mnt/gentoo/boot/kernel-3.17-rc2.raw
cd /mnt/gentoo/boot
cat kernel-3.17-rc2.raw exynos4412-odroidu3.dtb > kernel-3.17-rc2

tar -xf /tmp/stage3-armv7a_hardfp-hardened-20140627.tar.bz2 -C /mnt/gentoo/

Setting up userland

I tend to just copy or generate a shadow file and overwrite the root entry in /etc/shadow...

The rest can be set up once booted.
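
As a rough sketch of the "generate" variant (the hash below is a placeholder):

# generate an MD5-crypt hash for the new root password
openssl passwd -1
# paste the output into the root line of the target's /etc/shadow, e.g.:
# root:$1$PLACEHOLDER:16000:0:99999:7:::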

Setting up the bootloader

put this in /mnt/gentoo/boot/boot.txt

setenv initrd_high "0xffffffff"
setenv fdt_high "0xffffffff"
setenv fb_x_res "1920"
setenv fb_y_res "1080"
setenv hdmi_phy_res "1080"
setenv bootcmd "fatload mmc 0:1 0x40008000 kernel-3.17-rc2; bootm 0x40008000"
setenv bootargs "console=tty1 console=ttySAC1,115200n8 fb_x_res=${fb_x_res} fb_y_res=${fb_y_res} hdmi_phy_res=${hdmi_phy_res} root=/dev/mmcblk0p2 rootwait ro mem=2047M"
boot

and run this

mkimage -A arm -T script -C none -n "Boot.scr for odroid-u3" -d boot.txt boot.scr

That should do it :D

I used steev (a fellow Gentoo dev) and http://www.funtoo.org/ODROID_U2 as sources.

August 22, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

As of today, more than 50% of the 37527 ebuilds in the Gentoo portage tree use the newest ebuild API (EAPI) version, EAPI=5!
The details of the various EAPIs can be found in the package manager specification (PMS); the most notable new feature of EAPI 5, which has sped up acceptance a lot, is the introduction of so-called subslots. A package A can specify a subslot, and another package B that depends on it can specify that it needs to be rebuilt when the subslot of A changes. This leads to much more elegant solutions for many of the link or installation path problems that revdep-rebuild, emerge @preserved-rebuild, or e.g. perl-cleaner try to solve... Another useful new feature in EAPI=5 is the masking of use-flags specifically for stable-marked ebuilds.
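
As a hedged illustration with hypothetical package names, the mechanism looks like this in ebuild syntax:

# library package "dev-libs/a" declares a subslot, for instance tracking its soname:
SLOT="0/5.2"
# a dependent package uses the := slot operator so that the package manager
# rebuilds it whenever the installed subslot of dev-libs/a changes:
RDEPEND="dev-libs/a:="
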
You can follow the adoption of EAPIs in the portage tree on an automatically updated graph page.

August 19, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching to new laptop (August 19, 2014, 20:11 UTC)

I’m slowly but surely starting to switch to a new laptop. The old one hasn’t completely died (yet), but given that I had to force its CPU frequency to the lowest setting or the CPU would burn (and the system suddenly shut down due to heat issues), and that the connection between the battery and the laptop fails (so even a new battery didn’t help), I couldn’t use it as a laptop any more… well, let’s say the new laptop is welcome ;-)

Building Gentoo isn’t an issue (having only a few hours per day to work on it is) and while I’m at it, I’m also experimenting with EFI (currently still without secure boot, but with EFI) and such. Considering that the Gentoo Handbook needs quite a few updates (and I’m thinking to do more than just small updates) knowing how EFI works is a Good Thing ™.

For those interested – the EFI stub kernel instructions in the article on the wiki, and also in Greg’s wonderful post on booting a self-signed Linux kernel (which I will do later) work pretty well. I didn’t try out the “Adding more kernels” section in it, as I need to be able to (sometimes) edit the boot options (which isn’t easy to accomplish with EFI stub-supporting kernels afaics). So I installed Gummiboot (and created a wiki article on it).

Lots of things still planned, so little time. But at least building chromium is now a bit faster – instead of 5 hours and 16 minutes, I can now enjoy the newer versions after a little less than 40 minutes.

August 17, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What's up with Semalt, then? (August 17, 2014, 18:05 UTC)

In my previous post on the matter, I called for a boycott of Semalt by blocking access to your servers from their crawler, after a very bad-looking exchange on Twitter with a supposed representative of theirs.

After I posted that, I got threatened by the same representative with a libel suit, even though that post was documenting their current practices rather than shaming them. This got enough attention from other people who have been following the Semalt situation that I could actually gather some more information on the matter.

In particular, there are two interesting blog posts by Joram van den Boezen about the company and its tactics. It turns out that what I thought was a very strange private cloud setup – coming as it was from Malaysia – was actually a botnet. Indeed, what appears from Joram's investigations is that the people behind Semalt use sidecar malware both to gather URLs to crawl, and to crawl them. And this, according to their hosting provider, is allowed because they make it clear in their software's license.

This is consistent with what I have seen of Semalt on my server: rather than my blog – which fares pretty well on the web as a source of information – I found them requesting my website, which is almost dead. Looking at all the websites on all my servers, the only other one affected is my friend's, which is by far not an important one. But if we start from accepting Joram's findings (and I have no reason not to), then I can see how that can happen.

My friend's website is visited mostly by the people in the area we grew up in, and general friends of his. I know how bad their computers can be, as I have been doing tech support on them for years, and paid my bills that way. Computers that were bought either without a Windows license or with Windows Vista, that got XP installed on them so badly that they couldn't get updates even when they were available. Windows 7 updates that were done without actually possessing a license, and so on so forth. I have, at some point, added a ModRewrite-based warning for a few known viruses that would alter the Internet Explorer User-Agent field.

Add to this that even those who shouldn't be strapped for cash will want to avoid paying for anything if they can, and you can see why software such as SoundFrost and other similar "tools" to download YouTube videos into music files would be quite likely to be found on computers that end up browsing my friend's site.

What remains unclear from all this information is why they are doing it. As I said in my previous post, there is no reason to abuse the referrer field other than to spam the statistics of the websites. Since the company is selling SEO services, one assumes that they do so to attract more customers. After all, if you spend time checking your Analytics output, you probably are the target audience of SEO services.

But after that, there are still questions that have no answer. How can that company do any analytics when they don't really seem to have any infrastructure but rather use botnets for finding and accessing websites? Do they only make money with their subscriptions? And here is where things can get tricky, because I can only hypothesize and speculate, words that are dangerous to begin with.

What I can tell you is that out there, many people have no scruple, and I'm not referring to Semalt here. When I tried to raise awareness about them on Reddit (a site that I don't generally like, but that can be put to good use sometimes), I stopped by the subreddit to get an idea of what kind of people would be around there. It was not as I was expecting, not at all. Indeed what I found is that there are people out there seriously considering using black hat SEO services. Again, this is speculation, but my assumption is that these are consultants that basically want to show their clients that their services are worth it by inflating the access statistics to the websites.

So either these consultants just buy the services from companies like Semalt, or even the final site owners don't understand that a company promising "more accesses" does not really mean "more people actually looking at your website and considering your services". It's hard for people who don't understand the technology to discern between "accesses" and "eyeballs". It's not much different from the fake Twitter followers, studied by Barracuda Labs a couple of years ago — I know I read a more thorough study of one of the websites selling this kind of service, but I can't find it. That's why I usually keep that stuff on Readability.

So once again, give some antibiotics to the network, and help cure the web from people like Semalt and the people who would buy their services.

August 16, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Libav Release Process (August 16, 2014, 15:23 UTC)

Since the release document is lacking, here are a few notes on how the process works; the document will be updated soon =).

Versioning

Libav has a separate version for each library provided. As usual, a major version bump signifies an ABI-incompatible change, while a minor version bump marks a specific feature introduction or removal.
It is made this way to let users leverage the pkgconf checks to require features instead of using a compile+link check.
The APIChange document details which version corresponds to which feature.
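
In practice this means a build system can test for a feature with a plain version check instead of compiling a test program; a sketch (the version number is illustrative):

# does the installed libavcodec provide the feature added in minor version 55.34?
pkg-config --atleast-version=55.34 libavcodec && echo "feature available"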

The Libav global version number e.g. 9.16 provides mainly the following information:

  • If the major number is updated, the libraries have ABI differences.
    • If the major number is even, API-incompatible changes should be expected; downstreams should follow the migration guide to update their code.
    • If the major number is odd, no API-incompatible changes happened and a simple rebuild must be enough to use the new library.
  • If the minor number is updated, it means that enough bugfixes piled up during the month/two-week period and a new point release is available.

Major releases

All the major releases start with a major version bump of all the libraries. This automatically enables new ABI-incompatible code and disables old deprecated code. Later, or within the same patch, the preprocessor guards and the deprecated code get removed.

Alpha

Once the major bump is committed, the first alpha is tagged. Alphas live within the master branch; the codebase can still accept feature updates (e.g. small new decoders or new demuxers), but the API and ABI cannot have incompatible changes till the next one or two major releases.

Beta

The first beta tag also marks the start of the new release branch.
From this point on, all the bugfixes that hit master will be backported; no feature changes are accepted in the branch.

Release

The release is not different from a beta; it is still a tag in the release branch. The level of confidence that nothing breaks is much higher, though.

Point releases

Point releases are bugfix-only releases and they aim to provide seamless security updates.

Since most bugs in Libav are security concerns, users should update as soon as the new release is out. We keep our continuous integration system monitoring all the release branches in addition to the master branch, to be confident that backported bugfixes do not cause unexpected issues.

Libav 11

The first beta for the release 11 should appear in the next two days, please help us by testing and reporting bugs.

August 14, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

Foreword

Let’s say we have to design an application that should span across multiple datacenters while being able to scale as easily as firing up a new vm/container without the need to update any kind of configuration.

Facing this kind of challenge is exciting and requires us to address a few key scaffolding points before actually starting to code something :

  • having a robust and yet versatile application container to run our application
  • having a datacenter aware, fault detecting and service discovery service

Seeing the title of this article, the two components I’ll demonstrate are obviously uWSGI and Consul which can now work together thanks to the uwsgi-consul plugin.

While this article’s example is written in python, you can benefit from the same features in all the languages supported by uWSGI, which include go, ruby, perl and php!

Our first service discovering application

The application will demonstrate how simple it is for a client to discover all the available servers running a specific service on a given port. The best part is that the services will be registered and deregistered automatically by uWSGI as they’re loaded and unloaded.

The demo application logic is as follows :

  1. uWSGI will load two server applications which are each responsible for providing the specified service on the given port
  2. uWSGI will automatically register the configured service into Consul
  3. uWSGI will also automatically register a health check for the configured service into Consul so that Consul will also be able to detect any failure of the service
  4. Consul will then respond to any client requesting the list of the available servers (nodes) providing the specified service
  5. The client will query Consul for the service and get either an empty response (no server available / loaded) or the list of the available servers

Et voilà, the client can dynamically detect new/obsolete servers and start working !
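
To make the flow concrete, here is a minimal sketch of such a client using consulate (hedged: the actual uwsgi-consul-demo code may differ):

import time
import consulate

session = consulate.Consul()  # talks to the local agent on 127.0.0.1:8500

while True:
    # ask the catalog which nodes currently provide the service
    nodes = session.catalog.service('consul-demo-server')
    if not nodes:
        print('no consul-demo-server available')
    for node in nodes:
        print('consul-demo-server found on node %s (%s) using port %s'
              % (node['Node'], node['Address'], node['ServicePort']))
    time.sleep(2)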

Setting up uWSGI and its Consul plugin

On Gentoo Linux, you’ll just have to run the following commands to get started (other users refer to the uWSGI documentation or your distro’s package manager). The plugin will be built by hand as I’m still not sure how I’ll package the uWSGI external plugins…

$ sudo ACCEPT_KEYWORDS="~amd64" emerge uwsgi
$ cd /usr/lib/uwsgi/
$ sudo uwsgi --build-plugin https://github.com/unbit/uwsgi-consul
$ cd -

 

You’ll have installed the uwsgi-consul plugin which you should see here :

$ ls /usr/lib/uwsgi/consul_plugin.so
/usr/lib/uwsgi/consul_plugin.so

 

That’s all we need to have uWSGI working with Consul.

Setting up a Consul server

Gentoo users will need to add the ultrabug overlay (use layman) and then install consul (other users refer to the Consul documentation or your distro’s package manager).

$ sudo layman -a ultrabug
$ sudo ACCEPT_KEYWORDS="~amd64" USE="web" emerge consul

 

Running the server and its UI is also quite straightforward. For this example, we will run it directly from a dedicated terminal so you can also enjoy the logs and see what’s going on (Gentoo users have an init script and conf.d ready for them should they wish to go further).

Open a new terminal and run :

$ consul agent -data-dir=/tmp/consul-agent -server -bootstrap -ui-dir=/var/lib/consul/ui -client=0.0.0.0

 

You’ll see consul running and waiting for work. You can already enjoy the web UI by pointing your browser to http://127.0.0.1:8500/ui/.

Running the application

To get this example running, we’ll use the uwsgi-consul-demo code that I prepared.

First of all we’ll need the consulate python library (available on pypi via pip). Gentoo users can just install it (also from the ultrabug overlay added before) :

$ sudo ACCEPT_KEYWORDS="~amd64" emerge consulate

 

Now let’s clone the demo repository and get into the project’s directory.

$ git clone git@github.com:ultrabug/uwsgi-consul-demo.git
$ cd uwsgi-consul-demo

 

First, we’ll run the client which should report that no server is available yet. We will keep this terminal open to see the client detecting in real time the appearance and disappearance of the servers as we start and stop uwsgi :

$ python client.py 
no consul-demo-server available
[...]
no consul-demo-server available

 

Open a new terminal and get inside the project’s directory. Let’s have uWSGI load the two servers and register them in Consul :

$ uwsgi --ini uwsgi-consul-demo.ini --ini uwsgi-consul-demo.ini:server1 --ini uwsgi-consul-demo.ini:server2
[...]
* server #1 is up on port 2001


* server #2 is up on port 2002

[consul] workers ready, let's register the service to the agent
[consul] service consul-demo-server registered succesfully
[consul] workers ready, let's register the service to the agent
[consul] service consul-demo-server registered succesfully

 

Now let’s check back our client terminal, hooray it has discovered the two servers on the host named drakar (that’s my local box) !

consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2002
consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2001

Expanding our application

Ok it works great on our local machine but we want to see how to add more servers to the fun and scale dynamically.

Let’s add another machine (named cheetah here) to the fun and have servers running there also while our client is still running on our local machine.

On cheetah :

  • install uWSGI as described earlier
  • install Consul as described earlier

Run a Consul agent (no need for a server) and tell it to work with the consul server already running on your box (drakar in my case) :

$ /usr/bin/consul agent -data-dir=/tmp/consul-agent -join drakar -ui-dir=/var/lib/consul/ui -client=0.0.0.0

The -join <your host or IP> is the important part.

 

Now run uWSGI so it starts and registers two new servers on cheetah :

$ uwsgi --ini uwsgi-consul-demo.ini --ini uwsgi-consul-demo.ini:server1 --ini uwsgi-consul-demo.ini:server2

 

And check the miracle on your client terminal still running on your local box, the new servers have appeared and will disappear if you stop uwsgi on the cheetah node :

consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2001
consul-demo-server found on node drakar (xx.xx.xx.xx) using port 2002
consul-demo-server found on node cheetah (yy.yy.yy.yy) using port 2001
consul-demo-server found on node cheetah (yy.yy.yy.yy) using port 2002

Go mad

Check the source code, it’s so simple and efficient you’ll cry ;)

I hope this example has given you some insights and ideas for your current or future application designs !

August 12, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
HD Daymaker LED Headlamp (August 12, 2014, 05:10 UTC)

Short post to share my experience with the Harley-Davidson Daymaker LED Headlamp.

I came to buy it because I was not satisfied with the standard lamp fitted on my Sportster, and I guess whoever has to drive by night knows that unpleasant feeling of not actually being able to see properly what’s going on in front of you.

The LED Headlamp is worth the few hundred bucks it costs, if only for the sake of your own life, but furthermore for the incredible improvement over the standard lamp. Don’t hesitate a second, just go for it; it’s dead simple to mount yourself !

See the difference (passing lights) :

[before/after photos: IMG_20140804_215931, IMG_20140804_224657]

Now I feel way safer driving on unlit roads.

August 11, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Releases! (August 11, 2014, 11:44 UTC)

Recently we made a huge effort to cut a release for every supported branch (and even one that is supposed not to be supported). Lots of patches to fix some old bugs got backported. I hope you appreciate the dedication.

Libav 0.8.15

We made an extra effort: this branch is supposed to be closed, and the code is really ancient!
I went the extra mile and had to run over the whole codebase to fix a security issue properly: you might crash if your get_buffer callback doesn’t validate the frame dimensions. That callback is provided by the library user (e.g. VLC), so the solution is to wrap it in a function, ff_get_buffer, and do the check there. For Libav 9 and following we had already done this for unrelated reasons; for Libav 0.8 I (actually we, since the first patch didn’t cover all usages) had to sift through the code and replace every avctx->get_buffer() call with ff_get_buffer().

Libav 9.16

This is a standard security release. Backporting from Libav 10 might require some manual retouching, since the code got cleaned up a lot and some internals are different, but it is still less painful than backporting from 11 to 0.8.

Libav 10.3

This is quite an easy release: backporting fixes is nearly immediate, since Libav 11 doesn’t have radical changes in the core internals and the cleanups apply to release/10 as well.

Libav 11 alpha1

Libav 11 is a major release, API-compatible with Libav 10; that makes transitioning as smooth as possible. You automatically enjoy some under-the-hood changes that required an ABI bump (such as the input mime support to speed up AAC webradio startup time), and if you want you can start using the new API features (such as the avresample AVFrame API, av_packet_rescale_ts(), AVColor in AVFrame, and so on).

You can help!

Libav 11 will be out within the month and help is welcome to polish it and make sure we do not have rough edges.

Update a downstream project you are using

Many downstreams are still using (and sometimes misusing) the old (Libav 9) and ancient (Libav 0.8) APIs. We started writing migration guides to help, we have contributed many patches already, and the Debian packagers did a great job taking care of their side.

Some patches are just waiting to be forwarded to the downstream or, if the package is orphaned, to your favourite distribution packagers.

Triage our bugzilla

Most of the Libav development happens on the mailing lists, and sometimes bugs reported on bugzilla do not get updated in a timely fashion. Triaging bugs takes a little time and helps a lot.

Gentoo Monthly Newsletter: July 2014 (August 11, 2014, 00:00 UTC)

Gentoo News

Trustee Election Results

The two open seats for the Gentoo Trustees for the 2014-2016 term will be:

  • Alec Warner (antarus) First Term
  • Roy Bamford (neddyseagoon) Fourth Term

Since there were only two nominees for the two seats up for election, there was no official election. They were appointed uncontested.

Council Election Results

The Gentoo Council for the 2014-2015 term will be:

  • Anthony G. Basile (blueness)
  • Ulrich Müller (ulm)
  • Andreas K. Hüttel (dilfridge)
  • Richard Freeman (rich0)
  • William Hubbs (williamh)
  • Donnie Berkholz (dberkholz)
  • Tim Harder (radhermit)

Official announcement here.

Gentoo Developer Moves

Summary

Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.

Changes

The following developers have recently changed roles:

  • Projects:
    • mgorny joined Portage
    • k_f joined Gentoo-keys
    • zlogene joined Proxy maintainers
    • civil joined Qt
    • pesa replaced pinkbyte as Qt lead
    • TomWij removed himself from Bug-wranglers
    • Gentoo sound migrated to wiki
    • Artwork migrated to wiki
    • Desktop-util migrated to wiki
    • Accessibility migrated to wiki
    • Enlightenment migrated to wiki
  • Herds:
    • eselect herd was added
    • zlogene joined s390
    • twitch153 joined tools-portage
    • pinkbyte left leechcraft
    • k_f joined crypto

Additions

The following developers have recently joined the project:

  • Xavier Miller (xaviermiller)
  • Patrice Clement (monsieurp)
  • Amy Winston (amynka)
  • Kristian Fiskerstrand (k_f)

Returning Dev

  • Tom Gall (tgall)

Moves

The following developers recently left the Gentoo project:
None this month

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17595
Ebuilds 37628
Architecture Stable Testing Total % of Packages
alpha 3658 561 4219 23.98%
amd64 10863 6239 17102 97.20%
amd64-fbsd 0 1577 1577 8.96%
arm 2681 1743 4424 25.14%
arm64 559 32 591 3.36%
hppa 3061 482 3543 20.14%
ia64 3189 612 3801 21.60%
m68k 618 87 705 4.01%
mips 0 2402 2402 13.65%
ppc 6838 2353 9191 52.24%
ppc64 4326 866 5192 29.51%
s390 1477 331 1808 10.28%
sh 1670 403 2073 11.78%
sparc 4114 898 5012 28.49%
sparc-fbsd 0 317 317 1.80%
x86 11535 5288 16823 95.61%
x86-fbsd 0 3237 3237 18.40%

Security

Package Removals/Additions

Removals

Package Developer Date
perl-core/Class-ISA dilfridge 05 Jul 2014
dev-python/argparse mgorny 06 Jul 2014
dev-python/ordereddict mgorny 06 Jul 2014
perl-core/Filter dilfridge 07 Jul 2014
app-text/qgoogletranslator grozin 09 Jul 2014
dev-lisp/openmcl grozin 09 Jul 2014
dev-lisp/openmcl-build-tools grozin 09 Jul 2014
net-libs/cyassl blueness 15 Jul 2014
dev-ruby/text-format graaff 18 Jul 2014
dev-ruby/jruby-debug-base graaff 18 Jul 2014
games-util/rubygfe graaff 18 Jul 2014
perl-core/PodParser dilfridge 20 Jul 2014
virtual/perl-PodParser dilfridge 21 Jul 2014
perl-core/digest-base dilfridge 22 Jul 2014
virtual/perl-digest-base dilfridge 22 Jul 2014
perl-core/i18n-langtags dilfridge 22 Jul 2014
virtual/perl-i18n-langtags dilfridge 22 Jul 2014
perl-core/locale-maketext dilfridge 23 Jul 2014
virtual/perl-locale-maketext dilfridge 23 Jul 2014
perl-core/net-ping dilfridge 23 Jul 2014
virtual/perl-net-ping dilfridge 23 Jul 2014
virtual/perl-Switch dilfridge 25 Jul 2014
perl-core/Switch dilfridge 25 Jul 2014
x11-misc/keytouch pacho 27 Jul 2014
x11-misc/keytouch-editor pacho 27 Jul 2014
media-video/y4mscaler pacho 27 Jul 2014
dev-python/manifestdestiny pacho 27 Jul 2014
dev-cpp/libsexymm pacho 27 Jul 2014

Additions

Package Developer Date
www-client/vimb radhermit 01 Jul 2014
dev-util/libsparse jauhien 01 Jul 2014
dev-python/docker-py chutzpah 01 Jul 2014
dev-util/ext4_utils jauhien 01 Jul 2014
dev-haskell/base16-bytestring gienah 02 Jul 2014
dev-haskell/boxes gienah 02 Jul 2014
dev-haskell/chell gienah 02 Jul 2014
dev-haskell/conduit-extra gienah 02 Jul 2014
dev-haskell/cryptohash-conduit gienah 02 Jul 2014
dev-haskell/ekg-core gienah 02 Jul 2014
dev-haskell/equivalence gienah 02 Jul 2014
dev-haskell/hastache gienah 02 Jul 2014
dev-haskell/options gienah 02 Jul 2014
dev-haskell/patience gienah 02 Jul 2014
dev-haskell/prelude-extras gienah 02 Jul 2014
dev-haskell/tf-random gienah 02 Jul 2014
dev-haskell/quickcheck-instances gienah 02 Jul 2014
dev-haskell/streaming-commons gienah 02 Jul 2014
dev-haskell/vector-th-unbox gienah 02 Jul 2014
dev-haskell/tasty-th gienah 02 Jul 2014
dev-haskell/dlist-instances gienah 02 Jul 2014
dev-haskell/temporary-rc gienah 02 Jul 2014
dev-haskell/stmonadtrans gienah 02 Jul 2014
dev-haskell/data-hash gienah 02 Jul 2014
dev-haskell/yesod-auth-hashdb gienah 02 Jul 2014
sci-mathematics/agda-lib-ffi gienah 02 Jul 2014
dev-haskell/lifted-async gienah 02 Jul 2014
dev-haskell/wai-conduit gienah 02 Jul 2014
dev-haskell/shelly gienah 02 Jul 2014
dev-haskell/chell-quickcheck gienah 03 Jul 2014
dev-haskell/tasty-ant-xml gienah 03 Jul 2014
dev-haskell/lcs gienah 03 Jul 2014
dev-haskell/tasty-golden gienah 03 Jul 2014
sec-policy/selinux-tcsd swift 04 Jul 2014
dev-perl/Class-ISA dilfridge 05 Jul 2014
net-wireless/gqrx zerochaos 06 Jul 2014
dev-perl/Filter dilfridge 07 Jul 2014
app-misc/abduco xmw 10 Jul 2014
virtual/perl-Math-BigRat dilfridge 10 Jul 2014
virtual/perl-bignum dilfridge 10 Jul 2014
dev-perl/Net-Subnet chainsaw 11 Jul 2014
dev-java/opencsv ercpe 11 Jul 2014
dev-java/trident ercpe 11 Jul 2014
dev-java/htmlparser-org ercpe 11 Jul 2014
dev-java/texhyphj ercpe 12 Jul 2014
dev-util/vmtouch dlan 12 Jul 2014
sys-block/megactl robbat2 14 Jul 2014
dev-python/fexpect jlec 14 Jul 2014
mail-filter/postfwd mschiff 15 Jul 2014
dev-python/wheel djc 15 Jul 2014
dev-ruby/celluloid-io mrueg 15 Jul 2014
sys-process/tiptop patrick 16 Jul 2014
dev-ruby/meterpreter_bins zerochaos 17 Jul 2014
sys-power/thermald dlan 17 Jul 2014
net-analyzer/check_mk dlan 17 Jul 2014
app-admin/fleet alunduil 19 Jul 2014
perl-core/Pod-Parser dilfridge 20 Jul 2014
virtual/perl-Pod-Parser dilfridge 21 Jul 2014
sci-libs/libcerf ottxor 21 Jul 2014
games-fps/enemy-territory-omnibot ottxor 22 Jul 2014
dev-libs/libflatarray slis 22 Jul 2014
perl-core/Digest dilfridge 22 Jul 2014
virtual/perl-Digest dilfridge 22 Jul 2014
net-libs/stem mrueg 22 Jul 2014
perl-core/I18N-LangTags dilfridge 22 Jul 2014
virtual/perl-I18N-LangTags dilfridge 22 Jul 2014
perl-core/Locale-Maketext dilfridge 22 Jul 2014
virtual/perl-Locale-Maketext dilfridge 23 Jul 2014
perl-core/Net-Ping dilfridge 23 Jul 2014
virtual/perl-Net-Ping dilfridge 23 Jul 2014
dev-libs/libbson ultrabug 23 Jul 2014
sci-libs/silo slis 24 Jul 2014
dev-python/pgpdump jlec 24 Jul 2014
net-libs/libasr zx2c4 25 Jul 2014
dev-libs/npth zx2c4 25 Jul 2014
net-wireless/bladerf-firmware zerochaos 25 Jul 2014
net-wireless/bladerf-fpga zerochaos 25 Jul 2014
net-wireless/bladerf zerochaos 25 Jul 2014
sci-libs/cgnslib slis 25 Jul 2014
sci-visualization/visit slis 25 Jul 2014
dev-perl/Switch dilfridge 25 Jul 2014
dev-util/objconv slyfox 28 Jul 2014
app-crypt/monkeysign k_f 29 Jul 2014
virtual/bitcoin-leveldb blueness 29 Jul 2014
dev-db/percona-server robbat2 29 Jul 2014
sys-cluster/galera robbat2 30 Jul 2014
dev-db/mariadb-galera robbat2 30 Jul 2014
net-im/corebird dlan 30 Jul 2014
dev-libs/libpfm slis 31 Jul 2014
dev-perl/ExtUtils-Config civil 31 Jul 2014
dev-libs/papi slis 31 Jul 2014
dev-perl/ExtUtils-Helpers civil 31 Jul 2014
sys-cluster/hpx slis 31 Jul 2014
dev-perl/ExtUtils-InstallPaths civil 31 Jul 2014
dev-perl/Module-Build-Tiny civil 31 Jul 2014
www-plugins/pipelight ryao 31 Jul 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 July 2014 and 31 July 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1405
Closed 958
Not fixed 164
Duplicates 180
Total 5912
Blocker 5
Critical 19
Major 69

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period.

Rank Team/Developer Bug Count
1 Gentoo KDE team 41
2 Gentoo Security 38
3 Java team 29
4 Gentoo's Team for Core System packages 28
5 Gentoo Linux Gnome Desktop Team 24
6 Gentoo Games 24
7 Netmon Herd 23
8 Qt Bug Alias 22
9 Perl Devs @ Gentoo 22
10 Others 706

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 85
2 Gentoo Linux Gnome Desktop Team 64
3 Gentoo Security 56
4 Gentoo's Team for Core System packages 53
5 Julian Ospald (hasufell) 48
6 Netmon Herd 47
7 Gentoo KDE team 47
8 Python Gentoo Team 31
9 media-video herd 30
10 Others 943

Tip of the month

(by Sven Vermeulen)
Launching commands in background once (instead of scheduled through cron)

  • Have sys-process/at installed.
  • Have /etc/init.d/atd started.

Use things like:
~$ echo "egencache --update --repo=gentoo --jobs=4" | at now + 10 minutes

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

August 09, 2014
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Introducing pyoembed (August 09, 2014, 21:46 UTC)

Warning: This is a (very) delayed announcement! ;-)

oEmbed is an open standard for embedded content. It allows users to embed some resource, like a picture or a video, in a web page using only the resource URL, without knowing the details of how to embed the resource in a web page.

oEmbed isn't new stuff. It was created around 2008, and despite not being widely supported by content providers, it is supported by some big players, like YouTube, Vimeo, Flickr and Instagram, making its usage highly viable.

To support the oEmbed standard, the content provider just needs to provide a simple API endpoint that receives a URL and a few other parameters, like the maximum allowed height/width, and returns a JSON or XML object with ready-to-use embeddable code.
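
For example, YouTube's endpoint answers a plain GET request with a JSON object along these lines (abbreviated):

$ curl 'https://www.youtube.com/oembed?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DdQw4w9WgXcQ&format=json'
{"type": "video", "version": "1.0", "provider_name": "YouTube",
 "title": "...", "html": "<iframe width=\"480\" height=\"270\" ...></iframe>"}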

The content provider API endpoint can be previously known by the oEmbed client, or auto-discovered using some meta tags added to the resource's HTML page. This is the point where the standard isn't precise enough: not all of the providers support auto-discovery of the API endpoint, nor are all of the providers properly listed in the oEmbed specification. Proper oEmbed clients should try both approaches, looking for known providers first and falling back to auto-discovered endpoints, if possible.
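
Auto-discovery relies on the provider adding link tags like this one to the resource's HTML page (URL shortened for readability):

<link rel="alternate" type="application/json+oembed"
      href="https://example.com/oembed?url=...&format=json"
      title="oEmbed resource" />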

Each of the Python libraries for oEmbed decided to follow one of the mentioned approaches, without caring about the other one, failing to support relevant providers. And this is the reason why I decided to start writing pyoembed!

pyoembed is a simple and easy to use implementation of the oEmbed standard for Python, that supports both auto-discovered and explicitly defined providers, supporting most (if not all) the relevant providers.

pyoembed's architecture makes it easy to add new providers and supports most of the existing providers out of the box.

To install it, just type:

$ pip install pyoembed

Gentoo users can install it from gentoo-x86:

# emerge -av pyoembed
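
A minimal usage sketch (illustrative; see the repository below for the authoritative API):

from pyoembed import oEmbed

# resolve a resource URL into a dict of ready-to-use embed data
data = oEmbed('https://www.youtube.com/watch?v=dQw4w9WgXcQ', maxwidth=640)
print(data['type'])
print(data['html'])  # the embeddable markup returned by the provider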

pyoembed is developed and managed using Github, the repository is publicly available:

https://github.com/rafaelmartins/pyoembed

A Jenkins instance runs the unit tests and the integration tests automatically; you can check the results here:

https://ci.rgm.io/view/pyoembed/

The integration tests are supposed to fail from time to time, because they rely on external URLs that may be unavailable while the tests are running.

pyoembed is released under a 3-clause BSD license.

Enjoy!

Sven Vermeulen a.k.a. swift (homepage, bugs)
Some changes under the hood (August 09, 2014, 19:45 UTC)

In between conferences, technical writing jobs and traveling, we did a few changes under the hood for SELinux in Gentoo.

First of all, new policies are bumped and also stabilized (2.20130411-r3 is now stable, 2.20130411-r5 is ~arch). These have a few updates (mergers from upstream), and r5 also has preliminary support for tmpfiles (at least the OpenRC implementation of it), which is made part of the selinux-base-policy package.

The ebuilds to support new policy releases now are relatively simple copies of the live ebuilds (which always contain the latest policies) so that bumping (either by me or other developers) is easy enough. There’s also a release script in our policy repository which tags the right git commit (the point at which the release is made), creates the necessary patches, uploads them, etc.

One of the changes made is to “drop” the BASEPOL variable. In the past, BASEPOL was a variable inside the ebuilds that pointed to the right patchset (and base policy), as we initially supported policy modules of different base releases. However, that was a mistake, and we quickly moved to bumping all policies with every release, but kept the BASEPOL variable. Now, BASEPOL is “just” the ${PVR} value of the ebuild, so it no longer needs to be provided. In the future, I’ll probably remove BASEPOL from the internal eclass and the selinux-base* packages as well.

A more important change to the eclass is support for the SELINUX_GIT_REPO and SELINUX_GIT_BRANCH variables (for live ebuilds, i.e. those with the 9999 version). If set, then they pull from the mentioned repository (and branch) instead of the default hardened-refpolicy.git repository. This allows for developers to do some testing on a different branch easily, or for other users to use their own policy repository while still enjoying the SELinux integration support in Gentoo through the sec-policy/* packages.

Finally, I wrote up a first attempt at our coding style, heavily based on the coding style of the reference policy of course (as our policy still follows this upstream project). This should allow the team to work better together and to decide on namings autonomously (instead of hours of discussing and settling for something as silly as an interface or boolean name ;-)

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
unpaper and libav status update (August 09, 2014, 11:46 UTC)

The other day I wrote about unpaper and the fact that I was working on making it use libav for file input. I have now finished converting unpaper (in a branch) so that it does not use its own image structure, but rather the same AVFrame structure that libav uses internally and externally. This meant not only supporting stripes, but using the libav allocation functions and pixel formats.

This also enabled me to use libav for file output as well as input. While for the input I decided to add support for formats that unpaper did not read before, for output at the moment I'm sticking with the same formats as before. Mostly because the one type of output file I'd like to support is not currently supported by libav properly, so it'll take me quite a bit longer to be able to use it. For the curious, the format I'm referring to is multipage TIFF. Right now libav only supports single-page TIFF and it does not support JPEG-compressed TIFF images, so there.

Originally, I planned to drop compatibility with previous unpaper versions, mostly because dropping the internal structure meant losing the input format information for 1-bit black and white images. In the end I was actually able to reimplement the same feature in a different way, and so I restored that support. The only compatibility issue right now is that the -depth parameter is no longer present, mostly because it and -type constrained the same value (the output format).

To reintroduce the -depth parameter, I want to support 16-bit gray. Unfortunately to do so I need to make more fundamental changes to the code, as right now it expects to be able to get the full value at most at 24 bit — and I'm not sure how to scale a 16-bit grayscale to 24-bit RGB and maintain proper values.

While I had to add almost as much code to support the libav formats and their conversion as there was there to load the files, I think this is still a net win. The first point is that there is no format parsing code in unpaper, which means that as long as the pixel format is something that I can process, any file that libav supports now or will support in the future will do. Then there is the fact that I ended up making the code "less smart" by removing codepath optimizations such as "input and output sizes match, so I won't be touching it, instead I'll copy one structure on top of the other", which means that yes, I probably lost some performance, but I also gained some sanity. The code was horribly complicated before.

Unfortunately, as I said in the previous post, there are a couple of features that I would have preferred if they were implemented in libav, as that would mean they'd be kept optimized without me having to bother with assembly or intrinsics. Namely pixel format conversion (which should be part of the proposed libavscale, still not reified), and drawing primitives, including bitblitting. I think part of this is actually implemented within libavfilter but as far as I know it's not exposed for other software to use. Having optimized blitting, especially "copy this area of the image over to that other image" would be definitely useful, but it's not a necessary condition for me to release the current state of the code.

So current work in progress is to support grayscale TIFF files (PAL8 pixel format), and then I'll probably turn to libav and try to implement JPEG-encoded TIFF files, if I can find the time and motivation to do so. What I'm afraid of is having to write conversion functions between YUV and RGB, I really don't look forward to that. In the mean time, I'll keep playing Tales of Graces f because I love those kind of games.

Also, for those who're curious, the development of this version of unpaper is done fully on my ZenBook — I note this because it's the first time I use a low-power device to work on a project that actually requires some processing power to build, but the results are not bad at all. I only had to make sure I had swap enabled: 4GB of RAM are no longer enough to have Chrome open with a dozen tabs, and a compiler in the background.

August 07, 2014
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)
Can your distro compile Chromium? (August 07, 2014, 07:20 UTC)

Chromium is moving towards using C++11. Even more, it's going to require either gcc-4.8 or clang.

Distros like Ubuntu, Mageia, Fedora, openSUSE, Arch, CentOS, and Slackware are already using gcc-4.8 or later in their latest stable release.

On the other hand, Debian Wheezy (7.0) has gcc-4.7.2. Gentoo is using gcc-4.7.3 in stable.

I started a thread on gentoo-dev, gcc-4.8 may be needed in stable for www-client/chromium-38.x. There is a tracker for gcc-4.8 stabilization, bug #516152. There is also gcc-4.8 porting tracker, bug #461954.

Please consider testing gcc-4.8 on your stable Gentoo system, and file bugs for any package that fails to compile or needs to have a newer version stabilized to work with new gcc. I have recompiled all packages, the kernel, and GRUB without problems.
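
A sketch of how such a test setup could look (the exact version and CHOST below are illustrative):

# accept the testing compiler on an otherwise stable system
echo "sys-devel/gcc:4.8" >> /etc/portage/package.accept_keywords
emerge --ask --oneshot sys-devel/gcc:4.8
# switch the active compiler profile (use the name shown by --list)
gcc-config --list
gcc-config x86_64-pc-linux-gnu-4.8.3
env-update && source /etc/profile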

The title of this post is deliberately a bit similar to my earlier post Is your distro fast enough for Chromium? This browser project is pushing hard towards shorter release cycles and the latest software. I consider that a good thing. Now we just need to keep up with the updates, and any help is welcome.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
googlecode.com, or no tarballs for you (August 07, 2014, 02:45 UTC)

I'm almost amused, see this bug

So when I fetched it earlier, the tarball had a size of 207200 bytes.
Most of Europe apparently gets a tarball of size 207135 bytes.
When I download it now again, I get a tarball of size 206989 bytes.

So I have to assume that googlecode now follows githerp in its tradition of being useless for code hosting. Is it really that hard to generate a consistent tarball once, and then mirror it?
Maybe I should build my own codehosting just to understand why this is apparently impossible ...
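
For what it's worth, demonstrating the problem takes only a handful of lines: fetch the same URL a few times and compare the digests. A sketch, with a placeholder URL rather than the actual one from the bug:

import hashlib
import urllib.request

URL = "https://example.googlecode.com/archive/v1.0.tar.gz"  # placeholder

# With deterministic tarball generation, all three lines would match.
for attempt in range(3):
    data = urllib.request.urlopen(URL).read()
    print(attempt, len(data), hashlib.sha256(data).hexdigest())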

Jeremy Olexa a.k.a. darkside (homepage, bugs)
What’s new? (August 07, 2014, 01:17 UTC)

Ahem, let me dust this off…

For those keeping track at home, it has been over 7 months since writing on this thing. Yup, new job, new car, new apartment after I got back. That was fun, and “settling” in again has kept me busy. I’ve also been enjoying the [short] summer that we have.

The most common question that people ask me now is “When are you leaving again?” – I guess there must be something in my eyes when I tell the travel story…ha. Nothing planned.

As far as tech goes, I've been digging into Chef for my IT automation needs. I simply can't imagine a workplace without automation these days. I would show some GitHub stats here but, as every Ops engineer I know says, almost everything is behind private repos. I'm learning new technologies I haven't used before and wearing many hats at a startup. I know the breadth of skills can only help in the long run. I haven't worked on Gentoo Linux in a while. I'm trying to find something there that interests me, but after your tech belongings have been commoditized/optimized for lightweight travel, motivation is lacking. Keeping up with emerging tech is still fun, though.

August 06, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

I am no stranger to Indian food, as it is among my favourite types of cuisine (along with Thai and Vietnamese). Having been back in the Saint Louis area for two years now, I have tried many different Indian restaurants, but have been disappointed for one reason or another (price, variety of regional dishes, a lack of distinct flavour profiles, et cetera). Please don’t misunderstand me; there are some good, and even some great Indian places in and around Saint Louis, but they have all somehow fallen a bit short. For instance, here are some such places:

  • India’s Rasoi – great, but expensive, and no buffet
  • Haveli – pretty good buffet, but lacking some variety
  • India’s Kitchen – decent buffet, but inconsistent; nothing stands out
  • Copper Chimney – pretty good, but not all that many options
  • Saffron – decent buffet, but again, not all that many regional options

That list is in no way exhaustive, but I think that the theme will be evident—they’re good, but not “stop you in your tracks” good. Having lived in some regions of the country that have a plethora of exceptional Indian restaurants, I was constantly on a mission to find The Best Indian Restaurant in Saint Louis! My search has yielded a winner: Peshwa Indian Restaurant.

I went with my dearest friend and fellow foodie, Debbie, very shortly after owner and executive chef Shweta Marathe opened the doors to her wonderful new eatery. As of the time of this blog post, we have been back six times in just a few short weeks! Why go back so often (other than the obvious reason that the food is incredible)? Variety. Peshwa constantly has new dishes on the buffet, all stemming from the myriad regions of India. Ms. Marathe brings her unique take on these dishes, and spices them up (pun intended) with influences from her native region outside Pune, which is near the western coast of India (southeast of Mumbai).


Vada with tamarind Chutney
(Click to enlarge)

This most recent time, we started with some Vada with Tamarind chutney. Vada are these wonderful little doughnut-like delicacies from South India, and are typically made from Urad dal and gram flour. I can't say if these were made primarily with dal or lentils, but they were delicious. As with many dishes (not just from India), the sauces make or break them. The tamarind chutney at Peshwa is the best that I've ever had. It has the sweetness (from the dates) that I've come to love but haven't found at other places.

Other appetisers that we’ve had in the past are Idli, which are typically eaten as a dense breakfast food accompanied by a coconut chutney. When I was discussing with Shweta how much I enjoyed these cakes made from Urad dal, I mentioned coconut chutney. She educated me and let me know that they are also eaten with Sambar. I tried them that way, and it was a completely different experience! At the same time, she was back in the kitchen whipping up some coconut chutney (now THAT’S service)!


Chicken Tikka Masala, Vegetable Korma, Naan, and rice
(Click to enlarge)

For entrées, Peshwa offers far too many to list, including some wonderful vegetarian and vegan dishes. The first few times that we went, one of the primary chicken offerings was Butter Chicken, which is great, but not my favourite. That being said, this was outstanding (not overly oily, like it has been at some other places). After talking with Shweta, she agreed to make Chicken Tikka Masala for me at some point (since it is one of my absolute favourites). I used to think that India’s Rasoi had the best in the area, but it has been surpassed in my opinion. At Peshwa, there is not as much sauce, but what is there is infinitely flavourful. The pieces of chicken are so tender that one doesn’t need a knife at all.

Typically, Vegetable Korma is enjoyable, but not something that jumps off the buffet line onto my plate as readily as some other choices. At Peshwa, though, I believe that it is one of the absolute best dishes available. It is creamy and has a flavour profile that is both subtle and complex.

Many other main dishes are available on the buffet as well. You can find staples like Tandoori Chicken, various styles of Biryani, Vindaloo, as well as some lesser-known dishes and even Indian Chinese dishes, which are really something special!


My own mixture of Pineapple Sheera and Kheer rice pudding
(Click to enlarge)

Now, after indulging in those wonderfully complex and sometimes spice-filled entrées, one wants (or even needs) some dessert to cool down the palate. At Peshwa, there are usually two or three desserts available, and they're constantly being rotated out for different ones. One of the recent times that we went, I was excited to see that two of my favourites (Pineapple Sheera and Kheer) were both available at the same time. One thing that I love to do (even though it's not very traditional) is to mix the two together. I really enjoy the juxtaposition of the warm Sheera and the cool Kheer, as well as the combination of two different textures. Now, Kheer comes in many different varieties, and I have had two of them at Peshwa. This particular day, it was the Kheer that is more like a rice pudding with shaved almonds. One previous time, another outstanding dessert was on the menu: Gulab Jamun, which can be found most often in Western India.

I would be remiss if I neglected to mention one special dessert that I've only found at Peshwa—the Mango Mastani. This refreshing flavour explosion is native to Pune and the surrounding areas, and is made from mango (duh), cold whole milk, sugar, ice cubes, and mango kulfi. It is basically a mango shake / float with a big scoop of mango kulfi (an ice cream-like treat) in it. Nothing can prepare you for the immense flavour of this outstanding dessert. The only problem that you will have (if you're like me) is leaving room for it at the end of an otherwise excellent meal.

If you've stuck with me throughout this entire review, you'll easily see that I think VERY highly of Peshwa. Not only has every dish I've had there been incredible, but the service is great as well. Deb and I keep joking that one day we'll find a dish that Shweta and her staff don't do well, but we've yet to find it. If I had to come up with a fault of the restaurant, I would have to be nitpicky to an extreme. Doing so, though, I would say that it would be nice to have some more ice in the water, but I understand this is a typically Western idea.

Do yourself a huge favour, and check out Peshwa Indian Restaurant at:
10633 Page Avenue (click for directions)
Suite B
Saint Louis, MO 63132

As of this writing, they are open from 11:30 until 20:30 (8:30 PM) every day but Tuesday.

Cheers, and happy eating!

|:| Zach |:|

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

As I noted earlier, I've been doing some more housecleaning of bad HTTP crawlers and feed readers. While it matters very little for me and my blog (I don't pay for bandwidth), I find it's a good exercise and, since I do publish my ModSecurity rules, it is a public service for many.

For those who think that I may be losing real readership in this, the number of visits on my site as seen by Analytics increased (because of me sharing the links to that post over to Twitter and G+, as well as in the GitHub issues and the complaint email I sent to the FeedMyInbox guys), yet the daily traffic was cut in half. I think this is what is called a win-win.

But one thing that became clear from both AWStats and Analytics is that there was one more crawler that I had not stopped yet. The crawler's name is Semalt, and I'm not doing them the favour of linking to their website. Those of you who follow me on Twitter have probably seen what they categorized as "free PR" for them while I was ranting about them. I first called them a cancer of the Internet, and then realized that the right categorization would be bacteria.

If you look around, you'll find unflattering reviews and multiple instructions to remove them from your website.

Funnily, once I tweeted about my commit, one of their people, who I assume is in their PR department rather than engineering given the blatant stupidity of their answers, told me that it's "easy" to opt out of their scanner: you just have to go to their website and tell them your websites! Sure, sounds like a plan, right?

But why on earth am I spending my time attacking one particular company that, to be honest, is not wasting that much of my bandwidth to begin with? Well, as you can imagine from me comparing them to shigella bacteria, I do have a problem with their business idea. And given that on Twitter they completely missed my point (when I pointed out the three spammy techniques they use, their answer was "people don't complain about Google or Bing" — well, yes, because neither of the two uses any of their spammy techniques!), it'll be difficult for me to consider them merely mistaken. They are doing this on purpose.

Let's start with the technicalities, although that's not why I noticed them to begin with. As I said earlier, their way to "opt out" of their services is to go to their website and fill in a form. They completely ignore robots.txt; they don't even fetch it. For an automated crawler, that's bad enough.

The second is that they don't advertise themselves in the User-Agent header. Instead, all their fetches report Chrome/35 — and given that they can pass through my ruleset, they probably use a real browser driven by something like WebDriver. So you have no real way to identify their requests among the others, which is not how a good crawler should operate.

The third and most important point is the reason why I consider them just spammers, as do others, given the links I posted earlier. Instead of using the User-Agent field to advertise themselves, they subvert the Referer header. This means that all their requests, even those that have been 301'd and 302'd around, will report their website as the referrer. And if you know how AWStats works, you know that it doesn't take many crawls for them to become one of the "top referrers" for your website, and thus appear prominently in your stats, whether those are public or not.
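
You can check how quickly this pollutes your statistics by counting referrers straight from the access log. A small sketch, assuming the default Apache "combined" log format; the log path is an assumption as well:

import collections
import re

LOG_PATH = "/var/log/apache2/access_log"  # assumed path

# In the combined format, the referrer is the second-to-last quoted field.
line_re = re.compile(r'"([^"]*)" "[^"]*"$')

referrers = collections.Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = line_re.search(line.rstrip())
        if match:
            referrers[match.group(1)] += 1

for referrer, hits in referrers.most_common(10):
    print(hits, referrer)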

At this point it could be easy to say that they are clueless and are not doing this on purpose, but then there is the other important part. Their crawler executes JavaScript, which means that it gets tracked by Google Analytics, too! Analytics has no access to the server logs, so for it to display the referrer as shown by people looking to filter it out, it has to make an effort. Again, this could easily be a mistake, given that they are using something like WebDriver, right?

The problem is that whatever they use, it does not fetch either images or CSS. But it does fetch the Analytics JavaScript and execute it, as I said. And the only reason I can think of for them to do so is to spam the referrer list in there as well.

As their Twitter person thanked me for my "free PR" for them, I wanted to expand further on it, with the hope that people will learn to recognize them. And to avoid them. My ModSecurity ruleset, as I said, is already set up to filter them out; other solutions for those who don't want to use ModSecurity are linked above.

August 05, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

So it's that time of the year again when you have to run a few BIOS updates, and you know I love updating BIOSes. In this case the victim is the very Excelsior that I bought to run the Gentoo tinderbox, and which I'm hosting at my own expense at Hurricane Electric in Fremont. This is important to say because it means I have no direct access to the hardware, and nobody else can help me there.

First, I decided to get rid of the low-hanging fruit: the IPMI sideboard. I couldn't find release notes for the update, but it went from version 2.05 to 3.16, and it was released at the end of June 2014, so it's pretty recent, and one would expect quite a few bug fixes, given that I had trouble running its iKVM client two years ago already, and I tried to get more information about its protocol without success.

For this part, SuperMicro is strangely helpful: the downloaded archive (once I was able to actually download it — it took me a few tries, as the connection would break midway through the download) is three times the size of the actual firmware, because it comes with Windows, DOS and, helpfully, Linux update utilities in the same file. Which would be endearing if it weren't that the Linux utilities require raw access to the BMC, which is not fun to do with grsec present, and if I'm not mistaken it also conflicts with the in-kernel IPMI drivers. Also, the IPMI remote interface can handle the firmware update "quite well" by itself.

I say "quite well" because it seems like ATEN (the IPMI vendor for SuperMicro and another long list of server manufacturers) still has not mastered the ability to keep configuration consistent across firmware upgrades. You know, the kind of things that most of our home routers or cable TV boxes manage nearly all the time. And all my home computers' BIOSes. So instead it requires you to reset the full configuration when you upgrade the firmware of the device. Which is fine because you can use bmc-config from userspace to dump the configuration (minus the passwords) and reset it back. If the configuration it downloads is actually accepted as valid, which in my case it wasn't. But I only needed to set an admin password and IP address and routing, so…

The second step is to update the BIOS of the server itself. And here is where the fun part starts. You have two options to download: a Windows executable or a ZIP archive. I downloaded both. The Windows executable is supposed to build a floppy to upgrade the BIOS. Not a floppy image, a floppy. It does indeed not work in Wine, but I tried! I settled for the ZIP file.

So the iKVM I just upgraded supports "virtual media" — which means you can give it files on your computer that it'll use instead of floppy disks or CD-Rom. This is very neat, but it comes with some downsides. At first I wanted to give it a floppy disk image via HTTP, and use the Serial-over-SSH to enter the boot menu and do stuff. This has worked like a charm for me before, but it comes with limitations: the HTTP-based virtual media configuration for this iKVM only supports 1.44MB images for floppy disks, and uses SMB to access shared ISO files for the CD-Rom. It makes sense, you can't expect the iKVM board to have enough RAM to store a whole 650MB image for the CD-Rom, now, can you?

The answer is of course to build a BIOS upgrade floppy disk, which is after all what their own Windows tool would have done (albeit on a real disk rather than an image — do people really still use floppy disks on Opteron servers?). Too bad that there is a small problem with that plan: the BIOS itself is over 2MB uncompressed. It does not fit on an unformatted floppy disk, let alone on a formatted one together with the update utility, the kernel and the command interpreter. So no way to do that easily. The astute among you, after reading this blog post, will probably figure out there was a shortcut here, but I'll comment on that later.

So I decided to go back to the iKVM client, the Java WebStart one. Luckily, the new release of it works fine with IcedTea, unlike the previous one. Unfortunately, it still does not work out of the box. Indeed, it was only by chance that I found someone else who faced and solved the same problem by adding a couple of lines to the XML file that defines the JavaWS environment.

The configuration dialog for the client allows providing a local ISO file as well as configuring the address of an SMB share with it: it implements its own transmission protocol, and the floppy option is now "floppy/USB", so I thought maybe I could just give it my 1GB FreeDOS-based BIOS disk … nope, no luck. It still seems to be limited to 1.44MB.

So I had to get a working ISO and do the upgrade. It should be pretty straightforward, right? I even found a blog post showing how to do so based on Gentoo, as it says to emerge isomaster! Great! Unfortunately, the FreeDOS download page now provides you with a great way to download … installers. They no longer provide links to download live CDs. It makes total sense, right? Who doesn't think every day, "I should stop procrastinating and finally install FreeDOS on my laptop!"?

After some fiddling around I found a tiny nondescript directory on their FTP mirrors that includes fdfull.iso, which is the actual live CD. So I downloaded that, copied the BIOS and the updater onto the CD, set it in the virtual media settings for the iKVM, and then booted. The CD does not boot unattended: if you leave it alone, it'll ask you what you want to do and default to booting from the HDD. If you do remember to type 1 and press enter at the SysLinux prompt, it'll then default to install (again, why did I bother installing Gentoo on the NUC when I could have used FreeDOS like the cool kids?).

Instead, you can choose to boot FreeDOS from the live CD with no support and no drivers, with just the CD drivers, with the CD drivers and HIMEM, or with the CD drivers, HIMEM and EMM386. I decided I wanted to play it safe, since it's a BIOS update, and booted with just the CD drivers (I needed the BIOS files, after all). Turns out that while this is an option at the boot menu, it can never work, as the CD-ROM driver needs to be loaded high, MSCDEX.EXE style. So I rebooted once again with HIMEM (and EMM386, because rebooting into that CD is painful — among other things, the iKVM client loses focus every time the framebuffer size changes. And it changes. all. the freaking. time. during boot up), and tried to access the CD… unfortunately the driver does not support USB CD-ROM drives (like the one the iKVM provides me with), just IDE ones, so no access to the ISO for me. Nope.

But I'm not totally unprepared for this kind of situation. As a failsafe device I left in one of the USB slots a random flash drive I got at Percona Live, reflashed with SysRescueCD — it was one of those because I had gotten a couple while over there, and I had forgotten to bring one from home. Since SysRescueCD uses VFAT by default, and the BIOS shows USB devices transparently to DOS, it was visible as C:.

Here is the shortcut from above: I should have just used a random FreeDOS boot floppy image and copied the BIOS onto that particular thumb drive, before I even started fighting with the Java client. It would have been much simpler. But I had forgotten about that thumb drive, and I was hoping I would be able to get the boot disk approach working. Now I know better.

Of course that meant I had to finish booting the system into Linux proper, mount the USB flash drive, scp over the BIOS update, and reboot. It took much longer than I wanted. But still much less than all the time spent trying to get FreeDOS to boot the right thing.

By the way, even if I had found a way to access the CD with the BIOS updater, it wouldn't have helped. I had not noticed it, but the BIOS updater script FLASH.BAT wants to be smart: it renames AFUDOS.SMC to AFUDOS.EXE, runs it, then renames it back. Which means it's not usable from read-only media. Oh, and it requires a parameter; if you don't provide it, it'll tell you that the command is unknown (because it still passes a bunch of flags to the flash utility), and tells you to reboot.

Now this was only the problem of getting the BIOS updated. What happened afterwards would probably deserve a separate post, but I'll try to be short and write it down here. After the update, of course, I got the usual "CMOS checksum error, blah blah, Press F1 to enter Setup." So I went and reconfigured the few parameters that I wanted different from the defaults, for instance putting the SATA controller into AHCI mode (IDE compatibility, seriously?), disabling the IDE controller itself, and enabling SVM, which on the defaults is still disabled.

The system booted fine, but no network cards were to be found. Not a driver problem: they would not appear on the PCI bus. To be precise, now that I have compared this with the actual running setup, the two PLX PCIe switches were not to be found on the bus; the network cards just happen to be connected to them. I tried playing with all the configuration options for the network cards, for PCI, for PnP and ACPI. Nope. Tried a full remote power cycle, nope. I was already considering the costs of adding a riser card and a multiport card to the server (which wouldn't have helped, as I noted the PLX switches were gone), when Janne suggested doing a reset to defaults anyway. It worked. The settings were basically the same as I had left them before (looks like "optimal defaults" do include SVM); I just changed SATA back to AHCI (yes, even in the optimal settings, it defaults to IDE compatibility).

It worked, but then only three out of five drives were recognized (one of the SSDs was disconnected when I was there in April, and I hadn't had time to go back in June to put it back). One more reboot into the BIOS to disable the IDE controller altogether, and the rest of the HDDs appeared, together with their LVM group.

Yes, I know, SuperMicro suggests that you only do BIOS updates when they tell you to. On the other hand, I was hoping for an SVM/IOMMU fix, which is still not there, as KVM complains about it, so I wanted to try. I was used to Tyan, which Yamato still uses, which basically told you "don't bother us with support requests unless you've updated your BIOS" — a policy that I find much more pleasant, especially given my experience with other hardware (such as Intel's NUC requiring a BIOS update before it can use Intel's WiFi cards…).

Hanno Böck a.k.a. hanno (homepage, bugs)
Las Vegas (August 05, 2014, 05:39 UTC)

Excalibur hotel
My hotel looks like a Disneyland castle - just much larger.
I am a regular author for the German IT news page Golem.de. Earlier this year they asked me if I wanted to report from the Black Hat and Def Con conferences in Las Vegas. The conferences start tomorrow. As I didn't want to fly halfway across the globe for just a few days of IT security conferences, I decided to spend some more time here. So I spent the last couple of days in Las Vegas and will spend some time travelling around after the conferences. A couple of people asked me to blog a bit and post some pictures, so here we go.

Las Vegas is probably a place I would never have visited on its own. I consider myself a rationalist person and therefore I see gambling mostly as an illogical pursuit. In the end, your chances of winning are minimal, because otherwise the business wouldn't work. I hadn't imagined how huge the casino business in Las Vegas is. Large parts of the city are just one large casino after another - and it doesn't stop there, because a couple of cities around Vegas are literally made of casinos.

Besides seeing some of the usual tourist attractions (Hoover Dam, Lake Mead), I spent the last couple of days finding out that there are some interesting solar energy projects nearby. Also, a large Star Trek convention was happening over the past days, which I attended on the last day.

Nintendo test cartridge
A Nintendo test cartridge at A Gamer's Paradise
If you are ever in Vegas and have the slightest interest in retro gaming, I suggest visiting A Gamer's Paradise. It is a shop for used video games, but apart from that it is also a showcase for a whole range of old and partly very exotic video gaming equipment, including things I've never seen before. It also has some playable old consoles. Right beside it is the Pinball Hall of Fame, which is also a nice place, so you can visit two worthwhile retro gaming related places in one go.

Pictures from Las Vegas
Pictures from A Gamer's Paradise
Pictures from Pinball Hall of Fame

August 04, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

(Last edit 2014-08-31)

I wrote a tool that takes the board setup from a WXF file (example below) and produces an SVG image visualizing that setup.
Different themes for board and pieces are supported (including your own); the gap between pieces can be adjusted, and the output width can be adjusted, too.
It's called xiangqi-setup.

Internally, the tool takes an SVG file of the board, places piece SVGs at the right places and saves the result.
The tool uses svgutils by Bartosz Telenczuk. I'm very happy he made that available as free software. My tool is free software (licensed under GNU AGPL 3.0 or later) too, of course.
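
The heavy lifting boils down to loading the board SVG, loading each piece SVG, moving it to the right intersection and appending it. Here is a minimal sketch of that idea using svgutils; the file names and coordinates are made up, and this is not the actual xiangqi-setup code:

import svgutils.transform as sg

# Load the board as the base figure (file names are hypothetical).
figure = sg.fromfile('board.svg')

# Load a piece, move its root group to a board intersection, append it.
piece = sg.fromfile('red_king.svg').getroot()
piece.moveto(200, 450)  # made-up coordinates of an intersection
figure.append([piece])

figure.save('setup.svg')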

If you want to imitate the style of Chinese end-game books, you could go with these themes:


(Image licensed under CC0 1.0 Universal: Public Domain Dedication)
If you want to imitate LaTeX xq 0.3 style (with added flexibility), you could go with this:


(Image licensed under CC0 1.0 Universal: Public Domain Dedication)
If you would rather go for something more colourful, for screen rather than print, or you want to explicitly imitate the look of PlayOK.com, you could go with this:


(Image licensed under CC0 1.0 Universal: Public Domain Dedication,
piece artwork kindly shared and released by PlayOK)
There is a version of the pieces without shadows (for PDF generation), too.

The latter image was created by running

# ./xiangqi-setup \
    --board themes/board/playok_2014_remake/ \
    --pieces themes/pieces/playok_2014_chinese \
    --scale-pieces 1.0 \
    --width-px 400 \
    demo.wxf setup_imitate_playok.svg

This is what you pass in: a WXF file (e.g. produced by XieXie when saving with the .wxf extension):

FORMAT          WXF
GAME    
RED             ;;;
BLACK           ;;;
DATE            2014-07-16
FEN             4kaer1/4a2c1/2h1e1h2/3Rp1C1p/2C6/5rP2/1pP1P3P/8E/9/1cEAKA1R1 b

START{
}END
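
As the example shows, the board setup itself lives entirely in the FEN line. A hypothetical helper to pull it out of a WXF file (not part of xiangqi-setup) could look like this:

def read_fen(path):
    """Return the FEN field from a WXF file, or None if absent."""
    with open(path) as wxf:
        for line in wxf:
            if line.startswith('FEN'):
                # The line has the form "FEN<whitespace><fen string>".
                parts = line.split(None, 1)
                if len(parts) == 2:
                    return parts[1].strip()
    return None

print(read_fen('demo.wxf'))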

As of now, the complete usage of xiangqi-setup is:

# ./xiangqi-setup --help
usage: xiangqi-setup [-h] [--board DIRECTORY] [--pieces DIRECTORY]
                     [--width-px PIXEL] [--width-cm CENTIMETER] [--dpi FLOAT]
                     [--scale-pieces FACTOR] [--debug]
                     INPUT_FILE OUTPUT_FILE

positional arguments:
  INPUT_FILE
  OUTPUT_FILE

optional arguments:
  -h, --help            show this help message and exit
  --board DIRECTORY
  --pieces DIRECTORY
  --width-px PIXEL
  --width-cm CENTIMETER
  --dpi FLOAT
  --scale-pieces FACTOR
  --debug

For themes, these are your options (at the moment):

# find themes -maxdepth 2 -type d | sort
themes
themes/board
themes/board/a4_blank_2cm_margin
themes/board/clean_alpha
themes/board/clean_beta
themes/board/commons_xiangqi_board_2008
themes/board/commons_xiangqi_board_2008_bw_thin
themes/board/latex_xq_remake
themes/board/minimal
themes/board/minimal_chinese
themes/board/minimal_chinese_arabic
themes/board/playok_2014_remake
themes/pieces
themes/pieces/commons_xiangqi_pieces_print_2010
themes/pieces/commons_xiangqi_pieces_print_2010_bw_heavy
themes/pieces/latex_xqlarge_2006_chinese_autotrace
themes/pieces/latex_xqlarge_2006_chinese_potrace
themes/pieces/playok_2014_chinese
themes/pieces/playok_2014_chinese_noshadow
themes/pieces/retro_simple

If none of the existing themes fit your needs, you may create board and/or pieces of your own.
For boards, drawing a custom grid, palace, start markers and border can be done using the xiangqi-board tool. A demonstration of its current options:

# ./xiangqi-board --help
usage: xiangqi-board [-h] [--line-thickness-px FLOAT] [--field-width-px FLOAT]
                     [--field-height-px FLOAT] [--border-thickness-px FLOAT]
                     [--border-gap-width-px FLOAT]
                     [--border-gap-height-px FLOAT] [--cross-width-px FLOAT]
                     [--cross-thickness-px FLOAT] [--cross-gap-px FLOAT]
                     SVG_FILE INI_FILE

positional arguments:
  SVG_FILE
  INI_FILE

optional arguments:
  -h, --help            show this help message and exit
  --line-thickness-px FLOAT
                        Line thickness of square fields in pixel (default: 1)
  --field-width-px FLOAT
                        Width of fields in pixel (default: 53)
  --field-height-px FLOAT
                        Height of fields in pixel (default: 53)
  --border-thickness-px FLOAT
                        Line thickness of border in pixel (default: 2)
  --border-gap-width-px FLOAT
                        Widtn of gap to border in pixel (default: 40)
  --border-gap-height-px FLOAT
                        Height of gap to border in pixel (default: 40)
  --cross-width-px FLOAT
                        Width of starting position cross segments in pixel
                        (default: 10)
  --cross-thickness-px FLOAT
                        Line thickness of starting position cross in pixel
                        (default: 1)
  --cross-gap-px FLOAT  Gap to starting position cross in pixel (default: 4)

For text on the river, the characters are:

  • Chu river: 楚河
  • Han border: 漢界 traditional, 汉界 simplified

On the Open Source font end of things these are your main options to my understanding:

On a side note, in Gentoo Linux look for these packages:

  • Adobe Source Han Sans: media-fonts/source-han-sans (gentoo-zh overlay)
  • AR PL UKai/UMing CN/TW: media-fonts/arphicfonts
  • Google Noto Sans CJK: media-fonts/notofonts (betagarden overlay)
  • Wangfonts: media-fonts/wangfonts (gentoo-zh overlay)
  • WenQuanYi Micro/Zen Hei: media-fonts/wqy-microhei, media-fonts/wqy-zenhei

If you use the xiangqi-setup tool to generate images, feel free to drop me a mail; I would be curious to see your results and check out your custom themes. I would not mind a free copy of your book, either :)

Cheers!

August 03, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

One minute version

If you're buying an English book on Xiangqi, do not buy “A Beginners guide to Xiangqi” by Tyler Rea: it's a rip-off and does more harm than good.
For something in English, you could go with “Chinese Chess: An Introduction to China's Ancient Game of Strategy” by H. T. Lau in print, or browse www.xqinenglish.com instead.

Disclaimer

  • First, sorry for the poor picture quality!
  • If I highlight, say, two errors in a picture below, it does not mean there are only two.
  • This review is not meant to be complete: it is already way too long as it is.

How did I get here?

I ran into this video on YouTube:

While I’ve been playing Xiangqi for quite a while already, I was thinking “that book looks like fun, I’ll buy it just to have a closer look, maybe there’s something in there that I haven’t seen yet, too“. So I did have some expectations.

I started skim-reading through the book and soon stumbled over error after error, to the point of laugh-or-cry. It starts on the cover page already. Read on for details.

Quick review summary

  • Whole chapters match Internet content 1:1 (mostly xqinenglish.com, also en.wikipedia.org).
  • About 100 pages are wasted on two uncommented example games, one move per page.
  • Of those, the first example game is declared a win when the king is not even in check.
  • Poor teaching (misleading, logic errors, no examples where needed)
  • No chapter explaining AXF/WXF (or any) move notation (despite its use in the book)
  • Many errors in details, spelling, case, punctuation (even on the cover page)

The book lacks any declaration of an edition or a print date. It does say “Printed in Germany by Amazon Distribution GmbH, Leipzig” at the end.
According to the book's Amazon page, it was published at/by CreateSpace: Self Publishing and Free Distribution for Books, CD, DVD (“an Amazon company”) in 2013.

This “Beginners guide” is actually a beginner's guide. Maybe that's a typo, too.

The cover says it loud and clear


Interesting things to spot on the cover:

  • “Writen” should have been “Written” with double “t”
  • On the bottom, Red’s king and elephant use the characters of Black
  • Red’s pawns show characters of Black’s pawns and vice versa

Too bad I noticed all of that after buying.

Whole chapters match Internet content 1:1

So far I have identified these matches with Internet content (some 1:1, some adjusted):

The author of xqinenglish.com confirmed to me via e-mail that the use of his content in that book has not been authorized/licensed by him.

The two places where I noticed copying first:

On page 170 it reads “term use[d] on this site” rather than “in this book“:

On page 49, “Basic, commonly used tactics in Xiangqi” is not a complete sentence and does not have a full stop either. That is because, in the original, it is the title of a section, not a sentence.

On the example games

While showing a complete game move by move could be helpful in teaching, a few things went wrong in general:

  • Moves should be commented, e.g. “Red is attacking piece X to make up for Black's attack against ..” rather than “Black has responded by advancing the horse”.
  • WXF/AXF and algebraic move notation could have been shown: there is plenty of space for that.
  • Since there are no arrows indicating the current move on the board, “finding” the move is much more work than necessary, especially when turning pages.
  • No more than a single move per page is a plain waste of paper.
  • Roughly 100 pages for two example games make up more than half the page count.

On the first game in particular: the game ends at move #79 by Red, which is commented as:

WINNING Move #79, Red secures Checkmate. Black’s General is it check from Red’s Chariot and Red’s General as part of a Face to face laughing check.

That’s rather surprising since Black is not even in check after Red has moved. Is this a joke?

On the second game: page 167 shows the second-to-last move of the game. Black responds with R9+2 / Ri0-i8. How does that do anything against the threat of C5=6 / Ce5-d5 by Red? What about C2=5 / Cb8-e8 as a proper reply? If we assume a really poor opponent, maybe that deserves mentioning.

Also, let me use the occasion to point out the characters used for Red’s king and advisors. A friend of mine who has been studying English, Chinese and German literature on Xiangqi for years said he has never seen those characters used anywhere in the context of Xiangqi.

The quality of teaching

This page is meant to teach movement of the king:

What I see is:

  • Pictures indicating movement of more than one step at a time
  • Lack of a system: it’s neither where-to-go-from-here nor where-could-he-have-come-from

Up next is the oddity I noticed first: how would the two pawns go sideways before crossing the river?

This page is meant to explain movement of the horse. With so few blocking pieces, there are quite a few move options missing.

Here the idea seems to have been to show all possible steps a pawn can take. However, lots of arrows are missing:

This one is really bad: If I hop my cannon right in front of the king with no protection, he eats my cannon and that’s it:

To me, this dry list of checkmates clearly lacks examples:

No chapter on move notation

Despite the use of AXF notation in the book, there is no chapter explaining how to read or write that (or any) move notation. Why not?

Many errors in details, spelling, case, punctuation

In a printed book, spelling mistakes are expected to be hard to find. Not so with this book; a few examples follow.
On page 4 we can see how punctuation is pulled into the quotes — always except once:

The word “Xiangqi” can be found in the book as “Xiàngqí“, “Xiànqí” (typo, lacks “g“) and “Xiangqi” (without accents):

Also, uppercase is used at interesting places, e.g.

  • Page 44: “leap frog Perpetually”
  • Page 45: “The Anatomy and structure”
  • Page 39: “to Augment your defense”

to name a few.

That’s all for the moment.

(Update 2014-08-31: The book is out of stock on Amazon.com and Amazon.de; as of today, it is sold by a reseller for 32.05 EUR at Amazon.de and for 189.20(!) USD at Amazon.com.)

Anthony Basile a.k.a. blueness (homepage, bugs)

When portage installs a package onto your system, it caches information about that package in a directory at /var/db/pkg/<cat>/<pkg>/, where <cat> is the category (i.e. ${CATEGORY}) and <pkg> is the package name, version number and revision number (i.e. ${PF}). This information can then be used at a later time to tell portage about what's installed on a system: what packages were installed, what USE flags are set on each package, what CFLAGS were used, etc. Even the ebuild itself is cached, so that if it is removed from the tree, and consequently from your system upon `emerge --sync`, you have a local copy in VDB to uninstall or otherwise continue working with the package.

If you take a look under /var/db/pkg, you'll find some interesting and some not so interesting files for each <cat>/<pkg>. Among the less interesting are files like DEPEND, RDEPEND, FEATURES, IUSE and USE, which just contain the same values as the ebuild variables of the same name. This is redundant, because that information is in the ebuild itself, which is also cached, but it is more readily available since one doesn't have to re-parse the ebuild to obtain it. More interesting is information gathered about the package as it is installed, like CONTENTS, which contains a list of all the regular files, directories and symlinks which belong to the package, along with their MD5SUMs. This list is used to remove files from the system when uninstalling the package. Environment information is also cached, like CBUILD, CHOST, CFLAGS, CXXFLAGS and LDFLAGS, which affect the build of compiled packages, and environment.bz2, which contains the entire shell environment that portage ran in, including all shell variables and functions from inherited eclasses.

But perhaps the most interesting information, and the most expensive to recalculate, is cached in NEEDED and NEEDED.ELF.2. The latter supersedes the former, which is only kept for backward compatibility, so let's just concentrate on NEEDED.ELF.2. It's a list of every ELF object that is installed for a package, along with its ARCH/ABI information, its SONAME if it is a shared object (readelf -d <obj> | grep SONAME, or scanelf -S), any RPATH used to search for its needed shared objects (readelf -d <obj> | grep RPATH, or scanelf -r), and any NEEDED shared objects (the SONAMEs of libraries) that it links against (readelf -d <obj> | grep NEEDED, or scanelf -n). [1] Unless you're working with some exotic system, like an embedded image where everything is statically linked, your userland utilities and applications depend on dynamic linking, meaning that when a process is loaded from the executable on your hard drive, the linker has to make sure that its needed libraries are also loaded, and then do some relocation magic to make sure that unresolved symbols in your executable get mapped to appropriate memory locations in the libraries.

The subtleties of linking are beyond the scope of this blog post [2], but I think it's clear from the previous paragraph that one can construct a "directed linkage graph" [3] of dependencies between all the ELF objects on a system. An executable can link to a library which in turn links to another, and so on, usually back to your libc [4]. `readelf -d <obj> | grep NEEDED` only gives you the immediate dependencies, but if you follow these through recursively, you'll get all the libraries that an executable needs to run. `ldd <obj>` is a shell script which provides this information, as does ldd.py from the pax-utils package, which also does some pretty indentation to show the depth of the dependencies. If this is sounding vaguely familiar, it's because portage's dependency rules "mimic" the underlying linking which is needed at both compile time and run time. Let's take an example: curl compiled with polarssl as its SSL backend:

# ldd /usr/bin/curl | grep ssl
        libpolarssl.so.6 => /usr/lib64/libpolarssl.so.6 (0x000003a3d06cd000)
# ldd /usr/lib64/libpolarssl.so.6
        linux-vdso.so.1 (0x0000029c1ae12000)
        libz.so.1 => /lib64/libz.so.1 (0x0000029c1a929000)
        libc.so.6 => /lib64/libc.so.6 (0x0000029c1a56a000)
        /lib64/ld-linux-x86-64.so.2 (0x0000029c1ae13000)

Now let’s see this dependency reflected in the ebuild:

# cat net-misc/curl/curl-7.36.0.ebuild
RDEPEND="
        ...
        ssl? (
                ...
                curl_ssl_polarssl? ( net-libs/polarssl:= app-misc/ca-certificates )
                ...
        )
        ...

Nothing surprising. However, there is one subtlety. What happens if you update polarssl to a version which is not exactly backwards compatible? Then curl, which properly linked against the old version of polarssl, doesn't quite work with the new version. This can happen when the library changes its public interface, by either adding new functions, removing older ones and/or changing the behavior of existing ones. Usually upstream indicates this change in the library itself by bumping the SONAME:

# readelf -d /usr/lib64/libpolarssl.so.1.3.7 | grep SONAME
0x000000000000000e (SONAME) Library soname: [libpolarssl.so.6]

But how does curl know about the change when emerging an updated version of polarssl? That’s where subslotting comes in. To communicate the reverse dependency, the DEPEND string in curl’s ebuild has := as the slot indicator for polarssl. This means that upgrading polarssl to a new subslot will trigger a recompile of curl:

# emerge =net-libs/polarssl-1.3.8 -vp

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild r U ] net-libs/polarssl-1.3.8:0/7 [1.3.7:0/6] USE="doc sse2 static-libs threads%* zlib -havege -programs {-test}" ABI_X86="(64) (-32) (-x32)" 1,686 kB
[ebuild rR ] net-misc/curl-7.36.0 USE="ipv6 ldap rtmp ssl static-libs threads -adns -idn -kerberos -metalink -ssh {-test}" CURL_SSL="polarssl -axtls -gnutls -nss -openssl" 0 kB

Here the onus is on the downstream maintainer to know when the API breaks backwards compatibility and subslot accordingly. Going through with this build and then checking the new SONAME we find:

# readelf -d /usr/lib/libpolarssl.so.1.3.8 | grep SONAME
0x000000000000000e (SONAME) Library soname: [libpolarssl.so.7]

Aha! Notice the SONAME jumped from .6 for polarssl-1.3.7 to .7 for 1.3.8. Also notice that the SONAME version number follows the subslot value. I'm sure this was a conscious effort by hasufell and tommyd, the ebuild maintainers, to make life easy.

So I hope my example has shown the importance of tracing forward and reverse linkage between the ELF objects on a system [5]. Subslotting is relatively new, but the need to trace linking has always been there. There was, and still is, revdep-rebuild (from gentoolkit), which uses output from ldd to construct a "directed linkage graph" [6], but it is relatively slow. Unfortunately, it recalculates all the NEEDED.ELF.2 information on the system in order to reconstruct and invert the directed linkage graph. Subslotting has partially obsoleted revdep-rebuild because portage can now track the reverse dependencies, but it has not completely obsoleted it. revdep-rebuild falls back on the SONAMEs in the shared objects themselves — an error here is an upstream error, in which the maintainers of the library overlooked updating the value of CURRENT in the build system, usually in a line of some Makefile.am that looks like

LDFLAGS += -version-info $(CURRENT):$(REVISION):$(AGE)

But an error in subslotting is a downstream error, where the maintainers didn't properly subslot their package and any dependencies to reflect upstream's changing API. So in some ways, these tools complement each other.

Now we come to the real point of this blog post: there is no reason for revdep-rebuild to run ldd on every ELF object on the system when it can obtain that information from VDB. This doesn't save time on inverting the directed graph, but it does save time on running ldd (effectively /lib64/ld-linux-x86-64.so.2 --list) on every ELF object in the system. So guess what the python version, revdep-rebuild.py, does? You guessed it: it uses the VDB information, which is exported by portage via something like

import portage
vardb = portage.db[portage.root]["vartree"].dbapi

So what’s the difference in time? On my system right now, we’re looking at a difference between approximately 5 minutes for revdep-rebuild versus about 20 seconds for revdep-rebuild.py. [7] Since this information is gathered at build time, there is no reason for any Package Management System (PMS) to not export it via some standarized API. portage does so in an awkward fashion but it does export it. paludis does not export NEEDED.ELF.2 although it does export other VDB stuff. I can’t speak to future PMS’s but I don’t see why they should not be held to a standard.

Above I argued that exporting VDB information is useful for utilities that maintain consistency between executables and the shared objects that they consume. I suspect one could counter-argue that it doesn't need to be exported because "revdep-rebuild" can be made part of portage or whatever your PMS is, but I hope my next point will show that exporting NEEDED.ELF.2 information has other uses besides "consistent linking". So a stronger point is that, not only should a PMS export this information, but it should also provide some well-documented API for use by other tools. It would be nice for every PMS to have the same API, preferably via python bindings, but as long as it is well documented, it will be useful. (E.g. webapp-config supports both portage and paludis. WebappConfig/wrapper.py has a simple little switch between "import portage; ... portage.settings['CONFIG_PROTECT'] ... " and "cave print-id-environment-variable -b --format '%%v\n' --variable-name CONFIG_PROTECT %s/%s ...".)

So besides consistent linking, what else could make use of NEEDED.ELF.2? In the world of Hardened Gentoo, to increase security, a PaX-patched kernel holds processes to much higher standards with respect to their use of memory. [8] Unfortunately, this breaks some packages which want to employ insecure methods, like RWX mmap-ings. Code is compiled "on-the-fly" by JIT compilers, which typically create such mappings as an area to which they first write and then execute. However, this is dangerous because it can open up pathways by which arbitrary code can be injected into a running process. So, PaX does not allow RWX mmap-ings — it doesn't allow them unless the kernel is told otherwise. This is where the PaX flags come in. In the JIT example, marking the executables with `paxctl-ng -m` will turn off PaX's MPROTECT and allow the RWX mmap-ing. The issue of consistent PaX markings between executables and their libraries arises when it is the library that needs the markings. But when loaded, it is the markings of the executable, not the library, which set the PaX restrictions on the running process. [9] So if it's the library that needs the markings, you have to migrate the markings from the library to the executable. Aha! Here we go again: we need to answer the question "what are all the consumers of a particular library, so we can migrate its flags to them?" We can, as revdep-rebuild does, re-read all the ELF objects on the system, reconstruct the directed linkage graph, and then invert it; or we can just start from the already gathered VDB information and save some time. Like revdep-rebuild and revdep-rebuild.py, I wrote two utilities. The original, revdep-pax, did forward and reverse migration of PaX flags by gathering information with ldd. It was horribly slow, taking 5 to 10 minutes depending on the number of objects in $PATH and shared objects reported by `ldconfig -p`. I then rewrote it to use the VDB information, and it accomplishes the same task in a fraction of the time [10]. Since constructing and inverting the directed linkage graph is such a useful operation, I figured I'd abstract the bare essential code into a python class, which you can get at [11]. The data structure containing the entire graph is a compound python dictionary of the form

{
        abi1 : { path_to_elf1 : [ soname1, soname2, ... ], ... },
        abi2 : { path_to_elf2 : [ soname3, soname4, ... ], ... },
        ...
}

whereas the inverted graph has form

{
        abi1 : { soname1 : [ path_to_elf1, path_to_elf2, ... ], ... },
        abi2 : { soname2 : [ path_to_elf3, path_to_elf4, ... ], ... },
        ...
}

Simple!
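
If you want to play with this yourself, the graph can be built straight from the cached files, without any PMS API at all. A sketch, assuming each NEEDED.ELF.2 line has the semicolon-separated form arch;path;soname;rpath;needed, with the needed SONAMEs comma-separated:

import glob

graph = {}
for needed_path in glob.glob('/var/db/pkg/*/*/NEEDED.ELF.2'):
    with open(needed_path) as needed_file:
        for line in needed_file:
            fields = line.strip().split(';')
            if len(fields) < 5:
                continue
            arch, elf, _soname, _rpath, needed = fields[:5]
            graph.setdefault(arch, {})[elf] = [s for s in needed.split(',') if s]

# Invert: for each ABI, map a SONAME to the ELF objects that consume it.
inverted = {}
for arch, elfs in graph.items():
    for elf, sonames in elfs.items():
        for soname in sonames:
            inverted.setdefault(arch, {}).setdefault(soname, []).append(elf)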

Okay, up to now I've concentrated on exporting NEEDED.ELF.2 information. So what about the rest of the VDB information? Is it useful? A lot of questions regarding Gentoo packages can be answered by "grepping the tree." If you use portage as your PMS, then the same sort of grep-sed-awk foo magic can be performed on /var/db/pkg to answer similar questions. However, this assumes that the PMS's cached information is in plain ASCII format. If a PMS decides to use something like Berkeley DB or sqlite, then we're going to need a tool to read the db format, which the PMS itself should provide. Because I do a lot of release engineering of uclibc and musl stages, one need that often comes up is comparing what's installed in the stage3 tarballs for the various arches and alternative libc's. So, I run some variation of the following script

#!/usr/bin/env python

import portage, re

portdb = portage.db[portage.root]["vartree"].dbapi

arm_stable = open('arm-stable.txt', 'w')
arm_testing = open('arm-testing.txt', 'w')

for pkg in portdb.cpv_all():
        keywords = portdb.aux_get(pkg, ["KEYWORDS"])[0]
        arches = re.split('\s+', keywords)
        for a in arches:
                if re.match('^arm$', a):
                        arm_stable.write("%s\n" % pkg)
                if re.match('^~arm$', a):
                        arm_testing.write("%s\n" % pkg)

arm_stable.close()
arm_testing.close()

in a stage3-amd64-uclibc-hardened chroot to see what stable packages in the amd64 tarball are ~arm. [12] I run similar scripts in other chroots to do pairwise comparisons. This gives me some clue as to what may be falling behind in which arches — to keep some consistency between my various stage3 tarballs. Of course there are other utilities to do the same, like eix, gentoolkit, etc., but then one still has to resort to parsing the output of those utilities to get the answers one wants. An API for VDB information allows you to write your own custom utility to answer precisely the questions you need answered. I'm sure you can multiply these examples.

Let me close with a confession. The above is propaganda for the upcoming GLEP 64, which I just wrote [13]. The purpose of the GLEP is to delineate what information should be exported by all PMSs, with particular emphasis on NEEDED.ELF.2 for the reasons stated above. Currently portage does provide NEEDED.ELF.2, but paludis does not. I'm not sure what future PMSs might or might not provide, so let's set a standard now for an important feature.

 

Notes:

[1] You can see where NEEDED.ELF.2 is generated for details. Take a look at line ~520 of /usr/lib/portage/bin/misc-functions.sh, or search for the comment “Create NEEDED.ELF.2 regardless of RESTRICT=binchecks”.

[2] A simple hands-on tutorial can be found at http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html. It also covers dynamic linking via dlopen(), which complicates the nice neat graph that can be constructed from NEEDED.ELF.2.

[3] I’m using the term “directed graph” as defined in graph theory. See http://en.wikipedia.org/wiki/Directed_graph. The nodes of the graph are each ELF object and the directed edges are from the consumer of the shared object to the shared object.

[4] Well, not quite. If you run readelf -d on /lib/libc.so.6 you'll see that it links back to /lib/ld-linux-x86-64.so.2, which doesn't NEED anything else. The former is strictly your standard C library (man 7 libc) while the latter is the dynamic linker/loader (man 8 ld.so).

[5] I should mention parenthetically that there are other executable/library file formats, such as Mach-O used on Mac OS X. The above arguments translate over to any executable format which permits shared libraries and dynamic linking. My prejudice for ELF is because it is the primary executable format used on Linux and BSD systems.

[6] I’m coining this term here. If you read the revdep-rebuild code, you won’t see reference to any graph there. Bash doesn’t readily lend itself to the neat data structures that python does.

[7] Just a word of caution, revdep-rebuild.py is still in development and does warn when you run it “This is a development version, so it may not work correctly. The original revdep-rebuild script is installed as revdep-rebuild.sh”.

[8] See https://wiki.gentoo.org/wiki/Hardened/PaX_Quickstart for an explanation of what PaX does as well as how it works.

[9] grep the contents of fs/binfmt_elf.c for PT_PAX_FLAGS and CONFIG_PAX_XATTR_PAX_FLAGS to see how these markings are used when the process is loaded from the ELF object. You can see the PaX protection on a running process by using `cat /proc/<pid>/maps | grep ^PaX` or `pspax` from the pax-utils package.

[10] The latest version off the git repo is at http://git.overlays.gentoo.org/gitweb/?p=proj/elfix.git;a=blob;f=scripts/revdep-pax.

[11] http://git.overlays.gentoo.org/gitweb/?p=proj/elfix.git;a=blob;f=pocs/link-graph/link_graph.py.

[12] These stages are distributed at http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-uclibc-hardened/ and http://distfiles.gentoo.org/experimental/arm/uclibc/.

[13] https://bugs.gentoo.org/show_bug.cgi?id=518630

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

In a previous post, we've already looked at the structure of Perl ebuilds in Gentoo Linux. Now, let's see what happens in the case of a major Perl update.

Does this look familiar?

UPDATE THE PERL MODULES:
After updating dev-lang/perl you must reinstall
the installed perl modules.
Use: perl-cleaner --all
Then maybe you have updated your major Perl version recently, since this important message is printed by emerge afterwards. So, what is it about? In short, a certain disconnect between the "Perl way" of doing things and the rest of the world. Both have their merits, they just don't play very well with each other... and the result is that major Perl updates in Gentoo have traditionally also been a major pain. (This will become much better in the future, see below.)

Let's see where a perl package stores its files.
caipi ~ # equery files dev-perl/Email-Address
 * Searching for Email-Address in dev-perl ...
 * Contents of dev-perl/Email-Address-1.898.0:
/usr
/usr/lib
/usr/lib/perl5
/usr/lib/perl5/vendor_perl
/usr/lib/perl5/vendor_perl/5.16.3
/usr/lib/perl5/vendor_perl/5.16.3/Email
/usr/lib/perl5/vendor_perl/5.16.3/Email/Address.pm
/usr/share
/usr/share/doc
/usr/share/doc/Email-Address-1.898.0
/usr/share/doc/Email-Address-1.898.0/Changes.bz2
/usr/share/doc/Email-Address-1.898.0/README.bz2
caipi ~ #
Interesting: the installation path contains the Perl version! The reasons for upstream to do this are pretty obvious: the application binary interface for compiled modules can change, and it's necessary to keep the installed modules for different versions apart. Also, in theory you can keep different Perl versions installed in parallel. Nice idea; however, if you have only one "system Perl" installation and you exchange that for a newer version (say, 5.18.1 instead of 5.16.3), the result is that the new version won't find the installed packages anymore.

The results are rather annoying. Imagine you haven't updated your system for a while, one of the many packages to be updated is dev-lang/perl, and later maybe (just picking an example at random) gnome-base/gsettings-desktop-schemas. Perl is updated fine, but when portage arrives at building the gnome package, the build fails with something like
checking for perl >= 5.8.1... 5.18.2
checking for XML::Parser... configure: error: XML::Parser perl module is required for intltool
Right. Perl is updated, dev-perl/XML-Parser is still installed in the old path, and Perl doesn't find it. Bah.

Enter perl-cleaner, the traditional "solution". This small program checks for files in "outdated" Perl installation paths, finds out which packages they belong to, and makes portage rebuild the corresponding packages. During the rebuild, the installation is run by the updated Perl, which makes the files go into the new, now correct path.

This sounds like a good solution, but there are a lot of details and potential problems hidden. For one, most likely you'll run perl-cleaner after a failed emerge command, and some unrelated packages still need updates. Portage will try to figure out how to do this, but blockers and general weirdness may happen. Then, sometimes a package isn't needed with the new Perl version anymore, but perl-cleaner can't know that. Again the result may be a blocker. We've added the following instructions to the perl-cleaner output, which may help clean up the most frequent difficulties:
 * perl-cleaner is stopping here:
 * Fix the problem and start perl-cleaner again.
 *
 * If you encounter blockers involving virtuals and perl-core, here are
 * some things to try:
 *   Remove all perl-core packages from your world file
 *     emerge --deselect --ask $(qlist -IC 'perl-core/*')
 *   Update all the installed Perl virtuals
 *     emerge -uD1a $(qlist -IC 'virtual/perl-*')
 *   Afterwards re-run perl-cleaner
In the end, you may have to try several repeated emerge and perl-cleaner commands until you have an updated and consistent system again. So far, it always worked somehow with fiddling, but the situation was definitely not nice.

So what's the future? Well...

EAPI=5 brings the beautiful new feature of subslots and slot operator dependencies. In short, a package A may declare a subslot, and a package B that depends on A may declare "rebuild me if A changes subslot". This mechanism is now used to automate the Perl rebuilds directly from within emerge: dev-lang/perl declares a subslot corresponding to its major version, say "5.18", and every package that installs Perl modules needs to depend on it with the subslot-rebuild requested, e.g.
RDEPEND="dev-lang/perl:="
The good news about this is that portage now knows the dependency tree and can figure out the correct reinstallation order.
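
To make both sides of the mechanism concrete, here is a minimal sketch (the version numbers are illustrative, not quoted from the actual ebuilds):

# in dev-lang/perl: the slot plus a subslot naming the major version
SLOT="0/5.18"
# in a module ebuild: "rebuild me whenever the subslot of dev-lang/perl changes"
RDEPEND="dev-lang/perl:="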

The bad news is, it can only work perfectly after all Perl packages have been converted to EAPI=5 and stabilized. perl-core is done, but with about 2100 ebuilds in the portage tree still using perl-module.eclass, quite some work remains. I've plotted the current EAPI distribution of ebuilds using perl-module.eclass in a pie chart for illustration... Maybe we're done when Perl 5.20 goes stable. Who knows. :)

August 02, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
unpaper and libav (August 02, 2014, 18:08 UTC)

I've resumed working on unpaper since I have been using it more than a couple of times lately and there have been a few things that I wanted to fix.

What I've been working on now is a way to read input files in more formats; I was really aggravated by the fact that unpaper implemented its own loading of a single set of file formats (the PPM "rawbits"). I went on to look into libraries that abstract access to image formats, but I couldn't find one that would work for me. In the end I settled on libav, even though it's not exactly known for being an image processing library.

My reason for choosing libav was mostly the fact that, while it does not support all the formats I'd like to have supported in unpaper (PS and PDF come to mind), it does support the formats that unpaper supports now (PNM and company), and I know the developers well enough that I can get bugs and features fixed or implemented as needed.

I now have a branch that can read files using libav. It's a very naïve implementation of it though: it reads the image into an AVFrame structure and then converts that into unpaper's own image structure. It does not even free up the AVFrame, mostly because I'd actually like to be able to use AVFrame instead of unpaper's structure. Not only to avoid copying memory when it's not required (libav has functions to do shallow-copy of frames and mark them as readable when needed), but also because the frames themselves already contain all the needed information. Furthermore, libav 12 is likely going to include libavscale (or so Luca promised!) so that the on-load conversion can also be offloaded to the library.

Even with the naïve implementation that I wrote in half an afternoon, unpaper not only supports the same input files as before, but also PNG (24-bit non-alpha colour files are loaded the same way as PPM, 1-bit black and white is inverted compared to PBM, while 8-bit grayscale is actually 16-bit with half of it defining the alpha channel) and very limited TIFF support (1-bit is the same as PNG; 8-bit is paletted so I have not implemented it yet, and as for colour, I found out that libav does not currently support JPEG-compressed TIFF – I'll work on that if I can – but otherwise it is supported as it's simply 24bpp RGB).

What also needs to be done is to write out the file using libav too. While I don't plan to allow writing files in any random format with unpaper, I wouldn't mind being able to output through libav. Right now the way this is implemented, the code does explicit conversion back and forth between black/white, grayscale and colour at save time, and this is nothing different from the same conversion that happens at load time, and should rather be part of libavscale when that exists.

Anyway, if you feel like helping with this project, the code is on GitHub and I'll try to keep it updated.

August 01, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
More HTTP misbehaviours (August 01, 2014, 22:00 UTC)

Today I have been having some fun: while looking at the backlog on IRCCloud, I found out that it auto-linked Makefile.am, which I promptly decided to register with Gandi — unfortunately I couldn't get Makefile.in or configure.ac as they are both already registered. After that I decided to set up Google Analytics to report how many referrals arrive at my websites through some of the many vanity domains I registered over time.

After doing that, I spent some time staring at the web server logs to make sure that everything was okay, and I found out some more interesting things: it looks like a lot of people have been fetching my blog Atom feed through very bad feed readers. This is the reification of my forecast last year when Google Reader got shut down.

Some of the fetchers are open source, so I ended up opening issues for them, but that is not the case for all of them. And even when they are open source, sometimes they don't even accept pull requests implementing the feature, for whichever reason.

So this post is a bit of a name-and-shame, which can be positive for open-source projects when they can fix things, or negative for closed source services that are trying to replace Google Reader and failing to implement HTTP properly. It will also serve as a warning for my readers on those services, as they'll stop being able to fetch my feed pretty soon: I'll update my ModSecurity rules to stop these people from fetching my blog.

As I noted above, both Stringer and Feedbin fail to properly use compressed responses (gzip compression), which means that they fetch over 90KiB on every request instead of just 25KiB. The Stringer devs already reacted and seem to be looking into fixing this very soon now. From Feedbin I have no answer yet (though it has only been a short while), and it worries me for another reason too: it does not do any caching at all. And somebody set up a Feedbin instance at the Prague University that fetches my feed, without compression, without caching, every two minutes. I'm going to blacklist it soon.
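
If you run one of these fetchers and are curious whether the server could do better, both behaviours are easy to probe from a shell; a quick sketch with curl (the feed URL is a placeholder):

# does the server offer gzip? look for a "Content-Encoding: gzip" reply header
curl -sI -H 'Accept-Encoding: gzip' http://example.com/articles.atom | grep -i content-encoding

# conditional re-fetch: only download if newer than the local copy, expect a 304 otherwise
curl -s -z feed.atom -o feed.atom -w '%{http_code}\n' http://example.com/articles.atom

A well-behaved client sends both the Accept-Encoding header and the validators (If-Modified-Since or If-None-Match) it received from the previous response.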

Gwene still has not replied to the pull request I sent in October 2012, but on the bright side, it has not fetched my blog in a long time. Feedzirra (now Feedjira), used by IFTTT, still does not enable compressed responses by default, even though it seems to support the option (Stringer is also based on it, it seems).

It's not just plain feed readers that fail at implementing HTTP. The distributed social network Friendica – which aims at doing a better job than Diaspora at that – seems to also forget about implementing either compressed responses or caching. At least it seems to only fetch my feed every twelve hours. On the other hand, it seems to also get someone's timeline from Twitter, so when it encounters a link to my blog it first sends a HEAD request, and then fetches the page. Three times. Also uncompressed.

On the side of non-open-source services, FeedWrangler has probably one of the worst implementations of HTTP I've ever seen: it does not support compressed responses (90KiB per fetch), does not do caching (every time!), and while it does fetch at one hour intervals, it does not understand that a 301 is a permanent redirection, and there's no point in keeping around two feed IDs for /articles.rss and /articles.atom (each with one subscriber). That's 4MiB a day, which is around 2% of the bandwidth my website serves. While this is not an important amount, and I don't have limitations on the server's egress, it seems silly that 2% of my bandwidth is consumed by two subscribers, when the site has over a thousand visitors a day.

But what takes the biscuit is definitely FeedMyInbox: while it fetches only every six hours, it implements neither caching nor compression. And I found it only when looking into the requests coming from bots without a User-Agent header. The requests come from 216.198.247.46 which is svr.feedmyinbox.com. I'm soon also going to blacklist this until they stop being douches and provide a valid user agent string.

They are by far not the only ones though; there is another bot that fetches my feed every three hours that will soon follow the same destiny. But this does not have an obvious service attached to it, so if whatever you're using to read my blog tells you it can't fetch my blog anymore, try to figure out if you're using a douchereader.

Please remember that software on the net should be implemented for collaboration between client and server, not for exploitation. Everybody's bandwidth gets worse when you heavily use a service that is not doing its job at optimizing bandwidth usage.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened July meeting (August 01, 2014, 19:48 UTC)

I failed to show up myself (I fell asleep – kids are fun, but deplete your energy source quickly), but that shouldn’t prevent me from making a nice write-up of the meeting.

Toolchain

GCC 4.9 gives some issues with kernel compilations and other components. Lately, breakage has been reported with GCC 4.9.1 compiling MySQL or with debugging symbols. So for hardened, we’ll wait this one out until the bugs are fixed.

For GCC 4.10, the --enable-default-pie patch has been sent upstream. If that is accepted, the SSP one will be sent as well.

In uclibc land, stages are being developed for PPC. This is the last of the architectures commonly used in the embedded world that still needed uclibc support in Gentoo, and that support is now being finalized. Go blueness!

SELinux

A libpcre upgrade broke relabeling operations on SELinux enabled systems. A fix for this has been made part of libselinux, but a little too late, so some users will be affected by the problem. It’s easily worked around (removing the *.bin files in the contexts/files/ directory of the SELinux configuration) and hopefully will never occur again.

The 2.3 userland has finally been stabilized (we had a few dependencies that we were waiting for – and we were a dependency ourselves for other packages as well).

Finally, some thought and discussion is going into the SELinux policy within Gentoo (and the principles behind it that we'll follow). There isn't much feedback on it yet, but every documented step is a good step imo.

Kernel and grsecurity / PaX

Due to some security issues, the Linux kernel sources have been stabilized more rapidly than usual, which left little time for broad validation and regression testing. Updates and fixes have been applied since and new stabilizations occurred. Hopefully we’re now at the right, stable set again.

The C-based install-xattr application (which is performance-wise a big improvement over the Python-based one) is working well in “lab environments” (some developers are using it exclusively). It is included in the Portage repository (if I understand the chat excerpts correctly) but as such not available for broader usage yet.

An update to elfix was made as well, as there was a dependency mismatch when building with USE=-ptpax. This will be corrected in elfix-0.9.

Finally, blueness is also working on a GLEP (Gentoo Linux Enhancement Proposal) to export VDB information (especially NEEDED.ELF.2) as this is important for ELF/library graph information (as used by revdep-pax, migrate-pax, etc.). Although Portage already does this, this is not part of the PMS and as such other package managers might not do this (such as Paludis).

Profiles

Updates to the profiles have been made to properly include multilib related variables and other metadata. For some profiles, this went as easily as expected (nice stacking), but other profiles have inheritance troubles making it much harder to include the necessary information. Although some talks have arisen on the gentoo-dev mailinglist about refactoring how Gentoo handles profiles, not much more than talking has been done yet :-( But I'm sure we haven't heard the last of this.

Documentation

Blueness has added information on PaX EMUTRAMP in the kernel configuration, especially noting to the user that it is needed for Python support in Gentoo Hardened. It is also in the PaX Quickstart document, although that document is becoming a very large one and users might overlook it.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Europython 2014 (August 01, 2014, 14:29 UTC)

I had the chance to participate in EuroPython 2014 as my company was sponsoring the event.

This was a great week where I got to meet some very interesting people and hear about some neat python use cases, libraries and new technologies so I thought I’d write a quick summary of my biased point of view.

ZeroMQ

I had the chance to meet Pieter Hintjens and participate in a 3 hour workshop on ZeroMQ. It was very interesting and refreshing to go into more depth on this technology, which I've been using in production for several years now.

Pieter is also quite a philosophical person and I strongly encourage you to listen to his keynote. I ended up pinging him in real life about a libzmq issue I'd been waiting on a bug fix for, and it got answered nicely.

uWSGI

Another big thing in our python stack is the uWSGI application container, which I love and follow closely even if my lack of knowledge of C prevents me from going too deep into the source code… I got the chance to speak with Roberto De Ioris about the next 2.1 release and to propose two new features to him.

Thanks a lot for your consideration Roberto !

Trends

  • Not tested = broken !
  • Python is strong and very lively in the Big Data world
  • Asynchronous and distributed architectures get more and more traction and interest

Videos

All the talk videos are already online, you should check them out!

July 27, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)


We've got the stabilization of Perl 5.18 upcoming, so what better chance is there to explain a bit how the Perl-related ebuilds in Gentoo work...

First of all, there is dev-lang/perl. This contains the Perl core distribution, installing the binaries and all the Perl modules that are bundled with Perl itself.

Then, there is the perl-core category. It contains independent ebuilds for Perl modules that are also present in the core Perl distribution. Most Perl modules that are bundled with Perl are also in addition released as independent tarballs. If any of these packages is installed from perl-core, its files are placed such that the perl-core download overrides the bundled copy. This means you can also update part of the bundled Perl modules, e.g. in case of a bug, without updating Perl itself.

Next, there are a lot of virtuals "virtual/perl-..." in the virtual category of the portage tree. What are these good for? Well, imagine you want to depend on a specific version of a module that is usually bundled with Perl. For example, you need Module::CoreList at version 3 or newer. This can either be provided by a new enough Perl (for example, the now hardmasked Perl 5.20 contains Module::CoreList 3.10), or by a separate package from perl-core (where we have Module::CoreList 5.021001 as perl-core/Module-CoreList-5.21.1).
To make sure that everything works, you should never directly depend on a perl-core package, but always on the corresponding virtual (here virtual/perl-Module-CoreList; any perl-core package must have a corresponding virtual). Then both ways to fulfil the dependency are automatically taken into account. (Future repoman versions will warn if you directly depend on perl-core. Also you should never have anything perl-core in your world file!)
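
In ebuild terms, the dependency would thus look something like this (a sketch; the version bound is taken from the example above):

# right: the virtual, satisfiable by a new enough dev-lang/perl or by perl-core
RDEPEND=">=virtual/perl-Module-CoreList-3"
# wrong: pins the standalone copy and ignores the one bundled with Perl
# RDEPEND=">=perl-core/Module-CoreList-3"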

Last, we have lots and lots of modules in the dev-perl category. Most of them are from CPAN, and the only thing they have in common is that they have no copy inside core Perl.

I hope this clarifies things a bit. More Perl posts coming...

July 22, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LibreSSL: drop-in and ABI leakage (July 22, 2014, 23:09 UTC)

There has been some confusion on my previous post with Bob Beck of LibreSSL on whether I would advocate for using a LibreSSL shared object as a drop-in replacement for an OpenSSL shared object. Let me state this here, boldly: you should never, ever, for any reason, use shared objects from different major/minor OpenSSL versions or implementations (such as LibreSSL) as a drop-in replacement for one another.

The reason is, obviously, that the ABI of these libraries differs, sometimes subtly enough that they may actually load and run, but then perform abysmally insecure operations, as their data structures will have changed, and now instead of reading your random-generated key, you may be reading the master private key. And in general, for other libraries you may even be calling the wrong set of functions, especially for those written in C++, where the vtable content may be rearranged across versions.

What I was discussing in the previous post was the fact that lots of proprietary software packages, by bundling a version of Curl that depends on the RAND_egd() function, will require either unbundling it, or keeping along a copy of OpenSSL to use for runtime linking. And I think that is a problem that people need to consider now rather than later for a very simple reason.

Even if LibreSSL (or any other reimplementation, for what matters) takes hold as the default implementation for all Linux (and non-Linux) distributions, you'll never be able to fully forget about OpenSSL: not only if you have proprietary software that you maintain, but also because a huge amount of software (and especially hardware) out there will not be able to update easily. And the fact that LibreSSL is throwing away so much of the OpenSSL clutter also means that it'll be more difficult to backport fixes — while at the same time I think that a good chunk of the black hattery will focus on OpenSSL, especially if it feels "abandoned", while most of the users will still be using it somehow.

But putting aside the problem of the direct drop-in incompatibilities, there is one more problem that people need to understand, especially Gentoo users, and most other systems that do not completely rebuild their package set when replacing a library like this. The problem is what I would call "ABI leakage".

Let's say you have a general libfoo that uses libssl; it uses a subset of the API that works with both OpenSSL and LibreSSL. Now you have a bar program that uses libfoo. If the library is written properly, then it'll treat all the data structures coming from libssl as opaque, providing no way for bar to call into libssl without depending on the SSL API du jour (and thus putting a direct dependency on libssl for the executable). But it's very well possible that libfoo is not well-written and actually treats the libssl API as transparent. For instance, a common mistake is to use one of the SSL data structures inline (rather than as a pointer) in one of its own public structures.

This situation would be barely fine, as long as the data types for libfoo are also completely opaque, as then it's only the code for libfoo that relies on the structures, and since you're rebuilding it anyway (as libssl is not ABI-compatible), you solve your problem. But if we keep assuming a worst-case scenario, then you have bar actually dealing with the data structures, for instance by allocating a sized buffer itself, rather than calling into a proper allocation function from libfoo. And there you have a problem.

Because now the ABI of libfoo is not directly defined by its own code, but also by whichever ABI libssl has! It's a similar problem as the symbol table used as an ABI proxy: while your software will load and run (for a while), you're really using a different ABI, as libfoo almost certainly does not change its soname when it's rebuilt against a newer version of libssl. And that can easily cause crashes and worse (see the note above about dropping in LibreSSL as a replacement for OpenSSL).

Now honestly none of this is specific to LibreSSL. The same is true if you were to try using OpenSSL 1.0 shared objects for software built against OpenSSL 0.9 — which is why I cringed any time I heard people suggesting to use symlink at the time, and it seems like people are giving the same suicidal suggestion now with OpenSSL, according to Bob.

So once again, don't expect binary compatibility across different versions of OpenSSL, LibreSSL, or any other implementation of the same API, unless they explicitly aim for that (and LibreSSL definitely doesn't!).

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)

One of the first tools that you should get if you’re hacking with GStreamer or want to play with the latest version without doing evil things to your system is probably the gst-uninstalled script. It’s the equivalent of Python’s virtualenv for hacking on GStreamer. :)

The documentation around getting this set up is a bit frugal, though, so here’s my attempt to clarify things. I was going to put this on our wiki, but that’s a bit search-engine unfriendly, so probably easiest to just keep it here. The setup I outline below can probably be automated further, and comments/suggestions are welcome.

  • First, get build dependencies for GStreamer core and plugins on your distribution. Commands to do this on some popular distributions follow. This will install a lot of packages, but should mean that you won’t have to play find-the-plugin-dependency for your local build.

    • Fedora: $ sudo yum-builddep gstreamer1-*
    • Debian/Ubuntu: $ sudo apt-get build-dep gstreamer1.0-plugins-{base,good,bad,ugly}
    • Gentoo: having the GStreamer core and plugin packages should suffice
    • Others: drop me a note with the command for your favourite distro, and I’ll add it here
  • Next, check out the code (by default, it will turn up in ~/gst/master)

    • $ curl http://cgit.freedesktop.org/gstreamer/gstreamer/plain/scripts/create-uninstalled-setup.sh | sh
    • Ignore the pointers to documentation that you see — they’re currently defunct
  • Now put the gst-uninstalled script somewhere you can get to it easily:

    • $ ln -sf ~/gst/master/gstreamer/scripts/gst-uninstalled ~/bin/gst-master
    • (the -master suffix for the script is important to how the script works)
  • Enter the uninstalled environment:

    • $ ~/bin/gst-master
    • (this puts you in the directory with all the checkouts, and sets up a bunch of environment variables to use your uninstalled setup – check with echo $GST_PLUGIN_PATH)
  • Time to build

    • $ ./gstreamer/scripts/git-update.sh
  • Take it out for a spin

    • $ gst-inspect-1.0 filesrc
    • $ gst-launch-1.0 playbin uri=file:///path/to/some/file
    • $ gst-discoverer-1.0 /path/to/some/file
  • That’s it! Some tips:

    • Remember that you need to run ~/bin/gst-master to enter the environment for each new shell
    • If you start up a GStreamer app from your system in this environment, it will use your uninstalled libraries and plugins
    • You can and should periodically update your tree by rerunning the git-update.sh script
    • To run gdb on gst-launch, you need to do something like:
    • $ libtool --mode=execute gdb --args gstreamer/tools/gst-launch-1.0 videotestsrc ! videoconvert ! xvimagesink
    • I find it useful to run cscope on the top-level tree, and use that for quick code browsing

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Privacy Theatre (July 22, 2014, 08:26 UTC)

I really wish I could take credit for the term, but Jürgen points out he coined the term way before me, in German: Datenschutztheater. I still like to think that the name fits many behaviours I see out there, and it's not a coincidence that it sounds like the way we think of TSA's rules at airports, security theatre.

I have seen lots and lots of people advocating for 100% encryption of everything, and hiding information and all kind of (in my opinion) overly paranoid suggestions for everybody, without understanding any threat model at all, and completely forgetting that your online privacy is only a small part of the picture.

I have been reminded of this as I proceeded sorting out my paperwork here in Dublin, which started piling up a little too much. My trick is the usual I used in Italy too: scan whatever is important to keep a copy of, and unless the original is required for anything, I destroy the hard copy. I don't trash it, I destroy it. I include anything that has my address on it, and when I was destroying it with my personal shredder, I always made sure to include enough "harmless" papers in the mix to make it more difficult to filter out the parts that looked important.

As I said in my previous post, I'm not worried about "big" corporations knowing things about me, like Tesco knowing what I like to buy. I find it useful, and I don't have a problem with that. On the other hand, I would have a problem with anybody who, wanting to attack me directly, decided to dumpster-dive me.

Another common problem I see that I categorize as Privacy Theatre is the astounding lack of what others would call OpSec. I have seen plenty of people at conferences, even in security training, using their laptops without consideration for the other people in the room, and without any sort of privacy screen. At one past conference I've seen mail admins from a provider that will go unnamed working on production issues in front of my eyes: if I had mischievous intent I would have learnt quite a bit about their production environment.

Yes, I know that the screens are a pain, and that you have to keep taking them in and out, and that they take away some of the visual space on your monitor. For my personal laptop I decided on a gold privacy screen by 3M, which is bearable to use even if you don't need it, as long as you don't need to watch movies on your laptop (I don't; the laptop's display is good but I have a TV and a good monitor for that).

But there are tons of other, smaller pieces that people who insist they are privacy advocates really don't seem to care about. I'm not saying that you should be paranoid, actually I'm saying the exact opposite: try to not be the paranoid person that wants everything encrypted without understanding why. In most cases, Internet communication needs to be encrypted indeed. And you want to encrypt your important files if you put them in the cloud. But at the same time there are things that you don't really care about that much and you're just making your life miserable because Crypto-Gods, while the same energy could be redirected to save you from more realistic petty criminals.

July 20, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LibreSSL and the bundled libs hurdle (July 20, 2014, 09:55 UTC)

It was over five years ago that I ranted about the bundling of libraries and what that means for vulnerabilities found in those libraries. The world has, since, not really listened. RubyGems still keeps insisting that "vendoring" gems is good, Go explicitly didn't implement a concept of shared libraries, and let's not even talk about Docker or OSv and their absolutism in static linking and bundling of the whole operating system, essentially.

It should have been obvious how this can be a problem when Heartbleed came out: bundled copies of OpenSSL would have needed separate updates from the system libraries. I guess lots of enterprise users of such software were saved only by the fact that most of the bundlers ended up using older versions of OpenSSL where heartbeat was not implemented at all.

Now that we're talking about replacing the OpenSSL libraries with those coming from a different project, we're going to be hit by both edges of the proprietary software sword: bundling and ABI compatibility, which will make things really interesting for everybody.

You may have seen my (short, incomplete) list of RAND_egd() users, which I posted yesterday. While the tinderbox from which I took this is out of date and needs cleaning, it is a good starting point to figure out the trends, and as somebody already picked up, the bundling is actually strong.

Software that bundled Curl, or even Python, but then relied on the system copy of OpenSSL, will now be looking for RAND_egd() and thus fail. You could be unbundling these libraries, and then use a proper, patched copy of Curl from the system, where the usage of RAND_egd() has been removed, but then again, this is what I've been advocating forever or so. With caveats, in the case of Curl.

But now if the use of RAND_egd() is actually coming from the proprietary bits themselves, you're stuck and you can't use the new library: you either need to keep around an old copy of OpenSSL (which may be buggy and expose even more vulnerabilities) or you need a shim library that only provides ABI compatibility against the new LibreSSL-provided library — I'm still not sure why this particular trick is not employed more often, when the changes to a library are only at the interface level but it still implements the same functionality.

Now the good news is that from the list that I produced, at least the egd functions never seemed to be popular among proprietary developers. This is expected, as egd was largely a way to implement the /dev/random semantics for non-Linux systems, while the proprietary software that we deal with, at least in the Linux world, can just accept the existence of the devices themselves. So the only problems have to do with unbundling (or replacing) Curl and possibly the Python SSL module. Doing so is not obvious though, as I see from the list that there are at least two copies of libcurl.so.3, which is the older ABI for Curl — although admittedly one is from the scratchbox SDKs, which could just as easily be replaced with something less hacky.

Anyway, my current task is to clean up the tinderbox so that it's in a working state, after which I plan to do a full build of all the reverse dependencies of OpenSSL; it's very possible that there are more entries that should be in the list, since it was built with USE=gnutls globally to test for GnuTLS 3.0 when it came out.

July 19, 2014
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I was experimenting in my arm chroot, and after a gcc upgrade and emerge --depclean --ask that removed the old gcc I got the following error:

# ls -l
ls: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory

Fortunately the newer working gcc was present, so the steps to make things work again were:

# LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/gcc/armv7a-hardfloat-linux-gnueabi/4.8.2/" gcc-config -l
 * gcc-config: Active gcc profile is invalid!

 [1] armv7a-hardfloat-linux-gnueabi-4.8.2

# LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/lib/gcc/armv7a-hardfloat-linux-gnueabi/4.8.2/" gcc-config 1 
 * Switching native-compiler to armv7a-hardfloat-linux-gnueabi-4.8.2 ...

Actually my first thought was using busybox. The unexpected breakage during a routine gcc upgrade made me do some research in case I can't rely on /bin/busybox being present and working.
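
For reference, the busybox route would have looked something like this (a sketch, assuming /bin/busybox is present and statically linked, as it typically is on Gentoo):

# busybox carries its own statically linked applets, so it keeps working
# even when the shared-library loader is broken
/bin/busybox ls -l
/bin/busybox find /usr/lib/gcc -name 'libgcc_s.so.1'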

I highly recommend the following links for further reading:
http://lambdaops.com/rm-rf-remains
http://eusebeia.dyndns.org/bashcp
http://www.reddit.com/r/linux/comments/27is0x/rm_rf_remains/ci199bk

July 14, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

I just watched a TED talk that I would like to share with you.

First, let me quote a line from that talk that works without much context:

Next to the technology, entertainment and social media industries
we now spend more time with other people’s ideas than we do with our own.

For the remainder, see for yourself: Blur the line: Dan Jaspersen at TEDxCSU

Richard Freeman a.k.a. rich0 (homepage, bugs)
Quick systemd-nspawn guide (July 14, 2014, 20:31 UTC)

I switched to using systemd-nspawn in place of chroot and wanted to give a quick guide to using it.  The short version is that I’d strongly recommend that anybody running systemd that uses chroot switch over – there really are no downsides as long as your kernel is properly configured.

Chroot should be no stranger to anybody who works on distros, and I suspect that the majority of Gentoo users have need for it from time to time.

The Challenges of chroot

For most interactive uses it isn’t sufficient to just run chroot.  Usually you need to mount /proc, /sys, and bind mount /dev so that you don’t have issues like missing ptys, etc.  If you use tmpfs you might also want to mount the new tmp, var/tmp as tmpfs.  Then you might want to make other bind mounts into the chroot.  None of this is particularly difficult, but you usually end up writing a small script to manage it.
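
Such a script tends to look something like this (a typical sketch; the chroot path is assumed):

#!/bin/sh
# set up the usual chroot plumbing, then drop into a shell
CHROOT=/mnt/mychroot
mount -t proc proc "$CHROOT/proc"
mount --rbind /sys "$CHROOT/sys"
mount --rbind /dev "$CHROOT/dev"
mount -t tmpfs tmpfs "$CHROOT/tmp"
chroot "$CHROOT" /bin/bash -l
# ...plus a matching set of umounts once you are done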

Now, I routinely do full backups, and usually that involves excluding stuff like tmp dirs, and anything resembling a bind mount.  When I set up a new chroot that means updating my backup config, which I usually forget to do since most of the time the chroot mounts aren’t running anyway.  Then when I do leave it mounted overnight I end up with backups consuming lots of extra space (bind mounts of large trees).

Finally, systemd now by default handles bind mounts a little differently when they contain other mount points (such as when using --rbind).  Apparently unmounting something in the bind mount will cause systemd to unmount the corresponding directory on the other side of the bind.  Imagine my surprise when I unmounted my chroot bind to /dev and discovered /dev/pts and /dev/shm no longer mounted on the host.  It looks like there are ways to change that, but this isn’t the point of my post (it just spurred me to find another way).

Systemd-nspawn’s Advantages

Systemd-nspawn is a tool that launches a container, and it can operate just like chroot in its simplest form.  By default it automatically sets up most of the overhead like /dev, /tmp, etc.  With a few options it can also set up other bind mounts as well.  When the container exits all the mounts are cleaned up.

From the outside of the container nothing appears different when the container is running.  In fact, you could spawn 5 different systemd-nspawn container instances from the same chroot and they wouldn’t have any interaction except via the filesystem (and that excludes /dev, /tmp, and so on – only changes in /usr, /etc will propagate across).  Your backup won’t see the bind mounts, or tmpfs, or anything else mounted within the container.

The container also has all those other nifty container benefits like containment – a killall inside the container won’t touch anything outside, and so on.  The security isn’t airtight – the intent is to prevent accidental mistakes.  

Then, if you use a compatible sysvinit (which includes systemd, and I think recent versions of openrc), you can actually boot the container, which drops you to a getty inside.  That means you can use fstab to do additional mounts inside the container, run daemons, and so on.  You get almost all the benefits of virtualization for the cost of a chroot (no need to build a kernel, and so on).  It is a bit odd to be running systemctl poweroff inside what looks just like a chroot, but it works.

Note that unless you do a bit more setup you will share the same network interface with the host, so no running sshd on the container if you have it on the host, etc.  I won’t get into this but it shouldn’t be hard to run a separate network namespace and bind the interfaces so that the new instance can run dhcp.
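
As a starting point, something like this should work (a sketch, assuming a systemd recent enough to have veth support in nspawn):

systemd-nspawn -D . --private-network --network-veth -b
# the container gets its own network namespace with a veth pair;
# the host side (named ve-<machine>) can be bridged or given an address,
# and the container can then run dhcp against it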

How to do it

So, getting it actually working will likely be the shortest bit in this post.

You need support for namespaces and multiple devpts instances in your kernel:

CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y

From there, launching a namespace just like a chroot is really simple:

systemd-nspawn -D .

That’s it – you can exit from it just like a chroot.  From inside you can run mount and see that it has taken care of /dev and /tmp for you.  The “.” is the path to the chroot, which I assume is the current directory.  With nothing further it runs bash inside.

If you want to add some bind mounts it is easy:

systemd-nspawn -D . --bind /usr/portage

Now your /usr/portage is bound to your host, so no need to sync/etc.  If you want to bind to a different destination add a ":dest" after the source, relative to the root of the chroot (so --bind foo is the same as --bind foo:foo).

If the container has a functional init that can handle being run inside, you can add a -b to boot it:

systemd-nspawn -D . --bind /usr/portage -b

Watch the init do its job.  Shut down the container to exit.

Now, if that container is running systemd you can direct its journal to the host journal with -j:

systemd-nspawn -D . --bind /usr/portage -j -b

Now, nspawn registers the container so that it shows up in machinectl.  That makes it easy to launch a new getty on it, or ssh to it (if it is running ssh – see my note above about network namespaces), or power it off from the host.  

That’s it.  If you’re running systemd I’d suggest ditching chroot almost entirely in favor of nspawn.  


Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Biggest ebuilds in-tree (July 14, 2014, 06:39 UTC)

Random datapoint: There are only about 10 packages with ebuilds over 600 lines.

Sorted by lines, duplicate entries per-package removed, these are the biggest ones:

828 dev-lang/ghc/ghc-7.6.3-r1.ebuild
817 dev-lang/php/php-5.3.28-r3.ebuild
750 net-nds/openldap/openldap-2.4.38-r2.ebuild
664 www-client/chromium/chromium-36.0.1985.67.ebuild
658 games-rpg/nwn-data/nwn-data-1.29-r5.ebuild
654 www-servers/nginx/nginx-1.4.7.ebuild
654 media-video/mplayer/mplayer-1.1.1-r1.ebuild
644 dev-vcs/git/git-9999-r3.ebuild
621 x11-drivers/ati-drivers/ati-drivers-13.4.ebuild
617 sys-freebsd/freebsd-lib/freebsd-lib-9.1-r11.ebuild
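
For reference, a list like this can be generated with a one-liner along these lines (a sketch; per-package duplicates still need manual pruning):

find /usr/portage -name '*.ebuild' -exec wc -l {} + | grep -v ' total$' | sort -rn | head -n 20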

July 13, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

Background story / context

At work I’m dealing with a test suite running >30 minutes, even on moderately fast hardware. When testing some changes, I launch the test suite and start working on something else so that I’m not just waiting for it. Now the sooner I know that the test suite has finished execution, the sooner I can fix errors and give it another spin. So manually checking whether the test suite is done is not efficient.

The problem

What I wanted was a notification, something audible, looped, like an alarm clock. Either

$ ALARM_WHEN_DONE cmd [p1 p2 ..]

or

$ cmd [p1 p2 ..] ; ALARM

usage would have worked for me.

My approach

I ended up grabbing the free Analog Alarm Clock sound — the low-quality MP3 version download works without registration — and this shell alias:

alias ALARM='mplayer --loop=0 ~/Desktop/alarm.mp3 &>/dev/null'

With this alias, now I can do stuff like

./testrunner ; ALARM

on the shell and never miss the end of test suite execution again :)
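
The first usage form from above can be had with a small function too; a sketch (the function name is the hypothetical one from the problem statement, and the mplayer command is inlined so it also works where the alias is not expanded):

ALARM_WHEN_DONE() {
  "$@"                                               # run the wrapped command with its arguments
  mplayer --loop=0 ~/Desktop/alarm.mp3 &>/dev/null   # then loop the alarm sound
}
# usage: ALARM_WHEN_DONE ./testrunner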

Do you have a different/better approach to the same problem? Let me know!

PS: Yes, I have heard of continuous integration and we do that, too :)

July 12, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
LibreSSL on Gentoo (July 12, 2014, 18:31 UTC)

Yesterday the LibreSSL project released the first portable version that works on Linux. LibreSSL is a fork of OpenSSL and was created by the OpenBSD team in the aftermath of the Heartbleed bug.

Yesterday and today I played around with it on Gentoo Linux. I was able to replace my system's OpenSSL completely with LibreSSL and with few exceptions was able to successfully rebuild all packages using OpenSSL.

After getting this running on my own system I installed it on a test server. The webpage tlsfun.de runs on that server. The functionality changes are limited: the only thing visible from the outside is the support for the experimental, not yet standardized ChaCha20-Poly1305 cipher suites, which is a nice thing.

A warning ahead: This is experimental, in no way stable or supported and if you try any of this you do it at your own risk. Please report any bugs you have with my overlay to me or leave a comment and don't disturb anyone else (from Gentoo or LibreSSL) with it. If you want to try it, you can get a portage overlay in a subversion repository. You can check it out with one of these commands (see also the update at the end of this post):
svn co https://svn.hboeck.de/libressl-overlay/
git clone https://github.com/gentoo/libressl.git

This is what I had to do to get things running:

LibreSSL itself

First of all the Gentoo tree contains a lot of packages that directly depend on openssl, so I couldn't just replace that. The correct solution to handle such issues would be to create a virtual package and change all packages depending directly on openssl to depend on the virtual. This is already discussed in the appropriate Gentoo bug, but this would mean patching hundreds of packages so I skipped it and worked around it by leaving a fake openssl package in place that itself depends on libressl.

LibreSSL deprecates some APIs from OpenSSL. The first thing that stopped me was that various programs use the functions RAND_egd() and RAND_egd_bytes(). I didn't know until yesterday what egd is. It stands for Entropy Gathering Daemon and is a tool written in perl meant to replace the functionality of /dev/(u)random on non-Linux systems. The LibreSSL developers consider it insecure and after having read what it is I have to agree. However, the removal of those functions causes many packages not to build, among them wget, python and ruby. My workaround was to add two dummy functions that just return -1, which is the error code if the Entropy Gathering Daemon is not available. So the API still behaves as expected. I also posted the patch upstream, but the LibreSSL devs don't like it. So in the long term it's probably better to fix applications to stop trying to use egd, but for now these dummy functions make it easier for me to build my system.

The second issue popping up was that the libcrypto.so from libressl contains an undefined main() function symbol which causes linking problems with a couple of applications (subversion, xorg-server, hexchat). According to upstream this undefined symbol is intended and most likely these are bugs in the applications having linking problems. However, for now it was easier for me to patch the symbol out instead of fixing all the apps. Like the egd issue on the long term fixing the applications is better.

The third issue was that LibreSSL doesn't ship pkg-config (.pc) files; some apps use them to get the correct compilation flags. I grabbed the ones from openssl and adjusted them accordingly.
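
Once the adjusted .pc files are in place, a quick sanity check looks like this (a sketch; the flags and version reported depend on what you put into the files):

# the flags that dependent builds will pick up
pkg-config --cflags --libs openssl
# and a per-library existence check
pkg-config --exists libssl libcrypto && echo ok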

OpenSSH

This was the most interesting issue from all of them.

To understand this you have to understand how both LibreSSL and OpenSSH are developed. They are both from OpenBSD and they use some functions that are only available there. To allow them to be built on other systems they release portable versions which ship the missing OpenBSD-only-functions. One of them is arc4random().

Both LibreSSL and OpenSSH ship their compatibility version of arc4random(). The one from OpenSSH calls RAND_bytes(), which is a function from OpenSSL. The RAND_bytes() function from LibreSSL however calls arc4random(). Due to the linking order OpenSSH uses its own arc4random(). So what we have here is a nice recursion. arc4random() and RAND_bytes() try to call each other. The result is a segfault.

I fixed it by using the LibreSSL arc4random.c file for OpenSSH. I had to copy another function called arc4random_stir() from OpenSSH's arc4random.c and the header file thread_private.h. Surprisingly, this seems to work flawlessly.

Net-SSLeay

This package contains the perl bindings for openssl. The problem is a check for the openssl version string that expected the name OpenSSL and a version number with three numbers and a letter (like 1.0.1h). LibreSSL prints the version 2.0. I just hardcoded the OpenSSL version number, which is not a real fix, but it works for now.

SpamAssassin

SpamAssassin's code for spamc requires SSLv2 functions to be available. SSLv2 is heavily insecure and should not be used at all and therefore the LibreSSL devs have removed all SSLv2 function calls. Luckily, Debian had a patch to remove SSLv2 that I could use.

libesmtp / gwenhywfar

Some DES-related functions (DES is the old Data Encryption Standard) in OpenSSL are available in two forms: with uppercase DES_ and with lowercase des_. I can only guess that the des_ variants are for backwards compatibility with some very old versions of OpenSSL. According to the docs the DES_ variants should be used. LibreSSL has removed the des_ variants.

For gwenhywfar I wrote a small patch and sent it upstream. For libesmtp all the code was in ntlm. After reading that ntlm is an ancient, proprietary Microsoft authentication protocol I decided that I don't need that anyway so I just added --disable-ntlm to the ebuild.

Dovecot

In Dovecot two issues popped up. LibreSSL removed the SSL Compression functionality (which is good, because since the CRIME attack we know it's not secure). Dovecot's configure script checks for it, but the check doesn't work. It checks for a function that LibreSSL keeps as a stub. For now I just disabled the check in the configure script. The solution is probably to remove all remaining stub functions. The configure script could probably also be changed to work in any case.

The second issue was that the Dovecot code has some #ifdef clauses that check the openssl version number for the ECDH auto functionality that has been added in OpenSSL 1.0.2 beta versions. As the LibreSSL version number 2.0 is higher than 1.0.2 it thinks it is newer and tries to enable it, but the code is not present in LibreSSL. I changed the #ifdefs to check for the actual functionality by checking a constant defined by the ECDH auto code.

Apache httpd

The Apache httpd compilation complained about a missing ENGINE_CTRL_CHIL_SET_FORKCHECK. I have no idea what it does, but I found a patch to fix the issue, so I didn't investigate it further.

Further reading:
Someone else tried to get things running on Sabotage Linux.

Update: I've abandoned my own libressl overlay, a LibreSSL overlay by various Gentoo developers is now maintained at GitHub.

July 09, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

SELinux users might be facing failures when emerge is merging a package to the file system, with an error that looks like so:

>>> Setting SELinux security labels
/usr/lib64/portage/bin/misc-functions.sh: line 1112: 23719 Segmentation fault      /usr/sbin/setfiles "${file_contexts_path}" -r "${D}" "${D}"
 * ERROR: dev-libs/libpcre-8.35::gentoo failed:
 *   Failed to set SELinux security labels.

This has been reported as bug 516608 and, after some investigation, the cause is found. First the quick workaround:

~# cd /etc/selinux/strict/contexts/files
~# rm *.bin

And do the same for the other SELinux policy stores on the system (targeted, mcs, mls, …).
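
To catch all stores that happen to be present in one go (a sketch along the lines of the workaround above):

~# for dir in /etc/selinux/*/contexts/files; do rm -f "${dir}"/*.bin; done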

Now, what is happening… Inside the mentioned directory, binary files exist such as file_contexts.bin. These files contain the compiled regular expressions of the non-binary files (like file_contexts). By using the precompiled versions, regular expression matching by the SELinux utilities is a lot faster. Not that it is massively slow otherwise, but it is a nice speed improvement nonetheless.

However, when pcre updates occur, the basic structures that pcre uses internally might change. For instance, a number might switch from a signed integer to an unsigned integer. As compiled pcre expressions are meant to be used within the same application run, most applications do not have any issues with such changes. However, the SELinux utilities effectively serialize these structures and later read them back in. If the new pcre uses a changed structure, then the read-in structures are incompatible or even corrupt.

Hence the segmentation faults.

To resolve this, Stephen Smalley created a patch that includes PCRE version checking. This patch is now included in sys-libs/libselinux version 2.3-r1. The package also recompiles the existing *.bin files so that the older binary files are no longer on the system. But there is a significant chance that this update will not trickle down to the users in time, so the workaround might be needed.

I considered updating the pcre ebuilds as well with this workaround, but considering that libselinux is most likely to be stabilized faster than any libpcre bump I let it go.

At least we have a solution for future upgrades; sorry for the noise.

Edit: libselinux-2.2.2-r5 also has the fix included.

Michał Górny a.k.a. mgorny (homepage, bugs)
The Council and the Community (July 09, 2014, 06:27 UTC)

A new Council election is in progress and we have a few candidates. Most of them have written a manifesto. For some of them, this is one of the few mails they have sent to the public mailing lists recently. For one of them, it is the only one. Do we want to elect people who do not participate actively in the Community? Does such an election even make sense?

Gentoo is an open, free community. While the Developer Community is not really open (joining consumes a lot of time), the discussion media were always open to non-developer comments and ideas. Most of the people working on Gentoo are volunteers, doing all the work in their free time or between other tasks.

While we have formal rules, leaders and projects, all of them have very limited power. The rules pretty much boil down to «do not»s. You can try to convince a developer to follow your vision but you can’t force them to. If you try too hard, the best you can get is losing a valuable contributor. And I’m not talking about the extremes like rage quits; the person will simply no longer be interested in working on a particular project.

Most of the mailing list (and bug) discussions are about that. Finding possible solutions, discussing their technical merits and finding an agreement. It is not enough to choose a solution which is considered best by a majority or a team. It is about agreeing on a solution that is good and that comes with people willing to work on it. Otherwise, you end up with no solution because what has been chosen is not being implemented.

Consider the late games team policy thread. The games team and their supporters believe their solutions have technical merit. Without getting into debating this, we can easily see the effects. The team is barely getting any contributions, and the ones it does get come mostly from a few (three?) persistent out-of-team developers who are willing to overcome all the difficulties. And even those contributors support the idea of abolishing the current policy.

So, what’s the purpose of all the teams, their leads and the Council in all this? As I see it, teams are the people who know the particular area better than others, and have valuable experience. Yet teams need to be open to the Community, to listen to their feedback, to provide valuable points to the discussion and to guide it towards a consensus.

The teams may need to make a final decision if a mailing list discussion doesn’t end in a clear agreement. However, they need to weigh it carefully, to foresee the outcome. It is not enough to discuss the merits in a semi-open meeting, and it is not enough to consider only the technical aspect. The teams need to predict how the decision will affect the Community, how it will affect the users and the contributors.

The Council is not very different from those teams, albeit more formal in its proceedings. Likewise, it needs to listen to the Community, especially if it is called specifically to revise a team’s decision (or lack of action).

Now, how could the Council determine what’s best for Gentoo without actively participating in the proceedings of the Community? Non-active candidates, do you expect to start participating after being elected? Or do you think that grepping through the threads five minutes before the meeting is enough?

Well, I hope that the next Council will be up to the task. That it will listen to the Community and weigh their decisions carefully. That it will breed action and support ideas backed by technical merits and willing people, rather than decisions that discourage further contribution.

July 06, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hello :)

I don’t get to play with code much lately. Yesterday and today I put some effort into trying to understand and document the EGF file format used by Xie Xie to store Xiangqi games, including per-move comments and a bit of other metadata.

Status quo includes a simple command line tool:

# ./egf/cli.py test.egf 
Event:  Blog post
Site:  At home
Date:  6-7-2014
Round:  1
Red name:  sping
Black name:  Xie Xie Freeware 2.5.0
Description:  Command line tool demo input
Author:  sping

File i:  R _ _ P _ _ p _ _ r
File h:  H _ C _ _ _ _ c _ h
File g:  E _ _ P _ _ p _ _ e
File f:  A _ _ _ _ _ _ _ _ a
File e:  K _ _ P _ _ p _ _ k
File d:  A _ _ _ _ _ _ _ _ a
File c:  E _ _ P _ _ p _ _ e
File b:  H _ C _ _ _ _ c _ h
File a:  R _ _ P _ _ p _ _ r
(Ranks 9 to 0 from left to right)

To start:  red

6 single moves in total
[ 1]  c h3  - e3 
[ 1]                   H h10 - g8 
[ 2]  h h1  - g3 
[ 2]                   R i10 - h10
[ 3]  r i1  - h1 
[ 3]                   C h8  - h4 

Result:  to be determined

Bytes remaining to be read:
0 0

I welcome help to fill in the remaining blanks, e.g. with decoding time markers and king-in-check markers of moves.

If you are on Gentoo and would like to run Xie Xie the easy way, grab games-board/xiexie-freeware-bin from the betagarden overlay.

EGF files for inspection can be downloaded from http://www.cc-xiexie.com/download.php.

Gentoo Monthly Newsletter: June 2014 (July 06, 2014, 15:00 UTC)

Gentoo News

Interview with Patrick McLean (chutzpah)

(by David Abbott)
1. Hi Patrick o/ tell us about yourself?
I am currently a Gentoo Engineer (yes, that is my actual job title) at Gaikai. Before this job I was a Systems Administrator at the McGill Centre for Intelligent Machines, in Montreal, Quebec, Canada.
When I am not coding or packaging I like to watch television, read sci-fi and fantasy, cycle, and occasionally go on hikes. When I can, I love downhill skiing, but it’s a little harder in California than it was in Quebec.

2. How did you get involved with Linux and Open Source, and what was the path that lead to you to Gentoo?
I started using Linux at the end of 1996. Originally I switched to Linux because, with the slow Internet connections of the time, web pages would take a long time to load. I would often open dozens of windows so I could be reading one site while others were loading. After a certain number of open browsers, Windows 95 would start to bog down and then just crash, while Linux would happily chug along when I did the same thing.
Around 2001, when Gnome 2 came out, I wanted to try it out, and since I don’t like installing software outside of the package manager, I attempted to get the RPMs from the Rawhide repository. This experience made me decide to look for a different distro, and I ended up liking Gentoo the most.

3. What aspects of Gentoo do you feel the developers and maintainers have got right?
The ebuild is a great source-based package format; it has its drawbacks, but it is far superior to the other formats I have looked at. I also like that Gentoo treats configurability as an important feature. The frequent use of /etc/foo.d and the scriptability of many parts of the system is great.
I also like some of the more recent work that has gone into not breaking systems; preserved-rebuild and (despite some overuse) subslots fix many of the annoyances we had in the old days.
I am also a big fan of what is now OpenRC; ever since I first started using Gentoo, I have thought that it is a huge improvement over the alternatives.
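For readers who have never seen one, a minimal ebuild is just a short bash-like file; everything below is a hypothetical skeleton, not a real package:

# net-misc/foo/foo-1.0.ebuild (hypothetical)
EAPI=5

DESCRIPTION="Example package illustrating the ebuild format"
HOMEPAGE="http://example.org/foo"
SRC_URI="http://example.org/${P}.tar.gz"

LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"

For a well-behaved autotools package, the default phase functions handle unpacking, configuring, compiling and installing without any further code.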

4. What is it about Gentoo you would like to see improved?
I think that portage itself is getting very crufty, and the code base is not very nice to work with. I am sure just about everyone reading this would agree that dependency resolution is way too slow at the moment (especially with subslots). Sometimes it generates error messages that are horribly verbose with no indication of how to fix them. I have seen those errors make people leave Gentoo, which is especially bad when the things it generates errors about are relatively harmless.
There are also other problems with how portage stores information about installed packages on disk, and binary packages in their current form just suck and are pretty useless.

5. What resources have you found most helpful when troubleshooting within Gentoo and Linux in general?
For researching problems, Google of course is very useful. For tracking down problems, strace is probably the one tool I find the most useful. Of course, digging into the source is probably the single best way to figure out what is actually going on.

6. What are some of the projects within Gentoo that you enjoy contributing to?
I mostly do ebuild work at the moment; Python is one of the areas I contribute to the most. I would like to get more into package manager work, and I want to start helping more with OpenRC, but finding time is frequently a problem.

7. What is your programming background?
I taught myself to program in GW-BASIC for DOS; it was in no way a modern or even remotely modern language. I moved on to QBASIC a bit later. After high school I started learning Java, C and C++, but my first programming job was in Visual Basic, an internship that turned into a summer job. During this time frame I also taught myself shell scripting.
Later (around 2008) I taught myself Python when a friend and I were trying to start a business.

8. For someone new to Python what tips could you give them to get a good foundation?
There are lots of good tutorials out there; I personally used Dive Into Python and found it quite useful. I also found that learning more about how Python is implemented improved my abilities quite a bit. If you truly understand that in Python everything is a dictionary, and the implications of that, it helps quite a bit in debugging the root cause of problems and writing better code.
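As a quick illustration of that point, ordinary object attributes live in a plain dictionary behind the scenes (the class name here is made up):

~$ python
>>> class Config(object):
...     pass
...
>>> c = Config()
>>> c.retries = 3        # attribute assignment...
>>> c.__dict__           # ...is just a dictionary update underneath
{'retries': 3}
>>> vars(c) is c.__dict__
True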

9. Tell us about pkgcore, its features and future?
Pkgcore is an alternative implementation of the PMS; it’s basically an alternative to portage. It has always had the eventual goal of becoming the default package manager on Gentoo, replacing portage. It’s currently orders of magnitude faster than portage, and its code base is much cleaner, though a little hard to understand at first thanks to its use of libsnakeoil for performance optimization. Currently Tim Harder (radhermit) is working on getting all the recent portage features implemented; it mostly supports EAPI 5 in the git repo now.
Hopefully it can attract more developers and eventually become a truly viable portage replacement, so we can get rid of the cruft that has built up in the portage source over the years.

10. Which open source programs would you like to see developed?
That’s a hard question to answer. I think the biggest one is that I would love to see an open source firmware for BMC controllers. These are the extra small computers included in servers that allow things such as remote console access and the ability to remotely manage servers. Currently the ecosystem is full of half-assed implementations done by hardware companies, many of which are rife with security holes. There is no standard for remote console, so they all use buggy and horrible Java applets to implement it. I would love to see a standard open source suite that motherboard developers all use, with native remote console clients for major OSes.

11. What would be your dream job?
Well, I have long wanted a job as a kernel developer, but I have never had the time to dedicate to getting to the point where someone would hire me. My current job is a close second: I work with Gentoo every day, often writing new ebuilds and fixing bugs in existing ebuilds as part of my day-to-day duties.
Those duties involve ebuild development and debugging. I also do a lot of automation of things like installing new systems, and I was the lead developer on our in-house answer to configuration management. I get to do a lot of cool stuff with Gentoo, and I get paid for it.

12. Need any help?
Yes, we are currently hiring for lots of positions, all working with Gentoo. We are really looking for ebuild developers of all kinds, especially if you are comfortable with Java ebuilds (not mandatory, but it would be nice). We are also looking for anyone familiar with Gentoo to help with work in Release Engineering and Site Reliability Engineering. We currently have offices in Southern California, USA and Berlin, Germany.
If you are interested in getting paid to work with Gentoo, please drop me a line.

13. With your skills you would be welcome in any project, why did you choose Gentoo?
It had been my distro of choice for many years, and I ended up maintaining a local overlay with many bug fixes and miscellaneous things, so I decided to become a developer to share my work with everyone else.

14. What can we do to get more people involved as Gentoo developers?
That’s a hard question to answer. At the moment, probably the best way would be to get back the “hot” and “cool” factors. These days Gentoo is sort of a “background” distro that has been around for ages and has loads of users, but that new people don’t get excited about anymore, kind of like Debian.
I think we also need to reduce developer burnout. I get the impression that once some people become developers, they feel that they have to fix every bug in the tree. This leads to them being really productive devs for a few months, then getting burned out and quitting.

15. What users would you like to see recruited to become Gentoo developers?
It would be nice to recruit some of the proxy maintainers to contribute to more packages. I don’t have anyone specific in mind at this moment.

16. As a Gentoo developer what are some of your accomplishments?
When I first started, I was on the amd64 bandwagon very early, so I ended up doing the 64-bit ports for a pretty large number of packages. More recently, I maintain ebuilds for some particularly tricky packages such as Ganeti, which is a mixture of Python and Haskell code.

17. Same question but work related.
Well, it’s probably a combination of two things.
Creating Gentoo profiles to auto-generate dozens of different server image types, and building a solid base Gentoo install for those servers.
Also building a fully automated Gentoo installation system that can partition disks and set up RAID, LVM and other parameters based on a JSON definition, plus a configuration file generation system that forms the basis of our configuration management system.

18. What are the specs of your personal and work boxes?
My home box is a 6-core Core i7 970 with 24GB of RAM, a GeForce 770, a 256GB SSD, two 500GB spinning disks and a 1TB spinning disk. I have a 24” monitor and a 22”.
My workstation at work is an 8-core Opteron with 16GB of RAM and two 32” monitors hooked up to it. We also have some pretty beefy servers for building Gentoo images.

19. Describe your home network.
Nothing that exciting: I have a Netgear WNDR3800 running OpenWRT, and a gigabit switch. Connected to that I have a Synology NAS, a smart TV whose smart features I never use, a media streaming box, a Blu-ray player, a PS4 (I work for Sony) and a couple of computers.

20. What de/wm do you use now and what did you use in the past?
I currently use XFCE. I used to use Gnome 2, tried out Gnome 3 for two days, decided it wasn’t for me, and created a huge package.mask to mask it. I stuck with that for several months, then decided I should switch to something else. I tried out Cinnamon for a bit, played with E17, considered Mate, but then settled on XFCE.

21. What gives you the most enjoyment within the Gentoo community?
In general developers get along pretty well; this is more true on IRC than on the mailing lists. Also, at conferences there is a strong feeling of community among the attending Gentoo developers.

22. How did you get the nick (chutzpah)?
It’s kind of a silly story. Way back when I first started hanging out online (early 90s) I needed a nick. I ended up choosing the name of a particularly challenging Ski Trail at the Sunday River ski resort in Maine. I have been using the name ever since.

Council News

This month’s big issue was to compile a preliminary list of features that could go into the next EAPI. It probably does not make sense to go into all the technical details here; you can find the accepted items in the meeting summaries [1,2,3] or on a separate wiki page [4]. One user-visible change will be that from EAPI=6 on, every ebuild should accept user patches from /etc/portage/patches [5], as many do already today. Another will be that (given an implementation in Portage is ready in time) a new type of USE flag will be introduced that can be used to, e.g., only pull in run-time dependencies; toggling such a flag does not require a rebuild of the package.
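For ebuilds that already support user patches, the workflow is roughly the following sketch (category, package and file names are made up):

~# mkdir -p /etc/portage/patches/net-misc/foo
~# cp my-fix.patch /etc/portage/patches/net-misc/foo/
~# emerge --oneshot net-misc/foo    # the patch is applied during src_prepare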

In addition, some of us prepared a proposal to eventually make it easier for developers to host semi-official services within the gentoo.org domain [6]. This still needs work and is definitely not something the Council can do on its own, but the general idea was given clear support.

Election News

The nomination process is complete, and voting is now open. This year’s candidates are blueness, dberkholz, dilfridge, jlec, patrick, pinkbyte, radhermit, rich0, ryao, TomWij, ulm, williamh, and zerochaos. Additionally, almost every developer was nominated for the council. Elections will be open until 23:59 UTC on July 14, and results should be posted around July 16. We’ve already had around 30 people vote, but there are 200 more developers who can vote. Get out there and vote!

Featured New Project: Hardened Musl

(by Anthony G. Basile)

The hardened musl project aims to build and maintain full stage3 tarballs for the amd64, arm, mips and i686 architectures, using musl as their C standard library rather than glibc. The “hardened” aspect means that we also make use of toolchain hardening features so that the resulting userland executables and libraries are more resistant to exploits, although we also provide a “vanilla” flavor without any hardening. In every respect, these stages will be like regular Gentoo stages, except that glibc is replaced by musl.

musl, like uClibc, is ideal for embedded systems, although both can be used for servers and desktops. Embedded systems generally have three needs beyond regular systems: 1) a small footprint, both on the storage device and in RAM; 2) speed for real-time applications; 3) in some situations, statically linked executables. A typical embedded system has a minimally configured busybox for some needed utilities, as well as whatever service the image is to provide, e.g. some httpd service. The stages we are producing are not really embedded stages, because they don’t use busybox to provide a minimal set of utilities; rather, they use the full set of utilities provided by coreutils, util-linux and friends. This makes these stages ideal as development platforms for building custom embedded images [1], or for expanding into a server or desktop system.
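Using such a stage follows the usual Gentoo pattern; a rough sketch (the tarball name is a placeholder):

~# tar xjpf stage3-amd64-musl-hardened-<date>.tar.bz2 -C /mnt/gentoo
~# mount --rbind /dev /mnt/gentoo/dev && mount -t proc proc /mnt/gentoo/proc
~# chroot /mnt/gentoo /bin/bash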

However, be warned! If you try to build a full desktop system, you will hit breakage, since musl adheres closely to standards while many packages do not. We are working on getting patches [2] for a full XFCE4 desktop, as we did for uClibc [3]. On the other hand, I’ve had lots of success building servers and routers from these stages without any extra patching.

[1] An example of the hardened uClibc stages being used this way is “Real Time And Tiny” (aka RAT) Gentoo.
[2] These patches are housed on the musl branch of the hardened development overlay.
[3] As a subproject of the Hardened uClibc project, we maintain a full XFCE4 desktop based on uClibc, affectionately named “Lilblue” after the Little Blue Penguin, a smaller relative of the Gentoo.

Gentoo Developer Moves

Summary

Gentoo is made up of 237 active developers, of which 35 are currently away.
Gentoo has recruited a total of 799 developers since its inception.

Changes

The following developers have recently changed roles:
None this month

Additions

The following developers have recently joined the project:

Moves

The following developers recently left the Gentoo project:
None this month

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17529
Ebuilds 37513
Architecture Stable Testing Total % of Packages
alpha 3604 551 4155 23.70%
amd64 10781 6247 17028 97.14%
amd64-fbsd 0 1578 1578 9.00%
arm 2662 1726 4388 25.03%
hppa 3059 482 3541 20.20%
ia64 3181 620 3801 21.68%
m68k 623 82 705 4.02%
mips 4 2386 2390 13.63%
ppc 6819 2375 9194 52.45%
ppc64 4317 875 5192 29.62%
s390 1486 316 1802 10.28%
sh 1681 387 2068 11.80%
sparc 4122 896 5018 28.63%
sparc-fbsd 0 316 316 1.80%
x86 11444 5308 16752 95.57%
x86-fbsd 0 3236 3236 18.46%


Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201406-36 net-nds/openldap OpenLDAP: Multiple vulnerabilities 290345
201406-35 net-im/openfire Openfire: Multiple vulnerabilities 266129
201406-34 kde-base/kdelibs KDE Libraries: Multiple vulnerabilities 358025
201406-33 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 503792
201406-32 dev-java/icedtea-bin IcedTea JDK: Multiple vulnerabilities 312297
201406-31 kde-base/konqueror Konqueror: Multiple vulnerabilities 438452
201406-30 app-admin/sudo sudo: Privilege escalation 503586
201406-29 net-misc/spice-gtk spice-gtk: Privilege escalation 435694
201406-28 media-video/libav Libav: Multiple vulnerabilities 439052
201406-27 None polkit Spice-Gtk systemd HPLIP libvirt: Privilege escalation 484486
201406-26 dev-python/django Django: Multiple vulnerabilities 508514
201406-25 net-misc/asterisk Asterisk: Multiple vulnerabilities 513102
201406-24 net-dns/dnsmasq Dnsmasq: Denial of Service 436894
201406-23 app-admin/denyhosts DenyHosts: Denial of Service 495130
201406-22 media-libs/nas Network Audio System: Multiple vulnerabilities 484480
201406-21 net-misc/curl cURL: Multiple vulnerabilities 505864
201406-20 www-servers/nginx nginx: Arbitrary code execution 505018
201406-19 dev-libs/nss Mozilla Network Security Service: Multiple vulnerabilities 455558
201406-18 x11-terms/rxvt-unicode rxvt-unicode: User-assisted execution of arbitrary code 509174
201406-17 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 512888
201406-16 net-print/cups-filters cups-filters: Multiple vulnerabilities 504474
201406-15 kde-misc/kdirstat KDirStat: Arbitrary command execution 504994
201406-14 www-client/opera Opera: Multiple vulnerabilities 442044
201406-13 net-misc/memcached memcached: Multiple vulnerabilities 279386
201406-12 net-dialup/freeradius FreeRADIUS: Arbitrary code execution 501754
201406-11 x11-libs/libXfont libXfont: Multiple vulnerabilities 510250
201406-10 www-servers/lighttpd lighttpd: Multiple vulnerabilities 392581
201406-09 net-libs/gnutls GnuTLS: Multiple vulnerabilities 501282
201406-08 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 510278
201406-07 net-analyzer/echoping Echoping: Buffer Overflow Vulnerabilities 349569
201406-06 media-sound/mumble Mumble: Multiple vulnerabilities 500486
201406-05 mail-client/mutt Mutt: Arbitrary code execution 504462
201406-04 dev-util/systemtap SystemTap: Denial of Service 405345
201406-03 net-analyzer/fail2ban Fail2ban: Multiple vulnerabilities 364883
201406-02 app-arch/libarchive libarchive: Multiple vulnerabilities 366687
201406-01 None D-Bus GLib: Privilege escalation 436028

Package Removals/Additions

Removals

Package Developer Date
dev-python/python-gnutls mrueg 02 Jun 2014
dev-ruby/fastthread mrueg 07 Jun 2014
dev-perl/perl-PBS zlogene 11 Jun 2014
games-strategy/openxcom mr_bones_ 14 Jun 2014
media-plugins/vdr-noepgmenu hd_brummy 15 Jun 2014
net-mail/fetchyahoo eras 16 Jun 2014
app-emacs/redo ulm 17 Jun 2014
games-emulation/boycott-advance-sdl ulm 17 Jun 2014
games-emulation/neopocott ulm 17 Jun 2014

Additions

Package Developer Date
dev-ruby/sshkit graaff 01 Jun 2014
media-gfx/plantuml pva 02 Jun 2014
dev-python/sphinxcontrib-plantuml pva 02 Jun 2014
dev-util/kdevelop-qmake zx2c4 02 Jun 2014
x11-misc/easystroke jer 04 Jun 2014
dev-python/docopt jlec 04 Jun 2014
dev-python/funcsigs jlec 04 Jun 2014
virtual/funcsigs jlec 04 Jun 2014
dev-python/common jlec 04 Jun 2014
dev-python/tabulate jlec 04 Jun 2014
app-admin/ngxtop jlec 04 Jun 2014
dev-python/natsort idella4 05 Jun 2014
dev-libs/liblinear jer 05 Jun 2014
net-analyzer/arp-scan jer 06 Jun 2014
www-servers/mongoose zmedico 06 Jun 2014
dev-ruby/spring graaff 06 Jun 2014
dev-ruby/wikicloth mrueg 06 Jun 2014
net-analyzer/ipgen jer 07 Jun 2014
sec-policy/selinux-dropbox swift 07 Jun 2014
dev-python/jingo idella4 08 Jun 2014
dev-python/click rafaelmartins 08 Jun 2014
dev-python/Coffin idella4 08 Jun 2014
dev-python/sphinx_rtd_theme bicatali 09 Jun 2014
dev-ruby/netrc graaff 09 Jun 2014
dev-ruby/delayer naota 11 Jun 2014
www-client/qtweb jer 11 Jun 2014
dev-python/pyoembed rafaelmartins 12 Jun 2014
www-apps/blohg-tumblelog rafaelmartins 12 Jun 2014
dev-python/jaraco-utils patrick 12 Jun 2014
dev-python/more-itertools patrick 12 Jun 2014
dev-libs/libserialport vapier 12 Jun 2014
dev-python/pretty-yaml chutzpah 12 Jun 2014
net-libs/phodav dev-zero 13 Jun 2014
dev-python/django-haystack idella4 14 Jun 2014
sci-libs/libsigrok vapier 14 Jun 2014
sci-libs/libsigrokdecode vapier 14 Jun 2014
sci-electronics/sigrok-cli vapier 14 Jun 2014
sys-firmware/sigrok-firmware-fx2lafw vapier 14 Jun 2014
sci-electronics/pulseview vapier 14 Jun 2014
dev-ruby/hashr mrueg 14 Jun 2014
games-strategy/openxcom maksbotan 14 Jun 2014
games-engines/openxcom mr_bones_ 14 Jun 2014
net-analyzer/icinga2 prometheanfire 15 Jun 2014
dev-python/pyxenstore robbat2 15 Jun 2014
sys-cluster/ampi jauhien 16 Jun 2014
dev-python/pyjwt idella4 17 Jun 2014
app-emulation/openstack-guest-agents-unix robbat2 22 Jun 2014
dev-python/plyr idella4 22 Jun 2014
app-misc/relevation radhermit 22 Jun 2014
media-sound/lyvi idella4 22 Jun 2014
app-emulation/xe-guest-utilities robbat2 23 Jun 2014
net-misc/yandex-disk pinkbyte 24 Jun 2014
sec-policy/selinux-resolvconf swift 25 Jun 2014
dev-python/json-rpc chutzpah 26 Jun 2014
app-backup/cyphertite grknight 26 Jun 2014
dev-python/jdcal idella4 26 Jun 2014
net-libs/libcrafter jer 26 Jun 2014
net-analyzer/tracebox jer 26 Jun 2014
dev-python/python-catcher jlec 27 Jun 2014
dev-python/python-exconsole jlec 27 Jun 2014
dev-python/reconfigure jlec 27 Jun 2014
sys-block/sas2ircu robbat2 27 Jun 2014
sys-block/sas3ircu robbat2 27 Jun 2014
dev-ruby/psych mrueg 27 Jun 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 31 May 2014 and 30 June 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1991
Closed 1065
Not fixed 171
Duplicates 147
Total 5843
Blocker 5
Critical 18
Major 64

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 152
2 Gentoo Linux Gnome Desktop Team 54
3 Python Gentoo Team 39
4 Gentoo KDE team 33
5 Gentoo Games 28
6 Gentoo Ruby Team 20
7 Default Assignee for Orphaned Packages 20
8 media-video herd 17
9 Julian Ospald (hasufell) 17
10 Others 684

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Security 97
2 Gentoo Linux Gnome Desktop Team 91
3 Gentoo Linux bug wranglers 91
4 Python Gentoo Team 70
5 Gentoo Games 64
6 Gentoo KDE team 50
7 Gentoo Prefix 49
8 Default Assignee for Orphaned Packages 49
9 Gentoo's Team for Core System packages 35
10 Others 1394

Tips of the month

(by Sven Vermeulen)
Quick one-time patching of packages

If you want to patch a package once (for instance to test a patch provided through Bugzilla), just start building the package, but when the following is shown, suspend the build (Ctrl-Z):

>>> Source prepared.

Then go to the build directory (like /var/tmp/portage/net-misc/tor-0.2.4.22/work/tor-0.2.4.22) and apply the patch, then continue the build with the “fg” command, as sketched below.
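Putting it all together, a minimal sketch (the patch file name is made up):

~# emerge --oneshot net-misc/tor        # press Ctrl-Z once ">>> Source prepared." appears
~# cd /var/tmp/portage/net-misc/tor-0.2.4.22/work/tor-0.2.4.22
~# patch -p1 < /tmp/proposed-fix.patch  # hypothetical patch from the bug report
~# fg                                   # resume the suspended build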

Verify integrity of installed software

If you don’t want the full-fledged features of tools like AIDE, you can use qcheck to verify the integrity of the files installed by a package:
~# qcheck -e vim-core
Checking app-editors/vim-core-7.4.273 ...
MD5-DIGEST: /usr/share/vim/vim74/doc/tags
* 1783 out of 1784 files are good

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

July 02, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Multilib in Gentoo (July 02, 2014, 19:03 UTC)

One of the areas in Gentoo that is seeing lots of active development is its ongoing effort to have proper multilib support throughout the tree. In the past, this support was provided through special emulation packages, but those have the (serious) downside that they are often outdated, sometimes even having security issues.

But this active development is not because we all just started looking in the same direction. No, it’s thanks to a few developers who have put their shoulders under this effort, directing the development workload where needed and pressing other developers to help in this endeavor. And pushing is more than just creating bug reports and telling developers to do something.

It is also about communicating, giving feedback and patiently helping developers when they have questions.

I can only hope that other activities within Gentoo with a potentially broad impact work like this as well. Kudos to all involved, as well as to all the developers who have undoubtedly put numerous hours of effort into making their ebuilds multilib-capable (I know I had to put lots of effort into it, but I find it worthwhile and a big learning opportunity).

June 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
D-Bus and SELinux (June 30, 2014, 18:07 UTC)

After a post about D-Bus comes the inevitable related post about SELinux with D-Bus.

Some users might not know that D-Bus is an SELinux-aware application. That means it has SELinux-specific code in it, which adjusts the D-Bus behavior based on the SELinux policy (and might not necessarily honor the “permissive” flag). This code is used as an additional authentication control within D-Bus.

Inside the SELinux policy, a dbus permission class is supported, even though the Linux kernel doesn’t do anything with this class. The class is purely for D-Bus, and it is D-Bus that checks the permissions (although work is being done to implement D-Bus in the kernel (kdbus)). The class supports two permission checks:

  • acquire_svc, which tells which domain(s) are allowed to “own” a service (which might, thanks to the SELinux support, be labeled differently from the domain itself)
  • send_msg, which tells which domain(s) can send messages to a service domain

Inside the D-Bus security configuration (the busconfig XML file, remember), a service configuration might tell D-Bus that the service itself is labeled differently from the process that owns the service. The default is that the service inherits the label from the domain, so when dnsmasq_t registers a service on the system bus, this service also inherits the dnsmasq_t label.

The necessary permission checks for the sysadm_t user domain to send messages to the dnsmasq service, and for the dnsmasq service to register itself as a service, are:

allow dnsmasq_t self:dbus { acquire_svc send_msg };
allow sysadm_t dnsmasq_t:dbus send_msg;
allow dnsmasq_t sysadm_t:dbus send_msg;

For the sysadm_t domain, two rules are needed, as we usually not only want to send a message to a D-Bus service but also to receive a reply (which is also handled through a send_msg permission, but in the inverse direction).

However, with the following XML snippet inside its service configuration file, owning a certain resource is checked against a different label:

<selinux>
  <associate own="uk.org.thekelleys.dnsmasq"
             context="system_u:object_r:dnsmasq_dbus_t:s0" />
</selinux>

With this, the rules would become as follows:

allow dnsmasq_t dnsmasq_dbus_t:dbus acquire_svc;
allow dnsmasq_t self:dbus send_msg;
allow sysadm_t dnsmasq_t:dbus send_msg;
allow dnsmasq_t sysadm_t:dbus send_msg;

Note that only the access for acquiring a service based on a name (i.e. owning a service) is checked based on the different label. Sending and receiving messages is still handled by the domains of the processes (actually the labels of the connections, but these are always the process domains).

I am not aware of any policy implementation that uses a different label for owning services; the implementation is more suited to “forcing” D-Bus to only allow services with a correct label. This ensures that other domains that might have enough privileges to interact with D-Bus and own a service cannot own these particular services. After all, other domains don’t usually have the privileges (policy-wise) to acquire_svc a service with a label different from their own.

June 29, 2014
Pavlos Ratis a.k.a. dastergon (homepage, bugs)
Accepted for Google Summer of Code 2014 (June 29, 2014, 21:00 UTC)

This year I’ve been accepted to Google Summer of Code 2014 with the Gentoo Foundation for the Gentoo Keys project, and my mentor will be Brian Dolbec (dol-sen). Gentoo Keys is a Python-based project that aims to manage the GPG keys used for validation by users and by Gentoo’s infrastructure servers. These keys will be any/all of the release keys, developer keys and any other third-party keys or keyrings available or needed.

Participating in large communities and being a developer comes with great responsibilities. Developers have access to commit their changes to the main repository; however, even an unintended incorrect commit to the main repository would affect the majority of users. Such an issue could be addressed easily and instantly by the developer who made the mistake. A less innocent case is a compromised developer box: the malicious user could then commit malicious changes freely to the main tree. To prevent this kind of incident, developers are requested to sign their commits with their GPG key in order to prove they are who they claim to be. It’s an extra layer of protection that helps keep the integrity of the main repository. Gentoo Keys aims to support that, and provides its features in many scenarios like overlay and release engineering management.
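Commit signing itself is plain git; a quick sketch with a made-up key ID and commit message:

~$ git config user.signingkey 0xDEADBEEF       # hypothetical GPG key ID
~$ git commit -S -m "app-foo/bar: fix build"   # sign this commit with that key
~$ git log --show-signature -1                 # have git verify the signature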

Gentoo Keys will be able to verify the GPG keys used for Gentoo’s release media, such as installation CDs, live DVDs, packages and other GPG-signed documents. In addition, it will be used by the Gentoo infrastructure team to achieve GPG-signed git commits in the forthcoming git migration of the main CVS tree.

Gentoo Keys is an open source project whose code has been available from the very first day in Gentoo’s official repositories. Everyone is welcome to provide patches and request new features.

Source code: https://github.com/gentoo/gentoo-keys.
Weekly Reports are posted here.
Wiki page: https://wiki.gentoo.org/wiki/Project:Gentoo-keys.

Accepted for Google Summer of Code 2014 was originally published by Pavlos Ratis at dastergon's weblog on June 30, 2014.

Sven Vermeulen a.k.a. swift (homepage, bugs)
D-Bus, quick recap (June 29, 2014, 17:16 UTC)

I’ve never fully investigated the what and how of D-Bus. I know it is some sort of IPC, but higher-level than the POSIX IPC methods. After some reading, I think I am starting to understand how it works and how administrators can work with it. So a quick write-down is in order so I don’t forget it in the future.

There is one system bus and, for each X session of a user, also a session bus.

A bus is governed by a dbus-daemon process. A bus itself has objects on it, which are represented through path-like constructs (like /org/freedesktop/ConsoleKit). These objects are provided by a service (application). Applications “own” such services, and identify them through a namespace-like value (such as org.freedesktop.ConsoleKit).
Applications can send signals to the bus, or messages through methods exposed by the service. If methods are invoked (i.e. messages sent), then the application must specify the interface (such as org.freedesktop.ConsoleKit.Manager.Stop).

Administrators can monitor the bus through dbus-monitor, or send messages through dbus-send. For instance, the following command invokes the org.freedesktop.ConsoleKit.Manager.Stop method provided by the object at /org/freedesktop/ConsoleKit/Manager, owned by the service/application at org.freedesktop.ConsoleKit:

~$ dbus-send --system --print-reply \
  --dest=org.freedesktop.ConsoleKit \
  /org/freedesktop/ConsoleKit/Manager \
  org.freedesktop.ConsoleKit.Manager.Stop

What I found most interesting, however, was querying the buses. You can do this with dbus-send, although it is much easier to use tools such as d-feet or qdbus.

To list current services on the system bus:

~# qdbus --system
:1.1
 org.freedesktop.ConsoleKit
:1.10
:1.2
:1.3
 org.freedesktop.PolicyKit1
:1.36
 fi.epitest.hostap.WPASupplicant
 fi.w1.wpa_supplicant1
:1.4
:1.42
:1.5
:1.6
:1.7
 org.freedesktop.UPower
:1.8
:1.9
org.freedesktop.DBus

The numbers are generated by D-Bus itself; the namespace-like strings are the names taken by the services. To see what is provided by a particular service:

~# qdbus --system org.freedesktop.PolicyKit1
/
/org
/org/freedesktop
/org/freedesktop/PolicyKit1
/org/freedesktop/PolicyKit1/Authority

The methods made available through one of these:

~# qdbus --system org.freedesktop.PolicyKit1 /org/freedesktop/PolicyKit1/Authority
method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface_name, QString property_name)
method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface_name)
...
property read uint org.freedesktop.PolicyKit1.Authority.BackendFeatures
property read QString org.freedesktop.PolicyKit1.Authority.BackendName
property read QString org.freedesktop.PolicyKit1.Authority.BackendVersion
method void org.freedesktop.PolicyKit1.Authority.AuthenticationAgentResponse(QString cookie, QDBusRawType::(sa{sv} identity)
method void org.freedesktop.PolicyKit1.Authority.CancelCheckAuthorization(QString cancellation_id)
signal void org.freedesktop.PolicyKit1.Authority.Changed()
...

Access to methods and interfaces is governed through XML files in /etc/dbus-1/system.d (or session.d depending on the bus). Let’s look at /etc/dbus-1/system.d/dnsmasq.conf as an example:

<!DOCTYPE busconfig PUBLIC
 "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
        <policy user="root">
                <allow own="uk.org.thekelleys.dnsmasq"/>
                <allow send_destination="uk.org.thekelleys.dnsmasq"/>
        </policy>
        <policy context="default">
                <deny own="uk.org.thekelleys.dnsmasq"/>
                <deny send_destination="uk.org.thekelleys.dnsmasq"/>
        </policy>
</busconfig>

The configuration says that only the root Linux user can ‘assign’ a service/application to the uk.org.thekelleys.dnsmasq name, and that root can send messages to this same service/application name. The default is that no one can own or send to this service/application name. As a result, only the Linux root user can interact with this service.
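The same mechanism supports finer-grained policies. For instance, a hypothetical snippet granting one group access while the default policy keeps everyone else denied:

<busconfig>
        <!-- hypothetical: members of the "dnsadm" group may talk to dnsmasq -->
        <policy group="dnsadm">
                <allow send_destination="uk.org.thekelleys.dnsmasq"/>
        </policy>
</busconfig>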

D-Bus also supports starting services when one of their methods is invoked (instead of running the service all the time). This is configured through *.service files inside /usr/share/dbus-1/system-services/.
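Such a file is a small ini-style description mapping a bus name to the command that provides it. A hypothetical example for the dnsmasq name (dnsmasq is not normally bus-activated; this only illustrates the format):

[D-BUS Service]
Name=uk.org.thekelleys.dnsmasq
Exec=/usr/sbin/dnsmasq --enable-dbus
User=root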