
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Erik Mackdanz
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Kristian Fiskerstrand
. Liam McLoughlin
. Luca Barbato
. Marek Szuba
. Mart Raudsepp
. Matt Turner
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
June 25, 2017, 17:03 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

June 25, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google-Summer-of-Code-day18 (June 25, 2017, 00:09 UTC)

Google Summer of Code day 18

What was my plan for today?

  • continuing the code for retrieving the livepatch and installing it

What did I do today?

Checked the directories required by kpatch-build.

kpatch-build find_dirs function:

find_dirs() {
  if [[ -e "$SCRIPTDIR/create-diff-object" ]]; then
      # git repo
      TOOLSDIR="$SCRIPTDIR"
      DATADIR="$(readlink -f $SCRIPTDIR/../kmod)"
  elif [[ -e "$SCRIPTDIR/../libexec/kpatch/create-diff-object" ]]; then
      # installation path
      TOOLSDIR="$(readlink -f $SCRIPTDIR/../libexec/kpatch)"
      DATADIR="$(readlink -f $SCRIPTDIR/../share/kpatch)"
  else
      return 1
  fi
}

$SCRIPTDIR is the directory containing kpatch-build. kpatch-build is installed in /usr/bin/, so /usr/kmod and /usr/libexec are resolved relative to that directory.

The error "CONFIG_FUNCTION_TRACER, CONFIG_HAVE_FENTRY, CONFIG_MODULES, CONFIG_SYSFS, CONFIG_KALLSYMS_ALL kernel config options are required" is raised by kmod/core.c: https://github.com/dynup/kpatch/blob/master/kmod/core/core.c#L62

We probably need some way to check that these settings are configured in the kernel we are going to build.
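As a minimal sketch (my own illustration, not the actual elivepatch code), such a check could parse the kernel .config and report which of the options required by kpatch are missing:

REQUIRED = ('CONFIG_FUNCTION_TRACER', 'CONFIG_HAVE_FENTRY',
            'CONFIG_MODULES', 'CONFIG_SYSFS', 'CONFIG_KALLSYMS_ALL')

def missing_options(config_path):
    # Collect every option set to y or m in the kernel .config;
    # lines like "# CONFIG_FOO is not set" are filtered out naturally.
    enabled = set()
    with open(config_path) as config:
        for line in config:
            name, _, value = line.strip().partition('=')
            if value in ('y', 'm'):
                enabled.add(name)
    return [option for option in REQUIRED if option not in enabled]

print(missing_options('/usr/src/linux/.config'))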

Updating kpatch-build to work automatically with Gentoo (Fedora, for example, can already automatically download and install the kernel RPM; we could do something similar with Gentoo): https://github.com/aliceinwire/kpatch/commits/gentoo

Starting to write the live patch downloader: https://github.com/aliceinwire/elivepatch/commit/d26611fb898223f2ea2dcf323078347ca928cbda

Now the elivepatch server can call kpatch-build to build the livepatch:

sudo kpatch-build -s /usr/src/linux-4.10.14-gentoo/ -v /usr/src/linux-4.10.14-gentoo//vmlinux -c config 1.patch --skip-gcc-check
ERROR: kpatch build failed. Check /root/.kpatch/build.log for more details.
127.0.0.1 - - [25/Jun/2017 05:27:06] "POST /elivepatch/api/v1.0/build_livepatch HTTP/1.1" 201 -
WARNING: Skipping gcc version matching check (not recommended)
Using source directory at /usr/src/linux-4.10.14-gentoo
Testing patch file
checking file fs/exec.c
Hunk #1 succeeded at 259 (offset 16 lines).
Reading special section data
Building original kernel
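A minimal sketch of how the server side could wrap that invocation with Python's subprocess module (the function name and paths are hypothetical; the command line mirrors the log above):

import subprocess

def build_livepatch(kernel_src, config_path, patch_path):
    # Mirrors the kpatch-build call shown in the log above;
    # --skip-gcc-check is only needed while the gcc versions differ.
    command = ['sudo', 'kpatch-build',
               '-s', kernel_src,
               '-v', kernel_src + '/vmlinux',
               '-c', config_path,
               patch_path,
               '--skip-gcc-check']
    return subprocess.call(command) == 0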

Fixed some minor PEP 8 issues.

What will I do next time?

  • work on the livepatch downloader and make the kpatch creator more flexible

June 23, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google-Summer-of-Code-day17 (June 23, 2017, 23:12 UTC)

Google Summer of Code day 17

What was my plan for today?

  • continuing the code for retrieving the livepatch and installing it
  • asking infra for a server on which to install the elivepatch server

What did I do today?
Sent the request for the server that will offer the elivepatch service, as discussed with my mentor: https://bugs.gentoo.org/show_bug.cgi?id=622476

Fixed some pep8 warnings.

Livepatch server is now returning information about the livepatch building status.

Removed basic auth as we will go with SSL.

The client is now sending information about the kernel version when requesting a new build.

The kernel directory under the server is now a livepatch class variable.

What will I do next time?

  • continuing the code for retrieving the livepatch and installing it

Google-Summer-of-Code-day16 (June 23, 2017, 23:12 UTC)

Google Summer of Code day 16

What was my plan for today?

  • Divide the call for sending (patch, config) from the call for building the livepatch
  • Make the livepatch call more flexible (as of now it is hardcoded)
  • Ask infra for a server on which to install the elivepatch server

What did I do today?

Added a patch file path argument to the elivepatch server API, and added the patch call to the elivepatch client.

Added a way to divide the work: one POST call sends the configuration, another POST call sends the patch, and then the livepatch build is started and the result retrieved.

Sending the patch now works; I am working on calling the livepatch build.

Added a docstring to the build patch function.

Cleaned up the GetLive dispatcher function.

Added the client call to the livepatch build endpoint of the server API.

Made the send_file function more generic so it can send any kind of file (see the sketch below).
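As a rough illustration (the endpoint and field names here are hypothetical, not the actual elivepatch API), such a generic helper built on python-requests could look like this:

import requests

def send_file(server_url, endpoint, file_path, kernel_version):
    # POST any kind of file (config, patch, ...) together with
    # the kernel version the client is requesting a livepatch for.
    with open(file_path, 'rb') as payload:
        response = requests.post(server_url + endpoint,
                                 files={'file': payload},
                                 data={'KernelVersion': kernel_version})
    return response.status_code

send_file('http://localhost:5000', '/elivepatch/api/v1.0/config',
          '/proc/config.gz', '4.10.14')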

What will I do next time?

  • continuing the code for retrieving the livepatch and installing it

June 21, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
2017-06-21 Google-Summer-of-Code-day14 (June 21, 2017, 17:57 UTC)

Google Summer of Code day 14

What was my plan for today?
Working on sending the configuration file over the RESTful API,
and starting to work on making the patch.ko file on the server.

What did I do today?
Using werkzeug.datastructures.FileStorage in elivepatch_server,
I could receive the file from the elivepatch_client POST request
using the RESTful API.

from flask_restful import reqparse
import werkzeug.datastructures

def post(self):
    parse = reqparse.RequestParser()
    parse.add_argument('file', type=werkzeug.datastructures.FileStorage, location='files')

So now we can get the kernel configuration file, extract it if the filename ends in .gz, and send it to the elivepatch server.

The elivepatch server needs to read the configuration, compare it with the
current kernel configuration, and recompile the kernel if they differ.
After that we can start making the livepatch with kpatch-build.
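A minimal sketch of that comparison, under the assumption (mine, not elivepatch's actual code) that two configs are equivalent when they set the same options to the same values:

def configs_differ(client_config, server_config):
    def options(path):
        # Keep only real CONFIG_*=value lines, ignoring comments and blanks
        with open(path) as config:
            return {line.strip() for line in config
                    if line.strip() and not line.startswith('#')}
    return options(client_config) != options(server_config)

if configs_differ('uploaded_config', '/usr/src/linux/.config'):
    print('configs differ: rebuild the kernel before running kpatch-build')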

This is an example of using kpatch-build:

kpatch-build/kpatch-build -s /usr/src/linux-4.9.16-gentoo/ -v /usr/src/linux-4.9.16-gentoo/vmlinux examples/test.patch --skip-gcc-check
gsoc-2017 kpatch (gentoo) # kpatch-build/kpatch-build --help
usage: kpatch-build [options] <patch file>
            -h, --help         Show this help message
            -r, --sourcerpm    Specify kernel source RPM
            -s, --sourcedir    Specify kernel source directory
            -c, --config       Specify kernel config file
            -v, --vmlinux      Specify original vmlinux
            -t, --target       Specify custom kernel build targets
            -d, --debug        Keep scratch files in /tmp
            --skip-cleanup     Skip post-build cleanup
            --skip-gcc-check   Skip gcc version matching check
                               (not recommended)

This command is called automatically by the elivepatch server after receiving the configuration file.

We also need to send the patch file.

What will I do next time?

  • Divide the call for sending (patch, config) from the call for building the livepatch
  • Make the livepatch call more flexible (as of now it is hardcoded)
  • Ask infra for a server on which to install the elivepatch server

June 16, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Recently, Gentoo Linux put GCC 6.3 (released in December 2016) into the testing branch. For a source-based distribution like Gentoo, GCC is a critical part of the toolchain, and upgrading it can sometimes lead to packages failing to compile or run. I recently ran into just this problem with Audacity. The error that I hit was not during compilation, but during runtime:


$ audacity
Fatal Error: Mismatch between the program and library build versions detected.
The library used 3.0 (wchar_t,compiler with C++ ABI 1009,wx containers,compatible with 2.8),
and your program used 3.0 (wchar_t,compiler with C++ ABI 1010,wx containers,compatible with 2.8).
Aborted

The Gentoo Wiki has a nice, detailed page on Upgrading GCC, and explicitly calls out ABI changes. In this particular case of Audacity, the problematic library is referenced in the error above: “wx containers”. WX containers are handled by the wxGTK package. So, I simply needed to rebuild the currently-installed version of wxGTK to fix this particular problem.

Hope that helps.

Cheers,
Zach

June 15, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)
Don't leave Coredumps on Web Servers (June 15, 2017, 09:20 UTC)

Coredumps are a feature of Linux and other Unix systems to analyze crashing software. If a software crashes, for example due to an invalid memory access, the operating system can save the current content of the application's memory to a file. By default it is simply called core.

While this is useful for debugging purposes, it can produce a security risk. If a web application crashes, the coredump may simply end up in the web server's root folder. Given that its file name is known, an attacker can simply download it via a URL of the form http://example.org/core. As coredumps contain an application's memory, they may expose secret information. A very typical example would be passwords.

PHP used to crash relatively often. Recently a lot of these crash bugs have been fixed, in part because PHP now has a bug bounty program. But there are still situations in which PHP crashes. Some of them likely won't be fixed.

How to disclose?

With a scan of the Alexa Top 1 Million domains for exposed core dumps, I found around 1,000 vulnerable hosts. I was faced with a challenge: how could I properly disclose this? It was obvious that I wouldn't write hundreds of mails by hand, so I needed an automated way to contact the site owners.

Abusix runs a service where you can query the abuse contacts of IP addresses via a DNS query. This turned out to be very useful for this purpose. One could also imagine contacting domain owners directly, but that's not very practical. The domain whois databases have rate limits and don't always expose contact mail addresses in a machine readable way.
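For illustration, this is roughly how such a lookup can be scripted in Python with dnspython, assuming the abuse-contacts.abusix.zone interface that Abusix documents (the zone name and record format are taken from their docs, so treat this as a sketch):

import dns.resolver

def abuse_contact(ipv4):
    # The IP octets are reversed, as for PTR lookups: 192.0.2.1 -> 1.2.0.192
    reversed_ip = '.'.join(reversed(ipv4.split('.')))
    answer = dns.resolver.resolve(reversed_ip + '.abuse-contacts.abusix.zone', 'TXT')
    # The TXT record carries the abuse contact mail address
    return answer[0].strings[0].decode()

print(abuse_contact('192.0.2.1'))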

Using the abuse contacts doesn't reach all of the affected host operators. Some abuse contacts were nonexistent mail addresses, others didn't have abuse contacts at all. I also got all kinds of automated replies, some of them asking me to fill out forms or do other things, otherwise my message wouldn't be read. Due to the scale I ignored those. I feel that if people make it hard for me to inform them about security problems that's not my responsibility.

I took away two things that I changed in a second batch of disclosures. Some abuse contacts seem to automatically search for IP addresses in the abuse mails. I originally only included affected URLs. So I changed that to include the affected IPs as well.

In many cases I was informed that the affected hosts were not owned by the company I contacted, but by a customer. Some of them asked me if they were allowed to forward the message to them. I thought that would be obvious, but I have made it explicit now. Some of them asked me to contact their customers directly, which again, of course, is impractical at scale. And sorry: they are your customers, not mine.

How to fix and prevent it?

If you have a coredump on your web host, the obvious fix is to remove it from there. However, you also want to prevent this from happening again.

There are two settings that impact coredump creation: a limits setting, configurable via /etc/security/limits.conf and ulimit, and a sysctl interface that can be found under /proc/sys/kernel/core_pattern.

The limits setting is a size limit for coredumps. If it is set to zero then no core dumps are created. To set this as the default you can add something like this to your limits.conf:

* soft core 0

The sysctl interface sets a pattern for the file name and can also contain a path. You can set it to something like this:

/var/log/core/core.%e.%p.%h.%t

This would store all coredumps under /var/log/core/ and add the executable name, process id, host name and timestamp to the filename. The directory needs to be writable by all users; you should use a directory with the sticky bit set (chmod +t).

If you set this via the proc file interface it will only be temporary until the next reboot. To set this permanently you can add it to /etc/sysctl.conf:

kernel.core_pattern = /var/log/core/core.%e.%p.%h.%t

Some Linux distributions directly forward core dumps to crash analysis tools. This can be done by prefixing the pattern with a pipe (|). These tools, like apport from Ubuntu or abrt from Fedora, have also been the source of security vulnerabilities in the past. However, that's a separate issue.

Look out for coredumps

My scans showed that this is a relatively common issue. Among popular web pages, around one in a thousand were affected before my disclosure attempts. I recommend that pentesters and developers of security scan tools consider checking for this. It's simple: just try to download the /core file and check if it looks like an executable. In most cases it will be an ELF file; however, sometimes it may be a Mach-O (OS X) or an a.out file (very old Linux and Unix systems).
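A minimal sketch of such a check (my own illustration, not the scanner I used), which fetches only the first bytes of /core and looks for the ELF magic:

import requests

def has_exposed_coredump(base_url):
    resp = requests.get(base_url.rstrip('/') + '/core', stream=True, timeout=10)
    if resp.status_code != 200:
        return False
    # ELF files start with the four magic bytes 0x7f 'E' 'L' 'F'
    magic = next(resp.iter_content(chunk_size=4), b'')
    return magic == b'\x7fELF'

print(has_exposed_coredump('http://example.org'))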

Image credit: NASA/JPL-Université Paris Diderot

June 13, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Moving back to ikiwiki (June 13, 2017, 03:40 UTC)

I was trying to use blogs.gentoo.org/alicef/, the official Gentoo blog
based on WordPress.
As much as I liked the draft feature, it had some big drawbacks.
The biggest one: I couldn't post any syntax-highlighted code.
WordPress maintenance also takes a lot of time, in particular managing plugins.
And because I cannot change Gentoo blog plugins without admin privileges, it is a bit too much to have to ask every time I have a problem with a plugin or need a new one.

So I decided to come back to ikiwiki.
Also, in ikiwiki I can make some sort of draft function, where posts tagged as draft don't come up in the blog list.
This is simply done by using this in blog.mdwn:
pages="page(blog/) and !/Discussion and !*/local.css and !tagged(draft)"
That will remove blog pages tagged with draft from the blog view.

For syntax highlighting I used a plugin for pygments.py made by
tylercipriani.com, which you can find here: pygments.pm

Pygments.pm output example:

#include <stdio.h>

int main(void) {
    // your code goes here
    return 0;
}

And lastly, for comments on ikiwiki, I decided to delegate comment spam management to Disqus; even though I don't much like using a proprietary service, it does its job of managing comments well enough.
I also just discovered that Disqus is part of Y Combinator, the company behind Hacker News.

[2017 05] Gentoo activity summary (June 13, 2017, 01:49 UTC)

[2017 04] Gentoo activity summary (June 13, 2017, 01:49 UTC)

  • Searching for work as a researcher in Japan
  • Got instances for Gentoo Kernel CI
  • Joined Gentoo Kernel Security Project
  • Released kernel 4.11

[2017 03] Gentoo activity summary (June 13, 2017, 01:43 UTC)

Kernel-2.eclass EAPI6 (June 13, 2017, 01:43 UTC)

Kernel-2.eclass recently adopted EAPI 6
Let's move all kernel ebuild packages to EAPI 6

If you are wondering what changed with EAPI 6, there are some nice references:

https://blogs.gentoo.org/mgorny/2015/11/13/the-ultimate-guide-to-eapi-6/

and from the official documentation https://dev.gentoo.org/~ulm/pms/head/pms.html:

EAPI 6 is EAPI 5 with the following changes:

  • Bash version is 4.2, bash-version on page 78.
  • Default src_prepare no longer a no-op, src-prepare-6 on page 138.
  • Different src_install implementation, src-install-6 on page 174.
  • LC_CTYPE and LC_COLLATE compatible with POSIX locale, locale-settings on page 215.
  • failglob is enabled in global scope, failglob on page 226.
  • einstall banned, banned-commands on page 233.
  • die and assert called with -n respect nonfatal, nonfatal-die on page 237.
  • eapply support, eapply on page 240.
  • eapply_user support, eapply-user on page 240.
  • econf adds --docdir and --htmldir, econf-options on page 246.
  • in_iuse support, in-iuse on page 275.
  • unpack supports absolute and relative paths, unpack-absolute on page 284.
  • unpack supports .txz, unpack-extensions on page 285.
  • unpack matches filename extensions case-insensitively, unpack-ignore-case on page 285.
  • einstalldocs support, einstalldocs on page 286.
  • get_libdir support, get-libdir on page 286.

June 12, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
[2017 02] Summary of Gentoo activity (June 12, 2017, 07:51 UTC)

June 10, 2017
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo Puppet Portage Package Provider (June 10, 2017, 05:00 UTC)

Why do this

The previous built-in puppet portage package provider (I'm just going to shorten it to PPPP) only supported very simplistic package interactions, mainly installing and uninstalling by package name (with slot). This has proven fairly limiting: if you wanted to install a specific version of a package and lock it down, you were forced to call out to exec or edit package.{mask,unmask,keywords} files.

The new provider (which will be built into puppet in 5.0 or puppet-agent-2.0) supports all the package provider attributes.

How do I get this awesome thing

Emerge puppet or puppet-agent with the experimental use flag.

What it can do

You can use the following attributes with the new PPPP.

  • name - the full package atom works now, using qatom on the backend.
  • ensure - now allowing a package purge as well (CONFIG_PROTECT="-*").
  • install_options - you can now pass options to emerge (--deep or --usepkgonly for example).
  • uninstall_options - just like install_options

Being able to call out specific versions and per-package install options gives much greater flexibility.

fin

Here is the pull request that upstream puppet merged.

If you have any questions I'm on freenode as prometheanfire.

June 03, 2017
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo portage templates (June 03, 2017, 05:00 UTC)

Why do this

Gentoo is known for being somewhat complex to manage, making clusters of Gentoo machines even more complex in most scenarios. Using the following methods, the configuration becomes easier.

By the end of this you should be able to have a default hiera configuration for Gentoo while still being able to override it for specific use cases. What makes the method I chose particularly powerful is the ability to delete default values entirely, not just set them to something else.

Most of these methods came from my experience with Chef, which I thought would apply well to other config engines. While some don't like shoving logic into the configuration template engine, I'm open to suggestions.

Requirements

  • Puppet 4.x or puppet-agent with hiera support.
  • Puppet's stdlib installed (specifically for delete_values stuff).
  • (optional) use puppetserver instead of running this oneoff.
  • Hiera configured to use the following configuration.

Hiera config

:merge_behavior: deeper
:deep_merge_options:
  :merge_hash_arrays: true

Basic Setup

  • Convert the common portage configurations to templates.
  • Convert the data in those templates to a datastructure.
  • Use hiera to write the defaults / node overrides.
  • Call the templates via a puppet module.

Datastructures

The easiest way of explaining how this works is to say that the only data stored in the 'deepest' value is ever going to be True or False. The reason for this is that using deep_merge in hiera is an additive process, and we need a flag to remove things further down the line (see the small sketch after this paragraph).
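To make that concrete, here is a small Python sketch (illustration only; the real merging is done by hiera's deep_merge and puppet's delete_values) of why True/False leaf values let a higher-precedence level remove a default:

def deep_merge(base, override):
    # Additive merge, like hiera's deeper merge_behavior:
    # nested hashes are merged, scalar leaves are overridden.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {'emerge_default_opts': {'--quiet-build': True, '--changed-use': True}}
node = {'emerge_default_opts': {'--quiet-build': False}}  # node-level override

opts = deep_merge(defaults, node)['emerge_default_opts']
# Keep only keys still set to True, mimicking delete_values($opts, false)
print(' '.join(key for key, enabled in opts.items() if enabled))
# -> --changed-use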

The datastructure itself is fairly simple, here is an excerpt from my setup.

make_conf:
  emerge_default_opts:
    "--quiet-build": true
    "--changed-use": true

If I wanted to disable --quiet-build down the line, I would just set the value to False at a higher precedence (the specific node config instead of the general location):

make_conf:
  emerge_default_opts:
    "--quiet-build": false

Configuration Templates

The templates themselves are epp based, not erb (the old method).

package.keywords

For this one I'll also supply how I auto-set the right architecture; works for amd64 at least.

Hiera data

"app-admin/paxtest ~%{facts.architecture}": true

Template

<%- |$packages| -%>
# THIS FILE WAS GENERATED BY PUPPET, CHANGES WILL BE OVERWRITTEN

<%- keys(delete_values($packages, false)).each |$package| { -%>
<%= "$package" %>
<%- } -%>

This one is the simplest: if a value for the key (paxtest in this case) is set to false, don't use it; the remaining keys are then written out as plain text.

package.use

<%- |$packages| -%>
# THIS FILE WAS GENERATED BY PUPPET, CHANGES WILL BE OVERWRITTEN

<%- keys(delete_values($packages, false)).each |$package| { -%>
  <%- if ! empty(keys(delete_values($packages[$package], false))) { -%>
<%= "$package" %> <%= join(keys(delete_values($packages[$package], false)), ' ') %>
  <%- } -%>
<%- } -%>

This one is fairly straightforward as well: for each package that isn't disabled, if there are keys for the package (signifying USE flags; needed because we remove the unset flags), then set them. This combines the flags set at all levels in hiera.

make.conf

This will be the most complicated one, but it's also likely to be the most important. I'll explain a bit about it after the paste.

<%- |$config| -%>
# THIS FILE WAS GENERATED BY PUPPET, CHANGES WILL BE OVERWRITTEN

CFLAGS="<%= join(keys(delete_values($config['cflags'], false)), ' ') %>"
CXXFLAGS="<%= join(keys(delete_values($config['cxxflags'], false)), ' ') %>"
CHOST="<%= $config['chost'] %>"
MAKEOPTS="<%= join(keys(delete_values($config['makeopts'], false)), ' ') %>"
CPU_FLAGS_X86="<%= join(keys(delete_values($config['cpu_flags_x86'], false)), ' ') %>"
ABI_X86="<%= join(keys(delete_values($config['abi_x86'], false)), ' ') %>"

USE="<%= join(keys(delete_values($config['use'], false)), ' ') %>"

GENTOO_MIRRORS="<%= join(keys(delete_values($config['gentoo_mirrors'], false)), ' ') %>"
<% if has_key($config, 'portage_binhost') { -%>
  <%- if $config['portage_binhost'] != false { -%>
PORTAGE_BINHOST="<%= $config['portage_binhost'] %>"
  <%- } -%>
<% } -%>

FEATURES="<%= join(keys(delete_values($config['features'], false)), ' ') %>"
EMERGE_DEFAULT_OPTS="<%= join(keys(delete_values($config['emerge_default_opts'], false)), ' ') %>"
PKGDIR="<%= $config['pkgdir'] %>"
PORT_LOGDIR="<%= $config['port_logdir'] %>"
PORTAGE_GPG_DIR="<%= $config['portage_gpg_dir'] %>"
PORTAGE_GPG_KEY='<%= $config['portage_gpg_key'] %>'

GRUB_PLATFORMS="<%= join(keys(delete_values($config['grub_platforms'], false)), ' ') %>"
LINGUAS="<%= join(keys(delete_values($config['linguas'], false)), ' ') %>"
L10N="<%= join(keys(delete_values($config['l10n'], false)), ' ') %>"

PORTAGE_ELOG_CLASSES="<%= join(keys(delete_values($config['portage_elog_classes'], false)), ' ') %>"
PORTAGE_ELOG_SYSTEM="<%= join(keys(delete_values($config['portage_elog_system'], false)), ' ') %>"
PORTAGE_ELOG_MAILURI="<%= $config['portage_elog_mailuri'] %>"
PORTAGE_ELOG_MAILFROM="<%= $config['portage_elog_mailfrom'] %>"

<% if has_key($config, 'accept_licence') { -%>
ACCEPT_LICENSE="<%= join(keys(delete_values($config['accept_licence'], false)), ' ') %>"
<%- } -%>
POLICY_TYPES="<%= join(keys(delete_values($config['policy_types'], false)), ' ') %>"
PAX_MARKINGS="<%= join(keys(delete_values($config['pax_markings'], false)), ' ') %>"

USE_PYTHON="<%= join(keys(delete_values($config['use_python'], false)), ' ') %>"
PYTHON_TARGETS="<%= join(keys(delete_values($config['python_targets'], false)), ' ') %>"
RUBY_TARGETS="<%= join(keys(delete_values($config['ruby_targets'], false)), ' ') %>"
PHP_TARGETS="<%= join(keys(delete_values($config['php_targets'], false)), ' ') %>"

<% if has_key($config, 'source') { -%>
source <%= join(keys(delete_values($config['source'], false)), ' ') %>
<%- } -%>

The basic idea of this is that you pass in the full make.conf datastructure you will generate as a single variable. Everything else is pulled (or eliminated) from that.

Each option that is selected already has all the options merged, but this means the disabled versions of a given value could still be there; these are removed using the delete_values($config['foo'], false) bit.

The puppet module itself

It's fairly easy to call, just make sure the template is in the template location and do it as follows.

file { '/etc/portage/make.conf':
  content => epp('portage/portage-make_conf.epp', {'config' => hiera_hash('portage')['make_conf']})
}

fin

If you have any questions I'm on freenode as prometheanfire.

May 16, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In my previous blog post, I demonstrated how to use the PIV feature of a Yubikey to add a 2nd factor authentication to SSH.

Careful readers such as Grzegorz Kulewski pointed out that using the GPG capability of the Yubikey was also a great, more versatile and more secure option on the table (I love those community insights):

  • GPG keys and subkeys are indeed more flexible and can be used for case-specific operations (signing, encryption, authentication)
  • GPG is more widely used and one could use their Yubikey smartcard for SSH, VPN, HTTP auth and code signing
  • The Yubikey 4 GPG feature supports 4096 bit keys (limited to 2048 for PIV)

While I initially looked at the GPG feature, its apparent complexity got me to discard it for my direct use case (SSH). But I couldn’t resist the good points of Grzegorz and here I got back into testing it. Thank you again Grzegorz for the excuse you provided 😉

So let’s get through with the GPG feature of the Yubikey to authenticate our SSH connections. Just like the PIV method, this one has the advantage of allowing a 2nd factor authentication while using the public key authentication mechanism of OpenSSH, and thus does not need any kind of setup on the servers.

Method 3 – SSH using Yubikey and GPG

Acknowledgement

The first choice you have to make is to decide whether you allow your master key to be stored on the Yubikey or not. This choice will be guided by how you plan to use and industrialize your usage of GPG-based SSH authentication.

Consider this to choose whether to store the master key on the Yubikey or not:

  • (con) it will not allow the usage of the same GPG key on multiple Yubikeys
  • (con) if you lose your Yubikey, you will have to revoke your entire GPG key and start from scratch (since the secret key is stored on the Yubikey)
  • (pro) by storing everything on the Yubikey, you won’t necessarily have to keep an offline copy of your master key (and all the process that comes with it)
  • (pro) it is easier to generate and store everything on the key, which makes it a good starting point for newcomers or occasional GPG users

Because I want to demonstrate and enforce the most straightforward way of using it, I will base this article on generating and storing everything on a Yubikey 4. You can find useful links at the end of the article pointing to references on how to do it differently.

Tools installation

For this to work, we will need some tools on our local machine to setup our Yubikey correctly.

Gentoo users should install those packages:

emerge -av dev-libs/opensc sys-auth/ykpers app-crypt/ccid sys-apps/pcsc-tools app-crypt/gnupg

Gentoo users should also allow the pcscd service to be hotplugged (started automatically upon key insertion) by modifying their /etc/rc.conf and having:

rc_hotplug="pcscd"

Yubikey setup

The idea behind the Yubikey setup is to generate and store the GPG keys directly on our Yubikey and to secure them via a PIN code (and an admin PIN code).

  • default PIN code: 123456
  • default admin PIN code: 12345678

First, insert your Yubikey and let’s change its USB operating mode to OTP+U2F+CCID with MODE_FLAG_EJECT flag.

ykpersonalize -m86
Firmware version 4.3.4 Touch level 783 Program sequence 3

The USB mode will be set to: 0x86

Commit? (y/n) [n]: y

NOTE: if you have an older version of Yubikey (before Sept. 2014), use -m82 instead.

Then, we can generate a new GPG key on the Yubikey. Let’s open the smartcard for editing.

gpg --card-edit --expert

Reader ...........: Yubico Yubikey 4 OTP U2F CCID (0005435106) 00 00
Application ID ...: A7560001240102010006054351060000
Version ..........: 2.1
Manufacturer .....: Yubico
Serial number ....: 75435106
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

Then switch to admin mode.

gpg/card> admin
Admin commands are allowed

We can start generating the Signature, Encryption and Authentication keys on the Yubikey. During the process, you will be prompted alternatively for the admin PIN and PIN.

gpg/card> generate 
Make off-card backup of encryption key? (Y/n) 

Please note that the factory settings of the PINs are
   PIN = '123456'     Admin PIN = '12345678'
You should change them using the command --change-pin

I advise you to say Yes to the off-card backup of the encryption key.

Yubikey 4 users can choose a 4096-bit key; let’s go for it for every key type.

What keysize do you want for the Signature key? (2048) 4096
The card will now be re-configured to generate a key of 4096 bits
Note: There is no guarantee that the card supports the requested size.
      If the key generation does not succeed, please check the
      documentation of your card to see what sizes are allowed.
What keysize do you want for the Encryption key? (2048) 4096
The card will now be re-configured to generate a key of 4096 bits
What keysize do you want for the Authentication key? (2048) 4096
The card will now be re-configured to generate a key of 4096 bits

Then you’re asked for the expiration of your key. I chose 1 year, but it’s up to you (leave 0 for no expiration).

Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at mer. 15 mai 2018 21:42:42 CEST
Is this correct? (y/N) y

Finally you give GnuPG details about your user ID and you will be prompted for a passphrase (make it strong).

GnuPG needs to construct a user ID to identify your key.

Real name: Ultrabug
Email address: ultrabug@nospam.com
Comment: 
You selected this USER-ID:
    "Ultrabug <ultrabug@nospam.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

If you chose to make an off-card backup of your key, you will also get notified of its location, as well as that of the revocation certificate.

gpg: Note: backup of card key saved to '/home/ultrabug/.gnupg/sk_8E407636C9C32C38.gpg'
gpg: key 22A73AED8E766F01 marked as ultimately trusted
gpg: revocation certificate stored as '/home/ultrabug/.gnupg/openpgp-revocs.d/A1580FD98C0486D94C1BE63B22A73AED8E766F01.rev'
public and secret key created and signed.

Make sure to store that backup in a secure and offline location.

You can verify that everything went well and take this chance to note the public key ID.

gpg/card> verify

Reader ...........: Yubico Yubikey 4 OTP U2F CCID (0001435106) 00 00
Application ID ...: A7560001240102010006054351060000
Version ..........: 2.1
Manufacturer .....: Yubico
Serial number ....: 75435106
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa4096 rsa4096 rsa4096
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 4
Signature key ....: A158 0FD9 8C04 86D9 4C1B E63B 22A7 3AED 8E76 6F01
 created ....: 2017-05-16 20:43:17
Encryption key....: E1B6 7009 907D 1D94 B200 37D7 8E40 7636 C9C3 2C38
 created ....: 2017-05-16 20:43:17
Authentication key: AAED AB8E E055 41B2 EFFF 62A4 164F 873A 75D2 AD6B
 created ....: 2017-05-16 20:43:17
General key info..: pub rsa4096/22A73AED8E766F01 2017-05-16 Ultrabug <ultrabug@nospam.com>
sec> rsa4096/22A73AED8E766F01 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/164F873A75D2AD6B created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/8E407636C9C32C38 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106

You’ll find the public key ID on the “General key info” line (22A73AED8E766F01):

General key info..: pub rsa4096/22A73AED8E766F01 2017-05-16 Ultrabug <ultrabug@nospam.com>

Quit the card editing.

gpg/card> quit

It is then convenient to upload your public key to a key server, whether a public one or your own web server (you can also keep it to be used and imported directly from a USB stick).

Export the public key:

gpg --armor --export 22A73AED8E766F01 > 22A73AED8E766F01.asc

Then upload it to your http server or a public server (needed if you want to be able to easily use the key on multiple machines):

# Upload it to your http server
scp 22A73AED8E766F01.asc user@server:public_html/static/22A73AED8E766F01.asc

# OR upload it to a public keyserver
gpg --keyserver hkps://hkps.pool.sks-keyservers.net --send-key 22A73AED8E766F01

Now we can finish up the Yubikey setup. Let’s edit the card again:

gpg --card-edit --expert

Reader ...........: Yubico Yubikey 4 OTP U2F CCID (0001435106) 00 00
Application ID ...: A7560001240102010006054351060000
Version ..........: 2.1
Manufacturer .....: Yubico
Serial number ....: 75435106
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa4096 rsa4096 rsa4096
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 4
Signature key ....: A158 0FD9 8C04 86D9 4C1B E63B 22A7 3AED 8E76 6F01
 created ....: 2017-05-16 20:43:17
Encryption key....: E1B6 7009 907D 1D94 B200 37D7 8E40 7636 C9C3 2C38
 created ....: 2017-05-16 20:43:17
Authentication key: AAED AB8E E055 41B2 EFFF 62A4 164F 873A 75D2 AD6B
 created ....: 2017-05-16 20:43:17
General key info..: pub rsa4096/22A73AED8E766F01 2017-05-16 Ultrabug <ultrabug@nospam.com>
sec> rsa4096/22A73AED8E766F01 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/164F873A75D2AD6B created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/8E407636C9C32C38 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
gpg/card> admin

Make sure that the Signature PIN is forced to request that your PIN is entered when your key is used. If it is listed as “not forced”, you can enforce it by entering the following command:

gpg/card> forcesig

It is also good practice to set a few more settings on your key.

gpg/card> login
Login data (account name): ultrabug

gpg/card> lang
Language preferences: en

gpg/card> name 
Cardholder's surname: Bug
Cardholder's given name: Ultra

Now we need to setup the PIN and admin PIN on the card.

gpg/card> passwd 
gpg: OpenPGP card no. A7560001240102010006054351060000 detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 1
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 3
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? Q

If you uploaded your public key on your web server or a public server, configure it on the key:

gpg/card> url
URL to retrieve public key: http://ultrabug.fr/keyserver/22A73AED8E766F01.asc

gpg/card> quit

Now we can quit the gpg card editing; we’re done on the Yubikey side!

gpg/card> quit

SSH client setup

This is the setup on the machine(s) where you will be using the GPG key. The idea is to import your key from the card to your local keyring so you can use it on gpg-agent (and its ssh support).

You can skip the fetch/import part below if you generated the key on the same machine as the one you are using it on. You should see it listed when executing gpg -k.

Plug in your Yubikey and load the smartcard.

gpg --card-edit --expert

Then fetch the key from the URL to import it to your local keyring.

gpg/card> fetch

Then you’re done with this part; exit gpg and update/display your card status.

gpg/card> quit

gpg --card-status

You can verify the presence of the key in your keyring:

gpg -K
sec>  rsa4096 2017-05-16 [SC] [expires: 2018-05-16]
      A1580FD98C0486D94C1BE63B22A73AED8E766F01
      Card serial no. = 0001 05435106
uid           [ultimate] Ultrabug <ultrabug@nospam.com>
ssb>  rsa4096 2017-05-16 [A] [expires: 2018-05-16]
ssb>  rsa4096 2017-05-16 [E] [expires: 2018-05-16]

Note the “Card serial no.” showing that the key is actually stored on a smartcard.

Now we need to configure gpg-agent to enable ssh support; edit your ~/.gnupg/gpg-agent.conf configuration file and make sure that enable-ssh-support is present:

default-cache-ttl 7200
max-cache-ttl 86400
enable-ssh-support

Then you will need to update your ~/.bashrc file to automatically start gpg-agent and override ssh-agent’s environment variables. Add this at the end of your ~/.bashrc file (or equivalent).

# start gpg-agent if it's not running
# then override SSH authentication socket to use gpg-agent
pgrep -l gpg-agent &>/dev/null
if [[ "$?" != "0" ]]; then
 gpg-agent --daemon &>/dev/null
fi
SSH_AUTH_SOCK=/run/user/$(id -u)/gnupg/S.gpg-agent.ssh
export SSH_AUTH_SOCK

To simulate a clean slate, unplug your card then kill any running gpg-agent:

killall gpg-agent

Then plug back your card and source your ~/.bashrc file:

source ~/.bashrc

Your GPG key is now listed in your ssh identities!

ssh-add -l
4096 SHA256:a4vsJM6Sw1Rt8orvPnI8nvNUwHbRQ67ylnoTxruozK9 cardno:000105435106 (RSA)

You will now be able to get the SSH public key to copy to your remote servers using:

ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCVDq24Ld/bOzc3yNnY6fF7FNfZnb6wRVdN2xMo1YiA5pz20y+2P1ynq0rb6l/wsSip0Owq4G6fzaJtT1pBUAkEJbuMvZBrbYokb2mZ78iFZyzAkCT+C9YQtvPClFxSnqSL38jBpunZuuiFfejM842dWdMNK3BTcjeTQdTbmY+VsVOw7ppepRh7BWslb52cVpg+bHhB/4H0BUUd/KHZ5sor0z6e1OkqIV8UTiLY2laCCL8iWepZcE6n7MH9KItqzX2I9HVIuLCzhIib35IRp6d3Whhw3hXlit4/irqkIw0g7s9jC8OwybEqXiaeQbmosLromY3X6H8++uLnk7eg9RtCwcWfDq0Qg2uNTEarVGgQ1HXFr8WsjWBIneC8iG09idnwZNbfV3ocY5+l1REZZDobo2BbhSZiS7hKRxzoSiwRvlWh9GuIv8RNCDss9yNFxNUiEIi7lyskSgQl3J8znDXHfNiOAA2X5kVH0s6AQx4hQP9Dl1X2Em4zOz+yJEPFnAvE+XvBys1yuUPq1c3WKMWzongZi8JNu51Yfj7Trm74hoFRn+CREUNpELD9JignxlvkoKAJpWVLdEu1bxJ7jh7kcMQfVEflLbfkEPLV4nZS4sC1FJR88DZwQvOudyS69wLrF3azC1Gc/fTgBiXVVQwuAXE7vozZk+K4hdrGq4u7Gw== cardno:000105435106

This is what ends up in ~/.ssh/authorized_keys on your servers.

When connecting to your remote server, you will be prompted for the PIN!

Conclusion

Using the GPG feature of your Yubikey is very convenient and versatile. Even if it is not that hard after all, it is interesting and fair to note that the PIV method is indeed simpler to implement.

When you need to maintain a large number of security keys in an organization and their usage is limited to SSH, you will be inclined to stick with PIV, provided 2048-bit keys are acceptable to you.

However, for power users and developers, usage of GPG is definitely something you need to consider for its versatility and enhanced security.

Useful links

You may find those articles useful to setup your GPG key differently and avoid having the secret key tied to your Yubikey.

Sebastian Pipping a.k.a. sping (homepage, bugs)

Hi!

When I started fetchcommandwrapper about 6 years ago, it was a proof of concept: it plugged into portage, replacing wget for downloads, and leveraged ${GENTOO_MIRRORS} and aria2 to both download faster and distribute load across mirrors. A hack for sure, but with some potential.

Back then public interest was non-existent, fetchcommandwrapper had some issues — e.g. metadata.xsd downloads failed and some sites rejected downloads before it made aria2 dress like wget — and I eventually stopped using it myself.

With the latest bug reports, bugfixes and release of version 0.8 in Gentoo, fetchcommandwrapper is ready for general use now. To give it a shot, you emerge app-portage/fetchcommandwrapper and append source /usr/share/fetchcommandwrapper/make.conf to /etc/portage/make.conf. Done.

If you have extra options that you would like to pass to aria2c, put them in ${FETCHCOMMANDWRAPPER_EXTRA}, or ${FETCHCOMMANDWRAPPER_OPTIONS} for fetchcommandwrapper itself; for example

FETCHCOMMANDWRAPPER_OPTIONS="--link-speed=600000"

tells fetchcommandwrapper that my download link is only 600 KB/s, and in turn makes aria2 drop connections to mirrors that cannot keep up with at least a third of that, so that faster mirrors get a chance to take their place.

For non-ebuild bugs, feel free to use https://github.com/gentoo/fetchcommandwrapper/issues to report.

Best, Sebastian

May 12, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In my previous blog post, I demonstrated how to use a Yubikey to add a 2nd factor (2FA) authentication to SSH using pam_ssh and pam_yubico.

In this article, I will go further and demonstrate another method using Yubikey’s Personal Identity Verification (PIV) capability.

This one has the huge advantage of allowing a 2nd factor authentication while using the public key authentication mechanism of OpenSSH, and thus does not need any kind of setup on the servers.

Method 2 – SSH using Yubikey and PIV

Yubikey 4 and NEO also act as smartcards supporting the PIV standard which allows you to store a private key on your security key through PKCS#11. This is an amazing feature which is also very good for our use case.

Tools installation

For this to work, we will need some tools on our local machines to setup our Yubikey correctly.

Gentoo users should install those packages:

emerge dev-libs/opensc sys-auth/ykpers sys-auth/yubico-piv-tool sys-apps/pcsc-lite app-crypt/ccid sys-apps/pcsc-tools sys-auth/yubikey-personalization-gui

Gentoo users should also allow the pcscd service to be hotplugged (started automatically upon key insertion) by modifying their /etc/rc.conf and having:

rc_hotplug="pcscd"

Yubikey setup

The idea behind the Yubikey setup is to generate and store a private key in our Yubikey and to secure it via a PIN code.

First, insert your Yubikey and let’s change its USB operating mode to OTP+CCID.

ykpersonalize -m2
Firmware version 4.3.4 Touch level 783 Program sequence 3

The USB mode will be set to: 0x2

Commit? (y/n) [n]: y

Then, we will create a new management key:

key=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
echo $key
D59E46FE263DDC052A409C68EB71941D8DD0C5915B7C143A

Replace the default management key (if prompted, copy/paste the key printed above):

yubico-piv-tool -a set-mgm-key -n $key --key 010203040506070801020304050607080102030405060708

Then change the default PIN code and PUK code of your Yubikey

yubico-piv-tool -a change-pin -P 123456 -N <NEW PIN>

yubico-piv-tool -a change-puk -P 12345678 -N <NEW PUK>

Now that your Yubikey is secure, let’s proceed with the PKCS#11 certificate generation. You will be prompted for the management key that you generated before.

yubico-piv-tool -s 9a -a generate -o public.pem -k

Then create a self-signed certificate (only needed for the PKCS#11 library) and import it into the Yubikey:

yubico-piv-tool -a verify-pin -a selfsign-certificate -s 9a -S "/CN=SSH key/" -i public.pem -o cert.pem
yubico-piv-tool -a import-certificate -s 9a -i cert.pem

Here you are! You can now export your public key to use with OpenSSH:

ssh-keygen -D opensc-pkcs11.so -e
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCWtqI37jwxYMJ9XLq9VwHgJlhZViPVAGIUfMm8SAlfs6cka4Cj570lkoGK04r8JAVJFy/iKfhGpL9N9XuartfIoq6Cg/6Qvg3REupuqs51V2cBaC/gnWIQ7qZqlzBulvcOvzNfHFD/lX42J58+E8tWnYg6GzIsImFZQVpmI6SxNfSmVQIqxIufInrbQaI+pKXntdTQC9wyNK5FAA8TXAdff5ZDnmetsOTVble9Ia5m6gqM7MnxNZ56uDpn+6lCxRZSW+Ln2PDE7sivVcST4qpfwY4P4Lrb3vrjCGODFg4xmGNKXsLi2+uZbs5rW7bg4HFO50kKDucPV1M+rBWA9999

Copy your SSH public key to the usual ~/.ssh/authorized_keys file in your $HOME on your servers.

Testing PIV secured SSH

Plug in your Yubikey, and then SSH to your remote server using the opensc-pkcs11 library. You will be prompted for your PIN and then successfully logged in 🙂

ssh -I opensc-pkcs11.so cheetah
Enter PIN for 'PIV_II (PIV Card Holder pin)':

You can then configure SSH to use it by default for all your hosts in your ~/.ssh/config:

Host=*
PKCS11Provider /usr/lib/opensc-pkcs11.so

Using PIV with ssh-agent

You can also use ssh-agent to avoid typing your PIN every time.

When asked for the passphrase, enter your PIN:

ssh-add -s /usr/lib/opensc-pkcs11.so
Enter passphrase for PKCS#11: 
Card added: /usr/lib/opensc-pkcs11.so

You can verify that it worked by listing the available keys in your ssh agent:

ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCWtqI37jwxYMJ9XLq9VwHgJlhZViPVAGIUfMm8SAlfs6cka4Cj570lkoGK04r8JAVJFy/iKfhGpL9N9XuartfIoq6Cg/6Qvg3REupuqs51V2cBaC/gnWIQ7qZqlzBulvcOvzNfHFD/lX42J58+E8tWnYg6GzIsImFZQVpmI6SxNfSmVQIqxIufInrbQaI+pKXntdTQC9wyNK5FAA8TXAdff5ZDnmetsOTVble9Ia5m6gqM7MnxNZ56uDpn+6lCxRZSW+Ln2PDE7sivVcST4qpfwY4P4Lrb3vrjCGODFg4xmGNKXsLi2+uZbs5rW7bg4HFO50kKDucPV1M+rBWA9999 /usr/lib64/opensc-pkcs11.so

Enjoy!

Now you have a flexible yet robust way to authenticate your users, which you can also extend by adding another type of authentication on your servers using PAM.

I recently spent some time looking at how we could better secure our SSH connections to our servers at work.

So far we have been using the OpenSSH public-key-only mechanism, which means that no password is set on the servers for our users. While this was satisfactory for a time, we think it still suffers from some disadvantages, such as:

  • we cannot enforce SSH private keys to have a passphrase on the user side
  • the security level of the whole system is based on the protection of the private key which means that it’s directly tied to the security level of the desktop of the users

This led us to think about adding a 2nd factor authentication to SSH and about the usage of security keys.

Meet the Yubikey

Yubikeys are security keys made by Yubico. They can support multiple modes and work with the U2F open authentication standard which is why they got my attention.

I decided to try the Yubikey 4 because it can act as a smartcard while offering these interesting features:

  • Challenge-Response
  • OTP
  • GPG
  • PIV

Method 1 – SSH using pam_ssh + pam_yubico

The first method I found satisfactory was to combine the pam_ssh authentication module with pam_yubico as a 2nd factor. This allows server-side passphrase enforcement on SSH and the usage of the security key to log in.

TL;DR: two gotchas before we begin

ADVICE: keep a root SSH session open to your servers while deploying/testing this, so you can revert any change you make and avoid locking yourself out of your servers.

Setup pam_ssh

Use pam_ssh on the servers to force usage of a passphrase on a private key. The idea behind pam_ssh is that the passphrase of your SSH key serves as your SSH password.

Generate your SSH key pair with a passphrase on your local machine.

ssh-keygen -f identity
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in identity.
Your public key has been saved in identity.pub.
The key fingerprint is:
SHA256:a2/HNCe28+bpMZ2dIf9bodnBwnmD7stO5sdBOV6teP8 alexys@yazd
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                o|
|            . ++o|
|        S    BoOo|
|         .  B %+O|
|        o  + %+*=|
|       . .. @ .*+|
|         ....%B.E|
+----[SHA256]-----+

You then must copy your private key (named identity, with no extension) to your servers under the ~/.ssh/login-keys.d/ folder.

In your $HOME on the servers, you will get something like this:

.ssh/
├── known_hosts
└── login-keys.d
    └── identity

Then you can enable the pam_ssh authentication. Gentoo users should enable the pam_ssh USE flag for sys-auth/pambase and re-install.

Add this at the beginning of the file /etc/pam.d/ssh

auth    required    pam_ssh.so debug

The debug flag can be removed after you tested it correctly.

Disable public key authentication

Because it takes precedence over the PAM authentication mechanism, you have to disable OpenSSH PubkeyAuthentication authentication on /etc/ssh/sshd_config:

PubkeyAuthentication no

Enable PAM authentication on /etc/ssh/sshd_config

ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM yes

Test pam_ssh

Now you should be prompted for your SSH passphrase to login through SSH.

➜  ~ ssh cheetah
SSH passphrase:

Setup 2nd factor using pam_yubico

Now we will make use of our Yubikey security key to add a 2nd factor authentication to login through SSH on our servers.

Because the Yubikey is not physically plugged into the server, we cannot use an offline Challenge-Response mechanism, so we will have to use a third party to validate the challenge. Yubico graciously provides an API for this, and the pam_yubico module is meant to use it easily.

Preparing your account using your Yubikey (on your machine)

First of all, you need to get your Yubico API key ID from the following URL:

You will get a Client ID (this you will use) and Secret Key (this you will keep safe).

Then you will need to create an authorization mapping file which basically links your account to a Yubikey fingerprint (modhex). This is equivalent to saying “this Yubikey belongs to this user and can authenticate him”.

First, get your modhex:

Using this modhex, create your mapping file named authorized_yubikeys which will be copied to ~/.yubico/authorized_yubikeys on the servers (replace LOGIN_USERNAME with your actual account login name).

LOGIN_USERNAME:xxccccxxuuxx

NOTE: this mapping file can be a centralized one (in /etc for example) to handle all the users of a server. See the authfile option in the documentation.

Setting up OpenSSH (on your servers)

You must install pam_yubico on the servers. For Gentoo, it’s as simple as:

emerge sys-auth/pam_yubico

Copy your authentication mapping file to your home under the .yubico folder on all servers. You should get this:

.yubico/
└── authorized_yubikeys

Configure pam to use pam_yubico. Add this after the pam_ssh line in the file /etc/pam.d/ssh, which should now look like this:

auth    required    pam_ssh.so
auth    required    pam_yubico.so id=YOUR_API_ID debug debug_file=/var/log/auth.log

The debug and debug_file flags can be removed after you tested it correctly.

Testing pam_yubico

Now you should be prompted for your SSH passphrase and then for your Yubikey OTP to login through SSH.

➜  ~ ssh cheetah
SSH passphrase: 
YubiKey for `ultrabug':

About the Yubico API dependency

Careful readers will notice that using pam_yubico introduces a strong dependency on the Yubico API availability. If the API becomes unreachable or your internet connection goes down then your servers would be unable to authenticate you!

The solution I found to this problem is to instruct pam to ignore the Yubikey authentication when pam_yubico is unable to contact the API.

In this case, the module will return an AUTHINFO_UNAVAIL code to PAM, which we can act upon using the following syntax. The first lines of /etc/pam.d/ssh should be changed to this:

auth    required    pam_ssh.so
auth    [success=done authinfo_unavail=ignore new_authtok_reqd=done default=die]    pam_yubico.so id=YOUR_API_ID debug debug_file=/var/log/auth.log

Now you can be sure to be able to use your Yubikey even if the API is down or unreachable 😉

April 30, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hey there!

If you are not subscribed to the new Gentoo packages feed, let me quickly introduce you to SafeEyes, which I started using on a daily basis. It has found its way into Gentoo as x11-misc/safeeyes now. This article does a good job:

SafeEyes Protects You From Eye Strain When Working On The Computer (webupd8.org)

Best, Sebastian

April 15, 2017
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
GHC as a cross-compiler update (April 15, 2017, 11:05 UTC)

TL;DR:

Gentoo’s dev-lang/ghc-8.2.1_rc1 supports both cross-building and cross-compiling modes! It’s useful for cross-compiling Haskell software and for the initial porting of GHC itself to a new Gentoo target.

Building a GHC crosscompiler on Gentoo

Getting ${CTARGET}-ghc (crosscompiler) on Gentoo:

# # convenience variables:
CTARGET=powerpc64-unknown-linux-gnu
#
# # Installing a target toolchain: gcc, glibc, binutils
crossdev ${CTARGET}
# # Installing ghc dependencies:
emerge-${CTARGET} -1 libffi ncurses gmp
#
# # adding 'ghc' symlink to cross-overlay:
ln -s path/to/haskell/overlay/dev-lang/ghc path/to/cross/overlay/cross-${CTARGET}/ghc
#
# # Building ghc crosscompiler:
emerge -1 cross-${CTARGET}/ghc
#
powerpc64-unknown-linux-gnu-ghc --info | grep Target
# ,("Target platform","powerpc64-unknown-linux")

Cross-building GHC on Gentoo

Cross-building ghc on ${CTARGET}:

# # convenience variables:
CTARGET=powerpc64-unknown-linux-gnu
#
# # Installing a target toolchain: gcc, glibc, binutils
crossdev ${CTARGET}
# # Installing ghc dependencies:
emerge-${CTARGET} -1 libffi ncurses gmp
#
# # Cross-building ghc crosscompiler:
emerge-${CTARGET} --buildpkg -1 dev-lang/ghc
#
# # Now built packages can be used on a target to install
# # built ghc as: emerge --usepkg -1 dev-lang/ghc

Building a GHC crosscompiler (generic)

This is how you get a powerpc64 crosscompiler in a fresh git checkout:

$ ./configure --target=powerpc64-unknown-linux-gnu
$ cat mk/build.mk
HADDOCK_DOCS=NO
BUILD_SPHINX_HTML=NO
BUILD_SPHINX_PDF=NO
# to speed things up
BUILD_PROF_LIBS=NO
$ make -j$(nproc)
$ inplace/bin/ghc-stage1 --info | grep Target
,("Target platform","powerpc64-unknown-linux")

Simple!

Below are details that have only historical (or backporting) value.

How did we get there?

Cross-compiling support in GHC is not a new thing. The GHC wiki has a detailed section on how to build a crosscompiler. That works quite well. You can even target ghc at m68k: porting example.

What did not work so well was the attempt to install the result! In some places the GHC build system tried to run ghc-pkg built for ${CBUILD}, in some places for ${CHOST}.

I had never really tried to install a crosscompiler before, I think mostly because I was usually happy to get the cross-compiler to build at all: making GHC build for a rare target usually required a patch or two.

But one day I decided to give a full install a run. The original motivation was a bit unusual: I wanted to free space on my hard drive.

The build tree for GHC usually takes about 6-8GB. I had about 15 GHC source trees lying around; all in all they took about 10% of all the space on my hard drive. Fixing make install would allow me to install only the final result and get rid of all the intermediate files.

I decided to test the make install code on Gentoo‘s dev-lang/ghc as a proper package.

As a result a bunch of minor cleanups happened:

What works?

It allowed me to test various targets. Namely:

Target                                   Bits   Endianness  Codegen
cross-aarch64-unknown-linux-gnu/ghc      64     LE          LLVM
cross-alpha-unknown-linux-gnu/ghc        64     LE          UNREG
cross-armv7a-unknown-linux-gnueabi/ghc   32     LE          LLVM
cross-hppa-unknown-linux-gnu/ghc         32     BE          UNREG
cross-m68k-unknown-linux-gnu/ghc         32     BE          UNREG
cross-mips64-unknown-linux-gnu/ghc       32/64  BE          UNREG
cross-powerpc64-unknown-linux-gnu/ghc    64     BE          NCG
cross-powerpc64le-unknown-linux-gnu/ghc  64     LE          NCG
cross-s390x-unknown-linux-gnu/ghc        64     BE          UNREG
cross-sparc-unknown-linux-gnu/ghc        32     BE          UNREG
cross-sparc64-unknown-linux-gnu/ghc      64     BE          UNREG

I am running all of this on x86_64 (a 64-bit LE platform).

Quite a list! With the help of qemu we can even test whether a cross-compiler produces something that works:

$ cat hi.hs 
main = print "hello!"
$ powerpc64le-unknown-linux-gnu-ghc hi.hs -o hi.ppc64le
[1 of 1] Compiling Main             ( hi.hs, hi.o )
Linking hi.ppc64le ...
$ file hi.ppc64le 
hi.ppc64le: ELF 64-bit LSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib64/ld64.so.2, for GNU/Linux 3.2.0, not stripped
$ qemu-ppc64le -L /usr/powerpc64le-unknown-linux-gnu/ ./hi.ppc64le 
"hello!"

Many qemu targets are slightly buggy and usually are very easy to fix!

A few recent examples:

  • epoll syscall is not wired properly on qemu-alpha: patch
  • CPU initialization code on qemu-s390x
  • thread creation fails on qemu-sparc32plus due to simple mmap() emulation bug
  • tcg on qemu-sparc64 crashes at runtime in static_code_gen_buffer()

Tweaking qemu is fun 🙂


April 10, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.5 (April 10, 2017, 10:19 UTC)

Howdy folks,

I’m obviously slacking a bit on my blog and I’m ashamed to say that it’s not the only place where I do so. py3status is another of them, and it wouldn’t be the project it is today without @tobes.

In fact, this new 3.5 release has witnessed his takeover of the top contributor spot on the project, so I want to extend a warm thank you and lots of congratulations, my friend 🙂

Also, an amazing new contributor from the USA has come around under the nickname @lasers. He has been doing a tremendous job on module normalization, code review and feedback. His high energy is amazing and more than welcome.

This release is mainly his, so thank you @lasers !

What’s new ?

Well, the changelog has never been so large; I don’t even know where to start. I guess the most noticeable change is the gorgeous, brand new documentation of py3status on readthedocs!

Apart from the enhanced guides and sections, what’s amazing behind this new documentation is the level of automation that @lasers and @tobes put into it. They even generate the modules’ screenshots programmatically! I would never have thought it possible 😀

The other main effort of this release is modules normalization, where @lasers put a lot of energy into taking advantage of the formatter features and bringing all the modules to a new level of standardization. This long work brought to light some missing features and bugs, which got corrected along the way.

Last but not least, the way py3status notifies you when modules fail to load or execute has changed. Such modules will no longer pop up a notification (i3 nagbar or dbus) but will display directly in the bar where they belong. Users can left-click to show the error and right-click to discard it from their bar!

New modules

Once again, new and recurring contributors helped the project get better and offer a cool set of modules, thank you contributors !

  • air_quality module, to display the air quality of your place, by @beetleman and @lasers
  • getjson module to display fields from a json url, by @vicyap
  • keyboard_locks module to display keyboard locks states, by @lasers
  • systemd module to check the status of a systemd unit, by @adrianlzt
  • tor_rate module to display the incoming and outgoing data rates of a Tor daemon instance, by @fmorgner
  • xscreensaver module, by @lasers and @neutronst4r

Special mention to @maximbaz for his continuous efforts and help. And also a special community mention to @valdur55 for his responsiveness and help for other users on IRC !

What’s next ?

The 3.6 version will focus on the following ideas, some sane and some crazy 🙂

  • we will continue to work on the ability to add/remove/move modules in the bar at runtime
  • i3blocks and i3pystatus support, to embed their configurations and modules inside py3status
  • formatter optimizations
  • finish modules normalization
  • write more documentation and clean up the old ones

Stay tuned

March 25, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We want to stabilize Perl 5.24 on Gentoo pretty soon (meaning in a few weeks), and we actually do not expect any big surprises there. If you are running a stable installation, are willing to do some testing, and are familiar with our Gentoo bugzilla and with filing bug reports, then you might just be the right volunteer to give it a try in advance!

Here's what to do:

Step 1: Update app-admin/perl-cleaner to current ~arch.
I'm deliberately not supplying any version number here, since I might do another release, but you should at least have perl-cleaner-2.25.

Step 2: Make sure your system is up to date (emerge -uDNav world) and do a depclean step (emerge --depclean --ask).

Step 3: Download the current stabilization list from bug 604602 and place it into your /etc/portage/package.keywords or /etc/portage/package.accept_keywords.
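For example (a sketch, assuming the directory layout for package.accept_keywords and a hypothetical name for the downloaded list):

mkdir -p /etc/portage/package.accept_keywords
cp perl-5.24.keywords /etc/portage/package.accept_keywords/perl-5.24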

Step 4: Update your world (emerge -uDNav world), which triggers the perl update and the module rebuild.

Step 5: Run "perl-cleaner --all"  (you might also want to try "perl-cleaner --all --delete-leftovers").

... and make sure you file bugs for any problems you encounter, during the update and afterwards! Feedback is also appreciated if all goes fine; in that case, a comment here on the blog post is the best way.

March 21, 2017
Jason A. Donenfeld a.k.a. zx2c4 (homepage, bugs)
WireGuard in Google Summer of Code (March 21, 2017, 18:52 UTC)

WireGuard is participating in Google Summer of Code 2017. If you're a student who would like to be funded this summer for writing interesting kernel code, studying cryptography, building networks, or working on a wide variety of interesting problems, then this might be appealing. The program opened to students on March 20th. If you're applying for WireGuard, choose "Linux Foundation" and state in your proposal that you'd like to work on WireGuard with "Jason Donenfeld" as your mentor.

March 17, 2017
Michał Górny a.k.a. mgorny (homepage, bugs)

You should know already that you are not supposed to rely on Portage internals in ebuilds — that is, on any variables, functions and helpers that are not defined by the PMS. You probably know that you are not supposed to touch various configuration files, the vdb and other Portage files as well. What most people don’t seem to understand is that you are not supposed to make any assumptions about the ebuild repository either. In this post, I will expand on this and try to explain why.

What PMS specifies, what you can rely on

I think the first confusing point is that PMS actually defines the repository format pretty thoroughly. However, it does not specify that you can rely on that format being visible from within ebuild environment. It just defines a few interfaces that you can reliably use, some of them in fact quite consistent with the repository layout.

You should really look at the PMS-defined repository format as an input specification. This is the format that the developers are supposed to use when writing ebuilds, and that all basic tools are supposed to support. However, it does not prevent the package managers from defining and using other package formats, as long as they provide an environment compliant with the PMS.

In fact, this is how binary packages are implemented in Gentoo. The PMS does not define any specific format for them. It only defines a few basic rules and facilities, and both Portage and Paludis implement their own binary package formats. The package managers expose APIs required by the PMS, and can use them to run the necessary pkg_* phases.

However, the problem is not limited to two currently used binary package formats. This is a generic goal of being able to define any new package format in the future, and make it work out of the box with existing ebuilds. Imagine just a few possibilities: more compact repository formats (i.e. not requiring hundreds of unpacked files), fetching only needed ebuild files…

Sadly, none of this can even start being implemented as long as developers continuously insist on relying on a specific repository layout.

The *DIR variables

Let’s get into the details and iterate over the few relevant variables here.

First of all, FILESDIR. This is the directory where ebuild support files are provided throughout src_* phases. However, there is no guarantee that this will be exactly the directory you created in the ebuild repository. The package manager just needs to provide the files in some directory, and this directory may not actually exist before the first src_* phase. This implies that the support files may not even exist at all when installing from a binary package, and may be created (copied, unpacked) later when doing a source build.

The next variable listed by the PMS is DISTDIR. While this variable is somewhat similar to the previous one, some developers are actually eager to make the opposite assumption. Once again, the package manager may provide the path to any directory that contains the downloaded files. This may be a ‘shadow’ directory containing only files for this package, or it can be any system downloads directory containing lots of other files. Once again, you can’t assume that DISTDIR will exist before src_*, and that it will exist at all (and contain necessary files) when the build is performed using a binary package.
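To make this concrete, here is a hypothetical ebuild fragment (the patch and config file names are made up):

# OK: FILESDIR is guaranteed to be usable during src_* phases.
src_prepare() {
    eapply "${FILESDIR}/${P}-fix-build.patch"
    default
}

# NOT OK: nothing guarantees that FILESDIR (or DISTDIR) exists here,
# e.g. when the package is installed from a binary package.
pkg_postinst() {
    cp "${FILESDIR}/sample.conf" "${EROOT}/etc/sample.conf"
}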

The two remaining variables I would like to discuss are PORTDIR and ECLASSDIR. Those two are a cause of real mayhem: they are completely unsuited for the multi-repository layout that modern package managers use, and they enforce a particular source repository layout (they are not available outside src_* phases). They pretty much block any effort at improvement, and sadly their removal is continuously blocked by a few short-sighted developers. Nevertheless, work on removing them is in progress.

Environment saving

While we’re discussing those matters, a short note on environment saving is in order. By environment saving we usually mean the magic that causes variables set in one phase function to be carried over to a following phase function, possibly across a disjoint sequence of actions (i.e. install followed by uninstall).

A common misunderstanding is to assume the Portage model of environment saving — i.e. basically dumping a whole ebuild environment including functions into a file. However, this is not sanctioned by the PMS. The rules require the package manager to save only variables, and only those that are not defined in global scope. If phase functions define functions, there is no guarantee that those functions will be preserved or restored. If phases redefine global variables, there is no guarantee that the redefinition will be preserved.

In fact, the specific wording used in the PMS allows a completely different implementation to be used. The package manager may just snapshot defined functions after processing the global scope, or even not snapshot them at all and instead re-read the ebuild (and re-inherit eclasses) every time the execution continues. In this case, any functions defined during phase function are lost.
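A hypothetical fragment to make the distinction concrete:

src_configure() {
    # A non-global variable set here must be preserved for later phases...
    MY_BACKEND="openssl"
    # ...but a function defined here may or may not survive, per the PMS.
    my_helper() { echo "do not rely on me later"; }
}

src_compile() {
    emake BACKEND="${MY_BACKEND}"   # fine
    # calling my_helper here would be relying on unspecified behaviour
}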

Is there a future in this?

I hope this clears up all the misunderstandings on how to write ebuilds so that they will work reliably, both for source and binary builds. If those rules are followed, our users can finally start expecting some fun features to come. However, before that happens we need to fix the few existing violations — and for that to happen, we need a few developers to stop thinking only of their own convenience.

Marek Szuba a.k.a. marecki (homepage, bugs)
Gentoo Linux in a Docker container (March 17, 2017, 14:31 UTC)

I have been using Docker for ebuild development for quite a while and absolutely love it, mostly because of how easy it is to manipulate filesystem state with it. Work on several separate ebuilds in parallel? Just spin up several containers. Clean up once I’m done? Happens automatically when I close the container. Come back to something later? One docker commit invocation and I’m done. I could of course do something similar with virtual machines (and indeed I have to for cross-platform work) – but for native amd64 it is extremely convenient.

There is, however, one catch. By default processes running in a Docker container are fairly restricted privilege-wise and the Gentoo sandbox uses ptrace(). Result? By default, certain ebuilds (sys-libs/glibc and dev-libs/gobject-introspection , to name just two) will fail to emerge. One can of course set FEATURES=”-sandbox -usersandbox” for such ebuilds but it is an absolute no-no for both new ebuilds and any stabilisation work.

In the past working around this issue required messing with Docker security policies, which at least I found rather awkward. Fortunately since version 1.13.0 there has been a considerably easier way – simply pass

--cap-add=SYS_PTRACE

to docker-run. Done! Sandbox can now use ptrace() to its heart’s content.
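For example (a sketch; the image and package are just illustrations):

docker run -it --cap-add=SYS_PTRACE gentoo/stage3-amd64-hardened \
    emerge --oneshot dev-libs/gobject-introspection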

Big Fat Warning: The reason why by default Docker restricts CAP_SYS_PTRACE is that a malicious program can use ptrace() to break out of the container it runs in. Do not grant this capability to containers unless you know what you are doing. Seriously.

Unfortunately the above is not the end of the story because, at least as of version 1.13.0, Docker does not allow enhancing the capabilities of a docker-build job. Why is this a problem? For my own work I use a custom image which somewhat extends the official gentoo/stage3-amd64-hardened . One of the things my Dockerfile does is rsync the Portage tree and update @world so that my image contains a fully up-to-date stage3 even when the official base image does not. You can guess what happens when Docker tries to emerge an ebuild requiring the sandbox to use ptrace()… and remember, one of the packages containing such ebuilds is sys-libs/glibc . To my current knowledge the only way around this is to spin up a ptrace-enabled container using the latest good intermediate image left behind by docker-build and execute the remaining build steps manually, as sketched below. Not fun… Hope they will fix this some day.
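A rough sketch of that workaround, where IMAGE_ID stands for the last intermediate image docker-build reports before the failing step:

docker run -it --cap-add=SYS_PTRACE IMAGE_ID /bin/bash
# ...run the remaining Dockerfile steps by hand, then persist the result:
docker commit CONTAINER_ID my-gentoo-image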

 

Possibly the simplest way of changing the passphrase protecting an SSH key imported into gpg-agent is to use the Assuan passwd command:

echo passwd foo | gpg-connect-agent

where foo is the keygrip of your SSH key, which one can obtain from the file $GNUPGHOME/sshcontrol [1]. So far so good – but how does one know which of the keys listed in that file is the right one, especially if your sshcontrol list is fairly long? Here are the options I am aware of at this point:

Use the key comment. If you remember the contents of the comment field of the SSH key in question you can simply grep for it in all the files stored in $GNUPGHOME/private-keys-v1.d/ . Take the name of the file that matches, strip .key from the end and you’re set! Note that these are binary files so make sure your grep variant does not skip over them.
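A sketch, assuming the key comment is "user@laptop":

grep -l 'user@laptop' "$GNUPGHOME"/private-keys-v1.d/*.key
# the keygrip is the name of the matching file, minus the .key suffix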

Use the MD5 fingerprint and the key comment. If for some reason you would rather not do the above you can take advantage of the fact that for SSH keys imported into gpg-agent the normal way, each keygrip line in sshcontrol is preceded by comment lines containing, among other things, the MD5 fingerprint of the imported key. Just tell ssh-add to print MD5 fingerprints for keys known to the agent instead of the default SHA256 ones:

ssh-add -E md5 -l

locate the fingerprint corresponding to the relevant key comment, then find the corresponding keygrip in sshcontrol .

Use the MD5 fingerprint and the public key. A slightly more complex variant of the above can be used if your SSH key pair in question has no comment but you still have the public key lying around. Start by running

ssh-add -L

and note the number of the line in which the public key in question shows up. The output of ssh-add -L and ssh-add -l is in the same order so you should have no trouble locating the corresponding MD5 fingerprint.

Bottom line: use meaningful comments for your SSH keys. It can really simplify key management in the long run.

[1] https://lists.gnupg.org/pipermail/gnupg-users/2007-July/031482.html

March 08, 2017
Marek Szuba a.k.a. marecki (homepage, bugs)
Hello world! (March 08, 2017, 02:12 UTC)

Welcome to Gentoo Blogs. This is your first post. Edit or delete it, then start blogging!

March 06, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Handling certificates in Gentoo Linux (March 06, 2017, 21:20 UTC)

I recently created a new article on the Gentoo Wiki titled Certificates which talks about how to handle certificate stores on Gentoo Linux. The write-up of the article (which might still change name later, because it does not handle everything about certificates, mostly how to handle certificate stores) was inspired by the observation that I had to adjust the certificate stores of both Chromium and Firefox separately, even though they both use NSS.

February 28, 2017
Denis Dupeyron a.k.a. calchan (homepage, bugs)
Gentoo is accepted to GSoC 2017 (February 28, 2017, 00:07 UTC)

There was good news in my mailbox today. The Gentoo Foundation was accepted to be a mentor organization for Google Summer of Code 2017!

What this means is we need you as a mentor, backup mentor or expert mentor. Whether you are a Gentoo developer and have done GSoC before does not matter at this point.

A mentor is somebody who will help during the selection of students, and will mentor a student during the summer. This should take at most one hour of your time on weekdays when students actually work on their projects. What’s in it for you, you ask? A pretty exclusive Google T-shirt, a minion who does things you wouldn’t have the time or energy to do, but most importantly gratification and a lot of fun.

Backup mentors are for when the primary mentor of a student becomes unavailable for an extended period, typically for medical or family reasons. It rarely happens but it does happen. But a backup mentor can also be an experienced mentor (i.e., have done it at least once) who assists a primary mentor who is doing it for the first time.

Expert mentors have a very specific knowledge and are contacted on an as-needed basis to help with technical decisions.

You can be any combination of all that. However, our immediate need in the coming weeks is for people (again, not necessarily mentors or devs) who will help us evaluate student proposals.

If you’re a student, it’s the right time now to start thinking about what project idea you would want to work on during the summer. You can find ideas on our dedicated page, or you can come up with yours (these are the best!). One note though: you are going to be working on this full-time (i.e., 8 hours a day, we don’t allow for another even part-time job next to GSoC, although we do accommodate students who have a limited amount of classes or exams) for 3 months, so make sure your idea can keep you busy for this long. Whether you pick one of our ideas or come up with yours, it is strongly recommended to start discussing it with us on IRC.

As usual, we’d love to chat with you or answer your questions in #gentoo-soc on Freenode IRC. Make sure you stay long enough in the channel and give us enough time to respond to you. We are all volunteers and can’t maintain a 24/7 presence. It can take up to a few hours for one of us to see your request.

February 18, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hi!

Just a quick tip on how to easily create a Fedora chroot environment from (even a non-Fedora) Linux distribution.

I am going to show the process on Debian stretch but it shouldn’t be much different elsewhere.

Since I am going to leverage pip/PyPI, I need it available — that and a few non-Python widespread dependencies:

# apt install python-pip db-util lsb-release rpm yum
# pip install image-bootstrap pychroot

Now for the actual chroot creation; the process and usage are very close to Debian’s debootstrap:

# directory-bootstrap fedora --release 25 /var/lib/fedora_25_chroot

Done. Now let’s prove we have an actual Fedora 25 in there. For lsb_release we need the package redhat-lsb here, but the chroot is functional even before that.

# pychroot /var/lib/fedora_25_chroot dnf -y install redhat-lsb
# pychroot /var/lib/fedora_25_chroot lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch:[..]:printing-4.1-noarch
Distributor ID: Fedora
Description:    Fedora release 25 (Twenty Five)
Release:        25
Codename:       TwentyFive

Note the use of pychroot, which mainly does bind mounts of /dev and friends out of the box.

directory-bootstrap is part of image-bootstrap and, besides Fedora, also supports creation of chroots for Arch Linux and Gentoo.

See you 🙂

February 09, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

So, FOSDEM 2017 is over, and as every year it was both fun and interesting. There will for sure be more blog posts, e.g., with photographs from talks by our developers, the booth, the annual Gentoo dinner, or (obviously) the beer event. The Gentoo booth, centrally located just opposite to KDE and Gnome and directly next to CoreOS, was quite popular; it's always great to hear from all the enthusiastic Gentoo fans. Many visitors also prepared, compiled, and installed their own Gentoo buttons at our button machine.
In addition we had a new Gentoo LiveDVD as handout - the "Crispy Belgian Waffle" FOSDEM 2017 edition. For those of you who couldn't make it to Brussels, you can still get it! Download the ISO here and burn it on a DVD or copy it on a USB stick - all done. Many thanks to Fernando Reyes (likewhoa) for all his work!
Finally, for those who are wondering, the "Gentoo Ecosystem" poster from our table can be downloaded as PDF here. It is based on work by Daniel Robbins and mitzip from Funtoo; the source files are available on Github. Of course this poster is continuous work in progress, so tell me if you find something missing!

Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo at Fosdem (February 09, 2017, 06:00 UTC)

At the stand

It was nice to meet everyone and hang out. There was also an interview with Hacker Public Radio, which you can find HERE.

Just a short one this time, but it was nice to meet everyone.

February 07, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
I missed FOSDEM (February 07, 2017, 16:06 UTC)

I sadly had to miss out on the FOSDEM event. The entire weekend was filled with me being apathetic, feverish and overall zombie-like. Yes, sickness can be cruel. It wasn't until today that I had the energy back to fire up my laptop.

Sorry for the crew that I promised to meet at FOSDEM. I'll make it up, somehow.

February 06, 2017
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Tesseract is one of the best open-source OCR programs available, and I recently took over ebuild maintainership for it. Current development is still quite active, and since the last stable release they have added a new OCR engine based on LSTM neural networks. This engine is available in an alpha release, and initial numbers show a much faster OCR pass, with fewer errors.

Sounds interesting? If you want to try it, this alpha release is now in the tree (along with a live ebuild). I insist on the alpha tag: this is for testing, not for production. So the ebuild is masked by default, and you will have to add to your package.unmask file:
=app-text/tesseract-4.00.00_alpha*
The ebuild also includes some additional changes, like current documentation generated with USE=doc (available in stable release too), and updated linguas.
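Putting the unmask step together as a quick sketch (assuming package.unmask is a plain file):

echo '=app-text/tesseract-4.00.00_alpha*' >> /etc/portage/package.unmask
emerge --ask app-text/tesseract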

Testing with paperwork

The initial reason I took over tesseract is that I also maintain the paperwork ebuilds, a personal document manager for handling scanned documents and PDFs (and a heavy tesseract user). It recently got a new 1.1 release, if you want to give it a try!

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2017 is starting! (February 06, 2017, 01:53 UTC)

(A previous version of this post recommended #gentoo-soc-mentors on Freenode as the preferred discussion channel for GSoC, please use #gentoo-soc instead as the former is invite-only or ask us to invite you to it)

It’s time to send us your GSoC ideas whether you can/want to mentor or not. We need as many good ideas as possible to make sure Google will select us as an organization again this year. Experience has shown us that we’re not automatically selected. You can submit them yourself on the wiki or let us do it. Don’t waste any time because some polishing typically needs to occur before the deadline (February 27th). You can discuss your ideas with us on Freenode in #gentoo-soc (preferred), or by email at soc-mentors@gentoo.org.

If you’re potentially interested in being a mentor, only want to help during the early phases of discussing and reviewing projects, or are just curious and want to see what goes on there, please let us know and we’ll add you to the mail alias. Everybody from last year was removed so don’t assume you’ll be on the alias because you were last year. Note that you do not have to be a Gentoo developer to be a mentor or help us with GSoC in any way.

Finally, if you’re a student it’s not quite time yet to ask us about projects. Please be patient, we’ll let you know.

Now go and submit that idea!

February 03, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Recently I was on a mission to make my audio experience on my main desktop more enjoyable. I had previously just used some older Bose AE2 headphones from 2010 plugged in directly to the 3.5mm audio output on the back of my desktop. The sound quality was mediocre at best, and I knew that a combination of a Digital-to-Analogue Converter (DAC) and some better headphones would certainly improve the experience. I also knew that the DAC would probably yield the most noticeable improvements, so I purchased the Big Ego USB DAC from one of my favourite audiophile-grade manufacturers, Emotiva. I have several of their monoblock amplifiers and use their amazing XMC1 for my preamp/processor in my home audio system, so I knew that the quality would be outstanding, especially for the price.

Emotiva Big Ego DAC and V-Moda Crossfade M-100 headphones

Now, the Big Ego FAQ on the Emotiva website indicates that it should work with all modern computing devices:

Q: What devices can I use the Ego DACs with?
A: The Ego DACs are basically designed to work with any modern “computer device” which can be used
with an external USB sound card, which includes:
1) All modern Apple computers
2) All modern Windows computers (Windows XP, Vista, 7, 8.0, 8.1, and Windows 10)
3) Many Linux computers (as long as they support USB Audio Class 1 or 2)
4) Some Android tablets and phones (as long as they support UAC1 or UAC2)
5) Apple iPhone 5 and iPhone 6 (with the lightning to USB camera adapter)

For many Linux users, the Big Ego probably works without any manual intervention. However, if it doesn’t, it shouldn’t be that difficult to get it working properly, and I hope that this guide helps if you are running into trouble.

Firstly, let’s get something out of the way, and that’s USB Audio Class 2 (UAC2) support within Linux. With all modern distributions (>=2.6 kernel), UAC2 is readily available. It can be validated by looking at the audio-v2.h file within the kernel source:

# grep 'From the USB Audio' /usr/src/linux/include/linux/usb/audio-v2.h
* From the USB Audio spec v2.0:

Feel free to look at the full file to see the references to the UAC2 specification.

Kernel support:

Secondly, and also speaking to the kernel, if your distribution doesn’t even show the device, you are likely lacking the one needed kernel driver. To see if your system recognises the Emotiva Big Ego, try the following command and look for similar output:

$ lsusb -v | grep 'Emotiva Big Ego'
...
iProduct 3 Emotiva Big Ego
...

The full identifier (Vendor ID and Product ID) from lsusb is 20ee:0021, even though it doesn’t have a description:

# grep -A 3 'New USB device found' /var/log/messages
kernel: usb 9-1: New USB device found, idVendor=20ee, idProduct=0021
kernel: usb 9-1: New USB device strings: Mfr=1, Product=3, SerialNumber=2
kernel: usb 9-1: Product: Emotiva Big Ego
kernel: usb 9-1: Manufacturer: Emotiva

$ lsusb | grep '20ee:0021'
Bus 009 Device 005: ID 20ee:0021

If you don’t get similar output, then you’re lacking kernel support for the Big Ego. The one driver in the kernel that you need is the “USB Audio/MIDI driver” which can be found in the make menuconfig hierarchy as:

Device Drivers --->
  <*> Sound card support --->
    <*> Advanced Linux Sound Architecture --->
      [*] USB sound devices --->
        <*> USB Audio/MIDI driver

You can also check your kernel .config for it, or if you have it as a module, load it:

$ grep -i snd_usb_audio /usr/src/linux/.config
CONFIG_SND_USB_AUDIO=y

OR

# modprobe snd-usb-audio

Emotiva Big Ego DAC and V-Moda Crossfade M-100 headphones

ALSA configurations:

Thirdly, and now that you have the appropriate kernel support, let’s move on to configuring and using the Big Ego with ALSA. You can see a list of device names by using aplay -l, and it’s best to address the device by name instead of number (because the numbering could possibly change upon reboot). This one-liner should show you precisely how it is named (note that your output may be different based on the available sound output devices on your system):

$ aplay -l | awk -F \: '/,/{print $2}' | awk '{print $1}' | uniq
Intel
NVidia
Ego

With that information, you are ready to set the Big Ego as your default sound output device by editing either .asoundrc (in your home directory, for a per-user directive) or the system-wide /etc/asound.conf (which is the one that I would recommend for most situations). I tried several ALSA configurations, but kept ending up with various oddities. For instance, I ran into a problem where I had sound in applications like Audacious, mpv, and even ALSA’s own speaker-test, but had no sound in other terminal applications like ogg123 or, more importantly, web browsers like Firefox and Chromium. The only configuration that worked fully for me was:

$ cat /etc/asound.conf
defaults.pcm.!card Ego
defaults.pcm.!device 0
defaults.ctl.!card Ego
defaults.ctl.!device 0

After changing your ALSA configuration, you need to reload it, and the method for doing so varies based on your distribution and init system. For me, using Gentoo Linux with OpenRC, I just issued, (as root), /etc/init.d/alsasound restart and it reloaded. Worst case, just reboot your system to test the changes.
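To quickly verify that the default card took effect, a simple test (assuming stereo output) is:

$ speaker-test -D default -c 2 -t wav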

Now that you have it set as the default card, applications like alsamixer and such should automatically choose the Big Ego for your levels and mixing. One thing that I noticed with alsamixer is that there are two adjustable level sliders:

alsamixer with the Emotiva Big Ego USB DAC

What I am guessing is that, even though they are labelled “Emotiva Big Ego” and “Emotiva Big Ego 1”, they actually correspond to the output that you are using on the DAC. For instance, I am using the 3.5mm headphone jack on the front, and that corresponds to the “Emotiva Big Ego 1” slider, whereas if I were using the line out jack on the back of the DAC (those rhymes are fun 😛 ), I would adjust it using the slider for “Emotiva Big Ego”.

Additional configurations:

Now that we have configured ALSA to use our USB DAC as the default sound card, there are some additional things that I would like for my convenience. I prefer to not use a full desktop environment (DE), but instead favour a more minimalistic approach. I just use the Openbox window manager (WM). One of the things that I like about Openbox is the ability to set my own key bindings. In this case, I would like to be able to control the volume by using the designated keys on my keyboard, regardless of the application that is using the USB DAC. Here are my key bindings, which are added to ~/.config/openbox/rc.xml:


    <!-- Keybinding for increasing Emotiva Big Ego volume by 1 -->
    <keybind key="XF86AudioRaiseVolume">
      <action name="execute">
        <command>amixer set 'Emotiva Big Ego',1 1+</command>
      </action>
    </keybind>
    <!-- Keybinding for decreasing Emotiva Big Ego volume by 1 -->
    <keybind key="XF86AudioLowerVolume">
      <action name="execute">
        <command>amixer set 'Emotiva Big Ego',1 1-</command>
      </action>
    </keybind>
    <!-- Keybinding for muting/unmuting volume -->
    <keybind key="XF86AudioMute">
      <action name="execute">
        <command>amixer set 'Emotiva Big Ego',1 toggle</command>
      </action>
    </keybind>

Take note that the subdevice is ‘1’ (bold in the code above). That is because, like I showed in the alsamixer output, I’m using the headphone jack (so it corresponds to the secondary volume slider).

Further troubleshooting:

I hope that these instructions help you get your USB DAC working under Linux, but if they don’t, feel free to leave me a comment here. We’ll see what we can do to get it working for you. One last note is that I experienced some rather severe popping and other undesirable sounds when I had the Big Ego plugged into one of the USB2 ports on the back of my tower. Swapping it to its own non-shared USB3 port fixed that problem. So, if you have it plugged into a USB hub or something similar, try isolating it. Remember, it is a sensitive piece of audio equipment, and special considerations may need to be made. 🙂

Cheers,
Zach

February 02, 2017

FOSDEM 2017 logo

As FOSDEM 2017 approaches we are happy to announce there are a total of five Gentoo developers scheduled to give talks!

Developers and their talks include:

Only a few hours remain until the event kicks off. See you at FOSDEM!

February 01, 2017

Description:
pax-utils is a set of tools that check files for security relevant properties.

A fuzz on scanelf exposed an out-of-bounds read. It was reported to vapier, who fixed the issue immediately.
Unfortunately I can’t get a symbolized ASan stack trace, so I will show only the useful parts of both the asan and gdb output.

# scanelf -s '*' -axetrnibSDIYZB $FILE
==32758==ERROR: AddressSanitizer: unknown-crash on address 0x7f8f9fa252dc at pc 0x00000053c6a0 bp 0x7ffe93a19910 sp 0x7ffe93a19908 
READ of size 4 at 0x7f8f9fa252dc thread T0                                                                                                                                                                                                                                      
   #0 0x53c69f  (/usr/bin/scanelf+0x53c69f) 
   #1 0x51d649  (/usr/bin/scanelf+0x51d649) 
   #2 0x51b97e  (/usr/bin/scanelf+0x51b97e) 
   #3 0x51ad43  (/usr/bin/scanelf+0x51ad43) 
   #4 0x51922e  (/usr/bin/scanelf+0x51922e) 
   #5 0x7f8f9e7fd61f  (/lib64/libc.so.6+0x2061f) 
   #6 0x41a008  (/usr/bin/scanelf+0x41a008) 

(gdb) bt
#8  0x000000000053c6a0 in scanelf_file_get_symtabs (elf=, sym=0x7fffffffcc00, str=0x7fffffffcc20) at scanelf.c:357
#9  0x000000000051d64a in scanelf_file_sym (elf=0x60700000de60, found_sym=) at scanelf.c:1327
#10 scanelf_elfobj (elf=) at scanelf.c:1547
#11 0x000000000051b97f in scanelf_elf (filename=0x7fffffffe50e "1.crashes", fd=, len=) at scanelf.c:1612
#12 scanelf_fileat (dir_fd=, filename=, st_cache=) at scanelf.c:1679
#13 0x000000000051ad44 in scanelf_dirat (dir_fd=, path=) at scanelf.c:1713
#14 0x000000000051922f in scanelf_dir (path=) at scanelf.c:1763
#15 parseargs (argc=5, argv=0x7fffffffe258) at scanelf.c:2273
#16 main (argc=5, argv=) at scanelf.c:2361

Affected version:
1.2

Fixed version:
1.2.1

Commit fix:
https://github.com/gentoo/pax-utils/commit/95e5489534ac9e9324c5096286899b688e19ae00

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00131-pax-utils-scanelf-oobread-scanelf_file_get_symtabs

Timeline:
2017-01-23: bug discovered and reported to upstream
2017-01-24: upstream released a patch and 1.2.1
2017-02-01: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
I’d suggest going to 1.2.2 because of functionality bugs in 1.2.1.

Permalink:
https://blogs.gentoo.org/ago/2017/02/01/pax-utils-scanelf-out-of-bounds-read-in-scanelf_file_get_symtabs-scanelf-c

Description:
pax-utils is a set of tools that check files for security relevant properties.

A fuzz on scanelf exposed an out-of-bounds read. It was reported to vapier, who fixed the issue immediately.
Unfortunately I can’t get a symbolized ASan stack trace, so I will show only the useful parts of both the asan and gdb output.

# scanelf -s '*' -axetrnibSDIYZB $FILE
==1853==ERROR: AddressSanitizer: unknown-crash on address 0x7f4099d25008 at pc 0x00000053586e bp 0x7fff335cb8b0 sp 0x7fff335cb8a8
READ of size 8 at 0x7f4099d25008 thread T0
    #0 0x53586d  (/usr/bin/scanelf+0x53586d)
    #1 0x51f526  (/usr/bin/scanelf+0x51f526)
    #2 0x51b97e  (/usr/bin/scanelf+0x51b97e)
    #3 0x51ad43  (/usr/bin/scanelf+0x51ad43)
    #4 0x51922e  (/usr/bin/scanelf+0x51922e)
    #5 0x7f4098afd61f  (/lib64/libc.so.6+0x2061f)
    #6 0x41a008  (/usr/bin/scanelf+0x41a008) 

(gdb) bt
#8  0x000000000053586e in scanelf_file_textrel (elf=, found_textrel=) at scanelf.c:560
#9  0x000000000051f527 in scanelf_elfobj (elf=) at scanelf.c:1536
#10 0x000000000051b97f in scanelf_elf (filename=0x7fffffffe50e "/tmp/afl/scanelf/report/crashes/2.crashes", fd=, len=) at scanelf.c:1612
#11 scanelf_fileat (dir_fd=, filename=, st_cache=) at scanelf.c:1679
#12 0x000000000051ad44 in scanelf_dirat (dir_fd=, path=) at scanelf.c:1713
#13 0x000000000051922f in scanelf_dir (path=) at scanelf.c:1763
#14 parseargs (argc=5, argv=0x7fffffffe258) at scanelf.c:2273
#15 main (argc=5, argv=) at scanelf.c:2361

Affected version:
1.2

Fixed version:
1.2.1

Commit fix:
https://github.com/gentoo/pax-utils/commit/95e5489534ac9e9324c5096286899b688e19ae00

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
N/A

Reproducer:
https://github.com/asarubbo/poc/blob/master/00132-pax-utils-scanelf-oobread-scanelf_file_textrel

Timeline:
2017-01-23: bug discovered and reported to upstream
2017-01-24: upstream released a patch and 1.2.1
2017-02-01: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
I’d suggest going to 1.2.2 because of functionality bugs in 1.2.1.

Permalink:
https://blogs.gentoo.org/ago/2017/02/01/pax-utils-scanelf-out-of-bounds-read-in-scanelf_file_textrel-scanelf-c

January 27, 2017
Yury German a.k.a. blueknight (homepage, bugs)
WordPress Blogs Maintenance (January 27, 2017, 22:12 UTC)

Changes for blogs.gentoo.org

With the update of WordPress to 4.7.1, a few plug-ins have introduced instability to the platform.

We have disabled the WordPress Mobile Site Plugin and the Picasa Album update.

  • WordPress Mobile Site was causing all sorts of issues, and an update just came out today. We will push the update and enable it for some testing. If you were one of the users using it, please let us know so that you can test it when we update it.
  • The Picasa Album is not working and is disabled pending updates.
If you have any questions please feel free to contact me on IRC: @blueknight

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.4 (January 27, 2017, 09:29 UTC)

Another community-driven and incredible update of py3status has been released!

Our contributor star for this release is without doubt @lasers, who is showing some amazing energy with challenging ideas and some impressive module QA clean-ups!

Thanks a lot as usual to @tobes, who is basically leading the development of py3status nowadays, with me being in merge-button mode most of the time.

By looking at the issues and pull requests I can already say that the 3.5 release will be grand !

Highlights

  • support of python 3.6 thanks to @tobes
  • a major effort in module standardization; almost all modules now support the format parameter, thanks to @lasers
  • modules documentation has been cleaned up
  • new do_not_disturb module to toggle notifications, by @maximbaz
  • new rss_aggregator module to display your unread feed items, by @raspbeguy
  • whatsmyip module: added geolocation support using ip-api.com, by @vicyap with original code from @neutronst4r

See the full changelog here.

Thank you guys !

January 16, 2017

I don’t know if a news item will be sent. A possible data corruption issue was found in zlib 1.2.10.
Please update your zlib to 1.2.11 and make sure you restart all services that are linked against zlib (a reboot may be the easiest way).
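A quick sketch of the update:

emerge --ask --oneshot '>=sys-libs/zlib-1.2.11'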

Gentoo bug:
https://bugs.gentoo.org/show_bug.cgi?id=605888

Upstream bug:
https://github.com/madler/zlib/issues/198

Upstream commit:
https://github.com/madler/zlib/commit/4c7c90768308587884fab6159d93a4695a5ab1f0

January 15, 2017
Gentoo at FOSDEM 2017 (January 15, 2017, 00:00 UTC)

FOSDEM 2017 logo

On February 4th and 5th, Gentoo will be attending FOSDEM 2017 in Brussels, Belgium.

This year one of our own, Jason A Donenfeld (zx2c4), will be speaking on WireGuard: a next generation secure kernel network tunnel.

Similar to last year, the event will be hosted at Université libre de Bruxelles. Gentoo developers will be taking rotating shifts at the Gentoo stand with gadgets, swag, and a new 2017 LiveDVD. You can visit this wiki article to see which developer will be manning the stand when you drop by.

We are looking forward to seeing those in the community who have been hard at work on their quizzes!

January 03, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Zenfone gentoo prefix part2 (January 03, 2017, 15:04 UTC)

In the end I bootstrapped directly into a stage3 and used chroot.
Doing this is pretty straightforward; probably just the busybox commands are enough.
Make a gentoo directory.

mkdir gentoo

Download stage3 i686 files.

wget http://distfiles.gentoo.org/releases/x86/autobuilds/current-stage3-i686/stage3-i686-20161227.tar.bz2

Untar the file.

tar -vjxf *.tar.bz2 -C gentoo

Get in the chroot.
Maybe ln -s /proc/self/fd /dev is needed.

export GENTOO_ROOT="/data/data/com.termux/files/home/gentoo/"
cp -L /etc/resolv.conf $GENTOO_ROOT/etc/
mount -t proc proc $GENTOO_ROOT/proc
mount --rbind /sys $GENTOO_ROOT/sys
mount --rbind /dev $GENTOO_ROOT/dev

unset LD_PRELOAD
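The actual chroot entry then looks something like this (a sketch following the paths above):

chroot $GENTOO_ROOT /bin/bash
source /etc/profile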

In make.conf I’m using:

FEATURES="-userfetch -sandbox -usersandbox"

Next I will be trying to install Plasma Mobile on Gentoo:

https://github.com/plasma-phone-packaging

January 02, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Zenfone gentoo prefix part1 (January 02, 2017, 20:24 UTC)

I’m trying to install Gentoo Prefix on a Zenfone ZE551ML.

In summary, it is an Intel(R) Atom(TM) Z3580 quad-core 2.3 GHz CPU with 4 GB of RAM, running Android.

Unfortunately my Zenfone has problems with the front and back cameras.

To install the prefix, I first tried just following the manual bootstrap way, without much success.

In the end I found I could install Termux from Google Play, link some binaries from Termux, and then bootstrap following the Gentoo manual.

The Termux shell script, to run it from adb, is here:

/data/data/com.termux/files/home/bin/termux-shell.sh

#!/system/bin/sh
export PREFIX='/data/data/com.termux/files/usr'
export HOME='/data/data/com.termux/files/home'
export LD_LIBRARY_PATH='/data/data/com.termux/files/usr/lib'
export PATH="/data/data/com.termux/files/usr/bin:/data/data/com.termux/files/usr/bin/applets:$PATH"
export LANG='en_US.UTF-8'
export SHELL='/data/data/com.termux/files/usr/bin/bash'
cd "$HOME"
exec "$SHELL" -l

I could install part of stage 1, but I had problems installing Python.

./Modules/pwdmodule.c:81:25: error: no member named 'pw_gecos' in 'struct passwd'

http://lists.mindrot.org/pipermail/openssh-bugs/2013-April/012014.html

So I just skipped bootstrapping Python and used the Python from Termux.

Then bash started complaining about process substitution.

bash-4.4# emerge -av ncurses
Failed to validate a sane '/dev'.
bash process substitution doesn't work; this may be an indication of a broken '/dev/fd'.

I tried to resolve it with ln -s /proc/self/fd /dev, with no result.
bash version:
GNU bash, version 4.4.5(1)-release (i686-pc-linux-android)
I anyway got emerge --info working:

bash-4.4# emerge --info
!!! No gcc found. You probably need to 'source /etc/profile'
!!! to update the environment of this terminal and possibly
!!! other terminals also.
Portage 2.2.28-prefix (python 3.5.2-final-0, prefix/linux/x86, [unavailable], unavailable, 3.10.20-g9699532 i686)
=================================================================
System uname: Linux-3.10.20-g9699532-i686-with-libc
KiB Mem: 3998068 total, 1209472 free
KiB Swap: 0 total, 0 free
Timestamp of repository gentoo_prefix: Mon, 02 Jan 2017 18:58:52 +0000
sh bash
ld GNU ld (GNU Binutils) 2.27
Repositories:


gentoo_prefix
location: /data/data/com.termux/files/home/Gentoo/usr/portage
sync-type: rsync
sync-uri: rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
priority: -1000


ACCEPT_KEYWORDS="~x86-linux"
ACCEPT_LICENSE="* -@EULA"
CBUILD="i686-pc-linux-gnu"
CFLAGS="-O2 -march=i686 -pipe -m32 -O2 -pipe"
CHOST="i686-pc-linux-gnu"
CONFIG_PROTECT="/etc"
CONFIG_PROTECT_MASK="/etc/env.d /etc/gconf"
CXXFLAGS="-O2 -march=i686 -pipe -m32 -O2 -pipe"
DISTDIR="/data/data/com.termux/files/home/Gentoo/tmp/usr/portage/distfiles"
FCFLAGS="-O2 -march=i686 -pipe"
FEATURES="assume-digests binpkg-logs case-insensitive-fs collision-protect config-protect-if-modified distlocks ebuild-locks fixlafiles force-prefix merge-sync news parallel-fetch preserve-libs protect-owned sfperms strict unknown-features-warn unmerge-logs unmerge-orphans unprivileged"
FFLAGS="-O2 -march=i686 -pipe"
GENTOO_MIRRORS="http://distfiles.gentoo.org"
LANG="en_US.UTF-8"
LDFLAGS="-Wl,-O1"
MAKEOPTS=""
PKGDIR="/data/data/com.termux/files/home/Gentoo/tmp/usr/portage/packages"
PORTAGE_CONFIGROOT="/data/data/com.termux/files/home/Gentoo/tmp/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --omit-dir-times --compress --force --whole-file --delete --stats --human-readable --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages --exclude=/.git"
PORTAGE_TMPDIR="/data/data/com.termux/files/home/Gentoo/tmp/var/tmp"
USE="berkdb bzip2 cli cracklib crypt cxx dri fortran gdbm iconv ipv6 modules ncurses nls nptl openmp pcre prefix prefix-guest readline seccomp session ssl tcpd unicode x86 zlib" ABI_X86="32" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1 emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" APACHE2_MODULES="authn_core authz_core socache_shmcb unixd actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" CALLIGRA_FEATURES="kexi words flow plan sheets stage tables krita karbon braindump author" CAMERAS="ptp2" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ublox ubx" INPUT_DEVICES="keyboard mouse evdev" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LIBREOFFICE_EXTENSIONS="presenter-console presenter-minimizer" OFFICE_IMPLEMENTATION="libreoffice" PHP_TARGETS="php5-6" PYTHON_SINGLE_TARGET="python2_7" PYTHON_TARGETS="python2_7 python3_4 python3_5" RUBY_TARGETS="ruby21" USERLAND="GNU" VIDEO_CARDS="amdgpu fbdev intel nouveau radeon radeonsi vesa dummy v4l" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset: CC, CPPFLAGS, CTARGET, CXX, EMERGE_DEFAULT_OPTS, INSTALL_MASK, LC_ALL, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS, USE_PYTHON

but I’m still searching for a solution to the bash process substitution problem.

uname -a

Linux localhost 3.10.20-g9699532 #1 SMP PREEMPT Mon Dec 19 03:31:31 PST 2016 i686 Android

Also, trying to compile bash by hand does not look like a good option:

/usr/homes/chet/src/bash/src/parse.y:135:12: error: conflicting types for ‘__errno’

December 31, 2016
Domen Kožar a.k.a. domen (homepage, bugs)
Reflecting on 2016 (December 31, 2016, 18:00 UTC)

Haven't blogged in 2016, but a lot has happened.

A quick summary of highlighted events:

2016 was a functional programming year, as I had planned by the end of 2015.

I greatly miss the Python community and, in that spirit, I attended EuroPython 2016 and helped organize DragonSprint in Ljubljana. I don’t think there’s a place for me in OOP anymore, but I’ll surely attend community events as nostalgia kicks in.

2017 seems extremely exciting; plans will unfold as I go, starting with some exciting news in January for the Nix community.

December 22, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux System Administration, 2nd Edition (December 22, 2016, 18:26 UTC)

While still working on a few other projects, one of the time consumers of the past half year (haven't you noticed? my blog was quite silent) has come to an end: the SELinux System Administration - Second Edition book is now available. With almost double the amount of pages and a serious update of the content, the book can now be bought either through Packt Publishing itself, or the various online bookstores such as Amazon.

With the holidays now approaching, I hope to be able to execute a few tasks within the Gentoo community (and of the Gentoo Foundation) and get back on track. Luckily, my absence was not jeopardizing the state of SELinux in Gentoo thanks to the efforts of Jason Zaman.

December 08, 2016
Mike Pagano a.k.a. mpagano (homepage, bugs)

Just a quick note that I am walking the patch for CVE-2016-8655 down the gentoo-sources kernels.

Yesterday, I released the following kernels with the patch backported:

sys-kernel/gentoo-sources-4.8.12-r1
sys-kernel/gentoo-sources-4.4.36
sys-kernel/gentoo-sources-4.1.36-r1

Updated: 12/08
Also patched:
sys-kernel/gentoo-sources-3.18.45-r1
sys-kernel/gentoo-sources-3.12.68-r1

Updated 12/09
sys-kernel/gentoo-sources-3.10.104-r1
sys-kernel/gentoo-sources-3.4.113-r1

Updated 12/11
sys-kernel/gentoo-sources-3.14.79-r1

If Alice does not get to the others before me, I will continue to walk down the versions until all of them are patched.

Done.

November 30, 2016
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
[2016_11] Gentoo report summary (November 30, 2016, 16:03 UTC)

November 21, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.3 (November 21, 2016, 12:40 UTC)

OK, I slacked by not posting for v3.1 and v3.2, and I should have, since those previous versions were awesome and feature rich.

But v3.3 is another major milestone, made possible by tremendous contributions from @tobes as usual, and also greatly thanks to the hard work of @guiniol and @pferate, whom I’d like to mention and thank again!

Also, I’d like to mention that @tobes has become the first collaborator of the py3status project !

Instead of doing a changelog review, I’ll highlight some of the key features that got introduced and extended during those versions.

The py3 helper

Writing powerful py3status modules has never been so easy, thanks to the py3 helper!

This magical object is added automatically to modules and provides a lot of useful methods to help normalize and enhance module capabilities. This is a non-exhaustive list of such methods:

  • format_units: to pretty format units (KB, MB etc)
  • notify_user: send a notification to the user
  • time_in: to handle module cache expiration easily
  • safe_format: use the extended formatter to handle the module’s output in a powerful way (see below)
  • check_commands: check if the listed commands are available on the system
  • command_run: execute the given command
  • command_output: execute the command and get its output
  • play_sound: sound notifications !

Powerful control over the modules’ output

Using the self.py3.safe_format helper will unleash a feature-rich formatter that one can use to conditionally select the output of a module based on its content.

  • Square brackets [] can be used. The content of them will be removed from the output if there is no valid placeholder contained within. They can also be nested.
  • A pipe (vertical bar) | can be used to divide sections; only the first valid section will be shown in the output.
  • A backslash \ can be used to escape a character eg \[ will show [ in the output.
  • \? is special and is used to provide extra commands to the format string, example \?color=#FF00FF. Multiple commands can be given using an ampersand & as a separator, example \?color=#FF00FF&show.
  • {<placeholder>} will be converted, or removed if it is None or empty. Formatting can also be applied to the placeholder eg {number:03.2f}.

Example format_string:

This will show artist - title if artist is present, title if title is present but no artist, and file if file is present but neither artist nor title.

"[[{artist} - ]{title}]|{file}"

More code and documentation tests

A lot of effort has been put into py3status’ automated CI and feature testing, allowing more confidence in the advanced features we develop while keeping a higher standard of code quality.

It goes so far that even modules’ docstrings are now tested for bad formatting 🙂

Colouring and thresholds

A special effort has been put into normalizing the modules’ output colouring, with the added refinement of normalized thresholds to give users more power over their output.

New modules, on and on !

  • new clock module to display multiple times and dates in a flexible way, by @tobes
  • new coin_balance module to display balances of diverse crypto-currencies, by Felix Morgner
  • new diskdata module to show both usage data and IO data from disks, by @guiniol
  • new exchange_rate module to check for your favorite currency rates, by @tobes
  • new file_status module to check the presence of a file, by @ritze
  • new frame module to group and display multiple modules inline, by @tobes
  • new gpmdp module for Google Play Music Desktop Player by @Spirotot
  • new kdeconnector module to display information about Android devices, by @ritze
  • new mpris module to control MPRIS enabled music players, by @ritze
  • new net_iplist module to display interfaces and their IPv4 and IPv6 IP addresses, by @guiniol
  • new process_status module to check the presence of a process, by @ritze
  • new rainbow module to enlight your day, by @tobes
  • new tcp_status module to check for a given TCP port on a host, by @ritze

Changelog

The changelog is very big and the next 3.4 milestone is very promising with amazing new features giving you even more power over your i3bar, stay tuned !

Thank you contributors

Still a lot of first-time contributors, which I take great pride in, as I see it as a sign that py3status is an accessible project.

  • @btall
  • @chezstov
  • @coxley
  • Felix Morgner
  • Gabriel Féron
  • @guiniol
  • @inclementweather
  • @jakubjedelsky
  • Jan Mrázek
  • @m45t3r
  • Maxim Baz
  • @pferate
  • @ritze
  • @rixx
  • @Spirotot
  • @Stautob
  • @tjaartvdwalt
  • Yuli Khodorkovskiy
  • @ZeiP