Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.
April 17, 2019
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Meet the py3status logo (April 17, 2019, 08:11 UTC)

I’m proud and very pleased to introduce the py3status logo that Tobaloidee has created for our beloved project!

We’ve been discussing and dreaming about this for a while in the dedicated logo issue. So when Tobaloidee came up with his awesome concept and I first saw the logo, I was amazed at how perfectly he gave life to the poor brief I had expressed.


Thanks again Tobaloidee and of course all of the others who participated (with a special mention to @cyrinux’s girlfriend)!


A few other variants exist; I’m putting some of them here for quick download & use.

April 16, 2019

Nitrokey logo

The Gentoo Foundation has partnered with Nitrokey to equip all Gentoo developers with free Nitrokey Pro 2 devices. Gentoo developers will use the Nitrokey devices to store cryptographic keys for signing of git commits and software packages, GnuPG keys, and SSH accounts.

Thanks to the Gentoo Foundation and Nitrokey’s discount, each Gentoo developer is eligible to receive one free Nitrokey Pro 2. To receive their Nitrokey, developers will need to register with their @gentoo.org email address at the dedicated order form.

A Nitrokey Pro 2 Guide is available on the Gentoo Wiki with FAQ & instructions for integrating Nitrokeys into developer workflow.


Nitrokey Pro 2 has strong, reliable hardware encryption, thanks to open source. It can help you to: sign Git commits; encrypt emails and files; secure server access; and protect accounts against identity theft via two-factor authentication (one-time passwords).


Gentoo Linux is a free, source-based, rolling release meta distribution that features a high degree of flexibility and high performance. It empowers you to make your computer work for you, and offers a variety of choices at all levels of system configuration.

As a community, Gentoo consists of approximately two hundred developers and over fifty thousand users globally.

The Gentoo Foundation supports the development of Gentoo, protects Gentoo’s intellectual property, and oversees adherence to Gentoo’s Social Contract.


Nitrokey is a German IT security startup committed to open source hardware and software. Nitrokey develops and produces USB keys for data encryption, email encryption (PGP/GPG, S/MIME), and secure account logins (SSH, two-factor authentication via OTP and FIDO).

Nitrokey is proud to support the Gentoo Foundation in further securing the Gentoo infrastructure and contributing to a secure open source Linux ecosystem.

April 09, 2019
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Using rav1e – from your code (April 09, 2019, 13:52 UTC)

AV1, Rav1e, Crav1e, an intro

(this article is also available on my dev.to profile, I might use it more often since wordpress is pretty horrible at managing markdown.)

AV1 is a modern video codec brought to you by an alliance of many players, big and small, in the multimedia field.
I’m part of the VideoLan organization and I spent quite a bit of time on this codec lately.

rav1e: The safest and fastest AV1 encoder, built by many volunteers and Mozilla/Xiph developers.
It is written in Rust and strives to provide good speed and quality while staying maintainable.

crav1e: A companion crate, written by yours truly, that provides a C-API, so the encoder can be used by C libraries and programs.

This article will just give a quick overview of the API available right now; it is mainly meant to help people start using it and hopefully report issues and problems.

Rav1e API

The current API is built around the following 4 structs and 1 enum:

  • struct Frame: The raw pixel data
  • struct Packet: The encoded bitstream
  • struct Config: The encoder configuration
  • struct Context: The encoder state

  • enum EncoderStatus: Fatal and non-fatal conditions returned by the Context methods.


The Config struct is currently constructed directly:

    struct Config {
        enc: EncoderConfig,
        threads: usize,
    }

The EncoderConfig stores all the settings that have an impact on the actual bitstream, while settings such as threads are kept outside.

    let mut enc = EncoderConfig::with_speed_preset(speed);
    enc.width = w;
    enc.height = h;
    enc.bit_depth = 8;
    let cfg = Config { enc, threads: 0 };

NOTE: Some of the fields above may be shuffled around until the API is marked as stable.


    let cfg = Config { enc, threads: 0 };
    let ctx: Context<u8> = cfg.new_context();

It produces a new encoding context. When bit_depth is 8, it is possible to use an optimized u8 code path; otherwise u16 must be used.


The Context is produced by Config::new_context and its implementation details are hidden.


The Context methods can be grouped into essential, optional and convenience.

    // Essential API
    pub fn send_frame<F>(&mut self, frame: F) -> Result<(), EncoderStatus>
      where F: Into<Option<Arc<Frame<T>>>>, T: Pixel;
    pub fn receive_packet(&mut self) -> Result<Packet<T>, EncoderStatus>;

The encoder works by processing each Frame fed through send_frame and producing each Packet that can be retrieved by receive_packet.

    // Optional API
    pub fn container_sequence_header(&mut self) -> Vec<u8>;
    pub fn get_first_pass_data(&self) -> &FirstPassData;

Depending on the container format, the AV1 Sequence Header could be stored in the extradata. container_sequence_header produces the data pre-formatted to be simply stored in mkv or mp4.

rav1e supports multi-pass encoding and the encoding data from the first pass can be retrieved by calling get_first_pass_data.

    // Convenience shortcuts
    pub fn new_frame(&self) -> Arc<Frame<T>>;
    pub fn set_limit(&mut self, limit: u64);
    pub fn flush(&mut self);
  • new_frame() produces a frame according to the dimension and pixel format information in the Context.
  • flush() is functionally equivalent to calling send_frame(None).
  • set_limit() is functionally equivalent to calling flush() once limit frames have been sent to the encoder.


The workflow is the following:

  1. Setup:
    • Prepare a Config
    • Call new_context from the Config to produce a Context
  2. Encode loop:
    • Pull each Packet using receive_packet.
    • If receive_packet returns EncoderStatus::NeedMoreData
      • Feed each Frame to the Context using send_frame
  3. Complete the encoding
    • Issue a flush() to encode each pending Frame in a final Packet.
    • Call receive_packet until EncoderStatus::LimitReached is returned.
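The workflow above can be sketched in Rust. The following is a self-contained mock, not the real rav1e API: the Context, Packet and EncoderStatus below are simplified stand-ins of my own that only model the send/receive control flow described in the steps.

```rust
// Simplified stand-ins for the rav1e types; they only model the control flow.
#[derive(Debug, PartialEq)]
enum EncoderStatus {
    NeedMoreData,
    LimitReached,
}

struct Packet {
    number: u64,
}

#[derive(Default)]
struct Context {
    queued: Vec<u64>,
    flushing: bool,
    emitted: u64,
}

impl Context {
    // send_frame(None) is the flush signal, as in the real API.
    fn send_frame(&mut self, frame: Option<u64>) {
        match frame {
            Some(f) => self.queued.push(f),
            None => self.flushing = true,
        }
    }

    fn receive_packet(&mut self) -> Result<Packet, EncoderStatus> {
        if !self.queued.is_empty() {
            self.queued.remove(0);
            let p = Packet { number: self.emitted };
            self.emitted += 1;
            Ok(p)
        } else if self.flushing {
            Err(EncoderStatus::LimitReached)
        } else {
            Err(EncoderStatus::NeedMoreData)
        }
    }
}

// Drive the encode loop: pull packets, feed frames on NeedMoreData,
// flush when the input is exhausted, stop on LimitReached.
fn encode_all(frames: Vec<u64>) -> Vec<u64> {
    let mut ctx = Context::default();
    let mut input = frames.into_iter();
    let mut packets = Vec::new();
    loop {
        match ctx.receive_packet() {
            Ok(p) => packets.push(p.number),
            Err(EncoderStatus::NeedMoreData) => match input.next() {
                Some(f) => ctx.send_frame(Some(f)),
                None => ctx.send_frame(None), // equivalent to flush()
            },
            Err(EncoderStatus::LimitReached) => break,
        }
    }
    packets
}

fn main() {
    println!("{:?}", encode_all(vec![10, 20, 30])); // [0, 1, 2]
}
```

The pull-driven shape (ask for a packet first, only feed frames when asked) is what lets the real encoder reorder and buffer frames internally without the caller tracking its state.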

Crav1e API

The crav1e API provides the same structures and features, besides a few key differences:

  • The Frame, Config, and Context structs are opaque.
typedef struct RaConfig RaConfig;
typedef struct RaContext RaContext;
typedef struct RaFrame RaFrame;
  • The Packet struct exposed is much simpler than the rav1e original.
typedef struct {
    const uint8_t *data;
    size_t len;
    uint64_t number;
    RaFrameType frame_type;
} RaPacket;
  • The EncoderStatus includes a Success condition.
typedef enum {
} RaEncoderStatus;


Since the configuration is opaque there are a number of functions to assemble it:

  • rav1e_config_default allocates a default configuration.
  • rav1e_config_parse and rav1e_config_parse_int set a specific value for a specific field selected by a key string.
  • rav1e_config_set_${field} are specialized setters for complex information such as the color description.
RaConfig *rav1e_config_default(void);

/**
 * Set a configuration parameter using its key and value as string.
 * Available keys and values
 * - "quantizer": 0-255, default 100
 * - "speed": 0-10, default 3
 * - "tune": "psnr"-"psychovisual", default "psnr"
 * Return a negative value on error or 0.
 */
int rav1e_config_parse(RaConfig *cfg, const char *key, const char *value);

/**
 * Set a configuration parameter using its key and value as integer.
 * Available keys and values are the same as rav1e_config_parse()
 * Return a negative value on error or 0.
 */
int rav1e_config_parse_int(RaConfig *cfg, const char *key, int value);

/**
 * Set color properties of the stream.
 * Supported values are defined by the enum types
 * RaMatrixCoefficients, RaColorPrimaries, and RaTransferCharacteristics
 * respectively.
 * Return a negative value on error or 0.
 */
int rav1e_config_set_color_description(RaConfig *cfg,
                                       RaMatrixCoefficients matrix,
                                       RaColorPrimaries primaries,
                                       RaTransferCharacteristics transfer);

/**
 * Set the content light level information for HDR10 streams.
 * Return a negative value on error or 0.
 */
int rav1e_config_set_content_light(RaConfig *cfg,
                                   uint16_t max_content_light_level,
                                   uint16_t max_frame_average_light_level);

/**
 * Set the mastering display information for HDR10 streams.
 * primaries and white_point arguments are RaPoint, containing 0.16 fixed point values.
 * max_luminance is a 24.8 fixed point value.
 * min_luminance is an 18.14 fixed point value.
 * Returns a negative value on error or 0.
 */
int rav1e_config_set_mastering_display(RaConfig *cfg,
                                       RaPoint primaries[3],
                                       RaPoint white_point,
                                       uint32_t max_luminance,
                                       uint32_t min_luminance);
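The fixed-point luminance arguments are easy to get wrong, so here is a small Rust sketch of the conversion from floating-point nits. The helper names and the use of simple rounding are my own assumptions, not part of the crav1e API.

```rust
// max_luminance uses a 24.8 fixed point format (8 fractional bits),
// min_luminance an 18.14 fixed point format (14 fractional bits).
fn to_fixed_24_8(nits: f64) -> u32 {
    (nits * 256.0).round() as u32 // 2^8 = 256
}

fn to_fixed_18_14(nits: f64) -> u32 {
    (nits * 16384.0).round() as u32 // 2^14 = 16384
}

fn main() {
    // Typical HDR10 mastering display luminances: 1000 nits max, 0.005 nits min.
    println!("max_luminance = {}", to_fixed_24_8(1000.0));
    println!("min_luminance = {}", to_fixed_18_14(0.005));
}
```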

void rav1e_config_unref(RaConfig *cfg);

The bare minimum setup code is the following:

    int ret = -1;
    RaConfig *rac = rav1e_config_default();
    if (!rac) {
        printf("Unable to initialize\n");
        goto clean;
    }

    ret = rav1e_config_parse_int(rac, "width", 64);
    if (ret < 0) {
        printf("Unable to configure width\n");
        goto clean;
    }

    ret = rav1e_config_parse_int(rac, "height", 96);
    if (ret < 0) {
        printf("Unable to configure height\n");
        goto clean;
    }

    ret = rav1e_config_parse_int(rac, "speed", 9);
    if (ret < 0) {
        printf("Unable to configure speed\n");
        goto clean;
    }


As per the rav1e API, the context structure is produced from a configuration and the same send-receive model is used.
The convenience methods aren’t exposed and the frame allocation function is actually essential.

// Essential API
RaContext *rav1e_context_new(const RaConfig *cfg);
void rav1e_context_unref(RaContext *ctx);

RaEncoderStatus rav1e_send_frame(RaContext *ctx, const RaFrame *frame);
RaEncoderStatus rav1e_receive_packet(RaContext *ctx, RaPacket **pkt);
// Optional API
uint8_t *rav1e_container_sequence_header(RaContext *ctx, size_t *buf_size);
void rav1e_container_sequence_header_unref(uint8_t *sequence);


Since the frame structure is opaque in C, we have the following functions to create, fill and dispose of the frames.

RaFrame *rav1e_frame_new(const RaContext *ctx);
void rav1e_frame_unref(RaFrame *frame);

/**
 * Fill a frame plane
 * Currently the frame contains 3 planes, the first is luminance followed by
 * chrominance.
 * The data is copied and this function has to be called for each plane.
 * frame: A frame provided by rav1e_frame_new()
 * plane: The index of the plane starting from 0
 * data: The data to be copied
 * data_len: Length of the buffer
 * stride: Plane line in bytes, including padding
 * bytewidth: Number of bytes per component, either 1 or 2
 */
void rav1e_frame_fill_plane(RaFrame *frame,
                            int plane,
                            const uint8_t *data,
                            size_t data_len,
                            ptrdiff_t stride,
                            int bytewidth);
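As a sanity check for the data_len and stride arguments, here is a short Rust sketch computing the plane sizes of an 8-bit 4:2:0 frame. The helper is my own and assumes no row padding, i.e. stride equals the row width in bytes; real strides may be larger.

```rust
// (stride, data_len) in bytes for the luma plane and the two chroma planes
// of an 8-bit (bytewidth = 1) 4:2:0 frame, assuming stride == width.
fn plane_sizes_420(width: usize, height: usize) -> [(usize, usize); 3] {
    let luma = (width, width * height);
    // In 4:2:0, chroma planes are subsampled by 2 horizontally and vertically.
    let chroma = (width / 2, (width / 2) * (height / 2));
    [luma, chroma, chroma]
}

fn main() {
    // 64x96, matching the earlier configuration example.
    for (plane, (stride, data_len)) in plane_sizes_420(64, 96).iter().enumerate() {
        println!("plane {}: stride = {}, data_len = {}", plane, stride, data_len);
    }
}
```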


The encoder status enum is returned by rav1e_send_frame and rav1e_receive_packet, and it is already possible to query the context for its status at any time.

RaEncoderStatus rav1e_last_status(const RaContext *ctx);

To simulate the Rust Debug functionality, a to_str function is provided.

char *rav1e_status_to_str(RaEncoderStatus status);


The C API workflow is similar to the Rust one, albeit a little more verbose due to the error checking.

    RaContext *rax = rav1e_context_new(rac);
    if (!rax) {
        printf("Unable to allocate a new context\n");
        goto clean;
    }

    RaFrame *f = rav1e_frame_new(rax);
    if (!f) {
        printf("Unable to allocate a new frame\n");
        goto clean;
    }

    while (keep_going(i)) {
        RaPacket *p;
        ret = rav1e_receive_packet(rax, &p);
        if (ret < 0) {
            printf("Unable to receive packet %d\n", i);
            goto clean;
        } else if (ret == RA_ENCODER_STATUS_SUCCESS) {
            printf("Packet %"PRIu64"\n", p->number);
        } else if (ret == RA_ENCODER_STATUS_NEED_MORE_DATA) {
            RaFrame *f = get_frame_by_some_mean(rax);
            ret = rav1e_send_frame(rax, f);
            if (ret < 0) {
                printf("Unable to send frame %d\n", i);
                goto clean;
            } else if (ret > 0) {
                // Cannot happen in normal conditions
                printf("Unable to append frame %d to the internal queue\n", i);
            }
        } else if (ret == RA_ENCODER_STATUS_LIMIT_REACHED) {
            printf("Limit reached\n");
            break;
        }
    }

In closing

This article was mainly a good excuse to try dev.to, write down some notes, and clarify my ideas on what had been done API-wise so far and what I should change and improve.

If you managed to read this far, your feedback is really welcome; please feel free to comment, try the software and open issues for crav1e and rav1e.

Coming next

  • Working on crav1e let me see what’s good and what is lacking in the C-interoperability story of Rust; now that this has landed I can start crafting and publishing better tools for it, and maybe I’ll talk more about it here.
  • rav1e will soon get more threading-oriented features, and some benchmarking experiments will happen.


  • Special thanks to Derek and Vittorio, who spent lots of time integrating crav1e into larger software and gave precious feedback on what was missing and broken in the initial iterations.
  • Thanks to David for the review and editorial work.
  • Also thanks to Matteo for introducing me to dev.to.

April 02, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)

So G+ is closing down today… I was the "owner" of the "Japanese Learners", "Photowalks Austria" and a few other (barely active) communities there and, after a vote, created a MeWe group for it.

I made a MeWe group for Photowalks Austria; the community was the Photowalks Austria G+ community.

Just in case someone is searching for it after g+ is gone here the links:

Old G+ community for Japanese Learners

The new Japanese Learners Group on MeWe

The Discord server for the Japanese Learners group

That just leaves to say: Goodbye Google+ it was a fun ride!

March 29, 2019
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

We recently had to face free disk space outages on some of our scylla clusters, and we learnt some very interesting things while outlining some improvements that could be suggested to the ScyllaDB guys.

100% disk space usage?

First of all I wanted to give a bit of a heads up about what happened when some of our scylla nodes reached (almost) 100% disk space usage.

Basically they:

  • stopped listening to client requests
  • complained in the logs
  • wouldn’t flush the commitlog (expected)
  • aborted their compaction work (which actually gave back a few GB of space)
  • stayed in a stuck / unable-to-stop state (unexpected, this has been reported)

After restarting your scylla server, the first and obvious thing you can try to do to get out of this situation is to run the nodetool clearsnapshot command which will remove any data snapshot that could be lying around. That’s a handy command to reclaim space usually.

Reminder: depending on your compaction strategy, it is usually not advised to allow your data to grow over 50% of disk space...

But that’s only a patch so let’s go down the rabbit hole and look at the optimization options we have.

Optimize your schemas

Schema design and the types you choose for your columns have a huge impact on disk space usage! And in our case we indeed overlooked some of the optimizations that we could have done from the start, which cost us a lot of wasted disk space. Fortunately it was easy and fast to change.

To illustrate this, I’ll take a sample of 100,000 rows of a simple and naive schema associating readings of 50 integers to a user ID:

Note: all those operations were done using Scylla 3.0.3 on Gentoo Linux.

CREATE TABLE IF NOT EXISTS test.not_optimized (
uid text,
readings list<int>,
) WITH compression = {};

Once inserted on disk, this takes about 250MB of disk space:

250M    not_optimized-00cf1500520b11e9ae38000000000004

Now depending on your use case, if those readings are not meant to be updated, for example, you could use a frozen list instead, which allows a huge storage optimization:

CREATE TABLE IF NOT EXISTS test.mid_optimized (
uid text,
readings frozen<list<int>>,
) WITH compression = {};

With this frozen list we now consume 54MB of disk space for the same data!

54M     mid_optimized-011bae60520b11e9ae38000000000004

There’s another optimization that we could do since our user IDs are UUIDs. Let’s switch to the uuid type instead of text:

CREATE TABLE IF NOT EXISTS test.optimized (
uid uuid,
readings frozen<list<int>>,
) WITH compression = {};

By switching to uuid, we now consume 50MB of disk space: that’s an 80% reduction in disk space consumption compared to the naive schema for the same data!

50M     optimized-01f74150520b11e9ae38000000000004

Enable compression

All those examples were not using compression. If your workload latencies allow it, you should probably enable compression on your sstables.

Let’s see its impact on our tables:

ALTER TABLE test.not_optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};
ALTER TABLE test.mid_optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};
ALTER TABLE test.optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};

Then we run a nodetool compact test to force a (re)compaction of all the sstables and we get:

63M     not_optimized-00cf1500520b11e9ae38000000000004
28M     mid_optimized-011bae60520b11e9ae38000000000004
24M     optimized-01f74150520b11e9ae38000000000004

Compression is really a great gain here, allowing another ~50% disk space reduction on our optimized table!

Switch to the new “mc” sstable format

Since the Scylla 3.0 release you can use the latest “mc” sstable storage format on your scylla clusters. It promises greater efficiency and usually much lower disk space consumption!

It is not enabled by default, you have to add the enable_sstables_mc_format: true parameter to your scylla.yaml for it to be taken into account.

Since it’s backward compatible, you have nothing else to do as new compactions will start being made using the “mc” storage format and the scylla server will seamlessly read from old sstables as well.

But in our case of immediate disk space outage, we switched to the new format one node at a time, dropped the data from it and ran a nodetool rebuild to reconstruct the whole node using the new sstable format.

Let’s demonstrate its impact on our test tables: we add the option to the scylla.yaml file, restart scylla-server and run nodetool compact test again:

49M     not_optimized-00cf1500520b11e9ae38000000000004
26M     mid_optimized-011bae60520b11e9ae38000000000004
22M     optimized-01f74150520b11e9ae38000000000004

That’s a pretty cool gain of disk space, even more for the not optimized version of our schema!

So if you’re in great need of disk space or it is hard for you to change your schemas, switching to the new “mc” sstable format is a simple and efficient way to free up some space without effort.

Consider using secondary indexes

While denormalization is the norm (yep.. legitimate pun) in the NoSQL world this does not mean we have to duplicate everything all the time. A good example lies in the internals of secondary indexes if your workload can compromise with its moderate impact on latency.

Secondary indexes on scylla are built on top of Materialized Views, which basically store an up-to-date pointer from your indexed column to your main table partition key. That means that secondary index MVs do not duplicate all the columns (and thus the data) of your main table, as you would have to do when denormalizing a table to query by another column: this saves disk space!

This of course comes with a latency drawback, because if your workload is interested in columns other than the partition key of the main table, the coordinator node will actually issue two queries to get all your data:

  1. query the secondary index MV to get the pointer to the partition key of the main table
  2. query the main table with the partition key to get the rest of the columns you asked for

This has been an effective trick to avoid duplicating a table and save disk space for some of our workloads!

(not a tip) Move the commitlog to another disk / partition?

This should only be considered as a sort of emergency procedure or for cost efficiency (cheap disk tiering) on non critical clusters.

While this is possible even if the disk is not formatted using XFS, it is not advised to separate the commitlog from data on modern SSD/NVMe disks, but… you technically can do it (as we did) on non-production clusters.

Switching is simple, you just need to change the commitlog_directory parameter in your scylla.yaml file.

March 27, 2019
Gentoo GNOME 3.30 for all init systems (March 27, 2019, 00:00 UTC)

GNOME logo

GNOME 3.30 is now available in Gentoo Linux testing branch. Starting with this release, GNOME on Gentoo once again works with OpenRC, in addition to the usual systemd option. This is achieved through the elogind project, a standalone logind implementation based on systemd code, which is currently maintained by a fellow Gentoo user. Gentoo would like to thank Mart Raudsepp (leio), Gavin Ferris, and all others working on this for their contributions. More information can be found in Mart’s blog post.

March 26, 2019
Mart Raudsepp a.k.a. leio (homepage, bugs)
Gentoo GNOME 3.30 for all init systems (March 26, 2019, 16:51 UTC)

GNOME 3.30 is now available in Gentoo Linux testing branch.
Starting with this release, GNOME on Gentoo once again works with OpenRC, in addition to the usual systemd option. This is achieved through the elogind project, a standalone logind implementation based on systemd code, which is currently maintained by a fellow Gentoo user. It provides the missing logind interfaces currently required by GNOME without booting with systemd.

For easier GNOME install, the desktop/gnome profiles now set up default USE flags with elogind for OpenRC systems, while the desktop/gnome/systemd profiles continue to do that for systemd systems. Both have been updated to provide a better initial GNOME install experience. After profile selection, a full install should be simply a matter of `emerge gnome` for testing branch users. Don’t forget to adapt your system to any changed USE flags on previously installed packages too.

GNOME 3.32 is expected to be made available in testing branch soon as well, followed by introducing all this for stable branch users. This is hoped to complete within 6-8 weeks.

If you encounter issues, don’t hesitate to file bug reports or, if necessary, contact me via e-mail or IRC. You can also discuss the elogind aspects on the Gentoo Forums.


I’d like to thank Gavin Ferris, for kindly agreeing to sponsor my work on the above (upgrading GNOME on Gentoo from 3.26 to 3.30 and introducing Gentoo GNOME elogind support); and dantrell, for his pioneering overlay work integrating GNOME 3 with OpenRC on Gentoo, and also the GNOME and elogind projects.

March 25, 2019
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.17 (March 25, 2019, 14:12 UTC)

I’m glad to announce a new (awaited) release of py3status featuring support for the sway window manager which allows py3status to enter the wayland environment!

Updated configuration and custom modules paths detection

The configuration section of the documentation explains the updated detection of the py3status configuration file (with respect to the XDG_CONFIG environment variables):

  • ~/.config/py3status/config
  • ~/.config/i3status/config
  • ~/.config/i3/i3status.conf
  • ~/.i3status.conf
  • ~/.i3/i3status.conf
  • /etc/xdg/i3status/config
  • /etc/i3status.conf

Regarding custom modules paths detection, py3status does as described in the documentation:

  • ~/.config/py3status/modules
  • ~/.config/i3status/py3status
  • ~/.config/i3/py3status
  • ~/.i3/py3status


Lots of module improvements and clean-ups, see the changelog.

  • we worked on the documentation sections and content which allowed us to fix a bunch of typos
  • our magic @lasers has worked a lot on harmonizing thresholds across modules, along with a lot of code clean-ups
  • new module: scroll to scroll modules on your bar (#1748)
  • @lasers has worked a lot on a more granular pango support for modules output (still work to do as it breaks some composites)

Thanks contributors

  • Ajeet D’Souza
  • @boucman
  • Cody Hiar
  • @cyriunx
  • @duffydack
  • @lasers
  • Maxim Baz
  • Thiago Kenji Okada
  • Yaroslav Dronskii

March 20, 2019
Install Gentoo in less than one minute (March 20, 2019, 18:35 UTC)

I’m pretty sure that the title of this post will catch your attention…and/or maybe your curiosity.

Well… this is something I’ve been doing for years… and since it did not cost too much to put it into a public and usable state, I decided to share my work, to help people avoid wasting time and getting angry when their cloud provider does not offer a gentoo image.

So what are the goals of this project?

  1. Install gentoo on cloud providers that do not offer a Gentoo image (e.g. Hetzner)
  2. Install gentoo everywhere in a few seconds.

To do a fast installation, we need a stage4… but what exactly is a stage4? In this case the stage4 is composed of the official gentoo stage3 plus grub, some more utilities and some files already configured.

So since the stage4 already has everything needed to complete the installation, we just need to make some replacements (fstab, grub and so on), install grub on the disk… and… it’s done (by the auto-installer script)!

At this point I’d expect some people to say… "yeah… it’s so simple and logical… why didn’t I think of that?" – Well, I guess most gentoo users only discover this after their first installation… so you don’t need to blame yourself 🙂

The technical details are covered by the README in the gentoo-stage4 git repository.

As said in the README:

  • If you have any request, feel free to contact me
  • A star on the project will give me the idea of the usage and then the effort to put here.

So what’s more? Just a screenshot of the script in action 🙂


March 14, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)

Update: Still having the issue after some time after all :(

So I have a (private) nextcloud instance and I kept having errors like "unable to process request" or "Connection to server lost", and it would stop loading at an empty page.

In the logs I had loads of messages like this:

Error PHP mime_content_type(): Failed identify data 0:indirect recursion nesting (0) exceeded at /..../apps/theming/lib/IconBuilder.php#138 2019-03-14T08:08:27+0100
Debug no app in context No cache entry found for /appdata_occ9eea91961/theming/5/favIcon-settings (storage: local::/.../owncloud_data/, internalPath: appdata_occ9eea91961/theming/5/favIcon-settings) 2019-03-14T08:08:27+0100

The first thing I found was repairing the file cache, so I ran this:

 php occ maintenance:mimetype:update-db --repair-filecache

That did add about 56 mime types… ok, sounds good… let’s try again… nope, same error (that error btw pops up everywhere – in my case also on the app update page, for example).

So next I checked permissions and gave my webserver user ownership of both the nextcloud install directory and the data directory recursively – just to be on the safe side (chown -R apache:apache <path>).

Nope, still nothing… getting frustrated here… also an update to the latest php 7.1 release (and the shared-mime-info package) did nothing either.

Checking for the file for which it cannot find a cache entry just leads to the discovery that this file does not exist.

The next thing I thought of was to check out theming, so I went to https://<my-install-webroot>/settings/admin/theming. There I just changed my "Web link" to the actual nextcloud install URL (and while I was there also changed the color, name and slogan for fun). After saving this and going back to the app update page, it now worked… so I have no clue what happened… maybe it was trying to get the favicon from the nextcloud.com page (which seems to be the default "Web link").

But now it works .. still got those log messages, but they seem to have somehow led me to resolve my issue...

March 07, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)
egroupware & DB connections (postgresql) (March 07, 2019, 15:14 UTC)

So I recently had this issue with running out of DB connections to my postgresql server.

Now I figured out that egroupware has persistent connections enabled by default (for performance reasons). For a small installation with 2 users and a few Cal/CardDAV sync devices this really is not an issue… so after checking that egw had 25+ persistent connections, I decided to just disable persistent connections.

The result: fewer resources consumed on my DB server, free connections for other things that actually use them, and absolutely no noticeable issue or slowdown with egroupware.

So my recommendation for small installs of egroupware: do not enable persistent connections.

March 01, 2019
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Bye bye Google Analytics (March 01, 2019, 12:15 UTC)

A few days ago, I removed Google Analytics from my blog and trashed the associated account.

I’ve been part of the Marketing Tech and Advertising Tech industries for over 15 years. I design and operate data processing platforms (including web navigation trackers) for a living. So I thought that sharing the reasons why I took this decision might be of interest to some people. I’ll keep it short.

MY convenience is not a good enough reason to send YOUR data to Google

The first and obvious question I asked myself is why did I (and so many people) set up this tracker on my web site?

My initial answer was a mix of:

  • convenience: it’s easy to set up, there’s a nice interface, you get a lot of details, you don’t have to ask yourself how it’s done, it just works
  • insight: it sounded somewhat important to know who was visiting what content and somehow know about the interest of people visiting

Along with a bit (hopefully not too much) of:

  • pride: are some blog posts popular? If so, which ones? Let’s try to do more like those!

I don’t think those are good enough reasons to add a tracker that sends YOUR data to Google.

Convenience kills diversity

I’m old enough to have witnessed the rise of the Internet and its availability to (almost) everyone. The first thing I did when I could connect was to create and host my own web site; it was great and looked ugly!

But while the Internet could have been a catalyst for diversity, it turned out to create an over-concentration of services and tools that we think are hard to live without because of their convenience (and a human tendency toward mimicry).

When your choices are reduced and mass adoption defines your standards, it’s easy to let it go and pretend you don’t care that much.

I decided to stop pretending that I don’t care. I don’t want to participate in the concentration of web navigation tracking to Google.

Open Internet is at risk

When diversity is endangered, so is the Open Internet. The idea that a rich ecosystem can bring its own value and be free to grow by using the data it generates or collects is threatened by the GAFA, who are building walled gardens around OUR data.

For instance, Google used the GDPR regulation as an excuse to close down access to the data collected by their (so convenient) services. If a company (or you) wants to access / query this data (YOUR data), it can only do so by using their billed tools.

What should have been a clear win for us people turned out to also benefit those giants and to threaten diversity and the Open Internet even more.

Adding Google Analytics to your web site helps Google extend its reach and tracking footprint across the whole web: imagine all those millions of web site visits added together; that’s where the value is for them. No wonder GA is free.

So in this regard too, I decided to stop contributing to the empowerment of Google.

This blog is Tracking Free

So from now on, if you want to share your thoughts or just let me know you enjoyed a post on this blog, take the lead on YOUR data and use the comment box.

The choice is yours!

February 27, 2019
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We're happy to announce that our manuscript "Shaping electron wave functions in a carbon nanotube with a parallel magnetic field" has been published as Editor's Suggestion in Physical Review Letters.

When a physicist thinks of an electron confined to a one-dimensional object such as a carbon nanotube, the first idea that comes to mind is the "particle in a box" from elementary quantum mechanics. A particle can behave as a wave, and in this model it is essentially a standing wave, reflected at two infinitely high, perfect barrier walls. The mathematical solutions for the wave function describing it are the well-known half-wavelength resonator solutions, with a fundamental mode where exactly half a wavelength fits between the walls, a node of the wave function at each wall and an antinode in the center.

In this publication, we show how wrong this first idea can be, and what impact that has. In a carbon nanotube as a quasi one-dimensional system, an electron is not in free space, but confined to the lattice of carbon atoms which forms the nanotube walls. This hexagonal lattice, the same one that in planar form constitutes graphene, is called bipartite, since every elementary cell of the lattice contains two carbon atoms; one can imagine the nanotube wall as being built out of two sub-lattices, with one atom per cell each, that are shifted relative to each other. Surprisingly, the hexagonal bipartite lattice does not generally support the half-wavelength solutions mentioned above, where the electronic wave function becomes zero at the edges. In each sublattice, we can only force the wave function to zero at one end of the nanotube "box"; its value at the other end remains finite. This means that the wave function shape for each of the two sublattices is more similar to that of a quarter-wavelength resonator, where one end displays a node, the other an antinode. The two sublattice wave functions are mirrored in shape, with node and antinode swapping position.

When we now apply a magnetic field along the carbon nanotube, a magnetic flux enters the nanotube, and the boundary conditions for the electron wave function change via the Aharonov-Bohm effect. Astonishingly, its shape along the carbon nanotube can thereby be tuned between half-wavelength and quarter-wavelength behaviour. This means that the probability of finding the trapped electron near the contacts changes, and with it the tunnel current, leading to a very distinct behaviour of the electronic conductance. Our measurements and the corresponding calculations agree very well. Thus, our work shows the impact of a non-trivial host crystal on the electronic behaviour, which is important for many novel types of materials.

"Shaping electron wave functions in a carbon nanotube with a parallel magnetic field"
M. Marganska, D. R. Schmid, A. Dirnaichner, P. L. Stiller, Ch. Strunk, M. Grifoni, and A. K. Hüttel
Physical Review Letters 122, 086802 (2019), Editor's Suggestion; arXiv:1712.08545 (PDF, supplementary information)

February 22, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)
Postgresql major version upgrade (gentoo) (February 22, 2019, 10:37 UTC)

Just did an upgrade from PostgreSQL 10.x to 11.x on a test machine.

The guide on the Gentoo Wiki is pretty good, but there were a few things I forgot at first:

First off, when initializing the new cluster with "emerge --config =dev-db/postgresql-11.1", make sure the DB init options are the same as for the old cluster. They are stored in /etc/conf.d/postgresql-XX.Y, so just check that PG_INITDB_OPTS, the collation, etc. match; if not, delete the new cluster and re-run emerge --config ;)

The second thing was pg_hba.conf: make sure to re-add any extra user/db/connection permissions (in my case I ran diff and then just copied the old config file over, as the only difference was the extra permissions I had added).

The third thing was postgresql.conf: here I forgot to make sure listen_addresses and port are the same as in the old config (I did not copy this one, as there are a lot more differences here). And of course check the rest of the config file too (diff is your friend ;) )

Other than that, pg_upgrade worked really well for me, and the new cluster is now up and running again.
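The steps above can be sketched roughly as follows. This is a hedged summary, not a verbatim transcript: the package version, config paths and pg_upgrade directories follow typical Gentoo locations and may differ on your system, so cross-check with the Gentoo Wiki guide.

```shell
# 1. Initialize the new cluster; first verify PG_INITDB_OPTS in
#    /etc/conf.d/postgresql-11 matches the old cluster's settings.
emerge --config =dev-db/postgresql-11.1

# 2. Compare configs and carry over local changes
#    (pg_hba.conf permissions; listen_addresses and port in postgresql.conf).
diff /etc/postgresql-10/pg_hba.conf /etc/postgresql-11/pg_hba.conf
diff /etc/postgresql-10/postgresql.conf /etc/postgresql-11/postgresql.conf

# 3. Stop the old cluster and run pg_upgrade as the postgres user
#    (data and binary directories are assumptions; adjust to your install).
rc-service postgresql-10 stop
su - postgres -c 'pg_upgrade \
    --old-datadir=/var/lib/postgresql/10/data \
    --new-datadir=/var/lib/postgresql/11/data \
    --old-bindir=/usr/lib64/postgresql-10/bin \
    --new-bindir=/usr/lib64/postgresql-11/bin'

# 4. Start the new cluster
rc-service postgresql-11 start
```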

February 20, 2019
Michał Górny a.k.a. mgorny (homepage, bugs)

Traditionally, OpenPGP revocation certificates are used as a last resort. You are expected to generate one for your primary key and keep it in a secure location. If you ever lose the secret portion of the key and are unable to revoke it any other way, you import the revocation certificate and submit the updated key to keyservers. However, there is another interesting use for revocation certificates — revoking shared organization keys.
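The traditional last-resort flow described above boils down to a few standard GnuPG commands (the key ID and file name here are placeholders):

```shell
# Generate a revocation certificate for the primary key
# and store the output somewhere safe, offline.
gpg --output revoke-0xDEADBEEF.asc --gen-revoke 0xDEADBEEF

# Later, if the secret key is lost: import the certificate
# and publish the updated (now revoked) key.
gpg --import revoke-0xDEADBEEF.asc
gpg --send-keys 0xDEADBEEF
```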

Let’s take Gentoo, for example. We are using a few keys needed to perform automated signatures on servers. For this reason, the key is especially exposed to attacks and we want to be able to revoke it quickly if the need arises. Now, we really do not want to have every single Infra member hold a copy of the secret primary key. However, we can give Infra members revocation certificates instead. This way, they maintain the possibility of revoking the key without unnecessarily increasing its exposure.

The problem with traditional revocation certificates is that they support revoking the primary key only. In our security model, the primary key is well protected, compared to subkeys that are totally exposed. Therefore, it is superfluous to revoke the complete key when only a subkey is compromised. To resolve this limitation, the gen-revoke tool was created; it can create exported revocation signatures for both the primary key and subkeys.

Technical background

The OpenPGP key (v4, as defined by RFC 4880) consists of a primary key, one or more UIDs and zero or more subkeys. Each of those keys and UIDs can include zero or more signature packets. Those packets bind information to the specific key or UID, and their authenticity is confirmed by a signature made using the secret portion of a primary key.

Signatures made by the key’s owner are called self-signatures. The most basic of them serve as bindings between the primary key and its subkeys and UIDs. Since both those classes of objects are created independently of the primary key, self-signatures are necessary to distinguish authentic subkeys and UIDs created by the key owner from potential fakes. Accordingly, GnuPG will only accept subkeys and UIDs that have a valid self-signature.

One specific type of signature is the revocation signature. These signatures indicate that the relevant key, subkey or UID has been revoked. If a revocation signature is found, it takes precedence over any other kind of signature and prevents the revoked object from being used further.

Key updates are the means of distributing new data associated with a key. What’s important is that during an update the key is not replaced by a new one. Instead, GnuPG collects all the new data (subkeys, UIDs, signatures) and adds it to the local copy of the key. The validity of this data is verified against the appropriate signatures. Accordingly, anyone can submit a key update to the keyserver, provided that the new data includes valid signatures. Similarly to a local GnuPG instance, the keyserver will update its copy of the key rather than replace it.

Revocation certificates specifically make use of this property. Technically, a revocation certificate is simply an exported form of a revocation signature, signed using the owner’s primary key. As long as it’s not on the key (i.e. GnuPG does not see it), it does not do anything. When it’s imported, GnuPG adds it to the key. Further submissions and exports include it, effectively distributing it to all copies of the key.
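In GnuPG terms, this merge-on-update round trip is just the usual keyserver exchange (key ID is a placeholder):

```shell
# Publish the local copy of a key; the keyserver merges the new
# packets (subkeys, UIDs, signatures) into its existing copy.
gpg --send-keys 0xDEADBEEF

# Fetch and merge any new packets the keyserver has for that key.
gpg --refresh-keys 0xDEADBEEF
```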

gen-revoke builds on this idea. It creates and exports revocation signatures for the primary key and subkeys. Due to implementation limitations (and for better compatibility), rather than exporting the signature alone it exports a minimal copy of the relevant key. This copy can be imported just like any other key export, and it causes the revocation signature to be added to the key. Afterwards, it can be exported and distributed just like a revocation done directly on the key.


To use the script, you need to have the secret portion of the primary key available, and public encryption keys for all the people who are supposed to obtain a copy of the revocation signatures (recipients).

The script takes at least two parameters: an identifier of the key for which revocation signatures should be created, followed by one or more e-mail addresses of signature recipients. It creates revocation signatures both for the primary key and for all valid subkeys, for all the people specified.

The signatures are written into the current directory as key exports and are encrypted to each specified person. They should be distributed afterwards, and kept securely by all the individuals. If a need to revoke either a subkey or the primary key arises, the first person available can decrypt the signature, import it and send the resulting key to keyservers.
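Based on the description above, an invocation might look like this (the key ID, addresses and output file name are hypothetical; check the tool’s own help for actual names):

```shell
# Create revocation signatures for the primary key and all valid
# subkeys, encrypted to each listed recipient (one export per person).
gen-revoke 0xDEADBEEF alice@gentoo.org bob@gentoo.org

# Later, any single holder can perform the revocation:
gpg --decrypt revocation-for-alice.gpg | gpg --import
gpg --send-keys 0xDEADBEEF
```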

Additionally, each signature includes a comment specifying the person it was created for. This comment will afterwards be displayed by GnuPG if one of the revocation signatures is imported. This provides a clear audit trace as to who revoked the key.

Security considerations

Each of the revocation signatures can be used by an attacker to disable the key in question. The signatures are protected through encryption, so the system is vulnerable to the compromise of any single signature holder’s key.

However, this is considerably safer than the equivalent option of distributing the secret portion of the primary key. In the latter case, the attacker would be able to completely compromise the key and use it for malicious purposes; in the former, he is only capable of revoking the key, causing some frustration. Furthermore, the revocation comment helps identify the compromised user.

The tradeoff between reliability and security can be adjusted by changing the number of revocation signature holders.

February 13, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)

After about 2 weeks of trying to figure out where the problem was with the amdgpu driver on my RX590 on my Ryzen mainboard on Linux, prOMiNd in the #radeon channel on IRC (Freenode) suggested I try the kernel command line option mem_encrypt=off, and it fixed it! The issue manifested itself in the screen getting "stuck" on boot once KMS (kernel mode setting) tried to use amdgpu. (nomodeset did work, but left me with no X.)
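To make the workaround permanent, the parameter can be added to the bootloader configuration; for example with GRUB (a sketch only: file locations and any options you already have set will differ per install):

```shell
# /etc/default/grub: append the workaround to the kernel command line
# (keep whatever options are already present there)
GRUB_CMDLINE_LINUX_DEFAULT="mem_encrypt=off"

# then regenerate the GRUB config
grub-mkconfig -o /boot/grub/grub.cfg
```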

My Hardware:

  • AMD Ryzen 7 2700X
  • MSI X470 Gaming Plus
  • G.Skill 16GB Kit
  • Sapphire Nitro+ Radeon RX590 8GB Special Edition

I expect disabling one or both of those will do the same:


here's the relevant dmesg output in case someone has a similar issue (so search engines can find it):

[   14.161225] [drm] amdgpu kernel modesetting enabled.
[   14.161259] Parsing CRAT table with 1 nodes
[   14.161262] Ignoring ACPI CRAT on non-APU system
[   14.161264] Virtual CRAT table created for CPU
[   14.161264] Parsing CRAT table with 1 nodes
[   14.161265] Creating topology SYSFS entries
[   14.161269] Topology: Add CPU node
[   14.161270] Finished initializing topology
[   14.161345] checking generic (e0000000 300000) vs hw (e0000000 10000000)
[   14.161346] fb0: switching to amdgpudrmfb from EFI VGA
[   14.161372] Console: switching to colour dummy device 80x25
[   14.161546] [drm] initializing kernel modesetting (POLARIS10 0x1002:0x67DF 0x1DA2:0xE366 0xE1).
[   14.161552] [drm] register mmio base: 0xFE900000
[   14.161553] [drm] register mmio size: 262144
[   14.161558] [drm] add ip block number 0 <vi_common>
[   14.161558] [drm] add ip block number 1 <gmc_v8_0>
[   14.161559] [drm] add ip block number 2 <tonga_ih>
[   14.161559] [drm] add ip block number 3 <gfx_v8_0>
[   14.161559] [drm] add ip block number 4 <sdma_v3_0>
[   14.161560] [drm] add ip block number 5 <powerplay>
[   14.161560] [drm] add ip block number 6 <dm>
[   14.161560] [drm] add ip block number 7 <uvd_v6_0>
[   14.161561] [drm] add ip block number 8 <vce_v3_0>
[   14.161568] [drm] UVD is enabled in VM mode
[   14.161568] [drm] UVD ENC is enabled in VM mode
[   14.161569] [drm] VCE enabled in VM mode
[   14.161743] amdgpu 0000:1d:00.0: No more image in the PCI ROM
[   14.161756] ATOM BIOS: 113-4E3661U-X6I
[   14.161774] [drm] vm size is 64 GB, 2 levels, block size is 10-bit, fragment size is 9-bit
[   14.161775] amdgpu 0000:1d:00.0: SME is active, device will require DMA bounce buffers
[   14.161775] amdgpu 0000:1d:00.0: SME is active, device will require DMA bounce buffers
[   14.311979] amdgpu 0000:1d:00.0: VRAM: 8192M 0x000000F400000000 - 0x000000F5FFFFFFFF (8192M used)
[   14.311981] amdgpu 0000:1d:00.0: GART: 256M 0x000000FF00000000 - 0x000000FF0FFFFFFF
[   14.311988] [drm] Detected VRAM RAM=8192M, BAR=256M
[   14.311989] [drm] RAM width 256bits GDDR5
[   14.312063] [TTM] Zone  kernel: Available graphics memory: 8185614 kiB
[   14.312064] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[   14.312064] [TTM] Initializing pool allocator
[   14.312069] [TTM] Initializing DMA pool allocator
[   14.312103] [drm] amdgpu: 8192M of VRAM memory ready
[   14.312104] [drm] amdgpu: 8192M of GTT memory ready.
[   14.312123] software IO TLB: SME is active and system is using DMA bounce buffers
[   14.312124] [drm] GART: num cpu pages 65536, num gpu pages 65536
[   14.313844] [drm] PCIE GART of 256M enabled (table at 0x000000F400300000).
[   14.313934] [drm:amdgpu_device_init.cold.34 [amdgpu]] *ERROR* sw_init of IP block <tonga_ih> failed -12
[   14.313935] amdgpu 0000:1d:00.0: amdgpu_device_ip_init failed
[   14.313937] amdgpu 0000:1d:00.0: Fatal error during GPU init
[   14.313937] [drm] amdgpu: finishing device.
[   14.314020] ------------[ cut here ]------------
[   14.314021] Memory manager not clean during takedown.
[   14.314045] WARNING: CPU: 6 PID: 4541 at drivers/gpu/drm/drm_mm.c:950 drm_mm_takedown+0x1a/0x20 [drm]
[   14.314045] Modules linked in: amdgpu(+) mfd_core snd_usb_audio snd_usbmidi_lib snd_rawmidi snd_seq_device chash i2c_algo_bit gpu_sched drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm snd_hda_codec_realtek snd_hda_codec_generic drm snd_hda_intel snd_hda_codec agpgart snd_hwdep snd_hda_core snd_pcm nct6775 snd_timer hwmon_vid kvm snd irqbypass k10temp macvlan r8169 pcnet32 mii e1000 efivarfs dm_snapshot dm_bufio
[   14.314061] CPU: 6 PID: 4541 Comm: udevd Not tainted 4.20.2-gentooamdgpu #2
[   14.314062] Hardware name: Micro-Star International Co., Ltd. MS-7B79/X470 GAMING PLUS (MS-7B79), BIOS A.40 06/28/2018
[   14.314070] RIP: 0010:drm_mm_takedown+0x1a/0x20 [drm]
[   14.314072] Code: 1c b1 a5 ca 66 66 2e 0f 1f 84 00 00 00 00 00 90 48 8b 47 38 48 83 c7 38 48 39 c7 75 01 c3 48 c7 c7 30 88 23 c0 e8 4d b3 a5 ca <0f> 0b c3 0f 1f 00 41 57 41 56 49 89 f6 41 55 41 54 49 89 fd 55 53
[   14.314073] RSP: 0018:ffffaf2d839b7a08 EFLAGS: 00010286
[   14.314074] RAX: 0000000000000000 RBX: ffff95a68c102b00 RCX: ffffffff8be47158
[   14.314075] RDX: 0000000000000001 RSI: 0000000000000096 RDI: ffffffffa7ec6e2c
[   14.314076] RBP: ffff95a68a9229e8 R08: 000000000000003c R09: 0000000000000001
[   14.314077] R10: 0000000000000000 R11: 0000000000000001 R12: ffff95a68a9229c8
[   14.314077] R13: 0000000000000000 R14: 0000000000000170 R15: ffff95a686289930
[   14.314079] FS:  00007fe4117017c0(0000) GS:ffff95a68eb80000(0000) knlGS:0000000000000000
[   14.314080] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   14.314081] CR2: 00007ffc0740f8e8 CR3: 000080040c5d0000 CR4: 00000000003406e0
[   14.314081] Call Trace:
[   14.314149]  amdgpu_vram_mgr_fini+0x1d/0x40 [amdgpu]
[   14.314154]  ttm_bo_clean_mm+0x9d/0xb0 [ttm]
[   14.314216]  amdgpu_ttm_fini+0x6c/0xe0 [amdgpu]
[   14.314277]  amdgpu_bo_fini+0x9/0x30 [amdgpu]
[   14.314344]  gmc_v8_0_sw_fini+0x2d/0x50 [amdgpu]
[   14.314416]  amdgpu_device_fini+0x235/0x3d6 [amdgpu]
[   14.314477]  amdgpu_driver_unload_kms+0xab/0x150 [amdgpu]
[   14.314536]  amdgpu_driver_load_kms+0x181/0x250 [amdgpu]
[   14.314543]  drm_dev_register+0x10e/0x150 [drm]
[   14.314602]  amdgpu_pci_probe+0xb8/0x120 [amdgpu]
[   14.314606]  local_pci_probe+0x3c/0x90
[   14.314609]  pci_device_probe+0xdc/0x160
[   14.314612]  really_probe+0xee/0x2a0
[   14.314613]  driver_probe_device+0x4a/0xb0
[   14.314615]  __driver_attach+0xaf/0xd0
[   14.314617]  ? driver_probe_device+0xb0/0xb0
[   14.314619]  bus_for_each_dev+0x71/0xb0
[   14.314621]  bus_add_driver+0x197/0x1e0
[   14.314623]  ? 0xffffffffc0369000
[   14.314624]  driver_register+0x66/0xb0
[   14.314626]  ? 0xffffffffc0369000
[   14.314628]  do_one_initcall+0x41/0x1b0
[   14.314631]  ? _cond_resched+0x10/0x20
[   14.314633]  ? kmem_cache_alloc_trace+0x35/0x170
[   14.314636]  do_init_module+0x55/0x1e0
[   14.314639]  load_module+0x2242/0x2480
[   14.314642]  ? __do_sys_finit_module+0xba/0xe0
[   14.314644]  __do_sys_finit_module+0xba/0xe0
[   14.314646]  do_syscall_64+0x43/0xf0
[   14.314649]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   14.314651] RIP: 0033:0x7fe411a7f669
[   14.314652] Code: 00 00 75 05 48 83 c4 18 c3 e8 b3 b7 01 00 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e7 a7 0c 00 f7 d8 64 89 01 48
[   14.314653] RSP: 002b:00007ffe7cb639e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[   14.314655] RAX: ffffffffffffffda RBX: 000056165f9c3150 RCX: 00007fe411a7f669
[   14.314656] RDX: 0000000000000000 RSI: 00007fe411b6190d RDI: 0000000000000016
[   14.314656] RBP: 00007fe411b6190d R08: 0000000000000000 R09: 0000000000000002
[   14.314657] R10: 0000000000000016 R11: 0000000000000246 R12: 0000000000000000
[   14.314658] R13: 000056165f9d3270 R14: 0000000000020000 R15: 000056165f9c3150
[   14.314659] ---[ end trace 9db69ba000fb2712 ]---
[   14.314664] [TTM] Finalizing pool allocator
[   14.314666] [TTM] Finalizing DMA pool allocator
[   14.314700] [TTM] Zone  kernel: Used memory at exit: 124 kiB
[   14.314703] [TTM] Zone   dma32: Used memory at exit: 124 kiB
[   14.314704] [drm] amdgpu: ttm finalized
[   14.314868] amdgpu: probe of 0000:1d:00.0 failed with error -12

February 01, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)

I first tried just exporting the data from PostgreSQL like this:

COPY (SELECT row_to_json(t) FROM (SELECT * FROM llx_societe) t) TO '/path/to/file/llx_societe_extrafields.json';

but that gives me so much that I do not need, and it also still keeps the half-French column names (which, as someone who doesn't speak French, drive me mad and slow me down..)

Warning: PostgreSQL does not seem to escape " from HTML, so you need to escape it or remove it (which is what I did, since I do not need it)

so I'll just make a query and/or view to deal with this:

SELECT  s.rowid AS s_row,
        s.nom AS s_name,
        s.phone AS s_phone,
        s.fax AS s_fax,
        s.email AS s_email,
        s.url AS s_url,
        s.address AS s_address,
        s.town AS s_town,
        s.zip AS s_zip,
        s.note_public AS s_note_public,
        s.note_private AS s_note_private,
        s.ape AS s_fbno,
        s.idprof4 AS s_dvrno,
        s.tva_assuj AS s_UST,
        s.tva_intra AS s_uid,
        s.code_client AS s_code_client,
        s.name_alias AS s_name_alias,
        s.siren AS s_siren,
        s.siret AS s_siret,
        s.client AS s_client,
        s_dep.nom AS s_county,
        s_reg.nom AS s_country,
        se.pn_name AS s_pn_name,
        sp.rowid AS sp_rowid,
        sp.lastname AS sp_lastname,
        sp.firstname AS sp_firstname,
        sp.address as sp_address,
        sp.civility AS sp_civility,
        sp.zip AS sp_zip,
        sp.town AS sp_town,
        sp_dep.nom AS sp_county,
        sp_reg.nom AS sp_country,
        sp.fk_pays AS sp_fk_pays,
        sp.birthday AS sp_birthday,
        sp.poste AS sp_poste,
        sp.phone AS sp_phone,
        sp.phone_perso AS sp_phone_perso,
        sp.phone_mobile AS sp_phone_mobile,
        sp.fax AS sp_fax,
        sp.email AS sp_email,
        sp.priv AS sp_priv,
        sp.note_private AS sp_note_private,
        sp.note_public AS sp_note_public

FROM llx_societe AS s
INNER JOIN llx_societe_extrafields AS se ON se.fk_object = s.rowid
LEFT JOIN llx_socpeople AS sp ON sp.fk_soc = s.rowid
LEFT JOIN llx_c_departements AS s_dep ON s.fk_departement = s_dep.rowid
LEFT JOIN llx_c_regions AS s_reg ON s_dep.fk_region = s_reg.rowid
LEFT JOIN llx_c_departements AS sp_dep ON sp.fk_departement = sp_dep.rowid
LEFT JOIN llx_c_regions AS sp_reg ON sp_dep.fk_region = sp_reg.rowid
ORDER BY s_name, sp_lastname, sp_firstname;
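To actually get the renamed (English) columns out of the database, the query above can be wrapped in a view and exported with psql’s \copy. A hedged sketch, assuming the query was saved as a view named v_societe_export and the database is called dolibarr (both names are made up):

```shell
# Export the view to CSV with a header row of the English aliases;
# the output path is relative to where psql is run.
psql dolibarr -c "\copy (SELECT * FROM v_societe_export) TO 'societe_export.csv' WITH CSV HEADER"
```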

January 31, 2019
Michał Górny a.k.a. mgorny (homepage, bugs)

This article describes the UI deficiency of Evolution mail client that extrapolates the trust of one of OpenPGP key UIDs into the key itself, and reports it along with the (potentially untrusted) primary UID. This creates the possibility of tricking the user into trusting a phished mail via adding a forged UID to a key that has a previously trusted UID.

Continue reading

January 29, 2019
Michał Górny a.k.a. mgorny (homepage, bugs)
Identity with OpenPGP trust model (January 29, 2019, 13:50 UTC)

Let’s say you want to send a confidential message to me, and possibly receive a reply. Through employing asymmetric encryption, you can prevent a third party from reading its contents, even if it can intercept the ciphertext. Through signatures, you can verify the authenticity of the message, and therefore detect any possible tampering. But for all this to work, you need to be able to verify the authenticity of the public keys first. In other words, we need to be able to prevent the aforementioned third party — possibly capable of intercepting your communications and publishing a forged key with my credentials on it — from tricking you into using the wrong key.

This renders key authenticity the fundamental problem of asymmetric cryptography. But before we start discussing how key certification is implemented, we need to cover another fundamental issue — identity. After all, who am I — who is the person you are writing to? Are you writing to a person you’ve met? Or to a specific Gentoo developer? Author of some project? Before you can distinguish my authentic key from a forged key, you need to be able to clearly distinguish me from an impostor.

Forms of identity

Identity via e-mail address

If your primary goal is to communicate with the owner of a particular e-mail address, it seems obvious to associate the identity with the owner of that address. However, how would you, in reality, distinguish the ‘rightful owner’ of an e-mail address from a cracker who has managed to obtain access to it, or to intercept your network communications and inject forged mails?

The truth is, the best you can certify is that the owner of a particular key is able to read and/or send mails from a particular e-mail address, at a particular point in time. Then, if you can certify the same for a long enough period of time, you may reasonably assume the address is continuously used by the same identity (which may qualify as a legitimate owner or a cracker with a lot of patience).

Of course, all this relies on your trust in mail infrastructure not being compromised.

Identity via personal data

A stronger protection against crackers may be provided by associating the identity with personal data, as confirmed by government-issued documents. In case of OpenPGP, this is just the real name; X.509 certificates also provide fields for street address, phone number, etc.

The use of real names seems to be based on two assumptions: that your real name is reasonably well-known (e.g. it can be established with little risk of being replaced by a third party), and that the attacker does not wish to disclose his own name. Besides that, using real names meets with some additional criticism.

Firstly, requiring one to use his real name may be considered an invasion of privacy. Most notably, some people wish not to disclose or use their real names, and this effectively prevents them from ever being certified.

Secondly, real names are not unique. After all, the naming systems developed from the necessity of distinguishing individuals in comparatively small groups, and they simply don’t scale to the size of the Internet. Therefore, name collisions are entirely possible and we are relying on sheer luck that the attacker wouldn’t happen to have the same name as you do.

Thirdly, and most importantly, verifying identity documents is non-trivial, and untrained individuals are likely to fall victim to mediocre-quality fakes. After all, we’re talking about people who have hopefully read some article on verifying a particular kind of document but have no experience recognizing forgery, no specialized hardware (I suppose most of you don’t carry a magnifying glass and a UV light with you) and who may lack skill in comparing signatures or photographs (not to mention some people have really old photographs in their documents). Some countries don’t even issue any official documentation on document verification in English!

Finally, even besides the point of forged documents, this relies on trust in administration.

Identity via photographs

This one I’m mentioning merely for completeness. OpenPGP keys allow adding a photograph as one of your UIDs. However, this is rather rarely used (of the keys my GnuPG has fetched so far, less than 10% have photographs). The concerns are similar to those for personal data: it assumes that others reliably know what you look like, and that they are capable of reliably comparing faces.

Online identity

An interesting concept is to use your public online activity to prove your identity — such as websites or social media. This is generally based on cross-referencing multiple resources with cryptographically proven publishing access, and assuming that an attacker would not be able to compromise all of them simultaneously.

A form of this concept is utilized by keybase.io. This service builds trust in user profiles via cryptographically cross-linking your profiles on some external sites and/or your websites. Furthermore, it actively encourages other users to verify those external proofs as well.

This identity model relies entirely on trust in the network infrastructure and the external sites. The likelihood of it being compromised is reduced by (potentially) relying on multiple independent sites.

Web of Trust model

Most of the time, you won’t be able to directly verify the identity of everyone you’d like to communicate with. This creates the necessity of obtaining indirect proof of authenticity, and the model normally used for that purpose in OpenPGP is the Web of Trust. I won’t be getting into the fine details — you can find them e.g. in the GNU Privacy Handbook. For our purposes, it suffices to say that in the WoT the authenticity of keys you haven’t verified may be assessed by people whose keys you trust already, or people they know, with a limited level of recursion.
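In GnuPG terms, participating in the WoT means certifying keys you have verified yourself and recording how far you trust their owners to certify others (the key ID below is a placeholder):

```shell
# Certify a key after verifying the holder's identity in person
gpg --sign-key 0xDEADBEEF

# Assign ownertrust: how much you trust this person to verify others
# (enter "trust" at the interactive prompt)
gpg --edit-key 0xDEADBEEF trust

# Inspect the certifications present on a key
gpg --check-sigs 0xDEADBEEF
```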

The more key holders you can trust, the more keys you can have verified indirectly and the more likely it is that your future recipient will be in that group. Or that you will be able to get someone from across the world into your WoT by meeting someone residing much closer to yourself. Therefore, you’d naturally want the WoT to grow fast and include more individuals. You’d want to preach OpenPGP onto non-crypto-aware people. However, this comes with inherent danger: can you really trust that they will properly verify the identity of the keys they sign?

I believe this is the most fundamental issue with the WoT model: for it to work outside of small specialized circles, it has to include more and more individuals across the world. But this growth inevitably makes it easier for a malicious third party to find people who can be tricked into certifying keys with forged identities.


The fundamental problem in OpenPGP usage is finding the correct key and verifying its authenticity. This becomes especially complex given there is no single clear way of determining one’s identity in the Internet. Normally, OpenPGP uses a combination of real name and e-mail address, optionally combined with a photograph. However, all of them have their weaknesses.

Direct identity verification for all recipients is non-practical, and therefore requires indirect certification solutions. While the WoT model used by OpenPGP attempts to avoid centralized trust specific to PKI, it is not clear whether it’s practically manageable. On one hand, it requires trusting more people in order to improve coverage; on the other, it makes it more vulnerable to fraud.

Given all the above, the trust-via-online-presence concept may be of some interest. Most importantly, it establishes a closer relationship between the identity you actually need and the identity you verify — e.g. you want to mail the person being an open source developer, author of some specific projects rather than arbitrary person with a common enough name. However, this concept is not established broadly yet.

January 26, 2019
Michał Górny a.k.a. mgorny (homepage, bugs)

This article shortly explains the historical git weakness regarding handling commits with multiple OpenPGP signatures in git older than v2.20. The method of creating such commits is presented, and the results of using them are described and analyzed.

Continue reading

January 20, 2019
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.16 (January 20, 2019, 21:10 UTC)

Two py3status versions in less than a month? That’s the holiday effect, but not only that!

Our community has been busy discussing our way forward to 4.0 (see below) and organization so it was time I wrote a bit about that.


A new collaborator

First of all we have the great pleasure and honor to welcome Maxim Baz @maximbaz as a new collaborator on the project!

His engagement, numerous contributions and insightful reviews to py3status have made him a well-known community member, not to mention his IRC support 🙂

Once again, thank you for being there Maxim!

Zen of py3status

As a result of an interesting discussion, we worked on defining better how to contribute to py3status as well as a set of guidelines we agree on to get the project moving on smoothly.

Thus was born the zen of py3status, which extends the philosophy from the user point of view to the contributor point of view!

This allowed us to handle the numerous open pull requests and get their number down to 5 at the time of writing this post!

Even our dear @lasers doesn't have any open PRs anymore 🙂

3.15 + 3.16 versions

Our magic @lasers has worked a lot on general module options, as well as on support for i3-gaps features such as border coloring and fine tuning.

Also interesting is the work of Thiago Kenji Okada @m45t3r around NixOS packaging of py3status. Thanks a lot for this work and for sharing Thiago!

I also liked the question from Andreas Lundblad @aioobe, asking whether we could have a feature to display custom graphical output, such as a small PNG, upon clicking on the i3bar; you might be interested in following the i3 issue he opened.

Make sure to read the amazing changelog for details, a lot of modules have been enhanced!


  • You can now set a background, border colors and their urgent counterparts on a global scale or per module
  • CI now checks modules for black formatting, so the whole code base now obeys the black style!
  • All HTTP-request-based modules now have a standard way to define an HTTP timeout, with the same 10 second default timeout
  • py3-cmd now allows sending click events with modifiers
  • The py3status -n / --interval command line argument has been removed as it was obsolete. We will ignore it if you have it set, but better to remove it to be clean
  • You can specify your own i3status binary path using the new -u, --i3status command line argument, thanks to @Dettorer and @lasers
  • Since Yahoo! decided to retire its public & free weather API, the weather_yahoo module has been removed
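For example, the new color options from this release can be set in a module block of your py3status configuration; the module name and color values below are just illustrative:

```
clock {
    background = '#280028'
    border = '#9dd7fb'
}
```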

New modules

  • new conky module: display conky system monitoring (#1664), by lasers
  • new module emerge_status: display information about running gentoo emerge (#1275), by AnwariasEu
  • new module hueshift: change your screen color temperature (#1142), by lasers
  • new module mega_sync: to check for MEGA service synchronization (#1458), by Maxim Baz
  • new module speedtest: to check your internet bandwidth (#1435), by cyrinux
  • new module usbguard: control usbguard from your bar (#1376), by cyrinux
  • new module velib_metropole: display velib metropole stations and (e)bikes (#1515), by cyrinux

A word on 4.0

Do you wonder what’s gonna be in the 4.0 release?
Do you have ideas that you’d like to share?
Do you have dreams that you’d love to come true?

Then make sure to read and participate in the open RFC on 4.0 version!

Development has not started yet; we really want to hear from you.

Thank you contributors!

There would be no py3status release without our amazing contributors, so thank you guys!

  • AnwariasEu
  • cyrinux
  • Dettorer
  • ecks
  • flyingapfopenguin
  • girst
  • Jack Doan
  • justin j lin
  • Keith Hughitt
  • L0ric0
  • lasers
  • Maxim Baz
  • oceyral
  • Simon Legner
  • sridhars
  • Thiago Kenji Okada
  • Thomas F. Duellmann
  • Till Backhaus

January 13, 2019
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Getting control on my data (January 13, 2019, 19:28 UTC)

Since 2009 I have been getting more and more interested in privacy,
radical decentralization and self-hosting.
But only recently have I started to actively work on keeping
my own privacy and making my open source usage stricter
(no Dropbox, no Google services).
The point of this more radical change is not only privacy.
It is partially that I don't want corporations1 to use
my data for their business, and partially that I think
open source and decentralization are the way to go.
Not using open source is giving corporations the ability to automate us27.
Continuing to use centrally controlled entities is giving away our
freedom and our privacy (yes, also in our life).
Corporations that have many users and dominate our communication services
can control every aspect of our life.
They can remove and censor whatever content is against their views,
add crazily expensive service features and own your data.
I prefer to use good open source, taking back control over my data
and being sure that they are contributing back to the ecosystem.
Taking back control over my freedom, and having the possibility to
contribute back and help out.
I prefer to donate to a service that gives users freedom rather than
give money to a service that removes user rights.

dontspyonme image from: https://www.reddit.com/r/degoogle/comments/8bzumr/dont_spy_on_me/

Unfortunately, the server hosting my IRC ZNC bouncer and my previous
website also started to get too full for what I wanted to do,
so I had to get a new VPS for hosting my services, and I'm
using the old one just to keep my email server.
Before moving out I also had a Google account that was already asking for money to
keep my Google email account space (I would have had to pay Google for doing
data analysis on my email...).

So I decided to quit.
Quitting facebook5, google6 and dropbox18.
Probably also quitting twitter in the near future.

I started setting up my servers, but I wanted something simple to set up
that I could easily move away from if I had any kind of problem
(like moving to a different server, or just keeping simple data backups).

For now I'm heavily relying on docker.
I changed my Google email to mailcow17, and having control over your own mail service is
a really good experience.
Mailcow uses only open source, like SOGo, which is also really easy to use
and offers the possibility to make mail filters similar to Google Mail's.
The move to mailcow was straightforward, but I still need to finish moving
all my mail to the new account.
I moved away from Google Drive and Dropbox to Nextcloud + Collabora Online (stable LibreOffice Online)7.
I installed ZNC again, plus a Quassel core, for my IRC bouncer.
I used Grammarly for some time in the browser and now I'm using LanguageTool9
on my own docker server.
I stopped searching for videos on YouTube and am just using PeerTube10.
I'm still unfortunately using Twitter, but I opened an account on Mastodon11 (fosstodon);
I could talk with the owner and it looks like a reasonable setup.
Google search became searx12 + YaCy13.
Android became LineageOS19 + F-Droid20 + Aurora Store28 (unfortunately not all the applications that I need are open source). Also, my passwords have been moved away from LastPass to bitwarden21,
keepassxc22 and pass23.
The feeling you get by self-hosting most of the services you use
is definitely, as Douglas Rushkoff (Team Human24) would say, more human.
It is less the internet of the corporations and feels more like what the internet needs to be:
something that is managed and owned by humans, not by algorithms trying to
use your data to drive growth.

A nice inspiration for quitting was privacytools.io1

Read also:
Nothing to hide argument (Wikipedia)2
How do you counter the "I have nothing to hide?" argument? (reddit.com)3
'I've Got Nothing to Hide' and Other Misunderstandings of Privacy (Daniel J. Solove - San Diego Law Review)4
Richard Stallman on privacy16

I also moved from ikiwiki to Pelican, but this was more of a personal preference: ikiwiki is really good, but Pelican25 is simpler for me to customize, as it is made in Python.
I also went back to Qtile26 from i3.

So enjoy my new blog and my new rants :)

January 12, 2019
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
My First Review (January 12, 2019, 18:06 UTC)

This was a test post, but I think it can become a real post.
The test site post was this:
"Following is a review of my favorite mechanical keyboard."

I actually recently bought a new keyboard.
Usually I use a Happy Hacking Keyboard Professional 2, English layout; I have two of them.

I really like my HHKB and have no drawbacks using it.
Both keyboards are modded with the Hasu TMK controller.
The firmware is flashed with a Colemak layout.

But recently I saw the advertisement for the Ultimate Hacking Keyboard.
It is interesting that it was crowdfunded and looks
heavily customizable.
It looked pretty solid, so I bought one.
Here it is:

As a Colemak user, having the labels printed on the keys was
fairly useless to me.
But I had no problem remapping the firmware to follow
the Colemak layout.

January 09, 2019
FOSDEM 2019 (January 09, 2019, 00:00 UTC)


It’s FOSDEM time again! Join us at Université libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. This year’s FOSDEM 2019 will be held on February 2nd and 3rd.

Our developers will be happy to greet all open source enthusiasts at our Gentoo stand in building K. Visit this year’s wiki page to see who’s coming. So far eight developers have specified their attendance, with most likely many more on the way!

January 05, 2019
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)

Going to be using this blog post to add bits and pieces of how to use proteus to handle data in tryton.

Just noticed the proteus readme is quite good: here's a link to the proteus github

IMPORTANT: One thing I noticed (the hard way): if you are connected with a proteus session and you add & activate a module (at least when it is not done through proteus), you need to re-connect, as otherwise it does not seem to pick up things like extra fields added to models.

First thing: connect:

from proteus import config, Model, Wizard, Report
pcfg = config.set_trytond(database='trytond', config_file='/etc/tryton/trytond.conf')

Then we just fetch our parties:

Party = Model.get('party.party')
all_parties = Party.find([])
for p in all_parties:
    print(p.name, p.addresses[0].full_address)

This will print out all names and the first full address of each.

Party relations (a separate module):

p.relations

Would give you output similar to this (if there are relations - in my case 2):


Interesting fields there (for me):

p.relations[0].type.name # returns the name of the relation as entered
p.relations[0].reverse # reverse relation as entered
# the next 2 are self-explanatory anyway; just note the trailing '_' on from_
p.relations[0].from_ # the party the relation goes from
p.relations[0].to # the party the relation goes to

Now to add a new one:

np = Party()
np.name='Test Customer from Proteus'

This just creates a new party with only a name; default values that are set up (like default language) are applied. Until it is saved, the id (np.id) is -1. By default it also comes with one (empty) address.

Here's how to edit/add:

np.save() # don't forget this

Extra fields from other modules (possibly your own) can be accessed in exactly the same way as the normal ones (just don't forget to reconnect - like I did ;) )

Here's how you refresh the data:

np.reload()


January 04, 2019
Sergei Trofimovich a.k.a. slyfox (homepage, bugs)
page fault handling on alpha (January 04, 2019, 00:00 UTC)


This was a quiet evening on #gentoo-alpha. Matt Turner shared an unusual kernel crash report seen by Dmitry V. Levin.

Dmitry noticed that one of his AlphaServer ES40 machines could not handle the strace test suite and generated kernel oopses:

Unable to handle kernel paging request at virtual address ffffffffffff9468
CPU 3 
aio(26027): Oops 0
pc = [<fffffc00004eddf8>]  ra = [<fffffc00004edd5c>]  ps = 0000    Not tainted
pc is at sys_io_submit+0x108/0x200
ra is at sys_io_submit+0x6c/0x200
v0 = fffffc00c58e6300  t0 = fffffffffffffff2  t1 = 000002000025e000
t2 = fffffc01f159fef8  t3 = fffffc0001009640  t4 = fffffc0000e0f6e0
t5 = 0000020001002e9e  t6 = 4c41564e49452031  t7 = fffffc01f159c000
s0 = 0000000000000002  s1 = 000002000025e000  s2 = 0000000000000000
s3 = 0000000000000000  s4 = 0000000000000000  s5 = fffffffffffffff2
s6 = fffffc00c58e6300
a0 = fffffc00c58e6300  a1 = 0000000000000000  a2 = 000002000025e000
a3 = 00000200001ac260  a4 = 00000200001ac1e8  a5 = 0000000000000001
t8 = 0000000000000008  t9 = 000000011f8bce30  t10= 00000200001ac440
t11= 0000000000000000  pv = fffffc00006fd320  at = 0000000000000000
gp = 0000000000000000  sp = 00000000265fd174
Disabling lock debugging due to kernel taint
[<fffffc0000311404>] entSys+0xa4/0xc0

Oopses should never happen against userland workloads.

Here the crash happened right in the io_submit() syscall. “Should be a very simple arch-specific bug. Can’t take much time to fix.” was my thought. Haha.


Dmitry provided a very nice reproducer of the problem (extracted from the strace test suite):

The idea of this test is simple: create a valid context for asynchronous IO and pass an invalid pointer ptr to it. The mmap()/munmap() trick makes sure that ptr points at an invalid, non-NULL user memory location.

To reproduce and explore the bug locally I picked qemu alpha system emulation. To avoid the complexity of searching for a proper IDE driver for the root filesystem, I built a minimal linux kernel with only initramfs support, without filesystem or block device support.

Then I put statically linked reproducer and busybox into initramfs:

$ LANG=C tree root/
|-- aio (statically linked aio.c)
|-- aio.c (source above)
|-- bin
|   |-- busybox (statically linked busybox)
|   `-- sh -> busybox
|-- dev (empty dir)
|-- init (script below)
|-- proc (empty dir)
`-- sys (empty dir)

4 directories, 5 files

$ cat root/init

#!/bin/sh

mount -t proc none /proc
mount -t sysfs none /sys
exec bin/sh

To run qemu system emulation against the above I used the following one-liner:

run-qemu.sh builds initramfs image and runs kernel against it.

Cross-compiling vmlinux on alpha is also straightforward:

I built kernel and started a VM as:

# build kernel
$ ./mk.sh -j$(nproc)

# run kernel
$ ./run-qemu.sh -curses
[    0.650390] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
/ #

That was simple. I got the prompt! Then I ran statically linked /aio reproducer as:

/ # /aio
Unable to handle kernel paging request at virtual address 0000000000000000
aio(26027): Oops -1

Woohoo! Crashed \o/ This allowed me to explore failure in more detail.

I used -curses (instead of default -sdl) to ease copying of text back from VM.

The fault address pattern was slightly different from the original report. I hoped it was a manifestation of the same bug. Worst case, I would find another bug to fix and get back to the original one again :)

Into the rabbit hole

The oops was happening every time I ran /aio on a 4.20 kernel. The io_submit(2) man page claims it’s an old system call from the 2.5 kernel era, so it should not be a recent addition.

How about older kernels? Did they also fail?

I was still not sure I had a correct qemu/kernel setup, so I decided to pick the older 4.14 kernel, known to run without major problems on our alpha box. The 4.14 kernel did not crash in qemu either. This reassured me that my setup was not completely broken.

I got my first suspect: a kernel regression.

The reproducer was very stable. Kernel bisection got me to the first regressed commit:

commit 95af8496ac48263badf5b8dde5e06ef35aaace2b
Author: Al Viro <viro@zeniv.linux.org.uk>
Date:   Sat May 26 19:43:16 2018 -0400

    aio: shift copyin of iocb into io_submit_one()

    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

:040000 040000 20dd44ac4706540b1c1d4085e4269bd8590f4e80 05d477161223e5062f2f781b462e0222c733fe3d M      fs

The commit clearly touched io_submit() syscall handling. But there was a problem: the change was not alpha-specific at all. If the commit had any problems it should also have caused problems on other architectures.

To get a better understanding of the probable cause I decided to look at the failure mechanics. Actual values of local variables in io_submit() right before the crash might get me somewhere. I started adding printk() statements around the SYSCALL_DEFINE3(io_submit, …) implementation.

At some point, after enough printk() calls were added, the crashes disappeared. This confirmed it was not just a logical bug but something more subtle.

I was also not able to analyze the generated code difference between the printk()/no-printk() versions.

Then I attempted to isolate the faulty code into a separate function, but without much success either: any attempt to factor out a subset of io_submit() into a separate function made the bug go away.

It was time for the next hypothesis: mysterious incorrect compiler code generation, or an invalid __asm__ constraint for some kernel macro, exposed by minor code motion.

Single stepping through kernel

How to get an insight into the details without affecting original code too much?

Having failed at a minimal code snippet, I attempted to catch the exact place of the page fault by single-stepping through the kernel using gdb.

For qemu-loadable kernels the procedure is very straightforward:

  • start gdb server on qemu side with -s option
  • start gdb client on host side with target remote localhost:1234

The same procedure in exact commands (I’m hooking into sys_io_submit()):

<at tty1>
$ ./run-qemu.sh -s

<at tty2>
$ gdb --quiet vmlinux
(gdb) target remote localhost:1234
  Remote debugging using localhost:1234
  0xfffffc0000000180 in ?? ()
(gdb) break sys_io_submit 
  Breakpoint 1 at 0xfffffc000117f890: file ../linux-2.6/fs/aio.c, line 1890.
(gdb) continue

<at qemu>
  # /aio

<at tty2 again>
  Breakpoint 1, 0xfffffc000117f89c in sys_io_submit ()
(gdb) bt
  Breakpoint 1, __se_sys_io_submit (ctx_id=2199023255552, nr=1, iocbpp=2199023271936) at ../linux-2.6/fs/aio.c:1890
  1890    SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
(gdb) bt
  #0  __se_sys_io_submit (ctx_id=2199023255552, nr=1, iocbpp=2199023271936) at ../linux-2.6/fs/aio.c:1890
  #1  0xfffffc0001011254 in entSys () at ../linux-2.6/arch/alpha/kernel/entry.S:476

Now we can single-step through every instruction with nexti and check where things go wrong.

To poke around efficiently I kept looking at these cheat sheets:

Register names are especially useful, as each alpha register has two names: numeric and mnemonic. Source code might use one form while gdb disassembly uses the other. For example $16/a0 in gas ($r16/$a0 in gdb) is the register used to pass the first integer argument to a function.

After many backs and forths I found suspicious behaviour in the handling of a single instruction:

(gdb) disassemble
  => 0xfffffc000117f968 <+216>:   ldq     a1,0(t1)
     0xfffffc000117f96c <+220>:   bne     t0,0xfffffc000117f9c0 <__se_sys_io_submit+304>
(gdb) p $gp
    $1 = (void *) 0xfffffc0001c70908 # GOT
(gdb) p $a1
    $2 = 0
(gdb) p $t0
    $3 = 0
(gdb) nexti
     0xfffffc000117f968 <+216>:   ldq     a1,0(t1)
  => 0xfffffc000117f96c <+220>:   bne     t0,0xfffffc000117f9c0 <__se_sys_io_submit+304>
(gdb) p $gp
    $4 = (void *) 0x0
(gdb) p $a1
    $5 = 0
(gdb) p $t0
   $6 = -14 # -EFAULT

The above gdb session executes single ldq a1,0(t1) instruction and observes effect on the registers gp, a1, t0.

Normally ldq a1, 0(t1) would read 64-bit value pointed by t1 into a1 register and leave t0 and gp untouched.

The main effect seen here, which causes the later oops, is the sudden gp change. gp is supposed to point to the GOT (global offset table) of the current “program” (the kernel in this case). Something managed to corrupt it.

By construction of the /aio test case, the instruction ldq a1,0(t1) is not supposed to read any valid data: our test passes an invalid memory location there. All the register-changing effects are the result of page fault handling.

The smoking gun

Grepping around the arch/alpha directory I noticed the entMM page fault handling entry.

It claims to handle page faults and keeps the gp value on the stack. Let’s trace the fate of that on-stack value as the page fault happens:

(gdb) disassemble
  => 0xfffffc000117f968 <+216>:   ldq     a1,0(t1)
     0xfffffc000117f96c <+220>:   bne     t0,0xfffffc000117f9c0 <__se_sys_io_submit+304>
(gdb) p $gp
    $1 = (void *) 0xfffffc0001c70908 # GOT

(gdb) break entMM
    Breakpoint 2 at 0xfffffc0001010e10: file ../linux-2.6/arch/alpha/kernel/entry.S, line 200
(gdb) continue
    Breakpoint 2, entMM () at ../linux-2.6/arch/alpha/kernel/entry.S:200
(gdb) x/8a $sp
    0xfffffc003f51be78:     0x0     0xfffffc000117f968 <__se_sys_io_submit+216>
    0xfffffc003f51be88:     0xfffffc0001c70908 <# GOT> 0xfffffc003f4f2040
    0xfffffc003f51be98:     0x0     0x20000004000 <# userland address>
    0xfffffc003f51bea8:     0xfffffc0001011254 <entSys+164> 0x120001090
(gdb) watch -l *0xfffffc003f51be88
    Hardware watchpoint 3: -location *0xfffffc003f51be88
(gdb) continue
    Old value = 29821192
    New value = 0
    0xfffffc00010319d0 in do_page_fault (address=2199023271936, mmcsr=<optimized out>, cause=0, regs=0xfffffc003f51bdc0)
       at ../linux-2.6/arch/alpha/mm/fault.c:199
    199                     newpc = fixup_exception(dpf_reg, fixup, regs->pc);

Above gdb session does the following:

  • break entMM: break at page fault
  • x/8a $sp: print 8 top stack values at entMM call time
  • spot gp value at 0xfffffc003f51be88 (sp+16) address
  • watch -l *0xfffffc003f51be88: set hardware watchpoint at a memory location where gp is stored.

Watch triggers at seemingly relevant place: fixup_exception() where exception handler adjusts registers before resuming the faulted task.

Looking around I found an off-by-two bug in page fault handling code. The fix was simple:

Patch is proposed upstream as https://lkml.org/lkml/2018/12/31/83.

Effect of the patch is to write 0 into on-stack location of a1 ($17 register) instead of location of gp.

That’s it!

Page fault handling magic

I have always wondered how the kernel reads data from userspace when it’s needed. How does it do swap-in if the data is not available? How does it check for privileged access? That kind of stuff.

The above investigation covers most of involved components:

  • the ldq instruction is used to force the read from userspace (just as one would read from kernel memory)
  • entMM/do_page_fault() handles the userspace fault and resumes the task as if the fault had not happened

The few minor missing details are:

  • How does kernel know which instructions are known to generate user page faults?
  • What piece of hardware holds a pointer to page fault handler on alpha?

Let’s expand the code involved in page fault handling. Call site:

which is translated to already familiar pair of instructions:

=> 0xfffffc000117f968 <+216>:   ldq     a1,0(t1)
   0xfffffc000117f96c <+220>:   bne     t0,0xfffffc000117f9c0 <__se_sys_io_submit+304>

Fun fact: get_user() has two outputs: the normal function return value (stored in the t0 register) and the user_iocb value (stored in the a1 register).

Let’s expand get_user() implementation on alpha:

A lot of simple code above does two things:

  1. use __access_ok() to check that the address is a userspace address, to prevent data exfiltration from the kernel.
  2. dispatch across the different supported sizes to do the rest of the work. Our case is a simple 64-bit read.

Looking at __get_user_64() in more detail:

A few observations:

  • The actual check for address validity is done by the CPU: the load-8-bytes instruction (ldq %0,%2) is executed and the MMU handles any page fault
  • There is no explicit code to recover from the exception. All auxiliary information is put into the __ex_table section.
  • The ldq %0,%2 instruction uses only parameters “0” (__gu_val) and “2”(addr) but does not use “1”(__gu_err) directly.
  • __ex_table uses a cool lda instruction hack to encode auxiliary data:
    • the __gu_err error register
    • a pointer to the next instruction after the faulty instruction: cont-label (or 2b-1b)
    • the result register

The page fault handling mechanism knows how to get to the __ex_table data where “1”(__gu_err) is encoded, and is able to reach that data to use it later in the mysterious fixup_exception() we saw before.

In case of alpha (and many other targets) __ex_table collection is defined by arch/alpha/kernel/vmlinux.lds.S linker script using EXCEPTION_TABLE() macro:

#define EXCEPTION_TABLE(align)                         \
    . = ALIGN(align);                                  \
    __ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {  \
        __start___ex_table = .;                        \
        KEEP(*(__ex_table))                            \
        __stop___ex_table = .;                         \
    }
Here all __ex_table sections are gathered between __start___ex_table and __stop___ex_table symbols. Those are handled by generic kernel/extable.c code:

search_exception_tables() resolves a fault address to the relevant struct exception_table_entry.

Let’s look at the definition of struct exception_table_entry:

Note how the lda in-memory instruction format is used to encode all the details needed by fixup_exception()! In the case of our sys_io_submit() crash it would be lda a1, 4(t0) (lda r17, 4(r1)):

(gdb) bt
  #0  0xfffffc00010319d0 in do_page_fault (address=2199023271936, mmcsr=<optimized out>, cause=0, 
      regs=0xfffffc003f51bdc0) at ../linux-2.6/arch/alpha/mm/fault.c:199
  #1  0xfffffc0001010eac in entMM () at ../linux-2.6/arch/alpha/kernel/entry.S:222
(gdb) p *fixup
    $4 = {insn = -2584576, fixup = {unit = 572588036, bits = {nextinsn = 4, errreg = 1, valreg = 17}}}

Note how page fault handling also advances pc (program counter or instruction pointer) nextinsn=4 bytes forward to skip failed ldq instruction.

arch/alpha/mm/fault.c does all the heavy-lifting of handling page faults. Here is a small snippet that handles our case of faults covered by exception handling:

do_page_fault() also does a few other page-fault related things I carefully skipped here:

  • page fault accounting
  • handling of missing support for “prefetch” instruction
  • stack growth
  • OOM handling
  • SIGSEGV, SIGBUS propagation

Once do_page_fault() gets control it updates regs struct in memory for faulted task using dpf_reg() macro. It looks unusual:

  • refers to negative offsets sometimes: (r) <= 15 ? (r)-16 (out of struct pt_regs)
  • defines not one but a few ranges of registers: 0-8, 9-15, 16-18, 19-…

struct pt_regs as is:

Now the meaning of dpf_reg() should be clearer. As pt_regs keeps only a subset of registers, it has to account for gaps and offsets.

Here I noticed the bug: the r16-r18 range is handled incorrectly by dpf_reg(): the r16 slot is at (r)+10 (regs+26 for r16), not at (r)+8.

The implementation also means that dpf_reg() can’t handle the gp(r29) and sp(r30) registers as value registers. That should not normally be a problem, as gcc never assigns those registers to temporary computations and keeps them holding the GOT pointer and stack pointer at all times. But one could write assembly code that does it :)

If all the above makes no sense to you it’s ok. Check kernel documentation for x86 exception handling instead which uses very similar technique.

To be able to handle all registers we need to bring in r9-r15. Those are written right before struct pt_regs right at entMM entry:

Here there are a few subtle things going on:

  1. at entry, entMM already has a frame of the last 6 values: ps,pc,gp,r16-r18.
  2. then SAVE_ALL (not pasted above) stores r0-r8,r19-r28,hae,trap_a0-trap_a2
  3. and only then are r9-r15 stored (note the subq $sp, 56, $sp to place them before).

In C land only 2. and 3. constitute struct pt_regs; 1. happens to be outside it and needs the negative addressing we saw in dpf_reg().

As I understand it, the original idea was to share the ret_from_sys_call part across various kernel entry points:

  • system calls: entSys
  • arithmetic exceptions: entArith
  • external interrupts: entInt
  • internal faults (bad opcode, FPU failures, breakpoint traps, ): entIF
  • page faults: entMM
  • handling of unaligned access: entUna
  • MILO debug break: entDbg

Of the above, only page faults and unaligned accesses need read/write access to every register.

In practice entUna uses different layout and simpler code patching.

The last step to get entMM executed at a fault handler is to register it in alpha’s PALcode subsystem (Privileged Architecture Library code).

It’s done in trap_init(), along with the other handlers. Simple!

Or not so simple. What is that PALcode thing (wiki’s link)? It looks like a tiny hypervisor that provides service points for the CPU, which you can access with the call_pal <number> instruction.

It puzzled me a lot what call_pal was supposed to do. Should it transfer control somewhere else, or is it a normal call?

Actually, given that it’s a generic mechanism to do “privileged service calls”, it can do both. I was not able to quickly find the details on how different service calls affect registers, and found it simplest to navigate through qemu’s PAL source.

AFAIU the PALcode of a real alpha machine is a proprietary processor-specific blob that could have its own quirks.

Back to our qemu-palcode, let’s look at a few examples.

The first is the function-like call_pal PAL_swpipl used in entMM and others:

I know almost nothing about PAL, but I suspect mfpr means move-from-processor-register. hw_rei/hw_ret is a branch from a PAL service routine back to the “unprivileged” user/kernel.

hw_rei does normal return from call_pal to the instruction next to call_pal.

Here call_pal PAL_rti is an example of task-switch-like routine:

Here the target (p5, some service-only hardware register) was passed on the stack in FRM_Q_PC($sp).

That PAL_rti managed to confuse me a lot, as I was trying to single-step through it as a normal function. I did not notice how I was jumping from page fault handling code to timer interrupt handling code.

But it all became clear once I found its definition.

Parting words

  • qemu can emulate alpha well enough to debug obscure kernel bugs
  • The gdb server is very powerful for debugging unmodified kernel code, including hardware watchpoints, dumping registers, and watching interrupt handling routines
  • My initial guesses were all incorrect: it was not a kernel regression, not a compiler deficiency and not an __asm__ constraint annotation bug.
  • PALcode, while a nice way to abstract away low-level details of the CPU implementation, complicates debugging of the operating system. PALcode also happens to be OS-dependent!
  • This was another one-liner fix :)
  • The bug had always been present in the kernel (for about 20 years?).

Have fun!

Posted on January 4, 2019

December 30, 2018
Luca Barbato a.k.a. lu_zero (homepage, bugs)

Since there are plenty of blog posts about what people would like to have or will implement in rust in 2019, here is mine.

I spent the last few weeks of my spare time making a C API for rav1e, called crav1e. Overall the experience has been a mixed bag, and there is ample room for improvement.

Ideally I’d like to have by the end of the year something along the lines of:

$ cargo install-library --prefix=/usr --libdir=/usr/lib64 --destdir=/staging/place

So that it would:
– build a valid cdylib+staticlib
– produce a correct header
– produce a correct pkg-config file
– install all of it in the right paths

All of this requiring a quite basic build.rs and, probably, an applet.

What is it all about?

Building and installing shared libraries properly is quite hard, even more so across multiple platforms.

Right now cargo has quite limited install capabilities; there is some work pending on extending it, with an open issue and a patch.

Distributions that are experimenting with building everything as shared libraries (probably way too early, since the rust ABI is neither stable nor being stabilized yet) also hit those problems.

Why it is important

rust is a pretty good language and has a fairly simple way to interact, in both directions, with any other language that can produce or consume C-ABI-compatible object code.

This is already quite useful if you want to build a small static archive and link it in your larger application and/or library.

An example of this use-case is librsvg.

Such a heterogeneous environment warrants a modicum of additional machinery and complication.

But if your whole library is written in rust, it is a fairly annoying amount of boilerplate that you would rather avoid.

Current status

If you want to provide C bindings for your crates, there is no single perfect solution right now.

What works well already

Currently building the library itself works fine and it is really straightforward:

  • It is quite easy to mark data types and functions to be C-compatible:
#[repr(C)]
pub struct Foo {
    a: Bar,
}

#[no_mangle]
pub unsafe extern "C" fn make_foo() -> *mut Foo {
    // hand ownership of a heap-allocated Foo to the C caller
    Box::into_raw(Box::new(Foo { a: Bar::default() }))
}
  • rustc and cargo are aware of different crate-types, selecting staticlib produces a valid library
[lib]
name = "rav1e"
crate-type = ["staticlib"]
  • cbindgen can produce a usable C-header from a crate using few lines of build.rs or a stand-alone applet and a toml configuration file.
extern crate cbindgen;

fn main() {
    let crate_dir = std::env::var("CARGO_MANIFEST_DIR").unwrap();
    let header_path: std::path::PathBuf = ["include", "rav1e.h"].iter().collect();

    cbindgen::generate(crate_dir)
        .unwrap()
        .write_to_file(header_path);
}

header = "// SPDX-License-Identifier: MIT"
sys_includes = ["stddef.h"]
include_guard = "RAV1E_H"
tab_width = 4
style = "Type"
language = "C"

parse_deps = true
include = ['rav1e']
expand = ['rav1e']

prefix = "Ra"
item_types = ["enums", "structs", "unions", "typedefs", "opaque", "functions"]

rename_variants = "ScreamingSnakeCase"
prefix_with_name = true

Now issuing cargo build --release will get you a .h in the include/ dir and a .a library in target/release; so far it is simple enough.

What sort of works

Once you have a static library, you need an external means to track its dependencies.

Back in the old days there were libtool archives (.la); now we have pkg-config files, which provide more information in a format that is way easier to parse and use.

rustc has --print native-static-libs to produce the list of additional libraries to link, BUT it prints it to stderr and only as a side effect of the actual build process.

My fairly ugly hack has been adding a dummy empty subcrate just to produce the link line using

cargo rustc -- --print native-static-libs 2>&1| grep native-static-libs | cut -d ':' -f 3

And then generate the .pc file from a template.
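A minimal sketch of that template step, assuming made-up paths, version and link line (the real values come from your build system and from the link-line extraction above):

```shell
# Hypothetical values; in practice these come from the build system and
# from the `--print native-static-libs` hack shown above.
prefix=/usr
libdir=${prefix}/lib64
includedir=${prefix}/include
libs_private="-lpthread -lm -ldl"

# A hand-written pkg-config template with @placeholders@.
cat > rav1e.pc.in <<'EOF'
prefix=@prefix@
libdir=@libdir@
includedir=@includedir@

Name: rav1e
Description: AV1 encoder library
Version: 0.1.0
Libs: -L${libdir} -lrav1e
Libs.private: @libs_private@
Cflags: -I${includedir}
EOF

# Fill in the placeholders to produce the final .pc file.
sed -e "s|@prefix@|${prefix}|" \
    -e "s|@libdir@|${libdir}|" \
    -e "s|@includedir@|${includedir}|" \
    -e "s|@libs_private@|${libs_private}|" \
    rav1e.pc.in > rav1e.pc
```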

This is anything but straightforward, and because of how cargo rustc works, you may end up adding an empty subcrate just to extract this information quickly.

What is missing

Once you have your library, your header and your pkg-config file, you probably would like to install the library somehow and/or make a package out of it.

cargo install does not currently cover it: it works only for binaries. This will hopefully change, but right now you have to pick an external build system you are happy with and hack your way into integrating the steps mentioned above.

For crav1e I ended up hacking a quite crude Makefile.

And with that at least a pure-rust static library can be built and installed with the common:

make DESTDIR=/staging/place prefix=/usr libdir=/usr/lib64
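For illustration, the install half of such a Makefile boils down to a handful of GNU install(1) calls. This sketch creates empty stand-ins for the build artifacts so it is self-contained; all paths and file names are assumptions:

```shell
# Empty stand-ins for what `cargo build --release` and cbindgen would produce.
mkdir -p target/release include
: > target/release/librav1e.a
: > include/rav1e.h
: > rav1e.pc

DESTDIR=./staging
prefix=/usr
libdir=${prefix}/lib64
includedir=${prefix}/include

# Install the static library, header and pkg-config file into the staging tree.
install -D -m 644 target/release/librav1e.a "${DESTDIR}${libdir}/librav1e.a"
install -D -m 644 include/rav1e.h "${DESTDIR}${includedir}/rav1e.h"
install -D -m 644 rav1e.pc "${DESTDIR}${libdir}/pkgconfig/rav1e.pc"
```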

Dynamic libraries

Given that rustc and cargo have the cdylib crate type, one would assume we could just add the type, modify our build-system contraption a little and go our merry way.

Sadly not. On most common platforms a dynamic library (or shared object) requires some additional metadata to guide the runtime linker.

The current widespread practice is to use tools such as patchelf or install_name_tool after the build, but this is quite brittle and requires extra tooling.

My plans for the 2019

rustc has a means to pass this information to the compile-time linker, but there is no proper way to express it in cargo. I already tried to provide a solution, but I’ll have to go through the RFC route to make sure there is community consensus and the feature is properly documented.

Since this kind of metadata is platform-specific, it would be better to have this information produced and passed on by something external to cargo itself. Having it as an applet or a build.rs dependency makes it easier to support more platforms little by little, and allows overrides without having to go through a cargo update.

The applet could also take care of properly creating the .pc file and installing it, since it would have access to all the required information.

Some effort could also be put into streamlining the extraction of the static-library link line and sparing some roundtrips.

I guess that’s all for what I’d really like to have next year in Rust, and I’m confident I’ll have time to deliver it myself 🙂

December 06, 2018
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Scylla Summit 2018 write-up (December 06, 2018, 22:53 UTC)

It’s been almost one month since I had the chance to attend and speak at Scylla Summit 2018 so I’m relieved to finally publish a short write-up on the key things I wanted to share about this wonderful event!

Make Scylla boring

This statement of Glauber Costa sums up what looked to me to be the main driver of the engineering efforts put into Scylla lately: making it work so consistently well on any kind of workload that it’s boring to operate 🙂

I will follow up on this statement to highlight the things I heard and (hopefully) understood during the summit. I hope you’ll find it insightful.

Reduced operational efforts

The thread-per-core and queues design still has a lot of untapped potential to be leveraged.

The recent addition of RPC streaming capabilities to seastar allows a drastic reduction in the time it takes the cluster to grow or shrink (data rebalancing / resynchronization).

Incremental compaction is also very promising as this background process is one of the most expensive there is in the database’s design.

I was happy to hear that scylla-manager will soon be made available and free to use with basic features, while more advanced ones (like backup/restore) are retained for the enterprise version.
I also noticed that the current version did not support storing its configuration on SSL-enabled clusters. So I directly asked Michał for it and I’m glad that it will be released in version 1.3.1.

Performant multi-tenancy

Why choose between real-time OLTP & analytics OLAP workloads?

The goal here is to be able to run both on the same cluster by giving users the ability to assign “SLA” shares to ROLES. That’s basically like pools in Hadoop, at a much finer grain, since it will create dedicated queues weighted by their share.

Having one queue per usage, with full accounting, will make it possible to limit resources efficiently and let users have a say in their latency SLAs.

But Scylla also has a lot to do in the background to run smoothly. So while this design pattern was already applied to temper compactions, a lot of work has also been done on automatic flow control and back pressure.

For instance, Materialized Views are updated asynchronously, which means that while we can interact with and put a lot of pressure on the table they are based on (called the Main Table), we could overwhelm the background work needed to keep the MVs’ View Tables in sync. To mitigate this, a smart back pressure approach was developed that will throttle the clients to make sure that Scylla can manage to do everything at the best performance the hardware allows!

I was happy to hear that work on tiered storage is also planned to better optimize disk space costs for certain workloads.

Last but not least, columnar storage optimized for time series and analytics workloads is also something the developers are looking at.

Latency is expensive

If you care for latency, you might be happy to hear that a new polling API (named IOCB_CMD_POLL) has been contributed by Christoph Hellwig and Avi Kivity to the 4.19 Linux kernel which avoids context switching I/O by using a shared ring between kernel and userspace. Scylla will be using it by default if the kernel supports it.

The iotune utility has been upgraded since 2.3 to generate an enhanced I/O configuration.

Also, persistent (disk backed) in-memory tables are getting ready and are very promising for latency sensitive workloads!

A word on drivers

ScyllaDB has been relying on the Datastax drivers since the start. While that’s a good thing for the whole community, it’s important to note that the current drivers do not know about, and thus cannot leverage, the shard-per-CPU approach to data that Scylla uses.

Discussions took place and it seems that Datastax will not allow the protocol to evolve so that drivers could discover if the connected cluster is shard aware or not and then use this information to be more clever in which write/read path to use.

So for now ScyllaDB has been forking and developing their shard aware drivers for Java and Go (no Python yet… I was disappointed).

Kubernetes & containers

The ScyllaDB guys of course couldn’t avoid the Kubernetes frenzy so Moreno Garcia gave a lot of feedback and tips on how to operate Scylla on docker with minimal performance degradation.

Kubernetes has been designed for stateless applications, not stateful ones, and Docker does some automatic magic that has rather big performance hits on Scylla. You will basically have to play with affinities to dedicate one Scylla instance to one server, with a “retain” reclaim policy.

Remember that the official Scylla docker image runs with dev-mode enabled by default which turns off all performance checks on start. So start by disabling that and look at all the tips and literature that Moreno has put online!

Scylla 3.0

A lot has been written on it already, so I will just be brief about the things that are important to understand from my point of view.

  • Materialized Views do backfill the whole data set
    • this job is done by the view building process
    • you can watch its progress in the system_distributed.view_build_status table
  • Secondary Indexes are Materialized Views under the hood
    • it’s like a reverse pointer to the primary key of the Main Table
    • so if you read the whole row by selecting on the indexed column, two reads will be issued under the hood: one on the indexed MV view table to get the primary key and one on the main table to get the rest of the columns
    • so if your workload is mostly interested in the whole row, you’re better off creating a complete MV to read from than using a SI
    • this is even more true if you plan to do range scans as this double query could lead you to read from multiple nodes instead of one
  • Range scan is way more performant
    • ALLOW FILTERING finally allows a great flexibility by providing server-side filtering!

Random notes

Support for LWT (lightweight transactions) will rely on a future implementation of the Raft consensus algorithm inside Scylla. This work will also benefit Materialized Views consistency. Duarte Nunes will be the one working on this and I envy him very much!

Support for search workloads is high in the ScyllaDB devs priorities so we should definitely hear about it in the coming months.

Support for “mc” sstables (new generation format) is done and will reduce storage requirements thanks to metadata / data compression. Migration will be transparent because Scylla can read previous formats as well so it will upgrade your sstables as it compacts them.

ScyllaDB developers have not settled on how to best implement CDC. I hope they do rather soon because it is crucial in their ability to integrate well with Kafka!

Materialized Views, Secondary Indexes and filtering will benefit from the work on partition key and indexes intersections to avoid server side filtering on the coordinator. That’s an important optimization to come!

Last but not least, I’ve had the pleasure to discuss with Takuya Asada who is the packager of Scylla for RedHat/CentOS & Debian/Ubuntu. We discussed Gentoo Linux packaging requirements as well as the recent and promising work on a relocatable package. We will collaborate more closely in the future!

November 25, 2018
Michał Górny a.k.a. mgorny (homepage, bugs)
Portability of tar features (November 25, 2018, 14:26 UTC)

The tar format is one of the oldest archive formats in use. It comes as no surprise that it is ugly — built as layers of hacks on the older format versions to overcome their limitations. However, given the POSIX standardization in the late 80s and the popularity of GNU tar, you would expect the interoperability problems to be mostly resolved nowadays.

This article is directly inspired by my proof-of-concept work on a new binary package format for Gentoo. My original proposal used the volume label to provide a user- and file(1)-friendly way of distinguishing our binary packages. While it is a GNU tar extension, it falls within the POSIX ustar implementation-defined file format, and you would expect non-compliant implementations to extract it as a regular file. What I did not anticipate is that some implementations reject the whole archive instead.

This naturally raised more questions on how portable various tar formats actually are. To verify that, I have decided to analyze the standards for possible incompatibility dangers and build a suite of test inputs that could be used to check how various implementations cope with that. This article describes those points and provides test results for a number of implementations.

Please note that this article is focused merely on read-wise format compatibility. In other words, it establishes how tar files should be written in order to achieve best probability that it will be read correctly afterwards. It does not investigate what formats the listed tools can write and whether they can correctly create archives using specific features.

Continue reading

November 16, 2018
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)

So I recently had a problem where postgresql would run out of max concurrent connections .. and I wasn't sure what caused it.

So to find out what the problem was I wanted to know what connections were open. After a short search I found the pg_stat_activity table.

of course most info in there is not needed for my case (it has database id, name, pid, usename, application_name, client_addr, state, ...)

but for me this was all I needed:

postgres=# select count(*), datname,state,pid from pg_stat_activity group by datname, state, pid order by datname;
 count |  datname   |        state        |  pid
     1 | dbmail     | idle                | 30092
     1 | dbmail     | idle                | 30095

Or, shorter, just the connections by state and db:

postgres=# select count(*), datname,state from pg_stat_activity group by datname, state order by datname;
 count | datname  |        state
    15 | dbmail   | idle

Of course one could go into more detail, but this made me realize that I could limit some processes that used a lot of connections but are not under heavy load. Really simple once you know where to look - as usual :)

November 13, 2018
Luca Barbato a.k.a. lu_zero (homepage, bugs)

Over the past year I contributed to an AV1 encoder written in rust.

Here is a small tutorial about what is available right now. There is still lots to do, but I think we could enjoy more user feedback (and possibly also some help).

Setting up

Install the rust toolchain

If you do not have rust installed, it is quite simple to get a full environment using rustup

$ curl https://sh.rustup.rs -sSf | sh
# Answer the questions asked and make sure you source the `.profile` file created.
$ source ~/.profile

Install cmake, perl and nasm

rav1e uses libaom for testing, and on x86/x86_64 some components have SIMD variants written directly using nasm.

You may follow the instructions, or just install:
nasm (version 2.13 or better)
perl (any recent perl5)
cmake (any recent version)

Once you have those dependencies in place you are set.

Building rav1e

We use cargo, so the process is straightforward:

## Pull in the customized libaom if you want to run all the tests
$ git submodule update --init

## Build everything
$ cargo build --release

## Test to make sure everything works as intended
$ cargo test --features decode_test --release

## Install rav1e
$ cargo install

Using rav1e

Right now rav1e has a quite simple interface:

rav1e 0.1.0
AV1 video encoder

USAGE:
    rav1e [OPTIONS] <INPUT> --output <OUTPUT>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -I, --keyint <keyint>              Keyframe interval [default: 30]
    -l, --limit <limit>                Maximum number of frames to encode [default: 0]
        --low_latency <low_latency>    low latency mode. true or false [default: true]
    -o, --output <output>              Compressed AV1 in IVF video output
        --quantizer <quantizer>        Quantizer (0-255) [default: 100]
    -s, --speed <speed>                Speed level (0(slow)-10(fast)) [default: 3]
        --tune <tune>                  Quality tuning (Will enforce partition sizes >= 8x8) [default: psnr]  [possible
                                       values: Psnr, Psychovisual]

ARGS:
    <INPUT>    Uncompressed YUV4MPEG2 video input

It accepts y4m raw source and produces ivf files.

You can configure the encoder by setting the speed and quantizer levels.

The low_latency flag can be turned off to run some additional analysis over a set of frames and have additional quality gains.


While ave and gst-rs will use the rav1e crate directly, a number of applications such as handbrake or vlc would be much happier consuming a C API.

Thanks to the staticlib target and cbindgen, it is quite easy to produce a C-ABI library and its matching header.


crav1e is built using cargo, so nothing special is needed right now beside nasm if you are building it on x86/x86_64.

Build the library

This step is completely straightforward, you can build it as release:

$ cargo build --release

or as debug

$ cargo build

It will produce a target/release/librav1e.a or a target/debug/librav1e.a.
The C header will be in include/rav1e.h.

Try the example code

I provided a quite minimal sample case.

cc -Wall c-examples/simple_encoding.c -Ltarget/release/ -lrav1e -Iinclude/ -o c-examples/simple_encoding

If it builds and runs correctly you are set.

Manually copy the .a and the .h

Currently cargo install does not work for our purposes, but it will change in the future.

$ cp target/release/librav1e.a /usr/local/lib
$ cp include/rav1e.h /usr/local/include/

Missing pieces

Right now crav1e works well enough, but there are a few shortcomings I’m trying to address.

Shared library support

The cdylib target does exist and produces a nearly usable library, but there are some issues with soname support. I’m trying to address them with upstream, but it might take some time.

Meanwhile some people suggest using patchelf or similar tools to fix the library after the fact.

Install target

cargo is generally awesome, but sadly its support for installing arbitrary files to arbitrary paths is limited; luckily there are people proposing solutions.

pkg-config file generation

I consider a library not proper if a .pc file is not provided with it.

Right now there are means to extract the information needed to build a pkg-config file, but there isn’t a simple way to do it.

$ cargo rustc -- --print native-static-libs

Provides what is needed for Libs.private; ideally the .pc file should be created as part of the install step, since you need to know the prefix, libdir and includedir paths.
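As a worked example, here is the kind of post-processing that turns that output into a Libs.private entry. The sample link line below is hard-coded (an assumption); the real one is whatever rustc prints for your crate:

```shell
# Pretend this is the stderr note emitted by `cargo rustc -- --print native-static-libs`.
line='note: native-static-libs: -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc'

# Keep everything after the second colon, i.e. the libraries themselves.
libs=$(printf '%s\n' "$line" | grep native-static-libs | cut -d ':' -f 3)
printf 'Libs.private:%s\n' "$libs"
# prints: Libs.private: -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc
```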

Coming next

Probably the next blog post will be about my efforts to make cargo able to produce proper cdylib or something quite different.

PS: If somebody feels like helping me with Matroska support for AV1, that would be great 🙂

November 12, 2018
Hanno Böck a.k.a. hanno (homepage, bugs)

HackerOne is currently one of the most popular bug bounty program platforms. While the usual providers of bug bounty programs are companies, a while ago I noted that some people were running bug bounty programs on HackerOne for their private projects, without payouts. It made me curious, so I decided to start one with some of my private web pages in scope.

The HackerOne process requires programs to be private at first, starting with a limited number of invites. Soon after I started the program the first reports came in. Not surprisingly I got plenty of false positives, which I tried to limit by documenting the scope better in the program description. I also got plenty of web security scanner payloads via my contact form. But more to my surprise I also got a number of very high quality reports.

S9Y

This blog and two other sites in scope use Serendipity (also called S9Y), a blog software written in PHP. Through the bug bounty program I got reports for an Open Redirect, an XSS in the start page, an XSS in the back end, an SQL injection in the back end and another SQL injection in the freetag plugin. All of those were legitimate vulnerabilities in Serendipity and some of them quite severe. I forwarded the reports to the Serendipity developers.

Fixes are available by now, the first round of fixes were released with Serendipity 2.1.3 and another issue got fixed in 2.1.4. The freetag plugin was updated to version 2.69. If you use Serendipity please make sure you run the latest versions.

I'm not always happy with the way the bug bounty platforms work, yet it seems they have attracted an active community of security researchers who are also willing to occasionally look at projects without financial reward. While it's questionable when large corporations run bug bounty programs without rewards, I think that it's totally fine for private projects and volunteer-run free and open source projects.

The conclusion I take from this is that likely more projects should try to make use of the bug bounty community. Essentially Serendipity got a free security audit and is more secure now. It got this through the indirection of my personal bug bounty program, but of course this could also work directly. Free software projects could start their own bug bounty program, and when it's about web applications ideally they should have a live installation of their own product in scope.

In case you find some security issue with my web pages I welcome reports. And special thanks to Brian Carpenter (Geeknik), Julio Cesar and oreamnos for making my blog more secure.

November 10, 2018
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.14 (November 10, 2018, 21:08 UTC)

I’m happy to announce this release as it contains some very interesting developments in the project. This release was focused on core changes.


There are now two optional dependencies to py3status:

  • gevent
    • will monkey patch the code to make it concurrent
    • the main benefit is to use an asynchronous loop instead of threads
  • pyudev
    • will enable a udev monitor if a module asks for it (only xrandr so far)
    • the benefit is described below

To install them all using pip, simply do:

pip install py3status[all]

Modules can now react/refresh on udev events

When pyudev is available, py3status will allow modules to subscribe and react to udev events!

The xrandr module uses this feature by default, which allows the module to instantly refresh when you plug a secondary monitor in or out. It also makes it possible to stop running the xrandr command in the background, which saves a lot of CPU!


  • py3status core uses black formatter
  • fix default i3status.conf detection
    • add ~/.config/i3 as a default config directory, closes #1548
    • add .config/i3/py3status in default user modules include directories
  • add markup (pango) support for modules (#1408), by @MikaYuoadas
  • py3: notify_user module name in the title (#1556), by @lasers
  • print module information to stdout instead of stderr (#1565), by @robertnf
  • battery_level module: default to using sys instead of acpi (#1562), by @eddie-dunn
  • imap module: fix output formatting issue (#1559), by @girst

Thank you contributors!

  • eddie-dunn
  • girst
  • MikaYuoadas
  • robertnf
  • lasers
  • maximbaz
  • tobes

October 31, 2018
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Update from the PipeWire hackfest (October 31, 2018, 15:49 UTC)

As the third and final day of the PipeWire hackfest draws to a close, I thought I’d summarise some of my thoughts on the goings-on and the future.


Before I get into the details, I want to send out a big thank you to:

  • Christian Schaller for all the hard work of organising the event and Wim Taymans for the work on PipeWire so far (and in the future)
  • The GNOME Foundation, for sponsoring the event as a whole
  • Qualcomm, who are funding my presence at the event
  • Collabora, for sponsoring dinner on Monday
  • Everybody who attended and participated, for their time and thoughtful comments


For those of you who are not familiar with it, PipeWire (previously Pinos, previously PulseVideo) was Wim’s effort at providing secure, multi-program access to video devices (like webcams, or the desktop for screen capture). As he went down that rabbit hole, he wrote SPA, a lightweight general-purpose framework for representing a streaming graph, and this led to the idea of expanding the project to include support for low latency audio.

The Linux userspace audio story has, for the longest time, consisted of two top-level components: PulseAudio which handles consumer audio (power efficiency, wide range of arbitrary hardware), and JACK which deals with pro audio (low latency, high performance). Consolidating this into a good out-of-the-box experience for all use-cases has been a long-standing goal for myself and others in the community that I have spoken to.

An Opportunity

From a PulseAudio perspective, it has been hard to achieve the 1-to-few millisecond latency numbers that would be absolutely necessary for professional audio use-cases. A lot of work has gone into improving this situation, most recently with David Henningsson’s shared-ringbuffer channels that made client/server communication more efficient.

At the same time, application sandboxing frameworks such as Flatpak have added security requirements that were not accounted for when PulseAudio was written. Examples include choosing which devices an application has access to (or can even know of), or which applications can act as control entities (set routing etc., enable/disable devices). Some work has gone into this — Ahmed Darwish did some key work to get memfd support into PulseAudio, and Wim has prototyped an access-control mechanism module to enable a Flatpak portal for sound.

All this said, there are still fundamental limitations in architectural decisions in PulseAudio that would require significant plumbing to address. With Wim’s work on PipeWire and his extensive background with GStreamer and PulseAudio itself, I think we have an opportunity to revisit some of those decisions with the benefit of a decade’s worth of learning deploying PulseAudio in various domains starting from desktops/laptops to phones, cars, robots, home audio, telephony systems and a lot more.

Key Ideas

There are some core ideas of PipeWire that I am quite excited about.

The first of these is the graph. Like JACK, the entities that participate in the data flow are represented by PipeWire as nodes in a graph, and routing between nodes is very flexible — you can route applications to playback devices and capture devices to applications, but you can also route applications to other applications, and this is notionally the same thing.

The second idea is a bit more radical — PipeWire itself only “runs” the graph. The actual connections between nodes are created and managed by a “session manager”. This allows us to completely separate the data flow from policy, which means we could write completely separate policy for desktop use cases vs. specific embedded use cases. I’m particularly excited to see this be scriptable in a higher-level language, which is something Bastien has already started work on!

A powerful idea in PulseAudio was rewinding — the ability to send out huge buffers to the device, but the flexibility to rewind that data when things changed (a new stream got added, or the stream moved, or the volume changed). While this is great for power saving, it is a significant amount of complexity in the code. In addition, with some filters in the data path, rewinding can break the algorithm by introducing non-linearity. PipeWire doesn’t support rewinds, and we will need to find a good way to manage latencies to account for low power use cases. One example is that we could have the session manager bump up the device latency when we know latency doesn’t matter (Android does this when the screen is off).

There are a bunch of other things that are in the process of being fleshed out, like being able to represent the hardware as a graph as well, to have a clearer idea of what is going on within a node. More updates as these things are more concrete.

The Way Forward

There is a good summary by Christian about our discussion about what is missing and how we can go about trying to make a smooth transition for PulseAudio users. There is, of course, a lot to do, and my ideal outcome is that we one day flip a switch and nobody knows that we have done so.

In practice, we’ll need to figure out how to make this transition seamless for most people, while folks with custom setups will need to be given a long runway and clear documentation to know what to do. It’s way too early to talk about specifics, however.


One key thing that PulseAudio does right (I know there are people who disagree!) is having a custom configuration that automagically works on a lot of Intel HDA-based systems. We’ve been wondering how to deal with this in PipeWire, and the path we think makes sense is to transition to ALSA UCM configuration. This is not as flexible as we need it to be, but I’d like to extend it for that purpose if possible. This would ideally also help consolidate the various methods of configuration being used by the various Linux userspaces.

To that end, I’ve started trying to get a UCM setup on my desktop that PulseAudio can use, and be functionally equivalent to what we do with our existing configuration. There are missing bits and bobs, and I’m currently focusing on the ones related to hardware volume control. I’ll write about this in the future as the effort expands out to other hardware.

Onwards and upwards

The transition to PipeWire is unlikely to be quick or completely-painless or free of contention. For those who are worried about the future, know that any switch is still a long way away. In the mean time, however, constructive feedback and comments are welcome.

October 18, 2018
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We're happy to announce that our article "Lab::Measurement — a portable and extensible framework for controlling lab equipment and conducting measurements", describing our measurement software package Lab::Measurement, has been published in Computer Physics Communications.

Lab::Measurement is a collection of object-oriented Perl 5 modules for controlling lab instruments, performing measurements, and recording and plotting the resultant data. Its operating system independent driver stack makes it possible to use nearly identical measurement scripts both on Linux and Windows. Foreground operation with live plotting and background operation for, e.g., process control are supported. For more details, please read our article, visit the Lab::Measurement homepage, or visit Lab::Measurement on CPAN!

"Lab::Measurement - a portable and extensible framework for controlling lab equipment and conducting measurements"
S. Reinhardt, C. Butschkow, S. Geissler, A. Dirnaichner, F. Olbrich, C. Lane, D. Schröer, and A. K. Hüttel
Comp. Phys. Comm. 234, 216 (2019); arXiv:1804.03321 (PDF)

October 14, 2018
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)
tryton -- ipython, proteus (October 14, 2018, 09:22 UTC)

So after being told on IRC that you can use (i)python and proteus to poke around a running tryton instance (thanks for that hint btw), I tried it and had some "fun" right away:
from proteus import config,Model
pcfg = config.set_trytond(database='trytond', config_file='/etc/tryon/trytond.conf')

gave me this:

ValueError                                Traceback (most recent call last)
/usr/lib64/python3.5/site-packages/trytond/backend/__init__.py in get(prop)
     31                 ep, = pkg_resources.iter_entry_points(
---> 32                     'trytond.backend', db_type)
     33             except ValueError:

ValueError: not enough values to unpack (expected 1, got 0)

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
<ipython-input-2-300353cf02f5> in <module>()
----> 1 pcfg = config.set_trytond(database='trytond', config_file='/etc/tryon/trytond.conf')

/usr/lib64/python3.5/site-packages/proteus/config.py in set_trytond(database, user, config_file)
    281         config_file=None):
    282     'Set trytond package as backend'
--> 283     _CONFIG.current = TrytondConfig(database, user, config_file=config_file)
    284     return _CONFIG.current

/usr/lib64/python3.5/site-packages/proteus/config.py in __init__(self, database, user, config_file)
    232         self.config_file = config_file
--> 234         Pool.start()
    235         self.pool = Pool(database_name)
    236         self.pool.init()

/usr/lib64/python3.5/site-packages/trytond/pool.py in start(cls)
    100             for classes in Pool.classes.values():
    101                 classes.clear()
--> 102             register_classes()
    103             cls._started = True

/usr/lib64/python3.5/site-packages/trytond/modules/__init__.py in register_classes()
    339     Import modules to register the classes in the Pool
    340     '''
--> 341     import trytond.ir
    342     trytond.ir.register()
    343     import trytond.res

/usr/lib64/python3.5/site-packages/trytond/ir/__init__.py in <module>()
      2 # this repository contains the full copyright notices and license terms.
      3 from ..pool import Pool
----> 4 from .configuration import *
      5 from .translation import *
      6 from .sequence import *

/usr/lib64/python3.5/site-packages/trytond/ir/configuration.py in <module>()
      1 # This file is part of Tryton.  The COPYRIGHT file at the top level of
      2 # this repository contains the full copyright notices and license terms.
----> 3 from ..model import ModelSQL, ModelSingleton, fields
      4 from ..cache import Cache
      5 from ..config import config

/usr/lib64/python3.5/site-packages/trytond/model/__init__.py in <module>()
      1 # This file is part of Tryton.  The COPYRIGHT file at the top level of
      2 # this repository contains the full copyright notices and license terms.
----> 3 from .model import Model
      4 from .modelview import ModelView
      5 from .modelstorage import ModelStorage, EvalEnvironment

/usr/lib64/python3.5/site-packages/trytond/model/model.py in <module>()
      6 from functools import total_ordering
----> 8 from trytond.model import fields
      9 from trytond.error import WarningErrorMixin
     10 from trytond.pool import Pool, PoolBase

/usr/lib64/python3.5/site-packages/trytond/model/fields/__init__.py in <module>()
      2 # this repository contains the full copyright notices and license terms.
----> 4 from .field import *
      5 from .boolean import *
      6 from .integer import *

/usr/lib64/python3.5/site-packages/trytond/model/fields/field.py in <module>()
     18 from ...rpc import RPC
---> 20 Database = backend.get('Database')

/usr/lib64/python3.5/site-packages/trytond/backend/__init__.py in get(prop)
     32                     'trytond.backend', db_type)
     33             except ValueError:
---> 34                 raise exception
     35             mod_path = os.path.join(ep.dist.location,
     36                 *ep.module_name.split('.')[:-1])

/usr/lib64/python3.5/site-packages/trytond/backend/__init__.py in get(prop)
     24     if modname not in sys.modules:
     25         try:
---> 26             __import__(modname)
     27         except ImportError as exception:
     28             if not pkg_resources:

ImportError: No module named 'trytond.backend.'

It took me a while to figure out that I just had a typo in the config file path ("tryon" instead of "tryton"). Since that cost me some time, I thought I'd put it on here so that maybe someone else who makes the same mistake doesn't waste as much time on it as I did ;) -- and thanks to the always helpful people on IRC (#tryton@freenode).
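
In case it helps anyone else: a tiny existence check on the path before handing it to proteus would have surfaced the problem immediately, instead of the long backend traceback above. This is just a sketch of mine, not part of proteus:

```python
import os

def checked_config_path(path):
    """Return the trytond config path if the file exists, otherwise fail loudly.

    (Helper name is mine; proteus itself has no such function.)
    """
    if not os.path.isfile(path):
        raise FileNotFoundError("trytond config file not found: %s" % path)
    return path

# Then, for example:
# from proteus import config
# pcfg = config.set_trytond(database='trytond',
#     config_file=checked_config_path('/etc/tryton/trytond.conf'))
```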

October 04, 2018
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Austria: Trip Summary (October 04, 2018, 05:00 UTC)

Well, our 2018 trip to Austria, Slovenia, and Hungary ends today and we have to head back home, but not before one last moment of indulgence. We woke up early so that we could partake in the included breakfast at the hotel. As with everything else at the Aria Hotel, the breakfast was incredible! There was a full buffet of items, and we also were able to order some eggs. We each got an egg white omelette with some vegetables, had a couple breads, and ordered coffee / tea. I really enjoyed my croissant with some local fruit jams (especially the Apricot jam).

Vegetarian omelette at the Hotel Aria’s complimentary breakfast

The staff at the Aria brought up the car from the valet parking lot, brought down our bags from the room, and loaded them for us. The whole experience there made it the very best hotel that I have ever had the pleasure of staying at!

We checked out, and drove back to Budapest airport. Despite the bit of traffic leaving the city centre, it was quite easy to get to the airport, and everything was clearly marked for returning the rental car. We got through security and on the flight without any problems at all.

So, what were the Top 3s of the trip (in my opinion)?

FOOD (okay, so I had to have 4 for this category)

  1. Our main dish at Zum Kaiser von Österrich in the Wachau
  2. The salad with seeds and roasted walnuts at Weinhaus Attwenger in Bad Ischl
  3. The spinach dumplings at Sixta in Vienna
  4. The mushroom tartare at Kirchenwirt an der Weinstraße in Ehrenhausen

WINE

  1. Domäne Wachau’s Pinot Noir
  2. Domäne Wachau’s Kellerberg Riesling
  3. Weingut Tement’s Vinothek Reserve Sauvignon Blanc

EXPERIENCES

  1. The winery tours (Domäne Wachau, Schloss Gobelsburg, and Tement were amazing)
  2. Going up into the mountains of Hallstatt
  3. The entire experience that was the Aria Hotel Budapest—a music lover’s dream and simply the most amazing hotel I’ve ever seen!

ANSSI, the National Cybersecurity Agency of France, has released the sources of CLIP OS, which aims to build a hardened, multi-level operating system based on the Linux kernel and a lot of free and open source software. We are happy to hear that it is based on Gentoo Hardened!

October 03, 2018
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

As in Südsteiermark and many other places on this trip, we unfortunately only had one full day in the great city of Budapest. I had come up with a list of activities for us, and we sat down to figure out which ones we wanted to do (since there was absolutely no way to do everything in a mere 24 hours). We ended up spending much of the day just walking around and taking photos of the area. Our first spot for photos was right outside of our hotel at St. Stephen’s Basilica.

Budapest – St. Stephen’s Basilica adjacent to the Aria Hotel

From there, we ventured across the Széchenyi Bridge to see an area known as The Fisherman’s Bastion (or Halászbástya in Hungarian). It’s a terrace near Matthias Church that is steeped in history and culture, and it also provides some beautiful views of the city. Down closer to the river, I think that I got some great shots of the Hungarian Parliament Building from a nice location on the western bank of the Danube.

Budapest – beautiful view of the Hungarian Parliament Building from west of the Danube

We also wanted to “live it up” on the last night of our trip, so we asked the concierge for a recommendation of a bakery for cakes and treats. Café Gerbeaud came with the highest praises, so we walked to the neighbouring square to check it out. There were many stunning desserts to satisfy just about any type of sweet tooth! We couldn’t decide, so we ended up each getting a slice of three different cakes. Talk about a splurge!

Our numerous desserts (The Dobos, Esterházy, and Émile cakes) from Café Gerbeaud

Right about that time, I received an email from one of the restaurants that I had contacted, and they were asking me to confirm our reservations. It was the first time that I had heard from them, so I didn’t think that my reservations had actually gone through. We now had a decision to make between the two restaurants, and I think that I chose poorly. More on that in just a little bit.

We wanted to walk to Városliget Park (the City Park) in order to just take some more photos and enjoy the day, but soon realised that we wouldn’t have the time necessary to get there and not feel rushed. So we ended up just looking in some of the shops along Andrássy street. Boggi had a storefront there, and I really like that Milanese designer, so we went in. I didn’t expect to, but I ended up purchasing a gorgeous sport shirt because it fit me like a glove! A bit impulsive, but sometimes things like that have to be done when on holiday.

We made it back to the Aria Hotel in time to experience the afternoon wine and piano reception (that we missed yesterday due to the travel problems). It was lovely to just sit in the music parlour and listen to the performance. We didn’t partake in any of the food because we had dinner reservations soon thereafter.

The afternoon reception in the music garden at the Aria Hotel

After that incredibly relaxing reception, we got ready and walked to dinner at Caviar & Bull. The food was over-the-top delicious, but we shared quite a few starters and just left without ordering any mains. If the food was that great, why would we leave without ordering any? Well, in my opinion, the prices were exorbitant for the portion size. We added it up, and the four starters came out to 10 bites per person. That being said, the food that we had was extremely creative and fun—like the molecular spheres:

Budapest – molecular sphere starter at Caviar & Bull

On our walk back to the hotel, we realised that we needed some actual food, so we went to this little Japanese place called Wasabi Extra, which was directly across from our hotel. It was a conveyor belt sushi joint (all-you-can-eat), but we opted to just get some Japanese curry dishes. They were mediocre at best, but at least provided some sustenance.

We wandered back up to the room, and the hotel staff had delivered the wines they had been chilling for us in their walk-in. They also delivered the wine glasses and an ice bucket. Which wines did we choose for the last evening of our trip? Of course they had to be special, so we went with the 1995 vintage of the Domäne Wachau Kellerberg Riesling. We also opened a bottle of the 2017 vintage for comparison. It was a great experience, and one that we likely won’t be able to ever have again. That particular Riesling is my favourite of theirs, and arguably my favourite expression of the grape outside of Alsace and Germany. Having one with such bottle age transformed it into a golden yellow colour with aromas of overly ripe tropical fruits and petrol, along with the creamy mouthfeel that softens the typical blinding acidity of Riesling; it was a truly remarkable wine!

The perfect ending to a trip – enjoying Domäne Wachau’s 1995 Ried Kellerberg Riesling and desserts from Café Gerbeaud

We also had our desserts from Café Gerbeaud. They were all good, but I think that we agreed that the Émile was undoubtedly our favourite. That’s the one that Deb lovingly calls “the Pringle dessert” because of the chocolate garnish on the top that looks a bit like a Pringles crisp. A pretty darn good way to end a trip, if I do say so myself… and I do!

October 02, 2018
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

We woke up a bit early to check out of Weingut Tement, but before doing so had a tour of the facility with Monika Tement (the wife of Armin Tement, who, with his brother Stefan, is the current winemaker and proprietor of the estate). It was rainy and damp outside, so we couldn’t go through the vineyards. Thankfully I was able to get some photos of the beautiful Zieregg vineyard yesterday when the weather was nicer.

Südsteiermark – The stunning panorama of Ried Zieregg at Weingut Tement

Even though we weren’t able to go through the vineyards together due to the rains, Monika improvised and shared so much incredible information about their land and winemaking practices. In their cellar, there is a portion where there isn’t a concrete wall, and one can see the open soil that comprises the Zieregg STK Grand Cru vineyard site (so… very… cool!).

Südsteiermark – inside Weingut Tement’s cellar with the wall exposing the soils of Ried Zieregg

Before leaving the cellar, we were fortunate enough to see brothers Armin and Stefan Tement checking the status of fermentation of many of the wines that were in-barrel. They were testing the sugar content, alcohol content, and various other components of the wine using instruments designed specifically for the tasks. Monika also told us about the story of the Cellar Cat that, according to lore, will choose the best barrel of wine and sit atop it. In this case, it chose wisely (or, truthfully, whomever placed this cat statue on the barrel chose wisely) by selecting a lovely barrel of Zieregg Grosse Lage STK Sauvignon Blanc.

Südsteiermark – the cellar cat chooses his barrel of Ried Zieregg Sauvignon Blanc at Weingut Tement

We got in the car and headed out for what was the longest drive of the trip. Going from Südsteiermark back to Budapest was supposed to take about 3.5 hours, but yet again, the GPS that we rented with the car was TERRIBLE. That problem, coupled with traffic, road construction, poor road conditions, and nearly running out of fuel, resulted in the trip taking nearly 5.5 hours. We missed the afternoon wine and piano reception at the Aria Hotel, but at least didn’t miss out on the massage that I had scheduled. We had to cut it a little short so as not to interfere with our dinner plans, but we still got to enjoy it.

Budapest – The custom-built grand piano in the music garden at the Hotel Aria

After the massage, we freshened up and walked to our dinner reservations at Aszu, which was just two blocks over from the hotel. We started our meal by sharing a summer salad with carrots and radishes, along with a Hungarian chicken pancake dish called Hortobágyi. We then decided to order three mains and just share them as well. We went with: 1) fresh pasta with mascarpone and spinach mousse, garlic, and dried tomatoes; 2) a farmhouse chicken breast with corn variations (including popcorn) and truffle pesto; 3) a pork shoulder with cauliflower cream, apricots, and yoghurt. After trying each of them, it so happened that Deb really liked the pork shoulder, and I preferred the pasta dish. So, we didn’t share those two, but only the farmhouse chicken. I had wanted to try one of their desserts, but we didn’t have time (the service was impeccable, but a bit slow) before our reservations back at the hotel’s rooftop Sky Bar.

I had arranged for a private violin soloist performance (since the Aria is known for its complete music theme), and it was absolutely astonishing! After that show, we had our own little table inside the High Note Sky Bar. It was cosy, and our waiter brought out our wines along with some complimentary baggies of popcorn. As I believe that one should always have the wines of the region, Deb had the 2016 Demeter Zoltán Szerelmi Hárslevelű, and I had the 2016 St. Andrea Áldás Egri Bikavér, which translates to “Bull’s Blood”. It’s a mix of a lot of different grapes (in this case, Kékfrankos, Merlot, Cabernet Franc, Pinot Noir, Syrah, Cabernet Sauvignon, Kadarka, and Turán), and it was very interesting. I hope to never encounter it in a blind tasting, though, because it would be essentially impossible to identify. 😛 After that bottle, we each wanted one additional glass. Deb had the 2016 István Szepsy Dry Furmint, and I went with the 2016 Etyeki Kúria Pinot Noir. Both were lovely, and I was surprised to find yet another gorgeous representation of cool climate Pinot!

We headed back downstairs to our beautiful room, but stopped to take one more look at the lovely terraces and music garden below.

Budapest – Hotel Aria’s stunning music garden courtyard

October 01, 2018
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Unfortunately, as with so many of our stays in Austria, we only had one full day in the Ciringa / Südsteiermark region, so we had to make the best of it by seeing some of the cool attractions. We drove about 45 minutes or so away to the town of Riegersburg to visit their castle. If you walk up the hill to the top (instead of taking the funicular) and forego the castle museums, there are no fees involved. However, we opted to take the lift for €6 and to see all three museums for €13. The lift was rickety and a bit frightening, but we made it! The castle was really a neat experience, and the museums (one about the castle itself, one about witches, and one about medieval arms) were informative, but they were primarily in German. The English pamphlets only gave basic overviews of each room, so I feel like we missed out on a lot of the fascinating details. Though the castle experience was fun, I think it was a bit overpriced.

Südsteiermark – Riegersburg Castle entrance

One of the most interesting aspects of the castle was the various ornate stoves in some of the rooms. I often forget that there was no such thing as central heating and cooling during these times, so it was certainly a must to have some form of heating throughout the castle during the winter months. These stoves likely provided ample heat for taking the chill out of the air, and at Riegersburg, they likely served as discussion pieces given their elaborate and intricate designs.

Südsteiermark – lovely tile stove inside Riegersburg Castle

After going through the three museums, we spent a little time looking around the outside of the castle. The views of the surrounding areas were really beautiful and pastoral. Once finished with Riegersburg, we drove a little bit down the road to Zotter Schokoladen (a chocolate manufacturer) for a tour of their facility. It started with a really great video that outlined the chocolate making process beginning with harvesting the cacao pods. We then went through the factory with an English audio guide that explained every step of the process in a lot more detail.

Südsteiermark – one of many chocolate machines at Zotter Schokoladen

During each stage of the chocolate production, we were able to taste the “chocolate”. I use the word “chocolate” loosely because at many of the stages in the process, it didn’t taste much like the chocolate that we’re all used to. We did, however, get the opportunity to taste a bunch of their finished products. Some were good, some were great, and a few were absolutely fantastic! Deb ended up getting this solid 72% Milk chocolate bar sourced from Peru, some white chocolate bark with pistachios and almonds, and we each bought one of the tasting spoons that we used throughout the tour. I didn’t buy anything because the one that I loved the most wasn’t available for purchase. It was called the White Goddess and was white chocolate with Tonka Beans and honey crisps. It looks like it’s available online, so I may consider it at some point. The other ones that I enjoyed were the coconut nougat and the white chocolate bar with coconut and raspberries. One aspect of Zotter that I really found fascinating was the number of vegan options that they had available.

Südsteiermark – some of the vegan offerings at Zotter Schokoladen

At the tail end of the Zotter tour, there was a really great experience where they had large glass jars with various items that have rather distinct aromas (like rose petals, some baking spices such as cloves, and so on). The object of this particular hallway was to smell the contents of each jar and see if you could name the aroma without looking at the answer printed on the underside of the lid. Deb and I made it into a bit of a game by loosely keeping score, and I found it to be a lot of fun because many of the aromas that can be found in chocolate can also be found in red wines. As a side note, there was a really fun “chocolate bath” at the exit of the tour. Sadly, it was only for show, but I can imagine that chocoholics everywhere would swoon at the thought. 😛

Südsteiermark – the chocolate bathtub at Zotter Schokoladen

The other portion of the Zotter tour is a farm / petting zoo, but darn the bad luck, it started raining so we didn’t get a chance to go through it. After Zotter, we went back to the same restaurant that we ate at the previous night (Kirchenwirt an der Weinstraße in Ehrenhausen) because we enjoyed it so much! We didn’t have the same waiter this time, and our waitress tonight spoke VERY little English. It made it more difficult to order, but everything came out like we wanted. We each started with the mushroom tartare (which was my favourite), and then Deb went with Wiener Schnitzel and I had a custom order similar to what she had the evening before. I ordered the Pork Medallions, but without the pork. I know, it sounds ridiculous, but I wanted the dish with just a boatload of trumpet mushrooms and some extra German pretzel dumplings. I ordered by using Google Translate on my mobile, and my custom dish came out just as I had intended. Success!

Südsteiermark – Ehrenhausen – Kirchenwirt an der Weinstraße – Mushroom tartare starter

Südsteiermark – Ehrenhausen – Kirchenwirt an der Weinstraße – Trumpet mushrooms and pretzel dumplings

Back at the beautiful chalet, we enjoyed our wines of the evening. This time we went with two of the special, limited production wines from Weingut Tement. We wanted to compare two of their higher-end Sauvignon Blancs, so we had a bottle of the 2012 Zieregg “IZ” Reserve and a bottle of the 2015 Zieregg Vinothek Reserve. I thought that Deb would like the Vinothek and that I would like the “IZ” (which is made via a process similar to carbonic maceration [often used in Beaujolais]), but I had it completely backwards. I preferred the Vinothek and Deb liked the “IZ” more. I found the Vinothek to be a more pure expression of the grape and the place, which are two aspects that I highly value in wine.

September 30, 2018
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Today we woke up extra early (before the sun had even peeked over the mountain crest) to depart Hallstatt for Maribor, Slovenia. The reason for getting up before the rooster’s crow is that it’s a special day in Maribor—the annual Harvest Festival of the oldest grape vine in the world. We started out for the ~3-hour drive, but met a problem right off the bat in that the road leaving Obertraun toward Graz was closed due to an avalanche. Yes, an avalanche… let’s not forget that we’re in the Austrian Alps in autumn. I had to figure out an alternative route, but fortunately we still made it to Maribor in time. Actually, right as we arrived in the city centre, we pulled in behind the pre-festival wagon complete with an accordion player! We parked the car, and saw the event from a fairly nice perspective on the sideline.

Maribor – Harvest Festival – Pre-show celebration

Being an absolute wine fanatic, and one with a strong interest in viticulture and oenology, I geeked out a little bit at the Harvest Festival because it is the oldest fruit-bearing grapevine on the planet! Not only that, but we just happened to be heading to southeastern Austria on the same day; a perfect coincidence. The festival started with members of the Slovenian Wine Council (formally known as the PSVVS [the Business Association for Viticulture and Wine Production]) speaking to the quality of the country’s various wine regions. It was wonderful to see them take such pride in their indigenous grapes and wines!

Maribor – Harvest Festival – Slovenian Wine Council

After the speakers (including diplomats and industry representatives from foreign nations) discussed the impact of Slovenian wines on the global marketplace, the festivities continued with live music, dancers wearing traditional garb, and importantly, the ceremonial first cutting of the grapes. We didn’t stay too much after the first cutting as most of the activities were in Slovenian and likely lost in translation, but I’m glad that we were there to see it firsthand; it was very likely a once-in-a-lifetime experience.

Maribor – Harvest Festival – first cutting of the grapes

After the Harvest Festival, we went to Mestni Park (meaning “City Park”) so that we could climb to the top of Piramida Hill. It’s high ground and, though nothing like the mountains we just saw in Hallstatt, it has quite a steep grade. The top of Piramida is considered to offer one of the best views of the city. It was a fun hike, and the views certainly were impressive, so I’m glad that we took the time to do it. However, since there was a minimum of €15 for the Vignette pass (for driving on Slovenian motorways), it seemed a bit expensive just for the few hours at the festival and the park. Nevertheless, it was a good experience.

View of Maribor from atop Piramida at Mestni Park

As it was mid-afternoon, we then got back in the car and drove up to the Slovenian-Austrian border for our stay at Weingut Tement. Tement offers a few different accommodation options, and we actually stayed on the only part of their property that is technically in Slovenia (the Winzarei Ciringa chalets) instead of on the Austrian side of the border. We had a lovely reception where we were able to taste some of their wines, and then saw our gorgeous chalet.

Südsteiermark – Weingut Tement’s Chalet Ciringa – Living room

There was a sizeable bedroom, full kitchen, extremely luxurious bathroom, and a lovely little breakfast nook before walking out the door to the patio. From our patio, we could readily see some of Tement’s vineyards, and even though they weren’t their esteemed Grosse Lage STK Zieregg vineyards, they were beautiful nonetheless.

Südsteiermark – Weingut Tement’s Chalet Ciringa – our breakfast nook

Südsteiermark – Weingut Tement’s Chalet Ciringa – fantastic vineyard view from the patio

We spent a little time just walking the Zieregg Vineyard (adjacent to the winery itself), and then headed to Ehrenhausen for dinner at Die Weinbank, which is directly affiliated with Weingut Tement. Unfortunately, when we arrived, it was closed despite the confirmation of our reservations. I looked on my mobile and found that there was one other restaurant named Kirchenwirt an der Weinstraße a mere block away from our car park, so we went there instead. We were expecting pub food, but boy were we wrong! It was elevated and outstanding, and our waiter was extremely accommodating by reading the entire menu to us in English. Deb and I shared some pumpkin soup, a salad with pumpkin, and mushroom tartare. She then ordered pork cutlets with trumpet mushrooms, and I went with pesto linguine with vegetables and, yup, more pumpkin. We ordered a couple pieces of house-made apple strudel to take away with us for later.

Back at our chalet, we enjoyed our wines of the evening. We each had the current vintage (2016) of Weingut Tement Zieregg Morillon (which is the local name for Chardonnay). It was a lovely mix of styles (not heavily oaky like many California Chards, but not as sharply crisp as Chablis either) and exhibited a character all its own. The apple strudel was interesting, but I personally found it to be a bit like apple sauce inside instead of a strudel filling. It might have been better at the restaurant, where it would be served warm and with vanilla ice cream, but neither of us likes to have sweets before wine.

September 28, 2018
Thomas Raschbacher a.k.a. lordvan (homepage, bugs)
Tryton Module Development (September 28, 2018, 12:03 UTC)

So I've finally got around to really starting Tryton module development, to customize it to what we need.

I plan to put stuff that is useful as examples or maybe directly as-is on my github: https://github.com/LordVan/tryton-modules

On a side note this is trytond-4.8.4 running on python 3.5 at the moment.

The first module just (re-)adds the description field to the sale lines in the sale module (entry). This by itself is vaguely useful for me, but mostly it was to figure out how this works. I have to say, once figured out, it is really easy - the hardest part was getting the XML right as someone who is not familiar with the structure. I'd like to thank the people who helped me on IRC (#tryton@freenode).

The next step will be to add some custom fields to this and products.

To add this module you can follow the steps in the documentation: Tryton by example
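
For what it's worth, the view-extension XML such a module uses tends to look roughly like the sketch below. The xpath target here is a made-up example and has to match the actual sale line form view of your Tryton version:

```xml
<?xml version="1.0"?>
<!-- Hypothetical inherited-view sketch: re-add the description field
     after an existing field. The xpath expression below is an assumption
     and must be adjusted to the real sale line form view. -->
<data>
    <xpath expr="/form/field[@name='unit']" position="after">
        <field name="description"/>
    </xpath>
</data>
```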

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.13 (September 28, 2018, 11:56 UTC)

I am once again lagging behind on the release blog posts, but this one is an important one.

I’m proud to announce that our long time contributor @lasers has become an official collaborator of the py3status project!

Dear @lasers, your amazing energy and overwhelming ideas have served our little community for a while. I’m sure we’ll have a great way forward as we learn to work together with @tobes 🙂 Thank you again very much for everything you do!

This release is as much dedicated to you as it is yours 🙂


After this release, py3status coding style CI will enforce the ‘black’ formatter style.


Needless to say, the changelog is huge as usual; here is a very condensed view:

  • documentation updates, especially on the formatter (thanks @L0ric0)
  • py3 storage: use $XDG_CACHE_HOME or ~/.cache
  • formatter: multiple variable and feature fixes and enhancements
  • better config parser
  • new modules: lm_sensors, loadavg, mail, nvidia_smi, sql, timewarrior, wanda_the_fish
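
The storage change from that list boils down to the usual XDG-style lookup; a minimal sketch (function name is mine, not the py3status API):

```python
import os

def cache_dir():
    """Resolve the cache directory as the changelog describes:
    $XDG_CACHE_HOME if set and non-empty, otherwise ~/.cache."""
    return os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache")
```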

Thank you contributors!

  • lasers
  • tobes
  • maximbaz
  • cyrinux
  • Lorenz Steinert @L0ric0
  • wojtex
  • horgix
  • su8
  • Maikel Punie

September 27, 2018
Michał Górny a.k.a. mgorny (homepage, bugs)
New copyright policy explained (September 27, 2018, 06:47 UTC)

At the 2018-09-15 meeting, the Trustees gave the final stamp of approval to the new Gentoo copyright policy outlined in GLEP 76. This policy is the result of work that had been slowly progressing since 2005, and that picked up considerable speed by the end of 2017. It is a major step forward from the status quo that has been in place since the forming of the Gentoo Foundation, and that was mostly inherited from the earlier Gentoo Technologies.

The policy aims to cover all copyright-related aspects, bringing Gentoo in line with the practices used in many other large open source projects. Most notably, it introduces a concept of Gentoo Certificate of Origin that requires all contributors to confirm that they are entitled to submit their contributions to Gentoo, and corrects the copyright attribution policy to be viable under more jurisdictions.

This article aims to briefly recap the most important points of the new copyright policy, and to provide a detailed guide to following it, in Q&A form.

Continue reading

September 15, 2018
Michał Górny a.k.a. mgorny (homepage, bugs)

With Qt5 gaining support for high-DPI displays, and applications starting to exercise that support, it’s easy for applications to suddenly become unusable on some screens. For example, my old Samsung TV reported itself as a 7″ screen. While this did not really matter when websites forced a resolution of 96 DPI, high-DPI applications started scaling themselves to occupy most of my screen, with elements becoming really huge (and ugly, apparently due to some poor scaling).

It turns out that it is really hard to find a solution for this. Most of the guides and tips are focused either on proprietary drivers or on getting custom resolutions. The DisplaySize specification in xorg.conf apparently did not change anything either. Finally, I was able to resolve the issue by overriding the EDID data for my screen. This guide explains how I did it.

Step 1: dump EDID data

Firstly, you need to get the EDID data from your monitor. Supposedly the read-edid tool could be used for this purpose, but it did not work for me. With only a little more effort, you can get it e.g. from xrandr:

$ xrandr --verbose
HDMI-0 connected primary 1920x1080+0+0 (0x57) normal (normal left inverted right x axis y axis) 708mm x 398mm

If you have multiple displays connected, make sure to use the EDID for the one you’re overriding. Copy the hexdump and convert it to a binary blob, e.g. by passing it through xxd -p -r (shipped with vim).
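If xxd is not at hand, the same conversion can be done in a few lines of Python (the hex bytes below are only a sample showing the standard 8-byte EDID header; paste your own dump):

```python
# Convert the hex dump copied from `xrandr --verbose` into a binary blob,
# equivalent to piping it through `xxd -p -r`.
hexdump = """
00ffffffffffff00
"""  # replace with the full EDID hex lines from xrandr

data = bytes.fromhex("".join(hexdump.split()))
with open("edid.bin", "wb") as f:
    f.write(data)
print(f"wrote {len(data)} bytes")  # -> wrote 8 bytes (for the sample above)
```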

Step 2: fix screen dimensions

Once you have the EDID blob ready, you need to update the screen dimensions inside it. Initially, I did this using a hex editor, which involved finding all the occurrences, updating them (manually encoding the weird split integers) and correcting the checksums. Then I wrote edid-fixdim so you wouldn’t have to repeat that experience.

First, use --get option to verify that your EDID is supported correctly:

$ edid-fixdim -g edid.bin
EDID structure: 71 cm x 40 cm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
CEA EDID found
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm

So your EDID consists of basic EDID structure, followed by one extension block. The screen dimensions are stored in 7 different blocks you’d have to update, and referenced in two checksums. The tool will take care of updating it all for you, so just pass the correct dimensions to --set:

$ edid-fixdim -s 1600x900 edid.bin
EDID structure updated to 160 cm x 90 cm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
CEA EDID found
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm

Afterwards, you can use --get again to verify that the changes were made correctly.
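For reference, the checksum rule such a tool has to honor is simple: every 128-byte EDID block (base or extension) must sum to zero modulo 256, with the final byte acting as the balancing value. A minimal sketch of the fix-up (the edited byte offset below is purely illustrative, not a full EDID parser):

```python
def fix_edid_checksum(block: bytearray) -> bytearray:
    """Recompute the trailing checksum of a 128-byte EDID block.

    Every EDID block must sum to 0 modulo 256; the last byte is the
    balancing checksum that must be corrected after editing sizes.
    """
    if len(block) != 128:
        raise ValueError("EDID blocks are exactly 128 bytes")
    block[127] = (-sum(block[:127])) % 256
    return block

# Toy example: a zeroed block with one byte edited (offset chosen only
# for illustration; real size fields live at fixed EDID offsets).
blk = bytearray(128)
blk[21] = 160
fix_edid_checksum(blk)
assert sum(blk) % 256 == 0
```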

Step 3: overriding EDID data

Now it’s just the matter of putting the override in motion. First, make sure to enable CONFIG_DRM_LOAD_EDID_FIRMWARE in your kernel:

Device Drivers  --->
  Graphics support  --->
    Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)  --->
      [*] Allow to specify an EDID data set instead of probing for it

Then, determine the correct connector name. You can find it in dmesg output:

$ dmesg | grep -C 1 Connector
[   15.192088] [drm] ib test on ring 5 succeeded
[   15.193461] [drm] Radeon Display Connectors
[   15.193524] [drm] Connector 0:
[   15.193580] [drm]   HDMI-A-1
[   15.193800] [drm]     DFP1: INTERNAL_UNIPHY1
[   15.193857] [drm] Connector 1:
[   15.193911] [drm]   DVI-I-1
[   15.194210] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
[   15.194267] [drm] Connector 2:
[   15.194322] [drm]   VGA-1

Copy the new EDID blob into location of your choice inside /lib/firmware:

$ mkdir /lib/firmware/edid
$ cp edid.bin /lib/firmware/edid/samsung.bin

Finally, add the override to your kernel command-line (on kernels 4.15 and newer the parameter is drm.edid_firmware; older kernels use drm_kms_helper.edid_firmware):

drm.edid_firmware=edid/samsung.bin

If everything went fine, xrandr should report correct screen dimensions after next reboot, and dmesg should report that EDID override has been loaded:

$ dmesg | grep EDID
[   15.549063] [drm] Got external EDID base block and 1 extension from "edid/samsung.bin" for connector "HDMI-A-1"

If it didn’t, check dmesg for error messages.

September 09, 2018
Sven Vermeulen a.k.a. swift (homepage, bugs)
cvechecker 3.9 released (September 09, 2018, 11:04 UTC)

Thanks to updates from Vignesh Jayaraman, Anton Hillebrand and Rolf Eike Beer, a new release of cvechecker is now available.

This new release (v3.9) is a bugfix release.

September 07, 2018
Gentoo congratulates our GSoC participants (September 07, 2018, 00:00 UTC)

GSOC logo Gentoo would like to congratulate Gibix and JSteward for finishing and passing Google’s Summer of Code for the 2018 calendar year. Gibix contributed by enhancing Rust (programming language) support within Gentoo. JSteward contributed by making a full Gentoo GNU/Linux distribution, managed by Portage, run on devices which use the original Android-customized kernel.

The final reports of their projects can be reviewed on their personal blogs:

August 24, 2018
Michał Górny a.k.a. mgorny (homepage, bugs)

I have recently worked on enabling 2-step authentication via SSH on the Gentoo developer machine. I selected google-authenticator-libpam among the different available implementations as it seemed the best maintained and had all the necessary features, including a user-friendly tool for configuring it. However, its design has a weakness: it stores the secret unprotected in the user’s home directory.

This means that if an attacker manages to gain even temporary access to the filesystem with the user’s privileges (through a malicious process, a vulnerability, or simply because someone left the computer unattended for a minute), they can trivially read the secret and clone the token source without leaving a trace. This completely defeats the purpose of the second step, and the user may not even notice until the attacker makes real use of the stolen secret.

In order to protect against this, I’ve created google-authenticator-wrappers (as upstream decided to ignore the problem). This package provides a rather trivial setuid wrapper that manages a write-only, authentication-protected secret store for the PAM module. Additionally, it comes with a test program (so you can test the OTP setup without jumping through the hoops or risking losing access) and friendly wrappers for the default setup, as used on Gentoo Infra.

The recommended setup (as used by the sys-auth/google-authenticator-wrappers package) is to use a dedicated user for the password store. In this scenario, users are unable to read their secrets, and all secret operations (including authentication via the PAM module) are done as an unprivileged user. Furthermore, any operation on the configuration (either updating it or removing the second step) requires regular PAM authentication (e.g. typing your own password).

This is consistent with e.g. how shadow operates (users can’t read their passwords, nor update them without authenticating first), how most sites using 2-factor authentication operate (again, users can’t read their secrets) and follows the RFC 6238 recommendation (that keys […] SHOULD be protected against unauthorized access and usage). It solves the aforementioned issue by preventing user-privileged processes from reading the secrets and recovery codes. Furthermore, it prevents the attacker with this particular level of access from disabling 2-step authentication, changing the secret or even weakening the configuration.
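For context, the one-time passwords involved are plain RFC 6238 TOTP codes, and the secret being protected here is simply the HMAC key in the computation below. A minimal, self-contained sketch (independent of google-authenticator-libpam, using the RFC reference secret):

```python
import hmac
import struct
import time
from hashlib import sha1

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226).
    digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    # TOTP is just HOTP keyed on the number of elapsed time steps (RFC 6238).
    now = time.time() if timestamp is None else timestamp
    return hotp(secret, int(now // step))

# RFC 4226/6238 reference secret; at T=59 the 6-digit code is 287082.
print(totp(b"12345678901234567890", timestamp=59))  # -> 287082
```

Anyone who can read that 20-byte secret can generate valid codes forever, which is exactly why the wrappers make the store write-only for the user.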

August 17, 2018
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Gentoo on Integricloud (August 17, 2018, 22:44 UTC)

Integricloud gave me access to their infrastructure to track some issues on ppc64 and ppc64le.

Since some of the issues are related to the compilers, I obviously installed Gentoo on it, and in the process I started fixing some issues with catalyst to get working install media, but that’s for another blog post.

Today I’m just giving a walk-through on how to get a ppc64le (and ppc64 soon) VM up and running.


Read this and get your install media available to your instance.

Install Media

I’m using the Gentoo installcd I’m currently refining.


You have to append console=hvc0 to your boot command; the boot process might figure it out for you on newer install media (I still have to send patches to update livecd-tools).

Network configuration

You have to set up the network manually.
You can use ifconfig and route or ip as you like; refer to your instance setup for the parameters.

# net-tools style
ifconfig enp0s0 ${ip}/16
route add -net default gw ${gw}
echo "nameserver ${dns}" > /etc/resolv.conf

# or, equivalently, iproute2 style
ip a add ${ip}/16 dev enp0s0
ip l set enp0s0 up
ip r add default via ${gw}
echo "nameserver ${dns}" > /etc/resolv.conf

Disk Setup

OpenFirmware seems to like gpt much better:

parted /dev/sda mklabel gpt

You may use fdisk to create:
– a PowerPC PrEP boot partition of 8M
– root partition with the remaining space

Device     Start      End  Sectors Size Type
/dev/sda1   2048    18431    16384   8M PowerPC PReP boot
/dev/sda2  18432 33554654 33536223  16G Linux filesystem

I’m using btrfs, with zstd compression on /usr/portage and /usr/src/.

mkfs.btrfs /dev/sda2

Initial setup

It is pretty much the usual.

mount /dev/sda2 /mnt/gentoo
cd /mnt/gentoo
wget https://dev.gentoo.org/~mattst88/ppc-stages/stage3-ppc64le-20180810.tar.xz
tar -xpf stage3-ppc64le-20180810.tar.xz
mount -o bind /dev dev
mount -t devpts devpts dev/pts
mount -t proc proc proc
mount -t sysfs sys sys
cp /etc/resolv.conf etc
chroot .

You just have to emerge grub and gentoo-sources; I diverge from the defconfig by making btrfs builtin.

My /etc/portage/make.conf:

CFLAGS="-O3 -mcpu=power9 -pipe"
# WARNING: Changing your CHOST is not something that should be done lightly.
# Please consult https://wiki.gentoo.org/wiki/Changing_the_CHOST_variable before changing it.

# NOTE: This stage was built with the bindist Use flag enabled

USE="ibm altivec vsx"

# This sets the language of build output to English.
# Please keep this setting intact when reporting bugs.

MAKEOPTS="-j4 -l6"
EMERGE_DEFAULT_OPTS="--jobs 10 --load-average 6 "

The minimal set of packages I need before booting:

emerge grub gentoo-sources vim btrfs-progs openssh

NOTE: You want to emerge openssh again, making sure bindist is not in your USE.

Kernel & Bootloader

cd /usr/src/linux
make defconfig
make menuconfig # I want btrfs builtin so I can avoid a initrd
make -j 10 all && make install && make modules_install
grub-install /dev/sda1
grub-mkconfig -o /boot/grub/grub.cfg

NOTE: make sure you pass /dev/sda1, otherwise grub will happily assume OpenFirmware knows about btrfs and just point it at your directory.
Unfortunately, that’s not the case.


I’m using netifrc with the eth0 naming convention.

touch /etc/udev/rules.d/80-net-name-slot.rules
ln -sf /etc/init.d/net.{lo,eth0}
echo -e "config_eth0=\"${ip}/16\"\nroutes_eth0=\"default via ${gw}\"\ndns_servers_eth0=\"\"" > /etc/conf.d/net

Password and SSH

Even if the mticlient is quite nice, you will probably want to use ssh as much as you can.

rc-update add sshd default

Finishing touches

Right now sysvinit does not add the hvc0 console as it should, due to a profile quirk; for now, check /etc/inittab and if needed add:

echo 'hvc0:2345:respawn:/sbin/agetty -L 9600 hvc0' >> /etc/inittab

Add your user and add your ssh key and you are ready to use your new system!