
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
April 18, 2014, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 18, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Heartbleed and SuperGenPass (April 18, 2014, 12:23 UTC)

After an older post of mine a colleague pointed out SuperGenPass to generate different passwords for each service out there from a single master password and the domain name in use. The idea was interesting, especially since it’s all client-side, which sounded very appealing to me.

Unfortunately, it didn’t take long for me to figure out a few limitations in this approach; the most obvious one is of course Amazon: while nowadays the login page even for Audible is hosted at the amazon.com domain, the localized stores still log in on, e.g., amazon.co.uk, but with the same password. Sure it’s easy to fix this, but it’s still a bit of a pain to change every time.

Also, at least the Chrome extension I’m using makes it difficult to use different passwords for different services hosted at the same domain. You have an option to enable or disable the subdomain removal, so if you disable it, you’ll get different passwords for www.example.com and example.com (unlikely to be what you want), while if you enable it, you’ll get the same password for forums.gentoo.org and bugs.gentoo.org (which is not what I want). Yes, you can fix this on a per-service basis, but it adds to the problem above.

The last bother in the daily usage of the extension has been with special characters. SuperGenPass does not, by default, use any special characters, just letters (mixed case) and numbers. Which is perfectly fine, unless you have a website that stupidly insists on requiring you to use symbols as well, or that requires you to use (less stupidly) longer or (insanely stupidly) shorter passwords. You then have to remember those exceptions too.

All three of these complaints mean that you have to remember some metadata in addition to the master password: whether you have to change the domain used, whether you’re using subdomain removal or not for that particular service, and whether you have to change the length or add special characters. It partly defeats the purpose of having a fully stateless hashing password generator.

There is also one more problem that worried me much more: while it makes it unlikely that a leak from a single website would expose your master password for everything else, it does not make it impossible. While there’s no real way to tell that someone is using SuperGenPass, if you’re targeting a single individual it’s not impossible to tell; in particular, you now know I’ve been using SGP for a while, so if a password for an account named Flameeyes gets leaked and it looks like an SGP password, it’s a good assumption that it is. Then all you need to do is guess the domains that could be used to generate the password (with and without subdomain removal), and start generating passwords until you find the master password used to generate that particular site password. Now you just need an educated guess at the domains where you’d try to log in as me, and you’re done. And this is with me assuming that there is no weakness in the SGP algorithm — crypto is honestly too hard for me.
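To make the attack concrete, this is roughly the kind of derivation a generator like SGP performs — a minimal sketch only, not SGP’s exact algorithm (the real one iterates the hash several times and substitutes characters), and the master password here is made up:

# derive a per-site password from the master password and the domain
echo -n "mymasterpassword:example.com" | openssl md5 -binary | openssl base64

Since the derivation is deterministic and keyless, anyone who recovers the master password can reproduce it for every other domain.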

And now there is heartbleed — go change all your passwords, starting from xine. But how do you change your passwords when they are all generated? You have to change your master password. And now you have to remember whether you have already changed the password for a given service or not. And what happens if one of the services you’re using was compromised before, such as Comixology? Now you have three different master passwords, if not more, and you’re back to square one, as if SGP was never used.

So with all this considered, I’ve decided to say goodbye to SGP — I still have a few services that have not been migrated (though not those that I’ve named here, I’m not a moron), but I’m migrating them as I go. There are plenty of things I forgot I registered for at some point or another that have been mailing me to change their password. I decided to start using LastPass. The reason was mostly that they do a safety check for heartbleed vulnerabilities before you set up your passwords with them. I was skeptical about them (and any other online password storage) for a long time, but at this point I’m not sure I have any better option. My use of sgeps is not scalable, as I found out for myself, and the lack of 2FA in most major services (PayPal, seriously?) makes me consider LastPass the lesser evil for my safety.

Heartbleed and xine-project (April 18, 2014, 09:38 UTC)

This blog post comes way too late, I know, but there have been extenuating circumstances around my delay in clearing this up. First of all, yes, this blog and every other website I maintain were vulnerable to Heartbleed. Yes, they are now completely fixed: new OpenSSL first, new certificates after. For most of the certificates, though, no revocation was issued, as they come from StartSSL, which means they are free to issue and expensive to revoke. The exception to this has been the certificate used by xine’s Bugzilla, which was revoked, free of charge, by StartSSL (huge thanks to the StartSSL people!).

If you have an account on xine’s Bugzilla, please change your passwords NOW. If somebody knows a way to automatically reset all passwords that were not changed before a given date in Bugzilla, please let me know. Also, if somebody knows whether Bugzilla has decent support for (optional) 2FA, I’d also be interested.

More posts on the topic will follow, this is just an announcement.

April 17, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Gender development is one of my primary foci within the field of Child and Adolescent Psychology, and this video from The Representation Project vividly portrays one of the most destructive demands that we place on young boys—”Be a man!”


The Mask You Live In – Be a Man!

There are wonderful pieces of wisdom from Dr. William Pollack (who wrote an outstanding book [Real Boys] that I’ve read many, many times), Dr. Judy Chu, and Dr. Niobe Way (who wrote another outstanding book about boys’ emotional interactions called Deep Secrets).

Not only does it automatically assume gender inequality, but it does so in the worst of ways. It puts down females by implicitly suggesting that men are better, stronger, more efficient, or what have you. Further, it carries the message that showing the perfectly normal (and necessary!) human emotions of pain, fear, and the umbrella of empathic concern weakens masculinity. Nothing could be further from the truth!

Abraham Lincoln said that “No man stands so tall as when he stoops to help a child.” That powerful quote is doubly applicable in reference to this video. Firstly, it shows that true masculinity is characterised by helping others, especially those who are unable or less able to help themselves. Secondly, it is a call to action for all of you reading this blog entry: help boys everywhere grow into a healthy understanding of masculinity. Dispel this myth of masculinity. Teach them to help one another. Teach them to care, even if others laugh at them for doing so. Teach them that fear is equally as important as courage. Teach them that it is okay to cry, even in front of others. Then they will truly know what it means to “Be a man.”

|:| Zach |:|

Sven Vermeulen a.k.a. swift (homepage, bugs)
If things are weird, check for policy.29 (April 17, 2014, 19:01 UTC)

Today we analyzed a weird issue one of our SELinux users had with his system. He got a denial when calling audit2allow, informing us that sysadm_t had no rights to read the SELinux policy. This is a known issue that has been resolved in our current SELinux policy repository but still needs to be pushed to the tree (which is my job, sorry about that). The problem, however, was that when he added the updated policy, it didn’t work.

Even worse, sesearch told us that the policy had been modified correctly – but it still didn’t work. Checking the policy with sestatus and seinfo said things were working well. And yet … they didn’t. Apparently, all policy changes were being ignored.

The reason? There was a policy.29 file in /etc/selinux/mcs/policy which was always loaded, even though the user had already edited /etc/selinux/semanage.conf to set policy-version to 28.

It is already a problem that we need to tell users to edit semanage.conf to a fixed version (because binary version 29 is not supported by most Linux kernels, as it was introduced very recently), but having load_policy (which is called by semodule when a policy needs to be loaded) load a stale policy.29 file is just… disappointing.

Anyway – if you see weird behavior, check both the semanage.conf file (and set policy-version = 28) and the contents of your /etc/selinux/*/policy directory. If you see any policy.* file that isn’t version 28, delete it.
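As a sketch, assuming the mcs policy store from the example above, the cleanup could look like this (semodule -B rebuilds the binary policy from the installed modules and reloads it):

# confirm the version semanage is configured to build
grep policy-version /etc/selinux/semanage.conf
# remove the stale binary policy, then rebuild and reload
rm /etc/selinux/mcs/policy/policy.29
semodule -B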

April 16, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v1.4 (April 16, 2014, 10:18 UTC)

I’m glad to announce the release of py3status-1.4 which I’d like to dedicate to @guiniol who provided valuable debugging (a whole Arch VM) to help me solve the problem he was facing (see changelog).

I’m gathering wish lists and have some (I hope) cool ideas for the next v1.5 release, so feel free to post your most adventurous dreams!

changelog

  • new ordering mechanism with verbose logging in debug mode; fixes rare cases where the modules’ methods were not always loaded in the same order, causing inconsistent ordering between reloads. Thanks to @guiniol for reporting/debugging and to @IotaSpencer and @tasse for testing.
  • debug: don’t catch print() in debug mode
  • debug: add position requested by modules
  • Add new module ns_checker.py, by @nawadanp
  • move README to markdown, change ordering
  • update the README with the new options from --help

contributors

Special thanks to this release’s contributors!

  • @nawadanp
  • @guiniol
  • @IotaSpencer
  • @tasse

April 14, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Freelance Journalist looking for Jobs (April 14, 2014, 19:32 UTC)

If you don't know who I am and what I do, let me quickly introduce myself:

I live in Berlin and work as a freelance journalist. I sometimes write for the newspaper taz and the online version of Die Zeit, and I'm a regular author at the IT news magazine Golem.de. Currently, my main topics are IT security and cryptography. I cover those topics both for general-interest media and for experts. I also write about some completely different topics like climate change, energy politics, science, problems in medicine, and whatever else I happen to find interesting. I maintain an extensive list of articles I wrote in the past on my website and just recently added an English version with Google Translate links to my articles.

A notable article I wrote lately was a large piece on the security of various encryption algorithms after the Snowden revelations, which got around 900,000 visits. In the past days I have covered the Heartbleed story extensively and am happy to say that I wrote the first German-language article on it that appeared on Google News.

I'm quite happy with my job right now. Especially my cooperation with Golem.de is going very well. I have enough topics to write about, have some new opportunities in sight and earn enough money from my work to pay my expenses. However, all my current employers publish exclusively in German. I sometimes cover topics where I wish I could target an international audience and publish in English.

If you are working for any kind of English-language media and you think my work may be interesting for you: please get in touch with me. Of course, if you work for any kind of German-language media and think the same, you may also get in touch with me.

I'm aware that this is difficult. Anyone who decides to cooperate with me on this needs to be aware: I'm not a native speaker. I think my English language skills are decent, but they are far from perfect. My work probably requires more spell checking and editing than others'.

Recent versions of OpenSSL were found to be affected by an information disclosure vulnerability related to TLS heartbeats, nicknamed Heartbleed. It allows attackers to read up to 64 KB of random server memory, possibly including passwords, session IDs or even private keys.

After the public disclosure on April 7, we have confirmed that several services provided by Gentoo Infrastructure were vulnerable as well. We have immediately updated the affected software, recreated private keys, reissued certificates, and invalidated all running user sessions. Despite these measures, we cannot exclude the possibility of attackers exploiting the issue during the time it was not publicly known to gain access to credentials or session IDs of our users. There are currently no indications this has happened.

However, to be safe, we are asking you to reset your passwords used for Gentoo services within the next 7 days. You need to take action if you have an account on one of the following sites:

  • blogs.gentoo.org
  • bugs.gentoo.org
  • forums.gentoo.org
  • wiki.gentoo.org

After 7 days, we will be removing all passwords to avoid abuse. For more information and the full announcement, visit http://infra-status.gentoo.org/notice/20140413-heartbleed.

April 10, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Security and Tools (April 10, 2014, 09:51 UTC)

Everybody should remember that a 100% secure device is one that is unplugged and put in a safe covered in concrete. There is always a trade-off in the impairment we inflict on ourselves in order to stay safe.

Antonio Lioy

In the wake of the heartbleed bug, I’d like to return once more to the tools we have to track problems and how they could improve.

The tools of the trade

Memory checkers

I have written in many places about memory checkers; they are usually a boon, and they catch a good deal of issues once coupled with good samples. I managed to fix a good number of issues in hevc just by using gcc-asan and running the normal tests, and for vp9 it did not take much time to spot a couple of issues as well (the memory checkers aren’t perfect, so they didn’t spot the faulty memcpy I introduced to simplify a loop).

If you maintain some software, please do use valgrind, asan (now also available in gcc) and, if you are on Windows, drmemory. They help you catch bugs early. Just beware that certain versions of clang-asan sometimes miscompile. Never blindly trust the tools.
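As a minimal example (the file and program names here are made up), enabling asan is one compiler flag away, and valgrind needs no rebuild at all:

# build with AddressSanitizer plus debug info, then run the tests
gcc -fsanitize=address -g -o decoder_test decoder_test.c
./decoder_test
# or run an uninstrumented build under valgrind
valgrind --leak-check=full ./decoder_test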

Static analyzers

The static analyzers are a mixed bag: sometimes they spot glaring mistakes, sometimes they just point at impossible conditions.
Please do not add asserts just to make them happy; if they are right, you have merely traded a faulty memory access for a denial of service.

Other checkers

There are plenty of other good tools from the *san family one can use; ubsan is maybe the newest one available in gcc, and it does help. Valgrind has plenty as well, and the upcoming drmemory has a good deal of interesting perks; if only upstream hadn’t been so particular with its release process and build systems, you’d have had it in Gentoo since last year…

Regression tests

I guess everybody is getting sick of me talking about fuzz testing, or of why I spent weeks building a fast regression test archive called playground for Libav, and I’m sure everybody in Gentoo misses the tinderbox runs Diego used to do.
Having a good and comprehensive batch of checks to make sure new code and new fixes do not have the unwanted side effect of breaking stuff is nice; coupled with git bisect, it makes backporting fixes to release branches much easier.

Debuggers

We have gdb, which works quite well, and we have lldb, which should improve a lot, plus many extensions on top of them. When they fail we can always rely on printf. Or not.

What’s missing

Speed

If security is just an acceptable impairment of performance in order not to crash, then using the tools mentioned is an acceptable slowdown of the development process in order not to spend much more time later tracking down those issues.

The teams behind valgrind and the *san tools are doing their best to make instrumented execution only three to four times as slow as normal.

The static analyzers are usually just 5 times as slow as a normal compiler run.

A serial regression test run can take ages, while a parallel one can leave your system unable to do anything else.

Any speed-up there is a boon. Bigger hardware and automation mitigate the problem.

Precision

While gdb is already good at getting information out of gcc-compiled binaries, clang-compiled binaries are apparently a bit harder. Using lldb is a subtle form of masochism right now, for many reasons; it getting confused is just the icing on a cake of annoyance.

Integration

So far it is a fair fight between valgrind and the *san tools as to which integrates better with the debuggers. I started using asan mostly because it made introspecting memory as simple as calling a function from gdb. Valgrind has a richer interface, but it is a pain to use.

Reporting

Some tools are better than others at pointing out the issues. Clang is so far the best, with gcc-4.9 coming closer. Most static analyzers are trying their best to deliver both the big picture and the detail. gdb is so far incredibly better than lldb, but there are already some details in lldb’s output that gdb should copy.

Thanks

I’m closing this post by thanking everybody involved in creating those useful, yet perfectible, tools, all the people actually using them and reporting bugs back, and everybody actually fixing the mentioned bugs so I don’t have to do it all myself =)

Everything is broken, but we are fixing most of it together.

April 08, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB 2.4.10 & pymongo 2.7 (April 08, 2014, 16:05 UTC)

I’m pleased to announce these latest mongoDB-related bumps. The next version bump will be for the brand new mongoDB 2.6, for which I’ll add some improvements to the Gentoo ebuild, so stay tuned ;)

mongodb-2.4.10

  • fixes some memory leaks
  • starts elections if more than one primary is detected
  • fixes issues with index building and replication on secondaries
  • chunk size is decreased to 255 KB (from 256 KB) to avoid overhead with the usePowerOf2Sizes option

The full mongodb-2.4.10 changelog is here.

pymongo-2.7

  • of course, the main feature is the mongoDB 2.6 support
  • new bulk write API (I love it)
  • much improved concurrency control for MongoClient
  • support for GridFS queries

The full pymongo-2.7 changelog is here.

April 06, 2014
Raúl Porcel a.k.a. armin76 (homepage, bugs)
New AArch64/arm64 stage3 available (April 06, 2014, 12:54 UTC)

Hello all,

Following up on my AArch64/ARM64 on Gentoo post: over the last months, Mike Frysinger (vapier) has worked on bringing arm64 support to the Gentoo tree.

He has created the profiles and the keyword, along with keywording a lot of packages (around 439), so props to him.

Upstream qemu-2.0.0-rc now supports aarch64/arm64, so I went ahead and created a stage3 using the new arm64 profile. Thanks to Mike, I didn’t have to fight with as many problems as with the previous stage3.

For building I just had to have this in my package.keywords file:

=app-text/opensp-1.5.2-r3 **
=dev-util/gperf-3.0.4 **
=sys-apps/busybox-1.21.0 **
=app-text/sgml-common-0.6.3-r5 **
=app-text/openjade-1.3.2-r6 **
=app-text/po4a-0.42 **
=dev-perl/Text-CharWidth-0.40.0 **
=dev-perl/SGMLSpm-1.03-r7 **
=dev-util/intltool-0.50.2-r1 **
=dev-perl/XML-Parser-2.410.0 **
=dev-perl/Text-WrapI18N-0.60.0 **
=sys-apps/coreutils-8.22

And in my package.use file:

sys-apps/busybox -static

coreutils-8.21 fails to build, but 8.22 built fine. And building busybox with USE="static" still fails.

Also, I’ve just found out that USE="hpn" on net-misc/openssh makes the client segfault. I’m not sure if it’s because of qemu or because the unaligned accesses hpn does don’t work on arm64. So if you plan to use the ssh client in the arm64 chroot, make sure you have USE="-hpn".

By the way, app-arch/lbzip2 seems to fail to run here; it segfaults. I’m not sure if it’s because of qemu or if it simply doesn’t work on arm64.

You can download it from: http://gentoo.osuosl.org/experimental/arm/arm64

I’ve also started uploading some binary packages: http://tinderbox.dev.gentoo.org/default/linux/arm64/

Also, if someone wants to give us access to arm64 hardware, we would be really happy :)

April 05, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
"smart" software (April 05, 2014, 10:29 UTC)

1) Grab web browser
2) Enter URL
3) Figure out that the web browser doesn't want to use HTTP because ... saturday? I don't know, but ass'u'me'ing that some URLs are ftp is just, well, stupid, because your heuristic is whack.

Or, even more beautiful:

$ clementine
18:02:59.662 WARN  unknown                          libpng warning: iCCP: known incorrect sRGB profile 
Bus error


I have no idea what this means, so I'll be explicitly writing http:// at the beginning of all URLs I offer to Firefox. And Clementine just got a free trip to behind the barn, where it'll get properly retired - after all, it doesn't do the simple job it was hired to do. Ok, before that it randomly didn't play "some" music files because of gstreamer, which makes no sense either, but open rebellion will not have happy results.

I guess the moral of the story is: don't misengineer things. Clementine should output music and not be a bus driver. Firefox should not interpret-dance the URLs offered to it, but since it's still less broken than the competition, it'll be allowed to stay a little bit longer.

Sigh. Doesn't anyone engineer things anymore?

April 04, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
The road to MVC (April 04, 2014, 19:36 UTC)

In the past month or so I started helping Vittorio with adding one of the important missing features to our h264 decoder: Multi View support.

MVC

The basic idea of this feature is quite simple: you are shooting a movie from multiple angles, a good deal of the content is bound to be common across the views, and you’d like to keep them frame-accurate.

So what about encoding all the simultaneous frames captured into the same elementary stream, sharing as much as you can across the different layers, and then letting the decoder output the frames somehow?

Since we know that all the containers have problems, it might not be a completely bogus idea to have the codec take care of it. Even better if the resulting aggregated bitstream is more compact than the sum of the single ones.

High level structure

What’s different in h264-mvc compared to normal h264?

Random bystander

Not a lot; in fact the main layer is exactly the same, and a normal decoder can just skip over the additional bits (3 NALs, more or less) and decode as usual.

Basically there is a NAL unit to signal which layer we are currently working on, a NAL to store the per-layer SPS, and a NAL to keep the actual frame data.

Beside that everything is exactly the same.

Implementation

So why isn’t it already available? You made it look easy!

Random jb

Sadly, it would be easy if the decoder we have weren’t _that_ convoluted, with many components entangled in a monolithic entity and code that grew over the years to adapt to different needs.

Architectural pain points

Per-slice multithreaded decoding made the code quite hard to follow, since you then have a master context h that in certain functions is actually h0, and a slice-specific copy hx that sometimes becomes h, and such.

Per-frame multithreaded decoding luckily doesn’t get in the way too much for now.

Having to touch a single large file of about 4k lines of code isn’t _so_ nice in itself; split the view as you like for editing, you still end up waiting on a single core of your CPU doing the work.

Community constraints

h264-mvc is a fringe feature for many, and if you care about speed you don’t want all the cruft around slowing things down. What is a feature for you is just cruft for many others.

  • MVC support must be completely optional and must not slow down normal decoding at all.
  • MVC support must not make the code harder to follow than it is now, so hacking your way through is not an option.
  • MVC should give me a pony, purple

The plan

First take the low-hanging fruit while you think about the best route to achieve your goal.

Random wise person

Refactor

The first step is always to refactor and clean up. Just as you, hopefully, do not cook in a dirty kitchen, people shouldn’t
write code on top of crufty code.

Split the monster

In Libav everything compiles quite fast besides vc1 (vc1dec.c is 6k LOC) and h264 (h264.c was around 6k LOC).
New codecs such as vp9 or hevc landed already split into smaller chunks.

Shuffling the code around should be simple enough, so we had h264.c split into h264_slice.c, h264_mb.c and such. That helps keep (re)build times shorter and makes it easier to focus.

Untangle it

Vittorio tried to remove the dependency on the mpeg12 context in order to make the code easier to follow; it had been one of the pending issues for years. Now h264 doesn’t require mpeg12 in order to build, which will probably make our friends working on Chrome happier, along with everybody else needing _just_ a few selected features in their build.

Pave the road

Once you have divided the problem into smaller sub-problems (parsing the new NALs, storing the information in an appropriate data structure, doing the actual decoding and storing the results somewhere accessible), you can start adapting the code to fit. That means reordering some code, splitting functions that will be shared, and maybe slaying some bugs hidden in the code weeds while at it.

So far

We are halfway!

Random optimist

Done

We have the frame splitting and NAL parsing pretty much in working shape; it just hasn’t been sent for review because it isn’t useful by itself.

Doing

The frame data decoding is pending some patches from me that try to simplify the slice header parsing, so that enough of it can be shared without adding more branches. I hacked it together once, and I know the approach works.

The code to store multiple views in a single frame has a whole blueprint being evaluated.

To Do

Test the actual decoding and hopefully make the frame reference code behave as expected; this will probably be the most annoying and time-consuming task if we are unlucky. That code bites.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Convert special characters to ASCII in python (April 04, 2014, 09:22 UTC)

I came across a recurring problem at work: converting special characters, such as the French-Latin accented letter “é”, to ASCII “e” (this is called transliteration).

I wanted to avoid using an external library such as Unidecode (which is great, obviously), so I ended up wandering around the built-in unicodedata library. Before I got too deep into the matter, I found this StackOverflow topic, which gives an interesting method that works fine for me.

import unicodedata

def strip_accents(s):
    """
    Sanitize the given unicode string and remove all special/localized
    characters from it.

    Category "Mn" stands for Nonspacing_Mark

    >>> strip_accents(u'éléphant')
    u'elephant'
    """
    try:
        # NFD splits each accented character into its base character
        # plus a combining mark, which we then filter out
        return ''.join(
            c for c in unicodedata.normalize('NFD', s)
            if unicodedata.category(c) != 'Mn'
        )
    except TypeError:
        # s was not a unicode string: return it unchanged
        return s

PS: thanks to @Flameeyes for his good remark on wording

April 03, 2014
Michal Hrusecky a.k.a. miska (homepage, bugs)
Help MariaDB gather some statistics! (April 03, 2014, 17:33 UTC)

I was browsing around the Internet (I don’t remember what for) and I accidentally found one cool aspect of MariaDB. There is a feedback plugin, and this short post is meant to encourage you to use it!

Ok, so what does it do, and why should you opt in to be spied on :-) It takes some information about your MariaDB server, including its usage, and sends it to the MariaDB folks. It doesn’t send private data from your database. It sends stuff like what OS you are running, what versions of various plugins you have, how you tweaked the default settings, and also how big and how busy your server is. Now a short list of why I turned this on:

  • Why not? It doesn’t cost me anything, and nothing in the data I send is secret.
  • When I develop an application, I’m always happy when somebody uses it. This is an easy way to tell the developers that they have one happy user here :-)
  • It’s an easy way to contribute. It’s really simple to turn on, it helps the MariaDB folks make a better database, and it doesn’t require much effort from my side.
  • A selfish reason – if they see that plenty of people use MariaDB the same way I do, they will focus more on my use case :-)

But all this data is not only available to them; they are also making some nice graphs out of it. That way, I can find out that there are at least another 27 people running the latest 10.0.10. I also found out that there are not many reports from openSUSE folks, and that is one of the reasons for writing this blog. If you are running MariaDB on openSUSE, please turn the feedback plugin on to show that we have plenty of people using MariaDB :-)

How can you turn it on? Simple: log in to your database and activate the plugin using the following command:

INSTALL PLUGIN feedback SONAME 'feedback';

Now just wait till your reports show up in the statistics. If I got you interested, you can read more about the plugin on the MariaDB website (it can report to any URL, not only MariaDB’s, so you can use it for monitoring). While waiting, browsing the already collected statistics is also interesting ;-)
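To check that the plugin is actually active afterwards, a quick sketch (assuming a standard command-line client setup):

mysql -u root -p -e "SHOW PLUGINS;" | grep -i feedback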

April 02, 2014
Gentoo Monthly Newsletter - March 2014 (April 02, 2014, 08:04 UTC)

The March 2014 GMN issue is now available online.

This month on GMN:

  • Interview with Gentoo developer Tom Wijsman (TomWij)
  • Tracking the history of Gentoo: Gentoo Galaxy
  • Latest Gentoo news, tips, interesting stats and much more.

April 01, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
What is that net-pf-## thingie? (April 01, 2014, 17:46 UTC)

When checking audit logs, you might come across applications that request loading of a net-pf-## module, with ## being an integer. Requests for net-pf-10 are a better known case (enable IPv6), but what about net-pf-34?

The answer can be found in /usr/src/linux/include/linux/socket.h:

#define AF_ATMPVC       8       /* ATM PVCs                     */
#define AF_X25          9       /* Reserved for X.25 project    */
#define AF_INET6        10      /* IP version 6                 */
#define AF_ROSE         11      /* Amateur Radio X.25 PLP       */
#define AF_DECnet       12      /* Reserved for DECnet project  */
...
#define AF_BLUETOOTH    31      /* Bluetooth sockets            */
#define AF_IUCV         32      /* IUCV sockets                 */
#define AF_RXRPC        33      /* RxRPC sockets                */
#define AF_ISDN         34      /* mISDN sockets                */
#define AF_PHONET       35      /* Phonet sockets               */
#define AF_IEEE802154   36      /* IEEE802154 sockets           */

So next time you get such a weird module load request, check socket.h for more information.
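A quick way to do the lookup without opening the file is to grep for the number; for net-pf-34:

grep -w 34 /usr/src/linux/include/linux/socket.h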

March 31, 2014
Gentoo Monthly Newsletter: March 2014 (March 31, 2014, 19:07 UTC)

Gentoo News

Interview with Tom Wijsman (TomWij)

(by David Abbott)

1. To get started, can you give us a little background information about yourself?

Tom Wijsman is my full name; TomWij is a shorter nickname, formed by taking the first three letters of each name. I have been alive for 24 years, and Antwerp, Belgium is where you can find me eating, hanging around, sleeping, studying, working and so on…

At university, I study the Computer Science programme with a specialization in Software Engineering. As the last year is starting now, my student days are almost over.

Over the last years, a lot of programming languages have passed by, both there and on Gentoo Linux, which makes participating in both really worth it.

Besides programming, I like listening to and playing some music. Currently I own an electric guitar, which sometimes gets played; but maybe I’ll go for another instrument soon and practice in a more dedicated manner. Occasionally, I play FPS or RTS games too.

2. Tell us about your introduction to Gentoo?

My first look at Gentoo came when I was a dedicated enthusiast Windows user who would run as much on Windows as possible. Once I tried to set up a Windows / Linux combination by running SUA / Interix together with Xming, but as I barely knew Linux back then, that didn’t come to a good end. Later, Linux was needed for university, as we had to guarantee that our software compiled and worked on the lab computers, which ran Linux.

Having used another distribution in a virtual machine for some time, I discovered that it was slow without hardware virtualization, which we didn’t have yet back then. Something fast and small on a separate partition was needed; thus, a small bit of space was cleared at the end of the partition, and Gentoo was used to create a quite minimal setup with just what’s necessary to develop, compile and test.

When the need for that was over, the small partition was ditched, and I used Windows for several more years; but with Windows 8 going RTM and the changes that came with it, I started to realize that I wanted an OS that can be changed to what I like, instead of doing things in the limited number of ways they can be done.

So Gentoo Linux came back to mind, and that’s how I made the switch to it last year.

3. Describe your journey to become a Gentoo developer?

Not long after becoming a user of Gentoo, I decided to contribute back; so I started trying to package some things that I had used on Windows or that fitted my needs back then. From there on I looked for ways to contribute, at which time I found a blog post saying the kernel team was looking for users to help; there were too many users, so I didn’t make the cut.

Apparently, none of them stuck with it; so later I came back to try again, and then the kernel lead mentored me. As this was a good opportunity, the next days were spent studying the development manual and answering the quizzes in as much detail as possible. I took a self-study approach here; looking back on it, having seen every part of the devmanual certainly gains you a lot, as you can recall where things are and know enough not to break the Portage tree.

A recruiter reviewed the quiz responses a year ago, and I learned more during the review; that’s how I became a Gentoo developer, six months after I switched from Windows.

4. What are some of the projects you are involved with and the packages you help maintain?

Besides working on our kernel releases, I have recently joined the QA and Portage teams to keep the quality of our distribution high and to improve repoman; in the longer run I plan to improve Portage and/or pkgcore once I get to know their code bases better. Other teams I am on are the Proxy Maintainers (who help Gentoo users maintain packages without needing to become a Gentoo developer), as well as the Java, Dotnet, Bug Wranglers and Bug Cleaners projects. The last two projects help get bugs assigned and cleaned up.

Next to those projects I maintain or help maintain some packages that I either personally use, am interested in, or where work was needed. One of the most recently introduced packages is Epoch, a new minimal init system. It boots extremely fast on the Raspberry Pi.

5. I proxy-maintain a few packages myself. I am a staff member without commit rights. It’s a great way to give back and also help maintain a package that you like and use. To prepare, I did the ebuild quiz for my own understanding of ebuild writing and set up a local overlay to test my ebuilds. What are some other ways a user can become confident enough to maintain some packages?

The basic guide to writing Gentoo ebuilds was started to cover the very first steps of writing an ebuild; this resource was previously nonexistent, and it was written to close the gap between having no prior knowledge and the Gentoo Development Guide.

The Gentoo Development Guide is a great reference for most of the details and policy one needs to know when writing ebuilds; when working in the terminal, checking out man 5 ebuild can be handy to quickly look up the syntax, variables and functions of the ebuild format.

Creating a local overlay allows you to start experimenting with ebuilds locally. When you feel confident, you can request a hosted overlay (or create one yourself on a third-party service like GitHub and file a bug requesting it to be added to the overlay list), contribute to the Portage tree (through proxy maintenance, or by becoming a developer if you want to), or contribute to an existing overlay.

When you do proxy maintenance, the proxy maintainers will help you by advising on and reviewing the ebuild and letting you know how to improve it; if you work on an overlay, there are other channels (where proxy maintainers are present as well) to ask questions or get your ebuild reviewed. For example, #gentoo-dev-help on the Freenode IRC network is helpful.

Besides that, users are advised to run

repoman manifest && repoman full

to check for QA errors. The QA keywords are explained in the last part of man repoman; this can help find common mistakes, as well as help raise the quality enough for the package to be added to the Portage tree.

6. What do you think Gentoo’s strengths and weaknesses are both as a development platform and as a general purpose Linux Distribution?

That you can very easily patch up packages, as well as the code that gets compiled by those packages, is a very nice feature; you can simply unpack the code:

ebuild foo-1.ebuild unpack

and write a patch for one or more files, then put the patch in /etc/portage/patches/app-bar/foo and there you have your patched code. The whole flow is sketched below.
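Concretely, the flow could look like this — a sketch using the placeholder app-bar/foo from above, with made-up directory names, and noting that /etc/portage/patches is only honored by ebuilds that apply user patches (e.g. via epatch_user):

ebuild /usr/portage/app-bar/foo/foo-1.ebuild unpack
cd /var/tmp/portage/app-bar/foo-1/work
cp -r foo-1 foo-1.orig
# edit files under foo-1/, then save the difference where Portage picks it up
mkdir -p /etc/portage/patches/app-bar/foo
diff -ur foo-1.orig foo-1 > /etc/portage/patches/app-bar/foo/my-fix.patch
emerge --oneshot app-bar/foo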

Besides patching up packages, the USE flag control in Gentoo is what makes it powerful. It controls the features of packages, letting them fit your usage rather than become bloated with features, libraries and other size hogs you never need. Alongside USE flag control comes the ability to choose alternative libraries, alternative GUIs and more, which is nice when you prefer the way something works or looks.

What I think Gentoo could use is more manpower; what made Gentoo powerful is its community, and its community is formed by users who contribute. To that extent, the amount of contributions determines how powerful Gentoo becomes.

If users are interested, they are welcome to contribute to Gentoo, to make it even more powerful than ever before. They don’t necessarily need much prior knowledge; there’s something for everybody, and if needed, we can help them learn more.

7. Can you describe your personal desktop setup (WM/DE)?

As my desktop environment I use GNOME 3; I’m glad to see the way they have progressed in terms of their user interface. I also used GNOME 2 in the past, because I didn’t bother searching around much, but I never really liked GNOME 2’s UI. GNOME 3’s UI gets out of the way, and I like how it focuses on the more typical user who has no special requirements.

Alongside that comes the requirement to run systemd, though that was in use long before I ran GNOME 3: a while ago I was on XFCE and was experimenting to see if systemd fits certain needs. It does, and so does XFCE; so while I don’t really like its UI, much as with GNOME 2’s, I considered XFCE as an alternative DE to switch to. However, very recently I’ve been using MATE on top of GNOME 3; if GNOME 3 breaks, MATE is my new alternative DE.

The particular thing that I like about systemd is that it allows you to easily make a huge cut in boot time; while this kind of parameter has no good purpose in general, it does help as I need to test kernel releases and sometimes switch between the NVIDIA and Nouveau modules. The boot is down to two seconds after the boot loader hands over; at this point, you discover that the bootchart PNG export feature doesn’t allow you to scale the graph…

On the Raspberry Pi, Epoch gets the boot time down to seconds; it was bothersome that it previously took over a minute, which is what running init scripts (which are shell), together with everything they call, does on slow embedded hardware. Epoch, by contrast, is a daemon with a single configuration file that just starts a few processes, and that’s it.

It also helped a bit with bisecting, as well as with hacking up a reclocking patch for the Nouveau module; while it makes reclocking work on my NVIDIA card, the patch is still unstable and might break other cards, and further improving it involves quite a steep learning curve and a lot of work.

Other software that I use includes AutoKey, to quickly paste text that I need to repeat often (comments on bugs, e-mail responses, …); Chromium, which I think is a browser that gets out of the way with its minimal interface; WeeChat (an actively developed irssi clone with a ton of extra features); a mail client that does what I need (Claws Mail); and I could go on for hours, so feel free to ask if you want to know more…

8. What are the specs of your current boxes?

Currently I own a Clevo W870CU barebone laptop that I put together; it features an Intel 5 Series/3400 Series chipset, a Full HD 17 inch screen and enough interface ports. The processor in it is an Intel Core i7 CPU Q 720. As hard disks I use an Intel X25-M 160 GB SSD and a Seagate Momentus 7200.3 320 GB HDD. There are also an NVIDIA GeForce GTX 285M, an Intel WiFi Link 5100 and a Realtek RTL8111/8168/8411 PCIe Gigabit Ethernet controller inside.

As for the Raspberry Pi, it is a model B; you can find its specifications here. I gave it a 32 GB SD card with Gentoo on it, where the 32 GB gives it some room before wearing out. Alongside it there are two external drives of a few terabytes to store big data and backups.

The Raspberry Pi here acts as a kind of cheap all-in-one NAS and/or media solution.

9. As a Gentoo Developer what are some of your accomplishments?

On the kernel team, the kernel eclass and genpatches scripts were adapted to bring support for experimental patches; this allows adding experimental patches to kernel packages using USE=experimental, without applying them by default. A condition for an experimental patch to be added is that applying it does not change the runtime behavior; in other words, we want changes to be guarded by a config option, in addition to USE=experimental. The eventual end goal is to have a lot of the regular experimental patches supported, to deduplicate work among kernel packages and our users.

Besides improving the kernel packaging, I maintain packages that I use and/or packages that need maintenance; at the moment, MATE is being brought into the Portage tree. I also do Quality Assurance work to keep the quality of the Portage tree high.

10. What would be your dream job?

While I don’t have anything specific in mind, it would involve developing “something”.

In the context of the business world, that could be solutions that aid users with their daily tasks; in the context of the gaming world, maybe some indie game, in the hope that it takes off; and last, I listen to music a lot, so maybe it could be some kind of computer science solution within that field.

I’d like to avoid relying on yet-to-be-discovered science, and rather rely on what is a given already, such that becoming popular is the only real risk. Once popularity has been obtained, exploration might become an option; although one should not ignore that exploration can lead to popularity, but as said, that is not without risk.

11. What users would you like to recruit to become Gentoo Developers?

Good question; many people are qualified, and anyone who’s interested can expect help from us.

12. What gives you the most enjoyment within the Gentoo community?

Giving back to the community as an appreciation of what the community has given to me.

Gentoo Galaxy: Keeping History of Gentoo

(by Seemant Kulleen)

Gentoo Galaxy aims to make sure that Gentoo’s history is as accurate as possible, that every Gentoo developer’s contribution is acknowledged and valued. We’re starting with our list of Gentoo developers. We currently have all developers who have been active in Bugzilla and/or the 4 main CVS repositories throughout Gentoo’s history represented in a visualization here: http://kulleen.org/gentoo/galaxy

That page contains a list of developers for whom we need more information — we want to visualize everybody’s contributions. If you are or know a developer on that list, please get in touch with us via Bugzilla, e-mail, Twitter, Google Plus, or IRC in #gentoo or #gentoo-dev.

Trustee News

Gentoo Foundation 2013 Treasury Summary
In the fiscal year 2013, covering the period of July 1st through June 30th, we had total assets of $73,494.40. Our main income was $7,000.00 from GSoC; next were donations through PayPal at $6,386.94, and the official Gentoo store generated $558.85 in commissions.

Our expenses totaled $3,396.01, with $2,399.23 going to Gentoo GSoC 2012 mentors’ summit travel reimbursement.

Our expenses are kept to a minimum thanks to all our generous sponsors, plus the work of our Infrastructure team in securing donations of hosting, hardware and bandwidth.

Requests for Funds, Project Support, or Equipment
Requests for funds, project support, or equipment need to be sent to the Foundation in the form of a proposal. This proposal is to inform all trustees of the need (not all of them will be aware of the need or the background of the situation). The proposal process will also help to maintain a trusting relationship between the Foundation and its donors. Donors know and expect that without exception money will only be spent after a proposal and vote by the Board of Trustees. Additionally, the proposals will be archived to provide accountability for money spent.

Please review our policy documentation for more information.

News Items

Subject: Ruby 1.8 removal, Ruby 1.9 and Ruby 2.0 activated by default

The Gentoo Ruby team would like to inform you that the default active ruby targets have changed from “ruby19 ruby18” to “ruby19 ruby20”.

It is about time, because Ruby 1.8 was retired by upstream in July 2013 [1] and has known security issues (CVE-2013-4164). In Gentoo, we’re going to remove the currently package.masked Ruby MRI 1.8 soon. All packages depending on ruby have been converted to support at least Ruby 1.9, or were added to package.mask at the same time as Ruby 1.8. In case of issues during or after the upgrade, feel free to file a bug at bugs.gentoo.org.

If your currently eselected Ruby interpreter is ruby18, our recommendation is to change it to ruby19. [2] At the moment, Ruby MRI 1.9 delivers the best support of all Ruby interpreters in the tree.

Check the current setting via:
eselect ruby show

Change the current setting to Ruby MRI 1.9 via:
eselect ruby set ruby19

[1] https://www.ruby-lang.org/en/news/2013/06/30/we-retire-1-8-7/
[2] https://wiki.gentoo.org/wiki/Project:Ruby/Ruby_1.9_migration

Gentoo Developer Stats

Summary

Gentoo is made up of 252 active developers, of which 38 are currently away.
Gentoo has recruited a total of 794 developers since its inception.

Changes

The following developers have recently changed roles:
Jason A. Donenfeld (zx2c4) Joined the systemd project

Additions

The following developers have recently joined the project:
None this month

Moves

The following developers recently left the Gentoo project:
None this month

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 161
Packages 17342
Ebuilds 36489
Architecture Stable Testing Total % of Packages
alpha 3612 510 4122 23.77%
amd64 10703 6142 16845 97.13%
amd64-fbsd 0 1577 1577 9.09%
arm 2631 1636 4267 24.61%
hppa 3034 484 3518 20.29%
ia64 3186 575 3761 21.69%
m68k 576 88 664 3.83%
mips 4 2362 2366 13.64%
ppc 6865 2349 9214 53.13%
ppc64 4334 849 5183 29.89%
s390 1493 290 1783 10.28%
sh 1714 339 2053 11.84%
sparc 4135 877 5012 28.90%
sparc-fbsd 0 323 323 1.86%
x86 11418 5183 16601 95.73%
x86-fbsd 0 3233 3233 18.64%


Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201403-08 dev-perl/PlRPC PlRPC: Arbitrary code execution 497692
201403-07 sys-apps/grep grep: User-assisted execution of arbitrary code 448246
201403-06 net-libs/libupnp libupnp: Arbitrary code execution 454570
201403-05 app-editors/emacs GNU Emacs: Multiple vulnerabilities 398239
201403-04 dev-qt/qtcore QtCore: Denial of Service 494728
201403-03 sys-apps/file file: Denial of Service 501574
201403-02 dev-libs/libyaml LibYAML: Arbitrary code execution 499920
201403-01 www-client/chromium Chromium-V8: Multiple vulnerabilities 486742

Package Removals/Additions

Removals

Package Developer Date
x11-misc/slimlock titanofold 10 Mar 2014
dev-libs/ido ssuominen 15 Mar 2014
dev-ruby/ruby-bdb mrueg 15 Mar 2014
www-servers/mongrel_cluster mrueg 15 Mar 2014
virtual/emacs-cedet ulm 17 Mar 2014
gnustep-libs/cddb voyageur 17 Mar 2014
app-emacs/nxml-mode ulm 17 Mar 2014
app-emacs/erc ulm 17 Mar 2014
app-emacs/cperl-mode ulm 17 Mar 2014
app-emacs/alt-font-menu ulm 17 Mar 2014
app-emacs/u-vm-color ulm 17 Mar 2014
app-emacs/eperiodic ulm 20 Mar 2014
app-emacs/view-process ulm 20 Mar 2014
media-sound/audio-entropyd angelos 22 Mar 2014
app-emacs/http-emacs ulm 23 Mar 2014
app-emacs/mairix ulm 23 Mar 2014

Additions

Package Developer Date
dev-python/pretend radhermit 01 Mar 2014
dev-python/cryptography radhermit 01 Mar 2014
dev-java/boilerpipe ercpe 01 Mar 2014
media-plugins/gst-plugins-vaapi pacho 01 Mar 2014
dev-db/derby ercpe 01 Mar 2014
net-analyzer/masscan robbat2 01 Mar 2014
mate-base/mate-desktop tomwij 02 Mar 2014
mate-extra/mate-dialogs tomwij 02 Mar 2014
mate-extra/mate-polkit tomwij 02 Mar 2014
x11-libs/libmatewnck tomwij 02 Mar 2014
dev-python/ssl-fetch dolsen 02 Mar 2014
dev-java/hamcrest-integration ercpe 02 Mar 2014
sci-libs/Fiona slis 03 Mar 2014
dev-python/ipdbplugin slis 03 Mar 2014
sci-libs/pyshp slis 03 Mar 2014
dev-util/lttng-modules dlan 04 Mar 2014
dev-util/lttng-ust dlan 04 Mar 2014
dev-util/lttng-tools dlan 04 Mar 2014
dev-util/babeltrace dlan 04 Mar 2014
games-misc/papers-please hasufell 04 Mar 2014
dev-haskell/scientific qnikst 04 Mar 2014
dev-haskell/text-stream-decode qnikst 04 Mar 2014
kde-base/kwalletmanager johu 04 Mar 2014
mate-base/mate-panel tomwij 05 Mar 2014
mate-base/mate-settings-daemon tomwij 05 Mar 2014
net-wireless/crackle zerochaos 05 Mar 2014
dev-util/appdata-tools polynomial-c 06 Mar 2014
media-libs/libepoxy mattst88 06 Mar 2014
dev-ruby/magic mrueg 06 Mar 2014
net-wireless/mate-bluetooth tomwij 07 Mar 2014
x11-themes/mate-icon-theme tomwij 07 Mar 2014
x11-wm/mate-window-manager tomwij 07 Mar 2014
dev-ruby/ruby-feedparser mrueg 07 Mar 2014
dev-java/dnsjava ercpe 07 Mar 2014
dev-haskell/abstract-deque-tests gienah 09 Mar 2014
dev-haskell/exceptions gienah 09 Mar 2014
dev-haskell/errorcall-eq-instance gienah 09 Mar 2014
dev-haskell/asn1-encoding gienah 09 Mar 2014
dev-haskell/asn1-parse gienah 09 Mar 2014
dev-haskell/chunked-data gienah 09 Mar 2014
dev-haskell/enclosed-exceptions gienah 09 Mar 2014
dev-haskell/esqueleto gienah 09 Mar 2014
dev-haskell/foldl gienah 09 Mar 2014
dev-haskell/x509 gienah 09 Mar 2014
dev-haskell/x509-store gienah 09 Mar 2014
dev-haskell/x509-system gienah 09 Mar 2014
dev-haskell/x509-validation gienah 09 Mar 2014
mate-base/mate-file-manager tomwij 09 Mar 2014
mate-extra/mate-calc tomwij 09 Mar 2014
mate-extra/mate-character-map tomwij 09 Mar 2014
mate-extra/mate-power-manager tomwij 09 Mar 2014
mate-extra/mate-screensaver tomwij 10 Mar 2014
mate-extra/mate-sensors-applet tomwij 10 Mar 2014
dev-python/ansicolor jlec 10 Mar 2014
dev-libs/liblogging ultrabug 10 Mar 2014
sys-apps/gentoo-functions williamh 10 Mar 2014
mate-extra/mate-system-monitor tomwij 10 Mar 2014
mate-extra/mate-utils tomwij 11 Mar 2014
x11-terms/mate-terminal tomwij 11 Mar 2014
x11-themes/mate-backgrounds tomwij 11 Mar 2014
x11-themes/mate-themes tomwij 11 Mar 2014
media-video/atomicparsley-wez ssuominen 11 Mar 2014
app-arch/mate-file-archiver tomwij 12 Mar 2014
app-editors/mate-text-editor tomwij 12 Mar 2014
app-text/mate-document-viewer tomwij 12 Mar 2014
games-misc/games-envd hasufell 12 Mar 2014
perl-core/Dumpvalue zlogene 12 Mar 2014
dev-python/python-caja tomwij 12 Mar 2014
dev-haskell/fingertree qnikst 12 Mar 2014
dev-haskell/reducers qnikst 12 Mar 2014
dev-haskell/monadrandom qnikst 12 Mar 2014
dev-haskell/either qnikst 12 Mar 2014
media-libs/x265 aballier 12 Mar 2014
dev-haskell/tasty-rerun qnikst 12 Mar 2014
dev-haskell/ekg qnikst 12 Mar 2014
dev-lang/lfe patrick 13 Mar 2014
dev-ml/optcomp aballier 13 Mar 2014
dev-ml/deriving aballier 13 Mar 2014
dev-python/venusian patrick 14 Mar 2014
dev-python/pyramid patrick 14 Mar 2014
kde-misc/about-distro johu 14 Mar 2014
dev-haskell/errors qnikst 14 Mar 2014
perl-core/Math-Complex zlogene 14 Mar 2014
dev-libs/ido ssuominen 15 Mar 2014
dev-python/dugong radhermit 17 Mar 2014
mate-base/mate-applets tomwij 17 Mar 2014
mate-extra/caja-dropbox tomwij 17 Mar 2014
mate-extra/mate-file-manager-image-converter tomwij 17 Mar 2014
mate-extra/mate-file-manager-open-terminal tomwij 17 Mar 2014
mate-extra/mate-file-manager-sendto tomwij 17 Mar 2014
mate-extra/mate-file-manager-share tomwij 17 Mar 2014
dev-util/emilpro zerochaos 18 Mar 2014
kde-misc/kcmsystemd johu 18 Mar 2014
media-gfx/mate-image-viewer tomwij 19 Mar 2014
x11-misc/mate-menu-editor tomwij 19 Mar 2014
net-analyzer/mate-netspeed tomwij 19 Mar 2014
x11-misc/mate-notification-daemon tomwij 19 Mar 2014
x11-themes/mate-icon-theme-faenza tomwij 19 Mar 2014
dev-ruby/rb-readline zerochaos 19 Mar 2014
dev-vcs/hg-fast-export ottxor 21 Mar 2014
sys-apps/audio-entropyd angelos 22 Mar 2014
dev-vcs/git-flow johu 22 Mar 2014
app-emacs/gnuplot-mode ulm 22 Mar 2014
app-admin/mate-system-tools tomwij 22 Mar 2014
mate-extra/mate-media tomwij 22 Mar 2014
mate-base/mate-control-center tomwij 22 Mar 2014
net-misc/portspoof zerochaos 22 Mar 2014
app-leechcraft/lc-ooronee maksbotan 23 Mar 2014
app-leechcraft/lc-cpuload maksbotan 23 Mar 2014
app-leechcraft/lc-certmgr maksbotan 23 Mar 2014
mate-extra/mate-user-share tomwij 23 Mar 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 25 February 2014 and 27 March 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1820
Closed 1307
Not fixed 177
Duplicates 159
Total 5600
Blocker 4
Critical 19
Major 65

Closed bug ranking

The developers and teams who have closed the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Python Gentoo Team 76
2 Perl Devs @ Gentoo 63
3 Gentoo KDE team 47
4 Gentoo Security 41
5 Gentoo's Team for Core System packages 41
6 Gentoo's Haskell Language team 35
7 Gentoo Linux Gnome Desktop Team 31
8 GNU Emacs Team 29
9 Default Assignee for Orphaned Packages 28
10 Others 915


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 119
2 Gentoo Security 95
3 Gentoo Games 75
4 Gentoo KDE team 57
5 Gentoo Linux Gnome Desktop Team 57
6 Python Gentoo Team 52
7 Gentoo's Team for Core System packages 51
8 Gentoo's Haskell Language team 41
9 GNU Emacs Team 41
10 Others 1231


Tip of the month

Gentoolkit has a little known utility called enalyze.

Enalyze analyzes the installation information that Gentoo keeps for all installed packages and checks it against the current settings.

There are 2 sub-modules:
- the “analyze” module produces the reports, and
- the “rebuild” module, which rebuilds package.use, package.accept_keywords, and package.unmask files that can be placed in /etc/portage.

The difference between enalyze and equery is that equery performs specific queries, while enalyze produces complete reports. Essentially, it can be used as a tune-up or repair kit for your Gentoo system. It does not do everything for you; it leaves some of the decision making to you. After reviewing the reports, you may want to edit your make.conf to optimize its settings. An interesting feature is that enalyze supports the creation of new package.use, package.accept_keywords or package.unmask files based on the currently installed packages, your current profile and your make.conf settings. Through this, enalyze can help you rebuild these files or remove obsolete entries from them.

Please note that it does not use or modify existing /etc/portage/package.* files.

For example:

# enalyze analyze -v use

This produces a report of all USE flags used by packages on your system, as well as how they are used. It shows whether a USE flag is enabled or disabled, and whether the USE flag has a “default” setting (a summary of profile-enabled USE flags, global make.defaults USE flags, etc.). When called with the -v module option, the packages that use each USE flag are listed as well.

From that information you can edit your make.conf’s USE= and remove any flags that are already defaulted. If there is a flag that more than a few packages use, you could add it to USE= instead of relying on having that flag in package.use for those packages.
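
For instance (a hypothetical make.conf line; the flags are purely illustrative), if the report shows that many packages all get the same flag through package.use entries, adding it once in make.conf replaces them:

# hypothetical make.conf USE line after moving a commonly used flag here;
# flags listed in USE apply tree-wide, on top of the profile defaults
USE="-kde alsa offensive"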

When finished with the above:

# enalyze rebuild use

This will generate a new package.use file (neatly sorted) containing only the entries needed to preserve the current state of the installed packages. Once you have checked over the file and added custom tweaks (to your satisfaction), you can replace the existing or missing file in /etc/portage.

It also runs completely as any user in the portage group; there is no need to run it with superuser rights. Any files generated are saved in the user’s home directory.

Tip: It is also very useful when changing profiles. Just rerun the analysis and rebuilds to adapt to the new profile and the new defaults.

P.S. There is room for the utility to gain many more report and rebuild options, so submit your requests (and hopefully code).

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Proof of concept for USE enabled policies (March 31, 2014, 16:33 UTC)

tl;dr: Some (-9999) policy ebuilds now have USE support for building in (or leaving out) SELinux policy statements.

One of the “problems” I have been facing since I took on the maintenance of SELinux policies within Gentoo Hardened is the (seeming) inability to make a “least privilege” policy that suits the flexibility that Gentoo offers. As a quick recap: SELinux policies describe the “acceptable behavior” of an application (well, domain to be exact), often known as the “normalized behavior” in the security world. When an application (which runs within a SELinux domain) wants to perform some action which is not part of the policy, then this action is denied.

Some applications can have very broad acceptable behavior. A web server for instance might need to connect to a database, but that is not the case if the web server only serves static information, or dynamic information that doesn’t need a database. To support this, SELinux has booleans through which optional policy statements can be enabled or disabled. So far so good.

Let’s look at a second example: ALSA. When ALSA enabled applications want to access the sound devices, they use IPC resources to “collaborate” around the sound subsystem (semaphores and shared memory to be exact). Semaphores inherit the type of the domain that first created the semaphore (so if mplayer creates it, then the semaphore has the mplayer_t context) whereas shared memory usually gets the tmpfs-related type (mplayer_tmpfs_t). When a second application wants to access the sound device as well, it needs access to the semaphore and shared memory. Assuming this second application is the browser, then mozilla_t needs access to semaphores by mplayer_t. And the same for chromium_t. Or java_t applications that are ALSA-enabled. And alsa_t. And all other applications that are ALSA enabled.

In Gentoo, ALSA support can be made optional through USE="alsa". If a user decides not to use ALSA, then it doesn’t make sense to allow all those domains access to each others’ semaphores and shared memory. And although SELinux booleans can help, this would mean that for each application domain, something like the following policy would need to be, optionally, allowed:

# For the mplayer_t domain:
optional_policy(`
  tunable_policy(`use_alsa',`
    mozilla_rw_semaphores(mplayer_t)
    mozilla_rw_shm(mplayer_t)
    mozilla_tmpfs_rw_files(mplayer_t)
  ')
')

optional_policy(`
  tunable_policy(`use_alsa',`
    chromium_rw_semaphores(mplayer_t)
    chromium_rw_shm(mplayer_t)
    chromium_tmpfs_rw_files(mplayer_t)
  ')
')

And this for all domains that are ALSA-enabled. Every time a new ALSA-aware application is added, the same code needs to be added to all policies. And this only uses a single SELinux boolean (whereas Gentoo supports USE="alsa" on a per-package level), although we could create separate booleans for each domain if we wanted to. Not that that would make it more manageable.

One way of dealing with this would be to use attributes. Say we have a policy like so:

attribute alsadomain;
attribute alsatmpfsfile;

allow alsadomain alsadomain:sem rw_sem_perms;
allow alsadomain alsadomain:shm rw_shm_perms;
allow alsadomain alsatmpfsfile:file rw_file_perms;

By assigning the attribute to the proper domains whenever ALSA support is needed, we can toggle this more easily:

# In alsa.if
interface(`alsa_domain',`
  gen_require(`
    attribute alsadomain;
    attribute alsatmpfsfile;
  ')
  typeattribute $1 alsadomain;
  typeattribute $2 alsatmpfsfile;
')


# In mplayer.te
optional_policy(`
  tunable_policy(`use_alsa',`
    alsa_domain(mplayer_t, mplayer_tmpfs_t)
  ')
')

That would solve the problem of needlessly adding more calls in a policy for every ALSA application. And hey, we can probably live with either a global boolean (use_alsa) or per-domain one (mplayer_use_alsa) and toggle this according to our needs.

Sadly, the above is not possible: one cannot define typeattribute assignments inside a tunable_policy block, as attributes belong to the non-conditional part of a SELinux policy. The solution would be to create build-time conditionals (rather than run-time ones):

ifdef(`use_alsa',`
  optional_policy(`
    alsa_domain(mplayer_t, mplayer_tmpfs_t)
  ')
')

This does mean that use_alsa has to be known when the policy is built. For Gentoo, that’s not that bad, as policies are part of separate packages, like sec-policy/selinux-mplayer. So what I now added was USE-enabled build-time decisions that trigger this code. The selinux-mplayer package has IUSE="alsa" which will enable, if set, the use_alsa build-time conditional.

As a result, we now support a better, fine-grained privilege setting inside the SELinux policy which is triggered through the proper USE flags.

Is this a perfect solution? No, but it is manageable and known to Gentoo users. It isn’t perfect, because it listens to the USE flag setting for the selinux-mplayer package (and of course globally set USE flags) but doesn’t “detect” whether the mplayer application (for which the policy is meant) is or isn’t built with USE="alsa". So users/administrators will need to keep this in mind when using package-local USE flag definitions.

Also, this will make it a bit more troublesome for myself to manage the SELinux policy for Gentoo (as upstream will not use this setup, and as such patches from upstream might need a few manual corrections before they apply to our tree). However, I gladly take that up if it means my system will have somewhat better confinement.

March 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

When investigating AVC denials, some denials show a path that isn’t human readable, like so:

type=AVC msg=audit(1396189189.734:1913): avc:  denied  { execute } for  pid=17955 comm="emerge" path=2F7661722F666669737A69596157202864656C6574656429 dev="dm-3" ino=1838 scontext=staff_u:sysadm_r:portage_t tcontext=staff_u:object_r:var_t tclass=file

To know what this file is (or actually was — such hex-encoded paths appear when the path contains a space; here the file had been deleted, so “ (deleted)” was appended to its path), you need to hex-decode the value. For instance, with Python:

~$ python -c "import base64; print(base64.b16decode(\"2F7661722F666669737A69596157202864656C6574656429\"));";
b'/var/ffisziYaW (deleted)'

In the above example, /var/ffisziYaW was the path of the file (note that, as it starts with ffi, it is caused by libffi which I’ve blogged about before). The reason that the file was deleted at the time the denial was generated is because what libffi does is create a file, get the file descriptor and unlink the file (so it is deleted and only the (open) file handle allows for accessing it) before it wants to execute it. As a result, the execution (which is denied) triggers a denial for the file whose path is no longer valid (as it is now appended with “ (deleted)”).

Edit 1: Thanks to IooNag who pointed me to the truth that it is due to a space in the file name, not because it was deleted. Having the file deleted makes the path be appended with “ (deleted)”, which contains a space.
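
For reference, the create-and-unlink dance described above boils down to something like this (a sketch of the pattern, not libffi’s actual code; the path template is illustrative):

#include <stdlib.h>   /* mkstemp */
#include <unistd.h>   /* unlink, close */

int main() {
    char path[] = "/var/ffiXXXXXX";  // illustrative template; mkstemp fills in the X's
    int fd = mkstemp(path);          // create the temporary file and get a descriptor
    if (fd < 0)
        return 1;
    unlink(path);                    // delete the file: only the open fd still refers to it
    // ... write code into fd and attempt to execute it; a denial logged at this
    // point reports the path suffixed with " (deleted)", hence the hex encoding ...
    close(fd);
    return 0;
}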

Managing Inter-Process Communication (IPC) (March 30, 2014, 10:50 UTC)

As a Linux administrator, you’ll eventually need to concern yourself with Inter-Process Communication (IPC). The IPC primitives that most POSIX operating systems provide are semaphores, shared memory and message queues. On Linux, the first utility that helps you with those primitives is ipcs. Let’s start with semaphores first.

Semaphores in general are integer variables that have a non-negative value and are accessible by multiple processes (users/tasks/whatever). The idea behind a semaphore is that it is used to streamline access to a shared resource. For instance, a device’s control channel might be used by multiple applications, but only one application at a time is allowed to put something on the channel. Through semaphores, applications check the semaphore value. If it is zero, they wait. If it is higher, they attempt to decrement the semaphore. If that fails (because another application decremented the semaphore in the mean time) the application waits; otherwise it continues, having successfully decremented the semaphore. In effect, it acts as a sort of lock on a common resource.
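
To make that concrete, here is a minimal sketch (my own illustration, with an arbitrary key and most error handling trimmed) of the take/release cycle using the SysV semaphore API that ipcs reports on:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <cstdio>

// glibc requires the caller to define this union for semctl()
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main() {
    // Create (or attach to) a set containing one semaphore; the key is arbitrary
    int semid = semget(0x56a4d5, 1, IPC_CREAT | 0660);
    if (semid < 0) { perror("semget"); return 1; }

    union semun arg; arg.val = 1;
    semctl(semid, 0, SETVAL, arg);       // initial value 1: one holder at a time

    struct sembuf take = {0, -1, 0};     // decrement; blocks while the value is 0
    semop(semid, &take, 1);

    // ... exclusive access to the shared resource happens here ...

    struct sembuf release = {0, +1, 0};  // increment: let the next process in
    semop(semid, &release, 1);
    return 0;
}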

An example you can come across is with ALSA. Some of the ALSA plugins (such as dmix) use IPC semaphores to allow multiple ALSA applications to connect to and use the sound subsystem. When an ALSA-enabled application is using the sound system, you’ll see that a semaphore is active:

~$ ipcs -s
------ Semaphore Arrays --------
key        semid      owner      perms      nsems     
0x0056a4d5 32768      swift      660        1

More information about a particular semaphore can be obtained using ipcs -s -i SEMID where SEMID is the value in the semid column:

~$ ipcs -s -i 32768
Semaphore Array semid=32768
uid=1001         gid=18  cuid=1001       cgid=100
mode=0660, access_perms=0660
nsems = 1
otime = Sun Mar 30 12:33:46 2014  
ctime = Sun Mar 30 12:33:38 2014  
semnum     value      ncount     zcount     pid       
0          0          0          0          32061

As with all IPC resources, we have information about the owner of the semaphore (uid and gid), the creator of the semaphore (cuid and cgid) as well as its access mask, similar to the file access mask on Linux systems (mode and access_perms). Specific to the IPC semaphore, you can also notice the nsems = 1. Unlike the general semaphores, IPC semaphores are actually a wrapper around one or more “real” semaphores. The nsems variable shows how many “real” semaphores are handled by the IPC semaphore.

Another very popular IPC resource is shared memory. This is memory that is accessible by multiple applications, and provides a very versatile approach to sharing information and collaboration between processes. Usually, a semaphore is also used to govern writes and reads to the shared memory, so that a process that wants to update a part of the shared memory takes a semaphore (a sort of lock), makes the updates, and then increments the semaphore again.
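
As a bare-bones illustration (again my own sketch: the key and size are made up, and the semaphore guarding is elided), attaching to and writing into such a segment looks like this:

#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstring>
#include <cstdio>

int main() {
    // Create (or attach to) a 4 KiB segment under an arbitrary key
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0660);
    if (shmid < 0) { perror("shmget"); return 1; }

    // Map the segment into this process' address space
    void *mem = shmat(shmid, nullptr, 0);
    if (mem == reinterpret_cast<void *>(-1)) { perror("shmat"); return 1; }

    // Every process attached to the same segment sees this write; in real
    // code a semaphore would serialize access, as described above
    std::strcpy(static_cast<char *>(mem), "hello from one process");

    shmdt(mem);  // detach; the segment itself persists until explicitly removed
    return 0;
}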

You can see the currently defined shared memory using ipcs -m:

~$ ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x00000000 655370     swift      600        393216     2          dest

Again, more information can be obtained through -i SHMID. Interesting values to look at as well are the creator PID (just in case the process still runs, or for tracking through the audit logs) and the last PID that operated on the shared memory (a process which might also no longer exist, but is still an important value to investigate).

~$ ipcs -m -p
------ Shared Memory Creator/Last-op PIDs --------
shmid      owner      cpid       lpid      
655370     swift      6147       6017

~$ ps -ef | grep -E '(6147|6017)'
root      6017  6016  0 09:49 tty1     00:01:30 /usr/bin/X -nolisten tcp :0 -auth /home/swift/.serverauth.6000
swift     6147     1  2 09:50 tty1     00:05:10 firefox

In this case, the shared memory is most likely used by firefox to exchange UI rendering data with the X server.

The last IPC resource type is message queues, through which processes can put messages on a queue and remove messages (by reading them) from it. I don’t have an example at hand for the moment, but just like semaphores and shared memory, queues can be looked at through ipcs -q, with more information available through ipcs -q -i MSQID.

Now what if you need to operate on these? For this, you can use ipcrm to remove an IPC resource, whereas ipcmk can be used to create one (although the latter is not often used for administrative purposes, whereas ipcrm can help you troubleshoot and fix issues without having to reboot a system). Of course, removing IPC resources from the system should only be done when there is a bug in the application(s) that use them (for instance, a process decreased a semaphore and then crashed – in that case, remove the semaphore and start one of the applications that also operates on it, as they usually recreate it and continue happily).
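
For example, removing a semaphore by its identifier (reusing the semid from the earlier listing) looks like this:

~# ipcrm -s 32768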

Now before finishing this post, I do need to tell you about the difference between an IPC resource key and its identifier. The key is like a path or URL: a value used by the applications to find and obtain existing IPC resources (something like “give me the list of semaphores that I can access with key 12345”). The identifier is a unique ID generated by the Linux kernel at the moment the IPC resource is created. Unlike the key, which can be used for multiple IPC resources, the identifier is unique; this is why the identifier, rather than the key, is used in the ipcs -i command. Also, this means that if applications properly documented their IPC key usage, we would easily know what each IPC resource is used for.

March 29, 2014
Robin Johnson a.k.a. robbat2 (homepage, bugs)

This is a slightly edited copy of an email I send to the mailing lists for my local hackspace, VHS. I run their mailing lists presently for historical reasons, but we're working on migrating them slowly.


Hi all,

Speaking as your email list administrator here. I've tried to keep the logs below as intact as possible; I've censored only one user's domain (as it is explicitly identifying information) and two other recipient addresses.

There have been a lot of reports lately of bounce notices from the list, and users have correctly contacted me, wondering what's going on. The bounce messages are seen primarily by users on Gmail and hosted Google Apps, but the problems do ultimately affect everybody.

67.6% of the vhs-general list uses either Gmail or Google Apps (347 subs of 513). For the vhs-members list it's 68.3% (both stats were created by checking whether the MX record for each subscriber's domain points to Google).

Google decides that a certain list message is too much like spam because of two things:

  • because of content
  • because of DMARC policy

Content:

We CAN do something about the content.

Please don't send email that consists of only one or two lines, containing a URL and a short line of text. It's really suspicious and spam-like.

Include a better description (two or three lines) with the URL.

This gets an entry in the mailserver logs like:

delivery 47198: failure:
+173.194.79.26_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_[66.196.40.251______12]_Our_system_has_detected_that_this_message_is/550-5.7.1_likely_unsolicited_mail._To_reduce_the_amount_of_spam_sent_to_Gmail,/550-5.7.1_this_message_has_been_blocked._Please_visit/550-5.7.1_http://support.google.com/m
+ail/bin/answer.py?hl=en&answer=188131_for/550_5.7.1_more_information._mu18si1139639pab.287_-_gsmtp/

That was triggered by this email earlier in the month:

> Subject: Kano OS for RasPi
> http://kano.me/downloads
> Apparently it's faster than Rasbian

DMARC policy:

TL;DR: If you work on an open-source mailing list app, please implement DMARC support ASAP!

Google and other big mail hosters have been working on an anti-spam measure called DMARC [1].

Unlike many prior attempts, it latches onto the From header as well as the SMTP envelope sender, and this unfortunately interferes with mailing lists [2], [3].

I do applaud the concept behind DMARC, but the rollout seems to be hurting lots of the small guys.

At least one person (Eric Sachs) at Google is aware of this [4]. There is no useful workaround that I can enact as a list admin right now, other than asking the one user in question to tweak his mailserver if possible.

There is also no completed open source support I can find for DMARC. Per the Google post above, the Mailman project is working on it [5], [6], but it's not yet available as of the last release. Our lists run on ezmlm-idx, and I run some other very large lists using mlmmj (gentoo.org) and sympa; none of them have DMARC support.

The problem is only triggering with a few conditions so far:

  • Recipient is on a mail service that implements DMARC (and DKIM and SPF)
  • Sender is on a domain that has a DMARC policy of reject

Of the 115 unique domains used by subscribers on this list, here are all the DMARC policies:

_dmarc.gmail.com.       600  IN TXT "v=DMARC1\; p=none\; rua=mailto:mailauth-reports@google.com"
_dmarc.USERDOMAIN.ca.   7200 IN TXT "v=DMARC1\; p=reject\; rua=mailto:azrxfkte@ag.dmarcian.com\; ruf=mailto:azrxfkte@fr.dmarcian.com\; adkim=s\; aspf=s"
_dmarc.icloud.com.      3600 IN TXT "v=DMARC1\; p=none\; rua=mailto:dmarc_agg@auth.returnpath.net, mailto:d@rua.agari.com\; ruf=mailto:d@ruf.agari.com, mailto:dmarc_afrf@auth.returnpath.net\;rf=afrf\;pct=100"
_dmarc.mac.com.         3600 IN TXT "v=DMARC1\; p=none\; rua=mailto:d@rua.agari.com\; ruf=mailto:d@ruf.agari.com\;"
_dmarc.me.com.          3600 IN TXT "v=DMARC1\; p=none\; rua=mailto:d@rua.agari.com\; ruf=mailto:d@ruf.agari.com\;"
_dmarc.yahoo.ca.        7200 IN TXT "v=DMARC1\; p=none\; pct=100\; rua=mailto:dmarc-yahoo-rua@yahoo-inc.com\;"
_dmarc.yahoo.com.       1800 IN TXT "v=DMARC1\; p=none\; pct=100\; rua=mailto:dmarc-yahoo-rua@yahoo-inc.com\;"
_dmarc.yahoo.co.uk.     1800 IN TXT "v=DMARC1\; p=none\; pct=100\; rua=mailto:dmarc-yahoo-rua@yahoo-inc.com\;"

Only one of those includes a reject policy, but I suspect it's a matter of time until more of them include it. I'm going to use USERDOMAIN.ca for the rest of the example; that user is indirectly responsible for lots of the rejects we are seeing.

Step 1.

User sends this email.

From: A User <someuser@userdomain.ca>
To: vhs-general@lists.hackspace.ca

Delivered to list server via SMTP (these two addresses form the SMTP envelope)

MAIL FROM:<someuser@userdomain.ca>
RCPT TO:<vhs-general@lists.hackspace.ca>

Step 2.

If the MAIL-FROM envelope address is on the list of list subscribers, your message is accepted.

Step 3.0.

The list prepares the mail for outgoing delivery, and uses SMTP VERP [7] to get the mail server to send the new message. This means it hands off a single copy of the email, along with a list of all recipients for the mail. The envelope from address in this case encodes the name of the list and the number of the mail in the archive.

If it was delivering to me (robbat2@orbis-terrarum.net), the outgoing SMTP connection would look roughly like:

MAIL FROM:<vhs-general-return-18094-robbat2=orbis-terrarum.net@lists.hackspace.ca>
RCPT TO:<robbat2@orbis-terrarum.net>

And the mail itself still looks like:

From: A User <someuser@userdomain.ca>
To: vhs-general@lists.hackspace.ca

Step 3.1.

I got this email, and if I open it I see this telling me about the SMTP details:

Return-Path: <vhs-general-return-18094-robbat2=orbis-terrarum.net@lists.hackspace.ca>

I don't implement DMARC on my domain. If my system bounced the email, it would have gone to that address, and the list app would know that message 18094 on list vhs-general bounced to user robbat2@orbis-terrarum.net.

Step 3.2.

Google DOES implement DMARC, so let's run through that.

The key part of DMARC is that it takes the domain from the From header.

_dmarc.USERDOMAIN.ca.   7200 IN TXT "v=DMARC1\; p=reject\; rua=mailto:azrxfkte@ag.dmarcian.com\; ruf=mailto:azrxfkte@fr.dmarcian.com\; adkim=s\; aspf=s"

The relevant parts to us are:

p=reject, aspf=s

The ASPF section applies strict mode, and says that mail with a From header of someuser@USERDOMAIN.ca must have an exact MAIL FROM transaction match of @USERDOMAIN.ca.

It doesn't match, as the list changed the MAIL FROM address. The p=reject says to reject the mail if this happens.

This runs counter to the design principles of mailing lists, so DMARC has a bunch of options, all of which require changing the mail in some way.

Here are the logs from the above failure:

> 2014-03-19 11:19:50.783996500 new msg 98907
> 2014-03-19 11:19:50.783998500 info msg 98907: bytes 8864 from <vhs-general-return-18094-@lists.hackspace.ca-@[]> qp 32511 uid 89
> 2014-03-19 11:19:50.785359500 starting delivery 211352: msg 98907 to remote user1@gappsdomain.com
> 2014-03-19 11:19:50.785385500 status: local 1/10 remote 1/40
> 2014-03-19 11:19:50.785450500 starting delivery 211353: msg 98907 to remote user2@gmail.com
> ...
> 2014-03-19 11:19:58.713558500 delivery 211352: failure:
+74.125.25.27_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_Unauthenticated_email_from_USERDOMAIN.ca_is_not_accepted_due_to_domain's/550-5.7.1_DMARC_policy._Please_contact_administrator_of_USERDOMAIN.ca_domain_if/550-5.7.1_this_was_a_legitimate_mail._Please_visit/550-5.7.1__http://support.google.com
+/mail/answer/2451690_to_learn_about_DMARC/550_5.7.1_initiative._ub8si9386628pac.133_-_gsmtp/
> 2014-03-19 11:19:59.053816500 delivery 211353: failure:
+173.194.79.26_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_Unauthenticated_email_from_USERDOMAIN.ca_is_not_accepted_due_to_domain's/550-5.7.1_DMARC_policy._Please_contact_administrator_of_USERDOMAIN.ca_domain_if/550-5.7.1_this_was_a_legitimate_mail._Please_visit/550-5.7.1__http://support.google.co
+m/mail/answer/2451690_to_learn_about_DMARC/550_5.7.1_initiative._my2si9389106pab.76_-_gsmtp/

[1] http://dmarc.org/
[2] http://dmarc.org/faq.html#s_3
[3] http://dmarc.org/faq.html#r_2
[4] https://sites.google.com/site/oauthgoog/mlistsdkim
[5] http://www.marshut.com/qskkv/adding-dmarc-support-for-mailman-3.html
[6] https://code.launchpad.net/~jimpop/mailman/dmarc-reject
[7] http://en.wikipedia.org/wiki/Variable_envelope_return_path

March 28, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

Within an SELinux policy, certain access vectors (permissions) can be conditionally granted based on the value of a SELinux boolean.

To find the list of SELinux booleans that are available on your system, you can use the getsebool -a method, or semanage boolean -l. The latter also displays the description of the boolean:

~# semanage boolean -l | grep user_ping
user_ping                      (on   ,   on)  Control users use of ping and traceroute

You can easily query the SELinux policy to see what this boolean triggers:

~# sesearch -b user_ping -A -C
Found 22 semantic av rules:
ET allow ping_t staff_t : process sigchld ; [ user_ping ]
ET allow ping_t staff_t : fd use ; [ user_ping ]
ET allow ping_t staff_t : fifo_file { ioctl read write getattr lock append open } ; [ user_ping ]
ET allow ping_t user_t : process sigchld ; [ user_ping ]
ET allow ping_t user_t : fd use ; [ user_ping ]
...

However, often you want to know if a particular access is allowed and, if it is conditionally allowed, which boolean enables it. In the case of user ping, we want to know if (and when) a user domain (user_t) is allowed to transition to the ping domain (ping_t):

~# sesearch -s user_t -t ping_t -c process -p transition -ACTS
Found 1 semantic av rules:
ET allow user_t ping_t : process transition ; [ user_ping ]

So there you go – it is allowed if the user_ping SELinux boolean is enabled.

March 27, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Online hardened meeting of March (March 27, 2014, 21:44 UTC)

I’m back from the depths of the unknown, so time to pick up my usual write-up of the online Gentoo Hardened meeting.

Toolchain

GCC 4.9 is being worked on, and might be released by end of April (based on the amount of open bugs). You can find the changes online.

Speaking of GCC, pipacs asked if it is possible in the upcoming 4.8.2 ebuilds to disable the SSP protection for development purposes (such as when you’re developing GCC plugins that do similar protection measures like SSP, but you don’t want those to collide with each other). Recent discussion on Gentoo development mailinglist had a consensus that the SSP protection measures (-fstack-protector) can be enabled by default, but of course if people are developing new GCC plugins which might interfere with SSP, disabling it is needed. One can use -fno-stack-protector for this, or build stuff with -D__KERNEL__ (as for kernel builds the default SSP handling is disabled anyway, allowing for kernel-specific implementations).

Other than those, there is no direct method to make SSP generally unavailable.

Blueness is also working on musl-libc support in Gentoo, which would give a strong incentive for hardened embedded devices. For desktops, well, don’t hold your breath just yet.

Kernel grSec/PaX

It looks like kernel 3.13 will be Ubuntu’s LTS kernel choice, which also makes it the kernel version that grSecurity will provide long-term support for. And with Linux 3.14 almost out, the grsec patches for it are ready as well. Of the previous LTS kernels, 3.2 will probably see its grsec support finish somewhere this year.

The C wrapper (called install-xattr) used to preserve xattr information during Portage builds has not been integrated in Portage yet, but the development should be finished.

During the chat session, we also discussed the gold linker and how it might be used by more and more packages (so not only by users that explicitly ask for it). udev version 210 onwards is one example, but some others exist. But other than its existence there’s not much to say right here.

SELinux

The 20140311 release of the reference policy is now in the Portage tree.

Also, prometheanfire caught a vulnerability (CVE-2014-1874) in SELinux which has been fixed in the latest kernels.

System Integrity

I made a few updates to the Gentoo hardening guide in XCCDF/OVAL format. Nothing major, and I still need to add a lot of other best practices (as well as automate the tests through OVAL), but I do intend to update the files (at least the Gentoo one, and the SSH one now that OpenSSH 6 is readily available) regularly in the next few weeks.

Profiles

A few minor changes have been made to hardened/uclibc to support multilib, but other than that nothing has been done (nor needed to be done) to our profiles.

That’s it for this month’s hardened meeting write-up. See you next time!

Sebastian Pipping a.k.a. sping (homepage, bugs)

Since I have the opportunity to play a game of Janggi (Korean chess) tomorrow, I made a summary of the differences in game play to XiangQi (Chinese chess) for myself.

99% of this is based on this page at chessvariants.org. Here is my summary:

Setup and Start

  • Kings start at the center of the palace
  • Each player is allowed to swap the horse and elephant positions on one or both sides; red first, then green/blue
  • Red does not move first; blue/green does.

Movement of Pieces

  • Rooks / Chariots
    • Can move diagonally within the palace (if the whole move remains a straight line)
  • Elephants / Ministers
    • Move like a big horse: one step straight, then two diagonal. All ground passed over needs to be empty.
    • Since there is no river, elephants can travel the whole board
  • King / General
    • Can move on marked diagonal lines, too
  • Advisors / Guards
    • Can move on marked straight lines, too
  • Cannons
    • Can move diagonally within the palace (in a straight line)
    • Need to jump (over a single piece) even when moving without capturing
    • Cannot capture other cannons
    • Cannot jump over (any player’s) other cannons
  • Soldiers / Pawns
    • Can move sideways right from the start (there is no river crossing to promote them)
    • Can move diagonally within the palace (though never backwards)
  • (Horses / Knights behave the very same as in XiangQi)

Special rules

  • Different approach to flying generals / face-to-face
    • Allowed
    • Puts the other player’s king in check
    • Revokes your right to win! Can only be used in hope of a draw
    • And: If the king is used to defend a piece attacking the other king to checkmate, it is considered a draw (from this page on ancientchess.com)
  • Passing (skipping a move) allowed only when
    • you are not in check yet and
    • any move would put you in check.
  • Movement in the palace
    • Soldiers, cannons, rooks are allowed to walk marked diagonal lines in the palace (respecting their base rules, e.g. pawns not backwards, only straight lines).

If you spot mistakes, ambiguities or missing things (except draw-related point counting) I’d be happy to hear from you. Thanks!

March 26, 2014
Jan Kundrát a.k.a. jkt (homepage, bugs)
Tagged pointers, and saving memory in Trojita (March 26, 2014, 17:04 UTC)

One of the improvements which were mentioned in the recent announcement of Trojitá, a fast Qt e-mail client, were substantial memory savings and speed improvements. In the rest of this post, I would like to explain what exactly we have done and how it matters. This is going to be a technical post, so if you are not interested in C++ or software engineering, you might want to skip this article.

Planting Trees

At the core of Trojitá's IMAP implementation is the TreeItem, an abstract class whose basic layout will be familiar to anyone who has worked with a custom QAbstractItemModel reimplementation. In short, the purpose of this class is to serve as a node in the tree of items which represent all the data stored on a remote IMAP server.

The structure is tree-shaped because that's what fits both the QAbstractItemModel's and the IMAP way of working. At the top, there's a list of mailboxes. Children of these mailboxes are either other, nested mailboxes, or lists of messages. Below the lists of messages, one can find individual e-mails, and within these e-mails, individual body parts as per the recursive nature of the MIME encapsulation. (This is what enables messages with pictures attached, e-mail forwarding, and similar stuff. MIME is fun.) This tree of items is used by the QAbstractItemModel for keeping track of what is where, and for issuing the QModelIndex instances which are used by the rest of the application for accessing, requesting and manipulating the data.

When a QModelIndex is used and passed to the IMAP Model, what matters most is its internalPointer(), a void * which, within Trojitá, always points to an instance of some TreeItem subclass. Everything else, like the row() and column(), are actually not important; the pointer itself is enough to determine everything about the index in question.

Each TreeItem has to store a couple of interesting properties. Besides the usual Qt-mandated stuff like pointer to the parent item and a list of children, there are also application-specific items which enable the code to, well, actually do useful things like printing e-mail subjects or downloading mail attachments. For a mailbox, this crucial information might be the mailbox name. For a message, the UID of the message along with a pointer to the mailbox is enough to uniquely identify everything which is needed.

Lazy Loading

Enter the lazy loading. Many people confirm that Trojitá is fast, and plenty of them are not afraid to say that it is blazingly fast. This speed is enabled by the fact that Trojitá will only do the smallest amount of work required to bring the data over the network (or from disk, for that matter). If you open a huge mailbox with half a million messages, perhaps GMail's "All messages" view, or one's LKML archive, Trojitá will not start loading half a million subjects. Instead, the in-memory TreeItem nodes are created in a special state, "no data has been requested yet". Trojitá still creates half a million items in memory, but these items are rather lightweight and only contain the absolute minimum of data they need for proper operation.

Some of these "empty" nodes are, eventually, consulted and used for item display -- perhaps because a view is attached to this model, and the view wants to show the recent mail to the user. In Qt, this usually happens via the data() method of the QAbstractItemModel, but other methods like rowCount() have a very similar effect. Whenever more data are needed, the state of the tree node changes from the initial "no data have been requested" to "loading stuff", and an asynchronous request for these data is dispatched. An important part of the tale is that the request is indeed completely asynchronous, so you won't see any blocking whatsoever in the GUI. The QTreeView will show an animation while a subtree is expanded, the message viewer might display a spinner, and the mail listing shows greyed-out "Loading..." placeholder instead of the usual message subjects.

After a short while, the data arrive and the tree node is updated with the extracted contents -- be it e-mail subject, or perhaps the attached image of dancing pigs. As the requested data are now here, the status of the tree node is updated from the previous "loading stuff" into "done". At the same time, an appropriate signal, like dataChanged or rowsInserted, is emitted. Requesting the same data again via the classic MVC API will not result in network requests, but everything will be accommodated from the local cache.

What we see now is that there is just a handful of item states, yet the typical layout of the TreeItem looks roughly like this:

enum class FetchingStatus {
    INITIAL_NOTHING_REQUESTED_YET,
    LOADING,
    DONE,
    FAILED
};
class TreeItem {
    TreeItem *m_parent;
    QList<TreeItem*> m_children;
    FetchingStatus m_status;
};

On a 64bit system, this translates to at least three 64bit words being used -- one for the pointer to the parent item, one (or much more) for storage of the list of children, and one more for storing the enum FetchingStatus. That's a lot of space, given we have just created half a million of these items.

Tagged Pointers

An interesting property of a modern CPU is that data structures must be aligned properly. A very common rule is that e.g. a 32bit integer can only start at a memory offset which is a multiple of four. In hex, this means that an address, or a pointer value, could end with 0x0, 0x4, 0x8, or 0xc. The detailed rules are platform-specific and depend on the exact data structure which we are pointing to, but the important message is that at least some of the low bits in the pointer address are always going to be zero. Perhaps we could encode some information in there?

Turns out this is exactly what pointer tagging is about. Instead of having two members, one TreeItem * and one FetchingStatus, these are squashed into a single pointer-sized value. The CPU can no longer use the pointer value directly; all accesses have to go via an inlined function which simply masks away the lowest bits. This does bring a very minor performance hit, but the memory conservation is real.
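
A stripped-down sketch of the idea (my own illustration, not Trojitá's actual class; see the commit linked below for the real thing):

#include <cstdint>
#include <cassert>

class TreeItem;  // we only store pointers, so a forward declaration is enough

enum class FetchingStatus : uintptr_t {
    INITIAL = 0, LOADING = 1, DONE = 2, FAILED = 3
};

class TaggedTreeItemPtr {
    // pointer and status squashed into one pointer-sized word
    uintptr_t m_data;
    // two low bits are free because TreeItem is at least 4-byte aligned
    static const uintptr_t MASK = 0x3;
public:
    TaggedTreeItemPtr(TreeItem *p, FetchingStatus s)
        : m_data(reinterpret_cast<uintptr_t>(p) | static_cast<uintptr_t>(s))
    {
        assert((reinterpret_cast<uintptr_t>(p) & MASK) == 0);
    }
    // the inlined accessors which mask the low bits away again
    TreeItem *ptr() const { return reinterpret_cast<TreeItem *>(m_data & ~MASK); }
    FetchingStatus status() const { return static_cast<FetchingStatus>(m_data & MASK); }
    void setStatus(FetchingStatus s) { m_data = (m_data & ~MASK) | static_cast<uintptr_t>(s); }
};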

For a real-world example, see this commit in Trojitá.

Using Memory Only When Needed

Back to our example of a mailbox with 500k messages. Surely a user is only going to see a small subset of them at once, right?

That is indeed the case. We still have to at least reserve space for 500k items for technical reasons, but there is certainly no need to reserve space for heavy stuff like subjects and other headers. Indeed, in Trojitá, we track the From/To/Cc/Bcc headers, the subjects, various kinds of timestamps, other envelope items and similar stuff, and this totals a couple hundred bytes per each message. A couple hundred bytes is not much (pun intended), but "a couple hundred bytes" times "half a million" is a ton of memory.

This got implemented here. One particular benchmark which tests how fast Trojitá resynchronizes a mailbox with 100k messages showed an immediate reduction in memory usage from the previous 45 MB to 25 MB. The change, again, does come at a cost; one now has to follow one more pointer redirection, and perform one more dynamic allocation for each message which is actually visible. That, however, proves to be negligible during typical usage.

Measure, Don't Guess

As usual with optimizing, the real results might sometimes be surprising. A careful reader and an experienced Qt programmer might have noticed the QList above and shuddered in horror. In fact, Trojitá now uses QVector in its place, but when I was changing the code, using std::vector sounded like a no-brainer. Who needs the copy-on-write semantics here anyway, so why should I pay its price in this context? These data (list of children of an item) are not copied that often, and copying a contiguous list of pointers is pretty cheap anyway (it surely is dwarfed by dynamic allocation overhead). So we should just stick with std::vector, right?

Well, not really. It turned out that plenty of these lists are empty most of the time. If we are looking at the list of messages in our huge mailbox, chances are that most of these messages were not loaded yet, and therefore the list of children, i.e. something which represents their inner MIME structure, is likely empty. This is where the QVector really shines. Instead of using three pointers per vector, like the GCC's std::vector does, QVector is happy with a single pointer pointing to a shared null instance, something which is empty.

Now, a factor of three on an item which is used half a million times is something which is going to hurt. That's why Trojitá eventually settled on using QVector for the m_children member. The important lesson here is "don't assume, measure".
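
The factor of three is easy to verify on your own toolchain (a quick check assuming a Qt 5 build environment; on a typical 64-bit GCC setup you would expect 24 versus 8 bytes):

#include <QVector>
#include <vector>
#include <cstdio>

int main() {
    // GCC's std::vector stores begin/end/capacity pointers: three words.
    // QVector stores a single pointer to a (possibly shared null) payload.
    std::printf("std::vector: %zu bytes, QVector: %zu bytes\n",
                sizeof(std::vector<void *>), sizeof(QVector<void *>));
    return 0;
}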

Wrapping up

Thanks to these optimizations (and a couple more, see the git log), one particular test case now runs ten times faster while simultaneously using 38% less memory -- comparing v0.4 with v0.3.96. Trojitá was pretty fast even before, but now it really flies. The sources of the memory diet were described in today's blog post; the explanation of how the time was cut is something which will have to wait for another day.

Hanno Böck a.k.a. hanno (homepage, bugs)
Extract base64-encoded images from CSS (March 26, 2014, 13:32 UTC)

I recently stumbled upon a webpage from which I wanted to extract an image. However, after saving the page with my browser I couldn't find any JPG or PNG file. After looking into this, I saw some CSS code that looked like this:

background-image:url("data:image/jpeg;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgAQAAAABbAUdZAAAAE0lEQVR4AWNgYPj/n4oElU1jAADtvT/BfzVwSgAAAABJRU5ErkJggg==");

What this does is embed a base64-encoded image file into the CSS layout. I found some tools to create such images, but none to extract them. It isn't very hard to extract such an image; I wrote a small shell script that will do it and that I'd like to share:

#!/bin/sh
# Extract every base64-encoded blob from the given files into file_1, file_2, ...
n=1
for i in `grep -ho "base64,[A-Za-z0-9+/=]*" "$@" | sed -e "s:base64,::g"`; do
  echo $i | base64 -d > file_$n
  n=`expr $n + 1`
done
Save this as css2base64 and pass HTML or CSS files on the command line (e.g. css2base64 test.html test.css).

Hope this helps others. If this script is copyrightable at all (which I doubt), I hereby release it (like the other content of my blog) as CC0 / Public Domain.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Fixing the busybox build failure (March 26, 2014, 12:18 UTC)

For a few months now I have had a build failure every time I try to generate an initial ram file system (my current primary workstation uses a separate /usr and LVM for everything except /boot):

* busybox: >> Compiling...
* ERROR: Failed to compile the "all" target...
* 
* -- Grepping log... --
* 
*           - busybox-1.7.4-signal-hack.patch
* busybox: >> Configuring...
*COMMAND: make -j2 CC="gcc" LD="ld" AS="as"  
*  HOSTCC  scripts/basic/fixdep
*make: execvp: /var/tmp/genkernel/18562.2920.28766.17301/busybox-1.20.2/scripts/gen_build_files.sh: Permission denied
*make: *** [gen_build_files] Error 127
*make: *** Waiting for unfinished jobs....
*/bin/sh: scripts/basic/fixdep: Permission denied
*make[1]: *** [scripts/basic/fixdep] Error 1
*make: *** [scripts_basic] Error 2

I know it isn’t SELinux that is causing this, as I have no denial messages and even putting SELinux in permissive mode doesn’t help. Today I found the time to look at it with fresh eyes, and noticed that it wants to execute a file (gen_build_files.sh) situated somewhere under /var/tmp. That file system, however, is mounted with noexec (amongst other settings), so executing anything from within it is not allowed.

The solution? Update /etc/genkernel.conf and have TMPDIR point to a location where executing is allowed. Of course, this being an SELinux system, the new location will need to be labeled as tmp_t as well, but that’s a simple thing to do.

~# semanage fcontext -a -t tmp_t "/var/build/genkernel(/.*)?"
~# restorecon -R /var/build/genkernel

The new location is not world-writable (only root may write there, as only root builds initial ram file systems here), so not having noexec on it is OK.

March 25, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Talk about SELinux on GSE Linux/Security (March 25, 2014, 21:11 UTC)

At today’s GSE Linux / GSE Security meeting (in cooperation with IMUG) I gave a small (30-minute) presentation about what SELinux is. The slides are online and cover two aspects of SELinux: some of its design principles, and a set of features provided by SELinux. The talk is directed towards less technical folks – still IT of course, but not immediately involved in daily operations – so no commands and example output.

SELinux came up across the board a few times during the day. In the talks about Open Source Security and Security Guidelines for z/VM and Linux on System z, SELinux came up (of course) as the technology of choice for providing in-operating-system mandatory access control (on the zEnterprise z/VM level – the hypervisor – this is handled through RACF Mandatory Access Control), and the Security Enablement on Virtual Machines talk had SELinux in the front line for the sVirt security protection measures (which focus on segregation through MLS categories).

And during the talk about A customer story about logging and audit, well, you can guess which technology is also one of the many sources of logging. Right. SELinux ;-)

Anyway, if your company is interested in such GSE events, make sure to follow the gsebelux.com site for updates. It’s a great way for networking as well as sharing experiences.

March 24, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Create your own SELinux Gentoo profile (March 24, 2014, 19:51 UTC)

Or any other profile for that matter ;-)

A month or so ago we got the question of how to enable SELinux on a Gentoo profile that doesn’t have a <some profilename>/selinux equivalent. Because we don’t create SELinux profiles for all possible profiles out there, it is good to know how to do this yourself.

Sadly, the most efficient way to deal with this isn’t supported by Portage: creating a parent file in /etc/portage/profile that points to /usr/portage/profiles/features/selinux, as is done for all SELinux-enabled profiles. The /etc/portage/profile location (where users can make local changes to the profile settings) does not support a parent file.

Luckily, enabling SELinux is a matter of merging the files in /usr/portage/profiles/features/selinux into /etc/portage/profile. If you don’t have any files in there, you can blindly copy over the files from features/selinux.

Edit: aballier on #gentoo-dev mentioned that you can create /etc/portage/make.profile as a directory (instead of having it be a symlink managed by eselect profile), which does support parent files. In that case, just create one with two entries: one path to the profile you want, and one path to the features/selinux location.
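
For instance, assuming a hypothetical amd64 13.0 profile, /etc/portage/make.profile/parent could contain:

/usr/portage/profiles/default/linux/amd64/13.0
/usr/portage/profiles/features/selinux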

Luca Barbato a.k.a. lu_zero (homepage, bugs)
Libav 10 – release (March 24, 2014, 03:37 UTC)

New release

After several months spent finalizing, we are now pleased to announce the release of Libav 10.

One of the main features of this release is the addition of reference-counted data buffers to Libav and their use in various structures. Specifically, the data buffers used by AVPacket and AVFrame can now be reference counted, which should significantly simplify many use cases. In addition, reference-counted AVFrames can now be used in libavfilter, avoiding the need for a separate libavfilter-specific frame structure. Frames can now be passed straight from the decoders into filters or from filters to encoders.

These additions made it necessary to bump the major versions of libavcodec, libavformat, libavdevice, libavfilter, and libavutil, which was accompanied by dropping some old deprecated APIs. These libraries are thus not ABI- or API- compatible with the previous release. All the other libraries (libavresample and libswscale) remain ABI- and API-compatible.

Another major point is the inclusion of the HEVC (AKA H.265, the successor of H.264) decoder in the main codebase. It was started in 2012 as a Libav Google Summer of Code project by Guillaume Martres and subsequently completed with the assistance of the OpenHEVC project and several Libav developers.

As usual, this release also contains support for other new formats, many smaller new features and countless bug fixes. We can highlight a native VP9 decoder, with encoding provided through libvpx, native decoders for WebP, JPEG 2000, and AIC, as well as improved WavPack support with encoding through libwavpack, support for more AAC flavors (LD – low delay, ELD – enhanced low delay), slice multithreading in libavfilter, or muxing chapters in ASF. Furthermore a few new filters have been introduced, namely compand, to change audio dynamics, framepack, to create stereoscopic videos, asetpts, to set audio pts, and interlace, to convert progressive video to interlaced. Finally there is more fine-grained detection of host and target libc, which should allow better portability to various cross compilation scenarios.

See the Changelog file for a fuller list of significant changes.

You can download the new release, as usual, from our download page.

Release 10 took a lot of time to get out, mostly due to the fact that we spent lots of time helping downstream projects adapt to the new API, and we tried our best to provide patches to most of the projects we were aware of.

Now that we have settled on providing migration guides, the next API-breaking releases won’t require that much effort.

Thanks

I want to thank everybody in the Libav team for spending so much time on the annoying, depressing and unrewarding tasks of coping with the release process, fixing fringe bugs, baking patches for projects that are not really used, and helping clean up the documentation.

Special thanks also go to the people from VideoLan and mpv, since they helped us a lot in many different ways (testing, giving feedback on the new APIs and also providing patches), and to the Google security team that provided me, on short notice, with a large batch of samples for HEVC and VP9 that I used to validate the new decoders.

Future releases

Release process update

This is the plan for the next 4 releases (spanning more or less from spring till winter), it is the result of all the feedback regarding our release process and requests.

Enough people, mostly mpv, vlc and other downstreams tracking us by git commit, would like to have quicker major releases. The API changes introduced are mostly caused by us trying to satisfy their needs, after all.

On the other hand, a good number of people – distribution managers/packagers and the people tending to orphaned packages that are used but not really developed further – have quite a problem keeping up with the changes if the API becomes incompatible too often.

In order to help them we already opened a dedicated section in our bugzilla and started writing migration guides, but they would really prefer not having to patch old packages that often anyway.

Trying to satisfy those two, apparently conflicting, requirements, this is what we aim for:

  • Every odd major release should not break the API; it must happen quickly once enough features are available and should just augment the API. ABI breaks are still possible, hence the version bumps.
  • Major releases removing old APIs, and thus normally source-incompatible with downstreams not tracking git, should happen at most once per season or twice per year.
  • All the API changes will get an entry in the migration guide when it is committed.
  • We remain committed to backporting security-impacting bugfixes through a window of API-breaking releases, thus not leaving in the cold those who couldn’t or didn’t update often enough.

I hope ~8 feature improvements and ~4 API cleanups per year will make most people happy.

Next releases

Libav 11

It will just provide new features and more optimizations for the usual platforms and the new ones; support for a good number of fringe codecs, such as the elusive VP7, will be added.

As stated above no API breakages are to be expected.

Libav 12

This release will contain major changes, including possibly a new scaling library.
The wiki has a Blueprint section tracking the most prominent ones, you are welcome to discuss them with us.

What I’ll be working on

I’m personally involved in the following items:

  • Extend MXF support: the format is quite byzantine and has been extended even further over time. [libav11]
  • Hwaccel2: because the current situation is far from being easy to use. [libav11]
  • MIME-type support in input formats: since we support it on output, I don’t see why we should not leverage it on input to speed up probing formats. [libav11]
  • AVScale: a replacement for swscale, trying to be more rational and not pointlessly lose information by doing pointless intermediate conversions to YUV. Incidentally, it should also support hardware scalers when available. [libav12]
  • libmfx: Intel tried its best to give a uniform interface that spans Linux, Windows and possibly MacOSX. I have working decoder and encoder wrappers, and soon also hwaccel1.2 support. [libav11]
  • MVC support: multiview support is nice to have if you want to watch your blu-ray disks. [libav11]
  • Apple VDA and VT hwaccel: since the introduction of hwaccel1.2, supporting them properly should be easier. [libav11]

If some of them are important to you, actual help or even sponsorship is welcome.

March 23, 2014
Robin Johnson a.k.a. robbat2 (homepage, bugs)

One of my past consulting customers came to me with a problem. He'd been relatively diligent in upgrading his servers since we last spoke (it had been some years), and now the admin panel on one of his clients' very old PHP websites was no longer working.

I knew the code had some roots back to at least PHP 3, as the file headers I'd previously seen had copyright dates back to 1999. Little did I know, I was in for a treat today.

When I last visited this codebase, due to its terrible nature with hundreds of globals, I had to put some hacks in for PHP 5.4, since register_globals was no longer an option. The hack for this is quite simple:

foreach($_POST as $__k => $__v) { $$__k = $__v; }
foreach($_GET as $__k => $__v) { $$__k = $__v; }

Well, it seems that since the last upgrade they had also changed the register_long_arrays setting at the demand of another project, and the login on the old site was broken. This was quite simple to fix: just s/HTTP_SERVER_VARS/_SERVER/ (and similarly for POST/GET/COOKIE, depending on your site).

Almost all was well now, except that the next complaint was that file uploads didn't work for several forms. I naively duplicated the _POST/_GET block above for $_FILES. No luck. Thus, my memory not recalling how file uploads used to work in early PHP, I set out to fix this.

I picked a good form to test with, and noticed that it used some of the very old PHP variables for file uploads (again globals). These files dated back to 1997 and PHP/FI! The initial solution was to map $_FILES[x]['tmp_name'] to $x, and the rest of $_FILES[x][y] to $x_y. Great, it seems to work now.

Except... one file upload form was still broken; it allowed multiple files in a single form. Time for a more advanced hack:

# PHP/FI used this structure for files: http://www.php.net/manual/phpfi2.php#upload
foreach($_FILES as $__k => $__v) {
  if(!is_array($__v['tmp_name'])) {
    # Single upload: map $_FILES[x]['tmp_name'] to $x ...
    $s = $__k;
    $$s = $__v['tmp_name'];
    # ... and $_FILES[x][name|size|type] to $x_name, $x_size, $x_type
    $keys = array('name','size','type');
    foreach($keys as $k) {
      $s = $__k.'_'.$k;
      $$s = $__v[$k];
    }
  } else {
    # Multiple files in one form: apply the same mapping once per index
    for($i = 0; $i < count($__v['tmp_name']); $i++) {
      if(isset($__v['tmp_name'][$i])) {
        $s = $__k.'['.$i.']';
        $$s = $__v['tmp_name'][$i];
        $keys = array('name','size','type');
        foreach($keys as $k) {
          $s = $__k.'_'.$k.'['.$i.']';
          $$s = $__v[$k][$i];
        }
      }
    }
  }
}

Thus I solved the problem, and had to relearn how it used to be done with PHP/FI.

March 21, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Recently I ran into a problem with RHEL 6 (and any derivatives, like CentOS 6 or Scientific Linux 6) where having two NICs (network interfaces) in the same subnet resulted in strange behaviour. In RHEL ≤5 (or CentOS ≤5), one could have two interfaces with IPs in the same subnet without any problems (besides the obvious question of why one would set it up this way instead of just bonding the interfaces). However, in RHEL 6 (or CentOS 6), having two interfaces with IPs in the same subnet results in the primary one responding to pings but the secondary one staying silent.

The cause of this problem is that the rp_filter behaviour changed between these kernels (2.6.18 in RHEL 5 and 2.6.32 in RHEL 6). In RHEL 5, the rp_filter setting was a boolean where 1 meant that source validation was done by reversed path (as in RFC 1812), and 0 meant no source validation. However, in RHEL 6, this setting changed to an integer with the following meanings:

  • 0 – No source validation
  • 1 – Strict Reverse Path validation (RFC 3704): each packet is checked against the FIB (Forwarding Information Base), and only packets arriving on the best route back to their source succeed
  • 2 – Loose Reverse Path validation (RFC 3704): each packet is checked against the FIB, but only packets whose source is unreachable via ANY interface fail

So, though the default setting is still 1, it now has a different meaning. In order to get these two network interfaces with IPs in the same subnet to both respond, I needed to make two changes in /etc/sysctl.conf:

  • Change net.ipv4.conf.default.rp_filter from '1' to '2'
  • Add the line net.ipv4.conf.all.rp_filter = 2

To better illustrate the changes, here are the differences:

DEFAULT SETTINGS:
# grep '.rp_filter' /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 1

REQUIRED SETTINGS:
# grep '.rp_filter' /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2

In order to make these changes effective immediately, you can reload the configuration with:

# sysctl -p

Ultimately, the new defaults make the kernel discard packets when the route for outbound traffic differs from the route of incoming traffic. Changing the settings as shown above makes the kernel handle those packets as it did before 2.6.32, so two or more interfaces with IPs in the same subnet will function as intended. These changes aren't limited to RHEL 6 and derivatives; they apply to any distribution with kernel ≥2.6.32 in which the defaults were not changed.
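
To double-check what the kernel will actually apply, here is a small Python sketch; per the kernel's ip-sysctl.txt documentation, the maximum of the 'all' and per-interface values is what gets used for source validation, which is why both lines above are needed:

BASE = "/proc/sys/net/ipv4/conf"

def effective_rp_filter(iface):
    # the kernel applies max(conf/all/rp_filter, conf/<iface>/rp_filter)
    def read(path):
        with open(path) as f:
            return int(f.read())
    return max(read("%s/all/rp_filter" % BASE),
               read("%s/%s/rp_filter" % (BASE, iface)))

print(effective_rp_filter("eth0"))  # expect 2 after the changes above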

Cheers,
Zach

March 20, 2014
Jan Kundrát a.k.a. jkt (homepage, bugs)

Summary

An SSL stripping vulnerability was discovered in Trojitá, a fast Qt IMAP e-mail client. A user's credentials are never leaked, but if the user tries to send an e-mail, the automatic saving into the "sent" or "draft" folders could happen over a plaintext connection even if the user's preferences specify STARTTLS as a requirement.

Background

The IMAP protocol defines the STARTTLS command, which is used to transparently upgrade a plaintext connection to an encrypted one using SSL/TLS. Per the IMAP state machine, the STARTTLS command can only be issued in the not-authenticated state.

RFC 3501 also allows for a possibility of the connection jumping immediately into an authenticated state via the PREAUTH initial response. However, as the STARTTLS command cannot be issued once in the authenticated state, an attacker able to intercept and modify the network communication might trick the client into a state where the connection cannot be encrypted anymore.
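
The gist of the required behavior can be sketched as follows; this is a minimal Python illustration of the protocol logic, not Trojitá's actual C++ code:

import socket

def open_imap_requiring_starttls(host, port=143):
    # Read the server greeting; RFC 3501 allows "* OK", "* PREAUTH" or "* BYE".
    sock = socket.create_connection((host, port))
    greeting = sock.makefile("rb").readline().decode("ascii", "replace")
    if greeting.startswith("* PREAUTH"):
        # Already authenticated, so STARTTLS can never be issued:
        # bail out instead of continuing over plaintext.
        sock.close()
        raise RuntimeError("PREAUTH greeting: STARTTLS no longer possible")
    return sock, greeting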

Affected versions

All versions of Trojitá up to 0.4 are vulnerable. The fix is included in version 0.4.1.

Remedies

Connections which use SSL/TLS from the very beginning (e.g. connections using port 993) are secure and not vulnerable.

Possible impact

The user's credentials will never be transmitted over a plaintext connection, even in the presence of this attack.

Because Trojitá proceeded to use the connection without STARTTLS in the face of a PREAUTH greeting, certain data might be leaked to the attacker. The only example we were able to identify is the full content of a message which the user attempts to save to their "Sent" folder while trying to send a mail.

We don't believe that any other data could be leaked. Again, the user's credentials will not be leaked.

Acknowledgement

Thanks to Arnt Gulbrandsen on the imap-protocol ML for asking what happens when we're configured to request STARTTLS and a PREAUTH is received, and to Michael M Slusarz for starting that discussion.

Michal Hrusecky a.k.a. miska (homepage, bugs)
My Jolla applications (March 20, 2014, 12:00 UTC)

One of the things that I really like about Jolla is the technology used to write applications. C++ is my favorite programming language and I have always admired Qt, or at least big parts of it. So when I got my Jolla, I started playing with the SDK and writing some simple applications. It was somewhat harder than I expected, but I'll write about that in a separate blog post. This one is dedicated to the applications I wrote, to show what they do and, if you have a Jolla, maybe get you interested in them ;-) Both of them are available via OpenRepos and Harbour.

Hunger Meter

This was the first application I wrote. I was wondering how power hungry various applications are. On Android I used to have a CPU usage monitor, and I knew that surprisingly many applications use the CPU to the fullest extent, which hurts battery life. Since I was writing the application anyway, I decided not to go for CPU usage but directly for what I was interested in – battery usage. The first version was really simple: it just showed two numbers – the current consumption and a ten-second average. But that already helped me find out that if you want to drain your battery fast, use Angry Birds :-)
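
The underlying idea is easy to sketch. The following Python loop is just an illustration, not HungerMeter's actual code (which is C++/QML), and the sysfs path varies between devices:

import time
from collections import deque

PATH = "/sys/class/power_supply/battery/current_now"  # device-specific path

samples = deque(maxlen=10)  # sliding ten-second window
while True:
    with open(PATH) as f:
        samples.append(int(f.read()))  # usually reported in microamps
    print("now: %d uA  avg(10s): %d uA"
          % (samples[-1], sum(samples) // len(samples)))
    time.sleep(1)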

I haven't stopped developing after the first version and have continued extending its functionality. Time intervals are now configurable, it displays a semi-nice graph for the longer interval, shows some basic information about the battery, and collects long-term (a day and more) statistics. Those are not plotted yet; that is part of my TODO.

If you are interested, you can get this application from OpenRepos, which carries the latest development version, or from Harbour, which carries the latest stable version (one that was successfully tested on OpenRepos). Sources are available on GitHub, and here are a few screenshots :-)

Screenshots: HungerMeter - Cover, HungerMeter - Settings, HungerMeter - Graph, HungerMeter - Battery

Crest

My second application is also simple. It's a top-like application: it shows processes, how much memory (RSS) and CPU they use, lets you sort them, filter for GUI applications only and, most importantly, kill processes you don't like. There's not much more to write about it, so here are links to OpenRepos and GitHub and a few screenshots.

Screenshots: Crest, Crest - kill

End note

I wrote some applications back when I was using Palm OS. When I switched to Android, I never forced myself to cope with the whole Java thing. Although there are some nuisances in developing for Sailfish OS (more about them next time), I'm happily developing applications for my PDA/cellphone again :-) So if you have a Jolla and like my ideas for applications, try them. You can report bugs/feature requests via the issues page on GitHub, and maybe I'll respond. If you submit a patch, the chances that I'll respond are much higher :-)

March 19, 2014
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Introducing peerflixsrc (March 19, 2014, 16:26 UTC)

Some of you might have been following all the brouhaha over Popcorn Time. I won’t get into the arguments that can be made for and against at the moment.

While poking around at what it was that Popcorn Time was doing, I stumbled upon peerflix, a Node.js-based application that takes a .torrent file that points to one big video file, and presents that as an HTTP stream. It has its own BitTorrent implementation where it prioritises early chunks of the file so that it is possible to start watching the video before the entire file has been downloaded. It also seeds the file while the video is being watched locally.

Seeing as I was at the GStreamer Hackfest in Munich when this came up in discussions, it seemed topical to have a GStreamer element to wrap this neat bit of functionality. Thus was peerflixsrc born. This is a simple source element that takes a URI to a torrent file (something like torrent+http://archive.org/some/video.torrent), fires up peerflix in the background, and provides the data from the corresponding HTTP stream. Conveniently enough, this can be launched using playbin or Totem (hinting at the possibilities of what can come next!). Here’s what it looks like…

Screenshot of Totem playing a torrent file directly using peerflixsrc

The code is available now. To use it, build this copy of gst-plugins-bad in your favourite way, make sure you have peerflix installed (sudo npm install -g peerflix), and you're good to go.
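
For instance, the playbin route can be driven from Python. A rough sketch, assuming the patched gst-plugins-bad is installed along with GStreamer's Python bindings (the URI is the example one from above):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
# playbin resolves the torrent+http URI through peerflixsrc, which spawns
# peerflix in the background and reads the resulting HTTP stream.
pipeline = Gst.parse_launch(
    'playbin uri=torrent+http://archive.org/some/video.torrent')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()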

This is not quite mature enough to go into upstream GStreamer. The ugliest part is firing up a Node.js server to make this work, not least because managing child processes on Linux is not the prettiest code you can write. Maybe someone wants to look at rewriting the torrent bits from peerflix in C? There don't seem to be any decent C-based libraries for this out there, though.

In the mean time, enjoy this, and comments / patches welcome!

GStreamer Hackfest 2014 (March 19, 2014, 15:46 UTC)

Last weekend, I was at the GStreamer Hackfest in Munich. As usual, it was a blast — we got much done, and it was a pleasure to meet the fine folks who bring you your favourite multimedia framework again. Thanks to the conference for providing funding to make this possible!

My plan was to work on making Totem's support for passthrough audio work flawlessly (think allowing your A/V receiver to decode AC3/DTS if it supports it, with more complex things coming in the future as we support them). We've had the pieces in place in GStreamer for a while now, and not having that just work with Totem has been a bit of a bummer for me.

The immediate blocker so far has been that Totem needs to add a filter (scaletempo) before the audio sink, which forces negotiation to always pick a software decoder. We solved this by adding the ability for applications to specify audio/video filters for playbin to plug in if it can. There’s a now-closed bug about it, for the curious. Hopefully, I’ll get the rest of the work to make Totem use this done soon, so things just work.

Now, the reason that didn't happen is that I got a bit … distracted … at the hackfest by another problem. More details in an upcoming post!

Matt Turner a.k.a. mattst88 (homepage, bugs)
Laptop choices and aftermath (March 19, 2014, 04:00 UTC)

In November I was lamenting the lack of selection in credible Haswell-powered laptops for Mesa development. I chose the 15" MacBook Pro, while coworkers picked the 13" MBP and the System76 Galago Pro. After using the three laptops for a few months, I review our choices and whether they panned out like we expected.

                   CPU             RAM     Graphics         Screen           Storage        Battery
  13" MacBook Pro  2.8 GHz 4558U   16 GiB  GT3 - 1200 MHz   13.3" 2560x1600  512 GiB PCIe   71.8 Wh
  15" MacBook Pro  2.0 GHz 4750HQ  16 GiB  GT3e - 1200 MHz  15.4" 2880x1800  256 GiB PCIe   95 Wh
  Galago Pro       2.0 GHz 4750HQ  16 GiB  GT3e - 1200 MHz  14.1" 1920x1080  many options   52 Wh

15" MacBook Pro

The installation procedure on the MacBook was very simple. I shrunk the HFS partition from OS X and installed rEFInd, before following the usual Gentoo installation.

Quirks and Annoyances

Running Linux on the MacBook is a good experience overall, with some quirks:

  • the Broadcom BCM4360 wireless chip is supported by a proprietary driver (net-wireless/broadcom-sta in Gentoo)
  • the high DPI Retina display often necessitates 150~200% zoom (or lots of squinting)
  • the keyboard causes some annoyances:
    • the function keys operate only as F* keys when the function key is held, making common key combinations awkward (the behavior can be changed via the /sys/module/hid_apple/parameters/fnmode file; see the one-liner after this list).
    • there's no Delete key, and Home/End/Page Up/Page Down are function+arrow key.
    • the power button is a regular key immediately above backspace. It's easy to press accidentally.
  • the cooling fans don't speed up until the CPU temperature is near 100 C.
  • no built-in Ethernet. Seriously, we've reinvented how many mini and micro HDMI and DisplayPort form factors, but we can't come up with a way to rearrange eight copper wires to fit an Ethernet port into the laptop?
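
For example, assuming hid_apple's documented fnmode values (where 2 makes the F-keys primary), a one-liner run as root flips the keyboard behavior:

# echo 2 > /sys/module/hid_apple/parameters/fnmode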

Worst Thing: Insufficient cooling

The worst thing about the MacBook is the insufficient cooling. Even forcing the two fans to their maximum speeds isn't enough to prevent the CPUs from thermal throttling in less than a minute of full load. Most worrying is that my CPU's core #1 seems to run significantly hotter under load than the other cores. It's always the first, and routinely the only, core to reach 100 C, causing the whole CPU package to be throttled until it cools slightly. The temperature gradient across a chip of only 177 square millimeters is also troubling: frequently core #1 is 15 C hotter than core #3 under load. The only plausible conclusion I've come to is that the thermal paste isn't applied evenly across the CPU die. And since Apple uses tamper-resistant screws, I couldn't reapply the thermal paste without special tools (and probably voiding the warranty).

Best Thing: Retina display

I didn't realize how much the Retina display would improve the experience. Having multiple windows (that would have been close to full screen at 1080p) open at once is really nice. Being able to have driver code open on the left half of the screen, and the PDF documentation open on the right makes patch review quicker and more efficient. I've attached other laptops I've used to larger monitors, but I've never even felt like trying with the 15" MBP.

13" MacBook Pro

I consider the 13" MacBook Pro to be strictly inferior (okay, lighter and smaller is nice, but...) to the 15". Other than the obvious differences in the hardware, the most disappointing thing I've discovered about it is that the 13" screen isn't really big enough to be comfortable for development. The coworker that owns it plugs it into his physically larger 1080p monitor when he gets to the office. For a screen that's supposed to be probably the biggest selling point of the laptop, it's not getting a lot of use.

As I mentioned, I'm perfectly satisfied with the 15" screen for everyday development.

System76 Galago Pro

I used the Galago Pro for about three weeks before switching to the 15" MacBook. Overall it's a really compelling system, except for a serious lack of attention to detail.

Quirks and Annoyances

  • although it has built-in Ethernet (yay!), the latch mechanism will drive you nuts. Two hands are necessary to unplug an Ethernet cable from it, and three are really recommended.
  • the single hinge attaching the screen feels like a failure point, and the screen itself flexes way too much when you open or close the laptop.
  • all three USB ports are on the right side, which can be annoying if you want to use a mouse, which you will, because...
  • the touchpad doesn't behave very well. In fairness, this is probably mostly the fault of the synaptics driver or the default configuration.

Worst Thing: Keyboard

The keyboard is probably the worst part. The first time I booted the system, typing k while holding the shift key wouldn't register a key press. Lower case k typed fine, but with shift held - nothing. After about 25 presses, it began working without any indication as to what changed.

The key stroke is very short and you get almost no feedback, and if you press the keys at an angle slightly off center they may not register. Typing on it can be a rather frustrating experience. Beyond it being a generally unpleasant keyboard, the function key placement confirms that the keyboard is a complete afterthought: Suspend is between Mute and Volume Down. Whoops!

Best Thing: Cooling

The Galago Pro has an excellent cooling system. Its fans are capable of moving a surprising amount of air and don't make too much noise doing it. Under full load, the CPU's temperature never passed 84 C - 16 C cooler than the 15" MBP (and the MBP doesn't break 100 C only because it starts throttling!). On top of not scorching your lap during compiles, the cooler temperatures mean the CPU and GPU are going to be able to stay in turbo mode longer and give better performance.

Final thoughts

Concerns about the keyboard and general build quality of the Galago Pro turned out to be true. I think it's possible to get used to the keyboard, and if you do I feel confident that the system is really nice to use (well, I guess you have to get used to the other input device too).

I'm overall quite happy with the MacBook Pro. The Retina display is awesome, and the PCIe SSD is incredibly fast. I was most worried about the 15" MacBook overheating and triggering thermal throttling; unfortunately, that concern was well founded. Other than the quirks, which are par for the course, the overheating issue is the one significant downside to this machine.

March 18, 2014
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

Students, this Friday at 1900 UTC is the deadline to apply for this year’s GSoC. It’s an awesome program that pays you to work on open-source projects for a summer (where you == a university/college student).

It’s by no means too late, but start your application today. You can find more information on Gentoo’s projects here (click on the Ideas page to get started; also see our application guidelines) and on the broader GSoC program here.

Good luck!


Tagged: community, development, gentoo, gsoc

Michal Hrusecky a.k.a. miska (homepage, bugs)
Cool Live flash GSoC idea (March 18, 2014, 09:10 UTC)

openSUSE Flash drive

openSUSE Flash drive

I have had this idea nagging me for a while: how to make our ambassadors' lives (and mine) easier. From time to time you need a flash drive with a Live version of our favorite openSUSE to show it to people. Currently it is really simple to create one using dd, but once you do, you cannot use the flash drive for "normal" purposes anymore. People somehow don't appreciate a flash drive that doesn't contain vfat. So this project is about redoing the openSUSE flash drive to make it way cooler and more usable.

There are two projects out there that inspired me (or that I want to copy): the Slax live distribution and SystemRescueCD. Both are great, and I would like to pinpoint some of the goals this project should reach.

First of all, the whole flash drive should contain vfat or ntfs or some other commonly supported dumb filesystem, nothing fancy, and everything should be just a file on that flash drive. If you need to transfer a lot of data, you simply delete a few directories, use the flash drive as storage, and then copy those directories back.

Another feature that should be implemented is making it easy for the flash drive to contain multiple flavors of the distribution at the same time, so that during boot you can select whether you want to show GNOME or KDE. Adding a new flavor should be as easy as copying its files to the flash drive, and the same goes for getting rid of one: just delete the GNOME flavor's files and the GNOME version is gone from the flash drive. This is what Slax manages to do really well, although they try to combine everything into one distribution. I wouldn't go that deep in regards to modularity for this project, but selecting which live version you want to boot sounds like a good idea. It should also be possible to decide whether changes you make while running the Live distribution are stored permanently or lost after reboot.

Now, how to make it cool for ambassadors? I think we are not rich enough to give everybody their own flash drive at a conference. There are two options that I would like to see integrated in this flash drive project. First, it should be possible to boot from the flash drive and load everything into memory, so people can come to the booth, use the flash drive to boot openSUSE, leave and play with it till reboot, and we can reuse the flash drive to boot another computer. The other cool option would be to make it possible to distribute this Live version over PXE, so we can have just a few Ethernet cables at our booth where people can connect to boot openSUSE.

Personally, I would love to have something like this, and a few students have already shown some interest in the idea, so it might even happen. If you just decided to apply as well, feel free to submit your proposal to Melange, and I have a simple homework assignment for you (it can be sent during the reviewing process). A friend of mine tried to run Live KDE over PXE a few weeks ago and ran into trouble: NetworkManager was messing with the network, and thus his NFS root was having some serious problems. Your homework is to solve this issue :-) Take an initrd from openSUSE 13.1 Live KDE and modify it so that when you are booting with an NFS root, it disables NetworkManager. Send me the result (either a description or the initrd or both) and the best solution (from a maintainability and robustness point of view) wins.

March 17, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Nightster B&W (March 17, 2014, 22:20 UTC)

This shot was taken a few days ago on a short trip near Paris; it's a nice addition to the very few pictures I have of my bike!

hd n&b

Click to see the full resolution; the grain is very nice on this Ilford 3200 ISO film.

March 16, 2014
Alex Alexander a.k.a. wired (homepage, bugs)

I’ve created a recovery script that changes the screen density. It is simple, yet really useful if you want to automate your ROM updates.

Get it here: Density-252.zip (this zip sets the density to 252).

Read on if you want to find out how to change the density to a different value.

So, I recently switched to the official Omni ROM nightlies on my Galaxy Note II.

I miss a few things from the custom Asylum ROM I used to run (which is now dead, unfortunately), but in return I get delta updates. I’m sure that things like Lock-screen Notifications will end up in Omni when they are stable enough anyway.

Omni’s update system has an interesting side feature called FlashAfterUpdate. This allows me to flash my own zip files after the update. Very useful when you want to use a different kernel or su binary – I use DevilKernel and superuser. The only thing I was missing to completely automate the update process was a way to change the display density.

Seems no-one else ever thought of doing this, so I made a script myself. Nothing too fancy really, the real work’s done by a single sed command.

Here’s the flash-able zip: Density-252.zip

It doesn't matter what your previous density was; flashing this zip will edit /system/build.prop and set the new density for you.

To change the target density

Open the zip file and edit the following file:

META-INF/com/google/android/updater-script

then find the following line:

run_program("/sbin/sh", "-c", "sed -i 's:ro.sf.lcd_density=.*:ro.sf.lcd_density=252:' /system/build.prop");
                                                                                ^^^

and change 252 to the density number you desire.

Save the file,
re-create the zip file if necessary (some archive tools will auto-update the zip),
push it to your device,
flash it,
profit ;)

March 08, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

You probably read by now that I've been thinking of building either an Android application or a Chrome one as either a companion to or a replacement for the glucometer utilities which I've been writing in Python for the past few months.

Packt has been nice enough to let me review Xamarin Mobile Application Development for Android, and so I decided to take into consideration the option of actually building the app in C#, so that it can be shared across various platforms.

The book goes into details of what Android applications can and should do and provides nice examples, mostly around a points-of-interest application. It's hard to say much when I don't want to complain, so I'll just say: give it a go, if you don't plan to make your apps open source (which I think you should). As the book points out, being able to share your backend libraries (but not frontend/UI ones!) across operating systems and platforms (phone, tablet, computer) is a godsend, so I think Xamarin did build a very good tool for the job.

On the other hand, I'm definitely not going to pursue this. While C# is a language I like, and Xamarin for Android allows you to use JNI extensions such as the one Prolific releases for their USB-to-serial adapter, I find having the tools open source more important than any of this.

Luca Barbato a.k.a. lu_zero (homepage, bugs)
lldb: how to botch the user interface (March 08, 2014, 12:21 UTC)

Recently I had to spend some time developing on MacOSX. Gentoo Prefix is sadly getting less and less useful until we make clang a first-class citizen (people proposing a GSoC project for it are welcome!), so I'm forced to use what's provided by Xcode.

Clang

clang is wonderful for developing: it is arguably fast at building and the generated code isn't that bad, except when you are using asan and it miscompiles (reported to the asan developers; they will have a look, and gcc-asan works as expected).

The warning reporting is probably the one feature I miss most in other compilers; that's why I added it to cparser, and I'm looking forward to moving to gcc-4.9.

All in all, the clang developers increased the usability of the compiler and made other projects improve as well; competition in open source does work.

LD

The linker is again different from the usual binutils. Normally you do not notice it, but with the new Xcode you have to face it, since some projects will have problems finding symbols. Again, the reporting is quite good, not stellar like clang's, but when the missing symbols are C++ it does a better job than stock binutils at telling you what's missing from where.

lldb

The new debugger probably isn't really ready for prime time. gdb gets its share of complaints about some of its quirks (the macro system is quite minimal, and the Python interface is good but not documented as well as it should be), but it is really effective and fast to use.

lldb is not. Almost every command that is a single statement in gdb takes two in lldb, usually with a compulsory option. Setting breakpoints, watchpoints, moving through frames: everything gets more cumbersome, as the example below shows.
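
For instance, setting a watchpoint on a variable (the variable name here is just a placeholder):

(gdb) watch ptr
(lldb) watchpoint set variable ptr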

The reporting is a little more confusing, and the error messages can be misleading. And since, when you are debugging, there are problems and you might be under pressure, that doesn't help at all.

While debugging some VDA hwaccel improvements for libav, I spent quite a bit of time tracking down why a pointer was getting nulled. The watchpoint set on it triggered at random times in the innards of the OS X memory management, and I couldn't actually see when or how that happened. I ended up writing a dummy hwaccel accessing the same fields on Linux, ran it through gdb, and discovered the actual problem in … 10 minutes, coding and reboots included.

I do hope we'll see a better interface for lldb and further improvements to gdb (and hopefully clang + gdb and gcc + lldb will interoperate better).

March 07, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Couchbase on Gentoo Linux (March 07, 2014, 16:29 UTC)

Back in 2010, when I was comparing different NoSQL solutions, I came across CouchDB. Even though I went for mongoDB in the end, it was still a nice and promising technology, even more so since the merge with the Membase guys in late 2012, which led to the actual Couchbase.

I won't go into the details of Couchbase itself, since it's well covered all around the net, but I wanted to let you guys know that I've packaged most of the Couchbase ecosystem for Gentoo Linux:

  • dev-db/couchbase-server-community-2.2.0 : the community server edition (bin)
  • dev-libs/libcouchbase-2.2.0 : the C client library
  • dev-python/couchbase-1.2.0 : the python client library

Those packages are still only available in my overlay (ultrabug on layman), since I'm not sure about the interest of other users in the community and I still need to make sure they're production ready; see the commands below if you want to try them.
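
Assuming layman is already set up on your box, adding the overlay and installing the server should be the usual routine:

# layman -a ultrabug
# emerge -av dev-db/couchbase-server-community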

If you're interested in seeing these packages in portage, please say so!

I dedicate this packaging to @atorgfr :)

March 06, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr A very short key exchange crashes Chromium/Chrome. Other browsers accept parameters for a Diffie-Hellman key exchange that are complete nonsense. In combination with recently found TLS problems, this could be a security risk.

People who recently tried to access the webpage https://demo.cmrg.net/ with a current version of the Chrome browser or its free counterpart Chromium have experienced that it crashes the browser. On Tuesday this was noted on the oss-security mailing list. The news spread quickly and gave this test page some attention, but the page was originally not set up to crash browsers. According to a thread on LWN.net, it was set up in November 2013 to test extremely short parameters for a key exchange with Diffie-Hellman. Diffie-Hellman can be used in the TLS protocol to establish a connection with perfect forward secrecy.

For a key exchange with Diffie-Hellman, a server needs two parameters, which are transmitted to the client on a connection attempt: a large prime and a so-called generator. The size of the prime defines the security of the algorithm. Usually, primes with 1024 bits are used today, although this is not very secure. Mostly the Apache web server is responsible for this, because before the very latest version 2.4.7 it was not able to use longer primes for key exchanges.

The test page mentioned above tries a connection with a 16-bit prime - extremely short - and it seems it has caught a serious bug in Chromium. We had a look at how other browsers handle short or nonsense key exchange parameters; the toy example after this paragraph shows why such short primes are hopeless.
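
To illustrate, here is a toy Python sketch with made-up numbers (65521 is the largest 16-bit prime): at this size, an eavesdropper can recover a working secret exponent by plain brute force in a fraction of a second.

p, g = 65521, 5             # a 16-bit prime and an arbitrary base
secret = 12345              # one party's private exponent
public = pow(g, secret, p)  # the value sent over the wire

# Brute-force the discrete log: find an exponent that reproduces the
# public value, which is enough to compute the shared secret.
recovered = next(x for x in range(1, p) if pow(g, x, p) == public)
print(recovered)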

Mozilla Firefox rejects connections with very short primes like 256 bits or shorter, but connections with 512 and 768 bits were possible, which is completely insecure today. Once the Chromium crash is prevented with the available patch, Chromium behaves the same way. Both browsers use the NSS library, which blocks connections with very short primes.

The test with Internet Explorer was a bit difficult, because usually the Microsoft browser doesn't support Diffie-Hellman key exchanges. It is only possible if the server certificate uses a DSA key with a length of 1024 bits. DSA keys for TLS connections are extremely rare; most certificate authorities only support RSA keys, and certificates with 1024 bits usually aren't issued at all today. But we found that CAcert, a free certificate authority that is not included in mainstream browsers, still allows DSA certificates with 1024 bits. Internet Explorer allowed only connections with primes of 512 bits or larger. Interestingly, Microsoft's browser also rejects connections with 2048 and 4096 bits. So it seems Microsoft doesn't accept too much security. In practice this is mostly irrelevant though: with common RSA certificates, Internet Explorer only allows key exchanges with elliptic curves.

Opera is stricter than the other browsers with short primes. Connections below 1024 bits produce a warning, and the user is asked whether he really wants to connect. Other browsers should probably also reject such short primes; there are no legitimate reasons for a key exchange with fewer than 1024 bits.

The behavior of Safari on MacOS and Konqueror on Linux was interesting: both browsers accepted almost any kind of nonsense parameters. Very short "primes" like 17 were accepted; even with 15 as the "prime", a connection was possible.

No browser checks whether the transmitted prime is really a prime: a test connection with 1024 bits which used a non-prime parameter was possible with all browsers. The reason is probably that testing primality is not trivial. To test large numbers for primality, the Miller-Rabin test is used. It doesn't provide a strict mathematical proof of primality, only a very high probability, but in practice this is good enough. A Miller-Rabin test with 1024 bits is very fast, but with 4096 bits it can take seconds on slow CPUs - an often unacceptable delay for an HTTPS connection.
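
For reference, the test itself is simple to sketch in Python; a real TLS stack would of course need a hardened, vetted implementation:

import random

def is_probable_prime(n, rounds=40):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness: n is definitely composite
    return True  # no witness found: n is prime with high probability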

At first glance it seems irrelevant whether browsers accept insecure parameters for a key exchange; usually this does not happen. The only way it could happen is with a malicious server, but that would mean that the server itself is not trustworthy. In that case the transmitted data is not secure anyway, because the server could send it to third parties completely unencrypted.

But in connection with client certificates, insecure parameters can be a problem. Some days ago a research team found several possible attacks against the TLS protocol. In these attacks, a malicious server could pretend to another server that it holds the certificate of a user connecting to the malicious server. The authors of this so-called Triple Handshake attack mention one variant that uses insecure Diffie-Hellman parameters. Client certificates are rarely used, so in most scenarios this does not matter. The authors suggest that TLS could use standardized parameters for the Diffie-Hellman key exchange. A server could then check quickly whether the parameters are known - and would be sure that they are real primes. Future research may show whether insecure parameters matter in other scenarios.

The crash problems in Chromium show that, in the past, software wasn't tested very well with nonsense parameters in cryptographic protocols. Similar tests for other protocols could reveal further problems.

The mentioned tests for browsers are available at the URL https://dh.tlsfun.de/.

This text is mostly a translation of a German article I wrote for the online magazine Golem.de.

March 05, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)

In a previous article, titled 'using deltas to speed up SquashFS ebuild repository updates', the author considered the benefits of using binary deltas to update SquashFS images. The proposed method proved very efficient in terms of disk I/O, memory and CPU time use. However, the relatively large size of the deltas made network bandwidth a bottleneck.

The rough estimates done at the time showed that this is not a major issue for a common client with a moderate-bandwidth link such as ADSL. Nevertheless, the size is an inconvenience both to clients and to mirror providers. Assuming that there is an upper bound on the disk space consumed by snapshots, the extra size reduces the number of snapshots stored on mirrors, and therefore shortens the supported update period.

The most likely cause of the excessive delta size is the complexity of the correlation between input and compressed output. Changes in the input files are likely to cause much larger changes in the SquashFS output, which the tested delta algorithms fail to express efficiently.

For example, in the LZ family of compression algorithms, a change in the input stream may affect the contents of the dictionary and therefore the output stream following it. In block-based compressors such as bzip2, a change in the input may shift all the following data, moving it across block boundaries. As a result, the contents of all the blocks following it change, and with them the compressed output for each of those blocks.

Since SquashFS splits the input into multiple blocks that are compressed separately, the scope of this issue is much smaller than in plain tarballs. Nevertheless, small changes occurring in multiple blocks can grow the delta to two to four times the size it would have if the data were not compressed. In this paper, the author explores the possibility of introducing transparent decompression into the delta generation process to reduce the delta size.
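
The effect is easy to reproduce. In this toy Python demonstration (made-up data, default zlib settings), flipping a single byte early in the input changes nearly the whole compressed stream, which is exactly what a byte-wise delta algorithm struggles to encode compactly:

import zlib

a = (b"some repeating payload " * 40 + b"unique tail") * 30
b = bytearray(a)
b[25] = ord("X")  # a single-byte change early in the input

ca, cb = zlib.compress(a), zlib.compress(bytes(b))
# find where the two compressed streams start to differ
prefix = next((i for i, (x, y) in enumerate(zip(ca, cb)) if x != y),
              min(len(ca), len(cb)))
print(len(ca), len(cb), prefix)  # they diverge almost immediately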

Read on… [PDF]

Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita 0.4 "Ukraine" is released (March 05, 2014, 14:22 UTC)

Hi all,
we are pleased to announce version 0.4 of Trojitá, a fast Qt IMAP e-mail client. For this release, a lot of changes were made under the hood, but of course there are some changes that are visible to the user as well.

Improvements:

  • Users are able to use multiple sessions, which means that it is possible to use Trojitá with multiple IMAP accounts at the same time. It can be used by invoking Trojitá with the --profile something switch. For each profile, a new instance of the application is started. Please note that this is not our final solution for the multi-accounts problem; work on this is ongoing. For details, refer to the detailed instructions.
  • In the Composer Window, users can now control whether the current message is a reply to some other message. Hopefully, this will make it easier to reply to a ton of people while starting a new thread, not lumping the unrelated conversations together.
  • Trojitá will now detect changes to the network connection state. So for example, when a user switches from a wireless connection to a wired one, Trojitá will detect that and try to reconnect automatically.
  • Trojitá gained a setting to automatically use the system proxy settings.
  • SOCKS5 and HTTP proxies are supported.
  • Memory usage has been reduced and speed has been improved. Our benchmarks indicate a tenfold speedup when synchronizing huge mailboxes, together with 38% less memory use.
  • The Compose Window supports editing the "From" field with hand-picked addresses as per common user requests.

This release has been tagged in git as "v0.4". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS.

This release is dedicated to the people of all nations living in Ukraine. We are no fans of political messages in software announcements, but we also cannot remain silent when unmarked Russian troops are marching over a free country. The Trojitá project was founded in a republic formerly known as Czechoslovakia. We were "protected" by foreign aggressors twice in the 20th century: first in 1938 by Nazi Germany, and a second time in 1968 by the occupation forces of the USSR. Back in 1938, Adolf Hitler used the same rhetoric we hear today: that a national minority was oppressed. In 1968, eight people who protested against the occupation in Moscow were detained within a couple of minutes, convicted and sent to jail. In 2014, Muscovites are protesting on a bigger scale, yet we all see the cops arresting them on YouTube, including those displaying blank signs.

This is not about politics; this is about morality. What is happening today in Ukraine is a barbaric act, an occupation of an innocent country which has done nothing but stop being attracted to its more prominent eastern neighbor. No matter what one thinks about international politics and Crimean independence, this is an act which must be condemned and fiercely fought against. There isn't much we can do, so we hope that at least this symbolic act will let the Ukrainians know that the world's thoughts are with them in this dire moment. За вашу и нашу свободу (for your freedom and ours), indeed!

Finally, we would like to thank Jai Luthra, Danny Rim, Benjamin Kaiser and Yazeed Zoabi, our Google Code-In students, and Stephan Platz, Karan Luthra, Tomasz Kalkosiński and Luigi Toscano, people who recently joined Trojitá, for their code contributions.

The Trojitá developers

  • Jan Kundrát
  • Yuri Chornoivan
  • Karan Luthra
  • Pali Rohár
  • Tomasz Kalkosiński
  • Christian Degenkolb
  • Jai Luthra
  • Stephan Platz
  • Thomas Lübking

March 04, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

A friend of mine recently sent me a link to this video clip of a Drill Sergeant coming down hard on a preteen boy for his bad behaviour (taken from the Jenny Jones show, Bootcamp my Preteen episode). The boy’s response caught him off-guard, but should serve as a reminder to us of the different situations that children have to endure.


Drill Sergeant stunned by preteen boy’s response – Jenny Jones – Bootcamp my Preteen

March 02, 2014
Gentoo Monthly Newsletter - February 2014 (March 02, 2014, 23:04 UTC)

The February 2014 GMN issue is now available online.

This month on GMN:

  • Interview with Gentoo developer Sven Vermeulen (swift)
  • Latest Gentoo news, job openings, interesting stats and much more.

Matthew Thode a.k.a. prometheanfire (homepage, bugs)
testing the s3700 (March 02, 2014, 06:00 UTC)

The Setup

Two 100G s3700 drives, one tested with luks, one not.

The drives were filled before testing, so the numbers reflect a full drive.

Testing 4k/8k, with luks using --size=8 or --size=9 for 4k and 8k respectively.

I used the following settings in fio, changing the filename and block size where appropriate.

[global]
bs=4k
ioengine=posixaio
iodepth=32
size=200g
filename=/dev/mapper/testssd
direct=1

[rand-read]
rw=randread
stonewall

[rand-write]
rw=randwrite
stonewall

[seq-read]
rw=read
stonewall

[seq-write]
rw=write
stonewall

Results

Interrupts and context switches were generally high, though oddly less so with luks.

Sequential IOPS

  • 4k sequential writes with luks was 73.7% of max ( 12424 vs 16855 )
  • 4k sequential reads with luks was 76.7% of max ( 14471 vs 18864 )
  • 8k sequential writes with luks was 71.0% of max ( 9640 vs 13573 )
  • 8k sequential reads with luks was 71.8% of max ( 10744 vs 14966 )

Random IOPS

  • 4k random writes with luks was 82.2% of max ( 13919 vs 16924 )
  • 4k random reads with luks was 80.7% of max ( 6260 vs 7756 )
  • 8k random writes with luks was 71.7% of max ( 9718 vs 13557 )
  • 8k random reads with luks was 64.7% of max ( 4222 vs 6526 )

Conclusion

My use case is ZFS, with these drives serving as L2ARC/ZIL cache; I'll be using 8k on luks.