Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Peter Wilmott
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Yury German
. Zack Medico

Last updated:
April 30, 2016, 03:06 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 28, 2016
GSoC 2016: Five projects accepted (April 28, 2016, 00:00 UTC)

We are excited to announce that 5 students have been selected to participate with Gentoo during the Google Summer of Code 2016!

You can follow our students’ progress on the gentoo-soc mailing list and chat with us regarding our GSoC projects via IRC in #gentoo-soc on freenode.
Congratulations to all the students. We look forward to their contributions!

GSoC logo

Accepted projects

Clang native support - Lei Zhang: Bring native clang/LLVM support to Gentoo.

Continuous Stabilization - Pallav Agarwal: Automate the package stabilization process using continuous integration practices.

kernelconfig - André Erdmann: Consistently generate custom Linux kernel configurations from curated sources.

libebuild - Denys Romanchuk: Create a common shared C-based implementation for package management and other ebuild operations in the form of a library.

Gentoo-GPG - Angelos Perivolaropoulos: Code the new Meta-Manifest system for Gentoo and improve Gentoo Keys capabilities.

Events: Gentoo Miniconf 2016 (April 28, 2016, 00:00 UTC)

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University.

Want to participate? The call for papers is open until 1 August 2016.

April 25, 2016
Gentoo Miniconf 2016 a.k.a. miniconf-2016 (homepage, bugs)

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University in Prague (FIT ČVUT).

The call for papers is now open; you can submit your session proposal until 1 August 2016. Want to have a meeting, discussion, presentation, workshop, do ebuild hacking, or anything else? Tell us!

miniconf-2016

April 22, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Well, it was that time of year again… time to dress up as one’s favourite Superhero and run for a great cause! The first weekend of April, I made the drive down to the lovely city of Huntsville, Alabama in order to support the National Children’s Advocacy Center by running in the 2016 NCAC Superheros 5K (here’s my post about last year’s race).

This year’s course was the same as last year, so it was a really nice run through parts of the city centre and through more residential areas of Huntsville. Unlike last year, the race started much later in the afternoon, so the temperatures were a lot higher (last year was, truthfully, a bit chilly). It was beautifully sunny, and actually bordered on a bit warm, but I would gladly take those conditions over the cold! :)

2016 NCAC Superheroes race - pre-race start
Right before the start of the race

I wasn’t quite sure how this race was going to turn out, seeing as it was my first since the knee injury late last year. I was hopeful that my rehabilitation and training since the injury would help me at least come close to my time last year, but I also doubted that possibility. I came in first place overall with a time of 20:13, which was a little over 30 seconds slower than last year. All things considered, I was pleased with my time. A few other fantastic runners to mention this year were Elliott Kliesner (age 14) who came in about 37 seconds after me, Christian Grant (age 12) with a time of 21:42, and Bud Bettler (age 72) who finished with an outstanding time for his age bracket at 28:16.

2016 NCAC Superheroes race - 5K results
5K Results 1st through 5th place

Years ago, I decided that I wouldn’t run in any races unless they benefited a children’s charity, and I can’t think of an organisation whose mission aligns more closely with my goals than the National Children’s Advocacy Center. According to WAFF News in Huntsville, the race raised over $24,000 for the NCAC! That will make a huge difference in the lives of the children that the NCAC serves! Here’s to hoping that next year’s race (the 7th annual) will raise even more. Hope to see you there!

2016 NCAC Superheroes race - Nathan Zachary award acceptance
Nathan Zachary’s award (and Superhero cape) acceptance

Cheers,
Zach

April 15, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)

Those of you who use my Gentoo repository mirrors may have noticed that the repositories are constructed of original repository commits automatically merged with cache updates. While the original commits are signed (at least in the official Gentoo repository), the automated cache updates and merge commits are not. Why?

Actually, I wondered about signing them more than once, and even discussed it a bit with Kristian. However, each time I decided against it. I was seriously concerned that those automatic signatures would not be able to provide a sufficient level of security — and could cause users to believe the commits are authentic even when they are not. I think it would be useful to explain why.

Verifying the original commits

While this may not be entirely clear, by signing the merge commits I would implicitly approve the original commits as well. This might be worked around via some kind of policy requiring the user to perform additional verification, but such a policy would be impractical and confusing. Therefore, it only seems reasonable to verify the original commits before signing merges.

The problem with that is that we still do not have an official verification tool for repository commits. There’s the whole Gentoo-keys project that aims to eventually solve the problem but it’s not there yet. Maybe this year’s Summer of Code will change that…

Lacking official verification routines, I would have to implement my own. I’m not saying it would be that hard — but it would always be semi-official, at best. Of course, I could spend a day or two contributing the needed code to Gentoo-keys and preventing some student from getting the $5500 of Google money… but that would be the non-enterprise way of solving the urgent problem.

Protecting the signing key

The other important point is the security of the key used to sign commits. For the whole effort to make any sense, it needs to be strongly protected against compromise. Keeping the key (or even a subkey) unencrypted on the server really diminishes the whole effort (I’m not pointing fingers here!).

Basic rules first: the primary key is kept off-line and used only to generate a signing subkey. The signing subkey is stored encrypted on the server and used via gpg-agent, so that it is never kept unencrypted outside of memory. All nice and shiny.
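
A minimal sketch of that layout, assuming GnuPG 2.x (the key ID and cache timeouts below are placeholders, not what the mirrors actually use):

# On the offline machine: export only the signing subkey, never the primary key
gpg --export-secret-subkeys --armor 0xDEADBEEF > signing-subkey.asc

# On the server: import the still passphrase-protected subkey and let
# gpg-agent cache the passphrase in memory once it has been typed in
gpg --import signing-subkey.asc
cat >> ~/.gnupg/gpg-agent.conf <<EOF
default-cache-ttl 86400
max-cache-ttl 86400
EOF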

The problem is — this means someone needs to type the password in. Which means there needs to be an interactive bootstrap process. Which means that every time the server reboots for some reason, or gpg-agent dies, or whatever, the mirrors stop and wait for me to come and type the password in. Hopefully while I’m around some semi-secure device.

Protecting the software

Even with all those points considered and solved satisfactorily, there’s one more issue: the software. I won’t be running all those scripts on hardware in my home. So it’s not just me you have to trust — you have to trust all the other people with administrative access to the machine that’s running the scripts, and you have to trust the employees of the hosting company who have physical access to the machine.

I mean, any one of them could go and attempt to alter the data somehow. Even if I tried hard, I wouldn’t be able to protect my scripts from this. In the worst case, they would add a valid, verified signature to data that has been altered externally. What’s the value of that signature then?

And this is the exact reason why I don’t do automatic signatures.

How to verify the mirrors then?

So if automatic signatures are not the way, how can you verify the commits on repository mirrors? The answer is not that complex.

As I’ve mentioned, the mirrors use merge commits to combine metadata updates with original repository commits. What’s important is that this preserves the original commits, along with their valid signatures and therefore provides a way to verify them. What’s the use of that?

Well, you can look for the last merge commit to find the matching upstream commit. Then you can use the usual procedure to verify the upstream commit. And then, you can diff it against the mirror HEAD to see that only caches and other metadata have been altered. While this doesn’t guarantee that the alterations are genuine, the danger coming from them is rather small (if any).
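
For example, a rough sketch of that procedure with plain git commands could look like the following; the clone URL is a placeholder, and the merge layout (upstream commit as second parent) is an assumption about how the mirror merges are built:

git clone https://example.org/gentoo-mirror.git && cd gentoo-mirror

merge=$(git rev-list --merges -n 1 HEAD)   # newest merge commit created by the mirror
upstream=$(git rev-parse "${merge}^2")     # its second parent, assumed to be the original upstream commit

git log -1 --show-signature "$upstream"    # verify the upstream commit signature the usual way

# Finally, check that only caches and other metadata differ from the mirror HEAD
git diff --stat "$upstream" HEAD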

April 11, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

A couple weeks ago, I decided to update my primary laptop’s kernel from 4.0 to 4.5. Everything went smoothly with the exception of my wireless networking. This particular laptop uses a wifi chipset that is controlled by the Intel Wireless DVM Firmware:


# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

According to the Intel Linux support for wireless networking page, I need kernel support for the ‘iwlwifi’ driver. I remembered this requirement from building the previous kernel, so I included it in the new 4.5 kernel. The new kernel had some additional options, though, and they were:


[*] Intel devices
...
< > Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
< > Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->

As previously mentioned, the Kernel page for iwlwifi indicates that I need the DVM module for my particular chipset, so I selected it. Previously, I chose to build support for the driver into the kernel, and then use the firmware for the device. However, this time, I noticed that it wasn’t loading:


[ 3.962521] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 3.970843] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2
[ 3.976457] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
[ 3.996628] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG enabled
[ 3.996640] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[ 3.996647] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[ 3.996656] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[ 3.996828] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 4.306206] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
[ 9.632778] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633025] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633133] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 9.898531] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898803] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898906] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.605734] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.605983] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.606082] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.873465] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873831] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873971] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0

The strange thing, though, is that the firmware was right where it should be:


# ls -lh /lib/firmware/
total 664K
-rw-r--r-- 1 root root 662K Mar 26 13:30 iwlwifi-6000g2a-6.ucode

After digging around for a while, I finally figured out the problem. The kernel was trying to load the firmware for this device/driver before it was actually available. There are definitely ways to build the firmware into the kernel image as well, but instead of going that route, I just chose to rebuild my kernel with this driver as a module (which is actually the recommended method anyway):


[*] Intel devices
...
<M> Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
<M> Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->
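
With the driver built as a module, a quick sanity check after booting into the new kernel might look something like this (a hedged sketch; it assumes CONFIG_IKCONFIG_PROC is enabled so /proc/config.gz exists):

# Confirm the driver and DVM firmware support are now built as modules
zgrep -e CONFIG_IWLWIFI -e CONFIG_IWLDVM /proc/config.gz

# Load the module and check that the firmware is found this time
modprobe iwlwifi
dmesg | grep -i 'loaded firmware'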

If I had fully read the page instead of just skimming it, I could have saved myself a lot of time. Hopefully this post will help anyone getting the “Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2” error message.

Cheers,
Zach

April 04, 2016
Hanno Böck a.k.a. hanno (homepage, bugs)

The Owncloud web application has an encryption module. I first became aware of it when a press release advertising this encryption module was published, containing this:

“Imagine you are an IT organization using industry standard AES 256 encryption keys. Let’s say that a vulnerability is found in the algorithm, and you now need to improve your overall security by switching over to RSA-2048, a completely different algorithm and key set. Now, with ownCloud’s modular encryption approach, you can swap out the existing AES 256 encryption with the new RSA algorithm, giving you added security while still enabling seamless access to enterprise-class file sharing and collaboration for all of your end-users.”

To anyone who knows anything about crypto this sounds quite weird. AES and RSA are very different algorithms – AES is a symmetric algorithm and RSA is a public key algorithm – and it makes no sense to replace one with the other. Also, RSA is much older than AES. This press release has since been removed from the Owncloud webpage, but its content can still be found in this Reuters news article. This and some conversations with Owncloud developers caused me to have a look at this encryption module.

First it is important to understand what this encryption module is actually supposed to do and what the threat scenario is. The encryption provides no security against a malicious server operator, because the encryption happens on the server. The only scenario where this encryption helps is if one has a trusted server that is using untrusted storage space.

When one uploads a file with the encryption module enabled it ends up under the same filename in the user's directory on the file storage. Now here's a first, quite obvious problem: The filename itself is not protected, so an attacker that is assumed to be able to see the storage space can already learn something about the supposedly encrypted data.

The content of the file starts with this:
BEGIN:oc_encryption_module:OC_DEFAULT_MODULE:cipher:AES-256-CFB:HEND----

It is then padded with further dashes until position 0x2000, and then the encrypted content follows, Base64-encoded in blocks of 8192 bytes. The header tells us what encryption algorithm and mode is used: AES-256 in CFB mode. CFB stands for Cipher Feedback.
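
You can see this for yourself by dumping the first bytes of any file in the server-side storage (the path below is only an example):

# Peek at the unauthenticated, unencrypted header of a server-side encrypted file
head -c 80 /var/www/owncloud/data/alice/files/report.pdf
BEGIN:oc_encryption_module:OC_DEFAULT_MODULE:cipher:AES-256-CFB:HEND----...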

Authenticated and unauthenticated encryption modes

In order to proceed we need some basic understanding of encryption modes. AES is a block cipher with a block size of 128 bit. That means we cannot just encrypt arbitrary input with it, the algorithm itself only encrypts blocks of 128 bit (or 16 byte) at a time. The naive way to encrypt more data is to split it into 16 byte blocks and encrypt every block. This is called Electronic Codebook mode or ECB and it should never be used, because it is completely insecure.

Common modes for encryption are Cipher Block Chaining (CBC) and Counter mode (CTR). These modes are unauthenticated and have a property called malleability. This means an attacker that is able to manipulate encrypted data is able to manipulate it in a way that may cause a certain defined behavior in the output. Often this simply means an attacker can flip bits in the ciphertext and the same bits will be flipped in the decrypted data.

To counter this, these modes are usually combined with some authentication mechanism; a common one is called HMAC. However, experience has shown that this combining of encryption and authentication can go wrong. Many vulnerabilities in both TLS and SSH were due to bad combinations of these two mechanisms. Therefore, modern protocols usually use dedicated authenticated encryption modes (AEADs); popular ones include Galois/Counter Mode (GCM), Poly1305 and OCB.

Cipher Feedback (CFB) mode is a self-correcting mode. When an error happens, which can be a simple data transmission error or a hard disk failure, the decryption will be correct again two blocks later. This also allows decrypting parts of an encrypted data stream. But the crucial thing for our attack is that CFB is not authenticated and is malleable. And Owncloud didn’t use any authentication mechanism at all.

Therefore the data is encrypted and an attacker cannot see the content of a file (however he learns some metadata: the size and the filename), but an Owncloud user cannot be sure that the downloaded data is really the data that was uploaded in the first place. The malleability of CFB mode works like this: an attacker can flip arbitrary bits in the ciphertext, and the same bits will be flipped in the decrypted data. However, if he flips a bit in any block, then the following block will contain unpredictable garbage.
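
To see the malleability concretely, here is a small shell experiment with OpenSSL’s full-block CFB mode; the key and IV are random throwaway test values, nothing Owncloud-specific:

# Throwaway key and IV, purely for demonstration
KEY=$(openssl rand -hex 32)
IV=$(openssl rand -hex 16)

head -c 64 /dev/urandom > original.bin
openssl enc -aes-256-cfb -K "$KEY" -iv "$IV" -in original.bin -out encrypted.bin

# Flip the lowest bit of the first byte of ciphertext block 3 (offset 32)
byte=$(xxd -p -s 32 -l 1 encrypted.bin)
printf "$(printf '\\x%02x' $(( 0x$byte ^ 0x01 )))" | \
    dd of=encrypted.bin bs=1 seek=32 count=1 conv=notrunc 2>/dev/null

openssl enc -d -aes-256-cfb -K "$KEY" -iv "$IV" -in encrypted.bin -out tampered.bin

# The byte at offset 32 now differs only in that one bit; the following
# 16-byte block (offsets 48-63) is unpredictable garbage
cmp -l original.bin tampered.bin

cmp -l lists every differing byte, which makes the pattern described above easy to spot.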

Backdooring an EXE file

How does that matter in practice? Let's assume we have a group of people that share a software package over Owncloud. One user uploads a Windows EXE installer and the others download it from there and install it. Let's further assume that the attacker doesn't know the content of the EXE file (this is a generous assumption, in many cases he will know, as he knows the filename).

EXE files start with a so-called MZ header, which is the old DOS EXE header that is usually ignored. At a certain offset (0x3C), which is at the end of the fourth 16 byte block, there is the address of the PE header, which on Windows systems is the real EXE header. Even on modern executables, a small DOS program still follows the MZ header. This starts with the fifth 16 byte block. This DOS program usually only shows the message “This program cannot be run in DOS mode”, and this DOS stub program is almost always exactly the same.


Therefore our attacker can do the following: first, flip any non-relevant bit in the third 16 byte block. This will cause the fourth block to contain garbage. The fourth block contains the offset of the PE header. As this is now garbled, Windows will no longer consider this executable to be a Windows application and will therefore execute the DOS stub.

The attacker can then XOR 16 bytes of his own code with the first 16 bytes of the standard DOS stub code. He then XORs the result with the fifth block of the EXE file where he expects the DOS stub to be. Voila: The resulting decrypted EXE file will contain 16 bytes of code controlled by the attacker.

I created a proof of concept of this attack. This isn't enough to launch a real attack, because an attacker only has 16 bytes of DOS assembler code, which is very little. For a real attack an attacker would have to identify further pieces of the executable that are predictable and jump through the code segments.

The first fix

I reported this to Owncloud via HackerOne in January. The first fix they proposed was a change where they used Counter mode (CTR) in combination with HMAC. They still encrypt the file in blocks of 8192 bytes. While this is certainly less problematic than the original construction, it still had an obvious problem: all the 8192-byte file blocks were encrypted the same way. Therefore an attacker can swap or remove chunks of a file. The encryption is still malleable.

The second fix then included a counter in the file and also prevents attacks where an attacker rolls a file back to an earlier version. This solution is shipped in Owncloud 9.0, which has recently been released.

Is this new construction secure? I honestly don't know. It is secure enough that I didn't find another obvious flaw in it, but that doesn't mean a whole lot.

You may wonder at this point why they didn’t switch to an authenticated encryption mode like GCM. The reason is that PHP doesn’t support any authenticated encryption modes. There is a proposal, and most likely support for authenticated encryption will land in PHP 7.1. However, given that using outdated PHP versions is a very widespread practice, it will probably take another decade until anyone can use that in mainstream web applications.

Don't invent your own crypto protocols

The practical relevance of this vulnerability is probably limited, because the scenario that it protects from is relatively obscure. But I think there is a lesson to learn here. When people without a strong cryptographic background create ad-hoc designs of cryptographic protocols it will almost always go wrong.

It is widely known that designing your own crypto algorithms is a bad idea and that you should use standardized and well-tested algorithms like AES. But using secure algorithms doesn’t automatically create a secure protocol. One has to know the interactions and limitations of crypto primitives, and this is far from trivial. There is a worrying trend – especially since the Snowden revelations – that new crypto products that never saw any professional review get developed and advertised en masse. A lot of these products are probably extremely insecure and shouldn’t be trusted at all.

If you do crypto you should either do it right (which may mean paying someone to review your design or to create it in the first place) or you better don't do it at all. People trust your crypto, and if that trust isn't justified you shouldn't ship a product that creates the impression it contains secure cryptography.

There's another thing that bothers me about this. Although this seems to be a pretty standard use case of crypto – you have a symmetric key and you want to encrypt some data – there is no straightforward and widely available standard solution for it. Using authenticated encryption solves a number of issues, but not all of them (this talk by Adam Langley covers some interesting issues and caveats with authenticated encryption).

The proof of concept can be found on Github. I presented this vulnerability in a talk at the Easterhegg conference, a video recording is available.

Michal Hrusecky a.k.a. miska (homepage, bugs)
Turris Omnia and openSUSE (April 04, 2016, 05:29 UTC)

About two weeks ago I was at the annual openSUSE Board face-to-face meeting. It was great, and you can read reports of what went on there on the openSUSE project mailing list. In this post I would like to focus on the other agenda I had while coming to Nuremberg. Nuremberg is, among other things, SUSE HQ, so there is a high concentration of skilled engineers there, and I wanted to take advantage of that…

A little bit of my personal history: I recently joined the Turris team at CZ.NIC, partly because Omnia is so cool and I wanted to help make it happen. And being a long-term openSUSE contributor, I really wanted to find some way to help both projects. I discussed it with my bosses at CZ.NIC and got in contact with Andreas Färber, whom you might know as one of the guys playing with ARMs within the openSUSE project. The result was that I got approval to bring an Omnia prototype to him over the weekend and let him play with it.

My point was to give him a head start, so that when Omnias start shipping, some research will already have been done and maybe even a howto for openSUSE will exist, so you could replace OpenWRT with openSUSE if you wanted. On the other hand, we also get some preliminary feedback we can still try to incorporate.

Andreas Färber with Omnia

Why test whether you can install openSUSE on Omnia? And do you want to do that? As a typical end user, probably not. Here are a few arguments that speak against it. OpenWRT is great for routers – it has a nice interface, and anything you want to do regarding network setup is really easy to do. You are able to set up even a complicated network using a simple web UI. Apart from that, by throwing away OpenWRT you would throw away quite a few of the perks of Omnia – like parental control or the mobile application. You might think it is worth sacrificing those to get a full-fledged server OS you are familiar with and where you can install everything in a non-stripped-down version. Actually, you don’t have to sacrifice anything – OpenWRT on Omnia will support LXC, so you can install your OS of choice inside an LXC container and have both – an easily manageable router with all the bells and whistles, and also a virtual server with very little overhead doing the complicated stuff. Or even two or three of them. So most probably, you want to keep OpenWRT and install openSUSE or some other Linux distribution inside a container.

But if you still do want to replace OpenWRT, can you? And how difficult would it be? Long story short, the answer is yes, you can. Andreas was able to get openSUSE running on Omnia and even wrote instructions on how to do it! One little comment: Turris Omnia is still under heavy development. What Andreas played with was one of the prototypes we have. The software is still being worked on, and even the hardware is being polished a little from time to time. But still, the hardware will not change drastically, and therefore the howto probably won’t change much either. It is nice to see that it is possible, and quite easy, to install your average Linux distribution.

Why is having this option so important, given all the arguments I stated against doing so? Because of freedom. I consider it a great advantage, when buying a piece of hardware, to know that I can do whatever I want with it and that I’m not locked in and dependent on the vendor for everything. Being able to install openSUSE on Omnia basically proves that Omnia is really open, and even in the unlikely situation in which hell freezes over and CZ.NIC disappears or turns evil, you will still be able to install the latest kernel 66.6 and continue to do whatever you want with your router.

This post was originally posted on CZ.NIC blog, re-posted here to make it available on Planet openSUSE.

April 03, 2016
Michal Hrusecky a.k.a. miska (homepage, bugs)
Shell calendar generator (April 03, 2016, 18:45 UTC)

Some people still use paper calendars – the kind where you have a picture for the month and all the days in the month listed. I have some relatives that do use those. On a loosely related topic, I like to travel and I like to take pictures in foreign lands. So combining both is an obvious idea – to create a calendar where the pictures of the month are taken by me. I searched for some ready-to-use solution but haven’t found anything. So I decided to create my own simple tool. And this post is about creating that tool.

I know time and date stuff is complicated, and I wasn’t really looking forward to learning all the rules regarding dates and times and programming them. There had to be a simple way to use some of the tools that are already implemented. An obvious option would be to use one of the date manipulation libraries like mktime and write the tool in C. But that sounded quite heavyweight for such a simple tool. Using Ruby would be an option, but still kind of too much, and I’m not a fluent Rubyist, and my Python and Perl are even rustier. I was also thinking about what output format I should use to print it easily. As I was targeting pretty printed paper, LaTeX sounded like a good choice, and in theory it could be used to implement the whole thing. I even found somebody who did that, but I didn’t manage to comprehend how it worked, how to modify it or even how to compile it. Turns out my LaTeX is rusty as well.

So I decided to use shell and the powerful date command to generate the content. I started with generating LaTeX code, as I still want it on paper in the end, right? Trouble is, LaTeX makes great papers if you want to look serious and do some serious typography. For a calendar on the wall, you probably want to make it fancy and screw typography. I was trying to make it do what I wanted, but it was hard. So hard I gave up. And I ended up with the winning combo – shell and HTML. HTML is easy to view and print, and CSS supports various options, including different styles for screen and print media.

HTML and CSS made the whole exercise really easy, and I now have something working on GitHub in 150 lines of code, half of which is CSS. It’s not perfect, there is plenty of room for optimization, but it is really simple and fast enough. Are you interested? Give it a try, and if it doesn’t work well for you, pull requests are welcome 😉
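
For illustration, a stripped-down sketch of the idea (not the actual tool that is on GitHub) can lean on GNU date for all the calendar arithmetic:

#!/bin/sh
# Print one month as a bare HTML table; year and month are parameters.
YEAR=${1:-2016}
MONTH=${2:-05}

# Last day of the month = first of the month, plus one month, minus one day
last=$(date -d "${YEAR}-${MONTH}-01 +1 month -1 day" +%d)

echo "<table class=\"month\">"
day=1
while [ "$day" -le "$last" ]; do
    d=$(printf '%02d' "$day")
    dow=$(date -d "${YEAR}-${MONTH}-${d}" +%A)
    echo "  <tr><td>${d}</td><td>${dow}</td></tr>"
    day=$((day + 1))
done
echo "</table>"

Redirect the output to a file and a browser can both preview and print it; the photos and the screen/print CSS are what the real generator adds on top.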

April 02, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Last words on diabetes and software (April 02, 2016, 22:24 UTC)

This started as a rant on G+, then became too long and more suited for a blog.

I do not understand why we can easily get people together for something like VideoLAN, but the moment health is involved, the results are just horrible.

Projects either end up as "startuppy" efforts, which want to keep things for themselves and by themselves, or we end up fragmented into tiny one-person projects, because every single glucometer is a different beast and nobody wants to talk with others.

Tonight I ended up in a half-fight with a project to which I came saying "I've started drafting an exchange format, because nobody has written one down, and the format I've seen you use is just terrible and when I told you, you haven't replied" and the answer was "we're working on something we want to standardize by talking with manufacturers."

Their "we talk with these" projects are also something insane — one seems to be more like the idea of building a new device from scratch (a great long-term solution, of terrible usefulness for people), and the other one is yet-another-build-your-own-cloud kind of solution that tells you to get Heroku or Azure with MongoDB to store your data. It also tells you to use a non-manufacturer-approved scanner for the sensors, which the comments point out can fry those sensors to begin with. (I should check whether that's actually within the ToS of the Play Store.)

So you know what? I'm losing hope in FLOSS once again. Maybe I should just stop caring, give up this laptop for a new Microsoft Surface Pro, and keep my head away from FLOSS until I am ready for retirement, at which point I can probably just go and keep up with the reading.

I have tried reaching out to the people who have written other tools, like I posted before, but it looks like people are just not interested in discussing this — I did talk with a few people over email about some of the glucometers I dealt with, but that came to one person creating yet another project wanting to become a business, and two figuring out which original proprietary tools to use, because they do actually work.

So I guess you won't be reading much about diabetes on my blog in the future, because I don't particularly enjoy writing this for my sole use, and clearly that's the only kind of usage these projects will ever get. Sharing seems to be considered deprecated.

April 01, 2016
Luca Barbato a.k.a. lu_zero (homepage, bugs)
AVScale – part1 (April 01, 2016, 18:53 UTC)

swscale is one of the most annoying parts of Libav; after a couple of years since the initial blueprint, we have something almost functional you can play with.

Colorspace conversion and Scaling

Before delving into the library architecture and the outer API, it’s probably good to give an extra-quick summary of what this library is about.

Most multimedia concepts are more or less intuitive:
  • encoding is taking some data (e.g. video frames, audio samples) and compressing it by leaving out unimportant details
  • muxing is the act of storing such compressed data and timestamps so that audio and video can play back in sync
  • demuxing is getting back the compressed data with the timing information stored in the container format
  • decoding somehow inflates the data so that video frames can be rendered on screen and the audio played on the speakers

After the decoding step it would seem that all the hard work is done, but since there isn’t a single way to store video pixels or audio samples, you need to process them so they work with your output devices.

That process is usually called resampling for audio; for video we have colorspace conversion to change the pixel information and scaling to change the amount of pixels in the image.

Today I’ll introduce you to the new library for colorspace conversion and scaling we are working on.

AVScale

The library aims to be as simple as possible and to hide all the gory details from the user; you won’t need to figure out the heads and tails of functions with quite a large number of arguments, nor special-purpose functions.

The API itself is modelled after avresample and approaches the problem of conversion and scaling in a way quite different from swscale, following the same design of NAScale.

Everything is a Kernel

One of the key concepts of AVScale is that the conversion chain is assembled out of different components, separating the concerns.

Those components are called kernels.

The kernels can be conceptually divided into two kinds:
  • Conversion kernels, taking an input in a certain format and providing an output in another (e.g. rgb2yuv) without changing any other property.
  • Process kernels, modifying the data while keeping the format itself unchanged (e.g. scale).

This pipeline approach provides great flexibility and helps code reuse.

The most common use cases (such as scaling without conversion, or conversion without scaling) can be faster than with solutions that try to merge scaling and conversion into a single step.

API

AVScale works with two kinds of structures:
  • AVPixelFormaton: a full description of the pixel format
  • AVFrame: the frame data, its dimensions and a reference to its format details (aka AVPixelFormaton)

The library will have an AVOption-based system to tune specific options (e.g. selecting the scaling algorithm).

For now only avscale_config and avscale_convert_frame are implemented.

So if the input and output are pre-determined the context can be configured like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_config(ctx, out, in);
if (ret < 0)
    ...

But you can skip that and scale and/or convert from an input to an output directly, like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    ...

ret = avscale_convert_frame(ctx, out, in);
if (ret < 0)
    ...

avscale_free(&ctx);

The context gets lazily configured on the first call.

Notice that avscale_free() takes a pointer to a pointer, to make sure the context pointer does not stay dangling.

As I said, the API is really simple and essential.

Help welcome!

Kostya kindly provided an initial proof of concept, and Vittorio, Anton and I prepared this preview in our spare time. There is plenty left to do; if you like the idea (many kept telling us they would love a swscale replacement), we even have a fundraiser.

March 31, 2016
Anthony Basile a.k.a. blueness (homepage, bugs)

I’ll be honest, this is a short post because the aggregation on planet.gentoo.org is failing for my account!  So, Jorge (jmbsvicetto) is debugging it and I need to push out another blog entry to trigger venus, the aggregation program.  Since I don’t like writing trivial stuff, I’m going to write something short, but hopefully important.

C standard libraries, like glibc, uClibc, musl and the like, were born out of a world in which every UNIX vendor had their own set of useful C functions.  Code portability put pressure on the various libcs to incorporate these functions from other libcs, leading first to a mess and then to standards like POSIX, XOPEN, SUSv4 and so on.  Chapter 1 of Kerrisk’s The Linux Programming Interface has a nice write-up on this history.

We still live in the shadows of that world today.  If you look through the code base of uClibc you’ll see lots of macros like __GLIBC__, __UCLIBC__, __USE_BSD, and __USE_GNU.  These are used in #ifdef … #endif blocks which are meant to shield features unless you want a glibc- or uClibc-only feature.

musl has stubbornly and correctly refused to include a __MUSL__ macro.  Consider the approach to portability taken by GNU autotools.  Macros such as AC_CHECK_LIBS(), AC_CHECK_FUNC() or AC_CHECK_HEADERS() unambiguously target the feature in question without making use of __GLIBC__ or __UCLIBC__.  Whereas the previous approach globs functions together into sets, the latter simply asks: do you have this function or not?

Now consider how uClibc makes use of both __GLIBC__ and __UCLIBC__.  If a function is provided by the former but not by the latter, then it expects a program to use

#if defined(__GLIBC__) && !defined(__UCLIBC__)

This is getting a bit ugly and syntactically ambiguous.  Someone not familiar with this could easily misinterpret it, or reject it.

So I’ve hit bugs like these.  I hit one in gdk-pixbuf, and I was not able to convince upstream to consistently use __GLIBC__ and __UCLIBC__.  I also hit this in geocode-glib and geoclue, and they did accept it.  I went with the wrong-minded approach because that’s what was already there, and I didn’t feel like sifting through their code base and revamping their build system.  This isn’t just laziness, it’s historical weight.

So kudos to musl.  And for all the faults of GNU autotools, at least its approach to portability is correct.

March 30, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Travel cards collection (March 30, 2016, 19:07 UTC)

As some of you might have noticed, for example by following me on Twitter, I have been traveling a significant amount over the past four years. Part of it has been for work, part for my involvement with VideoLAN, and part again for personal reasons (i.e. vacation).

When I travel, I don't rent a car. The main reason is that I (still) don't have a driving license, so particularly when I travel for leisure I tend to go where there is at least some form of public transport, and even better if there is a good one. This matched perfectly with my hopes of visiting Japan (which I did last year), and usually tends to work relatively well with conference venues, so I have not had much trouble with it in the past few years.

One thing that is getting a bit out of hand for me, though, is the number of travel cards I have by now. With the exception of Japan, every city or so has a different travel card — and while London appears to have solved that, at least for tourists and casual passengers, by accepting contactless bank cards as if they were its local travel card (Oyster), that does not seem to have been followed up by anyone else, that I can see.

Indeed I have at this point at home:

  • Clipper for San Francisco and Bay Area; prepaid, I actually have not used it in a while so I have some money "stuck" on it.
  • SmarTrip for Washington DC; also prepaid, but at least I managed to only keep very little on it.
  • Metro dayLink for Belfast; prepaid by tickets.
  • Ridacard for Edinburgh and the Lothian region; this one has my photo on it, and I paid for a weekly ticket when I used it.
  • imob.venezia, which is now discontinued and which I used when I lived in Venice; it's just terrible.
  • Suica, for Japan, which is a stored-value card that can be used for payments as well as travel, so it comes the closest to London's use of contactless.
  • Leap which is the local Dublin transports card, also prepaid.
  • Navigo for Paris, but I only used it once because you can only store Monday-to-Sunday tickets on it.

I might add a few more this year, as I'm hitting a few new places. On the other hand, while in London yesterday, I realized how nice and handy it is to just use my bank card for popping in and out of the Tube. And I've been wondering how we got to this system of incompatible cards.

In the list above, most of the cities are one per state or country, which might suggest cards work better within a country, but that's definitely not the case. I have been told that Nottingham has recently moved to a consolidated travelcard which is not compatible with Oyster either, and both of them are in England.

Suica is the exception. The IC system used in Japan is a stored-value system which can be used both for travel and for general payments, in stores and cafes and so on. This is not "limited" to Tokyo (though limited might be the wrong word there), but rather works in most of the cities I've visited — one exception being buses in Hiroshima, while it worked fine for trams and trains. It is essentially an upside-down version of what happens in London: as if, instead of using your payment card to travel, you used your travel card for in-store purchases.

The convenience of using a payment card, by the way, lies for me mostly in being able to use (one of) my bank accounts to pay without having to "earmark" money the way I did for Clipper, which is now going to be used only the next time I actually take public transport in SF — and I'm not sure when that will be!

At the same time, I can think of two big obstacles to implementing contactless payment in place of travelcards: contracts and incentives. On the first note, I'm sure that there is some weight that TfL (Transport for London) can pull that your average small town can't. On the other note, it's a matter for finance experts, which I can only guess at: there is value for the travel companies in receiving money before you travel — Clipper has had my money in their coffers since I topped it up, though I have not used it.

While customers' topped-up credit is essentially a liability for the companies, it also increases their liquidity. So there is little incentive for them, particularly the smaller ones. Indeed, moving to a payment system in which the companies get their money mostly from banks rather than through cash is likely to be a problem for them. And we're back to the first matter: contracts. I'm sure TfL can get better deals from banks and credit card companies than most.

There is also the matter of the tech behind all of this. TfL has definitely done a good job with keeping compatible systems — the Oyster I got in 2009, the first time I boarded a plane, still works. During the same seven years, Venice changed their system twice: once keeping the same name/brand but with different protocols on the card (making it compatible with more NFC systems), and once by replacing the previous brand — I assume they have kept some compatibility on the cards but since I no longer live there I have not investigated.

I'm definitely not one of those people who insist that opensource is the solution to everything, and that just by being opened, things become better for society. On the other hand, I do wonder if it would make sense for the opensource community to engage with public services like this to provide a solution that can be more easily mirrored by smaller towns, who would not otherwise be able to afford the system themselves.

On the other hand, this would require, most likely, compromises. The contracts with service providers would likely include a number of NDA-like provisions, and at the same time, the hardware would not be available off-the-shelf.

This post is not providing any useful information I'm afraid, it's just a bit of a bigger opinion I have about opensource nowadays, and particularly about how so many people limit their idea of "public interest" to "privacy" and cryptography.

March 29, 2016
Luca Barbato a.k.a. lu_zero (homepage, bugs)
New AVCodec API (March 29, 2016, 11:58 UTC)

Another week, another API landed in the tree, and since I spent some time drafting it, I guess I should describe how to use what is implemented now. This is part I.

What is here now

Between theory and practice there is a bit of discussion and obviously the (lack of) time to implement, so here is what is different from what I drafted originally:

  • Function Names: push got renamed to send and pull got renamed to receive.
  • No separate function to probe the process state; need_data and have_data are not here.
  • No codecs have been ported to use the new API, so no actual asynchronicity for now.
  • Subtitles aren’t supported yet.

New API

There are just 4 new functions replacing both audio-specific and video-specific ones:

// Decode
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

// Encode
int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);

The workflow is sort of simple:
– You setup the decoder or the encoder as usual
– You feed data using the avcodec_send_* functions until you get an AVERROR(EAGAIN), which signals that the internal input buffer is full.
– You get the data back using the matching avcodec_receive_* function until you get an AVERROR(EAGAIN), signalling that the internal output buffer is empty.
– Once you are done feeding data you have to pass a NULL to signal the end of stream.
– You can keep calling the avcodec_receive_* function until you get AVERROR_EOF.
– You free the contexts as usual.

Decoding examples

Setup

The setup uses the usual avcodec_open2.

    ...

    c = avcodec_alloc_context3(codec);

    ret = avcodec_open2(c, codec, &opts);
    if (ret < 0)
        ...

Simple decoding loop

People using the old API usually have some kind of simple loop like

while (get_packet(pkt)) {
    ret = avcodec_decode_video2(c, picture, &got_picture, pkt);
    if (ret < 0) {
        ...
    }
    if (got_picture) {
        ...
    }
}

The old functions can be replaced by calling something like the following.

// The flush packet is a non-NULL packet with size 0 and data NULL
int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
{
    int ret;

    *got_frame = 0;

    if (pkt) {
        ret = avcodec_send_packet(avctx, pkt);
        // In particular, we don't expect AVERROR(EAGAIN), because we read all
        // decoded frames with avcodec_receive_frame() until done.
        if (ret < 0)
            return ret == AVERROR_EOF ? 0 : ret;
    }

    ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    if (ret >= 0)
        *got_frame = 1;

    return 0;
}

Callback approach

Since the new API will output multiple frames in certain situations, it would be better to process them as they are produced.

// return 0 on success, negative on error
typedef int (*process_frame_cb)(void *ctx, AVFrame *frame);

int decode(AVCodecContext *avctx, AVPacket *pkt,
           process_frame_cb cb, void *priv)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    ret = avcodec_send_packet(avctx, pkt);
    // Again EAGAIN is not expected
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_frame(avctx, frame);
        if (!ret)
            ret = cb(priv, frame);
    }

out:
    av_frame_free(&frame);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

Separated threads

The new API makes it sort of easy to split the workload between two separate threads.

// Assume we have a context with a mutex, a condition variable and the AVCodecContext


// Feeding loop
{
    AVPacket *pkt = NULL;

    while ((ret = get_packet(ctx, pkt)) >= 0) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_send_packet(avctx, pkt);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the draining loop
            pthread_cond_signal(&ctx->cond);
            // Wait here
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);
    }

    pthread_mutex_lock(&ctx->lock);
    ret = avcodec_send_packet(avctx, NULL);

    pthread_cond_signal(&ctx->cond);

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}

// Draining loop
{
    AVFrame *frame = av_frame_alloc();

    while (!done) {
        pthread_mutex_lock(&ctx->lock);

        ret = avcodec_receive_frame(avctx, frame);
        if (!ret) {
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the feeding loop
            pthread_cond_signal(&ctx->cond);
            // Wait
            pthread_cond_wait(&ctx->cond, &ctx->lock);
        } else if (ret < 0)
            goto out;

        pthread_mutex_unlock(&ctx->lock);

        if (!ret) {
            do_something(frame);
        }
    }

out:
    pthread_mutex_unlock(&ctx->lock);
    return ret;
}

It isn’t as neat as having all this abstracted away, but is mostly workable.

Encoding Examples

Simple encoding loop

Some compatibility with the old API can be achieved using something along the lines of:

int encode(AVCodecContext *avctx, AVPacket *pkt, int *got_packet, AVFrame *frame)
{
    int ret;

    *got_packet = 0;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        return ret;

    ret = avcodec_receive_packet(avctx, pkt);
    if (!ret)
        *got_packet = 1;
    if (ret == AVERROR(EAGAIN))
        return 0;

    return ret;
}

Callback approach

Since for each input multiple outputs could be produced, it would be better to loop over the output as soon as possible.

// return 0 on success, negative on error
typedef int (*process_packet_cb)(void *ctx, AVPacket *pkt);

int encode(AVCodecContext *avctx, AVFrame *frame,
           process_packet_cb cb, void *priv)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_packet(avctx, pkt);
        if (!ret)
            ret = cb(priv, pkt);
    }

out:
    av_packet_free(&pkt);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

The I/O should happen in a different thread when possible so the callback should just enqueue the packets.

Coming Next

This post is long enough so the next one might involve converting a codec to the new API.

March 28, 2016
Anthony Basile a.k.a. blueness (homepage, bugs)

RBAC is a security feature of the hardened-sources kernels.  As its name suggests, it’s a role-based access control system which allows you to define policies restricting access to files, sockets and other system resources.  Even root is restricted, so attacks that escalate privilege are not going to get far even if they do obtain root.  In fact, you should be able to give out remote root access to anyone on a well-configured system running RBAC and still remain confident that you are not going to be owned!  I wouldn’t recommend it just in case, but it should be possible.

It is important to understand what RBAC will give you and what it will not.  RBAC has to be part of a more comprehensive security plan and is not a single security solution.  In particular, if one can compromise the kernel, then one can proceed to compromise the RBAC system itself and undermine whatever security it offers.  Or put another way, protecting root is pretty much a moot point if an attacker is able to get ring 0 privileges.  So, you need to start with an already hardened kernel, that is a kernel which is able to protect itself.  In practice, this means configuring most of the GRKERNSEC_* and PAX_* features of a hardened-sources kernel.  Of course, if you’re planning on running RBAC, you need to have that option on too.

Once you have a system up and running with a properly configured kernel, the next step is to set up the policy file which lives at /etc/grsec/policy.  This is where the fun begins because you need to ask yourself what kind of a system you’re going to be running and decide on the policies you’re going to implement.  Most of the existing literature is about setting up a minimum privilege system for a server which runs only a few simple processes, something like a LAMP stack.  I did this for years when I ran a moodle server for D’Youville College.  For a minimum privilege system, you want to deny-by-default and only allow certain processes to have access to certain resources as explicitly stated in the policy file.  RBAC is ideally suited for this.  Recently, however, I was asked to set up a system where the opposite was the case, so this article is going to explore the situation where you want to allow-by-default; however, for completeness let me briefly cover deny-by-default first.

The easiest way to proceed is to get all your services running as they should and then turn on learning mode for about a week, or at least until you have one cycle of, say, log rotations and other cron-based jobs.  Basically, your services should have attempted to access each resource at least once, so the event gets logged.  You then distill those logs into a policy file describing only what should be permitted and tweak as needed.  You proceed roughly as follows:

1. gradm -P  # Create a password to enable/disable the entire RBAC system
2. gradm -P admin  # Create a password to authenticate to the admin role
3. gradm -F -L /etc/grsec/learning.log # Turn on system wide learning
4. # Wait a week.  Don't do anything you don't want to learn.
5. gradm -F -L /etc/grsec/learning.log -O /etc/grsec/policy  # Generate the policy
6. gradm -E # Enable RBAC system wide
7. # Look for denials.
8. gradm -a admin  # Authenticate to admin to do extraordinary things, like tweak the policy file
9. gradm -R # reload the policy file
10. gradm -u # Drop those privileges to do ordinary things
11. gradm -D # Disable RBAC system wide if you have to

Easy, right?  This will get you pretty far, but you’ll probably discover that some things you want to work are still being denied because those particular events never occurred during the learning.  A typical example here is that you might have ssh’ed in from one IP, but now you’re ssh-ing in from a different IP and you’re getting denied.  To tweak your policy, you first have to escape the restrictions placed on root by transitioning to the admin role.  Then, using dmesg, you can see what was denied, for example:

[14898.986295] grsec: From 192.168.5.2: (root:U:/) denied access to hidden file / by /bin/ls[ls:4751] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:4327] uid/euid:0/0 gid/egid:0/0

This tells you that root, logged in via ssh from 192.168.5.2, tried to ls / but was denied.  As we’ll see below, this is a one-line fix, but if there is a cluster of denials to /bin/ls, you may want to turn on learning for just that one subject for root.  To do this you edit the policy file and look for subject /bin/ls under role root.  You then add an ‘l’ to the subject line to enable learning for just that subject.

role root uG
…
# Role: root
subject /bin/ls ol {  # Note the ‘l’

You restart RBAC using gradm -E -L /etc/grsec/partial-learning.log and obtain the new policy for just that subject by running gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy.  That single subject block can then be spliced into the full policy file to change the restrictions on /bin/ls when run by root.

It’s pretty obvious that RBAC is designed to deny by default.  If a subject (an executable) is not explicitly granted access to some object (some system resource) when it is running in some role (as some user), then access is denied.  But what if you want to create a policy which is mostly allow-by-default and then you just add a few denials here and there?  While RBAC is more suited for the opposite case, we can do something like this on a per account basis.

Let’s start with a fairly permissive policy file for root:

role admin sA
subject / rvka {
	/			rwcdmlxi
}

role default
subject / {
	/			h
	-CAP_ALL
	connect disabled
	bind    disabled
}

role root uG
role_transitions admin
role_allow_ip 0.0.0.0/0
subject /  {
	/			r
	/boot			h
#
	/bin			rx
	/sbin			rx
	/usr/bin		rx
	/usr/libexec		rx
	/usr/sbin		rx
	/usr/local/bin		rx
	/usr/local/sbin		rx
	/lib32			rx
	/lib64			rx
	/lib64/modules		h
	/usr/lib32		rx
	/usr/lib64		rx
	/usr/local/lib32	rx
	/usr/local/lib64	rx
#
	/dev			hx
	/dev/log		r
	/dev/urandom		r
	/dev/null		rw
	/dev/tty		rw
	/dev/ptmx		rw
	/dev/pts		rw
	/dev/initctl		rw
#
	/etc/grsec		h
#
	/home			rwcdl
	/root			rcdl
#
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
#
	/run/lock		rwcdl
	/sys			h
	/tmp			rwcdl
	/var			rwcdl
#
	+CAP_ALL
	-CAP_MKNOD
	-CAP_NET_ADMIN
	-CAP_NET_BIND_SERVICE
	-CAP_SETFCAP
	-CAP_SYS_ADMIN
	-CAP_SYS_BOOT
	-CAP_SYS_MODULE
	-CAP_SYS_RAWIO
	-CAP_SYS_TTY_CONFIG
	-CAP_SYSLOG
#
	bind 0.0.0.0/0:0-32767 stream dgram tcp udp igmp
	connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp igmp raw_sock raw_proto
	sock_allow_family all
}

The syntax is pretty intuitive. The only thing not illustrated here is that a role can, and usually does, have multiple subject blocks which follow it. Those subject blocks belong only to the role that they are under, and not another.

The notion of a role is critical to understanding RBAC. Roles are like UNIX users and groups but within the RBAC system. The first role above is the admin role. It is ‘special’ meaning that it doesn’t correspond to any UNIX user or group, but is only defined within the RBAC system. A user will operate under some role but may transition to another role if the policy allows it. Transitioning to the admin role is reserved only for root above; but in general, any user can transition to any special role provided it is explicitly specified in the policy. No matter what role the user is in, he only has the UNIX privileges for his account. Those are not elevated by transitioning, but the restrictions applied to his account might change. Thus transitioning to a special role can allow a user to relax some restrictions for some special reason. This transitioning is done via gradm -a somerole and can be password protected using gradm -P somerole.

The second role above is the default role. When a user logs in, RBAC determines the role he will be in by first trying to match the user name to a role name. Failing that, it will try to match the group name to a role name and failing that it will assign the user the default role.

The third role above is the root role and it will be the main focus of our attention below.

The flags following the role name specify the role’s behavior. The ‘s’ and ‘A’ in the admin role line say, respectively, that it is a special role (ie, one not to be matched by a user or group name) and that it has extra powers that a normal role doesn’t have (eg, it is not subject to ptrace restrictions). It’s good to have the ‘A’ flag in there, but it’s not essential for most uses of this role. It’s really its subject block which makes it useful for administration. Of course, you can change the name if you want to practice a little bit of security by obfuscation. As long as you leave the rest alone, it’ll still function the same way.

The root role has the ‘u’ and the ‘G’ flags. The ‘u’ flag says that this role is to match a user by the same name, obviously root in this case. Alternatively, you can have the ‘g’ flag instead, which says to match a group by the same name. The ‘G’ flag gives this role permission to authenticate to the kernel, ie, to use gradm. Policy information is automatically added that allows gradm to access /dev/grsec, so you don’t need to add those permissions yourself. Finally, the default role doesn’t and shouldn’t have any flags. If it’s not a ‘u’ or ‘g’ or ‘s’ role, then it’s a default role.

Before we jump into the subject blocks, you’ll notice a couple of lines after the root role. The first says ‘role_transitions admin’ and permits the root role to transition to the admin role. Any special roles you want this role to transition to can be listed on this line, space delimited. The second line says ‘role_allow_ip 0.0.0.0/0’. So when root logs in remotely, it will be assigned the root role provided the login is from an IP address matching 0.0.0.0/0. In this example, this means any IP is allowed. But if you had something like 192.168.3.0/24 then only root logins from the 192.168.3.0 network would get user root assigned role root. Otherwise RBAC would fall back on the default role. If you don’t have that line in there, get used to logging in on the console because you’ll cut yourself off!

Now we can look at the subject blocks. These define the access controls restricting processes running in the role to which those subjects belong. The name following the ‘subject’ keyword is either a path to a directory containing executables or to an executable itself. When a process is started from an executable in that directory, or from the named executable itself, then the access controls defined in that subject block are enforced. Since all roles must have the ‘/’ subject, all processes started in a given role will at least match this subject. You can think of this as the default if no other subject matches. However, additional subject blocks can be defined which further modify restrictions for particular processes. We’ll see this towards the end of the article.

Let’s start by looking at the ‘/’ subject for the default role since this is the most restrictive set of access controls possible. The block following the subject line lists the objects that the subject can act on and what kind of access is allowed. Here we have ‘/ h’ which says that every file in the file system starting from ‘/’ downwards is hidden from the subject. This includes read/write/execute/create/delete/hard link access to regular files, directories, devices, sockets, pipes, etc. Since pretty much everything is forbidden, no process running in the default role can look at or touch the file system in any way. Don’t forget that, since the only role that has a corresponding UNIX user or group is the root role, this means that every other account is simply locked out. However the file system isn’t the only thing that needs protecting since it is possible to run, say, a malicious proxy which simply bounces evil network traffic without ever touching the filesystem. To control network access, there are the ‘connect’ and ‘bind’ lines that define what remote addresses/ports the subject can connect to as a client, or what local addresses/ports it can listen on as a server. Here ‘disabled’ means no connections or bindings are allowed. Finally, we can control what Linux capabilities the subject can assume, and -CAP_ALL means they are all forbidden.

Next, let’s look at the ‘/’ subject for the admin role. This, in contrast to the default role, is about as permissive as you can get. First thing we notice is the subject line has some additional flags ‘rvka’. Here ‘r’ means that we relax ptrace restrictions for this subject, ‘a’ means we do not hide access to /dev/grsec, ‘k’ means we allow this subject to kill protected processes and ‘v’ means we allow this subject to view hidden processes. So ‘k’ and ‘v’ are interesting and have counterparts ‘p’ and ‘h’ respectively. If a subject is flagged as ‘p’ it means its processes are protected by RBAC and can only be killed by processes belonging to a subject flagged with ‘k’. Similarly processes belonging to a subject marked ‘h’ can only be viewed by processes belonging to a subject marked ‘v’. Nifty, eh? The only object line in this subject block is ‘/ rwcdmlxi’. This says that this subject can ‘r’ead, ‘w’rite, ‘c’reate, ‘d’elete, ‘m’ark as setuid/setgid, hard ‘l’ink to, e’x’ecute, and ‘i’nherit the ACLs of the subject which contains the object. In other words, this subject can do pretty much anything to the file system.

Finally, let’s look at the ‘/’ subject for the root role. It is fairly permissive, but not quite as permissive as the previous subject. It is also more complicated, and many of the object lines are there because gradm does a sanity check on policy files to help make sure you don’t open any security holes. Notice that here we have ‘+CAP_ALL’ followed by a series of ‘-CAP_*’. Each of these was included because otherwise gradm would complain. For example, if ‘CAP_SYS_ADMIN’ is not removed, an attacker can mount filesystems to bypass your policies.

So I won’t go through this entire subject block in detail, but let me highlight a few points. First consider these lines

	/			r
	/boot			h
	/etc/grsec		h
	/proc/slabinfo		h
	/proc/modules		h
	/proc/kallsyms		h
	/sys			h

The first line gives ‘r’ead access to the entire file system but this is too permissive and opens up security holes, so we negate that for particular files and directories by ‘h’iding them. With these access controls, if the root user in the root role does ls /sys you get

# ls /sys
ls: cannot access /sys: No such file or directory

but if the root user transitions to the admin role using gradm -a admin, then you get

# ls /sys/
block  bus  class  dev  devices  firmware  fs  kernel  module

Next consider these lines:

	/bin			rx
	/sbin			rx
	...
	/lib32			rx
	/lib64			rx
	/lib64/modules		h

Since the ‘x’ flag is inherited by all the files under those directories, this allows processes like your shell to execute, for example, /bin/ls or /lib64/ld-2.21.so.  The ‘r’ flag further allows processes to read the contents of those files, so one could do hexdump /bin/ls or hexdump /lib64/ld-2.21.so.  Dropping the ‘r’ flag on /bin would stop you from hexdumping the contents, but it would not prevent execution nor would it stop you from listing the contents of /bin.  If we wanted to make this subject a bit more secure, we could drop ‘r’ on /bin and not break our system.  This, however, is not the case with the library directories.  Dropping ‘r’ on them would break the system since library files need to have readable contents in order to be loaded, as well as be executable.

Now consider these lines:

        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw

The ‘h’ flag will hide /dev and its contents, but the ‘x’ flag will still allow processes to enter into that directory and access /dev/log for reading, /dev/null for reading and writing, etc. The ‘h’ is required to hide the directory and its contents because, as we saw above, ‘x’ is sufficient to allow processes to list the contents of the directory. As written, the above policy yields the following result in the root role

# ls /dev
ls: cannot access /dev: No such file or directory
# ls /dev/tty0
ls: cannot access /dev/tty0: No such file or directory
# ls /dev/log
/dev/log

In the admin role, all those files are visible.

Let’s end our study of this subject by looking at the ‘bind’, ‘connect’ and ‘sock_allow_family’ lines.  Note that the addresses/ports include a list of allowed transport protocols from /etc/protocols.  One gotcha here is to make sure you include port 0 for icmp!  The ‘sock_allow_family’ line allows all socket families, including unix, inet, inet6 and netlink.

Now that we understand this policy, we can proceed to add isolated restrictions to our mostly permissive root role.  Remember that the system is totally restricted for all UNIX users except root, so if you want to allow some ordinary user access, you can simply copy the entire role, including the subject blocks, and just rename ‘role root’ to ‘role myusername’.  You’ll probably want to remove the ‘role_transitions’ line since an ordinary user should not be able to transition to the admin role.  Now, suppose for whatever reason, you don’t want this user to be able to list any files or directories.  You can simply add a line to his ‘/’ subject block which reads ‘/bin/ls h’ and ls becomes completely unavailable for him!  This particular example might not be that useful in practice, but you can use this technique, for example, if you want to restrict access to your compiler suite.  Just ‘h’ all the directories and files that make up your suite and it becomes unavailable.
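As a sketch, the resulting role for a hypothetical ‘myusername’ account might start like this (everything not shown is copied verbatim from the root role above):

role myusername u
role_allow_ip 0.0.0.0/0
subject /  {
	/			r
	/bin/ls			h
	# ... the remaining object lines, capabilities, bind/connect and
	# sock_allow_family lines copied unchanged from the root role ...
}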

A more complicated and useful example might be to restrict a user’s listing of a directory to just his home.  To do this, we’ll have to add a new subject block for /bin/ls.  If you’re not sure where to start, you can always begin with an extremely restrictive subject block, tack it at the end of the subjects for the role you want to modify, and then progressively relax it until it works.  Alternatively, you can do partial learning on this subject as described above.  Let’s proceed manually and add the following:

subject /bin/ls o {
        /  h
        -CAP_ALL
        connect disabled
        bind    disabled
}

Note that this is identical to the extremely restrictive ‘/’ subject for the default role except that the subject is ‘/bin/ls’ not ‘/’. There is also a subject flag ‘o’ which tells RBAC to override the previous policy for /bin/ls. We have to override it because that policy was too permissive. Now, in one terminal execute gradm -R in the admin role, while in another terminal obtain a denial to ls /home/myusername. Checking our dmesgs we see that:

[33878.550658] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/ld-2.21.so by /bin/ls[bash:7861] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7164] uid/euid:0/0 gid/egid:0/0

Well that makes sense.  We’ve started afresh denying everything, but /bin/ls requires access to the dynamic linker/loader, so we’ll restore read access to it by adding a line ‘/lib64/ld-2.21.so r’.  Repeating our test, we get a seg fault!  Obviously, we don’t just need read access to ld.so, we also need execute privileges.  We add ‘x’ and try again.  This time the denial is

[34229.335873] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /etc/ld.so.cache by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
[34229.335923] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/libacl.so.1.1.0 by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

Of course! We need ‘rx’ for all the libraries that /bin/ls links against, as well as the linker cache file.  So we add lines for libc, libattr, libacl and ld.so.cache.  Our final denial is

[34481.933845] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /home/myusername by /bin/ls[ls:7982] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

All we need now is ‘/home/myusername r’ and we’re done! Our final subject block looks like this:

subject /bin/ls o {
        /                         h
        /home/myusername          r
        /etc/ld.so.cache          r
        /lib64/ld-2.21.so         rx
        /lib64/libc-2.21.so       rx
        /lib64/libacl.so.1.1.0    rx
        /lib64/libattr.so.1.1.0   rx
        -CAP_ALL
        connect disabled
        bind    disabled
}

Proceeding in this fashion, we can add isolated restrictions to our mostly permissive policy.

References:

The official documentation is The_RBAC_System.  A good reference for the role, subject and object flags can be found in these  Tables.

March 27, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Template was specified incorrectly (March 27, 2016, 11:32 UTC)

After reorganizing my salt configuration, I received the following error:

[ERROR   ] Template was specified incorrectly: False

Enabling some debugging on the command gave me a slight pointer why this occurred:

[DEBUG   ] Could not find file from saltenv 'testing', u'salt://top.sls'
[DEBUG   ] No contents loaded for env: testing
[DEBUG   ] compile template: False
[ERROR   ] Template was specified incorrectly: False

I was using a single top file as recommended by Salt, but apparently it was still looking for top files in the other environments.

Yet, if I split the top files across the environments, I got the following warning:

[WARNING ] Top file merge strategy set to 'merge' and multiple top files found. Top file merging order is undefined; for better results use 'same' option

So what's all this about?

When using a single top file is preferred

If you want to stick with a single top file, then the first error is (or at least, in my case) caused by my environments not having a fall-back definition.

My /etc/salt/master configuration file had the following file_roots setting:

file_roots:
  base:
    - /srv/salt/base
  testing:
    - /srv/salt/testing

The problem is that Salt expects a top file in each environment. What I had to do was to add the base directory as a fallback for the testing environment as well, like so:

file_roots:
  base:
    - /srv/salt/base
  testing:
    - /srv/salt/testing
    - /srv/salt/base

With this set, the error disappeared and both salt and myself were happy again.

When multiple top files are preferred

If you really want to use multiple top files (which is also a use case in my configuration), then first we need to make sure that the top files of all environments correctly isolate the minion matches. If two environments would match the same minion, then this approach becomes more troublesome.

On the one hand, we can just let saltstack merge the top files (default behavior) but the order of the merging is undefined (and no, you can't set it using env_order) which might result in salt states being executed in an unexpected order. If the definitions are done to such an extent that this is not a problem, then you can just ignore the warning. See also bug 29104 about the warning itself.

But better would be to have the top files of the environment(s) isolated so that each environment top file completely manages the entire environment. When that is the case, then we tell salt that only the top file of the affected environment should be used. This is done using the following setting in /etc/salt/master:

top_file_merging_strategy: same

If this is used, then the env_order setting is used to define in which order the environments are processed.
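Putting the two settings together, and assuming the base and testing environments used above, the relevant part of /etc/salt/master would look roughly like this:

top_file_merging_strategy: same

env_order:
  - base
  - testing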

Oh and if you're using salt-ssh, then be sure to set the environment of the minion in the roster file, as there is no running minion on the target system that informs salt about the environment to use otherwise:

# In /etc/salt/roster
testserver:
  host: testserver.example.com
  minion_opts:
    environment: testing

March 26, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using salt-ssh with agent forwarding (March 26, 2016, 18:57 UTC)

Part of a system's security is to reduce the attack surface. Following this principle, I want to see if I can switch from using regular salt minions for a saltstack managed system set towards salt-ssh. This would allow doing some system management over SSH instead of ZeroMQ.

I'm not confident yet that this is a solid approach to take (as performance is also important, and it is greatly reduced with salt-ssh), and the exposure of the salt minions over ZeroMQ is not all that insecure to begin with (especially not when a local firewall ensures that only connections from the salt master are allowed). But playing doesn't hurt.

Using SSH agent forwarding

Anyway, I quickly got stuck with accessing minions over the SSH interface as it seemed that salt requires its own SSH keys (I don't enable password-only authentication, most of the systems use the AuthenticationMethods approach to chain both key and passwords). But first things first, the current target uses regular ssh key authentication (no chained approach, that's for later). But I don't want to assign such a powerful key to my salt master (especially not if it would later also document the passwords). I would like to use SSH agent forwarding.

Luckily, salt does support that, it just forgot to document it. Basically, what you need to do is update the roster file with the priv: parameter set to agent-forwarding:

myminion:
  host: myminion.example.com
  priv: agent-forwarding

It will use the known_hosts file of the currently logged on user (the one executing the salt-ssh command) so make sure that the system's key is already known.
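If the target's host key isn't known yet, something along these lines (run as the user that will invoke salt-ssh, and with the fingerprint verified afterwards) takes care of it:

~$ ssh-keyscan myminion.example.com >> ~/.ssh/known_hosts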

~$ salt-ssh myminion test.ping
myminion:
    True

March 23, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Of OpenStack and uwsgi (March 23, 2016, 05:00 UTC)

Why use uwsgi

Not all OpenStack services support uwsgi. However, in the Liberty timeframe it is supported as the primary way to run the Keystone api services and the recommended way of running Horizon (if you use it). Going forward, other OpenStack services will be moving to support it as well; for instance, I know that Neutron is working on it, or may already have it completed, for the Mitaka release.

Basic Setup

  • Install >=www-servers/uwsgi-2.0.11.2-r1 with the python use flag as it has an updated init script.
  • Make sure you note the group you want for the webserver to access the uwsgi sockets; I chose nginx.

Configs and permissions

When defaults are available I will only note what needs to change.

uwsgi configs

/etc/conf.d/uwsgi

UWSGI_EMPEROR_PATH="/etc/uwsgi.d/"
UWSGI_EMPEROR_GROUP=nginx
UWSGI_EXTRA_OPTIONS='--need-plugins python27'

/etc/uwsgi.d/keystone-admin.ini

[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_admin.socket
pidfile = /run/uwsgi/keystone_admin.pid
logger = file:/var/log/keystone/uwsgi-admin.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/admin

/etc/uwsgi.d/keystone-main.ini

[uwsgi]
master = true
plugins = python27
processes = 4
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_main.socket
pidfile = /run/uwsgi/keystone_main.pid
logger = file:/var/log/keystone/uwsgi-main.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/main

I have horizon in use via a virtual environment, so I enabled vacuum in this config.

/etc/uwsgi.d/horizon.ini

[uwsgi]
master = true  
plugins = python27
processes = 10  
threads = 2  
chmod-socket = 660
vacuum = true

socket = /run/uwsgi/horizon.sock  
pidfile = /run/uwsgi/horizon.pid  
log-syslog = file:/var/log/horizon/horizon.log

name = horizon
uid = horizon
gid = nginx

chdir = /var/www/horizon/
wsgi-file = /var/www/horizon/horizon.wsgi

wsgi scripts

The directories are owned by the service they contain, keystone:keystone or horizon:horizon.

/var/www/keystone/admin perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

/var/www/keystone/main perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

Note that this has paths to where I have my horizon virtual environment.

/var/www/horizon/horizon.wsgi perms are 0750 horizon:horizon

#!/usr/bin/env python
import os
import sys


activate_this = '/home/horizon/horizon/.venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.insert(0, '/home/horizon/horizon')
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'

import django.core.wsgi
application = django.core.wsgi.get_wsgi_application()
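To tie this together on the webserver side, a minimal nginx server block pointing at the keystone main socket could look roughly like the sketch below; the listen port and server name are assumptions, so adjust them to your deployment (the admin API socket would get an equivalent block):

server {
    listen 5000;
    server_name keystone.example.com;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/keystone_main.socket;
    }
}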

March 22, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's how a signed message looks in a typical scenario:

A random OpenPGP-signed e-mail

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could just as well send a faked but unsigned e-mail message. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially the TOFU, Trust On First Use, trust model. It is pretty fresh code not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside of scope for an e-mail client like Trojitá. We might add some buttons for key retrieval and launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

Mail with a trusted signature

We are also making a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of an e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the guy who sent the e-mail and the one who made the signature the same person?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:

Something fishy is going on!

In some environments, S/MIME signatures using traditional X.509 certificates are more common than the OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

All the gory details about an X.509 trust chain

Encrypted messages are of course supported, too:

An encrypted message

We had to start somewhere, so right now, Trojitá supports only read-only operations such as signature verification and decrypting of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in near future (and patches are welcome for sure).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API interface was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to the QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operation. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least communication to the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end, it was a bit more straightforward to look into the C++11's toolset, and reuse the std::async infrastructure for launching background tasks along with a std::future for synchronization. You can take a look at the resulting code in the src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!

March 21, 2016
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Bitstream Filtering (March 21, 2016, 20:45 UTC)

Last weekend, after a few months of work, the new bitstream filter API eventually landed.

Bitstream filters

In Libav it is possible to manipulate raw and encoded data in many ways, the most common being:

  • Demuxing: extracting single data packets and their timing information
  • Decoding: converting the compressed data packets in raw video or audio frames
  • Encoding: converting the raw multimedia information in a compressed form
  • Muxing: storing the compressed information along with timing information and additional metadata.

Bitstream filtering is somewhat less well known, even though bitstream filters are widely used under the hood to demux and mux many common formats.

It could be considered an optional final demuxing or muxing step, since it works on encoded data and its main purpose is to reformat the data so that it can be accepted by decoders that consume only one specific serialization among the many supported (e.g. the HEVC QSV decoder), or so that it can be correctly muxed into a container format that stores only a specific kind.

In Libav this kind of reformatting normally happens automatically, with the annoying exception of MPEGTS muxing.

New API

The new API is modeled on the pull/push paradigm I described for AVCodec before; it works on AVPackets and has the following concrete implementation:

// Query
const AVBitStreamFilter *av_bsf_next(void **opaque);
const AVBitStreamFilter *av_bsf_get_by_name(const char *name);

// Setup
int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);
int av_bsf_init(AVBSFContext *ctx);

// Usage
int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);
int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);

// Cleanup
void av_bsf_free(AVBSFContext **ctx);

In order to use a bsf you need to:

  • Look up its definition AVBitStreamFilter using a query function.
  • Set up a specific context AVBSFContext, by allocating, configuring and then initializing it.
  • Feed the input using av_bsf_send_packet function and get the processed output once it is ready using av_bsf_receive_packet.
  • Once you are done av_bsf_free cleans up the memory used for the context and the internal buffers.

Query

You can enumerate the available filters

void *state = NULL;

const AVBitStreamFilter *bsf;

while ((bsf = av_bsf_next(&state))) {
    av_log(NULL, AV_LOG_INFO, "%s\n", bsf->name);
}

or directly pick the one you need by name:

const AVBitStreamFilter *bsf = av_bsf_get_by_name("hevc_mp4toannexb");

Setup

A bsf may use some codec parameters and time_base and provide updated ones.

AVBSFContext *ctx;

ret = av_bsf_alloc(bsf, &ctx);
if (ret < 0)
    return ret;

ret = avcodec_parameters_copy(ctx->par_in, in->codecpar);
if (ret < 0)
    goto fail;

ctx->time_base_in = in->time_base;

ret = av_bsf_init(ctx);
if (ret < 0)
    goto fail;

ret = avcodec_parameters_copy(out->codecpar, ctx->par_out);
if (ret < 0)
    goto fail;

out->time_base = ctx->time_base_out;

Usage

Multiple AVPackets may be consumed before an AVPacket is emitted or multiple AVPackets may be produced out of a single input one.

AVPacket *pkt;

while (got_new_packet(&pkt)) {
    ret = av_bsf_send_packet(ctx, pkt);
    if (ret < 0)
        goto fail;

    while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
        yield_packet(pkt);
    }

    if (ret == AVERROR(EAGAIN))
        continue;
    if (ret == AVERROR_EOF)
        goto end;
    if (ret < 0)
        goto fail;
}

// Flush
ret = av_bsf_send_packet(ctx, NULL);
if (ret < 0)
    goto fail;

while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
    yield_packet(pkt);
}

if (ret != AVERROR_EOF)
    goto fail;

In order to signal the end of stream a NULL pkt should be fed to send_packet.

Cleanup

The cleanup function matches the av_freep signature so it takes the address of the AVBSFContext pointer.

    av_bsf_free(&ctx);

All the memory is freed and the ctx pointer is set to NULL.

Coming Soon

Hopefully next I’ll document the new HWAccel layer that already landed and some other API that I discussed with Kostya before.
Sadly my blog-time (and spare time in general) shrunk a lot in the past months so he rightfully blamed me a lot.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have already reviewed the Abbott FreeStyle Libre continuous glucose monitor, and I have hinted that I already started reverse engineering the protocol it uses to communicate with the (Windows) software. I should also point out that for once the software does provide significant value, as they seem to have spent more effort in the data analysis than any other part of it.

Please note that this is just a first part for this device. Unlike the previous blog posts, I have not managed yet to get even partial information downloaded with my script as I write and post this. Indeed, if you, as you read this, have any suggestion of things I have not tried yet, please do let me know.

Since at this point it's getting common, I've started up the sniffer, and sniffed starting from the first transaction. As it is to be expected, the amount of data in these transactions is significantly higher than that of the other glucometers. Even if you were taking seven blood samples a day for months with one of the other glucometers, it's going to take a much longer time to get the same amount of readings as this sensor, which takes 96 readings a day by itself, plus the spot-checks and added notes and information to comment them.

The device itself presents itself as a standard HID device, which is a welcome change from the craziness of SCSI-based hidden message protocols. The messages within are of course not defined in any standard, so inspecting them becomes interesting.

It took me a while to figure out what the data that the software was already decoding for me meant. At first I thought I would have to use magic constants and libusb to speak raw USB to the device — indeed, a quick glance around Xavier's work showed me that there were plenty of similarities, and he's including quite a few magic constants in that code. Luckily for me, after managing to query the device with python-libusb1, which was quite awkward as I also had to fix it to work, I realized that I was essentially reimplementing hidraw access.

After rewriting the code to use /dev/hidraw1 (which makes it significantly simpler), I also managed to understand that the device uses exactly the same initialization procedure as the FreeStyle InsuLinx that Xavier already implemented, and similar but not identical command handling (some of the commands match, and some even match the Optium, at least in format.)

Indeed the device seems to respond to two general classes of commands, text commands and binary commands; it is the first device I have reverse engineered with such a hybrid protocol. Text commands also have the same checksumming as both the Optium and Neo protocols.

The messages are always transferred in 64-byte packets, even though the second byte of the message declares the actual significant length, which can even be zero. Neither the software nor the device zero out their buffers before writing the new command/response packets, so there is lots of noise in those packets.
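As a rough sketch (not the actual tool), reading one report from the hidraw node and keeping only the significant bytes looks something like this:

import os

def read_frame(fd):
    # 64-byte HID report: byte 0 is the message type, byte 1 the number of
    # significant bytes that follow; anything past that is left-over noise.
    buf = os.read(fd, 64)
    return buf[0], buf[2:2 + buf[1]]

# e.g. fd = os.open("/dev/hidraw1", os.O_RDWR)
# msg_type, payload = read_frame(fd)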

I've decided that the custom message framing and its usage of HID is significant enough to warrant being documented by itself so I did that for now, although I have not managed to complete the reverse engineering of the protocol.

The remainder of the protocol kept baffling me. Some of the commands appear to include a checksum, and are ignored if they are not sent correctly. Others actually seem to append to an error buffer that you can somehow access (but probably more by mistake than design) and in at least one case I managed to "crash" the device, which asked me to turn it off and on again. I have thus decided to stop trying to send random messages to it for a while.

I have not been pouring as much time into this as I was considering before, what with coming down with a bad flu, being on call, and having visitors in town, so I have only been looking at traces from time to time, particularly recording all of them as I downloaded more data out of it. What still confuses me is that the commands sent from the software are not constant across different calls, but I couldn't really make much heads or tails of it.

Then yesterday I caught a break — I really wanted to figure out at least if it was encoding or compressing the data, so I started looking for at least a sequence of numbers, by transcribing the device's logbook into hexadecimal and looking in the traces for them.

This is not as easy as it might sound, because I have a British device — in UK, Ireland and Australia the measure of blood sugar is given in mmol/l rather than the much more common mg/dl. There is a stable conversion between the two units (you multiply the former by 18 to get the latter), but this conversion usually happens on display. All the devices I have used up to now have been storing and sending over the wire values in mg/dl and only converted when the data is shown, usually by providing some value within the protocol to specify that the device is set to use a given unit measure. Because of this conversion issue, and the fact that I only had access to the values in mmol/l, I usually had two different options for each of the readings, as I wasn't sure how the rounding happened.

The break happened when I was going through the software's interface, trying to get the latest report data to at least match the reading timing difference, so that I could look for what might appear like a timestamp in the transcript. Instead, I found the "Export" function. The exported file is a comma-separated values file, which includes all readings, including those by the sensor, rather than just the spot-checks I could see from the device interface and in the export report. Not only that, but it includes a "reading ID", which was interesting because it started from a value a bit over 32000, and is not always sequential. This was lucky.

I imported the CSV to Google Sheets, then added columns next to the ID and glucose readings. The latter were multiplied by 18 to get the value in mg/dl (yes, the export feature still uses mmol/l; I think it might be some certification requirement), and then I converted the whole lot to hexadecimal (hint: Google Sheets and LibreOffice have a DEC2HEX function that does that for you.) Now I had something interesting to search for: the IDs.
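The spreadsheet step is also easy to script; a small illustrative snippet (the CSV column names here are guesses, so adjust them to the actual export) does the same multiplication and hex conversion:

import csv

with open("libre-export.csv", newline="") as f:
    for row in csv.DictReader(f):
        reading_id = int(row["ID"])                     # hypothetical column name
        mmol = float(row["Historic Glucose (mmol/L)"])  # hypothetical column name
        mgdl = round(mmol * 18)                         # mmol/l -> mg/dl
        print("%s -> id %04X glucose %04X" % (row["ID"], reading_id, mgdl))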

Now, I have to point out that the output I have from USBlyzer is a CSV file that includes the hexdump of the USB packets that are being exchanged. I already started writing a set of utilities (too rough to be published though) to convert those into a set of binary files (easier to bgrep or binwalk them) or hexdump-like transcripts (easier to recognize strings.) I wrote both a general "full USB transcript" script as well as a "Verio-specific USB transcript" while I was working on my OneTouch meter, so I wrote one for the Abbott protocol, too.

Because of the way that works, of course, it is not completely obvious from the text transcript whether a value longer than a single byte is present, as it might fall across a message boundary. One would think they wouldn't, since that means there are odd-sized records, but indeed that is the case for this device at least. Indeed it took me a few tries of IDs found in the CSV file to find one in the USB transcript.

And even after finding one the question was to figure out the record format. What I have done in the past when doing binary format reverse engineering was to print on a piece of paper a dump of the binary I'm looking at, and start doodling on it trying to mark similar parts of the message. I don't have a printer in Dublin, so I decided to do a paperless version of the same, by taking a screenshot of a fragment of transcript, and loading it into a drawing app on my tablet. It's not quite as easy, but it does make sharing results easier, and thanks to layers it's even easier to try and fail.

I made a mistake with the screenshot by not keeping the command this was a reply to in the picture — this will become more relevant later. Because of the size limit in the HID-based framing protocol Abbott uses, many commands reply with more than one message – although I have not understood yet how it signals a continuation – so in this case the three messages (separated by a white line) are in response to a single command (which by the way is neither the first or the last in a long series.)

The first thing I wanted to identify in the response was all the reading IDs; the one I searched for is marked in black in the screenshot, the others are marked in the same green tone. As you can see they are not (all) sequential; the values are written down as little-endian by the way. The next step was to figure out the reading values, which are marked in pink in the image. While the image itself has no value higher than 255, and thus none that needs more than one byte to represent, it not only "looked fair" to assume little endian, it was also easy to confirm: as noted in my review, I did have a flu while wearing the sensor, so by filtering for readings over 14 mmol/L I was able to find an example of a 16-bit reading.

The next thing I noted was the "constant" 0C 80 which might include some flags for the reading, I have not decoded it yet, but it's an easy way to find most of the other IDs anyway. Following from that, I needed to find an important value, as it could allow decoding many other record types just by being present: the timestamp of the reading. The good thing with timestamps is that they tend to stay similar for a relatively long time: the two highest bytes are the same for most of a day, and the highest of those is usually the same for a long while. Unfortunately looking for the hex representation of the Unix timestamp at the time yielded nothing, but that was not so surprising, given how I found usage of a "newer" epoch in the Verio device I looked at earlier.

Now, since I have the exported data I know not only the reading ID but also the timestamp it reports it at, which does not include seconds. I also know that since the readings are (usually) taken at 15 minutes intervals, if they are using seconds since a given epoch the numbers should be incrementing by 900 between readings. Knowing this and doing some mental pattern matching it became easy to see where the timestamps have been hiding, they are marked in blue in the image above. I'll get back to the epoch.

At this point, I still have not figured out where the record starts and ends — from the image it might appear that it starts with the record ID, but remember I took this piece of transcript mid-stream. What I can tell is that the length of the record is not only not a multiple of eight (the bytes in hexdump are grouped by eight) but it is odd, which, by itself, is fairly odd (pun intended.) This can be told by noticing how the colouring crosses the mid-row spacing, for 0c 80, for reading values and timestamps alike.

Even more interesting, not only can the records cross the message boundaries (see record 0x8fe0, for which the 0x004b value is in the next message over), but so can the fields. Indeed you can see on the third message the timestamp ends abruptly at the end of the message. This wouldn't be much of an issue if it wasn't that it provides us with one more piece of information to decode the stream.

As I said earlier, timestamps change progressively, and in particular reading records shouldn't usually be more than 900 seconds apart, which means only the lower two bytes change that often. Since the device uses little-endian to encode the numbers, the higher bytes are at the end of the encoded sequence, which means 4B B5 DE needs to terminate with 05, just like CC B8 DE 05 before it. But the next time we encounter 05 is in position nine of the following message, what gives?

The first two bytes of the message, if you checked the protocol description linked earlier, describe the message type (0B) and the number of significant bytes following (out of the usb packet), in this case 3E means the whole rest of the packet is significant. Following that there are six bytes (highlighted turquoise in the image), and here is where things get to be a bit more confusing.

You can actually see how discarding those six bytes from each message now gives us a stream of records that are at least fixed length (except the last one that is truncated, which means the commands are requesting continuous sequences, rather than blocks of records.) Those six bytes now become interesting, together with the inbound command.

The command that was sent just before receiving this response was 0D 04 A5 13 00 00. Once again the first two bytes are only partially relevant (message type 0D, followed by four significant bytes.) But A5 13 is interesting, since the first message of the reply starts with 13 A6, and the next three message increment the second byte each. Indeed, the software follows these with 0D 04 A9 13 00 00, which matches the 13 A9 at the start of the last response message.

What the other four bytes mean is still quite the mystery. My assumption right now is that they are some form of checksum. The reason is to be found in a different set of messages:

>>>> 00000000: 0D 04 5F 13 00 00                                 .._...

<<<< 00000000: 0B 3E 10 5F 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>._4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 60 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.`4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 61 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.a4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 62 34 EC 5A 6D  00 00 00 00 00 00 00 00  .>.b4.Zm........
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000030: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................

<<<< 00000000: 0B 3E 10 63 E8 B6 84 09  00 00 00 00 00 00 00 00  .>.c............
<<<< 00000010: 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
<<<< 00000020: 00 00 00 00 9A 39 65 70  99 51 09 30 4D 30 30 30  .....9ep.Q.0M000
<<<< 00000030: 30 37 52 4B 35 34 00 00  01 00 02 A0 9F DE 05 FC  07RK54..........

In this set of replies, there are two significant differences compared to the ones with records earlier. The first is that while the command lists 5F 13 the replies start with 10 5F, so that not only 13 becomes 10, but 5F is not incremented until the next message, making it unlikely for the two bytes to form a single 16-bit word. The second is that there are at least four messages with identical payload (fifty-six bytes of value zero). And despite the fourth byte of the message changing progressively, the following four bytes are staying the same. This makes me think it's a checksum we're talking about, although I can't for the life of me figure out which at first sight. It's not CRC32, CRC32c nor Adler32.
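For what it's worth, ruling out the simple candidates takes only a couple of lines; something like the following (reading 34 EC 5A 6D as a little-endian 32-bit word is an assumption on my part) shows that neither CRC32 nor Adler32 over the zeroed payload matches, while CRC32c needs a third-party module:

import zlib

payload = bytes(56)        # the all-zero payload repeated in the replies above
candidate = 0x6D5AEC34     # 34 EC 5A 6D read as a little-endian 32-bit word

print(hex(zlib.crc32(payload)), hex(zlib.adler32(payload)), hex(candidate))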

By the way, the data in the last message relates to the list of sensors the devices has seen — 9ep.Q.0M00007RK54 is the serial number, and A0 9F DE 05 is the timestamp of it initializing.

Going back to the epoch, which is essentially the last thing I can talk about for now. The numbers above are clearly in a different range than the UNIX timestamp, which would start with 56 rather than 05. So I used the same method I used for the Verio: I picked a fixed, known point in time, got the timestamp from the device and compared it with its UNIX timestamp. The answer was 1455392700 — which is 2012-12-31T00:17:00+00:00. It would make perfect sense, if it wasn't 23 hours and 43 minutes away from a new year…

I guess that is all for now, I'm still trying to figure out how the data is passed around. I'm afraid that what I'm seeing from the software looks like it's sending whole "preferences" structures that change things at once, which makes it significantly more complicated to understand. It's also not so easy to tell how the device and software decide the measure unit as I don't have access to logs of a mg/dl device.

March 20, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
CGM review: Abbott FreeStyle Libre (March 20, 2016, 16:26 UTC)

While working on reverse engineering glucometers I decided to give a CGM solution a try. As far as I know the only solution available in Ireland is Dexcom. A friend of mine already has this, and I've seen it, but it felt a bit too bulky for my taste.

Instead, I found out on Twitter about a new solution from Abbott – the same company I wrote plenty about before while reverse engineering devices – called FreeStyle Libre. When I first got to their website, though, I found out that the description videos themselves were "not available in my country". When I went back to check on it, the whole website was not available at all, and instead redirected me to a general website telling me the device is not available in my country.

I won't spend time here to describe how to work around the geolocking, I'm sure you can figure it out or find the instructions on other websites. Once you work around accessing the website, ordering is also limited to UK addresses for both billing and shipping — these are also fairly easy to work around, particularly when you live in Éire. I can't blame Abbott for not selling the device in this country (they are not allowed by law), but it would be nice if they didn't hide the whole website!

Anyway, I have in some ways (which I won't specify) worked around the website geolocking and ordered one of the starter kits back in February. The kit comes with two sensors (each valid for 14 days) and a reader device which doubles as a normal glucometer.

The sensors come with an applicator that primes them and attaches them to the arm. The applicator is not too difficult to use even with your weaker hand, which is a nice feature given that you should be alternating the arm you attach it to. Once you put the sensor on you do feel quite a bit of discomfort but you "get used to it" relatively quickly. I would suggest avoiding the outer side of the arm though, particularly if you're clumsy like me and tend to run into walls fairly often — I ended up discarding my second sensor after only a week because I just took it out by virtue of falling.

One of the concerns that I've been warned about by a friend, on CGM sensors, is that while the sensor has no problem reading for the specified amount of time, the adhesive does not last that long. This referred to another make and model (the Dexcom G4) and does not match my experience with the Libre. It might be because the Libre has a wider adhesive surface area, or because it's smaller and lighter, but I haven't had much problem with it trying to come away before the 14 days, even with showers and sweat. I would still suggest keeping at hand a roll of bandage tape though, just in case.

The reader device, as I said earlier, doubles as a normal glucometer, as it accepts the usual FreeStyle testing strips, both for blood and for ketone reading, although it does not come with sample strips. I did manage to try blood readings by using one of the sample strips I had from the FreeStyle Optium but I guess I should procure a few more just for the sake of it.

The design of the reading device is inspired by the FreeStyle InsuLinx, with a standard micro-USB port for both data access and charging – I was afraid the time would come that they would put non-replaceable batteries on glucometers! – and a strip-port to be used only for testing (I tried plugging in the serial port cable, but the reader errors out.) It comes with a colourful capacitive touch-screen, from which you can change most (but not all) settings. A couple of things, such as the patient name, can only be changed from the software (available for Windows and OSX.)

The sensor takes a measurement every 15 minutes to draw the historical graph, which is stored for up to eight hours. Plus it takes a separate, instantaneous reading when you scan it. I really wish they put a little more memory in it to keep, say, 12 hours on the device, though. Eight hours is okay during the day if you're home, but it does mean you shouldn't forget the device at home when you go to the office (unless you work part-time), and that you might lose some of the data from just after going to sleep if you manage to sleep more than eight hours at a time — lucky you, by the way! I can't seem to sleep more than six hours.

The scan is at least partially performed over NFC, as my phone can "see" the sensor as a tag, although it doesn't know what to do with it, of course. I'm not sure if the whole data dumping is done over NFC, but it would make it theoretically possible to get rid of the reader in favour of just using a smartphone then… but that's a topic for a different time.

The obvious problem with CGM solutions is their accuracy. Since they don't actually measure blood samples (they do use a needle, but it's a very small one) but rather interstitial fluid, it is often an open question on whether their readings can be trusted, and the suggestion is to keep measuring normal blood sugar once or twice a day. Which is part of the reason why the reader also doubles as a normal glucometer.

Your mileage here may vary widely, among other things because it varies for me as well! Indeed, I've had days in which the Libre sensor and the Accu-Chek Mobile matched perfectly, while the last couple of days (as I'm writing this) the Libre read between 1 and 2 mmol/l (yes, this is the unit used in the UK, Ireland and Australia) lower than the Accu-Chek blood sample reading. In the opinion of my doctor, hearing from his colleagues across the water (remember, this device is not available in my country), it is quite accurate and trustworthy. I'll run with his opinion — particularly because while trying to cross-check different meters I have here, they all seem to have a quite wider error range than you'd expect, even when working on a blood sample from the same finger (from different fingers it gets complicated even for the same reader.)

Side-by-side picture of Accu-Chek Mobile and FreeStyle Libre

I'm not thrilled by the idea of using rechargeable batteries for a glucometer. If I need to take a measurement and my Accu-Chek Mobile doesn't turn on, it takes me just a moment to pick up another pair of AAA from my supply and put them in — not so on a USB-charged device. But on the other hand, it does make for a relatively small size, given the amount of extra components the device needs, as you can see from the picture. The battery also lasts more than a couple of weeks without charging, and it does charge with the same microUSB standard as most of my other devices (excluding the iPod Touch and the Nexus 5X), so it's not too cumbersome while traveling.

A note on the picture: while the Accu-Chek Mobile has a much smaller and monochromatic non-touch screen, lots of its bulk is taken by the cassette with the tests (as it does not use strips at all), and it includes the lancing devices on its side, making it still quite reasonably sized. See also my review of it.

While the sensors store up to 8 hours of readings, the reader then stores up to three months of that data, including additional notes you can add to it like insulin dosage (similar to InsuLinx), meals and so on. The way it shows you that data is interesting too: any spot-check (when you scan the sensor yourself) is stored in a logbook, together with the blood sample tests — the logbooks also include a quick evaluation on whether the blood sugar is rising, falling (and greatly so) or staying constant. The automatic sensor readings are kept visible only as a "daily graph" (for midnight to midnight), or through "daily patterns" that graph (for 7, 14, 30 and 90 days) the median glucose within a band of high and low percentiles (the device does not tell you which ones they are, more on that later.)

I find the ability to see this information, particularly after recording notes on the meals, for instance, very useful. It is making me change my approach to many things; in particular I have stopped eating bagels in the morning (but I still eat them in the evenings) since I get hypers if I do — according to my doctor it's not unheard of for insulin resistance to be stronger as you wake up.

I also discovered that other health issues you'd expect not to be related do make a mess of diabetes treatment (which is why my doctors both insisted I take the flu shot every year). A "simple" flu (well, one that got me to 38.7⁰C, but that's unrelated, no?) caused my blood sugar to rise quite high (over 20 mmol/l), even though I was not eating as much as usual either. I could have noticed with the usual blood checking, but that's not something you look forward to when you're already feverish and unwell. For next time, I should increase insulin in those cases, but it also made me wary of colds and in general gave me a good data point that caring even for small things is important.

A more sour point is, not unusually, the software. Now, to be fair, as my doctor pointed out, all diabetes management software sucks because none of it can have a full picture of things, so the data is not as useful, particularly not to the patients. I have of course interest in the software because of my work on reverse engineering, so I installed the Windows software right away (for once, they also provide an OSX software, but since the only Mac I have access to nowadays is a work device, I have not tried it.)

Unlike the previous terrible experience with Abbott software, this time I managed to download it without a glitch, except for the already-noted geolocking of their website. It also installed fine on Windows 10 and works out of the box, among other things because it requires no kernel drivers whatsoever (I'll talk about that later when I go into the reverse engineering bits.)

Another difference between this software and anything else I've seen up to now, is that it's completely stateless. It does not download the data off the glucometer to store it locally; it downloads it and runs the reports. But if you don't have the reader with you, there's no data. And since the reader stores up to 90 days' worth of data before discarding, there are no reports that cross that horizon!

On the other hand, the software does seem to do a good job at generating a vast amount of information. Not only does it generate all the daily graphs, and document the data more properly regarding which percentiles the "patterns" refer to (they also include two more percentile levels just to give a better idea of the actual pattern), but it provides info such as the "expected A1C", which is quite interesting.
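
For reference, a common way to compute this kind of estimate is from the average glucose over the period, using the published ADAG relation between average glucose and A1C. I have not checked which formula Abbott actually uses, so take the following as a sketch of the general idea rather than of their implementation:

# Estimated A1C from average glucose, using the ADAG relation
# eAG (mg/dL) = 28.7 * A1C - 46.7. Whether the FreeStyle software uses this
# exact formula is an assumption on my part.
def estimated_a1c(readings_mmol_l):
    avg_mg_dl = sum(readings_mmol_l) / len(readings_mmol_l) * 18.0  # mmol/L to mg/dL
    return (avg_mg_dl + 46.7) / 28.7

print(round(estimated_a1c([5.5, 7.2, 9.1, 6.3]), 1))  # prints 6.0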

At first, I mistakenly thought that the report functionality only worked by printing, similarly to the OneTouch software, but it turns out you can "Save" the report as PDF and that actually works quite well. It also allows you to "Export" the data, which provides you with a comma-separated values file with most of the raw data coming from the device (again, this will become useful in a separate post.)

That does not mean the software is free of bugs, though. First of all, it does not close. Instead, if you click on the window's X button, it'll be minimized. There's an "Exit" option in the "File" menu, but more often than not it seems to cause the software to get stuck and either get terminated by Windows, or require termination through the Task Manager. It also keeps "prodding" for the device, which ends up using 25% of one core, just for the sake of being open.

The funniest bit, though, was when I tried to "print" the report to PDF — which, as I said above, is not really needed since you can export it from the software just fine, but I didn't notice. In this situation, after the print dialog is shown, the software decides to hide any other window for its process behind its main window. I can only assume that this hides some Windows printing dialog that they don't want to distract the user with, but it also hides the "Save As" dialog that pops up. You can type the name blindly, assuming you can confirm you're in the right window through Alt-Tab, but you'll also have to deal with the software using its installation directory as work directory. Luckily Windows 10 is smart enough, and will warn about not having write access to the directory, and if you "OK" the invisible dialog, it'll save the file in your user's home directory instead.

As for final words, I'm sure hoping the device becomes available in the Republic of Ireland, and I would really like for it to be covered by the HSE's Long Term Illness program, as the sensors are not cheap at £58 every two weeks (unless you're clumsy like me and have to replace it sooner.) I originally bought the starter kit to try this out and evaluate it, but I think it's making enough of a good impact that (since I can afford it) I'll keep buying the supplies with my current method until it is actually available here (or until they make it too miserable.) I am not going to stop using the Accu-Chek Mobile for blood testing, though. While it would be nice to use a single device, the cassette system used by the Roche meter is just too handy, particularly when out in a restaurant.

I'll provide more information on my effort of reverse engineering the protocol in a follow-up post, so stay tuned if you're interested in it.

March 15, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
FOSDEM and the unrealistic IPv6-only network (March 15, 2016, 14:28 UTC)

Most of you know FOSDEM already, for those who don't, it's the largest Free and Open Source Software focused conference in Europe (if not the world.) If you haven't been to it I definitely suggest it, particularly because it's a free admission conference and it always has something interesting to discuss.

Even though there is no ticket and no badge, the conference does have free WiFi Internet access, which is how the number of attendees is usually estimated. In the past few years, their network has also been pushing the envelope on IPv6 support, first providing a dualstack network when IPv6 was fairly rare, and in the recent (three?) years providing an IPv6-only network as the default.

I can see the reason to do this, in the sense that a lot of Free Software developers are physically at the conference, which means they can see their tools suffer in an IPv6 environment and fix them. But at the same time, this has generated lots of complaints about Android not working in this setup. While part of that noise was useful, I got the impression this year that the complaints are repeated only for the sake of complaining.

Full disclosure, of course: I do happen to work for the company behind Android. On the other hand, I don't work on anything related at all. So this post is as usual my own personal opinion.

The complaints about Android started off quite healthy: devices couldn't actually connect to an IPv6 dual-stack network, and then they couldn't connect to an IPv6-only network. Both are valid complaints to begin with, though there is a bit more to it. This year in particular the complaints were not so healthy because current versions of Android (6.0) actually do support IPv6-only networks, though most of the Android devices out there are not running this version, either because they have too old hardware or because the manufacturer has not released a new build yet.

What does tick me off, though, has really nothing to do with Android, but rather with the idea that people have that the current IPv6-only setup used by FOSDEM is a realistic approach to IPv6 networking — it really is not. It is a nice setup to test things out and stress the need for proper support for IPv6 in tools, but it's very unlikely to be used in production by anybody as is.

The technique used (at least this year) by FOSDEM is NAT64, paired with DNS64. To oversimplify how this works, the DNS replies are modified when resolving hostnames so that they always provide an IPv6 address, even for names that only have A records (IPv4 addresses). The synthesized IPv6 addresses then map back to IPv4, and the edge router "translates" between the two connections.
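
To give a concrete idea of the mapping, here is a minimal Python sketch of how an IPv4 address gets embedded into a synthesized IPv6 address. I'm assuming the well-known 64:ff9b::/96 prefix from RFC 6052 here; the prefix actually used on the conference network may well differ:

import ipaddress

# Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix.
def synthesize_aaaa(ipv4_str, prefix="64:ff9b::"):
    v4 = int(ipaddress.IPv4Address(ipv4_str))
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix)) | v4)

print(synthesize_aaaa("198.51.100.7"))  # 64:ff9b::c633:6407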

Unlike classic NAT, this technique requires user-space components, as the kernel uses separate stacks for IPv4 and IPv6 which do not allow direct message passing between the two. This makes it complicated and significantly slower (you have to copy the data from kernel to userspace and back all the time), unless you use one of the hardware routers that are designed to deal with this (I know both Juniper and Cisco have those.)

NAT64 is a very useful testbed, if your target is figuring out what in your stack is not ready for IPv6. It is not, though, a realistic approach for consumer networks. If your client application does not have IPv6 support, it'll just fail to connect. If for whatever reason you rely on IPv4 literals, they won't work. Even worse, if the code allows a connection to be established over IPv6, but relies on IPv4 semantics for things like logging, or (worse) access control, then you now have bugs, crashes or worse, vulnerabilities.

And while fuzzing and stress-testing are great for development environments, they are not good for final users. In the same way -Werror is a great tool to fix your code, but uselessly disrupts your users.

In a similar fashion, while IPv6-only datacenters are not that uncommon – Facebook (the company) talked about them two years ago already – they serve a distinctly different purpose from a customer network. You don't want, after all, your database cluster to connect to random external services that you don't control — and if you do control the services, you just need to make sure they are all available over IPv6. In such a system, having a single stack to worry about simplifies, rather than complicates, things. I do something similar for the server I divide into containers: some of them, which are only backends, get no IPv4 at all, not even in NAT. If they ever have to go fetch something to build on the Internet at large, they go through a proxy instead.

I'm not saying that FOSDEM setting up such a network is not useful. It actually hugely is, as it clearly highlights the problems of applications not supporting IPv6 properly. And for Free Software developers setting up a network like this might indeed be too expensive in time or money, so it is a chance to try things out and iron out bugs. But at the same time it does not reflect a realistic environment. Which is why adding more and more rant on the tracking Android bug (which I'm not even going to link here) is not going to be useful — the limitation was known for a while and has been addressed on newer versions, but it would be useless to try backporting it.

For what it's worth, what is more likely to happen as IPv6 adoption needs to happen, is that providers will move towards solutions like DS-Lite (nothing to do with Nintendo), which couples native IPv6 with carrier-grade NAT. While this has limitations, depending on the size of the ISP pools, it is still easier to set up than NAT64, and is essentially transparent for customers if their systems don't support IPv6 at all. My ISP here in Ireland (Virgin Media) already has such a setup.

March 14, 2016
Sebastian Pipping a.k.a. sping (homepage, bugs)

Komodo IDE starts a debugger bound to 0.0.0.0, by default. Maker ActiveState’s reaction was rather unprofessional at the time when I asked for an option to bind to 127.0.0.1, instead. I can no longer add links to that post, but I can link to my demo Komodo IDE exploit script up here.
Now it seems like the option to disable or even customize debugger settings was removed from the GUI: I cannot find it in version 9.3.2. I found a workaround when reading the source code that still allows me to plug that hole in my setup. If I tweak the config file to an invalid port (outside the 0..65535 range), the debugger will just not start, but Komodo itself starts up with no complaints. Nice :-)

# fgrep debuggerListenerPort ~/.komodoide/*/prefs.xml
/home/user/.komodoide/9.3/prefs.xml:
  <long id="debuggerListenerPort">77777</long>
/home/user/.komodoide/9.3/prefs.xml:
  <string id="debuggerListenerPortType">specific</string>

If you use that trick, be sure to check the version number in the path so you edit the latest / actually used version, 9.3 in my case.

March 13, 2016
Sven Vermeulen a.k.a. swift (homepage, bugs)
Trying out imapsync (March 13, 2016, 11:57 UTC)

Recently, I had to migrate mail boxes for a couple of users from one mail provider to another. Both mail providers used IMAP, so I looked into IMAP related synchronization methods. I quickly found the imapsync application, also supported through Gentoo's repository.

What I required

The migration required that all mails, except for the spam and trash e-mails, were migrated to another mail server. The migrated mails had to retain their status flags (so unread mails had to remain unread while read mails had to remain read), and the migration had to be done in two waves: one while the primary mail server was still in use (where most of the mails were synchronized) and then, after switching the mail servers (which was done through DNS changes), a re-sync to fetch the final ones.

I did not get access to the credentials of all mail boxes, but together with the main administrator we enabled a sort-of shadow authentication system (a temporary OpenLDAP installation) in which the same users were enabled, but with passwords that will be used during the synchronization. The mailservers were then configured to have a secondary interface available which used this OpenLDAP rather than the primary authentication that was being used by the end users.

Using imapsync

Using imapsync is simple. It is a command-line application, and everything configurable is done through command arguments. The basic ones are of course the source and target definitions, as well as the authentication information for both sides.

~$ imapsync \
  --host1 src-host --user1 src-user --password1 src-pw --authmech1 LOGIN --ssl1 \
  --host2 dst-host --user2 dst-user --password2 dst-pw --authmech2 LOGIN --ssl2

The use of the --ssl1 and --ssl2 is not to enable an older or newer version of the SSL/TLS protocol. It just enables the use of SSL/TLS for the source host (--ssl1) and destination host (--ssl2).

This would just start synchronizing messages, but we need to include the necessary directives to skip trash and spam mailboxes for instance. For this, the --exclude parameter can be used:

~$ imapsync ... --exclude "Trash|Spam|Drafts"

It is also possible to transform some mailbox names. For instance, if the source host uses Sent as the mailbox for sent mail, while the target has Sent Items, then the following would enable migrating mails between the right folders:

~$ imapsync ... --folder "Sent" --regextrans2 's/Sent/Sent Items/'

Conclusions and interesting resources

Using the application was a breeze. I do recommend creating a test account on both sides so that you can easily see the available folders and the source and target naming conventions, as well as test if rerunning the application works flawlessly.

In my case for instance, I had to add --skipsize so that the application does not use the mail sizes for comparing if a mail is already transferred or not, as the target mailserver showed different mail sizes for the same mails. This was luckily documented in various online tutorials about imapsync.

The migration took a while, but without major issues. Within a few hours, the mailboxes of all users were correctly migrated.

March 12, 2016
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
How to break sysctl (March 12, 2016, 12:05 UTC)

A long time ago sysctl used one config file: /etc/sysctl.conf

There was a simple way to (re)load the values from that file: sysctl -p
There are aliases -f and --file that do the same.

Then things were Improved and Enhanced. Now sysctl -p will either fail (bug in procps-3.3.9) or not apply the config (3.3.10+). Which is possibly at times a bit fatal on production machines that rely on nonstandard settings to handle the workload.

How did things break? Of course new config paths must be added. Like any Modern Application sysctl will read snippets from a directory, and not just one directory but six:
/run/sysctl.d/*.conf
/etc/sysctl.d/*.conf
/usr/local/lib/sysctl.d/*.conf
/usr/lib/sysctl.d/*.conf
/lib/sysctl.d/*.conf
/etc/sysctl.conf

So let's think ...

/run ? Why would you put config there. Srsly wat. Use sysctl -w if you want to temporarily set a value.

/etc/sysctl.d ? Looks reasonable.

/usr/local/lib ? WAT. That's not a path where config lives. /usr/lib ? Why do you put things that are not libs in libdir. And since you need administrative access to modify that path, it is like /etc/sysctl.d only strictly worse.

/lib ? oh, because ... uhm ... I can't figure this one out

and finally, the classic /etc/sysctl.conf.

So four of the five new paths are poop, and we could completely remove this misfeature by adding an 'include /etc/sysctl.d/*.conf' to /etc/sysctl.conf. Then we wouldn't need sysctl --system, sysctl -p would still work, and there'd be less code written to implement this misfortune and less code written to mitigate the failures caused by it.

Having to fight such changes and the breakage they cause is frustrating, by changing less we could achieve more.
What amuses me most about this is that this change actually broke the new feature (--system) in the first iteration, after breaking the old behaviour. Amazing amount of churn that doesn't fix a problem we've had. No, I'm not grumpy!

March 11, 2016

The Norwegian Government proposed Proposition 68 L (2015-2016) today extending and introducing a wide range of methods for the police to cross the privacy boundary with increased surveillance, including what the Minister of Justice, Progress Party (FrP)'s Anundsen, calls "surveillance closer to the soul". The possibility to perform telecommunications control in Norway has history back … Continue reading "Norwegian government propose access to extended surveillance methods"

March 07, 2016

Due to my involvement in sks-keyservers.net I frequently get questions on whether I can remove OpenPGP certificates from the keyservers. TL;DR; Removal of OpenPGP certificates from a keyserver is not possible. To start off with, the OpenPGP keyserver network consists of more than 150 keyservers reconciling their databases between the peers. Even if I could … Continue reading "OpenPGP Certificates can not be deleted from keyservers"

March 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.9 (March 02, 2016, 08:23 UTC)

py3status v2.9 is out with a good bunch of new modules, exciting improvements and fixes !

Thanks

This release is made of their work; thank you, contributors!

  • @4iar
  • @AnwariasEu
  • @cornerman
  • Alexandre Bonnetain
  • Alexis ‘Horgix’ Chotard
  • Andrwe Lord Weber
  • Ben Oswald
  • Daniel Foerster
  • Iain Tatch
  • Johannes Karoff
  • Markus Weimar
  • Rail Aliiev
  • Themistokle Benetatos

New modules

  • arch_updates module, by Iain Tatch
  • deadbeef module to show current track playing, by Themistokle Benetatos
  • icinga2 module, by Ben Oswald
  • scratchpad_async module, by johannes karoff
  • wifi module, by Markus Weimar

Fixes and enhancements

  • Rail Aliiev implemented a flake8 check via travis-ci; we now have a new build-passing badge
  • fix: handle format_time tztime parameter thx to @cornerman, fix issue #177
  • fix: respect ordering of the ipv6 i3status module even on empty configuration, fix #158 as reported by @nazco
  • battery_level module: add multiple battery support, by 4iar
  • battery_level module: added formatting options, by Alexandre Bonnetain
  • battery_level module: added option hide_seconds, by Andrwe Lord Weber
  • dpms module: added color support, by Andrwe Lord Weber
  • spotify module: added format_down option, by Andrwe Lord Weber
  • spotify module: fixed color & playbackstatus check, by Andrwe Lord Weber
  • spotify module: workaround broken dbus, removed PlaybackStatus query, by christian
  • weather_yahoo module: support woeid, add more configuration parameters, by Rail Aliiev

What’s next ?

Some major core enhancements and code clean up are coming up thanks to @cornerman, @Horgix and @pydsigner. The next release will be faster than ever and consume even less CPU!

Meanwhile, this 2.9 release is available on pypi and Gentoo portage, have fun !

February 29, 2016
Gentoo accepted to GSoC 2016 (February 29, 2016, 00:00 UTC)

Students are encouraged to start working now on their project proposals. You can peruse the list of ideas or come up with your own. In any case, it is highly recommended you talk to a mentor sooner rather than later. The official application period for student proposals starts on March 14th.

Do not hesitate to join us in the #gentoo-soc channel on freenode. We will be happy to answer your questions there.
More information on Gentoo’s GSoC effort is also available on our Wiki.

February 28, 2016
Richard Freeman a.k.a. rich0 (homepage, bugs)
Gentoo Ought to be About Choice (February 28, 2016, 02:07 UTC)

“Gentoo is about choice.”  We’ve said it so often that it seems like we just don’t bother to say it any more.  However, with some of the recent conflicts on the lists (which I’ve contributed to) and indeed across the FOSS community at large, I think this is a message that is worth repeating…

Ok, bear with me because I'm going to talk about systemd.  This post isn't really about systemd, but it would probably not be nearly as important in its absence.  So, we need to talk about why I'm bringing this up.

How we got here

Systemd has brought a wave of change in the Linux community, and most of the popular distros have decided to adopt it.  This has created a bit of a vacuum for those who strongly prefer to avoid it, and many of these have adopted Gentoo (the only other large-ish option is Slackware), and indeed some have begun to contribute back.  The resulting shift in demographics has caused tensions in the community, and I believe this has created a tendency for us to focus too much on what makes us different.

Where we are now

Every distro has a niche of some kind – a mission that gives it a purpose for existence.  It is the thing that its community coalesces around.  When a distro loses this sense of purpose, it will die or fork, whether by the forces of lost contributors or lost profits.  This purpose can certainly evolve over time, but ultimately it is this purpose which holds everything together.

For many years in Gentoo our purpose has been about providing choices, and enabling the user.  Sometimes we enable them to shoot their own feet, and we often enable them to break things in ways that our developers would prefer not to troubleshoot.  We tend to view the act of suppressing choices as contrary to our values, even if we don’t always have the manpower to support every choice that can possibly exist.

The result of this philosophy is what we all see around us.  Gentoo is a distro that can be used to build the most popular desktop linux-based operating system (ChromeOS), and which reportedly is also used as the basis of servers that run NASDAQ[1].  It shouldn’t be surprising that Gentoo works with no fewer than 7 device-manager implementations and 4 service managers.

Still, many in the Linux community struggle to understand us.  They mistake our commitment to providing a choice as some kind of endorsement of that choice.  Gentoo isn’t about picking winners.  We’re not an anti-systemd distro, even if many who dislike systemd may be found among us and it is straightforward to install Gentoo without “systemd” appearing anywhere in the filesystem.  We’re not a pro-systemd distro, even if (IMHO) we offer one of the best and undiluted systemd experiences around.  We’re a distro where developers and users with a diverse set of interests come together to contribute using a set of tools that makes it practical for each of us to reach in and pull out the system that we want to have.

Where we need to be

Ultimately, I think a healthy Gentoo is one which allows us all to express our preferences and exchange our knowledge, but where in the end we all get behind a shared goal of empowering our users to make the decisions.  There will always be conflict when we need to pick a default, but we must view defaults as conveniences and not endorsements.  Our defaults must be reasonably well-supported, but not litmus tests against which packages and maintainers are judged.  And, in the end, we all benefit when we are exposed to those who disagree and are able to glean from them the insights that we might have otherwise missed on our own.

When we stop making Gentoo about a choice, and start making it about having a choice, we find our way.

1 – http://www.computerworld.com/article/2510334/financial-it/how-linux-mastered-wall-street.html



February 27, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
EFF's Panopticlick at Enigma 2016 (February 27, 2016, 12:25 UTC)

One of the things I was most interested to hear about at Enigma 2016 was news about EFF's Panopticlick. For context, here is the talk from Bill Budington:

I wrote before about the tool, but they have recently reworked and rebranded it to use it as a platform for promoting their Privacy Badger, which I don't particularly care for. For my intents, they luckily still provide the detailed information, and this time around they make it more prominent that they rely on the fingerprintjs2 library for this information. Which means I could actually try and extend it.

I tried to bring up one of my concerns at the post-talk Q&A at the conference (the Q&A was not recorded), so I thought it would be nice to publish my few comments about the tool as it is right now.

The first comment is this: both Panopticlick and Privacy Badger do not consider the idea of server-side tracking. I have said that before, and I will repeat it now: there are plenty of ways to identify a particular user, even across sites, just by tracking behaviours that are seen passively on the server side. Bill Budington's answer to this at the conference was that Privacy Badger allows cookies only if there is a policy in place from the site, and counts on this policy being binding for the site.

But this does not mean much — Privacy Badger may stop the server from setting a cookie, but there are plenty of behaviours that can be observed without the help of the browser, or even more interestingly, with the help of Privacy Badger, uBlock, and similar other "privacy conscious" extensions.

Indeed, not allowing cookies is, already, a piece of trackable information. And that's where the problem with self-selection, which I already hinted at before, comes in: when I ran Panopticlick on my laptop earlier, it told me that one out of 1.42 browsers has cookies enabled. While I don't have any access to facts and statistics about that, I do not think it's a realistic number to say that about 30% of browsers have cookies disabled.

If you connect this to the commentary on what NSA's Rob Joyce said at the closing talk, which unfortunately I was not present for, you could say that the fact that Privacy Badger is installed, and fetches a given path from a server trying to set a cookie, is a good way to figure out information on a person, too.

The other problem is more interesting. In the talk, Budington introduces briefly the concept of Shannon entropy, although not by that name, and gives an example of the different amount of entropy provided by knowing someone's zodiac sign versus knowing their birthday. He also points out that these two pieces of information are not independent, so you cannot sum their entropies together, which is indeed correct. But there are two problems with that.

The first is that the Panopticlick interface does seem to think that all the information it gathers is at least partially independent, and indeed shows a number of entropy bits higher than the single highest entry they have. But it is definitely not the case that all entries are independent. Even leaving aside browser-specific things such as the type of images requested and so on, for many languages (though not English) there is a timezone correlation: the vast majority of Italian users would be reporting the same timezone, either +1 or +2 depending on the time of the year; sure there are expats and geeks, but they are definitely not as common.
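
To put some numbers on the zodiac/birthday example (assuming uniform distributions, which, as I note below, real birthdays are not):

import math

print(math.log2(12))    # zodiac sign: about 3.6 bits
print(math.log2(365))   # birthday: about 8.5 bits
# Since the birthday already determines the sign, knowing both is still only
# about 8.5 bits, not 3.6 + 8.5.
print(math.log2(1.42))  # the "one in 1.42 browsers" figure above: about 0.5 bits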

The second problem is that there is a more interesting approach to take when you are given key/value pairs of information that should not be independent, submitted in independent ways. Going back to the example of date of birth and zodiac sign, the calculation of entropy in this example is done starting from facts, particularly those about which people cannot lie — I'm sure that for any one database of registered users, January 1st is skewed as having many more than 1/365th of the users.

But what happens if the information is gathered separately? If you ask a user both their zodiac sign and their date of birth separately, they may lie. And when (not if) they do, you may have a more interesting piece of information. Because if you have a network of separate social sites/databases, in which only one user ever selects being born on February 18th but being a Scorpio, you have a very strong signal that it might be the same user across them.

This is the same situation I described some time ago of people changing their User-Agent string to try to hide, but then creating unique (or nearly unique) signatures of their passage.

Also, while Panopticlick will tell you if the browser is doing anything to avoid fingerprinting (how?) it still does not seem to tell you if any of your extensions are making you more unique. And since it's hard to tell whether some JavaScript bit is trying to load a higher-definition picture, or hide pieces of the UI for your small screen, versus telling the server about your browser setup, it is not like they care if you disabled your cookies…

For a more proactive approach to improve users' privacy, we should ask for more browser vendors to do what Mozilla did six years ago and sanitize what their User-Agent content should be. Currently, Android mobile browsers would report both the device type and build number, which makes them much easier to track, even though the suggestion has been, up to now, to use mobile browsers because they look more like each other.

And we should start wondering how much a given browser extension adds to or subtracts from the uniqueness of a session. Because I think most of them are currently adding to the entropy, even those that are designed to "improve privacy."

February 26, 2016
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Setting USE_EXPAND flags in package.use (February 26, 2016, 17:32 UTC)

This has apparently been supported in Portage for some time, but I only learned it recently from a gentoo-dev mail: you do not have to write down the expanded USE-flags in package.use anymore (or set them in make.conf)!

For example, if I wanted to set some APACHE2_MODULES and a custom APACHE2_MPM, the standard package.use entry would be something like:

www-servers/apache apache2_modules_proxy apache2_modules_proxy_http apache2_mpms_event ssl

Not as pretty/convenient as a ‘APACHE2_MODULES=”proxy proxy_http”‘ line in make.conf. Here is the best-of-both-worlds syntax (also supported in Paludis apparently):

www-servers/apache ssl APACHE2_MODULES: proxy proxy_http APACHE2_MPMS: event

Or if you use python 2.7 as your main python interpreter, set 3.4 for libreoffice-5.1 😉

app-office/libreoffice PYTHON_SINGLE_TARGET: python3_4

Have fun cleaning your package.use file

February 23, 2016
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Working at RedHat and moving to Czech Republic (February 23, 2016, 00:15 UTC)

Warning: this is yet another delayed announcement :-)

I'm happy to announce here that since September 1st of last year I left Titans Group and joined RedHat, working remotely from Brazil. I'm working as a Software Engineer, and I'm a member of the oVirt integration team.

After 3 months working from Brazil, I moved to Brno, Czech Republic. Now I'm working from one of the RedHat offices in the city.

You can expect some posts here about my work, which is mostly open source now, and about my experiences in the Czech Republic.

February 18, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Creating Gentoo VM Images (February 18, 2016, 06:00 UTC)

Initial Setup and Info

This guide uses Openstack's Diskimage-builder tool for generating images; while you can use this for Openstack, you can also create generic images with it.

Setting up Diskimage-builder is fairly simple; when you use it, it does expect to be run as root.

All you need to do is follow this guide; at its simplest it's just a couple of git clones and PATH setup.

You will need app-emulation/qemu for generation of qcow2 files.

The current setup utilizes the stage4 images being generated; see this link for more details.

There are currently only 4 profiles supported; however, I hope to support musl and selinux profiles 'soon'.

  • default/linux/amd64/13.0
  • default/linux/amd64/13.0/no-multilib
  • hardened/linux/amd64
  • hardened/linux/amd64/no-multilib

Generating an Openstack image

To use a profile other than default/linux/amd64/13.0 set the GENTOO_PROFILE environment variable to one of the other supported profiles.

disk-image-create -a amd64 -t qcow2 --image-size 2 gentoo simple-init growroot vm is all you need to start. It will output a file named image.qcow2.

For openstack there are two ways you could go for initial setup (post-vm start). The first and most common is cloud-init, but that includes a few python deps that I don't think are really needed. The other is simple-init (glean), which is more limited, but as its name suggests, simple.

Here is a link to glean (simple-init) for those wanting more info: glean

Generating a Generic Image You Can Log Into

Using the devuser element you can set up custom users. You will need to set up some more environment variables though.

Docs can be found here

An example invocation follows. simple-init may be needed so that interfaces get dhcp addresses, though you may want to set that up manually; your choice.

DIB_DEV_USER_PASSWORD=foobar DIB_DEV_USER_USERNAME=gentoo DIB_DEV_USER_PWDLESS_SUDO=yes DIB_DEV_USER_AUTHORIZED_KEYS=/dev/null disk-image-create -a amd64 -t qcow2 --image-size 2 gentoo simple-init growroot devuser vm

February 17, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Bear with me — this post will start with a much longer trial-and-error phase than the previous one…

I received the OneTouch Verio glucometer from LifeScan last year, when I noticed that my previous glucometer (the protocol of which was fully specified on their website) was getting EOL'd. I used it for a couple of months, but when I posted that review, a different one was suggested to me, so I moved on. It stayed in the back of my mind for a while, though, as LifeScan refused to provide the protocol for it.

So over the past week, after finishing the lower-hanging fruit I decided to get serious and figure out how this device worked.

First of all, unlike the older OneTouch devices I own, this device does not use a TRS (stereo-jack) serial port, instead it comes with a standard micro-A USB connector. This is nice as the previous cables needed to be requested and received before you could do anything at all with the software.

Once connected, the device appears to the operating system as a USB Mass Storage device – a thumbdrive – with a read-only FAT16 partition with a single file in it, an HTML file sending you to LifeScan's website. This is not very useful.

My original assumption was that the software would use a knocking sequence to replace the mass storage interface with a serial one — this is what most of the GSM/3G USB modems do, which is why usb_modeswitch was created. So I fired up the same USBlyzer (which by now I have bought a license of, lacking a Free Software alternative for the moment) and started tracing. But not only did no new devices or interfaces appear in the Device Manager tree, I couldn't see anything out of the ordinary in the trace.

Since at first I was testing this on a laptop that had countless services and things running (this is the device I used for the longest time to develop Windows software for customers), I then wanted to isolate the specific non-mass storage USB commands the software had to be sending to the device, so I disabled the disk device and retried… to find the software didn't find the meter anymore.

This is when I knew things were going to get complicated (thus why I moved on to working on the Abbott device then.) The next step was to figure out what messages the computer and meter were exchanging; unfortunately USBlyzer does not have a WireShark export, so I had to make do with exporting to CSV and then reassembling the information from that. Let me just say it was not the easiest thing to do, although I now have a much more polished script to do that — it's still terrible so I'm not sure I'm going to publish it any time soon though.

The first thing I did was extracting the URBs (USB Request Blocks) in binary form from the hex strings in the CSV. This would allow me to run strings on them, in the hope of seeing something such as the meter's serial number. When reverse engineering an unknown glucometer protocol, it's good to keep in mind essentially all diabetes management software relies on the meters' serial numbers to connect the readings to a patient. As I've later discovered, I was onto something, but either strings is buggy or I used the wrong parameters. What I did find then was a lot of noise with MSDOS signatures (for MBR and FAT16) appearing over and over. Clearly I needed better filtering.

I've enhanced the parsing to figure out what the URBs meant. Turns out that USB Mass Storage uses signatures USBC and USBS (for Command and Status) – which also explained why I saw them in the Supermicro trace – so it's not too difficult to identify them, and ignore them. Once I did that, the remaining URBs didn't make much sense either, particularly because I could still notice they were only the data being written and read (as I could see many of them matched with blocks from the device's content.)
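
To give an idea of the kind of filtering involved (just a sketch, not the actual, much messier script), the gist is to turn the exported hex payloads into bytes and skip anything carrying the standard Bulk-Only Transport wrappers:

def interesting_payloads(hex_payloads):
    # hex_payloads: iterable of hex strings as exported from the analyzer's CSV.
    for hex_str in hex_payloads:
        data = bytes.fromhex(hex_str.replace(" ", ""))
        # CBW and CSW blocks of USB Mass Storage start with "USBC" and "USBS".
        if data.startswith((b"USBC", b"USBS")):
            continue
        yield data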

So I had to dig further. USB is somewhat akin to a networking stack, with different layers of protocols one on top of the other — the main difference being that the USB descriptor (the stuff lsusb -v prints) contains the information for all levels, rather than providing that information on each packet. A quick check on the device's interface tells me indeed that it's a fairly standard one:

Interface Descriptor:
  bLength                 9
  bDescriptorType         4
  bInterfaceNumber        0
  bAlternateSetting       0
  bNumEndpoints           2
  bInterfaceClass         8 Mass Storage
  bInterfaceSubClass      6 SCSI
  bInterfaceProtocol     80 Bulk-Only
  iInterface              7 LifeScan MSC

What this descriptor says is that the device is expecting SCSI commands, which is indeed the case of most USB thumbdrives — occasionally, a device might report itself as using the SDIO protocol, but that's not very common. The iInterface = LifeScan MSC setting, though, says that there is an extension of the protocol that is specific to LifeScan. Once again here I thought it had to be some extension to the SCSI command set, so I went to look for the specs of the protocol, and started looking at the CDBs (command blocks.)

I'm not sure at this point if I was completely surprised not to see any special command at all. The only commands in the trace seemed to make sense at the time (INQUIRY, READ, WRITE, TEST UNIT READY, etc). It was clear at that point that the software piggybacked on the standard volume interface, but I expected it to access some hidden file to read the data, so I used an app to log the filesystem access and… nothing. The only files that were touched were the output Access files used by the tool.

I had to dig deeper, so I started parsing the full CDBs and looked at which parts of the disk were accessed — I could see some scattered access to what looked like the partition table (but wasn't it supposed to be read-only?) and some garbage at the end of the disk with System Volume Information. I dumped the content of the data read and written and used strings but couldn't find anything useful, even looking for Unicode characters. So I took another trace, started it with the device already connected this time, and compared — that started sending me in the right direction: I could see a number of write-then-read requests happening on three particular blocks: 3, 4 and 5.

At that point I tried to focus on the sequence of writes and reads on those blocks, and things got interesting: some of the written and read data had the same content across sessions, which meant there was communication going on. The device is essentially exposing a register-based communication interface-over-SCSI-over-USB. I'm not sure if brilliant or crazy. But the problem remained of understanding the commands.

At this point I was hoping to get some help by looking at what commands were actually being sent to the kernel, so I downloaded the latest Windows SDK and fired up WinDbg, hoping to log the events. I didn't manage that, but I did find something even more interesting. The OneTouch software and drivers have been built with debug logging still on, probably because nobody would notice there is logging unless they attach a debugger… just like I did. This was a lucky breakthrough because it allowed me to see what driver the software used (and thus its symbol table and function names — yes, PE would allow you to obfuscate the function names by using an import library, but they didn't) and also to see what it thought about things.

An interesting discovery is that the software seems to communicate with its drivers via XML documents (properly human-readable ones at that), while the driver seemed to talk to the device via binary commands. Unfortunately, said commands didn't match what I was seeing in the trace, at least not fully — I could find some subsets of data here and there, but not consistently, it looks like one of the libraries is translating from the protocol the device actually accepted to another (older?) binary protocol, to speak to a driver that then converted it to XML and to the device. This does sound dopey, doesn't it?

Anyway, I decided to then start matching messages in the sequences. This started to be interesting. Using hexdump -C to have a human-readable copy of the content of the SCSI blocks written and read, I would see the first few lines matching between messages in the same sequence, while those after 255 bytes would be different, but in a predictable way: a four-byte word would appear at a certain address, and the following words would have the same distance from it. I was afraid this was going to be some sort of signature or cryptographic exchange — until I compared this with the trace under WinDbg, which had nothing at all after the first few lines. I then decided to filter anything after the first 16 bytes of zeros, and compare again.

This led to more interesting results. Indeed I could see that across the three sessions, some packets would be exactly the same, while in others the written packet would be the same and the read packet would be different. And when they would be different, there would be a byte or two different and then the last two bytes would differ. Now one of the things I did when I started looking at WinDbg was checking the symbol table of the libraries that were used by the software, and one of them had a function that included crc_ccitt in its name. This is a checksum algorithm that LifeScan used before — but with a twist there as well, it used a non-standard (0xFFFF) seed. Copying the packet up until the checksum and pasting it in an online calculator confirmed that I had now found the checksum of the packet.
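
For the curious, this is the textbook bit-by-bit implementation of that checksum with the 0xFFFF seed; a sketch equivalent to what is described above, not code lifted from their library:

def crc_ccitt_ffff(data):
    # CRC-16-CCITT, polynomial 0x1021, seeded with 0xFFFF instead of 0x0000.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc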

At that point I opened the OneTouch UltraEasy specs (an older meter, of which LifeScan published the protocol), which shared the same checksum, and noticed at least one more similarity: the messages are framed the same way (0x02 at the beginning, 0x03 at the end). And the second byte matches the length of the packet. A quick comparison with the log I got off the debugger, and the other binary protocol does not use the framing but does use the same length specification and the same checksum algo. Although in this case I could confirm the length is defined as 16-bit, as this intermediate protocol reassembled what soon clearly appeared to be a set of separate responses into one.

Once you get to this point, figuring out the commands is much easier than you think — some of them will return things such as the serial number of the device (printed on the back), the model name, or the software version, which the debug log let me match for sure. I was confused at first because strings -el couldn't find them in the binary files, but strings -eb did… they are not big-endian though. At this point, there are a few things that need to be figured out to write a proper useful driver for the meter.

The first low-hanging fruit is usually to be found in the functions to get and set time, which, given I couldn't see any strings around, I assumed to be some sort of timestamp — but I couldn't find anything that looked like the day's timestamp in the trace. To be honest, there was an easier way to figure this out, but the way I did it was by trying to figure out the reading record format. Something that looked like a 32-bit counter in high numbers could be found, so I compared that with one that looked like it in a promising command, and I looked at the difference — the number, interpreted as seconds, gave me a 22-week delta, which matched the delta between the last reading and the trace — I was onto something! Given I knew the exact timestamp of the last reading, the difference between that and the number I had brought me exactly to January 1st 2000, the device's own epoch.
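
In code, the conversion is trivial once the epoch is known; a minimal sketch (assuming, which I have not verified, that the device has no notion of timezones):

from datetime import datetime, timedelta

DEVICE_EPOCH = datetime(2000, 1, 1)

def device_timestamp(seconds):
    # The device stores timestamps as seconds elapsed since January 1st 2000.
    return DEVICE_EPOCH + timedelta(seconds=seconds)

print(device_timestamp(0))          # 2000-01-01 00:00:00
print(device_timestamp(508000000))  # some time in early 2016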

Once again, from there things get easier — the format of the records is simple; it includes a counter and what I soon realized to be a lifetime counter, the timestamp with the device's own epoch, some (still unknown) flags, and the reading value in mg/dL as usual for most devices. What was curious was that the number shown in the debug log's XML does not match the mg/dL reading, but the data in the protocol matches what the device and software show for each reading, so it's okay.

While I was working on this, I got approached over Twitter by someone with a OneTouch Select Plus meter, which is not sold in Ireland at all. I asked him for a trace of the device and I compared it with my tools and the reverse engineering I had to that point, and it appears to be using the same protocol, although it replies with a lot more data to one of the commands I have not found the meaning of (and that the device doesn't seem to need — there's no knock sequence, so it's either to detect some other model, or a kind of ping-back to the device.) The driver I wrote should work for both. Unfortunately they are both mmol/L devices, so I can't tell for sure which unit the device is supposed to use.

One last curiosity, while comparing the protocol as I reversed it and the OneTouch UltraEasy protocol that was published by LifeScan. Many of the commands are actually matching, including the "memory reset" one, with one difference: whereas the UltraEasy commands (after preamble) start with 0x05, the Verio commands start with 0x04 — so for instance memory reset is 05 1a on the UltraEasy, but 04 1a on the Verio.

The full documentation of the protocol as I reversed it is available on my repository and glucometerutils gained an otverio2015 driver. For the driver I needed to fix the python-scsi module to actually work to send SCSI commands over the SGIO interface in Linux, but that is fixed upstream now.

If you happen to have this device, or another LifeScan device that appears as a USB Mass Storage, but using mg/dL (or something that does not appear to work with this driver), please get in touch so I can get a USB trace of its dumping memory. I could really use the help.

I won't be spending time reverse engineering anything this weekend, because I'm actually spending time with friends, but I'll leave you with confirmation that there will be at least one more device getting reverse engineered soon, though the next post will first be a review of it. The device is the Abbott FreeStyle Libre, for which I can't link a website, as it would just not appear if you're not in one of the (one?) countries it's sold in. Bummer.

Yury German a.k.a. blueknight (homepage, bugs)
Gentoo Blogs – Announcement for Developers (February 17, 2016, 17:33 UTC)

== Announcement Gentoo Blogs: ==

We have upgraded the WordPress install, the themes, and the plugins for
blogs.gentoo.org.

There are a few announcements as some things have changed with the
latest version:

1. “Twenty Fifteen” Theme – PROBLEMS / Not Working
“Twenty Fifteen” was the default theme for the previous version of
WordPress, and if you accepted the default theme it is your theme as
well.

There were some changes, and some sites are not displaying correctly
using that theme. Please take a look at your site and feel free to pick
another theme. WordPress has introduced “Twenty Sixteen”, which looks
cleaner and might be a good choice.

2. “Twenty Thirteen” theme is currently broken.
The new WordPress update also brought with it a broken theme.
“Twenty Thirteen” no longer works correctly either. Please take a look
at alternative themes for your web site, as within seven (7) days we
will be turning off that theme.

3. Picasa Albums Plugin
The Picasa Albums plugin has not been updated in two (2) years, and with
this version it is no longer functioning. If you are using this plug-in
please let me know, as we would have to find a replacement.

If you have any questions please feel free to contact me directly or
planet@gentoo.org

February 16, 2016

Description:
Portage-utils is a set of small and fast portage helper tools written in C.

I discovered that a crafted file is able to cause a stack-based buffer overflow.

The complete ASan output:

~ # qfile -f qfile-OOB-crash.log                                                                                                                                                                                                                                          
=================================================================                                                                                                                                                                                                              
==12240==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd067c1ac1 at pc 0x000000495bdc bp 0x7ffd067bd6f0 sp 0x7ffd067bceb0                                                                                                                                     
READ of size 4095 at 0x7ffd067c1ac1 thread T0                                                                                                                                                                                                                                  
    #0 0x495bdb in strncpy /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/asan_interceptors.cc:632:5                                                                                                                                  
    #1 0x4fb5b9 in prepare_qfile_args /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qfile.c:297:3                                                                                                                                                      
    #2 0x4fb5b9 in qfile_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qfile.c:530                                                                                                                                                                
    #3 0x4e7f22 in q_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./q.c:79:10                                                                                                                                                                      
    #4 0x4e7afe in main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/main.c:1405:9                                                                                                                                                                      
    #5 0x7f5ccc29e854 in __libc_start_main /tmp/portage/sys-libs/glibc-2.21-r1/work/glibc-2.21/csu/libc-start.c:289                                                                                                                                                            
    #6 0x4192f8 in _init (/usr/bin/q+0x4192f8)                                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                               
Address 0x7ffd067c1ac1 is located in stack of thread T0 at offset 17345 in frame                                                                                                                                                                                               
    #0 0x4f8b3f in qfile_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qfile.c:394                                                                                                                                                                
                                                                                                                                                                                                                                                                               
  This frame has 10 object(s):                                                                                                                                                                                                                                                 
    [32, 4128) 'pkg.i'                                                                                                                                                                                                                                                         
    [4256, 8353) 'rpath.i'                                                                                                                                                                                                                                                     
    [8624, 8632) 'fullpath.i'                                                                                                                                                                                                                                                  
    [8656, 8782) 'slot.i'                                                                                                                                                                                                                                                      
    [8816, 8824) 'slot_hack.i'                                                                                                                                                                                                                                                 
    [8848, 8856) 'slot_len.i'                                                                                                                                                                                                                                                  
    [8880, 12977) 'tmppath.i'                                                                                                                                                                                                                                                  
    [13248, 17345) 'abspath.i'                                                                                                                                                                                                                                                 
    [17616, 17736) 'state' <== Memory access at offset 17345 partially underflows this variable                                                                                                                                                                                
    [17776, 17784) 'p'
Shadow bytes around the buggy address:
  0x100020cf0350: 00 00 00 00 00 00 00 00[01]f2 f2 f2 f2 f2 f2 f2
  0x100020cf0360: f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 f2
  0x100020cf0370: f2 f2 f2 f2 f2 f2 f2 f2 f2 f2 00 00 00 00 00 00
  0x100020cf0380: 00 00 00 00 00 00 00 00 00 f2 f2 f2 f2 f2 00 f3
  0x100020cf0390: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
  0x100020cf03a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==12240==ABORTING

Affected version:
All versions.

Fixed version:
0.61

Commit fix:
https://gitweb.gentoo.org/proj/portage-utils.git/commit/?id=070f64a84544f74ad633f08c9c07f99a06aea551

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Not Assigned.

Timeline:
2016-02-01: bug discovered
2016-02-01: bug reported to upstream
2016-02-04: upstream released a fix
2016-02-16: advisory release

Note:
This bug was found with American Fuzzy Lop.
As the commit clearly states, the ability to read directly from a file was removed.

Permalink:

portage-utils: stack-based buffer overflow in qfile.c

Description:
Portage-utils is a small and fast set of portage helper tools written in C.

I discovered that a crafted file is able to cause a heap-based buffer overflow.

The complete ASan output:

~ # qlop -f $CRAFTED_FILE -s
Mon Jan 25 11:38:31 2016 >>> gentoo
=================================================================
==14281==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61900001e44a at pc 0x000000425676 bp 0x7fff2b3f3970 sp 0x7fff2b3f3130
READ of size 1 at 0x61900001e44a thread T0
    #0 0x425675 in __interceptor_strncmp /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:218:3
    #1 0x50d5b1 in show_sync_history /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qlop.c:350:7
    #2 0x50d5b1 in qlop_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./qlop.c:687
    #3 0x4e7f22 in q_main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/./q.c:79:10
    #4 0x4e7afe in main /tmp/portage/app-portage/portage-utils-0.60/work/portage-utils-0.60/main.c:1405:9
    #5 0x7fafd8594854 in __libc_start_main /tmp/portage/sys-libs/glibc-2.21-r1/work/glibc-2.21/csu/libc-start.c:289
    #6 0x4192f8 in _init (/usr/bin/q+0x4192f8)

0x61900001e44a is located 0 bytes to the right of 970-byte region [0x61900001e080,0x61900001e44a)
allocated by thread T0 here:
    #0 0x4a839e in realloc /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:61:3
    #1 0x7fafd85dc95f in getdelim /tmp/portage/sys-libs/glibc-2.21-r1/work/glibc-2.21/libio/iogetdelim.c:106

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/llvm-3.7.1/work/llvm-3.7.1.src/projects/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:218:3 in __interceptor_strncmp
Shadow bytes around the buggy address:
  0x0c327fffbc30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c327fffbc70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c327fffbc80: 00 00 00 00 00 00 00 00 00[02]fa fa fa fa fa fa
  0x0c327fffbc90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c327fffbca0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c327fffbcb0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c327fffbcc0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c327fffbcd0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd                                                                                                                                                                                                              
Shadow byte legend (one shadow byte represents 8 application bytes):                                                                                                                                                                                                           
  Addressable:           00                                                                                                                                                                                                                                                    
  Partially addressable: 01 02 03 04 05 06 07                                                                                                                                                                                                                                  
  Heap left redzone:       fa                                                                                                                                                                                                                                                  
  Heap right redzone:      fb                                                                                                                                                                                                                                                  
  Freed heap region:       fd                                                                                                                                                                                                                                                  
  Stack left redzone:      f1                                                                                                                                                                                                                                                  
  Stack mid redzone:       f2                                                                                                                                                                                                                                                  
  Stack right redzone:     f3                                                                                                                                                                                                                                                  
  Stack partial redzone:   f4                                                                                                                                                                                                                                                  
  Stack after return:      f5                                                                                                                                                                                                                                                  
  Stack use after scope:   f8                                                                                                                                                                                                                                                  
  Global redzone:          f9                                                                                                                                                                                                                                                  
  Global init order:       f6                                                                                                                                                                                                                                                  
  Poisoned by user:        f7                                                                                                                                                                                                                                                  
  Container overflow:      fc                                                                                                                                                                                                                                                  
  Array cookie:            ac                                                                                                                                                                                                                                                  
  Intra object redzone:    bb                                                                                                                                                                                                                                                  
  ASan internal:           fe                                                                                                                                                                                                                                                  
  Left alloca redzone:     ca                                                                                                                                                                                                                                                  
  Right alloca redzone:    cb                                                                                                                                                                                                                                                  
==14281==ABORTING

Affected version:
All versions.

Fixed version:
0.61

Commit fix:
https://gitweb.gentoo.org/proj/portage-utils.git/commit/?id=7aff0263204d80304108dbe4f0061f44ed8f189f

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Not Assigned.

Timeline:
2016-01-26: bug discovered
2016-01-27: bug reported to upstream
2016-01-29: upstream released a fix
2016-02-16: advisory release

Note:
This bug was found with American Fuzzy Lop.

Permalink:

portage-utils: heap-based buffer overflow in qlop.c

February 14, 2016
I love free software but I love you more (February 14, 2016, 22:07 UTC)

The Free Software Foundation Europe is running its campaign once again this year, and I quote: In the Free Software society we exchange a lot of criticism. We write bug reports, tell others how they can improve the software, ask them for new features, and generally are not shy about criticising others. There is nothing … Continue reading "I love free software but I love you more"

February 10, 2016
Denis Dupeyron a.k.a. calchan (homepage, bugs)
It is GSoC season again (February 10, 2016, 17:54 UTC)

Google Summer of Code 2016 is starting.

If you are a student please be patient. Your time will come soon, at which point we’ll be able to answer all your questions. In this initial phase the audience is project mentors. Note that you do not need to be a Gentoo developer to be a mentor.

While we are finalizing the application, we need all of you to submit your project ideas before the end of next week (February 19th). To do so you should go to this year’s idea page and follow the instructions in the “Ideas” section. If you proposed an idea last year and would like to propose it again for this year, you can look it up and add it. Or you can just tell us and we will do it for you. Don’t hesitate to add an idea even if it isn’t totally fleshed out; we will help you fill in the blanks. A project represents 3 months of full-time work for a student. A good rule of thumb is that it should take you between 2 and 4 weeks to complete depending on how expert you are in the field. You can also have a look at last year’s idea page for examples.

If you would like to be a mentor for this year please tell us sooner rather than later. You can be a mentor following up with students while they’re working on their project, or you can be an expert-mentor in a specific field who will be called in to advise on an as-needed basis (or maybe never). We will also need people to help us review project proposals. In case you can help in any capacity don’t hesitate to contact us. GSoC is seriously fun!

If you want to reach us or have any questions about any of the above, the easiest way is to ping Calchan or rafaelmartins in the #gentoo-soc channel on Freenode.

February 09, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

As I said in previous posts, I have decided to spend some time reverse engineering the remaining two glucometers I had at home for which the protocol is not known. The OneTouch Verio is proving a complex problem, but the FreeStyle Optium proved itself much easier to deal with, if nothing else because it is clearly a serial protocol. Let's walk through how all the ducks lined up to get to the final (mostly) working state.

Alexander Schrijver already reverse engineered the previous FreeStyle protocol, but that does not work with this model at all. As I'll explain later, it's still a good reference to keep at hand.

The "strip-port" cable that Abbott sent me uses a Texas Instrument USB-to-Serial converter chip, namely the TIUSB3410; it's supported by the Linux kernel just fine by itself, although I had to fix the kernel to recognize this particular VID/PID pair; anything after v3.12 will do fine. As I found later on, having the datasheet at hand is a good idea.

To reverse engineer a USB device, you generally start by snooping a session on Windows, to figure out what the drivers and the software tell the device and what they get back. Unfortunately usbsnoop – the open source Windows USB snooper of choice – has not been updated in a few years and does not support Windows 10 at all. So I had to search harder for an alternative.

Windows 7 and later support USB event logging through ETW natively, and thankfully Microsoft more recently understood that the original instructions were way too convoluted, and now provides an updated guide based on Microsoft Message Analyzer, which appears to be their Wireshark equivalent. Try as I might, I have not been able to get MMA to provide me useful information: it shows me the responses from the device just fine, but it does not show me the commands as sent by the software, making it totally useless for the purpose of reverse engineering. I'm not sure whether that's by design or me not understanding how it works and missing some settings.

A quick look around pointed me at USBlyzer, which is commercial software, but it has both a complete free trial and an affordable price ($200), at least now that I'm fully employed. So I decided to try it out, and while the UI is not as advanced as MMA's, it does the right thing and shows me all the information I need.

Start of capture with USBlyzer

Now that I have a working tool to trace the USB inputs and outputs, I recorded a log while opening the software – actually, it auto-starts – downloading the data, checking the settings and changing the time. Now it's time to start making heads or tails of it.

First problem: the TI3410 requires firmware to be uploaded when it's connected, which means a lot of the trace is gibberish that you shouldn't really spend time staring at. On the other hand, the serial data is transferred over raw URBs (USB Request Blocks), so once the firmware is set up, the I/O log is just what I need. So, scroll away until something that looks like ASCII data comes up (not all serial protocols are ASCII of course; the Ultra Mini uses a binary protocol, so identifying that would have been trickier, but it was my first guess).

ASCII data found on the capture

Now with a bit of backtracking I can identify the actual commands: $xmem, $colq and $tim (the last one taking parameters to set the time). From here it would all be simple, right? Well, not really. The next problem was to figure out the right parameters to open the serial port. At first I tried the two "obvious" choices: 9600 baud and 115200 baud, but neither worked.

I had to dig a bit deeper. I went to the Linux driver and started fishing around for how the serial port is set up on the 3410; since the serial parameters are not encapsulated in the data URBs, I assumed there had to be a control packet, and indeed there is. Scrolling back to find it in the log gives me good results.

TI3410 configuration data

While the kernel has code to set up the config buffer, it obviously doesn't have a parser, so it's a matter of reading it correctly. The bRequest = 05h in the Setup Packet corresponds to the TI_SET_CONFIG command in the kernel, so that's the packet I need. The raw data is the content of the configuration structure, which declares a standard 8N1 serial format, although the 0x0030 value set for the baud rate is unexpected…

Indeed the kernel has a (complicated) formula to figure out the right value for that element, based on the actual baud rate requested, but reversing it is cumbersome. Luckily, checking the datasheet of the USB-to-serial converter I linked earlier, I can find in Section 5.5.7.11 a description of that configuration structure value, and a table that provides the expected values for the most common baud rates; 0x0030 sets a rate close to 19200 (within 0.16% error), which is what we need to know.

It might be a curious number to choose for a USB-to-serial adapter, but a quick chat with colleagues tells me that in the early '90s this was actually the safest, fastest speed you could set for many serial ports in many operating systems. Why this is still the case for a device that clearly uses USB is a different story.

So now I have some commands to send to the device, and I get some answers back, which is probably a good starting point. From there on, it's a matter of writing the code to send the commands and parse the output… almost.
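As an aside, here is a minimal sketch of what such an exchange could look like from Python with pyserial, using the parameters worked out above (19200 baud, 8N1). The device node, the CR/LF command termination and the read-until-timeout loop are my own assumptions, not confirmed protocol details.

# Minimal sketch of a FreeStyle Optium-style exchange over the serial cable,
# using the parameters worked out above (19200 baud, 8N1).
# Assumptions: /dev/ttyUSB0 as the device node, CR/LF command termination,
# and a simple read-until-timeout loop; none of these are confirmed details.
import serial

def send_command(port, command):
    """Send one $-command and collect whatever lines come back."""
    port.write(command.encode('ascii') + b'\r\n')
    lines = []
    while True:
        line = port.readline()  # returns b'' once the read timeout expires
        if not line:
            break
        lines.append(line.decode('ascii', errors='replace').rstrip('\r\n'))
    return lines

if __name__ == '__main__':
    with serial.Serial('/dev/ttyUSB0', baudrate=19200,
                       bytesize=serial.EIGHTBITS,
                       parity=serial.PARITY_NONE,
                       stopbits=serial.STOPBITS_ONE,
                       timeout=2) as meter:
        # $colq should return the device information block,
        # $xmem the stored readings (per the capture described above).
        for response_line in send_command(meter, '$colq'):
            print(response_line)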

One thing that I'm still fighting with is that sometimes it takes a lot of tries for the device to answer me, whereas the software seems to identify it in a matter of seconds. As far as I can tell, this happens because the Windows driver keeps sending the same exchange over the serial port to see if a device is actually connected, since there are no hotplug notifications to wake it up, and, as far as I can see, it's the physical insertion of the device that wakes it up. Surprisingly, though, sometimes I read back from the serial device the same string I just sent. I'm not sure what to make of that.

One tidbit of interesting information is that there are at least three different formats for dates as provided by the device. One is provided in response to the $colq command (which provides the full information of the device), one at the start of the response to the $xmem command, and another one in the actual readings. With the exception of the first, they match the formats described by Alexander, including the quirk of using three-letter abbreviations for months… except for June and July. I'm still wondering what was in their coffee when they decided on this date format. It doesn't seem to make sense to me.

Anyway, I have added support to glucometerutils and wrote a specification for it. If you happen to have a similar device but for a non-UK or Irish market, please let me know what the right strings should be to identify the mg/dL values.

And of course, if you feel like contributing another specification to my repository of protocols I'd be very happy!

February 08, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

In the time between Enigma and FOSDEM, I have been writing some musings on reverse engineering, to the point that I intended to spend a weekend playing with an old motherboard to have it run Coreboot. I decided to refocus instead; while I knew the exercise would be pointless (among other things, because coreboot does purge obsolete motherboards fairly often), and I was interested in it only to prove to myself that I had the skills to do it, I found that there was something else I should be reverse engineering that would have actual impact: my glucometers.

If you follow my blog, you know I have written about diabetes before, and in particular about my Abbott FreeStyle Optium and the LifeScan OneTouch Verio, both of which lack a publicly available protocol definition, though their manufacturers make custom proprietary software available for them.

Unsurprisingly, if you're at least familiar with the quality level of consumer-oriented healthcare related software, the software is clunky, out of date, and barely working on modern operating systems. Which is why the simple, almost spartan, HTML reports generated by the Accu-Chek Mobile are a net improvement over using that software.

The OneTouch software in particular has not been updated in a long while, and is still not a Unicode Windows application. This would be fine, if it weren't that it also decided that my "sacrificial laptop" had incompatible locale settings, and forced me to spend a good half hour trying to configure it in a way that it found acceptable. It also requires a separate download for "drivers", totalling over 150MB of installers. I'll dig into the software separately as I describe my odyssey with the Verio, but I'll add this in: since the installation of the "drivers" is essentially a sequence of separate installs of both kernel-space drivers and userland libraries, it is not completely surprising that one of them fails; I forgot which command returned the error, but something used by .NET has removed the parameters that are being used during the install, so at least one of the meters would not work under Windows 10.

Things are even more interesting for FreeStyle Auto-Assist, the software provided by Abbott. The link goes to the Irish website (given I live in Dublin), though it might redirect you to a more local website: Abbott probably thinks there is no reason for someone living in the Republic to look at an imperialist website, so even if you click on the little flag on the top-right, it will never send you to the UK website, at least coming from an Irish connection… which means that to see the UK version I need to use TunnelBear. No worries though, because no matter whether you're Irish or British, the moment you try to download the software you're presented with a 404 Not Found page (at least as of writing, 2016-02-06). I managed to get a copy of the software from their Australian website instead.

As an aside, I have been told about a continuous glucose meter from Abbott some time ago, which looked very nice, as the sensor seemed significantly smaller than other CGMs I've seen — unfortunately when I went to check on the (UK) website, its YouTube promotional and tutorial videos were region-locked away from me. Guess I won't be moving to that meter any time soon.

I'll be posting some more rants about the problems of reverse engineering these meters as I get results or frustrations, so hang tight if you're curious. And while I don't usually like telling people to share my posts, I think for once it might be beneficial to spread the word that diabetes care needs better software. So if you feel like sharing this or any other of my posts on the subject, please do so!

Michał Górny a.k.a. mgorny (homepage, bugs)
A quick note on portable shebangs (February 08, 2016, 12:57 UTC)

While at first shebangs may seem pretty obvious and well supported, there are a number of not-so-well-known portability issues affecting them. During my recent development work alone, I have hit more than one of them. For this reason, I'd like to write a quick note summarizing how to stay on the safe side and keep your scripts working across various systems.

Please note I will only cover the basic solution to the most important portability issues. If you’d like to know more about shebang handling in various systems, I’d like to recommend you an excellent article ‘The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours’ by Sven Mascheck.

So, in order to stay portable you should note that:

  1. Many systems (Linux included!) have limits on shebang length. If you exceed this length, the kernel will cut the shebang in the middle of a path component, and usually try to execute the script with the partial path! To stay safe you need to keep shebang short. Since you can’t really control where the programs are installed (think of Prefix!), you should always rely on PATH lookups.
  2. Shebangs do not have built-in PATH lookups. Instead, you have to use the /usr/bin/env tool which performs the lookup on its argument (the exact path is mostly portable, with a few historical exceptions).
  3. Different systems split parameters in shebangs differently. In particular, Linux splits on the first space only, passing everything following it as a single parameter. To stay portable, you can not pass more than one parameter, and it can not contain whitespace. Which — considering the previous points made — means the parameter is reserved for program name passed to env, and you can not pass any actual parameters.
  4. Shebang nesting (i.e. referencing an interpreted script inside a shebang) is supported only by some systems, and only to some extent. For this reason, shebangs need to reference actual executable programs. However, using env effectively works around the issue since env is the immediate interpreter.

A few quick examples:

#!/usr/bin/env python  # GOOD!

#!python  # BAD: won't work

#!/usr/bin/env python -b  # BAD: it may try to spawn program named 'python -b'

#!/usr/bin/python  # BAD: absolute path is non-portable, also see below

#!/foo/bar/baz/usr/bin/python  # BAD: prefix can easily exceed length limit

#!/usr/lib/foo/foo.sh  # BAD: calling interpreted scripts is non-portable
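
As a companion to these examples, here is a small, purely illustrative Python sketch that checks a script's shebang against the rules above. The 127-byte default limit is the historical Linux value and differs between kernels and other systems, so treat it as a conservative assumption rather than a precise figure.

#!/usr/bin/env python
# Illustrative checker for the portability rules above. The length limit is
# configurable because it differs between systems; 127 bytes is a
# conservative value matching the historical Linux behaviour.
import sys

def check_shebang(path, max_length=127):
    problems = []
    with open(path, 'rb') as script_file:
        first_line = script_file.readline().rstrip(b'\r\n')
    if not first_line.startswith(b'#!'):
        return ['no shebang at all']
    if len(first_line) > max_length:
        problems.append('shebang longer than %d bytes' % max_length)
    parts = first_line[2:].split()
    if not parts:
        return ['empty shebang']
    if parts[0] != b'/usr/bin/env':
        problems.append('does not use /usr/bin/env for PATH lookup')
    if len(parts) > 2:
        problems.append('more than one argument (non-portable splitting)')
    return problems

if __name__ == '__main__':
    for script in sys.argv[1:]:
        for problem in check_shebang(script) or ['looks portable']:
            print('%s: %s' % (script, problem))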

February 07, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I'm currently looking to reverse engineer at least some support for OneTouch Verio and FreeStyle Optium devices I own (more on that once I have something to talk about I guess.)

While doing this I figured out that there are at least two more projects for handling glucometers in the open source world: GGC, which despite its '90s SourceForge website seems to be fairly active, and OpenGlucose. I know about them, and I looked at their websites, but I'm not particularly keen to look into, or contribute to, their codebases (except for the build system). The reason is to be found in my own glucometerutils project.

When I started working on it, I very explicitly wanted to license it with the most permissive license that I was able to. I should probably have documented why I wanted to do that, but I guess it's better late than never.

The Python code I wrote is designed to support multiple glucometers, although it realistically supports only a couple, and it's very rough: it only allows you to download the data off the reader, clear it, or set the time (the latter being probably the most useful part of it). I was really hoping that by adding support for multiple readers, someone else with more of a UI/UX background than me would help by building a proper analysis UI for the data it downloads, but this has not happened (while I see GGC at least has some UI, though in Java, and I expect OpenGlucose to have something, too). Unfortunately, the fact that even LifeScan stopped providing the protocol documentation for their meters makes it very unlikely to ever take off.
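
To give an idea of the shape of such code, here is a sketch of what a minimal low-level driver interface in that spirit could look like. The class and method names are illustrative only and do not necessarily match the project's actual API.

# Hypothetical sketch of a low-level glucometer driver interface, in the
# spirit described above: download readings, clear memory, set the clock.
# Names are illustrative only and not the project's actual API.
import abc
import datetime

class GlucometerDriver(abc.ABC):
    @abc.abstractmethod
    def get_readings(self):
        """Yield (timestamp, value_mg_dl, comment) tuples stored on the meter."""

    @abc.abstractmethod
    def zero_log(self):
        """Clear the meter's stored readings."""

    @abc.abstractmethod
    def set_datetime(self, new_time):
        """Set the meter's clock and return the time it reports afterwards."""

# A UI or analysis tool would then depend only on this interface, e.g.:
#
#     driver = SomeMeterDriver('/dev/ttyUSB0')   # hypothetical concrete driver
#     driver.set_datetime(datetime.datetime.now())
#     for timestamp, value, comment in driver.get_readings():
#         print(timestamp, value, comment)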

But even after that, my idea was still to be able to build a permissive low-level access library for different glucometers, and the reason is mostly philosophical. While I love Free Software, I think that enabling anybody to build a better diabetes management software, whether Free or not, is a net win in the fight against diabetes.

Sure, I would be enthusiastic if such software were to be built as Free Software, but I don't want to hold my breath for that: the healthcare industry is known for not spending much time caring for the end user (more on that in future posts). On the other hand, having a base interface that can be contributed to without having to open any business logic could entice some company to give back at least the base interface for the glucometers.

Two years in, I'm thinking I made the wrong decision. Right now this difference in philosophy just makes the landscape very fragmented, with GGC having the most device support (but relying on Java, which is a real problem for people like me who are banned from having it installed on their work computers) and a decent UI, even though it's very hard to find out about it, and a website that reminds me a lot of the '90s, as I said earlier.

I think what I should be doing now is translating that Python code into human-readable specifications (since the official specs coming from OneTouch that I used to implement it are overly complicated), and release those under CC0. After that, I can probably contribute support for those meters to OpenGlucose.

As for the stuff I'm reverse engineering now, I think I'll essentially do the same: my Python script would be a nice proof of concept for the results, then I can write the specs down and contribute support back, so that there is at least one less project trying to be fully functional on its own.

February 04, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

http://www.akhuettel.de/publications/remo.pdf
We're happy to be able to announce that our manuscript "Co-sputtered MoRe thin films for carbon nanotube growth-compatible superconducting coplanar resonators" has just been accepted for publication in Nanotechnology.
For quite some time we have been working on techniques to combine ultra-clean carbon nanotubes and their regular electronic spectrum with superconducting material systems. One of our objectives is to perform high-frequency measurements on carbon nanotube nano-electromechanical systems at millikelvin temperatures. With this in mind we have established the fabrication and characterization of compatible superconducting coplanar resonators in our research group. A serious challenge here was that the high-temperature process of carbon nanotube growth destroys most metal films, or if not, at least lowers the critical temperature Tc of superconductors so much that they are not useful anymore.
In the present manuscript, we demonstrate deposition of a molybdenum-rhenium alloy of variable composition by simultaneous sputtering from two sources. We characterize the resulting thin films using x-ray photoelectron spectroscopy, and analyze the saturation of the surface layers with carbon during the nanotube growth process. Low-temperature dc measurements show that specifically an alloy of composition Mo20Re80 remains very stable during this process, with large critical currents and critical temperatures even rising up to Tc~8K. We use this alloy to fabricate coplanar resonator structures and demonstrate resonant behaviour at gigahertz frequencies, with quality factors up to Q~5000, even after the high-temperature nanotube growth process. The temperature-dependent behaviour shows that our devices are well described by Mattis-Bardeen theory, in combination with dissipation by two-level systems in the dielectric substrate.

"Co-sputtered MoRe thin films for carbon nanotube growth-compatible superconducting coplanar resonators"
K. J. G. Götz, S. Blien, P. L. Stiller, O. Vavra, T. Mayer, T. Huber, T. N. G. Meier, M. Kronseder, Ch. Strunk, and A. K. Hüttel
accepted for publication in Nanotechnology; arXiv:1510.00278 (PDF)

February 03, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Recently, I needed to get into a client’s computer (running Windows 8) in order to fix a few problems. Having forgotten to ask for a most obvious piece of needed information (the account password), I just decided to get around it. The account that he was using on a daily basis was tied to a Microsoft Live account instead of being local to the machine. So, instead of changing that account password, I chose to activate the local Windows administrator account and change the password for it. This method was tested on Windows 7 and Windows 8, but it should work on all modern versions of Windows (including XP, Vista, Windows 7, Windows 8, Windows 8.1, and Windows 10).

Before jumping into the procedure, you’ll want to grab a copy of a Linux live CD. You can really use any distribution, but I prefer the SystemRescueCD, because it is simple, lightweight, and based on Gentoo (my preferred distribution). There are instructions on that site for burning SysRescCD to a CD, or installing it on a USB drive. It would also be helpful for you to know the basics of the Linux CLI, but in case you don’t, I’ve tried to use exact commands as much as possible. Now that you’re ready, here are the steps:

  • Boot the System Rescue CD (or any Linux live CD of your choice)
  • Find the disk partition that contains the Windows installation (probably on the primary disk, which is /dev/sda):
    • fdisk -l /dev/sda
    • Look for the partition that has a type of “Microsoft basic data” or “HPFS/NTFS/exFAT”, or note that it is likely the largest partition (probably a few hundred GB or more) on the drive
    • For the sake of ease, we’re going to say that’s /dev/sda5, but anywhere you see that code in the following steps, replace it with the partition that you actually found with fdisk
  • Make a temporary directory for Windows, fix the Windows hibernation problem, and mount the partition:
    • mkdir -p /mnt/win/
      ntfsfix /dev/sda5
      ntfs-3g -o remove_hiberfile /dev/sda5 /mnt/win/
    • NOTE: Don’t run the ntfsfix command or use the -o remove_hiberfile option unless you are unable to mount the partition due to an error like:

      The disk contains an unclean file system (0, 0).
      Metadata kept in Windows cache, refused to mount.
      Failed to mount ‘/dev/sda5’: Operation not permitted
      The NTFS partition is in an unsafe state. Please resume and shutdown
      Windows fully (no hibernation or fast restarting), or mount the volume
      read-only with the ‘ro’ mount option.

      Otherwise, the Microsoft filesystem check may run when you boot back into Windows (which isn’t usually a big deal, but will take some time to run).

  • Go into the Windows system folder, swap some executable files, and get out of there:
    • cd /mnt/win/Windows/System32/
      mv cmd.exe cmdREAL.exe && mv sethc.exe sethcREAL.exe
      cp -v cmdREAL.exe sethc.exe
      cd ~ && sync && umount /mnt/win/
      init 0
  • The last command shuts down the system. Now, remove the CD or USB drive from the system, so that you can boot into Windows.
  • In the lower-left corner, click on the “Ease of Access” icon, which looks like this:
    • Windows Ease of Access icon
  • Turn on the “Sticky keys” option
  • Press the Shift key five times, and that will bring up the command prompt
  • At this point you have two options. If there is a local account you want to change, follow option 1. If there are only Microsoft Live (remote) accounts, you can enable the local Administrator account by following option 2.
  • 1. Changing the password for a local user:
    • Type net user to see a list of available user accounts
    • Type net user $USERNAME * (replacing $USERNAME with the desired username), and follow the prompts to set the password for that local user
    • NOTE: You can just hit the enter key if you want an empty password.
  • 2. Enabling the local Administrator account, and setting the password
    • Type net user administrator /active:yes to activate the local Administrator account
    • Type net user administrator * and follow the prompts to set the password for the local Administrator
    • NOTE: You can just hit the enter key if you want an empty password.
  • Now that you’ve taken care of the password, reboot the computer back into the System Rescue CD
  • Make a temporary directory for Windows, fix the Windows hibernation problem, and mount the partition:
    • mkdir -p /mnt/win/
      ntfsfix /dev/sda5
      ntfs-3g -o remove_hiberfile /dev/sda5 /mnt/win/
  • Undo the sethc.exe and cmd.exe changes:
    • cd /mnt/win/Windows/System32/
      rm -fv sethc.exe && mv cmdREAL.exe cmd.exe && mv sethcREAL.exe sethc.exe
      cd ~ && sync && umount /mnt/win
      init 0

Now when you power on the computer again (back into Windows), your new password(s) will be in place. If you followed option 2 from above, you’ll also have the local Windows ‘Administrator’ account active.

Hope the information helps!

Cheers,
Zach

January 31, 2016
Gentoo at FOSDEM 2016 (January 31, 2016, 13:00 UTC)

Gentoo Linux was present at this year's Free and Open Source Developer European Meeting (FOSDEM). For those not familiar with FOSDEM it is a conference that consists of more than 5,000 developers and more than 600 presentations over a two-day span at the premises of the Université libre de Bruxelles. The presentations are both streamed … Continue reading "Gentoo at FOSDEM 2016"

January 30, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo at FOSDEM: Posters (systemd, arches) (January 30, 2016, 15:24 UTC)

Especially after Lennart Poettering made some publicity for Gentoo Linux in his keynote talk (which I unfortunately missed due to other commitments), we've had a lot of visitors at our FOSDEM booth. So, by popular demand, here are the files for our posters again. They are based on the great "Gentoo Abducted" design by Matteo Pescarin. Released under CC BY-SA 2.5, like the original. Enjoy!



PDF SVG


PDF SVG

January 29, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Stage4 tarballs, minimal and cloud (January 29, 2016, 06:00 UTC)

Where are they

The tarballs can be found in the normal place.

Minimal

This is meant to be just what you need to boot: the disk won't expand itself, and it won't even get networking info or set any passwords for you (there is no default password).

This tarball is supposed to be the base from which you generate more complex images; it is what is going to be used by OpenStack's diskimage-builder.

The primary thing it does is get you a kernel, a bootloader and sshd.

stage4-minimal spec

Cloud

This was primarily targeted for use with OpenStack, but it should work with Amazon as well; both use cloud-init.

Network interfaces are expected to use DHCP; a couple of other useful things are installed as well: syslog, logrotate, etc.

By default cloud-init will take data (keys mainly) and set them up for the 'gentoo' user.

stage4-cloud spec

Next

I'll be posting about the work being done to take these stages and build bootable images. At the moment I do have images available here.

openstack images