
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Peter Wilmott
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
February 10, 2016, 10:06 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

February 09, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

As I said in previous posts, I have decided to spend some time reverse engineering the remaining two glucometers I had at home for which the protocol is not known. The OneTouch Verio is proving to be a complex problem, but the FreeStyle Optium proved itself much easier to deal with, if nothing else because it clearly speaks a serial protocol. Let's line up all the ducks to get to the final (mostly) working state.

Alexander Schrijver already reverse engineered the previous Freestyle protocol, but that does not work with this model at all. As I'll say later, it's still a good thing to keep this at hand.

The "strip-port" cable that Abbott sent me uses a Texas Instrument USB-to-Serial converter chip, namely the TIUSB3410; it's supported by the Linux kernel just fine by itself, although I had to fix the kernel to recognize this particular VID/PID pair; anything after v3.12 will do fine. As I found later on, having the datasheet at hand is a good idea.

To reverse engineer a USB device, you generally start by snooping a session on Windows, to figure out what the drivers and the software tell the device and what they get back. Unfortunately usbsnoop – the open source Windows USB snooper of choice – has not been updated in a few years and does not support Windows 10 at all. So I had to search harder for one.

Windows 7 and later support USB event logging through ETW natively, and thankfully Microsoft more recently understood that those instructions are way too convoluted, so they now provide an updated guide based on Microsoft Message Analyzer, which appears to be their answer to Wireshark. Try as I might, I have not been able to get MMA to provide me useful information: it shows the responses from the device just fine, but it does not show me the commands as sent by the software, making it totally useless for the purpose of reverse engineering. I'm not sure if that's by design or me not understanding how it works and missing some settings.

A quick look around pointed me at USBlyzer, which is commercial software, but it both offers a complete free trial and has an affordable price ($200), at least now that I'm fully employed, that is. So I decided to try it out, and while the UI is not as advanced as MMA's, it does the right thing and shows me all the information I need.

Start of capture with USBlyzer

Now that I have a working tool to trace the USB inputs and outputs, I recorded a log while opening the software – actually, it auto-starts – downloading the data, checking the settings and changing the time. Now it's time to start making heads or tails of it.

First problem: the TI3410 requires firmware to be uploaded when it's connected, which means a lot of the trace is gibberish that you shouldn't really spend time staring at. On the other hand, the serial data is transferred over raw URBs (USB Request Blocks), so once the firmware is set up, the I/O log is just what I need. So, scroll away until something that looks like ASCII data comes up (not all serial protocols are ASCII of course; the Ultra Mini uses a binary protocol, so identifying that would have been trickier, but it was my first guess).

ASCII data found on the capture

Now with a bit of backtracking I can identify the actual commands: $xmem, $colq and $tim (the last one taking parameters to set the time). From here it would all be simple, right? Well, not really. The next problem is to figure out the right parameters to open the serial port. At first I tried the two "obvious" speeds: 9600 baud and 115200 baud, but neither worked.

I had to dig a bit more. I went to the Linux driver and started fishing around for how the serial port is set up on the 3410 — given the serial port configuration is not encapsulated in the data URBs, I assumed there had to be a control packet, and indeed there is. Scrolling back to find it in the log gave me good results.

TI3410 configuration data

While the kernel has code to set up the config buffer, it obviously doesn't have a parser, so it's a matter of reading it correctly. The bRequest = 05h in the Setup Packet corresponds to the TI_SET_CONFIG command in the kernel, so that's the packet I need. The raw data is the content of the configuration structure, which declares a standard 8N1 serial format, although the 0x0030 value set for the baud rate is unexpected…

Indeed the kernel has a (complicated) formula to figure out the right value for that element, based on the actual baud rate requested, but reversing it is not straightforward. Luckily, checking the datasheet of the USB-to-serial converter I linked earlier, I can find in Section 5.5.7.11 a description of that configuration structure value, and a table that provides the expected values for the most common baud rates; 0x0030 sets a rate close to 19200 (within 0.16% error), which is what we need to know.

It might be a curious number to choose for a USB-to-serial adapter, but a quick chat with colleagues tells me that in the early '90s this was actually the safest, fastest speed you could set for many serial port implementations in many operating systems. Why this is still the case for a device that clearly uses USB is a different story.

So now I have some commands to send to the device, and I get some answers back, which is probably a good starting point. From there on, it's a matter of writing the code to send the commands and parse the output… almost.
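
For the record, poking at the port by hand is only a few lines of shell; this is a rough sketch rather than the actual glucometerutils code, and it assumes the cable shows up as /dev/ttyUSB0 and that commands are terminated by CRLF, as they appear in the capture:

# configure the TI3410 port for raw 19200 8N1 (the device node is an assumption)
stty -F /dev/ttyUSB0 raw 19200 cs8 -parenb -cstopb
# ask the meter for its device information block (CRLF terminator is an assumption too)
printf '$colq\r\n' > /dev/ttyUSB0
# dump whatever ASCII response comes back for a few seconds
timeout 5 cat /dev/ttyUSB0
# (a real implementation keeps the port open across the exchange; this is just for poking at it)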

One thing that I'm still fighting with is that sometimes it takes a lot of tries for the device to answer me, whereas the software seems to identify it in a matter of seconds. As far as I can tell, this happens because the Windows driver keeps sending the same exchange over the serial port, to see if a device is actually connected — since there are no hotplug notifications to wake it up, it seems to be the physical insertion of the device that does wake it up. Surprisingly though, sometimes I read back from the serial device the same string I just sent. I'm not sure what to make of that.

One tidbit of interesting information is that there are at least three different formats for dates as provided by the device. One is provided in response to the $colq command (which provides the full information of the device), one at the start of the response for the $xmem command, and another one in the actual readings. With the exception of the first, they match the formats described by Alexander, including the quirk of using three-letter abbreviations for months… except for June and July. I'm still wondering what was in their coffee when they decided on this date format. It doesn't seem to make sense to me.

Anyway, I have added support to glucometerutils and wrote a specification for it. If you happen to have a similar device but for a non-UK or Irish market, please let me know what the right strings should be to identify the mg/dL values.

And of course, if you feel like contributing another specification to my repository of protocols I'd be very happy!

February 08, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

In the time between Enigma and FOSDEM, I have been writing some musings on reverse engineering, to the point that I intended to spend a weekend playing with an old motherboard to have it run Coreboot. I decided to refocus instead: while I knew the exercise would be pointless (among other things, because Coreboot purges obsolete motherboards fairly often), and I was interested in it only to prove to myself that I had the skills to do it, I found that there was something else I should be reverse engineering that would have actual impact: my glucometers.

If you follow my blog, you know I have written about diabetes, and in particular about my Abbott FreeStyle Optium and the LifeScan OneTouch Verio, both of which lack a publicly available protocol definition, though the manufacturers make custom proprietary software available for them.

Unsurprisingly, if you're at all familiar with the quality level of consumer-oriented healthcare software, that software is clunky, out of date, and barely works on modern operating systems. Which is why the simple, almost spartan, HTML reports generated by the Accu-Chek Mobile are a net improvement over using it.

The OneTouch software in particular has not been updated in a long while, and is still not a Unicode Windows application. This would be fine, if it weren't that it also decided that my "sacrificial laptop" had incompatible locale settings, and forced me to spend a good half hour trying to configure it in a way that it found acceptable. It also requires a separate download for "drivers" totalling over 150MB of installers. I'll dig into the software separately as I describe my odyssey with the Verio, but I'll add this in: since the installation of the "drivers" is essentially a sequence of separate installs for both kernel-space drivers and userland libraries, it is not completely surprising that one of those fails — I forgot which command returned the error, but something used by .NET no longer accepts the parameters that are passed during the install, so at least one of the meters would not work under Windows 10.

Things are even more interesting for FreeStyle Auto-Assist, the software provided by Abbott. The link goes to the Irish website (given I live in Dublin), though it might redirect you to a more local website: Abbott probably thinks there is no reason for someone living in the Republic to look at an imperialist website, so even if you click on the little flag on the top-right, it will never send you to the UK website, at least coming from an Irish connection… which means that to see the UK version I need to use TunnelBear. No worries though, because no matter whether you're Irish or British, the moment you try to download the software, you're presented with a 404 Not Found page (at least as of writing, 2016-02-06) — I managed to get a copy of the software from their Australian website instead.

As an aside, I have been told about a continuous glucose meter from Abbott some time ago, which looked very nice, as the sensor seemed significantly smaller than other CGMs I've seen — unfortunately when I went to check on the (UK) website, its YouTube promotional and tutorial videos were region-locked away from me. Guess I won't be moving to that meter any time soon.

I'll be posting some more rants about the problems of reverse engineering these meters as I get results or frustration, so hang tight if you're curious. And while I don't usually like telling people to share my posts, I think for once it might be beneficial to spread the word that diabetes care needs better software. So if you feel like sharing this or any other of my posts on the subject, please do so!

Michał Górny a.k.a. mgorny (homepage, bugs)
A quick note on portable shebangs (February 08, 2016, 12:57 UTC)

While at first shebangs may seem pretty obvious and well supported, there are a number of not-so-well-known portability issues affecting them. During my recent development work alone, I have hit more than one of them. For this reason, I’d like to write a quick note summarizing how to stay on the safe side and keep your scripts working across various systems.

Please note I will only cover the basic solution to the most important portability issues. If you’d like to know more about shebang handling in various systems, I’d like to recommend you an excellent article ‘The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours’ by Sven Mascheck.

So, in order to stay portable you should note that:

  1. Many systems (Linux included!) have limits on shebang length. If you exceed this length, the kernel will cut the shebang in the middle of a path component, and usually try to execute the script with the partial path! To stay safe you need to keep the shebang short. Since you can’t really control where the programs are installed (think of Prefix!), you should always rely on PATH lookups.
  2. Shebangs do not have built-in PATH lookups. Instead, you have to use the /usr/bin/env tool which performs the lookup on its argument (the exact path is mostly portable, with a few historical exceptions).
  3. Different systems split parameters in shebangs differently. In particular, Linux splits on the first space only, passing everything following it as a single parameter. To stay portable, you cannot pass more than one parameter, and it cannot contain whitespace. Which — considering the previous points — means the parameter is reserved for the program name passed to env, and you cannot pass any actual parameters.
  4. Shebang nesting (i.e. referencing an interpreted script inside a shebang) is supported only by some systems, and only to some extent. For this reason, shebangs need to reference actual executable programs. However, using env effectively works around the issue since env is the immediate interpreter.

A few quick examples:

#!/usr/bin/env python  # GOOD!

#!python  # BAD: won't work

#!/usr/bin/env python -b  # BAD: it may try to spawn program named 'python -b'

#!/usr/bin/python  # BAD: absolute path is non-portable, also see below

#!/foo/bar/baz/usr/bin/python  # BAD: prefix can easily exceed length limit

#!/usr/lib/foo/foo.sh  # BAD: calling interpreted scripts is non-portable

February 07, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I'm currently looking to reverse engineer at least some support for OneTouch Verio and FreeStyle Optium devices I own (more on that once I have something to talk about I guess.)

While doing this I figured out that there are at least two more projects for handling glucometers in the open source world: GGC, which despite its '90s SourceForge website seems to be fairly active, and OpenGlucose. I know about them, and I looked at their websites, but I'm not particularly keen to look into, or contribute to, their codebases (except for the build system.) The reason is to be found in my own glucometerutils project.

When I started working on it, I very explicitly wanted to license it with the most permissive license that I was able to. I should probably have documented why I wanted to do that, but I guess it's better late than never.

The Python code I wrote is designed to support multiple glucometers, although it realistically supports only a couple, and it's very rough, as it only allows you to download the data off the reader, clear it, or set the time (the latter being probably the most useful part of it.) I was really hoping that by adding support for multiple readers, someone else with more of a UI/UX background than me would help by building a proper analysis UI for the data it downloads, but this has not happened (while I see GGC at least has some UI, though in Java, and I expect OpenGlucose to have something, too.) Unfortunately, the fact that even LifeScan stopped providing the protocol documentation for their meters makes it very unlikely to ever take off.

But even after that, my idea was still to be able to build a permissive low-level access library for different glucometers, and the reason is mostly philosophical. While I love Free Software, I think that enabling anybody to build a better diabetes management software, whether Free or not, is a net win in the fight against diabetes.

Sure, I would be enthusiastic if such software were to be built as Free Software, but I don't want to hold my breath for that: the healthcare industry is known for not spending much time caring for the end user (more on that in future posts.) On the other hand, having a base interface that can be contributed to without having to open any business logic could entice some company to give back at least the base interface for the glucometers.

Two years in, I'm thinking I made the wrong decision. Right now this difference in philosophy just makes things very fragmented, with GGC having the most device support (but relying on Java, which is a real problem for people like me who are banned from having it installed on their work computers) and a decent UI, even though it's very hard to find out about it, and it has a website that reminds me a lot of the '90s, as I said earlier.

I think what I should be doing now is translating that Python code into human-readable specifications (since the official specs coming from OneTouch that I used to implement it are overly complicated), and release those under CC0. After that, I can probably contribute support for those meters to OpenGlucose.

As for the stuff I'm reverse engineering now, I think I'll essentially do the same: my Python script would be a nice proof of concept for the results, then I can write the specs down and contribute them back, to have at least one fewer project intending to be fully functional.

February 06, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Yes we still need autotools (February 06, 2016, 15:36 UTC)

One of the most common refrains that I hear lately, particularly when people discover Autotools Mythbuster, is that we don't need autotools anymore.

The argument goes as such: since Autotools were designed for portability on ancient systems that nobody really uses anymore, and that most of the modern operating systems have a common interface, whether that is POSIX or C99, the reasons to keep Autotools around are minimal.

This could be true… if your software does nothing that is ever platform specific. Which indeed is possible, but quite rare. Indeed, unpaper has a fairly limited amount of code in its configure.ac, as the lowest-level thing it does is read and write files. I could have easily used anything else for the build system.

But on the other hand, if you're doing anything more specific, which usually includes network I/O, you end up with a bit more of a script. Furthermore, if you don't want to pull a systemd and decide that the latest Linux version is all you want to support, you end up having to figure out alternatives, or at least conditionals for what you can and cannot use. You may not want to do like VLC, which supports anything between OS/2 and the latest Apple TV, but there is space between those extremes.

If you're a library, this is even more important. Because while it might be that you're not interested in any peculiar systems, it might very well be that one of your consumers is. Going back to the VLC example, I have spent quite a bit of time over the past weekends of this year helping the VLC project by fixing (or helping to fix) the build systems of new libraries that are made dependencies of VLC for Android.

So while we have indeed overcome the difficulties of porting across many different UNIX flavours, we still have portability concerns. I would guess it is true that we should reconsider what Autoconf tests for by default, and in particular there are some tests that are not completely suited to modern systems (for instance, the endianness tests were an obvious failure when MacIntel arrived, as you would then be building the code for both big endian (PPC) and little endian (Intel)). On the other hand, even these concerns are not important anymore, as universal binaries are already out of style.

So yes, I do think we still need portability, and I still think that not requiring a tool that depends on XML RPC libraries is a good side of autotools…

February 04, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

http://www.akhuettel.de/publications/remo.pdf
We're happy to be able to announce that our manuscript "Co-sputtered MoRe thin films for carbon nanotube growth-compatible superconducting coplanar resonators" has just been accepted for publication in Nanotechnology.
For quite some time we have been working on techniques to combine ultra-clean carbon nanotubes and their regular electronic spectrum with superconducting material systems. One of our objectives is to perform high-frequency measurements on carbon nanotube nano-electromechanical systems at millikelvin temperatures. With this in mind we have established the fabrication and characterization of compatible superconducting coplanar resonators in our research group. A serious challenge here was that the high-temperature process of carbon nanotube growth destroys most metal films, or if not, at least lowers the critical temperature Tc of superconductors so much that they are not useful anymore.
In the present manuscript, we demonstrate deposition of a molybdenum-rhenium alloy of variable composition by simultaneous sputtering from two sources. We characterize the resulting thin films using x-ray photoelectron spectroscopy, and analyze the saturation of the surface layers with carbon during the nanotube growth process. Low-temperature dc measurements show that an alloy of composition Mo20Re80 in particular remains very stable during this process, with large critical currents and critical temperatures even rising up to Tc~8K. We use this alloy to fabricate coplanar resonator structures and demonstrate, even after a nanotube-growth high-temperature process, resonant behaviour at gigahertz frequencies with quality factors up to Q~5000. Observation of the temperature-dependent behaviour shows that our devices are well described by Mattis-Bardeen theory, in combination with dissipation by two-level systems in the dielectric substrate.

"Co-sputtered MoRe thin films for carbon nanotube growth-compatible superconducting coplanar resonators"
K. J. G. Götz, S. Blien, P. L. Stiller, O. Vavra, T. Mayer, T. Huber, T. N. G. Meier, M. Kronseder, Ch. Strunk, and A. K. Hüttel
accepted for publication in Nanotechnology; arXiv:1510.00278 (PDF)

February 03, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Hardware review: comparison of Bose QC15 and QC20 (February 03, 2016, 04:22 UTC)

As I wrote in On the conference circuit I have been traveling a whole lot in the past couple of years, even though I used to be terrified of the idea. Because of that, I also tried looking for every escape hatch from all the bothersome parts of traveling that I could get to, within my budget — which does mean I don't usually travel business class, though sometimes I do.

One of the earliest things I wanted to address was the headache caused by a long-haul flight. Part of the reason for the headache is directly the hum of the engines, but even more so than that, the problem was due to me cranking the volume up on audiobooks or podcasts I listened to, just to make sure I could hear them through said hum. The obvious answer was to be found in noise-cancelling headphones, so on my birthday, on a trip to Las Vegas, I bought myself a pair of Bose QC15 (no longer manufactured.)

This was a definite lifesaver for me, particularly as the number of flights I took afterwards kept increasing steadily, and I found in these headphones the only way to sleep on planes. I really wish I had had these when I was still living in Italy, particularly as all repeating noises, including lawnmowers and safety warning alarms, can be cancelled very nicely — and these were the primary complaints I had when living back there, particularly during the summer.

Unfortunately, as with everything in life, these were not perfect, and in particular they relied on making a good seal around your ears, which is perfectly feasible… unless you wear glasses. Indeed, depending on the model of glasses I wore, the seal would range from imperfect to completely missing. This became more of an issue when I started flying Dublin to San Francisco non-stop, as that's quite a long flight.

A second problem became more apparent as I managed to get a few more business class trips (either through bids for upgrades, upgrades with miles, or just a random stroke of luck on the fare when booking.) When I sleep I tend to turn my head to the side, even more so on a plane because of the lights usually being visible in the aisle. When that happens, if the ear cup ends up touching the seat, the noise cancelling gets completely thrown off by the vibration, and stops working altogether.

So last year I decided it was a good time to get a new pair, this time as in-ear earphones, Bose QC20, and I found the improvement worthwhile (of course, it's still a matter of budget.)

While the actual noise-cancelling is stronger on the QC15 with a good seal (as in, when I'm not wearing glasses), the QC20 provide a better result in a plane when wearing glasses. This makes them much more suitable for the usage pattern I have, but I guess for those who don't need to wear glasses, and who don't travel as much, the QC25 might still be a better option.

Compared to the '15, the '20 have the drawback of requiring the battery to be charged; luckily it has a micro-B USB connector, so it does not require any special cable. My previous pair is powered by a simple AAA battery, so I just kept one or two spares in the headphones' case. This was also convenient because that is the same type of battery that my glucometer uses. On the other hand, the '20s work fine even without being powered, though without the noise cancelling, of course.

Because of the nature of the earphones, they are also much more practical to carry: the case is many times smaller and easily fits in my pocket, while the previous one would stay in my backpack until I got to the plane. They are also more discreet (even with the bright aqua-coloured stripe mine have), which means I have fewer qualms about using them on the street here in Dublin (I have heard stories about fancy headphones on the street here, but that's probably paranoia.)

If you wonder why I use these on the street, while they don't do much good to get rid of the cars themselves, they do take care of two major problems when trying to listen to Audiobooks while walking around the city: the wind (in the winter it can be quite nasty and noisy), and the wheels on the asphalt. Probably due to differences in amounts of rain, I can listen to audiobooks on normal earphones in California, but not so much over here. And I'd rather not crank up the volume on the earphones on the road, as it would cover important safety noises, such as the car trying to run you over.

An interesting factoid about using noise-cancelling headphones during flights can be added to the list of not-directly-logical aspects of traveling. If you read the Wikipedia page I linked earlier on, you can read:

In the aviation environment, noise-cancelling headphones increase the signal-to-noise ratio significantly more than passive noise attenuating headphones or no headphones, making hearing important information such as safety announcements easier.

In reality, due to the practical difficulty for the cabin crew of telling what kind of headphones you're wearing (although you'd expect the QC15 to be a very common sight nowadays), on many flights I've been asked to take the headphones off during the safety demonstration. In the case of Aer Lingus (which is, by virtue of being based in Dublin, my airline of choice at least for "local" European flights), they allow you to keep "earbuds type headphones" on, which is another good reason for me to use the '20. Other airlines frown upon those as well.

The unfortunate bit is that Bose now requires you to choose your allegiance upfront. The QC15 came with a generic cable without controls and a cable with controls for Apple devices, while letting you buy the microphone-and-controls version for "Samsung" (really, Android), so you could pick the right cable for the device. The QC20 and QC25 only come with one cable each, and you need to choose which one to get the moment you buy them. I have the Android version, even though I also own an iPod Touch.

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Recently, I needed to get into a client’s computer (running Windows 8) in order to fix a few problems. Having forgotten to ask for a most obvious piece of needed information (the account password), I just decided to get around it. The account that he was using on a daily basis was tied to a Microsoft Live account instead of being local to the machine. So, instead of changing that account password, I chose to activate the local Windows administrator account and change the password for it. This method was tested on Windows 7 and Windows 8, but it should work on all modern versions of Windows (including XP, Vista, Windows 7, Windows 8, Windows 8.1, and Windows 10).

Before jumping into the procedure, you’ll want to grab a copy of a Linux live CD. You can really use any distribution, but I prefer the SystemRescueCD, because it is simple, lightweight, and based on Gentoo (my preferred distribution). There are instructions on that site for burning SysRescCD to a CD, or installing it on a USB drive. It would also be helpful for you to know the basics of the Linux CLI, but in case you don’t, I’ve tried to use exact commands as much as possible. Now that you’re ready, here are the steps:

  • Boot the System Rescue CD (or any Linux live CD of your choice)
  • Find the disk partition that contains the Windows installation (probably on the primary disk, which is /dev/sda):
    • fdisk -l /dev/sda
    • Look for the partition that has a type of “Microsoft basic data” or “HPFS/NTFS/exFAT”; it is also likely to be the largest partition (probably a few hundred GB or more) on the drive
    • For the sake of ease, we’re going to say that’s /dev/sda5, but anywhere you see that code in the following steps, replace it with the partition that you actually found with fdisk
  • Make a temporary directory for Windows, fix the Windows hibernation problem, and mount the partition:
    • mkdir -p /mnt/win/
      ntfsfix /dev/sda5
      ntfs-3g -o remove_hiberfile /dev/sda5 /mnt/win/
    • NOTE: Don’t run the ntfsfix command or use the -o remove_hiberfile option unless you are unable to mount the partition due to an error like:

      The disk contains an unclean file system (0, 0).
      Metadata kept in Windows cache, refused to mount.
      Failed to mount ‘/dev/sda5’: Operation not permitted
      The NTFS partition is in an unsafe state. Please resume and shutdown
      Windows fully (no hibernation or fast restarting), or mount the volume
      read-only with the ‘ro’ mount option.

      Otherwise, the Microsoft filesystem check may run when you boot back into Windows (which isn’t usually a big deal, but will take some time to run).

  • Go into the Windows system folder, swap some executable files, and get out of there:
    • cd /mnt/win/Windows/System32/
      mv cmd.exe cmdREAL.exe && mv sethc.exe sethcREAL.exe
      cp -v cmdREAL.exe sethc.exe
      cd ~ && sync && umount /mnt/win/
      init 0
  • The last command shuts down the system. Now, remove the CD or USB drive from the system, so that you can boot into Windows.
  • In the lower-left corner, click on the “Ease of Access” icon, which looks like this:
    • Windows Ease of Access icon
  • Turn on the “Sticky keys” option
  • Press the Shift key five times, and that will bring up the command prompt
  • At this point you have two options. If there is a local account you want to change, follow option 1. If there are only Microsoft Live (remote) accounts, you can enable the local Administrator account by following option 2.
  • 1. Changing the password for a local user:
    • Type net user to see a list of available user accounts
    • Type net user $USERNAME * (replacing $USERNAME with the desired username), and follow the prompts to set the password for that local user
    • NOTE: You can just hit the enter key if you want an empty password.
  • 2. Enabling the local Administrator account, and setting the password
    • Type net user administrator /active:yes to activate the local Administrator account
    • Type net user administrator * and follow the prompts to set the password for the local Administrator
    • NOTE: You can just hit the enter key if you want an empty password.
  • Now that you’ve taken care of the password, reboot the computer back into the System Rescue CD
  • Make a temporary directory for Windows, fix the Windows hibernation problem, and mount the partition:
    • mkdir -p /mnt/win/
      ntfsfix /dev/sda5
      ntfs-3g -o remove_hiberfile /dev/sda5 /mnt/win/
  • Undo the sethc.exe and cmd.exe changes:
    • cd /mnt/win/Windows/System32/
      rm -fv sethc.exe && mv cmdREAL.exe cmd.exe && mv sethcREAL.exe sethc.exe
      cd ~ && sync && umount /mnt/win
      init 0

Now when you power on the computer again (back into Windows), your new password(s) will be in place. If you followed option 2 from above, you’ll also have the local Windows ‘Administrator’ account active.

Hope the information helps!

Cheers,
Zach

January 31, 2016
Gentoo at FOSDEM 2016 (January 31, 2016, 13:00 UTC)

Gentoo Linux was present at this year's Free and Open source Software Developers' European Meeting (FOSDEM). For those not familiar with FOSDEM, it is a conference that brings together more than 5,000 developers and hosts more than 600 presentations over a two-day span at the premises of the Université libre de Bruxelles. The presentations are both streamed live and recorded, making it possible to browse the archive once it is published.

Hanno Böck, a name mostly heard in relation to the Fuzzing Project, was the only Gentoo developer presenting a talk this year, on the very important subject of security and how Gentoo can be used as a framework for running Address Sanitizer to detect security bugs: "Can we run C code and be safe?: A Linux system protected with Address Sanitizer".

For the first time in many years Gentoo had a stand this year where we handed out buttons and stickers in addition to a LiveDVD.

Gentoo Boot

The Gentoo Ten team created a hybrid amd64/x86 "FOSDEM 2016 Special Edition" release for our users' benefit (thanks likewhoa!), and 200 DVDs were printed, of which 155 were already distributed to potential users by the end of day one. A poster on the stand succinctly listed all the packages included on the LiveDVD, with some highlights of packages familiar to many users; it also showcased one of the benefits of rolling-release distributions, namely that the versions are up to date with upstream releases.

Gentoo DVD package list

If the LiveDVD is written to a USB drive instead of using the handed-out DVDs, it also offers the option of using persistence to store changes on the USB stick. It uses aufs to layer a read-write file system on top of the read-only squashfs-compressed file system. This is great, because it allows you to make changes to the LiveDVD and have those changes persist across reboots.
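
For those curious how such persistence is put together, the layering boils down to a couple of mounts; the paths below are made up for illustration and are not taken from the actual LiveDVD scripts:

# mount the compressed read-only system image
mount -o loop -t squashfs /mnt/usb/image.squashfs /mnt/ro
# stack a writable branch from the USB stick on top of it with aufs
mount -t aufs -o br=/mnt/usb/persist=rw:/mnt/ro=ro none /mnt/union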

As mentioned in a blog post by dilfridge, the stand also attracted attention because Lennart Poettering mentioned Gentoo Linux in his keynote speech as a distribution that doesn't use systemd by default. This fit nicely with one of our banners at the stand: "Gentoo Linux | Works even without systemd | choice included".

gentoo-choice

There was a lot of positive feedback from users, the stand functioned very nicely as a meeting place for all sorts of people, and the atmosphere was good throughout the conference.

10 fosdem-booth

As has become tradition, there was also a Gentoo dinner this year for developers and users (thanks xaviermiller), a nice way to meet up and discuss everything in a relaxed setting.

January 30, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

In relation to the other post on firmware, and with my recent trip to FOSDEM, I have been musing about a few things regarding reverse engineering old devices and, particularly, old firmware.

While emulators have been a thing for a very long time, lots of them are not designed to document how things worked as much as they are intended to run code (ROMs, games, whatever else) for the original platform. I can't remember many projects in my past experience with emulators that cared to provide system documentation of their reverse engineering efforts — probably because lots of those emulators were not, to begin with, open source. Indeed I remember that quite a few ended up competing with each other, particularly when Sony PlayStation emulators came to be.

The reason why I find this important is that reverse engineering a modern firmware is difficult, and yet it's the very cornerstone of validating the behaviour of software of which we don't have sources. And unfortunately we don't have the sources for lots of software right now.

Unfortunately, reverse engineering, say, the BIOS of a ten-year-old motherboard is neither glamorous nor directly useful: you can run the same software on a modern system, so why spend time on fixing things there? But on the other hand, knowing a lot more about those systems, and documenting processes and utilities, would provide insight for future analysis.

Reverse engineering and reimplementation of formats, protocols and firmware that are not publicly described or available, and providing the missing documentation, is a useful skill to have, if not a directly marketable one. Can you take an old system, dump its BIOS, figure out how all the components fit together and have it run Coreboot? That might not be by itself a very fulfilling result, but it shows clearly that you can deal with many layers of fiddly objects, in hardware and in software. To be honest, I doubt I would be able to do that myself.

I know more than a few people have asked before why something like ReactOS should have time and development energy spent on it, and I have had my doubts myself, but having the ability to study and reimplement APIs that are not published by Microsoft is definitely an advantage for the general world out there.

Take the ARM1 reverse engineering as an example. It's a very interesting article, even though ARM1 is an absolutely obsolete technology by now. Its usefulness on the practical scale is close to zero, but its usefulness as a teaching device is huge.

We need more of that, and more published works of it.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Gentoo at FOSDEM: Posters (systemd, arches) (January 30, 2016, 15:24 UTC)

Especially after Lennart Poettering made some publicity for Gentoo Linux in his keynote talk (unfortunately I missed it due to other commitments :), we've had a lot of visitors at our FOSDEM booth. So, because of popular demand, here are again the files for our posters. They are based on the great "Gentoo Abducted" design by Matteo Pescarin. Released under CC BY-SA 2.5, as is the original. Enjoy!



PDF SVG


PDF SVG

January 29, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Stage4 tarballs, minimal and cloud (January 29, 2016, 05:05 UTC)

Where are they

The tarballs can be found in the normal place.

Minimal

This is meant to be just what you need to boot: the disk won't expand itself, and it won't even get networking info or set any passwords for you (there is no default password).

This tarball is supposed to be the base you generate more complex images from; it is what is going to be used by OpenStack's diskimage-builder.

The primary things it gives you are a kernel, a bootloader and sshd.

stage4-minimal spec

Cloud

This was primarily targeted at use with OpenStack, but it should work with Amazon as well; both use cloud-init.

Network interfaces are expected to use DHCP, and a couple of other useful things are installed as well: syslog, logrotate, etc.

By default cloud-init will take data (keys mainly) and set them up for the 'gentoo' user.

stage4-cloud spec

Next

I'll be posting about the work being done to take these stages and build bootable images. At the moment I do have images available here.
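
As a rough idea of where this is heading, a diskimage-builder invocation along these lines should be able to turn the stage into a bootable image; this is only a sketch under the assumption that a 'gentoo' element is available, and the output name is arbitrary:

# install diskimage-builder (also packaged in most distributions)
pip install diskimage-builder
# build a qcow2 image from the gentoo element plus the generic vm element
disk-image-create -o gentoo-cloud.qcow2 gentoo vm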

openstack images

January 28, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Inspecting and knowing your firmware images (January 28, 2016, 01:15 UTC)

Update: for context, here's the talk I was watching while writing this post.

Again posting about the Enigma conference. Teddy Reed talked about firmware security, in particular based on pre-boot EFI services. The video will be available at some point; the talk goes into detail about osquery (which I'd like to package for Gentoo), but it also has a lower-key announcement of something I found very interesting: VirusTotal is now (mostly) capable of scanning firmware images from various motherboard manufacturers.

The core of this implementation leverages two open-source tools: uefi_firmware by Teddy himself, and UEFITool by Nikolaj Schlej. They are pretty good but since this is still in the early stages, there are still a few things to iron out.

For instance, when I first scanned the firmware of my home PC it was reported with a clear malware marker, which made me suspicious – and indeed got ASUS to take notice and look into it themselves – but it looks like it was a problem with parsing the file; Teddy's looking into it.

On the other hand, sticking with ASUS, my ZenBook shows in its report the presence of CompuTrace — luckily for me I don't run this on Windows.

This tool is very interesting from many different points of view, because it will (maybe in due time, as firmware behaviour analysis improves) provide information about possibly-known malware (such as CompuTrace) in a firmware upgrade not just before you apply it, but even before you buy the computer.

And this is not just about malware. The information that VirusTotal provides (or to be precise, the tools behind it) includes information about certificates, which for instance told me that my home PC would allow me to install Ubuntu under SecureBoot, since the Canonical certificate is present — or, according to Matthew Garrett, it will allow an Ubuntu-signed bootloader to boot just about anything, defeating SecureBoot altogether.

Unfortunately this only works for manufacturers that provide raw firmware updates right now. ASUS and Intel both do that, but for instance Dell devices will provide the firmware upgrade only as a Windows (or DOS) executable. Some old extraction instructions exist, but they are out of date. Thankfully, Nikolaj pointed me at a current script that works at least for my E6510 laptop — which by the way also has CompuTrace.

That script, though, fails with my other Dell laptop, a Vostro 3750 — in that case, you can get your hands on the BIOS image by simply executing the updater with Wine (it will fail with an obscure error message) and then fetching the image from Wine's temporary folder. Unfortunately neither approach works with the updater for more recent laptops such as the XPS 13 (which I'm considering buying to replace the Zenbook), so I should possibly look into extending the script if I can manage to get it to work, although Nikolaj, with much more experience than me, tried and failed to get a valid image out of it.
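
For reference, the Wine trick for the Vostro boils down to the following; the updater file name here is a placeholder and the exact temporary path may vary between Wine setups:

# run the Dell updater under Wine; it is expected to fail with an obscure error
wine ./vostro-3750-bios-updater.exe
# the raw BIOS image should have been left behind in Wine's temporary folder
ls ~/.wine/drive_c/users/"$USER"/Temp/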

To complete the post, I would like to thank Teddy for pointing the audience to Firmware Security — I know I'll be reading a lot more about that soon!

January 27, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Usable security: the sudo security model (January 27, 2016, 18:00 UTC)

I'm starting to write this while I'm at Enigma 2016 listening to the usable security track. I think it's a perfectly good time to start talking publicly about my experience trying to bring security to a Unix-based company I worked for before.

This is not a story about a large company; the company I worked for was fairly small, with five people working in it at the time I was there. I'll use "we", but I will point out that I'm no longer at that company and this is all in the past. I hope and expect the company to have improved its practices since. When I joined the company, it was working on a new product, which meant we had a number of test servers running within the office and only one real "production" server for it running in a datacenter. In addition to the new product, a number of servers for a previous product were in production, plus a couple of local testing servers for those.

While there was no gaping security hole in the company (otherwise I wouldn't even be talking about it!), the security hygiene was abysmal. We had an effective sysadmin at the office for the production server, and an external consultant to manage the network, but the root password (yes, singular) of all devices was also known to the owner of the company, who also complained when I told them I wouldn't share my own password.

One of the few things that I wanted to set up there was stronger authentication, and stopping people from accessing everything with root privileges. As a stepping stone towards that, I ended up using sudo, at least for the test servers (I never managed to put this into proper production).

We have all laughed at sudo make me a sandwich, but the truth is that it's still a better security model than running as root, if used correctly. In particular, I did ask the boss what they wanted to run as root, and after getting rid of the need for root for a few actions that could be done unprivileged, I set up a whitelist of commands that their user could run without a password, along the lines sketched below. They were mostly happy not to have to log in as root, but it was still not enough for me.
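
To give an idea of what such a whitelist looks like (the user name and commands here are hypothetical, not the actual configuration from that company), a drop-in file edited with visudo would be along these lines:

# /etc/sudoers.d/owner: allow a few routine tasks without a password
owner ALL=(root) NOPASSWD: /etc/init.d/testapp restart, /usr/bin/tail -n 200 /var/log/testapp.log
# everything else still requires the user's own password
owner ALL=(root) ALL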

My follow-up ties back to the start of this article, in particular the fact that I started writing this while listening to Jon Oberheide. What I wanted to achieve was an effective request for privilege escalation to root — that is, if someone were to actually find the post-it with the owner's password, they wouldn't get access to root on any production system, even though they might be able to execute some (safe) routine tasks. At the time, my plan involved using Duo Security and a customized duo_unix so that a sudo request for any non-whitelisted command (including sudo -i) would require confirmation on the owner's phone. Unfortunately at the time this hit two obstacles: the pull request with the code to handle PAM authentication for sudo was originally rejected (I'm not sure what the current state of that is; maybe it can be salvaged if it's still not supported), and the owners didn't want to pay for the Duo license – even just for the five of us, let alone providing it as a service to customers – even though my demo did have them quite happy about the idea of only ever needing their own password (or ssh key, but let's not go there for now.)

This is just one of many things that were wrong in that company of course, but I think it shows a little bit that even in system administration work, security and usability sometimes do go hand in hand, and a usable solution can make even a small company more secure.

And for those wondering: no, I'm in no way affiliated with Duo, I just find it a good technology and I'm glad Dug pointed me at it a while back.

January 26, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

http://www.akhuettel.de/publications/german-research.pdf
The 3/2015 edition of the "german research" magazine of the DFG includes an article about the work of our research group! This is a translation of a previous publication in the German language journal "Forschung" of the DFG. Enjoy!

"Carbon Nanotubes: Strong, Conductive and Defect-Free"
Carbon nanotubes are a fascinating material. In experiments at ultra-low temperatures, physicists make their different properties interact with one another - and in so doing find answers to fundamental questions.
Andreas K. Hüttel
german research 3/2015, 24-27 (2015) (PDF)

Hanno Böck a.k.a. hanno (homepage, bugs)

Address Sanitizer is a remarkable feature that is part of the gcc and clang compilers. It can be used to find many typical C bugs - invalid memory reads and writes, use-after-free errors, etc. - while running applications. It has found countless bugs in many software packages. I'm often surprised that many people in the free software community seem to be unaware of this powerful tool.
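
If you have never tried it, enabling it on a single C program is a one-flag affair (the file names here are just placeholders):

# build with Address Sanitizer instrumentation and debug symbols
gcc -fsanitize=address -fno-omit-frame-pointer -g -o demo demo.c
# running the binary now aborts with a detailed report on e.g. a heap-buffer-overflow
./demo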

Address Sanitizer is mainly intended to be a debugging tool. It is usually used to test single applications, often in combination with fuzzing. But as Address Sanitizer can prevent many typical C security bugs - why not use it in production? It doesn't come for free. Address Sanitizer takes significantly more memory and slows down applications by 50 - 100 %. But for some security sensitive applications this may be a reasonable trade-off. The Tor project is already experimenting with this with its Hardened Tor Browser.

One project I've been working on in the past months is to allow a Gentoo system to be compiled with Address Sanitizer. Today I'm publishing this and want to allow others to test it. I have created a page in the Gentoo Wiki that should become the central documentation hub for this project. I published an overlay with several fixes and quirks on Github.

I see this work as part of my Fuzzing Project. (I'm posting it here because the Gentoo category of my personal blog gets indexed by Planet Gentoo.)

I am not sure if using Gentoo with Address Sanitizer is reasonable for a production system. One thing that makes me uneasy about suggesting this for high security requirements is that it's currently incompatible with Grsecurity. But just creating this project has already caused me to find a whole number of bugs in several applications. Some notable examples include Coreutils/shred, Bash ([2], [3]), man-db, Pidgin-OTR, Courier, Syslog-NG, Screen, Claws-Mail ([2], [3]), ProFTPD ([2], [3]), ICU, TCL ([2]) and Dovecot. I think it was worth the effort.

I will present this work in a talk at FOSDEM in Brussels this Saturday, 14:00, in the Security Devroom.

January 25, 2016
Michał Górny a.k.a. mgorny (homepage, bugs)
Mangling shell options in ebuilds (January 25, 2016, 10:46 UTC)

A long time ago eutils.eclass was gifted with a set of terribly ugly functions to push/pop various variables and shell options. Those functions were written very badly, and committed without any review. As a result, a number of eclasses and ebuilds are now using that code without even understanding how bad it is.

In this post, I would like to shortly summarize how to properly and reliably save states of shell options. While the resulting code is a little bit longer than use of e*_push and e*_pop functions, it is much more readable, does not abuse eval, does not abuse global variables and is more reliable.

Preferable solution: subshell scope

Of course, the preferable way of altering shell options is to do it in a subshell. This is the only way that reliably isolates the alterations from the parent ebuild environment. However, subshells are rarely desired — so this is something you’d rather reuse if it’s already there, rather than introduce just for the sake of shell option mangling.

Mangling shopt options

Most of the ‘new’ bash options are mangled using shopt builtin. In this case, the -s and -u switches are used to change the option state, while the -p option can be used to get the current value. The current value is output in the form of shopt command syntax that can be called directly to restore the previous value.

my_function() {
	local prev_shopt=$(shopt -p nullglob)
	# prev_shopt='shopt -u nullglob' now
	shopt -s nullglob
	# ...
	${prev_shopt}
}

Mangling set options

The options set using the set builtin can be manipulated in a similar way. While the builtin supports both short and long options, I strongly recommend using long options for readability. In fact, the long option names can be used through shopt with the additional -o parameter.

my_function() {
	local prev_shopt=$(shopt -p -o noglob)
	# prev_shopt='set +o noglob' now
	set -o noglob  # or shopt -s -o noglob
	# ...
	${prev_shopt}
}

Mangling umask

The umask builtin returns the current octal umask when called with no parameters. Furthermore, the -p parameter can be used to get the full command, similar to the shopt -p output.

my_function() {
	local prev_umask=$(umask)
	# prev_umask=0022 now
	umask 077
	# ...
	umask "${prev_umask}"
}

alternative_function() {
	local prev_umask=$(umask -p)
	# prev_umask='umask 0022' now
	umask 077
	# ...
	${prev_umask}
}

Mangling environment variables

The eutils hackery went as far as to reinvent local variables using… global stacks. Not that it makes any sense. Whenever you want to change a variable’s value or attributes, or just unset it temporarily, just use local variables. If the change needs to apply to only part of a function, create a sub-function and put the local variable inside it.

While at it, please remember that bash does not support local functions. Therefore, you need to namespace your functions to avoid collisions and unset them after use.

my_function() {
	# unset FOO in local scope (this also prevents it from being exported)
	local FOO
	# 'localize' bar for modifications, preserving value
	local bar="${bar}"

	#...

	my_sub_func() {
		# export LC_ALL=POSIX in function scope
		local -x LC_ALL=POSIX
		#...
	}
	my_sub_func
	# unset the function after use
	unset -f my_sub_func
}

Update: mangling shell options without a single subshell

(added on 2016-01-28)

izabera has brought it to my attention that the shopt builtin supports a -q option to suppress output and uses its exit status to return the original flag state. This makes it possible to set and unset the flags without using a single subshell or executing returned commands.

Since I do not expect most shell script writers to use such a long replacement, I present it merely as a curiosity.

my_setting_function() {
	shopt -q nullglob
	local prev_shopt=${?}
	shopt -s nullglob

	#...

	[[ ${prev_shopt} -eq 0 ]] || shopt -u nullglob
}

my_unsetting_function() {
	shopt -q extquote
	local prev_shopt=${?}
	shopt -u extquote

	#...

	[[ ${prev_shopt} -eq 0 ]] && shopt -s extquote
}

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
On the conference circuit (January 25, 2016, 02:00 UTC)

You may remember that I used not to be a fan of travel, and that for a while I was absolutely scared by the idea of flying. This has clearly not been the case in a while, given that I've been working for US companies and traveling a lot of the time.

One of the side effects of this is that I enjoy the "conference circuit", to the point that I'm currently visiting three to four conferences a year, some of which for VideoLAN and others for work, and in a few cases for nothing in particular. This is an interesting way to keep in touch with what's going on in the community and in the corporate world out there.

Sometimes, though, I wish I had more energy and skills to push through my ideas. I find it curious how nowadays it's all about Docker and containers, while I jumped on the LXC bandwagon quite some time ago thanks to Tiziano, and because of that need I made Gentoo a very container-friendly distribution from early on. Similarly, O'Reilly now has a booklet on static site generators which describe things not too far from what I've been doing since at least 2006 for my website, and for xine's later on. Maybe if I wasn't at the time so afraid of traveling I would have had more impact on this, but I guess (to use a flying metaphor) I lost my slot there.

To focus a bit more on SCaLE14x in particular, and especially about Cory Doctorow's opening keynote, I have to say that the conference is again a good load of fun. Admittedly I rarely manage to go listen to talks, but the amount of people going in and out of the expo floor, and the random conversations struck up there, are always useful.

In the case of Doctorow's keynote, while he's (as many) a bit too convinced, in my opinion, that he has most if not all the answers, his final argument was a positive one: don't try to be "pure" (as the FSF would like you to be), instead hedge your bets by contributing (time, energy, money) to organizations and projects that work towards increasing your freedom. I've been pleasantly surprised to hear Cory name, earlier in that talk, VLC and Handbrake — although part of the context in which he namechecked us is likely going to be a topic for a different post, once I have something figured out.

My current trip brings me to San Francisco tonight, for ENIGMA, and on this note I would like to remind conferencegoers that, while most of us are aiming for a friendly and relaxed atmosphere, there is some opsec you should be looking into. I don't have a designated conference laptop (just yet, I might get myself a Chromebook for it) but I do have at least a privacy screen. I've seen more than a couple of corp email interfaces running on laptops while walking the expo floor this time.

Finally, I need to thank TweetDeck for their webapp. The ability to monitor hashtags, and particularly multiple hashtags from the same view is gorgeous when you're doing back-to-back conferences (#scale14x, #enigma2016, #fosdem.) I know at least one of them is reading, so, thanks!

January 24, 2016
Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita 0.6 is released (January 24, 2016, 11:14 UTC)

Hi all,
we are pleased to announce version 0.6 of Trojitá, a fast Qt IMAP e-mail client. This release brings several new features as well as the usual share of bugfixes:

  • Plugin-based infrastructure for the address book, which will allow better integration with other applications
  • Usability improvements in the message composer on several fronts
  • Better keyboard-only usability for those of us who do not touch the mouse that often
  • More intuitive message tagging, and support for standardized actions for junk mail
  • Optional sharing of authentication data between IMAP and SMTP
  • Change to using Qt5 by default. This is the last release which still supports Qt4.
  • Improved robustness on unstable network connections
  • The old status bar is now gone to save screen real estate
  • IMAP interoperability fixes
  • Speed improvements

This release has been tagged in git as "v0.6". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

This release is named after the Aegean island Λέσβος (Lesvos). Jan was there for the past five weeks, and he insisted on mentioning this challenging experience.

The Trojitá developers

January 22, 2016
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Please test www-apache/mod_perl-2.0.10_pre201601 (January 22, 2016, 18:39 UTC)

We're trying to get both Perl 5.22 and Apache 2.4 stable on Gentoo these days. One thing that would be really useful is to have a www-apache/mod_perl that works with all current Perl and Apache versions... and there's a candidate for that: a snapshot of what is hopefully going to be mod_perl-2.0.10 pretty soon. So...

Please keyword (if necessary) and test www-apache/mod_perl-2.0.10_pre201601!!! Feedback for all Perl and Apache versions is very much appreciated. Gentoo developers can directly edit our compatibility table with the results, everyone else please comment on this blog post or file bugs in case of problems!

Please always include exact www-servers/apache, dev-lang/perl, and www-apache/mod_perl versions!

January 18, 2016
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Of OpenStack and SSL (January 18, 2016, 06:00 UTC)

SSL in vanilla OpenStack

The nature of OpenStack projects is largely like that of projects in Gentoo: even though they are all under the OpenStack umbrella, that doesn't mean they all have to work the same way, or even work together.

For instance, nova has the ability to do SSL itself: you can define a CA and a public/private keypair. Glance (last time I checked) doesn't do SSL itself, so you must offload it. Other services might do SSL themselves, but not in the same way nova does.

This means that the most 'standard' setup would be to not run SSL at all, but that isn't exactly desirable. So run an SSL reverse proxy.

Basic Setup

  • OpenStack services are set up on one host (just in this example).
  • OpenStack services are configured to listen on localhost only.
  • Public, Internal and Admin URLs need to be defined with https.
  • Some tuning needs to be done so services work properly, primarily for glance and nova-novnc.
  • Nginx is used as the reverse proxy.

Configs and Tuning

General Config for All Services/Sites

This is the basic setup for each of the OpenStack services; the only difference between them is what goes in the location subsection.

server {
    listen LOCAL_PUBLIC_IPV4:PORT;
    listen [LOCAL_PUBLIC_IPV6]:PORT;
    server_name name.subdomain.example.com;
    access_log /var/log/nginx/keystone/access.log;
    error_log /var/log/nginx/keystone/error.log;

    ssl on;
    ssl_certificate /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    ssl_certificate_key /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    add_header Public-Key-Pins 'pin-sha256="PUB_KEY_PIN_SHA"; max-age=2592000; includeSubDomains';
    ssl_dhparam /etc/nginx/params.4096;
    resolver TRUSTED_DNS_SERVER;
    resolver_timeout 5s;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem;
    add_header X-XSS-Protection "1; mode=block";
    add_header Content-Security-Policy "default-src 'self' https: wss:;";
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        # this changes depending on the service
        proxy_pass http://127.0.0.1:PORT;
    }
}
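
The server block above references a couple of generated artifacts. In case it is useful, one way to generate the DH parameters and the public key pin, and to sanity-check an endpoint once nginx is running, is roughly this (file names follow the placeholders used above, adjust as needed):

# 4096 bit Diffie-Hellman parameters referenced by ssl_dhparam (this takes a while)
openssl dhparam -out /etc/nginx/params.4096 4096

# base64-encoded sha256 pin of the certificate public key, for the Public-Key-Pins header
openssl x509 -in /etc/nginx/ssl/COMBINED_PUB_PRIV_KEY.pem -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -binary \
    | openssl enc -base64

# quick sanity check of the TLS setup on one of the proxied services
openssl s_client -connect name.subdomain.example.com:5000 \
    -servername name.subdomain.example.com < /dev/null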

Keystone and Uwsgi

It turns out keystone has switched to uwsgi for its service backend. This is good because it means we can have the web server connect to that; no more trying to do it all by itself. I'll leave the setting up of uwsgi itself as an exercise to the reader :P

This config has a few extra things, but it is currently what I know to be 'secure' (a similar config on this blog gets an A+ on all those SSL test things). It's the last location piece that changes the most between services.

location / {
    uwsgi_pass unix:///run/uwsgi/keystone_admin.socket;
    include /etc/nginx/uwsgi_params;
    uwsgi_param SCRIPT_NAME admin;
}
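
A quick way to check that the nginx-to-uwsgi path works (using the same hostname placeholder as the rest of this post) is to hit the keystone version endpoint through the proxy:

# should come back with an HTTP 200 and keystone's v3 version document
curl -si https://name.subdomain.example.com:35357/v3 | head -n 1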

Glance

Glance just needs one thing on top of the general proxying: client_max_body_size 0; in the main server stanza, so that you can upload images without being cut off at some low size.

client_max_body_size 0;
location / {
    proxy_pass http://127.0.0.1:9191;
}

Nova

The services for nova just need the basic proxy_pass line. The only exception is novnc, which needs some proxy headers passed.

location / {
    proxy_pass http://127.0.0.1:6080;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
}

Rabbitmq

Rabbit is fairly simple: you just need to enable SSL and disable the plaintext port (setting up your config, of course).

[
    {ssl, [{versions, ['tlsv1.2', 'tlsv1.1']}]},
    {rabbit, [
        {tcp_listeners, []},
        {ssl_listeners, [5671]},
        {ssl_options, [{cacertfile,"/etc/rabbitmq/ssl/CA_CERT.pem"},
                       {certfile,  "/etc/rabbitmq/ssl/PUB_KEY.pem"},
                       {keyfile,   "PRIV_KEY.key"},
                       {versions, ['tlsv1.2', 'tlsv1.1']}
                      ]
        }]
    }
].
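
To verify the result, a TLS handshake on 5671 should now succeed while the old plaintext port should be closed; something along these lines will confirm it (hostname as used elsewhere in this post):

# TLS listener should answer
openssl s_client -connect name.subdomain.example.com:5671 -tls1_2 < /dev/null
# plaintext listener should be gone
nc -zv name.subdomain.example.com 5672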

Openstack Configs

The OpenStack configs can differ slightly, but they are all mostly the same now that they are using the same libraries (the oslo stuff).

General Config

[keystone_authtoken]
auth_uri = https://name.subdomain.example.com:5000
auth_url = https://name.subdomain.example.com:35357

[oslo_messaging_rabbit]
rabbit_host = name.subdomain.example.com
rabbit_port = 5671
rabbit_use_ssl = True

Nova

osapi_compute_listen = 127.0.0.1
metadata_listen = 127.0.0.1
novncproxy_host = 127.0.0.1
enabled_apis = osapi_compute, metadata
[vnc]
novncproxy_base_url = https://name.subdomain.example.com:6080/vnc_auto.html
# the following only on the 'master' host
vncserver_proxyclient_address = 1.2.3.4
vncserver_listen = 1.2.3.4

[glance]
host = name.subdomain.example.com
protocol = https
api_servers = https://name.subdomain.example.com:9292

[neutron]
url = https://name.subdomain.example.com:9696
auth_url = https://name.subdomain.example.com:35357

Cinder

# api-servers get this
osapi_volume_listen = 127.0.0.1

# volume-servers and api-servers get this
glance_api_servers=https://name.subdomain.example.com:9292

Glance

glance_api_servers=https://name.subdomain.example.com:9292

Glance

# api
bind_host = 127.0.0.1
registry_host = name.subdomain.example.com
registry_port = 9191
registry_client_protocol = https

.

# cache
registry_host = name.subdomain.example.com
registry_port = 9191

.

# registry
bind_host = 127.0.0.1
rabbit_host = name.subdomain.example.com
rabbit_port = 5671
rabbit_use_ssl = True

.

# scrubber
registry_host = name.subdomain.example.com
registry_port = 9191

Neutron

# neutron.conf
bind_host = 127.0.0.1
nova_url = https://name.subdomain.example.com:8774/v2

[nova]
auth_url = https://name.subdomain.example.com:35357

.

# metadata_agent.ini
nova_metadata_ip = name.subdomain.example.com
nova_metadata_protocol = https

January 17, 2016
Michal Hrusecky a.k.a. miska (homepage, bugs)
Getting to your PiDrive (January 17, 2016, 19:38 UTC)

I wrote a few times about my PiDrive already; this is a continuation of that work in progress, and I would like to share what I did since last time.

Getting accessible

We need to address two problems regarding the accessibility of the PiDrive. The first one is actually not that you need to access your PiDrive from the Internet, but something much simpler. Once you connect your PiDrive to your local network, you need to find out its local address first so you can set it up. There are various options, for example including avahi or netbios and configuring them to publish some recognizable name. I’m sure everybody has those in mind, and I do as well. But I wanted to start with something that might have escaped the others, an approach I consider quite simple but at the same time quite effective. On boot, I display the ownCloud logo on the HDMI-attached display if there is one, and below it the address of the device. My PiDrive came with a 90-degree angle HDMI converter, so it looks like you are expected to connect a display to it. And reading what is written on the HDMI output is much simpler and more reliable than anything you do on the computer.

The other accessibility issue is getting to your PiDrive from the Internet. I already prepared a kind of solution (still need to implement the tunnel option, though) for the Internet of tomorrow (IPv6), but as quite some people still live in the past, I extended my application and now it even supports UPnP. What is it and how does it work? If you have a smart enough router that allows this kind of thing (most of them, although you probably need to enable it), you can instruct your PiDrive to open up a port on it and forward it to itself. The tricky part is that your router has to support it and you kind of need a public IPv4 address on the other side (otherwise it misses the point). So it doesn’t solve everything, but it makes the PiDrive accessible over IPv4 for at least some people. And the Dynv6 support I implemented previously while playing with IPv6 should be able to resolve to your public IPv4 as well. So you can get ready for the future, while still maintaining compatibility for people living in the stone age.

Image

Both of the improvements mentioned above are installed in my image, and I also finally installed ownCloud on it, although currently just a git snapshot. For the final version I’ll have to switch to packages I guess, but currently I have some dependency issues with them (and php7) that I need to solve first. You can now try the improvements I mentioned yourself. Just inspect my GitHub repo, or you can still download my temporary binaries (now updated).

January 16, 2016
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

You might have seen the word TEXTREL thrown around security or hardening circles, or used in Gentoo Linux installation warnings, but one thing that is clear out there is that the documentation around this term is not very useful for understanding why text relocations are a problem, so I've been asked to write something about it.

Let's start with taking apart the terminology. TEXTREL is jargon for "text relocation", which is once again more jargon, as "text" in this case means "code portion of an executable file." Indeed, in ELF files, the .text section is the one that contains all the actual machine code.

As for "relocation", the term is related to dynamic loaders. It is the process of modifying the data loaded from the loaded file to suit its placement within memory. This might also require some explanation.

When you build code into executables, any named reference is translated into an address instead. This includes, among others, variables, functions, constants and labels — and also some unnamed references such as branch destinations on statements such as if and for.

These references fall into two main categories: relative and absolute references. This is the easiest part to explain: a relative reference takes some address as "base" and then adds or subtracts from it. Indeed, many architectures have a "base register" which is used for relative references. In the case of executable code, particularly with references to labels and branch destinations, relative references translate into relative jumps, which are relative to the current instruction pointer. An absolute reference is instead a fully qualified pointer to memory, well at least to the address space of the running process.

While absolute addresses are kinda obvious as a concept, they are not very practical for a compiler to emit in many cases. For instance, when building shared objects, there is no way for the compiler to know which addresses to use, particularly because a single process can load multiple objects, and they need to all be loaded at different addresses. So instead of writing to the file the actual final (unknown) address, what gets written by the compiler first – and by the link editor afterwards – is a placeholder. It might sound ironic, but an absolute reference is then emitted as a relative reference based upon the loading address of the object itself.

When the loader takes an object and loads it into memory, it'll be mapped at a given "start" address. After that, the absolute references are inspected, and the relative placeholders resolved to the final absolute address. This is the process of relocation. Different types of relocation (or displacements) exist, but they are not the topic of this post.

Relocations as described up until now can apply to both data and code, but we single out code relocations as TEXTRELs. The reason for this is to be found in mitigation (or hardening) techniques. In particular, what is called W^X, NX or PaX. The basic idea of this technique is to disallow modification to executable areas of memory, by forcing the mapped pages to either be writable or executable, but not both (W^X reads "writable xor executable".) This has a number of drawbacks, which are most clearly visible with JIT (Just-in-Time) compilation processes, including most JavaScript engines.

But besides the JIT problem, there is also the problem of relocations happening in the code section of an executable. Since the relocations need to be written to, it is not feasible (or at least not easy) to provide exclusively writable or executable access to those pages. Well, there are theoretical ways to produce that result, but it complicates memory management significantly, so the short version is that, generally speaking, TEXTRELs and W^X techniques don't go well together.
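
If you want to check whether a given object is affected, a couple of quick checks do the job; scanelf comes from pax-utils (which Portage's QA checks also use), and the library path below is only an example:

# list objects with text relocations (no output means none were found)
scanelf -qt /usr/lib64/libfoo.so
# -T goes further and tries to locate which functions cause them
scanelf -qT /usr/lib64/libfoo.so
# or look for the TEXTREL entry in the dynamic section directly
readelf -d /usr/lib64/libfoo.so | grep TEXTREL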

This is further complicated by another mitigation strategy: ASLR, Address Space Layout Randomization. In particular, ASLR fully defeats prelinking as a strategy for dealing with TEXTRELs — theoretically on a system that allows TEXTREL but has the address space to map every single shared object at a fixed address, it would not be necessary to relocate at runtime. For stronger ASLR you also want to make sure that the executables themselves are mapped at different addresses, so you use PIE, Position Independent Executable, to make sure they don't depend on a single stable loading address.

Usage of PIE was for a long while limited to a few select hardened distributions, such as Gentoo Hardened, but it's getting more common, as ASLR is a fairly effective mitigation strategy even for binary distributions where otherwise function offsets would be known to an attacker.
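
As an aside, whether a particular program was built as PIE is just as easy to check, since from the loader's point of view a PIE is simply a shared object (the path below is only an example):

# "DYN (Shared object file)" means PIE, "EXEC (Executable file)" means a fixed load address
readelf -h /usr/bin/ssh | grep 'Type:'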

At the same time, SELinux also implements protection against text relocation, so you no longer need to have a patched hardened kernel to provide this protection.

Similarly, Android 6 is now disallowing the generation of shared objects with text relocations, although I have no idea if binaries built to target this new SDK version gain any more protection at runtime, since it's not really my area of expertise.

Michał Górny a.k.a. mgorny (homepage, bugs)

The way packages are maintained in Gentoo has been evolving for quite some time already. So far all of that has been happening on top of old file formats which slowly started to diverge from the needs of Gentoo developers and became partially broken. The concept of herds has become blurry, with its definition confused between different developers and partial assumptions about their deprecation. Maintenance of herds by projects was broken by moving projects to the Wiki. Some projects have stopped using herds, others have been declaring them in metadata.xml in different ways.

The problem has finally reached the Gentoo Council and was discussed at the 2015-10-25 meeting (note: still no summary…). The Council attempted to address the different problems by votes, and to create a new solution by combining their results. However, it finally decided that it is not possible to create a good specification this way. Instead, the meeting brought two major points. Firstly, herds are definitely deprecated. Secondly, someone needs to provide a complete, consistent replacement in GLEP form.

This is how GLEP 67 came to be. It was based on the results of the previous discussion, the Council votes and a thorough analysis of the different problems. It provides a complete, consistent system for maintaining packages and expressing the maintenance information. It was approved by the Council on 2016-01-10, with a two-week deadline to prepare for the switch.

Therefore, on 2016-01-24 Gentoo is going to switch completely to the new maintenance structure described in GLEP 67. The announcement with transition details has been sent already, so instead I’d like to focus on describing how things are going to work starting from the day GLEP 67 becomes implemented.

Who is going to maintain packages?

Before getting into technical details, GLEP 67 starts by limiting possible package maintainer entries. Until now, metadata files were allowed to list practically any e-mail addresses for package maintainers. From now on, only real people (either developers or proxied maintainers) and projects (meeting the requirements of GLEP 39, in particular having a Wiki page) are allowed to be maintainers. All maintainers are identified by e-mail addresses which are required to be unique (i.e. sharing the same address between multiple projects is forbidden) and registered on bugs.g.o.

This supports the two major goals behind maintainer specifications: bug assignment and responsibility assignment. The former is rather obvious — Bugzilla is the most reliable communication platform for both Gentoo developers and users. Therefore, it is important that the bugs can be assigned to appropriate entities without any issues. The latter aims to address the problem of unclear ‘ownership’ of some packages, and packages maintained by ‘dead’ e-mail aliases.

In other words, from now on, for every maintained package in Gentoo we need to be able to obtain a complete, clear list of people maintaining it (directly and via projects). We no longer accept ‘dumb’ e-mail aliases that make it impossible to distinguish real maintainers from people who are simply following bugs. This gives three important advantages:

  1. we can actually ping the correct people on IRC without having to go through hoops,
  2. we can determine whether a package is actually maintained by someone, rather than assigned to an alias from which nobody reads bug mail anymore,
  3. we can clearly determine who is responsible for a package and who is the appropriate person to acknowledge changes.

Changes for maintainer-needed packages

The new requirements brought a new issue: maintainer-needed@g.o. The specific use of this e-mail alias did not really seem to suit a project. Creating a ‘maintainer-needed project’ would either imply creating a dead entity or assigning actual maintainers to supposedly-unmaintained packages. On the other hand, I did not really want to introduce special cases in the specification.

Instead, I have decided that the best way forward is to remove it. In other words, unmaintained packages now explicitly list no maintainers. The assignment to maintainer-needed@g.o will be done implicitly, and is a rule specific to the Gentoo repository. Other repositories may use different policies for packages with no explicit maintainers (like assigning to the repository owner).

The metadata.xml and projects.xml files

The changes to metadata.xml are really minimal, and backwards compatible. The <herd/> element is no longer used, and will be prohibited. Instead, <maintainer/> elements are going to be used for all kinds of maintainers. There are no incompatible changes in those elements, therefore existing tools will not be broken.

The <maintainer/> element gets a new obligatory type attribute that needs to either be person or project. The latter value may cause tools to look the project up (by e-mail) in projects.xml.

The projects.xml file (unlike the past herds.xml) is clearly defined in the repository scope. In particular, the tools must not assume it always comes from the Gentoo repository. Other repositories are allowed to define their own projects (though overriding projects is not allowed), and project lookup needs to respect the masters= setting of the repositories.

For the Gentoo repository, projects.xml is generated automatically from the Wiki project pages, and distributed both via api.gentoo.org and via the usual repository distribution means (the rsync and git mirrors).

Summary

A quick summary for Gentoo developers of how things look with GLEP 67:

  1. Only people and projects can maintain packages. If you want to maintain packages in an organized group, you have to create a project — with Wiki page, unique e-mail address and bugs.g.o account.
  2. Only the explicitly listed project members (and subproject members whenever member inheritance is enabled) are considered maintainers of the package. Other people subscribed to the e-mail alias are not counted.
  3. Packages with no maintainers have no <maintainer/> elements. The bugs are still implicitly assigned to maintainer-needed@g.o but this e-mail alias is no longer used in metadata.xml.
  4. <herd/> elements are no longer used, <maintainer/>s are used instead.
  5. <maintainer/> requires a new type attribute that takes either the person or project value, as in the sketch below.
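
To make that concrete, this is a minimal sketch of what a post-GLEP 67 metadata.xml could look like; the names and addresses here are made up for illustration:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE pkgmetadata SYSTEM "http://www.gentoo.org/dtd/metadata.dtd">
<pkgmetadata>
	<maintainer type="person">
		<email>larry@gentoo.org</email>
		<name>Larry the Cow</name>
	</maintainer>
	<maintainer type="project">
		<email>example-project@gentoo.org</email>
		<name>Example Project</name>
	</maintainer>
</pkgmetadata>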

Peter Wilmott a.k.a. p8952 (homepage, bugs)
Starling Murmuration (January 16, 2016, 00:00 UTC)

January 12, 2016
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

I’m not much of a television watcher, but recently I thought that I would check out Sling TV for streaming live television stations. Seeing as I also have an Amazon FireTV, the choice seemed to be an easy one, and since they were offering a 7-day free trial, I had nothing to lose. After my free trial, though, I decided that Sling is not quite ready for prime time (at least in my opinion). Here are some bullet points about my overall experience (some good, some not-so-good, and some downright ugly). I hope that these points will give you a quick overview of Sling so that you can decide if it is right for you. After the list of points, I will discuss some of the more important concerns in greater detail.

  • The Good:
  • The Bad:
    • Quality is not as high as watching over regular co-ax cable or satellite
    • Customer service was not very helpful in any regard
    • Pausing/fast-forwarding/rewinding doesn’t work well, or on many channels. It doesn’t work at all on 3-day replay streams.
    • No Linux support for the Sling App
    • The Windows app doesn’t work in anything below Windows 7, and then won’t uninstall
    • Video quality is forced to a VERY low level inside a virtual machine
    • In a Windows 7 VM (running inside VirtualBox), the app uncleanly closes every time
    • 3-day replay streams don’t ever start in the Windows app
  • The Ugly:
    • Limited to one concurrent stream per account
    • No in-browser streaming

Now that there’s a list of some of my bigger points, I’d like to go into further detail about some of the key factors that determined my stance on the current state of Sling (i.e. it not yet being ready for prime time). All of the good points are self-explanatory, and more information can be found on Sling’s website.

For some of the bad aspects, the quality really wasn’t as good as standard cable or satellite. I found the picture to be very soft by comparison, and the sound quality was lacking, especially on a nicer 5.x audio system. I also experienced several glitches in the video streams, and popping sounds in the audio streams (despite a fibre internet connection). When I emailed customer service about my concerns, I only received canned responses that weren’t very helpful. Further, they would close the ticket immediately, which gives the impression that they don’t care about keeping my business, but rather just want to close support cases as quickly as possible.

With regard to the applications that can be used on Windows and Mac, I didn’t have much luck. As a Linux user, I had no option except to try to install the Windows app inside of a Windows virtual machine (VM). I firstly tried it in an old Windows XP VM. Though it installed, it didn’t actually run, and then refused to fully uninstall. I put it in a Windows 7 VM, and at least it ran. However, any time I would start a stream (live or 3-day replay), an error message popped up stating that the “quality was reduced to a minimum” because my “video card was not supported,” and to “update my video drivers or upgrade my video card.” Basically, it seemed like the application didn’t function much at all inside of a VM.

Now for the ugly parts. :( Though I can deal with the lack of streaming within an internet browser, it is still a huge oversight on Sling’s part. In-browser streaming works on most platforms these days (e.g. Netflix, Amazon Prime Video, et cetera). If Sling provided in-browser streaming, it would eliminate the problem with the apps only being available for Windows and Mac. It would also negate the issues that I had with the Windows app inside a virtual machine.

The biggest problem with Sling is the limitation of one stream per account. To put that in perspective, let’s say that you have a few televisions in your home, and a couple tablets (iPads or Android-based). If you wanted to watch a particular TV channel via Sling on the TV in your bedroom, and one of your kids wanted to watch his favourite show on his tablet or phone in his room, you would need two separate Sling accounts in order to do that. With only one Sling account, your stream would stop when he started streaming his show. This is a complete deal-breaker for many people, and by today’s standards, it’s really unacceptable from a service provider. I can understand that they don’t want people pirating out the service, but what about checking for the connections coming from the same external IP? That wouldn’t be ideal either, considering you could be out of the home and using your mobile device, but at least it would allow for all devices in the same home (and tied to the same network) to stream concurrently.

Overall, I think Sling is a great idea in that it will allow people like me to have television service. I think that they currently fall short in several areas, though, and for those reasons, I won’t continue to subscribe. Hopefully in the months and years to come, they will rectify these problems (especially the ones in the “ugly” category). If they do, it would be a service that I could stand behind.

Cheers,
Zach

Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Audio Devices and Configuration (January 12, 2016, 09:54 UTC)

This one’s going to be a bit of a long post. You might want to grab a cup of coffee before you jump in!

Over the last few years, I’ve spent some time getting PulseAudio up and running on a few Android-based phones. There was the initial Galaxy Nexus port, a proof-of-concept port of Firefox OS (git) to use PulseAudio instead of AudioFlinger on a Nexus 4, and most recently, a port of Firefox OS to use PulseAudio on the first gen Moto G and last year’s Sony Xperia Z3 Compact (git).

The process so far has been largely manual and painstaking, and I’ve been trying to make that easier. But before I talk about the how of that, let’s see how all this works in the first place.

The Problem

If you have managed to get by without having to dig into this dark pit, the porting process can be something of an exercise in masochism. More so if you’re in my shoes and don’t have access to any of the documentation for the audio hardware. Hardware vendors and OEMs usually don’t share these specifications unless under NDA, which is hard to set up as someone just hacking on this stuff as an experiment or for fun in their spare time.

Broadly, the task involves looking at how the device is set up on Android, and then replicating that process using the standard ALSA library, which is what PulseAudio uses (this works because both the Android and generic Linux userspace talk to the same ALSA-based kernel audio drivers).

Android’s configuration

First, you look at the Android audio HAL code for the device you’re porting, and the corresponding mixer paths XML configuration. Between the two of these, you get a description of how you can configure the hardware to play back audio in various use cases (music, tones, voice calls), and how to route the audio (headphones, headset, speakers, Bluetooth).

[Image: snippet from the mixer paths XML]

In this example, there is one path that describes how to set up the hardware for “deep buffer playback” (used for music, where you can buffer a bunch of data and let the CPU go to sleep). The next path, “speaker”, tells us how to set up the routing to play audio out of the speaker.

These strings are not well-defined, so different hardware uses different path names and combinations to set up the hardware. The XML configuration also does not tell us a number of things, such as what format the hardware supports or what ALSA device to use. All of this information is embedded in the audio HAL code.

Configuring with ALSA

Next, you need to translate this configuration into something PulseAudio will understand1. The preferred method for this is ALSA’s UCM, which describes how to set up the hardware for each use case it supports, and how to configure the routing in each of those use cases.

[Image: snippet from the UCM configuration]

This is a snippet from the “hi-fi” use case, which is the UCM use case roughly corresponding to “deep buffer playback” in the previous section. Within that, we’re looking at the “speaker device” and you can see the same mixer controls as in the previous XML file are toggled. This file does have some additional information — for example, this snippet specifies what ALSA device should be used to toggle mixer controls (“hw:apq8064tablasnd”).
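
Once such a UCM profile is in place, it can be exercised from the shell with alsaucm (shipped with alsa-utils) before PulseAudio enters the picture at all; the card name below is taken from the snippet above and may need adjusting for your device:

# list the use cases (verbs) the profile defines
alsaucm -c apq8064tablasnd list _verbs
# switch to the HiFi verb and enable the Speaker device
alsaucm -c apq8064tablasnd set _verb HiFi set _enadev Speaker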

Doing the Porting

Typically, I start with the “hi-fi” use case — what you would normally use for music playback (and could likely use for tones and such as well). Getting the “phone” use case working is usually much more painful. In addition to setting up the audio hardware similarly to the “hi-fi” use case, it involves talking to the modem, for which there isn’t a standard method across Android devices. To complicate things, the modem firmware can be extremely sensitive to the order/timing of setup, often with no means of debugging (a.k.a. fun times!).

When there is a new Android version, I need to look at all the changes in the HAL and the XML file, redo the translation to UCM, and then test everything again.

This is clearly repetitive work, and I know I’m not the only one having to do it. Hardware vendors often face the same challenge when supporting the same devices on multiple platforms — Android’s HAL usually uses the XML config I showed above, ChromeOS’s CrAS and PulseAudio use ALSA UCM, Intel uses the parameter framework with its own XML format.

Introducing xml2ucm

With this background, when I started looking at the Z3 Compact port last year, I decided to write a tool to make this and future ports easier. That tool is creatively named xml2ucm2.

As we saw, the ALSA UCM configuration contains more information than the XML file. It contains a description of the playback and mixer devices to use, as well as some information about configuration (channel count, primarily). This information is usually hardcoded in the audio HAL on Android.

To deal with this, I introduced a small configuration file that provides the additional information required to perform the translation. The idea is that you write this configuration once, and can more or less perform the translation automatically. If the HAL or the XML file changes, it should be easy to implement that as a change in the configuration and just regenerate the UCM files.

[Image: example xml2ucm configuration]

This example shows how the Android XML like in the snippet above can be converted to the corresponding UCM configuration. Once I had the code done, porting all the hi-fi bits on the Xperia Z3 Compact took about 30 minutes. The results of this are available as a more complete example: the mixer paths XML, the config XML, and the generated UCM.

What’s next

One big missing piece here is voice calls. I spent some time trying to get voice calls working on the two phones I had available to me (the Moto G and the Z3 Compact), but this is quite challenging without access to hardware documentation and I ran out of spare time to devote to the problem. It would be nice to have a complete working example for a device, though.

There are other configuration mechanisms out there — notably Intel’s parameter framework. It would be interesting to add support for that as well. Ideally, the code could be extended to build a complete model of the audio routing/configuration, and generate any of the configuration that is supported.

I’d like this tool to be generally useful, so feel free to post comments and suggestions on Github or just get in touch.

p.s. Thanks go out to Abhinav for all the Haskell help!


  1. Another approach, which the Ubuntu Phone and Jolla SailfishOS folks take, is to just use the Android HAL directly from PulseAudio to set up and use the hardware. This makes sense to quickly enable any arbitrary device (because the HAL provides a hardware-independent interface to do so). In the longer term, I prefer to enable using UCM and alsa-lib directly since it gives us more control, and allows us to use such features as PulseAudio’s dynamic latency adjustment if the hardware allows it. 

  2. You might have noticed that the tool is written in Haskell. While this is decidedly not a popular choice of language, it did make for a relatively easy implementation and provides a number of advantages. The unfortunate cost is that most people will find it hard to jump in and start contributing. If you have a feature request or bug fix but are having trouble translating it into code, please do file a bug, and I would be happy to help! 

January 10, 2016
Michal Hrusecky a.k.a. miska (homepage, bugs)
My way of booting PiDrive (Raspberry Pi2) (January 10, 2016, 12:43 UTC)

Some time ago I started playing with the PiDrive project. I implemented an application that I think will be useful to people using it in the end – a simple IPv6 enabler/browser and DynDNS client. But I kind of cheated and implemented it on an ARM board I already had at home. Over the last week I didn’t have much free time, but I still continued with the project and I got my Pi booting my custom image. How PiDrive will boot was the subject of lengthy discussions on the mailing list, so I wanted to provide a proof of concept of how I think it can be done. As it is a long post, there is a TLDR version at the end 😉

Starting points

First, let’s take a look at how the Raspberry boots. Most ARM boards I have encountered so far used u-Boot, stored either in internal flash memory or on the SD card at some predefined address. The Raspberry instead searches for files with predefined names on a FAT filesystem that it expects to find on the first partition of your SD card. On one hand it looks weirder, on the other it is simpler if you don’t have any prior experience with ARM. And if you do, you can still put u-Boot there to get all the options u-Boot provides.

The other observation to take into account, made by others, is that SD cards die quickly in those devices. Mine didn’t yet, but I don’t do any crazy stuff and tend to be lucky. But let’s take for granted that during the life of the device the SD card might die, maybe even multiple times. We should somehow address that. And recovering from a broken SD card should be simple, so that even an average Joe can do it.

As I’m an openSUSE guy, I obviously picked openSUSE to power my PiDrive, but nothing I did should depend on the chosen distribution. I picked 13.2, as Leap for armv7 is not ready yet (more about that some other time), and I added a few packages that I think can be useful in a separate project on OBS.

The last point is that I would personally prefer to have the system installed on the SD card. It will be slower to start, but it will let the hard drive spin down and go to sleep when not used, prolonging its life and reducing the noise of a spinning drive. But if the system is running from the SD card, the second paragraph is even more important – we need to gracefully handle the SD card’s death.

How I boot my Pi

Now you know where I started from, so let’s take a look at what I ended up with. I wanted to make first-time installation as easy as possible. As SD cards come preformatted with FAT already, all you have to do to boot is copy some files there. I found out what files I need, tweaked the configuration a little bit and compiled my own kernel. Why? To compile all drivers needed during boot directly in and to include a custom initramdisk. For those who don’t know, an initramdisk is, as the name suggests, a small disk in RAM that contains a few programs and a script that is executed before the system boots; it can be used if you need to do some complicated stuff before booting. And that is exactly what I had in mind.

On first boot, my initramdisk takes everything on the SD card and copies it to memory (we have 1G of RAM and pretty much nothing is running at this point), then it repartitions the SD card and copies the files back. It makes the first FAT partition smaller and creates a new ext2 partition behind it. What are those for? The first partition contains the stuff needed by the board to boot up (various firmware files and the kernel) but also a squashfs-compressed rootfs. Once the init script discovers the correct ext2 partition, it uses it as an overlay on top of this squashfs image to make the OS image writable, and boots the system using this overlaid filesystem. During boot it also checks whether the overlay on the SD card is empty (for example because the card died) and if so, looks for a backup on the hard drive; if one is found, it is used to populate the overlay. So the idea is to use the SD card, but back up everything and make recovery in case the SD card dies as easy as possible – you just get a new card from the nearest convenience store, unpack an archive on it and boot up. Can’t be simpler, I guess.

A little about why I chose squashfs and overlayfs on top of it. The SD card is pretty small, and having a compressed filesystem makes everything much easier to fit. Also, I played with it some time ago, and thanks to the compressed rootfs the device can actually read from it faster: SD cards are slow, but decompression is easy and fast.

If you want to see it or want to try my approach, I put the sources in my GitHub repo. It has some implicit dependencies I was too lazy to enumerate and I should write some documentation, but it should download the kernel and busybox sources, compile everything, download openSUSE JeOS and the Raspberry binary blobs, put everything together and produce an SD directory with files to put on the SD card. Just for convenience, I will temporarily provide a binary archive that you can just unpack to the SD card and test. Currently it is really just booting. No ownCloud installed yet, although nginx and php7 are there. It should get an IP via DHCP and use the HDMI out, and you should be able to either ssh to it or log in on the second console using the root account and password owncloud.

Plans

What is currently just on the TODO list is to actually install and play with ownCloud. There are plenty of questions to discuss, like where to put the database and which one to use. In the case of MySQL, we could even divide it and put some tables in one place and others elsewhere. We have the SD card, the HDD and RAM. Speaking of the HDD, also on the TODO list is to find, format and mount the USB drive. That’s also about customizing the final rootfs. The last thing I would like to do at some point is to have an easy way to recompress the system after updates/additional installs and clean up the overlay. But that is the far future. This post is mainly for people playing with the PiDrive, to explain which way I’m going and to point everybody to my GitHub repo :-)

TLDR version

I got my Raspberry Pi 2 booting from the SD card and automatically repartitioning it, and it’s easy to deploy: just unpack this archive on a FAT-formatted SD card. The sources are on my GitHub, so you can take a look at what it does. No ownCloud installed yet. The root password is owncloud.

January 05, 2016
Sebastian Pipping a.k.a. sping (homepage, bugs)

Demo start:
https://www.youtube.com/watch?v=2A7V3GLWF6U&feature=youtu.be&t=37s

Where “OpenRC 0.19.1 is starting up Gentoo Linux (x86_64)” scrolls into display:
https://www.youtube.com/watch?v=2A7V3GLWF6U&feature=youtu.be&t=1m21s

January 04, 2016
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
A Quick Update (January 04, 2016, 09:58 UTC)

Happy 2016 everyone!

While I did mention a while back (almost two years ago, wow) that I was taking a break, I realised recently that I hadn’t posted an update from when I started again.

For the last year and a half, I’ve been providing freelance consulting around PulseAudio, GStreamer, and various other directly and tangentially related projects. There’s a brief list of the kind of work I’ve been involved in.

If you’re looking for help with PulseAudio, GStreamer, multimedia middleware or anything else you might’ve come across on this blog, do get in touch!

January 03, 2016

The new year kicks off with two large events with Gentoo participation: The Southern California Linux Expo SCALE14x and FOSDEM 2016, both featuring a Gentoo booth.

SCALE14x

SCALE14x logo

First we have the Southern California Linux Expo SCALE in its 14th edition. The Pasadena Convention Center will host the event this year from January 21 to January 24.

Gentoo will be present as an exhibitor, like in the previous years.

Thanks to the organizers, we can share a special promotional code for attending SCALE with our community, valid for full access passes. Using the code GNTOO on the registration page will get you a 50% discount.

FOSDEM 2016

FOSDEM 2016 logo

Then, on the last weekend of January, we’ll be on the other side of the pond in Brussels, Belgium where FOSDEM 2016 will take place on January 30 and 31.

Located at the Université libre de Bruxelles, it doesn’t just offer interesting talks, but also the finest Belgian beers when the sun sets. :)

This year, Gentoo will also be manning a stand with gadgets, swag, and LiveDVDs.

Booth locations

We’ll update this news item with more detailed information on how to find our booths at both conferences once we have information from the organizers.

January 02, 2016
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux on DELL XPS 13 9350 (January 02, 2016, 10:43 UTC)

As I found little help about this online I figured I’d write a summary piece about my recent experience in installing Gentoo Linux on a DELL XPS 13 9350.

UEFI or MBR ?

This machine ships with a NVME SSD so don’t think twice : UEFI is the only sane way to go.

BIOS configuration

I advise to use the pre-installed Windows 10 to update the XPS to the latest BIOS (1.1.7 at the time of writing). Then you need to change some stuff to boot and get the NVME SSD disk discovered by the live CD.

  • Turn off Secure Boot
  • Set SATA Operation to AHCI (will break your Windows boot but who cares)

Live CD

Go for the latest SystemRescueCD (it’s Gentoo based, you won’t be lost) as it’s much more up to date and supports booting on UEFI. Make it a Live USB, for example using unetbootin and the ISO on a vfat-formatted USB stick.

NVME SSD disk partitioning

We’ll be using GPT with UEFI. I found that using gdisk was the easiest. The disk itself is found at /dev/nvme0n1. Here is the partition table I used :

  • 500MB UEFI boot partition (type EF00)
  • 16GB swap partition
  • 60GB Linux root partition
  • 400GB home partition

The corresponding gdisk commands :

# gdisk /dev/nvme0n1

Command: o ↵
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y ↵

Command: n ↵
Partition Number: 1 ↵
First sector: ↵
Last sector: +500M ↵
Hex Code: EF00 ↵

Command: n ↵
Partition Number: 2 ↵
First sector: ↵
Last sector: +16G ↵
Hex Code: 8200 ↵

Command: n ↵
Partition Number: 3 ↵
First sector: ↵
Last sector: +60G ↵
Hex Code: ↵

Command: n ↵
Partition Number: 4 ↵
First sector: ↵
Last sector: ↵ (for rest of disk)
Hex Code: ↵

Command: w ↵
Do you want to proceed? (Y/N): Y ↵

No WiFi on Live CD ? no panic

If your live CD is old (pre 4.4 kernel), the integrated broadcom 4350 wifi card won’t be available !

My trick was to use my Android phone connected to my local WiFi as a USB modem which was detected directly by the live CD.

  • get your Android phone connected on your local WiFi (unless you want to use your cellular data)
  • plug in your phone using USB to your XPS
  • on your phone, go to Settings / More / Tethering & portable hotspot
  • enable USB tethering

Running ip addr will show the network card enp0s20f0u2 (for me at least), then if no IP address is set on the card, just ask for one :

# dhcpcd enp0s20f0u2

Et voilà, you have now access to the internet.

Proceed with installation

The only thing to worry about is to format the UEFI boot partition as FAT32.

# mkfs.vfat -F 32 /dev/nvme0n1p1

Then follow the Gentoo handbook as usual for the next steps of the installation process until you arrive to the kernel and the bootloader / grub part.

From this moment I can already say that NO, we won’t be using GRUB at all, so don’t bother installing it. Why ? Because at the time of writing, the efi-64 support of GRUB was not working at all, as it failed to discover the NVME SSD disk on boot.

Kernel sources and consideration

The trick here is that we’ll set up the boot ourselves directly from the BIOS later, so we only need to build a standalone kernel (meaning one able to boot without an initramfs).

EDIT: as of Jan. 10 2016, kernel 4.4 is available on portage so you don’t need the patching below any more !

Make sure you install and use at least a 4.3.x kernel (4.3.3 at the time of writing). Add sys-kernel/gentoo-sources to your /etc/portage/package.keywords file if needed. If you have a 4.4 kernel available, you can skip patching it below.
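
If you do need the keyword, something along these lines will do (pick whatever atom matches the version you want):

(chroot) # echo "=sys-kernel/gentoo-sources-4.3.3 ~amd64" >> /etc/portage/package.keywords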

Patching 4.3.x kernels for Broadcom 4350 WiFi support

To get the broadcom 4350 WiFi card working on 4.3.x, we need to patch the kernel sources. This is very easy to do thanks to Gentoo’s user patches support. Do this before installing gentoo-sources (or reinstall it afterwards).

This example is for gentoo-sources-4.3.3, adjust your version accordingly :

(chroot) # mkdir -p /etc/portage/patches/sys-kernel/gentoo-sources-4.3.3
(chroot) # cd /etc/portage/patches/sys-kernel/gentoo-sources-4.3.3
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/0001-bcm4350.patch

When emerging the gentoo-sources package, you should see the patch being applied. Check that it worked by issuing :

(chroot) # grep BRCM_CC_4350 /usr/src/linux/drivers/net/wireless/brcm80211/brcmfmac/chip.c
case BRCM_CC_4350_CHIP_ID:

The resulting kernel module will be called brcmfmac, make sure to load it on boot by adding it in your /etc/conf.d/modules :

modules="brcmfmac"

EDIT: as of Jan. 7 2016, version 20151207 of linux-firmware ships with the needed files so you don’t need to download those any more !

Then we need to download the WiFi card’s firmware files which are not part of the linux-firmware package at the time of writing (20150012).

(chroot) # emerge '>=sys-kernel/linux-firmware-20151207'

# DO THIS ONLY IF YOU DONT HAVE >=sys-kernel/linux-firmware-20151207 available !
(chroot) # cd /lib/firmware/brcm/
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/BCM-0a5c-6412.hcd
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/brcmfmac4350-pcie.bin

Kernel config & build

I used genkernel to build my kernel. I’ve made very few adjustments, but these are the things to mind in this pre-built kernel :

  • support for NVME SSD added as builtin
  • it is builtin for ext4 only (other FS are not compiled in)
  • support for DM_CRYPT and LUKS ciphers for encrypted /home
  • the root partition is hardcoded in the kernel as /dev/nvme0n1p3 so if yours is different, you’ll need to change CONFIG_CMDLINE and compile it yourself
  • the CONFIG_CMDLINE above is needed because you can’t pass kernel parameters using UEFI so you have to hardcode them in the kernel itself
  • support for the intel graphic card DRM and framebuffer (there’s a kernel bug with skylake CPUs which will spam the logs but it still works fine)

Get the kernel config and compile it :

(chroot) # mkdir -p /etc/kernels
(chroot) # cd /etc/kernels
(chroot) # wget http://ultrabug.fr/gentoo/xps9350/kernel-config-x86_64-4.3.3-gentoo
(chroot) # genkernel kernel

The proposed kernel config here is for gentoo-sources-4.3.3 so make sure to rename the file for your current version.

This kernel is far from perfect but it works very well so far; sound, webcam and suspend work smoothly !
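
If you would rather review or adapt the config than trust mine blindly, these are roughly the kconfig symbols the list above refers to; a quick grep makes it easy to double check (this checklist is my own assumption, verify against your hardware):

# NVME     -> NVME SSD support built in
# EFI_STUB -> lets the UEFI firmware boot the kernel image directly
# EXT4_FS  -> the only filesystem compiled in
# DM_CRYPT -> LUKS-encrypted /home support
# CMDLINE  -> the hardcoded kernel command line (root=/dev/nvme0n1p3 ...)
# DRM_I915 -> intel graphics
(chroot) # grep -E 'NVME|EFI_STUB|EXT4_FS|DM_CRYPT|CMDLINE|DRM_I915' \
    /etc/kernels/kernel-config-x86_64-4.3.3-gentoo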

make.conf settings for intel graphics

I can recommend using the following on your /etc/portage/make.conf :

INPUT_DRIVERS="evdev synaptics"
VIDEO_CARDS="intel i965"

fstab for SSD

Don’t forget to make sure the noatime option is used on your fstab for / and /home !

/dev/nvme0n1p1    /boot    vfat    noauto,noatime    1 2
/dev/nvme0n1p2    none     swap    sw                0 0
/dev/nvme0n1p3    /        ext4    noatime   0 1
/dev/nvme0n1p4    /home    ext4    noatime   0 1

As pointed out by stefantalpalaru in the comments, it is recommended to schedule an SSD TRIM from your crontab once in a while, see the Gentoo Wiki on SSD for more details.
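
For the TRIM part, a small weekly job is enough; fstrim ships with sys-apps/util-linux, the mount points are the ones used in this article, and this assumes your cron runs /etc/cron.weekly :

(chroot) # cat > /etc/cron.weekly/fstrim << 'EOF'
#!/bin/sh
# discard unused blocks on the SSD-backed filesystems once a week
fstrim -v /
fstrim -v /home
EOF
(chroot) # chmod +x /etc/cron.weekly/fstrim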

encrypted /home auto-mounted at login

I advise adding cryptsetup to your USE variable in /etc/portage/make.conf and then updating your @world with an emerge -NDuq @world.

I assume you haven’t created your user yet, so your unmounted /home is empty. Make sure that :

  • your /dev/nvme0n1p4 home partition is not mounted
  • you removed the corresponding /home line from your /etc/fstab (we’ll configure pam_mount to get it auto-mounted on login)

AFAIK, the LUKS password you’ll set on the first slot when issuing luksFormat below should be the same as your user’s password !

(chroot) # cryptsetup luksFormat -s 512 /dev/nvme0n1p4
(chroot) # cryptsetup luksOpen /dev/nvme0n1p4 crypt_home
(chroot) # mkfs.ext4 /dev/mapper/crypt_home
(chroot) # mount /dev/mapper/crypt_home /home
(chroot) # useradd -m -G wheel,audio,video,plugdev,portage,users USERNAME
(chroot) # passwd USERNAME
(chroot) # umount /home
(chroot) # cryptsetup luksClose crypt_home

We’ll use sys-auth/pam_mount to manage the mounting of our /home partition when a user logs in successfully, so make sure you emerge pam_mount first, then configure the following files :

  • /etc/security/pam_mount.conf.xml (only line added is the volume one)
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE pam_mount SYSTEM "pam_mount.conf.xml.dtd">
<!--
	See pam_mount.conf(5) for a description.
-->

<pam_mount>

		<!-- debug should come before everything else,
		since this file is still processed in a single pass
		from top-to-bottom -->

<debug enable="0" />

		<!-- Volume definitions -->

<volume user="USERNAME" fstype="auto" path="/dev/nvme0n1p4" mountpoint="/home" options="fsck,noatime" />

		<!-- pam_mount parameters: General tunables -->

<!--
<luserconf name=".pam_mount.conf.xml" />
-->

<!-- Note that commenting out mntoptions will give you the defaults.
     You will need to explicitly initialize it with the empty string
     to reset the defaults to nothing. -->
<mntoptions allow="nosuid,nodev,loop,encryption,fsck,nonempty,allow_root,allow_other" />
<!--
<mntoptions deny="suid,dev" />
<mntoptions allow="*" />
<mntoptions deny="*" />
-->
<mntoptions require="nosuid,nodev" />

<!-- requires ofl from hxtools to be present -->
<logout wait="0" hup="no" term="no" kill="no" />


		<!-- pam_mount parameters: Volume-related -->

<mkmountpoint enable="1" remove="true" />


</pam_mount>
  • /etc/pam.d/system-auth (only lines added are the ones with pam_mount.so)
auth		required	pam_env.so 
auth		required	pam_unix.so try_first_pass likeauth nullok 
auth		optional	pam_mount.so
auth		optional	pam_permit.so

account		required	pam_unix.so 
account		optional	pam_permit.so

password	optional	pam_mount.so
password	required	pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3 
password	required	pam_unix.so try_first_pass use_authtok nullok sha512 shadow 
password	optional	pam_permit.so

session		optional	pam_mount.so
session		required	pam_limits.so 
session		required	pam_env.so 
session		required	pam_unix.so 
session		optional	pam_permit.so

That’s it, easy heh ?! When you login as your user, pam_mount will decrypt your home partition using your user’s password and mount it on /home !

UEFI booting your Gentoo Linux

The best (and weird ?) way I found for booting the installed Gentoo Linux and its kernel is to configure the UEFI boot directly from the XPS BIOS.

The idea is that the BIOS can read the files from the EFI boot partition since it is formatted as FAT32. All we have to do is create a new boot option from the BIOS and configure it to use the kernel file stored in the EFI boot partition.

  • reboot your machine
  • get on the BIOS (hit F2)
  • get on the General / Boot Sequence menu
  • click Add
  • set a name (like Gentoo 4.3.3) and find + select the kernel file (use the integrated file finder)

[Photo: BIOS boot sequence screen with the new boot entry]

  • remove all unwanted boot options

[Photo: BIOS boot sequence after removing unwanted entries]

  • save it and reboot

Your Gentoo kernel and OpenRC will be booting now !
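
If you prefer to create that boot entry from the chroot rather than clicking through the BIOS, efibootmgr can do the same thing. This is only a sketch: it assumes efivars are accessible from the chroot, /dev/nvme0n1p1 is the EFI partition and the kernel file carries the default genkernel name, so adjust all three to your setup.

(chroot) # emerge sys-boot/efibootmgr
(chroot) # efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "Gentoo 4.3.3" \
    --loader '\kernel-genkernel-x86_64-4.3.3-gentoo'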

Suggestions, corrections, enhancements ?

As I said, I wrote all this quickly to spare some time to whoever it could help. I’m sure there are a lot of improvements to be done still so I’ll surely update this article later on.

January 01, 2016

Most of us use email rather frequently; however, how email is used varies substantially across groups of people. The differences are especially noticeable when it comes to the etiquette of quotation and threading, which becomes particularly important for keeping track of the various threads as the volume of information increases.

Some groups are inherently better than others, and some email clients encourage better practices than others, but in the end it all comes down to the choices users make. Not surprisingly, developers tend to have a better grasp of email etiquette, but what can we learn from this?

HTML emails

Let's start off with HTML emails. I mean, seriously, disable this at once. Email should be text only, and if that isn't sufficient, the content likely should be an external reference or an attachment. HTML emails don't provide any obvious advantage over text email, but have many downsides: in particular, external loading of resources leads to privacy issues, and the possibility of executing scripts leads to security vulnerabilities. Having HTML, especially in combination with scripting or external resource loading, is as such only negative, not to mention that you can't really work and compose a response offline.

If you use HTML only for text formatting, there are common practices for styling plain-text emails that remove most of the need for it. The following are a few tricks for bold, underlined and italic text that sane email clients will render accordingly.

  • *bold text* – asterisks around the text indicate bold
  • _underlined text_ – underscores on both sides of the text indicate underlining
  • /italic text/ – slashes around the text indicate italics

[Screenshot: bold, underlined and italic text as rendered by the mail client]

Proper quotation

So, once you start writing proper emails in plain text only, the question of quotation comes to mind. Do you ever top-post? If so, why would you do such a silly thing? Wikipedia has some more information on this quoting style, but I prefer to keep to the basics:

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Reading in the opposite order does something funky with your brain, doesn't it? Why would you want your readers to suffer like that just to get the context of a conversation, in particular if this is one of 100 different threads they are actually following?

Continuing from that: quotes should be properly nested using the ">" character. That is how it has been done for several decades, and newcomers to email not following the practice are just an annoyance. Proper email clients will interpret this and display it accordingly:
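For instance, a reply two levels deep might look like this in plain text (a made-up exchange):

>> Should we enable the new feature by default?
> I would rather keep it opt-in for the first release.
Agreed, let's revisit the default once we have more feedback.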

[Screenshot: nested quotation as displayed by a mail client]

On that same note, remove information that isn't relevant when responding to an email. Reading through unrelated information just increases the workload of every one of your recipients, so just... don't. The few extra seconds it takes you to trim out the unrelated parts result in an N-fold amplification of savings for the readers, depending on the number of people involved (and the number of times the email is accessed).

Threading

Etiquette also comes to mind with regard to threading. On some mailing lists proper threading is vital in order to follow the discussion, and someone using an email client that doesn't set the In-Reply-To and References headers (such as Google's standard Android email client) breaks it. A user that posts a message that breaks the thread can reasonably expect to be cursed at and told to get a proper email client.
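For reference, a well-behaved client records the relationship in the message headers, roughly like this (Message-IDs made up for illustration):

Message-ID: <reply-0002@example.org>
In-Reply-To: <original-0001@example.org>
References: <original-0001@example.org>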

[Screenshot: a properly threaded discussion in a mail client]

A sub-topic of proper threading: if, during the course of the discussion, you create a sub-thread that actually focuses on something different from the initial one, then, for crying out loud, change the subject of the email to reflect this. It makes the discussion easier to keep track of and to look up in the archives later.

OpenPGP signature and encryption

I've likely said enough about this topic already, but any post about email is required to cover the need for proper authentication, integrity and confidentiality of information. So if you don't use OpenPGP / GnuPG / PGP to protect your information (and, going forward, hopefully with a Memory Hole compliant client), well, don't expect too much of a positive response from me, at least.

Diaspora*: A different social community model (January 01, 2016, 15:01 UTC)

One of the talks at 32C3, titled "A new kid on the block", was about Diaspora* and the social networking effects required to build alternatives to existing social network structures. Now, I must admit I hadn't paid too much attention to Diaspora* in the past, despite it having been around for quite a while, but now I got more curious and set up my own pod to test it a bit, with the added side benefit that I can stop using Hootsuite to publish blog posts to Twitter and Facebook, as this can be integrated directly into the service.

So, what is Diaspora*? The official website focuses on three aspects:

  • Decentralization: Instead of everyone’s data being contained on huge central servers owned by a large organization, local servers (“pods”) can be set up anywhere in the world. You choose which pod to register with - perhaps your local pod - and seamlessly connect with the diaspora* community worldwide.
  • Freedom: You can be whoever you want to be in diaspora*. Unlike some networks, you don’t have to use your real identity. You can interact with whomever you choose in whatever way you want. The only limit is your imagination. diaspora* is also Free Software, giving you liberty to use it as you wish.
  • Privacy: In diaspora* you own your data. You do not sign over any rights to a corporation or other interest who could use it. With diaspora*, your friends, your habits, and your content is your business ... not ours! In addition, you choose who sees what you share, using Aspects.

My own Diaspora* page can be seen at social.sumptuouscapital.com. Time will show whether that increases my activity on social networks in general. As participating on Diaspora* requires access to a pod, if you are an acquaintance of mine and want to sign up, send me a message and I'll arrange for an invite. For others, there are plenty of publicly available pods that can be used, including those in this list.

December 31, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
uWSGI v2.0.12 (December 31, 2015, 14:18 UTC)

It’s been a long time since I made a blog post about a uWSGI release, but this one is special to me because it contains some features I asked a colleague of mine for.

For his first contributions to a big Open Source project, our fellow @shir0kamii added two features (spooler_get_task and -if-hostname-match) which were backported in this release and which we had needed at work for quite a long time: congratulations again :)

Highlights

  • official PHP 7 support
  • uwsgi.spooler_get_task API to easily read and inspect a spooler file from your code
  • -if-hostname-match regexp in uWSGI configuration files to allow a more flexible configuration based on the hostname of the machine (see the sketch below)
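As a reminder of how uWSGI's configuration logic works, the new option could be used roughly like this (a hypothetical sketch, hostnames and values made up):

[uwsgi]
; apply the following options only on hosts whose hostname matches the regexp
if-hostname-match = ^prod-web-[0-9]+$
workers = 8
harakiri = 30
endif =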

Of course, all of this is already available on Gentoo Linux!

Full changelog here as usual.

Michal Hrusecky a.k.a. miska (homepage, bugs)
Getting IPv6 for your ownCloud (December 31, 2015, 10:32 UTC)

As you may have read, I joined ownCloud's PiDrive effort. I like the idea, and we were brainstorming on the mailing list about what we could do. One notion really stood out: if you have ownCloud at home, you might be interested in reaching your home cloud from anywhere you go. And if you don't have a public IPv4 address, or you don't want to forward public ports from the router, you might be interested in getting IPv6 for your home cloud. It can be pretty easy, both on your home cloud and on your notebook. I would like to talk about a few options I considered, and how and which of them I decided to integrate (also into an ownCloud app that you can use anywhere).

Overview of options

Native IPv6

You might be lucky and get an IPv6 prefix directly from your Internet provider. I know mine offers it. But I also know that quite a few providers are trying to fight the inevitable future and keep postponing IPv6 deployment. On the other hand, I heard that there are some providers that have migrated fully to IPv6 and no longer offer IPv4, just NAT64. If you have native IPv6, you probably know about it, and if your router is set up correctly, your home NAS as well as your other computers will get it automatically. So for those lucky ones, my app will just detect IPv6 and display the address you can use to connect to your cloud.
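Checking by hand boils down to something like the following, which is roughly what a detection script would look at:

ip -6 addr show scope global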

6to4

A kinda nice way to get IPv6, but it requires a public IPv4 address and some advanced setup on the device that has this IP. So it's probably nothing the average Joe will do. But if he does, and propagates it to his PiDrive, the app will detect it and show it.
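For reference, the "advanced setup" typically boils down to a sit tunnel towards the 6to4 anycast relay. A sketch, assuming a public IPv4 address of 203.0.113.5 (so the 6to4 prefix becomes 2002:cb00:7105::/48):

ip tunnel add tun6to4 mode sit remote any local 203.0.113.5 ttl 64
ip link set dev tun6to4 up
ip -6 addr add 2002:cb00:7105::1/48 dev tun6to4
ip -6 route add 2000::/3 via ::192.88.99.1 dev tun6to4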

Tunnel from IPv6 broker

This is quite a popular option. You register with some tunnel broker (like HE or SixXS) and they will give you a fixed IPv6 address, or a range of IPv6 addresses, that you can use however you want. You always get the same IP no matter where you are, and setup is usually pretty easy; the only tricky part is registration, which often requires you to fill in quite a few personal details. I was thinking about providing this as an option on the PiDrive, but the need to register somewhere and to choose a broker sounded quite bothersome, so I decided to mainly support the last option I'm going to mention, which I consider the most user-friendly. But as in the previous case, if you set this up, the app will detect the IP address you got this way and display it, and this option might be added later on as I consider it quite useful.

Teredo

Teredo is, I would say, the easiest way (from the end user's point of view) to get IPv6. You just need to run a client on your machine, and it will figure everything out and assign you an IPv6 address that you can use. It works behind NAT and you don't have to know anything; it will figure out everything, even the closest relay to use. There are some disadvantages. There is overhead in figuring all of this out, the protocol itself has some overhead, and on top of that your IP depends on the public IP of your NAT and how it handles your traffic. It also depends on the port you were assigned by your NAT, so your IP will likely change with every reboot. But overall I think it is a viable solution for end users if used together with some DynDNS service (where you would also need to register, but mostly with less personal info). It's on my TODO list to add DynDNS support as well (I have already started working on it, but really just started, so nothing is published yet).
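On Linux the usual Teredo client is miredo, so on a Gentoo-based device getting a Teredo address is roughly (a sketch; "teredo" is miredo's default interface name):

emerge -av net-misc/miredo
rc-service miredo start
ip -6 addr show dev teredo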

Conclusion

I think making it easy for home users to enable IPv6 on their home cloud, and educating people about how to get IPv6 on their other machines, is probably the best way to let people have their data available everywhere they go. And I hope the app I'm working on will help to achieve that.

December 30, 2015
32C3 (December 30, 2015, 21:18 UTC)

This year I participated in the Chaos Computer Club's annual congress for the first time, despite it being the 32nd such event, hence the name 32C3. This year's event had the subtitle "Gated Communities" and, like last year, was held in Hamburg after having been in Berlin for a while. By this point I expect many have written the event off as a nerd gathering of hackers, which, well, in many ways it is, but that needs some qualification. The number of visitors exceeds 12,000, so this is a large event, lasting four days from the 27th to the 30th of December each year, and if you look deeper it is actually a family event for many, with its own events for teaching children technology and a childspace that includes games using technology to represent position or sound in order to control ping-pong games. Picture taking is of course prohibited throughout the conference unless you get explicit permission from all involved parties (as it should be in the rest of society).

Presentations this year were organized in four main tracks, starting at 11:30 and going as late as 2am. It is a somewhat interesting experience to attend a lecture on "A gentle introduction to post-quantum cryptography" by Dan Bernstein and Tanja Lange at 23:00 - 00:00 and have a full lecture hall. I wonder how many universities would see the same result.

Don't worry though: if you miss a lecture, the video streaming is one of the better setups you can encounter, split into multiple offerings: (i) a live stream, (ii) a Re-Live, an unmodified version of the stream that can be watched later, and (iii) a released video of the talk that is properly mastered and of better quality. So if you want to watch the aforementioned talk on PQC you can do so at any time.

As a disproportionate number of my acquaintances focus on the legal field rather than technology itself, let's continue with a good talk by Max Schrems on suing Facebook over Safe Harbor and data protection, going all the way to the European Court of Justice. Or maybe you want to learn more about the legal ambiguities surrounding Sealand, the processes involved in creating your own country, and the operational failures of data havens?

If you want to mix in the more technological part, how about a wrap-up of Crypto Wars part II and comparisons to the 1990s? For those not having spent too much time looking into the first one, one particularly bad idea was the Clipper chip for key escrow, but what is curious is that the same arguments are being used now as then. The FBI/NSA and other governmental agencies want unfettered access to encrypted email and blame cryptography for their failures, even though those involved in the recent events in Paris and San Bernardino actually used unencrypted communication and the security services never picked up anything. As such, they, along with politicians, use Fear, Uncertainty, and Doubt (FUD) to make their case. It's typical of politicians to think that the problem is the rhetoric or the name rather than the underlying substance, and as a result we see discussions of a "secure golden key" or a "front door" instead of a "back door" to cryptography. The attempts of governments in the first crypto wars of course still influence us today, in particular through the export restrictions for which, until recently, various libraries still carried compatibility, allowing for downgrade attacks. A good talk by J. Alex Halderman and Nadia Heninger on Logjam underlines why attempting to undermine encryption is a bad thing even decades later.

What people seem to forget is that encryption is required for the e-commerce we use every day. Who would ever connect to an internet banking application if their neighbour could be monitoring all account information and traffic? And the right to privacy is even established under the Universal Declaration of Human Rights (article 12), alongside article 19's freedom of expression: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers".

The United Kingdom (UK) is coming off this debate in a particularly bad way with Cameron's Snooper's Charter; in particular §189(4)(c), "Operators may be obliged to remove "electronic protection" if they provide ...", seems worrying. This is followed by Australia, where simply explaining an algorithm to someone can result in penalization. But none of these beats India, which requires a copy of the plain text to be retained for a minimum of 90 days when sending an encrypted message.

This level of tyranny from various oppressive governments nicely sets the stage for the presentation of North Korea's Red Star operating system and the various ways the operating system, styled to mimic Apple's Mac OS, is used to spy on and keep down its people. Of particular interest are the watermarking technology and the censoring application that forms part of the "anti-virus" (well, the red star icon of it could be a hint).

All in all, this is just a minimal representation of some of the interesting aspects of this conference. Not surprisingly, the most used operating systems among the visitors (at least those connected to the network) were GNU/Linux (24.1%) and Android (17.6%), and if you want to see the talk about Windows 10 acting as a botnet, that video is available as well.

December 29, 2015
Michal Hrusecky a.k.a. miska (homepage, bugs)
PiDrive unboxing (December 29, 2015, 18:41 UTC)

Not so long ago ownCloud announced their cooperation with Western Digital. The outcome is the PiDrive – basically a home NAS solution: an ARM board (Raspberry Pi 2) connected to an HDD. And with the announcement of the cooperation came a challenge: the community was asked to come up with ideas about what they would do with it. Whoever was interested in working on the image that will be shipped as part of the final solution was offered a prototype of the device. I was one of the guys brainstorming about what to do with it. I had some ideas and have already started working on some of them. More about them and the progress later. For now I want to share some pictures of the PiDrive (as I have already received the prototype) – the obligatory unboxing and a few thoughts on the hardware.

Important note – this whole blog post is about a prototype. The final device can be totally different.

So what do we have here? Let’s start with the ARM board. The Raspberry Pi 2 has some advantages and some disadvantages judging from the specs alone (I have just opened the box, not booted it up yet). It has only USB 2.0 (while the drive itself supports USB 3.0) and only 100 Mbit Ethernet, which is not that much nowadays. With a Banana Pi, the hard drive could be attached directly via a SATA port and it would have 1 Gbit Ethernet. But on the other hand, four cores can be quite useful on a device that is expected to run a webserver. And if we learned anything from the Raspberry Pi, it is that marketing matters a lot, and thanks to it there will be a huge community around the Raspberry Pi 2 and thus plenty of interesting projects (and peripherals) can later come to the PiDrive.

[See image gallery at michal.hrusecky.net]

That kinda brings us to the box. As you can see in the pictures, the prototype actually comes with two boxes, one black and the other whitish. Both of them are translucent, so you can see when some LED on your Pi is on or when the hard drive is doing something. Both cases look the same except for the colour. A really great idea is how all the cables that go out of the Pi are handled. As you can guess, some are pretty much mandatory – like power and Ethernet – but there can be plenty of optional ones: more USB devices (keyboard, mouse, …), HDMI, some GPIO-attached devices, maybe more. There would have to be plenty of holes to support all of them. The box solves the problem by letting you plug everything in and then having a long and narrow opening in the back through which you can guide out as many cables as you want. I really like the design of the box, except for one thing: I kinda miss the top of the box. In my case the top is open. You can see inside, which is nice on one hand, but dust will fall in quite heavily. I hope this will be fixed in the final version, or maybe my devkit was just missing the lid. One other small issue I have with the box is that the stand that holds the Pi and the drive (board to board) doesn’t have hard plastic everywhere between them, and I’m kinda worried that both boards could touch if they vibrate enough and short out; but maybe it’s well tested and impossible and I’m just worrying for no reason. But despite this criticism, I really like the boxing of the PiDrive. And I will somehow create a top for the box myself.

One last thing I haven’t mentioned yet is the hard drive itself. It is a 2.5″ 1 TB WD drive. More specs once I get the device booting. There is also a really cool cable attached to it: it has one power input, one power output as micro-USB (which goes to the Raspberry Pi), one USB plug that goes into the Pi’s USB port to connect it to the drive, and one micro-USB 3 plug that goes to the drive (both power and data). Kinda cool how it connects power and data while having four heads.

Overall I really like the idea and the project. There is a lot of work that needs to be done, but I’m happy that I’ll be part of the effort. So take a look at the pictures for now, and I will write another blog post once I have something up and running and some practical experience with the setup.

December 28, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)
SCALE 14x (December 28, 2015, 20:30 UTC)

I don’t know about you but I’m going to SCALE 14x.

[Banner: I'm going to SCALE 14x!]

SCALE is always a lot of fun and last year I particularly liked their new embedded track. This year’s schedule looks equally exciting. Among the speakers you may recognize former Gentoo developers as well as a few OSS celebrities.

We’ll have a booth again this year. Feel free to come talk to us or help us at the booth. One of the new things this year is that we’re getting a Gentoo-specific promotional code for a 50% rebate on full-access passes for the entire event. The code is GNTOO. It’s not only for Gentoo developers, but also for all Gentoo users, potential users, new-year-resolution future users, friends, etc… Use it! Spread it! I’m looking forward to seeing you all there.

Michal Hrusecky a.k.a. miska (homepage, bugs)
openSUSE Summit Asia 2015 (December 28, 2015, 09:00 UTC)

Last year the first ever openSUSE Summit Asia took place in Beijing. From all the reports it sounded really awesome and I regretted that I couldn’t go. This year I was lucky enough to manage it: I was selected to give a board keynote and I got some of my travel expenses sponsored by the Travel Support Programme (big thanks!). So how was this year’s openSUSE Summit Asia from my point of view? In short, amazing :-) In long, read on…

Let’s start with my Taiwan trip. I had never been to Taiwan before, and visiting this country, which is quite different from my homeland, was amazing by itself. Lots of small shops everywhere, old and new temples between modern shopping centers, scooters everywhere, free WiFi in tourist spots in Taipei… And friendly people everywhere. One really handy thing I noticed is that every underground station, some buses and a few other places have charging stations for your notebook or cellphone. That is only starting to happen in my home town, and in the past there were several occasions when I could really have used something like that. In Taipei you quite often see people recharging their devices so they can continue travelling :-) Another really interesting thing is a kind of Foursquare predecessor: in plenty of places there is a rubber stamp you can use to put a mark in your log – at underground stations, all tourist offices, tourist attractions and more. So as you travel, you build a log of the places you have been to. Really interesting compared to all the high-tech stuff you see around. I could go on about how amazing Taiwan was from a tourist’s point of view, but let’s get to the main reason I was there – openSUSE Summit Asia.

Everything started on Friday with the openSUSE Leap release party. It was combined with an Ubuntu party. Both communities mixed well together and it was obvious that they know each other well and are friends. As everywhere in Taiwan, everybody was really friendly. I met a guy from Canonical who had an Ubuntu phone; I showed him my Jolla, he showed me his Ubuntu phone, we talked, and it gave me something to think about. I had been wondering what to buy as my next phone, as Jolla was in deep trouble back then. Luckily they are now out of the dark, so I can look forward to the next Jolla phone, but the Ubuntu phone is still interesting, especially as it can probably be converted into an openSUSE phone :-) There was some talking in Chinese by the host that everybody seemed to enjoy, and then lightning talks. One Debian guy did a presentation of a C web framework, and afterwards I stepped up and presented my favourite C++ web framework. We had some pizza, some local food, something to drink and even a birthday celebration for one of the community members. It was a lot of fun.

The summit itself took place over the weekend. There were plenty of talks and workshops, some in Chinese, some in English. I gave the keynote on behalf of the openSUSE Board. Then I tried attending the Chinese workshop, but I got lost quite quickly, so I ended up following only the English part of the track. Regardless, there were some really interesting talks. One I would like to point out, because it emphasises the importance of having a summit in Asia, was a talk and long discussion regarding input methods. I found out that input methods vary a lot: some use English transcription, some map keys on an English keyboard to Chinese signs (I’m not sure whether full characters). It varies a lot and it is not that easy to set up, although now it is easier thanks to Chameleon tongue :-) And it is the type of problem that is not that interesting in Europe or America, but is really important in Asia.

Apart from the talks, there were, as at every conference, a lot of corridor conversations, connecting with people and making new friends. We even had a visit from a lizardy mascot :-) I learned what to visit in Taiwan, where to buy stuff, how the conference infrastructure is set up, that there is an ongoing effort to publish an openSUSE magazine in Japanese, and much more. It was a lot of fun. As proof of how much fun it was, you can take a look at the pictures taken during the conference.

December 19, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
XScreenSaver unlock dialog tuning (December 19, 2015, 19:42 UTC)

I’m having a bit of trouble accepting that one of the dialogs that is presented to me as frequently as the XScreenSaver unlock window below is by far the least shiny part of my daily Linux desktop experience.


Tuning just the knobs that XScreenSaver already comes with, I eventually got to this point:


The logo still is too much noise and the font still lacks anti-aliasing. However most of the text noise, the pre-90s aesthetics and the so-called thermometer are gone.

To bring it to your desktop, use this content for ~/.Xdefaults:

xscreensaver.dateFormat:
xscreensaver.passwd.body.label:
xscreensaver.passwd.heading.label:
xscreensaver.passwd.login.label:
xscreensaver.passwd.thermometer.width: 2
xscreensaver.passwd.uname: False 
xscreensaver.passwd.unlock.label:

xscreensaver.Dialog.background:         #000000
xscreensaver.Dialog.foreground:         #ffffff
xscreensaver.Dialog.Button.background:  #000000
xscreensaver.Dialog.Button.foreground:  #ffffff
xscreensaver.Dialog.text.background:    #000000
xscreensaver.Dialog.text.foreground:    #ffffff

xscreensaver.Dialog.shadowThickness: 1
xscreensaver.Dialog.topShadowColor:     #000000
xscreensaver.Dialog.bottomShadowColor:  #000000

and run

xrdb < ~/.Xdefaults  && xscreensaver-command -restart

as advised by the XScreenSaver Manual.

For other approaches, I’m only aware of this one: xscreensaver lock window themes. Please comment below if you know about other approaches. Thank you!

PS: The screensaver in the background is Fireflies. For a Debian package, you can run make deb from a Git clone.

December 11, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)
What got us into the SHA1 deprecation mess? (December 11, 2015, 14:24 UTC)

Important notice: After I published this text Adam Langley pointed out that a major assumption is wrong: Android 2.2 actually has no problems with SHA256-signed certificates. I checked this myself and in an emulated Android 2.2 instance I was able to connect to a site with a SHA256-signed certificate. I apologize for that error, I trusted the Cloudflare blog post on that. This whole text was written with that assumption in mind, so it's hard to change without rewriting it from scratch. I have marked the parts that are likely to be questioned. Most of it is still true and Android 2 has a problematic TLS stack (no SNI), but the specific claim regarding SHA256-certificates seems wrong.

This week both Cloudflare and Facebook announced that they want to delay the deprecation of certificates signed with the SHA1 algorithm. This spurred some hot debates about whether or not this is a good idea – with two seemingly good causes colliding: on the one side people want to improve security, on the other side access to webpages should remain possible for users of old devices, many of them living in poor countries. I want to give some background on the issue and ask why that unfortunate situation happened in the first place, because I think it highlights some of the most important challenges in the TLS space and more generally in IT security.

SHA1 broken since 2005

The SHA1 algorithm is a cryptographic hash algorithm and it has been known for quite some time that its security isn't great. In 2005 the Chinese researcher Wang Xiaoyun published an attack that would allow creating a collision for SHA1. The attack wasn't practically tested, because it is quite expensive to do so, but it was clear that a financially powerful adversary would be able to perform such an attack. A year earlier the even older hash function MD5 had been broken practically, and in 2008 this led to a practical attack against the issuance of TLS certificates. In the past years browsers pushed for the deprecation of SHA1 certificates and it was agreed that starting January 2016 no more certificates signed with SHA1 may be issued; instead the stronger algorithm SHA256 should be used. Many felt this was already far too late, given that it's been ten years since we knew that SHA1 is broken.
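If you are curious which signature algorithm a given site's certificate uses today, a quick way to check is (example.com is just a placeholder):

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep 'Signature Algorithm'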

A few weeks before the SHA1 deadline Cloudflare and Facebook now question this deprecation plan. They have some strong arguments. According to Cloudflare's numbers there is still a significant number of users that use browsers without support for SHA256-certificates. And those users are primarily in relatively poor, repressive or war-ridden countries. The top three on the list are China, Cameroon and Yemen. Their argument, which is hard to argue with, is that cutting off SHA1 support will primarily affect the poorest users.

Cloudflare and Facebook propose a new mechanism to get legacy validated certificates. These certificates should only be issued to site operators that will use a technology to separate users based on their TLS handshake and only show the SHA1 certificate to those that use an older browser. Facebook already published the code to do that, Cloudflare also announced that they will release the code of their implementation. Right now it's still possible to get SHA1 certificates, therefore those companies could just register them now and use them for three years to come. Asking for this legacy validation process indicates that Cloudflare and Facebook don't see this as a short-term workaround, instead they seem to expect that this will be a solution they use for years to come, without any decided end date.

It's a tough question whether or not this is a good idea. But I want to ask a different question: Why do we have this problem in the first place, why is it hard to fix and what can we do to prevent similar things from happening in the future? One thing is remarkable about this problem: It's a software problem. In theory software can be patched and the solution to a software problem is to update the software. So why can't we just provide updates and get rid of these legacy problems?

Windows XP and Android Froyo

According to Cloudflare there are two main reasons why so many users can't use sites with SHA256 certificates: Windows XP and old versions of Android (SHA256 support was added in Android 2.3, so this affects mostly Android 2.2 aka Froyo). We all know that Windows XP shouldn't be used any more and that its support ended in 2014. But that clearly clashes with reality. People continue using old systems, and Windows XP is still alive in many countries, especially in China.

But I'm inclined to say that Windows XP is probably the smaller problem here. With Service Pack 3, Windows XP introduced support for SHA256 certificates. By using an alternative browser (Firefox is still supported on Windows XP if you install SP3) it is even possible to have a relatively safe browsing experience. I'm not saying that I recommend it, but given the circumstances, advising people how to update their machines and to install an alternative browser can partly provide a way to reduce the reliance on broken algorithms.

The Updatability Gap

But the problem with Android is much, much worse, and I think this brings us to probably the biggest problem in IT security we have today. For years one of the most important messages to users in IT security was: Keep your software up to date. But at the same time the industry has created new software ecosystems where very often that just isn't an option.

In the Android case Google says that it's the responsibility of device vendors and carriers to deliver security updates. The dismal reality is that in most cases they just don't do that. But even if device vendors are willing to provide updates it usually only happens for a very short time frame. Google only supports the latest two Android major versions. For them Android 2.2 is ancient history, but for a significant portion of users it is still the operating system they use.

What we have here is a huge gap between the time frame devices get security updates and the time frame users use these devices. And pretty much everything tells us that the vendors in the Internet of Things ignore these problems even more and the updatability gap will become larger. Many became accustomed to the idea that phones get only used for a year, but it's hard to imagine how that's going to work for a fridge. What's worse: Whether you look at phones or other devices, they often actively try to prevent users from replacing the software on their own.

This is a hard problem to tackle, but it's probably the biggest problem IT security is facing in the upcoming years. We need to get a working concept for updates – a concept that matches the real world use of devices.

Substandard TLS implementations

But there's another part of the SHA1 deprecation story. As I wrote above since 2005 it was clear that SHA1 needs to go away. That was three years before Android was even published. But in 2010 Android still wasn't capable of supporting SHA256 certificates. Google has to take a large part of the blame here. While these days they are at the forefront of deploying high quality and up to date TLS stacks, they shipped a substandard and outdated TLS implementation in Android 2. (Another problem is that all Android 2 versions don't support Server Name Indication, a technology that allows to use different certificates for different hosts on one IP address.)

This is not the first such problem we are facing. With the POODLE vulnerability it became clear that the old SSL version 3 is broken beyond repair and it's impossible to use it safely. The only option was to deprecate it. However doing so was painful, because a lot of devices out there didn't support better protocols. The successor protocol TLS 1.0 (SSL/TLS versions are confusing, I know) was released in 1999. But the problem wasn't that people were using devices older than 1999. The problem was that many vendors shipped devices and software that only supported SSLv3 in recent years.

One example was Windows Phone 7. In 2011 this was the operating system on Microsoft's and Nokia's flagship product, the Lumia 800. Its mail client is unable to connect to servers not supporting SSLv3. It is just inexcusable that in 2011 Microsoft shipped a product which only supported a protocol that was deprecated 12 years earlier. It's even more inexcusable that they refused to fix it later, because it only came to light when Windows Phone 7 was already out of support.

The takeaway from this is that sloppiness from the past can bite you many years later. And this is what we're seeing with Android 2.2 now.

But you might think that given these experiences this has stopped today. It hasn't. The largest deployer of substandard TLS implementations these days is Apple. Up until recently (before El Capitan) Safari on OS X didn't support any authenticated encryption cipher suites with AES-GCM and relied purely on the CBC block mode. The CBC cipher suites are a hot candidate for the next deprecation plan, because in 2013 the Lucky 13 attack (http://www.isg.rhul.ac.uk/tls/Lucky13.html) showed that they are really hard to implement safely. The situation for applications other than the browser (Mail etc.) is even worse on Apple devices. They only support the long deprecated TLS 1.0 protocol – and that's still the case on their latest systems.

There is widespread agreement in the TLS and cryptography community that the only really safe way to use TLS these days is TLS 1.2 with a cipher suite using forward secrecy and authenticated encryption (AES-GCM is the only standardized option for that right now, however ChaCha20/Poly1305 will come soon).

Conclusions

For the specific case of the SHA1 deprecation I would propose the following: Cloudflare and Facebook should go ahead with their handshake workaround for the next few years, as long as their current certificates are valid. But this time should be used to find solutions. Reach out to those users visiting your sites and try to understand what could be done to fix the situation. For the Windows XP users this is relatively easy – help them update their machines and preferably install another browser, most likely that'd be Firefox. For Android there is probably no easy solution, but we have some of the largest Internet companies involved here. Please seriously ask the question: Is it possible to retrofit Android 2.2 with a reasonable TLS stack? What ways are there to get that onto the devices? Is it possible to install a browser app with its own TLS stack on at least some of those devices? This probably doesn't work in most cases, because on many cheap phones there just isn't enough space to install large apps. In the long term I hope that the tech community will start tackling the updatability problem.

In the TLS space I think we need to make sure that no more substandard TLS deployments get shipped today. Point out the vendors that do so and pressure them to stop. It wasn't acceptable in 2010 to ship TLS with long-known problems and it isn't acceptable today.

Image source: Wikimedia Commons

November 30, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)
A little POODLE left in GnuTLS (old versions) (November 30, 2015, 19:32 UTC)

tl;dr: Older GnuTLS versions (2.x) fail to check the first byte of the padding in CBC modes. Various stable Linux distributions, including Ubuntu LTS and Debian wheezy (oldstable), use this version. Current GnuTLS versions are not affected.

A few days ago an email on the ssllabs mailing list caught my attention. A Canonical developer had observed that the SSL Labs test would report the GnuTLS version used in Ubuntu 14.04 (the current long-term support version) as vulnerable to the POODLE TLS vulnerability, while other tests for the same vulnerability showed no such issue.

A little background: The original POODLE vulnerability is a weakness of the old SSLv3 protocol that's now officially deprecated. POODLE is based on the fact that SSLv3 does not specify the padding of the CBC modes and the padding bytes can contain arbitrary bytes. A while after POODLE Adam Langley reported that there is a variant of POODLE in TLS, however while the original POODLE is a protocol issue the POODLE TLS vulnerability is an implementation issue. TLS specifies the values of the padding bytes, but some implementations don't check them. Recently Yngve Pettersen reported that there are different variants of this POODLE TLS vulnerability: Some implementations only check parts of the padding. This is the reason why sometimes different tests lead to different results. A test that only changes one byte of the padding will lead to different results than one that changes all padding bytes. Yngve Pettersen uncovered POODLE variants in devices from Cisco (Cavium chip) and Citrix.

I looked at the Ubuntu issue and found that this was exactly such a case of an incomplete padding check: The first byte wasn't checked. I believe this might explain some of the vulnerable hosts Yngve Pettersen found. This is the code:

for (i = 2; i <= pad; i++)
{
if (ciphertext.data[ciphertext.size - i] != pad)
pad_failed = GNUTLS_E_DECRYPTION_FAILED;
}


The padding in TLS is defined such that the rightmost byte of the last block contains the length of the padding. This value is also used in all padding bytes. However, the length field itself is not part of the padding. Therefore if we have e.g. a padding length of three this would result in four bytes with the value 3. The above code misses one byte: i goes from 2 (the byte at block length minus 2) to pad (the byte at block length minus pad), which checks pad minus one bytes. To correct it we need to change the loop to end at pad+1. The code is completely reworked in current GnuTLS versions, therefore they are not affected. Upstream has officially announced the end of life for GnuTLS 2, but some stable Linux distributions still use it.
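The corrected loop would then look like this (my sketch of the one-line change just described, not the upstream rewrite):

for (i = 2; i <= pad + 1; i++)
{
/* also check the byte at position size - (pad + 1) */
if (ciphertext.data[ciphertext.size - i] != pad)
pad_failed = GNUTLS_E_DECRYPTION_FAILED;
}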

The story doesn't end here: after I found this bug I talked about it with Juraj Somorovsky. He mentioned that he had already read about this before, in the paper on the Lucky Thirteen attack, published in 2013 by Nadhem AlFardan and Kenny Paterson. Here's what the Lucky Thirteen paper has to say about this issue on page 13:

for (i = 2; i < pad; i++)
{
if (ciphertext->data[ciphertext->size - i] != ciphertext->data[ciphertext->size - 1])
pad_failed = GNUTLS_E_DECRYPTION_FAILED;
}


It is not hard to see that this loop should also cover the edge case i=pad in order to carry out a full padding check. This means that one byte of what should be padding actually has a free format.

If you look closely you will see that this code is actually different from the one I quoted above. The reason is that the GnuTLS version in question already contained a fix that was applied in response to the Lucky Thirteen paper. However what the Lucky Thirteen paper missed is that the original check was off by two bytes, not just one byte. Therefore it only got an incomplete fix reducing the attack surface from two bytes to one.

In a later commit this whole code was reworked in response to the Lucky Thirteen attack and there the problem got fixed for good. However that change never made it into version 2 of GnuTLS. Red Hat / CentOS packages contain a backport patch of those changes, therefore they are not affected.

You might wonder what the impact of this bug is. I'm not totally familiar with the details of all the possible attacks, but the POODLE attack gets increasingly harder if fewer bytes of the padding can be freely set. It most likely is impossible if there is only one byte. The Lucky Thirteen paper says: "This would enable, for example, a variant of the short MAC attack of [28] even if variable length padding was not supported." People who know more about crypto than I do should be left to judge whether this might be practically exploitable.

Fixing this bug is a simple one-line patch I have attached here. This will silence all POODLE checks, however this doesn't apply all the changes that were made in response to the Lucky Thirteen attack. I'm not sure if the code is practically vulnerable, but Lucky Thirteen is a tricky issue, recently a variant of that attack was shown against Amazon's s2n library.

The missing padding check for the first byte got CVE-2015-8313 assigned. Currently I'm aware of Ubuntu LTS (now fixed) and Debian oldstable (Wheezy) being affected.

November 29, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
lxc, ipv6 and iproute2 (November 29, 2015, 21:19 UTC)

Not so recently I got a soyoustart system, since it comes with an option to install Gentoo out of the box.

The machine comes with a single IPv4 address and a /64 of IPv6 addresses.

LXC

I want to use the box to host some of my flask applications (plaid mainly), keep some continuous integration instances for libav, and run some other experiments with compilers and libraries (such as musl, cparser and others).

Since Diego was telling me about lxc I picked it. It is simple, requires not much effort and in Gentoo we have at least some documentation.

Setting up

I followed the documentation provided and it worked quite well up to a point. The btrfs integration works as explained, creating new Gentoo instances just worked, setting up the network… required some effort.

Network woes

I have just a single IPv4 address and some IPv6 addresses, so why not leverage them? I decided to partition my /64 and use some of it, configured the bridge to take ::::1::1 and set up the container configuration like this:

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.4/16
lxc.network.ipv4.gateway = auto
lxc.network.ipv6 = ::::1::4/80
lxc.network.ipv6.gateway = auto
lxc.network.hwaddr = 02:00:ee:cb:8a:04

But the route to my container wasn’t advertised.

Having no idea why, I just kept poking around sysctl and iproute2 until I got it working with:

  • sysctl.conf:
  net.ipv6.conf.all.forwarding = 1
  net.ipv6.conf.eth0.proxy_ndp = 1

and the following in my container runner script:

ip -6 neigh add proxy ::::1::4 dev eth0

I know that at least a few other people have had this problem, hence this mini-post.

November 26, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Grafting history onto your Gentoo git clone (November 26, 2015, 23:14 UTC)

Somehow after a while I got a bit tired that my git checkout of the main Gentoo repository didn't have any real history available. So, here's how I got it back:

(Note, you may want a fast network connection for this.)

  • cd into the main directory of the Gentoo git checkout:
$ cd ~/Gentoo/gentoo
  • fetch 2GByte of converted cvs history into a new local branch "history-20150809-draft"
$ git fetch https://github.com/gentoo/gentoo-gitmig-20150809-draft.git master:history-20150809-draft
  • attach the last commit of the cvs history to the first commit of the new git era
$ echo 56bd759df1d0c750a065b8c845e93d5dfa6b549d 2ebda5cd08db6bdf193adaa6de33239a83a73af0 > .git/info/grafts
And done. :)

Should at some point in the future a new, improved (or "official") conversion of the cvs history become available, here's (untested) what to do:
  • fetch it in the same way into a new local cvs history branch, and 
  • modify the grafts file to now connect the last commit of the new local cvs history branch with the first commit of the git era. 
Once you are happy with the result, you can delete the old local cvs history branch and run "git prune", freeing up the space used by the now obsolete old conversion.
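In command form, those steps would look something like this (a sketch with a placeholder URL, branch name and commit hashes, since no such conversion exists yet):

$ git fetch https://example.org/gentoo-gitmig-new.git master:history-new-draft
$ echo <first-commit-of-git-era> <last-commit-of-history-new-draft> > .git/info/grafts
$ git branch -D history-20150809-draft && git prune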

Thanks to rich0 for providing the draft conversion (though unofficial so far) and to everyone else involved.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.7 (November 26, 2015, 18:52 UTC)

I’m more than two weeks late, but I’m very glad to announce the release of py3status v2.7, which features a lot of interesting stuff!

For this release I want to salute the significant work and help of Daniel Foerster (@pydsigner), who discovered and fixed a bug in the event detection loop.

The result is greatly improved click event detection and bar update speed, with largely reduced CPU consumption and less code!

Highlights

  • major performance and click event detection improvements by Daniel Foerster
  • support of %z on time and tztime modules fixes #110 and #123 thx to @derekdreery and @olhotak
  • directive %Z and any other failure in parsing the time and tztime modules format will result in using i3status date output
  • add ethernet, wireless and battery _first_ instance detection and support. thx to @rekoil for reporting on IRC
  • i3status.conf parser handles configuration values with the = char

New modules

  • new rt module: display ongoing tickets from RT queues
  • new xsel module: display xsel buffers, by umbsublime
  • new window_title_async module, by Anon1234

Modules enhancements

  • battery_level module: major improvements, documentation, add format option, by Maxim Baz
  • keyboard_layout module: color customisation, add format option, by Ali Mousavi
  • mpd_status module: fix connection leak, by Thomas Sanchez
  • pomodoro module: implement format option and add additional display features, by Christoph Schober
  • spotify module: fix support for new versions, by Jimmy Garpehäll
  • spotify module: add support for colors output based on the playback status, by Sondre Lefsaker
  • sysdata module: trim spaces in `cpu_temp`, by beetleman
  • whatismyip module: change default check URL and make it configurable

Thanks!

Once again, thanks to all the contributors listed above!

November 23, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr: Dell laptops come preinstalled with a root certificate and a corresponding private key. That completely compromises the security of encrypted HTTPS connections. I've provided an online check; affected users should delete the certificate.

It seems that Dell hasn't learned anything from the Superfish-scandal earlier this year: Laptops from the company come with a preinstalled root certificate that will be accepted by browsers. The private key is also installed on the system and has been published now. Therefore attackers can use Man in the Middle attacks against Dell users to show them manipulated HTTPS webpages or read their encrypted data.

The certificate, which is installed in the system's certificate store under the name "eDellRoot", gets installed by a software called Dell Foundation Services. This software is still available on Dell's webpage. According to the somewhat unclear description from Dell it is used to provide "foundational services facilitating customer serviceability, messaging and support functions".

The private key of this certificate is marked as non-exportable in the Windows certificate store. However this provides no real protection; there are tools to export such non-exportable certificate keys. A user of the platform Reddit has posted the key there.

For users of the affected Laptops this is a severe security risk. Every attacker can use this root certificate to create valid certificates for arbitrary web pages. Even HTTP Public Key Pinning (HPKP) does not protect against such attacks, because browser vendors allow locally installed certificates to override the key pinning protection. This is a compromise in the implementation that allows the operation of so-called TLS interception proxies.

I was made aware of this issue a while ago by Kristof Mattei. We asked Dell for a statement three weeks ago and didn't get any answer.

It is currently unclear what purpose this certificate served. However it seems unlikely that it was placed there deliberately for surveillance purposes. In that case Dell wouldn't have installed the private key on the system.

Only users of browsers or other applications that use the system's certificate store are affected. Among the common Windows browsers this affects Internet Explorer, Edge and Chrome. Firefox users are not affected, as Mozilla's browser has its own certificate store.

Users of Dell laptops can check if they are affected with an online check tool. Affected users should immediately remove the certificate in the Windows certificate manager. The certificate manager can be started by clicking "Start" and typing in "certmgr.msc". The "eDellRoot" certificate can be found under "Trusted Root Certification Authorities". You also need to remove the file Dell.Foundation.Agent.Plugins.eDell.dll; Dell has now posted instructions and a removal tool.
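If you prefer a command-line check, a PowerShell one-liner along these lines (just a check; removal works as described above) should show whether the certificate is present:

Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -match "eDellRoot" }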

This incident is almost identical to the Superfish incident. Earlier this year it became public that Lenovo had preinstalled a software called Superfish on its laptops. Superfish intercepts HTTPS connections to inject ads. It used a root certificate for that and the corresponding private key was part of the software. After that incident several other programs with the same vulnerability were identified; they all used a software module called Komodia. Similar vulnerabilities were found in other software products, for example in Privdog and in the ad blocker Adguard.

This article is mostly a translation of a German article I wrote for Golem.de.

Image source and license: Wistula / Wikimedia Commons, Creative Commons by 3.0

Update (2015-11-24): Second Dell root certificate DSDTestProvider

I just found out that there is a second root certificate installed with some Dell software that causes exactly the same issue. It is named DSDTestProvider and comes with a software called Dell System Detect. Unlike Dell Foundation Services, this one does not need a Dell computer to be installed, therefore it was trivial to extract the certificate and the private key. My online test now checks both certificates. This new certificate is not covered by Dell's removal instructions yet.

Dell has issued an official statement on their blog and in the comment section a user mentioned this DSDTestProvider certificate. After googling what DSD might be I quickly found it. There have been concerns about the security of Dell System Detect before, Malwarebytes has an article about it from April mentioning that it was vulnerable to a remote code execution vulnerability.

Update (2015-11-26): Service tag information disclosure

Another unrelated issue on Dell PCs was discovered in a tool called Dell Foundation Services. It allows webpages to read a unique service tag. There's also an online check.

November 21, 2015
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Code and Conduct (November 21, 2015, 18:54 UTC)

This is a sort of short list of checklists and a few ramblings in the wake of FOSDEM’s Code of Conduct discussions and the not exactly welcoming statements about how to perceive a Code of Conduct such as this one.

Code of Conduct and OpenSource projects

A Code of Conduct is generally considered a means to get rid of problematic people (and thus avoid toxic situations). I prefer to consider it a means to welcome people and provide good guidelines to newcomers.

Communities without a code of conduct tend to reject the idea of having one, thinking that one is only needed to solve the above-mentioned issue and that adding more bureaucracy would just give more leeway to Machiavellian ploys.

That is usually a problem since, no matter how good things are now, it takes just a few poisonous people to end up in an unbearable situation, and in a few select cases you just need one.

If you consider the CoC a shackle or a stick to beat the “bad guys” with, something you do not need until you see a bad guy, that is naive and utterly wrong: you will end up writing something that excludes people due to a quite understandable, but wrong, knee-jerk reaction.

A Code of Conduct should do exactly the opposite: it should embrace people and make it easier to join and fit in. It should be the social equivalent of the developer handbook or the coding style guidelines.

Just as everybody can make a little effort and make sure to send code with spaces between operators, everybody can make an effort and not use colorful language. Likewise, just as people are happier to contribute if the codebase they are hacking on is readable, they are more confident about joining the community if the environment is pleasant.

Making a useful Code of Conduct

The Code of Conduct should be a guideline for people that have no idea what the expected behavior is; it should be written with a mind to helping people get along, not to punishing those you do not like.

  • It should be short. It is pointless to enumerate ALL the possible ways to make people uncomfortable; you are bound to miss some.
  • It should be understanding and inclusive. Always assume cultural bias and not ill will.
  • It should be enforced. It gets quite depressing when you have a 100+ line code of conduct but then nobody cares about it and nobody really enforces it. And I’m not talking about having specifically designated people to enforce it. Your WHOLE community should agree on what acceptable behavior is and act accordingly on breaches.

People joining the community should consider the Code of Conduct first as a request (and not a demand) to make an effort to get along with the others.

Pitfalls

Since I saw quite a few long and convoluted walls of text being suggested as THE CODE OF CONDUCT everybody MUST ABIDE BY, here are some suggestions on what NOT to do.

  • It should not be a political statement: that is a strong cultural bias that would make potential contributors just stay away. No matter how good and great you think your ideas are, those unrelated to a project that should gather people who enjoy writing code in their spare time should stay out of it. Open Source is already an ideology; overloading it with more is just a recipe for disaster.
  • Do not try to make a long list of definitions, you just dilute the content and give even more ammo to lawyer-type arguers.
  • Do not think much about making draconian punishments, this is a community on internet, even nowadays nobody really knows if you are actually a dog or not, you cannot really enforce anything if the other party really wants to be a pest.

Good examples

Some CoCs I consider good are, obviously, the ones used in the communities I belong to, Gentoo and Libav; they are really short and to the point.

Enforcing

As I said before, no matter how well written a code of conduct is, the only way to really make it useful is if the community as a whole helps new (and not so new) people get along.

The rule of thumb “if somebody feels uncomfortable in a non-technical discussion, drop it immediately once they say so” is OK as long as:
* The person who is uncomfortable speaks up. If you are shy you might ask somebody else to speak up for you, but do not stay quiet when it happens and then file a complaint much later; that is NOT OK.
* The rule is not bent to derail technical discussions. See my post about reviews to at least avoid this pitfall.
* People agree to drop at least some of their cultural biases, otherwise it ends up like walking on eggshells every moment.

Letting situations go unchecked is probably the main issue: newcomers may think it is OK to behave in a certain way if people are already behaving that way and nobody stops them. Again, this is not only a job for specific enforcers of some kind; everybody should behave and clearly tell those who do not that they are being problematic.

Gentoo is a big community, so once somebody oversteps the boundaries it gets problematic to have a swift reaction; lots of people prefer not to speak up when something happens, so people unwittingly causing the problem are not made aware of it immediately.

The people in charge of dishing out bans then have to try to figure out what exactly went wrong, and there the cultural biases everybody has may or may not come into play and make the problem harder to address.

Libav is a much smaller community, and in general nobody has qualms about saying “please stop” (that is also partially due to how the community evolved).

Hopefully this post will help avoid some mistakes and help people get along better.

November 15, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Memories of the future of the past (November 15, 2015, 13:32 UTC)

Every now and then I lament the badness of current things, and then I remember the things we had that we can't get anymore ...

  • ISDN telephones were a wonderful upgrade to analog telephones: 8 kB/s of dedicated bandwidth with low latency. Compared to that, all the VoIP things I've used were just horribly cheap, shoddy crap of unspeakably bad quality. Luckily ISDN has been discontinued and is no longer available to consumers, so we have no reference for what good audio quality means.
  • High-res displays like the IBM T220. Look it up, it's a time traveller! And, of course, it was discontinued, with no modern device coming close.
  • Mobile phones that we recharged every week like the Motorola StarTac I bought a few years ago. Now 24h seems to be 'ok' ...
  • Washing machines that took 30 minutes for one load, which is not energy-efficient, so the modern ones instead run for 1-2 hours. Not sure how that helps, and it looks like they use more water too. So we just hide the problem and PROBLEM SOLVED?
  • ThinkPad Notebooks
And many other things that were better in the past, but have now regressed to a lower-quality, less-featured, harder-to-repair state.

Can we please have more future?

November 14, 2015
Employment in a technological era (November 14, 2015, 14:35 UTC)

Lately I've been spending some time reading up on research into how the nature of employment is developing given the increased computerization and automation in today's, and in particular tomorrow's, world. These developments bring immense increases in productivity and open up a new world of opportunities, but are employees keeping up and updating their skill sets to utilize them? My personal opinion is no, which is what prompted me to look into the research on the matter.

Frey and Osborne's paper "The future of employment: how susceptible are jobs to computerisation?" (2013) brings up some interesting aspects, including a decent historical context for this issue. It starts by referencing how John Maynard Keynes is frequently cited for his prediction of widespread technological unemployment "due to our discovery of means of economising the use of labor outrunning the pace at which we can find new uses for labor" (Keynes, 1933). This was of course during a different technological advancement than the one we're experiencing now, but it shows that the discussion is not new. In fact, it is nicely illustrated by the example of William Lee, who invented the stocking frame knitting machine in 1589 hoping that it would relieve workers of hand-knitting, something which met opposition from Queen Elizabeth I, who was more concerned with the employment impact and refused to grant him a patent, claiming that "Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars" (cited in Acemoglu and Robinson, 2012).

Has anything changed since the 16th century, or are we facing the same kind of social opposition to changing the status quo? How many people, today, are willing to learn a programming language in order to interface with and utilize the tools of today? As pointed out by Mokyr (1998): "Unless all individuals accept the "verdict" of the market outcome, the decision whether to adopt an innovation is likely to be resisted by losers through non-market mechanisms and political activism". Historically, this kind of resistance manifested itself in the Luddite riots between 1811 and 1816, an expression of fear of technological change among workers, after Parliament revoked a 1551 law prohibiting the use of gig mills in the wool-finishing trade.

Today's challenges to labor markets are different in form, yet resemble the historical ones to a great extent. These days the ability to communicate with a computer is, in my humble opinion, as vital as learning human languages, yet there are barely a few pushes towards learning programming languages alongside human spoken languages. My hypothesis is that one reason for this is a lack of knowledge of these subjects, and quite frankly of mathematics and logic in general, in the adult population, which naturally makes people uncomfortable with requiring children to learn them. Initiatives such as the UK's attempt to get kids coding, with changes to the national curriculum in which ICT (Information and Communications Technology) gives way to a new “computing” curriculum including coding lessons for children as young as five (September 2013), are therefore very welcome, but as referenced in an article in The Guardian: "it seems many parents will be surprised when their children come home from school talking about algorithms, debugging and Boolean logic" and "It's about giving the next generation a chance to shape their world, not just be consumers in it".

The ability to shape my own day is one of the reasons why I'm personally interested in the world of open source. If I'm experiencing an issue while running an application, or if I want to extend it with new functionality, it is possible to do something about it when the source is available. Even more so, in a world that is increasingly complex and interconnected, basing this communication on open standards enables participation by many parties across different operating systems and user interfaces.

At the same time, increasingly so in the aftermath of Edward Snowden, I want to have the ability to see what happens with my data. Reading through the End User License Agreements (EULAs) of services being offered to consumers, I sometimes get truly scared. The latest explicit example was the music streaming service Spotify, which introduced new terms stating that in order to continue using the service I would have to confirm that I had obtained permission from all my contacts to share their personal information. Safe to say, I terminated that subscription.

There is an increasing gap between the knowledge required to understand the ramifications of the services being developed and the value of private information, and people's ability to recognize what is happening in an ever-connected world. As pointed out in two earlier posts, "Your Weakest Security Link? Your Children, or is it?" and "Some worries about mobile appliances and the Internet of Things", this can actually be quite difficult, with the end result that individuals just drift along.

So what do you think? The next time you're feeling bored and inclined to put on a TV program or just lie back on the couch, why not pick up an online tutorial on SQL, the structured query language used to talk to most database systems, or maybe a little bit of Python, C, or, for that matter, C# if you're in a Windows-centric world? Or, as a general plea: at least read a book once in a while.
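
If "talking to a database" sounds abstract, here is a minimal sketch of the kind of thing such a tutorial starts with, using Python's built-in sqlite3 module; the throwaway in-memory database and the made-up books table are only there for illustration, not taken from any particular course.

    # A first taste of SQL from Python, using the standard library's sqlite3
    # module. The "books" table and its rows are invented for this example.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway in-memory database
    conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
    conn.execute("INSERT INTO books VALUES ('The Wealth of Nations', 1776)")
    conn.execute("INSERT INTO books VALUES ('Why Nations Fail', 2012)")

    # The SELECT statement asks the database a question instead of looping
    # over the data by hand.
    for title, year in conn.execute(
            "SELECT title, year FROM books WHERE year > 1900 ORDER BY year"):
        print(title, year)

    conn.close()

Ten minutes spent playing with something like this already gives a feel for why these skills are worth picking up.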