
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
April 19, 2015, 19:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 16, 2015
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Removing old NX packages from tree (April 16, 2015, 21:30 UTC)

I already sent the last-rites announcement a few days ago, but here is a more detailed post on the upcoming removal of “old” NX packages. Long story short: migrate to X2Go if possible, or use the NX overlay (“best-effort” support provided).

Affected packages

Basically, all NX clients and servers except x2go and nxplayer! Here is the complete list with some specific last rites reasons:

  • net-misc/nxclient, net-misc/nxnode, net-misc/nxserver-freeedition: the binary-only original NX client and server. Upstream has moved on to a closed-source technology, and this version bundles potentially vulnerable binary code. It does not work as well as before with current libraries (like Cairo).
  • net-misc/nxserver-freenx, net-misc/nxsadmin: the first open-source alternative server. It could be tricky to get working, and is not updated anymore (last upstream activity around 2009)
  • net-misc/nxcl, net-misc/qtnx: an open-source alternative client (last upstream activity around 2008)
  • net-misc/neatx: Google’s take on a NX server, sadly it never took off (last upstream activity around 2010)
  • app-admin/eselect-nxserver (an eselect module to switch the active NX server, useless without these servers in the tree)

Continue using these packages on Gentoo

These packages will be dropped from the main tree by the end of this month (2015/04), and then only available in the NX overlay. They will still be supported there in a “best effort” way (no guarantee how long some of these packages will work with current systems).

So, if one of these packages still works better for you, or you need to keep them around before migrating, just run:
# layman -a nx
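
Once the overlay is added (and synced), the packages listed above can then be (re)installed from it as usual, for example:
# layman -S
# emerge --ask net-misc/nxserver-freenx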

Alternatives

While it is not a direct drop-in replacement, x2go is the most complete solution currently in the Gentoo tree (and my recommendation), with a lot of advanced features, active upstream development, … You can connect to net-misc/x2goserver with net-misc/x2goclient, net-misc/pyhoca-gui, or net-misc/pyhoca-cli.
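
For instance, installing the server and the GUI client boils down to something like:
# emerge --ask net-misc/x2goserver net-misc/x2goclient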

If you want to try the new system from NoMachine (the company that created NX), the client is available in Portage as net-misc/nxplayer. The server itself is not packaged yet; if you are interested in it, see bug #488334.

April 13, 2015
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

This blog had been hosted on GandiBlog since mid-2006. Thanks to the Gandi folks, it has served me well, though it was starting to show its age.

Now I intend to revive this blog, and the first step was to finally move it to my own server, switching to WordPress in the process! The rest of this post details how I did the switch while keeping the blog content and links.

GandiBlog to self-hosted Dotclear2

GandiBlog’s only export possibility is via a flat text file export (Extensions->Import/Export). An old WordPress plugin exists to import such files, and will likely be the solution you will first find in your favourite search engine. But its last update was some years ago and it does not seem to work with current versions. Instead, I chose to set up a temporary DotClear2 installation on my server, to use the current import plugin in WordPress later.

Installing DotClear is as easy as creating a MySQL user and database, unpacking the latest 2.x tarball, and following the installation wizard. No need to tweak configuration or users here; I just logged in to the administration panel, imported the GandiBlog flat file, and was (mostly) done!

I say mostly, because I fixed 2 things before the WordPress jump. You may not need to do the same, but here they are:

  • Some blog posts were in wiki style and not in XHTML, and were not converted correctly. As there were only a few of them, I used the “Convert to XML” button available when editing a post. I think tools exist for a mass conversion.
  • Some other posts were in the “Uncategorized” category and were ignored in the import process. My manual workaround was to set them to an “Other” category I created for the occasion. Again, this can be automated.

DotClear2 to WordPress 4

The next step was to set up a new vhost on my server, emerge the latest WordPress and install it in this new vhost (no special steps here).
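
On Gentoo this roughly amounts to the following; hostname, install directory and version below are only placeholders, and the webapp-config options depend on how your vhost is laid out:
# emerge --ask www-apps/wordpress
# webapp-config -I -h blog.example.com -d / wordpress 4.1.1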

Once WordPress was basically up and running, I installed the “DotClear2 Importer” plugin and ran it (it is an additional entry in Tools/Import). Essentially it boiled down to filling in the (DotClear) database connection details and hitting the Next button a few times.

Some additional steps to clean the new blog posts:

  • configure the permalink settings to use “Month and name”. This along with the “Permalink Finder” plugin, allowed me to preserve old links to the blog posts (the plugin deduces the new post URL and redirects to it)
  • delete the user created by the import, moving all posts to the new user in WordPress (this is a single-user blog)
  • move back the “Other” category posts to “Uncategorized”
  • temporarily install “Broken Link Checker” plugin to fix some old/dead links. Some comments had an additional “http://” prefix in URLs, and I fixed them along with updating other real links
  • remove temporary parts (dotclear install and database, cleanup/import plugins, …)
  • And when everything looked good, I could update my DNS entries to the blog :)

April 12, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

So Sebastian posted recently about Panopticlick, but I'm afraid he has not grasped just how many subtleties are present when dealing with tracking by User-Agent and with the limitations of the tool as it is.

First of all, let's take a moment to realize what «Your browser fingerprint appears to be unique among the 5,207,918 tested so far.» (emphasis mine) means. If I try the exact same request as Incognito, the message is «Within our dataset of several million visitors, only one in 2,603,994 browsers have the same fingerprint as yours.» (emphasis mine). I'm not sure why EFF does not expose the numbers in the second situation, hiding the five millions under the word "several". I can't tell how they identify further requests on the same browser not to be a new hit altogether. So I'm not sure what the number represents.

Understanding what the number represents is a major problem, too: consider that just in his post Sebastian tried at least three browsers, and I tried twice just to write this post — so one thing that the number does not count is unique users. I would venture a guess that the number of users is well below a million, and that comes into play in multiple ways. Panopticlick was born in 2010, and if fewer than a million real users hit it in five years, it might not be that statistically relevant.

Indeed, according to the current reading, just the Accept headers would be enough to boil me down to one in four sessions — that would be encoding and language. I doubt that is so clear-cut, as I'm most definitely not one of four people in the UKIE area speaking Italian. A lot of this has to do with the self-selection of "privacy conscious" people who use this tool from EFF.

But what worries me is the reaction from Sebastian and, even more so, the first comment on his post. Suggesting that you can hide in the crowd by looking for a "more popular" User-Agent or by using a random bunch of extensions and disabling JavaScript or blocking certain domains is naïve to say the least, but most likely missing and misunderstanding the point that Panopticlick tries to make.

The whole idea of browser fingerprinting is the ability to identify a user across a set of sessions — it responds to a similar threat model as Tor. While I already pointed out I disagree on the threat model, I would like to point out again that the kind of "surveillance" that this counters is ideally the one that is executed by an external entity able to monitor your communications from different source connections — if you don't use Tor and you only use a desktop PC from the same connection, then it doesn't really matter: you can just check for the IP address! And if you use different devices, then it also does not really matter, because you're now using different profiles anyway; the power is in the correlation.

In particular, when trying to tweak User-Agent or other headers to make them "more common", you're now dealing with something that is more likely to backfire than not; as my ModSecurity Ruleset shows very well, it's not so difficult to tell a real Chrome request apart from Firefox masquerading as Chrome, or IE masquerading as Safari: they have different Accept-Encoding values and other differences in request-header style, making it quite straightforward to check for them. And while you could mix up the Accept headers enough to "look the part", it's more than likely that you'll be served bad data (e.g. sdch to IE, or webp to Firefox) and that would make your browsing useless.

More importantly, the then-unique combination of, say, a Chrome User-Agent on an obviously IE-generated request would make it very obvious to follow a session aggregated across different websites with a similar fingerprint. The answer I got from Sebastian is not good either: even if you tried to use a "more common" version string, you could still, very easily, create unwanted unique fingerprints; take Firefox 37: it started supporting the alt-svc extension to use HTTP/2 when available. If you were to report your browser as Firefox 28 but it then followed alt-svc, it would clearly be a fake version string, and again an easy one to follow. Similar version-dependent request fingerprinting, paired with a modified User-Agent string, would make you light up like a Christmas tree during Earth Day.

There are more problems, though; the suggestion of installing extensions such as AdBlock also adds to the fingerprinting rather than protecting from it; as long as JavaScript is allowed to run, it can detect AdBlock's presence, and with a bit of work you can identify which of the different blocking lists is in use, too. You could use NoScript to avoid running JavaScript at all, but given this is by far not something most users will do, it'll also add to the entropy of your browser's fingerprint, not remove from it, even if it prevents client-side fingerprinting from accessing things like the list of available plugins (which in my case is not that common, either!)

But even ignoring the fact that Panopticlick does not try to identify the set of installed extensions (finding Chrome's Readability is trivial, as it injects content into the DOM, and so do a lot more), there is one more aspect that it almost entirely ignores: server-side fingerprinting. Besides not trying to correlate the purported User-Agent against the request fingerprint, it does not seem to use a custom server at all, so it does not leverage TLS handshake fingerprints! As can be seen through Qualys analysis, there are some almost-unique handshake sequences on a given server depending on the client used; while this does not add much more data when matched against a vanilla User-Agent, a faked User-Agent and a somewhat rarer TLS handshake would be just as easy to track.

Finally, there is the problem with self-selection: Sebastian has blogged about this while using Firefox 37.0.1 which was just released, and testing with that; I assume he also had the latest Chrome. While Mozilla increased the rate of release of Firefox, Chrome has definitely a very hectic one with many people updating all the time. Most people wouldn't go to Panopticlick every time they update their browser, so two entries that are exactly the same apart from the User-Agent version would be reported as unique… even though it's most likely that the person who tried two months ago updated since, and now has the same fingerprint as the person who tried recently with the same browser and settings.

Now this is a double-edged sword: if you rely on the User-Agent to track someone across connections, an ephemeral User-Agent that changes every other day due to updates is going to disrupt your plans quickly; on the other hand, lagging behind or jumping ahead on the update train for a browser would make it more likely for you to have a quite unique version number, even more so if you're tracking beta or developer channels.

Interestingly, though, Mozilla has thought about this before, and their Gecko user agent string reference shows which restricted fields are used, and references the bugs that disallowed extensions and various software to inject into the User-Agent string — funnily enough I know of quite a few badware cases in which a unique identifier was injected into the User-Agent for fake ads and other similar websites to recognize a "referral".

Indeed, especially on mobile, I think that User-Agents are a bit too liberal with the information they push; not only do they include the full build number of the mobile browser such as Chrome, but they usually include the model of the device and the build number of the operating system: do you want to figure out if a new build of Android is available for some random device out there? Make sure you have access to HTTP logs for big enough websites and look for new build IDs. I think that in this particular sub-topic, Chrome and Safari could help a lot more by reducing the amount of detail about the engine version as well as the underlying operating system.

So, for my parting words, I would like to point out that Panopticlick is a nice proof-of-concept that shows how powerful browser fingerprinting is, without having to rely on tracking cookies. I think lots of people both underestimate the power of fingerprinting and overestimate the threat. From one side, because Panopticlick does not have enough current data to make it feasible to evaluate the current uniqueness of a session across the world; from the other, because you get the wrong impression that if Panopticlick can't put you down as unique, you're safe — you're not, there are many more techniques that Panopticlick does not think of trying!

My personal advice is to stop worrying about the NSA and instead start safekeeping yourself: using click-to-play for Flash and Java is good prophylaxis for security, not just privacy, and NoScript can be useful too, in some cases, but don't just kill everything on sight. Even using the Data Saver extension for non-HTTPS websites can help (unfortunately I know of more than a few blocking it, and then there is the problem with captive portals bringing it to be clear-text HTTP too.)

April 11, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Firefox: You may want to update to 37.0.1 (April 11, 2015, 19:23 UTC)

I was pointed to this Mozilla Security Advisory:

Certificate verification bypass through the HTTP/2 Alt-Svc header
https://www.mozilla.org/en-US/security/advisories/mfsa2015-44/

While it doesn’t say if all versions prior to 37.0.1 are affected, it does say that sending a certain server response header disabled warnings of invalid SSL certificates for that domain. Ooops.

I’m not sure how relevant HTTP/2 is by now.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

Slot conflicts can be annoying. It's worse when an attempt to fix them leads to an even bigger mess. I hope this post helps you with some cases - and that portage will keep getting smarter about resolving them automatically.

Read more »

Sebastian Pipping a.k.a. sping (homepage, bugs)

A document that I was privileged to access pre-release snapshots of has now been published:

Introduction to Chinese Chess (XiangQi) for International Chess Players
A Comparison of Chess and XiangQi
By xq_info (add) gmx.de (a bit shy about his real name)
98 pages

You may like the self-assessment puzzle aspect of it, but not only that. Recommended!

It’s up here:

http://wxf.ca/wxf/doc/book/xiangqi_introduction_chessplayers_20150323.pdf

Best, Sebastian

While https://panopticlick.eff.org/ is not really new, I learned about that site only recently.
And while I knew that browser self-identification would reduce my anonymity on the Internet, I didn’t expect this result:

Your browser fingerprint appears to be unique among the 5,198,585 tested so far.

Wow. Why? Let’s try one of the other browsers I use. “Appears to be unique”, again (where Flash is enabled).

What’s so unique about my setup? The two reports say about my setup:

Characteristic                  Firefox 36.0.1    Google Chrome 42.0.2311.68    Chromium 41.0.2272.76
(values: one in x browsers have this value)
User Agent                      2,030.70          472,599.36                    16,576.56
HTTP_ACCEPT Headers             12.66             5,477.97                      5,477.97
Browser Plugin Details          577,620.56        259,929.65                    7,351.75
Time Zone                       6.51              6.51                          6.51
Screen Size and Color Depth     13.72             13.72                         13.72
System Fonts                    5,198,585.00      5,198,585.00                  5.10 (Flash and Java disabled)
Are Cookies Enabled?            1.35              1.35                          1.35
Limited supercookie test        1.83              1.83                          1.83

User agent and browser plug-ins hurt, fonts alone kill me altogether. Ouch.

Update:

  • It’s the very same when browsing with an incognito window. Re-reading what that feature is officially intended to do (being incognito to your own history), it stops being a surprise.
  • Chromium (with Flash/Java disabled) added

Thoughts on fixing this issue:

I’m not sure how I want to fix this myself. Faking popular values (in a popular combination, so it does not backfire) could work using a local proxy, a browser patch, or maybe a browser plug-in. Obtaining truly popular value combinations is another question. Faked values can also reduce the quality of the content I am served; e.g. I would probably not fake my screen resolution, or at least make sure not to deviate from the real one by much.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Almost quiet dataloss (April 11, 2015, 11:06 UTC)

Some harddisk manufacturers have interesting ideas ... using some old Samsung disks in a RAID5 config:

[15343.451517] ata3.00: exception Emask 0x0 SAct 0x40008410 SErr 0x0 action 0x6 frozen
[15343.451522] ata3.00: failed command: WRITE FPDMA QUEUED
[15343.451527] ata3.00: cmd 61/20:20:d8:7d:6c/01:00:07:00:00/40 tag 4 ncq 147456 out
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451530] ata3.00: status: { DRDY }
[15343.451532] ata3.00: failed command: WRITE FPDMA QUEUED
[15343.451536] ata3.00: cmd 61/30:50:d0:2f:40/00:00:0d:00:00/40 tag 10 ncq 24576 out
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451538] ata3.00: status: { DRDY }
[15343.451540] ata3.00: failed command: WRITE FPDMA QUEUED
[15343.451544] ata3.00: cmd 61/a8:78:90:be:da/00:00:0b:00:00/40 tag 15 ncq 86016 out
                        res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451546] ata3.00: status: { DRDY }
[15343.451549] ata3.00: failed command: READ FPDMA QUEUED
[15343.451552] ata3.00: cmd 60/38:f0:c0:2b:d6/00:00:0e:00:00/40 tag 30 ncq 28672 in
                        res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)                                                                                                                                                                                            
[15343.451555] ata3.00: status: { DRDY }
[15343.451557] ata3: hard resetting link
[15343.911891] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[15344.062112] ata3.00: configured for UDMA/133
[15344.062130] ata3.00: device reported invalid CHS sector 0
[15344.062139] ata3.00: device reported invalid CHS sector 0
[15344.062146] ata3.00: device reported invalid CHS sector 0
[15344.062153] ata3.00: device reported invalid CHS sector 0
[15344.062169] ata3: EH complete
Hmm, that doesn't look too good ... but mdadm still believes the RAID is functional.

And a while later things like this happen:
[ 2968.701999] XFS (md4): Metadata corruption detected at xfs_dir3_data_read_verify+0x72/0x77 [xfs], block 0x36900a0
[ 2968.702004] XFS (md4): Unmount and run xfs_repair
[ 2968.702007] XFS (md4): First 64 bytes of corrupted metadata buffer:
[ 2968.702011] ffff8802ab5cf000: 04 00 00 00 99 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702015] ffff8802ab5cf010: 03 00 00 00 00 00 00 00 02 00 00 00 9e 00 00 00  ................
[ 2968.702018] ffff8802ab5cf020: 0c 00 00 00 00 00 00 00 13 00 00 00 00 00 00 00  ................
[ 2968.702021] ffff8802ab5cf030: 04 00 00 00 82 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702048] XFS (md4): metadata I/O error: block 0x36900a0 ("xfs_trans_read_buf_map") error 117 numblks 8
[ 2968.702476] XFS (md4): Metadata corruption detected at xfs_dir3_data_reada_verify+0x69/0x6d [xfs], block 0x36900a0
[ 2968.702491] XFS (md4): Unmount and run xfs_repair
[ 2968.702494] XFS (md4): First 64 bytes of corrupted metadata buffer:
[ 2968.702498] ffff8802ab5cf000: 04 00 00 00 99 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702501] ffff8802ab5cf010: 03 00 00 00 00 00 00 00 02 00 00 00 9e 00 00 00  ................
[ 2968.702505] ffff8802ab5cf020: 0c 00 00 00 00 00 00 00 13 00 00 00 00 00 00 00  ................
[ 2968.702508] ffff8802ab5cf030: 04 00 00 00 82 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702825] XFS (md4): Metadata corruption detected at xfs_dir3_data_read_verify+0x72/0x77 [xfs], block 0x36900a0
[ 2968.702831] XFS (md4): Unmount and run xfs_repair
[ 2968.702834] XFS (md4): First 64 bytes of corrupted metadata buffer:
[ 2968.702839] ffff8802ab5cf000: 04 00 00 00 99 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702842] ffff8802ab5cf010: 03 00 00 00 00 00 00 00 02 00 00 00 9e 00 00 00  ................
[ 2968.702866] ffff8802ab5cf020: 0c 00 00 00 00 00 00 00 13 00 00 00 00 00 00 00  ................
[ 2968.702871] ffff8802ab5cf030: 04 00 00 00 82 00 00 00 fc ff ff ff ff ff ff ff  ................
[ 2968.702888] XFS (md4): metadata I/O error: block 0x36900a0 ("xfs_trans_read_buf_map") error 117 numblks 8
fsck finds quite a lot of data not being where it should be.
I'm not sure who to blame here - the kernel should actively punch out any harddisk that is fish-on-land flopping around like that, the md layer should hate on any device that even looks weird, but somehow "just doing a link reset" is considered enough.

I'm not really upset that an old cheap disk that is now ~9 years old decides to have dementia, but I'm quite unhappy with the firmware programming that doesn't seem to consider data loss as a problem ... (but at least it's not Seagate!)

April 08, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

Quote: “With this petition we demand a general ban, without exceptions, on fracking for hydrocarbons in Germany!”

https://www.change.org/p/bundestag-fracking-gesetzlich-verbieten-ausgfrackt-is

April 07, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)
How Heartbleed could've been found (April 07, 2015, 13:23 UTC)

tl;dr: With a reasonably simple fuzzing setup I was able to rediscover the Heartbleed bug. This uses state-of-the-art fuzzing and memory protection technology (american fuzzy lop and Address Sanitizer), but it doesn't require any prior knowledge about specifics of the Heartbleed bug or the TLS Heartbeat extension. We can learn from this to find similar bugs in the future.

Exactly one year ago a bug in the OpenSSL library became public that is one of the most well-known security bugs of all time: Heartbleed. It is a bug in the code of a TLS extension that up until then was hardly known by anybody. A read buffer overflow allowed an attacker to extract parts of the memory of every server using OpenSSL.

Can we find Heartbleed with fuzzing?

Heartbleed was introduced in OpenSSL 1.0.1, which was released in March 2012, two years earlier. Many people wondered how it could've been hidden there for so long. David A. Wheeler wrote an essay discussing how fuzzing and memory protection technologies could've detected Heartbleed. It covers many aspects in detail, but in the end he only offers speculation on whether or not fuzzing would have found Heartbleed. So I wanted to try it out.

Of course it is easy to find a bug if you know what you're looking for. As best as reasonably possible I tried not to use any specific information I had about Heartbleed. I created a setup that's reasonably simple and similar to what someone without any knowledge of the specifics of Heartbleed might also have tried.

Heartbleed is a read buffer overflow. What that means is that an application is reading outside the boundaries of a buffer. For example, imagine an application has a space in memory that's 10 bytes long. If the software tries to read 20 bytes from that buffer, you have a read buffer overflow. It will read whatever is in the memory located after the 10 bytes. These bugs are fairly common and the basic concept of exploiting buffer overflows is pretty old. Just to give you an idea how old: Recently the Chaos Computer Club celebrated the 30th anniversary of a hack of the German BtX-System, an early online service. They used a buffer overflow that was in many aspects very similar to the Heartbleed bug. (It is actually disputed if this is really what happened, but it seems reasonably plausible to me.)

Fuzzing is a widely used strategy to find security issues and bugs in software. The basic idea is simple: Give the software lots of inputs with small errors and see what happens. If the software crashes you likely found a bug.

When buffer overflows happen, an application doesn't always crash. Often it will just read from (or write to, if it is a write overflow) the memory that happens to be there. Whether it crashes depends on a lot of circumstances. Most of the time read overflows won't crash your application. That's also the case with Heartbleed. There are a couple of technologies that improve the detection of memory access errors like buffer overflows. An old and well-known one is the debugging tool Valgrind. However, Valgrind slows down applications a lot (around 20 times slower), so it is not really well suited for fuzzing, where you want to run an application millions of times on different inputs.

Address Sanitizer finds more bugs

A better tool for our purpose is Address Sanitizer. David A. Wheeler calls it “nothing short of amazing”, and I want to reiterate that. I think it is a tool that every C/C++ software developer should know and use for testing.

Address Sanitizer is part of the C compiler and has been included in the two most common compilers in the free software world, gcc and llvm. To use Address Sanitizer one has to recompile the software with the command line parameter -fsanitize=address. It slows down applications, but only by a relatively small amount. According to their own numbers an application using Address Sanitizer is around 1.8 times slower. This makes it feasible for fuzzing tasks.
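
For a single source file, a minimal example of such an Address Sanitizer build (unrelated to OpenSSL) looks like this:

gcc -g -fsanitize=address -o example example.c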

For the fuzzing itself a tool that recently gained a lot of popularity is american fuzzy lop (afl). It was developed by Michal Zalewski from the Google security team, who is also known by his nickname lcamtuf. As far as I'm aware the approach of afl is unique. It adds instructions to an application during the compilation that allow the fuzzer to detect new code paths while running the fuzzing tasks. If a new interesting code path is found, then the sample that created this code path is used as the starting point for further fuzzing.

Currently afl only uses file inputs and cannot directly fuzz network input. OpenSSL has a command line tool that allows all kinds of file inputs, so you can use it for example to fuzz the certificate parser. But this approach does not allow us to directly fuzz the TLS connection, because that only happens on the network layer. By fuzzing various file inputs I recently found two issues in OpenSSL, but both had been found by Brian Carpenter before, who at the same time was also fuzzing OpenSSL.

Let OpenSSL talk to itself

So to fuzz the TLS network connection I had to create a workaround. I wrote a small application that creates two instances of OpenSSL that talk to each other. This application doesn't do any real networking, it is just passing buffers back and forth and thus doing a TLS handshake between a server and a client. Each message packet is written down to a file. It will result in six files, but the last two are just empty, because at that point the handshake is finished and no more data is transmitted. So we have four files that contain actual data from a TLS handshake. If you want to dig into this, a good description of a TLS handshake is provided by the developers of OCaml-TLS and MirageOS.

Then I added the possibility of switching out parts of the handshake messages by files I pass on the command line. By calling my test application selftls with a number and a filename, a handshake message gets replaced by this file. So to test just the first part of the server handshake I'd call the test application, take the output file packet-1 and pass it back again to the application by running selftls 1 packet-1. Now we have all the pieces we need to use american fuzzy lop and fuzz the TLS handshake.

I compiled OpenSSL 1.0.1f, the last version that was vulnerable to Heartbleed, with american fuzzy lop. This can be done by calling ./config and then replacing gcc in the Makefile with afl-gcc. Also we want to use Address Sanitizer, to do so we have to set the environment variable AFL_USE_ASAN to 1.
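
Put together, the build steps could look roughly like this (the sed call assumes the generated Makefile contains a “CC= gcc” line; editing the Makefile by hand works just as well):

export AFL_USE_ASAN=1
./config
sed -i 's/^CC= gcc/CC= afl-gcc/' Makefile
make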

There are some issues when using Address Sanitizer with american fuzzy lop. Address Sanitizer needs a lot of virtual memory (many Terabytes). American fuzzy lop limits the amount of memory an application may use. It is not trivially possible to only limit the real amount of memory an application uses and not the virtual amount, therefore american fuzzy lop cannot handle this flawlessly. Different solutions for this problem have been proposed and are currently being developed. I usually go with the simplest solution: I just disable the memory limit of afl (parameter -m -1). This poses a small risk: a fuzzed input may lead an application to a state where it will use all available memory and thereby cause other applications on the same system to malfunction. Based on my experience this is very rare, so I usually just ignore that potential problem.

After having compiled OpenSSL 1.0.1f we have two files, libssl.a and libcrypto.a. These are static versions of OpenSSL and we will use them for our test application. We now also use afl-gcc to compile our test application:

AFL_USE_ASAN=1 afl-gcc selftls.c -o selftls libssl.a libcrypto.a -ldl

Now we run the application. It needs a dummy certificate. I have put one in the repo. To make things faster I'm using a 512 bit RSA key. This is completely insecure, but as we don't want any security here – we just want to find bugs – this is fine, because a smaller key makes things faster. However if you want to try fuzzing the latest OpenSSL development code you need to create a larger key, because it'll refuse to accept such small keys.
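
If you prefer to generate your own dummy certificate instead of taking the one from the repo, something along these lines works (file names and subject are arbitrary):

openssl req -x509 -newkey rsa:512 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"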

The application will give us six packet files, however the last two will be empty. We only want to fuzz the very first step of the handshake, so we're interested in the first packet. We will create an input directory for american fuzzy lop called in and place packet-1 in it. Then we can run our fuzzing job:

afl-fuzz -i in -o out -m -1 -t 5000 ./selftls 1 @@

american fuzzy lop screenshot

We pass the input and output directory, disable the memory limit and increase the timeout value, because TLS handshakes are slower than common fuzzing tasks. On my test machine, afl found the first crash around 6 hours later. Now we can manually pass our output to the test application and get a stack trace from Address Sanitizer:

==2268==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x629000013748 at pc 0x7f228f5f0cfa bp 0x7fffe8dbd590 sp 0x7fffe8dbcd38
READ of size 32768 at 0x629000013748 thread T0
#0 0x7f228f5f0cf9 (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x2fcf9)
#1 0x43d075 in memcpy /usr/include/bits/string3.h:51
#2 0x43d075 in tls1_process_heartbeat /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/t1_lib.c:2586
#3 0x50e498 in ssl3_read_bytes /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_pkt.c:1092
#4 0x51895c in ssl3_get_message /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_both.c:457
#5 0x4ad90b in ssl3_get_client_hello /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_srvr.c:941
#6 0x4c831a in ssl3_accept /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/ssl/s3_srvr.c:357
#7 0x412431 in main /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/selfs.c:85
#8 0x7f228f03ff9f in __libc_start_main (/lib64/libc.so.6+0x1ff9f)
#9 0x4252a1 (/data/openssl/openssl-handshake/openssl-1.0.1f-nobreakrng-afl-asan-fuzz/selfs+0x4252a1)

0x629000013748 is located 0 bytes to the right of 17736-byte region [0x62900000f200,0x629000013748)
allocated by thread T0 here:
#0 0x7f228f6186f7 in malloc (/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.2/libasan.so.1+0x576f7)
#1 0x57f026 in CRYPTO_malloc /home/hanno/code/openssl-fuzz/tests/openssl-1.0.1f/crypto/mem.c:308


We can see here that the crash is a heap buffer overflow doing an invalid read access of around 32 Kilobytes in the function tls1_process_heartbeat(). It is the Heartbleed bug. We found it.

I want to mention a couple of things that I found out while trying this. I did some things that I thought were necessary, but later it turned out that they weren't. After Heartbleed broke the news a number of reports stated that Heartbleed was partly the fault of OpenSSL's memory management. A mail by Theo De Raadt claiming that OpenSSL has “exploit mitigation countermeasures” was widely quoted. I was aware of that, so I first tried to compile OpenSSL without its own memory management. That can be done by calling ./config with the option no-buf-freelist.

But it turns out that although OpenSSL uses its own memory management, that doesn't defeat Address Sanitizer. I could replicate my fuzzing finding with OpenSSL compiled with its default options. Although it does its own allocation management, it will still do a call to the system's normal malloc() function for every new memory allocation. A blog post by Chris Rohlf digs into the details of the OpenSSL memory allocator.

Breaking random numbers for deterministic behaviour

When fuzzing the TLS handshake american fuzzy lop will report a red number counting variable runs of the application. The reason for that is that a TLS handshake uses random numbers to create the master secret that's later used to derive cryptographic keys. Also the RSA functions will use random numbers. I wrote a patch to OpenSSL to deliberately break the random number generator and let it only output ones (it didn't work with zeros, because OpenSSL will wait for non-zero random numbers in the RSA function).

During my tests this had no noticeable impact on the time it took afl to find Heartbleed. Still, I think it is a good idea to remove nondeterministic behavior when fuzzing cryptographic applications. Later in the handshake timestamps are also used; this can be circumvented with libfaketime, but for the initial handshake processing that I fuzzed to find Heartbleed it doesn't matter.
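
For completeness, wrapping the test binary with libfaketime would look something like this (the date is arbitrary):

faketime '2015-01-01 00:00:00' ./selftls 1 packet-1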

Conclusion

You may ask now what the point of all this is. Of course we already know where Heartbleed is, it has been patched, fixes have been deployed and it is mostly history. It's been analyzed thoroughly.

The question has been asked whether Heartbleed could've been found by fuzzing. I'm confident the answer is yes. One thing I should mention here, however: american fuzzy lop was already available back then, but it was barely known. It only received major attention later in 2014, after Michal Zalewski used it to find two variants of the Shellshock bug. Earlier versions of afl were much less handy to use, e.g. they didn't have 64 bit support out of the box. I remember that I failed to use an earlier version of afl with Address Sanitizer; it only became possible after a couple of issues were fixed. A lot of other things have been improved in afl, so at the time Heartbleed was found american fuzzy lop probably wasn't in a state that would've allowed finding it in an easy, straightforward way.

I think the takeaway message is this: We have powerful tools freely available that are capable of finding bugs like Heartbleed. We should use them and look for the other Heartbleeds that are still lingering in our software. Take a look at the Fuzzing Project if you're interested in further fuzzing work. There are beginner tutorials that I wrote with the idea in mind to show people that fuzzing is an easy way to find bugs and improve software quality.

I already used my sample application to fuzz the latest OpenSSL code. Nothing was found yet, but of course this could be further tweaked by trying different protocol versions, extensions and other variations in the handshake.

I also wrote a German article about this finding for the IT news webpage Golem.de.

Update:

I want to point out some feedback I got that I think is noteworthy.

On Twitter it was mentioned that Codenomicon actually found Heartbleed via fuzzing. There's a Youtube video from Codenomicon's Antti Karjalainen explaining the details. However, the way they did this was quite different: they built a protocol-specific fuzzer. The remarkable feature of afl is that it is very powerful without knowing anything specific about the used protocol. It should also be noted that Heartbleed was found twice; the first finder was Neel Mehta from Google.

Kostya Serebryany mailed me that he was able to replicate my findings with his own fuzzer which is part of LLVM, and it was even faster.

In the comments Michele Spagnuolo mentions that by compiling OpenSSL with -DOPENSSL_TLS_SECURITY_LEVEL=0 one can use very short and insecure RSA keys even in the latest version. Of course this shouldn't be done in production, but it is helpful for fuzzing and other testing efforts.

Matt Turner a.k.a. mattst88 (homepage, bugs)
Combining constants in i965 fragment shaders (April 07, 2015, 04:00 UTC)

On Intel's Gen graphics, three source instructions like MAD and LRP cannot have constants as arguments. When support for MAD instructions was introduced with Sandybridge, we assumed the choice between a MOV+MAD and a MUL+ADD sequence was inconsequential, so we chose to perform the multiply and add operations separately. Revisiting that assumption has uncovered some interesting things about the hardware and has led us to some pretty nice performance improvements.

On Gen 7 hardware (Ivybridge, Haswell, Baytrail), multiplies and adds without immediate value arguments can be co-issued, meaning that multiple instructions can be issued from the same execution unit in the same cycle. MADs, never having immediates as sources, can always be co-issued. Considering that, we should prefer MADs, but a typical vec4 * vec4 + vec4(constant) pattern would lead to three duplicate (four total) MOV imm instructions.

mov(8)  g10<1>F    1.0F
mov(8)  g11<1>F    1.0F
mov(8)  g12<1>F    1.0F
mov(8)  g13<1>F    1.0F
mad(8)  g40<1>F    g10<8,8,1>F   g20<8,8,1>F   g30<8,8,1>F
mad(8)  g41<1>F    g11<8,8,1>F   g21<8,8,1>F   g31<8,8,1>F
mad(8)  g42<1>F    g12<8,8,1>F   g22<8,8,1>F   g32<8,8,1>F
mad(8)  g43<1>F    g13<8,8,1>F   g23<8,8,1>F   g33<8,8,1>F

Should be easy to clean up, right? We should simply combine those 1.0F MOVs and modify the MAD instructions to access the same register. Well, conceptually yes, but in practice not quite.

Since the i965 driver's fragment shader backend doesn't use static single assignment form (it's on our TODO list), our common subexpression elimination pass has to emit a MOV instruction when combining instructions. As a result, performing common subexpression elimination on immediate MOVs would undo constant propagation and the compiler's optimizer would go into an infinite loop. Not what you wanted.

Instead, I wrote a pass that scans the instruction list after the main optimization loop and creates a list of immediate values that are used. If an immediate value is used by a 3-source instruction (a MAD or a LRP) or at least four times by an instruction that can co-issue (ADD, MUL, CMP, MOV) then it's put into a register and sourced from there.

But there's still room for improvement. Each general register can store 8 floats, and instead of storing 8 separate constants in each, we're storing a single constant 8 times (and on SIMD16, 16 times!). Fixing that wasn't hard, and it significantly reduces register usage - we now only use one register for each 8 immediate values. Using a special vector-float immediate type we can even load four floating-point values in a single instruction.

With that in place, we can now always emit MAD instructions.

I'm pretty pleased with the results. Without using the New Intermediate Representation (NIR), the shader-db results are:

total instructions in shared programs: 5895414 -> 5747578 (-2.51%)
instructions in affected programs:     3618111 -> 3470275 (-4.09%)

And with NIR (that already unconditionally emits MAD instructions):

total instructions in shared programs: 7992936 -> 7772474 (-2.76%)
instructions in affected programs:     3738730 -> 3518268 (-5.90%)

Effects on a WebGL microbenchmark

In December, I checked what effect my constant combining pass would have on a WebGL procedural noise demo. The demo generates an effect ("noise") that looks like a ball of fire. Its fragment shader contains a ton of instructions but no texturing operations. We're currently able to compile the program in SIMD8 without spilling any registers, but at a cost of scheduling the instructions very badly.

Noise Demo in action

The effects the constant combining pass has on this demo are really interesting, and it actually gives me evidence that some of the ideas I had for the pass are valid, namely that co-issuing instructions is worth a little extra register pressure.

  1. 1.00x FPS of baseline - 3123 instructions - baseline
  2. 1.09x FPS of baseline - 2841 instructions - after promoting constants only if used by more than 2 MADs

Going from no-constant-combining to restricted-constant-combining gives us a 9% increase in frames per second for a 9% instruction count reduction. We're totally limited by fragment shader performance.

  3. 1.46x FPS of baseline - 2841 instructions - after promoting any constant used by a MAD

Going from step 2 to 3 though is interesting. The instruction count doesn't change, but we reduced register pressure sufficiently that we can now schedule instructions better without spilling (SCHEDULE_PRE, instead of SCHEDULE_PRE_NON_LIFO) - a 33% speed up just by rearranging instructions.

  4. 1.62x FPS of baseline - 2852 instructions - after promoting constants used by at least 4 co-issueable instructions

I was worried that we weren't going to be able to measure any performance difference from pulling constants out of co-issueable instructions, but we can definitely get a nice improvement here, of about 10% increase in frames per second.

As an aside, I did an experiment to see what would happen if we used SCHEDULE_PRE and spilled registers anyway (I added a couple of extra instructions to increase register pressure over the threshold). I changed the window size to 2048x2048 and rendered a fixed number of frames.

  • SCHEDULE_PRE with no spills: 17.5 seconds
  • SCHEDULE_PRE with 4 spills (8 send instructions): 17.5 seconds
  • SCHEDULE_PRE_NON_LIFO with no spills: 28 seconds

So there's some good evidence that the cure is worse than the disease. Of course this demo doesn't do any texturing, so memory bandwidth is not at a premium.

  5. 1.76x FPS of baseline - 2609 instructions - ???

I ran the demo to see if we'd made any changes in the last two months and was pleasantly surprised to find that we'd cut another 9% of instructions. I have no idea what caused it, but I'll take it! Combined with everything else, we're up to a 76% performance improvement.

Where's the code

The Mesa patches that implement the constant combining pass were committed (commit bb33a31c) and will be in the next major release (presumably version 10.6).

If any of this sounds interesting enough that you'd like to do it for a living, feel free to contact me. My team at Intel is responsible for the open source 3D driver in Mesa and is looking for new talent.

April 05, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)

Here’s a bug and my response to it which both deserve a little bit more visibility than being buried under some random bug number. I’m actually surprised nobody complained about that before.

GNU R supports running
> install.packages('ggplot2')
in the R console as a user. The library ggplot2 will then be installed in the user's home. Most distros, Debian and the like, provide a package per library.

First, thank you for pointing out that it is possible to install and maintain your own packages in your $HOME. It didn’t use to work, and the reason why it now does is a little further down but I will not spoil it.

Here’s my response:

Please, do not ever add R packages to the tree. There are thousands of them and they are mostly very badly written, to be polite. If you look at other distros you will see that they give an illusion of having some R packages, but almost all of them (if not all) are seriously lagging behind their respective upstream or simply unmaintained. The reason is that it’s a massive amount of very frustrating and pointless work.

Upstream recommends maintaining your packages yourself in your $HOME and we’ll go with that. I sent patches a couple of years ago to fix the way this works, and now (as you can obviously see) it works correctly on all distros, not just Gentoo. Also, real scientists usually like to lock down the exact versions of the packages they use, which is not possible when they are maintained by a third party.

If you want to live on the edge then feel free to ask Benda Xu (heroxbd) for an access to the R overlay repository. It serves tens of thousands of ebuilds for R packages automatically converted from a number of sources. It mostly works, and helps in preserving a seemingly low but nonetheless functional level of mental sanity of your beloved volunteer developers.

That, or you maintain your own overlay of packages and have it added to layman.

While we are on that subject, I would like to publicly thank André Erdmann for the fantastic work he has done over the past few years. He wrote, and still occasionally updates, the magical software which runs behind the R overlay server. Thank you, André.

March 31, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

It’s 4:17 if you would like to skip the first prime number based slam. No more math after. Though, you may miss something.


Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.4 (March 31, 2015, 13:53 UTC)

I’m very pleased to announce this new release of py3status, because it is by far the release with the most contributions: a total of 33 files changed, 1625 insertions and 509 deletions!

I’ll start by thanking this release’s contributors, with a special mention for Federico Ceratto for his valuable insights, his CLI idea and implementation, and other module contributions.

Thank you

  • Federico Ceratto
  • @rixx (and her amazing responsiveness)
  • J.M. Dana
  • @Gamonics
  • @guilbep
  • @lujeni
  • @obb
  • @shankargopal
  • @thomas-

IMPORTANT

In order to keep a clean and efficient code base, this is the last version of py3status supporting the legacy module loading and ordering; this behavior will be dropped in the next version, 2.5!

CLI commands

py3status now supports some CLI commands which allow you to get information about all the available modules and their documentation.

  • list all available modules

If you specify your own inclusion folder(s) with the -i parameter, your modules will be listed too!

$ py3status modules list
Available modules:
  battery_level          Display the battery level.
  bitcoin_price          Display bitcoin prices using bitcoincharts.com.
  bluetooth              Display bluetooth status.
  clementine             Display the current "artist - title" playing in Clementine.
  dpms                   Activate or deactivate DPMS and screen blanking.
  glpi                   Display the total number of open tickets from GLPI.
  imap                   Display the unread messages count from your IMAP account.
  keyboard_layout        Display the current keyboard layout.
  mpd_status             Display information from mpd.
  net_rate               Display the current network transfer rate.
  netdata                Display network speed and bandwidth usage.
  ns_checker             Display DNS resolution success on a configured domain.
  online_status          Display if a connection to the internet is established.
  pingdom                Display the latest response time of the configured Pingdom checks.
  player_control         Control music/video players.
  pomodoro               Display and control a Pomodoro countdown.
  scratchpad_counter     Display the amount of windows in your i3 scratchpad.
  spaceapi               Display if your favorite hackerspace is open or not.
  spotify                Display information about the current song playing on Spotify.
  sysdata                Display system RAM and CPU utilization.
  vnstat                 Display vnstat statistics.
  weather_yahoo          Display Yahoo! Weather forecast as icons.
  whoami                 Display the currently logged in user.
  window_title           Display the current window title.
  xrandr                 Control your screen(s) layout easily.
  • get available modules details and configuration
$ py3status modules details
Available modules:
  battery_level          Display the battery level.
                         
                         Configuration parameters:
                             - color_* : None means - get it from i3status config
                             - format : text with "text" mode. percentage with % replaces {}
                             - hide_when_full : hide any information when battery is fully charged
                             - mode : for primitive-one-char bar, or "text" for text percentage ouput
                         
                         Requires:
                             - the 'acpi' command line
                         
                         @author shadowprince, AdamBSteele
                         @license Eclipse Public License
                         ---
[...]

Modules changelog

  • new bluetooth module by J.M. Dana
  • new online_status module by @obb
  • new player_control module, by Federico Ceratto
  • new spotify module, by Pierre Guilbert
  • new xrandr module to handle your screens layout from your bar
  • dpms module activate/deactivate the screensaver as well
  • imap module various configuration and optimizations
  • pomodoro module can use DBUS notify, play sounds and be paused
  • spaceapi module bugfix for space APIs without ‘lastchange’ field
  • keyboard_layout module incorrect parsing of “setxkbmap -query”
  • battery_level module better python3 compatibility

Other highlights

Full changelog here.

  • catch daylight savings time change
  • ensure modules methods are always iterated alphabetically
  • refactor default config file detection
  • rename and move the empty_class example module to the doc/ folder
  • remove obsolete i3bar_click_events module
  • py3status will soon be available on Debian thanks to Federico Ceratto!

Thank you for participating in Gentoo’s 2015 April Fools’ joke!

Now that April 1 has passed, we shed a tear as we say goodbye to CGA Web™ but also to our website. Our previous website, that is, which has been with us for more than a decade. Until all contents are migrated, you can find the previous version at wwwold.gentoo.org; please note that the contents found there are no longer maintained.

As this is indeed a major change, we’re still working out some rough edges and would appreciate your feedback via email to www@gentoo.org or on IRC in #gentoo-www.

We hope you appreciate the new look and had a great time finding out how terrible you are at Pong and are looking forward to seeing your reactions once again when we celebrate the launch of the new Gentoo Disk™ set.

As for Alex, Robin, and all co-conspirators, thank you again for your participation!

The original April 1 news item is still available on the single news display page.


Old April Fools’ day announcement

Gentoo Linux today announced the launch of its new totally revamped and more inclusive website which was built to conform to the CGA Web™ graphics standards.

“Our previous website served the community superbly well for the past 10 years but was frankly not as inclusive as we would have liked as it could not be viewed by aspiring community members who did not have access to the latest hardware,” said a Gentoo Linux Council Member who asked not to be named.

“Dedicated community members worked all hours for many months to get the new site ready for its launch today. We are proud of their efforts and are convinced that the new site will be way more inclusive than ever and thereby deepen the sense of community felt by all,” they said.

“Gentoo Linux’s seven-person council determined that the interests of the community were not being served by the previous site and decided that it had to be made more inclusive,” said Web project lead Alex Legler (a3li). The new site is also available via Gopher (gopher://gopher.gentoo.org/).

“What’s the use of putting millions of colours out there when so many in the world cannot appreciate them and who, indeed, may even feel disappointed by their less capable hardware platforms,” he said.

“We accept that members in more fortunate circumstances may feel that a site with a 16-colour palette and an optimal screen resolution of 640 x 200 pixels is not the best fit for their needs but we urge such members to keep the greater good in mind. The vast majority of potential new Gentoo Linux users are still using IBM XT computers, storing their information on 5.25-inch floppy disks and communicating via dial-up BBS,” said Roy Bamford (neddyseagoon), a Foundation trustee.

“These people will be touched and grateful that their needs are now being taken into account and that they will be able to view the Gentoo Linux site comfortably on whatever hardware they have available.”

“The explosion of gratitude will ensure other leading firms such as Microsoft and Apple begin to move into conformance with CGA Web™ and it is hoped it will help bring knowledge to more and more informationally-disadvantaged people every year,” said Daniel Robbins (drobbins), former Gentoo founder.

Several teams participated in the early development of the new website and would like to showcase their work:

  • Games Team (JavaScript Pong)
  • Multimedia Team (Ghostbusters Theme on 6 floppy drives)
  • Net-News Team (A list of Gentoo newsgroups)

Phase II

The second phase of the project to get Gentoo Linux to a wider user base will involve the creation of floppy disk sets containing a compact version of the operating system and a selection of software essentials. It is estimated that sets could be created using less than 700 disks each and sponsorship is currently being sought. The launch of Gentoo Disk™ can be expected in about a year.

Media release prepared by A Jackson.

Editorial inquiries: PR team.

Interviews, photography and screen shots available on request.

March 30, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

I found

Send email on SSH login using PAM

to be a great guide for setting up e-mail delivery for any successful log-in through SSH.

My current script:

#! /bin/bash
# Only act when PAM is opening a new session, i.e. on an actual log-in.
if [ "$PAM_TYPE" != "open_session" ]; then
  exit 0
fi

# Mail a timestamp plus all PAM_* variables (user, service, remote host, ...)
# to the configured address.
cat <<-BODY | mailx -s "Log-in to ${PAM_USER:-???}@$(hostname -f) \
(${PAM_SERVICE:-???}) detected" mail@example.org
        # $(LC_ALL=C date +'%Y-%m-%d %H:%M (UTC%z)')
        $(env | grep '^PAM_' | sort)
BODY

exit 0
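
The PAM_* variables the script reads are exported by PAM when the script is hooked up via the pam_exec module. A minimal sketch of the corresponding PAM configuration, assuming the script is saved as /usr/local/sbin/login-notify.sh and made executable (the path is just an example; the guide linked above may wire it up slightly differently):

# appended to /etc/pam.d/sshd -- run the notification script on every
# session open; "optional" keeps log-ins working even if the script fails
session    optional    pam_exec.so /usr/local/sbin/login-notify.sh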

March 26, 2015
Alex Legler a.k.a. a3li (homepage, bugs)
On Secunia’s Vulnerability Review 2015 (March 26, 2015, 19:44 UTC)

Today, Secunia have released their Vulnerability Review 2015, including various statistics on security issues fixed in the last year.

If you don’t know about Secunia’s services: They aggregate security issues from various sources into a single stream, or as they call it: they provide vulnerability intelligence.
In the past, this intelligence was available to anyone in a free newsletter or on their website. Recent changes however caused much of the useful information to go behind login and/or pay walls. This circumstance has also forced us at the Gentoo Security team to cease using their reports as references when initiating package updates due to security issues.

Coming back to their recently published document, there is one statistic that is of particular interest: Gentoo is listed as having the third largest number of vulnerabilities in a product in 2014.

from Secunia: Secunia Vulnerability Review 2015 (http://secunia.com/resources/reports/vr2015/)

Looking at the whole table, you’d expect at least one other Linux distribution with a similarly large pool of available packages, but you won’t find any.

So is Gentoo less secure than other distros? tl;dr: No.

As Secunia’s website does not let me see the actual “vulnerabilities” they have counted for Gentoo in 2014, there’s no way to actually find out how these numbers came into place. What I can see though are “Secunia advisories” which seem to be issued more or less for every GLSA we send. Comparing the number of posted Secunia advisories for Gentoo to those available for Debian 6 and 7 tells me something is rotten in the state of Denmark (scnr):
While there were 203 Secunia advisories posted for Gentoo in the last year, Debian 6 and 7 had 304, yet Debian would have to have fixed less than 105 vulnerabilities in (55+249=) 304 advisories to be at least rank 21 and thus not included in the table above. That doesn’t make much sense. Maybe issues in Gentoo’s packages are counted for the distribution as well—no idea.

That aside, 2014 was a good year in terms of security for Gentoo: The huge backlog of issues waiting for an advisory was heavily reduced as our awesome team managed to clean up old issues and make them known to glsa-check in three wrap-up advisories—and then we also issued 239 others, more than ever since 2007. Thanks to everyone involved!
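
As an aside, if you want to check your own system against the issued advisories, glsa-check from app-portage/gentoolkit does that; a minimal sketch (options shown are the common list/test ones):

# list GLSAs that apply to the packages installed on this system
glsa-check -l affected
# test all known GLSAs against this system
glsa-check -t all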

March 22, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My approach to paranoia: electronic bills (March 22, 2015, 15:27 UTC)

One thing that I've been told about my previous post is that I sounded paranoid. I may be, but in my book I'm not as paranoid as the kind of people who fear the NSA.

We have plenty of content out there (I would venture a guess that most of it is on reddit, but don't take my word for it) where paranoids describe all the kind of shenanigans they go through to avoid "The Man". I thought I may as well put out there what I do in my "paranoia", and I'll start with my first tenet: Email is safer than snail mail.

We all know the Snowden revelation made people fret to find new email protocols and all that kind of stuff. But my point of view is that if someone wants to steal my mail (for whatever reason), they only have to force the very simple lock of my mailbox, or use some tool to take the envelopes out from the same opening that is used to put content in.

This might not be so obvious to my American readers, as I recently found out that the way USPS's monopoly on mail delivery is enforced is by not letting anybody but the postman put stuff in your mailbox. Although I'm pretty sure that you can find black market keys for it. In Europe at least, mailboxes are not reserved to the postal service, and anybody can put envelopes in. In Italy in particular, TNT (the Dutch company) for a while ran a delivery service for mail, rather than packages. My bank, my mobile phone provider and I (to send mail to customers) all used it because of its higher reliability.

So in this vein, I favour any kind of electronic communication over a paper trail. This is not difficult in most countries right now; in Italy in particular it started more than five years ago with my landline and ADSL provider: not only did they allow me to receive their bills by email rather than snail mail, they also waived a €1.50 per-bill delivery fee. Incidentally, this only worked if you had direct debit enabled, which I did because the bills kept arriving late, after the due date had passed, and we kept paying fines for that. As of today, the only bill that still arrives by snail mail to my mother in Italy is the gas bill, and that's only because we don't use a city gas feed. This is especially handy as I'm the one paying said bills, and I'm no longer in Italy.

In Ireland, things are mostly okay, but not perfect: both my previous and current electricity and gas providers allow electronic bills, but the new one only allowed me to opt-in after I received the first two bills. Banks are strange — my first bank in Ireland was fully electronic, with the exception of inbound wires (which were pretty common for me due to Autotools Mythbuster and expense reimbursement for work travel); my current bank sends me the quarterly statements by mail, even though I have access to them on their website, but they do seem to have some problem with consistency and reliability. My Tesco VISA unfortunately does mail me the monthly statement by post, as they don't have an online banking site for Irish customers (they do for British ones, but let's not go there.) My American bank is totally paperless (which is very good for me, as I need to have my US mail forwarded), to the point that receiving rebate checks, I only needed my mobile phone to deposit them.

But there is a much more important piece of paper that I kept receiving after I moved to Dublin: my payslip. It's probably not obvious to everybody but this is my first "proper" employment. Before, I had contracts, and freelanced, and had my own "company", so I would send and receive invoices, but never received a payslip before joining the company I work for now. And for a few long months I would receive the paper copy of it in my mailbox at the end of the month. I don't think there is much that is more private than your salary, so this was bothering me for a while — luckily we have now moved to an external online provider, so no more paper trail for this.

The question becomes how to handle the paper that you do receive. I already wrote a long time ago about my dream of a paperless office, and I have bought a professional EPSON scanner, as having your own company generates a huge amount of paper. While I don't use it with the same workflow as I had before, I still scan all the paper I receive in the mail, and then destroy it fully.

In Italy I had a shredder: I would shred any paper at all, whether it contained personal information or not; my point is that even if someone went dumpster diving into my shredded paper, they would end up finding the most recent promotional spam from TeamViewer or MediaMarkt. There are nasty problems with having a shredder: it's extremely noisy, it creates tons of dust, and you have to clean it manually, which takes a lot of time. You have no idea how bad my home office was after I finished running the family's entire set of historical documents through it!

Here I got lucky: instead of dealing with a home shredder, my office uses a shredding company's services, so I just need to bring the papers with me and throw them in the dedicated bins. This makes it much simpler to deal with the trickling paper trail of mail (and boarding passes, and so on…).

I keep multiple copies of all the scanned PDFs: Google Drive, Dropbox and an encrypted USB flash stick, to be safe. So unless an interested attacker gets access to my personal accounts, there is no way to access that information.

March 21, 2015

Following the news that PC manufacturers have started to use Intel Boot Guard, a technology designed to prevent the installation of modified or custom firmware like coreboot, we now learn that Microsoft may drop from the Windows 10 Logo guidelines the requirement that Secure Boot can be deactivated. This collusion between Microsoft and Intel in allowing only vendor-signed code during early boot potentially affects all new computers which come with Windows 10 preinstalled.

It means that in a pessimistic scenario, the only thing standing between Microsoft (or anybody else with access to the infrastructure) and the ability to disable millions of Linux computers on a whim by blacklisting their bootloader signatures may be the option to install user keys in the UEFI key storage. Computer owners would have no other way to defend against this.

It is probably time to familiarize yourself with the procedure to do Secure Boot with your own keys. At least this remains possible, for now.
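
For the curious, enrolling your own keys usually boils down to generating a Platform Key and converting it into signed EFI signature lists; a rough sketch using openssl and the efitools utilities (treat the exact option set and file names as an illustration, not a complete guide):

# generate a self-signed Platform Key (PK)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my own Platform Key/" -keyout PK.key -out PK.crt
# convert it to an EFI signature list and sign it for enrolment from the firmware setup
cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth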

March 20, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Currently, I have a T-Mobile branded Samsung Galaxy S4 (SGH-M919) and until this past weekend I was running CyanogenMod 11 (the M9 snapshot) on it. I was having one minor problem with it, and that was that all-too-often the artist and track information from my music stream would not be sent via Bluetooth. I hadn’t updated because if the majority of things are running smoothly, why test fate? Anyway, a few days ago I took the plunge and updated to the M12 snapshot via the CM Updater.

Almost immediately I ran into a big problem. Every so often (seemingly randomly), I would get the alert that “Unfortunately, com.android.phone” had stopped. Worse yet, when that happened, I would lose mobile connectivity on any voice call. If I was in a voice call, the connection would drop. If I wasn’t in a call, I would be unable to receive or dial out for some time. Clearly, this was a huge problem since… well… one of the primary foci of a mobile is to make and receive voice calls.

Thinking that it was a problem with flashing the new ROM from CM Updater, I tried manually flashing the M12 snapshot. When that didn’t work, I tried a bunch of other things as well.

Unfortunately, all of these things got me nowhere. The problem was still occurring, and I couldn’t figure out what was happening. At first I was thinking that there was a hardware problem, but the Java stack trace indicated otherwise:

[CRASH] com.android.phone threw java.lang.NullPointerException
at android.telephony.SmsMessage.getTimestampMillis(SmsMesssage.java:607)

After all of those troubleshooting steps, though, CyanogenMod 12 revealed something to me that I hadn’t seen in the stack trace before:

com.android.mms.service

Noticing the MMS portion of that error made me think to check the APN settings for the device. What I found is that there were some slight problems. Below is what the settings were “out of the box” in CyanogenMod for the jfltetmo/jflte (which is the Samsung Galaxy S4):

Name: T-Mobile US LTE
APN: fast.t-mobile.com
Proxy: Not set
Port: Not set
Username: none
Password: **** (yes, four asterisks)
Server: * (yes, just an asterisk)
MMSC: http://mms.msg.eng.t-mobile.com/mms/wapenc
MMS Proxy: Not set
MMS Port: Not set
MCC: 310
MNC: 260
Authentication type: Not set
APN type: default,supl,mms
APN protocol: IPv4
APN roaming protocol: IPv4
Bearer: Unspecified
MVNO type: None

HOWEVER, they should be as follows (the changes that I had to make were to the Password, Server, and MMS Port fields; you may need to change others to match the ones below):

Name: T-Mobile US LTE
APN: fast.t-mobile.com
Proxy: Not set
Port: Not set
Username: none
Password: Not set
Server: Not set
MMSC: http://mms.msg.eng.t-mobile.com/mms/wapenc
MMS Proxy: Not set
MMS Port: 80
MCC: 310
MNC: 260
Authentication type: Not set
APN type: default,supl,mms
APN protocol: IPv4
APN roaming protocol: IPv4
Bearer: Unspecified
MVNO type: None

After making those changes and setting them in the APN, voilà! No more message about com.android.phone crashing with reference to com.android.mms.service.

If you are experiencing the same problem, I hope that you see this post before going through WAY too many troubleshooting steps whilst overlooking the smaller, more likely problems. Let us not forget Ockham’s Razor—given equal circumstances, the simplest solution tends to be the correct one.

Cheers,
Zach

March 18, 2015
Jan Kundrát a.k.a. jkt (homepage, bugs)

It is that time of the year again, and people are applying for Google Summer of Code positions. It's great to see a big crowd of newcomers. This article explains what sort of students are welcome in GSoC from the point of view of Trojitá, a fast Qt IMAP e-mail client. I suspect that many other projects within KDE share my views, but it's best to ask them. Hopefully, this post will help students understand what we are looking for, and assist in deciding what project to work for.

Finding a motivation

As a mentor, my motivation in GSoC is pretty simple — I want to attract new contributors to the project I maintain. This means that I value long-term sustainability above fancy features. If you are going to apply with us, make sure that you actually want to stick around. What happens when GSoC terminates? What happens when GSoC terminates and the work you've been doing is not ready yet? Do you see yourself continuing the work you've done so far? Or is it going to become an abandonware, with some cash in your pocket being your only reward? Who is going to maintain the code which you worked hard to create?

Selecting an area of work

This is probably the most important aspect of your GSoC involvement. You're going to spend three months of full time activity on some project, a project you might have not heard about before. Why are you doing this — is it only about the money, or do you already have a connection to the project you've selected? Is the project trying to solve a problem that you find interesting? Would you use the results of that project even without the GSoC?

My experience shows that it's best to find a project which fills a niche that you find interesting. Do you have a digital camera, and do you think that a random photo editor's interface sucks? Work on that, make the interface better. Do you love listening to music? Maybe your favorite music player has some annoying bug that you could fix. Maybe you could add a feature to, say, synchronize the playlist with your cell phone (this is just an example, of course). Do you like 3D printing? Help improve an existing software for 3D printing, then. Are you a database buff? Is there something you find lacking in, e.g., PostgreSQL?

Either way, it is probably a good idea to select something which you need to use, or want to use for some reason. It's of course fine to e.g. spend your GSoC term working on an astronomy tool even though you haven't used one before, but unless you really like astronomy, you should probably choose something else. In the case of Trojitá, if you have been using GMail's web interface for the past five years and you think that it's the best thing since sliced bread, well, chances are that you won't enjoy working on a desktop e-mail client.

Pick something you like, something which you enjoy working with.

Making a proposal

An excellent idea is to make yourself known in advance. This does not happen by joining the IRC channel and saying "I want to work on GSoC", or mailing us to let us know about this. A much better way of getting involved is through showing your dedication.

Try to play with the application you are about to apply for. Do you see some annoying bug? Fix it! Does it work well? Use the application more; you will find bugs. Look at the project's bug tracker; maybe there are some issues which people are hitting. Do you think that you can fix them? Diving into bug fixing is an excellent opportunity to get familiar with the project's code base, and to make sure that our mentors know the style and pace of your work.

Now that you have some familiarity with the code, maybe you can already see opportunities for work besides what's already described on the GSoC ideas wiki page. That's fine — the best proposals usually come from students who have found them on their own. The list of ideas is just that, a list of ideas, not an exhaustive cookbook. There's usually much more that can be done during the course of the GSoC. What would be the most interesting area for you? How does it fit into the bigger picture?

After you've thought about the area to work on, now it's time to write your proposal. Start early, and make sure that you talk about your ideas with your prospective mentors before you spend three hours preparing a detailed roadmap. Define the goals that you want to achieve, and talk with your mentors about them. Make sure that the work fits well with the length and style of the GSoC.

And finally, be sure that you stay open and honest with your mentoring team. Remember, this is not a contest of writing the best project proposal. For me, GSoC is all about finding people who are interested in working on, say, Trojitá. What I'm looking for are honest, fair-behaving people who demonstrate willingness to learn new stuff. On top of that, I like to accept people with whom I have already worked. Hearing about you for the first time when I read your GSoC proposal is not a perfect way of introducing yourself. Make yourself known in advance, and show us how you can help us make our project better. Show us that you want to become a part of that "we".

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Upgrading ThunderBird (March 18, 2015, 01:35 UTC)

With the recent update from the LongTimeSuffering / ExtendedSufferingRelease of Thunderbird from 24 to 31 we encountered some serious badness.

The best description of the symptoms would be "IMAP doesn't work at all".
On some machines the existing accounts had disappeared, on others they would just be inert and never receive updates.

After some digging I was finally able to find the cause of this:
Too old config file.

Uhm ... what? Well - some of these accounts have been around since TB2. Some newer ones were enhanced by copying the prefs.js from existing accounts. And so there's a weird TB bugreport that is mostly triggered by some bits being rewritten around Firefox 30, and the config parser screwing up with translating 'old' into 'new', and ... effectively ... IMAP being not-whitelisted, thus by default blacklisted, and hilarity happens.

Should you encounter this bug you "just" need to revert to a prefs.js from before the update (sigh) and then remove all lines involving "capability.policy".
Then update and ... things work. Whew.

Why not just remove the profile and start with a clean one, you say? Well ... for one, TB gets brutally, unusably slow if you have emails. So just re-reading the mailbox content from a local fast IMAP server will take ~8h, and TB will not respond to user input during that time.
And then you manually have to go into eeeevery single subfolder so that TB remembers it is there and actually updates it. That's about one work-day per user lost to idiocy, so sed'ing the config file into compliance is the easy way out.
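
For reference, a minimal sketch of that sed'ing (run it inside the affected profile directory, which varies per user, and back the file up first):

# drop every line that mentions capability.policy from the restored prefs.js
cp prefs.js prefs.js.bak
sed -i '/capability\.policy/d' prefs.js
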
Thank you, Mozilla, for keeping our lives exciting!

March 17, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB 3.0.1 (March 17, 2015, 13:46 UTC)

This is a much-awaited version bump coming to portage and I’m glad to announce it has made its way to the tree today!

Right away, I’d like to thank Tomas Mozes and Darko Luketic a lot for their amazing help, feedback and patience!

mongodb-3.0.1

I introduced quite a few changes in this ebuild which I wanted to share with you and warn you about. MongoDB upstream has stripped quite a bunch of things out of the main mongo core repository, which I have in turn split into separate ebuilds.

Major changes:

  • respect upstream’s optimization flags: unless doing a debug build, the user’s optimization flags will be ignored to prevent crashes and weird behaviour.
  • shared libraries for C/C++ are not built by the core mongo repository anymore, so I removed the static-libs USE flag.
  • various dependency optimizations to trigger a rebuild of mongoDB when one of its linked dependencies changes.

app-admin/mongo-tools

The new tools USE flag allows you to pull in a new ebuild named app-admin/mongo-tools, which installs the commands listed below (a minimal sketch of enabling it follows the list). Obviously, you can now also install just this package if you only need those tools on your machine.

  • mongodump / mongorestore
  • mongoexport / mongoimport
  • mongotop
  • mongofiles
  • mongooplog
  • mongostat
  • bsondump
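
As a quick sketch of how you might pull these in (the dev-db/mongodb atom and the package.use layout are assumptions on my side; adjust to your own setup):

# enable the tools USE flag on mongodb so app-admin/mongo-tools gets pulled in
echo "dev-db/mongodb tools" >> /etc/portage/package.use/mongodb
emerge --ask --newuse dev-db/mongodb

# or just install the tools on their own
emerge --ask app-admin/mongo-tools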

app-admin/mms-agent

The MMS agent now has some real version numbers, and I no longer have to host its sources on Gentoo’s infra host woodpecker. At the moment only the monitoring agent is available; should anyone request the backup one, I’ll be glad to add support for it too.

dev-libs/mongo-c(xx)-driver

I took this opportunity to add the dev-libs/mongo-cxx-driver to the tree and bump the mongo-c-driver one. Thank you to Balint SZENTE for his insight on this.

March 15, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

We interrupt for a brief announcement:

Gentoo Linux will be present with a booth at the Chemnitzer Linux-Tage on

Saturday, March 21 and
Sunday, March 22, 2015.

https://chemnitzer.linux-tage.de/2015/de

Among other things, there will be Gentoo t-shirts, lanyards and buttons to compile yourself.

Hanno Böck a.k.a. hanno (homepage, bugs)

Just wanted to quickly announce two talks I'll give in the upcoming weeks: One at BSidesHN (Hannover, 20th March) about some findings related to PGP and keyservers and one at the Easterhegg (Braunschweig, 4th April) about the current state of TLS.

A look at the PGP ecosystem and its keys

PGP-based e-mail encryption is widely regarded as an important tool to provide confidential and secure communication. The PGP ecosystem consists of the OpenPGP standard, different implementations (mostly GnuPG and the original PGP) and keyservers.

The PGP keyservers operate on an add-only basis. That means keys can only be uploaded and never removed. We can use these keyservers as a tool to investigate potential problems in the cryptography of PGP-implementations. Similar projects regarding TLS and HTTPS have uncovered a large number of issues in the past.

The talk will present a tool to parse the data of PGP keyservers and put them into a database. It will then have a look at potential cryptographic problems. The tools used will be published under a free license after the talk.

Update:
Source code
A look at the PGP ecosystem through the key server data (background paper)
Slides

Some tales from TLS

The TLS protocol is one of the foundations of Internet security. In recent years it's been under attack: Various vulnerabilities, both in the protocol itself and in popular implementations, showed how fragile that foundation is.

On the other hand, new features make it possible to use TLS in a much more secure way these days than ever before. Features like Certificate Transparency and HTTP Public Key Pinning allow us to avoid many of the security pitfalls of the Certificate Authority system.

Update: Slides and video available. Bonus: Contains rant about DNSSEC/DANE.

Slides PDF, LaTeX, Slideshare
Video recording, also on Youtube

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

As much as I've become an expert on the topic, there is one question I still have no idea how to answer, and that is why on earth we have three separate projects (autoconf, automake, libtool) instead of a single Autotools project. Things get even more interesting when you think that there is the Autoconf Archive – which, by the way, references Autotools Mythbuster as best practices – and then projects such as dolt that are developed by completely separate organisations.

I do think that this is a quite big drawback of autotools compared to things like CMake: you now have to allow for combinations of different tools written in different languages (autoconf is almost entirely shell and M4, automake uses lots of Perl, libtool is shell as well), with their own deprecation timelines, and with different distributions providing different sets of them.

My guess is that many problems lie in the different sets of developers for each project. I know for instance that Stefano at least was planning to have a separate Automake-NG implementation that did not rely on Perl at all, but used GNU make features, including make macros. I generally like this idea, because similarly to dolt it removes overhead for the most common case (any Linux distribution will use GNU make by default), while not removing the option where this is indeed needed (any BSD system.) On the other hand it adds one more dimension to the already multi-dimensional compatibility problem.

Having a single "autotools" package, while making things a bit more complicated on the organizational level, could make a few things fit better. For instance if you accepted Perl as a dependency of the package – since automake needs it; but remember this is not a dependency for the projects using autotools! – you could simplify the libtoolize script which is currently written in shell.

And it would probably be interesting if you could just declare in your configure.ac file whether you want a fully portable build system, or you're okay with telling people that they need a more modern system, and drop some of the checks/compatibility quirks straight at make dist time. I'm sure that util-linux does not care about building dynamic libraries on Windows, and that PulseAudio does not really care for building on non-GNU make implementations.

Of course these musings are only personal and there is nothing that substantiates them regarding how things would turn out; I have not done any experiments with actually merging the packages into a single releasable unit, but I do have some experience with split-but-not-really software, and in this case I can't see many advantages in the split of autotools, at least from the point of view of the average project that uses the full set of them. There are certainly reasons why people would prefer them to stay split, especially if they have been using only autoconf and snubbing automake all this time, but… I'm not sure I agree with those reasons to begin with.

March 11, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Siphoning data on public and private WiFi (March 11, 2015, 22:48 UTC)

So you may remember I have been reviewing some cyber-thrillers in the past, and some of them have been pretty bad. After that I actually thought I could write one myself; after all, it couldn't be as bad as Counting from Zero. Unfortunately the harsh reality is that I don't know enough diverse people out there to build up new, interesting but most importantly realistic characters. So I shelved the project completely.

But at the same time, I spent a lot of time thinking of interesting things that may happen in a cyber-thriller that fit more into my world view — while Doctorow will take on surveillance, and Russinovich battles terrorists armed with Windows viruses, I would have my characters deal with the more mundane variety of cyber criminals.

One of the things that I thought about is a variant on an old technique, called wardriving. While this is not a new technique, I think there are a few interesting twists, and it would be an all too interesting tool for low-lifers with a little (not a lot) of computer knowledge.

First of all, when wardriving started as what became a fad, the wireless networks out there were vastly unencrypted and for the most part underutilized. Things have changed: now, thanks to WPA, a simple pass-by scan of a network does not give you as much data, and changes in the way wireless protocols are implemented have, for a while, made such efforts hard enough.

But things changed over time, so what is the current situation? I have been thinking of how many things you could do with persistent wardriving, but it wasn't until I got bored out of my mind in a lounge at an airport that I was able to prove my point. On my own laptop, in a totally passive mode, invisible to any client on the network, a simple tcpdump or Wireshark dump would show a good chunk of information.

For the most part not something that would be highly confidential — namely I was not able to see anything being sent by the other clients of the network, but I was able to see most of the replies coming from the servers; just monitor DNS and clear-text HTTP and you can find a lot of information about who's around you.
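
To give an idea of how little is needed, here is a sketch of that kind of passive capture (the interface name is just an example, and you need root to capture):

# watch DNS replies and clear-text HTTP without sending a single packet
# -n: no reverse lookups (stay passive), -s0: capture full packets
tcpdump -n -s0 -i wlan0 'udp port 53 or tcp port 80'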

For instance I could tell that there was another person in the lounge waiting for the same flight as me — as they were checking the RTE website, and I doubt any person not Irish or not connected with Ireland would spend time there. Oh and the guy sitting in front of me was definitely Japanese, because once he sat down I could see the replies back from yahoo.co.jp and a few more websites based in Japan.

Let me be clear, I was not doing that with the intention of doxxing somebody. I originally started tcpdump because one of my own servers was refusing me access — the lounge IP range is in multiple DNSBL, I was expecting the traffic on the network to be mostly viruses trying to replicate. What I found instead was that the access point is broadcasting to all connected clients the replies coming in for anyone else. This is not entirely common: usually you need to set your wireless card in promiscuous mode, and many cards nowadays don't even let you do that.

But if this is the small fry of information I can figure out by looking at a tcpdump trace for a few minutes, you can imagine what you can find if you can sniff a network for a few hours. But spending a few hours tracing a network in the coffee shop at the corner could be suspicious. How can you make it less obvious? Well, here's an interesting game, although I have not played it except in my own stories' drafts.

There are plenty of mobile WiFi devices out there — they take a SIM card and then project a WiFi signal for you to connect your devices to. I have one by Vodafone (although I use it with a bunch of different operators depending on where I'm traveling), and it is very handy, but while it runs Linux I did not even look for the option of rooting it. These are pretty common to find on eBay, second hand, because sometimes they essentially come free with the contract, and people update them fairly often as new features come up. Quite a few can run OpenWRT.

These devices come with a decent battery (mine lasts easily a whole day of use), and if you buy them second hand they are fairly untraceable (does anybody ever record the IMEI/serial number of the devices they sell?), and are ready to connect to mobile networks (although that's trickier, the SIM is easier to trace.) Mine actually comes with a microSDHC slot, which means you can easily fit a very expensive 128GB microSD card if you want.

Of course it relies a lot on luck and the kind of very broad fishing net that makes it unfeasible for your average asshole to use, but there isn't much needed — just a single service that shows you your plaintext password on a website, to match to a username, as most people will not use different passwords across services, with very few exceptions.

But let's make it creepier – yes, I'll insist on making my posts about what I perceive to be a more important threat model than the NSA – if instead of playing this at a random coffee shop at the corner you are looking into a specific someone's private life, and you're close enough that you know or can guess their WiFi access point name and password, then dropping one of these devices within WiFi reach is not difficult at all.

The obvious question becomes what can you find with such a trace. Well, in no particular order you can tell the routine of a person quite easily by figuring out which time of the day they are at home (my devices don't talk to each other that much when I'm not at home), what time they get up for work, and what time they are out of the door. You can tell how often they do their finances (I don't go to my bank's site every day, much less often the revenue's). For some of the people out there you can tell when they have a private moment and what their interests are (yes I admit I went and checked, assuming you can only see the server response, you can still tell the title of the content that is being streamed/downloaded.) You can tell if they are planning a vacation, and in many cases where. You can tell if they are going to see a movie soon.

Creepy enough? Do I need to paint you a picture of that creepy acquaintance that you called in last week to help you set up your home theater, and to which you gave the WiFi password so he could Google up your provider's setup guide?

How do you defend against this? Well, funnily enough a lot of the things people have been talking about since before the "Snowden Revelations" help a lot with this: HTTPS Everywhere and even Tor. While the latter gives you a different set of problems (it may be untraceable but that does not mean it's secure!), it does obfuscate the data flow out of your network. It does not hide the traffic patterns (so you can still tell when people are in or not, when they wake up, and so on) but it does hide where you're going, so that your private moments stay private. Unfortunately it is out of the reach of most people.

HTTPS is a compromise: you can't tell exactly what's going on, but if your target is going to YouPorn, you can still tell by the DNS reply. It does reduce the surface of attack considerably, though, and does not require that much technical knowledge on the client side. It's for reasons like this that service providers should use HTTPS — it does not matter if the NSA can break the encryption, your creepy guy is not the NSA, but small parts of the creepy guy's plan are thwarted by it: the logs can show the target visited the website of a movie theatre chain, but can't show the replies from the server with the name of the branch or the movie that the target was interested in.

What is not helping us here, right now, with the creepy guys that are so easy to come by, is the absolute paranoia of the security and cryptography community right now. Dark email? Secure text messaging? They are definitely technologies that need to be explored and developed, but they should not be the focus of the threat model for the public. In this, I'm totally agreeing with Mickens.

I was (and still am, a bit) scared about writing this; it makes me feel creepy. It gives a very good impression of how easy it is to abuse a bit of technical knowledge to become a horrible person. And with the track record of the technical circle in the past few years, it does scare the hell out of me, pardon the language.

While the rest of the security and technical community keeps focusing on the ghost of the NSA, my fears are in the ease of everyday scams and information leaks. I was not surprised by what the various secret agencies out there wanted to do; after all, we've seen the movies and the TV series. I was surprised by a few of the tools and reaches, but not the intentions. But the abuse of power? There's just as much of it outside of the surveillance community, it's just that the people who know don't care – they focus on theoretical problems, on the Chief World Systems, because that's where the fun and satisfaction is – and the people who are at risk either believe everything is alright, or everything is not alright; they listen to what the media has to say, and the media never paints useful pictures.

Denis Dupeyron a.k.a. calchan (homepage, bugs)
/bin/sh: Argument list too long (March 11, 2015, 19:22 UTC)

I tried building binutils-2.25 in a qemu chroot and I got the following error during the build process:

/bin/sh: Argument list too long

Google wasn’t helpful. So I looked at the code for qemu-2.2.0, which is where my static qemu binary comes from. At some point I stumbled on this line in linux-user/qemu.h:

#define MAX_ARG_PAGES 33

I changed that 33 to a 64, rebuilt, replaced the appropriate static binary in my chroot, and the error went away.
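
For anyone wanting to apply the same change before rebuilding, a one-liner sketch (run from the top of the qemu-2.2.0 source tree):

# raise the argument page limit in qemu's user-mode emulation headers
sed -i 's/#define MAX_ARG_PAGES 33/#define MAX_ARG_PAGES 64/' linux-user/qemu.h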

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

We're happy to be able to announce that our manuscript "Broken SU(4) symmetry in a Kondo-correlated carbon nanotube" has been accepted for publication in Physical Review B.
This manuscript is the result of a joint experimental and theoretical effort. We demonstrate that there is a fundamental difference between cotunneling and the Kondo effect - a distinction that has been debated repeatedly in the past. In carbon nanotubes, the two graphene-derived Dirac points can lead to a two-fold valley degeneracy in addition to spin degeneracy; each orbital "shell" of a confined electronic system can be filled with four electrons. In most nanotubes, these degeneracies are broken by the spin-orbit interaction (due to the wall curvature) and by valley mixing (due to, as recently demonstrated, scattering at the nanotube boundaries). Using an externally applied magnetic field, the quantum states involved in equilibrium (i.e., elastic, zero-bias) and nonequilibrium (i.e., inelastic, finite bias) transitions can be identified. We show theoretically and experimentally that in the case of Kondo correlations, not all quantum state pairs contribute to Kondo-enhanced transport; some of these are forbidden by symmetries stemming from the carbon nanotube single particle Hamiltonian. This is distinctly different from the case of inelastic cotunneling (at higher temperatures and/or weaker quantum dot-lead coupling), where all transitions have been observed in the past.

"Broken SU(4) symmetry in a Kondo-correlated carbon nanotube"
D. R. Schmid, S. Smirnov, M. Marganska, A. Dirnaichner, P. L. Stiller, M. Grifoni, A. K. Hüttel, and Ch. Strunk
accepted for publication in Physical Review B, arXiv:1312.6586 (PDF)

March 10, 2015
Anthony Basile a.k.a. blueness (homepage, bugs)

Gentoo allows users to have multiple versions of gcc installed and we (mostly?) support systems where userland is partially built with different versions.  There are both advantages and disadvantages to this, and in this post I’m going to talk about one of the disadvantages: the C++11 ABI incompatibility problem.  I don’t exactly have a solution, but at least we can define what the problem is and track it [1].

First, what is C++11?  It’s a new standard of C++ which is just now making its way through GCC and clang as experimental.  The current default standard is C++98, which you can verify by just reading the defined value of __cplusplus using the preprocessor.

$  g++ -x c++ -E -P - <<< __cplusplus
199711L
$  g++ -x c++ --std=c++98 -E -P - <<< __cplusplus
199711L
$  g++ -x c++ --std=c++11 -E -P - <<< __cplusplus
201103L

This shouldn’t be surprising, even good old C has standards:

$ gcc -x c -std=c90 -E -P - <<< __STDC_VERSION__
__STDC_VERSION__
$ gcc -x c -std=c99 -E -P - <<< __STDC_VERSION__
199901L
$ gcc -x c -std=c11 -E -P - <<< __STDC_VERSION__
201112L

We’ll leave the interpretation of these values as an exercise to the reader.  [2]

The specs for these different standards at least allow for different syntax and semantics in the language.  So here’s an example of how C++98 and C++11 differ in this respect:

// I build with both --std=c++98 and --std=c++11
#include <iostream>
using namespace std;
int main() {
    int i, a[] = { 5, -3, 2, 7, 0 };
    for (i = 0; i < sizeof(a)/sizeof(int); i++)
        cout << a[i] << endl ;
    return 0;
}
// I build with only --std=c++11
#include <iostream>
using namespace std;
int main() {
    int a[] = { 5, -3, 2, 7, 0 };
    for (auto& x : a)
        cout << x << endl ;
    return 0;
}

I think most people would agree that the C++11 way of iterating over arrays (or other objects like vectors) is sexy.  In fact C++11 is filled with sexy syntax, especially when it comes to its threading and atomics, and so coders are seduced.  This is an upstream choice and it should be reflected in their build system with --std= sprinkled where needed.  I hope you see why you should never add --std= to your CFLAGS or CXXFLAGS.

The syntactic/semantic differences are the first “incompatibility” and they are really not our problem downstream.  Our problem in Gentoo comes because of ABI incompatibilities between the two standards arising from two sources: 1) Linking between objects compiled with --std=c++98 and --std=c++11 is not guaranteed to work.  2) Neither is linking between objects both compiled with --std=c++11 but with different versions of GCC differing in their minor release number.  (The minor release number is x in gcc-4.x.y.)

To see this problem in action, let’s consider the following little snippet of code which uses a C++11 only function [3]

#include <chrono>
using namespace std;
int main() {
    auto x = chrono::steady_clock::now;
}

Now if we compile that with gcc-4.8.3 and check its symbols we get the following:

$ g++ --version
g++ (Gentoo Hardened 4.8.3 p1.1, pie-0.5.9) 4.8.3
$ g++ --std=c++11 -c test.cpp
$ readelf -s test.o
Symbol table '.symtab' contains 12 entries:
Num:    Value          Size Type    Bind   Vis      Ndx Name
  0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
  1: 0000000000000000     0 FILE    LOCAL  DEFAULT  ABS test.cpp
  2: 0000000000000000     0 SECTION LOCAL  DEFAULT    1
  3: 0000000000000000     0 SECTION LOCAL  DEFAULT    3
  4: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
  5: 0000000000000000     0 SECTION LOCAL  DEFAULT    6
  6: 0000000000000000     0 SECTION LOCAL  DEFAULT    7
  7: 0000000000000000     0 SECTION LOCAL  DEFAULT    5
  8: 0000000000000000    78 FUNC    GLOBAL DEFAULT    1 main
  9: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND _GLOBAL_OFFSET_TABLE_
 10: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND _ZNSt6chrono3_V212steady_
 11: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND __stack_chk_fail

We can now confirm that that symbol is in fact in libstdc++.so for 4.8.3 but NOT for 4.7.3 as follows:

$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 | grep _ZNSt6chrono3_V212steady_
  1904: 00000000000e5698     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
  3524: 00000000000c8b00    89 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/libstdc++.so.6 | grep _ZNSt6chrono3_V212steady_
$

Okay, so we’re just seeing an example of things in flux.  Big deal?  If you finish linking test.cpp and check what it links against you get what you expect:

$ g++ --std=c++11 -o test.gcc48 test.o
$ ./test.gcc48
$ ldd test.gcc48
        linux-vdso.so.1 (0x000002ce333d0000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 (0x000002ce32e88000)
        libm.so.6 => /lib64/libm.so.6 (0x000002ce32b84000)
        libgcc_s.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libgcc_s.so.1 (0x000002ce3296d000)
        libc.so.6 => /lib64/libc.so.6 (0x000002ce325b1000)
        /lib64/ld-linux-x86-64.so.2 (0x000002ce331af000)

Here’s where the weirdness comes in.  Suppose we now switch to gcc-4.7.3 and repeat.  Things don’t quite work as expected:

$ g++ --version
g++ (Gentoo Hardened 4.7.3-r1 p1.4, pie-0.5.5) 4.7.3
$ g++ --std=c++11 -o test.gcc47 test.cpp
$ ldd test.gcc47
        linux-vdso.so.1 (0x000003bec8a9c000)
        libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 (0x000003bec8554000)
        libm.so.6 => /lib64/libm.so.6 (0x000003bec8250000)
        libgcc_s.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libgcc_s.so.1 (0x000003bec8039000)
        libc.so.6 => /lib64/libc.so.6 (0x000003bec7c7d000)
        /lib64/ld-linux-x86-64.so.2 (0x000003bec887b000)

Note that it says it’s linking against 4.8.3/libstdc++.so.6 and not 4.7.3.  That’s because the order in which the library paths are searched is defined in /etc/ld.so.conf.d/05gcc-x86_64-pc-linux-gnu.conf, and this file is sorted the way it is on purpose.  So maybe it’ll run!  Let’s try:

$ ./test.gcc47
./test.gcc47: relocation error: ./test.gcc47: symbol _ZNSt6chrono12steady_clock3nowEv, version GLIBCXX_3.4.17 not defined in file libstdc++.so.6 with link time reference

Nope, no joy.  So what’s going on?  Let’s look at the symbols in both test.gcc47 and test.gcc48:

$ readelf -s test.gcc47  | grep chrono
  9: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono12steady_cloc@GLIBCXX_3.4.17 (4)
 50: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono12steady_cloc
$ readelf -s test.gcc48  | grep chrono
  9: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono3_V212steady_@GLIBCXX_3.4.19 (4)
 49: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND _ZNSt6chrono3_V212steady_

Whoah!  The symbol wasn’t mangled the same way!  Looking more carefully at *all* the chrono symbols in 4.8.3/libstdc++.so.6 and 4.7.3/libstdc++.so.6 we see the problem.

$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6 | grep chrono
  353: 00000000000e5699     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono3_V212system_@@GLIBCXX_3.4.19
 1489: 000000000005e0e0    86 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 1605: 00000000000e1a3f     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 1904: 00000000000e5698     1 OBJECT  GLOBAL DEFAULT   13 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
 2102: 00000000000c8aa0    86 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono3_V212system_@@GLIBCXX_3.4.19
 3524: 00000000000c8b00    89 FUNC    GLOBAL DEFAULT   11 _ZNSt6chrono3_V212steady_@@GLIBCXX_3.4.19
$ readelf -s /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/libstdc++.so.6 | grep chrono
 1478: 00000000000c6260    72 FUNC    GLOBAL DEFAULT   12 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 1593: 00000000000dd9df     1 OBJECT  GLOBAL DEFAULT   14 _ZNSt6chrono12system_cloc@@GLIBCXX_3.4.11
 2402: 00000000000c62b0    75 FUNC    GLOBAL DEFAULT   12 _ZNSt6chrono12steady_cloc@@GLIBCXX_3.4.17

Only 4.7.3/libstdc++.so.6 has _ZNSt6chrono12steady_cloc@@GLIBCXX_3.4.17.  Normally when libraries change their exported symbols, they change their SONAME, but this is not the case here, as running `readelf -d` on both shows.  GCC doesn’t bump the SONAME that way for reasons explained in [4].  Great, so just switch around the order of path search in /etc/ld.so.conf.d/05gcc-x86_64-pc-linux-gnu.conf.  Then we get the problem the other way around:

$ ./test.gcc47
$ ./test.gcc48
./test.gcc48: /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/libstdc++.so.6: version `GLIBCXX_3.4.19' not found (required by ./test.gcc48)

So no problem if your system has only gcc-4.7.  No problem if it has only 4.8.  But if it has both, then compiling C++11 code with 4.7 and linking against the libstdc++ from 4.8 (or vice versa) gets you breakage at the binary level.  This is the C++11 ABI incompatibility problem in Gentoo.  As an exercise for the reader, fix!

Ref.

[1] Bug 542482 – (c++11-abi) [TRACKER] c++11 abi incompatibility

[2] This is an old professor’s trick for saying, hey go find out why c90 doesn’t define a value for __STDC_VERSION__ and let me know, ‘cuz I sure as hell don’t!

[3] This example was inspired by bug #513386.  You can verify that it requires --std=c++11 by dropping the flag and getting yelled at by the compiler.

[4] Upstream explains why in comment #5 of GCC bug #61758.  The entire bug is dedicated to this issue.

March 08, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Again on threat models (March 08, 2015, 20:55 UTC)

I've read many people over the past few months referencing James Mickens's article on threat models. Given I wrote last year about a similar thing in regard to privacy policies, one would expect me to fall in line with said article fully. They would be disappointed.

While I agree with the general gist of the article, I think it gets a little too simplistic. In particular it downplays a lot the importance of protecting yourself against two separate classes of attackers: people close to you and people who may be targeting you even if you don't know them. These do seem at first sight to fit in with Mickens's categories, but they go a little further than he's describing. And by painting the categories as "funny" as he did, I think he's undermining the importance of security.

Let's start with the first threat model that the article points to in the "tl;dr" table:

Ex-girlfriend/boyfriend breaking into your email account and publicly releasing your correspondence with the My Little Pony fan club

Is this a credible threat? Not really, but if you think about it a little more you can easily see how this can morph into a disgruntled ex breaking into your computer/email/cloud account and publicly releasing nude selfies as revenge porn. Now it sounds a little more ominous than being outed as a fan of My Little Pony, doesn't it? And maybe you'll call me sexist for pointing this out, but I think it would be hypocritical not to point out that women are much more vulnerable to this particular problem.

But it does not have to strictly be an ex; it may be any creepy guy (or gal, if you really want to go there) who somehow gets access to your computer or guesses your "strong" password. It's easy to blame the victim in these situations, but that's not the point; there are plenty of people out there ready to betray the trust of their acquaintances — and believe me, people trust other people way too easily, especially when they are looking for a tech-savvy friend-of-a-friend to help them fix their computer. I've been said tech-savvy friend-of-a-friend, and it didn't take many of those usual recovery jobs to realize how important that trust is.

The second "threat model", that is easily discounted, is described as

Organized criminals breaking into your email account and sending spam using your identity

The problem with such a description of the threat is that it's too easy for people to dismiss it with "so what?" People receive spam all the time; why would it matter whose identity it's sent under? Once again, there are multiple ways to rephrase this to make it more ominous.

A very simple option is to focus on the monetary problem: organized criminals breaking into your email account looking for your credit card details. There are still plenty of services that will request your credit card numbers by email, and even my credit card company sends me the full 16-digits number of my card on the statements. When you point out to people that the criminals are not just going to bother a random stranger, but actually are going after their money, they may care a significant bit more.

Again, this is not all there is, though. For a security or privacy specialist to ignore the issues of targeted attacks such as doxxing and the harassment campaigns that are all the rage these days is at the very least irresponsible. And that does not involve only the direct targets of harassment: the protection of even the most careful person is always weak against the people they have around, because we trust them with information, access, and so on.

Take for instance Facebook's "living will" for users — if one wanted to harass some person, but their security was too strong, they could go after their immediate family, hoping that one of them would have the right access to close the account down. Luckily, I think Facebook is smarter than this, and so it should not be that straightforward, but many people also use family members' addresses as recovery addresses in case they lose access to their own account.

So with all this in mind, I would like to point out that at the same time I agree and disagree with Mickens's article. There are way too many cryptographers out there that look into improbable threat models, but at the same time there are privacy experts that ignore what the actual threats are for many more users.

This is why I don't buy into the cult of personalities of Assange, Snowden or Appelbaum. I'm not going to argue that surveillance is a good thing, nor am I going to argue that there are no abuses ever – I'm sure there are – but the focus over the past two years has been so much more on state actions than on malicious actors like those I described earlier.

I already pointed out how privacy advocates are in love with Tor and ignore the bad behaviours it enables, and once again I do wonder why they are more concerned about the possibility of obscure political abuses of power than about the real and daily abuse of people, most likely a majority of them women.

Anyway, I'm not a thought leader, and my opinions are strictly personal — but I do think that the current focus on protecting the public from possibly systemic abuse by impersonal organisations such as the NSA is overshadowing the importance of protecting people from those they are most vulnerable to: the people around them.

And let's be clear: there are plenty of things that the crypto community can and should do to protect people in these situations: HTTPS is for instance extremely important, as it does not take a huge effort for a disgruntled ex to figure out how to snoop cleartext traffic to find the odd password or information that could lead to a break.

Just think twice next time you decide to rally people up against a generic surveillance-society phantom, or even to support the EFF — I used to, I don't currently, and while I agree they have done good things for people, I do find they are focusing on the wrong threats.

March 07, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Report from SCaLE13x (March 07, 2015, 23:53 UTC)

This year I have not been able to visit FOSDEM. Funnily enough this confirms the trend of me visiting FOSDEM only on even-numbered years, as I previously skipped 2013 as I was just out for my first and only job interview, and 2011 because of contract related timing. Since I still care for going to an open source conference early in the year, I opted instead for SCaLE, the timing of which fit perfectly my trip to Mountain View. It also allowed me to walk through Hermosa Beach once again.

So Los Angeles again it was, which meant I was able to meet with a few Gentoo developers, a few VideoLAN developers who also came all the way from Europe, and many friends who I have met at various previous conferences. It is funny how I end up meeting some people more often through conferences than I meet my close friends from back in Italy. I guess this is the life of the frequent travelers.

While my presence at SCaLE was mostly a way to meet some of the Gentoo devs that I had not met before, and see Hugo and Ludovic from VideoLAN who I missed at the past two meetings, I did pay some attention to the talks — I wish I could have had enough energy to go to more of them, but I was coming from three weeks straight of training, during which I sat for at least two hours a day in a room listening to talks on various technologies and projects… doing that in the free time too sounded like a bad idea.

What I found intriguing in the program, and in at least one of the talks I was able to attend, was that I could find at least a few topics that I wrote about in the past. Not only are containers now all the rage, through Docker and other plumbing, but there was also a talk about static site generators, which I wrote about in 2009 and have been using for much longer than that, out of necessity.

All in all, it was a fun conference and meeting my usual conference friends and colleagues is a great thing. And meeting the other Gentoo devs is what sparked my designs around TG4 which is good.

I would also like to thank James for suggesting that I use Tweetdeck during conferences, as it was definitely nicer to be able to keep track of what happened on the hashtag as well as the direct interactions and my personal stream. If you're an occasional conferencegoer you probably want to look into it yourself. It is also the most decent way to look at Twitter during a conference on a tablet, as it does not require you to jump around between search pages and interactions (on a PC you can at least keep multiple tabs open easily.)

Gentoo Monthly Newsletter: February 2015 (March 07, 2015, 20:00 UTC)

Gentoo News

Infrastructure News

Service relaunch: archives.gentoo.org

Thanks to our awesome infrastructure team, the archives.gentoo.org website is back online. Below is the announcement as posted on the gentoo-announce mailing list by Robin H. Johnson.

The Gentoo Infrastructure team is proud to announce that we have
re-engineered the mailing list archives, and re-launched it, back at archives.gentoo.org.
The prior Mhonarc-based system had numerous problems, and a
complete revamp was deemed the best forward solution to move
forward with. The new system is powered by ElasticSearch
(more features to come).

All existing URLs should either work directly, or redirect you to the new location for that content.

Major thanks to a3li, for his development of this project. Note
that we're still doing some catchup on newer messages, but delays will drop to under 2 hours soon,
with an eventual goal of under 3 minutes.

Please report problems to Bugzilla: Product Websites, Component
Archives [1][2]

Source available at:
git://git.gentoo.org/proj/ag.git (backend)
git://git.gentoo.org/proj/ag-web.git (frontend)

[1] https://tinyurl.com/mybyjq6 which is really [2]
[2] https://bugs.gentoo.org/enter_bug.cgi?alias=&assigned_to=infra-bugs%40gentoo.org&attach_text=&blocked=&bug_file_loc=http%3A%2F%2F&bug_severity=normal&bug_status=CONFIRMED&comment=&component=Archives&contenttypeentry=&contenttypemethod=autodetect&contenttypeselection=text%2Fplain&data=&deadline=&defined_groups=1&dependson=&description=&estimated_time=&flag_type-4=X&form_name=enter_bug&keywords=&maketemplate=Remember%20values%20as%20bookmarkable%20template&op_sys=Linux&priority=Normal&product=Websites&rep_platform=All&requestee_type-4=&short_desc=archives.gentoo.org%3A%20FILL%20IN%20HERE&version=n%2Fa

Gentoo Developer Moves

Summary

Gentoo is made up of 235 active developers, of which 33 are currently away.
Gentoo has recruited a total of 808 developers since its inception.

Additions

Changes

  • James Le Cuirot joined the Java team
  • Guilherme Amadio joined the Fonts team
  • Mikle Kolyada joined the Embedded team
  • Pavlos Ratis joined the Overlays team
  • Matthew Thode joined the Git mirror team
  • Patrice Clement joined the Java and Python teams
  • Manuel Rüger joined the QA team
  • Markus Duft left the Prefix team
  • Mike Gilbert left the Vmware team
  • Tim Harder left the Games and Tex teams

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 164
Packages 17997
Ebuilds 36495
Architecture Stable Testing Total % of Packages
alpha 3534 687 4221 23.45%
amd64 10983 6536 17519 97.34%
amd64-fbsd 2 1589 1591 8.84%
arm 2687 1914 4601 25.57%
arm64 536 93 629 3.50%
hppa 3102 535 3637 20.21%
ia64 3105 707 3812 21.18%
m68k 592 135 727 4.04%
mips 0 2439 2439 13.55%
ppc 6748 2536 9284 51.59%
ppc64 4329 1074 5403 30.02%
s390 1364 469 1833 10.19%
sh 1466 610 2076 11.54%
sparc 4040 994 5034 27.97%
sparc-fbsd 0 315 315 1.75%
x86 11560 5583 17143 95.25%
x86-fbsd 0 3235 3235 17.98%

[Chart: Portage statistics, March 2015]

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201502-15 net-fs/samba Samba: Multiple vulnerabilities 479868
201502-14 sys-apps/grep grep: Denial of Service 537046
201502-13 www-client/chromium Chromium: Multiple vulnerabilities 537366
201502-12 dev-java/oracle-jre-bin (and 2 more) Oracle JRE/JDK: Multiple vulnerabilities 507798
201502-11 app-arch/cpio GNU cpio: Multiple vulnerabilities 530512
201502-10 media-libs/libpng libpng: User-assisted execution of arbitrary code 531264
201502-09 app-text/antiword Antiword: User-assisted execution of arbitrary code 531404
201502-08 media-video/libav Libav: Multiple vulnerabilities 492582
201502-07 dev-libs/libevent libevent: User-assisted execution of arbitrary code 535774
201502-06 www-servers/nginx nginx: Information disclosure 522994
201502-05 net-analyzer/tcpdump tcpdump: Multiple vulnerabilities 534660
201502-04 www-apps/mediawiki MediaWiki: Multiple vulnerabilities 498064
201502-03 net-dns/bind BIND: Multiple Vulnerabilities 531998
201502-02 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 536562
201502-01 media-sound/mpg123 mpg123: User-assisted execution of arbitrary code 500262

Package Removals/Additions

Removals

Package Developer Date
dev-ml/obrowser aballier 02 Feb 2015
games-server/tetrix pacho 03 Feb 2015
app-emulation/wine-doors pacho 03 Feb 2015
dev-libs/libgeier pacho 03 Feb 2015
dev-games/ggz-client-libs pacho 03 Feb 2015
dev-games/libggz pacho 03 Feb 2015
games-board/ggz-gtk-client pacho 03 Feb 2015
games-board/ggz-gtk-games pacho 03 Feb 2015
games-board/ggz-sdl-games pacho 03 Feb 2015
games-board/ggz-txt-client pacho 03 Feb 2015
games-board/xfrisk pacho 03 Feb 2015
games-mud/mcl pacho 03 Feb 2015
media-gfx/photoprint pacho 03 Feb 2015
media-gfx/rawstudio pacho 03 Feb 2015
app-office/imposter pacho 03 Feb 2015
dev-python/cl pacho 03 Feb 2015
sci-physics/camfr pacho 03 Feb 2015
net-analyzer/nagios-imagepack pacho 03 Feb 2015
dev-python/orm pacho 03 Feb 2015
dev-python/testoob pacho 03 Feb 2015
app-misc/fixdos pacho 03 Feb 2015
app-arch/mate-file-archiver pacho 03 Feb 2015
app-editors/mate-text-editor pacho 03 Feb 2015
app-text/mate-document-viewer pacho 03 Feb 2015
app-text/mate-doc-utils pacho 03 Feb 2015
mate-base/libmatekeyring pacho 03 Feb 2015
mate-base/mate-file-manager pacho 03 Feb 2015
mate-base/mate-keyring pacho 03 Feb 2015
mate-extra/mate-character-map pacho 03 Feb 2015
mate-extra/mate-file-manager-image-converter pacho 03 Feb 2015
mate-extra/mate-file-manager-open-terminal pacho 03 Feb 2015
mate-extra/mate-file-manager-sendto pacho 03 Feb 2015
mate-extra/mate-file-manager-share pacho 03 Feb 2015
media-gfx/mate-image-viewer pacho 03 Feb 2015
net-wireless/mate-bluetooth pacho 03 Feb 2015
x11-libs/libmatewnck pacho 03 Feb 2015
x11-misc/mate-menu-editor pacho 03 Feb 2015
x11-wm/mate-window-manager pacho 03 Feb 2015
net-zope/zope-fixers pacho 03 Feb 2015
sys-apps/kmscon pacho 03 Feb 2015
app-office/teapot pacho 03 Feb 2015
net-irc/bitchx pacho 03 Feb 2015
sys-power/cpufrequtils pacho 03 Feb 2015
x11-plugins/gkrellm-cpufreq pacho 03 Feb 2015
media-sound/gnome-alsamixer pacho 03 Feb 2015
sys-devel/ac-archive pacho 03 Feb 2015
net-misc/emirror pacho 03 Feb 2015
net-wireless/wimax pacho 03 Feb 2015
net-wireless/wimax-tools pacho 03 Feb 2015
rox-extra/clock pacho 03 Feb 2015
app-arch/rpm5 pacho 03 Feb 2015
app-admin/gksu-polkit pacho 03 Feb 2015
sys-apps/uhinv pacho 03 Feb 2015
net-libs/pjsip pacho 03 Feb 2015
net-voip/sflphone pacho 03 Feb 2015
net-im/ekg pacho 03 Feb 2015
sys-firmware/iwl2000-ucode pacho 03 Feb 2015
sys-firmware/iwl2030-ucode pacho 03 Feb 2015
sys-firmware/iwl5000-ucode pacho 03 Feb 2015
sys-firmware/iwl5150-ucode pacho 03 Feb 2015
net-wireless/cinnamon-bluetooth pacho 03 Feb 2015
net-wireless/ussp-push pacho 03 Feb 2015
app-vim/zencoding-vim radhermit 09 Feb 2015
x11-drivers/psb-firmware chithanh 10 Feb 2015
x11-drivers/xf86-video-cyrix chithanh 10 Feb 2015
x11-drivers/xf86-video-impact chithanh 10 Feb 2015
x11-drivers/xf86-video-nsc chithanh 10 Feb 2015
x11-drivers/xf86-video-sunbw2 chithanh 10 Feb 2015
x11-libs/libdrm-poulsbo chithanh 10 Feb 2015
x11-libs/xpsb-glx chithanh 10 Feb 2015
app-admin/lxqt-admin yngwin 10 Feb 2015
net-misc/lxqt-openssh-askpass yngwin 10 Feb 2015
games-puzzle/trimines mr_bones_ 11 Feb 2015
games-action/cylindrix mr_bones_ 13 Feb 2015
net-analyzer/openvas-administrator jlec 14 Feb 2015
net-analyzer/greenbone-security-desktop jlec 14 Feb 2015
dev-ruby/flickr mrueg 19 Feb 2015
dev-ruby/gemcutter mrueg 19 Feb 2015
dev-ruby/drydock mrueg 19 Feb 2015
dev-ruby/net-dns mrueg 19 Feb 2015
virtual/ruby-rdoc mrueg 19 Feb 2015
media-fonts/libertine-ttf yngwin 22 Feb 2015
dev-perl/IP-Country zlogene 22 Feb 2015
net-dialup/gtk-imonc pinkbyte 27 Feb 2015

Additions

Package Developer Date
dev-python/jenkins-autojobs idella4 02 Feb 2015
net-analyzer/ntopng slis 03 Feb 2015
app-leechcraft/lc-intermutko maksbotan 03 Feb 2015
x11-drivers/xf86-input-libinput chithanh 04 Feb 2015
dev-python/cached-property cedk 05 Feb 2015
games-board/stockfish yngwin 05 Feb 2015
dev-util/shellcheck jlec 06 Feb 2015
app-admin/cgmanager hwoarang 07 Feb 2015
app-admin/restart_services mschiff 07 Feb 2015
app-portage/lightweight-cvs-toolkit mgorny 08 Feb 2015
lxqt-base/lxqt-admin yngwin 10 Feb 2015
lxqt-base/lxqt-openssh-askpass yngwin 10 Feb 2015
sys-apps/inxi dastergon 10 Feb 2015
dev-python/pyamf radhermit 10 Feb 2015
app-doc/clsync-docs bircoph 11 Feb 2015
dev-libs/libclsync bircoph 11 Feb 2015
app-admin/clsync bircoph 11 Feb 2015
dev-ruby/hiera-eyaml robbat2 12 Feb 2015
dev-ruby/gpgme robbat2 12 Feb 2015
dev-ruby/hiera-eyaml-gpg robbat2 12 Feb 2015
app-shells/mpibash ottxor 13 Feb 2015
dev-ruby/vcard mjo 14 Feb 2015
dev-ruby/ruby-ole mjo 14 Feb 2015
dev-ml/easy-format aballier 15 Feb 2015
dev-ml/biniou aballier 15 Feb 2015
dev-ml/yojson aballier 15 Feb 2015
app-i18n/ibus-libpinyin dlan 16 Feb 2015
dev-libs/libusbhp vapier 16 Feb 2015
media-tv/kodi vapier 16 Feb 2015
dev-python/blessings jlec 17 Feb 2015
dev-perl/ExtUtils-CChecker chainsaw 17 Feb 2015
dev-python/wcwidth jlec 17 Feb 2015
dev-python/curtsies jlec 17 Feb 2015
dev-perl/Socket-GetAddrInfo chainsaw 17 Feb 2015
dev-python/elasticsearch-curator idella4 17 Feb 2015
dev-java/oracle-javamail fordfrog 17 Feb 2015
net-misc/linuxptp tomjbe 18 Feb 2015
dev-haskell/preprocessor-tools slyfox 18 Feb 2015
dev-haskell/hsb2hs slyfox 18 Feb 2015
media-plugins/vdr-recsearch hd_brummy 20 Feb 2015
media-fonts/ohsnap yngwin 20 Feb 2015
sci-libs/Rtree slis 20 Feb 2015
media-plugins/vdr-dvbapi hd_brummy 20 Feb 2015
dev-ml/typerep_extended aballier 20 Feb 2015
media-fonts/lohit-assamese yngwin 20 Feb 2015
media-fonts/lohit-bengali yngwin 20 Feb 2015
media-fonts/lohit-devanagari yngwin 20 Feb 2015
media-fonts/lohit-gujarati yngwin 20 Feb 2015
media-fonts/lohit-gurmukhi yngwin 20 Feb 2015
media-fonts/lohit-kannada yngwin 20 Feb 2015
media-fonts/lohit-malayalam yngwin 20 Feb 2015
media-fonts/lohit-marathi yngwin 20 Feb 2015
media-fonts/lohit-nepali yngwin 20 Feb 2015
media-fonts/lohit-odia yngwin 20 Feb 2015
media-fonts/lohit-tamil yngwin 20 Feb 2015
media-fonts/lohit-tamil-classical yngwin 20 Feb 2015
media-fonts/lohit-telugu yngwin 20 Feb 2015
media-fonts/ipaex yngwin 21 Feb 2015
dev-perl/Unicode-Stringprep dilfridge 21 Feb 2015
dev-perl/Authen-SASL-SASLprep dilfridge 21 Feb 2015
dev-perl/Crypt-URandom dilfridge 21 Feb 2015
dev-perl/PBKDF2-Tiny dilfridge 21 Feb 2015
dev-perl/Exporter-Tiny dilfridge 21 Feb 2015
dev-perl/Type-Tiny dilfridge 21 Feb 2015
dev-perl/Authen-SCRAM dilfridge 21 Feb 2015
dev-perl/Safe-Isa dilfridge 21 Feb 2015
dev-perl/syntax dilfridge 21 Feb 2015
dev-perl/Syntax-Keyword-Junction dilfridge 21 Feb 2015
net-analyzer/monitoring-plugins mjo 21 Feb 2015
dev-perl/Validate-Tiny monsieurp 22 Feb 2015
sys-firmware/iwl7265-ucode prometheanfire 22 Feb 2015
media-fonts/libertine yngwin 22 Feb 2015
net-dns/hash-slinger mschiff 22 Feb 2015
dev-util/bitcoin-tx blueness 23 Feb 2015
dev-python/jsonfield jlec 24 Feb 2015
dev-lua/lualdap chainsaw 24 Feb 2015
media-fonts/powerline-symbols yngwin 24 Feb 2015
app-emacs/wgrep ulm 24 Feb 2015
dev-python/trollius radhermit 25 Feb 2015
dev-perl/Pegex dilfridge 25 Feb 2015
dev-perl/Inline-C dilfridge 25 Feb 2015
dev-perl/Test-YAML dilfridge 25 Feb 2015
dev-python/asyncio prometheanfire 26 Feb 2015
dev-python/aioeventlet prometheanfire 26 Feb 2015
dev-python/neovim-python-client yngwin 26 Feb 2015
dev-lua/messagepack yngwin 26 Feb 2015
dev-libs/unibilium yngwin 26 Feb 2015
dev-libs/libtermkey yngwin 26 Feb 2015
app-editors/neovim yngwin 26 Feb 2015
dev-python/prompt_toolkit jlec 27 Feb 2015
dev-python/ptpython jlec 27 Feb 2015
dev-python/oslo-log prometheanfire 28 Feb 2015
dev-python/tempest-lib prometheanfire 28 Feb 2015
dev-python/mistune jlec 28 Feb 2015
dev-python/terminado jlec 28 Feb 2015
dev-python/ghp-import alunduil 28 Feb 2015
dev-python/mysqlclient jlec 28 Feb 2015

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 February 2015 and 28 February 2015. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
[Chart: Bugzilla activity, February 2015]

Bug Activity Number
New 1820
Closed 1519
Not fixed 281
Duplicates 162
Total 6621
Blocker 3
Critical 18
Major 68

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Games 188
2 Gentoo Security 52
3 Python Gentoo Team 45
4 Gentoo's Team for Core System packages 37
5 Gentoo KDE team 35
6 Gentoo X packagers 30
7 Gentoo Science Related Packages 29
8 Gentoo Perl team 29
9 Gentoo Linux Gnome Desktop Team 27
10 Others 1046

[Chart: closed bug ranking, February 2015]

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Games 177
2 Gentoo Linux bug wranglers 133
3 Gentoo Security 66
4 Python Gentoo Team 50
5 Portage team 46
6 Gentoo KDE team 38
7 Gentoo X packagers 36
8 Gentoo's Team for Core System packages 36
9 Java team 35
10 Others 1202

[Chart: assigned bug ranking, February 2015]

 

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2015 (March 07, 2015, 06:45 UTC)

TL;DR: Gentoo was not selected for GSoC 2015 "to make way for new orgs". All is not lost: some Gentoo projects will be available within other organizations.

As you may have already noted, the Gentoo Foundation was not selected as a mentoring organization for GSoC 2015. Many immediately started to speculate why that happened.

Today I had an opportunity to talk (on irc) to Carol Smith, from Google’s Open Source Programs Office. I asked her why we had been rejected, if they had seen any issue with our application to GSoC, and if she had comments about it. Here’s what her answer was:

yeah, i’m sorry that this is going to be disappointing
but this was just us trying to make way for new orgs this year :-(
i don’t see anything wrong with your ideas page or application, it looks good

Then I asked her the following:

one discussion we had after our rejection is if we should keep focusing on doing GSoC to attract contributors as we’ve been doing, or focus more on having projects actually be implemented, and how much you cared about it

To which she replied:

well, i’ll say that wasn’t a factor in this rejection
having said that, we in general like to see more new developers instead of the same ones year over year
we’d prefer gsoc was used to attract new members of the community
but like i said, that wasn’t a factor in your case

It’s pretty clear we haven’t done anything wrong, and that they like what we do and the way we do it. Which doesn’t mean we can’t improve, by the way. I know Carol well enough to be sure she was not dodging my questions to politely brush me aside. She says things as they are.

So, what happened then? First, the overall number of accepted organizations went down roughly 30% compared to last year. The immediate thought which comes to mind is “budget cut”. Maybe. But the team who organizes GSoC is largely the same year over year. You can’t indefinitely grow an organization at constant manpower. And last year was big.

Second, and probably the main reason why we were rejected is that this year small and/or newer organizations were favored. This was explicitly said by Carol (and I believe others) multiple times. I’m sure some of you will argue that this isn’t a good idea, but the fact is it’s their program and they run it the way they want. I will certainly not blame them. This does not mean no large organizations were selected, but that tough choices had to be made among them.

In my opinion, Carol’s lack of words to explain why we were not selected meant “not bad but not good enough”. The playing field is improving every year. We surely felt a little too confident and now have to step up our game. I have ideas for next year, these will be discussed in due time.

In the meantime, some Gentoo projects will be available within other organizations. I will not talk about what hasn’t been announced yet, but I can certainly make this one official:
glee: Gentoo-based Linux appliances on Minnowboard
If you’re interested, feel free to contact me directly.

March 06, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Trying out Pelican, part one (March 06, 2015, 18:02 UTC)

One of the goals I’ve set myself to do this year (not as a new year resolution though, I *really* want to accomplish this ;-) is to move my blog from WordPress to a statically built website. And Pelican looks to be a good solution to do so. It’s based on Python, which is readily available and supported on Gentoo, and is quite readable. Also, it looks to be very active in development and support. And also: it supports taking data from an existing WordPress installation, so that none of the posts are lost (with some rounding error that’s inherit to such migrations of course).

Before getting Pelican ready (which is available through Gentoo btw) I also needed to install pandoc, and that became more troublesome than expected. While installing pandoc I got hit by its massive number of dependencies on dev-haskell/* packages, and many of those packages failed to install. It does some internal dependency checking and fails, telling me to run haskell-updater. Sadly, multiple re-runs of said command did not resolve the issue. In fact, it wasn't until I hit a forum post about the same issue that a first step towards a working solution was found.

It turns out that the ~arch versions of the Haskell packages work better. So I enabled dev-haskell/* in my package.accept_keywords file. And then started updating the packages… which also failed. Then I ran haskell-updater multiple times, but that also failed. After a while, I had to run the following set of commands (in random order) just to get everything to build fine:

~# emerge -u $(qlist -IC dev-haskell) --keep-going
~# for n in $(qlist -IC dev-haskell); do emerge -u $n; done

It took quite a few reruns, but it finally got through. I never thought I had this many Haskell-related packages installed on my system (89 packages here to be exact), as I never intended to do any Haskell development since I left the university. Still, I finally got pandoc to work. So, on to the migration of my WordPress site… I thought.

This is a good time to ask for stabilization requests (I’ll look into it myself as well of course) but also to see if you can help out our arch testing teams to support the stabilization requests on Gentoo! We need you!

I started with the official docs on importing. Looks promising, but it didn’t turn out too well for me. Importing was okay, but then immediately building the site again resulted in issues about wrong arguments (file names being interpreted as an argument name or function when an underscore was used) and interpretation of code inside the posts. Then I found Jason Antman’s converting wordpress posts to pelican markdown post to inform me I had to try using markdown instead of restructured text. And lo and behold – that’s much better.
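
For the record, the import step with markdown output boils down to something like the following. This is a minimal sketch rather than exactly what I ran: the pelican-import flags (--wpfile for a WordPress XML export, -m for the output markup, -o for the destination directory) and the file names are assumptions that may differ between Pelican versions, so check pelican-import --help first. The second command just rebuilds the site from the imported content to see whether anything chokes.

~$ pelican-import --wpfile -m markdown -o content wordpress-export.xml
~$ pelican content -o output -s pelicanconf.py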

The first builds look promising. Of all the posts that I made on WordPress, only one gives a build failure. The next thing to investigate is theming, as well as seeing how well the migration goes (the absence of errors does not by itself mean the migration is successful, of course) so that I know how much manual labor I have to take into consideration when I finally switch (right now, I'm still running WordPress).

March 05, 2015
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I've been occasionally hitting frustrating issues with bash history getting lost after a crash. Then I found this great blog post about keeping bash history in sync on disk and between multiple terminals.

tl;dr is to use "shopt -s histappend" and PROMPT_COMMAND="${PROMPT_COMMAND};history -a"

The first is usually default, and results in sane behavior when you have multiple bash sessions at the same time. Now the second one ("history -a") is really useful to flush the history to disk in case of crashes.
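
Put together in a ~/.bashrc, this is a minimal sketch of what the two settings look like (the ${PROMPT_COMMAND:+...} guard is just a defensive touch to avoid a stray leading semicolon when the variable starts out empty):

# append to the history file on exit instead of overwriting it
shopt -s histappend
# flush the current session's history to disk after every command
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND;}history -a"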

I'm happy to announce that both are now default in Gentoo! Please see bug #517342 for reference.

February 27, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Sometimes "best" does not mean "better" (February 27, 2015, 19:37 UTC)

Please bear with me, this post will start as the ramblings of a photography nerd who's not really very good at photography, but it will move on to more general technical discussion.

I recently had a discussion on Google+ about a camera lens that I bought recently — the Tamron 16-300. I like the lens, or I wouldn't have bought it, but the discussion was about the fact that the specs of the lens are at best mediocre, and possibly worse than other cheaper lenses.

It is true, a lens that ranges between f/3.5 and f/6.3 is not exactly a super-duper lens; it means that it can't take pictures in low-light conditions unless you give up on quality (bring the ISO up) or use a flash. Just before that, the Canon 24-105 f/4L was also suggested to me, which according to many is the best lens of its kind. But that's not a better lens for me.

The reason why I say this is that I have considered the best lenses ever and have turned many down for various issues — the most common of which is, of course, price; I'm not good enough a photographer to want to spend over ten grand on gear every year. The Tamron I bought is, in my opinion, the better one for the space I was trying to fill, so let's first find out what space that was.

First of all, I don't have professional gear; I'm still shooting with a three-year-old Canon EOS 600D (Rebel T3i for the Americans), a so-called prosumer camera with a cropped sensor. I have over time bought a few pieces of glass, but the one I used the most has definitely been the Canon 50mm f/1.4, which sounded expensive at the time and now sounds almost cheap in comparison — since this is a cropped sensor the lens is effectively an 80mm, which is a good portrait length for mugshots and has its uses for parties and conferences.

This lens turns out to be mostly useless in museums though, and in general in enclosed spaces. And for big panoramas. For that I wanted a wide-angle lens and I bought a Tokina 11-16mm f/2.8 which is fast enough to shoot within buildings, even though it has a bit of chromatic aberration when shooting at maximum aperture.

Yes I know it's getting boring, bear with me a little longer.

So what is missing here is a lens for general pictures of a party that is not as close-up as the 50mm (you can get something akin to an actual 50mm with a 28mm) and something to take pictures at a distance. For those I have been using the kit lenses, the cheap 18-55mm and the 55-250mm — neither version is, as far as I can tell, still on the market; they have been replaced by versions that are actually decent in comparison, especially the 18-55mm STM.

Okay, nobody but the photographers – who already know this – cares about these details, so in short, what gives? Well, I wanted to replace the two kit lenses with one versatile lens. The idea is that if I'm, say, at Muir Woods I'd rather not keep switching between the 50mm and the 11-16mm lenses, especially under the trees, with the risk of dirt hitting the mirror or the sensor. I did it that one time, but I found it very inconvenient, which is why I was looking for a zoom lens that would not bankrupt me.

The Tamron is a decent lens. I've paid some €700 for it and a couple of (good) filters (UV and CP), which is not cheap but does not ruin me (remember I paid $600/month for the tinderbox, and that has stopped now). Its specs are not stellar, and especially at 300mm f/6.3 it is indeed a little too slow, and there is some CA when shooting at 16mm too, but not bad enough to matter. It pays off in terms of opportunity: in many cases I'm not setting out to take pictures of something in particular, I'm just going somewhere and bringing the camera with me, and when I want to take a picture I may not know of what at first… if I have the wrong lens on, I may not be able to take a picture quickly enough; if I did not bring the right lens, I would not be able to take a picture at all. With a versatile zoom lens as my default-on, I can just take the camera out and shoot, even if it's not perfect — if I want to make it perfect I can put on the good lens.

Again, I could have saved a few more months and bought a more expensive lens, the "best" lens — but there are other things to consider: since I did have this lens I was able to take some pictures of a RC car race near Mission College, without having to find space to switch lenses between the whole track pictures and the cars details. I also would not be sure that I'd be bringing a lens that's over $1k alone around with me when not sure where I'm going; sure the rest of the lenses together already build up to that number, but they are also less conspicuous — the only one that is actually bigger than someone's hand is the Tokina.

This has also another side effect: it seems like many places in California have a "no professional photography without permission" rule, including Stanford University. The way "professional photography" gets defined is by the size of one's lens (the joke about sizes counting is old, get over it), namely if the lens does not fit in your hands it is counted as "professional". By that rule, the Tamron lens is not a professional one, while the suggested Canon 24-105 would be.

Cutting finally down to the chase, the point of the episode I recounted is that there are many other reasons beside the technical specs, for which decisions are made. The same discussions I heard about the bad specs of the lenses I was looking for reminded me of the discussions I had before when people insisted that there are no reasons for tablets, because we have laptops (for the 10") and smartphones (for the 7"), or about using a convenient airline rather than one that has better planes, or… you get the gist, right?

The same is true when people try to discuss Free Software or Open Source just in terms of technical superiority. It is probably true, but that does not always make it a good choice by itself; there are other considerations - hardware support used to be the main concern, nowadays UI and UX seem to be the biggest, but this does not mean it's limited to those.

This is similar in point to the story of Jessica as SwiftOnSecurity posted, but in this case I'm not even talking about people who don't know (and don't care to know) but rather about the fact that people have different priorities, and technical superiority is not always a priority.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB 2.6.8, 2.4.13 & the upcoming 3.0.0 (February 27, 2015, 10:35 UTC)

I’m a bit slacking on those follow-up posts but with the upcoming mongoDB 3.x series and the recent new releases I guess it was about time I talked a bit about what was going on.

 mongodb-3.0.0_rcX

Thanks to the help of Tomas Mozes, we might get a release candidate of mongoDB 3.0.0 in tree pretty soon, should you want to test it on Gentoo. Feel free to contribute or give feedback in the bug, I'll do my best to keep up.

What Tomas proposes matches what I had in mind so for now the plan is to :

  • split the mongo tools (mongodump/export etc) to a new package : dev-db/mongo-tools or app-admin/mongo-tools ?
  • split the MMS monitoring agent to its own package : app-admin/mms-monitoring-agent
  • have a look at the MMS backup agent and maybe propose its own package if someone is interested in this ?
  • after the first release, have a look at the MMS deployment automation to see how it could integrate with Gentoo

mongodb-2.6.8 & 2.4.13

Released 2 days ago, they are already on portage ! The 2.4.13 is mostly a security (SSL v3) and tiny backport release whereas the 2.6.8 fixes quite a bunch of bugs.

Please note that I will drop the 2.4.x releases when 3.0.0 hits the tree ! I will keep the latest 2.4.13 in my overlay if someone asks for it.

Service relaunch: archives.gentoo.org (February 27, 2015, 01:03 UTC)

The Gentoo Infrastructure team is proud to announce that we have re-engineered the mailing list archives, and re-launched it, back at archives.gentoo.org. The prior Mhonarc-based system had numerous problems, and a complete revamp was deemed the best forward solution to move forward with. The new system is powered by ElasticSearch (more features to come).

All existing URLs should either work directly, or redirect you to the new location for that content.

Major thanks to Alex Legler, for his development of this project.

Note that we're still doing some catchup on newer messages, but delays will drop to under 2 hours soon, with an eventual goal of under 30 minutes.

Please report problems to Bugzilla: Product Websites, Component Archives

February 25, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr: PrivDog will send the webpage URLs you surf to to a server owned by AdTrustMedia. This happened unencrypted, in cleartext HTTP. This is true both for the version that is shipped with some Comodo products and for the standalone version from the PrivDog webpage.

On Sunday I wrote here that the software PrivDog had a severe security issue that compromised the security of HTTPS connections. In the meantime PrivDog has published an advisory and an update for their software. I had a look at the updated version. While I haven't found any further obvious issues in the TLS certificate validation I found others that I find worrying.

Let me quickly recap what PrivDog is all about. The webpage claims: "PrivDog protects your privacy while browsing the web and more!" What PrivDog does technically is to detect ads it considers as bad and replace them with ads delivered by AdTrustMedia, the company behind PrivDog.

I had a look at the network traffic from a system using PrivDog. It sent some JSON-encoded data to the url http://ads.adtrustmedia.com/safecheck.php. The sent data looks like this:

{"method": "register_url", "url": "https:\/\/blog.hboeck.de\/serendipity_admin.php?serendipity[adminModule]=logout", "user_guid": "686F27D9580CF2CDA8F6D4843DC79BA1", "referrer": "https://blog.hboeck.de/serendipity_admin.php", "af": 661013, "bi": 661, "pv": "3.0.105.0", "ts": 1424914287827}
{"method": "register_url", "url": "https:\/\/blog.hboeck.de\/serendipity_admin.php", "user_guid": "686F27D9580CF2CDA8F6D4843DC79BA1", "referrer": "https://blog.hboeck.de/serendipity_admin.php", "af": 661013, "bi": 661, "pv": "3.0.105.0", "ts": 1424914313848}
{"method": "register_url", "url": "https:\/\/blog.hboeck.de\/serendipity_admin.php?serendipity[adminModule]=entries&serendipity[adminAction]=editSelect", "user_guid": "686F27D9580CF2CDA8F6D4843DC79BA1", "referrer": "https://blog.hboeck.de/serendipity_admin.php", "af": 661013, "bi": 661, "pv": "3.0.105.0", "ts": 1424914316235}


And from another try with the browser plugin variant shipped with Comodo Internet Security:

{"method":"register_url","url":"https:\\/\\/www.facebook.com\\/?_rdr","user_guid":"686F27D9580CF2CDA8F6D4843DC79BA1","referrer":""}
{"method":"register_url","url":"https:\\/\\/www.facebook.com\\/login.php?login_attempt=1","user_guid":"686F27D9580CF2CDA8F6D4843DC79BA1","referrer":"https:\\/\\/www.facebook.com\\/?_rdr"}


On a linux router or host system this could be tested with a command like tcpdump -A dst ads.adtrustmedia.com|grep register_url. (I was unable to do the same on the affected system with the windows version of tcpdump, I'm not sure why.)

Now here is the troubling part: The URLs I surf to are all sent to a server owned by AdTrustMedia. As you can see in this example these are HTTPS-protected URLs, some of them from the internal backend of my blog. In my tests all URLs the user surfed to were sent, sometimes with some delay, but not URLs of objects like iframes or images.

This is worrying for various reasons. First of all with this data AdTrustMedia could create a profile of users including all the webpages the user surfs to. Given that the company advertises this product as a privacy tool this is especially troubling, because quite obviously this harms your privacy.

This communication happened in clear text, even for URLs that are HTTPS. HTTPS does not protect metadata and a passive observer of the network traffic can always see which domains a user surfs to. But what HTTPS does encrypt is the exact URL a user is calling. Sometimes the URL can contain security sensitive data like session ids or security tokens. With PrivDog installed the HTTPS URL was no longer protected, because it was sent in cleartext through the net.

The TLS certificate validation issue was only present in the standalone version of PrivDog and not the version that is bundled with Comodo Internet Security as part of the Chromodo browser. However this new issue of sending URLs to an AdTrustMedia server was present in both the standalone and the bundled version.

I have asked PrivDog for a statement: "In accordance with our privacy policy all data sent is anonymous and we do not store any personally identifiable information. The API is utilized to help us prevent fraud of various types including click fraud which is a serious problem on the Internet. This allows us to identify automated bots and other threats. The data is also used to improve user experience and enables the system to deliver users an improved and more appropriate ad experience." They also said that they will update the configuration of clients to use HTTPS instead of HTTP to transmit the data.

PrivDog made further HTTP calls. Sometimes it fetched Javascript and iframes from the server trustedads.adtrustmedia.com. By manipulating these I was able to inject Javascript into webpages. However I have only experienced this with HTTP webpages. This by itself doesn't open up security issues, because an attacker able to control network traffic is already able to manipulate the content of HTTP webpages and can therefore inject JavaScript anyway. There are also other unencrypted HTTP requests to AdTrustMedia servers transmitting JSON data whose exact meaning I don't know.

February 24, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
TG4: Tinderbox Generation 4 (February 24, 2015, 21:08 UTC)

Everybody's a critic: the first comment I received when I showed other Gentoo developers my previous post about the tinderbox was a question on whether I would be using pkgcore for the new generation tinderbox. If you have understood what my blog post was about, you probably understand why I was not happy about such a question.

I thought the blog post made it very clear that my focus right now is not to change the way the tinderbox runs but the way the reporting pipeline works. This is the same problem as 2009: generating build logs is easy, sifting through them is not. At first I thought this was hard just for me, but the fact that GSoC attracted multiple people interested in doing continuous build, but not one interested in logmining showed me this is just a hard problem.

The approach I took last time, with what I'll start calling TG3 (Tinderbox Generation 3), was to highlight the error/warning messages, provide a list of build logs for which a problem was identified (without caring much about which kind of problem), and just show broken builds or broken tests in the interface. This was easy to build, and to a point easy to use, but it had a lot of drawbacks.

The major drawbacks of that UI are that it relies on manual work to identify open bugs for the package (and thus make sure not to report duplicate bugs), and on my own memory not to report the same issue multiple times, if the bug was closed by some child as NEEDINFO.

I don't have my graphic tablet with me to draw a mock of what I have in mind yet, but I can throw in some of the things I've been thinking of:

  • Being able to tell what problem or problems a particular build is about. It's easy to tell whether a build log is just a build failure or a test failure, but what if instead it has three or four different warning conditions? Being able to tell which ones have been found and having a single-click bug filing system would be a good start.
  • Keep in mind the bugs filed against a package. This is important because sometimes a build log is just a repeat of something filed already; it may be that it failed multiple times since you started a reporting run, so it might be better to show that easily.
  • Related, it should collapse failures for packages so as not to repeat the same package multiple times on the page. Say you look at the build failures every day or two: you don't care if the same package failed 20 times, especially if the logs report the same error. Finding out whether the error messages are the same is tricky, but at least you can collapse the multiple logs into a single log per package, so you don't need to skip it over and over again.
  • Again related, it should keep track of which logs have been read and which weren't. It's going to be tricky if the app is made multi-user, but at least a starting point needs to be there.
  • It should show the three most recent bugs open for the package (and a count of how many other open bugs) so that if the bug was filed by someone else, it does not need to be filed again. Bonus points for showing the few most recently reported closed bugs too.

You can tell already that this is a considerably more complex interface than the one I used before. I expect it'll take some work with JavaScript at the very least, so I may end up doing it with AngularJS and Go mostly because that's what I need to learn at work as well, don't get me started. At least I don't expect I'll be doing it in Polymer but I won't exclude that just yet.

Why do I spend this much time thinking and talking (and soon writing) about UI? Because I think this is the current bottleneck to scale up the amount of analysis of Gentoo's quality. Running a tinderbox is getting cheaper — there are plenty of dedicated server offers that are considerably cheaper than what I paid for hosting Excelsior, let alone the initial investment in it. And this is without going to look again at the possible costs of running them on GCE or AWS at request.

Three years ago, my choice of a physical server in my hands was easier to justify than now, with 4-core HT servers with 48GB of RAM starting at €40/month — while I/O is still the limiting factor, with that much RAM it's well possible to have one tinderbox building fully in tmpfs, and just run a separate server for a second instance, rather than sharing multiple instances.

And even if GCE/AWS instances that are charged for time running are not exactly interesting for continuous build systems, having a cloud image that can be instructed to start running a tinderbox with a fixed set of packages, say all the reverse dependencies of libav, would make it possible to run explicit tests for code that is known to be fragile, while not pausing the main tinderbox.

Finally, there are different ideas of how we should be testing packages: all options enabled, all options disabled, multilib or not, hardened or not, one package at a time, all packages together… they can all share the same exact logmining pipeline, as all it needs is the emerge --info output, and the log itself, which can have markers for known issues to look out for or not. And then you can build the packages however you desire, as long as you can submit them there.

Now my idea is not to just build this for myself and run analysis over all the people who want to submit the build logs, because that would be just about as crazy. But I think it would be okay to have a shared instance for Gentoo developers to submit build logs from their own personal instances, if they want to, and then have them look at their own accounts only. It's not going to be my first target but I'll keep that in mind when I start my mocks and implementations, because I think it might prove successful.

February 23, 2015
Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita 0.5 is released (February 23, 2015, 11:02 UTC)

Hi all,
we are pleased to announce version 0.5 of Trojitá, a fast Qt IMAP e-mail client. More than 500 changes went in since the previous release, so the following list highlights just a few of them:

  • Trojitá can now be invoked with a mailto: URL (RFC 6068) on the command line for composing a new email.
  • Messages can be forwarded as attachments (support for inline forwarding is planned).
  • Passwords can be remembered in a secure, encrypted storage via QtKeychain.
  • E-mails with attachments are decorated with a paperclip icon in the overview.
  • Better rendering of e-mails with extraordinary MIME structure.
  • By default, only one instance is kept running, and can be controlled via D-Bus.
  • Trojitá now provides better error reporting, and can reconnect on network failures automatically.
  • The network state (Offline, Expensive Connection or Free Access) will be remembered across sessions.
  • When replying, it is now possible to retroactively change the reply type (Private Reply, Reply to All but Me, Reply to All, Reply to Mailing List, Handpicked).
  • When searching in a message, Trojitá will scroll to the current match.
  • Attachment preview for quick access to the enclosed files.
  • The mark-message-read-after-X-seconds setting is now configurable.
  • The IMAP refresh interval is now configurable.
  • Speed and memory consumption improvements.
  • Miscellaneous IMAP improvements.
  • Various fixes and improvements.
  • We have increased our test coverage, and are now making use of an improved Continuous Integration setup with pre-commit patch testing.

This release has been tagged in git as "v0.5". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

We would like to thank Karan Luthra and Stephan Platz for their efforts during Google Summer of Code 2014.

The Trojitá developers

  • Jan Kundrát
  • Pali Rohár
  • Dan Chapman
  • Thomas Lübking
  • Stephan Platz
  • Boren Zhang
  • Karan Luthra
  • Caspar Schutijser
  • Lasse Liehu
  • Michael Hall
  • Toby Chen
  • Niklas Wenzel
  • Marko Käning
  • Bruno Meneguele
  • Yuri Chornoivan
  • Tomáš Chvátal
  • Thor Nuno Helge Gomes Hultberg
  • Safa Alfulaij
  • Pavel Sedlák
  • Matthias Klumpp
  • Luke Dashjr
  • Jai Luthra
  • Illya Kovalevskyy
  • Edward Hades
  • Dimitrios Glentadakis
  • Andreas Sturmlechner
  • Alexander Zabolotskikh

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The tinderbox is dead, long live the tinderbox (February 23, 2015, 03:24 UTC)

I announced it last November and now it became reality: the Tinderbox is no more, in hardware as well as software. Excelsior was taken out of the Hurricane Electric facility in Fremont this past Monday, just before I left for SCALE13x.

Originally the box was hosted by my then-employer, but as of last year, to allow more people to have access to it, I had it moved to my own rented cabinet, at a figure of $600/month. Not chump change, but it was okay for a while; unfortunately the cost-sharing option that was supposed to happen did not happen, and about a year later those $7200 do not feel like a good choice, and this is without delving into the whole insulting behavior of a fellow developer.

Right now the server is lying on the floor of an office in the Mountain View campus of my (current) employer. The future of the hardware is uncertain right now, but it's more likely than not going to be donated to Gentoo Foundation (minus the HDDs for obvious opsec). I'm likely going to rent a dedicated server of my own for development and testing, as even though they would be less powerful than Excelsior, they would be massively cheaper at €40/month.

The question becomes what we want to do with the idea of a tinderbox — it seemed like, after I announced the demise, people would get together to fix it once and for all, but four months later there is nothing to show for it. After speaking with other developers at SCaLE, and realizing I'm probably the only one with enough domain knowledge of the problems I tackled, at this point, I decided it's time for me to stop running a tinderbox and instead design one.

I'm going to write a few more blog posts to get into the nitty-gritty details of what I plan on doing, but I would like to provide at least a high-level idea of what I'm going to change drastically in the next iteration.

The first difference will be the target execution environment. When I wrote the tinderbox analysis scripts I designed them to run in a mostly sealed system. Because the tinderbox was running at someone else's cabinet, within its management network, I decided I would not provide any direct access to either the tinderbox container nor the app that would mangle that data. This is why the storage for both the metadata and the logs was Amazon: pushing the data out was easy and did not require me to give access to the system to anyone else.

In the new design this will not be important — not only because it'll be designed to push the data directly into Bugzilla, but more importantly because I'm not going to run a tinderbox in such an environment. Well, admittedly I'm just not going to run a tinderbox ever again, and will just build the code to do so, but the whole point is that I won't keep that restriction on to begin with.

And since the data store now is only temporary, I don't think it's worth over-optimizing for performance. While I originally considered and dropped the option of storing the logs in PostgreSQL for performance reasons, now this is unlikely to be a problem. Even if the queries would take seconds, it's not like this is going to be a deal breaker for an app with a single user. Even more importantly, the time taken to create the bug on the Bugzilla side is likely going to overshadow any database inefficiency.

The part that I've still got some doubts about is how to push the data from the tinderbox instance to the collector (which may or may not be the webapp that opens the bugs too.) Right now the tinderbox does some analysis through bashrc, leaving warnings in the log — the log is then sent to the collector through -chewing gum and saliva- tar and netcat (yes, really) to maintain one single piece of metadata: the filename.
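
To give an idea of what that chewing gum and saliva amounts to, the transport is essentially something along these lines (a sketch with a made-up host name, port and log path, not the actual scripts):

# on the tinderbox side:
~# tar cf - /var/log/portage/*.log | nc collector.example.com 9999
# on the collector side:
~# nc -l -p 9999 | tar xf -

tar is only in the picture because it carries along the one piece of metadata mentioned above, the file name; depending on the netcat flavour the listening side may want nc -l 9999 instead.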

I would like to be able to collect some metadata on the tinderbox side (namely, emerge --info, which before was cached manually) and send it down to the collector. But adding this much logic is tricky, as the tinderbox should still operate with most of the operating system busted. My original napkin plan involved having the agent written in Go, using Apache Thrift to communicate to the main app, probably written in Django or similar.

The reason why I'm saying that Go would be a good fit is because of one piece of its design I do not like (in the general use case) at all: the static compilation. A Go binary will not break during a system upgrade of any runtime, because it has no runtime; which is in my opinion a bad idea for a piece of desktop or server software, but it's a godsend in this particular environment.

But the reason for which I was considering Thrift was I didn't want to look into XML-RPC or JSON-RPC. But then again, Bugzilla supports only those two, and my main concern (the size of the log files) would still be a problem when attaching them to Bugzilla just as much. Since Thrift would require me to package it for Gentoo (seems like nobody did yet), while JSON-RPC is already supported in Go, I think it might be a better idea to stick with the JSON. Unfortunately Go does not support UTF-7 which would make escaping binary data much easier.

Now what remains a problem is filing the bug and attaching the log to Bugzilla. If I were to write that part of the app in Python, it would be just a matter of using the pybugz libraries to handle it. But with JSON-RPC it should be fairly easy to implement support for it from scratch (unlike XML-RPC) so maybe it's worth just doing the whole thing in Go, and reduce the proliferation of languages in use for such a project.

Python will remain in use for the tinderbox runner. Actually if anything I would like to remove the bash wrapper I've written and do the generation and selection of which packages to build in Python. It would also be nice if it could handle the USE mangling by itself, but that's difficult due to the sad conflicting requirements of the tree.

But this is enough details for the moment; I'll go back to thinking the implementation through and add more details about that as I get to them.

Hanno Böck a.k.a. hanno (homepage, bugs)
Software Privdog worse than Superfish (February 23, 2015, 00:27 UTC)

tl;dr: There is a piece of software called PrivDog. It totally breaks HTTPS security in a similar way as Superfish.

In case you haven't heard, in the past days an adware called Superfish made headlines. It was preinstalled on Lenovo laptops and it is bad: it totally breaks the security of HTTPS connections. The story became bigger when it became clear that a lot of other software packages were using the same technology, Komodia, with the same security risk.

What Superfish and other tools do is intercept encrypted HTTPS traffic to insert advertising on webpages. They do so by breaking the HTTPS encryption with a man-in-the-middle attack, which is possible because they install their own certificate into the operating system.

A number of people gathered in a chatroom and we noted a thread on Hacker News where someone asked whether a tool called PrivDog is like Superfish. PrivDog's functionality is to replace advertising in web pages with its own advertising "from trusted sources". That by itself already sounds weird even without any security issues.

A quick analysis shows that it doesn't have the same flaw as Superfish, but it has another one which arguably is even bigger. While Superfish used the same certificate and key on all hosts PrivDog recreates a key/cert on every installation. However here comes the big flaw: PrivDog will intercept every certificate and replace it with one signed by its root key. And that means also certificates that weren't valid in the first place. It will turn your Browser into one that just accepts every HTTPS certificate out there, whether it's been signed by a certificate authority or not. We're still trying to figure out the details, but it looks pretty bad. (with some trickery you can do something similar on Superfish/Komodia, too)

There are some things that are completely weird. When one surfs to a webpage that has a self-signed certificate (really self-signed, not signed by an unknown CA) it adds another self-signed cert with 512 bit RSA into the root certificate store of Windows. All other certs get replaced by 1024 bit RSA certs signed by a locally created PrivDog CA.

US-CERT writes: "Adtrustmedia PrivDog is promoted by the Comodo Group, which is an organization that offers SSL certificates and authentication solutions." A variant of PrivDog that is not affected by this issue is shipped with products produced by Comodo (see below). This makes this case especially interesting because Comodo itself is a certificate authority (they had issues before). As ACLU technologist Christopher Soghoian points out on Twitter, the founder of PrivDog is the CEO of Comodo. (See this blog post.)

We will try to collect information on this and other similar software in a Wiki on Github. Discussions also happen on irc.ringoflightning.net in #kekmodia.

Thanks to Filippo, slipstream / raylee and others for all the analysis that has happened on this issue.

Update/Clarification: The dangerous TLS interception behaviour is part of the latest version of PrivDog 3.0.96.0, which can be downloaded from the PrivDog webpage. Comodo Internet Security bundles an earlier version of PrivDog that works with a browser extension, so it is not directly vulnerable to this threat. According to online sources PrivDog 3.0.96.0 was released in December 2014 and changed the TLS interception technology.

Update 2: Privdog published an Advisory.

February 21, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hi!

On a rather young Gentoo setup of mine I ran into SSLV3_ALERT_HANDSHAKE_FAILURE from rss2email.
Plain Python showed it, too:

# python -c "import urllib2; \
    urllib2.urlopen('https://twitrss.me/twitter_user_to_rss/?user=...')" \
    |& tail -n 1
urllib2.URLError: <urlopen error [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] \
    sslv3 alert handshake failure (_ssl.c:581)>

On other machines this yields

urllib2.HTTPError: HTTP Error 403: Forbidden

instead.

It turned out I overlooked USE="bindist ..." in /etc/portage/make.conf which is sitting there by default.
On OpenSSL, bindist disables elliptic curve support. So that is where the SSLV3_ALERT_HANDSHAKE_FAILURE came from.
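
For anyone hitting the same thing, the fix is essentially to drop bindist from USE in /etc/portage/make.conf and rebuild the packages that honour it. A minimal sketch (as far as I recall net-misc/openssh wants a bindist setting matching OpenSSL's, so it is rebuilt in the same go; adjust to taste):

~# emerge --oneshot --newuse dev-libs/openssl net-misc/openssh

After that the urllib2 snippet from above should connect without the handshake failure.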

February 20, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Code memes, an unsolved problem (February 20, 2015, 20:35 UTC)

I'll start the post by pointing out that my use of the word meme will follow relatively closely the original definition provided by Dawkins (hate him, love him, or find him a prat that has sometimes good ideas) in The Selfish Gene rather than the more modern usage of "standard image template with similar text on it."

The reason is that I really need that definition to describe what I see happening often in code: the copy-pasting of snippets, or concepts, across projects, and projects, and projects, mutating slightly in some cases because of coding style rules and preferences.

This is particularly true when you're dealing with frameworks, such as Rails and Autotools; the obvious reason for that is that most people will strive for consistency with someone else — if they try themselves, they might make a mistake, but someone else already did the work for them, so why not use the same code? Or a very slightly different one just to suit their tastes.

Generally speaking, consistency is a good thing. For instance if I can be guaranteed that a given piece of code will always follow the same structure throughout a codebase I can make it easier on me to mutate the code base if, as an example, a function call loses one of its parameters. But when you're maintaining a (semi-)public framework, you no longer have control over the whole codebase, and that's where the trouble starts.

As you no longer have control over your users, bad code memes are going to ruin your day for years: the moment one influential user finds a way to work around a bug or implement a nice trick, their meme will live on for years, and breaking it is going to be just painful. This is why Autotools-based build systems suck in many cases: they all copied old bad memes from another build system and those stuck around. Okay, there is a separate issue of people deciding to break all memes and creating something that barely works and will break at the first change in autoconf or automake, but that's beside my current point.

So when people started adding AC_CANONICAL_TARGET, the result was an uphill battle to get people to drop it. It's not like it's a big problem for it to be there; it just makes the build system bloated, and it's one of a thousand cuts that make Autotools so despised. I'm using this as an example, but there are plenty of other memes in autotools that are worse, breaking compatibility, or cross-compilation, or the maintainers only know what else.
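
To illustrate that particular meme, here is a minimal configure.ac sketch of my own (not taken from any specific project): most projects only need the host triplet, yet the target detection keeps getting copied along.

dnl configure.ac (sketch)
AC_INIT([example], [1.0])
dnl Enough for "which platform will this run on?" checks:
AC_CANONICAL_HOST
dnl AC_CANONICAL_TARGET is only meaningful for toolchains (compilers,
dnl assemblers, binutils) that generate code for yet another platform;
dnl copying it into ordinary projects just adds bloat.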

This is not an easy corner to get out of, adding warnings about the use of deprecated features can help, but sometimes it's not as simple, because it's not a feature being used, it's the structure being the problem, which you can't easily (or at all) warn on. So what do you do?

If your framework is internal to an organisation, a company or a project, your best option is to make sure that there are no pieces of code hanging around that use the wrong paradigm. It's easy to say "here is the best-practices piece of code, follow that, not the bad examples", but people don't work that way: they will be looking on a search engine (or grep) for what they need done, and find the myriad bad examples to follow instead.

When your framework is open to the public and is used by people all around the world, well, there isn't much you can do about it, besides being proactive, pointing out the bad examples and providing solutions to them that people can reference. This was the reason why I started Autotools Mythbuster, initially as a series of blog posts.

You could start breaking the bad code, but it would probably be a bad move PR-wise, given that people will complain loudly that your software is broken (see the multiple API breakages in libav/ffmpeg). Even if you were able to provide patches to all the broken software out there, it's extremely unlikely that it would be seen as a good move, and it might make things worse if there is no clear backward compatibility with the new code, as then you'll end up with both the bad code and the good code, wrapped in compatibility checks.

I don't have a clean solution, unfortunately. My approach is fix and document, but it's not always possible and it takes much more time than most people have to spare. It's sad, but it's the nature of software source code.

February 18, 2015
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Reviewing moved files with git (February 18, 2015, 08:29 UTC)

This might be a well-known trick already, but just in case it’s not…

Reviewing a patch can be a bit painful when a file has been changed and moved or renamed in one go (and there can be perfectly valid reasons for doing this). A nice thing about git is that you can reference files in an arbitrary tree while using git diff, so reviewing such changes can become easier if you do something like this:

$ git am 0001-the-thing-I-need-to-review.patch
$ git diff HEAD^:old/path/to/file.c new/path/to/file.c

This just references file.c at its old path, which is available in the commit before HEAD, and compares it to the file at the new path in the patch you just applied.

Of course, you can also use this to diff a file at some arbitrary point in the past, or in some arbitrary branch, with the same file at the current HEAD or any other point.
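
For instance (hypothetical refs and paths, just to show the syntax):

$ git diff v1.0:src/old_name.c HEAD:src/new_name.c
$ git diff some-branch:lib/foo.c master:lib/foo.c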

Hopefully this is helpful to someone out there!

Update: As Alex Elsayed points out in the comments, git diff -M/-C can be used to similar effect. The above example could instead be written as:

$ git am 0001-the-thing-I-need-to-review.patch
$ git show -C

February 17, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2015 (February 17, 2015, 04:47 UTC)

This is a quick informational message about GSoC 2015.

The Gentoo Foundation is in the process of applying to GSoC 2015 as an organization. This is the 10th year we’ll participate in this very successful and exciting program.

Right now, we need you to propose project ideas. You do not need to be a developer to propose an idea. First, open this link in a new tab/window. Change the title My_new_idea in the URL to the actual title, load the page again, fill in all the information and save the article. Then, edit the ideas page and include a link to it. If you need any help with this, or advice regarding the description or your idea, come talk to us in #gentoo-soc on Freenode.

Thanks.

February 15, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Apache AddHandler madness all over the place (February 15, 2015, 21:44 UTC)

Hi!

A friend of mine ran into known (though not well-known) security issues with Apache’s AddHandler directive.
Basically, Apache configuration like

# Avoid!
AddHandler php5-fcgi .php

applies to a file called evilupload.php.png, too. Yes.
The current Apache documentation should, for security reasons, clearly say that AddHandler must not be used any more.
That’s what I would expect. What I find as of 2015-02-15 looks different.

Maybe that’s why AddHandler is still proposed all across the Internet.

And maybe that’s why it made its way into app-admin/eselect-php (bug #538822) and a few more.

Please join the fight. Time to get AddHandler off the Internet!
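
If you maintain such a configuration yourself, a safer pattern (a sketch of mine, not from this post) is to match the complete file name instead of just one of its extensions, for example with FilesMatch:

# Only files whose name actually ends in .php are handed to PHP
<FilesMatch "\.php$">
    SetHandler php5-fcgi
</FilesMatch>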

I ❤ Free Software 2015-02-14 (February 15, 2015, 20:19 UTC)

I’m late. So what :)

I love Free Software!