
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. Luca Barbato
. Marek Szuba
. Mart Raudsepp
. Matt Turner
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Sergei Trofimovich
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
December 13, 2017, 18:06 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

December 12, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.7 (December 12, 2017, 14:34 UTC)

This important release has been long awaited as it focused on improving overall performance of py3status as well as dramatically decreasing its memory footprint!

I want once again to salute the impressive work of @lasers, our amazing contributor from the USA, who has become the top contributor of 2017 in terms of commits and PRs.

Thanks to him, this release brings a whole batch of improvements and QA clean ups on various modules. I encourage you to go through the changelog to see everything.

Highlights

A deep rework of the usage and scheduling of the threads that run modules has been done by @tobes.

  • py3status no longer keeps one thread per module running permanently; instead it uses a queue to spawn a thread that executes a module only when its cache expires (see the sketch after this list)
  • this new scheduling and usage of threads allows py3status to run under asynchronous event loops, and gevent will be supported in the upcoming 3.8
  • the memory footprint of py3status got largely reduced thanks to the threading modifications and to a nice hunt for ever-growing and unused variables
  • module error reporting is now more detailed
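To make the scheduling change concrete, here is a minimal sketch of the queue-based approach. It assumes a hypothetical Module type with a cache_expires_at timestamp and a run() method; the real py3status internals differ.

import threading
import time
from queue import Queue


class Module:
    # Hypothetical stand-in for a py3status module: it caches its
    # output and records when that cache expires.
    def __init__(self, name, interval):
        self.name = name
        self.interval = interval
        self.cache_expires_at = 0.0

    def run(self):
        # Refresh the module output here, then push the expiry forward.
        self.cache_expires_at = time.monotonic() + self.interval


work_queue = Queue()


def scheduler(modules):
    # Enqueue a module only once its cache has expired, rather than
    # keeping one permanently running thread per module. The expiry is
    # bumped immediately so the module isn't queued twice.
    while True:
        now = time.monotonic()
        for module in modules:
            if module.cache_expires_at <= now:
                module.cache_expires_at = now + module.interval
                work_queue.put(module)
        time.sleep(0.1)


def dispatcher():
    # Spawn a short-lived thread per due module; threads only exist
    # while a module is actually executing.
    while True:
        module = work_queue.get()
        threading.Thread(target=module.run, daemon=True).start()
        work_queue.task_done()


if __name__ == "__main__":
    threading.Thread(target=dispatcher, daemon=True).start()
    scheduler([Module("clock", 1.0), Module("battery", 5.0)])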

Milestone 3.8

The next release will bring some awesome new features, such as gevent support, environment variable support in the config file, and per-module persistent data storage, as well as new modules!

Thanks contributors!

This release is their work, thanks a lot guys!

  • JohnAZoidberg
  • lasers
  • maximbaz
  • pcewing
  • tobes

December 06, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have expressed some doubts on Keybase last year, but I kept mostly “using” the service, as in I kept it up to date when they added new services and so on. I have honestly mostly used it to keep track of the address of the bitcoin wallet I’ve been using (more on that later).

A couple of weeks ago, I got an (unencrypted) email telling me that "chris" (the Keybase founder) "sent me a message" (more likely, they decided to do another of their rounds of adding invites to people's accounts to try to gather more users). Usually these messages, being completely non-private, would be sent in the clear as email, but since Keybase has been trying to sell itself as a secure messaging system, this time it was sent through their secure message system.

Unlike most other features of the service, which can be used by copy-pasting huge command lines that combine gpg and curl to signal the requests out of band, the secure messaging system depends on the actual keybase command line tool (written in NodeJS, if I got this right). So I installed it (from AUR, since my laptop is currently running Arch Linux) and tried accessing the message. This was already more complicated than it should have been, because the previous computer set up to access Keybase was the laptop I had replaced and gifted to a friend.

The second problem was that Keybase also decided to make encrypted file transfer easy by bundling it with messaging, effectively making conversations a shared encrypted directory, with messages being files. This may sound like a cool idea, particularly for those who like efforts like Plan9, but I think this is where my patience for the service ends. Indeed, while there are sub-commands in keybase for file access, they all depend on a FUSE (filesystem-in-userspace) module, and the same is true for the messages. So you're now down to running a GPG wrapper, a background NodeJS service, and a FUSE service on your computer just to access files and messages.

I gave up. Even though trying to simplify the key exchange for GPG usage is definitely interesting, I have a feeling Keybase decided to feature-creep in the hope that it would attract funding so that they can keep operating. Zcash first, and now this whole encrypted file-share and messaging system, appear to be overly complicated for the sake of having something to show. As such, I explicitly said in my profile that any message sent through this service is going to be a scream into the void, as I'm not going to pay attention to it at all.

The second GPG-related problem was with GreenAddress, the bitcoin wallet I signed up for some months ago. Let me start with a note that I do not agree with bitcoin's mission statement, with its implementation, or with any of the other cryptocurrencies' existence. I found that Thomas wrote this better than me, so I'm not going to spend more words on this. The reason why I even have a wallet is that some time ago I was contacted by one of my readers (whom I won't disclose), who was trying out the Brave browser, which at the time would compensate content creators with bitcoin (they have now switched to their own cryptocurrency, making it even less appealing to me and probably to most other content creators who have not gotten drunk on cryptocurrencies). For full disclosure, all the money I received this way (only twice since signing up) I donated back to VideoLAN, since it was barely pocket money and in a format that is not useful to me.

So GreenAddress looked like a good idea at the time, and one of the things it lets you do is provide them with your public key, so that any communication from them to your mailbox is properly encrypted. This is good, particularly because you can get the one-time codes by mail encrypted, making them an actual second authentication factor, as it proves access to the GPG key (which in my case is actually a physical device). So of course I did that. And then forgot all about it, until I decided to send that money to VideoLAN.

Here’s where things get messy with my own mistakes. My GPG key is, due to best practices, set to expire every year, and every year I renew it extending the expiration date. Usually I need the expired key pretty quickly, because it’s also my SSH key and I use it nearly daily. Because of my move to London I have not used my SSH key for almost a month, and missed the expiration. So when I tried logging into GreenAddress and had them send me the OTP code… I’ll get to that in a moment.

The expiration date on a GPG key is meant to ensure that the person you’re contacting has had recent access to the key, and it’s a form of self-revocation. It is a perfectly valid feature when you consider the web of trust that we’re used to. It is slightly less obvious as a feature when explicitly providing a public key for a service, be it GreenAddress, Bugzilla or Facebook. Indeed, what I would expect in these cases would be:

  • a notification that the provided public key is about to expire and you need to extend or replace it; or
  • a notification that the key has expired, and so further communication will happen in clear text; or
  • the service to ignore the expiration date altogether, just as they ignore whether the key was signed and counter-signed (preferred).

Since providing a public key directly to a service sidesteps most other trust options, including the signing and counter-signing, I would expect the expiration date to also be ignored. Indeed, if I provided a public key to a service, I expect the service to keep using it until I told it otherwise. But as I said there are other options. They are probably all bad in some ways, but ignoring the expiration date does not appear to be obviously wrong to me.

What does appear totally broken to me is sending messages that contain exactly this text:

Warning: Invalid pgp key detected

Which is exactly what GreenAddress did.

For reference, it appears that Facebook just silently disables encryption of email notifications when the key expires. I have thus uploaded the extended key now, and should probably set a good reminder a month before the next expiration, to keep my accounts secure. I find this particularly bothersome because the expiration date is public information if you use the same key for all communication. Having a service-specific key is probably the most secure option, as long as the key is not available on a public keyserver, but it also complicates the securing up of your account.
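Such a reminder is easy to automate. A minimal sketch, assuming GnuPG is installed and that the machine-readable --with-colons listing reports the expiration as epoch seconds in field 7 of each pub record (newer GnuPG versions may also emit ISO 8601 timestamps there):

import subprocess
import time

WARN_DAYS = 30  # warn a month ahead of the expiration


def keys_about_to_expire():
    # Parse the colon-separated listing; field 5 is the key ID and
    # field 7 the expiration timestamp (empty for non-expiring keys).
    out = subprocess.run(
        ["gpg", "--list-keys", "--with-colons"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split(":")
        if fields[0] == "pub" and fields[6]:
            days_left = (int(fields[6]) - time.time()) / 86400
            if days_left < WARN_DAYS:
                yield fields[4], days_left


if __name__ == "__main__":
    for key_id, days in keys_about_to_expire():
        print(f"key {key_id} expires in {days:.0f} days")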

What does this tell us? That, once again, making things usable is an important step in making things really secure. If people have to jump through hoops to keep their accounts secure, they're likely to drop a number of the security features that are meant to protect them.

December 02, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
UK Banking, Attempt 1: Barclays (December 02, 2017, 20:04 UTC)

You may remember that back in August I tried opening a NatWest account while not living in the UK yet, and hit the stone wall of an impossible declaration required by the NatWest employees. I gave up on setting up a remote account and waited to open one once I got in the country. Since the Northern Irish account seemed to be good for all I needed to do (spoiler: it wasn't), I decided to wait for the Barclays representative to show up on my official starting date, and set up a "Premier" account with them.

The procedure, which sounded very "special" beforehand, turned out to just be a "here is how you fill in the forms on the website". Then, instead of sending you to a local branch to get your documents copied and stamped (something that appears to be very common in the British Isles), they had three people doing the stamping on a pre-made copy of the passport. Not particularly special, but at least practical, right?

Except they also said it would take a few days for the card, but over a week to get access to the online banking, as they needed to "send me more stuff". The forms were filled in on Monday, the account set up by Tuesday, and the card arrived on Wednesday, with the PIN following on Thursday. At that point I guessed that what else they told me to wait for was a simple EMV CAP device (I did not realise that the Wikipedia page had a Barclays device as an example until I looked to link it over here), and decided not to wait, instead signing up for the online banking using my Ulster Bank CAP device, which worked perfectly fine.

On the Friday I also tried installing the Barclays app on my phone. As you probably all noticed by now, looking for a new app on the Play Store is risky, particularly when banking is involved, so I wanted to get a link to it from their website. It turns out that the Barclays website includes a link to the Apple App Store page for their app, but not to the Google Play one: the Play Store badge image is simply not clickable. Instead, the option they give you is to provide your phone number, and they will send you a link to the app as a text message. When I tried doing so, I got an error message suggesting I check my connection.

The reason for the error became apparent with developer tools open: the request to send the SMS is sent to a separate app running on a different hostname. And that host had a different certificate than their main website, one which at that point had been expired for at least four days! Indeed, since then the certificate has been replaced with a new one, an EV certificate signed by Entrust rather than Symantec as they had before. I do find it slightly disconcerting that they, as a bank, have no monitoring of the validity of the certificates for all of their websites. But let's move on.

The online banking relies heavily on "PINSentry" (that is, CAP), but in doing so it makes it fairly easy to set up most things, from standing orders to transfers and changes of address. Changing the address to my new apartment was quite straightforward, and it all seemed good. The mobile app, on the other hand, was less useful at first. The main problem is that the app will refuse to do much for the first ten days, while they "set it up" for you. I assume this is a security feature to prevent someone who gets access to your account from having the app execute transactions instead of the website. Unfortunately it also means that the app is useless if your phone dies and you need to get a new one.

Speaking of the mobile app, Barclays supports Apple Pay, but they don't support Android Pay, probably because they don't have to. On Android, you can have a replacement app provide NFC payment support, and so they decided to use their banking app for the payments as well. Unfortunately, the one time I tried using it, it kept throwing errors and asking me to log in, despite a working network connection. I don't think I'll use this again and will rather look for a bank that supports Android Pay in the future.

Up to here everything sounds peachy, right? The card arrived and it worked, although I only used it a handful of times: to buy stuff at IKEA, and to buy plane tickets where Revolut would add an extra £5 due to it running on the credit card circuit1, rather than the debit card one.

Then the time came for me to buy a new computer, because of the one "lost" by the movers. Since Black Friday was around the corner, and with it my trip to Italy, I decided to wait for that and see if anything at all would come discounted. And indeed Crucial (Micron) had a discount on their SSDs, which is what I ended up ordering. Unfortunately, my first try to order ran into a Verified by Visa screen that, instead of asking me for additional authentication factors, just went on to tell me the transaction had failed, and to check my phone for messages.

Indeed, my phone received two text messages: one telling me that a text message would be sent to confirm a transaction, and one asking me whether the transaction was intentional or not. After confirming it was me doing the transaction, I was told to try the transaction again in a few minutes. Which I did, but even though this attempt went through the Verified by Visa screen, PayPal refused the payment altogether. Trying to order directly through Crucial without using PayPal got my order through… except it was cancelled half an hour later because Crucial could not confirm the details of the card.

At this point I tried topping up my Revolut account with the same card, and… it didn't go well either. I then tried calling them, and they could only tell me that the problem was not theirs, that they couldn't even see the requests from Revolut, and that they hadn't stopped any other transactions, putting the fault on the vendor. The vendor of course blamed the bank, and so I got stuck in between.

Upon a suggestion from Revolut on Twitter, I tried topping up by UK bank transfer. At first I got some silly "security questions" about the transfer ("Are you making this transfer to buy some goods? Is someone on the phone instructing you to make this payment?" and so on), but when it supposedly completed, I couldn't see it in the list of transactions, and trying again would lead to a "technical problem" message. Calling the bank again was even more frustrating, because the call dropped once, and as usual the IVR asked me three times for my date of birth and never managed to recognize it. It wasn't until I had left the office, angry and disappointed, that the SMS arrived asking me to confirm whether it was really me requesting the transfer…

The end result looks like Barclays put a stricter risk engine in place for Black Friday, which kept my payments from going through, particularly from the office. Trying later in the evening from my apartment (which has a much clearer UK-based geolocation) allowed the orders to go through. You could say that this is for my own protection, but I find it particularly bothersome for one specific reason: they have an app!

They could have just as easily sent a push notification to my phone to confirm or refuse the transaction, instead of requiring me to be able to receive text messages (which is not a given, as coverage is not perfect, particularly in a city like London), in addition to knowing my access code, having my bank card with me, and knowing its PIN.

At the end of the day I decided that Barclays is not the bank for me, and applied to open an account with Fineco, which is Italian and appears to have Italian expats in the UK as its target market. I will keep you posted about it.


  1. But I found out just the other day that the new virtual cards from Revolut are actually VISA Electron, rather than MasterCard. This makes a difference for many airlines, as VISA Electron cards are often considered debit cards, due to the "Electronic Use Only" limitation. I got myself a second virtual card for that and will see how it goes next time I book a flight.

November 30, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

My love for Apple, and especially MacOS, does not run deep. Actually, it is essentially nonexistent. I was recently reminded of the myriad reasons why I don't like MacOS, and one of them is that the OS should never stand in the way of what the operator wants to do. In this case, I found that even the root account couldn't write to certain directories that MacOS deemed special. That "feature" is known as System Integrity Protection. I'm not going to rant about how absurd it is to deny the root account the ability to write, but instead I'd like to present the method of disabling System Integrity Protection.

First of all, one needs to get into the Recovery Mode of MacOS. Typically, this wouldn't be all that difficult when following the instructions provided by Apple. Essentially, to get into Recovery Mode, one just has to hold Command+R when booting up the system. That's all fine and dandy if it is a physical host and one has an Apple keyboard. However, my situation called for entering Recovery Mode in a virtual machine and with a non-Apple keyboard (so no Command key). Yes, yes, I know that MacOS offers the ability to set different key combinations, but those would still have to be trapped by VMWare Fusion during boot. Instead, I figured that there had to be a way to do it from the MacOS terminal.

After digging through documentation and man pages (I’ll spare you the trials and tribulations of trying to find answers 😛 ), I finally found that, yes, one CAN reboot MacOS into Recovery Mode without the command key. To do so, open up the Terminal and type the following commands:

nvram "recovery-boot-mode=unused"
reboot recovery

The Apple host will reboot and the Recovery Mode screen will be presented:

MacOS Recovery Mode - Utilities - Terminal

Now, in the main window, there are plenty of tasks that can be launched. However, I needed a terminal, and it might not be readily apparent, but to get it, you click on the “Utilities” menu in the top menu bar (see the screenshot above), and then select “Terminal”. Thereafter, it is fairly simple to disable System Integrity Protection via the following command:

csrutil disable

All that’s left is to reboot by going to the Apple Menu and clicking on “Restart”.

Though the procedures of getting to the MacOS Recovery menu without using the Command key and disabling System Integrity Protection are not all that difficult, they were a pain to figure out. Furthermore, I’m not sure why SIP disallows root’s write permissions anyway. That seems absurd, especially in light of Apple’s most recent glaring security hole of allowing root access without a password. 😳

Cheers,
Zach

November 29, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Are tutorials to blame for basic IT problems? (November 29, 2017, 12:04 UTC)

It’s now effectively impossible to spend a month following IT (and not just) new and not hear of breaches, “hacks”, or general security fiascos. Some of these are tracked down to very basic mistakes in configuration or coding of software, including the lack of hashing of passwords in database. Everyone in the industry, including me, have at some point expressed the importance of proper QA and testing, and budgeting for them in the development process. But what if the problem is much higher up the chain?

Falsehoods Programmers Believe About Names is now over seven years old, and yet my just barely complicated full name (a first name with a space in it, a surname with an accent) can't be easily used by most of the services I routinely use. Ireland was particularly interesting, as most services would support characters in the "Latin extended" alphabet, due to the Irish language's use of ó, but they wouldn't accept my surname, which uses ò; this is not a first, as I had trouble getting invoices from French companies before, because they only know about ó as a character.

On a related, but not directly connected, topic, there are the problems an acquaintance of mine keeps stumbling across. They don't want service providers to attach a title to their account, but it looks like most of the developers that implement account handling don't actually think about this option at all, and make it hard to not set an honorific at all. In particular, it appears that not only do UIs tend to include a mandatory drop-down list of titles, but the database schema (or whichever other model is used to store the information) also stores the title as an enumeration within a list; that is apparent from the way my acquaintance has had their account reverted to a "default" value, likely the "zeroth" one in the enumeration.

And since most systems don't end up using British Airways' honorific list but are rather limited to the "usual" ones, that default appears to be "Ms" more often than not, as it sorts (lexicographically) before the others. I have had that happen to me a couple of times too, as I don't usually fill in the "title" field on paper forms (I never saw much of a point to it), and I guess somewhere in the pipeline a model really expects a person to have a title.

All of this has me wondering, oh-so-many times, why most systems appear to want to store a name in separate entries for first and last name (or variations thereof), and why they insist on having an honorific title that is "one of the list" rather than freeform (which would accept the empty string as a valid value). My theory on this is that it's the fault of the training, or of the documentation. Multiple tutorials I have read, and even followed, over the years defined a model for a "person" – whether it is a user, customer, or any other entity related to the service itself – and many of these use the most basic identifying information about a person as fields to show how the model works, which gives you "name", "surname", and "title" fields. Bonus points for using an enumeration for the title rather than a freeform field, or validation that the title is one of the "admissible" ones.

You could call this a straw man argument, but the truth is that it didn't take me any time at all to find an example tutorial (see also Archive.is, as I hope the live version can be fixed!) that did exactly that.
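For contrast, a sketch of what a less assumption-laden model could look like; this is a hypothetical example, not taken from any tutorial: one freeform full-name field, and a freeform honorific that defaults to the empty string rather than to the first entry of an enumeration.

from dataclasses import dataclass


@dataclass
class Person:
    # One freeform field: no assumption that names split into first
    # and last parts, or that they use any particular alphabet.
    full_name: str
    # Freeform and optional: the empty string is a valid value, so
    # nobody gets a default honorific assigned by enumeration order.
    honorific: str = ""


author = Person(full_name="Diego E. Pettenò")  # no title, accent kept
captain = Person(full_name="Jane Q. Public", honorific="Capt.")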

Similarly, I have seen sample tutorial code explaining how to write authentication primitives that oversimplifies the procedure, by either ignoring salting-and-hashing altogether or using obviously broken hashing functions such as crypt() rather than anything solid. Given many of us know all too well how even important jobs that are not flashy enough for a "rockstar" can be pushed into the hands of junior developers or even interns, I would not be surprised if a good chunk of the weak authentication problems that are now causing us so much pain were caused by simple bad practices that are (still) taught to those who join our profession.
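For contrast with those crypt()-style samples, here is a minimal salt-and-hash sketch using only the Python standard library. The iteration count is illustrative; current guidance, such as the NIST recommendations discussed below, should drive the real parameter choice.

import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor, tune to current guidance


def hash_password(password):
    # Return (salt, digest); store both, never the password itself.
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison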

I am afraid I don’t have an answer of how to fix this situation. While musing, again on Twitter, the only suggestion for a good text on writing correct authentication code is the NIST recommendations, but these are, unsurprisingly, written in a tone that is not useful to teach how to do things. They are standards first and foremost, and they are good, but that makes them extremely unsuitable for newcomers to learn how to do things correctly. And while they do provide very solid ground for building formally correct implementations of common libraries to implement the authentication — I somehow doubt that most systems would care about the formal correctness of their login page, particularly given the stories we have seen up to now.

I have seen comments on social media (different people on different media) about how what makes a good source of documentation changes depending on your expertise, which is quite correct. Giving a long list of things that you should or should not do is probably a bad way to introduce newcomers to development in general. But maybe we should make sure that examples, samples, and documentation are updated so that they show the current best practice, rather than overly simplified, or artificially complicated (sometimes at the same time), examples.

If you’re writing documentation, or new libraries (because you’re writing documentation for new libraries you write, right?) you may want to make sure that the “minimal” example is actually the minimum you need to do, and not skip over things like error checks, or full initialisation. And please, take a look at the various “Falsehoods Programmers Believe About” lists — and see if your example implementation make those assumptions. And if so fix them, please. You’ll prevent many mistakes from happening in real world applications, simply because the next junior developer who gets hired to build a startup’s latest website will not be steered towards the wrong implementations.

November 26, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
A quick London update (November 26, 2017, 13:04 UTC)

It’s now nearly two months since I last posted something and I guess I should at least break the silence to say that I’m well and alive. Although right now I’m spending a week in Italy – to see friends and family here, and take care of some remaining paperwork – my transfer to London completed and I’m now living on the outskirts of the City.

I was almost writing “completed successfully”, but “success” is hard to measure. As I said in the post where I announced my move, a lot of the reason for me to leave Dublin was the lack of friends and a social circle, so you could say that the only way to measure success is seeing if I manage to build such a social circle in the new city (and country), and that will take a while of course.

In addition to this, more than a few things are not quite going in my favour. For instance, the removal companies that arranged and delivered my move managed to screw up, and two of my boxes are missing, which coincidentally were the ones with the most valuable goods: my gamestation, the router, and the professional document scanner. And I'm still waiting to hear from the insurance, to see whether they will at least pay the (depreciated) value of the goods so I can get part of my money back on the new computer I'm trying to buy.

I say I’m trying to buy it, because I spent most of this past Friday fighting with my bank trying to have them let me make payments out of my account, possibly because their risk engine was turned up to eleven due to Black Friday, and it took me a significant amount of time to get in a position to actually dispose of my own money. You can bet that the first thing I’m doing after I’m back from this trip is finding a new bank.

Oh yes, and somehow it looks like the company that owns and manages the building my apartment is in is having a spat with the electricity provider. Last week, every tenant in the building received a note from E.ON (which is not my provider) saying that they are applying for a warrant to disconnect the landlord's power supply.

So yeah, expect more rants as soon as I manage to settle down; I have drafts.

November 25, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)

November 25: International Day for the Elimination of Violence against Women
(Wikipedia)

November 24, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)

Not new at all but was new to me, and was well worth my time:

DEFCON 19: Bit-squatting: DNS Hijacking Without Exploitation (w speaker)

Only somewhat related: https://www.pytosquatting.org/

November 20, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux and extended permissions (November 20, 2017, 15:04 UTC)

One of the features present in the August release of the SELinux user space is its support for ioctl xperm rules in modular policies. In the past, this was only possible in monolithic ones (and in CIL). Through this, allow rules can be extended to cover not only source (domain) and target (resource) identifiers, but also a specific number on which the rule applies. And ioctls are the first (and currently only) permission on which this is implemented.

Note that ioctl-level permission control isn't a new feature by itself; what is new is that it can be used in modular policies.

What is ioctl?

Many interactions on a Linux system are done through system calls. From a security perspective, most system calls can be properly categorized based on who is executing the call and what the target of the call is. For instance, the unlink() system call has the following prototype:

int unlink(const char *pathname);

Considering that a process (source) is executing unlink (system call) against a target (path) is sufficient for most security implementations. Either the source has the permission to unlink that file or directory, or it hasn't. SELinux maps this to the unlink permission within the file or directory classes:

allow <domain> <resource> : { file dir }  unlink;

Now, ioctl() is somewhat different. It is a system call that allows device-specific operations which cannot be expressed by regular system calls. Devices can have multiple functions/capabilities, and with ioctl() these capabilities can be interrogated or updated. It has the following interface:

int ioctl(int fd, unsigned long request, ...);

The file descriptor is the target device on which an operation is launched. The second argument is the request, an integer whose value identifies what kind of operation the ioctl() call is trying to execute. So unlike regular system calls, where the operation itself is the system call, ioctl() actually has a parameter that identifies the operation.

A list of possible parameter values on a socket, for instance, is available in the Linux kernel source code, under include/uapi/linux/sockios.h.
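To make the request parameter concrete, here is a hedged userspace sketch in Python (assuming a Linux host with an interface named eth0) that issues the SIOCGIFHWADDR request, whose number 0x8927 comes from that header, against a socket to read a hardware address:

import fcntl
import socket
import struct

SIOCGIFHWADDR = 0x8927  # request number from include/uapi/linux/sockios.h

# Any socket file descriptor can be the target of this request.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Pack a struct ifreq: the interface name sits in the first 16 bytes.
ifreq = struct.pack("256s", b"eth0")
result = fcntl.ioctl(s.fileno(), SIOCGIFHWADDR, ifreq)

# The hardware address lives in the sa_data part of the returned ifreq.
mac = result[18:24]
print(":".join(f"{b:02x}" for b in mac))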

SELinux allowxperm

For SELinux, having the purpose of the call as part of a parameter means that a regular mapping isn't sufficient. Allowing ioctl() commands for a domain against a resource is expressed as follows:

allow <domain> <resource> : <class> ioctl;

This of course does not allow policy developers to differentiate between harmless or informative calls (like SIOCGIFHWADDR to obtain the hardware address associated with a network device) and impactful calls (like SIOCADDRT to add a routing table entry).

To allow for a fine-grained policy approach, the SELinux developers introduced an extended allow permission, which is capable of differentiating based on an integer value.

For instance, to allow a domain to get a hardware address (SIOCGIFHWADDR, which is 0x8927) from a TCP socket:

allowxperm <domain> <resource> : tcp_socket ioctl 0x8927;

This additional parameter can also be ranged:

allowxperm <domain> <resource> : <class> ioctl 0x8910-0x8927;

And of course, it can also be used with a complement (i.e. to allow all ioctl parameters except a certain value):

allowxperm <domain> <resource> : <class> ioctl ~0x8927;

Small or negligible performance hit

According to a presentation given by Jeff Vander Stoep at the Linux Security Summit in 2015, the performance impact of this addition to SELinux is well under control, which helped the introduction of this capability into the Android SELinux implementation.

As a result, interested readers can find examples of allowxperm invocations in the SELinux policy in Android, such as in the app.te file:

# only allow unprivileged socket ioctl commands
allowxperm { appdomain -bluetooth } self:{ rawip_socket tcp_socket udp_socket } ioctl { unpriv_sock_ioctls unpriv_tty_ioctls };

And with that, we again show how fine-grained the SELinux access controls can be.

November 18, 2017
Sergei Trofimovich a.k.a. slyfox (homepage, bugs)
Endianness of a single byte: big or little? (November 18, 2017, 00:00 UTC)


Bug

Rolf Eike Beer noticed two test failures while testing the radvd package (an IPv6 route advertisement daemon, and more) on sparc. Both tests failed, likely due to an endianness issue:

test/send.c:317:F:build:test_add_ra_option_lowpanco:0:
  Assertion '0 == memcmp(expected, sb.buffer, sizeof(expected))'
    failed: 0 == 0, memcmp(expected, sb.buffer, sizeof(expected)) == 1
test/send.c:342:F:build:test_add_ra_option_abro:0:
  Assertion '0 == memcmp(expected, sb.buffer, sizeof(expected))'
    failed: 0 == 0, memcmp(expected, sb.buffer, sizeof(expected)) == 1

I’ve confirmed the same failure on powerpc.

Eike noted that it's unusual, because sparc is a big-endian architecture and network byte order is also big-endian (thus there is no need to flip bytes). Something very specific must be lurking in the radvd code to break endianness in this case. Curiously, all the radvd tests were working fine on amd64.

Two functions failed to produce expected results:

START_TEST(test_add_ra_option_lowpanco)
{
    ck_assert_ptr_ne(0, iface);

    struct safe_buffer sb = SAFE_BUFFER_INIT;
    add_ra_option_lowpanco(&sb, iface->AdvLowpanCoList);

    unsigned char expected[] = {
        0x22, 0x03, 0x32, 0x48, 0x00, 0x00, 0xe8, 0x03, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    };

    ck_assert_int_eq(sb.used, sizeof(expected));
    ck_assert_int_eq(0, memcmp(expected, sb.buffer, sizeof(expected)));

    safe_buffer_free(&sb);
}
END_TEST

START_TEST(test_add_ra_option_abro)
{
    ck_assert_ptr_ne(0, iface);

    struct safe_buffer sb = SAFE_BUFFER_INIT;
    add_ra_option_abro(&sb, iface->AdvAbroList);

    unsigned char expected[] = {
        0x23, 0x03, 0x0a, 0x00, 0x02, 0x00, 0x02, 0x00, 0xfe, 0x80, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0xa2, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
    };

    ck_assert_int_eq(sb.used, sizeof(expected));
    ck_assert_int_eq(0, memcmp(expected, sb.buffer, sizeof(expected)));

    safe_buffer_free(&sb);
}
END_TEST

Both tests are straightforward: they verify 6LoWPAN and ABRO extension handling (both are related to route announcements for low-power devices). It does not look complicated.

I looked at the add_ra_option_abro() implementation and noticed at least one bug: a missing endianness conversion:

// radvd.h
struct nd_opt_abro {
    uint8_t nd_opt_abro_type;
    uint8_t nd_opt_abro_len;
    uint16_t nd_opt_abro_ver_low;
    uint16_t nd_opt_abro_ver_high;
    uint16_t nd_opt_abro_valid_lifetime;
    struct in6_addr nd_opt_abro_6lbr_address;
};

// ...

// send.c
static void add_ra_option_mipv6_home_agent_info(struct safe_buffer *sb, struct mipv6 const *mipv6)
{
    struct HomeAgentInfo ha_info;
    memset(&ha_info, 0, sizeof(ha_info));
    ha_info.type = ND_OPT_HOME_AGENT_INFO;
    ha_info.length = 1;
    ha_info.flags_reserved = (mipv6->AdvMobRtrSupportFlag) ? ND_OPT_HAI_FLAG_SUPPORT_MR : 0;
    ha_info.preference = htons(mipv6->HomeAgentPreference);
    ha_info.lifetime = htons(mipv6->HomeAgentLifetime);
    safe_buffer_append(sb, &ha_info, sizeof(ha_info));
}

static void add_ra_option_abro(struct safe_buffer *sb, struct AdvAbro const *abroo)
{
    struct nd_opt_abro abro;
    memset(&abro, 0, sizeof(abro));
    abro.nd_opt_abro_type = ND_OPT_ABRO;
    abro.nd_opt_abro_len = 3;
    abro.nd_opt_abro_ver_low = abroo->Version[1];
    abro.nd_opt_abro_ver_high = abroo->Version[0];
    abro.nd_opt_abro_valid_lifetime = abroo->ValidLifeTime;
    abro.nd_opt_abro_6lbr_address = abroo->LBRaddress;
    safe_buffer_append(sb, &abro, sizeof(abro));
}

Note how add_ra_option_mipv6_home_agent_info() carefully flips bytes with htons() for all uint16_t fields, but add_ra_option_abro() does not.

It means that ABRO does not really work on little-endian (that is, most) systems in radvd, and that the test checks for the wrong thing. I added the missing htons() calls and fixed the expected[] output in the tests by manually flipping two bytes in a few locations.

Plot twist

The effect was slightly unexpected: I fixed only the ABRO test, but not the 6LoWPAN one. That's where things became interesting. Let's look at the add_ra_option_lowpanco() implementation:

// radvd.h
struct nd_opt_6co {
    uint8_t nd_opt_6co_type;
    uint8_t nd_opt_6co_len;
    uint8_t nd_opt_6co_context_len;
    uint8_t nd_opt_6co_res : 3;
    uint8_t nd_opt_6co_c : 1;
    uint8_t nd_opt_6co_cid : 4;
    uint16_t nd_opt_6co_reserved;
    uint16_t nd_opt_6co_valid_lifetime;
    struct in6_addr nd_opt_6co_con_prefix;
};

// ...

// send.c
static void add_ra_option_lowpanco(struct safe_buffer *sb, struct AdvLowpanCo const *lowpanco)
{
    struct nd_opt_6co co;
    memset(&co, 0, sizeof(co));
    co.nd_opt_6co_type = ND_OPT_6CO;
    co.nd_opt_6co_len = 3;
    co.nd_opt_6co_context_len = lowpanco->ContextLength;
    co.nd_opt_6co_c = lowpanco->ContextCompressionFlag;
    co.nd_opt_6co_cid = lowpanco->AdvContextID;
    co.nd_opt_6co_valid_lifetime = lowpanco->AdvLifeTime;
    co.nd_opt_6co_con_prefix = lowpanco->AdvContextPrefix;
    safe_buffer_append(sb, &co, sizeof(co));
}

The test still failed to match one single byte: the one that spans three bitfields: nd_opt_6co_res, nd_opt_6co_c, and nd_opt_6co_cid. But why? Does endianness really matter within a byte? Apparently gcc happens to group those three fields in different orders on x86_64 and powerpc!

Let's look at a smaller example:

#include <stdio.h>
#include <stdint.h>

struct s {
    uint8_t a : 3;
    uint8_t b : 1;
    uint8_t c : 4;
};

int main() {
    struct s v = { 0x00, 0x1, 0xF, };
    printf("v = %#02x\n", *(uint8_t*)&v);
    return 0;
}

Output difference:

$ x86_64-pc-linux-gnu-gcc a.c -o a && ./a
v = 0xf8
# (0xF << 5) | (0x1 << 4) | 0x00
$ powerpc-unknown-linux-gnu-gcc a.c -o a && ./a
v = 0x1f
# (0x0 << 5) | (0x1 << 4) | 0xF

The C standard does not specify the layout of bitfields, and this is a great illustration of how things break :)

An interesting observation: the bitfield order on powerpc happens to be the desired order (as the 6LoWPAN RFC defines it).

It means the radvd code indeed happened to generate a correct bitstream on big-endian platforms (as Eike predicted), but did not work on little-endian systems. Unfortunately, the golden expected[] output was generated on a little-endian system.

Thus the 3 fixes: the missing htons() calls, the byte built from compiler-dependent bitfields in the 6LoWPAN option, and the golden expected[] test outputs.

That’s it :)

Posted on November 18, 2017

November 17, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Python’s M2Crypto fails to compile (November 17, 2017, 21:29 UTC)

When updating my music server, I ran into a compilation error in Python's M2Crypto. The error message was a little bit strange and not directly related to the package itself:

fatal error: openssl/ecdsa.h: No such file or directory

Obviously, that error is generated by OpenSSL and not directly within M2Crypto. Remembering that there are some known problems with the "bindist" USE flag, I took a look at OpenSSL and OpenSSH. Indeed, "bindist" was set. Simply removing the USE flag from those two packages took care of the problem:


# grep -i 'openssh\|openssl' /etc/portage/package.use
>=dev-libs/openssl-1.0.2m -bindist
net-misc/openssh -bindist

In this case, the problem makes sense based on the error message. The error indicated that the Elliptic Curve Digital Signature Algorithm (ECDSA) header was not found. The previously-linked page about the "bindist" USE flag clearly states that having bindist set will "Disable/Restrict EC algorithms (as they seem to be patented)".

Cheers,
Nathan Zachary

November 16, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)
Some minor Security Quirks in Firefox (November 16, 2017, 16:24 UTC)

I discovered a couple of more or less minor security issues in Firefox lately. None of them is particularly scary, but they affect interesting corner cases or unusual behavior. I'm posting this mainly hoping that other people will find it inspiring to think about unusual security issues, and maybe also come up with more realistic attack scenarios for these bugs.

I'd like to point out that Mozilla hasn't fixed most of those issues, despite all of them being reported several months ago.

Bypassing XSA warning via FTP

XSA or Cross-Site Authentication is an interesting and not very well known attack. It's been discovered by Joachim Breitner in 2005.

Some web pages, mostly forums, allow users to include third-party images. This can be abused by an attacker to steal other users' credentials. An attacker first posts something with an image from a server he controls. He then switches on HTTP authentication for that image. All visitors of the page will now see a login dialog on that page. They may be tempted to type their login credentials into the HTTP authentication dialog, which seems to come from the page they trust.

The original XSA attack is, as said, quite old. As a countermeasure, Firefox implements a warning in HTTP authentication dialogs that were created by a subresource like an image. However, it only does that for HTTP, not for FTP.

So an attacker can run an FTP server and include an image from there. By then requiring an FTP login and logging all login attempts to the server he can gather credentials. The password dialog will show the host name of the attacker's FTP server, but he could choose one that looks close enough to the targeted web page to not raise suspicion.

Firefox FTP password dialog

I haven't found any popular site that allows embedding images from non-HTTP protocols. The most popular page that allows embedding external images at all is Stack Overflow, but it only allows HTTPS. Generally, embedding third-party images is less common these days; most pages keep local copies of the external images they embed.

This bug is yet unfixed.

Obviously one could fix it by showing the same warning for FTP that is shown for HTTP authentication. But I'd rather recommend completely blocking authentication dialogs on third-party content. This is also what Chrome does. Mozilla has been discussing this for several years with no result.

Firefox also has an open bug about disallowing FTP on subresources. This would obviously also fix this scenario.

Window-modal popup via FTP

In the early days of JavaScript, web pages could annoy users with popups. Browsers have since changed the behavior of JavaScript popups. They are now tab-modal, which means they're not blocking interaction with the whole browser; they're just part of one tab and will only block interaction with the web page that created them.

So it is a goal of modern browsers to not allow web pages to create window-modal alerts that block interaction with the whole browser. However, I figured out that FTP gives us a bypass of this restriction.

If Firefox receives some random garbage over an FTP connection that it cannot interpret as FTP commands it will open an alert window showing that garbage.

Window modal FTP alert

First we open up our fake "FTP-Server" that will simply send a message to all clients. We can just use netcat for this:

while true; do echo "Hello" | nc -l -p 21; done

Then we try to open a connection, e.g. by typing ftp://localhost in the address bar on the same system. Firefox will not show the alert immediately. However, if we then click on the URL bar and press enter again, it will show the alert window. I tried to replicate that behavior with JavaScript, which worked sometimes. I'm relatively sure this can be made reliable.

There are two problems here. One is that server-controlled content is shown to the user without any interpretation. This alert window seems to be intended as some kind of error message; however, it doesn't make a lot of sense like that. If anything, it should probably be prefixed by some message like "the server sent an invalid command". But ultimately, if the browser receives random garbage instead of protocol messages, it's probably not wise to display it at all. The second problem is that FTP error messages should probably be tab-modal as well.

This bug is also yet unfixed.

FTP considered dangerous

FTP is an old protocol with many problems. Some consider the fact that browsers still support it a problem. I tend to agree; ideally, FTP should simply be removed from modern browsers.

FTP in browsers is insecure by design. While TLS-enabled FTP exists, browsers have never supported it. The FTP code is probably not well audited, as it's rarely used. And the fact that another protocol exists that can be used similarly to HTTP has the potential for surprises. For example, I found it quite surprising to learn that it's possible to have unencrypted and unauthenticated FTP connections to hosts that enabled HSTS. (The lack of cookie support in FTP seems to avoid causing security issues, but it's still unexpected and feels dangerous.)

Self-XSS in bookmark manager export

The Firefox bookmark manager allows exporting bookmarks to an HTML document. Before the current Firefox 57, it was possible to inject JavaScript into this exported HTML via the tags field.

I tried to come up with a plausible scenario where this could matter; however, this turned out to be difficult. It would be problematic if there were a way for a web page to create such a bookmark. While it is possible to open a bookmark dialog with JavaScript, this doesn't allow us to prefill the tags field. Thus there is no way a web page can insert any content here.

One could come up with implausible social engineering scenarios (a web page asks the user to create a bookmark and insert some specific string into the tags field), but that seems very far-fetched. A remotely plausible scenario would be a situation where a browser is used by multiple people who are allowed to create bookmarks, and the bookmarks are regularly exported and uploaded to a web page. However, that also seems quite far-fetched.

This was fixed in the latest Firefox release as CVE-2017-7840 and rated low severity.

Crashing Firefox on Linux via notification API

The notification API allows web pages to send notification alerts that the operating system will show in small notification windows. A notification can contain a small message and an icon.

When playing with this, one of the very first things that came to my mind was to check what happens if one simply sends a very large icon. A user has to approve that a web page is allowed to use the notification API; however, if he does, the result is an immediate crash of the browser. This only "works" on Linux. The proof of concept is quite simple: we just embed a large black PNG via a data URI:

<script>
Notification.requestPermission(function(status) {
    new Notification("", {
        icon: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAE4gAABOIAQAAAAB147pmAAAL70lEQVR4Ae3BAQ0AAADCIPuntscHD" + "A".repeat(4043) + "yDjFUQABEK0vGQAAAABJRU5ErkJggg==",
    });
});
</script>


I haven't fully tracked down what's causing this, but it seems that Firefox tries to send a message to the system's notification daemon with libnotify, and if that message is too large for the message size limit of D-Bus, it will not properly handle the resulting error.

What I found quite frustrating is that, when I reported it, I learned that this was a duplicate of a bug that had already been reported more than a year ago. I feel that having such a simple browser crash bug open for such a long time is not appropriate. It is still unfixed.

November 10, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Latex 001 (November 10, 2017, 06:03 UTC)

This is mainly a memo to remember, each time, which packages are needed for Japanese.

Usually I use XeTeX with CJK to write LaTeX documents; on Gentoo this is done by adding xetex and cjk support to texlive:

app-text/texlive cjk xetex
app-text/texlive-core cjk xetex

For platex, you need to install:

dev-texlive/texlive-langjapanese
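For reference, a minimal XeLaTeX document that typesets Japanese through the xeCJK package; the font name here is an assumption, and any Japanese font installed on the system works:

\documentclass{article}
\usepackage{xeCJK}
% Assumed font: substitute any installed Japanese font.
\setCJKmainfont{IPAMincho}
\begin{document}
こんにちは、世界。 % "Hello, world."
\end{document}

Compile it with xelatex instead of pdflatex.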

Google Summer of Code day41-51 (November 10, 2017, 05:59 UTC)

Google Summer of Code day 41

What was my plan for today?

  • testing and improving elivepatch

What i did today?

elivepatch work:

  • working on the code for sending an unknown number of files with Werkzeug
  • making and testing the patch manager

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 42

What was my plan for today?

  • testing and improving elivepatch

What i did today?

elivepatch work:

  • Fixing the sending of multiple files using requests. I'm writing the script for incrementally adding patches, but I'm stuck on this requests problem.

It looks like I cannot concatenate files like this:

files = {'patch': ('01.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         ('02.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         ('03.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'config': ('config', open(temporary_config.name, 'rb'), 'multipart/form-data', {'Expires': '0'})}

or like this:

files = {'patch': ('01.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'patch': ('02.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'patch': ('03.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'config': ('config', open(temporary_config.name, 'rb'), 'multipart/form-data', {'Expires': '0'})}

The first version fails with: AttributeError: 'tuple' object has no attribute 'read'

It looks like requests cannot send data with the same key this way, but the server side in flask-restful is using parser.add_argument('patch', action='append') to concatenate more arguments together, and suggests sending the information like this:

curl http://api.example.com -d "patch=bob" -d "patch=sue" -d "patch=joe"

Unfortunately, for now I'm still trying to understand how I can do that with requests.
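For reference, requests does accept a list of tuples for files, which allows repeating the same field name; a minimal sketch of that form (the file names and URL are made up):

import requests

# A list of (field, fileinfo) tuples may repeat the 'patch' field,
# which a dict of files cannot do.
files = [
    ('patch', ('01.patch', open('01.patch', 'rb'), 'multipart/form-data')),
    ('patch', ('02.patch', open('02.patch', 'rb'), 'multipart/form-data')),
    ('patch', ('03.patch', open('03.patch', 'rb'), 'multipart/form-data')),
    ('config', ('config', open('config', 'rb'), 'multipart/form-data')),
]
response = requests.post('http://api.example.com', files=files)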

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 43

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • Changed send_file to manage more patches at a time
  • Refactored function names to reflect the incremental patches change
  • Made a function to list the patches in the eapply_user portage user patches folder, and a temporary patches folder reflecting the applied livepatches

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 44

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • renamed the command client function as an internal function
  • put together the client incremental patch sender function

what i will do next time?

  • make the server-side incremental patch part and test it

Google Summer of Code day 45

What was my plan for today?

  • testing and improving elivepatch

What i did today?

Meeting with mentor summary

elivepatch work:

  • working on the incremental patch feature design and implementation
  • putting patch files under /var/run/elivepatch
  • ordering patches by number
  • cleaning the folder when the machine is restarted
  • sending the patches to the server in order
  • cleaning the client terminal output by catching the exceptions

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 46

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • testing elivepatch with incremental patches
  • code refactoring

what i will do next time?

  • make the server-side incremental patch part and test it

Google Summer of Code day 47

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • testing elivepatch with incremental patches
  • trying to find a way to make testing faster; I'm thinking of something like unit tests and integration tests
  • merged build_livepatch with send_files, as it reduces the steps for making the livepatch and helps keep it more isolated
  • going on writing the incremental patch feature

Building with kpatch-build usually takes too much time, and this makes testing the parts where a live patch build is needed long and complicated. It would probably be better to add some unit/integration testing to keep each task/function under control without needing to build live patches every time. Any thoughts?

what i will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 48

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • added PORTAGE_CONFIGROOT to set the path from which to get the incremental patches
  • cannot download kernel sources to /usr/portage/distfiles/
  • saving the created livepatch and patch on the client, but sending only the incremental patches and patch to the elivepatch server
  • cleaned the output of the bash commands

what i will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 49

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What i did today?

1) We need a way of having old genpatches. mpagano made a script for saving all the genpatches, and they are saved here: http://dev.gentoo.org/~mpagano/genpatches/tarballs/ I think we can redirect the ebuild in our overlay to get the tarballs from there.

2) kpatch can work with initrd

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 50

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What i did today?

  • refactoring code
  • starting to write the first draft of the automatic kernel livepatching system

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 51

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What i did today?

  • checking eapply_user: eapply_user is using patch

  • writing the elivepatch wiki page: https://wiki.gentoo.org/wiki/User:Aliceinwire/elivepatch

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day31-40 (November 10, 2017, 05:57 UTC)

Google Summer of Code day 31

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Dispatcher.py:

  • fixed comments
  • added a static method to return the kernel path
  • added TODOs
  • fixed how to make the directory with the uuid

livepatch.py:

  • fixed docstring
  • removed sudo on kpatch-build
  • fixed comments
  • merged the ebuild commands into one in the livepatch build

restful.py:

  • fixed comment

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 32

What was my plan for today?

  • testing and improving elivepatch

What I did today?

client checkers:

  • used os.path.join for the ungz function
  • implemented the temporary folder using the python tempfile module
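A minimal sketch of those two pieces together (the file names are hypothetical; the real elivepatch code may differ): decompressing a gzipped kernel config into a temporary folder, with the output path built via os.path.join:

import gzip
import os
import shutil
import tempfile


def ungz(compressed_config):
    # Decompress a gzipped kernel config into a temporary folder and
    # return the path of the plain-text copy.
    tmp_dir = tempfile.mkdtemp(prefix='elivepatch-')
    out_path = os.path.join(tmp_dir, 'config')
    with gzip.open(compressed_config, 'rb') as src, open(out_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)
    return out_path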

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 33

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Worked on keeping the uncompressed configuration file around using the appropriate tempfile module.
  • Refactoring and code cleaning.
  • Check whether the ebuild is present in the overlay before trying to merge it.

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 34

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • lpatch added locally
  • fixed the ebuild directory
  • Static patch and config filenames on send. This is useful for isolating the API requests, using only the UUID as the session identifier.
  • Removed the livepatchStatus and lpatch class configurations, because we need a request to be identified only by its own UUID, to isolate the transaction.
  • Return early in case a request with the same UUID is already present.

We still have the problem of "can't find special struct alt_instr size."

I will investigate it tomorrow.

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 35

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Kpatch needs some special section data to find where to inject the live patch. kpatch-build checks that this special section data exists in the given vmlinux file. The vmlinux file needs CONFIG_DEBUG_INFO=y so that the debug symbols contain the special section data. The special section data is found like this:

# Set state if name matches
a == 0 && /DW_AT_name.* alt_instr[[:space:]]*$/ {a = 1; next}
b == 0 && /DW_AT_name.* bug_entry[[:space:]]*$/ {b = 1; next}
p == 0 && /DW_AT_name.* paravirt_patch_site[[:space:]]*$/ {p = 1; next}
e == 0 && /DW_AT_name.* exception_table_entry[[:space:]]*$/ {e = 1; next}

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 36

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fixed the live patch output name to reflect the statically set patch name
  • Removed the module install step, as it is not needed
  • The debug option now also copies build.log to the UUID directory, for investigating failed attempts
  • Refactored uuid_dir to uuid in the cases where uuid_dir did not actually contain the full path
  • Added debug_info to the configuration file if not already present; this setting is only needed for the live patch creation (not tested yet); see the sketch after this list
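
A sketch of how that debug_info check could look (a guess at the logic, not the project's actual code):

def ensure_debug_info(config_path):
    # Append CONFIG_DEBUG_INFO=y to a kernel config unless it is already set.
    with open(config_path, "r+") as config:
        if any(line.strip() == "CONFIG_DEBUG_INFO=y" for line in config):
            return
        config.write("\nCONFIG_DEBUG_INFO=y\n")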

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 37

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fixed the return message for the cve option
  • Added different configuration examples [4.9.30, 4.10.17]
  • Catch the error when gentoo-sources is not available
  • Catch missing live patch errors on the client

Tested kpatch with patches for kernels 4.9.30 and 4.10.17, and it worked without any problem. I checked with coverage to see which code is not used. I think we could remove the cve option for now, as it is not implemented yet, and use a modular implementation for it later, so the configuration could change. We need some documentation about elivepatch on the Gentoo wiki. We need some unit tests to make development smoother and to make it simpler to check the working status with Travis CI on GitHub.

I talked with kpatch creator and we got some feedback:

“this project could also be used for kpatch testing :)
imagine instead of just loading the .ko, the client was to kick off a series of tests and report back.”

“why bother a production or tiny machine when you might have a patch-building server”

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 38

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

What we will do next:

  • incremental patch tracking on the client side
  • CVE security vulnerability checker
  • dividing the repository
  • ebuild
  • documentation [optional]
  • modularity [optional]

Kpatch work:

  • started to make the incremental patch feature
  • tested kpatch for permission issues

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 39

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

elivepatch work:

  • working on the incremental patch feature's design and implementation
  • putting patch files under /var/run/elivepatch
  • ordering patches by number (see the sketch after this list)
  • cleaning the folder when the machine is restarted
  • sending the patches to the server in order
  • cleaning the client terminal output by catching the exceptions
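
A small sketch of the ordering step, assuming patch files carry a numeric prefix like 0001-fix.patch (the naming scheme is an assumption):

import os
import re

PATCH_DIR = "/var/run/elivepatch"  # path mentioned above

def numbered_patches(directory=PATCH_DIR):
    # Return patch filenames sorted by their leading number.
    def patch_number(name):
        match = re.match(r"(\d+)", name)
        return int(match.group(1)) if match else 0
    patches = [f for f in os.listdir(directory) if f.endswith(".patch")]
    return sorted(patches, key=patch_number)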

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 40

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor

summary of elivepatch work:

  • working with the incremental patch manager
  • cleaning the client terminal output by catching the exceptions

Making and testing the patch manager.

what I will do next time?

  • testing and improving elivepatch

November 09, 2017
Cigars and the Norwegian Government (November 09, 2017, 21:36 UTC)

[Updated 22nd November to add link to the response to proposed new regulations] As some of my readers know, I'm an aficionado of Cigars, to the extent I bought my own Cigar Importer and Store. The picture below is from the 20th best bar in the world in 2017, and they sell our cigars of … Continue reading "Cigars and the Norwegian Government"

Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google Summer of Code day 26-30 (November 09, 2017, 11:14 UTC)

Google Summer of Code day 26

What was my plan for today?

  • testing and improving elivepatch

What I did today?

After discussion with my mentor, I need:

  • Availability and replicability of downloading kernel sources. [needed]
  • Reproducing the same kernel sources as the client's kernel sources (USE flags for now; user-added patches in the future). [needed]
  • Support for multiple requests at the same time. [needed]
  • Modularity for adding VM or container support. (in the future)

Created an overlay for keeping old gentoo-sources ebuilds, where we will keep retrocompatibility for old kernels: https://github.com/aliceinwire/gentoo-sources_overlay

Made a function to download and fetch old kernel sources and install the sources under the designated temporary folder.

what I will do next time?

  • Go on improving elivepatch

Google Summer of Code day 27

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fixed git clone of the gentoo-sources overlay under a temporary directory
  • Fixed download directories
  • Tested elivepatch

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 28

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Tested elivepatch with multithreading
  • Removed the uname kernel version check, as the kernel version is taken directly from the kernel configuration file header (see the sketch after this list)
  • Started kpatch-build under the output folder. Because kpatch-build creates the live patch under the $PWD folder, we start it under the UUID tmp folder and fetch the live patch from that folder. This is useful for dealing with multithreading.
  • Added some helper functions for code clarity [not finished yet]
  • Refactored for code clarity [not finished yet]
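
A sketch of reading the version from that header; it assumes the usual first lines of a kernel .config, e.g. "# Linux/x86 4.9.30 Kernel Configuration":

import re

def kernel_version_from_config(config_path):
    # Scan the .config header comments for the version string.
    with open(config_path) as config:
        for line in config:
            match = re.search(r"Linux/\S+\s+(\S+)\s+Kernel Configuration", line)
            if match:
                return match.group(1)
    return None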

To make elivepatch multithreaded, we can simply change flask's app.run() to app.run(threaded=True).
This makes flask spawn a thread for each request (using the SocketServer.ThreadingMixIn class and the base WSGI server) [1]. Even though this works pretty nicely, the suggested way of threading is probably gunicorn or uWSGI [2],
maybe using something like flask-gunicorn: https://github.com/doobeh/flask-gunicorn
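
A minimal sketch of that threaded switch (the app and route here are illustrative, not elivepatch's actual server):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "elivepatch server"

if __name__ == "__main__":
    # threaded=True makes flask's built-in server handle each request
    # in its own thread instead of serializing them.
    app.run(threaded=True)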

what I will do next time?

  • testing and improving elivepatch

[1] https://docs.python.org/2/library/socketserver.html#SocketServer.ThreadingMixIn [2] http://flask.pocoo.org/docs/0.12/deploying/

Google Summer of Code day 29

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Added some helper functions for code clarity [not finished yet]
  • Refactored for code clarity [not finished yet]
  • Sent a pull request to kpatch adding a dynamic output folder option https://github.com/dynup/kpatch/pull/718
  • Sent a pull request to kpatch with a small style fix https://github.com/dynup/kpatch/pull/719

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 30

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • fixing the UUID design

Moved the UUID generation to the client and read RFC 4122,

which states: "A UUID is 128 bits long, and requires no central registration process."

  • made a regex function to check that the format of a UUID is actually correct (a sketch follows)
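
A guess at such a check for the canonical 8-4-4-4-12 text form from RFC 4122 (not the project's exact regex):

import re

UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
    re.IGNORECASE,
)

def is_valid_uuid(value):
    # True if value looks like a canonical UUID string.
    return bool(UUID_RE.match(value))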

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 21-25 (November 09, 2017, 10:42 UTC)

Google Summer of Code day 21

What was my plan for today?

  • Testing elivepatch
  • Getting the kernel version dynamically
  • Updating kpatch-build to work better with Gentoo

What I did today?

Fixing the problems pointed out by my mentor.

  • Fixed the software copyright headers
  • Fixed a popen call
  • Used a regex for checking the file extension
  • Added a debug option for the server
  • Renamed variables that shadowed reserved words
  • Used os.path.join for the link path
  • Dynamically pass the kernel version

what I will do next time?

  • Testing elivepatch
  • Work on making elivepatch support multiple calls

Google Summer of Code day 22

What was my plan for today?

  • Testing elivepatch
  • work on making elivepatch support multiple calls

What I did today?

  • working on sending the kernel version
  • working on dividing users by UUID (Universally unique identifier)

On the first connection, because the UUID has not been generated yet, it will be created and returned by the elivepatch server. The client will store the UUID in shelve so that it can authenticate itself the next time it queries the server. By using Python's uuid module I'm assigning a different UUID to each user, and by using UUIDs we can start supporting multiple calls.
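
A sketch of that client-side storage using only the stdlib; the shelve path and key are made up, and here the UUID is generated locally as a fallback where the post's flow would take it from the server's reply:

import shelve
import uuid

def get_or_create_client_id(db_path="elivepatch_client"):
    # Return the stored client UUID, creating and saving one if absent.
    with shelve.open(db_path) as db:
        if "uuid" not in db:
            db["uuid"] = str(uuid.uuid4())
        return db["uuid"]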

what I will do next time?

  • Testing elivepatch
  • Cleaning code

Google Summer of Code day 23

What was my plan for today?

  • Testing elivepatch
  • Cleaning code

What I did today?

  • commented some code parts
  • First draft of UUID working

The elivepatch server needs to generate a UUID for each client connection and assign that UUID to the client; this is needed for managing multiple requests. The live patch and config files will be generated in a different folder for each client request and returned using the UUID. I made a working draft.
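
A sketch of the per-request layout this describes; the base path and helper name are illustrative, not the project's actual code:

import os
import tempfile

BASE_DIR = os.path.join(tempfile.gettempdir(), "elivepatch")

def request_workdir(client_uuid):
    # Create (if needed) and return the working folder for one request,
    # keyed by the client's UUID.
    workdir = os.path.join(BASE_DIR, client_uuid)
    os.makedirs(workdir, exist_ok=True)
    return workdir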

what I will do next time?

  • Cleaning code
  • Testing it
  • Go on with programming and start implementing the CVE feature

Google Summer of Code day 24

What was my plan for today?

  • Cleaning code
  • testing it
  • Go on with programming and starting implementing the CVE

What I did today?

  • Working on getting the Linux kernel version from the configuration file
  • Working on parsing the CVE repository

I implemented reading the Linux kernel version from the configuration file, and I'm working on parsing the CVE repository. It would also be a nice idea to work on the kpatch-build script to make it Gentoo compatible. But I also ran into one problem: we need a way to download old gentoo-sources ebuilds for building an old kernel when the server doesn't have the needed version of the kernel sources. This depends on how much backwards compatibility we want to provide. And with old gentoo-sources we cannot guarantee that the build works, because old gentoo-sources used different versions of the Gentoo repository and eclasses. So it is a problem to discuss in the next days.

what I will do next time?

  • work on the CVE repository
  • testing it

Google Summer of Code day 25

What was my plan for today?

  • Cleaning code
  • testing and improving elivepatch

What I did today?

As discussed with my mentor, I worked on unifying the patch and configuration file RESTful API calls. I also made a function to standardize the subprocess commands.
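
A sketch of what such a standardized subprocess wrapper could look like (names and behavior are assumptions, not the project's actual helper):

import subprocess

def run_command(args, cwd=None):
    # Run a command, return its stdout, and raise on a non-zero exit.
    result = subprocess.run(
        args, cwd=cwd, check=True,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    return result.stdout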

what I will do next time?

  • testing it and improving elivepatch

November 02, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux on DELL XPS 13 9365 and 9360 (November 02, 2017, 17:33 UTC)

Since I received some positive feedback about my previous DELL XPS 9350 post, I am writing this summary about my recent experience in installing Gentoo Linux on a DELL XPS 13 9365.

The goals of these installation notes:

  • UEFI boot using Grub
  • Provide you with a complete and working kernel configuration
  • Encrypted disk root partition using LUKS
  • Be able to type your LUKS passphrase to decrypt your partition using your local keyboard layout

Grub & UEFI & Luks installation

This installation is a fully UEFI one using grub and booting an encrypted root partition (and home partition). I was happy to see that since my previous post, everything got smoother. So even if you can have this installation working using MBR, I don’t really see a point in avoiding UEFI now.

BIOS configuration

Just like with its ancestor, you should:

  • Turn off Secure Boot
  • Set SATA Operation to AHCI

Live CD

Once again, go for the latest SystemRescueCD (it’s Gentoo based, you won’t be lost) as it’s quite more up to date and supports booting on UEFI. Make it a Live USB for example using unetbootin and the ISO on a vfat formatted USB stick.

NVME SSD disk partitioning

We’ll obviously use GPT with UEFI. I found that using gdisk was the easiest. The disk itself is found at /dev/nvme0n1. Here is the partition table I used:

  • 10MB UEFI BIOS partition (type EF02)
  • 500MB UEFI boot partition (type EF00)
  • 2GB swap partition
  • 475GB Linux root partition

The corresponding gdisk commands:

# gdisk /dev/nvme0n1

Command: o ↵
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y ↵

Command: n ↵
Partition Number: 1 ↵
First sector: ↵
Last sector: +10M ↵
Hex Code: EF02 ↵

Command: n ↵
Partition Number: 2 ↵
First sector: ↵
Last sector: +500M ↵
Hex Code: EF00 ↵

Command: n ↵
Partition Number: 3 ↵
First sector: ↵
Last sector: +2G ↵
Hex Code: 8200 ↵

Command: n ↵
Partition Number: 4 ↵
First sector: ↵
Last sector: ↵ (for rest of disk)
Hex Code: ↵

Command: p ↵
Disk /dev/nvme0n1: 1000215216 sectors, 476.9 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A73970B7-FF37-4BA7-92BE-2EADE6DDB66E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048           22527   10.0 MiB    EF02  BIOS boot partition
   2           22528         1046527   500.0 MiB   EF00  EFI System
   3         1046528         5240831   2.0 GiB     8200  Linux swap
   4         5240832      1000215182   474.4 GiB   8300  Linux filesystem

Command: w ↵
Do you want to proceed? (Y/N): Y ↵

No WiFi on Live CD ? no panic

Once again on my (old?) SystemRescueCD stick, the integrated Intel 8265/8275 wifi card is not detected.

So I used my old trick with my Android phone connected to my local WiFi as a USB modem which was detected directly by the live CD.

  • get your Android phone connected on your local WiFi (unless you want to use your cellular data)
  • plug in your phone using USB to your XPS
  • on your phone, go to Settings / More / Tethering & portable hotspot
  • enable USB tethering

Running ip addr will show the network card enp0s20f0u2 (for me at least); then if no IP address is set on the card, just ask for one:

# dhcpcd enp0s20f0u2

You have now access to the internet.

Proceed with installation

The only thing to prepare is to format the UEFI boot partition as FAT32. Do not worry about the UEFI BIOS partition (/dev/nvme0n1p1), grub will take care of it later.

# mkfs.vfat -F 32 /dev/nvme0n1p2

Do not forget to use cryptsetup to encrypt your /dev/nvme0n1p4 partition! In the rest of the article, I’ll be using its device mapper representation.

# cryptsetup luksFormat -s 512 /dev/nvme0n1p4
# cryptsetup luksOpen /dev/nvme0n1p4 root
# mkfs.ext4 /dev/mapper/root

Then follow the Gentoo handbook as usual for the stage3 related next steps. Make sure you mount and bind the following to your /mnt/gentoo LiveCD installation folder (the /sys binding is important for grub UEFI):

# mount -t proc none /mnt/gentoo/proc
# mount -o bind /dev /mnt/gentoo/dev
# mount -o bind /sys /mnt/gentoo/sys

make.conf settings

I strongly recommend using at least the following on your /etc/portage/make.conf :

GRUB_PLATFORM="efi-64"
INPUT_DEVICES="evdev synaptics"
VIDEO_CARDS="intel i965"

USE="bindist cryptsetup"

The GRUB_PLATFORM one is important for later grub setup and the cryptsetup USE flag will help you along the way.

fstab for SSD

Don’t forget to make sure the noatime option is used on your fstab for / and /home.

/dev/nvme0n1p2    /boot    vfat    noauto,noatime    1 2
/dev/nvme0n1p3    none     swap    sw                0 0
/dev/mapper/root  /        ext4    noatime   0 1

Kernel configuration and compilation

I suggest you use a recent >=sys-kernel/gentoo-sources-4.13.0 along with genkernel.

  • You can download my kernel configuration file (iptables, docker, luks & stuff included)
  • Put the kernel configuration file into the /etc/kernels/ directory (with a trailing s)
  • Rename the configuration file with the exact version of your kernel

Then you’ll need to configure genkernel to add luks support, firmware files support and keymap support if your keyboard layout is not QWERTY.

In your /etc/genkernel.conf, change the following options:

LUKS="yes"
FIRMWARE="yes"
KEYMAP="1"

Then run genkernel all to build your kernel and luks+firmware+keymap aware initramfs.

Grub UEFI bootloader with LUKS and custom keymap support

Now it’s time for the grub magic to happen so you can boot your wonderful Gentoo installation using UEFI and type your password using your favourite keyboard layout.

  • make sure your boot vfat partition is mounted on /boot
  • edit your /etc/default/grub configuration file with the following:
GRUB_CMDLINE_LINUX="crypt_root=/dev/nvme0n1p4 keymap=fr"

This will allow your initramfs to know it has to read the encrypted root partition from the given partition and to prompt for its password using the given keyboard layout (French here).

Now let’s install the grub UEFI boot files and setup the UEFI BIOS partition.

# grub-install --efi-directory=/boot --target=x86_64-efi /dev/nvme0n1
Installing for x86_64-efi platform.
Installation finished. No error reported

It should report no error, then we can generate the grub boot config:

# grub-mkconfig -o /boot/grub/grub.cfg

You’re all set!

You will get a gentoo UEFI boot option; you can disable the Microsoft Windows one from your BIOS to get straight to the point.

Hope this helped!

November 01, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)
Expat 2.2.5 released (November 01, 2017, 18:45 UTC)

Expat 2.2.5 has recently been released. It fixes miscellaneous bugs. For more details, please check the changelog.

If you maintain Expat packaging or a bundled version of Expat somewhere, please update to 2.2.5. Thanks!

Sebastian Pipping

October 24, 2017

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# readelf -a -w -g -t -s -r -u -A -c -I -W $FILE
==92332==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x613000000f94 at pc 0x0000005b4c54 bp 0x7ffda4ab1e10 sp 0x7ffda4ab1e08
READ of size 1 at 0x613000000f94 thread T0
    #0 0x5b4c53 in get_line_filename_and_dirname /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4681:10
    #1 0x5b4c53 in display_debug_macro /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4839
    #2 0x536709 in display_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13371:16
    #3 0x536709 in process_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13457
    #4 0x536709 in process_object /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18148
    #5 0x51099b in process_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18540:13
    #6 0x51099b in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18612
    #7 0x7f4febd95680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #8 0x41a128 in getenv (/usr/x86_64-pc-linux-gnu/binutils-bin/git/readelf+0x41a128)

0x613000000f94 is located 0 bytes to the right of 340-byte region [0x613000000e40,0x613000000f94)
allocated by thread T0 here:
    #0 0x4d8318 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67
    #1 0x511975 in get_data /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:392:9
    #2 0x50eccf in load_specific_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13172:38
    #3 0x50e623 in load_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13300:10
    #4 0x5b1969 in display_debug_macro /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4717:3
    #5 0x536709 in display_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13371:16
    #6 0x536709 in process_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13457
    #7 0x536709 in process_object /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18148
    #8 0x51099b in process_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18540:13
    #9 0x51099b in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18612
    #10 0x7f4febd95680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4681:10 in get_line_filename_and_dirname
Shadow bytes around the buggy address:
  0x0c267fff81a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c267fff81b0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x0c267fff81c0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c267fff81d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff81e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c267fff81f0: 00 00[04]fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8200: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8210: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8220: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8230: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8240: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==92332==ABORTING

Affected version:
2.28 – 2.28.1 – 2.29.51.20170919 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=5c1c468d0eddd0fda1ec8c5f33888657f94e3266

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00382-binutils-heapoverflow-get_line_filename_and_dirname

Timeline:
2017-09-19: bug discovered and reported to upstream
2017-09-26: upstream released a patch
2017-10-24: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in get_line_filename_and_dirname (dwarf.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==23816==ERROR: AddressSanitizer: SEGV on unknown address 0x4700004008d0 (pc 0x0000005427b6 bp 0x7ffd49033690 sp 0x7ffd49033680 T0)                                                                               
==23816==The signal is caused by a READ memory access.                                                                                                                                                            
    #0 0x5427b5 in _bfd_safe_read_leb128 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:1019:14                                                                                              
    #1 0x6a9b25 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2918:19                                                                                        
    #2 0x69a3ff in scan_unit_for_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3168:10                                                                                              
    #3 0x6a2de6 in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3660:9                                                                                    
    #4 0x6a2de6 in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3686                                                                                                   
    #5 0x6a0369 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4798:11                                                                                      
    #6 0x5f332e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10                                                                                                    
    #7 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9                                                                                                       
    #8 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7                                                                                                      
    #9 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200                                                                                                     
    #10 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7                                                                                                      
    #11 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12                                                                                                             
    #12 0x7f839bb03680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                   
    #13 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)                                                                                                                                 
                                                                                                                                                                                                                  
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:1019:14 in _bfd_safe_read_leb128
==23816==ABORTING

Affected version:
2.29.51.20170925 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=1b86808a86077722ee4f42ff97f836b12420bb2a

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15938

Reproducer:
https://github.com/asarubbo/poc/blob/master/00381-binutils-invalidread-find_abstract_instance_name

Timeline:
2017-09-26: bug discovered and reported to upstream
2017-09-26: upstream released a patch
2017-10-24: blog post about the issue
2017-10-27: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: invalid memory read in find_abstract_instance_name (dwarf2.c)

Description:
binutils is a set of tools necessary to build programs.

The commit fix for this issue says:

The PR22200 fuzzer testcase found one way to put NULLs into .debug_line file tables. PR22205 finds another.
So MITRE considers this an incomplete fix.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==19042==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000006a76a6 bp 0x7ffde0afde30 sp 0x7ffde0afde00 T0)
==19042==The signal is caused by a READ memory access.
==19042==Hint: address points to the zero page.
    #0 0x6a76a5 in concat_filename /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8
    #1 0x696ff3 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2265:44
    #2 0x6a2d36 in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3651:26
    #3 0x6a2d36 in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3686
    #4 0x6a0369 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4798:11
    #5 0x5f332e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #6 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #7 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f6c6d793680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #12 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8 in concat_filename
==19042==ABORTING

Affected version:
2.29.51.20170925 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=a54018b72d75abf2e74bf36016702da06399c1d9

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15939

Reproducer:
https://github.com/asarubbo/poc/blob/master/00380-binutils-NULLptr-concat_filename

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-26: upstream released a patch
2017-10-24: blog post about the issue
2017-10-27: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: NULL pointer dereference in concat_filename (dwarf2.c) (INCOMPLETE FIX FOR CVE-2017-15023)

October 16, 2017
Greg KH a.k.a. gregkh (homepage, bugs)
Linux Kernel Community Enforcement Statement FAQ (October 16, 2017, 09:05 UTC)

Based on the recent Linux Kernel Community Enforcement Statement and the article describing the background and what it means, here are some Questions/Answers to help clear things up. These are based on questions that came up when the statement was discussed among the initial round of over 200 different kernel developers.

Q: Is this changing the license of the kernel?

A: No.

Q: Seriously? It really looks like a change to the license.

A: No, the license of the kernel is still GPLv2, as before. The kernel developers are providing certain additional promises that they encourage users and adopters to rely on. And by having a specific acking process it is clear that those who ack are making commitments personally (and perhaps, if authorized, on behalf of the companies that employ them). There is nothing that says those commitments are somehow binding on anyone else. This is exactly what we have done in the past when some but not all kernel developers signed off on the driver statement.

Q: Ok, but why have this “additional permissions” document?

A: In order to help address problems caused by current and potential future copyright “trolls” aka monetizers.

Q: Ok, but how will this help address the “troll” problem?

A: “Copyright trolls” use the GPL-2.0’s immediate termination and the threat of an immediate injunction to turn an alleged compliance concern into a contract claim that gives the troll an automatic claim for money damages. The article by Heather Meeker describes this quite well, please refer to that for more details. If even a short delay is inserted for coming into compliance, that delay disrupts this expedited legal process.

By simply saying, “We think you should have 30 days to come into compliance”, we undermine that “immediacy” which supports the request to the court for an immediate injunction. The threat of an immediate injunction was used to get the companies to sign contracts. Then the troll goes back after the same company for another known violation shortly after and claims they’re owed the financial penalty for breaking the contract. Signing contracts to pay damages to financially enrich one individual is completely at odds with our community’s enforcement goals.

We are showing that the community is not out for financial gain when it comes to license issues – though we do care about the company coming into compliance.  All we want is the modifications to our code to be released back to the public, and for the developers who created that code to become part of our community so that we can continue to create the best software that works well for everyone.

This is all still entirely focused on bringing the users into compliance. The 30 days can be used productively to determine exactly what is wrong, and how to resolve it.

Q: Ok, but why are we referencing GPL-3.0?

A: By using the terms from the GPLv3 for this, we use a very well-vetted and understood procedure for granting the opportunity to fix the failure and come into compliance. We benefit from many months of work to reach agreement on a termination provision that worked in legal systems all around the world and was entirely consistent with Free Software principles.

Q: But what is the point of the “non-defensive assertion of rights” disclaimer?

A: If a copyright holder is attacked, we don’t want or need to require that copyright holder to give the party suing them an opportunity to cure. The “non-defensive assertion of rights” is just a way to leave everything unchanged for a copyright holder that gets sued.  This is no different a position than what they had before this statement.

Q: So you are ok with using Linux as a defensive copyright method?

A: There is a current copyright troll problem that is undermining confidence in our community – where a “bad actor” is attacking companies in a way to achieve personal gain. We are addressing that issue. No one has asked us to make changes to address other litigation.

Q: Ok, this document sounds like it was written by a bunch of big companies, who is behind the drafting of it and how did it all happen?

A: Grant Likely, the chairman at the time of the Linux Foundation’s Technical Advisory Board (TAB), wrote the first draft of this document when the first copyright troll issue happened a few years ago. He did this as numerous companies and developers approached the TAB asking that the Linux kernel community do something about this new attack on our community. He showed the document to a lot of kernel developers and a few company representatives in order to get feedback on how it should be worded. After the troll seemed to go away, this work got put on the back-burner. When the copyright troll showed back up, along with a few other “copycat” like individuals, the work on the document was started back up by Chris Mason, the current chairman of the TAB. He worked with the TAB members, other kernel developers, lawyers who have been trying to defend these claims in Germany, and the TAB members’ Linux Foundation’s lawyers, in order to rework the document so that it would actually achieve the intended benefits and be useful in stopping these new attacks. The document was then reviewed and revised with input from Linus Torvalds and finally a document that the TAB agreed would be sufficient was finished. That document was then sent to over 200 of the most active kernel developers for the past year by Greg Kroah-Hartman to see if they, or their company, wished to support the document. That produced the initial “signatures” on the document, and the acks of the patch that added it to the Linux kernel source tree.

Q: How do I add my name to the document?

A: If you are a developer of the Linux kernel, simply send Greg a patch adding your name to the proper location in the document (sorting the names by last name), and he will be glad to accept it.

Q: How can my company show its support of this document?

A: If you are a developer working for a company that wishes to show that they also agree with this document, have the developer put the company name in ‘(’ ‘)’ after the developer’s name. This shows that both the developer, and the company behind the developer are in agreement with this statement.

Q: How can a company or individual that is not part of the Linux kernel community show its support of the document?

A: Become part of our community! Send us patches, surely there is something that you want to see changed in the kernel. If not, wonderful, post something on your company web site, or personal blog in support of this statement, we don’t mind that at all.

Q: I’ve been approached by a copyright troll for Netfilter. What should I do?

A: Please see the Netfilter FAQ here for how to handle this

Q: I have another question, how do I ask it?

A: Email Greg or the TAB, and they will be glad to help answer them.

Linux Kernel Community Enforcement Statement (October 16, 2017, 09:00 UTC)

By Greg Kroah-Hartman, Chris Mason, Rik van Riel, Shuah Khan, and Grant Likely

The Linux kernel ecosystem of developers, companies and users has been wildly successful by any measure over the last couple decades. Even today, 26 years after the initial creation of the Linux kernel, the kernel developer community continues to grow, with more than 500 different companies and over 4,000 different developers getting changes merged into the tree during the past year. As Greg always says every year, the kernel continues to change faster this year than the last, this year we were running around 8.5 changes an hour, with 10,000 lines of code added, 2,000 modified, and 2,500 lines removed every hour of every day.

The stunning growth and widespread adoption of Linux, however, also requires ever evolving methods of achieving compliance with the terms of our community’s chosen license, the GPL-2.0. At this point, there is no lack of clarity on the base compliance expectations of our community. Our goals as an ecosystem are to make sure new participants are made aware of those expectations and the materials available to assist them, and to help them grow into our community.  Some of us spend a lot of time traveling to different companies all around the world doing this, and lots of other people and groups have been working tirelessly to create practical guides for everyone to learn how to use Linux in a way that is compliant with the license. Some of these activities include:

Unfortunately the same processes that we use to assure fulfillment of license obligations and availability of source code can also be used unjustly in trolling activities to extract personal monetary rewards. In particular, issues have arisen as a developer from the Netfilter community, Patrick McHardy, has sought to enforce his copyright claims in secret and for large sums of money by threatening or engaging in litigation. Some of his compliance claims are issues that should and could easily be resolved. However, he has also made claims based on ambiguities in the GPL-2.0 that no one in our community has ever considered part of compliance.  

Examples of these claims have been distributing over-the-air firmware, requiring a cell phone maker to deliver a paper copy of source code offer letter; claiming the source code server must be setup with a download speed as fast as the binary server based on the “equivalent access” language of Section 3; requiring the GPL-2.0 to be delivered in a local language; and many others.

How he goes about this activity was recently documented very well by Heather Meeker.

Numerous active contributors to the kernel community have tried to reach out to Patrick to have a discussion about his activities, to no response. Further, the Netfilter community suspended Patrick from contributing for violations of their principles of enforcement. The Netfilter community also published their own FAQ on this matter.

While the kernel community has always supported enforcement efforts to bring companies into compliance, we have never even considered enforcement for the purpose of extracting monetary gain.  It is not possible to know an exact figure due to the secrecy of Patrick’s actions, but we are aware of activity that has resulted in payments of at least a few million Euros.  We are also aware that these actions, which have continued for at least four years, have threatened the confidence in our ecosystem.

Because of this, and to help clarify what the majority of Linux kernel community members feel is the correct way to enforce our license, the Technical Advisory Board of the Linux Foundation has worked together with lawyers in our community, individual developers, and many companies that participate in the development of, and rely on Linux, to draft a Kernel Enforcement Statement to help address both this specific issue we are facing today, and to help prevent any future issues like this from happening again.

A key goal of all enforcement of the GPL-2.0 license has and continues to be bringing companies into compliance with the terms of the license. The Kernel Enforcement Statement is designed to do just that.  It adopts the same termination provisions we are all familiar with from GPL-3.0 as an Additional Permission giving companies confidence that they will have time to come into compliance if a failure is identified. Their ability to rely on this Additional Permission will hopefully re-establish user confidence and help direct enforcement activity back to the original purpose we have all sought over the years – actual compliance.  

Kernel developers in our ecosystem may put their own acknowledgement to the Statement by sending a patch to Greg adding their name to the Statement, like any other kernel patch submission, and it will be gladly merged. Those authorized to ‘ack’ on behalf of their company may add their company name in (parenthesis) after their name as well.

Note, a number of questions did come up when this was discussed with the kernel developer community. Please see Greg’s FAQ post answering the most common ones if you have further questions about this topic.

October 13, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux listed on RethinkDB’s website (October 13, 2017, 08:22 UTC)

The rethinkdb website has (finally) been updated and Gentoo Linux is now listed on the installation page!

Meanwhile, we have bumped the ebuild to version 2.3.6 with fixes for building on gcc-6 thanks to Peter Levine who kindly proposed a nice PR on github.

October 03, 2017

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==26890==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6130000006d3 at pc 0x000000472115 bp 0x7ffdb7d8a0d0 sp 0x7ffdb7d89880                                                                         
READ of size 298 at 0x6130000006d3 thread T0                                                                                                                                                                      
    #0 0x472114 in __interceptor_strlen /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:302                      
    #1 0x68fea5 in parse_die /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf1.c:254:12                                                                                                           
    #2 0x68ddda in _bfd_dwarf1_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf1.c:521:13                                                                                       
    #3 0x5f2f00 in _bfd_elf_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8659:10                                                                                            
    #4 0x517755 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1004:12                                                                                                      
    #5 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7                                                                                                      
    #6 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200                                                                                                     
    #7 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7                                                                                                       
    #8 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12                                                                                                              
    #9 0x7f3dea34e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                    
    #10 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)                                                                                                                                 
                                                                                                                                                                                                                  
0x6130000006d3 is located 0 bytes to the right of 339-byte region [0x613000000580,0x6130000006d3)                                                                                                                 
allocated by thread T0 here:                                                                                                                                                                                      
    #0 0x4d8828 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67                                                                      
    #1 0x53f138 in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9
    #2 0x799bc8 in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21
    #3 0x7b8797 in bfd_simple_get_relocated_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/simple.c:193:12
    #4 0x68e3b1 in _bfd_dwarf1_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf1.c:490:4
    #5 0x5f2f00 in _bfd_elf_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8659:10
    #6 0x517755 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1004:12
    #7 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f3dea34e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:302 in __interceptor_strlen
Shadow bytes around the buggy address:
  0x0c267fff8080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff8090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff80a0: 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff80c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c267fff80d0: 00 00 00 00 00 00 00 00 00 00[03]fa fa fa fa fa
  0x0c267fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==26890==ABORTING

Affected version:
2.29.51.20170924 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=1da5c9a485f3dcac4c45e96ef4b7dae5948314b5

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15020

Reproducer:
https://github.com/asarubbo/poc/blob/master/00376-binutils-heapoverflow-parse_die

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-25: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in parse_die (dwarf1.c)

Description:
binutils is a set of tools necessary to build programs.

The stack trace of this issue appears to be a NULL pointer access. However, the upstream maintainer changed the summary of the bug report to “DW_AT_name with out of bounds reference”. The commit also references “DW_AT_name with out of bounds reference”.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==8739==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000053bf16 bp 0x7ffcab59ee60 sp 0x7ffcab59ee20 T0)
==8739==The signal is caused by a READ memory access.
==8739==Hint: address points to the zero page.
    #0 0x53bf15 in bfd_hash_hash /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/hash.c:441:15
    #1 0x53bf15 in bfd_hash_lookup /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/hash.c:467
    #2 0x6a2049 in insert_info_hash_table /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:487:37
    #3 0x6a2049 in comp_unit_hash_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3776
    #4 0x6a2049 in stash_maybe_update_info_hash_tables /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4120
    #5 0x69cbbc in stash_maybe_enable_info_hash_tables /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4214:3
    #6 0x69cbbc in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4613
    #7 0x5f330e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #8 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #9 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #10 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #11 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #12 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #13 0x7fd148c7b680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #14 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/hash.c:441:15 in bfd_hash_hash
==8739==ABORTING

Affected version:
2.29.51.20170924 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=11855d8a1f11b102a702ab76e95b22082cccf2f8

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15022

Reproducer:
https://github.com/asarubbo/poc/blob/master/00375-binutils-NULLptr-bfd_hash_hash

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-25: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: NULL pointer dereference in bfd_hash_hash (hash.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==3765==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000006a7376 bp 0x7ffd5f9a3d50 sp 0x7ffd5f9a3d20 T0)
==3765==The signal is caused by a READ memory access.
==3765==Hint: address points to the zero page.
    #0 0x6a7375 in concat_filename /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8
    #1 0x696e83 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2258:44
    #2 0x6a2ab8 in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3642:26
    #3 0x6a2ab8 in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3677
    #4 0x6a0104 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4789:11
    #5 0x5f330e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #6 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #7 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f0f4a74b680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #12 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8 in concat_filename
==3765==ABORTING

Affected version:
2.29.51.20170924 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=c361faae8d964db951b7100cada4dcdc983df1bf

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15023

Reproducer:
https://github.com/asarubbo/poc/blob/master/00374-binutils-NULLptr-concat_filename

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-25: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: NULL pointer dereference in concat_filename (dwarf2.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==11994==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000029e at pc 0x7f800af7095d bp 0x7ffeab0e5c90 sp 0x7ffeab0e5c88            
READ of size 1 at 0x60200000029e thread T0                                                                                                           
    #0 0x7f800af7095c in bfd_getl32 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:559:24                                       
    #1 0x7f800af91323 in bfd_get_debug_link_info_1 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1206:12                       
    #2 0x7f800af91b8a in find_separate_debug_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1423:10                        
    #3 0x7f800af91a0f in bfd_follow_gnu_debuglink /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1582:10                        
    #4 0x7f800b110614 in _bfd_dwarf2_slurp_debug_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4345:19                    
    #5 0x7f800b11bc67 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4538:9                    
    #6 0x7f800b05e38b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10                                 
    #7 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9                                          
    #8 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7                                         
    #9 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200                                        
    #10 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7                                         
    #11 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12                                                
    #12 0x7f8009fa3680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289                      
    #13 0x41ac18 in _init (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41ac18)                                                                    

0x60200000029e is located 0 bytes to the right of 14-byte region [0x602000000290,0x60200000029e)
allocated by thread T0 here:
    #0 0x4d8e08 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67
    #1 0x7f800af6f3fc in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9
    #2 0x7f800af64b9f in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21
    #3 0x7f800af91230 in bfd_get_debug_link_info_1 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1191:8
    #4 0x7f800af91b8a in find_separate_debug_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1423:10
    #5 0x7f800af91a0f in bfd_follow_gnu_debuglink /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1582:10
    #6 0x7f800b110614 in _bfd_dwarf2_slurp_debug_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4345:19
    #7 0x7f800b11bc67 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4538:9
    #8 0x7f800b05e38b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #9 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #10 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #11 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #12 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #13 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #14 0x7f8009fa3680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:559:24 in bfd_getl32
Shadow bytes around the buggy address:
  0x0c047fff8000: fa fa 00 01 fa fa 00 06 fa fa fd fa fa fa fd fa
  0x0c047fff8010: fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa
  0x0c047fff8020: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
  0x0c047fff8030: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
  0x0c047fff8040: fa fa fd fa fa fa fd fd fa fa fd fa fa fa 00 fa
=>0x0c047fff8050: fa fa 00[06]fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==11994==ABORTING

Affected version:
2.29.51.20170924 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=52b36c51e5bf6d7600fdc6ba115b170b0e78e31d

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15021

Reproducer:
https://github.com/asarubbo/poc/blob/master/00373-binutils-heapoverflow-bfd_getl32

Timeline:
2017-09-24: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in bfd_getl32 (opncls.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

 # nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==11125==ERROR: AddressSanitizer: FPE on unknown address 0x7f5e01fd42e5 (pc 0x7f5e01fd42e5 bp 0x7ffdaa5de290 sp 0x7ffdaa5de0e0 T0)
    #0 0x7f5e01fd42e4 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c
    #1 0x7f5e01fe192b in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3608:26
    #2 0x7f5e01fe192b in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3643
    #3 0x7f5e01fde94f in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4755:11
    #4 0x7f5e01f1c20b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8694:10
    #5 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #6 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #7 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #8 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #9 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #10 0x7f5e00e61680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #11 0x41ac18 in _init (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41ac18)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c in decode_line_info
==11125==ABORTING

Affected version:
2.29.51.20170921 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=d8010d3e75ec7194a4703774090b27486b742d48

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15025

Reproducer:
https://github.com/asarubbo/poc/blob/master/00372-binutils-FPE-decode_line_info

Timeline:
2017-09-22: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: divide-by-zero in decode_line_info (dwarf2.c)

Description:
binutils is a set of tools necessary to build programs.

The relevant ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==22616==ERROR: AddressSanitizer: stack-overflow on address 0x7ffc2948efe8 (pc 0x0000004248eb bp 0x7ffc2948f8e0 sp 0x7ffc2948efe0 T0)
    #0 0x4248ea in __asan::Allocator::Allocate(unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType, bool) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_allocator.cc:381
    #1 0x41f8f3 in __asan::asan_malloc(unsigned long, __sanitizer::BufferedStackTrace*) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_allocator.cc:814
    #2 0x4d8de4 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:68
    #3 0x7ff17b5b237c in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9                                                                                                     
    #4 0x7ff17b5a7b2f in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21                                                                               
    #5 0x7ff17b5e16d3 in bfd_simple_get_relocated_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/simple.c:193:12                                                                     
    #6 0x7ff17b75626e in read_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:556:8                                                                                                   
    #7 0x7ff17b772053 in read_indirect_string /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:730:9                                                                                           
    #8 0x7ff17b772053 in read_attribute_value /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1189                                                                                            
    #9 0x7ff17b76ebf4 in read_attribute /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1306:14                                                                                               
    #10 0x7ff17b76ebf4 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2913                                                                                    
    #11 0x7ff17b76ec98 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2930:12                                                                                 
    #12 0x7ff17b76ec98 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2930:12                                                                                 
    [..cut..]
    #252 0x7ff17b76ec98 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2930:12

Affected version:
2.29.51.20170921 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=52a93b95ec0771c97e26f0bb28630a271a667bd2

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15024

Reproducer:
https://github.com/asarubbo/poc/blob/master/00371-binutils-infiniteloop-find_abstract_instance_name

Timeline:
2017-09-22: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: infinite loop in find_abstract_instance_name (dwarf2.c)

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Taking over a postal address, An Post edition (October 03, 2017, 11:03 UTC)

As I announced a few months ago, I’m moving to London. One of the tasks before the move is setting up postal address redirection, so that the services unable to mail me across the Irish Sea can still reach me. Luckily I know for a fact that An Post (the Irish postal service) has a redirection service, if not a cheap one.

A couple of weeks ago, I went on to sign up for the services, and I found that I had two choices: I could go to the post office (which is inside the EuroSpar next door), show a photo ID and a proof of address, and pay cash or with a debit card1; or I could fill in the form online, pay with a credit card, and then post out a physical signed piece of paper. I chose the latter.

There are many laughable things that I could complain about, in the process of setting up the redirection, but I want to focus on what I think is the big, and most important problem. After you choose the addresses (original and new destination), it will ask you where you want your confirmation PIN sent.

There is a reason why they do that. I set up the redirect well before I moved, and in particular I chose to redirect mail from my apartment to my local office — this way I can either batch together the mail, or simply ask for an inter-office forwarding. This meant I had access to both the original and the new address at the same time — but for many people, particularly those moving out of the country, by the time they know where to forward the mail, they might only have access to the new address.

The issue is that if you decide to get the PIN at the new address, the only notification the old address receives is a single letter confirming the activation of the redirection. This is likely meant so that you can call An Post and have them cancel the redirection if it was set up against your will.

While this stops a possible long-term takeover of a mail address, it still allows a wide window of opportunity for a takeover. Also, it has one significant drawback: the letter does not tell you where the mail will be redirected!

Let’s say you want to take over someone’s address (let’s look later what for). First you need to know their address; this is the simplest part of course. Now you can fill in the request on An Post’s website for the redirection — the original address is not given any indication that a request was filled – and get the PIN at the new address. Once the PIN is received, there is some time to enable the redirection.

Until activation is completed, and the redirection time is selected, no communication is given to the original address.

If your target happens to be travelling or otherwise unable to get to their mail for a few weeks, then you have an opportunity. You can take over the address, get some documents at the given address, and get your hands on them. Of course the target will become suspicious when coming back, finding a note about redirection and no mail. But finding a way to recover the mail without being tied to an identity is left as an exercise to the reader.

So what would you accomplish, besides annoying your target and possibly getting some of their unsolicited mail? Well, there is a significant number of interesting targets in the postal mail you receive in Ireland.

For instance, take credit card statements. Tesco Bank does not allow you to receive them electronically, and Ulster Bank will send you the paper copy even though you opt in to all the possible electronic communications. And a credit card statement in Ireland includes a lot more information than in other countries, including just enough to take over the credit card. Tesco Bank for instance will authenticate you with the 16-digit PAN (on the statement), your full address (on the statement), the credit limit (you guessed it, on the statement), and your date of birth (okay, this one is not on the statement, but you can probably find my date of birth pretty easily).

And even if you don’t want to take over the full credit card, having the PAN is extremely useful in and by itself, to take over other accounts. And since you have the statement, it wouldn’t be difficult to figure out what the card is used for — take over an Amazon account, you can take over a lot more things.

But there are more concrete problems too — for instance I do receive a significant amount of pseudo-cash2 in the form of Tesco vouchers — having physical control of the vouchers effectively means having the cash in your hand. Or say you want to get a frequent guest or frequent flyer card, because a card is often just enough to get the benefits, and have access to the information on the account. Or just get enough of a proof of address to register on any other service that will require one.

Because let’s remember: an authentication system is just as weak as its weakest link. So all those systems requiring a proof of address? You can skip over all of them by just having one recent enough proof of address, by hijacking someone’s physical mail. And that’s just a matter of paying for it.


  1. An Post is well known for only accepting VISA Debit cards, and refuses both MasterCard Debit and VISA Credit cards. Funnily enough, they issue MasterCard cards, but that’s a story for another time. [return]
  2. I should at some point write a post about pseudo-cash and the value of a euro when it’s not a coin. [return]

October 01, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
How blogging changed in the past ten years (October 01, 2017, 16:04 UTC)

One of the problems that keeps poking back at me every time I look for alternative software for this blog is that it somehow became not your average blog, particularly not in 2017.

The first issue is that there is a lot of history. While the current “incarnation” of the blog, with the Hugo install, is fairly recent, I have been porting over a long history of my pseudo-writing, merging back into this one big collection the blog posts coming from my original Gentoo Developer blog, as well as the few posts I wrote on the KDE Developers blog and a very minimal amount of content from my (mostly Italian) blog when I was in high school.

Why did I do it that way? Well the main thing is that I don't want to lose the memories. As some of you might know already, I faced my mortality before, and I came to realize that this blog is probably the only thing of substance that I had a hand in that will outlive me. And so I don't want to just let migration, service turndowns, and other similar changes take away what I did. This is also why I did publish to this blog the articles I wrote for other websites, namely NewsForge and Linux.com (back when they were part of Geeknet).

Some of the recovery work actually required effort. As I said above there's a minimal amount of content that comes from my high school days blog. And it's in Italian, which does not make it particularly interesting or useful. I had deleted that blog altogether years and years ago, so I had to use the Wayback Machine to recover at least some of the posts. I will be going through all my old backups in the hope of finding that one last backup that I remember making before tearing the thing down.

Why did I tear it down in the first place? It's clearly a teenager's blog and I am seriously embarrassed by the way I thought and wrote. It was 13-14 years ago, and I have admitted last year that I can tell so many times I've been wrong. But this is not the change I want to talk about.

The change I want to talk about is the second issue with finding good software to run my blog: blogging is not what it used to be ten years ago. Or fifteen years ago. It's not just that a lot of money got involved in the meantime, so that now there is a significant number of "corporate blogs" that end up being either product announcements in a different form, or another outlet for not-quite-magazine content. I know of at least a couple of Italian newspapers that provide "blogs" for their writers, which look almost exactly like the paper's website, but do not have to be reviewed by the editorial board.

In addition to this, a lot of people's blogs stopped providing as many details of their personal life as they used to. Likely, this is related to the fact that we now know just how nasty people on the Internet can be (read: just as nasty as people off the Internet), and a lot of the people who used to write lightheartedly don't feel as safe, and rightly so. But there is probably another reason: "Social Media".

The advent of Twitter and Facebook made it so that there is less need to post short personal entries, too. And Facebook in particular appears to have swallowed most of the "cutesy memes" such as quizzes and lists of things people have or have not done. I know there are still a few people who insist on not using these big-name social networks, and still post for their friends and family on blogs, but I have a feeling they are quite the minority. And I can tell you for sure that since I signed up for Facebook, a lot of my smaller "so here's that" posts went away.

Distribution chart of blog post sizes over time

This is a bit of a rough plot of blog post sizes. In particular I have used the raw file size of the markdown sources used by Hugo, in bytes, which makes it not perfect for Unicode symbols, and it includes the "front matter", which means that in particular all the non-Hugo-native posts have their title effectively doubled by the slug. But it shows trends particularly well.

You can see from that graph that some time around 2009 I almost entirely stopped writing short blog posts. That is around the time Facebook took off in Italy, and a lot of my interaction with friends started happening there. If you're curious about the visible lack of posts around the middle of 2007, that was the pancreatitis that had me disappear for nearly two months.

With this reduction in scope of what people actually write on blogs, I also have a feeling that lots of people were left without anything to say. A number of blogs I still follow (via NewsBlur since Google Reader was shut down), post once or twice a year. Planets are still a thing, and I still subscribe to a number of them, but I realize I don’t even recognize half the names nowadays. Lots of the “old guard” stopped blogging almost entirely, possibly because of a lack of engagement, or simply because, like me, many found a full time job (or a full time family), that takes most of their time.

You can definitely see from the plot that even my own blogging has significantly slowed down over the past few years. Part of it was the tooling giving up on me a few times, but it also involves the lack of energy to write all the time as I used to. Plus there is another problem: I now feel I need to be more accurate in what I’m saying and in the words I’m using. This is in part because I grew up, and know how much words can hurt people even when meant the right way, but also because it turns out when you put yourself in certain positions it’s too easy to attack you (been there, done that).

A number of people argue that it was the demise of Google Reader1 that caused blogs to die, but as I said above, I think it's just the evolution of the concept veering towards other systems, that turned out to be more easily reachable by users.

So are blogs dead? I don't think so. But they are getting harder to discover, because people use other platforms and it gets difficult to follow all of them. Hacker News and Reddit are becoming many geeks' default way to discover content, and that has the unfortunate side effect of not having as much of the conversation happen in shared media. I am indeed bothered by those people who prefer discussing the merit of my posts on those external websites rather than actually engaging in the comments, if nothing else because I do not track those platforms, and so the feeling I get is of people talking behind my back — I would prefer if people actually told me when they share my posts on those platforms; for Reddit I can at least use IFTTT to self-stalk the blog, but that's a different problem.

Will we still have blogs in 10 years? Probably yes, but most likely they will not look like the ones we're used to. The same way as nowadays there still are personal homepages, but they clearly don't look like Geocities, and there are social media pages that do not look like MySpace.


  1. Usual disclaimer: I do work for Google at the time of writing this, but these are all personal opinions that have no involvement from the company. For reference, I signed the contract before the Google Reader shutdown announcement, but started after it. I was also sad, but I found NewsBlur a better replacement anyway. [return]

September 27, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

This is a workaround for a FreeType/fontconfig problem, but may be applicable in other cases as well. For Gentoo users, the related bug is 631502.

Recently, after updating Mozilla Firefox to version 52 or later (55.0.2, in my case), and Mozilla Thunderbird to version 52 or later (52.3.0, in my case), I found that fonts were rendering horribly under Linux. It looked essentially like there was no anti-aliasing or hinting at all.

Come to find out, this was due to a change in the content rendering engine, which is briefly mentioned in the release notes for Firefox 52 (but it also applies to Thunderbird). Basically, in Linux, the default engine changed from cairo to Google’s Skia.

Ugly fonts in Firefox and Thunderbird under Linux - skia and cairo

For each application, the easiest method for getting the fonts to render nicely again is to make two changes directly in the configuration editor. To do so in Firefox, simply go to the address bar and type about:config. Within Thunderbird, it can be launched by going to Menu > Preferences > Advanced > Config Editor. Once there, the two keys that need to change are:

gfx.canvas.azure.backends
gfx.content.azure.backends

They likely have values of “skia” or a comma-separated list with “skia” being the first value. On my Linux hosts, I changed the value from skia back to cairo, restarted the applications, and all was again right in the world (or at least in the Mozilla font world 😛 ).
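
For those who prefer configuration files over clicking through the editor, the same change can be made persistent with a user.js file in the profile directory; the snippet below assumes you want cairo as the only backend, matching the change described above:

user_pref("gfx.canvas.azure.backends", "cairo");
user_pref("gfx.content.azure.backends", "cairo");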

Hope that helps.

Cheers,
Zach

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is the story of how I ended up calling my bank at 11pm on a Sunday night to ask them to cancel my credit card. But it started with a completely different problem: I thought I had found a bug in some PDF library.

I asked Hanno and Ange since they both have lots more experience with PDF as a format than me (I have nearly zero), as I expected this to be complete garbage either coming from random parts of the file or memory within the process that was generating or reading it, and thought it would be completely inconsequential. As you probably have guessed by the spoiler in both the title of the post and the first paragraph, it was not the case. Instead that string is a representation of my credit card number.

After a few hours, having worked on other tasks, and having just gone back and forth with various PDFs, including finding a possibly misconfigured AGPL library in my bank's backend (worthy of another blog post), I realized that Okular does not actually show a title for this PDF, which suggested a bug in Dolphin (the Plasma file manager). In particular Poppler's pdfinfo also didn't show any title at all, which suggested the problem was in a different part of the code. Since the problem was happening with my credit card statements, and the credit card statements include the full 16-digit PAN, I didn't want to just file a bug attaching a sample, so instead I started asking around for help to figure out which part of the code was involved.

Albert Astals Cid sent me the right direction by telling me the low-level implementation was coming from KFileMetadata, and that quickly pointed me at this interesting piece of heuristics which is designed to guess the title of a document by looking at the first page. The code is quite a bit convoluted, so I couldn’t at first just exclude uninitialized memory access, but I couldn’t figure out where it could be coming from, so I decided to copy the code into a single executable to play around with it. The good news was that it would give me the exact same answer, so it was not uninitialized memory. Instead, the parser was mis-reading something in the file, which by being stable meant it wasn’t likely a security issue, just sub-optimal code.

As there is no current, updated tool for PDF that behaves like mkvinfo, that is, one that prints an element-by-element description of the content of a PDF file, I decided to just play with the code to figure out how it decided what to use as the title. Printing out each of the possible titles being evaluated showed it was considering first my address, then part of the summary information, then this strange string. What was going on there?

The code is a bit difficult to follow, particularly for me at first since I had no idea how PDF works to begin with. But the summary of it is that it goes through the textboxes (I knew already that PDF text is laid out in boxes) of the first page, joining together the text if the box has markers to follow up. Each of these entries is stored into a map of text heights, together with a “watermark” of the biggest text size encountered during this loop. If, when looking at a textbox, the height is lower than the previous maximum height, it gets discarded. At the end, the first biggest textbox content is reported as the title.
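
If I read the code right, the heuristic boils down to something like the following; this is a toy re-implementation in plain C for illustration, not the actual KFileMetadata code, and the sample page content is made up:

#include <stdio.h>

struct textbox {
    const char *text;   /* the joined-up text of the box */
    double height;      /* text height of the box */
};

/* Walk the boxes of the first page; a box no taller than the tallest
 * text seen so far is discarded, so the first of the tallest boxes wins. */
static const char *guess_title(const struct textbox *boxes, size_t n)
{
    const char *title = NULL;
    double max_height = 0.0;

    for (size_t i = 0; i < n; i++) {
        if (boxes[i].height <= max_height)
            continue;
        max_height = boxes[i].height;
        title = boxes[i].text;
    }
    return title;
}

int main(void)
{
    /* A statement-like first page: the "barcode" box renders its payload
     * as text in a very large barcode font, so it wins the contest. */
    const struct textbox page[] = {
        { "123 Some Street, Dublin 2", 9.0 },
        { "Statement of Account",     12.0 },
        { "1234567890123456",         28.0 },  /* fake PAN, barcode font */
    };
    printf("guessed title: %s\n",
           guess_title(page, sizeof page / sizeof page[0]));
    return 0;
}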

Once I disabled the height check and always reported all the considered title textboxes, I noticed something interesting: the string that kept being reported was found together with a number of textboxes that are drawn on top of the bank giro1 credit system. The cheque includes a very big barcode… and that’s where I started sweating a bit.

The reason for the sweat is that by then I had already guessed I had made a huge mistake sharing the string that Dolphin was showing me. The payment reference for a credit card is universally the full 16-digit number (PAN). Indeed the full number is printed on the cheque, also as the "An Post Ref" (An Post being the Irish postal system), and the account information (10 digits, excluding the 6-digit IIN) is printed on the bottom of the same. All of this is why I didn't want to share the sample file, and why I always destroy the statements that arrive, in paper form, from the banks. At this point, the likelihood of the barcode containing the same information was seriously high.

My usual Barcode Scanner for Android didn’t manage to understand the barcode though, which made it awkward. Instead I decided to confirm I was actually looking at the content of the barcode in an encoded form with a very advanced PDF inspection tool: strings $file | grep Font. This did bring up a reference to /BaseFont /Code128ARedA. And that was the confirmation I needed. Indeed a quick search for that name brings you to a public domain font that implements Code 128 barcodes as a TrueType font. This is not uncommon, particularly as it’s the same method used by most label printers, including the Dymo I used to use for labelling computers.

At that point a quick comparison of the barcode I had in front of me with one generated through an online generator (but only for the IIN because I don’t want to leak it all), confirmed I was looking at my credit card number, and that my tweet just leaked it — in a bit of a strange encoding that may take some work to decode, but still leaked it. I called Ulster Bank and got the card cancelled and replaced.

What lessons can I learn from this experience? First of all, to consider credit card statements even more of a security risk than I ever imagined. It also gave me a practical instance of what Brian Krebs has been advocating for years regarding barcodes on boarding passes and similar documents. In particular it looks like both Ulster Bank and Tesco Bank use the same software to generate the credit card statements (which is easy to tell is not the same system that generates the normal bank statements), developed by Fiserv (their name is in the Author field of the PDF), and they all rely on using the normal full card number for payment.

This is something I don’t really understand. In Italy, you only use the 16-digits number to pay the bank one-off by wire, and instead the statements never had more than the last five digits of the card. Except for the Italian American Express — but that does not surprise me too much as they manage it from London as well.

I’m now looking to see how I can improve on the guessing of the title for the PDFs in the KFileMetadata library — although I’m warming up to the idea of just sending a patch that delete that part of the code altogether, and if the file has no title, no title is displayed. The simplest solutions are, usually, the better.


  1. The Wikipedia page appears to talk only of the UK system. Ireland, as usual, appears to have kept their own version of the same system, and all the credit card statements, and most bills, will have a similar pre-printed “credit cheque” at the bottom. Even when they are direct-debited. [return]

September 26, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux Userspace 2.7 (September 26, 2017, 12:50 UTC)

A few days ago, Jason "perfinion" Zaman stabilized the 2.7 SELinux userspace on Gentoo. This release has quite a few new features, which I'll cover in later posts, but for distribution packagers the main change is that the userspace now has many more components to package. The project has split up the policycoreutils package in separate packages so that deployments can be made more specific.

Let's take a look at all the various userspace packages again, learn what their purpose is, so that you can decide if they're needed or not on a system. Also, when I cover the contents of a package, be aware that it is based on the deployment on my system, which might or might not be a complete installation (as with Gentoo, different USE flags can trigger different package deployments).

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==3235==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x613000000512 at pc 0x7f7c93ae3c88 bp 0x7ffe38d7a970 sp 0x7ffe38d7a968
READ of size 1 at 0x613000000512 thread T0
    #0 0x7f7c93ae3c87 in read_1_byte /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:616:10
    #1 0x7f7c93ae3c87 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2311
    #2 0x7f7c93aee92b in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3608:26
    #3 0x7f7c93aee92b in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3643
    #4 0x7f7c93aeb94f in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4755:11
    #5 0x7f7c93a2920b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8694:10
    #6 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #7 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f7c9296e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #12 0x41ac18 in _init (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41ac18)

0x613000000512 is located 0 bytes to the right of 338-byte region [0x6130000003c0,0x613000000512)
allocated by thread T0 here:
    #0 0x4d8e08 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67
    #1 0x7f7c9393a37c in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9
    #2 0x7f7c9392fb2f in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21
    #3 0x7f7c939696d3 in bfd_simple_get_relocated_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/simple.c:193:12
    #4 0x7f7c93ade26e in read_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:556:8
    #5 0x7f7c93adef3c in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2047:9
    #6 0x7f7c93aee92b in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3608:26
    #7 0x7f7c93aee92b in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3643
    #8 0x7f7c93aeb94f in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4755:11
    #9 0x7f7c93a2920b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8694:10
    #10 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #11 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #12 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #13 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #14 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #15 0x7f7c9296e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:616:10 in read_1_byte
Shadow bytes around the buggy address:
  0x0c267fff8050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff8060: 00 00 00 00 00 00 00 00 00 00 00 04 fa fa fa fa
  0x0c267fff8070: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c267fff8080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff8090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c267fff80a0: 00 00[02]fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==3235==ABORTING

Affected version:
2.29.51.20170921 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=515f23e63c0074ab531bc954f84ca40c6281a724

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-14939

Reproducer:
https://github.com/asarubbo/poc/blob/master/00370-binutils-heapoverflow-read_1_byte

Timeline:
2017-09-21: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-09-26: blog post about the issue
2017-09-29: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in read_1_byte (dwarf2.c)

September 24, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
(Audio)book review: We Are Legion (We Are Bob). (September 24, 2017, 11:04 UTC)

I have not posted a book review in almost a year, and I have not even written one for Goodreads, which I probably should do as well. I feel kind of awful for it because I do have a long list of good titles I appreciated in the meantime. So let me spend a few words on this one.

We Are Legion (We Are Bob) tickled me in the list of Audible books for a while because the name sounded so ludicrous I was expecting something almost along the lines of The Hitchhiker's Guide to the Galaxy. It was not that level of humour, but the book didn't really disappoint either.

The book's first scene is set in the present day, with the protagonist going to a cryogenic facility… and you can tell from the cover that's just a setup, of course. I found it funny from the first scenes that the author is clearly talking of something he knows directly, so I wasn't too surprised when I found out that he's a computer programmer. I'm not sure what it is with people in my line of work deciding to write books, but the results are quite often greatly enjoyable, even if it takes a while to get into them. On this note, Tobias Klausmann of Gentoo fame wrote a two-part series1, which I definitely recommend.

Once you get to the main stage of the book, it starts off in the direction you expect, with spaceships and planets as the cover lets you imagine. Some of the reviews I read before buying the book found it lightweight and brainless, but I don't see myself agreeing. While taking it with a lot of spirit and humour, and a metric ton of pop-culture references2, the topics that are brought up include self-determination, the concept of soul as seen from an atheist point of view, global politics as seen from lightyears away3, and the vast multitudes of "oneselves".

Spoilers in this paragraph, yes definitely spoilers, and a bit of text so you may not read them out of line of sight. Go back to the following paragraph if you don't want any. Indeed, it's very hard to tell, and a question that the book spends quite a bit of time pondering over without an answer, whether the character we see in the first scene is actually the protagonist of the book. Because what we have later is a computer "replicant" of the memories and consciousness of him… and a multitude of copies of that, each acting more or less differently from the original, leaving open the question whether the copies are losing something in the process, or whether it is the knowledge of not being the "original" that makes them change. I found this maybe even more profound than the author intended.

Spoilers aside, I found the book enjoyable. It’s not an all-out bright and shiny future, but it’s also not the kind of grim and dark dystopia that appears to be a dime a dozen nowadays. The one thing that still bothers me a little bit, and that probably is because I would have fallen into the same trap, is that the vast majority of the book focuses on technical problems and solutions, though to be fair it pulls it off (in my opinion) quite healthily, rather than by hiding all the human factors away into “someone else’s problem” territory. It reminded me of an essay I had to write in middle school about the “school of the future”, and I ended up not spending a single word on people, even after the teacher pointed out I should have done so and got me to rewrite it. I’m glad there are people (who are not me) studying humanities.

I found it funny that the Wikipedia page about the book insisted on pointing out that reviewers noted the lack of female characters. That's true, there are a handful of throwaway women throughout the book, but no major character. I don't know if there was any way around it given the plot as it stands now though, so I wouldn't read too much into it, as the book itself feels a lot like a trip into one's own essence, and I'm not sure I'd expect an author to be able to analyse this way anyone but themselves. I have not read/listened to the other books in the series (though I did add them to my list now), so maybe that changes with the change of focus, not sure.

As for the audiobook itself, which I got through Audible where it was at “special price” $1.99, I just loved the production. Ray Porter does a fantastic job, and since the book is all written in the first person (from somewhat different points of view), his voicework to make you know which point of view is speaking is extremely helpful not to get lost.

All in all, I’ve really enjoyed the book, and look forward to compare with the rest of the series. If you’re looking for something that distracts you from all the dread that is happening right now in the world, and can give you a message of “If we get together, we can do it!”, then this is a worthy book.


  1. I hadn’t realized book two was out until I looked Tobias up on Amazon. I’ll have stern words with him next time I see him for not warning me! [return]
  2. This happens most of the time with geeks writing books, although not all the time thankfully. On one side it does build a nice sense of camaraderie with the protagonists, because they feel like "one of us", but on the other hand sometimes it feels like too much. Unless it's part of the story, like here or in Magic 2.0. [return]
  3. Pun totally intended. [return]

September 20, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Free Idea: structured access logs for Apache HTTPD (September 20, 2017, 14:04 UTC)

This post is part of a series of free ideas that I’m posting on my blog in the hope that someone with more time can implement. It’s effectively a very sketched proposal that comes with no design attached, but if you have time you would like to spend learning something new, but no idea what to do, it may be a good fit for you.

I have been commenting on Twitter a bit about the lack of decent tooling to deal with Apache HTTPD's Combined Logging Format (inherited from NCSA). For those who do not know about it, this is the format used by standard access_log files, which record, for each request, information such as the source IP, the time, the requested path, the status code and the User-Agent used.

These logs are useful for debugging but are also consumed by tools such as AWStats to produce useful statistics about the request patterns of a website. I used these extensively when writing my ModSecurity rulesets, and I still keep an eye on them, for instance to report wasteful feed readers.

The files are simple text files, and that makes it easy to act on them: you can use tail and grep, and logrotate needs no special code besides moving the file and reloading Apache to have it re-open the paths. On the other hand, this simplicity makes it hard to query for particular entries in fields, such as getting the list of User-Agent strings present in a log. Some of the suggestions I got over Twitter to solve this were to use awk, but as it happens, these logs are not actually parseable with straightforward field separation.
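
To see why, here is a typical combined-format line (a synthetic example): the bracketed timestamp and the quoted request, referrer and User-Agent fields all contain spaces, so naive whitespace splitting shifts the columns depending on the content:

203.0.113.7 - - [20/Sep/2017:14:04:12 +0000] "GET /index.html HTTP/1.1" 200 5124 "https://example.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"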

Having failed to find a good set of tools to handle these formats directly, I have been complaining that we should probably start moving away from simple text files towards more structured log formats. Indeed, I know that there used to be at least some support for logging directly to MySQL and other relational databases, and that there is more complicated machinery, often used by companies and startups, that processes these access logs into analysis software and so on. But all of these tend to be high overhead, much more than what I or someone else with a small personal blog would care about implementing.

Instead I think it’s time to start using structured file logs. A few people including thresh from VideoLAN suggested using JSON to write the log files. This is not a terrible idea, as the format is at least well understood and easy to interface with most other software, but honestly I would prefer something with an actual structure, a schema that can be followed. Of course I’m not meaning XML, and I would rather suggest having a standardized schema for proto3. Part of that I guess is because I’m used to use this at work, but also because I like the idea of being able to just define my schema and have it generate the code to parse the messages.

Unfortunately there is currently no support or library to access a sequence of protocol buffer messages. Using a single message with repeated sub-messages would work, but it is not append-friendly, so there is no way to just keep writing to the file, or to truncate it and resume writing, which are properties needed for a proper structured log format to actually fit in the space previously occupied by text formats. This is something I don't usually have to deal with at work, but I would assume that a simple LV (Length-Value) or LVC (Length-Value-Checksum) encoding would be okay to solve this problem.
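
To illustrate the idea, here is a minimal sketch of LVC framing in C, assuming a 4-byte length, an opaque serialized message as the value, and zlib's CRC32 as the checksum; the record layout and host byte order are arbitrary choices of mine, not an established format:

#include <stdint.h>
#include <stdio.h>
#include <zlib.h>   /* crc32(); link with -lz */

/* Append one record: length, payload, checksum (host byte order for brevity). */
static int write_record(FILE *out, const unsigned char *payload, uint32_t len)
{
    uint32_t crc = (uint32_t)crc32(0L, payload, len);
    if (fwrite(&len, sizeof len, 1, out) != 1) return -1;
    if (fwrite(payload, 1, len, out) != len) return -1;
    if (fwrite(&crc, sizeof crc, 1, out) != 1) return -1;
    return 0;
}

/* Read the next record; a reader can stop cleanly at a partially
 * written trailing record, which is what makes the format append-friendly. */
static int read_record(FILE *in, unsigned char *buf, uint32_t bufsize, uint32_t *len)
{
    uint32_t crc;
    if (fread(len, sizeof *len, 1, in) != 1) return -1;   /* clean EOF */
    if (*len > bufsize) return -1;                        /* corrupt length */
    if (fread(buf, 1, *len, in) != *len) return -1;       /* truncated record */
    if (fread(&crc, sizeof crc, 1, in) != 1) return -1;   /* truncated record */
    return crc == (uint32_t)crc32(0L, buf, *len) ? 0 : -1;
}

int main(void)
{
    unsigned char buf[64];
    uint32_t len;
    FILE *f = tmpfile();
    if (!f)
        return 1;
    write_record(f, (const unsigned char *)"GET / HTTP/1.1 200", 18);
    rewind(f);
    if (read_record(f, buf, sizeof buf, &len) == 0)
        printf("read back one %u-byte record\n", (unsigned)len);
    fclose(f);
    return 0;
}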

But what about other properties of the current format? Well, the obvious answer is that, assuming your structured log contains at least as much information as the current log (but possibly more), you can always have tools that convert on the fly to the old format. This would for instance allow for a special tail-like command and a grep-like command that provide compatibility with the way the files are currently looked at manually by your friendly sysadmin.

Having more structured information would also allow easier, or deeper, analysis of the logs. For instance you could log the full set of headers (like ModSecurity does) instead of just the referrer and User-Agent, and allow for customizing the output on the conversion side rather than losing the details when writing.

Of course this is just one possible way to solve this problem, and just because I would prefer working with technologies that I’m already friendly with it does not mean I wouldn’t take another format that is similarly low-dependency and easy to deal with. I’m just thinking that the change-averse solution of not changing anything and keeping logs in text format may be counterproductive in this situation.

September 17, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Anyone working on motherboard RGB controllers? (September 17, 2017, 11:04 UTC)

I was contacted by email last week by a Linux user, probably after noticing my recent patch for the gpio_it87 driver in the kernel. They were hoping my driver could be extended to the IT7236 chips that are used in a number of gaming motherboards for controlling RGB LEDs.

Having left the case modding world after my first and only ThermalTake chassis – my mother gave me hell for the fan noise, mostly due to the plexiglass window on the side of the case – I still don't have any context whatsoever on what the current state of these boards is, whether someone has written generic tools to set the LEDs, or even UIs for them. But it was an interesting back and forth of looking for leads into figuring out what is needed.

The first problem, as most of you who already know a bit about electrical engineering and electronics will have noticed, is that the IT7236 chip is clearly not in the same series as the IT87xx chips that my driver supports. And since they are not in the same series, they are unlikely to share the same functionality.

The IT87xx series chips are Super I/O controllers, which means they implement functionality such as floppy-disk controllers, serial and parallel ports and similar interfaces, generally via the LPC bus. You usually know these chip names because they need to be supported by the kernel to show up in sensors output. In addition to these standard devices, many controllers include at least a set of general purpose I/O (GPIO) lines. On most consumer motherboards these are not exposed in any way, but boards designed for industrial applications, or customized boards, tend to expose those lines easily.

Indeed, I wrote the gpio_it87 driver (well, actually adapted and extended it from a previous driver), because the board I was working on in Los Angeles had one of those chips, and we were interested in having access to the GPIO lines to drive some extra LEDs (and possibly in future versions more external interfaces, although I don’t know if anything was made of those). At the time, I did not manage to get the driver merged; a couple of years back, LaCie manufactured a NAS using a compatible chip, and two of their engineers got my original driver (further extended) merged into the Linux kernel. Since then I only submitted one other patch to add another ID for a compatible chip, because someone managed to send me a datasheet, and I could match it to the one I originally used to implement the driver as having the same behaviour.
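
For anyone wanting to play with lines exposed this way, on kernels of this era the easiest route is the legacy sysfs GPIO interface. A minimal sketch follows; the line number 10 is a made-up example, as the actual numbers depend on the base assigned to the gpiochip on your system:

# echo 10 > /sys/class/gpio/export
# echo out > /sys/class/gpio/gpio10/direction
# echo 1 > /sys/class/gpio/gpio10/value

This is, for instance, how you would drive an extra LED wired to one of the lines.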

Back to the original topic, the IT7236 chip is clearly not a Super I/O controller. It's also not an Environmental Control (EC) chip, as I know that series is actually IT85xx, which is what my old laptop had. Somewhat luckily though, a “Preliminary Specifications” datasheet for that exact chip is available online from a company that appears to be a general distributor of electronic components. I'm not sure if that was intentional or not, but having the datasheet is always handy of course.

According to these specifications, the IT7236xFN chips are “Touch ASIC Cap Button Controllers”. And indeed, ITE lists them as such. Comparing this with a different model in the same series suggests that LED driving was probably not their original target, but they came to be useful for it. These chips also include an MCU based on an 8051 core, similarly to their EC solution — this makes them, and in particular the datasheet I found earlier, a bit more interesting to me. Unfortunately the datasheet is clearly the abridged version, and does not include a programming interface description.

Up to this point this tells us exactly one thing: my driver is completely useless for this chip, as it specifically implements Super I/O bus access, and it's unlikely to be extensible to this series of chips. So a new driver is needed, and some reverse engineering is likely to be required. The user who wrote me also gave me two other ITE chip names found on the board they have: IT87920 and IT8686 (which appears to be a PWM fan controller — I couldn't find it on the ITE website at all). Since the it87 (hwmon) driver is still developed out-of-kernel on GitHub, I checked and found an issue that appears to describe a common situation for gaming motherboards: the fans are not controlled with the usual Super I/O chip, but with a separate (more accurate?) one, which suggests that the LEDs are indeed controlled by yet another separate chip. That makes sense. The user ran strings on the UEFI/BIOS image and did indeed find modules named after IT8790 and IT7236 (and IT8728 for whatever reason), confirming this.

None of this brings us any closer to supporting it though, so let's take a look at the datasheet, where we can see that the device has an I²C bus, instead of the LPC (or ISA) bus used by the Super I/O and fan controller chips. That meant looking at i2cdev and lsi2c. Unfortunately the output can only show that there are things connected to the bus, not what they are.
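
For reference, i2cdetect from i2c-tools is the usual way to poke at such a bus (the bus number 0 below is just a placeholder, and the i2c-dev module needs to be loaded). As said, it only shows which addresses respond, not what sits behind them:

$ modprobe i2c-dev   # expose the buses as /dev/i2c-*
$ i2cdetect -l       # list the available I²C buses
$ i2cdetect -y 0     # scan bus 0 for responding addresses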

This leaves us pretty much dry. Particularly me since I don’t have hardware access. So my suggestion has been to consider looking into the Windows driver and software (that I’m sure the motherboard manufacturer provides), and possibly figure out if they can run in a virtualized environment (qemu?) where I²C traffic can be inspected. But there may be simpler, more useful or more advanced tools to do most of this already, since I have not spent any time on this particular topic before. So if you know of any of them, feel free to leave a comment on the blog, and I’ll make sure to forward them to the concerned user (since I have not asked them if I can publish their name I’m not going to out them — they can, if they want, leave a comment with their name to be reached directly!).

Sergei Trofimovich a.k.a. slyfox (homepage, bugs)
signed char or unsigned char? (September 17, 2017, 00:00 UTC)

trofi's blog: signed char or unsigned char?

signed char or unsigned char?

Yesterday I debugged an interesting bug: the sqlite test suite was hanging on a csv parsing test on powerpc32 and powerpc64 platforms. But other platforms were fine! ia64 was ok, hppa was ok, sparc was ok. Thus it's not (just) an endianness issue or a stack growth direction issue. What could it be?

It took me a while to debug the issue but it boiled down to an infinite loop of trying to find EOF when reading a file. Let’s look at the simplified version of buggy code in sqlite:

#include <stdio.h> /* #define EOF (-1) */
int main() {
    int n = 0;
    FILE * f = fopen ("/dev/null", "rb"); // supposed to have 0 bytes
    for (;;) {
        char c = fgetc (f); // truncate 'int' to char
        if (c == EOF) break;
        ++n;
    }
    return n;
}

The code is supposed to reach end of file (or maybe an ‘\xFF’ byte) and finish. Normally that's exactly what happens:

$ x86_64-pc-linux-gnu-gcc a.c -o a && ./a && echo $?
0

But not on powerpc64:

$ powerpc64-unknown-linux-gnu-gcc a.c -o a && ./a && echo $?
<hung>

The bug here is simple: c == EOF promotes both operands, char c and -1, to int. Thus for the EOF case the condition looks like this:

((int)(char)(-1) == -1)

So why does it never fire on powerpc64? Because it's an unsigned char platform! As a result the same comparison turns into two different conditions:

((int)0xff == -1) // powerpc64
((int)(-1) == -1) // x86_64

Once we know the problem we can force x86_64 to hang as well by using the -funsigned-char gcc option:

$ x86_64-pc-linux-gnu-gcc a.c -o a -funsigned-char && ./a && echo $?
<hung>
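
For completeness, the portable fix (a minimal sketch, not necessarily the patch sqlite applied) is to keep fgetc's result in an int, so EOF survives regardless of char signedness:

#include <stdio.h>
int main() {
    int n = 0;
    FILE * f = fopen ("/dev/null", "rb");
    for (;;) {
        int c = fgetc (f); // keep the full 'int': EOF (-1) never collides with a byte value
        if (c == EOF) break;
        ++n;
    }
    return n;
}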

I had not encountered bugs related to char signedness for quite a while.

Which other platforms default to unsigned char? I tried a simple hack (I ran into it today due to the C++11 change that forbids narrowing conversions in initializers):

$ cat a.cc
char c[] = { -1 };

$ for cxx in /usr/bin/*-g++; do echo -n "$cxx "; $cxx -c a.cc 2>/dev/null && echo SIGNED || echo UNSIGNED; done | sort -k2

/usr/bin/afl-g++ SIGNED
/usr/bin/alpha-unknown-linux-gnu-g++ SIGNED
/usr/bin/hppa-unknown-linux-gnu-g++ SIGNED
/usr/bin/hppa2.0-unknown-linux-gnu-g++ SIGNED
/usr/bin/i686-w64-mingw32-g++ SIGNED
/usr/bin/ia64-unknown-linux-gnu-g++ SIGNED
/usr/bin/m68k-unknown-linux-gnu-g++ SIGNED
/usr/bin/mips64-unknown-linux-gnu-g++ SIGNED
/usr/bin/sh4-unknown-linux-gnu-g++ SIGNED
/usr/bin/sparc-unknown-linux-gnu-g++ SIGNED
/usr/bin/sparc64-unknown-linux-gnu-g++ SIGNED
/usr/bin/x86_64-HEAD-linux-gnu-g++ SIGNED
/usr/bin/x86_64-UNREG-linux-gnu-g++ SIGNED
/usr/bin/x86_64-pc-linux-gnu-g++ SIGNED
/usr/bin/x86_64-w64-mingw32-g++ SIGNED

/usr/bin/aarch64-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/armv5tel-softfloat-linux-gnueabi-g++ UNSIGNED
/usr/bin/armv7a-hardfloat-linux-gnueabi-g++ UNSIGNED
/usr/bin/armv7a-unknown-linux-gnueabi-g++ UNSIGNED
/usr/bin/powerpc-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/powerpc64-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/powerpc64le-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/s390x-unknown-linux-gnu-g++ UNSIGNED

Or in a shorter form:

  • signed: alpha, hppa, x86, ia64, m68k, mips, sh, sparc
  • unsigned: arm, powerpc, s390
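
Another way to query the default, without relying on compile errors, is to ask the preprocessor or limits.h directly; gcc predefines __CHAR_UNSIGNED__ on targets where plain char is unsigned, and CHAR_MIN is 0 there. A minimal sketch:

#include <limits.h>
#include <stdio.h>
int main() {
#ifdef __CHAR_UNSIGNED__
    puts("UNSIGNED");
#else
    puts("SIGNED");
#endif
    printf("CHAR_MIN = %d\n", CHAR_MIN); // 0 when char is unsigned, SCHAR_MIN when signed
    return 0;
}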

Why would a compiler prefer one signedness over the other? The answer is the underlying Instruction Set Architecture. Or … not :) Read on!

Let’s look at the code generated for two simple functions that fetch a single char from memory into a register:

signed long sc2sl (signed char * p) { return *p; }
unsigned long uc2ul (unsigned char * p) { return *p; }

Alpha

Alpha is a 64-bit architecture. It does not support unaligned reads in its basic ISA. You have been warned.

; alpha-unknown-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
; example: 0x12345(BB address, p)
; | 0x12346(CC address, p+1)
; v v
; mem: [ .. AA BB CC DD .. ]
; a0 = 0x12345
lda t0,1(a0) ; load address: t0 = p+1
; t0 = 0x12346 (a0 + 1)
ldq_u v0,0(a0) ; load unaligned: v0 = *(long*)(align(p))
; v0 = *(long*)0x12340
; v0 = 0xDDCCBBAA????????
extqh v0,t0,v0 ; extract actual byte into MSB position
; v0 = v0 << 16
; v0 = 0xBBAA????????0000
sra v0,56,v0 ; get sign-extended byte using arithmetic shift-right
; v0 = v0 >> 56
; v0 = 0xFFFFFFFFFFFFFFBB
ret ; return
uc2ul:
ldq_u v0,0(a0) ; load unaligned: v0 = *(long*)(align(p))
extbl v0,a0,v0 ; extract byte in v0
ret

In this case Alpha handles the unsigned load slightly more gracefully (it does not require the arithmetic shift or the shift offset computation). It takes quite a bit of time to understand the sc2sl implementation.

creemj on #gentoo-alpha pointed out the BWX ISA extension (enabled with -mbwx in gcc):

; alpha-unknown-linux-gnu-gcc -O2 -mbwx -c a.c && objdump -d a.o
sc2sl:
ldbu v0,0(a0)
sextb v0,v0 ; sign-extend-byte
ret
uc2ul:
ldbu v0,0(a0)
ret

Here the signed load requires one extra instruction to amend the default-unsigned load semantics.

HPPA (PA-RISC)

Currently HPPA userland supports only 32-bit mode on Linux. Like many RISC architectures its branch instructions take two clock cycles to execute; in effect the instruction right after a branch (bv) sits in a delay slot and is also executed.

; hppa2.0-unknown-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
ldb 0(r26),ret0 ; load byte
bv r0(rp) ; return
extrw,s ret0,31,8,ret0 ; sign-extend 8 bits into 31
uc2ul:
bv r0(rp) ; return
ldb 0(r26),ret0

As on Alpha, signed chars require one more arithmetic operation.

x86

64-bit mode:

; x86_64-pc-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
movsbq (%rdi),%rax ; load/sign-extend byte to quad
retq
uc2ul:
movzbl (%rdi),%eax ; load/zero-extend byte to long
retq

Note the difference between the target operands (64 vs. 32 bits): in 64-bit mode, writing to a 32-bit register implicitly zeroes the upper half for us.

32-bit mode:

; x86_64-pc-linux-gnu-gcc -O2 -m32 -c a.c && objdump -d a.o
sc2sl:
mov 0x4(%esp),%eax
movsbl (%eax),%eax
ret
uc2ul:
mov 0x4(%esp),%eax
movzbl (%eax),%eax
ret

No surprises here. The argument is passed on the stack.

ia64

ia64 “instructions” are huge: they are 128 bits long and encode 3 real instructions (a bundle). The result of the memory fetch is not used in the same bundle, thus we need at least two bundles to fetch and shift. (I don't know why yet; either to avoid a memory stall in the same bundle, or it's a “Write; Read-Write” conflict on r8 within a single bundle.)

; ia64-unknown-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
[MMI] nop.m 0x0
ld1 r8=[r32] # load byte (implicit zero-extend)
nop.i 0x0;;
[MIB] nop.m 0x0
sxt1 r8=r8 # sign-extend
br.ret.sptk.many b0;;
uc2ul:
[MIB] ld1 r8=[r32] # load byte (implicit zero-extend)
nop.i 0x0
br.ret.sptk.many b0;;

The unsigned char load requires fewer instructions (no additional sign-extension required).

m68k

For some reason the frame pointer is still preserved at -O2. I've disabled it with -fomit-frame-pointer to make the assembly shorter:

; m68k-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
moveal %sp@(4),%a0 ; arguments are passed through stack (as would be in i386)
moveb %a0@,%d0 ; load byte
extbl %d0 ; sign-extend result
rts
uc2ul:
moveal %sp@(4),%a0
clrl %d0 ; zero destination register
moveb %a0@,%d0 ; load byte
rts

Both functions are similar. Both require arithmetic fiddling.

mips

Like HPPA, MIPS executes the instruction in the delay slot right after a branch.

; mips64-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
jr ra
lb v0,0(a0) ; load byte (sign-extend)
uc2ul:
jr ra
lbu v0,0(a0) ; load byte (zero-extend)

Both functions take exactly one instruction.

SuperH

Like HPPA, SuperH executes the instruction in the delay slot right after a branch.

; sh4-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
rts
mov.b @r4,r0 ; load byte (sign-extend)
uc2ul:
mov.b @r4,r0 ; load byte (sign-extend)
rts
extu.b r0,r0 ; zero-extend result

Here the unsigned load requires one extra instruction to amend the default-signed load semantics.

SPARC

Like HPPA, SPARC executes the instruction in the delay slot right after a branch.

; sparc-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
retl
ldsb [ %o0 ], %o0
uc2ul:
retl
ldub [ %o0 ], %o0

Both functions take exactly one instruction.

ARM

; armv5tel-softfloat-linux-gnueabi-gcc -O2 -fomit-frame-pointer -c a.c && armv5tel-softfloat-linux-gnueabi-objdump -d a.o
sc2sl:
ldrsb r0, [r0] ; load/sign-extend
bx lr
uc2ul:
ldrb r0, [r0] ; load/zero-extend
bx lr

Both functions take exactly one instruction.

PowerPC

PowerPC generates quite inefficient code in -fPIC mode, so I'm using -fno-PIC here.

; powerpc-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -fno-PIC -c a.c && objdump -d a.o
sc2sl:
lbz r3,0(r3) ; load-byte/zero-extend
extsb r3,r3 ; sign-extend
blr
nop
uc2ul:
lbz r3,0(r3) ; load-byte/zero-extend
blr

Here the signed load requires one extra instruction to amend the default-unsigned load semantics.

S390

64-bit mode:

; s390x-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -fno-PIC -c a.c && objdump -d a.o
sc2sl:
icmh %r2,8,0(%r2) ; insert-characters-under-mask-64
srag %r2,%r2,56 ; shift-right-single-64
br %r14
uc2ul:
llgc %r2,0(%r2) ; load-logical-character
br %r14

The most esoteric instruction set :) It looks like unsigned loads are slightly shorter here.

“31”-bit mode (note -m31):

; s390x-unknown-linux-gnu-gcc -m31 -O2 -fomit-frame-pointer -fno-PIC -c a.c && objdump -d a.o
sc2sl:
icm %r2,8,0(%r2) ; insert-characters-under-mask
sra %r2,24 ; shift-right-single
br %r14
uc2ul:
lhi %r1,0 ; load-halfword-immediate
ic %r1,0(%r2) ; insert-character
lr %r2,%r1 ; register-to-register(?) move
br %r14

Surprisingly, in 31-bit mode signed loads are slightly shorter. But it looks like uc2ul could be shorter still by eliminating the lr.

Parting words

At least from an ISA standpoint some architectures treat signed char and unsigned char equally and could pick either signedness. Others differ quite a bit.

Here is my silly table:

architecture  default signedness  ISA-preferred signedness  match
alpha         SIGNED              UNSIGNED                  NO
arm           UNSIGNED            AMBIVALENT                YES
hppa          SIGNED              UNSIGNED                  NO
ia64          SIGNED              UNSIGNED                  NO
m68k          SIGNED              AMBIVALENT                YES
mips          SIGNED              AMBIVALENT                YES
powerpc       UNSIGNED            UNSIGNED                  YES
s390(64)      UNSIGNED            UNSIGNED                  YES
sh            SIGNED              SIGNED                    YES
sparc         SIGNED              AMBIVALENT                YES
x86           SIGNED              AMBIVALENT                YES

What do we see here:

  • alpha follows the majority of architectures in char signedness but pays a lot for it.
  • arm could have been signed just fine (for this tiny silly test)
  • hppa and ia64 might have been unsigned, which would balance the table a bit (6/5 versus 8/3) :)

Have fun!


September 11, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Authenticating with U2F (September 11, 2017, 16:25 UTC)

In order to further secure access to my workstation, after the switch to Gentoo sources, I now enabled two-factor authentication through my Yubico U2F USB device. Well, at least for local access - remote access through SSH requires both userid/password as well as the correct SSH key, by chaining authentication methods in OpenSSH.

Enabling U2F on (Gentoo) Linux is fairly easy. The various guides online which talk about the pam_u2f setup are indeed correct that it is fairly simple. For completeness' sake, I've documented what I know on the Gentoo Wiki, in the pam_u2f article.
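
To give an idea of the two pieces involved, here is a rough sketch; the PAM file name, the authfile path and the pam_u2f options are assumptions for illustration, not a drop-in configuration:

# /etc/pam.d/system-local-login: require the U2F token for local logins
auth    required    pam_u2f.so authfile=/etc/u2f/mappings cue

# /etc/ssh/sshd_config: chain methods so both a key and a password are needed
AuthenticationMethods publickey,password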

September 10, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Fun with Hertz car rentals, part 2

If you're reading this, the last act in this drama (see the previous blog post) was that in Patras a friendly employee from Hertz picked up the rental car to bring it to a repair workshop. A bit later than planned, but nevertheless. Now the story continues.

  • About 20:00 the same day I get a phone call from the same lady that my car was ready, and we could meet in about 20min next to my hotel so I can pick it up again. That sounded great to me. Some minutes later I saw the car coming.
  • Of course I wanted to try out the repaired roof / window immediately, so we did that. Opened the roof, closed the roof. The passenger side window did not close; precisely the same phenomenon. Oops.
  • I tried a few more times on instruction by the Hertz employee, with the result that the window got stuck at half height and did not move anymore, even after shutting down and restarting the ignition. Since it was stuck on the wrong side of its rubber seal, the passenger door did not open anymore either.
  • The visibly nervous Hertz employee calls her manager on the mobile, who arrives after a few minutes. The manager opens the passenger door with application of force. Afterwards, and after restarting the engine, the window slides up again.
  • We have some discussion about a replacement car, where I point out that I paid a lot of money for having a convertible, and really want one. I agree to come to the office Thursday morning to sort things out.
  • Next morning, Thursday, at the Hertz office, I'm glad to learn that a replacement car will be sent. Of course, I'm now leaving Patras, so the car will have to be sent to a station near my next stops.
  • We discuss this and agree that I will pick the car up tomorrow (Friday) afternoon in Kalamata (which is only about 80km from my Friday evening hotel in Kyparissia).
Oh well. Glad that things are somehow sorted out. I spend the rest of the day visiting a Mycenaean castle (1200 BC), a Frankish castle (1200 AD), and re-visiting Olympia, spend the night near Olympia, and then start towards Kalamata through the mountains via, e.g., the excavations of ancient Messene. Sometime on the way I realize that the Kalamata Hertz offices (according to the website) are closed 14:00 to 17:00, so I plan to arrive there around 18:00. That's ample time since they should be open 17:00 - 21:00 (search for Kalamata here).
  • Arrive 18:10 at the Kalamata city office. Nobody there, and there's a sign on the door saying "We are at Kalamata Airport."
  • Drive back the ~10km to the airport (which I passed on the way before). Arrive there around 18:30. The entire airport is already closed for the day. No Hertz employees in sight.
  • Call the Kalamata office. First response, "We closed half an hour ago." When I start explaining my problem, the lady on the phone says "But your car has not arrived from Athens yet!" I point out that I have to go back to Kyparissia, quite some way, today. She doesn't know when it will arrive, but says something about late evening.
  • I tell her I will now get dinner here in Kalamata, and afterwards call her again.
That's where we are now. Just as a reminder, it's now Friday evening, and the problem has essentially been known to Hertz since last Sunday.

Update:
  • Tried calling the Hertz Kalamata office again around 20:45. No response, after a while some mailbox text in Greek. 
  • Drove back the 60km to Kyparissia, arrived at the hotel 22:00. Will call Hertz again tomorrow.
Update 2: Yes, Hertz knows my mobile phone number. It's big and fat on my contract, and I also gave it again and reconfirmed it with the employee at Patras. So one could assume that if something went wrong they would phone me...

Update 3: It ends well. See the next post.

    Fun with Hertz car rentals, part 3 (it ends well) (September 10, 2017, 19:37 UTC)

    If you've read the last part, I had just arrived at my hotel in Kyparissia late in the night, slightly fuming. Well...

    Next morning, Saturday, around 10:15 somebody called my mobile phone. For some reason I didn't notice, and only got a text notification of a missed call an hour later. I called back; it turns out this was the Kalamata airport Hertz office. "Your replacement car has arrived; you can pick it up anytime."

    I arranged to come by around 16:00 in the afternoon, and from here on everything went smoothly. Now I'm driving a white BMW Mini convertible, and the roof and windows work just fine.

    In the end, obviously I'm quite happy that a replacement car was driven from Athens to Kalamata and that I can now continue with my vacation as planned. The path that led to that outcome, however, was not so great...

    September 08, 2017
    Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
    Fun with Hertz car rentals, part 1

    So, I decided to get myself a rather expensive treat this summer. For travelling the Peloponnes I rented a Mini Cooper convertible. These are really cute, and driving around in the sun with the roof open felt like a very nice idea. I'm a Hertz Gold Club customer, so why not go for Hertz again.

    I picked up the car in Athens, all looked fine. The first day I had some longer driving to do, and also the manual was only in Greek, so I decided to drive to my first stop and check out the convertible roof there. OK, with some fiddling I found and read a German manual on the BMW website (now I know where to find the VIN number, if anyone asks :), opened the roof, enjoyed half a day in the mountains near Kalavrita.

    Afterwards the passenger side window didn't close anymore.

    It turns out something was already bent or damaged inside the door, so the window was sliding up on the wrong side of its rubber seal. At some point it can't move any further, so the electronics stops and disables the window. The effect is perfectly reproducible, and scratch marks on the rubber seal and door frame indicate it's been doing that already for a while. Oh well.

    • Phoned the nearest Hertz office in Patras. After some complicated discussion in English they advised me to contact the office in Athens.
    • Phoned the Hertz office in Athens. I managed to explain the problem there. They said I should contact their central technical service office, since maybe they know something easy to do. 
    • Phoned the central technical service office. There the problem was quickly understood; a very helpful lady explained to me that most likely the car would have to be exchanged. Since it was Sunday afternoon, they couldn't do it now, but somebody would call me back on Monday morning 9-10.
    • Waited Monday morning for the call. Nothing happened. 
    • Phoned the central technical service office, Monday around 13:00. They asked me where I was. After I told them I was going to Patras the next day, they said I should come by their office there.
    • Arrived at the Patras office Tuesday around 17:30. I demonstrated the problem to the lady there. She acknowledged that something's broken, and told me she'd come to my hotel the next day between 11:00 and 12:00 to pick up the car and bring it to the BMW service for repair.
    • Now I'm sitting in the bar of the hotel, it's 12:30, no one has called or come by, and slowly I'm getting seriously annoyed.
    Let's see how the story continues...
    • Update: 13:00, friendly lady from Hertz picked up the car. Fingers crossed. Made clear it's a long rental, so delaying makes no sense. Wants to phone me either in the afternoon or tomorrow morning.
    • Update 2: The drama continues in the next blog post.

    September 07, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)
    In Search of a Secure Time Source (September 07, 2017, 15:07 UTC)

    All our computers and smartphones have an internal clock and need to know the current time. As configuring the time manually is annoying, it's common to set the time via Internet services. What tends to get forgotten is that a reasonably accurate clock is often a crucial part of security features like certificate lifetimes or features with expiration times like HSTS. Thus timesetting should be secure - but usually it isn't.

    I'd like my systems to have a secure time. So I'm looking for a timesetting tool that fulfils two requirements:

    1. It provides authenticity of the time and is not vulnerable to man in the middle attacks.
    2. It is widely available on common Linux systems.

    Although these seem like trivial requirements, to my knowledge such a tool doesn't exist. And these are relatively loose requirements. One might want to add:
    3. The timesetting needs to provide good accuracy.
    4. The timesetting needs to be protected against malicious time servers.

    Some people need a very accurate time source, for example for certain scientific use cases. But that's outside of my scope. For the vast majority of use cases a clock that is off by a few seconds doesn't matter. And while it's certainly a good idea to consider rogue servers given the current state of things, I'd be happy with a solution where I simply trust a server from Google or any other major Internet entity.

    So let's look at what we have:

    NTP

    The common way of setting the clock is the NTP protocol. NTP itself has no transport security built in. It's a plaintext protocol open to manipulation and man in the middle attacks.

    There are two variants of "secure" NTP. "Autokey", an authenticated variant of NTP, is broken. There's also symmetric authentication, but that is impractical for widespread use, as it would require negotiating a pre-shared key with the time server in advance.

    NTPsec and Ntimed

    In response to some vulnerabilities in the reference implementation of NTP, two projects started developing "more secure" variants of NTP: Ntimed - a rewrite by Poul-Henning Kamp - and NTPsec, a fork of the original NTP software. Ntimed hasn't seen any development for several years; NTPsec seems active. NTPsec had some controversies with the developers of the original NTP reference implementation, and its main developer is - to put it mildly - a controversial character.

    But none of that matters. Both projects don't implement a "secure" NTP. The "sec" in NTPsec refers to the security of the code, not to the security of the protocol itself. It's still just an implementation of the old, insecure NTP.

    Network Time Security

    There's a draft for a new secure variant of NTP - called Network Time Security. It adds authentication to NTP.

    However it's just a draft and it seems stalled. It hasn't been updated for over a year. In any case: It's not widely implemented and thus it's currently not usable. If that changes it may be an option.

    tlsdate

    tlsdate is a hack abusing the timestamp of the TLS protocol. The TLS timestamp of a server can be used to set the system time. This doesn't provide high accuracy, as the timestamp is only given in seconds, but it's good enough.

    I've used and advocated tlsdate for a while, but it has some problems. The timestamp in the TLS handshake doesn't really have any meaning within the protocol, so several implementers decided to replace it with a random value. Unfortunately that is also true for the default server hardcoded into tlsdate.

    Some Linux distributions still ship a package with a default server that will send random timestamps. The result is that your system time is set to a random value. I reported this to Ubuntu a while ago. It never got fixed, however the latest Ubuntu version Zesty Zapus (17.04) doesn't ship tlsdate any more.

    Given that Google has shipped tlsdate in ChromeOS for some time, it seems unlikely that Google will send randomized timestamps any time soon. Thus if you use tlsdate with www.google.com it should work for now. But it's not a future-proof solution.

    TLS 1.3 removes the TLS timestamp, so this whole concept isn't future-proof. Alternatively tlsdate supports using an HTTPS timestamp. But the development of tlsdate has stalled; it hasn't seen any updates lately, and it doesn't build with the latest version of OpenSSL (1.1), so it will likely become unusable soon.

    OpenNTPD

    The developers of OpenNTPD, the NTP daemon from OpenBSD, came up with a nice idea. NTP provides high accuracy, yet no security. Via HTTPS you can get a timestamp with low accuracy. So they combined the two: They use NTP to set the time, but they check whether the given time deviates significantly from an HTTPS host. So the HTTPS host provides safety boundaries for the NTP time.

    This would be really nice, if there wasn't a catch: This feature depends on an API only provided by LibreSSL, the OpenBSD fork of OpenSSL. So it's not available on most common Linux systems. (Also why doesn't the OpenNTPD web page support HTTPS?)

    Roughtime

    Roughtime is a Google project. It fetches the time from multiple servers and uses some fancy cryptography to make sure that malicious servers get detected. If a roughtime server sends a bad time then the client gets a cryptographic proof of the malicious behavior, making it possible to blame and shame rogue servers. Roughtime doesn't provide the high accuracy that NTP provides.

    From a security perspective it's the nicest of all solutions. However it fails the availability test. Google provides two reference implementations in C++ and in Go, but it's not packaged for any major Linux distribution. Google has an unfortunate tendency to use unusual dependencies and arcane build systems nobody else uses, so packaging it comes with some challenges.

    One line bash script beats all existing solutions

    As you can see none of the currently available solutions is really feasible and none fulfils the two mild requirements of authenticity and availability.

    This is frustrating given that it's a really simple problem. In fact, it's so simple that you can solve it with a single line bash script:

    date -s "$(curl -sI https://www.google.com/|grep -i 'date:'|sed -e 's/^.ate: //g')"

    This line sends an HTTPS request to Google, fetches the date header from the response and passes that to the date command line utility.

    It provides authenticity via TLS. If the current system time is far off then this fails, as the TLS connection relies on the validity period of the current certificate. Google currently uses certificates with a validity of around three months. The accuracy is only in seconds, so it doesn't qualify for high accuracy requirements. There's no protection against a rogue Google server providing a wrong time.
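
    If you want to use it anyway, a slightly more defensive variant of the same idea (my own sketch, not the httpstime script mentioned below) strips the carriage return from the HTTP header and refuses to touch the clock on an empty result:

    d="$(curl -sI https://www.google.com/ | tr -d '\r' | sed -n 's/^[Dd]ate: //p')" && [ -n "$d" ] && date -s "$d"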

    Another potential security concern is that Google might attack the parser of the date-setting tool by serving a malformed date string. However I ran american fuzzy lop against it and it looks robust.

    While this certainly isn't as accurate as NTP or as secure as roughtime, it's better than everything else that's available. I put this together in a slightly more advanced bash script called httpstime.

    September 06, 2017
    Greg KH a.k.a. gregkh (homepage, bugs)
    4.14 == This years LTS kernel (September 06, 2017, 14:41 UTC)

    As the 4.13 release has now happened, the merge window for the 4.14 kernel release is now open. I mentioned this many weeks ago, but as the word doesn’t seem to have gotten very far based on various emails I’ve had recently, I figured I need to say it here as well.

    So, here it is officially, 4.14 should be the next LTS kernel that I’ll be supporting with stable kernel patch backports for at least two years, unless it really is a horrid release and has major problems. If so, I reserve the right to pick a different kernel, but odds are, given just how well our development cycle has been going, that shouldn’t be a problem (although I guess I just doomed it now…)

    As always, if people have questions about this, email me and I will be glad to discuss it, or talk to me in person next week at the LinuxCon^WOpenSourceSummit or Plumbers conference in Los Angeles, or at any of the other conferences I’ll be at this year (ELCE, Kernel Recipes, etc.)

    September 05, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)
    Abandoned Domain Takeover as a Web Security Risk (September 05, 2017, 17:11 UTC)

    In the modern web it's extremely common to include third-party content on web pages: Youtube videos, social media buttons, ads, statistics tools, CDNs for fonts and common javascript files - there are plenty of good and many not so good reasons for this. What is often forgotten is that including other people's content means giving other people control over your webpage. This is obviously particularly risky if it involves javascript, as this gives a third party full code execution rights in the context of your webpage.

    I recently helped a person whose Wordpress blog had a problem: the layout looked broken. The cause was that the theme used a font from a web host - and that host was down. This was easy to fix; I was able to extract the font file from the Internet Archive and store a copy locally. But it got me thinking: what happens if you include third-party content on your webpage and the service from which you're including it disappears?

    I put together a simple script that would check webpages for HTML tags with the src attribute. If the src attribute points to an external host it checks whether the host name can actually be resolved to an IP address. I ran that check on the Alexa Top 1 Million list and it gave me some interesting results. (This methodology has its limits, as it won't discover indirect src references or includes within javascript code, but it should be good enough to get a rough picture.)
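
    A rough approximation of that check, as my own sketch rather than the script actually used (it ignores ports, protocol-relative URLs and HTML parsing edge cases), could look like this:

    #!/bin/sh
    # Usage: ./check-src.sh https://example.com/
    curl -s "$1" \
      | grep -o 'src="[^"]*"' \
      | sed 's/^src="//;s/"$//' \
      | awk -F/ '/^https?:/ { print $3 }' \
      | sort -u \
      | while read host; do
          getent hosts "$host" > /dev/null || echo "does not resolve: $host"
        done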

    Yahoo! Web Analytics was shut down in 2012, yet in 2017 Flickr still tried to use it

    The webpage of Flickr included a script from Yahoo! Web Analytics. If you don't know Yahoo! Web Analytics - that may be because it was shut down in 2012. Although Flickr is a Yahoo! company, it seems they hadn't noticed for quite a while. (The code is gone now, likely because I mentioned it on Twitter.) This example has no security impact as the domain still belongs to Yahoo, but it likely caused an unnecessary slowdown of page loads over many years.

    Going through the list of domains I saw plenty of the things you'd expect: typos, broken URLs, references to localhost and subdomains no longer in use. Sometimes I saw weird stuff, like references to javascript from browser extensions. My best explanation is that someone had a plugin installed that would inject those into pages, and then created a copy of the page with the browser which later ended up being used as the real webpage.

    I looked for abandoned domain names that might be worth registering. There weren't many. In most cases the invalid domains were hosts that didn't resolve but that still belonged to someone. I found a few, but they were only used by one or two hosts.

    Takeover of unregistered Azure subdomain

    But then I saw a couple of domains referencing a javascript from a non-resolving host called piwiklionshare.azurewebsites.net. This is a subdomain from Microsoft's cloud service Azure. Conveniently Azure allows creating test accounts for free, so I was able to grab this subdomain without any costs.

    Doing so allowed me to look at the HTTP log files and see what web pages included code from that subdomain. All of them were local newspapers from the US. 20 of them belonged to two adjacent IP addresses, indicating that they were all managed by the same company. I was able to contact them. While I never received any answer, shortly afterwards the code was gone from all those pages.

    "Friendly defacement" of the Saline Courier.
    However the page with the most hits was not so easy to contact. It was also a newspaper, the Saline Courier. I tried contacting them directly, then their chief editor and their second chief editor. No answer.

    After a while I wondered what I could do. Ultimately at some point Microsoft wouldn't let me host that subdomain any longer for free. I didn't want to risk that others could grab that subdomain, but at the same time I obviously also didn't want to pay in order to keep some web page safe whose owners didn't even bother to read my e-mails.

    But of course I had another way of contacting them: I could execute Javascript on their web page and use that for some friendly defacement. After contemplating whether that would be a legitimate thing to do, I decided to go for it. I changed the background color to some flashy pink and sent them a message. The page remained usable, but the message was hard to ignore.

    There was some trouble on the way - first they broke their CSS, then they showed a PHP error message, then they reverted to the page with the defacement - but in the end they managed to remove the code.

    There are still a couple of other pages that include that Javascript. Most of them however look like broken test webpages. The only legitimate-looking webpage that still embeds that code is the Columbia Missourian, although they don't embed it on the start page, only on the error reporting form they have for every article. It's been several weeks now, and they don't seem to care.

    What happens to abandoned domains?

    There are reasons to believe that what I showed here is only the tip of the iceberg. In many cases when services are discontinued, their domains don't simply disappear. If a domain name is valuable then almost certainly someone will try to register it immediately after it becomes available.

    Someone trying to abuse abandoned domains could watch out for services going out of business or widely referenced domains becoming available. Just to name an example: I found a couple of hosts referencing subdomains of compete.com. If you go to their web page you can learn that the company Compete discontinued its service in 2016. How long will they keep their domain? And what will happen with it afterwards? Whoever gets the domain can hijack all the web pages that still include javascript from it.

    Be sure to know what you include

    There are some obvious takeaways from this. If you include other people's code on your web page then you should know what that means: you give them permission to execute whatever they want on your web page. This means you need to consider how much you can trust them.

    At the very least you should be aware who is allowed to execute code on your web page. If they shut down their business or discontinue the service you have been using then you obviously should remove that code immediately. And if you include code from a web statistics service that you never look at anyway you may simply want to remove that as well.

    September 04, 2017
    Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
    A Late GUADEC 2017 Post (September 04, 2017, 03:20 UTC)

    It’s been a little over a month since I got back from Manchester, and this post should’ve come out earlier but I’ve been swamped.

    The conference was absolutely lovely, and the organisation was 110% on point (serious kudos, I know first hand how hard that is). Others on Planet GNOME have written extensively about the talks, the social events, and everything in between that made it a great experience. What I would like to write about is why this year's GUADEC was special to me.

    GNOME turning 20 years old is obviously a large milestone, and one of the main reasons I wanted to make sure I was at Manchester this year. There were many occasions to take stock of how far we had come, where we are, and most importantly, to reaffirm who we are, and why we do what we do.

    And all of this made me think of my own history with GNOME. In 2002/2003, Nat and Miguel came down to Bangalore to talk about some of the work they were doing. I know I wasn’t the only one who found their energy infectious, and at Linux Bangalore 2003, they got on stage, just sat down, and started hacking up a GtkMozEmbed-based browser. The idea itself was fun, but what I took away — and I know I wasn’t the only one — is the sheer inclusive joy they shared in creating something and sharing that with their audience.

    For all of us working on GNOME in whatever way we choose to contribute, there is the immediate gratification of shaping this project, as well as the larger ideological underpinning of making everyone’s experience talking to their computers better and free-er.

    But I think it is also important to remember that all our efforts to make our community an inviting and inclusive space have a deep impact across the world. So much so that complete strangers from around the world are able to feel a sense of belonging to something much larger than themselves.

    I am excited about everything we will achieve in the next 20 years.

    (thanks go out to the GNOME Foundation for helping me attend GUADEC this year)

    Sponsored by GNOME!