
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. Luca Barbato
. Marek Szuba
. Mart Raudsepp
. Matt Turner
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Sergei Trofimovich
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
November 23, 2017, 05:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.



Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

November 20, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux and extended permissions (November 20, 2017, 15:04 UTC)

One of the features present in the August release of the SELinux user space is its support for ioctl xperm rules in modular policies. In the past, this was only possible in monolithic ones (and CIL). Through this, allow rules can be extended to not only cover source (domain) and target (resource) identifiers, but also a specific number to which they apply. And ioctls are the first (and currently only) permission on which this is implemented.

Note that ioctl-level permission control isn't a new feature by itself, but the fact that it can be used in modular policies is.

What is ioctl?

Many interactions on a Linux system are done through system calls. From a security perspective, most system calls can be properly categorized based on who is executing the call and what the target of the call is. For instance, the unlink() system call has the following prototype:

int unlink(const char *pathname);

Considering that a process (source) is executing unlink (system call) against a target (path) is sufficient for most security implementations. Either the source has the permission to unlink that file or directory, or it doesn't. SELinux maps this to the unlink permission within the file or directory classes:

allow <domain> <resource> : { file dir }  unlink;

Now, ioctl() is somewhat different. It is a system call that allows device-specific operations which cannot be expressed by regular system calls. Devices can have multiple functions/capabilities, and with ioctl() these capabilities can be interrogated or updated. It has the following interface:

int ioctl(int fd, unsigned long request, ...);

The file descriptor is the target device on which an operation is launched. The second argument is the request, an integer whose value identifies what kind of operation the ioctl() call is trying to execute. So unlike regular system calls, where the operation itself is the system call, ioctl() actually has a parameter that identifies this.

A list of possible parameter values on a socket, for instance, is available in the Linux kernel source code, under include/uapi/linux/sockios.h.
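
For illustration, here is a minimal C sketch of such a call from user space; the interface name "eth0" is an assumption, and any datagram socket works as the file descriptor:

/* Query the hardware (MAC) address of "eth0" via ioctl().
 * The request value SIOCGIFHWADDR (0x8927) selects the operation. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;

    if (fd < 0)
        return 1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFHWADDR, &ifr) == 0) {
        unsigned char *mac = (unsigned char *)ifr.ifr_hwaddr.sa_data;
        printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    }
    close(fd);
    return 0;
}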

SELinux allowxperm

For SELinux, having the purpose of the call as part of a parameter means that a regular mapping isn't sufficient. Allowing ioctl() commands for a domain against a resource is expressed as follows:

allow <domain> <resource> : <class> ioctl;

This of course does not allow policy developers to differentiate between harmless or informative calls (like SIOCGIFHWADDR to obtain the hardware address associated with a network device) and impactful calls (like SIOCADDRT to add a routing table entry).

To allow for a fine-grained policy approach, the SELinux developers introduced an extended allow permission, which is capable of differentiating based on an integer value.

For instance, to allow a domain to get a hardware address (SIOCGIFHWADDR, which is 0x8927) from a TCP socket:

allowxperm <domain> <resource> : tcp_socket ioctl 0x8927;

This additional parameter can also be ranged:

allowxperm <domain> <resource> : <class> ioctl 0x8910-0x8927;

And of course, it can also be used to complement (i.e. allow all ioctl parameters except a certain value):

allowxperm <domain> <resource> : <class> ioctl ~0x8927;

Small or negligible performance hit

According to a presentation given by Jeff Vander Stoep at the Linux Security Summit in 2015, the performance impact of this addition to SELinux is well under control, which helped the introduction of this capability in the Android SELinux implementation.

As a result, interested readers can find examples of allowxperm invocations in the SELinux policy in Android, such as in the app.te file:

# only allow unprivileged socket ioctl commands
allowxperm { appdomain -bluetooth } self:{ rawip_socket tcp_socket udp_socket } ioctl { unpriv_sock_ioctls unpriv_tty_ioctls };

And with that, we again show how fine-grained the SELinux access controls can be.

November 18, 2017
Sergei Trofimovich a.k.a. slyfox (homepage, bugs)
Endianness of a single byte: big or little? (November 18, 2017, 00:00 UTC)

Bug

Rolf Eike Beer noticed two test failures while testing the radvd package (an IPv6 route advertiser daemon, and more) on sparc. Both tests likely failed due to an endianness issue:

test/send.c:317:F:build:test_add_ra_option_lowpanco:0:
  Assertion '0 == memcmp(expected, sb.buffer, sizeof(expected))'
    failed: 0 == 0, memcmp(expected, sb.buffer, sizeof(expected)) == 1
test/send.c:342:F:build:test_add_ra_option_abro:0:
  Assertion '0 == memcmp(expected, sb.buffer, sizeof(expected))'
    failed: 0 == 0, memcmp(expected, sb.buffer, sizeof(expected)) == 1

I’ve confirmed the same failure on powerpc.

Eike noted that it's unusual, because sparc is a big-endian architecture and network byte order is also big-endian (thus no need to flip bytes). Something very specific must have lurked in the radvd code to break endianness in this case. Curiously, all the radvd tests were working fine on amd64.

Two functions failed to produce expected results:

START_TEST(test_add_ra_option_lowpanco)
{
    ck_assert_ptr_ne(0, iface);

    struct safe_buffer sb = SAFE_BUFFER_INIT;
    add_ra_option_lowpanco(&sb, iface->AdvLowpanCoList);

    unsigned char expected[] = {
        0x22, 0x03, 0x32, 0x48, 0x00, 0x00, 0xe8, 0x03, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    };

    ck_assert_int_eq(sb.used, sizeof(expected));
    ck_assert_int_eq(0, memcmp(expected, sb.buffer, sizeof(expected)));

    safe_buffer_free(&sb);
}
END_TEST

START_TEST(test_add_ra_option_abro)
{
    ck_assert_ptr_ne(0, iface);

    struct safe_buffer sb = SAFE_BUFFER_INIT;
    add_ra_option_abro(&sb, iface->AdvAbroList);

    unsigned char expected[] = {
        0x23, 0x03, 0x0a, 0x00, 0x02, 0x00, 0x02, 0x00, 0xfe, 0x80, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0xa2, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
    };

    ck_assert_int_eq(sb.used, sizeof(expected));
    ck_assert_int_eq(0, memcmp(expected, sb.buffer, sizeof(expected)));

    safe_buffer_free(&sb);
}
END_TEST

Both tests are straightforward: they verify 6LoWPAN and ABRO extension handling (both are related to route announcement for Low Power devices). Does not look complicated.

I looked at the add_ra_option_lowpanco() implementation and noticed at least one bug: a missing endianness conversion:

// radvd.h
struct nd_opt_abro {
    uint8_t nd_opt_abro_type;
    uint8_t nd_opt_abro_len;
    uint16_t nd_opt_abro_ver_low;
    uint16_t nd_opt_abro_ver_high;
    uint16_t nd_opt_abro_valid_lifetime;
    struct in6_addr nd_opt_abro_6lbr_address;
};

// ...
// send.c
static void add_ra_option_mipv6_home_agent_info(struct safe_buffer *sb, struct mipv6 const *mipv6)
{
    struct HomeAgentInfo ha_info;
    memset(&ha_info, 0, sizeof(ha_info));
    ha_info.type = ND_OPT_HOME_AGENT_INFO;
    ha_info.length = 1;
    ha_info.flags_reserved = (mipv6->AdvMobRtrSupportFlag) ? ND_OPT_HAI_FLAG_SUPPORT_MR : 0;
    ha_info.preference = htons(mipv6->HomeAgentPreference);
    ha_info.lifetime = htons(mipv6->HomeAgentLifetime);
    safe_buffer_append(sb, &ha_info, sizeof(ha_info));
}

static void add_ra_option_abro(struct safe_buffer *sb, struct AdvAbro const *abroo)
{
    struct nd_opt_abro abro;
    memset(&abro, 0, sizeof(abro));
    abro.nd_opt_abro_type = ND_OPT_ABRO;
    abro.nd_opt_abro_len = 3;
    abro.nd_opt_abro_ver_low = abroo->Version[1];
    abro.nd_opt_abro_ver_high = abroo->Version[0];
    abro.nd_opt_abro_valid_lifetime = abroo->ValidLifeTime;
    abro.nd_opt_abro_6lbr_address = abroo->LBRaddress;
    safe_buffer_append(sb, &abro, sizeof(abro));
}

Note how add_ra_option_mipv6_home_agent_info() carefully flips bytes with htons() for all uint16_t fields but add_ra_option_abro() does not.

It means ABRO does not really work on little-endian (i.e. most) systems in radvd, and the test checks for the wrong thing. I added the missing htons() calls and fixed the expected[] output in the tests by manually flipping two bytes in a few locations.
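
A sketch of what the corrected function looks like with the byte flipping added, mirroring add_ra_option_mipv6_home_agent_info() above (the actual upstream fix may differ in detail):

static void add_ra_option_abro(struct safe_buffer *sb, struct AdvAbro const *abroo)
{
    struct nd_opt_abro abro;
    memset(&abro, 0, sizeof(abro));
    abro.nd_opt_abro_type = ND_OPT_ABRO;
    abro.nd_opt_abro_len = 3;
    /* htons() added: these uint16_t fields go on the wire in network byte order */
    abro.nd_opt_abro_ver_low = htons(abroo->Version[1]);
    abro.nd_opt_abro_ver_high = htons(abroo->Version[0]);
    abro.nd_opt_abro_valid_lifetime = htons(abroo->ValidLifeTime);
    abro.nd_opt_abro_6lbr_address = abroo->LBRaddress;
    safe_buffer_append(sb, &abro, sizeof(abro));
}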

Plot twist

The effect was slightly unexpected: I fixed only the ABRO test, but not 6LoWPAN. That's where things became interesting. Let's look at the add_ra_option_lowpanco() implementation:

// radvd.h
struct nd_opt_6co {
    uint8_t nd_opt_6co_type;
    uint8_t nd_opt_6co_len;
    uint8_t nd_opt_6co_context_len;
    uint8_t nd_opt_6co_res : 3;
    uint8_t nd_opt_6co_c : 1;
    uint8_t nd_opt_6co_cid : 4;
    uint16_t nd_opt_6co_reserved;
    uint16_t nd_opt_6co_valid_lifetime;
    struct in6_addr nd_opt_6co_con_prefix;
};

// ...
// send.c
static void add_ra_option_lowpanco(struct safe_buffer *sb, struct AdvLowpanCo const *lowpanco)
{
    struct nd_opt_6co co;
    memset(&co, 0, sizeof(co));
    co.nd_opt_6co_type = ND_OPT_6CO;
    co.nd_opt_6co_len = 3;
    co.nd_opt_6co_context_len = lowpanco->ContextLength;
    co.nd_opt_6co_c = lowpanco->ContextCompressionFlag;
    co.nd_opt_6co_cid = lowpanco->AdvContextID;
    co.nd_opt_6co_valid_lifetime = lowpanco->AdvLifeTime;
    co.nd_opt_6co_con_prefix = lowpanco->AdvContextPrefix;
    safe_buffer_append(sb, &co, sizeof(co));
}

The test still failed to match one single byte: the one that spans three bitfields: nd_opt_6co_res, nd_opt_6co_c, nd_opt_6co_cid. But why? Does endianness really matter within a byte? Apparently gcc happens to group those three fields in different orders on x86_64 and powerpc!

Let's look at a smaller example:

#include <stdio.h>
#include <stdint.h>

struct s {
    uint8_t a : 3;
    uint8_t b : 1;
    uint8_t c : 4;
};

int main() {
    struct s v = { 0x00, 0x1, 0xF, };
    printf("v = %#02x\n", *(uint8_t*)&v);
    return 0;
}

Output difference:

$ x86_64-pc-linux-gnu-gcc a.c -o a && ./a
v = 0xf8
# (0xF << 5) | (0x1 << 4) | 0x00
$ powerpc-unknown-linux-gnu-gcc a.c -o a && ./a
v = 0x1f
# (0x0 << 5) | (0x1 << 4) | 0xF

The C standard does not specify the layout of bitfields, and this is a great illustration of how things break :)

An interesting observation: the bitfield order on powerpc happens to be the desired order (as the 6LoWPAN RFC defines it).

It means the radvd code indeed happened to generate a correct bitstream on big-endian platforms (as Eike predicted) but did not work on little-endian systems. Unfortunately, the golden expected[] output was generated on a little-endian system.
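
For reference, one portable way to sidestep the problem entirely is to avoid bitfields and pack the byte with explicit shifts. A minimal sketch with a hypothetical helper name; the layout follows the order the powerpc build happened to produce (res in the top 3 bits, C next, CID in the low 4 bits):

#include <stdio.h>
#include <stdint.h>

/* Pack the res/C/CID byte of a 6CO option with explicit shifts so the
 * result no longer depends on how the compiler orders bitfields. */
static uint8_t pack_6co_flags(uint8_t res, uint8_t c, uint8_t cid)
{
    return (uint8_t)(((res & 0x7) << 5) | ((c & 0x1) << 4) | (cid & 0xF));
}

int main() {
    printf("%#02x\n", pack_6co_flags(0x0, 0x1, 0xF)); /* 0x1f on every platform */
    return 0;
}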

Thus the 3 fixes:

That’s it :)

Posted on November 18, 2017

November 17, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Python’s M2Crypto fails to compile (November 17, 2017, 21:29 UTC)

When updating my music server, I ran into a compilation error on Python’s M2Crypto. The error message was a little bit strange and not directly related to the package itself:

fatal error: openssl/ecdsa.h: No such file or directory

Obviously, that error is generated from OpenSSL and not directly within M2Crypto. Remembering that there are some known problems with the “bindist” USE flag, I took a look at OpenSSL and OpenSSH. Indeed, “bindist” was set. Simply removing the USE flag from those two packages took care of the problem:


# grep -i 'openssh\|openssl' /etc/portage/package.use
>=dev-libs/openssl-1.0.2m -bindist
net-misc/openssh -bindist

In this case, the problem makes sense based on the error message. The error indicated that the Elliptic Curve Digital Signature Algorithm (ECDSA) header was not found. In the previously-linked page about the “bindist” USE flag, it clearly states that having bindist set will “Disable/Restrict EC algorithms (as they seem to be patented)”.

Cheers,
Nathan Zachary

November 16, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)
Some minor Security Quirks in Firefox (November 16, 2017, 16:24 UTC)

I discovered a couple of more or less minor security issues in Firefox lately. None of them is particularly scary, but they affect interesting corner cases or unusual behavior. I'm posting this mainly hoping that other people will find it inspiring to think about unusual security issues and maybe also come up with more realistic attack scenarios for these bugs.

I'd like to point out that Mozilla hasn't fixed most of those issues, despite all of them being reported several months ago.

Bypassing XSA warning via FTP

XSA, or Cross-Site Authentication, is an interesting and not very well known attack. It was discovered by Joachim Breitner in 2005.

Some web pages, mostly forums, allow users to include third party images. This can be abused by an attacker to steal other users' credentials. An attacker first posts something with an image from a server he controls. He then switches on HTTP authentication for that image. All visitors of the page will now see a login dialog on that page. They may be tempted to type their login credentials into the HTTP authentication dialog, which seems to come from the page they trust.

The original XSA attack is, as said, quite old. As a countermeasure Firefox implements a warning in HTTP authentication dialogs that were created by a subresource like an image. However it only does that for HTTP, not for FTP.

So an attacker can run an FTP server and include an image from there. By then requiring an FTP login and logging all login attempts to the server he can gather credentials. The password dialog will show the host name of the attacker's FTP server, but he could choose one that looks close enough to the targeted web page to not raise suspicion.

Firefox FTP password dialog

I haven't found any popular site that allows embedding images from non-HTTP protocols. The most popular page that allows embedding external images at all is Stack Overflow, but it only allows HTTPS. Generally, embedding third party images is less common these days; most pages keep local copies if they embed external images.

This bug is yet unfixed.

Obviously one could fix it by showing the same warning for FTP that is shown for HTTP authentication. But I'd rather recommend completely blocking authentication dialogs on third party content. This is also what Chrome is doing. Mozilla has been discussing this for several years with no result.

Firefox also has an open bug about disallowing FTP on subresources. This would obviously also fix this scenario.

Window-modal popup via FTP

In the early days of JavaScript web pages could annoy users with popups. Browsers have since changed the behavior of JavaScript popups. They are now tab-modal, which means they're not blocking the interaction with the whole browser, they're just part of one tab and will only block the interaction with the web page that created them.

So it is a goal of modern browsers to not allow web pages to create window-modal alerts that block the interaction with the whole browser. However I figured out FTP gives us a bypass of this restriction.

If Firefox receives some random garbage over an FTP connection that it cannot interpret as FTP commands it will open an alert window showing that garbage.

Window modal FTP alert

First we open up our fake "FTP-Server" that will simply send a message to all clients. We can just use netcat for this:

while true; do echo "Hello" | nc -l -p 21; done

Then we try to open a connection, e.g. by typing ftp://localhost in the address bar on the same system. Firefox will not show the alert immediately. However, if we then click on the URL bar and press enter again, it will show the alert window. I tried to replicate that behavior with JavaScript, which worked sometimes. I'm relatively sure this can be made reliable.

There are two problems here. One is that server-controlled content is shown to the user without any interpretation. This alert window seems to be intended as some kind of error message, but it doesn't make a lot of sense like that. If shown at all, it should probably be prefixed by some message like "the server sent an invalid command". But ultimately, if the browser receives random garbage instead of protocol messages, it's probably not wise to display it at all. The second problem is that FTP error messages probably should be tab-modal as well.

This bug is also yet unfixed.

FTP considered dangerous

FTP is an old protocol with many problems. Some consider the fact that browsers still support it a problem. I tend to agree, ideally FTP should simply be removed from modern browsers.

FTP in browsers is insecure by design. While TLS-enabled FTP exists, browsers have never supported it. The FTP code is probably not well audited, as it's rarely used. And the fact that another protocol exists that can be used similarly to HTTP has the potential of surprises. For example, I found it quite surprising to learn that it's possible to have unencrypted and unauthenticated FTP connections to hosts that enabled HSTS. (The lack of cookie support on FTP seems to avoid causing security issues, but it's still unexpected and feels dangerous.)

Self-XSS in bookmark manager export

The Firefox Bookmark manager allows exporting bookmarks to an HTML document. Before the current Firefox 57 it was possible to inject JavaScript into this exported HTML via the tags field.

I tried to come up with a plausible scenario where this could matter; however, this turned out to be difficult. This would be a problematic behavior if there were a way for a web page to create such a bookmark. While it is possible to create a bookmark dialog with JavaScript, this doesn't allow us to prefill the tags field. Thus there is no way a web page can insert any content here.

One could come up with implausible social engineering scenarios (a web page asks the user to create a bookmark and insert some specific string into the tags field), but that seems very far-fetched. A remotely plausible scenario would be a situation where a browser can be used by multiple people who are allowed to create bookmarks and the bookmarks are regularly exported and uploaded to a web page. However, that also seems quite far-fetched.

This was fixed in the latest Firefox release as CVE-2017-7840 and considered as low severity.

Crashing Firefox on Linux via notification API

The notification API allows browsers to send notification alerts that the operating system will show in small notification windows. A notification can contain a small message and an icon.

When playing with this, one of the very first things that came to my mind was to check what happens if one simply sends a very large icon. A user has to approve that a web page is allowed to use the notification API; however, if he does, the result is an immediate crash of the browser. This only "works" on Linux. The proof of concept is quite simple, we just embed a large black PNG via a data URI:

<script>
Notification.requestPermission(function(status){
    new Notification("", {
        icon: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAE4gAABOIAQAAAAB147pmAAAL70lEQVR4Ae3BAQ0AAADCIPuntscHD" + "A".repeat(4043) + "yDjFUQABEK0vGQAAAABJRU5ErkJggg==",
    });
});
</script>


I haven't fully tracked down what's causing this, but it seems that Firefox tries to send a message to the system's notification daemon with libnotify, and if that message is too large for the message size limit of D-Bus, it will not properly handle the resulting error.

What I found quite frustrating is that when I reported it I learned that this was a duplicate of a bug that has already been reported more than a year ago. I feel having such a simple browser crash bug open for such a long time is not appropriate. It is still unfixed.

November 10, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Latex 001 (November 10, 2017, 06:03 UTC)

This is mainly a memo to remember, each time, which packages are needed for Japanese.

Usually I use xetex with cjk to write LaTeX documents. On Gentoo this is done by adding xetex and cjk support to texlive:

app-text/texlive cjk xetex
app-text/texlive-core cjk xetex

For platex, you need to install:

dev-texlive/texlive-langjapanese
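
As a quick sanity check, here is a minimal sketch of a Japanese document for xelatex; it assumes the xeCJK package is available, and the font name (IPAMincho) is an assumption that depends on what is installed:

% compile with: xelatex test.tex
\documentclass{article}
\usepackage{xeCJK}
\setCJKmainfont{IPAMincho} % any installed Japanese font works here
\begin{document}
こんにちは、世界。 % "Hello, world."
\end{document}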

Google Summer of Code day41-51 (November 10, 2017, 05:59 UTC)

Google Summer of Code day 41

What was my plan for today?

  • testing and improving elivepatch

What I did today?

elivepatch work:
- working on making the code for sending an unknown number of files with Werkzeug

Making and testing the patch manager.

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 42

What was my plan for today?

  • testing and improving elivepatch

What I did today?

elivepatch work:
- Fixing sending multiple files using requests

I'm making the script for incrementally adding patches, but I'm stuck on this requests problem.

It looks like I cannot concatenate files like this:

files = {'patch': ('01.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         ('02.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         ('03.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'config': ('config', open(temporary_config.name, 'rb'), 'multipart/form-data', {'Expires': '0'})}

or like this:

files = {'patch': ('01.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'patch': ('02.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'patch': ('03.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'config': ('config', open(temporary_config.name, 'rb'), 'multipart/form-data', {'Expires': '0'})}

I'm getting AttributeError: 'tuple' object has no attribute 'read'.

It looks like requests cannot manage to send data with the same key this way, but the server part built with flask-restful is using parser.add_argument('patch', action='append') to concatenate multiple arguments together, and it suggests sending the information like this:

curl http://api.example.com -d "patch=bob" -d "patch=sue" -d "patch=joe"

Unfortunately, for now I'm still trying to understand how I can do it with requests.
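
For what it's worth, requests can repeat a form field if files is passed as a list of tuples instead of a dict. A minimal sketch; the paths, URL, and content types are illustrative, not the actual elivepatch code:

import requests

# Each entry is (field_name, (filename, fileobj, content_type)); repeating
# 'patch' is fine in a list, while a dict would collapse duplicate keys.
files = [
    ('patch', ('01.patch', open('01.patch', 'rb'), 'text/x-patch')),
    ('patch', ('02.patch', open('02.patch', 'rb'), 'text/x-patch')),
    ('patch', ('03.patch', open('03.patch', 'rb'), 'text/x-patch')),
    ('config', ('config', open('config', 'rb'), 'application/octet-stream')),
]
r = requests.post('http://api.example.com/elivepatch', files=files)
print(r.status_code)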

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 43

What was my plan for today?

  • making the first draft of incremental patches feature

What I did today?

elivepatch work:
- Changed send_file to manage more patches at a time
- Refactored function names to reflect the incremental patches change
- Made a function to list patches in the eapply_user portage user patches folder, and temporary folder patches reflecting the applied livepatches

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 44

What was my plan for today?

  • making the first draft of incremental patches feature

What I did today?

elivepatch work:
- renamed command client functions as internal functions
- put together the client incremental patch sender function

What I will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 45

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

elivepatch work:
- working on incremental patch features design and implementation
- putting patch files under /var/run/elivepatch
- ordering patches by numbers
- cleaning the folder when the machine is restarted
- sending the patches to the server in order
- cleaning client terminal output by catching the exceptions

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 46

What was my plan for today?

  • making the first draft of incremental patches feature

What I did today?

elivepatch work:
- testing elivepatch with incremental patches
- code refactoring

What I will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 47

What was my plan for today?

  • making the first draft of incremental patches feature

What I did today?

elivepatch work:
- testing elivepatch with incremental patches
- trying to find some way to make testing faster; I'm thinking of something like unit tests and integration testing
- merged build_livepatch with send_files, as it can reduce the steps for making the livepatch and helps make it more isolated
- going on writing the incremental patch feature

Building with kpatch-build usually takes too much time; this makes testing parts where building a live patch is needed long and complicated. It would probably be better to add some unit/integration testing to keep each task/function under control without needing to build live patches every time. Any thoughts?

What I will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 48

What was my plan for today?

  • making the first draft of incremental patches feature

What I did today?

elivepatch work:
- added PORTAGE_CONFIGROOT for setting the path from which to get the incremental patches
- cannot download kernel sources to /usr/portage/distfiles/
- saving the created livepatch and patch on the client, but sending only the incremental patches and patch to the elivepatch server
- cleaned output from bash commands

What I will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 49

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What I did today?

1) We need a way of having old genpatches. mpagano made a script that saves all the genpatches, and they are archived here: http://dev.gentoo.org/~mpagano/genpatches/tarballs/ I think we can redirect the ebuild in our overlay to get the tarballs from there.

2) kpatch can work with initrd.

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 50

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What I did today?

  • refactoring code
  • started writing the first draft of the automatic kernel livepatching system

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 51

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What I did today?

  • checking eapply_user; eapply_user is using patch

  • writing the elivepatch wiki page: https://wiki.gentoo.org/wiki/User:Aliceinwire/elivepatch

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day31-40 (November 10, 2017, 05:57 UTC)

Google Summer of Code day 31

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Dispatcher.py:

  • fixed comments
  • added static method for return the kernel path
  • Added todo
  • fixed how to make directory with uuid

livepatch.py:

  • fixed docstring
  • removed sudo on kpatch-build
  • fixed comments
  • merged ebuild commands into one in build livepatch

restful.py:

  • fixed comment

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 32

What was my plan for today?

  • testing and improving elivepatch

What I did today?

client checkers:

  • used os.path.join for the ungz function
  • implemented temporary folder using python tempfile

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 33

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Working with tempfile for keeping the uncompressed configuration file using the appropriate tempfile module.
  • Refactoring and code cleaning.
  • check if the ebuild is present in the overlay before trying to merge it.

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 34

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • lpatch added locally
  • fix ebuild directory
  • Static patch and config filenames on send. This is useful for isolating the API requests, using only the UUID as session identifier
  • Removed livepatchStatus and lpatch class configurations, because we need a request to be identified only by its own UUID, for isolating the transaction
  • return in case a request with the same UUID is already present

We still have that problem about "can't find special struct alt_instr size."

I will investigate it tomorrow.

What I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 35

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Kpatch needs some special section data for finding where to inject the livepatch. The existence of this special section data is checked by kpatch-build in the given vmlinux file. The vmlinux file needs CONFIG_DEBUG_INFO=y to produce the debug symbols containing the special section data. This special section data is found like this:

# Set state if name matches
a == 0 && /DW_AT_name.* alt_instr[[:space:]]*$/ {a = 1; next}
b == 0 && /DW_AT_name.* bug_entry[[:space:]]*$/ {b = 1; next}
p == 0 && /DW_AT_name.* paravirt_patch_site[[:space:]]*$/ {p = 1; next}
e == 0 && /DW_AT_name.* exception_table_entry[[:space:]]*$/ {e = 1; next}
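
As a side note, a minimal sketch of enabling that symbol before rebuilding, using the kernel's own scripts/config helper (run from the kernel source tree; this is my illustration, not part of kpatch-build):

# turn on debug info so vmlinux carries the DWARF data kpatch-build checks for
scripts/config --enable DEBUG_INFO
# refresh the rest of the configuration with defaults
make olddefconfig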

What I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 36

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fixed livepatch output name to reflect the statically set patch name
  • module install removed, as it is not needed
  • the debug option is now also copying build.log to the uuid directory, for investigating failed attempts
  • refactored uuid_dir with uuid in cases where uuid_dir does not actually contain the full path
  • adding debug_info to the configuration file if not already present; however, this setting is only needed for the livepatch creation (not tested yet)

What I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 37

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fix return message for cve option
  • Added different configuration example [4.9.30,4.10.17]
  • Catch Gentoo-sources not available error
  • Catch missing livepatch errors on the client

Tested kpatch with patches for kernels 4.9.30 and 4.10.17 and it worked without any problem. I checked with coverage to see which code is not used. I think we can maybe remove the cve option for now, as it is not implemented yet, and use a modular implementation for it, so the configuration could change. We need some documentation about elivepatch on the Gentoo wiki. We need some unit tests to make development smoother and to make it simpler to check the working status with GitHub Travis.

I talked with the kpatch creator and we got some feedback:

“this project could also be used for kpatch testing :)
imagine instead of just loading the .ko, the client was to kick off a series of tests and report back.”

“why bother a production or tiny machine when you might have a patch-building server”

What I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 38

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

What we will do next:
- incremental patch tracking on the client side
- CVE security vulnerability checker
- dividing the repository
- ebuild
- documentation [optional]
- modularity [optional]

Kpatch work:
- Started to make the incremental patch feature
- Tested kpatch for permission issues

What I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 39

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

elivepatch work:
- working on incremental patch features design and implementation
- putting patch files under /var/run/elivepatch
- ordering patches by numbers
- cleaning the folder when the machine is restarted
- sending the patches to the server in order
- cleaning client terminal output by catching the exceptions

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 40

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor

Summary of elivepatch work:
- working with the incremental patch manager
- cleaning client terminal output by catching the exceptions

Making and testing the patch manager.

What I will do next time?

  • testing and improving elivepatch

November 09, 2017
Cigars and the Norwegian Government (November 09, 2017, 21:36 UTC)

[Updated 22nd November to add link to the response to proposed new regulations] As some of my readers know, I'm an aficionado of Cigars, to the extent I bought my own Cigar Importer and Store. The picture below is from the 20th best bar in the world in 2017, and they sell our cigars of … Continue reading "Cigars and the Norwegian Government"

Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google Summer of Code day26-30 (November 09, 2017, 11:14 UTC)

Google Summer of Code day 26

What was my plan for today?

  • testing and improving elivepatch

What I did today?

After discussion with my mentor, I need:
  • Availability and replicability of downloading kernel sources [needed]
  • Represent the same kernel sources as the client kernel sources (USE flags for now, and in the future user-added patches) [needed]
  • Support multiple requests at the same time [needed]
  • Modularity for adding VM machine or container support (in the future)

Created an overlay for keeping old gentoo-sources ebuilds, where we will keep retrocompatibility for old kernels: https://github.com/aliceinwire/gentoo-sources_overlay

Made a function to download old kernel sources and install them under the designated temporary folder.

What I will do next time?

  • Going on improving elivepatch

Google Summer of Code day 27

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fixed git clone of the gentoo-sources overlay under a temporary directory
  • Fixed download directories
  • Tested elivepatch

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 28

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Testing elivepatch with multithreading
  • Removed the check for the uname kernel version, as we now get the kernel version directly from the kernel configuration file header
  • Starting kpatch-build under the output folder. Because kpatch-build creates the livepatch under the $PWD folder, we start it under the uuid tmp folder and fetch the livepatch from there; this is useful for dealing with multithreading
  • Added some helper functions for code clarity [not finished yet]
  • Refactored for code clarity [not finished yet]

To make elivepatch multithreaded we can simply change Flask's app.run() to app.run(threaded=True).
This makes Flask spawn a thread for each request (using the SocketServer.ThreadingMixIn class and BaseWSGIServer) [1]; but even though this works pretty nicely, the suggested way of threading is probably using gunicorn or uWSGI [2],
maybe with something like flask-gunicorn: https://github.com/doobeh/flask-gunicorn
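
A minimal sketch of the threaded variant; the module and app names are placeholders, not the actual elivepatch entry point:

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # threaded=True makes the built-in development server handle each
    # incoming request in its own thread instead of serially.
    app.run(threaded=True)

For production, the same app object could instead be served by gunicorn, e.g. gunicorn -w 4 server:app (assuming the file is called server.py).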

What I will do next time?

  • testing and improving elivepatch

[1] https://docs.python.org/2/library/socketserver.html#SocketServer.ThreadingMixIn
[2] http://flask.pocoo.org/docs/0.12/deploying/

Google Summer of Code day 29

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Added some helper functions for code clarity [not finished yet]
  • Refactored for code clarity [not finished yet]
  • Sent a pull request to kpatch adding a dynamic output folder option: https://github.com/dynup/kpatch/pull/718
  • Sent a pull request to kpatch with a small style fix: https://github.com/dynup/kpatch/pull/719

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 30

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • fixing the uuid design

Moved the UUID generation to the client, and read RFC 4122,

which states that "A UUID is 128 bits long, and requires no central registration process."

  • made a regex function for checking that the format of the uuid is actually correct
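
A sketch of the kind of check described; the helper name is illustrative, not the actual elivepatch code:

import re

# Accept only a canonical RFC 4122 textual UUID: 8-4-4-4-12 hex digits.
UUID_RE = re.compile(
    r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$',
    re.IGNORECASE)

def is_valid_uuid(value):
    return bool(UUID_RE.match(value))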

What I will do next time?

  • testing and improving elivepatch

Google Summer of Code day21-25 (November 09, 2017, 10:42 UTC)

Google Summer of Code day 21

What was my plan for today?

  • Testing elivepatch
  • Getting the kernel version dynamically
  • Updating kpatch-build for work with Gentoo better

What I did today?

Fixing the problems pointed out by my mentor.

  • Fixed software copyright header
  • Fixed popen call
  • Used regex for checking file extension
  • Added debug option for the server
  • Changed reserved word shadowing variables
  • used os.path.join for link path
  • dynamically pass the kernel version

What I will do next time?

  • Testing elivepatch
  • Work on making elivepatch support multiple calls

Google Summer of Code day 22

What was my plan for today?

  • Testing elivepatch
  • work on making elivepatch support multiple calls

What I did today?

  • working on sending the kernel version
  • working on dividing users by UUID (Universally unique identifier)

With the first connection, because the UUID is not generated yet, it will be created and returned by the elivepatch server. The client will store the UUID in shelve so that it can authenticate the next time it queries the server. By using Python's uuid module I'm assigning a different UUID to each user. By using UUIDs we can start supporting multiple calls.
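
A sketch of that client-side persistence; the shelve path and key are illustrative, and here the value is minted locally for brevity, whereas in the flow described above the first value would come from the server's response:

import shelve
import uuid

def get_or_create_uuid(path='elivepatch_client.db'):
    # shelve gives us a small persistent dict-like store on disk
    with shelve.open(path) as db:
        if 'uuid' not in db:
            db['uuid'] = str(uuid.uuid4())  # first run: mint and remember
        return db['uuid']

print(get_or_create_uuid())  # same value on every subsequent run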

What I will do next time?

  • Testing elivepatch
  • Cleaning code

Google Summer of Code day 23

What was my plan for today?

  • Testing elivepatch
  • Cleaning code

What I did today?

  • commented some code part
  • First draft of UUID working

The elivepatch server needs to generate a UUID for each client connection and assign that UUID to the client; this is needed for managing multiple requests. The livepatch and config files will be generated in different folders for each client request and returned using the UUID. Made a working draft.

What I will do next time?

  • Cleaning code
  • Testing it
  • Go on with programming and start implementing the CVE part

Google Summer of Code day 24

What was my plan for today?

  • Cleaning code
  • testing it
  • Go on with programming and starting implementing the CVE

What I did today?

  • Working with getting the Linux kernel version from configuration file
  • Working with parsing CVE repository

I could implement reading the Linux kernel version from the configuration file, and I'm working on parsing the CVE repository. It would also be a nice idea to work on the kpatch-build script to make it Gentoo compatible. But I also ran into one problem: we need to find a way to download old gentoo-sources ebuilds for building an old kernel when the server doesn't have the needed version of the kernel sources. This depends on how much backwards compatibility we want to provide. And with old gentoo-sources we cannot be sure that it works, because old gentoo-sources used different versions of the Gentoo repository and eclasses. So it is a problem to discuss in the next days.

What I will do next time?

  • work on the CVE repository
  • testing it

Google Summer of Code day 25

What was my plan for today?

  • Cleaning code
  • testing and improving elivepatch

What I did today?

As discussed with my mentor, I worked on unifying the patch and configuration file RESTful API calls, and made a function to standardize the subprocess commands.

What I will do next time?

  • testing it and improving elivepatch

November 02, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux on DELL XPS 13 9365 and 9360 (November 02, 2017, 17:33 UTC)

Since I received some positive feedback about my previous DELL XPS 9350 post, I am writing this summary about my recent experience in installing Gentoo Linux on a DELL XPS 13 9365.

The goals of these installation notes:

  • UEFI boot using Grub
  • Provide you with a complete and working kernel configuration
  • Encrypted disk root partition using LUKS
  • Be able to type your LUKS passphrase to decrypt your partition using your local keyboard layout

Grub & UEFI & Luks installation

This installation is a fully UEFI one using grub and booting an encrypted root partition (and home partition). I was happy to see that since my previous post, everything got smoother. So even if you can have this installation working using MBR, I don’t really see a point in avoiding UEFI now.

BIOS configuration

Just like with its ancestor, you should:

  • Turn off Secure Boot
  • Set SATA Operation to AHCI

Live CD

Once again, go for the latest SystemRescueCD (it's Gentoo based, you won't be lost) as it's more up to date and supports booting on UEFI. Make it a Live USB, for example using unetbootin and the ISO on a vfat formatted USB stick.

NVME SSD disk partitioning

We'll obviously use GPT with UEFI. I found that using gdisk was the easiest. The disk itself is found at /dev/nvme0n1. Here is the partition table I used:

  • 10MB UEFI BIOS partition (type EF02)
  • 500MB UEFI boot partition (type EF00)
  • 2GB swap partition
  • 475GB Linux root partition

The corresponding gdisk commands:

# gdisk /dev/nvme0n1

Command: o ↵
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y ↵

Command: n ↵
Partition Number: 1 ↵
First sector: ↵
Last sector: +10M ↵
Hex Code: EF02 ↵

Command: n ↵
Partition Number: 2 ↵
First sector: ↵
Last sector: +500M ↵
Hex Code: EF00 ↵

Command: n ↵
Partition Number: 3 ↵
First sector: ↵
Last sector: +2G ↵
Hex Code: 8200 ↵

Command: n ↵
Partition Number: 4 ↵
First sector: ↵
Last sector: ↵ (for rest of disk)
Hex Code: ↵

Command: p ↵
Disk /dev/nvme0n1: 1000215216 sectors, 476.9 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A73970B7-FF37-4BA7-92BE-2EADE6DDB66E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048           22527   10.0 MiB    EF02  BIOS boot partition
   2           22528         1046527   500.0 MiB   EF00  EFI System
   3         1046528         5240831   2.0 GiB     8200  Linux swap
   4         5240832      1000215182   474.4 GiB   8300  Linux filesystem

Command: w ↵
Do you want to proceed? (Y/N): Y ↵

No WiFi on the Live CD? No panic

Once again on my (old?) SystemRescueCD stick, the integrated Intel 8265/8275 wifi card is not detected.

So I used my old trick: my Android phone connected to my local WiFi, used as a USB modem, which was detected directly by the live CD.

  • get your Android phone connected on your local WiFi (unless you want to use your cellular data)
  • plug in your phone using USB to your XPS
  • on your phone, go to Settings / More / Tethering & portable hotspot
  • enable USB tethering

Running ip addr will show the network card as enp0s20f0u2 (for me at least); if no IP address is set on the card, just ask for one:

# dhcpcd enp0s20f0u2

You have now access to the internet.

Proceed with installation

The only thing to prepare is to format the UEFI boot partition as FAT32. Do not worry about the UEFI BIOS partition (/dev/nvme0n1p1), grub will take care of it later.

# mkfs.vfat -F 32 /dev/nvme0n1p2

Do not forget to use cryptsetup to encrypt your /dev/nvme0n1p4 partition! In the rest of the article, I’ll be using its device mapper representation.

# cryptsetup luksFormat -s 512 /dev/nvme0n1p4
# cryptsetup luksOpen /dev/nvme0n1p4 root
# mkfs.ext4 /dev/mapper/root

Then follow the Gentoo handbook as usual for the stage3 related next steps. Make sure you mount and bind the following to your /mnt/gentoo LiveCD installation folder (the /sys binding is important for grub UEFI):

# mount -t proc none /mnt/gentoo/proc
# mount -o bind /dev /mnt/gentoo/dev
# mount -o bind /sys /mnt/gentoo/sys

make.conf settings

I strongly recommend using at least the following in your /etc/portage/make.conf:

GRUB_PLATFORM="efi-64"
INPUT_DEVICES="evdev synaptics"
VIDEO_CARDS="intel i965"

USE="bindist cryptsetup"

The GRUB_PLATFORM one is important for later grub setup and the cryptsetup USE flag will help you along the way.

fstab for SSD

Don’t forget to make sure the noatime option is used on your fstab for / and /home.

/dev/nvme0n1p2    /boot    vfat    noauto,noatime    1 2
/dev/nvme0n1p3    none     swap    sw                0 0
/dev/mapper/root  /        ext4    noatime   0 1

Kernel configuration and compilation

I suggest you use a recent >=sys-kernel/gentoo-sources-4.13.0 along with genkernel.

  • You can download my kernel configuration file (iptables, docker, luks & stuff included)
  • Put the kernel configuration file into the /etc/kernels/ directory (with a trailing s)
  • Rename the configuration file with the exact version of your kernel

Then you’ll need to configure genkernel to add luks support, firmware files support and keymap support if your keyboard layout is not QWERTY.

In your /etc/genkernel.conf, change the following options:

LUKS="yes"
FIRMWARE="yes"
KEYMAP="1"

Then run genkernel all to build your kernel and luks+firmware+keymap aware initramfs.

Grub UEFI bootloader with LUKS and custom keymap support

Now it’s time for the grub magic to happen so you can boot your wonderful Gentoo installation using UEFI and type your password using your favourite keyboard layout.

  • make sure your boot vfat partition is mounted on /boot
  • edit your /etc/default/grub configuration file with the following:
GRUB_CMDLINE_LINUX="crypt_root=/dev/nvme0n1p4 keymap=fr"

This will allow your initramfs to know it has to read the encrypted root partition from the given partition and to prompt for its password in the given keyboard layout (french here).

Now let’s install the grub UEFI boot files and setup the UEFI BIOS partition.

# grub-install --efi-directory=/boot --target=x86_64-efi /dev/nvme0n1
Installing for x86_64-efi platform.
Installation finished. No error reported

It should report no error, then we can generate the grub boot config:

# grub-mkconfig -o /boot/grub/grub.cfg

You’re all set!

You will get a gentoo UEFI boot option; you can disable the Microsoft Windows one from your BIOS to get straight to the point.

Hope this helped!

November 01, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)
Expat 2.2.5 released (November 01, 2017, 18:45 UTC)

Expat 2.2.5 has recently been released. It fixes miscellaneous bugs. For more details, please check the changelog.

If you maintain Expat packaging or a bundled version of Expat somewhere, please update to 2.2.5. Thanks!

Sebastian Pipping

October 24, 2017
Agostino Sarubbo a.k.a. ago (homepage, bugs)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# readelf -a -w -g -t -s -r -u -A -c -I -W $FILE
==92332==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x613000000f94 at pc 0x0000005b4c54 bp 0x7ffda4ab1e10 sp 0x7ffda4ab1e08
READ of size 1 at 0x613000000f94 thread T0
    #0 0x5b4c53 in get_line_filename_and_dirname /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4681:10
    #1 0x5b4c53 in display_debug_macro /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4839
    #2 0x536709 in display_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13371:16
    #3 0x536709 in process_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13457
    #4 0x536709 in process_object /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18148
    #5 0x51099b in process_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18540:13
    #6 0x51099b in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18612
    #7 0x7f4febd95680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #8 0x41a128 in getenv (/usr/x86_64-pc-linux-gnu/binutils-bin/git/readelf+0x41a128)

0x613000000f94 is located 0 bytes to the right of 340-byte region [0x613000000e40,0x613000000f94)
allocated by thread T0 here:
    #0 0x4d8318 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67
    #1 0x511975 in get_data /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:392:9
    #2 0x50eccf in load_specific_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13172:38
    #3 0x50e623 in load_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13300:10
    #4 0x5b1969 in display_debug_macro /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4717:3
    #5 0x536709 in display_debug_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13371:16
    #6 0x536709 in process_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:13457
    #7 0x536709 in process_object /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18148
    #8 0x51099b in process_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18540:13
    #9 0x51099b in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/readelf.c:18612
    #10 0x7f4febd95680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/dwarf.c:4681:10 in get_line_filename_and_dirname
Shadow bytes around the buggy address:
  0x0c267fff81a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c267fff81b0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x0c267fff81c0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c267fff81d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff81e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c267fff81f0: 00 00[04]fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8200: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8210: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8220: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8230: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8240: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==92332==ABORTING

Affected version:
2.28 – 2.28.1 – 2.29.51.20170919 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=5c1c468d0eddd0fda1ec8c5f33888657f94e3266

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00382-binutils-heapoverflow-get_line_filename_and_dirname

Timeline:
2017-09-19: bug discovered and reported to upstream
2017-09-26: upstream released a patch
2017-10-24: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in get_line_filename_and_dirname (dwarf.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==23816==ERROR: AddressSanitizer: SEGV on unknown address 0x4700004008d0 (pc 0x0000005427b6 bp 0x7ffd49033690 sp 0x7ffd49033680 T0)
==23816==The signal is caused by a READ memory access.
    #0 0x5427b5 in _bfd_safe_read_leb128 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:1019:14
    #1 0x6a9b25 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2918:19
    #2 0x69a3ff in scan_unit_for_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3168:10
    #3 0x6a2de6 in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3660:9
    #4 0x6a2de6 in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3686
    #5 0x6a0369 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4798:11
    #6 0x5f332e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #7 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #8 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #9 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #10 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #11 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #12 0x7f839bb03680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #13 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:1019:14 in _bfd_safe_read_leb128
==23816==ABORTING

Affected version:
2.29.51.20170925 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=1b86808a86077722ee4f42ff97f836b12420bb2a

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15938

Reproducer:
https://github.com/asarubbo/poc/blob/master/00381-binutils-invalidread-find_abstract_instance_name

Timeline:
2017-09-26: bug discovered and reported to upstream
2017-09-26: upstream released a patch
2017-10-24: blog post about the issue
2017-10-27: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: invalid memory read in find_abstract_instance_name (dwarf2.c)

Description:
binutils is a set of tools necessary to build programs.

The commit fix for this issue says:

The PR22200 fuzzer testcase found one way to put NULLs into .debug_line file tables. PR22205 finds another.
So MITRE considers this an incomplete fix.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==19042==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000006a76a6 bp 0x7ffde0afde30 sp 0x7ffde0afde00 T0)
==19042==The signal is caused by a READ memory access.
==19042==Hint: address points to the zero page.
    #0 0x6a76a5 in concat_filename /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8
    #1 0x696ff3 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2265:44
    #2 0x6a2d36 in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3651:26
    #3 0x6a2d36 in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3686
    #4 0x6a0369 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4798:11
    #5 0x5f332e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #6 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #7 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f6c6d793680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #12 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8 in concat_filename
==19042==ABORTING

Affected version:
2.29.51.20170925 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=a54018b72d75abf2e74bf36016702da06399c1d9

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15939

Reproducer:
https://github.com/asarubbo/poc/blob/master/00380-binutils-NULLptr-concat_filename

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-26: upstream released a patch
2017-10-24: blog post about the issue
2017-10-27: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: NULL pointer dereference in concat_filename (dwarf2.c) (INCOMPLETE FIX FOR CVE-2017-15023)

October 16, 2017
Greg KH a.k.a. gregkh (homepage, bugs)
Linux Kernel Community Enforcement Statement FAQ (October 16, 2017, 09:05 UTC)

Based on the recent Linux Kernel Community Enforcement Statement and the article describing the background and what it means, here are some Questions/Answers to help clear things up. These are based on questions that came up when the statement was discussed among the initial round of over 200 different kernel developers.

Q: Is this changing the license of the kernel?

A: No.

Q: Seriously? It really looks like a change to the license.

A: No, the license of the kernel is still GPLv2, as before. The kernel developers are providing certain additional promises that they encourage users and adopters to rely on. And by having a specific acking process it is clear that those who ack are making commitments personally (and perhaps, if authorized, on behalf of the companies that employ them). There is nothing that says those commitments are somehow binding on anyone else. This is exactly what we have done in the past when some but not all kernel developers signed off on the driver statement.

Q: Ok, but why have this “additional permissions” document?

A: In order to help address problems caused by current and potential future copyright “trolls” aka monetizers.

Q: Ok, but how will this help address the “troll” problem?

A: “Copyright trolls” use the GPL-2.0’s immediate termination and the threat of an immediate injunction to turn an alleged compliance concern into a contract claim that gives the troll an automatic claim for money damages. The article by Heather Meeker describes this quite well; please refer to that for more details. If even a short delay is inserted for coming into compliance, that delay disrupts this expedited legal process.

By simply saying, “We think you should have 30 days to come into compliance”, we undermine the “immediacy” that supports the request to the court for an immediate injunction. The threat of an immediate injunction was used to get companies to sign contracts. The troll then goes back to the same company for another known violation shortly afterwards and claims they are owed the financial penalty for breaking the contract. Signing contracts to pay damages that financially enrich one individual is completely at odds with our community’s enforcement goals.

We are showing that the community is not out for financial gain when it comes to license issues – though we do care about companies coming into compliance. All we want is for the modifications to our code to be released back to the public, and for the developers who created that code to become part of our community, so that we can continue to create the best software that works well for everyone.

This is all still entirely focused on bringing the users into compliance. The 30 days can be used productively to determine exactly what is wrong, and how to resolve it.

Q: Ok, but why are we referencing GPL-3.0?

A: By using the terms from the GPLv3 for this, we use a very well-vetted and well-understood procedure for granting the opportunity to fix the failure and come into compliance. We benefit from many months of work to reach agreement on a termination provision that worked in legal systems all around the world and was entirely consistent with Free Software principles.

Q: But what is the point of the “non-defensive assertion of rights” disclaimer?

A: If a copyright holder is attacked, we don’t want or need to require that copyright holder to give the party suing them an opportunity to cure. The “non-defensive assertion of rights” is just a way to leave everything unchanged for a copyright holder that gets sued.  This is no different a position than what they had before this statement.

Q: So you are ok with using Linux as a defensive copyright method?

A: There is a current copyright troll problem that is undermining confidence in our community – where a “bad actor” is attacking companies in a way to achieve personal gain. We are addressing that issue. No one has asked us to make changes to address other litigation.

Q: Ok, this document sounds like it was written by a bunch of big companies. Who is behind the drafting of it, and how did it all happen?

A: Grant Likely, the chairman of the Linux Foundation’s Technical Advisory Board (TAB) at the time, wrote the first draft of this document when the first copyright-troll issue happened a few years ago. He did this as numerous companies and developers approached the TAB asking that the Linux kernel community do something about this new attack on our community. He showed the document to a lot of kernel developers and a few company representatives in order to get feedback on how it should be worded. After the troll seemed to go away, this work was put on the back burner. When the copyright troll showed back up, along with a few “copycat” individuals, work on the document was restarted by Chris Mason, the current chairman of the TAB. He worked with the TAB members, other kernel developers, lawyers who have been trying to defend these claims in Germany, and the Linux Foundation’s lawyers, in order to rework the document so that it would actually achieve the intended benefits and be useful in stopping these new attacks. The document was then reviewed and revised with input from Linus Torvalds, and finally a document that the TAB agreed would be sufficient was finished. Greg Kroah-Hartman then sent that document to over 200 of the most active kernel developers of the past year to see if they, or their company, wished to support it. That produced the initial “signatures” on the document, and the acks of the patch that added it to the Linux kernel source tree.

Q: How do I add my name to the document?

A: If you are a developer of the Linux kernel, simply send Greg a patch adding your name to the proper location in the document (sorting the names by last name), and he will be glad to accept it.
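
For illustration, here is a minimal sketch of what such a submission could look like (hedged: the file path, commit message, and company parenthetical below are illustrative assumptions, not taken from the FAQ):

    # add "Jane Developer (Example Corp)" at the right alphabetical spot
    # in the statement, then post the change like any other kernel patch
    git add Documentation/process/kernel-enforcement-statement.rst
    git commit -s -m "Documentation: add my ack to the enforcement statement"
    git format-patch -1
    git send-email --to=gregkh@linuxfoundation.org 0001-*.patch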

Q: How can my company show its support of this document?

A: If you are a developer working for a company that wishes to show that it also agrees with this document, have the developer put the company name in parentheses after their name, as in the sketch above. This shows that both the developer and the company behind the developer are in agreement with this statement.

Q: How can a company or individual that is not part of the Linux kernel community show its support of the document?

A: Become part of our community! Send us patches; surely there is something you want to see changed in the kernel. If not, wonderful: post something on your company website or personal blog in support of this statement; we don’t mind that at all.

Q: I’ve been approached by a copyright troll for Netfilter. What should I do?

A: Please see the Netfilter FAQ here for how to handle this.

Q: I have another question, how do I ask it?

A: Email Greg or the TAB, and they will be glad to help answer it.

Linux Kernel Community Enforcement Statement (October 16, 2017, 09:00 UTC)

By Greg Kroah-Hartman, Chris Mason, Rik van Riel, Shuah Khan, and Grant Likely

The Linux kernel ecosystem of developers, companies and users has been wildly successful by any measure over the last couple of decades. Even today, 26 years after the initial creation of the Linux kernel, the kernel developer community continues to grow, with more than 500 different companies and over 4,000 different developers getting changes merged into the tree during the past year. As Greg says every year, the kernel continues to change faster each year than the last; this year we were running at around 8.5 changes an hour, with 10,000 lines of code added, 2,000 modified, and 2,500 lines removed every hour of every day.

The stunning growth and widespread adoption of Linux, however, also requires ever evolving methods of achieving compliance with the terms of our community’s chosen license, the GPL-2.0. At this point, there is no lack of clarity on the base compliance expectations of our community. Our goals as an ecosystem are to make sure new participants are made aware of those expectations and the materials available to assist them, and to help them grow into our community.  Some of us spend a lot of time traveling to different companies all around the world doing this, and lots of other people and groups have been working tirelessly to create practical guides for everyone to learn how to use Linux in a way that is compliant with the license. Some of these activities include:

Unfortunately, the same processes that we use to assure fulfillment of license obligations and availability of source code can also be used unjustly in trolling activities to extract personal monetary rewards. In particular, issues have arisen as a developer from the Netfilter community, Patrick McHardy, has sought to enforce his copyright claims in secret and for large sums of money by threatening or engaging in litigation. Some of his compliance claims are issues that should and could easily be resolved. However, he has also made claims based on ambiguities in the GPL-2.0 that no one in our community has ever considered part of compliance.

Examples of these claims have been: distributing over-the-air firmware; requiring a cell phone maker to deliver a paper copy of the source code offer letter; claiming that the source code server must be set up with a download speed as fast as the binary server, based on the “equivalent access” language of Section 3; requiring the GPL-2.0 to be delivered in a local language; and many others.

How he goes about this activity was recently documented very well by Heather Meeker.

Numerous active contributors to the kernel community have tried to reach out to Patrick to discuss his activities, with no response. Further, the Netfilter community suspended Patrick from contributing for violations of their principles of enforcement. The Netfilter community also published their own FAQ on this matter.

While the kernel community has always supported enforcement efforts to bring companies into compliance, we have never even considered enforcement for the purpose of extracting monetary gain.  It is not possible to know an exact figure due to the secrecy of Patrick’s actions, but we are aware of activity that has resulted in payments of at least a few million Euros.  We are also aware that these actions, which have continued for at least four years, have threatened the confidence in our ecosystem.

Because of this, and to help clarify what the majority of Linux kernel community members feel is the correct way to enforce our license, the Technical Advisory Board of the Linux Foundation has worked together with lawyers in our community, individual developers, and many companies that participate in the development of, and rely on Linux, to draft a Kernel Enforcement Statement to help address both this specific issue we are facing today, and to help prevent any future issues like this from happening again.

A key goal of all enforcement of the GPL-2.0 license has been, and continues to be, bringing companies into compliance with the terms of the license. The Kernel Enforcement Statement is designed to do just that. It adopts the same termination provisions we are all familiar with from GPL-3.0 as an Additional Permission, giving companies confidence that they will have time to come into compliance if a failure is identified. Their ability to rely on this Additional Permission will hopefully re-establish user confidence and help direct enforcement activity back to the original purpose we have all sought over the years – actual compliance.

Kernel developers in our ecosystem may add their own acknowledgement to the Statement by sending Greg a patch adding their name to it, like any other kernel patch submission, and it will be gladly merged. Those authorized to ‘ack’ on behalf of their company may add their company name in parentheses after their name as well.

Note, a number of questions did come up when this was discussed with the kernel developer community. Please see Greg’s FAQ post answering the most common ones if you have further questions about this topic.

October 13, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux listed on RethinkDB’s website (October 13, 2017, 08:22 UTC)

RethinkDB’s website has (finally) been updated and Gentoo Linux is now listed on the installation page!

Meanwhile, we have bumped the ebuild to version 2.3.6, with fixes for building on gcc-6, thanks to Peter Levine, who kindly proposed a nice PR on GitHub.
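
For Gentoo users, that means installation is a single command (a minimal sketch; dev-db/rethinkdb is assumed to be the package atom):

    # assumes the ebuild is dev-db/rethinkdb
    emerge --ask dev-db/rethinkdb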

October 03, 2017

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==26890==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6130000006d3 at pc 0x000000472115 bp 0x7ffdb7d8a0d0 sp 0x7ffdb7d89880                                                                         
READ of size 298 at 0x6130000006d3 thread T0                                                                                                                                                                      
    #0 0x472114 in __interceptor_strlen /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:302                      
    #1 0x68fea5 in parse_die /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf1.c:254:12                                                                                                           
    #2 0x68ddda in _bfd_dwarf1_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf1.c:521:13                                                                                       
    #3 0x5f2f00 in _bfd_elf_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8659:10                                                                                            
    #4 0x517755 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1004:12                                                                                                      
    #5 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7                                                                                                      
    #6 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200                                                                                                     
    #7 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7                                                                                                       
    #8 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12                                                                                                              
    #9 0x7f3dea34e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289                                                                                    
    #10 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)                                                                                                                                 
                                                                                                                                                                                                                  
0x6130000006d3 is located 0 bytes to the right of 339-byte region [0x613000000580,0x6130000006d3)                                                                                                                 
allocated by thread T0 here:                                                                                                                                                                                      
    #0 0x4d8828 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67                                                                      
    #1 0x53f138 in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9
    #2 0x799bc8 in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21
    #3 0x7b8797 in bfd_simple_get_relocated_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/simple.c:193:12
    #4 0x68e3b1 in _bfd_dwarf1_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf1.c:490:4
    #5 0x5f2f00 in _bfd_elf_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8659:10
    #6 0x517755 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1004:12
    #7 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f3dea34e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc:302 in __interceptor_strlen
Shadow bytes around the buggy address:
  0x0c267fff8080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff8090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff80a0: 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff80c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c267fff80d0: 00 00 00 00 00 00 00 00 00 00[03]fa fa fa fa fa
  0x0c267fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff8120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==26890==ABORTING

Affected version:
2.29.51.20170924 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=1da5c9a485f3dcac4c45e96ef4b7dae5948314b5

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15020

Reproducer:
https://github.com/asarubbo/poc/blob/master/00376-binutils-heapoverflow-parse_die
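
To reproduce, one can fetch the PoC and feed it to the same nm invocation shown above, using an ASan-instrumented binutils build (a sketch; the raw URL is derived from the blob link above):

    # fetch the raw PoC file
    curl -LO https://raw.githubusercontent.com/asarubbo/poc/master/00376-binutils-heapoverflow-parse_die
    # run it through an ASan-instrumented nm; it should print a
    # heap-buffer-overflow report like the one above
    nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions \
        -D 00376-binutils-heapoverflow-parse_die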

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-25: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in parse_die (dwarf1.c)

Description:
binutils is a set of tools necessary to build programs.

The stack trace of this issue appears to show a NULL pointer access. However, the upstream maintainer changed the summary of the bug report to “DW_AT_name with out of bounds reference”, and the fix commit references the same description.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==8739==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x00000053bf16 bp 0x7ffcab59ee60 sp 0x7ffcab59ee20 T0)
==8739==The signal is caused by a READ memory access.
==8739==Hint: address points to the zero page.
    #0 0x53bf15 in bfd_hash_hash /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/hash.c:441:15
    #1 0x53bf15 in bfd_hash_lookup /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/hash.c:467
    #2 0x6a2049 in insert_info_hash_table /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:487:37
    #3 0x6a2049 in comp_unit_hash_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3776
    #4 0x6a2049 in stash_maybe_update_info_hash_tables /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4120
    #5 0x69cbbc in stash_maybe_enable_info_hash_tables /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4214:3
    #6 0x69cbbc in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4613
    #7 0x5f330e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #8 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #9 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #10 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #11 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #12 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #13 0x7fd148c7b680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #14 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/hash.c:441:15 in bfd_hash_hash
==8739==ABORTING

Affected version:
2.29.51.20170924 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=11855d8a1f11b102a702ab76e95b22082cccf2f8

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15022

Reproducer:
https://github.com/asarubbo/poc/blob/master/00375-binutils-NULLptr-bfd_hash_hash

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-25: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: NULL pointer dereference in bfd_hash_hash (hash.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==3765==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000006a7376 bp 0x7ffd5f9a3d50 sp 0x7ffd5f9a3d20 T0)
==3765==The signal is caused by a READ memory access.
==3765==Hint: address points to the zero page.
    #0 0x6a7375 in concat_filename /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8
    #1 0x696e83 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2258:44
    #2 0x6a2ab8 in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3642:26
    #3 0x6a2ab8 in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3677
    #4 0x6a0104 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4789:11
    #5 0x5f330e in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #6 0x5176a3 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #7 0x514e4d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x514e4d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510976 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50f4ce in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f0f4a74b680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #12 0x41a638 in chmod (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41a638)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1601:8 in concat_filename
==3765==ABORTING

Affected version:
2.29.51.20170924 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=c361faae8d964db951b7100cada4dcdc983df1bf

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15023

Reproducer:
https://github.com/asarubbo/poc/blob/master/00374-binutils-NULLptr-concat_filename

Timeline:
2017-09-25: bug discovered and reported to upstream
2017-09-25: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: NULL pointer dereference in concat_filename (dwarf2.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==11994==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000029e at pc 0x7f800af7095d bp 0x7ffeab0e5c90 sp 0x7ffeab0e5c88            
READ of size 1 at 0x60200000029e thread T0                                                                                                           
    #0 0x7f800af7095c in bfd_getl32 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:559:24                                       
    #1 0x7f800af91323 in bfd_get_debug_link_info_1 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1206:12                       
    #2 0x7f800af91b8a in find_separate_debug_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1423:10                        
    #3 0x7f800af91a0f in bfd_follow_gnu_debuglink /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1582:10                        
    #4 0x7f800b110614 in _bfd_dwarf2_slurp_debug_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4345:19                    
    #5 0x7f800b11bc67 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4538:9                    
    #6 0x7f800b05e38b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10                                 
    #7 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9                                          
    #8 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7                                         
    #9 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200                                        
    #10 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7                                         
    #11 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12                                                
    #12 0x7f8009fa3680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289                      
    #13 0x41ac18 in _init (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41ac18)                                                                    

0x60200000029e is located 0 bytes to the right of 14-byte region [0x602000000290,0x60200000029e)
allocated by thread T0 here:
    #0 0x4d8e08 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67
    #1 0x7f800af6f3fc in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9
    #2 0x7f800af64b9f in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21
    #3 0x7f800af91230 in bfd_get_debug_link_info_1 /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1191:8
    #4 0x7f800af91b8a in find_separate_debug_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1423:10
    #5 0x7f800af91a0f in bfd_follow_gnu_debuglink /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/opncls.c:1582:10
    #6 0x7f800b110614 in _bfd_dwarf2_slurp_debug_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4345:19
    #7 0x7f800b11bc67 in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4538:9
    #8 0x7f800b05e38b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8695:10
    #9 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #10 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #11 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #12 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #13 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #14 0x7f8009fa3680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:559:24 in bfd_getl32
Shadow bytes around the buggy address:
  0x0c047fff8000: fa fa 00 01 fa fa 00 06 fa fa fd fa fa fa fd fa
  0x0c047fff8010: fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa
  0x0c047fff8020: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
  0x0c047fff8030: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
  0x0c047fff8040: fa fa fd fa fa fa fd fd fa fa fd fa fa fa 00 fa
=>0x0c047fff8050: fa fa 00[06]fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==11994==ABORTING

Affected version:
2.29.51.20170924 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=52b36c51e5bf6d7600fdc6ba115b170b0e78e31d

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15021

Reproducer:
https://github.com/asarubbo/poc/blob/master/00373-binutils-heapoverflow-bfd_getl32

Timeline:
2017-09-24: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: heap-based buffer overflow in bfd_getl32 (opncls.c)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

 # nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==11125==ERROR: AddressSanitizer: FPE on unknown address 0x7f5e01fd42e5 (pc 0x7f5e01fd42e5 bp 0x7ffdaa5de290 sp 0x7ffdaa5de0e0 T0)
    #0 0x7f5e01fd42e4 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c
    #1 0x7f5e01fe192b in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3608:26
    #2 0x7f5e01fe192b in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3643
    #3 0x7f5e01fde94f in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4755:11
    #4 0x7f5e01f1c20b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8694:10
    #5 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #6 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #7 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #8 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #9 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #10 0x7f5e00e61680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #11 0x41ac18 in _init (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41ac18)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c in decode_line_info
==11125==ABORTING

Affected version:
2.29.51.20170921 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=d8010d3e75ec7194a4703774090b27486b742d48

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15025

Reproducer:
https://github.com/asarubbo/poc/blob/master/00372-binutils-FPE-decode_line_info

Timeline:
2017-09-22: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: divide-by-zero in decode_line_info (dwarf2.c)

Description:
binutils is a set of tools necessary to build programs.

The relevant ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==22616==ERROR: AddressSanitizer: stack-overflow on address 0x7ffc2948efe8 (pc 0x0000004248eb bp 0x7ffc2948f8e0 sp 0x7ffc2948efe0 T0)
    #0 0x4248ea in __asan::Allocator::Allocate(unsigned long, unsigned long, __sanitizer::BufferedStackTrace*, __asan::AllocType, bool) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_allocator.cc:381
    #1 0x41f8f3 in __asan::asan_malloc(unsigned long, __sanitizer::BufferedStackTrace*) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_allocator.cc:814
    #2 0x4d8de4 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:68
    #3 0x7ff17b5b237c in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9                                                                                                     
    #4 0x7ff17b5a7b2f in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21                                                                               
    #5 0x7ff17b5e16d3 in bfd_simple_get_relocated_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/simple.c:193:12                                                                     
    #6 0x7ff17b75626e in read_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:556:8                                                                                                   
    #7 0x7ff17b772053 in read_indirect_string /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:730:9                                                                                           
    #8 0x7ff17b772053 in read_attribute_value /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1189                                                                                            
    #9 0x7ff17b76ebf4 in read_attribute /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:1306:14                                                                                               
    #10 0x7ff17b76ebf4 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2913                                                                                    
    #11 0x7ff17b76ec98 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2930:12                                                                                 
    #12 0x7ff17b76ec98 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2930:12                                                                                 
    [..cut..]
    #252 0x7ff17b76ec98 in find_abstract_instance_name /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2930:12

Affected version:
2.29.51.20170921 and possibly earlier releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=52a93b95ec0771c97e26f0bb28630a271a667bd2

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-15024

Reproducer:
https://github.com/asarubbo/poc/blob/master/00371-binutils-infiniteloop-find_abstract_instance_name

Timeline:
2017-09-22: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-10-03: blog post about the issue
2017-10-04: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

binutils: infinite loop in find_abstract_instance_name (dwarf2.c)

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Taking over a postal address, An Post edition (October 03, 2017, 11:03 UTC)

As I announced a few months ago, I’m moving to London. One of the tasks before the move is setting up postal address redirection, so that the services unable to mail me across the Irish Sea can still reach me. Luckily I know for a fact that An Post (the Irish postal service) has a redirection service, if not a cheap one.

A couple of weeks ago, I went to sign up for the service, and I found that I had two choices: I could go to the post office (which is inside the EuroSpar next door), show a photo ID and a proof of address, and pay cash or with a debit card [1]; or I could fill in the form online, pay with a credit card, and then post out a physical signed piece of paper. I chose the latter.

There are many laughable things I could complain about in the process of setting up the redirection, but I want to focus on what I think is the biggest and most important problem. After you choose the addresses (original and new destination), it will ask you where you want your confirmation PIN sent.

There is a reason why they do that. I set up the redirect well before I moved, and in particular I chose to redirect mail from my apartment to my local office — this way I can either batch together the mail, or simply ask for an inter-office forwarding. This meant I had access to both the original and the new address at the same time — but for many people, particularly moving out of the country, by the time they know where to forward the mail, they might only have access to the new address.

The issue is that if you decide to get the PIN at the new address, the only notification sent to the old address is a single letter confirming the activation of the redirection. This is likely meant so you can call An Post and have them cancel the redirection if it was set up against your will.

While this stops a possible long-term takeover of a mail address, it still allows a wide window of opportunity for a takeover. Also, it has one significant drawback: the letter does not tell you where the mail will be redirected!

Let’s say you want to take over someone’s address (we’ll look later at what for). First you need to know their address; this is the simplest part, of course. Now you can fill in the request on An Post’s website for the redirection – the original address is not given any indication that a request was filed – and get the PIN at the new address. Once the PIN is received, there is some time to enable the redirection.

Until activation is completed, and the redirection time is selected, no communication is given to the original address.

If your target happens to be travelling or otherwise unable to get to their mail for a few weeks, then you have an opportunity. You can take over the address, get some documents at the given address, and get your hands on them. Of course the target will become suspicious when coming back, finding a note about redirection and no mail. But finding a way to recover the mail without being tied to an identity is left as an exercise to the reader.

So what would you accomplish, besides annoying your target and possibly getting some of their unsolicited mail? Well, there is a significant number of interesting targets in the postal mail you receive in Ireland.

For instance, take credit card statements. Tesco Bank does not allow you to receive them electronically, and Ulster Bank will send you the paper copy even if you opt in to all the possible electronic communications. And a credit card statement in Ireland includes a lot more information than in other countries – just enough to take over the credit card. Tesco Bank, for instance, will authenticate you with the 16-digit PAN (on the statement), your full address (on the statement), the credit limit (you guessed it, on the statement), and your date of birth (okay, this one is not on the statement, but you can probably find my date of birth pretty easily).

And even if you don’t want to take over the full credit card, having the PAN is extremely useful in and of itself for taking over other accounts. And since you have the statement, it wouldn’t be difficult to figure out what the card is used for – take over an Amazon account, and you can take over a lot more things.

But there are more concrete problems too – for instance, I receive a significant amount of pseudo-cash [2] in the form of Tesco vouchers, and having physical control of the vouchers effectively means having the cash in your hand. Or say you want to get a frequent-guest or frequent-flyer card, because a card is often just enough to get the benefits and have access to the information on the account. Or just get enough of a proof of address to register on any other service that requires one.

Because let’s remember: an authentication system is only as strong as its weakest link. So all those systems requiring a proof of address? You can skip over all of them by having just one recent enough proof of address, obtained by hijacking someone’s physical mail. And that’s just a matter of paying for it.


  1. An Post is well known for only accepting VISA Debit cards, and refuses both MasterCard Debit and VISA Credit cards. Funnily enough, they issue MasterCard cards, but that’s a story for another time.
  2. I should at some point write a post about pseudo-cash and the value of a euro when it’s not a coin.

October 01, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
How blogging changed in the past ten years (October 01, 2017, 16:04 UTC)

One of the problems that keeps poking back at me every time I look for an alternative software for this blog, is that it somehow became not your average blog, particularly not in 2017.

The first issue is that there is a lot of history. While the current “incarnation” of the blog, with the Hugo install, is fairly recent, I have been porting over a long history of my pseudo-writing, merging back into this one big collection the blog posts coming from my original Gentoo Developer blog, as well as the few posts I wrote on the KDE Developers blog and a very minimal amount of content from my (mostly Italian) blog when I was in high school.

Why did I do it that way? Well, the main thing is that I don’t want to lose the memories. As some of you might know already, I have faced my mortality before, and I came to realize that this blog is probably the only thing of substance that I had a hand in that will outlive me. So I don’t want to let migrations, service shutdowns, and other similar changes take away what I did. This is also why I published to this blog the articles I wrote for other websites, namely NewsForge and Linux.com (back when they were part of Geeknet).

Some of the recovery work actually required effort. As I said above, there is a minimal amount of content that comes from my high-school-days blog. And it’s in Italian, which does not make it particularly interesting or useful. I had deleted that blog altogether years and years ago, so I had to use the Wayback Machine to recover at least some of the posts. I will be going through all my old backups in the hope of finding that one last backup that I remember making before tearing the thing down.

Why did I tear it down in the first place? It’s clearly a teenager’s blog and I am seriously embarrassed by the way I thought and wrote. It was 13–14 years ago, and I admitted last year that I can tell how many times I’ve been wrong. But this is not the change I want to talk about.

The change I want to talk about is the second issue with finding good software to run my blog: blogging is not what it used to be ten years ago. Or fifteen years ago. It’s not just that a lot of money got involved in the meantime, so that now there is a significant amount of “corporate blogs”, which end up being either product announcements in a different form, or another outlet for not-quite-magazine content. I know of at least a couple of Italian newspapers that provide “blogs” for their writers, which look almost exactly like the paper’s website but do not have to be reviewed by the editorial board.

In addition to this, a lot of people’s blogs stopped providing as many details of their personal life as they used to. Likely, this is related to the fact that we now know just how nasty people on the Internet can be (read: just as nasty as people off the Internet), and a lot of the people who used to write lightheartedly don’t feel as safe, and rightly so. But there is probably another reason: “social media”.

The advent of Twitter and Facebook made it so that there is less need to post short personal entries, too. And Facebook in particular appears to have swallowed most of the “cutesy memes” such as quizzes and lists of things people have or have not done. I know there are still a few people who insist on not using these big-name social networks and still post for their friends and family on blogs, but I have a feeling they are quite the minority. And I can tell you for sure that since I signed up for Facebook, a lot of my smaller “so here’s that” posts went away.

Distribution chart of blog post sizes over time

This is a rough plot of blog post sizes over time. In particular, I have used the raw file size of the markdown sources used by Hugo, in bytes, which makes it imperfect for Unicode content, and it includes the “front matter”, which means that the non-Hugo-native posts in particular have their title effectively doubled by the slug. But it shows the trends particularly well.
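
As a rough, hypothetical sketch of how such size data can be gathered (assuming GNU find and Hugo’s conventional content/ layout, neither of which is confirmed by the post):

    # list each markdown source with its raw size in bytes, smallest first
    find content -name '*.md' -printf '%s %p\n' | sort -n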

You can see from that graph that some time around 2009 I almost entirely stopped writing short blog posts. That is around the time Facebook took off in Italy, and a lot of my interaction with friends started happening there. If you’re curious about the visible lack of posts around the middle of 2007, that was the pancreatitis that had me disappear for nearly two months.

With this reduction in the scope of what people actually write on blogs, I also have a feeling that lots of people were left without anything to say. A number of blogs I still follow (via NewsBlur, since Google Reader was shut down) post once or twice a year. Planets are still a thing, and I still subscribe to a number of them, but I realize I don’t even recognize half the names nowadays. Lots of the “old guard” stopped blogging almost entirely, possibly because of a lack of engagement, or simply because, like me, many found a full-time job (or a full-time family) that takes most of their time.

You can definitely see from the plot that even my own blogging has significantly slowed down over the past few years. Part of it was the tooling giving up on me a few times, but it is also a lack of the energy to write all the time, as I used to. Plus there is another problem: I now feel I need to be more accurate in what I’m saying and in the words I’m using. This is in part because I grew up and know how much words can hurt people even when meant the right way, but also because it turns out that when you put yourself in certain positions it’s too easy to attack you (been there, done that).

A number of people argue that it was the demise of Google Reader [1] that caused blogs to die, but as I said above, I think it’s just the evolution of the concept veering towards other systems that turned out to be more easily reachable by users.

So are blogs dead? I don’t think so. But they are getting harder to discover, because people use other platforms, and it gets difficult to follow all of them. Hacker News and Reddit are becoming many geeks’ default way to discover content, and that has the unfortunate side effect that less of the conversation happens in shared media. I am indeed bothered by people who prefer discussing the merits of my posts on those external websites rather than actually engaging in the comments, if nothing else because I do not track those platforms, so the feeling I get is of people talking behind my back – I would prefer if people actually told me when they share my posts on those platforms; for Reddit I can at least use IFTTT to self-stalk the blog, but that’s a different problem.

Will we still have blogs in 10 years? Probably yes, but they will not look like the ones we’re used to most likely. The same way as nowadays there still are personal homepages, but they clearly don’t look like Geocities, and there are social media pages that do not look like MySpace.


  1. Usual disclaimer: I do work for Google at the time of writing this, but these are all personal opinions that have no involvement from the company. For reference, I signed the contract before the Google Reader shutdown announcement, but started after it. I was also sad, but I found NewsBlur a better replacement anyway. [return]

September 27, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

This is a workaround for a FreeType/fontconfig problem, but may be applicable in other cases as well. For Gentoo users, the related bug is 631502.

Recently, after updating Mozilla Firefox to version 52 or later (55.0.2, in my case), and Mozilla Thunderbird to version 52 or later (52.3.0, in my case), I found that fonts were rendering horribly under Linux. It looked essentially like there was no anti-aliasing or hinting at all.

Come to find out, this was due to a change in the content rendering engine, which is briefly mentioned in the release notes for Firefox 52 (but it also applies to Thunderbird). Basically, on Linux, the default engine changed from cairo to Google’s Skia.

Ugly fonts in Firefox and Thunderbird under Linux - skia and cairo

For each application, the easiest method for getting the fonts to render nicely again is to change two keys directly in the configuration editor. In Firefox, simply go to the address bar and type about:config. In Thunderbird, the editor can be launched by going to Menu > Preferences > Advanced > Config Editor. Once there, the two keys that need to change are:

gfx.canvas.azure.backends
gfx.content.azure.backends

They likely have values of “skia” or a comma-separated list with “skia” as the first value. On my Linux hosts, I changed the value from skia back to cairo, restarted the applications, and all was again right in the world (or at least in the Mozilla font world 😛 ).
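
If you prefer a file-based approach, the same two preferences can be pinned from a user.js file in the application's profile directory (a sketch; the exact profile path varies per installation, and the file may need to be created):

// user.js in the Firefox or Thunderbird profile directory
user_pref("gfx.canvas.azure.backends", "cairo");
user_pref("gfx.content.azure.backends", "cairo");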

Hope that helps.

Cheers,
Zach

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is the story of how I ended up calling my bank at 11pm on a Sunday night to ask them to cancel my credit card. But it started with a completely different problem: I thought I had found a bug in some PDF library.

I asked Hanno and Ange, since they both have lots more experience with PDF as a format than me (I have nearly zero), as I expected the string (which Dolphin was showing as the document's title) to be complete garbage, either coming from random parts of the file or from memory within the process that was generating or reading it, and thought it would be completely inconsequential. As you probably have guessed from the spoiler in both the title of the post and the first paragraph, it was not the case. Instead that string is a representation of my credit card number.

After a few hours, having worked on other tasks and having gone back and forth with various PDFs, including finding a possibly misconfigured AGPL library in my bank’s backend (worthy of another blog post), I realized that Okular does not actually show a title for this PDF, which suggested a bug in Dolphin (the Plasma file manager). In particular, Poppler’s pdfinfo also didn’t show any title at all, which suggested the problem was in a different part of the code. Since the problem was happening with my credit card statements, and the credit card statements include the full 16-digit PAN, I didn’t want to just file a bug attaching a sample, so instead I started asking around for help to figure out which part of the code was involved.

Albert Astals Cid sent me in the right direction by telling me the low-level implementation was coming from KFileMetadata, and that quickly pointed me at this interesting piece of heuristics, which is designed to guess the title of a document by looking at the first page. The code is quite convoluted, so I couldn’t at first exclude uninitialized memory access, but I couldn’t figure out where it could be coming from either, so I decided to copy the code into a single executable to play around with it. The good news was that it would give me the exact same answer, so it was not uninitialized memory. Instead, the parser was mis-reading something in the file, which, by being stable, meant it wasn’t likely a security issue, just sub-optimal code.

As there is no current, updated tool for PDF that behaves like mkvinfo, that is, prints an element-by-element description of the content of a PDF file, I decided to just play with the code to figure out how it decided what to use as the title. Printing out each of the possible titles being evaluated showed it was considering first my address, then part of the summary information, and then this strange string. What was going on there?

The code is a bit difficult to follow, particularly for me at first, since I had no idea how PDF works to begin with. But the summary of it is that it goes through the textboxes (I knew already that PDF text is laid out in boxes) of the first page, joining together the text if the box has markers to follow up. Each of these entries is stored into a map keyed by text height, together with a “watermark” of the biggest text size encountered during this loop. If, when looking at a textbox, its height is lower than the previous maximum height, it gets discarded. At the end, the content of the first, biggest textbox is reported as the title.
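
Stripped of the details, the heuristic reads roughly like the following sketch (illustrative C with made-up names; the real code is C++ inside KFileMetadata, working on Poppler text boxes):

#include <stddef.h>

/* A text box from the first page, reduced to what the heuristic cares about. */
struct textbox {
    const char *text;
    double height;
};

/* Return the text of the first box at the largest height encountered;
 * boxes smaller than the current maximum are discarded. */
static const char *guess_title(const struct textbox *boxes, size_t n)
{
    double max_height = 0.0;
    const char *title = NULL;
    for (size_t i = 0; i < n; i++) {
        if (boxes[i].height > max_height) {
            max_height = boxes[i].height;
            title = boxes[i].text;
        }
    }
    return title;
}

As we'll see, this is also why a barcode rendered as very tall glyphs through a barcode font can win over every legitimate heading on the page.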

Once I disabled the height check and always reported all the considered title textboxes, I noticed something interesting: the string that kept being reported was found together with a number of textboxes that are drawn on top of the bank giro1 credit slip. The cheque includes a very big barcode… and that’s where I started sweating a bit.

The reason for the sweat is that by then I had already guessed I had made a huge mistake sharing the string that Dolphin was showing me. The reference to pay off a credit card is universally the full 16-digit number (PAN). Indeed the full number is printed on the cheque, again as the “An Post Ref” (An Post being the Irish postal system), and the account information (10 digits, excluding the 6-digit IIN) is printed on the bottom of the same. All of this is why I didn’t want to share the sample file, and why I always destroy the statements that arrive, in paper form, from the banks. At this point, the likelihood of the barcode containing the same information was seriously high.

My usual Barcode Scanner app for Android didn’t manage to understand the barcode though, which made things awkward. Instead I decided to confirm I was actually looking at the content of the barcode in an encoded form with a very advanced PDF inspection tool: strings $file | grep Font. This did bring up a reference to /BaseFont /Code128ARedA, and that was the confirmation I needed. Indeed a quick search for that name brings you to a public domain font that implements Code 128 barcodes as a TrueType font. This is not uncommon, particularly as it’s the same method used by most label printers, including the Dymo I used to use for labelling computers.

At that point a quick comparison of the barcode I had in front of me with one generated through an online generator (but only for the IIN because I don’t want to leak it all), confirmed I was looking at my credit card number, and that my tweet just leaked it — in a bit of a strange encoding that may take some work to decode, but still leaked it. I called Ulster Bank and got the card cancelled and replaced.

What lessons can I learn from this experience? First of all, to consider credit card statements even more of a security risk than I ever imagined. It also gave me a practical instance of what Brian Krebs has been advocating for years regarding barcodes on boarding passes and similar documents. In particular it looks like both Ulster Bank and Tesco Bank use the same software to generate the credit card statements (which, you can easily tell, is not the same system that generates the normal bank statements), which is developed by Fiserv (their name is in the Author field of the PDF), and they all rely on using the normal full card number for payment.

This is something I don’t really understand. In Italy, you only use the 16-digit number to pay the bank one-off by wire, and the statements never included more than the last five digits of the card. Except for the Italian American Express — but that does not surprise me too much, as they manage it from London as well.

I’m now looking to see how I can improve the guessing of the title for PDFs in the KFileMetadata library — although I’m warming up to the idea of just sending a patch that deletes that part of the code altogether, so that if the file has no title, no title is displayed. The simplest solutions are, usually, the best.


  1. The Wikipedia page appears to talk only of the UK system. Ireland, as usual, appears to have kept their own version of the same system, and all the credit card statements, and most bills, will have a similar pre-printed “credit cheque” at the bottom. Even when they are direct-debited. [return]

September 26, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux Userspace 2.7 (September 26, 2017, 12:50 UTC)

A few days ago, Jason "perfinion" Zaman stabilized the 2.7 SELinux userspace on Gentoo. This release has quite a few new features, which I'll cover in later posts, but for distribution packagers the main change is that the userspace now has many more components to package. The project has split up the policycoreutils package into separate packages so that deployments can be made more specific.

Let's take a look at the various userspace packages again and learn what their purpose is, so that you can decide whether or not they're needed on a system. Also, when I cover the contents of a package, be aware that it is based on the deployment on my system, which might or might not be a complete installation (as with Gentoo, different USE flags can trigger different package deployments).

Agostino Sarubbo a.k.a. ago (homepage, bugs)
binutils: heap-based buffer overflow in read_1_byte (dwarf2.c) (September 26, 2017)

Description:
binutils is a set of tools necessary to build programs.

The complete ASan output of the issue:

# nm -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D $FILE
==3235==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x613000000512 at pc 0x7f7c93ae3c88 bp 0x7ffe38d7a970 sp 0x7ffe38d7a968
READ of size 1 at 0x613000000512 thread T0
    #0 0x7f7c93ae3c87 in read_1_byte /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:616:10
    #1 0x7f7c93ae3c87 in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2311
    #2 0x7f7c93aee92b in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3608:26
    #3 0x7f7c93aee92b in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3643
    #4 0x7f7c93aeb94f in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4755:11
    #5 0x7f7c93a2920b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8694:10
    #6 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #7 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #8 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #9 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #10 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #11 0x7f7c9296e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #12 0x41ac18 in _init (/usr/x86_64-pc-linux-gnu/binutils-bin/git/nm+0x41ac18)

0x613000000512 is located 0 bytes to the right of 338-byte region [0x6130000003c0,0x613000000512)
allocated by thread T0 here:
    #0 0x4d8e08 in malloc /var/tmp/portage/sys-libs/compiler-rt-sanitizers-5.0.0/work/compiler-rt-5.0.0.src/lib/asan/asan_malloc_linux.cc:67
    #1 0x7f7c9393a37c in bfd_malloc /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/libbfd.c:193:9
    #2 0x7f7c9392fb2f in bfd_get_full_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/compress.c:248:21
    #3 0x7f7c939696d3 in bfd_simple_get_relocated_section_contents /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/simple.c:193:12
    #4 0x7f7c93ade26e in read_section /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:556:8
    #5 0x7f7c93adef3c in decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:2047:9
    #6 0x7f7c93aee92b in comp_unit_maybe_decode_line_info /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3608:26
    #7 0x7f7c93aee92b in comp_unit_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:3643
    #8 0x7f7c93aeb94f in _bfd_dwarf2_find_nearest_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:4755:11
    #9 0x7f7c93a2920b in _bfd_elf_find_line /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/elf.c:8694:10
    #10 0x517c83 in print_symbol /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1003:9
    #11 0x51542d in print_symbols /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1084:7
    #12 0x51542d in display_rel_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1200
    #13 0x510f56 in display_file /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1318:7
    #14 0x50faae in main /var/tmp/portage/sys-devel/binutils-9999/work/binutils/binutils/nm.c:1792:12
    #15 0x7f7c9296e680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-devel/binutils-9999/work/binutils/bfd/dwarf2.c:616:10 in read_1_byte
Shadow bytes around the buggy address:
  0x0c267fff8050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff8060: 00 00 00 00 00 00 00 00 00 00 00 04 fa fa fa fa
  0x0c267fff8070: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c267fff8080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c267fff8090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c267fff80a0: 00 00[02]fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c267fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==3235==ABORTING

Affected version:
2.29.51.20170921 and maybe past releases

Fixed version:
N/A

Commit fix:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=515f23e63c0074ab531bc954f84ca40c6281a724

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
CVE-2017-14939

Reproducer:
https://github.com/asarubbo/poc/blob/master/00370-binutils-heapoverflow-read_1_byte

Timeline:
2017-09-21: bug discovered and reported to upstream
2017-09-24: upstream released a patch
2017-09-26: blog post about the issue
2017-09-29: CVE assigned

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:
binutils: heap-based buffer overflow in read_1_byte (dwarf2.c)

September 24, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
(Audio)book review: We Are Legion (We Are Bob). (September 24, 2017, 11:04 UTC)

I have not posted a book review in almost a year, and I have not even written one for Goodreads, which I probably should do as well. I feel kind of awful for it because I do have a long list of good titles I appreciated in the meantime. So let me spend a few words on this one.

We Are Legion (We Are Bob) tickled me in the list of Audible books for a while, because the name sounded so ludicrous I was expecting something almost along the lines of The Hitchhiker’s Guide to the Galaxy. It was not that level of humour, but the book didn’t really disappoint either.

The book starts, in the first scene, in present time, with the protagonist going to a cryogenic facility… and you can tell from the cover that’s just a setup, of course. I found it funny from the first scenes that the author clearly is talking of something he knows directly, so I wasn’t entirely surprised when I found out that he’s a computer programmer. I’m not sure what it is with people in my line of work deciding to write books, but the results are quite often greatly enjoyable, even if it takes a while to get into them. On this note, Tobias Klausmann of Gentoo fame wrote a two-part series1, which I definitely recommend.

Once you get to the main stage of the book, it starts off in the direction you expect, with spaceships and planets, as the cover lets you imagine. Some of the reviews I read before buying the book found it lightweight and a no-brainer, but I don’t see myself agreeing. While taking it with a lot of spirit and humour, and a metric ton of pop-culture references2, the topics that are brought up include self-determination, the concept of soul as seen from an atheist point of view, global politics as seen from lightyears away3, and the vast multitudes of “oneselves”.

Spoilers in this paragraph, yes definitely spoilers, and a bit of text so you don’t read them by accident. Skip to the following paragraph if you don’t want any. Indeed, it’s very hard to tell, and a question that the book spends quite a bit of time pondering without an answer, whether the character we see in the first scene is actually the protagonist of the book. Because what we have later is a computer “replicant” of his memories and consciousness… and a multitude of copies of that, each acting more or less differently from the original, leaving open the question whether the copies are losing something in the process, or whether it is the knowledge of not being the “original” that makes them change. I found this maybe even more profound than the author intended.

Spoilers aside, I found the book enjoyable. It’s not an all-out bright and shiny future, but it’s also not the kind of grim and dark dystopia that appears to be a dime a dozen nowadays. The one thing that still bothers me a little bit, and that probably is because I would have fallen into the same trap, is that the vast majority of the book focuses on technical problems and solutions, though to be fair it pulls it off (in my opinion) quite healthily, rather than by hiding all the human factors away into “someone else’s problem” territory. It reminded me of an essay I had to write in middle school about the “school of the future”, and I ended up not spending a single word on people, even after the teacher pointed out I should have done so and got me to rewrite it. I’m glad there are people (who are not me) studying humanities.

I found it funny that the Wikipedia page about the book insisted on pointing out that reviewers noted the lack of female characters. That’s true, there are a handful of throwaway women throughout the book, but no major character. I don’t know if there was any way around it given the plot as it stands now, though, so I wouldn’t read too much into it, as the book itself feels a lot like a trip into one’s own essence, and I’m not sure I’d expect an author to be able to analyse anyone but themselves this way. I have not read/listened to the other books in the series (though I did add them to my list now), so maybe that changes with the change of focus, not sure.

As for the audiobook itself, which I got through Audible where it was at a “special price” of $1.99, I just loved the production. Ray Porter does a fantastic job, and since the book is all written in the first person (from somewhat different points of view), his voicework, making it clear which point of view is speaking, is extremely helpful for not getting lost.

All in all, I really enjoyed the book, and look forward to comparing it with the rest of the series. If you’re looking for something that distracts you from all the dread that is happening right now in the world, and can give you a message of “If we get together, we can do it!”, then this is a worthy book.


  1. I hadn’t realized book two was out until I looked Tobias up on Amazon. I’ll have stern words with him next time I see him for not warning me! [return]
  2. This happens most of the time with geeks writing books, although not all the time thankfully. From one side it does build a nice sense of camaraderie with the protagonists because they feel like “one of us” but on the other hand sometimes it feels too much. Unless it’s part of the story, like here or in Magic 2.0. [return]
  3. Pun totally intended. [return]

September 20, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Free Idea: structured access logs for Apache HTTPD (September 20, 2017, 14:04 UTC)

This post is part of a series of free ideas that I’m posting on my blog in the hope that someone with more time can implement. It’s effectively a very sketched proposal that comes with no design attached, but if you have time you would like to spend learning something new, but no idea what to do, it may be a good fit for you.

I have been commenting on Twitter a bit about the lack of decent tooling to deal with Apache HTTPD’s Combined Logging Format (inherited from NCSA). For those who do not know about it, this is the format used by standard access_log files, which include information about requests: the source IP, the time, the requested path, the status code and the User-Agent used.

These logs are useful for debugging but are also consumed by tools such as AWStats to produce useful statistics about the request patterns of a website. I used these extensively when writing my ModSecurity rulesets, and I still keep an eye on them, for instance to report wasteful feed readers.

The files are simple text files, and that makes it easy to act on them: you can use tail and grep, and logrotate needs no special code beside moving the file and reloading Apache to have it re-open the paths. At the same time, this simplicity makes it hard to query for particular entries or fields, such as to get the list of User-Agent strings present in a log. Some of the suggestions I got over Twitter to solve this were to use awk, but as it happens, these logs are not actually parseable with a straightforward field separation, as the example below shows.
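
To make this concrete, here is the sample Combined Log Format line from Apache's own documentation; the quoted request, referrer and User-Agent fields all contain spaces, so naive whitespace splitting only works up to the response size:

127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"

# $9 happens to be the status code, but the User-Agent spans an
# unpredictable number of whitespace-separated fields:
$ awk '{ print $9 }' access_log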

Not having found a good set of tools to handle these formats directly, I have been complaining that we should probably start moving away from simple text files to more structured log formats. Indeed, I know that there used to be at least some support for logging directly to MySQL and other relational databases, and that there is more complicated machinery, often used by companies and startups, that processes these access logs into analysis software and so on. But all of these tend to be high overhead, much more than what I, or someone else with a small personal blog, would care about implementing.

Instead I think it’s time to start using structured file logs. A few people, including thresh from VideoLAN, suggested using JSON to write the log files. This is not a terrible idea, as the format is at least well understood and easy to interface with most other software, but honestly I would prefer something with an actual structure, a schema that can be followed. Of course I don’t mean XML; I would rather suggest having a standardized schema for proto3. Part of that, I guess, is because I’m used to using this at work, but also because I like the idea of being able to just define my schema and have it generate the code to parse the messages.
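
As a sketch of what such a schema could look like (the field selection is purely illustrative, not a proposed standard):

syntax = "proto3";

// One access_log entry, mirroring the Combined Log Format fields.
message AccessLogEntry {
  string remote_addr = 1;
  string remote_user = 2;
  int64 timestamp_usec = 3;         // microseconds since epoch
  string request_line = 4;          // e.g. "GET / HTTP/1.1"
  int32 status = 5;
  int64 response_bytes = 6;
  string referrer = 7;
  string user_agent = 8;
  map<string, string> headers = 9;  // optionally, the full header set
}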

Unfortunately there is currently no support or library to access a sequence of protocol buffer messages. Using a single message with repeated sub-messages would work, but it is not append-friendly, so there is no way to just keep writing it to a file while being able to truncate it and resume writing to it, which is a property needed for a proper structured log format to actually fit in the space previously occupied by text formats. This is something I don’t usually have to deal with at work, but I would assume that a simple LV (Length-Value) or LVC (Length-Value-Checksum) encoding would be okay to solve this problem.
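
For concreteness, here is a minimal sketch of what such an LVC framing could look like, assuming the payload is an already-serialized protobuf message (all names are made up, and a real format would pin down byte order and use a proper CRC instead of the toy checksum):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy checksum for illustration only; a real format would use CRC32. */
static uint32_t checksum(const uint8_t *buf, uint32_t len)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < len; i++)
        sum = (sum << 1) ^ buf[i];
    return sum;
}

/* Append one record as: 4-byte length, payload bytes, 4-byte checksum.
 * Each record is self-delimiting, so the file can be appended to,
 * truncated at a record boundary, and resumed, unlike a single
 * protobuf message with repeated sub-messages. */
static int append_record(FILE *log, const uint8_t *payload, uint32_t len)
{
    uint32_t sum = checksum(payload, len);
    if (fwrite(&len, sizeof len, 1, log) != 1) return -1;
    if (len > 0 && fwrite(payload, len, 1, log) != 1) return -1;
    if (fwrite(&sum, sizeof sum, 1, log) != 1) return -1;
    return fflush(log) == 0 ? 0 : -1;
}

int main(void)
{
    FILE *log = fopen("access_log.lvc", "ab");
    if (!log) return 1;
    const char *payload = "serialized-protobuf-message-bytes";
    int rc = append_record(log, (const uint8_t *)payload,
                           (uint32_t)strlen(payload));
    fclose(log);
    return rc != 0;
}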

But what about other properties of the current format? Well, the obvious answer is that, assuming your structured log contains at least as much information as the current log (but possibly more), you can always have tools that convert on the fly to the old format. This would for instance allow having a special tail-like command and a grep-like command that provide compatibility with the way the files are currently looked at manually by your friendly sysadmin.

Having more structured information would also allow easier, or deeper, analysis of the logs. For instance you could log the full set of headers (like ModSecurity does) instead of just the referrer and User-Agent, and customize the output on the conversion side rather than lose the details when writing.

Of course this is just one possible way to solve this problem, and just because I would prefer working with technologies I’m already familiar with, it does not mean I wouldn’t take another format that is similarly low-dependency and easy to deal with. I’m just thinking that the change-averse solution of not changing anything and keeping logs in text format may be counterproductive in this situation.

September 17, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Anyone working on motherboard RGB controllers? (September 17, 2017, 11:04 UTC)

I was contacted by email last week by a Linux user, probably after noticing my recent patch for the gpio_it87 driver in the kernel. They were hoping my driver could be extended to the IT7236 chips that are used in a number of gaming motherboards for controlling RGB LEDs.

Having left the case modding world after my first and only ThermalTake chassis – my mother gave me hell for the fan noise, mostly due to the plexiglass window on the side of the case – I still don’t have any context whatsoever on what the current state of these boards is, whether someone has written generic tools to set the LEDs, or even UIs for them. But it was an interesting back and forth of looking for leads to figure out what is needed.

The first problem is, as most of you who already know a bit about electrical engineering and electronics will have noticed, that the IT7236 chip is clearly not in the same series as the IT87xx chips that my driver supports. And since they are not in the same series, they are unlikely to share the same functionality.

The IT87xx series chips are Super I/O controllers, which means they implement functionality such as floppy-disk controllers, serial and parallel ports and similar interfaces, generally via the LPC bus. You usually know these chip names because they need to be supported by the kernel for them to show up in sensors output. In addition to these standard devices, many controllers include at least a set of general purpose I/O (GPIO) lines. On most consumer motherboards these are not exposed in any way, but boards designed for industrial applications, or customized boards, tend to expose those lines more readily.

Indeed, I wrote the gpio_it87 driver (well, actually adapted and extended it from a previous driver), because the board I was working on in Los Angeles had one of those chips, and we were interested in having access to the GPIO lines to drive some extra LEDs (and possibly in future versions more external interfaces, although I don’t know if anything was made of those). At the time, I did not manage to get the driver merged; a couple of years back, LaCie manufactured a NAS using a compatible chip, and two of their engineers got my original driver (further extended) merged into the Linux kernel. Since then I only submitted one other patch to add another ID for a compatible chip, because someone managed to send me a datasheet, and I could match it to the one I originally used to implement the driver as having the same behaviour.

Back to the original topic, the IT7236 chip is clearly not a Super I/O controller. It’s also not an Environmental Control (EC) chip, as I know that series is actually IT85xx, which is what my old laptop had. Somewhat luckily though, a “Preliminary Specifications” datasheet for that exact chip is available online from a company that appears to distribute electronic components in the general sense. I’m not sure if that was intentional or not, but having the datasheet is always handy of course.

According to these specifications, the IT7236xFN chips are “Touch ASIC Cap Button Controllers”. And indeed, ITE lists them as such. Comparing this with a different model in the same series shows that probably LED driving was not their original target, but they came to be useful for that. These chips also include an MCU based on an 8051 core, similarly to their EC solution — this makes them, and in particular the datasheet I found earlier, a bit more interesting to me. Unfortunately the datasheet is clearly amended to be the shorter version, and does not include a programming interface description.

Up to this point this tells us exactly one thing: my driver is completely useless for this chip, as it specifically implements the Super I/O bus access, and it’s unlikely to be extensible to this series of chips. So a new driver is needed, and some reverse engineering is likely to be required. The user who wrote me also gave me two other ITE chip names found on the board they have: IT87920 and IT8686 (which appears to be a PWM fan controller — I couldn’t find it on the ITE website at all). Since the it87 (hwmon) driver is still developed out-of-kernel on GitHub, I checked and found an issue that appears to describe a common situation for gaming motherboards: the fans are not controlled with the usual Super I/O chip, but with a separate (more accurate?) one, and that suggests the LEDs are indeed controlled by yet another separate chip, which makes sense. The user ran strings on the UEFI/BIOS image and did indeed find modules named after IT8790 and IT7236 (and IT8728, for whatever reason), confirming this.

None of this brings us any closer to supporting it though, so let’s take a look at the datasheet, where we can see that the device has an I²C bus, instead of the LPC (or ISA) bus used by the Super I/O and the fan controller. Which meant looking at i2cdev and lsi2c. Unfortunately the output can only show that there are things connected to the bus, not what they are.
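
For reference, the standard userspace tool for this kind of probing is i2cdetect from i2c-tools; it can only tell you which addresses answer on a bus, not what chip sits behind them (the bus number below is a stand-in and depends on the board):

# Needs the i2c-dev kernel module for /dev/i2c-* access
$ modprobe i2c-dev
$ i2cdetect -y 0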

This leaves us pretty much dry, particularly me, since I don’t have hardware access. So my suggestion has been to consider looking into the Windows driver and software (which I’m sure the motherboard manufacturer provides), and possibly figure out if they can run in a virtualized environment (qemu?) where I²C traffic can be inspected. But there may be simpler, more useful or more advanced tools to do most of this already, since I have not spent any time on this particular topic before. So if you know of any of them, feel free to leave a comment on the blog, and I’ll make sure to forward them to the concerned user (since I have not asked them if I can publish their name, I’m not going to out them — they can, if they want, leave a comment with their name to be reached directly!).

Sergei Trofimovich a.k.a. slyfox (homepage, bugs)
signed char or unsigned char? (September 17, 2017, 00:00 UTC)

Yesterday I debugged an interesting bug: the sqlite test suite was hanging on a CSV parsing test on the powerpc32 and powerpc64 platforms. But other platforms were fine! ia64 was ok, hppa was ok, sparc was ok. Thus it’s not (just) an endianness issue or a stack-growth-direction issue. What could it be?

It took me a while to debug, but the issue boiled down to an infinite loop of trying to find EOF when reading a file. Let’s look at a simplified version of the buggy code in sqlite:

#include <stdio.h> /* #define EOF (-1) */
int main() {
    int n = 0;
    FILE * f = fopen ("/dev/null", "rb"); // supposed to have 0 bytes
    for (;;) {
        char c = fgetc (f); // truncate 'int' to char
        if (c == EOF) break;
        ++n;
    }
    return n;
}

The code is supposed to reach end of file (or maybe a ‘\xFF’ byte) and finish. Normally that’s exactly what happens:

$ x86_64-pc-linux-gnu-gcc a.c -o a && ./a && echo $?
0

But not on powerpc64:

$ powerpc64-unknown-linux-gnu-gcc a.c -o a && ./a && echo $?
<hung>

The bug here is simple: c == EOF promotes both operands char c and -1 to int. Thus for the EOF case the condition looks like this:

((int)(char)(-1) == -1)

So why does it never fire on powerpc64? Because it’s an unsigned char platform! As a result it looks like two different conditions:

((int)0xff == -1) // powerpc64
((int)(-1) == -1) // x86_64

Once we know the problem we can force x86_64 to hang as well by using -funsigned-char gcc option:

$ x86_64-pc-linux-gnu-gcc a.c -o a -funsigned-char && ./a && echo $?
<hung>

I had not encountered bugs related to char signedness for quite a while.
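
For completeness, since the post never shows it: the portable fix is well known. fgetc() returns an int precisely so that EOF can be distinguished from every valid byte value, so the result must be kept in an int until after the check:

#include <stdio.h>
int main() {
    int n = 0;
    FILE * f = fopen ("/dev/null", "rb");
    for (;;) {
        int c = fgetc (f); // keep the full 'int' result
        if (c == EOF) break; // now -1 cannot collide with '\xFF'
        ++n;
    }
    return n;
}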

What other platforms default to unsigned char? I tried a simple hack (I encountered it today due to the change in C++11 that forbids narrowing conversions in initializers):

$ cat a.cc
char c[] = { -1 };

$ for cxx in /usr/bin/*-g++; do echo -n "$cxx "; $cxx -c a.cc 2>/dev/null && echo SIGNED || echo UNSIGNED; done | sort -k2

/usr/bin/afl-g++ SIGNED
/usr/bin/alpha-unknown-linux-gnu-g++ SIGNED
/usr/bin/hppa-unknown-linux-gnu-g++ SIGNED
/usr/bin/hppa2.0-unknown-linux-gnu-g++ SIGNED
/usr/bin/i686-w64-mingw32-g++ SIGNED
/usr/bin/ia64-unknown-linux-gnu-g++ SIGNED
/usr/bin/m68k-unknown-linux-gnu-g++ SIGNED
/usr/bin/mips64-unknown-linux-gnu-g++ SIGNED
/usr/bin/sh4-unknown-linux-gnu-g++ SIGNED
/usr/bin/sparc-unknown-linux-gnu-g++ SIGNED
/usr/bin/sparc64-unknown-linux-gnu-g++ SIGNED
/usr/bin/x86_64-HEAD-linux-gnu-g++ SIGNED
/usr/bin/x86_64-UNREG-linux-gnu-g++ SIGNED
/usr/bin/x86_64-pc-linux-gnu-g++ SIGNED
/usr/bin/x86_64-w64-mingw32-g++ SIGNED

/usr/bin/aarch64-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/armv5tel-softfloat-linux-gnueabi-g++ UNSIGNED
/usr/bin/armv7a-hardfloat-linux-gnueabi-g++ UNSIGNED
/usr/bin/armv7a-unknown-linux-gnueabi-g++ UNSIGNED
/usr/bin/powerpc-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/powerpc64-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/powerpc64le-unknown-linux-gnu-g++ UNSIGNED
/usr/bin/s390x-unknown-linux-gnu-g++ UNSIGNED

Or in a shorter form:

  • signed: alpha, hppa, x86, ia64, m68k, mips, sh, sparc
  • unsigned: arm, powerpc, s390

Why would a compiler prefer one signedness over another? The answer is the underlying Instruction Set Architecture. Or … not :), read on!

Let’s look at the generated code for two simple functions fetching a single char from memory into a register, and compare:

signed long sc2sl (signed char * p) { return *p; }
unsigned long uc2ul (unsigned char * p) { return *p; }

Alpha

Alpha is a 64-bit architecture. It does not support unaligned reads in its basic ISA. You have been warned.

; alpha-unknown-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
; example: 0x12345(BB address, p)
; | 0x12346(CC address, p+1)
; v v
; mem: [ .. AA BB CC DD .. ]
; a0 = 0x12345
lda t0,1(a0) ; load address: t0 = p+1
; t0 = 0x12346 (a0 + 1)
ldq_u v0,0(a0) ; load unaligned: v0 = *(long*)(align(p))
; v0 = *(long*)0x12340
; v0 = 0xDDCCBBAA????????
extqh v0,t0,v0 ; extract actual byte into MSB position
; v0 = v0 << 16
; v0 = 0xBBAA????????0000
sra v0,56,v0 ; get sign-extended byte using arithmetic shift-right
; v0 = v0 >> 56
; v0 = 0xFFFFFFFFFFFFFFBB
ret ; return
uc2ul:
ldq_u v0,0(a0) ; load unaligned: v0 = *(long*)(align(p))
extbl v0,a0,v0 ; extract byte in v0
ret

In this case Alpha handles the unsigned load slightly more nicely (it does not require an arithmetic shift and shift offset computation). It takes quite a bit of time to understand the sc2sl implementation.

creemj noted on #gentoo-alpha that the BWX ISA extension helps (enabled with -mbwx in gcc):

; alpha-unknown-linux-gnu-gcc -O2 -mbwx -c a.c && objdump -d a.o
sc2sl:
ldbu v0,0(a0)
sextb v0,v0 ; sign-extend-byte
ret
uc2ul:
ldbu v0,0(a0)
ret

Here signed load requires one instruction to amend default-unsigned load semantics.

HPPA (PA-RISC)

Currently the HPPA userland supports only 32-bit mode on Linux. Like many RISC architectures, its branch instructions take two clock cycles to execute; by convention this means the next instruction right after the branch (bv), the delay slot, is also executed.

; hppa2.0-unknown-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
ldb 0(r26),ret0 ; load byte
bv r0(rp) ; return
extrw,s ret0,31,8,ret0 ; sign-extend 8 bits into 31
uc2ul:
bv r0(rp) ; return
ldb 0(r26),ret0

Similar to Alpha signed chars require one more arithmetic operation.

x86

64-bit mode:

; x86_64-pc-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
movsbq (%rdi),%rax ; load/sign-extend byte to quad
retq
uc2ul:
movzbl (%rdi),%eax ; load/zero-extend byte to long
retq

Note the difference between target operands (64 vs. 32 bits): in 64-bit mode x86-64 implicitly zeroes out the upper part of the register for us.

32-bit mode:

; x86_64-pc-linux-gnu-gcc -O2 -m32 -c a.c && objdump -d a.o
sc2sl:
mov 0x4(%esp),%eax
movsbl (%eax),%eax
ret
uc2ul:
mov 0x4(%esp),%eax
movzbl (%eax),%eax
ret

No surprises here. The argument is passed through the stack.

ia64

ia64 “instructions” are huge: they are 128 bits long and encode 3 real instructions (a bundle). The result of a memory fetch cannot be used in the same bundle, thus we need at least two bundles to fetch and shift. (I don’t know why yet; either in order to avoid a memory stall in the same bundle, or it’s a “Write; Read-Write” conflict on r8 in a single bundle.)

; ia64-unknown-linux-gnu-gcc -O2 -c a.c && objdump -d a.o
sc2sl:
[MMI] nop.m 0x0
ld1 r8=[r32] # load byte (implicit zero-extend)
nop.i 0x0;;
[MIB] nop.m 0x0
sxt1 r8=r8 # sign-extend
br.ret.sptk.many b0;;
uc2ul:
[MIB] ld1 r8=[r32] # load byte (implicit zero-extend)
nop.i 0x0
br.ret.sptk.many b0;;

Unsigned char load requires fewer instructions (no additional shift required).

m68k

For some reason the frame pointer is still preserved at -O2. I’ve disabled it with -fomit-frame-pointer to make the assembly shorter:

; m68k-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
moveal %sp@(4),%a0 ; arguments are passed through stack (as would be in i386)
moveb %a0@,%d0 ; load byte
extbl %d0 ; sign-extend result
rts
uc2ul:
moveal %sp@(4),%a0
clrl %d0 ; zero destination register
moveb %a0@,%d0 ; load byte
rts

Both functions are similar. Both require arithmetic fiddling.

mips

Similar to HPPA, it has the same rule of executing one instruction after a branch instruction.

; mips64-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
jr ra
lb v0,0(a0) ; load byte (sign-extend)
uc2ul:
jr ra
lbu v0,0(a0) ; load byte (zero-extend)

Both functions take exactly one instruction.

SuperH

Similar to HPPA, it has the same rule of executing one instruction after a branch instruction.

; sh4-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
rts
mov.b @r4,r0 ; load byte (sign-extend)
uc2ul:
mov.b @r4,r0 ; load byte (sign-extend)
rts
extu.b r0,r0 ; zero-extend result

Here unsigned load requires one instruction to amend default-signed load semantics.

SPARC

Similar to HPPA, it has the same rule of executing one instruction after a branch instruction.

; sparc-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -c a.c && objdump -d a.o
sc2sl:
retl
ldsb [ %o0 ], %o0
uc2ul:
retl
ldub [ %o0 ], %o0

Both functions take exactly one instruction.

ARM

; armv5tel-softfloat-linux-gnueabi-gcc -O2 -fomit-frame-pointer -c a.c && armv5tel-softfloat-linux-gnueabi-objdump -d a.o
sc2sl:
ldrsb r0, [r0] ; load/sign-extend
bx lr
uc2ul:
ldrb r0, [r0] ; load/zero-extend
bx lr

Both functions take exactly one instruction.

PowerPC

PowerPC generates quite inefficient code in -fPIC mode, so I’m disabling PIC with -fno-PIC here.

; powerpc-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -fno-PIC -c a.c && objdump -d a.o
sc2sl:
lbz r3,0(r3) ; load-byte/zero-extend
extsb r3,r3 ; sign-extend
blr
nop
uc2ul:
lbz r3,0(r3) ; load-byte/zero-extend
blr

Here signed load requires one instruction to amend default-unsigned load semantics.

S390

64-bit mode:

; s390x-unknown-linux-gnu-gcc -O2 -fomit-frame-pointer -fno-PIC -c a.c && objdump -d a.o
sc2sl:
icmh %r2,8,0(%r2) ; insert-characters-under-mask-64
srag %r2,%r2,56 ; shift-right-single-64
br %r14
uc2ul:
llgc %r2,0(%r2) ; load-logical-character
br %r14

Most esoteric instruction set :) It looks like unsigned loads are slightly shorter here.

“31”-bit mode (note -m31):

; s390x-unknown-linux-gnu-gcc -m31 -O2 -fomit-frame-pointer -fno-PIC -c a.c && objdump -d a.o
sc2sl:
icm %r2,8,0(%r2) ; insert-characters-under-mask-64
sra %r2,24 ; shift-right-single
br %r14
uc2ul:
lhi %r1,0 ; load-halfword-immediate
ic %r1,0(%r2) ; insert-character
lr %r2,%r1 ; register-to-register(?) move
br %r14

Surprisingly, in 31-bit mode signed loads are slightly shorter. But it looks like uc2ul could be shorter by eliminating the lr.

Parting words

At least from an ISA standpoint, some architectures treat signed char and unsigned char equally and could pick either signedness. Others differ quite a bit.

Here is my silly table:

architecture  signedness  preferred signedness  match
alpha         SIGNED      UNSIGNED              NO
arm           UNSIGNED    AMBIVALENT            YES
hppa          SIGNED      UNSIGNED              NO
ia64          SIGNED      UNSIGNED              NO
m68k          SIGNED      AMBIVALENT            YES
mips          SIGNED      AMBIVALENT            YES
powerpc       UNSIGNED    UNSIGNED              YES
s390(64)      UNSIGNED    UNSIGNED              YES
sh            SIGNED      SIGNED                YES
sparc         SIGNED      AMBIVALENT            YES
x86           SIGNED      AMBIVALENT            YES

What do we see here:

  • alpha follows the majority of architectures in char signedness, but pays for it a lot.
  • arm could have been signed just fine (for this tiny silly test)
  • hppa and ia64 might be unsigned and balance the table a bit (6/5 versus 8/3) :)

Have fun!


September 14, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Public Money, Public Code (September 14, 2017, 20:04 UTC)

Imagine that all publicly funded software were under a free license: Everybody would be able to use, study, share and improve it.

I have been waiting for Free Software Foundation Europe to launch the Public Money, Public Code campaign for almost a year now, when first Matthias told me about this being in the works. I have been arguing the same point, although not quite as organized, since back in 2009 when I complained about how the administration of Venice commissioned a GIS application to a company they directly own.

For those who have not seen the campaign yet, the idea is simple: software built with public money (that is, commissioned and paid for by public agencies), should be licensed using a FLOSS license, to make it public code. I like this idea and will support it fully. I even rejoined the Fellowship!

The timing of this campaign ended up resonating with a post on infrastructure projects and their costs, which I find particularly interesting and useful to point out. Unlike the article that is deep-linked there, which lamented the costs associated with the project, this post focuses on pointing out how that money actually needs to be spent, because for the most part off-the-shelf Free Software is not really up to the task of complex infrastructure projects.

You may think the post I linked is overly critical of Free Software, and counter that it’s just a little rough around the edges and everything is okay once you spend some time on it. But that’s exactly what the article is saying! Free Software is a great baseline to build complex infrastructure on top of. This is what all the Cloud companies do, this is what even Microsoft has been doing in the past few years, and it is reasonable to expect most for-profit projects would do that, for a simple reason: you don’t want to spend money reinventing the wheel when you can charge for designing an innovative engine — which is a quite simplistic view of course, as sometimes you can invent a more efficient wheel indeed, but that’s a different topic.

Why am I bringing this topic up together with the FSFE campaign? Because I think this is exactly what we should be asking from our governments and public agencies, and the article I linked shows exactly why!

You can’t take off-the-shelf FLOSS packages and have them run a whole infrastructure, because usually they are unpolished, might not scale, or require significant work to bring them up to what the project requires. You will have to spend money to do that, and maybe in some cases it will be cheaper to not use existing FLOSS projects at all, and build your own new, innovative wheel. So publicly funded projects need money to produce results; we should not complain about the cost1, but rather demand that the money spent actually produces something that will serve the public in all possible ways, not only toward the objective of the project, but also through any byproduct of it, which includes the source code.

Most of the products funded with public money are not particularly useful for individuals, or for most for-profit enterprises, but the byproducts and improvements may very well be. For example, in the (Italian) post I wrote in 2009 I was complaining about a GIS application that was designed to report potholes and other roadwork problems. In the abstract, this is a way to collect and query points of interest (POI), which is the basis of many other current services, from review sites to applications such as Field Trip.

But do we actually care? Sure, by making the code of public projects available, you may actually be indirectly funding private companies that can reuse that code, and thus be jumpstarted into having applications that would otherwise cost time or money to build from scratch. On the other hand, this is what Free Software has been about all along: indeed, Linux, the GNU libraries and tools, Python, Ruby, and all those tools out there are nothing less than a full kit to quickly start projects that a long time ago would have taken a lot of money or a lot of time to start.

You could actually consider the software byproducts of these projects similar to the public infrastructure that we probably all take for granted: roads, power distribution, communication, and so on. Businesses couldn’t exist without all of this infrastructure, and while it is possible for a private enterprise to set out and build all the infrastructure themselves (roads, power lines, fibre), we don’t expect them to do so. Instead we accept that we want more enterprises, because they bring more jobs and more value, and the public investment is part of it.

I actually fear the reason a number of people may disagree with this campaign is rooted in localism — as I said before, I’m a globalist. Having met many people with such ideas, I can hear them in my mind complaining that, to take again the example of the IRIS system in Venice, the Venetians shouldn’t have to pay for something and then give it away for free to Palermo. It’s a strawman, but only because I replaced the city that people actually complained about when I talked about my idea those eight years ago.

This argument may make sense if you really care about local money being spent locally, not counting on any higher-order funding. But I think that public money is public, and I don’t really care if money from Venice is spent to help report potholes in Civitella del Tronto. Actually, I think that cities where the median disposable income is higher have a duty to help provide infrastructure for the smaller, poorer cities, at the very least in their immediate vicinity, but overall too.

Unfortunately “public money” may not always be so, even if it appears like that. So I’m not sure whether, even if a regulation were passed for publicly funded software development to be released as FLOSS, we’d get a lot in the form of public transport infrastructure being open sourced. I would love it though: we’d more easily get federated infrastructure if operators shared the same backend, and if you knew how the system worked you could actually build tools around it, for instance integrating OpenStreetMap directly with the transport system itself. But I fear this is all wishful thinking and it won’t happen in my lifetime.

There is also another interesting point to make here, which I think I may expand upon, for other contexts, later on. As I said above, I’m all for requiring the software developed with public money to be released to the public with a FLOSS-compatible license. Particularly one that allows using other FLOSS components, and the re-use of even part of the released code into bigger projects. This does not mean that everybody should have a say in what’s going on with that code.

While it makes perfect sense to be able to fix bugs and incompatibilities with websites you need to use as part of your life as a citizen (in the case of the Venetian GIS I would probably have liked to fix the way they identified the IP address the request came from), adding new features may actually not be in line with the roadmap of the project itself. Particularly if the public money is tight rather than lavish, I would surely prefer that they focused on delivering what the project needs and just published the sources under compatible licenses, without trying to create a community around them. While the latter would be nice to have, it should not steal the focus from the important part: a lot of this code is currently one-off and is not engineered to be re-used or extended.

Of course in the long run, if public software is already available as open source, there will be more and more situations where solving the same problem again becomes easier, particularly if an option is added, or a constant string becomes a configurable value, or translations become possible at all. And in that case, why not have them as features of a single repository, rather than a lot of separate forks?

But all of this should really be secondary, in my opinion. Let’s focus on getting those sources: they are important, they matter and they can make a difference. Building communities around this will take time. And to be honest, even making this code secure will take time. I’m fairly sure that in many cases, if you take a look at the software running public services right now, you can find backdoors, voluntary or not, and even very simple security issues. While the “many eyes” idea is easily disproved, it’s also true that for the most part those projects cut corners, and are very difficult to secure to begin with.

I want to believe we can do at least this bit.


  1. Okay, so there are case of artificially inflated costs due to friends-of-friends. Those are complicated issues, and I’ll leave them to experts. We should still not be complaining that these projects don’t appear for free. [return]

September 13, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The breadwinner product (September 13, 2017, 12:04 UTC)

This may feel like a bit of a random post, as Business and Economics are not my areas of expertise and I usually do my best not to talk about stuff I don’t know, but I have seen complete disregard for this concept lately and I thought it would be good, before I talk about it, to define here what a “breadwinner product” is, from my point of view.

The term breadwinner is used generally to refer to the primary income-earner in a household. While I have not seen this very often extended to products and services in companies, I think it should be fairly obvious how the extension would work.

In a world of startups there are still plenty of companies that have a real “breadwinner product”, even while acting as startups. This is the case for instance of the company I used to contract for, in Los Angeles: they had been in activity for a number of years with a different, barely related product, and I was contracting for their new project.

I think it’s important to think about this term, because without this concept in mind, it’s hard to understand a lot of business decisions of many companies, why startups such as Revolut are “sweeping up the market”, and so on.

This is something that came up on Twitter a time or two: a significant number of geeks appear to wilfully ignore the needs of a business, treat marketing concepts as words of the devil, and refuse to consider whether decisions make business sense; instead they will either judge decisions purely on technical merits, or even just on their own direct interests. Now it is true that technical merit can make good business sense, but sometimes there are very good long-term vision reasons that people don’t appreciate from the purely technical point of view.

In particular, sometimes it’s hard to understand why a service by a company that may appear to be a startup is based on “old” technology, but it may just be that it is actually a “traditional” company trying to pivot into a different market, or a different brand or level of service. And when that happens, there’s at least some gravitational pull to keep the stack in line with the previous technology, particularly if the new service can piggyback on the old one for a while, in terms of revenue, technology and staff.

So in the case of the company I referred to above, when I started contracting they were already providing a separate service that was built on real legacy technology, running on a stack of bare metal servers with RedHat 5. Since the new service had two components, one of them ended up being based on the same stack, and the other one (which I was setting up) ended up based on Gentoo Linux with containers instead, the same way the Tinderbox used to be run. If you wonder why one would run two stacks this separate, the answer is that messing with the breadwinner product, most of the time, is a risky endeavour, and unless you have a very good reason to do so, you don’t.

So even though I was effectively building a new architecture from scratch, setting up new servers with proper monitoring (based on Munin and Icinga) and Puppet for configuration management, I was not allowed to touch the old service. And rightly so: it was definitely brittle, and breaking it would have meant actually losing money, as that service was running in production, while the new one was not ready yet, and the few users of the latter could be told about maintenance windows in advance.

There is often a tipping point though, when the cost of running a legacy service is higher than the revenue it brings in. For that company this happened right as I was leaving to start working at my current place of work. The owner, though, was more business savvy than many other people I have met before and since, and was already planning how to cut some expenses. Indeed the last thing I helped that company with was setting up a single1 bare metal server with multiple containers to virtualise their formerly fully bare metal setup, and moving it physically to a new location (Fremont, CA) to cut hosting costs.

The more money the breadwinner service makes, and the less the company experiments with alternative approaches to cut costs in the future, build up new services, or open new market opportunities, the harder working for such a company becomes. Of all the possible things I could complain about regarding my boss at the time, the ability to deal with business details was not one of them. Actually, I think that despite leaving me in quite a bad predicament afterwards, he did end up teaching me quite a bit of the nitty-gritty details of doing business, particularly US-style — and I may not entirely like it either.

But all in all, I think lots more people in tech should learn about this. Because I still maintain that Free Software can only be marketed by businesses, and to have your project cater to business users without selling its soul, you need to be able to tell what they need and how they need it provided.


  1. Okay, actually a bit more than one: a single machine ran the production environment for the legacy service, and acted as warm backup for the new service; another machine ran the production environment for the new service, and acted as warm backup for the legacy service. A pair of the older bare-metal servers acted as database backends for both systems. [return]

September 11, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Authenticating with U2F (September 11, 2017, 16:25 UTC)

In order to further secure access to my workstation, after the switch to Gentoo sources, I now enabled two-factor authentication through my Yubico U2F USB device. Well, at least for local access - remote access through SSH requires both userid/password as well as the correct SSH key, by chaining authentication methods in OpenSSH.

Enabling U2F on (Gentoo) Linux is fairly easy. The various guides online which talk about the pam_u2f setup are indeed correct that it is fairly simple. For completeness' sake, I've documented what I know on the Gentoo Wiki, in the pam_u2f article.
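
The gist of it, as a minimal sketch covering both the local and the SSH side - note that the paths and the PAM service name here are assumptions from a typical setup and may differ on yours:

# Register the attached U2F token for the current user
# (the pamu2fcfg tool ships with pam_u2f)
mkdir -p ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys

# /etc/pam.d/system-local-login - ask for the token on local logins
auth required pam_u2f.so

# /etc/ssh/sshd_config - chain key and password authentication for remote access
AuthenticationMethods publickey,password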

September 10, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Fun with Hertz car rentals, part 2

If you're reading this, the last act in this drama (see the previous blog post) was that in Patras a friendly employee from Hertz picked up the rental car to bring it to a repair workshop. A bit later than planned, but nevertheless. Now the story continues.

  • About 20:00 the same day I get a phone call from the same lady, telling me that my car is ready and that we can meet in about 20 minutes next to my hotel so I can pick it up again. That sounded great to me. Some minutes later I saw the car coming.
  • Of course I wanted to try out the repaired roof / window immediately, so we did that. Opened the roof, closed the roof. The passenger side window did not close; precisely the same phenomenon. Oops.
  • I tried a few more times at the instruction of the Hertz employee, with the result that the window got stuck at half height and did not move anymore, even after shutting down and restarting the ignition. Since it was stuck on the wrong side of its rubber seal, the passenger door did not open anymore either.
  • The visibly nervous Hertz employee calls her manager on the mobile, who arrives after a few minutes. The manager opens the passenger door with application of force. Afterwards, and after restarting the engine, the window slides up again.
  • We have some discussion about a replacement car, where I point out that I paid a lot of money for a convertible, and really want one. I agree to come to the office Thursday morning to sort things out.
  • Next morning, Thursday, at the Hertz office, I'm glad to learn that a replacement car will be sent. Of course, I'm now leaving Patras, so the car will have to be sent to a station near my next stops.
  • We discuss this and agree that I will pick the car up tomorrow (Friday) afternoon in Kalamata (which is only about 80km from my Friday evening hotel in Kyparissia).
Oh well. Glad that things are somehow sorted out. I spend the rest of the day visiting a Mycenaean castle (1200 BC) and a Frankish castle (1200 AD), re-visiting Olympia, spend the night near Olympia, and then start towards Kalamata through the mountains via, e.g., the excavations of ancient Messene. Sometime on the way I realize that the Kalamata Hertz offices (according to the website) are closed 14:00 to 17:00, so I plan on arriving there around 18:00. That's ample time, since they should be open 17:00 - 21:00 (search for Kalamata here).
  • Arrive 18:10 at the Kalamata city office. Nobody there, and there's a sign on the door saying "We are at Kalamata Airport."
  • Drive back the ~10km to the airport (which I passed on the way before). Arrive there around 18:30. The entire airport is already closed for the day. No Hertz employees in sight.
  • Call the Kalamata office. First response: "We closed half an hour ago." When I start explaining my problem, the lady on the phone says "But your car has not arrived from Athens yet!" I point out that I have to go back to Kyparissia, quite some way away, today. She doesn't know when it will arrive, but says something about late evening.
  • I tell her I will now get dinner here in Kalamata, and afterwards call her again.
That's where we are now. Just as a reminder, it's now Friday evening, and the problem has essentially been known to Hertz since last Sunday.

Update:
  • Tried calling the Hertz Kalamata office again around 20:45. No response, after a while some mailbox text in Greek. 
  • Drove back the 60km to Kyparissia, arrived at the hotel 22:00. Will call Hertz again tomorrow.
Update 2: Yes, Hertz knows my mobile phone number. It's big and fat on my contract, and I also gave it again and reconfirmed it to the employee at Patras. So, one could assume if something goes wrong they phone me...

Update 3: It ends well. See the next post.

    Fun with Hertz car rentals, part 3 (it ends well) (September 10, 2017, 19:37 UTC)

    If you've read the last part, I had just arrived at my hotel in Kyparissia late in the night, slightly fuming. Well...

    Next morning, Saturday, around 10:15 somebody called my mobile phone. For some reason I didn't notice, and only got a text notification of a missed call an hour later. I called back; it turned out to be the Kalamata airport Hertz office. "Your replacement car has arrived; you can pick it up anytime."

    I arranged to come by around 16:00 in the afternoon, and from here on everything went smoothly. Now I'm driving a white BMW Mini convertible, and the roof and windows work just fine.

    In the end, obviously I'm quite happy that a replacement car was driven from Athens to Kalamata and that I can now continue with my vacation as planned. The path that led to that outcome, however, was not so great...

    September 09, 2017
    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    Dell XPS 13, problems with WiFi (September 09, 2017, 18:04 UTC)

    A couple of months ago I bought a Dell XPS 13. I’m still very happy with the laptop, particularly given the target use that I have for it, but I have started noticing a list of problems that do bother me more than a little bit.

    The first problem is something that I have spoken of in the original post and updated a couple of times: the firmware (“BIOS”) update. While the firmware is actually published through LVFS by Dell, either Antergos or Arch Linux has some configuration issue with EFI and the System Partition that causes the EFI shim to be unable to find the right capsule. I ended up just running the update manually twice now, since I didn’t want to spend the time to fix the packaging of the firmware updater, and trying with different firmware updates is not easy.
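
    For context, when the LVFS path works as intended, applying such an update is normally just a matter of the fwupd tooling; a sketch, assuming fwupd is installed and the EFI System Partition is mounted where it expects it:

    fwupdmgr refresh       # fetch updated firmware metadata from LVFS
    fwupdmgr get-updates   # list devices with pending firmware updates
    fwupdmgr update        # stage the UEFI capsule for the next reboot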

    Also, while the new firmware updates made the electrical whining noise effectively disappear, making the laptop very nice to use in quiet hotel rooms (not all hotel rooms are quiet), it seems to have triggered more WiFi problems. Indeed, it got to the point that I could not use the laptop at home at all. I’m not sure what exactly was the problem, but my Linksys WRT1900ACv2 seems to trigger known problems with the WiFi card on this model.

    At first I thought it would be a problem with using Arch Linux rather than Dell’s own Ubuntu image, which appeared to have separate Qualcomm drivers for the ath10k card. But it turns out the same error pops up repeatedly in Dell forums and on Launchpad too. A colleague with the same laptop suggested just replacing the card, getting rid of the whole set of problems introduced by the ath10k driver. Indeed, even looking around the Windows users’ websites, the recommendation appears to be the same: just replace your card.

    The funny bit is that I only really noticed this when I came back from my long August trips, because since I bought the laptop, I hadn’t spent more than a few days at home at that point. I have been in Helsinki, Vancouver and Seattle, used the laptop in airports, lounges, hotels and cafes, as well as my office. And none of those places had any issue with my laptop. I used the laptop extensively to livetweet SREcon Europe from the USENIX wireless at the hotel, and it had no problem whatsoever.

    My current theory for this is that there is some mostly-unused feature that is triggered by high-performance access points like the one I have at home, which runs LEDE, and as such is not something you’ll encounter in the wild. This would also explain why the Windows sites that I found referencing the problem suggest the card replacement — your average Windows user is unlikely to know how to do so, or to be interested in a solution that does not involve shipping the device back to Dell. And to be fair, they probably have a point: why on earth are they selling laptops with crappy WiFi cards?

    So anyway, my solution to this was to order an Intel 8265 wireless card, which includes the same 802.11ac dual-band support and Bluetooth 4.2, and comes in the same form factor as the ath10k card that the laptop ships with. It feels a bit strange having to open up a new laptop to replace a component, but since this is the serviceable version of the Dell, it was not a horrible experience (my Vostro laptop still has a terrible 802.11g 2.4GHz-only card in it, but I can’t replace it easily).

    Moving on to something else, the USB-C dock is working great, although I found out the hard way that if you ask Plasma (or whatever else it is that I ended up asking) not to put the laptop to sleep the moment the lid is closed while the power is connected — which I need so I can use the laptop “docked” in my usual work-from-home setup — it also does not go to sleep if the power is subsequently disconnected. So the short version is that I now usually run the laptop without the power connected unless it’s already running low, and I can easily stay a whole day at a conference without charging, which is great!

    Speaking of charging, it turns out that the Apple 65W USB-C charger also works great with the XPS 13. Unfortunately it comes without a cable, and particularly with Apple’s USB-C cables your mileage may vary. It seems to be fine with the Google Pixel phone cable, though. I have not tried measuring how much power and which power mode it uses, among other things because I wouldn’t know how to query the USB-C controller to get that information. If you have suggestions, I’m all ears.

    Otherwise the laptop appears to be working great for me. I only wish I could wake it up from sleep without opening it, when using it docked, but that’s also a minor feature.

    The remaining problems are software. For instance, Plasma sometimes crashes when I dock the laptop and the new monitor comes online. And I can’t reboot while docked, because the external keyboard (connected to the USB-C dock) is not able to type in the password for the full-disk encryption. Again, this is a bother but not a big deal.

    September 08, 2017
    Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
    Fun with Hertz car rentals, part 1

    So, I decided to get myself a rather expensive treat this summer. For travelling the Peloponnes I rented a Mini Cooper convertible. These are really cute, and driving around in the sun with the roof open felt like a very nice idea. I'm a Hertz Gold Club customer, so why not go for Hertz again.

    I picked up the car in Athens, and all looked fine. The first day I had some longer driving to do, and the manual was only in Greek, so I decided to drive to my first stop and check out the convertible roof there. OK, with some fiddling I found and read a German manual on the BMW website (now I know where to find the VIN, if anyone asks :), opened the roof, and enjoyed half a day in the mountains near Kalavrita.

    Afterwards the passenger side window didn't close anymore.

    It turns out something was already bent or damaged inside the door, so the window was sliding up on the wrong side of its rubber seal. At some point it can't move any further, so the electronics stop and disable the window. The effect is perfectly reproducible, and scratch marks on the rubber seal and door frame indicate it has been doing this for a while. Oh well.

    • Phoned the nearest Hertz office in Patras. After some complicated discussion in English they advised me to contact the office in Athens.
    • Phoned the Hertz office in Athens. I managed to explain the problem there. They said I should contact their central technical service office, since maybe they know something easy to do. 
    • Phoned the central technical service office. There the problem was quickly understood; a very helpful lady explained to me that most likely the car would have to be exchanged. Since it was Sunday afternoon, they couldn't do it now, but somebody would call me back on Monday morning 9-10.
    • Waited Monday morning for the call. Nothing happened. 
    • Phoned the central technical service office, Monday around 13:00. They asked me where I was. After telling them I'm going to Patras the next day, they told me I should come by their office there.
    • Arrived at the Patras office Tuesday around 17:30. I demonstrated the problem to the lady there. She acknowledged that something's broken, and told me she'd come to my hotel the next day between 11:00 and 12:00 to pick up the car and bring it to the BMW service for repair.
    • Now I'm sitting in the bar of the hotel, it's 12:30, no one has called or come by, and slowly I'm getting seriously annoyed.
    Let's see how the story continues...
    • Update: 13:00, friendly lady from Hertz picked up the car. Fingers crossed. Made clear it's a long rental, so delaying makes no sense. Wants to phone me either in the afternoon or tomorrow morning.
    • Update 2: The drama continues in the next blog post.

    September 07, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)
    In Search of a Secure Time Source (September 07, 2017, 15:07 UTC)

    All our computers and smartphones have an internal clock and need to know the current time. As configuring the time manually is annoying, it's common to set the time via Internet services. What tends to get forgotten is that a reasonably accurate clock is often a crucial part of security features, like certificate lifetimes or features with expiration times like HSTS. Thus the timesetting should be secure - but usually it isn't.

    I'd like my systems to have a secure time. So I'm looking for a timesetting tool that fulfils two requirements:

    1. It provides authenticity of the time and is not vulnerable to man in the middle attacks.
    2. It is widely available on common Linux systems.

    Although these seem like trivial requirements, to my knowledge such a tool doesn't exist. These are relatively loose requirements; one might want to add:
    1. The timesetting needs to provide a good accuracy.
    2. The timesetting needs to be protected against malicious time servers.

    Some people need a very accurate time source, for example for certain scientific use cases, but that's outside of my scope. For the vast majority of use cases a clock that is off by a few seconds doesn't matter. And while it's certainly a good idea to consider rogue servers, given the current state of things I'd be happy to have a solution where I simply trust a server from Google or any other major Internet entity.

    So let's look at what we have:

    NTP

    The common way of setting the clock is the NTP protocol. NTP itself has no transport security built in. It's a plaintext protocol open to manipulation and man in the middle attacks.

    There are two variants of "secure" NTP. "Autokey", an authenticated variant of NTP, is broken. There's also symmetric authentication, but that is impractical for widespread use, as it requires negotiating a pre-shared key with the time server in advance.
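
    To illustrate why the symmetric variant doesn't scale: with the reference ntpd, a setup looks roughly like the sketch below, and every client/server pair needs to exchange such a secret out of band (the key and server name here are made up):

    # /etc/ntp.keys - shared secrets, one per key id
    1 SHA1 c74f3e0f34ac44a3c25d8bbfe1d9b2cf38d93e11

    # /etc/ntp.conf
    keys /etc/ntp.keys
    trustedkey 1
    server ntp.example.com key 1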

    NTPsec and Ntimed

    In response to some vulnerabilities in the reference implementation of NTP, two projects started developing "more secure" variants of NTP: Ntimed, a rewrite by Poul-Henning Kamp, and NTPsec, a fork of the original NTP software. Ntimed hasn't seen any development for several years; NTPsec seems active. NTPsec has had some controversies with the developers of the original NTP reference implementation, and its main developer is - to put it mildly - a controversial character.

    But none of that matters: neither project implements a "secure" NTP. The "sec" in NTPsec refers to the security of the code, not to the security of the protocol itself. It's still just an implementation of the old, insecure NTP.

    Network Time Security

    There's a draft for a new secure variant of NTP - called Network Time Security. It adds authentication to NTP.

    However it's just a draft, and it seems stalled: it hasn't been updated for over a year. In any case, it's not widely implemented and thus currently not usable. If that changes it may become an option.

    tlsdate

    tlsdate is a hack abusing the timestamp of the TLS protocol. The TLS timestamp of a server can be used to set the system time. This doesn't provide high accuracy, as the timestamp is only given in seconds, but it's good enough.

    I've used and advocated tlsdate for a while, but it has some problems. The timestamp in the TLS handshake doesn't really have any meaning within the protocol, so several implementers decided to replace it with a random value. Unfortunately that is also true for the default server hardcoded into tlsdate.

    Some Linux distributions still ship a package with a default server that will send random timestamps. The result is that your system time is set to a random value. I reported this to Ubuntu a while ago. It never got fixed, however the latest Ubuntu version, Zesty Zapus (17.04), doesn't ship tlsdate any more.

    Given that Google has shipped tlsdate in ChromeOS for some time, it seems unlikely that Google will send randomized timestamps any time soon. Thus if you use tlsdate with www.google.com it should work for now. But it's not a future-proof solution.

    TLS 1.3 removes the TLS timestamp, so this whole concept isn't future-proof. (Alternatively, tlsdate supports using an HTTPS timestamp instead.) The development of tlsdate has stalled; it hasn't seen any updates lately, and it doesn't build with the latest version of OpenSSL (1.1), so it will likely become unusable soon.

    OpenNTPD

    The developers of OpenNTPD, the NTP daemon from OpenBSD, came up with a nice idea. NTP provides high accuracy, yet no security. Via HTTPS you can get a timestamp with low accuracy. So they combined the two: They use NTP to set the time, but they check whether the given time deviates significantly from an HTTPS host. So the HTTPS host provides safety boundaries for the NTP time.

    This would be really nice, if there wasn't a catch: This feature depends on an API only provided by LibreSSL, the OpenBSD fork of OpenSSL. So it's not available on most common Linux systems. (Also why doesn't the OpenNTPD web page support HTTPS?)
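
    For reference, on a system where OpenNTPD is built against LibreSSL, the configuration is pleasantly small; a sketch (the hosts are just examples):

    # /etc/ntpd.conf (OpenNTPD)
    servers pool.ntp.org
    # reject NTP responses that deviate too far from this host's HTTPS Date header
    constraints from "https://www.google.com"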

    Roughtime

    Roughtime is a Google project. It fetches the time from multiple servers and uses some fancy cryptography to make sure that malicious servers get detected. If a roughtime server sends a bad time then the client gets a cryptographic proof of the malicious behavior, making it possible to blame and shame rogue servers. Roughtime doesn't provide the high accuracy that NTP provides.

    From a security perspective it's the nicest of all solutions. However it fails the availability test. Google provides two reference implementations in C++ and in Go, but it's not packaged for any major Linux distribution. Google has an unfortunate tendency to use unusual dependencies and arcane build systems nobody else uses, so packaging it comes with some challenges.

    A one-line bash script beats all existing solutions

    As you can see, none of the currently available solutions is really feasible, and none fulfils the two mild requirements of authenticity and availability.

    This is frustrating given that it's a really simple problem. In fact, it's so simple that you can solve it with a single line bash script:

    date -s "$(curl -sI https://www.google.com/|grep -i 'date:'|sed -e 's/^.ate: //g')"

    This line sends an HTTPS request to Google, fetches the date header from the response and passes that to the date command line utility.

    It provides authenticity via TLS. If the current system time is far off then this fails, as the TLS connection relies on the validity period of the current certificate. Google currently uses certificates with a validity of around three months. The accuracy is only in seconds, so it doesn't qualify for high accuracy requirements. There's no protection against a rogue Google server providing a wrong time.

    Another potential security concern may be that Google might attack the parser of the date setting tool by serving a malformed date string. However I ran american fuzzy lop against it and it looks robust.

    While this certainly isn't as accurate as NTP or as secure as roughtime, it's better than everything else that's available. I put this together in a slightly more advanced bash script called httpstime.
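
    As a middle ground between the raw one-liner and httpstime, a slightly more defensive sketch of the same idea could look like this - it just refuses to touch the clock when the request fails:

    #!/bin/sh
    # Fetch the Date header over HTTPS; only set the clock if we got one.
    # tr strips the trailing carriage return from the HTTP header line.
    header="$(curl -sfI https://www.google.com/ | grep -i '^date:' | sed -e 's/^.ate: //' | tr -d '\r')"
    if [ -n "$header" ]; then
        date -s "$header"
    else
        echo "could not fetch a Date header, leaving the clock alone" >&2
        exit 1
    fi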

    September 06, 2017
    Greg KH a.k.a. gregkh (homepage, bugs)
    4.14 == This year’s LTS kernel (September 06, 2017, 14:41 UTC)

    As the 4.13 release has now happened, the merge window for the 4.14 kernel release is now open. I mentioned this many weeks ago, but as the word doesn’t seem to have gotten very far based on various emails I’ve had recently, I figured I need to say it here as well.

    So, here it is officially, 4.14 should be the next LTS kernel that I’ll be supporting with stable kernel patch backports for at least two years, unless it really is a horrid release and has major problems. If so, I reserve the right to pick a different kernel, but odds are, given just how well our development cycle has been going, that shouldn’t be a problem (although I guess I just doomed it now…)

    As always, if people have questions about this, email me and I will be glad to discuss it, or talk to me in person next week at the LinuxCon^WOpenSourceSummit or Plumbers conference in Los Angeles, or at any of the other conferences I’ll be at this year (ELCE, Kernel Recipes, etc.)

    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    A selection of good papers from USENIX Security '17 (September 06, 2017, 10:04 UTC)

    I have briefly talked about Adrienne’s and April’s talk at USENIX Security 2017, but I have not shone much light on the other papers and presentations that got my attention at the conference. I thought I should do a round-up of good content from this conference and, if I can manage, go back to it later.

    First of all, the full proceedings are available on the Program page of the conference. As usual, USENIX open access policy means that everybody has access to these proceedings, and since we’re talking academic papers, effectively everything I’m talking about is available to the public. I know that some videos were recorded, but I’m not sure when they will be published1.

    Before I go on to link interesting content and give brief comments on it, I would like to start with a complaint about academic papers. The proper name of the conference is the 26th USENIX Security Symposium, and it’s effectively an academic conference. This means that the content is all available in the form of papers. These papers are written, as usual, in LaTeX, and made available as 2-column PDFs, as is usual. Usual, but not practical. This is a perfect format for reading the paper on actual paper, but the truth is that nowadays this content is almost exclusively read in digital form.

    I would love to have an ePub version of the various papers to just load on an ebook reader, for instance2. But even just providing a clean HTML file would be an improvement! When reading these PDFs on a screen, you end up having to zoom in and move around a freaking lot because of the column format, and more than once that would be enough for me to stop caring and not read the paper unless I really have an interest in it, which I think is counterproductive.

    Since I already wrote about Measuring HTTPS Adoption on the Web, I should not go back to that particular presentation. Right after that one, though, Katharina Krombholz presented “I Have No Idea What I’m Doing” - On the Usability of Deploying HTTPS, which was definitely interesting in showing how complicated setting up HTTPS properly still is, without even going into further advanced features such as HPKP, CSP and similar.

    And speaking of these, an old acquaintance of mine from university time3, Stefano Calzavara, presented CCSP: Controlled Relaxation of Content Security Policies by Runtime Policy Composition (my, what a mouthful!), and I really liked the idea. Effectively the idea behind this is that CSP is too complicated to use, and is putting off a significant number of people from implementing even the basic parts of security policies. This fits very well with the previous talk, and with my experience. This blog currently depends on a few external resources and scripts, namely Google Analytics, Amazon OneLink, and Font Awesome, and I can’t really spend the time figuring out whether I can make all the changes all the time.

    In the same session as Stefano, Iskander Sanchez-Rola presented Extension Breakdown: Security Analysis of Browsers Extension Resources Control Policies, which sounded familiar to me, as it overlaps with and extends my own complaint back in 2013 that browser extensions were becoming the next source of entropy for fingerprinting, replacing plugins. Since we had dinner with Stefano, Iskander and Igor (co-author of the paper above), we managed to have quite a chat on the topic. I’m glad to see that my hunch back in the day was not completely off, and that there is more interest in fixing this kind of problem nowadays.

    Another interesting talk was Understanding the Mirai Botnet, which revealed one very interesting bit of information: the attack on Dyn that caused a number of outages just last year appears to have had as its target not the Dyn service itself, but rather the Sony PlayStation Network, and should thus be looked at in the light of the previous attacks on that. This should remind everyone that even if you personally get something out of a certain attack, you should definitely not cheer it on; you may be the next target, even just as a bystander.

    Now, not all the talks were exceptional. In particular, I found See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Patterns Detection in Additive Manufacturing a bit… hype-y. In the sense that the whole premise of considering 3D-printed sourcing as trusted by default, and then figuring out a minimal amount of validation, seemed to stem from the crowd that has been insisting for the past ten years or so that 3D printing is the future. While it clearly is interesting, and has a huge amount of use for prototyping, one-off designs and even cosplay, it does not seem like it got as far as people kept thinking it would. And at least from the talk and from skimming the paper, I couldn’t find a good explanation of how it compares against trust in “classic” manufacturing.

    On a similar note, I found the out-of-band call verification system proposed by AuthentiCall: Efficient Identity and Content Authentication for Phone Calls not particularly enticing: it appears to leave out all the details of the identity verification and trust system, and assumes a fairly North American point of view on the communication space.

    Of course I was interested in the talk about mobile payments, Picking Up My Tab: Understanding and Mitigating Synchronized Token Lifting and Spending in Mobile Payment, given my previous foray into related topics. It was indeed good, although the final answer of adding a QR code to do a two-way verification of who it is you’re going to pay sounds like a NIH reimplementation of the EMV protocol. It is worth reading, though, to figure out the absurd implementation of Magnetic Secure Transmission used in Samsung Pay: spoilers, it implements magnetic stripe payments through a mobile phone.

    For the less academic of you, TrustBase: An Architecture to Repair and Strengthen Certificate-based Authentication appears fairly interesting, particularly as the source code is available. The idea is to move the implementation of SSL clients into an operating system service, rather than into libraries, so that it can be configured once and for all at the system level, including selecting the available ciphers to use and the Authorities to trust. It sounds good, but at the same time it sounds a lot like what NSS (the Mozilla one, not the glibc one) tried to implement. Except that didn’t go anywhere, and not just because of API differences.

    But it can’t be an interesting post (or conference) without a bit of controversy. A Longitudinal, End-to-End View of the DNSSEC Ecosystem was an interesting talk, and one that once again confirmed the fears around the lack of proper DNSSEC support in the wild right now. But in that very same talk, the presenter pointed out how they used a service called Luminati to get access to endpoints within major ISPs’ networks to test their DNSSEC resolution. While I understand why a similar service would be useful in these circumstances, I need to remind people that the Luminati service is not one of the good guys!

    Indeed, Luminati is described as allowing you to request access to connections matching certain characteristics. What it omits to say is that it does so by targeting the connections of users who installed the Hola “VPN” tool. If you haven’t come across this, Hola is one of the many extensions that allow users to appear as if they are connecting from a different country, to fool Netflix and other streaming services. Besides being against the terms of service (but who cares, right?), in 2015 Hola was found to be compromising its users. In particular, users running Hola are running the equivalent of a Tor exit node, without any of the security measures that protect such a node’s users, and – because its target is non-expert users who are trying to watch content not legally available in their country – without a good understanding of what such an exit node allows.

    I cannot confirm whether currently they still allow access to the full local network to the users of the “commercial” service, which include router configuration pages (cough DNS hijacking cough), and local office LANs that are usually trusted more than they should be. But it gives you quite an idea, as that was clearly the case before.

    So here is my personal set of opinions and a number of pointers to good and interesting talks and papers. I just wish they were more usable by non-academics, rather than being available only as LaTeX-generated two-column PDFs, but I’m afraid the two worlds shall never meet enough.


    1. As it turns out you can blame me a little bit for this part, I promised to help out. [return]
    2. Thankfully, for USENIX conferences, the full proceedings are available as ePub and Mobi. Although the size is big enough that you can’t use the mail-to-Kindle feature. [return]
    3. All the two weeks I managed to stay in it. [return]

    September 05, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)
    Abandoned Domain Takeover as a Web Security Risk (September 05, 2017, 17:11 UTC)

    In the modern web it's extremely common to include third-party content on web pages: Youtube videos, social media buttons, ads, statistics tools, CDNs for fonts and common javascript files - there are plenty of good and many not-so-good reasons for this. What is often forgotten is that including other people's content means giving other people control over your webpage. This is obviously particularly risky if it involves javascript, as this gives a third party full code execution rights in the context of your webpage.

    I recently helped a person whose Wordpress blog had a problem: the layout looked broken. The cause was that the theme used a font from a web host - and that host was down. This was easy to fix; I was able to extract the font file from the Internet Archive and store a copy locally. But it got me thinking: what happens if you include third-party content on your webpage and the service from which you're including it disappears?

    I put together a simple script that would check webpages for HTML tags with the src attribute. If the src attribute points to an external host it checks if the host name actually can be resolved to an IP address. I ran that check on the Alexa Top 1 Million list. It gave me some interesting results. (This methodology has some limits, as it won't discover indirect src references or includes within javascript code, but it should be good enough to get a rough picture.)
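
    A rough approximation of the idea - not the actual script I used - fits in a few lines of shell:

    #!/bin/sh
    # Read one URL per line on stdin; report src= hosts that don't resolve.
    # Deliberately naive: ignores indirect includes, ports and relative URLs.
    while read -r url; do
        curl -s "$url" \
        | grep -oE 'src="https?://[^/"]+' \
        | sed -E 's,^src="https?://,,' \
        | sort -u \
        | while read -r host; do
            getent hosts "$host" > /dev/null || echo "$url: unresolvable host $host"
        done
    done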

    Yahoo! Web Analytics was shut down in 2012, yet in 2017 Flickr still tried to use it

    The webpage of Flickr included a script from Yahoo! Web Analytics. If you don't know Yahoo! Web Analytics - that may be because it was shut down in 2012. Although Flickr is a Yahoo! company, it seems they hadn't noticed for quite a while. (The code is gone now, likely because I mentioned it on Twitter.) This example has no security impact, as the domain still belongs to Yahoo. But it likely caused an unnecessary slowdown of page loads over many years.

    Going through the list of domains I saw plenty of the things you'd expect: typos, broken URLs, references to localhost and subdomains no longer in use. Sometimes I saw weird stuff, like references to javascript from browser extensions. My best explanation is that someone had a plugin installed that would inject those into pages, then created a copy of the page with the browser, which later ended up being used as the real webpage.

    I looked for abandoned domain names that might be worth registering. There weren't many. In most cases the invalid domains were hosts that didn't resolve, but that still belonged to someone. I found a few, but they were only used by one or two hosts.

    Takeover of unregistered Azure subdomain

    But then I saw a couple of domains referencing a javascript from a non-resolving host called piwiklionshare.azurewebsites.net. This is a subdomain from Microsoft's cloud service Azure. Conveniently Azure allows creating test accounts for free, so I was able to grab this subdomain without any costs.

    Doing so allowed me to look at the HTTP log files and see what web pages included code from that subdomain. All of them were local newspapers from the US. 20 of them belonged to two adjacent IP addresses, indicating that they were all managed by the same company. I was able to contact them. While I never received any answer, shortly afterwards the code was gone from all those pages.

    "Friendly defacement" of the Saline Courier.
    However, the page with the most hits was not so easy to contact. It was also a newspaper, the Saline Courier. I tried contacting them directly, then their chief editor and their second chief editor. No answer.

    After a while I wondered what I could do. Ultimately at some point Microsoft wouldn't let me host that subdomain any longer for free. I didn't want to risk that others could grab that subdomain, but at the same time I obviously also didn't want to pay in order to keep some web page safe whose owners didn't even bother to read my e-mails.

    But of course I had another way of contacting them: I could execute Javascript on their web page and use that for some friendly defacement. After some contemplation of whether that would be a legitimate thing to do, I decided to go for it. I changed the background color to a flashy pink and sent them a message. The page remained usable, but it was a message hard to ignore.

    There was some trouble along the way - first they broke their CSS, then they showed a PHP error message, then they reverted to the page with the defacement. But in the end they managed to remove the code.

    There are still a couple of other pages that include that Javascript. Most of them, however, look like broken test webpages. The only legitimate-looking webpage that still embeds that code is the Columbia Missourian. However, they don't embed it on the start page, only on the error reporting form they have for every article. It's been several weeks now; they don't seem to care.

    What happens to abandoned domains?

    There are reasons to believe that what I showed here is only the tip of the iceberg. In many cases when services are discontinued, their domains don't simply disappear. If the domain name is valuable then almost certainly someone will try to register it immediately after it becomes available.

    Someone trying to abuse abandoned domains could watch out for services going out of business or widely referenced domains becoming available. Just to name an example: I found a couple of hosts referencing subdomains of compete.com. If you go to their web page you can learn that the company Compete discontinued its service in 2016. How long will they keep their domain? And what will happen with it afterwards? Whoever gets the domain can hijack all the web pages that still include javascript from it.

    Be sure to know what you include

    There are some obvious takeaways from this. If you include other people's code on your web page then you should know what that means: you give them permission to execute whatever they want on your web page. This means you need to wonder how much you can trust them.

    At the very least you should be aware of who is allowed to execute code on your web page. If they shut down their business or discontinue the service you have been using, then you obviously should remove that code immediately. And if you include code from a web statistics service that you never look at anyway, you may simply want to remove that as well.

    September 04, 2017
    Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
    A Late GUADEC 2017 Post (September 04, 2017, 03:20 UTC)

    It’s been a little over a month since I got back from Manchester, and this post should’ve come out earlier but I’ve been swamped.

    The conference was absolutely lovely, and the organisation was 110% on point (serious kudos, I know first-hand how hard that is). Others on Planet GNOME have written extensively about the talks, the social events, and everything in between that made it a great experience. What I would like to write about is why this year’s GUADEC was special to me.

    GNOME turning 20 years old is obviously a large milestone, and one of the main reasons I wanted to make sure I was at Manchester this year. There were many occasions to take stock of how far we had come, where we are, and most importantly, to reaffirm who we are, and why we do what we do.

    And all of this made me think of my own history with GNOME. In 2002/2003, Nat and Miguel came down to Bangalore to talk about some of the work they were doing. I know I wasn’t the only one who found their energy infectious, and at Linux Bangalore 2003, they got on stage, just sat down, and started hacking up a GtkMozEmbed-based browser. The idea itself was fun, but what I took away — and I know I wasn’t the only one — is the sheer inclusive joy they shared in creating something and sharing that with their audience.

    For all of us working on GNOME in whatever way we choose to contribute, there is the immediate gratification of shaping this project, as well as the larger ideological underpinning of making everyone’s experience talking to their computers better and free-er.

    But I think it is also important to remember that all our efforts to make our community an inviting and inclusive space have a deep impact across the world. So much so that complete strangers from around the world are able to feel a sense of belonging to something much larger than themselves.

    I am excited about everything we will achieve in the next 20 years.

    (thanks go out to the GNOME Foundation for helping me attend GUADEC this year)

    Sponsored by GNOME!

    August 26, 2017
    Sebastian Pipping a.k.a. sping (homepage, bugs)
    GIMP 2.9.6 now in Gentoo (August 26, 2017, 19:53 UTC)

    Here’s what upstream has to say about the new release 2.9.6. Enjoy 🙂

    August 23, 2017
    Sven Vermeulen a.k.a. swift (homepage, bugs)
    Using nVidia with SELinux (August 23, 2017, 17:04 UTC)

    Yesterday I switched to the gentoo-sources kernel package on Gentoo Linux. And with that, I also attempted (successfully) to use the proprietary nvidia drivers, so that I can enjoy both a smoother 3D experience while playing minecraft and the CUDA support, meaning I don't need to use cloud-based services for small exercises.

    The move to nvidia was quite simple, as the nvidia-drivers article on the Gentoo wiki was easy to follow.
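
    For the impatient, the Portage side of it boils down to a handful of steps. A sketch - the wiki covers kernel configuration and the finer details, and "larry" below stands in for your own user:

    # /etc/portage/make.conf - tell Portage which GPU driver stack to build for
    VIDEO_CARDS="nvidia"

    # rebuild the packages affected by the VIDEO_CARDS change, then the driver
    emerge --ask --changed-use --deep @world
    emerge --ask x11-drivers/nvidia-drivers

    # make sure your user can access the device nodes
    gpasswd -a larry video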

    Sebastian Pipping a.k.a. sping (homepage, bugs)
    Expat 2.2.4 released (August 23, 2017, 16:52 UTC)

    Expat 2.2.4 has recently been released. It features one major bugfix regarding files encoded as UTF-8, and improvements to the build system.

    If you are using a more ancient version of Visual Studio like 2012, please check the post-2.2.4 commits in Git for related fixes to compilation.

    Also, funding of Rhodri’s work on Expat by the Core Infrastructure Initiative is coming to an end. If you can fund additional developers to work on Expat — including smooth integration of by-default protection against billion laughs denial-of-service attacks — please get in touch.

    Sebastian Pipping

    August 22, 2017
    Sven Vermeulen a.k.a. swift (homepage, bugs)
    Switch to Gentoo sources (August 22, 2017, 17:04 UTC)

    You might already have read it on the Gentoo news site: the Hardened Linux kernel sources have been removed from the tree, due to the grsecurity change whereby the grsecurity Linux kernel patches are no longer provided for free. The decision was made for supportability and maintainability reasons.

    That doesn't mean that users who want to stick with the grsecurity-related hardening features are left on their own. Agostino Sarubbo has started providing sys-kernel/grsecurity-sources for the users who want to stick with it; it is based on minipli's unofficial patchset. I seriously hope that the patchset will continue to be maintained and, who knows, even evolve further.
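
    For those making either switch, the mechanics are the usual kernel-package dance; a sketch, assuming the old sys-kernel/hardened-sources is still installed:

    # swap the kernel sources
    emerge --ask --unmerge sys-kernel/hardened-sources
    emerge --ask sys-kernel/gentoo-sources

    # point the /usr/src/linux symlink at the new tree, then configure and build as usual
    eselect kernel list
    eselect kernel set 1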

    Personally though, I'm switching to the Gentoo sources, and stick with SELinux as one of the protection measures. And with that, I might even start using my NVidia graphics card a bit more, as that one hasn't been touched in several years (I have an Optimus-capable setup with both an Intel integrated graphics card and an NVidia one, but all attempts to use nouveau for the one game I like to play - minecraft - didn't work out that well).