
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. Luca Barbato
. Marek Szuba
. Mart Raudsepp
. Matt Turner
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
September 19, 2017, 18:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

September 19, 2017
graphicsmagick: assertion failure in pixel_cache.c (September 19, 2017, 15:15 UTC)

Description:
graphicsmagick is a collection of tools and libraries for many image formats.

The complete output of the issue:

# gm convert $FILE null
gm: magick/pixel_cache.c:1089: const PixelPacket *AcquireImagePixels(const Image *, const long, const long, const unsigned long, const unsigned long, ExceptionInfo *): Assertion `image != (Image *) NULL' failed.

Affected version:
1.3.25, 1.3.26, and possibly earlier releases

Fixed version:
N/A

Commit fix:
http://hg.code.sf.net/p/graphicsmagick/code/rev/358608a46f0a

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00366-graphicsmagick_assertionfailure_pixel_cache_c

Timeline:
2017-08-12: bug discovered and reported to upstream privately
2017-08-16: bug reported to the public upstream bugtracker
2017-08-29: upstream released a fix
2017-09-19: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.
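
For readers unfamiliar with the workflow, a crash like this is typically found with an afl-fuzz run along these lines (a sketch only; the directory names and seed file are placeholders, not the actual setup used):

```shell
# afl-fuzz substitutes each mutated input for the @@ marker on the
# target's command line; gm must be built with afl instrumentation.
mkdir -p corpus findings
cp seed.jpg corpus/                       # any small valid image as a seed
afl-fuzz -i corpus -o findings -- gm convert @@ null
# crashing inputs, if any, are collected under findings/crashes/
```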

Permalink:

graphicsmagick: assertion failure in pixel_cache.c

Description:
bladeenc is an mp3 encoder.

The bladeenc command-line tool triggers a write overflow by default, even without a crafted input file. The upstream website no longer works for me.
The complete ASan output of the issue:

# bladeenc $FILE
==15358==ERROR: AddressSanitizer: global-buffer-overflow on address 0x00000141c3b4 at pc 0x00000052afc8 bp 0x7ffcb9e50bb0 sp 0x7ffcb9e50ba8
WRITE of size 4 at 0x00000141c3b4 thread T0
    #0 0x52afc7 in iteration_loop /var/tmp/portage/media-sound/bladeenc-0.94.2-r1/work/bladeenc-0.94.2/bladeenc/loop.c:728:20
    #1 0x54fb91 in codecEncodeChunk /var/tmp/portage/media-sound/bladeenc-0.94.2-r1/work/bladeenc-0.94.2/bladeenc/codec.c:353:2
    #2 0x519694 in main /var/tmp/portage/media-sound/bladeenc-0.94.2-r1/work/bladeenc-0.94.2/bladeenc/main.c:518:23
    #3 0x7f3d35989680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #4 0x419dc8 in getenv (/usr/bin/bladeenc+0x419dc8)

0x00000141c3b4 is located 44 bytes to the left of global variable 'lo_quant_s' defined in 'loop.c:372:17' (0x141c3e0) of size 156
0x00000141c3b4 is located 0 bytes to the right of global variable 'hi_quant_l' defined in 'loop.c:370:17' (0x141c360) of size 84
SUMMARY: AddressSanitizer: global-buffer-overflow /var/tmp/portage/media-sound/bladeenc-0.94.2-r1/work/bladeenc-0.94.2/bladeenc/loop.c:728:20 in iteration_loop
Shadow bytes around the buggy address:
  0x00008027b820: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008027b830: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008027b840: 00 00 00 f9 f9 f9 f9 f9 f9 f9 f9 f9 04 f9 f9 f9
  0x00008027b850: f9 f9 f9 f9 04 f9 f9 f9 f9 f9 f9 f9 00 00 00 00
  0x00008027b860: 00 00 00 00 00 00 00 f9 f9 f9 f9 f9 00 00 00 00
=>0x00008027b870: 00 00 00 00 00 00[04]f9 f9 f9 f9 f9 00 00 00 00
  0x00008027b880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04
  0x00008027b890: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008027b8a0: 00 00 00 00 00 00 00 04 f9 f9 f9 f9 04 f9 f9 f9
  0x00008027b8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
  0x00008027b8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==15358==ABORTING
Aborted

Affected version:
0.94.2

Fixed version:
N/A

Commit fix:
N/A

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Timeline:
2017-09-19: bug discovered
2017-09-19: blog post about the issue

Note:
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.
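
For reference, an AddressSanitizer-instrumented build that produces reports like the one above can be obtained roughly as follows (a sketch only; bladeenc's actual build system may differ):

```shell
# Build with ASan instrumentation; -g keeps file:line info in the report
# and -fno-omit-frame-pointer gives cleaner stack traces.
CC=clang CFLAGS="-O1 -g -fsanitize=address -fno-omit-frame-pointer" \
  ./configure && make
# Per the description above, any input reaches the faulty code path:
./bladeenc input.wav
```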

Permalink:

bladeenc: global buffer overflow in iteration_loop (loop.c)

September 17, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Anyone working on motherboard RGB controllers? (September 17, 2017, 11:04 UTC)

I was contacted by email last week by a Linux user, who probably noticed my recent patch for the gpio_it87 driver in the kernel. They were hoping my driver could be extended to the IT7236 chips that are used in a number of gaming motherboards to control RGB LEDs.

Having left the case modding world after my first and only ThermalTake chassis – my mother gave me hell for the fan noise, mostly due to the plexiglass window on the side of the case – I still don’t have any context whatsoever on what the current state of these boards is, whether someone has written generic tools to set the LEDs, or even UIs for them. But it was an interesting back and forth of looking for leads into figuring out what is needed.

The first problem, as those of you who already know a bit about electrical engineering and electronics will have noticed, is that the IT7236 chip is clearly not in the same series as the IT87xx chips that my driver supports. And since they are not the same series, they are unlikely to share the same functionality.

The IT87xx series chips are Super I/O controllers, which mean they implement functionality such as floppy-disk controllers, serial and parallel ports and similar interfaces, generally via the LPC bus. You usually know these chip names because these need to be supported by the kernel for them to show up in sensors output. In addition to these standard devices, many controllers include at least a set of general purpose I/O (GPIO) lines. On most consumer motherboards these are not exposed in any way, but boards designed for industrial applications, or customized boards tend to expose those lines easily.
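
For context, on kernels of that era the GPIO lines exposed by such a driver could be driven from userspace through the legacy sysfs interface, roughly like this (433 is a made-up global GPIO number for illustration; the interface has since been deprecated in favour of the character-device API):

```shell
# Export the line, set it as an output, and drive it high (e.g. an LED on).
echo 433 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio433/direction
echo 1   > /sys/class/gpio/gpio433/value
# Release the line when done.
echo 433 > /sys/class/gpio/unexport
```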

Indeed, I wrote the gpio_it87 driver (well, actually adapted and extended it from a previous driver), because the board I was working on in Los Angeles had one of those chips, and we were interested in having access to the GPIO lines to drive some extra LEDs (and possibly in future versions more external interfaces, although I don’t know if anything was made of those). At the time, I did not manage to get the driver merged; a couple of years back, LaCie manufactured a NAS using a compatible chip, and two of their engineers got my original driver (further extended) merged into the Linux kernel. Since then I only submitted one other patch to add another ID for a compatible chip, because someone managed to send me a datasheet, and I could match it to the one I originally used to implement the driver as having the same behaviour.

Back to the original topic, the IT7236 chip is clearly not a Super I/O controller. It’s also not an Environmental Control (EC) chip, as I know that series is actually IT85xx, which is what my old laptop had. Somewhat luckily though, a “Preliminary Specifications” datasheet for that exact chip is available online from a company that appears to be a general distributor of electronic components. I’m not sure if that was intentional or not, but having the datasheet is always handy of course.

According to these specifications, the IT7236xFN chips are “Touch ASIC Cap Button Controllers”. And indeed, ITE lists them as such. Comparing this with a different model in the same series shows that LED driving was probably not their original target, but they came to be useful for that. These chips also include an MCU based on an 8051 core, similarly to their EC solution — this makes them, and in particular the datasheet I found earlier, a bit more interesting to me. Unfortunately the datasheet has clearly been cut down to a shorter version, and does not include a programming interface description.

Up to this point this tells us exactly one thing only: my driver is completely useless for this chip, as it specifically implements Super I/O bus access, and is unlikely to be extensible to this series of chips. So a new driver is needed, and some reverse engineering is likely to be required. The user who wrote me also gave me two other ITE chip names found on their board: IT8790 and IT8686 (which appears to be a PWM fan controller — I couldn’t find it on the ITE website at all). Since the it87 (hwmon) driver is still developed out-of-kernel on GitHub, I checked and found an issue that appears to describe a common situation for gaming motherboards: the fans are not controlled by the usual Super I/O chip, but by a separate (more accurate?) one, which suggests that the LEDs are indeed controlled by yet another chip, which makes sense. The user ran strings on the UEFI/BIOS image and did indeed find modules named after IT8790 and IT7236 (and IT8728 for whatever reason), confirming this.

None of this brings us any closer to supporting it though, so let’s take a look at the datasheet, where we can see that the device has an I²C bus, instead of the LPC (or ISA) bus used by the Super I/O chip and the fan controller. Which meant looking at i2cdev and lsi2c. Unfortunately the output can only show that there are things connected to the bus, not what they are.
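
The kind of probing described above can also be done with the standard i2c-tools, for example (bus number is just an illustration; requires the i2c-dev module):

```shell
# List the I2C adapters the kernel knows about.
i2cdetect -l
# Probe bus 0 for responding addresses; responding devices show up as hex
# addresses in the grid, but this says nothing about what the chip actually is.
i2cdetect -y 0
```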

This leaves us pretty much dry. Particularly me since I don’t have hardware access. So my suggestion has been to consider looking into the Windows driver and software (that I’m sure the motherboard manufacturer provides), and possibly figure out if they can run in a virtualized environment (qemu?) where I²C traffic can be inspected. But there may be simpler, more useful or more advanced tools to do most of this already, since I have not spent any time on this particular topic before. So if you know of any of them, feel free to leave a comment on the blog, and I’ll make sure to forward them to the concerned user (since I have not asked them if I can publish their name I’m not going to out them — they can, if they want, leave a comment with their name to be reached directly!).

September 14, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Public Money, Public Code (September 14, 2017, 20:04 UTC)

Imagine that all publicly funded software were under a free license: Everybody would be able to use, study, share and improve it.

I have been waiting for Free Software Foundation Europe to launch the Public Money, Public Code campaign for almost a year now, since Matthias first told me it was in the works. I have been arguing the same point, although not quite as organized, since back in 2009 when I complained about how the administration of Venice commissioned a GIS application to a company they directly own.

For those who have not seen the campaign yet, the idea is simple: software built with public money (that is, commissioned and paid for by public agencies), should be licensed using a FLOSS license, to make it public code. I like this idea and will support it fully. I even rejoined the Fellowship!

The timing of this campaign ended up resonating with a post on infrastructure projects and their costs, which I find particularly interesting and useful to point out. Unlike the article that is deep-linked there, which lamented the costs associated with this project, this article focuses on pointing out how that money actually needs to be spent, because for the most part off-the-shelf Free Software is not really up to the task of complex infrastructure projects.

You may think the post I linked is overly critical of Free Software, and that it’s just a little rough around the edges and everything is okay once you spend some time on it. But that’s exactly what the article is saying! Free Software is a great baseline to build complex infrastructure on top of. This is what all the Cloud companies do, this is what even Microsoft has been doing in the past few years, and it is reasonable to expect most for-profit projects would do that, for a simple reason: you don’t want to spend money working on reinventing the wheel when you can charge for designing an innovative engine — which is a quite simplistic view of course, as sometimes you can invent a more efficient wheel indeed, but that’s a different topic.

Why am I bringing this topic up together with the FSFE campaign? Because I think this is exactly what we should be asking from our governments and public agencies, and the article I linked shows exactly why!

You can’t take off-the-shelf FLOSS packages and have them run a whole infrastructure, because they are usually unpolished, might not scale, or require significant work to bring them up to what the project needs. You will have to spend money to do that, and in some cases it may even be cheaper not to use existing FLOSS projects at all, and build your own new, innovative wheel. So publicly funded projects need money to produce results; we should not complain about the cost1, but rather demand that the money spent actually produces something that serves the public in all possible ways: not only through the objective of the project, but also through any byproduct of it, which includes the source code.

Most of the products funded with public money are not particularly useful for individuals, or for most for-profit enterprises, but byproducts and improvements may very well be. For example, in the (Italian) post I wrote in 2009 I was complaining about a GIS application that was designed to report potholes and other roadwork problems. In abstract, this is a way to collect and query points of interests (POI), which is the base of many other current services, from review sites, to applications such as Field Trip.

But do we actually care? Sure, by making the code of public projects available, you may now actually be indirectly funding private companies that can reuse that code, and thus be jumpstarted into having applications that would otherwise cost time or money to build from scratch. On the other hand, this is what Free Software has always been about: indeed, Linux, the GNU libraries and tools, Python, Ruby, and all those tools out there are nothing less than a full kit to quickly start projects that a long time ago would have taken a lot of money or a lot of time to start.

You could actually consider the software byproducts of these projects similarly to the public infrastructure that we probably all take for granted: roads, power distribution, communication, and so on. Businesses couldn’t exist without all of this infrastructure, and while it is possible for a private enterprise to set out and build all the infrastructure themselves (road, power lines, fiber), we don’t expect them to do so. Instead we accept that we want more enterprises, because they bring more jobs, more value, and the public investment is part of it.

I actually fear the reason a number of people may disagree with this campaign is rooted in localism — as I said before, I’m a globalist. Having met many people with such ideas, I can hear them in my mind complaining that, to take again the example of the IRIS system in Venice, the Venetians shouldn’t have to pay for something and then give it away for free to Palermo. It’s a strawman only because I replaced the city they actually complained about when I talked about my idea those eight years ago.

This argument may make sense if you really care about local money being spent locally and not counting on any higher-order funding. But myself I think that public money is public, and I don’t really care if the money from Venice is spent to help reporting potholes in Civitella del Tronto. Actually, I think that cities where the median disposable income is higher have a duty to help providing infrastructure for the smaller, poorer cities at the very least in their immediate vicinity, but overall too.

Unfortunately “public money” may not always be so, even if it appears like that. So I’m not sure if, even if a regulation was passed for publicly funded software development to be released as FLOSS, we’d get a lot in form of public transport infrastructure being open sourced. I would love for it to be though: we’d more easily get federated infrastructure, if they would share the same backend, and if you knew how the system worked you could actually build tools around it, for instance integrating Open Street Map directly with the transport system itself. But I fear this is all wishful thinking and it won’t happen in my lifetime.

There is also another interesting point to make here, which I think I may expand upon, for other contexts, later on. As I said above, I’m all for requiring the software developed with public money to be released to the public with a FLOSS-compatible license. Particularly one that allows using other FLOSS components, and the re-use of even part of the released code into bigger projects. This does not mean that everybody should have a say in what’s going on with that code.

While it makes perfect sense to be able to fix bugs and incompatibilities with websites you need to use as part of your citizen life (in the case of the Venetian GIS I would probably have liked to fix the way they identified the IP address they received the request for), adding new features may actually not be in line with the roadmap of the project itself. Particularly if the public money is already tight rather than lavish, I would surely prefer that they focused on delivering what the project needs and just drop the sources out in compatible licenses, without trying to create a community around them. While the latter would be nice to have, it should not steal the focus on the important part: a lot of this code is currently one-off and is not engineered to be re-used or extensible.

Of course on the long run, if you do have public software available already as open-source, there would be more and more situations where solving the same problem again may become easier, particularly if an option is added there, or a constant string can become a configured value, or translations were possible at all. And in that case, why not have them as features of a single repository, rather than have a lot of separate forks?

But all of this should really be secondary, in my opinion. Let’s focus on getting those sources; they are important, they matter and they can make a difference. Building communities around this will take time. And to be honest, even making these secure will take time. I’m fairly sure that in many cases right now, if you take a look at the software that is running for public services, you can find backdoors, voluntary or not, and even very simple security issues. While the “many eyes” idea is easily disproved, it’s also true that for the most part those projects cut corners, and are very difficult to make secure to begin with.

I want to believe we can do at least this bit.


  1. Okay, so there are case of artificially inflated costs due to friends-of-friends. Those are complicated issues, and I’ll leave them to experts. We should still not be complaining that these projects don’t appear for free. [return]

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
==4435==ERROR: AddressSanitizer: stack-buffer-underflow on address 0x7fe62b800e86 at pc 0x00000057b5a3 bp 0x7ffea98c1b10 sp 0x7ffea98c1b08                                                                        
WRITE of size 1 at 0x7fe62b800e86 thread T0                                                                                                                                                                       
    #0 0x57b5a2 in AP4_VisualSampleEntry::ReadFields(AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:780:40                                                                             
    #1 0x575726 in AP4_SampleEntry::Read(AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:108:5                                                                        
    #2 0x57d624 in AP4_VisualSampleEntry::AP4_VisualSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:742:5                     
    #3 0x57d624 in AP4_AvcSampleEntry::AP4_AvcSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:994                             
    #4 0x5cbf58 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:305:24             
    #5 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14                                        
    #6 0x586a2c in AP4_StsdAtom::AP4_StsdAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:100:13
    #7 0x58566f in AP4_StsdAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:56:16
    #8 0x5ca71c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:422:20
    #9 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #10 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #11 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #12 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #13 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #14 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #15 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #16 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #17 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #18 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #19 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #20 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #21 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #22 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #23 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #24 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #25 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #26 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #27 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #28 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #29 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #30 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #31 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #32 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #33 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #34 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #35 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #36 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #37 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #38 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #39 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #40 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #41 0x7fe62e887680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #42 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

Address 0x7fe62b800e86 is located in stack of thread T0 at offset 6 in frame
    #0 0x57b2ef in AP4_VisualSampleEntry::ReadFields(AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:759

  This frame has 1 object(s):
    [32, 65) 'compressor_name'
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
      (longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-underflow /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:780:40 in AP4_VisualSampleEntry::ReadFields(AP4_ByteStream&)
Shadow bytes around the buggy address:
  0x0ffd456f8180: f1 f1 f1 f1 00 f2 f2 f2 00 f3 f3 f3 00 00 00 00
  0x0ffd456f8190: f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5
  0x0ffd456f81a0: f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5 f5
  0x0ffd456f81b0: f1 f1 f1 f1 00 f2 f2 f2 00 f3 f3 f3 00 00 00 00
  0x0ffd456f81c0: f1 f1 f1 f1 04 f2 00 f2 f2 f2 00 f3 f3 f3 f3 f3
=>0x0ffd456f81d0:[f1]f1 f1 f1 00 00 00 00 01 f3 f3 f3 f3 f3 f3 f3
  0x0ffd456f81e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffd456f81f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffd456f8200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffd456f8210: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ffd456f8220: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==4435==ABORTING

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
https://github.com/axiomatic-systems/Bento4/commit/03d1222ab9c2ce779cdf01bdb96cdd69cbdcfeda

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00344-bento4-stackunderflow-AP4_VisualSampleEntry_ReadFields

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.
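
As an illustration of the bug class only (this is not Bento4’s actual code): the compressor name in a visual sample entry is a Pascal-style string, a length byte followed by up to 31 characters in a 32-byte field, so the untrusted length byte must be bounds-checked before the copy and the terminator write:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical sketch of parsing a Pascal-style "compressor name" field.
// `field` points at the 32-byte on-disk field; field[0] is the length byte,
// which comes straight from the file and is attacker-controlled.
std::string read_compressor_name(const uint8_t* field) {
    char name[33];
    uint8_t len = field[0];
    if (len > 31) len = 31;   // without this clamp, the writes below can
                              // land outside `name` (the reported bug class)
    std::memcpy(name, field + 1, len);
    name[len] = '\0';
    return std::string(name);
}
```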

Permalink:

bento4: stack-based buffer underflow in AP4_VisualSampleEntry::ReadFields (Ap4SampleEntry.cpp)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
==9052==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fc5ce900866 at pc 0x00000057b5a3 bp 0x7ffd0f773130 sp 0x7ffd0f773128
WRITE of size 1 at 0x7fc5ce900866 thread T0
    #0 0x57b5a2 in AP4_VisualSampleEntry::ReadFields(AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:780:40
    #1 0x575726 in AP4_SampleEntry::Read(AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:108:5
    #2 0x57d624 in AP4_VisualSampleEntry::AP4_VisualSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:742:5
    #3 0x57d624 in AP4_AvcSampleEntry::AP4_AvcSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:994
    #4 0x5cbf58 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:305:24
    #5 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #6 0x586a2c in AP4_StsdAtom::AP4_StsdAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:100:13
    #7 0x58566f in AP4_StsdAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:56:16
    #8 0x5ca71c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:422:20
    #9 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #10 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #11 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #12 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #13 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #14 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #15 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #16 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #17 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #18 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #19 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #20 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #21 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #22 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #23 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #24 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #25 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #26 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #27 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #28 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #29 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #30 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #31 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #32 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #33 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #34 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #35 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #36 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #37 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #38 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #39 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #40 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #41 0x7fc5d1a6a680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #42 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

Address 0x7fc5ce900866 is located in stack of thread T0 at offset 102 in frame
    #0 0x58676f in AP4_StsdAtom::AP4_StsdAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:88

  This frame has 3 object(s):
    [32, 36) 'entry_count'
    [48, 56) 'bytes_available'
    [80, 88) 'atom'
Shadow bytes around the buggy address:
=>0x0ff939d18100: f1 f1 f1 f1 04 f2 00 f2 f2 f2 00 f3[f3]f3 f3 f3
  0x0ff939d18110: f1 f1 f1 f1 00 00 00 00 01 f3 f3 f3 f3 f3 f3 f3
  0x0ff939d18120: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff939d18130: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff939d18140: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff939d18150: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==9052==ABORTING

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
The maintainer said that one of the previous commits fixed this issue; a bisect is needed to identify which one.

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00343-bento4-stackoverflow-AP4_VisualSampleEntry_ReadFields

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: stack-based buffer overflow in AP4_VisualSampleEntry::ReadFields (Ap4SampleEntry.cpp)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
==20986==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x606000000174 at pc 0x0000004ee515 bp 0x7ffd0b8395f0 sp 0x7ffd0b838da0
READ of size 65509 at 0x606000000174 thread T0
    #0 0x4ee514 in __asan_memcpy /var/tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.1/work/compiler-rt-4.0.1.src/lib/asan/asan_interceptors.cc:453
    #1 0x54de2b in AP4_DataBuffer::SetData(unsigned char const*, unsigned int) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4DataBuffer.cpp:175:5
    #2 0x5d4a83 in AP4_AvccAtom::AP4_AvccAtom(unsigned int, unsigned char const*) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AvccAtom.cpp:165:32
    #3 0x5d1b6b in AP4_AvccAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AvccAtom.cpp:95:16
    #4 0x5cb2e2 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:477:20
    #5 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #6 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #7 0x575855 in AP4_SampleEntry::Read(AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:115:9
    #8 0x57d624 in AP4_VisualSampleEntry::AP4_VisualSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:742:5
    #9 0x57d624 in AP4_AvcSampleEntry::AP4_AvcSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:994
    #10 0x5cbf58 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:305:24
    #11 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #12 0x586a2c in AP4_StsdAtom::AP4_StsdAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:100:13
    #13 0x58566f in AP4_StsdAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:56:16
    #14 0x5ca71c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:422:20
    #15 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #16 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #17 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #18 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #19 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #20 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #21 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #22 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #23 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #24 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #25 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #26 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #27 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #28 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #29 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #30 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #31 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #32 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #33 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #34 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #35 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #36 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #37 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #38 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #39 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #40 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #41 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #42 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #43 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #44 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #45 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #46 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #47 0x7f1552e11680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #48 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

0x606000000174 is located 0 bytes to the right of 52-byte region [0x606000000140,0x606000000174)
allocated by thread T0 here:
    #0 0x53dfb0 in operator new[](unsigned long) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.1/work/compiler-rt-4.0.1.src/lib/asan/asan_new_delete.cc:84
    #1 0x54c887 in AP4_DataBuffer::AP4_DataBuffer(unsigned int) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4DataBuffer.cpp:55:16
    #2 0x5d1690 in AP4_AvccAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AvccAtom.cpp:69:20
    #3 0x5cb2e2 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:477:20
    #4 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #5 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #6 0x575855 in AP4_SampleEntry::Read(AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:115:9
    #7 0x57d624 in AP4_VisualSampleEntry::AP4_VisualSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:742:5
    #8 0x57d624 in AP4_AvcSampleEntry::AP4_AvcSampleEntry(unsigned int, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4SampleEntry.cpp:994
    #9 0x5cbf58 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:305:24
    #10 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #11 0x586a2c in AP4_StsdAtom::AP4_StsdAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:100:13
    #12 0x58566f in AP4_StsdAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StsdAtom.cpp:56:16
    #13 0x5ca71c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:422:20
    #14 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #15 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #16 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #17 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #18 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #19 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #20 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #21 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #22 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #23 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #24 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #25 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #26 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #27 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #28 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #29 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #30 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #31 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #32 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #33 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #34 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.1/work/compiler-rt-4.0.1.src/lib/asan/asan_interceptors.cc:453 in __asan_memcpy
Shadow bytes around the buggy address:
  0x0c0c7fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c0c7fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c0c7fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c0c7fff8000: fa fa fa fa 00 00 00 00 00 00 02 fa fa fa fa fa
  0x0c0c7fff8010: 00 00 00 00 00 00 00 fa fa fa fa fa 00 00 00 00
=>0x0c0c7fff8020: 00 00 00 fa fa fa fa fa 00 00 00 00 00 00[04]fa
  0x0c0c7fff8030: fa fa fa fa 00 00 00 00 00 00 04 fa fa fa fa fa
  0x0c0c7fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0c7fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0c7fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c0c7fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==20986==ABORTING

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
https://github.com/axiomatic-systems/Bento4/commit/53499d8d4c69142137c7c7f0097a444783fdeb90

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00342-bento4-heapoverflow-AP4_DataBuffer_SetData

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: heap-based buffer overflow in AP4_DataBuffer::SetData (Ap4DataBuffer.cpp)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
==1966==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x617000000324 at pc 0x000000690d51 bp 0x7ffc25bed310 sp 0x7ffc25bed308
READ of size 1 at 0x617000000324 thread T0
    #0 0x690d50 in AP4_BytesToUInt32BE(unsigned char const*) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Utils.h:78:22
    #1 0x690d50 in AP4_StszAtom::AP4_StszAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StszAtom.cpp:85
    #2 0x69036e in AP4_StszAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StszAtom.cpp:51:16
    #3 0x5ca79a in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:442:20
    #4 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #5 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #6 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #7 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #8 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #9 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #10 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #11 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #12 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #13 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #14 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #15 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #16 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #17 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #18 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #19 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #20 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #21 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #22 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #23 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #24 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #25 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #26 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #27 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #28 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #29 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #30 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #31 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #32 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #33 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #34 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #35 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #36 0x7f3271712680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #37 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

0x617000000324 is located 0 bytes to the right of 676-byte region [0x617000000080,0x617000000324)
allocated by thread T0 here:
    #0 0x53dfb0 in operator new[](unsigned long) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.1/work/compiler-rt-4.0.1.src/lib/asan/asan_new_delete.cc:84
    #1 0x6909e3 in AP4_StszAtom::AP4_StszAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StszAtom.cpp:78:33
    #2 0x69036e in AP4_StszAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4StszAtom.cpp:51:16
    #3 0x5ca79a in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:442:20
    #4 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #5 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #6 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #7 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #8 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #9 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #10 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #11 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #12 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #13 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #14 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #15 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #16 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #17 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #18 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #19 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #20 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #21 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #22 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #23 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #24 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #25 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #26 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #27 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #28 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #29 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #30 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #31 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #32 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #33 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #34 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Utils.h:78:22 in AP4_BytesToUInt32BE(unsigned char const*)
Shadow bytes around the buggy address:
  0x0c2e7fff8010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c2e7fff8020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c2e7fff8030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c2e7fff8040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c2e7fff8050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c2e7fff8060: 00 00 00 00[04]fa fa fa fa fa fa fa fa fa fa fa
  0x0c2e7fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c2e7fff8080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c2e7fff8090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c2e7fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c2e7fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==1966==ABORTING
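The SUMMARY above points at AP4_BytesToUInt32BE in Ap4Utils.h, a plain 4-byte big-endian read that takes no length argument: the caller must guarantee four readable bytes, and here the stsz sample-size table (allocated above as a 676-byte region, i.e. 169 four-byte entries) is apparently read one entry past its end. As a minimal sketch of the pattern only (not Bento4's actual code; ReadUInt32BE and its parameters are hypothetical), the unchecked read and a bounds-checked variant look like:

```cpp
#include <cstdint>
#include <cstddef>

// Unchecked big-endian read, similar in shape to AP4_BytesToUInt32BE:
// it always touches bytes[0..3], so the caller must guarantee that
// four bytes are readable (simplified sketch, not Bento4 code).
static uint32_t BytesToUInt32BE(const uint8_t* bytes) {
    return (uint32_t(bytes[0]) << 24) |
           (uint32_t(bytes[1]) << 16) |
           (uint32_t(bytes[2]) <<  8) |
            uint32_t(bytes[3]);
}

// Hypothetical checked variant: refuses to read past the buffer end,
// so a table index derived from attacker-controlled atom fields cannot
// walk off the allocation.
static bool ReadUInt32BE(const uint8_t* buf, size_t size, size_t offset,
                         uint32_t& value) {
    if (offset > size || size - offset < 4) return false; // would overflow
    value = BytesToUInt32BE(buf + offset);
    return true;
}
```

With this shape, a parser iterating a sample table validates each offset before reading instead of trusting the entry count from the file.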

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
https://github.com/axiomatic-systems/Bento4/commit/5eb8cf89d724ccb0b4ce5f24171ec7c11f0a7647

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00341-bento4-heapoverflow-AP4_BytesToUInt32BE

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: heap-based buffer overflow in AP4_BytesToUInt32BE (Ap4Utils.h)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
==10603==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000000af at pc 0x000000622588 bp 0x7ffccfc80f10 sp 0x7ffccfc80f08                                                                         
WRITE of size 1 at 0x6020000000af thread T0                                                                                                                                                                       
    #0 0x622587 in AP4_HdlrAtom::AP4_HdlrAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:87:21                                             
    #1 0x621f4e in AP4_HdlrAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:51:16                                                                                
    #2 0x5cae91 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:387:20             
    #3 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14                                        
    #4 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12                                       
    #5 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5               
    #6 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87                       
    #7 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20             
    #8 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14                                        
    #9 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12                                       
    #10 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5              
    #11 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5                                                       
    #12 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20                                                               
    #13 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377               
    #14 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14                                       
    #15 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12                                      
    #16 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5              
    #17 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5                                                        
    #18 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20                                                               
    #19 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357               
    #20 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14                                       
    #21 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #22 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #23 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #24 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #25 0x7f37dafa8680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #26 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

0x6020000000af is located 1 bytes to the left of 1-byte region [0x6020000000b0,0x6020000000b1)
allocated by thread T0 here:
    #0 0x53dfb0 in operator new[](unsigned long) /var/tmp/portage/sys-libs/compiler-rt-sanitizers-4.0.1/work/compiler-rt-4.0.1.src/lib/asan/asan_new_delete.cc:84
    #1 0x6223fa in AP4_HdlrAtom::AP4_HdlrAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:85:18
    #2 0x621f4e in AP4_HdlrAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:51:16
    #3 0x5cae91 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:387:20
    #4 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #5 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #6 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #7 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #8 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #9 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #10 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #11 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #12 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #13 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #14 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #15 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #16 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #17 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #18 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #19 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #20 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #21 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #22 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #23 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #24 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #25 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #26 0x7f37dafa8680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289

SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:87:21 in AP4_HdlrAtom::AP4_HdlrAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&)
Shadow bytes around the buggy address:
  0x0c047fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c047fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c047fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c047fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0c047fff8000: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
=>0x0c047fff8010: fa fa 04 fa fa[fa]01 fa fa fa fa fa fa fa fa fa
  0x0c047fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c047fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==10603==ABORTING
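A WRITE of size 1, landing "1 bytes to the left" of a 1-byte region allocated two lines earlier in the same AP4_HdlrAtom constructor, is the classic shape of a size-field underflow: a handler-name length derived by subtracting a fixed header size from the attacker-controlled atom size, which wraps around when the atom is shorter than the header. The sketch below shows that pattern only as a plausible root cause, not Bento4's confirmed code; HEADER_SIZE and both helper names are hypothetical.

```cpp
#include <cstdint>

// Assumed fixed overhead of the atom before its name payload;
// illustrative only, not Bento4's actual constant.
static const uint32_t HEADER_SIZE = 32;

// Unsafe: the unsigned subtraction wraps to a huge value when
// atom_size < HEADER_SIZE, so the subsequent allocation and the
// terminating write land outside the intended buffer.
static uint32_t PayloadSizeUnchecked(uint32_t atom_size) {
    return atom_size - HEADER_SIZE;
}

// Safe: reject undersized atoms before subtracting, so the payload
// length can never wrap.
static bool PayloadSizeChecked(uint32_t atom_size, uint32_t& payload) {
    if (atom_size < HEADER_SIZE) return false; // truncated/lying atom
    payload = atom_size - HEADER_SIZE;
    return true;
}
```

The checked form turns a malformed atom into a parse error instead of an out-of-bounds write.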

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
The maintainer said that one of the earlier commits fixed this issue; a bisect is needed to identify which one.

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00340-bento4-heapoverflow-AP4_HdlrAtom_AP4_HdlrAtom

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: heap-based buffer overflow in AP4_HdlrAtom::AP4_HdlrAtom (Ap4HdlrAtom.cpp)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
ASAN:DEADLYSIGNAL
=================================================================
==18215==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f23fa12110e bp 0x000000000017 sp 0x7fff671b9178 T0)
==18215==The signal is caused by a WRITE memory access.
==18215==Hint: address points to the zero page.
    #0 0x7f23fa12110d  /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/string/../sysdeps/x86_64/memcpy.S:71
    #1 0x7f23fa10febd in __GI__IO_file_xsgetn /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/libio/fileops.c:1392
    #2 0x7f23fa10520f in fread /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/libio/iofread.c:38
    #3 0x5b6557 in AP4_StdcFileByteStream::ReadPartial(void*, unsigned int, unsigned int&) /tmp/Bento4-1.5.0-617/Source/C++/System/StdC/Ap4StdCFileByteStream.cpp:237:14
    #4 0x544473 in AP4_ByteStream::Read(void*, unsigned int) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ByteStream.cpp:55:29
    #5 0x622427 in AP4_HdlrAtom::AP4_HdlrAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:86:12
    #6 0x621f4e in AP4_HdlrAtom::Create(unsigned int, AP4_ByteStream&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4HdlrAtom.cpp:51:16
    #7 0x5cae91 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:387:20
    #8 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #9 0x617c17 in AP4_DrefAtom::AP4_DrefAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4DrefAtom.cpp:83:16
    #10 0x617329 in AP4_DrefAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4DrefAtom.cpp:49:16
    #11 0x5c90ae in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:529:20
    #12 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #13 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #14 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #15 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #16 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #17 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #18 0x617c17 in AP4_DrefAtom::AP4_DrefAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4DrefAtom.cpp:83:16
    #19 0x617329 in AP4_DrefAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4DrefAtom.cpp:49:16
    #20 0x5c90ae in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:529:20
    #21 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #22 0x60c29f in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #23 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #24 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #25 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #26 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #27 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #28 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #29 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #30 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #31 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #32 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #33 0x60b1d2 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #34 0x60b1d2 in AP4_ContainerAtom::Create(unsigned int, unsigned long long, bool, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:87
    #35 0x5ca44c in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:751:20
    #36 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #37 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #38 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #39 0x58e6ed in AP4_TrakAtom::AP4_TrakAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.cpp:165:5
    #40 0x5c8e3b in AP4_TrakAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4TrakAtom.h:58:20
    #41 0x5c8e3b in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:377
    #42 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #43 0x60c561 in AP4_ContainerAtom::ReadChildren(AP4_AtomFactory&, AP4_ByteStream&, unsigned long long) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:193:12
    #44 0x60c099 in AP4_ContainerAtom::AP4_ContainerAtom(unsigned int, unsigned long long, bool, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.cpp:138:5
    #45 0x5521b0 in AP4_MoovAtom::AP4_MoovAtom(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.cpp:79:5
    #46 0x5cad1d in AP4_MoovAtom::Create(unsigned int, AP4_ByteStream&, AP4_AtomFactory&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:56:20
    #47 0x5cad1d in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:357
    #48 0x5c7fbd in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:220:14
    #49 0x5c75c0 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomFactory.cpp:150:12
    #50 0x54ea2c in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:104:12
    #51 0x54f0fa in AP4_File::AP4_File(AP4_ByteStream&, bool) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:78:5
    #52 0x542552 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:242:32
    #53 0x7f23fa0bf680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #54 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/string/../sysdeps/x86_64/memcpy.S:71 
==18215==ABORTING

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
https://github.com/axiomatic-systems/Bento4/commit/22192de5367fa0cee985917f092be4060b7c00b0

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00339-bento4-NULLptr-AP4_StdcFileByteStream_ReadPartial

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: NULL pointer dereference in AP4_StdcFileByteStream::ReadPartial (Ap4StdCFileByteStream.cpp)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
ASAN:DEADLYSIGNAL
=================================================================
==11595==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000005b27fe bp 0x7ffce60a67e0 sp 0x7ffce60a67c0 T0)
==11595==The signal is caused by a READ memory access.
==11595==Hint: address points to the zero page.
    #0 0x5b27fd in AP4_DataAtom::~AP4_DataAtom() /tmp/Bento4-1.5.0-617/Source/C++/MetaData/Ap4MetaData.cpp:1357:5
    #1 0x5b27fd in AP4_DataAtom::~AP4_DataAtom() /tmp/Bento4-1.5.0-617/Source/C++/MetaData/Ap4MetaData.cpp:1356
    #2 0x5bf8d4 in AP4_List::DeleteReferences() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4List.h:476:9
    #3 0x5bf8d4 in AP4_AtomParent::~AP4_AtomParent() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Atom.cpp:512
    #4 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48:7
    #5 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48
    #6 0x5bf8d4 in AP4_List::DeleteReferences() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4List.h:476:9
    #7 0x5bf8d4 in AP4_AtomParent::~AP4_AtomParent() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Atom.cpp:512
    #8 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48:7
    #9 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48
    #10 0x5bf8d4 in AP4_List::DeleteReferences() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4List.h:476:9
    #11 0x5bf8d4 in AP4_AtomParent::~AP4_AtomParent() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Atom.cpp:512
    #12 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48:7
    #13 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48
    #14 0x5bf8d4 in AP4_List::DeleteReferences() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4List.h:476:9
    #15 0x5bf8d4 in AP4_AtomParent::~AP4_AtomParent() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Atom.cpp:512
    #16 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48:7
    #17 0x60e6d8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48
    #18 0x5bf8d4 in AP4_List::DeleteReferences() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4List.h:476:9
    #19 0x5bf8d4 in AP4_AtomParent::~AP4_AtomParent() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Atom.cpp:512
    #20 0x553af8 in AP4_ContainerAtom::~AP4_ContainerAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4ContainerAtom.h:48:7
    #21 0x553af8 in AP4_MoovAtom::~AP4_MoovAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:47
    #22 0x553af8 in AP4_MoovAtom::~AP4_MoovAtom() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4MoovAtom.h:47
    #23 0x5bf8d4 in AP4_List::DeleteReferences() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4List.h:476:9
    #24 0x5bf8d4 in AP4_AtomParent::~AP4_AtomParent() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Atom.cpp:512
    #25 0x54f634 in AP4_File::~AP4_File() /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4File.cpp:85:1
    #26 0x5433c4 in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:292:5
    #27 0x7f0ba50e1680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #28 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/Bento4-1.5.0-617/Source/C++/MetaData/Ap4MetaData.cpp:1357:5 in AP4_DataAtom::~AP4_DataAtom()
==11595==ABORTING
Audio Track:
  duration: 7848 ms
  sample count: 16

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
https://github.com/axiomatic-systems/Bento4/commit/41cad602709436628f07b4c4f64e9ff7a611f687

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00338-bento4-NULLptr-AP4_DataAtom_AP4_DataAtom

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: NULL pointer dereference in AP4_DataAtom::~AP4_DataAtom (Ap4MetaData.cpp)

Description:
bento4 is a fast, modern, open source C++ toolkit for all your MP4 and MPEG DASH media format needs.

The complete ASan output of the issue:

# mp42aac $FILE out.aac
ASAN:DEADLYSIGNAL
=================================================================
==6365==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x0000005cf94c bp 0x7fff5857d580 sp 0x7fff5857d4c0 T0)
==6365==The signal is caused by a READ memory access.
==6365==Hint: address points to the zero page.
    #0 0x5cf94b in AP4_AtomSampleTable::GetSample(unsigned int, AP4_Sample&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomSampleTable.cpp
    #1 0x58d158 in AP4_Track::GetSample(unsigned int, AP4_Sample&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Track.cpp:435:43
    #2 0x58d158 in AP4_Track::ReadSample(unsigned int, AP4_Sample&, AP4_DataBuffer&) /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4Track.cpp:469
    #3 0x5430ad in WriteSamples(AP4_Track*, AP4_SampleDescription*, AP4_ByteStream*) /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:192:12
    #4 0x5430ad in main /tmp/Bento4-1.5.0-617/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:274
    #5 0x7f41deb72680 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.23-r4/work/glibc-2.23/csu/../csu/libc-start.c:289
    #6 0x44f3f8 in _start (/usr/bin/mp42aac+0x44f3f8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /tmp/Bento4-1.5.0-617/Source/C++/Core/Ap4AtomSampleTable.cpp in AP4_AtomSampleTable::GetSample(unsigned int, AP4_Sample&)
==6365==ABORTING
Audio Track:
  duration: 7848 ms
  sample count: 169

Affected version:
1.5.0-617

Fixed version:
N/A

Commit fix:
https://github.com/axiomatic-systems/Bento4/commit/2f267f89f957088197f4b1fc254632d1645b415d

Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.

CVE:
Waiting for a CVE assignment

Reproducer:
https://github.com/asarubbo/poc/blob/master/00337-bento4-NULLptr-AP4_AtomSampleTable_GetSample

Timeline:
2017-09-08: bug discovered and reported to upstream
2017-09-14: blog post about the issue

Note:
This bug was found with American Fuzzy Lop.
This bug was identified with bare metal servers donated by Packet. This work is also supported by the Core Infrastructure Initiative.

Permalink:

bento4: NULL pointer dereference in AP4_AtomSampleTable::GetSample (Ap4AtomSampleTable.cpp)

September 13, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The breadwinner product (September 13, 2017, 12:04 UTC)

This may feel like a bit of a random post, as Business and Economics are not my areas of expertise and I usually do my best not to talk about stuff I don’t know, but I have seen complete disregard for this concept lately, and I thought it would be a good starting point to define here, before I talk about it, what a “breadwinner product” is, from my point of view.

The term breadwinner is used generally to refer to the primary income-earner in a household. While I have not seen this very often extended to products and services in companies, I think it should be fairly obvious how the extension would work.

In a world of startups there are still plenty of companies that have a real “breadwinner product”, even when acting as startups. This is the case, for instance, of the company I used to contract out for, in Los Angeles: they had been in business for a number of years with a different, barely related product, and I was contracting out for their new project.

I think it’s important to think of this term, because without having this concept in mind, it’s hard to understand a lot of business decisions of many companies, why startups such as Revolut are “sweeping up the market” and so on.

This is something that came up on Twitter a time or two: a significant number of geeks appear to wilfully ignore the needs of a business, treat marketing concepts as words of the devil, and refuse to consider whether decisions made business sense; instead they either judge decisions purely on technical merits, or even just on their own direct interests. Now it is true that technical merits can make good business sense, but sometimes there are very good long-term vision reasons that people don’t appreciate from a purely technical point of view.

In particular, sometimes it’s hard to understand why a service by a company that may appear as a startup is based on “old” technology, but it may just be the case that it is actually a “traditional” company trying to pivot into a different market or a different brand or level of service. And when that happens, there’s at least some gravity pull to keep the stack in line with the previous technology. Particularly if the new service can piggyback on the old one for a while, both in terms of revenue, technology and staff.

So in the case of the company I referred to above, when I started contracting out they were already providing a separate service, built on real legacy technology and running on a stack of bare metal servers with RedHat 5. Since the new service had two components, one of them ended up being based on the same stack, and the other one (which I was setting up) ended up based on Gentoo Linux with containers instead. The same way as the Tinderbox used to be run. If you wonder why one would run two such separate stacks, the answer is that messing with the breadwinner product is, most of the time, a risky endeavour, and unless you have a very good reason to do so, you don’t.

So even though I was effectively building a new architecture from scratch, and was setting up new servers with more proper monitoring (based on Munin and Icinga) and Puppet for configuration management, I was not allowed to touch the old service. And rightly so, as it was definitely brittle and touching it would have led to actually losing money: that service was running in production, while the new one was not ready yet, and its few users could be told about maintenance windows in advance.

There is often a tipping point though, when the cost of running a legacy service is higher than the revenue the service is bringing in. For that company that happened right as I was leaving it to start working at my current place of work. The owner, though, was more business savvy than many other people I have met before and since, and was actually already planning how to cut some expenses. Indeed the last thing I helped that company with was setting up a single1 bare metal server with multiple containers to virtualise their formerly fully bare metal hardware, and bringing it physically to a new location (Fremont, CA) to cut hosting costs.

The more the breadwinner service is making money, and the less the company is experimenting with alternative approaches to cut costs in the future, build up new services, or open new market opportunities, the harder working for those companies becomes. Of all the possible things I could complain about regarding my boss at the time, the ability to deal with business details was not one of them. Actually, I think that despite leaving me in quite the bad predicament afterwards, he did end up teaching me quite a bit of the nitty-gritty details of doing business, particularly US-style — and I may not entirely like it either.

But all in all, I think this is something lots more people in tech should learn about. Because I still maintain that Free Software can only be marketed by businesses, and to have your project cater to business users without selling its soul, you need to be able to tell what they need and how they need it provided.


  1. Okay, actually a bit more than one: a single machine ran the production environment for the legacy service, and acted as warm-backup for the new service; another machine ran the production environment for the new service, and acted as warm-backup for the legacy one. A pair of the older bare metal servers acted as database backends for both systems. [return]

September 11, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Authenticating with U2F (September 11, 2017, 16:25 UTC)

In order to further secure access to my workstation, after the switch to Gentoo sources, I now enabled two-factor authentication through my Yubico U2F USB device. Well, at least for local access - remote access through SSH requires both userid/password and the correct SSH key, by chaining authentication methods in OpenSSH.

Enabling U2F on (Gentoo) Linux is fairly easy. The various guides online which talk about the pam_u2f setup are indeed correct that it is fairly simple. For completeness sake, I've documented what I know on the Gentoo Wiki, as the pam_u2f article.

The setup, basically

The setup of U2F is done in a number of steps:

1. Validate that the kernel is ready for the USB device
2. Install the PAM module and supporting tools
3. Generate the necessary data elements for each user (keys and such)
4. Configure PAM to require authentication through the U2F key

For the kernel, the configuration item needed is the raw HID device support. Now, in current kernels, two settings are available that both talk about raw HID device support: CONFIG_HIDRAW is the general raw HID device support, while CONFIG_USB_HIDDEV is the USB-specific raw HID device support.

It is very well possible that only a single one is needed, but both were active in my kernel configuration already, and Internet sources are not clear on which one is needed, so let's assume for now that both are.

Next, the PAM module needs to be installed. On Gentoo, this is a matter of installing the pam_u2f package, as the necessary dependencies will be pulled in automatically:

~# emerge pam_u2f

Next, for each user, a registration has to be made. This registration is needed for the U2F components to be able to correctly authenticate the use of a U2F key for a particular user. This is done with pamu2fcfg:

~$ pamu2fcfg -u<username> > ~/.config/Yubico/u2f_keys

The U2F USB key must be plugged in when the command is executed, as a successful keypress (on the U2F device) is needed to complete the operation.
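
If you keep a spare key (as I do, see below), the same u2f_keys file can hold multiple registrations for one user. A possible sketch, assuming your pamu2fcfg version supports the -n (--nouser) option to omit the "username:" prefix on subsequent keys (check pamu2fcfg --help):

~$ pamu2fcfg -u<username> > ~/.config/Yubico/u2f_keys
(unplug the first key, plug in the spare)
~$ pamu2fcfg -n >> ~/.config/Yubico/u2f_keys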

Finally, enable the use of the pam_u2f module in PAM. On my system, this is done through the /etc/pam.d/system-local-login PAM configuration file, which is used by all local logon services.

auth     required     pam_u2f.so
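
pam_u2f supports a number of options as well. A slightly more explicit variant (a sketch based on options documented for pam_u2f; verify against your installed version) would be:

auth     required     pam_u2f.so cue

The cue option makes the module print a reminder to touch the key when authentication is requested. Other documented options include authfile (an explicit path to the key list, instead of the per-user default) and nouserok (do not fail for users that have no key registered yet, which is convenient while rolling the setup out).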

Consider the problems you might face

When fiddling with PAM, it is important to keep in mind what could fail. During the setup, it is recommended to have an open administrative session on the system so that you can validate if the PAM configuration works, without locking yourself out of the system.

But other issues need to be considered as well.

My Yubico U2F USB key might have a high MTBF (Mean Time Between Failures) value, but once it fails, it would lock me out of my workstation (and even remote services and servers that use it). For that reason, I own a second one, safely stored, which is a valid key nonetheless for my workstation and remote systems/services. Given the low cost of a simple U2F key, it is a simple solution for this threat.

Another issue that could come up is a malfunction in the PAM module itself. For me, this is handled by having remote SSH access done without this PAM module (although other PAM modules are still involved, so a generic PAM failure itself wouldn't be worked around by this). Of course, worst case, the system needs to be rebooted in single user mode.

One issue that I faced was the SELinux policy. Some applications that provide logon services don't have the proper rights to handle U2F, and because PAM just works in the address space (and thus SELinux domain) of the application, the necessary privileges need to be added to these services. My initial investigation revealed the following necessary policy rules (refpolicy-style):

udev_search_pids(...)
udev_read_db(...)
dev_rw_generic_usb_dev(...)

The first two rules are needed because the operation to trigger the USB key uses the udev tables to find out where the key is located/attached, before it interacts with it. The interaction itself is then controlled through the third rule.
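
On a refpolicy-based system such rules would typically go into a small local policy module. A sketch (the module name is illustrative, and xdm_t stands in for whichever domain your logon service runs in):

policy_module(local_u2f, 1.0)

gen_require(`
        type xdm_t;
')

udev_search_pids(xdm_t)
udev_read_db(xdm_t)
dev_rw_generic_usb_dev(xdm_t)

The module can then be built with the usual refpolicy development Makefile and loaded with semodule -i.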

Simple yet effective

Enabling U2F authentication on the system is very simple, and gives higher confidence that malicious activities through regular accounts will have a somewhat more challenging time switching to a more privileged session (one control is the SELinux policy of course, but for those domains that are allowed to switch, the PAM-based authentication is another control), as even eavesdropping on my password (or extracting it from memory) won't suffice to perform a successful authentication.

If you want to use a different two-factor authentication, check out the use of the Google authenticator, another nice article on the Gentoo wiki. It is also possible to use Yubico keys for remote authentication, but that uses the OTP (One Time Password) functionality which isn't active on the Yubico keys that I own.

September 10, 2017
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)


If you're reading this, the last act in this drama (see the previous blog post) was that in Patras a friendly employee from Hertz picked up the rental car to bring it to a repair workshop. A bit later than planned, but nevertheless. Now the story continues.

  • About 20:00 the same day I get a phone call from the same lady, telling me that my car was ready and that we could meet in about 20min next to my hotel so I could pick it up again. That sounded great to me. Some minutes later I saw the car coming.
  • Of course I wanted to try out the repaired roof / window immediately, so we did that. Opened the roof, closed the roof. The passenger side window did not close; precisely the same phenomenon. Oops.
  • I tried a few more times on instruction by the Hertz employee, with the result that the window got stuck at half height and did not move anymore even after shutting down and restarting the ignition. Since it was stuck on the wrong side of its rubber seal, also the passenger door did not open anymore.
  • The visibly nervous Hertz employee calls her manager on the mobile, who arrives after a few minutes. The manager opens the passenger door with application of force. Afterwards, and after restarting the engine, the window slides up again.
  • We have some discussion about a replacement car, where I point out that I paid a lot of money for having a convertible, and really want one. I agree to come to the office Thursday morning to sort things out.
  • Next morning, Thursday, at the Hertz office, I'm glad to learn that a replacement car will be sent. Of course, I'm now leaving Patras, so the car will have to be sent to a station near my next stops.
  • We discuss this and agree that I will pick the car up tomorrow (Friday) afternoon in Kalamata (which is only about 80km from my Friday evening hotel in Kyparissia).
Oh well. Glad that things are somehow sorted out. I spend the rest of the day visiting a Mycenaean castle (1200 BC), a Frankish castle (1200 AD), and re-visiting Olympia, spend the night near Olympia, and then start towards Kalamata through the mountains via, e.g., the excavations of ancient Messene. Somewhere on the way I realize that the Kalamata Hertz offices (according to the website) are closed from 14:00 to 17:00, so I plan on arriving there around 18:00. That's ample time, since they should be open 17:00 - 21:00 (search for Kalamata here).
  • Arrive 18:10 at the Kalamata city office. Nobody there, and there's a sign on the door saying "We are at Kalamata Airport."
  • Drive back the ~10km to the airport (which I passed on the way before). Arrive there around 18:30. The entire airport is already closed for the day. No Hertz employees in sight.
  • Call the Kalamata office. First response, "We closed half an hour ago." When I start explaining my problem, the lady on the phone says "But your car has not arrived from Athens yet!" I point out that I have to go back to Kyparissia, quite some way, today. She doesn't know when it will arrive, but says something about late evening.
  • I tell her I will now get dinner here in Kalamata, and afterwards call her again.
That's where we are now. Just as a reminder, it's now Friday evening, and the problem has essentially been known to Hertz since last Sunday.

Update:
  • Tried calling the Hertz Kalamata office again around 20:45. No response, after a while some mailbox text in Greek. 
  • Drove back the 60km to Kyparissia, arrived at the hotel 22:00. Will call Hertz again tomorrow.
Update 2: Yes, Hertz knows my mobile phone number. It's big and fat on my contract, and I also gave it again and reconfirmed it to the employee at Patras. So, one could assume if something goes wrong they phone me...

Update 3: It ends well. See the next post.

    Fun with Hertz car rentals, part 3 (it ends well) (September 10, 2017, 19:37 UTC)

    If you've read the last part, I had just arrived at my hotel in Kyparissia late in the night, slightly fuming. Well...

    Next morning, Saturday, around 10:15 somebody called my mobile phone. For some reason I didn't notice, but only got a text notification of a missed call an hour later. I called back; it turns out this was the Kalamata airport Hertz office. "Your replacement car has arrived; you can pick it up anytime."

    I arranged to come by around 16:00 in the afternoon, and from here on everything went smoothly. Now I'm driving a white BMW Mini convertible, and the roof and windows work just fine.

    In the end, obviously I'm quite happy that a replacement car was driven from Athens to Kalamata and that I can now continue with my vacation as planned. The path that led to that outcome, however, was not so great...

    September 09, 2017
    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    Dell XPS 13, problems with WiFi (September 09, 2017, 18:04 UTC)

    A couple of months ago I bought a Dell XPS 13. I’m still very happy with the laptop, particularly given the target use that I have for it, but I have started noticing a list of problems that do bother me more than a little bit.

    The first problem is something that I have spoken of in the original post and updated a couple of times: the firmware (“BIOS”) update. While the firmware is actually published through LVFS by Dell, either Antergos or Arch Linux has some configuration issue with EFI and the System Partition that causes the EFI shim not to be able to find the right capsule. I ended up just running the update manually twice now, since I didn’t want to spare the time to fix the packaging of the firmware updater, and trying different firmware updates is not easy.

    Also, while the new firmware updates made the electrical whining noise effectively disappear, making the laptop very nice to use in quiet hotel rooms (not all hotel rooms are quiet), it seems to have triggered more WiFi problems. Indeed, it got to the point that I could not use the laptop at home at all. I’m not sure what exactly was the problem, but my Linksys WRT1900ACv2 seems to trigger known problems with the WiFi card on this model.

    At first I thought it would be a problem with using Arch Linux rather than Dell’s own Ubuntu image, which appeared to have separate Qualcomm drivers for the ath10k card. But it turns out the same error pops up repeatedly in Dell forums and on LaunchPad too. A colleague with the same laptop suggested to just replace the card, getting rid of the whole set of problems introduced by the ath10k driver. Indeed, even looking around the Windows users’ websites, the recommendation appears to be the same: just replace your card.

    The funny bit is that I only really noticed this when I came back from my long August trips, because since I bought the laptop, I hadn’t spent more than a few days at home at that point. I have been in Helsinki, Vancouver and Seattle, used the laptop in airports, lounges, hotels and cafes, as well as my office. And none of those places had any issue with my laptop. I used the laptop extensively to livetweet SREcon Europe from the USENIX wireless at the hotel, and it had no problem whatsoever.

    My current theory for this is that there is some mostly-unused feature that is triggered by high-performance access points like the one I have at home, which runs LEDE, and as such is not something you’ll encounter in the wild. This would also explain why the Windows sites that I found referencing the problem are suggesting the card replacement — your average Windows user is unlikely to know how to do so, or to be interested in a solution that does not involve shipping the device back to Dell. And to be fair they probably have a point: why on earth are they selling laptops with crappy WiFi cards?

    So anyway my solution to this was to order an Intel 8265 wireless card which includes the same 802.11ac dual-band support and Bluetooth 4.2, and is the same format as the ath10k that the laptop comes with. It feels a bit strange having to open up a new laptop to replace a component, but since this is the serviceable version of Dell, it was not a horrible experience (my Vostro laptop still has a terrible 802.11g 2.4GHz-only card on it, but I can’t replace it easily).

    Moving on to something else, the USB-C dock is working great, although I found out the hard way that if you ask Plasma (or whatever else it was that I ended up asking) not to put the laptop to sleep the moment the lid is closed while the power is connected (which I need so that I can use the laptop “docked” onto my usual work-from-home setup), it also does not go to sleep if the power is subsequently disconnected. So the short version is that I now usually run the laptop without the power connected unless it’s already running low, and I can easily stay a whole day at a conference without charging, which is great!

    Speaking of charging, it turns out that the Apple 65W USB-C charger also works great with the XPS 13. Unfortunately it comes without a cable, and particularly with Apple USB-C cables your mileage may vary. It seems to be fine with the Google Pixel phone cable though. I have not tried measuring how much power it draws or which power mode it uses, among other things because I wouldn’t know how to query the USB-C controller to get that information. If you have suggestions I’m all ears.

    Otherwise the laptop appears to be working great for me. I only wish I could wake it up from sleep without opening it, when using it docked, but that’s also a minor feature.

    The remaining problems are software. For instance Plasma sometimes crashes when I dock the laptop, and the new monitor comes online. And I can’t reboot while docked because the external keyboard (connected on the USB-C dock) is not able to type in the password for the full-disk encryption. Again this is a bother but not a big deal.

    September 08, 2017
    Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)


    So, I decided to get myself a rather expensive treat this summer. For travelling the Peloponnese I rented a Mini Cooper convertible. These are really cute, and driving around in the sun with the roof open felt like a very nice idea. I'm a Hertz Gold Club customer, so why not go for Hertz again.

    I picked up the car in Athens, all looked fine. The first day I had some longer driving to do, and also the manual was only in Greek, so I decided to drive to my first stop and check out the convertible roof there. OK, with some fiddling I found and read a German manual on the BMW website (now I know where to find the VIN number, if anyone asks :), opened the roof, enjoyed half a day in the mountains near Kalavrita.

    Afterwards the passenger side window didn't close anymore.

    It turns out something was already bent or damaged inside the door, so the window was sliding up on the wrong side of its rubber seal. At some point it can't move any further, so the electronics stops and disables the window. The effect is perfectly reproducible, and scratch marks on the rubber seal and door frame indicate it's been doing that already for a while. Oh well.

    • Phoned the nearest Hertz office in Patras. After some complicated discussion in English they advised me to contact the office in Athens.
    • Phoned the Hertz office in Athens. I managed to explain the problem there. They said I should contact their central technical service office, since maybe they know something easy to do. 
    • Phoned the central technical service office. There the problem was quickly understood; a very helpful lady explained to me that most likely the car would have to be exchanged. Since it was Sunday afternoon, they couldn't do it now, but somebody would call me back on Monday morning 9-10.
    • Waited Monday morning for the call. Nothing happened. 
    • Phoned the central technical service office, Monday around 13:00. They asked me where I was. After telling them I'm going to Patras the next day, they told me I should come by their office there.
    • Arrived at the Patras office Tuesday around 17:30. I demonstrated the problem to the lady there. She acknowledged that something's broken, and told me she'd come to my hotel the next day between 11:00 and 12:00 to pick up the car and bring it to the BMW service for repair.
    • Now I'm sitting in the bar of the hotel, it's 12:30, no one has called or come by, and slowly I'm getting seriously annoyed.
    Let's see how the story continues...
    • Update: 13:00, friendly lady from Hertz picked up the car. Fingers crossed. Made clear it's a long rental, so delaying makes no sense. Wants to phone me either in the afternoon or tomorrow morning.
    • Update 2: The drama continues in the next blog post.

    September 07, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)
    In Search of a Secure Time Source (September 07, 2017, 15:07 UTC)

    All our computers and smartphones have an internal clock and need to know the current time. As configuring the time manually is annoying, it's common to set the time via Internet services. What tends to get forgotten is that a reasonably accurate clock is often a crucial part of security features like certificate lifetimes or features with expiration times like HSTS. Thus the timesetting should be secure - but usually it isn't.

    I'd like my systems to have a secure time. So I'm looking for a timesetting tool that fulfils two requirements:

    1. It provides authenticity of the time and is not vulnerable to man in the middle attacks.
    2. It is widely available on common Linux systems.

    Although these seem like trivial requirements, to my knowledge such a tool doesn't exist. And these are relatively loose requirements. One might want to add:
    1. The timesetting needs to provide a good accuracy.
    2. The timesetting needs to be protected against malicious time servers.

    Some people need a very accurate time source, for example for certain scientific use cases. But that's outside of my scope. For the vast majority of use cases a clock that is off by a few seconds doesn't matter. While it's certainly a good idea to consider rogue servers given the current state of things I'd be happy to have a solution where I simply trust a server from Google or any other major Internet entity.

    So let's look at what we have:

    NTP

    The common way of setting the clock is the NTP protocol. NTP itself has no transport security built in. It's a plaintext protocol open to manipulation and man in the middle attacks.

    There are two variants of "secure" NTP. "Autokey", an authenticated variant of NTP, is broken. There's also a symmetric authentication, but that is impractical for widespread use, as it would require negotiating a pre-shared key with the time server in advance.

    NTPsec and Ntimed

    In response to some vulnerabilities in the reference implementation of NTP two projects started developing "more secure" variants of NTP. Ntimed - a rewrite by Poul-Henning Kamp - and NTPsec, a fork of the original NTP software. Ntimed hasn't seen any development for several years, NTPsec seems active. NTPsec had some controversies with the developers of the original NTP reference implementation and its main developer is - to put it mildly - a controversial character.

    But none of that matters. Both projects don't implement a "secure" NTP. The "sec" in NTPsec refers to the security of the code, not to the security of the protocol itself. It's still just an implementation of the old, insecure NTP.

    Network Time Security

    There's a draft for a new secure variant of NTP - called Network Time Security. It adds authentication to NTP.

    However it's just a draft and it seems stalled. It hasn't been updated for over a year. In any case: It's not widely implemented and thus it's currently not usable. If that changes it may be an option.

    tlsdate

    tlsdate is a hack abusing the timestamp of the TLS protocol. The TLS timestamp of a server can be used to set the system time. This doesn't provide high accuracy, as the timestamp is only given in seconds, but it's good enough.

    I've used and advocated tlsdate for a while, but it has some problems. The timestamp in the TLS handshake doesn't really have any meaning within the protocol, so several implementers decided to replace it with a random value. Unfortunately that is also true for the default server hardcoded into tlsdate.

    Some Linux distributions still ship a package with a default server that will send random timestamps. The result is that your system time is set to a random value. I reported this to Ubuntu a while ago. It never got fixed, however the latest Ubuntu version, Zesty Zapus (17.04), doesn't ship tlsdate any more.

    Given that Google has shipped tlsdate in ChromeOS for some time, it seems unlikely that Google will send randomized timestamps any time soon. Thus if you use tlsdate with www.google.com it should work for now. But it's not a future-proof solution.

    TLS 1.3 removes the TLS timestamp, so this whole concept isn't future-proof; alternatively, tlsdate supports using an HTTPS timestamp. But the development of tlsdate has stalled and it hasn't seen any updates lately. It doesn't build with the latest version of OpenSSL (1.1), so it will likely become unusable soon.

    OpenNTPD

    The developers of OpenNTPD, the NTP daemon from OpenBSD, came up with a nice idea. NTP provides high accuracy, yet no security. Via HTTPS you can get a timestamp with low accuracy. So they combined the two: They use NTP to set the time, but they check whether the given time deviates significantly from an HTTPS host. So the HTTPS host provides safety boundaries for the NTP time.
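    The constraint logic boils down to a simple comparison, which can be sketched in a few lines of shell (my illustration, not OpenNTPD's actual code; the timestamps and the 60-second window are made-up example values):

```shell
# Sketch of the OpenNTPD-style sanity check: accept the NTP-derived time
# only if it stays within a window around the HTTPS-derived time.
ntp_time=1504612800     # example: time obtained via (insecure) NTP
https_time=1504612815   # example: time taken from an HTTPS Date header
window=60               # maximum allowed deviation, in seconds

delta=$(( ntp_time - https_time ))
delta=${delta#-}        # absolute value
if [ "$delta" -le "$window" ]; then
  echo "NTP time accepted"
else
  echo "NTP time rejected: ${delta}s outside the HTTPS bound"
fi
# prints: NTP time accepted
```

    The point of the design is that NTP contributes the accuracy, while HTTPS contributes the authenticity: a man-in-the-middle can still skew the NTP time a little, but only within the window the HTTPS check allows.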

    This would be really nice, if there wasn't a catch: This feature depends on an API only provided by LibreSSL, the OpenBSD fork of OpenSSL. So it's not available on most common Linux systems. (Also why doesn't the OpenNTPD web page support HTTPS?)

    Roughtime

    Roughtime is a Google project. It fetches the time from multiple servers and uses some fancy cryptography to make sure that malicious servers get detected. If a roughtime server sends a bad time then the client gets a cryptographic proof of the malicious behavior, making it possible to blame and shame rogue servers. Roughtime doesn't provide the high accuracy that NTP provides.

    From a security perspective it's the nicest of all solutions. However it fails the availability test. Google provides two reference implementations in C++ and in Go, but it's not packaged for any major Linux distribution. Google has an unfortunate tendency to use unusual dependencies and arcane build systems nobody else uses, so packaging it comes with some challenges.

    One line bash script beats all existing solutions

    As you can see none of the currently available solutions is really feasible and none fulfils the two mild requirements of authenticity and availability.

    This is frustrating given that it's a really simple problem. In fact, it's so simple that you can solve it with a single line bash script:

    date -s "$(curl -sI https://www.google.com/|grep -i 'date:'|sed -e 's/^.ate: //g')"

    This line sends an HTTPS request to Google, extracts the Date header from the response, and passes it to the date command-line utility.
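    The parsing step can be exercised in isolation on a canned response header, without any network access (the header value below is a made-up example, and the extraction is written slightly more defensively than the sed in the one-liner):

```shell
# Extract the Date header from a sample HTTP response and convert it to
# a Unix timestamp. Assumes GNU date (BSD date would need -j -f instead).
hdr=$(printf 'HTTP/1.1 200 OK\r\nDate: Tue, 05 Sep 2017 12:00:00 GMT\r\n')
ts=$(printf '%s\n' "$hdr" | tr -d '\r' | sed -n 's/^[Dd]ate: //p')
echo "$ts"              # Tue, 05 Sep 2017 12:00:00 GMT
date -u -d "$ts" +%s    # 1504612800
```

    Stripping the carriage returns matters in practice: HTTP headers end in CRLF, and a stray \r at the end of the string can confuse date's parser.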

    It provides authenticity via TLS. If the current system time is far off then this fails, as the TLS connection relies on the validity period of the current certificate. Google currently uses certificates with a validity of around three months. The accuracy is only in seconds, so it doesn't qualify for high accuracy requirements. There's no protection against a rogue Google server providing a wrong time.

    Another potential security concern may be that Google might attack the parser of the date setting tool by serving a malformed date string. However I ran american fuzzy lop against it and it looks robust.

    While this certainly isn't as accurate as NTP or as secure as roughtime, it's better than everything else that's available. I put this together in a slightly more advanced bash script called httpstime.

    September 06, 2017
    Greg KH a.k.a. gregkh (homepage, bugs)
    4.14 == This year's LTS kernel (September 06, 2017, 14:41 UTC)

    As the 4.13 release has now happened, the merge window for the 4.14 kernel release is now open. I mentioned this many weeks ago, but as the word doesn’t seem to have gotten very far based on various emails I’ve had recently, I figured I need to say it here as well.

    So, here it is officially, 4.14 should be the next LTS kernel that I’ll be supporting with stable kernel patch backports for at least two years, unless it really is a horrid release and has major problems. If so, I reserve the right to pick a different kernel, but odds are, given just how well our development cycle has been going, that shouldn’t be a problem (although I guess I just doomed it now…)

    As always, if people have questions about this, email me and I will be glad to discuss it, or talk to me in person next week at the LinuxCon^WOpenSourceSummit or Plumbers conference in Los Angeles, or at any of the other conferences I’ll be at this year (ELCE, Kernel Recipes, etc.)

    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    A selection of good papers from USENIX Security '17 (September 06, 2017, 10:04 UTC)

    I have briefly talked about Adrienne’s and April’s talk at USENIX Security 2017, but I have not given much light to other papers and presentations that got my attention at the conference. I thought I should do a round up of good content for this conference, and if I can manage, go back to it later.

    First of all, the full proceedings are available on the Program page of the conference. As usual, USENIX open access policy means that everybody has access to these proceedings, and since we’re talking academic papers, effectively everything I’m talking about is available to the public. I know that some videos were recorded, but I’m not sure when they will be published1.

    Before I go on to link you to interesting content and give brief comments on it, I would like to start with a complaint about academic papers. The proper name of the conference would be 26th USENIX Security Symposium, and it’s effectively an academic conference. This means that the content is all available in the form of papers. These papers are written, as usual, in LaTeX, and available as two-column PDFs, as is usual. Usual, but not practical. This is a perfect format to read the paper when doing so on actual paper. But the truth is that nowadays this content is almost exclusively read in digital form.

    I would love to be able to have an ePub version of the various papers to just load on an ebook reader, for instance2. But even just providing a clear HTML file would be an improvement! When reading these PDFs on a screen, you end up having to zoom in and move around a freaking lot because of the column format, and more than once that would be enough for me to stop caring and not read the paper unless I really have interest in it, and I think this is counterproductive.

    Since I already wrote about Measuring HTTPS Adoption on the Web, I should not go back to that particular presentation. Right after that one, though, Katharina Krombholz presented “I Have No Idea What I’m Doing” - On the Usability of Deploying HTTPS, which was definitely interesting in showing how complicated it still is to set up HTTPS properly, without even going into further advanced features such as HPKP, CSP and similar.

    And speaking of these, an old acquaintance of mine from university time3, Stefano Calzavara, presented CCSP: Controlled Relaxation of Content Security Policies by Runtime Policy Composition (my, what a mouthful!) and I really liked the idea. Effectively the idea behind this is that CSP is too complicated to use and is turning down a significant amount of people from implementing at least the basic parts of security policies. This fits very well with the previous talk, and with my experience. This blog currently depends on a few external resources and scripts, namely Google Analytics, Amazon OneLink, and Font Awesome, and I can’t really spend the time figuring out whether I can make all the changes all the time.

    In the same session as Stefano, Iskander Sanchez-Rola presented Extension Breakdown: Security Analysis of Browsers Extension Resources Control Policies, which easily sounded familiar to me, as it overlaps and extends my own complaint back in 2013 that browser extensions were becoming the next source of entropy for fingerprinting, replacing plugins. Since we had dinner with Stefano, Iskander and Igor (co-author of the paper above), we managed to have quite a chat on the topic. I’m glad to see that my hunches back in the day were not completely off and that there is more interest in fixing this kind of problem nowadays.

    Another interesting talk was Understanding the Mirai Botnet, which revealed one very interesting bit of information: the attack on Dyn that caused a number of outages just last year appears to have had as its target not the Dyn service itself but rather the Sony PlayStation Network, and should thus be looked at in the light of the previous attacks on that. This should remind everyone that just because you personally get something out of a certain attack, you should definitely not cheer it on; you may be the next target, even just as a bystander.

    Now, not all the talks were exceptional. In particular, I found See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Patterns Detection in Additive Manufacturing a bit… hype-driven. In the sense that the whole premise of considering 3D-printed sourcing as trusted by default, and then figuring out a minimal amount of validation, seemed to be stemming from the crowd that has been insisting that 3D printing is the future, for the past ten years or so. While it clearly is interesting, and it has a huge amount of use for prototyping, one-off designs and even cosplay, it does not seem like it got as far as people kept thinking it would. And at least from the talk and skimming the paper I couldn’t find a good explanation of how it compares against “classic” manufacturing trust.

    On a similar note, I found the out-of-band call verification system proposed by AuthentiCall: Efficient Identity and Content Authentication for Phone Calls not particularly enticing, as it appears to leave out all the details of the identity verification and trust system, and assumes a fairly North American point of view on the communication space.

    Of course I was interested in the talk about mobile payments, Picking Up My Tab: Understanding and Mitigating Synchronized Token Lifting and Spending in Mobile Payment, given my previous foray into related topics. It was indeed good, although the final answer of adding a QR code to do a two-way verification of who it is you’re going to pay sounds like a NIH implementation of the EMV protocol. It is worth reading to figure out the absurd implementation of Magnetic Secure Transmission used in Samsung Pay: spoilers, it implements magnetic stripe payments through a mobile phone.

    For the less academic of you, TrustBase: An Architecture to Repair and Strengthen Certificate-based Authentication appears fairly interesting, particularly as the source code is available. The idea is to move the implementation of SSL clients into an operating system service, rather than into libraries, so that it can be configured once and for all at the system level, including selecting the ciphers to use and the authorities to trust. It sounds good, but at the same time it sounds a lot like what NSS (the Mozilla one, not the glibc one) tried to implement. Except that didn’t go anywhere, and not just because of API differences.

    But it can’t be an interesting post (or conference) without a bit of controversy. A Longitudinal, End-to-End View of the DNSSEC Ecosystem was an interesting talk, and one that once again confirmed the fears around the lack of proper DNSSEC support in the wild right now. But in that very same talk, the presenter pointed out how they used a service called Luminati to get access to endpoints within major ISPs’ networks to test their DNSSEC resolution. While I understand why a similar service would be useful in these circumstances, I need to remind people that the Luminati service is not one of the good guys!

    Indeed, Luminati is described as allowing you to request access to connections following certain characteristics. What it omits to say is that it does so by targeting connections of users who installed the Hola “VPN” tool. If you haven’t come across this, Hola is one of the many extensions that allowed users to appear as if connecting from a different country to fool Netflix and other streaming services. Besides being against terms of service (but who cares, right?), in 2015 Hola was found to be compromising its users. In particular, the users running Hola are running the equivalent of a Tor exit node, without any of the security measures to protect its users, and – because its target is non-expert users who are trying to watch content not legally available in their country – without a good understanding of what such an exit node allows.

    I cannot confirm whether currently they still allow access to the full local network to the users of the “commercial” service, which include router configuration pages (cough DNS hijacking cough), and local office LANs that are usually trusted more than they should be. But it gives you quite an idea, as that was clearly the case before.

    So here is my personal set of opinions and a number of pointers to good and interesting talks and papers. I just wish they were more usable by non-academics, rather than being available only in LaTeX-typeset PDF format, but I’m afraid the two worlds shall never meet enough.


    1. As it turns out you can blame me a little bit for this part, I promised to help out. [return]
    2. Thankfully, for USENIX conferences, the full proceedings are available as ePub and Mobi. Although the size is big enough that you can’t use the mail-to-Kindle feature. [return]
    3. All the two weeks I managed to stay in it. [return]

    September 05, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)
    Abandoned Domain Takeover as a Web Security Risk (September 05, 2017, 17:11 UTC)

    In the modern web it's extremely common to include third-party content on web pages. Youtube videos, social media buttons, ads, statistic tools, CDNs for fonts and common javascript files - there are plenty of good and many not so good reasons for this. What is often forgotten is that including other people's content means giving other people control over your webpage. This is obviously particularly risky if it involves javascript, as this gives a third party full code execution rights in the context of your webpage.

    I recently helped a person whose Wordpress blog had a problem: The layout looked broken. The cause was that the theme used a font from a web host - and that host was down. This was easy to fix. I was able to extract the font file from the Internet Archive and store a copy locally. But it got me thinking: What happens if you include third party content on your webpage and the service from which you're including it disappears?

    I put together a simple script that would check webpages for HTML tags with the src attribute. If the src attribute points to an external host it checks if the host name actually can be resolved to an IP address. I ran that check on the Alexa Top 1 Million list. It gave me some interesting results. (This methodology has some limits, as it won't discover indirect src references or includes within javascript code, but it should be good enough to get a rough picture.)
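    Such a check can be reconstructed in a few lines of shell (this is my sketch, not the author's actual script; the sample page and hostnames are invented, and real HTML would call for a proper parser rather than grep):

```shell
# Create a tiny sample page with one resolvable and one dead src host.
cat > /tmp/sample.html <<'EOF'
<script src="http://localhost/stats.js"></script>
<img src="https://gone-service.invalid/logo.png">
EOF

# Pull out the external hosts referenced via src= and report the ones
# whose hostnames no longer resolve.
grep -oE 'src="https?://[^/"]+' /tmp/sample.html \
  | sed -E 's#^src="https?://##' | sort -u \
  | while read -r host; do
      getent hosts "$host" > /dev/null || echo "unresolvable: $host"
    done
# prints: unresolvable: gone-service.invalid
```

    As the author notes, this only catches direct src references; anything injected from javascript, or hosts that still resolve but belong to someone new, would slip through.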

    Yahoo! Web Analytics was shut down in 2012, yet in 2017 Flickr still tried to use it

    The webpage of Flickr included a script from Yahoo! Web Analytics. If you don't know Yahoo Analytics - that may be because it was shut down in 2012. Although Flickr is a Yahoo! company, it seems they hadn't noticed for quite a while. (The code is gone now, likely because I mentioned it on Twitter.) This example has no security impact as the domain still belongs to Yahoo. But it likely caused an unnecessary slowdown of page loads over many years.

    Going through the list of domains I saw plenty of the things you'd expect: Typos, broken URLs, references to localhost and subdomains no longer in use. Sometimes I saw weird stuff, like references to javascript from browser extensions. My best explanation is that someone had a plugin installed that would inject those into pages and then created a copy of the page with the browser which later ended up being used as the real webpage.

    I looked for abandoned domain names that might be worth registering. There weren't many. In most cases the invalid domains were hosts that didn't resolve, but that still belonged to someone. I found a few, but they were only used by one or two hosts.

    Takeover of unregistered Azure subdomain

    But then I saw a couple of domains referencing a javascript from a non-resolving host called piwiklionshare.azurewebsites.net. This is a subdomain from Microsoft's cloud service Azure. Conveniently Azure allows creating test accounts for free, so I was able to grab this subdomain without any costs.

    Doing so allowed me to look at the HTTP log files and see what web pages included code from that subdomain. All of them were local newspapers from the US. 20 of them belonged to two adjacent IP addresses, indicating that they were all managed by the same company. I was able to contact them. While I never received any answer, shortly afterwards the code was gone from all those pages.

    Saline Courier defacement
    "Friendly defacement" of the Saline Courier.
    However the page with the most hits was not so easy to contact. It was also a newspaper, the Saline Courier. I tried contacting them directly, their chief editor and their second chief editor. No answer.

    After a while I wondered what I could do. Ultimately at some point Microsoft wouldn't let me host that subdomain any longer for free. I didn't want to risk that others could grab that subdomain, but at the same time I obviously also didn't want to pay in order to keep some web page safe whose owners didn't even bother to read my e-mails.

    But of course I had another way of contacting them: I could execute Javascript on their web page and use that for some friendly defacement. After some contemplating whether that would be a legitimate thing to do, I decided to go for it. I changed the background color to some flashy pink and sent them a message. The page remained usable, but it was a message hard to ignore.

    There was some trouble on the way - first they broke their CSS, then they showed a PHP error message, then they reverted to the page with the defacement. But in the end they managed to remove the code.

    There are still a couple of other pages that include that Javascript. Most of them however look like broken test webpages. The only legitimate-looking webpage that still embeds that code is the Columbia Missourian. However they don't embed it on the start page, only on the error reporting form they have for every article. It's been several weeks now; they don't seem to care.

    What happens to abandoned domains?

    There are reasons to believe that what I showed here is only the tip of the iceberg. In many cases, when services are discontinued, their domains don't simply disappear. If the domain name is valuable then almost certainly someone will try to register it immediately after it becomes available.

    Someone trying to abuse abandoned domains could watch out for services going out of business or widely referenced domains becoming available. Just to name an example: I found a couple of hosts referencing subdomains of compete.com. If you go to their web page you can learn that the company Compete discontinued its service in 2016. How long will they keep their domain? And what will happen with it afterwards? Whoever gets the domain can hijack all the web pages that still include javascript from it.

    Be sure to know what you include

    There are some obvious takeaways from this. If you include other people's code on your web page then you should know what that means: You give them permission to execute whatever they want on your web page. This means you need to wonder how much you can trust them.

    At the very least you should be aware who is allowed to execute code on your web page. If they shut down their business or discontinue the service you have been using then you obviously should remove that code immediately. And if you include code from a web statistics service that you never look at anyway you may simply want to remove that as well.

    September 04, 2017
    Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
    A Late GUADEC 2017 Post (September 04, 2017, 03:20 UTC)

    It’s been a little over a month since I got back from Manchester, and this post should’ve come out earlier but I’ve been swamped.

    The conference was absolutely lovely, and the organisation was 110% on point (serious kudos, I know first hand how hard that is). Others on Planet GNOME have written extensively about the talks, the social events, and everything in between that made it a great experience. What I would like to write about is why this year’s GUADEC was special to me.

    GNOME turning 20 years old is obviously a large milestone, and one of the main reasons I wanted to make sure I was at Manchester this year. There were many occasions to take stock of how far we had come, where we are, and most importantly, to reaffirm who we are, and why we do what we do.

    And all of this made me think of my own history with GNOME. In 2002/2003, Nat and Miguel came down to Bangalore to talk about some of the work they were doing. I know I wasn’t the only one who found their energy infectious, and at Linux Bangalore 2003, they got on stage, just sat down, and started hacking up a GtkMozEmbed-based browser. The idea itself was fun, but what I took away — and I know I wasn’t the only one — is the sheer inclusive joy they shared in creating something and sharing that with their audience.

    For all of us working on GNOME in whatever way we choose to contribute, there is the immediate gratification of shaping this project, as well as the larger ideological underpinning of making everyone’s experience talking to their computers better and free-er.

    But I think it is also important to remember that all our efforts to make our community an inviting and inclusive space have a deep impact across the world. So much so that complete strangers from around the world are able to feel a sense of belonging to something much larger than themselves.

    I am excited about everything we will achieve in the next 20 years.

    (thanks go out to the GNOME Foundation for helping me attend GUADEC this year)

    Sponsored by GNOME!

    September 03, 2017
    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    Tiny Tiny RSS: don't support Nazi sympathisers (September 03, 2017, 09:04 UTC)

    XKCD #1357 — Free Speech

    After complaining about the lack of cache hits from feed readers, figuring out why NewsBlur (which was doing the right thing) was affected, and then fixing the problem, I started looking at which other readers remained broken. It turned out that about a dozen people used to read my blog using Tiny Tiny RSS, a PHP-based personal feed reader for the web. I say “used to” because, as of 2017-08-17, TT-RSS is banned from accessing anything from my blog via a ModSecurity rule.

    The reason why I went to this extent is not merely technical, which is why you get the title of this blog post the way it is. But it all started with me filing requests to support modern HTTP features for feeds, particularly regarding the semantics of permanent redirects, but also about the lack of If-Modified-Since support, which allows a significant reduction in the bandwidth usage of a blog1. Now, the first response I got about the permanent redirect request was disappointing but it was a technical answer, so I provided more information. After that?

    After that the responses stopped being focused on the technical issues, and rather appeared to be – and that’s not terribly surprising in FLOSS of course – “not my problem”. Except the answers also came from someone with a Pepe the Frog avatar.2 And this is August of 2017, when America was shown to have a real Nazi problem, and willingly associating yourself with the alt-right effectively makes you a Nazi sympathiser. The tone of the further answers also shows that it is no mistake or misunderstanding.

    You can read the two bugs here: and . Trigger warning: extreme right and ableist views ahead.

    While I try to not spend too much time on political activism on my blog, there is a difference between debating whether universal basic income (or even universal health care) is a right or not, and arguing for ethnic cleansing and the death of part of a population. So no, there is no way I’ll refrain from commenting or throwing a light on this kind of toxic behaviour from developers in the Free Software community. Particularly when they are not even holding these beliefs for themselves but effectively boasting them by using a loaded avatar on their official support forum.

    So what can you do about this? If you get to read this post, and have subscribed to my blog through TT-RSS, you now know why you don’t get any updates from it. I would suggest you look for a new feed reader. I will as usual suggest NewsBlur, since its implementation is the best one out there. You can set it up by yourself, since it’s open source. Not only will you be cutting your support to Nazi sympathisers, but you will also save bandwidth for the web as a whole, by using a reader that actually implements the protocol correctly.

    Update (2017-08-06): as pointed out in the comments by candrewswpi, FreshRSS is another option if you don’t want to set up NewsBlur (which admittedly may be a bit heavy). It uses PHP so it should be easier to migrate given the same or similar stack. It supports at least proper caching, but I’m not sure about the permanent redirects, it needs testing.

    You could of course, as the developers said on those bugs, change the User-Agent string that TT-RSS reports, and keep using it to read my blog. But in that case, you’d be supporting Nazi sympathisers. If you don’t mind doing that, I may ask you a favour and stop reading my blog altogether. And maybe reconsider your life choices.

    I’ll repeat here that the reason why I’m going to this extent is that there is a huge difference between the political opinions and debates that we can all have, and supporting Nazis. You don’t have to agree with my political point of view to read my blog, you don’t have to agree with me to talk with me or being my friend. But if you are a Nazi sympathiser, you can get lost.


    1. you could try to argue that in this day and age there is no point in worrying about bandwidth, but then you don’t get to ever complain about the existence of CDNs, or the fact that AMP and similar tools are “undemocratizing” the web. [return]
    2. Update (2017-08-03): as many people have asked: no, it’s not just any frog or any Pepe that automatically makes you a Nazi sympathiser. But the avatar was not one of the original illustrations, and the attitude of the commenter made it very clear what their “alignment” was. I mean, if they were fans of the original character, they would probably have the funeral scene as their avatar instead. [return]

    August 31, 2017
    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

    At the USENIX Security Symposium 2017, Adrienne Porter Felt and April King gave a terrific presentation about HTTPS adoption, and in particular showed the problems related to the long tail of websites that are not set up for it, or at least not set up correctly. After the talk, one of the people asking questions explicitly said that there is no point for static websites, such as that of the sushi place down the road, to use HTTPS. As you can imagine, many of the people in the room (me included) disagree with this opinion drastically, and both April and Adrienne took issue with that part of the question.

    At the time on Twitter, and later that day while chatting with people, I brought up the example of Comcast injecting ads on cleartext websites – a link that itself is insecure, ironically – and April also pointed out that this is extremely common in East Asia too. A friend once complained about unexpected ads when browsing on a Vodafone 4G connection, which didn’t appear on a normal WiFi connection, which is probably a very similar situation. While this is annoying, you can at least assume that what these ISPs are doing is benign, or at least not explicitly malicious.

    But you don’t have to be an ISP in the common sense to be able to inject into non-HTTPS websites. You can for instance have control over a free WiFi connection. It does not even have to be a completely open, unencrypted WiFi, as whoever has control of the system routing a WPA connection is also able to make changes to the data passed through that connection. That usually means either the local coffee shop, or the coffee shop’s sysadmin, MSP, or if you think you’re smart, your VPN provider.

    Even more importantly, all these websites are the targets for DNS hijackers, such as the one I talked about last year. Unsecured routers where it’s not possible to get a root shell – which are then not vulnerable to worms such as Mirai – can still have their DNS settings hijacked, at which point the attacker has space to redirect the resolution of some of the hostnames.

    This is even more trivial in independent coffee shops. Chains (big and small) usually sign up with a managed provider that sets up various captive portals, session profiling and “growth hacks”, but smaller shops often just set up a standard router with their DSL and in many cases don’t even change the default passwords. And since you’re connecting from the local network, you don’t even need to figure out how to exploit it from the WAN.

    It does not take a particularly sophisticated setup to check whether the intended host supports HTTPS, and if it does not, it’s trivial to change the IP and redirect to a transparent proxy that does content injection, without the need for a “proper” man in the middle of the network. DNSSEC/DANE could protect against it, but that does not seem to be something that happens right now.
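The pre-check described above really is unsophisticated. The following Python sketch (illustrative only; no specific attacker tool is implied) shows the cheap probe an attacker could run before deciding whether a site can be transparently proxied over cleartext HTTP:

```python
import socket
import ssl


def supports_https(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if `host` completes a TLS handshake on `port`.

    Illustrative sketch: this is the inexpensive check a hijacker (or,
    defensively, a monitoring script) can run before deciding whether a
    site only speaks cleartext HTTP and is therefore injectable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False
```

The same check is just as useful defensively, for instance in a monitoring job that alerts you when a site you care about stops answering on port 443.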

    These are all problems for the end users, of course, rather than problems for the sushi restaurant, and I would not be surprised if the answer you would get from some of the small shop operators is that these problems should be solved by someone else, and that they should not spend time figuring it out themselves, as these issues don’t directly cause a problem for them. So let me paint a different picture.

    Let’s say that the Sushi restaurant has unfriendly competition, that is ready to pay some of those shady DNS hijackers to particularly target the restaurant’s website to play some tricks. Of course everything you can do at this point through content injection/modification you can do by defacing a website, and that would not be stopped by encrypting the connection, but that kind of defacement is usually significantly simpler to notice, as every connection would see the defaced content, including the owner’s.

    Instead, targeting a subset of connections via DNS hijacking makes it less likely that it’ll be noticed. And at that point you can make simple, subtle changes such as providing the wrong phone number (to preclude people from making reservations), changing the opening hours to something that makes it unwelcoming, or even changing the menu so that the prices look just high enough to not make it worth visiting. While these are only theoretical scenarios, I think any specialist who has tried to do sysadmin-for-hire jobs for smaller local businesses has at least once heard them asking for similarly shady (or worse) tasks. And I would be surprised if nobody took these opportunities.

    But there are a number of other situations in which non-asserted content integrity can be interesting to attackers in subtle ways, even for sites that are static, not confidential, and not even controversial — I guess everybody can agree that adult entertainment websites need to be encrypted. For instance, you could undercut referral revenue by replacing the links to Amazon and other referral programs with alternative ones (or just dropping the referral code). You could technically do the same for things like AdSense, but most of those services check where the code is embedded and make it very easy to detect these types of scams; referral programs are easier to play around with.

    What this means is that there are plenty of good reasons to actually spend time making sure small, long-tail websites are available over HTTPS. And yes, there are some sites where the loss of compatibility is a problem (say, VideoLAN, which still gets users on Windows XP). But in this case you can use conditional redirects, and only provide the non-HTTPS connection to users of very old browsers or operating systems, rather than keeping it available to everyone else as well.
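The conditional-redirect idea can be sketched as a small allow-list check on the User-Agent. The markers below are assumptions picked for illustration (XP-era browser strings), not an exhaustive or authoritative list:

```python
def serve_plain_http(user_agent: str) -> bool:
    """Decide whether to keep serving cleartext HTTP to this client.

    Sketch of the conditional-redirect approach: only clients assumed
    unable to negotiate modern TLS (approximated here by XP-era
    User-Agent markers, chosen for illustration) stay on plain HTTP;
    everyone else gets redirected to HTTPS.
    """
    legacy_markers = ("Windows NT 5.1", "MSIE 6.0")  # assumed markers
    return any(marker in user_agent for marker in legacy_markers)
```

In practice you would express the same rule in the web server's own configuration (e.g. a rewrite condition on `%{HTTP_USER_AGENT}`), but the decision logic is this simple.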

    August 28, 2017
    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    Opening a bank account in the UK (August 28, 2017, 12:04 UTC)

    As I foretold in the post where I announced my move, here is the first of the rants with the problems of moving to the UK.

    Banking, which is already a complicated pain in most countries, appears to be even more complicated in the UK. One of the problems is that almost all debit and credit cards carry a nearly 3% foreign transaction fee. For those wondering, foreign transaction fees are fees levied on payments executed in a currency different from the “native” currency of the card/account. The term “foreign” is often a misnomer in Europe, since within the Eurozone transactions may be “foreign” but carry no fee, as it’s a single market. Of course this does not apply to UK accounts, as Sterling is only used in the one country.

    This makes it worse than the equivalent 1.75% foreign transaction fee of my Tesco Credit Card, since that would not apply for any expenses incurred in most of the European continent. So I really need to find a good alternative to that.
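To put rough numbers on the difference, here is a back-of-the-envelope comparison. The yearly spend figure is a made-up assumption for illustration; only the two fee rates come from the text above:

```python
# Hypothetical yearly spend charged in non-native currencies (assumed figure).
yearly_foreign_spend = 5000.0

uk_card_fee = yearly_foreign_spend * 0.03    # typical UK card: ~3%
tesco_fee = yearly_foreign_spend * 0.0175    # Tesco credit card: 1.75%

print(f"UK card: {uk_card_fee:.2f}")   # 150.00
print(f"Tesco:   {tesco_fee:.2f}")     # 87.50
```

And remember that the 1.75% fee on the (euro-denominated) Tesco card never applied to Eurozone spending in the first place, which makes the UK cards comparatively even worse.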

    Of course, there already is Revolut, which I spoke of before. It provides a bank account equivalent and a prepaid MasterCard that has no foreign transaction fees. Unfortunately it has a couple of limitations. The first is that this is a prepaid card, rather than a credit card. And this matters.

    In particular, hotels and car rentals (though I don’t have a license, which means I don’t use the latter) generally require you to use a credit card, because they pre-authorize a higher amount of money than you’re meant to pay at the end. If you were to do that with Revolut, you’d end up with more money locked in for a number of days until the complete charge happens. Since at least once I had multiple hundreds of euro locked in a pre-authorization on a credit card for two weeks, it’s not the kind of experience I would like to make a habit of. Most hotels allow you to provide a different credit card for deposit and payment, which would mean I could use a normal credit card at check-in time and then just settle the account with Revolut, but you can imagine that this is not really very handy, particularly at busy hotels during conferences, or when I’m checking out in a hurry because I’m late for my flight.

    So I started looking for various options of 0% foreign transaction fee cards, and I identified two cards in particular that fit my requirements, one from Barclays and one from NatWest. Both are premium cards that cost extra money, or require you to have a more expensive bank account, but a quick calculation shows me that I will probably make up the difference in price reasonably easily. And between the two, I focused on the NatWest, because it is part of the same group (RBS) as my current Irish bank, and I was hoping that they would make signing up for it easier.

    I couldn’t have been more wrong. Even though I’m a customer of Private Banking at Ulster Bank (ROI), they couldn’t help me set up a UK account at all. It took them one full month to find the name of a colleague of theirs I could contact in London, who then pointed me at the Global Employees service that was supposed to help me. A month after that, I still have no bank account in London, because the process requires my employer to provide a document stating not only my transfer salary, but also, in irrevocable terms, that the transfer will happen, and how much time I’m meant to spend in the UK.

    This is clearly impossible. First of all, since my employer does not own me, I can always change my mind, and leave the company before my transfer finalizes, so they will never declare that there is no chance I would do that (despite the fact that I don’t want to do that and I want the transfer to go through). Secondly, nobody can tell how much time I’ll be spending in the UK. It may be that I’ll live there for the rest of my life, or it may be that I will leave before the two years from Article 50 terminate, because they would make my life impossible, or the crashed economy would make it infeasible for me to keep living in the country.

    Both declarations are simply not possible to provide, and the fact that the assigned contact has contacted my HR department multiple times, even though they told her at least twice that I’m the only one who can request that information, has by now ticked me off enough that I might try once to escalate this to a supervisor, but otherwise I will just stop considering NatWest a feasible banking option, because the last thing I want is to deal with drones.

    August 26, 2017
    Sebastian Pipping a.k.a. sping (homepage, bugs)
    GIMP 2.9.6 now in Gentoo (August 26, 2017, 19:53 UTC)

    Here’s what upstream has to say about the new release 2.9.6. Enjoy 🙂

    August 25, 2017
    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

    I am very happy to be a supporter of the Internet Archive. Not only do they provide the Wayback Machine, which has allowed me to fix a significant number of dangling or broken links in my own blog over time, but they also helped me recover content that I thought lost, either because it was on my old, ranty teenager blog, or because it was mangled by a botched WordPress migration.

    And yet, this does not even begin to cover the amount of information that the Archive is preserving and making available to the world for the future. A couple of weeks ago I had some spare time on my hands that I could not spend writing a blog post or writing code (long story), and instead spent it perusing Wikipedia pages about ’90s tech (why? because I feel nostalgic sometimes), and found out something interesting: both the Internet Archive and (to a smaller extent) Google Books make available “ancient” issues of old computer magazines, such as PC Magazine (US) or Computer Gaming World.

    Indeed, I ended up using this information to extend the Future Wars and The Colonel’s Bequest articles a bit — mostly thanks to the fact that the (now defunct) Amiga Reviews website provided issue and page numbers for the games’ reviews. It was fun to go through some of the articles from these magazines while I was doing these cleanups, and they made me wish I had more time to read through them, particularly the technical magazines, and see what kind of information is now not well known or understood. Unfortunately, as far as I can tell, despite there being a lot of good scans, there is no easy table of contents for the issues that could be used to identify which issue may have useful information on a topic.

    I’m also now wondering if I should find a way to get all the old paper copies of magazines that I’m keeping at my mother’s house in Italy to the Internet Archive, or some other organization, so that they become more accessible to the rest of the world. At least one of those magazines stopped publishing altogether, and there are a few that include very useful information on libraries, APIs and file formats that is hard to find nowadays. I even remember one of those magazines talking about programming for the Nintendo GameBoy (in the early noughties!).

    In addition to a whole lot of paper, I’m sure at home I have a number of CDs and DVDs that were provided with those magazines, which, to be honest, I’m nowadays not entirely sure were legal. For the most part, they redistributed shareware that came from various websites, which in the nineties and early noughties was definitely useful, as home Internet connections were extremely slow and limited, and having the data on a CD would be much simpler. One of the extra services that at least one of these magazines (simply called Computer Magazine) provided was monthly updates of common drivers and antivirus definitions. And sometimes they would end up having an infected file on the CD, and you would only find out a month later, oops.

    One of the reasons why I started reading Computer Magazine, anyway, was that they also promised to provide, each month, a complete commercial software release with the magazine. Indeed, they started with Macromedia software, including xRes 2.0, and followed a few months later with Freehand — neither piece of software exists anymore, as Macromedia was bought out by Adobe at some point after that, and those two particular pieces of software were folded into Photoshop and Illustrator. At some point they also provided Borland C++ Builder (1.0 complete, 3.0 demo), which probably paid off for Borland, since I later actually bought a license for C++ Builder to write software for one of my customers.

    But at some point, one of the sister magazines to this one also provided on their CDs a DOS-based Gameboy emulator, and a number of ROMs, including Super Mario Land! I know now that this was blatant piracy, but at the time it was just a lot of fun. And the idea that the CDs (except for the “complete software” ones) were redistributable led me to polish the bundled archives of emulator and ROMs (one copy of the emulator executable per ROM), and even to play with InstallShield (a demo of which they also provided on one of their CDs) to build an installer for the whole set that also added entries to the Start Menu. Yes, that is what I used to do in my free time as a hobby. You can see I have not changed that much.

    What should I do with all those disks? Some of that content, for instance the GameBoy ROMs, is still relevant enough that you wouldn’t want to just put it online. Also, some of the “full software” is probably still usable on modern (32-bit) Windows, so the copyright holders may well prefer that it not be published in full. But at the same time, I think this is exactly the kind of content that should not just disappear. Historical memory is definitely important.

    I also have at least one full box of Italian-edition The Games Machine, but I should probably ask the people I know that still write for that magazine if they might actually have the masters, instead of relying on low-quality scans. Oh well, will do that separately.

    August 23, 2017
    Sven Vermeulen a.k.a. swift (homepage, bugs)
    Using nVidia with SELinux (August 23, 2017, 17:04 UTC)

    Yesterday I switched to the gentoo-sources kernel package on Gentoo Linux. And with that, I also attempted (successfully) to use the proprietary nvidia drivers, so that I can enjoy both a smoother 3D experience while playing minecraft and the CUDA support, so I don't need to use cloud-based services for small exercises.

    The move to nvidia was quite simple, as the nvidia-drivers wiki article on the Gentoo wiki was quite easy to follow.

    Sebastian Pipping a.k.a. sping (homepage, bugs)
    Expat 2.2.4 released (August 23, 2017, 16:52 UTC)

    Expat 2.2.4 has recently been released. It features one major bugfix regarding files encoded as UTF-8, and improvements to the build system.

    If you are using an older version of Visual Studio, such as 2012, please check the post-2.2.4 commits in Git for related compilation fixes.

    Also, funding of Rhodri’s work on Expat by the Core Infrastructure Initiative is coming to an end. If you can fund additional developers to work on Expat — including smooth integration of by-default protection against billion laughs denial-of-service attacks — please get in touch.

    Sebastian Pipping

    August 22, 2017
    Sven Vermeulen a.k.a. swift (homepage, bugs)
    Switch to Gentoo sources (August 22, 2017, 17:04 UTC)

    You might already have read it on the Gentoo news site: the hardened Linux kernel sources are being removed from the tree because of the grsecurity change whereby the grsecurity Linux kernel patches are no longer provided for free. The decision was made for supportability and maintainability reasons.

    That doesn't mean that users who want to stick with the grsecurity related hardening features are left alone. Agostino Sarubbo has started providing sys-kernel/grsecurity-sources for the users who want to stick with it, as it is based on minipli's unofficial patchset. I seriously hope that the patchset will continue to be maintained and, who knows, even evolve further.

    Personally though, I'm switching to the Gentoo sources, and stick with SELinux as one of the protection measures. And with that, I might even start using my NVidia graphics card a bit more, as that one hasn't been touched in several years (I have an Optimus-capable setup with both an Intel integrated graphics card and an NVidia one, but all attempts to use nouveau for the one game I like to play - minecraft - didn't work out that well).

    Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
    Apache, ETag and “Not Modified” (August 22, 2017, 16:04 UTC)

    In my previous post on the matter I incorrectly blamed NewsBlur – which I still recommend as the best feed reader I’ve ever used! – for not correctly supporting HTTP features to avoid wasting bandwidth for fetching repeatedly unmodified content.

    As Daniel and Samuel pointed out immediately, NewsBlur does support those features, and indeed I even used it as an example four years ago — oops for my memory being terrible that way, and for assuming the behaviour from the logs rather than inspecting the requests. And indeed the requests were not only correct, but matched perfectly what Apache reported:

    --6190ee48-B--
    GET /index.xml HTTP/1.1
    Host: blog.flameeyes.eu
    Connection: keep-alive
    Accept-Encoding: gzip, deflate
    Accept: application/atom+xml, application/rss+xml, application/xml;q=0.8, text/xml;q=0.6, */*;q=0.2
    User-Agent: NewsBlur Feed Fetcher - 59 subscribers - http://www.newsblur.com/site/195958/flameeyess-weblog (Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36)
    A-IM: feed
    If-Modified-Since: Wed, 16 Aug 2017 04:22:52 GMT
    If-None-Match: "27dc5-556d73fd7fa43-gzip"
    
    --6190ee48-F--
    HTTP/1.1 200 OK
    Strict-Transport-Security: max-age=31536000; includeSubDomains
    Last-Modified: Wed, 16 Aug 2017 04:22:52 GMT
    ETag: "27dc5-556d73fd7fa43-gzip"
    Accept-Ranges: bytes
    Vary: Accept-Encoding
    Content-Encoding: gzip
    Cache-Control: max-age=1800
    Expires: Wed, 16 Aug 2017 18:56:33 GMT
    Content-Length: 54071
    Keep-Alive: timeout=15, max=99
    Connection: Keep-Alive
    Content-Type: application/xml
    

    So what is going on here? Well, I started looking around, both because I now felt silly, and because I owed more than just an update on the post and an apology to Samuel. And a few searches later, I found Apache bug #45023 that reports how mod_deflate prevents all 304 responses from being issued. This is a bit misleading (as you can still have them in some situations), but it is indeed what is happening here, and it is a breakage introduced by Apache 2.4.

    What’s going on? Well, let’s first figure out why I could see some 304 responses, but not for NewsBlur. Willreadit was one of the agents that received 304 responses at least some of the time, and its landing page explicitly says that it supports If-Modified-Since. Notably, it does not support If-None-Match.

    The If-None-Match header in the request is compared with the ETag header (Entity Tag) in the response coming from Apache. This header is generally considered opaque, and the client should have no insight into how it is generated. The server generally calculates its value based on either a checksum of the file (e.g. md5) or on the file size and last-modified time. On Apache HTTP Server, the FileETag directive defines which properties of the served files are used to generate the value provided in the response. The default, which I’m using, is MTime Size, which effectively means that changing the file in any way causes the ETag to change. The size part might actually be redundant here, since the modification time is usually enough for my use cases, but this is the default…
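The default scheme can be approximated in a few lines of Python. This is a sketch that mimics, rather than reproduces byte-for-byte, httpd's tag layout (the exact hex encoding and component ordering are httpd implementation details):

```python
import os


def mtime_size_etag(path: str) -> str:
    """Approximate Apache's default `FileETag MTime Size` scheme.

    Sketch only: real httpd encodes the components in hex with its own
    ordering and resolution. The point it illustrates is that the tag
    changes whenever the size or mtime changes, and only then -- it is
    not a content checksum.
    """
    st = os.stat(path)
    return '"%x-%x"' % (st.st_size, int(st.st_mtime * 1_000_000))
```

Since both components are derived from the same stat() data that feeds Last-Modified, the tag carries essentially no information beyond what Last-Modified already provides.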

    The reason why I’m providing both Last-Modified and ETag headers in the response is that HTTP clients may well implement only one of the two methods, rather than both, particularly as they may think that handling ETag is easier, it being an opaque string, rather than information that can be parsed — though it really should be treated opaquely as well, as noted in RFC 2616. Entity Tags are also more complicated because they can be used by caching proxies to collapse caching of different entities (identified by a URL) within the same space (hostname). I have lots of doubts that this is used in practice, so I’m not going to consider it a valid usage, but your mileage may vary. In particular, since the default uses size and modification time, the tag ends up always matching the Last-Modified header for a given entity, and the If-Modified-Since request would be just enough.

    But when you provide both If-Modified-Since and If-None-Match, you’re asking for both conditions to be true, and so Apache will validate both. And here is where the problem happens: the -gzip suffix – which you can see in the header of the sample request above – is added at different times in the HTTPD process, and in particular it makes it so that the If-None-Match will never match the generated ETag, because the comparison is with the version without -gzip appended. This makes sense in context, because if you have a shared caching proxy, you may have different user agents that support different compression algorithms. Unfortunately, this effectively makes it so that entity tags disable Not Modified states for all the clients that do care about the tags. Those few clients that received 304 responses from my blog before were just implementing If-Modified-Since, and were getting the right behaviour (which is why I thought the title of the bug was misleading).
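The failure mode described above boils down to a few lines of conditional logic. In this sketch both validators must pass for a 304, matching the behaviour the post describes; the client faithfully echoes the `-gzip`-suffixed tag it was given, but the server-side comparison happens against the tag before mod_deflate appends the suffix:

```python
def not_modified(if_none_match, if_modified_since, etag, last_modified):
    # Sketch of the both-validators-must-match rule: when the request
    # carries both headers, both have to validate for a 304 response.
    etag_ok = if_none_match is None or if_none_match == etag
    time_ok = if_modified_since is None or if_modified_since >= last_modified
    return etag_ok and time_ok


server_tag = '"27dc5-556d73fd7fa43"'        # tag before mod_deflate's suffix
client_tag = '"27dc5-556d73fd7fa43-gzip"'   # tag the client was actually sent

# Tags never match, so the full 200 response is sent every time:
stale = not_modified(client_tag, 100, server_tag, 100)

# Clients that only send If-Modified-Since still get their 304:
legacy = not_modified(None, 100, server_tag, 100)
```

This is why the handful of clients that ignored entity tags (like Willreadit) kept getting 304 responses while everyone else did not.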

    So how do you solve this? In the bug I already noted above, there is a suggestion by Joost Dekeijzer to use the following directive in your Apache config:

    RequestHeader edit "If-None-Match" '^"((.*)-gzip)"$' '"$1", "$2"'
    

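The effect of that `RequestHeader edit` rule is easy to verify with an equivalent substitution in Python:

```python
import re

# Same pattern and replacement as the Apache directive above: rewrite
# the gzip-suffixed tag into a two-entry list that also contains the
# un-suffixed tag, so the server-side comparison can find a match.
header = '"27dc5-556d73fd7fa43-gzip"'
edited = re.sub(r'^"((.*)-gzip)"$', r'"\1", "\2"', header)
print(edited)  # '"27dc5-556d73fd7fa43-gzip", "27dc5-556d73fd7fa43"'
```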
    This adds a version of the entity tag without the suffix to the list of expected entity tags, which “fools” the server into accepting that the underlying file didn’t change and that there is no need to send it again. I tested with that and it does indeed fix NewsBlur and a number of other use cases, including browsers! But it has the side effect of possibly poisoning shared caches. Shared caches are not that common, but why risk it? So I settled on a slightly different option:

    FileETag None
    

    This disables the generation of Entity Tags for file-based entities (i.e. static files), forcing browsers and feed readers to rely exclusively on If-Modified-Since. If a client only implements If-None-Match semantics, this second option loses the ability to send it 304 responses. I have actually no idea which clients would do that, since that is the more complicated semantics, but I guess I’ll find out. I decided to give this option a try for two reasons: it should simplify Apache’s own runtime, because it no longer has to calculate these tags at any point, and because effectively the tags were encoding only the modification time, which is literally what Last-Modified provides! I had for a while assumed that the tag was calculated based on a (quick and dirty) checksum, instead of just size and modification time, but clearly I was wrong.

    There is another problem at this point, though. For this to work correctly, you need to make sure that the modification time of files is consistent with them actually changing. If you’re using a static site generator that produces multiple outputs for a single invocation, which includes both Hugo and FSWS, you would have a problem, because the modification time of every file is now the execution time of the tool (or just about).

    The answer to this is to build the output in a “staging” directory and replace only the files that were modified, and rsync sounds perfect for the job. But the most obvious way to do so (rsync -a) will do exactly the opposite of what you want, as it will preserve the timestamps from the source directory — which means it’ll replace the old timestamp with the new one for all files. Instead, what you want to use is rsync -rc: this uses a checksum to figure out which files have changed, and will not preserve the source timestamp but rather use the time of the copy, which is still okay. Theoretically, I think rsync -ac should work too, since it should preserve the timestamps only of the files that were modified, but since the served files are all meant to have the same permissions, and none of them are links, I found being minimal made sense.
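What `rsync -rc` buys here can be sketched as a checksum-based copy. This is a flat, single-directory illustration only; real rsync recurses, handles deletions, and does much more:

```python
import hashlib
import os
import shutil


def sync_changed(staging: str, live: str) -> list:
    """Copy from `staging` to `live` only the files whose content differs.

    Minimal sketch of the `rsync -rc` behaviour relied on above:
    unchanged files are left alone, so they keep their old mtime and
    their Last-Modified header stays stable across deployments.
    """
    def digest(path):
        with open(path, "rb") as fh:
            return hashlib.sha256(fh.read()).hexdigest()

    copied = []
    for name in sorted(os.listdir(staging)):
        src = os.path.join(staging, name)
        if not os.path.isfile(src):
            continue
        dst = os.path.join(live, name)
        if not os.path.exists(dst) or digest(src) != digest(dst):
            shutil.copyfile(src, dst)  # fresh mtime only for changed files
            copied.append(name)
    return copied
```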

    So anyway, I’ll hopefully have some more data soon about the bandwidth saving. I’m also following up with whatever may not be supporting properly If-Modified-Since, and filing bugs for those software/services that allow it.

    Update (2017-08-23): since now it’s a few days since I fixed up the Apache configuration, I can confirm that the daily bandwidth used by “viewed hits” (as counted by Awstats) went down to ⅓ of what it used to be, to around 60MB a day. This should be accounting not only for the feed readers now properly getting a 304, but also for browsers of readers who no longer have to fetch the full page when, for instance, replying to comments. Googlebot also is getting a lot more 304, which may actually have an impact on its ability to keep up with the content, so I guess I will report back.

    Alexys Jacob a.k.a. ultrabug (homepage, bugs)
    py3status v3.6 (August 22, 2017, 06:00 UTC)

    After four months of cool contributions and hard work on normalization and modules’ clean up, I’m glad to announce the release of py3status v3.6!

    Milestone 3.6 was mainly focused on existing modules, from their documentation to their usage of the py3 helper to streamline their code base.

    Other improvements were made about error reporting while some sneaky bugs got fixed along the way.

    Highlights

    Not an exhaustive list; check the changelog.

    • LOTS of modules streamlining (mainly the hard work of @lasers)
    • error reporting improvements
    • py3-cmd performance improvements

    New modules

    • i3blocks support (yes, py3status can now wrap i3blocks thanks to @tobes)
    • cmus module: to control your cmus music player, by @lasers
    • coin_market module: to display custom cryptocurrency data, by @lasers
    • moc module: to control your moc music player, by @lasers

    Milestone 3.7

    This milestone will give a serious kick into py3status performance. We’ll do lots of profiling and drastic work to reduce py3status CPU and memory footprints!

    For now we’ve been relying a lot on threads, which are simple to operate but not that CPU/memory friendly. Since i3wm users rightly care about efficiency, we think it’s about time we addressed these kinds of issues in py3status.

    Stay tuned, we have some nice ideas in stock 🙂

    Thanks contributors!

    This release is their work, thanks a lot guys!

    • aethelz
    • alexoneill
    • armandg
    • Cypher1
    • docwalter
    • enguerrand
    • fmorgner
    • guiniol
    • lasers
    • markrileybot
    • maximbaz
    • tablet-mode
    • paradoxisme
    • ritze
    • rixx
    • tobes
    • valdur55
    • vvoland
    • yabbes

    August 19, 2017
    Hardened Linux kernel sources removal (August 19, 2017, 00:00 UTC)

    As you may know, the core of sys-kernel/hardened-sources has been the grsecurity patches. Recently the grsecurity developers have decided to limit access to these patches. As a result, the Gentoo Hardened team is unable to ensure a regular patching schedule and therefore the security of the users of these kernel sources. Thus, we will be masking hardened-sources on the 27th of August and will proceed to remove them from the main ebuild repository by the end of September. We recommend using sys-kernel/gentoo-sources instead. Userspace hardening and support for SELinux will of course remain in the Gentoo ebuild repository. Please see the full news item for additional information and links.

    August 18, 2017

    FroSCon logo

    Upcoming weekend, 19-20th August 2017, there will be a Gentoo booth again at the FrOSCon “Free and Open Source Conference” 12, in St. Augustin near Bonn! Visitors can see Gentoo live in action, get Gentoo swag, and prepare, configure, and compile their own Gentoo buttons. See you there!

    August 12, 2017
    Luca Barbato a.k.a. lu_zero (homepage, bugs)
    Optimizing rust (August 12, 2017, 19:16 UTC)

    After the post about optimization, Kostya and many commenters (me included) discussed a bit whether there are better ways to optimize that loop without using unsafe code.

    Kostya provided me with a test function and multiple implementations of his, and I polished and benchmarked the whole thing.

    The code

    I put the code in a simple project; initially it was a single main.rs, and then it grew a little.

    It all started with this function:

    pub fn recombine_plane_reference(
        src: &[i16],
        sstride: usize,
        dst: &mut [u8],
        dstride: usize,
        w: usize,
        h: usize,
    ) {
        let mut idx0 = 0;
        let mut idx1 = w / 2;
        let mut idx2 = (h / 2) * sstride;
        let mut idx3 = idx2 + idx1;
        let mut oidx0 = 0;
        let mut oidx1 = dstride;
    
        for _ in 0..(h / 2) {
            for x in 0..(w / 2) {
                let p0 = src[idx0 + x];
                let p1 = src[idx1 + x];
                let p2 = src[idx2 + x];
                let p3 = src[idx3 + x];
                let s0 = p0.wrapping_add(p2);
                let d0 = p0.wrapping_sub(p2);
                let s1 = p1.wrapping_add(p3);
                let d1 = p1.wrapping_sub(p3);
                let o0 = s0.wrapping_add(s1).wrapping_add(2);
                let o1 = d0.wrapping_add(d1).wrapping_add(2);
                let o2 = s0.wrapping_sub(s1).wrapping_add(2);
                let o3 = d0.wrapping_sub(d1).wrapping_add(2);
                dst[oidx0 + x * 2 + 0] = clip8(o0.wrapping_shr(2).wrapping_add(128));
                dst[oidx0 + x * 2 + 1] = clip8(o1.wrapping_shr(2).wrapping_add(128));
                dst[oidx1 + x * 2 + 0] = clip8(o2.wrapping_shr(2).wrapping_add(128));
                dst[oidx1 + x * 2 + 1] = clip8(o3.wrapping_shr(2).wrapping_add(128));
            }
            idx0 += sstride;
            idx1 += sstride;
            idx2 += sstride;
            idx3 += sstride;
            oidx0 += dstride * 2;
            oidx1 += dstride * 2;
        }
    }
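    The clip8 helper is not shown in the post; judging from the call sites (it receives a widened i16 intermediate and fills a u8 plane), a minimal sketch could look like the following. The name and body are assumptions, not the author's actual code:

```rust
// Hypothetical clip8: not shown in the original post. From the call
// sites it must clamp a widened i16 intermediate into the 0..=255
// range of an output byte.
fn clip8(v: i16) -> u8 {
    v.max(0).min(255) as u8
}

fn main() {
    assert_eq!(clip8(-5), 0);
    assert_eq!(clip8(128), 128);
    assert_eq!(clip8(300), 255);
    println!("clip8 behaves as expected");
}
```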
    

    Benchmark

    Kostya used perf to measure the number of samples taken over a large number of iterations. I wanted to make the benchmark a little more portable, so I used the time::PreciseTime API from the time crate to measure something a little more coarse, but good enough for our purposes.

    We want to see whether rewriting the loop using unsafe pointers or high level iterators provides a decent speedup; there is no need to be overly precise.

    NB: I decided not to use the bencher utility provided with nightly Rust to keep the code even easier to use.

    fn benchme<F>(name: &str, n: usize, mut f: F)
        where F: FnMut() {
        let start = PreciseTime::now();
        for _ in 0..n {
            f();
        }
        let end = PreciseTime::now();
        println!("Runtime {} {}", name, start.to(end));
    }
    
    # cargo run --release
    

    Unsafe code

    Kostya and I both have a C background, so for him (and for me) it felt natural to embrace unsafe {} and use raw pointers the way we are used to.

    pub fn recombine_plane_unsafe(
        src: &[i16],
        sstride: usize,
        dst: &mut [u8],
        dstride: usize,
        w: usize,
        h: usize,
    ) {
        unsafe {
            let hw = (w / 2) as isize;
            let mut band0 = src.as_ptr();
            let mut band1 = band0.offset(hw);
            let mut band2 = band0.offset(((h / 2) * sstride) as isize);
            let mut band3 = band2.offset(hw);
            let mut dst0 = dst.as_mut_ptr();
            let mut dst1 = dst0.offset(dstride as isize);
            let hh = (h / 2) as isize;
            for _ in 0..hh {
                let mut b0_ptr = band0;
                let mut b1_ptr = band1;
                let mut b2_ptr = band2;
                let mut b3_ptr = band3;
                let mut d0_ptr = dst0;
                let mut d1_ptr = dst1;
                for _ in 0..hw {
                    let p0 = *b0_ptr;
                    let p1 = *b1_ptr;
                    let p2 = *b2_ptr;
                    let p3 = *b3_ptr;
                    let s0 = p0.wrapping_add(p2);
                    let s1 = p1.wrapping_add(p3);
                    let d0 = p0.wrapping_sub(p2);
                    let d1 = p1.wrapping_sub(p3);
                    let o0 = s0.wrapping_add(s1).wrapping_add(2);
                    let o1 = d0.wrapping_add(d1).wrapping_add(2);
                    let o2 = s0.wrapping_sub(s1).wrapping_add(2);
                    let o3 = d0.wrapping_sub(d1).wrapping_add(2);
                    *d0_ptr.offset(0) = clip8((o0 >> 2).wrapping_add(128));
                    *d0_ptr.offset(1) = clip8((o1 >> 2).wrapping_add(128));
                    *d1_ptr.offset(0) = clip8((o2 >> 2).wrapping_add(128));
                    *d1_ptr.offset(1) = clip8((o3 >> 2).wrapping_add(128));
                    b0_ptr = b0_ptr.offset(1);
                    b1_ptr = b1_ptr.offset(1);
                    b2_ptr = b2_ptr.offset(1);
                    b3_ptr = b3_ptr.offset(1);
                    d0_ptr = d0_ptr.offset(2);
                    d1_ptr = d1_ptr.offset(2);
                }
                band0 = band0.offset(sstride as isize);
                band1 = band1.offset(sstride as isize);
                band2 = band2.offset(sstride as isize);
                band3 = band3.offset(sstride as isize);
                dst0 = dst0.offset((dstride * 2) as isize);
                dst1 = dst1.offset((dstride * 2) as isize);
            }
        }
    }
    

    The function is faster than baseline:

        Runtime reference   PT1.598052169S
        Runtime unsafe      PT1.222646190S
    

    Explicit upcasts

    Kostya noticed that telling rust to use i32 instead of i16 gave some performance boost.

        Runtime reference       PT1.601846926S
        Runtime reference 32bit PT1.371876242S
        Runtime unsafe          PT1.223115917S
        Runtime unsafe 32bit    PT1.124667021S
    

    I’ll keep variants between i16 and i32 to see when it is important and when it is not.

    Note: Making code generic over primitive types is currently pretty painful and hopefully will be fixed in the future.

    High level abstractions

    Most of the comments to Kostya’s original post were about leveraging the high level abstractions to make the compiler understand the code better.

    Use Iterators

    Rust is able to omit the bounds checks if there is a guarantee that the code cannot go out of the slice boundaries. Using iterators instead of for loops over external index variables should do the trick.
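    As a toy illustration (not from the post; modern rustc may be able to elide the checks in both variants, but the iterator form makes the guarantee explicit):

```rust
// Toy example: the indexed version performs a bounds-checked access on
// every iteration, while the iterator version gives the compiler a
// guarantee that no access can be out of range.
fn sum_indexed(v: &[i16]) -> i32 {
    let mut acc = 0i32;
    for i in 0..v.len() {
        acc += v[i] as i32; // bounds-checked slice access
    }
    acc
}

fn sum_iter(v: &[i16]) -> i32 {
    v.iter().map(|&x| x as i32).sum() // no explicit indexing
}

fn main() {
    let v = [1i16, 2, 3, 4];
    assert_eq!(sum_indexed(&v), sum_iter(&v));
    println!("{}", sum_iter(&v));
}
```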

    Use Chunks

    chunks and chunks_mut take a slice and provide a nice iterator that yields at-most-N-sized pieces of the input slice.

    Since the code works line by line, it is quite natural to use them.
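    A minimal illustration of the at-most-N behaviour:

```rust
// chunks(n) yields slices of at most n elements; the last chunk may be
// shorter when the length is not a multiple of n.
fn main() {
    let line = [1u8, 2, 3, 4, 5];
    let pieces: Vec<&[u8]> = line.chunks(2).collect();
    assert_eq!(pieces, vec![&[1u8, 2][..], &[3, 4][..], &[5][..]]);
    println!("{:?}", pieces);
}
```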

    Use split_at

    split_at and split_at_mut give you independent slices, even mutable ones. The code writes two lines at a time, so being able to mutably access two regions of the frame is a boon.

    The (read-only) input is divided into bands and the output is produced two lines at a time. split_at is much better than hand-made slicing, and split_at_mut is perfect for writing the even and the odd line at the same time.
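    A small sketch (toy sizes, not the post's code) of how split_at_mut yields two independently writable lines:

```rust
// split_at_mut returns two non-overlapping mutable slices, so the even
// and the odd output line can both be written without fighting the
// borrow checker.
fn main() {
    let dstride = 4;
    let mut frame = [0u8; 8]; // two lines of dstride bytes each
    let (line0, line1) = frame.split_at_mut(dstride);
    line0[0] = 1; // even line
    line1[0] = 2; // odd line
    assert_eq!(frame, [1, 0, 0, 0, 2, 0, 0, 0]);
    println!("{:?}", frame);
}
```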

    All together

    pub fn recombine_plane_chunks_32(
        src: &[i16],
        sstride: usize,
        dst: &mut [u8],
        dstride: usize,
        w: usize,
        h: usize,
    ) {
        let hw = w / 2;
        let hh = h / 2;
        let (src1, src2) = src.split_at(sstride * hh);
        let mut src1i = src1.chunks(sstride);
        let mut src2i = src2.chunks(sstride);
        let mut dstch = dst.chunks_mut(dstride * 2);
        for _ in 0..hh {
            let s1 = src1i.next().unwrap();
            let s2 = src2i.next().unwrap();
            let mut d = dstch.next().unwrap();
            let (mut d0, mut d1) = d.split_at_mut(dstride);
            let (b0, b1) = s1.split_at(hw);
            let (b2, b3) = s2.split_at(hw);
            let mut di0 = d0.iter_mut();
            let mut di1 = d1.iter_mut();
            let mut bi0 = b0.iter();
            let mut bi1 = b1.iter();
            let mut bi2 = b2.iter();
            let mut bi3 = b3.iter();
            for _ in 0..hw {
                let p0 = bi0.next().unwrap();
                let p1 = bi1.next().unwrap();
                let p2 = bi2.next().unwrap();
                let p3 = bi3.next().unwrap();
                recombine_core_32(*p0, *p1, *p2, *p3, &mut di0, &mut di1);
            }
        }
    }
    

    It is a good improvement over the reference baseline, but still not as fast as unsafe.

        Runtime reference       PT1.621158410S
        Runtime reference 32bit PT1.467441931S
        Runtime unsafe          PT1.226046003S
        Runtime unsafe 32bit    PT1.126615305S
        Runtime chunks          PT1.349947181S
        Runtime chunks 32bit    PT1.350027322S
    

    Use of zip or izip

    Using next().unwrap() feels clumsy and forces the iterators to be explicitly mutable. The loop can be written in a nicer way using the standard-library zip and the itertools-provided izip.

    zip works fine for 2 iterators, then you start piling up (so, (many, (tuples, (that, (feels, lisp))))) (or (feels (lisp, '(so, many, tuples))) according to a reader). izip flattens the result, so it is sort of nicer.
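    A tiny example (not from the post) of the nesting this produces with three zipped iterators:

```rust
// Zipping three iterators already yields nested tuples of the shape
// (a, (b, c)); with four bands, as in the recombine code, it becomes
// (a, (b, (c, d))).
fn main() {
    let a = [1, 2];
    let b = [3, 4];
    let c = [5, 6];
    let v: Vec<(&i32, (&i32, &i32))> =
        a.iter().zip(b.iter().zip(c.iter())).collect();
    assert_eq!(v, vec![(&1, (&3, &5)), (&2, (&4, &6))]);
    println!("{:?}", v);
}
```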

    pub fn recombine_plane_zip_16(
        src: &[i16],
        sstride: usize,
        dst: &mut [u8],
        dstride: usize,
        w: usize,
        h: usize,
    ) {
        let hw = w / 2;
        let hh = h / 2;
        let (src1, src2) = src.split_at(sstride * hh);
        let src1i = src1.chunks(sstride);
        let src2i = src2.chunks(sstride);
        let mut dstch = dst.chunks_mut(dstride * 2);
        for (s1, s2) in src1i.zip(src2i) {
            let mut d = dstch.next().unwrap();
            let (mut d0, mut d1) = d.split_at_mut(dstride);
            let (b0, b1) = s1.split_at(hw);
            let (b2, b3) = s2.split_at(hw);
            let mut di0 = d0.iter_mut();
            let mut di1 = d1.iter_mut();
            let iterband = b0.iter().zip(b1.iter().zip(b2.iter().zip(b3.iter())));
            for (p0, (p1, (p2, p3))) in iterband {
                recombine_core_16(*p0, *p1, *p2, *p3, &mut di0, &mut di1);
            }
        }
    }
    

    How would they fare?

        Runtime reference        PT1.614962959S
        Runtime reference 32bit  PT1.369636641S
        Runtime unsafe           PT1.223157417S
        Runtime unsafe 32bit     PT1.125534521S
        Runtime chunks           PT1.350069795S
        Runtime chunks 32bit     PT1.381841742S
        Runtime zip              PT1.249227707S
        Runtime zip 32bit        PT1.094282423S
        Runtime izip             PT1.366320546S
        Runtime izip 32bit       PT1.208708213S
    

    Pretty well.

    It looks like izip is currently a little more wasteful than zip, so it looks like we have a winner 🙂

    Conclusions

    • Compared to common imperative programming patterns, using the high level abstractions does lead to a nice speedup: use iterators when you can!
    • Not all abstractions are zero-cost: zip made the overall code faster, while izip led to a speed regression.
    • Do benchmark your time-critical code. Nightly Rust has some facilities for it, BUT they are not great for micro-benchmarks.

    Overall I’m enjoying a lot writing code in Rust.

    August 08, 2017
    Alexys Jacob a.k.a. ultrabug (homepage, bugs)
    ScyllaDB meets Gentoo Linux (August 08, 2017, 14:19 UTC)

    I am happy to announce that my work on packaging ScyllaDB for Gentoo Linux is complete!

    Happy or curious users are very welcome to share their thoughts and ping me to get it into portage (which will very likely happen).

    Why Scylla?

    Ever heard of the Cassandra NoSQL database and its Java GC/heap space problems? If you have, you already get it 😉

    I will not go into the details, as their website explains it way better than I could, but I got interested in Scylla because it fits the Gentoo Linux philosophy very well. If you remember my writing about packaging Rethinkdb for Gentoo Linux, I think we have a great match with Scylla as well!

    • it is written in C++ so it plays very well with emerge
    • the code quality is so great that building it does not require heavy patching on the ebuild (feels good to be a packager)
    • the code relies on system libs instead of bundling them in the sources (hurrah!)
    • performance tuning is handled by smart scripting and automation, so the relationship between the project and the hardware it runs on is strong

    I believe that these are good enough points to go further and that such a project can benefit from a source based distribution like Gentoo Linux. Of course compiling on multiple systems is a challenge for such a database but one does not improve by staying in their comfort zone.

    Upstream & contributions

    Packaging is a great excuse to get to know the source code of a project but more importantly the people behind it.

    So this is how I made my first contributions to Scylla: getting Gentoo Linux detected and supported as a Linux distribution in the different scripts and tools used to automatically set up the machine it will run upon (fear not, I contributed bash & python, not C++)…

    Even though I expected to contribute via GitHub PRs and had to change my habits to a git-patch+mailing-list combo, I was warmly welcomed and received positive and genuine interest in the contributions. They got merged quickly, and thanks to them you can install and experience Scylla on Gentoo Linux without heavy patching on our side.

    Special shout out to Pekka, Avi and Vlad for their welcoming and insightful code reviews!

    I have some open contributions pushing further on the Python code QA side to bring the tools to a higher level of coding standards. Seeing how serious upstream is about this, I have faith that they will get merged and become a good base for other contributions.

    One last note about reaching them: I am a bit sad that they’re not using IRC on freenode to communicate (I instinctively joined #scylla and found myself alone), but they’re on Slack (those “modern folks”) and pretty responsive on the mailing lists 😉

    Java & Scylla

    Even though Scylla is a rewrite of Cassandra in C++, the project still relies on some external tools used by the Cassandra community which are written in Java.

    When you install the scylla package on Gentoo, you will see that those two packages are Java based dependencies:

    • app-admin/scylla-tools
    • app-admin/scylla-jmx

    It pained me a lot to package those (thanks to the help of @monsieurp), but they are building and working as expected, so this makes the packaging of the whole Scylla project pretty solid.

    emerge dev-db/scylla

    The scylla packages are located in the ultrabug overlay for now until I test them even more and ultimately put them in production. Then they’ll surely reach the portage tree with the approval of the Gentoo java team for the app-admin/ packages listed above.

    I provide a live ebuild (scylla-9999 with no keywords) and ebuilds for the latest major version (2.0_rc1 at time of writing).

    It’s as simple as:

    $ sudo layman -a ultrabug
    $ sudo emerge -a dev-db/scylla
    $ sudo emerge --config dev-db/scylla

    Try it out and tell me what you think, I hope you’ll start considering and using this awesome database!

    August 06, 2017
    Sebastian Pipping a.k.a. sping (homepage, bugs)

    Update: I moved to disroot.org now.

    August 02, 2017
    Sebastian Pipping a.k.a. sping (homepage, bugs)

    Just a quick note that Expat 2.2.3 has been released. For Windows users, it fixes DLL hijacking (CVE-2017-11742). On Linux, extracting entropy for Hash DoS protection no longer blocks, which affected D-Bus and systems that are low on entropy early in the boot process. For more details, please check the change log.

    July 27, 2017
    Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

    Last evening, I ran some updates on one of my servers. One of the updates was from MariaDB 10.1 to 10.2 (some minor release as well). After compiling, I went to restart, but it failed with:

    # /etc/init.d/mysql start
    * Checking mysqld configuration for mysql ...
    [ERROR] Can't find messagefile '/usr/share/mysql/errmsg.sys'
    [ERROR] Aborting

    * mysql config check failed [ !! ]
    * ERROR: mysql failed to start

    I’m not sure why this only hit me now, but it looks like a function within the init script is causing it to look for files in the nonexistent directory /usr/share/mysql/ instead of the appropriate /usr/share/mariadb/. The fast fix here (so that I could get everything back up and running as quickly as possible) was to simply symlink the directory:

    cd /usr/share
    ln -s mariadb/ mysql

    Thereafter, MariaDB came up without any problem:

    # /etc/init.d/mysql start
    * Caching service dependencies ... [ ok ]
    * Checking mysqld configuration for mysql ... [ ok ]
    * Starting mysql ... [ ok ]
    # /etc/init.d/mysql status
    * status: started

    I hope that information helps if you’re in a pinch and run into the same error message.

    Cheers,
    Zach

    UPDATE: It seems as if the default locations for MySQL/MariaDB configurations have changed (in Gentoo). Please see this comment for more information about a supportable fix for this problem moving forward. Thanks to Brian Evans for the information. 🙂

    July 23, 2017
    Michał Górny a.k.a. mgorny (homepage, bugs)
    Optimizing ccache using per-package caches (July 23, 2017, 18:03 UTC)

    ccache can be of great assistance to Gentoo developers and users who frequently end up rebuilding similar versions of packages. By providing a caching compiler frontend, it can speed up builds by removing the need to build files that have not changed again. However, it uses a single common cache directory by default which can be suboptimal even if you are explicitly enabling ccache only for a subset of packages needing that.

    The likelihood of cross-package ccache hits is pretty low — the majority of hits occur within a single package. If you use a single cache directory for all affected packages, it grows pretty quickly. Besides a possible performance hit from having a lot of files in every directory, this means that packages built later can push earlier packages out of the cache, resulting in needlessly lost cache hits. A simple way to avoid both of these problems is to use separate ccache directories.

    In my solution, a separate subdirectory of /var/cache/ccache is used for every package, named after the category, package name and slot. While the last one is not strictly necessary, it can be useful for slotted packages such as LLVM where I do not want frequently changing live package sources to shift the release versions out of the cache.

    To use it, put a code similar to the following in your /etc/portage/bashrc:

    if [[ ${FEATURES} == *ccache* && ${EBUILD_PHASE_FUNC} == src_* ]]; then
    	if [[ ${CCACHE_DIR} == /var/cache/ccache ]]; then
    		export CCACHE_DIR=/var/cache/ccache/${CATEGORY}/${PN}:${SLOT}
    		mkdir -p "${CCACHE_DIR}" || die
    	fi
    fi

    The first condition makes sure the code is only run when ccache is enabled, and only for src_* phases where we can rely on userpriv being used consistently. The second one makes sure the code only applies to a specific (my initial) value of CCACHE_DIR and therefore avoids both nesting the cache indefinitely when Portage calls subsequent phase functions, and applying the replacement if user overrides CCACHE_DIR.

    You need to either adjust the value used here to the directory used on your system, or change it in your /etc/portage/make.conf:

    CCACHE_DIR="/var/cache/ccache"

    Once this is done, Portage should start creating separate cache directories for every package where you enable ccache. This should improve the cache hit ratio, especially if you are using ccache for large packages (why else would you need it?). However, note that you will no longer have a single cache size limit — every package will have its own limit. Therefore, you may want to reduce the limits per-package, or manually look after the cache periodically.

    July 20, 2017
    Hanno Böck a.k.a. hanno (homepage, bugs)

    Lately, some attention was drawn to a widespread problem with TLS certificates. Many people are accidentally publishing their private keys. Sometimes they are released as part of applications, in Github repositories or with common filenames on web servers.

    If a private key is compromised, a certificate authority is obliged to revoke the corresponding certificate. The Baseline Requirements – a set of rules that browsers and certificate authorities agreed upon – regulate this and say that in such a case a certificate authority shall revoke the certificate within 24 hours (Section 4.9.1.1 in the current Baseline Requirements 1.4.8). These rules exist despite the fact that revocation has various problems and doesn’t work very well, but that’s another topic.

    I reported various key compromises to certificate authorities recently and while not all of them reacted in time, they eventually revoked all certificates belonging to the private keys. I wondered however how thorough they actually check the key compromises. Obviously one would expect that they cryptographically verify that an exposed private key really is the private key belonging to a certificate.

    I registered two test domains at a provider that would allow me to hide my identity and not show up in the whois information. I then ordered test certificates from Symantec (via their brand RapidSSL) and Comodo. These are the biggest certificate authorities and they both offer short term test certificates for free. I then tried to trick them into revoking those certificates with a fake private key.

    Forging a private key

    To understand this we need to get a bit into the details of RSA keys. In essence a cryptographic key is just a set of numbers. For RSA a public key consists of a modulus (usually named N) and a public exponent (usually called e). You don’t have to understand their mathematical meaning, just keep in mind: They’re nothing more than numbers.

    An RSA private key is also just numbers, but more of them. If you have heard any introductory RSA descriptions you may know that a private key consists of a private exponent (called d), but in practice it’s a bit more. Private keys usually contain the full public key (N, e), the private exponent (d) and several other values that are redundant, but they are useful to speed up certain things. But just keep in mind that a public key consists of two numbers and a private key is a public key plus some additional numbers. A certificate ultimately is just a public key with some additional information (like the host name that says for which web page it’s valid) signed by a certificate authority.

    A naive check whether a private key belongs to a certificate could be done by extracting the public key parts of both the certificate and the private key for comparison. However it is quite obvious that this isn’t secure. An attacker could construct a private key that contains the public key of an existing certificate and the private key parts of some other, bogus key. Obviously such a fake key couldn’t be used and would only produce errors, but it would survive such a naive check.

    I created such fake keys for both domains and uploaded them to Pastebin. If you want to create such fake keys on your own here’s a script. To make my report less suspicious I searched Pastebin for real, compromised private keys belonging to certificates. This again shows how problematic the leakage of private keys is: I easily found seven private keys for Comodo certificates and three for Symantec certificates, plus several more for other certificate authorities, which I also reported. These additional keys allowed me to make my report to Symantec and Comodo less suspicious: I could hide my fake key report within other legitimate reports about a key compromise.

    Symantec revoked a certificate based on a forged private key

    Comodo didn’t fall for it; they answered me that there is something wrong with this key. Symantec however answered me that they revoked all certificates – including the one with the fake private key.

    No harm was done here, because the certificate was only issued for my own test domain. But I could also have faked private keys of other people's certificates. Very likely Symantec would have revoked them as well, causing downtime for those sites. I could even have easily created a fake key belonging to Symantec’s own certificate.

    The communication by Symantec with the domain owner was far from ideal. I first got a mail that they were unable to process my order. Then I got another mail about a “cancellation request”. They didn’t explain what really happened and that the revocation happened due to a key uploaded on Pastebin.

    I then informed Symantec about the invalid key (from my “real” identity), claiming that I just noted there’s something wrong with it. At that point they should’ve been aware that they revoked the certificate in error. Then I contacted the support with my “domain owner” identity and asked why the certificate was revoked. The answer: “I wanted to inform you that your FreeSSL certificate was cancelled as during a log check it was determined that the private key was compromised.”

    To summarize: Symantec never told the domain owner that the certificate was revoked due to a key leaked on Pastebin. I assume in all the other cases they also didn’t inform their customers. Thus they may have experienced a certificate revocation, but don’t know why. So they can’t learn and can’t improve their processes to make sure this doesn’t happen again. Also, Symantec still insisted to the domain owner that the key was compromised even after I already had informed them that the key was faulty.

    How to check if a private key belongs to a certificate?

    In case you wonder how you properly check whether a private key belongs to a certificate, you may of course resort to a Google search. And this was fascinating – and scary – to me: I searched Google for “check if private key matches certificate”. I got plenty of instructions. Almost all of them were wrong. The first result is a page from SSLShopper. They recommend comparing the MD5 hash of the modulus. That they use MD5 is not the problem here; the problem is that this is a naive check comparing only parts of the public key. They even provide a form to check this. (That they ask you to put your private key into a form is a different issue on its own, but at least they have a warning about this and recommend to check locally.)

    Furthermore we get the same wrong instructions from the University of Wisconsin, Comodo (good that their engineers were smart enough not to rely on their own documentation), tbs internet (“SSL expert since 1996”), ShellHacks, IBM and RapidSSL (aka Symantec). A post on Stackexchange is the only result that actually mentions a proper check for RSA keys. Two more Stackexchange posts are not related to RSA, I haven’t checked their solutions in detail.

    Going to Google results page two among some unrelated links we find more wrong instructions and tools from Symantec, SSL247 (“Symantec Specialist Partner Website Security” - they learned from the best) and some private blog. A documentation by Aspera (belonging to IBM) at least mentions that you can check the private key, but in an unrelated section of the document. Also we get more tools that ask you to upload your private key and then not properly check it from SSLChecker.com, the SSL Store (Symantec “Website Security Platinum Partner”), GlobeSSL (“in SSL we trust”) and - well - RapidSSL.

    Documented Security Vulnerability in OpenSSL

    So if people google for instructions they’ll almost inevitably end up with non-working instructions or tools. But what about other options? Let’s say we want to automate this and have a tool that verifies whether a certificate matches a private key using OpenSSL. We may end up finding that OpenSSL has a function X509_check_private_key() that can be used to “check the consistency of a private key with the public key in an X509 certificate or certificate request”. Sounds like exactly what we need, right?

    Well, until you read the full docs and find out that it has a BUGS section: “The check_private_key functions don't check if k itself is indeed a private key or not. It merely compares the public materials (e.g. exponent and modulus of an RSA key) and/or key parameters (e.g. EC params of an EC key) of a key pair.”

    I think this is a security vulnerability in OpenSSL (discussion with OpenSSL here). And that doesn’t change just because it’s a documented security vulnerability. Notably there are downstream consumers of this function that failed to copy that part of the documentation, see for example the corresponding PHP function (the limitation is however mentioned in a comment by a user).

    So how do you really check whether a private key matches a certificate?

    Ultimately there are two reliable ways to check whether a private key belongs to a certificate. One way is to check whether the various values of the private key are consistent and then check whether the public key matches. For example a private key contains values p and q that are the prime factors of the public modulus N. If you multiply them and compare them to N you can be sure that you have a legitimate private key. It’s one of the core properties of RSA that it’s secure based on the assumption that it’s not feasible to calculate p and q from N.
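    With textbook toy numbers (p = 61, q = 53, so N = 3233) the consistency check boils down to the following sketch; real keys are of course far too large for machine integers and need a bignum library:

```rust
// Toy illustration of the consistency check: a legitimate private key
// must satisfy p * q == N, where N is the modulus found in the
// certificate's public key. A forged key that merely copies N from the
// certificate but carries unrelated private values fails this test.
fn key_is_consistent(p: u64, q: u64, n: u64) -> bool {
    p * q == n
}

fn main() {
    let n = 3233; // public modulus from the certificate (61 * 53)
    assert!(key_is_consistent(61, 53, n)); // genuine key material
    assert!(!key_is_consistent(7, 11, n)); // bogus factors: 77 != 3233
    println!("consistency check works");
}
```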

    You can use OpenSSL to check the consistency of a private key:
    openssl rsa -in [privatekey] -check

    For my forged keys it will tell you:
    RSA key error: n does not equal p q

    You can then compare the public key, for example by calculating the so-called SPKI SHA256 hash:
    openssl pkey -in [privatekey] -pubout -outform der | sha256sum
    openssl x509 -in [certificate] -pubkey |openssl pkey -pubin -pubout -outform der | sha256sum

    Another way is to sign a message with the private key and then verify it with the public key. You could do it like this:
    openssl x509 -in [certificate] -noout -pubkey > pubkey.pem
    dd if=/dev/urandom of=rnd bs=32 count=1
    openssl rsautl -sign -pkcs -inkey [privatekey] -in rnd -out sig
    openssl rsautl -verify -pkcs -pubin -inkey pubkey.pem -in sig -out check
    cmp rnd check
    rm rnd check sig pubkey.pem

    If cmp produces no output then the signature matches.

    As this is all quite complex due to OpenSSL’s arcane command line interface, I have put it all together in a script. You can pass a certificate and a private key, both in ASCII/PEM format, and it will do both checks.

    Summary

    Symantec did a major blunder by revoking a certificate based on completely forged evidence. There’s hardly any excuse for this and it indicates that they operate a certificate authority without a proper understanding of the cryptographic background.

    Apart from that the problem of checking whether a private key and certificate match seems to be largely documented wrong. Plenty of erroneous guides and tools may cause others to fall for the same trap.

    Update: Symantec answered with a blog post.

    July 18, 2017
    Sven Vermeulen a.k.a. swift (homepage, bugs)
    Project prioritization (July 18, 2017, 18:40 UTC)

    This is a long read, skip to “Prioritizing the projects and changes” for the approach details...

    Organizations and companies generally have an IT workload (dare I say, backlog?) which needs to be properly assessed, prioritized and taken up. Sometimes, the IT team(s) get an amount of budget and HR resources to "do their thing", while others need to continuously ask for approval to launch a new project or instantiate a change.

    Sizeable organizations even require engineering and development effort on IT projects which are not readily available: specialized teams exist, but they are governance-wise assigned to projects. And as everyone thinks their project is the top-most priority one, many will be disappointed when they hear there are no resources available for their pet project.

    So... how should organizations prioritize such projects?

    July 16, 2017
    Michał Górny a.k.a. mgorny (homepage, bugs)
    GLEP 73 check results explained (July 16, 2017, 08:40 UTC)

    The pkgcheck instance run for the Repo mirror&CI project has finished gaining a full support for GLEP 73 REQUIRED_USE validation and verification today. As a result, it can report 5 new issues defined by that GLEP. In this article, I’d like to shortly summarize them and explain how to interpret and solve the reports.

    Technical note: the GLEP number has not been formally assigned yet. However, since there is no other GLEP request open at the moment, I have taken the liberty of using the next free number in the implementation.

    GLEP73Syntax: syntax violates GLEP 73

    GLEP 73 specifies a few syntax restrictions as compared to the pretty much free-form syntax allowed by the PMS. The restrictions could be shortly summarized as:

    • ||, ^^ and ?? can not be empty,
    • ||, ^^ and ?? can not be nested,
    • USE-conditional groups can not be used inside ||, ^^ and ??,
    • All-of groups (expressed using parentheses without a prefix) are banned completely.

    The full rationale for the restrictions, along with examples and proposed fixes is provided in the GLEP. For the purpose of this article, it is enough to say that in all the cases found, there was a simpler (more obvious) way of expressing the same constraint.
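    As a hypothetical illustration (flag names invented), nested any-of groups are banned and can always be flattened into a simpler equivalent:

```
REQUIRED_USE="|| ( || ( a b ) c )"   # banned: nested ||-groups
REQUIRED_USE="|| ( a b c )"          # equivalent flat form
```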

    Violation of this syntax prevents pkgcheck from performing any of the remaining checks. But more importantly, the report indicates that the constraint is unnecessarily complex and could result in REQUIRED_USE mismatch messages that are unnecessarily confusing to the user. Taking a real example, compare:

      The following REQUIRED_USE flag constraints are unsatisfied:
        exactly-one-of ( ( !32bit 64bit ) ( 32bit !64bit ) ( 32bit 64bit ) )

    and the effect of a valid replacement:

      The following REQUIRED_USE flag constraints are unsatisfied:
    	any-of ( 64bit 32bit )

    While we could debate the usefulness of the Portage output, I think it is clear that the second message is simpler to comprehend. The best proof is that you actually need to think a bit before confirming that the two are equivalent.

    GLEP73Immutability: REQUIRED_USE violates immutability rules

    This one is rather simple: it means this constraint may tell user to enable (disable) a flag that is use.masked/forced. Taking a trivial example:

    a? ( b )

    GLEP73Immutability report will trigger if a profile masks the b flag. This means that if the user has a enabled, the PM would normally tell him to enable b as well. However, since b is masked, it can not be enabled using normal methods (we assume that altering use.mask is not normally expected).

    The alternative is to disable a instead. But what’s the point of letting the user enable it if we afterwards tell him to disable it anyway? It is friendlier to disable both flags together, and this is pretty much what the check is about. So in this case, the solution is to mask a as well.
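    As a sketch of the fix (the profile path and flag names are hypothetical), the profile would mask both flags:

```
# profiles/arch/foo/use.mask -- hypothetical path
b
a
```

    With both flags masked, the constraint a? ( b ) can never ask the user to enable the masked b.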

    How to read it? Given the generic message of:

    REQUIRED_USE violates immutability rules: [C] requires [E] while the opposite value is enforced by use.force/mask (in profiles: [P])

    It indicates that in profiles P (a lot of profiles usually indicates you’re looking for base or top-level arch profile), E is forced or masked, and that you probably need to force/mask C appropriately as well.

    GLEP73SelfConflicting: impossible self-conflicting condition

    This one is going to be extremely rare. It indicates that somehow the REQUIRED_USE nested a condition and its negation, causing it to never evaluate to true. It is best explained using the following trivial example:

    a? ( !a? ( b ) )

    This constraint will never be enforced since a and !a can not be true simultaneously.

    Is there a point in having such a report at all? Well, such a thing is extremely unlikely to happen. However, it would break the verification algorithms and so we need to account for it explicitly. Since we account for it anyway and it is a clear mistake, why not report it?
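    Such constraints can be detected mechanically by checking whether any flag assignment satisfies the whole chain of nested conditions. A minimal brute-force sketch (illustrative, not the actual pkgcheck code):

```python
from itertools import product

def self_conflicting(condition_flags):
    """Return True if a chain of nested USE conditions (outermost first)
    can never apply, i.e. no flag assignment satisfies all of them."""
    flags = sorted({f for f, _ in condition_flags})
    for values in product([False, True], repeat=len(flags)):
        env = dict(zip(flags, values))
        if all(env[f] == want for f, want in condition_flags):
            return False  # this assignment makes the condition apply
    return True

# a? ( !a? ( b ) ): the conditions guarding b are a=True and a=False
print(self_conflicting([("a", True), ("a", False)]))  # True: never applies
# a? ( !b? ( c ) ) is fine: a=True, b=False satisfies both conditions
print(self_conflicting([("a", True), ("b", False)]))  # False
```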

    GLEP73Conflict: request for conflicting states

    This warning indicates that there are at least two constraints that can apply simultaneously and request the opposite states for the same USE flag. Again, best explained on a generic example:

    a? ( c ) b? ( !c )

    In this example, any set of USE flags with both a and b enabled could not satisfy the constraint. However, Portage will happily lead us astray:

      The following REQUIRED_USE flag constraints are unsatisfied:
    	a? ( c )

    If we follow the advice and enable c, we get:

      The following REQUIRED_USE flag constraints are unsatisfied:
    	b? ( !c )

    The goal of this check is to avoid such bad advice, and to require constraints to clearly indicate a suggested way forward. For example, the above case could be modified to:

    a? ( !b c ) b? ( !c )

    to indicate that a takes precedence over b, and that b should be disabled to avoid the impossible constraint. The opposite can be stated similarly — however, note that you need to reorder the constraints to make sure that the PM will get it right:

    b? ( !a !c ) a? ( c )

    How to read it? Given the generic message of:

    REQUIRED_USE can request conflicting states: [Ci] requires [Ei] while [Cj] requires [Ej]

    It means that if the user enables Ci and Cj simultaneously, the PM will request conflicting Ei and Ej. Depending on the intent, the solution might involve negating one of the conditions in the other constraint, or reworking the REQUIRED_USE towards another solution.
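    A simplified way to detect such a conflict is to look for an assignment under which both conditions hold while the enforcements disagree. A brute-force sketch (the real GLEP 73 algorithm additionally tracks the enforcements of earlier constraints, which is what makes the reordered forms above acceptable):

```python
from itertools import product

def conflicts(impl1, impl2, flags):
    """Each implication is (condition, enforcement), both dicts mapping
    flag name to the required boolean value. Report a conflict when the
    enforcements request opposite values for some flag and there exists
    an assignment satisfying both conditions at once."""
    (c1, e1), (c2, e2) = impl1, impl2
    if not any(f in e2 and e1[f] != e2[f] for f in e1):
        return False  # enforcements never disagree
    for values in product([False, True], repeat=len(flags)):
        env = dict(zip(flags, values))
        if all(env[f] == v for f, v in c1.items()) and \
           all(env[f] == v for f, v in c2.items()):
            return True  # both conditions can hold simultaneously
    return False

# a? ( c ) vs b? ( !c ): both conditions hold when a and b are enabled
print(conflicts(({"a": True}, {"c": True}),
                ({"b": True}, {"c": False}),
                ["a", "b", "c"]))  # True
```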

    GLEP73BackAlteration: previous condition starts applying

    This warning is the most specific and the least important of the additions at the moment. It indicates that a constraint may cause a preceding condition to start to apply, enforcing additional requirements. Consider the following example:

    b? ( c ) a? ( b )

    If the user has only a enabled, the second rule will enforce b. Then the condition for the first rule will start matching, and additionally enforce c. Is this a problem? Usually not. However, for the purpose of GLEP 73 we prefer that the REQUIRED_USE can be enforced while processing left-to-right, in a single iteration. If a previous rule starts applying, we may need to do another iteration.

    The solution is usually trivial: to reorder (swap) the constraints. However, in some cases developers seem to prefer copying the enforcements into the subsequent rule, e.g.:

    b? ( c ) a? ( b c )

    Either way works for the purposes of GLEP 73, though the latter increases complexity.

    How to read it? Given the generic message of:

    REQUIRED_USE causes a preceding condition to start applying: [Cj] enforces [Ej] which may cause preceding [Ci] enforcing [Ei] to evaluate to true

    This indicates that if Cj is true, Ej needs to be true as well. Once it is true, a preceding condition of Ci may also become true, adding another requirement for Ei. To fix the issue, you need to either move the latter constraint before the former, or include the enforcement of Ei in the rule for Cj, rendering the application of the first rule unnecessary.
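    The single-pass requirement can be illustrated with a small sketch of left-to-right enforcement (illustrative, not the actual verification code):

```python
def enforce_once(implications, env):
    """One left-to-right pass: whenever a condition holds, apply its
    enforcement. implications is a list of (condition, enforcement)
    pairs, each a dict mapping flag name to the required value."""
    env = dict(env)
    for cond, enf in implications:
        if all(env.get(f) == v for f, v in cond.items()):
            env.update(enf)
    return env

start = {"a": True, "b": False, "c": False}
# b? ( c ) a? ( b ): the second rule sets b, but the first rule was
# already passed, so c stays unenforced after a single pass.
print(enforce_once([({"b": True}, {"c": True}),
                    ({"a": True}, {"b": True})], start))
# {'a': True, 'b': True, 'c': False}
# Swapped order a? ( b ) b? ( c ): one pass suffices.
print(enforce_once([({"a": True}, {"b": True}),
                    ({"b": True}, {"c": True})], start))
# {'a': True, 'b': True, 'c': True}
```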

    Constructs using ||, ^^ and ?? operators

    GLEP 73 specifies a leftmost-preferred behavior for the ||, ^^ and ?? operators. It is expressed in a simple transformation into implications (USE-conditional groups). Long story short:

    • || and ^^ groups force the leftmost unmasked flag if none of the flags are enabled already, and
    • ?? and ^^ groups disable all but the leftmost enabled flag if more than one flag is enabled.

    All the verification algorithms work on the transformed form, and so their output may list conditions resulting from it. For example, the following construct:

    || ( a b c ) static? ( !a )

    will report a conflict between !b !c ⇒ a and static ⇒ !a. This reflects the fact that, per the aforementioned rule, the || group is transformed into !b? ( !c? ( a ) ), meaning that if none of the flags are enabled, the first one is preferred, causing a conflict with the static flag.

    In this particular case you could argue that the algorithm should choose b or c instead in order to avoid the problem. However, we determined that this kind of heuristic is not a goal for GLEP 73, and instead we always abide by the developer’s preference expressed in the ordering. The only exception to this rule is when the leftmost flag can not match due to a mask, in which case the first unmasked flag is used.

    For completeness, I should add that ?? and ^^ blocks create implications in the form of: a ⇒ !b !c…, b ⇒ !c… and so on.
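    The two transformations can be sketched as follows (illustrative code, not the pkgcheck implementation; a ^^ group produces both kinds of implications):

```python
def or_group_to_implication(flags):
    """|| ( f1 f2 ... ) becomes !f2? ( !f3? ( ... ( f1 ) ) ): if none of
    the other flags is enabled, enforce the leftmost one."""
    head, rest = flags[0], flags[1:]
    return ({f: False for f in rest}, {head: True})

def at_most_one_to_implications(flags):
    """?? ( f1 f2 ... ) becomes f1 => !f2 !f3..., f2 => !f3..., etc.:
    each enabled flag disables all flags to its right."""
    out = []
    for i, f in enumerate(flags):
        later = flags[i + 1:]
        if later:
            out.append(({f: True}, {g: False for g in later}))
    return out

print(or_group_to_implication(["a", "b", "c"]))
# ({'b': False, 'c': False}, {'a': True})   i.e. !b !c => a
print(at_most_one_to_implications(["a", "b", "c"]))
# [({'a': True}, {'b': False, 'c': False}), ({'b': True}, {'c': False})]
```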

    At some point I might work on making the reports include the original form to avoid ambiguity.

    The future

    The most important goal for GLEP 73 is to make it possible for users to install packages out-of-the-box without having to fight through mazes of REQUIRED_USE, and for developers to use REQUIRED_USE not only sparingly but whenever possible to improve the visibility of resulting package configuration. However, there is still a lot of testing, some fixing and many bikesheds before that could happen.

    Nevertheless, I think we can all agree that most of the reports produced so far (with the exception of the back-alteration case) are meaningful even without automatic enforcing of REQUIRED_USE, and fixing them would benefit our users already. I would like to ask you to look for the reports on your packages and fix them whenever possible. Feel free to ping me if you need any help with that.

    Once the number of non-conforming packages goes down, I will convert the reports successively into warning levels, making the CI report new issues and the pull request scans proactively complain about them.

    July 14, 2017
    Sebastian Pipping a.k.a. sping (homepage, bugs)
    Expat 2.2.2 released (July 14, 2017, 17:32 UTC)

    (This article first appeared on XML.com.)

    A few weeks after release 2.2.1 of the free software XML parsing library Expat, version 2.2.2 now improves on a few rough edges (mostly related to compilation) but also fixes security issues.

    Windows binaries compiled with _UNICODE now use proper entropy for seeding the SipHash algorithm. On Unix-like platforms, accidentally missing out on high quality entropy sources is now prevented from going unnoticed: it would happen when a build system other than the configure script was used, e.g. the shipped CMake one, or when the source code was copied into some parent project’s build system without paying attention to the new compile flags (which the configure script would auto-detect for you). After some struggle with the decision, Expat now requires a C99 compiler; 18 years after C99’s definition, that’s a defensible move. The uint64_t type and ULL integer literals (unsigned long long) needed for SipHash made us move.

    Expat would like to thank the community for the bug reports and patches that went into Expat 2.2.2. If you maintain a bundled copy of Expat somewhere, please make sure it gets updated.

    Sebastian Pipping
    for the Expat development team

    July 12, 2017
    Alice Ferrazzi a.k.a. alicef (homepage, bugs)
    Google-Summer-of-Code-day20 (July 12, 2017, 08:54 UTC)

    Google Summer of Code day 20

    What was my plan for today?

    • work on the livepatch downloader and make the kpatch creator flexible

    What did I do today?

    • Created .travis.yml for validating changes https://github.com/aliceinwire/elivepatch/blob/master/.travis.yml
    • Finished making the live patch downloader https://github.com/aliceinwire/elivepatch/commit/6eca2eec3572cad0181b3ce61f521ff40fa85ec1
    • Testing elivepatch

    The POC generally works, but I had a problem building Linux kernel 4.9.29 on my notebook. One remaining problem with the POC is that some variables are still hard-coded.

    WARNING: Skipping gcc version matching check (not recommended)
    Skipping cleanup
    Using source directory at /usr/src/linux-4.9.29-gentoo
    Testing patch file
    checking file fs/exec.c
    Hunk #1 succeeded at 238 (offset -5 lines).
    Reading special section data
    Building original kernel
    Building patched kernel
    Extracting new and modified ELF sections
    /usr/libexec/kpatch/create-diff-object: ERROR: exec.o: find_local_syms: 136: find_local_syms for exec.c: found_none
    ERROR: 1 error(s) encountered. Check /root/.kpatch/build.log for more details.
    

    The failing function is find_local_syms: https://github.com/dynup/kpatch/blob/master/kpatch-build/lookup.c#L80

    Now I'm rebuilding everything with debug options to get some more useful information. I'm also thinking of adding a debug option to the elivepatch server.

    One question is whether it would be useful to add a feature that gets the kernel version from the kernel configuration file header.

    like this:

    .config
    #
    # Automatically generated file; DO NOT EDIT.
    # Linux/x86 4.9.29-gentoo Kernel Configuration
    #
    

    i.e. parsing this to get the version without the need to give it manually.

    Another option is to pass it over the REST API as a command line option,

    something like -g 4.9.29
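    A sketch of how the version could be parsed from the .config header comment (the function name is made up, and the header format shown is the one used by 4.x kernels):

```python
import re

def kernel_version_from_config(path):
    """Parse the kernel version from the comment header of a .config
    file, e.g. '# Linux/x86 4.9.29-gentoo Kernel Configuration'."""
    pattern = re.compile(r"^# Linux/\S+ (\S+) Kernel Configuration")
    with open(path) as cfg:
        for line in cfg:
            match = pattern.match(line)
            if match:
                return match.group(1)
    return None  # header not found; fall back to asking the user
```

    For the header shown above, this would return "4.9.29-gentoo".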

    Interestingly, kpatch-build already has some built-in ways of dealing with most problems, and it works better with distributions like Ubuntu or Fedora.

    For example, it already copies the .config file and builds the kernel with the options that we pass through the REST API: cp -f /home/alicef/IdeaProjects/elivepatch/elivepatch_server/config /usr/src/linux-4.9.29-gentoo/.config

    and the patch: cp /home/alicef/IdeaProjects/elivepatch/elivepatch_server/1.patch kpatch.patch

    It also checks the .config for missing configuration options: grep -q CONFIG_DEBUG_INFO_SPLIT=y /home/alicef/IdeaProjects/elivepatch/elivepatch_server/config

    What will I do next time?

    • Testing elivepatch
    • Getting the kernel version dynamically
    • Updating kpatch-build to work better with Gentoo

    July 11, 2017
    Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
    Best sushi in St. Louis? J Sushi in Arnold. (July 11, 2017, 04:09 UTC)

    As a self-proclaimed connoisseur of Asian cuisine, I’m constantly searching out the best restaurants in Saint Louis of the various regions and genres (Thai, Japanese, Vietnamese, as well as sushi, dim sum, et cetera). Having been to many of the staples of St. Louis sushi—Drunken Fish, Kampai, Wasabi, Cafe Mochi, and others—I’ve always been satisfied with their offerings, yet felt like they missed the mark in one way or another. Don’t get me wrong, all of those places have some great dishes, but I just found them to be lacking that spark to make them stand out as the leader of the pack.

    … and then it happened. One day when I was driving north on 61-67 (Jeffco Boulevard / Lemay Ferry), I noticed that the storefront in Water Tower Place that previously housed a mediocre Thai restaurant was set to reopen as a sushi joint. My first thought was “oh no, that’s probably not going to go over well in Arnold” but I hoped for the best. A couple weeks later, it opened as J Sushi. I added it to my ever-growing list of restaurants to try, but didn’t actually make it in for several more weeks.

    Salmon Killer roll at J Sushi in St. Louis, MO
    The Salmon Killer Roll with spicy crab, asparagus, salmon, cream cheese, mango sauce and Japanese mayo
    (click for full quality)

    Named for the original owner, Joon Kim, (who, as of this writing, is the owner of Shogun in Farmington, MO), J Sushi came onto the scene offering a huge variety of Japanese fare. From a smattering of traditional appetisers like tempura and gyoza, to a gigantic list of rolls and sashimi, to the “I don’t particularly care for raw fish” offerings in their Bento boxes, J Sushi offers dishes to appease just about anyone interested in trying Japanese cuisine.

    Since their initial opening, some things have changed at J Sushi. One of the biggest differences is that it is now owned by an employee that Joon himself trained in the ways of sushi over the years: Amanda, and her partner, Joseph. The two of them have taken an already-outstanding culinary experience and elevated it even further with their immediately noticeable hospitality and friendliness (not to mention, incredible aptitude for sushi)!

    VIP roll at J Sushi in St. Louis, MO
    The VIP Roll with seared salmon, and shrimp tempura… it’s on fire!
    (click for full quality)

    So, now that you have a brief history of the restaurant, let’s get to the key components that I look for when rating eateries. First and foremost, the food has to be far above par. I expect the food to not only be tasty, but also a true representation of the culture, elegantly plated, and creative. J Sushi delivers on all four of those aspects! I’ve had many of their appetisers, rolls, sushi/sashimi plates, and non-fish dishes, and have yet to find one that wasn’t good. Of course I have my favourites, but so far, nothing has hit the dreaded “do not order again” list. As for plating, the sushi chefs recognise that one firstly eats with the eyes. Dishes are presented in a clean fashion and many of them warrant taking a minute to appreciate them visually before delving in with your chopsticks.

    Second, the service has to be commendable. At J Sushi, Amanda, Joe, and the members of the waitstaff go out of their way to greet everyone as they come in and thank them after their meal. The waiters and waitresses come to the table often to check on your beverages, and to see if you need to order anything else. At a sushi restaurant, it’s very important to check for reorders as it’s commonplace to order just a couple rolls at a time. I can imagine that one of the complaints about the service is how long it takes to get your food after ordering. Though it is a valid concern, great sushi is intricate and takes time to execute properly. That being said, I have personally found the wait times to be completely acceptable, even when they’re really busy with dine-ins and take-away orders.

    Mastercard roll at J Sushi in St. Louis, MO
    The Master Card Roll with shrimp tempura, and gorgeously overlapped tuna, salmon, & mango
    (click for full quality)

    Third, the restaurant has to be a good value. Does that mean that it has to be inexpensive? No, not at all. When I’m judging a restaurant’s value, I take into consideration the quality of the ingredients used, the time and labour involved in preparation, the ambience, and the overall dining experience. J Sushi, in my opinion, excels in all of these areas, and still manages to keep their prices affordable. Yes, there are cheaper places to get sushi, and even some that offer “all you can eat” options, but you’re certainly exchanging quality for price at those types of establishments. I, for one, would rather pay a little more money to ensure that I’m getting very high quality fish (especially since the flavours and textures of the fish are exponentially heightened when consumed raw).

    The Dragon Bowl at J Sushi in St. Louis, MO
    The stunningly beautiful Dragon Bowl – as much artwork as it is food!
    (click for full quality)

    Now for the meat and potatoes (or in this case, the fish and rice): what dishes stand out to me? As I previously said, I haven’t found anything that I dislike on the menu; just dishes that I like more than others. I enjoy changing up my order and trying new things, but there are some items that I keep going back to time and time again. Here are some of my absolute favourites:

    Appetisers:

    • Japanese Crab Rangoon
      • Expecting those Chinese-style fried wontons filled with cream cheese? Think again. This amazing “roll” has spicy pulled crab and cream cheese wrapped in soy paper (Mamenori) and rice. It’s deep-fried and served with eel sauce. NOT to be missed!
    • Tuna Tataki
      • Perfectly seared (read: “nearly raw”) tuna served with shredded radish and a light sauce.

    Rolls:

    • Master Card Roll
      • Shrimp tempura and spicy tuna inside, topped with fresh tuna, salmon, and slices of mango (see the photo above).
    • Sweet Ogre Roll
      • One of my original favourites, this roll has shrimp tempura and cucumber inside. On top, there’s seared tuna, Sriracha, a little mayo, crunch, and masago.
    • Missouri Thunder Roll
    • Derby Roll
      • Spicy crab and avocado (I swap that for cucumber). Topped with eight beautifully-grilled shrimp.
    • Poison Spider Roll
      • HUGE, double-stuffed roll with a whole deep fried soft-shell crab and cucumber. On top, a bunch of spicy pulled crab, masago, crunch, and eel sauce.

    Other:

    • Tai Nigiri
      • Simple Nigiri of Red Snapper
    • Hamachi Nigiri
      • Simple Nigiri of Yellowtail
    • Sushi sampler
      • 5 pieces of various Nigiri (raw fish on rice with a little wasabi)

    If your mouth isn’t watering by now, then you must not care all that much for sushi (or Pavlov was sorely misguided 🙂 ). I hope that you try some of the amazing food that I mentioned above, but more importantly, I hope that you check out J Sushi and find the dishes that speak to you personally!

    Cheers,
    Zach

    Important!

    The photographs in this post were taken by me. If you would like to use them elsewhere, please just give credit to Nathan Zachary and link back to my blog. Thanks!

    June 27, 2017
    Alice Ferrazzi a.k.a. alicef (homepage, bugs)
    Open Source Summit Japan-2017 (June 27, 2017, 13:57 UTC)

    Open Source Summit Japan 2017 summary

    OSS Japan 2017 was a really great experience.

    I sent my paper proposal and waited for a reply; some weeks later I got an
    invitation to participate in the Kernel Keynote.
    I thought that participating in the Kernel Keynote as a mentor and giving a
    presentation was a good way to talk about the Gentoo Kernel Project and how
    to contribute to the Linux Kernel and the Gentoo Kernel Project.
    My paper also got accepted, so I could join OSS Japan 2017 as a speaker.
    It was three really nice days.

    Presentation:

    Fast Releasing and Testing of Gentoo Kernel Packages and Future Plans of the Gentoo Kernel Project

    My talk was mainly about the Gentoo Kernel related projects, past and future,
    specifically about the Gentoo Kernel continuous integration system we are creating:
    https://github.com/gentoo/Gentoo_kernelCI

    Why it is needed:

    • We need some way of checking the linux-patches commits automatically; it can also check pre-commit by pushing to a sandbox branch
    • Checking the patch signatures
    • Checking the ebuilds committed to https://github.com/gentoo/gentoo/commits/master/sys-kernel
    • Checking the kernel eclass commits
    • Checking the pull requests to sys-kernel/*
    • Using QEMU to test correct execution of the kernel vmlinux

    For any issues or contributions, feel free to send them here:
    https://github.com/gentoo/Gentoo_kernelCI

    To see Gentoo Kernel CI in action:
    http://kernel1.amd64.dev.gentoo.org:8010

    slides:
    http://schd.ws/hosted_files/ossjapan2017/39/Gentoo%20Kernel%20recent%20and%20Future%20project.pdf

    Open Source Summit Japan 2017
    Keynote: Linux Kernel Panel - Moderated by Alice Ferrazzi, Gentoo Kernel Project Leader

    The keynote was with:
    Greg Kroah-Hartman - Fellow, Linux Foundation
    Steven Rostedt - VMware
    Dan Williams - Intel Open Source Technology Center
    Alice Ferrazzi - Gentoo Kernel Project Leader, Gentoo

    One interesting part was about how to contribute to the Linux Kernel.
    After some information about Linux Kernel contribution numbers, the talk
    moved on to how to contribute to the Linux Kernel.
    Contributing to the Linux Kernel requires some understanding of C and
    running tests in the Linux Kernel,
    like fuego, kselftest, coccinelle, and many others.
    There was also a good talk from Steven Rostedt about working with the Real-Time patch.

    Who can find the Gentoo logo in this image?

    June 26, 2017
    Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

    With the release of Lab::Measurement 3.550 we've switched to Dist::Zilla as maintenance tool. If you're not involved in hacking Lab::Measurement, you should essentially not notice this change. However, for the authors of the package, Dist::Zilla makes it much easier to keep track of dependencies, prepare new releases, and eventually also improve and unify the documentation... At the side we've also fixed Issue 4 and Lab::Measurement should now work out of the box with recent Gnuplot on Windows again.
