Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.
Last month I wrote a post noting who makes use of semantic data on the web, pointing out in particular that Facebook, Google, Readability and Flattr all use different ways to provide context to content: OpenGraph, Schema.org, hNews and their own version of microformats, respectively.
Well, NewsBlur – which, even though I criticized its HTTP implementation, is still my best suggestion for a Google Reader replacement, if only because it’s open source even though it’s a premium service – seems to have come up with its own way to get semantic data.
The FAQ for publishers states that you can use one of a number of possible selectors to give NewsBlur an idea of how your content is structured — completely ignoring the fact that schema.org already encodes all that structure, and that it would be relatively easy to get the data explicitly. Even better, since NewsBlur has a way to show public comments within the NewsBlur interface, it could display the comments on the posts themselves, as they are also tagged and structured with the same ontology. I’ve opened an idea about it — hopefully somebody, if not the author, will feel like implementing this.
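To give an idea of how approachable that structured data is, here is a minimal (and deliberately naive) Python sketch that pulls Schema.org microdata properties out of a page using nothing but the standard library — a real consumer would want a full microdata parser, and the sample HTML is mine, not NewsBlur’s:

```python
from html.parser import HTMLParser

class ItempropExtractor(HTMLParser):
    """Collect Schema.org itemprop/content pairs from <meta> tags (naive)."""
    def __init__(self):
        super().__init__()
        self.props = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Only the simple case: properties expressed as content attributes.
        if "itemprop" in attrs and "content" in attrs:
            self.props[attrs["itemprop"]] = attrs["content"]

html = '''<article itemscope itemtype="http://schema.org/BlogPosting">
  <meta itemprop="headline" content="Semantic data for the web">
  <meta itemprop="datePublished" content="2013-03-14">
</article>'''

parser = ItempropExtractor()
parser.feed(html)
print(parser.props["headline"])  # -> Semantic data for the web
```

A service could map these properties straight onto its own article model instead of asking publishers for CSS selectors.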
But this is by no means limited to NewsBlur! While Readability added a special case for my blog so that it actually gets the right data out of it, their content guide still only describes support for the hNews format, even though Schema.org provides all the same data and more. And Flattr, well, still does not seem to care about getting data via semantic information — the best match would be support for the link relation in feeds that can be autodiscovered, but then I don’t really have an idea of where Flattr would find the metadata to create the “thing” on their side.
Please, all you guys who work on services — can we all get behind the same ontology, so that we don’t have to add the same information four times over to our pages, increasing their size for no advantage? Please!
Yes, there goes another post writing about my flashed Kindle Fire. If you’re bored just skip it.
When I had Amazon’s operating system I tried quite a number of games, mostly “Free apps of the day” from Amazon’s appstore, or a few free (ad-supported) games — though I did buy Rovio’s Amazing Alex as I liked the demo quite a bit. The only game that was really unplayable on the device was Jetpack Joyride (which is free). Even the Google Play version, with CyanogenMod, stutters enough that I don’t want to play it there, while on the other hand it works perfectly fine on my iPad and iPod Touch.
Since I haven’t even tried installing the Amazon App Store after flashing CyanogenMod on the device, I haven’t played Amazing Alex in a long time. On the other hand, I played Fieldrunners HD (link goes to Amazon), which I bought on Google Play instead, and had played on the Desire HD before. This worked like a charm (and if you like tower defense games, this is a terrific one — give it a try!).
The first games I bought on the newly flashed Kindle Fire were Eve of Genesis and Dark Gate (the latter link goes to Google Play), thanks to Caster’s suggestion. These are classic Japanese RPGs, likely remade from older 8- and 16-bit systems for Android and iOS — exactly what I like for the few moments I spend playing on it. They play quite nicely, even if they sometimes stutter as well.
But the problems start with the most recent (at the time of writing) Humble Bundle with Android 5, which I bought in the hope of playing Dungeon Defenders on the tablet at least, since my Dell laptop does not run it smoothly on Windows, and my Zenbook has an HD4000 “videocard” with a bug that, as far as I can tell, has not been fixed yet. Ryan would know better.
Unfortunately, trying to get Dungeon Defenders to run on the tablet is a bad idea: in particular, the moment you have to load the input method to type your name, it crashes completely. The other games in the bundle are no better. Splice crashes just after loading, for instance, and so did Solar 2. While Crayon Physics works, it will complain that it doesn’t have enough memory if even a single other application is running — and it’s probably correct about that.
Among the games that work, Crayon Physics is definitely worth it — I’m going to try Sword & Sworcery EP and see if that one works as well. Dynamite Jack is not my cup of tea but works great (and you can tell it was well designed and written by how much faster it starts up than most apps).
Of course these are only some examples, but they show two main problems: the first is that it really is necessary to put requirements on software, and to spare as much memory as possible without making the application unusable, if you want to be compatible; the other is that if you want to create a gateway app, like Humble Bundle did, you need to check the requirements before allowing the user to install the games. In this case the tablet is obviously not supported, as I flashed an experimental, unofficial ROM myself, but I’m pretty sure that most of the Chinese tablets I’ll find at the local Mediaworld (the Italian brand of Mediamarkt) will have even less memory than the Fire.
Oh well, hopefully I’ll soon be able to play these games on a real gaming PC, be it with Linux or Windows, thanks to Steam, and then it won’t matter that the Fire is not that powerful.
As we get a growing number of SELinux users within Gentoo Hardened and because the SELinux usage at the firm I work at is most likely going to grow as well, I decided to join the bunch of documents on SELinux that are “out there” and start a series of my own. After all, too much documentation probably doesn’t hurt, and SELinux definitely deserves a lot of documentation.
I decided to use the Gentoo Wiki for this endeavour instead of a GuideXML approach (which is the format used for Gentoo documentation on the main site). The set of tutorials that I have already written can be found under the SELinux : Gentoo Hardened SELinux Tutorials location. Although of course meant to support Gentoo Hardened SELinux users, I’m hoping to keep the initial set of tutorial articles deliberately distribution-independent so I can refer to them at work as well.
For now (this is a week’s work, so don’t expect the number of tutorials to double in the next few days) I have written about the security context of a process, how SELinux controls file and directory accesses, where to find SELinux permission denial details, controlling file contexts yourself, and how a process gets into a certain context.
I hope I can keep the articles in good shape, with a gradual step-up in complexity. That does mean that most articles are not complete (for instance, when talking about domain transitions, I don’t talk about constraints that might prohibit them, or about the role and type mismatches (invalid context) that you might get, etc.); those details will follow in later articles. Hopefully that allows users to learn step by step.
At the end of each tutorial, you will find a “What you need to remember” section. This is a very short overview of what was said in the tutorial and what you will need to know in future articles. If you have read a tutorial article before, this section might be enough to refresh your memory — no need to reread the entire article.
Consider it an attempt at a tl;dr for articles ;-) Enjoy your reading, and if you have any remarks, don’t hesitate to contribute on the wiki or talk through the “Talk” pages.
When I’ve wanted to play in some new areas lately, it’s been a real frustration because Gentoo hasn’t had a complete set of packages ready in any of them. I feel like these are some opportunities for Gentoo to be awesome and gain access to new sets of users (or at least avoid chasing away existing users who want better tools):
Data science. Package Hadoop. Package streaming options like Storm. How about related tools like Flume? RabbitMQ is in Gentoo, though. I’ve heard anecdotally that a well-optimized Hadoop-on-Gentoo installation showed double-digit performance increases over the usual Hadoop distributions (i.e., not Linux distributions, but companies specializing in providing Hadoop solutions). Just heard from Tim Harder (radhermit) that he’s got some packages in progress for a lot of this, which is great news.
DevOps. This is an area where Gentoo historically did pretty well, in part because our own infrastructure team and the group at the Open Source Lab have run tools like CFEngine and Puppet. But we’re lagging behind the times. We don’t have Jenkins or Travis. Seriously? Although we’ve got Vagrant packaged, for example, we don’t have Veewee. We could be integrating the creation of Vagrant boxes into our release-engineering process.
Cloud. Public cloud and on-premise IaaS/PaaS. How about IaaS: OpenStack, CloudStack, Eucalyptus, or OpenNebula? Not there, although some work is happening for OpenStack according to Matthew Thode (prometheanfire). How about a PaaS like Cloud Foundry or OpenShift? Nope. None of the Netflix open-source tools are there. On the public side, things are a bit better — we’ve got lots of AWS tools packaged, even stretching to things like Boto. We could be integrating the creation of AWS images into our release engineering to ensure AWS users always have a recent, official Gentoo image.
Android development. Gentoo is perfect as a development environment. We should be pushing it hard for mobile development, especially Android given its Linux base. There are a couple of halfhearted wiki pages, but that does not an effort make. If the SDKs and related packages are there, the docs need to be there too.
Where does Gentoo shine? As a platform for developers, as a platform for flexibility, as a platform to eke every last drop of performance out of a system. All of the above use cases are relevant to at least one of those areas.
I’m writing this post because I would love it if anyone else who wants to help Gentoo be more awesome would chip in with packaging in these specific areas. Let me know!
Update: Michael Stahnke suggested I point to some resources on Gentoo packaging, for anyone interested, so take a look at the Gentoo Development Guide. The Developer Handbook contains some further details on policy as well as info on how to get commit access by becoming a Gentoo developer.
So today’s frenzy is all about Google’s dismissal of the Reader service. While I’m also upset about that, I’m afraid I cannot really get into discussing that at this point. On the other hand, I can talk once again of my ModSecurity ruleset and in particular of the rules that validate HTTP robots all over the Internet.
One of the Google Reader alternatives that are being talked about is NewsBlur — which actually looks cool at first sight, but I (and most other people) don’t seem to be able to try it out yet because their service – I’m not going to call them servers as it seems they at least partially use AWS for hosting – fails to scale.
While I’m pretty sure they are receiving an exceptional amount of load right now, as everybody and their droid try to register to the service and import their whole Google Reader subscription list – which then needs to be fetched and added to the database; subscriptions to my blog’s feed went from 5 to 23 in a matter of hours! – there are a few things I can infer from the way it behaves that make me think somebody overlooked the need for a strong HTTP implementation.
First of all, what happened is that I got a report on Twitter that NewsBlur was getting a 403 when fetching my blog, which was obviously caused by my rules’ validation of the request. Looking at my logs, I found out that NewsBlur sends requests with three different User-Agents, which suggests they are implemented by three different codepaths altogether:
User-Agent: NewsBlur Feed Fetcher - 5 subscribers - http://www.newsblur.com (Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.2.3 (KHTML, like Gecko) Version/5.2)
User-Agent: NewsBlur Page Fetcher (5 subscribers) - http://www.newsblur.com (Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_1) AppleWebKit/534.48.3 (KHTML, like Gecko) Version/5.1 Safari/534.48.3)
User-Agent: NewsBlur Favicon Fetcher - http://www.newsblur.com
The third is the most conspicuous string, because it’s very minimal and does not follow the average format, using a dash as separator instead of adding the URL in parentheses next to the fetcher name (and version — more on that later).
The other two strings show that they have been taken from the string reported by Safari on OS X — but, interestingly enough, from two different Safari versions, and one of the two has actually been stripped down as well. This is really silly. While I can understand that they might want to look like Safari when fetching a page to display – mostly because there are bad hacks like PageSpeed that serve different HTML to different browsers, messing up caching – I doubt that is warranted for feeds; and even fetching the Safari-targeted HTML might be a bad idea if it’s then displayed by the user in a different browser.
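Just to show how mechanical telling the three formats apart is, a quick sketch — the regular expressions here are mine, reverse-engineered from the strings above, not anything NewsBlur publishes:

```python
import re

# Hypothetical patterns matching the three observed User-Agent formats.
FEED_RE = re.compile(r"^NewsBlur Feed Fetcher - (\d+) subscribers? - (\S+)")
PAGE_RE = re.compile(r"^NewsBlur Page Fetcher \((\d+) subscribers?\) - (\S+)")
FAVICON_RE = re.compile(r"^NewsBlur Favicon Fetcher - (\S+)")

def classify(ua):
    """Return (fetcher, subscriber_count) or None for non-NewsBlur agents."""
    if m := FEED_RE.match(ua):
        return ("feed", int(m.group(1)))
    if m := PAGE_RE.match(ua):
        return ("page", int(m.group(1)))
    if FAVICON_RE.match(ua):
        return ("favicon", None)
    return None

print(classify("NewsBlur Feed Fetcher - 5 subscribers - http://www.newsblur.com (Mozilla/5.0 ...)"))
# -> ('feed', 5)
```

The fact that three regexes are needed for one service is itself the tell that three codepaths are involved.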
The code that fetches feeds and the code that fetches pages are likely quite different, as can be seen from the full requests. From the feed fetcher:
GET /articles.atom HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: NewsBlur Feed Fetcher - 5 subscribers - http://www.newsblur.com (Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.2.3 (KHTML, like Gecko) Version/5.2)
If-Modified-Since: Tue, 01 Nov 2011 23:36:35 GMT
This is fairly sophisticated fetching code, as it not only properly supports compressed responses (Accept-Encoding header) but also uses the If-None-Match and If-Modified-Since headers to avoid re-fetching unmodified content. The fact that it points to November 1st of two years ago is likely because, since then, my ModSecurity ruleset has refused to speak with this fetcher on account of the fake User-Agent string. It also includes a proper Accept header that lists the feed types they prefer over generic XML and other formats.
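To make the conditional-GET dance concrete, here is a minimal sketch of what a well-behaved fetcher does — the function names are mine, not NewsBlur’s: store the validators from the last response, echo them back, and skip the refetch on a 304.

```python
def conditional_headers(etag=None, last_modified=None):
    """Build the validator headers for a conditional GET.

    A fetcher remembers the ETag and Last-Modified values from the
    previous response and sends them back on the next request.
    """
    headers = {"Accept-Encoding": "gzip, deflate"}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

def should_refetch(status):
    """A 304 Not Modified means the cached copy is still current."""
    return status != 304

hdrs = conditional_headers(last_modified="Tue, 01 Nov 2011 23:36:35 GMT")
print(hdrs["If-Modified-Since"])  # -> Tue, 01 Nov 2011 23:36:35 GMT
print(should_refetch(304))        # -> False
```

For a service polling thousands of feeds, those 304s are the difference between a polite crawler and a bandwidth hog.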
The A-Im header is not fake or a bug; it’s actually part of RFC 3229 (Delta encoding in HTTP) and stands for Accept-Instance-Manipulation. I had never seen it before, but a quick search turned it up, even though the standardized spelling would be A-IM. Unfortunately, the aforementioned RFC does not define the “feed” manipulator, even though it seems to be used in the wild, and I couldn’t find proper formal documentation of how it should work. The theory, from what I can tell, is that the blog engine could use the If-Modified-Since header to produce on the spot a custom feed for the fetcher, including only the entries that have been modified since that date. Cool idea; too bad it lacks a standard, as I said.
The request coming in from the page fetcher is drastically different:
GET / HTTP/1.1
Accept-Encoding: gzip, deflate, compress
User-Agent: NewsBlur Page Fetcher (5 subscribers) - http://www.newsblur.com (Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_1) AppleWebKit/534.48.3 (KHTML, like Gecko) Version/5.1 Safari/534.48.3)
So we can tell two things from the comparison: this code is older (an earlier version of Safari is being impersonated), and it has not received the same care as the feed fetcher (which at least dropped the Safari identifier itself). It’s more than likely that, if libraries are used to send the requests, a completely different one is used here, as this request declares support for the compress encoding, which the feed fetcher does not (and which, as far as I can tell, is never actually used). It is also much less choosy about the formats to receive, as it accepts whatever you want to give it.
For the Italian readers: yes, I intentionally picked the word choosy. While I find Fornero as much of an idiot as the next guy, I grew tired of copy-paste statuses on Facebook and comments that she should have said picky. Know your English instead of complaining about idiocies.
The lack of If-Modified-Since here does not really mean much, because it’s also possible that they were never able to fetch the page before, having introduced the feature later (even though the code is likely older). But the Content-Length header sticks out like a sore thumb, and I would expect it to have been put there by whatever HTTP access library they’re using.
The favicon fetcher is the most naïve, and possibly the code that needs the most cleanup:
GET /favicon.ico HTTP/1.1
User-Agent: NewsBlur Favicon Fetcher - http://www.newsblur.com
Here we start with borderline protocol violations, by not providing an Accept header — especially facepalm-worthy considering that this is exactly where a static list of MIME types would be most useful, to restrict the image formats that will be handled properly! But what trips my rules is that the missing Accept-Encoding is not suitable for a bot at all! Since it does not declare support for any compressed response, the code will now respond with a 406 Not Acceptable status code, instead of providing the icon.
I can understand that a compressed icon is more than likely not useful — indeed, most images should not be compressed at all when sent over HTTP — but why should you explicitly refuse it? Especially since the other two fetchers properly support sophisticated HTTP?
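For the curious, the gist of the validation can be sketched like this — a rough Python approximation of the kind of checks involved, not the actual ModSecurity rules from my ruleset:

```python
def validate_bot_request(headers):
    """Crude sketch of header validation for crawler requests.

    A well-behaved bot should say what formats it accepts, and should
    accept compressed responses; return an HTTP status accordingly.
    These particular checks are an approximation I made up for
    illustration, not the real ruleset's logic.
    """
    accept = headers.get("Accept")
    encodings = headers.get("Accept-Encoding", "")
    if accept is None:
        return 406  # no Accept header at all: borderline protocol violation
    if "gzip" not in encodings and "deflate" not in encodings:
        return 406  # no compression support: not acceptable for a bot
    return 200

print(validate_bot_request({"Accept": "image/*",
                            "Accept-Encoding": "gzip, deflate"}))  # -> 200
print(validate_bot_request({}))                                    # -> 406
```

The favicon fetcher above fails both checks, which is why it gets the 406 rather than the icon.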
All in all, it seems like some of the code in NewsBlur has been bolted on after the fact, with differing levels of care. It might not be the best time for them to look at the HTTP implementation right now, but I would still suggest it. A single pipelined connection for the three resources they need – instead of using Connection: close – could easily reduce the number of connections to blogs, and that would be very welcome to all the bloggers out there. And using the same HTTP code throughout would make it easier for people like me to handle NewsBlur properly.
I would also like a way to validate that a given request comes from NewsBlur — like we do with GoogleBot and other crawlers. Unfortunately this is not really possible, because they use multiple servers, both on standard hosting and AWS, both on IPv4 and (possibly, at one time) IPv6, so using FcRDNS is not an option.
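For reference, this is roughly how FcRDNS validation works for GoogleBot — the reverse name must sit under an allowed suffix, and resolving it forward must give back the original IP. A sketch with injectable resolver functions (the function and its parameters are my own, so it can be demonstrated without touching DNS); it is exactly this round trip that multi-homed setups like NewsBlur’s make impossible:

```python
import socket

def fcrdns_matches(ip, allowed_suffixes,
                   reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                   forward=socket.gethostbyname):
    """Forward-confirmed reverse DNS check (sketch).

    1. Reverse-resolve the IP to a hostname.
    2. Require the hostname to end in one of the allowed suffixes.
    3. Forward-resolve the hostname and require it to match the IP.
    """
    try:
        name = reverse(ip)
    except OSError:
        return False
    if not name.endswith(tuple(allowed_suffixes)):
        return False
    try:
        return forward(name) == ip
    except OSError:
        return False

# With fake resolvers standing in for real DNS lookups:
ok = fcrdns_matches("66.249.66.1", (".googlebot.com",),
                    reverse=lambda ip: "crawl-66-249-66-1.googlebot.com",
                    forward=lambda name: "66.249.66.1")
print(ok)  # -> True
```

A crawler spread across rented cloud addresses simply has no stable reverse zone to confirm against, which is the whole problem.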
Everybody probably already knows that openSUSE 12.3 is going to be released this Wednesday. I’m currently at the SUSE offices in Nuremberg, helping to polish the last bits and pieces for the upcoming release. But more importantly, as with every release, we need to celebrate it! And this time, thanks to lucky circumstances, I’ll be here for the Nuremberg release party!
The Nuremberg release party will take place on release day at Artefakt, in Nuremberg’s city centre, from 19:00 (local time, of course). It’s an open event, so everybody is welcome.
You can meet plenty of fellow Geekos there, and there will be some food and also openSUSE beer available (some charges may apply). Most of the openSUSE Team at SUSE (the former Boosters and Jos) will be there, and we hope to meet every openSUSE enthusiast, supporter or user from Nuremberg.
There will be a demo computer running 12.3 and hopefully even a public Google Hangout for people who want to join us remotely – follow the +openSUSE G+ page to see whether we manage it.
So see you in great numbers on Wednesday in Artefakt!
PS: If you expected an announcement for the Prague release party from me, don’t worry, I haven’t forgotten about it. We are planning it — expect an announcement soon and the party in a few weeks.
Even though it hasn’t been a year yet since I moved to KDE, after spending a long time with GNOME 2, XFCE and then Cinnamon, over the past month or so I have looked at how much non-KDE software I could ditch this time around.
The first software I ditched was Pidgin — while the default use of GnuTLS caused some trouble, KTP works quite decently. Okay, some features are not fully implemented, but basic chat works, and that’s enough for me — it’s not like I used much more than that in Pidgin either.
Unfortunately, when I decided yesterday to check whether it was possible to ditch Thunderbird for KMail, things didn’t turn out as nicely. Yes, the client has improved a truckload since what we had at KDE 3 time — but no, it hasn’t improved enough to make it usable for me.
The obvious zeroth problem is the dependencies: to install KMail you need to build (but don’t need to enable) the “semantic desktop” — that is, Nepomuk and content indexing. In particular it brings in Soprano and Virtuoso, which were among the least usable components when KDE4 launched (at least Strigi is gone with 4.10; we’ll see what the future brings). So after a night spent rebuilding part of the system to make sure the flags were enabled and the packages in place, today I could try KMail.
First problem — on first run it suggested importing data from Thunderbird — unfortunately it got completely stuck there, and after over half an hour it had gone nowhere. No logs, no diagnostics, just stuck. I decided to ignore it and create the account manually. When KMail tried to automatically find which mail servers to use, it failed badly – I guess it looked for some _imap._srv.flameeyes.eu record or something, which does not exist – even though Thunderbird can correctly guess that my mail servers are Google’s.
Second problem — the wizard does not make it easy to set up a new identity, which makes it tempting to add the accounts manually; but since there are three different entries you have to add (identity, sending account, receiving account), adding them in the wrong order makes you revisit the settings quite a few times. For the curious, the order is: sending, identity, receiving.
Third problem — KMail does not implement the special-folder extension defined in RFC 6154, which GMail makes good use of (it actually implements it both with the standard extension and with their own version). This means that KMail will store all messages locally (drafts, sent, trash, …) unless you set the folders up manually. Contrary to what somebody has told me, this means the extension is completely unimplemented, not just partially implemented. I’m not surprised it’s not implemented, by the way, given that the folders are declared in two different settings (the identity and the account).
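For reference, discovering those folders is not hard at all: with RFC 6154, the special-use attributes come back on a plain LIST response. A sketch of the parsing side — the sample responses mimic what GMail advertises, and the regex and function are mine, not from any mail client:

```python
import re

# Sample LIST responses in the style of a server supporting RFC 6154;
# these are illustrative data, not captured from a live session.
LIST_RESPONSES = [
    b'(\\HasNoChildren) "/" "INBOX"',
    b'(\\HasNoChildren \\Sent) "/" "[Gmail]/Sent Mail"',
    b'(\\HasNoChildren \\Trash) "/" "[Gmail]/Trash"',
    b'(\\HasNoChildren \\Drafts) "/" "[Gmail]/Drafts"',
]

LIST_RE = re.compile(rb'\((?P<flags>[^)]*)\) "(?P<sep>[^"]*)" "(?P<name>[^"]*)"')

SPECIAL_USE = (b"\\Sent", b"\\Trash", b"\\Drafts",
               b"\\Junk", b"\\Archive", b"\\All", b"\\Flagged")

def special_folders(responses):
    """Map RFC 6154 special-use attributes to the folders carrying them."""
    special = {}
    for line in responses:
        m = LIST_RE.match(line)
        if not m:
            continue
        for flag in m.group("flags").split():
            if flag in SPECIAL_USE:
                special[flag.decode()] = m.group("name").decode()
    return special

print(special_folders(LIST_RESPONSES)["\\Sent"])  # -> [Gmail]/Sent Mail
```

A client that did this once at account setup would never need the user to point out the Sent or Trash folders by hand.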
Fourth problem — speaking of GMail, there is no direct way to handle the “archive” action, which is almost a necessity if you want to use the service. While this workflow started with GMail, and was at first almost exclusive to that particular service, nowadays many other services, including standalone software such as Kerio, provide the same workflow; the folder used for archiving is, once again, provided via the special-use notes discussed earlier. Even if the developers do not use GMail themselves, it feels wrong that this is not implemented.
Fifth problem — while at it, let’s talk a moment about the IDLE command implementation (one of the extensions needed for push IMAP). As Wikipedia says, KMail supports it since version 4.7 — unfortunately, it does not use it in every case, but only if you disable the “check every X minutes” option; if that is enabled, the IDLE command is not used. Don’t tell me it’s obvious: even though it makes sense from some points of view, I wasn’t the only one tricked by it. Especially since I first read that setting as “disable if you only want manual checks for new mail” — Thunderbird indeed uses IDLE even if you set a scheduled check every few minutes.
Sixth problem — there is no whitelist for remote content in HTML emails. GMail — on the web and in the Android and iOS clients — supports a complete whitelist, separate from everything else. Thunderbird supports a whitelist by adding the sender to the contacts list (which is honestly bothersome for mailing lists, as in my case). As far as I can tell, there is no way to have such a whitelist in KMail: you either have the protection enabled, or you have it disabled.
The last problem is the trickiest, and it’s hard to tell if it’s a problem at all. When I went to configure the OpenPGP key to use, it wouldn’t show me anything to select. I spent the good part of an hour trying to get it to select my key, and it failed badly. Once I installed Kleopatra it worked just fine; on the other hand, Pesa and other devs pointed out that it works for them without Kleopatra installed.
So, what is the resolution at this point, for me? Well, I guess I’ll open a few feature requests on KDE’s Bugzilla, if I feel like it, and then I can hope for version 4.11 or 4.12 to have something more usable than Thunderbird. As it is, that’s not the case.
There are a bunch of minor nuisances and other things I would have to get used to, such as the (in my view, too big) folder icons (even if you change the font size, the icon size does not change), and the placement of the buttons, which required getting used to on Thunderbird as well. But these are only minor annoyances.
What I really need for KMail to become my client is a tighter integration with GMail. It might not suit the ideals as much as one might prefer, but it is one of the most used email providers in the world nowadays, and it would go a long way for user friendliness to work nicely with it instead of making it difficult.
A few days ago I talked about Puffin Browser, with the intent of discussing in more detail the situation with browsers on the Kindle Fire tablet I’m currently using.
You might remember that at the end of last year I decided to replace Amazon’s firmware with a CyanogenMod ROM so as to get something useful out of the device. Besides the lack of access to Google Play, one of the problems I had with Amazon’s original firmware was that the browser it comes with is flaky to the point of uselessness.
While Amazon’s AppStore does include many of the apps I needed or wanted – including SwiftKey Tablet, which is my favourite keyboard for Android – they made it impossible to install them on their own firmware. I’ve been tempted to install their AppStore on the flashed Kindle Fire and see if they would let me install the apps then — it would be quite a laugh.
Unfortunately, while the CM10 firmware actually lets me make very good use of the device — much more than I could ever have achieved with the original firmware — the browsing experience still sucks big time. I currently have a number of browsers installed: Android’s stock browser – with its non-compliant requests – Google Chrome, Firefox, Opera and the aforementioned Puffin. There is no real winner in the lot.
The Android browser has a terrible network implementation and takes way too much time requesting and rendering pages. Google Chrome is terrible on the whole, probably because the Fire is too underpowered to run it properly, which makes it totally useless as an app. I only keep it around for testing purposes, until I get a better Android tablet.
Firefox has the best navigation support, but every time I tap a field and SwiftKey has to be brought up, it takes a full minute. Whether this is a bug in SwiftKey or in Firefox, I have no idea. If someone has an idea who to complain to about it, I’d love to report it and see it fixed.
The best option, besides Firefox, is Opera. While slightly slower than Firefox at rendering, it does not suffer from the SwiftKey bug. I’m honestly not sure at this point whether the version of Opera I’m using renders with their own Presto engine or with WebKit, which they announced they are moving to — if it’s the latter, it’s going to be a loss for me, I guess, since the two surely WebKit-based browsers are not behaving nicely for me here.
Now, from what I said about Puffin, you’d expect it to behave properly enough. Unfortunately that is not the case. I don’t know if it’s a problem with my local bandwidth being too limited, but in general the responsiveness is worse than Opera’s, though not as bad as Chrome’s. The end result is that even the server-side rendering does not make it usable.
More reviews of software running on the Fire will follow, I suppose, unless I decide to get a newer tablet in the next weeks.
Last weekend (2–3 March 2013) we had a lovely conference here in Prague. People could attend quite a few very cool talks and even play an OpenArena tournament :-) Anyway, that isn’t so interesting for Gentoo users. The cool part for us is the Gentoo track that I tried to assemble there, and which I will try to describe here.
Setup of the venue
This was an easy task, as I borrowed a computer room in the dormitories’ basement which was large enough to hold around 30 students. I just carried in my laptop and checked that the beamer worked. I made sure the chairs were not falling apart and replaced the broken ones, verified the wifi worked (which it did not, but the admins got it working just in time), and finally brought some drinks over from the main track so we would not dry out.
The classroom was in a slightly different area than the main track, so I put up some arrows for people to find the place. But when people started arriving and calling me to ask where the hell the place was, I figured out something was wrong. The signage was then adjusted, but it still shows that we should either not split off from the main tracks, or make sure there are HUGE and clear arrows pointing in the direction where people can find us.
During the day there were only three talks: two held by me, and one, not on the plan, done by Theo.
I was supposed to start this talk at 10:00, but given the issue with the arrows, people only showed up around 10:20, so I had to cut some information and live examples.
Anyway, I hope it was an interesting hardened overview, and at least Petr Krcmar wrote down lots of stuff, so maybe we will see some articles about it in the Czech media (something like “How I failed to install hardened Gentoo” :P).
Gentoo global stuff
This was more of a discussion about features than a talk. The users pointed out what they would like to see happening in Gentoo and what their largest issues have been lately.
Among the issues, people pointed out the broken udev update which rendered some boxes unbootable (yes, there was a message, but those are quite easy to overlook; I forgot to act on it on one machine myself). Some suggestions were for genkernel to trigger a rebuild of the kernel right away in the post stage for users, with the newly required options enabled. This sounds like quite a nice idea: since you are using genkernel, you probably want your kernel automatically adjusted and updated for the cases where apps require additional options. As I am not involved with genkernel, I told the users to open a bug about this.
The second big thing we talked about was binary packages. The idea was to have some tinderbox producing generic binary packages for the most common USE-flag variants. You could then specify -K and portage would use the binary form, or compile locally if none is provided. Most of the work here would need to happen on the portage side, because we would have to somehow distinguish multiple versions of the same package with different USE flags enabled.
Theo did an awesome job explaining how infra uses Puppet and which services and servers we have. This was an on-demand talk that the people on-site wanted.
Hacking — aka stuff that we somehow did
Martin “plusky” Pluskal (SU) went over our prehistoric bugs from 2005 and 2006 and created a list of CANTFIX ones which are no longer applicable, or which are new package requests with a dead upstream. I still have to close them, or give him editbugs privileges (that sounds more like it, as I am lazy as hell — or better yet, make him a developer :P).
Ondrej Sukup (ACR), attending over Hangout, worked on python-r1 porting, and I committed his work to CVS.
Cyril “metan” Hrubis (SU) worked on a crossdev bug — some AVR magic I don’t want to hear much about — but he seems optimistic that he might finish the work in the near future.
David Heidelberger first worked on fixing bugs with his laptop, and then helped Martin with the bug wrangling.
Jan “yac” Matejka (SU) finished his quizzes and thus got his shiny bug, and is now in the lovely hands of our recruiters to become our newest addition to the team.
Michal “miska” Hrusecky (SU) worked on updating the osc tools to match the latest we have in the openSUSE Build Service, and he plans to commit them to CVS soonish.
Pavel “pavlix” Simerda (RH), the guy responsible for the latest NetworkManager bugs, expressed his intention to become a developer, and I agreed with him.
Tampakrap (SU) worked on breaking one laptop with fresh install of Gentoo, which I then picked up and finished with some nice KDE love :-)
Amy Winston helped me a lot with the venue setup, and also kept Theo and me busy breaking her laptop, which I hope she is still happily using without wanting to kill us; other than that, she focused on our sweet Bugzilla and bug wrangling. She does not seem willing to finish her quizzes to become a full developer, so we will have to work hard on that in the future :-)
And lastly, I (SU) helped users with issues they had on their local machines, and explained how to avoid them or report them directly to Bugzilla with the relevant information, and so on.
In case you wonder: SU = SUSE; RH = Red Hat; ACR = Armed Forces CR.
For future events we have to keep in mind that we need to prepare better, and have small bug lists ready rather than wide-ranging ones where people spend more time picking the ideal task than actually working on one :-)
The lunch and the afterparty took place in a nice pub nearby with decent food and plenty of beer, so everyone was happy. The only problem was that the food took some waiting, as suddenly there were 40 people in the pub (I still think this could have been prepared in advance, so that they offered a limited subset of dishes really fast, letting you choose between waiting a bit or picking something quick).
During the night one of the Gentoo attendees got quite drunk and had to be delivered home by the other organizers, as I had to leave a bit early (being up from 5 am is not something I fancy).
The big problem was figuring out where to drop him off, because he was not able to talk, and his ID listed a residence in a different city. So next time you go to a Linux event where you don’t know many people, put a paper with your address in your pocket. It is super convenient, and we won’t have to bother your parents at 1 am to find out what to do with their “sweet” child.
I would like to say a huge thanks to all attendees for making the event possible, and also apologize for everything I forgot to mention here.
This post is mostly trivial and useless, you can skip it. Seriously.
I was musing about something the other day: Typo allows you to consult the whole history of my blog over time through a complete archive, for the whole content as well as for tags and categories. These are numbered as “pages” in the archives. But they are not permanent.
Indeed, the homepage you see is counted as “page 1” — so while the number of pages grows further and further, the content always moves. A post that is on page 12 today will not be there in a couple of months. Sure, it’s still possible to find it in the monthly archives (once the month has completed), but it’s far from obvious.
This page numbering is common on most systems where you want the most recent, or most relevant, content first — search engines and, indeed, most news sites and blogs. But while the bottom-up order of the posts within a single page makes sense to me, the numbering still doesn’t.
What I would like would be for pages to start from page 1 (the oldest posts) and continue onward, ten by ten, until reaching page 250 (which is pretty near at this point, for this blog) for post number 2501 — unfortunately this breaks badly, as the homepage might then contain a single article, if the homepage corresponded to the last page. So what is it that I would like?
Well, first of all, I would say that the homepage (as well as the landing pages for tags and categories) is “page 0”, and page 0 is out of the order of the archives altogether. Page 0 is bottom-up, just like we have now, and has a fixed number of entries. Page 1 holds the oldest ten (or fewer) posts, top-down (in ascending date order), and so forth.
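To make the proposed scheme concrete, here is a tiny sketch (a hypothetical helper, assuming ten posts per page as described above): post N always maps to the same page number, and the bottom-up homepage stays outside the numbering as "page 0".

```shell
# Fixed archive numbering: with per_page posts per page, post N always
# lands on page ((N - 1) / per_page) + 1, no matter how many posts
# come after it. The homepage ("page 0") is not part of this mapping.
per_page=10
page_of() { echo $(( ($1 - 1) / per_page + 1 )); }

page_of 1     # oldest post -> page 1
page_of 10    # still page 1
page_of 11    # page 2
page_of 2501  # page 251 -- and it stays there forever
```

The point of the integer division is exactly the permanence the post asks for: adding post 2502 changes nothing about where posts 1 through 2501 live.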
What does this achieve? Well, first of all, a given post will always be on a given page. There is no more sliding around of old posts, which makes pages actually useful as links; it also lets search engines return meaningful results for those pages, instead of an ever-moving target — even though I would say they should probably check the semantic data when reading the archive pages.
At first I thought this would reduce cache use as well, since stopping the sliding means the content of a given page no longer changes at every single post… unfortunately, at most it can help cache fragments, because adding more pages means there will be a different “last page” number (or link) at the bottom of each page. Of course it would be possible to use a /page/last link and only count the pages immediately before and after the current one.
Oh well, I guess this adds up to the list of changes I’d like to make to Typo (but I can’t, due to time, right now).
Another month has passed, so time for a new progress meeting…
GCC 4.7 has been unmasked, allowing a large set of users to test out the new GCC. It is also expected that GCC 4.8-rc1 will hit the tree next week. In the hardened-dev overlay, hardened support for x86, amd64 and arm has been added (SPEC updates), and the remaining architectures will be added by the end of the week.
Kernel and grSecurity/PaX
Kernel 3.7.5 had a security issue (a local root privilege escalation), so 3.7.5-r1, which held a fix for it, was stabilized quickly. However, other (non-security) problems have been reported, such as one with dovecot regarding the VSIZE memory size. These should be fixed in the 3.8 series, so those kernels are candidates for a faster stabilization. Faster stabilization is never fun, as it increases the likelihood that we miss other things, but it is needed when the vulnerability in the previous stable kernel is too severe.
Regarding XATTR_PAX, we are getting pretty close to the migration. The eclass is ready and will be announced for review on the appropriate mailing lists later this week. A small problem remains on Paludis systems: Paludis does not record NEEDED.ELF.2 (linkage) information, so it is hard to get all the linkage information on such a system. A different revdep-pax and migrate-pax toolset will be built that detects the necessary linkage information, albeit much more slowly than on a Portage-run system.
The 11th revision of the policies is now stable, and work is under way for the 12th revision, which will hit the tree soon. Some work is also under way for setools and policycoreutils (one due to a new release — setools — and the other due to a build failure when PAM is not set). Both packages will hit the hardened-dev overlay soon.
A new “edition” of the selinuxnode virtual image has been pushed to the mirror system, providing a SELinux-enabled (enforcing) Gentoo Hardened system with grSecurity and PaX, as well as IMA and EVM enabled.
The 13.0 profiles have been running fine for a while on a few of our developers’ systems. No changes have been needed (yet), so things are looking good.
The necessary userland utilities have been moved to the main tree. The documentation for IMA/EVM has been updated as well to reflect the current state of IMA/EVM within Gentoo Hardened. IMA, even with the custom policies, seems to be working well. EVM, on the other hand, has some issues, so you might need to run with EVM=fix for now. Debugging of this issue is under way.
Some of the user-oriented documentation (integrity and SELinux) has been moved to the Gentoo Wiki for easier user contributions and simplified management. Other documents will follow soon.
This is the third post in a series, as it happens — part 1 and part 2 are both available.
Let’s see how I’m currently set up — I’m still in Italy for less than 30 days; I have bank accounts in the US and in Italy with their associated cards, and I own four “mobile devices” — two tablets (an iPad and a Kindle Fire running CM10.1 so that it works), a cellphone running CM7 and an iPod Touch. The two iOS devices are associated with an American iTunes account (since that’s the only way I could buy and watch TV series in English), and thus get apps for the US region. The cellphone and the Kindle Fire are similarly associated with an account with a US billing address for a little while longer, but it seems the Play Store restrictions apply depending on the SIM currently in use in the cellphone. I then have one Italian and one US SIM that I can switch between — the latter does not even associate with the network, because there is no roaming coverage on that contract.
This turned out to be quite interesting, as the Starbucks application is not available with an Italian SIM, and my (Italian) bank’s application is not available with a US SIM. And this is what I complained about earlier in the series.
Now I’m getting ready to move to Dublin. Among the things I’m looking at, I’ve got to understand the way the buses work… the Dublin Bus website sports a badge on the homepage advertising a mobile application (an App) available on both Apple’s App Store and the Play Store. Unfortunately the latter (the one I would care about) is not compatible with any of my devices. A similar situation happened with a cab company app that a friend suggested to me. Luckily, it seems that getting a SIM in Ireland is quick and easy, so then I should have access to these two apps — probably losing access to some of the Italian apps I have installed.
Can somebody tell me why applications like these are limited to regions, when they are very useful for tourists, and for preparation? Sigh!
NFS elapsed time = 3765830.4643 seconds.
pretesting / nfs ratio was 0.00
Total factoring time = 3765830.6384 seconds
PRP78 = 106603488380168454820927220360012878679207958575989291522270608237193062808643
PRP78 = 102639592829741105772054196573991675900716567808038066803341933521790711307779
What does that mean?
The input number was conveniently chosen from the RSA challenge numbers and was the “world record” until 2003. Advances in algorithms, compilers and hardware have made it possible for me to redo that record attempt in about a month of walltime on a single machine (a 4-core AMD64).
Want to try yourself?
that's the "easiest" tool to manage. The dependencies are a bit fiddly, but it works well for up to ~512bit, maybe a bit more.
It depends on msieve, which is quite impressive, and gmp-ecm, which I find even more intriguing.
If you feel like more of a challenge:
This tool even supports multi-machine setups out of the box using ssh, but it's slightly intimidating and might not be obvious to figure out.
Also, for a “small” input in the 120-decimal-digit range it was about 25% slower than yafu — but it’s still impressive what these tools can do.
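If you just want a feel for integer factoring before installing any of the NFS tooling, GNU coreutils already ships a tiny `factor` tool — nowhere near record-sized inputs, of course, but instant on small composites:

```shell
# Trial factoring of a small semiprime with coreutils' factor.
factor 91    # prints "91: 7 13"
```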
While at my previous job I was given the task of finding a way to use a Flash-based application on an iPad (Android was not on anybody’s mind but mine). Among the applications I tried for that was Puffin Browser, which is available for both iOS and Android.
I haven’t written much about this before, because it was too related to work to talk about — we were trying to get in touch with them, as we had a few issues that needed to be addressed; but since that was about six months ago now, I guess it fell through and won’t be happening anyway. Nothing I discuss here is related to that job anyway.
So what is Puffin, and how does it relate to running a Flash application? Mostly, it’s a browser that follows the same idea I remember being used at least by the first Opera browser on the iPad, if I’m not mistaken: a server of theirs downloads and renders the page, which is then displayed on the device’s screen.
Unlike most other browsers I’ve seen, though, it also renders Flash in (near) real time, and proxies the taps as clicks. To make it nicer, it also provides a virtual mouse and keyboard, which allow you to perform operations like right clicks and drags. It’s not a bad result, but there are complications.
Mostly, the complications are for server admins — not even web developers, really just server admins. The problem is that CloudMosa, the company that develops and sells this application, while using a very limited pool of IP addresses to proxy the requests, does not provide forward-confirmed reverse DNS (FCrDNS) to ensure that what arrives declaring itself as Puffin actually is Puffin, and can be trusted.
You can imagine that this causes no small problem with my ruleset, especially in regard to the open-proxy handling. Unfortunately — and this was one of the things that caused the most problems at my previous position as well — they don’t really have a support system. They handle most feedback and discussion through their Facebook page. Which is to say, something very screwed up.
Okay, so last time I wrote about my personal status, I noted that I had something in the balance: a new job. Now that I’ve signed the contract, I can say that I do have a new job.
This means among other things that I’ll finally be leaving Italy. My new home is going to be Dublin, Ireland. At the time of writing I’m still fretting about stuff I need to finish in Italy, in particular digitizing as many documents as possible so that my mother can search through them easily, and I can reach them if needed, contacting my doctor for a whole blood panel, and the accountant to get all the taxes straightened up.
What does this mean for my Gentoo involvement? Probably quite a bit. My new job does not involve Gentoo, which means I won’t be maintaining it any longer on paid time like I used to before. You can also probably guess that with the stress of actually having a house to take care of, I’ll end up with much less time than I have now. Which means I’ll have to scale down my involvement considerably. My GSoC project might very well be the height of my involvement from now till the end of the year.
On the personal side of things, while I’m elated to leave Italy, especially with the current political climate, I’m also quite a bit scared. I know next to nobody (Enrico excluded) in Dublin, and I know very little of Irish traditions as well. I’ve spent the past week or so reading the Irish Times just to be able to catch a glimpse of what is being discussed up there, but I’m pretty sure that’s not going to be enough.
I’m scared also because this will be the first time I actually live alone and have to cater to everything by myself — even though, given the situation, it feels like I might be quite a lot luckier than most of my peers here in Clownland Italy. I have no idea what will actually sap away my time, although I’m pretty sure that if it turns out to be cleaning, I’ll just pay somebody to do it for me.
One question that I’ve been asked before, and to which I didn’t really have a good answer until now, is: should configure scripts fail when a dependency is enabled explicitly but can’t be found? This is the automagic dependency problem, but on the other branch.
With proper automatic dependencies, if the user does not explicitly request whether to enable something, it’s customary for the dependency to be checked for and, if found, for the feature connected to it to be enabled. When the user has no way to opt out of this (which is bad), we call it an automagic dependency. But what happens if the user has requested the feature and the dependency is not available?
Unfortunately, there is no standard for this, and I myself have used both the “fail if asked and not found” and “warn if asked and not found” approaches. But the recent trouble between ncurses and freetype made me think that it’s important to make a point of there being a correct way to deal with this.
Indeed what happens is that right now, I have no way to tell you all that the tinderbox has found every single failure caused by sys-libs/ncurses[tinfo] even after the whole build completed: it might well be that a particular package, unable to link to ncurses, decided to disable it altogether. The same goes for freetype. Checking for all of that would be nice, but I have honestly no way to do it.
So, to make sure the user really gets what they want, please always verify that you’re proceeding the way the user asked. This also ensures that, packaging-wise, there won’t be any difference when a dependency is updated or changed. In particular, with pkg-config, the kind of setup you should have is the following:
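A sketch of what such a check can look like, using a hypothetical "foo" pkg-config module: the option defaults to "check" (auto-detect, warn-and-disable on a miss), but an explicit --with-foo turns a miss into a hard error.

```autoconf
dnl Hypothetical module "foo"; the default "check" value distinguishes
dnl "the user said nothing" from "the user explicitly asked for it".
AC_ARG_WITH([foo],
  [AS_HELP_STRING([--with-foo], [support foo @<:@default=check@:>@])],
  [], [with_foo=check])

AS_IF([test "x$with_foo" != "xno"], [
  PKG_CHECK_MODULES([FOO], [foo >= 1.0],
    [with_foo=yes],
    [AS_IF([test "x$with_foo" = "xyes"],
       [AC_MSG_ERROR([foo support requested but foo.pc was not found])],
       [with_foo=no])])
])
```

The key point is the three-valued `with_foo`: only the explicit "yes" path reaches `AC_MSG_ERROR`, so auto-detection still degrades gracefully while an explicit request fails loudly.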
I’ll be discussing this and proposing this solution in the next update to Autotools Mythbuster (which is due anytime soon, including the usual eBook update for Kindle users). This would hopefully make sure that in the future, most configure scripts will follow this approach.
There's a lot of people who are very careful to never delete a single line from an e-mail they are replying to,
always quoting the complete history. There's also a lot of people who believe that it wastes time to eyeball such long,
useless texts. One of the fancy features introduced in this release of Trojitá,
a fast Qt IMAP e-mail client, is automatic quote collapsing. I won't show you an example of an annoying mail for obvious
reasons :), but the feature is useful even for e-mails which employ a reasonable quoting strategy. A collapsed
quote is reduced to a ... symbol; clicking it expands the first level of the quote, and further clicks reveal the
nested levels until everything is visible.
This concept is extremely effective especially when communicating with a top-posting community.
We had quite some internal discussion about how to implement this feature. For those not familiar with Trojitá's
architecture, we use a properly restricted QtWebKit instance for e-mail rendering. The active restrictions
include click-wrapped loading of remote content for privacy (so that a spammer cannot know whether you have read their
message), and JavaScript is disabled because of its security implications (or maybe "only" keeping your CPU busy and
draining your battery by a malicious third party). Within these constraints, here is what we
chose in the end.
Starting with Qt 4.8, WebKit ships with support for the :checked CSS3 pseudoclass. Using this feature,
it's possible to change the style based on whether an HTML checkbox is
checked or not. In theory, that's everything one might possibly need, but there's a small catch
-- the usual way of showing/hiding contents based on the state of a checkbox hits a WebKit bug (quick summary: it's tough to have it working without the
~ general-sibling selector unless you use it in one particular way). Long story short, I now know more
about CSS3 than I thought I would ever want to know, and it works (unless you're on Qt5 already, where
it assert-fails and crashes WebKit).
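For the record, the usual checkbox-driven show/hide pattern looks roughly like this (hypothetical class names, not Trojitá's actual markup; the sibling combinator between `:checked` and the quote body is exactly the part the WebKit bug makes tricky):

```css
/* A hidden checkbox carries the collapsed/expanded state; the visible
 * "..." is a <label> bound to it. The combinator after :checked is the
 * construct affected by the WebKit bug mentioned above. */
input.quote-toggle { display: none; }
label.quote-ellipsis { cursor: pointer; }
input.quote-toggle + blockquote.quote { display: none; }
input.quote-toggle:checked + blockquote.quote { display: block; }
```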
Speaking of WebKit, the way we use it in Trojitá is a bit unusual. The QWebView class contains full
support for scrolling, so it is not necessary to put it inside a QScrollArea. However, when working with
e-mails, one has to account for messages containing multiple body parts which have to be shown separately (again, for
both practical and security reasons). In addition, the e-mail header which is typically implemented as a custom
QWidget for flexibility, is usually intended to combine with the message bodies into a single entity to be
scrolled together. With WebKit, this is doable (after some size hints magic, and I really mean magic -- thanks
to Thomas Lübking of KWin fame for patches), but there's a catch -- internal methods like findText,
which normally scroll the contents of the web page to the matching place, no longer work when the whole web view is
embedded into a QScrollArea. I've dived into the source code of WebKit, and the interesting thing is that there
is code for exactly this case, but it is only implemented in Apple's version of WebKit. The source code even says that Apple needed this for its own
Mail.app -- an interesting coincidence, I guess.
Compared with the last release, Trojitá has also gained support for "smart replying". It will now detect that a
message comes from a mailing list and Ctrl+R will by default reply to list. Thomas has added support for
saving drafts, so you are no longer supposed to lose your work when you accidentally kill Trojitá. There's also
been the traditional round of bug fixes and compatibility improvements. It is entertaining to see that Trojitá is
apparently triggering certain code paths in various IMAP server implementations, proprietary and free software alike,
for the first time.
The work on support for multiple IMAP accounts is getting closer to being ready for prime time. It isn't present in
the current release, though -- the GUI integration in particular needs some polishing before it hits the masses.
I'm happy to observe that Trojitá is getting features which are missing from other popular e-mail clients. I'm
especially fond of my pet contribution, the quote collapsing. Does your favorite e-mail application offer a similar feature?
In the coming weeks, I'd like to focus on getting the multiaccounts branch merged into master, adding better
integration with the address book (Trojitá can already offer tab completion with data coming from Mutt's abook) and general GUI improvements. It would also be great to make it possible
to let Trojitá act as a handler for the mailto: URLs so that it gets invoked when you click on an e-mail
address in your favorite web browser, for example.
And finally, to maybe lure a reader or two into trying Trojitá, here's a short quote from a happy user who came to
our IRC channel a few days ago:
17:16 < Sir_Herrbatka> i had no idea that it's possible for mail client to be THAT fast
One cannot help but be happy when reading this. Thanks!
If you're on Linux, you can get the latest version of Trojitá from the OBS or the usual place.
I said this last week on Google+ when I was at a conference, and
needed to get it out there quickly, but as I keep getting emails and
other queries about this, I might as well make it "official" here. For no
other reason than that it provides a single place for me to point people at.
Anyway, I would like to announce that the 3.8 Linux kernel series is
NOT going to be a longterm stable kernel release. I will
NOT be maintaining it for long time, and in fact, will stop
maintaining it right after the 3.9 kernel is released.
The 3.0 and 3.4 kernel releases are both longterm, and both are going to
be maintained by me for at least 2 years. If I were to pick 3.8 right
now, that would mean I would be maintaining 3 longterm kernels, plus
whatever "normal" stable kernels are happening at that time. That is
something that I can not do without losing even more hair than I
currently have. To do so would be insane to attempt.
At the time of writing (but I’ll delay the publication of this post a few hours), I’m uploading a new SELinux-enabled KVM guest image. This is not an update on the previous image though (it’s a reinstalled system – after all, I use VMs for testing, so it makes sense to reinstall from time to time to check if the installation instructions are still accurate). However, the focus remains the same:
A minimal Gentoo Linux installation for amd64 (x86_64) as a guest within a KVM hypervisor. The image is about 190 MB compressed and 1.6 GB uncompressed. The file format is Qemu’s QCOW2, so expect the image to grow as you work with it. The file systems are, in total, sized to about 50 GB.
The installation has SELinux enabled (strict policy, enforcing mode), various grSecurity settings enabled (including PaX and TPE), but now also includes IMA (Integrity Measurement Architecture) and EVM (Extended Verification Module) although EVM is by default started in fix mode.
The image will not start any network-facing daemons by default (unlike the previous image) for security reasons (if I let this image stay around as long as I did the previous one, it’s prone to accumulate vulnerabilities, although I’m hoping I can update the image more frequently). This includes SSH, so you’ll need access to the image console first, after which you can configure the network and start SSH (run_init rc-service sshd start does the trick).
A couple of default accounts are created, and the image will display those accounts and their passwords on the screen (it is a test/play VM, not a production VM).
There are still a few minor issues with it, that I hope to fix by the next upload:
Bug 457812 is still applicable to the image, so you’ll notice lots of SELinux denials on the mknod capability. They seem to be cosmetic though.
At shutdown, udev somewhere fails with a SELinux initial context problem. I thought I had it covered, but I noticed after compressing the image that it is still there. I’ll fix it – I promise ;)
EVM is enabled in fix mode, because otherwise EVM is prohibiting mode changes on files in /run. I still have to investigate this further though – I had to use the EVM=fix workaround due to time pressure.
When uploaded, I’ll ask the Gentoo infrastructure team to synchronise the image with our mirrors so you can enjoy it. It’ll be on the distfiles, under experimental/amd64/qemu-selinux (it has the 20130224 date in the name, so you can see for yourself if the sync has already occurred or not).
A long time ago, I made a SELinux enabled VM for people to play with, displaying a minimal Gentoo installation, including the hardening features it supports (PIE/PIC toolchain, grSecurity, PaX and SELinux). I’m currently trying to create a new one, which also includes IMA/EVM, but it looks like I still have many things to investigate further…
First of all, I notice that many SELinux domains want to use the mknod capability, even for domains of which I have no idea whatsoever why they need it. I don’t notice any downsides though, and running in permissive mode doesn’t change the domain behavior. But still, I’m reluctant to mark them dontaudit as long as I’m not 100% sure.
Second, the gettys (I think it is the getty) result in a “Cannot change SELinux context: permission denied” error, even though everything is running in the right SELinux context. I still have to confirm if it really is the getty process or something else (the last run I had the impression it was a udev-related process). But there are no denials and no SELinux errors in the logs.
Third, during shutdown, many domains have problems accessing their PID files in /var/run (which is a link to /run). I most likely need to allow read privileges on all domains that have access to var_run_t towards the var_t symlinks. It isn’t a problem per se (the processes still run correctly) but ugly as hell, and if you introduce monitoring it’ll go haywire (as no PID files were either found, or were stale).
Also, EVM is giving me a hard time, not allowing me to change mode and ownership in files on /var/run. I have received some feedback from the IMA user list on this so it is still very much a work-in-progress.
Finally, the first attempt to generate a new VM resulted in a download of 817 MB (instead of the 158 MB of the previous release), so I still have to correct my USE flags and double-check the installed applications. Anyway, definitely to be continued. Too bad time is a scarce resource :-(
I've now been with the Linux Foundation for just over a year. When I
started, I posted a list of how you can watch to see what I've been
doing. But, given that people like to see year-end-summary
reports, the excellent graphic designers at the Linux Foundation have
put together an image summarizing my past year, in numbers:
In an attempt to revive the Gentoo Bugday, I wrote this article to give some guidelines and encourage both users and developers to join. I think it would be great to bring this event back and collaborate. Of course, everyone can open/close bugs silently, but this type of event is a good way to close bugs, attract new developers and users, and improve community relations. There is no need to be a Gentoo expert. So let me give you some information about the event.
Bugday is a monthly online event that takes place on the first Saturday of every month in #gentoo-bugs on the Freenode network. Its goal is to have users and developers collaborate to close/open bugs, update current packages and improve documentation.
The Gentoo Bugday takes place in our official IRC channel, #gentoo-bugs @ Freenode. You can talk about almost everything: your ebuilds, version bumps, bugs that you choose to fix, etc. This is a 24-hour event, so don’t worry about timezone differences.
A Gentoo installation (on real hardware or in a Virtual Machine).
An IRC client to join #gentoo-bugs, #gentoo-dev-help (ebuild help) and #gentoo-wiki (wiki help).
Positive energy / Will to help.
Improve quality of Bugzilla
Improve Wiki’s documentation.
Improve community relations.
Attract new developers and users.
Fix bugs (users/developers)
Triage incoming bugs (users/developers) (Good to start!)
Version bumps (users/developers) (Good to start!)
Improve wiki articles (users/developers) (Good to start!)
As one of my four talks at FOSDEM, I gave one on Gentoo titled “Package management and creation in Gentoo Linux.” The basic idea was, what could packagers and developers of other, non-Gentoo distros learn from Gentoo’s packaging format and how it’s iterated on that format multiple times over the years. It’s got some slides but the interesting part is where we run through actual ebuilds to see how they’ve changed as we’ve advanced through EAPIs (Ebuild APIs), starting at 16:39.
If you click through to YouTube, the larger (but not fullscreen) version seems to be the easiest to read.
It was scaled from 720×576 to a 480p video, so if you find it too hard to read the code, you can view the original WebM here.
The Gentoo project has had its own official wiki for some time now, and we are going to use it more and more in the next few months. For instance, in the last Gentoo Hardened meeting we already agreed that most user-oriented documentation should be put on the wiki, and I’ve heard there are ideas about moving Gentoo project pages at large towards the wiki. Also, for the regular Gentoo documentation, I will be moving those guides that we cannot easily maintain ourselves anymore towards the wiki.
To support migrations of documents, I created a gxml2wiki.xsl stylesheet. Such a stylesheet can be used, together with tools like xsltproc, to transform GuideXML documents into text output somewhat suitable for the wiki. It isn’t perfect (far from it actually) but at least it allows for a more simple migration of documents with minor editing afterwards.
Currently, using it is as simple as invoking it against the GuideXML document you want to transform:
~$ xsltproc gxml2wiki.xsl /path/to/document.xml
The output shown on the screen can then be used as a page. The following things still need to be corrected manually:
Whitespace is broken; sometimes there are too many newlines. I made the decision to put in newlines whenever they might be needed (which produces too many) rather than too few (which would make it more difficult to find where to add them).
Links need to be double/triple-checked, but I’ll try to fix that in later editions of the stylesheet.
Commands will have “INTERNAL” in them – you’ll need to move the commands themselves into the proper location and only put the necessary output in the pre-tags. This is because the wiki format has more structure than GuideXML in this matter, thus transformations are more difficult to write in this regard.
The stylesheet currently automatically adds in a link towards a Server and security category, but of course you’ll need to change that to the proper category for the document you are converting.
I guess many people may hit similar problems, so here is my experience of the upgrades. Generally it was pretty smooth, but required paying attention to the details and some documentation/forums lookups.
udev-171 -> udev-197 upgrade
Make sure you have CONFIG_DEVTMPFS=y in kernel .config, otherwise the system becomes unbootable for sure (I think the error message during boot mentions that config option, which is good).
The ebuild also asks for CONFIG_BLK_DEV_BSG=y, not sure if that's strictly needed but I'm including it here for completeness.
Things work fine for me without DEVTMPFS_MOUNT. I haven't tried with it enabled, I guess it's optional.
I do not have a split /usr. YMMV then if you do.
Make sure to run "rc-update del udev-postmount".
Expect network device names to change (I guess this is a non-issue for systems with a single network card). This can really mess up things in quite surprising ways. It seems /etc/udev/rules.d/70-persistent-net.rules no longer works (bug #453494). Note that the "new way" to do the same thing (http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames) is disabled by default in Gentoo (see /etc/udev/rules.d/80-net-name-slot.rules). For now I've adjusted my firewall and other configs, but I think I'll need to figure out the new persistent net naming system.
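For reference, an old-style persistent naming rule (the kind that reportedly stopped working) looks like the following; the MAC address here is of course a placeholder:

```
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", KERNEL=="eth*", NAME="eth0"
```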
iptables-1.4.13 -> iptables-1.4.x upgrade
 * Loading iptables state and starting firewall ...
WARNING: The state match is obsolete. Use conntrack instead.
iptables-restore: state: option "--state" must be specified
It can be really non-obvious what to do with this one. Change your rules from e.g. "-m state --state RELATED" to "-m conntrack --ctstate RELATED". See http://forums.gentoo.org/viewtopic-t-940302.html for more info. Also note that iptables-restore doesn't really provide good error messages, e.g. "iptables-restore: line 48 failed". I didn't find a way to make it say what exactly was wrong (the line in question was just a COMMIT line, it didn't actually identify the real offending line). These mysterious errors are usually caused by missing kernel support for some firewall features/targets.
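Not from the original post, but as a sketch, the substitution can be automated with sed; on a real system you would run the same substitution over your saved rules file (e.g. /var/lib/iptables/rules-save on Gentoo) after taking a backup:

```shell
# Rewrite an obsolete state match into the equivalent conntrack match.
echo '-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT' \
  | sed -e 's/-m state --state/-m conntrack --ctstate/'
```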
two upgrades together
Actually, what adds to the confusion is having these two upgrades done simultaneously, which makes it harder to identify which upgrade is responsible for which breakage. For a smoother ride, I'd recommend upgrading iptables first, making sure the updated rules work, and only then proceeding with udev.
We've generated a new set of profiles for Gentoo installation. These are now called 13.0 instead of 10.0; e.g., "default/linux/amd64/10.0/desktop" becomes "default/linux/amd64/13.0/desktop". Everyone should upgrade as soon as possible. This brings (nearly) no user-visible changes. Some new files have been added to the profile directories that make it possible for the developers to do more fine-grained use flag masking (see PMS-5 for the details), and this formally requires a new profile tree with EAPI=5 (and a recent portage version; any recent sys-apps/portage-2.1.11.x release should work). Since the 10.0 profiles will be deprecated immediately and removed in a year, emerge will suggest a replacement on every run. I strongly suggest you just follow that recommendation. One additional change: the "server" profiles will be removed; they do not exist in the 13.0 tree anymore. If you have used a server profile so far, you should migrate to its parent, i.e. from "default/linux/amd64/10.0/server" to "default/linux/amd64/13.0". This may change the default value of some use flags (the setting in "server" was USE="-perl -python snmp truetype xml"), so you may want to check the setting of these flags after switching profiles, but otherwise nothing happens.
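The switch itself boils down to a couple of eselect commands. The profile name below is just the desktop example; pick whichever 13.0 profile matches your current 10.0 one:

```shell
# Show all available profiles; deprecated ones are marked as such
eselect profile list
# Switch to the corresponding 13.0 profile
eselect profile set default/linux/amd64/13.0/desktop
```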
While KDE 4.10.0 runs perfectly fine on my machine, unfortunately a lot of Gentoo users see immediate crashes of plasma-desktop, which makes the graphical desktop environment completely unusable. We know more or less what happened in the meantime, just not how to properly fix it... The problem:
plasma-desktop uses a new code path in 4.10, which triggers a Qt bug leading to immediate SIGSEGV.
The Qt bug only becomes fatal for some compiler options, and only on 64bit systems (amd64).
The Qt bug may be a fundamental architectural problem that needs proper thought.
Reverting the commit to plasma-workspace that introduced the problem makes the crash go away, but plasma-desktop starts hogging 100% CPU after a while. (This is done in plasma-workspace-4.10.0-r1 as a stopgap measure.) Kinda makes sense since the commit was there to fix a problem - now we hit the original problem.
The bug seems not to occur if Qt is compiled with CFLAGS="-Os". Cause unknown.
David E. Narváez aka dmaggot wrote a patch for Qt that fixes this particular codepath but likely does not solve the global problem.
Our Gentoo Qt team understandably only wants to apply a patch if it has been accepted upstream.
Right now, the only option we (as the Gentoo KDE team) have is to wait for someone to pick up the phone: either someone from KDE (to properly use the old codepath or provide some alternative), or from Qt (to fix the bug or apply a workaround)...
Update! Update! Read all about it! You can find the recent updates in a tree near you. They are currently keyworded, but will be stabilized as soon as the arch teams find time to do so. You may not want to wait that long, even though the issue is a Denial of Service, which is not as severe as it sounds in this case: the user would have to be logged in to cause a DoS.
There have been some other updates to the PostgreSQL ebuilds as well. PostgreSQL will no longer restart when you restart your system logger. The ebuilds install PAM service files unique to each slot, so you don’t have to worry about them being removed when you uninstall an old slot. And, finally, you can write your PL/Python in Python 3.
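As a quick sketch of that last point (the database name and function here are made up; plpython3u is PostgreSQL's language name for Python 3):

```shell
psql -d mydb <<'EOF'
CREATE EXTENSION plpython3u;
CREATE FUNCTION pymax(a integer, b integer) RETURNS integer AS $$
  return max(a, b)
$$ LANGUAGE plpython3u;
SELECT pymax(2, 3);
EOF
```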
There's been a lot of information scattered around the internet about these topics recently, so here's my attempt to put it all in one place to (hopefully) settle things down and give my inbox a break.
Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it.
Our goal (and I use "goal" in a very rough sense; I have 8 pages of scribbled notes describing what we want to try to implement here) is to provide a reliable multicast and point-to-point messaging system for the kernel that will work quickly and securely. On top of this kernel feature, we will try to provide a "libdbus" interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system.
"But Greg!" some of you will shout, "What about the existing AF_BUS kernel patches that have been floating around for a while and that you put into the LTSI 3.4 kernel release?"
The existing AF_BUS patches are great for users who need a very low-latency, high-speed, D-Bus protocol on their system. This includes the crazy automotive Linux developers, who try to shove tens of thousands of D-Bus messages through their system at boot time, all while using extremely underpowered processors. For this reason, I included the AF_BUS patches in the LTSI kernel release, as that limited application can benefit from them.
Please remember the LTSI kernel is just like a distro kernel: it has no relation to upstream kernel development other than being a consumer of it. Patches are in this kernel because the LTSI member groups need them; they aren't always upstream, just like all Linux distro kernels.
However, given that the AF_BUS patches have been rejected by the upstream Linux kernel developers, I advise that anyone relying on them be very careful about their usage, and be prepared to move away from them sometime in the future when this new "kernel dbus" code is properly merged.
As for when this new kernel code will be finished, I can only respond with the traditional "when it is done" mantra. I can't provide any deadlines, and at this point in time, don't need any additional help with it; we have enough people working on it at the moment. It's available publicly if you really want to see it, but I'll not link to it as it's nothing you really want to see or watch right now. When it gets to a usable state, I'll announce it in the usual places (linux-kernel mailing list), where it will be torn to the usual shreds and I will rewrite it all again to get it into a mergeable state.
In the meantime, if you see me at any of the many Linux conferences I'll be attending around the world this year, and you are curious about the current status, buy me a beer and I'll be glad to discuss it in person.
If there's anything else people are wondering about this topic, feel free to comment on it here on Google+, or email me.
It’s been a while again, so time for another Gentoo Hardened online progress meeting.
GCC 4.8 is in development stage 4, so the hardened patches will be worked on next week. Some help is needed to test the patches on ARM, PPC and MIPS though. For those interested, keep a close eye on the hardened-dev overlay, as it will contain the latest fixes. When GCC 4.9 starts development phase 1, Zorry will again try to upstream the patches.
With the coming fixes, we will probably (need to) remove the various hardenedno* GCC profiles from the hardened Gentoo profiles. This shouldn’t impact too many users, as ebuilds add in the correct flags anyhow (for instance when PIE/PIC needs to be turned off).
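If you want to check whether you are on one of those profiles, gcc-config will list them for you:

```shell
# List installed compiler profiles; hardenedno* variants show up here
gcc-config -l
# Show which profile is currently active
gcc-config -c
```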
Kernel, grSecurity and PaX
The kernel release 3.7.0 that we have stable in our tree has seen a few setbacks, but no higher version is stable yet (mainly due to the stabilization period needed). 3.7.4-r1 and 3.7.5 are prime candidates with a good track record, so we might be stabilizing 3.7.5 in the very near future (probably next week).
On the PaX flag migration (you know, from ELF-header based marking to extended attributes marking), the documentation has seen its necessary upgrades and the userland utilities have been updated to reflect the use of xattr markings. The eclass we use for the markings will use the correct utility based on the environment.
One issue faced when trying to support both markings is that some actions (like “paxctl -Cc”, which creates the PT_PAX header if it is missing) make no sense for the other marking style (there is no header when using XATTR_PAX). The eclass will be updated to ignore these flags when XATTR_PAX is selected.
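To illustrate the difference between the two marking styles (a sketch; the file path is arbitrary and the flag letters follow the usual PaX conventions, e.g. "m" for disabling MPROTECT):

```shell
# PT_PAX header marking: -C creates the PT_PAX header if it is missing
paxctl -Cm /usr/bin/someprog
# XATTR_PAX marking: no ELF header involved, just an extended attribute
setfattr -n user.pax.flags -v m /usr/bin/someprog
getfattr -n user.pax.flags /usr/bin/someprog
```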
Revision 10 of the SELinux policy packages is stable in the tree, and revision 11 is awaiting its stabilization period. A few more changes have been put in the policy repository already (these are installed when using the live ebuilds) and will of course be part of a later revision.
A change in the userland utilities was also pushed out to allow permissive domains (i.e., running a single domain in permissive mode instead of the entire system).
Finally, the SELinux eclass has been updated to remove SELinux modules from all defined SELinux module stores when the SELinux policy package is removed from the system. Before, the user had to remove the modules from the stores manually, which is error-prone and easily forgotten, especially for the non-default SELinux policy stores.
All hardened subprofiles are marked as deprecated now (you’ve probably seen the discussions about this on the mailing list), so we now have a sane set of hardened profiles to manage. The subprofiles were used for things like “desktop” or “server”, whereas users can easily stack their profiles as they see fit anyhow, so there was little reason for the project to continue managing those subprofiles.
Also, now that Gentoo has released its 13.0 profile, we will need to migrate our profiles to the 13.0 ones as well. So, the idea is to temporarily support 13.0 in a subprofile, test it thoroughly, and then remove the subprofile and switch the main one to 13.0.
The documentation for IMA and EVM is available on the Gentoo Hardened project site. They currently still refer to the IMA and EVM subsystems as development-only, but they are available in the stable kernels now. Especially the default policy that is available in the kernel is pretty useful. When you want to consider custom policies (for instance with SELinux integration) you’ll need a kernel patch that is already upstreamed but not applied to the stable kernels yet.
To support IMA/EVM, a package called ima-evm-utils is available in the hardened-dev overlay, which will be moved to the main tree soon.
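As a sketch of what using the default policy looks like (kernel parameter and securityfs path as documented for IMA; your securityfs mount point may differ):

```shell
# Boot with the default measurement policy by adding this to the kernel
# command line:
#   ima_tcb
# Then inspect the runtime measurement list:
head /sys/kernel/security/ima/ascii_runtime_measurements
```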
As mentioned before, the PaX documentation has seen quite a lot of updates. Other documents that have seen updates are the Hardened FAQ, the Integrity subproject and the SELinux documentation, although most of these were small changes.
Another suggestion given is to clean up the Hardened project page; however, there has been some talk within Gentoo to move project pages to the Gentoo wiki. Such a move might make the suggestion easier to handle. And while on the subject of the wiki, we might want to move user guides to the wiki already.
Bug 443630 refers to segmentation faults with libvirt when starting Qemu domains on an SELinux-enabled host. Sadly, I’m not able to test libvirt myself, so either someone with SELinux and libvirt expertise can chime in, or we will need to troubleshoot it through the bug report (using gdb, more strace’ing, …), which might take quite some time and is not user friendly…
Various talks were held at FOSDEM regarding Gentoo Hardened, and a lot of people attended them. The round table was quite effective as well, with many users interacting with developers all around. For next year, chances are very high that we’ll give a “What has changed since last year” session and a round table again.
With many thanks to the usual suspects: Zorry, blueness, prometheanfire, lejonet, klondike and the several dozen contributors that are going to kill me for not mentioning their (nick)names.
Preface: It appears that I have fallen behind in my writings. It’s a shame really, because I think of things I should write about in the moment and then forget them. However, as I’m embracing slowish travel, sometimes I just don’t do anything interesting enough to write about every day or week.
My last post was about my time in Greece. Since then I have been to Istanbul, Dubai, and (now) Sri Lanka. I was in Istanbul for about 10 days. My lasting impressions of Istanbul were:
+: Turkey was the first Muslim-majority country I’ve been to. This is a positive because it opened up some thoughts of what to expect as I continue east. To see all the impressive mosques, to hear the azan (call to prayer) in the streets, to talk to some Turks about religion, really made it a new experience for me.
+: Istanbul receives many visitors per year, which makes it such that it is easy to converse, find stuff you need, etc
-: Istanbul receives many visitors per year, which makes it very touristy in some parts.
+: Istanbul is a huge city and there is much to see. I stepped on Asia for the first time. There are many old, old, buildings that leave you in awe. Oldest shopping area in the world, the Grand Bazaar, stuff like that.
-: Istanbul is a huge city and the public transit is not well connected, I thought.
–: Every shop owner harasses you to come into the store! The best defense I can recommend is to walk with a purpose (like you are running an errand) but not in a hurry. This will bring the least amount of attention to yourself, at the risk of “missing” the finer details as you meander.
Let’s not joke anyone, Dubai was a skydiving trip, for sure. I spent 15 days in Dubai and made 30 jumps. It was a blast. I was at the dropzone most every day, and on the weather days my generous hosts showed me around the city. I didn’t feel the need to take any pictures of the sights because, while impressive, they seemed too “fake” to me (outrageous, silly, etc). I went to the largest mall in the world, ate brunch in the shadow of the largest building in the world, largest aquarium, indoor ski hill in a desert, eventually it was just…meh. However, I will never forget “The Palm”
When deciding where to go onwards, I knew I shouldn’t stay in Dubai too long (money matters, of course; I would spend my whole lot on fun, and there is so much more to see). I ended up in Sri Lanka because Skyscanner told me there was a direct flight there on a budget airline; at my pace, I don’t see the point in accepting layovers. Then I found someone on HelpX who wanted an English teacher in exchange for accommodation. While I’m not a teacher, I am a native speaker, and that was acceptable at this level of classes. I did a week-long stint of that in a small village and now I’m relaxing at the beach… I’ll write more about Sri Lanka later and post pics; a fun photo so far:
I had this post in the Drafts for a while, but now it’s time to publish it since the situation does not seem to be improving at all.
As you probably know, if you want to become a Gentoo developer, you need to find yourself a mentor. This used to be easy: all you had to do was contact the teams you were interested in contributing to as a developer, and one of the team members would step up and help you with your quizzes. However, lately I find myself in the weird situation of having to become a mentor myself, because potential recruits come back to recruiters and say that they could not find someone from the teams to help them. This is sub-optimal for a couple of reasons. First of all, time constraints. Mentoring someone can take days, weeks or months. Recruiting someone after they have been trained (properly or not) can also take days, weeks or months. So somehow, I ended up spending twice as much time as I used to, and we are back to those good old days where someone needed to wait months before being fully recruited. Secondly, a mentor and a recruiter should be different persons. This is necessary for recruits to get wider and more effective training, as different people will focus on different areas during the training period.
One may wonder why teams are not willing to spend time training new developers. I guess this is because training people takes quite a lot of one’s time, and people tend to prefer fixing bugs and writing code over training people. Another reason could be that teams are short on manpower, so they are mostly busy with other stuff and just can’t do both at the same time. Others just don’t feel ready to become mentors, which is rather weird because every developer was once a mentee, so it’s not like they haven’t done something similar before. Truth is that this seems to be a vicious circle: no manpower to train people -> fewer people get trained -> not enough manpower in the teams.
In my opinion, getting more people on board is absolutely crucial for Gentoo. I strongly believe that people must spend time training new people because a) They could offload work to them ;) and b) it’s a bit sad to have quite a few interested and motivated people out there and not spend time to train them properly and get them on board. I sincerely hope this is a temporary situation and things will become better in the future.
ps: I will be at FOSDEM this weekend. If you are there and would like to discuss the Gentoo recruitment process or anything else, come and find me ;)
Let me present an informal and unofficial state of Chromium open source packages as I see it. Note a possible bias: I'm a Chromium developer (and this post represents my views, not the project's), and a Gentoo Linux developer (and Chromium package maintenance lead; this is a team effort, and the entire team deserves credit, especially for keeping stable and beta ebuilds up to date).
Gentoo Linux - ships stable, beta and dev channels. Security updates are promptly pushed to stable. NaCl (NativeClient) is enabled, although pNaCl (Portable NaCl) is disabled. Up to 23 use_system_... gyp switches are enabled (depending on USE flags).
Arch Linux - ships stable channel, promptly reacts to security updates. NaCl is enabled, following Gentoo closely - I consider that good, and I'm glad people find that code useful. :) 5 use_system_... gyp switches are enabled. A notable thing is that the PKGBUILD is one of the shortest and simplest among Chromium packages - this seems to follow from The Arch Way. There is also chromium-dev on AUR - it is more heavily based on the Gentoo package, and tracks the upstream dev channel. Uses 19 use_system_... gyp switches.
FreeBSD / OpenBSD - ship stable channel, and are doing pretty well, especially when taking amount of BSD-specific patching into account. NaCl is disabled.
ALT Linux - ships stable channel. NaCl seems to be disabled by default, I'm not sure what's actually shipped in compiled package. Uses 11 use_system_... gyp switches.
Debian - ancient 6.x version in Squeeze, 22.x in sid at the time of this writing. This is two major milestones behind, and is missing security updates. Not recommended at this moment. :( If you are on Debian, my advice is to use Google Chrome, since the official debs should work, and to monitor the state of the open source Chromium package. You can always return to it when it gets updated.
Fedora - not in official repositories, but Tom "spot" Callaway has an unofficial repo. Note: currently the version in that repo is 23.x, one major version behind on stable. Tom wrote an article in 2009 called Chromium: Why it isn't in Fedora yet as a proper package, so there is definitely an interest to get it packaged for Fedora, which I appreciate. Many of the issues he wrote about are now fixed, and I hope to work on getting the remaining ones fixed. Please stay tuned!
This is not intended to be an exhaustive list. I'm aware of openSUSE packages, there seems to be something happening for Ubuntu, and I've heard of Slackware, Pardus, PCLinuxOS and CentOS packaging. I do not follow these closely enough though to provide a meaningful "review".
Some conclusions: different distros package Chromium differently. Pay attention to the packaging lag: with about 6 weeks upstream release cycle and each major update being a security one, this matters. Support for NativeClient is another point. There are extension and Web Store apps that use it, and when more and more sites start to use it, this will become increasingly important. Then it is interesting why on some distros some bundled libraries are used even though upstream provides an option to use a system library that is known to work on other distros.
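For the curious, the unbundling mentioned above is controlled by gyp defines at build time; a hypothetical invocation (the exact switch set varies per release) could look like:

```shell
# Regenerate build files with some system libraries instead of bundled ones
build/gyp_chromium -Duse_system_zlib=1 -Duse_system_libxml=1
```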
Finally, I like how different maintainers look at each other's packages, and how patches and bugs are frequently being sent upstream.
Just a simple announcement for now. It's a bit messy, but should work :D
I have packaged OpenStack for Gentoo and it is now in the tree; the most complete packaging is probably for OpenStack Swift. Nova and some of the other components are missing init scripts (being worked on). If you have problems or find bugs, report them as normal.
When you talk to a friend, she or he knows you are the person in question. But when you do this with a friend far away through computers, you can not be sure.
That's why computers have ways to let you know if the person you are talking to is really the right person.
The ways we use today have one problem: We are not sure that they work. It may be that a bad person knows a way to be able to tell you that he is in fact your friend. We do not think that there are such ways for bad persons, but we are not completely sure.
This is why some people try to find ways that are better. Where we can be sure that no bad person is able to tell you that he is your friend. With the known ways today this is not completely possible. But it is possible in parts.
I have looked at those better ways. And I have worked on bringing these better ways to your computer.
So - do you now have an idea what I was talking about?
openSUSE 12.3 is getting closer and closer, and probably one of the last changes I pushed for MySQL was switching the default MySQL implementation. So in openSUSE 12.3 we will have MariaDB as the default.
If you are following what is going on in openSUSE in regards to MySQL, you probably already know that we started shipping MariaDB together with openSUSE starting with version 11.3 back in 2010. It is now almost three years since we started providing it. There were some little issues along the way in resolving all conflicts and making everything work nicely together. But I believe we polished everything and smoothed all the rough edges. And now that everything is working nice and fine, it’s time to change something, isn’t it? So let’s take a look at the change I made…
MariaDB as default, what does it mean?
First of all, for those who don’t know, MariaDB is a fork of MySQL and a drop-in replacement for it: same API, same protocol, even the same utilities. And mostly the same data files. So unless you have some deep optimizations depending on your current version, you should see no difference. And what will the switch mean?
Actually, switching the default doesn’t mean much in openSUSE. Do you remember the time when we set KDE as the default? And we still provide a great GNOME experience with GNOME Shell. In openSUSE we believe in freedom of choice, so even now you can install either MySQL or MariaDB quite simply. And if you are interested, you can try testing beta versions of both: we have MySQL 5.6 and MariaDB 10.0 in the server:database repo. So where is the change of default?
Actually, the only thing that changed is that everything now links against MariaDB and uses MariaDB libraries; no change from the user’s point of view. And if you upgrade from a system that used to have just one package called ‘mysql’, you’ll end up with MariaDB. It will also be the default in the LAMP pattern. But generally, you can still easily replace MariaDB with MySQL, if you like Oracle. Yes, it is hard to make a splash with a default change if you are supporting both sides well…
What happens to MySQL?
Oracle’s MySQL will not go away! I’ll keep packaging their version and it will be available in openSUSE. It’s just not going to be the default, but nothing prevents you from installing it. And if you had it in the past and do just a plain upgrade, you’ll keep it; we are not going to tell you what to use if you know what you want.
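In practice the choice is one zypper command away (package names as shipped in openSUSE at the time; double-check with `zypper se mariadb mysql` if unsure):

```shell
zypper install mariadb                  # the new default
zypper install mysql-community-server   # Oracle's MySQL instead
```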
As mentioned before, being the default doesn’t have many consequences. So why the switch? Wouldn’t it break stuff? Is MariaDB safe enough? Well, I’ve personally been using MariaDB since 2010, with a few switches to MySQL and back, so from my point of view it is better tested. I originally switched for the kicks of living on the edge, but in the end I found MariaDB boringly stable (even though I run their latest alpha). I never had any serious issue with it. It also has some interesting goodies to offer its users over MySQL. Even Wikipedia decided to switch. And our friends at Fedora are considering it too, although AFAIK they don’t have MariaDB in their distribution yet…
Don’t take this as a complaint about the MySQL guys and girls at Oracle; I know they are doing a great job that even MariaDB is based on, as they do periodic merges to get the newest MySQL and “just” add some more tweaks, engines and stuff.
So, as I like MariaDB and I think it’s time to move, I, as a maintainer of both, proposed to change the default. There were no strong objections, so we are doing it!
So overall, yes, we are changing the default MySQL provider, but you probably won’t even notice.
We are currently working on integrating carbon nanotube nanomechanical systems into superconducting radio-frequency electronics. The overall objective is the detection and control of nanomechanical motion towards its quantum limit. Within this project, we have a PhD position with the project working title "Gigahertz nanomechanics with carbon nanotubes" available immediately.
You will design and fabricate superconducting on-chip structures suitable as both carbon nanotube contact electrodes and gigahertz circuit elements. In addition, you will build up and use - together with your colleagues - two ultra-low temperature measurement setups to conduct cutting-edge measurements.
Good knowledge of electrodynamics and possibly superconductivity is required. Certainly helpful are low temperature physics, some programming experience, and basic familiarity with Linux. The starting salary is 1/2 TV-L E13.
The combination of localized states within carbon nanotubes and superconducting contact materials leads to a manifold of fascinating physical phenomena and is a very active area of current research. An additional bonus is that the carbon nanotube can be suspended, i.e. the quantum dot between the contacts forms a nanomechanical system. In this research field a PhD position is immediately available; the working title of the project is "A carbon nanotube as a moving weak link".
You will develop and fabricate chip structures combining various superconductor contact materials with ultra-clean, as-grown carbon nanotubes. Together with your colleagues, you will optimize material, chip geometry, nanotube growth process, and measurement electronics. Measurements will take place in one of our ultra-low temperature setups.
Good knowledge of superconductivity is required. Certainly helpful is knowledge of semiconductor nanostructures and low temperature physics, as well as basic familiarity with Linux. The starting salary is 1/2 TV-L E13.
Well, all of MythTV 0.26 is now in portage, masked for testing for a few days.
If anyone is interested now is a good time to give it a try and report any issues you find. If all is quiet the masks will come off and we’ll be up-to-date (including all patches up to a few days ago).
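A sketch of how to accept the masked ebuilds for testing (the atoms here are illustrative; match them to the actual mask entries in the tree):

```shell
echo "media-tv/mythtv" >> /etc/portage/package.unmask
echo "media-plugins/mythplugins" >> /etc/portage/package.unmask
emerge -av media-tv/mythtv
```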
Thanks to all who have contributed to the 0.26 bug. I can also happily report that I’m running Gentoo on my MythTV front-end, which should help me with maintaining things. MiniMyth is a great distro, but it made it difficult to keep the front- and back-ends in sync.
the task was to combine do and re by nils frahm into a new work. i chopped “re” into loops, and rearranged sections by sight and sound for a deliberately loose feel. the resulting piece is entirely unquantized, with percussion generated from the piano/pedal action sounds of “do” set under the “re” arrangement. the perc was performed with an mpd18 midi controller in real time, and then arranged by dragging individual hits with a mouse. since the original piano recordings were improvised, tempo fluctuates at around 70bpm, and i didn’t want to lock myself into anything tighter when creating the downtempo beats.
beats performed live on the mpd18, arranged in ardour3.
normally i’d program everything to a strict grid with renoise, but for this project, i used ardour3 (available in my overlay) almost exclusively, except for a bit of sample preparation in renoise and audacity. the faint background pads/strings were created with paulstretch. my ardour3 session was filled with hundreds of samples, each one placed by hand and nudged around to keep the jazzy feel, as seen in this screenshot:
this is a very rough rework — no FX, detailed mixing/mastering, or complicated tricks. i ran outta time to do all the subtle things i usually do. instead, i spent all my time & effort on the arrangement and vibe. the minimal treatment worked better than everything i’d planned.
Gentoo Design, Copyright 2001-2012 Gentoo Foundation, Inc.
Views expressed in the content shown above do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.