
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alistair Bush
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Christopher Harvey
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Joe Peterson
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. José Alberto Suárez López
. Kenneth Prugh
. Krzysiek Pawlik
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Marcus Hanwell
. Mark Kowarsky
. Mark Loeser
. Markos Chandras
. Markus Ullmann
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matthias Geerdsen
. Matti Bickel
. Michal Hrusecky
. Michal Januszewski
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mounir Lamouri
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robert Buchholz
. Robin Johnson
. Romain Perier
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Steve Dibb
. Stratos Psomadakis
. Stuart Longland
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thilo Bangert
. Thomas Anderson
. Thomas Kahle
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tobias Scherbaum
. Tomáš Chvátal
. Torsten Veller
. Victor Ostorga
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
October 20, 2012, 23:04 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

October 19, 2012
Miniconf: Gentoo on the OLPC XO-1.75 (October 19, 2012, 21:02 UTC)

At the Gentoo Miniconf 2012 in Prague we will install Gentoo on the OLPC XO-1.75, an ARM-based laptop designed as an educational tool for children. If you are interested in joining us, come to the Gentoo booth and start hacking with us!

—Chí-Thanh Christopher Nguyễn

October 17, 2012
2012 Gentoo Screenshot Contest Results (October 17, 2012, 20:57 UTC)

Gentoo - Still alive and kicking ...

As the quantity and quality of this year's entries will attest, Gentoo is alive, well, and taking no prisoners!

We had 70 entries for the 2012 Gentoo screenshot contest, representing 11 different window managers / desktop environments. Thanks to all who participated, to the judges, and to likewhoa for the screenshot site.

The Winners!

New subproject: kde-stable (October 17, 2012, 18:53 UTC)

If you are a KDE user, you may be interested in this new subproject:
http://www.gentoo.org/proj/en/desktop/kde/kde-stable/

Feel free to ask if you have any doubts.

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The latest news (October 17, 2012, 10:27 UTC)

Overview of What Happened

In the last few weeks, the conference team has worked hard to prepare the conference. The main news items you should be aware of are the FAQ which has been published, the party locations and times, the call to organize BoF sessions and of course the sponsors who help make the event possible. And we’re happy to tell you that we will provide live video streams from the main rooms during the event (!!!) and we announced the Round Table sessions during the Future Media track. Last but not least, there have been some interviews with interesting speakers in the schedule!

Sneak Peek of the Conference Schedule

Let’s start with the interviews. During the last weeks, a number of interesting speakers have been interviewed, both by text and over video chat. You can find the interviews in our first sneak peek article and more in this extensive follow-up article about the Future Media track. You can also find the video interviews on our YouTube channel and on our blip.tv channel.

Video!

Talking about video interviews, there will be more videos in those channels: the openSUSE Video team is gearing up to tape the talks at the event. They will even provide a live stream of the event, which you can watch via flash and on a smartphone at bambuser and via these three links via ogv feeds: Room Kirk Room McCoy and Room Scotty. Keep an eye on the wiki page as the team will add feeds to more rooms if we can get some more volunteers to help us out.

Round Table Sessions!

We’ve mentioned the special feature track ‘Future Media’ already and we’ve got an extra bite for you all: the track will feature two round table discussions, one about the value of Free and Open for our Society and one about the practicalities of doing ‘open’ projects. Find more in the schedule: Why open matters and How do you DO open?.

We need YOU!

Despite all our work, this event would be nothing without YOUR help. We’re still looking for volunteers to sign up but there’s another thing we need you for: be pro-active and get the most out of this event! That means not only sitting in the talks but also stepping up and participating in the BoF Sessions. And organize a BoF if you think there’s something to discuss!

Party time!

Of course, we’re also thinking about the social side of the event. Yes, there will surely be an extensive “hallway track” as we feature a nice area with booths and the university has lots of hallways… But sometimes it’s just nice to sit down with someone over a good beer, and this is where our parties come in. As this article explains, there will be two parties: one on Friday, as warming-up (and pre-registration) and one on Saturday, rockin’ in the city center of Prague. Note that you will need your badge to enter this party, which means you have to be registered!

Sponsors

As we wrote a few days ago, all this would not be possible without our sponsors, and we’d like to thank them A LOT for their support!

Big hugs to Platinum Sponsor SUSE, Gold Sponsor Aeroaccess, Silver Sponsor Google, Bronze Sponsor B1Systems, supporters ownCloud and Univention and of course our media partners LinuxMagazine and Root.cz. Last but not least, a big shout-out to the university which is providing this location to us!

FAQ

On a practical level, we also published our Conference FAQ answering a bunch of questions you might have about the event. If you weren’t sure about something, check it out!

More

There will be more news in the coming days, so be sure to keep an eye on news.opensuse.org for articles leading up to and of course during the event. As one teaser, we’ve got the Speedy Geeko and Lightning talks schedule coming soon!

Be there!

Gentoo Miniconf, oSC12 and LinuxDays will take place at the Czech Technical University in Prague. The campus is located in the Dejvice district and is next to an underground station that gets you directly to the historic city center – an opportunity you can’t miss!

We expect to welcome about 700 Open Source developers, testers, usability experts, artists and professional attendees to the co-hosted conferences! We work together making one big, smashing event! Admission to the conference is completely free. However for oSC a professional attendee ticket is available that offers some additional benefits.

All the co-hosted conferences will start on October 20th. Gentoo Miniconf and Linuxdays end on October 21st, while the openSUSE Conference ends on October 23rd. See you there!

Dane Smith a.k.a. c1pher (homepage, stats, bugs)
New Tricks, Goals, and Ideas (October 17, 2012, 01:06 UTC)

It’s been a while since I’ve done anything visible to anyone but myself. So, what the heck have I been doing?

Well, for starters, in the past year I’ve done a serious amount of work in Python. This work was one of the reasons for my lack of motivation for Gentoo. I went from doing little programming / maintenance at work to doing it 40+ hours a week. It meant I didn’t really feel up to doing more of it in my limited spare time. So I took up a few new hobbies. I got into photography (feel free to look under links for the photo website). I feel weird with the self promotion for that type of thing, but, c’est la vie.

As the programming at work died down some, I started to find odd projects. I spent some serious time learning Go [1] and did a few small projects of my own in that. One of those projects will be open sourced soon. I know a fair few different languages, and I know C, Python, and Java pretty decently. While I like all of the ones on that list, I can’t say that I truly buy into the philosophies. Python is great. It’s simple, it’s clean, and it “just works.” However, I find that like OpenSSL, it gives you enough room to hang yourself and everyone else in the room. The lack of strict typing coupled with the fact that it’s a scripting language are downsides (in my eyes). C, for all that it is awesome at low level work, requires so much verbosity to accomplish the simplest tasks that I tend to shy away from it for anything other than what must be done at that level. Java… is well Java. It’s a decent enough language I suppose, but being run in a VM is silly in my eyes. It, like C, suffers from being too verbose as well (again, merely my humble opinion).

Enter Go. Go has duck typed interfaces, unlike Java’s explicit ones. It’s compiled and strictly typed. It has other modern niceties (like proper strings), along with a strong tie to web development (another area C struggles with). It has numerous interesting concepts (check out defer), along with what I find to be a MUCH better approach to error handling than what exists in any of C, Java, or Python. Add in that it is concurrent by design and you have one serious language. I must say that I am thoroughly impressed. Serious Kudos to those Google guys for one awesome language.

I also picked up a Nexus 7 and started looking into how Android is built and works. I got my own custom ROM and Kernel working along with a nice Gentoo image on the SD Card. Can anyone say “Go compiler on my Nexus 7?” This work also led me to do some work as far as getting Gentoo booting on Amazon’s Elastic Compute Cloud. Building Android takes for-freaking-ever, so I figured.. why not do it in the cloud!? It works splendidly, and it is fast.

So that covers new tricks. You mentioned goals and ideas?!

First, time to get myself off the slacker wagon and back to doing something useful. I no longer repulse at the idea of developing when I get home. That helps =p. One of the first things I want to spend some time addressing is disk encryption in Gentoo. I wrote here pertaining to the state of loop-aes. Both Loop-AES and Truecrypt need to spend a little time under the microscope as to how they should be handled within Gentoo. I’ll write more on this later when I have all my ducks in a row. I have no doubt that this will be a fun topic.

I also want to look into how a language like Go fits into Gentoo. Go has its own build system (no Makefiles, configure scripts, or anything else) that DOES have a notion of things like CFLAGS. It also has the ability to “go get” a package and install it. Those curious can check out their website. All of this leads to interesting questions from a package management point of view. I am inclined to think that Go is around to stay. I hope it is. So we may as well start looking into this now rather than later. As my father used to tell me all the time, “Proper Prior Planning Prevents Piss Poor Performance.” Time to plan =).

That is, right after I sort out the fiasco that is my bug queue. *facepalm*

[1] http://golang.com

October 15, 2012
Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
box down (October 15, 2012, 07:08 UTC)

my main gentoo workstation is down. no more documentation updates from me for awhile.

it seems the desktop computer’s video card has finally bitten the dust. the monitor comes up as “no input detected” despite repeated reboots. so now i’m faced with a decision: throw in a cheap, low-end GFX card as a stopgap measure, or wash my hands of 3 to 6 years of progressive hardware failure, and do a complete rebuild. last time i put anything new in the box was probably back in 2009…said (dead) GFX card, and a side/downgraded AMD CPU. might be worth building an entirely new machine from scratch at this point.

i haven’t bothered to pay attention to the AMD-vs-Intel race for the last few years, so i’m a bit at a loss. i’ll check TechReport, SPCR, NewEgg, and all those sites, but…not being at all caught up on the bang-for-buck parts…is a bit disconcerting. i used to follow the latest trends and reviews like a true technoweenie.

and now, of course, i’m thinking in terms of what hardware lends itself to music production — USB/Firewire ports, bus latency, linux driver status for crucial bits; things like that. all very challenging to juggle after being out of it for so long.

so, who’s built their own PC lately? what’d ya use?

October 14, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Gentoo Hardened progress meeting (October 14, 2012, 13:00 UTC)

Not that long ago we had our monthly Gentoo Hardened project meeting (on October 3rd to be exact). On these meetings, we discuss the progress of the project since the last meeting.

For our toolchain domain, Zorry reported that the PIE patchset is updated for GCC, fixing bug #436924. Blueness also mentioned that he will most likely create a separate subproject for the alternative hardened systems (such as mips and arm). This is mostly for management reasons (as the information is currently scattered throughout the Gentoo project at large).

For the kernel domain, since version 3.5.4-r2 (and higher), the kernexec and uderef settings (for grSecurity) should no longer impact performance on virtualized platforms (when hardware acceleration is used of course), something that has been bothering Intel-based systems for quite some time already. Also, the problem with guest systems immediately reserving (committing) all memory on the host should be fixed with recent kernels as well. Of course, this is only true as long as you don’t sanitize your memory, otherwise all memory gets allocated regardless.
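
For reference, these two settings correspond to the following PaX options in the kernel configuration (the option names come from the grsecurity/PaX patches; where exactly they appear in the config menus may vary between kernel versions):

CONFIG_PAX_KERNEXEC=y
CONFIG_PAX_MEMORY_UDEREF=y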

In the SELinux subproject, we now have live ebuilds allowing users to pull in the latest policy changes directly from the git repository where we keep our policy. Also, we will see a high commit frequency in the next few weeks (or perhaps even months) as Fedora’s changes are being merged with upstream. Another change is that our patchbundles no longer contain all individual patches, but a merged patch. This reduces the deployment time of a SELinux policy package considerably (up to 30% faster, since patching now takes only a second or less). And finally, the latest userspace utilities are in the hardened-dev overlay ready for broader testing.

grSecurity is still focusing on the XATTR-based PaX flags. The eclass (pax-utils) has been updated, and we will now be looking at supporting the PaX extended attributes for file systems such as tmpfs.
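
As an illustration, marking a binary through extended attributes looks roughly like this (a sketch: setfattr and getfattr come from sys-apps/attr, user.pax.flags is the attribute used by the XATTR-based approach, and /usr/bin/example is a placeholder path):

# setfattr -n user.pax.flags -v "m" /usr/bin/example
# getfattr -n user.pax.flags /usr/bin/example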

For profiles, people will notice that in the next few weeks we will be dropping the (extremely) old SELinux profiles, as the current ones were marked stable a long time ago.

In the system integrity domain, IMA is being worked on (packages and documentation) after which we’ll move to the EVM support to protect extended attributes.

And finally, klondike held a good talk about Gentoo Hardened at the Flossk conference in Kosovo.

All in all a good month of work, again with many thanks to the volunteers that are keeping Gentoo Hardened alive and kicking!

Matthew Thode a.k.a. prometheanfire (homepage, stats, bugs)
VLAN trunking to KVM VMs (October 14, 2012, 05:00 UTC)

Why this is needed

In testing Linux bridging I noticed a problem that took me much longer than I feel comfortable admitting. You cannot break out the VLANs from a physical device and also use that physical device (attached to a bridge) to forward the entire trunk to a set of VMs. The reason this occurs is that once Linux starts inspecting an interface for VLANs to split them out, it discards all those you do not have defined, so you have to trick it.

Setup

I had my trunk on eth1. What you need to do is directly attach eth1 to a bridge (vmbr1). This bridge now has the entire trunk associated with it. Here's the fun part: you can break out VLANs on the bridge, so you would have an interface for VLAN 13 named vmbr1.13, which you then attach to a bridge, allowing you to have a group of machines only exposed to VLAN 13.

The networking goes like this.

               /-> vmbr1.13 -> vmbr13 -> VM2
eth1 -> vmbr1 ---> VM1
               \-> vmbr1.42 -> vmbr42 -> VM3

Example

Here is the script I used with proxmox (you can set up the bridge in proxmox, but not the source of the bridge's data, the 'input'). This is for VLANs 2-13 and assumes you have vyatta set up the target bridges. I had this start at boot (via rc.local).

vconfig add vmbr1 2
vconfig add vmbr1 3
vconfig add vmbr1 4
vconfig add vmbr1 5
vconfig add vmbr1 6
vconfig add vmbr1 7
vconfig add vmbr1 8
vconfig add vmbr1 9
vconfig add vmbr1 10
vconfig add vmbr1 11
vconfig add vmbr1 12
vconfig add vmbr1 13
ifconfig eth1 up
ifconfig vmbr1 up
ifconfig vmbr1.2 up
ifconfig vmbr1.3 up
ifconfig vmbr1.4 up
ifconfig vmbr1.5 up
ifconfig vmbr1.6 up
ifconfig vmbr1.7 up
ifconfig vmbr1.8 up
ifconfig vmbr1.9 up
ifconfig vmbr1.10 up
ifconfig vmbr1.11 up
ifconfig vmbr1.12 up
ifconfig vmbr1.13 up
brctl addif vmbr1 eth1
brctl addif vmbr2 vmbr1.2
brctl addif vmbr3 vmbr1.3
brctl addif vmbr4 vmbr1.4
brctl addif vmbr5 vmbr1.5
brctl addif vmbr6 vmbr1.6
brctl addif vmbr7 vmbr1.7
brctl addif vmbr8 vmbr1.8
brctl addif vmbr9 vmbr1.9
brctl addif vmbr10 vmbr1.10
brctl addif vmbr11 vmbr1.11
brctl addif vmbr12 vmbr1.12
brctl addif vmbr13 vmbr1.13
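
For what it's worth, the same VLAN breakout can also be expressed with iproute2 instead of the older vconfig tool; a sketch for a single VLAN, assuming the same interface and bridge names as above:

# create the VLAN subinterface on top of the trunk bridge
ip link add link vmbr1 name vmbr1.13 type vlan id 13
ip link set vmbr1.13 up
# attach it to the per-VLAN bridge as before
brctl addif vmbr13 vmbr1.13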

October 13, 2012
Patrick Lauer a.k.a. bonsaikitten (homepage, stats, bugs)
Reanimating #gentoo-commits (October 13, 2012, 13:58 UTC)

Today I got annoyed with the silence in #gentoo-commits and spent a few hours fixing that. We have a bot reporting ... well, I hope all commits, but I haven't tested it enough.

So let me explain how it works so you can be very amused ...

First stage: Get notifications
Difficulty: I can't install postcommit hooks on cvs.gentoo.org
Workaround: gentoo-commits@lists.gentoo.org emails
Code (procmailrc):

:0:
* ^TO_gentoo-commits@lists.gentoo.org
{
  :0 c
  .maildir/.INBOX.gentoo-commits/

  :0
  | bash ~/irker-wrapper.sh
}
So this runs all mails that come from the ML through a script, and puts a copy into a subfolder.

Second stage: Extracting the data
Difficulty: Email is not a structured format
Workaround: bashing things with bash until happy
Code (irker-wrapper.sh):
#!/bin/bash
# irker wrapper helper thingy

# read the notification mail from stdin and pull out the VCS headers we need
while read line; do
        # echo $line # debug
        echo $line | grep -q "X-VCS-Repository:" && REPO=${line/X-VCS-Repository: /}
        echo $line | grep -q "X-VCS-Committer:"  && AUTHOR=${line/X-VCS-Committer:/}
        echo $line | grep -q "X-VCS-Directories:"  &&  DIRECTORIES=${line/X-VCS-Directories:/}
        echo $line | grep -q "Subject:"  && SUBJECT=${line/Subject:/}
        EVERYTHING+=$line
        EVERYTHING+="\n"
done

COMMIT_MSG=`echo -e $EVERYTHING | grep "Log:" -A1 | grep -v "Log:"`

ssh commitbot@lolcode.gentooexperimental.org "{\"to\": [\"irc://chat.freenode.net/#gentoo-commits\"], \"privmsg\": \"$REPO: ${AUTHOR} ${DIRECTORIES}: $COMMIT_MSG \"}"
Why the ssh stuff? Well, the server where the mails arrive is a bit restricted, and it's hard to run a daemon there 'n stuff, so let's just pipe it somewhere more liberal.

Third stage: Sending the notifications
Difficulty: How to communicate with irkerd?
Workaround: nc, a hammer, a few thumbs
Code:
#!/bin/bash

echo $@ | nc --send-only  127.0.0.1 6659
And that's how the magic works.

Bonus trick: using command="" in ~/.ssh/authorized_keys

... and now I really need a beer :)

October 12, 2012
Raúl Porcel a.k.a. armin76 (homepage, stats, bugs)
Beaglebone documentation updated (October 12, 2012, 17:06 UTC)

Hi all,

I’ve got some reports that my Beaglebone guide is outdated and causing some trouble regarding the bootloader and kernel.

While the vanilla 3.6.1 kernel doesn’t support the Beaglebone, U-Boot 2012.10-rc3 does support it, so i’ve tested all the changes and updated the guide accordingly.

You can find it in http://dev.gentoo.org/~armin76/arm/beaglebone/install.xml
Some changes i’ve noticed in almost a year since i did the documentation:

  • The bug (by design they said) which made the USB port stop working after unplugging a device (check my post about the Beaglebone) is now fixed
  • CPU scaling is working, although the default governor is ‘userspace’. The default speed with this governor is:

a) 600MHz if powering it using a PSU through the 5V power connector; remember that the maximum speed of the Beaglebone is 720MHz

b) 500MHz if powering it using the mini-USB port

Have fun


October 08, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The Keynote speaker (October 08, 2012, 12:22 UTC)

The Keynote speaker for the Bootstrapping Awesome co-hosted conferences is going to be Agustin Benito Bethencourt. Agustin is currently working in Nuremberg, Germany as the openSUSE Team Lead at SUSE, and in the Free Software community he’s mostly known for his contributions to KDE and especially the KDE e.V. He is a very interesting guy, with a lot of experience in FOSS, both from the community and the enterprise POV, which is also the reason I asked him to do the Keynote. I enjoy working with him a lot on organizing this conference; his experience is valuable. In this interview he talks a bit about himself, and a lot about the subject of his Keynote, the conference, openSUSE and SUSE, and about Free Software. The interview was done inside the SUSE office in Prague, with me being the “journalist” and Michal being the “camera-man”. Post-processing was done by Jos. More interviews from other speakers are about to come, so stay tuned! Enjoy!

I’m writing this post in Italian because it is intended only for Italian people.

For some time now we have had the idea of working with git for the translation of the Gentoo documentation from English to Italian.
There are already many of us, but with more translators we could produce much more.
No technical skills are required, just a minimal knowledge of English.

References:
http://dev.gentoo.org/~ago/trads-it.xml
http://dev.gentoo.org/~ago/howtohelp.xml
http://www.gentoo.org/doc/it/xml-guide.xml

If anything in these documents is unclear, don’t hesitate to contact me.

Anyone interested in contributing can write to me at ago@gentoo.org, preferably adding the [docs-it] tag at the beginning of the subject, or simply by clicking here.

September 29, 2012
Mike Gilbert a.k.a. floppym (homepage, stats, bugs)
Slot-operator deps for V8 (September 29, 2012, 03:11 UTC)

The recently approved EAPI 5 adds a feature called "slot-operator dependencies" to the package manager specification. Once these dependencies are implemented in the portage tree, the package manager will be able to automatically trigger package rebuilds when library ABI changes occur. Long-term, this will greatly reduce the need for revdep-rebuild.

If you are a Chromium user on Gentoo and you don't use portage-2.2, you have probably noticed that we are using the "preserve_old_lib" kludge so that your web browser doesn't break every time you upgrade the V8 Javascript library. This leaves old versions of V8 installed on your system until you manually clean them up. With slot-operator deps, we can eliminate this kludge since portage will have enough information to know it needs to rebuild chromium automatically. It's pretty neat.
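
For the curious, a slot-operator dependency in an ebuild looks roughly like this (an illustrative fragment, not the actual chromium ebuild):

EAPI=5
# The ":=" operator records the slot/sub-slot of v8 that the package was
# built against; when v8's sub-slot (its ABI) changes, the package manager
# knows the package must be rebuilt.
DEPEND="dev-lang/v8:="
RDEPEND="${DEPEND}"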

I have forked the dev-lang/v8 and www-client/chromium ebuilds into my overlay to test this new feature; we can't really apply it in the main portage tree until a new enough version of portage has been stabilized. I will be maintaining the latest chromium dev channel release, plus a couple of versions of v8 in my overlay.

If you would like to try it out, you can install my overlay with layman -a floppym. Once you've upgraded to the versions in my overlay, upgrading/downgrading dev-lang/v8 should automatically trigger a chromium rebuild.

If you run into any issues, please file a bug.

September 28, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Debugging SELinux file context mismatches (September 28, 2012, 08:52 UTC)

I originally posted the question on gentoo-hardened ML, but Sven Vermeulen advised me to file a bug, so there it is: bug #436474.

The problem I hit is that my ~/.config/chromium/ directory should have unconfined_u:object_r:chromium_xdg_config_t context, but it has unconfined_u:object_r:xdg_config_home_t instead.

I could manually force the "right" context, but it turned out even removing the directory in question and allowing the browser to re-create it still results in wrong context. Looks like something deeper is broken (maybe just on my system), and fixing the root cause is always better. After all, other people may hit this problem too.
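
For reference, this is roughly how to inspect and manually reset the label (matchpathcon and restorecon are standard SELinux userspace tools; the path matches my setup above):

$ ls -Zd ~/.config/chromium          # show the current context
$ matchpathcon ~/.config/chromium    # show what the policy expects
# restorecon -Rv ~/.config/chromium  # relabel according to the policy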

Here are the error messages that appear on chromium launch:


$ chromium
[2557:2557:1727940797:ERROR:process_singleton_linux.cc(263)] Failed to
create /home/ph/.config/chromium/SingletonLock: Permission denied
[2557:2557:1727941544:ERROR:chrome_browser_main.cc(1552)] Failed to
create a ProcessSingleton for your profile directory. This means that
running multiple instances would start multiple browser processes rather
than opening a new window in the existing process. Aborting now to avoid
profile corruption.

And SELinux messages:

# audit2allow -d
#============= chromium_t ==============
allow chromium_t xdg_config_home_t:file create;
allow chromium_t xdg_config_home_t:lnk_file { read create };

[ 107.872466] type=1400 audit(1348505952.982:67): avc: denied { read
} for pid=2166 comm="chrome" name="SingletonLock" dev="sda1" ino=522327
scontext=unconfined_u:unconfined_r:chromium_t
tcontext=unconfined_u:object_r:xdg_config_home_t tclass=lnk_file
[ 107.873916] type=1400 audit(1348505952.983:68): avc: denied {
create } for pid=2178 comm="Chrome_FileThre"
name=".org.chromium.Chromium.ZO3dGF"
scontext=unconfined_u:unconfined_r:chromium_t
tcontext=unconfined_u:object_r:xdg_config_home_t tclass=file

If you have any ideas how to further debug it, or how to solve it, please share (e.g. comment on the bug or send me an e-mail). Thanks!

September 27, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: FAQ (September 27, 2012, 12:04 UTC)

All common questions regarding travelling, transportation, event details, sightseeing and much more, in this Frequently Asked Questions page. Feel free to ask more questions, so we can include them in the FAQ and make it more complete.

David Abbott a.k.a. dabbott (homepage, stats, bugs)
epatch_user to the rescue ! (September 27, 2012, 09:38 UTC)

I was updating one of my boxens and ran into Bug 434686. In the bug, Martin describes a simple way we as users can apply bug-fix patches to packages that fail to build. This post is more than anything a reminder for me on how to do it. epatch_user has been blogged about before; dilfridge talks about it and says "A neat trick for testing patches in Gentoo (source-based distros are great!)".

As Martin explained in the bug and with the patch supplied by Liongene, here is how it works!

# mkdir -p /etc/portage/patches/net-print/cups-filters-1.0.24
# wget -O /etc/portage/patches/net-print/cups-filters-1.0.24/cups-filters-1.0.24-c++11.patch 'https://434686.bugs.gentoo.org/attachment.cgi?id=323788'
# emerge -1 net-print/cups-filters

Now that is cool :)
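
More generally, epatch_user picks up *.patch files from a few directory forms under /etc/portage/patches, from most to least specific (a sketch; the exact set of recognized directories is described in the eutils eclass):

/etc/portage/patches/net-print/cups-filters-1.0.24/   # category/package-version
/etc/portage/patches/net-print/cups-filters/          # category/package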

September 26, 2012
Hans de Graaff a.k.a. graaff (homepage, stats, bugs)

I've just updated the text on the Gentoo Wiki page on Ruby 1.9 to indicate that we now support eselecting ruby19 as the default ruby interpreter. This has not been tested extensively, so there may still be some problems with it. Please open bugs if you run into problems.
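
For reference, the switch itself is done with eselect's ruby module (ruby19 must of course be installed first):

# eselect ruby list
# eselect ruby set ruby19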

Most packages are now ready for ruby 1.9. If your favorite packages are not ready yet, please file a bug as well. We expect to make ruby 1.9 the default ruby interpreter in a few months time at the most. Your bug reports can help speed that up.

On a related note, we will be masking Ruby Enterprise Edition (ree18) shortly. With Ruby 1.9 now stable and well-supported we no longer see the need to also provide Ruby Enterprise Edition. This is also upstream's advice. On top of this, the last few releases of ree18 never worked properly on Gentoo due to threading issues, and these are currently already hard-masked.

Since we realize people may depend on ree18 and migration to ruby19 may not be straightforward, we intend to move slowly here. Expect a package mask within a month or so, and instead of the customary month we probably won't remove ree18 until after three months or so. That should give everyone plenty of time to migrate.

Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5-hdepend (September 26, 2012, 05:04 UTC)

In portage-2.1.11.22 and 2.2.0_alpha133 there’s support for experimental EAPI 5-hdepend, which adds the HDEPEND variable used to represent build-time host dependencies. For build-time target dependencies, use DEPEND (if the host is the target, then both HDEPEND and DEPEND will be installed on it). There’s a special “targetroot” USE flag that will be automatically enabled for packages that are built for installation into a target ROOT, and will otherwise be automatically disabled. This flag may be used to control conditional dependencies, and ebuilds that use this flag need to add it to IUSE unless it happens to be included in the profile’s IUSE_IMPLICIT variable.
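
A sketch of how an ebuild might use the new variable (a hypothetical fragment, not from any real package):

EAPI=5-hdepend

# tools that must run on the build host, e.g. a code generator
HDEPEND="dev-util/gperf"
# libraries that must be present in the target ROOT
DEPEND="sys-libs/zlib"
# list the automatically managed flag unless it is in IUSE_IMPLICIT
IUSE="targetroot"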

For those who may not be familiar with the history of HDEPEND, it was originally suggested in bug #317337. That was in 2010, and later that year there was some discussion about it on the chromium-os-dev mailing list. Recently, I suggested on the gentoo-dev mail list that it be included in EAPI 5, but it didn’t make it in. Since then, there’s been some renewed effort, and now the patch is included in mainline Portage.

September 24, 2012
Richard Freeman a.k.a. rich0 (homepage, stats, bugs)
Gentoo EC2 Tutorial / Bootstrapping (September 24, 2012, 14:20 UTC)

I want to accomplish a few things with this post.

First, I’d like to give more attention to the work recently done by edowd on Bootstrapping Gentoo in EC2.

Second, I’d like to introduce a few enhancements I’ve made on these (some being merged upstream already).

Third, I’d like to turn this into a bit of a tutorial on getting started with EC2, since these scripts make it brain-dead simple.

I’ve previously written on building a Gentoo EC2 image from scratch, but those instructions do not work on EBS instances without adjustment, and they’re fairly manual. Edowd extended this work by porting to EBS and writing scripts to build a gentoo install from a stage3 on EC2. I’ve further extended this by adding a rudimentary plugin framework so that this can be used to bootstrap servers for various purposes – I’ve been inspired by some of the things I’ve seen done with Chef and while that tool doesn’t fit perfectly with the Gentoo design this is a step in that direction.

What follows is a step-by-step howto that assumes you’re reading this on Gentoo and little else, and ends up with you at a shell on your own server on EC2. Those familiar with EC2 can safely skim over the early parts until you get to the git clone step.

  1. To get started, go to aws.amazon.com, and go through the steps of creating an account if you don’t already have one. You’ll need to specify payment details/etc. If you buy stuff from amazon just use your existing account (if you want), and there isn’t much more than enabling AWS.
  2. Log into aws.amazon.com, and from the top right corner drop-down under either your name or My Account/Console choose “Security Credentials”.
  3. Browse down to access credentials, click on the X.509 certificate tab, generate a certificate, and then download both the certificate and private key files. The web services require these to do just about anything on AWS.
  4. On your gentoo system run as root emerge ec2-ami-tools ec2-api-tools. This installs the tools needed to script actions on EC2.
  5. Export into your environment (likely via .bashrc) EC2_CERT and EC2_PRIVATE_KEY. These should contain the paths to the files you created in the previous step (see the sketch after this list). Congratulations – any of the ec2-api-tools should now work.
  6. We’re now going to checkout the scripts to build your server. Go to an empty directory and run git clone git://github.com/rich0/rich0-gentoo-bootstrap.git -b rich0-changes.
  7. chdir to the repository directory if necessary, and within it run ./setup_build_gentoo.sh. This creates security zones and ssh keys automatically for you, and at the end outputs command lines that will build a 32 or 64 bit server. The default security zone will accept inbound connections to anywhere, but unless you’re worried about an ssh zero-day that really isn’t a big deal.
  8. Run either command line that was generated by the setup script. The parameters tell the script what region to build the server in, what security zone to use, what ssh public key to use, and where to find the private key file for that public key (it created it for you in the current directory).
  9. Go grab a cup of coffee – here is what is happening:
    1. A spot request is created for a half decent server to be used to build your gentoo image. This is done to save money – amazon can kill your bootstrap server if they need it, and you’ll get the prevailing spot rate. You can tweak the price you’re willing to pay in the script – lower prices mean more waiting. Right now I set it pretty high for testing purposes.
    2. The script waits for an instance to be created and boot. The build server right now uses an amazon image – not Gentoo-based. That could be easily tweaked – you don’t need anything in particular to bootstrap gentoo as long as it can extract a stage3 tarball.
    3. A few build scripts are scp’ed to the server and run. The server formats an EBS partition for gentoo and mounts it.
    4. A stage3 and portage snapshot are downloaded and extracted. Portage config files (world, make.conf, etc) are populated. A script is created inside the EBS volume, and executed via chroot.
    5. That script basically does the typical handbook install: emerge sync, update world (which has all the essentials in it, like dhcpcd and so on), build a kernel, configure rc files, etc.
    6. The bootstrap server terminates, leaving behind the EBS volume containing the new gentoo image. A snapshot is created of this image and registered as an AMI.
    7. A micro instance of the AMI is launched to test it. After successful testing it is terminated.
  10. After the script is finished check the output to see that the server worked. If you want it outputs a command line to make the server public – otherwise only you can see/run it.
  11. To run your server go to aws.amazon.com, sign in if necessary, browse to the EC2 dashboard. Click on AMIs on the left side, select your new gentoo AMI, and launch it (micro instances are cheap for testing purposes). Go to instances on the left side and hit refresh until your instance is running. Click on it and look down in the details for the public DNS entry.
  12. To connect to your instance run ssh -i <path to pem file in your bootstrap directory> ec2-user@<public DNS name of your server>. You can sudo to root (no password).
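
Here is a minimal sketch of the environment setup from step 5 (the file names are placeholders; point the variables at the certificate and private key you downloaded):

# in ~/.bashrc
export EC2_CERT="$HOME/.ec2/cert.pem"
export EC2_PRIVATE_KEY="$HOME/.ec2/pk.pem"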

That’s it – you have a server in the cloud. When you’re done be sure to clean up to avoid excessive charges (a few cents an hour can add up). Check the instances section and TERMINATE (not stop) any instances that are there. You will be billed by the month for storage so de-register AMIs you don’t need and go to the snapshot section and delete their corresponding snapshots.

Now, all that is useful, but you probably want to tailor your instance. You can of course do that interactively, but if you want to script it check out the plugins in the plugin directory. Just add a path to a plugin file at the end of the command line to build the instance and it will tailor your image accordingly. I plan to clean up the scripts a bit more to move anything discretionary into the plugins (you don’t NEED fcron or atop on a server).

The plugins/desktop plugin is a work in progress, but I think it should work now (takes the better part of a day to build). It only works 32-bit right now due to the profile line. However, if you run it you should be able to connect with x2goclient and have a KDE virtual desktop. A word of warning – a micro instance is a bit underpowered for this.

And on a side note, if somebody could close bugs 427722 and 423855 that would eliminate two hacks in my plugin. The stable NX doesn’t work with x2go (I don’t know if it works for anything else), and the stable gst-plugins-xvideo is missing a dependency. The latter bug will bite anybody who tries to install a clean stage3 and emerge kde-meta.

All of this is very much a work in progress. Patches or pull requests are welcome, and edowd is maintaining a nice set of up-to-date gentoo images for public use based on his scripts.



September 22, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)
preserve-libs now available in Portage 2.1 branch (September 22, 2012, 05:22 UTC)

EAPI 5 includes support for automatic rebuilds via the slot-operator and sub-slots, which has potential to make @preserved-rebuild unnecessary (see Diego’s blog post regarding symbol collisions and bug #364425 for some examples of @preserved-rebuild shortcomings). Since this support for automatic rebuilds has potential to greatly improve the user-friendliness of preserve-libs, I have decided to make preserve-libs available in the 2.1 branch of portage (beginning with portage-2.1.11.20). It’s not enabled by default, so you’ll have to set FEATURES="preserve-libs" in make.conf if you want to enable it. After EAPI 5 and automatic rebuilds have gained widespread adoption, I might consider enabling preserve-libs by default.

September 20, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)

In portage-2.1.11.19 and 2.2.0_alpha130 there’s support for EAPI 5, which implements all of the features that were approved by the Gentoo Council for EAPI 5. There are no differences since EAPI 5_pre2.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Stabilization hiccup with dev-perl/net-server-2.6.0 (September 20, 2012, 15:35 UTC)

What happened?

Sep 13th I stabilized net-analyzer/munin-2.0.5-r1 (security bug #412881). I use automated repoman checks and USE="-ipv6", and everything was fine at the time I committed the stabilization (also, note there was no mention of net-server in that security bug).

Sep 14th Seraphim Mellos filed bug #434978 about munin pulling in ~arch net-server.

Sep 16th the x86@ team was re-added to security bug #412881. Meanwhile Mr_Bones_ pinged me on irc. Also, Diego Elio Pettenò (flameeyes) filed bug #435242 against repoman for not catching the dependency problem.

Sep 17th I stabilized dev-perl/net-server-2.6.0 on x86, fixing the immediate problem.

Sep 18th the repoman fix has been released in portage-2.1.11.18 and 2.2.0_alpha129.

Now the only remaining thing to do is pushing the portage/repoman fix to stable. I especially like how quickly the fix for the root cause (the repoman check) was produced and released.

September 18, 2012
Jeremy Olexa a.k.a. darkside (homepage, stats, bugs)
Gentoo: IPSec, L2TP VPN for iOS (September 18, 2012, 13:07 UTC)

There are thousands of guides out there on this subject, however I still struggled to set up an IPSEC VPN at first. This is a HOWTO for my own benefit – maybe someone else will use it too. I struggled because most of the guides involved setting up the VPN on a NAT’d host and connecting to the VPN inside the network. I didn’t do that on my linode, which has a static public IP.

My objectives were clear:

  1. Create a connection point that was semi-secure while connecting to open wifi networks
  2. Bypass some “You are not in the US” restrictions while on the road

Step 1: Install the applications: net-misc/openswan and net-dialup/xl2tpd
Step 2: Configure openswan:

# cat /etc/ipsec.conf 
config setup
    nat_traversal=yes
    virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:!10.152.2.0/24
    oe=off
    protostack=auto

conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    rekey=no
    ikelifetime=8h
    keylife=1h
    type=transport
    left=1.1.1.1
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any
    dpddelay=15
    dpdtimeout=30
    dpdaction=clear
# cat /etc/ipsec.secrets
1.1.1.1 %any: PSK "TestSecret"

Where 1.1.1.1 is your public eth0 address and 10.152.2.0 is the subnet that xl2tpd will assign IPs from (it can be anything; I picked this on the advice of a guide because it is unlikely to be assigned by a router on a public network)

Step 3: Configure xl2tpd:

# cat /etc/xl2tpd/xl2tpd.conf
[global]
ipsec saref = no

[lns default]
ip range = 10.152.2.2-10.152.2.254
local ip = 10.152.2.1
require chap = yes
refuse pap = yes
require authentication = yes
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

The local IP must be inside the subnet but outside the IP range above.

# cat /etc/ppp/options.xl2tpd
refuse-mschap-v2
refuse-mschap
ms-dns 8.8.8.8
ms-dns 8.8.4.4
asyncmap 0
auth
lock
hide-password
local
#debug
name l2tpd
proxyarp
lcp-echo-interval 30
lcp-echo-failure 4

The ms-dns lines are configurable to any DNS server you have access to.

# cat /etc/ppp/chap-secrets
# Format:
# client server secret IP-addresses
#
# Two lines are needed since it is two-sided auth
test l2tpd testpass *
l2tpd test testpass *

Step 4: Configure kernel parameters (sysctl)

# cat /etc/sysctl.conf
# only values specific for ipsec/l2tp functioning are shown here. merge with
# existing file
# iPad VPN
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.icmp_ignore_bogus_error_responses = 1

Remember that sysctl.conf is evaluated at boot so run sysctl -p to get the settings enabled now as well.

Step 5: Configure firewall (iptables):
This is the critical step that I wasn’t grokking from the existing guides in the wild. Even when bringing the firewall down to test, you need the NAT/forwarding rules:

# iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -A FORWARD -s 10.152.2.0/24 -j ACCEPT
# iptables -A FORWARD -j REJECT
# iptables -t nat -A POSTROUTING -s 10.152.2.0/24 -o eth0 -j MASQUERADE
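
If you want these rules to survive a reboot (assuming you use Gentoo's standard iptables init script rather than another firewall manager):

# /etc/init.d/iptables save
# rc-update add iptables default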

Step 6: Configure the device/client:
Settings -> General -> Network -> VPN -> Add VPN Configuration

L2TP
Description: Description
Server: 1.1.1.1 (or the hostname)
Account: test
RSA SecurID=OFF
Password: testpass
Secret: TestSecret
Send All Traffic=On

Step 7: Verify it works by going to some IP display webpage and it should show 1.1.1.1

Conclusion: The above examples should be enough to get the VPN working. There are some tweaking opportunities that I didn’t document or elaborate on; there are plenty of examples out there to look at or research, however. This was all set up without the firewall configuration at first, and the client would connect but there would be no onward internet activity. It acted just like there was an invalid DNS server configured; at that point I looked into setting up a NAT, dnsmasq on the local interface, and other weird things. In the end, I just needed to forward the traffic properly.

With that knowledge of the firewall issue, the ultimate instructions would probably be this page: https://www.openswan.org/projects/openswan/wiki/L2TPIPsec_configuration_using_openswan_and_xl2tpd

September 14, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: room names (September 14, 2012, 16:36 UTC)

As you probably have seen in the schedule, we have multiple rooms with ugly names from the university, like 107, 155 or 349. We would like to rename them during the conference so people can remember them more easily. So try your creativity and send us some ideas!

September 13, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: The schedule (September 13, 2012, 14:47 UTC)

The Call for Papers has ended and the schedule is now up for the four-in-one event that is gonna take place soon in Prague. The full schedule of all the co-hosted conferences can be found here! Don’t forget to register!

Gentoo Miniconf: It will take place on Saturday and Sunday with a plethora of amazing talks by experienced Developers and Contributors, all around Gentoo, targeting both desktop and server environments!

On Saturday morning Fabian Groffen, Gentoo Council member, along with Robin H. Johnson, member of the Board of Trustees, will give us a quick view of how those two highest authorities manage the whole project. Afterwards there are going to be a few talks regarding various topics, like managing your home directory, the KDE team workflow, the important topic of Security and a benchmarking suite, all performed by important people for the project. A cool Catalyst workshop will be next, followed by a workshop regarding Gentoo Prefix, and at the end we’re going to participate on BoFs regarding the Infrastructure and the Gentoo PR, which will cover hot topics, like the Git migration and our website. 

On Sunday we’ll see how a large company (IsoHunt) uses Gentoo, the tools it has developed and the problems it has encountered. Then, a cool talk about 3D games and graphic performance is going to take place, followed by a presentation on SHA1 and OpenPGP, which is the precursor of the Key Signing Party!! The second part of the Catalyst workshop is next, along with a Puppet workshop. At the end there are again two BoFs, the first about automated testing and the second about how we can grab more contributors and enlarge our cool project.

And a sneak peek on the other co-hosted conferences:

Future Media, which will be held on Saturday is a special feature track talking about the influence of developments in technology, social media and design on society. It will have talks like the future of Wikipedia and Open Data in general by Lydia Pintscher or using FOSS and open hardware for disaster relief by Shane Couglan.

The first day of the openSUSE Conference, Michael Meeks will tell you all about what’s new in LibreOffice, Klaas Freitag will give everyone a peek under the hood of ownCloud and, for the more technical users, Stefan Seyfried will show you how to crash the Linux kernel for fun and backtraces. Saturday night there’ll be a good party and the next day musician Sam Aaron will talk about Zen and how to live-program music like he did during the party. Later, Libor Pecháček will explain the process of getting software from the community into commercial enterprises and at the end of the day Miguel Angel Barajas Watson will show us how a computer could win Jeopardy using SUSE, Power and Hadoop. The openSUSE event continues on Monday and Tuesday with many workshops and BoF sessions planned, as well as a few large-room discussions about the future of the openSUSE development and release process.

On Saturday the LinuxDays track features a number of Czech talks, like an introduction to Gentoo by Tomáš Chvátal with his talk titled “if it moves, compile it!” (‘Pokud se to hýbe, zkompiluj to!’). Fedora is represented by Jiří Eischmann & Jaroslav Řezník later in the day. There are also a few real ninja-style talks, like Petr Baudiš on low-level programming and Thomas Renninger on modern CPU power usage monitoring (these are both in English). During Saturday there will also be a track of graphics workshops in Czech (Gimp, Inkscape, Scribus) followed by a 3D printing workshop (reprap!). Sunday is kicked off by Vojtěch Trefný explaining how to use Canonical’s Launchpad as a place to host your project (CZ). Those interested in networking will be taken care of by Pavel Šimerda (news from Linux networking) and Radek Neužil, who explains how to use networks securely (both CZ). You can also learn all about how to set up a Linux desktop/server solution for educational purposes (EN) and follow Vladimír Čunát talking about NixOS and the unique package manager this OS is built on. The LinuxDays track will be closed by Petr Krčmář (chief editor of root.cz) and Tomáš Matějíček (author of Slax) talking about the future of Slax (CZ).

Find your way to your favorite talks. Come on, it’s easy!

September 12, 2012
Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5_pre2 (September 12, 2012, 08:47 UTC)

In portage-2.1.11.16 and 2.2.0_alpha127 there’s support for EAPI 5_pre2, which implements all of the features that were approved for EAPI 5 in the Gentoo Council meeting on September 11. The only difference from EAPI 5_pre1 is that the “user patches” feature has been removed.

September 11, 2012
Josh Saddler a.k.a. nightmorph (homepage, stats, bugs)
initramfs documentation updates (September 11, 2012, 23:31 UTC)

i just finished hacking on our XML for the month. several months ago, sven mentioned the changes needed to get the handbooks updated with initramfs/initrd instructions for separate /usr partitions. it took me a few hours, but i finally closed bug numbers 415175, 434550, 434554, and 434732. thanks to raúl for the patches.

i initially started putting in the patches as-is, but then i noticed that the initramfs descriptions were just copied from the x86+amd64 handbook. so, i stripped them out, and rewrote them as an included section common to all affected architecture handbooks. that <include> is then dynamically inserted by our XML processor, dropping the instructions into the appropriate place, so that there’s no extraneous text duplication.

the raw handbook XML looks something like this:

<pre caption="Installing the kernel">
# <i>cp arch/<keyval id="arch-sub"/>/boot/bzImage /boot/<keyval id="kernel-name"
/></i>
</pre>

</body>
</subsection>
<subsection>
<include href="hb-install-initramfs.xml"/>
</subsection>

</section>

that bit about include href="hb-install-initramfs.xml" fills in the next subsection with whatever we put in the hb-install-initramfs.xml include, which is never viewed by itself. little tricks like this make it much easier to maintain the documentation…we make one change to an include, and it’s propagated to all documents that use it. same goes for things like <keyval> — that variable is set elsewhere in our documentation, so that as kernel versions or ISO sizes change, we can update that value in one place (handbook-$ARCH.xml). every instance of the variable is automatically filled in when you view the handbook in your web browser.

not to say everything was smooth sailing while updating the handbooks…i ran into a few snags. i figured out why my initial commit attempts were blocked by our pre-commit hooks: it’s not that the xml interpreter was giving me spurious errors on each check. (“why you blocking me? i’m head of the project! DON’T YOU KNOW WHO I AM?!”) instead, i forgot a slash in a </body> element. THAT ruined the next 300 lines of code. solution: fix, re-run xmllint --valid --noout, add commit message, push to CVS.

the handbooks are now all set for the new initramfs/initrd mojo for those poor, poor souls mounting /usr on a separate partition/disk. my own partition layout is much simpler; i’ve never needed an initramfs.

September 10, 2012
Steve Dibb a.k.a. beandog (homepage, stats, bugs)

I regularly use monit to monitor services and restart them if needed (and possible). An issue I’ve run into with Gentoo, though, is that openrc doesn’t act as I expect it to. openrc keeps its own record of the state of a service, and doesn’t look at the actual PID to see if it’s running or not. In this post, I’m talking about apache.

For context, it’s necessary to share what my monit configuration looks like for apache.  It’s just a simple ‘start’ for startup and ‘stop’ command for shutdown:

check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 start" with timeout 60 seconds
  stop program = "/etc/init.d/apache2 stop"

When apache gets started, there are two things that happen on the system: openrc flags it as started, and apache creates a PID file.

The problem I run into is when apache dies for whatever reason, unexpectedly.  Monit will notice that the PID doesn’t exist anymore, and try to restart it, using openrc.  This is where things start to go wrong.

To illustrate what happens, I’ll duplicate the scenario by running the command myself.  Here’s openrc starting it, me killing it manually, then openrc trying to start it back up using ‘start’.

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 start
* WARNING: apache2 has already been started

You can see that ‘status’ properly returns that it has crashed, but when running ‘start’, it thinks otherwise. So, even though an openrc status check reports that it’s dead, when running ‘start’ openrc only checks its own internal record to determine the status.

This gets a little weirder in that if I run ‘stop’, the init script will recognize that the process is not running, and resets openrc’s status to stopped. That is actually a good thing, and so it makes running ‘stop’ a reliable command.

Resuming the same state as above, here’s what happens when I run ‘stop’:

# /etc/init.d/apache2 stop
* apache2 not running (no pid file)

Now if I run it again, it checks both the process and the openrc status, and gives a different message, the same one it would give if it were already stopped.

# /etc/init.d/apache2 stop
* WARNING: apache2 is already stopped

So, the problem this creates for me is that if a process has died, monit will not run the stop command, because it’s already dead, and there’s no reason to run it.  It will run ‘start’, which will insist that it’s already running.  Monit (depending on your configuration) will try a few more times, and then just give up completely, leaving your process completely dead.

The solution I’m using is that I will tell monit to run ‘restart’ as the start command, instead of ‘start’.  The reason for this is because restart doesn’t care if it’s stopped or started, it will successfully get it started again.
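
With that change, the monit stanza looks like this (same pidfile as before; only the start command differs):

check process apache with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 restart" with timeout 60 seconds
  stop program = "/etc/init.d/apache2 stop"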

I’ll repeat my original test case, to demonstrate how this works:

# /etc/init.d/apache2 start
# pkill apache2
# /etc/init.d/apache2 status
* status: crashed
# /etc/init.d/apache2 restart
* apache2 not running (no pid file)
* Starting apache2 …

I don’t know if my expectations of openrc are wrong or not, but it seems to me like it relies on its internal status in some cases instead of seeing if the actual process is running. Monit takes on that responsibility, of course, so it’s good to have multiple things working together, but I wish openrc was doing a bit more strict checking.

I don’t know how to fix it, either.  openrc has arguments for displaying debug and verbose output.  It will display messages on the first run, but not the second, so I don’t know where it’s calling stuff.

# /etc/init.d/apache2 -d -v start
<lots of output>
# /etc/init.d/apache2 -d -v start
* WARNING: apache2 has already been started

No extra output on the second one.  Is this even a ‘problem’ that should be fixed, or not?  That’s kinda where I’m at right now, and just tweaking my monit configuration so it works for me.


Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
ffmpeg saves the day (.mts files) (September 10, 2012, 07:17 UTC)

If you need to convert .mts files to .mov (so that e.g. iMovie can import them), I found ffmpeg to be the best tool for the task (I don't want to install and run "free format converters" that are usually Windows-only and come from untrusted sources). This post was inspired by the iMovie and MTS blog post.

First I tried just changing the container:

for x in *.MTS; do ffmpeg -i ${x} -c copy ${x/.MTS/.mov}; done


But QuickTime could not play sound from those files because of the AC-3 codec. Also, the quality of the video playback was very poor. The other command I tried was:

for x in *.MTS; do ffmpeg -i ${x} -vcodec copy -acodec mp2 -ac 2 ${x/.MTS/.mov}; done

Now QuickTime was able to play the sound, but the problems with video remained. iMovie was still unable to import the resulting files, and silently so: I got no error message, just nothing happening when trying to import.

The final command, which has proven to work well, is this:

for x in *.MTS; do ffmpeg -i ${x} -vcodec mpeg1video -acodec mp2 -ac 2 -sameq ${x/.MTS/.mov}; done

The video has been converted perfectly, and iMovie successfully imported the movies. Note the useful bash substitution of extension, ${x/.MTS/.mov}. Enjoy!

September 08, 2012
Anthony Basile a.k.a. blueness (homepage, stats, bugs)

Hi everyone,

I’d like to announce a new initiative within the mips arch team. We are now supporting an xfce4-based desktop system for the Lemote Yeeloong netbook.  The images can be found on any Gentoo mirror, under gentoo/experimental/mips/desktop-loongson2f.  The installation instructions can be found here.  The yeeloong netbook is particularly interesting because it only uses “free” hardware, i.e. hardware which doesn’t require any proprietary code.  It is manufactured by Lemote in China, and distributed and promoted in the US by “Freedom Included”.  It is how Richard Stallman does his computing.

I’m blogging because I thought it was important for Planet Gentoo to know that mips devices are currently being manufactured and used in netbooks as well as embedded systems.  The Gentoo mips team has risen to the challenge of targeting these systems and maintaining natively compiled stage4s for them.  Why stage4s?  And why a full desktop for the yeeloong?  These processors are slow, so going from a stage3 to a desktop takes about three days on the yeeloong.  Also, the yeeloong sports a little endian mips64 processor, the loongson2f, and we support three ABIs: o32, n32 and n64, with n32 being the preferred one.  This significantly increases the time to build glibc and other core packages.  I provide two images, a vanilla one and a hardened one.  The latter adds full hardening (pie, ssp, _FORTIFY_SOURCE=2, bind now, relro) to the toolchain and userland binaries, as we do for amd64 and i686 in hardened Gentoo.  I have not ported over the hardened kernel, however.

I allude above to “other” targeted devices.  I am also maintaining some mips uclibc systems (both hardened and vanilla) which are on the Gentoo mirrors under experimental/mips/uclibc.  But I will speak more of these later, as part of an initiative to maintain hardened uclibc systems on “alternative” architectures such as arm, mips and ppc, as well as amd64 and i686.

You can read the full installation instructions, but here’s a quick summary, since it doesn’t follow the usual Gentoo method of starting from a stage3 (a rough shell sketch follows the list):

  • Prepare either a pen drive or a tftp server with a rescue image: netboot-yeeloong.img
  • Turn on the yeeloong and hit the Del key multiple times until you get the firmware prompt: PMON>
  • If netbooting, add an IP address and point to the netboot-yeeloong.img.  If using a pen drive, point to the image on the drive and boot into the rescue environment.
  • Partition and format the drive.
  • Download the desktop image from a mirror via http or ftp.  It’s about 350 MB in size.
  • Unpack the image.  It contains not only the userland, but also a kernel.
  • Reboot to the PMON> prompt.  Point it at the kernel on the drive.  PMON will remember your choice and you will not have to repeat this step.
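
As promised, here is roughly what those steps might look like from inside the rescue environment.  This is only a sketch: the device name, partition layout, mirror URL and image filename below are placeholders, not real values.

# all placeholders -- substitute your own disk, mirror and image name
parted /dev/sda                  # partition the drive
mkfs.ext4 /dev/sda1              # format the root partition
mount /dev/sda1 /mnt/gentoo
cd /mnt/gentoo
wget http://<mirror>/gentoo/experimental/mips/desktop-loongson2f/<image>.tar.bz2
tar xjpf <image>.tar.bz2         # unpacks the userland plus the kernel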

Once installed, you will log in as an ordinary user with sudo rights, with username and password “gentoo”.  The root password is set to “root”.  It is an ordinary Gentoo system, so edit your make.conf, emerge --sync and add whatever packages you like!  File bugs to: blueness@gentoo.org with a CC to mips@gentoo.org.

If you have a Yeeloong or go out and buy one, consider trying out this image.

September 04, 2012
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, stats, bugs)
Another report from rarely updated system (September 04, 2012, 11:05 UTC)

This is another (second) post about updating a system I rarely updated. If you're interested, read the first post. I recommend more frequent updates, but I also want to show that it's possible to update without re-installing, and how to solve common problems.

September 03, 2012
Doug Goldstein a.k.a. cardoe (homepage, stats, bugs)
Unofficial NVidia bugzilla? (September 03, 2012, 04:52 UTC)

The idea for this really comes from the Unofficial ATI bugzilla at http://ati.cchtml.com, which appears to be successful. For NVidia issues, the official way has been to email linux-bugs@nvidia.com, or the unofficial method of posting on http://nvnews.net and hoping for a reply. Unfortunately, I don’t find forums terribly useful for bug reports, and the search functionality is less than ideal for tracking issues.

I’ve been thinking of spinning up a Bugzilla instance for an Unofficial NVidia Bugzilla and inviting all distros to use it as well as the NVidia Linux engineers. But obviously I’d need some user/developer interest in this.

Would you use it?


Tagged: bug tracker, bugzilla, NVIDIA, nvidia-drivers

Zack Medico a.k.a. zmedico (homepage, stats, bugs)
Experimental EAPI 5_pre1 (September 03, 2012, 00:25 UTC)

In portage-2.1.11.13 and 2.2.0_alpha124 there’s support for EAPI 5_pre1, which implements all of the features that are currently in the eapi-5 branch of PMS (including the features from EAPI 4-slot-abi, which I’ve blogged about before). For additional references about the upcoming EAPI 5, see the “EAPI 5 tentative features” wiki page.

If you’d like to experiment with EAPI 5_pre1, then you can refer to the corresponding portage documentation, and you may need to pay special attention to the new “Profile IUSE Injection” feature. Since the profiles aren’t configured for this feature yet, you’ll have to configure these variables yourself if your experimental ebuilds reference special flags (like x86, kernel_linux, elibc_glibc, and userland_GNU) without listing them explicitly in IUSE. Here’s an abbreviated example of what the variables should look like, which you can put in make.conf:

IUSE_IMPLICIT="prefix selinux"
USE_EXPAND="ELIBC KERNEL USERLAND"
USE_EXPAND_UNPREFIXED="ARCH"
USE_EXPAND_IMPLICIT="ARCH ELIBC KERNEL USERLAND"
USE_EXPAND_VALUES_ARCH="amd64 ppc ppc64 x86 x86-fbsd x86-solaris"
USE_EXPAND_VALUES_ELIBC="FreeBSD glibc"
USE_EXPAND_VALUES_KERNEL="FreeBSD linux SunOS"
USE_EXPAND_VALUES_USERLAND="BSD GNU"

I have not populated all of the above variables exhaustively, but these values should be enough to get you started. If you need a more complete set of ARCH values to list in USE_EXPAND_VALUES_ARCH, then you can grab the exhaustive set of values from arch.list.

August 31, 2012
Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)
Bootstrapping Awesome: Need a Gentoo force! (August 31, 2012, 10:01 UTC)

The schedule of all the events will be published soon, so stay tuned!

P.S. To avoid confusion, I’m reminding everyone that the Gentoo Miniconf and the Czech LinuxDays conference will be held on 20-21 October, while the openSUSE Conference has two extra days, so it will be held on 20-23 October.

P.S.2 Thanks a lot to Joanna Malkogianni and Triantafyllia Androulidaki for the pacman banner

P.S.3 Thanks a lot to Anna Mineeva for the animated banner

August 27, 2012
Eray Aslan a.k.a. eras (homepage, stats, bugs)
Squid-3.2.1 in the tree (August 27, 2012, 14:15 UTC)

Squid-3.2.1 - the first non-beta release of the Squid web proxy server 3.2 branch - is in the tree.  The big news is SMP scalability: we can finally utilize multiple CPU cores natively instead of running multiple squid instances.
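
For the impatient, SMP mode comes down to a single squid.conf directive.  A minimal sketch, where the worker count is my assumption and should be sized to your own hardware:

# /etc/squid/squid.conf -- run one worker per core, e.g. on a 4-core box
workers 4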


There are a lot of changes from previous versions.  In particular, some changes to existing directives may affect your existing traffic behaviour.  So, please be sure to read the release notes at [1] and [2] before upgrading.

There are two new USE flags:

  • ssl-crtd:  Adds support for dynamic SSL certificate generation in SslBump environments, which allows ICAP inspection of SSL traffic with no (or at least fewer) certificate mismatch errors in browsers.  See [3] for further info.
  • qos:  Adds support for Quality of Service by allowing one to select a TOS / DSCP / Netfilter mark value to mark outgoing connections with, based on where the reply was sourced.  It also turns on the zero-penalty-hit config option, which used to be a separate patch but is now included with squid itself.  Please see the qos_flows directive for further info [4].


One note regarding squid.conf:  by default, Gentoo used to provide a huge squid.conf file with lots of comments.  Upstream provides a small, condensed squid.conf file, which we will start installing as the default from squid-3.2.1 onwards.  I always found it difficult to see what the overall squid configuration was in the previous huge squid.conf file.  Hopefully, this change will make life easier for squid admins.  The old commented squid.conf file is still available as squid.conf.documented under the /etc/squid directory.  Please do try to migrate your settings to the new squid.conf file for ease of future upgrades.

August 26, 2012
Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)
Running owncloud on Gentoo stable (August 26, 2012, 18:51 UTC)

As I migrated to a clean data layout (see previous post), I decided to be a cool & trendy guy and fire up my own lovely cloudy service.

At first my thinking was a bit off the regular setup, because even though we have an in-tree ebuild of owncloud, it hard-requires apache, which I find overkill here.

So let me introduce you to a secret approach for making it work with nginx and sqlite3. Before you say that I should use *insertothercooldbname*, please consider that my deployment is only for a handful of users. I tested it with 5 users connected at once, each of them having access to a 1 TB shared datastore, and it proved fast enough.

Preparing keywords/useflags/etc

Well, owncloud is in testing, so keyword it:

scarabeus@htpc: /etc/portage $ cat package.keywords/own-cloud
www-apps/owncloud

We need dav for direct access and the php stuff for the setup (some of these useflags might be useless or redundant):

scarabeus@htpc: /etc/portage $ cat package.use/own-cloud
dev-lang/php pdo sqlite3 curl xmlwriter gd truetype cgi force-cgi-redirect fpm
www-servers/nginx nginx_modules_http_dav

Now silently punt the apache away as we love nginx:

scarabeus@htpc: /etc/portage $ cat make.profile/package.provided
virtual/httpd-php-5.4

And put all this to good use by emerging required stuff:

emerge -v www-servers/nginx www-apps/owncloud

Setting up the stuff

As nginx does not ship with any fcgi runner, we will use the fpm from php directly. For that we need to add it to the default runlevel (rc-update add php-fpm default) and tune the default number of spawned servers a bit (the config is in /etc/php/fpm-php5.4/php-fpm.conf). Also remember to set a proper user/group there, or you won’t be able to store content in your cloud, just read from it.
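
As a rough illustration, the relevant knobs in the pool section of /etc/php/fpm-php5.4/php-fpm.conf would look something like this; the user/group names and pool sizes are my assumptions, so tune them for your own setup:

; run the pool as the user/group that owns the owncloud data,
; otherwise the cloud will be read-only for uploads (names assumed)
user = nginx
group = nginx

; modest pool sizing for a handful of users
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3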

Then we set up nginx (/etc/nginx/nginx.conf and /etc/nginx/fastcgi_params). To keep this short and easy I will just post the config I used and let you google for the other nginx variables.
First the conf file:

        server {
                listen 80;
                server_name hostname;
                rewrite ^ https://$server_name$request_uri? permanent;  # enforce https
        }

        server {
                listen 443;
                server_name hostname;

                ssl on;
                ssl_certificate /etc/ssl/nginx/nginx.crt;
                ssl_certificate_key /etc/ssl/nginx/nginx.key;

                access_log /var/log/nginx/htpc.access_log main;
                error_log /var/log/nginx/htpc.error_log info;

                root /var/www/htpc/htdocs/owncloud/;

                client_max_body_size 8M;
                create_full_put_path on;
                dav_access user:rw group:rw all:r;

                index index.php;

                location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
                        deny all;
                }

                location / {
                        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
                        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
                        rewrite ^/apps/calendar/caldav.php /remote.php/caldav/ last;
                        rewrite ^/apps/contacts/carddav.php /remote.php/carddav/ last;
                        rewrite ^/apps/([^/]*)/(.*\.(css|php))$ /index.php?app=$1&getfile=$2 last;
                        rewrite ^/remote/(.*) /remote.php/$1 last;

                        try_files $uri $uri/ @webdav;
                }

                location @webdav {
                        fastcgi_split_path_info ^(.+\.php)(/.*)$;
                        fastcgi_pass 127.0.0.1:9000;
                        include fastcgi_params;
                        fastcgi_param HTTPS on;
                }

                location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
                        expires 30d;
                        access_log off;
                }

                location ~ \.php$ {
                        fastcgi_split_path_info ^(.+\.php)(/.*)$;
                        fastcgi_pass 127.0.0.1:9000;
                        include fastcgi_params;
                        fastcgi_index index.php;
                        fastcgi_intercept_errors on;
                        try_files $uri =404;
                }
        }

For the fcgi we also need some params to make the webdav work:

fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param   SCRIPT_NAME     $fastcgi_script_name;
fastcgi_param   PATH_INFO       $fastcgi_path_info;

That should be it, now we just deploy the owncloud to our webserver by webapp-config:

/usr/sbin/webapp-config -I -h htpc -u root -d /owncloud owncloud 4.0.7

After we start up the webserver and the fcgi provider, we should be able to open the whole thing in a web browser.

A few issues I didn’t manage to sort out in owncloud

  • The external module to load all system users into it does not pass the auth
  • Google sync just times out every time I try it (maybe I just have damn huge content here)
  • External storage support from within owncloud didn’t work for me; I just symlinked the data folder to the proper places under each user and logged into them in a browser, then waited for 3 hours (1 TB of data to index) and they were able to access everything.

August 25, 2012
Doug Goldstein a.k.a. cardoe (homepage, stats, bugs)
Common issues when starting out with virsh (August 25, 2012, 23:52 UTC)

I’ve been receiving a lot of questions lately from people wanting to use libvirt with virsh and not wanting to use a GUI (e.g. virt-manager). They’ll get gung-ho and install libvirt and start up virsh and be confronted with an error almost right away. Obviously from a user perspective, this is a bad experience so I think a little background is in order.

libvirt runs in two modes called system and session. These terms are identical to D-Bus, so if you are familiar with that, just think in those terms. If not: system is the instance that runs as a system daemon. It has an init script at /etc/init.d/libvirtd and will run as root. The session instance runs as a normal user. It is not started at boot time but dynamically by someone using virsh. The default when running virsh as root is to connect to the system instance. The default when running virsh as a normal user is to connect to the session instance. This is typically why people say their virtual machines have disappeared or that they can’t connect. There are four ways to connect to the system instance as a normal user (a quick demonstration follows the list):

  • virsh -c qemu:///system
  • virsh and at the prompt connect qemu:///system
  • export LIBVIRT_DEFAULT_URI=qemu:///system and running virsh
  • edit /etc/libvirt/libvirt.conf and set uri_default=qemu:///system
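
To make the difference concrete, here is a small shell sketch of the first and third options; the commands are standard virsh usage, but the session shown is illustrative, not captured output:

$ virsh list --all                      # normal user: talks to qemu:///session
$ virsh -c qemu:///system list --all    # explicitly target the system instance
$ export LIBVIRT_DEFAULT_URI=qemu:///system
$ virsh list --all                      # now defaults to the system instance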

Now, if you haven’t built libvirt with PolicyKit support, by default only root will be able to communicate with the system instance. You will have to edit /etc/libvirt/libvirtd.conf and change unix_sock_rw_perms to something more open, like 0770 or 0777 (the former will require changing unix_sock_group to a group your user is part of). Then restart libvirtd to pick up the new permissions.
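
In libvirtd.conf terms, that amounts to something like the following sketch; the group name here is an assumption, so use one your user actually belongs to:

# /etc/libvirt/libvirtd.conf -- let a trusted group use the read-write socket
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"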

The last issue to befall people relates to libvirt’s recent switch to using XDG_RUNTIME_DIR and XDG_CONFIG_HOME from the XDG Base Directory Spec. The defaults for these are $HOME/.cache/ and $HOME/.config/ respectively. The issue that gets people is that your X session manager creates these directories for you if they don’t exist, but libvirt does not. So people logging into a user that never uses X won’t have these directories. As a result, when exiting virsh you will get an error that it couldn’t save your command history. Additionally, you will not be able to start a session instance without these directories present. The simplest fix is to just do mkdir $HOME/{.cache,.config} and all should be well. Note: This last issue is now resolved for the forthcoming 0.10.0 release.


Tagged: Gentoo, libvirt, qemu, qemu-kvm, virsh

Sven Vermeulen a.k.a. swift (homepage, stats, bugs)
Gentoo Hardened in August (August 25, 2012, 15:18 UTC)

Last Wednesday Gentoo Hardened held its monthly online meeting to discuss the progress of the various subprojects, reconfirm the current project leads, talk about potential new projects and discuss some bugs that were getting on our nerves…

For the project leads, all current leads were reconfirmed: Zorry will keep a tight ship as Gentoo Hardened project lead, and will also continue as the lead for the toolchain-related projects. Blueness keeps tackling the kernel, pax, grsec and rsbac subprojects, klondike the documentation and media, and I will continue with the SELinux and integrity subprojects.

On the toolchain progress, Zorry is working on the 4.8 patches and hopes to be able to submit them upstream later this month. Blueness continues maintaining the uclibc architectures mentioned last month and is working on the documentation related to it.

On the kernel side, there were some reports submitted that were triggered by the integer overflow plugin. This plugin, called size_overflow, aims to detect integer overflows, where an increase of an integer value goes beyond its maximum and wraps around (resulting in either a negative or a small integer result). This is of course unwanted behavior, so a gcc plugin (by Emese Revfy) is used to detect such occurrences. Basically, this plugin recalculates whatever is done with the integers at double the precision and checks whether the logical result is the same. If it isn’t, then an overflow has most likely occurred. This is of course overly simplified, but from what I can find on the interwebs, not that far from the truth.

The reports are generally about network-related applications, like tor, which are terminated because something fishy occurred within the network handling code of the kernel (see for instance bug #430906).

In the SELinux camp, the documentation has been updated to inform users on how to create a new role (see also an earlier post of mine) and a few patches to the setools package have been added to support Python-2.7-only systems as well as systems using the latest swig. Also, all userspace utilities for SELinux should support both Python 2.7 and Python 3.x – the only remaining aspect is the SELinux code within Portage (see bug #430488).

Regarding grSecurity and PaX, blueness is working on the xattr PaX markings support in Gentoo, and a tracker bug has been opened to manage the changes needed. Vapier suggested moving towards xattr markings completely and dropping the PT_PAX ELF header support, but this cannot be done until all file systems support user-level extended attributes. That being said, it is a good idea to do this in the long run, as extended attributes give greater flexibility and don’t manipulate the binaries of an application.

On the integrity subproject, the concepts and introduction documentation is online. I’m working on a few ebuilds that are needed to support IMA/EVM and should hopefully hit the hardened development overlay the next week. The primary focus now is to support creating a “secure image” which, when uploaded to a hosting service, would detect if the hosting service tampered with the image outside (i.e. by manipulating the image file itself).

Finally, on documentation and media, we will need to look into updating the prelude/LIDS documentation (host intrusion prevention/detection documentation) as it is quite old and currently obsolete. Klondike also recently gave a talk about Gentoo Hardened (put the stuff online, Francisco!) but I don’t recall anymore where; I’ll update when I see the meeting log ;-)

All in all a nice month! Good going, guys.

August 23, 2012
Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)
Migrating disk layout from mess to raid1 (August 23, 2012, 11:46 UTC)

Imagine you are a dumb guy like me: the first thing I did was to set up three 1TB disks as one huge LVM volume, copy my data onto it, and only then find out that grub2 needs more free space before the first partition to be able to load the LVM module and boot. For a while I solved this with an external USB token plugged into the motherboard. But no more!

I bought two 3TB disks to deal with the situation, and this time I decided to do everything right and use UEFI boot instead of the good old normal booting.

Disk layout

Model: ATA ST3000VX000-9YW1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  512MB   512MB   fat32        primary
 2      512MB   20.0GB  19.5GB               primary
 3      20.0GB  30.0GB  9999MB  xfs          primary
 4      30.0GB  3001GB  2971GB  xfs          primary

So as you can see, I created 4 partitions. The first is a special case: it must always be created for EFI boot. Create it larger than 200 MB, up to 500 MB, which should be enough for everyone.

The disk layout must be set up in parted, as we want a GPT layout (just google how to do it, it is damn easy to use). It accepts both absolute values like 1M or 1T and percentages like 4% to specify the resulting partition size.

Setting up the RAID

We just create simple nodes and plug /dev/sda2-4 and /dev/sdb2-4 into them. Prior to creating the RAID, make sure you have RAID support in your kernel.

for i in {2..4}; do mknod /dev/md${i} b 9 ${i}; mdadm --create /dev/md${i} --level=1 --raid-devices=2 /dev/sda${i} /dev/sdb${i}; done

After these commands are executed we have to watch mdstat until the arrays are ready (note that you can work with the md disks in the meantime; the initial sync of the RAID will just be slower, as you will be writing to the same disks).

After we check the mdstat and see that all the disks are ready for play:

root@htpc: ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
md4 : active raid1 sda4[0] sdb4[1]
      2900968312 blocks super 1.2 [2/2] [UU]
      
md3 : active raid1 sda3[0] sdb3[1]
      9763768 blocks super 1.2 [2/2] [UU]
      
md2 : active raid1 sda2[0] sdb2[1]
      19030679 blocks super 1.2 [2/2] [UU]

we can proceed with data copying.

Transfering the data and setting up the system

mkfs.ext4 /dev/md2 ; mkfs.xfs /dev/md3 ; mkfs.xfs /dev/md4 # create filesystems
mkdir -p /mnt/newroot/{home,var} # create the folder structure (home and var are actually md3 and md4, so prepare the mount points for them)
mount /dev/md2 /mnt/newroot
mount /dev/md3 /mnt/newroot/var
mount /dev/md4 /mnt/newroot/home

Now that we are ready, we will use rsync to transfer the live system and data (WARNING: shut down everything that tampers with data, like ftp/svn/git services). The only thing we are going to lose is a few lines of syslog and other log output.

rsync -av /home/ /mnt/newroot/home # no -z as we don't need to compress
rsync -av /var/ /mnt/newroot/var
rsync -av / --exclude '/home' --exclude '/dev' --exclude '/lost+found' --exclude '/proc' --exclude '/sys' --exclude '/var' --exclude '/mnt' --exclude '/media' --exclude '/tmp' /mnt/newroot/ # copy all relevant stuff to newroot
mkdir -p /mnt/newroot/{dev,proc,sys,mnt,media,tmp}

After the transfer you need to edit /etc/fstab to reflect the new disk layout. Update the kernel (if needed, to support the new RAID layout) and, if you did RAID like me, update /etc/default/grub so that the default kernel command line contains domdadm (sketch below).
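
A minimal sketch of that grub default file change, assuming (as the post implies) that the domdadm kernel parameter tells your initramfs to assemble the arrays:

# /etc/default/grub -- assemble the md arrays from the initramfs at boot
GRUB_CMDLINE_LINUX="domdadm"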

Preparing new boot over UEFI

On your machine you need to create a USB dongle which supports UEFI boot (you need to be UEFI-booted to set up UEFI [fcking hilarious]).

We need to download the latest 64-bit archboot ISO (the Gentoo minimal CD didn’t contain this lovely feature).
Grab some USB disk and plug it into the machine. We will format it as FAT32 (mkfs.vfat -F32 /dev/[myusb]), mount it somewhere and copy the ISO image contents to the USB drive (you can enter the ISO in mc and just F5 it if you are lazy like me, but it also works with tar, p7zip or whatever else; a sketch follows). Shut down the computer, unplug the old disks and, with manic laughter, turn the machine on again.
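
If you prefer the non-mc way, the dongle preparation might look like this; the device name, mount points and ISO filename are placeholders:

mkfs.vfat -F32 /dev/[myusb]                    # format the pen drive as FAT32
mkdir -p /mnt/usb /mnt/iso
mount /dev/[myusb] /mnt/usb
mount -o loop archboot-latest-64.iso /mnt/iso  # ISO filename is assumed
cp -a /mnt/iso/. /mnt/usb/                     # copy the ISO contents over
umount /mnt/iso /mnt/usb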

To boot via UEFI, just open the boot list menu and select the disk which has UEFI in its name. It will open a grub2 menu where you just select the first option. We should then be welcomed by the lovely arch installer. Not caring about it, switch to another console and open a terminal. Set up the arrays again using mdadm --assemble.

for i in {2..4}; do mknod /dev/md${i} b 9 ${i}; mdadm --assemble /dev/md$i /dev/sda${i} /dev/sdb${i}; done

Then just proceed with mounting them somewhere under /mnt and chroot like you would for a new Gentoo install. Exact steps:

modprobe efivars # load the efi tool variables
mkdir -p /mnt/newroot/{home,var} # create the folder structure (home and var are actually md3 and md4, so prepare the mount points for them)
mount /dev/md2 /mnt/newroot
mount /dev/md3 /mnt/newroot/var
mount /dev/md4 /mnt/newroot/home
mount -o rbind /dev /mnt/newroot/dev
mount -o rbind /sys /mnt/newroot/sys
mount -t proc none /mnt/newroot/proc
chroot /mnt/newroot /bin/bash
. /etc/profile
env-update

Now that we are in the chroot, we just install grub2 with GRUB_PLATFORMS="efi-64". After that we proceed easily by following the wiki article.
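
Roughly, the chrooted grub2 part boils down to the following sketch; the ESP mount point is my assumption (it is the fat32 partition from the layout above), and the wiki article has the authoritative details:

echo 'GRUB_PLATFORMS="efi-64"' >> /etc/portage/make.conf
emerge -v sys-boot/grub                  # Gentoo's grub:2 uses a grub2- prefix
mkdir -p /boot/efi
mount /dev/sda1 /boot/efi                # the fat32 EFI system partition
grub2-install --target=x86_64-efi --efi-directory=/boot/efi
grub2-mkconfig -o /boot/grub2/grub.cfg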

Unmount the disks, reboot the system, unplug the flash drive, …, profit?

August 22, 2012
When you should block a stabilization (August 22, 2012, 12:51 UTC)

Many times in the past, a lot of people have found bugs in packages that were actually in a STABLEREQ.

Every time my answer was: mark it as a blocker only if it is a regression; and every time the next question was: “What is a regression?”.

Well, it seems that word is not familiar to everyone, so let me explain the concept; first, think of the word regression as a synonym of worsening.

I guess, for this concept, an example should give you ‘the basic idea’.

Imagine we are testing app-arch/tar-1.26-r1 (at this moment it is really ~arch).

During the testing we find 4 issues:

1) the CFLAGS variable is not respected.
2) there is a sed failure in the ebuild.
3) there is a test failure.
4) tar fails to extract some archives in a specific mode.

I chose these four problems as an example for a reason: regressions must block the stabilization, regardless of the type of problem.
Each of them represents a problem of a “different nature”:

1) means a build system issue
2) means an ebuild issue
3) and 4) mean a problem specific to the software

What should you do now? You should test the last stable version of tar (1.26 in this case) and check whether you are able to reproduce these problems.

From the subsequent tests, you see that tar-1.26 fails to respect CFLAGS (1), fails to sed one or more files (2), has no test failures (3), and reproduces the extraction issue (4).

Now, go to our bugzilla and check if there are open bugs about these problems. If not, please open the bugs, but pay attention to the blockers.

Since the first is reproducible in the last stable, it is not a regression, so no block.
Since the second is reproducible in the last stable, it is not a regression, so no block.
Since the third is not reproducible in the last stable, it is a regression, so it blocks.
Since the fourth is reproducible in the last stable, it is not a regression, so no block.

In this case, you should open a new bug about the test failure and mark it as a blocker for the current stabilization. Obviously, if there are already open bugs that warrant a block, use them instead of opening new (duplicate) bugs.

Now, apart from the test failure, and ignoring failures 1 and 2, the obvious question is: “Why should we mark tar-1.26-r1 stable if it fails to extract stuff?”.
Here is where the regression concept applies: imagine you are a user, you are using tar-1.26 and you can’t extract some archives; we mark 1.26-r1 stable and you still can’t. Nothing changes for you and nothing gets worse: you couldn’t do it before and you can’t do it now.

This is probably documented elsewhere too, but I hope it helps.

August 20, 2012
Greg KH a.k.a. gregkh (homepage, stats, bugs)
Stable kernel tree status, August, 2012 (August 20, 2012, 22:45 UTC)

As I posted to the linux-kernel mailing list, the 3.4 kernel tree will be the next -longterm kernel that I will be maintaining for at least 2 years.

Currently I'm maintaining the following stable kernel trees for the following amount of time:

  • 3.0 - for at least one more year
  • 3.4 - for at least two years
  • 3.5 - until 3.6.1 is out

Hope this helps clear up any rumors floating around. If anyone has any, please let me know.

Steve Dibb a.k.a. beandog (homepage, stats, bugs)
freebsd, quick deployments, shell scripts (August 20, 2012, 18:07 UTC)

At work, I support three operating systems right now for ourselves and our clients: Gentoo, Ubuntu and CentOS.  I really like the first two, and I’m not really fond of the other one.  However, I’ve also started doing some token research into *BSD, and I am really fascinated by what I’ve found so far.  I like FreeBSD and OpenBSD the most, but those two and NetBSD are pretty similar in a lot of ways, so I’ve been shuffling between focusing solely on FreeBSD and occasionally comparing the other two at the same time.

As a sysadmin, I have a lot of tools that I use that I’ve put together to make sure things get done quickly. A major part of this is documentation, so I don’t have to remember everything in my head alone — which I can do, up to a point, it just gets really hard trying to remember certain arguments for some programs.  In addition to reference docs, I sometimes use shell scripts to automate certain tasks that I don’t need to watch over so much.

In a typical situation, a client needs a new VPS setup, and I’ll pick a hosting site in a round-robin fashion (I’ve learned from experience to never put all your eggs in one basket), then I’ll use my reference docs to deploy a LAMP stack as quickly as possible.  I’ve gotten my methods refined pretty well, so deploying servers goes really fast — in the case of an Ubuntu install, I can have the whole thing set up in close to an hour.  And when I say “set up” I don’t mean “having all the packages installed.”  I mean everything installed *and* configured and ready with a user shell and database login, so I can hand over access credentials and walk away.  That includes things like mail server setup, system monitoring, correct permissions and modules, etc.  Getting it done quickly is nice.

However, for those quick deployments I’ve been relying on my documentation, and it’s mostly copying and pasting commands manually, running some sed expressions, doing a little vim editing, and being on my way.  Looking at FreeBSD right now, and wanting to deploy a BAMP stack, I’ve been trying things a little differently — using shell scripts to deploy everything, and having them automate as much as possible for me.

I’ve been thinking about shell scripting lately for a number of reasons.  One thing that’s finally clicked with me is that my skill set isn’t worth anything if a server actually goes down.  It doesn’t matter if I can deploy it in 20 minutes or three days, or if I manage to use less memory or use Percona or whatever else, if the stupid thing goes down and I haven’t done everything to prevent it.

So I’ve been looking at monit a lot closer lately, which is what I use to do systems monitoring across the board, and that works great.  There’s only one problem though — monit depends on the system init scripts to run correctly, and that isn’t always the case.  The init scripts will *run*, but they aren’t very fail-proof.

As an example, Gentoo’s init script for Apache can be broken pretty easily.  If you tell it to start, and apache starts running but crashes after initialization (there are specifics, I just can’t remember them off the top of my head), the init script thinks that the web server is running simply because it managed to run its own commands successfully.  So the init system thinks Apache is running when it’s not.  And the side effect is that if you try to automatically restart it (as monit will do), the init scripts will insist that Apache is already running, so executing a restart won’t work, because running stop doesn’t work, and so on and so forth.  (For the record, I think it’s fair that I’m using Apache as an example, because I plan on fixing the problem and committing the updates to Gentoo when I can.  In other words, I’m not whining.)

Another reason I’m looking at shell scripting is that none of the three major BSD distros (FreeBSD, NetBSD, OpenBSD) ship with bash by default.  I think all three of them ship with either csh or tcsh, and one or two of them have ksh as well.  But they all have the original Bourne shell.  I’ve tried my hand at some basic scripting using csh, because it’s the default on FreeBSD, and I thought, “hey, why not, it’s best to use the default tools that it ships with.”  I don’t like csh, and it’s confusing to script for, so I’ve given up on that dream.  However, I’m finding that writing stuff for the Bourne shell is not only really simple, but also portable to *all* the distros I use it on.

All of this brings me back to the point that I’m starting to use shell scripts more and more to automate system tasks.  For now, it’s system deployments and system monitoring.  What’s interesting to me is that while I enjoy programming to fix interesting problems, all of my shell scripting has always been very basic: if this, do that, and that’s about it.  I’ve been itching to patch up the init scripts for Gentoo (Apache is not the only service that has strange issues like that — again, I can’t remember which, but I know there were some other funky issues I ran into), and looking into (more) complex scripts like that pushes my little knowledge a bit.

So, I’m learning how to do some shell scripting.  It’s kind of cool.  People always talk, in general, about how UNIX-based systems / clones are so powerful because of how shell scripting works: piping commands, outputting to files, etc.  I know my way around the basics well enough, but now I’m running into interesting problems that are pushing me a bit.  I think that’s really cool too.  I finally had to break down the other day and figure out how in the world awk actually does anything.  Once I wrapped my head around it a bit, it made more sense.  I’m getting better with sed as well, though right now a lot of my usage is basically clubbing things to death.  And just the other day I learned some cool options that grep has as well, like matching an exact string on a line (without regular expressions … I mean, ^ and $ is super easy).
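
To give one concrete example of that last grep bit (my guess at the options in question; the file name is just an illustration):

# match lines that are exactly "apache2": -F takes the pattern as a fixed
# string (no regex), -x requires the whole line to match
grep -Fx apache2 some-file.txt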

Between working on FreeBSD, trying to automate server deployments, and wanting to fix init scripts, I realized that I’m tackling the same problem in all of them — writing good scripts.  When it comes to programming, I have some really high standards for my scripts, almost to the point where I could be considered obsessive about it.  In reality, I simply stick to some basic principles.  One of them is that, under no circumstances, can the script fail.  I don’t mean in the sense of running out of memory or the kernel segfaulting or something like that.  I mean that any script should always anticipate and handle any kind of arbitrary input where it’s allowed.  If you expect a string, make sure it’s a string, and that its contents are within the parameters you are looking for.  In short, never assume anything.  It could seem like that takes longer to write scripts, but for me it’s always been a standard principle that is just part of my style.  Whenever I’m reviewing someone else’s code, I’ll point to some block and say, “what’s gonna happen if this data comes in incorrectly?” to which the answer is “well, that shouldn’t happen.”  Then I’ll ask, “yes, but what if it *does*?”  I’ve upset many developers this way. :)  In my mind, could != shouldn’t.

I’m looking forward to learning some more shell scripting.  I find it frustrating when I’m trying to google some weird problem I’m running into, though, because it’s so difficult to find specific results that match my issue.  It usually ends up with me just sorting through man pages to see if I can find something relevant.  Heh, I remember when I was first starting to do some scripting in csh, and all the search results I got were on why I shouldn’t be using csh.  I didn’t believe them at first, but I’ve since realized the error of my ways after banging my head against the wall a few times.

In somewhat unrelated news, I’ve started using Google Plus lately to do a head dump of all the weird problems I run into during the day doing sysadmin-ny stuff.  Here’s my profile if you wanna add me to your circles.  I can’t see a way for anyone to publicly view my profile or posts, though, without signing into Google.

Well, that’s my life about right now (at work, anyway).  The thing I like the most about my job (and doing systems administration full time in general) is that I’m constantly pushed to do new things, and learn how to improve.  It’s pretty cool.  I likey.  Maybe some time soon I’ll post some cool shell scripts on here.

One last thing, I’ll post *part* of what I call a “base install” for an OS.  In this case, it’s FreeBSD.  I have a few programs I want to get installed just to get a familiar environment when I’m doing an install: bash, vim and sometimes tmux.  Here’s the script I’m using right now, to get me up and running a little bit.  [Edit: Upon taking a second look at this -- after I wrote the blog post, I realized this script isn't that interesting at all ... oh well.  The one I use for deploying a stack is much more interesting.]

I have a separate one that is more complex that deploys all the packages I need to get a web stack up and running.  When those are complete, I want to throw them up somewhere.  Anyway, this is pretty basic, but should give a good idea of the direction I’m going.  Go easy on me. :)

Edit: I realized the morning after I wrote this post that not only is this shell script really basic, but I’m not even doing much error checking.  I’ll add something else in a new post.

#!/bin/sh
#
# * Runs using Bourne shell
# * shells/bash
# * shells/bash-completion
# * editors/vim-lite

# Install bash, and set as default shell
if [ ! -e /usr/local/bin/bash ] ; then
	echo "shells/bash"
	cd /usr/ports/shells/bash
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash - found"
fi
if [ "$SHELL" != "/usr/local/bin/bash" ] ; then
	chsh -s /usr/local/bin/bash > /dev/null 2>&1 || echo "chsh failed"
fi

# Install bash-completion scripts
if [ ! -e /usr/local/bin/bash_completion.sh ] ; then
	echo "shells/bash-completion"
	cd /usr/ports/shells/bash-completion
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "shells/bash-completion - found"
fi

# Install vim-lite
if [ ! -e /usr/local/bin/vim ] ; then
	echo "editors/vim-lite"
	cd /usr/ports/editors/vim-lite
	make -DBATCH install > /dev/null 2>&1
	if [ $? -ne 0 ]; then
		echo "make install failed"
		exit 1
	fi
else
	echo "editors/vim-lite - found"
fi

# If using csh, rehash PATH
cd
if [ "$SHELL" = "/bin/csh" ] ; then
	rehash
fi

Richard Freeman a.k.a. rich0 (homepage, stats, bugs)
Gentoo Bug Bounties (August 20, 2012, 02:55 UTC)

Some may have noticed that the Gentoo Foundation has funded a bug bounty. This is something fairly new for the Foundation, and I wanted to offer some comments on the practice. Please note that while I’d love to see some of these make their way into policy some day, these are nothing more than my own opinion, and I reserve the right to change my opinion as we gain experience.

The recent bug bounty was for bug #418431, which was to address a problem with git-svn which was holding up stabilization of the latest version of git, which is a blocker for the migration of the Portage tree to git.

What follows are some principles for the use of bug bounties and how I think we fared in this particular case. I’d like to see the use of bounties expand, as right now I believe we under-utilize our donations. However, it is important that bounties be used with care as they have the potential to cause harm or be wasteful.

One more upfront note – I supported the git-svn bounty as it was ultimately worded, as did the other Trustees. Looking back I think we could have done things a little differently, but hindsight is always 20/20, and no doubt we’ll continue to learn as we experiment with this further.

1. Bounties Should Be Used Strategically
While the Foundation has money to spend, we aren’t swimming in it, so we can’t use bounties for any little bug that annoys us. Bounties should be reserved for matters where spending a little money has a large impact.

I think we did well here – the git-svn issue was going nowhere either within Gentoo or upstream, but the number of other blockers to the git migration is fairly small and within Gentoo’s control. Getting rid of this issue should open the way towards the git migration, which is of course of strategic importance to Gentoo.

2. The Solution Must Be Sustainable
This might also be stated as “consider the total cost.” Before agreeing to fund this bug there was some due diligence to ensure that upstream would carry forward any patches we generated. The problem was the result of changes on the SVN side, and the solution included some general cleanup and refactoring of code to make git-svn more maintainable upstream. Upstream also expressed an interest in accepting the fix, and it was the opinion of the package maintainer that this would be a one-time fix as a result.

When considering whether a solution is sustainable, we need to think about how we got where we are, and consider whether we’re just going to end up back in the same place again. If the solution won’t be maintainable, then any money spent is wasted unless it truly is a one-time event.

3. Gentoo Can’t Fix It With Volunteer Effort
Gentoo is a community distribution. We have some very talented developers. We can usually fix our own problems, and doing so as a volunteer community effort is usually the healthiest solution.

The sense for git-svn was that this was an upstream problem in a language our maintainers were not comfortable with. The bug languished despite attention by several developers and discussions in other forums. It was felt that offering a bounty would allow targeted expertise to tackle the problem, which otherwise was not of great interest to our community.

A policy of not offering bounties unless a bug has been open for some period of time, except in unusual circumstances, would be appropriate.

4. Be Ready To Capitalize On the Work
If the work is strategic (see #1), then we ought to have a plan ready for when the bug is closed. Otherwise there really should be no urgency to pay somebody to close the bug and it is basically a pig in a snake (clear the jam, and the problem just moves one step down the chain).

I think the jury is still out on how we’re doing here. I think there is a lot of enthusiasm about git but we could have a bit more organization here. None of this is intended as a slight to those who have been laboring hard to make this work – I hope getting this blocker cleared will inspire more to step up and resolve the other issues. (I won’t say more here as I don’t want to make this about the Git migration.)

5. Define the Problem and Success
A bounty is a contract. At the very least misunderstandings can lead to hurt feelings, and at worst they can lead to HIGHLY contentious, expensive, and distracting legal action. While a 10 page document shouldn’t be necessary for a token expense, any bounty should be very up-front about what exactly is to be done, and how success will be evaluated.

I think we could have done a little better in this regard, but there was some iteration on the wording of the bounty to clarify the “victory conditions.” I think it is important to focus on outcomes – in this case we wanted code that upstream was likely to accept. I’d actually have been happier making upstream acceptance a condition of payment, but the sense was that this would be inevitable but might delay payment unduly. I think the jury is still out on this one. What is important is that we don’t just achieve technical resolution of the bug, but that we fully realize the benefits we had in mind when we funded the bounty.

6. Cover Code Ownership and Licensing
This is a work for hire – we can dictate ownership of the code (yes, I realize that the legalities of this vary internationally, but the US is the only nation that legally recognizes the Gentoo Foundation at the moment, and the US will enforce this insofar as its jurisdiction allows). Per the Gentoo Social Contract, if we’re funding the creation of code, it ought to be free (generally GPL).

This was covered in the git-svn case. We didn’t insist on ownership of copyright, but we did ensure the code was licensed using the upstream license (GPLv2). My feeling is that if the bounty really represents payment for a majority of the work Gentoo should just own the code outright. If the bounty is really a token gesture for what is mostly a volunteer effort I think the author should retain copyright as long as the code is FOSS. In practice it doesn’t matter too much, so I think we should use discretion here.

7. Offers of Bounties Should Be Fair
This topic led to some internal debate, and I think that we can probably do a little better in terms of transparency in the future. The bounty was posted publicly on the bug, and anybody already interested in the bug and on the CC list would of course have gotten notification. In retrospect I think that bounties are a significant enough occasion that perhaps a proposal should be offered for comment on -dev or -nfp and the final version announced on -dev-announce. I think that the way we handled the git-svn case met all legal obligations, but I really want to make sure that the whole community has an opportunity to participate when they come up.

Another potential issue with bounties is that you can only pay one person (unless there is some side agreement to share it), and there can be resentment if work gets done but isn’t reimbursed. This was addressed in the present case by asking anybody working on the bug to state their intent. If a bounty is very large it probably would make sense to go through a more formal bidding process and just award a contract more conventionally.

I think that this last point of fairness is actually the most critical. While messing up on any of the others could cause us to waste a few hundred dollars, getting the fairness bit wrong could literally destroy the community. When you start paying people to do what used to be volunteer work the result can be demoralizing to the community. I think the key is to only do this when the community lacks the ability/desire to do the work itself, and especially when the work lies outside of our core expertise. Paying an accounting firm a reasonable fee to ensure our taxes are filed correctly isn’t viewed with much controversy. We should try to keep bug bounties limited to similar sorts of situations.

Trustees of course have duties both under the bylaws and under US law to properly manage conflicts of interest. These certainly apply to any kind of expenditure of money.


So, what do you think? I’m very open to criticism about how we handled our first bug bounty, and how the community feels about this use of money. As is evident from the Treasurer’s Report at today’s Annual General Meeting, Gentoo currently receives more in donations than it spends, so I think making a little more use of this approach will allow our supporters to benefit Gentoo. Seeing donations in action will probably help encourage an increase in donations as well. However, I think we also need to tread carefully here, as the community matters far more than squashing a few bugs.

Finally, while I’d like to see policy around bounties formalized, I think doing so right away would be a mistake. I think we should try to consciously apply principles like these but wait until we see how they work in practice before trying to codify them.


Filed under: gentoo, gentoo foundation

August 19, 2012
Nathan Zachary a.k.a. nathanzachary (homepage, stats, bugs)

After reviewing several solutions to a security problem with screen lockers (an attacker switching virtual terminals and killing the screen locker application), I’ve found that the easiest workaround is to start one’s X session with the following command:

exec startx

That way, even if someone switches to the virtual terminal that was used to start X and presses CTRL+C, he or she will only be presented with a login prompt (instead of having free rein over the user account responsible for starting the session). Now that there’s a reasonable workaround for that problem, I set out to make keybindings and menu shortcuts for Openbox that would take care of both locking the screen and putting my displays to sleep. Conceptually, this was a straightforward task, and I accomplished it with the following:

Openbox menu item:
<item label="Lock screen + off">
<action name="execute"><execute>/usr/bin/slock</execute></action>
<action name="execute"><command>/usr/bin/xset dpms force off</command></action>
</item>

Keybinding:
<keybind key="XF86Sleep">
<action name="execute">
<execute>/usr/bin/slock</execute>
</action>
<action name="execute">
<command>/usr/bin/xset dpms force off</command>
</action>
</keybind>

The only problem is that it doesn’t work every time. Though it tends to work nicely, there are times when slock will start, but the displays will not honour the xset command to go to sleep (I guess that when it comes to bedtime, monitors are a bit finicky like children :razz: ). I have tried adding a sleep before the commands, thinking that there was some HID activity causing the wake, but that didn’t rectify the problem. If anyone has a proposed solution to the seemingly random failure of xset putting the displays to sleep, please let me know by leaving a comment.

Cheers,
Zach

Equo rewrite, Sabayon 10 and Google (August 19, 2012, 10:48 UTC)

The following months are expected to be really exciting (and scary, eheh), for many reasons. Explanation below.

My life is going to change rapidly in roughly one month, and when these things happen in your life, you feel scared and excited at the same time. I have always tried to cope with these events by just being myself, an error-prone human being (my tech English teacher doesn’t like me to use “human being”, but where’s the poetry then!) that always tries to enjoy life and computer science with a big smile on his face.

So, let’s start in reverse order. I have the opportunity to do my university internship at Google starting from October, more precisely at Google Ireland, which is located in Dublin. I think many Googlers had the same feelings before starting that I currently have, scared and excited at the same time, with questions like “do I deserve this?” and “am I good enough?”. As I wrote above, the only answer I have found so far is that, well, it will be challenging, but do I like boredom after all? Leaning on professionalism and humbleness is probably what makes you a good team-mate all the time. Individuals cannot scale up infinitely, which is why scaling out (as in team work) is a much better approach.

It’s been two years since I started working at Weswit, the company behind the award-winning Lightstreamer push technology, and next month is going to be my last one there. Even so, you never know what will happen next year, once I’m back from the internship at Google. One sure thing is, I will need a job again, and I will eventually graduate (yay!).
So yeah, during the whole university period I kept working, and besides being tough, it really helped me out in both directions. In the end, I kept accumulating real-world expertise during this time.
Nothing in my life has been risk-free, and I took the risk of leaving a great job position to pursue something I would otherwise have regretted for the rest of my life, I’m sure. On the other hand, I’m sure that at the end of the day it will be a win-win situation. Weswit is a great company, with great people (whom I want to thank for the trust they gave me) and I’m almost sure that the next one might not be my last month there (in absolute terms, I mean). You never know what is going to happen in your life, and I believe there’s always a balance between bad and good things. Patience, passion and dedication are the best approach to life, by the way.

Before leaving for Dublin, we (as in the Sabayon team) are planning to release Sabayon 10: improved ZFS support, improved Entropy & Rigo experience (all the features users asked me about have been implemented!), out-of-the-box KMS improvements, BFQ iosched as the default I/O scheduler (I am a big fan of Paolo Valente’s work), a load of new updates (from the Linux kernel to X.Org, from GNOME to KDE through MATE) and, if we have time, more Gentoo-hardened features.

Let me mention here one really nice Entropy feature I implemented last month: Entropy has used SQLite3 as its repository model engine since day one (and it’s been a big win!), even though the actual implementation has always been abstracted away so that upper layers never had to deal with it directly (and up to here, there is nothing exciting). Given that a file-based database like SQLite is almost impossible to scale out [1], and given that I’ve been digging into MySQL for some time now, I decided it was time to write an entropy.db connector/adapter for MySQL, specifically designed for the InnoDB storage engine. And 1000 LOC just did it [2]!

As you may have seen if you’re using Sabayon and updating it daily, the Entropy version has been bumped from 1.0_rcXXX to just XXX. As of today, the latest Entropy version is 134. It might sound odd or even funny, but I was sick of seeing that 1.0_rc prefix, which was just starting to look ridiculous. Entropy is all about continuous development and improvement; when I fully realized this, it was clear that there won’t ever be any “final”, “one-point-oh”, “one-size-fits-all done && done” version. Version numbers have always been overrated, so f**k formally defined version numbers, welcome monotonically increasing sequences (users won’t care anyway, they just want the latest and greatest).

I know, I mention “Equo rewrite” in the blog post title. And here we go. The Equo codebase was one of the first and long living part of Entropy I wrote, some of the code is there since 2007, even though it went through several refinement processes, the core structure is still the same (crap). Let me roll back the clock a little bit first, when the Eit codebase [3] replaced old equo-community, reagent and activator tools, it was clear that I was going to do exactly the same thing with the Equo one, thus I wrote the whole code in an extremely modular way, to the point that extra features (or “commands” in this case) could be plugged in by 3rd parties without touching the Eit kernel at all. After almost one year, Eit has proven to be really powerful and solid to the extent that now, its architecture is landing into the much more visible next-gen Equo app.
I’ll tell you straight away: migrating the Equo codebase over will take a long time. It is actually one of the many background tasks I work on during rainy weekends. Still, expect me to experiment with new (crazy, arguable, you name it) ideas while I make progress. The new Equo is codenamed “Solo”, but that is just a way to avoid file name clashes while I port the code over; you can find the first commits in the entropy.git repository, under the “solo” branch [4].
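
To give a feel for the “commands as plugins” idea, here is a minimal sketch of git-style subcommand dispatch (illustrative only; the plugin directory and the eit-<command> naming scheme are assumptions, not the actual Eit layout): new commands are just executables dropped into a directory, and the core dispatcher never changes.

	#!/usr/bin/env bash
	# Minimal git-like dispatcher: each subcommand is a standalone
	# executable named eit-<command> inside PLUGIN_DIR.
	PLUGIN_DIR="${PLUGIN_DIR:-/usr/libexec/eit}"

	cmd=$1
	if [[ -z ${cmd} ]]; then
		# No subcommand given: list whatever plugins are installed.
		echo "available commands:"
		for f in "${PLUGIN_DIR}"/eit-*; do
			[[ -x ${f} ]] && echo "  ${f##*/eit-}"
		done
		exit 0
	fi
	shift

	# Hand over to the plugin, passing the remaining arguments along.
	exec "${PLUGIN_DIR}/eit-${cmd}" "$@"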

Make sure not to miss the whole picture: we are a team, and Sabayon lives on incremental improvements (continuous development, agile!). This has the big advantage that we can implement and deploy features without temporal constraints. And in the end, it’s just our (beloved) hobby!

[1] Imagine a web service cluster, etc. I know, SQL in general is known for not scaling out well without sharding or other techniques, but that is outside the scope of this paragraph; I think NoSQL is sometimes overrated as well.
[2] http://git.sabayon.org/entropy.git/tree/lib/entropy/db/mysql.py
[3] Eit is the server-side (and community-repo side) command line tool. “Eit” stands for “Entropy Infrastructure Toolkit”, and it exposes repository management in a git-like fashion.
[4] http://git.sabayon.org/entropy.git/log/?h=solo


August 17, 2012
Aaron W. Swenson a.k.a. titanofold (homepage, stats, bugs)
Security Update 2012-08-17 – PostgreSQL (August 17, 2012, 17:39 UTC)

From PostgreSQL:

The PostgreSQL Global Development Group today released security updates for all active branches of the PostgreSQL database system, including versions 9.1.5, 9.0.9, 8.4.13 and 8.3.20. This update patches security holes associated with libxml2 and libxslt, similar to those affecting other open source projects. All users are urged to update their installations at the first available opportunity.

This security release fixes a vulnerability in the built-in XML functionality, and a vulnerability in the XSLT functionality supplied by the optional XML2 extension. Both vulnerabilities allow reading of arbitrary files by any authenticated database user, and the XSLT vulnerability allows writing files as well. The fixes cause limited backwards compatibility issues. These issues correspond to the two vulnerabilities CVE-2012-3489 (built-in XML) and CVE-2012-3488 (XSLT).

This release also contains several fixes to version 9.1, and a smaller number of fixes to older versions, including:

  • Updates and corrections to time zone data
  • Multiple documentation updates and corrections
  • Add limit on max_wal_senders
  • Fix dependencies generated during ALTER TABLE ADD CONSTRAINT USING INDEX
  • Correct behavior of Unicode conversions for PL/Python
  • Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT)
  • Fix syslogger so that log_truncate_on_rotation works in the first rotation
  • Only allow autovacuum to be auto-canceled by a directly blocked process
  • Improve fsync request queue operation
  • Prevent corner-case core dump in rfree()
  • Fix walsender so that it responds correctly to timeouts and deadlocks
  • Several PL/Perl fixes for encoding-related issues
  • Make selectivity operators use the correct collation
  • Prevent unsuitable slaves from being selected for synchronous replication
  • Make REASSIGN OWNED work on extensions as well
  • Fix race condition with ENUM comparisons
  • Make NOTIFY cope with out-of-disk-space
  • Fix memory leak in ARRAY subselect queries
  • Reduce data loss at replication failover
  • Fix behavior of subtransactions with Hot Standby

Users who are relying on the built-in XML functionality to validate external DTDs will need to implement a workaround, as this security patch disables that functionality. Users who are using xslt_process() to fetch documents or stylesheets from external URLs will no longer be able to do so. The PostgreSQL project regrets the need to disable both of these features in order to maintain our security standards. These security issues with XML are substantially similar to issues patched recently by the Webkit (CVE-2011-1774), XMLsec (CVE-2011-1425) and PHP5 (CVE-2012-0057) projects.

As with other minor releases, users are not required to dump and reload their database or use pg_upgrade in order to apply this update release; you may simply shut down PostgreSQL and update its binaries. Perform post-update steps after the database is restarted.

All supported versions of PostgreSQL are affected. See the release notes for each version for a full list of changes with details of the fixes and steps.

The latest versions are available from Portage now.
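
On Gentoo, applying a minor release like this boils down to a sync, a rebuild and a restart. A hypothetical sequence (the exact package atom and slotted init script name depend on your installation) might look like:

	# Pull in the patched release; atom and init script name are
	# examples and may differ on your system.
	emerge --sync
	emerge --ask --oneshot dev-db/postgresql-server

	# A minor release needs no dump/reload; restarting with the new
	# binaries is enough.
	/etc/init.d/postgresql-9.1 restart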

August 15, 2012
Aaron W. Swenson a.k.a. titanofold (homepage, stats, bugs)

So, apparently pgpool-II did a bit of a switcheroo some time ago that I wasn’t too careful about. But can you really blame me? pgpool-II’s documentation is among the worst I’ve seen. It’s a good thing they’ve commented their code, or I wouldn’t have been able to do some things cleanly.

You now get a much nicer initscript that actually works, and the ebuild actually installs the SQL scripts mentioned in the aforementioned terrible documentation. In general, I’m fairly happy with the results now.

The next thing to work on is getting pgpoolAdmin into the tree as well, and writing documentation so that people can actually understand how to accomplish a task without first having to translate what has been written. I’ve been working on this for a week, and I need help from more experienced pgpool-II users. I’ve started a rather bare wiki page.

Seriously. “Step 4. The file is confirmed.” What the hell is that supposed to mean? Who’s confirming it? Me? A program? Which file?!

At least it’s easier to read than MySQL’s documentation.

Addendum: I forgot to mention that you’ll need to run emerge --sync, and that the package you’re looking for is dev-db/pgpool2-3.2.0-r1.
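
In other words, something along these lines should get you there:

	# Sync the tree and install the updated pgpool-II ebuild.
	emerge --sync
	emerge --ask =dev-db/pgpool2-3.2.0-r1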

August 14, 2012
Sven Vermeulen a.k.a. swift (homepage, stats, bugs)

I wrote a small section on how to add additional roles to the SELinux policy offered by Gentoo Hardened. Whereas the default policy that we provide only offers a few basic roles, any policy administrator can define additional roles for the system.

By using additional roles, you can grant users administrative rights over particular services without the risk of them elevating their privileges to root (plus the sysadm role). You can even allow them to get a root shell while remaining confined within their own domain (and role).
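
As a rough illustration of what that buys you (the dbadm_u/dbadm_r names below are examples, not necessarily the ones used in the guide), mapping a login to a custom role with the standard SELinux userspace tools could look like:

	# Define a SELinux user allowed the staff and (example) dbadm roles.
	semanage user -a -R "staff_r dbadm_r" dbadm_u

	# Map a Linux account onto that SELinux user.
	semanage login -a -s dbadm_u john

	# The user can then switch into the service-admin role without
	# ever touching sysadm_r.
	newrole -r dbadm_r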

August 13, 2012
Tomáš Chvátal a.k.a. scarabeus (homepage, stats, bugs)

Well, not in terms of intentions and goals, as cooperation between these nice projects is still not perfect, but as applications on our beloved Gentoo.

Today I wasted a bit of my time writing wrappers for the openoffice-bin package, so it can now be installed next to libreoffice or libreoffice-bin.

It’s insane how few lines of bash can solve stuff :-)

	# remove the soffice symlink; ED already includes EPREFIX,
	# so it must not be prepended again
	rm -rf "${ED}/usr/bin/soffice"

	# replace all oo* symlinks with small wrapper scripts in order to
	# nicely cope with libreoffice
	cd "${ED}/usr/bin/" || die
	for i in oo*; do
		[[ ${i} == ooffice ]] && continue

		rm "${i}"
		# unquoted EOF: ${EPREFIX} and ${i/oo/s} expand now, at install
		# time, while the escaped \$@ stays literal so the wrapper
		# forwards its arguments to the real binary
		cat > "${i}" << EOF
#!/usr/bin/env bash
pushd "${EPREFIX}/usr/lib64/openoffice/program" > /dev/null
./${i/oo/s} "\$@"
popd > /dev/null
EOF
		chmod +x "${i}"
	done

Portage can’t handle the blockers without revbumps/rebuilds, so I updated them in the live/branch ebuilds; with the next releases (3.5 next week, 3.6 in two weeks) there won’t be any collisions, and you will be able to enjoy comparing these two suites against each other. For the binary package I was just too lazy, so just re-emerge 3.5.5.3 if you want to enjoy this.
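
If you want the new wrappers on an existing install right away, a one-shot re-merge of the binary package (presumably app-office/libreoffice-bin at the version mentioned above) should do it:

	# Reinstall the current binary package so the new wrappers land.
	emerge --ask --oneshot =app-office/libreoffice-bin-3.5.5.3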

Note: plugin installation and handling is still not fully tested in situations where you have both implementations around, but the eclass was written with that in mind, so just try it and report bugs if it does not work. There is one case I did not test at all: what happens when one removes one of the implementations and tries to reinstall the extension? It should properly register itself under the only remaining one, but the files will still be kept in /usr/lib64/IMPLEMENTATION/…/extensions/install/ and registered in the user config directory. Maybe we could run this deregistration on package uninstall (portage can detect those)…

A picture to replace the last paragraph and show how nicely it works:
lo and aoo together

Theo Chatzimichos a.k.a. tampakrap (homepage, stats, bugs)