
Contributors:
. Aaron W. Swenson
. Agostino Sarubbo
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Alice Ferrazzi
. Andreas K. Hüttel
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Doug Goldstein
. Eray Aslan
. Erik Mackdanz
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Miniconf 2016
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Göktürk Yüksek
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Jan Kundrát
. Jason A. Donenfeld
. Jeffrey Gardner
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Kristian Fiskerstrand
. Liam McLoughlin
. Luca Barbato
. Marek Szuba
. Mart Raudsepp
. Matt Turner
. Matthew Thode
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael G. Martins
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Sven Vermeulen
. Sven Wegener
. Tom Wijsman
. Tomáš Chvátal
. Yury German
. Zack Medico

Last updated:
November 23, 2017, 05:03 UTC

Disclaimer:
Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

November 17, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Python’s M2Crypto fails to compile (November 17, 2017, 21:29 UTC)

When updating my music server, I ran into a compilation error on Python’s M2Crypto. The error message was a little bit strange and not directly related to the package itself:

fatal error: openssl/ecdsa.h: No such file or directory

Obviously, that error is generated from OpenSSL and not directly within M2Crypto. Remembering that there are some known problems with the “bindist” USE flag, I took a look at OpenSSL and OpenSSH. Indeed, “bindist” was set. Simply removing the USE flag from those two packages took care of the problem:


# grep -i 'openssh\|openssl' /etc/portage/package.use
>=dev-libs/openssl-1.0.2m -bindist
net-misc/openssh -bindist

In this case, the problem makes sense based on the error message. The error indicated that the Elliptic Curve Digital Signature Algorithm (ECDSA) header was not found. The previously-linked page about the “bindist” USE flag clearly states that having bindist set will “Disable/Restrict EC algorithms (as they seem to be patented)”.

Cheers,
Nathan Zachary

November 10, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Latex 001 (November 10, 2017, 06:03 UTC)

This is mainly a memo to remind myself which packages are needed for Japanese.

Usually I use xetex with cjk to write LaTeX documents; on Gentoo this is done by adding xetex and cjk support to texlive:

app-text/texlive cjk xetex
app-text/texlive-core cjk xetex

For platex, you also need to install

dev-texlive/texlive-langjapanese

Google-Summer-of-Code-summary week08 (November 10, 2017, 06:03 UTC)

Google Summer of Code summary week 08

What I did in this week 08 summary:

elivepatch:

  • Working with tempfile to keep the uncompressed configuration file, using the appropriate tempfile module (see the sketch after this list).
  • Refactoring and code cleaning.
  • Check if the ebuild is present in the overlay before trying to merge it.
  • lpatch added locally
  • fix ebuild directory
  • Static patch and config filename on send; this is useful for isolating the API requests, using only the uuid as session identifier
  • Removed livepatchStatus and lpatch class configurations, because we need a request to be identified only by its own UUID, for isolating the transaction.
  • Return early in case a request with the same uuid is already present.
  • Fixed livepatch output name to reflect the statically set patch name
  • module install removed as not needed
  • debug option is now also copying the build.log to the uuid directory, for investigating failed attempts
  • Refactored uuid_dir with uuid in the case where uuid_dir does not actually contain the full path
  • Adding debug_info to the configuration file if not already present; this setting is only needed for the livepatch creation (not tested yet)
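
A minimal sketch of the tempfile idea, assuming the configuration arrives as a gzipped file (the function name and paths are illustrative, not the actual elivepatch code):

import gzip
import shutil
import tempfile

def uncompress_config(gz_path):
    # Decompress a gzipped kernel config (e.g. a copy of /proc/config.gz)
    # into a named temporary file and return the open file object.
    # delete=False keeps the name valid until the caller removes it.
    tmp = tempfile.NamedTemporaryFile(prefix='elivepatch-', suffix='.config',
                                      delete=False)
    with gzip.open(gz_path, 'rb') as src:
        shutil.copyfileobj(src, tmp)
    tmp.flush()
    return tmp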

Kpatch:

  • Investigate DWARF attribute problems

What I need to do next time:

  • Finish the function for downloading the livepatch to the client
  • Testing elivepatch
  • Implementing the CVE patch uploader
  • Installing elivepatch to the Gentoo server
  • Fix kpatch-build to work automatically with gentoo-sources
  • Add more features to elivepatch

day 33

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • Working with tempfile to keep the uncompressed configuration file, using the appropriate tempfile module.
  • Refactoring and code cleaning.
  • Check if the ebuild is present in the overlay before trying to merge it.

what i will do next time?

  • testing and improving elivepatch

day 34

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • lpatch added locally
  • fix ebuild directory
  • Static patch and config filename on send; this is useful for isolating the api requests, using only the uuid as session identifier
  • Removed livepatchStatus and lpatch class configurations, because we need a request to be identified only by its own UUID, for isolating the transaction.
  • Return early in case a request with the same uuid is already present.

we still have a problem with "can't find special struct alt_instr size."

I will investigate it tomorrow

what i will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

day 35

What was my plan for today?

  • testing and improving elivepatch

What i did today?

Today I investigated the information that is missing from the livepatch when we don't have CONFIG_DEBUG_INFO=y in our kernel configuration. When debug_info is not in the kernel configuration we usually get missing alt_instr errors from kpatch-build, and this stops elivepatch from creating a livepatch. This DEBUG_INFO is only needed for making the livepatch and doesn't have to be set in production as well (but not tested yet).
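
A rough sketch of the "add debug_info if missing" step, assuming a plain-text kernel .config (the function name is hypothetical, not the actual elivepatch code):

import re

def ensure_debug_info(config_path):
    # kpatch-build needs the debug symbols, so force CONFIG_DEBUG_INFO=y,
    # replacing either "# CONFIG_DEBUG_INFO is not set" or an existing value.
    with open(config_path) as f:
        lines = f.readlines()
    option = re.compile(r'^#?\s*CONFIG_DEBUG_INFO[ =]')
    lines = [line for line in lines if not option.match(line)]
    lines.append('CONFIG_DEBUG_INFO=y\n')
    with open(config_path, 'w') as f:
        f.writelines(lines)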

Kpatch needs some special section data to find where to inject the livepatch. The existence of this special section data is checked by kpatch-build in the given vmlinux file. The vmlinux file needs CONFIG_DEBUG_INFO=y to provide the debug symbols containing the special section data. This special section data is found like this:

# Set state if name matches
a == 0 && /DW_AT_name.* alt_instr[[:space:]]*$/ {a = 1; next}
b == 0 && /DW_AT_name.* bug_entry[[:space:]]*$/ {b = 1; next}
p == 0 && /DW_AT_name.* paravirt_patch_site[[:space:]]*$/ {p = 1; next}
e == 0 && /DW_AT_name.* exception_table_entry[[:space:]]*$/ {e = 1; next}

DW_AT_name is the DWARF attribute (AT) for the name of the declaration as it appears in the source program.

<1><3a75de>: Abbrev Number: 118 (DW_TAG_variable)
   <3a75df>   DW_AT_name        : (indirect string, offset: 0x2878c):
__alt_instructions
   <3a75e3>   DW_AT_decl_file   : 1
   <3a75e4>   DW_AT_decl_line   : 271
   <3a75e6>   DW_AT_type        : <0x3a75d3>
   <3a75ea>   DW_AT_external    : 1
   <3a75ea>   DW_AT_declaration : 1
  • decl_file is the file containing the source declaration
  • type is the type of declaration
  • external means that the variable is visible outside of its enclosing compilation unit
  • declaration indicates that this entry represents a non-defining declaration of an object

More information can be found in the DWARF standard: http://dwarfstd.org/doc/Dwarf3.pdf

After kpatch-build identifies the various name attributes,

it will rebuild the original kernel and the patched kernel.

Using the create-diff-object program, kpatch-build will extract the new and modified ELF sections by using the DWARF special section data.

Finally, it will create the patch module using the create-kpatch-module program and the dynamically linked object relocation (dynrelas) symbol section changes.

[continue...]

what i will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

day 36

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • Fixed livepatch output name to reflect the statically set patch name
  • module install removed as not needed
  • debug option is now also copying the build.log to the uuid directory, for investigating failed attempts
  • Refactored uuid_dir with uuid in the case where uuid_dir does not actually contain the full path
  • Adding debug_info to the configuration file if not already present; this setting is only needed for the livepatch creation (not tested yet)

what i will do next time?

  • testing and improving elivepatch

day 37


Google Summer of Code day41-51 (November 10, 2017, 05:59 UTC)

Google Summer of Code day 41

What was my plan for today?

  • testing and improving elivepatch

What i did today?

elivepatch work: - working on making the code for sending an unknown number of files with Werkzeug
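
A minimal sketch of how the server side could accept an arbitrary number of uploaded patches with Flask/Werkzeug, assuming the client posts them all under the same 'patch' field (the route and paths are illustrative, not the actual elivepatch API):

import os
from flask import Flask, request

app = Flask(__name__)

@app.route('/elivepatch/api/v1.0/send_files', methods=['POST'])
def send_files():
    # request.files is a Werkzeug MultiDict; getlist() returns every file
    # posted under the 'patch' key, in the order the client sent them.
    patches = request.files.getlist('patch')
    config = request.files['config']
    workdir = '/tmp/elivepatch-upload'
    os.makedirs(workdir, exist_ok=True)
    for number, patch in enumerate(patches, start=1):
        patch.save(os.path.join(workdir, '%02d.patch' % number))
    config.save(os.path.join(workdir, 'config'))
    return 'received %d patches\n' % len(patches)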

Making and Testing patch manager

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 42

What was my plan for today?

  • testing and improving elivepatch

What i did today?

elivepatch work: * Fixing sending multiple files using requests. I'm making the script for incrementally adding patches, but I'm stuck on this requests problem.

It looks like I cannot concatenate files like this:

files = {'patch': ('01.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         ('02.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         ('03.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'config': ('config', open(temporary_config.name, 'rb'), 'multipart/form-data', {'Expires': '0'})}

or like this:

files = {'patch': ('01.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'patch': ('02.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'patch': ('03.patch', patch_01, 'multipart/form-data', {'Expires': '0'}),
         'config': ('config', open(temporary_config.name, 'rb'), 'multipart/form-data', {'Expires': '0'})}

as I'm getting AttributeError: 'tuple' object has no attribute 'read'.

It looks like requests cannot send data with the same key, but the server part of flask-restful is using parser.add_argument('patch', action='append') to concatenate more arguments together, and it suggests sending the information like this: curl http://api.example.com -d "patch=bob" -d "patch=sue" -d "patch=joe"

Unfortunately, as of now I'm still trying to understand how I can do it with requests.
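
For reference, requests can repeat a field name if the files argument is a list of tuples instead of a dict; a rough sketch of the client side under that assumption (file names and URL are illustrative):

import requests

with open('01.patch', 'rb') as patch_01, \
     open('02.patch', 'rb') as patch_02, \
     open('config', 'rb') as config:
    # A list of (field, (filename, fileobj, content_type)) tuples lets the
    # 'patch' field appear several times, which a dict cannot express.
    files = [
        ('patch', ('01.patch', patch_01, 'application/octet-stream')),
        ('patch', ('02.patch', patch_02, 'application/octet-stream')),
        ('config', ('config', config, 'application/octet-stream')),
    ]
    response = requests.post('http://localhost:5000/elivepatch/api/v1.0/send_files',
                             files=files)
    print(response.status_code)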

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 43

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • Changed send_file to manage more patches at a time
  • Refactored function names to reflect the incremental patches change
  • Made a function to list patches in the eapply_user portage user patches directory and in the temporary patches folder, reflecting the applied livepatches

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 44

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • renamed the command client function as an internal function
  • put together the client incremental patch sender function

what i will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 45

What was my plan for today?

  • testing and improving elivepatch

What i did today?

Meeting with mentor summary

elivepatch work:

  • working on the incremental patch feature design and implementation
  • putting patch files under /var/run/elivepatch
  • ordering patches by number (see the sketch after this list)
  • cleaning the folder when the machine is restarted
  • sending the patches to the server in order
  • cleaning the client terminal output by catching the exceptions
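
A small sketch of listing the stored patches in numeric order before replaying them to the server (the directory comes from the notes above; the function name is hypothetical):

import os
import re

PATCH_DIR = '/var/run/elivepatch'

def ordered_patches(patch_dir=PATCH_DIR):
    # Keep only files named like 01.patch, 02.patch, ... and sort them by
    # their numeric prefix so they are sent to the server in order.
    names = [name for name in os.listdir(patch_dir)
             if re.match(r'^\d+\.patch$', name)]
    names.sort(key=lambda name: int(name.split('.')[0]))
    return [os.path.join(patch_dir, name) for name in names]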

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 46

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • testing elivepatch with incremental patches
  • code refactoring

what i will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 47

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • testing elivepatch with incremental patches
  • trying to find some way to make testing faster; I'm thinking of something like unit tests and integration testing
  • merged build_livepatch with send_files, as it reduces the steps for making the livepatch and helps keeping it more isolated
  • going on writing the incremental patch feature

Building with kpatch-build usually takes too much time, which makes testing the parts where building a livepatch is needed long and complicated. It would probably be better to add some unit/integration tests to keep each task/function under control without needing to build livepatches every time. Any thoughts?

what i will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 48

What was my plan for today?

  • making the first draft of incremental patches feature

What i did today?

elivepatch work:

  • added PORTAGE_CONFIGROOT for setting the path from which to get the incremental patches (see the sketch after this list)
  • cannot download kernel sources to /usr/portage/distfiles/
  • saving the created livepatch and patch on the client, but sending only the incremental patches and patch to the elivepatch server
  • cleaned the output from the bash command
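
A rough sketch of how the user patch directory could be derived from PORTAGE_CONFIGROOT, assuming the eapply_user layout of ${PORTAGE_CONFIGROOT}/etc/portage/patches/<category>/<package> (the function name and default category are illustrative):

import os

def user_patch_dir(package, category='sys-kernel'):
    # eapply_user looks for user patches under
    # ${PORTAGE_CONFIGROOT}/etc/portage/patches/<category>/<package>.
    configroot = os.environ.get('PORTAGE_CONFIGROOT', '/')
    return os.path.join(configroot, 'etc/portage/patches', category, package)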

what i will do next time?

  • make the server side incremental patch part and test it

Google Summer of Code day 49

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What i did today?

1) We need a way of having old genpatches. mpagano made a script for saving all the genpatches, and they are saved here: http://dev.gentoo.org/~mpagano/genpatches/tarballs/ I think we can redirect the ebuild in our overlay to get the tarballs from there.

2) kpatch can work with initrd

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 50

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What i did today?

  • refactoring code
  • starting to write the first draft of the automatic kernel livepatching system

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 51

What was my plan for today?

  • testing and improving elivepatch incremental patches feature

What i did today?

  • checking eapply_user; eapply_user is using patch

  • writing elivepatch wiki page https://wiki.gentoo.org/wiki/User:Aliceinwire/elivepatch

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day31-40 (November 10, 2017, 05:57 UTC)

Google Summer of Code day 31

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Dispatcher.py:

  • fixed comments
  • added a static method for returning the kernel path
  • Added todo
  • fixed how to make directory with uuid

livepatch.py:

  • fixed docstring
  • removed sudo on kpatch-build
  • fixed comments
  • merged the ebuild commands into one in build livepatch

restful.py:

  • fixed comment

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 32

What was my plan for today?

  • testing and improving elivepatch

What I did today?

client checkers:

  • used os.path.join for the ungz function
  • implemented the temporary folder using python tempfile

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 33

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Working with tempfile for keeping the uncompressed configuration file using the appropriate tempfile module.
  • Refactoring and code cleaning.
  • check if the ebuild is present in the overlay before trying to merge it.

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 34

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • lpatch added locally
  • fix ebuild directory
  • Static patch and config filename on send; this is useful for isolating the API requests, using only the uuid as session identifier
  • Removed livepatchStatus and lpatch class configurations, because we need a request to be identified only by its own UUID, for isolating the transaction.
  • Return early in case a request with the same uuid is already present.

we still have that problem with "can't find special struct alt_instr size."

I will investigate it tomorrow

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 35

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Kpatch needs some special section data for finding where to inject the livepatch. The existence of this special section data is checked by kpatch-build in the given vmlinux file. The vmlinux file needs CONFIG_DEBUG_INFO=y to provide the debug symbols containing the special section data. This special section data is found like this:

# Set state if name matches
a == 0 && /DW_AT_name.* alt_instr[[:space:]]*$/ {a = 1; next}
b == 0 && /DW_AT_name.* bug_entry[[:space:]]*$/ {b = 1; next}
p == 0 && /DW_AT_name.* paravirt_patch_site[[:space:]]*$/ {p = 1; next}
e == 0 && /DW_AT_name.* exception_table_entry[[:space:]]*$/ {e = 1; next}

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 36

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fixed livepatch output name to reflect the statically set patch name
  • module install removed as not needed
  • debug option is now also copying the build.log to the uuid directory, for investigating failed attempts
  • refactored uuid_dir with uuid in the case where uuid_dir does not actually contain the full path
  • adding debug_info to the configuration file if not already present; however, this setting is only needed for the livepatch creation (not tested yet)

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 37

What was my plan for today?

  • testing and improving elivepatch

What I did today?

  • Fix return message for cve option
  • Added different configuration example [4.9.30,4.10.17]
  • Catch Gentoo-sources not available error
  • Catch missing livepatch errors on the client

Tested kpatch with patches for kernel 4.9.30 and 4.10.17 and it worked without any problem. I checked with coverage to see which code is not used. I think we can maybe remove the cve option for now, as it is not implemented yet, and we could use a modular implementation for it, so the configuration could change. We need some documentation about elivepatch on the Gentoo wiki. We need some unit tests to make development smoother and make it simpler to check the working status with GitHub Travis.

I talked with kpatch creator and we got some feedback:

“this project could also be used for kpatch testing :)
imagine instead of just loading the .ko, the client was to kick off a series of tests and report back.”

“why bother a production or tiny machine when you might have a patch-building server”

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 38

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

What we will do next:

  • incremental patch tracking on the client side
  • CVE security vulnerability checker
  • dividing the repository
  • ebuild
  • documentation [optional]
  • modularity [optional]

Kpatch work:

  • Started to make the incremental patch feature
  • Tested kpatch for permission issue

what I will do next time?

  • testing and improving elivepatch
  • Investigating the missing information in the livepatch

Google Summer of Code day 39

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor summary

elivepatch work:

  • working on incremental patch features design and implementation
  • putting patch files under /var/run/elivepatch
  • ordering patch by numbers
  • cleaning folder when the machine is restarted
  • sending the patches to the server in order
  • cleaning client terminal output by catching the exceptions

what I will do next time?

  • testing and improving elivepatch

Google Summer of Code day 40

What was my plan for today?

  • testing and improving elivepatch

What I did today?

Meeting with mentor

summary of elivepatch work:

  • working with incremental patch manager
  • cleaning client terminal output by catching the exceptions

Making and Testing patch manager

what I will do next time?

  • testing and improving elivepatch

November 09, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google Summer of Code day26-30 (November 09, 2017, 11:14 UTC)

Google Summer of Code day 26

What was my plan for today?

  • testing and improving elivepatch

What i did today?

After discussion with my mentor, I need:

  • Availability and replicability of downloading kernel sources. [needed]
  • Represent the same kernel sources as the client kernel source (USE flags for now, and in the future user-added patches). [needed]
  • Support multiple requests at the same time. [needed]
  • Modularity for adding VM machine or container support. (in the future)

Created an overlay for keeping old gentoo-sources ebuilds, where we will keep retrocompatibility for old kernels: https://github.com/aliceinwire/gentoo-sources_overlay

Made a function to download and get old kernel sources and install the sources under the designated temporary folder.

what i will do next time?

  • Going on improving elivepatch

Google Summer of Code day 27

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • Fixed git clone of the gentoo-sources overlay under a temporary directory
  • Fixed download directories
  • Tested elivepatch

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 28

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • Testing elivepatch with multi threading
  • Removed the check for the uname kernel version, as the kernel version is now taken directly from the kernel configuration file header.
  • Starting kpatch-build under the output folder: because kpatch-build makes the livepatch under the $PWD folder, we start it under the uuid tmp folder and we get the livepatch from the uuid folder. This is useful for dealing with multi-threading.
  • Added some helper function for code clarity [not finished yet]
  • Refactored for code clarity [not finished yet]

For making elivepatch multithreaded we can simply change flask's app.run() to app.run(threaded=True).
This will make flask spawn a thread for each request (using the SocketServer.ThreadingMixIn class and baseWSGIserver) [1], but even if it works pretty nicely, the suggested way of threading is probably using gunicorn or uWSGI [2],
maybe by using flask-gunicorn: https://github.com/doobeh/flask-gunicorn
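
For reference, the two options mentioned above look roughly like this (the module name elivepatch_server is illustrative):

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # threaded=True makes the built-in development server spawn one thread
    # per request; fine for testing, while a WSGI server such as gunicorn
    # is the usual choice for production, e.g.:
    #   gunicorn -w 4 elivepatch_server:app
    app.run(threaded=True)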

what i will do next time?

  • testing and improving elivepatch

[1] https://docs.python.org/2/library/socketserver.html#SocketServer.ThreadingMixIn [2] http://flask.pocoo.org/docs/0.12/deploying/

Google Summer of Code day 29

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • Added some helper function for code clarity [not finished yet]
  • Refactored for code clarity [not finished yet]
  • Sent a pull request to kpatch for adding a dynamic output folder option https://github.com/dynup/kpatch/pull/718
  • Sent a pull request to kpatch for a small style fix https://github.com/dynup/kpatch/pull/719

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day 30

What was my plan for today?

  • testing and improving elivepatch

What i did today?

  • fixing uuid design

Moved the uuid generation to the client and read RFC 4122,

which states that "A UUID is 128 bits long, and requires no central registration process."

  • made a regex function for checking that the format of the uuid is actually correct (see the sketch below).
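
A minimal sketch of such a check, combining a regex with the standard uuid module (the function name is illustrative):

import re
import uuid

UUID_RE = re.compile(
    r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

def is_valid_uuid(value):
    # Cheap regex check on the canonical form first, then let the uuid
    # module parse it so malformed values are rejected.
    if not UUID_RE.match(value):
        return False
    try:
        uuid.UUID(value)
    except ValueError:
        return False
    return True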

what i will do next time?

  • testing and improving elivepatch

Google Summer of Code day21-25 (November 09, 2017, 10:42 UTC)

Google Summer of Code day 21

What was my plan for today?

  • Testing elivepatch
  • Getting the kernel version dynamically
  • Updating kpatch-build to work better with Gentoo

What I did today?

Fixing the problems pointed out by my mentor.

  • Fixed software copyright header
  • Fixed popen call
  • Used regex for checking file extension
  • Added debug option for the server
  • Changed reserved word shadowing variables
  • used os.path.join for link path
  • dynamically pass the kernel version

what I will do next time?
  • Testing elivepatch
  • work on making elivepatch support multiple calls

Google Summer of Code day 22

What was my plan for today?

  • Testing elivepatch
  • work on making elivepatch support multiple calls

What I did today?

  • working on sending the kernel version
  • working on dividing users by UUID (Universally unique identifier)

With the first connection, because the UUID is not generated yet, it will be created and returned by the elivepatch server. The client will store the UUID in shelve so that it can authenticate the next time it queries the server. By using python UUID I'm assigning a different UUID to each user. By using UUIDs we can start supporting multiple calls.
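
A minimal sketch of keeping that identifier in shelve on the client side (the database path, key, and function name are illustrative, not the actual elivepatch code):

import shelve
import uuid

def load_or_store_uuid(new_uuid=None, db_path='/var/lib/elivepatch/client.db'):
    # Return the UUID remembered from a previous run; otherwise store the one
    # handed back by the server (or generate one locally) and return it.
    with shelve.open(db_path) as db:
        if 'uuid' not in db:
            db['uuid'] = new_uuid or str(uuid.uuid4())
        return db['uuid']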

what I will do next time?
  • Testing elivepatch
  • Cleaning code

Google Summer of Code day 23

What was my plan for today?

  • Testing elivepatch
  • Cleaning code

What I did today?

  • commented some code part
  • First draft of UUID working

The elivepatch server needs to generate a UUID for a client connection and assign the UUID to each client; this is needed for managing multiple requests. The livepatch and config files will be generated in different folders for each client request and returned using the UUID. Made a working draft.

what I will do next time?
  • Cleaning code
  • testing it
  • Go on with programming and starting implementing the CVE

Google Summer of Code day 24

What was my plan for today?

  • Cleaning code
  • testing it
  • Go on with programming and starting implementing the CVE

What I did today?

  • Working with getting the Linux kernel version from configuration file
  • Working with parsing CVE repository

I could implement getting the Linux kernel version from the configuration file, and I'm working on parsing the CVE repository. It would also be a nice idea to work on the kpatch-build script to make it Gentoo compatible. But I also ran into one problem: we need to find a way to download old gentoo-sources ebuilds for building an old kernel, for the case where the server doesn't have the needed version of the kernel sources. This depends on how much backward compatibility we want to provide, and with old gentoo-sources we cannot assure that it is working, because old gentoo-sources used different versions of the Gentoo repository and eclasses. So that is a problem to discuss in the next days.
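
A small sketch of reading the kernel version from the configuration file header, assuming the usual "# Linux/x86 4.9.30 Kernel Configuration" header line of recent kernels (the function name is illustrative):

import re

def kernel_version_from_config(config_path):
    # Recent kernel .config files start with a header such as
    #   # Linux/x86 4.9.30 Kernel Configuration
    # so the version can be read without calling uname.
    header = re.compile(r'^# Linux\S*\s+(\S+)\s+Kernel Configuration')
    with open(config_path) as config:
        for line in config:
            match = header.match(line)
            if match:
                return match.group(1)
    return None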

what I will do next time?

  • work on the CVE repository
  • testing it

Google Summer of Code day 25

What was my plan for today?

  • Cleaning code
  • testing and improving elivepatch

What I did today?

As discussed with my mentor, I worked on unifying the patch and configuration file RESTful API calls, and made a function to standardize the subprocess commands.

what I will do next time?

  • testing it and improving elivepatch

November 02, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux on DELL XPS 13 9365 and 9360 (November 02, 2017, 17:33 UTC)

Since I received some positive feedback about my previous DELL XPS 9350 post, I am writing this summary about my recent experience in installing Gentoo Linux on a DELL XPS 13 9365.

The goals of these installation notes:

  • UEFI boot using Grub
  • Provide you with a complete and working kernel configuration
  • Encrypted disk root partition using LUKS
  • Be able to type your LUKS passphrase to decrypt your partition using your local keyboard layout

Grub & UEFI & Luks installation

This installation is a fully UEFI one using grub and booting an encrypted root partition (and home partition). I was happy to see that since my previous post, everything got smoother. So even if you can have this installation working using MBR, I don’t really see a point in avoiding UEFI now.

BIOS configuration

Just like with its ancestor, you should:

  • Turn off Secure Boot
  • Set SATA Operation to AHCI

Live CD

Once again, go for the latest SystemRescueCD (it’s Gentoo based, you won’t be lost) as it’s much more up to date and supports booting on UEFI. Make it a Live USB for example using unetbootin and the ISO on a vfat formatted USB stick.

NVME SSD disk partitioning

We’ll obviously use GPT with UEFI. I found that using gdisk was the easiest. The disk itself is found at /dev/nvme0n1. Here is the partition table I used:

  • 10 MiB UEFI BIOS partition (type EF02)
  • 500 MiB UEFI boot partition (type EF00)
  • 2 GiB swap partition
  • Linux root partition (rest of the disk, ~475 GiB)

The corresponding gdisk commands:

# gdisk /dev/nvme0n1

Command: o ↵
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y ↵

Command: n ↵
Partition Number: 1 ↵
First sector: ↵
Last sector: +10M ↵
Hex Code: EF02 ↵

Command: n ↵
Partition Number: 2 ↵
First sector: ↵
Last sector: +500M ↵
Hex Code: EF00 ↵

Command: n ↵
Partition Number: 3 ↵
First sector: ↵
Last sector: +2G ↵
Hex Code: 8200 ↵

Command: n ↵
Partition Number: 4 ↵
First sector: ↵
Last sector: ↵ (for rest of disk)
Hex Code: ↵

Command: p ↵
Disk /dev/nvme0n1: 1000215216 sectors, 476.9 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A73970B7-FF37-4BA7-92BE-2EADE6DDB66E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048           22527   10.0 MiB    EF02  BIOS boot partition
   2           22528         1046527   500.0 MiB   EF00  EFI System
   3         1046528         5240831   2.0 GiB     8200  Linux swap
   4         5240832      1000215182   474.4 GiB   8300  Linux filesystem

Command: w ↵
Do you want to proceed? (Y/N): Y ↵

No WiFi on the Live CD? No panic

Once again on my (old?) SystemRescueCD stick, the integrated Intel 8265/8275 wifi card is not detected.

So I used my old trick with my Android phone connected to my local WiFi as a USB modem which was detected directly by the live CD.

  • get your Android phone connected on your local WiFi (unless you want to use your cellular data)
  • plug in your phone using USB to your XPS
  • on your phone, go to Settings / More / Tethering & portable hotspot
  • enable USB tethering

Running ip addr will show the network card enp0s20f0u2 (for me at least), then if no IP address is set on the card, just ask for one:

# dhcpcd enp0s20f0u2

You have now access to the internet.

Proceed with installation

The only thing to prepare is to format the UEFI boot partition as FAT32. Do not worry about the UEFI BIOS partition (/dev/nvme0n1p1), grub will take care of it later.

# mkfs.vfat -F 32 /dev/nvme0n1p2

Do not forget to use cryptsetup to encrypt your /dev/nvme0n1p4 partition! In the rest of the article, I’ll be using its device mapper representation.

# cryptsetup luksFormat -s 512 /dev/nvme0n1p4
# cryptsetup luksOpen /dev/nvme0n1p4 root
# mkfs.ext4 /dev/mapper/root

Then follow the Gentoo handbook as usual for the stage3 related next steps. Make sure you mount and bind the following to your /mnt/gentoo LiveCD installation folder (the /sys binding is important for grub UEFI):

# mount -t proc none /mnt/gentoo/proc
# mount -o bind /dev /mnt/gentoo/dev
# mount -o bind /sys /mnt/gentoo/sys

make.conf settings

I strongly recommend using at least the following in your /etc/portage/make.conf:

GRUB_PLATFORM="efi-64"
INPUT_DEVICES="evdev synaptics"
VIDEO_CARDS="intel i965"

USE="bindist cryptsetup"

The GRUB_PLATFORM one is important for later grub setup and the cryptsetup USE flag will help you along the way.

fstab for SSD

Don’t forget to make sure the noatime option is used on your fstab for / and /home.

/dev/nvme0n1p2    /boot    vfat    noauto,noatime    1 2
/dev/nvme0n1p3    none     swap    sw                0 0
/dev/mapper/root  /        ext4    noatime   0 1

Kernel configuration and compilation

I suggest you use a recent >=sys-kernel/gentoo-sources-4.13.0 along with genkernel.

  • You can download my kernel configuration file (iptables, docker, luks & stuff included)
  • Put the kernel configuration file into the /etc/kernels/ directory (with a trailing s)
  • Rename the configuration file with the exact version of your kernel

Then you’ll need to configure genkernel to add luks support, firmware files support and keymap support if your keyboard layout is not QWERTY.

In your /etc/genkernel.conf, change the following options:

LUKS="yes"
FIRMWARE="yes"
KEYMAP="1"

Then run genkernel all to build your kernel and luks+firmware+keymap aware initramfs.

Grub UEFI bootloader with LUKS and custom keymap support

Now it’s time for the grub magic to happen so you can boot your wonderful Gentoo installation using UEFI and type your password using your favourite keyboard layout.

  • make sure your boot vfat partition is mounted on /boot
  • edit your /etc/default/grub configuration file with the following:
GRUB_CMDLINE_LINUX="crypt_root=/dev/nvme0n1p4 keymap=fr"

This will allow your initramfs to know it has to read the encrypted root partition from the given partition and to prompt for its password in the given keyboard layout (french here).

Now let’s install the grub UEFI boot files and setup the UEFI BIOS partition.

# grub-install --efi-directory=/boot --target=x86_64-efi /dev/nvme0n1
Installing for x86_64-efi platform.
Installation finished. No error reported

It should report no error, then we can generate the grub boot config:

# grub-mkconfig -o /boot/grub/grub.cfg

You’re all set!

You will get a gentoo UEFI boot option, you can disable the Microsoft Windows one from your BIOS to get straight to the point.

Hope this helped!

October 16, 2017
Greg KH a.k.a. gregkh (homepage, bugs)
Linux Kernel Community Enforcement Statement FAQ (October 16, 2017, 09:05 UTC)

Based on the recent Linux Kernel Community Enforcement Statement and the article describing the background and what it means, here are some Questions/Answers to help clear things up. These are based on questions that came up when the statement was discussed among the initial round of over 200 different kernel developers.

Q: Is this changing the license of the kernel?

A: No.

Q: Seriously? It really looks like a change to the license.

A: No, the license of the kernel is still GPLv2, as before. The kernel developers are providing certain additional promises that they encourage users and adopters to rely on. And by having a specific acking process it is clear that those who ack are making commitments personally (and perhaps, if authorized, on behalf of the companies that employ them). There is nothing that says those commitments are somehow binding on anyone else. This is exactly what we have done in the past when some but not all kernel developers signed off on the driver statement.

Q: Ok, but why have this “additional permissions” document?

A: In order to help address problems caused by current and potential future copyright “trolls” aka monetizers.

Q: Ok, but how will this help address the “troll” problem?

A: “Copyright trolls” use the GPL-2.0’s immediate termination and the threat of an immediate injunction to turn an alleged compliance concern into a contract claim that gives the troll an automatic claim for money damages. The article by Heather Meeker describes this quite well, please refer to that for more details. If even a short delay is inserted for coming into compliance, that delay disrupts this expedited legal process.

By simply saying, “We think you should have 30 days to come into compliance”, we undermine that “immediacy” which supports the request to the court for an immediate injunction. The threat of an immediate injunction was used to get the companies to sign contracts. Then the troll goes back after the same company for another known violation shortly after and claims they’re owed the financial penalty for breaking the contract. Signing contracts to pay damages to financially enrich one individual is completely at odds with our community’s enforcement goals.

We are showing that the community is not out for financial gain when it comes to license issues – though we do care about the company coming into compliance.  All we want is the modifications to our code to be released back to the public, and for the developers who created that code to become part of our community so that we can continue to create the best software that works well for everyone.

This is all still entirely focused on bringing the users into compliance. The 30 days can be used productively to determine exactly what is wrong, and how to resolve it.

Q: Ok, but why are we referencing GPL-3.0?

A: By using the terms from the GPLv3 for this, we use a very well-vetted and understood procedure for granting the opportunity to come fix the failure and come into compliance. We benefit from many months of work to reach agreement on a termination provision that worked in legal systems all around the world and was entirely consistent with Free Software principles.

Q: But what is the point of the “non-defensive assertion of rights” disclaimer?

A: If a copyright holder is attacked, we don’t want or need to require that copyright holder to give the party suing them an opportunity to cure. The “non-defensive assertion of rights” is just a way to leave everything unchanged for a copyright holder that gets sued.  This is no different a position than what they had before this statement.

Q: So you are ok with using Linux as a defensive copyright method?

A: There is a current copyright troll problem that is undermining confidence in our community – where a “bad actor” is attacking companies in a way to achieve personal gain. We are addressing that issue. No one has asked us to make changes to address other litigation.

Q: Ok, this document sounds like it was written by a bunch of big companies, who is behind the drafting of it and how did it all happen?

A: Grant Likely, the chairman at the time of the Linux Foundation’s Technical Advisory Board (TAB), wrote the first draft of this document when the first copyright troll issue happened a few years ago. He did this as numerous companies and developers approached the TAB asking that the Linux kernel community do something about this new attack on our community. He showed the document to a lot of kernel developers and a few company representatives in order to get feedback on how it should be worded. After the troll seemed to go away, this work got put on the back-burner. When the copyright troll showed back up, along with a few other “copycat” like individuals, the work on the document was started back up by Chris Mason, the current chairman of the TAB. He worked with the TAB members, other kernel developers, lawyers who have been trying to defend these claims in Germany, and the TAB members’ Linux Foundation’s lawyers, in order to rework the document so that it would actually achieve the intended benefits and be useful in stopping these new attacks. The document was then reviewed and revised with input from Linus Torvalds and finally a document that the TAB agreed would be sufficient was finished. That document was then sent to over 200 of the most active kernel developers for the past year by Greg Kroah-Hartman to see if they, or their company, wished to support the document. That produced the initial “signatures” on the document, and the acks of the patch that added it to the Linux kernel source tree.

Q: How do I add my name to the document?

A: If you are a developer of the Linux kernel, simply send Greg a patch adding your name to the proper location in the document (sorting the names by last name), and he will be glad to accept it.

Q: How can my company show its support of this document?

A: If you are a developer working for a company that wishes to show that they also agree with this document, have the developer put the company name in ‘(’ ‘)’ after the developer’s name. This shows that both the developer, and the company behind the developer are in agreement with this statement.

Q: How can a company or individual that is not part of the Linux kernel community show its support of the document?

A: Become part of our community! Send us patches, surely there is something that you want to see changed in the kernel. If not, wonderful, post something on your company web site, or personal blog in support of this statement, we don’t mind that at all.

Q: I’ve been approached by a copyright troll for Netfilter. What should I do?

A: Please see the Netfilter FAQ here for how to handle this

Q: I have another question, how do I ask it?

A: Email Greg or the TAB, and they will be glad to help answer them.

Linux Kernel Community Enforcement Statement (October 16, 2017, 09:00 UTC)

By Greg Kroah-Hartman, Chris Mason, Rik van Riel, Shuah Khan, and Grant Likely

The Linux kernel ecosystem of developers, companies and users has been wildly successful by any measure over the last couple decades. Even today, 26 years after the initial creation of the Linux kernel, the kernel developer community continues to grow, with more than 500 different companies and over 4,000 different developers getting changes merged into the tree during the past year. As Greg always says every year, the kernel continues to change faster this year than the last, this year we were running around 8.5 changes an hour, with 10,000 lines of code added, 2,000 modified, and 2,500 lines removed every hour of every day.

The stunning growth and widespread adoption of Linux, however, also requires ever evolving methods of achieving compliance with the terms of our community’s chosen license, the GPL-2.0. At this point, there is no lack of clarity on the base compliance expectations of our community. Our goals as an ecosystem are to make sure new participants are made aware of those expectations and the materials available to assist them, and to help them grow into our community.  Some of us spend a lot of time traveling to different companies all around the world doing this, and lots of other people and groups have been working tirelessly to create practical guides for everyone to learn how to use Linux in a way that is compliant with the license. Some of these activities include:

Unfortunately the same processes that we use to assure fulfillment of license obligations and availability of source code can also be used unjustly in trolling activities to extract personal monetary rewards. In particular, issues have arisen as a developer from the Netfilter community, Patrick McHardy, has sought to enforce his copyright claims in secret and for large sums of money by threatening or engaging in litigation. Some of his compliance claims are issues that should and could easily be resolved. However, he has also made claims based on ambiguities in the GPL-2.0 that no one in our community has ever considered part of compliance.  

Examples of these claims have been distributing over-the-air firmware, requiring a cell phone maker to deliver a paper copy of the source code offer letter; claiming the source code server must be set up with a download speed as fast as the binary server based on the “equivalent access” language of Section 3; requiring the GPL-2.0 to be delivered in a local language; and many others.

How he goes about this activity was recently documented very well by Heather Meeker.

Numerous active contributors to the kernel community have tried to reach out to Patrick to have a discussion about his activities, to no response. Further, the Netfilter community suspended Patrick from contributing for violations of their principles of enforcement. The Netfilter community also published their own FAQ on this matter.

While the kernel community has always supported enforcement efforts to bring companies into compliance, we have never even considered enforcement for the purpose of extracting monetary gain.  It is not possible to know an exact figure due to the secrecy of Patrick’s actions, but we are aware of activity that has resulted in payments of at least a few million Euros.  We are also aware that these actions, which have continued for at least four years, have threatened the confidence in our ecosystem.

Because of this, and to help clarify what the majority of Linux kernel community members feel is the correct way to enforce our license, the Technical Advisory Board of the Linux Foundation has worked together with lawyers in our community, individual developers, and many companies that participate in the development of, and rely on Linux, to draft a Kernel Enforcement Statement to help address both this specific issue we are facing today, and to help prevent any future issues like this from happening again.

A key goal of all enforcement of the GPL-2.0 license has and continues to be bringing companies into compliance with the terms of the license. The Kernel Enforcement Statement is designed to do just that.  It adopts the same termination provisions we are all familiar with from GPL-3.0 as an Additional Permission giving companies confidence that they will have time to come into compliance if a failure is identified. Their ability to rely on this Additional Permission will hopefully re-establish user confidence and help direct enforcement activity back to the original purpose we have all sought over the years – actual compliance.  

Kernel developers in our ecosystem may put their own acknowledgement to the Statement by sending a patch to Greg adding their name to the Statement, like any other kernel patch submission, and it will be gladly merged. Those authorized to ‘ack’ on behalf of their company may add their company name in (parenthesis) after their name as well.

Note, a number of questions did come up when this was discussed with the kernel developer community. Please see Greg’s FAQ post answering the most common ones if you have further questions about this topic.

October 13, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Gentoo Linux listed on RethinkDB’s website (October 13, 2017, 08:22 UTC)

 

The rethinkdb website has (finally) been updated and Gentoo Linux is now listed on the installation page!

Meanwhile, we have bumped the ebuild to version 2.3.6 with fixes for building on gcc-6 thanks to Peter Levine who kindly proposed a nice PR on github.

September 27, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Important!

This is a workaround for a FreeType/fontconfig problem, but may be applicable in other cases as well. For Gentoo users, the related bug is 631502.

Recently, after updating Mozilla Firefox to version 52 or later (55.0.2, in my case), and Mozilla Thunderbird to version 52 or later (52.3.0, in my case), I found that fonts were rendering horribly under Linux. It looked essentially like there was no anti-aliasing or hinting at all.

Come to find out, this was due to a change in the content rendering engine, which is briefly mentioned in the release notes for Firefox 52 (but it also applies to Thunderbird). Basically, in Linux, the default engine changed from cairo to Google’s Skia.

Ugly fonts in Firefox and Thunderbird under Linux - skia and cairo

For each application, the easiest method for getting the fonts to render nicely again is to make two changes directly in the configuration editor. To do so in Firefox, simply go to the address bar and type about:config. Within Thunderbird, it can be launched by going to Menu > Preferences > Advanced > Config Editor. Once there, the two keys that need to change are:

gfx.canvas.azure.backends
gfx.content.azure.backends

They likely have values of “skia” or a comma-separated list with “skia” being the first value. On my Linux hosts, I changed the value from skia back to cairo, restarted the applications, and all was again right in the world (or at least in the Mozilla font world 😛 ).

Hope that helps.

Cheers,
Zach

September 26, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux Userspace 2.7 (September 26, 2017, 12:50 UTC)

A few days ago, Jason "perfinion" Zaman stabilized the 2.7 SELinux userspace on Gentoo. This release has quite a few new features, which I'll cover in later posts, but for distribution packagers the main change is that the userspace now has many more components to package. The project has split up the policycoreutils package in separate packages so that deployments can be made more specific.

Let's take a look at all the various userspace packages again, learn what their purpose is, so that you can decide if they're needed or not on a system. Also, when I cover the contents of a package, be aware that it is based on the deployment on my system, which might or might not be a complete installation (as with Gentoo, different USE flags can trigger different package deployments).

September 11, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Authenticating with U2F (September 11, 2017, 16:25 UTC)

In order to further secure access to my workstation, after the switch to Gentoo sources, I now enabled two-factor authentication through my Yubico U2F USB device. Well, at least for local access - remote access through SSH requires both userid/password as well as the correct SSH key, by chaining authentication methods in OpenSSH.

Enabling U2F on (Gentoo) Linux is fairly easy. The various guides online which talk about the pam_u2f setup are indeed correct that it is fairly simple. For completeness sake, I've documented what I know on the Gentoo Wiki, as the pam_u2f article.

September 06, 2017
Greg KH a.k.a. gregkh (homepage, bugs)
4.14 == This year's LTS kernel (September 06, 2017, 14:41 UTC)

As the 4.13 release has now happened, the merge window for the 4.14 kernel release is now open. I mentioned this many weeks ago, but as the word doesn’t seem to have gotten very far based on various emails I’ve had recently, I figured I need to say it here as well.

So, here it is officially, 4.14 should be the next LTS kernel that I’ll be supporting with stable kernel patch backports for at least two years, unless it really is a horrid release and has major problems. If so, I reserve the right to pick a different kernel, but odds are, given just how well our development cycle has been going, that shouldn’t be a problem (although I guess I just doomed it now…)

As always, if people have questions about this, email me and I will be glad to discuss it, or talk to me in person next week at the LinuxCon^WOpenSourceSummit or Plumbers conference in Los Angeles, or at any of the other conferences I’ll be at this year (ELCE, Kernel Recipes, etc.)

September 04, 2017
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
A Late GUADEC 2017 Post (September 04, 2017, 03:20 UTC)

It’s been a little over a month since I got back from Manchester, and this post should’ve come out earlier but I’ve been swamped.

The conference was absolutely lovely, the organisation was 110% on point (serious kudos, I know first hand how hard that is). Others on Planet GNOME have written extensively about the talks, the social events, and everything in between that made it a great experience. What I would like to write about is why this year’s GUADEC was special to me.

GNOME turning 20 years old is obviously a large milestone, and one of the main reasons I wanted to make sure I was at Manchester this year. There were many occasions to take stock of how far we had come, where we are, and most importantly, to reaffirm who we are, and why we do what we do.

And all of this made me think of my own history with GNOME. In 2002/2003, Nat and Miguel came down to Bangalore to talk about some of the work they were doing. I know I wasn’t the only one who found their energy infectious, and at Linux Bangalore 2003, they got on stage, just sat down, and started hacking up a GtkMozEmbed-based browser. The idea itself was fun, but what I took away — and I know I wasn’t the only one — is the sheer inclusive joy they shared in creating something and sharing that with their audience.

For all of us working on GNOME in whatever way we choose to contribute, there is the immediate gratification of shaping this project, as well as the larger ideological underpinning of making everyone’s experience talking to their computers better and free-er.

But I think it is also important to remember that all our efforts to make our community an inviting and inclusive space have a deep impact across the world. So much so that complete strangers from around the world are able to feel a sense of belonging to something much larger than themselves.

I am excited about everything we will achieve in the next 20 years.

(thanks go out to the GNOME Foundation for helping me attend GUADEC this year)

Sponsored by GNOME!

August 26, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)
GIMP 2.9.6 now in Gentoo (August 26, 2017, 19:53 UTC)

Here’s what upstream has to say about the new release 2.9.6. Enjoy 🙂

August 23, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using nVidia with SELinux (August 23, 2017, 17:04 UTC)

Yesterday I switched to the gentoo-sources kernel package on Gentoo Linux. And with that, I also attempted (successfully) to use the proprietary nvidia drivers so that I can enjoy both a smoother 3D experience while playing minecraft, as well as use the CUDA support so I don't need to use cloud-based services for small exercises.

The move to nvidia was quite simple, as the nvidia-drivers wiki article on the Gentoo wiki was quite easy to follow.

August 22, 2017
Sven Vermeulen a.k.a. swift (homepage, bugs)
Switch to Gentoo sources (August 22, 2017, 17:04 UTC)

You might have already read it on the Gentoo news site: the Hardened Linux kernel sources have been removed from the tree due to the grsecurity change whereby the grsecurity Linux kernel patches are no longer provided for free. The decision was made for supportability and maintainability reasons.

That doesn't mean that users who want to stick with the grsecurity related hardening features are left alone. Agostino Sarubbo has started providing sys-kernel/grsecurity-sources for the users who want to stick with it, as it is based on minipli's unofficial patchset. I seriously hope that the patchset will continue to be maintained and, who knows, even evolve further.

Personally though, I'm switching to the Gentoo sources, and stick with SELinux as one of the protection measures. And with that, I might even start using my NVidia graphics card a bit more, as that one hasn't been touched in several years (I have an Optimus-capable setup with both an Intel integrated graphics card and an NVidia one, but all attempts to use nouveau for the one game I like to play - minecraft - didn't work out that well).

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v3.6 (August 22, 2017, 06:00 UTC)

After four months of cool contributions and hard work on normalization and modules’ clean up, I’m glad to announce the release of py3status v3.6!

Milestone 3.6 was mainly focused on existing modules, from their documentation to their usage of the py3 helper to streamline their code base.

Other improvements were made about error reporting while some sneaky bugs got fixed along the way.

Highlights

Not an extensive list, check the changelog.

  • LOTS of modules streamlining (mainly the hard work of @lasers)
  • error reporting improvements
  • py3-cmd performance improvements

New modules

  • i3blocks support (yes, py3status can now wrap i3blocks thanks to @tobes)
  • cmus module: to control your cmus music player, by @lasers
  • coin_market module: to display custom cryptocurrency data, by @lasers
  • moc module: to control your moc music player, by @lasers

Milestone 3.7

This milestone will give a serious kick into py3status performance. We’ll do lots of profiling and drastic work to reduce py3status CPU and memory footprints!

For now we’ve been relying a lot on threads, which is simple to operate but not that CPU/memory friendly. Since i3wm users rightly care about efficiency, we think it’s about time we addressed this kind of point in py3status.

Stay tuned, we have some nice ideas in stock 🙂

Thanks contributors!

This release is their work, thanks a lot guys!

  • aethelz
  • alexoneill
  • armandg
  • Cypher1
  • docwalter
  • enguerrand
  • fmorgner
  • guiniol
  • lasers
  • markrileybot
  • maximbaz
  • tablet-mode
  • paradoxisme
  • ritze
  • rixx
  • tobes
  • valdur55
  • vvoland
  • yabbes

August 21, 2017
sys-kernel/grsecurity-sources available! (August 21, 2017, 15:11 UTC)

It is known that, a few weeks ago, the grsecurity project made the grsecurity patches available only to their customers. In the meantime some people have made their own forks of the latest publicly available patches.

At Gentoo, for reasons (which I respect) explained in the news item and on the mailing lists, the maintainer decided to drop the hardened-sources package at the end of September 2017.

Then, I decided to make my own ebuild that uses the Genpatches plus the Unofficial forward ports of the last publicly available grsecurity patch.

Before you start wondering about the code of the ebuild, let me explain the logic used:

1) The ebuild was done in this way so that a version bump results in a simple copy-paste on the ebuild side.
2) I don’t use the GENPATCHES variable from the kernel eclass because of the previously explained point 1.
3) I generate the tarball via a bash script which takes the genpatches, takes the unofficial grsecurity patches and deletes the unwanted patches from the genpatches tarball (i.e. in hardened-sources we had UNIPATCH_EXCLUDE=”1500_XATTR_USER_PREFIX.patch 2900_dev-root-proc-mount-fix.patch”); see the sketch after this list.
4) I don’t use the UNIPATCH_EXCLUDE variable because of the previously explained point 3.
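
As a rough illustration of point 3, here is a minimal sketch of such a script. The tarball and patch file names below are placeholders (assumptions, not the real release names); only the two excluded genpatches come from the old hardened-sources UNIPATCH_EXCLUDE quoted above:

#!/bin/bash
# Sketch: merge genpatches with the unofficial grsecurity patch into one tarball.
# GENPATCHES, GRSEC and the destination patch name are hypothetical, adjust them
# to the actual releases you are working with.
set -e
GENPATCHES=genpatches-base.tar.xz
GRSEC=grsecurity-unofficial.patch
WORKDIR=$(mktemp -d)

tar -xf "${GENPATCHES}" -C "${WORKDIR}"
# drop the patches that hardened-sources used to exclude via UNIPATCH_EXCLUDE
rm -f "${WORKDIR}"/1500_XATTR_USER_PREFIX.patch
rm -f "${WORKDIR}"/2900_dev-root-proc-mount-fix.patch
# add the grsecurity patch so it gets applied together with the genpatches
cp "${GRSEC}" "${WORKDIR}"/4420_grsecurity.patch
tar -cJf grsecurity-patches.tar.xz -C "${WORKDIR}" .
rm -rf "${WORKDIR}"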

Don’t expect a version bump on each minor release unless there are critical bugs and/or dangerous security bugs. So please do not file version bump requests on bugzilla.

If you have any issue regarding grsecurity itself, please file a bug on the github issue tracker, and if you mention the issue elsewhere, please specify that the issue is with the unofficial grsecurity port. This avoids “damaging” the grsecurity image/credibility.

The ebuild is available in my overlay.
If you have trouble installing that ebuild, please follow the layman article on our wiki; basically you need:

root ~ $ layman -S && layman -a ago

USE IT AT YOUR OWN RISK 😉

August 19, 2017
Hardened Linux kernel sources removal (August 19, 2017, 00:00 UTC)

As you may know the core of sys-kernel/hardened-sources has been the grsecurity patches. Recently the grsecurity developers have decided to limit access to these patches. As a result, the Gentoo Hardened team is unable to ensure a regular patching schedule and therefore the security of the users of these kernel sources. Thus, we will be masking hardened-sources on the 27th of August and will proceed to remove them from the main ebuild repository by the end of September. We recommend to use sys-kernel/gentoo-sources instead. Userspace hardening and support for SELinux will of course remain in the Gentoo ebuild repository. Please see the full news item for additional information and links.

August 18, 2017

FroSCon logo

Upcoming weekend, 19-20th August 2017, there will be a Gentoo booth again at the FrOSCon “Free and Open Source Conference” 12, in St. Augustin near Bonn! Visitors can see Gentoo live in action, get Gentoo swag, and prepare, configure, and compile their own Gentoo buttons. See you there!

August 08, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
ScyllaDB meets Gentoo Linux (August 08, 2017, 14:19 UTC)

I am happy to announce that my work on packaging ScyllaDB for Gentoo Linux is complete!

Happy or curious users are very welcome to share their thoughts and ping me to get it into portage (which will very likely happen).

Why Scylla?

Ever heard of the Cassandra NoSQL database and Java GC/Heap space problems?… if you have, you already get it 😉

I will not go into the details as their website does this way better than me, but I got interested in Scylla because it fits the Gentoo Linux philosophy very well. If you remember my writing about packaging Rethinkdb for Gentoo Linux, I think that we have a great match with Scylla as well!

  • it is written in C++ so it plays very well with emerge
  • the code quality is so great that building it does not require heavy patching on the ebuild (feels good to be a packager)
  • the code relies on system libs instead of bundling them in the sources (hurrah!)
  • performance tuning is handled by smart scripting and automation, so the relationship between the project and the hardware is strong

I believe that these are good enough points to go further and that such a project can benefit from a source based distribution like Gentoo Linux. Of course compiling on multiple systems is a challenge for such a database but one does not improve by staying in their comfort zone.

Upstream & contributions

Packaging is a great excuse to get to know the source code of a project but more importantly the people behind it.

So here I got to my first contributions to Scylla to get Gentoo Linux as a detected and supported Linux distribution in the different scripts and tools used to automatically setup the machine it will run upon (fear not, I contributed bash & python, not C++)…

Even if I expected to contribute using Github PRs and got to change my habits to a git-patch+mailing list combo, I got warmly welcomed and received positive and genuine interest in the contributions. They got merged quickly and thanks to them you can install and experience Scylla in Gentoo Linux without heavy patching on our side.

Special shout out to Pekka, Avi and Vlad for their welcoming and insightful code reviews!

I have some open contributions about pushing further on the python code QA side, to get the tools to a higher level of coding standards. Seeing how serious upstream is about this, I have faith that they will get merged and become a good base for other contributions.

Last note about reaching them is that I am a bit sad that they’re not using IRC freenode to communicate (I instinctively joined #scylla and found myself alone) but they’re on Slack (those “modern folks”) and pretty responsive to the mailing lists 😉

Java & Scylla

Even if scylla is a rewrite of Cassandra in C++, the project still relies on some external tools used by the Cassandra community which are written in Java.

When you install the scylla package on Gentoo, you will see that those two packages are Java based dependencies:

  • app-admin/scylla-tools
  • app-admin/scylla-jmx

It pained me a lot to package those (thanks to the help of @monsieurp) but they are building and working as expected, so this makes the packaging of the whole Scylla project pretty solid.

emerge dev-db/scylla

The scylla packages are located in the ultrabug overlay for now until I test them even more and ultimately put them in production. Then they’ll surely reach the portage tree with the approval of the Gentoo java team for the app-admin/ packages listed above.

I provide a live ebuild (scylla-9999 with no keywords) and ebuilds for the latest major version (2.0_rc1 at time of writing).

It’s as simple as:

$ sudo layman -a ultrabug
$ sudo emerge -a dev-db/scylla
$ sudo emerge --config dev-db/scylla

Try it out and tell me what you think, I hope you’ll start considering and using this awesome database!

July 27, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Last evening, I ran some updates on one of my servers. One of the updates was from MariaDB 10.1 to 10.2 (some minor release as well). After compiling, I went to restart, but it failed with:

# /etc/init.d/mysql start
* Checking mysqld configuration for mysql ...
[ERROR] Can't find messagefile '/usr/share/mysql/errmsg.sys'
[ERROR] Aborting

* mysql config check failed [ !! ]
* ERROR: mysql failed to start

I’m not sure why this just hit me now, but it looks like it is a function within the init script that’s causing it to look for files in the nonexistent directory of /usr/share/mysql/ instead of the appropriate /usr/share/mariadb/. The fast fix here (so that I could get everything back up and running as quickly as possible) was to simply symlink the directory:

cd /usr/share
ln -s mariadb/ mysql

Thereafter, MariaDB came up without any problem:

# /etc/init.d/mysql start
* Caching service dependencies ... [ ok ]
* Checking mysqld configuration for mysql ... [ ok ]
* Starting mysql ... [ ok ]
# /etc/init.d/mysql status
* status: started

I hope that information helps if you’re in a pinch and run into the same error message.

Cheers,
Zach

UPDATE: It seems as if the default locations for MySQL/MariaDB configurations have changed (in Gentoo). Please see this comment for more information about a supportable fix for this problem moving forward. Thanks to Brian Evans for the information. 🙂

July 23, 2017
Michał Górny a.k.a. mgorny (homepage, bugs)
Optimizing ccache using per-package caches (July 23, 2017, 18:03 UTC)

ccache can be of great assistance to Gentoo developers and users who frequently end up rebuilding similar versions of packages. By providing a caching compiler frontend, it can speed up builds by removing the need to build files that have not changed again. However, it uses a single common cache directory by default which can be suboptimal even if you are explicitly enabling ccache only for a subset of packages needing that.

The likelihood of cross-package ccache hits is pretty low; the majority of hits occur within a single package. If you use a single cache directory for all affected packages, it grows pretty quickly. Besides a possible performance hit from having a lot of files in every directory, this means that packages built later can shift earlier packages out of the cache, resulting in meaninglessly lost cache hits. A simple way to avoid both of the problems is to use separate ccache directories.

In my solution, a separate subdirectory of /var/cache/ccache is used for every package, named after the category, package name and slot. While the last one is not strictly necessary, it can be useful for slotted packages such as LLVM where I do not want frequently changing live package sources to shift the release versions out of the cache.

To use it, put a code similar to the following in your /etc/portage/bashrc:

if [[ ${FEATURES} == *ccache* && ${EBUILD_PHASE_FUNC} == src_* ]]; then
	if [[ ${CCACHE_DIR} == /var/cache/ccache ]]; then
		export CCACHE_DIR=/var/cache/ccache/${CATEGORY}/${PN}:${SLOT}
		mkdir -p "${CCACHE_DIR}" || die
	fi
fi

The first condition makes sure the code is only run when ccache is enabled, and only for src_* phases where we can rely on userpriv being used consistently. The second one makes sure the code only applies to a specific (my initial) value of CCACHE_DIR and therefore avoids both nesting the cache indefinitely when Portage calls subsequent phase functions, and applying the replacement if user overrides CCACHE_DIR.

You need to either adjust the value used here to the directory used on your system, or change it in your /etc/portage/make.conf:

CCACHE_DIR="/var/cache/ccache"

Once this is done, Portage should start creating separate cache directories for every package where you enable ccache. This should improve the cache hit ratio, especially if you are using ccache for large packages (why else would you need it?). However, note that you will no longer have a single cache size limit — every package will have its own limit. Therefore, you may want to reduce the limits per-package, or manually look after the cache periodically.
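
For example, to inspect the hit ratio or adjust the size limit of one package’s cache, you can point ccache at the per-package directory by hand. A small sketch, assuming the /var/cache/ccache layout used above and dev-lang/python in slot 3.6 as an example package:

# show statistics for one package's cache
CCACHE_DIR=/var/cache/ccache/dev-lang/python:3.6 ccache -s

# lower that package's cache size limit to 1 GiB
CCACHE_DIR=/var/cache/ccache/dev-lang/python:3.6 ccache -M 1G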

July 16, 2017
Michał Górny a.k.a. mgorny (homepage, bugs)
GLEP 73 check results explained (July 16, 2017, 08:40 UTC)

The pkgcheck instance run for the Repo mirror&CI project has finished gaining a full support for GLEP 73 REQUIRED_USE validation and verification today. As a result, it can report 5 new issues defined by that GLEP. In this article, I’d like to shortly summarize them and explain how to interpret and solve the reports.

Technical note: the GLEP number has not been formally assigned yet. However, since there is no other GLEP request open at the moment, I have taken the liberty of using the next free number in the implementation.

GLEP73Syntax: syntax violates GLEP 73

GLEP 73 specifies a few syntax restrictions as compared to the pretty much free-form syntax allowed by the PMS. The restrictions could be shortly summarized as:

  • ||, ^^ and ?? can not be empty,
  • ||, ^^ and ?? can not be nested,
  • USE-conditional groups can not be used inside ||, ^^ and ??,
  • All-of groups (expressed using parentheses without a prefix) are banned completely.

The full rationale for the restrictions, along with examples and proposed fixes is provided in the GLEP. For the purpose of this article, it is enough to say that in all the cases found, there was a simpler (more obvious) way of expressing the same constraint.

Violation of this syntax prevents pkgcheck from performing any of the remaining checks. But more importantly, the report indicates that the constraint is unnecessarily complex and could result in REQUIRED_USE mismatch messages that are unnecessarily confusing to the user. Taking a real example, compare:

  The following REQUIRED_USE flag constraints are unsatisfied:
    exactly-one-of ( ( !32bit 64bit ) ( 32bit !64bit ) ( 32bit 64bit ) )

and the effect of a valid replacement:

  The following REQUIRED_USE flag constraints are unsatisfied:
	any-of ( 64bit 32bit )

While we could debate about usefulness of the Portage output, I think it is clear that the second output is simpler to comprehend. And the best proof is that you actually need to think a bit before confirming that they’re equivalent.

GLEP73Immutability: REQUIRED_USE violates immutability rules

This one is rather simple: it means this constraint may tell user to enable (disable) a flag that is use.masked/forced. Taking a trivial example:

a? ( b )

GLEP73Immutability report will trigger if a profile masks the b flag. This means that if the user has a enabled, the PM would normally tell him to enable b as well. However, since b is masked, it can not be enabled using normal methods (we assume that altering use.mask is not normally expected).

The alternative is to disable a then. But what’s the point of letting the user enable it if we afterwards tell him to disable it anyway? It is more friendly to disable both flags together, and this is pretty much what the check is about. So in this case, the solution is to mask a as well.

How to read it? Given the generic message of:

REQUIRED_USE violates immutability rules: [C] requires [E] while the opposite value is enforced by use.force/mask (in profiles: [P])

It indicates that in profiles P (a lot of profiles usually indicates you’re looking for base or top-level arch profile), E is forced or masked, and that you probably need to force/mask C appropriately as well.

GLEP73SelfConflicting: impossible self-conflicting condition

This one is going to be extremely rare. It indicates that somehow the REQUIRED_USE nested a condition and its negation, causing it to never evaluate to true. It is best explained using the following trivial example:

a? ( !a? ( b ) )

This constraint will never be enforced since a and !a can not be true simultaneously.

Is there a point in having such a report at all? Well, such a thing is extremely unlikely to happen. However, it would break the verification algorithms and so we need to account for it explicitly. Since we account for it anyway and it is a clear mistake, why not report it?

GLEP73Conflict: request for conflicting states

This warning indicates that there are at least two constraints that can apply simultaneously and request the opposite states for the same USE flag. Again, best explained on a generic example:

a? ( c ) b? ( !c )

In this example, any USE flag set with both a and b enabled could not satisfy the constraint. However, Portage will happily lead us astray:

  The following REQUIRED_USE flag constraints are unsatisfied:
	a? ( c )

If we follow the advice and enable c, we get:

  The following REQUIRED_USE flag constraints are unsatisfied:
	b? ( !c )

The goal of this check is to avoid such bad advice, and to require constraints to clearly indicate a suggested way forward. For example, the above case could be modified to:

a? ( !b c ) b? ( !c )

to indicate that a takes precedence over b, and that b should be disabled to avoid the impossible constraint. The opposite can be stated similarly — however, note that you need to reorder the constraints to make sure that the PM will get it right:

b? ( !a !c ) a? ( c )

How to read it? Given the generic message of:

REQUIRED_USE can request conflicting states: [Ci] requires [Ei] while [Cj] requires [Ej]

It means that if the user enables Ci and Cj simultaneously, the PM will request conflicting Ei and Ej. Depending on the intent, the solution might involve negating one of the conditions in the other constraint, or reworking the REQUIRED_USE towards another solution.

GLEP73BackAlteration: previous condition starts applying

This warning is the most specific and the least important from all the additions at the moment. It indicates that the specific constraint may cause a preceding condition to start to apply, enforcing additional requirements. Consider the following example:

b? ( c ) a? ( b )

If the user has only a enabled, the second rule will enforce b. Then the condition for the first rule will start matching, and additionally enforce c. Is this a problem? Usually not. However, for the purpose of GLEP 73 we prefer that the REQUIRED_USE can be enforced while processing left-to-right, in a single iteration. If a previous rule starts applying, we may need to do another iteration.

The solution is usually trivial: to reorder (swap) the constraints. However, in some cases developers seem to prefer copying the enforcements into the subsequent rule, e.g.:

b? ( c ) a? ( b c )

Either way works for the purposes of GLEP 73, though the latter increases complexity.

How to read it? Given the generic message of:

REQUIRED_USE causes a preceding condition to start applying: [Cj] enforces [Ej] which may cause preceding [Ci] enforcing [Ei] to evaluate to true

This indicates that if Cj is true, Ej needs to be true as well. Once it is true, a preceding condition of Ci may also become true, adding another requirement for Ei. To fix the issue, you need to either move the latter constraint before the former, or include the enforcement of Ei in the rule for Cj, rendering the application of the first rule unnecessary.

Constructs using ||, ^^ and ?? operators

GLEP 73 specifies a leftmost-preferred behavior for the ||, ^^ and ?? operators. It is expressed in a simple transformation into implications (USE-conditional groups). Long story short:

  • || and ^^ groups force the leftmost unmasked flag if none of the flags are enabled already, and
  • ?? and ^^ groups disable all but the leftmost enabled flag if more than one flag is enabled.

All the verification algorithms work on the transformed form, and so their output may list conditions resulting from it. For example, the following construct:

|| ( a b c ) static? ( !a )

will report a conflict between !b !c ⇒ a and static ⇒ !a. This indicates the fact that, per the aforementioned rule, the || group is transformed into !b? ( !c? ( a ) ) which explains that if none of the flags are enabled, the first one is preferred, causing a conflict with the static flag.

In this particular case you could debate that the algorithm could choose b or c instead in order to avoid the problem. However, we determined that this kind of heuristic is not a goal for GLEP 73, and instead we always abide by the developer’s preference expressed in the ordering. The only exception to this rule is when the leftmost flag can not match due to a mask, in which case the first unmasked flag is used.

For completeness, I should add that ?? and ^^ blocks create implications in the form of: a ⇒ !b !c…, b ⇒ !c… and so on.

At some point I might work on making the reports include the original form to avoid ambiguity.

The future

The most important goal for GLEP 73 is to make it possible for users to install packages out-of-the-box without having to fight through mazes of REQUIRED_USE, and for developers to use REQUIRED_USE not only sparingly but whenever possible to improve the visibility of resulting package configuration. However, there is still a lot of testing, some fixing and many bikesheds before that could happen.

Nevertheless, I think we can all agree that most of the reports produced so far (with the exception of the back-alteration case) are meaningful even without automatic enforcing of REQUIRED_USE, and fixing them would benefit our users already. I would like to ask you to look for the reports on your packages and fix them whenever possible. Feel free to ping me if you need any help with that.

Once the number of non-conforming packages goes down, I will convert the reports successively into warning levels, making the CI report new issues and the pull request scans proactively complain about them.

July 04, 2017
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
New autoconf, new problems (July 04, 2017, 18:03 UTC)

So a new release of autoconf is in the tree, version 2.62. Guess what? It’s far from glitch-free. Although I suppose the biggest problem is from programs that bend the rules autoconf gives you. While the warnings it spews during the Amarok build are, well, just warnings, which will probably be fixed before a version of autoconf makes those problems fatal, there are subtle bugs that might appear, like bug #217154 which made PAM fail to build.

For a series of fortunate circumstances, since last week I’ve restarted packaging Ruby libraries, which I needed both for my own blog (Typo) and other stuff. As usual, I’m trying to avoid packaging gems since they can be quite a pain in the bum to deal with. But since that’s something that’s known already, I want to post a few more notes, especially in light of the fact we’ll hopefully be having a decent way to package gems in Gentoo by the end of the Summer.

Again gitarella (July 04, 2017, 18:03 UTC)

So, last night I couldn’t sleep, and I then decided to continue with what I do when I cannot sleep: code. I think this would be one of the last times that I have time to do this though, but I’d wait for the official results to say that. Now gitarella loads fine repositories in which there are merges (commits with more than one parent), as ruby-hunspell came to be, plus I’ve added the tag display à-la GitWeb, and added an option to rename the title of Gitarella pages.

No more envelopes! (July 04, 2017, 18:03 UTC)

And finally the job is complete! Tomorrow I’ll probably pass reading Gibson’s Neuromancer to relax, as I’m tiredness personified. Now, after having to take a forced break yesterday to fix a nasty KDE regression with KSpell2 (you know, zander, ranting won’t bring you anywhere…), I’m working on the next ebuild for Amarok, 1.4.4. With this version, there is partial Karma device support (as I wrote here); but I didn’t get many requests, and no devs seem to have a Rio Karma device (as far as I remember).

QA by disagreement (July 04, 2017, 18:03 UTC)

A few months ago I criticised Qt team for not following QA indications regarding the installation of documentation. Now, I wish to apologize and thank them: they were tremendously useful in identifying one area of Gentoo that needs fixes in both the QA policy and the actual use of it. The problem: the default QA rules, and the policies encoded in both the Ebuild HOWTO (that should be deprecated!) and Portage itself, is to install the documentation of packages into /usr/share/doc/${PF}, a path that changes between different revisions of the same package.

Blast from the recent past; two years ago (July 04, 2017, 18:03 UTC)

Today I was feeling somewhat blue, mostly because I’m demotivated to do most stuff, and I wanted to see what it was like to work in Gentoo two years ago. One thing I read is that a little shy of exactly two years ago, ICQ broke Kopete, just like they did yesterday. Interestingly enough, even though a workaround has been found for Kopete 0.12 (the one shipped with KDE 3.5), there is no bump I see in the tree this time.

Powered by.. Gentoo/FreeBSD (July 04, 2017, 18:03 UTC)

Okay, I want to thank blackace really, really much for the “*Powered by Gentoo/FreeBSD*” logo I’ve put on the right of the blog :) I’m also trying to understand how to set a favicon with Gentoo/FreeBSD’s logo, but I’m not successful yet. Mostly because I suck at webmastering. This blog might remain a single experiment, a test, as Daniel (dsd) seems to be ready to update b2evolution on Planet Gentoo, so that it will improve antispam capabilities.

Some more notes about the setup I’m following for the vserver (where this blog and xine’s bugtracker run); the last post, I think, was similar, but about awstats. You can see the running thread: running stuff as a user rather than root, especially if the user has no access to any particular resource. So this time my target is git-daemon; I already set it up some time ago to run as nobody (just changed the start-stop-daemon parameters) but there is some problem with that: you either give access to the git repositories to world, or to nobody, which might be used by other services too.

Back to ... KDE‽ (July 04, 2017, 18:03 UTC)

Now this is one of the things that you wouldn’t expect — Years after I left KDE behind me, today I’m back using it.. and honestly I feel again at home. I guess the whole backend changes that got through KDE4 were a bit too harsh for me at the time, and I could feel the rough edges. But after staying with GNOME2, XFCE, then Cinnamon.. this seems to be the best fit for me at this point.

Maintaining backports with GIT (July 04, 2017, 18:03 UTC)

I have written last week of the good feeling of merging patches upstream – even though since then I don’t think I got anything else merged … well, beside the bti fixes that I sent Greg – this week, let’s start with the opposite problem: how can you handle backports sanely, and have a quick way to check what was merged upstream? Well, the answer, at least for the software that is managed upstream with GIT, is quite easy to me.

June 27, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Open Source Summit Japan-2017 (June 27, 2017, 13:57 UTC)

Open Source Summit Japan 2017 summary

OSS Japan 2017 was a really great experience.

I sent my paper proposal and waited for a reply; some weeks later I got an
invite to participate in the Kernel Keynote.
I thought that participating in the Kernel Keynote as a mentor and doing a presentation
was a good way to talk about the Gentoo Kernel Project and how to contribute to the
Linux Kernel and the Gentoo Kernel Project.
Also, my paper got accepted, so I could join OSS Japan 2017 as a speaker.
It was three really nice days.

Presentation:

Fast Releasing and Testing of Gentoo Kernel Packages and Future Plans of the Gentoo Kernel Project

My talk was mainly about the Gentoo Kernel related projects, past and future,
specifically about the Gentoo Kernel Continuous Integration system we are creating:
https://github.com/gentoo/Gentoo_kernelCI

Why it is needed:

  • We need some way of checking the linux-patches commits automatically; it can also check pre-commit by pushing to a sandbox branch
  • Check the patches signatures
  • Checking the ebuild committed to https://github.com/gentoo/gentoo/commits/master/sys-kernel
  • Checking the kernel eclass commits
  • Checking the pull request to the sys-kernel/*
  • Use Qemu for testing kernel vmlinux correct execution

For any issue or contribution feel free to send here:
https://github.com/gentoo/Gentoo_kernelCI

To see Gentoo Kernel CI in action:
http://kernel1.amd64.dev.gentoo.org:8010

slides:
http://schd.ws/hosted_files/ossjapan2017/39/Gentoo%20Kernel%20recent%20and%20Future%20project.pdf

Open Source Summit Japan 2017
Keynote: Linux Kernel Panel - Moderated by Alice Ferrazzi, Gentoo Kernel Project Leader

The keynote was with:
Greg Kroah-Hartman - Fellow, Linux Foundation
Steven Rostedt - VMware
Dan Williams - Intel Open Source Technology Center
Alice Ferrazzi - Gentoo Kernel Project Leader, Gentoo

One interesting part was about how to contribute to the Linux Kernel.
After some information about Linux Kernel contribution numbers, the talk moved on
to how to contribute to the Linux Kernel.
To contribute to the Linux Kernel you need some understanding of C
and of running tests in the Linux Kernel,
like fuego, kselftest, coccinelle, and many others.
There was also a good talk from Steven Rostedt about working with the Real-Time patch.

Who can find the Gentoo logo in this image:

June 25, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google-Summer-of-Code-summary week04 (June 25, 2017, 17:59 UTC)

Google Summer of Code summary week 04

What I did in this week 04 summary:

elivepatch:

  • Created the elivepatch client command line argument parser
  • Added a function for sending patch and configuration files
  • Divided the call for sending (patch, config) and the call for building the livepatch
  • Made the send_file function more generic so it can send all kinds of files using the RESTful API (see the sketch after this list)
  • Cleaned up the code following pep8
  • Decided to use only SSL and not to use basic auth
  • Sending information about the kernel version when requesting a livepatch build
  • We can now build a livepatch using the RESTful API
  • Returning information about the livepatch building status
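
As a rough sketch of what such a RESTful upload could look like from the client side with curl (the endpoint path is the one visible in the build log of the day 18 entry below; the form field names, host and port are purely hypothetical assumptions, not the actual elivepatch API):

# hypothetical client-side call: send a kernel config and a patch and ask for a livepatch build
curl -X POST \
     -F "config=@config" \
     -F "patch=@1.patch" \
     http://localhost:5000/elivepatch/api/v1.0/build_livepatch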

Kpatch:

  • Working on making kpatch-build work with Gentoo with all the features (as of now kpatch-build can only automatically build livepatches for Ubuntu, Debian, Red Hat and Fedora)

Others:

  • Asked infra for a server on which to install the elivepatch server

What I need to do next time:

  • Finish the function for downloading the livepatch to the client
  • Testing elivepatch
  • Implementing the CVE patch uploader
  • Installing elivepatch to the Gentoo server
  • Fix kpatch-build to work automatically with gentoo-sources
  • Add more features to elivepatch

Google-Summer-of-Code-day18 (June 25, 2017, 17:59 UTC)

Google Summer of Code day 18

What was my plan for today?

  • going on with the code for retrieving the livepatch and installing it

What did I do today?

Checked the folders kpatch-build requires.

kpatch-build find_dirs function:

find_dirs() {
  if [[ -e "$SCRIPTDIR/create-diff-object" ]]; then
      # git repo
      TOOLSDIR="$SCRIPTDIR"
      DATADIR="$(readlink -f $SCRIPTDIR/../kmod)"
  elif [[ -e "$SCRIPTDIR/../libexec/kpatch/create-diff-object" ]]; then
      # installation path
      TOOLSDIR="$(readlink -f $SCRIPTDIR/../libexec/kpatch)"
      DATADIR="$(readlink -f $SCRIPTDIR/../share/kpatch)"
  else
      return 1
  fi
}

$SCRIPTDIR is the kpatch-build directory; since kpatch-build is installed in /usr/bin/, /usr/kmod and /usr/libexec are the paths resolved relative to it.

error "CONFIG_FUNCTION_TRACER, CONFIG_HAVE_FENTRY, CONFIG_MODULES, CONFIG_SYSFS, CONFIG_KALLSYMS_ALL kernel config options are required" Require by kmod/core.c: https://github.com/dynup/kpatch/blob/master/kmod/core/core.c#L62

We probably need some way to check that these settings are configured in the kernel we are going to build; a possible check is sketched below.
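
A minimal sketch of such a check, assuming the configuration file sent to the server is a plain kernel .config; the option list is the one from the kmod/core.c error above:

#!/bin/bash
# Sketch: verify that the kernel config enables the options kpatch's kmod/core.c requires.
CONFIG_FILE=${1:-config}
REQUIRED="CONFIG_FUNCTION_TRACER CONFIG_HAVE_FENTRY CONFIG_MODULES CONFIG_SYSFS CONFIG_KALLSYMS_ALL"

for opt in ${REQUIRED}; do
    if ! grep -q "^${opt}=y" "${CONFIG_FILE}"; then
        echo "missing or disabled: ${opt}" >&2
        exit 1
    fi
done
echo "all required options are enabled"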

Updating kpatch-build to work automatically with gentoo (right now fedora, for example, can automatically download the kernel rpm and install it; we could do a similar thing with gentoo): https://github.com/aliceinwire/kpatch/commits/gentoo

Starting to write the live patch downloader: https://github.com/aliceinwire/elivepatch/commit/d26611fb898223f2ea2dcf323078347ca928cbda

Now the elivepatch server can call and build the livepatch with kpatch:

sudo kpatch-build -s /usr/src/linux-4.10.14-gentoo/ -v /usr/src/linux-4.10.14-gentoo//vmlinux -c config 1.patch --skip-gcc-check
ERROR: kpatch build failed. Check /root/.kpatch/build.log for more details.
127.0.0.1 - - [25/Jun/2017 05:27:06] "POST /elivepatch/api/v1.0/build_livepatch HTTP/1.1" 201 -
WARNING: Skipping gcc version matching check (not recommended)
Using source directory at /usr/src/linux-4.10.14-gentoo
Testing patch file
checking file fs/exec.c
Hunk #1 succeeded at 259 (offset 16 lines).
Reading special section data
Building original kernel

Fixed some minor pep8 issues.

What will I do next time?
* work on the livepatch downloader and make the kpatch creator more flexible

June 23, 2017
Alice Ferrazzi a.k.a. alicef (homepage, bugs)
Google-Summer-of-Code-day17 (June 23, 2017, 23:12 UTC)

Google Summer of Code day 17

What was my plan for today?

  • going on with the code for retrieving the livepatch and installing it
  • Ask infra for a server on which to install the elivepatch server

What did I do today?
Sent a request for the server that will offer the elivepatch service, as discussed with my mentor. https://bugs.gentoo.org/show_bug.cgi?id=622476

Fixed some pep8 warnings.

Livepatch server is now returning information about the livepatch building status.

Removed basic auth as we will go with SSL.

The client is now sending information about the kernel version when requesting a new build.

The kernel directory under the server is now a livepatch class variable.

What will I do next time?

  • going on with the code for retrieving the livepatch and installing it

June 16, 2017
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)

Recently, Gentoo Linux put GCC 6.3 (released in December 2016) into the testing branch. For a source-based distribution like Gentoo, GCC is a critical part of the toolchain, and sometimes can lead to packages failing to compile or run. I recently ran into just this problem with Audacity. The error that I hit was not during compilation, but during runtime:


$ audacity
Fatal Error: Mismatch between the program and library build versions detected.
The library used 3.0 (wchar_t,compiler with C++ ABI 1009,wx containers,compatible with 2.8),
and your program used 3.0 (wchar_t,compiler with C++ ABI 1010,wx containers,compatible with 2.8).
Aborted

The Gentoo Wiki has a nice, detailed page on Upgrading GCC, and explicitly calls out ABI changes. In this particular case of Audacity, the problematic library is referenced in the error above: “wx containers”. WX containers are handled by the wxGTK package. So, I simply needed to rebuild the currently-installed version of wxGTK to fix this particular problem.
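
For the record, something along these lines should be enough for the rebuild (a one-off re-emerge of the already installed version; adjust the atom if your setup differs):

# rebuild the installed wxGTK against the new compiler ABI
emerge --ask --oneshot x11-libs/wxGTK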

Hope that helps.

Cheers,
Zach

June 15, 2017
Hanno Böck a.k.a. hanno (homepage, bugs)
Don't leave Coredumps on Web Servers (June 15, 2017, 09:20 UTC)

Coredumps are a feature of Linux and other Unix systems to analyze crashing software. If a piece of software crashes, for example due to an invalid memory access, the operating system can save the current content of the application's memory to a file. By default it is simply called core.

While this is useful for debugging purposes it can pose a security risk. If a web application crashes, the coredump may simply end up in the web server's root folder. Given that its file name is known, an attacker can simply download it via a URL of the form https://example.org/core. As coredumps contain an application's memory they may expose secret information. A very typical example would be passwords.

PHP used to crash relatively often. Recently a lot of these crash bugs have been fixed, in part because PHP now has a bug bounty program. But there are still situations in which PHP crashes. Some of them likely won't be fixed.

How to disclose?

With a scan of the Alexa Top 1 Million domains for exposed core dumps, I found around 1,000 vulnerable hosts. I was faced with a challenge: how can I properly disclose this? It is obvious that I wouldn't write hundreds of manual mails. So I needed an automated way to contact the site owners.

Abusix runs a service where you can query the abuse contacts of IP addresses via a DNS query. This turned out to be very useful for this purpose. One could also imagine contacting domain owners directly, but that's not very practical. The domain whois databases have rate limits and don't always expose contact mail addresses in a machine readable way.

Using the abuse contacts doesn't reach all of the affected host operators. Some abuse contacts were nonexistent mail addresses, others didn't have abuse contacts at all. I also got all kinds of automated replies, some of them asking me to fill out forms or do other things, otherwise my message wouldn't be read. Due to the scale I ignored those. I feel that if people make it hard for me to inform them about security problems that's not my responsibility.

I took away two things that I changed in a second batch of disclosures. Some abuse contacts seem to automatically search for IP addresses in the abuse mails. I originally only included affected URLs. So I changed that to include the affected IPs as well.

In many cases I was informed that the affected hosts are not owned by the company I contacted, but by a customer. Some of them asked me if they're allowed to forward the message to them. I thought that would be obvious, but I made it explicit now. Some of them asked me that I contact their customers, which again, of course, is impractical at scale. And sorry: They are your customers, not mine.

How to fix and prevent it?

If you have a coredump on your web host, the obvious fix is to remove it from there. However you obviously also want to prevent this from happening again.

There are two settings that impact coredump creation: a limits setting, configurable via /etc/security/limits.conf and ulimit, and a sysctl interface that can be found under /proc/sys/kernel/core_pattern.

The limits setting is a size limit for coredumps. If it is set to zero then no core dumps are created. To set this as the default you can add something like this to your limits.conf:

* soft core 0

The sysctl interface sets a pattern for the file name and can also contain a path. You can set it to something like this:

/var/log/core/core.%e.%p.%h.%t

This would store all coredumps under /var/log/core/ and add the executable name, process id, host name and timestamp to the filename. The directory needs to be writable by all users; you should use a directory with the sticky bit set (chmod +t).
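
For example, assuming the /var/log/core location used above:

# create the coredump directory, world-writable and with the sticky bit set
mkdir -p /var/log/core
chmod 1777 /var/log/core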

If you set this via the proc file interface it will only be temporary until the next reboot. To set this permanently you can add it to /etc/sysctl.conf:

kernel.core_pattern = /var/log/core/core.%e.%p.%h.%t

Some Linux distributions directly forward core dumps to crash analysis tools. This can be done by prefixing the pattern with a pipe (|). These tools like apport from Ubuntu or abrt from Fedora have also been the source of security vulnerabilities in the past. However that's a separate issue.

Look out for coredumps

My scans showed that this is a relatively common issue. Among popular web pages around one in a thousand were affected before my disclosure attempts. I recommend that pentesters and developers of security scan tools consider checking for this. It's simple: just try to download the /core file and check if it looks like an executable. In most cases it will be an ELF file; however, sometimes it may be a Mach-O (OS X) or an a.out file (very old Linux and Unix systems).
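
A minimal sketch of such a check using curl and file (example.org is a placeholder for the host under test):

#!/bin/bash
# Sketch: download /core from a host and check whether it looks like an executable
HOST=${1:-example.org}
TMP=$(mktemp)

curl -s -o "${TMP}" "https://${HOST}/core"
if file "${TMP}" | grep -qE 'ELF|Mach-O|a\.out'; then
    echo "${HOST}: exposed coredump found"
fi
rm -f "${TMP}"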

Image credit: NASA/JPL-Université Paris Diderot

June 10, 2017
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo Puppet Portage Package Provider (June 10, 2017, 05:00 UTC)

Why do this

The previous built-in puppet portage package provider (I'm just going to shorten it to PPPP) only supported very simplistic package interactions, mainly package name (with slot) install and uninstall. This has proven fairly limiting: if you wanted to install a specific version of a package and lock it down, you were forced to call out to exec or edit package.{mask,unmask,keywords} files.

The new provider (which will be built into puppet in 5.0 or puppet-agent-2.0) supports all the package provider attributes.

How do I get this awesome thing

Emerge puppet or puppet-agent with the experimental use flag.
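
For example, something like this should do it (shown for app-admin/puppet; use the puppet-agent atom instead if that is what you run):

# enable the experimental USE flag and (re)build puppet
echo "app-admin/puppet experimental" >> /etc/portage/package.use/puppet
emerge --ask app-admin/puppet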

What it can do

You can use the following attributes with the new PPPP.

  • Name - The full package atom works now, using qatom on the backend.
  • ensure - now allowing a package purge as well (CONFIG_PROTECT="-*").
  • install_options - you can now pass options to emerge (--deep or --usepkgonly for example).
  • uninstall_options - just like install_options

Being able to call out specific versions and per-package install options will give much greater flexibility.

fin

Here is the pull request that upstream puppet merged.

If you have any questions I'm on freenode as prometheanfire.

June 03, 2017
Matthew Thode a.k.a. prometheanfire (homepage, bugs)
Gentoo portage templates (June 03, 2017, 05:00 UTC)

Why do this

Gentoo is known for being somewhat complex to manage, making clusters of gentoo machines even more complex in most scenarios. Using the following methods the configuration becomes easier.

By the end of this you should be able to have a default hiera configuration for Gentoo while still being able to override it for specific use cases. What makes the method I chose particularly powerful is the ability to delete default values entirely, not just set them to something else.

Most of these methods came from my experience with chef, which I thought would apply well to other config engines. While some don't like shoving logic into the configuration template engine, I'm open to suggestions.

Requirements

  • Puppet 4.x or puppet-agent with hiera support.
  • Puppet's stdlib installed (specifically for delete_values stuff).
  • (optional) use puppetserver instead of running this oneoff.
  • Hiera configured to use the following configuration.

Hiera config

:merge_behavior: deeper
:deep_merge_options:
  :merge_hash_arrays: true

Basic Setup

  • Convert the common portage configurations to templates.
  • Convert the data in those templates to a datastructure.
  • Use hiera to write the defaults / node overrides.
  • Call the templates via a puppet module.

Datastructures

The easiest way of explaining how this works is to say that the only data stored in the 'deepest' value is ever going to be True or False. The reason for this is that using deep_merge in hiera is an additive process, and we need a flag to remove things further down the line.

The datastructure itself is fairly simple, here is an excerpt from my setup.

make_conf:
  emerge_default_opts:
    "--quiet-build": true
    "--changed-use": true

If you wanted to disable --quiet-build down the line, you would just set the value to False at a higher precedence (the specific node config instead of the general location).

make_conf:
  emerge_default_opts:
    "--quiet-build": false

Configuration Templates

The templates themselves are epp based, not erb (the old method).

package.keywords

For this one I'll also show how I auto-set the right architecture; this works for amd64 at least.

Hiera data

"app-admin/paxtest ~%{facts.architecture}": true

Template

<%- |$packages| -%>
# THIS FILE WAS GENERATED BY PUPPET, CHANGES WILL BE OVERWRITTEN

<%- keys(delete_values($packages, false)).each |$package| { -%>
<%= "$package" %>
<%- } -%>

This one is the simplest: if a value for the key (paxtest in this case) is set to false, don't use it; the remaining keys are then set as plain text.

package.use

<%- |$packages| -%>
# THIS FILE WAS GENERATED BY PUPPET, CHANGES WILL BE OVERWRITTEN

<%- keys(delete_values($packages, false)).each |$package| { -%>
  <%- if ! empty(keys(delete_values($packages[$package], false))) { -%>
<%= "$package" %> <%= join(keys(delete_values($packages[$package], false)), ' ') %>
  <%- } -%>
<%- } -%>

This one is fairly straightforward as well: for each package that isn't disabled, if there are keys for the package (signifying use flags; needed because we remove the unset flags), then set them. This combines the flags set from all levels in hiera.

make.conf

This will be the most complicated one, but it's also likely to be the most important. I'll explain a bit about it after the paste.

<%- |$config| -%>
# THIS FILE WAS GENERATED BY PUPPET, CHANGES WILL BE OVERWRITTEN

CFLAGS="<%= join(keys(delete_values($config['cflags'], false)), ' ') %>"
CXXFLAGS="<%= join(keys(delete_values($config['cxxflags'], false)), ' ') %>"
CHOST="<%= $config['chost'] %>"
MAKEOPTS="<%= join(keys(delete_values($config['makeopts'], false)), ' ') %>"
CPU_FLAGS_X86="<%= join(keys(delete_values($config['cpu_flags_x86'], false)), ' ') %>"
ABI_X86="<%= join(keys(delete_values($config['abi_x86'], false)), ' ') %>"

USE="<%= join(keys(delete_values($config['use'], false)), ' ') %>"

GENTOO_MIRRORS="<%= join(keys(delete_values($config['gentoo_mirrors'], false)), ' ') %>"
<% if has_key($config, 'portage_binhost') { -%>
  <%- if $config['portage_binhost'] != false { -%>
PORTAGE_BINHOST="<%= $config['portage_binhost'] %>"
  <%- } -%>
<% } -%>

FEATURES="<%= join(keys(delete_values($config['features'], false)), ' ') %>"
EMERGE_DEFAULT_OPTS="<%= join(keys(delete_values($config['emerge_default_opts'], false)), ' ') %>"
PKGDIR="<%= $config['pkgdir'] %>"
PORT_LOGDIR="<%= $config['port_logdir'] %>"
PORTAGE_GPG_DIR="<%= $config['portage_gpg_dir'] %>"
PORTAGE_GPG_KEY='<%= $config['portage_gpg_key'] %>'

GRUB_PLATFORMS="<%= join(keys(delete_values($config['grub_platforms'], false)), ' ') %>"
LINGUAS="<%= join(keys(delete_values($config['linguas'], false)), ' ') %>"
L10N="<%= join(keys(delete_values($config['l10n'], false)), ' ') %>"

PORTAGE_ELOG_CLASSES="<%= join(keys(delete_values($config['portage_elog_classes'], false)), ' ') %>"
PORTAGE_ELOG_SYSTEM="<%= join(keys(delete_values($config['portage_elog_system'], false)), ' ') %>"
PORTAGE_ELOG_MAILURI="<%= $config['portage_elog_mailuri'] %>"
PORTAGE_ELOG_MAILFROM="<%= $config['portage_elog_mailfrom'] %>"

<% if has_key($config, 'accept_licence') { -%>
ACCEPT_LICENSE="<%= join(keys(delete_values($config['accept_licence'], false)), ' ') %>"
<%- } -%>
POLICY_TYPES="<%= join(keys(delete_values($config['policy_types'], false)), ' ') %>"
PAX_MARKINGS="<%= join(keys(delete_values($config['pax_markings'], false)), ' ') %>"

USE_PYTHON="<%= join(keys(delete_values($config['use_python'], false)), ' ') %>"
PYTHON_TARGETS="<%= join(keys(delete_values($config['python_targets'], false)), ' ') %>"
RUBY_TARGETS="<%= join(keys(delete_values($config['ruby_targets'], false)), ' ') %>"
PHP_TARGETS="<%= join(keys(delete_values($config['php_targets'], false)), ' ') %>"

<% if has_key($config, 'source') { -%>
source <%= join(keys(delete_values($config['source'], false)), ' ') %>
<%- } -%>

The basic idea of this is that you pass in the full make.conf datastructure you will generate as a single variable. Everything else is pulled from (or eliminated from) that.

Each option that is selected already has all the values merged, but this could mean that disabled versions of a given value are still there; these are removed using the delete_values($config['foo'], false) bit.

The puppet module itself

It's fairly easy to call, just make sure the template is in the template location and do it as follows.

file { '/etc/portage/make.conf':
  content => epp('portage/portage-make_conf.epp', {'config' => hiera_hash('portage')['make_conf']})
}

fin

If you have any questions I'm on freenode as prometheanfire.

May 16, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In my previous blog post, I demonstrated how to use the PIV feature of a Yubikey to add a 2nd factor authentication to SSH.

Careful readers such as Grzegorz Kulewski pointed out that using the GPG capability of the Yubikey was also a great, more versatile and more secure option on the table (I love those community insights):

  • GPG keys and subkeys are indeed more flexible and can be used for case-specific operations (signing, encryption, authentication)
  • GPG is more widely used and one could use their Yubikey smartcard for SSH, VPN, HTTP auth and code signing
  • The Yubikey 4 GPG feature supports 4096 bit keys (limited to 2048 for PIV)

While I initially looked at the GPG feature, its apparent complexity got me to discard it for my direct use case (SSH). But I couldn’t resist the good points of Grzegorz and here I got back into testing it. Thank you again Grzegorz for the excuse you provided 😉

So let’s get through with the GPG feature of the Yubikey to authenticate our SSH connections. Just like the PIV method, this one has the advantage of allowing a 2nd factor authentication while using the public key authentication mechanism of OpenSSH, and thus does not need any kind of setup on the servers.

Method 3 – SSH using Yubikey and GPG

Acknowledgement

The first choice you have to make is to decide whether you allow your master key to be stored on the Yubikey or not. This choice will be guided by how you plan to use and industrialize your usage of the GPG based SSH authentication.

Consider this to choose whether to store the master key on the Yubikey or not:

  • (con) it will not allow the usage of the same GPG key on multiple Yubikeys
  • (con) if you lose your Yubikey, you will have to revoke your entire GPG key and start from scratch (since the secret key is stored on the Yubikey)
  • (pro) by storing everything on the Yubikey, you won’t necessarily have to keep an offline copy of your master key (and all the process that comes with it)
  • (pro) it is easier to generate and store everything on the key, making it a good starting point for newcomers or occasional GPG users

Because I want to demonstrate and enforce the most straightforward way of using it, I will base this article on generating and storing everything on a Yubikey 4. You can find useful links at the end of the article pointing to references on how to do it differently.

Tools installation

For this to work, we will need some tools on our local machine to setup our Yubikey correctly.

Gentoo users should install those packages:

emerge -av dev-libs/opensc sys-auth/ykpers app-crypt/ccid sys-apps/pcsc-tools app-crypt/gnupg

Gentoo users should also allow the pcscd service to be hotplugged (started automatically upon key insertion) by modifying their /etc/rc.conf and having:

rc_hotplug="pcscd"

Yubikey setup

The idea behind the Yubikey setup is to generate and store the GPG keys directly on our Yubikey and to secure them via a PIN code (and an admin PIN code).

  • default PIN code: 123456
  • default admin PIN code: 12345678

First, insert your Yubikey and let’s change its USB operating mode to OTP+U2F+CCID with the MODE_FLAG_EJECT flag.

ykpersonalize -m86
Firmware version 4.3.4 Touch level 783 Program sequence 3

The USB mode will be set to: 0x86

Commit? (y/n) [n]: y

NOTE: if you have an older version of Yubikey (before Sept. 2014), use -m82 instead.

Then, we can generate a new GPG key on the Yubikey. Let’s open the smartcard for edition.

gpg --card-edit --expert

Reader ...........: Yubico Yubikey 4 OTP U2F CCID (0005435106) 00 00
Application ID ...: A7560001240102010006054351060000
Version ..........: 2.1
Manufacturer .....: Yubico
Serial number ....: 75435106
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

Then switch to admin mode.

gpg/card> admin
Admin commands are allowed

We can start generating the Signature, Encryption and Authentication keys on the Yubikey. During the process, you will be prompted alternatively for the admin PIN and PIN.

gpg/card> generate 
Make off-card backup of encryption key? (Y/n) 

Please note that the factory settings of the PINs are
   PIN = '123456'     Admin PIN = '12345678'
You should change them using the command --change-pin

I advise you to say Yes to the off-card backup of the encryption key.

Yubikey 4 users can choose a 4096 bit key; let’s go for it for every key type.

What keysize do you want for the Signature key? (2048) 4096
The card will now be re-configured to generate a key of 4096 bits
Note: There is no guarantee that the card supports the requested size.
      If the key generation does not succeed, please check the
      documentation of your card to see what sizes are allowed.
What keysize do you want for the Encryption key? (2048) 4096
The card will now be re-configured to generate a key of 4096 bits
What keysize do you want for the Authentication key? (2048) 4096
The card will now be re-configured to generate a key of 4096 bits

Then you’re asked for the expiration of your key. I chose 1 year but it’s up to you (leave 0 for no expiration).

Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at mer. 15 mai 2018 21:42:42 CEST
Is this correct? (y/N) y

Finally you give GnuPG details about your user ID and you will be prompted for a passphrase (make it strong).

GnuPG needs to construct a user ID to identify your key.

Real name: Ultrabug
Email address: ultrabug@nospam.com
Comment: 
You selected this USER-ID:
    "Ultrabug <ultrabug@nospam.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

If you chose to make an off-card backup of your key, you will also get notified of its location as well the revocation certificate.

gpg: Note: backup of card key saved to '/home/ultrabug/.gnupg/sk_8E407636C9C32C38.gpg'
gpg: key 22A73AED8E766F01 marked as ultimately trusted
gpg: revocation certificate stored as '/home/ultrabug/.gnupg/openpgp-revocs.d/A1580FD98C0486D94C1BE63B22A73AED8E766F01.rev'
public and secret key created and signed.

Make sure to store that backup in a secure and offline location.

You can verify that everything went well and take this chance to note the public key ID.

gpg/card> verify

Reader ...........: Yubico Yubikey 4 OTP U2F CCID (0001435106) 00 00
Application ID ...: A7560001240102010006054351060000
Version ..........: 2.1
Manufacturer .....: Yubico
Serial number ....: 75435106
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa4096 rsa4096 rsa4096
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 4
Signature key ....: A158 0FD9 8C04 86D9 4C1B E63B 22A7 3AED 8E76 6F01
 created ....: 2017-05-16 20:43:17
Encryption key....: E1B6 7009 907D 1D94 B200 37D7 8E40 7636 C9C3 2C38
 created ....: 2017-05-16 20:43:17
Authentication key: AAED AB8E E055 41B2 EFFF 62A4 164F 873A 75D2 AD6B
 created ....: 2017-05-16 20:43:17
General key info..: pub rsa4096/22A73AED8E766F01 2017-05-16 Ultrabug <ultrabug@nospam.com>
sec> rsa4096/22A73AED8E766F01 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/164F873A75D2AD6B created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/8E407636C9C32C38 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106

You’ll find the public key ID on the “General key info” line (22A73AED8E766F01):

General key info..: pub rsa4096/22A73AED8E766F01 2017-05-16 Ultrabug <ultrabug@nospam.com>

Quit the card edition.

gpg/card> quit

It is then convenient to upload your public key to a key server, whether public or on your own web server (you can also keep it to be used and imported directly from a USB stick).

Export the public key:

gpg --armor --export 22A73AED8E766F01 > 22A73AED8E766F01.asc

Then upload it to your http server or a public server (needed if you want to be able to easily use the key on multiple machines):

# Upload it to your http server
scp 22A73AED8E766F01.asc user@server:public_html/static/22A73AED8E766F01.asc

# OR upload it to a public keyserver
gpg --keyserver hkps://hkps.pool.sks-keyservers.net --send-key 22A73AED8E766F01

Now we can finish up the Yubikey setup. Let’s edit the card again:

gpg --card-edit --expert

Reader ...........: Yubico Yubikey 4 OTP U2F CCID (0001435106) 00 00
Application ID ...: A7560001240102010006054351060000
Version ..........: 2.1
Manufacturer .....: Yubico
Serial number ....: 75435106
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa4096 rsa4096 rsa4096
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 4
Signature key ....: A158 0FD9 8C04 86D9 4C1B E63B 22A7 3AED 8E76 6F01
 created ....: 2017-05-16 20:43:17
Encryption key....: E1B6 7009 907D 1D94 B200 37D7 8E40 7636 C9C3 2C38
 created ....: 2017-05-16 20:43:17
Authentication key: AAED AB8E E055 41B2 EFFF 62A4 164F 873A 75D2 AD6B
 created ....: 2017-05-16 20:43:17
General key info..: pub rsa4096/22A73AED8E766F01 2017-05-16 Ultrabug <ultrabug@nospam.com>
sec> rsa4096/22A73AED8E766F01 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/164F873A75D2AD6B created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
ssb> rsa4096/8E407636C9C32C38 created: 2017-05-16 expires: 2018-05-16
 card-no: 0001 05435106
gpg/card> admin

Make sure that the Signature PIN is set to “forced” so that your PIN is requested every time your key is used. If it is listed as “not forced”, you can enforce it with the following command:

gpg/card> forcesig

It is also good practice to configure a few more settings on your key.

gpg/card> login
Login data (account name): ultrabug

gpg/card> lang
Language preferences: en

gpg/card> name 
Cardholder's surname: Bug
Cardholder's given name: Ultra

Now we need to set up the PIN and Admin PIN on the card.

gpg/card> passwd 
gpg: OpenPGP card no. A7560001240102010006054351060000 detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 1
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 3
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? Q

If you uploaded your public key to your web server or a public key server, configure its URL on the key:

gpg/card> url
URL to retrieve public key: http://ultrabug.fr/keyserver/22A73AED8E766F01.asc

gpg/card> quit

Now we can quit the gpg card editing session; we’re done on the Yubikey side!

gpg/card> quit

SSH client setup

This is the setup on the machine(s) where you will be using the GPG key. The idea is to import your key from the card to your local keyring so you can use it with gpg-agent (and its SSH support).

You can skip the fetch/import part below if you generated the key on the same machine as the one you are using. You should see it listed when executing gpg -k.

Plug in your Yubikey and load the smartcard.

gpg --card-edit --expert

Then fetch the key from the URL to import it to your local keyring.

gpg/card> fetch

Then you’re done with this part: exit gpg and update/display your card status.

gpg/card> quit

gpg --card-status

You can verify the presence of the key in your keyring:

gpg -K
sec>  rsa4096 2017-05-16 [SC] [expires: 2018-05-16]
      A1580FD98C0486D94C1BE63B22A73AED8E766F01
      Card serial no. = 0001 05435106
uid           [ultimate] Ultrabug <ultrabug@nospam.com>
ssb>  rsa4096 2017-05-16 [A] [expires: 2018-05-16]
ssb>  rsa4096 2017-05-16 [E] [expires: 2018-05-16]

Note the “Card serial no.” showing that the key is actually stored on a smartcard.

Now we need to configure gpg-agent to enable SSH support: edit your ~/.gnupg/gpg-agent.conf configuration file and make sure that the enable-ssh-support option is present:

default-cache-ttl 7200
max-cache-ttl 86400
enable-ssh-support

Then you will need to update your ~/.bashrc file to automatically start gpg-agent and override ssh-agent’s environment variables. Add this at the end of your ~/.bashrc file (or equivalent).

# start gpg-agent if it's not running
# then override SSH authentication socket to use gpg-agent
pgrep -l gpg-agent &>/dev/null
if [[ "$?" != "0" ]]; then
 gpg-agent --daemon &>/dev/null
fi
SSH_AUTH_SOCK=/run/user/$(id -u)/gnupg/S.gpg-agent.ssh
export SSH_AUTH_SOCK

To simulate a clean slate, unplug your card then kill any running gpg-agent:

killall gpg-agent

Then plug your card back in and source your ~/.bashrc file:

source ~/.bashrc

Your GPG key is now listed in your SSH identities!

ssh-add -l
4096 SHA256:a4vsJM6Sw1Rt8orvPnI8nvNUwHbRQ67ylnoTxruozK9 cardno:000105435106 (RSA)

You will now be able to get the SSH public key to copy to your remote servers using:

ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCVDq24Ld/bOzc3yNnY6fF7FNfZnb6wRVdN2xMo1YiA5pz20y+2P1ynq0rb6l/wsSip0Owq4G6fzaJtT1pBUAkEJbuMvZBrbYokb2mZ78iFZyzAkCT+C9YQtvPClFxSnqSL38jBpunZuuiFfejM842dWdMNK3BTcjeTQdTbmY+VsVOw7ppepRh7BWslb52cVpg+bHhB/4H0BUUd/KHZ5sor0z6e1OkqIV8UTiLY2laCCL8iWepZcE6n7MH9KItqzX2I9HVIuLCzhIib35IRp6d3Whhw3hXlit4/irqkIw0g7s9jC8OwybEqXiaeQbmosLromY3X6H8++uLnk7eg9RtCwcWfDq0Qg2uNTEarVGgQ1HXFr8WsjWBIneC8iG09idnwZNbfV3ocY5+l1REZZDobo2BbhSZiS7hKRxzoSiwRvlWh9GuIv8RNCDss9yNFxNUiEIi7lyskSgQl3J8znDXHfNiOAA2X5kVH0s6AQx4hQP9Dl1X2Em4zOz+yJEPFnAvE+XvBys1yuUPq1c3WKMWzongZi8JNu51Yfj7Trm74hoFRn+CREUNpELD9JignxlvkoKAJpWVLdEu1bxJ7jh7kcMQfVEflLbfkEPLV4nZS4sC1FJR88DZwQvOudyS69wLrF3azC1Gc/fTgBiXVVQwuAXE7vozZk+K4hdrGq4u7Gw== cardno:000105435106

This is what ends up in ~/.ssh/authorized_keys on your servers.
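
For example, one way to push it there (assuming user@server is a host you can still reach by other means) is:

ssh-add -L | ssh user@server 'cat >> ~/.ssh/authorized_keys'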

When connecting to your remote server, you will be prompted for the PIN!

Conclusion

Using the GPG feature of your Yubikey is very convenient and versatile. Even if it is not that hard after all, it is fair to note that the PIV method is simpler to implement.

When you need to maintain a large number of security keys in an organization and their usage is limited to SSH, you will be inclined to stick with PIV, provided that 2048-bit keys are acceptable for you.

However, for power users and developers, GPG is definitely worth considering for its versatility and enhanced security.

Useful links

You may find these articles useful to set up your GPG key differently and avoid having the secret key tied to your Yubikey.

Sebastian Pipping a.k.a. sping (homepage, bugs)

Hi!

When I started fetchcommandwrapper about 6 years ago, it was a proof of concept: it plugged into portage, replacing wget for downloads and leveraging ${GENTOO_MIRRORS} and aria2 to both download faster and distribute load across mirrors. A hack for sure, but with some potential.

Back then public interest was non-existent, fetchcommandwrapper had some issues — e.g. metadata.xsd downloads failed and some sites rejected downloading before it made aria2 dress like wget — and I stopped using it myself, eventually.

With the latest bug reports, bugfixes and release of version 0.8 in Gentoo, fetchcommandwrapper is ready for general use now. To give it a shot, you emerge app-portage/fetchcommandwrapper and append source /usr/share/fetchcommandwrapper/make.conf to /etc/portage/make.conf. Done.
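
In command form (run as root), that boils down to something like:

emerge app-portage/fetchcommandwrapper
echo 'source /usr/share/fetchcommandwrapper/make.conf' >> /etc/portage/make.conf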

If you have extra options that you would like to pass to aria2c, put them in ${FETCHCOMMANDWRAPPER_EXTRA}, or ${FETCHCOMMANDWRAPPER_OPTIONS} for fetchcommandwrapper itself; for example

FETCHCOMMANDWRAPPER_OPTIONS="--link-speed=600000"

tells fetchcommandwrapper that my download link tops out at 600 KB/s, which in turn makes aria2 drop connections to mirrors that cannot keep up with at least a third of that, so that faster mirrors get a chance to take their place.
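
Similarly, a hypothetical example of an option passed straight through to aria2c (here limiting the number of connections opened per mirror) would be:

FETCHCOMMANDWRAPPER_EXTRA="--max-connection-per-server=2"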

For non-ebuild bugs, feel free to use https://github.com/gentoo/fetchcommandwrapper/issues to report.

Best, Sebastian

May 12, 2017
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In my previous blog post, I demonstrated how to use a Yubikey to add a 2nd factor (2FA) authentication to SSH using pam_ssh and pam_yubico.

In this article, I will go further and demonstrate another method using Yubikey’s Personal Identity Verification (PIV) capability.

This one has the huge advantage of allowing 2nd factor authentication while using the public key authentication mechanism of OpenSSH, and thus does not need any kind of setup on the servers.

Method 2 – SSH using Yubikey and PIV

The Yubikey 4 and NEO also act as smartcards supporting the PIV standard, which allows you to store a private key on your security key through PKCS#11. This is an amazing feature which also fits our use case very well.

Tools installation

For this to work, we will need some tools on our local machines to set up our Yubikey correctly.

Gentoo users should install those packages:

emerge dev-libs/opensc sys-auth/ykpers sys-auth/yubico-piv-tool sys-apps/pcsc-lite app-crypt/ccid sys-apps/pcsc-tools sys-auth/yubikey-personalization-gui

Gentoo users should also allow the pcscd service to be hotplugged (started automatically upon key insertion) by modifying their /etc/rc.conf and having:

rc_hotplug="pcscd"
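
If you want to check that pcscd detects your Yubikey before going further, sys-apps/pcsc-tools (installed above) ships a small scanner you can run:

pcsc_scan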

Yubikey setup

The idea behind the Yubikey setup is to generate and store a private key in our Yubikey and to secure it via a PIN code.

First, insert your Yubikey and let’s change its USB operating mode to OTP+CCID.

ykpersonalize -m2
Firmware version 4.3.4 Touch level 783 Program sequence 3

The USB mode will be set to: 0x2

Commit? (y/n) [n]: y

Then, we will create a new management key:

key=`dd if=/dev/random bs=1 count=24 2>/dev/null | hexdump -v -e '/1 "%02X"'`
echo $key
D59E46FE263DDC052A409C68EB71941D8DD0C5915B7C143A

Replace the default management key (if prompted, copy/paste the key printed above):

yubico-piv-tool -a set-mgm-key -n $key --key 010203040506070801020304050607080102030405060708

Then change the default PIN code and PUK code of your Yubikey

yubico-piv-tool -a change-pin -P 123456 -N <NEW PIN>

yubico-piv-tool -a change-puk -P 12345678 -N <NEW PUK>

Now that your Yubikey is secure, let’s proceed with the PKCS#11 certificate generation. You will be prompted for the management key that you generated before.

yubico-piv-tool -s 9a -a generate -o public.pem -k

Then create a self-signed certificate (only used by the PKCS#11 library) and import it into the Yubikey:

yubico-piv-tool -a verify-pin -a selfsign-certificate -s 9a -S "/CN=SSH key/" -i public.pem -o cert.pem
yubico-piv-tool -a import-certificate -s 9a -i cert.pem

Here you are! You can now export your public key to use with OpenSSH:

ssh-keygen -D opensc-pkcs11.so -e
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCWtqI37jwxYMJ9XLq9VwHgJlhZViPVAGIUfMm8SAlfs6cka4Cj570lkoGK04r8JAVJFy/iKfhGpL9N9XuartfIoq6Cg/6Qvg3REupuqs51V2cBaC/gnWIQ7qZqlzBulvcOvzNfHFD/lX42J58+E8tWnYg6GzIsImFZQVpmI6SxNfSmVQIqxIufInrbQaI+pKXntdTQC9wyNK5FAA8TXAdff5ZDnmetsOTVble9Ia5m6gqM7MnxNZ56uDpn+6lCxRZSW+Ln2PDE7sivVcST4qpfwY4P4Lrb3vrjCGODFg4xmGNKXsLi2+uZbs5rW7bg4HFO50kKDucPV1M+rBWA9999

Copy your SSH public key to your usual ~/.ssh/authorized_keys file in your $HOME on your servers.
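
For instance (assuming user@server is a host you can already log into by other means), you could reuse the export command above:

ssh-keygen -D opensc-pkcs11.so -e | ssh user@server 'cat >> ~/.ssh/authorized_keys'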

Testing PIV secured SSH

Plug in your Yubikey, then SSH to your remote server using the opensc-pkcs11 library. You will be prompted for your PIN and then successfully logged in 🙂

ssh -I opensc-pkcs11.so cheetah
Enter PIN for 'PIV_II (PIV Card Holder pin)':

You can then configure SSH to use it by default for all your hosts in your ~/.ssh/config:

Host=*
PKCS11Provider /usr/lib/opensc-pkcs11.so

Using PIV with ssh-agent

You can also use ssh-agent to avoid typing your PIN every time.

When asked for the passphrase, enter your PIN:

ssh-add -s /usr/lib/opensc-pkcs11.so
Enter passphrase for PKCS#11: 
Card added: /usr/lib/opensc-pkcs11.so

You can verify that it worked by listing the available keys in your ssh agent:

ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCWtqI37jwxYMJ9XLq9VwHgJlhZViPVAGIUfMm8SAlfs6cka4Cj570lkoGK04r8JAVJFy/iKfhGpL9N9XuartfIoq6Cg/6Qvg3REupuqs51V2cBaC/gnWIQ7qZqlzBulvcOvzNfHFD/lX42J58+E8tWnYg6GzIsImFZQVpmI6SxNfSmVQIqxIufInrbQaI+pKXntdTQC9wyNK5FAA8TXAdff5ZDnmetsOTVble9Ia5m6gqM7MnxNZ56uDpn+6lCxRZSW+Ln2PDE7sivVcST4qpfwY4P4Lrb3vrjCGODFg4xmGNKXsLi2+uZbs5rW7bg4HFO50kKDucPV1M+rBWA9999 /usr/lib64/opensc-pkcs11.so

Enjoy!

Now you have a flexible yet robust way to authenticate your users which you can also extend by adding another type of authentication on your servers using PAM.

I recently spent some time looking at how we could better secure our SSH connections to our servers at work.

So far we are using the OpenSSH public key only mechanism which means that there is no password set on the servers for our users. While this was satisfactory for a time we think that this still suffers some disadvantages such as:

  • we cannot enforce SSH private keys to have a passphrase on the user side
  • the security level of the whole system is based on the protection of the private key which means that it’s directly tied to the security level of the desktop of the users

This led us to think about adding 2nd factor authentication to SSH and about using security keys.

Meet the Yubikey

Yubikeys are security keys made by Yubico. They can support multiple modes and work with the U2F open authentication standard which is why they got my attention.

I decided to try the Yubikey 4 because it can act as a smartcard while offering these interesting features:

  • Challenge-Response
  • OTP
  • GPG
  • PIV

Method 1 – SSH using pam_ssh + pam_yubico

The first method I found satisfactory was to combine the pam_ssh authentication module with pam_yubico as a 2nd factor. This allows server-side passphrase enforcement for SSH and the use of the security key to log in.

TL;DR: two gotchas before we begin

ADVICE: keep a root SSH session open to your servers while deploying/testing this so you can revert any change you make and avoid locking yourself out of your servers.

Setup pam_ssh

Use pam_ssh on the servers to force usage of a passphrase on a private key. The idea behind pam_ssh is that the passphrase of your SSH key serves as your SSH password.

Generate your SSH key pair with a passphrase on your local machine.

ssh-keygen -f identity
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in identity.
Your public key has been saved in identity.pub.
The key fingerprint is:
SHA256:a2/HNCe28+bpMZ2dIf9bodnBwnmD7stO5sdBOV6teP8 alexys@yazd
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                o|
|            . ++o|
|        S    BoOo|
|         .  B %+O|
|        o  + %+*=|
|       . .. @ .*+|
|         ....%B.E|
+----[SHA256]-----+

You must then copy your private key (named identity, with no extension) to your servers under the ~/.ssh/login-keys.d/ folder.
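
For example, something like this should do it (assuming user@server is one of your servers):

ssh user@server 'mkdir -p ~/.ssh/login-keys.d'
scp identity user@server:.ssh/login-keys.d/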

In your $HOME on the servers, you will get something like this:

.ssh/
├── known_hosts
└── login-keys.d
    └── identity

Then you can enable the pam_ssh authentication. Gentoo users should enable the pam_ssh USE flag for sys-auth/pambase and re-install.

Add this at the beginning of the file /etc/pam.d/ssh

auth    required    pam_ssh.so debug

The debug flag can be removed after you tested it correctly.

Disable public key authentication

Because it takes precedence over the PAM authentication mechanism, you have to disable OpenSSH’s PubkeyAuthentication in /etc/ssh/sshd_config:

PubkeyAuthentication no

Enable PAM authentication in /etc/ssh/sshd_config:

ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM yes

Test pam_ssh

Now you should be prompted for your SSH passphrase to log in through SSH.

➜  ~ ssh cheetah
SSH passphrase:

Setup 2nd factor using pam_yubico

Now we will make use of our Yubikey security key to add a 2nd factor authentication to login through SSH on our servers.

Because the Yubikey is not physically plugged into the server, we cannot use an offline Challenge-Response mechanism, so we will have to use a third party to validate the challenge. Yubico graciously provides an API for this, and the pam_yubico module is meant to use it easily.

Preparing your account using your Yubikey (on your machine)

First of all, you need to get your Yubico API key ID from the following URL:

You will get a Client ID (this you will use) and Secret Key (this you will keep safe).

Then you will need to create an authorization mapping file, which basically links your account to a Yubikey fingerprint (modhex). This is equivalent to saying “this Yubikey belongs to this user and can authenticate him”.

First, get your modhex:

Using this modhex, create your mapping file named authorized_yubikeys which will be copied to ~/.yubico/authorized_yubikeys on the servers (replace LOGIN_USERNAME with your actual account login name).

LOGIN_USERNAME:xxccccxxuuxx

NOTE: this mapping file can be a centralized one (in /etc for example) to handle all the users of a server. See the authfile option in the documentation.

Setting up OpenSSH (on your servers)

You must install pam_yubico on the servers. For Gentoo, it’s as simple as:

emerge sys-auth/pam_yubico

Copy your authentication mapping file to your home under the .yubico folder on all servers. You should get this:

.yubico/
└── authorized_yubikeys
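
A simple way to get it there (assuming you created the authorized_yubikeys file in your current directory and that user@server is one of your servers) is:

ssh user@server 'mkdir -p ~/.yubico'
scp authorized_yubikeys user@server:.yubico/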

Configure PAM to use pam_yubico. Add this after the pam_ssh line in /etc/pam.d/ssh, which should now look like this:

auth    required    pam_ssh.so
auth    required    pam_yubico.so id=YOUR_API_ID debug debug_file=/var/log/auth.log

The debug and debug_file flags can be removed after you tested it correctly.

Testing pam_yubico

Now you should be prompted for your SSH passphrase and then for your Yubikey OTP to log in through SSH.

➜  ~ ssh cheetah
SSH passphrase: 
YubiKey for `ultrabug':

About the Yubico API dependency

Careful readers will notice that using pam_yubico introduces a strong dependency on the availability of the Yubico API. If the API becomes unreachable or your internet connection goes down, your servers will be unable to authenticate you!

The solution I found to this problem is to instruct pam to ignore the Yubikey authentication when pam_yubico is unable to contact the API.

In this case, the module will return an AUTHINFO_UNAVAIL code to PAM, which we can act upon using the following syntax. The first lines of /etc/pam.d/ssh should be changed to this:

auth    required    pam_ssh.so
auth    [success=done authinfo_unavail=ignore new_authtok_reqd=done default=die]    pam_yubico.so id=YOUR_API_ID debug debug_file=/var/log/auth.log

Now you can be sure to be able to use your Yubikey even if the API is down or unreachable 😉

April 30, 2017
Sebastian Pipping a.k.a. sping (homepage, bugs)

Hey there!

If you are not subscribed to the new Gentoo packages feed, let me quickly introduce you to SafeEyes, which I started using on a daily basis. It has found its way into Gentoo as x11-misc/safeeyes now. This article does a good job of presenting it:

SafeEyes Protects You From Eye Strain When Working On The Computer (webupd8.org)

Best, Sebastian