
Using letsencrypt certificates with DANE

If, like me, you run your own cloud server, at some point you need TLS certificates to export secure services.  Like a lot of people I object to paying a so-called X.509 authority a yearly fee just to get a certificate, so I’ve been using a free startcom one for a while.  With web browsers delisting startcom, I’m unable to get a new usable certificate from them, so I’ve been investigating letsencrypt instead (if you’re into fun ironies, by the way, you can observe that currently the letsencrypt site isn’t using a letsencrypt certificate, perhaps indicating the administrative difficulty of doing so).

The problem with letsencrypt certificates is that they currently have a 90 day expiry, which means you really need to use an automated tool to keep your TLS certificate current.  Fortunately the EFF has developed such a tool: certbot (they use a letsencrypt certificate for their site, indicating you can have trust that they do know what they’re doing).  However, one of the problems with certbot is that, by default, it generates a new key each time the certificate is renewed.  This isn’t a problem for most people, but if you use DANE records, it causes significant issues.

Why use both DANE and letsencrypt?

The beauty of DANE, as I’ve written before, is that it gives you a much more secure way of identifying your TLS certificate (provided you run DNSSEC).  People verifying your certificate may use DANE as the only verification mechanism (perhaps because they also distrust the X.509 authorities) which means the best practice is to publish a DANE TLSA record for each service and also use an X.509 authority rooted certificate.  That way your site just works for everyone.

The problem here is that being DNS based, DANE records can be cached for a while, so it can take a few days for DANE certificate updates to propagate through the DNS infrastructure. DANE records have an answer for this: they have a mode where the record identifies only the hash of the public key used by the website, not the certificate itself, which means you can change your certificate as much as you want provided you keep the same public/private key pair.  And here’s the rub: if certbot is going to try to give you a new key on each renewal, this isn’t going to work.

The internet society also has posts about this.

Making certbot work with DANE

Fortunately, there is a solution: the certbot manual mode (certonly) takes a --csr flag which allows you to construct your own certificate request to send to letsencrypt, meaning you can keep a fixed key … at the cost of not using most of the certbot automation.  So, how do you construct a correct csr for letsencrypt?  Like most free certificates, letsencrypt will only allow you to specify the certificate commonName, which must be a DNS pointer to the actual website.  If you need a certificate that covers multiple sites, all the other sites must be enumerated in the X.509 v3 extensions field subjectAltName.  Let’s look at how openssl can generate such a certificate request.  One of the slight problems is that openssl, being a cranky tool, does not allow you to specify a subjectAltName on the command line, so you have to construct a special configuration file for it.  I called mine letsencrypt.conf

[req]
prompt = no
distinguished_name = req_dn
req_extensions = req_ext

[req_dn]
commonName = bedivere.hansenpartnership.com

[req_ext]
subjectAltName=@alt_names

[alt_names]
DNS.1=bedivere.hansenpartnership.com
DNS.2=www.hansenpartnership.com
DNS.3=hansenpartnership.com
DNS.4=blog.hansenpartnership.com

As you can see, I’ve given my canonical server (bedivere) as the common name and then four other subject alt names.  Once you have this config file tailored to your needs, you merely run

openssl req -new -key <mykey.key> -config letsencrypt.conf -out letsencrypt.csr

Where mykey.key is the path to your private key (you need a private key because even though the CSR only contains the public key, it is also signed).  However, once you’ve produced this letsencrypt.csr, you no longer need the private key and, because it’s undated, it will now work forever, meaning the infrastructure you put into place with certbot doesn’t need to be privileged enough to access your private key.  Once this is done, you make sure you have TLSA 3 1 1 records pointing to the hash of your public key (here’s a handy website to generate them for you) and you never need to alter your DANE record again.  Note, by the way, that letsencrypt certificates can be used for non-web purposes (I use mine for encrypted SMTP as well), so you’ll need one DANE record for every service you use them for.
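If you’d rather not depend on a website, here’s a minimal sketch of generating the same hash yourself with openssl (3 1 1 means domain-issued certificate, SubjectPublicKeyInfo selector, SHA-256 matching; the zone record shows the general form with an illustrative host and port, and the digest placeholder is where your output goes):

openssl req -in letsencrypt.csr -noout -pubkey | \
    openssl pkey -pubin -outform DER | \
    openssl dgst -sha256 -hex

_443._tcp.www.hansenpartnership.com. IN TLSA 3 1 1 <sha256 hex digest>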

Putting it all together

Now that you have your certificate request, depending on what version of certbot you have, you may need it in DER format

openssl req -in letsencrypt.csr -out letsencrypt.der -outform DER

And you’re ready to run the following script from cron

#!/bin/bash
date=$(date +%Y-%m-%d)
dir=/etc/ssl/certs
cert="${dir}/letsencrypt-${date}.crt"
fullchain="${dir}/letsencrypt-${date}.pem"
chain="${dir}/letsencrypt-chain-${date}.pem"
csr=/etc/ssl/letsencrypt.der
out=/tmp/certbot.out

##
# certbot handling
#
# first, certbot cannot replace certs, so the dated locations ensure
# the certificate files are unique on each run.  Next, it's
# really chatty, so the only way to tell if there was a failure is to
# check whether the certificates got updated and then get cron to
# email the log
##

certbot certonly --webroot --csr ${csr} --preferred-challenges http-01 -w /var/www --fullchain-path ${fullchain} --chain-path ${chain} --cert-path ${cert} > ${out} 2>&1

if [ ! -f ${fullchain} -o ! -f ${chain} -o ! -f ${cert} ]; then
    cat ${out}
    exit 1;
fi

# link into place

# cert only (apache needs)
ln -sf ${cert} ${dir}/letsencrypt.crt
# cert with chain (stunnel needs)
ln -sf ${fullchain} ${dir}/letsencrypt.pem
# chain only (apache needs)
ln -sf ${chain} ${dir}/letsencrypt-chain.pem

# reload the services
sudo systemctl reload apache2
sudo systemctl restart stunnel4
sudo systemctl reload postfix

Note that this script needs the ability to write files and create links in /etc/ssl/certs (which can be done with group permissions; see the sketch after the sudoers snippet below) and the systemctl reloads need the following in /etc/sudoers

%LimitedAdmins ALL=NOPASSWD: /bin/systemctl reload apache2
%LimitedAdmins ALL=NOPASSWD: /bin/systemctl reload postfix
%LimitedAdmins ALL=NOPASSWD: /bin/systemctl restart stunnel4
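For the group permission piece mentioned above, a minimal sketch (run as root; the group name certwriters and the user certuser are purely illustrative, so adapt to your own setup):

groupadd certwriters
usermod -aG certwriters certuser
chgrp certwriters /etc/ssl/certs
chmod g+w /etc/ssl/certs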

And finally you can run this as a cron script under whichever user you’ve chosen to have sufficient privilege to write the certificates.  I run this every month, so I know that if anything goes wrong I have at least two months to fix it.

Oh, and just in case you need proof that I got this all working, here you are!

TPM2 and Linux

Recently Microsoft started mandating TPM2 as a hardware requirement for all platforms running recent versions of Windows.  This means that eventually all shipping systems (starting with laptops first) will have a TPM2 chip.  The reason this impacts Linux is that TPM2 is radically different from its predecessor TPM1.2; so different, in fact, that none of the existing TPM1.2 software on Linux (trousers, the libtpm.so plug in for openssl, even my gnome keyring enhancements) will work with TPM2.  The purpose of this blog is to explore the differences and how we can make ready for the transition.

What are the Main 1.2 vs 2.0 Differences?

The big one is termed Algorithm Agility.  TPM1.2 had SHA1 and RSA2048 only.  TPM2 is designed to have many possible algorithms, including support for elliptic curve and a host of government mandated (Russian and Chinese) crypto systems.  There’s no requirement for any shipping TPM2 to support any particular algorithms, so you actually have to ask your TPM what it supports.  The bedrock for TPM2 in the West seems to be RSA1024-2048, ECC and AES for crypto and SHA1 and SHA256 for hashes.
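As a hedged illustration of asking, the tss2 tools described below can dump the algorithm capability; capability 0 corresponds to TPM_CAP_ALGS and the output will of course vary from TPM to TPM:

TPM_INTERFACE_TYPE=dev tssgetcapability -cap 0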

What algorithm agility means is that you can no longer have root keys (EK and SRK, see here for details) like TPM1.2 did, because a key requires a specific crypto algorithm.  Instead TPM2 has primary “seeds” and a Key Derivation Function (KDF).  The way this works is that a seed is simply a long string of random numbers, but it is used as input to the KDF along with the key parameters and the algorithm, and out pops a real key based on the seed.  The KDF is deterministic, so if you input the same algorithm and the same parameters you get the same key again.  There are four primary seeds in the TPM2: three permanent ones which only change when the TPM2 is cleared: endorsement (EPS), Platform (PPS) and Storage (SPS).  There’s also a Null seed, which is used for ephemeral keys and changes every reboot.  A key derived from the SPS can be regarded as the SRK and a key derived from the EPS can be regarded as the EK.  Objects descending from these keys are called members of hierarchies.  One of the interesting aspects of the TPM is that the root of a hierarchy is a key, not a seed (because you need to exchange secret information with the TPM), and that there can be multiple of these roots with different key algorithms and parameters.

Additionally, the mechanism for making use of keys has changed slightly.  In TPM 1.2, to import a secret key you wrapped it asymmetrically to the SRK and then called LoadKeyByBlob to get a use handle.  In TPM2 this is a two stage operation: first you import a wrapped (or otherwise protected) private key with TPM2_Import, but that returns a private key structure encrypted with the parent key’s internal symmetric key.  This symmetrically encrypted key is then loaded (using TPM2_Load) to obtain a use handle whenever needed.  The philosophical change is from online keys in TPM 1.2 (keys which were resident inside the TPM) to offline keys in TPM2 (keys which can be loaded when needed).  This philosophy has been reinforced by reducing the space available to keep keys loaded in TPM2 (see later).

Playing with TPM2

If you have a recent laptop, chances are you either have or can software upgrade to a TPM2.  I have a Dell XPS 13 (the Skylake version) which comes with a software upgradeable Nuvoton TPM.  Dell kindly provides a 1.2->2 switching program here, which seems to work under Freedos (odin boot), so I have a physical TPM2 based system.  For those of you who aren’t so lucky, you can still play along, but you need a TPM2 emulator.  The best one is here; simply download and untar it, then type make in the src directory and run it as ./tpm_server.  It listens on two TCP ports, 2321 and 2322, for TPM commands, so there’s no need to install it anywhere; it can be run directly from the source directory.
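For reference, a sketch of the simulator build and run steps (the tarball name here is illustrative; use whatever the download is actually called):

mkdir ibmtpm && cd ibmtpm
tar xvf ../ibmtpm.tar.gz
cd src
make
./tpm_server &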

After that, you need the interface software called tss2.  The source is here, but Fedora 25 and recent Ubuntu already package it.  I’ve also built openSUSE packages here.  The configuration of tss2 is controlled by environment variables.  The most important one is TPM_INTERFACE_TYPE, which tells it how to connect to the TPM2.  If you’re using a simulator, you set this to “socsim” and if you have a real TPM2 device you set it to “dev”.  One final thing about direct device connection: in tss2 there’s no daemon like trousers had to broker the connection; all your users connect directly to the TPM2 device /dev/tpm0.  To do this, the device has to support read and write by arbitrary users, so its permissions need to be 0666.  I’ve got a udev rule to achieve this

# tpm 2 devices need to be world readable
SUBSYSTEM=="tpm", ACTION=="add", MODE="0666"

Which goes in /etc/udev/rules.d/80-tpm-2.rules on openSUSE.  The next thing you need to do, if you’re running the simulator, is power it on and start it up (for a real device, this is done by the bios):

tsspowerup
tssstartup

The simulator will now create an NVChip file in the directory where you started it to store NV ram based objects, which it will read on next start up.  The first thing you need to do is create an SRK and store it in NV memory.  Microsoft uses the well known key handle 81000001 for this, so we’ll do the same.  The reason for doing this is that a real TPM takes ages to run the KDF for RSA keys because it has to look for prime numbers:

jejb@jarvis:~> TPM_INTERFACE_TYPE=socsim time tsscreateprimary -hi o -st -rsa
Handle 80000000
0.03 user 0.00 system 0:00.06 elapsed

jejb@jarvis:~> TPM_INTERFACE_TYPE=dev time tsscreateprimary -hi o -st -rsa
Handle 80000000
0.04 user 0.00 system 0:20.51 elapsed

As you can see, the simulator created a primary storage key (the SRK) in a few milliseconds, but it took my real TPM2 20 seconds to do it … not something you want to wait for, hence the need to store this permanently under a well known key handle and get rid of the temporary copy

tssevictcontrol -hi o -ho 80000000 -hp 81000001
tssflushcontext -ha 80000000

tssevictcontrol tells the TPM to copy the key at transient handle 80000000 to permanent NV handle 81000001 and tssflushcontext erases the transient key.  Flushing transient objects is very important, because TPM2 has a lot less transient storage space than TPM1.2 did; usually only about three handles worth.  You can tell how much you have by doing

tssgetcapability -cap 6|grep -i transient
TPM_PT 0000010e value 00000003 TPM_PT_HR_TRANSIENT_MIN - the minimum number of transient objects that can be held in TPM RAM

Where the value (00000003) tells me that the TPM can store at least 3 transient objects.  After that you’ll start getting out of space errors from it.

The final step in taking ownership of a TPM2 is to set the authorization passwords.  Each of the four hierarchies (Null, Owner, Endorsement, Platform) and the Lockout has a possible authority password.  The Platform authority is cleared on startup, so there’s not much point setting it (it’s used by the BIOS or Firmware to perform TPM functions).  Of the other four, you really only need to set Owner, Endorsement and Lockout (I use the same password for all of them).

tsshierarchychangeauth -hi l -pwdn <your password>
tsshierarchychangeauth -hi e -pwdn <your password>
tsshierarchychangeauth -hi o -pwdn <your password>

After this is done, you’re all set.  Note that as well as these authorizations, each object can have its own authorization (or even policy), so the SRK you created earlier still has no password, allowing it to be used by anyone.  Note also that the owner authorization controls access to the NV memory, so you’ll need to supply it now to make other objects persistent.

An Aside about Real TPM2 devices and the Resource Manager

Although I’m using the code below to store my keys in the TPM2, there are a couple of practical limitations which mean it won’t work for you, without a kernel update, if you have multiple applications using the TPM2.  The two problems are

  1. The Linux Kernel TPM2 device /dev/tpm0 only allows one user at once.  If a second application tries to open the device it will get an EBUSY which causes TSS_Create() to fail.
  2. Because most applications make use of transient key slots and most TPM2s have only a couple of these, simultaneous users can end up running out of these and getting unexpected out of space errors.

The solution to both of these is something called a Resource Manager (RM).  What the RM does is effectively swap transient objects in and out of the TPM as needed to prevent it from running out of space.  Linux has an issue in that both the kernel and userspace are potential users of TPM keys so the resource manager has to live inside the kernel.  Jarkko Sakkinen has preliminary resource manager patches here, and they will likely make it into kernel 4.11 or 4.12.  I’m currently running my laptop with the RM patches applied, so multiple applications work for me, but since these are preliminary patches, I wouldn’t currently advise others to do this.  The way these patches work is that once you declare to the kernel via an ioctl that you want to use the RM, every time you send a command to the TPM, your context gets swapped in, the command is executed, the context is swapped out and the response sent meaning that no other user of the TPM sees your transient objects.  The moment you send the ioctl, the TPM device allows another user to open it as well.

Using TPM2 as a keystore

Once the implementation is sorted out, openssl and gnome-keyring patches can be produced for TPM2.  The only slight wrinkle is that for create_tpm2_key you require a parent key to exist in the NV storage (at the 81000001 handle we talked about previously).  So to convert from a password protected openssh RSA key to a TPM2 based one, you do

create_tpm2_key -a -p 81000001 -w id_rsa id_rsa.tpm
mv id_rsa.tpm id_rsa

And then gnome keyring manager will work nicely (make sure you keep a copy of your original private key in case you want to move to a new laptop or reseed the TPM2).  If you use the same TPM2 password as your original key password, you won’t even need to update the gnome login keyring for the new password.

Conclusions

Because of the lack of an in-kernel Resource Manager, TPM2 is ready for experimentation in Linux but definitely not ready for prime time yet (unless you’re willing to patch your kernel).  Hopefully this will change in the 4.11 or 4.12 kernel when the Resource Manager finally goes upstream.

Looking forwards to the new stack, the lack of a central daemon is really a nice feature: tcsd crashing used to kill all of my TPM key based applications, but with tss2 having no central daemon, everything has just worked(tm) so far.  A kernel based RM also means that the kernel can happily use the TPM (for its trusted keys and disk encryption) without interfering with whatever userspace is doing.

TPM enabling gnome-keyring

One of the questions about the previous post on using your TPM as a secure key store was “could the TPM be used to protect ssh keys?”  The answer is yes, because openssh uses openssl (so you can simply convert an openssh private key to a TPM private key) but the ssh-agent wouldn’t work because ssh-add passes in the private keys by their component primes, which is not possible when the private key is guarded by the TPM.  However,  that made me actually look at gnome-keyring to figure out how it worked and whether it could be used with the TPM.  The answer is yes, but there are also some interesting side effects of TPM enabling gnome-keyring.

Gnome-keyring Architecture

Gnome-keyring consists of essentially three components: a pluggable store backend, a secure passphrase holder (which is implemented as a backend store) and an agent frontend.  The frontend and backend talk to each other using the pkcs11 protocol.  This also means that the backend can serve anything that also speaks pkcs11, which most encryption systems do.  The stores consist of a variety of file or directory backed keys (for instance the ssh-store simply loads all the ssh keys in your $HOME/.ssh; the secret-store uses the gnome default keyring store, $HOME/.local/shared/keyring/, to store collections of passwords).  The frontends act as bridges between a variety of external protocols and the keyring daemon.  They take whatever external protocol they’re speaking in, convert the request to pkcs11 and query the backends for the information.  The most important frontend is the login one, which is called by gnome-keyring-daemon at start of day to unlock the secret-store (which contains all your other key passwords) using your login password as the key.  The ssh-agent frontend speaks the ssh agent protocol and converts key and signing requests to pkcs11, which is mostly served by the ssh-store.  The gpg-agent frontend speaks a very cut down version of the gpg agent protocol: basically all it does is allow you to store gpg key passwords in the secret-store; it doesn’t do any cryptographic operations.

Pkcs11 Essentials

Pkcs11 is a highly complex protocol designed for opening sessions which query or operate on tokens, which roughly speaking represent a bundle of objects (If you have a USB crypto key, that’s usually represented as a token and your stored keys as objects).  There is some intermediate stuff about slots, but that mostly applies to tokens which may be offline and need insertion, which isn’t relevant to the gnome keyring.  The objects may have several levels of visibility, but the most common is public (always visible) and private (must be logged in to the token to see them).  Objects themselves have a variety of attributes, some of which depend on what type of object they are and some of which are universal (like id and label).  For instance, an object representing an RSA public key would have the public exponent and the public modulus as queryable attributes.

The pkcs11 protocol also has a reasonably comprehensive object finding protocol.  An arbitrary list of attributes and values can be passed to the query and it will return all objects that fully match.  The token identifier is a query attribute which may be present but doesn’t have to be, so if you omit it, you end up searching over every token the pkcs11 subsystem knows about.

The other operation that pkcs11 understands is logging into a token with a pin (which is pkcs11 speak for a passphrase).  The pin doesn’t have to be supplied by the entity performing the login; it may be supplied to the token via an out of band mechanism (for instance the little button on the yubikey, or even a real keypad).  The important thing for gnome keyring here is that logging into a token may be as simple as sending the login command and letting the token sort out the authorization, which is the way gnome keyring operates.

You also need to understand the conventions for searching for keys.  The pkcs11 standard recommends (but does not require) that public and private key pairs, which exist as two separate objects, should have the same id attribute.  This means that if you want to find an rsa private key, the way you do this is by searching the public objects for the exponent and modulus.  Once this is returned, you log into the token and retrieve the private key object by the public key id.
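If you want to see this convention in action outside gnome-keyring, OpenSC’s pkcs11-tool can poke at any pkcs11 module; the module path and the id value below are illustrative:

# list public key objects to discover the id of the key you're after
pkcs11-tool --module /usr/lib/pkcs11/gnome-keyring-pkcs11.so --list-objects --type pubkey
# then log in and retrieve the private key object with the matching id
pkcs11-tool --module /usr/lib/pkcs11/gnome-keyring-pkcs11.so --login --list-objects --type privkey --id 0123abcd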

The final thing to understand is that once you have the private key object, it merely acts as an entitlement to have the token perform certain private key operations for you (like encryption, decryption or signing); it doesn’t mean you have access to the private key itself.

Gnome Keyring handling of ssh keys

Once the ssh-agent side of gnome keyring receives a challenge, it must respond by returning the private key signature of the challenge.  To do this, it searches pkcs11 for the key used for the challenge (for RSA keys, it searches by modulus and exponent, for DSA keys it searches by signature primes, etc).  One interesting point to note is that the search isn’t limited to the gnome keyring ssh token store, so if the key is found anywhere, in any pkcs11 token, it will be used.  The expectation is that the key id attribute will be the ssh fingerprint and the key label attribute will be the ssh comment, but these aren’t used in the actual search.  Once the public key is found, the agent logs into the token, retrieves the private key by label and id and gets the private key to sign the challenge.

Adding TPM key handling to the ssh store

Gnome keyring is based on GNU libgcrypt which handles all cryptographic objects via s-expressions (which are basically lisp like strings).  Gcrypt itself doesn’t seem to have an s-expression for a token, so the actual signing will be done inside the keyring code.  The s-expression I chose to represent a TPM based private key is

(private-key
  (rsa
    (tpm
      (blob <binary key blob>)
      (auth <authorization>))))

The rsa is necessary for it to be recognised for RSA signatures.  Now the ssh-store is modified to recognise the PEM guards for TPM keys, load the key blob up into the s-expression and ask the secret-store for the authorization passphrase which is also loaded.  In theory, it might be safer to ask the secret store for the authorization at key use time, but this method mirrors what happens to private keys, which are decrypted at load time and stored as s-expressions containing the component primes.

Now the RSA signing code is hooked to check for TPM s-expressions and divert the signature to the TPM if they’re found.  Once this is done, gnome keyring is fully enabled for TPM based ssh keys.  The initial email thread about this is here, and an openSUSE build repository of the modified code is here.

One important design constraint here is that people (well, OK, me) have a lot of ssh keys, sometimes more than can be effectively stored by the TPM in its internal shielded memory (plus, if you’re like me, you’re using the TPM for other things like VPN), so the keyring TPM additions are designed not to burden that memory further: TPM keys are only loaded into the TPM when they’re needed for an operation and are unloaded otherwise.  This means the TPM can scale to hundreds of keys, but at the expense of taking longer: instead of simply asking the TPM to sign something, you first have to ask the TPM to load and unwrap the key (which is an RSA operation), then use it to sign.  Effectively it’s two expensive TPM operations per real cryptographic function.

Using TPM based ssh keys

Once you’ve installed the modified gnome-keyring package, you’re ready actually to make use of it.  Since we’re going to replace all your ssh private keys with TPM equivalents, which are keyed to a given storage root key (SRK), which changes every time the TPM is cleared, it is prudent to take a backup of all your ssh keys on some offline encrypted USB stick, just in case you ever need to restore them (or transfer them to a new laptop).

cd ~/.ssh/
for pub in *rsa*.pub; do
    priv=$(basename $pub .pub)
    echo $priv
    create_tpm_key -m -a -w $priv ${priv}.tpm
    mv ${priv}.tpm $priv
done

You’ll be prompted first to create a TPM authorization password for the key, then to verify it, then to give the PEM password for the ssh key (for each key).  Note, this only transfers RSA keys (the only algorithm the TPM can handle) and also note the script above is overwriting the PEM private key, so make sure you have a backup.  The create_tpm_key command comes from the openssl_tpm_engine package, which I’ve patched here to support random migration authority and well known SRK authority.
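A quick sanity check that the conversion actually happened is to look at the first line of the replaced files; a wrapped key carries TSS KEY BLOB guards rather than the usual RSA PRIVATE KEY ones:

head -1 ~/.ssh/id_rsa
-----BEGIN TSS KEY BLOB-----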

All you have to do now is log out and back in (to restart the gnome-keyring daemon with the new ssh keystore) and you’re using TPM based keys for all your ssh operations.  I’ve noticed that this adds a couple of hundred milliseconds per login, so if you batch stuff over ssh, this is why your scripts are slower.
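If you want to quantify the overhead on your own setup, something like this (hostname illustrative) gives a rough per-connection figure to compare before and after the switch:

time ssh some.host true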

Using Your TPM as a Secure Key Store

One of the new features of Linux Plumbers Conference this year was the TPM Microconference, which facilitated great discussions both in the session itself and in the hallways.  Quite a bit of discussion was generated by the Beginner’s Guide to the TPM talk I gave, mostly because I blamed the Trusted Computing Group for the abject failure to adopt TPMs for anything citing the incredible complexity of their stack.

The main thing that came out of this discussion was that a lot of this stack complexity can be hidden from users and we should concentrate on making the TPM “just work” for all cryptographic functions where we have parallels in the existing security layers (like the keystore).  One of the great advantages of the TPM, instead of messing about with USB pkcs11 tokens, is that it has a file format for TPM keys (I’ll explain this later) which can be used directly in place of standard private key files.  However, before we get there, let’s discuss some of the basics of how your TPM works and how to make use of it.

TPM Basics

Note that all of what I’m saying below applies to a 1.2 TPM (the type most people have in their laptops).  2.0 TPMs are now appearing on the market, but chances are you have a 1.2.

A TPM is traditionally delivered in your laptop in an uninitialised state.  In older laptops, the TPM is usually disabled and you have to find an entry in the BIOS menu to enable it.  In more modern laptops (thanks to Windows 10) the TPM is enabled in the bios and ready for the OS install to make use of it.  All TPMs are delivered with one manufacturer set key called the Endorsement Key (EK).  This key is unique to your TPM (like an identifying label) and is used as part of the attestation protocol.  Because the EK is a unique label, the attestation protocol is rather complex, involving a so called privacy CA to protect your identity, but because it isn’t needed for using the TPM as a secure keystore, I won’t cover it further.

The other important key, which you have to generate, is called the Storage Root Key.  This key is generated internally within the TPM once somebody takes ownership of it.  The package you need to begin using the tpm is tpm-tools, which is packaged by most distros. You must also have the Linux TSS stack trousers installed (just installing tpm-tools will often pull this in) and have the tcsd part of trousers running (usually systemctl start tcsd; systemctl enable tcsd). I tend to configure my TPM with an owner password (for things like resetting dictionary attacks) but a well known storage root key authority.  To do this from a fully cleared and enabled TPM, execute

tpm_takeownership -z

And give your chosen owner password when prompted.  If you get an error, chances are you need to go back to the BIOS menu and actively clear and reset the TPM (usually under the security options).

Aside about Authority and the Trusted Security Stack

To the TPM, an “authority” is a 20 byte number you use to prove you’re allowed to manipulate whatever object you’re trying to use.  The TPM typically has a well known way of converting typed passwords into these 20 byte codes.  The way you prove you know the authority is to add a Hashed Message Authentication Code (HMAC) to your TPM command.  This means that the hash can only be generated by someone who knows the authority for the object, but anyone seeing the hash cannot derive the authority from it.  The utility of this is that the trousers library (tspi) generates the HMAC before the TPM command is passed to the central daemon (tcsd), meaning that nothing except you and the TPM know the authority.

The final thing about authority you need to know is that the TPM has a concept of “well known authority”, which simply means supplying 20 bytes of zeros.  It’s kind of paradoxical to have a secret everyone knows; however, there are reasons for this: for most objects in the TPM, whether you require authority to use them is optional, but for some it is mandatory.  For objects (like the SRK) where authority is mandatory, using the well known authority is equivalent to saying actually I don’t need authorization for this object.
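As a rough sketch of both ideas (this assumes the usual trousers behaviour of SHA1-hashing a plain passphrase into the 20 byte code; the passphrase below is purely illustrative):

# SHA1 of the typed passphrase gives a 20 byte authority
echo -n 'my owner password' | sha1sum
# the well known authority is simply twenty zero bytes
head -c 20 /dev/zero | xxd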

The Storage Root Key (SRK)

Once you’ve generated this as above, the TPM keeps the secret part permanently hidden, but can be persuaded to give anyone the public part.  In TPM 1.2, the SRK is an RSA 2048 key.  On most modern TPMs, you have to tell the TPM you want anyone to be able to read the public part of the storage root key, which you do with this command

tpm_restrictsrk -a

You’ll get prompted for the owner password.  Once you execute this command, anyone who knows the SRK authority (which you’ve set to be well known) is allowed to read the public part.

Why all this fuss about SRK authorization?  Well, traditionally, the TPM is designed for use in a hostile multi-user environment.  In the relaxed, no authorization, environment I’ve advised you to set up, anyone who knows the SRK can upload any storage object (like a key or protected blob) into the TPM.  This means, since the TPM has very limited storage, that they could in theory do a DoS attack against the TPM simply by filling it with objects.  On a laptop where there’s only one user (you) this is not usually a concern, hence the advice to use a well known authority, which makes the TPM much easier to use.

The way external objects (like keys or data blobs) are uploaded into the TPM is that they all have a parent (which must be a storage key) and they are encrypted to the public part of this key (in TPM parlance, this is called wrapping).  The TPM can have deep key hierarchies (all eventually parented to the SRK), but for a laptop, it makes sense simply to use the SRK as the only storage key and wrap everything for it as the parent.  Now here’s the reason for the well known authority: to upload an object into the TPM, it not only needs to be wrapped to the parent key, you also need to use the parent key authority to perform the upload.  The object you’re using also has a separate authority.  This means that when you upload and use a key, if you’ve set a SRK password, you’ll end up having to type both the SRK password and the key password pretty much every time you use it, which is a bit of a pain.

The tools used to create wrapped keys are found in the openssl_tpm_engine package.  I’ve done a few patches to make it easier to use (mostly by trying well known authority first before asking for the SRK password), so you can see my patched version here.  The first thing you can do is take any PEM key file you have and wrap it for your tpm

create_tpm_key -m -w test.key test.tpm.key

This creates a TPM key file test.tpm.key containing a wrapped key for your TPM with no authority (to add an authority password, use the -a option).  If you cat the test.tpm.key file, you’ll see it looks like a standard PEM file, except the guards are now

-----BEGIN TSS KEY BLOB-----
-----END TSS KEY BLOB-----

This key is now wrapped for your TPM’s SRK and would only be usable on your laptop.  If you’re fortunate enough to be using an application linked with gnutls, you can simply use this key with the URI  tpmkey:file=<path to test.tpm.key>.  If you’re using openssl, you need to patch it to get it to use TPM keys easily (see below).

The ideal, however, would be that, since these are PEM files with unique guards, any ssl provider should simply recognise the guards and load the key into the TPM.  This means that in order to use a TPM key, you take a standard PEM private key file, transform it into a TPM key file and then simply copy it back to where the original key file was being used from and voila! you’re using a TPM based key.  This is what the openssl patches below do.

Getting TPM keys to “just work” with openssl

In openssl, external encryption processors, like the TPM or USB keys, are used by things called engines.  The engine you need for the TPM is also in the openssl_tpm_engine package, so once you’ve installed that package, the engine is available.  Unfortunately, openssl doesn’t naturally use a particular engine unless told to do so (most of the openssl tools have a -engine option for this).  However, having to specify the engine in every application somewhat spoils the “just works” aspect we’re looking for, so the openssl patches here allow an engine to specify that it knows how to parse a PEM file and can load a key from it.  This allows you simply to replace the original key file with a TPM protected key file and have your application continue working with it.

As a demo of the usefulness, I’m using it on my current laptop with all my VPN keys.  It is also possible to use it with openssh keys, since they’re standard PEM files.  However, the way openssh works with agents means that the agent cannot handle the keys and you have to type the password (if you set one on the key) each time you use it.

It should be noted that the idea of having PEM based TPM keys just work in openssl is encountering resistance.  However, it does just work in gnutls (provided you change the file name to be a tpmkey:file= URL).
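As a hedged usage example with gnutls-cli (host, port and file names are illustrative), the key URL simply replaces the normal key file argument:

gnutls-cli -p 443 some.host --x509certfile test.crt \
    --x509keyfile 'tpmkey:file=/path/to/test.tpm.key'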

Conclusions (or How Well is it Working?)

As I said above, I’m currently using this scheme for my openvpn and ssh keys.  I have to confess, since I use openssh a lot, I got very tired of having to type the password on every ssh operation, so I’ve gone back to using non-TPM based keys which can be handled by the agent.  Fixing this is on my list of things to look at.  However, I still am using TPM based keys for my openvpn.

Even for openvpn, though, there are hiccoughs: the trousers daemon, tcsd, crashes periodically on my platform.  When it does, the vpn goes down (because the VPN needs a key based authentication transaction every hour to rotate the symmetric encryption keys).  Unfortunately, just restarting tcsd isn’t enough because the design of trousers doesn’t seem to be robust to this failure (even though the tspi part linked with the application could recreate all the keys), so the VPN itself must be restarted when this happens, which makes it rather user unfriendly.  Fixing trousers to cope with tcsd failure is also on my list of things to fix …

Home Automation: Coping with Insecurity in the IoT

Reading Matthew Garrett’s exposés of home automation IoT devices makes most engineers think “hell no!” or “over my dead body!”.  However, there’s also the siren lure that the ability to program your home, or update its settings from anywhere in the world is phenomenally useful:  for instance, the outside lights in my house used to depend on two timers (located about 50m from each other).  They were old, loud (to the point the neighbours used to wonder what the buzzing was when they visited) and almost always wrongly set for turning the lights on at sunset.  The final precipitating factor for me was the need to replace our thermostat, whose thermistor got so eccentric it started cooling in winter; so away went all the timers and their loud noises and in came a z-wave based system, and the guilty pleasure of having an IoT based home automation system.  Now the lights precisely and quietly turn on at sunset and off at 23:00 (adjusting themselves for daylight savings); the thermostat is accessible from my phone, meaning I can adjust it from wherever I happen to be (including Hong Kong airport when I realised I’d forgotten to set it to energy saving mode before we went on holiday).  Finally, there’s waking up at 3am to realise your wife has fallen asleep over her book again and being able to turn off her reading light from your alarm clock without having to get out of bed … Automation bliss!

We all want the convenience; the trick is to work around the rampant insecurity that comes with today’s IoT to avoid your home automation system being part of the DDoS bot net that brings down the internet.

Selecting your network

For me, ruling out anything IP/Wifi based was partly due to Matthew’s blog and partly because my home Wifi network looks different from everyone else’s: I actually run an internal, secure, home network that is wired, and have my Wifi sit unsecured and outside the firewall.  This goes back to the good old days of expecting to find wifi wherever you travelled and returning the courtesy by ensuring your wifi was accessible, but it does mean that any wifi connected device would be outside my firewall and open to all, which, given the general insecurity of the devices, makes this a non-starter.

The next level down is to use a private network, like zigbee or z-wave.  I chose z-wave because it covers longer distances (which I need) and it doesn’t interfere with wifi (I have a hard time covering the entire house, even with two wifi access points).  Z-wave also looks secure, but, if you dig deeply, you find that there are flaws in the protocol that lay you open to a local attacker.  This, by the way, shows the futility of demanding security from IoT vendors who really don’t understand how to do it: a flawed security implementation is pretty much as bad as no security at all.

Once this decision is made, the next is to choose a gateway to the internet that does what you want, namely give you remote control without giving up your security.

Gateway Phone Home?

A surprising number of z-wave controllers are of the phone home type (this means phone their manufacturer’s home, not you), and almost all of these simply won’t work if they’re not allowed to phone home.  Google comprehensively demonstrated the issues this raises with nest: lots of early adopters now have so much non-functional junk.

For me, there was also the burned hand experience with Google services: whenever I travel, I invariably get locked out because of some pseudo-security issue and it takes a fight to get back in again. This ultimately precipitated my move away from the Google cloud and on to Owncloud for calendar and contacts, but also means I really don’t want to have to trust another external service for my home automation.

Given the significantly limited choice of non-phone home z-wave controllers, I chose the HomeSeer Zee S2.  It’s basically a raspberry pi with a z-wave dongle and Linux.  If you’re into Linux on everything, you should be aware that the home automation system is actually written in .net and it uses mono to bridge the gap; an odd choice given that there’s no known windows platform that could actually run this system.

Secure Internet based Automation

The ZS2 does actually come with wifi but, given my already listed wifi problems, it’s plugged into my secure wired network with all phone home capabilities disabled.  Great, but that means it’s only accessible over a VPN and I want to be able to control it from things like my phone, where running a VPN is cumbersome, so let’s do some magic tricks to make it securely accessible by any member of the family from any device.

Obviously, since I already run Owncloud, I have a server of my own in a co-located site.  It’s this server I propose to use as my secure gateway.  The obvious way of doing this is simply proxying the ZS2 controller web page, but there are a couple of problems: firstly if I do it globally the ZS2 will be visible to port scans and secondly it only actually has an unencrypted web page with http authentication, meaning the login credentials would go over the internet in clear text … oops!

The solution to the first of these is to make the web page only accessible to authenticated devices.  My current method is to use firewall whitelisting and a hook to an existing service authentication to open up the port.  So in the firewall mangle table, all the ports which require whitelisting are marked.  Then, in the input firewall, any packet so marked is checked against the whitelist for a matching source IP.  If a match is found, then the packet is permitted, otherwise it is denied.
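A hedged sketch of those rules (the port and timeout are illustrative, but the list name must match the xt_recent file the pam script below writes to):

# mangle table: mark packets destined for ports that require whitelisting
iptables -t mangle -A PREROUTING -p tcp --dport 8443 -j MARK --set-mark 1
# input chain: accept marked packets only from recently whitelisted source IPs
iptables -A INPUT -m mark --mark 1 -m recent --name whitelist --rcheck --seconds 3600 -j ACCEPT
iptables -A INPUT -m mark --mark 1 -j DROP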

Whitelisting itself is done by a simple pam script

#!/usr/bin/perl
use Socket;

$xt_file = '/proc/net/xt_recent/whitelist';

$name = $ENV{'PAM_RHOST'};
if ($name =~ m/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) {
 $addr = $name;
} else {
 $ip = gethostbyname($name);
 $addr = inet_ntoa($ip);
}

open(FD, ">$xt_file");
print FD "+$addr\n";
close(FD);

exit 0

And this script is executed from the dovecot pam file as

# add session to cause ip address of successful login to be whitelisted
session optional pam_exec.so /etc/pam.d/whitelist.pl

Meaning that any IP address that gets an authenticated imap connection (which is basically everybody’s internet device, since they all connect to email) is now allowed to access the authenticated ports.  Since imap requires re-authentication after a configurable timeout, the whitelist entry only lasts for just over that timeout and hey presto, we have our secured port system.

Obviously, this isn’t foolproof: in particular whitelisting by external IP means that anyone sharing the same ip address via nat (like at a hotel) also has access to the secured ports, but it does cut down enormously on generic internet visibility.

The final thing is to add security to the insecure web page, so anyone in the path to my internet host can’t sniff the password.  This is easily achieved by an stunnel redirect from the secure incoming port to the ZS2 over the VPN that connects to the internal network.  The beauty of this is that stunnel can now use the existing web certificate for my internet host to afford protection from man in the middle attacks as well.
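A minimal sketch of the stunnel service section (the external port, internal address and certificate paths are illustrative):

[zs2]
accept = 8443
connect = 192.168.1.10:80
cert = /etc/ssl/certs/letsencrypt.pem
key = /etc/ssl/private/mykey.key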

Last thoughts about Security

Obviously, the security above isn’t perfect.  Anyone sharing my external IP would be able to run a port scan and (if they’re clever) detect the https port the ZS2 is on.  However, it does require a lot of luck to do this and, obviously, even if they’re in the fortunate position of sharing an IP address, I’ve changed the default password, so the recent Mirai attack wouldn’t have been able to compromise the device.

Do I think this is good enough security: absolutely.  In security, the bear principle applies: in that when escaping from a ravenous bear, you don’t have to be able to run faster than the bear itself, you merely need to be able to run faster than the slowest other potential food source …  In internet terms, this means that while there are so many completely insecure devices out there, no-one can be bothered to hack a moderately secure system like mine because the customisation makes it quite a bit harder.  It’s also instructive to think that the bear principle is why Linux has such a security reputation: it’s not that we have perfect security against virus and trojan systems, it’s just that Windows was always so much worse …

Eventually, something like Mirai will look to attack the ZS2 web server itself (it is .net based, after all) rather than simply try a list of default passwords and then I’ll need to be a bit more clever, but while everyone else is so much more insecure, that day will be long delayed.

Unprivileged Build Containers

A while ago, a goal I set myself was to be able to maintain my build and test environments for architecture emulation containers without having to do any of the tasks as root and without creating any suid binaries to do this.  One of the big problems here is that distributions get annoyed (and don’t run correctly) if root doesn’t own most of the files … for instance the installers all check to see that the file got installed with the correct ownership and permissions and fail if they don’t.  Debian has an interesting mechanism, called fakeroot, to get around this using a preload library intercepting the chmod and chown system calls, but it’s getting a bit hackish to try to extend this to work permanently for an emulation container.

The correct way to do this is with user namespaces, so the rest of this post will show you how.  Before we get into how to use them, let’s begin with the theory of how user namespaces actually work.

Theory of User Namespaces

A user namespace is the single namespace that can be created by an unprivileged user.  Its job is to map a set of interior (inside the user namespace) uids, gids and projids to a set of exterior (outside the user namespace) ids.

The way this works is that the root user namespace simply has a 1:1 identity mapping of all 2^32 identifiers, meaning it fully covers the space.  However, any new user namespace only needs to remap a subset of these.  Any id that is not mapped into the user namespace becomes inaccessible to that namespace.  This doesn’t mean completely inaccessible, it just means any resource owned or accessed by an unmapped id treats an attempted access (even from root in the namespace) as though it were completely unprivileged, so if the resource is readable by any id, it can still be read even in a user namespace where its owning id is unmapped.

User namespaces can also be nested but the nested namespace can only map ids that exist in its parent, so you can only reduce but not expand the id space by nesting.  The way the nested mapping works is that it remaps through all the parent namespaces, so what appears on the resource is still the original exterior ids.

User Namespaces also come with an owner (the uid/gid of the process that created the container).  The reason for this is that this owner is allowed to execute setuid/setgid to any id mapped into the namespace, so the owning uid/gid pair is the effective “root” of the container.  Note that this setuid/setgid property works on entry to the namespace even if uid 0 is not mapped inside the namespace, but won’t survive once the first setuid/setgid is issued.

The final piece of the puzzle is that every other namespace also has an owning user namespace, so while I cannot create a mount namespace as unprivileged user jejb, I can as remapped root inside my user namespace

jejb@jarvis:~> unshare --mount
unshare: unshare failed: Operation not permitted
jejb@jarvis:~> nsenter --user=/tmp/userns
root@jarvis:~# unshare --mount
root@jarvis:~#

And once created, I can always enter this mount namespace provided I’m also in my user namespace.

Setting up Unprivileged Namespaces

Any system user can actually create a user namespace.  However, a non-root (meaning not uid zero in the parent namespace) user cannot remap any exterior id except their own.  This means that, because a build container needs a range of ids, it’s not possible to set up the initial remapped namespace without the help of root.  However, once that is done, the user can pretty much do every other operation.

The way remap ranges are set up is via the uid_map, gid_map and projid_map files sitting inside the /proc/<pid> directory.  These files may only be written to once and never updated.

As an example, to set up a build container, I need a remapping for every id that would be created during installation.  Traditionally for Linux, these are ids 0-999.  I want to remap them down to something unprivileged, say 100,000 so my line entry for this is

0 100000 1000

However, I also want an identity mapping for my own id (currently I’m at uid 1000), so I can still use my home directory from within the build container.  This also means I can create the roots for the containers within my home directory.  Finally, the nobody user and nobody,nogroup groups also need to be mapped, so the final uid map entries look like

0 100000 1000
1000 1000 1
65534 101001 1

For the groups, it’s even more complex because on openSUSE, I’m a member of the users group (gid 100) which sits in the middle of the privileged 0-999 group range, so the gid_map entry I use is

0 100000 100
100 100 1
101 100100 899
65533 101000 2

Which is almost up to the kernel imposed limit of five separate lines.

Finally, here’s how to set this up and create a binding for the user namespace.  As myself (I’m uid 1000 user name jejb) I do

jejb@jarvis:~> unshare --user
nobody@jarvis:~> echo $$
20211
nobody@jarvis:~>

Note that I become nobody inside the container because currently the map files are unwritten so there are no mapped ids at all.  Now as root, I have to write the mapping files and bind the entry file to the namespace somewhere

jarvis:/home/jejb # echo 1|awk '{print "0 100000 1000\n1000 1000 1\n65534 101001 1"}' > /proc/20211/uid_map
jarvis:/home/jejb # echo 1|awk '{print "0 100000 100\n100 100 1\n101 100100 899\n65533 101000 2"}' > /proc/20211/gid_map
jarvis:/home/jejb # touch /tmp/userns
jarvis:/home/jejb # mount --bind /proc/20211/ns/user /tmp/userns

Now I can exit my user namespace because it’s permanently bound and the next time I enter it I become root inside the container (although with uid 100000 outside)

jejb@jarvis:~> nsenter --user=/tmp/userns
root@jarvis:~# id
uid=0(root) gid=0(root) groups=0(root)
root@jarvis:~# su - jejb
jejb@jarvis:~> id
uid=1000(jejb) gid=100(users) groups=100(users)

Giving me a user namespace with sufficient mapped ids to create a build container.

Unprivileged Architecture Emulation Containers

Essentially, I can use the user namespace constructed above to bootstrap and enter the entire build container and its mount namespace, with one proviso: I have to have a pre-created devices directory, because I don’t possess the mknod capability as myself, so my container root also doesn’t possess it.  The way I get around this is to create the initial dev directory as root and then change the ownership to 100000.100000 (my unprivileged ids)

jejb@jarvis:~/containers/debian-amd64/dev> ls -l
total 0
lrwxrwxrwx 1 100000 100000 13 Feb 20 09:45 fd -> /proc/self/fd/
crw-rw-rw- 1 100000 100000 1, 7 Feb 20 09:45 full
crw-rw-rw- 1 100000 100000 1, 3 Feb 20 09:45 null
lrwxrwxrwx 1 100000 100000 8 Feb 20 09:45 ptmx -> pts/ptmx
drwxr-xr-x 2 100000 100000 6 Feb 20 09:45 pts/
crw-rw-rw- 1 100000 100000 1, 8 Feb 20 09:45 random
drwxr-xr-x 2 100000 100000 6 Feb 20 09:45 shm/
lrwxrwxrwx 1 100000 100000 15 Feb 20 09:45 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 100000 100000 15 Feb 20 09:45 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 100000 100000 15 Feb 20 09:45 stdout -> /proc/self/fd/1
crw-rw-rw- 1 100000 100000 5, 0 Feb 20 09:45 tty
crw-rw-rw- 1 100000 100000 1, 9 Feb 20 09:45 urandom
crw-rw-rw- 1 100000 100000 1, 5 Feb 20 09:45 zero

This seems to be a sufficient dev skeleton to function with.  For completeness’ sake, I placed my bound user namespace into /run/build-container/userns and, following the original architecture container emulation post, used modified susebootstrap and build-container scripts.  The net result is that as myself I can now enter, administer and update the build and test architecture emulation container with no suid required

jejb@jarvis:~> nsenter --user=/run/build-container/userns --mount=/run/build-container/ppc64
root@jarvis:/# id
uid=0(root) gid=0(root) groups=0(root)
root@jarvis:/# uname -m
ppc64
root@jarvis:/# su - jejb
jejb@jarvis:~> id
uid=1000(jejb) gid=100(users) groups=100(users)

The only final wrinkle is that root has to set up the user namespace on every boot, but that’s only because there’s no currently defined operating system way of doing this.
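As a hedged sketch of what such a boot-time script (run as root) might look like, reusing the map values from above (the pgrep dance is just one convenient way of finding the pid holding the new namespace):

#!/bin/bash
# create the user namespace as the unprivileged user and keep it alive briefly
su jejb -c 'unshare --user sleep 30' &
sleep 1
pid=$(pgrep -n -u jejb -x sleep)
# write the id mappings (root may map ids other than the user's own)
printf '0 100000 1000\n1000 1000 1\n65534 101001 1\n' > /proc/$pid/uid_map
printf '0 100000 100\n100 100 1\n101 100100 899\n65533 101000 2\n' > /proc/$pid/gid_map
# bind the namespace file somewhere permanent so it survives the process exiting
mkdir -p /run/build-container
touch /run/build-container/userns
mount --bind /proc/$pid/ns/user /run/build-container/userns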

Are GPLv2 and CDDL incompatible?

Canonical recently threw this issue into sharp relief by their decision to ship CDDL licensed ZFS as a module of the GPLv2 licensed Linux kernel.  Reading their legal justification for this leaves me somewhat unconvinced because it’s essentially the same “not a derivative work” argument that a number of dubious actors have used to justify binary modules.  So what I’d like to do is look at this issue from a completely different viewpoint.  First by accepting the premise that CDDL and GPLv2 are incompatible (since there’s some debate on this) and secondly by accepting the even more controversial proposition that creating a kernel module is a derivative work.  I don’t want to debate these premises because it’s a worst case assumption I’m using as inputs to make the following analysis possible.

What is compliance?

One of the curious things about CDDL and GPLv2 is that they’re both copyleft (albeit in differing forms) and their compliance requirements are essentially the same: the release of complete corresponding source code for your binary containing the licensed work.  In fact, the only significant difference is the requirement for build scripts, which is in GPLv2 but not in CDDL.  Therefore you can say that if you follow the compliance regime for GPLv2 on CDDL code, you’ll be in full compliance with the CDDL.  The licences do, in fact, have compatible compliance requirements.  This fact becomes very relevant when you consider the requirements for bringing a copyright lawsuit in the first place.

Where’s the Harm?

Copyright law is something called a tort in law.  That essentially means a branch of law for resolving disputes between individual parties.  However, in order to stem what could be seen as frivolous lawsuits, bringing a claim under tort law requires not just a showing that someone broke the rules of whatever agreement they were under but also that quantifiable harm resulted.  The essential elements of a tort claim are a showing of a violation, a theory of the harm produced and a request for damages based on the harm.

All of the bodies which do GPL enforcement recently published a charter in which they make clear that the sole requirement from an enforcement action should be compliance with the terms of the licence.  However, as I showed above, it is perfectly possible to be compatibly in compliance with both the CDDL and the GPLv2.  So the question becomes if the party is already in compliance, even though there’s a technical violation of the terms of the licence produced by the combination, what would our theory of harm be given that we don’t really seem to have anything extra we’d ask of the violating party?

Of course, one can wax lyrical about the “harm to the licence” of allowing incompatible combinations.  However, here we’re on a very sticky wicket because there have been a lot of works published (including by the FSF itself) bemoaning the problems of licence incompatibility.  Indeed, part of the design of the GPLv3 process was to make the licence more amenable to combination with other open source licences.  So suddenly becoming ardent advocates for the benefits of licence incompatibility is unlikely to fly before a judge as a theory of harm.

Conclusion

What the above analysis shows is that even though we presumed the combination of GPLv2 and CDDL works to be a technical violation, there’s no practical way to prosecute such a violation because we can’t develop a convincing theory of resulting harm.  Because this makes it impossible to take the case to court, effectively it must be concluded that the combination of GPLv2 and CDDL, provided you’re following a GPLv2 compliance regime for all the code, is allowable.  This conclusion is the same as the one Canonical reached, but the means by which I got there are very different.

Note that this conclusion would apply to mixing any open source licence with GPLv2: provided the compliance regimes are compatible and the stricter one is followed, it’s difficult to develop a theory of harm and thus the combination is allowable.

Final Thought

The above analysis is all from the point of view of the Linux kernel compliance activities.  However, with ZFS, there is another copyright holder: Oracle.  Nothing prevents Oracle from suing for copyright violation with a theory of harm that says something like: the CDDL was deliberately designed to be incompatible with GPLv2 to prevent ZFS being shipped in Linux and, as the shipper of products based on ZFS, they’ve suffered commercial harm (which would be quantifiable) from this action.

efitools arm 32 bit build fixed

It turned out there was a bug in gnu-efi, in the linker script.  I’ve added a local fix for this to the OBS UEFI project and filed a bug with gnu-efi on sourceforge.  The net result is that the arm 32 bit binaries are now passing most of their tests (the remaining problems may be due to faults in the UEFI environment, still investigating).  I should also have the OVMF image for arm (the ArmVirt package) building for 32 bit arm shortly.

I’ve released version 1.6.1 of efitools to make sure a new version gets rebuilt.

efitools for arm released

Thanks to using architecture emulation containers, I can now build and test efitools for arm (both 64 bit: aarch64 and 32 bit armv7l) on my x86_64 laptop.  To test the actual EFI binaries, you need the arm equivalent of OVMF which is ArmVirt.  I’ve got the build service spitting out the images (still called OVMF because of package naming requirements) for those who want to try this at home.  To run the UEFI image, you need qemu-system-<arch> which, on SUSE, is provided by the qemu-arm package.  The Linaro ARM64 site has detailed instructions, but for the impatient, the required command line (for aarch64) is

qemu-system-aarch64 \
        -m 2048 \
        -cpu cortex-a57 \
        -M virt \
        -bios /home/jejb/ovmf-aarch64.bin \
        -serial stdio \
        -drive if=none,file=fat:.,id=hd0 \
        -device virtio-blk-device,drive=hd0

This brings up the UEFI BIOS image and mounts the current directory as an additional FAT volume.  The Linaro site recommends 1024 for the memory size, but if you want to boot a kernel or do anything fancy, you’ll find 2048 is the minimum.  The same thing works for 32 bit arm (except that you want to remove the -cpu line).
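For reference, the 32 bit arm invocation would look something like this (the ovmf-arm32.bin filename is just my illustration; substitute whatever your 32 bit OVMF image is actually called):

qemu-system-arm \
        -m 2048 \
        -M virt \
        -bios /home/jejb/ovmf-arm32.bin \
        -serial stdio \
        -drive if=none,file=fat:.,id=hd0 \
        -device virtio-blk-device,drive=hd0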

To get efitools to build, I had to get the arm versions of sbsigntools up and running, which I have.  The source tree I used for this is here.  There were a few modifications necessary to build arm, but nothing drastic.  Since the Ubuntu version of the git tree appears to be dead, I’ve merged back all the ubuntu patches and will maintain this going forwards.  I’ll bump the release version to 0.8 when I’m sure everything is stable.

Status

Using this, the current status is that the aarch64 binaries are all running fine and have verified nicely.  Unfortunately, there’s still some problem with generating efi binaries for the 32 bit arm systems.  Most of the simple binaries are running OK, but the more complex ones (like KeyTool) are showing symptoms of data relocation problems which I’m still investigating.  I’ll push a new release of efitools when I can get this sorted out, although it looks like a more generic problem inside gnu-efi itself.  Edk2 builds of arm32 efi binaries seem to be working, so I’ll see if I can deduce what’s missing from gnu-efi.

Constructing Architecture Emulation Containers

Usually container related stuff goes on the $EMPLOYER blog, but this time I had a container need for my hobbies.  The problem: how to build and test efitools for arm and aarch64 while not possessing any physical hardware.  The solution is to build an architecture emulation container using qemu and mount namespaces such that when it’s entered you find yourself in your home directory but with the rest of Linux running natively (well, emulated natively via qemu) as a new architecture.  Binary emulation in Linux is nothing new: the binfmt_misc kernel module does it, and can execute anything provided you’ve told it what header to expect and how to do the execution.  Most distributions come with a qemu-linux-user package which will usually install the necessary binary emulators via qemu to run non-native binaries.  However, there’s a problem here: the installed binary emulator usually runs as /usr/bin/qemu-${arch}, so if you’re running a full operating system container, you can’t install any package that would overwrite that.  Unfortunately for me, the openSUSE Build Service package osc requires qemu-linux-user and would cause the overwrite of the emulator and the failure of the container.  The solution to this was to bind mount the required emulator into the / directory, where it wouldn’t be overwritten, and to adjust the binfmt_misc paths accordingly.

Aside about binfmt_misc

The documentation for this only properly seems to exist in the kernel Documentation directory as binfmt_misc.txt.  However, very roughly, the format is

:name:type:offset:magic:mask:interpreter:flags

name is just a handle which will appear in /proc/sys/fs/binfmt_misc; type is M for magic or E for extension (M means recognise the type by the binary header, the usual UNIX way, and E means recognise the type by the file extension, the Windows way); offset is where in the file to find the magic header to recognise; mask is the value to AND the binary with and magic is the string to match once the masking is done; interpreter is the name of the interpreter to execute and flags tells binfmt_misc how to execute the interpreter.  For qemu, the flags always need to be OC, meaning open the binary and generate credentials based on it (this can be seen as a security problem because the interpreter will execute with the same user and permissions as the binary, so you have to trust it).

If you’re on a systemd system, you can put all the above into /etc/binfmt.d/file.conf and systemd will feed it to binfmt_misc on boot.  Here’s an example of the aarch64 emulation file I use.
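For illustration (this isn’t necessarily the exact file I use), a qemu aarch64 registration in something like /etc/binfmt.d/qemu-aarch64.conf would look roughly as follows; the magic and mask are the standard aarch64 ELF header values, and the interpreter path assumes you’ve bind mounted the emulator to /qemu-aarch64 as discussed above:

:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/qemu-aarch64:OC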

Bootstrapping

To bring up a minimal environment that’s fully native, you need to bootstrap it by installing just enough binaries using your native system before you can enter the container.  At a minimum, this is enough shared libraries and binaries to run the shell.  If you’re on a debian system, you probably already know how to use debootstrap to do this, but if you’re on openSUSE, like me, this is a much harder proposition because persuading zypper to install non-native binaries isn’t easy.  The first thing you need to know is that you need to install an architecture override for libzypp in the file pointed to by the ZYPP_CONF environment variable (a sketch of such an override appears below).  Here’s an example of a susebootstrap shell script that will install enough of the architecture to run zypper (so you can then install all the packages you actually need).  Just run it as follows (note: you must have the qemu-<arch> binary emulation set up, because the installer will try to run pre and post install scripts which, if they’re binary, will fail unless the emulation is working):

susebootstrap --arch <arch> <location>

And the bootstrap image will be built at <location> (I usually choose somewhere in my home directory, but you can use /var/tmp or anywhere else in your filesystem tree).  Note this script must be run as root because zypper can’t change ownership of files otherwise.  Now you are ready to start the architecture emulation container with <location> as the root.
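For concreteness, the libzypp override is nothing more exotic than an arch setting in a private zypp.conf pointed to by ZYPP_CONF.  A sketch (the file path, repository placeholder and package list are illustrative, not lifted from the actual susebootstrap script):

# hypothetical override file, e.g. /tmp/zypp-armv7l.conf
[main]
arch = armv7l

# then point libzypp at it and install into the bootstrap root
export ZYPP_CONF=/tmp/zypp-armv7l.conf
zypper --root <location> ar <ports-repo-url> ports
zypper --root <location> install zypper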

Building an Architecture Emulation Container

All you really need now is a mount namespace with <location> as the real root and all the necessary Linux filesystems like /sys and /proc mounted.  Additionally, you usually want /home and I also mount /var/tmp so there’s a standard location for all my obs build directories.  Building a mount namespace is easy: simply unshare --mount and then bind mount everything you need.  Finally you use pivot_root to swap the new and old roots and umount -l the old root (-l is necessary because the mount point is in use outside the mount namespace as your real root, so you just need it detached; you don’t have to wait until no-one is using it).
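A minimal sketch of that sequence follows (my build-container script does rather more, and the paths here are illustrative):

# run as root; <location> is the bootstrap root created above
unshare --mount                          # gives you a shell in a new mount namespace
mount --make-rprivate /                  # stop mount events propagating (see the systemd aside below)
mount --bind <location> <location>       # pivot_root wants the new root to be a mount point
mount --bind /home <location>/home
mount --bind /var/tmp <location>/var/tmp
mount -t proc proc <location>/proc
mount -t sysfs sys <location>/sys
touch <location>/qemu-arm                # bind mount target for the emulator must exist
mount --bind /usr/bin/qemu-arm <location>/qemu-arm   # path should match the binfmt_misc registration
cd <location>
mkdir -p old-root
pivot_root . old-root
umount -l /old-root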

All of this is easily scripted and I created this script to perform these actions.  As a final act, the script bind mounts the mount namespace of its process to create an entry link at /run/build-container/<arch>, which is what nsenter uses to get into the container.  This is the command line I used for the example below:

build-container --arch arm --location /home/jejb/tmp/arm

Now entering the build container is easy (you still have to enter the namespace as root, but you can exec su - <user> to become whatever your non-root user is).

jejb@jarvis:~> sudo -s
jarvis:/home/jejb # uname -m
x86_64
jarvis:/home/jejb # nsenter --mount=/run/build-container/arm
jarvis:/ # uname -m
armv7l
jarvis:/ # exec su - jejb
jejb@jarvis:~> uname -m
armv7l
jejb@jarvis:~> pwd
/home/jejb

And there you are, all ready to build binaries and run them on an armv7 system.

Aside about systemd and Shared Subtrees

On a normal linux system you wouldn’t need to worry about any of this, but if you’re running systemd, you do, because systemd has some properties that are very inimical to mount namespaces which you need to be aware of.

In Linux, a bind mount creates a subtree.  Because you can bind mount from practically anywhere to practically anywhere, you can have many such subtrees that are substantially related.  The default way to create a subtree is “private”, meaning that even if two subtrees are effectively the same set of files, a mount operation on one isn’t seen by any of the others.  This is great, because it’s precisely what you want for containers.  However, if a subtree is set to shared (with the mount --make-shared command) then all mount and unmount operations are propagated to every shared copy.  The reason this matters for systemd is that systemd, at start of day, sets every mount point in the system to shared.  Unless you re-privatise the bind mounts as you create the architecture emulation container, you’ll notice some very weird effects.  Firstly, pivot_root won’t pivot to a shared subtree, so that call will fail; but secondly, when you umount -l /old-root the unmounts will propagate to the real root and unmount everything (like your root /proc, /dev and /sys), effectively rendering your system unusable.  The mount --make-rprivate /old-root recursively descends /old-root and sets all its mounts to private, so the umount -l simply detaches /old-root instead of propagating all the umount events.
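If you want to check whether this is what’s biting you, findmnt can show the propagation state of every mount; on a stock systemd system you’ll see shared everywhere, and the fix is to re-privatise inside the namespace before pivoting:

findmnt -o TARGET,PROPAGATION            # a stock systemd system reports shared on everything
mount --make-rprivate /                  # inside the new mount namespace, before pivot_root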