
A Modest Proposal on the DCO

In this post, I discussed why corporations are having trouble regarding the DCO as sufficient for contributions to projects using licences which require patent grants.  The fear is that rogue corporations could legitimately claim that under the DCO they were authorizing their developers as agents for copyrights but not for patents.  Rather than argue about the legality of this trick, I think it will be much more productive to move the environment forward to a place where it simply won’t work.  The key to doing this is to change the expectations of the corporate players to the point where they expect that a corporate signoff under the DCO gives agency for both patents and copyrights; once this happens for most of them (the good actors), the usual estoppel rules would make it apply to all.

The fact is that even though corporate lawyers fear that agency might not exist for patent grants via DCO signoffs in contributions, all legitimate corporate entities who make bona fide code contributions wish to effect this anyway; that’s why they go to the additional lengths of setting up Contributor Licence Agreements and signing them.  The corollary here is that really only a bad actor in the ecosystem wishes to perpetuate the myth that patents aren’t handled by the DCO.  So if all good actors want the system to work correctly anyway, how do we make it so?

The lever that will help to make this move is a simple pledge, which can be published on a corporate website, allowing corporations that expect to make legitimate contributions under patent-binding licences via the DCO to do so properly without needing any additional Contributor Licence Agreements.  Essentially it would be an explicit statement that when their developers submit code to a project under the DCO using a corporate signoff, they’re acting as agents for the necessary patent and copyright grants, meaning you can always trust a DCO signoff from that corporation.  When enough corporations do this, it becomes standard practice and expectations on the DCO move to the point we originally assumed they were at.  So here’s the proposal for what such a statement would look like.


 

Corporate Contribution Pledge

Preamble

It is our expectation that any DCO signoff from a corporate email address binds that corporation to grant all necessary copyright and, where required, patent rights to satisfy the terms of the licence.  Accordingly, we are publishing this pledge to illustrate how, as a matter of best practice, we implement this expectation.

For the purposes of this pledge, our corporate email domain is @bigcorp.com and its subdomains.

Limitations

  1. This pledge only applies to projects which use an OSI approved Open Source licence and which also use a Developer Certificate of Origin (DCO).
  2. No authority is given under this pledge to sign contribution agreements on behalf of the company or otherwise bind it except by contributing code under an OSI approved licence and DCO process.
  3. No authority is given under this pledge if a developer, who may be our employee, posts patches from an email address which is not in our corporate email domain above.
  4. No trademarks of this corporation may ever be bound under this pledge.
  5. Except as stated below, no other warranty, express or implied, is made on behalf of the contribution, including, but not limited to, fitness of the code for a specific purpose or merchantability.  The entire risk of the quality and performance of this contribution rests with the recipient.

Warranties

  1. Our corporation trains its Open Source contributors carefully to understand when they may and may not post patches from our corporate email domain and to obtain all necessary internal clearances according to our processes before making such a posting.
  2. When one of our developers posts a patch to a project under an OSI approved licence with a DCO Signed-off-by: from our corporate email domain, we authorise that developer to act as our agent in making the minimum set of patent and copyright grants required to satisfy the terms of the OSI approved licence for the contribution.

The DCO, Patents and OpenStack

The Developer Certificate of Origin, originally adopted by the Linux Kernel in 2005, has since seen widespread use within a large variety of Open Source projects.  The DCO is designed to replace a Contributor Licence Agreement with a simple Signed-off-by attestation which is placed into the commit message of the source repository itself, meaning that all the necessary DCO attestations are automatically available to anyone who downloads the source code repository.  It also allows (again, through the use of a strong source control system) the identification of who changed any given line of code within the source tree, along with all their DCO signoffs.
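For illustration, this is roughly what the mechanics look like in git (the file path and identity here are made up; the -s flag and the trailer it adds are standard git behaviour):

# git appends the Signed-off-by: trailer using user.name and user.email
# from your git configuration
git commit -s -m "scsi: fix queue depth handling"

# which records something like this in the commit message:
#   Signed-off-by: Jane Developer <jane.developer@bigcorp.com>

# because the attestation lives in the repository itself, anyone can audit
# who touched a line and under which signoffs
git blame -L 120,130 drivers/scsi/somedriver.c
git log --no-merges -- drivers/scsi/somedriver.c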

The legal basis of the DCO is that it is an attestation by an individual developer that they have sufficient rights in the contribution to submit it under the project (or file) licence.

The DCO and Corporate Contributions

In certain jurisdictions, particularly the United States of America, when you work as a software developer for a corporation, it actually owns, exclusively, the copyright of any source code you produce under something called the Work for Hire doctrine.  So the question naturally arises: if the developer who makes the Signed-off-by attestation doesn’t actually own any rights in the code, how is that attestation valid, and how does the rights-owning entity (the corporation) actually license the code correctly to make the contribution?

The answer to that question resides in something called the theory of agency.  Agency is the mechanism by which individuals give effect to actions of a corporation.  For example, being a nebulous entity with no actual arms or legs, a corporation cannot itself sign any documents.  Thus, when a salesman signs a contract to supply widgets on behalf of a corporation, he is acting as the agent of that corporation.  His signature on the sales contract becomes binding on the corporation as if the corporation itself had made it.  However, there’s a problem here: how does the person who paid for and is expecting the delivery of widgets know that the salesperson is actually authorised to be an agent of the corporation?  The answer here is again in the theory of agency: as long as the person receiving the widgets had reasonable cause to think that the salesperson signing the contract was acting as an agent of the corporation, the contract binds the corporation.  Usually all that’s required is that the company gave the salesperson a business card and a title which would make someone think they were authorised to sign contracts (such as “Sales Manager”).

Thus, the same thing applies to developers submitting patches on behalf of a corporation.  They become agents of that corporation when making DCO attestations and thus, even if the contribution is a work for hire and the copyright owned by the corporation, the DCO attestation the developer makes is still binding on the corporation.

Email addresses matter

Under the theory of agency, it’s not sufficient to state “I am an agent”; there must be some sign on behalf of the corporation that they’re granting agency (in the case of the salesperson, it was a business card and checkable title).  For developers making contributions with a Signed-off-by, the best indication of agency is to do the signoff using a corporate email address.  For this reason, the Linux kernel has adopted the habit of not accepting controversial patches without a corporate signoff.

Patents and the DCO

The Linux Kernel uses GPLv2 as its licence.  GPLv2 is solely a copyright licence and has nothing really to say about patents, except that if you assert a patent against the project, you lose your right to distribute under GPLv2.  This is what is termed an implied patent licence, but it means that the DCO signoff for GPLv2 only concerns copyrights.  However, there are many open source licences (like Apache-2 or GPLv3) which require explicit patent grants as well as copyright ones, so can the DCO give all the necessary rights, including patent ones, to make the contribution on behalf of the corporation?  The common sense answer, which is that if the developer is accepted as an agent for copyright they should also be an agent for patent grants, isn’t as universally accepted as you might think.

The OpenStack problem

OpenStack has been trying for years to drop its complex contributor licence infrastructure in favour of a simple DCO attestation.  Most recently, the Technical Committee made that request of the board in 2014 and it was finally granted in a limited fashion in November 2015.  The recommendation of the OpenStack counsel was accepted and the DCO was adopted for individuals only, keeping the contributor licence agreements for corporations.  The given reason for this is that the corporate members of OpenStack want more assurance that corporations are correctly granting their patents in their contributions than they believe the DCO gives (conversely, individuals aren’t expected to have any patents, so, for them, the DCO applies just fine since it’s effectively only a copyright attestation they’re giving).

Why are Patents such an Issue?

Or why do lots of people think developers aren’t agents for patents in contributions, unlike for copyrights?  The essential argument (as shown here) is that corporations, as a matter of practice, do not allow developers (or anyone else except specific title holders) to be agents for patent transactions, and thus there should not be an expectation, even when they make a DCO attestation using a corporate email signoff, that they are.

One way to look at this is that corporations have no choice but to make developers agents for the copyright because, without that, the DCO attestation is false since the developers themselves have no rights to a work-for-hire piece of code.  However, some corporations think they can get away with not making developers agents for patents because the contribution and the licence do not require this to happen.  The theory here is that the developer is making an agency grant for the copyright, but an individual grant of the patents (and, since developers don’t usually own patents, that’s no grant at all).  Effectively this is a get-out-of-jail-free card for corporations to cheat on the patent requirements of the licence.

Does this interpretation really hold water?  Well, I don’t think so, because it’s deceptive.  It’s deliberately trying to evade the responsibilities for patents that the licences require.  Usually, under the theory of agency, deceptive practices are barred.  However, the fear that a court might be induced to accept this viewpoint is sufficient to get the OpenStack board to require that corporations sign a CLA to ensure that patents are well and truly bound.  The problem with this viewpoint is that, if it becomes common enough, it ends up being de facto what the legal situation actually is (because the question courts most often ask in cases about agency is what the average person would think, so a practice that becomes standard in the industry ipso facto becomes what the average reasonable person would think).  Effectively, therefore, the very act of OpenStack acting on its fear causes the thing it fears eventually to become true.  The only way to reverse this now is to change the current state of how the industry views patents and the DCO … and that will be the subject of another post.

Respect and the Linux Kernel Mailing Lists

I recently noticed that Sarah Sharp resigned publicly from the kernel, giving a failure to impose a mandatory code of conduct as the reason and citing interaction problems, mainly on the mailing lists.  The net result of this posting, as all these comments demonstrate, is to imply directly that nothing has ever changed.  This implication is incredibly annoying: firstly because it is actually untrue, secondly because it does more to discourage participation than the behaviour being complained about, and finally because it totally disrespects and ignores the efforts of hundreds of people who, over the last decade or so, have been striving to improve all interactions around Linux … a rather nice irony given that “respect” is listed as one of the issues for the resignation.  I’d just like to remind everyone of the history of these efforts and what the record shows they’ve achieved.

The issue of respect on the mailing lists goes way back to the beginnings of Linux itself, but after the foundation of the OSDL (precursor to the Linux Foundation) Technical Advisory Board (TAB), one of the first issues raised by OSDL member companies was the imbalance between Asian and European/American contributions to the kernel.  The problems were partly to do with management culture and partly because the lack of respect on the various mailing lists ran directly counter to the culture of respect in a lot of Asian countries and disproportionately discouraged contributions from that region.  The TAB largely works behind the scenes, but some aspects of the effort filtered into the public domain, as can be seen with a session on developer relations at the 2007 kernel summit (and, in fact, at a lot of other kernel summits since then).  Progress was gradual, and influenced by a large number of people, but the climate did improve.  I have to confess that I don’t follow LKML (not because of the flame war issues, simply because it’s too much of a firehose); however, the lists I do participate in (linux-scsi, linux-ide, linux-mm, linux-fsdevel, linux-efi, linux-arch, linux-parisc) haven’t seen any flagrantly disrespectful and personally insulting posts for several years now.  Indeed, when an individual came along who could almost have been flame bait for this, with serial efforts to get incorrect and badly thought out patches into the kernel (I won’t give cites here to avoid stigmatising individuals), they were met with a large reserve of patience and respectful and helpful advice before finally being banned from the lists for being incorrigible … no insults or flames at all.

Although I’d love to take credit for some of this, I’ve got to say that I think the biggest influence towards civility is actually the “professionalisation” of Linux: employers pay people to work on Linux, but the statements of those people become identified with their employers (no matter how many disclaimers they have) … in many ways, Open Source engineers are the new corporate spokespeople.  All employers bear this in mind when they hire and they certainly look over the mailing lists to see how people behave.  The net result is really that the only people who can afford to be rude or abusive are those who don’t think they have much chance of a long term career in Linux.

So, by and large, I’m proud of the achievements we’ve made in civility and the way we have improved over the years.  Are we perfect?  By no means (but then perfection in such a large community isn’t a realistic goal).  However, we have passed our stress test: an individual submitting bad patches to several mailing lists was met with courtesy and helpful advice, in spite of serially repeating the behaviour.

In conclusion, I’d just like to note that even the thread that gave rise to Sarah’s desire to pursue a code of conduct is now over two years old and try as they might, no-one’s managed to come up with a more recent example and no-one has actually invoked the voluntary code of conflict, which was the compromise for not having a mandatory code of conduct.  If it were me, I’d actually take that as a sign of success …

DNSSEC, DANE and the failure of X.509

As a few people have noticed, I’m a bit of an internet control freak: In an age of central “cloud based” services, I run pretty much my own everything (blog, mail server, DNS, OpenID, web page etc.).  That doesn’t make me anti-cloud; I just believe in federation instead of centralisation.  In particular, I believe in owning my own content and obeying my own rules rather than those of $BIGCLOUDPROVIDER.

In the modern world, this is perfectly possible: I rent a co-lo site and I have a DNS delegation so I can run and tune my own services how I wish.  I need a secure web server for a few things (OpenID, an email portal, secure log in for my blog etc.) but I’ve always used a self-signed certificate.  Why?  Well, having to buy one from a self-appointed X.509 root of trust always really annoyed me.  Firstly because they do very little for the money; secondly because it means effectively giving my security to some self-appointed entity; and thirdly, as all the compromises and misuse attest, the X.509 root of trust model is fundamentally broken.

In the ordinary course of events, none of this would affect me.  However, recently curl, which is used as the basis of most OpenID implementations, took to verifying X.509 certificate chains, meaning that OpenID simply stopped working for ID providers with self-signed certificates, and at a stroke I was locked out of quite a few internet sites.  The only solution is to give up on OpenID or swallow my pride and get a chained X.509 certificate.  Fortunately startssl will issue free certificates and the Linux Foundation is also getting into the game, so the first objection is overcome, but not the other two.

So, what’s the answer?  As a supporter of cloud federation, I really like the monkeysphere approach, which links ssl certificate verification directly to the user’s personal pgp web of trust.  Unfortunately, that also means that monkeysphere suffers from all the usual web of trust problems, the biggest being that it’s pretty much inaccessible to non-techies who just don’t understand why they should invest time in building up their own trust contacts.  That’s not to say that the web of trust can’t be made accessible in a simple fashion to everyone, and indeed google is working on a project along these lines; however, the reality is that today it isn’t.

Enter DANE.  At its most basic, DANE is a protocol that links certificate verification to the DNS.  Because anyone who owns a domain must have a DNS entry somewhere and the ability to modify it, they can directly link their certificate verification to this ability.  To my mind, this represents a nice compromise between making the system simple for end users and the full federation of the web of trust.  The implementation of DANE relies on DNSSEC (which is a royal pain to set up and get right, but I’ll do another blog post about that).  This means that effectively DANE has the same operational model as X.509, because DNSSEC is a hierarchically rooted trust model.  It also means that the delegation record to your domain is managed by your registrar and could be compromised if your registrar is.  However, as long as you trust the DNSSEC root and your registrar, the ability to generate trusted certificates for your domain is delegated to you.  So how is this different from X.509?  Surely abusive registrars could cause similar problems as abusive or negligent X.509 roots?  That’s true, but an abusive registrar can only affect their own domain and delegates; they can’t compromise everyone else (unlike X.509).  To give an example of recent origin: the Chinese registrar could falsify the .cn domain, but wouldn’t be able to issue false certificates for the .com one.  The other reason for hope is that DNSSEC is at the root of the scheme to protect the DNS infrastructure itself from attack.  This makes the functioning and administration of DNSSEC a critical task for ICANN itself, so it’s a fair bet that any abuse by a registrar won’t just result in a light slap on the wrist and a vague threat to delist their certificate in some browsers, but will have ICANN threatening to revoke their accreditation and, with it, their entire business model as a domain registrar.

In many ways, the foregoing directly links the business model of the registrars to making DNSSEC work correctly for you.  In theory, the same is true of the X.509 CA roots of trust, of course, but there’s no one sitting at the top making sure they behave (and the fabric of the internet isn’t dependent on securing this behaviour, even if there were).

Details of DANE

In spite of the statements above, DANE is designed to complement X.509 as well as replace it.  DANE has four separate certificate verification styles, two of which integrate with X.509 and solve specific threats in its model (the actual solution is called pinning, a way of protecting yourself from the proliferation of X.509 CAs, all of whom could issue certificates for your site):

  • Mode 0 – X.509 CA pinning: The TLSA record identifies the exact CA the TLS Certificate must chain to.  This certificate must also be a trusted root in the X.509 certificate database.
  • Mode 1 – Certificate Constraint: The TLSA record identifies the site certificate, and that certificate must also pass X.509 validation.
  • Mode 2 – Trust Anchor Assertion: The TLSA record specifies the certificate to which the TLS certificate must chain under standard X.509 validation, but this certificate is not expected to be an X.509 root of trust.
  • Mode 3 – Domain Issued Certificates: The TLSA record specifies exactly the TLS certificate which the service must use.  This allows domains securely to specify verifiable self-signed certificates.

Mode 3 is most commonly used to specify an exact certificate outside of the X.509 chain.  Mode 2 can be useful, but the site must have access to an external certificate store (using the DNS CERT records) or the TLSA record must specify the full certificate for it to work.
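As a concrete sketch of a Mode 3 deployment for an HTTPS service (the domain name and filenames are illustrative), all that’s needed is the SHA-256 of the server certificate published in a TLSA record at the port- and protocol-qualified name:

# hash the DER form of the server certificate
openssl x509 -in server.pem -outform DER | openssl dgst -sha256

# publish it in the (DNSSEC signed) zone: usage 3 (domain issued certificate),
# selector 0 (full certificate), matching type 1 (SHA-256)
_443._tcp.www.example.com. IN TLSA 3 0 1 <sha256 hex from above>

# a DANE aware client can then fetch and validate the record
dig +dnssec _443._tcp.www.example.com TLSA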

Who Supports DANE?

This is the big problem: for certificates distributed via DANE to be useful, there must be support for them in browsers.  For Mozilla, there is the DANE validator extension but, in spite of several attempts, nothing has actually been built into the browser certificate verifier itself.  The most complete patch set is from the DNSSEC people, and there’s also a Mozilla inspired one about how they will add it one day, but right at the moment it isn’t a priority.  The Chromium browser has had a similar attitude.  The conspiracy theorists are quick to point out that this is because the browser companies derive considerable revenue from the CA system, which is in itself a multi-billion dollar industry, and thus there’s active lobbying against anything that would dilute the power, and hence perceived value, of the CA roots.  There is some evidence for this position in that Google recognises that certificate pinning, which DANE supports, can protect against recent fake google certificate attacks, but, while supporting DNSSEC (at least for validation; the google DNS doesn’t secure itself via DNSSEC), they steadfastly ignore DANE certificate pinning and instead have a private arrangement with Mozilla.

I learned long ago never to ascribe to malice (or conspiracy) what can be easily explained by incompetence (or political problems).  In this case, the world was split long ago into using openssl for security (in spite of the problematic licence) or using nss (the Netscape Security Services).  Mozilla, of course, uses the latter, but every implementation of DANE for mozilla (including the patches in the bugzilla) uses openssl.  I actually have an experimental build of mozilla with DANE, but incorporating the two separate SSL systems is a real pain.  I think it’s safe to say that until someone comes up with an nss based DANE verifier, the DANE patches for mozilla aren’t yet up to the starting blocks, and thus conspiracy allegations are somewhat premature.  Unfortunately, the same explanation applies to chromium: for better or worse, it’s currently using nss for certificate validation as well.

Getting your old Sync Server to work with New Firefox

Much has been written about Mozilla trying to force people to use their new sync service.  If, like me, you run your own sync server for Firefox, you’ve mostly been ignoring this because there’s still no real way of running your own sync server for the new service (and if you simply keep upgrading, Firefox keeps working with your old server).

However, recently I had cause to want to connect my old sync server to a new installation of firefox without just copying over all the config files (one of the config settings broke google docs and I couldn’t figure out which one it was, so I figured I’d just blow the entire config away and restore from sync).  Long ago Mozilla disabled the ability to connect newer Firefoxes to an old sync server, so this is an exposé of how to do it.  I did actually search the internet for this one, but no-one else seems to have figured it out (or if they have, they’re not known to the search engines).

There are two config files you need to update to get new Firefox to connect to sync (note, I did this with Firefox 37; I’ve not tested it with a different version, but I’m pretty sure it will work).  The first is that you need to put your sync key and weave user login into logins.json.  Since the password and user are encrypted in this file, the easiest way is to use a password manager extension, like the Saved Password Editor add-on.  Then you need two new password entries of type “Annotated” under the host chrome://weave.  For each, your username is your weave username.  For the first, you’re going to add your weave password under the annotation “Mozilla Services Password”.  For the second, add the Firefox sync key with all the dashes removed as the password under the annotation “Mozilla Services Encryption Passphrase”.  If you’ve got all this right, password manager will show this (my username is jejb):

[screenshot: the two chrome://weave entries in the password manager]

Next you’re going to close Firefox and manually edit the prefs.js file.  To sync completely from scratch, this just needs three entries, so first strip out every preference that begins with ‘services.sync.’ and then add three new lines:

user_pref("services.sync.account", "<my account>");
user_pref("services.sync.serverURL", "<my weave URL>");
user_pref("services.sync.username", "<my weave user name>");

For most people, the account and weave user name are the same.  Now start Firefox and it should just sync on its own.  To check that you got this right, go to the Sync tab of preferences and you should see something like this:

[screenshot: the Sync tab of preferences showing the connected account]

And that’s it.  You’re all done.

Squirrelmail and imaps

Somewhere along the way squirrelmail stopped working with my dovecot imap server, which runs only on the secure port (imaps).  I only ever use webmail as a last resort, so the problem may be left over from years ago.  The problem is that I’m getting a connect failure but an error code of zero and no error message.  This is what it actually shows

Error connecting to IMAP server "localhost:993".Server error: (0)

Which is very helpful.  Everything else works with imaps on this system, so why not squirrelmail?

The answer, it seems, is buried deep inside php.  Long ago, when php first started using openssl, it pretty much did no peer verification.  Nowadays it does.  I know I ran into this a long time ago, so the self-signed certificate my version of dovecot is using is present in the /etc/ssl/certs directory where php looks for authoritative certificates.  Digging into the sources of squirrelmail, it turns out this php statement (with the variables substituted) is the failing one:

$imap_stream = @fsockopen('tls://localhost', 993, $errno, $errstr, 15);

It’s failing because $imap_stream is empty, but, as squirrelmail claims, it’s actually failing with a zero error code.  After several hours of casting about with the fairly useless php documentation, it turns out that php has an interactive mode where it will actually give you all the errors.  Executing this:

echo 'fsockopen("tls://localhost",993,$errno,$errmsg,15);'|php -a

finally tells me what’s wrong:

Interactive mode enabled

PHP Warning: fsockopen(): Peer certificate CN=`bedivere.hansenpartnership.com' did not match expected CN=`localhost' in php shell code on line 1
PHP Warning: fsockopen(): Failed to enable crypto in php shell code on line 1
PHP Warning: fsockopen(): unable to connect to tls://localhost:993 (Unknown error) in php shell code on line 1

So that’s it: php has tightened up the certificate verification not only to validate the certificate itself, but also to check that the CN matches the requested service.  In this case, because I’m connecting over the loopback device (localhost) instead of over the internet to the DNS name, that CN check has failed and led to the results I’m seeing.  Simply fixing squirrelmail to connect to imaps over the fully qualified hostname instead of localhost gets everything working again.
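As a quick check that the fix is right, the same interactive trick can be pointed at the name the certificate actually carries instead of localhost (var_dump just shows the stream resource on success):

echo 'var_dump(fsockopen("tls://bedivere.hansenpartnership.com",993,$errno,$errmsg,15));'|php -a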

Anatomy of the UEFI Boot Sequence on the Intel Galileo

The Basics

UEFI boot officially has three phases (SEC, PEI and DXE).  However, the DXE phase is divided into DXEBoot and DXERuntime (the former is eliminated after the call to ExitBootServices()).  The jobs of each phase are:

  1. SEC (SECurity phase). This contains all the CPU initialisation code from the cold boot entry point on.  Its job is to set the system up far enough to find, validate, install and run the PEI.
  2. PEI (Pre-Efi Initialization phase).  This configures the entire platform and then loads and boots the DXE.
  3. DXE (Driver eXecution Environment).  This is where the UEFI system loads drivers for configured devices, if necessary; mounts drives and finds and executes the boot code.  After control is transferred to the boot OS, the DXERuntime stays resident to handle any OS to UEFI calls.

How it works on Quark

This all sounds very simple (and very like the way an OS like Linux boots up).  However, there’s a very crucial difference: the platform really is completely unconfigured when SEC begins.  In particular, it won’t have any main memory, so you begin in a read-only environment until you can configure some memory.  C code can’t begin executing until you’ve at least found enough writable memory for a stack, so the SEC begins in hand-crafted assembly until it can set up a stack.

On all x86 processors (including the Quark), power on begins execution in 16 bit mode at the ResetVector (0xfffffff0).  As a helping hand, the default power-on bus routing maps the top 128KB of the address space to the top of the SPI flash (read only, of course) via a PCI routing in the Legacy Bridge, meaning that the reset vector executes directly from the SPI flash (this is actually very slow: SPI means Serial Peripheral Interface, so every byte of SPI flash has to be read serially into the instruction cache before it can be executed).

The hand-crafted assembly clears the cache, transitions to flat 32 bit execution mode and sets up the necessary x86 descriptor tables.  It turns out that memory configuration on the Quark SoC is fairly involved and complex so, in order to spare the programmer from having to do this all in assembly, there’s a small (512kB) static RAM chip that can be easily configured.  The last assembly job of the SEC is therefore to configure the eSRAM (to a fixed address at 2GB), set the top as the stack, load the PEI into the base (by reconfiguring the SPI flash mapping to map the entire 8MB flash to the top of memory and then copying the firmware volume containing the PEI) and begin executing.

QuarkPlatform Build Oddities

Usually the PEI code is located by the standard Flash Volume code of UEFI and the build time PCDs (Platform Configuration Database entries) which use the values in the Flash Definition File to build the firmware.  However, the current Quark Platform package has a different style because it rips apart and rebuilds the flash volumes, so instead of using PCDs, it uses something it calls Master Flash Headers (MFHs) which are home grown for Quark.  These are a fixed area of the flash that can be read as a database giving the new volume layout (essentially duplicating what the PCDs would normally have done).  Additionally the Quark adds a non-standard signature header occupying 1k to each flash volume which serves two purposes: For the SECURE_LD case, it actually validates the volume, but for the three items in the firmware that don’t have flash headers (the kernel, the initrd and the grub config) it serves to give the lengths of each.

Laying out Flash Rom

This is a really big deal for most embedded systems because the amount of flash available is really limited.  The Galileo board is nice because it supplies 8MB of flash … which is huge in embedded terms.  All flash is divided into Flash Volumes1.  If you look at OVMF, for instance, it builds its flash as four volumes: three for the SEC, PEI and DXE phases and one for the EFI variables.  In EdkII, flash files are built by the flash definition file (the one with a .fdf ending).  Usually some part of the flash is compressed and has to be inflated into memory (in OVMF this is PEI and DXE) and some is designed to be execute in place (usually SEC).  If you look at the Galileo layout, you see that it has a big SEC phase section (called BOOTROM_OVERRIDE) designed for the top 128KB of the flash, the usual variable area and then five additional sections: two for PEI and DXE and three recovery ones (and, of course, an additional payload section for the OS that boots from flash).

Embedded Recovery Sections

For embedded devices (and even normal computers) recovery in the face of flash failure (whether from component issues or misupdate of the flash) is really important, so the Galileo follows a two-stage fallback process.  The first stage is to detect a critical error, signalled by the platform sticky bit or recovery strap in the SEC, and boot up to the fixed phase recovery, which tries to locate a recovery capsule on the USB media2.  The other recovery is a simple copy of the PEI image for fallback in case the primary PEI image fails (by now you’ll have realised there are three separate but pretty much identical copies of PEI in the flash rom).  One of the first fixes that can be made to the Quark build is to consolidate all of these into a single build description.

Putting it all together: Implementing a compressed PEI Phase

One of the first things I discovered when trying to update the UEFI version to something more modern is that the size of the PEI phase overflows the allowed size of the firmware volume.  This means either redoing the flash layout or compressing the PEI image.  I chose the latter and this is the story of how it went.

The first problem is that debug prints don’t work in the SEC phase, which is where the changes are going to have to be.  This is a big stumbling block because without debugging, you never know where anything went wrong.  It turns out that UEFI nicely supports this via a special DebugLib that outputs to the serial console, but that the Galileo firmware build has this disabled by this line:

[LibraryClasses.IA32.SEC]
...
 DebugLib|MdePkg/Library/BaseDebugLibNull/BaseDebugLibNull.inf

The BaseDebugLibNull does pretty much what you expect: throws away all Debug messages.  When this is changed to something that outputs messages, the size of the PEI image explodes again, mainly because Stage1 has all the SEC phase code in it.  The fix here is only to enable debugging inside the QuarkResetVector SEC phase code.  You can do this in the .dsc file with

 QuarkPlatformPkg/Cpu/Sec/ResetVector/QuarkResetVector.inf {
   <LibraryClasses>
     DebugLib|MdePkg/Library/BaseDebugLibSerialPort/BaseDebugLibSerialPort.inf
 }

And now debugging works in the SEC phase!

It turns out that a compressed PEI is possible but somewhat more involved than I imagined, so that will be the subject of the next blog post.  For now I’ll round out with other oddities I discovered along the way.

Quark Platform SEC and PEI Oddities

On the current Quark build, the SEC phase is designed to be installed into the bootrom from 0xfffe 0000 to 0xffff ffff.  This contains a special copy of the reset vector (in theory it contains the PEI key validation for SECURE_LD, but in practice the verifiers are hard coded to return success).  The next oddity is that the stage1 image, which should really just be the PEI core, actually contains another boot-from-scratch SEC phase, except this one is based on the standard IA32 reset vector code plus a magic QuarkSecLib and then the PEI code.  This causes the stage1 bring up to be different as well because, usually, the SEC code locates the PEI core in stage1 and loads, relocates and executes it starting from the entry point PeiCore().  However, Quark doesn’t do this at all.  It relies on the Firmware Volume generator populating the first ZeroVector (an area occupying the first 16 bytes of the Firmware Volume Header) with a magic entry (located in the ResetVector via the magic string ‘SPI Entry Point ’ with the trailing space).  The SEC code indirects through the ZeroVector to this code and effectively re-initialises the stack and begins executing the new SEC code, which then locates the internal copy of the PEI core and jumps to it.

Secure Boot on the Intel Galileo

The first thing to do is set up a build environment.  The board support package that Intel supplies comes with a vast set of instructions and a three-stage build process: it uses the standard edk2 build to create firmware volumes, rips them apart again and re-lays them out using spi-flashtools to include the Arduino payload (grub, the linux kernel, initrd and grub configuration file), adds signed headers before creating a firmware flash file with no platform data, and then as a final stage adds the platform data.  This is amazingly painful once you’ve done it a few times, so I wrote my own build script with just the essentials for building a debug firmware for my board (it also turns out there’s a lot of irrelevant stuff in quarkbuild.sh which I dumped).  I’m also a git person, not really an svn one, so I redid the Quark_EDKII tree as a git tree with full edk2 and FatPkg as git submodules and a single build script (build.sh) which pulls in all the necessary components and delivers a flashable firmware file.  I’ve linked the submodules to the standard tianocore github ones.  However, please note that the edk2 git pack is pretty huge and it’s in the nature of submodules to be volatile (you end up blowing them away and re-pulling the entire repo with git submodule update a lot, especially because the build script ends up patching the submodules), so you’ll want to clone your own edk2 tree and then add the submodule as a reference.  To do this, execute

git submodule init
git submodule update --reference <wherever you put your local edk2 tree> .module/edk2

before running build.sh; it will save a lot of re-cloning of the edk2 tree. You can also do this for FatPkg, but it’s tiny by comparison and not updated by the build scripts.

A note on the tree format: the Intel-supplied Quark_EDKII is a bit of a mess.  One of the requirements for the edk2 build system is that some files have to have dos line endings and some have to have unix.  Someone edited the Quark_EDKII less than carefully and the result is a lot of files where emacs shows splatterings of ^M, which annoys me incredibly, so I’ve sanitised the files to be all dos or all unix line endings before importing.

A final note on the payload: for the purposes of UEFI testing the payload doesn’t matter, and the Arduino payload is another set of code to download, so the layout in my build just adds the EFI Shell as the payload for easy building.  The Arduino sections are commented out in the layout file, so you can build your own Arduino payload if you want (as long as you have the necessary binaries).  Also, I’m building for Galileo Kips Bay D … if you have a gen 2, you need to fix the platform-data.ini file.

An Aside about vga over serial consoles

If, like me, you’re interested in the secure boot aspect, you’re going to be spending a lot of time interacting with the VGA console over the serial line, and you want it to look the part.  The default VGA PC console is very much stuck in an 80s timewarp: the world has moved on and it hasn’t, leading to console menus that look like this

[screenshot: the stock VGA console menu]

This image is directly from the Intel build docs and actually, this is Windows, which is also behind the times1; if you fire this up from an xterm, the menu looks even worse.  To get VGA looking nice from an xterm, first you need to use a vga font (these have the line drawing characters in the upper 128 bytes), then you need to turn off UTF-8 (otherwise some of the upper 128 characters get seen as UTF-8 encodings), turn off the C1 control characters and set a keyboard mapping that sends what UEFI is expecting as F7.  This is what I end up with

xterm -fn vga -geometry 80x25 +u8 -kt sco -k8 -xrm "*backarrowKeyIsErase: false"

And don’t expect the -fn vga to work out of the box on your distro either … vga fonts went out with the ark, so you’ll probably have to install it manually.  With all of this done, the result is

[screenshot: the same menu rendered correctly in the vga-font xterm]

And all the function keys work.

Back to our regularly scheduled programming: Secure Boot

The Quark build environment comes with a SECURE_LD mode which, at first sight, does look like secure boot.  However, what it builds is a cryptographic verification system for the firmware payloads (and disables the shell for good measure).  There is also an undocumented SECURE_BOOT build define instead; unfortunately this doesn’t even build (it craps out trying to add a firmware menu: the Quark is embedded and embedded systems don’t have firmware menus).  Once this is fixed, the image will build but it won’t boot properly.  The first thing to discover is that ASSERT() fails silently on this platform.  Why?  Well, if you turn on noisy assert, which prints the file and line of the failure, you’ll find that the addition of all these strings makes the firmware volumes too big to fit into their assigned spaces … Bummer.  However, printing nothing is incredibly useless for debugging, so just add a simple print to let you know when an ASSERT() failure is hit.  It turns out there are a bunch of wrong asserts, including one put in deliberately by Intel to prevent secure boot from working because they haven’t tested it.  Once you get rid of these, it boots and mostly works.

Mostly, because there’s one problem: authenticating by enrolled binary hashes mostly fails.  Why is this?  Well, it turns out to be a weakness in the Authenticode spec which UEFI secure boot relies on.  The spec is unclear on padding and alignment requirements (among a lot of other things; after all, why write a clear spec when there’s only going to be one implementation … ?).  In the old days, when we used crcs, padding didn’t really matter because additional zeros didn’t affect the checksum, but in these days of cryptographic hashes it does matter.  The problem is that the signature block appended to the EFI binary has an eight-byte alignment.  If you add zeros at the end to achieve this, those zeros become part of the hash.  That means you have to hash IA32 binaries as if those padded zeros existed, otherwise the hash is different for signed and unsigned binaries.  X86-64 binaries seem to be mostly aligned, so this isn’t noticeable for them.  This problem is currently inherent in edk2, so it needs patching manually in the edk2 tree to get this working.

With this commit, the Quark build system finally builds a working secure boot image.  All you need to do is download the ia32 efitools (version 1.5.2 or newer — it turns out I also didn’t compute hashes correctly before that) to an SD card and you’re ready to play.
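As a sketch of what playing with it looks like (the filenames are illustrative, and this assumes you’ve already generated and enrolled a PK and KEK with efitools), you can whitelist an unsigned binary by hash rather than by signature:

# build an EFI signature list containing the Authenticode hash of the binary
hash-to-efi-sig-list HelloWorld.efi hello-hash.esl

# sign it with the KEK so it can be appended to db
sign-efi-sig-list -a -k KEK.key -c KEK.crt db hello-hash.esl hello-hash.auth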

Adventures in Embedded UEFI with Intel Galileo

At one of the Intel Technology Days conferences a while ago, Intel gave us a gift of a Galileo board, which is based on the Quark SoC, just before the general announcement.  The promise of the Quark SoC was that it would be a fully open (down to the firmware) embedded system based on UEFI.  When the board first came out, though, the UEFI code was missing (promised for later), so I put it on a shelf and forgot about it.   Recently, the UEFI Security Subteam has been considering issues that impinge on embedded architectures (mostly arm) so having an actual working embedded development board could prove useful.  This is the first part of the story of trying to turn the Galileo into an embedded reference platform for UEFI.

The first problem with getting the Galileo working is that talking to the UEFI part is done over a serial interface, with a 3.5mm jack connection.  However, a quick trip to amazon solved that one.  Equipped with the serial interface, it’s now possible to start running UEFI binaries.  Just using the default firmware (with no secure boot) I began testing the efitools binaries.  Unfortunately, they were building up the size of the secure variables (my startup.nsh script does an append write to db) and eventually the thing hit an assert failure on entering the UEFI handoff.  This led to the discovery that the recovery straps on the board didn’t work, there’s no way to clear the variable NVRAM and the only way to get control back was to use an external firmware flash tool.  So it’s off for an unexpected trip to uncharted territory unless I want the board to stay bricked.

The flash tool Intel recommends is the Dediprog SF 100.  These are a bit expensive (around US$350) and there’s no US supplier, meaning you have to order from abroad, wait ages for it to be delivered and deal with US customs on import, something I’ve done before and have no wish to repeat.  So, casting about for a better solution, I came up with the Bus Pirate.  It’s a fully open hardware component (including a google code repository for the firmware and the schematics) plus it’s only US$35 (that’s 10x cheaper than the dediprog), available from Amazon and works well with Linux (for full disclosure, Amazon was actually sold out when I bought mine, so I got it here instead).

The Bus Pirate comes as a bare circuit board (no case or cables), so you have to buy everything you might need to use it separately.  I knew I’d need a ribbon cable with SPI plugs (the Galileo has an SPI connector for the dediprog), so I ordered one with the card.  The first surprise when the card arrived was that the USB connector is actually Mini-B, not the now standard Micro connector.  I’ve not seen one of those since I had an Android G1 but, after looking in vain for my old android cable, I found Staples still has them.  The next problem is that, being open hardware, there are multiple manufacturers.  They all supply a nice multi-coloured ribbon cable, but there are two possible orientations and both are produced.  It turns out I have a sparkfun cable, which is the opposite way around from the colour values in the firmware (which is why the first attempt to talk to the chip didn’t work).  The Galileo has diode isolators so the SPI flash chip can be powered up and operated independently by the Bus Pirate; accounting for cable orientation, and disconnecting the Galileo from all other external power, this now works.  Finally, there’s a nice Linux project, flashrom, which allows you to flash all manner of chips and it has a programmer mode for the Bus Pirate.  Great, except that the default USB serial speed is 115200 and, at that baud rate, it takes about ten minutes just to read an 8MB SPI flash (flashrom will read, program and then verify, giving you about 25 mins each time you redo the firmware).  Speeding this up is easy: there’s an unapplied patch to increase the baud rate to 2Mbit and I wrote some code to flash only designated areas of the chip (note to self: send this upstream).  The result is available on the OpenSUSE build service.  The outcome is that I’m now able to build and reprogram the firmware in around a minute.
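For reference, the Bus Pirate programmer in flashrom is driven along these lines (the serial device node, SPI speed and filenames are whatever your own setup uses):

# read the existing 8MB image out as a backup first
flashrom -p buspirate_spi:dev=/dev/ttyUSB0,spispeed=1M -r galileo-backup.bin

# write the freshly built firmware (flashrom verifies after writing)
flashrom -p buspirate_spi:dev=/dev/ttyUSB0,spispeed=1M -w galileo-firmware.bin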

By now I’m two weeks, a couple of hacks to a tool I didn’t know I’d need, and around US$60 into the project, but at least I’m now an embedded programmer and have the scars to prove it.  Next up is getting Secure Boot actually working …

efitools now working for both x86 and x86_64

Some people noticed a while ago that version 1.5.1 was building for ia32, but 1.5.2 is the release that’s tested and works (1.5.1 had problems with the hash computations).  The primary reason for this work is bringing up secure boot on the Intel Quark SoC board (I’ve got the Galileo Kips Bay D but the link is to the new Gen 2 board).  There’s a corresponding release of OVMF, with a fix for a DxeImageVerification problem that meant IA32 hashes weren’t getting computed properly.  I’ll post more on the Galileo and what I’ve been doing with it in a different post (tagged for embedded, since it’s mostly a story of embedded board programming).

With these tools, you can now test out IA32 images for both UEFI and the key manipulation and signing tools.