As a few people have noticed, I’m a bit of an internet control freak: In an age of central “cloud based” services, I run pretty much my own everything (blog, mail server, DNS, OpenID, web page etc.). That doesn’t make me anti-cloud; I just believe in federation instead of centralisation. In particular, I believe in owning my own content and obeying my own rules rather than those of $BIGCLOUDPROVIDER.
In the modern world, this is perfectly possible: I rent a co-lo site and I have a DNS delegation, so I can run and tune my own services how I wish. I need a secure web server for a few things (OpenID, an email portal, secure login for my blog etc.), but I’ve always used a self-signed certificate. Why? Well, having to buy one from a self-appointed X.509 root of trust has always really annoyed me: firstly, because they do very little for the money; secondly, because it means effectively giving my security over to some self-appointed entity; and thirdly because, as all the compromises and misuse attest, the X.509 root-of-trust model is fundamentally broken.
In the ordinary course of events, none of this would affect me. However, curl, which is used as the basis of most OpenID implementations, recently took to verifying X.509 certificate chains, meaning that OpenID simply stopped working for ID providers with self-signed certificates, and at a stroke I was locked out of quite a few internet sites. The only solution is to give up on OpenID or swallow my pride and get a chained X.509 certificate. Fortunately, StartSSL will issue free certificates, and the Linux Foundation is also getting into the game, so the first objection is overcome, but not the other two.
So, what’s the answer? As a supporter of cloud federation, I really like the monkeysphere approach, which links SSL certificate verification directly to the user’s personal PGP web of trust. Unfortunately, that also means monkeysphere suffers from all the usual web-of-trust problems, the biggest being that it’s pretty much inaccessible to non-techies, who just don’t understand why they should invest time in building up their own trust contacts. That’s not to say the web of trust can’t be made accessible to everyone in a simple fashion (indeed, Google is working on a project along these lines); however, the reality today is that it isn’t.
Enter DANE. At its most basic, DANE is a protocol that links certificate verification to the DNS. Because anyone who owns a domain must have a DNS entry somewhere and the ability to modify it, they can tie their certificate verification directly to that ability. To my mind, this represents a nice compromise between making the system simple for end users and the full federation of the web of trust.

The implementation of DANE relies on DNSSEC (which is a royal pain to set up and get right, but I’ll do another blog post about that). This means that DANE effectively has the same operational model as X.509, because DNSSEC is a hierarchically rooted trust model. It also means that the delegation record to your domain is managed by your registrar and could be compromised if your registrar is. However, as long as you trust the DNSSEC root and your registrar, the ability to generate trusted certificates for your domain is delegated to you.

So how is this different from X.509? Surely abusive registrars could cause similar problems to abusive or negligent X.509 roots? That’s true, but an abusive registrar can only affect its own domain and delegates; it can’t compromise everyone else (unlike X.509). To give an example of recent origin: the Chinese registrar could falsify the .cn domain, but wouldn’t be able to issue false certificates for the .com one. The other reason for hope is that DNSSEC is at the root of the scheme to protect the DNS infrastructure itself from attack. This makes the functioning and administration of DNSSEC a critical task for ICANN itself, so it’s a fair bet that any abuse by a registrar won’t just result in a light slap on the wrist and a vague threat to delist its certificate in some browsers, but will have ICANN threatening to revoke its accreditation and, with it, its entire business model as a domain registrar.
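To make the DNS linkage concrete: DANE publishes certificate information in TLSA resource records, whose owner name encodes the port and protocol of the service being secured. A minimal sketch in Python of how that owner name is built (the host names here are just placeholders, not real services):

```python
def tlsa_name(host: str, port: int = 443, proto: str = "tcp") -> str:
    """Build the DNS owner name under which a TLSA record is published:
    _<port>._<proto>.<host>. (per RFC 6698)."""
    return f"_{port}._{proto}.{host.rstrip('.')}."

# The TLSA record for an HTTPS service on www.example.com lives at:
print(tlsa_name("www.example.com"))        # _443._tcp.www.example.com.
# ...and for an SMTP service on port 25:
print(tlsa_name("mail.example.com", 25))   # _25._tcp.mail.example.com.
```

A verifying client simply looks up this name (over DNSSEC) before or alongside the TLS handshake and checks the presented certificate against what the record asserts.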
In many ways, the foregoing directly links the business model of the registrars to making DNSSEC work correctly for you. In theory, the same is true of the X.509 CA roots of trust, of course, but there’s no one sitting at the top making sure they behave (and the fabric of the internet isn’t dependent on securing this behaviour, even if there were).
Details of DANE
In spite of the statements above, DANE is designed to complement X.509 as well as replace it. DANE has four separate certificate verification styles, two of which integrate with X.509 and solve specific threats in its model (the actual solution is called pinning: a way of protecting yourself from the proliferation of X.509 CAs, any one of which could issue certificates for your site):
- Mode 0 – X.509 CA Pinning: The TLSA record identifies the exact CA to which the TLS certificate must chain. This CA certificate must also be a trusted root in the X.509 certificate database.
- Mode 1 – Certificate Constraint: The TLSA record identifies the site certificate itself, and that certificate must also pass X.509 validation.
- Mode 2 – Trust Anchor Assertion: The TLSA record specifies the certificate to which the TLS certificate must chain under standard X.509 validation, but this certificate is not expected to be an X.509 root of trust.
- Mode 3 – Domain Issued Certificates: The TLSA record specifies exactly the TLS certificate which the service must use. This allows a domain to securely specify verifiable self-signed certificates.
Mode 3 is most commonly used to specify an exact certificate outside of the X.509 chain. Mode 2 can be useful, but for it to work the site must either have access to an external certificate store (using DNS CERT records) or have the TLSA record specify the full certificate.
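As a sketch of what a Mode 3 record actually contains: the record data is just four fields, namely the certificate usage (3 for domain-issued), a selector (0 for the full certificate), a matching type (1 for a SHA-256 digest), and the association data itself. A hedged Python illustration, where the `der_cert` bytes stand in for a real DER-encoded certificate:

```python
import hashlib

def tlsa_rdata(der_cert: bytes, usage: int = 3,
               selector: int = 0, matching: int = 1) -> str:
    """Build the RDATA of a TLSA record from a DER-encoded certificate.

    Defaults correspond to the common Mode 3 case: domain-issued
    certificate (usage 3), full certificate (selector 0), matched by
    its SHA-256 digest (matching type 1).
    """
    if matching == 0:      # exact certificate data, hex-encoded
        data = der_cert.hex()
    elif matching == 1:    # SHA-256 digest of the certificate
        data = hashlib.sha256(der_cert).hexdigest()
    elif matching == 2:    # SHA-512 digest of the certificate
        data = hashlib.sha512(der_cert).hexdigest()
    else:
        raise ValueError(f"unknown matching type {matching}")
    return f"{usage} {selector} {matching} {data}"

# With a real certificate, the resulting zone-file line would look like:
#   _443._tcp.example.com. IN TLSA 3 0 1 <sha256-of-der-cert>
print(tlsa_rdata(b"placeholder DER bytes"))
```

The same helper with `usage=0` or `usage=1` would produce the data for the two X.509-integrated pinning modes; only the leading field changes.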
Who Supports DANE?
This is the big problem: for certificates distributed via DANE to be useful, there must be support for them in browsers. For Mozilla, there is the DANE validator extension, but in spite of several attempts, nothing actually built into the browser certificate verifier itself. The most complete patch set is from the DNSSEC people, and there are also Mozilla statements about how they will add it one day, but right at the moment it isn’t a priority; the Chromium browser has had a similar attitude. The conspiracy theorists are quick to point out that this is because the browser companies derive considerable revenue from the CA system, which is in itself a multi-billion-dollar industry, and thus there’s active lobbying against anything that would dilute the power, and hence perceived value, of the CA roots. There is some evidence for this position: Google recognises that certificate pinning, which DANE supports, can protect against the recent fake Google certificate attacks, but, while supporting DNSSEC (at least for validation; Google’s own DNS doesn’t secure itself via DNSSEC), it steadfastly ignores DANE certificate pinning and instead has a private pinning arrangement with Mozilla.
I learned long ago never to ascribe to malice (or conspiracy) what can be easily explained by incompetence (or political problems). In this case, the world split long ago into those using OpenSSL for security (in spite of the problematic licence) and those using NSS (the Netscape Security Services). Mozilla, of course, uses the latter, but every implementation of DANE for Mozilla (including the patches in the bugzilla) uses OpenSSL. I actually have an experimental build of Mozilla with DANE, but incorporating the two separate SSL systems is a real pain. I think it’s safe to say that until someone comes up with an NSS-based DANE verifier, the DANE patches for Mozilla aren’t yet up to the starting blocks, and thus conspiracy allegations are somewhat premature. Unfortunately, the same explanation applies to Chromium: for better or worse, it currently uses NSS for certificate validation as well.