Category Archives: Linux

General Linux and Open Source related blog posts

DNSSEC, DANE and the failure of X.509

As a few people have noticed, I’m a bit of an internet control freak: In an age of central “cloud based” services, I run pretty much my own everything (blog, mail server, DNS, OpenID, web page etc.).  That doesn’t make me anti-cloud; I just believe in federation instead of centralisation.  In particular, I believe in owning my own content and obeying my own rules rather than those of $BIGCLOUDPROVIDER.

In the modern world, this is perfectly possible: I rent a co-lo site and I have a DNS delegation so I can run and tune my own services how I wish.  I need a secure web server for a few things (OpenID, an email portal, secure login for my blog etc) but I’ve always used a self-signed certificate.  Why?  Well, having to buy one from a self-appointed X.509 root of trust always really annoyed me.  Firstly because they do very little for the money; secondly because it means effectively giving my security to some self-appointed entity; and thirdly, as all the compromises and misuse attest, the X.509 root of trust model is fundamentally broken.

In the ordinary course of events, none of this would affect me.  However, recently, curl, which is used as the basis of most OpenID implementations, took to verifying X.509 certificate chains, meaning that OpenID simply stopped working for ID providers with self-signed certificates and at a stroke I was locked out of quite a few internet sites.  The only solution is to give up on OpenID or swallow my pride and get a chained X.509 certificate.  Fortunately StartSSL will issue free certificates and the Linux Foundation is also getting into the game, so the first objection is overcome but not the other two.

So, what’s the answer?  As a supporter of cloud federation, I really like the monkeysphere approach, which links SSL certificate verification directly to the user’s personal pgp web of trust.  Unfortunately, that also means that monkeysphere suffers from all the usual web of trust problems, the biggest being that it’s pretty much inaccessible to non-techies who just don’t understand why they should invest time in building up their own trust contacts.  That’s not to say that the web of trust can’t be made accessible in a simple fashion to everyone, and indeed Google is working on a project along these lines; however, the reality is that today it isn’t.

Enter DANE.  At its most basic, DANE is a protocol that links certificate verification to the DNS.  Because anyone who owns a domain must have a DNS entry somewhere and the ability to modify it, they can tie their certificate verification directly to that ability.  To my mind, this represents a nice compromise between making the system simple for end users and the full federation of the web of trust.  The implementation of DANE relies on DNSSEC (which is a royal pain to set up and get right, but I’ll do another blog post about that).  This means that effectively DANE has the same operational model as X.509, because DNSSEC is a hierarchically rooted trust model.  It also means that the delegation record to your domain is managed by your registrar and could be compromised if your registrar is.  However, as long as you trust the DNSSEC root and your registrar, the ability to generate trusted certificates for your domain is delegated to you.  So how is this different from X.509?  Surely abusive registrars could cause similar problems to abusive or negligent X.509 roots?  That’s true, but an abusive registrar can only affect their own domain and delegates; they can’t compromise everyone else (unlike X.509).  To give an example of recent origin: the Chinese registrar could falsify the .cn domain, but wouldn’t be able to issue false certificates for the .com one.  The other reason for hope is that DNSSEC is at the root of the scheme to protect the DNS infrastructure itself from attack.  This makes the functioning and administration of DNSSEC a critical task for ICANN itself, so it’s a fair bet that any abuse by a registrar won’t just result in a light slap on the wrist and a vague threat to delist their certificate in some browsers, but will have ICANN threatening to revoke their accreditation and, with it, their entire business model as a domain registrar.
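To make that concrete, here is roughly what a DANE-aware client does (a sketch only; the hostname and port are placeholders, not a real deployment): it looks up a TLSA record for the service in the DNS and checks the DNSSEC signatures on the answer, which you can see by hand with dig:

dig +dnssec _443._tcp.www.example.com TLSA

If the record validates, the certificate data it carries can be trusted exactly as far as the DNSSEC chain of trust allows, which is the whole point of the scheme.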

In many ways, the foregoing directly links the business model of the registrars to making DNSSEC work correctly for you.  In theory, the same is true of the X.509 CA roots of trust, of course, but there’s no one sitting at the top making sure they behave (and the fabric of the internet isn’t dependent on securing this behaviour, even if there were).

Details of DANE

In spite of the statements above, DANE is designed to complement X.509 as well as replace it.  DANE has four separate certificate verification styles, two of which integrate with X.509 and counter specific threats in its model (the actual solution is called pinning, a way of protecting yourself from the proliferation of X.509 CAs, any of which could issue certificates for your site):

  • Mode 0 – X.509 CA pinning: The TLSA record identifies the exact CA the TLS Certificate must chain to.  This certificate must also be a trusted root in the X.509 certificate database.
  • Mode 1 – Certificate Constraint: The TLSA record identifies the site certificate, and that certificate must also pass X.509 validation.
  • Mode 2 – Trust Anchor Assertion: The TLSA record specifies the certificate to which the TLS certificate must chain under standard X.509 validation, but this certificate is not expected to be an X.509 root of trust.
  • Mode 3 – Domain Issued Certificates: The TLSA record specifies exactly the TLS certificate which the service must use.  This allows domains to securely specify verifiable self-signed certificates.

Mode 3 is most commonly used to specify an exact certificate outside of the X.509 chain.  Mode 2 can be useful, but the site must have access to an external certificate store (using the DNS CERT records) or the TLSA record must specify the full certificate for it to work.
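As a concrete illustration of Mode 3 (a sketch only; the certificate path and hostname are placeholders), the record data for a “use exactly this certificate” assertion is just the SHA-256 digest of the DER-encoded certificate:

openssl x509 -in /etc/ssl/mycert.pem -outform DER | sha256sum

The resulting hex digest (drop the trailing “-”) then goes into the zone as something like

_443._tcp.www.example.com. IN TLSA 3 0 1 <hex digest from above>

where the “3 0 1” means domain issued certificate, full certificate match, SHA-256.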

Who Supports DANE?

This is the big problem:  For certificates distributed via DANE to be useful, there must be support for them in browsers.  For Mozilla, there is the DANE validator extension, but in spite of several attempts, nothing has actually been built into the browser certificate verifier itself.  The most complete patch set is from the DNSSEC people, and there’s also a Mozilla-inspired one accompanied by statements about how they will add it one day, but right at the moment it isn’t a priority.  The Chromium browser has had a similar attitude.  The conspiracy theorists are quick to point out that this is because the browser companies derive considerable revenue from the CA system, which is in itself a multi-billion dollar industry, and thus there’s active lobbying against anything that would dilute the power, and hence perceived value, of the CA roots.  There is some evidence for this position in that Google recognises that certificate pinning, which DANE supports, can protect against the recent fake Google certificate attacks, but, while supporting DNSSEC (at least for validation; the Google DNS doesn’t secure itself via DNSSEC), they steadfastly ignore DANE certificate pinning and instead have a private arrangement with Mozilla.

I learned long ago never to ascribe to malice (or conspiracy) what can be easily explained by incompetence (or political problems).  In this case, the world was split long ago into using openssl for security (in spite of the problematic licence) or using nss (the Netscape Security Services).  Mozilla, of course, uses the latter, but every implementation of DANE for Mozilla (including the patches in the bugzilla) uses openssl.  I actually have an experimental build of Mozilla with DANE, but incorporating the two separate SSL systems is a real pain.  I think it’s safe to say that until someone comes up with an nss based DANE verifier, the DANE patches for Mozilla aren’t yet up to the starting blocks, and thus conspiracy allegations are somewhat premature.  Unfortunately, the same explanation applies to Chromium: for better or worse, it’s currently using nss for certificate validation as well.

Getting your old Sync Server to work with New Firefox

Much has been written about Mozilla trying to force people to use their new sync service.  If, like me, you run your own sync server for Firefox, you’ve mostly been ignoring this because there’s still no real way of running your own sync server for the new service (and if you simply keep upgrading, Firefox keeps working with your old server).

However, recently I had cause to want to connect my old sync server to a new installation of firefox without just copying over all the config files (one of the config settings broke google docs and I couldn’t figure out which one it was, so I figured I’d just blow the entire config away and restore from sync).  Long ago Mozilla disabled the ability to connect newer Firefoxes to an old sync server, so this is an exposé of how to do it.  I did actually search the internet for this one, but no-one else seems to have figured it out (or if they have, they’re not known to the search engines).

There are two config files you need to update to get new Firefox to connect to sync (note, I did this with Firefox 37; I’ve not tested it with a different version, but I’m pretty sure it will work).  The first is that you need to put your sync key and weave user login into logins.json.  Since the password and user are encrypted in this file, the easiest way is to use a password manager extension, like the Saved Password Editor add-on.  Then you need two new password entries of type “Annotated” under the host chrome://weave.  For each, your username is your weave username.  For the first, you’re going to add your weave password under the annotation “Mozilla Services Password”.  For the second, add your sync key with all the dashes removed as the password under the annotation “Mozilla Services Encryption Passphrase”.  If you’ve got all this right, the password manager will show the two chrome://weave entries (my username is jejb).

Next you’re going to close Firefox and manually edit the prefs.js file.  To sync completely from scratch, this just needs three entries, so first strip out every preference that begins with ‘services.sync.’ and then add three new lines:

user_pref("services.sync.account", "<my account>");
user_pref("services.sync.serverURL", "<my weave URL>");
user_pref("services.sync.username", "<my weave user name>");

For most people, the account and weave user name are the same.  Now start Firefox and it should just sync on its own.  To check that you got this right, go to the Sync tab of preferences and you should see your sync account set up and syncing.


And that’s it.  You’re all done.

Squirrelmail and imaps

Somewhere along the way squirrelmail stopped working with my dovecot imap server, which runs only on the secure port (imaps).  I only ever use webmail as a last resort, so the problem may be left over from years ago.  The problem is that I’m getting a connect failure but an error code of zero and no error message.  This is what it actually shows

Error connecting to IMAP server "localhost:993".Server error: (0)

Which is very helpful.  Everything else works with imaps on this system, so why not squirrelmail?

The answer, it seems, is buried deep inside php.  Long ago, when php first started using openssl, it pretty much did no peer verification.  Nowadays it does.  I know I ran into this a long time ago, so the self-signed certificate my version of dovecot is using is present in the /etc/ssl/certs directory where php looks for authoritative certificates.  Digging into the sources of squirrelmail, it turns out this php statement (with the variables substituted) is the failing one

$imap_stream = @fsockopen('tls://localhost', 993, $errno, $errstr, 15);

It’s failing because $imap_stream is empty, but, as squirrelmail claims, it’s actually failing with a zero error code.  After several hours of casting about with the fairly useless php documentation, it turns out that php has an interactive mode where it will actually give you all the errors.  Executing this

echo 'fsockopen("tls://localhost",993,$errno,$errmsg,15);'|php -a

Finally tells me what’s wrong

Interactive mode enabled

PHP Warning: fsockopen(): Peer certificate CN=`bedivere.hansenpartnership.com' did not match expected CN=`localhost' in php shell code on line 1
PHP Warning: fsockopen(): Failed to enable crypto in php shell code on line 1
PHP Warning: fsockopen(): unable to connect to tls://localhost:993 (Unknown error) in php shell code on line 1

So that’s it: php has tightened up the certificate verification not only to validate the certificate itself, but also to check that the CN matches the requested service.  In this case, because I’m connecting over the loopback device (localhost) instead of over the internet to the DNS name, that CN check has failed and led to the results I’m seeing.  Simply fixing squirrelmail to connect to imaps over the fully qualified hostname instead of localhost gets everything working again.
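You can confirm the fix the same way as the failing case above, substituting the FQDN (mine shown here; use your own) for localhost:

echo 'fsockopen("tls://bedivere.hansenpartnership.com",993,$errno,$errmsg,15);'|php -a

which now connects without warnings; the squirrelmail change is then just pointing its IMAP server address setting at the FQDN instead of localhost.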

Getting a Windows Printer (Ricoh Aficio SP 204) natively running on Linux

Printing and scanning has always been the bane of Linux.  I thought I solved it three years ago by getting a nice network printer (HP OfficeJet Pro 8600) which spoke postscript and could scan to folder (provided you have samba installed).  Unfortunately, this is an inkjet printer, and about three months ago the initial cartridges (which are deliberately lightly loaded) ran out of ink.  Purchasing new ones (it’s colour so I need four) turned out to cost an arm and a leg (or 2x what the printer cost to buy in the first place).  Three months after replacement, the whole thing died with a “call HP technicians” error.  This turns out mostly to mean my ink cartridges are leaking.  Sure enough, the entire inside is awash with a substance more costly than liquid gold … plus it’s now all over my shirt and trousers.  Trying to clean it out just gets ink all over the desk and some important papers.  Of course, since it’s a UK purchased printer and I’m now living in the US, HP support “cannot help”.  Vowing never to purchase another @!**#@ inkjet printer as long as I live, it’s time to find a cheap multi-function laser (did I mention the scanner function on the HP won’t work now either, because when it gets this error it locks every function?).

Investigating lasers, the cheapest multi-function seems to be a Ricoh Aficio SP204 N (the N means network connected, which is nice) for US$60, which is a bargain, plus it’s a laser.  Google confirms it can scan to pdf (via file share or email); the only drawback is that the printer engine is “windows only” (one of those direct render on the system and send to printer ones).  Further googling around for the printer and linux drivers (and even DDST, the Ricoh name for their direct rendering protocol on linux) yields nothing.  Looks like I’ll be writing a driver when it arrives.  Fortunately, there is a way of running it (using KVM instead of VirtualBox) provided you have a windows virtual machine, so that’s the initial plan.  The only other annoyance is that it doesn’t do duplex (either for scanning or printing).  Bummer, but you can’t have everything for US$60.

When the printer arrives, it turns out it has a web interface (yay) but you can’t program scan destinations with it (and without scan destinations, it won’t scan) … bummer.  Install the windows virtual machine with the Ricoh driver and use the tool to program scan to email; amazingly enough it all works correctly (it even scans in colour).  Follow the redirection directions with ghostview, ghostscript and redmon and successfully attach the printer to Linux.

Now to get the thing working under linux.  First step is to use tcpdump to track the communication between the windows machine and the printer:

tcpdump -n -w /tmp/trace.ricoh -i eth0 host <printer ip>

And then print something.  Looking at the trace file in wireshark, the windows driver uses the HP Jetdirect port (9100).  In wireshark, select the first packet to this port and right click on “follow TCP stream”.  That gives the whole file the windows system sent.  Now save it to a file (tmp.winprint) and see if that’s enough to get the printer going.  You do this by sending the saved file to the printer with netcat:

nc <printer ip> 9100 < tmp.winprint

Wonder of wonders, it prints the same page again, so now we have the correct format to send.  A quick view with emacs reveals an HP PJL (Print Job Language) encoded header and footer with binary data in between.  This is the header:

^[%-12345X@PJL 
@PJL SET TIMESTAMP=2014/07/22 18:31:06 
@PJL SET FILENAME=gsprint 
@PJL SET COMPRESS=JBIG 
@PJL SET USERNAME=SYSTEM 
@PJL SET COVER=OFF 
@PJL SET HOLD=OFF 
@PJL SET PAGESTATUS=START 
@PJL SET COPIES=1 
@PJL SET MEDIASOURCE=TRAY1 
@PJL SET MEDIATYPE=PLAINRECYCLE 
@PJL SET PAPER=A4 
@PJL SET PAPERWIDTH=4961 
@PJL SET PAPERLENGTH=7016 
@PJL SET RESOLUTION=600 
@PJL SET IMAGELEN=61304

And this is the footer:

@PJL SET DOTCOUNT=995745 
@PJL SET PAGESTATUS=END 
@PJL EOJ 
^[%-12345X

So it all seems relatively straightforward: each page is rendered as a pixel map in the jbig compressed image format (it’s lossless, like gif) and the header describes exactly the size and dimensions of the image.  So getting it working seems to be very straightforward: just generate the jbig images and slap on the header and footer.  Ghostscript doesn’t render natively to jbig, but it will render to ppm and jbigkit converts that to what I need.  The image dimensions can be obtained from the ppm with the ImageMagick identify command.  The only fly in the ointment is the DOTCOUNT.  This shows per page how many black pixels are printed and must be something to do with the way the printer tracks cartridge use; however, it can be faked for the moment.
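As a rough sketch of that pipeline (file names and the 600 dpi resolution are illustrative only; the real driver also has to wrap each page in the PJL header and footer shown above, and supply a DOTCOUNT):

gs -q -dBATCH -dNOPAUSE -sDEVICE=pbmraw -r600 -sOutputFile=page-%03d.pbm input.ps
identify page-001.pbm    # pixel dimensions for PAPERWIDTH/PAPERLENGTH
pbmtojbg page-001.pbm page-001.jbg

Here pbmraw is ghostscript’s 1-bit bitmap device, which is the form jbigkit’s pbmtojbg takes as input for a mono laser page.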

The jbig format is also used in faxes, so I asked google if anyone else had a piece of code that does the split.  Since it would have the “@PJL SET COMPRESS=JBIG” line, I did a code search for that;  what do you know, it turns up a linux driver for the Ricoh Aficio SP 100:

https://github.com/madlynx/ricoh-sp100

To add insult to injury, the READMEs mention terms I’ve been searching for for ages (like Ricoh and DDST) and the driver filter even has them in the file name … honestly, for allegedly being the primary search engine of the internet, you’d sometimes wonder if google could find its own arse with both hands.

So, download this and install it and, yay, it works.  Looks like the only real difference between the SP 100 and the SP 204 is that the latter has a higher resolution mode (1200×600) and also can be adjusted to use a bypass tray (which is set in the header too).

I’ve done an initial package here and will be updating it for the SP 204’s additional features.