Matasano, 2007-04-05
Alice wants a.victim.org's IP. She asks the DNS.
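In dig terms, the exchange looks roughly like this (TTL and formatting illustrative, output trimmed):

```
$ dig a.victim.org A

;; QUESTION SECTION:
;a.victim.org.            IN      A

;; ANSWER SECTION:
a.victim.org.    3600     IN      A       184.108.40.206
```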
So, a.victim.org is 184.108.40.206. Simple enough? Let's add a wrinkle:
The question, "what is a.victim.org?", and the answer, "220.127.116.11!", are records inside of DNS messages. The DNS - the whole distributed database, that is - is made up entirely of these records. The records are keyed by name, type, and class, and contain bundles of data.
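In zone-file notation, one such record - the key fields (name, class, type) and the data bundle it carries (TTL illustrative):

```
; name           TTL    class  type   data
a.victim.org.    3600   IN     A      184.108.40.206
```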
A subtlety, which we'll be returning to - Alice asks for z.victim.org:
z.victim.org doesn't exist. Alice doesn't get a record; she gets an error packet. There's a difference.
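Roughly what that error looks like in dig terms (header fields illustrative, output trimmed):

```
$ dig z.victim.org A

;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 31337
;; (no ANSWER section - just an error code in the header)
```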
Moving on. Alice didn't really ask the DNS anything:
Alice's stub resolver asks her cache to find out about a.victim.com from the victim.com authority servers. The stub is her Mac's built-in resolver library, which her browser is linked to. The cache is the nameserver SpeakEasy, her ISP, gave her. The authority is named, AKA BIND, running on ns.victim.com.
Enough remedial DNS. Back to DNSSEC. Mallory can spoof responses to Alice's DNS queries. We want to stop her. We use cryptography:
Sign the records (RRs in the jargon) with a signature based on an RSA keypair. Stick the public key in the DNS where Alice's cache can find it. Her cache gets a public key attached to victim.com and an "a.victim.com is 18.104.22.168" RR signed under that public key.
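In the zone, that works out to something like a DNSKEY record carrying the public key, and an RRSIG covering the A record (flags, TTLs, and the base64 blobs below are illustrative/elided):

```
victim.com.      3600  IN  DNSKEY  256 3 5 ( ...base64 public key... )
a.victim.com.    3600  IN  A       18.104.22.168
a.victim.com.    3600  IN  RRSIG   A 5 3 3600 ( ...base64 RSA signature... )
```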
If you're with me so far then, with one important exception, you've got the service model for DNSSEC - meaning, what DNSSEC is trying to accomplish. Caches can look at records and tell if they're valid. Mallory can't forge records without subverting the keys.
How do we manage and protect those keys? Good question. Let's introduce some notation:
We can generate and store RSA keypairs. We can fingerprint keys (public keys) by taking a hash or MAC of them, just like SSH and PGP do. And we can use private keys to generate signatures of blobs (or hashes of blobs) of data.
With that in mind:
Alice's cache ships preconfigured to trust a root key, which basically never changes. Exactly the same way Alice's browser works with SSL to use Verisign's CA. Again, let's call that a trust anchor; it's a secure starting point for DNSSEC queries.
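In BIND, that anchor is just static configuration, something like this (key data elided; 257/3/5 are the flags, protocol, and algorithm fields):

```
trusted-keys {
    "." 257 3 5 "...base64 root public key...";
};
```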
A.VICTIM.ORG. is linked from the root, to ORG, to VICTIM.ORG, containing A.VICTIM.ORG. ORG and VICTIM are delegations (because different people run the roots, ORG, and VICTIM).
In DNSSEC, authentication follows delegations. Sometimes. More on that later.
The root vouches for a key in ORG using a fingerprint record (a Delegation Signer, or DS). The root's voucher for ORG is signed under the root's key, so Mallory can't change it: Alice knows the root's key and now ORG's key.
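A DS record is just that fingerprint in RR form; roughly (key tag, TTL, and digest illustrative):

```
; published in the root zone, signed under the root's key:
org.    86400  IN  DS  12345 5 1 ( ...SHA-1 digest of ORG's key... )
```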
Repeat until you get the record you want. This is an authentication chain.
And it's fine, as far as it goes, though you've got to remember that it took something like 13 years to get here.
I'm describing RFC4033 DNSSECbis (bis! Still don't believe me about the invasion of the OSI-snatchers?) from 2006. The original DNSSEC proposals date back to the early '90s, when government contractor TIS Labs (Marcus Ranum's old haunt) convinced the DOD to let them try to secure the DNS.
The DNSSEC attempt preceding RFC4033 put signatures in the wrong place, so that if the key for COM ever got compromised, tens of millions of DNS records would have to be replaced.
That proposal lived, as the DNSSEC standard, until the end of 2001.
You're right. It's a cheap shot to attack the standards process itself for being unstable and untenable. Paul Vixie does a much better job:
dnssec is the worst design-by-committee effort ive ever seen, both in terms of how late it is, how fuzzy the goals have been, how often the goals have changed, and how complicated and heavy it is now that it is trying to be all-things-to-all-people.
(I'm not sure if I agree or disagree with Vixie - allowing for how unqualified I am to make judgements on the IETF - but I'm pretty sure that the answer of having him and Jim Reid raise $1.6MM to create a members-only Shadow IETF that costs $10k to vote was a step in the wrong direction).
But whatever. Back to the protocol, as it exists today. I've got something cool to show you. Bear with me, this is going to seem redundant.
Insecure DNS. The host a exists, and z doesn't:
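Something like (trimmed dig output, illustrative TTL):

```
$ dig a.victim.org A
a.victim.org.    3600  IN  A  184.108.40.206

$ dig z.victim.org A
;; status: NXDOMAIN
```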
Now, secure DNS. The host a exists, and z doesn't:
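Roughly (again trimmed and illustrative - the point is what's signed and what isn't):

```
$ dig +dnssec a.victim.org A
a.victim.org.    3600  IN  A      184.108.40.206
a.victim.org.    3600  IN  RRSIG  A 5 3 3600 ( ...signature... )

$ dig +dnssec z.victim.org A
;; status: NXDOMAIN
;; (an error code in the header - no record, no signature)
```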
Uh-oh. DNSSEC protects records. The DNS. It doesn't protect DNS packets. Those are just a means to an end. So without magic, we can't distinguish between a real error and a fake one!
That's sort of a problem. And I sincerely mean "sort of".
For one thing, Mallory can deny service to Alice and Bob by forging errors. But so what? In a DNSSEC world, Mallory can deny service to the entire Internet from a small botnet by saturating DNS servers with expensive crypto operations.
On the other hand, being able to forge errors also allows Mallory to change the semantic meaning of some DNS zones. The best example is MX, the mail exchanger record: MX tells Sendmail and Exchange where to send your mail. Without an MX, mail goes to your hostname. So Mallory can send mail to the wrong host.
Reasonable people disagree about whether this is all a real problem, but the consensus is that DNSSEC needs to solve it. So we get authenticated denial (or "provable non-existence", PNE) in DNSSEC. Check this out:
You take all the RRs in your zone. You sort them - which requires its own standard, because crypto needs canonicalization - and then link the names of the RRs together with another kind of record called an NXT - errr, sorry, NSEC. An NSEC links two DNS names together, basically asserting "no name comes between these two in the sort".
Now, when you ask for a record that doesn't exist, the DNS can inform you of that reliably. You get an unexpected NSEC record back from the query. It says, "look, your record falls in between two names linked by an NSEC, in a gap where no name can be." And that record is signed, proving the nonexistence of the name.
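Sketch: if victim.org contains only the apex and a, the chain is two NSEC links, and a query for z gets back the second one, plus its RRSIG (type lists abbreviated):

```
victim.org.      IN  NSEC  a.victim.org.  ( NS SOA RRSIG NSEC )
a.victim.org.    IN  NSEC  victim.org.    ( A RRSIG NSEC )
; query for z.victim.org => a.victim.org. NSEC victim.org.
; i.e., nothing sorts between a.victim.org and the wrap back to the apex
```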
And that's fine, because - oh wait, you in the back there, you have something to say?
Oh. Crap. You're right. Bad guys can just walk the NSEC chain to list all the names in your zone. Yeah, that's pretty much exactly the same thing as allowing zone transfers. I know you've had zone transfers blocked since 1998 - yeah? IETF mailing list guy in the front row? With your hand raised? What's that?
Well, OK, sure, maybe all DNS names should be public. But IETF guy, the names in your zone read like this:
decmultia.greybeard.com
vax-in-my-basement.greybeard.com
sun-ipx.greybeard.com
symbolics.greybeard.com
slackware.greybeard.com
knights-that-say-nee.greybeard.com
And that guy in the back? His names go something like this:
money0.secretcustomer0.bigbank.com
money0.secretcustomer1.bigbank.com
vulnerable.managementsystem.bigbank.com
payrollsystem.bigbank.com
linux-box.boss-not-supposed-to-know-about.bigbank.com
And let's not even start talking about the guys who run COM. The RRs in COM right now are just a couple of tiny NSs. They keep the whole database resident in memory. But DNSSECbis requires them to set up NSEC records for them, too. And sort them. And, by the way, something I haven't really pointed out yet about DNSSEC RRs - know how I've been acting like they look like this:
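That is, something compact, on the order of:

```
a.victim.com.    3600  IN  A  18.104.22.168
```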
Well, actually, they look like this:
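Meaning, with the RRSIG dragged along - the dates, key tag, and signature blob below are all illustrative:

```
a.victim.com.    3600  IN  A      18.104.22.168
a.victim.com.    3600  IN  RRSIG  A 5 3 3600 20070505000000 (
                        20070405000000 12345 victim.com.
                        ...several hundred base64 characters of
                        RSA signature... )
```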
RSA signatures are kind of painful that way. The COM guys better find bigger boxes.
The IETF has a solution for this problem, which Ben Laurie helped write. It's not a standard yet, and so not really part of DNSSECbis. But it's funny, and not hard to explain. It's called NSEC3.
Here's the idea: turn the NSEC chain into a Unix password file. Make the names into salted crypt(3)-style hashes. You can prove an NSEC3 record matches what you were looking for or not, but you can't turn the NSEC chain back into the zone dump. At least not without crack(8).
I have misgivings about this idea. You can scale password files up with compute using adaptive hashing, because people don't log in to their computers very often - but people use the DNS all the time, so you can't make a lookup step take 5 seconds.
Fortunately, there's another IETF solution to the problem. It's called "minimum coverage NSEC", also known as "whitelies"; it involves the servers forging NSEC records to trick people trying to dump the zone, seems to require on-demand signing of RRs (a no-no - your privkeys are supposed to be in a vault in Tucson), and nobody is ever going to deploy any of this, so let's move on.
At this point I'm reminded of a classic from the Western canon:

"Mr. Simpson, we don't play God here."

"What? You do nothing BUT play God. And I think your Octoparrot would agree."

"Squawk! Polly shouldn't be!"
One more wrinkle. This looks like a total non-starter for the COM guys. So they were demanding an extension, informally known as "opt-in", which lets them mark whole stretches of the COM RR space as insecure so they don't have to run their name servers on Beowulf clusters just to let 10 guys mess with DNSSECbis.
But none of these things are in the standard. Which is finished and we're supposed to deploy. NSEC? NSEC3? Whitelies? Opt-In? Or can we just do away with authenticated denial altogether and go with signing-only DNSSECbis?
Another issue. DNSSECbis protects RRs, which are like the nuclei of DNSSEC cells. But the cell walls are still fragile, and other things besides normal DNS rely on them - for instance, DNS dynamic update, which is its own debacle. Want secure dynamic update? That's a different standard, either called TSIG (which secures DNS the same way DESLogin secured Telnet), or SIG(0), which uses public keys. These are different standards than DNSSEC.
Another issue, which may have been obvious:
Alice's stub resolver library is not going to support DNSSECbis any time soon (note to my Apple and Microsoft readers who spend money to audit their OS and runtime code: Alice's stub resolver library is not going to support DNSSECbis any time soon. Get it?).
So DNSSECbis doesn't protect the last mile between Alice and her ISP. But that's OK, that last mile is behind a firewall!
Another issue: nobody knows when or where or how or who or why the roots are going to be signed and who will own the key. Without dignifying this "Homeland Security wants the root signing key" controversy, let's acknowledge that the dream of a solid authentication chain running from a.victim.com all the way back through COM to the root isn't happening anytime soon.
That's OK. The ISC, authors of BIND, have set up a parallel ICANN for DNSSEC, and a protocol extension to match it. If the ICANN/IANA roots won't secure a.victim.org, DLV will secure a.victim.org.dlv.isc.org for you. And all you have to do is trust yet another server, this time run by the ISC. And also accept DLV as part of the DNSSECbis standard, which it isn't, yet.
Having fun yet? Anyone want to lay odds on DNSSECbisbis (or whatever the CCITT folks do after bis) happening?
Which, thankfully, brings me to the last part of my argument.
Have you ever set up an SSL certificate?
Have you ever set up a zone with an AXFR secondary in BIND?
These are not two great tastes that go together. But it gets worse. It always does. Because here's something else that you've probably done: you've visited a totally legitimate HTTPS website that had an expired or transiently broken SSL cert.
When that happens, you get a click-through warning, which you dutifully ignore. That's a part of the Internet that the DNSSEC people don't like. And I feel their pain. But.
When a browser sees an invalid SSL certificate, it has the option of popping up a warning dialog that you can ignore. Now, this is what DNS lookup code looks like in your applications now:
if(!(hp = gethostbyname("a.victim.com"))) fatality();
And, this is what DNS lookup code looks like in your application after DNSSEC:
if(!(hp = gethostbyname("a.victim.com"))) fatality();
And, fatality is what you get if any piece of this DNSSEC puzzle fails. All the signatures on all those records out there? They expire and need to be renewed. And they have to be kept in sync across the whole system. And they have to be configured, by tens of thousands of people of wildly differing skill levels.
That's because the DNS interfaces on every host on the Internet have no good way of signalling failure. In fact, because DNSSECbis won't extend down the last mile, like I mentioned, the protocol won't have a good way of signalling failure. Which means that all the transient glitches you get with SSL, which is a vastly simpler and smaller system, are total failures in DNS.
Still a believer? There's more to come.