Wow, this is terribly misleading DNSSEC propaganda. It tells me:
"Protected from redirection to false IP addresses (DNSSEC)"
What does that mean? It means that whatever DNS server I use appears to verify DNSSEC signatures (I use Google's DNS, fwiw). Yet this doesn't provide any real protection, since the connection to that DNS server may very well be compromised.
It would happily show "DNSSEC protection" on an open public wifi if the provider decided to enable DNSSEC.
The question for many residential internet users is: just because I set my DNS to Google's, do my requests really arrive there? Or does my ISP use transparent DNS proxies?
I know that for many ISPs around here (Telekom especially), setting your DNS doesn't have any effect unless you run a local resolver (or DNSCrypt).
It's a different threat model. Classical DNS (i.e. without port randomization and a whole host of other tricks) is very easy to spoof from all over the internet.
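To put rough numbers on that difficulty gap, here is a back-of-the-envelope sketch (illustrative figures only, assuming an off-path attacker who must guess a query's identifiers before the legitimate answer arrives):

```python
# Rough, illustrative numbers for an off-path attacker who must
# guess a query's identifiers before the real answer arrives.
ids = 2 ** 16                  # possible 16-bit DNS query IDs

# Without source-port randomization: one chance in 65,536 per forged packet.
p_fixed_port = 1 / ids

# With ~60k usable ephemeral ports the search space multiplies.
ports = 60_000                 # rough count, varies by OS configuration
p_random_port = 1 / (ids * ports)

print(f"fixed source port:  1 in {ids:,}")
print(f"random source port: 1 in {ids * ports:,}")
```

Kaminsky-style tricks (racing many queries at once) shrink these odds further, which is why port randomization alone is considered a mitigation, not a fix.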
Inserting yourself between a client and a server is way more difficult.
Note that from the point of traffic analysis, you still don't want your TLS traffic to go through a third party.
So if your threat model mostly includes nation-state attackers then DNSSEC is only useful for DANE. If you also want to secure a lookup of, for example, pool.ntp.org then DNSSEC for A and AAAA records also makes sense.
> If you also want to secure a lookup of, for example, pool.ntp.org then DNSSEC for A and AAAA records also makes sense.
The fun part begins when you realize you can't validate DNSSEC because your time drifts too much. So how do you get your initial sync from pool.ntp.org with DNSSEC validation enabled?
If the DNSSEC-validating resolver is a server, then it is usually not an issue. Most server hardware has a battery-backed real-time clock. In the odd case that you are bootstrapping a server, you would have to set the time manually or make setting the time part of the bootstrap process.
For embedded systems that don't have a battery-backed real-time clock and want to do local DNSSEC validation, this is indeed an issue.
There are plenty of hacks to make it work, but no real standard.
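One common shape for those hacks is a two-phase bootstrap: if the clock is obviously bogus, take one unauthenticated time reading first (plain NTP, an HTTPS Date header, or a build timestamp), and only then resolve with DNSSEC validation enabled. A minimal sketch of that control flow, with the actual network calls stubbed out as hypothetical callbacks:

```python
# Sketch of a two-phase time bootstrap; BUILD_TIME and the callbacks
# are hypothetical stand-ins, not a real API.
BUILD_TIME = 1_500_000_000.0  # timestamp baked in when the firmware was built

def clock_is_plausible(now: float) -> bool:
    # RRSIG validity checks are hopeless if the clock predates the build.
    return now >= BUILD_TIME

def bootstrap_clock(read_clock, sync_insecure):
    """If the RTC is obviously bogus, take one unauthenticated sync first;
    only afterwards can DNSSEC signature validity periods be checked."""
    now = read_clock()
    if not clock_is_plausible(now):
        now = sync_insecure()  # e.g. plain NTP -- unavoidably unvalidated
    return now

# Simulated device whose clock reset to the epoch after power loss:
print(bootstrap_clock(lambda: 0.0, lambda: 1_700_000_000.0))
```

The first sync is necessarily unauthenticated, which is exactly why no standard blesses this; it only narrows the attack window rather than closing it.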
Yes. I like what they are trying to do, but they don't seem to actually think things through. It took me forever to get some of their server tests fixed: the site reported that my server didn't properly support IPv6 (in DNS) when in fact it did; their test was simply wrong.
For me it now says "your DNS service providers are:" followed by the name of the netblock owner. The actual name server is in my own network.
I thought DNSSEC was supposed to be verifiable by the client? If it isn't then it's pointless in the way that you suggest, but I find it hard to believe that hole was left.
Congratulations, you've understood the main hole of dnssec.
The thing is: you can verify DNSSEC on the client. In theory. It's just that 99.9% of people (a rough estimate; it may be higher) don't.
You'd have to run your own resolver. Which might work, if your ISP isn't doing funny things with your DNS traffic. Which some ISPs do. Which means it can't be deployed widely.
This thing was built in the 90s, when people assumed you had some DNS server, managed by an admin you trust, on some trusted network. Moving it to today's internet is pretty much impossible.
What do you mean, "run your own resolver"? That's a fancy name for the library that the application uses to speak DNS, not a separate thing that has to be set up, run, etc, separately.
Instead of asking "what IP is google.com?" of your configured DNS server, you traverse the whole chain yourself. First, consult your built-in list of root servers: which servers can answer for .com? Then ask those which name servers google.com has. Finally, send the query directly to those servers, and you get a response you can use.
This chain can get really long depending on the service's DNS configuration. And the whole time, every response has to come back DNSSEC-signed.
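That walk can be sketched as follows; the referral data here is a made-up stand-in for the real hierarchy (including the IP), there is no network I/O, and a real validating resolver would additionally fetch and verify the DS/RRSIG chain at every step:

```python
# Hypothetical referral data standing in for the real DNS hierarchy.
ZONES = {
    ".":           {"com.": ["a.gtld-servers.net."]},
    "com.":        {"google.com.": ["ns1.google.com."]},
    "google.com.": {"www.google.com.": ["142.250.0.1"]},  # made-up IP
}

def resolve(name: str) -> str:
    """Walk from the root down, following one referral per label."""
    labels = name.rstrip(".").split(".")
    zone = "."
    # Peel labels right to left: "." -> "com." -> "google.com." -> ...
    for i in range(len(labels) - 1, -1, -1):
        child = ".".join(labels[i:]) + "."
        answer = ZONES[zone][child]
        if child == name:          # reached the final answer
            return answer[0]
        zone = child               # otherwise follow the referral down
    raise LookupError(name)

print(resolve("www.google.com."))
```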
If I run my own resolver, with a hardcoded [1] trust anchor, how could an ISP affect me regardless of what funny things it does with my DNS traffic...?
Well, the traffic is not encrypted or otherwise protected, so a firewall trying to be "smart" could do all kinds of things. E.g. not letting you connect to other DNS servers at all or filtering all queries with unusual record types.
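For the record, "hardcoded trust anchor" concretely means pinning the root zone's key in your resolver. In unbound, for example, that looks roughly like this (the anchor file is primed with IANA's published root KSK, e.g. via the unbound-anchor tool, and then tracked automatically across RFC 5011 key rollovers):

```
# /etc/unbound/unbound.conf (excerpt)
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

# Prime the anchor once, before first start:
#   unbound-anchor -a /var/lib/unbound/root.key
```

With that in place, answers either validate against the pinned root key or come back SERVFAIL; a middlebox can deny service but not substitute records.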
> Are we on the same page that with DNSSEC activated on a local resolver one would either get an authentic answer, or nothing at all?
Sure. But it's not very relevant, because almost nobody does that. And that's unlikely to change, because getting nothing at all isn't a very desirable state of affairs.
And given that forcing local DNSSEC resolvers in an OS or a browser would likely mean that a large share of your userbase gets nothing at all, this is pretty much impractical.
> And that's unlikely to change, because getting nothing at all isn't a very desirable state of affairs.
It worked for HTTPS: more and more browser builds refuse to show you stuff, with no workaround, even if there is nothing wrong with the certificates (cough-sha1-cough-or-cough-chrome-cert-transparency-cough). Yet I don't see any user revolt.
Claiming that having an all-or-nothing HTTPS is a-ok, yet having all-or-nothing DNS is unacceptable is... inconsistent.
Correct. And if the local resolver is sufficiently close to your client that there is very low risk of an attacker getting into your local network, then you can have a higher degree of trust in those validated answers from your resolver.
Only if your ISP doesn't molest DNS packets. More importantly: this only works for a small set of nerds; it doesn't scale to every user on the Internet --- this is the worst kind of "insecurity for thee not me". For refusing to make compromises like this, and instead insisting that sound cryptography be made available to all users, Moxie and Trevor just won the Levchin Prize at RWC.
Notice also that Signal provides a massive amount of cryptographic security to billions of people without needing a PKI controlled at its roots by world governments.
It can be verified by the client [1] (the Wikipedia link also has the RFC source), but typically the verification is done by the resolver, which introduces the problem that the client has to trust the resolver and the network between the resolver and the client (the "last mile" problem).
You are right, but the wording is chosen with the average internet user in mind. Luckily the 'DNSSEC' in brackets at the end lets you, the more tech-savvy user, know what was really meant.
OK, so for those of us naive about DNS security, can someone summarize the current best practice for DNS on gateway routers and roaming endpoints (laptops)?
The short answer is: pretty much everyone uses normal DNS, because the many show-stopping problems with DNSSEC include the insane design decision not to protect the "last mile" between the stub resolver on your own machine and the "DNS server" (technically: recursive cache) that DHCP configures.
If you're using Google's DNS, it will (pretty much pointlessly) validate DNSSEC records for you --- but the link between your computer and Google's DNS servers is completely unprotected (any attacker could simply trick your browser into believing there was no such thing as DNSSEC).
This doesn't much matter because only a tiny, tiny fraction of all DNS records are DNSSEC-signed. The modal experience for companies that do take the trouble to sign their DNS records is "taken offline completely by DNSSEC configuration mistakes". There is virtually no upside to participating.
The good news about all of this is that there's really nothing you need to do to have good DNS OPSEC. Just do what everyone else does, including pretty much all security people: delegate security to a higher layer of the Internet stack.
Google of course wants people to continue to use their DNS resolvers. So it is in their interest to focus only on techniques to improve access to their resolvers.
One thing that has happened in recent years is that a very nice library called 'getdns' has been developed. getdns does local DNSSEC validation but also offers various ways of reaching DNS servers and resolvers ("Roadblock Avoidance").
I use getdns in ssh for SSHFP, to obtain SSH key fingerprints from DNS. If DNSSEC doesn't work then SSH fails (or complains about an insecure connection). So far my experience is that it works.
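For anyone wanting to try the same setup: the server side publishes fingerprint records that ssh-keygen can emit, and the client opts in. A sketch with a hypothetical hostname (the digest shown is a placeholder):

```
# On the server: print SSHFP records for its host keys, paste into the zone
$ ssh-keygen -r host.example.com
host.example.com IN SSHFP 4 2 <64-hex-digit SHA-256 of the Ed25519 host key>

# On the client: accept fingerprints from DNS (stock OpenSSH only trusts
# them when the lookup came back DNSSEC-validated as secure)
$ ssh -o VerifyHostKeyDNS=yes host.example.com
```

The same option can go in ssh_config instead of on the command line.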
The problem with DNSSEC local validation is that it doesn't protect your privacy.
So there are two techniques under development to address that. One is to run DNS directly over TLS. The second is to run DNS over HTTPS.
Running DNS over TLS has the advantage that the semantics are clear (just DNS over TCP, but encrypted), but the downside that the port may be blocked.
DNS over HTTPS is unlikely to get blocked, but there are too many ways to transmit DNS over HTTPS, so it may take some time for that to get sorted out.
Of course, moving DNS from a lightweight UDP exchange to TLS or HTTPS requires quite a bit more resources on the server side.
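For concreteness, one of those proposed HTTPS encodings simply base64url-encodes the ordinary DNS wire format into a GET parameter. A sketch of building such a query with nothing but the Python standard library (the resolver URL is a placeholder, and nothing is actually sent over the network):

```python
import base64
import struct

def doh_url(name: str, resolver: str = "https://dns.example/dns-query") -> str:
    """Build a DNS-over-HTTPS GET URL for an A-record query."""
    header = struct.pack(">HHHHHH",
                         0,       # query ID 0 (cache-friendly for GET)
                         0x0100,  # flags: standard query, recursion desired
                         1, 0, 0, 0)  # QDCOUNT=1, no other sections
    # Encode the name as length-prefixed labels, e.g. \x07example\x03com\x00
    qname = b"".join(bytes([len(l)]) + l.encode("ascii")
                     for l in name.rstrip(".").split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    wire = header + question
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={b64}"

print(doh_url("example.com"))
```

The response would come back as raw DNS wire format in the HTTPS body, so the client-side parsing code stays the same as for UDP.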
So, local DNSSEC validation works. It is just a matter of turning it on. Server-side, if the admins are behind a DNSSEC-validating resolver, then they quickly figure out how to avoid breaking it.
When it comes to privacy, if you send all your DNS queries to Google, who else do you care about who might be watching your DNS traffic?
In my case it falsely reports that everything is OK because I have uMatrix enabled, which blocks outside requests. When I disable it, it shows "not protected".