>Let's kill it off and focus on efforts that solve real problems
Like what? It's pointless to distract from constructive proposals while offering no supporting material for what the problems with the proposed solution are, then to claim there are other things that need our attention without paying even lip service to them.
DNSSEC is quite famously a solution in search of a problem. It's pretty explicitly designed to secure the requests from the resolver to the nameservers, not from the actual end-users--and the number of actual attacks that would be prevented as a result is pretty damn minuscule.
Here's a fun exercise: find the most reputable security or cryptography person you can who has publicly said nice things about DNSSEC. For extra fun, try to find one who isn't in some way affiliated with the IETF (most security and cryptography people aren't, so that shouldn't be too hard).
Well, I don't know about people, but I do know that a lot of big names in networking support and enable DNSSEC by default. So again, the sentiment isn't that famous, because DNSSEC is a widely used security feature; BIND's named has it on by default.
Here's a fun exercise: You could just say what you need to say to make a point and educate the people reading this thread. Otherwise your comment is kind of detracting from constructive conversation.
It was the first thing he said: there are very few real attacks that DNSSEC protects against. In other words: the benefit does not outweigh the cost.
That is a damning argument and there is nothing else to say until someone (e.g. you, but don't feel pressured) comes forward with a good counterexample.
> there are very few real attacks that DNSSEC protects against.
That's not a very good argument when you're talking about securing systems. We don't mitigate attacks based on the prevalence of known instances of those attacks; we do so beforehand. Attack vectors are considered before they are ever actively used. The most relevant, recent, and well-known example of this is Heartbleed.
Your best example of an attack mitigated by deploying a languishing IETF standard is Heartbleed, which was caused by the adoption of a languishing IETF standard extension to TLS.
Heartbleed was caused by a class of code error prevalent in the toolchain used to write that code. It was not caused by the specification itself.
Here is the exact change if you're able to read the code: https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h... -- if reading that change is not within your skill set, the patch is essentially adding a bounds check on a set area of memory. The language being used is notorious for allowing this type of code error and is the source of many exploits and bugs. It has nothing to do with the specification.
>Programming languages commonly associated with buffer over-reads include C and C++, which provide no built-in protection against using pointers to access data in any part of virtual memory, and which do not automatically check that reading data from a block of memory is safe; respective examples are attempting to read more elements than contained in an array, or failing to append a trailing terminator to a null-terminated string. Bounds checking can prevent buffer over-reads
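To make the bug class concrete, here is a simplified sketch of the pattern (illustrative only -- not the actual OpenSSL code): the peer claims a payload length, and the broken version copies that many bytes back without checking the claim against what was actually received.

    #include <stdint.h>
    #include <string.h>

    /* Echo back a payload whose length the peer claims in the first two
       bytes of the message. Sketch of the bug class, not OpenSSL itself. */
    size_t echo_broken(const uint8_t *msg, size_t msg_len, uint8_t *out) {
        uint16_t claimed = (uint16_t)((msg[0] << 8) | msg[1]); /* attacker-controlled */
        (void)msg_len;
        memcpy(out, msg + 2, claimed); /* BUG: over-reads past msg when claimed > msg_len - 2 */
        return claimed;
    }

    size_t echo_fixed(const uint8_t *msg, size_t msg_len, uint8_t *out) {
        if (msg_len < 2) return 0;
        uint16_t claimed = (uint16_t)((msg[0] << 8) | msg[1]);
        if ((size_t)claimed > msg_len - 2) return 0; /* the bounds check the patch adds */
        memcpy(out, msg + 2, claimed);
        return claimed;
    }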
We mitigate attack vectors before there are known instances of software exploiting them. I'm not sure why you want to argue against that, or even how, because we do it all the time. Take Spectre/Meltdown as another example: preemptive measures were taken before any known usage of that attack vector had been discovered.
You're missing the point. The TLS heartbeat feature is a feature that is wanted by essentially nobody. Adding support for it means you're adding code--which naturally has the potential for bugs--to a server install, which increases the scope for attack vectors. Furthermore, as an underused feature, it's not going to get the same amount of code review as more heavily trafficked portions of the codebase.
Sure, the actual bug is a coding 101 bug, but the social process that caused a coding 101 bug to potentially compromise the vast majority of server installations is why Heartbleed is important. One of the lessons is that you need more reason to implement a feature than "it's a feature that can be implemented."
I don't think I am. You're focusing on one particular thing and invoking platitudes that are completely irrelevant. You're actively attempting to change the conversation topic to fit your viewpoint. That's not okay. No one is talking about the philosophy of feature development and adoption. I gave another example of preemptive mitigation that has nothing to do with waxing philosophical about "did we really need to do this" or "nobody really wanted that feature": Spectre/Meltdown.
So two things, to get this conversation back on track:
1) The idea that DNSSEC is worthless/useless/bad is not "famous" by any reasonable use of that term. I explicitly stated it was only something you see in particular circles. The fact that the most popular piece of software used to manage DNS enables DNSSEC by default, and that one of the biggest third-party public DNS providers has it opt-out only, is a testament to this. It's myopic to think that other people are aware of these criticisms when the entire ecosystem is signaling quite the opposite.
2) DNSSEC might be useless, but in the abstract, arguing against systems designed to mitigate attacks simply because those attacks haven't been seen in the wild is a bankrupt idea in the world of security. So if you want to say DNSSEC is a bad idea, stick with that; but don't say it's a bad idea because we haven't seen an actual attack of the kind DNSSEC is trying to prevent. That claim, again, is bankrupt when you're talking about securing systems.
>but the social process that caused a coding 101 bug to potentially compromise the vast majority of server installations is why Heartbleed is important.
I actually want to speak to this, because it's kind of a popular thing to say in the world of software development, but it fundamentally misunderstands what's actually happening.
You're right that a social process allowed the coding 101 bug to happen and persist for so long, but you're wrong about which social process. People's motivations for a feature have no bearing on how exploitable code gets written, because features that people do want fall victim to the same kinds of code errors. Test your premise about which social process is in play and you'll see it's irrelevant: take people's motivations entirely out of the equation, and the same process and the same class of error still exist, still allowing the exploits and bugs to happen.
I am not saying that an increase in LOC does not correlate with an increase in bugs. That's a fairly standard fact. I am saying that people's motivations for features have no bearing on how open source projects fall victim to shoddy code.
The social process that IS to blame is the fact that people don't invest in open source software, yet depend on it for their entire ecosystem. A student wrote some code, submitted it, and it was reviewed by one person. That code was never really looked at again, never really tested, but allowed to exist in the software for years. If the social ecosystem around open source software were one where more eyes were laid on the code, it might not have happened at all--and relying on a student to write the code is a travesty in its own right.
But people's motivations for writing code? Not really relevant in terms of LOC, exploits, and code quality.
is not "famous" by any reasonable use of that term.
You're getting really hung up on "famously" which doesn't even literally mean "is famous". If that's your #1 point, you might as well drop it and move on to something else.
"famous" is the first thing you latch onto at the start of this thread. You mention it repeatedly throughout. You've labeled it #1 a comment or so ago. I'm trying to tell you it's a minor, silly misunderstanding, largely on your part.
DNSSEC isn't famously bad the way Battlefield Earth and Gigli are; it's famously bad among people who know stuff about DNS security. You seem shocked that experts disdain it, and were unaware of AGL's "Why Not DANE In Browsers". Perhaps, before litigating DNSSEC further, you might read up on it a bit more.
Either way, calling commenters on HN "trolls" violates the HN civility rules, and you need to stop doing that. You can read more about that in the Guidelines, linked at the bottom of this page.
There are wheels within wheels in this village, and fires within fires! When Reverend Hale comes, you will proceed to look for signs of witchcraft here.
I stated as much in the comment you're replying to: the software that basically everyone uses for DNS management. Google DNS supports it by default, requiring you to opt out. I gave you an example; it would be greatly appreciated if you provided a substantive, informative comment about the problems with DNSSEC, or at the very least linked material (blogs/articles/etc.) by famous security experts who do not support DNSSEC -- you seem to know who to go to.
Your argument is that DNSSEC has expert support and advocacy because the reference implementation of DNSSEC... exists? Yes: people who work on BIND probably do by and large support DNSSEC.
Google does not generally support DNSSEC. It's easy to find Google security engineers criticizing it, but the most notable example would be AGL's "Why Not DANE In Browsers", which discusses why Chrome stopped supporting DNSSEC.
>Your argument is that DNSSEC has expert support and advocacy because the reference implementation of DNSSEC... exists?
Actually, if you read exactly what I wrote, I merely countered that it's not a famous sentiment, as the comment I was replying to claimed. I said it was a sentiment only seen, at least from my perspective, in particular circles, and really only in passing.
I would consider the fact that Google DNS and named support it by default to be a significant counter to the whole "everyone knows this isn't really a good tool/specification" sentiment. It's also an excellent negation of "it's easy to find Google engineers criticizing it" when their own service, and the company they work or did work for, enables it by default in its DNS service. Indeed, while I thank you for actually linking something I and others can read, the fact that some engineer in the ether -- and not to impugn his work, credibility, or how well he is known in certain circles -- is writing on the subject doesn't carry the same publicity as services and de facto standard software enabling DNSSEC by default.
Here's the important point: because of the above, to the passing eye, or to mere consumers of these tools and services, it looks like a credible specification that would probably be a good idea. There is nothing "famous" about it falling short, or about it being a solution in search of a problem, when everyone who really matters in this space is using it.
Surely you must see that. Again thanks for the reading material, it is very much appreciated.
DNSSEC works perfectly fine for securing requests from end-users. Of course you need to link your application with a DNSSEC-validating library. But that's just a small matter of code.
Things become slightly more complicated when you have to deal with broken home routers. But that code exists as well.
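For what it's worth, a minimal sketch of what that looks like with libunbound (the trust-anchor path is distro-dependent and assumed here):

    /* Build with: cc app.c -lunbound */
    #include <stdio.h>
    #include <unbound.h>

    int main(void) {
        struct ub_ctx *ctx = ub_ctx_create();
        struct ub_result *result;

        /* Root trust anchor; this path is an assumption, it varies by distro. */
        ub_ctx_add_ta_file(ctx, "/usr/share/dns/root.key");

        if (ub_resolve(ctx, "example.com", 1 /* A */, 1 /* IN */, &result) == 0) {
            if (result->secure)
                printf("validated by DNSSEC\n");
            else if (result->bogus)
                printf("validation failed: %s\n", result->why_bogus);
            else
                printf("insecure (no DNSSEC data)\n");
            ub_resolve_free(result);
        }
        ub_ctx_delete(ctx);
        return 0;
    }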
No, it doesn't. DNSSEC is a server-to-server protocol. Your caching resolver, after completing a DNSSEC-signed lookup, signals your stub resolver that the request was "secure" by means of a single header bit. There is no security whatsoever between client resolvers and DNSSEC servers.
What you meant to say was that DNSSEC works "perfectly fine for securing requests from end-users" so long as they run their own DNSSEC cache servers. But of course, nobody is going to do that. They'll run DoH/DoTLS instead.
As another example, OpenSSH can locally do DNSSEC validation of SSHFP records (using the ldns library). Unfortunately, it doesn't really work, but that is a different story. All of the validation code is there.
You don't seem to understand the difference between the resolver in a DNS cache server and a stub resolver. The stub DNS resolver on your desktop computer, as a general rule, does not do lookups from the DNS root all the way down to whatever leaf authority server holds the record it's trying to find. That's the job of caching resolvers --- the ones that live in servers like 8.8.8.8 and 1.1.1.1 (or whatever janky server your ISP or IT department sets you up with). If you're talking to a "DNS server" from your desktop --- like, you know, almost everyone on the Internet does --- you're not doing DNSSEC and, obviously, can't, because you've delegated to your DNS server the job of making all the intermediate requests needed to validate the chain of signatures to the root.
A search term you might find helpful: [DNS header AD bit].
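To see that bit for yourself, here's a minimal sketch, assuming glibc's stub resolver on Linux (link with -lresolv):

    #include <stdio.h>
    #include <resolv.h>
    #include <arpa/nameser.h>

    int main(void) {
        unsigned char answer[NS_PACKETSZ];

        res_init();
        /* Ask via EDNS0 with the DO bit so a validating upstream sets AD. */
        _res.options |= RES_USE_EDNS0 | RES_USE_DNSSEC;

        if (res_query("example.com", ns_c_in, ns_t_a, answer, sizeof answer) < 0) {
            perror("res_query");
            return 1;
        }

        /* This single bit, arriving over an unauthenticated hop, is the
           entire "security" signal the stub gets from its caching resolver. */
        HEADER *hdr = (HEADER *)answer;
        printf("AD bit: %u\n", (unsigned)hdr->ad);
        return 0;
    }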
You can run a local resolver that forwards all queries to 8.8.8.8 while still doing DNSSEC validation, and get both the speed benefit of a caching infrastructure and the authentication of DNSSEC--with only one trusted key.
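For example, a minimal unbound.conf sketch of that setup (the trust-anchor path is an assumption; adjust per distro):

    server:
        interface: 127.0.0.1
        # validate locally against the single root trust anchor
        auto-trust-anchor-file: "/var/lib/unbound/root.key"

    forward-zone:
        name: "."
        forward-addr: 8.8.8.8    # reuse the upstream cache for speed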
Yes, DNSSEC works fine for endpoints if we eliminate globally the concept of a DNS client and a shared cache and just have everyone run their own DNS servers.
Your proposal drastically changes the role of all mainstream endpoint resolvers, which today delegate the recursive part of the lookup to "DNS servers" and which, in the world you're thinking of, would own that lookup completely and use DNS servers, if at all, as mere request forwarders.
Not for nothing, but even the IETF doesn't want you to do this.