My first reaction: This never ends, does it?
Second reaction: Security is notoriously hard, nice that people are looking at the code and being thorough, it's for the collective best.
What never ends? Low severity memory safety bugs in a large C codebase? Probably not.
You may think this is a lot. It's not. It largely appears to be a lot because OpenSSL treats a lot of very low impact issues as vulnerabilities. Many other projects would rather argue that such an issue is not exploitable, should never get a CVE, etc.
This is a good sign. OpenSSL is taking security seriously by treating even low impact things as vulns. More severe stuff still happens, but I'm pretty positive there is a decline in severe issues.
Exactly. With C, it will effectively never end. Libraries like that would probably benefit most from using a language with strict static analysis, like Rust. Tools of a similar nature for C do exist, but they are probably harder to apply, and likely not free.
Is HN still really up for that argument? Yes, we know the downsides of C. But the upside is that _we know the downsides of C_.
Constantly complaining about C isn't going to get anyone anywhere. It's been shown over and over again that secure C coding is possible, doable, feasible, and exists in the real world. Some of the most secure software is written in C, and that's not attributed to enough fingers banging at a keyboard over a C program, but to the fact that C forces the programmer who _wants_ to write a secure program to think about all the issues, the universe and everything.
So please, pave the road and show us the safer OpenSSL-rust you have. Until then good luck and thank you for your feedback.
I've spent a 12-year career writing, reviewing and breaking C in high-security products such as HSMs. I have yet to see real-world, secure C. Maybe you could point me towards the numerous real-world examples of people getting this right. I'd prefer scaled-up outcomes here, not just saying 'djb wrote qmail'.
(Incidentally, I don't have a quarrel with C specifically. All memory-unsafe languages expand the range of terrible vulnerabilities available to the engineer. This is not defensible these days, in my view.)
> So please, pave the road and show us the safer OpenSSL-rust you have.
DJB wrote Qmail, then abandoned it like an unwanted child. He might be a good programmer, but he's a really bad maintainer.
OpenSSL was all but abandoned as well. At least now there's a lot of attention being focused on making it better and more maintainable.
There's numerous tire fires out there: ImageMagick, OpenSSL, some Linux kernel drivers, and other projects people just take for granted without pitching in to help fix things.
Re-writing in Rust is a form of helping, and maybe in the process we'll find bugs in the originals or wholesale replace them with something better.
> It's been shown over and over again that secure C coding is possible and doable and feasible and exists in the real world. Some of the most secure software is written in C, and that's not attributed to enough fingers banging at a keyboard over a C program, but because C forces the programmer that _wants_ to write a secure program think about all the issues, the universe and everything.
This reasoning doesn't make sense. Use a language that makes a bunch of security flaws impossible (barring OS and compiler bugs anyway) so that you can concentrate properly on the possible security flaws left. Why deliberately make life hard for yourself if there are other language choices (assuming there are other choices to C in what you're doing)?
Even the best developers make mistakes. When you're picking your stack for security sensitive work you should be picking the stack that minimises the chance of mistakes and the impact of those mistakes. C is at high risk of making mistakes and those mistakes have a high chance of being exploitable.
> Use a language that makes a bunch of security flaws impossible
The implicit assumption here is that the language isn't at the same time introducing a number of other vulnerabilities. Is there a language you would like to suggest?
I'd wager the vast, vast majority of Java, Python and OCaml projects would have significantly fewer security exploits than an equivalent C project. Simply making buffer overflows impossible (as much as they can be) takes a huge burden away from you.
I know it says "partial" implementation, but it's quite good even so, and has had everything I need. It doesn't do SSLv2, but nowadays that's more virtue than vice.
Erlang has an independent SSL implementation as well; it binds to OpenSSL but as I understand it just uses the heavy-duty math parts, not the protocol parts, which are instead written in Erlang.
Non-C SSL implementations in memory-safe languages exist.
Note that "but are they as well road-tested" or similar such things would be moving the goal posts. Not necessarily wrong or bad objections, but different objections. Probably nothing, not even the other C implementations, is as well "road tested" as OpenSSL, but, then again, OpenSSL hasn't exactly passed that road testing with flying colors now, has it?
I completely agree with everything you point out. The argument I am bringing up is not about implementation, it's about "if they didn't use C, they would have less (security) problems": No.
C is the way it is towards secure coding because there are compromises to satisfy other dimensions. <Fancy new language that promises to solve all of C's problems> is either making huge sacrifices in these other dimensions or is at best an experiment.
We understand C's weak points so well (due to being widespread, battle tested, really simple, or what have you), that most of the time it's safer (read: more secure) to tread dangerous well-understood territories carefully than uncharted ones only promising to be safe.
I would agree with you if it wasn't for Rust. If you don't know it yet, I would advise you read up about it - it is ideal for replacing C, being equal at speed and better at safety from common programming errors.
I'm only superficially familiar with Rust, but my understanding was that it was basically "C with strong memory guarantees". I naively thought that its entire purpose was to avoid such trade-offs.
This is a false choice. Go and Rust are not replacements for each other. They both have qualities which make each better suited for different environments.
Rust is actually capable of replacing all uses of C, Go's runtime will generally be an impediment to using it in certain cases.
Also I would argue that there are entire classes of bugs that are still available in Go that are not in Rust, that make it less suited for security. Null pointer exceptions, and unchecked errors are the two that come to mind.
I don't think they were offering up Go as an alternative to C, but rather giving an example of an OpenSSL implementation in a language that isn't C. Rust is mentioned because the prior comment called for the creation of a pure-Rust OpenSSL library as proof that Rust can be used in this context. No such thing exists, but they were trying to offer up a similar example to prove the same point.
I don't know if this has changed, but at one point the author of crypto/tls basically said "this hasn't been audited or carefully looked at; do not use this code". Not exactly a ringing endorsement, even if he's stopped saying that.
It's easy to interpret these arguments as "C sucks, let's use Rust" instead of "C is a very mature language with enormously complicated code-bases in play running mission critical software which is not trivially replaced by something like Rust but which could stand to benefit from building things into the compiler to check for mistakes like the Rust compiler does."
Compiling C code is easy, but auditing C code is hard. Getting Rust code to compile without getting berated about every little thing is hard, but at least then you're confident you've got everything right.
I think the opposite has been shown. It's not really a complaint, it's a reflection on the facts. C is my favorite language, and a spade is a spade.
Is the "Severity: High" issue here the kind of thing that would be caught by Rust's static analysis, though? The bug isn't a case where the language rules are broken, but rather a failure to set a reasonable limit on the resource consumption that a client can cause.
> Libraries like that would probably benefit most from using a language with strict static analysis, like Rust.
Won't help when developers then go out of their way to write a large amount of unsafe code. OpenSSL went out of its way to reimplement the C standard library wrongly, making it impossible to use analysis tools like valgrind or a checked malloc, and generally confusing developers. IIRC it even used malloc (pop) / free (push) as a stack to pass data around at some point.
It's too easy to just blame C; it's their slapdash, design-by-committee kitchen-sink features, lack of process, absence of rigor and generally amateurish "engineering," if it can be called that, of a now-critical security framework. Why isn't every release first run through valgrind and the commercial C bug-finding tools used by embedded / safety-critical systems, and why aren't there proofs of its correctness? They have, and/or could get, the money and resources.
Easy is not the same as necessary. At least high-security projects like OpenSSL could enforce a policy of not using unsafe code, and it's easy to validate that there's no unsafe code or that the few known instances are very well vetted.
You could do the same with C code -- it is getting everyone to follow the policies that is the hard part. Even compiling Rust code in release mode instead of debug removes some essential safety features. It should be clear by now that "policies" just don't work, but no, projects keep repeating the exact same mistake over and over again. :)
The crucial difference is between a policy, which people can fail to follow (and you might not catch them), and a language or compiler restriction, which people can't work around without doing something drastic like shelling out to a new process, which you can catch (in a well-built language).
Is the beauty you see in C inseparable from its unsafety? Are the unsafe parts that other languages don't have the most beautiful ones?
There's a big difference between arguing for safety and arguing for Rust specifically. Everyone likes some languages and dislikes others. Can you design a safe(r) language that you would consider to be beautiful?
What essential safety features are removed? The only thing I can think of this referring to is assertions, and we do the opposite: assert stays in all builds, debug_assert is only in debug builds.
But it can overflow, which will result in a different value that the programmer might expect, which can give an attacker a nice "information leak" or worse. This could be a security bug!
And even wrt memory, Rust (with the standard library) is not free of its own warts, see the OOM situation.
See e.g. https://news.ycombinator.com/item?id=10545877 for a discussion of this.
Just to be clear: I generally like the ideas of Rust. I just dislike its presentation by some as a magic solution to all kinds of concerns.
Since this account seems to have been created solely to violate the guidelines with, we've banned it. We're happy to unban accounts if you email hn@ycombinator.com and we believe you'll post only civilly and substantively in the future.
This kind of resource DoS attack doesn't seem like it's directly related to safe/unsafe. I think you would be just as likely to forget to put a resource limit if you were writing in Haskell.
I maintain a project (http://freeradius.org) which has a similar amount of code and (arguably) similar complexity. While we've had issues, they aren't as numerous as OpenSSL's. And the issues in the latest major version are negligible.
Why? We have a requirement for code quality. The code can't just work, it has to make sense. The OpenSSL people don't seem to care about badly formatted and/or non-understandable code.
All commits MUST build cleanly without warnings on multiple operating systems, and under multiple compilers. We have test cases for a large chunk of the code base. We scan all releases through three different static analysis tools.
Security is important. We make it important because we care. I wish other projects would do the same.
There's definitely some truth to this. FreeRADIUS is just not that valuable a target, since not many people have RADIUS exposed to the world. But if you look at other highly valuable targets like OpenSSH, which is probably in the top 10 if not the most valuable target in the world, you see far fewer significant exploits. It's definitely possible to have better quality than OpenSSL, and a lot of OpenSSL's issues are due to legacy code and what amount to experiments being run in production software.
No one can reasonably say that the practices of the OpenSSL programmers result in secure code. No one can reasonably say that lots of people examining it later for defects is a good idea.
We have lots of legacy code in C. The only sane way to maintain it is tests: unit tests, functional tests, and static code analysis.
> a lot of OpenSSL's issues are due to legacy code
i.e. the OpenSSL people don't care to actively maintain / clean up their software.
1) Which of these two products is more likely to be secure?
a) one which has tests, no build warnings, and is run through 3 different static analysis tools?
b) one which has none of those things?
2) Also, which of these products is more likely to be secure?
a) one which has a lot of third-party analysis?
b) one which has some third-party analysis?
3) Are these two questions the same?
My answer to (3) is "no".
While the best combination of answers would be 1(a) and 2(a), OpenSSL is at 1(b) and 2(a). I'd bet they still have more security issues than FreeRADIUS, which is at 1(a) and 2(b).
I'm a huge fan of FreeRADIUS. I have tried multiple proprietary RADIUS servers and I can honestly say FreeRADIUS is the most flexible, most reliable RADIUS server in the world. Thanks so much for your effort.
Short answer: unlikely to end anytime soon, but likely that we start to focus on higher level bugs more than lower level issues.
Long answer: if Unix and C hadn't won the fight in the 1980s, the way VHS did, we wouldn't be dealing with such low-level bugs. It took 30 years, but we're finally getting CPUs (like lowRISC) inspired by the B5000 or i960, and the languages to go with them are becoming mainstream. To be fair, had we as an industry taken security more seriously decades ago, we would have written critical pieces of the stack in a high-assurance Ada profile and put microkernel-design capability kernels into production. We can extend 1960s kernel designs with all kinds of features, but without a coherent design it's impossible to provide the same assurance. This is why it's great that we have L4 descendants that incorporate a capability scheme and also support a multikernel scheme (a la Barrelfish) for making better use of a cluster of CPU cores.
>we would have written critical pieces of the stack in a high-assurance Ada profile, and put microkernel design capability kernels into production
You're writing on a computer magnitudes faster than the fastest computers of the 70s, with significant compiler improvement to boot.
Until about 10 years ago, hand-crafted assembler was still faster than C, and the speed improvements were actually needed.
This "let's write a text editor that is pretty much an advanced nano/pico in JS and have it take up 145MB" approach is only possible now that everyone has a supercomputer in their hands anyway.
And in other areas, but that's largely because C isn't expressive enough, not because we lack the engineering know-how to compile fast vectorisation code.
We can't burn the world down and remake it. These things WILL happen over time; the next generations are definitely focused on safety and correctness. First, we need to remove unsafe machismo from the landscape and redirect that energy into correctness instead of living with danger. Second, we need to figure out how to partition, segment, and wrap the unsafe legacy stuff into a safe operating regime. The single biggest win would be if we could load a library in another process and make cross-process calls to it.
> The single biggest win is if we could load a library in another process and make cross-process calls to it.
What?
Setting aside the fact that this can already be done in numerous different ways on many platforms, how is this a win?
It's hard enough for developers to write correct code, let alone maintaining the correctness of that code while loading and executing unknown code from other parties (see web browsers, and the last 20 years of security bugs related to plugins, extensions, addons, etc).
The user should be able to load 3rd-party libraries in out-of-process sandboxes, transparently for applications which weren't written with that in mind. I am not asking for new programmer-facing features, but for new end-user / sysadmin features in the loader.
If you don't know what you're talking about, please refrain from commenting. It only proves your ignorance. In the Rails land (as your profile says you are), you wouldn't have these issues, and quite frankly these types of thing seem to be beyond you at this point. You're in no position to criticize the OpenSSL project.
Luckily I moved everything to openbsd's libressl which is /mostly/ compatible.
I wonder if this bug affects them; typically the HIGHs haven't [0].
It really feels like every other week there is a bug in OpenSSL, and after following along with the LibreSSL blog I understand why: the code is an absolute mess [1].
The interesting thing about LibreSSL is that it's got such a halo around it that people just assume it's not vulnerable to any OpenSSL bugs.
Practically every OpenSSL bug posted here gets the standard "Luckily LibreSSL isn't affected by this kind of thing" response. On a couple of occasions I've taken the bait and linked to the LibreSSL source to show that the relevant bits are in fact not changed at all, so they were both vulnerable.
Not saying LibreSSL isn't doing good stuff, I just wish people would actually check if it's affected before using any opportunity to jump on the let's-hate-on-openssl bandwagon.
(I haven't looked into these bugs in the LibreSSL code since I don't have time for it right now, but I'm sure some message is forthcoming).
> On a couple of occasions I've taken the bait and linked to the LibreSSL source to show that the relevant bits are in fact not changed at all, so they were both vulnerable.
That's not a bulletproof way to assess bugs in LibreSSL, because its authors also removed vulnerabilities from general helper functions, proactively fixing issues in the code that uses them.
No one is claiming that LibreSSL has fixed all the OpenSSL bugs, the parent certainly didn't. LibreSSL has historically been vulnerable to fewer of the bugs than OpenSSL and, for a long time, to none of the sev:high bugs.
> LibreSSL has historically been vulnerable to fewer of the bugs than OpenSSL and, for a long time, to none of the sev:high bugs.
You say this like LibreSSL has been out for a long time. It's only just crossed its 2-year mark... and for most of its life, it was difficult to call it "production ready".
Coupled with its low adoption rate (really only some select BSDs, and some adventurous Linux folks), it's not surprising more vulnerabilities haven't been discovered (yet).
The OpenBSD folks do good work, but let's not pretend they are infallible.
Assuming we stay with a library written in C, a better approach would be for everyone to try to move to what's in BoringSSL, which deliberately removed most stuff, more than LibreSSL.
If, on the other hand, we don't stay with C, Ring, which is partially/primarily written in Rust, would be a good replacement.
That said, LibreSSL's TLS abstractions look like a welcome improvement if you need a C API.
BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.
Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it.
> which deliberately removed most stuff, more than LibreSSL
LibreSSL has taken fixes from BoringSSL, but it has the aim of maintaining API compatibility to the point where most applications using OpenSSL should "just work" for the most part. I've not tried building against BoringSSL, but from reading their documentation it seems the API is very much a moving target.
That's right, but I know several project maintainers who are looking at moving to it because BoringSSL's scope is smaller and is sufficient for their projects' needs. That's why I think BoringSSL's API status will evolve with time. And Ring is based on it, so...
Then there's ocaml-tls which aims to provide a drop-in implementation of the OpenSSL API too.
We're finally seeing much activity, and in that sense I'm glad that Heartbleed happened -- though it should have happened earlier :).
Question: on LibreSSL systems, is the binary still named openssl? I only ask because I seem to recall OS X having LibreSSL now but I still have a openssl binary.
OK, now we can go ahead with FreeBSD 11.0-RELEASE.
In my time as security officer, it was a rare and surprising occurrence when we didn't need to hold an upcoming release due to a pending OpenSSL advisory. It got to the point of the release engineer saying "I think we're ready to start the release builds tomorrow, any news from OpenSSL?" and me replying "nothing yet, but I'm sure it will come" -- their timing was absolutely impeccable.
It certainly wasn't when I was Security Officer -- it didn't exist yet.
I believe some people are working on bringing libressl into the FreeBSD base system, but there are some challenges; for example, FreeBSD supports stable branches for 5 years, while libressl follows OpenBSD's "break everything once a year" model.
If you try at all, then you're trying harder than OpenSSL. But that doesn't necessarily help if there has been so much code churn over the past 5 years that we can't figure out how or if we should apply the security patch.
If I understand correctly, this one is relevant to certificate issuers that publish certificate revocation lists via the OCSP protocol; it could be used for denial of service but not for hacking into the certificate issuer, is that correct?
Also, most bugs in OpenSSL seem to occur during renegotiation of protocol zzzz defined by some obscure RFC that nobody really understands how to implement, is that correct? Why can't they simplify these protocols? Do we really need these fancy renegotiation features?
Actually Daniel Bernstein says that over-complicating the protocols is a clever way to make sure that software infrastructure remains insecure.
Simplifying the protocol, making it easier to analyse (and prove), and especially removing legacy things we know are troublesome is the main goal of draft TLS 1.3: already live and rolled out across CloudFlare and Chrome, it removes such things as renegotiation, all non-AEAD ciphersuites, MD5, SHA-1, static RSA, and many of the other classic TLS bugbears that keep yielding interesting vulnerabilities.
Which isn't to say the optional (but usefully fast) 0-RTT mode might not introduce a few more, for those lackadaisical devs who inevitably ignore all dire warnings and abuse it for non-idempotent requests like POSTs (or GETs that do something - bad idea!).
But even if it's not how I would have designed it afresh (the Noise Protocol Framework from Trevor Perrin is how I would have designed a new connection protocol now) it's still a big improvement over previous TLS/SSL iterations and a drastic enough change it probably deserves to be called TLS 2.0.
AIUI this affects the code that handles the TLS extension with which a client tells the server that it supports OCSP stapling, so any server with an affected openssl version would be vulnerable.
Why is the severity of the first vulnerability high? It allows a denial of service on the server's side by a client and nothing else. The second vulnerability seems to be very similar in severity, except that it can be used both against clients and servers, and yet it is of just moderate severity. What am I missing?
It does not appear that SSL Labs (https://www.ssllabs.com/ssltest) tests for this exploit (CVE-2016-6304) yet. Anybody have instructions on how to test your server?
A very popular Chinese anti-virus company which fashions itself after the US freemium model, with advertising on its homepage as its biggest revenue source, and a market cap of $11.42B.
Not a hacker.