
I disagree that it makes zero sense. There are reasonable concerns at play here:

* Security testing is extremely disruptive to production systems, most especially if those systems haven't been hardened in any way. Security testers are not as a rule good at predicting how their tests can screw up a production system.

* No matter how much effort you put into a security program (Google and Facebook put a lot of effort into it; more than most people on HN can imagine), attackers still find stuff. So there's not a lot of intellectual coherence to the idea that open, no-holds-barred research applies a meaningful selective pressure.

* It can be very difficult to distinguish genuinely malicious attacks from "security research", and malicious attacks are already extraordinarily difficult to prosecute.

I'm not saying that the rules as they exist under the CFAA today make perfect sense.




And I say this as someone who has been on both sides, doing both security research and systems administration. Generally, this kind of "pro bono" work isn't telling us anything we don't already know, and since it's not coordinated with the target, it will potentially get system/network/security administrators up at 3am to respond to your probes and will drain company resources. You're also demanding that the company address whatever you find, in short order, when it may not actually be the most important thing to the business -- particularly when you announce it to the world rather than follow responsible disclosure.

When "researchers" then flip around, talk to the press, and don't follow responsible disclosure, what you're dealing with really is a hacking attempt. You're walking up to the doors and windows of a business and jiggling them to see if they're open, taking notes on what kind of locks they're using and how they could be bypassed -- without any kind of approval from the business owner. Then you're turning around and damaging the business by talking to the press about it.

Back when I was more interested in computer security (roughly '94, just like tptacek), I knew that scanning systems I didn't own without permission would get me in trouble. We seem to have devolved a bit in our collective maturity, to the point where we think we can just fly the flag of "security researcher" and that this gives us permission to initiate what look just like attacks on systems.

If you don't own a system and don't have permission to test it, then don't attack it, and don't put the government in the position of trying to distinguish between a foreign government launching attacks and a "security researcher" with pure motives... And don't be too shocked if the government and legal institutions have trouble telling those two cases apart and throw you in jail for 15+ years. The way to avoid that outcome is not to do it. Only attack and probe systems that you own or have permission to attack and probe. Just because you're a "security researcher" who is egotistical enough to think you can save the internet from itself doesn't mean you're going to get treated differently from a foreign national with less pure motives. Stay away from shit that isn't yours (and the security of the entire internet is not your sole responsibility).


For what it's worth: I can't think of anyone who has done "15 years" for CFAA violations. Is there someone who fits that description?

(Don't get me wrong; any prison time for good-faith vulnerability research, no matter how negligent or ill-advised the research is, seems like a travesty).


That was badly phrased. The point I wanted to make was that if you don't want to get into a situation where you're being threatened with 15+ years of incarceration (because prosecutors decide to blindly throw the book at you), then don't give them any reason to. Don't put a judge in the position of having to determine whether you're an ethical grey-hat hacker or a North Korean spy, because you might lose the judge lottery (the sentence may get shortened on appeal, but you'll be rotting in prison in the meantime).

Justice isn't the machine language of a computer that has deterministic outcomes given its inputs. You're asking humans to determine your motives, which will necessarily be subjective. And I'm not willing to put my freedom at the mercy of someone else's subjective determination. When "security researchers" do grey-hat hacking, they shouldn't be too shocked if they're arrested and charged with those kinds of crimes, because they're asking too much of the legal system.

And that doesn't mean I think it's 'right'; I'm totally against that kind of penalty. Even though I think it's wrong to test vulnerabilities and then spin around and go to the press, I see a huge difference, and I think the penalties should be closer to a slap on the wrist (a fine, 30 days in jail, or community service -- not 15+ years in prison).

But I'm not going to put myself into the position of making a judge and the legal system make those kinds of distinctions. What is so important about being able to do that kind of grey hat hacking that you're willing to put your own freedom into that level of jeopardy?


"...this kind of "pro bono" work isn't telling us anything we don't know..."

That's scary right there. If you're deploying something you know has vulnerabilities, you have bigger problems than losing sleep at 3am. Same for operating something you know is vulnerable. You (collectively, not you personally) totally deserve to get up at 3am. It's grossly irresponsible, because what you probably don't already know is how that harmless XSS vuln you know about is really a leaf in a 7-level-deep threat tree that results in information disclosure. I can just imagine that such a cavalier attitude is how the Sony PSN network got owned.

My point stands: attack from Iran, or probe from a researcher (your points in the following paragraph noted and notwithstanding)?

"...If you don't own a system and don't have permission for it then don't attack it."

That's loud and clear, for sure.


> If you're deploying something you know has vulnerabilities ...

Everything has vulnerabilities.


Would "something with known vulnerabilities" be better?

Does everything have known vulnerabilities that are not actively being worked on?


Lots of things are vulnerable to DoS attacks in a multitude of ways. Depending on the business, it's not uncommon to just say "we'll deal with it when it happens."

But, someone asks, what if the business is really really important? Then that's all the more reason to not mess with it.


What's scary is how self-centered you are. How do you know the vulnerabilities haven't already been found by an internal security audit and aren't already in the process of being patched? By disclosing to the media, you are petulantly demanding that the company patch your vulnerability right this instant so you can get the fame and ego gratification from it.

All large companies have vulnerabilities. There's always work that needs to get done, and it always gets triaged according to impact; then people who ideally should have 40-hour work weeks have to start patching code, then it needs to get QA tested to prove that rolling it out won't break everything else, and all of that takes time.

And I have worked for companies that took security seriously and for companies that had laughable security practices. In either situation, having 'help' from external 'security researchers' was not useful. Where companies were run competently, it just meant you caused people to scramble and push solutions before they were ready. Where companies were not run competently, it just caused people to scramble and did nothing to change the underlying shittiness of the company. You are not going to be able to fix shitty companies. It's not your job to stop the next Sony PSN from getting hacked; you can't do that, you should stop thinking you can, and you should stop using that as justification for your own actions.


TLDR: I strongly disagree.

"...especially if those systems haven't been hardened..."

Well, that's just it, isn't it? If the system hasn't been hardened, then presumably it doesn't hold anything of interest and therefore won't be targeted by either friendly researchers or malicious adversaries.

If a system holds value it should be appropriately secured. That must include dealing with attacks as part of business as usual.

As for meaningful selective pressure -- well, then why bother with bug bounties? Even Microsoft, the only organisation at that level with a published SDL [edit: security development lifecycle], offers them now.

SDL ref. http://msdn.microsoft.com/en-us/library/windows/desktop/cc30...

I've had my rant. Will shut up now.


Microsoft has been throwing huge amounts of money at this problem for over a decade, and Microsoft's systems are not perfectly or even (if we take a hard, dispassionate look at it) acceptably secured against serious attackers. And I think Microsoft does a better job on this than almost anyone else.



