US cybercrime laws being used to target security researchers (theguardian.com)
273 points by wglb on May 29, 2014 | 83 comments



Note that this concerns the subset of security research that involves actively talking to computer systems owned by other people, presumably in production, on the public Internet.

Most security research does not in fact work this way. Consider, for instance, virtually any memory corruption vulnerability; while it was once straightforward (in the 90s) to work out an exploit "blind", today, researchers virtually always have their targets "up on blocks", connected to specialized debugging tools.

I am a little surprised that we are only now hearing about high-profile researchers getting dinged for actively scanning for actual vulnerabilities in other people's deployed systems. It has pretty much always been unlawful to do that.†

(These are descriptive comments, not normative ones. My take on unauthorized testing of systems in production is complicated, but does not mirror that of the CFAA).

It's for this reason that you should be especially appreciative of firms, like Google and Facebook, that post public bug bounties and research pages --- those firms are essentially granting permission for anonymous researchers to test their systems. They don't have to do that. Without those notices, they have the force of law available to prevent people from conducting those tests.

(Background, for what it's worth: full time vulnerability researcher, started in '94.)

Caveat: it does depend on the vulnerability you're testing for. There are a number of flaws you could test for that would be very difficult to make a case out of. But testing deployed systems without authorization is always risky.


Google, Facebook, and now over 70 companies do grant tacit permission for anyone to test their systems, and will pay the researchers for a disclosure, as long as they follow the program rules, which are usually quite reasonable. I'm serious when I say that few people are more thankful than myself for the existence of security bug bounties.

However, one thing has always crossed my mind: since the legal definition of authorization is still very fuzzy, what stops a third party from going after a researcher, even though the company who owns the server which was technically hacked has no interest in filing any complaint against the researcher?

To clarify my question: the recent Brazilian law regarding computer hacking establishes that only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.? My understanding of American law is very weak, but I know that, for some crimes, the victim does not have a say, i.e. the state will prosecute regardless of the victim's will.


> only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.?

After a US law enforcement agency has been notified of a complaint by a victim of a crime, they forward it to a prosecutor. At this point the victim can no longer drop the charges. The only person who can drop the case then is the prosecuting lawyer. They occasionally do drop cases where it doesn't make sense anymore. But prosecutors don't get 'cybercrime' cases very often, and such cases often make headlines, especially these days, so I doubt many lawyers would voluntarily drop that opportunity for their resumes and work on the usual murder or drug trials instead.


There is something really disturbing about a system that allows personal ambition to play such an important role in how the institution of justice operates in practice, at least in specialised matters like this.

Expensive attorneys and ambitious prosecutors, each trying to twist half-truths to more-or-less ignorant judges and juries. Makes me wonder if some of these servants of justice are forgetting that, their specific role aside, as the above description suggests, their common goal is to reach an honest conclusion about whether someone actually did something wrong, which implies everyone's effort to understand in what ways the related actions are harmful and how that harm balances against fundamental freedoms.


Judges and juries aren't ignorant. They just don't give a shit about the things that are important to you. To your typical juror off the street, "hacking" into a computer for ostensibly "white hat" reasons is no different than breaking into a store to "test the alarm system." The reaction is not "oh yes, we have to make sure our legal system is flexible enough to accommodate this sort of 'security research'" but rather, first, "I don't believe you" or, at best, "didn't your mother ever tell you it is wrong to mess with other people's things without permission?"


To your typical juror off the street, "hacking" into a computer for ostensibly "white hat" reasons is no different than breaking into a store to "test the alarm system."

That sounds a lot like ignorance.


There is an obvious difference between "understanding of the facts of a case" and "different value systems used to evaluate those facts"; the ignorance here might be in your assertion.


You don't think the supposed difference in values is the result of ignorance of how the internet works? Or what a security researcher does? Or that security researchers exist as a hobby and profession? Or that the security of the internet at large depends on people who do this? That the every-other-month theft of giant numbers of credit cards or passwords can be prevented if white hat hackers find the security hole first? I would expect most people don't know that big companies like Facebook or Google offer bounties to people who find exploits, or that bugs that threaten the entire internet are routinely found by people who donate their time in order to protect people they don't even know, and who don't know they exist.

The facts of the case: someone broke into a computer system without permission.

The inability to interpret those facts in the light of what a security researcher does isn't a result of different values, but a lack of knowledge of the context. People who don't know how computers or the internet work are open to being told whatever story the prosecution decides to spin.

Edit: I think the ignorance is actually made clear by the example in the GP. Imagine some good Samaritan is walking past a jewelry store after closing time. They notice that the front door is ajar, and upon testing they find that the alarm doesn't go off when they enter the store. So they call the owner and wait in the store until the owner can get there and make sure the store is secure.

Do you think it's likely that this person would be prosecuted? Or, if they were, that the prosecutors and judge would throw the book at them to "make an example"? People understand that scenario and are likely to treat it with leniency in a way that they don't understand the equivalent scenario in computing.

P.S., Always a pleasure to be slapped down by tptacek :)


In your jewellery store example I think it may be reasonable to prosecute the person.

In increasing levels of seriousness:

1. The person is walking by the store and, in the course of their everyday activity, sees that the door is ajar; they then contact the owner. This seems fine to me.

2. The person is walking by the store, sees the door ajar, and then, altering their normal activities, decides to actively test the door to see if they can break into the store; they can, and they then contact the owner. This seems dodgy to me.

3. The person chooses to visit each jewellery store in town to see if any have a door ajar. This definitely seems inappropriate.

The reason I come down opposed to the person in the second example is two-fold.

Firstly, ignoring intent, where do you draw the line on an acceptable level of 'break the security' activity?

- Thinking that the door is ajar and pushing on it?

- Seeing that the lock is vulnerable and picking it?

- Finding a ground floor window and breaking through it with a brick?

The resolution I choose is that if you have gone out of your way to subvert the security of my stuff without my consent then you have crossed the line. Gray is black.

Second, I don't care about your intent. Every security system will break at some point, and so I view the existence of doors and locks as mainly being about roughly outlining the boundaries that I expect to be respected. If I want to improve my security then I'll hire someone to advise me on how to do it. If I come home tonight to find a stranger who has broken into my house in order to prove that it's possible then (1) I already know, and (2) they have just caused the harm which they are nominally trying to protect me against.


They notice that the front door is ajar

But most likely a security researcher will fire off some multiple of a thousand probes to see if the door is open. Collateral damage is likely. This is not what is happening in your jewelry store door case.

"That the every-other-month theft of giant numbers of credit cards or passwords can be prevented": these things can be prevented by the folks in charge paying attention to the alarms going off in the back.


But they should give a shit before they can pass judgement, because these things aren't important just to me, and because there are actual victims involved (who might be different from the accusers).

What if no private data were actually accessed - say, the researcher only compromised his own account?

Or the case where he hacked a device that he bought, violating the manufacturer's acceptable use policy?

Or the case where someone automated the retrieval of data that he already had legal access to - like, if I recall correctly, Aaron Swartz?

All these examples are unique and would fail any physical-world analogy, so they should be examined and judged differently, by people who do give a shit, are willing to make the effort to understand their unique aspects - and are actually able to. I'm not sure that's the case.

My general point is about how we found ourselves in a system where servants of justice, like prosecutors, appear to treat their job "just like any job" (at least in cases they might consider abstract - "hacking" has less clear and direct effects than "murder"), where they can put their careers first and ignore any consequences to others. Or where someone has to bear enormous defense costs to stand a chance, or be coerced to plead guilty or to abstain from exercising what should be his right, out of fear of finding himself involved in such a situation.


The whole point of juries is for them to judge you against the norms of society at large. The fact that a small group of people might be operating under different norms is irrelevant. They don't have to understand your values in order to judge you. All they need to understand are the facts and the law.

The prevailing norm is that property rights are sacrosanct; any invasion of those rights is considered suspicious, and explanations about benevolent intent are disbelieved. There is no general right to "tinker" with other people's property without permission, for fun, for research, or for any other reason. We are not a society that requires security measures to be effective in order to serve as a signal to keep out. A velvet rope is as effective as a steel door for the purposes of signaling that access is not allowed.

This is not a matter of prosecutors putting their careers ahead of the spirit of the law. It's about hackers not understanding that we're a society that requires you to keep your hands to yourself.

NB: I have a beef with the CFAA, but it's not with the spirit of the law, but rather the fact that criminal penalties under the CFAA are totally out of line with those in analogous physical scenarios. The standards for trespass on digital networks shouldn't be higher than the standards for trespass in the physical world. But juries can't do anything about this problem, and judges really can't either. It's Congress's problem for putting the felony escalation provision in there.


Researchers who trespass on a digital network aren't the only ones who are affected, though. Take, for example, this quote from the OP article:

"Lanier said that after finding severe vulnerabilities in an unnamed “embedded device marketed towards children” and reporting them to the manufacturer, he received calls from lawyers threatening him with action. [...] As is often the case with CFAA things when they go to court, the lawyers and even sometimes the technical people or business people don't understand what it is you actually did. There were claims that we were 'hacking into their systems'.

The threat of a CFAA prosecution forced Lanier and his team to walk away from the research."


There's nothing to that anecdote other than a company getting mad about someone exposing defects in its product and its lawyer making a nasty phone call.

The CFAA is vague and over-broad; you won't get any disagreement from me on that. Applying it in a case involving a device you bought and own is totally inconsistent with traditional norms of private property. But those are edge cases. The actual prosecutions people get up in arms about aren't edge cases. They pertain to conduct that clearly violates the norms of trespassing on private property, and hackers justify their actions by saying that those norms shouldn't apply to digital networks. Juries, unsurprisingly, don't buy that. So hackers and the broader tech community call them "ignorant."


You've also got the Sony vs. Hotz lawsuit, where Hotz was forced to back off. Edge cases, maybe, but they demonstrate that not everybody draws the line at the same place.

For you, someone finding a vulnerability in software that provides a network service, hosted on a server he doesn't own, is clearly trespassing on private property - even if he only accesses his own account's data - but finding a vulnerability in the software that comes bundled on a device he bought is not.

For Sony, let's say, both constitute violations of its property - it's its software; it owns it and doesn't care whether the carrier is its server or the device it just sold you. In both cases it only gives you permission to use its software in a certain way, which excludes any sort of hacking.

Maybe the reason many draw the line at the medium is that it's easier to compare a computer network to physical property than to compare a device you have bought (but which carries data you don't own)?

But is it the physical ownership of the medium that carries the data that matters, or the ownership of the actual data being accessed? If it's the medium, why, when the thing the owner really cares to protect is, in almost all cases, the data?

Not trying to argue, just expressing some questions that I think are tricky and deserve more thought than they get. In any case, I think physical and digital property analogies can only take us so far, so I try to steer clear of them.


Sony vs. George Hotz was a civil case in which the CFAA played a small role compared to the numerous other statutes invoked, and that case ended in a settlement.

What we are talking about in this thread is the supposed criminalization of security research. If you're trying to get someone to take the other side of the argument that security research is needlessly legally risky, you're probably not going to find many takers. There is a world of difference, however, between being sued and being imprisoned.


Apologies for drifting the thread out of the CFAA scope; I was never specifically referring to the CFAA, to be honest - sorry if it seemed that I was.


>They don't have to understand your values in order to judge you. All they need to understand are the facts and the law.

That could be said for racist laws just as well (e.g. Jim Crow stuff).

Even if they don't have to "understand his values", they should be made to, and the law is bad in this regard.

Hence, I don't see the point in neutrally pointing out the status quo and what privileges they have. Seems like apologism to me.


You sound like you're drawing a normative conclusion from positive facts (i.e. the is/ought problem).

The fact that juries judge things from a certain perspective says nothing about whether they ought to do so or not.


and more to this...

Usually these cases go to special DA investigation units that invest huge sums of money to determine who was involved and to what extent (think full-blown computer forensic investigations to gather evidence). Even when the cases make absolutely no sense to pursue, they will persist with the sole aim of recovering some of those costs... I know first hand; it's borderline extortion.


"...testing deployed systems without authorization is always risky..."

While what you describe may be the sad reality, it makes zero sense. If a legit researcher, 'specially one that's being transparent about it, researches any domestic system, then that's got to be better than the Iranians, Russians or Chinese doing it (which they do anyway).

But hey, what do we know anyway. There's probably some benefit that makes it preferable for a foreign party to uncover our vulnerabilities without our knowledge.


I disagree that it makes zero sense. There are reasonable concerns at play here:

* Security testing is extremely disruptive to production systems, most especially if those systems haven't been hardened in any way. Security testers are not as a rule good at predicting how their tests can screw up a production system.

* No matter how much effort you put into a security program (Google and Facebook put a lot of effort into it; more than most people on HN can imagine), attackers still find stuff. So there's not a lot of intellectual coherence to the idea that open, no-holds-barred research applies a meaningful selective pressure.

* It can be very difficult to distinguish genuinely malicious attacks from "security research", and malicious attacks are already extraordinarily difficult to prosecute.

I'm not saying that the rules as they exist under the CFAA today make perfect sense.


And I say this as someone who has been on both sides, doing security research and systems administration. Generally, this kind of "pro bono" work isn't telling us anything we don't know, and since it's not coordinated with the target it'll potentially get system/network/security administrators up at 3am to respond to your probes, and will drain company resources. You're also demanding that the company address whatever it is that you find, in short order, when it may actually not be the most important thing to the business -- particularly when you announce it to the world rather than follow responsible disclosure.

When "researchers" then flip around and talk to the press and don't follow responsible disclosure, then what you're dealing with really is a hacking attempt. You're walking up to the doors and windows of a business and jiggling then to see if they're open and taking notes on what kind of locks they're using and how they could be bypassed -- without any kind of approval from the business owner. Then you're turning around and damaging the business by talking to the press about it.

Back when I was more interested in computer security (roughly '94 just like tptacek), I knew that scanning systems that I didn't own without permission would get me in trouble. We seem to have devolved a bit in our collective maturity where we think we can just fly the flag of "security researcher" and that this gives us permission to initiate what look just like attacks on systems.

If you don't own a system and don't have permission for it then don't attack it, and don't put the government in the position of trying to discriminate between a foreign government launching attacks and a "security researcher" with pure motives... And don't be too shocked if the government and legal institutions have issues in distinguishing between those two cases and throw you in jail for 15+ years. The way to avoid that outcome is not to do it. Only attack and probe systems that you own or have permissions to attack and probe. Just because you're a "security researcher" who is egotistical enough to think you can save the internet from itself, that doesn't mean you're going to get treated differently from a foreign national with less pure motives. Stay away from shit that isn't yours (and the security of the entire internet is not your sole responsibility).


For what it's worth: I can't think of anyone who has done "15 years" for CFAA violations. Is there someone who fits that description?

(Don't get me wrong; any prison time for good-faith vulnerability research, no matter how negligent or ill-advised the research is, seems like a travesty).


That was badly phrased. The point I wanted to make was that if you don't want to get into a situation where you're being threatened with 15+ years of incarceration (because prosecutors decide to try to blindly throw the book at you), then don't give them any reason to. Don't put a judge in the position of having to determine if you're an ethical grey-hat hacker or a North Korean spy, because you might lose the judge lottery (which may be shortened on appeal, but you'll be rotting in prison in the meantime).

Justice isn't the machine language of a computer that has deterministic outcomes given its inputs. You're asking humans to determine your motives, which will necessarily be subjective. And I'm not willing to put my freedom at the risk of someone else's subjective determination. And when "security researchers" do grey-hat hacking they shouldn't be too shocked if they're arrested and charged with those kinds of crimes, because they're asking too much of the legal system.

And that doesn't mean that it's 'right'; I'm totally against that kind of penalty. Even though I think it's wrong to test vulnerabilities and then spin around and go to the press, I see a huge difference, and think that the penalties should be closer to a slap on the wrist (a fine, and 30 days in jail / community service kind of penalty -- not 15+ years in prison).

But I'm not going to put myself into the position of making a judge and the legal system make those kinds of distinctions. What is so important about being able to do that kind of grey hat hacking that you're willing to put your own freedom into that level of jeopardy?


"...this kind of "pro bono" work isn't telling us anything we don't know..."

That's scary right there. If you're deploying something you know has vulnerabilities you have bigger problems than losing sleep at 3am. Same for operating something you know is vulnerable. You (collective, not you, personally) totally deserve to get up at 3am. It's grossly irresponsible, because what you probably don't already know is how that harmless XSS vuln you know about is really a leaf in a 7-level deep threat tree that results in information disclosure. I can just imagine that such a cavalier attitude is how the Sony PSN network got owned.

My point stands. Attack from Iran or probe from a researcher (your points in your following paragraph noted and notwithstanding)?

"...If you don't own a system and don't have permission for it then don't attack it."

That's loud and clear, for sure.


> If you're deploying something you know has vulnerabilities ...

Everything has vulnerabilities.


Would "something with known vulnerabilities" be better?

Does everything have known vulnerabilities that are not actively being worked on?


Lots of things are vulnerable to DoS attacks in a multitude of ways. Depending on the business, it's not uncommon to just say "we'll deal with it when it happens."

But, someone asks, what if the business is really really important? Then that's all the more reason to not mess with it.


What's scary is how self-centered you are. How do you know that the vulnerabilities haven't already been found by an internal security audit and that they're in the process of being patched, and that by your disclosure to the media you are petulantly demanding that the company patch your vulnerability right this instant so you can get the fame and ego gratification from it?

All large companies have vulnerabilities; there's always work that needs to get done, which always gets triaged according to impact, and then people who ideally should have 40-hour work weeks have to start patching code, then it needs to get QA tested to prove that rolling it out won't break everything else, and all that takes time.

And I have worked for companies that took security seriously and worked for companies that had laughable security practices. In either situation, having 'help' from external 'security researchers' was not useful. In the case where companies were run competently, it just means that you cause people to scramble and push solutions before they're ready. In the case where companies were not run competently, it just causes people to scramble and does nothing to affect the underlying shittiness of the company. You are not going to be able to fix shitty companies. It's not your job to stop future Sony PSN networks from getting hacked; you can't do that, and you should stop thinking you can, and stop using that as justification for your own actions.


TLDR: I strongly disagree.

"...especially if those systems haven't been hardened..."

Well that's just it, isn't it? If the system hasn't been hardened then it wouldn't hold anything of interest and therefore wouldn't be targeted by either friendly researchers or malicious adversaries.

If a system holds value it should be appropriately secured. That must include dealing with attacks as part of business as usual.

As for meaningful, selective pressure - well then why bother with bug bounties? Even Microsoft, the only organisation at that level with a published SDL [edit: security development lifecycle], offers them now.

SDL ref. http://msdn.microsoft.com/en-us/library/windows/desktop/cc30...

I've had my rant. Will shut up now.


Microsoft has been throwing huge amounts of money at this problem for over a decade, and Microsoft's systems are not perfectly or even (if we take a hard, dispassionate look at it) acceptably secured against serious attackers. And I think Microsoft does a better job on this than almost anyone else.


How would you feel if I broke into your place of business, made a list of all of the things you were doing that were out of compliance with federal and state laws and regulations, then left you my card and offered to let you hire me to do legal compliance work for you?


"Broke in" rather presupposes the point.

If we're analogizing, an exterminator seeing rat droppings in your restaurant and offering to solve your problem, rather than letting the department of health deal with it, is a slightly more realistic example.


Taking that a step further: it is like an exterminator going around to different restaurants, then crawling under customers' tables while they are eating, saying, "Don't mind me, just looking for rat droppings."

A more legit exterminator would agree to come by while the customers were not there.


Legally, pushing open a closed door is "breaking in." It's precisely how a lawyer would describe someone who opened the door to your business to come in and look around for stuff.


That's not quite a fair analogy. It'd be more like if you go into a bank and see a giant hole in their vault. You tell them about it and they sue you for breaking it. Meanwhile actual criminals come and go as they please, anonymously. The bank's clients are the actual victims, of course; it's not like this just affects the bankers.


No, you've misread the story. Nobody is being threatened simply for observing vulnerabilities. Instead, people are discovering vulnerabilities in popular software, and then exploiting them across thousands of machines to "prove" something everyone already knows: lots of people are vulnerable.


If it was well known that large and highly funded subsets of foreign militaries were roaming around breaking into everyone's businesses and stealing things / exploiting the lack of legal compliance then, yes, I'd be very pleased that someone took the time to both find the mistake and give me the chance to fix it / get it fixed by them before it was used against me with legitimate malicious intent.


That's the kind of thing that's easy to say on a message board and colossally unlikely to match your revealed preferences in reality.


There actually are real-life burglars. That isn't hypothetical. Would you really grant people permission to burglarize your business to demonstrate its susceptibility to burglary?


So if someone finds a remote exploit in OpenWhatever that gives them remote root access, and they publicly disclose that (without having tested it on the Internet... just in their lab), then they are not subject to the CFAA?


Correct.

The CFAA requires access without authorization or exceeding authorized access. Presumably you are an authorized user of your own systems.

It is possible that some vendors may try to use User Acceptance Licenses to further restrict what actions can be taken with their software (even in case where you've purchased it and installed it on your system).

I believe (and would love to be corrected by a lawyer) that even those cases would be pursued civilly, and still not under the CFAA.

This is one of the reasons why, when providing penetration testing/application testing training, we always took great pains to drill into students' heads never to use any of those techniques on systems they do not own. No poking around on your bank's website, etc.

If you knowingly access a system that you do not have authorization for, the owner of the system might not care (or might not notice), but under the CFAA, they can file charges against you.

Reasonable people may disagree what constitutes "exceeding authorized access" (where reasonable people might be your attorney and a prosecutor).


I have no problem with punishing unauthorized access, although the punishment is stupidly severe.

I mean, once you've been sentenced under the CFAA you might as well have had a shootout with the police or killed some people; it makes no difference. Hell, the extra charges won't make much of a difference - you're still facing life.

Does that make sense to anybody?

What is needed, though, is an exception for researchers, and you can define a researcher as anybody who discloses the vulnerability to the owner of the vulnerable system before publishing it publicly. A security researcher is required to disclose the results of his research publicly in order to be considered a researcher.

A regular hacker cannot claim to be a security researcher, since hackers never disclose the vulnerabilities they find to the owner of the system, even if they do sometimes share them publicly with other hackers. It is not in their interest to let the owner of the vulnerable system know they have a problem.


Correct. They are also probably (depending on circumstances) exempt from some provisions of the DMCA that might otherwise allow the research target to employ copyright law to stop them from conducting the research. US Federal Law has provisions that explicitly protect vulnerability research in some cases.


So what happens when the NSA does it without permission and keeps discovered vulnerabilities secret?

Is this setting up a precedent?


I think this article is misleading to British English readers.

> HD Moore, creator of the ethical hacking tool Metasploit and chief research officer of security consultancy Rapid7, told the Guardian he had been warned by US law enforcement last year over a scanning project called Critical.IO, which he started in 2012.

British readers might mistake "warning" for what's known in Britain as a "police caution", which is an extra-judicial criminal prosecution, judged summarily by police, and is also referred to as a "formal warning". Such warnings become part of one's criminal record in the UK and affect things like employment, as they are in effect a criminal conviction (as I understand it, although the UK describes them as "not a criminal conviction but an admission of guilt [after being accused by the police]", which I view as an irrelevant distinction). There is no such system under federal law in the United States. A UK reader might rationally assume "police cautions" are just called "police warnings" or "US law enforcement warnings" in the US. Police cautions are not something most people in the US know about, and most would probably be outraged to learn of their existence. (In effect, the police say you admitted to a crime, so they go around telling everyone who asks that you're a criminal - such as potential employers and landlords.)

At least, to me, that's the implication of the statement.


  > ...judged summarily by police...
Wikipedia claims [1] that the offender can't be summarily judged by the police because they must agree to be cautioned:

  > In order to safeguard the offender's interests, the 
  > following conditions must be met before a caution can be 
  > administered:
  >   * there must be evidence of guilt sufficient to give 
  >     a realistic prospect of conviction;
  >   * the offender must admit the offence;
  >   * the offender must understand the significance of a
  >     caution and give informed consent to being 
  >     cautioned.
Did you mean something else when you wrote "summarily judged?" Or does Wikipedia have this wrong?

[1] http://en.wikipedia.org/wiki/Police_caution#Circumstances_fo...


Only if you think a guilty plea (in a court of law) doesn't result in a summary judgement because the offender agreed to it.


A fundamental difference between online "property" and physical property is that you can never fully protect physical property. Build a stronger wall and someone can use a bigger bulldozer to break it. But build a secure website and you might find that it doesn't get hacked no matter the resources of the hacker. If it does, you have plans to limit the effects.

I wonder if people are so busy rushing to do things online they don't want to pay the cost of strong security, so they let themselves be vulnerable and need laws to protect them. As a few people have said, foreign government hackers aren't bound by such laws and even they can't get in to many sites.

If we stop seeing hackers as guilty people to blame, and think of them as an unavoidable natural presence on the internet, just like data corruption or power failures, then we won't need laws, instead we'll need safety standards and licenses for IT workers just as we do for, say gas plumbers.

Every day, spammers "hack" my web forum by solving the captcha. I don't want to find them and send them to prison. I want to build better defenses to prevent them doing it.


I disagree with people who are drawing direct analogies between someone breaking into your property to test its security and cyber security pen-testing. To me, it's more like giving your money to a bank for safe keeping with the understanding they will protect it, and then wanting to test they are actually fulfilling their promise (e.g. by going to the bank and checking they have solid thick walls, and that entry to the vault is guarded properly). Even that's not a direct analogy, as you'll likely be compensated if the bank loses your money, but you'll rarely be compensated when your personal information is disclosed. I also think there are some interesting questions raised by cloud computing. What if I were to deploy a purposefully insecure honeypot VM or application to the cloud, and an attacker managed to use that to mount an attack on other applications?


There seems to be a fundamental disagreement about the correct analogy.

Is it akin to going to a bank during normal business hours and using lawful powers of observation, i.e., implicitly authorized? Or is it akin to breaking into the bank after it's closed, or otherwise violating some implicit lack of authorization, e.g., going somewhere off-limits, such as trying to secretly enter the vault?

Because I think you'll recognize the inherent danger of allowing people to willy-nilly try and break into banks to "test they are actually fulfilling their promise".


Call your congress critter, form a PAC, and elect one of your own. If you are in a gerrymandered district, join the party that controls that district, and primary the congress critter out.


Take your rational thinking elsewhere. This is the Internet.


Moneyed interests want you to play in their safe playground without rocking the boat, and the legal and technical enforcement is closing in. Slowly, but the ratchet only turns in one direction. I worry that the only reason it hasn't closed in entirely is that smart people exploring is more beneficial to business than not. For now.

Over the last few weeks I've been wondering when the scales will tip and general-purpose computing will die outright. Things that were once considered foregone conclusions about tech are turning out to be accidents of the fact that adoption starts with individuals. How long can tech empowering people continue to outrun the old-school powers using tech to empower themselves?


Does it really only turn in one direction, though?

I hear that kind of talk a lot, usually about taxes and government programs. It seems incredibly depressing, for one thing. It's fundamentally saying that you can never win, just delay the inevitable loss.

Fortunately, it doesn't seem to be true, whether it's taxes or computers. Computers might be getting squeezed a bit now, but there have been far worse periods, followed by better. Go back in time to, I don't know, 1990. You want an OS for your PC? Sure, Windows or DOS? You want a wide-area network connection of some kind? We have a variety of choices for you, ranging from the local phone company to the local phone company, or even the local phone company.

Remember when you had to be careful never to tell the phone company that your second line was for dialup internet, because they'd charge you more if they found out you were going to use it with a modem? Remember when you had to worry that they'd figure it out anyway from your usage patterns, or that they'd just cut you off regardless because you were tying up a line for hours and hours every day?

I don't want to tell you not to fight. Certainly, there are plenty of problems right now, and it's well worth fighting. But we should realize that there are many ways that it can be and has been worse, and that the ratchet really does go both ways when people want it to.


I think my favorite example of things going the other way was when we more or less won the battle on export control laws which restricted the distribution of cryptography.


I momentarily forgot about that! You're right, though. Netscape International Edition, with 40-bit crypto for SSL. Good times.



This is both a comment on vulnerability research and a credible System Of A Down song lyric.


Yeah, it felt kinda trite writing it. I just haven't found a way to articulate the idea without asking myself "Oh, so you're still a teenager getting stoned every day thinking you have thoughts about things, how's that working out for you?" Edit: Maybe I should just lean into it and write a phrack article. I'm sorry, that's a low blow, I enjoyed phrack even when the writing style wasn't my speed.


I'll just note that the biggest "moneyed interests" in the technology industry have more or less waived most of their ammunition to stop research under the CFAA by posting public bug bounties. Not only have they made it much harder to sue researchers, but they also pay strangers to do it.


It makes me wonder who actually likes the CFAA the way it is. Does anybody? I don't see how it's helping anybody. Most of the actually malicious computer intrusions come from outside of U.S. jurisdiction. It's like trying to reduce child labor in China by increasing the breadth of the offense and severity of the penalties in Texas. The next thing you know nothing has changed in China but a father in Texas is facing felony charges for having his son stock shelves at the family business.

Who would actually oppose fixing that? Is it purely a lack of understanding the issue on the part of legislators?


The CFAA exists because during the 1980s, there was a concern that no existing statute would deter purely malicious attacks on systems, or any other attack that didn't fit the narrow definition of wire fraud.

I actually do not have a problem with the CFAA's statutory prohibitions on unauthorized access. They seem eminently sensible to me. Don't mess with systems that don't belong to you.

I do think the CFAA has a grave and dangerous flaw: I think its sentencing makes absolutely no sense. I generally do not believe that computer crimes should have sentences that scale with the iterator in a "for()" loop. In the cases where sentences could reasonably scale along with the magnitude of the attack, the meaningful scaling factor should (and I think typically does, in a sane reading of the law) come from some other crime charged along with the CFAA.


I agree that significantly reducing the penalties under the CFAA would mitigate almost all of the damage it causes, but I don't see how that makes the language any better. It just limits the damage.

"Don't mess with systems that don't belong to you" worked much better in 1980 when typical computers cost a million dollars and were only expected to be used by the employees of the bank or government that owned them, because in that context you know you're authorized when you file a W2 and are issued a security badge.

Once you put systems on the internet for access by the general public it changes everything. "Mess with systems that don't belong to you" is practically the definition of The Cloud. The defining question is no longer who is authorized, because everybody is authorized, so the question becomes what everybody is authorized to do.

The problem is that nobody has any idea what that means in practice. All we can do is make some wild guesses -- maybe SQL injection against random servers of unsuspecting third parties is unauthorized access whereas typing "google.com" into a web browser without prior written permission from Google, Inc. is not. But what about changing your useragent string to Googlebot? What if that will bypass a paywall? What if that will bypass a paywall, but you're a web spider like the real Googlebot? What if you demonstrate a buffer overrun against the web host you use in order to prove their breach of a contract to keep the server patched? Can you charge a journalist for reading a company's internal documents when the company made its intranet server accessible to the internet without any authentication?
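
(To put just the useragent example in concrete terms - a minimal sketch, assuming a hypothetical URL rather than any particular paywalled site - the entire technical difference between "a browser" and "Googlebot" is one self-reported request header:)

    import urllib.request

    URL = "https://example.com/"  # hypothetical page, for illustration only

    # Send the same request twice; the only thing that changes is the
    # self-reported User-Agent string the server is asked to trust.
    for agent in ("Mozilla/5.0", "Googlebot/2.1 (+http://www.google.com/bot.html)"):
        req = urllib.request.Request(URL, headers={"User-Agent": agent})
        with urllib.request.urlopen(req) as resp:
            print(agent, "->", resp.status)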

The answers to these questions depend primarily on which judge is deciding the case. Which is ridiculous, and the hallmark of a bad piece of legislation.


Well, the Weev case showed that accessing unsecured data that doesn't belong to you is punishable under the law.

He was released on appeal over a jurisdictional issue, not over the statute or a misapplication of the law.


> He was released on appeal over a jurisdictional issue, not over the statute or a misapplication of the law.

This is actually why we don't know anything from that case. District court rulings aren't binding on other courts and the appellate court apparently threw out the case without ruling on the CFAA, so there was no precedent created either way.

But if the appellate court had ruled the same way as the district court and created that precedent, I don't think you could reasonably describe that as an improvement in the CFAA situation.


Cory Doctorow's keynote at Chaos Communication Congress (2011) was about this trend. It's not technically feasible to make a Turing-complete system that only does things that the creator likes. All anti-virus and DRM systems are provably limited unless general-purpose computation is removed. https://www.youtube.com/watch?v=HUEvRyemKSg


This is dumb on many levels: threatening a white hat who could be held accountable, and who could instead be hired to do a further audit; and failing to come to grips with the fact that if you are insecure enough to threaten someone, the internet will find out that, rather than fixing the holes in your system, you'd rather use expensive lawyers to intimidate people who, on the whole, are trying to do a good thing for you.

The whole thing about unauthorized access, I'm not sure about. If you get burglarized and live in a worse part of town - because you did not lock your front door - is this your fault or the criminal's? Ultimately the buck stops with you; you would look very stupid arguing that a stranger walked in off the street and pinched your laptop - better yet, if you left your laptop on your front lawn.


Pardon my language, but that is such utter horseshit, and I think you know it.

Just because my door is unlocked, or my digital property is unsecured, you do NOT have permission and should not assume you can take it or access it. That is the scummiest argument I've heard in quite some time. You do not have permission to steal something just because it's super convenient to do so, regardless of whether it is physical or digital.


It in no way diminishes the effect of a crime if a person does not lock their front door. It is not the victim's fault if they did not install bulletproof glass and employ a security guard. If you think differently you have a twisted outlook on life, a sort of might-makes-right view of righteousness.

Such rationale is the rationale of a lowlife. "The front door was unlocked so its their fault I stole from them." "If they didn't want me to steal their lawnchair, they shouldn't have left it unchained on their porch." Nothing is inexcusable with that line of thinking. "If she didn't want to get raped, she shouldn't have been all alone in the middle of the night in a dark alleyway." "If he didn't want to get brutally assaulted, he shouldn't have left such a stupid comment on HN."


Agreed. You don't get a pass for breaking into someone's house just because you say you weren't there to cause any harm. Yes, it's good to be pragmatic and understand that there's always something the owner of the house could have done to help prevent the break-in -- close the doors, lock them, get locks that are harder to pick, install security cameras, etc. etc. -- but the person breaking into your house is still wrong for doing so, even if they claim they just wanted to tell you about the weaknesses of your house's security. I don't see any reason the same reasoning shouldn't apply to digital property.


I think what makes it tricky is that the systems are automated and intent and authorization aren't so clear.

We never call up the owner of a web server and ask them for permission to browse their site. We just connect to port 80 or 443 and go to town. This is universally accepted as authorized use.

Now, say you're running a vulnerable sshd such that if you send just the right bytes, it'll log you in as root without the password. I imagine most will say that this is unauthorized use.

But what's the difference really? In both cases, you're asking the server to do something, and then it does it. In the real world, we have various things to look for. Private dwellings are off limits without an invitation. Elsewhere, a lock means you don't go in, even if it would be trivial to defeat. Or just a sign that says you should stay out. It's not so clear with computers.
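
(For concreteness, a minimal sketch - in Python, against a hypothetical host - of the interaction everyone treats as authorized: browsing a site is just writing bytes to a port and reading bytes back, which is mechanically the same channel an exploit payload would travel over, only with different bytes.)

    import socket

    HOST = "example.com"  # hypothetical host, for illustration only

    # "Authorized" use: ask the web server on port 80 for its front page.
    request = (b"GET / HTTP/1.1\r\n"
               b"Host: " + HOST.encode() + b"\r\n"
               b"Connection: close\r\n\r\n")

    with socket.create_connection((HOST, 80), timeout=5) as s:
        s.sendall(request)
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk

    print(reply.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"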

People have been convicted of a crime for taking a public URL, chopping off the last component, and getting a directory listing from the server. To one side, the fact that you had to edit the URL and the fact that the directory listing didn't look like the rest of the site were enough to establish that as "unauthorized". To the other side, the guy just asked the server, "Can I have what's located here?" And the server replied, "Yep, sure, here you go."

A few weeks ago, there was a story here about a blackjack player who cheated a casino out of a bunch of money. He asked for a dealer who spoke Mandarin. His confederate then asked the dealer in Mandarin to turn certain cards upside down for luck. Normally this would be fine, but the cards at this particular casino weren't quite symmetric on the back, so they could tell them apart. The request would be suspicious, but they used a language the bosses couldn't understand, so they didn't realize what was going on.

In the end, the casino sued the guy for hefty damages. And yet all he did was ask and then receive what he asked.

In many ways, servers are like that dealer. You talk to it in a weird language that the owner can't understand (or he can, but he doesn't listen in on everything) and sometimes you can ask it for something the owner would refuse, but the server/dealer says yes.

So while it's clear that walking off with somebody's laptop just because they left the front door open is wrong, it's much less clear to me where you draw the line with networked computers, and it doesn't look like others have a particularly clear idea either. Given that fundamental lack of clarity, I don't think it's completely unreasonable to characterize these guys as locating spots where access is authorized (and thus legal) but shouldn't be, rather than locating spots where unauthorized access can be gained.


The difference between lawfully entering a 7-11 and trespass is just like the situation you describe with a server. Authorization can be implied, and "unauthorization" can be trivial.

The real problem is that a lock on a door is more obvious than a URL scheme. The government is saying that entering a 7-11 that is unlocked, but walking in backwards, is criminal trespass because that's not what the 7-11 intended for the customer to do. That's nonsense. Implicit authorization in physical property is just so much more straightforward, and the government is trying to maliciously take advantage of the lack of common sense on what is unauthorized, helped along by a Congress that willfully authorizes such action.

And I like your server dealer analogy. The question is whether or not a computer is an agent of its owner and whether its decisions, right or wrong, can be relied upon in business dealings as the actions of its owner.

So what is the digital equivalent of a lock on a door? Must the law explicitly say a lock on a door signifies lack of authorization to enter? Is walking into a 7-11 store backwards implicitly unauthorized?


Comparing requests to a server to edge sorting in blackjack is really insightful. The analogy isn't exact, though, since in the edge-sorting case the player walks away with money, while URL chopping just gives you information.

I should mention that the player involved, Phil Ivey, is probably the most famous poker player in the world. According to Wikipedia, "Ivey is regarded by numerous poker observers and contemporaries as the best all-round player in the world today...his other nickname is 'The Tiger Woods of Poker'." [1]

[1] http://en.wikipedia.org/wiki/Phil_Ivey


Where I come from it does. The police will blame the victim if they left their door unlocked and got burgled. Also, insurance sometimes won't pay out.


A better analogy would be someone entering your house when the door is left open, and then shouting to see if someone is home or if they have left for work - with a view to closing the door for them.


And then someone called the police when they found someone random was in their house without authorization. And the intruder said "I was only there to shut the door for them."


Which parallels white hats getting arrested for legitimate security research.

Hence my analogy stands.


Yes, your analogy does stand. And it stands to reason that the intruder should be punished, and/or sued, for trespass. It is not a legitimate reason to be in someone else's house.

Going around trying to open everyone's doors is a similar analogy for some other security research. And while it's not as clear-cut - in fact it's arguably not a commonly cognizable crime - it certainly is suspicious, and it's reasonable for law enforcement to investigate such activity.


So if I suspect that someone else will steal from a house if the door is left open, (And I have strong evidence for this)

And I see that the door is significantly ajar (one can see valuables through the open door)

And the house appears to be empty,

And the doorway is flush against the sidewalk, where I am walking by on my way somewhere else (the door opens inwards and is not in my way)

If I knock on the door (holding it so as to not make it swing inwards further and hit the wall) and ask if anyone is there,

And receiving no response, close the door,

I should be punished?!?

If I see someone injured and unconscious on a sidewalk, should I just walk around them in order to avoid infringing on their personal space?

What if I have relevant medical experience?

Am I to let them lie there?

If someone (a stranger) is unconscious from drinking alcohol to excess, and is lying on their back, am I to refrain from turning them on their side, and instead allow them to choke on their own vomit and die, so to avoid running afoul of laws intended to protect against pickpockets?

If someone has a problem and is in danger of significant loss, but is unaware of it, and I am unable to inform them of it, but I am able to easily lessen the danger, at no cost to them or any other person, through an interaction that bears some similarity to some action that would be reasonable to forbid due to causing harm, should I not help that person simply because of that similarity?

Edit:

It's possible that I misunderstood what was said somewhat. I'm not sure.


I am just a simple country boy, but my understanding of "White Hat" is about hacking done with explicit permission. The article suggests activities that cross the line.


Exploring open systems is a right; hence the open internet. If you don't want people walking the streets, maybe you should put up a fence and close the door. Since internet protocols provide authorization via login/authentication-token functionality, that's what people should use to provide restricted services - not sic lawyers when someone else's packets land in their networks. RTF-RFCs.





