I raised questions about the potential risks of vulnerability research in this blog entry here. Specifically, I asked about reverse engineering and its implications under IP/trademark/copyright law, but the focus was ultimately on the liabilities of the researchers engaging in such activities.
Admittedly I'm not a lawyer, and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies -- both online and offline -- to my request for opinions on the matter.
I was contacted by whitehats, grayhats and blackhats regarding this meme, and their responses diverged along legal, political and ideological lines.
KJH (Kelly Jackson Higgins -- hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it. Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.
What is interesting is the contrast between how folks responded to my post and these comments based upon the CSI working group's findings:
In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website's owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.
I believe we've all seen the results of overly-litigious responses from companies whose products or services have been the subject of disclosures -- for good or bad.
Ask someone like Dave Maynor if the pain is ultimately worth it. Depending upon your disposition, your mileage may vary.
That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.
"[Web] researchers are terrified about what they can and can't do, and whether they'll face jail or fines," says Sara Peters, CSI editor and author of the report. "Having the perspective of legal people and law enforcement has been incredibly valuable. [And] this is more complicated than we thought."
This sort of response didn't come across at all from the folks who responded, privately or publicly, to my blog; most replies were just the opposite, stated with something of a sense of entitlement and immunity. I expect to query those same folks again on the topic.
Check this out:
The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher's own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.
Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors -- such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug -- may factor in as well, according to the report.
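To put just one of the activities the report lists in concrete terms: a bare-bones TCP port scan -- the kind of probe that, per the DOJ representatives above, might not alone make a case -- can be sketched in a few lines of Python. This is purely my own illustrative helper (`scan_ports` is not from the report or any particular tool), and real scanners like nmap are vastly more sophisticated; the point is only how trivially low the technical bar for this "research method" is.

```python
# Minimal sketch of a TCP connect scan. Hypothetical helper for
# illustration only -- not from the CSI report or any real scanner.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

And that, of course, is exactly the rub: a handful of lines any curious person could write, yet one of the factors a prosecutor might weigh.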
Full disclosure and to whom you disclose it and when could mean the difference between time in the spotlight or time in the pokey!