[ISN] Bug Bounties Exterminate Holes

InfoSec News isn at c4i.org
Fri Apr 14 02:34:39 EDT 2006


http://www.wired.com/news/columns/0,70644-0.html

By Jennifer Granick
Apr. 12, 2006

Money changes everything. Just when security researchers and software
companies seemed to reach a consensus on the contentious issue of
publicizing information about computer security flaws, businesses that
sell vulnerability information are disturbing the peace.

Last week, at the CanSecWest computer security conference in
Vancouver, Canada, I debated the ways commercialization has changed
vulnerability reporting during a panel discussion that included
independent researchers as well as executives and employees from
Oracle, Novell, Intel, 3Com and iDefense. My conclusion is that more
commercialization means more private control, and that is not a good
thing for security.

A few years ago, hackers and software vendors vigorously argued over
whether researchers should go public with security flaws so users
could protect themselves and demand better products from vendors, or
whether it was better to keep the information quiet so as not to aid
malicious attackers. Eventually, consensus formed around a middle
ground called "responsible disclosure": Researchers would generally
report their discovery of flaws, but withhold details useful to
attackers until after vendors issued a patch.

Meanwhile, vendors would publicly credit the researcher with finding
the flaw. The practice recognized the importance of public disclosure,
but sought to balance it against the danger of providing easy-to-use
tools to wannabes and script kiddies.

The détente has not been perfect. Computer security professionals,
including Oracle's Darius Wiles on our panel, continue to disagree
over how much information adequately informs the public without
helping attackers. Researchers continue to disagree with software
vendors about the amount of time it takes to fix problems in good
faith. And not all researchers or companies adhere to the responsible
disclosure framework, though many do.

Also, as college student and researcher Matt Murphy pointed out, we
ask a lot of the researcher, who performs a valuable and
labor-intensive service in finding bugs, only to hand the information
to the vendor in exchange for nothing more than the promise of a
shout-out.

Into this gap, a new type of security company has emerged: information
brokerage firms that pay researchers a finder's fee for security
holes.

Michael Sutton from iDefense told us that his company, which pays
between a few hundred dollars and $10,000 for a vulnerability, reports
the information first to the affected vendors, then passes it on to
paid subscribers. Terri Forslof's company, 3Com, also pays a bounty
for bugs, and uses the information to improve its TippingPoint
intrusion-prevention system.

I have advised two businesses that had plans to auction
vulnerabilities to the highest bidder on eBay. (After talking with me,
each decided not to take the risk.)

Some vendors have decided to pay researchers directly for bugs. For
example, Mozilla has a Bug Bounty Program that gives researchers $500
and a T-shirt for their finds.

I see real benefits to the public, researchers and vendors from this
trend toward commercialization: An information broker may be better than
the researcher at communicating and working with the vendor. A
reputable broker may have better luck than an unknown researcher in
getting the vendor to take a security problem seriously and deal with
it in a timely manner. Meanwhile, the researcher gets both credit and
financial compensation. The promise of compensation will incentivize
more research, and more research means more bugs are found.

But commercialization can also be dangerous. Foreign governments,
corporate spies, the mafia, terrorists and spammers want
vulnerabilities that no one else knows about and for which there are
no patches. These groups have always been motivated to gain control of
vulnerability information at any price, even before information
brokerage became relatively commonplace.

Some members of the CanSecWest audience worried that commercialization
makes it easier for researchers to sell to the highest bidder, even if
the highest bidder has criminal intentions.

I'm more concerned that commercialization, while it promotes
discovery, will interfere with the publication of vulnerability
information. The industry adopted responsible disclosure because
almost everyone agrees that members of the public need to know if they
are secure, and because there is inherent danger in some people having
more information than others.

Commercialization throws that out the window. Brokers that disclose
bugs to their selected list of subscribers are necessarily withholding
important information from the rest of the public. Brokers may
eventually issue public advisories, but in the meantime, only the
vendor and subscribers know about the problem.

The insiders who know about the flaw could exploit it, attacking those
systems whose administrators remain unaware. Even if that doesn't
happen, the broker business depends on customers feeling the need to
pay for early notification. Toby Kohlenberg from Intel somewhat
rhetorically asked the brokers on our panel whether they expected a
company that wants all the up-to-date security information to
subscribe to multiple brokerage services at a potential cost of up to
$1 million a year.

Now that information brokers pay researchers for information, they are
going to want to control what happens with that information. Michael
Sutton, director of the iDefense Lab, says his company has no plans to
sue researchers or customers who redistribute vulnerabilities without
permission. Unauthorized disclosure, Sutton says, "is part of the
business." But at some point, an information broker that wants to
prevent researchers, customers and insiders from disclosing to
nonpaying members of the public will seek protection in intellectual
property law.

Copyright law can prevent a broker's paying customers from
redistributing a patch to those who have not paid. Trade secret law
can prevent insiders or entities under nondisclosure agreements from
informing the public about a flaw. Patent law can prevent even those
who independently discover the flaw from testing for it or patching
it.

Murphy and some other panelists argued that vendor purchase programs
like Mozilla's work better than information broker programs because
they are the most responsible form of disclosure, and vendors can use
financial incentives to drive research toward the most dangerous
flaws.

Yet, vendors have already demonstrated that they're willing to claim
intellectual property infringement when researchers seek to disclose
vulnerability information about their products. I have represented
security companies that wanted to publish information about a flaw,
but were informed by the vendor that they would be sued for trade
secret violations if they did so. In the criminal case of United
States v. Bret McDanel, a now-defunct internet messaging service
convinced the Department of Justice to prosecute a man who had the
temerity to inform customers that the service was insecure. More
recently, Cisco Systems sued researcher Michael Lynn for disclosing a
flaw in its routers. Cisco asserts that its concern was not for the
company's reputation, but for customers' security.

Regardless, if courts accept the theory that Cisco has property rights
in vulnerability information, it gives fuel to those who want to hide
that information for private gain rather than public good. Now that
vulnerability information is a commodity, there's more pressure for
the law to protect that information as a business asset, rather than
encourage its disclosure in the public interest.

We are already living in a failed, broken computer security market.  
The average customer doesn't have the knowledge to demand better
security, so vendors don't have an incentive to provide it.
Commercialization exacerbates the problem by casting vulnerabilities
as a market commodity -- no different than software or songs.

But it is different. Like clean air or public parks, vulnerability
information is something the public needs. Yet, as with polluters or
real estate developers, private interests are willing to pay big bucks
to ensure that information is useful only to a select few. Vulnerability
disclosure plays a special role in promoting public security. As
vulnerability brokerages grow, policy makers and courts must recognize
that this is not just another information market.

-=-

Jennifer Granick is executive director of the Stanford Law School
Center for Internet and Society, and teaches the Cyberlaw Clinic.
