[ISN] Re: When security researchers become the problem
isn at c4i.org
Mon Aug 8 01:03:57 EDT 2005
Forwarded from: security curmudgeon <jericho at attrition.org>
: When security researchers become the problem
: July 27, 2005, 12:03 PM PT
: By Mary Ann Davidson
: There's a myth about security researchers that goes like this: Vendors
: are made up of indifferent slugs who wouldn't fix security
: vulnerabilities quickly--if at all--if it weren't for noble security
: researchers using the threat of public disclosure to force them to act.
A myth born out of a lot of (perceived) truth. Trying to broadly paint
this as a myth is a disservice to all the security researchers out there.
: The reality is that most vendors are trying to do better in
: vulnerability handling. Most don't need threats to do so, and some
: researchers have become the problem.
Most implies 'more than half' to me. That was not the case back when I
was a POC between a security company and vendors on newly found
vulnerabilities. Sun, HP, and IBM were atrocious at the time: they
only responded to threats, were slugs, and wouldn't fix things quickly
even after the threat of public disclosure. The second it *did* hit
the mailing list, oh-my-god-it's-amazing how fast they could patch.
: 1. You should be able to fix this in two days. Some researchers think
: they can push vendors to work faster by threatening to "tell all," and
: that if vendors really tried, they could meet the researchers' arbitrary
: 5-day, 15-day or 30-day "fix window." In reality, many of the best
: researchers aren't the ones you hear a lot about, because discretion is
: their stock in trade.
How about the arbitrary 650 day "fix window"? Some big vendors will
release an entirely new version of their OS in two years!
While you are babbling the corporate line, care to comment on 650 days
elapsing without a patch after a researcher shares a vulnerability
with Oracle? Make sure you word your response carefully.
: In reality, when a researcher reports a vulnerability, the fix might be
: a two-line code change and take 20 minutes to do. However, getting the
: fix in customers' hands often takes weeks. Remediation may require the
Weeks?! *Years*, you ignorant pop tart.
: A two-line code change can take five minutes, but getting a fix into
: customers' hands in such a way that they will apply it takes way more
: than a few minutes.
And explaining this to researchers is a good start. Explaining it
without being condescending, or assuming they won't understand, is
even better. Most researchers only ask that the vendor keep them in
the loop on patch progress. Hearing "we're still working on it, bear
with us" every week or so is enough for most.
: Also, notoriety can backfire: I've known customers to terminate
: contracts with researchers for releasing exploit code. Researchers, you
: might get applause from hackers when you show off at Black Hat, but
: businesses will not pay you to slit their throats. With knowledge comes
I call bullshit. Releasing exploit code != slitting a customer's
throat. To me, slitting a customer's throat means releasing
information in violation of an NDA. For the few who *do* this, yes, it
no doubt negatively impacts their career and hurts their reputation.
However, I believe this to be a very tiny minority, as described above.
: 3. I should always get credit for vulnerabilities I find. Most vendors
: credit researchers who report vulnerabilities so that researchers will
: continue to work with them. Also, saying "Thank you for working with us"
: is just good manners. The myth is that researchers are always entitled
: to credit.
As much as vendors are entitled to early warning of the vulnerability =)
: In reality, when a researcher puts customers at risk by releasing
: exploit code for a vulnerability before the vendor has had a chance to
: fix it, it's ridiculous to expect the vendor to say, "Thank you for
: putting our customers at risk." I've never had a customer ask us for
: exploit code or exploit details, though they do want enough information
: to do a risk assessment.
Read full-disclosure or bugtraq, pop tart. There have been a dozen
posts from admins who manage Oracle installations that specifically
ask for more details and/or a PoC so they can adequately and
accurately assess the risk to their environment. To think this is
unreasonable is a joke given the vague nature of the Oracle
advisories. Oftentimes they list multiple distinct vulnerabilities
(based on the Oracle ID assigned) and give NO other way to
distinguish between them.
April 2005, for example:
OCS18 - Calendar - Network (CALENDAR) - None - Difficult - Limited
OCS19 - Calendar - Network (CALENDAR) - None - Difficult - Wide
OCS20 - Calendar - Network (CALENDAR) - None - Difficult - Limited
OCS21 - Calendar - Network (CALENDAR) - None - Difficult - Limited
OCS23 - Calendar - Network (CALENDAR) - None - Difficult - Wide
Gee, thanks! And the ones that specifically say "trivial" to exploit?
You expect any admin or security researcher to be able to figure out
whether that is accurate, or just how wide the impact might be? You
think they should immediately patch production machines over such
notices? Of course they do anyway, because every single advisory
contains multiple remote-code-execution-type bugs, but I digress...
: The reality is that not all researchers are noble-minded, and not all
: vendors are indifferent slugs. The other reality is that the highest
: purpose of everybody in this game should be protecting customers who use
: these products from harm.
And releasing software with Oracle's track record is a good indication
that customer protection runs a distant second to the bottom line.
Oracle Corp. Chairman and Chief Executive Officer Larry Ellison said
Thursday that Oracle software remains unbreakable and mocked a memo sent
this week by arch rival Bill Gates stressing to Microsoft Corp.'s
employees the importance of security in the company's products.
"Microsoft isn't good at security. We're good at that.." -- Larry Ellison,