[ISN] Reporting Vulnerabilities is for the Brave

InfoSec News isn at c4i.org
Tue May 23 01:22:19 EDT 2006


http://www.cerias.purdue.edu/weblogs/pmeunier/policies-law/post-38/

By Pascal Meunier
May 22nd, 2006

I was involved in disclosing a vulnerability, found by a student, in a
production web site running custom software (i.e., we didn't have
access to the source code or configuration information). As luck would
have it, the web site got hacked. I had to talk to a detective in the
resulting police investigation. Nothing bad happened to me, but it
could have, for two reasons.

The first reason is that whenever you do something "unnecessary", such
as reporting a vulnerability, police wonder why, and how you found
out. Police also wonder: if you found one vulnerability, could you
have found more and not reported them? Who did you disclose that
information to? Did you get into the web site, and do anything there
that you shouldn't have? It's normal for the police to think that way.
They have to. Unfortunately, it makes it very unappealing to report
any problems.

A typical difficulty encountered by vulnerability researchers is that
administrators or programmers often deny that a problem is exploitable
or of any consequence, and request a proof. This got Eric McCarty in
trouble - the proof is automatically proof that you breached the law,
and can be used to prosecute you! Thankfully, the administrators of
the web site believed our report and fixed it in record time, without
trapping us by requesting a proof in the form of an exploit. We could
have been in trouble if we had believed that a request for a proof was
an authorization to perform penetration testing. I believe that I
would have requested a signed authorization before doing it, but it is
easy to imagine a well-meaning student not being as cautious (or I
could have forgotten to request the written authorization, or they
could have refused to provide it...). The quick fix also protected us
from being accused of the subsequent break-in: because it happened
after the vulnerability was fixed, it had to use some other means. If
there had been an overlap in time, we could have become suspects.

The second reason that bad things could have happened to me is that
I'm stubborn and believe that in a university setting, it should be
acceptable for students who stumble across a problem to report
vulnerabilities anonymously through an approved person (e.g., a staff
or faculty member) and mechanism. Why anonymously? Because student
vulnerability reporters are akin to whistleblowers. They are quite
vulnerable to retaliation from the administrators of web sites
(especially if it's a faculty web site that is used for grading). In
addition, student vulnerability reporters need to be protected from
the previously described situation, where they can become suspects
and possibly be unjustly accused simply because someone else exploited
the web site around the same time that they reported the problem.
Unlike security professionals, they do not understand the risks they
take by reporting vulnerabilities (several security professionals
don't yet understand them, either). They may try to confirm that a
web site is actually vulnerable by creating an exploit, without ill
intent. Students can be guided to avoid those mistakes by having a
resource person help them report vulnerabilities.

So, as a stubborn idealist I clashed with the detective by refusing to
identify the student who had originally found the problem. I knew the
student well enough to vouch for him, and I knew that the vulnerability we
found could not have been the one that was exploited. I was quickly
threatened with the possibility of court orders, and the number of
felony counts in the incident was brandished as justification for
revealing the name of the student. My superiors also requested that I
cooperate with the detective. Was this worth losing my job? Was this
worth the hassle of responding to court orders, subpoenas, and
possibly having my computers (work and personal) seized? Thankfully,
the student bravely decided to step forward and defused the situation.

As a consequence of that experience, I intend to provide the following
instructions to students (until something changes):

1. If you find strange behavior that may indicate that a web site is
   vulnerable, don't try to confirm whether it actually is.

2. Try to avoid using that system as much as is reasonable.

3. Don't tell anyone (including me), don't try to impress anyone,
   don't brag that you're smart because you found an issue, and don't
   make innuendos. However much I wish I could, I can't preserve your
   anonymity or protect you from police questioning (where you may
   incriminate yourself), a police investigation gone awry, or a
   miscarriage of justice. We all want to do the right thing and help
   people we perceive to be in danger. However, you shouldn't help
   when it puts you at the same or greater risk. The risk of being
   accused of felonies and having to defend yourself in court (as if
   you had the money to hire a lawyer - you're a student!) is just too
   high. Moreover, this is a web site, an application; real people are
   not in physical danger. Forget about it.

4. Delete any evidence that you knew about this problem. You are not
   responsible for that web site; it's not your problem, and you have
   no reason to keep any such evidence. Go on with your life.

5. If you decide to report it against my advice, don't tell or ask me
   anything about it. I've exhausted my limited pool of bravery - as
   other people would put it, I've experienced a chilling effect.
   Despite the possible benefits to the university and society at
   large, I'm intimidated by the possible consequences to my career,
   bank account and sanity. I agree with HD Moore, as far as
   production web sites are concerned: "There is no way to report a
   vulnerability safely".




