Doing info-sec for a small company has its highs and lows. Your organization is too small to afford market leaders, so you buy software from niche-market vendors. Software that you know is less tested, and so you feel more obligated to do due diligence in your own internal assessments. That is somewhat of a 'high' point if testing apps is something you enjoy.
If that was all, a gig like mine would be pretty nice. There is nothing more fun than finding a big-ass flaw in a piece of software (that you didn't write, that is). From that initial hunch, to that little thing you noticed that made you think you were onto something, to the cigarette breaks where you ponder new avenues to approach it from, right up to that pinnacle when you finally find that big vuln. The feeling is a bit universal too, whether testing a web app, some java servlet, or a full-on daemon.
To me, if I search a piece of software on OSVDB.org and I find nothing, then I'm nearly guaranteed that nobody has _really_ looked at this thing, or at least not published their findings. So yeah, some of the work is great. The fun tends to stop right there though.
For instance, I recently found a vulnerability in an app used throughout my employer's environment for "super-sensitive" stuff. The application isn't vastly deployed software that everyone has heard of, but it isn't phpMyTaintStain either; it's someplace comfortably in between. Absolutely no mention of it on OSVDB. I was convinced it wasn't perfect, and I was also a bit excited to start testing it as I hadn't ever tested a java applet/servlet-based application thoroughly.
It turned out to be way easier and much more like a webapp than I had thought it would be originally. After a couple hours of poking, I found a huge unauthenticated confidentiality hole. Once the euphoria wore off, I realized I had a big problem on my hands. I had to tell my employer's app owners and we had to assess risk and make a decision on what to do about it. After some quick meetings with stakeholders, we decided to severely limit access to the thing while we worked with the vendor.
The vendor refused to acknowledge it was a security issue. Odd, considering most everyone who sees the issue unmistakably agrees that it is not acceptable. Now I'm forced to play hardball, yet nobody wants to fully disclose and destroy relations with this vendor, whose software is somewhat relied on. Meanwhile, I know there are hundreds of institutions, small and large, using this software who have no idea that it has flawed security and who would probably not find the risk acceptable. What can I do? Nothing. Oh well, sucks to be them.
Thoroughly annoying, and brought back similar sour feelings that I've experienced countless times.
I've had a vendor tell me to put a webapp firewall in front of their software. Did they offer to pay for it? NO. That would be like Toyota telling its customers to buy ejector seats (UNSUBSIDIZED ejector seats, that is) to resolve the accelerator problem in their vehicles.
I've had other vendors DEMAND I spend time helping them understand the issue, basically consulting for free for them. Have you ever knocked on a neighbor's door to tell them they left their headlights on? Did they then require you to cook them dinner? Exactly...
I've even been accused of being a spy for a company's competition (true... ask Jericho)...
ME: "Hi, you left your headlights on."
NEIGHBOR: "WHO SENT YOU? DID MY EX-WIFE SEND YOU? ARE YOU SLEEPING WITH HER?"
Then there are other vendors who make life easy. You send them an issue, you get a number back, six months later they tell you the patch just came out. No attachment, no awkwardness, no cuddling, just pure, hardcore... err...
Most vendors I've dealt with are nothing like the previous love story. Most are nightmares.
All these hassles are endured for doing things "responsibly". In 90% of all cases where I've done things "responsibly", it has cost me ludicrous amounts of time, cost my organization money, cost my friends and co-workers time, and created a lot of stress and frustration. About 70% of the time I've done things "responsibly", the problem was never even corrected. And yet, had I gone and simply anonymously posted the findings on the Internet, chances are that the issues would have been resolved in all cases with none of the frustrations.
So what is actually responsible or ethical? The lines are blurred quite a bit. The "responsible" method is also the "painful", "expensive", and often "ineffective" method that gets little resolved for exponentially more work, time and money. Is all that waste not irresponsible? What about all of the other organizations unknowingly affected by things I've found, organizations who never got a heads-up, much less a patch, because my attempts at "responsible" disclosure failed? How is that in any way "responsible" or "ethical"? Sure, you could say "disclose responsibly to responsible vendors", but you don't know who is going to handle your findings properly until you start the process. At that point you've lost your anonymity and all the simplicity that comes with it. The minute your disclosure is tied to your organization, you have all manner of legalities and politics to deal with, both internally and externally.
I'm losing my patience with doing things "responsibly".