One of the comments on my last post asked how someone could be part of the solution, as opposed to part of the problem. Here are some thoughts on the issue, based on my experience both finding problems from the outside and finding and fixing them from the inside. First, a note on reporting channels. If you're reporting some sort of widespread issue, like phishing, just send mail to the site's contact address, and don't expect many replies – the site involved probably has a lot of complaints already. Speaking of contact addresses, there was a proposal to have every company take reports at security@ [whatever company]. That doesn't work for Microsoft, since that alias reaches the physical security people, not the software security people. On the other hand, I think it's extremely well known how to report things to Microsoft: secure@microsoft.com.

There are a lot of angles to this. Is the vendor used to dealing with reports? What type of problem is it? Design flaws and implementation flaws are different classes of problem. Is the flaw in shipped software, or in an operational site?

The first thing to do is to not be one of the people creating problems: write solid, secure code yourself, and encourage anyone you work for or with to do the same. If what you've found is an operational problem, be careful, for a lot of reasons. Your well-intentioned attempt to help out some provider could land you in legal trouble. There was a case like this in the UK recently, and from what I've heard, the guy was just trying to help (I don't have any direct knowledge, wasn't there, etc.). At any rate, poking at people's web sites is about as welcome as walking into a stranger's house and yelling that they left the door unlocked. Some people will just go fix the problem; others may call the authorities. As an aside, you could be stumbling onto a honeypot. In the physical world, it's still a crime to drive off in a car whose keys were left inside; see http://baitcar.com for some amusing and appalling videos. Another issue is that I have seen tools meant to check for SQL injection take web sites completely down. If you take someone's business down, you've cost them money.
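On the "write solid, secure code yourself" point, here's a minimal sketch of the difference between a SQL-injectable query and a parameterized one, using Python's built-in sqlite3 module. The table, column names, and input string are made up for illustration.

```python
import sqlite3

# Hypothetical user table with two rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

evil = "alice' OR '1'='1"

# Vulnerable pattern: pasting user input into the query string. The
# quote in the input rewrites the WHERE clause to match every row.
vulnerable = conn.execute(
    "SELECT count(*) FROM users WHERE name = '%s'" % evil
).fetchone()[0]

# Solid pattern: a parameterized query treats the input as data, so
# the odd-looking name simply matches nothing.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (evil,)
).fetchone()[0]

print(vulnerable, safe)  # the injected query matches all rows; the safe one, none
```

The same input that matches both rows through string formatting matches zero rows as a bound parameter, which is the whole fix in one line.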

If you stumble on a problem in shipping software, start by coming up with a clear demo that shows the problem; there's no need to add exploit shellcode to it. Then be prepared to work with the vendor. Fixes can take a long time to come out if you've found a really bad problem, and I've seen patches ship in anywhere from a few days to over a year. If what you've found is a design flaw, it might take a full release to solve; be patient, and try to have a constructive dialog with the vendor. It can also take a while for widely used software to get a patch out. Much of the time we spend getting patches out goes to testing, and to getting something ready for all the languages we support.
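As a sketch of what "a clear demo" can look like, here's a hypothetical proof-of-concept generator for an imagined parser that crashes on overlong input. The filename and length are assumptions for illustration; the point is that a file full of a recognizable byte pattern is enough to demonstrate the bug, with no executable payload in it.

```python
# Hypothetical PoC generator: writes a run of a recognizable 0x41 ('A')
# pattern to a file. Feeding that file to the (made-up) vulnerable
# parser would be enough to show the crash -- no shellcode required.
PATTERN = b"A"   # 0x41, easy to spot in a crash dump
LENGTH = 300     # assumed to be well past the vulnerable buffer's size

def write_poc(path="poc.txt", length=LENGTH):
    data = PATTERN * length
    with open(path, "wb") as f:
        f.write(data)
    return data

if __name__ == "__main__":
    write_poc()
```

If a crash dump then shows a register or return address full of 0x41414141, it's obvious the input reached something it shouldn't have, and that's all a vendor needs to confirm the report.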

Once the vendor does create a patch, there's the question of how much information to give out. One of the worst things you can do is put code in public that can easily be turned into an attack. If you found something fairly new and interesting, create a demo app you can use to show it off; that way you can teach everyone what you learned without putting anyone at risk. If you found yet another integer overflow that leads to a buffer overrun, that really isn't exceptionally interesting, and I'd tend not to say much in those cases beyond noting that a problem was found in some software and that the vendor fixed it promptly.
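For readers who haven't seen the pattern, the "integer overflow that leads to a buffer overrun" usually works like this: an allocation size is computed from an attacker-controlled count, the multiplication wraps around, the allocation comes back tiny, and the subsequent copy overruns it. Here's a sketch of the arithmetic, simulating 32-bit unsigned C math in Python; the function names are made up for illustration.

```python
def alloc_size(count, elem_size=4, bits=32):
    """Simulate the C expression `count * elem_size` in unsigned
    32-bit arithmetic, as a vulnerable allocator would compute it."""
    return (count * elem_size) & ((1 << bits) - 1)

def safe_alloc_size(count, elem_size=4, bits=32):
    """The fix: reject counts whose product would wrap."""
    limit = (1 << bits) - 1
    if count > limit // elem_size:
        raise OverflowError("count * elem_size would overflow")
    return count * elem_size

# An attacker-chosen count of 0x40000001 makes count * 4 wrap to 4:
# the program allocates a 4-byte buffer, then tries to copy roughly
# 4 GB of data into it.
assert alloc_size(0x40000001) == 4
```

The checked version turns the same input into a clean error instead of a tiny allocation, which is why the fix for this whole class of bugs is a bounds check on the multiplication, not on the copy.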

There are lots of ins and outs to this one, and I'll probably blog more later; got stuff to do before it starts raining (again). Oh, one more thing: there's the RFP Policy. I'm not sure just where to find it; I'll try to find a link. At any rate, it's a pretty reasonable approach to responsible disclosure. The goal, in my mind, is to get the software improved and customers protected, without harming anyone in the process.