Went to an excellent all-afternoon security review workshop given by Michael Howard.  He's the author of Writing Secure Code, which was certainly an eye-opener for me.  While I was already pretty aware of most of the coding issues raised in the book, many other issues, like those related to canonicalization and all the web-specific issues, were definitely new and frightening.

To me the best part of the talks and book was the information on how to defend in depth.  While I would like to pretend that I write perfect (and thus safe) code, it simply isn't true.  In fact, it's not even close.  My code is buggy and, as much as I try, I simply am not capable of writing it otherwise.  However, even though there are code bugs, an attack generally has to get through many levels, and protecting at every one of them significantly reduces the risk to users.  Michael showed us an interesting security bulletin and how security in depth came into play:

1. There had been a security bug pre-Win2k3 that had been patched.  However, even if it hadn't been patched:
2. It only affected IIS 6, which wasn't installed by default in Win2k3.  However, even if it had been installed:
3. The vulnerable IIS service wasn't enabled by default.  However, even if it had been enabled:
4. It required sending the service far more data than the rational maximum it had been capped at.  However, even if the administrator had raised the cap:
5. The process would simply have been killed, because the code had been compiled with buffer overrun checking in place.  That would have resulted in a DoS, but that's less sucky (tm) than having the box execute untrusted code.  However, even if it had been able to run executable code:
6. It would only have had "network service" privileges, far less than "system" or "administrator", substantially limiting the amount of damage that could have been done.
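The layering above can be sketched as a chain of independent checks: an attack only succeeds if every single layer fails, so each additional layer multiplies the attacker's work.  This is purely illustrative (the layer names and the `firstLayerThatHolds` helper are mine, not anything from the actual bulletin):

```cpp
#include <string>
#include <vector>

// Illustrative only: each defensive layer independently gets a chance
// to stop the attack before it reaches the next stage.
struct Layer {
    std::string name;
    bool holds;  // true if this layer stops the attack
};

// Returns the name of the first layer that holds, or "compromised"
// if every single layer fails.
std::string firstLayerThatHolds(const std::vector<Layer>& layers) {
    for (const auto& layer : layers) {
        if (layer.holds) {
            return layer.name;
        }
    }
    return "compromised";
}

// Modeling the bulletin: even with the patch missing, IIS installed,
// and the service enabled, the request-size cap still stops the attack.
const std::vector<Layer> bulletinLayers = {
    {"patch applied", false},
    {"IIS 6 not installed", false},
    {"service disabled", false},
    {"request size cap", true},
    {"buffer overrun check", true},
    {"low-privilege service account", true},
};
```

The point of the sketch is that the early layers all failing doesn't matter as long as any later one holds.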

Ben, one of my colleagues, raised an interesting point.  While many of these protections are quite important, there is a risk associated with them.  Basically, while we can expect these protections to be understood by sys-admins, if we make the process too difficult for home users there could be a backlash, with frustrated users seeking to disable protections because they get in the way too much.  This is a problem that I've noticed when I (or friends) have been running Linux or OS X.  We've become desensitized to the "type in your password" prompts whenever you need to do an unsafe action.  I've typed it in a few times and then a minute later gone... "wait! what did I just do that for?"  So far I don't think I've done anything unsafe, but so far I don't think I've been attacked either. (phew!)

Because of this it's pretty important for software vendors to understand the protections that will be in place and to make sure their software plays nicely with them.  For example, we've had to do that with the remote debugger.  Because the firewall is enabled by default, you will (expectedly) not be able to remote debug out of the box.  The remote debugger could have just failed with something cryptic like "could not connect", which would leave you wondering "is the machine accessible?", "is the network screwed up?", etc.  Instead, you now get a message telling you what the problem is ("the firewall is enabled") along with options to change the settings.
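The difference between the two failure modes can be sketched roughly like this (the error codes and function names here are entirely made up for illustration; this is not the real debugger API):

```cpp
#include <string>

// Hypothetical connection failure reasons; not the real debugger's codes.
enum class ConnectError { None, FirewallBlocked, HostUnreachable };

// The old behavior: one cryptic string for every kind of failure,
// leaving the user to guess which of many possible causes applies.
std::string crypticMessage(ConnectError) {
    return "could not connect";
}

// The improved behavior: name the actual problem and suggest a fix.
std::string actionableMessage(ConnectError err) {
    switch (err) {
        case ConnectError::FirewallBlocked:
            return "the firewall is enabled; open the remote debugging "
                   "port or change the firewall settings";
        case ConnectError::HostUnreachable:
            return "the remote machine could not be reached; check the "
                   "network and the machine name";
        default:
            return "connected";
    }
}
```

The design point is just that diagnosing the specific failure costs a little more code but saves the user a round of guessing.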

There were a lot of other interesting things in the talk, like how to do whitebox/blackbox security testing and how to do threat models and security planning.  It was a lot to digest, but absolutely essential, if for no other reason than it made me even more aware of how thinking about security and privacy needs to happen at every phase of development.

On a disappointing note, it was pretty awful how many security issues across the entire industry are related to buffer overruns.  I don't know about you, but every time I see a stack-allocated buffer I get the heebie-jeebies.  String functions and people manually manipulating pointers to buffers and strings also scare the hell out of me.  Come on people, just use std::string or ATL::CString.  You'll be happier that you did!  (or just use managed code *cough* C# *cough*)
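To make the contrast concrete, here's a minimal sketch (the `makeGreeting` function is just an example of mine, not code from the workshop):

```cpp
#include <string>

// Risky pattern (shown only as a comment so nobody copies it):
//   char buffer[16];
//   strcpy(buffer, userInput);  // overruns the stack if userInput
//                               // is 16 characters or longer
//
// Safer pattern: std::string tracks its own length and grows to fit,
// so there is no fixed-size buffer to overrun.
std::string makeGreeting(const std::string& userInput) {
    std::string message = "Hello, ";
    message += userInput;  // grows as needed, no cap to exceed
    return message;
}
```

With the std::string version, even a pathologically long input just produces a long string rather than smashing the stack.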

Of course, this is just one attack area (albeit a highly noticeable one, given how the slightest mistake can lead to absolutely devastating results), and there are other areas that will get you even if you do use managed code.  If there's one incredibly useful thing this has done, it's making me realize that education is incredibly important.  And I'm talking about my own education here.  If someone wants me to do some web development for them, I now know that not only am I not qualified, there is an enormous number of security issues out there that I just don't know how to deal with.  If it were ever necessary for me to work in that area, I would definitely not fool myself, and I would get a hell of a lot of training before proceeding.