Steve Lipner here.
A couple of weeks ago, I participated in a panel on the ethics of security vulnerability disclosure at Black Hat in Las Vegas. I believe that I was invited for my role in Microsoft’s Security Engineering and Community team and because I’m a former director of the Microsoft Security Response Center.
The focus of the panel was on handling software vulnerabilities discovered by security researchers, but the discussion really dived into a wide variety of ethical issues around software security. For example, one panelist asked whether it was ethical for a company with billions of dollars in the bank to ship a product with known classes of vulnerabilities. Buffer overruns were mentioned specifically. A member of the audience asked whether, in today's environment, it's ethical for a software vendor to release products without conducting an extensive security review.
At Microsoft, we hear these kinds of ethical questions more often than you would think. All of them tend to come down to two common themes: how much should a vendor do, and how long should a vendor wait, to make a release "secure enough"? Our answer is that we do as much as we can to make our products secure, but we’re always mindful of the need to ship customers a product that not only improves security but arrives soon enough that they’ll actually use it. It is not much more ethical to work forever on a secure product that you never ship and users never use than it is to ignore security altogether.
Back in the 1980s, I led a project to build a system that could be evaluated under class A1 of the Orange Book. We did things like strict layering of the design, writing and verifying formal specifications of the system, and characterizing and removing "covert channels" that could allow the leakage of information from one security classification to another. Boy, was it secure!
Unfortunately, the development process we followed was so rigorous that it took us several quarters to turn around a major design change to eliminate a new class of attacks or fix a performance problem. I eventually made the hard decision to cancel the project because feature enhancements to competing products were moving faster than we were able to design and build our product. By the time we were ready to ship the system, it wouldn’t have been competitive in the market. Even government agencies handling very sensitive information didn't want to pay the performance and feature penalty for a system that secure.
Some folks have also advocated "throwing away the legacy" and building a completely new system unburdened by the constraints of compatibility with legacy code. This is a nice idea in theory because eliminating the constraints of compatibility appears to allow a design and implementation that prioritize security over all. But I believe it's a loser in practice. Even our A1 system was constrained by compatibility with legacy applications (we designed it as a VMM for that exact reason). We were pretty sure that a "clean sheet" incompatible design would find no customer interest at all, and I still believe we were right on that score.
My experience with the A1 system has definitely influenced my work on the SDL. While we do the very best we can, we know that perfection is not achievable. What we do is add steps to a commercially viable development lifecycle that can be accomplished by real developers on a schedule that allows them to ship competitive products. We learn from our mistakes and update the processes as we go, but we never forget that it's important to ship.
What does all this have to do with ethics? Well, I think that given the choice between shipping perfectly secure software (whatever that means) that no customers will use and shipping software with continuously improved security that will actually help customers, the better ethical path is to ship. That's a controversial view in some circles, but it's the view I've reached after working in the field for the last 35 years or so.
It appears that you are using a duty-analysis model for determining what is ethical.
In your example, then, the software producer has an ethical duty both to the consumer of the software and to the folks paying to have the software produced. Balancing these is pretty tricky, as you've indicated. The consumer is paying money for the software and has expectations; the creator is being paid to create....
How do you think the equation shifts if you're either performing the work for free, or giving away the software? Does it move the line towards more security, or towards release with less?