My name is Jim Nye and I am a Senior SDET in Microsoft’s Sales and Marketing IT (SMIT) organization. As part of my role, I serve as the project performance lead on specific applications within SMIT, developing performance processes and ensuring they adhere to best practices that improve quality and dependability for our end users’ experiences.

This is the first part of a multipart blog series on performance testing here at Microsoft. We’ll be taking a close look at our matured processes and how they have helped us achieve consistency in meeting our application performance goals. The next post will outline the Application Performance Lifecycle, and additional posts will cover each of its six stages in detail, followed by a conclusion.

Performance testing has long been considered an afterthought, begun only after the application was near the end of its testing cycle. In many cases the work was completed by an outside team unfamiliar with the domain-specific elements of the application. In essence, performance analysis was done on singular points in an application and reported as a deliverable note for the release. Fixes for any throttling issues, bottlenecks, or bad user experiences were planned for future releases, often passing the pain along to the customer to “deal with” until a new release could be ready. In extreme cases, where user experiences were severely affected or the application simply would not work in production, a critical issue would be raised and a response team formed to find out why performance was broken, what choke point was at fault, and how to fix the issue to get the application responding once more. Even for these critical issues, it was more like placing a bandage on a leaky dam and hoping the rain would stop.

Today, efforts start at project inception and take into account the need to keep the application running smoothly, yielding a quality experience our customers love. Getting from reactive mode to a responsive, high-quality application does not happen coincidentally or simply because a problem had to be fixed. It begins with a process. For the last four years, I have held the role of Performance Lead for a Web application project from inception through several releases, and now finally through sunset as we phase it out in preparation for the next-generation application. We thought it might be helpful to share with the community our learnings and findings on the processes we developed and used over the life cycle of this application. These processes guided us through more than thirty releases over the life of the project, delivering a consistently performing application without a single performance bug being found in production.

Even in our own domain this mindset doesn’t always surface right away. A long history of performance reactivity across the software industry can lead a team to wonder why it should adopt a practice that runs contrary to an industry norm. The media are not so forgiving. If a proven software company produces an application that is not highly responsive, critics are the first to sound the alarm; soon the concern is picked up by the media, and before long even long-term advocates of the company begin to voice their dissatisfaction. As industry leaders we cannot settle for anything less than high-quality, highly responsive applications.

Over the course of the past four years I have been fortunate to lead performance on a team that embraced the practices and policies required to produce a highly responsive application. Our end customers are resellers, and our user base didn’t have time to sit around and wait for pages to load. Poorly performing applications are a sign of complacency, a signal that customers should somehow accept “good enough” when working with our products. If they are not greeted with fast applications on the Web, how can we expect them to believe our operating systems and business applications will perform any better? Over the lifetime of the project I just completed, no unknown performance bugs were discovered in production, but more importantly, the mindset that allowed performance bugs to be released has changed. Twice we had Severity 1 performance issues outstanding at the end of a release cycle, and each time the bugs were escalated to the general manager for an exception. The third time such a request was made, the general manager firmly declined. This prompted a hearty effort by the development and project teams to fix the underlying issue and release a high-quality fix in time for production. The mindset had changed, and the project team now believed it could, and should, always address performance issues before production.