While building a high-performing application, our team required processes for each release, and these tasks grew in size and complexity until it became necessary to document them as a checklist that could be checkpointed along the release cycle. Since the tasks applied to the entire project team, consisting of development, quality assurance, program management, deployment, operations, and business, the items to be completed and tracked needed increasing visibility across the project. As we defined these tasks, we consolidated and shared them with other projects here at Microsoft. This led to the development of the Application Performance Lifecycle (APL).

The APL is an iterative process, used release over release, that guides the team in incorporating performance best practices across all disciplines on a project team. Below are the six phases of the APL with a brief summary of their purpose:


Every project begins with the Envision phase. From a performance standpoint, we begin planning for architecture, technology enhancements, development, team resourcing, and requirements. This phase is also very important for subsequent releases and enhancements of an existing project, as we will typically receive new performance requirements and possible changes to the architecture. For these subsequent releases, we leverage current production usage to identify the highest-activity pages and the poorest-performing pages, review results from prior releases, and use this information to restate the thresholds and other performance target loads we will test against.
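Mining production usage to pick test targets can be sketched in a few lines. This is a minimal illustration, not our actual tooling: the record format and the 500 ms threshold are assumptions, and real data would come from production logs or a monitoring store.

```python
from collections import defaultdict

# Hypothetical access-log records: (page, response_time_ms).
records = [
    ("/home", 120), ("/search", 480), ("/home", 95),
    ("/checkout", 1450), ("/search", 510), ("/home", 110),
]

hits = defaultdict(int)
total_ms = defaultdict(float)
for page, ms in records:
    hits[page] += 1
    total_ms[page] += ms

# Highest-activity pages drive the load mix for the next release's tests.
by_activity = sorted(hits, key=hits.get, reverse=True)

# Pages whose average response time exceeds the (illustrative) threshold
# become tuning candidates and get restated performance targets.
SLOW_THRESHOLD_MS = 500
slow_pages = [p for p in hits if total_ms[p] / hits[p] > SLOW_THRESHOLD_MS]

print(by_activity[0])  # most-visited page
print(slow_pages)      # pages over the latency threshold
```

The two outputs map directly to the two inputs of envisioning a release: which pages to weight most heavily in the load profile, and which pages need restated performance targets.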


This phase of the APL is used to firm up plans and produce deliverables such as the performance requirements, the performance test plan, and hardware architecture diagrams. Additional activities include provisioning hardware for the performance environments and the controller/agents, and securing team resources.


At this point, we begin coding the performance test cases, preparing data sets, establishing and implementing monitoring of the environment, prototyping test scripts, and profiling the data and environment.
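A prototype test script at this stage can be as simple as driving concurrent virtual users and collecting latencies. The sketch below is illustrative only: `send_request` is a stand-in that simulates server work, where a real prototype would issue HTTP calls against the performance environment.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(page):
    """Stand-in for a real HTTP call; returns latency in ms."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server work
    return (time.perf_counter() - start) * 1000

# Drive N concurrent virtual users against the target page.
VIRTUAL_USERS = 8
REQUESTS = 40
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    latencies = list(pool.map(send_request, ["/search"] * REQUESTS))

print(f"avg={statistics.mean(latencies):.1f}ms "
      f"max={max(latencies):.1f}ms")
```

Even a throwaway prototype like this surfaces scripting questions early: how to parameterize data sets, how many virtual users the controller/agents must sustain, and which statistics the reports should capture.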


At this point, the project should be code complete, features locked, and remediation of bugs in full swing. From a performance testing perspective, we begin executing load, stress, and endurance tests. We continue to execute the performance unit tests (written during the build phase) and share this data, along with data generated through the load tests, with the development team. Issues are isolated and recommendations are made on tuning code and environment configuration.
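The shape of a performance unit test can be sketched as a timed operation compared against a budget. Everything here is illustrative: the function under test, the iteration count, and the 200 ms budget are assumptions; real budgets come from the performance requirements defined during planning.

```python
import time

def lookup_price(sku, catalog):
    # Code under test: a simple dictionary lookup stands in for the
    # real operation being budgeted.
    return catalog[sku]

def run_perf_unit_test():
    """Time 1,000 lookups and compare against an illustrative 200 ms budget."""
    catalog = {f"SKU{i}": i * 1.5 for i in range(10_000)}
    start = time.perf_counter()
    for _ in range(1_000):
        lookup_price("SKU9999", catalog)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, elapsed_ms < 200.0

elapsed_ms, passed = run_perf_unit_test()
print(f"1,000 lookups took {elapsed_ms:.2f} ms -> {'PASS' if passed else 'FAIL'}")
```

Run release over release, tests like this flag regressions in individual operations before they ever show up as failures in the full load, stress, or endurance runs.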


This phase is important for making sure all deliverables are ready and in place to set the application up in production. Performance reports are reviewed with stakeholders, and a determination is made as to whether or not the project is meeting the goals needed for release.


Once the application is live in production, the performance work doesn't stop. The operations team runs full monitoring of end-user processes and workflows, looking for bottlenecks. Issues and alerts are shared with the engineering team to determine the root cause of any constraints; this data then feeds into improvements needed for the next iteration of the project.
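The alerting loop can be reduced to a simple threshold check per workflow. This is a hedged sketch: the workflow names, sample values, and thresholds are invented for illustration, and a real operations team would pull both from its monitoring system rather than hard-code them.

```python
# Hypothetical per-workflow monitoring samples (response times in ms)
# and their alert thresholds.
samples = {
    "login":    [210, 190, 230, 205],
    "checkout": [900, 1100, 1300, 1250],
}
thresholds_ms = {"login": 500, "checkout": 1000}

def raise_alerts(samples, thresholds_ms):
    """Return (workflow, avg_ms) for every workflow over its threshold."""
    alerts = []
    for workflow, times in samples.items():
        avg = sum(times) / len(times)
        if avg > thresholds_ms[workflow]:
            # In production this would page the engineering team; the alert
            # record then feeds root-cause analysis for the next iteration.
            alerts.append((workflow, round(avg, 1)))
    return alerts

print(raise_alerts(samples, thresholds_ms))
```

Each fired alert becomes an input to the next Envision phase: a candidate bottleneck, a workflow to re-profile, and possibly a threshold to restate.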

Below, I’ve included a diagram that helps illustrate the process flow of the APL:

Many will look at these phases and comment that they are identical to the Software Development Lifecycle (SDLC). This is intentional: by paralleling the APL to the SDLC, specific performance tasks can be assigned to each cycle as deliverables, improving the quality of the application's performance with every turn of the cycle. This model is flexible enough for adoption under Waterfall, Agile, or a variety of other delivery patterns.

Part III of this series will detail the Envision stage of the APL.