I’ve owned the PowerPivot for SharePoint Stress and R&R testing for the past year (not that anybody else owned it before), and just recently the Performance testing owner and I started revisiting our approaches going forward with our product.
This post is a quick review of stress and performance testing concepts, their differences and similarities, based on my own experience and opinion, but also on some literature such as Testing Computer Software.
Performance Testing

Our goal in performance testing is to identify bottlenecks and to find performance bugs against a well-defined benchmark. It is also to catch performance regressions in the product when a new build is dropped. To execute a performance test is to execute a well-controlled process of measurement, followed by analysis of the data just collected.
A clearly defined set of expectations is fundamental for meaningful performance testing; without it, all the data the testing generates matters very little. In very few words, the goal of a performance test is to measure the execution time of a given task of the product in a well-controlled environment, so we know exactly what we are measuring. The finer the granularity at which the performance testing measures, the better the analysis we can do. Here are a few dimensions that we kept in mind for our perf testing:
· Complexity of the query issued by the client;
· Cache cold or warmed-up system;
· Quantity of simultaneous requests;
· Different SharePoint farm topologies.
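To make those dimensions concrete, a measurement harness parameterized along them could look like the sketch below. This is a minimal illustration, not the product's actual test driver: `run_query` is a hypothetical stand-in for issuing a real client query against the farm.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_query(query):
    """Hypothetical stand-in for issuing a client query against the farm."""
    time.sleep(0.01)  # placeholder for the real round-trip

def measure(query, concurrent_requests=1, warm_up=False, repetitions=5):
    """Time a task under controlled conditions: query complexity,
    cold vs. warmed-up cache, and number of simultaneous requests."""
    if warm_up:
        run_query(query)  # populate caches so we measure the warm path
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrent_requests) as pool:
            futures = [pool.submit(run_query, query)
                       for _ in range(concurrent_requests)]
            for f in futures:
                f.result()  # propagate any failure from a worker
        timings.append(time.perf_counter() - start)
    # report a robust central value and the worst case observed
    return {"median": statistics.median(timings), "max": max(timings)}
```

Running `measure` once per combination of dimensions (and once more per farm topology) yields the fine-grained data points the analysis needs.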
It was crucial to conduct performance testing throughout the product development life cycle in order to establish a baseline and to identify performance regressions introduced by new code.
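The baseline comparison itself can be as simple as the sketch below, which flags any task in a new build that got slower than the baseline by more than a chosen tolerance. The 10% threshold is an illustrative assumption, not a number from our process.

```python
def flag_regressions(baseline, current, threshold=0.10):
    """Compare per-task timings (seconds) of a new build against the
    baseline; return the tasks that slowed down by more than `threshold`."""
    regressions = {}
    for task, base_time in baseline.items():
        new_time = current.get(task)
        if new_time is not None and new_time > base_time * (1 + threshold):
            regressions[task] = (base_time, new_time)
    return regressions

# Example: a 25% slowdown on one task is flagged, a 5% slowdown is not.
found = flag_regressions({"simple_query": 1.0, "data_refresh": 8.0},
                         {"simple_query": 1.25, "data_refresh": 8.4})
```

With each build drop, the previous baseline stays fixed and only the `current` measurements are refreshed, so a regression points at the code that changed in between.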
Stress Testing

My goal was to break PowerPivot for SharePoint by overwhelming its resources during a long-haul test run. The main purpose of the test, besides having fun, was to learn how the product behaves under extreme conditions and to identify bugs that prevented the system from functioning properly under those conditions. Here are a few dimensions that we kept in mind for our stress testing:
· Amount of data travelling through the Farm;
· Quantity of users;
· Quantity of scheduled data refreshes.
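A long-haul stress driver along these dimensions can be sketched as a swarm of simulated users hammering the system for a fixed duration while failures are counted. Again, `issue_request` is a hypothetical stand-in for the real client call, and the knob values are illustrative.

```python
import random
import threading
import time

def issue_request(payload_size):
    """Hypothetical stand-in for a real request carrying data through the farm."""
    time.sleep(0.001)  # placeholder for the real round-trip

def simulate_user(stop_event, stats, lock):
    """One simulated user issuing requests in a tight loop until told to stop."""
    while not stop_event.is_set():
        try:
            issue_request(random.choice(["small", "large"]))
            with lock:
                stats["ok"] += 1
        except Exception:
            with lock:
                stats["failed"] += 1  # keep going: we want behavior past the failure

def stress(users=50, duration_s=3600):
    """Run `users` concurrent simulated users for `duration_s` seconds."""
    stop = threading.Event()
    stats = {"ok": 0, "failed": 0}
    lock = threading.Lock()
    threads = [threading.Thread(target=simulate_user, args=(stop, stats, lock))
               for _ in range(users)]
    for t in threads:
        t.start()
    time.sleep(duration_s)
    stop.set()
    for t in threads:
        t.join()
    return stats
```

Dialing up `users`, the payload sizes, and the request mix (e.g. scheduled data refreshes) is how the run pushes past the point where resources are exhausted.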
Very briefly, the goal was to find out when we break, how we break, and whether we recover from the failure.
R&R Testing (a.k.a. Reliability and Robustness Testing)
R&R testing consists of running the stress testing while taking resources away (such as disabling the NIC, disabling the AS service, or shutting down a server) and giving them back (re-enabling the NIC, restarting the AS service, turning the server back on) over a long period of time. The goal was to learn how PowerPivot for SharePoint behaves under those extreme conditions: whether it keeps working under adverse and ever-changing conditions, and whether it recovers gracefully from the failures.
All three concepts, Performance, Stress and R&R testing, differ in objectives, but they do share a lot, especially with regard to the test infrastructure that needs to be built to run them in an automated fashion: from product deployment to the test drivers. It is also interesting to combine these test concepts, e.g. Performance and Stress, to create baselines of the product's performance under load conditions and identify degradations of the product's performance.
No alarms, no surprises.