Next week I am on the road to meet a few customers and discuss the Test offerings my team has been working on. In particular, most of the conversations on this trip will revolve around our Load Test offering. I have talked about our Load Testing feature earlier on this blog, and you can find many more details about the product and its capabilities at Ed Glas’ blog. Ed and his team have done a superb job building this product, and I am really proud to have it as part of our repertoire for testing organizations.
The diagram below shows the architecture of our Load Test product. The load test is orchestrated from Visual Studio. You set up the scenarios (groups of tests), their characteristics (mixes) for browsers and network connections, the load pattern (constant, step, or goal-based), the run settings, the data binding for individual tests (to ensure that the tests drive different behavior on different runs), the duration of the run, and so on. Visual Studio then, through a controller and one or more agents, simulates a real-life load situation for the target server, with each agent simulating multiple users. The target server itself is invoked through its public HTTP, web-services, or other protocol interfaces. Furthermore, on the target server you can have collector agents running that “observe” the server under load, collecting various profiling, timing, and resource-usage metrics that can be sent back to Visual Studio to analyze how the server is holding up.
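To make the “load pattern” and “agents” ideas concrete, here is a minimal, hypothetical sketch (this is not the Visual Studio Load Test API; the function names and parameters are my own invention for illustration) of how a controller might compute the target user count for a step load pattern and fan those simulated users out across agents:

```python
# Hypothetical sketch of a "step" load pattern and agent distribution.
# None of these names come from the actual product.

def step_load(initial_users, step_size, step_duration, max_users, elapsed_seconds):
    """Return the target virtual-user count at a given elapsed time.

    A step pattern starts at initial_users and adds step_size users
    every step_duration seconds, capped at max_users.
    """
    steps_completed = elapsed_seconds // step_duration
    return min(initial_users + steps_completed * step_size, max_users)

def distribute_users(total_users, agent_count):
    """Split the simulated users as evenly as possible across the agents."""
    base, remainder = divmod(total_users, agent_count)
    return [base + (1 if i < remainder else 0) for i in range(agent_count)]

if __name__ == "__main__":
    # Start at 10 users, add 10 every 30 seconds, cap at 50, using 3 agents.
    for t in (0, 30, 60, 200):
        users = step_load(10, 10, 30, 50, t)
        print(t, users, distribute_users(users, 3))
```

A constant pattern is the degenerate case (step_size of zero), and a goal-based pattern would instead adjust the user count from feedback such as the collector metrics mentioned above.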
This is a neat product and a must for any team that is shipping a server product and wants to make sure that the server will hold up in real life with millions of users pounding on it.
This post, though, isn’t so much about the product as it is about the parallels of load testing in real life.
A thought occurred to me that this world, and all of its inhabitants, is enacting nothing but a giant load testing run!
Let’s start with us humans. We are the individual “servers” being load tested. I like this analogy with a server or service – a higher goal in life is really to be of service to this web of life that we call the world. This server, at its core, is made up of basic ingredients – earth, water, fire, space, and air – and has its own interfaces – the five organs of perception (sight, hearing, smell, touch, and taste), the five organs of action (hands, feet, etc.), and the four inner instruments of mind, intellect, memory, and ego. The server has a core set of functions (breathing, eating, waking and sleeping, etc.), and then throughout its life, driven by the inner instruments and the organs of action and perception, indulges in the bevy of activities that we are familiar with.
The ego perceives all of these activities as its own and thinks it is the real doer. But perhaps that isn’t the real truth, and there is a “Controller” somewhere orchestrating all of this behind the scenes? Perhaps this giant run has been set in motion, with “data binding and data access” leading to the seemingly random and unique events of life? The server doesn’t know how long it has been scheduled to run for, and even as individual servers fall over and leave the run, newer ones are added. Moreover, are individual servers, which are being load tested, in turn “generating load” for other servers? How complicated does this get if you throw the other 8.4 million species on this planet into the test – are these the “smaller servers” in the mix?
Is it possible that a part of the “Orchestrator” resides in each individual server as a “silent witness” simply observing what’s going on and perhaps collecting “data” that will be useful in the “final analysis”?
The customer visit next week is a good opportunity; I will be thinking deeply about our load testing product :-)