One of my former colleagues once said, "Performance is a journey and not a destination." I think this statement is so true.
Because performance is relative, it is imperative to define BOTH a baseline and acceptance metrics for each scenario that we test. This also gives all stakeholders a common platform for quantifying and defining success. The beauty of establishing a baseline is that you should never accept a run that deteriorates from it. If a run does deteriorate, you know which configuration will lead you back to your baseline, because that was your best run before the results started degrading. If your next run is an improvement, it can become your new baseline.
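As a minimal sketch of this idea (the numbers, names, and threshold here are illustrative, not from a real test plan), a run can be timed with Stopwatch and compared against the recorded baseline and an agreed acceptance threshold:

```csharp
using System;
using System.Diagnostics;

class BaselineCheck
{
    // Hypothetical values: the baseline is the best run so far, and the
    // acceptance metric allows at most a 10% regression against it.
    const double BaselineMs = 850.0;
    const double MaxRegression = 0.10;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        RunScenario();                      // the scenario under test
        sw.Stop();

        double currentMs = sw.Elapsed.TotalMilliseconds;
        if (currentMs > BaselineMs * (1 + MaxRegression))
            Console.WriteLine($"Regression: {currentMs:F0} ms vs. baseline {BaselineMs} ms");
        else if (currentMs < BaselineMs)
            Console.WriteLine($"Improvement: {currentMs:F0} ms could become the new baseline");
        else
            Console.WriteLine($"Within acceptance: {currentMs:F0} ms");
    }

    static void RunScenario()
    {
        // Placeholder for the actual workload being measured.
        System.Threading.Thread.Sleep(800);
    }
}
```

In practice you would run the scenario several times and compare a stable statistic, such as the median, rather than a single sample.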
There are certain things we can check statically, such as code review using FxCop or code analysis in Microsoft Visual Studio. Other issues, such as resource management, surface only when the application runs on actual hardware. Repeated test runs of a given scenario are therefore useful for uncovering such issues.
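To illustrate the kind of resource-management issue involved, here is a small sketch. Static analysis (for example, FxCop rule CA2000, "Dispose objects before losing scope") can flag the undisposed stream, but its real cost, such as exhausted file handles, tends to show up only under repeated runs on real hardware:

```csharp
using System.IO;

class ResourceExample
{
    // Problematic: the FileStream is never disposed, so the underlying
    // file handle is held until the garbage collector finalizes it.
    static long SizeLeaky(string path)
    {
        var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
        return stream.Length;
    }

    // Fixed: 'using' releases the handle deterministically.
    static long SizeFixed(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            return stream.Length;
        }
    }
}
```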
After several performance tuning iterations, you may still find ways to get that "extra bit" of performance out of the system, but the decision always comes down to time, energy, and resources, and whether to scale up or scale out. I will discuss some of the best practices in an end-to-end scenario and provide links with more detailed information wherever possible. I do not discuss Microsoft SQL Server, because that topic has been covered well by my colleague in his post here.
Consider the practices that work best for your application.
There are some essential tools that you can leverage during run time to fine-tune your application:
1. PerfView
2. NP .NET Profiler
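As one way to put PerfView to work, here is a minimal sketch (the provider name, event IDs, and URL string are illustrative) of a custom EventSource whose events PerfView can collect alongside its usual CPU and .NET runtime data:

```csharp
using System.Diagnostics.Tracing;

// Hypothetical provider name; choose one unique to your application.
[EventSource(Name = "MyCompany-MyApp")]
sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1)]
    public void RequestStart(string url) { WriteEvent(1, url); }

    [Event(2)]
    public void RequestStop(string url) { WriteEvent(2, url); }
}

class Program
{
    static void Main()
    {
        AppEventSource.Log.RequestStart("/home");
        // ... the work being measured ...
        AppEventSource.Log.RequestStop("/home");
    }
}
```

A trace can then be gathered with a command along the lines of `PerfView collect /NoGui /OnlyProviders=*MyCompany-MyApp`; check PerfView's built-in help for the exact switches supported by your version.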
Nelson Dsouza is a Technical Architect at the Microsoft Technology Center (MTC) in Mumbai. He held various roles at Microsoft in the United States, including assisting developers from Fortune 500 companies. Nelson helped these companies write applications that maximize technology investments and solve real-world problems. He recently moved to Mumbai, India.