In case you have not noticed, there are new benchmark results for the Trade application running both as a Java application on WebSphere 7 and as a .NET application, posted at http://msdn.microsoft.com/stocktrader. Steven Martin, the head of my division, also posted a really good blog entry on these results, released last week, at:  http://blogs.msdn.com/stevemar/archive/2009/04/30/websphere-loves-windows-who-knew.aspx

Also, take a look at http://www.websphereloveswindows.com.

The thing that is new here, and is sparking the usual debate that comes along with almost every benchmark I do (including the infamous PetShop/MiddleWare/TheServerSide.com benchmarks I did in the past), is that we ran a WebSphere 7 version of IBM's Trade application on a high-end IBM Power6 server: specifically, a Power 570/AIX server that costs $215,000.00 even without any WebSphere licenses.  We wanted to document the following:

  • How does WebSphere 7 running on Windows Server 2008 and an HP BladeSystem (moderate scale-out vs. a RISC scale-up approach on Power6) compare to the Power 570 server in performance and total cost?  Hint: the HP BladeSystem costs about $51,000.00.
  • How does the .NET implementation running on the same HP BladeSystem compare to both WebSphere 7 on Windows and WebSphere 7 on the high-end Power6/AIX system (both in performance and middle tier app server hardware + software costs)?
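To put the two hardware price tags above in perspective, here is a quick back-of-the-envelope sketch using only the figures quoted in this post. It deliberately leaves out throughput and software licensing (see the full benchmark paper for those); it simply computes the raw middle-tier hardware cost ratio:

```python
# Sketch: hardware cost ratio, using only the dollar figures quoted above.
# Throughput and WebSphere license costs are NOT modeled here.

POWER_570_COST = 215_000.00      # IBM Power 570/AIX, before WebSphere licenses
HP_BLADESYSTEM_COST = 51_000.00  # HP BladeSystem, approximate

ratio = POWER_570_COST / HP_BLADESYSTEM_COST
print(f"Power 570 hardware costs about {ratio:.1f}x the HP BladeSystem")
# The Power 570 is roughly 4.2x the hardware cost before any license fees.
```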

Check out http://www.websphereloveswindows.com for a summary, links to the full benchmark paper, and more.  You can use the provided Capacity Planner tool to test other hardware configurations, comparing, for example, WebSphere on a mainframe to WebSphere on other platforms; or .NET on a Windows Server 2008 platform, or even a Linux platform of your choosing.

As usual, my comparison has sparked some debate.  In a thread below I re-post the latest rebuttal from an anonymous source who thinks the comparison is biased, along with my responses to the points raised.  I work hard on the benchmarks I run, and I encourage this type of feedback.  That is what full disclosure for such benchmarks, with published code, detailed tuning documents, test scripts, etc., is all about.  So comment away!