Kirk Evans Blog

.NET From a Markup Perspective

Developing Distributed Services Today

Richard Turner has published a white paper on Developing Distributed Services Today.  It hit home.

I have a customer that has traditionally used other platforms to build web applications (they went the RPC - CORBA - J2EE - EJB - JSP - RMI route), and is now very interested in using ASP.NET.  In their past architectures, they implemented a tiered system where the UI box sat in the DMZ and all logic was implemented on an application server in a secure zone.  The recommendation from Microsoft is typically to separate the UI and business logic into distinct logical layers but to keep them running in-process on the same server.  I knew that this customer would not accept this as an option, so I started thinking about how else we should approach this problem.  They wanted minimal code in the UI layer and the ability to secure separate layers on different physical machines.  They are also interested in exposing services across the enterprise to gain efficiency and reduce duplication of effort, so we needed to think pretty carefully.

Having done more than my fair share of Whitehorse demos lately, the first thing that popped into my head was to use a web service call from the UI back to the application tier.  Just use ASMX to call from the web server back to the application server (better termed a service), ASMX over SSL if they want to secure the traffic, or ASMX with WSE to use WS-Security if they would have intermediaries.  I knew the party line from what Don Box and Richard have been saying: use ASMX now.  Web services provide the reach model we thought they wanted.  But I paused before saying it out loud.
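To make that concrete, here is a minimal sketch of what the application tier could look like as an ASMX service.  The OrderService class, its namespace URI, and the GetOrderTotal method are hypothetical names for illustration, not anything from the customer's actual system:

```csharp
// Hypothetical ASMX service hosted on the application server in the secure zone.
// The web server in the DMZ calls it over HTTP (or HTTPS when SSL is required).
using System.Web.Services;

[WebService(Namespace = "http://example.org/orders/")]
public class OrderService : WebService
{
    // Only methods marked [WebMethod] are exposed as web service operations.
    [WebMethod]
    public decimal GetOrderTotal(int orderId)
    {
        // A real implementation would call into the business layer here;
        // the hard-coded value just keeps the sketch self-contained.
        return 42.00m;
    }
}
```

The UI tier would call this through a proxy generated with wsdl.exe or Visual Studio's Add Web Reference, so the ASP.NET pages stay thin, which is exactly the "minimal code in the UI layer" requirement.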

I couldn't help but second-guess this advice; I wanted the strongest proposal for the customer.

We started looking at ES and COM+ and possibly exposing services through some other means, and the results confirmed our suspicions:  perf numbers from simple tests using ES were demonstrably better than the same workload using ASMX.  Using ES, we could get location transparency for the endpoint component, and COM+ 1.5 provides the type of security that they were looking for.  We then changed the tests to simulate a real workload, and we saw that the field was much more level.  It didn't jump out at first; we started to tune and twist and twiddle, thinking maybe we would expose services through another means, but the end result came up nearly the same.  We finally noticed the pattern:  the amount of work done within these components was significant enough to render the performance of the transport insignificant.  The time spent doing real work made the milliseconds gained by using a faster transport irrelevant.  Another interesting effect was that once we looked at the documentation and skills required to implement the COM+ solution (for those unfamiliar with COM+) versus web services (which should be familiar even to developers coming from non-Microsoft platforms), the web services solution was less complex and would make the developers more productive.  It didn't make sense in this case to offer multiple means of accessing the services given the limited benefit that would be provided.
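A back-of-envelope calculation shows why the pattern holds.  The numbers below are illustrative assumptions, not our actual benchmark results:

```csharp
// Sketch of why transport overhead stopped mattering once real work was added.
// All timings are assumed for illustration.
using System;

class TransportMath
{
    static void Main()
    {
        double workMs = 200.0;        // time per call spent in real business logic
        double asmxOverheadMs = 8.0;  // assumed HTTP/XML serialization cost
        double esOverheadMs = 2.0;    // assumed DCOM transport cost

        double asmxTotal = workMs + asmxOverheadMs;
        double esTotal = workMs + esOverheadMs;

        // With a trivial workload (workMs near zero) the gap looks like 4x;
        // with real work it shrinks to a few percent of total call time.
        Console.WriteLine("ASMX: {0} ms, ES: {1} ms, relative difference: {2:P1}",
            asmxTotal, esTotal, (asmxTotal - esTotal) / esTotal);
    }
}
```

With these assumed numbers the "faster" transport wins by about three percent end to end, which is exactly the kind of difference that disappears into the noise of a real system.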

The customer looked shocked that we would recommend ASMX, until we showed them the numbers.  The funny thing is that they were even more shocked because the perf we demonstrated blew away that of their existing design.  We were able to scale the solution at a fraction of the cost, and the design was more straightforward for the developers to implement.

Mike Gilbert joined our Communications Sector Developer Evangelism team last year, and does a great presentation based on benchmarks of various technologies around this same concept.  His conclusion is the same: when the workload is significant, the transport becomes much less significant.  It didn't really hit home until I worked through the exercise myself.

The customer is now investigating ASMX and IIS as part of their core infrastructure.  They bought into WS-Security during my foray into Kerberos authentication last year, but saw Microsoft only as a client technology (remember, they had a significant existing investment in EJBs on the backend, and rip & replace is not a good idea when existing solutions are functioning).  Now they are looking closer at Microsoft in the data center.  This wasn't due to angle brackets over HTTP; it was due to coming up with a design that met the customer's long-term goals without trying to shoehorn a technology into a solution.  The advice to use ASMX is very relevant, and with Whidbey, web services performance only gets better.

We looked at exposing web services via COM+, writing some wrapper code to provide multiple endpoints, and a couple of other options, but at the end of it all we realized we were overarchitecting a simple solution.  I can't wait until this is a simple exercise of telling Indigo to talk over COM+ and provide multiple endpoints; that would take the decision out of the hands of the developers and put it back in the hands of the IT Pro.

My assumptions about ASMX were tested by a couple of great programmers.  This is one aspect of programming that I wish Joel had mentioned when talking about hiring a few great programmers instead of a bunch of mediocre ones:  great developers test their assumptions and strive to deliver better solutions.  I can only hope that I can measure up to Joel's criteria one day.
