Yeees! I managed to have a palindrome post title :-) the next stage is to make it boustrophedonic... anyway, it stands for Web Services and Small Worlds. Here it goes.


I recently had a very interesting discussion about whether the increased message size in SOA, compared to traditional architectures, is a bad thing in all respects. Leaving aside the outcome of the discussion, which I may report in another post (if the other party doesn't do it on their own blog :-)), the exchange triggered a long chain of lazy thoughts that took me far from the original question.


Are you familiar with the "small world" theories? It's an interesting study of the properties of archetypal network topologies, which seem to apply to an incredibly vast range of natural and artificial networks (brain cells, airports, the Internet, rivers, virus spread patterns, protein metabolism pathways, joke emails). See Nexus for a good introduction to the topic; it helped me pass a New Orleans-Milan flight nicely (yes, I can't sleep on planes). One very important entity in this theory is the concept of "superconnectors": nodes with an unusually high number of connections, which join otherwise unrelated subnetworks. These nodes give the network very special properties, first of all a power-law degree distribution. It seems that those properties can also be achieved in a highly partitioned network with just a few random "long distance" links.
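Just to make the "few random long distance links" point concrete, here is a toy sketch in Python (the sizes and counts are invented, and it has no relation to any real service network): a ring of 200 nodes, each linked only to its nearest neighbors, sees its average path length collapse after adding a mere 10 random shortcuts.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors (k/2 per side)."""
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges[i].add((i + j) % n)
            edges[(i + j) % n].add(i)
    return edges

def avg_path_length(edges):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in edges:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(42)
n = 200
g = ring_lattice(n, 4)
before = avg_path_length(g)

# Add a handful of random "long distance" shortcuts between far-apart nodes.
for _ in range(10):
    a, b = random.sample(range(n), 2)
    g[a].add(b)
    g[b].add(a)
after = avg_path_length(g)

print(f"avg path length: {before:.1f} -> {after:.1f}")
```

The drop is dramatic even though the 10 shortcuts are a tiny fraction of the 400 original links; that disproportion is the whole trick behind the small-world effect.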

I was wondering if those analysis tools could be used to draw conclusions about web services networks as well. Many potential analogies come immediately to mind: aggregator and orchestrator services are good candidates to become superconnectors; isolated subnetworks map nicely onto federations/fiefdoms; links between distant (isolated) subnetworks model well the idea of a WS that helps cross boundaries (remember the PDC panels?). Provided that the analogies hold, there is potentially a lot to be discovered: how quickly a change (a policy variation that implies some form of renegotiation, for example) can propagate through the various involved services; to what minimum the availability of single services can be reduced without harming the overall process health; where and how to place WSIs (Web Services Intermediaries: not to be confused with WS-I, the Web Services Interoperability Organization :-)) so as to maximize their effect while minimizing their number...

And I wonder (that was the seed of all this reasoning!) if you could draw some domain-specific conclusions, like finding a correlation between typical message size (meaning the number of subparts, not the sheer bytes) and the kind of service. My conjecture is that superconnectors will handle big messages, since those would convey business process initiators/states and since the exchange rate is lower due to the overhead induced by low trust; conversely, the tighter the network, the narrower the focus of a service and the smaller the payload that needs to be exchanged. It's just a feeling, so don't hold me responsible if it's inaccurate and/or turns out to be false!
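As a back-of-the-envelope illustration of the propagation question (the topology is completely made up: three service cliques bridged by a single orchestrator, nothing more), here is what the superconnector's role could look like. Counting BFS depth as renegotiation "rounds", a policy change starting at the orchestrator reaches every service in far fewer rounds than one starting at a peripheral service.

```python
from collections import deque

# Hypothetical topology: three isolated service clusters (cliques of 5),
# bridged only through node 15, an "orchestrator" acting as superconnector.
clusters = [range(0, 5), range(5, 10), range(10, 15)]
hub = 15
edges = {i: set() for i in range(16)}
for cluster in clusters:
    for a in cluster:
        for b in cluster:
            if a != b:
                edges[a].add(b)     # fully connect each cluster
    edges[hub].add(cluster[0])      # one bridge link per cluster
    edges[cluster[0]].add(hub)

def rounds_to_reach_all(edges, start):
    """BFS depth from start = renegotiation rounds for a change to spread."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in edges[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

print(rounds_to_reach_all(edges, hub))  # from the superconnector: 2 rounds
print(rounds_to_reach_all(edges, 4))    # from a peripheral service: 4 rounds
```

A real analysis would of course need real topology data and a less naive propagation model (partial renegotiation, failures, latency), but the asymmetry between hub and periphery already shows up in this sixteen-node toy.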


The fact is that, to be meaningfully inspected by such a method, a network should have a BIG number of nodes: big enough to benefit from a view-from-above analysis (smaller nets are better handled by knowing the system's details, I guess) and heterogeneous enough to involve a fair range of node (service) kinds. For the orders of magnitude I'm working with right now, that would be pretty much like talking about Asimov's psychohistory; but that can change fast. That said, it's important to highlight that I don't know if a WS network would follow the small-world rules: the above-mentioned analogies sound promising, but until a "power law" pattern (see again the book) is demonstrated on a real-world example, nothing can be assumed.
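For what it's worth, the check itself would be cheap once someone has the topology data. Here is a sketch of the hub signature one would look for, run on synthetic data since I obviously don't have a real WS map: a toy preferential-attachment generator (its mechanics and parameters are pure invention for illustration) produces the heavy-tailed degree distribution typical of power-law networks, where the maximum degree dwarfs the mean; a plain random network would not show that gap.

```python
import random

random.seed(1)

def preferential_attachment(n, m):
    """Toy Barabási–Albert-style growth: each new node attaches m edges,
    preferring already well-connected nodes (the hub-forming mechanism)."""
    initial = list(range(m))   # seed nodes for the very first attachment
    repeated = []              # node list weighted by current degree
    degree = [0] * n
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            pool = repeated if repeated else initial
            chosen.add(random.choice(pool))   # degree-biased pick
        for t in chosen:
            degree[new] += 1
            degree[t] += 1
            repeated.extend([new, t])
    return degree

deg = preferential_attachment(2000, 2)
mean = sum(deg) / len(deg)
print(f"mean degree {mean:.1f}, max degree {max(deg)}")
```

On real data, the next step would be checking whether the degree histogram falls on a straight line in log-log scale; until that happens on an actual services network, this stays a conjecture, as said.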

Provided that I'm aware (and make the reader aware as well) that nothing I said here is demonstrated, I'd say that free rides like this one are always a good way to waste time after dinner :-)