WCF Extensibility – IOperationInvoker
This post is part of a series about WCF extensibility points. For a list of all previous posts and planned future ones, go to the index page.
The operation invoker is the last element in the WCF runtime invoked before user code (the service implementation) is reached – it’s the invoker’s responsibility to actually call the service operation on behalf of the runtime. IOperationInvoker implementations are called by the WCF runtime to allocate the object array used by the formatter to deserialize the request, and called again when the parameters are ready and the operation is to be invoked. The interface defines both a synchronous and an asynchronous invocation pattern; which one is used depends on whether the operation was defined with the AsyncPattern or not. The invoker is a required element of the service operation runtime, and WCF adds one by default.
None. As with most runtime extensibility points, there are no public implementations in WCF. There are many internal implementations, such as System.ServiceModel.Dispatcher.SyncMethodInvoker and System.ServiceModel.Dispatcher.AsyncMethodInvoker, which are the default invokers for methods without and with AsyncPattern set to true, respectively. In the post about the POCO service host I also showed a simple invoker for an untyped message contract.
The IsSynchronous property is consulted once per dispatch operation, as the dispatcher is being created, so that WCF can set up the appropriate internal structures (the information is then cached and the property is not accessed again). When an incoming message arrives at the service, AllocateInputs is called to allocate the array of inputs which will be passed to the formatter, so that it can unwrap the parameters from the message and fill the array. <rant>This is one of those extensibility points which could have been left unexposed and I don’t think anyone would miss it – pretty much all implementations simply return a new object array with the length equal to the number of inputs expected by the operation (possibly caching an empty array for operations without input parameters). In theory someone could write a more efficient implementation which would cache those object arrays (since the array is passed back to the invoker when the service operation is to be called, the invoker would know when to return the array to an “input array pool”), but frankly I don’t think this is worth the hassle of the API bloat…</rant>
Getting back to the interface: after the input array has been allocated, passed to the formatter (and inspected by any parameter inspectors which want to look at the inputs), the invoker is called to invoke the operation itself. For synchronous operations the call is simple – Invoke is called, passing the service instance and the operation inputs. The method must set the output parameter array with any out/ref parameters from the operation, and return the operation result. For operations implemented asynchronously, the InvokeBegin / InvokeEnd pair is called, following the asynchronous programming model.
Invokers only apply to the server side, and they are set on the DispatchOperation object – each operation has its own invoker. It is typically accessed via an operation behavior in a call to ApplyDispatchBehavior.
The behavior which sets the default WCF invoker is the first one in the list of operation behaviors, added even before attribute-based operation behaviors. This makes it safe to wrap the default invoker using an attribute-based operation behavior (unlike with message formatters, as mentioned in a previous post).
One scenario for which I’ve seen invokers being used in forums and in some blog posts is to implement a cache for expensive operations. Unlike REST services (which have a well-defined caching story for GET requests), SOAP services (with their RPC-like semantics) don’t have any standard way of defining caching options for their operations. This example will show one possible implementation using a custom operation invoker which will either delegate a call to the service operation, or return a cached version of the operation result to the caller, bypassing the server operation. Most implementations only show the synchronous version of invocation, so I’ve decided to expand on that and create a full invoker which can deal with both synchronous and asynchronous operations.
The solution here will use the ASP.NET cache to store the results of the “cacheable” operations in web-hosted scenarios, or the new MemoryCache type (from .NET 4.0) for self-hosted ones (via one conditional compilation directive). The ASP.NET cache can in theory be used even outside of ASP.NET, but since .NET 4.0 now provides a general-purpose cache, I decided to use it for the self-hosted case (and not "pollute" a console project with a reference to System.Web). Both caches provide simple property-bag semantics with expiration functionality, which maps perfectly to this example.
And now for the disclaimer: this is a sample for illustrating the topic of this post; it is not production-ready code. I tested it with a few contracts and it worked, but I cannot guarantee that it will work for all scenarios (please let me know if you find a bug or something missing). Also, for simplicity’s sake it doesn’t have much of the error handling that production-level code would need.
On to the code. Since the invoker is bound to an operation, we’ll use an operation behavior to add the invoker to the runtime, wrapping the original invoker in our new one.
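A sketch of such a behavior is shown below. The names here (CacheableOperationAttribute, CachingOperationInvoker, the SecondsToCache property) are my own choices for illustration, not necessarily those in the downloadable code:

```csharp
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Hypothetical attribute-based operation behavior that marks an operation as
// cacheable and wraps its default invoker with a caching one.
[AttributeUsage(AttributeTargets.Method)]
public class CacheableOperationAttribute : Attribute, IOperationBehavior
{
    private double secondsToCache = 30; // default cache duration

    public double SecondsToCache
    {
        get { return this.secondsToCache; }
        set { this.secondsToCache = value; }
    }

    public void AddBindingParameters(OperationDescription operationDescription,
        BindingParameterCollection bindingParameters) { }

    public void ApplyClientBehavior(OperationDescription operationDescription,
        ClientOperation clientOperation) { }

    public void ApplyDispatchBehavior(OperationDescription operationDescription,
        DispatchOperation dispatchOperation)
    {
        // The default invoker behavior runs before attribute-based behaviors,
        // so dispatchOperation.Invoker is already set and safe to wrap here.
        dispatchOperation.Invoker = new CachingOperationInvoker(
            dispatchOperation.Invoker,
            TimeSpan.FromSeconds(this.secondsToCache));
    }

    public void Validate(OperationDescription operationDescription) { }
}
```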
With this behavior we can now define our service contract, annotating some of the operations in the interface with a caching attribute meaning that their result can be cached up to the duration specified (30 seconds by default). Some operations (like Add) are not cached (so calls to that operation will always be routed to the service), while calls to other operations are.
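A contract along these lines (the contract and operation names are illustrative; the post's actual contract uses operations such as Add, Reverse and TryParse wrappers) might look like:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int x, int y); // not cached: every call reaches the service

    [OperationContract]
    [CacheableOperation] // cached for the default 30 seconds
    string Reverse(string text);

    [OperationContract]
    [CacheableOperation(SecondsToCache = 10)] // custom cache duration
    double Power(double baseValue, double exponent);

    [OperationContract]
    [CacheableOperation] // out parameters flow through the cache as well
    bool TryParseInt(string text, out int value);
}
```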
I’ll skip the service implementation here – each of the operations sleeps for a second, then returns the appropriate value (sum, Reverse, Math.Pow, int.TryParse, double.TryParse) to simulate an “expensive” operation (see the code link at the bottom of this post for the full service code). On to the invoker code. The caching invoker will take both the original invoker (for non-cached responses) and the duration of the cache. The implementations of AllocateInputs and IsSynchronous simply delegate to the original invoker. I’ll also use two helper functions, GetCache (which simply returns the ASP.NET cache object / default MemoryCache instance) and CreateCacheKey (which creates a string value to be used as the key in the cache). Since the cache is a global object, I’m using a Guid per invoker as part of the key, to prevent two different operations with similar parameters from incorrectly “sharing” the same cached result.
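The invoker's skeleton could look like the sketch below (invocation members elided; the WEB_HOSTED symbol and member names are my own assumptions):

```csharp
using System;
using System.ServiceModel.Dispatcher;
using System.Text;
#if WEB_HOSTED
using System.Web;
#else
using System.Runtime.Caching;
#endif

public class CachingOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker originalInvoker;
    private readonly TimeSpan cacheDuration;
    // Per-invoker id: prevents two operations with the same inputs from
    // sharing entries in the (global) cache.
    private readonly Guid invokerId = Guid.NewGuid();

    public CachingOperationInvoker(IOperationInvoker originalInvoker, TimeSpan cacheDuration)
    {
        this.originalInvoker = originalInvoker;
        this.cacheDuration = cacheDuration;
    }

    // Both members simply delegate to the wrapped (original) invoker.
    public bool IsSynchronous
    {
        get { return this.originalInvoker.IsSynchronous; }
    }

    public object[] AllocateInputs()
    {
        return this.originalInvoker.AllocateInputs();
    }

#if WEB_HOSTED
    private System.Web.Caching.Cache GetCache() { return HttpRuntime.Cache; }
#else
    private ObjectCache GetCache() { return MemoryCache.Default; }
#endif

    private string CreateCacheKey(object[] inputs)
    {
        // Simplistic: assumes the inputs have a meaningful ToString.
        StringBuilder key = new StringBuilder(this.invokerId.ToString());
        foreach (object input in inputs)
        {
            key.Append('|').Append(input);
        }
        return key.ToString();
    }

    // Invoke / InvokeBegin / InvokeEnd are discussed in the rest of the post.
}
```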
The synchronous version of Invoke is fairly simple – create the input keys, then check the cache for those inputs. If the result for that set of inputs is cached, return them from the cache, bypassing the service operation. Otherwise we’ll invoke the operation using the original invoker, add the results of the operation to the cache, and return the result. The class CachedResult used in this method is a simple class with two public properties, one for the return value and one for the output values from the operation.
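In code, the synchronous path could be sketched as follows (self-hosted MemoryCache variant shown; the exact shape of the post's real code may differ):

```csharp
using System;
using System.Runtime.Caching;

// Simple holder for a cached operation result.
public class CachedResult
{
    public object ReturnValue { get; set; }
    public object[] Outputs { get; set; }
}

public object Invoke(object instance, object[] inputs, out object[] outputs)
{
    string cacheKey = this.CreateCacheKey(inputs);
    ObjectCache cache = MemoryCache.Default;

    CachedResult cached = cache[cacheKey] as CachedResult;
    if (cached != null)
    {
        // Cache hit: bypass the service operation entirely.
        outputs = cached.Outputs;
        return cached.ReturnValue;
    }

    // Cache miss: call the real operation via the wrapped invoker...
    object returnValue = this.originalInvoker.Invoke(instance, inputs, out outputs);

    // ...then store the result (return value + out/ref values) with an expiration.
    cache.Set(
        cacheKey,
        new CachedResult { ReturnValue = returnValue, Outputs = outputs },
        DateTimeOffset.Now.Add(this.cacheDuration));
    return returnValue;
}
```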
The asynchronous version is trickier. Since we’re intercepting the call in the middle and not simply routing the call to the service, we need to follow the pattern for chaining asynchronous calls which I talked about in a previous post (the new Task-based asynchronous programming model is supposed to make this a lot simpler, and it will be available in WCF in the next version of the framework; I don’t know, however, whether it will be available in the inner extensibility points such as operation invokers, as they aren’t widely used). The first thing we need is a way to pass information along the asynchronous calls, so I’ll use a custom class to hold the “user state” of those calls.
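Such a state class might look like this sketch (the GetValueDelegate member is my stand-in for the post's CachedResult.GetValue delegate):

```csharp
using System;

// Carried through the asynchronous call chain; lets the callback and
// InvokeEnd know how the call started and how to complete it.
class CachingUserState
{
    public string CacheKey { get; set; }
    public bool ResultWasCached { get; set; }
    public AsyncCallback OriginalCallback { get; set; } // the caller's callback
    public object OriginalUserState { get; set; }       // the caller's user state
    // Delegate used on the cached path, so InvokeEnd can call EndInvoke on it.
    public Func<CachedResult> GetValueDelegate { get; set; }
}
```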
Next, we need to define an implementation of IAsyncResult which can carry over that new caching user state, while still returning to the caller the user state it passed to the Begin call.
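A sketch of that wrapper (names assumed, as before):

```csharp
using System;
using System.Threading;

// Wraps the inner IAsyncResult, exposing the caller's original user state
// while keeping the caching state reachable from InvokeEnd.
class CachingAsyncResult : IAsyncResult
{
    private readonly IAsyncResult innerResult;
    private readonly CachingUserState cachingState;

    public CachingAsyncResult(IAsyncResult innerResult, CachingUserState cachingState)
    {
        this.innerResult = innerResult;
        this.cachingState = cachingState;
    }

    public CachingUserState CachingState { get { return this.cachingState; } }
    public IAsyncResult InnerResult { get { return this.innerResult; } }

    // The caller must see *its* user state, not our internal one.
    public object AsyncState { get { return this.cachingState.OriginalUserState; } }
    public WaitHandle AsyncWaitHandle { get { return this.innerResult.AsyncWaitHandle; } }
    public bool CompletedSynchronously { get { return this.innerResult.CompletedSynchronously; } }
    public bool IsCompleted { get { return this.innerResult.IsCompleted; } }
}
```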
Now we can start with the implementation of InvokeBegin. The operation starts like the synchronous version – first check whether we have the desired output in the cache. Then the method sets up the “caching user state”, an object which will be passed along the asynchronous calls and will be available at the callback. If the result is cached, we’ll delegate the call to the GetValue method on the CachedResult object, which will return the output and return value from the cache; if the result is not cached, we’ll instead delegate the call to InvokeBegin on the original invoker.
When either call returns (i.e., the callback is called), our callback implementation will turn around and call the callback passed by the caller of its own InvokeBegin, to complete the callback chain up to the first caller. It’s then that caller’s responsibility to call InvokeEnd on the caching invoker.
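The two paragraphs above could be sketched as follows (again under the assumed names; a Func delegate stands in for the cached item's GetValue method):

```csharp
using System;

public IAsyncResult InvokeBegin(object instance, object[] inputs,
    AsyncCallback callback, object state)
{
    string cacheKey = this.CreateCacheKey(inputs);
    CachedResult cached =
        System.Runtime.Caching.MemoryCache.Default[cacheKey] as CachedResult;

    CachingUserState cachingState = new CachingUserState
    {
        CacheKey = cacheKey,
        ResultWasCached = cached != null,
        OriginalCallback = callback,
        OriginalUserState = state,
    };

    IAsyncResult inner;
    if (cached != null)
    {
        // Cached: "invoke" a delegate that just returns the cached result,
        // so the async pattern is preserved without touching the service.
        Func<CachedResult> getValue = () => cached;
        cachingState.GetValueDelegate = getValue;
        inner = getValue.BeginInvoke(this.Callback, cachingState);
    }
    else
    {
        // Not cached: start the real operation via the wrapped invoker.
        inner = this.originalInvoker.InvokeBegin(instance, inputs, this.Callback, cachingState);
    }

    return new CachingAsyncResult(inner, cachingState);
}

private void Callback(IAsyncResult asyncResult)
{
    // Chain back to the caller's callback, wrapping the result so the caller
    // sees its own user state and InvokeEnd can recover the caching state.
    CachingUserState cachingState = (CachingUserState)asyncResult.AsyncState;
    if (cachingState.OriginalCallback != null)
    {
        cachingState.OriginalCallback(new CachingAsyncResult(asyncResult, cachingState));
    }
}
```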
When InvokeEnd is called, the caching invoker will unwrap the caching user state from the IAsyncResult parameter passed to it. If the result was cached, it will end the call to GetValue on the cached result to retrieve the cached outputs, and then return the operation result to the caller. If the result was not cached, the method will call InvokeEnd on the original invoker to finish the call to the service operation, and it will then insert the operation result into the cache, prior to returning the result to the caller.
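A sketch of InvokeEnd under the same assumptions (self-hosted MemoryCache variant):

```csharp
using System;
using System.Runtime.Caching;

public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
{
    CachingAsyncResult cachingResult = (CachingAsyncResult)result;
    CachingUserState cachingState = cachingResult.CachingState;

    if (cachingState.ResultWasCached)
    {
        // Finish the delegate call that read from the cache.
        CachedResult cached =
            cachingState.GetValueDelegate.EndInvoke(cachingResult.InnerResult);
        outputs = cached.Outputs;
        return cached.ReturnValue;
    }

    // Finish the real operation call...
    object returnValue = this.originalInvoker.InvokeEnd(
        instance, out outputs, cachingResult.InnerResult);

    // ...and cache its result before returning it to the caller.
    MemoryCache.Default.Set(
        cachingState.CacheKey,
        new CachedResult { ReturnValue = returnValue, Outputs = outputs },
        DateTimeOffset.Now.Add(this.cacheDuration));
    return returnValue;
}
```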
This diagram (similar to the one in the post for chaining asynchronous calls) shows the case where the result is not cached. The case in which the result is cached is similar, with the call to InvokeBegin replaced by the BeginInvoke call of the cacheable item delegate, and similarly on the end call.
And finally for testing. I’m using a “timed” WriteLine function which prints out the timestamp along with every call. Notice that the calls to the Add operation (which is not cached) take about 1 second to complete. For all the other calls (which are cached), the first call takes about 1 second as well, but the next call returns almost immediately.
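The timed WriteLine and the test calls might look roughly like this (the ICalculator proxy and operation names are the illustrative ones used earlier in this post, not necessarily those in the real test code):

```csharp
using System;

static class TestClient
{
    // Prints a timestamp alongside every message, making the latency
    // difference between cached and non-cached calls visible.
    static void TimedWriteLine(string format, params object[] args)
    {
        Console.WriteLine("[{0:HH:mm:ss.fff}] {1}",
            DateTime.Now, string.Format(format, args));
    }

    static void RunTests(ICalculator client)
    {
        TimedWriteLine("Add(3, 4) = {0}", client.Add(3, 4));         // ~1s (never cached)
        TimedWriteLine("Add(3, 4) = {0}", client.Add(3, 4));         // ~1s again
        TimedWriteLine("Reverse(abc) = {0}", client.Reverse("abc")); // ~1s (first call)
        TimedWriteLine("Reverse(abc) = {0}", client.Reverse("abc")); // near-instant (cached)
    }
}
```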
And that’s it for the code.
Instance providers, or how we can control the instances of the service classes which are used by WCF.
[Code in this post]
[Back to the index]
Carlos Figueira http://blogs.msdn.com/carlosfigueira Twitter: @carlos_figueira http://twitter.com/carlos_figueira