One strategy to achieve scale-out is replicating data; another is partitioning it. Each has its own advantages and limitations, and you’d choose one over the other depending on the nature of the data itself as well as the access pattern. While Service Broker naturally provides a mechanism for achieving scale-out when the data is replicated, it is also possible to use Service Broker to build highly scalable apps over partitioned data. The key to achieving this lies in data-dependent routing, which is the subject of this post.

But before we go into that topic, let us look at what Service Broker offers for talking to replicated services. The main purpose of the routing provided by Service Broker is to decouple the target service specification from its location. When initiating a dialog, the target service is specified using the service name and optionally the broker instance of the target broker (i.e. database). Routes are database objects that help the broker determine where to deliver messages. A route maps a service name, and optionally a broker instance, to the address of a remote SQL Server instance’s endpoint. Routes thus enable you to move services from one database to another without rewriting the application. A target service specification may match multiple routes, in which case the broker will choose one of them in a round-robin fashion. This policy lets you replicate services and distribute load across the replicas.

When the data (and hence the services providing an interface to it) is not replicated but partitioned, the routing infrastructure will not help. In such a case, you would want your messages to be routed based on what is inside them. For example, if a table of sales transactions is partitioned by region across several databases, with a service in each database used for querying the sales information, then a request for sales info for a region should be routed to the service that is serving that region. Data-dependent routing can be easily implemented as a service that either forwards an incoming conversation or redirects the initiating service to the right destination. In this example, we will look at the design and implementation of a forwarding-style data-dependent routing service. You can obtain the code from the SSB CodeGallery here.


The above figure illustrates how the routing service routes and forwards messages. Let’s say a service S3 begins a dialog D3 with the routing service and sends it a message (1). When the routing service receives the message, it checks whether a mapping exists for conversation D3 in its mapping table. If no mapping exists, it invokes the classifier function with the message body to obtain the target service name (2) and then begins a dialog D4 with that service (3). It then creates mappings for both the inbound and the outbound conversations (4) and finally forwards the received message on conversation D4 (5).

If the target service S4 then sends a reply back on the same conversation D4, the routing service will find the mapping from D4 to D3 and simply forward the reply message on D3 without having to classify again.

If the routing service receives an “End Dialog” message, it checks whether an outbound conversation exists for this conversation and, if so, ends it; in either case it then ends the inbound conversation. If the routing service receives an “Error” message, it likewise ends any outbound conversation, but with an error that wraps the incoming error message, and then ends the inbound conversation.
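Putting the three cases together, the body of the activated proc looks roughly like this. This is a simplified sketch, not the sample’s actual code: the queue, service, contract, and message type names are placeholders, the real proc must reuse the inbound dialog’s contract and message type, and transaction handling is omitted for brevity.

DECLARE @dialog   UNIQUEIDENTIFIER, @outbound UNIQUEIDENTIFIER,
        @msg_type SYSNAME, @body VARBINARY(MAX), @target NVARCHAR(256);

WAITFOR (RECEIVE TOP (1)
             @dialog   = conversation_handle,
             @msg_type = message_type_name,
             @body     = message_body
         FROM routing_queue), TIMEOUT 5000;

-- Look up the outbound dialog mapped to the one we received on, if any.
SELECT @outbound = outbound_handle
FROM resolved_conversations WHERE inbound_handle = @dialog;

IF @msg_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
BEGIN
    IF @outbound IS NOT NULL END CONVERSATION @outbound;
    END CONVERSATION @dialog;
END
ELSE IF @msg_type = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
    IF @outbound IS NOT NULL
        END CONVERSATION @outbound
            WITH ERROR = 50001 DESCRIPTION = N'Error forwarded from initiator';
    END CONVERSATION @dialog;
END
ELSE
BEGIN
    IF @outbound IS NULL
    BEGIN
        -- First message on this dialog: classify, open the outbound dialog,
        -- and remember the mapping in both directions.
        EXEC sample_classifier @body, @target OUTPUT;
        BEGIN DIALOG @outbound
            FROM SERVICE routing_service TO SERVICE @target
            ON CONTRACT [//contoso.com/SalesContract];
        INSERT resolved_conversations VALUES (@dialog, @outbound);
        INSERT resolved_conversations VALUES (@outbound, @dialog);
    END
    SEND ON CONVERSATION @outbound
        MESSAGE TYPE [//contoso.com/SalesRequest] (@body);
END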


The data-dependent routing facility is implemented as a service called []. It uses a table, [resolved_conversations], to store the mapping of inbound to outbound conversations. The service is internally activated to execute the [route_messages] stored proc. The stored proc implements the logic presented above when it receives an “End Dialog” or “Error” message. When it receives any other type of message, it invokes a configured stored proc to perform the classification. This stored proc’s name is stored in the [routing_service_config] table. You can implement your custom logic in this stored proc, which must have the following signature:
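For reference, the two supporting tables could be declared along these lines; the column names are guesses based on the description above, not necessarily the sample’s exact schema:

CREATE TABLE resolved_conversations (
    inbound_handle  UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,  -- dialog a message arrives on
    outbound_handle UNIQUEIDENTIFIER NOT NULL               -- dialog to forward it on
);

CREATE TABLE routing_service_config (
    classifier_proc SYSNAME NOT NULL   -- name of the stored proc that classifies messages
);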

CREATE PROCEDURE your_custom_classifier (
      @message_body VARBINARY(MAX),
      @to_service_name NVARCHAR(256) OUTPUT)

That is, the classifier stored procedure takes the message body as input and returns the target service name as output. You can write the stored procedure in plain T-SQL or in any CLR language by using SQLCLR hosting and the in-proc managed provider.

I have provided a sample classifier called [sample_classifier] which assumes XML message bodies and looks up the target service in the message itself by running an XQuery against /message/toServiceName.
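Such a classifier takes only a few lines of T-SQL; a sketch of that approach (assuming well-formed XML bodies) might look like this:

CREATE PROCEDURE sample_classifier (
      @message_body    VARBINARY(MAX),
      @to_service_name NVARCHAR(256) OUTPUT)
AS
BEGIN
    -- Cast the raw body to XML and extract the target service name from it.
    DECLARE @xml XML;
    SET @xml = CAST(@message_body AS XML);
    SELECT @to_service_name =
        @xml.value('(/message/toServiceName)[1]', 'NVARCHAR(256)');
END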


Assumptions:

  1. The routing service supports the union of all the contracts supported by the target services. The routing broker must also contain all the associated message types.
  2. The routing broker (i.e. database) must contain Service Broker routes for all possible target services; that is, it depends on dialog-level routing being already set up.

Limitations:

  1. The first limitation is the security model. In this example, you are delegating to the routing service the authority to talk to the final target on your behalf, so you lose end-to-end security: the target service cannot authenticate and authorize the initiator. If this is a requirement, then it might be simpler to use a redirection approach rather than a forwarding approach. That is, instead of the routing service forwarding messages on your behalf, it simply classifies the message and replies to the initiator with the resolved service name; the initiator then begins a new dialog with that target service. A more elaborate approach that maintains end-to-end security while still forwarding involves using proxy identities. (But that’ll be the subject of another posting.)
  2. The routing service itself can become a bottleneck. Solutions include preferring redirection over forwarding (as described above) as well as replicating the routing service across multiple instances and relying on the round-robin route-picking policy to balance the load across the routing services (i.e. ‘m’ initiators talk to ‘n’ replicated routers that talk to ‘l’ partitioned targets).
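For comparison, the per-message branch of a redirection-style router collapses to classifying and replying; a rough sketch, with the queue and reply message type names as placeholders:

DECLARE @dialog UNIQUEIDENTIFIER, @body VARBINARY(MAX), @target NVARCHAR(256);

WAITFOR (RECEIVE TOP (1)
             @dialog = conversation_handle,
             @body   = message_body
         FROM routing_queue), TIMEOUT 5000;

-- Resolve the target and hand its name back to the initiator, which
-- then begins a new dialog directly with that service.
EXEC sample_classifier @body, @target OUTPUT;
SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//contoso.com/ResolvedService]
    (CAST(@target AS VARBINARY(MAX)));
END CONVERSATION @dialog;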