A while ago I introduced the features of the Routing Service, and "Protocol Bridging" was among them. But what does this mean and what does it really do for me?
Protocol Bridging in the context of the Routing Service means that the service can receive a message over one binding or transport and forward it over a completely different one, translating the message between the two as it goes. For example, suppose your client is configured with BasicHttpBinding, while the back-end service exposes only a NetTcpBinding endpoint.
Normally, a client and a service configured with different bindings wouldn’t be able to communicate. The only options would be to update the client to use NetTcpBinding (not always possible, especially in interoperability cases), or to reconfigure the back-end service to offer an additional BasicHttpBinding endpoint for the legacy clients. That may not be possible either (depending on whether you own the service), and it introduces a maintainability issue: you now have more endpoints to expose and publicize.
Protocol bridging at the Routing Service eliminates the requirement for your client and service to always communicate the same way. This kind of decoupling is great because it also means that additional communication requirements (such as support for a different binding) don’t need to impact “product” code, in the sense that the back-end service can remain unchanged.
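As a sketch of what this looks like in practice, the configuration below routes BasicHttpBinding clients to a NetTcpBinding back-end. The addresses, endpoint names, and filter-table name are placeholders; the Routing Service contract and the routing configuration elements are from the shipped System.ServiceModel.Routing schema.

```xml
<system.serviceModel>
  <services>
    <service name="System.ServiceModel.Routing.RoutingService"
             behaviorConfiguration="routingBehavior">
      <!-- Clients talk to the router over BasicHttpBinding -->
      <endpoint address="http://localhost:8000/router"
                binding="basicHttpBinding"
                contract="System.ServiceModel.Routing.IRequestReplyRouter"
                name="reqReplyEndpoint" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="routingBehavior">
        <routing filterTableName="routingTable" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <client>
    <!-- The router forwards to the back-end over NetTcpBinding -->
    <endpoint name="backendService"
              address="net.tcp://localhost:9000/service"
              binding="netTcpBinding"
              contract="*" />
  </client>
  <routing>
    <filters>
      <filter name="matchAll" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="routingTable">
        <add filterName="matchAll" endpointName="backendService" />
      </filterTable>
    </filterTables>
  </routing>
</system.serviceModel>
```

Neither the client nor the back-end service needs to change; the binding mismatch is handled entirely at the router.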
However, some combinations of communication patterns still don’t make sense. In general, the communication shape at the Routing Service must always be symmetrical: if communication is Request-Response between the client and the Routing Service, then it must also be Request-Response between the Routing Service and the back-end service. This is a fairly sensible requirement: if my client is one-way and the service is configured as request-response, what should the Routing Service do with the response it gets? Is getting a response even expected behavior? Conversely, if the client is request-response and the service is one-way, what should the Routing Service provide as a response, and when?

Unfortunately, while this requirement is reasonable, there are situations in which it is rather inconvenient. For example, a nice (though currently unsupported) scenario would be for the Routing Service to accept a message via BasicHttp (or some other two-way channel) and then turn around and push that message into a queue (such as MSMQ). Because the incoming channel in this case is request-response and MSMQ is one-way, this scenario won’t work through the Routing Service out of the box.
How does the Routing Service act on messages in order to convert them from one protocol to another? In general we follow a simple set of rules when passing messages:
Copy the body of the message to the outbound message (we don’t touch body data).
Send the message out the outbound channel, at which point all of the headers and other envelope data specific to that communication protocol/transport will be created and added.
Note that we have to take these steps on both request and response messages. The conversions performed when receiving a message from a client need to be undone when passing the response back, so that the client receives a response it understands.
These processing steps take place in the Routing Service’s SoapProcessing behavior. This is an endpoint behavior that we apply to all client (outgoing) endpoints when the Routing Service starts up. If you have a protocol that the Routing Service doesn’t understand, or wish to override the default processing behavior, you can disable this behavior either for the entire Routing Service or just for particular endpoints. This can be accomplished (as usual) either through configuration of the Routing Behavior (soapProcessing=”false”) or through the RoutingConfiguration object (in code). To turn off SOAP processing for a particular endpoint, create a soapProcessing behavior, set its processMessages attribute to false, and attach the behavior to the endpoint you don’t want the default processing code to run on. When the Routing Behavior sets up the Routing Service, it will skip reapplying the endpoint behavior since one already exists.
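The two opt-out points described above might look like this in configuration. The behavior and endpoint names are placeholders; note that in the RTM configuration schema the attribute on the routing service behavior appears as soapProcessingEnabled.

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="routingBehavior">
      <!-- Turn off SOAP processing for the entire Routing Service -->
      <routing filterTableName="routingTable" soapProcessingEnabled="false" />
    </behavior>
  </serviceBehaviors>
  <endpointBehaviors>
    <behavior name="noProcessing">
      <!-- Or turn it off for one outgoing endpoint only -->
      <soapProcessing processMessages="false" />
    </behavior>
  </endpointBehaviors>
</behaviors>
<client>
  <!-- Attach the endpoint behavior to the endpoint that should skip
       the default processing -->
  <endpoint name="legacyService"
            address="net.tcp://localhost:9000/legacy"
            binding="netTcpBinding"
            contract="*"
            behaviorConfiguration="noProcessing" />
</client>
```

In code, the equivalent of the service-wide switch is setting SoapProcessingEnabled to false on the RoutingConfiguration object before applying it to the Routing Service.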
That’s protocol bridging!