In Part I we discussed the Service and Internet Bus as well as some of the connectivity challenges associated with addressing individual services traversing multiple network boundaries and protocols. Here we will discuss the messaging infrastructure and the different binding and event patterns enabled by the Messaging Fabric.
The Messaging Fabric is a way to expose services on the internet, with integrated access control, from virtually any machine. Currently we require a small set of open outbound ports. In later builds we hope to be able to use port 80 for listeners.
The .NET Services SDK provides a client API for connecting to and using the Service Bus. This API is modeled on, and extends, the WCF API. There is a set of Service Bus relay bindings that mirrors the standard set of WCF bindings, with the exception of the MSMQ and peer TCP bindings.
How does a basic HTTP service work? Under the covers IIS, for example, creates an HTTP listener. This involves a Win32 API call to register a listener with http.sys, a.k.a. the HTTP protocol stack. The listener is a named-pipe connection in the kernel, and http.sys dispatches HTTP requests to the right listener. With the Service Bus we do the same thing: we connect to the cloud, register a listener in the cloud, and the Service Bus routes messages to our socket.
We support SOAP 1.1 and SOAP 1.2. A message version of None means plain HTTP with no SOAP envelope.
We compose WS-Security headers properly, and we recommend you use message security (encryption) so that there is no way for us to look into your messages. SSL terminates at the relay, so messages travel within the messaging fabric unencrypted. We won’t look at them, but if you encrypt your messages the point is moot.
We support reliable messaging (WS-ReliableMessaging) and streaming (there is no example in the SDK, but we support it), as well as the full extensibility model, the web programming model, and metadata exchange.
In other words, the Service Bus supports everything you know from WCF except atomic transactions (not yet). ACID transactions are problematic in the cloud, where latency may be high, but the real issue is that WS-Coordination asks transaction coordinators (say, MSDTC) to cooperate, and they know nothing about the Service Bus.
There’s also no RFC 2617 HTTP authentication or TCP-level authentication with the Service Bus, because the endpoint you’re talking to is the relay, and we use claims-based tokens to authenticate. Those schemes are totally transparent to the bus.
Receiver Behind a NAT, Firewall, and Dynamic IP
To listen on the internet via the bus, the receiver attaches to a relay node (chosen via NLB) and creates an outbound bi-directional socket on port 828 (SSL). A preamble on the socket creation says the receiver wants to subscribe to the supplied endpoint address. With one-way messaging, provided no one is already listening on that address (or below it), the relay sets up the subscription and we’re in business.
When the sender comes along, it picks some other node and wants to send a message to the receiver’s endpoint. We look up the listener for the given endpoint in the naming and routing fabric, and the message gets routed through the fabric, down the open socket, to the receiving service.
Note: One-way is the fundamental communication mode; all other modes build on it.
The NetEventRelayBinding is a variation of one-way that allows you to have multiple listeners on the same name, even though they are connected to different front-end nodes. This is the mode to use if you want publish/subscribe.
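As a sketch, a pub/sub endpoint can be wired up in app.config using the event relay binding. The solution name, service type, and contract below are hypothetical placeholders, and this assumes the SDK’s binding extensions are registered:

```xml
<system.serviceModel>
  <services>
    <service name="Example.EventService">
      <!-- Any number of listeners may register this same address; -->
      <!-- each published message is delivered to all of them. -->
      <endpoint address="sb://mysolution.servicebus.windows.net/events/"
                binding="netEventRelayBinding"
                contract="Example.IEventContract" />
    </service>
  </services>
</system.serviceModel>
```

Because this binding is one-way, the operations on the contract must be marked one-way as well.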
The NetTcpRelayBinding is the most efficient binding. Even though we are using SOAP (i.e. XML) messages, the binary encoding mechanism is dictionary-based, so common strings get optimized away as keys in the dictionary – and the longer you send, the more efficient it gets. It’s much more efficient than HTTP, which always has text-based headers. You should always use this binding unless you either want pub/sub semantics or you need interop/HTTP-based access.
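A minimal client endpoint using the TCP relay binding might look like the following app.config fragment (the solution name and contract are illustrative, not real names):

```xml
<system.serviceModel>
  <client>
    <!-- Binary-encoded SOAP over the relayed TCP connection -->
    <endpoint name="relayClient"
              address="sb://mysolution.servicebus.windows.net/myservice/"
              binding="netTcpRelayBinding"
              contract="Example.IMyContract" />
  </client>
</system.serviceModel>
```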
Registration of the receiver is the same as with the one-way binding – an outbound bi-directional socket. The sender gets allocated a front-end node in the fabric via NLB. This front-end node has a socket-to-socket forwarder component – a socket pump – which the sender connects to using an outbound socket. The preamble to the connection carries the endpoint URI the sender wants to connect to. A one-way rendezvous control message is sent through the fabric to the receiver, telling it to create an outbound connection to the socket forwarder. The result is a mediated socket between the sender and the receiver. You can do anything with this – we just forward the bytes…
NetTcpRelayBinding has a special mode called hybrid mode, which allows direct connections between the sender and the receiver, i.e. messages no longer go via the Service Bus.
In hybrid mode, once a connection has been established via the Service Bus as usual, two one-way probing messages are sent from both the sender and the receiver (they’re null messages; the contents are not important). The Service Bus examines what IP addresses and ports are open for these probing messages. If the two machines are on the same subnet, the addresses will be the same. If a machine is sitting behind a NAT, the port number may be incremented by the NAT between the two message calls (e.g. port n and then port n+1, or n and n+2). NATs typically do not allocate ports randomly but use a predictable algorithm. Based on the ports the NAT selects, the Service Bus tries to predict the algorithm being used and passes this information on to the caller(s). The sender and receiver then try to open a socket to the predicted IP + port number. If the predictions are correct and the timing is perfect (the packets cross on the wire at the same time), a socket is created, because a TCP connection can be established by a simultaneous open from both sides. So even with firewalls and NATs involved, we can get a direct connection provided the NAT’s port allocation is predictable and the network is not too busy (which would throw the timing off).
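Hybrid mode is opted into on the binding; as a sketch, assuming the SDK’s binding configuration schema:

```xml
<bindings>
  <netTcpRelayBinding>
    <!-- "Relayed" (the default) keeps all traffic in the fabric; -->
    <!-- "Hybrid" starts relayed and upgrades to a direct socket -->
    <!-- between sender and receiver when NAT prediction succeeds. -->
    <binding name="hybridTcp" connectionMode="Hybrid" />
  </netTcpRelayBinding>
</bindings>
```

Reference this named binding configuration from the endpoint to enable the upgrade.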
This is the same technique that many instant messaging apps, such as Messenger, use to transfer files between users (transfers start very slowly and then speed up once the direct socket is in place). This is especially important for video conversations and remote-control apps, where latency is an issue.
Note: There is currently a bug that prevents this from working with WS-Security and WS-ReliableMessaging because these modes open side channels. This will be fixed in the next release.
We can also use HTTP with the bus, via WebHttpRelayBinding (HTTP/REST), BasicHttpRelayBinding (SOAP), WSHttpRelayBinding (SOAP), and WS2007HttpRelayBinding (SOAP).
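For example, a REST-style endpoint could be exposed through the relay with the web HTTP relay binding; the service name, address, and contract below are illustrative placeholders:

```xml
<system.serviceModel>
  <services>
    <service name="Example.RestService">
      <!-- Plain HTTP/REST through the relay; no SOAP envelope -->
      <endpoint address="https://mysolution.servicebus.windows.net/rest/"
                binding="webHttpRelayBinding"
                behaviorConfiguration="webBehavior"
                contract="Example.IRestContract" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="webBehavior">
        <!-- Enables the WCF web programming model for this endpoint -->
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```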
The receiver opens a one-way outbound TCP connection as usual. The sender sends an HTTP request. The bus detects SOAP 1.1 and 1.2 requests and treats them specially. For HTTP/HTTPS messages, the front-end node opens an HTTP socket forwarder and a one-way rendezvous control message is routed to the receiver, instructing it to connect to the sender’s front-end node. Messages are then forwarded, and if HTTP keep-alive is set the connections are kept alive.
Note: up until M5 there was a message buffer that worked around the non-connectedness of HTTP. As of M5, queues and routers are the correct way to deal with this.
Access Control Governed by Rules
The receiver, before connecting to the relay, first has to acquire a token from the access control service with the #Listen claim (permission), allowing the receiver to listen on the Service Bus. Once it has this token, the receiver sends a listen request to the Service Bus with the token attached. The token is evaluated and the signature is checked and if the permission is there the one-way outbound connection can be opened.
The sender, before connecting to the relay, also needs to acquire a token from the Access Control service, this time with the #Send claim. This is passed on to the relay in the header of the message, the token is evaluated and removed at the relay and the message is passed on to the receiver.
WS-Security headers are composed with this mechanism and are passed on as expected.
If desired, you can opt out of this mechanism by setting relayClientAuthenticationType="None", in which case senders are unauthenticated. The receiver then needs to handle access control itself, rather than senders being checked when they try to access the relay.
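The opt-out is a binding-level security setting; a sketch for the WS-HTTP relay binding:

```xml
<bindings>
  <wsHttpRelayBinding>
    <binding name="openRelay">
      <!-- The relay no longer demands a #Send token from senders; -->
      <!-- the receiving service must authenticate callers itself. -->
      <security relayClientAuthenticationType="None" />
    </binding>
  </wsHttpRelayBinding>
</bindings>
```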
As of Milestone 5 (March 31 2009 release), we can create a name in the Service Bus namespace that has a “queue” associated with it.
The queue manager is an application that creates a queue policy for a particular ServiceBus name. You post a queue policy element into the service registry using AtomPub. The manager can be any application: the sender, the receiver or another app. It just needs the new management permission.
Once the policy has been created, you can send messages to the queue using one of the standard TCP bindings or an HTTP POST, PUT, or DELETE (but not GET!). There is a 64 KB message limit, and streaming is not supported (it’s a queue!).
Queue ‘Tail’ Protocol
Queue ‘Head’ Protocol
Queue ‘Lock’ Protocol
Routers work in a similar way to queues: you create a router policy and put it into the Service Bus namespace under an arbitrary name. Receivers can subscribe to this name by listening to it; any number of one-way receivers can listen to a particular router. Senders can then send messages to the router using HTTP, HTTPS, or one of the TCP bindings.
There is a message distribution element in the policy that you set to either “All” or “1”. If it’s set to “All”, every subscriber receives the message; if it’s set to “1”, only one subscriber gets the message (i.e. load-balancing semantics).
As well as listening via TCP, receivers can subscribe using HTTPS. The router pushes messages down to the subscriber using either HTTP or HTTPS. Messages can be plain HTTP, SOAP 1.1, or SOAP 1.2.
Notes on Security
Use WS-Security and hide your payloads using message security
SSL is always used for the one-way outbound listener channels. For your own connections you can just leave SSL on, as there’s not much overhead.
Because SSL channels terminate at the Service Bus, your messages travel in the clear through the relay fabric. If you don’t trust Microsoft, encrypt your messages.
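Turning on message security, so payloads stay encrypted all the way through the relay, is again a binding setting; a sketch for the TCP relay binding:

```xml
<bindings>
  <netTcpRelayBinding>
    <binding name="endToEndSecure">
      <!-- Message security encrypts the payload end to end, -->
      <!-- so the relay only ever sees ciphertext. -->
      <security mode="Message" />
    </binding>
  </netTcpRelayBinding>
</bindings>
```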
Following is a list of relevant videos; you can find these and more at http://www.microsoft.com/azure/videos.mspx