This is the sixth and final post of a walkthrough to set up a mixed-mode platform- and infrastructure-as-a-service application on Windows Azure using Couchbase and ASP.NET. For more context on the application, please review this introductory post.
To wrap up this series, there’s one last configuration step I want to cover, and that’s highlighted below in the all-up architecture diagram I introduced in the first post.
At this point, the Couchbase cluster is accessible only via Remote Desktop (RDP) or through the TapMap application deployed to the same virtual network. I can see the current RDP endpoint configuration in the portal by visiting the endpoints tab of one of the Couchbase cluster VMs. Note too that a public-port-to-private-port mapping can be specified; in this case the public RDP endpoint isn’t exposed on the traditional port of 3389 but rather on a ‘random’ one, 59485, which makes it a bit less discoverable to a port scanner.
Right now any administrative tasks I need to perform on the Couchbase cluster require me to launch a remote desktop session. That’s ok for demo purposes, but it’s unlikely I’d want to give my <insert your VM-hosted software here> administrator the keys to the Azure VMs, so I need to provide a way for him or her to remotely configure the cluster.
That’s accomplished by opening up an endpoint on the VM just enough to let the admin traffic through. Per Couchbase’s documentation, that means opening port 8091, which grants access both to the web-based administration console and to the REST-based command line interface.
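One nice consequence of this design is that the console and the REST API share the single port 8091, so an admin URL is just the host plus that port. A minimal sketch (the host name and `/pools` path are illustrative; `/pools` is the REST API's cluster-information resource per Couchbase's documentation):

```python
# Couchbase serves both the admin console and its REST API on one port,
# so only a single TCP endpoint needs to be opened for administration.
ADMIN_PORT = 8091

def admin_url(host, path="/pools"):
    """Build a Couchbase REST admin URL; host is a placeholder value."""
    return f"http://{host}:{ADMIN_PORT}{path}"

print(admin_url("couchbase1"))
```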
Creating the endpoint is quite straightforward: I select Add Endpoint from the portal interface, where I can create a new endpoint to (or load balance an existing one with) the current VM instance. For the first VM instance associated with the cloud service (here, couchbase.cloudapp.net) I can only add an endpoint, since there aren’t any existing ones.
Then I just provide a name and the port type and number desired. I can create a UDP or TCP port, and since the Couchbase administration tools use REST/HTTP calls, TCP is the obvious choice here. The default admin port for Couchbase is 8091, so that needs to be specified as the private port; however, I can expose a public port with a completely different value. Here I arbitrarily selected 16873.
Once the port is created, when I navigate to http://couchbase.cloudapp.net:16873 I’m greeted with the administration portal and can manage the server cluster remotely. Couchbase prompts for server cluster user id and password, and those are the only credentials that need to be provided to access the cluster.
The configuration at this point maps all requests for the URL http://couchbase.cloudapp.net:16873 to that one VM instance within the cluster. That may or may not be what you want depending on the context of what you’re accessing in the VM. In this scenario, any of the instances can carry out administrative tasks on the Couchbase cluster, so I may as well allow any instance in the cloud service to respond to the request, not just couchbase1.
For the other VMs in the cloud service, I likewise create an endpoint, but this time select the recently created endpoint to load-balance on. I can actually provide a different port to map to (and an endpoint name) for each individual VM participating in the load balance rotation, but in this case, all of the VMs I created were from the same image and have 8091 as the Couchbase admin port.
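Conceptually, the load-balanced endpoint spreads requests arriving at the one public port across every VM in the set, each listening on the same private port (8091). A rough sketch of that idea, assuming made-up instance names (the real Azure load balancer distributes by connection hashing rather than strict round robin, but the effect is the same: any VM can field any request):

```python
from itertools import cycle

# Simulate successive admin requests being spread across the cluster;
# every instance exposes the same private port, so any of them can answer.
instances = ["couchbase1", "couchbase2", "couchbase3"]  # illustrative names
rotation = cycle(instances)
targets = [next(rotation) for _ in range(6)]
print(targets)
```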
By the way, if you think it’s a bit tedious setting up each of these ports in the portal, well, you’re right! And like everything else I’ve configured via the portal in the previous blog posts, you can use the Windows Azure PowerShell cmdlets to do it all from script. There’s a bit of a learning curve, but the time invested is well worth it, since you can crank out some pretty sophisticated configuration tasks with just a few lines of script. Here, for instance, is the script to set the load-balanced endpoint on all of the VM instances in my Couchbase cluster.
$svcName = "couchbase"
$publicPort = 16873
$endPointName = $svcName + "Admin"
$LBSetName = $endPointName + "-" + $publicPort
foreach ($VM in Get-AzureVM -ServiceName $svcName)
{
    # The load-balancer probe targets the private port (8091) on each VM
    Add-AzureEndpoint -Name $endPointName -LBSetName $LBSetName `
        -Protocol tcp -PublicPort $publicPort -LocalPort 8091 `
        -ProbeProtocol http -ProbePort 8091 -ProbePath "/" `
        -VM $VM |
    Update-AzureVM
}
So I’m done! I now have a working mixed-mode application consisting of an ASP.NET application running in two Windows Azure web role instances in one subnet of a Windows Azure Virtual Network that communicates with a Couchbase cluster running on three virtual machine instances in a second subnet. Phew!
While I learned a lot working with Couchbase and ASP.NET, I hope you can see beyond the use of those specific technologies to architectural patterns and practices you can adapt for your own application, whether you are using PHP, Node.js, Mongo, SQL Server, or any of a host of other software that you can deploy and run in the Windows Azure cloud.
Can you give us a breakdown of the cost per month to run this configuration?
As I set it up in the blog posts (keeping in mind some of the constraints I have regarding the limits of my internal account), the cost would be about $350 per month, driven by 2 small VMs (using two to get the SLA) for the ASP.NET app and 3 VMs for the Couchbase cluster. There's also some bandwidth (download only is charged), but that's practically noise compared to the VM cost. Also, the current cost for the Couchbase VMs is at two-thirds of what it will be when the feature is fully released. The calculator at www.windowsazure.com/.../calculator can provide some additional insight.
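As a back-of-envelope check on that figure (the hourly rate below is an assumed illustrative number, not a quoted Azure price; use the pricing calculator for current rates):

```python
# Rough monthly cost estimate for the five compute instances in this setup.
vm_count = 2 + 3            # 2 small web role instances + 3 Couchbase VMs
hourly_rate = 0.096         # ASSUMED small-instance rate in USD per hour
hours_per_month = 730
monthly = round(vm_count * hourly_rate * hours_per_month, 2)
print(monthly)
```

which lands right around the $350/month ballpark quoted above, before the small bandwidth charge.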
Thank you Jim for your response on my question and your blog entry on this topic. It has been very helpful.
You're more than welcome Zack!