In the November 2009 SDK we included an official storage client library. As an improvement over the StorageClient sample we previously shipped, the new official library supports a connection string format for easy single-value configuration of storage endpoints and account information. We tried to mimic the general format of a SQL connection string; the format is a series of name=value pairs separated by semicolons.
There are three general formats for the connection string, so let’s go over each in detail.
There are only two options you can specify in a development storage connection string. The first is self-explanatory: UseDevelopmentStorage. The only valid value is true – if it were false, you wouldn't have used the setting at all.
The second option is more interesting. Let's say you're running into an error you don't quite understand, and you can't get enough information from the debugger to figure out what's going wrong. You could use Fiddler to examine the HTTP requests as they go back and forth to the development storage, but Fiddler doesn't capture traffic sent to 127.0.0.1. Enter the second development storage setting: DevelopmentStorageProxyUri! Instead of the usual 127.0.0.1, the host of the Uri you specify in this setting is used for all accesses to the development storage. Set it to http://ipv4.fiddler and presto – development storage access shows up in Fiddler!
Unfortunately, there's an issue with the Fiddler support for blobs and queues returned from listing operations. Because Fiddler translates ipv4.fiddler into 127.0.0.1 before sending the request to the development storage, the absolute Uris returned in the response use 127.0.0.1 as the host instead of ipv4.fiddler. Since the November 2009 version of the storage client doesn't modify these Uris when creating objects, operations performed on them won't tunnel through Fiddler.
If that didn’t make any sense, fire up Fiddler and step through this code in the debugger:
var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler");
var blobClient = account.CreateCloudBlobClient();
var queueClient = account.CreateCloudQueueClient();
var container = blobClient.GetContainerReference("testcontainer");
var blob = container.GetBlobReference("testblob");
var queue = queueClient.GetQueueReference("testqueue");
// these create requests should show up in Fiddler
container.Create();
queue.Create();
blob.UploadText("some blob content");
// these list operations should show up too
var blobFromList = container.ListBlobs().Cast<CloudBlob>().First();
var queueFromList = queueClient.ListQueues().First();
// but these requests will not show up in Fiddler
blobFromList.FetchAttributes();
queueFromList.Clear();
If you take a look at blobFromList.Uri, you'll see that the host is 127.0.0.1 instead of ipv4.fiddler. If you need to, you can work around this by fixing up the Uri yourself and creating a new CloudBlob object.
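A minimal sketch of that workaround might look like the following (this assumes the CloudBlob constructor in the November 2009 library that takes a blob address string and a CloudBlobClient; blobFromList and blobClient are the variables from the code above):

// rebuild the Uri with the Fiddler host so subsequent requests are captured
var builder = new UriBuilder(blobFromList.Uri) { Host = "ipv4.fiddler" };
var fixedBlob = new CloudBlob(builder.Uri.ToString(), blobClient);
// this request should now show up in Fiddler
fixedBlob.FetchAttributes();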
UseDevelopmentStorage=true

This is the vanilla development storage connection string. Nothing too exciting, but it gets the job done.
UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler

This one is perfect for routing development storage traffic through Fiddler for debugging. Make sure Fiddler is running before you use it, or the client won't be able to connect!
Connecting to Windows Azure Storage in the cloud is a snap with connection strings. The three settings you need to specify are AccountName, AccountKey and DefaultEndpointsProtocol. AccountName is the name of your storage account and AccountKey is the key that’s provided to you on the developer portal. DefaultEndpointsProtocol can be set to either http or https and chooses which protocol to use.
Here's a simple example of a connection string for Windows Azure Storage in the cloud:

DefaultEndpointsProtocol=https;AccountName=azurestorage;AccountKey=base64key

Just replace azurestorage with your account name and base64key with the key displayed on the developer portal, and you're ready to go.
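Putting that string to work looks just like the development storage case – a quick sketch (the account name and key here are placeholders, not real credentials):

var cloudAccount = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=azurestorage;AccountKey=base64key");
var cloudBlobClient = cloudAccount.CreateCloudBlobClient();
// the default endpoints are derived from the account name and protocol,
// e.g. https://azurestorage.blob.core.windows.net/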
The connection string format also supports an advanced specification of all three endpoints as well as credential information such as a SAS signature for blob storage. You can use the BlobEndpoint, QueueEndpoint and TableEndpoint settings to set the endpoints to use for those services. You can also leave out endpoints for services you’re not using, but you’re required to specify at least one endpoint. It wouldn’t make a lot of sense to have account information without any endpoints to connect to. SAS signatures are specified with the SharedAccessSignature setting.
The only time you need connection strings of this form is when using the storage client with SAS credentials, or for testing purposes such as custom domains or proxies.
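For example, a custom-domain configuration might look like this (the domain names here are hypothetical, but the setting names are the ones described above):

BlobEndpoint=http://blobs.mydomain.com/;QueueEndpoint=http://queues.mydomain.com/;TableEndpoint=http://tables.mydomain.com/;AccountName=azurestorage;AccountKey=base64key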
This last example shows how to explicitly specify an endpoint and how to use the SharedAccessSignature setting so that the CloudStorageAccount class creates a StorageCredentialsSharedAccessSignature for you.
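A connection string of that form might look like the following (the endpoint and the SAS query string are hypothetical placeholders, not working credentials):

var sasAccount = CloudStorageAccount.Parse(
    "BlobEndpoint=http://blobs.mydomain.com/;SharedAccessSignature=sr=c&si=mypolicy&sig=base64sig");
// sasAccount.Credentials is now a StorageCredentialsSharedAccessSignature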