The following section provides some guidance about using external utilities with Azure Storage.
These are tools that client applications use to access or manage data in Azure Storage.
The fastest way to move data from Azure Blobs to Azure Files is to use AzCopy. You should run AzCopy from a VM in the same datacenter as the destination storage account.
AzCopy is now at release 2.5 and can be found here:
Figure 1: Azure Command prompt
Here is the syntax:
Figure 2: AzCopy Syntax
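As a sketch of the syntax above, here is how a script might assemble an AzCopy 2.x cross-account command. The account names, container, share, and key placeholders are all hypothetical; the positional source/destination arguments and the /S, /SourceKey, /DestKey, and /V options follow the 2.x command line:

```python
# Sketch: assembling an AzCopy 2.x command line. All names and keys
# below are placeholders, not real accounts.
import subprocess

source = "https://sourceaccount.blob.core.windows.net/sourcecontainer"
dest = "https://destaccount.file.core.windows.net/destshare"

cmd = [
    "AzCopy",
    source,                       # positional source
    dest,                         # positional destination
    "/S",                         # copy recursively
    "/SourceKey:<source-key>",    # account key for the source
    "/DestKey:<dest-key>",        # account key for the destination
    "/V:azcopy.log",              # write a verbose log file
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on a machine with AzCopy installed
```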
Here are some things to remember:
You can copy files that are in a file system directory, a blob container, a blob virtual directory, or a storage file share.
You can copy recursively as well.
You can copy a single blob, or multiple blobs using wildcards.
You can copy across storage accounts.
With geo-redundancy, you can copy blobs from the secondary region.
You can also copy snapshots to another storage account.
You can use response files to support automation.
You can use shared access signatures.
Log files can be generated.
AzCopy works in the storage emulator.
You can back up blobs.
You can migrate blobs to a different storage account.
The asynchronous copy blob operation runs in the background using spare bandwidth capacity, so there is no SLA on how quickly a blob will be copied.
Cross-account copies incur an egress fee.
You can copy the AzCopy binaries to wherever they are needed:
Figure 3: Location of AzCopy Binaries
See http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/introducing-asynchronous-cross-account-copy-blob.aspx for more information.
To protect against the source blob changing during a copy, you can take a lease on it; an infinite lease acts as a lock, making it easy for a client to hold on to the lease for as long as needed.
During a pending copy, the Blob service ensures that no client requests can write to the destination blob.
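At the REST level, the two operations above are a Copy Blob request and a lease acquisition. The sketch below only builds the requests (the URLs are hypothetical, and real calls also need an Authorization header or a SAS on both URLs):

```python
# Sketch of the REST calls behind cross-account copy and an infinite lease.
# URLs are illustrative; authentication is omitted.

def build_copy_request(dest_blob_url, source_blob_url):
    """Start an asynchronous Copy Blob: PUT to the destination blob,
    with x-ms-copy-source naming the source blob."""
    return ("PUT", dest_blob_url, {
        "x-ms-version": "2012-02-12",
        "x-ms-copy-source": source_blob_url,
    })

def build_infinite_lease_request(blob_url):
    """Acquire an infinite lease (a lock) on the source blob so it
    cannot change while the copy is pending; -1 means never expires."""
    return ("PUT", blob_url + "?comp=lease", {
        "x-ms-version": "2012-02-12",
        "x-ms-lease-action": "acquire",
        "x-ms-lease-duration": "-1",
    })

method, url, headers = build_copy_request(
    "https://destaccount.blob.core.windows.net/backup/report.csv",
    "https://sourceaccount.blob.core.windows.net/data/report.csv")
print(method, url)
print(headers["x-ms-copy-source"])
```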
There are three ways you get charged for Azure Storage: capacity, transactions, and outbound data transfer (bandwidth).
Azure Storage lets you specify how your data gets replicated. There are four redundancy tiers, which differ in how they work, how much they cost, and how they perform. You should also understand the difference between a primary and a secondary region.
In the event of a complete regional outage, or a regional disaster in which the primary location is not recoverable, your data is still durable. Azure keeps three replicas in each of the two locations (six copies in total) to ensure that each location can recover by itself from common failures (e.g., a disk, node, rack, or top-of-rack switch failing). However, because geo-replication happens with a delay, in the event of a regional disaster it is possible that recent changes that have not yet been replicated to the secondary region may be lost if the data cannot be recovered from the primary region. Regarding Azure Tables, there are no geo-replication ordering guarantees across objects with different partition key values; ordering is guaranteed only within a partition.
When a primary region goes down, how does Azure recover your data?
This table describes the capacity, throughput, and max size of a blob.
Storage capacity gets cheaper with scale, and pricing varies by region.
Every read or write operation incurs a transaction cost.
The Table service payload format used to be AtomPub, but JSON makes more sense for a variety of reasons. JSON supports a minimal-metadata mode, dramatically reducing payload size, saving CPU cycles, supporting higher scale, and lowering latency.
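The payload format is selected per request through the Accept header. The sketch below shows the header values involved (the version string is the first Table service version with JSON support; how you send the request is up to your HTTP client):

```python
# Sketch: choosing the Table service payload format via the Accept header.
# "atom" is the legacy AtomPub format; the three JSON levels trade metadata
# richness for payload size.

def table_query_headers(fmt="minimal"):
    accept = {
        "atom": "application/atom+xml",                 # legacy AtomPub
        "full": "application/json;odata=fullmetadata",
        "minimal": "application/json;odata=minimalmetadata",
        "none": "application/json;odata=nometadata",    # smallest payload
    }[fmt]
    return {
        "Accept": accept,
        "x-ms-version": "2013-08-15",  # first version supporting JSON
    }

print(table_query_headers("none")["Accept"])
```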
Of course, Azure Storage supports REST, meaning that you can talk to storage from any client that can use HTTP. But more specialized SDKs are also available.
Note that geo-replication to the secondary region happens asynchronously in the background, so GRS does not impact the latency of transactions made to the primary location.
These are some free client tools that let you interact with data in the Azure Storage services.
Microsoft supports Cross-Origin Resource Sharing (CORS) in Azure Storage. Why is this so important?
This support makes it possible for client-side web applications running from one domain to issue requests to another domain.
If CORS were not supported, you'd have to use a proxy for storage calls, limiting scale and adding an extra layer of work.
CORS makes it possible for web apps to upload content directly to Azure Storage from your company web site.
More specifically, your end users could directly upload blobs using shared access signatures to a company storage account without the need of a proxy service.
You can therefore benefit from the massive scale of the Windows Azure Storage service without needing to scale out a service in order to deal with any increase in upload traffic to your website.
In essence, this is about granting a web browser write privileges to your company's storage account.
Your web service does not need to be in the upload path to the storage service.
As a precaution, it is recommended that you limit the SAS token's validity to the duration actually needed, and scope it to the specific container and/or blob being uploaded, in order to limit security risks.
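As a sketch of such a narrowly scoped token, the following signs a write-only, container-scoped service SAS. The field order follows the 2012-02-12 string-to-sign (later service versions add more fields), and the account name, container, and key are dummies:

```python
# Sketch: signing a short-lived, container-scoped service SAS.
# The account name and key are dummies; expiry would normally be
# computed as "now + a few minutes".
import base64, hashlib, hmac, urllib.parse

def make_container_sas(account, key_b64, container,
                       permissions="w", start="",
                       expiry="2015-01-01T01:00:00Z"):
    canonical = f"/{account}/{container}"
    version = "2012-02-12"
    # permissions \n start \n expiry \n resource \n identifier \n version
    string_to_sign = "\n".join(
        [permissions, start, expiry, canonical, "", version])
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key_b64),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    return urllib.parse.urlencode({
        "sv": version, "se": expiry, "sp": permissions,
        "sr": "c",          # resource scope: container only
        "sig": sig,
    })

dummy_key = base64.b64encode(b"not-a-real-key").decode()
print(make_container_sas("myaccount", dummy_key, "uploads"))
```

The browser appends this query string to the container URL when uploading, so the token grants nothing beyond writes to that one container until the expiry time.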
Another scenario where this is useful is allowing users to edit data in a browser and persist it to Windows Azure Tables, a dictionary-like persistent store.
Here is some guidance on how to enable CORS.
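CORS is enabled by setting a rule on the service properties. The sketch below only builds the XML payload (the origin is hypothetical; in practice you would PUT this document to the service properties endpoint, e.g. `https://<account>.blob.core.windows.net/?restype=service&comp=properties`, with an authenticated request):

```python
# Sketch: building the CORS rule payload for Set Blob Service Properties.
# The allowed origin below is a placeholder for your company web site.
import xml.etree.ElementTree as ET

def cors_properties_xml(origin, methods="PUT,GET", max_age=3600):
    root = ET.Element("StorageServiceProperties")
    rule = ET.SubElement(ET.SubElement(root, "Cors"), "CorsRule")
    ET.SubElement(rule, "AllowedOrigins").text = origin       # who may call
    ET.SubElement(rule, "AllowedMethods").text = methods      # which verbs
    ET.SubElement(rule, "AllowedHeaders").text = "*"
    ET.SubElement(rule, "ExposedHeaders").text = "*"
    ET.SubElement(rule, "MaxAgeInSeconds").text = str(max_age)  # preflight cache
    return ET.tostring(root, encoding="unicode")

print(cors_properties_xml("http://www.contoso.com"))
```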
I hope that I have surfaced some key facts that are buried in blogs and in on-line documentation.