This week I had to move a couple of databases between two SQL Azure accounts. I needed to decommission one account and thus get the important stuff out before nuking it. My goal was straightforward: move two databases from one SQL Azure server (and account) to another SQL Azure server (and account). The applications have a fairly high tolerance for downtime, which meant I didn’t have to concern myself with maintaining continuity.
For the schema I had two options to choose from: script the entire database (using Management Studio) or extract the database as a .dacpac. For the data I also had two options: include the data as part of the script or use the Import/Export Wizard to copy the data. As a side note, I always thought this tool was named backwards – shouldn’t it be the Export/Import tool?
I opted to go the .dacpac route for two simple reasons: first, I wanted to get the schema moved over first and validate it before moving the data and second, I wanted to have the schema in a portable format that wasn’t easily altered. Think of this as a way of preserving the integrity of the schema. Transact-SQL scripts are too easy to change without warning.
I connected Management Studio to the server being decommissioned and from each database I created a .dacpac – I did this by right-clicking the database in Object Explorer, selecting Tasks –> Extract Data-tier Application… I followed the prompts in the wizard, accepting all of the defaults. Neither of my schemas is very complex, so the process was extremely fast for both.
Once I had the two .dacpacs on my local machine I connected Management Studio to the new server. I expanded the Management node, right-clicked on Data-tier Applications and selected Deploy Data-tier Application. This launched the deployment wizard. I followed the wizard, accepting the defaults. I repeated this for the second database.
Now that I had the schema moved over, it was time to move the data. And since I used the Data-tier Application functionality and didn’t receive any errors, I had confidence everything moved correctly.
I opted to use the Import/Export Wizard for this. It was simple and straightforward. I launched the wizard, pointed to the original database, pointed to the new database, selected my tables and let ‘er rip! It was fast (neither database is very big) and painless. One thing to keep in mind when doing this is that it’s not performing a server-to-server copy; it brings the data locally and then pushes it back up to the cloud.
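One way to sanity-check the copy (not something the wizard does for you) is to run the same row-count query against both the source and target databases and compare the results. A minimal sketch using the partition stats view:

-- Row counts per table from the partition stats DMV
-- (index_id 0 = heap, 1 = clustered index).
SELECT t.name AS table_name, SUM(p.row_count) AS row_count
FROM sys.dm_db_partition_stats AS p
JOIN sys.tables AS t ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)
GROUP BY t.name
ORDER BY t.name;

If the two result sets match table-for-table, the copy almost certainly landed intact.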
The final step was to re-enable the logins. For security reasons passwords are not stored in the .dacpac. When SQL logins are created during deployment, each SQL login is created as a disabled login with a strong random password and the MUST_CHANGE clause. Thereafter, users must enable and change the password for each new SQL login (using the ALTER LOGIN command, for example). A quick bit of Transact-SQL and my logins were back in business.
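That re-enable step looks roughly like this, run against the master database of the new server. The login name and password here are placeholders, not the actual ones I used:

-- Run in the master database of the new SQL Azure server.
-- 'app_login' and the password are illustrative placeholders.
ALTER LOGIN app_login ENABLE;
ALTER LOGIN app_login WITH PASSWORD = 'YourNewStr0ngPassword!';

Repeat for each login the deployment created.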
The entire process took me about 15 minutes to complete (remember my databases are relatively small) – it was awesome!
Every time I use SQL Azure I walk away with a smile on my face. It’s extremely cool that I have all this capability at my finger tips and I don’t have to worry about managing a service, applying patches, etc.
10/13/2010: Correction: There is a systematic way to move databases between servers. This blog post explains how to create a new database as a copy of another database. The Transact-SQL is as follows:

CREATE DATABASE destination_database_name AS COPY OF [source_server_name.]source_database_name

Keep in mind that your username and password must be the same on the target and source servers, you must have CREATE DATABASE permissions on the target server, and you must have permissions to the database on the source server. Since the DAC registration metadata resides in master it won’t be copied over. After the database copy you can connect to the new database and register it as a Data-tier Application.
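As a concrete illustration (the server and database names below are made up), the copy is kicked off from the target server’s master database, runs asynchronously, and can be monitored via the sys.dm_database_copies view:

-- 'srcserver' and 'Orders' are hypothetical names.
-- Run in the master database of the target server.
CREATE DATABASE Orders AS COPY OF srcserver.Orders;

-- The copy runs asynchronously; check progress from master.
SELECT database_id, start_date, percent_complete
FROM sys.dm_database_copies;

Once the row disappears from the view, the copy has completed (or failed, in which case the view reports the error).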
Surely this shouldn't have taken 15 minutes though, right? If you're moving the database within the same region (i.e. datacenter) then it should be instantaneous, right? After all, it's just a metadata change. You're saying - "instead of billing me for this DB on accountA, bill it on accountB".
You shouldn't have to physically move anything. Not really. Don't you agree?
What I hit was more of an internal Microsoft thing than anything a customer would face. I had to change the way I was being billed for SQL Azure (yes, even employees have to pay for it ;-)). My original account was on my credit card and I was expensing it each month. I wanted to move it to a PO so it'd be auto-paid each month. This change required me to set up a brand new SQL Azure account under a different LiveID (one associated with @microsoft.com). There is currently no way (at least not one anyone was willing to tell me about) to simply switch the associated account on the backend. This meant I had to bring up a second account (with the correct billing information) and move my databases from the old to the new account.
Chalk this up to early adopter pain!
" There is currently no way ... to simply switch the associated account on the backend. "
Yeah, that's the point I'm making. There *should* be a way to do that!!
Consider this a thinly-veiled feature request given that you're a lot closer to the team than I am! :)
That would be a great feature request you can make on the SQL Azure Feature Voting Forum: