This blog provides information about running SAP applications on the Microsoft platform. It is written by people who have been working with SAP on the Microsoft platform for decades.
Another area where new functionality was added or improved is backup and restore. There are two major improvements to discuss and introduce in this blog post.
The first one we want to discuss is the functionality of backup encryption. With SQL Server releases so far, one had two possibilities to get to encrypted backups: either encrypt the whole database with Transparent Data Encryption (TDE), which implicitly encrypts the backups as well, or rely on third-party backup tools that perform the encryption.
The first possibility, TDE, means, let's say, quite some effort just for getting backups encrypted, not to mention the disadvantage that backup compression no longer achieves meaningful compression on TDE-encrypted data. On the other side, customer demand for a functionality like backup encryption has increased over the last few years, ideally combined with a reasonably good compression rate from SQL Server backup compression. The answer to these demands can now be found in SQL Server 2014 in the form of backup encryption. For the newly introduced backup encryption, there is no need to have the whole database encrypted using TDE. The new functionality takes the non-encrypted backup data and encrypts it before writing it to disk. In order to preserve the compression factor of SQL Server backup compression in conjunction with the newly introduced encryption, the compression is executed on the backup data first, before the encryption is applied to the compressed data. With this order of activities, a good compression factor can be preserved while the backup, as it rests on the backup medium, is encrypted.
A great article on how to set up backup encryption can be found at: http://msdn.microsoft.com/en-us/library/dn449489(v=sql.120).aspx
But let’s summarize the steps in order to perform an encrypted backup here as well:
Create a certificate with SQL commands like this:
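A minimal sketch could look like the following; the passwords, the certificate name `BackupEncryptCert` and its subject are placeholders chosen for this example:

```sql
USE master;
GO
-- Create a database master key in the master database, if one does not exist yet
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongMasterKeyPassword>';
GO
-- Create the certificate that will be used to encrypt the backups
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Certificate for encrypting SAP database backups';
GO
```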
There is a catalog view called sys.certificates which lists all the certificates. Needless to say, the certificate is part of the master database. Uninstalling the SQL Server instance and re-installing it will result in a LOSS of that certificate and, as a result, will render the backups performed using the certificate useless, UNLESS one has a backup of the certificate stored somewhere else (http://technet.microsoft.com/en-us/library/ms178578.aspx). Having a backup of the certificate, one can restore or import the certificate into the SQL Server instance against which one needs to restore the backups.
An actual backup could be executed then with a command like this:
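A sketch of such a command, assuming the certificate created above is named `BackupEncryptCert` and using a placeholder database name and path:

```sql
BACKUP DATABASE [ERP]
TO DISK = 'E:\Backup\ERP_encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
```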
Or combining with backup compression like:
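Again a sketch with the same placeholder names; note that COMPRESSION and ENCRYPTION are simply combined in the WITH clause, and the compression is applied before the encryption as described above:

```sql
BACKUP DATABASE [ERP]
TO DISK = 'E:\Backup\ERP_encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
```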
As encryption algorithm, one has the choice between AES_128, AES_192, AES_256, and TRIPLE_DES_3KEY.
In case the backup certificate has not yet been backed up, a warning message will be displayed in the results of the backup.
As mentioned before, ALL the encrypted backups are useless once the certificates used for the backups are lost. There is no way back once this is the case. As a result, one should keep a backup of the certificate on other servers in order to be able to restore it, or one should, as mentioned in the article about encrypted backups, use an asymmetric key administered by an EKM system.
In order to back up and restore such a certificate, the following commands, or variations of them, would do:
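A sketch of both backups, again with placeholder file paths and passwords and assuming the certificate name `BackupEncryptCert` from the example above:

```sql
-- Back up the certificate together with its private key
BACKUP CERTIFICATE BackupEncryptCert
    TO FILE = 'E:\KeyBackup\BackupEncryptCert.cer'
    WITH PRIVATE KEY (
        FILE = 'E:\KeyBackup\BackupEncryptCert.pvk',
        ENCRYPTION BY PASSWORD = '<PrivateKeyPassword>');
GO
-- Back up the database master key of the master database as well
BACKUP MASTER KEY TO FILE = 'E:\KeyBackup\master_key.bak'
    ENCRYPTION BY PASSWORD = '<MasterKeyBackupPassword>';
GO
```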
Once this is done, the warning when performing a backup should vanish.
Once the master key and certificate are backed up, one can easily end up in a situation where the key and certificate need to be restored on another instance in order to restore a backup there. The steps would look like:
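The sequence on the target instance might look like this sketch, using the placeholder file names and passwords from the backup example above:

```sql
USE master;
GO
-- Restore the master key from the backup file
RESTORE MASTER KEY FROM FILE = 'E:\KeyBackup\master_key.bak'
    DECRYPTION BY PASSWORD = '<MasterKeyBackupPassword>'  -- password used at backup time
    ENCRYPTION BY PASSWORD = '<NewLocalPassword>';        -- password used on this instance
GO
-- Open the master key before creating the certificate from the backup
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<NewLocalPassword>';
GO
CREATE CERTIFICATE BackupEncryptCert
    FROM FILE = 'E:\KeyBackup\BackupEncryptCert.cer'
    WITH PRIVATE KEY (
        FILE = 'E:\KeyBackup\BackupEncryptCert.pvk',
        DECRYPTION BY PASSWORD = '<PrivateKeyPassword>');
GO
```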
An important step is to open the master key before trying to create the certificate from the backup.
Afterwards, the restore command succeeds without specifying any additional option that would indicate a restore from an encrypted backup.
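The restore itself could then be sketched like this, with the placeholder database name and path used earlier:

```sql
RESTORE DATABASE [ERP]
FROM DISK = 'E:\Backup\ERP_encrypted.bak'
WITH RECOVERY;
```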
Since encryption in itself is a very CPU-hungry operation, the expectation is that adding encryption to the backup will consume additional CPU resources on the server where the backup takes place.
We performed some measurements on a machine with 16 CPU threads that was capable of a throughput of around 308MB/sec = 1.1TB/h for a non-compressed backup. The same disks were used to read the data from as well as to write the backup to, since the different LUNs were spread over all available disks. The database had 16 data files. The peak throughput without compression was achieved when backing up to 4 backup files, and our measurements centered on this configuration. In the end, the following statements can be made based on the measurements:
As a summary, one can state that for scenarios where backups need to be scheduled around times of low workload, one needs to be careful with backup encryption since there definitely is a higher CPU impact. One can also state that encryption slows down backup execution. With the AES algorithms, though, the impact of lower throughput can be compensated by using SQL Server backup compression. Another recommendation that can be derived from the tests is not to use the TRIPLE_DES_3KEY algorithm, because of its severe impact on CPU consumption and backup throughput.
Another extension of SQL Server 2014, or, to be honest, of SQL Server 2012 SP1 CU4 already, is the capability of backing up data to Azure Storage. This means the backup device chosen is a URL of Azure Storage instead of a tape device or a disk target. The cases we wanted to address with this additional capability were customer scenarios where customers are required to store backups outside their main datacenter, in case the main datacenter encounters a complete outage. On the other side, for many of those customers it is hard to afford a complete second facility that is entirely outside the geographic fault zone of their main site (think of earthquakes, hurricanes, tornados or similar natural disasters). Another motivation to build such a feature is the case of deploying SQL Server in Azure Virtual Machine Services, i.e. SQL Server runs with its databases in a VM hosted by Azure anyway. In such a case, one would not need a VHD to perform a backup, but could perform the backup straight against an Azure Storage blob.
The functionality also opens the possibility of backing up a database into Azure Storage in order to restore it against a SQL Server instance running in an Azure-hosted VM.
In order to perform a backup against Azure Storage, one needs an Azure storage account, a container within that storage account, and the primary access key of the storage account.
On the SQL Server instance, one needs to create a credential in the master database like this:
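A sketch of the credential creation; the credential name `AzureBackupCredential` is a placeholder, and the identity and secret are to be replaced with the actual storage account name and access key:

```sql
USE master;
GO
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = '<storageaccountname>',
         SECRET   = '<storage account primary access key>';
GO
```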
The 'identity' is the name of the storage account, not its URL. The 'secret' is the primary access key of the storage account, which one can retrieve easily in the Azure Portal (Manage Access Keys).
After that, we can easily back up the database against the Azure Storage location. Since we want to have our backup encrypted, we build on top of the scenario used for encrypted backups in the first section of this article. In the end, the changes to the backup command are small, and the command would look like:
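A sketch of the command, assuming the credential created above is named `AzureBackupCredential`, the certificate from the first section is named `BackupEncryptCert`, and the container is called 'backups'; the storage account name and file name are placeholders:

```sql
BACKUP DATABASE [ERP]
TO URL = 'https://<storageaccountname>.blob.core.windows.net/backups/ERP_encrypted.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
GO
```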
The obvious difference is the keyword 'TO URL'. As the destination, one defines the URL of the container within the storage account ('backups') plus the file name of the backup.
In addition, the backup command needs to reference the credential defined before, which basically contains the primary access key to the Azure storage account. As expected, the command now executes an encrypted backup against an Azure Storage blob in a specific container of a storage account.
Note: In contrast to tape or disk targets, only one URL target can be defined. Testing backup against URL from a VM hosted by Azure Virtual Machine Services, we could achieve throughputs of up to 150MB/sec. Performing such a backup from on-premise into an Azure storage account will almost certainly be limited by the throughput the specific server can achieve into Azure. This means backup volumes of the sizes we usually encounter in the SAP space could take quite a long time when written directly from on-premise into Azure Storage.
Since we obviously also want to restore backups which are stored in Azure, let us look at the restore sequence as well. Since our backup in Azure was encrypted, the first step is to make sure that the master key and certificate are restored in the SQL Server instance the backup is restored against (see the first section of this article).
Second precondition is that a credential exists in the SQL Server instance that should restore the backup. If necessary, just create the credential(s) as shown above.
If restoring an encrypted backup, open the master key in the session in which the restore command is executed.
The third step then would be the execution of the restore, like:
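The three steps could be sketched as follows, reusing the placeholder names from the examples above (master key password, credential `AzureBackupCredential`, storage account and file name):

```sql
-- Open the master key first, since the backup is encrypted
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<NewLocalPassword>';
GO
RESTORE DATABASE [ERP]
FROM URL = 'https://<storageaccountname>.blob.core.windows.net/backups/ERP_encrypted.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     RECOVERY;
GO
```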
As stated in the comments above, opening the master key is only necessary in case of restoring an encrypted backup. For non-encrypted backups, this step is not necessary.
That is all one needs to do in order to perform a backup against Azure Storage. The scenario is also explained in more detail, with more examples, in this article:
In one of the next parts of this blog series we will also take a look at another new functionality called smart backup. But before we go deeper into that, the next article in this series will be about integrating an AlwaysOn replica which runs in Azure.
You said that 3DES doubles the CPU consumption when used with backup compression, and that its throughput impact can reduce the throughput of a compressed backup to half of its original value. Does this mean that with the new functionality in SQL Server 2014 the CPU usage and throughput are inversely proportional to each other? Like x = 1/y?
One could call it a bit like an inverse proportion. Fact is that backup encryption increases the CPU consumption on the one side and lowers the throughput on the other side, whereas the factors of CPU increase and throughput decrease depend on the encryption algorithm. TRIPLE_DES_3KEY certainly stands out as an extreme. This is similar to the experiences we made comparing the different algorithms used with TDE, where TRIPLE_DES also showed the most extreme impact in terms of CPU consumption and lower throughput, especially when testing bulk load scenarios.
Great post, I love the native backup encryption feature.
Well Done! thanks a lot!
This is a wonderful article and well explained!!! Great :)
Great Article! thanks a lot!
There is no reason why backup encryption should affect compression ratios... if it is implemented properly. For backups, compress first, then encrypt. For restores, do the opposite: decrypt first, then decompress.