With the addition of another SATA SSD and another SLC Duo to serve as the target destination, the same SQL database, which now has a total of 233 GB of used space, was backed up in less than one minute using the same server referenced in my prior 2 GB/sec backup post (http://blogs.msdn.com/b/microsoftbob/archive/2012/10/18/2gbps-backup-on-12-core-server-with-7-fusion-io-cards.aspx).

SQL Query Analyzer showed a throughput rate of 3119.9224 MB/sec, which works out to about 3.05 GB/sec: roughly 2.5 times the capacity of 10 Gb Ethernet, and fast enough to utilize about 75% of a 32 Gb InfiniBand link of the kind often used in enterprise cloud infrastructures.
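Those comparisons reduce to simple unit conversions, sketched below as a back-of-the-envelope check. The 10 Gb Ethernet and 32 Gb InfiniBand numbers are nominal raw link rates, not measured figures from my test:

```python
# Sanity-check the throughput comparisons above.
mb_per_sec = 3119.9224          # rate reported by SQL Query Analyzer, in MB/sec
gb_per_sec = mb_per_sec / 1024  # convert MB/sec to GB/sec

ten_gbe_gbps = 10 / 8           # 10 Gb Ethernet = 1.25 GB/sec raw bandwidth
ib_32gb_gbps = 32 / 8           # 32 Gb InfiniBand = 4 GB/sec raw bandwidth

print(round(gb_per_sec, 2))                     # ~3.05 GB/sec
print(round(gb_per_sec / ten_gbe_gbps, 1))      # ~2.4x 10 Gb Ethernet
print(round(100 * gb_per_sec / ib_32gb_gbps))   # ~76% of 32 Gb InfiniBand
```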

sql-snip1

Note that the effective throughput was actually closer to 3 GB/sec on the prior test rather than 2 GB/sec, and for the same rationale the effective rate of this test was close to 4.0 GB/sec. This is based on the fact that, in a typical database, with or without compression, about 20 to 30% of pages are encompassed by a data structure (not marked as available) but do not actually have to be backed up.
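The adjustment from measured to effective rate follows directly from that fact: if a fraction of the allocated pages never needs to be written to the backup target, the measured write rate understates how fast the database itself is being processed. A quick sketch, where the 25% midpoint of the 20 to 30% range is my assumption:

```python
measured_gb_per_sec = 3.05   # rate actually written to the backup destination
skipped_fraction = 0.25      # assumed midpoint of the 20-30% allocated-but-empty pages

# Effective rate: database pages processed per second, including skipped ones.
effective = measured_gb_per_sec / (1 - skipped_fraction)
print(round(effective, 1))   # ~4.1 GB/sec, close to the 4.0 figure above
```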

sql-snip0

 

I think (but am not yet sure) that the backup stats only count pages that are actually backed up; some pages in the database, although shown as in use, may not actually contain any data to back up. Whether you go with the higher or lower figure, the transfer rate is still remarkable. Even using the lower 3.0 GB/sec actual transfer rate, this works out to 180 GB per minute and nearly 11 TB per hour. Not bad, considering that an Oracle Exadata Database Machine (http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf) only achieves 17 TB per hour over InfiniBand with hardware costing several million dollars, whereas my total investment here is under $50K. Even considering retail pricing on the cards, the total cost of the server is less than $180K. If we trust that the backup to the NUL device indicates a maximum throughput of 5 GB/sec over 40 Gb/s InfiniBand, this would beat the Oracle configuration at 18 TB per hour. In TPC-style terms, we could say this server is providing each 1 TB per hour of backup speed at a cost of $10K.
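The per-hour and cost-per-throughput figures in that paragraph are straightforward arithmetic; here is a sketch, assuming decimal terabytes (1000 GB) and using the $180K retail price quoted above:

```python
rate_gb_per_sec = 3.0                         # lower, conservative transfer rate
per_minute_gb = rate_gb_per_sec * 60          # 180 GB per minute
per_hour_tb = rate_gb_per_sec * 3600 / 1000   # ~10.8 TB/hour ("nearly 11")

nul_rate_gb_per_sec = 5.0                     # 40 Gb/s InfiniBand = 5 GB/sec
nul_per_hour_tb = nul_rate_gb_per_sec * 3600 / 1000  # 18 TB/hour

cost_k = 180                                  # retail server cost, in $K
print(per_minute_gb)                          # 180.0
print(round(per_hour_tb, 1))                  # 10.8
print(round(nul_per_hour_tb))                 # 18
print(round(cost_k / nul_per_hour_tb))        # ~10 ($K per TB/hour of backup)
```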

This was produced by a modest 2-way 2.8 GHz X5660 configuration with 12 cores, 108 GB of RAM, 8 HP IO Accelerator cards (HP-branded Fusion-io cards), and 4 SATA SSDs (Intel and Samsung), with the O/S on 3 SAS mechanical drives in RAID-5. Details of the IO Accelerator configuration are in the prior post, with the exceptions that the SLC Duo that was non-functioning there is functioning in this test, and that the 2 SATA SSDs were added. The additional devices were used to supplement the target backup destination, which was the main limiting factor in the prior benchmark.

Backup to the NUL device registered at nearly 4 GB/sec, implying the bottleneck is still on the receiving end, even though the theoretical bandwidth of the destination should be over 4 GB/sec. Based on testing with the NUL device as a backup target, it seems that this server cannot provide more than around 8 GB/sec of combined read/write throughput, even though the devices collectively are capable of over 10 GB/sec; perhaps this is a bus limitation somewhere between the PCIe bus and the processors.

The implications of this are far reaching:

Using an 8-way server with 128 or more cores would most likely allow the CPUs to compress the data fast enough to double this to 36 TB per hour, since such backup compression occurs before the data is sent over the wire. My database is actually achieving about 40% space reduction due to page and row compression and has multiple tables with over 1 billion rows, so in reality it represents about 1/2 TB of data. With a 128+ core system, if we factor in typical gains from backup and database compression, dedicate all the Accelerator cards to the database, and add a PCIe Gen 3 InfiniBand card on a 56 Gb link, we could expect over 100 TB per hour for a typical SQL Server backup, from a single 8-way 8U server costing less than $300K. This is using the first-generation cards; the second-generation cards double this. Such an 8U server with PCIe Gen 3 cards could potentially saturate a 120 Gb InfiniBand link at 240 TB per hour, backing up 1 PB in a little over 4 hours.
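The petabyte projection at the end is again just arithmetic; a sketch assuming the projected 240 TB/hour rate and decimal units (1 PB = 1000 TB):

```python
link_tb_per_hour = 240   # projected backup rate on a 120 Gb InfiniBand link
petabyte_tb = 1000       # 1 PB in decimal terabytes

hours_for_1pb = petabyte_tb / link_tb_per_hour
print(round(hours_for_1pb, 1))   # ~4.2 hours, i.e. "a little over 4 hours"
```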

To get an idea of how much 1PB is, see this link - http://gizmodo.com/5309889/how-large-is-a-petabyte – it is just 1/20th of the amount of data processed by Google on a single day and 2/3rds of all the photos on Facebook.

For more background on the configuration and methodology, see my earlier post at http://blogs.msdn.com/b/microsoftbob/archive/2012/10/18/2gbps-backup-on-12-core-server-with-7-fusion-io-cards.aspx. You can also see a demonstration of the backup and a discussion; if you want to see just the actual backup execution, skip to the 7-minute mark. This is my first attempt at a YouTube video, so please be merciful.