Database Volume is compressed nicely – but now we want to shrink the data files

Quite a few customers have implemented SQL Server Database compression on very large SAP databases and have reduced their actual database size by such a large factor that they now have terabytes of free space allocated to the SAP database files that they wish to reclaim.

These customers have several options, which can be roughly categorized as:

  • Performing a one-time complete export and re-import of the data, in which the latest SAP tools will automatically page compress all the imported tables and secondary indexes. This is currently referred to as the SAP Database Independent System Copy (or R3Load-based copy). This method is often combined with a Unicode migration or is performed in order to adjust the number of data files of the SAP database. The advantage of this method is that the execution requires a relatively short amount of time. The disadvantage is that this activity clearly requires downtime of the SAP system.
  • Or, performing a phased implementation of page compression on groups of tables and indexes in a live system using the report MSSCOMPRESS. In this approach, one compresses batches of tables while the system remains online and the business workload runs continuously against the system. The clear advantage is that no system downtime is needed. One disadvantage is that, especially with large databases, such an effort can take many weekends, as we recommend performing online compression during times of lower system workload. The other disadvantage is that one is left with an actual data volume that is 4-5 times smaller, while the files which contain the data still have their original allocated size. Without reducing the allocated file size, there are no real savings in storage.

In this article, we will look into the details of shrinking the data files in the second category and how we helped a few customers this year to get the process working.

What we usually recommend for shrinking the data files after compression has finished is the SQL Server command DBCC SHRINKFILE. This is a command we have commented on in several previous SAP Notes and whitepapers in a less than positive way. But in this particular scenario it is ultimately unavoidable for customers who can't afford the downtime of a full export/import of the database in an SAP Database Independent System Copy.
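A minimal sketch of the command is shown below. The database name R3 and the logical file name R3DATA1 are placeholders for your own SAP database and data files; the target size is given in MB:

```sql
-- Assumed names: an SAP database called R3 with a logical data file R3DATA1.
USE R3;
GO
-- First check allocated vs. used space per data file
-- (size and FILEPROPERTY values are in 8 KB pages, hence the division by 128)
SELECT name,
       size/128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed')/128 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
GO
-- Shrink the file to a target size of 475,000 MB
DBCC SHRINKFILE (N'R3DATA1', 475000);
GO
```

Checking allocated versus used space first tells you how far each file can realistically be shrunk before picking a target size.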

Some drawbacks with ‘DBCC SHRINKFILE’

Using DBCC SHRINKFILE, one needs to keep several things in mind:

  • SHRINKFILE is single threaded. When the SHRINKFILE command is issued against a data file, a single thread moves the content of data and index pages, page by page, into the range of the data file that should be retained. There is no multi-threading involved. The move of the content is also fully logged. During the move, the source and destination pages are exclusively page locked. The move of the content of one page is a single transaction.
  • Careless SHRINKFILE execution can unbalance the data file fill ratio. We want SAP databases to use a number of evenly sized data files so we can rely on an even fill ratio between the data files. We do not want to unsettle the free-space ratio between the data files so that SQL Server suddenly starts to allocate space for new data unevenly between them. This means one needs to shrink the different data files in smaller portions to eventually get to the desired data file sizes.
  • One shouldn’t run too many SHRINKFILE commands on different data files in parallel. SHRINKFILE adds I/O workload and, even more importantly, locks pages for the duration of each move. Running many SHRINKFILE operations in parallel can therefore introduce unexpected effects on the workload. Most important, keep in mind that the transaction log volume will increase substantially, or, in cases of Log Shipping, Database Mirroring and AlwaysOn, the volume sent to the secondary replicas will increase substantially with every SHRINKFILE job. Therefore, running only a very small number (fewer than 5) of SHRINKFILE activities simultaneously is a very good idea, and even those only with sufficient testing for the particular infrastructure, configuration and workload. The usual approach is to test with one SHRINKFILE activity running on one file and eventually add another one or two as workload, configuration and infrastructure allow.
  • SHRINKFILE reverses page order. The logical and physical order of the pages within an extent gets reversed by SHRINKFILE. Let’s assume that within the 8 pages of an extent, the logical order exactly matches the physical order, meaning the first physical page is also the first logical page with regard to the sorting criteria of the index or data. SHRINKFILE, however, processes the data file from its end, so it finds the last page of an extent that needs to be moved first. It either finds or allocates a new extent in the zone of the data file that will remain and then copies the content of the last page of the source extent into the FIRST page of the new extent. The second to last page in the source extent becomes the second page in the destination extent. By the time the original extent has been processed, its data is in reverse order (physically and logically) in the new extent.
    This fact created a lot of problems around a decade ago, especially with expensive SAN units, and is the reason we cautioned against using SHRINKFILE on SAP databases in the past. In those cases the SAN storage units had their own algorithms for pre-fetching and read-ahead, which were pretty much outsmarted by SQL Server reading back content that had been moved by SHRINKFILE. But to the best of our knowledge and experience today, we don’t know of any cases where the reordering of data by SHRINKFILE still causes poor performance on SAN devices. And especially with the new Flash based storage connected directly to servers, there should be no problems anymore.
    In the past, several customers re-indexed after finishing the complete SHRINKFILE activity. However, we think that such a step is no longer necessary.
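To keep the fill ratio between the data files even, as described above, the shrink can be run in small steps round-robin across all files rather than shrinking one file all the way down. A sketch, assuming a hypothetical layout of four data files R3DATA1..R3DATA4 currently allocated at 1,000,000 MB each:

```sql
-- Shrink all files in lockstep, 50,000 MB at a time, so the free-space
-- ratio between the files stays roughly even at every point in time.
DBCC SHRINKFILE (N'R3DATA1', 950000);
DBCC SHRINKFILE (N'R3DATA2', 950000);
DBCC SHRINKFILE (N'R3DATA3', 950000);
DBCC SHRINKFILE (N'R3DATA4', 950000);
-- Next round: target 900000 for each file, then 850000, and so on,
-- until the desired final size is reached.
```

Remember the recommendation above to run only a small number of these statements simultaneously, and only after testing against your own infrastructure and workload.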

And, most importantly, there is one last behavior which we recently became aware of when several customers with extremely high workloads, especially on a small number of tables, tried to shrink files. There are situations where SHRINKFILE must move page content that requires an update of SQL Server dictionary data. Examples of such cases are: moving the content of the root page of an index, moving the first data page, or moving IAM pages. In these cases, SHRINKFILE will request a schema modification (Sch-M) lock (something we discussed in this blog: http://blogs.msdn.com/b/saponsqlserver/archive/2014/01/17/new-functionality-in-sql-server-2014-part-3-low-priority-wait.aspx ) before processing that object and continuing to shrink the file.

As we describe in detail in the blog above, uncommitted read queries bypass the lock wait list, and therefore a high enough uncommitted read load on a table can continuously delay the grant of a Sch-M lock on that same table. So in the situations described above, where SHRINKFILE needs to place a Sch-M lock on a table and a high enough uncommitted read workload prevents the grant of that lock, several of our customers were unable to complete SHRINKFILE because the process was strangling their workload while hardly shrinking the files.
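Whether a running shrink is stuck on such a Sch-M lock request can be checked with a query like the sketch below. Note the assumption: SHRINKFILE typically reports the command name 'DbccFilesCompact' in sys.dm_exec_requests, but you should verify this on your SQL Server version:

```sql
-- Sketch: find a running shrink operation and the table whose Sch-M lock
-- request it is waiting on. 'DbccFilesCompact' is the command name SHRINKFILE
-- typically shows in sys.dm_exec_requests; verify on your SQL Server version.
SELECT r.session_id,
       r.command,
       r.wait_type,
       r.wait_time,
       OBJECT_NAME(l.resource_associated_entity_id) AS waiting_on_table
FROM sys.dm_exec_requests AS r
LEFT JOIN sys.dm_tran_locks AS l
       ON l.request_session_id = r.session_id
      AND l.resource_type = 'OBJECT'
      AND l.request_mode = 'Sch-M'
      AND l.request_status = 'WAIT'
WHERE r.command = 'DbccFilesCompact';
```

If the same table name keeps appearing in waiting_on_table across multiple files and over a longer period, that table is a candidate for the workaround described below.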

A way out of the SHRINKFILE and Sch-M lock issue

The solution that we applied with those customers required a short downtime of the SAP application and the use of a SQL Server feature which we normally don’t support with SAP NetWeaver. We solved the issue by isolating the table(s) which were constantly blocking the progress of SHRINKFILE, and thereby even strangling major business processes, into a separate SQL Server filegroup. This method was only used in a few cases where it was literally impossible to get SHRINKFILE progressing because one really couldn’t find a way around the workload. In more detail, the following was done:

  • IDENTIFY THE BLOCKING TABLE: We identified which table had a Sch-M lock request issued by the SHRINKFILE process using the stored procedure sap_lock. We thereby didn’t look for short term blocking incidents, but for cases where one table heavily loaded with reading and modification workload caused blocking not only in one file, but in multiple files. Because we are discussing SAP NetWeaver systems, we are not talking about shrinking a small percentage of the database, but about recovering huge amounts of file space, e.g. going from 20TB down to 5-6TB. With such a large release of allocated space, there was a very high probability that a single table could block progress several times.
  • CREATE THE NEW FILEGROUP: After knowing which table was the culprit, a filegroup was created with the same number of data files on exactly the same LUNs as the original data files. The files of the new filegroup were sized large enough so that the table blocking progress could be stored in it.
  • CREATE THE SCRIPT TO RECREATE THE TABLE AND ITS OBJECTS: Use a tool like SQL Server Management Studio to generate a Transact-SQL script that creates the table and its constraints, indexes and defaults (NOTE: the script must contain the defaults, which are always bound to every SAP table column).
  • COPY AND EDIT THAT SCRIPT: Copy the script (you’ll need the original later) and edit the copied script so that the objects will now be created in the new filegroup and not in the default filegroup.
  • SWAP THE NEW TABLE FOR THE OLD: Once you have the script ready to create the new table and its associated objects, rename the original table and its associated indexes. Then, execute the script to create the new table and its indexes in the new filegroup. After that, copy the data from the old table into the new one with INSERT…SELECT (use the bulk-logged recovery model if the table is large).
  • CONFIRM ALL DATA WAS COPIED: Use SELECT COUNT(*) on the old and new tables to ensure that the row count of the new table is the same as in the old table. Then, drop the original, renamed table that exists in the default filegroup.
  • START THE SAP SYSTEM AND SHRINKFILE: Now the SAP system can be taken online again and the SHRINKFILE execution can continue over the weeks to follow.
  • RESTORE THE ORIGINAL FILEGROUP STATE: Once your SHRINKFILE process is complete, you need to move the table back into the default filegroup:
    • Stop the SAP system.
    • Execute the original script to recreate the table and its associated objects in the default filegroup.
    • Move the data from the table in the temporary filegroup into the newly created table in the default filegroup using INSERT…SELECT.
    • Use SELECT COUNT(*) on both tables to ensure the same number of records is in the table in the temporary filegroup and the table in the default filegroup.
    • Drop the table and its associated objects from the temporary filegroup.
    • Delete the files in the temporary filegroup and remove the temporary filegroup from the database.
  • RESTART SAP: Your temporary workaround has now been removed and your system is back to its original state as installed by SAP.
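The core of the steps above can be sketched in Transact-SQL. All names are hypothetical: database R3, blocking table VBDATA, filegroup TEMP_FG; the file path, sizes and the full column and index definitions must come from your own system and the generated script:

```sql
-- Create the temporary filegroup with files on the same LUNs as the
-- original data files, sized to hold the blocking table.
ALTER DATABASE R3 ADD FILEGROUP TEMP_FG;
ALTER DATABASE R3 ADD FILE
  (NAME = N'R3TEMPFG1', FILENAME = N'E:\R3DATA1\R3TEMPFG1.ndf', SIZE = 50GB)
TO FILEGROUP TEMP_FG;

-- Rename the original table, then run the edited script to create the new
-- table and its indexes in TEMP_FG (shown here heavily abbreviated):
EXEC sp_rename N'VBDATA', N'VBDATA_OLD';
-- CREATE TABLE VBDATA ( ...original columns, constraints, defaults... ) ON TEMP_FG;
-- CREATE ... INDEX ... ON VBDATA ( ... ) ON TEMP_FG;

-- Copy the data and verify the row counts match before dropping the original:
INSERT INTO VBDATA WITH (TABLOCK) SELECT * FROM VBDATA_OLD;
SELECT (SELECT COUNT(*) FROM VBDATA)     AS new_rows,
       (SELECT COUNT(*) FROM VBDATA_OLD) AS old_rows;
-- DROP TABLE VBDATA_OLD;  -- only after the counts match
```

Moving the table back after the shrink is the mirror image: run the original, unedited script to recreate the table in the default filegroup, copy the data back with INSERT…SELECT, verify the counts, then drop the table and the temporary filegroup.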

With some customers it was enough to move just one table; with others it was a handful of tables. The most important points to note and consider are:

  • The usage of a filegroup in an SAP NetWeaver system is a temporary workaround for this special situation. Usage of the non-default filegroup should be restricted to as short a timeframe as possible. While using the non-default filegroup, one should not apply SAP Support Packages, EHP packages or start an SAP Upgrade.
  • When you shrink the original data files of the default filegroup, you must consider the volume of the table(s) in the temporary non-default filegroup and leave enough free space after the shrink to accommodate that data. After all, the content in the temporary filegroup will be moved back into the default filegroup after the shrink operations have achieved the desired reduction.
  • After the desired size of the data files is achieved, you must restore the system to its original state as installed by SAP. You cannot leave the table(s) in a non-default filegroup.
  • Filegroups may NOT be used for SAP databases in an attempt to tune performance or to emulate some kind of storage tiering.
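Before moving the table back, the free-space headroom can be checked with a query like the sketch below (table name VBDATA is again a placeholder): compare the free space in the default filegroup's data files with the reserved size of the relocated table.

```sql
-- Free space per data file of the default filegroup
-- (values are in 8 KB pages, hence the division by 128 to get MB)
SELECT f.name,
       f.size/128 AS allocated_mb,
       (f.size - FILEPROPERTY(f.name, 'SpaceUsed'))/128 AS free_mb
FROM sys.database_files AS f
JOIN sys.filegroups AS fg
  ON fg.data_space_id = f.data_space_id
WHERE fg.is_default = 1;

-- Reserved size of the table currently sitting in the temporary filegroup
EXEC sp_spaceused N'VBDATA';
```

The total free_mb across the default filegroup's files should comfortably exceed the reserved size reported by sp_spaceused before you move the data back.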

As mentioned already, the method described in this blog was utilized in a few cases where there was no other way to get the database files shrunk during normal online hours. The workaround steps described here must be tested very thoroughly, and we suggest you test them repeatedly.

Most importantly, we want to reiterate that this workaround of utilizing a second filegroup is only to be used temporarily in order to complete the file space reclamation. Over the years, people have often asked us if they should use multiple filegroups for other reasons, and so far our testing with SAP has always shown that the current, single filegroup method is the best architecture for SAP NetWeaver systems. This is why all SAP NetWeaver systems are installed in a single, default filegroup, and you must ensure you revert your system to that same architecture if you use the workaround documented here.

Thanks a lot to Leslie Moser from SAP for really detailing some of the steps described in this scenario.

  • Juergen, very timely for us at Lubrizol - thanks.  

    Here are some notes from what we saw:

    We tested the shrinkfile on our SAP databases.

    After the compression was completed, the database was allocated at 4.2 TB, with only 1.4 TB in use across 8 data files.

    We ran the shrink command in our test environment (with SAP down), which has 65 GB of memory and SQL allocated 50 GB of memory.

    It took 1 hour 50 minutes for first file shrink from 525,000 MB to 475,000 MB.

    After reboot, the second file took 1 hour 25 minutes.

    On our production server, it has 132 GB of memory and 80 GB is allocated to SQL.

    It only took 14 minutes 25 seconds to shrink from 525,000 MB to 475,000 MB.

    After the first run, the other 7 files each took about 5 minutes to finish.

    Our assumption is that memory makes a big difference in the shrink process.

    Thanks again...Tom

    P.S.  The test databases were a copy of Production from < 4 weeks back (if that helps)
