
About Maintenance Plans – grooming SQL Server


Hello all,

I admit I’m not a big fan of Maintenance Plans (the feature that goes by that name), mainly because of their “do or die” approach. So, it was not by chance that I wrote and keep updating the AdaptiveIndexDefrag procedure. Good maintenance (or the lack thereof) may be the difference between good and bad performance. Following up on that train of thought, I’ve been meaning to share the 6 scripts I always use to automate maintenance tasks in SQL Server. In a couple of minutes they deploy a comprehensive set of SQL Agent jobs that will get the work done.

Latest update: 20-11-2014 (Check change log)
 
Download scripts HERE.

So, the first script I execute is “0_database_server_options” (a rough sketch of its main actions follows this list). This will:

  • Change the Errorlog file retention limit from 8 to 15 files. It's always nice to have more information available when troubleshooting.
  • Change a few sys.configurations options, such as enabling “backup compression default”, “remote admin connections” and “optimize for ad hoc workloads”. A couple of notes on this step:
    • About Backup Compression…
      Fact 1: backup compression significantly increases CPU usage.
      Fact 2: backup compression is not only a space-saving feature. Because a compressed backup is smaller than an uncompressed backup of the same data, it typically requires less device I/O, which usually increases backup speed significantly. Restore operations also benefit from this reduced I/O.
      Also, with compression as the server default, backups of databases that do not benefit from it will be compressed as well, and compression may even be detrimental in such cases. For example, TDE-enabled databases can be backed up using the WITH NO_COMPRESSION option, while backups that omit any compression-related option on a server with “backup compression default” enabled will be compressed.
      If the server is not CPU bound, and if most of the databases on the server benefit from this feature, making backup compression the default for all backup operations will generate smaller backups and therefore less backup-related I/O.
    • About Optimize for Ad-Hoc workloads…
      While the Optimize for Ad-Hoc workloads server option effectively delays full plan caching until a second execution, the benefits outweigh that cost in an OLTP environment: if a query is executed countless times, having its plan stored in cache only on the second execution is probably not a major drawback; if it is never executed again, memory is saved by not caching a query plan that won't be reused.
      With this option enabled, when a query executes for the first time, only a small compiled plan stub (a hash of the query) is stored in cache. Only if the same query executes again is the full plan stored in cache.
      Enable the Optimize for Ad-Hoc workloads server option if most of the workload is made up of ad-hoc queries, as is typical of OLTP servers.
  • Change the model database data and log files to autogrow by 100MB. Obviously this is a generalization, but it is still more sensible than the default.
  • Remove AUTO_CLOSE and AUTO_SHRINK database options from all databases.
  • Change the page verify setting to CHECKSUM on all databases, whatever it is currently set to.
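
For reference, here is a minimal sketch of roughly what “0_database_server_options” does; this is not the script itself. The registry path for the Errorlog retention is an assumption for a default instance, the model file names assume the defaults, and the per-database loop is reduced to a single placeholder database:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Server-wide options:
EXEC sys.sp_configure N'backup compression default', 1;
EXEC sys.sp_configure N'remote admin connections', 1;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;
-- Errorlog retention (registry-based; path assumed for a default instance):
EXEC master.dbo.xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 15;
-- model database autogrowth (file names assume the defaults):
ALTER DATABASE model MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 100MB);
ALTER DATABASE model MODIFY FILE (NAME = N'modellog', FILEGROWTH = 100MB);
-- Per-database options ([YourDB] is a placeholder; the script iterates sys.databases):
ALTER DATABASE [YourDB] SET AUTO_CLOSE OFF;
ALTER DATABASE [YourDB] SET AUTO_SHRINK OFF;
ALTER DATABASE [YourDB] SET PAGE_VERIFY CHECKSUM;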

Second, and if there is no other monitoring system in place, I use the script “1_DatabaseMail_Config”. Just edit the configuration variables near the top with your account information and run it; a rough sketch of the underlying calls follows this list. This script will:

  • Create a Database Mail profile. Usually there is a distribution list for the DBAs, so I'm keen on using that address.
  • Create an operator using that Database Mail profile.
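
Under the hood, these are standard msdb procedures; a minimal sketch with placeholder names and addresses:

-- Create a mail account and profile (all names/addresses are placeholders):
EXEC msdb.dbo.sysmail_add_account_sp @account_name = N'DBA Mail', @email_address = N'dba-team@yourdomain.com', @mailserver_name = N'smtp.yourdomain.com';
EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = N'DBA Profile';
EXEC msdb.dbo.sysmail_add_profileaccount_sp @profile_name = N'DBA Profile', @account_name = N'DBA Mail', @sequence_number = 1;
-- Create an operator pointing at the DBA distribution list:
EXEC msdb.dbo.sp_add_operator @name = N'DBA Team', @email_address = N'dba-team@yourdomain.com';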

Third, I deploy the AdaptiveIndexDefrag procedure, usually in MSDB unless there is already a DBA database.

Fourth, I use the script “3_job_AdaptiveIndexDefrag”. This will create a daily job for the AdaptiveIndexDefrag procedure, named “Daily Index Defrag”, which also notifies the previously created operator of the job outcome (a rough sketch of the main call follows this list). The job will always:

  • Find some of the most common (default) names for SharePoint databases (step “Sharepoint DB Exceptions”) and add them to the permanent exclusion list, if not already there. SharePoint grooms its own databases, so we should exclude them from any other automated maintenance task. If the AdaptiveIndexDefrag procedure was NOT created in MSDB, simply replace all references to MSDB with whatever database name you chose.
  • Execute the AdaptiveIndexDefrag procedure (step “Daily Index Defrag”).
    Note that in this job a few values differ from the procedure defaults: we are doing online index rebuilds, updating all statistics, sorting in tempDB (assuming tempDB is properly sized) and restricting maxDOP to 4.
  • Purge all historic log data for the index defrag executions using default 90 days (step “Purge Log”).
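
The main step boils down to a call along these lines. Note the parameter names shown here are illustrative assumptions, not the verified signature; check the procedure header in "2_AdaptiveIndexDefrag" for the exact names:

-- Illustrative invocation only; parameter names are assumptions:
EXEC msdb.dbo.usp_AdaptiveIndexDefrag
    @onlineRebuild = 1,      -- rebuild indexes online where the edition allows it
    @updateStats = 1,        -- update all statistics
    @sortInTempDB = 1,       -- sort in tempDB (assumes tempDB is properly sized)
    @maxDopRestriction = 4;  -- restrict maxDOP to 4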

Fifth, I use the script “4_job_AdaptiveCycleErrorlog”. As the name suggests, this creates a job named “Daily Cycle Errorlog”. It runs daily, but only cycles the Errorlog when its size is over 20MB or its age is over 15 days; a rough sketch of that check follows.
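
A minimal sketch of the size check (xp_enumerrorlogs and sp_cycle_errorlog are existing procedures; the age check in the actual job is analogous):

-- Enumerate the errorlogs; archive number 0 is the current one.
DECLARE @logs TABLE (ArchiveNo int, LogDate datetime, LogSizeBytes bigint);
INSERT INTO @logs EXEC master.dbo.xp_enumerrorlogs;
-- Cycle only if the current Errorlog has grown past 20MB.
IF EXISTS (SELECT 1 FROM @logs WHERE ArchiveNo = 0 AND LogSizeBytes > 20 * 1024 * 1024)
    EXEC master.dbo.sp_cycle_errorlog;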

Sixth, I use the script “5_job_Maintenance” (“5_job_Maintenance_MEA” if you are in the Middle East and Africa region, since weekends differ there) to create the job “Weekly Maintenance”, a stored procedure that handles the logic, and a view in MSDB that allows you to quickly check the output of each job step, XML-formatted so that it’s easier to read. The weekly actions execute on Fridays for MEA and Sundays for the rest of the world. This job will:

  • Perform integrity checks, as we recommend doing so at least once a week. On large databases, and for more frequent checks, consider using the Physical parameter.
    If you need to exclude some database from this step, simply edit the following statement in the usp_CheckIntegrity stored procedure to add your exclusions, either by name or database_id, as follows:
SELECT database_id FROM sys.databases WHERE is_read_only = 0 AND state = 0 AND database_id <> 2

   to something like

SELECT database_id FROM sys.databases WHERE is_read_only = 0 AND state = 0 AND database_id <> 2 
AND database_id NOT IN (somedbid1, somedbid2, somedbid3)
    • Once a week, execute DBCC CheckDB on all online, read-write user databases below 1TB.
    • Daily, execute a subset of all the required integrity checks on all online, read-write user databases over 1TB.
      To prepare for this step, all tables and indexed views are divided into similar-sized subsets, or "buckets", and each day we verify a specific bucket. Over the course of one week, your VLDB will be fully checked as part of your proactive integrity checks (a sketch of this bucketing idea follows the list).
  • Once a week, execute DBCC UPDATEUSAGE, which corrects the rows, used pages, reserved pages, leaf pages and data page counts for each partition in a table or index. Although the best practice is to run this routinely only if the database undergoes frequent Data Definition Language (DDL) modifications, such as CREATE, ALTER, or DROP statements, I find that running it weekly on all online, read-write user databases under 4GB in size does not pose an issue, and so I run it.
  • Once a week, execute sp_createstats with the indexonly option on all online, read-write user databases. This creates statistics only on columns that are part of an existing index and are not the first column in any index definition. By creating these single-column statistics we increase the number of histograms, which can improve cardinality estimates, query plans, and query performance.
  • Once a week, clean all MSDB job history older than 30 days. Change this value if you have different requirements.
  • Once a week, clean Maintenance Plan text reports older than 30 days. This is only needed if you still keep some specific Maintenance Plan for some reason.
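
Here is a minimal sketch of the bucketing idea for VLDBs, assuming seven buckets balanced by page count. The actual usp_CheckIntegrity implementation differs (it can also bucketize by filegroup, and persists the buckets for the whole week):

-- Assign every table and indexed view to one of 7 buckets of similar size.
SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) AS object_name,
       NTILE(7) OVER (ORDER BY SUM(ps.used_page_count) DESC) AS bucket
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
JOIN sys.dm_db_partition_stats AS ps ON ps.object_id = o.object_id
WHERE o.type IN ('U', 'V')
GROUP BY s.name, o.name;
-- Each day, that day's bucket is then checked, e.g.:
DBCC CHECKTABLE (N'dbo.SomeLargeTable') WITH DATA_PURITY;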

NOTE: although the job is scheduled on a daily basis, there is logic at the beginning of each step so that the weekly steps execute on Sundays only (Fridays in the MEA version). If you need to move the weekly steps to any other day, be sure to change the statement(s) below:

IF (SELECT 1 & POWER(2, DATEPART(weekday, GETDATE())-1)) > 0

Where 1=Sunday, 2=Monday, 4=Tuesday, 8=Wednesday, 16=Thursday, 32=Friday and 64=Saturday.
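
For example, to run a weekly step on Wednesdays instead, replace the leading 1 with Wednesday's value of 8:

IF (SELECT 8 & POWER(2, DATEPART(weekday, GETDATE())-1)) > 0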

NOTE 2: If a database has Read-Only filegroups, any integrity check will fail if there are other open connections to the database. We have a choice to either set the database to single user mode or create a database snapshot prior to executing integrity checks on such databases (a usage sketch follows the parameter list below).

    • CreateSnap parameter:
      • 1 will create a database snapshot, run the check on the snapshot, and drop it at the end (default).
      • 0 means the integrity check might fail if there are other open connections to the database.
      • Set a custom snapshot creation path with the @SnapPath parameter, or the path of the database in scope will be used.
        Ex.: @SnapPath = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Data'
    • SingleUser parameter:
      • 1 will set the database to single user mode before running the check, and back to multi user afterwards.
      • 0 means the integrity check might fail if there are other open connections to the database.
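
A usage sketch for a database with Read-Only filegroups, assuming the deployed procedure keeps the parameter names above (the exact signature may differ between versions):

-- Assumed invocation: create a snapshot on a custom (hypothetical) path, check it, drop it.
EXEC msdb.dbo.usp_CheckIntegrity @CreateSnap = 1, @SnapPath = N'D:\Snapshots';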

Seventh and final (although optional) step: I use the script "6_Agent_Alerts" to create SQL Agent-based alerts for important error messages, for when there are no other monitoring tools to warn the DBA team of important events in SQL Server (a sketch of how one such alert is created follows the list). The alerts cover the following error codes and severities:

  • Severity 10 - Error(s) 825, 833, 855, 856, 3452, 3619, 17179, 17883, 17884, 17887, 17888, 17890 and 28036
  • Severity 16 - Error(s) 2508, 2511, 3271, 5228, 5229, 5242, 5243, 5250, 5901, 17130 and 17300
  • Severity 17 - Error(s) 802, 845, 1101, 1105, 1121, 1214 and 9002
  • Severity 19 - Error(s) 701
  • Severity 20 - Error(s) 3624
  • Severity 21 - Error(s) 605
  • Severity 22 - Error(s) 5180 and 8966
  • Severity 23 - Error(s) 5572 and 9100
  • Severity 24 - Error(s) 823, 824 and 832
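
Each alert boils down to a pair of standard msdb calls like the following; the alert and operator names are placeholders:

-- Alert on a specific error number (823 shown), at any severity:
EXEC msdb.dbo.sp_add_alert @name = N'Error 823 - Read/Write failure',
    @message_id = 823, @severity = 0, @include_event_description_in = 1;
-- Notify the operator created by "1_DatabaseMail_Config" by e-mail:
EXEC msdb.dbo.sp_add_notification @alert_name = N'Error 823 - Read/Write failure',
    @operator_name = N'DBA Team', @notification_method = 1;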

Having these (or other) scripts to deploy on your SQL Servers saves you the trouble of implementing Maintenance Plan objects, and gives you much better control and flexibility over what happens in your maintenance cycles. Once these have been running on a regular basis, you will know that your SQL Server is being proactively managed for the most common maintenance tasks. Of course, all the logs should be reviewed regularly so that any issues can be corrected.


Hope you find these useful.

Change log:

Version 1 (18-09-2012)
Initial release.

Version 1.1 (25-10-2012)
Added script "6_Agent_Alerts.sql";
Fixed issue with "1_DatabaseMail_Config.sql" that gave an error when creating the operator;
Added notifications by e-mail to the operator created above.

Version 1.2 (12-09-2013)
Changed system database exclusion choices where used;
Changed integrity checks to always run with data purity checks;
Changed "Daily Index Maintenance" job to automatically exclude SharePoint databases (based on known naming patterns).

Version 1.2.1 (08-10-2013)
Changed "Daily Index Maintenance" job to automatically exclude SharePoint and BizTalk databases (based on known naming patterns).

Version 1.2.2 (17-10-2013)
Changed "Weekly Maintenance" job to implement partial integrity checks on VLDBs over the course of a week; also changed scheduling accordingly. More information in the step description above.

Version 1.2.3 (24-10-2013)
Changed "Daily Index Maintenance" job to extend automatic exclusion to known MS product databases that should handle their own defrag scheduling.

Version 1.2.4 (29-10-2013)
Fixed error in step 3 of "Weekly Maintenance" job.

Version 1.2.5 (13-11-2013)
Fixed issue with "AdaptiveCycleErrorlog" job that would not cycle the errorlog;
Added VLDBMode parameter for VLDB handling in step 5 of "Weekly Maintenance" job, now bucketizing by tables or filegroups (more info in the job description above);
Added SingleUser parameter in step 5 of "Weekly Maintenance" job, to circumvent an issue where a DBCC statement fails in databases with Read-Only filegroups if there are other open connections to the database, by setting the database in scope to single user mode prior to integrity checks.

Version 1.2.6 (09-01-2014)
Fixed SingleUser parameter use.

Version 1.2.7 (09-02-2014)
Moved integrity checks logic in step 5 of "Weekly Maintenance" job to a stored procedure;
Added CreateSnap parameter in step 5 of "Weekly Maintenance" job, to circumvent an issue where a DBCC statement fails in databases with Read-Only filegroups if there are other open connections to the database, by creating a database snapshot and running the integrity checks on it. This is the default behavior for databases with Read-Only filegroups.

Version 1.2.8 (18-02-2014)
Fixed issue in step 5 of "Weekly Maintenance" with databases that have mathematical symbols in their names.

Version 1.2.9 (10-05-2014)
Changed VLDBMode usage for VLDB handling in step 5 of "Weekly Maintenance" job, so that buckets are created only once every week;
Added MEA version of "Weekly Maintenance" - the regular version does most actions on Sundays, the MEA version on Fridays.

Version 1.3 (20-11-2014)
In step 5 of "Weekly Maintenance" job (integrity checks), added a couple of parameters:
@AO_Secondary, setting on which AlwaysOn replicas we want to run integrity checks (by default, AlwaysOn primary replicas are eligible for integrity checks, while secondary replicas are skipped);
@Physical, which allows running PHYSICAL_ONLY on DBCC commands that support this option (default is OFF).

Until next time!

Disclaimer: I hope that the information on these pages is valuable to you. Your use of the information contained in these pages, however, is at your sole risk. All information on these pages is provided "as-is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Ezequiel. Further, Ezequiel shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages.

Comments
  • Handy toolset indeed. Thanks Pedro

  • Hi Pedro,

    Just a small typo correction and a small suggestion:

    On the Database Server options script:

    Line 19:

    EXEC sys.sp_configure N'optimize for ad hoc workloads', N'1

    should be:

    EXEC sys.sp_configure N'optimize for ad hoc workloads', N'1'

    On the Adaptive Index Defrag, I think it would be a good idea to check if dba_db exists and create it if necessary; otherwise we receive an error that dba_db does not exist and all objects are created in the current db.

    My suggestion is something like this at the beginning of the script:

    IF NOT EXISTS(SELECT [database_id] FROM sys.databases WHERE [name] = 'dba_db')
    CREATE DATABASE [dba_db]
    CONTAINMENT = NONE
    ON PRIMARY
    ( NAME = N'dba_db', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.VERVESQL\MSSQL\DATA\dba_db.mdf' , SIZE = 102400KB , FILEGROWTH = 102400KB )
    LOG ON
    ( NAME = N'dba_db_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.VERVESQL\MSSQL\DATA\dba_db_log.ldf' , SIZE = 51200KB , FILEGROWTH = 102400KB )
    GO

    Once again, thanks for all your work.

    Regards

    Eduardo Barbosa

  • Hey Eduardo,

    The 1st script is fixed now. Thanks for noticing the typo.

    About the "dba_db", it's just meant to be changed to whatever database a DBA chooses to create the logic in. I came to realize most people that use it deploy it in MSDB. Furthermore, the subsequent jobs, as they are scripted, will look for the sproc in MSDB, so that also needs to be changed to the correct database as needed.

    Cheers

  • Gotcha,

    Also note that the jobs expect the objects to be in msdb.

    Then I suggest a "USE msdb" instead, with some comment above it clarifying that.

    Cheers

    Right, what I did just now was highlight the notes/warnings on that topic that were already in the post above. I also took your advice on changing the default context to msdb in AdaptiveIndexDefrag - but please note that it can be deployed anywhere.

    Thanks!

    PL

  • Very nice article

  • Hi Pedro,

    It's perfect now :) ...and working smoothly on SQL Server 2012

    Cheer and Thanks

    Eduardo Barbosa

  • Great maintenance plan.

  • Hi Pedro!

    In a SQL Server 2012 SP1 environment (11.0.3128.0) I had to change step 2 (usage statistics) in order to make the job run successfully:

    Changed from:

    SET @dbname = QUOTENAME(@dbname)

    To:

    SET @dbname = @dbname

    It was failing with the error (for two databases):

    "Could not find database '[dba_db]'. The database either does not exist, or was dropped before a statement tried to use it. Verify if the database exists by querying the sys.databases catalog view. [SQLSTATE 42000] (Error 2520)  Could not find database '[VirtualManagerDB]'. The database either does not exist, or was dropped before a statement tried to use it. Verify if the database exists by querying the sys.databases catalog view. [SQLSTATE 42000] (Error 2520)  ** Finished: Jul 21 2013  8:57AM [SQLSTATE 01000] (Message 0).  The step failed."

  • Hello João,

    You mean the step 2 in the script 5_job_Maintenance.sql? Because I cannot find a "SET @dbname = QUOTENAME(@dbname)" string in the code. Maybe you're using an older version?

    Thanks!

  • Pedro,

    Indeed, we have other SQL instances without the "SET @dbname = QUOTENAME(@dbname)" string, so it is probably an older version or some in place correction.

    Thanks for the feedback.

  • Pedro,

    very nice writeup.

    I have some concerns about the robustness of the 'Sunday-check', though.

    As written:

    IF (SELECT 1 & POWER(2, DATEPART(weekday, GETDATE())-1)) > 0

    Where 1=Sunday, 2=Monday, 4=Tuesday, 8=Wednesday, 16=Thursday, 32=Friday and 64=Saturday.

    ...this appears to only work if @@DATEFIRST is set to 7. Should @@DATEFIRST be any other value, the check for Sunday won't work as expected.

    The problem with @@DATEFIRST is that anyone can change it at any time, and on top of that, it's also sensitive to language.

    Try these two and see what happens with @@DATEFIRST

    set language 'british'

    set language 'us_english'

    I'd like to do daynumber checks without relying on whatever value @@DATEFIRST happens to have. Depending on one's personal taste (and calendar), Sunday is usually either day #1 or day #7. (For me up here in Sweden it's #7; our calendar starts with day #1 on Mondays.)

    So, if you like to enumerate the week with Sunday as the first day, one could use this check and not worry about either language settings or @@DATEFIRST:

    -- 1=Sun, 2=Mon, 3=Tue, 4=Wed, 5=Thu, 6=Fri, 7=Sat - always and regardless of SET DATEFIRST!

    if (select daynumber = (((@@datefirst + datepart(dw, getdate()) - 1) % 7) + 1)) = 1 print 'Today is Sunday'

    Alternatively, if one likes to count Sunday as the last day of the week:

    -- 1=Mon, 2=Tue, 3=Wed, 4=Thu, 5=Fri, 6=Sat, 7=Sun - always and regardless of SET DATEFIRST!

    if (select daynumber = (((@@datefirst + datepart(dw, getdate()) - 2) % 7) + 1)) = 7 print 'Today is Sunday'

    =;o)

    /Kenneth
