Michael De Voe, a Senior Premier Field Engineer at Microsoft, has compiled a set of recommendations for SQL Server configuration to improve performance when running Microsoft Dynamics NAV 5.0 and later versions with one of the following versions of SQL Server:
The attached document contains Michael's recommendations, including the following options and parameters:
These postings are provided "AS IS" with no warranties, and confer no rights. You assume all risk for your use.
Great post! Thanks, Michael
Nice having this information consolidated into one document! Great information.
Great post, thanks a lot! After nearly ten years of "NAV on SQL" MS finally releases something like this (no offence). Well, I have been dealing with NAV/SQL performance issues for almost eight years, so I have an opinion about your recommendations which I'd like to share:
“Max. Server Memory”
Cool! Your formula is great. Previously I just calculated roughly “Physical RAM minus 1 or 2 GB for the OS”. Yours is more accurate.
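Whatever value the formula yields, it is applied with sp_configure. As an illustration only (the 28 GB figure below is a hypothetical result for a 32 GB server, not the formula's actual output):

```sql
-- Hypothetical example: 32 GB server, leaving ~4 GB for the OS and other
-- processes. The actual value should come from the formula in the document.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- 28 GB
RECONFIGURE;
```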
“Auto. Create/Update Stats”
I strongly disagree with that one. The overhead created by this is remarkable; if you look at the statistics operations with Profiler you'll see they consume thousands of "Reads" (= I/O) and plenty of CPU time.
Because every NAV user can filter on any field of every table, sooner or later every field will end up with its own statistic, creating quite some overhead to administer and reducing read and write performance.
The stats which are created are actually useless: in NAV, 90% of all query problems are due to insufficient indexes. A statistic alone cannot help here; you need to e.g. add an index, which automatically gets its own stats.
Hence, it is the index stats which are crucial, and we should take care of their maintenance manually. Thus, "Auto. Create/Update Stats" IMHO should be disabled.
As you recommend, the "sp_updatestats" procedure is quite feasible; in addition I suggest also running "sp_createstats 'indexonly'".
In very few cases it might indeed be necessary to update the stats more frequently, so in the “worst case” the “Auto. Update Stats” feature could be enabled, but preferably together with the option to update them “Asynchronously”.
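A minimal sketch of the manual statistics strategy described above; the database name is a placeholder:

```sql
-- Disable automatic statistics (manual-maintenance approach);
-- [NAVDemo] is a placeholder database name.
ALTER DATABASE [NAVDemo] SET AUTO_CREATE_STATISTICS OFF;
ALTER DATABASE [NAVDemo] SET AUTO_UPDATE_STATISTICS OFF;

-- Or, as the "worst case" fallback: keep auto-update, but asynchronously.
-- ALTER DATABASE [NAVDemo] SET AUTO_UPDATE_STATISTICS ON;
-- ALTER DATABASE [NAVDemo] SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Scheduled maintenance job (run inside the NAV database):
EXEC sp_createstats 'indexonly';  -- create stats on index columns lacking them
EXEC sp_updatestats;              -- refresh all out-of-date statistics
```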
"Auto Grow"
Actually I agree with this, but I'd proceed somewhat differently: "Auto Grow" should be avoided as much as possible, especially on the log. So ideally the database files are large enough, providing plenty of free space, ca. 20-25% at minimum; the more the better. The fill degree of the files can be monitored (e.g. with an Alert or some T-SQL), so a job could be established which notifies an operator once a certain threshold is reached. IMHO a human being should always take care of file sizes; Auto Grow should be enabled only for emergencies (e.g. a fixed growth of 250-500 MB).
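As a sketch, pre-sizing the files with a fixed emergency growth increment and monitoring the fill degree could look like this (database, file names, and sizes are placeholders):

```sql
-- Pre-size data and log files with plenty of free space, and set a fixed
-- growth increment as an emergency fallback (names/sizes are placeholders).
ALTER DATABASE [NAVDemo]
    MODIFY FILE (NAME = 'NAVDemo_Data', SIZE = 100GB, FILEGROWTH = 500MB);
ALTER DATABASE [NAVDemo]
    MODIFY FILE (NAME = 'NAVDemo_Log', SIZE = 20GB, FILEGROWTH = 500MB);

-- Monitor the fill degree, e.g. size vs. used space per file
-- (both values are stored in 8 KB pages, hence the division by 128):
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
```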
“Database Compatibility Level”
I 100% agree ;c)
“Trace Flag 4136”
I agree, but bear in mind that it just prevents certain issues. In some cases the TF has no impact at all (with NAV 5.0 SP1), but it should not do any harm either.
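For reference, a trace flag like 4136 can be enabled globally at runtime or as a startup parameter:

```sql
-- Enable trace flag 4136 globally at runtime (lost after a restart):
DBCC TRACEON (4136, -1);

-- Verify the current state of the flag:
DBCC TRACESTATUS (4136);

-- To make it permanent, add -T4136 as a SQL Server startup parameter
-- via SQL Server Configuration Manager.
```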
I fully agree with that! But I would not stick rigidly to the rule "1 file per CPU"; e.g. if you have two hex-cores = 12 CPUs, then 12 files are perhaps somewhat too many and not really necessary. So I'd rather start with 4 or 6 files, then increase the number on demand.
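Adding tempdb data files on demand could be sketched as follows; paths and sizes are placeholders, and all files should be equally sized so SQL Server's proportional-fill algorithm distributes I/O evenly:

```sql
-- Add tempdb data files (paths and sizes are placeholders); keep all files
-- the same size so proportional fill spreads the I/O evenly across them.
ALTER DATABASE tempdb
    ADD FILE (NAME = 'tempdev2', FILENAME = 'T:\tempdb2.ndf',
              SIZE = 2GB, FILEGROWTH = 500MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = 'tempdev3', FILENAME = 'T:\tempdb3.ndf',
              SIZE = 2GB, FILEGROWTH = 500MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = 'tempdev4', FILENAME = 'T:\tempdb4.ndf',
              SIZE = 2GB, FILEGROWTH = 500MB);
```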
This is true, of course. But reality shows that most systems have no severe problems with other settings or formatting. I mean, a different block size or missing alignment is a theoretical problem, but not necessarily a practical one, depending on the disk technology used. On larger systems with heavy physical I/O this could have some impact, as you also said.
"Read Committed Snapshot Isolation (RCSI)"
As you said: NAV cannot benefit from this feature, so it should be enabled only if you run non-NAV applications which could take advantage of it. As far as I understand, the feature works like this: whenever a record is modified, its previous version is maintained in "tempdb" so that other queries can read it without getting blocked.
So doesn’t this mean that enabling RCSI increases the pressure on “tempdb”? Isn’t that another risk to encounter different problems?
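For reference, RCSI is toggled per database; enabling it needs exclusive access, so open connections have to be rolled back first:

```sql
-- Enable Read Committed Snapshot Isolation (requires exclusive access,
-- hence the rollback of open transactions; database name is a placeholder).
ALTER DATABASE [NAVDemo] SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Check the current setting:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'NAVDemo';
```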
“Max. Degree of Parallelism”
In my experience, setting MAXDOP to 1 (forcing single-threaded execution) is highly recommended with older NAV systems running SIFT: if the SIFT management runs multi-threaded, one could encounter strange blocking or deadlocking behavior (I'll spare you the details here).
But ultimately, reducing MAXDOP means limiting all processes which theoretically could run multi-threaded.
Index maintenance is such a multi-threaded thing, and as you suggest, MAXDOP should be reset for such purposes.
In my experience, newer NAV systems (5.0 SP1 or higher = VSIFT) do not have problems with MAXDOP settings greater than 1; e.g. on a quad-core = 4 CPUs I adjust MAXDOP to 2 (= half the number of CPUs). This provides decent parallelism for "intelligent" processes, but also prevents one single process from "overloading" all CPUs.
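For example, fixing MAXDOP at the instance level (1 for older SIFT systems, or e.g. 2 on a quad-core as suggested above):

```sql
-- 'max degree of parallelism' is an advanced option, so expose those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 1 for older (SIFT) systems; e.g. 2 on a quad-core for newer (VSIFT) systems.
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```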
“NAV Default Isolation”
I agree. But I highly recommend to really read the KB article regarding the “Phantom Read” issue.
I fully agree. It is basically a matter of "user experience". So far I have seen that most users prefer to get feedback in some decent time, even if it is negative feedback like a "Lock Timeout". If users are blocked, the NAV screen just hangs, and the user doesn't know what's going on… Is it working? Is it blocked? Did it crash?
Thus getting a “Timeout” feedback after a couple of seconds might be feasible.
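On the SQL Server side, the session-level equivalent of this behavior is SET LOCK_TIMEOUT; e.g. a five-second timeout:

```sql
-- Give up waiting for a lock after 5 seconds instead of hanging indefinitely;
-- a blocked statement then fails with error 1222 ("Lock request time out
-- period exceeded") so the user gets timely feedback.
SET LOCK_TIMEOUT 5000;
```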
"Always Rowlock"
If enabled, NAV sends the ROWLOCK hint, which ultimately means "Lock Escalation" is prevented, even though SQL Server would always start locking at row level anyway (regardless of this setting).
But if escalation happens, the risk of encountering blocks increases. As you said, it is a memory issue: 32-bit systems could indeed have severe problems maintaining too many row locks, so I recommend disabling it there. But 64-bit systems with a sufficient amount of RAM (8 GB and more) can usually take it without any problems. In this case I recommend enabling "Always Rowlock" to prevent blocking issues.
1. “Statistics”: I commented about this; again:
exec sp_createstats 'indexonly'
2. “Index defragmentation”
Yep, that’s basically the script from “Books Online” (see “sys.dm_db_index_physical_stats”; example D.).
You already improved it a little bit, but some important features are still missing; e.g. for the ALTER INDEX REBUILD the options MAXDOP, maybe FILLFACTOR, maybe SORT_IN_TEMPDB, maybe ONLINE, etc., or for the REORGANIZE the LOB_COMPACTION option…
Anyway, this script is way better than the standard Maintenance Plan tasks.
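The extra options mentioned above could be added along these lines (values and the table name are illustrative placeholders; ONLINE rebuilds require Enterprise Edition):

```sql
-- Illustrative only: the options mentioned above, with placeholder values.
ALTER INDEX ALL ON [dbo].[MyTable]
    REBUILD WITH (FILLFACTOR = 90,
                  MAXDOP = 4,
                  SORT_IN_TEMPDB = ON,
                  ONLINE = ON);      -- Enterprise Edition only

ALTER INDEX ALL ON [dbo].[MyTable]
    REORGANIZE WITH (LOB_COMPACTION = ON);
```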
3. Actually a “DB Integrity Check” is missing!
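A scheduled integrity check is typically just a DBCC CHECKDB call (database name is a placeholder):

```sql
-- Periodic (e.g. weekly) integrity check; suppress informational messages
-- but report every error found.
DBCC CHECKDB ('NAVDemo') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```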
“Instant File Initialization”
Actually I have no opinion about that ;c)
“Optimize for Ad-Hoc Workloads”
Actually I find this very interesting, never thought about that. Sounds cool! I definitely will try this!
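For reference, this is an instance-wide advanced option:

```sql
-- 'optimize for ad hoc workloads' caches only a small plan stub on the
-- first execution of an ad-hoc batch, saving plan-cache memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```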
“Lock Pages in Memory”
A must-have for 32-bit systems using AWE; with 64-bit systems mostly not necessary, but it does not do any harm if enabled.
What’s your advice regarding the “Cache Record Set Size”?
I’m looking forward to – hopefully – getting your comment on this; I’d be happy to exchange experiences with this somewhat “tricky business” ;c)
In very few cases it might indeed be necessary to update the stats more frequently, so in the “worst case” the “Auto. Update Stats” feature could be enabled, but preferably together with the option to update them “Asynchronously”
We will have to agree to disagree on this matter, as every customer I have been to who has tried to manage statistics manually has failed, and the effect of this was severely compromised performance. If the customer has a staff that is very knowledgeable and skilled with SQL Server and can develop and execute a manual statistics strategy, more power to them (these folks would most likely not need this article). For the vast majority of NAV customers this just is not an option, and I would always recommend enabling auto-create and auto-update to prevent the performance issues caused by missing or out-of-date statistics.
Note: sp_createstats 'indexonly' is a very good idea for a new deployment, but I think you will find that after a week or two in production "auto-create" will have created all the same stats that sp_createstats 'indexonly' would. So it is not a bad idea, just not completely necessary.
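You can check what auto-create has built so far with a query along these lines (run inside the NAV database):

```sql
-- List statistics created automatically by SQL Server, per table.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name
FROM sys.stats AS s
WHERE s.auto_created = 1
ORDER BY table_name, stats_name;
```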
As I wrote in the paper, the advantage of multiple files tends to drop off greatly after 8; many SQL benchmarks have shown no real benefit beyond 8 tempdb data files.
RCSI really boils down to this: readers and writers using the READ COMMITTED isolation level will not block each other, and this is done through row versioning stored in tempdb. With RCSI enabled, tempdb usage will increase, so I added a note at the end of the RCSI recommendation pointing to the tempdb recommendations.
The queries that come out of NAV are very simplistic and would never really require parallel execution unless the indexes are not tuned properly. If you see an execution plan from NAV that has parallelism in it, there is probably an issue with it. Now SSRS reports would be a different story, as some of the SQL behind reports can be very complex, but that is a different discussion. On SQL Server Standard Edition, which is what most customers run, maintenance plans are limited to a single thread no matter what. With the customers I work with that run SQL Server Enterprise Edition, I always recommend that we flip MAXDOP to 0 before the execution of the plan and then flip it back to 1 after; since it is a runtime option, SQL Server does not need to be restarted. MAXDOP = 1 is actually an across-the-board recommendation for all the Dynamics products.
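The flip described above is just two sp_configure calls around the maintenance run; as a runtime option it takes effect immediately, no restart required:

```sql
-- Before the maintenance plan runs: allow full parallelism.
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;

-- ... run the index maintenance here ...

-- Afterwards: back to the Dynamics recommendation.
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```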
SERIALIZABLE isolation level just is not optimal for OLTP systems.
Most ERP systems use the READ COMMITTED isolation level, which is even lower than REPEATABLE READ. All isolation levels have their pros and cons (concurrency vs. consistency). For NAV customers I personally do not believe REPEATABLE READ represents much risk, and the reward of higher scalability is worth it. I would actually personally like to see NAV adopt a default isolation level of READ COMMITTED. :-)
I cannot completely disagree with you on this; I just do not know how much benefit there really is to having this enabled. In the ad-hoc and customer benchmarks I have done, I have not really seen much difference in blocking with it enabled or disabled. I suppose I am trying to say I do not have a strong opinion on this setting one way or another. :-)
Actually a “DB Integrity Check” is missing!
I am not sure what you mean?
I really do not have a blanket recommendation for this. It really depends on the deployment.
Michael, thanks for posting! This is great.
Useful post, thanks!
But there's a mistake in the reindex script:
...FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, 'DETAILED')...
'LIMITED' must be used instead of 'DETAILED';
otherwise the reindex and/or defrag can run multiple times for the same index.
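The corrected call would then read:

```sql
-- 'LIMITED' returns one row per index, whereas 'DETAILED' returns one row
-- per index *level*, which makes the maintenance loop process the same
-- index several times.
SELECT *
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED');
```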
Good point, I will make the change and get a new version of the document out.
We have a customer with SQL 2008 SP1 and a SAN containing 10 disks, configured as 5 separate disks with RAID 0/1: 4 containing the NAV database and the fifth containing the log.
Now the customer has "heard" that the 4 disks should be one RAID 10 partition, so that SQL has only one drive for the database and one drive for the log.
We initially configured 4 disks so that SQL could take advantage of one CPU per disk partition (as with tempdb).
But which would you recommend?