Microsoft SQL Server 2008 introduces two new compression features (available in Enterprise Edition) that are of interest to users with large databases. Data compression applies to individual tables and indexes within a database and comes in two options: ROW and PAGE. Backup compression is a new option when backing up a database to external storage. Both compression techniques can be used to reduce the amount of online storage required for databases. Data compression can be configured for an entire heap (a table with no clustered index), clustered index, non-clustered index, or indexed view. For partitioned tables and indexes, the compression option can be configured separately (and differently) for each partition. Compression can also reduce disk I/O and SQL Server memory utilization, because data is stored on disk in a compressed state and is also read into the SQL Server cache in a compressed state. On the other hand, compression can add 10-30 percent CPU utilization, depending on which tables and indexes are compressed and which level of compression is used.
For Dynamics NAV we recommend compressing only tables and indexes that have a read-to-write ratio of 80%/20% or higher (a conservative threshold), because compressing tables with a higher write ratio can actually decrease performance. We also recommend using ROW compression if the space saving between ROW and PAGE compression is less than 10%; if the difference is over 10%, we recommend PAGE compression. In other words, if the space savings from PAGE compression are close to those of ROW compression, it is not worth incurring the additional CPU overhead associated with PAGE compression. Examples of NAV tables that benefit greatly from compression are the large "Entry" tables such as G/L Entry, Value Entry, and Item Ledger Entry. Examples of NAV tables that do not benefit from compression, and where performance may actually decrease, are "Line" tables where the data is temporary in nature, such as Sales Line, Purchase Line, and Warehouse Activity Line. SQL Server compression is completely transparent to the Dynamics NAV application.
Compression is configured on a per-table or per-index basis. On a table with a clustered index, the clustered index is the table, so compressing the clustered index is equivalent to compressing the table. Running ALTER TABLE ... REBUILD WITH (DATA_COMPRESSION = ...) is only necessary for heaps; for all tables with clustered indexes you can use ALTER INDEX ... REBUILD WITH (DATA_COMPRESSION = ...).
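For example, enabling compression might look like the following (the table and index names here are purely illustrative; actual NAV object names include the company prefix):

```sql
-- Compress a heap (a table with no clustered index)
ALTER TABLE [CRONUS$G_L Entry] REBUILD WITH (DATA_COMPRESSION = ROW);

-- Compress a table via its clustered index, or compress any non-clustered index
ALTER INDEX [CRONUS$G_L Entry$0] ON [CRONUS$G_L Entry]
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Rebuilding a large index is itself an expensive operation, so plan to run these statements during a maintenance window.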
How do you determine the read/write ratio of an index? Luckily, SQL Server keeps track of this for us, and all we need to do is extract the data. SQL Server stores this information in the sys.dm_db_index_operational_stats DMV. Remember that DMVs are "recycled" each time the SQL Server service is restarted, so if SQL Server has only been up and running for a day or a week, this information will be of minimal use. Ideally, you want the server to have been up and running for several weeks, and through a month-end close, to get a true picture of the actual data access patterns of your database.
The following query will tell you how long the SQL Server instance has been up and running:
select 'Sql Server Service has been running for about '
+ cast((datediff(hh, create_date, getdate()))/24 as varchar(3)) + ' days and '
+ cast((datediff(hh, create_date, getdate())) % 24 as varchar(2)) + ' hours'
from sys.databases where name = 'tempdb'
The following query will give the approximate read/write balance of all the used indexes in the database.
SELECT o.name AS Table_Name, x.name AS Index_Name,
i.leaf_update_count * 100.0 /
(i.range_scan_count + i.leaf_insert_count
+ i.leaf_delete_count + i.leaf_update_count
+ i.leaf_page_merge_count + i.singleton_lookup_count
) as Writes,
i.range_scan_count * 100.0 /
(i.range_scan_count + i.leaf_insert_count
+ i.leaf_delete_count + i.leaf_update_count
+ i.leaf_page_merge_count + i.singleton_lookup_count
) as Reads
FROM sys.dm_db_index_operational_stats (db_id(), NULL, NULL, NULL) i
JOIN sys.sysobjects o ON o.id = i.object_id
JOIN sys.indexes x ON x.object_id = i.object_id AND x.index_id = i.index_id
WHERE (i.range_scan_count + i.leaf_insert_count
+ i.leaf_delete_count + i.leaf_update_count
+ i.leaf_page_merge_count + i.singleton_lookup_count) <> 0
AND objectproperty(i.object_id,'IsUserTable') = 1
ORDER BY Reads DESC
You can also run a simulation with the sp_estimate_data_compression_savings stored procedure, which runs a sample of the data through the chosen compression level and gives you an estimate of the space saved by enabling compression.
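For example, to estimate PAGE-compression savings for a single table (the table name here is illustrative):

```sql
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'CRONUS$G_L Entry',  -- illustrative NAV table name
    @index_id = NULL,                   -- NULL = all indexes on the table
    @partition_number = NULL,           -- NULL = all partitions
    @data_compression = 'PAGE';
```

Run it once with 'ROW' and once with 'PAGE' and compare the estimated sizes to apply the 10% rule described above.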
I strongly recommend reading the following MSDN article before engaging in database compression activities.
Note: You can use the following query to check whether there are currently any compressed indexes in the database.
SELECT o.name AS Table_Name, p.index_id AS Index_ID, i.name AS Index_Name,
CASE
WHEN p.data_compression = 1 THEN 'ROW Compression'
WHEN p.data_compression = 2 THEN 'PAGE Compression'
END AS Compression_Type
FROM sys.partitions p
JOIN sys.objects o ON p.object_id = o.object_id
JOIN sys.sysindexes i ON o.object_id = i.id AND p.index_id = i.indid
WHERE p.data_compression IN (1,2)
ORDER BY o.name, p.data_compression, p.index_id
Michael De Voe
Senior Premier Field Engineer
Microsoft Certified Master - SQL Server 2008
Some of you might have run into scenarios where a report that runs in some (acceptable) time on the Classic client takes far longer on the RoleTailored client (RTC), or simply errors out with an OutOfMemory kind of exception.
Some have also asked what the limit is on the number of records sent to RTC when experiencing this. Well, my personal environment here shows that 250,000 records will transfer, but the report will be left running for a long, long time, far longer than the Classic equivalent, which finished in, say, 15 minutes. So how do we deal with this?
While sending 250,000 rows (which equates to about 11,000 output pages!) is quite an amount of data to print, and one might assume that not too many people would want to print this much in the first place, you might not be aware that you're sending this many records, because your output may come down to a mere 2 pages. So this post is for that kind of report: reports that process many rows but really output relatively little (and by relatively little, I mean anything under a couple of hundred pages).
Let's start by looking at what is sent to RTC. We have sections that are data containers for RDLC reports, so all data contained there will naturally be sent to RTC. But not only that! To see what's sent to RTC, look at your report metadata (export the metadata BLOB and save it as an XML file to check this). What you might not be aware of is that all records within the metadata are sent, and the metadata is really your report structure. So if your report has three data items, all records processed within the filtered selection that belong to those data items will be sent to RTC. This is why you might find that some reports run for a long time (after sending hundreds of thousands of rows) but output only a few pages, or in any case far fewer pages than the number of records processed. This mostly happens with reports that:
1 Use data items for calculation and do not necessarily output records from these in the printout (data items not used in sections at all)
2 Use data items for footers and headers only (data items not used in body-type sections)
3 Have a 'PrintDetails' option (the data item IS used in a body section), which means that regardless of the fact that your user hasn't opted to print details, that data item is in the report structure, hence all those records are sent to RTC anyway.
So these are the things to watch out for when you are writing your reports for RTC.
In cases 1 and 3 => All you need to do to stop records being sent to RTC is add (IF NOT PrintDetails THEN) CurrReport.SKIP at the end of the OnAfterGetRecord trigger of the data item. Add it at the end so you make sure your calculation is still done; CurrReport.SKIP will then stop the actual data from being sent.
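A minimal C/AL sketch of this pattern for cases 1 and 3 (PrintDetails is assumed to be the report's existing option variable; the data item name is illustrative):

```
// Item - OnAfterGetRecord()
TotalAmount += Amount;          // existing calculation still runs

IF NOT PrintDetails THEN
  CurrReport.SKIP;              // record is processed but not sent to RTC
```

Because SKIP is the last statement in the trigger, the totals are accumulated exactly as before; only the transfer of the record to the client is suppressed.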
In case of 2 => Don't! Using a data item will send all records within the filtered selection, so if you only want to print a footer for a group of value entries (and don't have a body section for that data item at all), you will end up sending all those value entries to RTC, which in most installations is a lot. Instead, use an Integer data item to output the footer once per group, and handle the summarizing of values within the group in code (keep the current report structure if you will: add CurrReport.SKIP at the end of the OnAfterGetRecord trigger of the detailed data item, summarize the footer values in the code of that trigger instead, and add an Integer data item to output the summarized footer values).
As a test, I've done this on a couple of standard reports that process a lot of data and tested their running times after the modifications. As assumed, millions of records are processed and the few pages are printed in 15 minutes, even on RTC :-)
With the release of Microsoft Dynamics NAV 2009 R2, the release of platform hotfixes for Microsoft Dynamics NAV 2009 SP1 and Microsoft Dynamics NAV 2009 R2 has been combined. This means that there will be one set of hotfix files that can be used with both Microsoft Dynamics NAV 2009 SP1 and Microsoft Dynamics NAV 2009 R2.
The combined hotfixes will start with build number 32074.
Special note for Microsoft Dynamics NAV 2009 SP1
In order to install hotfixes with build 32074 and higher, customers who have Microsoft Dynamics NAV 2009 SP1 with a build below 32074 installed must first perform one of the following steps:
For more information about the files that need to be installed and how to install the files, see KB 2496107
For more information about how to perform a solution upgrade from Microsoft Dynamics NAV 2009 SP1 to Microsoft Dynamics NAV 2009 R2, see Install and Upgrade Instructions for Microsoft Dynamics NAV 2009 R2
There are no prerequisites for customers who have Microsoft Dynamics NAV 2009 R2 installed.
For a complete list of all platform hotfixes released for Microsoft Dynamics NAV 2009 SP1 and Microsoft Dynamics NAV 2009 R2, see the following pages on CustomerSource and PartnerSource:
Note: There are no changes to the release process for platform hotfixes for Microsoft Dynamics NAV 2009 (no service pack).
All application hotfixes released for Microsoft Dynamics NAV 2009 SP1 apply to Microsoft Dynamics NAV 2009 R2 as well.
For a complete list of all application hotfixes released for Microsoft Dynamics NAV 2009 SP1 and Microsoft Dynamics NAV 2009 R2, see the following pages on CustomerSource and PartnerSource:
Microsoft Dynamics NAV 2009 R2 uses WCF (Windows Communication Foundation) as the core of its data transport between the RoleTailored client and Microsoft Dynamics NAV Server. This usage of WCF includes transport security. The default behavior for Microsoft Dynamics NAV 2009 R2 is to use the transport security setting Encrypt and Sign, encrypting all traffic between the RoleTailored client and Microsoft Dynamics NAV Server. This behavior is controlled by the ProtectionLevel parameter in both the RoleTailored client and the Microsoft Dynamics NAV Server configuration files. All clients and their associated server must have matching values for this parameter. Encrypt and Sign security utilizes TLS (Transport Layer Security), SSL (Secure Sockets Layer), and Kerberos tickets.
See also the following topics on MSDN:
Depending on how they are configured and where they are located, RoleTailored clients in an Encrypt and Sign implementation typically use one of the following security mechanisms in establishing a connection.
When clients and server are all members of the same domain or mutually trusting domains, Kerberos is an appropriate security mechanism. The initial key exchange takes place as part of the Kerberos authentication with the server, and the channel is thereafter encrypted using the standard WCF implementation. If for any reason Kerberos authentication cannot be accomplished, clients and server fall back to NTLM authentication to negotiate encryption keys and start encrypting the channel. For more information, see the standard OASIS document on Kerberos.
Microsoft Dynamics NAV 2009 R2 allows you to configure the connection between Microsoft Dynamics NAV Server and RoleTailored clients to use SSL. When the connection is created via the traditional SSL method, the client validates the server's identity in the same way that an HTTPS client does on the web. The SSL certificate on the Microsoft Dynamics NAV Server computer must be a server certificate issued by a trusted Certificate Authority. The certificate must be issued to the server, and the name on the certificate must match the name of the server. Clients verify that the certificate is valid by looking up the certificate revocation list (either locally or by calling the Certificate Authority). After the server has been authenticated, the channel is converted to a TLS channel, and the server and clients negotiate the keys. The client is then authenticated using the NTLM authentication protocol.
You must edit the Microsoft Dynamics NAV Server and RoleTailored client configuration files to configure Microsoft Dynamics NAV 2009 R2 for SSL. Specifically, you choose settings that indicate that this channel will be using an SSL certificate and that allow for user name and password NTLM authentication.
In CustomSettings.config , the Microsoft Dynamics NAV Server configuration file, you configure the following parameters:
In ClientUserSettings.config, the RoleTailored client configuration file, you configure the following parameter:
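As a sketch, the entries in both files have the shape below. The key names and values shown are assumptions for illustration only; check the actual configuration files and the referenced KB documentation for the exact names used by your build.

```xml
<!-- CustomSettings.config (Microsoft Dynamics NAV Server), illustrative keys -->
<add key="ServicesCertificateThumbprint" value="a1b2c3d4..." />
<add key="ClientCredentialType" value="UserName" />

<!-- ClientUserSettings.config (RoleTailored client), illustrative key -->
<add key="ClientCredentialType" value="UserName" />
```

After editing the server file, restart the Microsoft Dynamics NAV Server service so that the new settings take effect.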
If the client detects that the server has been configured to accept Windows credentials, then it falls back to the negotiated protocol (Kerberos/NTLM).
For more information on SSL and TLS, see What is TLS/SSL?
In addition to a certificate issued by a Certificate Authority, Microsoft Dynamics NAV 2009 R2 also utilizes a test certificate known as a self-signed certificate. For more information on creating and installing this certificate, see Walkthrough: Implementing Security Certificates in a Test Environment, part of the Microsoft Dynamics NAV 2009 R2 documentation on MSDN. Note that implementing the procedures in the walkthrough is recommended only for testing environments. This is because the procedures specify importing the certificate's public key and marking it as trusted, and importing an empty revocation list on each client. This is both prone to error and less secure than utilizing a certificate from a Certificate Authority. Further, if someone is able to obtain the full certificate (both public and private keys) you have no way to revoke the certificate on each of the clients except by going to each client and updating the information manually.
A further change is required in the RoleTailored client configuration file if the DNS name on the certificate used on the server does not match the DNS name entered on the client for the Microsoft Dynamics NAV Server computer. Service certificates have the primary task of authenticating the server to clients. One of the initial checks when a client authenticates a server is to compare the value of the Subject field to the Uniform Resource Identifier (URI) used to contact the service: the DNS name in both must match. For example, if the URI of the service is "net.tcp://www.DynamicsNAV.com/ServerInstance/", then the Subject field must also contain the value "www.DynamicsNAV.com".
Note that the field can contain several values, each prefixed with an initialism indicating the value type. Most commonly, the initialism is "CN" for common name, for example, "CN = www.DynamicsNAV.com". It is also possible for the Subject field to be blank, in which case the Subject Alternative Name field can contain the DNS Name value.
The RoleTailored client has a DNS name override setting (DnsIdentity) for when the names do not match. In this case the name on the certificate needs to match the name in the configuration file. The connection and encryption process is the same as with a certificate that was issued by a certificate authority.
As a follow-up to the previous blog post, we looked a bit deeper into the draft XBRL specifications for Germany and came across a couple of additional interesting findings.
After importing the taxonomy, you may get some lines with "Unexpected type: xxxxx".
When the Description field says "Unexpected type: xxxxx", it just means that built-in support for interpreting the custom "xxxxx" type does not exist, e.g. languageItemType. When a new taxonomy is created, the creator is free to define their own types. When we created the current version of the XBRL support in NAV, we looked at the types that seemed to be commonly used at that time. Now, different taxonomies also have other types defined, e.g. Booleans. Since everything is reported as text, unknown types can always be reported back as text, and the user can write e.g. 'true' or 'false' for Booleans. What needs to be done is to look in the guide for the specific XBRL taxonomy to find out how to report e.g. "Ursprungssprache des Dokuments" (original language of the document); the user would probably write something like "German" or "Deutsch".
From a usability perspective, maybe we should not write "Unexpected..." but just show the field as type Description. Originally we thought it would be helpful for the user to know the name of the expected type.
Also, please have a look at the following KB article prepared by our support team.
Consider the following scenario: an "Order" item with a sales order reserved against stock and a sales order shipment date in the frozen zone. Under these circumstances, we need to consider that this scenario should occur only on rare, exceptional occasions:
· Having an "Order" item with stock should be considered an exception
· Having a sales order with a shipment date before the starting date should also be considered an exception
What does NAV do here? It flags this as an exception. As mentioned before, this is an exception from a business perspective, not merely something that NAV considers an exception. NOTE: When an exception is raised in NAV, the concern should not be so much which action message is actually suggested, but rather that an exception exists, and the planner should decide how to address it.
With Lot-for-Lot items, the reorder cycle needs to be considered when a change in the demand profile arises. As an example, we have a lot-for-lot item for which a replenishment exists that is consistent with the demand profile. Now, the user changes a demand shipment date to an earlier date, and the assumption is that the new shipment date is within the reorder cycle. The result is that NAV suggests cancelling the existing replenishment and creating a new one. Why?
As a general rule, planning should respect the reorder cycle and reschedule the existing replenishment. However, what if an exception exists? An example is when the change in the shipment date impacts safety stock availability (stock goes below safety stock). In this case, this is an exception as well. As mentioned in an earlier post, NAV raises this as an exception so the user can decide how to address it.