Since the introduction of Streams in NAV, there has been confusion about when to use READ and when to use READTEXT; when to use WRITE and when to use WRITETEXT.
As documented, InStream.READ and OutStream.WRITE work in binary format, which means that Text strings are zero-terminated when written, and a zero terminator is expected when you use READ. READTEXT, however, does not expect a zero-terminated text, although it can accept one.
This last feature makes the streams asymmetric: you can write binary formats that differ from what you can read.
Take the following code snippet:
If you examine the file that gets created (in a binary editor), you will notice that the Texts are in fact zero-terminated, but READTEXT is still able to read them individually.
Now, this is where the fun begins. We can change the READTEXT to READ (as it should have been) and everything will continue to work exactly the same... almost. You will notice that readCount (when using InS.READ) now includes the zero terminator, whereas before the count covered only the characters in the text. In fact, if you put a carriage return in the WRITE (character 13), you will notice that the zero terminator is included in the count (because that was the binary format of the Text).
So, count in count := OutS.WRITE('My text.'); will be 9 (8 characters plus a zero terminator), whereas count := OutS.WRITETEXT('My text.'); will be 8. The decision to return the number of bytes written was made precisely so that people would know how many bytes are sent through the Stream.
Another thing to take into consideration when working with Streams is CODE types. When you write CODE types using OutStream.WRITE, they are NOT written in NAV binary format, so they are not symmetric with the FILE.WRITE function; however, InStream.READ(CodeType) will accept data written either with OutStream.WRITE(CodeType) or with FILE.WRITE(CodeType).
Another possibility is to write data using OutStream.WRITETEXT(Text) and read it via InStream.READ(Text). Just take into consideration that with OutStream.WRITE(Text) the Text will be zero-terminated, while with OutStream.WRITETEXT(Text) it will not.
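The byte counts described above can be illustrated outside NAV. The following Python sketch (the helper names are my own, not NAV APIs) mimics the two write behaviours at the byte level:

```python
import io

# Mimic OutStream.WRITE for Text (zero-terminated, NAV binary format)
# versus OutStream.WRITETEXT (raw characters only). Illustrative helpers,
# not NAV functions; they only demonstrate the byte counts.
def write_binary_text(stream, text):
    data = text.encode('ascii') + b'\x00'   # zero terminator included
    stream.write(data)
    return len(data)

def write_text(stream, text):
    data = text.encode('ascii')             # no terminator
    stream.write(data)
    return len(data)

buf = io.BytesIO()
print(write_binary_text(buf, 'My text.'))   # 9: 8 characters + terminator
print(write_text(buf, 'My text.'))          # 8: characters only
```

Inspecting the buffer afterwards shows exactly the asymmetry described: the first string carries a trailing zero byte, the second does not.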
Jorge Alberto Torres (jtorres), Software Development Engineer
One of the queries I use the most is the plan-cache query from this post:
Simple query to check the recent performance history
The good thing about the query is that it shows information that would otherwise require a lot of work (collecting and analysing trace files). The query does not need any advance work or setup; it can simply be run. So it is a very easy way to get information from a system, which is often useful as a first step in troubleshooting performance. For more details about the result of the query, refer to the post linked above.
Below is a slightly enhanced version of the query. Since the query is based on the cache of compiled query plans, it is not a big step to extend it to also include the query plan itself, and even extract certain information from the plan if you know what you are looking for.
So this query does the same as the original one, but with the following additions:
Here is the updated query:
SELECT TOP 100
    SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset END
            - qs.statement_start_offset)/2) + 1) as statement_text,
    qs.execution_count,
    case
        when qs.execution_count = 0 then null
        else qs.total_logical_reads / qs.execution_count
    end as avg_logical_reads,
    -- Query Plan Information
    case when
        ph.query_plan.exist('declare namespace ns="http://schemas.microsoft.com/sqlserver/2004/07/showplan";(/ns:ShowPlanXML/ns:BatchSequence/ns:Batch/ns:Statements/ns:StmtCursor/ns:CursorPlan/@CursorRequestedType)') = 0
    then '' else
        ph.query_plan.value('declare namespace ns="http://schemas.microsoft.com/sqlserver/2004/07/showplan";(/ns:ShowPlanXML/ns:BatchSequence/ns:Batch/ns:Statements/ns:StmtCursor/ns:CursorPlan/@CursorRequestedType)[1]','nvarchar(max)')
    end as cursor_type
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) ph
ORDER BY qs.total_logical_reads DESC
What I would really like is to receive feedback on which parts of the query plans are useful, and then extend the query even further to include as much useful information as possible. For example, in some cases the query plan contains missing-index information. The lines below can be copied into the query above to include this information. Any feedback on whether this is useful, and whether other information from the query plans would be useful, is very welcome. You can add comments about this below.
-- Missing Indexes
Lars Lohndorf-Larsen (Lohndorf), Escalation Engineer
I must say that since I started working in the integration area of NAV (from within NAV), I have been very interested in the possibilities it brings to NAV developers. Back then, I presented a Web Server for the NAS using the Socket ComCom, and since then I have seen how people have used the streams, and specifically the MSMQ Bus Adapter, for integrating services and data.
When writing the code for the STREAMS (InStream and OutStream), I thought it would be very useful to write a function that could copy from one to the other, reading from the InStream and writing to the OutStream. However, when using Sockets, there is an essential problem in completing this task. Since TCP data can contain anything (literally any data), and since the data size is also unknown and unlimited, how can we know when the copying has been completed?
Recently I was dealing with a problem that I found in one of the community forums (yes, we are reading), regarding XMLPorts. You need to stream data to and from them, so I quickly wrote a codeunit that would let me read the XMLPort into a file; for that, I used the COPYSTREAM function to copy from the XMLPort to a FILE. A couple of minutes later, I found that my file was not filled out correctly: it was truncated. I immediately went into the code to debug the problem further, and quickly found that COPYSTREAM was not reading until the end of the stream (EOS). I was puzzled to find an error in the COPYSTREAM code after all these years, but of course I thought that the call to EOS was simply missing. Once I saw that the call was in fact there, it finally hit me: InStream.EOS is not working, or rather, it cannot really work.
The real problem here is: how do we know when a stream will not send us more data? Usually this is handled by a protocol (like HTTP, FTP, etc.), but without a protocol it is not possible to have a clear rule for when we have reached EOS. Is it a NULL terminator? Is it 2 x CR+LF? Or what about a dot on an empty line? The truth is that we could make rules, but we cannot have a generic answer for all possibilities.
Depending on your specific conditions, and especially the size of the data you are sending, you might not see this problem at all. But maybe, from time to time, you have noticed that received data gets truncated and you are trying to figure out what went wrong.
To further illustrate the problem, let's receive a file using Sockets and save it to a local file (I'll skip the sending codeunit, but it can easily be built following the samples provided in the Development Guide for Communication Components):
In the environment where I am writing this, I can send files without any issue up to approximately 25 KB. After that, everything seems to work fine, except that the file gets truncated; that is, I am trying to send a 48 KB file, but I get a truncated one.
In order to get rid of this problem, I suggest copying the stream one character at a time, so the code could look something like this:
The retry is necessary because we cannot rely on InS.EOS; it suffers from the same problem of not knowing when the stream will stop sending data.
A nicer way to deal with this would be to first send the size of the data. That way, we don't have to rely on time to assume that the transmission has completed. If you are sending XML documents, you could also use the DOMDocument.load(InS) return value to check whether the received data contains a valid XML document. If you have access to codeunit 7700, you can see an example of how this can be achieved (as you need to copy the stream first).
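The send-the-size-first idea can be sketched outside NAV. Here is a minimal Python illustration (the function names are my own) of length-prefixed framing, where the receiver knows exactly how many bytes to expect and never has to guess at an EOS marker:

```python
import socket
import struct

# Length-prefixed framing: the sender transmits a 4-byte big-endian size,
# then the payload. The receiver reads the size, then exactly that many bytes.
def send_framed(sock, payload):
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exactly(sock, n):
    # TCP recv may return fewer bytes than requested, so loop until done.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError('stream closed before all data arrived')
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

def recv_framed(sock):
    (size,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, size)
```

With this framing, truncation becomes detectable: if the connection closes before the announced size has arrived, the receiver raises an error instead of silently writing a partial file.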
Of course, as mentioned before, this issue will only happen with the raw SocketBusAdapter.
One of the changes in Microsoft Dynamics NAV version 5 was the switch from primarily using Fast-Forward cursor types to Dynamic cursors. The same change was implemented in version 4 from build 26410, which is Version 4, SP3 Update 6 (940718) + KB950920.
The change of cursor type can also mean a change in behaviour. With Dynamic cursors, SQL Server optimises for the ORDER BY part of a SQL query more often than it does with Fast-Forward cursors. This is because a result set based on a Dynamic cursor has to include new rows. If SQL Server were to choose an index that fits the WHERE clause, it would have to sort all rows according to the ORDER BY before returning the first row to the client, and that, by definition, is a STATIC result set.
Take this query as an example:
SELECT * FROM "Demo Database NAV (6-0)"."dbo"."CRONUS International Ltd_$Item Ledger Entry"
WHERE (("Document No_"=@P1))
ORDER BY "Item No_","Posting Date","Entry No_"
With Fast-Forward cursors, in this example SQL Server is likely to optimise for the WHERE clause, which is "Document No.". But with the ORDER BY clause specifying a different sorting, SQL Server may choose a clustered index scan instead.
With Dynamic cursors, SQL Server is more likely to optimise for the ORDER BY clause, which matches a valid, existing index. So in this example SQL Server would choose an index scan on this index. You can see this by running the query from SQL Server Management Studio like this:
--declare @p3 int set @p3=16+4096+8192 -- Fast Forward
--declare @p5 int set @p5=15 -- FAST 15
SELECT * FROM "Demo Database NAV (6-0)"."dbo"."CRONUS International Ltd_$Item Ledger Entry"
WHERE (("Document No_"=@P1))
You can enable/disable the 2nd and 3rd lines to switch between a Fast-Forward and a Dynamic cursor, and see the result in the query plan that gets generated.
How does this affect NAV?
The change in behaviour can mean that certain queries that ran without problems may be slow after a customer upgrades the platform to the builds mentioned above. In the cases we have seen, the problem has been limited to one or a very few specific queries, typically on filtered forms, that were slow after upgrading the platform. Use the query from here:
Simple query to check the recent performance history II - now including Query Plan information
to help identify which queries to troubleshoot. Note that the query shows the cursor type in the rightmost column. Then look at whether SQL Server has an index to match the ORDER BY clause.
Also, be extra careful when using the SQLIndex property on keys. By setting this property, the ORDER BY clause may no longer match a SQL index, and a Dynamic cursor will have to scan the table.
Why this change?
Internal performance tests show that, overall, Dynamic cursors give better performance and fewer blocks. So while the change may cause unforeseen changes in behaviour when a customer upgrades, overall we have seen better performance with Dynamic cursors.
Recently I was working on a very interesting case involving Automation and the C/AL runtime. It concerns the number of decimals that NAV supports, which can give the sensation that rounding is done incorrectly.
First, I would like to bring your attention to Table 370 (Excel Buffer). This table provides a very simple way to read and write Excel files. It works pretty neatly, except when you start dealing with some “special numbers”.
The issue, really, is that decimals transported via COM are encoded as VT_R8, which gives us 64 bits to express all decimal values (according to MSDN, VT_R8 is an IEEE 8-byte real). This basically means they cannot express extremely precise values.
Now, exactly how precise is precise? Let’s work with 17.7.
Well, don’t think that NAV, or for that matter COM, cannot express this “simple” value; the problem is really the way decimals are encoded.
Let me first show you the issue, as it is pretty simple to reproduce.
First, create an Excel file using Microsoft Excel. On cell A1 of a sheet, type “17.7”. Now, this by itself is tricky, because what you need to type depends on your locale. What I mean is seventeen and 7/10 (you might have to use a comma instead of a dot).
Save the newly created file somewhere you can remember.
Now create a codeunit with one global variable of type Record, subtype “Excel Buffer”.
Then open your file, refer to your sheet, and read it, just like this:
It is time to see the inserted value; for that, go to Table 370. You will see that the cell value is in fact “17.699999999999999” (if you see 17.7, it is because your NAV version is not as accurate as the one I am using, or the codeunit has already been modified to round).
You must be thinking now that I am trying to sell you a platform bug as a feature, but this is not the case. The problem, really, is that recent NAV builds recognize more decimals (up to 18), and an 8-byte IEEE value is not that precise. To illustrate this, look at the following table:
64 bit Hexadecimal (memory representation):
Here it is clear that the hexadecimal representation “4031B33333333333” refers not only to 17.7 but also to some “neighboring” values (actually 4 values), so the COM algorithms seem to choose the most likely one: the middle, which is in fact a very close approximation.
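You can verify this encoding outside NAV; in Python, for instance, the same IEEE 754 double representation is easy to inspect:

```python
import struct

# Encode 17.7 as an IEEE 754 double (8 bytes, what COM's VT_R8 carries)
# and inspect the stored bit pattern and the nearest representable value.
bits = struct.pack('>d', 17.7).hex().upper()
print(bits)                    # 4031B33333333333
print(format(17.7, '.17g'))    # 17.699999999999999 (nearest double to 17.7)
print(round(17.7, 4))          # rounding recovers the value you expected
```

The full-precision print shows exactly the “17.699999999999999” that appears in the Excel Buffer, and rounding to 4 decimal places gives back the value the user typed.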
The reason decimal accuracy was increased in NAV was precisely to improve calculations when dealing with either very large or very precise numbers; the consequence is that a 64-bit representation cannot hold that many decimals.
Fortunately, to prevent this from happening, you can round the values to be less precise, which in most cases will give you the values you were in fact expecting. For decimals, you could round to 4 decimal places, using variants to find out whether you are dealing with a decimal number or not:
"Cell Value" := XlWrkSht.Range(xlColID + xlRowID).Value;
IF "Cell Value".ISDECIMAL THEN BEGIN
  "Cell Decimal Value" := "Cell Value";
  "Cell Decimal Value" := ROUND("Cell Decimal Value", 0.0001);
  "Cell Value as Text" := FORMAT("Cell Decimal Value");
END ELSE
  "Cell Value as Text" := DELCHR(FORMAT(XlWrkSht.Range(xlColID + xlRowID).Value),'<',' ');
Hopefully this clarifies why you can get these approximations, and how to deal with them in C/AL.
We are glad to announce that compatibility between Business Notification and SQL Server 2008 has now been verified and confirmed. Business Notification can run on SQL Server 2008 provided that the following two components are also installed:
- SQL Server Notification Services 2005
- SQL Server 2005 Service Pack 3.
Notification Services is no longer part of SQL Server 2008. What SQL Server 2005 Service Pack 3 does is allow Notification Services 2005 to run against the SQL Server 2008 database engine. For further information, please also refer to "What's New in SQL Server 2005 SP3":
While referring to the Business Notification Installation Guide (https://mbs.microsoft.com/customersource/documentation/setupguides/installationmanualsnav2009.htm ) as your main source of information, below are the steps that you need to include in the installation process in order to use SQL Server 2008.
1. Install SQL Server 2008.
2. Install SQL Server 2005 Notification Services only. When the SQL Server 2005 setup is started, a list of installation components is available. Select Notification Services only and complete the installation.
3. Apply SQL Server 2005 Service Pack 3. This can be downloaded from here:
Continue with the steps described in the Business Notification Installation Guide.
A common installation scenario is to have the NAV 2009 components installed on three different machines: one for the database hosted by SQL Server, one for the Microsoft Dynamics NAV Server (middle tier server), plus the corresponding client machines. This installation is explained in the MSDN topic “How To: Set Up Delegation”. However, what I wanted to talk about is the difference in SPN naming between 6.0 RTM and 6.0 SP1 (or hotfixes with build 28795 or higher).
The names used before were in the format “NAVSERV_DynamicsNAV/NAVSERV.yourDomain.yourCompany.com:7046”; the new naming conventions are:
Both names should be registered, to minimize possible issues.
This change should simplify the three-tier installation in relatively complex networks, for instance when the network contains multiple A or CNAME records.
We have released the source files for Dynamics NAV 2009 SP1 Help to PartnerSource. This release includes 8 different languages across 16 countries/regions, approximately 1 million files, and about 2.5 GB of content. If it were in book format, that would stack up to about 10 meters of books. That's a lot of help for a lot of users!
With these source files and the previously released Help Toolkit for SP1 (also on PartnerSource), you can modify the Help files to match customizations that you create. Go ahead, add to the stack of content and customize the Help that you provide to your users!
And as always, let us know what you think about the Help, the customization process, the tools, what's missing, and what would be more helpful.
The Microsoft Dynamics NAV sustained engineering team has released the following service packs and platform updates (PartnerSource login is required):
Microsoft Dynamics NAV 3.70 Releases:
- Microsoft Dynamics NAV 3.70.A (SP1)
- Microsoft Dynamics NAV 3.70.B (SP2)
1. Platform Update for Microsoft Dynamics NAV 3.70.B KB890640
Microsoft Dynamics NAV 4.0 Releases:
- Microsoft Dynamics NAV 4.0
1. Platform Update for Microsoft Dynamics NAV 4.0 KB890551
2. Platform Update for Microsoft Dynamics NAV 4.0 KB908918
- Microsoft Dynamics NAV 4.0 SP1
1. Platform Update for Microsoft Dynamics NAV 4.0 SP1 KB913523
2. NODBC Update for Microsoft Dynamics NAV 4.0 SP1 KB912791
3. Platform Update for Microsoft Dynamics NAV 4.0 SP1 KB915455
- Microsoft Dynamics NAV 4.0 SP2
1. Platform Update for Microsoft Dynamics NAV 4.0 SP2 KB919407
2. NODBC Update for Microsoft Dynamics NAV 4.0 SP2 KB921893
3. Platform Update for Microsoft Dynamics NAV 4.0 SP2 KB922695
4. NODBC Update for Microsoft Dynamics NAV 4.0 SP2 KB927113
- Microsoft Dynamics NAV 4.0 SP3
1. Microsoft Dynamics NAV 4.0 SP 3 platform update rollup KB931841
2. Microsoft Dynamics NAV 4.0 SP3 Platform Update KB933727 (Installation of KB931841 is a prerequisite)
3. Microsoft Dynamics NAV 4.0 SP3 Platform Update KB936602 (Installation of KB931841 is a prerequisite)
4. Microsoft Dynamics NAV 4.0 SP3 Platform Update KB938138 (Installation of KB931841 and KB936602 is a prerequisite)
5. Microsoft Dynamics NAV 4.0 SP3 Platform Update KB940718 (Installation of Microsoft Dynamics NAV 4.0 SP3 is the only prerequisite)
- Microsoft Dynamics NAV 5.0
1. Platform Roll-Up Update for Microsoft Dynamics NAV 5.0 KB943858
- Microsoft Dynamics NAV 5.0 SP1
This blog will be updated as new platform updates for Microsoft Dynamics NAV are released.
Martin Nielander, Program Manager
Since the release of Microsoft Dynamics NAV 5.0, we have released the following knowledge base (KB) articles (PartnerSource login credentials are required):
1. KB938398 – XBRL
2. KB937512 – Overdue Amounts (LCY)
3. KB937722 – Accounting Period New Company
4. KB936064 – Outlook Integration
5. KB936535 – Web Links
6. KB939253 – Bank Reconciliation
7. KB938272 – Sales Person Code
8. KB936538 – Production Order
9. KB937256 – Prepayment
10. KB937995 – Sales Order
11. KB937737 – Sales Invoice
12. KB939257 – Prepayment
13. KB939259 – Payment Check
14. KB938651 – Warehouse Activity Lines
15. KB939831 – Un Apply
This blog will be updated as new KB articles for the Microsoft Dynamics NAV 5.0 C/AL application are released.
Note: A developer license is required to change C/AL code in Microsoft Dynamics NAV. It is highly recommended to have the changes evaluated and implemented by a Microsoft Dynamics NAV partner.
Since the release of Microsoft Dynamics NAV 4.0 SP3, we have released the following knowledge base (KB) articles (PartnerSource login credentials are required):
1. KB940206 – Apply
2. KB934132 – Bank Account
3. KB924268 – Blanket Orders
4. KB926349 – Blanket Orders
5. KB932268 – BOM
6. KB933009 – BOM
7. KB939711 – Business Analytics
8. KB929696 – Business Notification
9. KB927354 – Costing
10. KB935713 – Credit Memo
11. KB930384 – Drop Shipment
12. KB934029 – Exchange Rate
13. KB930317 – Export to Excel
14. KB935724 – General Journal
15. KB930322 – Import from Excel
16. KB930388 – Intercompany
17. KB934156 – Item Journal
18. KB923670 – Item Tracking
19. KB934145 – Item Tracking
20. KB934568 – Item Tracking
21. KB933695 – Order Planning
22. KB937583 – Order Planning
23. KB937512 – Overdue Payments
24. KB936466 – Partial Payments
25. KB932272 – Payment Discount
26. KB937506 – Payment Discount
27. KB937183 – Payments
28. KB937507 – Payments
31. KB936561 – Planning Worksheet
32. KB939379 – Post Code
33. KB929931 – Production Orders
34. KB934129 – Production Orders
35. KB936538 – Production Orders
36. KB937582 – Production Orders
37. KB939384 – Production Orders
38. KB926250 – Production Schedule
39. KB934127 – Purchase Invoice
40. KB928389 – Purchase Order
41. KB930124 – Purchase Order
42. KB931516 – Reminders
43. KB932261 – Reports
44. KB935717 – Reports
45. KB937027 – Reports
46. KB939000 – Reports
47. KB939378 – Reports
48. KB939837 – Reports
49. KB933693 – Req. Worksheet
50. KB935723 – Req. Worksheet
51. KB936467 – Req. Worksheet
52. KB939261 – Req. Worksheet
53. KB921079 – Sales Order
54. KB930944 – Un-Apply
55. KB937025 – Un-Apply
56. KB940154 – Un-Apply
57. KB933131 – UPGTK
58. KB936485 – UPGTK
59. KB930391 – VAT
60. KB930591 – VAT
61. KB934150 – VAT
62. KB937022 – VAT
63. KB937268 – VAT
64. KB929348 – Warehouse
65. KB929958 – Warehouse
66. KB930125 – Warehouse
67. KB932953 – Warehouse
68. KB933670 – Warehouse
69. KB933677 – Warehouse
70. KB933696 – Warehouse
71. KB934149 – Warehouse
72. KB934512 – Warehouse
73. KB936088 – Warehouse
This blog will be updated as new KB articles for the Microsoft Dynamics NAV 4.0 SP3 C/AL application are released.
On SQL Server, you can use index hinting to force the server to use a particular index when executing queries for FINDFIRST, FINDLAST, FINDSET, FIND('-'), FIND('+'), FIND('=') and GET statements. Queries generated by the form runtime will be affected by index hints in the same way.
Index hinting can help avoid situations where SQL Server’s Query Optimizer chooses an index access method that requires many page reads and generates long-running queries with response times that vary from seconds to several minutes. Directing SQL Server to use a specific index can give instant 'correct' query executions with response times of milliseconds.
In Microsoft Dynamics NAV, index hinting is turned on by default and the application automatically uses this functionality to improve performance.
If you need to switch off or customize index hinting to fit your implementation, you must create a SQL Server table to store the configuration parameters. The parameters you enter into this table will determine some of the behavior of Microsoft Dynamics NAV when it is using this database.
In the database create a table, owned by dbo:
CREATE TABLE [$ndo$dbconfig] (config VARCHAR(512) NOT NULL)
GRANT SELECT ON [$ndo$dbconfig] TO public
(You can add additional columns to this table, if necessary. The length of the config column should be large enough to contain the necessary configuration values, as explained in the following, but need not be 512.)
The default value is IndexHint=Yes.
You can disable index hinting at any level of granularity.
There are two ways of using index hinting in your application:
· You can leave index hinting turned on and disable it in specific places.
· You can turn off index hinting and enable it in specific places.
Index hinting has been shown to optimize performance in the following scenarios:
1. Index hints prevent SQL Server from using an out of date query plan, such as a clustered index scan.
2. Index hints prevent SQL Server from scanning smaller tables and escalating locks to table locks.
SETCURRENTKEY must correspond to the filter that you want to place on the table.
GLEntry.SETCURRENTKEY(GLEntry.AccountNo); // Should be added to the code if not already present
GLEntry.SETRANGE(GLEntry.AccountNo,'1000','9999');
GLEntry.FINDSET();
Example: In the following C/AL code, index hinting is turned on but SETCURRENTKEY has not been used:
GLEntry.SETRANGE("G/L Account No.",'2910');
GLEntry.FINDSET;
This will generate the following SQL query:
SELECT TOP 500 * FROM "W1403"."dbo"."CRONUS International Ltd_$G_L Entry" WITH (READUNCOMMITTED, INDEX("CRONUS International Ltd_$G_L Entry$0")) WHERE (("G_L Account No_"=@P1)) ORDER BY "Entry No_" ','2910'
Note that without a SETCURRENTKEY, Microsoft Dynamics NAV will hint the SQL index that corresponds to the primary key of the G/L Entry table. This is not the best index to use for this query.
Conversely, in the following C/AL code, hinting is turned on and SETCURRENTKEY has been used:
GLEntry.SETCURRENTKEY("G/L Account No.");
GLEntry.SETRANGE("G/L Account No.",'2910');
GLEntry.FINDSET;
SELECT TOP 500 * FROM "W1403"."dbo"."CRONUS International Ltd_$G_L Entry" WITH (READUNCOMMITTED, INDEX("$1")) WHERE (("G_L Account No_"=@P1)) ORDER BY "G_L Account No_","Posting Date","Entry No_" ','2910'
Now, because the C/AL code specifies which key to use, Microsoft Dynamics NAV hints the corresponding index, which ensures that the right index is always used.
If you turn index hinting off, Microsoft SQL Server will choose the index automatically.
Index hinting is turned on by default but you can disable index hinting for a specific company, table or index.
Here are a few examples that illustrate how to disable index hinting by executing a statement in query analyzer:
INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=No;Company="CRONUS International Ltd."')

INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=No;Company="CRONUS International Ltd.";Table="Sales Header"')

INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=No;Company="CRONUS International Ltd.";Table="Sales Header";Index="1"')
As mentioned earlier, you can disable index hinting for the entire system and then enable it where appropriate for your application.
The index hint syntax is:
IndexHint=<Yes,No>;Company=<company name>;Table=<table name>;Key=<keyfield1,keyfield2,...>; Search Method=<search method list>;Index=<index id>
Each parameter keyword can be localized in the "Driver configuration parameters" section of the .stx file.
The guidelines for interpreting the index hint are:
• If a given keyword value cannot be matched the entry is ignored.
• The values for the company, table, key fields and search method must be surrounded by double-quotes to delimit names that contain spaces, commas etc.
• The table name corresponds to the name supplied in the Object Designer (not the Caption name).
• The Key must contain all the key fields that match the required key in the Keys window in the Table Designer.
• Search Method contains a list of search methods used in FIND statements:
• The index ID corresponds to a SQL Server index for the table: 0 represents the primary key; all other IDs follow the number included in the index name for all the secondary keys. Use the SQL Server command sp_helpindex to get information about the index ID associated with indexes on a given table. In this example we are looking for index information about the Item Ledger Entry table:
sp_helpindex 'CRONUS International Ltd_$Item Ledger Entry'
When Dynamics NAV executes a query, it checks whether the query is for the company, table, current key, and search method listed in one of the IndexHint entries. If it is, it hints the index for the supplied index ID in that entry.
• If the company is not supplied, the entry will match all the companies.
• If the search method is not supplied, the entry will match all the search methods.
• If the index ID is not supplied, the index hinted is the one that corresponds to the supplied key. This is probably the desired behavior in most cases.
• If the company/table/fields are renamed or the table's keys redesigned, the IndexHint entries must be modified manually.
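As a rough illustration of the matching rules above (a Python sketch, not NAV source code), an empty Company or Search Method acts as a wildcard, and a missing Index falls back to the index corresponding to the supplied key:

```python
# Sketch of IndexHint entry matching. Entries are dictionaries with the
# keywords from the hint syntax; this is illustrative only, not NAV's code.
def match_index_hint(entries, company, table, key, method):
    for e in entries:
        if e.get('Company') and e['Company'] != company:
            continue                          # entry is for another company
        if e.get('Table') != table or e.get('Key') != key:
            continue                          # entry is for another table/key
        if e.get('SearchMethod') and method not in e['SearchMethod']:
            continue                          # search method not listed
        if e.get('IndexHint') != 'Yes':
            return None                       # a matching No entry disables hinting
        return e.get('Index', key)            # no Index supplied: hint the key's index
    return None

entries = [{'IndexHint': 'Yes', 'Company': 'CRONUS International Ltd.',
            'Table': 'Item Ledger Entry', 'Key': 'Item No.,Variant Code',
            'SearchMethod': '-+', 'Index': '3'}]
print(match_index_hint(entries, 'CRONUS International Ltd.',
                       'Item Ledger Entry', 'Item No.,Variant Code', '-'))  # 3
```

The examples that follow show the same matching behaviour expressed as actual [$ndo$dbconfig] entries.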
Here are a few examples that illustrate how to add an index hint to the table by executing a statement in Query Analyzer:
INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=Yes;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=3')
This hint will use the $3 index of the CRONUS International Ltd_$Item Ledger Entry table for FIND('-') and FIND('+') statements when the Item No.,Variant Code key is set as the current key for the Item Ledger Entry table in the CRONUS International Ltd. company.
INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=No;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=3')

The index hint entry is disabled.

INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=Yes;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+"')

This will hint the use of the Item No.,Variant Code index of the CRONUS International Ltd_$Item Ledger Entry table for FIND('-') and FIND('+') statements when the Item No.,Variant Code key is set as the current key for the Item Ledger Entry table in the CRONUS International Ltd. company.

This is probably the way that the index-hinting feature is most commonly used.
INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=Yes;Company=;Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method="-+";Index=3')
This will hint the use of the $3 index of the CRONUS International Ltd_$Item Ledger Entry table for FIND('-') and FIND('+') statements when the Item No.,Variant Code key is set as the current key for the Item Ledger Entry table for all the companies (including a non-company table with this name) in the database.
INSERT INTO [$ndo$dbconfig] VALUES
('IndexHint=Yes;Company="CRONUS International Ltd.";Table="Item Ledger Entry";Key="Item No.","Variant Code";Search Method=;Index=3')
This will hint the use of the $3 index of the CRONUS International Ltd_$Item Ledger Entry table for every search method when the Item No.,Variant Code key is set as the current key for the Item Ledger Entry table in the CRONUS International Ltd. company.
I have seen a number of (good) arguments that some issues with Dynamics NAV on SQL2005 are caused by a bad query optimizer in SQL2005, and that the SQL teams need to fix this. After all, some issues we see on SQL2005 did not exist in SQL2000. This post argues that SQL2005 may in fact be more clever than SQL2000, and tries to give some input into the discussion of whether SQL2000 is more clever than SQL2005, and whether there is anything to fix in SQL2005 regarding this specific issue.
The scenario below is a situation where the reuse of a cached query plan causes a Dynamics NAV client to hang while browsing forms.
Here is a repro scenario which shows why Dynamics NAV ends up causing a clustered index scan on SQL2005, while the same scenario on SQL2000 did not cause any such scans. It is based on a W1 5.00 demo database and requires a Solution Developer's license to run. Run the steps on both a SQL2000 and a SQL2005 database and you will see where the differences are between these two platforms:
1. Create a new G/L Account, No. 1105.
2. Create 50,000 new records in table 17. For this purpose, it doesn't matter whether you post these entries or just create a codeunit to insert the records.
3. Run table 17 from Object Designer, and change the G/L Account No. to 1105 for the first 3 and the last 3 entries.
4. On SQL Server, update statistics on this table: update statistics [CRONUS International Ltd_$G_L Entry]
5. Run Dynamics NAV with maximized forms.
6. In Dynamics NAV, go to "Chart of Accounts" and drill down on the new account 1105; you should see 6 entries. Make sure to place the cursor on the first entry. Then close the drill-down to go back to the "Chart of Accounts".
7. On SQL Server, run DBCC FREEPROCCACHE. This will clear out any cached query plans.
8. Start a profiler trace and include the following events (on top of the default ones): on SQL2005, Performance: Showplan Text; on SQL2000, Performance: Execution Plan; plus Stored Procedures: SP:CacheHit and Stored Procedures: SP:CacheInsert.
9. In Navision, drill down on account 1105. Then move the cursor with arrow-down until you get to the last entry. Then move back up to the top again with arrow-up.
10. Stop the profiler trace.
On SQL2005, you should see one of the last entries causing a relatively large number of reads (2,079 reads in my tests). This is the offending query. The same query on SQL2000 causes far fewer reads (126 in my tests).
The query looks like this:

SELECT * FROM "W1500"."dbo"."CRONUS International Ltd_$G_L Entry" WHERE (("G_L Account No_"=@P1)) AND "G_L Account No_"=@P2 AND "Posting Date"=@P3 AND "Entry No_"<@P4 ORDER BY "G_L Account No_" DESC,"Posting Date" DESC,"Entry No_" DESC ',@p3 output,@p4 output,@p5 output,N'@P1 varchar(20),@P2 varchar(20),@P3 datetime,@P4 int','1108','1108','2007-12-31 00:00:00:000',52761
Notice that the last parameter value is 52761. So the part of the query to focus on here in fact reads:

WHERE "Entry No_" < 52761
Then take a look at the execution plan. SQL2005 uses the index [CRONUS International Ltd_$G_L Entry$0], which is the clustered index ("Entry No_"). SQL2000 uses the index [CRONUS International Ltd_$G_L Entry].[$1], which is the index that begins with "G_L Account No_". So based on this query it is not strange that SQL2005's plan is causing many more reads than SQL2000's plan.
Here is an important point to make: Neither SQL2000 nor SQL2005 compiled the query plan for this query. You can see from the presence of SP:CacheHit events in the profiler trace that the plan was taken from the plan cache. So in order to find out why the two versions of SQL make different plans, we need to go to the place where the plan was made.
Go to the SP:CacheHit event and look at the data. Then go backwards in the trace until you find the SP:CacheInsert event with the same data. This is the place where the query plan was made. The query in this place looks like this:
SELECT * FROM "W1500"."dbo"."CRONUS International Ltd_$G_L Entry" WHERE (("G_L Account No_"=@P1)) AND "G_L Account No_"=@P2 AND "Posting Date"=@P3 AND "Entry No_"<@P4 ORDER BY "G_L Account No_" DESC,"Posting Date" DESC,"Entry No_" DESC ',@p3 output,@p4 output,@p5 output,N'@P1 varchar(20),@P2 varchar(20),@P3 datetime,@P4 int','1108','1108','2006-12-31 23:59:59:000',1
This time, the last parameter value is 1 (not 52761)! So this time, the part of the query to focus on is:

WHERE "Entry No_" < 1
Remember that "Entry No_" is also the clustered index.
So here is the question: What is the best possible query plan for this query? And I think the answer is easy for this scenario: Use the clustered index to "scan" this one record! The number of Reads in the trace should also confirm this. In my tests, SQL2005 did 21 reads. SQL2000 did 245 Reads.
So in this case, SQL2005 makes a better plan than SQL2000!
The way that query plans are cached and reused has not changed between SQL2000 and 2005. The following points are valid for both versions:
1. When a query plan is designed, SQL will take the parameter values into consideration (in this example, whether the last parameter is 1 or 52761). This is also called parameter sniffing.
2. When a query plan is reused from cache, the parameter values are NOT taken into consideration. The query that the plan is valid for is converted into a hash value. SQL simply looks in the plan cache for a plan for that hash value, and reuses the plan if one exists. If SQL also had to revalidate the plan against the current parameter values, this would to some extent negate the whole purpose of reusing cached plans (performance).
3. SQL's query optimizer does not have any kind of risk assessment when it designs a query plan. There are no mechanisms in place to consider "If I put this plan into cache, and it was reused with other parameters, what is the potential damage?"
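The caching behaviour described in points 1 and 2 can be illustrated with a small sketch. This is plain Python, not SQL Server internals, and the index-selection rule inside compile_plan is invented purely for illustration: the point is that the cache key is a hash of the query text alone, so a reused plan never re-checks the current parameter values.

```python
import hashlib

# Hypothetical plan cache keyed by a hash of the query text only.
plan_cache = {}

def compile_plan(query, params):
    # "Parameter sniffing": the optimizer inspects the parameter values
    # at compile time. A very selective "Entry No_" < 1 favors the
    # clustered index (invented rule, for illustration only).
    entry_no = params[-1]
    index = "clustered ($0)" if entry_no < 100 else "G_L Account No_ ($1)"
    return {"query": query, "index": index}

def get_plan(query, params):
    key = hashlib.sha1(query.encode()).hexdigest()  # hash of the text only
    if key in plan_cache:
        return plan_cache[key]          # cache hit: params NOT considered
    plan = compile_plan(query, params)  # cache miss: params are sniffed
    plan_cache[key] = plan              # analogous to SP:CacheInsert
    return plan

q = 'SELECT * FROM "G_L Entry" WHERE ... AND "Entry No_" < @P4'
first = get_plan(q, ["1108", "1108", "2006-12-31", 1])       # compiled for < 1
second = get_plan(q, ["1108", "1108", "2007-12-31", 52761])  # same plan reused
```

Here the plan compiled for "Entry No_" < 1 is returned unchanged for the later call with 52761, which is exactly the situation in the trace above.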
These behaviours are fundamental to the current and previous versions of SQL, and most likely to future versions as well.
So, for this scenario we can see that:
- When the plan was made, SQL2005 made the most optimized plan.
- The behaviour of caching plans and reusing them is the same on both SQL2000 and SQL2005.
Without going into too many details here about how to troubleshoot a situation like this, there are various ways to handle it. The main methods for Dynamics NAV are:
- Index hints:
In this situation, if the query had included an index hint on the $1 index ("G_L Account No_"), then SQL2005 would not have chosen the clustered index as it did. The behaviour would have been like on SQL2000, and the problem query (2,079 reads) would not have happened. For more details about Index Hinting in Dynamics NAV, check the blog "Index Hinting in Platform Update for Microsoft Dynamics NAV 4.0 SP3 KB940718".
- Recompile hints
Adding a Recompile hint to a query is a way to tell SQL Server to make a new query plan, and not take one from cache. In this way you may get query plans that are better optimized for the current parameter values, but it also adds an overhead to SQL Server because making a new query plan always takes longer than re-using a cached one.
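The trade-off can be sketched by extending the same toy model (again plain Python with an invented cost rule, not actual SQL Server behaviour): with a recompile hint, every execution pays the compile cost, but the plan always matches the current parameter values.

```python
# Hypothetical sketch of the recompile trade-off. The index-selection
# rule is invented for illustration only.
plan_cache = {}
compile_count = 0  # compiling is the overhead the hint adds

def compile_plan(params):
    global compile_count
    compile_count += 1
    return "clustered ($0)" if params[-1] < 100 else "G_L Account No_ ($1)"

def execute(query, params, recompile=False):
    if recompile:
        return compile_plan(params)       # fresh plan every time
    if query not in plan_cache:
        plan_cache[query] = compile_plan(params)
    return plan_cache[query]              # reused plan, right or wrong

q = 'WHERE "Entry No_" < @P4'
cached = [execute(q, [1]), execute(q, [52761])]             # 1 compile, 1 reuse
fresh = [execute(q, [1], True), execute(q, [52761], True)]  # 2 compiles
```

In the cached case the second call silently inherits the plan made for "Entry No_" < 1; in the recompiled case each call gets an appropriate plan, at the cost of one extra compilation.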
When you create an add-on solution, you must make sure that the new objects you create are given unique IDs (UIDs) from the range assigned to the add-on in question. This is done by using the appropriate license. There will be no problem as long as only one add-on solution is included in the developer’s license.
However, if the developer’s license includes permission to create more than one add-on solution, it gets more complicated. In this case, the UIDs must be added manually during development instead of allowing them to be added automatically.
One consequence of not manually applying the UIDs could be that two separate add-on solutions are given the same range of IDs. This will generate merge errors when you try to implement the two add-on solutions in the same Microsoft Dynamics NAV database.
Manually setting the UIDs so that they comply with the appropriate add-on UID range is both a tedious and an error-prone task.
The Microsoft Dynamics NAV platform is not able to identify more than one range of unique IDs in a license.
A function (with a fixed, predefined ID) can be added to codeunit 1. This function returns the base UID for the application, much like the SetGlobalLanguage function does for the application language. The function is called whenever Microsoft Dynamics NAV needs to find the base UID for the application. Starting from that base UID, the function finds the next available UID, which is then used for the new function, text constant etc. in question.
PROCEDURE GetUidOffset@212122() : Integer;
While creating an add-on solution, a developer at a Microsoft Certified Partner can change codeunit 1 (and restart Microsoft Dynamics NAV or re-open the company they’re working in – because of codeunit 1’s Single Instance-like behavior) to reflect the UID range that was assigned to the add-on in question.
Exit(Insert add-on solution ID here);
When the partner adds a new text constant, function etc., the C/AL Editor will start at the inserted add-on solution ID and find the next available number.
The procedure for creating an add-on solution and adding UIDs is:
1. Open Codeunit 1 and add PROCEDURE GetUidOffset@212122() : Integer;. This ensures that you use the appropriate add-on solution ID.
2. Save and compile.
3. Open Company.
4. Start developing ‘Add-on Solution 1’.
5. Save and compile ‘Add-on Solution 1’.
6. Change Codeunit 1 – delete PROCEDURE GetUidOffset@212122() : Integer;.
7. Save and compile Codeunit 1.
You must repeat these steps for each separate Add-on UID range you work with.
- After this workaround has been implemented in codeunit 1, you must open a company in Microsoft Dynamics NAV before you open the C/AL designer. This is because codeunit 1 is not called before you open a company.
- If this workaround is not implemented when you are developing an add-on solution, the UIDs will be assigned according to the old implementation and might cause problems when you merge more than one add-on solution into a Microsoft Dynamics NAV database.