I spent last week performing a Microsoft Dynamics NAV 5.0 Update 1 to Microsoft Dynamics NAV 5.0 SP1 database conversion for a customer. The process went very smoothly and the customer was very excited about the increase in performance we were able to achieve.
Just as an illustration of our success, the customer ships hundreds of packages a day. This process has been very painful in the past. When the conversion was completed and a little tuning was done, we were able to achieve the following improvements …
· Create a Shipment --> Reduced by 27%
· Create a Pick --> Reduced by 15%
· Register a Pick --> Reduced by 54%
· Post a Shipment --> Reduced by 53%
Now, I mentioned a little tuning was necessary so let me elaborate on that. SP1 seems to highlight poor filtering used for the FlowFields. The tuning that I had to do amounted to determining which CALCSUMS were not performing well. I did this by using the Client Monitor in the Performance Toolkit. I was then able to examine the filters that were applied and determine what a more appropriate key would be. I added the additional key to the table along with the appropriate SumIndexField(s). This made all the difference for this customer.
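To illustrate what that tuning looks like (this is a hypothetical C/AL sketch, not the customer's actual code or keys): a slow FlowField ultimately runs a CALCSUMS like the one below, and the fix is a key whose fields match the applied filters, with the summed field as a SumIndexField.

```
// Hypothetical C/AL sketch of the CALCSUMS behind a FlowField.
// The filters applied here come from the FlowField's CalcFormula:
GLEntry.SETRANGE("G/L Account No.", '1105');
GLEntry.SETRANGE("Posting Date", 0D, WORKDATE);
GLEntry.CALCSUMS(Amount);
// This performs well only if the G/L Entry table has a key such as
//   "G/L Account No.","Posting Date"
// with SumIndexFields = Amount, so SQL Server can read the SIFT
// totals instead of scanning and summing the ledger rows.
```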
This customer was also plagued by a large number of locking/blocking issues. After applying SP1, with the Bulk Inserts and Indexed Views, the locking/blocking issues became almost non-existent.
Some other pieces of information that might be helpful:
· Log file size – during the conversion, the log file did NOT grow at all.
· Database Size – when the conversion was done, this particular customer was able to get back approximately 10 GB of free space in the database out of a 65 GB database.
· Conversion Time – it took between 40 and 60 seconds per GB, depending on the hardware used (for this 65 GB database, roughly 45 to 65 minutes).
· Indexed Views – do NOT add any additional indexes to the Indexed Views. When you design the table that an Indexed View is associated with, the view will be dropped and recreated, and any additional indexes you might have added will be lost.
Robert Miller (rmiller)
One of the major changes in Microsoft Dynamics NAV 5.0 SP1 (in relation to performance on SQL Server) is a new way of sending queries to SQL Server. In previous versions of NAV, we sometimes saw SQL Server 2005 optimizing query plans for extreme parameter values, which, when re-used from the cache for queries with other parameter values, could cause long response times. This behaviour is described in more detail in KB 935395 on PartnerSource (login required). Some of the updates for NAV version 4 introduced features to give better control over the query plans that SQL Server makes, such as index hints and the recompile option.
SP1 for NAV version 5 restructures the way queries are sent to SQL Server, with the aim that SQL Server will now make query plans optimized for average parameter values rather than extreme ones. It is also a method which lets SQL Server make the plan without being forced in a certain direction by index hints or the recompile option.
Before NAV SP1, a typical query could look like this:
exec sp_cursoropen @p1 output,N'SELECT * FROM "Demo Database NAV (5-0)"."dbo"."CRONUS International Ltd_$G_L Entry" WHERE (("G_L Account No_"=@P1)) AND "G_L Account No_"=@P2 AND "Posting Date"=@P3 AND "Entry No_">=@P4 ORDER BY "G_L Account No_","Posting Date","Entry No_" ',@p3 output,@p4 output,@p5 output,N'@P1 varchar(20),@P2 varchar(20),@P3 datetime,@P4 int','1105','','1753-01-01 00:00:00:000',0
Notice the parameter values at the end of the statement ('1105', '', '1753-01-01 00:00:00:000'). SQL Server will make a query plan based on running the query with these parameter values. It will then cache this plan and use it for other queries which are identical but have different parameter values.
In SP1, the query above will look like this:
declare @p1 int
declare @p5 int
declare @p6 int
exec sp_cursorprepare @p1 output,N'@P1 varchar(20),@P2 varchar(20)',N'SELECT * FROM "W1500SP1RTM"."dbo"."CRONUS International Ltd_$G_L Entry" WHERE (("G_L Account No_"=@P1)) AND "G_L Account No_">@P2 ORDER BY "G_L Account No_","Posting Date","Entry No_" ',1,@p5 output,@p6 output
select @p1, @p5, @p6
And then another query:
exec sp_cursorexecute 1073741861,@p2 output,@p3 output,@p4 output,@p5 output,'1110',''
So before, we had one query. In SP1 we have two! So what's the benefit of that?
If you look at the first query from SP1, notice that it is an sp_cursorprepare statement, not sp_cursoropen. So the actual query is not run at this point. More importantly, the first query does not contain the parameter values. This is the query for which SQL Server makes the query plan. Not having the parameter values, SQL Server makes the plan based on its statistics about the data in the table. Only after this does NAV execute the statement, in the second query (sp_cursorexecute).
This method guarantees that SQL Server's query plan will not be affected by the parameter values. It means that sometimes SQL Server is prevented from making the optimum query plan for a certain set of parameter values. But remember that the query plan will be re-used for other parameter values. So at the expense of a few highly optimized queries, the method gives well-optimized queries with better consistency.
Another cost of this method is, of course, that NAV now sends two queries instead of one, requiring an extra round trip to SQL Server. But this only happens the first time the query is run. If the same query is run again, NAV only runs the second query (sp_cursorexecute).
One side effect of this is that tracing a query in SQL Profiler becomes different. With SP1 you will see a lot of queries like the second one above, which do not show what NAV is actually doing. Take a look at the second query again:
exec sp_cursorexecute 1073741861,@p2 output,@p3 output,@p4 output,@p5 output,'1110',''
With SP1 you will see a lot of queries like this, and then wonder what the actual query is. To find out, take the cursor handle (1073741861 in the example above) and look back for the sp_cursorprepare statement that returned it; that statement contains the actual query text.
In summary, this method is designed to give consistently good overall performance, and to avoid the sudden drops in performance that could result from cached query plans on SQL Server.
Lars Lohndorf-Larsen (Lohndorf)
Escalation Engineer
This is a follow-up to the post "How to write a simple XML document". That post contains a few more background details which I won't repeat here, so even if you only want to read XML documents, you may want to have a look there anyway.
But let's get straight to the point. This is how you can read an XML Document from Microsoft Dynamics NAV, using MSXML DOM:
1. Create a new codeunit.
2. Declare these 5 new global variables:

   XMLDoc              Automation  'Microsoft XML, v4.0'.DOMDocument
   DOMNode             Automation  'Microsoft XML, v4.0'.IXMLDOMNode
   XMLNodeList         Automation  'Microsoft XML, v4.0'.IXMLDOMNodeList
   CurrentElementName  Text 30
   i                   Integer
3. Initialize the DOM Document:

   CREATE(XMLDoc);
   XMLDoc.async(FALSE);
   XMLDoc.load('C:\XML\MyXML.xml');
4. Set up a loop to browse through the nodes in the document:

   XMLNodeList := XMLDoc.childNodes;
   FOR i := 0 TO XMLNodeList.length - 1 DO BEGIN
     DOMNode := XMLNodeList.item(i);
     ReadChildNodes(DOMNode);
   END;
You also have to create ReadChildNodes as a function with one parameter: CurrentXMLNode, Type = Automation 'Microsoft XML, v4.0'.IXMLDOMNode.
The reason for using a function is that it can call itself recursively. This is useful because you don't know how many child nodes there are, and a child node can easily contain further child nodes.
So create the function ReadChildNodes. The function has one parameter:

   CurrentXMLNode  Automation  'Microsoft XML, v4.0'.IXMLDOMNode

And a number of local variables:

   TempXMLNodeList       Automation  'Microsoft XML, v4.0'.IXMLDOMNodeList
   TempXMLAttributeList  Automation  'Microsoft XML, v4.0'.IXMLDOMNamedNodeMap
   j                     Integer
   k                     Integer
The function checks to see what type of node it was called with. A node can be an element, text (data), attribute etc. The node property .nodeType tells you which it is. See the documentation (included in MSXMLSDK) for a complete list of possible nodeTypes.
This is the function:

   ReadChildNodes(CurrentXMLNode : Automation "'Microsoft XML, v4.0'.IXMLDOMNode")
   CASE FORMAT(CurrentXMLNode.nodeType) OF
     '1': // Element
       BEGIN
         // Global variable CurrentElementName keeps track of the node
         // we are currently processing
         CurrentElementName := CurrentXMLNode.nodeName;

         // Process attributes
         // If the element has attributes, browse through those
         TempXMLAttributeList := CurrentXMLNode.attributes;
         FOR k := 0 TO TempXMLAttributeList.length - 1 DO
           ReadChildNodes(TempXMLAttributeList.item(k));

         // Process child nodes
         TempXMLNodeList := CurrentXMLNode.childNodes;
         FOR j := 0 TO TempXMLNodeList.length - 1 DO
           ReadChildNodes(TempXMLNodeList.item(j));
       END;
     '2': // Attribute
       BEGIN
         MESSAGE(CurrentElementName + ' Attribute : ' +
           FORMAT(CurrentXMLNode.nodeName) + ' = ' + FORMAT(CurrentXMLNode.nodeValue));
       END;
     '3': // Text
       BEGIN
         MESSAGE(CurrentElementName + ' = ' + FORMAT(CurrentXMLNode.nodeValue));
       END;
   END;
Now, try to run the codeunit against any XML document, and it will give a MESSAGE for each node and each attribute. This codeunit can be used for scanning any XML document from beginning to end.
Now, let's say you are looking for specific nodes in an XML document. As an example, take this document:
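The original sample document was not preserved in this copy of the post, so the following is an illustrative reconstruction, assumed from the code that follows: a Document root containing Order elements, each with Line elements holding an Amount.

```xml
<?xml version="1.0"?>
<Document>
  <Order>
    <Line><Amount>100</Amount></Line>
    <Line><Amount>250</Amount></Line>
  </Order>
  <Order>
    <Line><Amount>75</Amount></Line>
  </Order>
</Document>
```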
One of the good things in MSXML DOM is, that it reads the whole document into memory, so you can jump between nodes, and browse them, and select sub-sets based on node names. So you do not have to read a document from top to bottom if you know exactly which parts of the document you are interested in.
First, we can count how many orders the document contains (see how many "Order"-elements are under the root-element):
OnRun()
   CREATE(XMLDoc);
   XMLDoc.async(FALSE);
   XMLDoc.load('C:\XML\Order.xml');

   XMLNodeList := XMLDoc.getElementsByTagName('Document/Order');
   MESSAGE('The document has %1 elements called Order',XMLNodeList.length);
This will open the XML document, then use the getElementsByTagName-function to get all elements that match the tag name. Note that the tag name you specify is case sensitive (like everything else in XML)! So in this case, it would not find elements called "order".
Once you have the XMLNodeList, then you can go through the nodes in that list, using the first example.
Finally, an example which reads the XML document shown above and shows an Amount total for each order. Besides the globals from before, it uses XMLNodeList2 and DOMNode2 (the same Automation subtypes as XMLNodeList and DOMNode), plus j as Integer, NumberAsText as Text, and NumberDec and NumberDecTotal as Decimal:
   XMLNodeList := XMLDoc.getElementsByTagName('Document/Order');
   FOR i := 0 TO XMLNodeList.length - 1 DO BEGIN
     NumberDecTotal := 0;
     DOMNode := XMLNodeList.item(i);
     XMLNodeList2 := DOMNode.selectNodes('Line/Amount');
     FOR j := 0 TO XMLNodeList2.length - 1 DO BEGIN
       DOMNode2 := XMLNodeList2.item(j);
       NumberAsText := FORMAT(DOMNode2.text);
       EVALUATE(NumberDec,NumberAsText);
       NumberDecTotal := NumberDecTotal + NumberDec;
     END;
     MESSAGE('Order has total amount of %1.',NumberDecTotal);
   END;
This example uses DOMNode.selectNodes, which selects all matching nodes. If you are looking for an individual node, you can use DOMNode.selectSingleNode, which returns only the first node that matches.
In an XML document, everything is Text. So you have to convert to numbers where needed. Final note: Make sure to make your code more robust than this! What if Amount contains a character? What if an element is missing? If you don't take these things into consideration, then your code can fail with very unfriendly errors.
These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.
Lars Lohndorf-Larsen (Lohndorf)
Microsoft Dynamics UK
This post is for when you need to export data as XML from Microsoft Dynamics NAV. If you are not familiar with programming against MSXML DOM, it can take a lot of time and work to create even a simple XML document. The purpose of this post is to show a simple example and get you started programming with MSXML DOM from NAV.
I have split this into two posts: "How to write" (this post), and "How to read" an XML Document (click here), since the methods and functions used are quite different.
Why not just use XML-Ports? You can, of course, but then you limit yourself to the functionality and limitations of XML-Ports. An example of a limitation is that XML-Ports do not support namespaces. If you think that one day you may need functionality that is not supported by XML-Ports, for which you would have to use MSXML DOM anyway, then consider just using MSXML DOM from the start. It is more complex to begin with, but once you have a set of functions to add, search etc. elements in an XML document, it does not have to be much more complex than using XML-Ports. Note that it is not possible to access an XML document via MSXML DOM from an XML-Port while the XML-Port is accessing it; the two methods are mutually exclusive.
An XML document is organized as a hierarchy of elements, and everything (elements and data) gets added piece by piece to existing elements.
Let's get started with the simplest possible XML document:
Create a new codeunit with 3 variables:

   Name         DataType    Subtype
   XMLDoc       Automation  'Microsoft XML, v4.0'.DOMDocument
   DOMNode      Automation  'Microsoft XML, v4.0'.IXMLDOMNode
   DOMTextNode  Automation  'Microsoft XML, v4.0'.IXMLDOMText
In this example I use MSXML version 4, but you can use version 5 or 6 if you prefer. Version 4 is used here because it most likely contains all the functionality you will need, and it is available on more machines, even old ones.
Initialize the document:

   CREATE(XMLDoc);
   XMLDoc.async(FALSE);
Unless you have specific reasons not to, set async = FALSE, especially when you are reading an XML document. It means that the whole document is loaded into memory before you start reading through it. If you don't set this property, you may begin to process the document before it is completely loaded into memory.
Now create a new node, and then add it to the document:
   DOMNode := XMLDoc.createNode(1,'NodeName','');
   XMLDoc.appendChild(DOMNode);
Then add some data. This is done in the same way as you add an element: create the element first, and then add it to the right place in the document:

   DOMTextNode := XMLDoc.createTextNode('NodeValue');
   DOMNode.appendChild(DOMTextNode);
You can add any additional elements like this: create the element of the type that you need (node, text, attribute etc.), and then append it to the document.
To see the document, save it to disk:

   XMLDoc.save('C:\XML\MyXML.xml');
Note: If you run on Vista, it might not allow you to save files on the root of the c: drive. So create a new folder, for example c:\XML\
Then run this codeunit. You should get a very simple document like this:

   <NodeName>NodeValue</NodeName>
If you want to add further nodes to this document then repeat the steps above to create a new node, then append it to an existing node. For example, to add a new node:
Create a new variable: NewDOMNode, Type = Automation 'Microsoft XML, v4.0'.IXMLDOMNode.
Initialize the new node:

   NewDOMNode := XMLDoc.createNode(1,'NewNode','');
   DOMNode.appendChild(NewDOMNode);

And add text to the new node:

   DOMTextNode := XMLDoc.createTextNode('NewNodeValue');
   NewDOMNode.appendChild(DOMTextNode);
Now, your XML document will look like this:

   <NodeName>NodeValue<NewNode>NewNodeValue</NewNode></NodeName>
Adding namespace and Attributes:
You can add a namespace either to the whole document or to individual nodes. To add a namespace to the document, go back to the relevant line above and specify the third parameter:
DOMNode := XMLDoc.createNode(1,'NodeName','MyNameSpace');
Finally, adding an attribute to a node is similar to adding a text element: Initialize the Attribute, then add it to the relevant node. Use the function SetNamedItem to add attributes. Let's add an attribute called ID with the value 10000:
Create a new Automation variable: XMLAttributeNode, Type = Automation 'Microsoft XML, v4.0'.IXMLDOMAttribute.
Initialize the attribute, then add it to the node:

   XMLAttributeNode := XMLDoc.createAttribute('ID');
   XMLAttributeNode.value := '10000';
   DOMNode.attributes.setNamedItem(XMLAttributeNode);
The whole codeunit should now look like this:
   // Initialize the document
   CREATE(XMLDoc);

   // Create the node and attach it to the document, using namespace MyNameSpace
   DOMNode := XMLDoc.createNode(1,'NodeName','MyNameSpace');
   XMLDoc.appendChild(DOMNode);

   // Add data (text) to the node
   DOMTextNode := XMLDoc.createTextNode('NodeValue');
   DOMNode.appendChild(DOMTextNode);

   // Add an attribute to the node
   XMLAttributeNode := XMLDoc.createAttribute('ID');
   XMLAttributeNode.value := '10000';
   DOMNode.attributes.setNamedItem(XMLAttributeNode);

   // Initialize a new node
   NewDOMNode := XMLDoc.createNode(1,'NewNode','');
   DOMNode.appendChild(NewDOMNode);

   // And add text to the new node
   DOMTextNode := XMLDoc.createTextNode('NewNodeValue');
   NewDOMNode.appendChild(DOMTextNode);

   // Finally, save the document for viewing
   XMLDoc.save('C:\XML\MyXML.xml');
MSXML is further documented in MSXML Software Development Kit (MSXMLSDK). I would recommend that you download this to have the online help available for all of the MSXML DOM functions.
These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.
Microsoft Dynamics UK
Microsoft Customer Service and Support (CSS) EMEA
It is with great pleasure that I can announce the release of Microsoft Dynamics NAV 5.0 SP1. During the day, the first country versions will become available on PartnerSource, and I would like to draw your attention to the following two web sites on PartnerSource:
Microsoft Dynamics NAV 5.0 SP1 General information:https://mbs.microsoft.com/partnersource/products/navision/newsevents/news/50sp1.htm
Microsoft Dynamics NAV 5.0 SP1 Download Information:https://mbs.microsoft.com/partnersource/downloads/releases/MicrosoftDynamicsNAV50SP1.htm
March 28, 2008:Release in United States, Germany and Denmark. This is also the release date for the W1 version of Microsoft Dynamics NAV 5.0 SP1.
May 7, 2008:Release in Australia, New Zealand, Canada (EN+FR), France, India, Indonesia, Ireland, Italy, Malaysia, Netherlands, Philippines, Singapore, Spain, Thailand and United Kingdom.
June 17, 2008:Release in Austria, Belgium (NL+FR), Finland, Iceland, Mexico, Norway, Sweden, Russia, Czech Republic, Slovakia, Estonia, Hungary, Latvia, Lithuania, Poland, Portugal, Switzerland (FR+IT+DE), Bulgaria, Croatia, Romania and Slovenia
SQL Server 2005 keeps (some) history of the number of times and the length of time that blocking occurred on indexes. This post contains a query that reads this information, to help identify the tables on which blocking happens most. The query shows accumulated numbers (number of blocks, wait time, etc.) since SQL Server was last restarted.
The query does have some known inaccuracies. Also, the result needs to be interpreted: it may not point directly at which indexes need changing or disabling. See below for details.
The query uses Dynamic Management Views, which means it will only work on SQL Server 2005 and later.
Make sure to run the query in the NAV database, otherwise you won't see the table names.
So, here it is:
--order by No_Of_Blocks desc
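Only the final ORDER BY line of the query survives in this copy of the post. As a rough reconstruction (assuming the post used sys.dm_db_index_operational_stats, the DMV where SQL Server 2005 accumulates per-index lock-wait counters; the column aliases follow the names used in the text), a query producing the described result could look like this:

```sql
-- Run this in the NAV database on SQL Server 2005 or later
SELECT
    o.name                                          AS Table_Name,
    i.name                                          AS Index_Name,
    s.row_lock_wait_count + s.page_lock_wait_count  AS No_Of_Blocks,
    s.row_lock_wait_in_ms + s.page_lock_wait_in_ms  AS Block_Wait_Time_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE o.type = 'U'  -- user tables only
  AND s.row_lock_wait_count + s.page_lock_wait_count > 0
ORDER BY No_Of_Blocks DESC
```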
Inaccuracies:
No_Of_Blocks is recorded accurately, but Block_Wait_Time is not. SQL Server only records Block_Wait_Time when the block is a clear page lock or row lock. It will not record wait time in the case of a range lock, which is common in NAV. Also, Block_Wait_Time only gets recorded when a transaction completes, so if a transaction is aborted after the block (for example by a lock timeout), the Block_Wait_Time for that transaction is not counted. This means that the real Block_Wait_Time is likely to be higher, and it may be distributed across different tables and indexes than the query shows. Even so, I hope the query is still accurate enough to give a good idea of where blocking occurs.
How to interpret the result:
The query shows blocking per index, but you should not attach too much importance to the individual index on which blocking shows up. Instead, look at whether the table contains other indexes which are not used. Remember, an update of index X may require an update of all other indexes in the table, so look at the whole table when deciding whether it is over-indexed. For this, use the "Index Usage" query, which shows you the usage of each index in a table.
Blocking is recorded on the first table on which a process is blocked. Maybe the process begins by processing small tables, while the real blocking happens because processing of other tables takes a long time. In that case, the query will show the blocking on the first table, but not on the later tables, which may be where the real problem lies.
I will be more than happy to receive any feedback on experiences with this query, and suggestions for how to improve it!
We get a couple of questions asked quite regularly: “How can I make my ERP system SEPA compliant” and a more basic one “What is SEPA? And how does this affect my daily work?” With this blog entry I’ll try to answer these questions.
SEPA is short for Single Euro Payments Area, which was set up by the European Commission as a political agreement to reinforce the Economic and Monetary Union. The goal is to form a single market for payment transactions within the European Economic Area. One of the ways to do this is to create a set of standards for electronic payments using the ISO 20022 XML format, together with a set of rules and guidelines for how euro payments must be handled. With SEPA fully implemented, all electronic payments in euro within the SEPA area will be regarded as domestic payments, even if they are cross-border payments.
With a common standard it will be easier to process payments, which will lead to more efficiency in the banking system and hopefully lower prices. It will also make it possible for companies that have business activities in many countries to only interact with one bank, since the cross-border payment is assumed to be the same as the domestic payment. The long term vision is that this standard can expand and also be used in other areas such as e-invoicing and e-reconciliation.
The SEPA implementation is still in its early stages. In January 2008, the SEPA instruments for Credit Transfer became available. The Credit Transfer standard will initially be used for interbank relationships with regard to cross-border euro payments within the SEPA countries. Banks that call themselves SEPA compliant are referring to this fact, meaning that they are able to create these kinds of transfers. It is important to note that, right now, it is only required that banks can use this format when dealing with each other; it is not required that customers adhere to these standards in either customer-to-customer or customer-to-bank situations. It is also important to note that SEPA only covers euro transactions, not other currencies.
Looking forward, it is expected that national instruments for credit transfers, direct debits and cards will be replaced by the relevant SEPA instruments by 2010, and that customers will be able to use the standards for creating electronic payments.
This will affect Microsoft Dynamics NAV customers that have banking relationships within Europe. In the short term there will not be a big difference: customers will not be required to use the SEPA standard when they interact with their bank. Furthermore, only a few banks can accept the format today. It will be possible to ask the bank for a SEPA Credit Transfer, but it will be the bank that creates the file in the proper format. It is anticipated that many of the banks within the SEPA region will be able to receive a SEPA Credit Transfer in the near future, but a transition period is expected during which they also accept the old electronic formats.
One thing to prepare for is that both the Bank Identifier Code (BIC) and the International Bank Account Number (IBAN) are a crucial part of the SEPA standard. These two identifiers are needed to process a SEPA Credit Transfer. This means these numbers will have to be correct when an electronic payment is created, and it is something customers can prepare for by requiring that these fields are filled in and kept up to date whenever a customer or a supplier is created or updated in Microsoft Dynamics NAV.
One question that we also hear, and want to comment on, is: "If I want to make a SEPA payment, am I required to create an ISO 20022 compliant XML file?" The answer to this is both yes and no.
There are many places on the internet where it is possible to get more information on what SEPA is and what it means to businesses in the EU and to banks. Many banks are also aware of the changes to come and the effect they will have on their customers. I hope that the points above show that SEPA is an interesting new standard that can potentially make interactions between Microsoft Dynamics NAV and banks more effective. From our perspective, a common standard will make it possible for us to create bank integration that can serve more than one country or one bank, provided the banks stick to one schema! In the short term, the thing to keep in mind is that it is possible to prepare by updating the data on suppliers and customers, so that a SEPA-compliant credit transfer can be created when it is required.
Some facts around SEPA
From the European Payment Council
From the European Central Bank
From the European Commission
- Rikke Lassen
Greetings from the Microsoft Dynamics NAV User Education team. We are the team responsible for application and platform documentation, including online Help, the Application Designer’s Guide, and the installation and configuration manuals. You’ve made some complaints, and we’ve been listening. Your feedback says that it’s difficult to find the Help information you need, and that the content that’s there is “superficial” and “inadequate.” As the team’s new Content Architect, I have been working with UE writers, editors, and managers to make sure you have a better experience with the next release. In this post, I want to share a couple of the changes we’re making to the NAV docs for the next version: describing business and development processes, and establishing better discoverability of information.
How Do I…
Internally, we’ve been calling one new topic type, “How Do I” topics, because they’re written to answer the question, “How do I perform this process?” Here’s a draft of one of these topics, which basically answers the question, “How do I create new vendor accounts?”
The links at the top of the topic indicate where you are in the larger process; in this case, you are creating new vendor accounts as part of the section Configure Purchase Processes, in the Purchasing department. The content of the topic includes overview information on the general sequence of tasks you perform to achieve a business or development goal, and then the table describes and links to topics to help you complete that goal. For application documentation, the processes are based on the customer model (http://www.microsoft.com/dynamics/product/familiartoyourpeople.mspx).
Walkthrough topics provide end-to-end processes comprised of two or more tasks. Using the CRONUS International Ltd. demo company, the walkthrough tasks enable you to learn the steps involved in a process before you perform them using your own data. Application walkthroughs provide the beginning-to-end steps for processes like tracking sales campaign results, or calculating work in progress for a job. Development walkthroughs step you through processes like designing a customer sales order report.
In coming weeks, we will begin sharing specific versions of these topics with you, and hope you’ll give us feedback on how well they meet your needs, and the specific content you’d like to see us develop.
We’ve had a lot of feedback that finding information in the NAV documentation is difficult, so we’re working on improving discoverability from several angles. In previous versions, platform documentation has been delivered in nearly 20 different manuals. We’re compiling some of the most strategic of these manuals into online Help files for development, installation, and C/SIDE reference. These logical collections reduce the number of places you have to look for information, and since they’re Help files, you get the benefit of other discoverability aids like Search, an index, and a table of contents. In addition, platform documentation is going to be published in the MSDN library, for easy online access to topics.
We are also working on a number of other discoverability improvements.
Again, we welcome any suggestions, concerns, hopes, and fears you have regarding these plans or on the documentation as a whole. You can contact us directly at firstname.lastname@example.org. And if you are traveling to Convergence next week, please attend the User Education sessions we are running, on enabling partners to extend NAV Help, and on transforming forms for use in the RoleTailored client. Specific details are below.
NAV Help Extensibility Tuesday, March 11, 12:30-1:30 Room W311E
NAV 2009 Form Transformation Thursday, March 13, 12:30-1:30 Room W311E
- Michelle Fredette
Microsoft Dynamics NAV plans to support Microsoft SQL Server 2008 and Microsoft Windows Server 2008 once they are available. To ensure compatibility, Dynamics NAV development will start testing Dynamics NAV 4.0 SP3 and Dynamics NAV 5.0 SP1 on these two server platforms at the beginning of calendar year 2008.
See full announcement here.
Martin Nielander (martinni)
Program Manager
As you may have heard, last week the MBS team announced that Darren Laybourn will be taking a new position as General Manager of the Outlook Mobile team at Microsoft. We also announced that I would become the new General Manager for Dynamics NAV and Mobility and continue to report to Hal Howard, who will now run all of MBS ERP R & D. As a part of my new role, my family and I will relocate from Seattle to Copenhagen, where I will work at the Microsoft Development Center Copenhagen (MDCC). In this blog post, I want to introduce myself and invite your questions or comments about these changes.
Before I do that, however, I want to thank Darren for his contribution to MBS, Dynamics NAV, and Dynamics Mobility. Darren is a true veteran of this business, having been with the combination of Great Plains Software and Microsoft Business Solutions for over 15 years. He's had an incredible, positive impact on our customers, and he's been a mentor for me through most of my career at Microsoft.
Let me start my introduction with a very brief bio. I've been with Microsoft for about six years, working on the Microsoft Business Framework, Project Fenway, and Dynamics AX. Before joining Microsoft in 2002, I ran the R & D group for a Silicon Valley start-up called Bistro that built workflow-based business applications (including financials management) for small-to-medium sized businesses. The rest of my career has been in IT consulting for large companies such as Hewlett-Packard, Charles Schwab, Ryder, Diners Club, and American Express. In short, I've worked on business applications my entire career and am no stranger to metadata, journal posting, and complex business value chains, on which our customers' success is predicated.
Nonetheless, I'm new to NAV and have a lot of learning to do. But one of the first things that I've learned is just how passionate all the stakeholders of this product are, whether they are customers, partners, or employees. It's people like you, the readers of this blog, that are making this product and the customers that use it, a success. Thank you for your support, and I hope I get a chance to meet you, work with you, and learn about the things you love, don't love, or wish you could love about NAV.
In the meantime, the NAV organization will continue moving forward according to our current roadmap, including NAV 5.0 SP1 this March and NAV 6.0 at the end of the calendar year. Our priorities haven't changed, and our commitment to the NAV product is as strong as ever. I’m excited by the future that lies ahead for NAV customers, NAV partners, and the NAV team itself!
I look forward to hearing your questions and comments about the changes.
- Dan Brown
This is a follow-up to an earlier post, "Finding Index Usage". In that post, I described a very simple way to list how indexes are being used. Here, the query has been extended considerably: it now shows your Navision keys, sorted either by number of updates or by their cost divided by their usage, and it shows when an index was last used for reading. The idea is to produce a list of indexes that are being maintained but never, or rarely, used.
The query uses SQL Server Dynamic Management Views (DMVs), which means it will only work on SQL Server 2005 and later.
Feel free to add comments to this post about how useful (or not) you find the query, about any problems you run into, and with suggestions for improving it. All comments are welcome!
To use it, copy the query below into SQL Server Management Studio. Remember to set the database to your Microsoft Dynamics NAV database (not master or any other database), then run it. Depending on the size of your database, it may take a few minutes to complete. The first time you run it, I recommend doing so when SQL Server is not otherwise busy, until you know how long it takes:
-- use NavisionDB
-- Generate list of indexes with key list
-- populate key string
-- Generate list of Index usage
-- Select and join the two tables.
--order by Cost_Benefit desc
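The comment headers above outline the structure of the full query. As a rough, simplified sketch of the same approach, using sys.dm_db_index_usage_stats (this is an illustration only, not the original query; it collapses the work into a single temp table and omits the part that builds the NAV key list from sys.index_columns):

```sql
-- Illustration only: per-index usage statistics since the last restart.
-- Run against the NAV database on SQL Server 2005 or later.
SELECT  o.name                                   AS Table_Name,
        i.name                                   AS Index_Name,
        i.type_desc                              AS Index_Type,     -- CLUSTERED / NONCLUSTERED
        ISNULL(s.user_updates, 0)                AS No_Of_Updates,  -- cost: index maintenance
        ISNULL(s.user_seeks + s.user_scans
               + s.user_lookups, 0)              AS User_Reads,     -- benefit: actual reads
        s.last_user_seek                         AS Last_Used_For_Reads
INTO    z_IUQ_Temp_Index_Usage
FROM    sys.indexes i
JOIN    sys.objects o ON o.object_id = i.object_id AND o.type = 'U'
LEFT JOIN sys.dm_db_index_usage_stats s
        ON  s.object_id   = i.object_id
        AND s.index_id    = i.index_id
        AND s.database_id = DB_ID()
WHERE   i.name IS NOT NULL;   -- skip heaps

SELECT  *,
        CASE WHEN User_Reads = 0 THEN NULL
             ELSE 1.0 * No_Of_Updates / User_Reads
        END AS Cost_Benefit
FROM    z_IUQ_Temp_Index_Usage
ORDER BY No_Of_Updates DESC;
--ORDER BY Cost_Benefit DESC;
```

An index that has no row in sys.dm_db_index_usage_stats has not been touched since the last restart, which is why the sketch uses a LEFT JOIN.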
The query shows one line for each index in the SQL database, with the table name and a list of the fields in the index. Note that any non-clustered index also contains the fields of the clustered index. For example, on SQL Server the key "Document No." in the "Cust. Ledger Entry" table is "Document No.","Entry No.". Also note that the indexes shown by SQL Server are not always listed in the same order as you have defined them in NAV.
The column "No_Of_Updates" basically shows you the cost of this index, since every update requires a lock as well as a write to the database. The next column, "User_Reads", shows you how often this index has been used, either from the UI, or by C/AL code. Compare these two, and you have way to compare the cost against the benefits of each index, as shown in the column "Cost_Benefit", which is simply "No_Of_Updates" / "User_Reads". The column "Last_Used_For_Reads" shows you when an index was actually used for reading.
The query sorts the indexes by "No_Of_Updates", with the most updated (most costly) index first. At the last line of the query you can change the sorting to "order by Cost_Benefit desc", and you are likely to see a different picture.
Finally, the query shows you whether each index is clustered or non-clustered.
The query creates two new tables called "z_IUQ_Temp_Index_Keys" and "z_IUQ_Temp_Index_Usage". Although it is highly unlikely, if you already have tables with these names in your database, the query will overwrite them without warning. These tables collect the index usage statistics, so if you need to run the query again (for example because you lost the results, or want to run it with a different sorting), you don't have to run the whole thing. Just run the last part, from the section "-- Select and join the two tables.", and it will complete much faster. Only when you have changed indexes, or want an updated view of index usage, do you need to run the whole query again.
The data shown by the query is reset every time SQL Server restarts. So if you have recently restarted SQL Server, the query may not give you an accurate picture of how the indexes are used over time. Also consider that some indexes may only be used occasionally, for example at the end of the month or the end of the fiscal year. So even if the query shows that a certain index has not been used since SQL Server was last restarted, that index may still be required for specific jobs.
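Since the statistics only accumulate from the last service start, it helps to know exactly when that was. A quick way to check (sqlserver_start_time only exists from SQL Server 2008 onwards; on SQL Server 2005, the creation date of tempdb gives the same information, since tempdb is recreated at every start):

```sql
-- When was SQL Server last started?
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;            -- SQL Server 2008 and later
SELECT create_date FROM sys.databases WHERE name = 'tempdb';    -- works on SQL Server 2005
```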
One of the top customer questions we receive in the Fixed Assets area is about how to start using the Fixed Assets module when acquisitions and depreciations have previously been posted only in the general ledger.
For example, a company wants to start using the Fixed Assets module from the 1st of January 2002. Acquisitions and depreciations have been posted in the general ledger until the 31st of December 2001. Fixed assets are created in the following way:
- Henrik Sonne
The nature of a deadlock is that two processes lock resources in different orders. Deadlocks can in theory be eliminated by ensuring that all processes always lock resources in the same order. This document describes how to determine the locking order of a process in Microsoft Dynamics NAV.
Note that I mention locking of "resources". Most of the time, this means placing locks on a table, but it can also mean locking different parts of the same table, or different indexes in the same table or in two different tables. That, however, is micro-tuning territory, so in this post, when I say "resource", I mean "table".
What is a deadlock? A deadlock returns an error message which specifically says that your activity was deadlocked:
"Your activity was deadlocked with another user modifying the [xyz] table. Start again."
A blocking chain is a situation where a client hangs (becomes unresponsive) until a resource becomes available. In everyday language you might describe such a situation as a deadlock. Technically, though, it is not a deadlock, since the situation will eventually resolve itself. In other words, to a user, an indefinite block may look like a deadlock, but if you have real deadlocks, then by definition your users will receive deadlock error messages like the one shown above. One of the first steps in troubleshooting any kind of locking or blocking is to identify exactly what type of problem you have. So always make sure to first find out whether you are dealing with blocking chains, deadlocks, or other issues, since the best troubleshooting method depends on the exact issue at hand.
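As a quick first check of whether you are looking at a blocking chain right now, the SQL Server 2005 DMVs can be queried directly (a small diagnostic sketch, separate from the NAV tool described below):

```sql
-- Sessions that are currently waiting on a lock held by someone else.
-- A long-lived chain here is blocking, not a deadlock: SQL Server
-- detects true deadlocks automatically and kills one victim session.
SELECT  r.session_id,
        r.blocking_session_id,   -- who holds the resource
        r.wait_type,
        r.wait_time,             -- milliseconds spent waiting so far
        r.command
FROM    sys.dm_exec_requests r
WHERE   r.blocking_session_id <> 0;
```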
This post shows how to determine the locking order of a process in NAV. Knowing the locking order of various processes can help you identify potential deadlocks. It can also help in cases where you know that two or more processes are causing deadlocks, by showing the locking order of each of them.
Pre-requisites: The tool described here is part of the Performance Troubleshooting Guide, which is available on PartnerSource (partner login required). It requires a NAV Partner license.
The SQL Server Performance Troubleshooting Guide includes an object called "Client Monitor.fob". To begin, import it into the database that contains the processes you are troubleshooting. It does not have to be a live database; a stand-alone copy of a live database is all you need. Since in this scenario you are only examining the locking orders of processes, this can be done just as well in your office, on a copy or test version of a database, as in a production environment.
To use "Client Monitor.fob", follow these steps, after importing it into the database:
1. Start the Client Monitor (Tools -> Client Monitor -> Start). Before starting it, go to the Options tab, de-select "Include Object table activity", and select all the options under "Advanced".
2. Run the processes for which you want to detect the locking order.
3. Stop the Client Monitor.
4. Run form 150025 "Transactions", which was imported with "Client Monitor.fob". This will take a while, as it processes the information collected by the Client Monitor. When it opens, it shows one line for each transaction.
5. For each of the lines shown in this form, click Transaction -> "Locking order" to see the locking order of that transaction.
This shows you not only the locking order of each transaction, but also the C/AL code that places each lock. If you combine this information with knowledge of which tables and processes are involved in the deadlocks you are troubleshooting, it can help you decide where locking orders need to be changed.
What locks a record? NAV automatically places a lock as soon as any update command is used (INSERT, MODIFY, DELETE), even if the C/AL command LOCKTABLE is not. On SQL Server, NAV only locks the record itself (and potentially adjacent records - see this post).
The C/AL command LOCKTABLE does not in itself lock anything. It only puts NAV into locking mode; NAV will then lock any records that it accesses. This is why the places in C/AL code that actually place locks are typically not LOCKTABLE commands, but, for example, a FIND('-') command that comes after a LOCKTABLE command.
What to do next: Once you have determined that two different processes have different locking orders, you must decide on a single locking order. There is no complete guide to which order should be used, but if you look for the C/AL command RECORDLEVELLOCKING in codeunit 80, you can see that this codeunit locks a number of tables in a certain order. At least for the tables included there, it makes sense to stick to that order. For tables which are not included, you must make your own decisions. Once you have decided on a locking order, you can use one of the following tactics to change it:
Avoid locking: Sometimes it is not necessary for a process to access certain tables at all. For example, the tool may show that table 5765 – "Warehouse Request" is being locked. If a customer is not using warehousing, then the code that locks this table can sometimes be commented out.
Lock sooner or later: One of the most common ways to change a locking order is to move a lock up so that it happens earlier in the process. For example, let's say you have this line of code:

SalesOrder.INSERT;

and you know that later on in the code you will be locking one of your own tables. To change the locking order of these two tables, add

YourOwnTable.LOCKTABLE;

before the line that locks the Sales Order table.
Master lock (semaphore): You can decide that all processes must lock a certain record before they lock anything else. In this case, only one process will run at a time. This effectively eliminates deadlocking, but it also reduces concurrency and can lead to blocking chains instead.
As you can see, two of the tactics above (lock sooner, and master lock) may reduce the risk of deadlocks, but on the other hand they can also increase blocking, because more resources get locked sooner. So always make a total assessment of the problem that the deadlocks cause. Sometimes it may be better to live with a weekly deadlock, but otherwise good concurrency, than to start rearranging locks and cause more blocking.
In an effort to keep providing our customers and partners with a stronger set of tools for their businesses, the Jobs area of Microsoft Dynamics NAV 5.0 was redesigned, and many new features were added to the module. Functionality aimed at productivity and flexibility, such as an entirely new budget structure, fixed item pricing, foreign currency support, copy job, calculate remaining usage, journal improvements, and better integration with item tracking, costing, and service management, has been very well received.
However, it has been brought to our attention that some customers and partners are having an issue with the new functionality. In Microsoft Dynamics NAV 5.0, the purchasing process is initiated in the purchase invoice, while some customers' business processes dictate that they start the process in the purchase order. Technically, there is little change in the underlying functionality between releases: in both 4.0 and 5.0, ledger entries for the job are not created until the invoice is posted. That said, we completely understand the issue; a fix allowing users to start the process in the purchase order will be in Microsoft Dynamics NAV "6.0", and we are currently investigating the issue in 5.0. For the time being, if you upgrade to 5.0 and are dependent on purchase order functionality within the Jobs area, you will experience this issue.
Some customers have also had an issue with dimensions on WIP (work in progress) in the Jobs module in 5.0. A fix for job WIP dimensions will be shipped in SP1 and is already included in the Microsoft Dynamics NAV "6.0" code.
We are setting up a page on both PartnerSource and CustomerSource that will link to a short whitepaper outlining the change in functionality from 4.0 to 5.0 in more detail. I will post again when the links are available.
- Selena Breann Jensen
If you are designing a system for high throughput and want to minimize blocking, it's important to know in advance exactly what kind of locking behaviour to expect.
On SQL Server, Microsoft Dynamics NAV uses record-level locking, as opposed to the table-level locking used on the native database server. In reality, SQL Server can lock a little more than the individual records you want to lock: it can also lock the next and the previous records. This happens when SQL Server applies a range lock.
This post describes how and when a range lock can occur. It applies to normal locks, regardless of the "Always Rowlock" setting in NAV. Lock escalation, where SQL Server can escalate a record-level lock to a table lock, is a separate issue.
Here are some examples:
Table 7 "Standard Text", contains the following records:Code DescriptionMD Monthly DepreciationSC Shipping ChargeSUC Sale under ContractTE Travel Expenses
Table 7 is used here, because it is the simplest possible table.
The following C/AL code in NAV will lock just one record (SC - Shipping Charge):
StandardText.LOCKTABLE;
StandardText.GET('SC');
IF NOT CONFIRM('Continue?') THEN ERROR('Stopped.');
You can confirm this by running the code from one client, then opening another client and trying to update the records in the table.
Now, consider this C/AL code:

StandardText.SETFILTER(Code,'S*');
StandardText.LOCKTABLE;
StandardText.FINDSET;
IF NOT CONFIRM('Continue?') THEN ERROR('Stopped.');
You might expect that it will lock the two records that fall within the filter (S*): SC and SUC. But this is where SQL Server locks a bit more: it will also lock the records around the filter (MD and TE). As soon as the lock covers a range rather than one individual record, SQL Server has to protect not only the locked records but also the range itself. In this case, the range covers 2 records, and SQL Server prevents anyone else from inserting new records into it. It is SQL Server's way of guaranteeing that the range stays at these 2 records.
Depending on your C/AL code, sometimes SQL Server protects just the beginning of the range, sometimes just the end, and sometimes both. So when you lock a range, you should assume that both the record just before it and the record just after it will also get locked.
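If you want to see these range locks from the SQL Server side, you can emulate NAV's locking read in Management Studio. The sketch below assumes a table shaped like table 7 above (the physical table name depends on your company, so [CRONUS$Standard Text] is just an example); the UPDLOCK hint under SERIALIZABLE isolation approximates how NAV's locking reads behave on SQL Server:

```sql
-- Session 1: lock the 'S*' range and keep the transaction open.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT * FROM [CRONUS$Standard Text] WITH (UPDLOCK)
WHERE [Code] LIKE 'S%';

-- While the transaction is open, inspect the key-range locks it holds:
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID AND resource_type = 'KEY';
-- Range lock modes such as RangeS-U show that boundary keys
-- are protected along with SC and SUC.

COMMIT;  -- releases the locks
```

While the transaction is open, a second session trying to insert a record sorting between MD and TE (for example 'SA') will block until it commits.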
Here is an example from a more realistic situation. Imagine you have a process that updates sales order 2002 (posting, releasing, or any other process). You may have C/AL code that locks the sales lines, like this:
SalesLine.SETRANGE("Document Type",SalesLine."Document Type"::Order);
SalesLine.SETRANGE("Document No.",'2002');
SalesLine.LOCKTABLE;
SalesLine.FINDSET;
In this case, lines belonging to order 2003 can also get locked! So even when users are working on their own documents, they can still end up blocking each other.
When this is causing problems, what methods can be used?
The simple answer is to make sure that users don't work on records that are right next to each other. One way would be to insert "ghost" records between the normal records. Another is to change the sorting. In the example above, if you had added this line at the beginning:

SalesLine.SETCURRENTKEY("Document Type","Sell-to Customer No.");
then order 2003 would not get locked, because with this sorting it is not the next record. Or you can design a solution that works in more random areas of a table. Using number series means that active documents are often clustered in the same area of the table. If you used random numbers, or GUIDs, as primary keys, then the active documents would be spread throughout the table, and the risk of users updating records next to each other would be smaller.
These are just some methods to consider, but the main aim is to spread activity on a contended table so as to avoid hot spots in that table.
Locking the last record is done in a number of places in the NAV application, including the posting routines, which normally lock the last record in the entry table. When the record you are locking is the last one in the table, the "next record" lies anywhere between the last record and infinity, so in this case the locked range can become much larger.
If we go back to the Standard Text table and run this C/AL code:

StandardText.LOCKTABLE;
StandardText.FINDLAST;
then the locked range will include any possible record after the last one (TE - Travel Expenses), so it prevents anyone from inserting new records that would sort after TE.
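Seen from the SQL Server side, the FINDLAST case looks like this (again using a hypothetical [CRONUS$Standard Text] table; the UPDLOCK hint under SERIALIZABLE isolation approximates NAV's locking read):

```sql
-- Session 1: emulate LOCKTABLE + FINDLAST on the Standard Text table.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT TOP 1 * FROM [CRONUS$Standard Text] WITH (UPDLOCK)
ORDER BY [Code] DESC;
-- A second session's INSERT of any code sorting after 'TE'
-- (for example 'ZZ') now blocks: the key-range lock covers
-- everything from the last key up to "infinity".
COMMIT;  -- releases the range
```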
If you are designing an application from scratch, you may want to take this behaviour into consideration. Changing an existing application, on the other hand, may in some cases require major redesign. So always consider other ways to improve performance as well; there may be easier ways to resolve blocking problems.