Microsoft Dynamics AX Support

This blog contains posts by the Microsoft Dynamics AX Support teams Worldwide

White Paper on Common Upgrade Performance Tips


*** Author note *** As mentioned in the white paper itself, much of the information in the document does NOT apply to upgrades to Dynamics AX 2012 R2 and beyond. For example, if you disable the updateGABDateTimeFields method in Dynamics AX 2012 R2, then because of changes to the script itself and to the R2 table structures, the upgrade would end up deleting all rows from the DirPartyLocation table (thanks to Nicolas Granjon for that tip).


Hello all - I recently posted a new white paper on CustomerSource, which can be found here: Common Performance Optimizations for Dynamics AX2012 Upgrade Code. It covers some very common areas of the upgrade code that customers and partners have needed to optimize in order to get acceptable performance. If the link isn't working for you, log on to CustomerSource and search the Documentation - White Papers area for the document. The majority of these issues are being considered for the base upgrade code in our next version, but for customers and partners working on an upgrade to Dynamics AX 2012 now, the specific script changes in this document can save quite a bit of time with larger datasets. The areas covered in the new white paper are:


Preprocessing Considerations
  • Cleaning up your data
  • Install and configure Dynamics Performance Analyzer
  • Upgrading from AX 4.0 - Address pre-processing scripts (generally named updateParty_XXXXX)
  • LedgerTrans pre-processing

Delta Processing Suggestions
  • Temporary Indexes on Original Dynamics Tables for delete_from Statements
  • Address Delta Processing Script Enhancements

Microsoft Dynamics AX 2012 Upgrade Suggestions
  • Index Changes for Upgrade
  • Recordset Update/Inserts that can help optimize runs


If you have run across other areas in your own testing where you would like to see similar content, please leave a comment here on the blog with some details and I will look into adding them to the white paper. As a sample of the content, here is a very common scenario that occurs during delta processing on larger tables. Many times the routine may seem to stall with no indication of activity, but looking at the SQL Activity Monitor you can see a DELETE statement being executed.

Temporary Indexes on Original Dynamics Tables for delete_from Statements

Most of the delta scripts use a delete_from statement with a notexists clause to remove rows from the Shadow tables when the original record in the Dynamics table has been deleted or modified. These delete_from statements will almost always include a where condition similar to the following:

                custTrans.RecID == Shadow_CustTrans.RefRecID && custTrans.RecVersion == Shadow_CustTrans.RecVersionID
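As a sketch only (not the exact shipped upgrade script), the surrounding delete_from in X++ would look something like the following, with the table and field names taken from the condition above:

```xpp
// Illustrative X++ sketch of a delta-processing delete_from.
// The real upgrade scripts differ per table; this only shows the pattern.
CustTrans        custTrans;
Shadow_CustTrans shadow_CustTrans;

delete_from shadow_CustTrans
    notexists join custTrans
        where custTrans.RecId      == shadow_CustTrans.RefRecId
           && custTrans.RecVersion == shadow_CustTrans.RecVersionId;
```

The notexists join is what SQL Server ultimately executes as a DELETE with a NOT EXISTS subquery, which is why the helper index discussed below matters.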

For some of the larger Dynamics tables, the delta script may appear to be "stalled" without ever changing the iteration value to indicate that it is processing rows. This can happen even if no records have been altered or deleted in the table being processed. For these "stalled" jobs, check the Processes list of the SQL Activity Monitor inside SQL Server Management Studio to see if there is a query running similar to this:


declare @P1 nvarchar(5),@P2 nvarchar(5);

set  @P1 = 'dmo';
set  @P2 = 'dmo';

DELETE FROM shadow_inventtransposting 
WHERE  (dataareaid = @P1) 
AND NOT EXISTS 
  (SELECT 'x' FROM inventtransposting b 
     WHERE  ((b.dataareaid = @P2) 
     AND ((b.recid = shadow_inventtransposting.refrecid) 
     AND (b.recversion = shadow_inventtransposting.recversionid))))

If that type of DELETE query is found in the Processes list of the SQL Activity Monitor, then the recommendation would be to do the following:

1. Kill the process by right-clicking the line in the Processes list and choosing Kill Process.

2. This step depends on whether you are running in a test environment or in live production with other users in the system.

   a. If running in a test environment with no other users, then within the AOT, add an index to the original Dynamics table (in the example above it would be InventTransPosting) consisting of RecID followed by RecVersion (the order is important). Save and synchronize the table to create the new index.

   b. If running in the live environment with other users, go to SQL Server Management Studio and create a new index directly in SQL Server on the original Dynamics table containing the columns DataAreaID, RecID, RecVersion (the order is important).

3. Then refresh the upgrade cockpit; your batch job should have ended with an error (this might take a little time if a rollback is needed).

4. Choose to rerun the job that ended in error, and the script should progress fairly quickly past the delete statement.

Once the delta processing is done, those indexes can be removed from the system so that they don't add more time to inserts and updates done through normal business processing. However, those indexes should be put back into place during the Single-User phase of the upgrade to guarantee quick performance during that critical down-time window.
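For step 2b, creating the temporary index directly in SQL Server, and dropping it again once delta processing is done, might look like the following T-SQL. The index name here is illustrative (it is not from the white paper); substitute your own table and naming convention:

```sql
-- Temporary helper index for the delta delete_from.
-- Column order matters: DataAreaID, RecID, RecVersion.
CREATE INDEX IX_InventTransPosting_DeltaTmp
    ON dbo.INVENTTRANSPOSTING (DATAAREAID, RECID, RECVERSION);

-- After delta processing completes, drop the index so it does not
-- add time to inserts/updates during normal business processing:
DROP INDEX IX_InventTransPosting_DeltaTmp ON dbo.INVENTTRANSPOSTING;
```

Remember, as noted above, to recreate the index before the Single-User phase of the upgrade so the delete_from statements stay fast during that down-time window.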

Comments
  • what about data upgrade from 2009? how to do this?

  • The majority of the suggestions from this document apply to both Dynamics AX 4.0 and 2009. The details related to address processing are the areas where the same changes do not apply because the address structure is quite different between the two versions.

  • Hi,

    I've encountered the exact same problem as described above, regarding the stalled delete_from for the InventTransPosting table. Actually, in the Activity Monitor, the process is in a SUSPENDED state with PAGEIOLATCH and CXPACKET wait types.

    I've put a new index on RecId & RecVersion on InventTransPosting, but the problem apparently still remains, even on the next SQL statement where it performs a select [...] notexists join [...]. Do you know if the new index should be set up as a clustered index?

    I've tried rebuilding the indexes on both tables (source table & shadow) without success.

  • Hello Nicolas, the index should not need to be set up as clustered, but the ordering of the columns in the index is important. Can you verify, by looking at the index on the table inside SQL Server Management Studio, that the index is created as DataAreaID, RecID, RecVersion?
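    One quick way to check the key columns and their order for the indexes on a table is a query against the SQL Server catalog views, along these lines (a sketch; adjust the schema and table name for your database):

    ```sql
    -- List each index on InventTransPosting with its key columns in order
    SELECT i.name       AS index_name,
           ic.key_ordinal,
           c.name       AS column_name
    FROM sys.indexes i
    JOIN sys.index_columns ic
      ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns c
      ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    WHERE i.object_id = OBJECT_ID('dbo.INVENTTRANSPOSTING')
      AND ic.key_ordinal > 0   -- key columns only, not included columns
    ORDER BY i.name, ic.key_ordinal;
    ```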

    Also, can you let me know how long the delete and select statements are taking in your case? If you can see the queries listed in the Activity Monitor under the Recent Expensive Queries section, can you right-click the query, choose Show Execution Plan, and then, when the plan is displayed, choose to save it? If you can do this for both queries, send them to me and I can take a quick look. If I can't find anything out of the ordinary quickly, then I will let you know that we need to open a support case to take a more detailed look at what is happening on your system.
