mfp's two cents

...on Dynamics AX Development!
  • mfp's two cents

    Number Sequence auto-cleanup causing blocking


    Recently a customer experienced a daily timeout in their warehouse operations. It occurred around 10:30 every day and lasted for a few minutes. Meanwhile, all warehouse operations were blocked.

    It turned out that the culprit was a number sequence configuration. One of the many features in the AX number sequence framework is to automatically clean up unused numbers for continuous number sequences. This feature is great for number sequences that are used infrequently. However, for high-volume number sequences it can cause blocking problems.

    When generating a new number, the auto-cleanup feature tests whether it is time for clean-up, and if it is, it commences the clean-up right away – and the clean-up can take minutes. Meanwhile, SQL will hold locks that prevent anyone from accessing the same number sequence.

    Here is a setup of a number sequence that will run the auto clean-up daily, and potentially lock the system in the meantime.


    And here is a job to detect similar issues in your installation:  

    static void FindNumberSequencesCausingLocksDuringCleanup(Args _args)
    {
        utcdatetime currentSystemTime = DateTimeUtil::getSystemDateTime();
        NumberSequenceTable ns;

        while select ns
            where ns.Continuous == true &&
                  ns.CleanAtAccess == true
        {
            if (DateTimeUtil::getDifference(currentSystemTime, ns.LatestCleanDateTime)
                > ns.CleanInterval * 3600)
            {
                info(strFmt("Every %1 hour %2 will lock during cleanup, last time: %3",
                    ns.CleanInterval, ns.NumberSequence, ns.LatestCleanDateTime));
            }
        }
    }

    Options to consider:
    1. Does this number sequence need to be continuous at all? Non-continuous number sequences are much faster, and do not require clean-up!
    2. Does the clean-up need to run automatically? It can also be run manually from the Number sequences form, for example outside peak hours.
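    To illustrate why continuous sequences are slower than non-continuous ones, here is a minimal Python sketch. It is an analogy only, not the AX implementation: a continuous sequence must hand freed numbers back out to stay gap-free, and that pool of freed numbers is what the clean-up has to process.

```python
class ContinuousNumberSequence:
    """Toy model of a continuous number sequence: numbers that were
    reserved but released unused go into a pool and are reused, so no
    gaps appear. The pool is what auto-cleanup has to scan."""

    def __init__(self):
        self.next_number = 1
        self.free_pool = []  # numbers reserved but released unused

    def get_next(self):
        # Reuse a freed number first to keep the sequence gap-free.
        if self.free_pool:
            return self.free_pool.pop(0)
        n = self.next_number
        self.next_number += 1
        return n

    def release(self, n):
        # A reserved number was never consumed (e.g. aborted transaction).
        self.free_pool.append(n)


seq = ContinuousNumberSequence()
a = seq.get_next()   # 1
b = seq.get_next()   # 2
seq.release(a)       # 1 goes back to the pool
c = seq.get_next()   # 1 again - the gap is filled
```

    A non-continuous sequence can skip both the pool lookup and the clean-up, which is why it is cheaper under high volume.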

    Dynamics AX 2012 R3 CU9 is available


    Today marks the release of yet another cumulative update for Microsoft Dynamics AX 2012 R3. Microsoft Dynamics AX 2012 R3 CU9 is available for download on Lifecycle Services, PartnerSource and CustomerSource.


    For detailed information on the release of Microsoft Dynamics AX 2012 R3 CU9, please refer to the Dynamics AX In-Market Engineering blog.

    For more information about the improvements in WMS/TMS see the SCM team’s blog.


    Using the batch framework to achieve optimal performance


    Recently I learnt how powerful the batch framework in AX is – and discovered how to improve performance of long running operations.

    Most long running operations in Dynamics AX can be scheduled to run in batch. In most cases you can explicitly define a query to select which data to process. It is often simple to create a single query that selects all the data to process and then schedule the operation for batch. Doing it this way starts one batch operation that processes the data piece by piece. It works, and it is simple to maintain – but it is not necessarily fast.

    Instead consider defining multiple queries that each covers a portion of the data to process, and then schedule them all to run in batch at the same time!  Now suddenly you have parallel processing.
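    The divide-and-run-in-parallel idea can be sketched outside AX. This is a Python analogy only, not the AX batch API: a thread pool stands in for the batch server, and the hypothetical `replenish` function stands in for the per-template work.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(items, n):
    """Split the workload into n roughly equal chunks -
    one chunk per batch job / template."""
    return [items[i::n] for i in range(n)]

def replenish(chunk):
    # Stand-in for the per-template replenishment work.
    return len(chunk)

items = list(range(30000))     # e.g. 30,000 item ids
chunks = partition(items, 8)   # 8 "templates"

# Schedule all chunks at the same time - parallel processing.
with ThreadPoolExecutor(max_workers=8) as executor:
    processed = sum(executor.map(replenish, chunks))
```

    The partitioning step is the part you control in AX: each template's query must cover a disjoint slice of the data.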

    Here is a real life example.

    In Dynamics AX 2012 R3 we had a customer with 30,000 items that needed replenishment in their picking warehouse. They used fixed locations for each item, and used Min/Max replenishment.  The replenishment operation in AX is defined using a template. The template consists of lines, each with a query to specify items and location to replenish.

    The original setup we deployed was a single replenishment template covering all 30,000 items. The total execution time was 2hr31m:

    Then we created a new template with a query that covered about half the items, and changed the original template to cover the other half. We scheduled both to start at the same time. Not surprisingly, they completed much faster – almost in half the time. Total execution time was 1hr21m:

    Repeating the pattern, we split the replenishment into 8 templates. Another drop in total execution time was observed. Down to just 40 minutes:

    At this point the system was quite loaded. CPU averaged 80% and SQL constantly had a few tasks waiting – that is a good thing; the hardware is meant to be exercised.


    With a few simple configuration changes the overall execution time was cut by a factor of 4.

    This pattern can be applied in many places – the key caveat to look out for is logical dependencies between the batches created. In the example above, it is important that two batches are not replenishing the same item or the same location. That could lead to one batch waiting for another batch to complete. The implementation will of course be transactionally safe, even if there are dependencies between the batches. But if there are dependencies, the pattern may not yield the same impressive results, and could result in some batches getting aborted.


    Microsoft’s new domicile in Denmark – first impressions


    This week I had the chance to visit the building site in Lyngby where the new domicile for Microsoft Denmark is under construction.

    I’ve been driving past it a few times and been impressed by the building’s modern and bold look – so I was curious to experience a tour on the inside.


    From the outside it appears to be an office building with a number of floors. Once I entered the building I was surprised by the light and the feeling of spaciousness. Inside each “tower” is an open atrium offering a direct view of the sky and allowing a lot of natural light to enter.



    As you enter the building you will obviously notice the big inviting stairway. But also pay attention to all the light entering from the windows. This was a semi-cloudy morning in February in Denmark. The pictures were taken with my Lumia 920 Windows Phone (no flash or extended exposure time), and there were just a few artificial light sources. The building is designed to allow natural light to enter from all sides (including the roof) – and it really makes a world of difference.

    As I’m typing this up I’m looking at my almost flicker-free fluorescent lamps at MDCC – I’m not going to miss them.



    Here is one of the open offices where we will build the next generations of Dynamics AX. Again, notice the windows, the openness and all the light.


    As you can tell from the pictures, there is still a lot of fit and finish pending – yet the visit left me, if at all possible, even more excited about this year at Microsoft in Denmark.


    Technical Conference 2015 – Sessions available for offline watching


    All sessions from this month’s conference in Seattle are now available on Customer Source.    

    Like all previous Technical Conferences, each session was recorded, and can now be enjoyed at your convenience.

    As I’m now working on the SCM Inventory and Warehouse team, allow me to do a bit of shameless promotion. We had a fantastic conference, with a lot of great sessions:
    What's new in warehouse management and manufacturing for Microsoft Dynamics AX 2012 R3 CU8
    Labels in the new Microsoft Dynamics AX 2012 R3 warehouse management
    Packing station and containerization processes in the warehouse management system for Microsoft Dynamics AX 2012 R3
    Planning and execution for the transportation management system in Microsoft Dynamics AX
    Post execution and interaction with transport representatives in Microsoft Dynamics AX
    Using Microsoft Dynamics AX 2012 R3 warehouse management in a manufacturing environment
    Warehouse exception handling in Microsoft Dynamics AX 2012 R3
    Warehouse RF scanners and Microsoft Dynamics AX
    WMSI and the new Microsoft Dynamics AX 2012 R3 warehouse management system
    WMSII to new Microsoft Dynamics AX 2012 R3 warehouse management

    And finally my favorite (a lot of good general insights if you are dealing with performance problems):
    Performance challenges when implementing the new advanced warehousing system in Microsoft Dynamics AX


    SQL–More memory and CPU is not always a win


    All computer programs will run better when adding more memory and CPU cycles – right?  Not necessarily true.

    Assuming everything else is equal, more memory and CPU will be a win; however, all computer systems have finite resources – including memory and CPU. Granting more to one program (or service) takes it away from somewhere else.

    SQL is no exception, in fact it will happily consume all the resources you grant it – at the risk of starving other systems, including the OS.


    You can control how much memory SQL can use through the MaxServerMemory property. Setting it too low means you are throttling SQL – setting it too high means you are throttling the OS. Despair not, help is near: here is a blog post written by Tara Shankar Jana – with a script that will give you the optimal setting.

    CPU – Priority boost

    It may be tempting to give the SQL process a priority boost. But don’t do this! Ever! Doing this will starve any other process (including the OS processes), and it will not make the system perform better – in most cases it will be significantly worse.

    Here is a blog post by Arvind Shyamsundar on the topic: Priority boost details and why its not recommended

    If you want to disable priority boost, you can do it using this SQL script:

    sp_configure 'priority boost', 0
    reconfigure with override

    One more thing…

    Since you are reading this, you probably want to get the best performance out of SQL on the available hardware. Check one more thing: the power plan! You likely bought the server hardware to use it, so make sure to set the power plan to High performance – also on any VM hosts.

    Cindy Gross has written a blog on the topic.


    What has this got to do with Dynamics AX?

    Nothing and everything. The three guidelines above apply to any use of SQL Server – including when SQL is used with Dynamics AX.

    I recently visited an AX customer with performance problems. It turned out that SQL was granted 100% of the memory on the box, it was set to run with priority boost, and the power plan was set to balanced. The first two were due to best intentions, the last due to this being the default setting. This starved the OS of resources, making overall performance of the system unpredictable – some simple queries would take seconds to complete, and blocking was observed too. Getting these settings right fundamentally changed the behavior – it was like night and day.

    Kudos to Tara for educating me on these topics.


    Damgaard Data turns 30 this month



    Version 2 just published a nice article about Damgaard Data – the company behind DanMax, C4, C5, XAL and Axapta.

    You can read it here: In Danish and in English



    Trace Parser: NULL::inner–explained


    If you have analyzed AX traces in the Trace Parser, you most likely came across something like this during your efforts:


    The first time I saw it, I was puzzled – an object named “NULL” with a method named “inner”; I had never heard of that before – what is it? I started asking around; no one seemed to know. Bing didn’t help me either. A search in our source files gave me more hits than I wanted to explore. About to give up, it struck me that the name is not too bad for a method that doesn’t exist on an object and is inside the caller.

    So the answer is simple and straightforward. “NULL::inner” is used for all embedded methods in X++ when shown in the Trace Parser. Navigating to the caller in the Trace Parser also clearly shows that an embedded method exists (with the name “doEscape()”).


    As a side note, I can mention that the Call Stack window in the X++ Debugger doesn’t include embedded methods.


    Garbage Collection and RPC calls in X++


    Dynamics AX is a 3-tier application that evolved from a 2-tier application. Yes, that is right: the first versions of Axapta were solely a client-side application communicating with the database. In version 2.0 the middle tier was introduced. The X++ language got a few new keywords, client and server, and the AX run-time provided the smartest marshaling of objects across tiers on the planet. The marshaling is guaranteed to work in virtually any object graph you can instantiate. You can have client-side classes holding references to instances of server-side classes, which contain references back to other client-side objects, which reference … you get the idea. All you have to do as a developer is to decorate your classes as client, server or called-from.

    You don’t have to worry about any low-level details like how the instances communicate across the wire. The key word in the previous sentence is “have” – stuff will just work, but unless you are very careful you may end up creating an extremely chatty (i.e. lots of RPC calls on the wire) implementation. Recently I’ve seen two cases of well-intended changes that on the surface looked right, but both caused an explosion of RPC calls. Both were implemented by smart guys, and I wanted to point them to an article explaining the problem – and I realized that article didn’t exist. Until now.

    Garbage collection in AX

    The garbage collector (GC) in AX is responsible for releasing memory consumed by object instances no longer in use. In .NET the GC is non-deterministic; it runs when it “feels like” running, typically when the system has CPU capacity and is low on memory. In contrast, the GC in AX is deterministic – it runs every time an object goes out of scope.

    Consider this small example:

    static void GCJob1(Args _args)
    {
        MyServerClass myServerClass;

        //Create instance
        myServerClass = new MyServerClass();

        //Release instance
        myServerClass = null;
    }

    Jobs run on the client tier, so this will create an instance of the MyServerClass and release it again. MyServerClass is a trivial class with RunOn=Server.

    If we enable Client/Server trace under Tools | Options | Development, and run the job, we get:

    Create instance
    Call Server: object:
    Release instance
    Call Server: destruct class

    Notice this: the client-tier reference to the server instance is keeping the instance alive. When the reference goes out of scope, the GC takes over and calls the server to free the server memory.

    Island detection

    The GC is not just looking for instances without references – object graphs are often more complicated. To release memory that is no longer needed, the GC looks for groups of objects without any external references – or in popular lingo: islands. This search is potentially harmful to the performance of your application. The GC must traverse all members of any object that goes out of scope – regardless of their tier.
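    Island detection is essentially reachability analysis on the object graph. Here is a minimal Python sketch of the concept (an illustration only, not the actual AX runtime code): objects are nodes, references are edges, and any group of objects not reachable from a live root is an island that can be freed.

```python
def find_islands(graph, roots):
    """Return the set of objects unreachable from any root.
    graph maps each object to the objects it references."""
    reachable = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj in reachable:
            continue
        reachable.add(obj)
        stack.extend(graph.get(obj, []))
    return set(graph) - reachable

# 'server' and 'client' reference each other, but nothing live
# references them - together they form an island.
graph = {
    "server": ["client"],
    "client": ["server"],
    "live":   ["helper"],
    "helper": [],
}
islands = find_islands(graph, roots=["live"])   # {'server', 'client'}
```

    The expensive part in AX is that the traversal follows references regardless of tier – an island spanning client and server costs RPC round-trips just to detect.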

    Let’s build out the example by introducing a client side class that is referenced by the server instance.

    static void GCJob2(Args _args)
    {
        MyServerClass myServerClass;
        MyClientClass myClientClass;

        //Create client instance
        myClientClass = new MyClientClass();

        //Create server instance
        myServerClass = new MyServerClass();

        //Make server instance reference client instance
        myServerClass.parmMyClientClass(myClientClass);

        //Release instances
        myServerClass = null;
        myClientClass = null;
    }

    Now, when myServerClass goes out of scope, the GC will start analyzing the object graph. It will discover an island consisting of our two objects – despite them being on different tiers – and it will release the memory consumed.

    Pretty smart – but not for free!


    This is the resulting RPC traffic from the above job:

    Create client instance
    Create server instance
    Call Server: object:
    Make server instance reference client instance
    Call Server: object: MyServerClass.parmMyClientClass()
    Call Client: set class loop dependencies
    Call Client: test class loop dependencies
    Release instances
    Call Server: destruct class
    Call Client: destruct class

    Now suddenly we jumped from 2 RPC calls to 6! What happened? We met the GC! 

    • The first 2 calls are expected, they are a direct consequence of the methods invoked on the server tier.
    • The next 2 calls are set/test class loop dependencies. Both are consequences of the parm method. The set call is a result of the assignment inside the parm method; it tells the client that the server now holds a reference to the client-side object. The test call is the GC looking for islands, but not finding any: when the parameter (containing the client-side object) goes out of scope at the end of the parm method, the GC looks for islands. As the server-side class holds a client-side member, the traversal of the object graph requires a trip to the client.
    • The last 2 calls are cleaning up. Notice that destruction is a chain reaction. First the server is called to destruct the server side object, then the server calls back to the client to destruct the client-side object.


    A real life example

    Consider a server-side class that is looping over some data, and for each, say, row, it spins up another class instance on the server to do some calculations. This is all server-side, and perfect. So let’s add a client-side member to the mix.

    class MyServerClass
    {
        MyClientClass myClientClass;

        public void run()
        {
            int i;
            MyServerHelper myServerHelper;

            //Create client-side member
            myClientClass = new MyClientClass();

            //Loop over some data
            for (i = 1; i <= 10; i++)
            {
                myServerHelper = new MyServerHelper();
            }
        }
    }

    The alarming result is 10 client calls – one per iteration of the loop – despite the loop containing only server-side logic.

    Create client-side member
    Call Client: object:
    Call Client: set class loop dependencies
    Loop over some data
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies
    Call Client: set class loop dependencies

    The assignment inside the parm method forces the AX runtime to traverse the object graph, and the object graph contains a client side instance.

    The alert reader will have recognized this as the RunBase pattern. The client-side class is the operation progress bar. In Dynamics AX 2009 the operation progress bar framework regressed, as a client-side reference was introduced – exposing thousands of consumers to this specific problem. This was fixed in Dynamics AX 2012 R3.

    Symmetrical implementation

    The implementation of the GC and the supporting runtime is symmetrical on each tier – you can recognize them in action when you come across these calls in the Trace Parser. Remember: they are always a consequence of the exercised X++ logic, i.e. something that can be addressed if required.

    Call Client: set class loop dependencies
    Call Client: test class loop dependencies
    Call Client: destruct class

    Call Server: set class loop dependencies
    Call Server: test class loop dependencies
    Call Server: destruct class


    Wrapping up

    There is only one way of understanding the impact the GC has on your implementation: Measure it!   The best tool for measurement is the Trace Parser. Alternatively, the Client/Server trace in Tools | Options | Development can be used – it will show all the RPC calls in the Message Window.

    The rule-of-thumb as a developer is to avoid class members that are living on the opposite tier. This will ensure your object graphs are single tiered, and it will make the life of the runtime and the GC much simpler, and your applications equally faster.

    There are situations where cross tier members seem unavoidable. However, there are techniques to avoid them, and achieve the same functional results. Take a look in the SysOperationProgress class in AX4 or AX 2012 R3 for an example.


    Code samples are attached. They are provided AS-IS and confer no rights.


    10,000 feet overview of Dynamics AX Inventory Cost Management


    The past few years I’ve ventured into the SCM code base of AX, and I must admit I found the physical operations easier to comprehend than their financial counterparts. Perhaps it is just the terminology, or perhaps it’s because I’m “just” an engineer. At any rate I discovered I’m not the only one struggling – so here is a high-level overview of the Cost Management domain and how it is handled in Dynamics AX.

    A text-book condensed into a paragraph

    A company is profitable if it can sell its goods for more than it paid for them. This difference is called the Gross profit; it can be found on the Income statement, and is the difference between Sales and Cost of goods sold (COGS). In other words, COGS directly influences the company’s profitability and the amount of tax to pay. The Generally accepted accounting principles (GAAP) describe several Costing principles, i.e. ways COGS can be measured. As the costing principle(s) used can have a significant impact on the company’s financial performance, there must be full disclosure of which costing principle(s) are used. For these reasons companies are inclined to stay with their current costing principle. The history of accounting for costs predates computer science and ERP systems. Several of the costing principles in use today reflect this fact – they are designed to be applied periodically (in the past by a bookkeeper with a pen in his hand). The inputs to the costing calculations are called cost elements. They include Direct cost (purchase price of finished goods or raw material) and Indirect cost (such as labor and equipment).
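    As a tiny worked example (toy numbers, not from any specific ledger), the same sale produces a different Gross profit – and hence different taxable income – depending on which costing principle measures COGS:

```python
sales = 20.0

# The same physical sale, measured under two different costing principles.
cogs_fifo = 10.0          # e.g. FIFO assigns the oldest purchase price
cogs_weighted_avg = 13.0  # e.g. weighted average over the period

gross_profit_fifo = sales - cogs_fifo          # 10.0
gross_profit_avg = sales - cogs_weighted_avg   # 7.0
```

    This is why the choice of costing principle requires full disclosure: nothing physical changed, yet the reported profit differs.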

    Dynamics AX’s Inventory Models

    In Dynamics AX the Costing principles are called Inventory models. An Inventory model prescribes how Direct cost is determined. Indirect cost is always estimated. These estimations are provided by the Costing sheet.

    In the following, consider this sequence of events:

    1. Monday: Purchase of 1 pcs @ $10
    2. Tuesday: Purchase of 1 pcs @ $12
    3. Wednesday: Sale of 1 pcs (sales price is irrelevant)
    4. Thursday: Purchase of 2 pcs @ $15 each
    5. Friday: Inventory Close


    AX supports these Inventory models:

    • Normal costing accounts for the costs as they occur; however, various circumstances can later influence the posting, such as back-dating. An example of back-dating could be that the Thursday purchase is registered as if it occurred on Tuesday. At the end of the period the Inventory Close process will take care of any differences.

      AX supports 5 variants:
      • FIFO – First in, first out; in the example the cost is $10
      • LIFO – Last in, first out, over the entire period; in the example the cost is $15
      • LIFO Date – Last in, first out (until the sale); in the example the cost is $12
      • Weighted average – over the entire period; in the example the cost is $13. ((10+12+2*15)/4)
      • Weighted average date – in the period until the sale; in the example the cost is $11. ((10+12)/2)

    • Standard cost is based on estimates for both Direct and Indirect costs. This means it can be calculated during planning and thus is a powerful operational management tool. The BOM calculation is used to estimate the costs. AX supports several versions of the estimates – they are called Costing versions. This makes Standard cost a perpetual model, where the cost can always be determined at the time of the sale. In the example above the cost is whatever estimate is provided by the active costing version, e.g. $9.

    • Moving average is similar to Weighted average date. However, it is a perpetual model, where the cost is always determined at the time of the sale. This means that back-dating will not impact the posted costs, and Inventory Close is not required. In the example, the cost is $11.
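    The Normal costing variants above can be checked with a few lines of arithmetic. This is a sketch of the cost measurements for the event sequence above – plain arithmetic only, not the AX settlement logic:

```python
# Events: Mon buy 1 @ $10, Tue buy 1 @ $12, Wed sell 1, Thu buy 2 @ $15.
purchases = [("Mon", 1, 10.0), ("Tue", 1, 12.0), ("Thu", 2, 15.0)]
before_sale = purchases[:2]  # receipts dated before the Wednesday sale

fifo = purchases[0][2]           # first in, first out            -> 10.0
lifo = purchases[-1][2]          # last in, over the whole period -> 15.0
lifo_date = before_sale[-1][2]   # last in before the sale        -> 12.0

total_qty = sum(qty for _, qty, _ in purchases)
weighted_avg = sum(qty * cost for _, qty, cost in purchases) / total_qty

qty_before = sum(qty for _, qty, _ in before_sale)
weighted_avg_date = sum(qty * cost for _, qty, cost in before_sale) / qty_before
```

    Running this reproduces the five costs listed above: $10, $15, $12, $13 and $11.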


    AX 2009 Costing models:

    Moving average:

    More background:

