November, 2010

  • Microsoft Dynamics NAV Team Blog

    NAV 2009 Tips and Tricks: Create Notifications from Task Pages

    • 7 Comments

    You can create notifications from task pages such as customer cards or sales orders. You can use notifications as reminders, or as a way to send information to other NAV users in your company.

A notification is displayed on the recipient's Role Center. By clicking the notification, the recipient opens the associated task page.

    1. To create a notification, open the task page from which you want to send a notification. For example, open a customer card, sales order, or other task page.

    2. In the FactBox area, scroll down to the Notes part.

    3. In the Notes part, click to create a new note.

    4. In the Enter a new note here text box, type a note.

    5. Select a recipient.

    6. Select Notify.

    7. Click Save.

    The notification is saved on the task page.

     

The notification is also relayed to the recipient and is displayed on their Role Center under My Notifications.

    For more information about usability and the RoleTailored client, see the blog post Useful, Usable, and Desirable.

  • Microsoft Dynamics NAV Team Blog

    Performance Analyzer 1.0 for Microsoft Dynamics

    • 4 Comments

The Microsoft Premier - Dynamics team has created and compiled a set of scripts and tools to help analyze and troubleshoot SQL Server performance issues on the Dynamics products. These are the same tools that we use on a daily basis for collecting SQL performance data and troubleshooting SQL performance issues on all our Dynamics products*, and we want to make them available to our partners and customers. These tools rely heavily on the SQL Server DMVs, so they are only available for SQL Server 2005, SQL Server 2008, and SQL Server 2008 R2.

This tool can aid in the troubleshooting of blocking issues, index utilization, long-running queries, and SQL configuration issues. Instructions for installing and using the tool are included in the download package. One nice feature of this tool is that it creates a database called DynamicsPerf and imports all the collected performance data into it. The DynamicsPerf database can be backed up and restored on any SQL Server (2005 or 2008) for later analysis, making it "portable." The collection of performance data can also be automated via a SQL job, for which the scripts are provided.

    Performance Analyzer 1.0 for Microsoft Dynamics can be downloaded via the following MSDN link. This tool is updated on a fairly consistent basis with bug fixes and new functionality so please check often for new versions.

    http://code.msdn.microsoft.com/DynamicsPerf

    This tool and associated scripts are released "AS IS" and are not supported.

    *There is added functionality for Dynamics AX

    -Michael De Voe

  • Microsoft Dynamics NAV Team Blog

    Integration to Payment Service in NAV 2009 R2

    • 4 Comments

    The Payment Service available from Online Services for Microsoft Dynamics ERP is an example of the growing availability of online services that users of ERP systems can benefit from connecting to. Adding functionality to the application through connecting to a service is new territory for us in the NAV development team, and we have learned a lot through this development project. We are looking forward to sharing the benefits of being able to expand the service, while we keep our focus on delivering the new NAV product.

The Payment Service is hosted by Microsoft, and the number of available Payment Providers is growing. Today there are multiple Payment Providers, like First Data, PayPal, and CyberSource, supporting the US and Canadian markets. The plan is to grow the number of payment providers so that the rest of the world can be supported as well. We are shipping the integration for all NAV supported countries - even though the payment providers aren't available yet - so the code is ready when the service becomes available.

The integration to the Payment Service that is included in NAV 2009 R2 allows users of Microsoft Dynamics NAV to accept credit and debit card payments from Sales Orders, Invoices, and the Cash Receipt Journal. The solution allows for both an authorization process and an automatic capture of the required amount during posting, as well as more flexible use from the Cash Receipt Journal.

    Adding the integration to the online services has been done with a number of goals in mind:

    • Keeping it simple: Adding the integration to the Payment Service allows the user of NAV to work within NAV when accepting credit cards as payments. There is no need for a third-party add-on outside the normal environment. The payment information is built into the existing order entry process, using the Sales Orders and the Invoices as a starting point. This means a simple payment flow that doesn't require a huge effort to learn and set up. This supports the vision behind adding services to existing installations: it must add to the existing functionality without making it more complex.
    • Power of Choice: Secondly, the online payment service allows users to choose the payment provider that best suits their needs. Transaction costs can differ per payment provider, and users are encouraged to investigate which one fits their scenario best. Depending on the payment provider, there is support for multiple credit cards and currencies. Out of the box there is built-in setup for Visa, MasterCard, American Express, and Discover.
    • Secure integration: Third, there has been a focus on ensuring that the information required to handle credit card transactions is kept as secure as possible and that the design adheres to the standards of the market. Here, there are two aspects to consider: the data that is stored in the ERP system and what is sent to the payment providers through the service.
    • The ERP data includes encrypted storage of the customer credit card number, as well as ensuring that users don't have access to the numbers.
    • The payment service is also certified by following the guidance of the Payment Card Industry (PCI) Security Standards Council.

    Scenarios Covered by the Integration

    The areas that are relevant when describing the integration to the Payment Service can be described by the following scenarios:

    1. Authorization of the amount from the Sales Order or Invoice against the customer's credit card
    2. Capture of the actual amount and thereby also creating the payment in the system
    3. Voiding the authorized amount
    4. Refunding against an existing capture

To describe the scenarios it is useful to think about the personas using the functionality; in this case we work with Susan, the Order Processor, and Arnie, the Accounts Receivable Administrator.

As a part of Susan's work, she receives and processes the incoming orders from the sales representatives. In some cases she will talk to the customer to validate the orders and ensure that items are available and that the price is correct according to the agreement. In some cases the customer requests to pay using a credit card, instead of having to handle the invoice later. Susan has to ensure that the information required for using a credit card is available; if not, she will get the information from the customer.

If Susan needs to be certain that the customer can pay the agreed amount, she can go ahead and authorize the amount against the provided credit card information. If the result is successful, the sales order can be shipped. When the sales order (or part of it) is posted, the actual capture of the amount on the sales order is automatically processed. When the capture is successful, the payment is automatically registered and the money will be received shortly.

On the sales order, it looks like the image below - there are two new fields for the credit cards - as well as a requirement to use a specific Payment Method (described below in the Getting Started section).

     

The scenario above is the simplest process that is supported by the new payment service integration. The following scenarios are also covered in the implementation:

    1. Partial posting of the sales order will only capture the amount that is posted. The rest can be captured later.
    2. It is possible from the Cash Receipt Journal to accept multiple credit cards against the same invoice. This is done by adding multiple lines in the journal - one per credit card.
    3. It is possible to use a credit card transaction to cover more than one invoice. Again, this is only possible from the Cash Receipt Journal.
    4. It is possible to void an existing authorization in case the amount is not needed. This is implemented only as a manual step.
    5. It is possible to refund an existing transaction as well as part of a transaction. This is done through the Credit Memo.

All of the above transactions and connections to the payment service can be seen on the specific customer as well as on the specific documents. In all these places, a Transaction Log shows the status of the current transactions and whether the connections have been successful.

    Getting Started

    Enabling the payment integration does require a couple of steps both inside and outside NAV:

    • First of all, it is required to sign up for the Payment Service. Details can be found here: Microsoft Dynamics Online Payments Introduction. The sign-up includes validating and choosing which Payment Provider to use. There is a difference in the cost and in the support, so some investigation is recommended. After signing up, a Live ID and a password are provided, which are required when setting up the connection.
    • Within Microsoft Dynamics NAV 2009 R2, there are a couple of steps that need to be completed:
      • First of all, the connection needs to be set up - and this is where the Live ID and the password are required. Please note that the Microsoft Dynamics ERP Payment Service Connection Setup is only available in the Classic client for security reasons.

      • For the connection to carry the correct currency, the currency field on the General Ledger Setup page must be filled in. Verify the correct values with the Help documentation. The CRCARD payment method below is an example.
      • Finally, a Payment Method must be created that fills in the Payment Processor field and uses a bank account with the correct currency, as signed up for.


    -Rikke Lassen

     

  • Microsoft Dynamics NAV Team Blog

    How to use an external .NET assembly in report layout for RTC reports

    • 3 Comments

    This post shows how to include an external .NET assembly in your report layout when designing reports for RTC. The focus here is NOT how to build a .NET assembly, but how you can include such an assembly in your report design. But still, we will begin by creating our own .NET assembly for use in the layout.

     

    Creating a .NET assembly

    To keep it simple, let's make a .NET assembly to just add up two numbers:

    1. Start Visual Studio.
    2. Create a new C# project of type Class Library, and name it MyReportAssembly.
    3. Change the default Class name from Class1 to AddNumbers.
    4. Create a public function called AddFunction, which takes two integers as parameters, and returns the sum of those two.

    The whole project should look like this now:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace MyReportAssembly
    {
        public class AddNumbers
        {
            public int AddFunction(int i, int j) {
                return (i + j);
            }
        }
    }

    That's all the functionality we need - as mentioned we want to keep it simple!

    But we need to make a few more changes to the project before we can use it:

     

    Add a strong named key

    In Project Properties, on the Signing tab, select "Sign the assembly", select a New key (give it any name), and untick password protection.

     

    Add AllowPartiallyTrustedCallers property

To allow this assembly to be called from the report layout, you must set this attribute in AssemblyInfo.cs. So, in AssemblyInfo.cs, add these lines:

    using System.Security;

    [assembly: AllowPartiallyTrustedCallers]

    Full details of why you need those two lines here: "Asserting Permissions in Custom Assemblies"

     

    Compile to .NET 3.5

When I built this project using Visual Studio 2010, I was not able to install the assembly, and when trying to include it in my report layout I got this error: "MyReportAssembly.dll does not contain an assembly.". So if you will be using the installation instructions below and you are using Visual Studio 2010, change "Target Framework" to ".NET Framework 3.5" under Project Properties on the Application tab. This is the default target framework in Visual Studio 2008; Visual Studio 2010 defaults to version 4.0. I'm sure there are better ways to install the assembly and still build it for .NET 4, but that's outside the current scope. Also, if the project later complains about a reference to Microsoft.CSharp, just remove that reference from the project under References.

     

    When this is done, build your project to create MyReportAssembly.dll

     

    Installing your new .NET assembly

Again, the focus here is not on building and installing .NET assemblies, and I am no expert in that, so this is probably NOT the recommended way to install a .NET assembly, but it works for the purpose of being able to see it in the report layout:

Start an elevated Visual Studio command prompt, and go to the folder where your new MyReportAssembly.dll is (C:\Users\[UserName]\Documents\visual studio 2010\Projects\MyReportAssembly\MyReportAssembly\bin\Debug\). Then run this command:


    gacutil /i MyReportAssembly.dll

    It can be uninstalled again with this command:


    gacutil /uf MyReportAssembly

    After installing it, open this folder in Windows Explorer:


    c:\Windows\Assembly


and check that MyReportAssembly is there. If not, then check whether the section above about compiling to .NET 3.5 applies to you.

     

Finally - How to use an external .NET assembly in report layout
     

So now we finally come to the point of this post: how do you use your new assembly in the report layout?

    1. Design any report, and go to the report Layout.
    2. Create a reference to your assembly: Report -> Report Properties -> References. Browse for your new .dll and select it.
    3. On the Code tab, create a function which will create an instance of your assembly and call the function:

    Public Function AddNumbers(Num1 As Integer, Num2 As Integer) As Integer
        Dim MyAssembly As MyReportAssembly.AddNumbers
        MyAssembly = New MyReportAssembly.AddNumbers()
        Return MyAssembly.AddFunction(Num1, Num2)
    End Function

Then call this function by adding this expression to a text box:

    =Code.AddNumbers(1,2)

And, finally, if you run the report like this, you will get the error "xyz, which is not a trusted assembly.". So back in the Classic report designer, in report properties, you just have to set the property EnableExternalAssemblies = Yes, and the report should run.

     

That was a lot of work for just adding up two numbers, but hopefully it shows what steps are needed to open up your report layout to endless opportunities. Note: I have no idea whether this will work with visual assemblies or anything that contains any UI at all. If you have any experience with this, feel free to add it at the end of this post.

     

As always, and especially this time since I'm in no way a C# developer:

    These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

     

    Additional Information

    If you plan to look further into this, then here are some recommended links:

    "Using Custom Assemblies with Reports"

    "Deploying a Custom Assembly"

    If your assembly calls other services, it is likely you need to consider passing on the user's credentials. For more information on that, here is a good place to begin:

    "Asserting Permissions in Custom Assemblies"

     

     

    Lars Lohndorf-Larsen

    Microsoft Dynamics UK

    Microsoft Customer Service and Support (CSS) EMEA

     

  • Microsoft Dynamics NAV Team Blog

    Outlook Synchronization and preventing duplication of contacts, tasks, etc.

    • 2 Comments

When a new contact is created in Dynamics NAV, you may want to synchronize that contact to Outlook. Sometimes, during the next synchronization attempt, this specific contact seems to somehow be duplicated. This blog post describes how this can happen, what you can do to prevent it, and how the synchronization works in detail. A future post will describe what to do when this situation has occurred.

    If you create a new contact in Dynamics NAV and synchronize that contact to Outlook via a normal synchronization, then a new contact will be created in the dedicated Outlook Synchronization folder in Outlook. Dynamics NAV must know that this process finished successfully, so what happens next is that a Unique Identifier is sent back from Outlook to Dynamics NAV. This Unique Identifier is stored in table 5302.

If the contact is created in Outlook but the Unique Identifier is not sent back to Dynamics NAV, a duplicate can be created during the next synchronization attempt.

    Most of the time, this happens when the user does not know the synchronization is running in the background.

For example:

    • switching off the “Show synchronization progress” option in the Outlook Add-In that shows the synchronization is running
    • switching off the “Show synchronization summary” option after a successful synchronization
    • enabling the “Schedule automatic synchronization every” option

NOTE: using the “Schedule automatic synchronization every” option is generally a bad idea, because with a scheduled synchronization in the current Outlook Synchronization solution, the progress bar and summary window are not shown to the Outlook Synchronization user in Outlook!


With Office 2010 it is very easy to close Outlook - even when the synchronization is running! If the Unique Identifier is not sent back to Dynamics NAV, the next attempt to synchronize new items to Outlook (and the other way around) will duplicate the synchronized data in Dynamics NAV or Outlook! The same scenario applies if the Outlook Synchronization user uses a laptop and closes the lid (not knowing the synchronization is running). Of course, this scenario could also happen if Outlook suddenly crashes, e.g. during a power failure.

There are many reasons why duplication can occur, but in general an Outlook Synchronization user should know when the Outlook Add-in is synchronizing in the background. We therefore do not recommend disabling the “Show synchronization progress” and “Show synchronization summary” options. Keeping these options enabled prevents the most common cause of duplication.

    Regards,

    Marco Mels
    CSS EMEA

    This posting is provided "AS IS" with no warranties, and confers no rights

  • Microsoft Dynamics NAV Team Blog

    Unforgettable: NAV Supply Chain Management

    • 2 Comments

Do you like that song? "Unforgettable", Nat King Cole, 1954. And yes, you are right: this has nothing to do with NAV. But I was asked to start posting on this blog and wanted to explain what my posts will be about.

    "Unforgettable" is what SCM (Supply Chain Management) is about when you come to the ERP arena. Most of the times, we go into the technical details of the NAV implementation and we forget what is the ultimate goal: a tool to boost the productivity, a tool to expand or extend the limits on how companies can reach or provide services.  In other words, do not forget that NAV needs to be optimal for your company productivity.

Just to ensure all readers of this blog have a common understanding: following the APICS definition, SCM covers all the tools that increase customer satisfaction while minimizing inventory and reducing costs. From a Dynamics NAV perspective, the granules that relate to SCM are those that map to the following business processes:
    - Sales & Purchase Profit (sales and purchase order management)
    - Inventory Optimization (planning, inventory management)
    - Warehouse Management
    - Cost Reduction (Cost method and application)
    - Service Management

Now that I have introduced myself, my first real post will follow ...

  • Microsoft Dynamics NAV Team Blog

    Grouping in RoleTailored Client Reports

    • 1 Comment

    When moving to the RoleTailored client some people have experienced difficulties with grouping in RDLC reports in cases where the requirements are just a bit more complex than a basic grouping functionality.

I must admit that at first glance it does not seem simple, but after a little research it does not look dramatically complex either. That motivated me to write this blog post and share my findings with all of you NAV report developers.

    So let's take a look at the task we are trying to accomplish.

    I have a master table, which contains a list of sales people - SalesPerson. The sales people sell software partially as on-premises software and partially as a subscription. There are two tables, which contain data for these two types of sale: OnPremisesSale and SubscriptionSale.

    The example is artificial and is meant only to show different tricks on how to do grouping. The picture below shows the data for this report:

For each sales person I need to output both the on-premises software sales and the subscription sales, and show the total sales - something that looks like the following:

     

Now that we have all the required information, let's start solving the task.

    1. First, I create an NAV report. Add all necessary data items, establish data item links for the proper joining of the data, and place all needed fields in the section designer in order to get them into the RDLC report.
    See the picture below.

    2. Next, I go to the RDLC designer in Visual Studio. First I pick a list control, put the SalesPerson_Name field in the list, and set the grouping based on the SalesPerson_SalesPersonId field.
    3. After that, I place a row with column captions on top of the list.

    Design in Visual Studio as shown below.

    4. Now I need to place two tables inside the list - one for the on-premises software and one for the subscriptions.
    A list can display detail rows or a single grouping level, but not both. We can work around this limitation by adding a nested data region. Here I place a rectangle inside the list and put two tables inside this rectangle, one for On-Premises and one for Subscriptions. In each table, I add header text and add the CustomerName and Amount fields.

    5.  I also add two text boxes for the sum of amount - one inside the rectangle to show total sales for the sales person and one outside to show the overall amount. Both contain the same formula: =SUM(Fields!SubscriptionSale_Amount.Value) + SUM(Fields!OnPremisesSale_Amount.Value)

     

    The picture below shows the result of this design:

      

    6. It looks more or less correct, but there are strange, uneven empty spaces between rows. In order to detect the root cause of this problem, let's add visible borders to our tables. Change the BorderStyle property to Solid for one of the tables and to Dashed for the other.

    So the result will look like this:

     

    7. This result indicates that our report has two problems:

    • Tables do not filter out the empty rows. The result set contains joined data for both on-premises sales and subscriptions, so each data row has an empty part. The Data Zoom tool (Ctrl+Alt+F1) is your best friend when you need to learn what your data looks like.
    • Empty tables should be hidden from the output.

    So I will make two fixes in order to address these two bugs:

    • In the table's Property window, on the Filter tab set a filter: Fields!OnPremisesSale_CustomerName.Value > '' for one table and Fields!SubscriptionSale_CustomerName.Value > '' for another. (Note: In the Value field, you must enter two single quotes.)
    • For each table, set the Visibility expression to =IIF(CountRows("table2") > 0, False, True). Replace "table2" with the actual table name. The table names are different, so the visibility expressions will also be different. Please avoid a copy/paste error here, and do not forget to wrap the name in quotes.

In addition, I will make some minor changes to improve the report's readability: change fonts, font size, and font style, and add separator lines.

    All these modifications will produce the following output:

     

That is exactly what I wanted to achieve at the beginning.

    I also have some tips, which might be helpful in your future reporting endeavors:

    1. Use lists. In many cases it is more convenient than creating a complex grouping inside a table. You can nest a list or any other data region inside the list.
    2. Do not forget that a list can display detail rows or a single grouping level, but not both. It can be worked around by adding a nested data region including another list.
    3. If there are issues with the layout, make the invisible elements of the design temporarily visible: set a visible style for table borders or change the color of different elements of the layout.
    4. Build inside Visual Studio. That can catch layout errors and reduce development time.

     - Yuri Belenky

  • Microsoft Dynamics NAV Team Blog

    NAV “core” Planning: CU 99000854 ( IV ): Planning states

    • 1 Comment

Not sure if you are familiar with this. But I believe it is worth covering …

Have you ever noticed that NAV planning goes through different steps, or phases, of planning? StartOver, MatchDates, MatchQty, CreateSupply, ReduceSupply, CloseDemand, CloseSupply, CloseLoop. All these steps are performed in the PlanItem procedure in CU 99000854. The way planning goes through each of these steps depends on the reordering policy. But basically:

The first state is StartOver, which determines whether there is any demand to plan for. If so, it tries to determine whether there is any supply. If there is, the next state is MatchDates. If no supply exists, the next state is CreateSupply. If there is no other demand to plan for, the next state is ReduceSupply (if any supply exists) or CloseLoop (to finalize the loop through all states).

MatchDates state. This is about matching demand and supply dates to make sure that both can be planned together (i.e., the supply can be used to meet the demand). If the demand cannot be met, the next state is CreateSupply (another supply needs to be created). If it can, the existing untracked quantity in the supply needs to be reduced, and the state moves to ReduceSupply.

MatchQty state. If the supply quantity is not enough, the supply needs to be closed (no more available) through the CloseSupply state. If this is a supply whose quantity can be increased, then the CloseDemand state can be next.

CreateSupply state. This is the step where supply is created, depending on the reordering policy.

ReduceSupply state. Here, the untracked (unnecessary) quantity is reduced from the supply. The next state is to close the supply.

CloseDemand state. Here, the demand is closed, which means the demand has been planned for. The next state is StartOver, to start planning another demand.

CloseSupply state. This is similar to the CloseDemand state, but from the supply perspective. If the supply has been entirely planned, another supply needs to be considered and the StartOver state is next.

CloseLoop state. This is where planning finalizes. Before doing that, it determines whether the reorder point (ROP) has been crossed. If so, the item is planned for again by going through the StartOver state.

Hopefully someone finds the above info useful. Think of it as the sequence of steps that must complete before the plan can be considered finalized. This sequence of steps runs in the PlanItem procedure through a WHILE loop until the plan is done.
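
    To make the flow concrete, below is a minimal sketch of such a state loop. This is illustrative C/AL only, NOT the shipped CU 99000854 code: the variable names, the helper functions, and the simplified transitions (MatchQty is omitted) are all my own.

    // Illustrative sketch of a PlanItem-style state loop; not the actual CU 99000854 code.
    // PlanningState is an Option: StartOver,MatchDates,CreateSupply,ReduceSupply,
    // CloseDemand,CloseSupply,CloseLoop,Done. All helper functions are hypothetical.
    PlanningState := PlanningState::StartOver;
    WHILE PlanningState <> PlanningState::Done DO
      CASE PlanningState OF
        PlanningState::StartOver:
          IF NOT FindNextDemand THEN
            PlanningState := PlanningState::CloseLoop // no more demand: finalize
          ELSE
            IF FindSupply THEN
              PlanningState := PlanningState::MatchDates
            ELSE
              PlanningState := PlanningState::CreateSupply;
        PlanningState::MatchDates:
          IF SupplyMeetsDemandDate THEN
            PlanningState := PlanningState::ReduceSupply // reuse the existing supply
          ELSE
            PlanningState := PlanningState::CreateSupply;
        PlanningState::CreateSupply:
          BEGIN
            CreateSupplyPerReorderingPolicy; // suggest a new planned supply
            PlanningState := PlanningState::CloseDemand;
          END;
        PlanningState::ReduceSupply:
          BEGIN
            RemoveUntrackedQuantity; // cut the unneeded quantity from the supply
            PlanningState := PlanningState::CloseSupply;
          END;
        PlanningState::CloseDemand:
          PlanningState := PlanningState::StartOver; // demand fully planned
        PlanningState::CloseSupply:
          PlanningState := PlanningState::StartOver; // supply exhausted, pick the next
        PlanningState::CloseLoop:
          IF ReorderPointCrossed THEN
            PlanningState := PlanningState::StartOver // plan for the ROP breach
          ELSE
            PlanningState := PlanningState::Done;
      END;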

  • Microsoft Dynamics NAV Team Blog

    What happens when both Inbound and Outbound Production bin codes are the same …

    • 1 Comment
    This is an unexpected setting for NAV. Think about this …
    - If this is the "Inbound Production Bin Code", NAV understands that stock in this bin is not available, since it is pending consumption in manufacturing
    - If this is the "Outbound Production Bin Code", NAV understands the stock is available, since it is whatever has already been produced and is ready for sales, transfers, or another production stage …
     
    So, depending on which production bin it is, NAV determines whether the stock is available or not. And … what happens if we set both the inbound and outbound production bins to the same bin? Problems … yep, problems. NAV removes the inventory from stock availability when the bin is the inbound bin, so it will not be available. For the curious, this is done in CU 7312 - SetOutboundFilter, where NAV determines whether the stock can be used or not.
     
    As mentioned in a previous post here, if users need stock from the "Inbound …" bin (i.e., stock that was placed there to be consumed but, due to a sales urgency, needs to be picked), that stock needs to be moved (warehouse movement) to another bin that can be picked from.
     
    FYI "Open Shop Floor Bin Code" and "Adjustment Bin Code" have same treatment as the "Inbound Production Bin Code". Thus, stock there is not available for a pick.
  • Microsoft Dynamics NAV Team Blog

    NAV “core” Planning: CU 99000854 ( I )

    • 1 Comment
    Why focus on Codeunit 99000854? If you are interested in the logic behind NAV planning, you should look into this codeunit.
     
    This codeunit is the link between how NAV planning logic is written (= "NAV Supply Planning White Paper") and how this logic is executed (=functionality around Planning and Requisition Worksheets).
     
    Based on this, I will be posting a series of entries about how CU 99000854 performs. Furthermore, this series of entries will:
    - provide info about which demands are taken into account during a planning run
    - explain which supplies are considered in NAV
    - identify the primary code related to how NAV treats demands and supplies (e.g., will any of these be allowed to reschedule?)
    - explain why priorities are needed when netting demands and supplies
    - explain what forecast and blanket order consumption is
    - describe the planning flexibility flag in NAV
     
    To start with, we will look at the demand types from the NAV planning engine's perspective.
  • Microsoft Dynamics NAV Team Blog

    NAV “core” Planning: CU 99000854 ( II ): Demand types

    • 0 Comments
    First, let's review the foundation here … let's review what the "Supply Planning White Paper" mentions:
    "
    DEMAND AND SUPPLY:
    Demand is the common term used for any kind of gross demand, such as sales order and component need from a production order. In addition, the program allows more technical types of demand, such as negative inventory and purchase returns
    Planning parameters and inventory levels are other types of demand …
    "
     
    Checking CU 99000854, what is a demand for the NAV planning engine?
    a. Sales Orders
    b. Service Orders
    c. Planned Production consumption
    d. Transfer orders
     
    The actual code that finds the data considered as demand is in the DemandToInvProfile procedure in CU 99000854.
     
    As an example …
    a. Sales orders are considered as demand by using this query:
    SalesLine.SETCURRENTKEY(Type,"No.","Variant Code","Drop Shipment","Location Code","Document Type","Shipment Date");
    SalesLine.SETFILTER("Document Type",'%1|%2',SalesLine."Document Type"::Order,SalesLine."Document Type"::"Return Order");
    SalesLine.SETRANGE(Type,SalesLine.Type::Item);
    SalesLine.SETRANGE("No.",Item."No.");
    Item.COPYFILTER("Location Filter",SalesLine."Location Code");
    Item.COPYFILTER("Variant Filter",SalesLine."Variant Code");
    SalesLine.SETFILTER("Outstanding Qty. (Base)",'<>0');
    SalesLine.SETFILTER("Shipment Date",'>%1&<=%2',0D,ToDate);
     
    b. service orders …

    ServiceLine.SETCURRENTKEY(Type,"No.","Variant Code","Location Code","Posting Date");
    ServiceLine.SETRANGE("Document Type",ServiceLine."Document Type"::Order);
    ServiceLine.SETRANGE(Type,ServiceLine.Type::Item);
    ServiceLine.SETRANGE("No.",Item."No.");
    Item.COPYFILTER("Location Filter",ServiceLine."Location Code");
    Item.COPYFILTER("Variant Filter",ServiceLine."Variant Code");
    ServiceLine.SETFILTER("Outstanding Qty. (Base)",’<>0′);
    ServiceLine.SETFILTER("Posting Date",’>%1&<=%2′,0D,ToDate);
     
    c. production order consumption …
    ReqLine.SETCURRENTKEY("Ref. Order Type","Ref. Order Status","Ref. Order No.","Ref. Line No.");
    ReqLine.SETRANGE("Ref. Order Type",ReqLine."Ref. Order Type"::"Prod. Order");
    ProdOrderComp.SETCURRENTKEY("Item No.","Variant Code","Location Code",Status,"Due Date");
    ProdOrderComp.SETRANGE("Item No.",Item."No.");
    Item.COPYFILTER("Location Filter",ProdOrderComp."Location Code");
    Item.COPYFILTER("Variant Filter",ProdOrderComp."Variant Code");
    ProdOrderComp.SETRANGE(Status,ProdOrderComp.Status::Planned,ProdOrderComp.Status::Released);
    ProdOrderComp.SETFILTER("Due Date",’>%1&<=%2′,0D,ToDate);
     
    and so on with the other types of demand, like planned consumption (check the DemandToInvProfile procedure).
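     
    For example, transfer order demand (d. above) can be collected with a query that mirrors the pattern above. This is a hedged sketch, not copied from the codeunit; verify the exact keys and filters in DemandToInvProfile:

    // Hedged sketch for transfer order demand (the outbound side of the transfer);
    // mirrors the pattern above, but verify against DemandToInvProfile.
    TransLine.SETRANGE("Item No.",Item."No.");
    Item.COPYFILTER("Location Filter",TransLine."Transfer-from Code");
    Item.COPYFILTER("Variant Filter",TransLine."Variant Code");
    TransLine.SETFILTER("Outstanding Qty. (Base)",'<>0');
    TransLine.SETFILTER("Shipment Date",'>%1&<=%2',0D,ToDate);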
     
    For any of these "queries", the exact filters used should be analyzed. For instance, the sales lines considered are those with a shipment date on or before the planning ending date … regardless of the status of the sales order.
  • Microsoft Dynamics NAV Team Blog

    NAV 2009 Tips and Tricks: Link documents and URLs to task pages

    • 0 Comments

    You can use links on your task pages to guide users to additional information. The links can be the URL addresses of web sites or links to documents on a computer.

    In the following example, you can see how to add a link to a customer card task page. The link is then viewable both from the customer card and the customer list.

    1. Open a customer card to which you want to add a link to a document or URL.

    2. If the Links FactBox is not displayed on the page, customize the page to display Links.

    3. In Links, click Actions, and then click New.

    4. In the Link Address field, enter an address for the file or website, such as C:\My Documents\invoice1.doc, or www.microsoft.com.

    5. Fill in the Description field with information about the link.

    6. Click Save.

    7. In Links, click on the link in the Link Address field. The appropriate program, such as Microsoft Word or Microsoft Internet Explorer, opens and displays the link target.

    For more information about usability and the RoleTailored client, see the blog post Useful, Usable, and Desirable.

  • Microsoft Dynamics NAV Team Blog

    NAV “core” Planning: CU 99000854 ( III ): integrating MRP with Jobs planning

    • 0 Comments

    This post is not for the reader to use the suggested code as-is, but to test it and enhance it as required. Yep, this is a disclaimer, so the reader knows that testing should always be done. Also, the code below is not complete, but it should provide a good idea of how to enhance NAV planning with Jobs material planning.

    What I would like to cover here is how to integrate MRP with Jobs, so that material required for jobs creates action messages. Before that, we should recall how CU 99000854 determines where demands come from. This was covered in my earlier post about CU 99000854 (Demand Types). As a reminder, the logic is in the DemandToInvProfile function. There it goes through the different types of demand (Service, Sales, Production, …) and makes planning aware of them.

    Back to our topic: this same function is where we need to add code so that MRP is aware that Job Planning Lines carry demand (material requirements). As an example, the piece of code we could consider is the following:

    // Copyright © Microsoft Corporation. All Rights Reserved.
    // This code released under the terms of the
    // Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html.)

    JobPlanningLine.SETCURRENTKEY(Type,"No.","Variant Code","Location Code","Planning Date");
    JobPlanningLine.SETRANGE(Type,JobPlanningLine.Type::Item);
    JobPlanningLine.SETRANGE("No.",Item."No.");
    Item.COPYFILTER("Location Filter",JobPlanningLine."Location Code");
    Item.COPYFILTER("Variant Filter",JobPlanningLine."Variant Code");
    JobPlanningLine.SETFILTER("Planning Date",'>%1&<=%2',0D,ToDate);

    IF JobPlanningLine.FIND('-') THEN
      REPEAT
        InventoryProfile.INIT;
        InventoryProfile."Line No." := NextLineNo;
        // NextLineNo must be advanced per profile line; that bookkeeping is
        // omitted here, as the code is intentionally incomplete.
        // TransferFromJobPlanningLine is a helper you would add to the
        // Inventory Profile table to copy item, quantity, and planning date.
        InventoryProfile.TransferFromJobPlanningLine(JobPlanningLine);
        IF InventoryProfile.IsSupply THEN
          InventoryProfile.ChangeSign;
        InventoryProfile.INSERT;
      UNTIL JobPlanningLine.NEXT = 0;

    What we are doing above is taking job planning lines of type Item and treating them as demand, using the planning date as the due date for the demand.

    Going forward, the reader can take it from here to finalize the development.

  • Microsoft Dynamics NAV Team Blog

    Planning parameters vs. demand profile

    • 0 Comments

    Most of the time, when a question arises about why NAV suggests a particular planning proposal, the underlying question is how the planning parameters were determined by the given company. When we talk about planning parameters, we refer to: reorder point, reorder quantity, and safety stock.

    The safety stock is the quantity a company would like to hold in stock to address uncertainty. Let's come back to this later.

    The reorder point is based on certainty. For a company, the certainty is based on the demand profile. So, if we have historical data showing that the average demand is X during the vendor lead time, we should set the reorder point to a minimum of that average demand. In other words:

    - we know that vendor lead time is 7 days

    - the average demand during 7 days is about 125 pieces

    - reorder point should be 125

    Once we determine this reorder point, it acts as a trigger: when stock is on or below that level, it generates an action for planners to create a replenishment. In other words, if our stock falls below 125 pieces, we need to replenish, since it will take the vendor 7 days to complete our order and we need to ensure we have enough stock to cover the demand within those 7 days.

    Of course, there are industries where uncertainty needs to be planned for. In this case, we use the safety stock, and the reorder point is increased to account for: average demand during lead time + safety stock.
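
    As a minimal sketch of that calculation (the function is mine, purely for illustration; it is not a NAV function):

    // Illustrative helper, not part of NAV: reorder point as the average demand
    // during the vendor lead time, plus safety stock for uncertainty.
    CalcReorderPoint(AvgDemandDuringLeadTime : Decimal; SafetyStock : Decimal) : Decimal
    // Example from the text: 125 pieces of average demand over the 7-day
    // vendor lead time, with no safety stock, gives a reorder point of 125.
    EXIT(AvgDemandDuringLeadTime + SafetyStock);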

    The reorder quantity is calculated depending on the specific item type, ordering costs, and other considerations. In some cases, we reorder as much quantity as we can allocate to the warehouse space we have for the item's bin (maximum quantity). In other cases, it is based on the economic order quantity, where we need to determine the optimal quantity to order.

    In any case, planning parameters have to be thought through, agreed upon, and continuously verified. Otherwise, NAV planning will not be able to provide realistic results, because the planning parameters are not based on what your business is about.

  • Microsoft Dynamics NAV Team Blog

    Test Automation Series 2 - Creation Functions

    • 0 Comments

    With the introduction of the NAV test features in NAV 2009 SP1 it has become easier to develop automated test suites for NAV applications. An important part of an automated test is the setup of the test fixture. In terms of software testing, the fixture includes everything that is necessary to exercise the system under test and to expect a particular outcome. In the context of testing NAV applications, the test fixture mainly consists of all values of all fields of all records in the database. In a previous post I talked about how to use a backup-restore mechanism to recreate the fixture and also about when to recreate the fixture.

    A backup-restore mechanism allows a test to use a fixture that was prebuilt at some other time, that is, before the backup was created. In this post I'll discuss the possibility of creating part of the fixture during the test itself. Sometimes this will be done inline, but typically the creation of new records will be delegated to creation functions that may be reused. Examples of creation functions may also be found in the Application Test Toolset.

    Basic Structure

    As an example of a creation function, consider the following function that creates a customer:

    CreateCustomer(VAR Customer : Record Customer) : Code[20]
    // Insert first so the number series and the insert trigger initialize the
    // record; then assign the remaining fields and modify.
    Customer.INIT;
    Customer.INSERT(TRUE);
    // Pick a random customer posting group (a "don't care" value):
    CustomerPostingGroup.NEXT(RANDOM(CustomerPostingGroup.COUNT));
    Customer."Customer Posting Group" := CustomerPostingGroup.Code;
    Customer.MODIFY(TRUE);
    EXIT(Customer."No.")

    This function shows some of the idiom used in creation functions. To return the record, an output (VAR) parameter is used. For convenience, the primary key is also returned. When only the primary key is needed, this leads to slightly simplified code. Compare, for instance,

    CreateCustomer(Customer);
    SalesHeader.VALIDATE("Bill-to Customer No.",Customer."No.");

    with

    SalesHeader.VALIDATE("Bill-to Customer No.",CreateCustomer(Customer));

     Obviously, this is only possible when the record being created has a primary key that consists of only a single field.

    The actual creation of the record starts with initializing all the fields that are not part of the primary key (INIT). If the record type uses a number series (as Customer does), the record is then inserted to make sure any other initializations (in the insert trigger) are executed. Only then are the remaining fields set. Finally, the number of the created customer is returned.

    Field Values

    When creating a record, some fields will need to be given a value. There are two ways to obtain the actual values to be used: they can be passed in via parameters, or they can be generated inside the creation function. As a rule of thumb, when calling a creation function from within a test function, only the information that is necessary to understand the purpose of the test should be passed in. All the other values are "don't care" values and should be generated. The advantage of generating "don't care" values inside the creation functions, over the use of parameters, is that it becomes immediately clear which fields are relevant in a particular test by simply reading its code.

    For the generation of values different approaches may be used (depending on the type). The RANDOM, TIME, and CREATEGUID functions can all be used to generate values of different simple types (optionally combined with the FORMAT function).

    In case a field refers to another table, a random record from that table may be selected. The example shows how to use the NEXT function to move a random number of records forward. Note that the COUNT function is used to prevent moving forward too far. Also note that if this pattern is used a lot, there may be a performance impact.

    Although the use of random values makes it very easy to understand what is (not) important by reading the code, it can make failures more difficult to reproduce. A remedy to this problem is to record all the random values used, or to simply record the seed used to initialize the random number generator (the seed can be set using the RANDOMIZE function). In the latter case, the whole sequence of random values can be reproduced by using the same seed.
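
    For instance, a test run could log its seed, and a failing run could then be replayed by reseeding with the same value; a minimal sketch (the seed value is illustrative):

    // Reproducible randomness: fixing the seed makes RANDOM return the same
    // sequence of "don't care" values on every run.
    Seed := 12345; // in practice, use and log the seed from the failing run
    RANDOMIZE(Seed);
    FirstValue := RANDOM(100);  // same result for a given seed
    SecondValue := RANDOM(100); // and so is the rest of the sequence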

    As an alternative to selecting random records, a new record may be created to set a field that refers to another table.

    Primary Key

    For some record types the primary key field is not generated by a number series. In such cases a simple pattern can be applied to create a unique key as illustrated by the creation function below:

    CreateGLAccount(VAR GLAccount : Record "G/L Account") : Code[20]
    // Find the highest existing TESTACC number and increment it to get a new,
    // unique, and recognizable key; INIT does not reset primary key fields.
    GLAccount.SETFILTER("No.",'TESTACC*');
    IF NOT GLAccount.FINDLAST THEN
      GLAccount.VALIDATE("No.",'TESTACC000');
    GLAccount.VALIDATE("No.",INCSTR(GLAccount."No."));
    GLAccount.INIT;
    GLAccount.INSERT(TRUE);
    EXIT(GLAccount."No.")

    The keys are prefixed with TESTACC to make it easy to recognize the records created by the test when debugging or analyzing test failures. This creation function will generate accounts numbered TESTACC001, TESTACC002, and so forth. The keys will wrap around after 999 accounts are created, after which this creation function will fail. If more accounts are needed, extra digits may simply be added.

    Input Parameters

    For some of the fields of a record you may want to control their values when using a creation function in your test. Instead of generating such values inside the creation function, input parameters may be used to pass them in.

    One of the difficulties when defining creation functions is to decide on what and how many parameters to use. In general the number of parameters for any function should be limited. This also applies to creation functions. Parameters should only be defined for the information that is relevant for the purpose of a test. Of course that may be different for each test.

    To avoid libraries that contain a large number of creation functions for each record, only include the most basic parameters. For master data, typically no input parameters are required. For line data, consider basic parameters such as type, number, and amount; a hypothetical line-creation helper along these lines is sketched below.
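
    This sketch is not the actual Application Test Toolset signature; the function name and the line-number handling are assumptions for illustration:

    // Hypothetical line-creation helper: only the parameters that matter to a
    // test (type, number, quantity) are exposed; everything else is defaulted.
    CreateSalesLineFor(VAR SalesLine : Record "Sales Line";SalesHeader : Record "Sales Header";LineType : Option;No : Code[20];Qty : Decimal)
    SalesLine.INIT;
    SalesLine."Document Type" := SalesHeader."Document Type";
    SalesLine."Document No." := SalesHeader."No.";
    SalesLine."Line No." := 10000; // assumes one line; derive from FINDLAST otherwise
    SalesLine.INSERT(TRUE);
    SalesLine.VALIDATE(Type,LineType);
    SalesLine.VALIDATE("No.",No);
    SalesLine.VALIDATE(Quantity,Qty);
    SalesLine.MODIFY(TRUE)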

    Custom Setup

    In a particular test, you typically want to control a slightly different set of parameters than the set accepted by a creation function in an existing library. A simple solution to this problem is to update the record inline after it has been returned by the creation function. In the following code fragment, for example, the sales line returned by the creation function is updated with a unit price.

    LibrarySales.CreateSalesLine(SalesLine.Type::"G/L Account",AccountNo,Qty,SalesHeader,SalesLine);
    SalesLine.VALIDATE("Unit Price",Amount);
    SalesLine.MODIFY(TRUE);

    When the required updates to a particular record are complex or are needed often in a test codeunit, this pattern may lead to code duplication. To reduce code duplication, consider wrapping a simple creation function (located in a test helper codeunit) in a more complex one (located in a test codeunit). Suppose that for the purpose of a test a sales order needs to be created, and that the only relevant aspects of this sales order are that it is for an item and its total amount. Then a local creation function could be defined like this:

    CreateSalesOrder(Amount : Integer; VAR SalesHeader : Record "Sales Header") 
    
    LibrarySales.CreateSalesHeader(SalesHeader."Document Type"::Order,CreateCustomer(Customer),SalesHeader);
    LibrarySales.CreateSalesLine(SalesHeader,SalesLine.Type::Item,FindItem(Item),1,SalesLine);
    SalesLine.VALIDATE("Unit Price",Amount);
    SalesLine.MODIFY(TRUE)

    In this example, a complex creation function wraps two simple creation functions. The CreateSalesHeader function takes the document type and a customer number (the customer is created here as well) as input parameters. The CreateSalesLine function takes the sales header, line type, number, and quantity as input. Here a so-called finder function is used that returns the number of an arbitrary item. Finder functions are a different type of helper function that will be discussed in a future post. Finally, note that the CreateSalesLine function needs the document type and number from the header; instead of using separate parameters, they are passed in together (with the SalesHeader record variable).

    Summary

    To summarize here is a list of tips to consider when defining creation functions:

    • Return the created record via an output (VAR) parameter
    • If the created record has a single-field primary key, return it
    • Make sure the assigned primary key is unique
    • If possible, have the key generated by a number series
    • The safest way to initialize a record is to make sure all triggers are executed in the same order as they would have been when running the scenario from the user interface. In general (when DelayedInsert=No), records are created by this sequence (see the sketch after this list):
      • INIT
      • VALIDATE primary key fields
      • INSERT(TRUE)
      • VALIDATE other fields
      • MODIFY(TRUE)
    • Only use input parameters to pass in information necessary to understand the purpose of a test
    • If necessary add custom setup code inline
    • Wrap generic creation functions to create more specific creation functions
    • Instead of passing in multiple fields of a record separately, pass in the entire record instead
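
    A minimal sketch of that sequence, using a hypothetical "My Record" table purely for illustration:

    // Hypothetical record with key field "No." and a Description field; the
    // sequence mirrors the list above (the DelayedInsert=No case).
    MyRec.INIT;                                    // initialize non-key fields
    MyRec.VALIDATE("No.",'TEST001');               // validate primary key fields
    MyRec.INSERT(TRUE);                            // run the insert trigger
    MyRec.VALIDATE(Description,'Created by test'); // validate the other fields
    MyRec.MODIFY(TRUE);                            // run the modify trigger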

    These and some other patterns have also been used for the implementation of the creation functions included in the Application Test Toolset.
