Microsoft Dynamics NAV

Team Blog

  • Microsoft Dynamics NAV Team Blog

    Saying Farewell to the Native Database

    • 14 Comments
I've met many people who imagine that everyone who works at Microsoft must be of a single mind, with one goal - much like a hive mentality. Nothing could be further from the truth. We are an organization of individuals, often passionate about our beliefs and very emotionally invested in the work that we do. We're very proud of our successes and we want to learn from our failures. We're very human. And as such, when I was asked to work with a team that would decouple the NAV product from our native database, it was with an element of sadness that I agreed. I can also admit that I wanted the job because, like many of you, I've worked with this product for over a decade, and I wanted this change to be made by someone who understood what we were saying farewell to.

You shouldn't be burdened with the technical details of decoupling the files from our build system, generating the lists of extra DLLs that we no longer need to install, or reviewing documentation that has references to the database format. When we are about to commit a huge change like this to our internal systems, we send an email to all team members informing them of an impending 'Breaking Change' to give teams a last chance to speak up about the impact it will have on their work cycles. There were almost no issues raised, but I did receive a large number of comments from team members along the lines of "are you sure we want to take this away," "aren't we giving up the NAV simplicity," and "this is sad." And a part of me agreed.

    But it's not really sad. The day we committed this change was a bit like the last day of High School. We've all had a fabulous time but our future is ahead.

It's not sad because the original goal of Navision was not to become a best-of-breed database product - we wanted to build business applications. We happened to build a very clever little database at the same time, but it was never the goal to have the database be the reason that people buy NAV.

    It's not sad because now we focus on SQL Server and we can ensure that SQL is the platform we do our enhancements on. And aim to improve performance on. And benchmark on.

    We can spare our Quality Assurance (Test) organization the pain of testing on a legacy platform and double our efforts into the SQL stack. We can spare our developers from maintaining data connections for two different platforms and spare ourselves the discussion about how to take advantage of SQL features without impacting the way the native database is used. 

    I'm looking forward to everything we can now do and I hope you will join us too.

    -Stuart Glasson

    For more information about the Statement of Direction for Microsoft Dynamics NAV where we talk about ending support for the native NAV database, see this blog post.

  • Microsoft Dynamics NAV Team Blog

    Creating a web service manually, the importance of the name you give it, and a few small things to remember

    • 14 Comments

When you use the SC command-line tool to create a new NAV 2009 service, how does the new service know whether it is a middle tier for the RoleTailored client (RTC) to connect to, or whether it is supposed to handle web service calls?

    In other words, what decides whether the new service will be "Microsoft Dynamics NAV Server" or "Microsoft Dynamics NAV Business Web Services"?

     

It depends on the name. If it starts with "MicrosoftDynamicsNAVWS", then it will be for web services. If the name starts with anything else, then it will be a middle tier for RTC clients.

     

To keep things simple, just give your NAV Servers names beginning with MicrosoftDynamicsNAV / MicrosoftDynamicsNAVWS. Then if you need a second, third, etc. server, append a unique name, separated by a $ sign, for example:

    MicrosoftDynamicsNAV$Svr2

    MicrosoftDynamicsNAVWS$Svr2

     

    Here are the simple steps for how to create a new web service service, and a few more things to be aware of. Let's say that we want to start a second set of NAV Servers.

     

    First create the normal service from a command prompt:

    SC CREATE "MicrosoftDynamicsNAV$Svr2" binpath= "C:\Program Files\Microsoft Dynamics NAV\60\Service2\Microsoft.Dynamics.Nav.Server.exe" DisplayName= "MSSvr2"

    Then create the service for Web Services: 

    SC CREATE "MicrosoftDynamicsNAVWS$Svr2" binpath= "C:\Program Files\Microsoft Dynamics NAV\60\Service2\Microsoft.Dynamics.Nav.Server.exe $Svr2" DisplayName= "MSWSSvr2" type= share

     

The additional settings that you must provide in the commands above, and a few things that you must remember, are:

     

    1)  The Name

As described, the name must begin with MicrosoftDynamicsNAVWS if you want the service to handle web services.

     

    2)  Include the last part of the name in BinPath

    After the .exe in the binpath parameter you must specify the part of the name ($Svr2 in this case) that comes after MicrosoftDynamicsNAVWS. If you forget this step, you might get this error when you try to start the service:

Windows could not start the MSWSSvr2 service on Local Computer. Error 1083: The executable program that this service is configured to run in does not implement the service.


     

    3)  Type must be share

    For the service that handles web services, add the parameter type= share. Otherwise the service will still try to start up as a middle tier (not for web services).

     

    4)  Spaces after =

You must remember the space after each = in the command, as in, for example, "type= share". This is just the syntax of the SC command.

     

    5)  DisplayName

It doesn't matter what display name you give - it is just how you find the service in the Services snap-in.
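
If you want to verify the result, here is a minimal PowerShell sketch (assuming the example service names above and an elevated prompt; the names are just the placeholders used in this post):

# Single quotes keep PowerShell from expanding $Svr2 as a variable.
Start-Service -Name 'MicrosoftDynamicsNAV$Svr2'    # middle tier for RTC clients
Start-Service -Name 'MicrosoftDynamicsNAVWS$Svr2'  # web services listener
Get-Service -Name 'MicrosoftDynamicsNAV*'          # both should now report Running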

     

These are just some of the small things to keep in mind. For many more details on web services, go to Freddy's blog, especially this post:

    Multiple Service Tiers - SP1

     

    // Lars Lohndorf-Larsen

     

  • Microsoft Dynamics NAV Team Blog

    NAV 2013 - Reports may be difficult to read when printing over Terminal Services

    • 14 Comments

    === Blog updated on March 4th ====

    Updated with the good news that we now have a hotfix for this for Windows Server 2008 R2:

    Visual elements are displayed incorrectly etc... - KB 2768741.

    ===== end of update ===========

     

     

In NAV 2013, if you run the Windows client via Terminal Server, a Remote Desktop Connection (RDC), or a similar third-party application hosting technology, reports may be difficult to read when you print them. Preview looks fine.

     

A report could look like this in preview (OK):

But in Print Layout rendering or when printing to paper, it would look like this - notice that the fonts look scrunched together and the report is hard to read:

    When does this happen?

     

    It happens when

  A)  you run NAV 2013 via a Terminal Server or via Remote Desktop Connection (RDC), and

  B)  the aspect ratio of the screen resolution of the connection is not 4:3, and

  C)  the Terminal Server machine is running Windows Server 2008, including Windows Server 2008 R2.

     

     

The screen resolution is the important thing here. If we use a screen resolution with a 4:3 aspect ratio (= 1.33333), then all works fine. So if my RDC looks like this, then I don't have this issue:

     

On this connection the screen resolution is 1024 by 768. 1024 / 768 = 1.33333, and everything is OK.

     

     

    But if I set the screen resolution on my RDC like this:

     

     

Then my aspect ratio = 1366 / 768 = 1.779.

1.779 <> 1.33333, and then I have the problem.
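
To check a given session quickly, here is a minimal PowerShell sketch (my own illustration, not from the original post) that computes the aspect ratio of the primary screen when run inside the remote session:

Add-Type -AssemblyName System.Windows.Forms
$bounds = [System.Windows.Forms.Screen]::PrimaryScreen.Bounds
$ratio  = $bounds.Width / $bounds.Height          # 1024 x 768 gives 1.33333
if ([Math]::Abs($ratio - (4 / 3)) -lt 0.01) {
    "Aspect ratio {0:N3} is 4:3 - printing should be unaffected" -f $ratio
} else {
    "Aspect ratio {0:N3} is not 4:3 - printing may be affected" -f $ratio
}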

     

     Why?

     

The root of the problem lies in graphics rendering (GDI) in Windows Server 2008 when it tries to scale from a screen resolution whose aspect ratio is not 1.33333. In Windows Server 2012, rendering was redesigned to take advantage of hardware acceleration, as described in this article:

Windows 8 graphics: Microsoft has hardware accelerated all the things

So we do not have this problem if the host machine is running Windows Server 2012.

NAV 2013 uses a newer version of Report Viewer, which relies more on Windows GDI than previous versions did. This is why we have only started seeing this problem on NAV 2013 installations.

     

     

    So what can we do?

    Based on the above there are three ways out of this problem:

1)  Ensure that all users use a screen resolution with a 4:3 aspect ratio

    2)  Avoid a host machine trying to scale a client's resolution. For example:

      - Save the report to PDF, then print from the PDF document. PDF rendering is not affected by this issue.

      - Use server-side printing - for example make use of the Job queue and let NAS print reports. Then the server can ignore what resolution the client has.

      - Use other ways than Terminal Server to simplify client installations, such as ClickOnce or possibly "Remote Apps". In this way the client runs on the local machine and not on a host machine.

3)  Upgrade the Terminal Server / RDC host machine to Windows Server 2012. Client machines do not need to be upgraded.

     

     

Workarounds 1) and 2) could be hard to enforce in some cases. But knowing about this issue, hopefully the option of using Windows Server 2012 can be considered.

     

     

We will post updates here if and when we find any further details around this issue!

  • Microsoft Dynamics NAV Team Blog

    Coming Soon: Exporting and Importing Companies and Other Data

    • 13 Comments

    We are not quite ready yet, but we will soon announce the availability of new functionality for Microsoft Dynamics NAV 2013 R2: The ability to export business data, global data, and/or applications from one database and import the data into another database. Many of you used the FBK functionality to do this in earlier versions of Microsoft Dynamics NAV, but since we removed that backup/restore functionality in Microsoft Dynamics NAV 2013 R2, you have let us know how much you relied on it in your daily work. So we are working hard on this new functionality, and we will make an announcement when you can download the update.

    When the update is available, we will also update the Microsoft Dynamics NAV 2013 R2 Release Notes Follow-Up document, which is available here: https://mbs.microsoft.com/downloads/customer/NAV/NAV%202013%20R2/MicrosoftDynamicsNAV2013R2_ReleaseNotesFollowup.pdf

    Best regards,

    The Dynamics NAV team

  • Microsoft Dynamics NAV Team Blog

    Send email with PDF attachment in NAV 2009

    • 13 Comments

In this post I would like to explore the possibilities for creating an email from the Role Tailored client and attaching an invoice as a PDF file to the email. Unfortunately, we do not have this functionality built into our demo application, but let me show you how it can be done with little effort.

First, I suggest you download the fob file, which contains the five different options I will go through here.

When you have downloaded and imported the fob file, you will see that I have added five new actions:

1. SendAsPDF (use of codeunit to rename) – recommended solution
2. SendAsPDF (access to server needed)
3. SendAsPDF (with temp file name)
4. SendAsPDF (user prompted to save)
5. SaveAsPDF – recommended solution if you just want the PDF file

Let me go through the different options starting from the bottom. I recommend option 1, but I would also like to share the other options, since they might be valuable for you.

Option 5: SaveAsPDF
In this option you will be prompted whether you want to open or save the PDF. The PDF file created is based on the selected invoice in the Posted Sales Invoices list place.

In this option, all I do is have the server create the PDF file and then use the new download function in NAV 2009 to retrieve the PDF file from the server.

Option 4: SendAsPDF (user prompted to save)
In this option, you will first be prompted to save the file.
Here it is important to select "Save" so that the PDF file on disk gets the correct name. If you select "Open", the PDF file will be given a temp name.

After you have saved the PDF, we create the email message. When this happens, you will get three similar messages asking you to allow the use of an external component.

You get these messages because the Role Tailored client connects to an external component (Outlook). It is of course up to you whether you want to set this to "Always allow", but doing so removes these messages the next time you open the Role Tailored client.

When you have allowed these to run, the email will be created with the PDF file attached.

In this option, all I do is download the PDF file to the client and then use the Mail codeunit to create the email.

Option 3: SendAsPDF (with temp file name)
In this option, you will not be prompted to save the PDF file, and the email is created immediately. This would probably be preferable to downloading the file to the user's disk, but we use the PDF file created on the server, and since that file gets a temp name like "__TEMP__570eb0279b9d4b1fa837caf3a14acbf7", this option is not really good.

Let us look at options 1 and 2, where this issue is solved.

Option 2: SendAsPDF (access to server needed)
In this option you will not be prompted to save the file either, but the end user will need access to the server folder where the PDF file is stored. In some situations you might want this, but for security reasons you might not want to give the user this access.

Option 1: SendAsPDF (use of codeunit to rename) – recommended solution
Again, in this option you will not be prompted to save the PDF file; it is automatically attached to the email. Here we have built a codeunit to rename the temp file created on the server, so the end user does not need access to any folders on the server.

So, all in all, I recommend option 1 for attaching a PDF file to an email. Once again, I have made all the code available here, so feel free to explore how I built this. If you feel there is an option that I missed, feel free to leave a comment or use the email contact form.

    Thanks,
    Claus Lundstrøm, Program Manager, Microsoft Dynamics NAV

  • Microsoft Dynamics NAV Team Blog

    Merging Application Objects using Windows PowerShell

    • 13 Comments

    Upgrading a Microsoft Dynamics NAV solution is time consuming. You have to identify which changes you have to make, you have to upgrade the application objects and the application code, and you might have to move the existing data around so that it fits the new database schema. In Microsoft Dynamics NAV 2013 R2, we started delivering Windows PowerShell cmdlets and sample scripts that can help you automate different parts of the upgrade process. In the latest cumulative update, we introduce a new set of Windows PowerShell cmdlets that can help you through the code upgrade.

You can use the new cmdlets either in the Microsoft Dynamics NAV 2013 R2 Development Shell or by importing the Microsoft.Dynamics.NAV.Model.Tools.psd1 module into the Windows PowerShell Integrated Scripting Environment (ISE). The new application merge utilities install when you choose the Developer option in Microsoft Dynamics NAV 2013 R2 Cumulative Update 9 Setup, or if you add the development environment to another installation option.

    The application merge utilities include the following Windows PowerShell cmdlets:

• Merge-NAVApplicationObject – Compares the changes that have been made between two sets of Microsoft Dynamics NAV application objects, and applies the difference to a third set of application objects. The result of the merge is a number of text files with the merged application objects. Any conflicts that the cmdlet cannot merge are identified in conflict files.

• Compare-NAVApplicationObject – Compares text files that contain Microsoft Dynamics NAV application objects, and then calculates the delta between the two versions. The result of the comparison is a number of text files with the calculated delta.

• Update-NAVApplicationObject – Applies a set of deltas to the specified application objects. The files that describe the delta are generated by the Compare-NAVApplicationObject cmdlet.

• Join-NAVApplicationObjectFile – Combines multiple application object files into one text file.

• Split-NAVApplicationObjectFile – Splits a text file that contains two or more application objects into separate text files for each application object.

• Get-NAVApplicationObjectProperty – Gets Microsoft Dynamics NAV application object properties from the specified application object text files.

• Set-NAVApplicationObjectProperty – Sets Microsoft Dynamics NAV application object properties in the specified application object text files.

    Getting started

You will be able to read more about the cmdlets and how to use them in the MSDN Library after the release of Microsoft Dynamics NAV ‘Crete’, but for now, you can type Get-Help "NAV" in the Windows PowerShell ISE or the Microsoft Dynamics NAV Development Shell.

    If you don’t want to use the Microsoft Dynamics NAV Development Shell, use the Windows PowerShell ISE. But before you can access the cmdlets, you must import the Microsoft.Dynamics.Nav.Model.Tools.psd1 module. Here is an example of the command you can type:

    Import-Module "${env:ProgramFiles(x86)}\Microsoft Dynamics NAV\71\RoleTailored Client\Microsoft.Dynamics.Nav.Model.Tools.psd1" -force

    Get-Help "NAV"

    Now you can see the Help for the cmdlets and take a closer look at the examples for how to use them. You can also see detailed Help for each cmdlet by typing the following command:

    Get-Help cmdletname -detailed

    And you can concentrate on the examples by typing the following command:

    Get-Help cmdletname -examples

For all of the new cmdlets, the starting point is three versions of the application objects that you want to merge. The following list describes the three versions of the Microsoft Dynamics NAV application that you want to compare and merge.

• ORIGINAL – The baseline of the application merge. For example, the Microsoft release of Microsoft Dynamics NAV 2013 R2.

• MODIFIED – The updated version of the original. For example, this can be Microsoft Dynamics NAV 2013 R2 Cumulative Update 9. Alternatively, it can be a small add-on. In many cases, the modified application is the version that contains fewer changes relative to the original than the version that is the target of the merge. This is because you would rather apply a small set of changes to a large application than a large set of changes to a small application.

• TARGET – The version of the application that you want to apply the difference between the original and the modified application to. For example, this can be your solution that you want to apply a cumulative update to. Alternatively, it can be a new major release from Microsoft that you want to apply your modified solution to.

Each of these versions can be any version that you want to do a three-way merge between. ORIGINAL can be your add-on, MODIFIED can be a customization of your add-on, and TARGET can be a new release of Microsoft Dynamics NAV from Microsoft. But for the purposes of this blog post, we'll keep the definitions as described above.
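
As a minimal sketch of such a merge (the folder names are placeholders; export your application objects as text files into them first):

# Compare ORIGINAL and MODIFIED, and apply the difference to TARGET.
# Merged objects are written to RESULT; anything the cmdlet cannot merge is flagged in conflict files.
Import-Module "${env:ProgramFiles(x86)}\Microsoft Dynamics NAV\71\RoleTailored Client\Microsoft.Dynamics.Nav.Model.Tools.psd1" -Force
Merge-NAVApplicationObject -OriginalPath .\ORIGINAL -ModifiedPath .\MODIFIED -TargetPath .\TARGET -ResultPath .\RESULT

After the merge, review any conflict files in the result folder before importing the merged text files back into the development environment.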

As input to the cmdlets, you can provide a text file, a list of text files, or a folder with text files. So you need to export the relevant application objects as text files; for example, you can use the development environment command ExportObjects. You can export each application object to a separate text file, or you can export all objects to a single text file. Optionally, you can use the Join-NAVApplicationObjectFile and Split-NAVApplicationObjectFile cmdlets to structure the text files in the way that works best for you. Also, depending on your scenario, you can work with a subset of your application, such as all codeunits, objects within an ID range, or a specific group of objects. Use the tools to get the text files that you need, and take a look at the sample scripts for inspiration.

    The Windows PowerShell sample scripts are available in the attached compressed archive. Start by opening the HowTo-Start-Import-NAV-Module.ps1 script in the Windows PowerShell ISE, navigate the command prompt to the folder where you placed the samples, and then run the script. Then open one of the other scripts, such as HowTo-Merge-1-General.ps1, and follow the guidance in the script.

    The sample script package includes a folder with four subfolders that can help you get started with the scripts. The demonstration data in the ORIGINAL, MODIFIED, and TARGET folders illustrate the text files that are the input to the cmdlets. For clarity, we have chosen to have one application object in each file, but you can use the Join-NAVApplicationObjectFile cmdlet to combine all the text files in the MODIFIED folder in a single file, for example, before you run the script. That way you can see how the Merge-NAVApplicationObject cmdlet identifies the application objects in the combined text file. We find it easier to work with one object in each file, but the cmdlets are there so you can configure the text files in the way that works better for you.

We suggest that you open each of the sample scripts in the Windows PowerShell ISE and read through them to get acquainted with the new cmdlets. Then, set up a small test environment of your own where you can safely use the cmdlets on your own application objects to upgrade your solution to Microsoft Dynamics NAV 2013 R2 Cumulative Update 9.

    For more information, see the Merge Application Object Source Files whitepaper, which you can download from the blog post that announced the availability of Cumulative Update 9 here: http://go.microsoft.com/fwlink/?LinkId=403646.

  • Microsoft Dynamics NAV Team Blog

    Coffee Break – keeping stuff in the cloud

    • 13 Comments

    In the last few Coffee Break posts we have been looking at virtual hard disks (VHDs) and virtual machines (VMs). Now let's combine the two in the cloud.

    Coffee Break 11 – Provisioning - Using Microsoft Azure storage to keep VHDs and other useful stuff in the cloud

Provisioning a new instance of Microsoft Dynamics NAV is like putting individual bits together. In this Coffee Break post, we will prepare some of these bits by building a VHD up front and then uploading it to Azure. Then every new virtual machine (VM) can get a copy of this disk. A VHD like this can contain anything, such as the Microsoft Dynamics NAV product DVD, application objects, add-on databases, documents, and so on. And we can save a bit of time and bandwidth compared to uploading a new DVD, for example, every time we provision a new VM in the cloud.

    Customer Story:

    The customer is frequently creating new VMs. And every new VM they create will need access to common data, such as the Microsoft Dynamics NAV product DVD.

    Prerequisites:

1 VHD (local) and 1 VM (in Azure):

    Create a VHD: Coffee Break - Windows PowerShell and creating a Hyper-V disk

    Create a VM: Coffee Break – Using Windows PowerShell to provision virtual machines

    Get the VHD to the VM

Once we have a VHD (locally) and a VM in Azure (whether created manually or via PowerShell), let's combine the two. First, upload the VHD to Azure. Connect your PowerShell ISE environment to your Azure subscription:
    Add-AzureAccount

    Create a new storage account to hold our VHD:

    $SourceStorageAccount = "mysharedstuff"

    First test that the storage name is actually available:

    Test-AzureName -Storage $SourceStorageAccount

    If it is (the line above returns False), then create it:

    New-AzureStorageAccount -StorageAccountName $SourceStorageAccount -Location "West Europe"

     

    Optionally we can add containers to this storage, just to keep things organized. First, let's create a context that includes access information for our new storage:

    $Key = (Get-AzureStorageKey -StorageAccountName $SourceStorageAccount).Primary
    $Context = New-AzureStorageContext -StorageAccountName $SourceStorageAccount -StorageAccountKey $Key

This $Context variable gives us access to the storage, and we will use it a couple of times.

Create a container named vhds in our storage:

    New-AzureStorageContainer -Context $Context -Name vhds

    Upload the VHD

    Let’s say you have a local VHD here: C:\temp\mydisctest.vhd. Then copy that up to your storage:
    Add-Azurevhd -LocalFilePath C:\temp\mydisctest.vhd -Destination "https://$SourceStorageAccount.blob.core.windows.net/vhds/mydisk.vhd"

    If you get this error: "The process cannot access the file 'C:\temp\mydisctest.vhd' because it is being used by another process." then make sure that the disk is not already mounted to your local machine. Note that if you happen to double-click on the VHD, Windows may automatically mount it. You can dismount it like this:

    Dismount-VHD -Path C:\temp\mydisctest.vhd

(Side note: if you should ever need to copy it back, use Save-AzureVhd.)

The Add-AzureVhd cmdlet does a clever copy: it checks that this is a valid VHD and copies only actual data. So if the disk itself is 3 GB but only 5% full, then only a file the size of that 5% is actually copied.

    Make a copy

Every time we want a new disk, we can make a new copy of this VHD rather than attaching it directly, in which case we could only use it for one VM. First create a new container for our copy. This container could relate somehow to the VM that we will be attaching the VHD to:

    New-AzureStorageContainer -Context $Context -Name myvm

Then copy mydisk.vhd from vhds to myvm:

    $SourceURI = "https://$SourceStorageAccount.blob.core.windows.net/vhds/mydisk.vhd"

    $State = Start-AzureStorageBlobCopy -AbsoluteUri $SourceURI -DestContainer myvm -DestBlob NewVHD.vhd -Context $Context

     

    This will start an asynchronous copy. To check status before using the copy:

    $state | Get-AzureStorageBlobCopyState
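
Because the copy runs asynchronously, you can poll until it completes before using the disk. A minimal sketch, reusing the $State object from above:

# Poll the copy state every 10 seconds until it is no longer pending.
do {
    Start-Sleep -Seconds 10
    $CopyState = $State | Get-AzureStorageBlobCopyState
    "{0}: {1} of {2} bytes copied" -f $CopyState.Status, $CopyState.BytesCopied, $CopyState.TotalBytes
} while ($CopyState.Status -eq "Pending")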

     

    Get it together

Now that we have a fresh copy of the VHD in Azure, let's attach it to a VM using the Add-AzureDataDisk cmdlet. The two main ways we can use this cmdlet are:

    • ImportFrom – to import an existing VHD - the one we will use here
    • CreateNew – to create a new raw disk

    The syntax of the cmdlet is:
    $vm | Add-AzureDataDisk | Update-AzureVM


    So, we need to get the VM, then pipe it through in that way. First check that you have the right VM. See which ones you have available:

    Get-AzureVM
    Then pick one:
    $VM = get-azurevm -ServiceName mytestserviceabcdef -Name NAVPC

    Check that you still have the location for your VHD and then store that in another new variable:
    $NewVHD = "https://$SourceStorageAccount.blob.core.windows.net/myvm/NewVHD.vhd"

    Then put $VM through the pipe (or edit the line above if you want to have it all in just one line):
$VM | Add-AzureDataDisk -ImportFrom -DiskLabel "MyStuff" -MediaLocation $NewVHD -LUN 3 | Update-AzureVM

If you want to watch as it happens, connect (RDC) to your Azure VM and run File Explorer. Leave it in the background – it will update automatically, and you can see when the disk is added. The disk is ready to use, and the files you put on it while it was still on your local machine are now available in your VM.

    To remove it again, run this:

    $VM | Remove-AzureDataDisk -LUN 3 -DeleteVHD | Update-AzureVM

    Check what disks (if any) are still attached:

    $VM | Get-AzureDataDisk

     

    Note:

If you attach and then detach the disk, Azure may still hold a lease on it, so it may not yet be free to use or even to delete. This is the error you may get when trying to attach the disk again or delete it: "The MyDisk VHD is already registered with image repository as the resource with ID XYZ". In this case we may not be able to delete the disk automatically, which is why, in the example above, when we remove the disk we also delete it as part of the same operation.

So, to avoid problems like this and to keep things clean: when you detach a disk, use -DeleteVHD to remove it completely.

     

     

    Jasminka Thunes, Escalation Engineer Dynamics NAV EMEA

    Lars Lohndorf-Larsen, Escalation Engineer Dynamics NAV EMEA

    Bas Graaf, Senior Software Engineer Dynamics NAV

     

  • Microsoft Dynamics NAV Team Blog

    Using the Quick Filter in Microsoft Dynamics NAV

    • 12 Comments

Many of you who have installed Microsoft Dynamics NAV 2013 R2 have surely noticed that the quick filter behavior has been redesigned in this release. In this post, we will go through the changes and the intention behind them.

     

So let's start with a bit of history. The first version of the quick filter was released in the Microsoft Dynamics NAV 2009 Windows client, where we replaced the filter window available in the development environment with a filter pane integrated on all Microsoft Dynamics NAV pages. Here the user has both the option of entering filter criteria as plain text and the ability to compose an advanced filter. If you would like to quickly search for a certain record, the recommended practice is to use the quick filter.

    Let’s say, for example, that we want to find a contact in the contact list starting with “Man”. We can construct the filter by selecting the Name field and entering the text “man” in the search field.

Notice that all the contacts starting with "man" now appear in the results list. How does this work? Whatever we add to this filter is internally translated to a string by adding '@' in front and '*' at the end. So our filter string 'man' becomes '@man*', and the Windows client filters for any contact name that starts with "man" in upper or lower case.

The following table illustrates more quick filter search examples in Microsoft Dynamics NAV 2013.

Search Criteria | Interpreted as… | Returns…
Man | @man* | All records that start with the string man, case-insensitive
Se | @se* | All records that start with the string se, case-insensitive
Man* | Starts with Man, case-sensitive | All records that start with the string Man
'man' | An exact string, case-sensitive | All records that match man exactly
*1 | Ends with 1 | All records that end with 1
@*man | Ends with man, case-insensitive | All records that end with man
@man* | Starts with man, case-insensitive | All records that start with man

As part of our development process we regularly perform usability studies, and some of them showed that users instinctively thought of the quick filter as a search field. That is why we decided to change the quick filter behavior to "contains" rather than "starts with". So what does this mean?

Let's construct the same filter in Microsoft Dynamics NAV 2013 R2. Notice that the search results now include contact names that start with "Man" as well as names with "Man" in the middle.

What has changed? The entered filter is now translated to a string by adding '@*' in front and '*' at the end. So our filter string 'man' becomes '@*man*', and the Windows client filters for any contact name that contains "man" in upper or lower case.

To start with, in the RTM version of Microsoft Dynamics NAV 2013 R2, we took the simple approach where special characters in the filter criteria are ignored. However, in Cumulative Update 13 for Microsoft Dynamics NAV 2013, we refined the user experience and now respect the entered filter criteria.

The following table illustrates the quick filter search examples for Cumulative Update 13 and later for Microsoft Dynamics NAV 2013.

Search Criteria | Interpreted as… | Returns…
Man | @*man* | All records that contain the string man, case-insensitive
Se | @*se* | All records that contain the string se, case-insensitive
Man* | Starts with Man, case-sensitive | All records that start with the string Man
'man' | An exact string, case-sensitive | All records that match man exactly
*1 | Ends with 1 | All records that end with 1
@*man | Ends with man, case-insensitive | All records that end with man
@man* | Starts with man, case-insensitive | All records that start with man

We encourage you to check out Cumulative Update 13, and we hope that this blog post demystifies some of the behavioral differences of the quick filter across Microsoft Dynamics NAV product versions.

  • Microsoft Dynamics NAV Team Blog

    Microsoft Dynamics NAV/SQL Server Configuration Recommendations

    • 12 Comments

    Michael De Voe, a Senior Premier Field Engineer at Microsoft, has compiled a set of recommendations for SQL Server configuration to improve performance when running Microsoft Dynamics NAV 5.0 and later versions with one of the following versions of SQL Server:

    • Microsoft SQL Server 2005 SP3 x64
    • Microsoft SQL Server 2008 SP1 x64
    • Microsoft SQL Server 2008 R2 x64

     The attached document contains Michael's recommendations, including the following options and parameters:

    • Max Server Memory
    • Auto-Create Statistics
    • Auto-Update Statistics
    • Auto-Grow
    • Database Compatibility Level
    • Trace Flag 4136
    • Trace Flag 4119
    • Data files for TempDB
    • Disk Alignment
    • Read Committed Snapshot Isolation (RCSI)
    • Max Degree of Parallelism
    • Dynamics NAV Default Isolation Level
    • Dynamics NAV "Lock Timeout"
    • Dynamics NAV "Always Rowlock"
    • Maintenance Jobs
    • Instant File Initialization
    • Optimize for Ad Hoc Workloads
    • Page Verify
    • Lock Pages in Memory
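
As an illustration only (the values below are placeholders, not Michael's recommendations - see the attached document for those), several of these options can be set with T-SQL from PowerShell, assuming Invoke-Sqlcmd from the SQL Server PowerShell module is available:

# Placeholder values; take the real values from the attached recommendations document.
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC sp_configure 'max server memory (MB)', 16384; RECONFIGURE;"
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;"
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;"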

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

  • Microsoft Dynamics NAV Team Blog

    Test Automation Series 1 - Test Data

    • 12 Comments

    This post discusses the use of test data when developing automated tests based on the testability features that were released with Microsoft Dynamics NAV 2009 SP1. The practices outlined here have been applied during the development of the tests included in the Application Test Toolset for Microsoft Dynamics NAV 2009 SP1.

    Overall, the execution of automated tests proceeds according to a four-phased pattern: setup, exercise, verify, teardown. The setup phase is used to place the system into a state in which it can exhibit the behavior that is the target of a test.  For data-intensive systems such as Dynamics NAV, the test data is an important part of setting system state.  To test the award of discounts, for instance, the setup phase may include the creation of a customer and setting up a discount.

    One of the biggest challenges of testing software systems is to decide what test data to use and how and when to create it.  In software testing the term fixture is used to describe the state in which the system under test must be to expect a particular outcome when exercised. The fixture is created in the Setup phase. In the case of an NAV application that state is mostly determined by the values of all fields in all records in the database. Essentially, there are two options for how to create a test fixture:

1. Create the test fixture from within each test.
2. Use a prebuilt test fixture that is loaded before the tests execute.

    The advantage of creating the fixture in the test is that the test developer has as much control over the fixture as possible when tests execute. On the other hand, completely creating the test fixture from within a test might not be feasible in terms of development effort and performance. For example, consider all the records that need to be created for a simple test that posts a sales order: G/L Setup, G/L Account, General Business Posting Group, General Product Posting Group, General Posting Setup, VAT Business Posting Group, VAT Product Posting Group, Customer, Item, and so forth.

The advantage of using a prebuilt test fixture is that most data required to start executing test scenarios is present. In the NAV test team at Microsoft, for instance, much of the test automation is executed against the same demonstration data that is installed when the Install Demo option is selected in the product installer (i.e., CRONUS). That data is reloaded when necessary as tests execute to ensure a consistent starting state.

    In practice, a hybrid approach is often used: a common set of test data is loaded before each test executes and each test also creates additional data specific to its particular purpose.

To reset the common set of test data (the default fixture), one can either execute code that (re)creates that data or restore an earlier created backup of that default fixture. The Application Test Toolset contains a codeunit named Backup Management. This codeunit implements a backup-restore mechanism at the application level. It may be used to back up and restore individual tables, sets of tables, or an entire company. Table 1 lists some of the function triggers available in the Backup Management codeunit. The DefaultFixture function trigger is particularly useful for recreating a fixture.

Table 1: Backup Management functions

• DefaultFixture – When executed the first time, a special backup is created of all the records in the current company. Any subsequent time it is executed, that backup is restored in the current company.

• BackupSharedFixture(filter) – Creates a special backup of all tables included in the filter.

• RestoreSharedFixture – Restores the special backup that was created earlier with BackupSharedFixture.

• BackupDatabase(name) – Creates a named backup of the current company.

• RestoreDatabase(name) – Restores the named backup in the current company.

• BackupTable(name, table id) – Creates a backup of a table in a named backup.

• RestoreTable(name, table id) – Restores a table from a named backup.

    There are both maintainability and performance perspectives on the creation of fixtures. First there is the code that creates the fixture. When multiple tests use the same fixture it makes sense to prevent code duplication and share that code by modularizing it in separate (creation) functions and codeunits.

From a performance perspective there is the time required to create a fixture. For a large fixture this time is likely to be a significant part of the total execution time of a test. In these cases it could be considered to not only share the code that creates the fixture but also the fixture itself (i.e., the instance) by running multiple tests without restoring the default fixture. A shared fixture introduces the risk of dependencies between tests, which may result in hard-to-debug test failures. This problem can be mitigated by applying test patterns that minimize data sensitivity.

A test fixture strategy should clarify how much data is included in the default fixture and how often it is restored. Deciding how often to restore the fixture is a balance between performance and reliability of tests. In the NAV test team we've tried the following strategies for restoring the fixture for each:

    1. test function,
    2. test codeunit,
    3. test run, or
    4. test failure.

    With the NAV test features there are two ways of implementing the fixture resetting strategy. In all cases the fixture (re)creation code is modularized. The difference between the approaches is where the fixture creation code is called.

    The fixture can be reset from within each test function or, when a runner is used, from the OnBeforeTestRun trigger. The method of frequent fixture resets overcomes the problem of interdependent tests, but is really only suitable for very small test suites or when the default fixture can be restored very quickly:

[Test]
PROCEDURE Test();
BEGIN
  BackupManagement.DefaultFixture;
  // test code
  ...
END;
    An alternative strategy is to recreate the fixture only once per test codeunit. The obvious advantage is the reduced execution time. The risk of interdependent tests is limited to the boundaries of a single test codeunit. As long as test codeunits do not contain too many test functions and they are owned by a single tester this should not give too many problems. This strategy may be implemented by the test runner, the test codeunit's OnRun trigger, or using the Lazy Setup pattern.

    With the Lazy Setup pattern, an initialization function trigger is called from each test in a test codeunit. Only the first time it is executed will it restore the default fixture. As such, the fixture is only restored once per test codeunit. This pattern works even if not all tests in a test codeunit are executed or when they are executed in a different order. The Lazy Setup may be implemented like:

LOCAL PROCEDURE Initialize();
BEGIN
  IF Initialized THEN
    EXIT;

  BackupManagement.DefaultFixture;
  // additional fixture setup code
  ...

  Initialized := TRUE;
  COMMIT;
END;

[Test]
PROCEDURE Test_A();
BEGIN
  Initialize;
  // test code scenario A
  ...
END;

[Test]
PROCEDURE Test_B();
BEGIN
  Initialize;
  // test code scenario B
  ...
END;

     As the number of test codeunits grows the overhead of recreating the test fixture for each test codeunit may still get too large. For a large number of tests, resetting the fixture once per codeunit will only work when the tests are completely insensitive to any change in test data. For most tests this will not be the case.

The last strategy only recreates the test fixture when a test fails. To detect failures caused by (hidden) data sensitivities, the failing test is then rerun against the newly created fixture. As such, some types of false positives (i.e., test failures not caused by a product defect) can be avoided. To implement this strategy, the test results need to be recorded in the OnAfterTestRun trigger of the test runner to a dedicated table. When the execution of a test codeunit is finished, the test results can be examined to determine whether a test failed and the test codeunit should be run again.

    For the implementation of each of these fixture strategies it is important to consider any dependencies that are introduced between test codeunits and the test execution infrastructure (e.g., test runners). The consequence of implementing the setup of the default fixture in the test runner codeunit may be that it becomes more difficult to execute tests in other contexts. On the other hand, if test codeunits are completely self-contained it becomes really easy to import and execute them in other environments (e.g., at a customer's site).

  • Microsoft Dynamics NAV Team Blog

Microsoft Dynamics NAV 2013 Compatibility with Microsoft Windows 8 and Microsoft Windows Server 2012

    • 12 Comments

    We are proud to announce that Microsoft Dynamics NAV 2013 is compatible with Microsoft Windows 8 and Microsoft Windows Server 2012.

    We support the following versions of Microsoft Dynamics NAV with Microsoft Windows Server 2012 and Microsoft Windows 8:

    • Microsoft Dynamics NAV 2009 R2
    • Microsoft Dynamics NAV 2013

    For more information about requirements, see System Requirements for Microsoft Dynamics NAV 2013.

  • Microsoft Dynamics NAV Team Blog

    Application Test Toolset for Microsoft Dynamics NAV 2013

    • 11 Comments

    We recently shipped the Application Test Toolset for Microsoft Dynamics NAV 2013.

    The supplement is applicable to the following country releases of Microsoft Dynamics NAV 2013:

    W1, AU, CA, DE, DK, ES, FR, GB, IN, IT, MX, NL, NZ, SE and US

    The supplement contains the following per country:

    • Tools for managing and executing tests, capturing code coverage information, and selecting relevant tests out of available tests.
• Between 7,000 and 9,000 real regression tests that we run ourselves, not dumbed-down sample tests.
    • Helper libraries for improving test development through reusing common functionality.
    • Searchable documentation of helper libraries with examples of how to use library functionality.

    You may attempt to apply the W1 version to other country versions, but you should expect parts of the toolset to not work.

    Installation

    To install the supplement, do the following:

    1. Download the supplement from PartnerSource here.
    2. Extract the content.
    3. Import the FOB file found under your corresponding country version folder. For example: .\AppTestToolsetNAV2013\DE\AppTestToolsetNAV2013-DE.fob
    4. After experimenting with the toolset, don’t forget to fill out the survey form and send it to Microsoft. We really appreciate your feedback and we rely on it to improve future versions of the toolset.

    How Do I Use This?

    The simplest way to make use of this supplement is to run the Test Tool page (130021) directly from the development environment. This launches the following page:

    Click Get Test Codeunits and then select All Test Codeunits.

    After Microsoft Dynamics NAV finishes loading all test codeunits, they are displayed on the Test Tool page. To run them, click the Run action and then select All.

It takes about one to two hours, depending on your machine and the setup, to run all tests. When the run is completed, the results are shown in the Test Tool page:

Any changes made to the database through running tests from the Test Tool are automatically rolled back using the Test Isolation testability feature of Microsoft Dynamics NAV 2013. (See the Additional Resources section in this post.)

    During typical development, it is unacceptable to have to wait hours to get results from tests, which is why we have built an advanced test selection feature to help identify the relevant tests. (See the Test Selection section in this post.)

    Alternatively, you can run individual tests or codeunits by selecting them and choosing either Active Line or Active Codeunit after you click the Run action.

    If any test fails, you can attach a debugger session and re-run the failing test. The debugger will then break at the line where the test failed and you will be able to inspect the call stack and examine variables to determine the underlying cause of the failure.

    Extending the Toolset With Your Own Tests

    After you have written your first test codeunit, you can easily integrate it into the tools we provide in this supplement.

To include your own tests, simply run the Test Tool page from the development environment, click the Get Test Codeunits action, and choose Select Test Codeunits. This displays a page listing all available test codeunits, including your own:

    Select the codeunits you would like to add to the tool and press OK. The new test codeunits appear at the bottom of the Test Tool list, and you can now select them and run them just like any of the tests we included.

Again, Test Isolation prevents your tests from persisting changes to the database. During development it may be beneficial to actually see the output produced by the tests. It is possible to disable Test Isolation just by running the test codeunit directly from the development environment; however, we instead recommend attaching a debugger session, breaking at the test entry point, and then stepping through test execution and inspecting variables to determine whether your test is behaving as expected.

    Speeding Up Development of Your Own Tests

    The tests that we have developed are built on top of a layer of libraries that contain helper functionality to automate many aspects of Microsoft Dynamics NAV. For example, the library named Library – Sales contains functionality related to working with customers and sales documents, including creating new customers, sales headers, sales lines and posting sales documents. The library is extensive and has functionality in many areas of the product, such as finance, service, jobs, warehousing, inventory, etc.

    Instead of re-inventing the wheel when developing your own tests, we highly suggest that you look into our existing helper functionality for functions you can leverage.

    To help you find your way around the libraries, we have shipped a Microsoft Compiled HTML Help file (*.chm), which is bundled together with the .fob file you installed. When you open the .chm file, you are prompted with the following window:

This lists all our libraries and the functions inside them. However, normally you don't know which library to look in; you can search for it from the Search tab. Try searching for "finance charge memo" and you will have a couple of choices to pick from:

    Code Coverage Tools

Code coverage is the means of tracking which parts of the application code have been exercised during some activity. In Microsoft Dynamics NAV, code coverage is recorded by C/AL code line, and in addition to knowing whether a code line was exercised, it also records the number of times it was executed.

    The code coverage activity that we record can be any interaction with Microsoft Dynamics NAV, be it manual user interaction, automated test execution, NAS, Web services, etc. You can, of course, record code coverage of your own tests exercising your own objects.

    The toolset includes a page (130002), Code Coverage List, which you can use to track code coverage. Run the page from the development environment:

    From this page you can start/refresh/stop the code coverage recorder. If you click the Start action, the code coverage engine is turned on and code coverage is captured. However, you will not be able to see any updated information before you click either Refresh or Stop, at which time you are presented with the code coverage information:

    The information contains coverage of objects, triggers/functions and individual lines (code and empty) as determined by the column Line Type. Only lines of type Code can have coverage. Lines of type Trigger/Function show the average coverage of all code lines in the trigger/function. Lines of type Object show the average coverage of all code lines inside the object.

    From the above picture, you can read that the activity exercised 33.93% of the Currency table (4). It covered 100% of the OnModify trigger and that comes from 100% of a single Code line.

    It is often desirable to filter on Line Type = Object to first get a high-level overview of the coverage result:

    Then from here, you can filter to look at individual objects and expand the Line Type filter to include triggers/functions as well:

    This way you can drill-down into the results starting from a high-level view going to a low-level view.

    Note #1: Code coverage is recorded globally for all sessions when using this tool, so make sure you are running in a controlled environment so you don’t have any activity from unaccounted sessions.

Note #2: Only objects that are touched by the activity are recorded, meaning the coverage of any object not in the list is implied to be zero. If you would like to force the recorder to include specific objects even if they are not covered, you can use the Load objects action and select the relevant objects from the subsequent page. This forces the code coverage engine to load these objects and provide information on them even when no lines are covered.

    Test Selection

    Now that we have all the building blocks in place, I’d like to talk about an advanced feature we included with the tooling.

As mentioned previously, having to wait hours to run all tests is not feasible from a development point of view. Therefore we shipped the Test Selection feature, which helps you narrow the set of tests down to the relevant ones.

    The feature works by analyzing the code coverage data from individual test codeunits and comparing it to the set of objects that have the Modified field set to Yes in the database.

To use this feature, run the Test Tool page, go to the Actions tab, and click the Import/Export Test Map action. On the request page, make sure the direction is Import and click OK. Browse to the bundled "AppTestToolsetNAV2013-<country code>-Map.txt" file and import it. This takes a couple of seconds. After it is done, click the Get Test Codeunits action. The prompt will now include a third option:

    Select this third option and the tool will automatically detect the relevant tests to run and add them to your current suite.

    Note #1: In typical circumstances you would want to make sure your suite is empty before using this feature.

    Note #2: There is a small risk this feature will not identify all relevant tests in unusual circumstances. Thus we strongly recommend running the full regression suite before shipping anything to the customer.

This feature also integrates with your own tests. Once enabled (by loading the Test Map), the information auto-updates when any test is run – including your own tests. This means that you can load the map, run your tests, and then export the map to another text file. You can then load the new map into another database, and the test selection feature will be able to suggest your own tests based on modified objects in that other database. If your test codeunit is not present in the database, you will be prompted with a list of missing test codeunits that could not be added. Import the missing test codeunits into the database and re-run the test selection feature.

    Additional Resources

    -Simon Ejsing

  • Microsoft Dynamics NAV Team Blog

    New finsql.exe Command Prompt Options

    • 11 Comments

Based on partner feedback and the increasingly important goal of automating processes around a Microsoft Dynamics NAV installation, we're pleased to announce that we have added company-related and database-related command-line options to finsql.exe, in addition to the existing options. Starting with hotfix 2791548, you can use the following commands to perform development tasks without using the Microsoft Dynamics NAV Development Environment.

    • Create company - new
    • Delete company - new
    • Rename company - new
    • Create database - new
    • Compile objects
    • Export objects
    • Import objects

    Note

    To use the new and changed command-line options, you must install hotfix 2791548 (requires access to PartnerSource/CustomerSource).

    You can start the Microsoft Dynamics NAV Development Environment by running finsql.exe at the command prompt. By default, finsql.exe is located at C:\Program Files (x86)\Microsoft Dynamics NAV\70\RoleTailored Client\. The extended full syntax is:

Finsql.exe [command=<command> | designobject=<object type> <object ID>,] [servername=<server>,] [database=<database>,] [collationname=<collation>,] [company=<company>,] [newcompany=<new company>,] [file=<file>,] [filter=<filter>,] [id=<id>,] [logfile=<logfile>,] [objectcache=<size>,] [username=<user name>,] [password=<password>,] [ntauthentication=<yes|no|0|1>]

    The full documentation is available on the MSDN Library. Additional documentation is available at Development Environment Commands.
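
For example, the following command (server, database, and file names are placeholders) exports a range of codeunits to a text file without opening the development environment:

finsql.exe command=exportobjects, file=C:\Temp\Codeunits.txt, servername=MyServer, database="Demo Database NAV (7-0)", filter="Type=Codeunit;ID=50000..50999", logfile=C:\Temp\export.log

If the command fails, the reason is written to the log file; if it succeeds, no log file is created.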

    Parameters

    The following list describes the parameters that you can use with finsql.exe.

collationname

    The collation to use when you create a new database. The value must be one of the following:

      • A full language culture name. For example, da-DK for Danish or hu-HU for Hungarian.
      • A SQL Server collation name without case or accent. For example, Latin1_General_100.
      • A SQL Server collation name with case and accent. For example, Danish_Greenlandic_100_CS_AI.

    Use this optional parameter when you create a database with the command=createdatabase parameter.

    command

    The development command that you want to run. The possible values include:

      • CompileObjects
      • CreateCompany
      • CreateDatabase
      • DeleteCompany
      • ExportObjects
      • ImportObjects
      • RenameCompany

    If you specify this parameter and run a command, then the development environment does not open.

    For more information, see Development Environment Commands.

    company

    The name of the company that you want to create, delete, or rename.

    A company name cannot be more than 30 characters.

    If you create, delete, or rename a company with the command parameter, then the company parameter is required.

    database

    If you use the command=CreateDatabase parameter, then the database parameter specifies the name of the new database that you want to create. In this case, the database parameter is required.

    For all other commands, the database parameter specifies the database that you want to open.

    If you do not specify both the servername and the database parameter, then the database server and database that are stored in the .zup file are used. If you do not specify the database parameter but you do specify the servername parameter, then the Open Database window opens so that you can specify the database name.

    For more information, see Saving Setup Parameters in the Zup File.

    designobject

    The type and ID of the object that you want to design. The possible values of the object type include:

      • Table
      • Page
      • Report
      • Codeunit
      • Query
      • XMLport

    The possible values of the object ID are 0 and any ID of an existing object. If you specify 0 for the object ID, then you open a new object to design.

    For more information, see DesignObject.
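    For example, to open an existing table directly in the designer (table 18 is the standard Customer table; the server and database names are placeholders):

      finsql.exe designobject=Table 18, servername=MyServer, database="Demo Database NAV (7-0)"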

    file

    The path and file name. Use the file parameter for the following:

      • To specify the path and file name to which you export objects if you use the command=ExportObjects parameter. The file must have a .fob or .txt file name extension.
      • To specify the path and file name from which you import objects if you use the command=ImportObjects parameter. The file must have a .fob or .txt file name extension.
      • To specify the directory for the new database file if you use the command=createdatabase parameter.

    For more information, see ExportObjects and ImportObjects.
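    For example, a sketch of importing objects from a .fob file (the path and database name are placeholders):

      finsql.exe command=ImportObjects, file=C:\Temp\MyObjects.fob, servername=MyServer, database="Demo Database NAV (7-0)"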

    filter

    A filter on the Object table.

    Use this parameter if you use the command parameter to compile, export, or import objects.

    For more information, see Development Environment Commands.
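    For example, the file and filter parameters can be combined to export a range of objects to a text file. The object ID range below is hypothetical, as are the server, database, and path names:

      finsql.exe command=ExportObjects, file=C:\Temp\Objects.txt, servername=MyServer, database="Demo Database NAV (7-0)", filter=Type=Table;ID=50000..50099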

    id

    Specifies the ID of the .zup file. By default, the setup parameters are stored in and retrieved from the fin.zup file. If you specify the id parameter, then the setup parameters are stored in and retrieved from a file that is named <id>.zup.

    By default, the fin.zup file is located at C:\users\<user name>\AppData\Roaming\. To change the location of the .zup file, specify the path and ID in the id parameter.

    For more information, see Saving Setup Parameters in the Zup File.
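    For example, assuming the folder exists (the path is illustrative only), the following stores and reads setup parameters in C:\NAVConfig\dev.zup instead of the default fin.zup:

      finsql.exe id=C:\NAVConfig\dev, servername=MyServer, database="Demo Database NAV (7-0)"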

    logfile

    Specifies the path and file name for the file that contains error messages that result from running finsql.exe with the command parameter. If there are no errors, then a log file is not created.

    For more information, see Development Environment Commands.

    newcompany

    The new name of the company that you rename.

    A company name cannot be more than 30 characters.

    If you rename a company with the command parameter, then the newcompany parameter is required.

    ntauthentication

    Specifies if you want to use NT authentication. The possible values are yes, no, 1, or 0. If you specify the username and password parameters, then you must specify ntauthentication=no or ntauthentication=0.

    objectcache

    Specifies the size of the object cache, in kilobytes. Objects such as code, descriptions, and windows that are used on the computer that is running the development environment are stored in the object cache.

    The default value is 32000.

    password

    Specifies the password to use with the username parameter to authenticate to the database. If you do not specify a user name and password, then the command uses the Windows user name and password of the current user to authenticate to the database.

    servername

    Specifies the name of the database server.

    If you do not specify both the servername parameter and the database parameter, then the database server and database that are stored in the .zup file are used. If you do not specify the servername parameter but you do specify the database parameter, then the Open Database window opens so that you can specify the database server name.

    For more information, see Saving Setup Parameters in the Zup File.

    temppath

    Specifies the path of the location to store temporary files that are created while Microsoft Dynamics NAV runs. These files are automatically deleted when Microsoft Dynamics NAV is closed.

    By default, these files are put in the Temp folder for the user, such as C:\Users\<user name>\AppData\Local\Temp\.

    username

    The user name to use to authenticate to the database. The user name must exist in the database. If you do not specify a user name and password, then the command uses the Windows user name and password of the current user to authenticate to the database.

    Warning

    If User Access Control (UAC) is turned on and you do not specify to run the Command Prompt window as Administrator, then the Command Prompt window runs as a standard user. In this case, if you do not specify the username parameter and the current Windows user is an Administrator, then the command is run as the standard user.

    If you specify the username parameter, then you must also specify the password parameter and the ntauthentication parameter must be no or 0.

    For more information about database users and permissions, see Setting Database Owner and Security Administration Permissions in the MSDN Library.
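    For example, a sketch of compiling all objects using database authentication rather than Windows authentication (the credentials are placeholders):

      finsql.exe command=CompileObjects, servername=MyServer, database="Demo Database NAV (7-0)", username=navdev, password=P@ssw0rd, ntauthentication=no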


    Best regards from the NAV Management Tools team


  • Microsoft Dynamics NAV Team Blog

    RDLC Report and Performance in Microsoft Dynamics NAV

    • 11 Comments

    It has been a while since I last blogged, and I am taking the chance now to post on a very delicate topic. The main focus of this post is performance (typically, Out Of Memory exceptions) with Microsoft Dynamics NAV 2009 R2, Microsoft Dynamics NAV 2013, and Microsoft Dynamics NAV 2013 R2.

    I would encourage you to post your comments and thoughts. Have a good read.

    Microsoft Dynamics NAV 2009 R2 Considerations (RDLC 2005 - Report Viewer 2008)

    The entire Microsoft Dynamics NAV 2009 stack (RTM, SP1, R2) is built for the x86 platform. This means that both the client (Microsoft.Dynamics.NAV.Client.exe) and the server (Microsoft.Dynamics.NAV.Server.exe) are 32-bit components of the NAV platform. RDLC reporting (Enhanced Reporting, in recent NAV terminology) in Microsoft Dynamics NAV 2009 consists of a Report Viewer .NET control targeted at Windows Forms, which Microsoft Dynamics NAV hosts in a modal page (roughly speaking, a WinForm) within the RTC boundaries.

    Report Viewer, in principle, accepts two items:

    - a metadata definition: a plain XML file (Report.rdlc) that defines the structure the report rendering runtime uses

    - a dataset: a serialized XML file that contains the data to be rendered in the way defined in the .rdlc definition

    With Microsoft Dynamics NAV 2009, Report Viewer works client side to render the report for the user (the Preview layout rendering extension), and therefore it needs both the RDLC definition and the dataset streamed completely from the server to the client. Since Microsoft Dynamics NAV 2009 SP1, this streaming process populates client memory using a chunking method that can be summarized as follows.

    SQL Server processes the query and generates a complete result set. The result set is sent to the Microsoft Dynamics NAV Server as ordinary TCP packets, and while the Microsoft Dynamics NAV Server receives these packets from SQL Server, it forwards the result set in chunks to the client, clearing its own memory as soon as the client has received each chunk. This was introduced to avoid memory buildup on the server side, which acts only as a routing point for packets and chunks from SQL Server to the Microsoft Dynamics NAV Windows client. If you open Task Manager on both the middle-tier machine and the client machine while a heavy report (or any report) is processing, you will notice that the server-side memory footprint stays constant and fairly low, while the client-side footprint keeps growing until it reaches a physical limit.

    When it reaches this physical limit, you receive a typical error message (an explicit Out Of Memory exception).

    And, most of the time, Report Viewer swallows the error message and simply displays a single blank page (an implicit Out Of Memory exception) or several pages with random mixed string values (a blurred Out Of Memory exception).

    I do not want to go deeper into the technicalities that lie beneath, but you have to consider the following:

    1. The Microsoft Dynamics NAV 2009 R2 Role Tailored client is a 32-bit application (with a theoretical limit of 2 GB of memory per process).
    2. The Microsoft Dynamics NAV 2009 R2 Role Tailored client and report(s) share the same application domain (which means the same memory space).
    3. The Report Viewer control runs in a sort of sandbox mode inside the Microsoft Dynamics NAV WinForm, so the memory available to it is even more limited (approx. 1 GB).

    Based on the assumptions above, my findings on performance with heavy reports are the following:

    1. The Report Viewer Preview rendering extension within the Role Tailored client raises an Out Of Memory exception when client process memory reaches approximately 0.8 – 1.1 GB (the exact point varies with factors such as the OS, hardware type, available resources, etc.).
    2. For a typical Microsoft Dynamics NAV dataset (60 – 80 columns on average), there is a potential risk of Out Of Memory in the 40K to 100K row range. The exact threshold depends on the number of columns in the dataset and on their quality (e.g., which data types they belong to, and whether and how they are populated).

    Putting all these considerations together, these are the actions that you might take (or have to take), depending on your scenario, within the Microsoft Dynamics NAV 2009 R2 stack:

    1. If your report raises an Out Of Memory exception in a range lower than or close to 80–90K rows, then you can try to optimize the report by reducing the dataset. Reducing the dataset means:
      1. Write optimal code for the RDLC report (e.g., use CurrReport.SKIP when needed – see the sketch after this list; avoid using data items just for CALCSUMS and use record C/AL variables instead; rewrite the report to use drill-through to reach details if the report requires them – so that calculations can still be moved to C/SIDE – or refactor to use a hyperlink to another report for details; etc.)
      2. Reduce the dataset columns (e.g., eliminate section controls that you do not use in the RDLC report)
      3. Reduce the dataset rows (refactor as much as possible to push into the dataset only the data that needs to be printed)
    2. If your report is already in a range equal to or higher than 80–90K rows, then you have no other choices with NAV 2009 R2 than the following:
      1. Delete the RDLC report layout and enable Classic client report fallback (this is the solution I would warmly suggest, and it is a real finger-snap solution)
      2. (This is pretty obvious.) Apply filters on the request page (or through C/AL code) to reduce the number of rows in the dataset, and instead of printing the report in one single shot, print it N times.
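    As a minimal sketch of the CurrReport.SKIP suggestion above (the data item and fields are illustrative, not taken from a specific standard report):

      // In the OnAfterGetRecord trigger of the Sales Line data item:
      // skip lines that would print nothing, so they never enter the dataset.
      IF ("Sales Line".Quantity = 0) AND ("Sales Line".Description = '') THEN
        CurrReport.SKIP;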

    And that is all about the Microsoft Dynamics NAV 2009 R2 stack and how to solve or work around the problem in the most feasible (and easiest) way within this version.

    Microsoft Dynamics NAV 2013 (RDLC 2008 – Report Viewer 2010) and NAV 2013 R2 (RDLC 2010 – Report Viewer 2012) are another story, and another type of challenge.

    To summarize, the milestone changes between Microsoft Dynamics NAV 2009 and Microsoft Dynamics NAV 2013 (and R2) are the following:

    1. Microsoft Dynamics NAV Server is now 64-bit (finally…), while the Windows client still remains a 32-bit application. This means that the client is still a physical bottleneck, and the considerations about memory footprint and dataset volume reported above for Microsoft Dynamics NAV 2009 R2 are still valid.
    2. You can no longer enable Classic client report fallback; you have to use RDLC reports in every case.

    With these two new constraints in mind, here is how you could work around or resolve the performance problem with Microsoft Dynamics NAV 2013 / Microsoft Dynamics NAV 2013 R2:

    1. The same considerations about optimizing reports apply: if you receive (or expect to receive) an Out Of Memory exception, you might optimize the report as much as you can, IF you forecast that in the end your dataset will never exceed 70–90K rows.
    2. If you have heavy reports with a dataset volume higher than 70–90K rows, then this is what you could do:
      1. Filter data and print the report N times, wherever possible (use common sense)
      2. Use the Job Queue to enable server-side printing. What is server-side printing? It is simply running Report Viewer entirely in the background through NAS services (that is, using background sessions via the STARTSESSION C/AL statement). Running server side means running in a 64-bit context, so Report Viewer (a .NET component targeted for Any CPU, and therefore 64-bit enabled) will use ALL the memory available from the OS (e.g., with 32 GB it could consume nearly all of it when working with several MILLION dataset rows – I have seen it with my own Italian eyes), and you will succeed in printing the report or, better, in using SAVEASPDF to generate a PDF file to be consumed by the user.
      3. Use the STARTSESSION C/AL statement as you like in your own custom code: after gathering user filters and parameters, pass them to a codeunit that filters the record(s) and runs SAVEASPDF in the background, as per your exotic flavor (see the sketch after this list).
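    As a minimal sketch of options 2 and 3 above (the codeunit name, report number, and file path are hypothetical, not objects from this post):

      // Calling code, e.g. in a page action, after applying the user's filters to SalesHeader.
      // SessionId is an Integer variable; the codeunit runs in a background session on the server.
      STARTSESSION(SessionId, CODEUNIT::"Background Report Runner", COMPANYNAME, SalesHeader);

      // In codeunit "Background Report Runner" (TableNo = Sales Header), OnRun trigger:
      // this runs under the 64-bit service tier, so Report Viewer can use all available OS memory.
      REPORT.SAVEASPDF(50667, 'C:\Temp\HeavyReport.pdf', Rec);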

    THE FUTURE

    The Microsoft Dynamics NAV Core team is fully aware of these scenarios and is working hard on improving RDLC report performance and the reporting experience in future versions of Microsoft Dynamics NAV.

    NOTE:

    In this blog post you will find a set of objects (1 Report, 1 Codeunit, 1 Page) to easily simulate an Out Of Memory exception or Save as PDF the report in background.

    Just import this NAV object set and run page 50666. You can choose to simulate an Out Of Memory exception by clicking the appropriate action and then previewing the report, or you can choose to save the same report as a PDF by enabling a background session that performs the SAVEASPDF action server side.

    Be sure to have at least 4 GB of available memory server side, and just wait for the background session to finish its activity and stream out the content of the report (this should take close to 5–6 minutes with a standard CRONUS database, depending on resources).

    These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.

    Duilio Tacconi (dtacconi)

    Microsoft Dynamics Italy

    Microsoft Customer Service and Support (CSS) EMEA

    A special thanks to Peter Borring Sørensen & Torben Wind Meyhoff from the Microsoft Dynamics NAV core team.

  • Microsoft Dynamics NAV Team Blog

    Microsoft Dynamics NAV 2013 R2 available in the Windows Azure Portal under MSDN subscription

    • 11 Comments

    Ever wondered what it is like to work with Microsoft Dynamics NAV on Windows Azure - but found it too cumbersome even with our nice Windows Azure provisioning tools?  Then this might be something for you. 

    Today we have made Microsoft Dynamics NAV 2013 R2 easily available on Windows Azure with just a few clicks. The Microsoft Dynamics NAV 2013 R2 image is an exclusive offer for MSDN subscribers and includes a fully functional, ready-to-use Microsoft Dynamics NAV 2013 R2 installation, including Microsoft SQL Server. The MSDN subscription gives you the well-known discounted prices and allows you to use the image for development, test, and demo purposes, but not for production purposes.

    How to get it? Make sure you have an MSDN subscription and the subscription has Windows Azure benefits activated. Then follow these steps:

     

    1. Sign in to the Windows Azure Portal using your MSDN subscription.
    2. Choose the + sign at the bottom of the Portal page.
    3. Choose Compute, point to Virtual Machine, and then choose From Gallery.
    4. Select the MSDN filtering checkbox. This filters the list to show only the available MSDN images.

    5. Choose the Microsoft Dynamics NAV 2013 R2 image.
    6. Choose the arrow in the lower-right corner.
    7. Fill in the relevant fields about the virtual machine as described in the following list:
      1. Specify the name of the virtual machine.
      2. Specify the size of the virtual machine, such as Medium.
      3. Specify the user name and password for connecting to the virtual machine.
      4. Specify the region where you want to host your virtual machine, such as West Europe.
      5. Verify the remaining parameters. In most cases, the default values are sufficient.
    8. Choose the Finish icon to start the provisioning process.
    9. When the image has been provisioned after a few minutes, you can use Remote Desktop to connect to it. On the Azure Portal Virtual Machine page, choose the newly created image, and then choose Connect. Use the credentials that you specified in step 7.

    When you are connected to the VM, you are welcomed to the Microsoft Dynamics NAV 2013 R2 virtual machine on Windows Azure.

    No further configuration is required and you can immediately start using the clients and the development environment.

    Best regards

    Kamil Koclega & Morten Jensen from the Dynamics NAV team
