mfp's two cents

...on Dynamics AX Development!

    Upgrading to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009

    • 12 Comments

    Deepak Kumar and I hosted a session on the upgrade process at Convergence in Orlando. A big thank you to everyone who attended; we are truly humbled by the great feedback we are receiving for our session. An even bigger round of applause is due to Paul Langowski from Tectura, who joined us on stage to share his experience from two "double-leap" upgrades he has worked on.

    For those of you who weren't able to attend, I've attached the slide deck to this post.

    Here is the session abstract:

    Upgrading to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009; 3/13/2008 10:00AM-11:00AM; W204

    This session will take you through the end-to-end flow of upgrading from Microsoft Dynamics AX 3.0 to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009. You will see an overview of the code upgrade as well as the data upgrade process, along with recommended best practices. We will also show examples of the improved upgrade documentation, with a focus on the implementation guidelines, which now include a dedicated upgrade section. In addition, you'll get tips on performing data upgrade and code upgrade more efficiently and effectively, and on the extra steps needed if you've extended your solution. This session is designed for partners and customers who are planning to upgrade or would like to learn more about a Microsoft Dynamics AX upgrade.

    Now available: Dynamics AX 2009 Pre-Release (CTP3) Demonstration Toolkit

    • 3 Comments

    We are happy to announce that the new Microsoft Dynamics AX 2009 Pre-Release (CTP3) Demonstration Toolkit is now available on Partner Source.

    The Dynamics AX 2009 Pre-Release (CTP3) VPC is an unsupported, ready-to-use pre-release of AX 2009. The Virtual PC image enables you to demonstrate the features of Microsoft Dynamics AX 2009 using a single PC or laptop computer.

    AX 2009 Pre-Release (CTP3) Demonstration Toolkit Demo Data.

    This version of the Dynamics AX Pre-release Demonstration Toolkit is based on the new Contoso Demo Data Company. The current demo data version will only have base data and no transactions. We are planning to upload a demo data version with transactions shortly.

    We did it (again)!

    • 5 Comments

    On behalf of the Dynamics AX 2009 development team I'm proud to announce that as of build 5.0.529.0 we have reached zero X++ best practice errors in the SYS layer.

    Version 4.0 was the first release of Dynamics AX without any best practice errors. Reaching this level of code quality was a mammoth effort, due to the huge backlog and the many new TwC-related BP rules.

    In Dynamics AX 2009 the goal was significantly easier to reach, as we didn't have any backlog. However, we have introduced more than 60 new BP rules, including validation of XML documentation, inventory dimensions, upgrade scripts, workflow, list pages, AIF, and performance. On top of this, the SYS layer now contains much more functionality than ever before - the AOD file alone has grown by more than 50%. All of this now conforms to the Best Practice rules implemented.
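
    To give a flavor of one of the new checks: the XML documentation rules look at the /// documentation on X++ methods and flag documentation that is missing or doesn't match the method signature. Below is a minimal sketch of a documented method; the method and the extended data types used are made up for the illustration.

        /// <summary>
        /// Calculates the line amount for a quantity at a given unit price.
        /// </summary>
        /// <param name="_qty">
        /// The quantity ordered.
        /// </param>
        /// <param name="_price">
        /// The unit price.
        /// </param>
        /// <returns>
        /// The line amount.
        /// </returns>
        public Amount calcLineAmount(Qty _qty, Price _price)
        {
            return _qty * _price;
        }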

    What does this mean for you as an X++ partner developer?

    • When you use the best practice tool and it reports a BP error, you should pay attention to it: it means you have introduced a problem.
    • The high quality of the code in the SYS layer should overall make your life easier, as it conforms to a set of standards, making the code easier to understand across the entire application.

    For more information on MorphX Best Practices, see the Channel 9 Screencast or MSDN.

    For more information on the importance of static code analysis see: Compiler warnings - and so what?

    Late night discussion on software development #3: Unit testing

    • 1 Comment

    “It is a true blessing to have lots of automated unit tests,” I say aloud, trawling for a response.

    “It certainly is,” is the response I get back.

    I throw out another statement: “It just feels so good to have over 80% code coverage. It makes you almost invulnerable. You can change just about anything, and if the tests pass, you know you did a good job.”

    “It makes you blind, is what it does.” is the unexpected answer I get back.

    I look at Søren, who is sitting quietly and comfortably in his armchair. I lean forward. “What do you mean? I feel the exact opposite of being blind – I have full transparency into my test cases, I know exactly what parts of the code they exercise, and I know if any of them fail.”

    “Tell me, why do you write unit tests?”

    “To have high coverage numbers.”

    “And why would you like to have high coverage numbers?”

    “So I know my code is fully exercised by the unit tests.”

    “And why would you like the code to be fully exercised by unit tests?”, Søren keeps pushing.

    I feel like this is a cat-and-mouse chase, but I’m up to it. “So I know my code works.”

    “Assuming your unit tests passed once, why do you keep them?”

    “Because the code may stop working due to a change.”

    “Ok, so let me ask you again: Why do you write unit tests?“

    “So I know my code works at all times”, I answer.

    “Or as I prefer to phrase it: ‘to prevent regressions.’ So let me ask you: How good is your protection against regressions if your code coverage is 80%?”

    The answer is almost out of my mouth before I stop myself. Søren has a point. 80% code coverage doesn’t mean 80% protection. But what does it mean? Just because a unit test exercises the code, it doesn’t mean it will detect all possible failures in the code. I’m silent; I don’t know what to reply.

    After a while Søren says: “Zero code coverage means you haven’t done enough, 100% code coverage means you don’t know what you got.”

    “So why do we measure code coverage at all?”

    “Because it’s easy to measure, easy to understand, and easy to fool yourself. It is like measuring the efficiency of a developer by the number of code lines he can produce. A mediocre developer writes 100 lines per hour, a good developer writes 200 lines; but a great developer only has to write 50 lines.”

    I can see his point. I would never measure a developer on how many lines of code he writes.

    “Tell me, how often do your unit tests fail?” Søren asks.

    “Occasionally; but really not that frequently.”

    “What is the value of a unit test that never fails?”

    “It ensures the code still works,” I say, feeling I'm stating the obvious.

    “Are you sure? If it never fails, how will it detect the code is buggy?”

    “So you are saying a unit test that never fails doesn’t add protection?”

    “Yes. If it never fails, it is just hot air, blinding you from real problems, and wasting your developers’ time”.

    “So we should design our unit tests to fail once in a while?”

    “Yes, a unit test that truly never fails is worthless! Actually, it is even worse: it is counterproductive. It took time to write, it takes time to run, and it adds no value.”

    “Well, it does add the value that the code has been exercised.”

    “I have seen unit tests without a single assert statement. Yes, it certainly exercises some code, but unless the code crashes it offers no protection.”

    “Why would anyone write a unit test without assert statements?”

    “To get the high code coverage numbers their managers demand.”
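
    As a concrete illustration of what Søren describes, here is roughly what the two flavors could look like as X++ test methods on a SysTestCase class. The SalesLineCalculator class is invented for the example; only the second test can ever catch a wrong result.

        // Coverage-only: exercises the code but asserts nothing, so it can
        // only fail if the code throws an exception.
        void testCalcLineAmountCoverageOnly()
        {
            SalesLineCalculator calculator = new SalesLineCalculator(); // hypothetical class under test
            ;
            calculator.calcLineAmount(10, 25.0);
        }

        // The same scenario with an assert: a wrong result now fails the test.
        void testCalcLineAmountIsQtyTimesPrice()
        {
            SalesLineCalculator calculator = new SalesLineCalculator(); // hypothetical class under test
            ;
            this.assertEquals(250.0, calculator.calcLineAmount(10, 25.0));
        }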

    I pour us another glass of red wine, while I’m mentally trying to grasp the consequences of what I just learned. By measuring code coverage we are actually fostering an environment where worthless unit tests are rewarded just as much as priceless unit tests.

    “As I see it we need to distinguish the good from the bad unit tests. Any ideas?” I ask.

    “The bad unit tests come in three flavors: those that don’t assert anything but are written solely to reach a code coverage bar; those that test trivial code, such as getters and setters; and those that prevent good fixes.”

    “I can understand the first two, but please elaborate on the last one.”

    “Suppose you write a unit test that asserts an icon in the toolbar has a certain resource ID. When the icon eventually is updated, this unit test will fail. It adds no value, as the icon was supposed to be updated. This means the developer has two hard-coded lists to maintain when changing icons: one in the product code and one in the unit tests.”

    “Got it, how would you propose a better unit test for this case?”

    “Well, a unit test verifying that all icons have a valid resource ID and that no two icons share the same resource would be a good start.”
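
    To sketch the difference as X++ SysTestCase-style test methods (the ToolbarIcons helper and its name-to-resource-ID map are invented for the illustration):

        // Brittle: hard-codes the resource ID of a single icon. It fails every
        // time the icon is legitimately updated, and protects nothing else.
        void testSaveIconResourceId()
        {
            this.assertEquals(7216, ToolbarIcons::resourceId('Save'));
        }

        // More robust: every registered icon must have a positive resource ID,
        // and no two icons may share one. It never needs editing when an icon
        // changes, yet it catches mistakes a customer would actually notice.
        void testIconResourceIdsAreValidAndUnique()
        {
            Set           usedIds        = new Set(Types::Integer);
            Map           icons          = ToolbarIcons::registeredIcons(); // icon name -> resource ID
            MapEnumerator iconEnumerator = icons.getEnumerator();
            str           iconName;
            int           resourceId;
            ;
            while (iconEnumerator.moveNext())
            {
                iconName   = iconEnumerator.currentKey();
                resourceId = iconEnumerator.currentValue();

                this.assertTrue(resourceId > 0,
                    strFmt('%1 has an invalid resource ID', iconName));
                this.assertTrue(!usedIds.in(resourceId),
                    strFmt('%1 reuses resource ID %2', iconName, resourceId));
                usedIds.add(resourceId);
            }
        }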

    I can certainly see the difference: the latter unit test wouldn’t need updating every time an icon was changed, but it would detect problems our customers would notice. I wonder how many of our unit tests fall into these three categories. I need to find a way to reward my developers for writing good unit tests.

    Søren interrupts my chain of thought, “What do you do when a unit test fails?”

    “We investigate. Sometimes it is the unit test that is broken; sometimes it is the product code that is broken.”

    “Who makes that investigation?”

    “The developer who has written the product code that makes the unit test fail.”

    “So basically you are letting the wolf guard the sheep!”

    “What do you mean?”

    “If developer A writes a unit test to protect his code, and developer B comes along and breaks the code, you let developer B judge whether it is his own code or developer A’s code that is to blame. Developer A would grow quite sour if he detected that developer B was lowering the defense in the unit test to get his changes through.”

    “Yes, that certainly wouldn’t be rewarding. I guess it would be much more rewarding for developer A to receive an alarm when developer B broke his test. He would then know that his test worked and he could work with developer B to resolve the issue.”

    Søren nods.

    It suddenly dawns on me. Writing a bad unit test wouldn’t be rewarding, as it would never fail; whereas a good unit test would eventually catch a fellow developer’s mistake. I know the developers in my team well enough to understand the level of sound competition such an alert system would cause. I set my glass on the table. I have some infrastructure changes to make on Monday.

    Do you want to write more secure code?

    • 1 Comment

    The Developer Highway Code, written by Paul Maher of DPE, is a concise handbook that captures and summarises the key security engineering activities that should be an integral part of the software development process. This companion guide should be a must for any developer, architect, tester, etc. undertaking software development. The book is presented in easy-to-read checklist form, covering essential guidance on writing and releasing secure code. And it is available for free!

    “Developers are a most critical component to a more safe computing experience for all computer users in the UK and around the world. Code written for a program or operating system, or process must be able to withstand the most aggressive attempts to ‘break it’. From games to mission-critical operations, secure code will form the base for success or disaster. The Developer Highway Code should be a required reading.” - Edward P Gibson, Chief Security Advisor, Microsoft Ltd

    Where can you get The Developer Highway Code?

    Download the full book as a PDF or as an XPS.
