mfp's two cents

...on Dynamics AX Development!

    Do you want to write more secure code?

    • 1 Comment

    The Developer Highway Code, written by Paul Maher of DPE, is a concise handbook that captures and summarises the key security engineering activities that should be an integral part of the software development process. This companion guide should be a must for any developer, architect, or tester undertaking software development... The book is presented in an easy-to-read checklist form, covering essential guidance on writing and releasing secure code. And it is available for free!

    “Developers are a most critical component to a more safe computing experience for all computer users in the UK and around the world. Code written for a program or operating system, or process must be able to withstand the most aggressive attempts to ‘break it’. From games to mission-critical operations, secure code will form the base for success or disaster. The Developer Highway Code should be required reading.” – Edward P Gibson, Chief Security Advisor, Microsoft Ltd


    Where can you get The Developer Highway Code?

    Download the full book as a PDF or as an XPS.


    Upgrading to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009

    • 12 Comments

    Deepak Kumar and I hosted a session on the upgrade process at Convergence in Orlando. A big thank you to everyone who attended; we are truly humbled by the great feedback we are receiving for our session. An even bigger round of applause is due to Paul Langowski from Tectura, who joined us on stage to give testimony on two "double-leap" upgrades he has worked on.

    For those of you who weren’t able to attend, I’ve attached the slide deck to this post.

    Here is the session abstract:

    Upgrading to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009; 3/13/2008 10:00AM-11:00AM; W204

    This session will take you through the end-to-end flow of upgrading from Microsoft Dynamics AX 3.0 to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009. You will see an overview of the code upgrade as well as the data upgrade process along with recommended best practices. Examples of the improved upgrade documentation, with a focus on the implementation guidelines that come with a dedicated section for upgrade, will also be covered. This session will also include tips on performing data upgrade and code upgrade more efficiently and effectively. Additionally, you'll learn tips on the extra steps needed to upgrade if you've extended your solution. This session is designed for partners and customers who are planning to upgrade or would like to learn more about a Microsoft Dynamics AX upgrade.


    Late night discussion on software development #3: Unit testing

    • 1 Comment

    “It is a true blessing to have lots of automated unit tests”, I say aloud, trawling for a response.

    “It certainly is”, is the response I get back.

    I throw out another statement: “It just feels so good to have over 80% code coverage. It makes you almost invulnerable. You can change just about anything, and if the tests pass, you know you did a good job.”

    “It makes you blind, is what it does”, is the unexpected answer I get back.

    I look at Søren, who is sitting quietly and comfortably in his armchair. I lean forward. “What do you mean? I feel the exact opposite of being blind – I have full transparency into my test cases, I know exactly what parts of the code they exercise, and I know if any of them fail.”

    “Tell me, why do you write unit tests?”

    “To have high coverage numbers.”

    “And why would you like to have high coverage numbers?”

    “So I know my code is fully exercised by the unit tests.”

    “And why would you like the code to be fully exercised by unit tests?”, Søren keeps pushing.

    I feel like this is a cat-and-mouse chase, but I’m up to it. “So I know my code works.”

    “Assuming your unit tests passed once, why do you keep them?”

    “Because the code may stop working due to a change.”

    “Ok, so let me ask you again: Why do you write unit tests?“

    “So I know my code works at all times”, I answer.

    “Or as I prefer to phrase it: ‘to prevent regressions.’ So let me ask you: How good is your protection against regressions if your code coverage is 80%?”

    The answer is almost out of my mouth before I stop myself. Søren has a point. 80% code coverage doesn’t mean 80% protection. But what does it mean? Just because a unit test exercises the code, it doesn’t mean it will detect all possible failures in the code. I’m silent; I don’t know what to reply.

    After a while Søren says: “Zero code coverage means you haven’t done enough, 100% code coverage means you don’t know what you got.”

    “So why do we measure code coverage at all?”

    “Because it’s easy to measure, easy to understand, and easy to fool yourself. It is like measuring the efficiency of a developer by the number of code lines he can produce. A mediocre developer writes 100 lines per hour, a good developer writes 200 lines; but a great developer only has to write 50 lines.”

    I can see his point. I would never measure a developer on how many lines of code he writes.

    “Tell me, how often do your unit tests fail?” Søren asks.

    “Occasionally; but really not that frequently.”

    “What is the value of a unit test that never fails?”

    “It ensures the code still works”, I say, feeling I’m stating the obvious.

    “Are you sure? If it never fails, how will it detect the code is buggy?”

    “So you are saying a unit test that never fails doesn’t add protection?”

    “Yes. If it never fails, it is just hot air, blinding you to real problems and wasting your developers’ time.”

    “So we should design our unit tests to fail once in a while?”

    “Yes, a unit test that truly never fails is worthless! Actually it is even worse: it is counterproductive. It took time to write, it takes time to run, and it adds no value.”

    “Well, it does add the value that the code has been exercised.”

    “I have seen unit tests without a single assert statement. Yes, it certainly exercises some code, but unless the code crashes it offers no protection.”

    “Why would anyone write a unit test without assert statements?”

    “To get the high code coverage numbers their managers demand.”

    I pour us another glass of red wine, while I’m mentally trying to grasp the consequences of what I just learned. By measuring code coverage we are actually fostering an environment where worthless unit tests are rewarded just as much as priceless unit tests.
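
    To make Søren’s point concrete, here is a minimal sketch in X++ – assuming the SysTestCase unit test framework, and with a made-up class under test called CustBalanceCalculator:

    class CustBalanceCalculatorTest extends SysTestCase
    {
        // Worthless: exercises the code and boosts the coverage numbers,
        // but asserts nothing, so it can only fail if the code crashes.
        void testCalcBalanceNoAssert()
        {
            CustBalanceCalculator calculator = new CustBalanceCalculator();
            calculator.calcBalance('4000');
        }

        // Useful: states an expectation, so a regression makes it fail.
        void testCalcBalanceIsZeroForNewCustomer()
        {
            CustBalanceCalculator calculator = new CustBalanceCalculator();

            this.assertEquals(0.0, calculator.calcBalance('4000'),
                'A customer with no transactions must have a zero balance');
        }
    }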

    “As I see it we need to distinguish the good from the bad unit tests. Any ideas?” I ask.

    “The bad unit tests come in three flavors: those that don’t assert anything, but are solely written to achieve a code coverage bar; those that test trivial code, such as getters and setters; and those that prevent good fixes.”

    “I can understand the first two, but please elaborate on the last one.”

    “Suppose you write a unit test that asserts an icon in the toolbar has a certain resource ID. When the icon eventually is updated, this unit test will fail. It adds no value, as the icon was supposed to be updated. This means the developer has two hard-coded lists to maintain when changing icons: one in the product code and one in the unit tests.”

    “Got it, how would you propose a better unit test for this case?”

    “Well, a unit test verifying that all icons have a valid resource ID and that no two icons share the same resource would be a good start.”

    I can certainly see the difference: the latter unit test wouldn’t need updating every time an icon was changed, yet it would detect problems our customers would notice. I wonder how many of our unit tests fall into these three categories. I need to find a way to reward my developers for writing good unit tests.
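
    Søren’s suggestion could look something like this sketch, where IconRegistry and its methods are hypothetical stand-ins for however the icons are actually enumerated:

    // Robust: verifies invariants that must hold for every icon,
    // instead of hard-coding the resource ID of any particular icon.
    void testIconResourcesAreValidAndUnique()
    {
        // IconRegistry is a hypothetical catalog of the application's icons.
        Set usedIds = new Set(Types::Integer);
        int resourceId;
        int i;

        for (i = 1; i <= IconRegistry::iconCount(); i++)
        {
            resourceId = IconRegistry::iconResourceId(i);

            this.assertTrue(resourceId > 0, 'Icon has an invalid resource ID');
            this.assertFalse(usedIds.in(resourceId), 'Two icons share a resource ID');

            usedIds.add(resourceId);
        }
    }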

    Søren interrupts my chain of thought, “What do you do when a unit test fails?”

    “We investigate. Sometimes it is the unit test that is broken; sometimes it is the product code that is broken.”

    “Who makes that investigation?”

    “The developer who has written the product code that makes the unit test fail.”

    “So basically you are letting the wolf guard the sheep!”

    “What do you mean?”

    “If developer A writes a unit test to protect his code, and developer B comes along and breaks the code, you let developer B judge whether it is his own code or developer A’s code that is to blame. Developer A would grow quite sour if he detected that developer B was lowering the defense in the unit test to get his changes through.”

    “Yes, that certainly wouldn’t be rewarding. I guess it would be much more rewarding for developer A to receive an alarm when developer B broke his test. He would then know that his test worked and he could work with developer B to resolve the issue.”

    Søren nods.

    It suddenly dawns on me. Writing a bad unit test wouldn’t be rewarding, as it would never fail; whereas a good unit test would eventually catch a fellow developer’s mistake. I know the developers in my team well enough to understand the level of sound competition such an alert system would cause. I set my glass on the table. I have some infrastructure changes to make on Monday.


    We did it (again)!

    • 5 Comments

    On behalf of the Dynamics AX 2009 development team I'm proud to announce that as of build 5.0.529.0 we have reached zero X++ best practice errors in the SYS layer.

    Version 4.0 was the first release of Dynamics AX without any best practice errors. Reaching this level of code quality was a mammoth effort, due to the huge backlog and the many new TwC (Trustworthy Computing) related BP rules.

    In Dynamics AX 2009 the goal was significantly easier to reach, as we didn’t have any backlog. However, we have introduced more than 60 new BP rules, including validation of XML documentation, inventory dimensions, upgrade scripts, workflow, list pages, AIF, and performance. On top of this the SYS layer now contains much more functionality than ever before – the AOD file alone has grown by more than 50%. All of this now conforms to the Best Practice rules implemented.

    What does this mean for you as an X++ partner developer?

    • When you use the Best Practice tool and it reports a BP error, pay attention to it: you have introduced a problem.
    • The high quality of the code in the SYS layer should make your life easier overall, as it conforms to a set of standards, making the code easier to understand across the entire application.

    For more information on MorphX Best Practices, see the Channel 9 screencast or MSDN.

    For more information on the importance of static code analysis see: Compiler warnings - and so what?


    Erik and Anni goes to Hollywood

    • 1 Comment

    The first episode of Erik Damgaard's new endeavor is now available.

    http://www.kanal4.dk/erik_og_anni_goes_to_hollywood/

    (in Danish)

    Best wishes from here.





    Writing less code: The "else" statement

    • 17 Comments

    Source code is written once and read over and over again. So make sure it is easy to read and understand.

    I keep seeing a poor coding practice. I see it in production code, in test code, in X++ code, in C# code, in C++ code, in examples on the web, and when interviewing candidates. I see it wherever I look. It is a practice that adds unnecessary complexity to source code; it makes the source code harder to read and harder to write. And what disturbs me the most is how easy it is to avoid.

    Take a look at this simple code:

    boolean isNegative(int value)
    {
        if (value<0)
        {
            return true; 
        }
        else
        {
            return false;
        }
    }  
    The "else" statement is completely superfluous. This rewrite does exactly the same, and is easier to read:
    boolean isNegative(int value)
    {
        if (value<0)
        {
            return true; 
        }

        return false;   
    }
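    For this particular function you can even go a step further – the comparison already yields a boolean, so the "if" can go too:
    boolean isNegative(int value)
    {
        return value < 0;
    }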
    Yes, the compiler will optimize the "else" statement away - but that is not my point. The compiler can easily see through the weirdest constructs. I (and most other human beings) cannot. Granted, every developer worth his salt should have no problem understanding both of the above implementations. However, often the requirements are more complex.
    int foo(int bar)
    {
        if ( /*expr1*/ )
        {
            throw /*some exception*/; 
        }
        else
        {
            if ( /*expr2*/ ) 
            {
                return 1;
            }
            else
            {
                if ( /*expr3*/ ) 
                {
                    return 2;
                }
                else
                {
                    if ( /*expr4*/ ) 
                    {
                        return 3;
                    }                
                    else
                    {
                        throw /*some exception*/;
                    }
                }
            }
        }
    }
    Or, this slightly better version: 
    int foo(int bar)
    {
        if ( /*expr1*/ )
        {
            throw /*some exception*/; 
        }
        else if ( /*expr2*/ ) 
        {
            return 1;
        }
        else if ( /*expr3*/ ) 
        {
            return 2;
        }
        else if ( /*expr4*/ ) 
        {
            return 3;
        }
        else
        {
            throw /*some exception*/;
        }
    }
    Both could simply be rewritten as:
    int foo(int bar)
    {
        if ( /*expr1*/ )
        {
            throw /*some exception*/; 
        }
        if ( /*expr2*/ ) 
        {
            return 1;
        }
        if ( /*expr3*/ ) 
        {
            return 2;
        }
        if ( /*expr4*/ ) 
        {
            return 3;
        }      
        throw /*some exception*/;
    }
    Never-ever-under-any-circumstances-for-any-reason-what-so-ever write an "else" statement if the preceding block unconditionally returns or throws an exception.

    Now I’ve got it off my chest :-)


    Upgrading to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009 @ Convergence 2008

    • 2 Comments

    Deepak Kumar and I will be hosting a session on the upgrade process at Convergence in Orlando next month. I hope to see you there.

    Upgrading to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009; 3/13/2008 10:00AM-11:00AM; W204

    This session will take you through the end-to-end flow of upgrading from Microsoft Dynamics AX 3.0 to Microsoft Dynamics AX 4.0 and Microsoft Dynamics AX 2009. You will see an overview of the code upgrade as well as the data upgrade process along with recommended best practices. Examples of the improved upgrade documentation, with a focus on the implementation guidelines that come with a dedicated section for upgrade, will also be covered. This session will also include tips on performing data upgrade and code upgrade more efficiently and effectively. Additionally, you'll learn tips on the extra steps needed to upgrade if you've extended your solution. This session is designed for partners and customers who are planning to upgrade or would like to learn more about a Microsoft Dynamics AX upgrade.


    Anyone interested in developer documentation of X++ code?

    • 8 Comments

    A new feature in Dynamics AX 2009 enables developers to write developer documentation directly in the X++ Editor while writing the production code. The feature is very similar to the XML documentation feature in C#.

    Basically any method can contain /// (triple-slash) comments followed by structured documentation in XML format. Here is an example:
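
    (The method below is a representative sketch, not code from the shipping application.)

    /// <summary>
    /// Determines whether the specified customer is blocked.
    /// </summary>
    /// <param name="_custAccount">
    /// The account number of the customer to check.
    /// </param>
    /// <returns>
    /// true if the customer is blocked; otherwise, false.
    /// </returns>
    /// <exception cref="Exception::Error">
    /// The customer account does not exist.
    /// </exception>
    boolean isCustomerBlocked(CustAccount _custAccount)
    {
        CustTable custTable = CustTable::find(_custAccount);

        if (!custTable.RecId)
        {
            // error() creates an Exception::Error with the given message.
            throw error("The customer account does not exist.");
        }

        return custTable.Blocked != CustVendorBlocked::No;
    }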

    The developer is assisted by a dynamic documentation template, which is inserted through an editor script. The template automatically builds a skeleton including all the required tags for the current method. On top of this, the written documentation is validated by Best Practice rules, which effectively prevent the documentation from growing stale. Finally, a tool enables extraction of the documentation, so it can be processed outside AX, e.g. for publishing on MSDN.

    But who really cares about this? Great tools won't make my day - I want documentation!

    Dynamics AX 2009 contains about 125,000 table and class methods in the SYS layer. To put this number in perspective: at 5 methods an hour, documenting them all would take 25,000 hours – over 15 working years.

    All the methods shipping with Dynamics AX 2009 will have some documentation:

    • 10% will be available in the X++ source code, in the product documentation, and on MSDN. This content has been written by developers, for developers (and reviewed by Content Publishing).
    • 20% will only be available in the product documentation and on MSDN. This is pattern matched documentation for methods such as main(), construct(), pack(), unpack() and parm(). This documentation has been written once for each pattern, and then applied to all methods matching the pattern.
    • 70% will only be available in the product documentation and on MSDN. This is automatically generated minimal documentation for methods that we haven't manually documented. It contains syntax and other high level information.

    Or in other words:

    • You will never see the dreaded boilerplate "At the time of publication..." again.
    • About 40,000 methods contain human-written developer documentation.

    «Inside Dynamics AX 4.0» published in Russia

    • 2 Comments

    Participating in the online communities devoted to AX, you realize how valuable it is to understand Russian: the Russian AX community helps each other out and shares information on practically any AX-related question, and it is an excellent example of the power of online communities. I am proud to work on the same products as you!


    I have just learned that the book «Inside Dynamics AX 4.0» has been translated into Russian. I hope this book will help you in your future projects!

    Good luck!


    A big thank you to Kirill Val for helping me with my first blog post in Russian.

    Here is another picture of the new book.



    Late night discussion on software development #2: Estimation

    • 1 Comment

    “How can it be”, I wonder out loud, “that our projects always finish on time with a little less functionality than we anticipated?”

    “As far as I understand, it is no wonder that your projects finish on time, as you have a fixed delivery date. At that date you basically ship what you’ve got”, Søren answers and sips his wine.

    “Well, shipping is a big word. After the development phase we go into a stabilization phase, which can be long and demanding. We really need these fixed dates, as our customers expect us to deliver on the date we announce.” I already feel I’m on the defensive, and I try to steer the discussion in the direction I want it to go. “What I struggle to understand is that even when we start out with the best intentions, we always end up cutting functionality away.”

    “Ok,” Søren replies, folds his hands, and looks at me, “this is as good a topic as any. Let me hear how you figure out how much you can do within the development phase.”

    “First of all we have a planning phase with a long list of features and functionality that we would like to implement. This list is based on customer feedback and market demands. As you might expect, the list is so long that even with all the developers in the world and an undisturbed decade, we wouldn’t be able to get to the bottom of it.” I exaggerate to drive home my point. “The list is then prioritized, and the developers are asked to give estimates for the top items on the list.”

    “So at this point you have already trimmed the list of features?”

    “Yes, at this point we have a net list of features. The developers go away with this list, and do their magic; you know, prototyping, designs, specifications, and so on. Then they come back with estimates. With the estimates in hand we can derive a plan. Typically this includes trimming the list of features even further. We end up with what appears to be a realistic plan. We often leave a few more features in the plan than the estimates allow for, just in case a feature is finished early.”

    “Do you have reason to believe your developers over-estimate?”

    “No, on the contrary. If their estimates were too long, the features would have been cut early, and we wouldn’t have this problem where we cut features as we run out of time.”

    “So do you believe your developers under-estimate?”

    “I have no reason to believe so. They have been doing this for a long time, they are really a great team, and they have a remarkable tendency to hit the right dates. Having said that, they do miss the dates occasionally.”

    “And what happens when they slip?”

    “Well, since they set the initial date, it is their responsibility to catch up. Typically they do overtime to get back on track.”

    “And when they don’t succeed in catching up?”

    “Then we have a problem and have to cut features from the product. Which brings me back to my headache: why can’t we implement according to our plan?”

    Søren is quiet for a moment. I use the pause in the discussion to refill our glasses.

    “Let me ask you this,” Søren says, “Suppose I asked you to paint my house. You can take as long as you like, but you need to tell me up front how long it will take. If you are not finished by the time you promised me, you’ll have to pay me compensation for each minute you are late. How would you estimate this painting project?”

    “When you put it like that, I would naturally build a lot of safety into the estimate, so I’m certain I can finish well before time. I just wish that my developers could finish on time!”

    “Do the rules your developers play by resemble this scenario?” Søren asks.

    “Thinking about it, their behavior does, but the outcome certainly doesn’t. They are not finished well before time.”

    “That is because building software is different from painting a house. You know you are done when the last board has been painted. That is not true for software. There is always something more you can do: adding more comments, writing more tests, refactoring, just to name a few. This is the developer’s baby; they want it to be as perfect as possible before passing it on. When they finish before time, they will spend whatever time is left polishing it.”

    “I know, I was a developer once”, I smile, “and I’m fine with that. Delivering polished features sounds good to me. The problem is we end up cutting some.”

    “Let us look at the painting project again. Regardless of how conservative your estimate is, you may get surprised: the foundation might not be in the condition you expected, you may run out of paint, the ladder might break, the brush might be too small, and so on. Regardless of the precautions you take, real life can and will play tricks on you.”

    “Yeah, we have all been visited by Murphy”, I smile.

    “The only sane way to deal with it is to include variation in the planning of the project.”

    “Hang on a second”, I have to digest what I just heard. “Are you saying that whenever we finish a feature early it doesn’t mean the next feature will start early – and whenever a feature is late it delays the rest of the plan?”

    Søren is silent. I shuffle my feet, feeling awkward. After what seems like too long a pause, I say, “I think you are right; even when we build a safety buffer into all our estimates we run out of time. What else is there to do?”

    “How about delivering even more features on time?” Søren takes a pen from his inner pocket, leans forward and grabs a napkin. This is what he draws:

    “Basically this is what you currently do. Each feature is protected from variation by the buffer your developers build in to avoid working overtime. Suppose you remove the buffer from each feature, and add a buffer in the end instead. It probably has to be somewhat larger than the buffers you use today. Something like this”, Søren continues to draw.

    “I see the point. This way each feature will commence as soon as the previous one is done, any variation might be caught up during the project, and if not, we have the buffer at the end.” This feels good; perhaps too good to be true – napkin drawings and red wine have a tendency to simplify things. After a short while I’m able to put words to what disturbs me: “This would mean our developers have to give shorter estimates. I don’t want to go onto their turf, questioning their work.”

    “Well, you don’t have to”, Søren answers, “you have to ask for estimates differently. Instead of asking ‘When can you be finished?’, ask ‘When can you be finished with a 50 percent probability?’ The word estimation intrinsically holds a probability factor; you have to get that information for the estimate to be of true value.”

    Being a tech guy, I cannot help questioning something as scientific as a number: “Why 50 percent?”

    “First of all it tells the developers that it is perfectly alright to be late; actually, you anticipate half the features will be late. This way you reassure the developers that they don’t have to work extra hours if they are late.” Søren sips his wine. “Secondly, it also means half the features are early. This way there is a good chance of the late features being balanced out by the early features.”

    Søren draws again.

    “If you try this out, you’ll be surprised at how short the new estimates will be. In theory a 90 percent probability estimate is about twice as long as a 50 percent estimate. But to get such short estimates your team needs insight into this way of planning. And the plan has to be transparent. Your developers will need this insight to understand the importance of letting go of a feature once it is completed, instead of continuing to polish it and delaying the delivery.”
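
    As an aside, that factor of two can be justified on the back of a napkin, under the (here assumed) model that task durations are roughly lognormal: if $\ln T \sim \mathcal{N}(\mu, \sigma^2)$, the $p$-quantile of $T$ is $t_p = e^{\mu + z_p\sigma}$, where $z_p$ is the standard normal quantile. The 50 percent estimate is the median $t_{0.5} = e^{\mu}$, so

    $$\frac{t_{0.9}}{t_{0.5}} = e^{z_{0.9}\sigma} = e^{1.28\sigma},$$

    which equals 2 for $\sigma \approx 0.54$ – a quite moderate spread for software tasks. The same statistics explain the pooled project buffer: independent overruns and underruns partially cancel, so one shared buffer can be much smaller than the sum of the per-feature buffers it replaces.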

    The bottle of red wine is almost empty. I feel sober.
