There are a large number of tools available to assist you with application compatibility, and part of the challenge of becoming an app compat ninja is to understand how to apply each of these tools in the most effective way. I spoke last time about leveraging compatibility evaluators, hoping to help you work these into the process in a way more likely to make you happy. This time around, I want to back off and try to address a more general misconception:

No app compat tool is going to provide you with a to-do list of all the things you must fix in order to make your application compatible.

This is true for a couple of reasons. On one hand, it’s impossible to find every possible bug, so this list is necessarily incomplete (Type II errors – false negatives). On the other hand, not all of the tests we run can say definitively whether the behavior we observed will actually create an application compatibility bug for you (Type I errors – false positives). In some cases, the behavior really is wrong, but the application handles the error gracefully. In other cases, the “test” has to use heuristics, so a negative outcome on the test doesn’t necessarily mean that the application is negatively impacted (even if it doesn’t handle errors), just that it might be.

Because of these shortcomings, the best approach is typically to use these tools to diagnose problems, not to surface them.

To give a little more clarity to this statement, I thought I would walk through a few of the tools that people typically expect to generate their to-do list, and instead focus on where I would choose to incorporate these tools into a workflow.

Static Analysis Tools – such as AppDNA Apptitude and ChangeBase AOK (here presented in alphabetical order by company name, so please don’t infer a preference of tools based on order). These tools are very good at helping you build your initial estimate, and they also raise a number of issues which you may want to fix in your application. But when it comes to fixing things, there are caveats. Both produce Type II errors out of necessity – runtime analysis can’t find all possible bugs, and static analysis finds an incomplete subset of those which are reasonably findable. They also produce Type I errors. As described above, some tests are heuristics and some find actual bugs, and static analysis adds a type of error unique to it: the bug could live in code which you never run. So, even if it’s an actual bug, if you never run that code, do you even care? This is not to diminish the value of these tools (when used correctly, they can save you a lot of money), just to dissuade you from overvaluing them as your “to-do list generator”. When I look at the output from these tools, I normally go through and interpret the data by examining the tests run and, applying my knowledge of app compat, assign a relative rating to each issue, noting where a given test is more or less likely to actually manifest itself as a bug. I also consider which issues have quick and easy (or even automated) fixes. We then build decisions into install testing and runtime testing. (For example: when you find any issue on list A, go directly to remediation. For the remaining issues, have the tester determine whether it’s even a problem, and if they open a bug, the remediation specialist can reference that data to accelerate the fix.)
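To illustrate that triage step, here is a rough sketch in Python of bucketing static analysis findings into “fix now” and “confirm in testing first” piles. Everything in it is hypothetical – the field names, the check names, and the rules are for illustration only, not the output format or rule set of any particular tool; in practice your own app compat knowledge drives the ratings.

```python
# Hypothetical triage of static-analysis findings. Field names, check names,
# and rules are illustrative only; real tools have their own schemas.

QUICK_FIXES = {"hard-coded Program Files path", "per-machine shortcut"}

def triage(issues):
    """Split findings into 'fix now' and 'confirm in testing first' piles."""
    fix_now, confirm_in_testing = [], []
    for issue in issues:
        if issue["check"] in QUICK_FIXES:
            fix_now.append(issue)             # cheap or automated fix: just do it
        elif issue["heuristic"]:
            confirm_in_testing.append(issue)  # may never manifest: let the tester decide
        else:
            fix_now.append(issue)             # a definite bug in code you know you run
    return fix_now, confirm_in_testing

example = [
    {"app": "PayrollClient", "check": "hard-coded Program Files path", "heuristic": False},
    {"app": "PayrollClient", "check": "writes to HKLM at run time", "heuristic": True},
]
fix_now, confirm_in_testing = triage(example)
```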

Standard User Analyzer (SUA) / LUA Buglight – These tools check for issues with running as a standard user by reporting the events where a standard user couldn’t do something that an administrator could. In addition to the obligatory Type II errors, these tools generate a number of Type I errors, because there is no guarantee that the application isn’t handling the access-denied error gracefully. Many applications do! In fact, if you run SUA against managed code, you’ll spot a number of Type I errors where the framework itself attempts something, handles the failure gracefully, and the application runs without error. (LUA Buglight avoids reporting a number of these issues because of superior noise filters.) I have seen a number of customers where “run the app through SUA / LUA Buglight” is in the workflow for every app. I prefer to see them used only in remediation: when an application fails to run as a standard user but works when run as an administrator, use these tools to diagnose why, and save yourself some time in a debugger.
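To make the “handled gracefully” pattern concrete, here is a minimal sketch of the kind of code that trips these tools. It’s Python using the standard winreg module, and the key path and function name are hypothetical – the point is the pattern, not the specific app: the write to HKLM fails with access denied under a standard user account, the application silently falls back to a per-user location, and the user never sees an error, yet the tool still logs the denied operation.

```python
import winreg

def save_setting(name, value):
    """Try the machine-wide hive first; fall back to the per-user hive.

    Under a standard user account the HKLM write raises PermissionError
    (ERROR_ACCESS_DENIED). SUA / LUA Buglight log that denial even though
    the application recovers and keeps working.
    """
    # "Software\\Contoso\\App" is a hypothetical key used for illustration.
    for hive in (winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER):
        try:
            key = winreg.CreateKeyEx(hive, r"Software\Contoso\App",
                                     0, winreg.KEY_SET_VALUE)
        except PermissionError:
            continue  # access denied: handled gracefully, try the next hive
        with key:
            winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)
            return True
    return False

if __name__ == "__main__":
    save_setting("LastRun", "2010-01-01")
```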

IE Compatibility Test Tool – This tool finds a number of issues with applications and Internet Explorer 8, but it’s heavily weighted towards AJAX issues driven by the more stringent security requirements in the browser. There are Type II errors around ActiveX control compatibility and rendering in particular, not to mention incomplete checks for AJAX / script issues. But there are also plenty of Type I errors, where an “issue” is just the heuristic possibility of an issue, and fixing all of them would be really challenging, if it’s even possible. So again, my approach with this tool is to use it to accelerate debugging of an issue, not to pass all web apps through it and work them until they pass.

One thing you want to be very careful of with all of these tools: it’s remarkably easy to surface all kinds of “issues” which would be “better” if you fixed them, but where the software still lets you get your job done if you do nothing about them. Chances are, you were given the budget for an application compatibility project, not an application quality project. Application quality projects cost more than application compatibility projects – don’t create one accidentally.