I receive a number of questions on the compatibility evaluators in ACT that revolve around one central question: what are they actually good for?
Seems kind of a harsh question, eh? Well, I’m not intending to be rude; I just try to help people avoid assumptions that will end up making them sad. But I have since discovered that, in my attempt to prevent you from spending a lot of money and ending up sad, I’ve erred in the direction of leaving you sad but with your money still sitting in your pocket. I guess that’s slightly better, but I’d rather you weren’t sad.
You see, a lot of people approach the Application Compatibility Toolkit with a perspective of reverence. I mean, look at the name! It’s made by the Windows team! It has to be all I need to get the job done! (In fact, if you are an ACF partner, I believe it’s even mandatory to use it.) So, if you do choose to use the compatibility evaluators, what can you actually do with that data?
Well, people initially assumed that they could run the evaluators and it would tell them which applications are broken, and which are not. They could then use that data to project costs for the project. Like so:
Run Evaluators
    |
    V
Project Costs
    |
    V
Fix all issues the agents flagged
    |
    V
Ready to deploy!
And, if you do that, you end up sad. Because we’re going to let you down. We don’t find all of the apps that have problems, and we don’t find all of the issues in the apps we do flag. We are runtime evaluators, so we have to be concerned with performance. Even if we knew how to look for every possible bug (hint: we don’t), your production users would hate us for making their apps miserably slow by looking in that many places. So, unless your app just so happens to have a bug that’s extremely common, we won’t even notice the bug.
So, why do we have these evaluators that don’t help you either project costs or find all of your issues? Is it because we don’t know how to write programs? Nope. (Not this time at least.) You see, there is a really good use for this data, and if you pick this use, then you not only end up unsad, you may even end up happy.
Issues detected by compatibility evaluators come with a priority automatically set. We only set it to Priority 2 or Priority 3. (Setting it as Priority 1 – critical to fix – is something left for you.) What this means is that, with a Priority 2 bug, you have an application bug that is probably not automatically fixed by the OS, in a bit of code that somebody actually ran, so you probably want to fix that. If we flag it as a Priority 3 bug, then it’s still a bug in a bit of code that somebody was actually running as part of their job, but it’s something that’s probably automatically fixed. For example, UACCE will flag file writes. If we predict that UAC virtualization will fix a write automatically, it gets Priority 3 (nice to fix). If we predict that UAC virtualization will not automatically fix it, we classify it as Priority 2 (must fix), and you should consider fixing it.
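The rule above amounts to a tiny classifier. Here’s a sketch of that logic, using hypothetical field names (ACT’s real export schema is different; the `auto_mitigated` flag is my invention standing in for the evaluator’s prediction):

```python
def classify_priority(issue: dict) -> int:
    """Mimic how the compatibility evaluators assign priority.

    Priority 1 (critical to fix) is never set automatically -- that
    call is left to you. Priority 2 = real bug, probably not fixed
    automatically by the OS. Priority 3 = real bug, but probably
    mitigated automatically (e.g. by UAC file virtualization).
    """
    if issue.get("auto_mitigated"):  # e.g. UAC virtualization redirects the write
        return 3                     # nice to fix
    return 2                         # must fix

# Two UACCE-style findings: one write that virtualization is
# predicted to redirect successfully, and one that it is not.
redirected_write = {"kind": "file_write", "auto_mitigated": True}
blocked_write = {"kind": "file_write", "auto_mitigated": False}

print(classify_priority(redirected_write))  # 3
print(classify_priority(blocked_write))     # 2
```

The point of the sketch: the priority split is mechanical once you know whether the OS will paper over the bug, which is exactly why the evaluators can set it for you.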
So, I’m not so much interested in seeing the original estimate (since we miss so much stuff), but the data does come in handy down the line. For example, here is a segment of an application testing workflow that incorporates this data:
Perform Install Testing
    |
    V
Any Sev2 Issues? --yes--> Remediation
    | no
    V
Perform smoke testing, user testing, etc.
Now I’m using this data in a productive way to save time from a manual effort. You know that some user ran into this problem while performing their actual work, so the data fidelity is very high. Why send a known broken application over to testing and waste manual testing hours discovering a bug you can discover with nothing more than a few mouse clicks (or perhaps may outright miss if you don’t have a good test script)?
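That triage step is a one-line filter once you have the issue data. A minimal sketch, assuming a hypothetical in-memory representation of per-app findings (not ACT’s actual export format):

```python
# Route any app with a Priority 2 (must-fix) finding to remediation
# before spending manual test hours on it; everything else proceeds
# to smoke/user testing. App names and records here are made up.

apps = {
    "PayrollCalc": [{"priority": 2}, {"priority": 3}],
    "TimeTracker": [{"priority": 3}],
    "ReportViewer": [],
}

def triage(issues: list) -> str:
    """Send an app to remediation if any must-fix issue was flagged."""
    return "remediation" if any(i["priority"] == 2 for i in issues) else "testing"

for app, issues in apps.items():
    print(f"{app}: {triage(issues)}")
```

Only the app with a flagged Priority 2 issue gets pulled aside; the rest go to the testers, which is where the manual hours get saved.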
ACT agent data is relatively inexpensive to collect if you need the inventory anyway. But you need to avoid getting tricked by overly optimistic salespeople into believing that this data is everything you could ever want, while at the same time making sure you don’t ignore valuable data. Runtime data is awesome because you know for a fact that the bad thing actually happened, and if collected in production you know it happened as part of doing real work (which are the only bugs you care about).
Feed your workflow, save manual effort, and reduce your risk. Now that is what ACT agent data is good for.
And, of course, we certainly do wish that we could highlight all busted apps for your organization (to help you better estimate project cost), as well as discover every individual app issue. Static analysis tends to do a better job at the app level (is the app broken – yes or no?) simply because it can perform WAY more tests without interrupting somebody’s work. But it doesn’t do nearly as well at the issue level. In the end, a balance between runtime tools, static tools, and manual effort is what most people end up using to build the plan that really works for them. Bringing it all together, you can build the optimal mix of low cost and reduced risk during an app compat project. Don’t ignore a component of your solution just because it isn’t perfect. Because, alas, none of it is perfect. There are no silver bullets. But we do have a few lead ones.