Follow-up: When a 'False Positive' isn't a false positive

Since my post on Sunday, in which I talked about what a false positive is and what it isn't, I've received a couple of questions about the scenarios I outlined and how the math works, so I'd like to clear some of that up.

First off, the purpose of outlining the four scenarios was to show some of the situations in which WGA would indicate that a system is not genuine or licensed (and it would be correct), even though the user of that system might be surprised by the failure and might even feel as though the WGA validation failure was in error, leaving them with the sense of a false positive. To be clear, in each of those scenarios the copy of Windows installed would fail validation, and that would be the correct determination. The crucial similarity among the four scenarios I outlined is that in each the user might be surprised by the validation failure; being surprised does not make the validation failure erroneous.

Second, as to the total number of validation failures being about one in every five attempts: the fact is that there are numerous other scenarios that can result in validation failure. In many of those scenarios the user of the system, or the purchaser of the software, has some knowledge that the software isn't genuine or isn't properly licensed, and is perhaps not as surprised when validation fails. People likely fall along a range of awareness, from mere suspicion (because they got a really good price online, bought used software, or took some other 'too good to be true' deal), to full knowledge that the software isn't genuine or licensed, and even further to those who manufacture and sell counterfeit software and are knowing perpetrators of significant and serious crime. Taken together, the people who don't know, the people who do, and everyone in between who has counterfeit or unlicensed software on their system account for about one out of every five validation attempts.

Lastly, I would like to add a little more context to something I said in my last post about the number of actual false positives we've seen in the program. Over the last year there have been occasions where we have received reports of what might be real false positives. Each time we receive a report like this and can gather enough detail, we perform our own tests to determine whether there might be a bug in the code or something else that could negatively impact WGA's ability to make an accurate determination. The total number of what might be actual false positives found over the past year amounts to only a fraction of a percent.
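To make the arithmetic concrete, here is a minimal sketch in Python using entirely hypothetical counts (we don't publish exact figures, and it assumes, purely for illustration, that the "fraction of a percent" is measured against validation failures). It shows why the overall failure rate and the false positive rate are two very different numbers:

```python
# Hypothetical illustration only -- these counts are made up, not real WGA data.
validation_attempts = 100_000

# Roughly one in five attempts fails validation; in the scenarios
# described above, those failures are correct determinations.
failures = int(validation_attempts * 0.20)        # ~20,000 correct failures

# "A fraction of a percent" turn out to be possible real false
# positives; 0.1% is an assumed stand-in for illustration.
possible_false_positives = int(failures * 0.001)  # ~20 cases

print(f"Failure rate:         {failures / validation_attempts:.0%}")
print(f"False positive share: {possible_false_positives / failures:.1%} of failures")
```

The point of the sketch is only the difference in scale: the one-in-five figure counts failures that are correct determinations, while the cases that might be genuine false positives are orders of magnitude rarer.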