The other day I was presented with the results of a test conducted by the local office of another antivirus company.
Basically, the test comes down to this: end-users are asked to uninstall their current antivirus software and install the other vendor's product. Once that is done, a full system scan is run, and the number of malware detections, along with the name of the previously installed product, is recorded to gather statistics.
The competing antivirus programs are then ranked by the average number of malware samples they failed to detect compared to the product the test results belong to.
Anyone with a basic understanding of computer security can see why this test is completely flawed and totally useless.
There is no verifiable set of malware samples, meaning the other product may well have flagged legitimate files as malicious.
But, even more importantly, there’s no way of telling what state the previously installed antivirus software was in.
Given the way the test was performed, it's likely that most products were either outdated or pirated copies that could no longer be updated. And many more reasons can be found why this test is completely flawed.
In short, there are no properly controlled variables at all, the exact opposite of what makes a good test good.
The antivirus industry is a sensitive one; we must always take great care with what we say and what we do.
This also means that every antivirus company is responsible for the image and reputation of the industry as a whole.
Especially in the case of tests, this is where the antivirus experts come in: they are the people with the skills to tell a good test from a bad one, and to advise the marketing department accordingly.
After all, we must prevent misinformation in every way we can, even if that misinformation might yield a positive outcome in the end.
The validity of tests