Focus on Apple
re: ClamXav for OS X 10.4 Aug 20 2007 04:21PM
David Harley (david a harley gmail com)
Here's a digest of some of the points that Howard and I covered off list.
I'm clarifying some of my own views here, of course, rather than presuming
to speak for Howard, but I think we agreed on more points than you might
have expected from our on-list exchanges.

1) Trusting the AV industry.

Don't. I don't, and, yes, I am in some sense part of it. (See below.) There
-are- people within the AV research community (on both sides of the
vendor/non-vendor divide) that I often work with and respect enormously, but
they're a small part of a large industry, and I can't vouch for everyone,
especially people in marketing. (Not being marketroid-ist here: we all have
to make a buck, but it's not the sort of thing I do well.)

So I don't expect you to take everything I say as written on tablets of
stone. I'm all for healthy scepticism. What I find unhealthy is the trend to
assume that the AV industry is one big fraud, and that anything that doesn't
come from that sector is therefore true. I don't think anyone on this list
has come from that viewpoint, but a lot of people have elsewhere, and some
have made some extraordinarily vicious attacks. (Don't worry about it: I'm
not that thin-skinned... OTOH, if those attacks made me unduly defensive in
the previous discussions, I apologise unreservedly.)

I do think that Dirk Morris has claimed to be better than professional
testing labs, and doesn't (or chooses not to) understand how they function.
For instance http://blog.untangle.com/?p=95

"I'm left to assume that the testing labs are biased in their testing,
probably because they get their funding from the commercial vendors that pay
them for testing. Their customers surely wouldn't be happy if the testing
labs claimed a free and open source solution was better."

Actually, I'm fascinated by the phenomenon of public ambivalence towards the
AV industry, and have written about it many times: for instance
http://www.virusbtn.com/virusbulletin/archive/2006/11/vb200611-OK.dkb. And
probably will again.

2) About me. (It's all about me ;-))

I do supply technical consultancy and writing services on both sides of the
vendor/non-vendor divide (AV companies, testing organizations, and so on). I
-don't- have anything to do with PR, marketing and so on, and if I'm ever
tempted to dabble in that, I will continue to avoid anything resembling a
conflict of interests. Actually, I'm in the interesting position of having
been hated and badmouthed by some AV haters -and- some AV companies...
Howard did point out that there's a mini-cv I'd forgotten about at
<http://www.avien.org/dc.html>, if anyone's that interested.

3) The difficulties of testing

I don't say that only an elite group of professionals can say anything
useful about AV performance. I am saying that you can't produce a fair test
on the basis of misconception, muddled thinking and false authority
syndrome. If you don't know anything about testing techniques -or- malware,
the odds are pretty much against your producing a valid test.

However, the AV industry has always made it virtually impossible for anyone
outside the charmed circle to test some aspects of their products to a
standard that the industry itself finds acceptable. Some of the historical
reasons for this are honourable, but it does work to their advantage in that
they can (and do) cry foul at practically any test that shows unexpected
results. The corollary to that is that Dirk Morris, by being commendably (if
naively) honest about his methods and (oh dear) his sample set, has made it
easier to criticise him by pointing out the obvious holes in his
methodology, which seems a little hard, but not, I think, unfair.

When you look at ways of making it easier to test properly, there are a lot
of problems, and I don't have the answer to all of them. Supplying samples
to people you don't/can't trust is an obvious one. There are some partial
solutions to this: outsourcing the detection part of a comparative to a
competent agency, or working with such an agency (or an AV vendor) under
tightly controlled conditions (so that samples don't "escape"), are
possibilities.

Making people more aware of good and bad practice, teaching them (yes, I
know that sounds patronising) what they can and can't effectively do,
empowering them to run their own tests and assess the tests of others: well,
that's a personal crusade...

I do think the AV industry has a responsibility to address the whole issue
better than it does at present. I'm working on that, in my own small way.

4) The test suite

I didn't say I didn't download or look at the test suite: I did. Which is
how I know that some of the samples are not valid virus samples. I haven't
validated them in the sense of exact identification: that's a pain if you
don't work in a test lab, which I don't currently.

(To be clear on this, I don't do formal detection testing myself currently
because I haven't got the resources, though I have in the past. I do still
provide consultancy services to testing organizations from time to time,
which I suppose leaves me open to accusations of "those that can't, teach."
But my point is that you need resources as well as expertise to run a valid
detection test.)

I haven't run those samples against the scanners tested, other scanners, or
VirusTotal. That wouldn't tell me anything about the validity of the
original test, because I wouldn't be looking at the same scanner versions,
however well or badly I configured them.

5) Testing types

Howard and I were not quite on the same hymn sheet as regards
different forms of testing, notably detection, usability and configuration
testing. Howard correctly pointed out that an end user is not necessarily
going to use a configuration that will catch all samples. It's perfectly
true that most default configurations prioritize speed over deep scanning.
So there is a distinction between default detection and overall detection
capability (there's an issue here with setting heuristic levels, too.) If a
product is capable of detecting 100,000 strains, but not out of the box, and
the vendor makes it difficult for the customer to use it to its best
advantage, that's usability and configuration: it's -not- detection.
However, there's certainly an argument for testing default detection.
Unfortunately, that presents other problems. You can configure all products
to maximize detection, which at least gives you a level playing field; testing
defaults is harder, because the number of variables makes it difficult to
maintain parity between tested configurations. That doesn't mean it's not
worth trying to do. One of my beefs with Untangle was that they didn't seem to
understand the necessity of trying to be consistent.
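
To make that distinction a bit more concrete, here's a minimal sketch of the
sort of thing I mean: comparing a single engine's out-of-the-box configuration
against a tuned-for-detection configuration over one (pre-validated!) sample
directory. I'm assuming ClamAV's command-line clamscan purely for
illustration; the sample path and the extra flags are examples, not a recipe.

#!/usr/bin/env python3
"""Sketch: compare default vs. maximal-detection runs of one scanner
(ClamAV's clamscan, assumed installed) over a single, pre-validated
sample directory. The path and the tuning flags are illustrative only."""

import subprocess

SAMPLE_DIR = "/tmp/validated-samples"   # hypothetical, already-validated sample set

CONFIGS = {
    # out-of-the-box behaviour
    "default": ["clamscan", "-r", "--infected", SAMPLE_DIR],
    # same engine, tuned for detection rather than speed (example flags)
    "maximal": ["clamscan", "-r", "--infected", "--detect-pua=yes",
                "--max-recursion=16", SAMPLE_DIR],
}

def detections(cmd):
    # clamscan reports each hit as "path: SignatureName FOUND"
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if line.endswith("FOUND"))

for name, cmd in CONFIGS.items():
    print(name, detections(cmd), "samples flagged")

The point isn't the numbers it prints: it's that both runs use the same
engine, the same signatures and the same samples, so the only variable left is
configuration, which is exactly the parity that's so hard to maintain across
different vendors' defaults.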

If you look at Morris's blog in detail, he was aware that there was a
configuration or usability problem with WatchGuard, for instance, which he
was unable to resolve. That's an important point to make, but it's not (as
far as I know - I'm not in a position to test WatchGuard) a detection
failure, and it was presented as if it were.

6) Open source v. commercial

I haven't come across many starving OS developers either, but then they
don't usually spend their working day on maintaining a full
commercial-equivalent service. I'm not anti-OS, even in security, where the
same rules don't always apply. I've been wildly enthusiastic about Snort in
print -- less so about ClamAV, but there are people working on that project
whose work I respect. But there are critical differences for users/customers
between an unsupported OS project -- yes, I know you can often buy support,
but that's a different ballgame -- and a supported commercial service.

7) Howard said...

> However, if you have paid nothing for your OS, nothing for your major
> applications, and nothing for much else beyond the hardware, it must
> be pretty hard to accept that it is worth paying for AV protection...

I understand that. Especially where there is a feeling that malware is
someone else's problem, not the Mac or Linux user's. (That's a whole
different debate, though.)

8) and...

I'm not saying that you have to be an AV researcher to test AV, though
testing is a very specific sub-field of AV research. Some of the rules for
testing AV at consumer level are the same as for other types of product, but
it's more complicated because everyone has an idea of how to use, say, a
word processor, and what to expect from it, but most people have a very
distorted idea of what AV does and how. (I happen to believe that a lot of
that is directly the fault of AV vendors, but that's yet another debate.)
Usability -is- very important, and it's an area the pro testers rarely
address in detail -- actually, that's largely because, despite my previous
observations, detection is conceptually easier to test than
usability, if you have the resources and knowledge to do it.

9) finally...

Authoring-wise, evaluation and testing is a topic I expect to come back to
sooner rather than later, maybe at book length, so I'm always happy to
discuss it at greater length. Certainly the discussion here has helped me
refine my own thoughts and think more about aspects that I hadn't given
sufficient consideration to. Thanks, all.

--
David Harley CISSP, Small Blue-Green World Security
Author/Editor/Consultant/Researcher
http://www.smallblue-greenworld.co.uk/
New AVIEN book: http://www.smallblue-greenworld.co.uk/Avien.html
