Focus on Linux
sshd log analyzer Jun 10 2008 07:31PM
Greg Metcalfe (metcalfegreg qwest net)
Tim Bray has been running an interesting experiment with his WideFinder and
WideFinder 2 log-parsing speed projects. His test box runs Solaris, which
doesn't help me much, but it's recommended reading:
http://www.tbray.org/ongoing/

I've been working on an sshd analyzer, and have written a couple of versions.
The easiest approach is Perl, but I'm thinking of revisiting a previous bash
version. I'd like to see what kind of speed bump I can get from multiple
discrete CPUs, from multiple cores on a single CPU, and from a combination of
multiple CPUs and multiple cores.

The idea is to assign discrete log files to CPUs, or split(1) a large file
into segments, then analyze. It's an obvious candidate for parallel
processing, since no child process has to wait on data from another. You just
fork off one proc per CPU or core for each log file (or segment), do a wait,
and assemble the results.
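
As a rough illustration of that fork/wait pattern, here's a minimal bash
sketch. It assumes GNU coreutils on Linux, and 'analyze' is a hypothetical
stand-in for the real per-chunk analysis pass:

  #!/bin/bash
  # Fork one child per logical CPU, wait for all, then assemble.
  # 'analyze' is a placeholder; substitute the real analysis script.
  LOG=/var/log/secure
  NCPU=$(grep -c ^processor /proc/cpuinfo)
  LINES=$(wc -l < "$LOG")

  # split(1) the log into roughly one chunk per CPU/core.
  split -l $(( (LINES + NCPU - 1) / NCPU )) "$LOG" chunk.

  for f in chunk.??; do
      analyze "$f" > "$f.out" &    # one background child per chunk
  done
  wait                             # block until every child exits
  cat chunk.??.out                 # assemble the per-chunk results
  rm -f chunk.?? chunk.??.out

Since no chunk depends on another, the wait is the only synchronization
point, which is what makes this workload embarrassingly parallel.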

The problem is that it's going to be 2-3 weeks before I have access to any
reasonable spectrum of machine architectures again, and I'd like to crank
some code out before then. I have a time window, but no hardware.

What I need is a cat of /proc/cpuinfo, and a description of what sort of
system it came from (single CPU/multi-core, multi-CPU/multi-core,
multi-CPU/single-core). Reliably detecting how many cores are available is
obviously important. If you're willing to time test code, I'd like to know
that as well. I can supply /var/log/secure files if you haven't been
preserving yours.
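
For what it's worth, the x86 /proc/cpuinfo fields 'physical id' and 'cpu
cores' usually distinguish those three layouts, though I'm assuming x86 here,
since the fields vary by kernel version and architecture:

  # Field names are x86-specific; older or non-x86 kernels may omit them.
  logical=$(grep -c ^processor /proc/cpuinfo)
  sockets=$(grep 'physical id' /proc/cpuinfo | sort -u | wc -l)
  cores=$(awk '/^cpu cores/ {print $NF; exit}' /proc/cpuinfo)
  echo "logical=$logical sockets=$sockets cores_per_socket=$cores"

A hyperthreaded box reports more 'processor' entries than physical cores, so
the logical count alone can overstate how many children are worth forking.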

Results (including GPL code) will be public, and assistance will of course
be credited.
