Subject: Re: Using monit to monitor hundreds of thousands of file system resources?
From: Jan-Henrik Haukeland
Date: Sat, 01 Sep 2007 16:15:24 +0200
On 1. sep. 2007, at 12.07, Sergio Trejo wrote:
Why couldn't monit evolve to the next level, so that it is just as good
at scaling to hundreds of thousands of file system resources for
monitoring purposes? Personally, I really like monit and would prefer
to keep using it for these purposes rather than spend time learning
tripwire or something similar. I'd love to see monit evolve to scale
to large numbers of resources. Is there a good reason why monit should
not evolve to that next level of maturity?
To make this scale, we would need to hook monit up to a database. This
is certainly possible; we even have our own GPL database library we
could use, with support for the three most popular open source databases [1].
However, I would guess that 99.99% of monit users will never use monit
to handle over 100K files. With the very limited development resources
we have, I'm not sure it's in our best interest to do this now. We
still have a long list of TODO items [2] that are of more general
interest. Besides, doing this would push monit into the more
"enterprise" segment of complex software, and I'm not sure we want to
go there. What do others think?
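For illustration, here is a minimal sketch of the database-backed idea:
keep each file's checksum in a table and report paths whose checksum
changed since the last scan, so state for 100K+ files lives on disk
rather than in memory. This is a hypothetical example in Python with
SQLite, not monit code (monit itself is C and would use libzdb); the
table name and schema are invented for the sketch.

```python
import hashlib
import sqlite3


def checksum(path):
    # Stream the file in chunks so large files don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def scan(db, paths):
    """Record checksums for paths; return those that changed since the last scan.

    New (previously unseen) files are baselined, not reported as changed.
    """
    db.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, sha TEXT)")
    changed = []
    for p in paths:
        sha = checksum(p)
        row = db.execute("SELECT sha FROM files WHERE path = ?", (p,)).fetchone()
        if row is None:
            # First time we see this file: store a baseline checksum.
            db.execute("INSERT INTO files (path, sha) VALUES (?, ?)", (p, sha))
        elif row[0] != sha:
            # Checksum differs from the stored one: the file was modified.
            changed.append(p)
            db.execute("UPDATE files SET sha = ? WHERE path = ?", (sha, p))
    db.commit()
    return changed
```

The same pattern maps onto libzdb in C (a connection pool plus prepared
statements against SQLite, MySQL, or PostgreSQL), which is what makes
the database route plausible without pulling a heavyweight dependency
into monit.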
Just my $0.02
[1] http://tildeslash.com/libzdb/
[2] http://www.tildeslash.com/monit/doc/next.php