

RE: Optimizing large copies

From: Baker, Darryl
Subject: RE: Optimizing large copies
Date: Thu, 22 Sep 2005 14:49:11 -0400

Isn't this a job for rsync rather than cfengine? Right tool for the right job.

-----Original Message-----
From: address@hidden
[mailto:address@hidden On Behalf
Of Martin, Jason H
Sent: Thursday, September 22, 2005 2:08 PM
To: address@hidden
Subject: Optimizing large copies

Hello, I am looking at using CFEngine to guarantee that all of my
application files are the same across a set of hosts. This involves a
few copy statements over directories that together contain about
8000 files totalling nearly 400MB (lots of small images, HTML, XML, and
such). Disregarding the initial copy, the copy statements take 12
minutes to run even when no files need updating. I've tried it with
type=checksum and the default comparison type, and it makes little
difference (checksum = 12 minutes, mtime = 13 minutes). I'd really
prefer to use checksum anyway, as I'd like CFE to detect when a change
is made on the client (e.g. a user updates the CFE central repository,
syncs, then modifies the client copy, syncs again, and the file should
now be back to the CFE version). As a point of comparison, an rsync of
the same data takes 7 seconds.

An example 'slow' copy statement:
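A copy stanza of the shape described, sketched in cfengine 2 syntax with hypothetical paths and server name:

```
copy:

        # Hypothetical source, destination, and server. recurse=inf walks
        # the whole tree, and type=checksum compares file contents on
        # every pass rather than mtimes.
        /masterfiles/apps       dest=/opt/apps
                                server=policyhost
                                type=checksum
                                recurse=inf
```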


Can anyone suggest how to best address this? I'm tempted to put
time-class into the copy statement to only do the copy/check once a day
or on demand, however that just limits the problem to certain hours.  If
anyone has any suggestions on how to improve the performance of this I
would appreciate hearing them.
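The time-class idea might look like this in cfengine 2 config (Hr03 is the built-in class that is defined during the 03:00 hour; paths and server are hypothetical):

```
copy:

    Hr03::

        # Only evaluated when cfagent runs during the 03:00 hour, so the
        # expensive checksum walk happens at most once a day.
        /masterfiles/apps       dest=/opt/apps
                                server=policyhost
                                type=checksum
                                recurse=inf
```

As noted above, this only confines the 12-minute wall-clock cost to a window; it does nothing to shrink it.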

Has anyone run CFE with profiling to see where it is spending its time?

(Client is Solaris 9 / CFE 2.1.14, policyhost is Linux / CFE 2.1.15).

Thank you,
-Jason Martin

Help-cfengine mailing list
