Re: [rdiff-backup-users] rdiff-backup/rsync +ssh performance comparison


From: Vadim Kouzmine
Subject: Re: [rdiff-backup-users] rdiff-backup/rsync +ssh performance comparison
Date: Mon, 06 Feb 2006 20:49:28 -0500

On Mon, 2006-02-06 at 19:52 -0500, Douglas Bollinger wrote:
> On Mon, 06 Feb 2006 17:23:50 -0500
> Vadim Kouzmine <address@hidden> wrote:
> 
> > Performed tests copying my home directory (~1GB, 21668 files) on
> > workstation (Gentoo Linux, rdiff-backup 1.0.1, python 2.4.2, rsync
> > 2.6.0, OpenSSH_4.2p1/no_compression)
> > 
> > 1)rdiff-backup /src -> /dst  3m 11s
> > 
> > 2)rsync /src -> /dst         2m 45s
> > 
> > 3)rdiff-backup localhost::/src -> /dst:  12m 5s
> > 
> > 4)rsync localhost:/src -> /dst: 3m 9s
> > 
> > So rdiff-backup is a little slower than rsync when working on local
> > files, and 4 times slower when working through ssh.
> 
> Looking in the mailing list archives, it seems that rsync is always quite a
> bit faster than rdiff-backup over a network, at least on the initial run.
> There's been some discussion about this, but I haven't seen a really
> definitive answer on whether rdiff-backup can be tweaked to get closer to
> rsync's performance.
> 
> My GUI has a display that shows network throughput out of eth0.  The most
> I've ever seen rdiff-backup push through eth0 was 2 MB/s, while this
> hardware has no problem getting 11.8 MB/s with scp.  While I understand
> there is quite a bit of overhead associated with syncing a bunch of small
> files, I've always wondered why big files, say 600 MB, still only transfer
> at 2 MB/s with rdiff-backup.

Let me summarize the information I've gathered during my tests on two
platforms (Gentoo, Trustix 2.2) and with different versions of python
(2.2.3, 2.3.5, 2.4.2):

- on a 3GHz P4 Xeon I get a steady ~2MB/s transfer rate through ssh; on a
3GHz P4 it's ~1.9MB/s;
- I ran the tests ssh-ing to LOCALHOST, so no network is involved here,
although I got the same results over a gigabit LAN;
- the ~2MB/s rate doesn't seem to depend on file size. We all expect the
transfer to be fast on big files and much slower on small files, but it
looks like it's capped at ~2MB/s on big files, and the hardware I used is
fast enough that small files don't make it worse;
- initial and incremental transfer rates seem to be almost equal;
- I see a 20-35MB/s transfer rate on the same hardware with scp/rsync (on
big files, of course);
- system resources remain practically IDLE while rdiff-backup is working
through ssh: I observe 3-6% cpu usage, almost no context switching, ~2MB/s
disk read/write activity, etc;
- vmstat with a big enough interval (5-10 sec) shows a constant ~2MB/s read
rate from disk, never more;
- I monitored the running rdiff-backup server/client processes and the ssh
process rdiff-backup spawned, attaching strace and measuring the time spent
in system calls over a 30-second interval (see the command sketch after
this list). All 3 processes make the usual number of system calls, and no
single kind of system call takes more than a fraction of a second in total.
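
Concretely, the monitoring amounted to commands along these lines (a rough
sketch, not my exact command lines; <pid> stands for each of the three
process IDs in turn):

  # overall disk and context-switch picture while the backup runs
  vmstat 5

  # find the rdiff-backup client, the rdiff-backup server and the ssh
  # process sitting between them
  ps ax | grep -E 'rdiff-backup|ssh'

  # attach to one process for ~30 seconds and get a per-syscall time
  # summary; repeat for each of the three PIDs
  strace -c -p <pid>        # interrupt with Ctrl-C after ~30s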

I understand that rsync and rdiff-backup are different tools, share no code
and may well serve different purposes. But why does rdiff-backup run so much
slower through ssh, refusing to use the system resources available, when
rsync works just fine in this case?


Thanks,
-- 
Vadim Kouzmine <address@hidden>




