Making rsync run faster
From: hancooper
Subject: Making rsync run faster
Date: Fri, 20 Aug 2021 21:23:51 +0000
------- Original Message -------
On Friday, August 20, 2021 7:52 PM, Koichi Murase <myoga.murase@gmail.com>
wrote:
> On Sat, Aug 21, 2021 at 3:09 hancooper hancooper@protonmail.com:
>
> > I was thinking there could be ways to do this in bash with xargs or
> > parallel. I have been focusing on making separate processes based on
> > directory depth levels.
>
> It seems that GNU parallel has an example of rsync in its man page.
> https://www.gnu.org/software/parallel/man.html#example-parallelizing-rsync
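Thanks. As far as I can tell, the pattern there is roughly the following
(my own sketch, not the exact man-page example; the paths and host are
placeholders): list the top-level source directories and let parallel run
one rsync per directory, capping the number of simultaneous jobs.

# One rsync per top-level source directory, at most 8 running at once.
# /srv/data and backuphost:/backup/data/ are placeholders.
find /srv/data -mindepth 1 -maxdepth 1 -type d -print0 |
  parallel -0 -j8 rsync -a {}/ backuphost:/backup/data/{/}/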
Have been trying to use an idea based on a particular depth level, with `$2`
being the topmost directory. I want to allow a maximum of nprocs jobs (nprocs
instances of rsync), where the files found at the specified depth level are
distributed among the nprocs jobs. Would appreciate some assistance on how I
could set things up. Have also concluded that I would need some adjustments
to the destination defined by `$destin` in the rsync call.
# collect all directories at exactly depth $3 below $2, space-separated
dlc=$( find "$2" -mindepth "$3" -maxdepth "$3" -type d | tr '\n' ' ' )
echo "rsync -av $dlc $destin &"