From: Kenneth Loafman
Subject: Re: [Duplicity-talk] zstd compression
Date: Fri, 17 Apr 2020 10:51:52 -0500
Hi Ken,

Thanks. I read the discussion in the bug report and saw quite a bit of discussion around a new multi-processing architecture. I think I can go forward with a simple design that works for now and can be broken into multiprocessing in the future. The idea is to add a new file-obj wrapper that takes an additional stream compressor binary (for example, zstd or gzip) and applies it to the tar file before passing it on to the gpg backend.

From a bit of looking at the code, it seems the changes should go into `duplicity/gpg.py`, right?

0. Does the above approach sound reasonable?
1. Where do you think this should go in the code?
2. I see GzipWriteFile, but where is the corresponding GzipReadFile? Just wondering, to make sure I understand the architecture correctly.

Thanks,
Guy

On Wed, 15 Apr 2020 at 18:26, Kenneth Loafman <address@hidden> wrote:

> You are the second to request this. See the bug report at
> https://bugs.launchpad.net/bugs/1860200. Perhaps you and Byron could
> collaborate to make this happen. That would be greatly appreciated.
>
> Thanks,
> Ken
>
> On Wed, Apr 15, 2020 at 9:59 AM Guy Rutenberg via Duplicity-talk <address@hidden> wrote:
>
>> Hi,
>>
>> Is there a way to compress backups using [zstd]? I know that normally in duplicity the compression is left to gpg, but gpg's selection of compression algorithms is quite limited, supporting basically gzip and bzip2. Zstd outperforms both in terms of compression ratio and speed.
>>
>> Ideally, I would like to give duplicity a command through which it pipes the tar archives before passing them on to gpg.
>>
>> [zstd]: https://facebook.github.io/zstd/
>>
>> Thanks,
>> Guy
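The file-obj wrapper described above could be sketched roughly as follows. This is a hypothetical sketch, not duplicity's actual API: the class name `CompressorWriteFile` and its interface are invented for illustration, and the demo uses `gzip -c` only because zstd may not be installed everywhere; `["zstd", "-c"]` would work the same way.

```python
import subprocess
import threading

class CompressorWriteFile:
    """Hypothetical wrapper: pipe all writes through an external stream
    compressor binary (e.g. ["zstd", "-c"] or ["gzip", "-c"]) and hand the
    compressed output to an underlying file object."""

    def __init__(self, outfile, compressor_cmd):
        self._outfile = outfile
        self._proc = subprocess.Popen(
            compressor_cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE
        )
        # Drain the compressor's stdout in a background thread so the
        # OS pipe buffer never fills up and deadlocks our writes.
        self._drainer = threading.Thread(target=self._drain)
        self._drainer.start()

    def _drain(self):
        for chunk in iter(lambda: self._proc.stdout.read(64 * 1024), b""):
            self._outfile.write(chunk)

    def write(self, data):
        # Uncompressed bytes in; the compressor's stdout flows to outfile.
        self._proc.stdin.write(data)

    def close(self):
        # Closing stdin signals EOF; the compressor flushes and exits.
        self._proc.stdin.close()
        self._drainer.join()
        self._proc.wait()
        # Closing the underlying outfile is left to the caller.
```

In this shape the rest of the code keeps writing plain tar bytes and only the wrapper knows which compressor binary is in use, which matches the "simple design now, multiprocessing later" idea.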
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk