Hi Ken,
Thanks. I read the discussion in the bug report, including the points about a new multiprocessing architecture. I think I can go forward with a simple design that works for now and can be extended to multiprocessing later. The idea is to add a new file-obj wrapper that takes an additional stream compressor binary (for example zstd or gzip) and applies it to the tar stream before passing it on to the gpg backend.
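Roughly, I'm imagining something like the sketch below (names and interface are illustrative only, not duplicity's actual API): a writable file-obj that pipes everything through an external compressor process and hands the compressed bytes to an underlying file object.

```python
import subprocess

class CompressedWriteFile:
    """Hypothetical sketch: wrap an underlying file object and pipe all
    writes through an external stream compressor binary (gzip, zstd, ...).
    A real implementation would need a reader thread to drain the
    compressor's stdout concurrently, to avoid pipe-buffer deadlock on
    large inputs; this sketch drains only at close()."""

    def __init__(self, outfile, compressor=("gzip", "-c")):
        # Caller owns outfile; we do not close it here.
        self.outfile = outfile
        # The compressor reads plaintext on stdin, writes compressed
        # bytes on stdout.
        self.proc = subprocess.Popen(
            compressor, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def write(self, data):
        self.proc.stdin.write(data)

    def close(self):
        # Signal EOF to the compressor, then forward its output.
        self.proc.stdin.close()
        for chunk in iter(lambda: self.proc.stdout.read(65536), b""):
            self.outfile.write(chunk)
        self.proc.wait()
```

Usage would look something like wrapping the stream that currently feeds the gpg process:

```python
import io, gzip

buf = io.BytesIO()
w = CompressedWriteFile(buf)          # defaults to the gzip binary
w.write(b"hello world" * 10)
w.close()
assert gzip.decompress(buf.getvalue()) == b"hello world" * 10
```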
From a quick look at the code, it seems the changes should go into `duplicity/gpg.py`, right?
0. Does the above approach sound reasonable?
1. Where do you think this should go in the code?
2. I see GzipWriteFile, but where is the corresponding GzipReadFile? Just checking that I understand the architecture correctly.
Thanks,
Guy