Re: [Duplicity-talk] How to backup stuff>5GB


From: edgar . soldin
Subject: Re: [Duplicity-talk] How to backup stuff>5GB
Date: Fri, 25 Feb 2022 23:54:52 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.6.1

hey Tashrif,

reads here that the 5GB limitation can be avoided by using multipart uploads
 
https://aws.amazon.com/s3/faqs/#:~:text=Individual%20Amazon%20S3%20objects%20can,using%20the%20Multipart%20Upload%20capability.

did you try '--s3-use-multiprocessing' and '--s3-multipart-chunk-size' as documented in the man page?
 https://duplicity.gitlab.io/duplicity-web/vers8/duplicity.1.html
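
e.g. something along these lines (the bucket name is just the one from your mail, and the chunk size unit should be checked against the man page for your duplicity version):

```
duplicity --s3-use-multiprocessing --s3-multipart-chunk-size 250 \
    /path/to/source s3+http://my_bucket
```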

did you try the prefix 'boto3+s3://', which according to the man page section 'A 
NOTE ON AMAZON S3' does multipart automatically and is the recent, still 
maintained library for accessing s3?
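
that would look something like this (again, source path and bucket are placeholders):

```
duplicity /path/to/source boto3+s3://my_bucket/backups
```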

so many questions :).. ede/duply.net

On 25.02.2022 21:14, Tashrif via Duplicity-talk wrote:
Thank you for the reference, Ken. In the meantime, I want to hack it. Which 
line in duplicity performs the put request? Can I add an if condition for the 
sigtar before that line, so that it is not uploaded, and then upload that 
sigtar myself using the `aws s3 cp` command?
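
Something like the following is what I have in mind (the function and the skip 
logic are just my guess at the shape of the hack, not duplicity's actual code):

```
import re

# signature tarballs are named like duplicity-full-signatures.<timestamp>.sigtar.gz
SIGTAR_RE = re.compile(r"duplicity-.*signatures\..*\.sigtar")

def put_skipping_sigtar(backend, source_path, remote_filename):
    """Wrap a backend upload, refusing to send signature tarballs."""
    if SIGTAR_RE.search(remote_filename):
        # leave the sigtar on disk and upload it by hand afterwards, e.g.
        #   aws s3 cp duplicity-full-signatures.*.sigtar.gz s3://my_bucket/
        print(f"skipping {remote_filename}; upload it manually")
        return
    backend.put(source_path, remote_filename)
```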

Best,
Tashrif

On Fri, Feb 25, 2022 at 12:40 PM Kenneth Loafman <kenneth@loafman.com> wrote:

    No, it's due to this bug: https://bugs.launchpad.net/duplicity/+bug/385495

    I am working on the next major revision to duplicity, 0.9.x, which will fix 
this and some others.  It's going slowly.

    ...Thanks,
    ...Ken


    On Fri, Feb 25, 2022 at 10:49 AM Tashrif <tashrifbillah@gmail.com> wrote:

        Hi Kenneth,

        No, aws did not split the file in the bucket. I have been doing quite a 
bit of research on it. I see the following code segment only in 
s3_boto3_backend:

        duplicity/backends/s3_boto3_backend.py:141-142:
            transfer_config = TransferConfig(
                multipart_chunksize=config.s3_multipart_chunk_size,
                multipart_threshold=config.s3_multipart_chunk_size)

        But I have used TARGET="s3+http://my_bucket", which should use the old 
boto library. Do you think that has anything to do with this error?
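
        For my own understanding, here is a minimal standalone boto3 sketch of 
what that TransferConfig enables (bucket and key names are placeholders from 
this thread); once the threshold is below the file size, boto3 switches to 
multipart upload:

        ```
        import boto3
        from boto3.s3.transfer import TransferConfig

        # keep chunks well under S3's 5 GB single-PUT ceiling so large
        # files are sent as multipart uploads instead of one PUT
        config = TransferConfig(multipart_threshold=256 * 1024 * 1024,
                                multipart_chunksize=256 * 1024 * 1024)

        boto3.client("s3").upload_file(
            "duplicity-full-signatures.20220222T150726Z.sigtar.gz",
            "my_bucket",
            "duplicity-full-signatures.20220222T150726Z.sigtar.gz",
            Config=config)
        ```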

        Best,
        Tashrif

        On Fri, Feb 25, 2022 at 11:43 AM Kenneth Loafman <kenneth@loafman.com> wrote:

            Hi Tashrif,

            The sigtar size problem has been around forever.  For now I suggest 
splitting the backup into smaller portions.

            I am surprised the aws command completes properly.  Did it split 
the file in the bucket?

            ...Ken


            On Thu, Feb 24, 2022 at 10:57 PM Tashrif via Duplicity-talk <duplicity-talk@nongnu.org> wrote:

                During a backup task, duplicity created a 7.5 GB file at the 
very end: duplicity-full-signatures.20220222T150726Z.sigtar.gz. However, its 
upload fails with the following traceback:

                ```
                  File "min3-duply/lib/python3.9/site-packages/boto/s3/key.py", line 760, in send_file
                    self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb,
                  File "min3-duply/lib/python3.9/site-packages/boto/s3/key.py", line 957, in _send_file_internal
                    resp = self.bucket.connection.make_request(
                  File "min3-duply/lib/python3.9/site-packages/boto/s3/connection.py", line 667, in make_request
                    return super(S3Connection, self).make_request(
                  File "min3-duply/lib/python3.9/site-packages/boto/connection.py", line 1077, in make_request
                    return self._mexe(http_request, sender, override_num_retries,
                  File "min3-duply/lib/python3.9/site-packages/boto/connection.py", line 946, in _mexe
                    response = sender(connection, request.method, request.path,
                  File "min3-duply/lib/python3.9/site-packages/boto/s3/key.py", line 895, in sender
                    raise provider.storage_response_error(
                boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
                <?xml version="1.0" encoding="UTF-8"?>
                <Error><Code>EntityTooLarge</Code><Message>Your proposed upload exceeds the maximum allowed size</Message><ProposedSize>7422574715</ProposedSize><MaxSizeAllowed>5368709120</MaxSizeAllowed><RequestId>HJD8DQ49S18RBFWQ</RequestId><HostId>7t7enU1YX/HY7ho7qA74knGEIzerBk/hDogp=</HostId></Error>

                Attempt of move Nr. 1 failed. S3ResponseError: Bad Request
                ```

                Meanwhile, `aws s3 cp duplicity-full-signatures.20220222T150726Z.sigtar.gz s3://my_bucket/` 
succeeds without issue. So how do I enable duply/duplicity to upload files 
larger than 5GB?

                Thank you,
                Tashrif


_______________________________________________
Duplicity-talk mailing list
Duplicity-talk@nongnu.org
https://lists.nongnu.org/mailman/listinfo/duplicity-talk



