bug#59382: cp(1) tries to allocate too much memory if filesystem blocksizes are unusual


From: Paul Eggert
Subject: bug#59382: cp(1) tries to allocate too much memory if filesystem blocksizes are unusual
Date: Sat, 19 Nov 2022 19:50:06 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.4.2

> The block size for filesystems can also be quite large (currently, up to 16M).

It seems ZFS tries to "help" apps by reporting misinformation (namely, a smaller block size than it actually prefers) when the file is small. This is unfortunate, since it messes up cp and similar programs that need to juggle multiple block sizes. It also messes up any program that assumes st_blksize is constant for the life of a file descriptor, an assumption "cp" makes elsewhere.

GNU cp doesn't need ZFS's "help", as it's already smart enough not to over-allocate a buffer when the input file is small but its block size is large. Instead, this "help" from ZFS causes GNU cp to over-allocate, because it naively trusts the block size that ZFS reports.
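To make the failure mode concrete, here is a self-contained sketch of how an lcm-based buffer size, like the buffer_lcm() computation mentioned below, balloons when one of the two block sizes is a misreported non-power-of-two value. The numbers are made up for illustration; this is not coreutils' actual buffer_lcm() code:

/* Illustrative sketch: lcm-based buffer sizing blows up when one block
   size is a misreported non-power-of-two value.  The numbers below are
   hypothetical, not the reporter's exact values.  */
#include <stdio.h>
#include <stdint.h>

static uintmax_t
gcd (uintmax_t a, uintmax_t b)
{
  while (b)
    {
      uintmax_t t = a % b;
      a = b;
      b = t;
    }
  return a;
}

int
main (void)
{
  uintmax_t in_blksize = 2560;              /* hypothetical misreported size */
  uintmax_t out_blksize = 4 * 1024 * 1024;  /* 4 MiB destination block size */
  uintmax_t lcm = in_blksize / gcd (in_blksize, out_blksize) * out_blksize;

  /* Prints: lcm-sized buffer would be 20971520 bytes (20 MiB).  */
  printf ("lcm-sized buffer would be %ju bytes (%ju MiB)\n", lcm, lcm >> 20);
  return 0;
}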


> The proposed patch attached removes the use of buffer_lcm()
> and just picks the largest st_blksize, which would be 4 MiB in your case.
> It also limits the max buffer size to 32 MiB in the edge case
> where st_blksize returns a larger value than this.

I suppose this could break cp if st_blksize is not a power of 2, the file is not a regular file, and reads must be a multiple of the block size. POSIX allows such things, though I expect nowadays they'd be limited to weird devices.

We inadvertently removed support for weird devices in 2009 in commit 55efc5f3ee485b3e31a91c331f07c89aeccc4e89, and nobody seems to have cared (people use dd or whatever to deal with weird devices), so I think it'd be better to limit the fix to regular files. And while we're at it, we might as well resurrect support for weird devices.
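Sketched concretely (hypothetical code, not coreutils' actual internals), that split might look like this:

/* Hypothetical sketch: regular files just take the larger of the two
   block sizes, while non-regular "weird" devices, whose reads may have
   to be exact multiples of the block size, keep an lcm-based buffer.
   A real version would also guard the multiplication against overflow.  */
#include <stddef.h>
#include <sys/stat.h>

size_t
size_gcd (size_t a, size_t b)
{
  while (b)
    {
      size_t t = a % b;
      a = b;
      b = t;
    }
  return a;
}

size_t
choose_bufsize (struct stat const *in, struct stat const *out, size_t fallback)
{
  size_t a = 0 < in->st_blksize ? in->st_blksize : fallback;
  size_t b = 0 < out->st_blksize ? out->st_blksize : fallback;

  if (S_ISREG (in->st_mode) && S_ISREG (out->st_mode))
    return a < b ? b : a;       /* regular files: largest block size wins */

  /* Weird devices: keep I/O a multiple of both block sizes.  */
  return a / size_gcd (a, b) * b;
}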


> +#include <assert.h>

No need for this, as static_assert works without <assert.h> in C23, and Gnulib's assert-h module supports this.
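A minimal illustration of that point: in C23, static_assert is a keyword usable at file scope with no header (and Gnulib's assert-h module provides the same spelling for older compilers). The constant name here is made up for the example:

/* C23: static_assert needs no header; the message argument is optional.  */
enum { EXAMPLE_BUFSIZE = 128 * 1024 };
static_assert (EXAMPLE_BUFSIZE % 512 == 0, "buffer must be sector-aligned");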


> +/* Set a max constraint to avoid excessive mem usage or type overflow.  */
> +enum { IO_BUFSIZE_MAX = 128 * IO_BUFSIZE };
> +static_assert (IO_BUFSIZE_MAX <= MIN (IDX_MAX, SIZE_MAX) / 2 + 1);

I'm leery of putting in a maximum as low as 16 MiB. Although that's OK now (it matches OpenZFS's current maximum), cp will surely have to deal with bigger block sizes in the future. How about if we instead stick with GNU's "no arbitrary limits" policy and work around the ZFS bug?

Something like the attached patch, perhaps?

Attachment: 0001-cp-work-around-ZFS-misinformation.patch
Description: Text Data
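The patch itself isn't inlined in this archive view. As a rough, hypothetical sketch of the idea described above, and not the actual attachment: for regular files, treat a non-power-of-two st_blksize report as ZFS-style misinformation and round it up, imposing no arbitrary ceiling:

/* Hypothetical sketch of the workaround idea, not the attached patch:
   real preferred block sizes are powers of two in practice, so for a
   regular file a non-power-of-two st_blksize (ZFS with a small file)
   is treated as misinformation and rounded up.  No arbitrary upper
   limit is imposed; a real version would still guard the final shift
   against overflow.  */
#include <stddef.h>
#include <sys/stat.h>

size_t
io_blksize_sketch (struct stat const *st, size_t fallback)
{
  size_t blksize = 0 < st->st_blksize ? st->st_blksize : fallback;

  if (S_ISREG (st->st_mode) && (blksize & (blksize - 1)) != 0)
    {
      while (blksize & (blksize - 1))
        blksize &= blksize - 1;   /* keep only the highest set bit */
      blksize <<= 1;              /* next power of two above the report */
    }
  return blksize;
}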

