From: Korn Andras
Subject: bug#59382: cp(1) tries to allocate too much memory if filesystem blocksizes are unusual
Date: Sun, 20 Nov 2022 07:43:19 +0100

On Sat, Nov 19, 2022 at 07:50:06PM -0800, Paul Eggert wrote:

> > > The block size for filesystems can also be quite large (currently,
> > > up to 16M).
> 
> It seems ZFS tries to "help" apps by reporting misinformation (namely a
> smaller block size than actually preferred) when the file is small. This is

Just a nit: this isn't actually misinformation. ZFS uses variable "block"
sizes (it calls these blocks "records"). There is a configurable
per-filesystem maximum, which records of new writes will not exceed (though
they may not reach it); existing files, however, may use record sizes larger
than what is currently configured for the fs.
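
(For concreteness: the value under discussion is st_blksize from stat(2),
which POSIX describes as the preferred I/O block size. A minimal sketch of
how a cp-like tool might read that hint; just an illustration, not the
actual coreutils code:

#include <stdio.h>
#include <sys/stat.h>

int
main (int argc, char **argv)
{
  struct stat st;
  if (argc < 2 || stat (argv[1], &st) != 0)
    {
      perror ("stat");
      return 1;
    }
  /* The filesystem's hint for efficient I/O; cp-style tools typically
     size their copy buffer from this value.  On ZFS it can differ per
     file, as described above.  */
  printf ("%s: st_blksize = %ld\n", argv[1], (long) st.st_blksize);
  return 0;
}
)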

The currently-configured recordsize of the filesystem and the recordsize a
particular file was written with are not necessarily related. Depending on
the write pattern and whether the recordsize of the fs was changed during
the lifetime of the file, the same file can contain records of different
sizes. Taken to the absurd extreme: the "optimal" blocksize for reading may
in fact depend on the position within the file (and apply only to the next
read).

If a file fits into a single record, then, IIUC, it is actually optimal to
read it in a single operation; this is the case even if the currently
configured recordsize is smaller than what the allowed maximum was when the
file was written. If the file is highly fragmented and chunks of it are
stored on different physical media (this can easily happen if the zfs pool
was expanded with a new "vdev" during the lifetime of the file), it will in
fact be fastest to issue reads for several chunks in parallel, with read
speed possibly scaling almost linearly with the number of parallel requests.
(Not that I'm proposing cp(1) should try to figure this out, presumably in a
zfs-specific way, and actually do it.)
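
(Purely to illustrate that last point, not as a proposal: disjoint chunks
of one file can be read concurrently with pread(2), since each call carries
its own offset. The thread count and chunk size below are invented for the
sketch:

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

enum { NTHREADS = 4, CHUNK = 1 << 20 };  /* illustrative values only */

struct chunk { int fd; off_t off; char *buf; };

static void *
read_chunk (void *arg)
{
  struct chunk *c = arg;
  /* pread takes an explicit offset, so concurrent readers on the same
     fd don't race over the shared file position.  */
  if (pread (c->fd, c->buf, CHUNK, c->off) < 0)
    perror ("pread");
  return NULL;
}

int
main (int argc, char **argv)
{
  if (argc < 2)
    return 1;
  int fd = open (argv[1], O_RDONLY);
  if (fd < 0)
    {
      perror ("open");
      return 1;
    }
  pthread_t tid[NTHREADS];
  struct chunk c[NTHREADS];
  for (int i = 0; i < NTHREADS; i++)
    {
      c[i].fd = fd;
      c[i].off = (off_t) i * CHUNK;
      c[i].buf = malloc (CHUNK);
      if (!c[i].buf)
        abort ();
      pthread_create (&tid[i], NULL, read_chunk, &c[i]);
    }
  for (int i = 0; i < NTHREADS; i++)
    {
      pthread_join (tid[i], NULL);
      free (c[i].buf);
    }
  close (fd);
  return 0;
}
)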

Since the file may contain records of various sizes that bear no relation to
the current per-fs recordsize setting, it's not immediately obvious (at
least to me) what st_blksize zfs should report that can't be construed as
misinformation.
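
(Whatever zfs ends up reporting, the application side can defend itself by
clamping the hint into a sane range before allocating. A hedged sketch; the
bounds and the function name are made up for illustration, not coreutils'
actual limits:

#include <stddef.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Invented bounds for illustration.  */
enum { BUFSIZE_MIN = 512, BUFSIZE_MAX = 1 << 21 /* 2 MiB */ };

/* Derive a copy-buffer size from a possibly tiny or huge st_blksize.  */
static size_t
choose_bufsize (struct stat const *st)
{
  blksize_t bs = st->st_blksize;
  if (bs < BUFSIZE_MIN)
    bs = BUFSIZE_MIN;
  if (bs > BUFSIZE_MAX)
    bs = BUFSIZE_MAX;
  return bs;
}
)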

If you have strong opinions on the matter, you may want to explain them in
the pertinent OpenZFS issue: https://github.com/openzfs/zfs/issues/14195.

András

-- 
 I was once thrown out of a mental hospital for depressing the other patients.
