
bug#6131: [PATCH]: fiemap support for efficient sparse file copy


From: Tao Ma
Subject: bug#6131: [PATCH]: fiemap support for efficient sparse file copy
Date: Fri, 28 May 2010 15:52:26 +0800
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4

Hi Jim,

On 05/27/2010 06:30 PM, Jim Meyering wrote:
> jeff.liu wrote:
>> This is the revised version; it fixes the fiemap start-offset calculation
>> by moving it out of the 'for (i = 0; i < fiemap->fm_mapped_extents; i++)' loop.
>
> Hi Jeff,
>
> I've included below the state of my local changes.
> Unfortunately, with that 5-patch series, there is always a test failure
> on F13/ext4.  Maybe someone who knows more about extents can provide an
> explanation?
I just want to clarify why ocfs2 didn't work here. I guess the same reason applies to ext4, since both ext4 and ocfs2 use block groups to organize their blocks within the volume.

I checked the perl test script that creates the sparse source file: it writes contiguous chunks of about 20-24K at intervals of about 40K. So in general each 20-24K chunk should end up as one contiguous extent. But there are scenarios where such a chunk gets split into two extents. Consider the block group used to allocate blocks for this file: if the group has only 10K left when you ask for 20K, the filesystem uses the remaining 10K from that group and allocates another 10K from a different block group. That becomes two extents, since the blocks cannot be contiguous.
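For what it's worth, here is a minimal sketch (my own illustration, not code from the patch series) of listing a file's extents with the FIEMAP ioctl on Linux; with it the block-group split shows up as two records instead of one. It assumes the whole file fits into one fixed-size extent buffer:

/* fiemap-list.c: print the extents of FILE via FS_IOC_FIEMAP.
   Minimal error handling; assumes at most N_EXTENTS extents,
   otherwise one would loop, advancing fm_start each time.  */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int
main (int argc, char **argv)
{
  enum { N_EXTENTS = 256 };            /* arbitrary bound for this sketch */

  if (argc < 2)
    {
      fprintf (stderr, "usage: %s FILE\n", argv[0]);
      return 1;
    }

  int fd = open (argv[1], O_RDONLY);
  if (fd < 0)
    {
      perror ("open");
      return 1;
    }

  struct fiemap *fm = calloc (1, sizeof *fm
                                 + N_EXTENTS * sizeof (struct fiemap_extent));
  fm->fm_start = 0;
  fm->fm_length = FIEMAP_MAX_OFFSET;   /* map the whole file */
  fm->fm_flags = FIEMAP_FLAG_SYNC;     /* flush delayed allocation first */
  fm->fm_extent_count = N_EXTENTS;

  if (ioctl (fd, FS_IOC_FIEMAP, fm) < 0)
    {
      perror ("FS_IOC_FIEMAP");
      return 1;
    }

  for (unsigned int i = 0; i < fm->fm_mapped_extents; i++)
    {
      struct fiemap_extent *fe = &fm->fm_extents[i];
      printf ("extent %u: logical %llu, length %llu%s\n", i,
              (unsigned long long) fe->fe_logical,
              (unsigned long long) fe->fe_length,
              (fe->fe_flags & FIEMAP_EXTENT_LAST) ? " (last)" : "");
    }

  free (fm);
  return 0;
}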

So I guess the right step is to check for the holes with filefrag, if you prefer that tool and want to make sure cp doesn't copy holes (I got this point from another e-mail of yours). How do we find holes with filefrag? I guess it is quite simple, since filefrag also uses fiemap, and we can compute the holes easily by comparing two consecutive extent records. I guess we can get what you want on ext4 after this update.
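Concretely, the comparison I have in mind would look something like the sketch below (again just an illustration, not the patch; report_holes is a made-up helper, and it assumes the fiemap buffer from the snippet above already holds all of the file's extents, plus the file size from stat). A gap between the end of one extent and the start of the next is a hole; two back-to-back extents simply produce no output:

/* Derive holes by comparing consecutive FIEMAP extent records.  */
#include <stdio.h>
#include <stdint.h>
#include <linux/fiemap.h>

static void
report_holes (const struct fiemap *fm, uint64_t file_size)
{
  uint64_t expected = 0;      /* logical offset where the next data could start */

  for (unsigned int i = 0; i < fm->fm_mapped_extents; i++)
    {
      const struct fiemap_extent *fe = &fm->fm_extents[i];
      if (fe->fe_logical > expected)
        printf ("hole: offset %llu, length %llu\n",
                (unsigned long long) expected,
                (unsigned long long) (fe->fe_logical - expected));
      expected = fe->fe_logical + fe->fe_length;
    }

  /* A trailing hole, if the last extent ends before EOF.  */
  if (expected < file_size)
    printf ("hole: offset %llu, length %llu\n",
            (unsigned long long) expected,
            (unsigned long long) (file_size - expected));
}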

Regards,
Tao




