Re: [Gluster-devel] [RFC] Accounting directory blocks in marker


From: Brian Foster
Subject: Re: [Gluster-devel] [RFC] Accounting directory blocks in marker
Date: Fri, 11 Oct 2013 08:08:04 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130805 Thunderbird/17.0.8

On 10/11/2013 07:18 AM, Krishnan Parthasarathi wrote:
> All,
> 
> Today, the marker xlator doesn't account for the blocks consumed by
> directories. This document describes the approach we plan to use to add
> directory-size accounting.
> 
> Extended attributes to be added:
> --------------------------------
> 
> key = trusted.quota.dir-blocks-contri.
> value = the number of ia_blocks consumed by this directory inode, as
> reported by stat(2). These values will be referred to as "directory
> sizes" herein.
> 

Hi Krishnan,

This sounds OK to me, but is there any reason we couldn't use the
existing marker xattr naming scheme instead of creating a new key? E.g.,
a directory gets the same contribution xattr that a file gets, only it
happens to contribute to itself? Alternatively, maybe there's room for
an optimization here where the directory's stat data can simply be folded
into the directory size. Thoughts?

Brian

> Note: This extended attribute will be set only on directories.
> 
> This value records the contribution made so far by the directory inode
> itself (excluding its contents) towards the parent's size.
> 
> With the above extended attribute we can keep track of changes in directory
> sizes similar to how we handle changes to file sizes. A directory's
> contribution to its parent would be as below,
> 
> Total size of a directory = Total Contribution to parent
>     = ValueOfXattr(trusted.quota.dir-blocks-contri) +
>       ValueOfXattr(trusted.quota.size)
> 
> This relation must be used for both reporting disk-usage in the context of
> quotas and for percolating (delta) accounting updates 'upwards'.
> 
> Please let us know as soon as possible if you see any issues with this
> approach.
> 
> thanks,
> Krish
> 
> 
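The relation quoted above, and the upward percolation of deltas, can be
sketched with a toy model. This is illustrative Python, not GlusterFS code:
the xattr key names come from the proposal, while `Dir` and
`update_dir_blocks` are hypothetical helpers invented for the sketch.

```python
# Hypothetical model of the proposed directory-size accounting.
# Xattr keys are from the proposal; Dir/update_dir_blocks are invented.

CONTRI = "trusted.quota.dir-blocks-contri"
SIZE = "trusted.quota.size"

class Dir:
    def __init__(self, parent=None):
        self.parent = parent
        self.xattrs = {CONTRI: 0, SIZE: 0}

def total_size(d):
    # Total size of a directory = contribution of the directory inode's
    # own blocks + accounted size of its contents:
    #   ValueOfXattr(trusted.quota.dir-blocks-contri) +
    #   ValueOfXattr(trusted.quota.size)
    return d.xattrs[CONTRI] + d.xattrs[SIZE]

def update_dir_blocks(d, new_blocks_bytes):
    # When the directory inode's own block usage changes, only the delta
    # against the previously accounted contribution is percolated upwards
    # into each ancestor's trusted.quota.size.
    delta = new_blocks_bytes - d.xattrs[CONTRI]
    d.xattrs[CONTRI] = new_blocks_bytes
    anc = d.parent
    while anc is not None:
        anc.xattrs[SIZE] += delta
        anc = anc.parent

root = Dir()
sub = Dir(parent=root)
update_dir_blocks(sub, 4096)   # sub's inode now consumes 4096 bytes of blocks
sub.xattrs[SIZE] += 1048576    # contents accounted via the existing file path
print(total_size(sub))         # -> 1052672
print(root.xattrs[SIZE])       # -> 4096 (sub's dir blocks percolated up)
```

The point of tracking the contribution in its own xattr is that an update
only needs to propagate the difference from the last accounted value, not
recompute sizes from scratch.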



