
Fwd: [Gluster-devel] Permissions and ownership ...


From: Raghavendra G
Subject: Fwd: [Gluster-devel] Permissions and ownership ...
Date: Wed, 2 Jan 2008 10:13:44 +0400

resending to group

---------- Forwarded message ----------
From: Raghavendra G <address@hidden>
Date: Jan 2, 2008 10:13 AM
Subject: Re: [Gluster-devel] Permissions and ownership ...
To: Gareth Bult <address@hidden>


Hi Gareth,
The snapshot of the logs you've sent is not sufficient. How big are the log
files? Would it be possible to send the whole log files, or to give us
remote access to your system to debug the problem?

regards,


On Dec 28, 2007 7:31 PM, Gareth Bult <address@hidden> wrote:

> Hi,
>
> Without sending reams of information, this appears to be the relevant
> snippet from the client logs;
>
>  cp -rav /bin /mnt/cluster
>
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x96b6d0)
> 2007-12-28 15:29:28 D [fuse-bridge.c:422:fuse_lookup] glusterfs-fuse:
> LOOKUP 1771706/more (/bin/more)
> 2007-12-28 15:29:28 D [fuse-bridge.c:377:fuse_entry_cbk] glusterfs-fuse:
> ERR => -1 (2)
> 2007-12-28 15:29:28 D [inode.c:308:__destroy_inode] fuse/inode: destroy
> inode(0) address@hidden
> 2007-12-28 15:29:28 D [ioc-inode.c:136:ioc_inode_update] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [ioc-inode.c:144:ioc_inode_update] ioc: adding to
> inode_lru[0]
> 2007-12-28 15:29:28 D [ioc-inode.c:146:ioc_inode_update] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [inode.c:559:__create_inode] fuse/inode: create
> inode(606599)
> 2007-12-28 15:29:28 D [inode.c:351:__active_inode] fuse/inode: activating
> inode(606599), lru=13/1024
> 2007-12-28 15:29:28 D [inode.c:308:__destroy_inode] fuse/inode: destroy
> inode(0) address@hidden
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [inode.c:381:__passive_inode] fuse/inode:
> passivating inode(606598), lru=14/1024
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:50:ioc_get_inode] ioc: locked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:54:ioc_get_inode] ioc: unlocked
> table(0x61e970)
> 2007-12-28 15:29:28 D [io-cache.c:105:ioc_inode_flush] ioc: locked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [io-cache.c:107:ioc_inode_flush] ioc: unlocked
> inode(0x1f7f320)
> 2007-12-28 15:29:28 D [fuse-bridge.c:422:fuse_lookup] glusterfs-fuse:
> LOOKUP 1771706/rmdir (/bin/rmdir)
>
> Which yields;
>
> `/bin/more' -> `/mnt/cluster/bin/more'
> cp: failed to preserve ownership for `/mnt/cluster/bin/more': Function not
> implemented
>
> Does this help?
>
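[Editor's note: "Function not implemented" is the message for ENOSYS, i.e. the chown/lchown call that cp -a issues to preserve ownership is being refused by the mounted filesystem. As a suggestion (not something from the original mail), tracing just the ownership syscalls while copying a single file pinpoints which call fails:

  # trace only the ownership-preserving syscalls issued by cp
  strace -f -e trace=chown,fchown,lchown \
      cp -a /bin/more /mnt/cluster/bin/more
  # a result of "= -1 ENOSYS (Function not implemented)" marks the call
  # that the glusterfs client is not handling

]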
> ----- Original Message -----
> From: "Raghavendra G" <address@hidden>
> To: "Gareth Bult" < address@hidden>
> Sent: Friday, December 28, 2007 9:58:51 AM (GMT) Europe/London
> Subject: Re: [Gluster-devel] Permissions and ownership ...
>
> Hi Gareth,
> Thanks for the specs. I tried to reproduce your problem on a local system
> without any success. Waiting for the logs.
>
> regards,
> On Dec 28, 2007 1:37 PM, Gareth Bult <address@hidden> wrote:
>
> > server;
> >
> > volume image-raw
> >   type storage/posix
> >   option directory /export/image
> > end-volume
> >
> > volume image-locks
> >   type features/posix-locks
> >   subvolumes image-raw
> >   option mandatory on
> > end-volume
> >
> > volume image-cache
> >     type performance/read-ahead
> >     subvolumes image-locks
> >     option page-size 512KB       # default is 256KB
> >     option page-count 4          # default is 2
> >     option force-atime-update no # defalut is 'no'
> > end-volume
> >
> > volume image
> >   type performance/io-threads
> >   subvolumes image-cache
> >   option thread-count 2
> >   option cache-size 4MB
> > end-volume
> >
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option auth.ip.image.allow 10.0.0.*
> >   subvolumes image
> > end-volume
> >
> > client;
> >
> > volume brick1
> >     type protocol/client
> >     option transport-type tcp/client
> >     option remote-host brick1
> >     option remote-subvolume image
> > end-volume
> >
> > volume brick2
> >     type protocol/client
> >     option transport-type tcp/client
> >     option remote-host brick2
> >     option remote-subvolume image
> > end-volume
> >
> > volume brick3
> >     type protocol/client
> >     option transport-type tcp/client
> >     option remote-host brick3
> >     option remote-subvolume image
> > end-volume
> >
> > volume afr
> >     type cluster/afr
> >     subvolumes brick1 brick2 brick3
> >     option replicate *:3
> >     option scheduler rr
> > end-volume
> >
> > volume wcache
> >     type performance/write-behind
> >     subvolumes afr
> >     option flush-behind on    # default value is 'off'
> >     option aggregate-size 1MB # default value is 0
> > end-volume
> >
> > volume ioc
> >   type performance/io-cache
> >   subvolumes wcache
> >   option page-size 1MB      # 128KB is default
> >   option cache-size 64MB    # 32MB is default
> >   option force-revalidate-timeout 5 # 1second is default
> > end-volume
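[Editor's note: with spec files like the above saved as, say, server.vol and client.vol, the debug logs requested elsewhere in the thread could be gathered roughly as below. Only the -l and -L DEBUG options are quoted in the thread; the binary names, the -f spec-file flag and the paths are assumptions about the 1.3-era tools, so treat this as a sketch:

  # on each brick (server side)
  glusterfsd -f /etc/glusterfs/server.vol -l /var/log/glusterfsd.log -L DEBUG

  # on the client: mount, then reproduce the failure
  glusterfs -f /etc/glusterfs/client.vol -l /var/log/glusterfs.log -L DEBUG /mnt/cluster
  cp -rav /bin /mnt/cluster    # reproduces the "failed to preserve ownership" errors

]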
> >
> >
> >
> >
> > ----- Original Message -----
> > From: "Raghavendra G" <address@hidden>
> > To: "Gareth Bult" < address@hidden>
> > Sent: Friday, December 28, 2007 4:34:18 AM (GMT) Europe/London
> > Subject: Re: [Gluster-devel] Permissions and ownership ...
> >
> >
> >
> > On Dec 28, 2007 7:47 AM, Gareth Bult <address@hidden> wrote:
> >
> > > > >Hi Gareth,
> > >
> > > Hi,
> > >
> > > >Can you please send:
> > > >1. glusterfs client and server logs. (run both client and server with
> > > options -l logfile -L DEBUG)
> > > >2. glusterfs client and server configuration files.
> > >
> > > This will take time as I need to restart the client and servers,
> > > which will need some changes to avoid a huge self-heal ...
> > >
> >
> > What are the translators you are using on the client and server side? Can
> > you please send the configuration files? I will wait for the logs.
> >
> > >
> > >
> > > >Also can you check whether the following commands complete
> > > successfully on a glusterfs mount:
> > > >1. chmod
> > > >2. chown
> > >
> > > Confirm:
> > >
> > > chown root <file>
> > > chmod 755 <file>
> > >
> > > Both work fine on a mounted glusterfs.
> > >
> > > >Which is the backend file system you are using?
> > >
> > > ext3.
> > >
> > > Will send through logs as soon as I can take everything down to
> > > rebuild them .. although it seems very easy to reproduce?
> > > (cp -rv /bin /glusterfs)
> > >
> > > Regards,
> > > Gareth.
> > >
> > >
> > > regards,
> > >
> > > On Dec 28, 2007 3:30 AM, Gareth Bult <address@hidden> wrote:
> > >
> > > > Hi,
> > > >
> > > > I've run into another issue trying to run applications on a
> > > > glusterfs .. in particular something to do with the setting of
> > > > permissions / ownership.
> > > >
> > > > Here's a sample from a "Zimbra" install log;
> > > >
> > > > (Reading database ... 13490 files and directories currently
> > > > installed.)
> > > > Unpacking zimbra-core (from
> > > > .../zimbra-core_5.0.0_GA_1869.UBUNTU6_i386.deb) ...
> > > > dpkg: error processing
> > > > ./packages/zimbra-core_5.0.0_GA_1869.UBUNTU6_i386.deb (--install):
> > > >  error setting ownership of `./opt/zimbra/db/create_database.sql':
> > > > Function not implemented
> > > > dpkg-deb: subprocess paste killed by signal (Broken pipe)
> > > > Errors were encountered while processing:
> > > >  ./packages/zimbra-core_5.0.0_GA_1869.UBUNTU6_i386.deb
> > > >
> > > > Typically I see lots of errors if I try;
> > > >
> > > > address@hidden:/opt# cp -rav /bin /mnt/cluster/mail/
> > > > `/bin' -> `/mnt/cluster/mail/bin'
> > > > `/bin/dash' -> `/mnt/cluster/mail/bin/dash'
> > > > cp: failed to preserve ownership for `/mnt/cluster/mail/bin/dash':
> > > > Function not implemented
> > > > `/bin/which' -> `/mnt/cluster/mail/bin/which'
> > > > cp: failed to preserve ownership for `/mnt/cluster/mail/bin/which':
> > > > Function not implemented
> > > >
> > > > ... anyone any idea what causes this ?
> > > > ... what am I doing wrong?
> > > >
> > > > "cp -rv" works fine - no problems at all.
> > > >
> > > > Is there a solution? Thus far the only "applications" I've
> > > > successfully managed to make work running off a live glusterfs are Xen
> > > > virtual filesystem instances ..
> > > >
> > > > tia
> > > > Gareth.
> > > >
> > > > ----- Original Message -----
> > > > From: "Gareth Bult" <address@hidden>
> > > > To: "Kevan Benson" <address@hidden >
> > > > Cc: "gluster-devel" < address@hidden>, "Gareth Bult" <
> > > > address@hidden>
> > > > Sent: Thursday, December 27, 2007 11:00:48 PM (GMT) Europe/London
> > > > Subject: Re: [Gluster-devel] Choice of Translator question
> > > >
> > > > This could be the problem.
> > > >
> > > > When I do this on a 1G file, I have 1 file in each stripe partition
> > > > of size ~ 1G.
> > > >
> > > > I don't get (n) files where n=1G/chunk size ... (!)
> > > >
> > > > If I did, I could see how it would work .. but I don't ..
> > > >
> > > > Are you saying I "definitely should" see files broken down into
> > > > multiple sub files, or were you assuming this is how it worked?
> > > >
> > > > Gareth.
> > > >
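[Editor's note, not from the thread: whether each stripe member really holds a full copy or only its own chunks can be checked on the backend by comparing a file's apparent size with the blocks actually allocated in the export directory (/export/image in the server spec quoted earlier; the file name here is hypothetical):

  # run on each brick: a sparse, full-length stripe file shows a large
  # apparent size but far fewer blocks actually allocated
  ls -ls /export/image/bigfile.img
  du -h --apparent-size /export/image/bigfile.img
  du -h /export/image/bigfile.img

]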
> > > >
> > > > ----- Original Message -----
> > > > From: "Kevan Benson" <address@hidden>
> > > > To: "Gareth Bult" < address@hidden>
> > > > Cc: "gluster-devel" <address@hidden>
> > > > Sent: Thursday, December 27, 2007 8:16:53 PM (GMT) Europe/London
> > > > Subject: Re: [Gluster-devel] Choice of Translator question
> > > >
> > > > Gareth Bult wrote:
> > > > >> Agreed, which is why I just showed the single file self-heal
> > > > >> method, since in your case targeted self heal (maybe before a full
> > > > >> filesystem self heal) might be more useful.
> > > > >
> > > > > Sorry, I was mixing moans .. on the one hand there's no log hence no
> > > > > automatic detection of out of date files (which means you need a
> > > > > manual scan), and secondly, doing a full self-heal on a large
> > > > > file-system "can" be prohibitively "expensive" ...
> > > > >
> > > > > I'm vaguely wondering if it would be possible to have a "log"
> > > > > translator that wrote changes to a namespace volume for quick
> > > > > recovery following a node restart. (as an option of course)
> > > >
> > > > An interesting thought.  Possibly something that keeps a filename and
> > > > timestamp so other AFR members could connect and request changed file
> > > > AFR versions since X timestamp.
> > > >
> > > > Automatic self-heal is supposed to be on the way, so I suspect they are
> > > > already doing (or planning) something like this.
> > > >
> > > > >> I don't see how the AFR could even be aware the chunks belong to
> > > > >> the same file, so how it would know to replicate all the chunks of
> > > > >> a file is a bit of a mystery to me.  I will admit I haven't done
> > > > >> much with the stripe translator though, so my understanding of its
> > > > >> operation may be wrong.
> > > > >
> > > > > Mmm, trouble is there's nothing definitive in the documentation
> > > > > either way .. I'm wondering whether it's a known critical omission
> > > > > which is why it's not been documented (!) At the moment stripe is
> > > > > pretty useless without self-heal (i.e. AFR). AFR is pretty useless
> > > > > without stripe for anyone with large files. (which I'm guessing is
> > > > > why stripe was implemented after all the "stripe is bad"
> > > > > documentation) If the two don't play well and a self-heal on a
> > > > > large file means a 1TB network data transfer - this would strike me
> > > > > as a show stopper.
> > > >
> > > > I think the original docs said it was implemented because it was easy,
> > > > but there wasn't a whole lot to be gained by using it.  Since then, I've
> > > > seen people post numbers that seemed to indicate it gave a somewhat
> > > > sizable boost, but the extra complexity it introduced never made it
> > > > attractive to me.
> > > >
> > > > The possibility it could be used to greatly speed up self-heal on large
> > > > files seems like a real good reason to use it though, so hopefully we
> > > > can find a way to make it work.
> > > >
> > > > >> Understood.  I'll have to actually try this when I have some time,
> > > > >> instead of just doing some armchair theorizing.
> > > > >
> > > > > Sure .. I think my tests were "proper" .. although I might try them
> > > > > on TLA just to make sure.
> > > > >
> > > > > Just thinking logically for a second, for AFR to do chunk level
> > > > > self-heal, there must be a chunk level signature store somewhere ...
> > > > > where would this be?
> > > >
> > > > Well, to AFR each chunk should just look like another file, it shouldn't
> > > > care that it's part of a whole.
> > > >
> > > > I assume the stripe translator uses another extended attribute to tell
> > > > what file it's part of.  Perhaps the AFR translator is stripe aware and
> > > > that's causing the problem?
> > > >
> > > > >> Was this on AFR over stripe or stripe over AFR?
> > > > >
> > > > > Logic told me it must be AFR over stripe, but I tried it both ways
> > > > > round ..
> > > >
> > > > Let's get rid of the over/under terminology (which I always seem to think
> > > > of in reverse from other people), and use a representation that's more
> > > > absolute:
> > > >
> > > > client -> XLATOR(stripe) -> XLATOR(AFR) -> diskVol(1..N)
> > > >
> > > > Throw in your network connections wherever you want, but this should be
> > > > testable on a single box with two different directories exported as
> > > > volumes.
> > > >
> > > > The client writes to the stripe translator, which splits up the large
> > > > file, which is then sent to the AFR translator so each chunk is stored
> > > > redundantly in each disk volume supplied.
> > > >
> > > > If the AFR and stripe are reversed, it will have to pull all stripe
> > > > chunks to do a self heal (unless AFR is stripe aware), which isn't what
> > > > we are aiming for.
> > > >
> > > > Is that similar to what you tested?
> > > >
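[Editor's note: for illustration, a minimal client spec sketching that layering (mount point -> stripe -> AFR -> bricks), written in the same format as the configs quoted earlier. The host names, volume names and the commented-out stripe option are hypothetical, not taken from Gareth's setup:

  # four remote bricks, each exporting a subvolume called "image" on a hypothetical host
  volume brick-a1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume image
  end-volume

  volume brick-a2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume image
  end-volume

  volume brick-b1
    type protocol/client
    option transport-type tcp/client
    option remote-host server3
    option remote-subvolume image
  end-volume

  volume brick-b2
    type protocol/client
    option transport-type tcp/client
    option remote-host server4
    option remote-subvolume image
  end-volume

  # each AFR pair keeps a full copy of whatever is written to it
  volume afr-a
    type cluster/afr
    subvolumes brick-a1 brick-a2
    option replicate *:2
  end-volume

  volume afr-b
    type cluster/afr
    subvolumes brick-b1 brick-b2
    option replicate *:2
  end-volume

  # the mount point talks to stripe, which splits files into chunks and
  # spreads the chunks across the two AFR pairs, so each chunk is replicated
  volume stripe
    type cluster/stripe
    subvolumes afr-a afr-b
    # option block-size *:1MB   # chunk size, if this stripe version accepts the option
  end-volume

In this arrangement each AFR pair only ever holds its own share of the striped data, which is the property the discussion above is after.]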
> > > > --
> > > >
> > > > -Kevan Benson
> > > > -A-1 Networks
> > > >
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > Gluster-devel mailing list
> > > > address@hidden
> > > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > > >
> > >
> > >
> > >
> > > --
> > > Raghavendra G
> > >
> > > A centipede was happy quite, until a toad in fun,
> > > Said, "Prey, which leg comes after which?",
> > > This raised his doubts to such a pitch,
> > > He fell flat into the ditch,
> > > Not knowing how to run.
> > > -Anonymous
> > >
> >
> >
> >
> > --
> > Raghavendra G
> >
> > A centipede was happy quite, until a toad in fun,
> > Said, "Prey, which leg comes after which?",
> > This raised his doubts to such a pitch,
> > He fell flat into the ditch,
> > Not knowing how to run.
> > -Anonymous
> >
>
>
>
> --
> Raghavendra G
>
> A centipede was happy quite, until a toad in fun,
> Said, "Prey, which leg comes after which?",
> This raised his doubts to such a pitch,
> He fell flat into the ditch,
> Not knowing how to run.
> -Anonymous
>



-- 
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Prey, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous



-- 
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Prey, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous

