
Re: [Gluster-devel] posix-locks under AFR not working for server+client


From: Krishna Srinivas
Subject: Re: [Gluster-devel] posix-locks under AFR not working for server+client in one process
Date: Fri, 17 Oct 2008 12:24:40 +0530

Rommer,
Thanks, we are working on a fix. For now, please run the client and
server as separate processes.
Krishna
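
As a rough illustration of that workaround, the volumes might be split
along these lines (the volume names, export path, and TCP transport here
are placeholders, not a tested configuration):

```
# server process volfile (placeholder names)
volume brick
  type storage/posix
  option directory /data/export
end-volume

volume server
  type protocol/server
  subvolumes brick
  option transport-type tcp/server
  option auth.ip.brick.allow *
end-volume

# client process volfile (placeholder names)
volume brick-remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-subvolume brick
end-volume
```

With the server in its own process, every open funnels through the
server's inode table, so posix-locks sees one inode structure per file.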

On Fri, Oct 17, 2008 at 5:21 AM, Rommer <address@hidden> wrote:
> On Wed, 15 Oct 2008 16:36:25 +0530
> "Krishna Srinivas" <address@hidden> wrote:
>
>> Rommer,
>> Thanks for that, we will get back to you.
>> Krishna
>>
>
> I think I found the problem.
> I've added a debug line after the fd->inode check in the pl_open_cbk
> function of the posix-locks translator:
>
> ...
>    if (!fd->inode) {
>      gf_log (this->name, GF_LOG_ERROR, "fd->inode is NULL! retur...
>      STACK_UNWIND (frame, -1, EBADFD, fd);
>      return 0;
>    }
>
>    printf("pl_open_cbk: fd=%p, fd->inode=%p, fd->inode->ino=%lu\n",
>            fd, fd->inode, (u_long)fd->inode->ino);
>
>    data_t *inode_data = dict_get (fd->inode->ctx, this->name);
> ...
>
> and found the following (I ran the lock-test script on both nodes):
>
> (a) one process without client/server connection in afr for local brick:
> pl_open_cbk: fd=0x86e15b8, fd->inode=0x86fd1c0, fd->inode->ino=280982
> pl_open_cbk: fd=0x86fdad0, fd->inode=0x86fda68, fd->inode->ino=280982
>
> (b) one process with client/server connection in afr for local brick:
> pl_open_cbk: fd=0x92c3100, fd->inode=0x92c2df0, fd->inode->ino=280982
> pl_open_cbk: fd=0x92c3200, fd->inode=0x92c2df0, fd->inode->ino=280982
>
> If afr uses the local brick without a client/server connection, the
> posix-locks translator gets different inode structures for the same
> inode number.
>
> Between afr and the local brick volume there should be a translator
> that looks up the matching inode structure in the server's
> bound_xl->itable (i.e. io-thr's itable in my configuration).
>
> I'm using the following volumes for this now:
> volume server-local
>  type protocol/server
>  subvolumes io-thr
>  option transport-type unix/server
>  option listen-path /tmp/export.sock
>  option auth.ip.io-thr.allow *
> end-volume
> volume client-local
>  type protocol/client
>  option transport-type unix/client
>  option connect-path /tmp/export.sock
>  option remote-subvolume io-thr
> end-volume
>
> But this workaround has poor performance.
>
> Rommer.
>
