
Re: [Gluster-devel] FIRST_CHILD(frame->this)->fops->create


From: Ian Latter
Subject: Re: [Gluster-devel] FIRST_CHILD(frame->this)->fops->create
Date: Fri, 07 Aug 2009 19:08:36 +1000

Hello,


> For reasons explained further below, it is not "right" to create your
> inodes from a globally-reachable inode table (which does not exist
> anyway). Almost all the time, you would be creating these new
> files/directories in the context of a particular call, or have it
> triggered by a similar call. So most of the time, the right inode
> table should be taken from loc->inode->itable or fd->inode->itable,
> according to the particular fop in the picture.

Okay, I believe I understand your reasoning, but this does
not appear to solve my problem: I am trying to access a
directory that is unrelated to the call coming from the
parent xlator/brick.

i.e.  parent xlator; 
         write(/x/y/target.txt, data)
      my xlator; 
         alter that data, making notes
         write(/x/y/target.txt, altered)
         create/open(/a/b/c/file.txt)
         write(/a/b/c/file.txt, notes)
         close(/a/b/c/file.txt)

Meaning that I can readily retrieve context for
the /x/y and /x/y/target.txt relationship, but not
for the /a/b/c and /a/b/c/file.txt relationship.

This makes sense for almost every case, but I don't
understand the path-translator: how does it avoid
having to play with the inode tables of the
parent/child to achieve its outcome?

Maybe it didn't .. hmm .. ok ... That aside;

> There is a reason why just a few @this have an itable while others do
> not. On the client side, only the fuse's @this has a proper itable
> initialized at mount time. On the server side, each subvolume of
> protocol/server has a different itable of its own. Since two posix
> exports from a single backend cannot share the same itable, each of
> their itables is stored in its respective @this structure. And this
> itable is initialized only when the first client attaches to this as
> its remote-subvolume (i.e., during the setvolume MOP, which is the
> handshake + authentication).

... am I right to believe that if I set up my own
mop->subvolume I would then be gifted with a
populated itable?

Would that be the appropriate way for me to obtain
a populated itable, even in the case where my xlator
is not an immediate child of the server xlator?

I.e. - this is my test glusterfs.vol;

volume posix
  type storage/posix
  option directory /gluster-test-mount
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume testfeature
  type features/testfeature
  subvolumes locks
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes testfeature
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume



Thanks for your help,



--
Ian Latter
Late night coder ..
http://midnightcode.org/



