From: Anand Avati
Subject: Re: [Gluster-devel] [PATCH v9] vfs_glusterfs: Samba VFS module for glusterfs
Date: Wed, 29 May 2013 15:27:27 -0700
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:17.0) Gecko/20130509 Thunderbird/17.0.6
On 05/29/2013 07:21 AM, Anand Avati wrote:

>> Implement a Samba VFS plugin for glusterfs based on gluster's gfapi.
>> This is a "bottom" VFS plugin (not something to be stacked on top of
>> another module), and it translates (most) calls into the closest
>> actions on gfapi.
>
> Anand, before we push this into Samba I would like to have an answer
> about access control. I have tried to find out exactly how access
> control is handled, but the code is complex. What I have found so far
> is not encouraging. I see things like:
>
>     #define GF_MAX_AUX_GROUPS 200
>
> and then in syncop_create_frame() that value is used to cap the
> maximum number of auxiliary groups. On Linux the maximum number of
> auxiliary groups is 65536, and we have easily seen 2k auxiliary groups
> attached to a user in Windows domains.
Currently it is artificially limited to a fixed number. I will work on
making this dynamic. However, this will be a completely internal change
to glusterfs, with no changes to either the API or vfs_glusterfs. Thanks
for the feedback.
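For concreteness, here is a minimal sketch of what "dynamic" could mean
here, using only standard POSIX calls. This is not glusterfs source; the
helper name caller_groups() is hypothetical and stands in for whatever
syncop_create_frame() would do internally:

```c
/*
 * Hedged sketch, not glusterfs code: size the auxiliary-group list to the
 * caller instead of truncating it at a fixed cap such as GF_MAX_AUX_GROUPS.
 * caller_groups() and its usage are purely illustrative.
 */
#include <grp.h>
#include <stdlib.h>
#include <unistd.h>

static gid_t *caller_groups(int *ngroups_out)
{
        /* Ask how many supplementary groups the calling process has. */
        int ngroups = getgroups(0, NULL);
        gid_t *groups;

        if (ngroups < 0)
                return NULL;

        /* Allocate exactly that many slots; no hard-coded 200-entry cap,
         * so a user with 2k group memberships is represented in full. */
        groups = calloc(ngroups ? ngroups : 1, sizeof(*groups));
        if (groups == NULL)
                return NULL;

        if (ngroups > 0 && getgroups(ngroups, groups) < 0) {
                free(groups);
                return NULL;
        }

        *ngroups_out = ngroups;
        return groups;
}
```

The real change would of course live behind glusterfs's own allocators
and frame lifecycle, which is why it can stay invisible to both gfapi
and vfs_glusterfs.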
> It also seems to me that this 'frame' is stored in thread-local
> storage and reused if found, but I do not see any code that checks
> whether the current identity still matches the process identity. It
> may be that I just haven't found it yet, but so far it looks to me
> like you get one shot to set the identity of the caller, and it is
> then assumed to be the same for all operations? That won't work with
> Samba. Can you shed some light here, please?
The thread-local storage is within the context of a synctask (an
internal glusterfs concept, which expires with the completion of a
GFAPI call). However, each Samba VFS/GFAPI call gets a new frame
created, with euid/egid/groups freshly recalculated. Call frames are
never reused across two GFAPI calls.
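To make the lifetime argument concrete, here is a rough sketch of the
per-call pattern described above. The names (call_frame, frame_create,
frame_destroy, do_one_gfapi_call) are invented for illustration and are
not the glusterfs synctask or vfs_glusterfs API:

```c
/* Hedged sketch of the per-call identity pattern; not glusterfs code. */
#include <stdlib.h>
#include <unistd.h>

struct call_frame {
        uid_t euid;     /* effective uid at the moment the call was made */
        gid_t egid;     /* effective gid at the moment the call was made */
        /* supplementary groups would be captured here as well */
};

static struct call_frame *frame_create(void)
{
        struct call_frame *frame = calloc(1, sizeof(*frame));

        if (frame == NULL)
                return NULL;

        /* Recalculated on every call: whatever identity smbd has
         * switched to for the connected user is what this frame carries. */
        frame->euid = geteuid();
        frame->egid = getegid();
        return frame;
}

static void frame_destroy(struct call_frame *frame)
{
        free(frame);
}

/* Each VFS/GFAPI entry point follows the same shape: create a frame,
 * perform the operation under that identity, destroy the frame. No frame
 * outlives a single call, so it cannot be reused with a stale identity. */
static int do_one_gfapi_call(void)
{
        struct call_frame *frame = frame_create();
        int ret = -1;

        if (frame == NULL)
                return -1;

        /* ... dispatch the actual filesystem operation here ... */
        ret = 0;

        frame_destroy(frame);
        return ret;
}
```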
Are there any further concerns?

Avati