From: Mateusz Korniak
Subject: [Gluster-devel] glusterfs 1.3.9 - Could not acquire lock "[Errno 38] Function not implemented"
Date: Wed, 28 May 2008 00:24:31 +0200
User-agent: PLD Linux KMail/1.9.9

Hi!
I am trying to use glusterfs 1.3.9-2 (linux/i686) as a general-purpose network
FS, but I have run into problems with bzr which I suspect are related to file
locking.
Is it possible to have flocks over a glusterfs mount?
Do I need the glusterfs fuse module for that?
Do I need to use the posix-locks translator?
http://www.gluster.org/docs/index.php/GlusterFS_Translators_v1.3#posix-locks

I think I need to mimic standard Linux filesystem behaviour, i.e. advisory locks.

Thanks a lot for this great piece of software, and thanks in advance for any
reply or hint.


address@hidden abbon2]# bzr info
bzr: ERROR: Could not acquire lock "[Errno 38] Function not implemented"
/usr/lib/python2.4/site-packages/bzrlib/lock.py:79: UserWarning: lock on <open file u'/usr/lib/python2.4/site-packages/abbon2/.bzr/checkout/dirstate', mode 'rb' at 0xf75047b8> not released
Exception exceptions.IOError: (38, 'Function not implemented') in <bound method _fcntl_ReadLock.__del__ of <bzrlib.lock._fcntl_ReadLock object at 0xf74f766c>> ignored
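
For reference, a quick way to check whether fcntl advisory locks work on the
mount at all, independent of bzr (a sketch for the Python 2.4 shown in the
traceback; the test file path is just an example under the FUSE mount point):

import fcntl

# Any path under the glusterfs mount will do; this one is assumed.
path = '/usr/lib/python2.4/site-packages/abbon2/lock-test'

f = open(path, 'w')
try:
    # Try a non-blocking exclusive lock; on a mount without lock support
    # this raises IOError with errno 38 (ENOSYS), like bzr's error above.
    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print 'fcntl lock acquired - locking works on this mount'
    fcntl.lockf(f, fcntl.LOCK_UN)   # release the lock again
finally:
    f.close()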
address@hidden abbon2]# mount
(...)
glusterfs on /usr/lib/python2.4/site-packages/abbon2 type fuse (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)


address@hidden abbon2]# cat /etc/fstab
(...)
/etc/glusterfs/gw_ri_abbon2.vol  /usr/lib/python2.4/site-packages/abbon2  glusterfs  defaults  0  0


address@hidden abbon2]# cat /etc/glusterfs/gw_ri_abbon2.vol
volume local_gw_ri_abbon2
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.20.1.26         # IP address of the remote brick
  option remote-subvolume gw_ri_abbon2        # name of the remote volume
end-volume
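
For comparison, the posix-locks translator is loaded on the server side, on top
of the brick that this client spec connects to. A minimal sketch of what the
exporting spec for gw_ri_abbon2 might look like with it enabled (the directory
path, brick name and auth line are assumptions, not taken from the real setup):

volume brick
  type storage/posix
  option directory /data/export/abbon2      # backing directory on 10.20.1.26 (assumed)
end-volume

volume gw_ri_abbon2
  type features/posix-locks                 # advisory (fcntl/flock) locking over the brick
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server          # serve clients over TCP/IP
  option auth.ip.gw_ri_abbon2.allow *       # allow all clients (tighten as needed)
  subvolumes gw_ri_abbon2
end-volume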

Regards,
-- 
Mateusz Korniak



