Re: [Gluster-devel] Occasional I/O error


From: Krishna Srinivas
Subject: Re: [Gluster-devel] Occasional I/O error
Date: Thu, 9 Oct 2008 18:19:06 +0530

Hi Snezhana,

What is happening is this: while one node was down, an entry
(dir/file/symlink) was deleted and another entry of a different type was
created in its place. Self-heal does not handle this condition yet, so an
I/O error is returned. We will take care of this in the coming release.
For now, please fix it by hand at the back end.
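
For illustration only (which server holds the stale copy differs case by
case, the file name is taken from your server log, and /mnt/wwwroot below
is just a placeholder for the actual mount point), the back-end fix is
roughly: on the server whose copy is the mismatched one, remove the entry
directly from the export directory, then trigger self-heal again through
the mount:

  # on the back-end export (/wwwroot), not on the mounted volume
  ls -l /wwwroot/xxx.xxxx.xx     # confirm which type this copy has
  rm -rf /wwwroot/xxx.xxxx.xx    # remove the stale entry
  ls /mnt/wwwroot                # re-list through the mount to self-heal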

Thanks
Krishna

On Thu, Oct 9, 2008 at 4:39 PM, Снежана Бекова <address@hidden> wrote:
>
>
>  In our directory the targets of the symlinks do exist. Yes, it happens
> during self-heal - twice, and only on server1.
> Shall I report this bug?
>
>  Thanks,
> Snezhana
>
>  Quoting Brent A Nelson <address@hidden>:
>
>> Just a "me-too", here; I saw this after/during a self-heal a few weeks
>> ago.  Under these circumstances, GlusterFS apparently tries to follow
>> the symlink; if the target doesn't exist (say, if the symlinks are in
>> an area meant for chroot), GlusterFS complains.
>>
>> Thanks,
>>
>> Brent
>>
>> On Tue, 7 Oct 2008, Снежана Бекова wrote:
>>
>>>
>>>
>>> Hello,
>>> I'm running glusterfs 1.4.0pre5 (glusterfs--mainline--3.0--patch-359)
>>> and fuse-2.7.3glfs10 on 2 machines with AFR with client-side replication.
>>> My test setup is: 2 glusterfs servers and 2 glusterfs clients, i.e. the
>>> two machines (server1 and server2) are each configured as both server
>>> and client. I was getting an occasional Input/output error when listing
>>> the glusterfs (AFR) directory on server1.
>>>
>>> The glusterfs client log messages are:
>>> 2008-10-06 12:55:02 E [afr_self_heal.c:123:afr_lds_setdents_cbk]
>>> afr-wwwroot: op_ret=-1 op_errno=17
>>> 2008-10-06 12:55:02 E [afr_self_heal.c:123:afr_lds_setdents_cbk]
>>> afr-wwwroot: op_ret=-1 op_errno=17
>>> 2008-10-06 12:55:02 E [fuse-bridge.c:398:fuse_entry_cbk]  glusterfs-fuse:
>>> 196: LOOKUP() / => -1 (Input/output error)
>>>
>>> The messages in the glusterfs server log are:
>>> 2008-10-06 12:54:01 C [posix.c:2756:ensure_file_type] wwwroot:  entry
>>> /wwwroot//xxx.xxxx.xx is a different type of file than expected
>>>
>>> In the AFR directory there are many symlinks, and the file
>>> /wwwroot//xxx.xxxx.xx is a symlink. To clear the problem I must stop the
>>> glusterfs client and server processes on server1, remove the symlinks,
>>> start them again and list the glusterfs directory.
>>>
>>> My config files on the two client and server machines are:
>>> cat /etc/glusterfs/glusterfs-server.vol
>>> volume wwwroot
>>>   type storage/posix
>>>   option directory /wwwroot
>>> end-volume
>>>
>>> volume server
>>>   type protocol/server
>>>   option transport-type tcp/server
>>>   subvolumes wwwroot
>>>   option auth.addr.wwwroot.allow 10.0.0.*,127.0.0.1
>>> end-volume
>>>
>>> cat /etc/glusterfs/glusterfs-client.vol
>>> volume client-server1-wwwroot
>>>   type protocol/client
>>>   option transport-type tcp/client
>>>   option remote-host 127.0.0.1
>>>   option remote-subvolume wwwroot
>>> end-volume
>>>
>>> volume client-server2-wwwroot
>>>   type protocol/client
>>>   option transport-type tcp/client
>>>   option remote-host 10.0.0.100
>>>   option remote-subvolume wwwroot
>>> end-volume
>>>
>>> volume afr-wwwroot
>>>   type cluster/afr
>>>   subvolumes client-server1-wwwroot client-server2-wwwroot
>>> end-volume
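>>>
>>> (For reference, and only as a rough illustration - /mnt/wwwroot below is
>>> a placeholder, not the real mount point - volfiles like these are started
>>> and mounted on each machine along the lines of:
>>>
>>>   glusterfsd -f /etc/glusterfs/glusterfs-server.vol
>>>   glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/wwwroot
>>> )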
>>>
>>> I think the problem does not exist in version 1.3.12.
>>> Maybe it is a bug, or can you tell me what is wrong?
>>>
>>> Thanks,
>>> Snezhana
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
