
Re: [Gluster-devel] Non-responding issue with GlusterFS 1.3 and Unify


From: M. Sébastien LELIEVRE
Subject: Re: [Gluster-devel] Non-responding issue with GlusterFS 1.3 and Unify
Date: Fri, 22 May 2009 18:26:51 +0200
User-agent: Thunderbird 2.0.0.21 (Windows/20090302)

Pavan Vilas Sondur wrote:
> M. Sébastien LELIEVRE wrote:
> 
>> Vijay Bellur wrote:
>>   
>>> M. Sébastien LELIEVRE wrote:
>>>     
>>>> Is this really too out of date?
>>>>
>>>> Anyone ?
>>>>   
>>>>       
>>> Sebastien,
>>>
>>> Would it be possible to send across the GlusterFS logfile?
>>>
>>> Regards,
>>> Vijay
>>>
>>>
>>>
>>>     
>> Sure !
>>
>> Here it is :
>>
>> cat /var/log/glusterfs/glusterfs.log
>>
>> 2009-05-22 09:46:43 E [protocol.c:271:gf_block_unserialize_transport]
>> remoteabfstor-ns: EOF from peer (192.168.254.14:6996)
>> 2009-05-22 09:46:43 W [client-protocol.c:4777:client_protocol_cleanup]
>> remoteabfstor-ns: cleaning up state in transport object 0x8056510
>> 2009-05-22 09:46:43 E [protocol.c:271:gf_block_unserialize_transport]
>> remoteabfstor02: EOF from peer (192.168.254.14:6996)
>> 2009-05-22 09:46:43 W [client-protocol.c:4777:client_protocol_cleanup]
>> remoteabfstor02: cleaning up state in transport object 0x8054d58
>>
>> Best Regards,
>>   
> Hi Sébastien,
> It would be very helpful if you could turn on debugging with the
> --debug option and provide us with the logfile.
> Also, if possible, could you upgrade to version 2.0.1 with the same
> configuration and verify whether the same issue is still
> encountered.
> 
> Regards,
> Pavan
> 
> 

Greetings Pavan,

Can you remind me how to get the latest branch from GlusterFS CVS?
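
In the meantime, for the debug run: my plan is to remount the client with
the --debug option along these lines (the volfile path and mountpoint below
are only placeholders for my local setup, not the exact values):

  glusterfs --debug -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs

and capture everything it prints until the mount stops responding, then
send you that output together with /var/log/glusterfs/glusterfs.log.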

Best Regards,
-- 
M. Sébastien LELIÈVRE

Systems & Database Engineer

AZ Network
40, rue Ampère
61000 ALENÇON (ORNE)
FRANCE

Tel.: + 33 (0) 233 320 616
Mobile: + 33 (0) 673 457 243

Extension: 120
E-mail: address@hidden




