gluster-devel

Re: [Gluster-devel] ls: .: no such file or directory


From: Daniel van Ham Colchete
Subject: Re: [Gluster-devel] ls: .: no such file or directory
Date: Wed, 11 Jul 2007 17:29:47 -0300

On 7/11/07, DeeDee Park <address@hidden> wrote:

If all the bricks are not up at the time the GlusterFS client starts,
I get the above error message. If all bricks are up, things are fine.
If a brick goes down after the client is up, things are also fine -- it only
happens at startup.
I'm still seeing this in the latest patch-299.


I was able to reproduce the problem here.

I get the error message if, and only if, the namespace cache brick is
offline. I get it even when the directory is full of files. If I try
to open() a file while the namespace cache brick is down, I get the
"Transport endpoint is not connected" error.

Also with patch-299.
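
For reference, a rough sketch of how I trigger it (the spec file path and
mount point below are just placeholders for my local setup, and the mount
command is the usual spec-file invocation for this release):

       # Stop only the glusterfsd serving the namespace brick (brick-ns on
       # port 6999); leave brick1 and brick2 running. Then mount the client:
       glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

       cd /mnt/glusterfs
       ls .           # -> "ls: .: no such file or directory"
       cat somefile   # -> open() fails with "Transport endpoint is not connected"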

Client spec file:

volume client-1
       type protocol/client
       option transport-type tcp/client
       option remote-host 127.0.0.1
       option remote-port 6991
       option remote-subvolume brick1
end-volume

volume client-2
       type protocol/client
       option transport-type tcp/client
       option remote-host 127.0.0.1
       option remote-port 6992
       option remote-subvolume brick2
end-volume

volume client-ns
       type protocol/client
       option transport-type tcp/client
       option remote-host 127.0.0.1
       option remote-port 6999
       option remote-subvolume brick-ns
end-volume

volume afr
       type cluster/afr
       subvolumes client-1 client-2
       option replicate *:2
       option self-heal on
       option debug off
end-volume

volume unify
       type cluster/unify
       subvolumes afr
       option namespace client-ns
       option scheduler rr
       option rr.limits.min-free-disk 5
end-volume

volume writebehind
       type performance/write-behind
       option aggregate-size 131072
       subvolumes unify
end-volume
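
For completeness, this is roughly what the matching server spec for the
namespace brick looks like on my machine (the export directory and the auth
rule are placeholders; only the listen port and the brick-ns volume name have
to match the client-ns volume above):

volume brick-ns
       type storage/posix
       option directory /data/export-ns
end-volume

volume server
       type protocol/server
       option transport-type tcp/server
       option listen-port 6999
       option auth.ip.brick-ns.allow *
       subvolumes brick-ns
end-volume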

Best regards,
Daniel Colchete

