gluster-devel

Re: [Gluster-devel] error during write for stripe & unify translator


From: Amar S. Tumballi
Subject: Re: [Gluster-devel] error during write for stripe & unify translator
Date: Sat, 3 Mar 2007 16:58:28 -0800
User-agent: Mutt/1.5.9i

Mic,
 Thanks for letting us know about the issue. We found that the last-minute
changes I made in the rr scheduler were sending the glusterfs client into an
infinite loop :( That error is fixed now (check the same ftp dir; there is a
pre2.2 tarball). Also, during our testing we found that the stripe translator
was incomplete. Even that's fixed now. You can try glusterfs with the same
config file now.

Regards,
Amar
(bulde on #gluster)

On Sat, Mar 03, 2007 at 06:51:23PM -0500, address@hidden wrote:
> Before I ask for help let me just say... wow! What an amazing product!
> This has the potential to shake the SAN market profoundly. I was  
> disappointed in Lustre because it made itself sound like it didn't  
> require a SAN, but you folks are straightforward and to the point. Kudos!
> 
> Now on to the problem:
> The glusterfs client process spikes to 100% CPU usage and stops  
> responding (I have to kill -9 it) whenever I add a stripe or unify  
> translator to the client volume spec.
> 
> There isn't anything in the client log, but the server logs show:
> [Mar 03 19:11:01] [ERROR/common-utils.c:52/full_rw()]  
> libglusterfs:full_rw: 0 bytes r/w instead of 113
> 
> This only occurs on file writes. I can touch and read files just fine.
> None of these problems appear when I just mount a remote volume  
> without the translator.
> 
> I'm using the latest glusterfs-1.3.0-pre2 code on CentOS with  
> fuse-2.6.3 (I had to apply a patch before the fuse module would load).
> 
> 
> My client volspec is below:
> 
> volume client0
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.201
>   option remote-port 6996
>   option remote-subvolume testgl
> end-volume
> 
> volume client1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.202
>   option remote-port 6996
>   option remote-subvolume testgl
> end-volume
> 
> volume stripe
>    type cluster/stripe
>    subvolumes client1 client0
>    option stripe-size 131072 # 128k
> end-volume
> 
> #volume bricks
> #  type cluster/unify
> #  subvolumes client1 client0
> #  option scheduler rr
> #end-volume
> 
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
> 
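[For anyone reproducing this setup: the client spec above points at a remote
subvolume named testgl on port 6996, which implies a server-side volfile
roughly like the following. This is a sketch only; the export directory and
the auth option are assumptions, not taken from the original post.]

volume testgl
  type storage/posix
  option directory /export/testgl    # assumed export path
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 6996
  subvolumes testgl
  option auth.ip.testgl.allow *      # auth option is an assumption
end-volume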



