
Re: [Gluster-devel] Problems with self-heal


From: E-Comm Factory
Subject: Re: [Gluster-devel] Problems with self-heal
Date: Tue, 19 Feb 2008 17:26:39 +0100

I also tested glusterfs-mainline-2.5 PATCH 674 with the same results.


On Tue, Feb 19, 2008 at 5:23 PM, Toni Valverde <address@hidden>
wrote:

> I also tested glusterfs-mainline-2.5 PATCH 674 with the same
> results.
>
>
> On Tue, Feb 19, 2008 at 4:05 PM, E-Comm Factory <
> address@hidden> wrote:
>
> > Thanks, Amar.
> >
> > - glusterfs--mainline--2.5 PATCH 665
> > - fuse-2.7.2glfs8
> >
> >
> >
> > On Mon, Feb 18, 2008 at 8:48 PM, Amar S. Tumballi <address@hidden>
> > wrote:
> >
> > > Can you please let us know which versions of FUSE and GlusterFS you are
> > > running these tests on?
> > >
> > > -amar
> > >
> > > On Feb 18, 2008 11:03 PM, E-Comm Factory <address@hidden>
> > > wrote:
> > >
> > > > Hello,
> > > >
> > > > I have 2 boxes, each with 4 unified disks (so I have 2 volumes). Then, on
> > > > the client side, I have set up AFR over these 2 virtual volumes.
> > > >
> > > > For testing purposes I deleted one file on the second AFR subvolume and
> > > > then tried to self-heal the global AFR volume, but it crashes with this
> > > > error:
> > > >
> > > > [afr.c:2754:afr_open] disk: self heal failed, returning EIO
> > > > [fuse-bridge.c:675:fuse_fd_cbk] glusterfs-fuse: 98: /fichero4.img =>
> > > > -1 (5)
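> > > >
> > > > (For clarity, a rough sketch of the reproduction steps described above;
> > > > the mount point and brick path are assumptions, since only the file name
> > > > appears in the log:)
> > > >
> > > > # on the second server, remove one copy directly from a backend brick
> > > > rm /mnt/disk2/fichero4.img
> > > > # on the client, re-open the file through the GlusterFS mount so that
> > > > # AFR notices the missing copy and attempts self-heal
> > > > head -c1 /mnt/glusterfs/fichero4.img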
> > > >
> > > > An strace attached to the PID of the glusterfs server that serves the
> > > > first AFR subvolume also shows a crash during self-heal:
> > > >
> > > > epoll_wait(6, {{EPOLLIN, {u32=6304624, u64=6304624}}}, 2,
> > > > 4294967295) = 1
> > > > read(4, out of memory
> > > > 0x7fff9a3b1d90, 113)            = 113
> > > > read(4, Segmentation fault
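> > > >
> > > > (The trace above was presumably captured by attaching to the running
> > > > server process, e.g.:)
> > > >
> > > > # attach strace to the running GlusterFS server daemon
> > > > strace -p $(pidof glusterfsd)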
> > > >
> > > > My server config file (same on both server boxes):
> > > >
> > > > # datastores
> > > > volume disk1
> > > >  type storage/posix
> > > >  option directory /mnt/disk1
> > > > end-volume
> > > > volume disk2
> > > >  type storage/posix
> > > >  option directory /mnt/disk2
> > > > end-volume
> > > > volume disk3
> > > >  type storage/posix
> > > >  option directory /mnt/disk3
> > > > end-volume
> > > > volume disk4
> > > >  type storage/posix
> > > >  option directory /mnt/disk4
> > > > end-volume
> > > >
> > > > # namespaces
> > > > volume disk1-ns
> > > >  type storage/posix
> > > >  option directory /mnt/disk1-ns
> > > > end-volume
> > > > volume disk2-ns
> > > >  type storage/posix
> > > >  option directory /mnt/disk2-ns
> > > > end-volume
> > > > #volume disk3-ns
> > > > #  type storage/posix
> > > > #  option directory /mnt/disk3-ns
> > > > #end-volume
> > > > #volume disk4-ns
> > > > #  type storage/posix
> > > > #  option directory /mnt/disk4-ns
> > > > #end-volume
> > > >
> > > > # AFR of the namespaces
> > > > volume disk-ns-afr
> > > >  type cluster/afr
> > > >  subvolumes disk1-ns disk2-ns
> > > >  option scheduler random
> > > > end-volume
> > > >
> > > > # unify of the datastores
> > > > volume disk-unify
> > > >  type cluster/unify
> > > >  subvolumes disk1 disk2 disk3 disk4
> > > >  option namespace disk-ns-afr
> > > >  option scheduler rr
> > > > end-volume
> > > >
> > > > # performance for the disk
> > > > volume disk-fs11
> > > >  type performance/io-threads
> > > >  option thread-count 8
> > > >  option cache-size 64MB
> > > >  subvolumes disk-unify
> > > > end-volume
> > > >
> > > > # allow access from any client
> > > > volume server
> > > >  type protocol/server
> > > >  option transport-type tcp/server
> > > >  subvolumes disk-fs11
> > > >  option auth.ip.disk-fs11.allow *
> > > > end-volume
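> > > >
> > > > (With this spec file, the server daemon would presumably be started with
> > > > something like the following; the spec file path is an assumption:)
> > > >
> > > > # start the GlusterFS server with the spec file above
> > > > glusterfsd -f /etc/glusterfs/glusterfs-server.vol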
> > > >
> > > > My client config file:
> > > >
> > > > volume disk-fs11
> > > >  type protocol/client
> > > >  option transport-type tcp/client
> > > >  option remote-host 192.168.1.34
> > > >  option remote-subvolume disk-fs11
> > > > end-volume
> > > >
> > > > volume disk-fs12
> > > >  type protocol/client
> > > >  option transport-type tcp/client
> > > >  option remote-host 192.168.1.35
> > > >  option remote-subvolume disk-fs12
> > > > end-volume
> > > >
> > > > volume disk
> > > >  type cluster/afr
> > > >  subvolumes disk-fs11 disk-fs12
> > > > end-volume
> > > >
> > > > volume trace
> > > >  type debug/trace
> > > >  subvolumes disk
> > > > #  option includes open,close,create,readdir,opendir,closedir
> > > > #  option excludes lookup,read,write
> > > > end-volume
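> > > >
> > > > (And the client would presumably be mounted with something like the
> > > > following; the spec file path and mount point are assumptions. Since the
> > > > trace volume sits on top of disk, it is the topmost volume of this graph:)
> > > >
> > > > # mount the GlusterFS client using the spec file above
> > > > glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs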
> > > >
> > > > Could anyone help me?
> > > >
> > > > Thanks in advance.
> > > >
> > > > --
> > > > ecomm
> > > > address@hidden
> > > >  _______________________________________________
> > > > Gluster-devel mailing list
> > > > address@hidden
> > > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > > >
> > >
> > >
> > >
> > > --
> > > Amar Tumballi
> > > Gluster/GlusterFS Hacker
> > > [bulde on #gluster/irc.gnu.org]
> > > http://www.zresearch.com - Commoditizing Supercomputing and
> > > Superstorage!
> >
> >
> >
> >
> > --
> > Toni Valverde
> > address@hidden
> >
> > Electronic Commerce Factory S.L.
> > C/Martin de los Heros, 59bis - 1º nº 8
> > 28008 - Madrid
>
>
>
>
> --
> Toni Valverde
> address@hidden
>
> Electronic Commerce Factory S.L.
> C/Martin de los Heros, 59bis - 1º nº 8
> 28008 - Madrid
>



-- 
Toni Valverde
address@hidden

Electronic Commerce Factory S.L.
C/Martin de los Heros, 59bis - 1º nº 8
28008 - Madrid

