Re: [Gluster-devel] Trying to run a mailcluster


From: Anand Avati
Subject: Re: [Gluster-devel] Trying to run a mailcluster
Date: Fri, 15 Feb 2008 20:39:45 +0530

Can you also paste the log entries that follow the "bailing transport" lines?
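
For what it is worth, the call_bail lines mean that a call sent to that
brick never got a reply within the transport timeout, so everything
queued behind it hangs, which matches the stall you describe. The
op_errno=77 in the afr_lk_cbk line should be EBADFD ("file descriptor
in bad state") if it follows the usual Linux errno numbering.

While we look at the logs, one thing you could try, assuming your tla
build accepts transport-timeout on protocol/client (the value below is
only a guess), is giving calls more time before they are bailed, on
every protocol/client volume on both machines, e.g.:

volume pop2-mail
        type protocol/client
        option transport-type tcp/client
        option remote-host 62.59.252.42
        option remote-subvolume pop2-mail-ds
        option transport-timeout 120   # seconds to wait for a reply before bailing (guessed value)
end-volume

That will not fix whatever makes the lock call hang, but it should show
whether the calls eventually complete or are truly stuck. One more guess
from the configs you pasted: the failing call is a lock, and on this box
features/posix-locks (pop1-mail) sits above pop1-mail-ds, while the
remote side is reached as pop2-mail-ds, so it is worth double-checking
that the volume exported on the other machine also passes through
features/posix-locks.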

avati

2008/2/15, Anand Avati <address@hidden>:
>
> Guido,
>  Do you use the same machines as both server and client? Is your mount point
> directly under / ?
>
> avati
>
> 2008/2/15, Guido Smit <address@hidden>:
> >
> > Hi all,
> >
> > I've been trying to set up a mail cluster for a while now, using glusterfs
> > as the filesystem. I've installed fuse 2.7.2glfs8 and glusterfs tla662 on
> > CentOS 5.
> >
> > For now it runs on 3 machines: 1 configured as a dedicated server, 1 as a
> > client only, and 1 as both server and client.
> >
> > Normal file operations work fine; speed could be a little better, but
> > overall I'm happy with it. When I start Dovecot, the following errors show
> > up (the 1st line repeats for every client that logs in):
> >
> > 2008-02-15 14:27:43 E [afr.c:3730:afr_lk_cbk] afr:
> > (path=/comlog.nl/xxxxxxxxx/Maildir/dovecot.index.log child=pop2-mail)
> > op_ret=-1 op_errno=77
> > 2008-02-15 14:30:52 C [client-protocol.c:222:call_bail] pop2-mail-ns:
> > bailing transport
> > 2008-02-15 14:30:52 C [client-protocol.c:222:call_bail] pop2-mail:
> > bailing transport
> > 2008-02-15 14:32:40 C [client-protocol.c:222:call_bail] pop2-mail-ns:
> > bailing transport
> >
> > Everything stalls: no ls, no df, nothing. I have to kill all Dovecot
> > processes and then kill glusterfs and glusterfsd.
> > I've tried with an empty namespace on both servers, but it didn't
> > resolve this.
> >
> > I really need some advice here.
> >
> > My configs:
> > glusterfs-server.vol
> >
> > volume pop1-mail-ns
> >         type storage/posix
> >         option directory /home/namespace
> > end-volume
> >
> > volume pop1-mail-ds
> >         type storage/posix
> >         option directory /home/export
> > end-volume
> >
> > volume pop1-mail
> >         type features/posix-locks
> >         option mandatory on                     # enables mandatory locking on all files
> >         subvolumes pop1-mail-ds
> > end-volume
> >
> > volume pop2-mail
> >         type protocol/client
> >         option transport-type tcp/client
> >         option remote-host 62.59.252.42
> >         option remote-subvolume pop2-mail-ds
> > end-volume
> >
> > volume pop2-mail-ns
> >         type protocol/client
> >         option transport-type tcp/client
> >         option remote-host 62.59.252.42
> >         option remote-subvolume pop2-mail-ns
> > end-volume
> >
> > volume afr
> >         type cluster/afr
> >         subvolumes pop1-mail pop2-mail
> > end-volume
> >
> > volume afr-ns
> >        type cluster/afr
> >        subvolumes pop1-mail-ns pop2-mail-ns
> > end-volume
> >
> > volume unify
> >         type cluster/unify
> >         option namespace afr-ns
> >         option scheduler rr
> >         subvolumes afr
> > end-volume
> >
> > volume mail-ds-readahead
> >         type performance/read-ahead
> >         option page-size 128kB                  # 256KB is the default option
> >         option page-count 4                     # 2 is default option
> >         option force-atime-update off           # default is off
> >         subvolumes unify
> > end-volume
> >
> > volume mail-ds-writebehind
> >         type performance/write-behind
> >         option aggregate-size 1MB               # default is 0bytes
> >         option flush-behind on                  # default is 'off'
> >         subvolumes mail-ds-readahead
> > end-volume
> >
> > volume mail-ds
> >         type performance/io-threads
> >         option thread-count 4                   # default is 1
> >         option cache-size 32MB                  #64MB
> >         subvolumes mail-ds-writebehind
> > end-volume
> >
> > volume server
> >         type protocol/server
> >         option transport-type tcp/server
> >         subvolumes pop1-mail-ds pop1-mail-ns mail-ds
> >         option auth.ip.pop1-mail-ds.allow 62.59.252.*,127.0.0.1
> >         option auth.ip.pop1-mail-ns.allow 62.59.252.*,127.0.0.1
> >         option auth.ip.mail-ds.allow 62.59.252.*,127.0.0.1
> > end-volume
> >
> >
> > glusterfs-client.vol:
> >
> > volume mailspool
> >         type protocol/client
> >         option transport-type tcp/client
> >         option remote-host 62.59.252.41
> >         option remote-subvolume mail-ds
> > end-volume
> >
> > volume readahead
> >         type performance/read-ahead
> >         option page-size 128kB
> >         option page-count 16
> >         option force-atime-update off # default is off
> >         subvolumes mailspool
> > end-volume
> >
> > volume writeback
> >         type performance/write-behind
> >         option aggregate-size 1MB
> >         option flush-behind on      # default is 'off'
> >         subvolumes readahead
> > end-volume
> >
> > volume iothreads
> >         type performance/io-threads
> >         option thread-count 4  # default is 1
> >         option cache-size 32MB #64MB
> >         subvolumes writeback
> > end-volume
> >
> > volume io-cache
> >         type performance/io-cache
> >         option cache-size 128MB             # default is 32MB
> >         option page-size 1MB               #128KB is default option
> >         option priority *:0 # default is '*:0'
> >         option force-revalidate-timeout 2  # default is 1
> >         subvolumes iothreads
> > end-volume
> >
> >
> >
> >
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > address@hidden
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
>
>




-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.

