[Gluster-devel] Re: glusterfs 2.0.4 and 2.0.6 are not working very well with mysql


From: Pavan Vilas Sondur
Subject: [Gluster-devel] Re: glusterfs 2.0.4 and 2.0.6 are not working very well with mysql
Date: Mon, 7 Sep 2009 14:45:54 +0530
User-agent: Mutt/1.5.18 (2008-05-17)

Hi Patrick,
We're looking into it. A bug has been filed to track this issue:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=246
You can track the progress of this issue there.

Pavan

On 07/09/09 09:19 +0200, Patrick Matthäi wrote:
> Any update on this issue?
> 
> 
> -----Original Message-----
> From: Patrick Matthäi 
> Sent: Tuesday, September 01, 2009 9:33 AM
> To: address@hidden
> Subject: RE: glusterfs 2.0.4 and 2.0.6 are not working very well with mysql
> 
> Hello,
> 
> yes sure.
> 
> Mysql version: 5.0.51a-24+lenny1
> Glusterfs version: 2.0.6
> Linux: 2.6.26-2
> Fuse: 2.7.4-1.1
> 
> 
> This is my glusterfsd.vol:
> 
> volume posix
>   type storage/posix
>   option directory /srv/export-ha
> end-volume
> 
> volume locks
>   type features/locks
>   subvolumes posix
> end-volume
> 
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp
>   option auth.addr.brick.allow 192.168.222.*
>   subvolumes brick
> end-volume
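> 
> For completeness, the server loads this file at startup; roughly like
> this (the volfile path is from my setup and may differ elsewhere):
> 
>   glusterfsd -f /etc/glusterfs/glusterfsd.vol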
> 
> 
> And here is my glusterfs.vol:
> 
> volume ha1
>         type protocol/client
>         option transport-type tcp
>         option remote-host fs-1
>         option remote-subvolume brick
> end-volume
> 
> volume ha2
>         type protocol/client
>         option transport-type tcp
>         option remote-host fs-2
>         option remote-subvolume brick
> end-volume
> 
> volume replicate
>         type cluster/replicate
>         subvolumes ha1 ha2
> end-volume
> 
> volume writebehind
>         type performance/write-behind
>         option window-size 1MB
>         subvolumes replicate
> end-volume
> 
> volume cache
>         type performance/io-cache
>         option cache-size 512MB
>         subvolumes writebehind
> end-volume
> 
> 
> Logfiles:
> 
> When starting mysql now on cluster-1 (no other instance is running 
> anywhere, and I also tried remounting the volume):
> 
> [2009-09-01 09:27:53] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 100: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:54] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 101: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:54] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 102: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:55] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 103: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:55] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 104: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:56] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 105: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:56] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 106: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:57] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 107: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:57] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 108: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:58] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 109: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:58] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 110: ERR => -1 (Resource temporarily unavailable)
> [2009-09-01 09:27:59] W [fuse-bridge.c:2285:fuse_setlk_cbk] glusterfs-fuse: 111: ERR => -1 (Resource temporarily unavailable)
> 
> MySQL gives:
> InnoDB: Check that you do not already have another mysqld process
> InnoDB: using the same InnoDB data or log files.
> InnoDB: Unable to lock ./ibdata1, error: 11
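> 
> Error 11 is EAGAIN, i.e. the fcntl() lock request on the mount is being
> refused. As far as I know --skip-external-locking does not matter here,
> since InnoDB locks its ibdata files itself. A quick way to test POSIX
> locks on the mount without MySQL (just a sketch; the test file path and
> the use of python are my own choices):
> 
>   python -c 'import fcntl; f = open("/srv/mysql/locktest", "w"); fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB); print "lock acquired"'
> 
> With no other process holding a lock on that file this should print
> "lock acquired"; if it fails with IOError errno 11, it reproduces the
> problem without MySQL being involved.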
> 
> 
> -----Original Message-----
> From: Pavan Vilas Sondur [mailto:address@hidden 
> Sent: Tuesday, September 01, 2009 9:19 AM
> To: Patrick Matthäi
> Cc: address@hidden
> Subject: Re: glusterfs 2.0.4 and 2.0.6 are not working very well with mysql
> 
> Hi Patrick,
> Can you give us the volfiles you are using, the log files for all the 
> nodes, and any other information regarding mysql, such as the version, 
> to help us debug the problem?
> 
> Pavan
> 
> On 31/08/09 18:05 +0200, Patrick Matthäi wrote:
> > Hello,
> > 
> > I tried to setup a replicate volume for mysql.
> > 
> > Setup:
> > 
> > Fs-1: first storage server
> > Fs-2: second one
> > 
> > Cluster-1 and cluster-2: the same setup, but they mount the volumes from 
> > the fs- servers, and MySQL is supposed to run on both machines.
> > 
> > Every server is using Debian Lenny amd64 with the following versions:
> > 
> > ii  fuse-utils          2.7.4-1.1   Filesystem in USErspace (utilities)
> > ii  glusterfs-client    2.0.6-1     clustered file-system
> > ii  glusterfs-examples  2.0.6-1     example files for the glusterfs server and client
> > ii  glusterfs-server    2.0.6-1     clustered file-system
> > ii  libfuse2            2.7.4-1.1   Filesystem in USErspace library
> > ii  libglusterfs0       2.0.6-1     GlusterFS libraries and translator modules
> > 
> > /var/lib/mysql points to the glusterfs mounted /srv/mysql.
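> > (E.g. set up as: ln -s /srv/mysql /var/lib/mysql -- the symlink is just
> > an example, a bind mount would hit the same locking path.)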
> > 
> > Starting mysql on cluster-1 => everything is fine.
> > Starting mysql on cluster-2 ends up with:
> > 
> > InnoDB: Unable to lock ./ibdata1, error: 11
> > InnoDB: Check that you do not already have another mysqld process
> > InnoDB: using the same InnoDB data or log files.
> > InnoDB: Unable to lock ./ibdata1, error: 11
> > InnoDB: Check that you do not already have another mysqld process
> > InnoDB: using the same InnoDB data or log files.
> > InnoDB: Unable to lock ./ibdata1, error: 11
> > InnoDB: Check that you do not already have another mysqld process
> > InnoDB: using the same InnoDB data or log files.
> > 
> > Well, after some minutes the startup script aborts:
> > 
> > cluster-2:~# /etc/init.d/mysql start
> > Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
> > 
> > cluster-2:~# ps ax|grep mysql
> >  2729 pts/2    S      0:00 /bin/sh /usr/bin/mysqld_safe
> >  2769 pts/2    Sl     0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
> >  2771 pts/2    S      0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
> >  2944 pts/2    S+     0:00 grep mysql
> > cluster-2:~# mysql -p
> > Enter password:
> > ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
> > 
> > That is simply because mysqld never created the socket (due to the 
> > previous startup failure).
> > 
> > I read the thread "mysql on glusterfs", where the same error occurred 
> > with a much older version and was fixed, but now it is broken again.
> > 
> > I hope you can help ☺
> > 
> > 
> > -------
> > 
> > Best regards
> > 
> > Patrick Matthäi
> > 
> > LPI-certified Linux Administrator
> > GNU/Linux Debian Developer
> > Engineering
> > 