
Re: [Gluster-devel] [Gluster-users] glusterfs-3.4.2qa4 BUG 987555 not fixed?


From: BGM
Subject: Re: [Gluster-devel] [Gluster-users] glusterfs-3.4.2qa4 BUG 987555 not fixed?
Date: Thu, 19 Dec 2013 19:00:58 +0100

Thanks Niels,
will try that tomorrow,
and let you know of course.
Bernhard


On 19.12.2013, at 17:34, Niels de Vos <address@hidden> wrote:

> On Thu, Dec 19, 2013 at 03:44:26PM +0000, Bernhard Glomm wrote:
>> 
>> hi all
>> 
>> I'm testing
>> 
>> SRC: 
>> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.2qa4.tar.gz
>> 
>> on Ubuntu 13.04.
>> 
>> Previously I had gluster 3.2.7 installed (the one from the Ubuntu 13.04 
>> repository).
>> I use a two-node gluster mirror to host the image files of my VMs.
>> With gluster 3.2.7 everything worked fine.
>> 
>> I upgraded to gluster 3.4.2qa4 (see above).
>> The VMs still worked fine, and bonnie++ tests from inside the VM instances 
>> showed results similar to before, but then I hit bug 987555 again.
> 
> The change for that bug introduces an option to the 
> /etc/glusterfs/gluster.vol configuration file. You can now add the 
> following line to that file:
> 
>  volume management
>      ...
>      option base-port 50152
>      ...
>  end-volume
> 
> By default this option is commented out and the default port (49152) is used. 
> In the line above, 50152 is just an example; you can pick any port you like.  
> GlusterFS tries to detect whether a port is already in use and, if it is, tries 
> the next one (and so on).
> 
> Note that QEMU has a fix for this as well. With the right version 
> of QEMU, there should be no need to change this option from the default.
> Details on the fixes for QEMU are referenced in Bug 1019053.
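> 
> (Depending on the libvirt version there may also be a knob on the libvirt side: 
> newer libvirt releases expose migration_port_min / migration_port_max in 
> /etc/libvirt/qemu.conf, which would let you move the migration port range away 
> from 49152. Whether the libvirt shipped with Ubuntu 13.04 already supports these 
> settings would need to be checked; the values below are only an illustration.)
> 
>  # /etc/libvirt/qemu.conf (illustrative values)
>  migration_port_min = 50152
>  migration_port_max = 50263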
> 
> Can you let us know if setting this option and restarting all the 
> glusterfsd processes helps?
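> 
> For completeness, one way to do that (service and volume names below are 
> placeholders, adjust them for your setup) would be roughly:
> 
>  # restart the management daemon so it picks up the new base-port option
>  service glusterfs-server restart
> 
>  # restart the brick (glusterfsd) processes, e.g. by stopping and starting
>  # the volume (note: this makes the volume briefly unavailable)
>  gluster volume stop <volname>
>  gluster volume start <volname>
> 
>  # confirm that the bricks now listen at 50152 and up
>  netstat -tulpn | egrep glusterfsd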
> 
> Thanks,
> Niels
> 
>> 
>> address@hidden/1]:~ # time virsh migrate --verbose --live --unsafe --p2p 
>> --domain atom01 --desturi qemu+ssh://192.168.242.93/system
>> error: Unable to read from monitor: Connection reset by peer
>> 
>> 
>> address@hidden/0]:~ # netstat -tulpn|egrep 49152
>> tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      3924/glusterfsd
>> 
>> or
>> 
>> address@hidden/0]:~ # netstat -tulpn|egrep gluster
>> tcp        0      0 0.0.0.0:49155           0.0.0.0:*               LISTEN      4031/glusterfsd
>> tcp        0      0 0.0.0.0:38468           0.0.0.0:*               LISTEN      5418/glusterfs
>> tcp        0      0 0.0.0.0:49156           0.0.0.0:*               LISTEN      4067/glusterfsd
>> tcp        0      0 0.0.0.0:933             0.0.0.0:*               LISTEN      5418/glusterfs
>> tcp        0      0 0.0.0.0:38469           0.0.0.0:*               LISTEN      5418/glusterfs
>> tcp        0      0 0.0.0.0:49157           0.0.0.0:*               LISTEN      4109/glusterfsd
>> tcp        0      0 0.0.0.0:49158           0.0.0.0:*               LISTEN      4155/glusterfsd
>> tcp        0      0 0.0.0.0:49159           0.0.0.0:*               LISTEN      4197/glusterfsd
>> tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      2682/glusterd
>> tcp        0      0 0.0.0.0:49160           0.0.0.0:*               LISTEN      4237/glusterfsd
>> tcp        0      0 0.0.0.0:49161           0.0.0.0:*               LISTEN      4280/glusterfsd
>> tcp        0      0 0.0.0.0:49162           0.0.0.0:*               LISTEN      4319/glusterfsd
>> tcp        0      0 0.0.0.0:49163           0.0.0.0:*               LISTEN      4360/glusterfsd
>> tcp        0      0 0.0.0.0:49165           0.0.0.0:*               LISTEN      5408/glusterfsd
>> tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      3924/glusterfsd
>> tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      5418/glusterfs
>> tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      5418/glusterfs
>> tcp        0      0 0.0.0.0:49153           0.0.0.0:*               LISTEN      3959/glusterfsd
>> tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      5418/glusterfs
>> tcp        0      0 0.0.0.0:49154           0.0.0.0:*               LISTEN      3996/glusterfsd
>> udp        0      0 0.0.0.0:931             0.0.0.0:*                           5418/glusterfs
>> 
>> 
>> Is there a compile option work_together_with_libvirt ;-)
>> Can anyone confirm this or does anyone have a workaround?
>> 
>> best
>> 
>> Bernhard
>> 
>> P.S.: As I learned in the earlier discussion, libvirt counts up the ports when 
>> it finds that the one it needs is already blocked, so after 12 migration 
>> attempts the VM finally WAS migrated.
>> IMHO there should/could be an option to configure the start port/port range, 
>> and yes, that could/should ALSO be done for libvirt; the fact is that gluster 
>> 3.2.7 works (for me) and 3.4.2 doesn't :-((
>> I really would like to try the gfapi, but not at the price of no live migration.
>> 
>> -- 
>> 
>> Bernhard Glomm
>> IT Administration
>> 
>> Phone: +49 (30) 86880 134
>> Fax:   +49 (30) 86880 100
>> Skype: bernhard.glomm.ecologic
>> 
>> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
>> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
>> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> 
>> _______________________________________________
>> Gluster-devel mailing list
>> address@hidden
>> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> 
> 
> -- 
> Niels de Vos
> Sr. Software Maintenance Engineer
> Support Engineering Group
> Red Hat Global Support Services


