
From: Deepak C Shetty
Subject: Re: [Gluster-devel] [vdsm] How to figure out the transport type of Gluster volume from VDSM host (which is not a gluster peer) ?
Date: Mon, 06 May 2013 20:56:14 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

On 05/06/2013 08:47 PM, Shu Ming wrote:
2013-5-6 22:33, Deepak C Shetty:
Hi Lists,
I am looking at options to figure out the transport type of a gluster volume (given volfileserver:volname) from a host that is *not* part of the gluster volume (i.e. not a gluster peer).

The context here is GlusterFS as a storage domain in oVirt/VDSM, which is currently available in upstream oVirt. This feature exploits the QEMU-GlusterFS native integration, where the VM disk is specified using a gluster+<transport>://... protocol URI.

For example, if the transport is TCP, the URI looks like gluster+tcp://...; for RDMA it is gluster+rdma://...
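For illustration, assembling such a URI from its parts can be sketched in Python (a minimal sketch; the helper name and arguments are hypothetical, not actual VDSM code):

```python
def gluster_qemu_uri(volfileserver, volname, image, transport="tcp"):
    """Build a QEMU gluster URI, e.g. gluster+tcp://server/volname/image.
    Hypothetical helper for illustration only -- not VDSM code."""
    return "gluster+{0}://{1}/{2}/{3}".format(
        transport, volfileserver, volname, image)

print(gluster_qemu_uri("gfs1.example.com", "vol1", "vm1.img"))
# gluster+tcp://gfs1.example.com/vol1/vm1.img
```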

Thus, to generate the gluster QEMU URI in VDSM, I need to know the Gluster volume's transport type, and the only inputs that oVirt gets for a GlusterFS storage domain are...
a) volfileserver (the host running glusterd)
b) volname (the name of the volume)

Currently I use VDSM's gluster plugin to do the equivalent of "gluster volume info <volname>" to determine the Gluster volume's transport type, but this won't work if the VDSM host is not a gluster peer,
What do you mean by using "gluster peer"? Does "gluster peer" mean the host is running glusterd?

In Gluster, the hosts that are part of a gluster storage volume (serving the bricks (aka storage) to the volume) are called gluster peers. So if the VDSM host is not serving storage to the gluster volume, it's a non-peer, and you cannot invoke the gluster CLI from a non-peer host, simply because it wouldn't make sense for a non-participating host to know about gluster volumes. Thus there is the --remote-host option, which is discouraged and not guaranteed to be supported in the future.

Yes.. and all gluster peers run glusterd


which is a constraint... and I would like to fix/remove it.

So I discussed this a bit on the #gluster-dev IRC channel and want to put down the options here for the community to help provide input on what's the best way to approach this...

1) Use gluster --remote-host=<host_running_glusterd> volume info <volname>

This is not a supported way, and there is no guarantee on how long the --remote-host option will be supported in gluster, since it has some security issues.
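For completeness, here is roughly what option 1 amounts to in Python: shell out to the gluster CLI with --remote-host and scrape the Transport-type line from the "volume info" output. A sketch only; the exact output format ("Transport-type: tcp") is an assumption here, and the option itself is unsupported:

```python
import subprocess

def parse_transport(volume_info_text):
    """Pull the transport out of 'gluster volume info' text, assuming it
    contains a line of the form 'Transport-type: tcp'."""
    for line in volume_info_text.splitlines():
        if line.strip().startswith("Transport-type:"):
            return line.split(":", 1)[1].strip()
    return None

def remote_volume_transport(volfileserver, volname):
    """Query a remote glusterd via the unsupported --remote-host option."""
    out = subprocess.check_output(
        ["gluster", "--remote-host=" + volfileserver,
         "volume", "info", volname])
    return parse_transport(out.decode())
```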

2) Use gluster system:: getspec <volname>

I tried this but it never worked for me... what's the right way of using this?
For me, it just returned to the shell without dumping the volfile at all!

3) Have the oVirt user provide the transport type as well (while creating the Gluster storage domain), in addition to the volfileserver:volname options

This would be easiest, since VDSM can form the gluster QEMU URI directly from the transport type specified by the user. It would remove the need to use the vdsm-gluster plugin, and hence the need for the VDSM host to be a gluster peer. But it would mean additional input for the user to provide during Gluster domain creation, plus oVirt UI changes to take the transport type as input in addition to volfileserver:volname.
What will happen if a user gives a wrong transport type to VDSM?

Simply said... the gluster QEMU URI will be formed wrongly, QEMU will error out, and hence VDSM and the oVirt user will see the error while creating the VM. But in the oVirt GUI we can have a combo box prefilled with "TCP" and "RDMA", so that the user has to choose one of the valid types; then the problem of giving a wrong transport type does not arise.
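The combo-box idea amounts to validating the user's choice in VDSM before the URI is ever formed; a sketch of that check (names are made up for illustration):

```python
VALID_TRANSPORTS = ("tcp", "rdma")  # the choices the combo box would offer

def checked_transport(transport):
    """Fail fast on an unknown transport so a bad value is rejected in
    VDSM instead of surfacing later as a QEMU error."""
    t = transport.lower()
    if t not in VALID_TRANSPORTS:
        raise ValueError("unsupported gluster transport: %r" % transport)
    return t
```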

thanx,
deepak




Comments/Opinions/Inputs appreciated

thanx,
deepak

(P.S. cross-posting this to VDSM and Gluster devel lists, as it relates to both)

_______________________________________________
vdsm-devel mailing list
address@hidden
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel





