Re: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU


From: Deepak Shetty
Subject: Re: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU
Date: Sun, 20 Apr 2014 23:57:25 +0530

This also tells us that the gfapi-based validation/QE test cases need to take this scenario into account, so that in future it can be caught sooner :)

Bharata,
    Does the existing QEMU test case for gfapi cover this?

thanx,
deepak
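
For reference, a minimal sketch of what such a test case might check, assuming the standard libgfapi entry points and a placeholder volume name (the scenario below is glfs_fini() called on a glfs object that was never initialized; nothing here is an existing test):

    #include <assert.h>
    #include <glusterfs/api/glfs.h>

    /*
     * Illustrative only: glfs_fini() on a glfs_t that was never
     * glfs_init()'ed (e.g. because glfs_set_volfile_server() or
     * glfs_init() itself failed) should clean up and return,
     * rather than hang or crash.
     */
    int main(void)
    {
        glfs_t *fs = glfs_new("dummyvol");   /* placeholder volume name */
        assert(fs != NULL);

        /* Deliberately skip glfs_set_volfile_server()/glfs_init(). */
        int ret = glfs_fini(fs);

        /* Expected behaviour being discussed: fini copes with the
         * uninitialized object instead of blocking on glfs_lock. */
        return ret == 0 ? 0 : 1;
    }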


On Fri, Apr 18, 2014 at 8:23 PM, Soumya Koduri <address@hidden> wrote:
Posted my comments in the bug link.

"glfs_init" cannot be called before, as it checks for cmds_args->volfile_server, which is initialized only in "glfs_set_volfile_server".
As Deepak mentioned, we should either define a new routine to do the cleanup in case init was not done, or modify "glfs_fini" to handle this special case as well. The latter is the better approach IMO, as it wouldn't involve any changes in the applications using libgfapi.

Thanks,
Soumya
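
For illustration, a minimal sketch of the call sequence being discussed, assuming the standard libgfapi entry points and placeholder volume/server values; this is the half-initialized state that glfs_fini() would need to handle:

    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        /* Placeholder volume name and server for illustration only. */
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            return 1;

        /* Suppose this call fails (bad transport, host, etc.). */
        if (glfs_set_volfile_server(fs, "tcp", "server.example.com", 24007) < 0) {
            /*
             * glfs_init() was never called, so cmds_args->volfile_server is
             * unset and no volume graph exists, yet glfs_fini() is the only
             * public teardown call the application has to free what
             * glfs_new() allocated.
             */
            glfs_fini(fs);
            return 1;
        }

        if (glfs_init(fs) < 0) {
            /* Same problem: init failed, fini still has to clean up. */
            glfs_fini(fs);
            return 1;
        }

        /* ... normal use of the volume ... */

        glfs_fini(fs);
        return 0;
    }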


----- Original Message -----
From: "Bharata B Rao" <address@hidden>
To: "Deepak Shetty" <address@hidden>
Cc: "Gluster Devel" <address@hidden>
Sent: Friday, April 18, 2014 8:31:28 AM
Subject: Re: [Gluster-devel] Behaviour of glfs_fini() affecting QEMU

On Thu, Apr 17, 2014 at 7:56 PM, Deepak Shetty <address@hidden> wrote:

The glfs_lock indeed seems to work only when glfs_init is successful!
We can call glfs_unset_volfile_server for the error case of glfs_set_volfile_server as good practice.
But it does look like we need an opposite of glfs_new (maybe glfs_destroy) for cases like these, to clean up the stuff that glfs_new() allocated.

That's my 2 cents... hope to hear from other Gluster core folks on this.
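
A sketch of how such an error path might look with the proposed counterpart to glfs_new (here called glfs_destroy, which is hypothetical and not part of the current libgfapi); names and values are illustrative only:

    #include <glusterfs/api/glfs.h>

    /*
     * Hypothetical cleanup path: glfs_destroy() does not exist in libgfapi
     * today.  It is the proposed opposite of glfs_new(), freeing only what
     * glfs_new() allocated (no volume/graph teardown, since glfs_init()
     * never ran).
     */
    static int open_volume(glfs_t **out)
    {
        glfs_t *fs = glfs_new("testvol");        /* placeholder volume name */
        if (!fs)
            return -1;

        if (glfs_set_volfile_server(fs, "tcp", "server.example.com", 24007) < 0) {
            /* Suggested good practice from the thread. */
            glfs_unset_volfile_server(fs, "tcp", "server.example.com", 24007);
            glfs_destroy(fs);                    /* hypothetical */
            return -1;
        }

        if (glfs_init(fs) < 0) {
            glfs_destroy(fs);                    /* hypothetical */
            return -1;
        }

        *out = fs;
        return 0;
    }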

There is a Launchpad bug tracking this: https://bugs.launchpad.net/qemu/+bug/1308542

Regards,
Bharata.

_______________________________________________
Gluster-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/gluster-devel

