
Re: [Gluster-devel] glusterfs-3.3.0qa34 released


From: Amar Tumballi
Subject: Re: [Gluster-devel] glusterfs-3.3.0qa34 released
Date: Wed, 18 Apr 2012 13:42:45 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.1) Gecko/20120209 Thunderbird/10.0.1

On 04/18/2012 12:26 PM, Ian Latter wrote:
Hello,


   I've written a workaround for this issue (in 3.3.0qa35)
by adding a new configuration option to glusterd
(ignore-strict-checks), but there are additional checks
within the posix brick/xlator.  I can see that the volume
starts but the bricks inside it fail shortly thereafter, and
that of the 5 disks in my volume, three have one volume_id
and two have another - so this isn't going to be resolved
without some human intervention.
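
(For reference, a rough way to inspect the stored ids directly is to dump
the trusted xattrs on each brick root as root; the brick paths below are
placeholders only:)

    # Dump all extended attributes (including the trusted.* volume-id)
    # from each brick root; adjust the paths to the real bricks.
    for brick in /export/brick1 /export/brick2 /export/brick3 \
                 /export/brick4 /export/brick5; do
        echo "== $brick =="
        getfattr -d -m . -e hex "$brick"
    done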

   However, while going through the posix brick/xlator I
found the "volume-id" parameter.  I've tracked it back
to the volinfo structure in the glusterd xlator.

   So before I try to code up a posix inheritance for my
glusterd workaround (ignoring the additional checks so
that a new volume_id is created on-the-fly / as-needed),
does anyone know of a CLI method for passing the
volume-id into glusterd (either via "volume create" or
"volume set")?  I don't see one in the code ...
glusterd_handle_create_volume does a uuid_generate
and it's not a feature of glusterd_volopt_map ...

   Is a user-defined UUID init method planned for the CLI
before 3.3.0 is released?  Is there a reason that this
shouldn't be permitted from the CLI "volume create"?


We don't want to bring this option into the CLI, because we don't think it is right to confuse the user with more options/values. 'volume-id' is an internal detail as far as the user is concerned, and we don't want them to have to know about it in normal use cases.

In the case of 'power users' like you, if you know what you are doing, the better solution is to run 'setfattr -x trusted.volume-id $brick' before starting the brick, so the posix translator doesn't get bothered anyway.
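
(A rough sketch of that step, assuming the brick path is a placeholder and
noting that on some builds the attribute is stored as
trusted.glusterfs.volume-id rather than trusted.volume-id:)

    # With the brick process stopped, remove the stale volume-id xattr
    # from the brick root so the posix translator's check no longer trips.
    brick=/export/brick4                       # placeholder path
    setfattr -x trusted.volume-id "$brick" 2>/dev/null || \
        setfattr -x trusted.glusterfs.volume-id "$brick"

    # Then start the volume again:
    #   gluster volume start <volname>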

Regards,
Amar


