
Re: [Gluster-devel] question on glusterfs kvm performance


From: Yin Yin
Subject: Re: [Gluster-devel] question on glusterfs kvm performance
Date: Thu, 16 Aug 2012 15:43:14 +0800

Hi, Bharata B Rao:

     The problem has been solved.
     I had configured QEMU with --enable-uuid but had not installed the libuuid-dev rpm, so QEMU actually used vdi.c's uuid_is_null.

     In glfs_active_subvol, glusterfs calls inode_table_new to initialize the inode table; the root gfid is fifteen 0x00 bytes followed by 0x01.
     glusterfs's uuid_is_null compares all 16 bytes, but qemu's vdi.c version compares only 8 bytes, which made the glfs client see the root inode gfid as null, so glfs_open failed.


Best Regards,
Yin.Yin
   

On Wed, Aug 15, 2012 at 6:24 PM, Bharata B Rao <address@hidden> wrote:
On Wed, Aug 15, 2012 at 12:14 PM, Yin Yin <address@hidden> wrote:
> Hi, Bharata B Rao:
>        I have tried your patch, but hit a problem. I found that both
> glusterfs and qemu have a function named uuid_is_null:
> glusterfs uses contrib/uuid/isnull.c
> qemu uses block/vdi.c
>
>
> I tested api/example: glfsxmp.c calls the uuid_is_null in
> contrib/uuid/isnull.c,
> but qemu/block/gluster.c ends up calling the uuid_is_null in vdi.c, which
> causes the VM to fail to boot.

So you are configuring QEMU with --disable-uuid? Even then, there
should be no issues. I just verified that, and I don't understand why
it would cause problems in VM booting.

Can you please ensure the following:

- Remove all the traces of gluster from your system (which means
removing any installed gluster rpms) before you compile gluster from
source.
- Try with my v6 patchset
(http://lists.nongnu.org/archive/html/qemu-devel/2012-08/msg01536.html)

When you say the VM isn't booting, do you see a hang or a segfault?

Let me know how you are specifying the gluster drive (-drive
file=gluster://server[:port]/volname/image[?transport=socket])

Please verify that you are able to FUSE mount your volume before trying
with QEMU.
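The suggested sanity check might look like the following command sketch; the server name, volume name, image path, and QEMU binary name are placeholders, and the drive URI follows the syntax quoted above.

```shell
# placeholders: replace "server", "volname", and the image path
mount -t glusterfs server:/volname /mnt/gluster
ls /mnt/gluster          # confirm the volume is reachable via FUSE
umount /mnt/gluster

# then boot through the gluster block driver directly
qemu-system-x86_64 -drive file=gluster://server/volname/image.qcow2
```

If the FUSE mount itself fails, the problem is in the gluster setup rather than in the QEMU patches.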

Regards,
Bharata.

