From: Deepak C Shetty
Subject: Re: [Gluster-devel] "failed to fetch volume file (key:dpkvol)" error , when tried as non-root (vdsm) user
Date: Thu, 13 Dec 2012 10:20:50 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

Here is the qemu process spawned by virsh (via libvirt)...

root 2912 1 1 10:16 ? 00:00:00 /usr/local/bin/qemu-system-x86_64 -name virsh-vm-backed-by-gluster -S -M pc -cpu qemu64,-svm -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid bdbae806-c272-4b87-ae69-274fd4d57c5f -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=17-1,serial=762589AD-3D52-42C3-6F65-D682277D5B37_52:54:00:c7:66:ec,uuid=bdbae806-c272-4b87-ae69-274fd4d57c5f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/virsh-vm-backed-by-gluster.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2012-12-13T04:46:48,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3 -drive file=gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -vga cirrus
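For context, a -drive line like the above comes from a network-disk element in the libvirt domain XML. Roughly like this (a sketch of the shape, assuming libvirt's upstream gluster protocol support, not my exact XML):

<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='dpkvol/debian_lenny_i386_standard.qcow2'>
    <host name='vm-vdsm-de-1' port='24007' transport='tcp'/>
  </source>
  <target dev='hda' bus='ide'/>
</disk>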

With this too, I still get the same error which says "unable to fetch volfile".

But the gluster volume info and status look fine...

$ gluster volume info dpkvol

Volume Name: dpkvol
Type: Distribute
Volume ID: 846494e5-035a-4658-9611-7f5f3a306b50
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: vm-vdsm-de-1:/home/dpkshetty/mybrick2
Options Reconfigured:
server.allow-insecure: on
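
For reference, the allow-insecure option shown under "Options Reconfigured" above was set the usual way:

$ gluster volume set dpkvol server.allow-insecure on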

$ gluster volume status dpkvol
Status of volume: dpkvol
Gluster process                                Port   Online  Pid
------------------------------------------------------------------------------
Brick vm-vdsm-de-1:/home/dpkshetty/mybrick2    49152  Y       1752
NFS Server on localhost                        38467  Y       1763


thanx,
deepak


On 12/12/2012 07:26 PM, Deepak C Shetty wrote:
Hi All,
  I am trying to set up a development stack as below:
ovirt->vdsm->libvirt->qemu(using gluster native integration)

So before I even get to vdsm, I wanted to ensure that things work fine via virsh, the stack being:
virsh->libvirt->qemu(using gluster native integration)

*** Note that both qemu and libvirt have GlusterFS native support merged upstream, and I am using versions that include this support.

*** Gluster is configured as below...

$ gluster volume info dpkvol

Volume Name: dpkvol
Type: Distribute
Volume ID: 846494e5-035a-4658-9611-7f5f3a306b50
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: vm-vdsm-de-1:/home/dpkshetty/mybrick2
Options Reconfigured:
server.allow-insecure: on

*** Other system info

$ hostname
vm-vdsm-de-1

$ ls -al /home/dpkshetty/mybrick2/
total 931928
drwxrwxrwx.  3 root      root           4096 Dec 12 14:18 .
drwx------. 30 dpkshetty dpkshetty      4096 Dec 12 17:48 ..
-rwxrwxrwx. 2 vdsm kvm 954269696 Dec 12 17:57 debian_lenny_i386_standard.qcow2
drw-------.  7 vdsm      kvm            4096 Dec 12 14:18 .glusterfs


*** Now when I use the qemu cmdline below, logged in as root, it all works fine...

$ qemu-system-x86_64 -drive file=gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -vnc :1 --enable-kvm -smp 2 -m 1G

$ id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
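
The same gluster URI can also be probed without booting a guest (assuming qemu-img was built with the gluster block driver as well):

$ qemu-img info gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2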

*** The exact same qemu cmdline fails when logged in as user vdsm...

$ qemu-system-x86_64 -drive file=gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -vnc :1 --enable-kvm -smp 2 -m 1G
qemu-system-x86_64: -drive file=gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2,if=none,id=drive-ide0-0-0: Gluster connection failed for server=vm-vdsm-de-1 port=0 volume=dpkvol image=debian_lenny_i386_standard.qcow2 transport=tcp

qemu-system-x86_64: -drive file=gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2,if=none,id=drive-ide0-0-0: could not open disk image gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2: No data available

$ id
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
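
The failure should also be reproducible without a full VM, e.g. (again assuming qemu-img has the gluster driver):

$ sudo -u vdsm qemu-img info gluster+tcp://vm-vdsm-de-1/dpkvol/debian_lenny_i386_standard.qcow2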

I had enabled qemu's gluster block backend logs (with help from Bharata, CCing him here), and this is what I see there...

[2012-12-12 13:29:53.988722] I [socket.c:3390:socket_init] 0-gfapi: SSL support is NOT enabled
[2012-12-12 13:29:53.988768] I [socket.c:3405:socket_init] 0-gfapi: using system polling thread
[2012-12-12 13:29:53.995202] W [socket.c:501:__socket_rwv] 0-gfapi: readv failed (No data available)
[2012-12-12 13:29:53.995247] W [socket.c:1932:__socket_proto_state_machine] 0-gfapi: reading from socket failed. Error (No data available), peer (192.168.122.139:24007)
[2012-12-12 13:29:53.995736] E [rpc-clnt.c:368:saved_frames_unwind] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xd0) [0x7fd8089c1cb0] (-->/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3) [0x7fd8089c0113] (-->/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7fd8089c002e]))) 0-gfapi: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2012-12-12 13:29:53.994867 (xid=0x1x)
[2012-12-12 13:29:53.995787] E [glfs-mgmt.c:486:mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:dpkvol)
[2012-12-12 13:29:53.995819] E [glfs-mgmt.c:543:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: No data available
[2012-12-12 13:29:53.995833] I [glfs-mgmt.c:546:mgmt_rpc_notify] 0-glfs-mgmt: 1 connect attempts left
[2012-12-12 13:29:57.007468] W [socket.c:501:__socket_rwv] 0-gfapi: readv failed (No data available)
[2012-12-12 13:29:57.007553] W [socket.c:1932:__socket_proto_state_machine] 0-gfapi: reading from socket failed. Error (No data available), peer (192.168.122.139:24007)
[2012-12-12 13:29:57.007724] E [rpc-clnt.c:368:saved_frames_unwind] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xd0) [0x7fd8089c1cb0] (-->/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xc3) [0x7fd8089c0113] (-->/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7fd8089c002e]))) 0-gfapi: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2012-12-12 13:29:57.007102 (xid=0x2x)
[2012-12-12 13:29:57.007755] E [glfs-mgmt.c:486:mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:dpkvol)
[2012-12-12 13:29:57.007860] E [glfs-mgmt.c:543:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: No data available
[2012-12-12 13:29:57.007886] I [glfs-mgmt.c:546:mgmt_rpc_notify] 0-glfs-mgmt: 0 connect attempts left

*** A few things I tried...

* I tried `chmod -R 777` on /var/lib/glusterd/, just in case user:group:other permissions were preventing glusterd from serving the volfile, but that didn't work either.

* When I try from virsh, I see the exact same error as above. I even tried changing dynamic_ownership and the user/group of the qemu process started by libvirt (in libvirt's qemu.conf) to root, vdsm, etc.; nothing works. Gluster just keeps throwing the same error in all cases.

*** Summary

So, net net, the way I see it, the above error has something to do with running the qemu cmdline as non-root. The question is: what needs to be done so that a gluster volume works when accessed by a client running as a non-root user ('vdsm' here), the client in this case being qemu accessing the gluster volume via libgfapi?
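
One data point that may be relevant: a non-root client will bind an unprivileged (>1023) source port, and gluster's RPC layer normally rejects requests from such ports unless configured to allow them. If that is what is happening here, the glusterd log should show it; the path below is the default log location, and the grep pattern is my guess at the relevant message:

$ grep -i 'non-privileged' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log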


Appreciate any pointers to help narrow down & resolve the issue.
Let me know if more info on the setup is needed.


thanx,
deepak
P.S. I am not copying qemu-devel, since I feel this is something to do with the gluster setup.






