Re: [Gluster-devel] question on glusterfs kvm performance


From: Bharata B Rao
Subject: Re: [Gluster-devel] question on glusterfs kvm performance
Date: Thu, 9 Aug 2012 15:48:04 +0530

On Wed, Aug 8, 2012 at 11:50 PM, John Mark Walker <address@hidden> wrote:
>
> ----- Original Message -----
>>
>> Or change your perspective. Do you NEED to write to the VM image?
>>
>> I write to fuse-mounted GlusterFS volumes from within my VMs. The VM
>> image is just for the OS and application. With the data on a GlusterFS
>> volume, I get the normal fuse client performance from within my VM.

I ran FIO in 3 scenarios; here are the comparison numbers:

Scenario 1: QEMU's GlusterFS block backend used for both the root and
data partitions (each image resides on a gluster volume)
./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024
-smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none
-drive file=gluster://bharata/test/F17,if=virtio,cache=none
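(Inside the guest, the data image shows up as a second virtio disk, which
gets formatted and mounted at /data1 before running FIO, roughly:
  mkfs.ext4 /dev/vdb    # device name assumed; check with lsblk
  mount /dev/vdb /data1
where /data1 matches the directory in the FIO job file below.)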

Scenario 2: QEMU's GlusterFS block backend for the root partition and a
GlusterFS FUSE mount on the host for the data partition
./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024
-smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none
-drive file=/mnt/F17,if=virtio,cache=none
(Here the gluster volume holding the data image is FUSE-mounted on the host at /mnt)
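(The host-side mount for that path would be something like:
  mount -t glusterfs bharata:/test /mnt
with the server and volume names taken from the gluster:// URI in scenario 1.)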

Scenario 3: QEMU's GlusterFS block backend for the root partition, with
the gluster data volume FUSE-mounted from inside the VM
./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --nographic -m 1024
-smp 4 -drive file=gluster://bharata/rep/F16,if=virtio,cache=none
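(Inside the guest, the data volume is mounted with the native FUSE client,
something like:
  mount -t glusterfs bharata:/test /data1
again with the server and volume names from the URIs above, and /data1
matching the directory in the FIO job file below.)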

FIO exercises the data partition in each case.

Here are the numbers:

Scenario 1:  aggrb=47836KB/s
Scenario 2:  aggrb=20894KB/s
Scenario 3:  aggrb=36936KB/s

The FIO job file I used is this:
; Read 4 files with aio at different depths
[global]
ioengine=libaio
direct=1
rw=read
bs=128k
size=512m
directory=/data1
[file1]
iodepth=4
[file2]
iodepth=32
[file3]
iodepth=8
[file4]
iodepth=16
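
To reproduce, save this as a job file (say, read4.fio) and run it inside
the guest against the data partition:
  fio read4.fio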

Regards,
Bharata.


