qemu-discuss

virtio-fs performance


From: Derek Su
Subject: virtio-fs performance
Date: Wed, 15 Jul 2020 21:56:51 +0800

Hello all

I'm trying out and testing the virtio-fs feature in QEMU v5.0.0.
My host and guest OS are both Ubuntu 18.04 with kernel 5.4, and the
underlying storage is a single SSD.

The configurations are:
(1) virtiofsd
./virtiofsd \
  -o source=/mnt/ssd/virtiofs,cache=auto,flock,posix_lock,writeback,xattr \
  --thread-pool-size=1 --socket-path=/tmp/vhostqemu

(2) qemu
qemu-system-x86_64 \
-enable-kvm \
-name ubuntu \
-cpu Westmere \
-m 4096 \
-global kvm-apic.vapic=false \
-netdev tap,id=hn0,vhost=off,br=br0,helper=/usr/local/libexec/qemu-bridge-helper \
-device e1000,id=e0,netdev=hn0 \
-blockdev '{"node-name": "disk0", "driver": "qcow2", "refcount-cache-size": 1638400, "l2-cache-size": 6553600, "file": {"driver": "file", "filename": "'${imagefolder}'/ubuntu.qcow2"}}' \
-device virtio-blk,drive=disk0,id=disk0 \
-chardev socket,id=ch0,path=/tmp/vhostqemu \
-device vhost-user-fs-pci,chardev=ch0,tag=myfs \
-object memory-backend-memfd,id=mem,size=4G,share=on \
-numa node,memdev=mem \
-qmp stdio \
-vnc :0

(3) guest
mount -t virtiofs myfs /mnt/virtiofs
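
As a sanity check (not part of the configuration above), the mount can be
confirmed from inside the guest before benchmarking, for example:

```
# Verify that the "myfs" tag is mounted with filesystem type virtiofs
findmnt -t virtiofs /mnt/virtiofs
```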

I tried changing virtiofsd's --thread-pool-size value and tested the
storage performance with fio.
Before each read/write/randread/randwrite test, the page caches of
both the guest and the host are dropped.
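
The caches are dropped via the usual procfs interface, roughly as follows
(run as root on both host and guest before each run):

```
# Flush dirty data, then drop the page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
```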

```
RW="read" # or write/randread/randwrite
fio --name=test --rw=$RW --bs=4k --numjobs=1 --ioengine=libaio
--runtime=60 --direct=0 --iodepth=64 --size=10g
--filename=/mnt/virtiofs/testfile
done
```

--thread-pool-size=64 (default)
    seq read: 305 MB/s
    seq write: 118 MB/s
    rand 4KB read: 2222 IOPS
    rand 4KB write: 21100 IOPS

--thread-pool-size=1
    seq read: 387 MB/s
    seq write: 160 MB/s
    rand 4KB read: 2622 IOPS
    rand 4KB write: 30400 IOPS

The results show that performance with the default thread-pool size (64) is
worse than with a single thread.
Is this due to lock contention among the multiple threads?
In what situations can virtio-fs get better performance from multiple threads?


I also tested the performance of the guest accessing the host's files via the
NFSv4/CIFS network filesystems.
The "seq read" and "randread" performance of virtio-fs is also worse
than that of NFSv4 and CIFS.

NFSv4:
  seq write: 244 MB/s
  rand 4K read: 4086 IOPS
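
For reference, a minimal sketch of the kind of NFSv4 setup used for this
comparison (the export options, host address, and guest mount point below are
placeholders, not the exact configuration):

```
# Host: export the same SSD-backed directory (hypothetical /etc/exports entry)
#   /mnt/ssd/virtiofs  192.168.0.0/24(rw,sync,no_subtree_check)
exportfs -ra

# Guest: mount the export over NFSv4 (server address is a placeholder)
mount -t nfs4 192.168.0.1:/mnt/ssd/virtiofs /mnt/nfs
```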

I cannot figure out why the performance of NFSv4/CIFS, which go through the
network stack, is better than that of virtio-fs.
Is this expected, or do I have an incorrect configuration?


Thanks.
Regards,

Derek


