
Re: [ovirt-users] very very bad iscsi performance


From: Nir Soffer
Subject: Re: [ovirt-users] very very bad iscsi performance
Date: Tue, 21 Jul 2020 00:41:47 +0300

On Mon, Jul 20, 2020 at 8:51 PM Philip Brown <pbrown@medata.com> wrote:
>
> I'm trying to get optimal iscsi performance. We're a heavy iscsi shop, with 
> 10g net.
>
> I'm experimenting with SSDs, and the performance in ovirt is way, way less
> than I would have hoped.
> More than an order of magnitude slower.
>
> Here's a data point:
> I'm running filebench with the OLTP workload.

Did you try fio?
https://fio.readthedocs.io/en/latest/fio_doc.html

I think this is the most common and advanced tool for such tests.
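For example, an OLTP-like mixed random read/write run against a file on the
mounted filesystem could look like this (a sketch; the file path, block size,
and queue depth are assumptions you should adjust to match your workload):

fio --name=oltp-like --filename=/mnt/test/fio.dat --size=4g \
    --rw=randrw --rwmixread=70 --bs=8k --ioengine=libaio \
    --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting

Running the same command on the host and inside the VM gives you directly
comparable numbers.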

> First, I run it on one of the hosts that has an SSD directly attached:
> create an XFS filesystem (on a VG "device" on top of the SSD), mount it
> with noatime, and run the benchmark.
>
>
> 37166: 74.084: IO Summary: 3746362 ops, 62421.629 ops/s, (31053/31049 r/w), 
> 123.6mb/s,    161us cpu/op,   1.1ms latency

What do you get if you log in to the target on the host and access the
LUN directly?

If you create a file system on the LUN and mount it on the host?
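For example, something like this, assuming the target is already exported
(the portal address, IQN, and device name below are placeholders):

iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node -T iqn.2020-07.com.example:ssd -p 10.0.0.1 --login
mkfs.xfs /dev/sdX    # the newly attached LUN
mount -o noatime /dev/sdX /mnt/test

Repeating the benchmark there isolates the iSCSI layer from the oVirt and
qemu layers.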

> I then unmount it, and make the exact same device an iscsi target, and create 
> a storage domain with it.
> I then create a disk for a VM running *on the same host*, and run the 
> benchmark.

What kind of disk? thin? preallocated?
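You can check with qemu-img; on a block storage domain the volume is an LV,
so something like this (the path is a placeholder):

qemu-img info /dev/<sd-uuid>/<vol-uuid>

A thin disk on block storage is qcow2, a preallocated one is raw, and qcow2
allocation can add noticeable overhead for small random writes.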

> The same thing: filebench, oltp workload, xfs filesystem, noatime.
>
>
> 13329: 91.728: IO Summary: 153548 ops, 2520.561 ops/s, (1265/1243 r/w),   
> 4.9mb/s,    289us cpu/op,  88.4ms latency

4.9mb/s looks very low. Are you testing very small random writes?
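To check, you can isolate small random writes inside the guest with
something like this (the file path and block size are assumptions):

fio --name=randwrite-4k --filename=/mnt/test/fio.dat --size=2g \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based

If throughput grows roughly linearly with --bs, the bottleneck is
per-request latency, not bandwidth.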

> 62,000 ops/s vs 2500 ops/s.
>
> what????
>
>
> Someone might be tempted to say, "try making the device directly available, 
> AS a device, to the VM".
> Unfortunately, this is not an option.
> My goal is specifically to put together a new, high-performing storage
> domain that I can use for database devices in VMs.

This is something to discuss with qemu folks. oVirt is just an easy
way to manage VMs.

Please attach the VM XML using:
virsh -r dumpxml vm-name-or-id

And the qemu command line from:
/var/log/libvirt/qemu/vm-name.log
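The interesting part of the XML is the <disk> element; for a block storage
domain it usually looks something like this (the source path is a
placeholder, and the driver attributes vary with your configuration):

<disk type='block' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
  <source dev='/rhev/data-center/<sp-uuid>/<sd-uuid>/images/<img-uuid>/<vol-uuid>'/>
</disk>

The cache and io modes and the image format (raw vs qcow2) all affect
small-write performance.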

I think you will get the best performance using a direct LUN. A storage
domain is best if you want the features it provides; if performance is
your most important requirement, you want to connect the storage to your
VM in the most direct way.

Mordechai, did we do any similar performance tests in our lab?
Do you have example results?

Nir



