
Re: [ovirt-users] very very bad iscsi performance


From: Philip Brown
Subject: Re: [ovirt-users] very very bad iscsi performance
Date: Thu, 23 Jul 2020 07:25:14 -0700 (PDT)

I'm in the middle of a priority issue right now, so I can't take time out to
rerun the benchmark, but...
Usually in that kind of situation, if you don't turn on sync-to-disk on every
write, you get benchmarks that are artificially HIGH.
Forcing O_DIRECT slows throughput down.
Don't you think the results are bad enough already? :-}
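
(For readers skimming the thread: below is a minimal C sketch of what an
O_DIRECT write involves, to illustrate the cache-bypass point above. The
device path and the 4 KiB alignment are illustrative assumptions; real code
should query the device's logical block size, e.g. via the BLKSSZGET ioctl.)

  /* odirect.c -- minimal O_DIRECT write sketch
   * build: gcc -D_GNU_SOURCE -o odirect odirect.c */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      const size_t len = 4096;   /* assumption: 4 KiB logical block size */
      void *buf;
      int fd;

      /* O_DIRECT bypasses the page cache: every write is real device I/O */
      fd = open("/path/to/device", O_WRONLY | O_DIRECT);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      /* O_DIRECT requires block-aligned buffers, offsets and lengths */
      if (posix_memalign(&buf, len, len) != 0) {
          fprintf(stderr, "posix_memalign failed\n");
          close(fd);
          return 1;
      }
      memset(buf, 0xab, len);

      if (pwrite(fd, buf, len, 0) != (ssize_t)len)
          perror("pwrite");      /* EINVAL often means misaligned I/O */

      free(buf);
      close(fd);
      return 0;
  }

Without O_DIRECT, the same pwrite() would usually land in the page cache and
return immediately, which is why buffered benchmarks read artificially high.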

----- Original Message -----
From: "Stefan Hajnoczi" <stefanha@redhat.com>
To: "Philip Brown" <pbrown@medata.com>
Cc: "Nir Soffer" <nsoffer@redhat.com>, "users" <users@ovirt.org>, "qemu-block" 
<qemu-block@nongnu.org>, "Paolo Bonzini" <pbonzini@redhat.com>, "Sergio Lopez 
Pascual" <slp@redhat.com>, "Mordechai Lehrer" <mlehrer@redhat.com>, "Kevin 
Wolf" <kwolf@redhat.com>
Sent: Thursday, July 23, 2020 6:09:39 AM
Subject: Re: [BULK]  Re: [ovirt-users] very very bad iscsi performance


Hi,
At first glance it appears that the filebench OLTP workload does not use
O_DIRECT, so this isn't a measurement of pure disk I/O performance:
https://github.com/filebench/filebench/blob/master/workloads/oltp.f

If you suspect that disk performance is the issue, please run a benchmark
that bypasses the page cache using O_DIRECT.

The fio setting is direct=1.

Here is an example fio job for 70% read/30% write 4KB random I/O:

  [global]
  filename=/path/to/device  # block device or file to benchmark
  runtime=120
  ioengine=libaio
  direct=1                  # O_DIRECT: bypass the page cache
  ramp_time=10              # start measuring after warm-up time

  [randrw]
  readwrite=randrw          # mixed random reads and writes
  rwmixread=70              # 70% reads
  rwmixwrite=30             # 30% writes
  iodepth=64                # keep up to 64 requests in flight
  blocksize=4k

(Based on 
https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html)
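
To run the job, save it to a file (the filename below is an arbitrary choice)
and pass it to fio; compare the IOPS and latency figures fio reports for the
read and write portions of the workload:

  fio randrw.fio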

Stefan


