Re: [PATCH 1/1] virtio-blk-ccw: tweak the default for num_queues


From: Halil Pasic
Subject: Re: [PATCH 1/1] virtio-blk-ccw: tweak the default for num_queues
Date: Wed, 11 Nov 2020 16:16:44 +0100

On Wed, 11 Nov 2020 13:38:15 +0100
Cornelia Huck <cohuck@redhat.com> wrote:

> From: Cornelia Huck <cohuck@redhat.com>
> To: Michael Mueller <mimu@linux.ibm.com>
> Cc: Thomas Huth <thuth@redhat.com>, David Hildenbrand <david@redhat.com>, 
> "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org, Halil Pasic 
> <pasic@linux.ibm.com>, Christian Borntraeger <borntraeger@de.ibm.com>, 
> qemu-s390x@nongnu.org
> Subject: Re: [PATCH 1/1] virtio-blk-ccw: tweak the default for num_queues
> Date: Wed, 11 Nov 2020 13:38:15 +0100
> Sender: "Qemu-devel" <qemu-devel-bounces+pasic=linux.ibm.com@nongnu.org>
> Organization: Red Hat GmbH
> 
> On Wed, 11 Nov 2020 13:26:11 +0100
> Michael Mueller <mimu@linux.ibm.com> wrote:
> 
> > On 10.11.20 15:16, Michael Mueller wrote:  
> > > 
> > > 
> > > On 09.11.20 19:53, Halil Pasic wrote:    
> > >> On Mon, 9 Nov 2020 17:06:16 +0100
> > >> Cornelia Huck <cohuck@redhat.com> wrote:
> > >>    
> > >>>> @@ -20,6 +21,11 @@ static void 
> > >>>> virtio_ccw_blk_realize(VirtioCcwDevice *ccw_dev, Error **errp)
> > >>>>   {
> > >>>>       VirtIOBlkCcw *dev = VIRTIO_BLK_CCW(ccw_dev);
> > >>>>       DeviceState *vdev = DEVICE(&dev->vdev);
> > >>>> +    VirtIOBlkConf *conf = &dev->vdev.conf;
> > >>>> +
> > >>>> +    if (conf->num_queues == VIRTIO_BLK_AUTO_NUM_QUEUES) {
> > >>>> +        conf->num_queues = MIN(4, current_machine->smp.cpus);
> > >>>> +    }    
> > >>>
> > >>> I would like to have a comment explaining the numbers here, however.
> > >>>
> > >>> virtio-pci has a pretty good explanation (use 1:1 for vqs:vcpus if
> > >>> possible, apply some other capping). 4 seems to be a bit arbitrary
> > >>> without explanation, although I'm sure you did some measurements :)    
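As an aside, the virtio-pci rationale Cornelia mentions amounts to one
request virtqueue per vcpu, capped at an upper bound. A minimal sketch of
that shape (the helper name is made up and the cap of 4 is simply taken
from the hunk above, so treat this as an illustration, not the actual
virtio-pci code):

#include "qemu/osdep.h"   /* for MIN() */

/*
 * Sketch only: default to one request virtqueue per vcpu, so requests
 * can be submitted without bouncing between vcpus, but cap the count
 * so that guests with many vcpus do not pay for queues they cannot
 * exploit.
 */
static unsigned virtio_ccw_blk_default_num_queues(unsigned smp_cpus)
{
    const unsigned cap = 4;    /* the value proposed in the hunk above */

    return MIN(cap, smp_cpus);
}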
> > >>
> > >> Frankly, I don't have any measurements yet. For the secure case,
> > >> I think Mimu has assessed the impact of multiqueue, hence adding Mimu to
> > >> the cc list. @Mimu can you help us out.
> > >>
> > >> Regarding the normal non-protected VMs I'm in a middle of producing some
> > >> measurement data. This was admittedly a bit rushed because of where we
> > >> are in the cycle. Sorry to disappoint you.    
> > > 
> > > I'm talking with the perf team tomorrow. They have done some 
> > > measurements with multiqueue for PV guests and I asked for a comparison 
> > > to non PV guests as well.    
> > 
> > The perf team has performed measurements for us that show that a *PV
> > KVM guest* benefits in terms of throughput for random read, random write
> > and sequential read (no difference for sequential write) from a
> > multi-queue setup. CPU costs are reduced as well due to reduced
> > spinlock contention.  
> 
> Just to be clear, that was with 4 queues?
> 
> > 
> > For a *standard KVM guest* it currently has no throughput effect. No
> > benefit and no harm. I have asked them to finalize their measurements
> > by comparing the CPU cost as well. I will receive that information on 
> > Friday.  
> 
> Thank you for checking!

The results of my measurements (normal case only) are consistent with
these findings.

My setup looks like this: a guest with 6 vcpus and an attached
virtio-blk-ccw disk backed by a raw image on a tmpfs (i.e. backed by
RAM, because we are interested in virtio and not in the disk backend).
Performance was evaluated with fio (randrw, iodepth=1, bs=1M). I scaled
the number of virtqueues from 1 to 5 and collected 30 data points for
each value.

The full fio command line I used is at the end of this mail.
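For reference, the number of virtqueues can be set from the host side
via the device's num-queues property, e.g. along these lines (ids and
paths are placeholders, and the rest of the command line is omitted):

qemu-system-s390x -smp 6 ... \
    -drive if=none,id=disk0,file=/dev/shm/test.raw,format=raw \
    -device virtio-blk-ccw,drive=disk0,num-queues=4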

For a nicer table, please see the attached png. The difference between
the per-queue-count averages is small, about 1.2 percent at most.

The percentages below are the averages for each queue count, relative
to the average over all queue counts.

queues   write_iops   write_bw   read_iops   read_bw
1         99.45%       99.45%     99.44%      99.44%
2         99.93%       99.93%     99.92%      99.92%
3        100.02%      100.02%    100.02%     100.02%
4        100.64%      100.64%    100.64%     100.64%
5         99.97%       99.97%     99.97%      99.97%
total    100.00%      100.00%    100.00%     100.00%

fio --ramp_time=30s --output-format=json --bs=1M --ioengine=libaio 
--readwrite=randrw --runtime=120s --size=1948m --name=measurement 
--gtod_reduce=1 --direct=1 --iodepth=1 --filename=/dev/vda  --time_based

Attachment: data.png (PNG image)

