Re: [PATCH] virtio-net: correctly report maximum tx_queue_size value


From: Michael Tokarev
Subject: Re: [PATCH] virtio-net: correctly report maximum tx_queue_size value
Date: Wed, 7 Jun 2023 12:23:09 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0

05.06.2023 17:21, Laurent Vivier wrote:
The maximum value for tx_queue_size depends on the backend type:
1024 for vDPA/vhost-user, 256 for all the others.

This maximum is returned by virtio_net_max_tx_queue_size() and used to
clamp the parameter:

     n->net_conf.tx_queue_size = MIN(virtio_net_max_tx_queue_size(n),
                                     n->net_conf.tx_queue_size);

But the parameter check uses VIRTQUEUE_MAX_SIZE (1024) for all backends.

So the parameter is silently ignored, and ethtool reports a value
different from the one provided by the user.

    ... -netdev tap,... -device virtio-net,tx_queue_size=1024

     # ethtool -g enp0s2
     Ring parameters for enp0s2:
     Pre-set maximums:
     RX:                256
     RX Mini:   n/a
     RX Jumbo:  n/a
     TX:                256
     Current hardware settings:
     RX:                256
     RX Mini:   n/a
     RX Jumbo:  n/a
     TX:                256

    ... -netdev vhost-user,... -device virtio-net,tx_queue_size=2048

     Invalid tx_queue_size (= 2048), must be a power of 2 between 256 and 1024
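
To make the failure mode concrete, here is a minimal standalone C sketch
(a toy model with hypothetical names and constants, not QEMU's actual
code) of a fixed-bound check followed by a backend-dependent clamp: a
request of 1024 on a tap-like backend passes validation and is then
silently reduced to 256, which is exactly the mismatch the ethtool
output above shows.

    /*
     * Toy model of the problem described above -- not QEMU code, all
     * names and constants are illustrative only.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define VIRTQUEUE_MAX_SIZE     1024
    #define TX_QUEUE_MIN_SIZE       256
    #define TX_QUEUE_DEFAULT_SIZE   256   /* limit for tap and friends */

    enum backend { BACKEND_TAP, BACKEND_VHOST_USER };

    /* Backend-dependent maximum, in the spirit of
     * virtio_net_max_tx_queue_size(). */
    static unsigned max_tx_queue_size(enum backend b)
    {
        return b == BACKEND_VHOST_USER ? VIRTQUEUE_MAX_SIZE
                                       : TX_QUEUE_DEFAULT_SIZE;
    }

    static bool is_power_of_2(unsigned v)
    {
        return v && !(v & (v - 1));
    }

    int main(void)
    {
        enum backend b = BACKEND_TAP;
        unsigned requested = 1024;

        /* Pre-patch style check: fixed upper bound, backend ignored. */
        if (requested < TX_QUEUE_MIN_SIZE ||
            requested > VIRTQUEUE_MAX_SIZE ||
            !is_power_of_2(requested)) {
            printf("Invalid tx_queue_size (= %u), must be a power of 2 "
                   "between %u and %u\n",
                   requested, TX_QUEUE_MIN_SIZE, VIRTQUEUE_MAX_SIZE);
            return 1;
        }

        /* The later clamp silently overrides what the check accepted. */
        unsigned max = max_tx_queue_size(b);
        unsigned effective = requested < max ? requested : max;
        printf("requested %u, effective %u\n", requested, effective);
        return 0;
    }

Running this prints "requested 1024, effective 256": the value passes
the fixed-bound check and is then silently clamped.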

With this patch the correct maximum value is checked and displayed.

For vDPA/vhost-user:

     Invalid tx_queue_size (= 2048), must be a power of 2 between 256 and 1024

For all the others:

     Invalid tx_queue_size (= 512), must be a power of 2 between 256 and 256
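
In the same toy spirit (again hypothetical names, not the actual hunk in
hw/net/virtio-net.c), a backend-aware check takes the maximum as an
argument, so the reported range matches what the device will actually
use:

    /* Toy sketch of a backend-aware check; 'max' would be what
     * virtio_net_max_tx_queue_size() returns (1024 for vDPA/vhost-user,
     * 256 otherwise).  Names are illustrative, not QEMU's. */
    #include <stdbool.h>
    #include <stdio.h>

    #define TX_QUEUE_MIN_SIZE 256

    static bool is_power_of_2(unsigned v)
    {
        return v && !(v & (v - 1));
    }

    static bool tx_queue_size_valid(unsigned requested, unsigned max)
    {
        if (requested < TX_QUEUE_MIN_SIZE || requested > max ||
            !is_power_of_2(requested)) {
            printf("Invalid tx_queue_size (= %u), must be a power of 2 "
                   "between %u and %u\n",
                   requested, TX_QUEUE_MIN_SIZE, max);
            return false;
        }
        return true;
    }

    int main(void)
    {
        tx_queue_size_valid(512, 256);   /* prints the message quoted above */
        return 0;
    }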

Fixes: 2eef278b9e63 ("virtio-net: fix tx queue size for !vhost-user")
Cc: mst@redhat.com
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
  hw/net/virtio-net.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

Is this -stable material, or not worth fixing in -stable?
To me it looks like it should be fixed.

Thanks,

/mjt


