Re: [PATCH] vdpa: Increase out buffer size for CVQ commands


From: Eugenio Perez Martin
Subject: Re: [PATCH] vdpa: Increase out buffer size for CVQ commands
Date: Sun, 25 Jun 2023 12:48:49 +0200

On Thu, Jun 22, 2023 at 3:07 AM Hawkins Jiawei <yin31149@gmail.com> wrote:
>
> According to the VirtIO standard, "Since there are no guarantees,
> it can use a hash filter or silently switch to
> allmulti or promiscuous mode if it is given too many addresses."
> To achieve this, QEMU ignores the MAC addresses and marks `mac_table.x_overflow`
> in the device internal state in virtio_net_handle_mac()
> if the guest sets more than `MAC_TABLE_ENTRIES` MAC addresses
> for the filter table.
>
> However, the problem is that QEMU never marks the `mac_table.x_overflow`
> for the vdpa device internal state when the guest sets more than
> `MAC_TABLE_ENTRIES` MAC addresses.
>
> To be more specific, currently QEMU offers a buffer of
> vhost_vdpa_net_cvq_cmd_len() bytes for CVQ commands, which is the size of a
> VIRTIO_NET_CTRL_MAC_TABLE_SET command carrying at most `MAC_TABLE_ENTRIES`
> MAC addresses.
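(For context, the two helpers compared here boil down to roughly the
following; this is a paraphrase of net/vhost-vdpa.c rather than the exact
code, so details may differ:)

    static size_t vhost_vdpa_net_cvq_cmd_len(void)
    {
        /* Longest CVQ command: VIRTIO_NET_CTRL_MAC_TABLE_SET, i.e. the
         * control header plus two MAC tables holding at most
         * MAC_TABLE_ENTRIES six-byte addresses in total. */
        return sizeof(struct virtio_net_ctrl_hdr) +
               2 * sizeof(struct virtio_net_ctrl_mac) +
               MAC_TABLE_ENTRIES * ETH_ALEN;
    }

    static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
    {
        /* The out buffer is allocated and mapped in whole host pages,
         * so round the command length up to the host page size. */
        return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(),
                        qemu_real_host_page_size());
    }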
>
> Consequently, if the guest sets more than `MAC_TABLE_ENTRIES` MAC addresses,
> QEMU truncates the CVQ command data and copies this incomplete command
> into the out buffer. In this situation, virtio_net_handle_mac() fails the
> integrity check and returns VIRTIO_NET_ERR instead of marking
> `mac_table.x_overflow` and returning VIRTIO_NET_OK, since the copied
> CVQ command in the buffer is incomplete and flawed.
>
> This patch solves this problem by increasing the buffer size to
> vhost_vdpa_net_cvq_cmd_page_len(), which represents the size of the buffer
> that is actually allocated and mmapped. Therefore, everything should work
> correctly as long as the guest sets fewer than
> `(vhost_vdpa_net_cvq_cmd_page_len() - sizeof(struct virtio_net_ctrl_hdr)
> - 2 * sizeof(struct virtio_net_ctrl_mac)) / ETH_ALEN` MAC addresses.
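(Assuming the usual 4 KiB host page and the standard virtio-net layouts,
where sizeof(struct virtio_net_ctrl_hdr) == 2, sizeof(struct
virtio_net_ctrl_mac) == 4 and ETH_ALEN == 6, that bound works out to
(4096 - 2 - 8) / 6 = 681 addresses, up from MAC_TABLE_ENTRIES == 64.)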
>
> Considering the highly unlikely scenario of the guest setting more than
> that number of MAC addresses for the filter table, this patch should
> work fine for the majority of cases. If there is ever a need for more
> entries, we can increase the value returned by
> vhost_vdpa_net_cvq_cmd_page_len() in the future, mapping more than one
> page for the command output buffer.
>
> Fixes: 7a7f87e94c ("vdpa: Move command buffers map to start of net device")
> Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
> ---
>  net/vhost-vdpa.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 5a72204899..ecfa8852b5 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -784,9 +784,18 @@ static int vhost_vdpa_net_handle_ctrl_avail(VhostShadowVirtqueue *svq,
>      };
>      ssize_t dev_written = -EINVAL;
>
> +    /*
> +     * This code truncates the VIRTIO_NET_CTRL_MAC_TABLE_SET CVQ command
> +     * and prevents QEMU from marking `mac_table.x_overflow` in the device
> +     * internal state in virtio_net_handle_mac() if the guest sets more than
> +     * `(vhost_vdpa_net_cvq_cmd_page_len() - sizeof(struct virtio_net_ctrl_hdr)
> +     * - 2 * sizeof(struct virtio_net_ctrl_mac)) / ETH_ALEN` MAC addresses for
> +     * filter table.
> +     * However, this situation is considered rare, so it is acceptable.

I think we can also fix this situation. If the command fits in one page, we
can still send the same page to the device and to virtio_net_handle_ctrl_iov().
If it does not fit in one page, we can clear all MAC filters with
VIRTIO_NET_CTRL_RX_ALLUNI and/or VIRTIO_NET_CTRL_RX_ALLMULTI.
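(A rough sketch of what that fallback command could look like; this is a
hypothetical illustration only, with the surrounding buffer / SVQ plumbing
in net/vhost-vdpa.c elided:)

    /*
     * If the MAC_TABLE_SET payload does not fit in the mapped page, do not
     * forward a truncated table.  Instead, ask the device to accept all
     * unicast (and/or multicast) frames, which the spec allows as a fallback.
     */
    struct virtio_net_ctrl_hdr hdr = {
        .class = VIRTIO_NET_CTRL_RX,
        .cmd = VIRTIO_NET_CTRL_RX_ALLUNI, /* likewise VIRTIO_NET_CTRL_RX_ALLMULTI */
    };
    uint8_t on = 1;
    /* Copy hdr and on into s->cvq_cmd_out_buffer, push them through the shadow
     * virtqueue like any other CVQ command, and check the virtio_net_ctrl_ack. */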

Thanks!

> +     */
>      out.iov_len = iov_to_buf(elem->out_sg, elem->out_num, 0,
>                               s->cvq_cmd_out_buffer,
> -                             vhost_vdpa_net_cvq_cmd_len());
> +                             vhost_vdpa_net_cvq_cmd_page_len());
>      if (*(uint8_t *)s->cvq_cmd_out_buffer == VIRTIO_NET_CTRL_ANNOUNCE) {
>          /*
>           * Guest announce capability is emulated by qemu, so don't forward to
> --
> 2.25.1
>



