Re: [PATCH 2/3] vdpa: load vlan configuration at NIC startup


From: Jason Wang
Subject: Re: [PATCH 2/3] vdpa: load vlan configuration at NIC startup
Date: Thu, 15 Sep 2022 10:40:44 +0800

On Wed, Sep 14, 2022 at 7:33 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
>
>
> On 9/14/2022 3:20 AM, Jason Wang wrote:
> > On Fri, Sep 9, 2022 at 4:02 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >> On Fri, Sep 9, 2022 at 8:40 AM Jason Wang <jasowang@redhat.com> wrote:
> >>> On Fri, Sep 9, 2022 at 2:38 PM Jason Wang <jasowang@redhat.com> wrote:
> >>>> On Wed, Sep 7, 2022 at 12:36 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >>>>> VLANs may already be enabled at device startup, e.g. at the destination
> >>>>> of a live migration, so this configuration must be restored.
> >>>>>
> >>>>> At this moment the code is not reachable, since SVQ refuses to start if
> >>>>> the vlan feature is exposed by the device.
> >>>>>
> >>>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> >>>>> ---
> >>>>>   net/vhost-vdpa.c | 46 ++++++++++++++++++++++++++++++++++++++++++++--
> >>>>>   1 file changed, 44 insertions(+), 2 deletions(-)
> >>>>>
> >>>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> >>>>> index 4bc3fd01a8..ecbfd08eb9 100644
> >>>>> --- a/net/vhost-vdpa.c
> >>>>> +++ b/net/vhost-vdpa.c
> >>>>> @@ -100,6 +100,8 @@ static const uint64_t vdpa_svq_device_features =
> >>>>>       BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
> >>>>>       BIT_ULL(VIRTIO_NET_F_STANDBY);
> >>>>>
> >>>>> +#define MAX_VLAN    (1 << 12)   /* Per 802.1Q definition */
> >>>>> +
> >>>>>   VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
> >>>>>   {
> >>>>>       VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> >>>>> @@ -423,6 +425,47 @@ static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
> >>>>>       return *s->status != VIRTIO_NET_OK;
> >>>>>   }
> >>>>>
> >>>>> +static int vhost_vdpa_net_load_single_vlan(VhostVDPAState *s,
> >>>>> +                                           const VirtIONet *n,
> >>>>> +                                           uint16_t vid)
> >>>>> +{
> >>>>> +    ssize_t dev_written = vhost_vdpa_net_load_cmd(s, VIRTIO_NET_CTRL_VLAN,
> >>>>> +                                                  VIRTIO_NET_CTRL_VLAN_ADD,
> >>>>> +                                                  &vid, sizeof(vid));
> >>>>> +    if (unlikely(dev_written < 0)) {
> >>>>> +        return dev_written;
> >>>>> +    }
> >>>>> +
> >>>>> +    if (unlikely(*s->status != VIRTIO_NET_OK)) {
> >>>>> +        return -EINVAL;
> >>>>> +    }
> >>>>> +
> >>>>> +    return 0;
> >>>>> +}
> >>>>> +
> >>>>> +static int vhost_vdpa_net_load_vlan(VhostVDPAState *s,
> >>>>> +                                    const VirtIONet *n)
> >>>>> +{
> >>>>> +    uint64_t features = n->parent_obj.guest_features;
> >>>>> +
> >>>>> +    if (!(features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN))) {
> >>>>> +        return 0;
> >>>>> +    }
> >>>>> +
> >>>>> +    for (int i = 0; i < MAX_VLAN >> 5; i++) {
> >>>>> +        for (int j = 0; n->vlans[i] && j <= 0x1f; j++) {
> >>>>> +            if (n->vlans[i] & (1U << j)) {
> >>>>> +                int r = vhost_vdpa_net_load_single_vlan(s, n, (i << 5) + j);
> >>>> This seems to cause a lot of latency if the driver has a lot of vlans.
> >>>>
> >>>> I wonder if it's simpler to let all vlan traffic pass by disabling
> >>>> the CTRL_VLAN feature at the vDPA layer.
> >> The guest will not be able to recover that vlan configuration.
> > The spec says it's best effort, and we actually haven't done this for
> > vhost-net/user for years yet have managed to survive.
> I thought there's a border line between best effort and doing nothing. ;-)
>
> I think the reason this could survive is really software-implementation
> specific - say, if the backend doesn't start with VLAN filtering
> disabled (meaning all VLAN traffic is allowed to pass), then it would
> become a problem. This won't be a problem for most PF NICs, but it may
> be problematic for vDPA, e.g. VF-backed vDPAs.

So it looks like an implementation issue. If CTRL_VLAN is not
negotiated, the device should disable vlan filters.


> >
> >>> Another idea is to extend the spec to allow us to accept a bitmap of
> >>> the vlan ids via a single command, then we will be fine.
> >>>
> >> I'm not sure if adding more ways to configure something is the answer,
> >> but I'd be ok to implement it.
> >>
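For illustration, such a bitmap command could look like the sketch
below. This is a hypothetical spec extension, not anything defined in
virtio or implemented in net/vhost-vdpa.c; the struct name and layout
are invented, and only MAX_VLAN matches the patch above.

    #include <stdint.h>

    #define MAX_VLAN (1 << 12)

    /* Hypothetical: one command carrying the entire 802.1Q filter
     * table, replacing up to 4096 VIRTIO_NET_CTRL_VLAN_ADD round
     * trips with a single CVQ command. */
    struct virtio_net_ctrl_vlan_bitmap {
        uint32_t vids[MAX_VLAN >> 5]; /* 4096 bits = 128 words */
    };
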
> >> Another idea is to allow sending many CVQ commands in parallel. It
> >> shouldn't be very hard to achieve using the exposed buffers as ring
> >> buffers, and it would shorten the startup of devices with many
> >> features.
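
A rough sketch of that batching idea follows. The helpers queue_cmd(),
kick_cvq() and wait_all_acks() are placeholders, not the current
net/vhost-vdpa.c API; the point is only that every command is queued
into the exposed buffers before a single kick, so startup pays one
device round trip instead of one per vid.

    /* Sketch only; queue_cmd/kick_cvq/wait_all_acks are hypothetical
     * helpers, assumed to copy each payload into its own CVQ buffer. */
    static int vhost_vdpa_net_load_vlan_batched(VhostVDPAState *s,
                                                const VirtIONet *n)
    {
        for (uint16_t vid = 0; vid < MAX_VLAN; vid++) {
            if (n->vlans[vid >> 5] & (1U << (vid & 0x1f))) {
                /* enqueue into the exposed CVQ buffers; no kick yet */
                int r = queue_cmd(s, VIRTIO_NET_CTRL_VLAN,
                                  VIRTIO_NET_CTRL_VLAN_ADD,
                                  &vid, sizeof(vid));
                if (unlikely(r < 0)) {
                    return r;
                }
            }
        }
        kick_cvq(s);             /* one notification for all commands */
        return wait_all_acks(s); /* error if any status != VIRTIO_NET_OK */
    }
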
> > In the extreme case, what if a guest decides to filter all of the
> > vlans? That would mean we need MAX_VLAN (4096) commands, which may
> > exceed the size of the control virtqueue.
> Indeed, that's a case where a single flat device state blob would be
> more efficient for hardware drivers to apply than individual control
> command messages.

Right, so we can optimize the spec for this.

Thanks

>
> -Siwei
> >
> > Thanks
> >
> >> Thanks!
> >>
> >>> Thanks
> >>>
> >>>> Thanks
> >>>>
> >>>>> +                if (unlikely(r != 0)) {
> >>>>> +                    return r;
> >>>>> +                }
> >>>>> +            }
> >>>>> +        }
> >>>>> +    }
> >>>>> +
> >>>>> +    return 0;
> >>>>> +}
> >>>>> +
> >>>>>   static int vhost_vdpa_net_load(NetClientState *nc)
> >>>>>   {
> >>>>>       VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
> >>>>> @@ -445,8 +488,7 @@ static int vhost_vdpa_net_load(NetClientState *nc)
> >>>>>       if (unlikely(r)) {
> >>>>>           return r;
> >>>>>       }
> >>>>> -
> >>>>> -    return 0;
> >>>>> +    return vhost_vdpa_net_load_vlan(s, n);
> >>>>>   }
> >>>>>
> >>>>>   static NetClientInfo net_vhost_vdpa_cvq_info = {
> >>>>> --
> >>>>> 2.31.1
> >>>>>
>
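
For reference, the (i << 5) + j arithmetic in the loop above
reconstructs a vlan id from n->vlans, which virtio-net keeps as a
MAX_VLAN-bit filter table stored in 32-bit words (word index vid >> 5,
bit index vid & 0x1f). A minimal standalone illustration:

    #include <assert.h>
    #include <stdint.h>

    #define MAX_VLAN (1 << 12) /* Per 802.1Q definition */

    /* Mirror of the guest table lookup: vid 100 lives in word 3
     * (100 >> 5), bit 4 (100 & 0x1f); the load loop inverts this
     * with vid = (i << 5) + j. */
    static int vlan_is_set(const uint32_t *vlans, uint16_t vid)
    {
        return !!(vlans[vid >> 5] & (1U << (vid & 0x1f)));
    }

    int main(void)
    {
        uint32_t vlans[MAX_VLAN >> 5] = { 0 };

        vlans[100 >> 5] |= 1U << (100 & 0x1f);
        assert(vlan_is_set(vlans, 100));
        assert(!vlan_is_set(vlans, 101));
        return 0;
    }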



