Re: [Qemu-devel] [PATCH v3 0/4] Introduce the microvm machine type


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v3 0/4] Introduce the microvm machine type
Date: Thu, 25 Jul 2019 13:01:29 +0100

On Thu, Jul 25, 2019 at 12:23 PM Paolo Bonzini <address@hidden> wrote:
> On 25/07/19 12:42, Sergio Lopez wrote:
> > Peter Maydell <address@hidden> writes:
> >> On Thu, 25 Jul 2019 at 10:59, Michael S. Tsirkin <address@hidden> wrote:
> >>> OK so please start with adding virtio 1 support. Guest bits
> >>> have been ready for years now.
> >>
> >> I'd still rather we just used pci virtio. If pci isn't
> >> fast enough at startup, do something to make it faster...
> >
> > Actually, removing PCI (and ACPI) is one of the main ways microvm
> > reduces not only boot time, but also the exposed surface and the
> > general footprint.
> >
> > I think we need to discuss and settle whether using virtio-mmio (even if
> > maintained and upgraded to virtio 1) for a new machine type is
> > acceptable or not. Because if it isn't, we should probably just ditch
> > the whole microvm idea and move to something else.
>
> I agree.  IMNSHO the reduced attack surface from removing PCI is
> (mostly) security theater, however the boot time numbers that Sergio
> showed for microvm are quite extreme and I don't think there is any hope
> of getting even close with a PCI-based virtual machine.
>
> So I'd even go a step further: if using virtio-mmio for a new machine
> type is not acceptable, we should admit that boot time optimization in
> QEMU is basically as good as it can get---low-hanging fruit has been
> picked with PVH and mmap is the logical next step, but all that's left
> is optimizing the guest or something else.

I haven't seen enough analysis to declare boot time optimization done.
QEMU startup can be profiled and improved.
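For example, a rough profiling session might look like the following.
This is only a sketch: the machine type, kernel path, and command-line
options are illustrative placeholders, not configurations taken from
this thread.

```shell
# Record where host CPU time goes during QEMU startup and guest boot.
# (vmlinux path and -append line are hypothetical examples.)
perf record -o qemu-startup.data -- \
    qemu-system-x86_64 -machine microvm -display none \
        -kernel vmlinux -append 'console=ttyS0 reboot=t' \
        -no-reboot

# Show the hottest symbols; repeated hits in device realize/reset or
# firmware loading code would point at QEMU-side startup cost.
perf report -i qemu-startup.data --sort=symbol
```

QEMU's built-in tracing (`-trace`) can complement this by timestamping
individual device initialization events rather than sampling CPU time.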

The numbers show that removing PCI and ACPI makes things faster, but
that alone doesn't justify removing them.  Understanding why they are
slow is what justifies removing them.  Otherwise it could just be a
misconfiguration, an inefficient implementation, etc., and we've seen
there is low-hanging fruit.

How much time is spent doing PCI initialization?  Is the vmexit
pattern for PCI initialization as good as the hardware interface
allows?
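One way to answer that empirically is `perf kvm stat`, which counts
and classifies vmexits; a sketch (requires root and a KVM guest, and
the 5-second window is an arbitrary example chosen to cover boot):

```shell
# Attach to a running QEMU process and record KVM events for 5 seconds.
perf kvm stat record -p "$(pgrep -f qemu-system-x86_64)" -- sleep 5

# Summarize exits by reason; a PCI-heavy boot should show a large
# share of exits from config-space and BAR probing I/O accesses.
perf kvm stat report --event=vmexit
```

Comparing the exit counts of a PCI-based boot against a virtio-mmio
boot would show whether the difference is inherent to the hardware
interface or just to how the guest drives it.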

Without an analysis of why things are slow it's not possible to come
to an informed decision.

Stefan


