From: Greg Kurz
Subject: Re: [PATCH for-6.0 v2 2/3] spapr/xive: Fix size of END table and number of claimed IPIs
Date: Fri, 4 Dec 2020 10:11:55 +0100

On Fri, 4 Dec 2020 09:46:31 +0100
Cédric Le Goater <clg@kaod.org> wrote:

> >> I don't think we need much more than patch 1 which clarifies the 
> >> nature of the values being manipulated, quantities vs. numbering.
> >>
> >> The last 2 patches add complexity to try to optimize the 
> >> XIVE VP space in a scenario that is not very common (vSMT). 
> >> Maybe it's not worth it. 
> >>
> > 
> > Well, the motivation isn't about optimization really since
> > a non-default vSMT setting already wastes VP space because
> > of the vCPU spacing. 
> 
> I don't see any VPs being wasted when not using vSMT. What's
> your command line?
> 

I think there's some confusion here. vSMT is always in use:
when you don't specify it on the command line, the machine
code sets it internally to the guest's number of threads
per core. Thanks to that, you get consecutive vCPU ids and
no VP waste. Of course, you get the same result if you do:

-M pseries,vsmt=N -smp threads=N

If you pass different values to vsmt and threads, though,
you get the spacing and the VP waste.
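
To illustrate, here's a quick standalone sketch (not the actual
QEMU code; it just assumes the internal id assignment is roughly
vcpu_id = (cpu_index / threads) * vsmt + (cpu_index % threads)):

#include <stdio.h>

int main(void)
{
    int threads = 4;   /* -smp threads=4 */
    int vsmt = 8;      /* -M pseries,vsmt=8 */
    int cores = 2;

    for (int i = 0; i < cores * threads; i++) {
        /* hypothetical mirror of the internal id assignment */
        int vcpu_id = (i / threads) * vsmt + (i % threads);
        printf("cpu_index %d -> vcpu_id %d\n", i, vcpu_id);
    }

    /*
     * With vsmt == threads the ids come out consecutive (0..7).
     * With vsmt=8 and threads=4 you get 0-3 and 8-11: the gaps
     * are the spacing, and each unused id is a wasted VP.
     */
    return 0;
}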

> > This is more about not using values
> > with wrong semantics in the code to avoid confusion in
> > future changes.
> 
> yes.
> 
> > I agree though that the extra complexity, especially the
> > compat cruft, might be excessive. 
> 
> It's nice and correct but it seems a bit like extra noise 
> if the default case is not wasting VPs. Let's check that 
> first. 
> 
> > So maybe I should just
> > add comments in the code to clarify when we're using the
> > wrong semantics?
> 
> yes. I think this is enough.
> 

I'll do this in v3 then.

> Thanks,
> 
> C.



