
Re: [Qemu-devel] [PATCH v4 3/5] spapr: Implement H_CONFER


From: David Gibson
Subject: Re: [Qemu-devel] [PATCH v4 3/5] spapr: Implement H_CONFER
Date: Wed, 17 Jul 2019 11:51:53 +1000
User-agent: Mutt/1.12.0 (2019-05-25)

On Tue, Jul 16, 2019 at 08:25:28PM +1000, Nicholas Piggin wrote:
> David Gibson's on July 16, 2019 6:25 pm:
> > On Tue, Jul 16, 2019 at 12:47:24PM +1000, Nicholas Piggin wrote:
> >> This does not do directed yielding and is not quite as strict as PAPR
> >> specifies in terms of precise dispatch behaviour. This generally will
> >> mean suboptimal performance, rather than guest misbehaviour. Linux
> >> does not rely on exact dispatch behaviour.
> >> 
> >> Signed-off-by: Nicholas Piggin <address@hidden>
> >> ---
> >>  hw/ppc/spapr_hcall.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 48 insertions(+)
> >> 
> >> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> >> index 8b208ab259..28d58113be 100644
> >> --- a/hw/ppc/spapr_hcall.c
> >> +++ b/hw/ppc/spapr_hcall.c
> >> @@ -1069,6 +1069,53 @@ static target_ulong h_cede(PowerPCCPU *cpu, SpaprMachineState *spapr,
> >>      return H_SUCCESS;
> >>  }
> >>  
> >> +static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
> >> +                           target_ulong opcode, target_ulong *args)
> >> +{
> >> +    target_long target = args[0];
> >> +    uint32_t dispatch = args[1];
> >> +    PowerPCCPU *target_cpu = spapr_find_cpu(target);
> >> +    CPUState *target_cs = CPU(target_cpu);
> >> +    CPUState *cs = CPU(cpu);
> >> +    SpaprCpuState *spapr_cpu;
> >> +
> >> +    /*
> >> +     * This does not do a targeted yield or confer, but checks the
> >> +     * parameter anyway. -1 means confer to all/any other CPUs.
> >> +     */
> >> +    if (target != -1 && !target_cs) {
> >> +        return H_PARAMETER;
> >> +    }
> > 
> > Should we return an error if a targeted yield is attempted, rather
> > than pretend we've done it?
> 
> I don't think so, because we do _some_ kind of yield for the directed
> case which is probably better than nothing, and Linux won't fall back.
> 
> PAPR is much more strict about dispatching. The H_CONFERing vCPU must
> not run until the target(s) have been dispatched (if runnable), for
> example. So we don't really implement it to the letter, we just do
> "some kind of yield, whatever generic tcg code has implemented".
> 
> For single threaded tcg it seems a significant complication to the
> round robin algorithm to add a directed yield, yet simply yielding
> to the next vCPU is a good idea here because useful work will get
> done, including by the lock holder, before we run again.
> 
> If multi threaded tcg performance with lots of vCPUs and lock contention
> starts becoming more important I guess directed yielding might be
> something to look at.

Ok, makes sense to me.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson

