qemu-devel

Re: [PATCH 0/4] target/ppc: TCG SMT support for spapr


From: Cédric Le Goater
Subject: Re: [PATCH 0/4] target/ppc: TCG SMT support for spapr
Date: Tue, 20 Jun 2023 12:27:52 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.12.0

On 6/20/23 12:12, Nicholas Piggin wrote:
> On Wed Jun 7, 2023 at 12:09 AM AEST, Cédric Le Goater wrote:
>> On 6/5/23 13:23, Nicholas Piggin wrote:
>>> Previous RFC here:
>>>
>>> https://lists.gnu.org/archive/html/qemu-ppc/2023-05/msg00453.html
>>>
>>> This series drops patch 1 from the previous posting, which is more
>>> of a standalone bugfix.
>>>
>>> I also accounted for Cédric's comments, except for a nicer way to
>>> set cpu_index vs the PIR/TIR SPRs, which is not quite trivial.
>>>
>>> This limits SMT support to POWER8 and newer. It is also
>>> incompatible with nested-HV, so that is checked for too.
>>>
>>> I kept the iteration over CPUs to find siblings for now, because
>>> similar loops exist in a few places and it is not conceptually
>>> difficult for SMT, just fiddly code to improve. For now it should
>>> not be much of a performance concern.
>>>
>>> I removed hypervisor msgsnd support from patch 3, which is not
>>> required for spapr and added significantly to the patch.
>>>
>>> For now nobody has objected to the way shared SPR access is
>>> handled (serialised with TCG atomics support), so we'll keep
>>> going with it.

>> Cc:ing more people for possible feedback.

> Not much feedback, so I'll plan to go with this.

> A more performant implementation might try to synchronize
> threads at the register level rather than serialize everything,
> but SMT shared registers are not very performance-critical, so
> this should do for now.

Yes. Could you please rebase this series on upstream?

It would be good to add tests for SMT. Maybe we could extend:

  tests/avocado/ppc_pseries.py

with a couple of extra QEMU configs adding 'threads=' (if possible) and
check for:

  "CPU maps initialized for Y threads per core"

and

  "smp: Brought up 1 node, X*Y CPUs"

?
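
A sketch of what such a test could look like, assuming the kernel-boot
logic already in tests/avocado/ppc_pseries.py is reachable through a
helper (do_test_ppc64_linux_boot() is a hypothetical name for it here);
with threads=4 on a single core, Y=4 and X*Y=4:

  from avocado_qemu import QemuSystemTest, wait_for_console_pattern

  class pseriesSMT(QemuSystemTest):

      timeout = 90

      def test_ppc64_linux_smt_boot(self):
          """
          :avocado: tags=arch:ppc64
          :avocado: tags=machine:pseries
          """
          # One core with four threads (TCG SMT needs POWER8 or newer).
          self.vm.add_args('-smp', '4,threads=4')
          # Hypothetical helper: fetch and boot a Linux kernel the way
          # the existing tests in this file do.
          self.do_test_ppc64_linux_boot()
          wait_for_console_pattern(self,
              'CPU maps initialized for 4 threads per core')
          wait_for_console_pattern(self,
              'smp: Brought up 1 node, 4 CPUs')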

Thanks,

C.


