Re: [PULL 3/5] softmmu/cpus: Only set parallel_cpus for SMP


From: Claudio Fontana
Subject: Re: [PULL 3/5] softmmu/cpus: Only set parallel_cpus for SMP
Date: Mon, 7 Sep 2020 12:05:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.11.0

Hi Richard,

currently rebasing on top of this one;
just a question: why is the patch not directly using "current_machine"?

Is using MACHINE(qdev_get_machine()) preferable here?
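
For reference, a minimal sketch (not part of the patch) of the two lookups being
compared; header locations are assumed from the QEMU tree around this time:

#include "qemu/osdep.h"
#include "hw/boards.h"      /* MachineState, MACHINE(), current_machine */
#include "hw/qdev-core.h"   /* qdev_get_machine() */

/* QOM route: look up the "/machine" container object and cast it. */
static MachineState *machine_via_qom(void)
{
    return MACHINE(qdev_get_machine());
}

/* Global route: the pointer set up by vl.c when the machine is created. */
static MachineState *machine_via_global(void)
{
    return current_machine;
}
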

Thanks,

Claudio

On 9/3/20 11:40 PM, Richard Henderson wrote:
> Do not set parallel_cpus if there is only one cpu instantiated.
> This will allow tcg to use serial code to implement atomics.
> 
> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  softmmu/cpus.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/softmmu/cpus.c b/softmmu/cpus.c
> index a802e899ab..e3b98065c9 100644
> --- a/softmmu/cpus.c
> +++ b/softmmu/cpus.c
> @@ -1895,6 +1895,16 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
>      if (!tcg_region_inited) {
>          tcg_region_inited = 1;
>          tcg_region_init();
> +        /*
> +         * If MTTCG, and we will create multiple cpus,
> +         * then we will have cpus running in parallel.
> +         */
> +        if (qemu_tcg_mttcg_enabled()) {
> +            MachineState *ms = MACHINE(qdev_get_machine());

MachineState *ms = current_machine;
?


> +            if (ms->smp.max_cpus > 1) {
> +                parallel_cpus = true;
> +            }
> +        }
>      }
>  
>      if (qemu_tcg_mttcg_enabled() || !single_tcg_cpu_thread) {
> @@ -1904,7 +1914,6 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
>  
>          if (qemu_tcg_mttcg_enabled()) {
>              /* create a thread per vCPU with TCG (MTTCG) */
> -            parallel_cpus = true;
>              snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
>                   cpu->cpu_index);
>  
> 



