From: Greg Kurz
Subject: Re: [PATCH 5/6] spapr_numa: consider user input when defining associativity
Date: Thu, 24 Sep 2020 12:22:51 +0200

On Wed, 23 Sep 2020 16:34:57 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> This patch puts all the pieces together to finally allow user
> input when defining the NUMA topology of the spapr guest.
> 
> We have one more kernel restriction to handle in this patch:
> the associativity array of node 0 must be filled with zeroes
> [1]. The strategy below ensures that this will happen.
> 
> spapr_numa_define_associativity_domains() will read the distance
> (already PAPRified) between the nodes from numa_state and determine
> the appropriate NUMA level. The NUMA domains, processed in ascending
> order, are matched via NUMA levels, and the lowest associativity
> domain value is assigned to that specific level for both nodes.
> 
> This will create a heuristic in which the associativities of the first
> nodes have higher priority and are re-used in new matches, instead of
> being overwritten by a new associativity match. This is necessary
> because neither QEMU nor the pSeries kernel supports multiple
> associativity domains for each resource, meaning that we have to
> decide which associativity relation is relevant.
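
Just to check that I'm following the heuristic correctly, here is a toy,
self-contained version of what I understood (made-up 3-node distance
matrix, no "size" slot, so not the actual QEMU data layout):

    #include <stdio.h>

    #define NODES  3
    #define LEVELS 4   /* stands in for MAX_DISTANCE_REF_POINTS */

    static int level(int dist)
    {
        /* same mapping as spapr_numa_get_NUMA_level() below */
        return dist == 20 ? 3 : dist == 40 ? 2 : dist == 80 ? 1 : 0;
    }

    int main(void)
    {
        int dist[NODES][NODES] = {
            { 10, 20, 40 },
            { 20, 10, 80 },
            { 40, 80, 10 },
        };
        unsigned assoc[NODES][LEVELS];
        int src, dst, i, j;

        /* every domain starts out as the node's own id */
        for (i = 0; i < NODES; i++) {
            for (j = 0; j < LEVELS; j++) {
                assoc[i][j] = i;
            }
        }

        /* lowest id wins, so node 0's domains cascade to its matches */
        for (src = 0; src < NODES; src++) {
            for (dst = src; dst < NODES; dst++) {
                int lvl = level(dist[src][dst]);
                unsigned min = assoc[src][lvl] < assoc[dst][lvl] ?
                               assoc[src][lvl] : assoc[dst][lvl];
                assoc[src][lvl] = assoc[dst][lvl] = min;
            }
        }

        for (i = 0; i < NODES; i++) {
            printf("node %d: %u %u %u %u\n", i, assoc[i][0],
                   assoc[i][1], assoc[i][2], assoc[i][3]);
        }
        return 0;
    }

With these made-up distances, node 0's row stays all zeroes, which matches
the kernel restriction mentioned above.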
> 
> Ultimately, all of this results in a best-effort approximation of the
> actual NUMA distances the user provided on the command line. Given how
> PAPR itself interprets NUMA distances versus the expectations raised by
> how ACPI SLIT works, there might be better algorithms, but in the end
> they would also just be another way of approximating what the user
> really wanted.
> 
> To avoid making this commit message any longer than it already is, the
> next patch will update the existing documentation in ppc-spapr-numa.rst
> with more in-depth details and design considerations/drawbacks.
> 
> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 81 ++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 80 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 688391278e..c84f77cda7 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -80,12 +80,79 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>      }
>  }
>  
> +static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)

The funky naming doesn't improve clarity IMHO. I'd rather make
it lowercase only.
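
i.e. something like:

    static uint8_t spapr_numa_get_numa_level(uint8_t distance)

(the name is just an example, any lowercase variant works for me)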

> +{
> +    uint8_t numa_level;
> +
> +    switch (distance) {
> +    case 20:
> +        numa_level = 0x3;
> +        break;
> +    case 40:
> +        numa_level = 0x2;
> +        break;
> +    case 80:
> +        numa_level = 0x1;
> +        break;
> +    default:
> +        numa_level = 0;

Hmm... same level for distances 10 and 160? Is this correct?
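
For reference, this is what I get from the switch for the distances the
PAPRification step should be producing (assuming only 10/20/40/80/160 can
reach this point):

    distance : 10  20  40  80  160
    level    :  0   3   2   1    0

so the local distance and the farthest one end up in the same bucket.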

> +    }
> +
> +    return numa_level;
> +}
> +
> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr,
> +                                                    MachineState *ms)

Passing ms seems to indicate that it could have a different value than
spapr, which is certainly not true.

I'd rather make it a local variable:

    MachineState *ms = MACHINE(spapr);

This is a slow path: we don't really care if the dynamic type checking
is done multiple times.
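
i.e. something along these lines (untested sketch):

    static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
    {
        MachineState *ms = MACHINE(spapr);
        int nb_numa_nodes = ms->numa_state->num_nodes;
        NodeInfo *numa_info = ms->numa_state->nodes;
        ...
    }

and the caller then simply does:

    spapr_numa_define_associativity_domains(spapr);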

> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * This is how the associativity domain between A and B
> +             * is calculated:
> +             *
> +             * - get the distance between them
> +             * - get the correspondent NUMA level for this distance
> +             * - the arrays were initialized with their own numa_ids,
> +             * and we're calculating the distance in node_id ascending order,
> +             * starting from node 0. This will have a cascade effect in the
> +             * algorithm because the associativity domains that node 0 defines
> +             * will be carried over to the other nodes, and node 1
> +             * associativities will be carried over unless there's already a
> +             * node 0 associativity assigned, and so on. This happens because
> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
> +             * the associativity domain of both, for the given NUMA level.
> +             *
> +             * The PPC kernel expects the associativity domains of node 0 to
> +             * be always 0, and this algorithm will grant that by default.
> +             */
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t n_level = spapr_numa_get_NUMA_level(distance);
> +            uint32_t assoc_src, assoc_dst;
> +
> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
> +
> +            if (assoc_src < assoc_dst) {
> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
> +            } else {
> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
> +            }
> +        }
> +    }
> +
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      int nb_numa_nodes = machine->numa_state->num_nodes;
>      int i, j, max_nodes_with_gpus;
> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>  
>      /*
>       * For all associativity arrays: first position is the size,
> @@ -99,6 +166,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      for (i = 0; i < nb_numa_nodes; i++) {
>          spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +
> +        /*
> +         * Fill all associativity domains of the node with node_id.
> +         * This is required because the kernel makes valid associativity

It would be appreciated to have a URL to the corresponding code in the
changelog.

> +         * matches with the zeroes if we leave the matrix uninitialized.
> +         */
> +        if (!using_legacy_numa) {
> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> +            }
> +        }
>      }
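
Just to spell out the layout this gives for node i in the non-legacy case,
as I read it (all values stored with cpu_to_be32()):

    numa_assoc_array[i][0]                            = MAX_DISTANCE_REF_POINTS  /* size */
    numa_assoc_array[i][1..MAX_DISTANCE_REF_POINTS-1] = i  /* filled by the new loop */
    numa_assoc_array[i][MAX_DISTANCE_REF_POINTS]      = i  /* the numa_id itself */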
>  
>      /*
> @@ -128,7 +206,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>       * 1 NUMA node) will not benefit from anything we're going to do
>       * after this point.
>       */
> -    if (spapr_machine_using_legacy_numa(spapr)) {
> +    if (using_legacy_numa) {
>          return;
>      }
>  
> @@ -139,6 +217,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      }
>  
>      spapr_numa_PAPRify_distances(machine);
> +    spapr_numa_define_associativity_domains(spapr, machine);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,



