Re: [PATCH v3 2/3] Add dirty-sync-missed-zero-copy migration stat


From: Peter Xu
Subject: Re: [PATCH v3 2/3] Add dirty-sync-missed-zero-copy migration stat
Date: Thu, 7 Jul 2022 13:54:19 -0400

On Mon, Jul 04, 2022 at 05:23:14PM -0300, Leonardo Bras wrote:
> Signed-off-by: Leonardo Bras <leobras@redhat.com>
> ---
>  qapi/migration.json   | 7 ++++++-
>  migration/migration.c | 2 ++
>  monitor/hmp-cmds.c    | 4 ++++
>  3 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 7102e474a6..fed08b9b88 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -55,6 +55,10 @@
>  # @postcopy-bytes: The number of bytes sent during the post-copy phase
>  #                  (since 7.0).
>  #
> +# @dirty-sync-missed-zero-copy: Number of times dirty RAM synchronization could
> +#                               not avoid copying zero pages.  This is between 0

Avoid copying zero pages?  Isn't this for counting MSG_ZEROCOPY fallbacks?
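
For context, a minimal sketch (mine, not QEMU code) of how such a fallback
surfaces to userspace on Linux: MSG_ZEROCOPY completion notifications arrive
on the socket error queue, and the kernel sets SO_EE_CODE_ZEROCOPY_COPIED
when it had to copy the data after all.  The function name and counter below
are made up for illustration.

#include <linux/errqueue.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

static uint64_t zero_copy_copied;

/* Drain pending MSG_ZEROCOPY completion notifications from the socket
 * error queue, counting the sends that fell back to copying. */
static void drain_zerocopy_completions(int fd)
{
    char control[CMSG_SPACE(sizeof(struct sock_extended_err))];
    struct msghdr msg;

    for (;;) {
        struct cmsghdr *cm;

        memset(&msg, 0, sizeof(msg));
        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);

        if (recvmsg(fd, &msg, MSG_ERRQUEUE | MSG_DONTWAIT) < 0) {
            break;  /* EAGAIN: no more notifications pending */
        }

        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
            struct sock_extended_err serr;

            memcpy(&serr, CMSG_DATA(cm), sizeof(serr));
            if (serr.ee_origin == SO_EE_ORIGIN_ZEROCOPY &&
                serr.ee_code == SO_EE_CODE_ZEROCOPY_COPIED) {
                /* This send was copied rather than zero-copied. */
                zero_copy_copied++;
            }
        }
    }
}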

> +#                               and @dirty-sync-count * @multifd-channels.

I'd not name it "dirty-sync-*", because fundamentally the accounting is not
done that way (more on this in the later patch).  I also think we should
squash patch 2/3 into patch 3/3, since only patch 3 starts providing
meaningful values.

> +#                               (since 7.1)
>  # Since: 0.14
>  ##
>  { 'struct': 'MigrationStats',
> @@ -65,7 +69,8 @@
>             'postcopy-requests' : 'int', 'page-size' : 'int',
>             'multifd-bytes' : 'uint64', 'pages-per-second' : 'uint64',
>             'precopy-bytes' : 'uint64', 'downtime-bytes' : 'uint64',
> -           'postcopy-bytes' : 'uint64' } }
> +           'postcopy-bytes' : 'uint64',
> +           'dirty-sync-missed-zero-copy' : 'uint64' } }
>  
>  ##
>  # @XBZRLECacheStats:
> diff --git a/migration/migration.c b/migration/migration.c
> index 78f5057373..048f7f8bdb 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1027,6 +1027,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
>      info->ram->normal_bytes = ram_counters.normal * page_size;
>      info->ram->mbps = s->mbps;
>      info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
> +    info->ram->dirty_sync_missed_zero_copy =
> +            ram_counters.dirty_sync_missed_zero_copy;
>      info->ram->postcopy_requests = ram_counters.postcopy_requests;
>      info->ram->page_size = page_size;
>      info->ram->multifd_bytes = ram_counters.multifd_bytes;
> diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> index ca98df0495..5f3be9e405 100644
> --- a/monitor/hmp-cmds.c
> +++ b/monitor/hmp-cmds.c
> @@ -307,6 +307,10 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
>              monitor_printf(mon, "postcopy ram: %" PRIu64 " kbytes\n",
>                             info->ram->postcopy_bytes >> 10);
>          }
> +        if (info->ram->dirty_sync_missed_zero_copy) {
> +            monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
> +                           info->ram->dirty_sync_missed_zero_copy);

I suggest we don't call it "iterations", because this is not the generic
meaning of iterations.
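
To illustrate: with the hunk above, "info migrate" would print something like
(the value here is made up):

    missed zero-copy on: 3 iterations

which reads as if three migration iterations missed zero-copy, while per the
QAPI documentation the counter can grow by up to @multifd-channels per dirty
sync.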

> +        }
>      }
>  
>      if (info->has_disk) {
> -- 
> 2.36.1
> 

-- 
Peter Xu



