Re: [PATCH] mc146818rtc: add a way to generate RTC interrupts via QMP


From: Philippe Mathieu-Daudé
Subject: Re: [PATCH] mc146818rtc: add a way to generate RTC interrupts via QMP
Date: Mon, 29 Apr 2024 15:39:57 +0200
User-agent: Mozilla Thunderbird

(+Peter, who has more experience with such designs.)

On 29/4/24 13:32, Markus Armbruster wrote:
Philippe Mathieu-Daudé <philmd@linaro.org> writes:

Hi Daniil, Markus,

On 26/4/24 10:39, Markus Armbruster wrote:
Daniil Tatianin <d-tatianin@yandex-team.ru> writes:

This can be used to force-synchronize the time in guest after a long
stop-cont pause, which can be useful for serverless-type workload.

What is a "serverless-type workload"?

Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
   hw/rtc/mc146818rtc.c         | 15 +++++++++++++++
   include/hw/rtc/mc146818rtc.h |  1 +
   qapi/misc-target.json        | 16 ++++++++++++++++
   3 files changed, 32 insertions(+)

diff --git a/hw/rtc/mc146818rtc.c b/hw/rtc/mc146818rtc.c
index f4c1869232..6980a78d5f 100644
--- a/hw/rtc/mc146818rtc.c
+++ b/hw/rtc/mc146818rtc.c
@@ -116,6 +116,21 @@ void qmp_rtc_reset_reinjection(Error **errp)
     }
 }
 
+void qmp_rtc_notify(Error **errp)
+{
+    MC146818RtcState *s;
+
+    /*
+     * See:
+     * https://www.kernel.org/doc/Documentation/virtual/kvm/timekeeping.txt

What part of this document explains why this change is required?
I probably missed it. Explaining it here briefly would be more
useful.

+     */
+    QLIST_FOREACH(s, &rtc_devices, link) {
+        s->cmos_data[RTC_REG_B] |= REG_B_UIE;
                                      // Update-ended interrupt enable

+        s->cmos_data[RTC_REG_C] |= REG_C_IRQF | REG_C_UF;
                                      // interrupt request flag
                                      //           update interrupt flag

+        qemu_irq_raise(s->irq);
+    }
+}
+
Note for later: qmp_rtc_notify() works on all realized mc146818rtc
devices.  Other kinds of RTC devices are silently ignored.  Just like
qmp_rtc_reset_reinjection().
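
For context, the broadcast command as posted would presumably be driven over QMP roughly like this (the command name is inferred from qmp_rtc_notify(); the quoted hunk doesn't show the qapi/misc-target.json addition, so the exact wire name is an assumption):

```
-> { "execute": "rtc-notify" }
<- { "return": {} }
```

Note that with the broadcast semantics above, the empty return arrives whether or not any mc146818rtc device actually existed to receive the injection.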

IMO to avoid any future ambiguity (in heterogeneous machines), this
command must take a QOM device path (or a list of) and only notify
those.

Let's compare:

• With QOM path:

   · You need to know the machine's RTC device(s).

     Unfortunately, this is bothersome, as the QOM path is not stable.

But we'll need more of that with dynamic machines...

     For Q35, it's generally "/machine/unattached/device[N]/rtc", but N
     varies with configuration (TCG N=2, KVM N=3 for me), and it might
     vary with machine type version.  That's because the machine code
     creates ICH9-LPC without a proper name.  We do that a lot.  I hate
     it.

     Likewise for i440FX with PIIX3 instead of ICH9-LPC.

     For isapc, it's /machine/unattached/device[3].  I suspect the 3
     isn't reliable there, either.

     microvm doesn't seem to have an RTC by default.

   · If the device so named doesn't support IRQ inject, the command
     should fail.

Yes, why would the management app want to run this command if there
is no RTC on the machine?

   · Could be generalized to non-RTC devices when that's useful.
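
For illustration, a path-taking variant's schema could look something like the sketch below (the @device argument and its semantics are hypothetical, not taken from the patch; only the rtc-notify name is inferred from qmp_rtc_notify()):

```
##
# @rtc-notify:
#
# @device: QOM path of the RTC device to notify (hypothetical argument)
##
{ 'command': 'rtc-notify',
  'data': { 'device': 'str' } }
```

Such a command would presumably resolve the path (e.g. via object_resolve_path()) and fail with an error if the object doesn't exist or isn't an RTC that supports IRQ injection, addressing the "should fail" point above.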

• Broadcast:

   · You don't need to know the machine's RTC device(s).

   · If there are multiple RTC devices that support IRQ inject, we inject
     for each of them.  There is no way to select specific RTCs.

   · If there is no RTC device that supports IRQ inject, the command does
     nothing silently.

     I don't like silent failures.  It could be made to fail instead.

If it weren't for the unstable QOM path problem, I'd advise against
the broadcast interface.

Thoughts?

Something bugs me in this patch, but I couldn't figure out what I am
missing. The issue is when a migrated VM is restored. I don't get why
the behavior should depend on an external decision (via an external
management transport). Don't we have post_load() hooks for such tuning?
This device implements it in rtc_post_load().

Regards,

Phil.

PS: BTW, tomorrow's community call could be a good opportunity to discuss
this.




reply via email to

[Prev in Thread] Current Thread [Next in Thread]