
Re: [RFC PATCH] iothread: add set_iothread_poll_* commands


From: Zhenyu Ye
Subject: Re: [RFC PATCH] iothread: add set_iothread_poll_* commands
Date: Thu, 24 Oct 2019 22:34:04 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0


On 2019/10/24 21:56, Dr. David Alan Gilbert wrote:
> * Zhenyu Ye (address@hidden) wrote:
>>
>>
>> On 2019/10/23 23:19, Stefan Hajnoczi wrote:
>>> On Tue, Oct 22, 2019 at 04:12:03PM +0800, yezhenyu (A) wrote:
>>>> Since QEMU 2.9, QEMU has had three AioContext poll parameters in struct
>>>> IOThread: poll_max_ns, poll_grow and poll_shrink. These properties are
>>>> used to control the iothread polling time.
>>>>
>>>> However, there are no proper HMP commands to adjust them while the VM is
>>>> running. It is useful to adjust them online when observing the impact of
>>>> different property values on performance.
>>>>
>>>> This patch adds three HMP commands to adjust the iothread poll-*
>>>> properties of a specific iothread:
>>>>
>>>> set_iothread_poll_max_ns: set the maximum polling time in ns;
>>>> set_iothread_poll_grow: set how many ns will be added to the polling time;
>>>> set_iothread_poll_shrink: set how many ns will be removed from the
>>>> polling time.
>>>>
>>>> Signed-off-by: Zhenyu Ye <address@hidden>
>>>> ---
>>>> hmp-commands.hx | 42 ++++++++++++++++++++
>>>> hmp.c | 30 +++++++++++++++
>>>> hmp.h | 3 ++
>>>> include/sysemu/iothread.h | 6 +++
>>>> iothread.c | 80 +++++++++++++++++++++++++++++++++++++++
>>>> qapi/misc.json | 23 +++++++++++
>>>> 6 files changed, 184 insertions(+)
>>>
>>> poll-max-ns, poll-grow, poll-shrink are properties of IOThread objects.
>>> They can already be modified at runtime using:
>>>
>>>   $ qemu -object iothread,id=iothread1
>>>   (qemu) qom-set /objects/iothread1 poll-max-ns 100000
>>>
>>> I think there is no need for a patch.
>>>
>>> Stefan
>>>
>>
>> Thanks for your review. I have considered using the `qom-set` command to
>> modify the IOThread object's properties; however, this command is not
>> friendly to novice users. The help info for this command is only:
>>
>>     qom-set path property value -- set QOM property
>>
>> It is almost impossible for a novice user to work out the correct `path`
>> parameter from that alone.
> 
> Is this just a matter of documenting how to do it?
> 
> It sounds like there's no need for a new QMP command though;  if you
> want an easier HMP command I'd probably still take it (because HMP is ok
> at having things for convenience) - but not if it turns out that just
> adding a paragraph of documentation is enough.
> 
> Dave
> 

I will show the differences between HMP and QMP.
Suppose I want to set iothread1.poll-max-ns=1000 and iothread1.poll-grow=2:

Without this patch:
HMP commands:

    qom-set /objects/iothread1 poll-max-ns 1000
    qom-set /objects/iothread1 poll-grow 2

QMP commands:

    { "execute": "qom-set", "arguments": { "path": "/objects/iothread1",
                                           "property": "poll-max-ns", "value": 1000 } }
    { "execute": "qom-set", "arguments": { "path": "/objects/iothread1",
                                           "property": "poll-grow", "value": 2 } }

With this patch:
HMP commands:

    iothread_set_parameter iothread1 max-ns 1000
    iothread_set_parameter iothread1 grow 2

QMP command:

    { "execute": "set-iothread-poll-params", "arguments": { "iothread-id": "iothread1",
                                                             "max-ns": 1000, "grow": 2 } }


I think the main inconvenience of qom-set is finding the correct `path` parameter.
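
For reference, the path can also be discovered at runtime. Assuming the iothread
was created with `-object iothread,id=iothread1` (as in the example above), it
shows up under /objects, and the existing introspection commands already report
it together with its current poll-* values:

HMP:

    (qemu) qom-list /objects
    (qemu) info iothreads

QMP:

    { "execute": "query-iothreads" }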
Anyway, I will consider your advice.


>> This patch provides a more convenient and easy-to-use HMP & QMP interface
>> for modifying these IOThread properties. I think this patch still has some
>> value.
>>
>> I can also implement this patch compactly by reusing your code.
>>
>> Waiting for your reply.
>>
> --
> Dr. David Alan Gilbert / address@hidden / Manchester, UK



