qemu-devel
Re: [RFC PATCH 5/8] qtest/libqos: Restrict CPU I/O instructions


From: Philippe Mathieu-Daudé
Subject: Re: [RFC PATCH 5/8] qtest/libqos: Restrict CPU I/O instructions
Date: Wed, 17 Mar 2021 17:24:23 +0100

On 3/17/21 4:59 PM, Laszlo Ersek wrote:
> On 03/16/21 16:55, Philippe Mathieu-Daudé wrote:
>> Hi Richard and Laszlo,
>>
>> On 3/16/21 4:43 PM, Richard Henderson wrote:
>>> On 3/16/21 9:37 AM, Laszlo Ersek wrote:
>>>> (+Peter, comment below)
>>>>
>>>> On 03/15/21 00:29, Philippe Mathieu-Daudé wrote:
>>>>> Restrict CPU I/O instructions to architectures providing
>>>>> I/O bus.
>>>>>
>>>>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>>>> ---
>>>>>   tests/qtest/libqos/fw_cfg.h | 3 +++
>>>>>   tests/qtest/libqos/fw_cfg.c | 2 ++
>>>>>   2 files changed, 5 insertions(+)
>>>>>
>>>>> diff --git a/tests/qtest/libqos/fw_cfg.h b/tests/qtest/libqos/fw_cfg.h
>>>>> index c6a7cf8cf05..3bfb6d6d55b 100644
>>>>> --- a/tests/qtest/libqos/fw_cfg.h
>>>>> +++ b/tests/qtest/libqos/fw_cfg.h
>>>>> @@ -36,6 +36,8 @@ size_t qfw_cfg_get_file(QFWCFG *fw_cfg, const char *filename,
>>>>>  
>>>>>  QFWCFG *mm_fw_cfg_init(QTestState *qts, uint64_t base);
>>>>>  void mm_fw_cfg_uninit(QFWCFG *fw_cfg);
>>>>> +
>>>>> +#ifdef TARGET_HAS_IOPORT
>>>>>  QFWCFG *io_fw_cfg_init(QTestState *qts, uint16_t base);
>>>>>  void io_fw_cfg_uninit(QFWCFG *fw_cfg);
>>>>>  
>>>>> @@ -48,6 +50,7 @@ static inline void pc_fw_cfg_uninit(QFWCFG *fw_cfg)
>>>>>  {
>>>>>      io_fw_cfg_uninit(fw_cfg);
>>>>>  }
>>>>> +#endif /* TARGET_HAS_IOPORT */
>>>>>  
>>>>>  G_DEFINE_AUTOPTR_CLEANUP_FUNC(QFWCFG, mm_fw_cfg_uninit)
>>>>> 
>>>>> diff --git a/tests/qtest/libqos/fw_cfg.c b/tests/qtest/libqos/fw_cfg.c
>>>>> index 6b8e1babe51..db2b83f5212 100644
>>>>> --- a/tests/qtest/libqos/fw_cfg.c
>>>>> +++ b/tests/qtest/libqos/fw_cfg.c
>>>>> @@ -131,6 +131,7 @@ void mm_fw_cfg_uninit(QFWCFG *fw_cfg)
>>>>>      g_free(fw_cfg);
>>>>>  }
>>>>>  
>>>>> +#ifdef TARGET_HAS_IOPORT
>>>>>  static void io_fw_cfg_select(QFWCFG *fw_cfg, uint16_t key)
>>>>>  {
>>>>>      qtest_outw(fw_cfg->qts, fw_cfg->base, key);
>>>>> @@ -162,3 +163,4 @@ void io_fw_cfg_uninit(QFWCFG *fw_cfg)
>>>>>  {
>>>>>      g_free(fw_cfg);
>>>>>  }
>>>>> +#endif /* TARGET_HAS_IOPORT */
>>>>
>>>> I'm not sure the macro name is ideal; the PCI host on aarch64/"virt"
>>>> emulates IO Ports (it's possible to allocate PCI IO resources on
>>>> "virt"). From patch#3, TARGET_HAS_IOPORT does not seem to extend to
>>>> arm64.
>>>
>>> Correct, aarch64 has memory-mapped pci io resources, they are not on a
>>> separate ioport address space as for x86 and avr.
>>
>> I first wrote TARGET_CPU_HAS_IOPORT but realized architecture
>> and CPU are linked, so I elided _CPU_.
>>
>> What I'd like to clear from the QTest API is the idea that the CPU has
>> direct access to the I/O bus via I/O specific instructions.
>>
>> Any machine with a host <-> PCI bridge chipset can access the I/O
>> space through the PCI bus.
>>
>> The fact that on x86 the first PCI function is wired to the same I/O
>> bus as the CPU is a machine implementation detail.
>>
>> When accessing PCI I/O resources on Aarch64, you don't have to use
>> dedicated I/O instructions.
>>
>> Anyway, for now Thomas has discarded this series: QTest is a generic
>> API, and we have never had to worry about mixing address spaces so
>> far, so there is no hurry to clean this up (although it would be
>> useful to be able to switch address spaces from QTest, e.g. to
>> access DMA or the secure CPU view).
> 
> If this is about an "IO Bus" or "IO instructions", then we should call
> the macro TARGET_HAS_IO_BUS or "TARGET_ISA_HAS_IO" (or
> "TARGET_HAS_IO_INSNS"), or something like those. My only confusion was
> about the "IO Port" expression in the macro name; the idea is OK from my
> perspective otherwise.

TARGET_HAS_IO_BUS / TARGET_HAS_IO_INSNS LGTM
(ISA bus is not particularly relevant for the AVR target).
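For what it's worth, the renamed guard would read along these lines (a
sketch only; the name is still under discussion, TARGET_HAS_IO_INSNS is
assumed here, and the stub typedefs exist only so the fragment compiles
standalone, standing in for QEMU's real QTestState / QFWCFG types):

```c
#include <stdint.h>

/* Stubs standing in for QEMU's QTestState / QFWCFG types. */
typedef struct QTestState QTestState;
typedef struct QFWCFG QFWCFG;

#define TARGET_HAS_IO_INSNS 1   /* assumed set for targets such as x86 and avr */

#ifdef TARGET_HAS_IO_INSNS
/* Port-I/O-based fw_cfg accessors only make sense on targets whose
 * CPUs have dedicated I/O instructions. */
QFWCFG *io_fw_cfg_init(QTestState *qts, uint16_t base);
void io_fw_cfg_uninit(QFWCFG *fw_cfg);
#endif /* TARGET_HAS_IO_INSNS */

/* Helper so the guard's effect is observable from a test. */
static int target_has_io_insns(void)
{
#ifdef TARGET_HAS_IO_INSNS
    return 1;
#else
    return 0;
#endif
}
```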

Thanks for the feedback :)
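P.S.: the port-space vs. memory-space distinction above can be sketched
in plain C (an illustrative model, not QEMU code: the two arrays stand
in for the separate address spaces, and io_outw/mmio_writew are
hypothetical stand-ins for libqos' qtest_outw/qtest_writew):

```c
#include <assert.h>
#include <stdint.h>

/* Two distinct address spaces, as on x86: out/in instructions reach a
 * separate 64 KiB port space, while ordinary loads/stores reach
 * memory. On aarch64/"virt" there is no port space at all; PCI I/O
 * resources are just a window inside the memory space. */
static uint16_t port_space[0x10000 / 2]; /* x86-style I/O port space  */
static uint16_t mem_space[0x10000 / 2];  /* (truncated) memory space  */

/* x86 path: a dedicated I/O instruction selects the port space. */
static void io_outw(uint16_t port, uint16_t val)
{
    port_space[port / 2] = val;
}

static uint16_t io_inw(uint16_t port)
{
    return port_space[port / 2];
}

/* aarch64 path: a plain memory store into the mapped PCI I/O window. */
static void mmio_writew(uint64_t addr, uint16_t val)
{
    mem_space[addr / 2] = val;
}

static uint16_t mmio_readw(uint64_t addr)
{
    return mem_space[addr / 2];
}
```

The same numeric address names a different location in each space,
which is exactly why a generic QTest API cannot silently pick one of
them for the caller.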


