qemu-devel

Re: [PATCH] kvm/arm64: Fix memory section did not set to kvm


From: liucong2
Subject: Re: [PATCH] kvm/arm64: Fix memory section did not set to kvm
Date: Mon, 28 Mar 2022 20:17:40 +0800

Thanks for your explanation, I will study it later.


In the scenario of an 8k ROM BAR and a 64k page size, the value of
'size = ROUND_UP(size, qemu_real_host_page_size)' is 64k, and kvm_align_section()
also returns 64k, which is the same as the size of the RAMBlock. I still
do not understand why that is wrong.

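For concreteness, here is a minimal standalone sketch of that rounding (this is
not QEMU code; ROUND_UP below is just the usual power-of-two macro, and the
8k/64k values are the ROM BAR and host page sizes being discussed):

#include <stdio.h>
#include <stdint.h>

/* Typical power-of-two round-up; requires d to be a power of 2. */
#define ROUND_UP(n, d) (((n) + (d) - 1) & ~((d) - 1))

int main(void)
{
    uint64_t rom_bar_size   = 8 * 1024;     /* 8k QXL ROM BAR */
    uint64_t host_page_size = 64 * 1024;    /* 64k host pages */

    /* 8k rounded up to the 64k host page size gives 64k. */
    printf("ROUND_UP(%llu, %llu) = %llu\n",
           (unsigned long long)rom_bar_size,
           (unsigned long long)host_page_size,
           (unsigned long long)ROUND_UP(rom_bar_size, host_page_size));
    return 0;
}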

The guest code just prints the value I read. I changed ioremap to ioremap_wc
only because ioremap causes an error on arm64; for details see:

https://lore.kernel.org/all/20220324104928.2959545-1-liucong2@kylinos.cn/



diff --git a/drivers/gpu/drm/qxl/qxl_dev.h b/drivers/gpu/drm/qxl/qxl_dev.h
index a7bc31f6d565..f837b18c8958 100644
--- a/drivers/gpu/drm/qxl/qxl_dev.h
+++ b/drivers/gpu/drm/qxl/qxl_dev.h
@@ -222,6 +222,10 @@ struct qxl_urect {
        uint32_t right;
 };
 
+struct qxl_rom_test {
+           uint32_t magic;
+           uint32_t id;
+};
 /* qxl-1 compat: append only */
 struct qxl_rom {
        uint32_t magic;
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index aae90a9ee1db..74c6eea45252 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -196,8 +196,9 @@ struct qxl_device {
 
        resource_size_t vram_base, vram_size;
        resource_size_t surfaceram_base, surfaceram_size;
-       resource_size_t rom_base, rom_size;
+       resource_size_t rom_base, rom_base_test, rom_size, rom_size_test;
        struct qxl_rom *rom;
+       struct qxl_rom_test *rom_test;
 
        struct qxl_mode *modes;
        struct qxl_bo *monitors_config_bo;
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index dc5b3850a4d4..568d674725ad 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -38,12 +38,17 @@ int qxl_log_level;
 static bool qxl_check_device(struct qxl_device *qdev)
 {
        struct qxl_rom *rom = qdev->rom;
+       struct qxl_rom_test *rom_test = qdev->rom_test;
 
        if (rom->magic != 0x4f525851) {
                DRM_ERROR("bad rom signature %x\n", rom->magic);
                return false;
        }
+       pr_err("liucong %s %d 0x%x", __func__, __LINE__, rom_test->magic);
+       pr_err("liucong %s %d 0x%x 0x%x", __func__, __LINE__, rom_test->magic, rom_test->id); //no output on arm64
 
        DRM_INFO("Device Version %d.%d\n", rom->id, rom->update_id);
        DRM_INFO("Compression level %d log level %d\n", rom->compression_level,
                 rom->log_level);
@@ -121,7 +126,10 @@ int qxl_device_init(struct qxl_device *qdev,
        qxl_gem_init(qdev);
 
        qdev->rom_base = pci_resource_start(pdev, 2);
+       qdev->rom_base_test = pci_resource_start(pdev, 4);
        qdev->rom_size = pci_resource_len(pdev, 2);
+       qdev->rom_size_test = pci_resource_len(pdev, 4);
        qdev->vram_base = pci_resource_start(pdev, 0);
        qdev->io_base = pci_resource_start(pdev, 3);
 
@@ -165,8 +173,8 @@ int qxl_device_init(struct qxl_device *qdev,
                 (int)qdev->surfaceram_size / 1024 / 1024,
                 (int)qdev->surfaceram_size / 1024,
                 (sb == 4) ? "64bit" : "32bit");
-
-       qdev->rom = ioremap(qdev->rom_base, qdev->rom_size);
+       qdev->rom = ioremap_wc(qdev->rom_base, qdev->rom_size);
+       qdev->rom_test = ioremap_wc(qdev->rom_base_test, 2048);
        if (!qdev->rom) {
                pr_err("Unable to ioremap ROM\n");
                r = -ENOMEM;
@@ -184,7 +192,7 @@ int qxl_device_init(struct qxl_device *qdev,
                goto rom_unmap;
        }
 
-       qdev->ram_header = ioremap(qdev->vram_base +
+       qdev->ram_header = ioremap_wc(qdev->vram_base +
                                   qdev->rom->ram_header_offset,
                                   sizeof(*qdev->ram_header));
        if (!qdev->ram_header) {


Regards,
Cong.
Subject: Re: [PATCH] kvm/arm64: Fix memory section did not set to kvm
Date: 2022-03-28 19:24
From: Peter Maydell
To: Cong Liu

On Mon, 28 Mar 2022 at 10:42, Cong Liu wrote:
> On 2022/3/25 23:00, Peter Maydell wrote:
> > This is correct behaviour. If the memory region is less than
> > a complete host page then it is not possible for KVM to
> > map it into the guest as directly accessible memory,
> > because that can only be done in host-page sized chunks,
> > and if the MR is a RAM region smaller than the page then
> > there simply is not enough backing RAM there to map without
> > incorrectly exposing to the guest whatever comes after the
> > contents of the MR.
>
> actually, even with the fixed 8192-byte qxl rom bar size, the RAMBlock
> size corresponding to the MemoryRegion will also be 64k.

Where does this rounding up happen? In any case, it would
still be wrong -- if the ROM bar is 8192 bytes large then the
guest should get a fault writing to bytes past 8191, not
reads-as-written.

> so it can
> map into the guest as directly accessible memory. now it failed
> just because we use the wrong size. ROUND_UP(n, d) requires
> that d be a power of 2, it is faster than QEMU_ALIGN_UP().
> and the qemu_real_host_page_size should always be a power of 2.
> seems we can use this patch and no need to fall back to "treat
> like MMIO device access".
>
> >
> > For memory regions smaller than a page, KVM and QEMU will
> > fall back to "treat like MMIO device access". As long as the

> I don't understand how it works, can you help explain or tell me
> which part of the code I should read to understand?

The KVM code in the kernel takes a fault because there is
nothing mapped at that address in the stage 2 page tables.
This results in kvm_handle_guest_abort() being called.
This function sorts out various cases it can handle
(eg "this is backed by host RAM which we need to page in")
and cases which are always errors (eg "the guest tried to
fetch an instruction from non-RAM"). For the cases of
"treat like MMIO device access" it calls io_mem_abort().
In io_mem_abort() we check whether the guest instruction that
did the load/store was a sensible one (this is the
kvm_vcpu_dabt_isvalid() check). Assuming that it was, then
we fill in some kvm_run struct fields with the parameters like
access size, address, etc (which the host CPU tells us in the
ESR_ELx syndrome register) and cause an exit to userspace with
KVM_EXIT_MMIO as the reason.
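Purely as an illustration of that decision tree (this is not the kernel
source; the struct, enum and function names below are made up for the sketch):

/* Toy model of how a stage-2 abort gets classified, per the description
 * above. Not kernel code; names are invented for illustration only. */
enum fault_result {
    MAP_RAM,        /* backed by host RAM: page it in and resume the guest */
    ALWAYS_ERROR,   /* e.g. instruction fetch from non-RAM, or undecodable access */
    MMIO_EXIT,      /* fill in kvm_run->mmio and exit to userspace (KVM_EXIT_MMIO) */
};

struct fault_info {
    int backed_by_memslot;     /* fault address falls inside a KVM memory slot */
    int is_instruction_fetch;  /* guest tried to execute from this address */
    int syndrome_is_valid;     /* the kvm_vcpu_dabt_isvalid()-style check */
};

static enum fault_result classify_stage2_fault(const struct fault_info *f)
{
    if (f->backed_by_memslot)
        return MAP_RAM;
    if (f->is_instruction_fetch)
        return ALWAYS_ERROR;
    if (!f->syndrome_is_valid)
        return ALWAYS_ERROR;   /* the "decoding not implemented" class of case */
    return MMIO_EXIT;
}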

In QEMU, the code in kvm_cpu_exec() has a case for the
KVM_EXIT_MMIO code. It just calls address_space_rw()
using the address, length, etc parameters that the kernel
gave us. If this is a load then the loaded data is filled
in in the kvm_run struct. Then it loops back around to do a
KVM_RUN ioctl, handing control back to the kernel.
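A rough sketch of that userspace half of the round trip (this is not QEMU's
kvm_cpu_exec(); it is a minimal VMM-style loop against the raw KVM API, with
a hypothetical handle_mmio() standing in for address_space_rw()):

#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

/* Hypothetical device-emulation hook standing in for address_space_rw(). */
static void handle_mmio(uint64_t addr, uint8_t *data, uint32_t len, int is_write)
{
    if (is_write)
        printf("guest wrote %u bytes at 0x%llx\n", len, (unsigned long long)addr);
    else
        memset(data, 0, len);   /* e.g. read-as-zero for an unbacked region */
}

/* Minimal vcpu loop: run the guest, service MMIO exits, run again. */
static int run_vcpu(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
            return -1;

        switch (run->exit_reason) {
        case KVM_EXIT_MMIO:
            /* The kernel filled in phys_addr/len/is_write from the fault;
             * for a load we fill run->mmio.data before re-entering. */
            handle_mmio(run->mmio.phys_addr, run->mmio.data,
                        run->mmio.len, run->mmio.is_write);
            break;
        default:
            return 0;   /* hand other exit reasons back to the caller */
        }
    }
}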

In the kernel, in the arm64 kvm_arch_vcpu_ioctl_run()
we check whether we've just come back from a KVM_EXIT_MMIO
exit, and if so call kvm_handle_mmio_return(). If the
faulting instruction was a load, we read the data from
the kvm_run struct, sign extend as appropriate, and write
to the appropriate guest register. Then we increment the
guest program counter. Finally we start execution in the
guest in the normal way.
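For the "sign extend as appropriate" step, the arithmetic looks roughly like
this (an illustrative helper, not the kernel's actual MMIO return code):

#include <stdint.h>

/* Extend an MMIO load result of 'len' bytes (1, 2, 4 or 8) to the 64-bit
 * register value the guest will see. Purely illustrative. */
static uint64_t extend_mmio_load(uint64_t data, unsigned int len, int sign_extend)
{
    unsigned int shift = 64 - 8 * len;

    if (len >= 8 || !sign_extend)
        return data;                      /* no extension needed */
    return (uint64_t)(((int64_t)(data << shift)) >> shift);
}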

> the test code appended.
> it works with some differences between arm64 and x86. on x86, it
> prints rom_test->magic and rom_test->id correctly, but on arm64
> it only prints rom_test->magic correctly. when I try to print
> rom_test->id, I get a "load/store instruction decoding not
> implemented" error message.

You don't show the guest code, which is the thing that matters
here. In any case for the QXL ROM we already have the fix,
which is to make the ROM as big as the host page size.

-- PMM
