qemu-s390x

Re: [PATCH for-5.2] s390x/pci: fix endianness issues


From: Cornelia Huck
Subject: Re: [PATCH for-5.2] s390x/pci: fix endianness issues
Date: Wed, 18 Nov 2020 08:49:47 +0100

On Tue, 17 Nov 2020 14:49:53 -0500
Matthew Rosato <mjrosato@linux.ibm.com> wrote:

> On 11/17/20 2:21 PM, Thomas Huth wrote:
> > On 17/11/2020 18.13, Cornelia Huck wrote:  
> >> zPCI control blocks are big endian; we need to take care that we
> >> do proper accesses in order not to break TCG guests on little endian
> >> hosts.
> >>
> >> Fixes: 28dc86a07299 ("s390x/pci: use a PCI Group structure")
> >> Fixes: 9670ee752727 ("s390x/pci: use a PCI Function structure")
> >> Fixes: 1e7552ff5c34 ("s390x/pci: get zPCI function info from host")  
> > 
> > This fixes the problem with my old Fedora 28 under TCG, too, but... do we
> > really want to store this information in big endian on the QEMU side (e.g.
> > in the QTAILQ lists)? ... that smells like trouble again in the future...
> > 
> > I think it would be better if we rather replace all those memcpy() functions
> > in the patches that you cited in the "Fixes:" lines above with code that
> > changes the endianness when we copy the data from QEMU space to guest space
> > and vice versa. What do you think?  
> 
> Hmm, that's actually a fair point...  Such an approach would have the 
> advantage of avoiding weird lines like the following:
> 
>       memory_region_add_subregion(&pbdev->iommu->mr,
> -                                pbdev->pci_group->zpci_group.msia,
> +                                ldq_p(&pbdev->pci_group->zpci_group.msia),
>                                   &pbdev->msix_notify_mr);
> 
> 
> And it would keep the endianness handling largely contained to the code 
> that handles CLPs.  It does take away the niceness of being able to 
> gather the CLP response in one fell memcpy but...  It's not like these 
> are done very often (device init).
> 

Not opposed to it, can try to put a patch together and see what it
looks like. As long as we get this into 5.2 :)
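
[For illustration only, not the actual QEMU patch: below is a minimal,
standalone sketch of the conversion-on-copy approach discussed above. The
struct layouts and field names are hypothetical stand-ins for the zPCI group
state and the CLP response payload; QEMU itself would use the cpu_to_be*()
and stq_be_p()-style helpers from qemu/bswap.h rather than the local ones
defined here.]

    /*
     * Standalone sketch: keep the QEMU-side copy of a zPCI group in host
     * byte order and convert each field to big endian only when building
     * the guest-visible CLP response.  Field names are hypothetical.
     */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Host-order state kept on the QEMU side (hypothetical subset). */
    typedef struct {
        uint64_t msia;      /* MSI address */
        uint16_t mui;       /* measurement update interval */
        uint8_t  version;
    } ZpciGroupState;

    /* Guest-visible CLP response payload: every field big endian. */
    typedef struct {
        uint64_t msia;
        uint16_t mui;
        uint8_t  version;
    } __attribute__((packed)) ClpGroupResp;

    /* Local stand-ins for QEMU's cpu_to_be64()/cpu_to_be16(). */
    static uint64_t to_be64(uint64_t v)
    {
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        return __builtin_bswap64(v);
    #else
        return v;
    #endif
    }

    static uint16_t to_be16(uint16_t v)
    {
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
        return __builtin_bswap16(v);
    #else
        return v;
    #endif
    }

    /* Instead of one memcpy() of a pre-swapped blob, convert per field. */
    static void build_group_resp(const ZpciGroupState *grp, ClpGroupResp *resp)
    {
        memset(resp, 0, sizeof(*resp));
        resp->msia    = to_be64(grp->msia);
        resp->mui     = to_be16(grp->mui);
        resp->version = grp->version;   /* single byte, no swap needed */
    }

    int main(void)
    {
        ZpciGroupState grp = { .msia = 0x8000000000000000ULL, .mui = 4,
                               .version = 2 };
        ClpGroupResp resp;

        build_group_resp(&grp, &resp);
        /* QEMU-internal users keep reading grp.msia directly, in host order. */
        printf("host msia=%#llx, first byte of BE msia=%#x\n",
               (unsigned long long)grp.msia,
               (unsigned)((uint8_t *)&resp.msia)[0]);
        return 0;
    }

The upside, as noted above, is that internal users such as the
memory_region_add_subregion() call keep reading the fields directly in host
byte order; the byte swapping stays confined to the CLP request/response path.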
