qemu-devel

Re: Problems with irq mapping in qemu v5.2


From: BALATON Zoltan
Subject: Re: Problems with irq mapping in qemu v5.2
Date: Sat, 26 Dec 2020 00:43:49 +0100 (CET)

On Tue, 22 Dec 2020, Guenter Roeck wrote:
On 12/22/20 10:23 AM, Mark Cave-Ayland wrote:
On 22/12/2020 16:16, Guenter Roeck wrote:

Hi,

commit 459ca8bfa41 ("pci: Assert irqnum is between 0 and bus->nirqs in
pci_bus_change_irq_level") added sanity checks to the interrupt number passed
to pci_bus_change_irq_level(). That makes sense, given that bus->irq_count
is indexed and sized by the number of interrupts.

However, as it turns out, the interrupt number passed to this function
is the _mapped_ interrupt number. This results in assertion failures for
various emulations.

That doesn't sound quite right. My understanding from the other boards I have 
been working on is that they use the map_irq() functions recursively so that 
the final set_irq() is on the physical pin, so it might just be that the 
assert() is simply exposing an existing bug.

Examples (I don't know if there are others):

- ppc4xx_pci_map_irq() maps the interrupt number to "slot - 1". Obviously
   that isn't a good thing to do for slot 0, and indeed results in an
   assertion as soon as slot 0 is initialized (presumably that is the root
   bridge). Changing the mapping to "slot" doesn't help because valid slots
   are 0..4, and only four interrupts are allocated.
- pci_bonito_map_irq() changes the mapping all over the place. Whatever
   it does, it returns numbers starting with 32 for slots 5..12. With
   a total number of 32 interrupts, this again results in an assertion
   failure.

ppc4xx_pci_map_irq() is definitely buggy. I just don't know what the
correct mapping should be. slot & 3, maybe?

Yeah, that doesn't look right. Certainly both the Mac PPC machines use
(pci_dev->devfn >> 3) & 3 plus the interrupt pin, so I think you're right that
this is missing an & 3 here. Does adding this allow your image to boot?


Actually, it does not help. This does:

@@ -247,7 +247,7 @@ static int ppc4xx_pci_map_irq(PCIDevice *pci_dev, int irq_num)

     trace_ppc4xx_pci_map_irq(pci_dev->devfn, irq_num, slot);

-    return slot - 1;
+    return slot ? slot - 1 : slot;
 }

but I have no idea why.

I had a look at whether a similar fix would work for this as for sam460ex, but I'm not sure. The real Sam460EX only has one PCI slot, so there is no need to map slots, and according to the U-Boot sources all PCI INT pins are connected to a single IRQ on the interrupt controller. In QEMU we can have multiple PCI devices, but just connecting everything up to a single interrupt seems to work.

(Previously we did that in map_irq of the PCI host; after clean-up we model the 4 PCI interrupt lines, which are then or-ed in the board code. I did not find a difference in behaviour, but the model may be a bit cleaner this way and allows reusing the PCI controller in a board that may have a different mapping.)

For the Bamboo board we have 4 interrupts connected to the PCI bus in the board but also have a comment in ppc4xx_pci.c near the above function saying:

/* On Bamboo, all pins from each slot are tied to a single board IRQ. This
 * may need further refactoring for other boards. */
static int ppc4xx_pci_map_irq(PCIDevice *pci_dev, int irq_num)
{
    int slot = pci_dev->devfn >> 3;

    trace_ppc4xx_pci_map_irq(pci_dev->devfn, irq_num, slot);

    return slot - 1;
}

Now I'm not sure what that comment means:

1. All PCI INTA from all slots go to one irq, PCI INTB to another and so on

or

2. All PCI interrupts (INTA-D) from the first slot are connected to one irq on the board, those from the next slot to another irq, and so on

The slot - 1 mapping seems to correspond more to interpretation 2, but that also means we can only have 4 slots. I did not find a picture of the real board, so I don't know how many slots it has or how they are connected. I think we need more info on the real hardware to tell what the correct mapping is here.

Regards,
BALATON Zoltan
