Re: [PATCH] hw/nvme: Add iothread support


From: Keith Busch
Subject: Re: [PATCH] hw/nvme: Add iothread support
Date: Tue, 26 Jul 2022 12:07:20 -0600

On Tue, Jul 26, 2022 at 11:32:57PM +0800, Jinhao Fan wrote:
> at 10:45 PM, Keith Busch <kbusch@kernel.org> wrote:
> 
> > On Tue, Jul 26, 2022 at 04:55:54PM +0800, Jinhao Fan wrote:
> >> Hi Klaus and Keith,
> >> 
> >> I just added support for interrupt masking. How can I test interrupt
> >> masking?
> > 
> > Are you asking about MSI masking? I don't think any drivers are using the
> > feature, so you'd need to modify one to test it. I can give you some
> > pointers if you are asking about MSI.
> 
> Thanks Keith,
> 
> Do I understand correctly: qemu-nvme only supports MSI-X and pin-based
> interrupts. So MSI masking here is equivalent to MSI-X masking.

It looks like QEMU's nvme only supports MSI-X. I could have sworn it used to
support MSI, but I must be thinking of an unofficial fork.

I was mainly asking about MSI masking as it relates to the NVMe controller's
INTMS/INTMC registers. I don't think any driver is making use of these, and they
don't apply to MSI-X; just MSI and legacy INTx.

At the PCI level, masking MSI vectors goes through config space (the optional
per-vector masking capability). Writing to config space is too slow to do
per-interrupt, so NVMe created the INTMS/INTMC registers to get around that.
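
To make that concrete, here is a minimal sketch (my own illustration, not code
from QEMU's hw/nvme or the Linux driver) of what masking through those
registers looks like. The 0x0c/0x10 offsets are the spec-defined INTMS/INTMC
locations; everything else, including the function names and the regs pointer,
is made up:

  /*
   * Illustrative only: mask/unmask an interrupt vector via the NVMe
   * controller's INTMS/INTMC registers.  regs is assumed to point at
   * the memory-mapped BAR0 register block.  These registers cover
   * vectors 0-31 and apply to MSI/INTx, not MSI-X.
   */
  #include <stdint.h>

  #define NVME_REG_INTMS 0x0c   /* Interrupt Mask Set: 1 bits mask vectors  */
  #define NVME_REG_INTMC 0x10   /* Interrupt Mask Clear: 1 bits unmask them */

  static inline void nvme_mask_vector(volatile uint8_t *regs, unsigned vec)
  {
      /* a single MMIO write instead of a config space access */
      *(volatile uint32_t *)(regs + NVME_REG_INTMS) = 1u << vec;
  }

  static inline void nvme_unmask_vector(volatile uint8_t *regs, unsigned vec)
  {
      *(volatile uint32_t *)(regs + NVME_REG_INTMC) = 1u << vec;
  }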

> If the above is correct, I think I am asking about MSI masking.
> 
> BTW, a double check on ctrl.c seems to show that we only support disabling
> interrupts at CQ creation time, which is recorded in cq->irq_enabled.
> This is different from my prior understanding that interrupts can be
> disabled at runtime by a call like Linux's local_irq_save(). Therefore I doubt
> whether qemu-nvme supported "interrupt masking" before. How do you
> understand qemu-nvme's interrupt masking support?

MSI-X uses MMIO for masking, so there's no need for an NVMe-specific way to
mask these interrupts. You can just use the generic PCIe mechanism and set the
vector's mask bit in the MSI-X table. But no NVMe driver that I know of is
making use of this either, though it should be possible to make Linux start
doing that.
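
For comparison, this is roughly what that generic path looks like at the
hardware level; the 16-byte table entry layout and the mask bit in the Vector
Control dword come from the PCIe spec, while the function and macro names here
are made up for illustration:

  /*
   * Illustrative only: per-vector MSI-X masking.  msix_table is assumed
   * to be the device's memory-mapped MSI-X table from one of its BARs.
   */
  #include <stdint.h>

  #define MSIX_ENTRY_SIZE        16
  #define MSIX_VECTOR_CTRL_OFF   12     /* Vector Control dword in an entry */
  #define MSIX_VECTOR_CTRL_MASK  0x1u   /* bit 0: 1 = vector masked         */

  static void msix_set_masked(volatile uint8_t *msix_table, unsigned vec,
                              int masked)
  {
      volatile uint32_t *ctrl = (volatile uint32_t *)
          (msix_table + vec * MSIX_ENTRY_SIZE + MSIX_VECTOR_CTRL_OFF);

      if (masked)
          *ctrl |= MSIX_VECTOR_CTRL_MASK;    /* plain MMIO, no config access */
      else
          *ctrl &= ~MSIX_VECTOR_CTRL_MASK;
  }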
 
The CQ irq_enabled field is there so the user can create a pure polling queue.
That's a fixed property of the queue; interrupts can't be re-enabled without
deleting and recreating it.
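
For reference, a pure polling queue is just a Create I/O Completion Queue
command with IEN left clear. The sketch below follows the 64-byte SQE layout
from the spec (opcode 0x05, CDW11 bit 0 = PC, bit 1 = IEN); the struct and
function names are illustrative, not taken from any driver, and it assumes the
command is zero-initialized and byte-swapping is handled elsewhere:

  #include <stdint.h>

  struct create_io_cq {
      uint8_t  opcode;        /* 0x05: Create I/O Completion Queue    */
      uint8_t  flags;
      uint16_t cid;
      uint32_t rsvd1[5];
      uint64_t prp1;          /* physical address of the queue memory */
      uint64_t rsvd2;
      uint16_t qid;           /* CDW10[15:00]                         */
      uint16_t qsize;         /* CDW10[31:16], 0's based              */
      uint16_t cq_flags;      /* CDW11: bit 0 = PC, bit 1 = IEN       */
      uint16_t irq_vector;    /* CDW11[31:16], ignored when IEN is 0  */
      uint32_t rsvd3[4];
  };

  #define NVME_Q_PC   (1 << 0)
  #define NVME_CQ_IEN (1 << 1)

  static void build_polling_cq(struct create_io_cq *c, uint16_t qid,
                               uint16_t qsize, uint64_t prp1)
  {
      c->opcode   = 0x05;
      c->qid      = qid;
      c->qsize    = qsize;        /* 0's based                    */
      c->prp1     = prp1;
      c->cq_flags = NVME_Q_PC;    /* IEN left clear: polling only */
  }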

Linux's local_irq_save() only disables local CPU interrupt handling for most
sources. PCI devices are unaware of this and can still send their interrupt
messages to the CPU, but the CPU won't switch to the irq handler until
local_irq_restore() is called.
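
In code, that's just the usual kernel pattern; a short illustration of what it
does and does not cover:

  /* Only stops this CPU from servicing interrupts; nothing is masked at
   * the device, MSI-X table, or INTMS level. */
  #include <linux/irqflags.h>

  static void example_critical_section(void)
  {
      unsigned long flags;

      local_irq_save(flags);       /* no irq handlers run on this CPU */
      /* ... short critical section ... */
      local_irq_restore(flags);    /* pending interrupts handled now  */
  }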


