qemu-devel

Re: [PATCH v2] memory: prevent dma-reentrancy issues


From: Stefan Hajnoczi
Subject: Re: [PATCH v2] memory: prevent dma-reentrancy issues
Date: Tue, 12 Jul 2022 10:34:48 +0100

On Tue, Jun 21, 2022 at 11:53:06AM -0400, Alexander Bulekov wrote:
> On 220621 1630, Peter Maydell wrote:
> > On Thu, 9 Jun 2022 at 14:59, Alexander Bulekov <alxndr@bu.edu> wrote:
> > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > > index 44dacfa224..ab1ad0f7a8 100644
> > > --- a/include/hw/pci/pci.h
> > > +++ b/include/hw/pci/pci.h
> > > @@ -834,8 +834,17 @@ static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
> > >                                       void *buf, dma_addr_t len,
> > >                                       DMADirection dir, MemTxAttrs attrs)
> > >  {
> > > -    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
> > > -                         dir, attrs);
> > > +    bool prior_engaged_state;
> > > +    MemTxResult result;
> > > +
> > > +    prior_engaged_state = dev->qdev.engaged_in_io;
> > > +
> > > +    dev->qdev.engaged_in_io = true;
> > > +    result = dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
> > > +                           dir, attrs);
> > > +    dev->qdev.engaged_in_io = prior_engaged_state;
> > > +
> > > +    return result;
> > 
> > Why do we need to do something in this pci-specific function ?
> > I was expecting this to only need changes at the generic-to-all-devices
> > level.
> 
> Both of these handle the BH->DMA->MMIO case. Unlike MMIO, I don't think
> there is any neat way to set the engaged_in_io flag as we enter a BH. So
> instead, we try to set it when a device initiates DMA.
> 
> The pci function lets us do that since we get a glimpse of the dev/qdev
> (unlike the dma_memory_...  functions).
...
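
For context, the other half of the scheme is a check on the MMIO dispatch
path before a device's read/write callback runs. A rough sketch of the idea
(illustrative names only, not the exact hunk from the patch):

  /* Sketch: refuse to run an MMIO callback if the owning device is
   * already in the middle of MMIO or DMA. "dev" is the DeviceState
   * that owns the MemoryRegion.
   */
  static MemTxResult guarded_mmio_write(DeviceState *dev, MemoryRegion *mr,
                                        hwaddr addr, uint64_t data,
                                        unsigned size)
  {
      if (dev->engaged_in_io) {
          /* Re-entered while the device is already engaged in I/O */
          return MEMTX_ERROR;
      }

      dev->engaged_in_io = true;
      mr->ops->write(mr->opaque, addr, data, size);
      dev->engaged_in_io = false;

      return MEMTX_OK;
  }
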
> > > @@ -302,6 +310,10 @@ static MemTxResult dma_buf_rw(void *buf, dma_addr_t len, dma_addr_t *residual,
> > >          xresidual -= xfer;
> > >      }
> > >
> > > +    if (dev) {
> > > +        dev->engaged_in_io = prior_engaged_state;
> > > +    }
> > 
> > Not all DMA goes through dma_buf_rw() -- why does it need changes?
> 
> This one has the same goal, but accesses the qdev through sg, instead of
> PCI.
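
For reference, dma_buf_rw() can reach the device because QEMUSGList carries
a DeviceState pointer. A sketch of what the un-quoted start of that hunk
would look like (illustrative only, not the exact patch text):

  DeviceState *dev = sg->dev;          /* owner of the sg list, may be NULL */
  bool prior_engaged_state = false;

  if (dev) {
      prior_engaged_state = dev->engaged_in_io;
      dev->engaged_in_io = true;
  }

  /* ... per-entry dma_memory_rw() loop ... */

  if (dev) {
      dev->engaged_in_io = prior_engaged_state;
  }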

Should dma_*() APIs take a reentrancy guard argument so that all DMA
accesses are systematically covered?

  /* Define this in the memory API */
  typedef struct {
      bool engaged_in_io;
  } MemReentrancyGuard;

  /* Embed MemReentrancyGuard in DeviceState */
  ...

  /* Require it in dma_*() APIs */
  static inline MemTxResult dma_memory_rw(AddressSpace *as, dma_addr_t addr,
                                          void *buf, dma_addr_t len,
                                          DMADirection dir, MemTxAttrs attrs,
                                          MemReentrancyGuard *guard);

  /* Call dma_*() APIs like this... */
  static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                       void *buf, dma_addr_t len,
                                       DMADirection dir, MemTxAttrs attrs)
  {
      return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
                           dir, attrs, &dev->qdev.reentrancy_guard);
  }
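
Inside dma_memory_rw() the guard could then be toggled around the actual
access, e.g. (sketch only; it assumes the existing dma_memory_rw_relaxed()
helper that dma_memory_rw() wraps today):

  static inline MemTxResult dma_memory_rw(AddressSpace *as, dma_addr_t addr,
                                          void *buf, dma_addr_t len,
                                          DMADirection dir, MemTxAttrs attrs,
                                          MemReentrancyGuard *guard)
  {
      bool prior_engaged_state = guard->engaged_in_io;
      MemTxResult result;

      guard->engaged_in_io = true;
      result = dma_memory_rw_relaxed(as, addr, buf, len, dir, attrs);
      guard->engaged_in_io = prior_engaged_state;

      return result;
  }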

Stefan
