From: Keith Busch
Subject: Re: [PATCH v4] hw/nvme: Use ioeventfd to handle doorbell updates
Date: Tue, 5 Jul 2022 12:43:58 -0600

On Tue, Jul 05, 2022 at 07:11:36PM +0200, Klaus Jensen wrote:
> On Jul  5 22:24, Jinhao Fan wrote:
> > @@ -1374,7 +1374,14 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
> >  
> >      QTAILQ_REMOVE(&req->sq->out_req_list, req, entry);
> >      QTAILQ_INSERT_TAIL(&cq->req_list, req, entry);
> > -    timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
> > +
> > +    if (req->sq->ioeventfd_enabled) {
> > +        /* Post CQE directly since we are in main loop thread */
> > +        nvme_post_cqes(cq);
> > +    } else {
> > +        /* Schedule the timer to post CQE later since we are in vcpu thread */
> > +        timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
> > +    }
> 
> Actually, we are only in the vcpu thread if we come here from
> nvme_process_db, which in very rare circumstances may enqueue the
> completion of an AER due to an invalid doorbell write.
> 
> In general, nvme_enqueue_req_completion is only ever called from the
> main iothread, which makes me wonder why we defer this work in the
> first place. Deferring does have the benefit that we queue up several
> completions before posting them in one go and raising the interrupt,
> but I wonder if that could be handled better.

I think the timer is used because of the cq_full condition: we need to restart
posting completions once the queue becomes not-full again, and that only
happens on a doorbell write. Having everything in the main iothread use the
same timer as the doorbell handler ensures single-threaded access to the
request list.
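
The interplay described above is easier to see in code. Below is a condensed
sketch of the pattern, not the actual hw/nvme/ctrl.c code: in the real source
the logic is inlined in nvme_post_cqes() and nvme_process_db(), and the
helpers nvme_write_cqe() and nvme_process_cq_db() here are hypothetical
stand-ins for brevity.

    /* Timer callback.  Runs in the main iothread, so cq->req_list needs
     * no locking as long as every producer arms this same timer. */
    static void nvme_post_cqes(void *opaque)
    {
        NvmeCQueue *cq = opaque;
        NvmeRequest *req, *next;

        QTAILQ_FOREACH_SAFE(req, &cq->req_list, entry, next) {
            if (nvme_cq_full(cq)) {
                /* No room for another CQE.  Stop here; nothing can make
                 * progress until the host advances cq->head with a
                 * doorbell write. */
                break;
            }
            QTAILQ_REMOVE(&cq->req_list, req, entry);
            nvme_write_cqe(cq, req);  /* hypothetical helper: DMA the CQE
                                       * to the host and advance cq->tail */
        }

        if (cq->tail != cq->head) {
            /* One interrupt covers every completion posted above -- the
             * batching benefit of deferring through the timer. */
            nvme_irq_assert(cq->ctrl, cq);
        }
    }

    /* CQ head doorbell write (hypothetical helper; in the real code this
     * is a branch of nvme_process_db).  The queue may just have become
     * not-full, so restart posting -- but through the same timer, never
     * by walking req_list directly from the vcpu thread. */
    static void nvme_process_cq_db(NvmeCQueue *cq, uint16_t new_head)
    {
        cq->head = new_head;
        if (!QTAILQ_EMPTY(&cq->req_list)) {
            timer_mod(cq->timer,
                      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
        }
    }

Because both nvme_enqueue_req_completion() and the doorbell handler funnel
through cq->timer, completions that pile up within the 500 ns window get
posted in a single nvme_post_cqes() run with one interrupt, and req_list is
only ever walked from the main iothread.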


