From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [Virtio-fs] [PATCH 0/4] virtiofsd: multithreading preparation part 3
Date: Mon, 12 Aug 2019 13:51:03 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

* piaojun (address@hidden) wrote:
> 
> 
> On 2019/8/12 18:05, Stefan Hajnoczi wrote:
> > On Sun, Aug 11, 2019 at 10:26:18AM +0800, piaojun wrote:
> >> On 2019/8/9 16:21, Stefan Hajnoczi wrote:
> >>> On Thu, Aug 08, 2019 at 10:53:16AM +0100, Dr. David Alan Gilbert wrote:
> >>>> * Stefan Hajnoczi (address@hidden) wrote:
> >>>>> On Wed, Aug 07, 2019 at 04:57:15PM -0400, Vivek Goyal wrote:
> >>>>> 3. Can READ/WRITE be performed directly in QEMU via a separate virtqueue
> >>>>>    to eliminate the bad address problem?
> >>>>
> >>>> Are you thinking of doing all reads/writes that way, or just the
> >>>> corner cases? It doesn't seem worth it for the corner cases unless
> >>>> you're finding them cropping up in real workloads.
> >>>
> >>> Send all READ/WRITE requests to QEMU instead of virtiofsd.
> >>>
> >>> Only handle metadata requests in virtiofsd (OPEN, RELEASE, READDIR,
> >>> MKDIR, etc).
> >>>
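(For concreteness, a rough sketch of that split; the routing predicate
below is hypothetical, only the opcode names come from <linux/fuse.h>:)

  #include <stdbool.h>
  #include <stdint.h>
  #include <linux/fuse.h>

  /*
   * Hypothetical routing predicate: data-plane requests (READ/WRITE)
   * would be handled by QEMU on its own virtqueue; everything else
   * (OPEN, RELEASE, READDIR, MKDIR, ...) would stay in virtiofsd.
   */
  static bool handled_in_qemu(uint32_t opcode)
  {
      switch (opcode) {
      case FUSE_READ:
      case FUSE_WRITE:
          return true;
      default:
          return false;
      }
  }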
> >>
> >> Sorry for not catching your point. I'd prefer virtiofsd to handle
> >> the READ/WRITE requests and QEMU to handle the metadata requests, as
> >> virtiofsd is good at data-plane processing thanks to its thread pool
> >> and (maybe in the future) CPU affinity. As you said, virtiofsd is
> >> just acting as a vhost-user device, which should care less about
> >> control requests.
> >>
> >> If our concern is improving mmap/write/read performance, why not add
> >> a delayed worker for munmap, which could reduce the number of munmap
> >> calls? That way virtiofsd could still handle both data and metadata
> >> requests.
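(A rough sketch of the kind of delayed-unmap worker being suggested;
all the names here are made up, and a real version would want to reuse
still-cached mappings rather than merely batch the munmap() calls:)

  #include <pthread.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* One deferred unmap request (illustrative only). */
  struct deferred_unmap {
      void *addr;
      size_t len;
      struct deferred_unmap *next;
  };

  static pthread_mutex_t unmap_lock = PTHREAD_MUTEX_INITIALIZER;
  static struct deferred_unmap *unmap_pending;

  /* Queue the region instead of calling munmap() immediately. */
  static void unmap_later(void *addr, size_t len)
  {
      struct deferred_unmap *d = malloc(sizeof(*d));
      d->addr = addr;
      d->len = len;
      pthread_mutex_lock(&unmap_lock);
      d->next = unmap_pending;
      unmap_pending = d;
      pthread_mutex_unlock(&unmap_lock);
  }

  /* Worker thread: issue the accumulated munmap() calls in batches. */
  static void *unmap_worker(void *arg)
  {
      for (;;) {
          sleep(1);
          pthread_mutex_lock(&unmap_lock);
          struct deferred_unmap *d = unmap_pending;
          unmap_pending = NULL;
          pthread_mutex_unlock(&unmap_lock);
          while (d) {
              struct deferred_unmap *next = d->next;
              munmap(d->addr, d->len);
              free(d);
              d = next;
          }
      }
      return NULL;
  }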
> > 
> > Doing READ/WRITE in QEMU solves the problem that vhost-user slaves only
> > have access to guest RAM regions.  If a guest transfers other memory,
> > like an address in the DAX Window, to/from the vhost-user device then
> > virtqueue buffer address translation fails.
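(The failure mode in miniature; the table below is modeled loosely on
the vhost-user memory-region table, and the names are illustrative:)

  #include <stddef.h>
  #include <stdint.h>

  /* Simplified view of one guest RAM region the slave knows about. */
  struct mem_region {
      uint64_t guest_phys_addr;  /* region start in guest physical space */
      uint64_t size;
      void    *host_virt_addr;   /* where the slave has it mmapped */
  };

  /*
   * Translate a guest physical address into a slave-local pointer.
   * A DAX window address falls outside every registered region, so
   * translation returns NULL: the "bad address" case.
   */
  static void *gpa_to_hva(const struct mem_region *regions, size_t n,
                          uint64_t gpa)
  {
      for (size_t i = 0; i < n; i++) {
          if (gpa >= regions[i].guest_phys_addr &&
              gpa < regions[i].guest_phys_addr + regions[i].size) {
              return (char *)regions[i].host_virt_addr +
                     (gpa - regions[i].guest_phys_addr);
          }
      }
      return NULL;
  }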
> > 
> > Dave added a code path that bounces such accesses through the QEMU
> > process using the VHOST_USER_SLAVE_FS_IO slavefd request, but it would
> > be simpler, faster, and cleaner to do I/O in QEMU in the first place.
> > 
> > What I don't like about moving READ/WRITE into QEMU is that we need to
> > use even more virtqueues for multiqueue operation :).
> > 
> > Stefan
> 
> Thanks for your detailed explanation. If DAX is not good for small
> files, shall we just let users choose the I/O path according to their
> use cases?

The problem is how/when to decide, and where to keep policy like that.
My understanding is that it's also tricky for the kernel to flip any
one file between DAX and non-DAX.

So without knowing access patterns it's tricky.
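(To illustrate the policy problem: even the simplest per-file rule,
say a size threshold, bakes in a guess about the workload. A toy
example, nothing more:)

  #include <stdbool.h>
  #include <stdint.h>
  #include <sys/stat.h>

  /* Arbitrary cut-off: only DAX-map files of at least 2 MiB. */
  #define DAX_MIN_FILE_SIZE (2u * 1024 * 1024)

  /*
   * Toy policy: use the DAX window only for "large" files. A workload
   * full of hot small files defeats it, which is exactly the problem
   * with picking an I/O path without knowing the access pattern.
   */
  static bool should_use_dax(const struct stat *st)
  {
      return (uint64_t)st->st_size >= DAX_MIN_FILE_SIZE;
  }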

Dave

--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


