qemu-devel

Re: Some performance numbers for virtiofs, DAX and virtio-9p


From: Vivek Goyal
Subject: Re: Some performance numbers for virtiofs, DAX and virtio-9p
Date: Fri, 11 Dec 2020 15:01:08 -0500

On Fri, Dec 11, 2020 at 02:25:17PM -0500, Vivek Goyal wrote:
> On Fri, Dec 11, 2020 at 06:29:56PM +0000, Dr. David Alan Gilbert wrote:
> 
> [..]
> > > > 
> > > > Could we measure at what point a large window size actually makes
> > > > performance worse?
> > > 
> > > Will do. Will run tests with varying window sizes (small to large)
> > > and see how it impacts performance for the same workload with the
> > > same guest memory.
> > 
> > I wonder how realistic it is though;  it makes some sense if you have a
> > scenario like a fairly small root filesystem - something tractable;  but
> > if you have a large FS you're not realistically going to be able to set
> > the cache size to match it - that's why it's a cache!
> 
> I think it's more about the active dataset size, not necessarily the
> total FS size. The FS might be big, but if the application is not
> accessing all of it in a reasonably small time window, then it does
> not matter.
> 
> What worries me most is that the cost of reclaiming a dax range seems
> too high (or keeps the process blocked for long enough) that it kills
> the performance. I will need to revisit the reclaim path and see if I
> can optimize something.

I see that while reclaiming a range, we are sending a removemapping
command to virtiofsd. We are holding locks so that no new mappings
can be added to that inode while the reclaim is in progress.
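
Roughly, the reclaim path has the following shape. This is only an
illustrative sketch; the function and field names below are made up
for the example and are not the actual fs/fuse/dax.c identifiers:

    /*
     * Illustrative sketch of reclaiming one DAX window range. The
     * important part is that the synchronous removemapping round trip
     * to virtiofsd happens while the inode's mapping lock is held, so
     * any task that wants to set up a new mapping on this inode waits
     * for the whole round trip.
     */
    static int reclaim_one_dax_range(struct fuse_conn *fc, struct inode *inode,
                                     struct fuse_dax_mapping *dmap)
    {
            struct fuse_inode *fi = get_fuse_inode(inode);
            int err;

            down_write(&fi->dmap_sem);      /* no new mappings from here on */

            /* Drop guest page table entries pointing into this range. */
            unmap_mapping_range(inode->i_mapping, dmap->start, dmap->length, 0);

            /*
             * Ask virtiofsd to tear down the host-side mapping backing
             * this window slot (FUSE_REMOVEMAPPING). This synchronous
             * request is the expensive step.
             */
            err = fuse_send_removemapping(inode, dmap);
            if (!err)
                    dmap_add_to_free_pool(fc, dmap);  /* slot can be reused */

            up_write(&fi->dmap_sem);
            return err;
    }

So everything else on that inode that needs a new mapping ends up
serialized behind that request/reply to virtiofsd.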

We had decided that removemapping is not strictly necessary. It helps
in the sense that if the guest is not using a mapping, qemu will unmap
it on the host too.

If we stop sending removemapping, reclaim performance might improve
significantly. The downside is that the host will keep something
mapped even though the guest is not using it anymore, and those
mappings will be cleaned up only when the guest shuts down.
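
In terms of the sketch above, the change being considered amounts to
skipping (or making optional) the synchronous step, for example
(again, purely illustrative):

            /*
             * Hypothetical option: skip the removemapping round trip
             * during reclaim. The guest-side range is still unmapped and
             * reused, but the host side of this DAX window slot stays
             * mapped until the guest shuts down.
             */
            if (fc->dax_removemapping_on_reclaim)
                    err = fuse_send_removemapping(inode, dmap);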

Vivek



