Re: [PATCH v2 1/4] softmmu: Support concurrent bounce buffers


From: Markus Armbruster
Subject: Re: [PATCH v2 1/4] softmmu: Support concurrent bounce buffers
Date: Fri, 01 Sep 2023 15:41:52 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Stefan Hajnoczi <stefanha@redhat.com> writes:

> On Wed, Aug 23, 2023 at 04:54:06PM -0400, Peter Xu wrote:
>> On Wed, Aug 23, 2023 at 10:08:08PM +0200, Mattias Nissler wrote:
>> > On Wed, Aug 23, 2023 at 7:35 PM Peter Xu <peterx@redhat.com> wrote:
>> > > On Wed, Aug 23, 2023 at 02:29:02AM -0700, Mattias Nissler wrote:
>> > > > diff --git a/softmmu/vl.c b/softmmu/vl.c
>> > > > index b0b96f67fa..dbe52f5ea1 100644
>> > > > --- a/softmmu/vl.c
>> > > > +++ b/softmmu/vl.c
>> > > > @@ -3469,6 +3469,12 @@ void qemu_init(int argc, char **argv)
>> > > >                  exit(1);
>> > > >  #endif
>> > > >                  break;
>> > > > +            case QEMU_OPTION_max_bounce_buffer_size:
>> > > > +                if (qemu_strtosz(optarg, NULL, &max_bounce_buffer_size) < 0) {
>> > > > +                    error_report("invalid -max-bounce-buffer-size value");
>> > > > +                    exit(1);
>> > > > +                }
>> > > > +                break;
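
(For context: the hunk above only adds the option-parsing arm in vl.c. A new
command-line option like this is normally also declared in qemu-options.hx;
a rough sketch of that declaration, with help text that is illustrative
rather than taken from the patch, might look like:

    DEF("max-bounce-buffer-size", HAS_ARG, QEMU_OPTION_max_bounce_buffer_size,
        "-max-bounce-buffer-size size\n"
        "                set the limit in bytes for DMA bounce buffer allocations\n",
        QEMU_ARCH_ALL)

Since qemu_strtosz() accepts size suffixes, an invocation such as
-max-bounce-buffer-size 4M would parse as 4 MiB.)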
>> > >
>> > > PS: I had a vague memory that we do not recommend adding more qemu
>> > > cmdline options, but I don't know enough about the plan to say anything
>> > > definite.
>> > 
>> > I am aware of that, and I'm really not happy with the command line
>> > option myself. Consider the command line flag a straw man I put in to
>> > see whether any reviewers have better ideas :)
>> > 
>> > More seriously, I actually did look around to see whether I can add
>> > the parameter to one of the existing option groupings somewhere, but
>> > neither do I have a suitable QOM object that I can attach the
>> > parameter to, nor did I find any global option group that fits: this
>> > is not really memory configuration, and it's not really CPU
>> > configuration, it's more related to shared device model
>> > infrastructure... If you have a good idea for a home for this, I'm all
>> > ears.
>> 
>> No good & simple suggestion here, sorry.  We can keep the option there
>> until someone jumps in; a better alternative may come along then.
>> 
>> After all, I expect that if we choose a sensible enough default value, this
>> new option shouldn't need to be used by anyone for real.
>
> QEMU commits to stability in its external interfaces. Once the
> command-line option is added, it needs to be supported in the future or
> go through the deprecation process. I think we should agree on the
> command-line option now.
>
> Two ways to avoid the issue:
> 1. Drop the command-line option until someone needs it.

Avoiding unneeded configuration knobs is always good.

> 2. Make it an experimental option (with an "x-" prefix).

Fine if actual experiments are planned.

Also fine if it's a development or debugging aid.

> The closest to a proper solution that I found was adding it as a
> -machine property. What bothers me is that if QEMU supports
> multi-machine emulation in a single process some day, then the property
> wouldn't make sense, since it's global rather than specific to a machine.
>
> CCing Markus Armbruster for ideas.
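
(A rough sketch of the -machine property idea above, assuming a hypothetical
max_bounce_buffer_size field on MachineState; it illustrates the usual QOM
class-property pattern, not the actual patch:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qapi/visitor.h"
    #include "hw/boards.h"

    static void machine_get_max_bounce_buffer_size(Object *obj, Visitor *v,
                                                   const char *name,
                                                   void *opaque, Error **errp)
    {
        MachineState *ms = MACHINE(obj);
        uint64_t value = ms->max_bounce_buffer_size; /* hypothetical field */

        visit_type_size(v, name, &value, errp);
    }

    static void machine_set_max_bounce_buffer_size(Object *obj, Visitor *v,
                                                   const char *name,
                                                   void *opaque, Error **errp)
    {
        MachineState *ms = MACHINE(obj);
        uint64_t value;

        if (!visit_type_size(v, name, &value, errp)) {
            return;
        }
        ms->max_bounce_buffer_size = value; /* hypothetical field */
    }

    /* Hypothetical registration helper, called from machine class init: */
    static void machine_add_bounce_limit_property(ObjectClass *oc)
    {
        object_class_property_add(oc, "max-bounce-buffer-size", "size",
                                  machine_get_max_bounce_buffer_size,
                                  machine_set_max_bounce_buffer_size,
                                  NULL, NULL);
    }

The limit could then be set per machine, e.g. with
-machine q35,max-bounce-buffer-size=4M.)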

I'm afraid I'm lacking context.  Glancing at the patch...

    ``-max-bounce-buffer-size size``
        Set the limit in bytes for DMA bounce buffer allocations.

        DMA bounce buffers are used when device models request memory-mapped
        access to memory regions that can't be directly mapped by the QEMU
        process, so the memory must be read from or written to a temporary
        local buffer for the device model to work with. This is the case
        e.g. for I/O memory regions, and when running in multi-process mode
        without shared access to memory.

        Whether bounce buffering is necessary depends heavily on the device
        model implementation. Some devices use explicit DMA read and write
        operations which do not require bounce buffers. Some devices, notably
        storage, will retry a failed DMA map request after bounce buffer
        space becomes available again. Most other devices will bail out when
        encountering map request failures, which will typically appear to the
        guest as a hardware error.

        Suitable bounce buffer size values depend on the workload and guest
        configuration. A few kilobytes up to a few megabytes are common sizes
        encountered in practice.

Sounds quite device-specific.  Why isn't this configured per device?
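
(To illustrate the map-request behavior the excerpt describes: device models
typically reach the bounce buffer machinery through address_space_map(). A
minimal sketch, with an illustrative function name and not taken from the
patch:

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    static void dma_transfer_example(AddressSpace *as, hwaddr addr,
                                     hwaddr len, bool is_write)
    {
        hwaddr mapped_len = len;
        void *buf = address_space_map(as, addr, &mapped_len, is_write,
                                      MEMTXATTRS_UNSPECIFIED);

        if (!buf) {
            /*
             * Map failed.  For regions that require bounce buffering this
             * happens once the configured limit is exhausted: storage
             * devices typically queue the request and retry later, while
             * most other device models report a hardware error to the
             * guest.
             */
            return;
        }

        /* ... perform the DMA to or from buf ... */

        /* For writes, unmapping copies a bounce buffer back to the target
         * region before releasing it. */
        address_space_unmap(as, buf, mapped_len, is_write, mapped_len);
    }

Direct mappings of guest RAM do not consume bounce buffer space; only
requests that hit I/O regions, or memory that isn't shared in multi-process
setups, count against the limit.)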



