Re: Intention to work on GSoC project


From: Sahil
Subject: Re: Intention to work on GSoC project
Date: Wed, 03 Apr 2024 20:06:18 +0530

Hi,

Thank you for the reply.

On Tuesday, April 2, 2024 5:08:24 PM IST Eugenio Perez Martin wrote:
> [...]
> > > > Q2.
> > > > In the Red Hat article, just below the first listing ("Memory layout
> > > > of a packed virtqueue descriptor"), there's the following line
> > > > referring to the buffer id in "virtq_desc":
> > > > > This time, the id field is not an index for the device to look for
> > > > > the buffer: it is an opaque value for it, only has meaning for the
> > > > > driver.
> > > > 
> > > > But the device returns the buffer id when it writes the used
> > > > descriptor to the descriptor ring. The "only has meaning for the
> > > > driver" part has got me a little confused. Which buffer id is this
> > > > that the device returns? Is it related to the buffer id in the
> > > > available descriptor?
> > > 
> > > In my understanding, the buffer id is the value that the avail
> > > descriptor carries to identify the buffer when the driver adds
> > > descriptors to the table. The device will return the buffer id of the
> > > processed descriptor (or of the last descriptor in a chain), writing
> > > it to the descriptor slot that the used idx refers to (the first one
> > > in the chain). Then the used idx increments.
> > > 
> > > The Packed Virtqueue blog [1] is helpful, but some details in the
> > > examples are confusing me.
> > > 
> > > Q1.
> > > In the last step of the two-entries descriptor table example, it says
> > > both buffers #0 and #1 are available for the device. I understand
> > > descriptor[0] is available and descriptor[1] is not, but there is no
> > > ID #0 now. So did the device get buffer #0 via a notification
> > > beforehand? If so, does it mean buffer #0 will be lost when
> > > notifications are disabled?
> 
> I guess you mean the table labeled "Figure: Full two-entries descriptor
> table".
> 
> Take into account that the descriptor table is not the state of all
> the descriptors. That information must be maintained by the device and
> the driver internally.
> 
> The descriptor table is used as a circular buffer, where one part is
> writable by the driver and the other part is writable by the device.
> For the device to overwrite the descriptor table entry where descriptor
> id 0 used to be does not mean that descriptor id 0 is used. It
> just means that the device communicates to the driver that descriptor
> 1 is used, and both sides need to keep the descriptor state
> coherent.
> 
> > I too have a similar question, and understanding the relation between
> > buffer ids in the used and available descriptors might give more
> > insight into this. For available descriptors, the buffer id is used to
> > associate descriptors with a particular buffer. I am still not very
> > sure about ids in used descriptors.
> > 
> > Regarding Q1, both buffers #0 and #1 are available. In the mentioned
> > figure, both descriptor[0] and descriptor[1] are available. This figure
> > follows the figure with the caption "Using first buffer out of order". So
> > in the first figure the device reads buffer #1 and writes the used
> > descriptor but it still has buffer #0 to read. That still belongs to the
> > device while buffer #1 can now be handled by the driver once again. So in
> > the next figure, the driver makes buffer #1 available again. The device
> > can still read buffer #0 from the previous batch of available
> > descriptors.
> > 
> > Based on what I have understood, the driver can't touch the descriptor
> > corresponding to buffer #0 until the device acknowledges it. I did find
> > the figure a little confusing as well. I think once the meaning of the
> > buffer id is clear from the driver's and device's perspective, it'll be
> > easier to understand the figure.
> 
> I think you got it right. Please let me know if you have further questions.

I would like to clarify one thing in the figure "Full two-entries descriptor
table". The driver can only overwrite a used descriptor in the descriptor
ring, right? And likewise for the device? So in the figure, the driver will
have to wait until descriptor[1] is used before it can overwrite it?

Suppose the device marks descriptor[0] as used. I think the driver will
not be able to overwrite that descriptor entry because it has to proceed
in order and is currently at descriptor[1]. Is that correct? Is it
possible for the driver to go "backwards" in the descriptor ring?
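
To make sure I'm picturing this correctly, here is a rough sketch in C of
how I imagine the driver consuming used entries strictly in ring order
with a wrap counter, so it can never go back to reclaim an earlier slot
before its turn. The names are made up by me and this is not actual QEMU
code, just my mental model:

    #include <stdbool.h>
    #include <stdint.h>

    #define RING_SIZE 2

    struct packed_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;     /* bit 7 = AVAIL, bit 15 = USED */
    };

    struct driver_state {
        struct packed_desc *ring;
        uint16_t next_used;      /* next slot the driver expects to be used */
        bool used_wrap_counter;  /* flips every time next_used wraps */
    };

    /* An entry is used when its AVAIL and USED bits are equal and match
     * the driver's wrap counter. */
    static bool desc_is_used(const struct packed_desc *d, bool wrap)
    {
        bool avail = d->flags & (1 << 7);
        bool used = d->flags & (1 << 15);

        return avail == used && used == wrap;
    }

    /* The driver only moves forward: if ring[next_used] is not used yet,
     * it must wait, even if a later slot already looks used. */
    static bool driver_pop_used(struct driver_state *s, uint16_t *id)
    {
        struct packed_desc *d = &s->ring[s->next_used];

        if (!desc_is_used(d, s->used_wrap_counter)) {
            return false;
        }
        *id = d->id;
        if (++s->next_used == RING_SIZE) {
            s->next_used = 0;
            s->used_wrap_counter = !s->used_wrap_counter;
        }
        return true;
    }

Is that roughly right, or is there some case where the driver is allowed
to reclaim an entry out of order?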

> > I am also not very sure about what happens when notifications are
> > disabled. I'll have to read up on that again. But I believe the driver
> > still won't be able to touch #0 until the device uses it.
> 
> If one side disables notifications it needs to check the indexes or the
> flags by its own means: timers, reading the memory in a busy loop, etc.

Understood. Thank you for the clarification.

I have some questions from the "Virtio live migration technical deep
dive" article [1].

Q1.
In the paragraph just above Figure 6, there is the following line:
> the vhost kernel thread and QEMU may run in different CPU threads,
> so these writes must be synchronized with QEMU cleaning of the dirty
> bitmap, and this write must be seen strictly after the modifications of
> the guest memory by the QEMU thread.

I am not clear on the last part of the statement. The modification of guest
memory is being done by the vhost device and not by the QEMU thread, right?
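
Either way, just to check that I understand the ordering requirement, here
is a toy sketch of what I think "synchronized" means here: whoever writes
guest memory has to make that write visible before setting the dirty bit,
and the side cleaning the bitmap has to clear the bit before copying the
page, so no modification is missed. The names are made up and this is not
the real vhost/QEMU code:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    extern uint8_t guest_mem[];          /* shared guest memory */
    extern atomic_ulong dirty_bitmap[];  /* one bit per guest page */

    /* Writer side (e.g. the vhost worker thread). */
    static void write_and_log(uint64_t page, const void *data, size_t len)
    {
        memcpy(&guest_mem[page * PAGE_SIZE], data, len);
        /* Release: the memory write above must be visible before the
         * dirty bit can be observed. */
        atomic_fetch_or_explicit(&dirty_bitmap[page / 64],
                                 1UL << (page % 64), memory_order_release);
    }

    /* Reader side (e.g. the thread cleaning the dirty bitmap). */
    static int sync_dirty_page(uint64_t page, void *out)
    {
        unsigned long mask = 1UL << (page % 64);
        /* Acquire: if we see the bit, we also see the guest memory write
         * that preceded it. The bit is cleared before the page is read. */
        unsigned long old = atomic_fetch_and_explicit(&dirty_bitmap[page / 64],
                                                      ~mask,
                                                      memory_order_acquire);

        if (!(old & mask)) {
            return 0;   /* page was clean, nothing to send */
        }
        memcpy(out, &guest_mem[page * PAGE_SIZE], PAGE_SIZE);
        return 1;       /* page copied and will be re-sent */
    }

Is that the kind of ordering the article refers to, or does the QEMU
thread itself also write guest memory in some path I'm missing?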

Q2.
In the first point of the "Dynamic device state: virtqueue state" section:
> The guest makes available N descriptors at the source of the migration,
> so its avail idx member in the avail idx is N.

I think there's a typo here: it should read "...avail idx member in the
avail ring is N" rather than "...avail idx member in the avail idx is N".

Regarding the implementation of this project, can it be broken down into
the following two parts?
1. implementing packed virtqueues in QEMU, and
2. providing mechanisms for (live) migration to work with packed
    virtqueues.

I am ready to start working on the implementation. In one of your
previous emails you had talked about moving the packed virtqueue-related
implementation from the kernel's drivers/virtio/virtio_ring.c
into vhost_shadow_virtqueue.c.
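
For reference, this is the packed ring layout that code would be dealing
with; I'm reproducing it from my reading of the kernel's
include/uapi/linux/virtio_ring.h, so please correct me if I'm misreading
it:

    #include <linux/types.h>

    struct vring_packed_desc {
        __le64 addr;    /* buffer guest-physical address */
        __le32 len;     /* buffer length */
        __le16 id;      /* buffer id: opaque to the device */
        __le16 flags;   /* VRING_DESC_F_* plus the two wrap bits below */
    };

    /* Bit positions (not masks) of the avail/used wrap bits in flags. */
    #define VRING_PACKED_DESC_F_AVAIL   7
    #define VRING_PACKED_DESC_F_USED    15

    /* Driver and device event suppression areas use this layout. */
    struct vring_packed_desc_event {
        __le16 off_wrap;  /* descriptor ring change event offset/wrap */
        __le16 flags;     /* descriptor ring change event flags */
    };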

My plan is to also understand how the split virtqueue has been implemented
in QEMU. I think that'll be helpful when moving the kernel's implementation
to QEMU.

Please let me know if I should change my approach.

Thanks,
Sahil




