From: Laurent Vivier
Subject: Re: [PATCH] virtio-serial-bus: fix memory leak while attach virtio-serial-bus
Date: Wed, 4 Dec 2019 10:03:07 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.2

On 04/12/2019 04:02, pannengyuan wrote:
> 
> 
> On 2019/12/3 16:32, Laurent Vivier wrote:
>> On 03/12/2019 01:53, pannengyuan wrote:
>>>
>>>
>>> On 2019/12/2 21:58, Laurent Vivier wrote:
>>>> On 02/12/2019 12:15, address@hidden wrote:
>>>>> From: PanNengyuan <address@hidden>
>>>>>
>>>>> ivqs/ovqs/c_ivq/c_ovq are not cleaned up in
>>>>> virtio_serial_device_unrealize; the memory leak stack is as below:
>>>>>
>>>>> Direct leak of 1290240 byte(s) in 180 object(s) allocated from:
>>>>>     #0 0x7fc9bfc27560 in calloc (/usr/lib64/libasan.so.3+0xc7560)
>>>>>     #1 0x7fc9bed6f015 in g_malloc0 (/usr/lib64/libglib-2.0.so.0+0x50015)
>>>>>     #2 0x5650e02b83e7 in virtio_add_queue /mnt/sdb/qemu-4.2.0-rc0/hw/virtio/virtio.c:2327
>>>>>     #3 0x5650e02847b5 in virtio_serial_device_realize /mnt/sdb/qemu-4.2.0-rc0/hw/char/virtio-serial-bus.c:1089
>>>>>     #4 0x5650e02b56a7 in virtio_device_realize /mnt/sdb/qemu-4.2.0-rc0/hw/virtio/virtio.c:3504
>>>>>     #5 0x5650e03bf031 in device_set_realized /mnt/sdb/qemu-4.2.0-rc0/hw/core/qdev.c:876
>>>>>     #6 0x5650e0531efd in property_set_bool /mnt/sdb/qemu-4.2.0-rc0/qom/object.c:2080
>>>>>     #7 0x5650e053650e in object_property_set_qobject /mnt/sdb/qemu-4.2.0-rc0/qom/qom-qobject.c:26
>>>>>     #8 0x5650e0533e14 in object_property_set_bool /mnt/sdb/qemu-4.2.0-rc0/qom/object.c:1338
>>>>>     #9 0x5650e04c0e37 in virtio_pci_realize /mnt/sdb/qemu-4.2.0-rc0/hw/virtio/virtio-pci.c:1801
>>>>>
>>>>> Reported-by: Euler Robot <address@hidden>
>>>>> Signed-off-by: PanNengyuan <address@hidden>
>>>>> ---
>>>>>  hw/char/virtio-serial-bus.c | 6 ++++++
>>>>>  1 file changed, 6 insertions(+)
>>>>>
>>>>> diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
>>>>> index 3325904..da9019a 100644
>>>>> --- a/hw/char/virtio-serial-bus.c
>>>>> +++ b/hw/char/virtio-serial-bus.c
>>>>> @@ -1126,9 +1126,15 @@ static void virtio_serial_device_unrealize(DeviceState *dev, Error **errp)
>>>>>  {
>>>>>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>>>>>      VirtIOSerial *vser = VIRTIO_SERIAL(dev);
>>>>> +    int i;
>>>>>  
>>>>>      QLIST_REMOVE(vser, next);
>>>>>  
>>>>> +    for (i = 0; i <= vser->bus.max_nr_ports; i++) {
>>>>> +        virtio_del_queue(vdev, 2 * i);
>>>>> +        virtio_del_queue(vdev, 2 * i + 1);
>>>>> +    }
>>>>> +
>>>>
>>>> According to virtio_serial_device_realize() and the number of
>>>> virtio_add_queue(), I think you have more queues to delete:
>>>>
>>>>   4 + 2 * vser->bus.max_nr_ports
>>>>
>>>> (for vser->ivqs[0], vser->ovqs[0], vser->c_ivq, vser->c_ovq,
>>>> vser->ivqs[i], vser->ovqs[i]).
>>>>
>>>> Thanks,
>>>> Laurent
>>>>
>>>>
>>> Thanks, but I think the queue count is correct; the queues added in
>>> virtio_serial_device_realize are as follows:
>>>
>>> // here is 2
>>> vser->ivqs[0] = virtio_add_queue(vdev, 128, handle_input);
>>> vser->ovqs[0] = virtio_add_queue(vdev, 128, handle_output);
>>>
>>> // here is 2
>>> vser->c_ivq = virtio_add_queue(vdev, 32, control_in);
>>> vser->c_ovq = virtio_add_queue(vdev, 32, control_out);
>>>
>>> // here 2 * (max_nr_ports - 1)  ----- i is from 1 to max_nr_ports - 1
>>> for (i = 1; i < vser->bus.max_nr_ports; i++) {
>>>     vser->ivqs[i] = virtio_add_queue(vdev, 128, handle_input);
>>>     vser->ovqs[i] = virtio_add_queue(vdev, 128, handle_output);
>>> }
>>>
>>> so the total number of queues is:  2 * (vser->bus.max_nr_ports + 1)
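
(To put numbers on that counting, assuming for illustration the common
virtio-serial default of max_nr_ports = 31: realize adds
2 + 2 + 2 * (31 - 1) = 64 queues, which matches 2 * (31 + 1).)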
>>>
>>
>> Yes, you're right. A comment in the code would have helped, or it could
>> have been written more clearly like:
>>
>> for (i = 0; i < 2 * (vser->bus.max_nr_ports + 1); i++) {
>>     virtio_del_queue(vdev, i);
>> }
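
As a quick sanity check that this flat loop and the paired loop from the
patch remove the same queues, here is a minimal standalone sketch (plain C,
not QEMU code; MAX_NR_PORTS = 31 is only an example value standing in for
vser->bus.max_nr_ports):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Example value only; the real value comes from vser->bus.max_nr_ports. */
#define MAX_NR_PORTS 31u

int main(void)
{
    const unsigned total = 2 * (MAX_NR_PORTS + 1);
    bool flat[2 * (MAX_NR_PORTS + 1)] = { false };
    bool paired[2 * (MAX_NR_PORTS + 1)] = { false };
    unsigned i;

    /* Flat form: delete indices 0 .. 2 * (max_nr_ports + 1) - 1. */
    for (i = 0; i < total; i++) {
        flat[i] = true;
    }

    /* Paired form from the patch: delete 2*i and 2*i+1 for i = 0 .. max_nr_ports. */
    for (i = 0; i <= MAX_NR_PORTS; i++) {
        paired[2 * i] = true;
        paired[2 * i + 1] = true;
    }

    /* Both forms mark exactly the same set of queue indices. */
    assert(memcmp(flat, paired, sizeof(flat)) == 0);
    printf("both loops cover the same %u queue indices\n", total);
    return 0;
}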
>>
>> Thanks,
>> Laurent
>>
>>
> Yes, it would be helpful. Michael S. Tsirkin posted another way to make
> it clearer; I will reuse it in the next version.

Yes, the proposition from Michael is much better.

Thanks,
Laurent



