
Re: [Qemu-devel] [PATCH 6/6] RFH: We lost "connect" events


From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH 6/6] RFH: We lost "connect" events
Date: Mon, 19 Aug 2019 12:50:58 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.2 (gnu/linux)

Daniel P. Berrangé <address@hidden> wrote:
> On Mon, Aug 19, 2019 at 12:46:20PM +0200, Juan Quintela wrote:
>> Daniel P. Berrangé <address@hidden> wrote:
>> > On Wed, Aug 14, 2019 at 04:02:18AM +0200, Juan Quintela wrote:
>> >> When we have lots of channels, sometimes multifd migration fails
>> >> with the following error:
>> >> 
>> >> Any good ideas?
>> >
>> > In inet_listen_saddr() we call
>> >
>> >     if (!listen(slisten, 1)) {
>> >
>> > note the second parameter sets the socket backlog, which is the max
>> > number of pending socket connections we allow. My guess is that the
>> > target QEMU is not accepting incoming connections quickly enough and
>> > thus you hit the limit & the kernel starts dropping the incoming
>> > connections.
>> >
>> > As a quick test, just hack this code to pass a value of 100 and see
>> > if it makes your test reliable. If it does, then we'll need to figure
>> > out a nice way to handle backlog instead of hardcoding it at 1.
>> 
>> Nice.
>> 
>> With this change I can create 100 channels on a 4 core machine without
>> any trouble.
>> 
>> How can we proceed from here?
>
> I don't think we want to expose this in the QAPI schema for the socket
> address, since the correct value is really something that QEMU should
> figure out based on usage context.
>
> Thus, I think we'll have to make it an explicit parameter to the
> qio_channel_socket_listen_{sync,async} APIs, and socket_listen()
> and inet_listen_saddr(), etc. Then the migration code can pass in
> a sensible value based on multifd usage.
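A rough sketch of what threading that backlog through could look like.
The prototypes, file locations and the migrate_multifd_channels() call
site below are illustrative assumptions, not a final API:

    /* util/qemu-sockets.c -- "num" would be the backlog handed to listen() */
    int inet_listen_saddr(InetSocketAddress *saddr, int port_offset,
                          int num, Error **errp);
    int socket_listen(const SocketAddress *addr, int num, Error **errp);

    /* io/channel-socket.c */
    int qio_channel_socket_listen_sync(QIOChannelSocket *ioc,
                                       SocketAddress *addr,
                                       int num, Error **errp);

    /* migration/socket.c: the incoming side could then size the backlog
     * from the configured number of multifd channels instead of relying
     * on the hardcoded 1 (call site abbreviated): */
    int num = migrate_multifd_channels();
    qio_channel_socket_listen_sync(listen_ioc, saddr, num, errp);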

ok with me.  I will give it a try.

Thanks for the tip.

Later, Juan.
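For reference, the behaviour Daniel points at is plain listen(2) backlog
semantics. Below is a stand-alone illustration of the pattern, demo code
only, not QEMU's inet_listen_saddr(); the helper name and port number are
made up for the example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Bind a TCP listener and pass an explicit backlog to listen().
     * With backlog 1 and a listener that is slow to call accept(), the
     * kernel only queues a couple of completed connections; further
     * attempts can be dropped, which matches the multifd symptom. */
    static int listen_with_backlog(int port, int backlog)
    {
        int slisten = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(port),
            .sin_addr.s_addr = htonl(INADDR_ANY),
        };

        if (slisten < 0 ||
            bind(slisten, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
            listen(slisten, backlog) < 0) {   /* the argument at issue */
            perror("listen_with_backlog");
            if (slisten >= 0) {
                close(slisten);
            }
            return -1;
        }
        return slisten;
    }

    int main(int argc, char **argv)
    {
        int backlog = argc > 1 ? atoi(argv[1]) : 1;   /* try 1 vs 100 */
        int fd = listen_with_backlog(4444, backlog);

        if (fd < 0) {
            return EXIT_FAILURE;
        }
        printf("listening on :4444 with backlog %d\n", backlog);
        pause();    /* never accept(), so the backlog is all there is */
        close(fd);
        return 0;
    }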


