Re: [PATCH v6] Work around vhost-user-blk-test hang
From: Michael S. Tsirkin
Subject: Re: [PATCH v6] Work around vhost-user-blk-test hang
Date: Tue, 19 Oct 2021 02:20:28 -0400
On Mon, Oct 18, 2021 at 10:33:02PM +0000, Raphael Norwitz wrote:
> On Mon, Oct 18, 2021 at 05:50:41PM -0400, Michael S. Tsirkin wrote:
> > On Thu, Oct 14, 2021 at 04:32:23AM +0000, Raphael Norwitz wrote:
> > > The vhost-user-blk-test qtest has been hanging intermittently for a
> > > while. The root cause is not yet fully understood, but the hang is
> > > impacting enough users that it is important to merge a workaround for
> > > it.
> > >
> > > The race which causes the hang occurs early on in vhost-user setup,
> > > where a vhost-user message is never received by the backend. Forcing
> > > QEMU to wait until the storage-daemon has had some time to initialize
> > > prevents the hang. Thus the existing storage-daemon pidfile option can
> > > be used to implement a workaround cleanly and effectively, since it
> > > creates a file only once the storage-daemon initialization is complete.
> > >
> > > This change implements a workaround for the vhost-user-blk-test hang by
> > > making QEMU wait until the storage-daemon has written out a pidfile
> > > before attempting to connect and send messages over the vhost-user
> > > socket.
> > >
> > > Some relevant mailing list discussions:
> > >
> > > [1]
> > > https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > >
> > > [2]
> > > https://lore.kernel.org/qemu-devel/YWaky%2FKVbS%2FKZjlV@stefanha-x1.localdomain/
> > >
> > >
> > > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > > Reviewed-by: Eric Blake <eblake@redhat.com>
> >
> >
> > Um. Does not seem to make things better for me:
> >
> > **
> > ERROR:../tests/qtest/vhost-user-blk-test.c:950:start_vhost_user_blk: assertion failed (retries < PIDFILE_RETRIES): (5 < 5)
> > ERROR qtest-x86_64/qos-test - Bail out! ERROR:../tests/qtest/vhost-user-blk-test.c:950:start_vhost_user_blk: assertion failed (retries < PIDFILE_RETRIES): (5 < 5)
> >
> > At this point I just disabled the test in meson. No need to make
> > everyone suffer.
>
> Makes sense. Do you still want to pursue the workaround?
>
> If so, can you share some details on how you're running the test?
>
> I've gone through 1000+ iterations using the script I posted here:
> https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> without hitting a failure.
Hmm my box was busy... now that it's idle I can't repro the failure.
> >
> >
> > > ---
> > > tests/qtest/vhost-user-blk-test.c | 29 ++++++++++++++++++++++++++++-
> > > 1 file changed, 28 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> > > index 6f108a1b62..c6626a286b 100644
> > > --- a/tests/qtest/vhost-user-blk-test.c
> > > +++ b/tests/qtest/vhost-user-blk-test.c
> > > @@ -24,6 +24,7 @@
> > > #define TEST_IMAGE_SIZE (64 * 1024 * 1024)
> > > #define QVIRTIO_BLK_TIMEOUT_US (30 * 1000 * 1000)
> > > #define PCI_SLOT_HP 0x06
> > > +#define PIDFILE_RETRIES 5
> > >
> > > typedef struct {
> > > pid_t pid;
> >
> >
> > Don't like the arbitrary retries counter.
> >
> > Let's warn maybe, but on a busy machine we might not complete this
> > in time ...
>
> So you would like it to warn and keep trying forever? Or would you
> rather set a much more lenient deadline? (1 min? 5 min?)
I'm not entirely sure ... Maybe 1 min is enough.
But we want to check that the daemon is alive.
And maybe print something about still waiting for it to come up
every X seconds?
> >
> >
> > > @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > > int num_queues)
> > > {
> > > const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> > > - int i;
> > > + int i, retries;
> > > + char *daemon_pidfile_path;
> > > gchar *img_path;
> > > GString *storage_daemon_command = g_string_new(NULL);
> > > QemuStorageDaemonState *qsd;
> > > @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > > " -object memory-backend-memfd,id=mem,size=256M,share=on "
> > > " -M memory-backend=mem -m 256M ");
> > >
> > > + daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> > > +
> >
> > Ugh. Predictable paths directly in /tmp are problematic .. mktemp?
> >
>
> Ack
>
> > > for (i = 0; i < vus_instances; i++) {
> > > int fd;
> > > char *sock_path = create_listen_socket(&fd);
> > > @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > > i + 1, sock_path);
> > > }
> > >
> > > + g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> > > + daemon_pidfile_path);
> > > +
> > > g_test_message("starting vhost-user backend: %s",
> > > storage_daemon_command->str);
> > > pid_t pid = fork();
> > > @@ -930,7 +937,27 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > > execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
> > > exit(1);
> > > }
> > > +
> > > + /*
> > > +  * FIXME: The loop here ensures the storage-daemon has come up properly
> > > +  *        before allowing the test to proceed. This is a workaround for
> > > +  *        a race which used to cause the vhost-user-blk-test to hang. It
> > > +  *        should be deleted once the root cause is fully understood and
> > > +  *        fixed.
> > > +  */
> > > + retries = 0;
> > > + while (access(daemon_pidfile_path, F_OK) != 0) {
> > > + g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> > > +
> > > + retries++;
> > > + g_usleep(1000);
> > > + }
> > > +
> > > g_string_free(storage_daemon_command, true);
> > > + if (access(daemon_pidfile_path, F_OK) == 0) {
> > > + unlink(daemon_pidfile_path);
> > > + }
> > > + g_free(daemon_pidfile_path);
> > >
> > > qsd = g_new(QemuStorageDaemonState, 1);
> > > qsd->pid = pid;
> > > --
> > > 2.20.1
> >