qemu-devel
Re: [PATCH v11 11/21] jobs: group together API calls under the same job lock


From: Emanuele Giuseppe Esposito
Subject: Re: [PATCH v11 11/21] jobs: group together API calls under the same job lock
Date: Sun, 18 Sep 2022 18:51:02 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.2.0


On 14/09/2022 at 14:36, Vladimir Sementsov-Ogievskiy wrote:
> On 8/26/22 16:20, Emanuele Giuseppe Esposito wrote:
>> Now that the API offers also _locked() functions, take advantage
>> of it and give also the caller control to take the lock and call
>> _locked functions.
>>
>> This makes sense especially when we have for loops, because it
>> makes no sense to have:
>>
>> for(job = job_next(); ...)
>>
>> where each job_next() takes the lock internally.
>> Instead we want
>>
>> JOB_LOCK_GUARD();
>> for(job = job_next_locked(); ...)
>>
>> In addition, protect also direct field accesses, by either creating a
>> new critical section or widening the existing ones.
>>
>> Note: at this stage, job_{lock/unlock} and job lock guard macros
>> are *nop*.
>>
>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
>> ---
>>   block.c            | 17 ++++++++++-------
>>   blockdev.c         | 14 ++++++++++----
>>   blockjob.c         | 35 ++++++++++++++++++++++-------------
>>   job-qmp.c          |  9 ++++++---
>>   job.c              |  7 +++++--
>>   monitor/qmp-cmds.c |  7 +++++--
>>   qemu-img.c         | 17 +++++++++++------
>>   7 files changed, 69 insertions(+), 37 deletions(-)
>>
> 
> [..]
> 
>> diff --git a/job.c b/job.c
>> index dcd561fd46..e336af0c1c 100644
>> --- a/job.c
>> +++ b/job.c
> 
> job.c hunks are unrelated in this patch, they should go to patch 05.
> 
>> @@ -672,7 +672,7 @@ void coroutine_fn job_pause_point(Job *job)
>>       job_pause_point_locked(job);
>>   }
>>
>> -void job_yield_locked(Job *job)
>> +static void job_yield_locked(Job *job)
>>   {
>>       assert(job->busy);
>>
>> @@ -1046,11 +1046,14 @@ static void job_completed_txn_abort_locked(Job *job)
>>   /* Called with job_mutex held, but releases it temporarily */
>>   static int job_prepare_locked(Job *job)
>>   {
>> +    int ret;
>> +
>>       GLOBAL_STATE_CODE();
>>       if (job->ret == 0 && job->driver->prepare) {
>>           job_unlock();
>> -        job->ret = job->driver->prepare(job);
>> +        ret = job->driver->prepare(job);
>>           job_lock();
>> +        job->ret = ret;
>>           job_update_rc_locked(job);
>>       }
>>       return job->ret;
>> diff --git a/monitor/qmp-cmds.c b/monitor/qmp-cmds.c
>> index 7314cd813d..81c8fdadf8 100644
>> --- a/monitor/qmp-cmds.c
>> +++ b/monitor/qmp-cmds.c
>> @@ -135,8 +135,11 @@ void qmp_cont(Error **errp)
>>           blk_iostatus_reset(blk);
>>       }
>>
>> -    for (job = block_job_next(NULL); job; job = block_job_next(job)) {
>> -        block_job_iostatus_reset(job);
>> +    WITH_JOB_LOCK_GUARD() {
>> +        for (job = block_job_next_locked(NULL); job;
>> +             job = block_job_next_locked(job)) {
>> +            block_job_iostatus_reset_locked(job);
>> +        }
>>       }
>>
>>       /* Continuing after completed migration. Images have been inactivated to
>> diff --git a/qemu-img.c b/qemu-img.c
>> index 7d4b33b3da..c8ae704166 100644
>> --- a/qemu-img.c
>> +++ b/qemu-img.c
>> @@ -912,25 +912,30 @@ static void run_block_job(BlockJob *job, Error **errp)
>>       int ret = 0;
>>         aio_context_acquire(aio_context);
>> -    job_ref(&job->job);
>> +    job_lock();
>> +    job_ref_locked(&job->job);
>>       do {
>>           float progress = 0.0f;
>> +        job_unlock();
>>           aio_poll(aio_context, true);
>>             progress_get_snapshot(&job->job.progress, &progress_current,
>> -                              &progress_total);
>> +                                &progress_total);
> 
> broken indentation
> 
>>           if (progress_total) {
>>             progress = (float)progress_current / progress_total * 100.f;
>>           }
>>           qemu_progress_print(progress, 0);
>> -    } while (!job_is_ready(&job->job) && !job_is_completed(&job->job));
>> +        job_lock();
>> +    } while (!job_is_ready_locked(&job->job) &&
>> +                !job_is_completed_locked(&job->job));
> 
> and here

Makes sense, I'll fix it.

Thank you,
Emanuele
> 
>>
>> -    if (!job_is_completed(&job->job)) {
>> -        ret = job_complete_sync(&job->job, errp);
>> +    if (!job_is_completed_locked(&job->job)) {
>> +        ret = job_complete_sync_locked(&job->job, errp);
>>       } else {
>>           ret = job->job.ret;
>>       }
>> -    job_unref(&job->job);
>> +    job_unref_locked(&job->job);
>> +    job_unlock();
>>       aio_context_release(aio_context);
>>         /* publish completion progress only when success */
> 
> 



