Re: [PATCH v11 00/14] TCG code quality tracking
From: Wu, Fei
Subject: Re: [PATCH v11 00/14] TCG code quality tracking
Date: Fri, 19 May 2023 09:16:16 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 4/22/2023 12:42 AM, Alex Bennée wrote:
>
> Fei Wu <fei2.wu@intel.com> writes:
>
>> This patch series was originally done by Vanderson and Alex in 2019. I
>> (Fei Wu) rebased it on the latest upstream from:
>> https://github.com/stsquad/qemu/tree/tcg/tbstats-and-perf-v10
>> and am sending it out for review per Alex's request; I will continue to
>> address any future review comments here. As it has been a very long time
>> and there were lots of conflicts during the rebase, it is my fault if I
>> introduced any problems in the process.
>
> Hi Fei,
>
> Thanks for picking this up. I can confirm that this applies cleanly to
> master and I have kicked the tyres and things still seem to work. I'm
> not sure if I can provide much review on code I wrote but a few things
> to point out:
>
> - there are a number of CI failures, mainly qatomic on 32 bit guests;
> see https://gitlab.com/stsquad/qemu/-/pipelines/844857279/failures.
> Maybe we just disable time accounting for 32 bit hosts?
>
I sent out the v12 series, which fixes some of the CI failures. qatomic is
not touched yet; the current code with CONFIG_PROFILER should have the same
issue. What is qemu's policy on 32 bit guest support?
Besides time, there are some other uint64_t counters accessed with qatomic,
such as TCGProfile.table_op_count. Should we switch them to size_t instead?
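To illustrate the trade-off, here is a minimal sketch using standard C11
atomics (not QEMU's qatomic wrappers): a 64-bit atomic counter is not
guaranteed to be lock-free on a 32-bit host, whereas a size_t counter always
matches the native register width, which is roughly the class of problem the
qatomic checks complain about.

    /* Minimal sketch with standard C11 atomics (not QEMU's qatomic API):
     * on a 32-bit host the 64-bit counter may need library or locked
     * support, while the size_t counter is always register-sized. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint64_t wide_counter;   /* 64 bits even on 32-bit hosts */
    static atomic_size_t    narrow_counter; /* native register width        */

    int main(void)
    {
        atomic_fetch_add(&wide_counter, 1);
        atomic_fetch_add(&narrow_counter, 1);

        printf("u64 counter lock-free:    %d\n",
               atomic_is_lock_free(&wide_counter));
        printf("size_t counter lock-free: %d\n",
               atomic_is_lock_free(&narrow_counter));
        return 0;
    }

(On some 32-bit hosts this may need -latomic to link, which is part of the
point.)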
> - we need a proper solution to the invalidation of TBs so we can
> exclude them from lists (or at least not do the attempt
> translation/fail dance). Alternatively we could page out a copy of
> the TB data to a disk file when we hit a certain hotness? How would
> this interact with the jitperf support already?
>
> - we should add some documentation to the manual so users don't have
> to figure it all out by trial and error at the HMP command line.
>
I added one in docs/tb-stats.txt. Some extra bits could still be added to
explain the fields of the output.
> - there may be some exit cases missed because I saw some weird TBs
> with very long IR generation times.
>
> TB id:5 | phys:0xb5f21d00 virt:0xffffcf2f17721d00 flags:0x00000051 1 inv/2
> | exec:1889055/0 guest inst cov:1.05%
> | trans:2 ints: g:4 op:32 op_opt:26 spills:0
> | h/g (host bytes / guest insts): 56.000000
> | time to gen at 2.4GHz => code:6723.33(ns) IR:2378539.17(ns)
>
Is it reproducible on your system? I didn't see it on mine; is it possible
that system events caused this?
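For what it's worth, here is how I read those two figures, assuming the
report simply divides a measured cycle count by the nominal 2.4GHz clock
(the constants below are copied from your output, the variable names are
mine):

    #include <stdio.h>

    int main(void)
    {
        const double ghz = 2.4;          /* nominal clock from the report  */
        const double ir_ns = 2378539.17; /* "IR:" figure for the quoted TB */
        const double code_ns = 6723.33;  /* "code:" figure for the TB      */

        printf("IR gen:   ~%.1f Mcycles (%.2f ms)\n",
               ir_ns * ghz / 1e6, ir_ns / 1e6);
        printf("code gen: ~%.1f kcycles (%.2f us)\n",
               code_ns * ghz / 1e3, code_ns / 1e3);
        return 0;
    }

That is roughly 5.7 million cycles to build the IR for a 4-instruction TB
versus about 16 thousand cycles for code generation, which looks more like a
timing window left open on some exit path (or a scheduling blip) than real
translation work.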
> - code motion in 9/14 should be folded into the first patch
>
Done.
By the way, I also added a few comments on the v12 series; could you please
check whether they make sense?
Thanks,
Fei.
> Even if we can't find a solution for safely dumping TBs I think the
> series without "tb-list" is still an improvement for getting rid of the
> --enable-profiler and making info JIT useful by default.
>
> Richard,
>
> What do you think?
>