Re: [PATCH 4/4] fuzz: delay IO until they can't trigger the crash


From: Qiuhao Li
Subject: Re: [PATCH 4/4] fuzz: delay IO until they can't trigger the crash
Date: Wed, 23 Dec 2020 09:20:35 +0000
User-agent: Evolution 3.36.4-0ubuntu1

On Tue, 2020-12-22 at 13:30 -0500, Alexander Bulekov wrote:
> On 201222 1922, Qiuhao Li wrote:
> > On Mon, 2020-12-21 at 16:17 -0500, Alexander Bulekov wrote:
> > > On 201220 0256, Qiuhao Li wrote:
> > > > Since programmers usually trigger an IO operation just before
> > > > they need it, delaying some IO instructions may help us better
> > > > understand the timing context when debugging.
> > > >
> > > > Tested with Bug 1908062. Refined vs. Original result:
> > > >
> > > > outl 0xcf8 0x8000081c            outl 0xcf8 0x0
> > > > outb 0xcfc 0xc3                | outl 0xcf8 0x8000081c
> > > > outl 0xcf8 0x80000804          | outb 0xcfc 0xc3
> > > > outl 0xcfc 0x10000006          | outl 0xcf8 0x80000804
> > > > write 0xc300001028 0x1 0x5a    | outl 0xcfc 0x10000006
> > > > write 0xc300001024 0x2 0x10    | write 0xc300001028 0x1 0x5a
> > > > write 0xc30000101c 0x1 0x01    | writel 0xc30000100c 0x2a6f6c63
> > > > write 0xc300003002 0x1 0x0     v write 0xc300001024 0x2 0x10
> > > > write 0x5c 0x1 0x10              write 0xc30000101c 0x1 0x01
> > > > writel 0xc30000100c 0x2a6f6c63   write 0xc300001018 0x1 0x80
> > > > write 0xc300001018 0x1 0x80      write 0x5c 0x1 0x10
> > > > outl 0xcf8 0x0                   write 0xc300003002 0x1 0x0
> > > >
> > >
> > > In this example, I can remove the outl 0xcf8 0x0, and I still see
> > > the crash, so maybe the first step in the minimizer is failing
> > > somewhere...
> >
> > I think it might be because of our one-time scan-and-remove strategy,
> > which is not suitable for timing-dependent instructions.
> >
> > For example, instruction A indicates the address where the config
> > chunk is located, and instruction B makes the configuration active.
> > If we have the following instruction sequence:
> >
> > ...
> > A1
> > B1
> > A2
> > B2
> > ...
> >
> > A2 and B2 are the actual instructions that trigger the bug.
> >
> > If we scan from top to bottom, then after we remove A1, the behavior
> > of B1 becomes unpredictable, and it may no longer crash the program.
> > But we will still remove B1 later, because A2 and B2 crash the
> > process anyway:
> >
> > ...
> > A1
> > A2
> > B2
> > ...
> >
> > Now one more trimming will remove A1.
> >
> > As for the example I gave, the instructions before the delaying
> > minimizer are like this:
> >
> > outl 0xcf8 0x8000081c
> > outb 0xcfc 0xc3
> > outl 0xcf8 0x0                <--- The A instruction, not removed
> > (outl 0xcfc 0x0)              <--- The B instruction, removed
> > outl 0xcf8 0x80000804
> > outl 0xcfc 0x10000006
> > write 0xc300001024 0x2 0x10
> > write 0xc300001028 0x1 0x5a
> > write 0xc30000101c 0x1 0x01
> > writel 0xc30000100c 0x2a6f6c63
> > write 0xc300001018 0x1 0x80
> > write 0x5c 0x1 0x10
> > write 0xc300003002 0x1 0x0
> >
> > If we run the removal minimizer again, the A instruction outl 0xcf8
> > 0x0 will be removed.
> >
> > Since we only remove instructions, this iterative algorithm
> > converges. Maybe we can keep re-running the removal pass until
> > len(newtrace) stops changing.
> >
>
> I found a bunch of work related to this "test-case minimization".
> There are algorithms such as "ddmin" that try to tackle this. There
> might be some interesting ideas there.

Thanks, I will have a look.
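
If I understand the idea, a simplified (complement-only) ddmin pass over the trace would look roughly like the sketch below. It is not tested against the script; crashes() just stands in for something like lambda t: check_if_trace_crashes(t, outpath):

    def ddmin(trace, crashes):
        n = 2
        while len(trace) >= 2:
            # split the trace into n roughly equal chunks
            starts = [len(trace) * k // n for k in range(n + 1)]
            reduced = False
            for k in range(n):
                # try the trace with the k-th chunk removed
                complement = trace[:starts[k]] + trace[starts[k + 1]:]
                if crashes(complement):
                    trace = complement       # keep the smaller crashing trace
                    n = max(n - 1, 2)
                    reduced = True
                    break
            if not reduced:
                if n >= len(trace):
                    break                    # single-line granularity reached
                n = min(n * 2, len(trace))   # refine granularity and retry
        return trace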

> I think in the perfect case, we would need to be able to remove A and
> B at the same time. You described the situation where B1 might lead
> to a bad state without A1, but there is also the possibility that A1
> might leave a bad state around without B1. And both of these might be
> true at the same time :) Probably not something we encounter very
> often, though.

You are right, and there can even be three instructions which must be removed together ;) But for now, how about we just wrap the removal minimizer in a loop that repeats until len(newtrace) == old_len? No harm.
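
Concretely, something like this (remove_pass() is just a placeholder name for the existing one-pass removal minimizer, assumed to modify newtrace in place while keeping it crashing):

    old_len = len(newtrace) + 1
    while len(newtrace) < old_len:
        old_len = len(newtrace)
        remove_pass(newtrace, outpath)   # one full scan-and-remove pass
    assert check_if_trace_crashes(newtrace, outpath)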

Do you think this kind of dependence can also exist between bits of the write/out commands? How about adding a similar loop around the zero-setting minimizer, repeating until num_bits(data) == old_num?

> > > Is the Refined one better? To me the original one read as:
> > > "Do a bunch of PCI configuration to map an MMIO BAR, then interact
> > > with the MMIO range and trigger some DMA activity". I also know
> > > exactly the line that will trigger the DMA activity and access
> > > 0x5c. With the refined one, I'm not so sure. Which line now causes
> > > the DMA read from 0x5c? writel 0xc30000100c? write 0xc300001018?
> > > Is there another example where this type of reordering makes the
> > > result easier to read?
> > >
> > > > Signed-off-by: Qiuhao Li <Qiuhao.Li@outlook.com>
> > > > ---
> > > >  scripts/oss-fuzz/minimize_qtest_trace.py | 21 +++++++++++++++++++++
> > > >  1 file changed, 21 insertions(+)
> > > >
> > > > diff --git a/scripts/oss-fuzz/minimize_qtest_trace.py b/scripts/oss-fuzz/minimize_qtest_trace.py
> > > > index f3e88064c4..da7aa73b3c 100755
> > > > --- a/scripts/oss-fuzz/minimize_qtest_trace.py
> > > > +++ b/scripts/oss-fuzz/minimize_qtest_trace.py
> > > > @@ -214,6 +214,27 @@ def minimize_trace(inpath, outpath):
> > > >  
> > > >      assert(check_if_trace_crashes(newtrace, outpath))
> > > >  
> > > > +    # delay IO instructions until they can't trigger the crash
> > > > +    # Note: O(n^2) and many timeouts, kinda slow
> > >
> > > Maybe do a binary search instead of a linear scan for the optimal
> > > position to save some time?
> > > Also, if you re-run this multiple times, you can end up with
> > > different results, since some lines might not really be tied to a
> > > position (e.g. the outl cf8 0x0 in your example). Maybe it's not a
> > > problem, but I'm still not sure that this is making the result
> > > easier to read.
> > > -Alex
> >
> > I'm not familiar with the PCI configuration and DMA mechanisms in
> > QEMU. This patch is just a guess based on empirical observation.
> > Maybe I should add the "RFC" tag :). I won't post this patch in
> > version 2.
> >
> > BTW, may I ask where I can learn about these IO emulations? How do
> > you know the addresses corresponding to the PCI BARs and DMA?
>
> On PCs, the PCI configuration space is accessed using two I/O ports:
> 0xcfc and 0xcf8. To interact further with a PCI device, you have to
> configure its BARs (i.e. the Port IO and memory ranges that will map
> to device registers).
> https://en.wikipedia.org/wiki/PCI_configuration_space#Bus_enumeration
>
> So we can look at the trace again. First, there are no virtio-vga
> MMIO/PIO ranges accessible, so the only thing the fuzzer can do is
> interact with its PCI configuration space using 0xCF8/CFC:
>
> outl 0xcf8 0x8000081c
> outb 0xcfc 0xc3
> ^^^ The above two lines write the value 0xc3 to PCI config address
> 0x1c for the vga device. You can confirm this by running the testcase
> with -trace pci\*. 0x1c is the location of the PCI register that
> represents BAR #3 for the device.
> outl 0xcf8 0x80000804
> outb 0xcfc 0x06
> ^^^ These two lines write to the PCI command register (0x04) to allow
> the device to respond to memory accesses.
>
> write 0xc300001024 0x2 0x0040
> write 0xc300001028 0x1 0x5a
> write 0xc30000101c 0x1 0x01
> writel 0xc30000100c 0x20000000
> write 0xc300001016 0x3 0x80a080
> write 0xc300003002 0x1 0x80
> ^^^ Now we start to see what looks like MMIO accesses. And if we look
> at the output of -trace pci\* we will find that the 0xc3 value we
> wrote above configured an MMIO range at 0xc300000000. That is why the
> MMIO accesses are close to that address.
>
> write 0x5c 0x1 0x10
> ^^^ This, I am guessing, is a DMA command. Usually I know this simply
> by looking at the [DMA] annotations in the input file to
> reorder_fuzzer_qtest_trace.py. This just means that the device tried
> to read from this location in memory, so the fuzzer placed some data
> there.
>
> Beyond just broadly seeing that there are some initial PCI
> configurations on registers 0xCF8/0xCFC, some accesses to addresses
> that look like an MMIO range, and one line that probably puts one
> byte at address 0x5c in RAM, I can't really tell anything else just
> by looking at the trace. To write the descriptions above, I had to
> look at PCI-related resources. I'm not convinced that reordering the
> accesses will really help much with this. Probably the best aid I
> have found for understanding traces is good trace events (when they
> exist).
>
> -Alex

Thank you so much for such a detailed and patient explanation! I will use tracing to analyze IO events in the future.
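
Just to double-check my understanding of the 0xcf8/0xcfc part, the CONFIG_ADDRESS value from the trace can be decoded like this (a rough sketch for my own notes, not code taken from QEMU):

    def decode_cf8(addr):
        # standard PCI CONFIG_ADDRESS layout: enable | bus | dev | fn | reg
        return {
            "enable":   (addr >> 31) & 0x1,
            "bus":      (addr >> 16) & 0xff,
            "device":   (addr >> 11) & 0x1f,
            "function": (addr >> 8) & 0x7,
            "register": addr & 0xfc,
        }

    # 0x8000081c -> enable=1, bus 0, device 1, function 0, register 0x1c,
    # i.e. BAR #3 of the device in slot 1, matching the explanation above.
    print(decode_cf8(0x8000081c))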

The delaying minimizer doesn't seem constructive. I won't post it in version 2.

Thanks again :)

> > > > +    i = len(newtrace) - 1
> > > > +    while i >= 0:
> > > > +        tmp_i = newtrace[i]
> > > > +        if len(tmp_i) < 2:
> > > > +            i -= 1
> > > > +            continue
> > > > +        print("Delaying ", newtrace[i])
> > > > +        for j in reversed(range(i+1, len(newtrace)+1)):
> > > > +            newtrace.insert(j, tmp_i)
> > > > +            del newtrace[i]
> > > > +            if check_if_trace_crashes(newtrace, outpath):
> > > > +                break
> > > > +            newtrace.insert(i, tmp_i)
> > > > +            del newtrace[j]
> > > > +        i -= 1
> > > > +
> > > > +    assert(check_if_trace_crashes(newtrace, outpath))
> > > > +    # maybe another removing round
> > > > +
> > > >  
> > > >  if __name__ == '__main__':
> > > >      if len(sys.argv) < 3:
> > > > -- 
> > > > 2.25.1
> > > >
