Re: [Qemu-devel] [PATCH] hw: net: cadence_gem: Fix build errors in DB_PRINT()


From: Bin Meng
Subject: Re: [Qemu-devel] [PATCH] hw: net: cadence_gem: Fix build errors in DB_PRINT()
Date: Thu, 8 Aug 2019 12:45:21 +0800

On Tue, Aug 6, 2019 at 6:57 PM Stefano Garzarella <address@hidden> wrote:
>
> On Mon, Aug 05, 2019 at 08:52:54AM -0700, Bin Meng wrote:
> > When CADENCE_GEM_ERR_DEBUG is turned on, there are several
> > compilation errors in DB_PRINT(). Fix them.
> >
> > Signed-off-by: Bin Meng <address@hidden>
> > ---
> >
> >  hw/net/cadence_gem.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
> > index d412085..7516e8f 100644
> > --- a/hw/net/cadence_gem.c
> > +++ b/hw/net/cadence_gem.c
> > @@ -983,8 +983,9 @@ static ssize_t gem_receive(NetClientState *nc, const uint8_t *buf, size_t size)
> >              return -1;
> >          }
> >
> > -        DB_PRINT("copy %d bytes to 0x%x\n", MIN(bytes_to_copy, rxbufsize),
> > -                rx_desc_get_buffer(s->rx_desc[q]));
> > +        DB_PRINT("copy %d bytes to " TARGET_FMT_plx "\n",
> > +                 MIN(bytes_to_copy, rxbufsize),
> > +                 rx_desc_get_buffer(s, s->rx_desc[q]));
> >
> >          /* Copy packet data to emulated DMA buffer */
> >          address_space_write(&s->dma_as, rx_desc_get_buffer(s, s->rx_desc[q]) +
> > @@ -1157,7 +1158,7 @@ static void gem_transmit(CadenceGEMState *s)
> >              if (tx_desc_get_length(desc) > sizeof(tx_packet) -
> >                                                 (p - tx_packet)) {
> >                  DB_PRINT("TX descriptor @ 0x%x too large: size 0x%x space " \
> > -                         "0x%x\n", (unsigned)packet_desc_addr,
> > +                         "0x%lx\n", (unsigned)packet_desc_addr,
>
> What about using the 'z' modifier? I mean "0x%zx" to print sizeof(..).

Yes, good idea. Will do in v2. Thanks!

Regards,
Bin
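
For context, a minimal self-contained C sketch (not part of the patch or this thread) of what the two format fixes boil down to: a 64-bit descriptor address needs a PRIx64-style conversion (QEMU's TARGET_FMT_plx used in the first hunk plays that role for hwaddr values), and a sizeof() result is a size_t, which "%zx" matches portably. The variable names below are illustrative only.

    /* Build with: gcc -Wall -Wformat format_demo.c */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t desc_addr = 0x40000000ULL;   /* stand-in for a hwaddr */
        unsigned char tx_packet[2048];        /* stand-in for the TX buffer */

        /* A 64-bit value needs a 64-bit conversion specifier; PRIx64
         * expands to the correct one on every host. */
        printf("descriptor @ 0x%" PRIx64 "\n", desc_addr);

        /* sizeof() yields a size_t; "%zx" matches it portably, whereas
         * "%x" or "%lx" only work where size_t happens to be
         * unsigned int or unsigned long. */
        printf("space 0x%zx\n", sizeof(tx_packet));

        return 0;
    }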


