ccrtp-devel

Re: [Ccrtp-devel] problem while reducing the sending rate


From: Michel de Boer
Subject: Re: [Ccrtp-devel] problem while reducing the sending rate
Date: Fri, 13 May 2005 18:06:39 +0200
User-agent: Mozilla Thunderbird 1.0 (X11/20041206)

I am lost. The problem you describe doesn't sound like a pre-loading scenario to me. The packets do not come out because they have expired. That means you are submitting packets more slowly than the rate of the real-time media stream, so your media stream is broken. If you want to broadcast/multicast/unicast a real-time stream, you have to send out the packets in real time, or pre-load at the cost of buffer space. If you send them too slowly, there simply is no real-time stream any more.

David Sugar wrote:
If you intend to in effect pre-load ccrtp with the "entire broadcast", then you will suck up a huge amount of memory for data waiting to be sent. This is especially true if you have more than one stack active such as in a streaming server.

The idea in ccrtp was to make the sending of data time dependent, but the queueing of data time independent. However, this doesn't mean all the data should be stuffed in up front; rather, its submission should be paced, either in time or by measuring the packets pending to be sent. This is of course easiest to do when dealing with an already realtime data source, such as a video camera, audio card, etc. I would also like to add a sleep or callback (a simple virtual member would be sufficient to enable this) that can be scheduled when the transmit queue falls below a settable threshold. That would make it easier to write applications that pace themselves and maintain effective packet streaming with low memory overhead when using non-realtime data sources (such as media files), rather than relying on overbuffering.

Dinil Divakaran wrote:


Sorry, I didn't mean variable bit rate codecs. My question concerns
data of any kind (audio, video, etc.).

Suppose we are using something like below:

     rtp_session->setExpireTimeout(10000)

then, the program sends only 3 packets. Whereas, if the above
statement is changed to

      rtp_session->setExpireTimeout(100000)

the program sends some 30+ packets.

Now, when I am writing a program to send data (be it of any
kind), I may use the program to send 1 MB of data or even
50 MB of data; so to what value should the expire timeout
be set?

I hope I have made myself clear this time around :)

I assume the value given to setExpireTimeout is in microseconds;
else please correct me.

- Dinil

On Thu, 12 May 2005, Michel de Boer wrote:

Why in the first place do you want to transmit the data at
a different speed? What codec do you use for the audio?

I only have experience with G.711 and GSM encoded audio streams.
These codecs have a constant bit rate, e.g. G.711 produces 160 bytes
per 20 ms and GSM 33 bytes per 20 ms, so I have to make sure to send
the audio streams at those rates. I just set the correct payload type
with setPayloadFormat and it all works fine as long as I deliver the
data to ccRTP at the correct rate.

I am not sure how RTP works with variable bit rate codecs.


Dinil Divakaran wrote:


For the time being I fixed sending 2833 events as follows:

    rtp_session->setExpireTimeout(duration of event)

This way the packets stay within the oldness check in ccRTP.


But we cannot give a static argument to setExpireTimeout, since
the number of packets changes depending on the data that has to
be transmitted. Hence, a value that works for 1 MB of data need
not work for anything larger: as the number of packets increases,
the later packets no longer stay within the oldness check.


David Sugar wrote:

Oh, this is about sending 2833 events, not receiving...sorry :). Hmm, let me think about this one further!

