[R-C] Strawman: Requirements

Randell Jesup randell-ietf at jesup.org
Wed Oct 19 19:50:59 CEST 2011


On 10/19/2011 6:09 AM, Varun Singh wrote:
> On Wed, Oct 19, 2011 at 01:06, Randell Jesup <randell-ietf at jesup.org> wrote:
>>> All measurements are taken by starting the flows, letting the mechanisms
>>> adapt for 60 seconds, and then measuring bandwidth use over a 10-second
>>> interval.
>>
>> Continuous flows are not a good model.  The "other equipment" will be a mix
>> of continuous flows (bittorrent, youtube (kinda), etc.) and heavily bursty
>> flows (browsing).  We need to show it reacts well to a new TCP flow, to a
>> flow going away, and to a burst of flows like a browser makes.
>>
>
> The problem I see with bursty traffic is how to measure fairness for
> these short bursty flows ("short" probably being 1-5s). Is fairness
> the completion time of these short flows (they shouldn't take more
> than 5x as long to complete; 5x is just a number I chose)? Or fair
> sharing of the bottleneck BW (BW/N flows)?

Honestly, I don't have a good model for that, but I know it's a very 
important use case.

I'd say it's important that the TCP flow not take too much longer than 
on an uncongested link, but a better comparison would be a TCP burst 
added to a link already saturated with TCP flows.  The burst does NOT 
have to complete as fast as it would if the link were saturated only 
with TCP, but I'd say a reasonable first-cut target might be 2-3x that 
time (since, like TFRC, I would assume rtcweb's media flows should have 
a smoother and slower adaptation rate than TCP AIMD if possible, so 
burst behavior would be different).

Also, part of the evaluation is the impact on the media stream - if it 
drops too far for too long in the face of a burst, it's not good.
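
To make that concrete, here's a rough sketch in Python of how a run 
could be scored against both criteria.  The 3x slowdown and 50% dip 
thresholds are just my illustrative first cuts, not agreed values:

    def burst_slowdown(congested_secs, uncongested_secs):
        """How many times longer a short TCP burst takes to complete
        on the congested link than on an otherwise idle one."""
        return congested_secs / uncongested_secs

    def media_dip_ok(media_rates_bps, pre_burst_bps, max_dip=0.5):
        """True if the media rate never drops below (1 - max_dip) of
        its pre-burst level while the burst is in flight."""
        return min(media_rates_bps) >= (1 - max_dip) * pre_burst_bps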

>> Also that it coexists with other rtcweb flows!  (which I note you include
>> below)
>>
>>> This seems like the simplest reasonable system. We might have to specify
>>> the buffer sizes on the border routers to get reproducible results.
>>
> This is an important value, especially in light of the bufferbloat discussion.
>
>> And the buffer-handling protocols (RED, tail-drop, etc).  I assume tail-drop
>> is by far the most common and what we'd model?
>>
>
> In NS2 simulations with cross-traffic we got better results by
> using RED routers instead of droptail ones (for TFRC, TMMBR, NADU).
> However, as mentioned before, AQM is not that common.

Does "better" mean better media quality or more bandwidth, or does 
"better" mean "more closely models real-world open-internet performance"?

>>> - A "timestamp" option for RTP packets as they leave the sender
>>
>> Are you referring to setting the RTP timestamp with the
>> "close-to-on-the-wire" time?  I have issues with this...  If it's a header
>> extension (as mentioned elsewhere) that's better, but it adds a significant
>> amount of bandwidth in some lower-bandwidth (audio-only) cases.
>>
>> The RTP-tfrc draft message looks like 12 bytes per RTP packet.  If we're
>> running, say, Opus at ~11Kbps @ 20ms, which including UDP overhead uses
>> ~27Kbps (not taking into account SRTP), this would add 4800bps to the
>> stream, or around 18%.  Note that unlike TFRC, you need them on every
>> packet, not just once per RTT.
>>
>
> One optimization is to drop the 3-byte RTT field. Just using the "send
> timestamp (t_i)" field should be enough. Moreover, if there are other
> RTP header extensions, the 0xBEDE (4-byte) extension header is common
> to all of them. In that case the send timestamp adds just 5 bytes
> {combined ID/LEN byte, 4-byte TS}.

I would not assume other header extensions are in use.

Note also that if the total of all header extension data is not a 
multiple of 4, you have to pad it to a multiple of 4.  So 5 bytes is the 
same as 8, which with the 0xBEDE header is 12 bytes total.  If you can 
use a 3-byte TS (which should be possible if it's an offset from the RTP 
TS specifying the delay from sampling to sending), then you can reduce 
the overhead to 8 bytes, which is the lowest possible.
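
To make the padding arithmetic concrete, a small sketch assuming the 
one-byte (0xBEDE) header extension format:

    def extension_block_bytes(payload_sizes):
        """Bytes added to the RTP header for a 0xBEDE extension block
        carrying elements with the given payload sizes."""
        data = sum(1 + n for n in payload_sizes)  # 1 combined ID/LEN byte each
        padded = (data + 3) // 4 * 4              # pad data to a 4-byte boundary
        return 4 + padded                         # plus the 4-byte 0xBEDE header

    print(extension_block_bytes([4]))  # lone 4-byte TS: 4 + pad(5) -> 12
    print(extension_block_bytes([3]))  # 3-byte TS:      4 + pad(4) -> 8
    print(12 * 50 * 8)                 # 12 bytes per 20ms packet -> 4800 bps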

> http://tools.ietf.org/html/rfc5450 provides a mechanism to express
> transmission timestamps relative to the RTP timestamps, and uses 3
> bytes instead of 4 for the timestamp. I am sure that if we really
> require such a timestamp, there are optimizations that can save a
> byte or two.

Right, that's exactly the sort of thing I was thinking of.
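
Something along these lines (a sketch only; the 90kHz clock is the 
usual video rate, and note RFC 5450's offset field is actually signed 
while I assume a nonnegative delay here):

    CLOCK_RATE = 90000  # RTP clock ticks per second (typical for video)

    def toffset_bytes(sample_time_s, send_time_s):
        """Encode the sampling-to-transmission delay as a 24-bit
        offset in RTP clock ticks."""
        ticks = round((send_time_s - sample_time_s) * CLOCK_RATE)
        assert 0 <= ticks < 1 << 24, "delay must fit in 3 bytes"
        return ticks.to_bytes(3, "big")

    print(toffset_bytes(0.0, 0.035).hex())  # 35ms -> 3150 ticks -> '000c4e'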

>> Note: we need to send video and audio streams, but that might not be needed
>> here.  But it's important in a way, in that both are adaptable but with very
>> different scales.  We need both streams with very different bandwidths to
>> adapt "reasonably" to stay in the boundary. Note that in real apps we may
>> give the app a way to select the relative bandwidths and override our
>> automatic mechanisms (i.e. rebalance within our overall limit).
>>
>
> There was some discussion at the beginning about whether the rate
> control should be per-flow or combined. Was there consensus on that
> already, or is it something we still need to verify via simulations?

I think consensus is that it be combined (see my earlier proposed 
modification to Harald's algorithm).  We could allow the receiver to use 
TMMBR on each stream individually in place of a combined message, but 
that would make it tougher to give the application as much control over 
the streams as we'd like - it would put much more of the control in the 
receiver's hands.

>>> If A sends a video stream with MPB > BW, and there is a TCP bulk
>>> transfer stream from LAN X to LAN Y, the video stream must get at least
>>> 30% of BW. ("don't let yourself be squeezed out")
>>
>> Given enough time (minutes), we *will* lose bandwidth-wise to a continuous
>> TCP stream (as we know).
>>
>
> If there are 1 or 2 TCP streams (doing bulk transfer), a single RTP
> stream can be made competitive; however, as the number of streams
> increases, RTP loses out. In Google's draft, they ignore losses up to
> 2%. In my own experiments, we were a bit more tolerant of inter-packet
> delay.

I wonder if a "good" delay-based algorithm can really be competitive 
with even a single long aggressive TCP flow.
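
For concreteness, the kind of loss-tolerance band being described 
might look like this (the 2%/10% thresholds and step sizes are 
illustrative; the draft's actual rules may differ):

    def adjust_rate(rate_bps, loss_fraction):
        """Probe up while loss looks like noise, hold in the middle
        band, back off proportionally under heavy loss."""
        if loss_fraction < 0.02:
            return rate_bps * 1.05
        if loss_fraction > 0.10:
            return rate_bps * (1 - 0.5 * loss_fraction)
        return rate_bps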


-- 
Randell Jesup
randell-ietf at jesup.org

