[R-C] Timely reaction time (Re: Comments on draft-alvestrand-rtcweb-congestion-01)

Randell Jesup randell-ietf at jesup.org
Sun Apr 8 08:33:41 CEST 2012


On 4/7/2012 1:37 PM, Michael Welzl wrote:
>
> On Apr 7, 2012, at 7:19 PM, Randell Jesup wrote:
>> What's the RTO in this case, since we're talking UDP media streams?
>> TCP RTO?
>
> Something in the order of that is what I had in mind. An RTT is the
> control interval that we can and should act upon, and of course you'd
> rather have an estimate that is averaged, and you really want to avoid
> having false positives from outliers, so you want to give it a
> reasonable safety margin. The TCP RTO has all that.
>
> Note I'm not "religious" about this - I think a mechanism that would
> react e.g. an RTT late could still lead to a globally "okay" behavior
> (but that would then be worth a closer investigation). My main point is
> that it should be around an RTO, or maybe a bit more, but not completely
> detached from RTT measurements.

One thing I worry about for media streams, if you mandate RTT-timeframe 
reaction (which usually implies RTT-timeframe spacing between 
reverse-path reports), is the amount of additional reverse-path traffic 
it can engender.

This is especially relevant in UDP land, where you don't have ACKs, and 
the available feedback path, RTCP, is itself bandwidth-limited, with 
scheduling rules that might not allow you to send immediately.  This 
could be an especially significant constraint on low-RTT channels, 
particularly if they're also low-bandwidth.  IIRC, the same issue/impact 
was flagged in TFRC, but I think the issue may be worse here.
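
To put rough numbers on the RTCP constraint, here's a back-of-envelope 
sketch (mine, with purely illustrative values) using the RFC 3550 budget 
rules: RTCP gets 5% of the session bandwidth, and receivers share 75% of 
that.  I'm ignoring the 5-second minimum interval, which AVPF (RFC 4585) 
can relax anyway:

def rtcp_report_interval(session_bw_bps, avg_rtcp_pkt_bytes=90, members=2):
    # Deterministic RTCP interval per RFC 3550: 5% of session bandwidth
    # goes to RTCP, 75% of that shared among receivers (assume 1 sender).
    rtcp_bw_bps = 0.05 * session_bw_bps
    receiver_share = 0.75 * rtcp_bw_bps
    receivers = max(1, members - 1)
    return receivers * avg_rtcp_pkt_bytes * 8 / receiver_share  # seconds

# A low-bandwidth, low-RTT call: 64 kb/s session, 20 ms RTT.
print(rtcp_report_interval(64000))  # -> 0.3 s between reports

So on this (hypothetical) call you'd get a report roughly every 300 ms 
against a 20 ms RTT - an order of magnitude too slow for per-RTT 
feedback unless you allocate extra feedback bandwidth.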

In terms of global stability, one helpful aspect of these sorts of 
algorithms is that even if they may be a little slow to react(*) in a 
downward direction, they are also typically slow to re-take bandwidth. 
If they also use the slope of the delay change to estimate the size of 
the bandwidth shortfall, they can be more accurate in reacting to 
bandwidth-availability changes: when they do react, they tend to avoid 
undershooting, and they don't overshoot by much the way TCP may.
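
To make "use the slope" concrete - this is my reading, not the draft's 
filter, and it assumes for simplicity that our flow is most of the 
traffic at the bottleneck: if one-way queuing delay grows at slope s 
(seconds of added delay per second of wall clock), the queue is gaining 
s seconds of data per second, i.e. arrivals exceed the bottleneck rate 
by a factor of (1 + s), so you can scale down by that factor instead of 
blindly halving:

def target_rate_from_slope(current_rate_bps, delay_slope):
    # delay_slope: d(queuing delay)/dt, dimensionless (s/s), e.g. from
    # a linear fit over recent one-way-delay samples.
    overuse_factor = 1.0 + delay_slope  # arrival rate / bottleneck rate
    return current_rate_bps / overuse_factor

# Delay growing 20 ms per 100 ms of wall clock (slope 0.2) at 1 Mb/s:
print(target_rate_from_slope(1000000, 0.2))  # -> ~833 kb/s target

That's the sense in which slope-based reaction avoids undershooting: the 
slope tells you roughly how far over capacity you are.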

(*) It's important to note that in normal operation, a delay-sensing 
algorithm may react much *faster* than TCP even if the reaction delay is 
many RTTs - because the clock starts counting for a delay-sensing 
algorithm when the bottleneck queue starts to build, not when the buffer 
overflows at the bottleneck.  A delay-sensing algorithm that isn't faced 
with giant changes in queue depth will almost always react faster than 
TCP, IMHO.
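
Here's the footnote worked through with (purely illustrative) numbers: 
a 1 Mb/s bottleneck pushed 25% over capacity, a 25 ms delay trigger for 
the delay-sensing algorithm, and a 64 KB bottleneck buffer that TCP has 
to overflow:

C = 1000000 / 8.0     # bottleneck capacity: 1 Mb/s in bytes/s
offered = 1.25 * C    # arrival rate: 25% over capacity
growth = offered - C  # queue growth in bytes/s

t_delay = (0.025 * C) / growth  # queue bytes for 25 ms delay / growth
t_loss = 64000.0 / growth       # time until the buffer first overflows

print(t_delay, t_loss)  # -> 0.1 s vs. ~2.0 s

The delay trigger fires after ~100 ms; the first loss doesn't happen for 
~2 seconds, and TCP needs another RTT or so after that to notice it.  So 
even a reaction several RTTs after the delay trigger still beats 
loss-based detection here.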

The remaining question is what happens when the bottleneck faces a 
massive, sudden cross-flow that saturates the buffer.  As mentioned, if 
slope is used, the delay-sensing algorithm may well cut bandwidth faster 
than TCP would, even if it reacts a little later in this case; doubly so 
if the algorithm also treats losses as an indication that not only has 
delay increased at the measured slope, but that on top of that a buffer 
has overflowed - so losses should increase its estimate of how much the 
available bandwidth has changed.
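
A sketch of that combined signal (again my illustration, not an 
algorithm from the draft): on loss, bump the slope-derived overuse 
estimate before computing the new rate, since the slope alone now 
understates the overload:

LOSS_BUMP = 0.25  # assumed extra overuse attributed to overflow (tunable)

def target_rate(current_rate_bps, delay_slope, loss_seen):
    overuse = 1.0 + delay_slope  # from the delay slope, as before
    if loss_seen:
        overuse += LOSS_BUMP     # buffer overflowed: assume further over
    return current_rate_bps / overuse

print(target_rate(1000000, 0.2, False))  # -> ~833 kb/s
print(target_rate(1000000, 0.2, True))   # -> ~690 kb/s: cut harder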

-- 
Randell Jesup
randell-ietf at jesup.org

