[R-C] [ledbat] LEDBAT vs RTCWeb
Randell Jesup
randell-ietf at jesup.org
Fri Apr 27 20:45:43 CEST 2012
On 4/27/2012 7:48 AM, Stefan Holmer wrote:
>
>
> On Thu, Apr 26, 2012 at 7:12 PM, Randell Jesup <randell-ietf at jesup.org> wrote:
>
> On 4/26/2012 11:49 AM, Nicholas Weaver wrote:
>> On Apr 26, 2012, at 8:05 AM, Rolf Winter wrote:
>>>> http://www.infres.enst.fr/~drossi/DATA/PRJ-Ledbat+AQM.pdf
>>>>
>>>> It seems like AQM causes LEDBAT to shift to behavior similar to TCP,
>>>> but with low delay. Since it still seems fair, this is ok, but it's no
>>>> longer a background or scavenger flow, and applications using it need
>>>> to be aware they can impact "foreground" flows if AQM is in play.
>>>> Perhaps applications should be made aware when LEDBAT detects
>>>> active AQM (if it can), and there's no "scavenger" CC algorithm for AQM
>>>> that I know of.
>>> And the document makes that clear, and I am not sure LEDBAT can actually detect AQM.
>> One thought: Active Queue Management will be either ECN or early drop, which is a signal separate from the delay signal used to be "friendlier than TCP".
>>
>> In either case, the scavenger flow property might be, well, not fully maintained but at least encouraged by backing off more than conventional TCP would to the same signal.
>
> Correct, and that's a reasonable way to proceed - though it might
> slightly complicate (and certainly change) the case of LEDBAT
> competing with LEDBAT.
>
> In my delay-sensitive CC algorithm, I detected drops, and in
> particular "fishy" drops where the end-to-end delay of the following
> packet was lower by roughly the normal arrival interval, and would
> take those as an indication that we should back off more substantially
> than 'normal'. This was targeted at tail-drop detection, especially
> congestion spikes, but for a scavenger protocol the same idea could
> apply to make it more responsive to AQM.
>
>
> Interesting idea indeed. Drops in delay will of course also happen when,
> say, one flow stops, so it's not only an indicator of tail-drop (even
> though the probability of tail-drop may be higher if the drop in delay
> is "fishy").
A drop in delay alone isn't the trigger; it's a loss combined with a drop
in delay. I.e., from a packet-train point of view, with 30ms packets:
<- X - 31ms - X+1 - 31ms - X+2 - 31ms - X+4(!) - 31ms - X+5
That (X+3 lost, yet X+4 arriving only one interval after X+2) would be
a fairly typical "signal" of buffer overflow from a tail-drop router -
timing compressed by the queue. Also note the slope (increasing delay).
If instead it were:
<- X - 31ms - X+1 - 31ms - X+2 - 60ms - X+4(!) - 31ms - X+5
Then it would be much less "fishy", since X+4's end-to-end delay didn't
drop. Note this is most effective in a constrained-channel case, and
perhaps somewhat less so in a contention-for-channel case - but a
constrained channel is the "normal" unloaded access-link or WiFi
bottleneck. Even under contention it's still useful - you need to tune
how much "early" a packet must arrive to count as fishy, and I also
added a rule requiring multiple 'fishy' losses within a period (~2s)
before the extra reduction kicked in, but that's a tuning/testing
issue.
For LEDBAT, as a scavenger protocol, it may make sense to respond to
all drops by reducing more than TCP would. (A drop implies either that
max queues are very short (<~100ms), that AQM is active, or that other
traffic has forced the queuing delay up to the limit - in any of these
cases you want to back off and yield.) So long as all the scavengers
back off by roughly similar amounts, they should be fair among
themselves in the super-short-queue (or AQM) case. In all other cases,
LEDBAT would just get out of the way of foreground traffic
better/faster. It might reduce efficiency slightly in some cases, but I
suspect not much, and the only case we really care about (for LEDBAT)
is a non-fully-loaded channel.
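As a rough sketch of what "reducing more than TCP" could look like (the
0.25 factor and all names here are my own assumptions for illustration,
not anything from the LEDBAT spec):

MIN_CWND = 2              # packets; floor so the flow can still probe
SCAVENGER_BACKOFF = 0.25  # vs. TCP Reno's 0.5: back off harder on loss

def on_congestion_signal(cwnd):
    """Multiplicative decrease applied on loss (or ECN mark)."""
    # As long as all scavenger flows use roughly the same factor, they
    # remain fair among themselves in the short-queue/AQM case, while
    # yielding to foreground TCP faster than TCP itself would.
    return max(MIN_CWND, int(cwnd * SCAVENGER_BACKOFF))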
--
Randell Jesup
randell-ietf at jesup.org