[R-C] LEDBAT vs RTCWeb

Stefan Holmer stefan at webrtc.org
Tue Apr 10 16:40:28 CEST 2012


On Tue, Apr 10, 2012 at 4:02 PM, Randell Jesup <randell-ietf at jesup.org> wrote:

> On 4/10/2012 7:51 AM, Stefan Holmer wrote:
>
>>
>>
>> On Tue, Apr 10, 2012 at 12:10 PM, Harald Alvestrand
>> <harald at alvestrand.no <mailto:harald at alvestrand.no>> wrote:
>>
>>    Just to summarize what I currently understand about LEDBAT as
>>    compared to the scenario that RTCWEB envisions.....
>>
>>    - RTCWEB assumes a set of media streams, possibly supplemented by a
>>    set of data streams carrying data with real-time requirements.
>>    LEDBAT assumes that there is some amount of data that needs
>>    transferring, and that it's appropriate to delay data if congestion
>>    occurs.
>>
>>    - RTCWEB wants to make sure delay is low as long as it's not
>>    fighting with TCP, and wants its "fair share" when competing with
>>    TCP. LEDBAT wants to back off when encountering TCP, and uses low
>>    delay as a signal of "no competition is occurring"; it doesn't care
>>    about the specific delay.
>>
>>
>> I don't think it's clear that an RTCWEB flow always wants to compete
>> with a TCP flow for its fair share. For instance, I can imagine that a
>> user may find that the 1-2 seconds of delay caused by the TCP flow makes
>> the experience too poor, and that it's better to leave more bandwidth to
>> TCP so that the transfer finishes more quickly. It depends on the amount
>> of buffer bloat, the length of the TCP flow, user preference, etc.
>>
>
> 1-2 seconds is effectively unusable.
>
> When we compete with a (saturating) TCP flow, there are a few options:
>
> 1) reduce bandwidth and hope the TCP flow won't take it all (not all TCP
> flows can sustain infinite bandwidth, and the TCP flow may have bottlenecks
> or RTT-based limits that stop it from taking everything) or that the TCP
> flow will use the bandwidth to end faster.
>
> 2) reduce bandwidth and hope the TCP flow may not take the extra bandwidth
> fast enough to force the queues too deep - we reduce bandwidth and cause
> the queues to drain, and TCP will keep adding to its bandwidth - but at a
> certain rate depending on RTT/etc.  We may be able to keep the queues low
> as we give up bandwidth in chunks, though eventually we will be driven down
> to our base.  If we're lucky the TCP flow will (as per 1) hit another
> limit, or will end (not unusual for web browsing!). This really is a
> variant of #1.
>
> 3) reduce bandwidth, but allow queues to rise to a degree (say 100-200ms,
> perhaps an adaptive amount).  This may allow an AQM or short-queue router
> to cause losses to the TCP flow(s) and cause them to back off.  This could
> be a secondary measure after initial bandwidth reductions.
>
> 4) switch to pure loss-based, which means letting queue depths rise to the
> point of loss.  Cx-TCP uses this (see recent (Nov? Dec?) ToN article
> referenced in my rtcweb Interim presentation from the end of Jan/early
> Feb).  In some cases this will result in seconds of delay.
>
> There might be some possible dynamic tricks, though they may not result in
> a reasonable experience, such as allowing or forcing a spike in queue
> depths to get TCP to back off a lot, then reducing bandwidth a lot to let
> them drain while TCP starts to ramp back up.  Eventually TCP will saturate
> again and force queues depths to rise, requiring you to repeat the
> behavior.  This will cause periodic bursts of delay or loss which will be
> annoying, and also may lead to poor overall link utilization (though it's
> an attempt to get a fair share most of the time for the delay-based
> protocol).  I doubt that overall this is practical.
>
>
>
I agree that those are all possible actions.
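To make the trade-off behind options 1, 2, and 4 concrete, here is a toy fluid-model simulation, not from the thread and not any real implementation; all numbers (1000 kbps bottleneck, 500 ms tail-drop buffer, 100 ms ticks, 50 kbps media floor, 10 ms back-off threshold) are hypothetical. It pits a loss-based TCP-like sender against a delay-based sender that backs off whenever it sees standing queue:

```python
# Toy fluid model of a loss-based (TCP-like) flow sharing one
# bottleneck queue with a delay-based flow.  All constants are
# hypothetical illustration values, not from the thread.
CAP = 1000.0     # bottleneck capacity, kbps
BUF_MS = 500.0   # tail-drop buffer depth, in ms of queuing delay
TICK = 0.1       # seconds per simulation step

def compete_with_tcp(steps=600):
    queue_ms, tcp, rtc = 0.0, 300.0, 300.0
    tcp_sum = rtc_sum = 0.0
    for _ in range(steps):
        # Queue grows when arrivals exceed capacity, drains otherwise.
        surplus = tcp + rtc - CAP
        queue_ms = max(queue_ms + surplus / CAP * TICK * 1000.0, 0.0)
        if queue_ms >= BUF_MS:   # buffer full: tail drop, TCP sees loss
            queue_ms = BUF_MS
            tcp *= 0.5           # multiplicative decrease on loss
        else:
            tcp += 10.0          # additive increase, kbps per tick
        if queue_ms > 10.0:      # delay-based flow backs off on queuing
            rtc = max(rtc * 0.9, 50.0)
        else:
            rtc += 10.0
        tcp_sum += tcp
        rtc_sum += rtc
    return tcp_sum / steps, rtc_sum / steps
```

Averaged over the run, the TCP-like flow ends up with several times the delay-based flow's share: the delay-based sender keeps giving up bandwidth to drain the queue, and is driven down toward its base rate, which is the outcome the options above are trying to manage.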


>
>
>>    - RTCWEB's RTP streams consist of unidirectional streams with
>>    (currently) fairly infrequent feedback messages. LEDBAT assumes an
>>    acknowledgement stream with nearly the same packet intervals as the
>>    forward stream.
>>
>>    My conclusion: When discussing behaviour of specific models, we can
>>    learn from LEDBAT's experiences and the scenarios it was tested in,
>>    but the design goals of LEDBAT do not resemble the design goals for
>>    congestion control in the RTCWEB scenario, and we should not expect
>>    specific properties of the implementation to fit.
>>
>>
>> I agree.
>>
>
> As do I.  Also, I *REALLY* worry about the interaction of LEDBAT flows
> and rtcweb flows...  If it targets 100ms of queuing delay as the "I'm out
> of the way of TCP" level, that could seriously hurt us (and general VoIP
> as well, but us even more so, since we'll again get driven into the
> ground trying to keep the queues drained).  It may take longer, but LEDBAT
> flows tend to be close-to-infinite, I would assume.
> If it targets 25ms, that's less problematic, I suspect.
>
> I'm not saying I know there will be a problem here, but I fear there will
> be, since LEDBAT has a non-zero queuing-delay target - it may "poison the
> waters" for any delay-based algorithm that wants to target a lower number.


Yes, having two algorithms with different delay targets compete should be
approximately the same thing as having a delay-based algorithm compete with
a loss-based algorithm, although the effects seen may be more or less bad
depending on how close the targets are. To be clear, our draft
(draft-alvestrand-rtcweb-congestion) has a 0 delay target, which means that
it will always let the queues drain before increasing the rate.


>
>
> --
> Randell Jesup
> randell-ietf at jesup.org
>
> _______________________________________________
> Rtp-congestion mailing list
> Rtp-congestion at alvestrand.no
> http://www.alvestrand.no/mailman/listinfo/rtp-congestion
>

