[R-C] LEDBAT vs RTCWeb

Jim Gettys jg at freedesktop.org
Tue Apr 10 21:14:51 CEST 2012


On 04/10/2012 02:58 PM, Randell Jesup wrote:
>
> 100ms is just bad, bad, bad for VoIP on the same links.  The only case
> where I'd say it's ok is where it knows it's competing with
> significant TCP flows.  If it reverted to 0 queuing delay or close
> when the channel is not saturated by TCP, then we might be ok (not
> sure).  But I don't think it does that.
>
You aren't going to see delay under saturating load drop below 100ms
unless the bottleneck link is running a working AQM; that's a property
of tail drop, and the "rule of thumb" for sizing buffers has been of
order 100ms.  That sizing is meant to ensure a single TCP flow can
achieve maximum bandwidth over continental paths.
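
To make the rule of thumb concrete, here's the back-of-the-envelope
arithmetic (the link rate below is an assumed, illustrative number):

    # Classic rule of thumb: buffer one bandwidth-delay product so a
    # single TCP flow can fill the pipe across a continental path.
    link_rate_bps = 100e6   # assumed 100 Mbit/s bottleneck link
    rtt_s = 0.100           # ~100 ms continental round trip

    buffer_bytes = link_rate_bps * rtt_s / 8
    print(f"BDP buffer: {buffer_bytes / 1e6:.2f} MB")      # 1.25 MB

    # When that tail-drop buffer is full, it adds one RTT of delay:
    delay_s = buffer_bytes * 8 / link_rate_bps
    print(f"delay when full: {delay_s * 1000:.0f} ms")     # 100 ms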

Unfortunately, the bloat in the broadband edge is often/usually much,
much higher than this, being best measured in seconds :-(.
http://gettys.files.wordpress.com/2010/12/uplink_buffer_all.png 
http://gettys.files.wordpress.com/2010/12/downlink_buffer_all.png
(thanks to the Netalyzr folks).
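
The arithmetic behind "seconds" is simple: a buffer sized in bytes adds
delay inversely proportional to the rate the link actually runs at.
Both numbers below are assumptions for illustration:

    # A modest device buffer becomes seconds of delay on a slow uplink.
    buffer_bytes = 256 * 1024   # assumed 256 KB modem/driver buffer
    uplink_bps = 1e6            # assumed 1 Mbit/s residential uplink

    drain_s = buffer_bytes * 8 / uplink_bps
    print(f"delay when full: {drain_s:.1f} s")   # ~2.1 seconds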

Worse yet, the broadband edge is typically a single queue today (even
in technologies that may support multiple classifications).  So your
VoIP traffic is likely stuck behind everything else.  ISPs' own
telephony services typically bypass these queues.
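
A toy model of why the single queue hurts: in a FIFO, a small VoIP
packet arriving behind a bulk-TCP backlog waits for every byte ahead
of it to serialize.  The backlog size and uplink rate are assumed:

    # Single FIFO: VoIP packet queued behind N bulk packets.
    uplink_bps = 1e6        # assumed 1 Mbit/s uplink
    bulk_pkt_bytes = 1500
    backlog_pkts = 100      # assumed standing TCP upload backlog

    wait_s = backlog_pkts * bulk_pkt_bytes * 8 / uplink_bps
    print(f"VoIP queuing delay: {wait_s * 1000:.0f} ms")   # 1200 ms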

If there is AQM at the bottleneck, you'll get packets marked (by drop
or by ECN) and decent latencies.

There is hope here for AQM algorithms that are self-tuning: I now know
of two such beasts, though they are a long way from "running code"
state at the moment.
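
I won't put words in those algorithms' mouths, but the general shape of
a self-tuning, delay-based AQM can be sketched: watch how long packets
actually sit in the queue, and mark or drop only when that sojourn time
stays above a small target.  Everything below (names, thresholds) is
illustrative, not either of the algorithms alluded to above:

    import time

    TARGET_S = 0.005     # assumed acceptable queuing delay (5 ms)
    INTERVAL_S = 0.100   # assumed window for a "standing" queue

    class DelayBasedAQM:
        """Toy AQM keyed to observed delay rather than queue length,
        so it needs no per-link tuning; reacts to standing queues,
        not transient bursts."""
        def __init__(self):
            self.first_above = None  # when delay first exceeded target

        def should_mark(self, enqueue_time):
            sojourn = time.monotonic() - enqueue_time
            if sojourn < TARGET_S:
                self.first_above = None   # burst drained; reset
                return False
            if self.first_above is None:
                self.first_above = time.monotonic()
            return time.monotonic() - self.first_above >= INTERVAL_S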

So the direction I'm going is to get AQM that works (along with
classification).  But the high-order bit is AQM, to keep the endpoints'
TCPs behaving, which you can't do by classification alone.
                                - Jim


