[R-C] LEDBAT - introductions?

Jim Gettys jg at freedesktop.org
Mon Apr 9 19:54:06 CEST 2012


On 04/03/2012 07:36 AM, Harald Alvestrand wrote:
> Query:
>
> LEDBAT has been battered about several times in recent threads.
> I don't know what it is (apart from being a WG active at
> http://tools.ietf.org/wg/ledbat/ ) - could anyone give a quick summary
> of:
>
> - what it is
> - where it's at in its talk/develop/deploy/learn cycle
> - what the core properties are?
>

I'd like to put a bit of perspective back into this discussion.

The essential idea of LEDBAT is that, by using delay as its congestion
signal, it tries to stay out of the way of TCP, so that it doesn't
clobber interactive web traffic. We now know that most of the
BitTorrent damage was caused by bufferbloat: even a single long-lived
TCP connection will saturate any edge link and fill a buffer of any
size (see my demo on YouTube:
http://www.youtube.com/watch?v=npiG7EBzHOU).  Worse, when delays go up,
TCP's responsiveness to competing traffic degrades quadratically with
the delay: 10 times the delay means TCP takes 100 times as long to get
out of the way.
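
(A back-of-the-envelope sketch of where the quadratic comes from,
assuming standard additive-increase behavior:

    cwnd grows by ~1 packet per RTT in congestion avoidance;
    the window to cover is ~B * RTT packets (the delay-bandwidth product);
    so adapting takes ~B * RTT round trips, each of length RTT,
    i.e. time ~ B * RTT^2  ->  10x the RTT gives ~100x the time.)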

LEDBAT has essentially engineered around bufferbloat to avoid hurting
interactive web traffic, and to avoid depending on diffserv, though one
of BitTorrent's properties (using many TCP connections simultaneously
in its original incarnation) confused the issue. Bufferbloat is much
worse than most understand: "typical" overbuffering in today's
broadband is best measured in *seconds* rather than milliseconds.
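
(For the curious, the core of LEDBAT's controller is a one-line window
update; here is a minimal C sketch following the ledbat WG draft.  The
constants are the draft's; the function and its arguments are
hypothetical placeholders, not any real implementation:

    #define TARGET_MS 100.0   /* draft's target queuing delay */
    #define GAIN        1.0   /* draft's gain */
    #define MSS      1460.0   /* segment size, bytes */

    double ledbat_cwnd_update(double cwnd, double base_delay_ms,
                              double current_delay_ms, double bytes_acked)
    {
        /* queuing delay = current one-way delay minus the observed base */
        double queuing_delay = current_delay_ms - base_delay_ms;

        /* positive when under the target (grow), negative when over it */
        double off_target = (TARGET_MS - queuing_delay) / TARGET_MS;

        /* never ramps faster than TCP's one-MSS-per-RTT increase */
        cwnd += GAIN * off_target * bytes_acked * MSS / cwnd;
        return cwnd > MSS ? cwnd : MSS;   /* floor at one segment */
    }

So when the queue it sees exceeds the target, the window shrinks and
LEDBAT yields; TCP, which reacts only to loss, keeps right on pushing.)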

Once we have AQM in the edge links, the links will remain low latency
(and the many-TCP-connections problem of BitTorrent will reappear; a
pile of BitTorrent uploads will be competing with your other traffic).
And I don't know of any other real way to fix bufferbloat (unless you
can wave a magic wand and make *all* TCPs delay sensitive, and that has
other problems).

So as a congestion avoidance algorithm for real-time web traffic,
LEDBAT just isn't viable or useful, as far as I can see.

Ironically, diffserv has in fact been deployed without anyone except
the gaming industry noticing: the default queueing discipline in Linux
is pfifo_fast, which prioritizes packets according to a particular
interpretation of the diffserv/TOS bits.  For better or worse, that's
what's actually deployed, and some applications/devices set diffserv
markings to match it.
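
(Concretely, here is a minimal sketch of how an application can opt
into that treatment on Linux; pfifo_fast classifies on the old TOS
bits, and IPTOS_LOWDELAY lands a flow in the highest-priority band:

    #include <netinet/in.h>
    #include <netinet/ip.h>
    #include <sys/socket.h>

    /* ask for low-delay treatment; pfifo_fast maps this to band 0 */
    int mark_low_delay(int fd)
    {
        int tos = IPTOS_LOWDELAY;   /* 0x10 */
        return setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
    }

The helper name is mine; the setsockopt() call and IPTOS_LOWDELAY are
the standard API.)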

To be honest, I see no way around fixing bufferbloat, and I'd certainly
like some good way to mark such traffic, as we can with TCP (via drops
or ECN), so the audio/video flows can react to changes in network
congestion.  And the edge of the network *will* normally be congested,
at least some of the time.
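
(As a sketch of what that could look like today: on Linux a UDP/RTP
receiver can at least observe ECN CE marks, assuming the sender set ECT
on its packets.  The function names here are mine; IP_RECVTOS and the
cmsg plumbing are the real API:

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <string.h>

    /* ask the kernel to deliver each datagram's TOS byte as cmsg data */
    int enable_tos_reporting(int fd)
    {
        int on = 1;
        return setsockopt(fd, IPPROTO_IP, IP_RECVTOS, &on, sizeof(on));
    }

    /* after recvmsg(): did this datagram carry the ECN CE codepoint? */
    int datagram_was_ce_marked(struct msghdr *msg)
    {
        struct cmsghdr *c;
        for (c = CMSG_FIRSTHDR(msg); c; c = CMSG_NXTHDR(msg, c))
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_TOS) {
                unsigned char tos;
                memcpy(&tos, CMSG_DATA(c), 1);
                return (tos & 0x03) == 0x03;   /* ECN field == CE */
            }
        return 0;
    }

Getting the congestion signal back to the sender is then RTCP's job.)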

Do we need congestion avoidance in RTP flows?  Certainly: I'd really
like to use any spare bandwidth for higher quality audio and video.  I'd
certainly like to mark RTP packets when congestion occurs.  And I'd like
to be able to use all my bandwidth normally, which means that (given HD
sensors are now cheap) I can easily expect outgoing (and maybe incoming)
video to saturate my edge links routinely, independent of whatever other
TCP or BitTorrent traffic is underway.  Again, we can and should presume
the edge links (both inside the house and at the broadband edge) will
normally be running saturated.
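
(What I have in mind is nothing exotic; a hypothetical AIMD-style
bitrate adapter, with made-up constants, gives the flavor: probe upward
for spare capacity, back off multiplicatively on a loss or an ECN mark:

    #define RATE_MIN_BPS    64000.0
    #define RATE_MAX_BPS  4000000.0

    double adapt_bitrate(double rate_bps, int congestion_seen)
    {
        if (congestion_seen)
            rate_bps *= 0.85;       /* multiplicative decrease */
        else
            rate_bps += 16000.0;    /* additive probe for spare bandwidth */

        if (rate_bps < RATE_MIN_BPS) rate_bps = RATE_MIN_BPS;
        if (rate_bps > RATE_MAX_BPS) rate_bps = RATE_MAX_BPS;
        return rate_bps;
    }

None of this is from a spec; the point is just that the encoder's
target rate, not a window, is what gets adjusted.)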

But LEDBAT isn't a solution for real-time protocol congestion
avoidance, unless you have a magic wand to change all the existing TCPs
out there in finite time (even if they don't lose relative to other
TCPs, which I expect they do; so I believe there is a positive
disincentive for anyone to change TCP's congestion avoidance
algorithm).
                                    - Jim

More information about the Rtp-congestion mailing list