[R-C] Why not TFRC?
Randell Jesup
randell-ietf at jesup.org
Wed Nov 9 15:59:06 CET 2011
On 11/8/2011 8:37 PM, Wesley Eddy wrote:
> It's possible to use TFRC in order to compute a bound, and then use
> a 2nd algorithm with a focus on delay, operating below the bound set by
> TFRC.
How would TFRC produce a good bound if the delay-based algorithm keeps
delay low and avoids loss? And if it can, what does using TFRC as a
bound do for you? As far as I can see, it just acts as a stopgap against
the primary algorithm going haywire, and maybe reacting when there's a
sudden bandwidth restriction - but the delay algorithm is likely to be
more aggressive in reacting to it.
TFRC would see no loss, so it will sit thinking the bandwidth available
is 2x the current send rate, roughly. Also, if for some reason you're
not trying to pump the channel full of data, TFRC will reduce its
bandwidth estimate, since it's based on X_recv, which is the rate data
was received during the last RTT. With codec data, you don't have a
queue of data waiting to be sent, so with a short RTT you may get no
packets or just an audio packet in the previous RTT.
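The X_recv cap falls directly out of the RFC 5348 no-loss update rule. Here is a rough sketch of it (the function names and the reduction to a single formula are mine; a real TFRC sender also handles loss events, feedback timers, and the initial-rate rules):

```python
# Simplified sketch of RFC 5348's rate update when no loss is reported.
# All rates are in bytes/second; s is the segment size in bytes.

T_MBI = 64  # maximum backoff interval in seconds, per RFC 5348


def recv_limit(x_recv_set):
    """Sender-side cap: twice the largest receive rate recently reported."""
    return 2 * max(x_recv_set)


def allowed_rate_no_loss(x_current, x_recv_set, s):
    """With no loss, the sender may at most double its rate, but never
    exceed recv_limit; s/T_MBI is the floor on the sending rate."""
    x_min = s / T_MBI
    return max(min(2 * x_current, recv_limit(x_recv_set)), x_min)


# An application-limited flow (say audio-only) that was received at only
# ~10 kB/s over the last two RTTs is capped near 20 kB/s, no matter how
# much capacity the path actually has:
cap = allowed_rate_no_loss(x_current=50_000,
                           x_recv_set=[10_000, 9_000],
                           s=1_200)
print(cap)  # 20000
```

So a sender that wasn't pumping the channel full of data in the last couple of RTTs gets a bound of roughly "2x what it just sent", not a bound related to path capacity.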
In TFRC, the sender's X_recv_set is "typically" only two entries, or 2
RTT. If the devices are on the same LAN, RTT may be 10ms. Even if you
extend X_recv_set, you won't generally have a good bound on bandwidth
other than "2x the instantaneous bandwidth most recently seen". But
instantaneous bandwidth can be misleading, especially with rate-limited
codecs and when you measure far from the bottleneck - which is the norm,
since the bottleneck is usually the sender's first upstream link while
you're measuring on the far side of the receiver's downstream link, where
dispersed packets can re-aggregate on the faster link. It also makes it
impossible
to cleanly support bursty transmissions, since bandwidth ramps up (and
down) slowly. Bursts after idleness are now allowed in RFC 5348, but
only one RTT's worth at the current bandwidth estimate. Since the
bandwidth estimate degrades over time if the client is idle for even
moderate periods (think of chat, or of a game when it's not exchanging
data with a particular other player, or only at low bandwidth), the value
of this burst allowance declines as well - and it is especially low on
LANs and in other low-RTT settings (chat with your neighbor, or with
another coworker over WiFi).
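To put numbers on that burst allowance, assuming the rule amounts to "one RTT's worth of data at the current rate estimate" (my simplification of the RFC 5348 idle-restart behavior):

```python
# Hypothetical illustration of the post-idle burst allowance:
# burst = current rate estimate (bytes/s) * RTT (s). Names are mine.

def burst_allowance(rate_bytes_per_s, rtt_s):
    """Data the sender may burst after an idle period."""
    return rate_bytes_per_s * rtt_s


# LAN: 10 ms RTT, estimate decayed to ~20 kB/s while idle
lan_burst = burst_allowance(20_000, 0.010)   # about 200 bytes: under one packet
# WAN: 100 ms RTT, same decayed estimate
wan_burst = burst_allowance(20_000, 0.100)   # about 2,000 bytes: a packet or two
print(lan_burst, wan_burst)
```

On a 10ms LAN path the allowance is smaller than a single video packet, which is why the burst provision buys so little in exactly the low-RTT cases above.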
Realize that delay-based CC is almost by definition more conservative in
general than TFRC, so using TFRC as a bound buys you little if anything,
but may cause major problems with bursty use.
Think push-to-talk type applications in a game with video - if the
bandwidth is there, you want to start using it from the start of the
communication, and very quickly adapt if it's not there (adapt both up
and down faster than TFRC, and maybe start higher than TFRC, especially
if we have history about the connection). The start of a video
communication is especially sensitive to low bandwidth and slow
ramping/convergence, since a "good" baseline image (keyframe) needs to be
received before efficient P-frames can produce a high-quality experience.
<IETF contributor hat on> If the spec requires TFRC, implementers will
likely use another (delay-based) algorithm underneath it or instead of
it, and largely or completely ignore formal TFRC even if they
officially support it, since actually using TFRC would not produce a
sufficiently reliable low-delay, high-quality connection. And that will
invite incompatibilities, depending on the choices they make to "improve
on" or override TFRC.
Nota bene: I'm also an implementer.
--
Randell Jesup
randell-ietf at jesup.org