[R-C] RRTCC issues: loss, decrease rate

Randell Jesup randell-ietf at jesup.org
Wed Aug 8 07:45:33 CEST 2012


On 8/8/2012 1:04 AM, Mo Zanaty (mzanaty) wrote:
> In the case of loss due to congestion (a full queue or AQM action), the loss itself seems like the right signal to process. Why wait to infer congestion from the subsequent delay pattern, which can be speculative and unreliable, rather than acting on the loss itself?

I agree completely: one should always assume loss indicates some type of
congestion (though very low levels of loss might be ignored).  This is
an area where the currently proposed algorithm can be improved.
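A minimal sketch of what that could look like, patterned on my reading
of the loss-based controller in the RRTCC draft; the 2%/10% thresholds
and the multipliers are illustrative, not normative:

    // Loss-based rate update: treat significant loss as congestion,
    // ignore very low loss as likely stochastic.
    double UpdateRateOnLoss(double rate_bps, double loss_fraction) {
      if (loss_fraction > 0.10) {
        // Significant loss: assume congestion and back off in
        // proportion to the loss fraction.
        return rate_bps * (1.0 - 0.5 * loss_fraction);
      } else if (loss_fraction < 0.02) {
        // Very low loss: ignore it and keep probing upward.
        return rate_bps * 1.05;
      }
      // In between: hold the current rate.
      return rate_bps;
    }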

> If the goal is to distinguish congestion from stochastic loss, that is a general problem that probably needs more thought than the RRTCC 3-sigma outlier filter, or Kalman filter (which is designed to filter stochastic jitter but not losses), or Randell's "fishy" filter. There should be ample research available on this topic from many years of TCP over wireless links.

Agreed.  The way I used the 'fishy' filter was to apply a "bonus"
reduction in bandwidth when the losses appeared fishy.  Per the earlier
emails, this would mostly happen on otherwise-mostly-idle access links,
or perhaps during bursts of cross-traffic.
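For concreteness, a hypothetical sketch of that "bonus" reduction,
building on the UpdateRateOnLoss() sketch above; the fishiness flag and
the extra back-off factor are invented for illustration and are not
taken from any draft or implementation:

    // Apply the normal loss-based update, then an extra back-off
    // when the loss pattern looks fishy (e.g. correlated with
    // rising delay rather than random).
    double ApplyFishyBonus(double rate_bps, double loss_fraction,
                           bool loss_looks_fishy) {
      double rate = UpdateRateOnLoss(rate_bps, loss_fraction);
      if (loss_looks_fishy) {
        rate *= 0.9;  // illustrative bonus reduction
      }
      return rate;
    }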

-- 
Randell Jesup
randell-ietf at jesup.org


