[R-C] Packet loss response - but how?
Harald Alvestrand
harald at alvestrand.no
Fri May 4 09:02:48 CEST 2012
Now that the LEDBAT discussion has died down....
it's clear to me that we've got two scenarios where we HAVE to treat
packet loss as an indicator that a delay-based congestion control
algorithm "has to do something":
- Packet loss because of queues filled by TCP (high delay, but no way to
reduce it)
- Packet loss because of AQM-handled congestion (low delay, but packets
go AWOL anyway)
We also have a third category of loss that we should NOT consider, if we
can avoid it:
- Packet loss due to stochastic events like wireless drops.
(aside: ECN lets you tell the congestion-caused losses apart from the
stochastic ones: an ECN mark is unambiguously a congestion signal. But
we can't assume that ECN will be universally deployed any time soon.)
Now - the question is HOW the receiver responds when it sees packet loss.
Some special considerations:
- Due to our interactive target, there is no difference between a
massively out-of-order packet and a lost packet, so we can regard
anything that arrives ~100 ms later than it "should" as lost.
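The "very late counts as lost" rule above can be sketched in a few lines; the threshold and function names are illustrative assumptions:

```python
# Assumed deadline: with an interactive playout target, a packet
# arriving ~100 ms after its expected time is as good as lost.
LATE_AS_LOST_MS = 100

def is_effectively_lost(expected_arrival_ms, actual_arrival_ms=None):
    """Treat a very late (or never-arriving) packet as a loss."""
    if actual_arrival_ms is None:
        return True  # never arrived at all
    return actual_arrival_ms - expected_arrival_ms > LATE_AS_LOST_MS
```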
- Due to the metronome-beat nature of most RTP packet streams, and the
notion of at least partial unreliability, TCP's "last packet before a
pause is lost" scenario can probably be safely ignored: we can always
detect a loss when the next packet arrives.
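Since the stream keeps ticking, loss detection can be as simple as watching for a sequence-number gap on the next arrival, with RTP-style 16-bit wraparound arithmetic (per RFC 3550). A minimal sketch, ignoring reordering for brevity (which, per the point above, we treat as loss anyway):

```python
class LossDetector:
    """Count losses from sequence gaps between consecutive arrivals.

    Illustrative sketch only: assumes packets arrive roughly in order,
    so any gap means the intervening packets were lost.
    """

    def __init__(self):
        self.highest_seq = None
        self.lost = 0

    def on_packet(self, seq):
        if self.highest_seq is not None:
            # 16-bit wraparound-safe difference, as in RTP seq numbers.
            gap = (seq - self.highest_seq) & 0xFFFF
            if gap > 1:
                self.lost += gap - 1  # the skipped packets
        self.highest_seq = seq
        return self.lost
```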
Thoughts?
Harald
More information about the Rtp-congestion mailing list