[R-C] Packet loss response - but how?

Bill Ver Steeg (versteb) versteb at cisco.com
Fri May 4 15:50:42 CEST 2012


The RTP timestamps are certainly our friends.

I am setting up to run some experiments with the various common buffer
management algorithms to see what conclusions can be drawn from
inter-packet arrival times. I suspect that the results will vary wildly
from the RED-like algorithms to the more primitive tail-drop-like
algorithms. In the case of RED-like algorithms, we will hopefully not
get too much delay/bloat before the drop event provides a trigger. For
the tail-drop-like algorithms, we may have to use the increasing
delay/bloat trend as a trigger. As I think about the LEDBAT discussions,
I am concerned about the interaction between the various algorithms -
but some data should be informative. 
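
A minimal sketch of the sort of trend check this implies, using the RTP
timestamps to estimate relative one-way delay. The 90 kHz clock, the 20 ms
trigger threshold and the 5-packet window are illustrative placeholders,
not experimental results:

RTP_CLOCK_HZ = 90000          # assumed 90 kHz video RTP clock
TREND_THRESHOLD_MS = 20.0     # assumed trigger: this much added delay

def relative_delay_ms(arrival_time_s, rtp_timestamp):
    # One-way delay up to an unknown constant offset: wall-clock arrival
    # minus the sender's pacing as encoded in the RTP timestamp.
    return arrival_time_s * 1000.0 - (rtp_timestamp / RTP_CLOCK_HZ) * 1000.0

def queue_building(samples):
    # samples: non-empty list of (arrival_time_s, rtp_timestamp) tuples for
    # recent packets. Returns True when the recent relative delay has risen
    # well above the best-case delay seen in the window, i.e. a
    # tail-drop-style queue is probably filling before any loss occurs.
    delays = [relative_delay_ms(t, ts) for t, ts in samples]
    baseline = min(delays)
    recent = delays[-5:]
    return (sum(recent) / len(recent)) - baseline > TREND_THRESHOLD_MS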

We may even be able to differentiate between error-driven loss and
congestion-driven loss, particularly if the noise is on the last hop of
the network and thus downstream of the congested queue (which is
typically where the noise occurs). In my tiny brain, you should be able
to see a gap in the time record corresponding to a packet that was
dropped due to last-mile noise. A packet dropped in the queue upstream
of the last mile bottleneck would not have that type of time gap. You do
need to consider cross traffic in this thought exercise, but statistical
methods may be able to separate persistent congestion from persistent
noise-driven loss.
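
A minimal sketch of that gap test, assuming we know (or can estimate) the
serialization time of one packet on the last-mile link; the function name
and the 1.5 factor are placeholders for illustration only:

def classify_loss(prev_arrival_s, next_arrival_s, nominal_spacing_s):
    # prev_arrival_s / next_arrival_s: arrival times of the packets that
    # bracket a missing RTP sequence number. nominal_spacing_s: expected
    # inter-packet gap at the bottleneck rate (packet size / link rate).
    observed_gap = next_arrival_s - prev_arrival_s
    if observed_gap > 1.5 * nominal_spacing_s:
        # Roughly two packet slots: the missing packet crossed the
        # bottleneck and left its time gap, so it was likely lost to
        # noise downstream of the queue.
        return "noise-like"
    # Back-to-back arrivals: the packet was likely dropped upstream,
    # in or before the congested queue.
    return "congestion-like"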

TL;DR - We can probably tell that we have queues building prior to the
actual loss event, particularly when we need to overcome limitations of
poor buffer management algorithms.

Bill VerSteeg

-----Original Message-----
From: rtp-congestion-bounces at alvestrand.no
[mailto:rtp-congestion-bounces at alvestrand.no] On Behalf Of Harald
Alvestrand
Sent: Friday, May 04, 2012 3:03 AM
To: rtp-congestion at alvestrand.no
Subject: [R-C] Packet loss response - but how?

Now that the LEDBAT discussion has died down....

it's clear to me that we've got two scenarios where we HAVE to consider
packet loss as an indicator that a congestion control algorithm based on
delay will "have to do something":

- Packet loss because of queues filled by TCP (high delay, but no way to
  reduce it)
- Packet loss because of AQM-handled congestion (low delay, but packets
  go AWOL anyway)

We also have a third category of loss that we should NOT consider, if we
can avoid it:

- Packet loss due to stochastic events like wireless drops.

(aside: ECN lets you tell the first two categories apart from the third:
ECN markings unambiguously indicate congestion, not stochastic loss. But
we can't assume that ECN is universally deployed any time soon.)

Now - the question is HOW the receiver responds when it sees packet
loss.

Some special considerations:

- Due to our interactive target, there is no difference between a
massively out-of-order packet and a lost packet. So we can regard
anything that comes ~100 ms later than it "should" as lost.
- Due to the metronome-beat nature of most RTP packet streams, and the
notion of at least partial unreliability, the "last packet before a
pause is lost" scenario of TCP can probably be safely ignored. We can
always detect packet loss by looking at the next packet (a sketch of
both rules follows below).
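
A minimal receiver-side sketch along those lines, assuming a fixed ~100 ms
reorder window and ignoring RTP sequence number wrap-around for brevity
(the class name and the constant are placeholders):

REORDER_WINDOW_S = 0.100   # assumed ~100 ms "late equals lost" window

class LossDetector:
    def __init__(self):
        self.highest_seq = None
        self.pending = {}  # missing seq -> deadline after which it is lost

    def on_packet(self, seq, now_s):
        # Call for every received packet; returns the sequence numbers
        # that are now declared lost.
        if self.highest_seq is not None and seq > self.highest_seq + 1:
            # A later packet arrived, so every number in the gap is
            # missing; give each a ~100 ms grace period (measured from
            # this arrival) before declaring it lost.
            for missing in range(self.highest_seq + 1, seq):
                self.pending[missing] = now_s + REORDER_WINDOW_S
        if self.highest_seq is None or seq > self.highest_seq:
            self.highest_seq = seq
        # A late (reordered) arrival fills its own gap.
        self.pending.pop(seq, None)
        lost = [s for s, d in self.pending.items() if now_s >= d]
        for s in lost:
            del self.pending[s]
        return lost

In practice the deadline would be anchored to when the missing packet
"should" have arrived (derived from the RTP timestamps) rather than to the
arrival of the later packet, but the fixed window is enough to show the
rule.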

Thoughts?

                        Harald
