[R-C] Strawman: Requirements

Randell Jesup randell-ietf at jesup.org
Wed Oct 19 20:13:55 CEST 2011


On 10/19/2011 7:02 AM, Harald Alvestrand wrote:
>>>> What about delay? Should it be variable or fixed one-way delay
>>>> (end-to-end delay)?
>>>>
>> Any comments on end-to-end delay: fixed 50ms? 100ms? variable 20-70ms?
>> i.e. a few ms on 100 Mbit links and 10s of ms on the bottleneck link (BW).

> I'd think that tests at 5 ms and 100 ms delay on the "narrow" link
> should give us reasonable scenarios - one simulates adjacent houses, the
> other one simulates the other side of a continent.

I think we need to simulate at least a subset of these, and I'd love for 
all of them:

LAN+WiFi - wifi is the constriction, no other significant delay

"local" broadband access - asymmetric access link, short additional 
delay (5-20ms)

"long-distance" - asymmetric access link, longer delay (25-75ms)

"intercontinental" - asymmetric access link, much longer delay (75-150ms).

"satellite" - asymmetric access link, 300-600ms delay(?)

Comments?  The WiFi case is important.  I'm also interested in wireless 
broadband cases (3G/4G).
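
To make that concrete, here's a rough sketch (Python) of how these scenarios
could be written down as emulation parameters for a test harness. The one-way
delays come from the list above; the asymmetric bandwidth figures and the
3G/4G entry are placeholder assumptions of mine, not proposed numbers.

# Scenarios above expressed as emulation parameters.
# Delays are from the list; bandwidths are placeholder assumptions.
SCENARIOS = {
    "lan_wifi":         {"down_kbps": 20000, "up_kbps": 20000, "owd_ms": 2},    # WiFi is the constriction
    "local_broadband":  {"down_kbps": 8000,  "up_kbps": 1000,  "owd_ms": 10},   # 5-20 ms
    "long_distance":    {"down_kbps": 8000,  "up_kbps": 1000,  "owd_ms": 50},   # 25-75 ms
    "intercontinental": {"down_kbps": 8000,  "up_kbps": 1000,  "owd_ms": 110},  # 75-150 ms
    "satellite":        {"down_kbps": 8000,  "up_kbps": 1000,  "owd_ms": 450},  # 300-600 ms
    "wireless_3g_4g":   {"down_kbps": 4000,  "up_kbps": 1000,  "owd_ms": 60},   # placeholder
}

def netem_delay_cmd(name, dev="eth0"):
    """Return a tc/netem command that adds the scenario's one-way delay on dev."""
    owd = SCENARIOS[name]["owd_ms"]
    return "tc qdisc add dev %s root netem delay %dms" % (dev, owd)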

>> The reason I asked was that the direction of the message is noted
>> as "from A to B" while TMMBR should flow "from B to A", and I wasn't
>> sure whether it was TMMBR or the sender telling the receiver what it
>> was using as the current bit rate. However, REMB clarifies it.
>
> Yup.
> Branching into design for a moment: With sender-side bandwidth
> estimation, I am worried about packet rates; since RTCP packets don't
> piggyback on return traffic like TCP ACK does, having a sender-side
> computation based on RTCP (such as basic TFRC) requires that you
> send RTCP packets very frequently - could be up to one RTCP for every
> RTP packet in short-RTT scenarios.

Right, that's a definite worry of mine with sending "raw" data back; you 
can, however, send "partially cooked" data back when certain events occur 
and suppress the "nothing interesting, move along" reports.
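
To sketch what I mean by "partially cooked" plus suppression (purely
illustrative - the 10% change threshold and 1 s maximum silence are numbers
I made up):

# Receiver keeps its own estimate and only emits an RTCP feedback
# message when something noteworthy happens, instead of echoing raw
# per-packet statistics back to the sender.
class FeedbackSuppressor:
    def __init__(self, rel_change=0.10, max_silence_s=1.0):
        self.rel_change = rel_change        # report if estimate moves >10%
        self.max_silence_s = max_silence_s  # or if we've been quiet too long
        self.last_sent_bps = None
        self.last_sent_time = None

    def should_report(self, now_s, est_bps, loss_seen):
        """Return True if a feedback packet is worth sending now."""
        if loss_seen or self.last_sent_bps is None:
            send = True                     # congestion events and first report
        elif abs(est_bps - self.last_sent_bps) > self.rel_change * self.last_sent_bps:
            send = True                     # estimate moved noticeably
        elif now_s - self.last_sent_time >= self.max_silence_s:
            send = True                     # periodic keep-alive
        else:
            send = False                    # "nothing interesting, move along"
        if send:
            self.last_sent_bps = est_bps
            self.last_sent_time = now_s
        return send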

> Computing bandwidth at the receiver and sending the computed rate to the
> sender allows you to react quickly to changes AND keep a low RTCP
> frequency when you don't have a congestion problem. This of course
> requires that you have a sender-side algorithm that reacts appropriately
> when congestion becomes so bad that most of the RTCP packets get
> lost... always tradeoffs!

Yes.
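
Expanding on that: a minimal sketch of the sender-side safety valve, where
the sender follows the receiver-computed (REMB-style) rate while reports keep
arriving and halves its rate on its own if they go quiet. The 2x
report-interval timeout, the halving step, and the 30 kbps floor are my
assumptions, not anything agreed.

# Trust receiver feedback while it arrives; back off unilaterally if it
# stops (the reports may be getting lost to congestion).
class SenderRateController:
    def __init__(self, init_bps, min_bps=30000, report_interval_s=1.0):
        self.rate_bps = init_bps
        self.min_bps = min_bps
        self.timeout_s = 2.0 * report_interval_s
        self.last_feedback_s = None         # no backoff until first report

    def on_receiver_estimate(self, now_s, remb_bps):
        """Apply a receiver-computed rate from an incoming feedback message."""
        self.rate_bps = remb_bps
        self.last_feedback_s = now_s

    def on_tick(self, now_s):
        """Call periodically; halve the rate if feedback has gone silent."""
        if (self.last_feedback_s is not None
                and now_s - self.last_feedback_s > self.timeout_s):
            self.rate_bps = max(self.min_bps, self.rate_bps // 2)
            self.last_feedback_s = now_s    # don't halve again every tick
        return self.rate_bps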

>> Is this a moving average over 10s, or discrete 10s intervals? Because
>> I am not sure why 10s instead of per second.

> I was thinking in terms of measuring the number of bytes delivered from
> second #60 to second #70 in the (simulation / test), and calling that
> "the measured value".
> If we get significantly different numbers in second #70 to #80, that's
> worthy of investigation.
>
> My reason for picking 10s is that I've done tests where it turned out
> that I was measuring effectively 4 events in the interval, and with a
> requirement that > 80% succeeded, a single packet loss was enough to
> break the test; if we measure over single seconds for a pass/fail
> criterion, I think a test will be a bit too sensitive to random
> combinations of events (double packet losses, for instance) for the kind
> of timescales we're talking about - there are only ~50 packets in the
> typical second for voice, and there might be even fewer for scaled-down video
> (congestion-caused reduced-rate video may go down to 5 fps in some
> products). But it's another pragmatic number.

Yes, we can iterate on the test design as needed; we just need to 
remember that these numbers aren't carved in stone - they're tools to 
get at the useful aspects of behavior.
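
As a starting point for that iteration, here's a sketch of the 10 s window
measurement Harald describes: sum the bytes delivered in seconds #60-#70,
call that "the measured value", and flag the next window if it differs
significantly. The 20% "worthy of investigation" threshold is my own
placeholder, not a proposed criterion.

def window_bytes(deliveries, start_s, end_s):
    """deliveries: iterable of (timestamp_s, payload_bytes) pairs."""
    return sum(size for t, size in deliveries if start_s <= t < end_s)

def compare_windows(deliveries, rel_threshold=0.20):
    """Compare seconds 60-70 against 70-80; flag a significant difference."""
    first = window_bytes(deliveries, 60, 70)
    second = window_bytes(deliveries, 70, 80)
    flagged = first == 0 or abs(second - first) > rel_threshold * first
    return first, second, flagged

At ~50 voice packets per second a 10 s window covers roughly 500 packets, so
a single loss moves the measurement by only about 0.2% - which is exactly the
robustness the longer window buys us.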


-- 
Randell Jesup
randell-ietf at jesup.org

