[R-C] Congestion Control BOF

Randell Jesup randell-ietf at jesup.org
Wed Oct 12 22:48:24 CEST 2011


On 10/12/2011 11:25 AM, Henrik Lundin wrote:
>         About simulation: Does anyone have any suggestions for suitable
>         simulation tools? ns2 is one choice of course, but I'm not very
>         pleased
>         with it.
>
>
>     I don't have strong opinions; I've used both dummynet (for simple
>     testing, though the latest ones are better for simulating variable
>     delay and loss), and NetEm, which I'm not an expert with, but at
>     least provides normal distributions of delay, correlation of
>     delays/loss, etc.  You'd need to use it with a rate control
>     discipline to model a router/NAT/modem.
>
>     For modern uses of dummynet in PlanetLab, etc. see
>     http://info.iet.unipi.it/~luigi/dummynet/
>     and
>     http://info.iet.unipi.it/~luigi/papers/20100316-cc-preprint.pdf
>
>     More comments in-line...
>
>
> I was primarily looking for offline simulation tools. NetEm and dummynet
> are both tools for emulating network impairments to real-time traffic,
> right?

I honestly don't have a lot of love for fully offline, non-real-time 
simulations like ns2 for this sort of work - the realtime algorithms 
are often painful to rework to run paced by the simulation, and to be 
honest I want a reasonable approximation of a real network, where 
subjective measurements and evaluation can be done quickly.  Plus I find 
dummynet/NetEm more intuitive than what I know about using ns2 (which 
I've never actually used, but the comments from Varun Singh make me 
shudder at using it as a primary tool).

My preference is for NetEm and/or dummynet across a bench, and also for 
connecting a device to a corporate or high-speed residential network 
while simulating a much lower-speed or loaded access link.  Hook up a 
switch or NAT and a PC to run transfers, make VoIP calls, run 
bittorrent(!!), randomly browse, watch YouTube, etc. while making video 
calls on the target.  We used to keep regular residential cable modems 
around as well.  Make lots of actual calls where you're talking to 
someone; humans are very good at picking out most types of errors - 
especially the ones that matter - and you can collect statistics while 
doing so.  Combine those tools with a few good testers and you'll find 
all sorts of funny edge conditions that most straight simulations miss 
(a sample NetEm setup is sketched below).

IMHO.
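For concreteness, the NetEm half of that bench boils down to a couple of 
tc commands - netem for the delay/loss impairments, with tbf stacked 
under it as the rate-control discipline.  The interface name and all the 
numbers below are just illustrative:

   # ~40ms +/- 10ms delay (25% correlated) and 0.5% loss on egress
   tc qdisc add dev eth0 root handle 1:0 netem delay 40ms 10ms 25% loss 0.5% 25%
   # rate-limit underneath it to emulate a ~1Mbit/s access link
   tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 1mbit buffer 3200 limit 10000

Run your traffic through that box (bridged or routed) and you have your 
simulated access link.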

>             5. It appears that the code in remote_rate_control.cc
>         (i.e. receiver)
>             currently starts at the max-configured bitrate; is this correct?
>             What's the impact of this; what does the sender-side start at?
>
>
>         This is correct, and means that the receiver side does not
>         enforce or
>         ask for any rate reduction until it has experienced the first
>         over-bandwidth event.
>
>
>     Ok, as per above and Magnus's comments, I tend to disagree with this
>     - it almost guarantees you'll be in recovery right at the start of
>     the call, which is not the best experience for 1-to-1 communication,
>     IMHO (and in my experience).  See my extensive discussion of
>     starting bandwidths on rtcweb (I should copy it here for archival).
>
>
> I think there has been a misunderstanding here. It is true that the code
> in the receiver-side of the CC algorithm (specifically in
> remote_rate_control.cc) starts at the max-configured rate. This is while
> waiting for it to detect the first over-use. However, and this is where
> the catch is, it does not mean that the sender starts sending at the max
> rate. In our current implementation, the sender decides the actual start
> rate. Further, since the original implementation was done based on
> TMMBR, we had to implement a stand-alone CC at the sender too, since
> it's not allowed to rely on TMMBR alone. Thus, as long as the TMMBR
> feedback is larger than what the sender's internal CC says, it will not
> listen to the TMMBR. Therefore, the receive-side initialization that you
> see does not mean that we start each call at ludicrous speed. Some of
> the things in the implementation are legacy...

Ok, good - I thought I'd asked that question when we were all chatting, 
but I probably wasn't specific enough.
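
If I'm reading you right, then, the sender's target rate just reduces to 
the minimum of its internal CC estimate and the latest TMMBR - roughly 
this shape (a sketch; all the names here are mine, not the actual 
implementation's):

   // Hypothetical sketch - not the real webrtc code or its names.
   #include <algorithm>
   #include <cstdint>

   class SenderSideRate {
    public:
     explicit SenderSideRate(uint32_t start_bitrate_bps)
         : internal_cc_bps_(start_bitrate_bps),  // sender picks the start rate
           tmmbr_bps_(0xFFFFFFFFu) {}            // "no cap" until first TMMBR

     void OnTmmbr(uint32_t bps)            { tmmbr_bps_ = bps; }
     void OnInternalEstimate(uint32_t bps) { internal_cc_bps_ = bps; }

     // A TMMBR above the internal estimate is effectively ignored.
     uint32_t TargetBps() const { return std::min(internal_cc_bps_, tmmbr_bps_); }

    private:
     uint32_t internal_cc_bps_;
     uint32_t tmmbr_bps_;
   };

That matches what you describe: the receiver-side max initialization is 
harmless because the min() never lets it raise the send rate.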

>     Hmmm.  Oh well.  I guarantee there are devices out there that drift
>     a lot...  And even have time go backwards (great fun there).
>
> I'm sure there are. So far we have not seen drift large enough to
> actually offset the over-/under-use detection.

I'm told some PC audio cards/interfaces have significantly drifty 
timebases - though perhaps not at a level that matters here.  When 
working with the old GIPS code, we kept a PLL tracking the apparent 
timebase of each far-end stream, updated it on RTCP reports, and reset 
it if something "odd" happened.


-- 
Randell Jesup
randell-ietf at jesup.org

