[R-C] Congestion Control BOF

Henrik Lundin henrik.lundin at webrtc.org
Thu Oct 13 08:37:41 CEST 2011


On Thu, Oct 13, 2011 at 12:18 AM, Varun Singh <vsingh.ietf at gmail.com> wrote:

> On Wed, Oct 12, 2011 at 23:48, Randell Jesup <randell-ietf at jesup.org>
> wrote:
> > On 10/12/2011 11:25 AM, Henrik Lundin wrote:
> >>
> > I honestly don't have a lot of love for fully offline, non-real-time
> > simulations like ns2 for this sort of work - the realtime algorithms
> > are often painful to rework to run paced by the simulation, and to be
> > honest I want a reasonable approximation of a real network, where
> > subjective measurements and evaluation can be done quickly.  Plus I
> > find Dummynet/Netem more intuitive than what I know about using ns2
> > (which I've never actually used, but the comments from Varun Singh
> > make me shudder at using it as a primary tool).
> >
>
> Just to clarify: in our simulation setup, we hooked up an encoder and
> decoder to the NS2 application. (If we had simulated dummy packets
> instead, the simulation would have run much more quickly.) The encoder,
> which runs outside the NS2 context, provides H.264 frames to the NS2
> application, where they are fragmented, encapsulated in RTP packets and
> transmitted over the NS2 topology. At the receiver side, the NS2
> application passes the RTP packets to a real-world decoder. The main
> reason the simulation is slow is that the NS2 application had to
> routinely synchronize with the codec running outside NS2. In effect,
> NS2 polled the codec: "it is now 0 ms, do you have any data? 5 ms?
> 10 ms? 15 ms?" and so on.
>
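A minimal sketch of the kind of polling bridge Varun describes, in C++
(the ExternalCodec interface, the method names and the 5 ms tick are all
hypothetical, for illustration only):

    // Hypothetical bridge between simulated time and an external codec.
    // The simulator advances in fixed ticks and asks the codec whether
    // it has produced a frame by that (simulated) time.

    #include <cstdint>
    #include <vector>

    struct EncodedFrame {
      int64_t capture_time_ms;
      std::vector<uint8_t> payload;  // e.g., one H.264 frame
    };

    // Assumed interface to the codec running outside the simulator.
    class ExternalCodec {
     public:
      virtual ~ExternalCodec() = default;
      // Returns true and fills |frame| if a frame is ready at |now_ms|.
      virtual bool PollFrame(int64_t now_ms, EncodedFrame* frame) = 0;
    };

    // Drive the codec from simulated time: "now is 0 ms, do you have any
    // data? 5 ms? 10 ms? ..." Each returned frame would be fragmented
    // into RTP packets and injected into the simulated topology.
    void RunSimulation(ExternalCodec* codec, int64_t duration_ms) {
      const int64_t kTickMs = 5;  // polling granularity (assumed)
      EncodedFrame frame;
      for (int64_t now = 0; now <= duration_ms; now += kTickMs) {
        while (codec->PollFrame(now, &frame)) {
          // SendAsRtp(frame);  // fragment, encapsulate, transmit
        }
      }
    }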
> The advantage of using NS2 is that you can build complex topologies,
> change the link types (DSL, WLAN, 3G), the queue disciplines (drop
> tail, RED) and the cross-traffic (FTP, CBR, TCP flows starting and
> stopping), and easily measure the throughput of each flow, which makes
> analysis easier. In our setup, we also got a YUV file at the decoder,
> with which we could calculate PSNR and/or play it back next to the
> original YUV.
>
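The PSNR computation itself is standard: PSNR = 10 * log10(255^2 / MSE)
for 8-bit video. A small sketch (the function shape is my own, not taken
from the setup described above):

    #include <cmath>
    #include <cstddef>
    #include <cstdint>

    // PSNR between two equally sized 8-bit planes, e.g. the Y plane of
    // the decoded output vs. the original YUV file.
    double Psnr(const uint8_t* ref, const uint8_t* test, size_t num_pixels) {
      double sse = 0.0;
      for (size_t i = 0; i < num_pixels; ++i) {
        const double d = static_cast<double>(ref[i]) - test[i];
        sse += d * d;
      }
      if (sse == 0.0) return 99.0;  // identical planes; cap instead of infinity
      const double mse = sse / static_cast<double>(num_pixels);
      return 10.0 * std::log10(255.0 * 255.0 / mse);
    }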

Varun's setup is what I am looking for. I was primarily considering
using some kind of dummy load generator (video and/or audio) to produce
flows with the right characteristics (frame rate, frame-size
variability, I-frames, etc.) and injecting them into the simulated
network. Looking at metrics such as throughput, end-to-end delay and
loss rate would provide a lot of insight into how a real media stream
would be affected.
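A toy sketch of what I have in mind (the frame-size model - one large
I-frame per GOP plus a random spread around a mean derived from the
target bitrate - is invented for illustration, not a validated traffic
model):

    #include <random>

    // Emits synthetic "frames" whose sizes roughly mimic a video encoder
    // at a given target bitrate: one large I-frame every gop_length
    // frames, smaller P-frames with random size variation in between.
    class DummyVideoSource {
     public:
      DummyVideoSource(double bitrate_bps, double fps, int gop_length)
          : mean_frame_bytes_(bitrate_bps / 8.0 / fps),
            gop_length_(gop_length),
            rng_(42),
            jitter_(0.7, 1.3) {}  // +/-30% size spread (assumed)

      // Size in bytes of the next frame to inject into the network.
      int NextFrameBytes() {
        const bool is_i_frame = (frame_count_++ % gop_length_ == 0);
        // I-frames ~4x the mean, P-frames somewhat below it (assumed).
        const double scale = is_i_frame ? 4.0 : 0.8;
        return static_cast<int>(mean_frame_bytes_ * scale * jitter_(rng_));
      }

     private:
      const double mean_frame_bytes_;
      const int gop_length_;
      int frame_count_ = 0;
      std::mt19937 rng_;
      std::uniform_real_distribution<double> jitter_;
    };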

Thanks for the feedback. It seems NS2 is the weapon of choice.


> > My preference is for netem and/or dummynet across a bench, and also
> > for connecting a device to a corporate or high-speed residential
> > network while simulating a much lower-speed or loaded access link.
> > Hook up a switch or NAT and a PC to run transfers, make VoIP calls,
> > run bittorrent(!!), randomly browse, watch YouTube, etc., while making
> > video calls on the target.  We also used to have regular residential
> > cable modems.  Make lots of actual calls where you're talking to
> > someone; humans are very good at picking out most types of errors -
> > especially the ones that matter - and you can also collect statistics
> > while doing so.  Combine those tools with a few good testers and
> > you'll find all sorts of funny edge conditions that most straight
> > simulations miss.
> >
>
> IMO dummynet would work, but to properly analyze the rate control we
> must also analyse the end-to-end throughput (and/or other metrics) of
> the rtcweb media, the link, and the other traffic, to make sure that
> the media is fair to the cross traffic and that the cross traffic
> doesn't starve the media (e.g., in the case of Bittorrent or too many
> TCP flows). For most scenarios this should be possible using tcpdump.
>
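A rough sketch of the kind of capture post-processing that implies,
using libpcap (it assumes an Ethernet link layer and, for brevity, keys
the "flow" on source IP only; a real analysis would use the full
5-tuple):

    #include <pcap/pcap.h>
    #include <netinet/ip.h>
    #include <arpa/inet.h>
    #include <cstdio>
    #include <map>

    // Per-second throughput of one sender in a tcpdump capture.
    int main(int argc, char** argv) {
      if (argc != 3) {
        std::fprintf(stderr, "usage: %s capture.pcap src-ip\n", argv[0]);
        return 1;
      }
      char errbuf[PCAP_ERRBUF_SIZE];
      pcap_t* pc = pcap_open_offline(argv[1], errbuf);
      if (!pc) { std::fprintf(stderr, "%s\n", errbuf); return 1; }

      const in_addr_t wanted = inet_addr(argv[2]);
      std::map<long, long> bytes_per_sec;  // second -> byte count
      const int kEthHeader = 14;           // Ethernet assumed

      struct pcap_pkthdr* hdr;
      const u_char* data;
      while (pcap_next_ex(pc, &hdr, &data) == 1) {
        if (hdr->caplen < kEthHeader + sizeof(struct ip)) continue;
        const struct ip* iph = (const struct ip*)(data + kEthHeader);
        if (iph->ip_src.s_addr != wanted) continue;
        bytes_per_sec[hdr->ts.tv_sec] += hdr->len;
      }
      pcap_close(pc);

      for (const auto& kv : bytes_per_sec)
        std::printf("%ld %ld bit/s\n", kv.first, kv.second * 8);
      return 0;
    }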
>
> PlanetLab will be useful if we want to emulate more complex scenarios
> (like different types of routers, etc.). For simple cases, using
> dummynet on a single hop (or at the end-points) may be good enough.
>
> > IMHO.
> >
> >>            5. It appears that the code in remote_rate_control.cc
> >>            (i.e. the receiver) currently starts at the max-configured
> >>            bitrate; is this correct? What's the impact of this; what
> >>            does the sender-side start at?
> >>
> >>
> >>        This is correct, and means that the receiver side does not
> >>        enforce or ask for any rate reduction until it has experienced
> >>        the first over-use.
> >>
> >>
> >>    Ok, as per above and Magnus's comments, I tend to disagree with this
> >>    - it almost guarantees you'll be in recovery right at the start of
> >>    the call, which is not the best experience for 1-to-1 communication,
> >>    IMHO (and in my experience).  See my extensive discussion of
> >>    starting bandwidths on rtcweb (I should copy it here for archival).
> >>
> >>
> >> I think there has been a misunderstanding here. It is true that the
> >> code in the receiver side of the CC algorithm (specifically in
> >> remote_rate_control.cc) starts at the max-configured rate, while
> >> waiting to detect the first over-use. However, and this is where the
> >> catch is, it does not mean that the sender starts sending at the max
> >> rate. In our current implementation, the sender decides the actual
> >> start rate. Further, since the original implementation was based on
> >> TMMBR, we had to implement a stand-alone CC at the sender too, since
> >> it's not allowed to rely on TMMBR alone. Thus, as long as the TMMBR
> >> feedback is larger than what the sender's internal CC says, the
> >> sender will not listen to the TMMBR. Therefore, the receiver-side
> >> initialization that you see does not mean that we start each call at
> >> ludicrous speed. Some of the things in the implementation are
> >> legacy...
> >
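In other words, the effective target is just the minimum of the two
controllers. A one-line sketch (names invented, not the actual code):

    #include <algorithm>
    #include <cstdint>

    // TMMBR acts only as an upper bound: whenever the receiver's TMMBR
    // exceeds what the sender's own congestion control allows, the
    // sender-side estimate wins.
    uint32_t EffectiveTargetBps(uint32_t tmmbr_bps, uint32_t sender_cc_bps) {
      return std::min(tmmbr_bps, sender_cc_bps);
    }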
> > Ok, good - I thought I'd asked that question when we all were
> > chatting and I probably wasn't specific enough.
> >
> >>    Hmmm.  Oh well.  I guarantee there are devices out there that drift
> >>    a lot...  And even have time go backwards (great fun there).
> >>
> >> I'm sure there are. So far we have not seen drift large enough to
> >> actually offset the over-/under-use detection.
> >
> > I'm told some PC audio cards/interfaces have significantly drifty
> > timebases - but perhaps not at a level that matters here.  When
> > working with the old GIPS code, we kept a PLL tracking the apparent
> > timebase of each far-end stream, updated on RTCP reports, and reset
> > if something "odd" happens.
> >
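For reference, a toy version of that kind of timebase tracker (this is
not the GIPS code; the smoothing constant and reset threshold below are
invented for illustration):

    #include <cmath>
    #include <cstdint>

    // Tracks how fast a far-end clock runs relative to the local clock,
    // updated on RTCP reports, and reset if a report is wildly
    // inconsistent with the current estimate (e.g. time went backwards).
    class RemoteTimebaseTracker {
     public:
      void OnRtcpReport(int64_t remote_ts_ms, int64_t local_ts_ms) {
        if (!initialized_) {
          remote_origin_ = remote_ts_ms;
          local_origin_ = local_ts_ms;
          initialized_ = true;
          return;
        }
        const double local_elapsed =
            static_cast<double>(local_ts_ms - local_origin_);
        const double remote_elapsed =
            static_cast<double>(remote_ts_ms - remote_origin_);
        if (local_elapsed <= 0) return;
        // Reset if something "odd" happened (threshold is illustrative).
        if (remote_elapsed < 0 ||
            std::abs(remote_elapsed - local_elapsed * rate_) > 500.0) {
          initialized_ = false;
          rate_ = 1.0;
          return;
        }
        // Exponentially smoothed estimate of the clock-rate ratio.
        const double kAlpha = 0.05;  // smoothing constant (assumed)
        rate_ = (1.0 - kAlpha) * rate_ +
                kAlpha * (remote_elapsed / local_elapsed);
      }

      // > 1.0 means the far-end clock runs fast relative to ours.
      double rate() const { return rate_; }

     private:
      bool initialized_ = false;
      int64_t remote_origin_ = 0;
      int64_t local_origin_ = 0;
      double rate_ = 1.0;
    };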
> >
> > --
> > Randell Jesup
> > randell-ietf at jesup.org
>
> --
> http://www.netlab.tkk.fi/~varun/