[R-C] Strawman: Requirements

Harald Alvestrand harald at alvestrand.no
Tue Oct 18 19:44:36 CEST 2011


On 10/18/11 18:44, Varun Singh wrote:
> Hi Harald,
>
> Comments inline.
>
>> On Tue, Oct 18, 2011 at 17:07, Harald Alvestrand <harald at alvestrand.no>  wrote:
>> I think that in order to get off ground zero here, both in implementation,
>> testing and navigating the tortuous path towards standards-track adoption,
>> we should throw out some requirements.
>>
>> And in the spirit of rushing in where angels fear to tread, I'm going to
>> throw out some.
>>
>> MEASUREMENT SETUP
>>
>> Imagine a system consisting of:
>>
>> - A sender A attached to a 100 Mbit LAN, network X
>> - A bandwidth constricted channel between this LAN and another LAN, having
>> bandwidth BW
>> - A recipient B also attached to a 100 Mbit LAN, network Y
>> - A second sender A' attached to network X, also sending to B
>> - Other equipment that generates traffic from X to Y over the channel
>>
> Is there a proposal for what BW should be? We should test for
> different values of BW.
That's why I made it a free variable. I'm thinking that 2 Mbit/s and 
200 kbit/s need to be tested.
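To make the setup concrete, something like this could parameterise the test matrix (the names, delay, and queue values here are just my illustration, not an agreed spec):

```python
def make_scenario(bottleneck_bw_bps, one_way_delay_ms, queue_packets):
    """Dumbbell topology per the setup above:
    LAN X (100 Mbit) --(constricted channel BW)--> LAN Y (100 Mbit)."""
    return {
        "lan_bw_bps": 100_000_000,        # both LANs are 100 Mbit
        "bottleneck_bw_bps": bottleneck_bw_bps,
        "one_way_delay_ms": one_way_delay_ms,   # free variable, per Varun's point
        "queue_packets": queue_packets,   # border-router buffer; needs pinning down
    }

# The two BW values suggested above, with placeholder delay/queue settings
scenarios = [make_scenario(2_000_000, 50, 100),
             make_scenario(200_000, 50, 100)]
```

The point of writing it down like this is that every free variable is named, so adding delay or buffer-size sweeps later is just more entries in the list.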
> What about delay? Should it be variable or fixed one-way delay
> (end-to-end delay)?
>
> Should we simulate only uni-directional media flows, or should B also send media to A and A'?
I suggest only uni-directional for the test cases - keep things as 
simple as possible.
If we have a proposal that passes the tests for unidirectional, but 
fails to work for bidirectional, we should add a bidirectional test to 
the test suite.
>> All measurements are taken by starting the flows, letting the mechanisms
>> adapt for 60 seconds, and then measuring bandwidth use over a 10-second
>> interval.
>>
> I think initially running simulations for 60 seconds might be alright,
> but for measuring impact (to and from cross-traffic), running longer
> simulations of 300 s (~5 min) might be useful?
Yes - I would suggest doing it this way first, but once the test 
framework is built, also run more tests, and add stuff to the 
requirements test suite when it becomes clear that there are systems 
that pass the tests, but still fail to give adequate service.
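For reproducibility, it's worth nailing down exactly what "measuring bandwidth use over a 10-second interval" means. A sketch of one interpretation (function and argument names are mine):

```python
def measured_bandwidth_bps(packets, warmup_s=60.0, window_s=10.0):
    """Average bandwidth over the measurement window, after letting the
    adaptation mechanisms run for warmup_s seconds.

    packets: list of (arrival_time_s, size_bytes) tuples for one flow.
    Returns bits per second averaged over the window."""
    in_window = [size for t, size in packets
                 if warmup_s <= t < warmup_s + window_s]
    return sum(in_window) * 8 / window_s
```

So a packet arriving during warm-up counts for nothing, and the reported number is a plain average over the window, which deliberately ignores burstiness inside the 10 seconds.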
>> This seems like the simplest reasonable system. We might have to specify the
>> buffer sizes on the border routers to get reproducible results.
>>
> For the simplest system this seems alright; however, we need to verify
> that the media sender adapts when other flows (TCP or other media
> senders) start their sessions before or after the RTCWEB client.
True.
>> PROTOCOL FUNCTIONS ASSUMED AVAILABLE
>>
>> The sender and recipient have a connection mediated via ICE, protected with
>> SRTP.
>> The following functions need to be available:
>>
>> - A "timestamp" option for RTP packets as they leave the sender
> A new header extension can be defined for this, if the one in TFRC is
> not useful.
>
>> - A "bandwidth budget" RTCP message, signalling the total budget for packets
>> from A to B
> Is this like TMMBN (Notification) or should the direction be B to A?
I was thinking receiver-side estimation and a TMMBR-like message that 
applies to the whole media flow.
In internal work, we have used the name REMB - Receiver Estimated Media 
Bandwidth.
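To illustrate the semantic difference from TMMBR: the budget covers the whole flow, not one stream. This is not a wire format, just the shape of the information such a message would carry (field names are mine):

```python
from dataclasses import dataclass

@dataclass
class ReceiverEstimate:
    """REMB-like message from receiver B back to sender A: one bandwidth
    budget applying to all media from A to B, unlike TMMBR's per-stream
    limit."""
    total_budget_bps: int   # total budget for packets from A to B
    ssrcs: tuple            # all streams the budget applies to
```

The sender is then free to split the budget across its streams however it likes, which is the point of making it a whole-flow number.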
>>
>> REQUIRED RESULTS
>>
>> In each measurement scenario below, the video signal must be reproducible,
>> with at most 2% measured packet loss. (This is my proxy for "bandwidth must
>> have adapted".)
>>
> This seems like a reasonable upper-bound. However, the performance may
> vary depending on wireless/wired environment and the amount of
> cross-traffic.
Yes. Again - if we test in that environment and detect that it makes a 
difference....
>> If A sends one video stream with max peak bandwidth < BW, and there is no
>> other traffic, A will send at the maximum rate for the video stream. ("we
>> must not trip ourselves up").
>>
>> If A and A' each send a video stream with MPB > 1/2 BW, the traffic
>> measured will be at least 30% of BW for each video stream. ("we must play
>> well with ourselves")
> Just to clarify, for the whole simulation the average media rate for
> each should be over 30% of BW?
For the ten seconds of "measurement time".
>> If A sends a video stream with MPB > BW, and there is a TCP bulk transfer
>> stream from LAN X to LAN Y, the TCP bulk transfer stream must get at least
>> 30% of BW. ("be fair to TCP")
>>
>> If A sends a video stream with MPB > BW, and there is a TCP bulk transfer
>> stream from LAN X to LAN Y, the video stream must get at least 30% of BW.
>> ("don't let yourself be squeezed out")
>>
>> Mark - this is almost completely off the top of my head. I believe that
>> failing any of these things will probably mean that we will have a system
>> that is hard to deploy in practice, because quality will just not be good
>> enough, or the harm to others will be too great.
>>
> I'm wondering if 30% is a magic number? :)
I pulled it out of a hat :-)
I considered 25 and 40, and decided that 30 "felt" better. Very 
scientific! - but it's always better to name a number than to not name a 
number, because then people have to say if they want to go lower or 
higher. I've been in discussions where people talked about "low loss", 
and it turned out one was talking about 1% loss, and the other was 
talking about 0.001% loss.......
>> There are many ways of getting into trouble that won't be detected in the
>> simple setups below, but we may want to leave those aside until we've shown
>> that these "deceptively simple" cases can be made to work.
>>
>> We might think of this as "test driven development" - first define a failure
>> case we can measure, then test if we're failing in that particular way - if
>> no, proceed; if yes, back to the drawing board.
>>
>> What do you think?
> Without running a simulation, it is hard to say if this is reasonable.
If, in any of these scenarios, we can argue that we have a 
well-functioning system and the test still fails, we should make the 
test parameters less stringent.
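Since the required results above are all threshold checks, they boil down to a small pass/fail function once the simulation produces numbers. A rough sketch, using the 2% and 30% figures from this mail (the function and its inputs are my own framing):

```python
def check_requirements(loss_fraction, shares):
    """Pass/fail proxy for the required results above.

    loss_fraction: measured packet loss over the 10 s measurement window.
    shares: dict mapping flow name -> fraction of BW that flow obtained.
    Returns a list of failure descriptions; empty list means pass."""
    failures = []
    if loss_fraction > 0.02:       # "video must be reproducible"
        failures.append("packet loss above 2%")
    for name, share in shares.items():
        if share < 0.30:           # fairness floor for each competing flow
            failures.append(f"{name} got less than 30% of BW")
    return failures
```

For example, `check_requirements(0.01, {"video": 0.5, "tcp": 0.4})` passes, and relaxing the parameters later is just changing the two constants.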
> Cheers,
> Varun
>



More information about the Rtp-congestion mailing list