[R-C] About the all-SCTP way of doing it

Varun Singh vsingh.ietf at gmail.com
Wed Apr 4 11:11:23 CEST 2012


To emphasize what Magnus said, there is a paper titled "The
Delay-Friendliness of TCP" that describes the operating region in
which TCP may be used for VoIP (it uses RTT, loss rate, and timeliness
as metrics).

http://dl.acm.org/citation.cfm?id=1375464

On Wed, Apr 4, 2012 at 11:10, Magnus Westerlund
<magnus.westerlund at ericsson.com> wrote:
> Michael,
>
> Clearly the SCTP ACKs do provide the sender with information, and it
> is certainly possible to use ACK-based schemes. However, I think there
> are other factors that make SCTP, as currently defined, unsuitable.
>
> My main question to you regarding running RTP over SCTP is how you
> would tackle what I perceive as one of the main issues when running
> real-time interactive media over congestion control mechanisms like TCP
> and TFRC: the clocking of the packets. To keep delay minimal and
> predictable, the media sender needs to clock out the packets for each
> audio or video frame essentially at the sampling intervals. The issue
> with TFRC and TCP is that they dictate when a packet may be sent. Every
> time the CC algorithm doesn't immediately grant the right to transmit,
> you introduce end-to-end jitter already at the pre-sending stage.
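[A toy illustration of the clocking problem Magnus describes -- my own sketch with made-up numbers, not SCTP, TFRC, or any real CC algorithm: frames become ready on the media clock, but a hypothetical pacer releases only one packet per fixed interval, so waiting time accumulates before a packet ever reaches the network.]

```python
# Sketch of CC-induced pre-send jitter (illustrative numbers only): the
# media clock produces one packet every 20 ms, but an assumed pacer only
# grants one send every 25 ms.

FRAME_INTERVAL_MS = 20.0   # media clock: one packet per 20 ms
PACER_INTERVAL_MS = 25.0   # hypothetical CC-imposed send spacing

def pre_send_jitter(num_frames: int) -> list[float]:
    jitter = []
    next_grant = 0.0
    for i in range(num_frames):
        ready = i * FRAME_INTERVAL_MS          # frame finishes encoding
        send = max(ready, next_grant)          # wait for the CC grant
        next_grant = send + PACER_INTERVAL_MS  # pacer spaces out sends
        jitter.append(send - ready)            # delay added before sending
    return jitter

print(pre_send_jitter(6))  # [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]
```

[The queueing here happens before the packet is ever sent, so it shows up as end-to-end jitter regardless of network conditions.]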
>
> This is connected to the issue that once you have encoded the content
> at some format and bit-rate, it represents significant work to go back
> and change that. Thus any control signal to reduce the transmission
> rate that arrives later than the start of encoding is too late to be
> considered for this frame. You have to send the frame out as is and
> adjust the amount of data produced in the next video or audio frame.
>
> You get a similar issue with video streams, which can be extremely
> bursty due to both application needs (I-frames) and content properties
> (large movements, zooms, etc.). To stay within the delay budget and not
> introduce significant playout jitter, you can't really smooth a burst
> over longer than approximately one video frame interval, i.e. roughly
> 16.67 ms (60 fps) to 100 ms (10 fps). A video stream with this behavior
> transmitted over TCP, or for that matter SCTP, will in most situations
> blow a tight end-to-end delay budget. And when I say end to end I mean
> from video capture until video display. TCP or SCTP works fine for less
> time-critical tasks; streaming clearly works, but there a buffer of at
> least 2 seconds allows more than sufficient smoothing of the variations
> both in the CC algorithm and in the encoded content.
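[The frame-interval arithmetic behind those numbers, made explicit in a trivial sketch of mine, not from the mail:]

```python
# The longest burst-smoothing window that avoids adding playout jitter
# is roughly one frame interval, i.e. 1000 / fps milliseconds.

def frame_interval_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (60, 30, 10):
    print(f"{fps:>2} fps -> smooth a burst over at most "
          f"~{frame_interval_ms(fps):.2f} ms")
# 60 fps -> ~16.67 ms and 10 fps -> ~100.00 ms, matching the range above
```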
>
> From my perspective the toughest issues with congestion control for RTP
> are the following:
>
> 1) Tight, very low end-to-end delay requirements, which make any
> additional delay introduced by the congestion control very
> questionable. For really excellent quality the end-to-end delay must
> be below 200 ms; acceptable quality is achievable at end-to-end delays
> up to 400-500 ms. Beyond that you can't really consider it
> interactive.
>
> 2) Extremely bursty media sources: video can show more than a factor
> of 10 in frame-by-frame variation in the amount of data to transmit if
> the same video quality is to be maintained between frames.
>
> 3) An extremely nasty flow-startup issue. For video, the first frame
> is the heaviest, and at the same time it is critical that it gets
> through. In most cases you have no information about what injecting
> that amount of data into the network will do. In addition, due to
> intra-prediction, if part of this image is lost you will see
> significant quality reduction in many of the video frames that follow.
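[A back-of-the-envelope sketch of point 3 -- my own assumed numbers, not Magnus's: with a TCP-style initial window of 10 segments (RFC 6928) and classic slow start doubling, a large first I-frame needs several round trips to get through.]

```python
# How many RTTs classic slow start needs to deliver a first video
# frame, assuming an initial window of 10 segments (RFC 6928) that
# doubles each round trip. The 60 kB I-frame size is an illustrative
# assumption, not a measured figure.

def rtts_to_deliver(frame_bytes: int, iw_segments: int = 10,
                    mss: int = 1460) -> int:
    sent, cwnd, rtts = 0, iw_segments * mss, 0
    while sent < frame_bytes:
        sent += cwnd   # one round trip's worth of data
        cwnd *= 2      # slow start doubles the window per RTT
        rtts += 1
    return rtts

print(rtts_to_deliver(60_000))  # 3 RTTs for an assumed 60 kB I-frame
```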
>
> Just trying to make clear what the problem is.
>
> Cheers
>
> Magnus
>
>
> On 2012-04-04 00:12, Michael Welzl wrote:
>>
>> On Apr 3, 2012, at 10:16 AM, Stefan Holmer wrote:
>>
>>> Hi Michael,
>>>
>>> Some comments inline.
>>>
>>> /Stefan
>>>
>>> On Sun, Apr 1, 2012 at 11:17 AM, Michael Welzl <michawe at ifi.uio.no>
>>> wrote:
>>> Hi all,
>>>
>>> I've been giving this some more thought, and now think that it is a
>>> mistake to try to build an RTP-over-UDP congestion control mechanism
>>> with logic at the receiver that plans when to send feedback and
>>> minimizes it accordingly. The reason is that you want to sample the
>>> path often enough anyway, and that you may have SCTP operating on
>>> the data stream in parallel too. That can give you a situation where
>>> SCTP produces a steady flow of ACKs which the UDP-based algorithm
>>> ignores while it struggles to reduce its own feedback. All this
>>> feedback is really only about the congestion status of the path
>>> anyway, i.e. SCTP ACKs give you all the information you need.
>>>
>>> I see the channel sampling frequency as one of the upsides of
>>> receive-side estimation, since you get a sample of the channel with
>>> every incoming packet, without the extra overhead of ACK packets. In
>>> addition, algorithms at the receive side aren't limited to what we
>>> choose to feed back to the sender in the way a send-side algorithm
>>> is. For instance, at the receive side we know exactly how late each
>>> packet is, whether it is lost, whether there's ECN, etc. We can make
>>> use of all of that, and all we have to feed back is our estimate of
>>> the available bandwidth.
>>
>> Okay, I understand that view; however, the sender is where the action
>> happens, and so it's the sender that should have that (ideally, all
>> of that) information. The less you feed back to the sender, the more
>> prone you are to the effects of dropped feedback messages (congestion
>> on the back-path, or e.g. wireless noise). Besides, doing that work
>> on the receiver side lets you ignore the fact that SCTP may already
>> give you just the right amount of ACKs with just the right
>> information about your path, so you may send back MORE feedback than
>> you need to. If there were receiver-side logic within SCTP taking
>> care of controlling the amount of feedback to send, what would be the
>> best way to do it? Not via some RTCP limiting rule (which doesn't
>> really apply to underlying transports anyway, I think, i.e. you can
>> send RTP over TCP and then it only applies to RTCP messages but not
>> TCP ACKs, right?), but via a mechanism that is designed to send as
>> little feedback as possible by sending just as much as is needed. For
>> TCP, RFC 5690 describes that. For some new congestion control
>> mechanism, we'd want a similar scheme, I suppose. If it is needed at
>> all...
>>
>>
>>> So my proposal would be:
>>>
>>> - use SCTP for everything
>>> - add per-stream congestion control to it, all on the sender side
>>> - use some other RTCWeb-based signalling to negotiate the congestion
>>> control mechanism, in the style of DCCP
>>> - let all streams benefit from SCTP's ACKs; if we need to reduce the
>>> amount of feedback anyway, we could use an appropriate means that's
>>> related to transport, rather than the RTCP rule, which has nothing
>>> to do with congestion control: ACK-CC (RFC 5690) would give a good
>>> basis for that. (On a side note, when you run RTP over SCTP, you
>>> don't break any RTP rules regarding the amount of feedback, I
>>> think...)
>>>
>>> This way, with all the congestion control logic happening on the
>>> sender side, it would be much easier to manage the joint congestion
>>> control behavior across streams - to control fairness among them, as
>>> desired by this group. Note that this can yield MUCH greater
>>> benefits than meet the eye: e.g., if an ongoing transfer is already
>>> in Congestion Avoidance, a newly starting transfer across the same
>>> bottleneck (which you'll have to - correctly or not - assume for
>>> RTCWeb anyway) can skip Slow Start. In a prototypical example
>>> documented in:
>>>
>>> Not sure why it is easier to control fairness among streams just
>>> because all the estimation happens at the send side? In the receive-
>>> side approach, a number for the total amount of available bandwidth
>>> between client A and client B will be sent from B to A. It is then
>>> up to client A to distribute that bandwidth among its streams. A
>>> newly starting transfer across the same bottleneck should share the
>>> same bandwidth, so as in your example it should be possible to skip
>>> slow start. Or maybe I'm missing your point?
>>
>> In the exact algorithm that you have specified, what you say is
>> probably correct, and it probably wouldn't matter. But then we'd have
>> to be careful about making any changes to that algorithm... to make
>> sure that no policies whatsoever are incorporated in the receiver-
>> side logic, or else the sender would have to signal the priorities to
>> the receiver... that's just a bit messy.
>>
>> ... *in a unicast setting*. Multicast may be a valid point in favor of
>> your design... do you plan for multicasting?
>>
>> Cheers,
>> Michael
>>
>> _______________________________________________
>> Rtp-congestion mailing list
>> Rtp-congestion at alvestrand.no
>> http://www.alvestrand.no/mailman/listinfo/rtp-congestion
>>
>
>
> --
>
> Magnus Westerlund
>
> ----------------------------------------------------------------------
> Multimedia Technologies, Ericsson Research EAB/TVM
> ----------------------------------------------------------------------
> Ericsson AB                | Phone  +46 10 7148287
> Färögatan 6                | Mobile +46 73 0949079
> SE-164 80 Stockholm, Sweden| mailto: magnus.westerlund at ericsson.com
> ----------------------------------------------------------------------
>



-- 
http://www.netlab.tkk.fi/~varun/

