[RTW] Architecture of module linkages (Re: List is now open)

Stefan Håkansson LK stefan.lk.hakansson at ericsson.com
Mon Oct 25 15:36:23 CEST 2010


Good move to change subject lines!

I fully agree that no unnecessary conversions should be carried out (one reason for my earlier input was that we agreed that supported media formats should be exchanged). There is, however, one more factor for "phones": network capacity is often limited as well, which calls for efficient video compression (which these devices usually have HW acceleration support for).

--Stefan
-----Original Message-----
From: rtc-web-bounces at alvestrand.no [mailto:rtc-web-bounces at alvestrand.no] On Behalf Of Harald Alvestrand
Sent: den 25 oktober 2010 15:24
To: Stefan Håkansson LK
Cc: Silvia Pfeiffer; Justin Uberti; rtc-web at alvestrand.no; Adam Bergkvist; Ian Hickson; David Singer
Subject: [RTW] Architecture of module linkages (Re: List is now open)

Changing subject lines....

One thing to remember in the video case is that there's no "uncoded" format - in limited-hardware cases like phones, it's extremely important to only perform the transformations one absolutely has to. So the task is to "figure out which format is supported by the camera and can be displayed on the recipient's screen with the most lightweight conversion"; the figuring-out is probably out of scope for standardization, but the enumeration and choosing of possible formats for each element is probably critical.
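
Purely as an illustration of the kind of selection logic that "figuring-out" step could use (the format descriptor and the cost function below are made up for this sketch, they are not taken from any spec):

// Hypothetical format descriptor; none of these names come from the draft spec.
interface VideoFormat { codec: string; width: number; height: number; }

// Rough cost of converting between two formats; 0 means no conversion needed.
// A real implementation would favour whatever paths the phone's HW acceleration covers.
function conversionCost(from: VideoFormat, to: VideoFormat): number {
  let cost = 0;
  if (from.codec !== to.codec) cost += 10;                               // transcode
  if (from.width !== to.width || from.height !== to.height) cost += 3;  // rescale
  return cost;
}

// Pick the (camera format, display format) pair with the cheapest conversion.
function chooseFormats(cameraFormats: VideoFormat[], displayFormats: VideoFormat[]) {
  let best: { send: VideoFormat; recv: VideoFormat; cost: number } | null = null;
  for (const send of cameraFormats) {
    for (const recv of displayFormats) {
      const cost = conversionCost(send, recv);
      if (best === null || cost < best.cost) best = { send, recv, cost };
    }
  }
  return best;
}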

On 10/25/10 15:18, Stefan Håkansson LK wrote:
> Another advantage of defining Stream as something uncoded is that it makes it simpler to add audio processing.
>
> You could have something like
>
> <device>  ==>  [encoder] ->  [transport] ->  [decoder] ==>  [processing] ==>  <audio>
>
> where the processing could be based on, for example, the rendered scene in a game (e.g. panning).
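>
> As a purely illustrative sketch (the AudioFrame type, and the idea that an
> uncoded Stream hands raw stereo frames to such a processing step, are
> assumptions made for this example, not anything from the draft):
>
> // Constant-power pan; position -1 = full left, +1 = full right.
> // The position would come from the rendered game scene.
> interface AudioFrame { left: Float32Array; right: Float32Array; }
>
> function applyPan(frame: AudioFrame, position: number): AudioFrame {
>   const angle = (position + 1) * Math.PI / 4;   // map [-1, 1] to [0, pi/2]
>   const gainL = Math.cos(angle);
>   const gainR = Math.sin(angle);
>   return {
>     left: frame.left.map(s => s * gainL),
>     right: frame.right.map(s => s * gainR),
>   };
> }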
>
> --Stefan
>
> -----Original Message-----
> From: Adam Bergkvist
> Sent: den 25 oktober 2010 14:29
> To: Ian Hickson; Justin Uberti
> Cc: Stefan Håkansson LK; Silvia Pfeiffer; Harald Alvestrand; 
> rtc-web at alvestrand.no; David Singer
> Subject: RE: [RTW] List is now open
>
> On Mon, 11 Oct 2010, Ian Hickson wrote:
>
>    
>> On Sun, 10 Oct 2010, Justin Uberti wrote:
>>      
>>> My idea is that we will tie these various pieces together like a 
>>> filter chain, i.e.
>>>
>>> <device>  ->  [encoder] ->  [transport] ->  [decoder] ->  <video>
>>>        
>> This is basically what's in the HTML spec today:
>>
>> On the sending side:
>>
>>    <device>  is <device>
>>    [encoder] is a Stream object
>>    [transport] is a PeerConnection object
>>
>> On the receiving side:
>>
>>    [transport] is a PeerConnection object
>>    [decoder] is a Stream object's .url attribute
>>    <video>  is a <video>
>>
>>      
> We see Stream simply as the glue between the device element and the transport endpoint, and between the remote transport endpoint and the video element. Thus encoding and decoding are handled by the ConnectionPeer object.
>
> <device>  ==>  [encoder] ->  [transport] ->  [decoder] ==>  <video>
>
> On the sending side:
>
>    <device>                  is <device>
>    ==>                       is a Stream object
>    [encoder] ->  [transport] is a ConnectionPeer object
>
> On the receiving side:
>
>    [transport] ->  [decoder] is a ConnectionPeer object
>    ==>                       is a Stream object
>    <video>                   is a <video>
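>
> As a purely illustrative sketch of that wiring (the interface declarations are
> made up so the example is self-contained; Stream, its .url attribute,
> ConnectionPeer, <device> and <video> are the names used in this thread, while
> the .data, addStream and onstream names are assumptions):
>
> // Assumed minimal shapes of the objects involved.
> interface Stream { readonly url: string; }                // .url as discussed above
> interface StreamEvent { stream: Stream; }
> interface ConnectionPeer {
>   addStream(stream: Stream): void;                        // assumed method name
>   onstream: ((event: StreamEvent) => void) | null;        // assumed event name
> }
>
> declare const deviceElement: { data: Stream };            // <device>; .data is assumed
> declare const videoElement: { src: string };              // <video>
> declare const peer: ConnectionPeer;
>
> // Sending side: <device> ==> Stream -> ConnectionPeer (encoding + transport).
> peer.addStream(deviceElement.data);
>
> // Receiving side: ConnectionPeer (transport + decoding) -> Stream ==> <video>.
> peer.onstream = (event) => { videoElement.src = event.stream.url; };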
>
>
> In the local self-view case there's no encoding or decoding.
>
> <device>  ==>  <video>
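>
> In the same illustrative style, the self-view case needs no ConnectionPeer at
> all (again assuming a .data attribute on <device> and the .url attribute on
> Stream):
>
> selfViewVideo.src = deviceElement.data.url;   // <device> ==> <video>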
>
>
> In the recording case, the encoding is handled by the StreamRecorder.
>
> <device>  ==>  [recorder] ->  [encoder] ->  [file]
>
>    <device>                 is <device>
>    ==>                      is a Stream object
>    [recorder] ->  [encoder] is a StreamRecorder
>    [file]                   is a File
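>
> And a corresponding sketch for recording (the record() and stop() names are
> assumptions made so the example is self-contained, not confirmed API):
>
> interface RecordedFile { readonly name: string; }          // stands in for File
> interface StreamRecorder { stop(): RecordedFile; }         // encoding happens here
> interface RecordableStream { record(): StreamRecorder; }
>
> declare const cameraStream: RecordableStream;              // the <device>'s Stream
>
> const recorder = cameraStream.record();   // [recorder]
> const file = recorder.stop();             // -> [encoder] -> [file]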
>
>    
>> The details are abstracted out a lot at this point; the idea is that 
>> future versions of the API can add specific knobs and dials to allow 
>> authors to control what they want to change; we can decide what to 
>> expose based on what people most want.
>> This dramatically limits the complexity of what we have to implement, 
>> and makes it more likely we'll get interop sooner.
>> However, I need to do some work on this spec to merge in the ideas 
>> from some of Justin's work, so this may change a bit.
>> What will certainly change is the way the low-bandwidth channel 
>> maintained by the script is interfaced with on the PeerConnection 
>> object; I also need to move some of the Session stuff into the spec, 
>> e.g. sending of DTMF tones.
>>
>>      
> Rather than specific functionality for sending DTMF tones, it would be more useful to have a generic way to include pre-recorded media in a Stream.
>
> BR
> Adam
>
>    


_______________________________________________
RTC-Web mailing list
RTC-Web at alvestrand.no
http://www.alvestrand.no/mailman/listinfo/rtc-web

