Draft: draft-welch-mdi-02.txt
Reviewer: Harald Tveit Alvestrand [harald@alvestrand.no]
Review Date: Tuesday 8/16/2005 4:34 AM
Telechat Date: Thursday 8/18/2005

Summary: Mostly harmless. Not the business of the IESG to block.

Recommended statement: The IESG thinks that this work is related to IETF work done in WG , but this does not prevent publishing.

The metrics are in the same headspace as the RTCP XR metrics (RFC 3611), in particular the Statistics Summary Report Block and the VoIP Metrics Report Block, but different enough that they do not report the same thing. And according to the IESG writeup, Bert Wijnen has more in-depth information on that than what I'd be able to gather.

The document has an "xxxx" in place of an object identifier, but no IANA considerations - it's unclear how this identifier is intended to be assigned.

----------------------------------------------------------------------

The comments below this line are feedback to the authors and the RFC Editor, and should not, in my opinion, influence the IESG's processing of this document under the rules of RFC 3932.

Technical summary:

This document attempts to define two "simple" quality metrics, Delay Factor (DF) and Media Loss Rate (MLR), and to define the combination of these two numbers as the "Media Delivery Index" (MDI). However, the "index" is two numbers, not one, and the implementation has a number of problems - see below.

The document (laudably) attempts to define quality in terms of application-layer functionality: delay in terms of time to consumption of data, not time to deliver data at the end interface; packet loss in terms of application packet loss, not IP packet loss. However, it then envisions, inconsistently, that this can be measured at intermediate devices in the network. These devices will have no idea of the "downstream" delay, so they will have to rely on some assumption; they will have very little idea of the application packetization rules, so they will have a very hard time computing the loss rate.
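For concreteness, here is a minimal sketch of the two computations as I read the draft. The function names, the virtual-buffer model, and the fixed nominal media rate are my assumptions, not the draft's normative definitions; the point is that both computations depend on knowing the stream's nominal media rate and packetization - application-format knowledge an intermediate device generally lacks:

```python
# Hypothetical sketch of DF and MLR as I understand them; the virtual-buffer
# model and the assumption of a known, constant media rate are mine.

def delay_factor_ms(arrivals, media_rate_bytes_per_s):
    """DF: max-min excursion of a virtual buffer that fills on each packet
    arrival and drains at the nominal media rate, expressed as milliseconds
    of buffering needed to absorb the observed jitter.

    arrivals: list of (timestamp_seconds, payload_bytes), time-ordered.
    """
    buf = 0.0
    lo = hi = 0.0
    prev_t = arrivals[0][0]
    for t, nbytes in arrivals:
        buf -= (t - prev_t) * media_rate_bytes_per_s  # drain since last packet
        buf += nbytes                                 # fill on arrival
        lo = min(lo, buf)
        hi = max(hi, buf)
        prev_t = t
    return 1000.0 * (hi - lo) / media_rate_bytes_per_s

def media_loss_rate(expected_pkts, received_pkts, interval_s):
    """MLR: media packets lost over the measurement interval, per second.
    Knowing expected_pkts already presumes format/packetization knowledge."""
    return (expected_pkts - received_pkts) / interval_s
```

Note that the drain term is where the format dependence hides: without the nominal media rate, the virtual buffer - and hence DF - is undefined.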
This may be possible given dedicated probes or application-aware networking - but a generic router will have absolutely zero chance of computing these metrics consistently with the definitions here.

It is also clear that the metrics will be application-format dependent: code to measure the MDI of an MPEG-2 video stream will be totally inapplicable to the task of measuring the MDI of an Ogg-encoded audio stream, which further complicates the task of measuring at intermediate hops. And, of course, if the stream is encrypted, intermediate devices are totally unable to compute anything at all.

The MIB defined looks incomplete. In particular, the idea of "intervals" does not seem well thought out: you can set the threshold for the intervals, but only after the stream has been created (automatically?), and you cannot change the interval for a stream after its creation (max-access read-only). It is also undefined what happens to a media stream once it ends - does the MIB table entry stay, or go away? While section 2 concentrates on finding maximums and minimums, the MIB does not provide a place to capture them; instead, the text says "do polling". Nor does the text seem to say whether the DF and MLR values in the MIB are updated only at the end of the interval. Since the capture process is totally format dependent, it also seems strange that the format is not identified in the MIB.

If I were to advise the authors, I would advise them to position this as an end-system MIB, intended for monitoring performance as seen by an end system, and to add that, if full decoding of the application layer is included, dedicated network probes may be able to figure out what the metrics would have been had the stream been terminated at other points in the network. And, of course, to work out the details.

Hope this is helpful to the RFC Editor and the authors. The IESG should, in my opinion, have no particular reason to block this document.