Frameworks for analysis: quality control and resource availability
John C Klensin
john-ietf at jck.com
Wed Jan 29 10:46:58 CET 2003
--On Wednesday, 29 January, 2003 11:48 +0100 Brian E Carpenter
<brian at hursley.ibm.com> wrote:
> My instinct is that you are 100% correct, and that putting more
> of the burden of critical review and quality control on the WG
> itself would relieve the apparent IESG bottleneck.
>
> If we can get some facts to bear this out, then we are still
> left with a problem statement: how to put WGs in a corner from
> which the only exit is quality control?
>
> I would add 2 more things we need to measure, if possible.
> * Delay between WG Last Call and forwarding of draft to AD
> * Delay between forwarding of draft to AD and IETF Last Call
>
> These give an indication of how much pain the initial
> review by the AD causes, compared to the final WG review.
Except that I have observed that, in many cases, the ADs appear
to do only a perfunctory review, decide that it would be good,
if possible, to get Last Call backing for their concerns, and
send the document out for IETF Last Call. In those cases, only
after the Last Call completes does the serious nit-picking and
striving for absolute perfection --whether by the ADs or by the
RFC Editor-- begin. Both parts of this need to be measured, but
I think we are going to have trouble interpreting the results
unless we can differentiate the cases. My guess is that
differentiation is not going to be possible on a statistical
basis.
And, in the last analysis, it may not be important. If we
regularly see long delays between the time a WG emits a
document and the time a version of it is published (or the
document is killed), then there is a problem. It is probably a
member of a family of problems, with no likelihood that one
"solution" will fix the whole family.
The following is not a proposed solution, but an exploration of
the problem space in which solutions might exist. I'm concerned
that more and more statistical reporting might not give us much
information we don't already have (even if quantifying it would
be nice) and that, by adding workload, it could actually make
things worse.
At some point, a more qualitative, but no less systematic,
approach may be in order. At the risk of adding to the workload
of the already overloaded, I suspect that the only solutions lie
in the community's setting some benchmark deadlines for post-WG
processing such that, by the time a benchmark arrives, either
* The document must be bounced back to the WG in a public
  way, with a public explanation. I would think that "this
  is incomprehensible, some examples are...", or "there are
  too many technical holes and loose ends in this, some
  examples are...", or the equivalent, would be sufficient
  explanations. WG and/or IESG expectations that the IESG
  must identify every single problem and propose a fix are,
  IMO, a lot of what leads to long delays. Such a bounceback
  of course resets the timers.

* The AD, the IESG, or the RFC Editor (as appropriate) must
  make a public statement as to why the document is stuck
  and what will be needed to un-stick it. Knowing who is
  holding the token, or having a classification from a small
  number of categories, is not sufficient in this regard.
  Even in the short time since Atlanta, it has become
  obvious that those classifications are often not
  up-to-date or strictly accurate.
Just something to think about...
john