Staying on Track (Re: Documenting pilots (RE: pausable explanation for the Document Series))

Dave Crocker dhc at
Sat Jun 7 09:26:03 CEST 2003


JCK> blew it on both), we have had a habit of proposing, documenting,
JCK> and sometimes even trying "experiments" which lack evaluation 
JCK> criteria...

First, we are currently experiencing organizational lock-jaw, and the
jaw is mostly locked shut. We are focused, instead, on discussing the
problem rather than doing anything about it. That kind of situation
calls for a more aggressive approach to taking action. Aggressive,
not brash.

Second, meaningful, objective measures for the effectiveness of such
actions are tough, as you know better than most IETF participants.
Absent really good objective measures, I am a fan of simplistic
measures.  When a solution proposal has a reasonably coherent and
constructive logic, and folks get engaged in debating and refining it,
and others are willing to try it out, then I think we have a pretty good
subjective measure of likely utility.  When we've done the experiment
for awhile, I'm guessing that the relative utility of it will be
obvious, one way or the other.

(I'll note that your question highlights a long-standing difference in
our approach to things.  I entirely agree with the desire to do
evaluations, but believe that taking reasoned action is more important
than waiting for complete analysis, evaluation or the like.  My own bias
is to view this in the category of best being the enemy of the good.
I'd guess your own bias is that brashness is the enemy of the long-term.)

JCK> It establishes goals as:
JCK> How are those merits to be evaluated so as to
JCK> determine whether this is a success?  The document then goes on 
JCK> to say:
>>      However,
>>      it is likely that drafts with at least three positive
>>      reviews from SIRs in different areas will experience
>>      much shorter IESG review cycles than drafts with fewer
JCK> Plausible hypotheses indeed.

And your concern (criticism) about this section of the document is
entirely reasonable -- even valid.  Frankly, I made a point of not
challenging this part of Brian's writing, and focused only on tuning it.
There were a number of reasons for my choice.

First, this particular stated outcome is good to state -- as a
communication act to the IESG (i.e., lobbying).  To the extent that
working groups feel a lack of standing in a 'negotiation' with an AD,
this should help their case, but only if their case has legitimate merit.

Second, this realm is sufficiently complex to warrant the cook's
attitude of "if all of the ingredients are good stuff, the odds are high
that the result will be too."

That is, this seems to me such a patently good mechanism to put in, I
frankly do not see any potential downside. (Yes, there are opportunities
for abuse, and all institutions ossify and get abused, and ...) This is
fundamentally a "staff" position and its use is controlled by the
working group.

And, by the way, our community has shown a complete lack of skills at
anything involving social analysis, never mind social evaluation.  So,
waiting for development and agreement on the measures you seek is an
infinite task.

We don't/can't even do valid, serious measurement of whether our
specifications are actually used.  And *that* is quite a bit easier than
the measurements you are calling for.

JCK> I don't think it likely, but a counter-hypothesis is that the
JCK> IESG, upon receiving a document with three or five SIRS signoffs 
JCK> but disagreeing anyway, would decide that it needed to give more 
JCK> extensive explanations to those senior people, or discuss issues 
JCK> with them even before going back to discuss them with the WG. 
JCK> That would probably positively leverage quality, but would have 
JCK> a negative effect on speed.

a) I strongly doubt it, in the long run; in the short run, I've no doubt
there is a "training" and adjusting process that will need to take place.

b) we are slow enough, now, that getting slower would not be noticed.
No, I do not mean it's OK to get slower, but rather that this is an
unlikely outcome and that -- given that it is unlikely -- the downside
of its occurring is tolerable, in an odd sort of way.

JCK> And, of course, trying to guess at whether SIRS improved quality 
JCK> in comparison to what the IESG would have done, to/with a given 
JCK> document without it, is even more difficult.

We have no concept of quality measurement now. So, effectively, you are
stacking a series of essentially impossible dependencies in the way of
taking localized, subjectively reasonable -- and apparently reasonably
popular -- action. Again, this is where our philosophies of project
management diverge utterly.

JCK> For some of those WGs --perhaps the ones who were in good shape
JCK> anyway but that, too, is pretty subjective-- the reviews help 
JCK> the IESG expedite the process.  For others, they don't appear
JCK> to.  Do we then declare the experiment a success and try to make 
JCK> it mandatory?   Does anyone who doesn't think it has proven 
JCK> itself a success get to speak up without being overwhelmed in 
JCK> rhetoric and forceful assertions?  Do we infer that ADs who 
JCK> don't quickly sign off on SIRS-endorsed documents are 
JCK> obstructionist nit-pickers and need to be fired?   And so on.

The 'and so on' is key. My experimentalist training entirely agrees with
you. My project management experience knows that your points really
are a trap that ensures taking no action. The 'and so on' highlights the
fact that there are an infinite set of unknowns. We cannot wait until we
resolve all of them. For that matter, the ones you list are enough to
make clear that we would take essentially forever before being able to
start the 'experiment'. And, no, this is not a controlled experiment.

JCK> consensus.  But I'd be much happier about the idea --even as an
JCK> experiment-- had the IESG been willing to stand up and say 
JCK> "fascinating idea, let's try it".

Well, I made a point of not citing this in the note to Melinda, simply
to keep the focus on our difference in philosophy, but... Brian cleared
this with Harald first. I classed that as a courtesy, rather than a
requirement, which is why it does not satisfy your point.  But it IS
relevant to it.

JCK>   And it would have been even 
JCK> better, at least from an "experimental" context, had the IESG 
JCK> chosen to say "It would be good to really do this as an 
JCK> experiment and therefore to be able to make some comparisons.

Having the IESG take initiative on such things would be fine, but since
that has not happened, we need to take constructive action anyway.

JCK>   That seems
JCK> to me to be an opportunity for some future demagogue, with some 
JCK> other experiment, to try something, claim it succeeded, and then 
JCK> insist --loudly and by whispering campaign-- it was a success 
JCK> and that IESG failure to immediately adopt it indicates that all

1.  In my note to Melinda, I made a point of documenting just how
extensive the community involvement in the SIRS effort had already been.
This was intended to mitigate concerns about individual rashness.

2.  The plan in fact got very strong positive encouragement from folks.
That is another reason that concern over personal demagoguery should be mitigated.

3. The serious threat from demagogues won't be reduced by any of this;
they work on their own dynamics and use whatever is convenient to their
purposes. If we give the worry about demagogues a veto over all change,
we will remain frozen forever.

 Dave Crocker <mailto:dcrocker at>
 Brandenburg InternetWorking <>
 Sunnyvale, CA  USA <tel:+1.408.246.8253>, <fax:+1.866.358.5301>

More information about the Problem-statement mailing list