MINOR ISSUE: Putting auditing before criteria

John C Klensin john-ietf at jck.com
Fri Jun 13 15:28:34 CEST 2003


--On Thursday, 12 June, 2003 22:47 +0200 Harald Tveit Alvestrand 
<harald at alvestrand.no> wrote:

> Section 2.2 of -issue- reads:
>
>    Some of the key areas where the IETF's practices appear to
>    need tightening up include:
>
>    o  Lack of explicit quality auditing throughout the
>       standards development process.
>
>    o  Lack of written guidelines or templates for the content
>       of documents (as opposed to the overall layout) and
>       matching lists of review criteria.
>
>    o  Poorly defined success criteria for WGs and individual
>       documents.
>
> ISSUE: Quality auditing can only be done by auditing to
> criteria or guidelines. You can't make good guidelines without
> knowing your success criteria. SUGGESTED RESOLUTION: Swap the
> first and third bullets in the list, and change "quality
> auditing" to "auditing against criteria for success". Might
> want to reword the sequence so that "quality" still appears.

Harald,

I think this suggestion is correct, but want to caution about 
another aspect of it.  In the "quality improvement" and "quality 
assurance" communities, a distinction is often made between 
those activities and "quality auditing".  The cynics suggest 
that quality auditing is about being able to identify the point 
at which something went wrong, sometimes to more efficiently 
cast blame, but not about either better quality or guarantees of 
quality.

Unless we have a rather specific plan for what to do with the 
audits and how to incorporate their results into an improvement 
process, such audits are likely to deteriorate rapidly into 
meaningless, time-wasting bureaucracy and improvements in 
blame-casting processes.   We don't, IMO, need either of those.

So, if your change is made, I suggest that a fourth bullet is 
needed, which might read something like:

	o Lack of adequate processes for feeding the results of
	quality audits forward into process improvements and
	success criteria.

In plain English, "quality audits", at their very best, have to 
do with understanding the nature of the mistakes we have made 
and when those mistakes occur in the process; if we are going to 
do them, we need to improve the mechanisms for _learning_ from 
those mistakes.

My personal preference, I think, would be to get rid of the 
notion of "explicit quality auditing" entirely and to focus this 
subsection on the need for processes that iteratively improve 
our criteria for success and for measuring progress, and on 
mechanisms for measuring performance (of WGs, WG Chairs, 
Editors, ADs, etc.) against those criteria.

As an example (possibly pointing toward solutions, but intended 
only to illustrate what I'm talking about), the IESG often has a 
clear sense of WGs that have gone very well and others that have 
gone badly.  Sometimes the Chairs of those WGs share that 
perspective; sometimes they rate things differently.  But it is, 
I think, rare for an AD to sit down with the leadership of a WG 
that has reached its end (or earlier) for an in-depth 
conversation and analysis of what went well, what went poorly, 
and what can be learned about how to better facilitate WG work 
in the future.  I think that is a problem.  But it isn't about 
"quality audits" as that term is used by professionals in those 
fields.

       regards,
          john
