IDNAbis discussion style, mappings, and (incidentally) Eszett

Martin Duerst duerst at it.aoyama.ac.jp
Thu Dec 6 12:17:04 CET 2007


Hello John,

Many thanks for your long mail.

Because I'm currently very busy, my answers have to be
short for the moment. Please see inline.

At 08:14 07/11/30, John C Klensin wrote:
>Hi.
>
>I'd like to see if we can change the focus of some of this
>discussion, and some related discussions that have occurred on
>other lists, in the hope that it will help us move forward.  We
>need to remember, somehow, that this whole process is about
>tradeoffs.  No change can be made without costs and risks and
>every change, no matter how desirable, has negative aspects.

Yes indeed.

>I apologize for the length of this note.  Perhaps parts of it
>should be a separate Internet-Draft or other document in the
>long term.  But I think it is important for understanding where
>we are and how (or if) we can proceed with this work.
>
>With IDNs, there are many tradeoffs, probably more so than in
>most other things the IETF considers.

Yes. Ultimately, it deals with people's language and writing,
something the majority of the population on this planet spends
years learning, to the point where they don't have to think
about it anymore, and where a few thousand years of history
have produced a wide range of phenomena that don't lend
themselves easily to the principled, a priori argumentation
that scientists and engineers are used to.

>When we reexamine IDNA
>in the light of experience and (we hope) the improved
>understanding gained over time, the tradeoffs include issues of
>scope and procedure as well as technical issues.  They also,
>obviously, require balancing the value of changes against the
>value of absolute compatibility (both forward and backward)
>with the earlier version. 
>
>Accepting as many characters as possible and excluding only
>those that are clearly harmful clearly has attractions
>although, especially without mapping (NFKC and otherwise) it
>also creates more opportunities for both confusion and
>deliberate bad behavior and more risk of future
>incompatibility.  
>
>Specifying more mapping in the protocol is a convenience for
>registrants who would prefer that all conceivable variations on
>their names be accepted. A registrant could even sensibly
>believe that it would be desirable to automatically map all
>possible transliterations and translations of his preferred
>name into it as part of the protocol (the technical and
>linguistic problems with that desire do not prevent people,
>especially people with a relatively parochial view of the
>importance of _their_ names, from wishing for that sort of
>feature).  On the other hand, extensive mapping raises issues
>of confusion or astonishment for users who see two things as
>different that are being treated as the same, who believe in
>reverse-mappings, or who are trying to informally compare a
>pair of URIs.  

I think it's clear that too much mapping is a bad thing.
I think we are in general agreement that IDNA2003 does
too much mapping rather than not enough. But the conclusion
the current drafts draw, namely that there should be no
mappings at all, doesn't follow from these observations.

>The observation that some mappings make perfectly good sense in
>some cultures (or for some languages) that use a relevant
>script but not for other uses of that script represents a
>significant complication despite the relatively small number of
>cases that can be easily identified today.  Telling country or
>culture "well, there aren't very many of you, so you lose" is a
>fairly uncomfortable position to take.  So a different, and
>equally extreme, position about mapping is that, if the Unicode
>Consortium considered two characters sufficiently different to
>assign them different code points, we should accept that
>conclusion and not try to override it through mappings that
>they specify but consider optional and that are dependent on
>circumstances and application.

Very specifically, the best-known case of "there aren't very many
of you" would be Turkish/Azeri users. What do they currently
have to put up with? They have to use lower case for i and
dotless i, even if they otherwise use upper case. Will that
change for them with the current drafts? NO, the current drafts
only allow lower case, so Turkish/Azeri users are doing as well
or as badly (depending on your viewpoint) as before. Would there
be a chance to fix this, let's say if everybody else were ready
to give up on their preferred case mappings so that Turkish/
Azeri users could have it the way that works for them? NO. The
current ASCII-only DNS has I=i baked in; there is no way to
change that unless we start over from scratch.

So for this specific case at least, the net result of removing
the (case) mappings is that Turkish/Azeri users are in no better
situation than they are now with IDNA2003, and in no better
situation than they can ever expect to be, while everybody
else will most probably consider the change a loss.
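
To make this concrete, here is a small illustration in Python 3
(whose built-in case mapping is locale-independent, which is
exactly what a single protocol-level mapping table would have
to be):

    # A single global lowercasing table has to pick one answer for 'I'.
    print('I'.lower())        # 'i' -- right for English, wrong for Turkish
    print('\u0130'.lower())   # I WITH DOT ABOVE -> 'i' + COMBINING DOT ABOVE
    print('\u0131'.upper())   # dotless i -> 'I'
    # Turkish orthography pairs I with dotless i and dotted I with i,
    # but the global table pairs I with i, just as the DNS always has.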


>It is always possible to treat particular characters as
>exceptions to whatever rules we make and to have special rules
>for those characters, but it is difficult to figure out where
>to stop doing that.  Do we permit special-case mapping rules
>only when someone can claim dependency on the IDNA2003 rules?

By this, do you mean only those characters/mapping cases
already in IDNA2003 (i.e. in Unicode 3.2), but not characters
added later? My guess is that there are zero additional characters
after Unicode 3.2 where special-casing needs to be applied, but
I may be wrong.

>If we do, then it is likely that arguments about lower levels
>of the DNS will prevent any changes to IDNA2003 at all.

Could you explain briefly, or provide a reference?

>If we
>restrict the special cases to a few well-understood issues in
>Latin-based scripts (or European scripts), we may do long-term
>violence to other scripts and characters.  

I'm not sure what you mean by Latin-based scripts. Greek and
Cyrillic are not Latin-based. And there is only one Latin script,
although it is used for a large number of languages.
Also, if we are slightly tolerant about the fact that Georgia and
Armenia are just south of the Caucasus, all bicameral scripts
(scripts that have upper and lower case) are European.

Of course, there are other, similar cases; East Asian
full-width/half-width is certainly one of them.

>There is also a tendency for exception lists to create Unicode
>version dependencies (or at least version sensitivity).
>Perhaps more important, any exception list increases the
>importance of getting everything right the first time (in both
>our work and that of the Unicode Consortium).  

Ken has just pointed out that the script property is version-
dependent. Not being version-dependent would definitely be
very nice, but it's not easy at all. Making it an overarching
goal may be difficult.

>Eszett is an example of the fact that "need to get it right the
>first time rules" can create a mess later.  Part of our problem
>is that some people in German-speaking countries where it is
>important in the orthography now argue that we got it wrong the
>first time while others, especially people from countries where
>the orthography standards (quite independent of IDNs) claim it
>should be mapped to "ss" more or less always.  Those who take
>one position (and some others) argue that the mapping should be
>preserved for compatibility.  Those who favor the other
>position believe it was a mistake and artifact of case-mapping
>in IDNA2003 and that, since IDNA200X removes case-mapping and
>proposals continue to be pushed forward to assign a code point
>to an upper-case form, the whole decision should be
>reconsidered and Eszett treated as a normal character.  It
>isn't at all clear to me how we resolve that conflict; I'd
>certainly like to hear suggestions.

I'm from Switzerland, the only country I know where German
is widely used but the 'sz' is always written as "ss".
But I have never claimed that 'sz' should always be mapped
to "ss" in IDNs. I don't know about others.

Anyway, the possibilities we have are (ignoring legacy issues):
1) To treat 'sz' as a separate character. This is probably
   the best solution, the majority of German users would feel
   most at home with it, and the Swiss would probably not mind
   too much.
2) To map 'sz' to "ss" (as currently in IDNA2003). This works
   best for the Swiss (a clear minority of German speakers)
   and reasonably well for others; they can at least use
   the character on billboards and business cards and will get
   to a defined place on the Internet.
3) To disallow 'sz'. This won't bother the Swiss, but clearly
   is annoying for the other German speakers. They have four
   special letters (three umlauts and 'sz'), but for whatever
   reason, IDNAbis only allows three of them.

So in essence, what IDNAbis does in this case is to move from
the second-best to the third-best alternative (or in other
words, from bad to worse).
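
For what it's worth, alternative 2) is what deployed code does
today; Python's built-in idna codec, which implements IDNA2003
(nameprep), makes a quick demonstration:

    print('straße'.encode('idna'))   # b'strasse' -- 'sz' folded to "ss"
    print('bücher'.encode('idna'))   # b'xn--bcher-kva' -- umlaut survives
    # Under alternative 1), straße and strasse would be distinct labels;
    # under alternative 3), the first line would raise an error instead.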


>Eszett is clearly not the only example.  IDNA2003 contained its
>own rules for parsing FQDNs into labels, essentially requiring
>the mapping of a number of dots, and dot-like characters, into
>periods before the parsing occurred.  In retrospect (and, for
>me, only in retrospect because I thought it was a good idea at
>the time), it was probably the worst decision we made.  Since
>the list of characters that are mapped to period contains some
>dot-like characters and not others, and cannot include those
>that are introduced with later versions of Unicode, it creates
>a version dependency.  Users have trouble understanding why
>"their" dot is or is not mapped to period versus being treated
>as a plain character or banned.  It causes violations of the
>rule that systems that are not IDNA-aware must be able to
>process FQDNs that contain IDN labels in ACE ("punycode") form
>without any special knowledge.  It creates a strange sort of
>IDN in which all of the labels are ASCII LDH in native form,
>not ACE labels, but those labels are separated by these strange
>dots (I believe the status of those names is a protocol
>ambiguity).  As Martin mentioned, these strange dots were
>considered sufficiently problematic that the IRI spec doesn't
>provide for them.

I agree with you fully on this point, but I think this is so
different from the other kinds of mappings that we shouldn't
mix them together: the list of 'period-like' characters was never
worked out well, and the interactions with other parts of the
Internet and Web infrastructure weren't considered well enough.

And please don't mention what the IRI spec does as supporting
evidence; this issue flew totally under the radar, without
any explicit discussion.
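
To be concrete about the rule under discussion: RFC 3490,
section 3.1, designates exactly three non-ASCII dots as label
separators. A minimal sketch of that parsing step in Python:

    # IDNA2003 label separation: three dot-like characters are
    # treated as equivalent to U+002E before parsing.
    IDNA2003_DOTS = {'\u3002',   # IDEOGRAPHIC FULL STOP
                     '\uff0e',   # FULLWIDTH FULL STOP
                     '\uff61'}   # HALFWIDTH IDEOGRAPHIC FULL STOP

    def split_labels(name):
        # Other dot-like characters, including any added after
        # Unicode 3.2, are NOT on the list -- which is exactly the
        # version dependency John describes above.
        for dot in IDNA2003_DOTS:
            name = name.replace(dot, '.')
        return name.split('.')

    print(split_labels('example\u3002com'))   # ['example', 'com']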


>So the draft IDNA200X documents take the dot-mapping provision
>out, turning the parsing of all domain names, including those
>that contain A-labels, back over to the rules of RFC 1034 and
>1035 and the acceptance of special dots into a UI issue.

Can you point me to the section(s) of the documents that discuss
how this is handled as a "UI issue"? I think that in the discussion
that led up to this mail of yours, the point that really got
me confused and tense was that it was not at all clear what
"a UI issue" meant. Possible interpretations ranged from
'no problem at all, the UI will take care of this' to
'we know this is really tough, but we need somebody else
to blame'.

>To me,
>the arguments for that choice are overwhelming.  But it is a
>tradeoff against user-predictable behavior with scripts that
>use non-ASCII dots and compatibility with existing non-protocol
>text that represents IDNs using those dots: if applications
>that map between such text and the IDNA protocol don't do the
>right UI things with dots other than U+002E, bad things will
>happen.

The toughest case is very clearly IDNs and identifiers
containing them (email addresses, IRIs, ...) typed in
freeform text (e.g. email text like this). Except possibly
for some sophisticated Input Method Editors (IMEs) used for East
Asian languages, it looks impossible to get this right.
(As an example, my IME, in its current setting, produces
an ASCII hyphen after a digit, but for the same key, produces
a Katakana lengthening mark when typed alone or after some
letters.)
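
For reference, the two characters in my IME example are entirely
distinct code points, however similar they may look in some fonts:

    import unicodedata
    for ch in ('\u002d', '\u30fc'):
        print('U+%04X %s' % (ord(ch), unicodedata.name(ch)))
    # U+002D HYPHEN-MINUS
    # U+30FC KATAKANA-HIRAGANA PROLONGED SOUND MARK
    # An IDN typed with one where the other was intended won't resolve.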


>And, if we work the tradeoffs so that types of
>compatibility issues overwhelm the reasons why special dot
>mapping was a bad idea, then we are stuck with the special dots
>forever.  
>
>Obviously, that example isn't precisely equivalent to the
>Eszett one, since the dots are about label separation and
>Eszett is a character and mapping issue.  However, to the
>extent to which an important argument for preserving the Eszett
>-> "ss" mapping as part of the protocol involves chunks of
>non-protocol text in which the character might appear, the
>relationship should be pretty obvious.   Again, this is all
>about tradeoffs, not about one position being right or wrong in
>an absolute sense.

Yes. The examples clearly have some parallels, but very
clearly also differ widely.

>If, instead of depending on lists of characters that get
>special treatment, we rely primarily on rules based on
>properties and attributes linked to whatever Unicode version
>one might be using, we may, if we are careful about how things
>are designed, be somewhat more amenable to adjustments as
>point-errors are found and corrected and hence less dependent
>on Unicode versions.

I'm not exactly sure I understand what you are trying to say
here. Casing (except for special-casing) is clearly a Unicode
property.

>But all of those are tradeoffs: it is perfectly rational to
>argue that all of the IDNA2003 mappings should be preserved
>even if it prevents us from moving to new versions of Unicode.
>It is also rational to argue that we should preserve the
>IDNA2003 rules (and Stringprep and Nameprep) for all characters
>that appear in Unicode 3.2 and apply new rules only to new
>characters, accepting the considerable added complexity
>(including the need to keep a list of valid Unicode 3.2
>codepoints in every application, since such a list is unlikely
>to come out of character-handling libraries) as the price of
>complete forward compatibility.  I happen to have a fairly
>strong opinion about those two options, but I am all too aware
>that there are other strong opinions and other ways to make the
>tradeoffs.

As Mark has explained in another case, there is no need to
keep a list of all Unicode 3.2 characters; you just keep
a (very small) list of those characters whose derived
properties/mappings would change but which you want to
keep stable.
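
A minimal sketch of that approach in Python (the table entry and
the "general rule" below are hypothetical and grossly simplified,
purely to illustrate the shape of the solution):

    import unicodedata

    # Hypothetical stability table: only those code points whose
    # derived property under the general rule would differ from
    # the value we want to keep stable.
    STABILITY_EXCEPTIONS = {
        0x00DF: 'PVALID',   # ESZETT -- illustrative entry, not real
    }

    def derived_property(cp):
        # Exceptions first, then a property-based rule computed
        # from whatever Unicode version the library ships with.
        if cp in STABILITY_EXCEPTIONS:
            return STABILITY_EXCEPTIONS[cp]
        cat = unicodedata.category(chr(cp))
        return 'PVALID' if cat in ('Ll', 'Lo', 'Mn', 'Nd') else 'DISALLOWED'

    print(derived_property(0x00DF))    # PVALID, from the exception table
    print(derived_property(ord('A')))  # DISALLOWED ('Lu')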


>A similar analysis applies to case mapping.  The answer to the
>question of whether, if we had the DNS to do over from scratch
>today, the case mapping for ASCII would be preserved is that
>the question would at least cause an extended and probably
>heated argument.  I suspect that anyone who has ever used a
>U**x-derived system (or, more properly, a Multics-derived one)
>understands most of the argument: case-sensitive identifiers
>are sometimes really handy and sometimes a significant pain in
>sensitive parts of the anatomy, especially in communicating
>with systems that are case-insensitive.  And most of us have
>understood, long ago, that, when all of the arguments are added
>up, the conclusion as to whether systems should be
>case-sensitive or case-insensitive in the ASCII range is
>essentially a matter of religion. 

Yes indeed. But the DNS is clearly in the "case-insensitive
religion" camp.
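
For reference, the matching rule the DNS bakes in (RFC 1034/1035)
is octet-by-octet comparison, with case folding over ASCII A-Z
only; a sketch:

    # DNS-style comparison: case-insensitive for A-Z only;
    # everything else must match exactly, octet by octet.
    def dns_equal(a: bytes, b: bytes) -> bool:
        def fold(o):
            return o + 32 if 0x41 <= o <= 0x5A else o
        return len(a) == len(b) and all(
            fold(x) == fold(y) for x, y in zip(a, b))

    print(dns_equal(b'Example.COM', b'example.com'))      # True
    print(dns_equal(b'xn--BCHER-kva', b'xn--bcher-kva'))  # True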


>For the DNS (and probably for internationalization generally),
>there is another piece of the argument, which is that the case
>mappings for the Latin (and I do mean _Latin_, not extended
>Latin, Latin-derived, or decorated Latin here) subset of
>Unicode are absolutely, 100%, unambiguous.

Do you mean basic Latin (i.e. ASCII only)? If not, what
do you mean by Latin-derived? As we are using and discussing
Unicode, please use the terms as they are defined in Unicode.

>It is approximately
>as good for the Latin-derived superset of undecorated
>alphabetic characters that appear in ISO 646BV and its clones.
>So, regardless of one's religion about case dependencies, for
>those characters, the case mapping is at least unambiguous, fully
>reversible, does not require language or locale information for
>_any_ characters, and, importantly, the characters are stored
>in, and retrieved from, the DNS in their original case --
>case-insensitivity is supported only in the matching rules, not
>in what gets stored.
>
>In any event, if only because the case-distinguished strings
>are stored in the DNS and retrieved by queries to it, it is far
>too late to reopen the question of whether the original
>decision was wise... at least within Class=IN.
>
>Now the IDNA WG, responding to different complexities and
>tradeoffs, including the desire to _not_ require DNS changes,
>concluded that it was not possible to use server-side matching
>rules to accommodate case.  Instead, the conclusion was that
>there should be case-insensitivity (to parallel the behavior
>with ASCII) and that it should be provided by pre-query and
>pre-registration mapping.  That was a plausible decision (and
>one that I supported).

Same here. And I don't remember any heated discussions about
this in the WG at all.

>But it causes some user confusion when
>queries return "original case" for ASCII and "all lower case"
>for non-ASCII labels, even when they mostly consist of ASCII
>characters.

Quite a bit of that could be fixed by using upper case in
punycode and recovering that information. But such a fix might
work almost perfectly, to the point that on the occasional
exception, the confusion would be even greater.

>While they are few, there are also ambiguities in
>which one character maps into another as a case-shift and
>whether reversal works differently by language or locale.  That
>creates a mess -- how large a mess depends on the perspective
>of the beholder -- and led us to conclude that we should extend
>the general "no more mappings in the protocol" principle to
>case mappings, thereby making things less complex.  Do we think
>that answer is without negative implications and consequences,
>including causing problems with case-dependent label strings in
>contexts where the DNS is not being used directly?  Of course
>not.  Are we sure that eliminating case-mapping in the protocol
>is the right answer after all of the tradeoffs are considered?
>Again, certainly not.  We do think it is the best way to
>resolve the tradeoffs, but we are still listening for
>persuasive counterarguments or alternate proposals that don't
>introduce even more problems.
>
>Even the decision to try to move this work forward via an
>open discussion but without a WG was based on careful
>consideration of a tradeoff.  We know from experience with the
>original IDN WG that WG discussions of this type of topic tend
>to be extremely noisy, with a great deal of time spent going
>over and over the pet ideas of various people with little
>knowledge but strong opinions --especially about language and
>culture issues that don't fit well into the DNS as we know it.
>There is potential for even more noise when the views of people
>who do not consider "the way the DNS works" and interoperation
>with it to be a relevant constraint (or who don't even consider
>understanding those issues to be relevant).  

I definitely understand the idea of doing the work
without a WG. However, I see potentially great problems with
one aspect of this. The IETF often does revisions of specs
without a WG after the original spec has been done by a WG.
An example would be the URI spec, and I'm sure there are many
more. However, as far as I understand, that was always done
when the update work was mostly fine-tuning and cleanup,
rather than a sweeping redesign. In the case at hand,
the situation is clearly different. This, in my view, is
an extremely high risk: if you succeed (you made the right
tradeoffs and everybody is moderately happy and doesn't see
any reason to complain), everything is of course fine.
But if you fail, even to a moderate extent (even just a few
noisy or influential people decide that they are not happy
and want to make a fuss), then you are in great trouble,
because what happened is that a few insiders overturned
the consensus decision of a WG.

[I can promise you that I won't make a fuss, but that I will
gladly tell you that I told you so if it happens.]

Please don't misunderstand me, I'm not saying this to try to
force things to go my way (on many issues, I think IDNAbis
is much closer to what I was advocating originally, although
on some issues, I think it clearly goes too far).

I'm saying this because I have seen SDOs heavily beaten, for
years, for redefining things. The prime example is the Korean
mess, the redefinition of Korean Hangul between Unicode 1.1
and Unicode 2.0. In many ways, the situation was very similar.
But procedurally, everything was clean (I have no idea what
went on behind the scenes). It was the same committees that
agreed to Unicode 1.1 that made the change in Unicode 2.0.
In terms of implementations, some people claimed that there
hadn't been any implementations of the old Hangul block,
but that was wrong: I had one, although it was not
widely available. But very clearly, implementations were
far less widely deployed than IDNA2003 is currently.
(Looking e.g. at http://www.upsdell.com/BrowserNews/stat.htm,
I'd say around 50%, with a very wide ballpark range.)
Even then, the Unicode Consortium got heavily blamed
and still meets with a lot of distrust, in particular
from some people in the IETF.


>We hoped that, by handling things with a small design team and
>an open list, we could make more progress toward a better and
>more balanced result than we could in an inherently-noisy WG.
>But there are tradeoffs, including our having to listen to
>people (none of them in this particular discussion, I hope) who
>believe that any issue on which they don't get their way (even
>before they express their opinions coherently) indicates a
>conspiracy and who then use the absence of a WG to "prove" that
>conspiracy exists.

Please note that I didn't use the word conspiracy above,
and don't plan to use it, because I think it's absolutely,
totally inadequate. But even with the best of intentions
(and I know all the people involved well enough to know
that that, and nothing else, is what they are motivated by),
claims such as "overturning WG consensus" and "radically
changing a protocol" may easily remain.


>The trends toward noise that led to the "no WG" decision are
>still out there.

Oh well, of course. Don't expect them to go away any time soon.
And I think that it's most probably a very wise decision.
But as I have tried to explain above, I think it puts some
serious constraints on the effort, in that rather than
"these are all tradeoffs, let's see what might work best",
there should be heavy pressure to keep things where they
are, and not change them unless the benefits significantly
outweigh the problems.

>It may amuse some readers of this list to hear
>that some of us needed to spend significant time at IGF in Rio
>discussing and defending the decision to not abandon IDNA
>entirely.  The main suggested alternative was to move all DNS
>internationalization work into an extended version of my old
>"new class" proposal (for those with an interest in protocol
>archeology, the last public version was
>draft-klensin-i18n-newclass-02.txt, posted in June 2003).
>Those who were pushing that idea in Rio seemed to favor using
>not only a completely new DNS tree, new RR types, and new
>matching rules but also wanted support for matching and
>discrimination using information that would almost certainly
>require new data formats for resource records.  If there were
>no other reasons to avoid spending time on such a proposal
>today (and there are many other reasons), the issues of
>incompatibility, not just with IDNA but with the entire naming,
>delegation, and administrative structure of the Class=IN DNS
>boggle the mind.  

While the details of the proposals are new to me, my (very
limited) experience with such events, where people who have
never gotten their hands dirty try to talk about technical
issues, is that there will often be a "great-looking but
impossible fad of the day". Also, please understand that even
the most sensible and knowledgeable people are biased toward
mentioning the problems first and most, and only occasionally,
if ever, will tell you that something might actually be okay
the way it is.


>However, even that is another tradeoff and, as we understood
>when the "new class" model was first suggested for discussion,
>overturning the DNS structure associated with Class=IN and
>starting over has a certain appeal, no matter how impractical.
>Of course, "junk the DNS entirely and start over" has some
>considerable appeal as well. There are sensible people who
>would argue that the DNS architecture is sufficiently
>mismatched to today's needs and expectations that the only
>reason to not discard it and start over, if there are any
>reasons at all, it is the transition difficulty.
>
>Another tradeoff that I hope we all understand is that we
>maximize Internet interoperability by minimizing variations and
>different ways of doing things.  Doing IDNs at all creates some
>risks that don't exist without them.  However we implement
>IDNs, they represent an attempt to balance improved mnemonic
>value of names and improved accessibility to present and future
>Internet users against risks to global interoperability.  Were
>we to conclude that one of those poles was so important that we
>should ignore the other, we might well end up with radically
>different solutions.  
>
>Doing IDNs with complicated procedures, including mappings and
>exception lists, or trying to make IDNs language- and
>culture-sensitive, rather than just being registrant-chosen
>character strings, makes interoperability even harder and
>riskier.   The IDNA200X drafts reflect many decisions that were
>made on the basis of less complexity or more simplicity, but it
>is possible that we went too far.

It would be my current assessment that you indeed went too far.

>And so on.  
>
>We have, I hope, clearly decided that IDNs are worth it,

Yes. IDNA2003 is already there, implementations are already
around, and so on.

>but,
>even in the most minimal form, they constitute risks that we
>need to understand and accept.  And, to the extent to which
>IDNA2003 is in use, any change at all, including moving beyond
>Unicode 3.2, implies some risks and transition inconvenience
>that, again, we need to understand and accept if we are going
>to move forward with those changes.  Or, of course, we can
>consider those tradeoffs and decide that the best course of
>action is to do nothing, accepting those consequences.
>
>While this list could go on much longer and include many
>more examples of tradeoffs, I believe that the above is
>sufficient to illustrate the situation we are in and most of
>the key issues.
>
>So, the question is, how do we proceed?
>
>We could decide that the decision to proceed without a WG was a
>mistake and that we really need a WG, however noisy and slow
>that might be.  That would clearly be the right outcome for
>those who believe that the design team is engaged in a complex
>conspiracy against their language, culture, business, registry,
>or persons (I do not believe I've heard that position on this
>list, but I've certainly heard it elsewhere).  The tradeoff/
>danger is that such a WG could get very seriously bogged down.
>We need to recognize that IDN deployment today represents an
>infinitesimal fraction of what various actors have claimed will
>happen as soon as some threshold is crossed (current popular
>theories include getting internationalized email addresses
>and/or beginning to deploy IDN TLDs, but there are others) and
>that every month of delay creates more uses and applications
>and a stronger argument that we cannot modify IDNA2003 at all.

In terms of browser support, no. In terms of actual use,
yes.

But I think there are clearly other alternatives between
"not modifying IDNA at all" and "revamping all the mapping/...
stuff" (basically, all the decisions for which one of the
two design teams was responsible).

>If we are not going to make the decision that it is time to
>stop and turn the effort over to a WG, then I have some
>suggestions.  I don't know if my colleagues would agree, so
>please take the suggestions as personal ones.
>
>(1) Please make suggestions, ideally suggestions that show that
>you have understood and considered the tradeoffs, not just
>complaints.  Complaints are very hard to deal with, especially
>when we understand the tradeoffs well enough to know that no
>decision is going to make everyone completely happy (including
>us).

Okay, to be at least moderately concrete, I propose that
case mappings be put back in, essentially as they were in
IDNA2003, for the codepoints covered by IDNA2003.

To be very specific about the 'sz', either keep it as it is,
or improve the situation (see above), rather than making it
worse.

With respect to normalization, consider at least requiring
NFC on the registry end. (I think Michel Suignard explained
clearly why this is crucial in an earlier mail.)

Consider retaining the full-width->half-width mappings from
NFKC.
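
Both suggestions are easy to demonstrate with Python's
unicodedata module:

    import unicodedata

    # NFC at the registry end: 'e' + COMBINING ACUTE ACCENT composes
    # to the single code point, so both spellings land on one label.
    print(unicodedata.normalize('NFC', 'cafe\u0301') == 'caf\u00e9')  # True

    # The full-width -> half-width part of NFKC: fullwidth 'example'
    # (U+FF45 etc.) maps to plain ASCII.
    fullwidth = '\uff45\uff58\uff41\uff4d\uff50\uff4c\uff45'
    print(unicodedata.normalize('NFKC', fullwidth))   # example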


>(2) When you make those suggestions, expect to be challenged on
>their side-effects and on what else would be damaged to give
>you what you want.  If your note making the suggestion
>considers those issues, we will save a lot of time.  For
>reasons that are probably clear from the comments above, my
>colleagues and I have a design bias against complexity and a
>design bias against tables of exceptions.  Neither is a firm
>rule, but, if the nature of your suggestion is such that you
>believe that some particular issue is important enough to add
>complexity or exception cases, you should assume that we will
>push back in an attempt to find out how sure you are, whether
>the complexity is really necessary, and whether others agree.
>We may even agree with you, but pushing back is part of the job
>we think we took on.

There are various ways to look at complexity. There is the
complexity for specifications. There is the complexity
for implementers. They may not be the same. In particular,
implementers may not care much whether there are 100,
1,000, or 10,000 mappings.

Also, an argument that I have repeatedly heard in the IDN
WG, in particular from Paul, is that in order to avoid
further complications and slippery slopes, the only choice,
e.g. for casing, would be to take an existing table from
the Unicode Consortium and stick with it. I think that
argument applies much less without the WG. You are still
on a slippery slope, but there is no WG and much less
other baggage that risks pulling you down, so you may
actually be able to look carefully at the slope and
decide exactly where you want to be.


>(3) Please don't assume in your notes that the other side of
>the tradeoffs never occurred to us or that we have blown off
>some position and its consequences without considering it.
>There are almost certainly things that we have missed and we
>want to know about them (as quickly and clearly as possible),
>but we have been spending a lot of time on these issues for the
>last several years, have gotten input (of varying quality) from
>all over the world and in a variety of forums, and have been
>trying very hard to listen.

I'm very sure that you got a lot more input than I did. But
please remember, everybody complains about the things they
don't like; nobody mentions the things they find okay. They
will only start to mention those once you have changed or
removed them.

>I can't speak for my colleagues,
>but I am a lot more likely to be able to respond quickly and
>effectively to a note that suggests that we probably got the
>tradeoffs wrong about some particular issue, explains why, and
>proposes a solution that strikes a reasonable balance with the
>other tradeoffs than I am to be able to deal with a note that
>starts out on the assumption that we are insensitive idiots
>that haven't even bothered to consider the obvious One True Way
>of doing something.   I wish that distinction didn't exist and
>try to avoid overreacting to it, but I will plead some vestiges
>of humanity.

There is clearly no "One True Way". But that's exactly why
I think being conservative, and staying close to IDNA2003,
is prudent. If there were one true way, we might have found
it already for IDNA2003, but we clearly didn't. We pretty
much did find it for the encoding part; punycode was like a
revelation after all the other proposals (RACE and whatnot),
but that's an area of technology where such things are much
more likely to happen. My argument is that exactly because
there is no "One True Way", it may not pay off to move from
one local maximum to another, maybe slightly better, local
maximum that is in many ways at the other end of the landscape.
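
(For readers without the history: punycode, RFC 3492, is the
encoding that turns a Unicode label into ASCII and back,
losslessly; Python exposes it directly:)

    print('bücher'.encode('punycode'))      # b'bcher-kva'
    print(b'bcher-kva'.decode('punycode'))  # bücher
    # The ACE form used in the DNS adds the prefix: xn--bcher-kva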


>(4) Finally, please try to assume that we are acting in good
>faith (even if, at some level, you don't believe it).  We are
>much more likely to be able to respond in a useful way and to
>participate in a dialog if we aren't first accused of having
>some bizarre agenda.

Frankly, I couldn't imagine you having any other agenda than
making things better. But in some way, I feel that you may have
failed by trying too hard (and in many ways succeeding).

Regards,    Martin.



#-#-#  Martin J. Du"rst, Assoc. Professor, Aoyama Gakuin University
#-#-#  http://www.sw.it.aoyama.ac.jp       mailto:duerst at it.aoyama.ac.jp     


