Internet User review: IDNA2008 follow-up at IETF/PRECIS and ICANN/VIP

JFC Morfin jefsey at jefsey.com
Fri Jul 29 07:27:27 CEST 2011


Jean-Michel,

I agree with you. However, things are not that simple. Andrew 
Sullivan asked for an algorithm, and he is right: machines and 
systems need algorithms. What you emphasize is that in our cases 
(variants, stringprep replacement, IDNA support on the user side, 
extended services naming, IUsers' expectations, etc.) the algorithmic 
nature is just as precise as the mathematical algorithm that Andrew 
expects, but it obeys an entirely different logic, because it also 
involves a brain-to-brain level. We have the tool to support it 
(fringe to fringe, as permitted by IDNA2008 and exemplified in RFC 
5895), but we first have to understand and document this logic.

Therefore, we first need to get everyone who shares this burden to 
accept that their approach is to be shared with others, and to 
understand that their logic and the logic of the others will help, 
but that the final logic cannot be a common logic. It is a new logic 
that we have to explore in common. Their experience and the 
experience of the others are needed, but the case that we have to 
address is totally new.

That case first needs to be understood.

IPv6 will give everyone millions of sub-addresses (we call them 
IDv6: IIDs that are globally addressable). People will from then on 
allocate one, or several, thousand domain names to themselves, in the 
same way that they used to have an address, a name, a nickname, a 
mobile directory of addresses, TV channels, file names, etc. 
Therefore, we are talking of a world digital ecosystem naming 
infrastructure of hundreds of billions of domain names, plus mail 
names, login IDs, passwords, keys, codes, etc.

In addition, we the users want this to be simple, sure, and secure 
for everyone, in every language and every orthotypography (an 
orthotypographic requirement that IDNA2008 rightly refused to include 
on the network side by refusing mapping, and that ICANN calls 
"variants" on the user side).

Compared to this, IDNA2008 was a _very_ simple thing to achieve.

Now, you have summarized your vision of the VIP, WG/PRECIS, IAB, and 
IUCG problem in a metaductive manner. IMHO, this is correct. 
Metaduction, as a way of thinking in networks, is the proper and only 
way to address a problem of this size and complexity. However, 
metaduction is a way of thinking that everybody partly uses all the 
time, but that is still only explored as such, at least in the way 
you use it, within our ALFA (Architecture Libre/Free Architecture) 
research framework.

Some have emphasized that there was a deep need for a common 
glossary. Yes! However, the target is not only to use the same 
glossary, but also to have a common understanding of the ways of 
thinking of the concerned parties (engineers, operators, users).

Therefore, let me explain it and translate what you mean to others, 
then analyze what you state and what it might imply in order to 
address these groups' needs, as they result from our IDNA2008 final consensus.

At 10:39 27/07/2011, jean-michel bernier de portzamparc wrote:
>Jefsey,
>
>I read your mails and compare them with our discussions. What I feel 
>is that we progressively switch to and from the three forms of 
>comparison, which we call semantical (sense), as opposed to 
>mathematical (one character at a time) and physical (same external form).

These three types belong to what we are exploring as "intelligent 
thinking" in order to address complexity, as an alternative to Edgar 
Morin's "complex thinking". Complex means woven (Latin complexus: 
woven together), like the web.

Intelligent thinking is based on the idea that there are three(+one) 
levels of intelligence:

1. physical, like the Internet lines and nodes,
2. logical/mathematical, like the protocols,
3.1. semantical, like the meaning of what is exchanged,
3.2. pragmatical, like a paradigm (everyone thinks the same).

This corresponds to:

1. "to be in good intelligence with" and interlinks,

2. "intelligence" (as in the CIA) as information and command of the 
hyperlinks,

3.1. being intelligent, i.e. being able to adequately use that 
information by modeling (comprehending) it and observing/emulating 
the model's behavior.

3.2. the same, but when one considers, instead of a single semantic 
processor, a network of semantic processors: a crowd, country, 
culture, community, etc.

Here, the reasoning cannot be classical deduction, induction, 
abduction, or even hypothetico-deduction. We call it metaduction. It 
consists in building a brain vision, a mental map of what we 
understand. We call it an ontography (because 50% of the brain uses 
vision-related functions; e.g. "theory" originally means an 
observation, we refer to our "points of view", etc.). The legend of 
this "map" is an ontology.

Then, the metaductive reasoning consists in observing the mental map 
and progressing through it. This is truly mentally walking a theory.

One, therefore, understands that metaduction extensively uses 
contextual models, and that being dependent on models makes it 
totally dependent on context, i.e. on the adjacencies that define a 
context (or in IDNA2008 terms and thinking environment: "joiners" and 
"CONTEXTO").
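
The IDNA2008 contextual rules give a concrete flavour of this 
dependence on adjacency. For instance, RFC 5892's CONTEXTJ rule 
permits ZERO WIDTH JOINER (U+200D) only when the preceding character 
is a virama (canonical combining class 9). A simplified sketch of 
that one check (not the full RFC 5892 rule set):

```python
import unicodedata

ZWJ = "\u200d"  # ZERO WIDTH JOINER

def zwj_allowed(label: str) -> bool:
    """Simplified CONTEXTJ check from RFC 5892 (Appendix A.2):
    ZERO WIDTH JOINER is permitted only if the character before it
    has canonical combining class Virama (ccc == 9)."""
    for i, ch in enumerate(label):
        if ch == ZWJ:
            if i == 0 or unicodedata.combining(label[i - 1]) != 9:
                return False
    return True

# Devanagari ka + virama + ZWJ + ssa: ZWJ follows the virama -> allowed.
print(zwj_allowed("\u0915\u094d\u200d\u0937"))   # True
# ZWJ directly after a plain letter -> not allowed.
print(zwj_allowed("a\u200db"))                   # False
```

The point is exactly the one made above: whether a code point is 
valid cannot be decided in isolation, only from its adjacencies.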

As a consequence, when

- physical logic is built on correspondence,
- and mathematical logic is built on equivalence,
- then semantical logic is built on coherence,
- and pragmatical logic is, in addition, dependent on internal 
consistency. (Note: pragmatic is semantic in a context, i.e. 
considering the influence/interference of a given relational space 
and therefore of its referent. An example of a referent is the IANA, 
which up to now was unique; however, the Internet technology defines 
one IANA per DNS class.)

Communications are about exchanging (uttering/understanding) our 
"maps" with others. Our first needs are, therefore, to name, locate, 
protect, and access our "maps" in a simple, sure, and secure manner. 
This is what WG/PRECIS and ICANN/VIP attempt to do. This is a part of 
the IDNA2008 consequences that I proposed to discuss two years ago, 
and that led me to appeal to the IAB against the IESG's lack of 
warning about the consequences of IDNA2008 when publishing the RFC 
set. Let me remind you here that the outcome of these appeals was 
extremely positive, as they identified that this issue extended far 
outside the area of responsibility of the IETF, since it concerns the 
whole digital ecosystem, even though IDNA2008 bluntly reminded us of 
the true, much larger extent of the Internet architecture.

In this digital ecosystem, the Internet legacy proposition/contribution:

- names, using domain names, logins, etc.,
- locates, using IP addresses,
- protects, by encrypting,
- accesses, using keywords, etc.,
- deals with transmitting datagrams as efficiently as possible (this 
is end-to-end connectivity),
- wants "Everything else should be done at the fringes" (RFC 1958: 
Architectural Principles of the Internet).

IDNA2008 has clarified what, in diversity support, is:

- at the end-to-end (protocol) level (the RFC set),

- at a fringe-to-fringe intelligent level that the IETF had never 
documented before. This is why RFC 5895, which permitted our IDNA2008 
consensus, was published for informational purposes only, its matter 
being "unusual" for the IETF: "this document does not specify the 
behavior of a protocol that appears 'on the wire'. It describes an 
operation that is to be applied to user input in order to prepare 
that user input for use in an 'on the network' protocol. As unusual 
as this may be for a document concerning Internet protocols, it is 
necessary to describe this operation",

- and has in this way permitted support of what is at the 
brain-to-brain, human semantic and pragmatic level.
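
The kind of user-side preprocessing RFC 5895 describes (case 
mapping, width mapping, NFC normalization before the label ever 
reaches the IDNA2008 protocol) can be sketched in a deliberately 
simplified form; this is not the full RFC 5895 procedure, only an 
illustration of mapping happening at the fringe:

```python
import unicodedata

def local_map(label: str) -> str:
    """Simplified sketch of RFC 5895-style local mapping.

    Illustrative only: the real document specifies more steps and
    finer rules. The point is that this runs on the user side,
    before the protocol sees the label."""
    # Map uppercase characters to lowercase (Unicode case folding).
    label = label.casefold()
    # Map full-width/half-width forms to their canonical counterparts
    # (approximated here via each character's compatibility decomposition).
    label = "".join(
        unicodedata.normalize("NFKC", ch)
        if unicodedata.decomposition(ch).startswith(("<wide>", "<narrow>"))
        else ch
        for ch in label
    )
    # Normalize the whole label to Unicode NFC.
    return unicodedata.normalize("NFC", label)

print(local_map("B\u00dcCHER"))   # BÜCHER -> bücher
# Full-width "ｅｘａｍｐｌｅ" -> example
print(local_map("\uff45\uff58\uff41\uff4d\uff50\uff4c\uff45"))
```

Nothing here touches the wire protocol: the output is what would 
then be handed to the IDNA2008 machinery proper.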

IDNA2008 has removed constraints on the Internet's architectural 
support of diversity (considering the case of linguistic diversity), 
and hence of complexity. In so doing, it also modified our reading of 
the Internet architecture: it showed that the Internet is also built 
on the principle of subsidiarity (in addition to the principle of 
constant change [RFC 1958] and the principle of simplicity [RFC 3439: 
Some Internet Architectural Guidelines and Philosophy]).

The consequence that we are faced with is threefold:

1. we have to adapt to the IDNA2008 changes and update ourselves and 
the technology to the opportunities that IDNA2008 has unleashed.

2. we are in an open architectural context that we are not familiar 
with; however, it results only from a full reading of the RFCs that 
we know, as well as from a full use of the very same, unchanged code 
that we use every day. The danger is that we may consider solutions 
that are already provided, or contradict them without noticing it. 
This was the case with IDNA2003. It is still the case with the IDNA 
concept (RFC 6055).

3. the architectural context that we call the IUI (cf. below), i.e. 
the Internet Use Interface on its Internet side and the Intelligent 
Use Interface on our user side, is something that we will also use 
intelligently (i.e. with the same intelligence and need for 
simplicity, surety, and security) with other technologies that we 
should consider (mobile phones, regular mail, registries, etc.), 
whose standardization we, as common users, are not accustomed to. 
Implicitly, simplicity, surety, and security on our user side call 
for an advisable unicity (RFC 1958) in the coherence and consistency 
of the proposed solutions. This should be helped by the use of the 
same IDNA2008 on the Internet side, and by our demand for a single 
follow-up on that side: this is why, as users, we wish for common 
liaison and work among the "sons of IDNA2008": VIP, PRECIS, IUCG, the 
IDNAbis experience, etc.


>This is a plex.

Yes. A plex is a network of ideas.
This is the very nature of complexity: a plex is a plex of a plex, 
etc. We observe that metaduction, as introduced above, is not about 
reducing complexity into small, simpler problems (rationalism), 
because one then loses the influence of the complexity (the whole is 
more than the sum of its parts, because parts lose the whole when 
being split from other parts). Metaduction is about looking at the 
confusion as a whole, progressively mapping it, and studying the map 
while looking for its apices (plural of apex) of complication and 
simplicity, i.e. what seems the most difficult to understand and what 
seems the simplest to do to clarify things. This is a pure 
application of RFC 3439: in very large systems one must apply the 
principle of simplicity.

>  It meshes domain name, variants, phishing, meanings, sounds, 
> feelings, perceptions, etc. This is human communication vs. machine 
> communications.

Object communications - as we know them - are based upon geometric 
correspondence: translation, whether or not size is preserved.

Machine communications - as we know them - are based upon the 
mathematical comparison function: equivalence, equality, if-then-else.

Human communications - which want to use machines as peripheral 
simplifiers (operators of complexity simplification) - are based upon 
the semantical comparison function: coherence, or its pragmatic 
similitude.
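
The three comparison functions can be made concrete in code. The 
"variant table" below is invented purely for illustration (it is not 
any registry's actual data); the first two comparisons use standard 
Unicode machinery:

```python
import unicodedata

a1 = "caf\u00e9"        # 'é' as one precomposed code point U+00E9
a2 = "cafe\u0301"       # 'e' + combining acute accent U+0301

# Physical/mathematical comparison: code-point-by-code-point equality.
print(a1 == a2)                                   # False

# Equivalence: canonically equal after Unicode NFC normalization.
print(unicodedata.normalize("NFC", a1) ==
      unicodedata.normalize("NFC", a2))           # True

# Coherence (sketch): two labels cohere if a community-agreed variant
# table maps them to the same abstract entry. Hypothetical table.
VARIANTS = {"colour": "color", "color": "color"}

def cohere(x: str, y: str) -> bool:
    return VARIANTS.get(x, x) == VARIANTS.get(y, y)

print(cohere("colour", "color"))                  # True
```

Note where the work is done in each case: the first two are pure 
computation, while the coherence table can only come from a human 
agreement, which is exactly the brain-to-brain level discussed here.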

IDNA2008 has established an interface between these three forms of 
communication in the way that it establishes the relation between the 
Internet and applications, which in turn need an interface with the 
Internet. This middleware is what permits us, the users, to use the 
Internet in the way we want. This middleware will also be present 
when we deal with any other digitized communications technology; this 
is why we call it the Intelligent Use Interface (IUI) and have 
started exploring it in parallel to IDNA2008.

>We therefore need to address it as a plex, metasimplify it and find 
>its apex of simplicity.

Yes. This is the process that I introduced above.

However, metaduction is not the way the IETF, ICANN, etc. think. This 
was acknowledged by ICANN in a very interesting fundamental document 
named ICP-3 (http://www.icann.org/en/icp/icp-3.htm). It states that 
the Internet community proceeds through experimentation; therefore, 
ICANN called for community experimentation. This was in 2001. It 
seems that we were alone in carrying it out, except for Google, which 
by its size can do something equivalent at any time with its Public 
DNS (http://code.google.com/intl/fr/speed/public-dns/).

We are not discussing an alternative root or the like, or the 
extension of the root. We are speaking about a full, true 
implementation of the DNS itself, the way it is documented and in the 
light of the way IDNA2008 unconstrains it.

>  I think that clarifying what "string" means in the three types of 
> comparisons would help a lot.
>
>Physical : same frequency
>Mathematical : same components
>Semantical : same what?

You also have to consider the pragmatic alternative to the semantic 
one; there are community side effects in what is discussed.

A "variant" as per VIP could be what relates to the pragmatic level. 
The term could be used.

>A way to address this point would be to look for another word than 
>"string" that would lead people to identify the differences with a 
>mathematical string in terms of comparison.

As said above, a "variant" as per VIP could be what relates to the 
pragmatic level, and we could define it along similitude grades 
depending on contexts.

Indeed, a word other than "string" would lead people to identify the 
differences with a mathematical string in terms of comparison.

>Also, to decide if their comparison is correspondence, equivalence 
>or coherence.

This makes sense. It is a sound approach to start with more 
appropriate and clearly identified definitions.

We must indeed decide whether their comparison is correspondence, 
equivalence, or coherence - or rather acceptance in the pragmatic case.


>This way we can use the principle of correspondence to check where we are.

The principle of correspondence states that innovation in science 
must not only explain new aspects, but also keep explaining those 
that were already explained.

This is the IDNA prerequisite and what IDNA2008 guaranteed: the DNS 
as we know it and as it is deployed millions of times, the 
operational protocols, and the users' satisfaction MUST NOT be affected.
Thank you.

Now we have to agree upon a new word to qualify a "smart string", if 
a "dumb string" is something we cannot read (spell) but only see. 
This means that:

- a "dumb string" (symbol) supports a signal,
- a "string" supports content,
- a "smart string" supports a thought. I would propose using 
"sememe", with a coherence seme by seme: 
http://en.wikipedia.org/wiki/Seme_%28semantics%29,
- a "local smart string" supports a relational space's accepted 
"pragmeme" between a "pract" and an "allopract" (cf. pragmatician experts).

Let me be clear: if we accept this, the last two propositions cannot 
be processed by a machine in a way that an RFC can describe, but an 
RFC can describe a common procedure and the guidelines for such a 
process to be carried out by common agreement. We have good examples 
in the way Unicode code points are worked out, langtag tables are 
built, ccTLDs are defined, etc.

I suggest that we work on this and add these definitions to our glossary.


Then, I think we should start from Marc Blanchet's Draft 
http://www.ietf.org/id/draft-blanchet-precis-framework-02.txt and 
discuss if a graphcode approach could address his requirements.

Note: I use the concept of "graphcode" for a stringprep-replacing 
algorithm made of a table of character graphical symbols for every 
existing script: one (phishing-proof) symbol per geometric form. Each 
graph-point corresponds to a number of "in-code-points", and to one 
single "out-code-point" per orthotypography. This is a pragmatic 
equivalent of RFC 5892, i.e. the algorithm is not processed by a 
computer program but by a multibrain consensus.
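
Since "graphcode" is only a proposal at this stage, the following is 
a purely hypothetical sketch of the data structure it implies: 
several visually confusable in-code-points fold to one graph-point, 
which maps to a single out-code-point for a given orthotypography. 
All names and table entries are invented for illustration:

```python
# Hypothetical graphcode table: in-code-point -> graph-point.
# Latin 'a' (U+0061) and Cyrillic 'а' (U+0430) share one glyph shape;
# Latin 'o' (U+006F), Cyrillic 'о' (U+043E), Greek 'ο' (U+03BF) share another.
GRAPH_POINTS = {
    "\u0061": "GP-A",
    "\u0430": "GP-A",
    "\u006f": "GP-O",
    "\u043e": "GP-O",
    "\u03bf": "GP-O",
}

# One out-code-point per graph-point for a Latin orthotypography.
OUT_CODE_POINT = {"GP-A": "\u0061", "GP-O": "\u006f"}

def graphcode(label: str) -> str:
    """Fold a label to its single out-code-point form, glyph by glyph."""
    return "".join(OUT_CODE_POINT[GRAPH_POINTS[ch]] for ch in label)

# A mixed-script "oa" built from Cyrillic letters folds to the Latin form:
print(graphcode("\u043e\u0430"))   # -> "oa"
```

The tables themselves, in this model, would be produced by the 
"multibrain consensus" the note describes, not by a program; only the 
lookup is mechanical.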

Then, the resulting code-graph (the graphcode equivalent of a 
code-point) system could be implemented for a test at one of the 
layers of the ML-DNS. Note: the ML-DNS is an IDNA2008-conformant 
encapsulation of the DNS that we are exploring (testing is to be 
carried out before it is documented), to process a naming pile that 
is able to address people's final needs. This pile is the pile of the 
variants of the same name in various classes and presentations, e.g. 
A-label, U-label, UDN (the user's actual entry), etc. A major 
difference with the IDNA context is that the ML-DNS supports a single 
"IDNApplication service" per machine (a single resolution source).
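
To make two entries of such a pile concrete: the relation between a 
U-label (Unicode form) and its A-label (the ASCII "xn--" form) can be 
shown with Python's built-in Punycode codec. (The stdlib "idna" codec 
implements IDNA2003, so only the raw Punycode step, which is common 
to IDNA2003 and IDNA2008, is used here.)

```python
# U-label: the Unicode presentation of the name.
u_label = "b\u00fccher"   # "bücher"

# A-label: ACE prefix "xn--" plus the Punycode encoding of the U-label.
a_label = "xn--" + u_label.encode("punycode").decode("ascii")
print(a_label)   # -> xn--bcher-kva

# Decoding the A-label back recovers the U-label.
round_trip = a_label[len("xn--"):].encode("ascii").decode("punycode")
print(round_trip == u_label)   # True
```

The UDN (the user's actual entry) would be yet another layer on top 
of these two, produced by user-side mapping.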

However, frankly, my first and main concern is that your approach 
confirms that these three groups (VIP, PRECIS, and IUCG) are working 
on things that are very similar and quite intertwined. They do need 
to relate in one way or another. At least, as the users of their 
deliverables, we need to ensure that they do. The work that they are 
engaged in is revamping Shannon's communications theory in a 
multilingual human context. It is worth some consideration.
jfc




More information about the Idna-update mailing list