Sometimes being IETF area director feels like being inside a
whirlwind.
At no time is this more pronounced than in the weeks immediately
leading up to an IETF meeting; this one was no exception, but rather
seemed determined to emphasize the rule by overdoing it.
This IETF had almost doubled in size since the last San Jose IETF, two years earlier. Counting preregistered and walk-in registrants together, we would have topped 2000, had it not been for the 199 or so no-shows.
Strangely, the Fairmont seemed to absorb them all with remarkably little fuss; we got used to the little group of stragglers standing outside the door of the meeting room attempting to hear, the social niceties of climbing over other people on the way to the front of the room, the mad dash to get a cup of coffee in the 3 o'clock break before the tanks ran empty.
But apart from that, it was remarkably normal. Just another IETF, with the level of acrimony even seeming somewhat dampened by the influx of people.
Consider the following:
What these groups have in common is that very few of their members are
old hands at the IETF process; they contain large numbers of people
who are used to other processes and other mindsets, and who do not
"get the Internet spirit".
It's far from clear that this means that they are wrong and we are
right. But it's very clear that a culture clash exists, and that some
rapid education is in order if these groups are to function well
within the IETF structure.
At the same time, that culture itself is, as always, changing. This week we've explored the possibility of imposing a whole new raft of requirements on Internet standards, including documenting protocols' internationalization, scalability and manageability, in addition to the ubiquitous "security section" (where "not considered" will from now on be considered equivalent to "do not publish this document"). To some, this means that formality takes the place of creativity; to others, this is just taking the wisdom gathered through the old, informal process and putting it where people new to the process can get at it.
Other developments on the key management front include the ongoing work to be able to fetch keys from the DNS; the standards were finished some time ago, but details need to be worked out.
A fundamental problem with the standardized IPSEC algorithms has led to their replacement; this shows both that standardizing security is difficult and that the process is working.
When talking of security and the Web, most people will be thinking of
SSL, the Netscape-sponsored transport protocol that is present in many
current Web servers.
The IETF is adopting it under the name TLS (Transport Layer
Security), possibly fixing some problems, and possibly making it
incompatible with the installed base (though able to negotiate
compatibility).
Time will show if this was useful.
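For readers who want a concrete picture: from the application's point of view, such a transport-security layer is simply a wrapper around an ordinary socket, with the protocol version negotiated between the two ends. The sketch below uses today's Python ssl module (a descendant of this very work) purely as an illustration; the names and defaults are modern, not those of 1996.

```python
import socket
import ssl

# A client-side security context with safe defaults:
# certificate verification and hostname checking enabled.
ctx = ssl.create_default_context()

def open_secure(host, port=443):
    """Upgrade a plain TCP connection to a secured one.

    Client and server negotiate the highest protocol version both
    sides support; that negotiation is the mechanism that lets a
    revised protocol stay interoperable with an older installed base.
    """
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

The version haggling happens entirely inside wrap_socket, invisible to the application, which is exactly what makes "incompatible, but able to negotiate compatibility" workable in practice.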
In the Secure Email area, little clarity was offered by the BOFs on S/MIME (an RSA-owned format) and PGP/MIME (a format owned partly by the IETF and partly by PGP Inc.). RSA is offering to let the IETF take over maintenance of most of the S/MIME specs, provided RSA gets to decide who can use the S/MIME name. Time will hopefully show whether this can be done, and whether it is useful.
Time is also what we haven't got that much of.
In ASID, the proposed drastic revision of the LDAP spec, called "LDAP version 3", was thought to be "nearly finished". However, a debate erupted about whether this was going "too far too fast"; the debaters settled on a commitment to go on with LDAP version 3, but to remove from the spec all features that could later be added as extensions, in order to keep the core spec simpler.
The RWHOIS workers have now concluded that they need to work on a really new, text-oriented directory access protocol, on the order of LDAP but without that protocol's marriage to the idea of a single Directory. As presented, it's a strong maybe - do we really need another way to do these things?
We're still left with more questions than answers in the directory
area. And time is not working in our favour; released LDAP products
from Netscape and Microsoft will probably make a large difference in
the dynamics of the field. But we STILL haven't solved the problems we
know of: global registration, global searching, and global data
interoperability.
Just to name a few.
Multimedia in many of its aspects is inherently intolerant of Internet variability; a network designed primarily to get data through, no matter what, with a minimum of resources and coordination, is ill suited to the demands that constant-bit-rate video or audio streams, with their requirement for throughput and jitter guarantees, place on the network.
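To make "jitter" concrete: the RTP specification (RFC 1889, published earlier this year) defines a running interarrival-jitter estimate that receivers maintain. A minimal sketch in Python, with hypothetical transit times:

```python
def update_jitter(jitter, transit_prev, transit_cur):
    """One step of the RTP (RFC 1889) interarrival jitter estimator.

    D is the difference between the one-way transit times of two
    consecutive packets; the estimate is a low-pass filter over |D|,
    with gain 1/16 so a single late packet moves it only slightly.
    """
    d = abs(transit_cur - transit_prev)
    return jitter + (d - jitter) / 16.0
```

A constant-bit-rate video or audio stream wants this number to stay near zero; a best-effort network makes no such promise, which is precisely the mismatch described above.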
One of the mantras hyped as a solution to this problem has been RSVP, the Internet reservation protocol. But the protocol's scaling properties are still largely unknown; nobody has used RSVP on a large scale, and some problems, such as scaling to thousands of simultaneous streams, remain unknown quantities.
And of course, we still haven't answered the policy issue: Why should I allow you to make reservations in my network?
The light at the end of the tunnel may indeed be a train coming in the opposite direction... Expect to see much about this in the trade press this spring.
A special group - LSMA - has been formed in the Applications area in order to get a handle on some of these limits from at least one perspective.
It seems that the traditional yearly doubling of the Internet user base has accelerated, and has been accompanied by a new doubling: the bandwidth consumed per Internet user also seems to double every year.
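Compounded, the two doublings multiply: aggregate traffic grows by a factor of four per year. A back-of-the-envelope sketch, with hypothetical starting figures:

```python
def total_traffic(years, users0=1.0, bw_per_user0=1.0):
    """Aggregate traffic after a number of years, assuming the
    user base and the per-user bandwidth each double yearly.
    Two simultaneous doublings mean quadrupling: 2**n * 2**n == 4**n.
    """
    users = users0 * 2 ** years
    bw_per_user = bw_per_user0 * 2 ** years
    return users * bw_per_user
```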
Some of this is just plain usage, and some of it is bad Web design or the hoary old pornography archives, but a worrying trend is the use of Internet resources in ways that consume horrendous amounts of bandwidth.
To give one example, the PointCast service, which uses Web protocols
and "server push" to give the illusion of a continuous newsfeed, has
been banned by many corporate firewalls, because a hundred users
within the firewall viewing the service caused the data to be passed
through the firewall a hundred times, leading to congestion.
The service is useful and important; the way in which it was designed
shows a fundamental lack of appreciation for the problems it will
cause. "Designed to be a victim of its own success"?
There are solutions - in this case a PointCast proxy inside the
firewall - but the problem is more general.
The "Internet telephony" trend touted by many is also worrying, since
real-time, non-bandwidth-adaptive point-to-point flows are exactly the
thing the Internet was not set up to handle. The traffic does not seem
to be visible yet, though; the hype level for Internet telephony
seems to this author to be WAY beyond the real usage.
But people are buying equipment to start being serious about it (one
rumour had it that VocalTec had sold Internet Phone <-> ordinary phone
interworking units to no less than 24 different customers), so it
seems that it's one of the disasters that just will have to happen.
I guess the reader will just have to find out about these from other sources.
In all, a Good Time was had by all.