It didn't rain on the parade

or - what I saw at the December 1996 IETF

Sometimes being IETF area director feels like being inside a whirlwind.
At no time is this more pronounced than in the weeks immediately leading up to an IETF conference; this one was no exception - rather, it seemed bent on emphasizing the rule by overdoing it.

This IETF had almost doubled in size since the last San Jose IETF, two years ago. Counting preregistered and walk-in registrants together, we would have topped 2000, had it not been for the 199 or so no-shows.

Strangely, the Fairmont seemed to absorb them all with remarkably little fuss; we got used to the little group of stragglers standing outside the door of the meeting room attempting to hear, the social niceties of climbing over other people on the way to the front of the room, the mad dash to get a cup of coffee in the 3 o'clock break before the tanks ran empty.

But apart from that, it was remarkably normal. Just another IETF, with the level of acrimony even seeming somewhat dampened by the influx of people.

A victim of its own success?

It's apparent to a lot of people now that the Internet is working. And many people think that this must mean that the IETF is doing something right. In consequence, they come to us with their problems and ask for standardized solutions, given our success rate in creating things that appear to work.

Consider the following:

And at that, the reason I didn't get Internet telephony thrown at me is that other people are in the front line for handling it....and I'm still waiting to hear from the gentleman from the ITU who wanted to use the IETF to standardize a TCP/IP interface to telephone signalling systems.....

What these groups have in common is that very few of their participants are old hands at the IETF process; they contain large numbers of people who are used to other processes and other mindsets, and who do not "get the Internet spirit".
It's far from clear that this means that they are wrong and we are right. But it's very clear that a culture clash exists, and that some rapid education is in order if these groups are to function well within the IETF structure.

At the same time, that culture itself is, as always, changing. This week we've explored the possibility of imposing a whole new raft of requirements on Internet standards, including documenting protocols' internationalization, scalability and manageability, in addition to the ubiquitous "security section" (where "not considered" will from now on be considered equivalent to "do not publish this document"). To some, this means that formality takes the place of creativity; to others, this is just taking the wisdom gathered through the old, informal process and putting it where people new to the process can get at it.

Security: Still no Rosetta Stone

Since the Great Declaration from Jeff Schiller, in which he declared that ISAKMP/Oakley was The Foundation Of The Standard Internet Key Management Protocol, there has been relative quiet on that front. Work is being done, salesmen keep on selling SKIP without regard to the facts, but we expect the work to be finished shortly. We all hope.

Other developments on the key management front include the ongoing work to be able to fetch keys from the DNS; the standards were finished some time ago, but details need to be worked out.

A fundamental problem with the standardized IPSEC algorithms has led to them being replaced; this is showing both that standardizing security is difficult and that the process is working.

When talking of security and the Web, most people will be thinking of SSL, the Netscape-sponsored transport protocol that is present in many current Web servers.
The IETF is adopting it under the name TLS (Transport Layer Security), possibly fixing some problems, and possibly making it incompatible with the installed base (though able to negotiate compatibility).
Time will show if this was useful.
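
To make the compatibility question concrete in the roughest possible terms, here is a toy sketch (in Python, with made-up version names and logic - emphatically not the real TLS handshake) of how a revised protocol can still talk to an installed base: both sides state what they speak, and the connection runs at the best version they have in common.

    # A toy model of version negotiation - NOT the actual TLS handshake.
    # Version names, preference order and the API are assumptions for illustration.

    SERVER_SUPPORTED = {"SSLv3", "TLSv1"}      # a new server that speaks both

    def negotiate(client_versions, server_versions=SERVER_SUPPORTED):
        """Return the best version both sides support, or None if none overlaps."""
        for version in ["TLSv1", "SSLv3"]:     # newest first
            if version in client_versions and version in server_versions:
                return version
        return None                            # truly incompatible peers

    print(negotiate({"SSLv3"}))                # old client -> 'SSLv3'
    print(negotiate({"SSLv3", "TLSv1"}))       # new client -> 'TLSv1'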

In the Secure Email area, little clarity was offered by the BOFs on S/MIME (an RSA-owned format) and PGP/MIME (a partly IETF-owned, partly PGP Inc.-owned format). RSA is offering to let the IETF take over maintenance of most of the S/MIME specs, provided RSA gets to decide who can use the S/MIME name. Time will hopefully show whether this can be done, and whether it is useful.

Time is also what we haven't got that much of.

If Directory is the place to find things, how do we find The Directory?

The usual 3 groups - FIND, ASID and IDS - met about directories, joined by a fourth group (ACAP) that has problems knowing if it has anything to do with directories or not; it's mainly targeted towards local configuration, address books and newsrc files, but constantly has to battle the perception that "we could do that with LDAP".

In ASID, the proposed drastic revision of the LDAP spec, called "LDAP version 3", was thought to be "nearly finished". However, a debate erupted about whether this was going "too far too fast"; the debaters settled on a commitment to go on with LDAP version 3, but to remove all features from the spec that could be added as extensions later, in order to make the core spec simpler.

The workers from RWHOIS have now concluded that they need to work on a really new, text-oriented directory access protocol, on the order of LDAP but without that protocol's marriage to the idea of a single Directory. As presented, it's a strong maybe - do we really need another way to do these things?

We're still left with more questions than answers in the directory area. And time is not working in our favour; released LDAP products from Netscape and Microsoft will probably make a large difference in the dynamics of the field. But we STILL haven't solved the problems we know about: global registration, global searching, and global data interoperability.
Just to name a few.

We have multimedia, but where is the message?

Everyone and his grandmother is hyping multimedia as the Next Great Thing on the Web these days. Unfortunately, neither everyone nor his grandmother has any idea of the impact of running large-scale multimedia over the Internet.
The IETF people are beginning to get one, and they don't like what they see.

Multimedia in many of its aspects is inherently intolerant of Internet variability; a network designed primarily to get data through, no matter what, with a minimum of resources and coordination, is ill suited to the demands that constant-bit-rate video or audio streams, with their requirement for throughput and jitter guarantees, place on the network.

One of the mantras hyped to solve this problem has been RSVP, the Internet reservation protocol. But this protocol's scaling properties are still largely unknown; nobody has used RSVP on a large scale, and some of the problems, like scaling to thousands of simultaneous streams, are simply unknown quantities.
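
To put a little flesh on the scaling worry, here is a back-of-envelope sketch; every number in it is an assumption picked for illustration, not a measurement of any real router or deployment.

    # Back-of-envelope arithmetic on per-flow reservation state.
    # All numbers are assumptions chosen for illustration, not measurements.

    flows = 10_000               # simultaneous reserved streams through one backbone router
    state_bytes_per_flow = 500   # assumed memory for flow specs, filter specs and timers
    refresh_interval_s = 30      # RSVP state is "soft" and must be refreshed periodically

    memory_mb = flows * state_bytes_per_flow / 1e6
    refreshes_per_s = flows / refresh_interval_s

    print(f"~{memory_mb:.1f} MB of reservation state, "
          f"~{refreshes_per_s:.0f} refresh messages per second to process")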

And of course, we still haven't answered the policy issue: Why should I allow you to make reservations in my network?

The light at the end of the tunnel may indeed be a train coming in the opposite direction.....expect to see much about this in the trade press this spring.

A special group - LSMA - has been formed in the Applications area in order to get a handle on some of these limits from at least one perspective.

Use and abuse of the Net

The Web is by now a permanent fixture of life.

It seems that the traditional doubling every year of the Internet user mass has accelerated, and has been accompanied by a new doubling: the bandwidth consumed by each Internet user also seems to double every year.
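
Compounding the two doublings gives a growth rate that is easy to underestimate; a trivial illustration (assumed rates, not measurements):

    # Illustrative compounding of the two doublings.
    users, bandwidth_per_user = 1.0, 1.0
    for year in range(1, 4):
        users *= 2               # the user population doubles
        bandwidth_per_user *= 2  # and so does each user's bandwidth
        print(f"year {year}: total traffic x{users * bandwidth_per_user:.0f}")
    # -> x4, x16, x64: total traffic at least quadruples every year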

Some of this is just plain usage, some of it is bad Web design or the hoary old pornography storage, but a worrying trend is the use of Internet resources in ways that consume horrendous amounts of bandwidth.

To give one example, the PointCast service, which uses Web protocols and "server push" to give the illusion of a continuous newsfeed, has been banned by many corporate firewalls, because a hundred users within the firewall viewing the service caused the data to be passed through the firewall a hundred times, leading to congestion.
The service is useful and important; the way in which it was designed shows a fundamental lack of appreciation for the problems it will cause. "Designed to be a victim of its own success"?
There are solutions - in this case a PointCast proxy inside the firewall - but the problem is more general.
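
For the curious, here is a minimal sketch of the proxy idea: the first request for an item crosses the firewall, and later requests are served from a local cache, so a hundred internal readers cost one external transfer. The feed URL and the never-expiring cache are illustrative assumptions, not PointCast's actual design.

    # A minimal caching-fetch sketch; the URL is hypothetical, and a real proxy
    # would of course have to expire and refresh its cache.

    import urllib.request

    _cache = {}

    def fetch(url):
        """Fetch a URL, but only cross the external link the first time."""
        if url not in _cache:
            with urllib.request.urlopen(url) as response:
                _cache[url] = response.read()
        return _cache[url]

    # A hundred internal readers now cause one external transfer:
    # for _ in range(100):
    #     fetch("http://news.example.com/headlines")   # hypothetical feed URL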

The "Internet telephony" trend touted by many is also worrying, since real-time, non-bandwidth-adaptive point-to-point flows are exactly the thing the Internet was not set up to handle, but it does not seem to be visible yet; the hype level for Internet telephony seems to this author to be WAY beyond the real usage.
But people are buying equipment to start being serious about it (one rumour had it that VocalTec had sold Internet Phone <-> ordinary phone interworking units to no less than 24 different customers), so it seems that it's one of the disasters that just will have to happen.

Final words

There are a lot of things I haven't covered.
For some strange reason, most Apps groups are left out in the cold: the significant revisions underway of SMTP, NNTP, FTP and TN3270, to name just a few, are not mentioned; the fax, calendaring and printing BOFs, which were all kind of successful, got short shrift; the EMA work on Voice Profile for mail (voicemail systems sending E-mail to each other) isn't mentioned; nor is the guy from the ITU who wanted the IETF to provide a telephone management protocol (he didn't find me), or the noise about new top-level domains (IAHC and the Newdom BOF).
Not even the visit by Tom Kalil of the White House got mentioned.

I guess the reader will just have to find out about these from other sources.

In all, a Good Time was had by all.


Harald.T.Alvestrand@uninett.no
Last modified: Fri Jan 3 14:58:58 1997