Harvest Time in Orlando

or - what I saw at IETF'43, December 1998

Harald Tveit Alvestrand
Harald.Alvestrand@MaXware.no
who went there as area director of the Operations & Management area

Outside, looking in?

It's been a strange IETF - but aren't they all?
I've definitely said NO to doing another term as Area Director; the Nomcom is searching for my replacement.

It will be strange to go to an IETF where I am not morally obliged to attend a particular session in almost every single slot, but instead have time to visit sessions "just because I am interested".

Take directories, for instance. A year ago, as Apps AD, I laid down the law: There will be no extensions published to LDAP version 3 until its mandatory-to-implement security is in place.
Since then, I've had a completely different area to attend to, and the Apps ADs have stuck to their guns on this. And yet... it's been a year, and only now do we see a possibly satisfactory solution in place, called "Digest Authentication" (related, but not identical, to the HTTP authentication method known by that name).
Would it have gone any faster if I had been there, yelling at the top of my voice? Or slower? Who knows?

Well...things take time.

Multicast - a solution may have found its problem at last

When I titled this page "Harvest time", it was in honor of the many things that seem to have gone right, and where we are now reaping the fruits of all that hard work that has gone on for years - or, in some cases, decades.

Multicast is one of these efforts; its promise to efficiently deliver content to multiple destinations has at times seemed like a pipe dream - but now it's closer to an operational reality.

At the plenary, Van Jacobson gave a talk where he briefly described the history of multicast - the idea apparently originated in about 19xx, and people have been working to realize it ever since.
For a large class of problems, including large videoconferences, "broadcast TV", service location and distributed simulations, this seems like "the obviously right solution". But that doesn't mean it's easy.....

One of the core problems with multicast is the "state problem": If routers are to decide which paths to send packets down for a given group, the routers have to maintain state on that group. This costs memory and processing power, and those things cost money. Who pays?
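
The "state problem" above can be made concrete with a toy sketch. This is a hypothetical illustration in Python (not any real router's code, and all names are made up): for every active group, a router keeps the set of interfaces with downstream members, and that set is exactly the memory that has to be paid for.

```python
# A minimal sketch (hypothetical, not any real router implementation) of the
# per-group forwarding state a multicast router must maintain: for every
# active group, the set of outgoing interfaces with downstream members.

class MulticastRouter:
    def __init__(self):
        # group address -> set of outgoing interface names
        self.group_state = {}

    def join(self, group, interface):
        # A receiver behind `interface` joined `group` (e.g. via IGMP).
        self.group_state.setdefault(group, set()).add(interface)

    def leave(self, group, interface):
        ifaces = self.group_state.get(group, set())
        ifaces.discard(interface)
        if not ifaces:
            # No members left: the state (and its memory cost) goes away.
            self.group_state.pop(group, None)

    def forward(self, group, in_interface):
        # Replicate a packet down every member interface except the one
        # it arrived on.
        return sorted(self.group_state.get(group, set()) - {in_interface})

router = MulticastRouter()
router.join("224.1.2.3", "eth1")
router.join("224.1.2.3", "eth2")
print(router.forward("224.1.2.3", "eth0"))  # → ['eth1', 'eth2']
```

Every group in `group_state` costs memory and lookup time on every router along the distribution tree, whether or not that router's operator gets paid for it; that is the economics question in miniature.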

For a certain class of multicast applications, namely the "broadcast-like" things, the answer is "sender pays"; the ability to originate multicast to a group is sharply limited, and the owner gets to pay for this "premium service". This is the deployment model that seems to be at the basis of the announcements of "commercial" multicast service.
At the moment, both SPRINT and UUNET are offering commercial multicast, and seem to promise to tie their services together across their domains; the usage of multicast in SPRINT already dwarfs the "old MBONE", which operated mainly in the academic community and formed the basis of the worldwide broadcasts from the IETF.

It's still not clear that there's a viable deployment model for the "anyone can send" multicast required for things like wide area gaming, and it's quite clear that the current multicast stuff doesn't scale very well to one million simultaneous 3-person conference calls. There might be a market to be exploited here....

But still, I'm tempted to call the success of this effort, which requires changes to basically every single router in the world, little short of a miracle.

In related work, the MBONED (deployment) working group worked on standards for doing multicast within limited scopes; if you can constrain a multicast group to a domain where the network is paid for by one organization, the "who pays" equation grows much simpler. And that should be relatively easy.

SNMPv3: Ship it - it's good enough!

The question before the SNMPv3 community was whether or not to ask for SNMPv3 to be moved from "Proposed Standard" to "Draft Standard", or whether to revise it and reissue it as "Proposed Standard".

The feedback from the community of network operators was thunderously unanimous: Ship the damn thing - NOW!

The developer community hemmed and hawed, stared long and hard at the (few) inconsistencies, minor weaknesses and areas of suboptimality they had found, swallowed hard, and said "Yes".

If all goes well, SNMPv3 will be approved for publication as Draft Standard before the next IETF meeting.

What this means to the user is that we now have a secure mechanism to access or update management information across a network; Cisco and others have promised to implement this in production releases "reasonably soon now" (which is slightly later than Real Soon Now :-), and we all hope that this will improve manageability of the network overall. We all know we need it!

In the longer run, the effects may be more profound (but less predictable); the architecture of SNMP is now modularized in a way that may permit introduction of new features when needed rather than when redesigning the protocol. This may make it possible to attack entirely different problems later.

In one meeting, the "NEWSNMP" BOF, the disconnect between SNMP and management became quite obvious; when the people who run networks talked about management, they talked about relating things to each other so that one could get "info on a higher level". When the people who do SNMP talked, they talked about adding features to SNMP. Somehow, this doesn't strike me as completely connected...

IPv6: Godot is coming?

IP Next Generation, also known as IP version 6 (IPv6), is, from a standards perspective, "mostly done".

The wheels are in motion to get the registries to assign "real" address space, the test network runs across 22 or so countries, and implementations exist for Windows, Linux and many other operating systems.

Yet it's not Quite There Yet.

As one developer put it: "When we, the real enthusiasts, cannot switch to IPv6 ourselves, how do we expect the rest of the world to do it?"

There's a move afoot right now to rectify this - to get to the point where someone wanting to join the IPv6 network can get the instructions "Get the software. Install. Reboot." - and that's it.
There are a few pieces missing still - but there is hope that it can get there Real Soon Now.

But - more and more, people see that a fragmented Internet based on the NAT box has a number of problems; there might be hope for the future.

Of kings, presidents and saints

With the passing away of Jon Postel in October, the Internet lost one of its true founding members.

It appears that it also got its first unofficial saint; you sometimes got the feeling that the words "Jon Postel" were uttered only in a reverent sort of voice; the tributes, kind words and memories were many and meaningful.

But life goes on, and with it the transitioning of the Internet governance structure.
It looks as if the next Governing Body of the Internet will be called ICANN - the Internet Corporation for Assigned Names and Numbers. This body will be governed by a Board of Directors, chosen from four places:

From the fact that none of these organizations yet exist, and that the process for choosing the 9 "at-large" representatives hasn't been decided, you can tell that there are a number of details still to be settled....

Once in place, the ICANN will have the authority to assign top-level domain names, IP address blocks and protocol parameters, much like IANA is doing today.

The IETF community is painfully aware of the process, and of just how far from ideal it is, but still, the process has strong support in the IETF - not least because it was being shepherded and supported by Jon Postel.

At the moment, it's also "the only game in town" - the alternative if this doesn't work is probably to see the ITU or another intergovernmental organization take control, and that would possibly be worse.

But this being the IETF, people are loud and boisterous about their differences of opinion, even when minor; some people try to portray this as the IETF fundamentally disagreeing with ICANN. They're wrong.

Of things that happened, of things that were said, of what the newspapers said....

Lots of things happened in various corners. For lots of them I have hearsay reports; for others I know too little to report effectively. Look at the other reports too; they sometimes sound like a completely different meeting.... this section is "short takes" on some of the subjects touched upon.

XML

Everyone loves XML - at least those who haven't read the spec....

It seems certain by now that XML is a structuring language, in much the same way as RDA, Sun's XDR and ASN.1.
Everyone and his grandmother seems to be certain they know how to use it.
I wonder if they know what kind of overhead it has?
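
The overhead question is easy to make concrete. A toy comparison (my own illustration, not from the meeting): encode the same two-field record once as XML and once in a packed binary form, in the spirit of XDR or ASN.1's compact encodings, and compare the byte counts.

```python
import struct
import xml.etree.ElementTree as ET

# A hypothetical record: an id and a floating-point value.
record_id, value = 42, 3.5

# XML encoding: the tags dominate the payload.
elem = ET.Element("reading")
ET.SubElement(elem, "id").text = str(record_id)
ET.SubElement(elem, "value").text = str(value)
xml_bytes = ET.tostring(elem)

# A packed binary encoding of the same data, XDR-style:
# a 4-byte unsigned int plus a 4-byte float, network byte order.
bin_bytes = struct.pack("!If", record_id, value)

print(len(xml_bytes), len(bin_bytes))  # → 48 8
```

A factor of six on an eight-byte record; whether that matters depends entirely on where the bytes travel, which is precisely the question the XML enthusiasts tend not to ask.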

HTTP: The Next Generation

The W3C came to the IETF with a blueprint for a new World Wide Web.

After the boos, hisses and jeering had died down, there seemed to be consensus that multiplexing of connections was a very interesting concept, despite having been tried, and having failed, roughly every five years throughout the Internet's history.

The rest of the model was classified roughly as "grand scheme - yeech", a not uncommon IETF reaction.
This doesn't mean that it's good, bad or ugly; it just means that it didn't get a rave review. THIS time.

When you say TCP won't work, what do you mean?

Lots of applications don't use TCP, or don't want to use TCP, for various reasons.

But if lots of applications don't behave properly when the network congests, the network dies; we've been there - tried that. And TCP is the Norm of Proper Behaviour, for better or worse.
Reality conflict.

One BOF, called RUTS (I forget why), explored why people use UDP-based protocols. The most entertaining part I heard was a discussion of NFS, explaining how its developers eventually added almost every feature of TCP to their UDP-based implementation, and then decided to simply switch to TCP. Whenever you have something more complex than an unloaded LAN, it seems that TCP just behaves better.
But other protocols need other things - multimedia needs "time-invariant delivery" but doesn't care about reliable delivery; signalling protocols are in a hurry; DNS doesn't want to maintain per-client state if it can get away without it....
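
The NFS story - reinventing TCP's machinery piece by piece on top of UDP - usually starts with the very first piece: a stop-and-wait retransmission loop. A toy sketch (no real sockets; the lossy channel is simulated, and all names are my own invention):

```python
def send_reliably(payload, channel, max_tries=10):
    """Stop-and-wait: send, wait for an ack, retransmit on loss.
    This is the first of many TCP features (then come sequence
    numbers, windows, congestion control...) that UDP-based
    protocols tend to end up reinventing."""
    for attempt in range(1, max_tries + 1):
        acked = channel(payload)   # True if the ack came back in time
        if acked:
            return attempt         # number of transmissions needed
    raise TimeoutError("peer unreachable")

# A simulated channel that drops the first two transmissions.
drops = iter([False, False, True])
print(send_reliably(b"READ block 7", lambda pkt: next(drops)))  # → 3
```

Each feature added this way is another chunk of TCP rewritten, usually less well tested; hence the eventual "let's just switch to TCP".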

What to do? Dunno...

An RFC Is Not A Standard, No Matter What the Salesman Said

We've tried a dozen ways to get the message across: Lots of RFCs are not standards.
And we've seen "conformant to RFC-Nonsense" appear in marketing literature. Again, again, and again.
People don't read labels. In fact, they don't read RFCs. They just react to the label's prestige.

And we continue to let things be published with the RFC label "if it's interesting to know".

Conflict. This was raised (for the umpteenth time) in the POISSON group, and (for the umpteenth time) saw no consensus reached even on whether there is a problem. Grr.

And the room is still full...

One novelty was the spectacle of several multi-hundred-seat rooms filled to overflowing at the same time. We were about 2000 attendees this time, not much up from last time, but somehow, the crowding just felt .... crowded.

Lots of completely stupid ideas were suggested to alleviate this (the next meeting is in Minneapolis, before the snow melts, which tests one theory for reducing attendance), but no real solution was found. We need bigger hotels, or smaller IETFs.

Bad choice.

....but the show must go on

There'll be an IETF after this. And one after that. And so on.

The documents will be produced, the working groups will discuss, and the result will continue to have an effect on the world. Life is good.

See you next time!