Windy City Blues?

or, what I saw at the 42nd IETF

Harald Tveit Alvestrand
Harald.Alvestrand@MaXware.no
who went there as area director of the Operations & Management area

We've found a good place to stay.

It's called the Sheraton Chicago, it's got room for 2000 IETFers, it doesn't freak out at the T-shirts and sandals, it's labyrinthine enough that we feel cozy - it WORKS.

It's strange that at a time when there is hardly a day the Internet doesn't form part of a front-page headline somewhere in the world, when conventions are gathering 'em in by the tens of thousands, when every pundit that ever pundited is proclaiming from the rooftops the changes about to be brought by the Internet - the IETF is growing calmer, less contentious, less dominated by large egos and unsolvable conflicts, than I can ever remember seeing it.

Oh, there are battles aplenty - delete this, change that, move that paradigm, that one can never work, go back to square one and start over - but it seems as if the arrival of Big Money on stage has, strangely enough, created a calmer work environment for the standards activity.

Of course, the silver lining may have a dark cloud - are we afraid to call out bad technology when we see it because it's politically or product-wise expedient? do we concentrate so much on getting product out that the quality suffers? or even - horror of horrors - is the commercial deployment setting our standards activities' agendas?

Well - it may seem like anathema to some - but it may be a case of reality setting in. In many cases, the deciding factor may be "if you can't make someone buy it - why bother?" - a protocol has got to be the basis for something marketable in order to be worth standardizing. And with the time pressures felt by all competent network engineers these days, something has to be really important to make them spend serious time on standards activity.

Directory? I'll have to look that up.....

Despite the hype over LDAP, and despite the sometimes massive investments in making sure corporate information can be reached, there still appears to be a disconnect between the original 1988 vision of "the global directory" and current reality.

This was perhaps most visible in the LSD working group, which is supposed to be about making sure global directory deployment and interconnection can happen. Not so much in what was said as in what was not said; it seems clear that we don't know how to get there from here.

My personal and entirely impolitic opinion lays the blame for the current mess squarely with the ITU, which standardized the X.500 protocol back in 1988; they created a protocol whose basic assumption was a single, world-wide naming scheme, and then failed to follow it up with a registration authority and name lookup service that could in fact turn that fiction into reality. Thus our current mess, where the right answer to the question "can my directory client read the data in your directory service as well as in mine?" is, sadly, "I don't know".

There are ways around this; one, seemingly gathering adherents, called the "DC scheme" (for Domain Component), would replace the ITU-described "country/company" scheme with one based on the Domain Name System, allowing anyone who has successfully registered and defended a domain name to also have a unique LDAP/X.500 name to hang a directory under.
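For the concretely minded, here's a toy sketch (in Python, purely illustrative) of the mapping the DC scheme proposes - one "dc" component per domain label:

    # Toy sketch: map a registered domain name onto a "DC scheme"
    # LDAP/X.500 distinguished name, one domainComponent per label.
    def domain_to_dn(domain: str) -> str:
        labels = domain.strip(".").split(".")
        return ",".join("dc=" + label for label in labels)

    print(domain_to_dn("maxware.no"))   # -> dc=maxware,dc=no
    print(domain_to_dn("ietf.org"))     # -> dc=ietf,dc=org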

This is not the only problem with interconnecting directories - indeed, the most persistent one will probably be information managers' need to convince themselves about the value of publishing any information - but it's one of the stumbling blocks. Let's remove that, and the way forward becomes a little smoother.

Domain names: Getting some clothes for the Emperor

In some ways, the domain name system is the heart of the Internet.

With its short, sometimes memorable strings of characters, it enables us to uniquely name almost any Net-connected entity, and use that name as a handle for getting at the services that entity provides.
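To make "handle" concrete: the whole trick boils down to one lookup, as in this two-line sketch in present-day Python (the name is real; the address it prints will vary):

    # A domain name is all you need to reach the service behind it.
    import socket
    print(socket.gethostbyname("www.ietf.org"))   # name -> IP address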

As the Net grew, it was obvious to all that there would be battles over names; in a world where trademark infringement suits can be the whole livelihood of a lawyer, it seems obvious that nothing memorable will escape attempts at ownership.

My extremely personal capsule summary of the DNS battles, at times seemingly dominating the Net.waves as one actor after another strutted their strongly presented viewpoints across the stage, in the form of a dialogue:

With the CORE process on hold for a year, it looks as if the emperor will be getting a new wardrobe Real Soon Now. A global discussion, conducted over both electronic media and face-to-face meetings, with several rounds of drafts for the bylaws of the "new IANA", seems to be resulting in the formation of a legal entity called "IANA", in which the community can vest such powers as the handing out of domain names or the administration of the incredible shrinking IPv4 address space, with "adequate" representation from the various sectors of the Internet community.

The IAB's blessing of the current IANA plan as a basis for going forward received a standing ovation at the IAB Open Meeting in Chicago (the closest thing the IETF has to a plenary meeting); a stunningly high 25% of the attendees had actually taken time to read some version of the draft, and the single dissenting voice went no further than to encourage people to read it for themselves before deciding.

There's been no lack of other dissenting voices - people like Jim Fleming, Gordon Cook and Bob Allisat have at times seemed to have no goal in life higher than that of injecting a maximum amount of noise into the process, their voices growing shriller and shriller as the various deadlines approached, and it was clear that their imprecations were falling on deaf ears indeed in the technical community. It is a rumour, not a known fact, that the participation of Jay Fenello at various of the face-to-face meetings was a conscious ploy financed by NSI, the current owner of the .com monopoly, to maximize the chances of the process not reaching consensus - but the very fact that such rumours are flying tells us that the game we are playing is one far, far beyond the academic experiments and ad-hoc connectivity of the Internet of yesteryear.

The domain name conflict is not a technical matter - the only contribution the technical side of the IETF has made to the process is the observation that unbounded growth in the number of top-level domain names is probably neither good for the performance of the network nor an example of good human factors design - but the IETF's participants are not technical automata; they want their networks to work, in the ways that maximize the freedoms they have come to love, and a proper (to their minds) resolution of the IANA issue is important to the continued working of those networks.

Besides, these people, whose slogan is "We reject kings, presidents and voting. We believe in rough consensus and running code", do not ever want to be ruled. Coordinated, yes; ruled, no.

Network Management: Secure at last?

warning - history lesson following. New stuff is further down.

Back in ancient history, there used to be a vision that "everything will be manageable over the network" - both configuring devices and finding out what a device is doing.

The monitoring vision is becoming more and more realistic as time goes on, carried by that old workhorse of management called SNMP, Simple Network Management Protocol (version 1). But the configuration and control vision has lagged far behind, and a lot of that lagging has centered around a single issue: Security.

SNMP management is usually in-band signalling, meaning that management packets traverse the same wires as the data streams they monitor; as many of us know, the difficulty of recording and faking packets on the Internet, given physical access to the wires they cross, ranges from the fairly simple to the truly trivial.
No network engineer worth his salt would even consider making it possible to reconfigure his devices without at least some protection against unauthorized tampering - whether the result of malice or simple ignorance; the "password-like" security in SNMPv1 - a cleartext community string carried in every packet - was simply too weak for managers to trust.
(No matter what anyone claims, downtime caused by ignorance far exceeds the downtime caused by hackers; stupidity is far more common in this world than wickedness!)
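To make the weakness concrete, here is what an SNMPv1 request looks like from the programmer's side - a sketch using the present-day pysnmp library (API as in its classic 4.x releases; host and community are placeholders). The point: the "password" travels in cleartext inside every single packet, ready for anyone on the wire to read and replay.

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # An SNMPv1 GET of sysDescr; the community string "public" is, in
    # effect, a password sent unencrypted with every request.
    errorIndication, errorStatus, errorIndex, varBinds = next(
        getCmd(SnmpEngine(),
               CommunityData('public', mpModel=0),   # mpModel=0 selects SNMPv1
               UdpTransportTarget(('router.example.com', 161)),  # placeholder
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))
    for varBind in varBinds:
        print(varBind)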

In 1993, an attempt was made to correct this by the issuing of "SNMP version 2", a wholesale overhaul of the protocol that also added support for security, done with enough cunning that no hacker, were he ever so smart, could gain illegal access to a correctly configured device.

Unfortunately, neither could the managers.

The "party-based access control model" failed dismally where it counts - in real networks.
Engineers tried to configure it for the access they wanted, tried again - and gave up, going back to the "tried and tested" methods from SNMPv1.

The debate that followed was long, acrimonious and finally productive; in January 1998, the RFCs defining "SNMP version 3" were issued.

At this IETF meeting, no great changes to SNMPv3 were proposed; indeed, there were hardly any small changes suggested. An "interoperability event" in July had proved what we all hoped: The protocols were clear, well-defined and interoperable; real products from real vendors were tried, and in the words of one engineer: "for the most part, it simply worked".

More testing is required to make sure the documents satisfy all the criteria for the IETF official blessing of the standards as "good enough that widespread deployment makes sense" (confusingly called "Draft Standard"), but all involved parties seemed to agree that this should be possible before the end of the year.
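For the curious, the flavor of the user-based security that made the difference: every message carries a keyed checksum (an HMAC, truncated to 96 bits) that the receiver recomputes, verifying both the sender's identity and the message's integrity. A minimal sketch in Python - key derivation and localization are omitted, and the key shown is a stand-in:

    import hmac, hashlib

    # HMAC-MD5-96 as used by SNMPv3's user-based security model: HMAC
    # over the whole message (with its authentication field zeroed out),
    # keeping the first 12 octets. HMAC-SHA-96 works the same way.
    def hmac_md5_96(localized_key: bytes, whole_message: bytes) -> bytes:
        return hmac.new(localized_key, whole_message, hashlib.md5).digest()[:12]

    tag = hmac_md5_96(b"sixteen-byte-key", b"...serialized SNMPv3 message...")
    print(tag.hex())   # 12-byte authentication tag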

.....but what are we managing, really?

One of the more interesting meetings of the week was a dinner meeting of the so-called "Internet Research Task Force Working Group on Services Management". The IRTF is an organization loosely parallel to the IETF, sharing the Internet Architecture Board, but focused (as much as it is focused at all) on looking into problems that are not yet well enough understood to make standards for. Its working groups are often "by invitation" rather than open, on the theory that one can get a better working environment for fresh ideas in a smaller context. It may work...

The one point that became abundantly clear during the dinner:

We have no idea what we are talking about.

We think we understand something about managing boxes; we've done that for years.
We also think we know something about running networks and offering services; we've done that for years too.
But when it comes to abstracting the working knowledge called "running networks" or "managing network-based services", our ability is somewhere in the league of "I know what I think but I don't know how to say it" and "Let's think of a word and see if we can use it for something".

Nor is the outside world doing any better; recent years have seen buzzwords like "Web-based management", "Directory Enabled Networks" and so on come and go, some of them with more adherents than others, but all sharing the property of not making terribly big inroads into how we actually manage networks.

Can it be that we really don't know what we're doing?

IPv6: Getting ready for the next millennium

OK, the realization seems to be setting in: there'll be little if any deployment of IP version 6 in production networks this millennium. In addition to the truly difficult task of integrating these old-but-still-new concepts into production-quality operating systems - and into Windows NT, the fifth edition of which is rumoured to contain upwards of forty million lines of code without IPv6 - the managers of anything production-like in computing are becoming so totally focused on the Millennium Bug that anything that isn't either 100% business-critical or promises to solve a Y2K problem is simply set aside as "do that later".
And, as we know, and as the report of the 2000 working group (almost ready to publish now) clearly shows, the Internet protocol suite does not have a Y2K problem that IPv6 could have been the answer to.

But the protocol is running, and running well, in the 6bone, now spread to 30-odd countries all over the world.

I foresee the next few years as years of experimentation, testing, piloting and "academic deployment", much as the current Internet got deployed in the eighties and early nineties. But the next couple of years should prove interesting, as the network managers currently staring down the IPv4 address shortage from behind their NAT bastions get more and more experience with the shortfalls and pitfalls of that particular method of "solving" the addressing problem.

The IAB is hard at work on a document (draft-iab-nat-implications-01.txt) describing the implications that NAT has for the Internet architecture; anyone interested is encouraged to read this.

Interestingly, some of the techniques now suggested for interoperation between IPv4 and IPv6 networks actually come straight out of technology developed for NATs; the theory seems to be "if it's usable for NATs, why not for IPv6?"

Or, to put it another way: "What's the difference between an IPv4/IPv6 gateway and a NAT box?" "The gateway gives hope for the future".
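For those who have never peeked inside a NAT box, the core of the trick is nothing but a translation table, as in this toy Python sketch (all addresses and ports are made up); an IPv4/IPv6 gateway plays the same game across address families:

    # Toy NAT: rewrite private source addresses to one shared public
    # address, remembering the mapping so replies can be translated back.
    PUBLIC_ADDR = "192.0.2.1"      # stand-in for the NAT's public side

    nat_table = {}                 # (private_addr, private_port) -> public_port
    next_port = 40000

    def translate_outbound(src_addr: str, src_port: int):
        global next_port
        key = (src_addr, src_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_ADDR, nat_table[key]

    print(translate_outbound("10.0.0.5", 1234))   # -> ('192.0.2.1', 40000)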

We'll see. IPv6 isn't there yet, and won't be ready for prime time this year. But it's very far from being dead.

Resource Reservation: The Telcos Are Coming! (so what?)

One thing that nobody seems to doubt is that integration between voice and data networks will happen.

It seems logical on the face of it - the physical plant, the fiber and copper, is the same, many of the same companies are involved in both, and some early adopters of voice-over-IP technology seem to prove that it's possible to produce voice calls over Internet technology much cheaper than the traditional telcos are selling it.

What "everyone" seems to regard as the Big Sticking Point is resource reservation - the ability to set aside some of the Internet's vast, overcrowded sea of bandwidth for the exclusive usage of a voice connection.
The recent RSVP protocol was supposed to be the foundation block for such work; yet there were also naysayers and doubters who cast aspersions upon the basic design, with words like "non-scalability" and "wrong cost-benefit structure". (The latter is a fancy way to say that the ones who have to make the investments to make it work aren't the ones who get the customers' money. A not illogical argument.)

RSVP gave us a language to ask for bandwidth across the Net, but did not answer the other question: who should get a "yes" answer, and why?
At the moment, RSVP is fully usable, and being used, inside corporate networks and other areas where a single policy, configured in routers and backed up with management sanctions, is enforceable. But if we are ever going to see RSVP deployed across organizational boundaries, or between ISPs, we need more than that.
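The decision itself is conceptually simple - a policy check plus a capacity check - as in this toy sketch (all numbers invented); the hard part is agreeing on who gets to configure the policy half across organizational boundaries:

    # Toy admission control for RSVP-style reservations on one link.
    LINK_CAPACITY_KBPS = 1544      # say, a T1 - purely illustrative
    reserved_kbps = 0

    def admit(request_kbps: int, requester_allowed: bool) -> bool:
        global reserved_kbps
        if not requester_allowed:                          # the policy question
            return False
        if reserved_kbps + request_kbps > LINK_CAPACITY_KBPS:
            return False                                   # the capacity question
        reserved_kbps += request_kbps
        return True

    print(admit(64, requester_allowed=True))   # one 64 kbps voice call -> True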

One suggestion has been to standardize the "how do I ask the question" protocol - moving the decision one step back, so that the decision-making machinery can be fiddled with and modified without touching on the business-critical core routers of the network. This activity, called COPS, developed by the RAP working group, is almost ready to go to Proposed Standard status.

Another suggestion, not directly aimed at this issue, but more at the burgeoning telephone/IP gateway market, has been to integrate the well-known (but extremely complex) ITU Signalling System #7 (SS7 for short) into the Internet's call control units, effectively using the Internet both as the data plane and the control plane for the telephone network.
(These terms, like Policy Enforcement Point and Policy Decision Point, are part of a rich vocabulary that the ITU has developed to describe networks, growing out of a telco substrate. With the influx of telco people into the IETF, some of these words will probably make their way into the IETF vocabulary as time goes on.)

A somewhat contrary argument to the RSVP developments is that admission control (plus discrimination between traffic classes) is all that is needed to provide "reservation-style" service; as one telco person put it: "If my high-priority traffic is less than 20% of my total capacity, nobody can tell that I'm not doing reservations for it".

Given the relative growth rate of data and voice over the combined Internet/telco network, this argument indicates that the Internet could swallow the whole telephone network within five years, and nobody would even know the difference.
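A back-of-envelope version of that argument, with growth rates that are pure assumption on my part: start voice and data at equal volume, let data double yearly while voice grows ten percent, and the voice share sinks below the magic 20% all by itself:

    # Hypothetical growth rates, for illustration only.
    data, voice = 1.0, 1.0
    for year in range(1, 6):
        data *= 2.0                # assume data traffic doubles yearly
        voice *= 1.1               # assume voice grows 10% yearly
        share = voice / (data + voice)
        print("year %d: voice is %.0f%% of total" % (year, share * 100))
    # By year 3, voice is under 20% of the combined load.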

The WG attempting to put the pieces together for the non-reservation approach is called "DiffServ", for "Differentiated Services", and is also said to be "real close" to producing a document.
Some of the results of its deliberations were apparently burned into chips several months ago, kind of limiting the field of reasonable outcomes a tiny little bit - things move fast in this area, and time for meditative discourse is sometimes at a premium.
So be it.

Multicast: Light at the end of the tunnel (or is that an oncoming train?)

It was strange to see a TV commercial from UUNET touting its "advanced network services, including....multicast", and then to walk down into the IETF multicast discussion groups, both in hallways and in meetings, and find that we still don't know how to deploy multicast on a global scale.

Like RSVP, it's a technology that scales easily into the thousands - but in some scenarios, like the use of multicast for small teleconference-style applications, the required scale has several more digits: millions, not thousands.

The MBONE, with its thousands of users, hundreds of attached networks, and a "public broadcast list" that is still presented in a single window with a scrollbar in the Net's most popular "radio dial" tool (SDR), is chugging along, trying to get to versions of its routing protocols that scale to its current size and leave some room for expansion.
Its operations group, MBONED, is also being used as a forum to work on the allocation of multicast addresses - another topic that needs attention before we're "ready to fly".
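(For reference, IPv4 multicast lives in the old "class D" space, 224.0.0.0 through 239.255.255.255; the open problem is allocating within it, not finding it. A two-line Python check:)

    import ipaddress
    # True: 224.2.127.254 (the classic SDR announcement address) is multicast.
    print(ipaddress.ip_address("224.2.127.254") in ipaddress.ip_network("224.0.0.0/4"))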

Several parties have suggested changing the basic definitions of multicast; it is not yet certain whether this is the Right Thing or the Wrong Thing - if the change is needed, invalidating the current work is a price that has to be paid - but if it is not needed, we'd rather not do it.....

Matters shamefully neglected or wilfully ignored.....

This report is already quite long, but there are many things deserving of mention that haven't been mentioned.

One man can see only so much in such a place, and even of that little this man has seen, even less manages to find its way into a note like this. Yet, I hope this has informed some, and amused others.

See you all in Orlando, for the 43rd IETF, December 1998!

 


Harald Tveit Alvestrand Harald.Alvestrand@maxware.no