Subject: SIP in higher education
Notes from 1-11-2007 call with Henning Schulzrinne
- From: Garret Yoshimi <>
- To: Internet2 VoIP SIG <>, "Internet2 SIP.edu" <>
- Subject: Notes from 1-11-2007 call with Henning Schulzrinne
- Date: Sat, 20 Jan 2007 11:04:36 -1000
Thanks again to Henning Schulzrinne for presenting on our 1/11/2007 call. This was a very well attended call and a great one to start the new year. Notes from the call are included below (thanks to Jeff Kuure). Henning's slides are posted at http://www.cs.columbia.edu/~hgs/papers/2007/internet2.ppt .
Garret, Walt, Candace & Dennis
- - - - -
Internet2 VoIP SIG & SIP.edu Conference Call, January 11, 2007
Dennis Baron, MIT
Steve Blair, University of Pennsylvania
Chris Caswell, MCNC
Alan Crosswell, Columbia
Paul Dial, Internet2
Kyle Haefner, Colorado State
Candace Holman, Harvard
Deke Kassabian, University of Pennsylvania
Walt Magnussen, Texas A&M
Michelle Markovich, University of Pennsylvania
Christine Moe, Stanford
Christian Schlatter, UNC
Henning Schulzrinne, Columbia
James Stormes, Cisco Systems
Chris Trown, University of Oregon
Rob Tuck, Rutgers University
Jonathan Tyman, Internet2
Roger Will, Ford Motor Company
Garret Yoshimi, University of Hawaii
(and, based on the edial bridge stats, another dozen or so other participants)
Today's call begins with Walt introducing guest speaker Dr. Henning Schulzrinne. Dr. Schulzrinne is the head of the department of computer science at Columbia University, as well as the director of the IRT lab. He is involved with several Internet standards organizations and will provide an overview of VoIP and SIP in terms of the IETF.
Dr. Schulzrinne begins by stating that he feels we are approximately ten years into the modern phase of VoIP. There were experimental predecessors in the early 1970s, but the current second-generation of VoIP began roughly ten years ago and is still rapidly maturing. Ten years is a long time, but VoIP is more complicated and depends on more protocols than the Web, for example.
Henning believes that the current situation is the third phase of the second generation of VoIP. Starting in approximately 1996, VoIP systems were available but never claimed to be equivalent to telephones. They were used by hobbyists or to bypass tolls, but were not reliable, had undesirable amounts of lag or echo, and were not robust. Users tired of demo quality software and lack of features. The second phase concentrated on feature parity with PBX systems and a fair amount of interest in service creation. The third phase, which is just beginning, focuses on adding new features such as presence and location-based service, though most of the effort of the IETF is still concentrated on feature parity with PSTN systems. Currently VoIP systems are fairly close to PBX systems in terms of features.
A question is asked about the more complicated shared-call features which are available on commercial IP Centrex platforms but are difficult to implement elsewhere and don't always work with proxies. Henning says that this is exactly the set of features that are hard to implement, as traditionally the PBX was able to exert media control in these situations. Henning feels that there should be some investigation into what type of workflows should be supported, and if these shared-line models could be better implemented with presence and conferencing features. Mimicking the old behaviors exactly is difficult.
Henning begins talking about the current IETF working groups. In the early 1990s, there was one group working with media, the AVT group. Now there is an entire area known as Real Time Applications involved with VoIP-related projects. The AVT group deals with the Real-time Transport Protocol (RTP), while the MMUSIC group handles the Session Description Protocol (SDP). The various IETF groups all converge with the SIP and SIPPING groups, which handle the protocol definition and usage requirements, respectively. Other groups generally fall into two categories, dealing with either protocols or architecture.
In the SIP space, unlike the rest of the Internet, there are approximately 15 core RFCs necessary to implement most applications such as proxies and user agents. The SIP, SIPPING, and SIMPLE working groups submit a large number of Internet-Drafts each year. Henning discusses a slide which shows the number of draft-ietf and draft-personal 00 drafts submitted on a yearly basis by these groups. The SIPPING group still generates dozens of drafts each year, far exceeding any other IETF working group, with the SIP group not far behind. The SIMPLE group is leveling off, an indication that the work on presence is mostly done. Most drafts never reach RFC status. There have been 44 SIP-related RFCs published.
Current activity of the SIP working group focuses on a number of areas, some of which are important and others more narrowly focused. There is a guide published online called the Hitchhiker's Guide to SIP, which provides a quick overview of SIP specifications grouped by category. In terms of infrastructure, there is work being done on GRUUs (Globally Routable User Agent URIs). Previously there were two types of URIs, one permanent and the other tied to an IP address; a GRUU is generated automatically and used for things like NAT traversal and registrations. Other work focuses on URI lists, XCAP for configuration of devices, and a MIB for SIP systems. Services under development deal with rejecting anonymous requests, a consent framework for calls, location conveyance, and session policy. Security topics include work on end-to-middle security, certificates, SAML, and SIPS, the URI scheme for secured SIP. Finally, as with most working groups, dealing with NAT is a major issue.
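As a rough illustration of the GRUU idea, a registrar can mint a globally routable URI by attaching a stable instance identifier to the user's address-of-record through the "gr" URI parameter. The sketch below is hypothetical: the helper name is ours, and using a fresh UUID as the instance identifier is an illustrative shortcut, not the actual registrar procedure.

```python
import uuid

def mint_pub_gruu(aor: str) -> str:
    """Attach a 'gr' parameter carrying an instance ID to an AOR,
    yielding a GRUU-shaped URI (illustrative sketch only)."""
    instance = uuid.uuid4()  # stands in for the device's instance ID
    return f"{aor};gr=urn:uuid:{instance}"

# e.g. mint_pub_gruu("sip:alice@example.edu")
#      -> "sip:alice@example.edu;gr=urn:uuid:..."
```

The point of the construction is that the resulting URI routes through the registrar to one specific device, unlike the bare AOR (which may fork to many devices) or an IP-based contact (which breaks behind NAT).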
The SIPPING group is focused on three primary areas. Under the policy banner they are working on a media policy as well as a definition for session border controllers. For services they are investigating call transfer, and providing service examples and call-flow documents. Additionally, they are working on text-over-IP, which differs from instant messaging or SMS as it is character-by-character, displayed as the characters are typed. For hearing impaired users this provides a more conversational interface. Transcoding, such as from text to speech, is also being investigated. Finally, testing areas include work with IPv6, work with various race conditions, and torture tests for commercial and open-source proxies. Test events called SIPit, sponsored by the SIP Forum, take place every six months, with no spectacular failures but no complete successes. In particular, bogus and legal but odd requests are not handled well. Overload work is also being done, which is more of a concern for carriers.
Henning mentions that he personally is heavily involved with restructuring emergency communication. Problems with emergency communication were usually framed as VoIP failing to work with the existing 911 infrastructure, but in the wake of Hurricane Katrina the existing 911 system showed itself to be out-of-date and expensive to maintain. Billions of dollars have been spent integrating wireless systems into the existing 911 system with only partial success. The current notion is that extending the current system is not worth it and a new system should be built from scratch. This is being funded through a variety of means. A large grant from the Department of Transportation will lay the groundwork for a new 911 system. The second part is more localized and focuses on deployment. At least five states are in the process of generating RFPs for new systems.
The IETF ECRIT working group is focused on the problem of location resolution: given either a street address or a latitude/longitude position, to which PSAP should the call be routed? ECRIT is developing a protocol to do this sort of location mapping. Other work focuses on discovering dial strings and how to identify emergency calls, and there are still issues with communication between agencies and interoperability. The most difficult aspect is actually finding locations for devices. It would be helpful if authorized parties could reliably locate devices, but there is currently no real consensus on the matter.
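The mapping question ECRIT tackles can be pictured as a lookup from a caller's position into a table of PSAP service boundaries. The toy sketch below is purely illustrative: the PSAP names and rectangles are made up, and real service boundaries are polygons served by a mapping protocol rather than a local table.

```python
# Hypothetical coverage table: (PSAP name, (min_lat, min_lon, max_lat, max_lon))
PSAPS = [
    ("Midtown PSAP", (40.70, -74.05, 40.85, -73.90)),  # more specific, listed first
    ("County PSAP",  (40.00, -75.00, 41.00, -73.50)),  # broader fallback area
]

def find_psap(lat: float, lon: float):
    """Return the first PSAP whose service rectangle contains the
    caller's position, or None if no PSAP covers it."""
    for name, (lat0, lon0, lat1, lon1) in PSAPS:
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name
    return None
```

Even this toy version shows why the hard part is upstream of the lookup: the answer is only as good as the location fed into it.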
The next working group of interest is SPEERMINT, which focuses on peering and interconnection between different providers. Henning says that at first many SIP proponents were naive and thought that DNS lookups would solve their problems. This is still possible, but some carriers need a more controlled environment or don't want to forward calls for everyone. The mapping problem leverages ENUM, either public or private. One model of how this might be solved involves the providers finding each other, exchanging policies, discovering the endpoints, and determining the media path, which can be problematic with border controllers and voice-only networks.
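The ENUM mapping itself is mechanical: an E.164 number is stripped to its digits, reversed, dot-separated, and suffixed with the ENUM root zone, producing a domain name under which NAPTR records can then be queried. A minimal sketch (the function name is ours, and private ENUM trees substitute their own root for e164.arpa):

```python
def e164_to_enum(number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 number to its ENUM lookup domain."""
    # Keep only digits: "+1-212-555-1234" -> "12125551234"
    digits = [c for c in number if c.isdigit()]
    # Reverse the digits, dot-separate, and append the root zone
    return ".".join(reversed(digits)) + "." + suffix

# e164_to_enum("+1-212-555-1234") -> "4.3.2.1.5.5.5.2.1.2.1.e164.arpa"
```

A DNS NAPTR query against the resulting name then yields the SIP URI (or other contact) for that number, which is where the policy questions Henning raises come in.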
The AVT working group deals with RTP and SRTP. The main issues involve reporting and extended QoS feedback, codec control mechanisms, congestion control, and unequal error protection. The most controversial is SRTP key management, which several proposals address.
The MMUSIC group is the home of SDP. There was an effort to build a next-generation SDP based on XML, with more expressive description of the end system. There has been no great progress to adopt this, and some of the features have migrated into the older SDP. The other major issue is NAT traversal.
The newest SIP working group will deal with specifying services. This will create more "bandwidth" in the IETF, with more meeting time and more chair attention. Henning is not sure when this will happen.
One other interesting development is the peer-to-peer SIP effort. This does not have anything to do with file sharing, but is driven by the idea of replacing DNS and, to some extent, SIP registrars. This requires a distributed hash table, a protocol to maintain that hash table, and a protocol for nodes that aren't necessarily participating as first-class citizens. This infrastructure could be used for many things besides SIP, but for political purposes this is not being emphasized. The effort was initially motivated by a desire for small stand-alone systems, but is also useful for forming ad-hoc networks at conferences or sporting events without needing a proxy. Another development is a published, non-proprietary version of Skype, which would operate as a semi-public network; unlike Skype, however, more than one could exist at a time. There are still issues of trust and reliability. There are three basic ways to accomplish this: full distribution similar to Bonjour, a DHT using SIP as its maintenance protocol, or an external DHT.
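The distributed-hash-table lookup at the heart of P2P SIP can be sketched with consistent hashing: node names and SIP addresses-of-record are hashed onto one identifier ring, and each AOR's registration lives on the first node whose ID follows its own. This toy ring is only a sketch (the class and function names are ours, and real designs add replication, security, and churn handling):

```python
import hashlib
from bisect import bisect_right

def ring_id(name: str) -> int:
    """Hash a node name or SIP AOR onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ToyRing:
    """Consistent-hashing ring: a key is stored on the first node
    whose ID follows the key's ID, wrapping around at the top."""
    def __init__(self, nodes):
        self.ring = sorted((ring_id(n), n) for n in nodes)

    def lookup(self, aor: str) -> str:
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, ring_id(aor)) % len(self.ring)
        return self.ring[idx][1]
```

Because every peer computes the same hashes, any node can find where an AOR is registered without a central registrar, which is exactly the DNS-and-registrar replacement the effort is after.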
Henning mentions that there are some open issues, particularly NAT, which he feels changes the nature of the Internet. There is no longer a single global address space, only locally significant ones. NAT behavior is hard to discover - it's difficult to know what part of an incoming packet is used for sourcing, how long binding is maintained, and so on. Connections can be lost unless one disables silence suppression, and working around this by sending SIP REGISTERs every few seconds is undesirable. Additionally, NAT may do one thing with 10 connections and something totally different with 20. There is a working group named BEHAVE which aims to define NAT behavior, but there are millions of existing devices which will never be updated.
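The binding-lifetime problem can be made concrete with a toy NAT table: a mapping is created or refreshed by outbound traffic and silently stops working once idle longer than the (unknown, vendor-specific) timeout, which is why clients resort to frequent keepalives. A minimal sketch, with the class name, addresses, and timeout invented for illustration:

```python
class ToyNat:
    """Toy binding table illustrating silent expiry of idle mappings."""
    def __init__(self, timeout: float):
        self.timeout = timeout          # idle seconds before a binding dies
        self.bindings = {}              # internal (ip, port) -> last-seen time

    def outbound(self, src, now: float):
        # Outgoing traffic creates or refreshes the binding
        self.bindings[src] = now

    def inbound_ok(self, dst, now: float) -> bool:
        # Incoming traffic is delivered only while the binding is fresh
        last = self.bindings.get(dst)
        return last is not None and now - last <= self.timeout
```

Since the endpoint cannot see the timeout, it must either probe for it or send keepalives at a conservatively short interval, which is the undesirable every-few-seconds REGISTER traffic mentioned above.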
Another issue for Henning is the user interface design and configuration issues with SIP clients. Inconsistent terminology is used and unexpected behavior occurs, leading to many problems. There is an effort to automate configuration, but there are many sources of information necessary for configuration. Ideally this would allow no-touch configuration of thousands of devices, but all of these are proprietary and this is difficult to accomplish.
To summarize the state of VoIP, Henning says that he feels that there is basic interoperability and reasonably good performance after any configuration hassles are solved. Advanced features are less polished. VoIP is also not terribly reliable at the Internet level. BGP plays a large role in this, as do NATs. Another problem is that to the user, all problems seem the same regardless of what is actually causing the problem. Diagnostic reporting is non-existent in hard phones and not much better in soft clients. Despite all this, Henning feels that the core standards for media and signaling are in place, and real, usable systems can be built. There are also several decent enterprise-level server implementations. Soft clients are still lacking security, implement only a small part of SIP, suffer from long latency, and all clients have problems with NAT. There is also an overall priority placed on voice traffic over any new services.
There are also problems with the IETF, as their model does not work well for large efforts, such as VoIP. There are a small number of people who are spread too thin, leading to relatively simple things taking up to five years to accomplish. Henning sees this work not as a simple question of protocols but a systems engineering effort as dozens of policies, services and protocols need to interact.
Christine Moe notes the not particularly rosy picture Henning has painted of the IETF and asks about its future. Henning feels that there are essentially two possible outcomes. One option is that the organization muddles through in basically the same manner as now, which is the likely scenario. The other possibility is that the organization devolves into some other form. Henning says that this isn't entirely a problem with the IETF, as it is in a position where its influence as a change agent is diminished and it is becoming more reactive than proactive. He blames this in part on the "ossification" of the Internet as a whole. He's most familiar with the IETF, which is a large volunteer organization with weak leadership, but there hasn't been any real change in the basic protocol machinery of the Internet in 20 years despite its being seen as new and full of exciting developments.