This IMS you hear so much about these days originated as part of a set of standards for 3G mobile telephones initiated in 1999 by the 3rd Generation Partnership Project (3GPP). In the second quarter of 2004, 3GPP Release 5 included a specification for an IP Multimedia Subsystem (IMS) designed to improve data services on the 3G phone network.
So what possible relevance could a wireless specification have for optical networking? Plenty, it turns out, as IMS and optical networking are becoming increasingly aligned at the strategic level.
As we all know, the optical market soared and slumped with the IT boom and bust that straddled the turn of the millennium. The first signs of recovery in the market began to appear only in 2004.
In a previous issue of Lightwave (“Optical Equipment Market on the Comeback Trail,” June 2006, page 33), Shin Umeda, principal analyst at Dell’Oro Group, wrote, “Demand for optical equipment accelerated in 2005, as service providers around the world changed their approach to investing in their fiber-optic networks. The investment mindset changed from a tactical, business sustenance approach to a long-term strategic view that valued services expansion. This led service providers to direct capital outlays in 2005 to optical equipment, lifting sales in all regions of the world and across all equipment categories. The total optical equipment market increased 22% and surpassed $8 billion.”
The key point here is the shift in service providers’ priorities toward a long-term strategic focus on services expansion. This is where IMS comes in.
IMS is popularly associated with the convergence of fixed and mobile phone services to create a unified communications environment. The important thing to remember, however, is that IMS was not designed to enable any particular new application, but rather to make it easier to add new applications as they arise (i.e., to be an “applications enabler”). In the past, new telephony applications were rare, in part because they typically had to be implemented from the ground up on a dedicated, service-specific infrastructure.
Unlike the telephone network, IP networks offer so much flexibility in handling data that the Internet has spawned a host of new applications and the possibility of endless future ones: video, voice, and printed word in many forms and permutations. So what the carriers are looking for is a whole new multimedia subsystem flexible enough to support any number of future applications without needing to rebuild the carrier network each time.
The aforementioned 3GPP Release 5 architecture was designed around IP and defined a “control layer” of Call Session Control Functions (CSCFs; see “Basic IMS Architecture”) that was distinct from the “application” or “service layer,” into which any future applications could be added without making changes to the control layer.
Although IMS was originally conceived for the 3G network, the 3GPP had the wisdom to define a control layer that was independent not only of the service layer but also of the “transport layer,” so it would work equally well for any fixed or mobile network with packet-switching functions. This is what would lead to the promise of fixed/mobile convergence.
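To make that layering concrete, here is a minimal Python sketch of the idea. It is an illustration only: the class and method names are invented, not 3GPP terminology, and a real control layer does far more. The point is simply that an application-layer service talks only to the control layer, and the control layer talks only to an abstract transport, so either side can be extended or swapped without touching the other.

# Illustrative sketch only; class and method names are invented, not 3GPP terms.
from typing import Protocol

class Transport(Protocol):
    """Any packet network: 3G radio, DSL, cable, PON, and so on."""
    def deliver(self, session_id: str, payload: bytes) -> None: ...

class ControlLayer:
    """Stands in for the CSCFs: sets up sessions over whatever transport it is handed."""
    def __init__(self, transport: Transport) -> None:
        self.transport = transport  # media later flows over this, whatever it is

    def establish_session(self, caller: str, callee: str) -> str:
        # Authentication, routing, and QoS policy decisions would happen here.
        return f"{caller}->{callee}"

class VideoShareService:
    """An application-layer service: it uses the control layer and never sees the transport."""
    def __init__(self, control: ControlLayer) -> None:
        self.control = control

    def start(self, caller: str, callee: str) -> str:
        return self.control.establish_session(caller, callee)

A new service like VideoShareService slots in, or the transport is swapped from 3G radio to DSL, without any change to ControlLayer; that separation is the essence of the Release 5 design.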
Basically, the Release 5 architecture defined a way to build a network that would allow any number of new services to be added for its subscribers (all the sorts of services, like Skype, Google Earth, and video conferencing, that have sprung up on the Internet, and more), but with the level of reliability and quality we associate with traditional phone networks. It was a good start, but not yet a commercial proposition, because its coverage was limited to a single 3G network.
Release 6, accordingly, allowed the system to be “access agnostic” (i.e., applicable to cable, mobile, PSTN, DSL, and other networks). It also addressed the real-time gateway functions on the edge of the IMS network, the links that would allow subscribers to reach out to other fixed or mobile networks just as they can with traditional phone services. So Release 6 brought IMS into the real world, rather than leaving it an island of privileged subscribers.
The point about IMS, as described in the sidebar, is that the basic network control functions are on a layer independent of the actual carrier medium and independent of whatever fancy services are added on top of that layer. If tomorrow’s mobile phones come with holographic 3D virtual presence projectors built in, then an IMS network will be ready to support them.
The real promise of IMS is to take any new service, define it today, stick it on your network tomorrow, and offer it to subscribers next week. That is why the technology, originally designed for mobile networks, has become the hottest topic among fixed-line and cable operators: it has the same long-term focus on service expansion that we see in service providers’ approach to their optical networks.
The Internet has created a public expectation that new multimedia services will come and go on a day-to-day basis: that you can simply download some software and turn your game console into a simple video-conferencing interface, that another download will enable automatic backups to a remote Internet server… or whatever. By these standards, carriers have been lagging in delivering sufficient bandwidth to the home or office and in guaranteeing adequate quality of service (QoS). However, they are now beginning to address the demand in a number of ways.
One is to increase bandwidth in the final mile to the premises, where PON is a strong contender. Another is to provide greater flexibility to manage bandwidth not only in the last mile but also in the core network. In the old, inflexible model, the network designers must identify well in advance the end points of each optical circuit and the maximum amount of bandwidth that circuit will require in order not to waste optical resources. In today’s quest for greater flexibility, designers are looking to new technologies such as reconfigurable optical add/drop multiplexers (ROADMs) that enable automated network reconfiguration to deliver bandwidth on demand.
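As a rough illustration of the contrast, consider the toy Python model below. It is not a real ROADM controller interface; the endpoint names and the 40-channel count are invented. The idea is simply that a spare wavelength is assigned to a circuit when a request arrives and returned to the pool when the circuit is torn down, rather than every circuit being nailed up at design time.

# Toy model of on-demand wavelength assignment; not a real ROADM controller API.
class WavelengthPool:
    def __init__(self, channels: int = 40):
        self.free = set(range(channels))   # e.g., 40 DWDM channels on a fiber
        self.circuits = {}                 # (src, dst) -> assigned channel

    def connect(self, src: str, dst: str) -> int:
        """Assign a free wavelength to a new circuit on demand."""
        if not self.free:
            raise RuntimeError("no spare capacity on this fiber")
        channel = self.free.pop()
        self.circuits[(src, dst)] = channel
        return channel

    def release(self, src: str, dst: str) -> None:
        """Tear the circuit down and return its wavelength to the pool."""
        self.free.add(self.circuits.pop((src, dst)))

pool = WavelengthPool()
channel = pool.connect("branch-office", "head-office")  # provisioned in software...
pool.release("branch-office", "head-office")            # ...and released when done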
This capability begins to take optical networking out of the pure transport layer and blurs the boundary between transport and control. And if applications are going to get in on the act (say, a user simply clicks on a videoconferencing icon and it automatically calls up a 10-Gbit/sec connection to the head office), then the long-term focus on service expansion will call for a fundamental reevaluation of the telecommunications infrastructure. That is the third approach, and in it IMS is seen as the best way forward.
In some ways, the IMS debate has been overshadowed by excessive emphasis on particular services it could provide, rather than on its role as a “service enabler” designed to cope with whatever future services might arise. But for the purposes of this article, consider the following example.
A user initiates a call to a colleague from a highly functional laptop. An IMS infrastructure would mean that during the call, without any redialing or break in the conversation, the caller could send a video stream and download some text files and a slide presentation to the colleague. IMS allows the delay-sensitive voice traffic to be given a higher priority so the voice quality is not degraded by the file downloads. Similarly, during the call the sender might say, “We haven’t finished this discussion, but it’s time to walk the dog, so I’m now continuing this conversation on my mobile phone. So keep your amended draft until I’m back at my desk,” and the conversation would continue seamlessly. If the mobile phone had e-mail features and a big enough screen, the amended draft might even be dealt with during the walk.
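In an IMS network, that prioritization is decided in the control layer (by the Policy Decision Function described in the sidebar) and enforced in the transport network. The Python sketch below is a deliberately simplified, endpoint-side stand-in that shows the underlying mechanism, DiffServ marking; the addresses and port numbers are illustrative only.

# Simplified sketch: mark delay-sensitive voice packets so the network can queue
# them ahead of the bulk file transfer. DSCP 46 (Expedited Forwarding) is the
# class typically used for voice; addresses and ports below are illustrative.
import socket

DSCP_EF = 46          # Expedited Forwarding: the voice stream
DSCP_BEST_EFFORT = 0  # default class: the file download

def make_socket(dscp: int) -> socket.socket:
    """Create a UDP socket whose packets carry the given DSCP code point."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP occupies the upper six bits of the IP ToS/DiffServ byte.
    # (IP_TOS is exposed on Linux and most Unix-like systems.)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

voice_sock = make_socket(DSCP_EF)           # RTP voice: high priority
file_sock = make_socket(DSCP_BEST_EFFORT)   # slide deck: best effort

# Routers that honor the EF marking queue the voice ahead of the download,
# which is what keeps the conversation clear while the files transfer.
voice_sock.sendto(b"voice RTP payload", ("192.0.2.10", 5004))
file_sock.sendto(b"chunk of the slide deck", ("192.0.2.10", 9000))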
Putting aside the mobile element for the moment, it is clear that such a service raises expectations for very high bandwidth usage. If, during such a conference, one of the callers says, “Before we discuss our marketing any further, I’ll just send you a video of our rival’s latest promotion,” they won’t want to hold up the meeting for more than a few seconds while the video is downloading, nor will they want the voice quality to be degraded during the download.
Again, the point here is not to address that particular application but rather to illustrate an environment that allows such applications and raises users’ expectations of delivery. What if, instead of just voice and video, future users demand HDTV or even telepresence conferencing? The bandwidth and QoS implications are formidable, and they suggest a major impact on the final-mile access technology used.
Right now, FTTP is a hard sell in many parts of the West, given the region’s massive existing investment in copper to the home (and high-quality copper at that, thanks to widespread DSL deployment). DSL is clever technology, and it is still improving, as are various other copper-based access technologies such as Ethernet over copper and bonded copper pairs. So, think of any one new application, and the chances are that something can be done to stretch the existing infrastructure to carry it.
But all these copper-based access technologies are backward looking, in the sense that they are efforts to squeeze ever more bandwidth within the limitations of an existing medium. It’s a struggle because you are up against the physical limits of copper wire and its ability to transmit electrical signals without interference. In that sense, fiber is a forward-looking technology, because the potential capacity of an optical fiber, using DWDM, is way beyond our horizon. The limitations lie not in the medium so much as in the equipment at either end. And that makes it highly desirable for carriers to lay their fiber networks right to the premises.
So, while the FTTP market has been sluggish in the United States and Europe, it becomes a more urgent priority once carriers start to take seriously both IMS and its potential for sudden, unanticipated demands on access bandwidth. It may still be a difficult sell against existing copper systems, but if carriers really focus on that long-term need for service expansion, FTTP becomes the strongest contender.
What applies most clearly in the final mile also applies throughout the infrastructure. New services could arrive overnight, placing sudden demands on the network’s overall capacity to deliver. Thus, since IMS provides a flexible service-creation environment that lets providers bring new services to market much faster than ever before, the underlying optical networks will need to be more responsive, allowing interfaces to be provisioned and bandwidth to be reconfigured on demand.
Additionally, the IMS service environment will be much more competitive: Providers will need to introduce the services that customers want, when they want them, with the performance and QoS they require. For this reason, the optical network must extend to the customer premises, have the ability to identify traffic that needs priority treatment, and enable that priority.
But it is not simply that IMS will help to accelerate the spread and scope of fiber; it will also bring about changes in optical technology itself, the sort of changes already being considered. The new service environment will operate at the application layer (Layer 7). Optical networks currently operate at Layers 1 and 2. But as optical networks become more flexible and responsive, they begin to share some of the attributes of traditional services, such as voice call switching. This blurring of the boundaries between traditional services and transport will continue and be accelerated by IMS. What this should mean for the industry is a more exciting and dynamic optical infrastructure.
There is no doubt that IMS has the potential to revolutionize telecommunications, accelerate the spread of optical networks, and reshape the underlying transport technology. But will it ever happen?
There are plenty of arguments being put forward against IMS, but they mostly overlook its central strategic role, which is to build in flexibility for any future service or application. The real case for IMS, however, is being made by the number of influential people who are now heavily committed to it. Real work is being done to develop IMS networks, and the best example is provided by the MultiService Forum (MSF) with this month’s Global MSF Interoperability (GMI) event.
According to the Current Analysis Inc. report on GMI 2006 from Joe McGarvey, “One way to think of the proposed Global MSF Interoperability 2006 (GMI 2006) event is as a dress rehearsal for the opening night of what may be the next-generation multivendor service delivery platform. The event brings together dozens of carriers and equipment makers to stage a dry run of how an IMS-based infrastructure might fit together in a real-world setting.”
Five of the world’s top carriers (BT, KT, NTT, Verizon, and Vodafone), along with Nortel, are sponsoring GMI 2006. These carriers and the University of New Hampshire Interoperability Lab are coming together to provide world-class networked test facilities spanning three continents in a massive commitment of resources. At the time of writing, more than 20 equipment manufacturers had signed up too, and the focus of their attention will be on getting the basics right. As Roger Ward, president of the MSF, puts it, “The operators are less focused on individual services than on making reliable end-to-end connections, maintaining the services and QoS without risk of dropout.”
So, in answer to the question “Will IMS ever happen?” the advice is to watch for the October culmination of GMI 2006. A lot will have been learned, and the industry will then have a much clearer idea of the way forward. Optical equipment vendors should take note.

James McEachern is a director of the MultiService Forum (www.msforum.org). He also is responsible for the overall standards strategy in the Succession Call Server (Carrier VoIP) business unit at Nortel (www.nortel.com).
Basic IMS Architecture

The basic IMS architecture has three layers.
The connectivity (or transport) layer comprises the physical network, including the routers and switches for the backbone and access network, the traditional home of optical technology.
The control layer is responsible for setting up, modifying as needed, and terminating calls. It contains the Call Session Control Functions (CSCFs), based on Session Initiation Protocol (SIP), plus support functions to allow provisioning, charging, and operation and management. The Home Subscriber Server, for example, holds a database of all the network’s subscribers, their access rights, authentication details, billing information, and subscribed services, including information needed for the traditional mobile network. The control layer also includes gateway functions for internetworking with other operators’ networks: the Signaling Gateway Function converting signals between SIP and SS7 networks, and the Policy Decision Function that allocates IP resources to support QoS. Recent developments in optical technology, such as reconfigurable networks, are reaching into the control layer.
The application layer contains application servers providing value-added services for the user. Sitting over the control layer and independent of its inner structure, this layer enables any number of new services to be simply added or upgraded without any change to the control layer.
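To give a flavor of what the control layer actually handles, here is a minimal, hand-built SIP INVITE sent from a Python script. The addresses and identities are illustrative only, and a real IMS INVITE carries many more headers (routing, identity assertion, security) plus an SDP body describing the media, but the request-response exchange the CSCFs mediate has this basic shape.

# Minimal SIP INVITE for illustration; real IMS signaling carries many more
# headers and an SDP media description. Addresses and identities are examples.
import socket

invite = (
    "INVITE sip:colleague@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.1:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:caller@example.com>;tag=1928301774\r\n"
    "To: <sip:colleague@example.com>\r\n"
    "Call-ID: a84b4c76e66710@192.0.2.1\r\n"
    "CSeq: 1 INVITE\r\n"
    "Contact: <sip:caller@192.0.2.1:5060>\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

# An IMS terminal would send this to its Proxy-CSCF, which routes it onward;
# here we simply fire it at an illustrative address to show the exchange.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(invite.encode("ascii"), ("192.0.2.53", 5060))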