NXTcomm Daily News - From the editors of Telephony and Wireless Review

How to optimize service delivery in next-generation core networks
By Gary Southwell, director of IPTV and Multiplay solutions, Juniper Networks

Jun 26, 2007 5:42 PM


For many years, the edge of the IP network has been the traditional service delivery point, where most of the network intelligence resided; the core played little part in controlling the services that traverse it. Until now, keeping the bulk of service intelligence at the edge has been an adequate paradigm for most traditional applications.

The massive growth in high-value traffic is changing the approach to core network design. This traffic takes the form of video (both on-demand streaming and real-time broadcast), other content-heavy applications such as storage, and of course voice. Since a positive user experience is a necessity for these services, it becomes incumbent on the core network to scale and to differentiate this high-value traffic. The alternative, a core with no service awareness, results either in dissatisfied users (ultimately killing the service) or in over-engineering bandwidth and infrastructure capacity to accommodate these services.

Requirements for Service Awareness across the Next-Generation Network Core

Service providers have recognized the need for change in service delivery and have called on their equipment suppliers, through initiatives such as IMS, TISPAN and the IPSphere Forum, to provide an architectural model that can scale to deliver profitable new services as they become available. Table 1 illustrates this model.

Provider Network Layer    Responsibility of Network Elements
Service                   Interact via Policy Engine via Service Oriented Architecture
Policy                    Intelligent Policy with Interfaces to Service/Infrastructure
Packet Handling
Infrastructure            Router Control Plane (Route Processing)

Table 1. Functional separation in a layered architecture

By specifying functional separation into layers with defined open interfaces, these architectures promise to link services with users and the network. However, their primary focus has been on admission control and service policy enforcement in edge platforms, with very little done to leverage this service information across the routed core network.
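
As a rough illustration of this functional separation, the sketch below models each layer in Table 1 as a minimal Python interface. The class and method names are invented for illustration and are not drawn from the IMS, TISPAN or IPSphere specifications.

    # Minimal sketch of the layered model in Table 1: each layer exposes an open
    # interface to the layer above it. Names and signatures are illustrative only.
    from abc import ABC, abstractmethod

    class ServiceLayer(ABC):
        """Applications: interact with the network via the policy engine (SOA interfaces)."""
        @abstractmethod
        def request_service(self, subscriber: str, service: str, bandwidth_mbps: float) -> bool: ...

    class PolicyLayer(ABC):
        """Intelligent policy, with interfaces up to services and down to the infrastructure."""
        @abstractmethod
        def admit(self, subscriber: str, service: str, bandwidth_mbps: float) -> bool: ...

    class PacketHandlingLayer(ABC):
        """Forwarding-plane treatment (queueing, marking, tunnel selection) driven by policy."""
        @abstractmethod
        def apply_policy(self, tunnel_id: str, bandwidth_mbps: float) -> None: ...

    class InfrastructureLayer(ABC):
        """Router control plane: route processing and signalling of label switched paths."""
        @abstractmethod
        def signal_lsp(self, ingress: str, egress: str, bandwidth_mbps: float) -> str: ...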

Looking forward, experience-based, bandwidth-intensive services, such as video on demand, are driving the need for the network core to dynamically adjust to meet the changing subscriber and application demand. To achieve service awareness, it is clear that the Next-Generation Network (NGN) core needs more packet processing agility—and the ability to actively link network behavior with user, service and application requirements. This service awareness in the core prevents the costly mistake of leaving the user experience to chance, and avoids high operations involvement whenever a new revenue opportunity comes along.

NGN core solutions need to solve the fundamental challenge of delivering services cost-effectively and at scale. The current core infrastructure itself presents some challenges here. Across most installed systems, capacity is limited to a few 10 Gbps interfaces, which is insufficient to handle core growth rates of 80-100 percent per year very far into the future. In the highest traffic environments, the challenge of scale requires intelligent 40 Gbps interfaces that support over a terabit in a single network node.
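
To put those growth rates in perspective, the back-of-the-envelope projection below assumes a core node carrying 40 Gbps today (an illustrative starting point, not a figure from this article) and compounds it at 80 and 100 percent per year:

    # Rough projection: a node limited to a handful of 10 Gbps interfaces is
    # outgrown within a few years at 80-100 percent annual traffic growth.
    start_gbps = 40.0                       # assumed starting load, for illustration
    for growth in (0.80, 1.00):
        load = start_gbps
        print(f"annual growth {growth:.0%}:")
        for year in range(1, 6):
            load *= 1 + growth
            print(f"  year {year}: {load:7.0f} Gbps "
                  f"(~{load / 10:.0f} x 10G or ~{load / 40:.0f} x 40G interfaces)")

At 100 percent annual growth, the assumed node crosses a terabit of traffic within about five years, which is the scale at which dense 40 Gbps interfaces become necessary.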

Finally, this router intelligence must be available at multiple levels of scale. It should preserve current investments and be available as an in-service upgrade to existing systems rather than as a "forklift" upgrade.

The challenge of delivering scalable service control on the packet handling level, and policy enablement in the control plane, must be designed from the start. This is a new frontier in core network router design, and retrofitting this level of sophistication is not possible as an afterthought once inadequate systems are deployed.

The immediate assumption is that a system providing service control at scale would cost more than plumbing-only counterparts. But what if this service control and scale were available at the same level of investment to the service provider? The advantages would be faster time to market for assured services, differentiated services with premium margins for guarantees, higher availability and higher brand quality.

Enhanced service control across the core means not only rapid service delivery and resultant revenue increases, but also huge financial savings (in both CapEx and OpEx) from the optimized utilization of existing core resources. Additional architectural changes can be implemented: Video servers can now be more centralized, with content more efficiently replicated and delivered across the service-aware core to end users.

Enabling Technologies for a Service-Aware NGN Core

Today, service tunnels (enabled through label switched paths, or LSPs) are built across core networks to provide traffic-engineered paths for specific QoS, availability and advanced service-processing requirements. But these service tunnels are essentially static connections that cannot be easily or quickly changed or adjusted. While this approach has been sufficient for relatively predictable best-effort service, emerging services like video on demand are changing traffic patterns and raising quality requirements. The ability to dynamically modify paths through the network core (see Figure 1) will have a huge positive impact on providers' ability to deliver these new services efficiently, cost-effectively and reliably.

Figure 1: Policy Management can dynamically adjust core networks based on service demand.

In the case of a video serving network, a user request for a high-bandwidth, high-value service results in the dynamic adjustment of network resources between the subscriber and the video serving location. These policy changes are initiated by an interaction between the content application and the provider's service-aware policy manager. The policy manager first checks the edge Broadband Services Router and makes the appropriate admission control decisions and policy changes. If needed, it then resizes the LSP through the network core, either using bandwidth available on existing service tunnels or creating a new path.
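
A minimal sketch of that request flow, using invented class and method names rather than any real policy-manager API, might look like this:

    # Illustrative flow for one video-on-demand request: admission control at the
    # edge Broadband Services Router first, then a request to the core to make
    # bandwidth available between the subscriber and the video serving location.
    class EdgeAdmission:
        """Stand-in for per-subscriber admission control on the edge router."""
        def __init__(self, port_capacity_mbps: float):
            self.capacity = port_capacity_mbps
            self.reserved = 0.0

        def admit(self, mbps: float) -> bool:
            if self.reserved + mbps > self.capacity:
                return False                 # deny rather than oversubscribe the port
            self.reserved += mbps
            return True

    class CorePathManager:
        """Stand-in for the core-facing side of the policy manager (see the next sketch)."""
        def ensure_bandwidth(self, ingress: str, egress: str, mbps: float) -> bool:
            print(f"core: resize or create LSP {ingress} -> {egress} for {mbps} Mbps")
            return True

    def handle_vod_request(edge: EdgeAdmission, core: CorePathManager,
                           subscriber: str, video_site: str, mbps: float) -> bool:
        if not edge.admit(mbps):             # edge admission control decision
            return False
        return core.ensure_bandwidth(subscriber, video_site, mbps)

    edge = EdgeAdmission(port_capacity_mbps=1000.0)
    print(handle_vod_request(edge, CorePathManager(), "subscriber-42", "vod-pop-east", 8.0))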

Of course, in some cases the policy manager may deny the request, thereby guaranteeing performance across the core network. Essentially, a Call Admission Control (CAC) decision is made on the application request: a yes or no as to whether it will be granted.

It is imperative that core networks do not change state too frequently; therefore, the policy manager maintains a table of bandwidth allocated per LSP and only makes changes when a threshold is reached, and then requests bandwidth in large increments.
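
A minimal sketch of that bookkeeping, assuming an invented 90 percent utilization threshold and 1 Gbps resize increment, is shown below:

    # Per-LSP bandwidth table with hysteresis: small requests are absorbed against
    # the existing reservation; the LSP is only resignalled, in a large increment,
    # when a utilization threshold is crossed. Threshold and increment are assumed.
    RESIZE_THRESHOLD = 0.90       # resize once allocation exceeds 90% of the reservation
    RESIZE_INCREMENT = 1000.0     # grow reservations 1 Gbps (1000 Mbps) at a time

    class LspBandwidthTable:
        def __init__(self) -> None:
            self.lsps: dict[str, dict[str, float]] = {}   # name -> reserved/allocated Mbps

        def add_lsp(self, name: str, reserved_mbps: float) -> None:
            self.lsps[name] = {"reserved": reserved_mbps, "allocated": 0.0}

        def request(self, name: str, mbps: float) -> None:
            lsp = self.lsps[name]
            lsp["allocated"] += mbps
            if lsp["allocated"] > lsp["reserved"] * RESIZE_THRESHOLD:
                # Threshold crossed: change core state once, in a big step,
                # instead of resignalling the LSP for every individual stream.
                lsp["reserved"] += RESIZE_INCREMENT
                print(f"resize {name} to {lsp['reserved']:.0f} Mbps")

    table = LspBandwidthTable()
    table.add_lsp("pe1->vod-pop-east", reserved_mbps=2000.0)
    for _ in range(300):                      # three hundred 8 Mbps streams
        table.request("pe1->vod-pop-east", 8.0)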

For even greater efficiency advantages, increased service control can also be leveraged to optimize the transport of broadcast video across the core. Broadcast video is intrinsically a point-to-multipoint problem, and it must be solved in a way that does not lead to complex state management in the core network. Most multicast techniques, such as PIM, are mainly enterprise-focused and are not suitable for large-scale deployment in service provider networks.

The solution is a similar policy-based approach to accommodate the dynamic nature of the service, coupled with intelligence such as point-to-multipoint (P2MP) MPLS services. Effectively, traffic in a P2MP LSP is replicated in MPLS: unicast LSPs are merged at rendezvous points within the MPLS network to create a “leaf and branch” delivery mechanism for multicast traffic. Being MPLS in nature, P2MP LSPs offer the following benefits:

  • They can be traffic engineered to guarantee bandwidth and priority
  • They reduce the state that must be maintained in the network (a point the sketch after this list makes concrete)
  • They support resiliency features such as fast reroute (FRR) to route rapidly around network failures
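
To make the reduced-state point concrete, the toy comparison below (topology and node names invented) counts the per-link state needed to reach six receivers either with one unicast LSP each or with a single branching P2MP LSP:

    # Broadcast video to six receivers: N separate unicast LSPs versus one P2MP LSP
    # that replicates traffic at branch points. The tree below is an invented example.
    tree = {
        "head-end": ["branch-1", "branch-2"],
        "branch-1": ["leaf-1", "leaf-2", "leaf-3"],
        "branch-2": ["leaf-4", "leaf-5", "leaf-6"],
    }

    def leaves(node: str) -> list[str]:
        """Collect the receivers reachable below a node."""
        children = tree.get(node, [])
        if not children:
            return [node]
        found: list[str] = []
        for child in children:
            found.extend(leaves(child))
        return found

    receivers = leaves("head-end")

    # Unicast: each receiver needs its own end-to-end LSP, two links deep in this tree.
    unicast_state = 2 * len(receivers)
    # P2MP: one LSP whose state is a single entry per link of the shared tree.
    p2mp_state = sum(len(children) for children in tree.values())

    print(f"{len(receivers)} receivers")
    print(f"per-link state with unicast LSPs: {unicast_state}")
    print(f"per-link state with one P2MP LSP: {p2mp_state}")

The saving grows with the depth and fan-out of the tree, since branches near the head-end are shared by every receiver beneath them.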

Service Awareness and Policy in NGN Cores

In both the unicast (VoD) and multicast (broadcast video) cases, service awareness and advanced packet-handling control result in increased network efficiency and service quality by mapping the best possible paths between subscribers and content sources. The key is the policy layer, which must be able to look at the applications being served (IPTV, web services, voice, fixed-mobile convergence including IMS, etc.) as well as the state of the network resources. Using open interfaces such as SOAP/XML and DIAMETER, policy functions can be linked northbound to applications and services, and southbound to the network elements.
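
As a rough sketch of that north/south split, the example below reduces the northbound request and the southbound element-facing actions to plain Python dictionaries and lists; no real SOAP/XML or DIAMETER encoding, and no vendor API, is implied.

    # The policy layer translates application-level requests (northbound) into
    # actions on network elements (southbound), consulting its view of network
    # resources in between. Dicts here merely stand in for protocol payloads.
    def policy_decision(northbound_request: dict, core_headroom_mbps: float) -> dict:
        """Combine what the application needs with what the network can currently give."""
        needed = northbound_request["bandwidth_mbps"]
        granted = needed <= core_headroom_mbps
        return {
            "subscriber": northbound_request["subscriber"],
            "service": northbound_request["service"],
            "granted": granted,
            "southbound_actions": (
                ["admit-at-edge", "ensure-core-bandwidth"] if granted else []
            ),
        }

    request = {"subscriber": "subscriber-42", "service": "iptv-vod", "bandwidth_mbps": 8.0}
    print(policy_decision(request, core_headroom_mbps=500.0))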

In addition to the above examples, there are several other areas in which policy can be used to improve service delivery across the core, taking advantage of key functions designed to scale a core efficiently. These include having policy interwork with DiffServ TE and QoS-based service tunnels for flexible classes of service, optimized resources, and fault tolerance.

Service-aware policy can also be used in conjunction with hierarchical LSPs, which allow the scaling of huge numbers of tunnels across the core. As providers deploy IP and MPLS further towards the edge, the result is a large mesh of thousands of service tunnels built across the core. LSPs sharing a common route can be bundled together into a “parent” LSP, preserving resources and greatly scaling the total number of tunnels.
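
A toy sketch of that bundling idea (edge routers, core routes and counts all invented) shows how grouping child LSPs that share a common core route into one parent tunnel shrinks the state the core has to track:

    # Hierarchical LSPs: child tunnels that follow the same core route are nested
    # inside a single "parent" LSP, so core routers keep state per parent rather
    # than per child. The edge pairs and routes below are invented for illustration.
    from collections import defaultdict

    # Each child LSP: (ingress edge router, egress edge router, shared core route).
    child_lsps = [
        ("pe-east-1", "pe-west-1", ("p1", "p2", "p3")),
        ("pe-east-2", "pe-west-2", ("p1", "p2", "p3")),
        ("pe-east-3", "pe-west-4", ("p1", "p2", "p3")),
        ("pe-north-1", "pe-south-1", ("p4", "p5")),
        ("pe-north-2", "pe-south-2", ("p4", "p5")),
    ]

    parents = defaultdict(list)
    for ingress, egress, core_route in child_lsps:
        parents[core_route].append((ingress, egress))   # bundle by shared core route

    print(f"child LSPs seen at the edge:  {len(child_lsps)}")
    print(f"parent LSPs tracked in core:  {len(parents)}")
    for route, children in parents.items():
        print(f"  core route {'-'.join(route)} carries {len(children)} child tunnels")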

Integration of policy with other OSS (provisioning and monitoring) systems is a key facilitator of these advantages. Changes made in the provisioning system to alter the network are passed to the network resource policy layer, which interoperates with applications and the routed network.

Applied at the policy layer across the edge and the core, service-aware policy managers make the network intelligent—improving efficiency, speeding time to revenue for new services and increasing customer satisfaction.

Agility and Openness as the Foundation for Customizable NGN Services

As the heart of any NGN or multiplay deployment, the core network has taken on a new role, delivering stability and high speed transport, as well as rich service delivery features. In this new era of sophisticated multiplay and rapid service rollout, the core cannot be considered or positioned as dumb transport. Instead, future-looking service providers will view the core network as a key component of their flexible and intelligent service delivery networks, and will leverage agile and open network cores to usher in a new generation of services and capabilities.
