Implications of the Two-Layer Network on Next-Generation Central Offices

Rick Talbot

Summary Bullets:

  • Transport network elements are likely to remain in existing central offices, even after their control functions have been virtualized to x86 servers.
  • In some cases, the x86 servers will be consolidated in data centers that are separate from the physical network, but in others, they will be collocated.

I recently posted a report that describes how traffic demands and the promise of virtualization are leading operators to segregate the functions of the network into two layers: a cloud services layer, consisting of the many network functions that are virtualized onto standard high-volume x86 servers, and a transport layer, consisting of the remaining functions of the network, which are focused on sending traffic across the network. As the report notes, operators are beginning to implement network functions virtualization (NFV), but they are still determining where to locate the NFV servers. The answer, of course, is likely to be in next-generation central offices (COs), headends and/or data centers, but the question remains: just what does a next-generation CO look like?

As specialized network functions are transitioned out of existing network elements, those that provide optical and digital transport, as well as underlying packet switching (Ethernet switches and possibly some IP routers), are likely to remain where they are, in the existing COs (or, for cable TV providers, headends). Network operators are not anxious to forklift their entire transport infrastructure, and next-generation transport gear will provide similar functions, just with superior scalability and SDN-based control. From an OpEx perspective, keeping the next-generation transport equipment collocated with the legacy equipment is preferable because of the similar functionality of the equipment; the same personnel will likely manage both.

The location of the virtualized resources is another issue. This equipment, generally x86-based commercial off-the-shelf (COTS) servers, provides an entirely different function than the next-generation transport equipment, and the staff that manages these servers has a completely different skill set (and potentially sits in a different organization). For example, telco IT functions (including much of the OSS/BSS) are performed in separate data centers. In addition, SBC (formerly Southwestern Bell, and later renamed AT&T) entered the video business with what we now know as U-verse. The infrastructure that supported this video service (deployed as Project Lightspeed) may have had elements similar to, and some overlap with, the existing telco infrastructure, but its video processing and content equipment was sufficiently distinct that the telco segregated it into separate headends/data centers. Thus, a precedent is established for deploying the virtualized functions of the network outside of the existing central offices.

However, there is a significant difference between serving independent IT functions, or even serving a specialized (video) service, and serving all of the virtual network functions (VNFs) of the network. Any service that needs to have a VNF performed on it will need to have its service connection routed to a VNF server for treatment. Driven by the anticipated increase in value-added services once NFV is fully deployed, a tremendous amount of the end-to-end network traffic will need to be connected through these servers. Unless the servers are distributed out to at least some physical network locations, significant traffic will be transported to, and then from, the server location (data center) in a configuration often termed a "lollipop".
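The transport penalty of this lollipop configuration can be illustrated with a simple back-of-the-envelope sketch (not from the report; the distances below are hypothetical examples, while the 5 µs/km figure is the standard propagation delay of light in fiber):

```python
# Illustrative model of "lollipop" routing: traffic detours out to a VNF
# data center and back before continuing along its direct path.
# Distances are hypothetical; fiber propagation delay is ~5 us per km.

FIBER_DELAY_US_PER_KM = 5.0  # light in fiber covers roughly 200 km per ms


def lollipop_penalty(detour_to_dc_km: float) -> dict:
    """Extra fiber distance and one-way delay added when traffic must be
    hauled to a VNF data center and back to its direct route."""
    extra_km = 2 * detour_to_dc_km  # out to the data center and back
    return {
        "extra_km": extra_km,
        "extra_delay_ms": extra_km * FIBER_DELAY_US_PER_KM / 1000.0,
    }


# Example: the nearest VNF data center sits 150 km off the direct route,
# adding 300 km of transport and 1.5 ms of one-way propagation delay.
print(lollipop_penalty(detour_to_dc_km=150))
# {'extra_km': 300, 'extra_delay_ms': 1.5}
```

Even this simplified view shows why distributing at least some VNF servers to physical network locations matters: the detour cost is paid on every service connection that requires VNF treatment.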

In addition, the ability to meet SLAs and latency requirements, especially for video, will require that some virtualized network functions be performed relatively near to the network.

Network operators are likely to choose a mixture of these two placements of VNF servers. In some cases, a separate data center site will support a large number of servers serving a large portion of the network (perhaps in a metro hub). In many others, to minimize the lollipop effect and reduce latency, the servers will be located at physical network locations. In both cases, these sites can be considered next-generation COs.

 

About Rick Talbot
As Principal Analyst, Optical Infrastructure, Rick primarily focuses on tracking, analyzing and reporting on developments that impact the metro and long-haul optical infrastructure market. His areas of coverage include the companies, technologies and strategies related to the market for WDM-based access, switching, optical add/drop and PON products.
