As Principal Analyst, Optical Infrastructure, Rick primarily focuses on tracking, analyzing and reporting on developments that impact the metro and long-haul optical infrastructure markets. His areas of coverage include the companies, technologies and strategies related to the market for WDM-based access, switching, optical add/drop and PON products.
ICPs are deploying disaggregated solutions across their networks; data center operators' interest in specialized DCI solutions also reflects this trend.
AT&T appears to have taken up the trend as it deploys disaggregated OLTs and proposes disaggregation of additional network equipment.
As I pointed out in a blog post last month, the capabilities introduced by SDN set the stage for a radical possibility for the network: network element disaggregation (or, in the rest of this post, simply disaggregation). Though this concept might appear to be strictly the purview of the academic community, several large network operators have called for disaggregation (by that name or by others) over the past year. Demand for disaggregation exists beyond the theoretical. Continue reading “Hardware Disaggregation: Demand Extending Beyond Expectations”→
Data center interconnection (DCI) transport is emerging as potentially a massive optical transport opportunity.
Regardless of the often-quoted demand for a simple point-to-point DCI transport solution, some applications are likely to be better served with a packet-optical solution.
DCI has become something of a darling of the optical transport networking world. Once Facebook explained that most of the traffic connecting to its data centers (approximately 10% of the data flow within the data centers) came from other data centers, the market perceived that DCI would be a very big business, perhaps ultimately as large as all of telco transport. Simultaneously, many data center operators claimed that their requirements for DCI were not complex; all that was needed was high-capacity/density and low-cost/power point-to-point connections between the data centers. In fact, some web-scale companies such as Microsoft and Google proposed a concept of do-it-yourself (DIY) transport in which the data center operator would purchase and assemble optical components to achieve a highly cost-effective DCI transport solution. Optical systems vendors, after fretting about losing out on this new DCI business segment, realized that this application actually presented them with a high-volume (albeit low-margin) opportunity on top of their existing business. Some of the more nimble of those vendors quickly developed and introduced customized DCI transport platforms – Infinera’s Cloud Xpress, Cyan’s (now Ciena’s) N-Series, Ciena’s WaveServer and ADVA’s CloudConnect. After this mad dash to introduce products to address the point-to-point DCI application, perhaps it is time to examine the data center transport marketplace with a bit more perspective. Continue reading “Data Center Interconnection – Not Quite So Simple”→
OFC 2015 attendees were captivated by the enormous point-to-point DCI opportunity, but its margins may be suspect.
With virtualization moving non-transport functions to the cloud, the remaining multi-layer transport market is likely to be quite rewarding.

During last week’s OFC 2015 in Los Angeles, one would have thought we had found the goose that laid the golden eggs. Component vendors were taken with the opportunity of endless optical connections within data centers, and optical transport vendors saw profit in the rapid growth of data center interconnection (DCI) traffic. However, opportunities in the optical transport market are considerably broader than simply chasing DCI connections. This dichotomy of the killer application (DCI) versus the broader set of applications was in full view at the Infinera Technology Briefing on Wednesday afternoon of the conference.
As it did last September when it introduced its Cloud Xpress platform, Infinera explained how distributed processing by such Internet content providers as Google and Facebook is producing massive data flows between the data centers. It then pointed to the Cloud Xpress as the first platform available in the marketplace expressly designed for the unique requirements of point-to-point DCI. At least one financial analyst seized on that opportunity, and Infinera’s product that addresses the opportunity, to significantly raise the vendor’s target stock price. However, there is much more going on at Infinera, and in the marketplace, than simply a bounty of DCI business.
Infinera sees the market growing along several dimensions, each of which is a $10 billion opportunity, and it is funding its R&D to pursue these opportunities. It is clearly preparing to introduce a metro/aggregation platform to address the expanding metro transport market, a plan reiterated in its announcement of two new photonic integrated circuits (PICs) for this new (for Infinera) market. The vendor also described its plans to increase the capacity of its DTN-X, enhance the capacity and flexibility of its super-channels, and introduce software defined networking (SDN) capabilities to address the cloud network opportunity, which is broader than simply point-to-point DCI. An example of this type of cloud network opportunity is this week’s Infinera announcement that Facebook is deploying its portfolio on a route from its Lulea, Sweden, data center across major hubs.
Infinera then described expanding along another dimension (an orthogonal expansion, if you will) to address higher-layer connections, adding another $10 billion to its addressable market. The vendor posited that virtualization (specifically, network functions virtualization – NFV) would absorb the functions provided by specialized network elements outside its scope, leaving a set of multi-layer transport functions that it could address (illustrated in the figure).
Source: Current Analysis
In the figure, the green dashed lines represent the functions being virtualized into the x86 platforms. At the bottom of the figure is the remaining multi-layer physical transport network. The value of this remaining multi-layer transport market could be on the order of $50 billion. One implication is that Infinera’s capital value should be based on a much larger opportunity than simply point-to-point DCI, but the larger implication, and the one with greater certainty, is that the value of the packet-optical market is by no means limited to the size of the point-to-point DCI market.
What Infinera did not address, but was reflected in the conference as a whole, is that point-to-point DCI transport equipment will soon come under extreme price pressure. The large IP content providers are already in discussions with component vendors about bare metal solutions, sending the simple point-to-point transport market on a race to bottom dollar. That said, these providers’ investigations into cobbling together their own point-to-point DCI solutions do not mean that vendors such as Cyan (with the announced N11) and PadTec (with a claimed terabit OTN muxponder) will be pushed out. As long as a transport vendor recognizes the specialized requirements of a point-to-point DCI product (including relatively low margins set against giant volumes), it can likely add enough value to hold off the sub-system and component competition. By the way, this race to bottom dollar will also drive down the value of the simple DCI market in the long run.
Yes, the OFC 2015 attendees were enamored with the enormous point-to-point DCI opportunity, but that doesn’t mean that this is the primary optical transport market opportunity. There is much more, besides.
• Optical networking vendors strive to understand the selections network operators make for next-generation optical infrastructure.
• Sometimes, these decisions can be based on more than the ultra-high-capacity optical network elements.
Much of the buzz going into last week’s OFC 2015 in Los Angeles was the opportunity provided by data center growth and, particularly, the opportunity for optical transport vendors to profit from the rapid growth of data center interconnection (DCI) traffic. However, once the show got started, Verizon captured the crowd’s attention by announcing its intent to modernize its metro optical network using scalable, packet-optimized transport solutions (including 100G flexible CDC ROADMs) from Ciena and Cisco. Ciena’s selection was somewhat expected, based on its prior 100G work with Verizon, but Cisco surprisingly usurped two incumbent metro optical vendors (Coriant and Fujitsu Network Communications) to land the second selection. The question at the conference was, “How did they do that?”
Any consideration of how a vendor would win an operator’s business should be based on the value that vendor could bring to the operator, and that value, in turn, is tied to the needs of the operator. Yes, Verizon can benefit from agile 100G (and higher) optical networking, but a focus strictly on that backbone portion of the network overlooks a much bigger challenge for an operator, particularly Verizon. This operator provides tens of thousands of T1 services, and it employs many more T1 lines to support a myriad of other legacy services. The challenge for Verizon is that it supports these T1-based connections with a legacy infrastructure (e.g., multi-service provisioning platforms – MSPPs) that has been largely bypassed by recent packet-based platforms. Verizon knows that the network will ultimately evolve to all-packet, but how does it continue to support its very significant base of legacy connections (T1s, T3s, OC-3s, etc.) while transitioning the network to all-packet? One practical solution (and perhaps a superior one) is to convert the legacy connections to packet flows with a process such as circuit emulation.
Circuit emulation has been employed in many networks over the past decade to packetize miscellaneous TDM connections (baseband, PDH and SONET/SDH) for transport in all-packet networks. However, these solutions have generally addressed only a minority of a network’s connections; the process has usually been expensive and has typically not scaled well. Thus, even though circuit emulation performs the function needed by the incumbent wireline network operators (like Verizon’s wireline operations), it has proven impractical to employ at the massive scale required. However, if a network systems vendor could propose a solution that would support massive scale – cost-effectively and requiring minimal space and power – that vendor would be meeting a vital need of the operator, thereby positioning itself to win a considerable portion of the operator’s metro business. As it so happens, Cisco is quite experienced in the technology, and it claims that circuit emulation was included in its proposal to Verizon.
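The mechanics of circuit emulation are simple to sketch: the constant-rate TDM bit stream is sliced into fixed-size payloads, each tagged with a sequence number, and the far end reassembles the stream, concealing any lost packets before playing the bits back out at a constant rate. The toy Python sketch below illustrates this in the spirit of structure-agnostic emulation (SAToP, RFC 4553); the packet format, payload size and all-ones loss concealment are simplified assumptions for illustration, not any vendor’s implementation.

```python
# Illustrative sketch of structure-agnostic circuit emulation (SAToP-style).
# A T1 delivers 8,000 frames/s of 24 channel bytes; here we simply slice a
# continuous TDM octet stream into fixed-size, sequenced packet payloads.

from dataclasses import dataclass

PAYLOAD_BYTES = 192  # eight T1 frames' worth of channel bytes (24 * 8)

@dataclass
class CesPacket:
    seq: int          # sequence number, wraps modulo 65536 in real protocols
    payload: bytes    # raw TDM octets; no interpretation of individual channels

def packetize(tdm_stream: bytes, payload_size: int = PAYLOAD_BYTES):
    """Slice a TDM octet stream into sequenced emulation packets."""
    return [
        CesPacket(seq=i, payload=tdm_stream[off:off + payload_size])
        for i, off in enumerate(range(0, len(tdm_stream), payload_size))
    ]

def depacketize(packets):
    """Reassemble the octet stream. A real interworking function would also
    run a jitter buffer; here, lost payloads are replayed as all-ones (AIS)."""
    out = bytearray()
    expected = 0
    for pkt in sorted(packets, key=lambda p: p.seq):
        while expected < pkt.seq:              # conceal gaps left by lost packets
            out.extend(b"\xff" * PAYLOAD_BYTES)
            expected += 1
        out.extend(pkt.payload)
        expected = pkt.seq + 1
    return bytes(out)

# One second of T1 channel traffic: 8,000 frames/s * 24 bytes = 192,000 bytes
stream = bytes(range(256)) * 750
assert depacketize(packetize(stream)) == stream
```

The scaling challenge the post describes is visible even here: every T1 needs its own packetizer state, sequencing and jitter buffering, which is why doing this for tens of thousands of circuits demands dense, power-efficient hardware rather than one-off gateways.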
In the end, only Verizon knows the real reasons it selected Ciena and Cisco to supply its next-generation metro optical network. However, as vendors inevitably attempt to ascertain the likely reasons, they should not limit their evaluation to simply the “sexy” (for the optical transport industry) ultra-high-capacity optical network elements. The solution may lie in the much more mundane conversion of legacy connections into packets.
Transport network elements are likely to remain in existing central offices, even after their control functions have been virtualized to x86 servers.
In some cases, the x86 servers will be consolidated in data centers that are separate from the physical network, but in others, they will be collocated.
I recently posted a report that describes how traffic demands and the promise of virtualization are leading operators to segregate the functions in the network into two layers: a cloud services layer, consisting of the many functions of the network that are virtualized onto standard high-volume x86 servers, and a transport layer, consisting of the remaining functions of the network, which are focused on sending traffic across the network. As the report notes, operators are beginning to implement network functions virtualization (NFV), but they are still determining where to locate the NFV servers. The answer, of course, is likely to be in next-generation central offices (COs), headends and/or data centers, but the question remains: just what does a next-generation CO look like? Continue reading “Implications of the Two-Layer Network on Next-Generation Central Offices”→