As Principal Analyst of Transport and Routing Infrastructure, Glen analyzes the technology, product, and partnership initiatives of vendors who supply carrier infrastructure equipment. His focus is on vendors that produce core routers, edge switches, optical transport, data center interconnection, mobile backhaul, and network management and operational support systems.
ODL’s third release (Lithium) appears to close gaps left by earlier versions in areas such as testing, performance, native support of OpenStack Neutron, and breadth of community participation.
ONOS logs its first commercial/production deployment since its release in December 2014. This should mark the beginning of many more, as the platform touts the carrier-grade characteristics needed to run live traffic.
Why is the OpenDaylight (ODL) Lithium release (its third) an important step in the evolution of the controller? Although the details are many, several features stand out as important for adoption in a service provider environment, including support for:
• Quality of service: the REST APIs are now more robust in identifying and exposing QoS data.
• Service chaining: Lithium provides both the infrastructure needed to provision a service chain and the end-user application for defining it.
• Rigorous testing: multiple use cases are now characterized, helping to boost scalability and performance.
• Security and automation: with most network functions going virtual, a sound security architecture becomes more critical, and the ability to automate functions to minimize human error and improve productivity helps operators reduce the risk of security breaches while lowering overall operational expenses.
Continue reading “ODL Gains Momentum with Lithium and ONF Gains Deployments with ONOS”→
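For readers who want a concrete feel for the northbound interface, here is a minimal sketch, in Python, of assembling a query against ODL’s RESTCONF API. The controller address, credentials and module path are illustrative assumptions, not a recipe for any specific Lithium install.

```python
import base64
import urllib.request

# Hypothetical controller address -- adjust for a real deployment.
ODL_HOST = "http://odl-controller.example.com:8181"

# In the Lithium era, ODL's northbound RESTCONF exposed operational state
# under /restconf/operational/...; the exact path depends on the features
# installed, so treat this one as illustrative only.
path = "/restconf/operational/network-topology:network-topology"

req = urllib.request.Request(ODL_HOST + path)
req.add_header("Accept", "application/json")
# Test installs commonly use HTTP basic auth (admin/admin by default).
token = base64.b64encode(b"admin:admin").decode("ascii")
req.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(req) would issue the call against a live
# controller; here we only assemble the request to show the API's shape.
print(req.full_url)
```

The point is less the specific URL than the pattern: the controller abstracts the devices, and operators script against one uniform REST surface.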
Virtual EPC investments pay off for startups as major vendors open their wallets, filling portfolio gaps and strengthening their virtual network propositions.
Multiple vEPC wins add credibility and a level of completeness to the virtual networking solutions; customers bite and move on from PoCs and trials to commercial services.
This year’s Mobile World Congress is obviously the show to attend and at which to exhibit, and as we predicted, this is the year when the industry rapidly sets aside its safety blanket of trials and proofs of concept (PoCs) in favor of making serious commitments to virtualized solutions. Several announcements appear to demonstrate that vendors and operators have set aside pure PowerPoint and replaced it with checks from acquirers (for startups) and from operators (to vendors) for serious deployments. Continue reading “MWC 2015: Virtual EPC Startups Snagged by the Big Guys, Filling Gaps in Portfolios”→
For the broadband network, NETCONF deserves strong consideration for its ability to work from flexible data models (YANG) and control all devices in the service chain.
NETCONF, initially standardized in December 2006, has been used to manage thousands of routers and switches, and it works well with SDN.
OpenFlow versions 1.4 and 1.5 appear to have the requisite functionality needed for WAN device management and control; however, although standardized in October 2013 and December 2014, respectively, vendor commitment to date appears tepid.
Since the beginning of the SDN and NFV discussion a few years back, proponents of OpenFlow have pushed for it to control all devices in the network. This has unquestionably been the case within the confines of the data center, where OpenFlow appears to solve problems well, even in its early 1.0 version, which, according to many sources, is widely deployed. However, consider the many cases where, in order to provide an end-to-end WAN service or inter-data center connectivity, OpenFlow falls short, at least so far. Continue reading “The Quest for Dominance: OpenFlow or NETCONF for Networks Outside the Data Center?”→
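To see why NETCONF’s model-driven approach appeals for WAN control, here is a minimal sketch of building an edit-config RPC payload against the standard ietf-interfaces YANG model. The interface name is hypothetical, and a real client library (ncclient, for example) would handle the session, framing and commit for you; this only shows the shape of the payload a controller derives from the YANG model.

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
# ietf-interfaces (RFC 7223) is one standard YANG model a NETCONF client
# might drive; the interface name below is made up for illustration.
IF_NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
target = ET.SubElement(edit, f"{{{NC}}}target")
ET.SubElement(target, f"{{{NC}}}candidate")  # stage in candidate, commit later
config = ET.SubElement(edit, f"{{{NC}}}config")

ifaces = ET.SubElement(config, f"{{{IF_NS}}}interfaces")
iface = ET.SubElement(ifaces, f"{{{IF_NS}}}interface")
ET.SubElement(iface, f"{{{IF_NS}}}name").text = "ge-0/0/1"
ET.SubElement(iface, f"{{{IF_NS}}}enabled").text = "true"

payload = ET.tostring(rpc, encoding="unicode")
print(payload)
```

Because the payload is generated from a YANG model rather than hand-crafted per device, the same client code can, in principle, drive every element in a service chain that supports the model, which is precisely the flexibility the WAN case demands.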
• Rebranding: Multiple telecom vendors have rebranded themselves this year (some far more dramatically than others), but does this really help them grow mind share, or do tag lines simply evolve?
• Targeted Messaging: As market dynamics change, finding the right messaging is critical, although too often new phrases and acronyms are invented to recast the same old concepts.
Over the past year, nearly all traditional service providers have endorsed the idea of transforming their networks in order to capture the benefits promised by a more agile and flexible platform, enabling them to provide XaaS (Anything as a Service), grow revenues and shake the somewhat “stodgy” telecom image. Network vendors have picked up on this theme of network transformation and positioned their products, through messaging, as the means to deliver it. Analysts typically evaluate, in great detail, the meaning and impact of new product capabilities and features, but we seldom apply the same rigor to a vendor’s messaging and positioning, which is designed to capture the eye of the operator. This blog does not provide a detailed messaging analysis, but highlights some interesting new branding, tag lines and positioning that show the vendor community is well aware of the need to market its wares in a vastly different fashion than the old “speeds and feeds” model of bygone days. Continue reading “Corporate and Product Rebranding – Useful or a Convenient Diversion and Just a Costly Expenditure?”→
IT and network jargon begins to co-mingle, but is this bilateral cross-pollination or one-sided?
Who’s courting whom in the mashing together of the network, data center and cloud?
Over the past few months, it has become apparent that service providers, as a group, have nearly all endorsed the idea of transforming their networks in order to capture the benefits of a more agile and flexible platform from which they could provide XaaS (anything as a service). One of the initial barriers, which still remains, is the obvious disconnect between the terminology used by the network folks and their counterparts in the IT world to accomplish similar functions – like programming languages and installation processes. Continue reading “Python, Puppet, Chef, ONIE and ODM – New Terms for the Service Provider Equipment World?”→
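As a toy illustration of that cross-pollination, here is the kind of data-driven templating that IT tools such as Puppet and Chef formalize, applied to network gear. The interface names, descriptions and VLAN IDs are made up; the point is the workflow, where configuration is rendered from data rather than typed into a CLI.

```python
from string import Template

# Render per-device CLI stanzas from structured data -- the IT-automation
# pattern the network world is now adopting. All values are hypothetical.
CONFIG_TEMPLATE = Template(
    "interface $ifname\n"
    " description $descr\n"
    " switchport access vlan $vlan\n"
)

devices = [
    {"ifname": "GigabitEthernet0/1", "descr": "uplink", "vlan": "100"},
    {"ifname": "GigabitEthernet0/2", "descr": "server", "vlan": "200"},
]

for dev in devices:
    print(CONFIG_TEMPLATE.substitute(dev))
```

The network engineer’s “config stanza” and the IT engineer’s “template plus data” describe the same artifact; the terminology gap, not the concept, is what still separates the two camps.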
Nokia is focused on four fundamental objectives: maintaining its leadership in radio, growing professional services capabilities, winning with innovative telco cloud and SDN solutions, and extending its presence into the Internet of Things. Quality, innovation, partnering and automation are the drivers it will leverage in order to meet these objectives.
Nokia’s three business areas – Nokia Networks, HERE and Nokia Technologies – are united by a common set of strengths: technological competencies, innovation capabilities, software expertise, a strong brand, trust, a lean operating model and intellectual property. The new Nokia has aligned these three areas under a common model that stresses operational excellence.
Nokia held its annual industry analyst conference on December 2-3, 2014 in Boston, which has been the customary venue for the past four years. This year, there was an undeniable feeling of optimism and confidence that topped prior years’ events, perhaps because the company has put the challenges of restructuring and uncertainty behind it and now has a solid, executable plan to take the next step in the progression of the new Nokia (which is less than a year old). Continue reading “Nokia 2014 Analyst Conference: Great Lengths to Show How Blue Looks Different Than Purple and Orange”→
Not all parts of the service provider network infrastructure can benefit from virtualization; analysis of the IP and optical core and the aggregation layer, for example, points toward the continued use of specialized hardware/silicon and networking software.
The operational model service providers are pursuing is more about service agility than about running everything on an x86 blade; dedicated hardware equipped with high-performance network processors remains the ideal choice to support high-speed/high-density 100G Ethernet and optical super-channels.
As operators seek to add programmability and agility to their services (and, thus, their networks), the debate over which functions should be virtualized and which should continue to run on specialized hardware/silicon will clearly go on, even as vendors such as Intel and Cisco push the envelope, ratcheting up the performance of their x86-based platforms with multi-core architectures and new high-performance development environments. Virtualization certainly makes sense for anything that is heavily weighted toward compute and storage, and for data plane applications that require moderate throughput. However, in the IP core network, where the demand is to support multiple terabits of throughput, custom solutions comprised of vendor-specific silicon and hardware will continue to provide a viable, if not the only, solution. Continue reading “Specialized Network Devices Will Not Be Going Away (Anytime Soon)”→