The future Internet will run on virtual infrastructure, will have built-in monitoring functions, and will be largely encrypted. In this article, I will describe the developments I expect in each of these areas in 2017.
The growing importance of SDN/NFV-based architectures in carrier networks changes the requirements for monitoring solutions. For fully effective monitoring, network operators must now be able to monitor not only traffic between physical interfaces but also logical interfaces all the way up to the application layer (Layer 7). This requires a virtual probe function integrated in the NFV infrastructure (NFVI).
This new probe function, built directly into the infrastructure, will be especially important for service providers as they migrate toward SDN- and NFV-based architectures.
The problem with current probes and taps
Existing approaches have a critical limitation: physical probes and taps can only see traffic that crosses physical interfaces, which leaves VM-to-VM (east-west) traffic inside the NFV infrastructure invisible.
Why a Virtual Probe?
A built-in virtual probe can monitor both external physical interfaces and VM-to-VM communications. It is a software entity that can be attached to logical or physical interfaces and instantiated as a VM, a container, or a process belonging to the hypervisor hosting the VMs. In this way, the virtual probe function provides visibility into traffic between virtual network elements.
A virtual probe monitoring virtual network functions reduces the CAPEX and OPEX associated with the monitoring solution by using standard off-the-shelf hardware rather than proprietary appliances. Enabling virtual probes to aggregate dynamic counters as early as possible in the processing chain further reduces the complexity and cost of analytics solutions.
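The idea of aggregating dynamic counters early in the processing chain can be sketched in a few lines. The following Python sketch is purely illustrative (the class name and record format are invented, not from any Qosmos product): a probe aggregates per-flow packet and byte counters so that the analytics back-end receives compact flow records instead of raw packets.

```python
from collections import defaultdict

class FlowCounter:
    """Toy per-flow counter aggregation: packets are folded into
    5-tuple-keyed counters as they are observed, and only compact
    aggregate records are exported to the analytics layer."""

    def __init__(self):
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def observe(self, src_ip, dst_ip, src_port, dst_port, proto, length):
        # Aggregate as early as possible: one dict update per packet.
        key = (src_ip, dst_ip, src_port, dst_port, proto)
        self.flows[key]["packets"] += 1
        self.flows[key]["bytes"] += length

    def export(self):
        # Emit one aggregated record per flow, then reset the table.
        records = [(k, v["packets"], v["bytes"]) for k, v in self.flows.items()]
        self.flows.clear()
        return records
```

The point of the design is volume reduction: thousands of packets on the same flow collapse into a single record, which is what keeps the downstream analytics solution simple and cheap.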
Communication service providers are looking for next-generation solutions based on SDN and NFV, and in particular for ways to leverage the OPNFV architecture. This means that their suppliers, the networking vendors, need an efficient framework for developing these new, carrier-grade, high-performance VNFs. Until now, they have had only basic building blocks such as Linux, iptables, Open vSwitch (OVS), or Intel DPDK, so development remains complex and costly. In addition, it has traditionally been complex and costly for developers to embed real-time traffic visibility in the form of Deep Packet Inspection (DPI).
Enter Vector Packet Processing (VPP): making it easier to develop new, high-performance networking products
VPP is a high-performance packet-processing stack that runs on commodity CPUs. This virtual switch module was open-sourced by Cisco in early 2016 as part of FD.io (“Fido”), a Linux Foundation project focused on solving new networking challenges. VPP has a track record of high performance, flexibility, and a rich feature set. For the networking industry, it is a disruptive new technology with the potential to lower both cost and risk for teams developing a new generation of virtualized networking applications.
There is now an opportunity to use VPP as a framework to build applications faster and to improve VNF performance. Prototyping by Qosmos R&D has yielded promising results: 1) several stateful applications can coexist on a single VPP instance, and 2) it is possible to scale from small devices such as CPEs all the way up to core VNFs. When combined with Deep Packet Inspection (DPI), VPP is well suited to firewalling and performance-monitoring applications.
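VPP's core idea is to push a whole vector (batch) of packets through each node of a processing graph, so per-node overhead such as function calls and instruction-cache misses is amortized across the batch. The toy Python sketch below only illustrates that concept; the real VPP is a C framework with its own node API, and the node names and fake headers here are invented for illustration.

```python
# Toy illustration of vector packet processing: each graph node handles a
# whole vector of packets at once instead of one packet at a time.

def ethernet_input(vector):
    # Strip a fake 14-byte Ethernet header from every packet in the vector.
    return [pkt[14:] for pkt in vector]

def ip4_lookup(vector):
    # Tag each packet with a fake forwarding decision.
    return [("fwd", pkt) for pkt in vector]

def run_graph(vector, nodes):
    # Drive the entire vector through each node in turn; the node's code
    # stays hot in the instruction cache for the whole batch.
    for node in nodes:
        vector = node(vector)
    return vector
```

For example, `run_graph([b"\x00" * 14 + b"payload"] * 4, [ethernet_input, ip4_lookup])` processes four packets with just two node invocations, which is the batching effect behind VPP's throughput.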
Adding some DPI spice to VPP
VPP in itself is good, but our practical experience shows that it must be complemented with real-time traffic visibility: DPI software linked to shared flow tables, fully integrated into OPNFV, and managed through standard tools such as OpenStack for orchestration and OpenDaylight (ODL) as the controller.
In a nutshell, by combining VPP with ready-to-use DPI software, developers can work in a DevOps mode to accelerate time to market for new, high-performance and application-aware VNFs.
Why is traffic encryption on the rise?
Encryption on the public Internet is steadily rising, with some estimates showing that over 70% of traffic will be encrypted by the end of 2016. A few content providers (e.g. Facebook, YouTube, and Netflix) are responsible for most of the encrypted traffic. Overall, this is a positive evolution toward protecting privacy on the Internet, a trend that has accelerated since Snowden’s revelations about NSA interception activities.
Similar encryption trends can be observed in data centers, with Yahoo, Google, and Microsoft encrypting all their data center traffic. In the enterprise, a third of traffic is now encrypted, covering both in-house applications (email, web apps) and cloud-based services.
Encrypted traffic can be classified
It is important to remember that encryption does not make traffic undetectable; it only means that the content remains private. Advanced techniques can still classify encrypted traffic, enabling service providers to continue to perform policy enforcement, optimize traffic, and ensure a good user experience. Here are a few examples of encrypted traffic classification techniques, along with their accuracy and limitations.
Example 1: Classifying traffic encrypted with SSL/TLS (e.g. https)
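One common way to classify TLS-encrypted traffic is to read the Server Name Indication (SNI) hostname, which is sent in clear text in the ClientHello (per RFC 6066). The sketch below is a minimal, best-effort parser for illustration only, not production DPI code and not the specific technique the article's author had in mind; it ignores many edge cases (fragmented records, TLS 1.3 encrypted SNI).

```python
import struct

def extract_sni(record: bytes):
    """Best-effort extraction of the SNI hostname from a raw TLS
    ClientHello record; returns None if it cannot be found."""
    try:
        if len(record) < 5 or record[0] != 0x16:   # not a TLS handshake record
            return None
        pos = 5
        if record[pos] != 0x01:                    # not a ClientHello
            return None
        pos += 4                                   # handshake type + 3-byte length
        pos += 2 + 32                              # client version + random
        pos += 1 + record[pos]                     # session ID
        pos += 2 + struct.unpack_from("!H", record, pos)[0]  # cipher suites
        pos += 1 + record[pos]                     # compression methods
        ext_total = struct.unpack_from("!H", record, pos)[0]
        pos += 2
        end = pos + ext_total
        while pos + 4 <= end:                      # walk the extensions
            ext_type, ext_len = struct.unpack_from("!HH", record, pos)
            pos += 4
            if ext_type == 0x0000:                 # server_name extension
                name_len = struct.unpack_from("!H", record, pos + 3)[0]
                return record[pos + 5 : pos + 5 + name_len].decode("ascii")
            pos += ext_len
    except (struct.error, IndexError, UnicodeDecodeError):
        pass
    return None
```

Matching the extracted hostname against known services (e.g. `*.netflix.com`) then yields an application label without ever decrypting the payload.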
Example 2: Classifying encrypted P2P traffic
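P2P protocols often encrypt or obfuscate their payloads, so classification typically relies on behavioral features instead: a P2P host contacts many distinct peers, mostly on high ephemeral ports. The sketch below illustrates that family of heuristics; the thresholds and function name are invented for the example and are not tuned values from the article.

```python
from collections import defaultdict

def p2p_suspects(flows, min_peers=10, high_port_ratio=0.8):
    """Flag hosts whose connection pattern looks P2P-like: many distinct
    remote peers, mostly on high (>= 1024) ports. `flows` is an iterable
    of (src_host, dst_host, dst_port) tuples."""
    peers = defaultdict(set)
    high_ports = defaultdict(int)
    totals = defaultdict(int)
    for src, dst, dport in flows:
        peers[src].add(dst)
        totals[src] += 1
        if dport >= 1024:
            high_ports[src] += 1
    return {h for h in peers
            if len(peers[h]) >= min_peers
            and high_ports[h] / totals[h] > high_port_ratio}
```

Behavioral heuristics like this trade some accuracy (a busy web crawler can look P2P-like) for robustness against payload encryption, which is why they are usually combined with other signals.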
Example 3: Classifying Skype
Thanks to advanced classification techniques, traffic optimization, policy enforcement, and user experience are largely unaffected by encryption. This means that communication service providers can continue to leverage Layer 7 visibility to ensure service quality and manage resource utilization, while respecting subscriber privacy!
By Erik Larsson, VP Marketing, Qosmos – Article first published on October 7th 2016 in The Fast Mode