5G is transforming communications technology, enabling unprecedented data transfer speeds and high-performance remote computing. Because the 5G network is built as a cloud-native application, it offers communication service providers (CSPs) advantages on several fronts:
- Speed: the network's software-based functions can be developed, upgraded and replaced several times a day
- Agility: the network can be deployed within minutes rather than months
- Efficiency: CSPs are aiming for a 10-fold improvement in how efficiently the network can be scaled and migrated to take advantage of cloud economics
- Robustness: zero-downtime resilience comes from the network's design and built-in automation
The Challenges of Transforming to 5G
But to reap the rewards inherent in 5G technology, CSPs need to dramatically transform their key capabilities in terms of mobility, low latency, high data rates, extreme reliability and vast scale. 5G network engineering, operations and scaling have to differ substantially, across every layer, from all previous generations of mobile technology. The underlying network technologies required for 5G include:
- Spectrally efficient, dense and ultra-reliable low-latency radio networks
- Strong security and authentication framework
- Network Function Virtualization – Cloud computing-based networks where network functions share resources dynamically and independently of geographical location
- Software-Defined Networking to separate user plane and control plane functions and to support network slicing, the creation of multiple virtual networks tailored to the specific characteristics of the services being offered (see the sketch following this list)
- Orchestration and management to fully automate fulfillment and assurance of services
- Multi-access edge computing to bring services closer to the network edge
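To make the slicing and orchestration items above more concrete, here is a minimal, hypothetical sketch of slice-style isolation on a Kubernetes cluster (Kubernetes itself is introduced in the next section), using the Python kubernetes client: each slice gets its own namespace and resource quota. Real 5G network slicing spans the radio, transport and core layers and is far richer than this; the slice name and limits below are illustrative only.

```python
# Illustrative sketch only: one way a CSP might isolate per-slice workloads on a
# Kubernetes cluster is a dedicated namespace with its own resource quota.
# The slice name and limits are hypothetical.
from kubernetes import client, config

def create_slice_namespace(slice_name: str, cpu_limit: str, mem_limit: str) -> None:
    """Create an isolated namespace and quota for a hypothetical network slice."""
    config.load_kube_config()  # assumes a reachable cluster and local kubeconfig
    core = client.CoreV1Api()

    # The namespace acts as the isolation boundary for the slice's network functions.
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=slice_name))
    )

    # Cap the compute the slice can consume so one slice cannot starve another.
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{slice_name}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"limits.cpu": cpu_limit, "limits.memory": mem_limit}
        ),
    )
    core.create_namespaced_resource_quota(namespace=slice_name, body=quota)

if __name__ == "__main__":
    create_slice_namespace("slice-urllc-factory", cpu_limit="32", mem_limit="64Gi")
```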
Edge computing and Kubernetes
Edge computing brings compute resources and the data they need closer to each other, increasing availability and speed while guarding against latency and performance issues. By shortening data processing times, it meets the growing demand for faster processing across connected devices.
To fulfill this potential, the 5G network-as-application will need a different execution environment from first-generation virtual network functions (VNFs), which run in virtual machine (VM)-based clouds. Ultra-reliable low-latency communications, network slicing, edge services, and converged access all hinge on CSPs' adoption of cloud-native technology and containers. Modern, cloud-native applications run in lightweight containers controlled by an orchestrator. In this ecosystem, Kubernetes is rapidly becoming the de facto standard for container orchestration. Nokia, for example, adopted Kubernetes as early as 2018 and credits it with much of the success of its foray into 5G.
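Where container-based execution differs most visibly from VM-based VNFs is in how a network function is packaged and declared. The sketch below, which assumes the Python kubernetes client and a hypothetical containerized user-plane function image (example.com/upf:1.0), shows a function deployed as a set of lightweight, replicated containers with a liveness probe so the orchestrator can restart unhealthy instances.

```python
# A minimal sketch, assuming the Python "kubernetes" client and a hypothetical
# containerized network function image; this is not a real 5G user-plane function,
# just an illustration of the container-based execution environment described above.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="upf",
    image="example.com/upf:1.0",  # hypothetical CNF image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "4Gi"},
        limits={"cpu": "4", "memory": "8Gi"},
    ),
    # The liveness probe lets the orchestrator restart the function if it stops responding.
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        period_seconds=5,
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="upf", labels={"app": "upf"}),
    spec=client.V1DeploymentSpec(
        replicas=3,  # several lightweight instances instead of one large VM
        selector=client.V1LabelSelector(match_labels={"app": "upf"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "upf"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```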
Kubernetes is, essentially, a container orchestration engine, tasked with managing the container-based infrastructure needed to support 5G networks and related services. Kubernetes gives CSPs workloads that self-remediate, scale on demand, and automate the microservices lifecycle. From a business perspective, Kubernetes can help reduce operational costs and increase the efficiency of engineering teams.
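As an illustration of scaling on demand, the sketch below (again assuming the Python kubernetes client, and reusing the hypothetical upf Deployment from the previous example) attaches a HorizontalPodAutoscaler so that Kubernetes adds or removes replicas with CPU load instead of waiting for an operator to intervene.

```python
# A sketch of "scale on demand": a HorizontalPodAutoscaler targeting the hypothetical
# "upf" Deployment from the previous example.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="upf-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="upf"
        ),
        min_replicas=3,
        max_replicas=20,
        # Add replicas when average CPU utilization rises above 70%.
        target_cpu_utilization_percentage=70,
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```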
By separating the infrastructure and application layers, Kubernetes supports a system with fewer dependencies, one that can readily accommodate new features in the application layer. Telecoms are opting for Kubernetes for the resilience, flexibility, scalability, and automation inherent in its architecture. But as Kubernetes integrates into 5G, CSPs need to develop their container networking expertise.
Staying on top of a new network
Kubernetes and cloud-native computing represent a big step forward for 5G's potential; in combination, they are the kind of advancement that spells digital transformation. But this transition also depends on the confidence telcos can build in these new technologies by creating a monitoring environment with the transparency needed to identify and mitigate issues as soon as they arise.
Monitoring distributed environments has never been easy. While solving some of the key challenges of running distributed microservices at scale, Kubernetes has also introduced new ones. The growing adoption of microservices makes logging and monitoring more demanding, since it involves a large number of distributed, diverse applications constantly communicating with each other. On one hand, a single glitch can bring down an entire process; on the other, identifying failures is becoming increasingly difficult. It is not surprising that engineers cite monitoring as a major obstacle to adopting Kubernetes.
Manual alerts and static thresholds are a non-starter when it comes to Kubernetes. CSPs run multiple clusters, each with a large number of services, which makes static alerts impractical: normal values fluctuate across every region, data center and cluster. Manual or even semi-autonomous monitoring platforms will inevitably produce alert storms (too many false positives) or miss key events (false negatives).
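To get a feel for the raw signal a monitoring platform has to digest, the sketch below (assuming the Python kubernetes client and access to a single cluster) streams every Kubernetes event through the watch API. In a CSP environment this stream runs continuously across many clusters, which is why hand-tuned thresholds on top of it do not hold up.

```python
# A minimal sketch of the event firehose behind the alerting problem: stream every
# Kubernetes event in one cluster via the watch API and print a summary line.
from kubernetes import client, config, watch

config.load_kube_config()
core = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(core.list_event_for_all_namespaces, timeout_seconds=60):
    obj = event["object"]
    print(
        f"{obj.last_timestamp} {obj.involved_object.kind}/{obj.involved_object.name}: "
        f"{obj.reason} - {obj.message}"
    )
```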
By adopting an AI monitoring system, your organization can use machine learning to track millions of Kubernetes events in real time and alert you only when needed. Anodot's Autonomous Monitoring solution creates a comprehensive view by monitoring both the Kubernetes environment and the applications themselves, helping to bulletproof your operations. It is vertical-agnostic and well suited to end-user applications such as IoT, e-commerce, manufacturing, retail, digital entertainment, fintech and more. Anodot automatically illuminates critical blind spots for the shortest time to detection and resolution, so even when transitioning to new technologies, CSPs never miss another incident and can rely on a system where every alert counts.
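To make the contrast with static thresholds concrete, here is a toy illustration (not Anodot's algorithm, just the general idea of a learned baseline): a rolling mean and standard deviation adapt to each metric's own behaviour, so the same logic can watch pod restarts in one cluster and error rates in another without hand-set limits.

```python
# Toy illustration of a learned baseline versus a static threshold.
# This is NOT Anodot's method; it only shows why per-metric, adaptive baselines
# remove the need for hand-tuned limits.
from collections import deque
from math import sqrt

class RollingBaseline:
    def __init__(self, window: int = 288, z_threshold: float = 4.0):
        self.values = deque(maxlen=window)  # recent samples for this one metric
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if the new sample is anomalous relative to recent history."""
        anomalous = False
        if len(self.values) >= 30:  # wait for enough history before alerting
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.z_threshold
        self.values.append(value)
        return anomalous

# Example: feed per-minute error counts from one service in one cluster.
baseline = RollingBaseline()
for count in [3, 4, 2, 5, 3, 4, 3, 2, 4, 3] * 4 + [42]:
    if baseline.observe(count):
        print(f"anomaly: {count} errors/min is far outside the learned baseline")
```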