Oct 26, 2021 3:00 AM

Why traditional IP networking is wrong for the cloud

Legacy networking approaches don’t align with the way that cloud providers create services or access, and they only introduce more complexity. Move to the cloud, but leave your traditional networking behind.


It has been remarkable to witness the enterprise transition to the cloud. In just a few short years, a cloud strategy has become a given at most large companies. But in my experience, customers and partners have not been shy about the pains of the cloud journey, particularly when it comes to cloud networking.

One of the biggest challenges: the assumption that traditional IP networking from the legacy stack will support their cloud journey. In hindsight, customers are discovering that these inherited approaches only introduce more complexity—that they’re not aligned with the way that cloud providers create services or access.

The general strategy makes sense. Enterprises take their traditional IP networking tools, purpose-built for the data center, and turn their virtual versions loose on the cloud. But networking is only half—or rather one-third—of the battle. Enterprises also need to consider security, application performance, and cost as they move to the cloud.

Here are the (somewhat technical) reasons why IP-only networking strategies—with tangled webs of VPCs, VNETs, and firewalls—will not work for connecting to and within the cloud, and why enterprises need a fundamentally new approach to cloud networking.

Application observability is limited

Of course, the IP networking layer does provide a way to connect your data center to the cloud. However, one of the main challenges of legacy networking is that it provides limited visibility into applications in the cloud—the lifeblood of enterprises today and arguably the primary driver behind cloud adoption.

At Layer 7, or the so-called application layer, enterprises have a holistic view of what takes place at that level (applications and collections of services) as well as in the stack below, such as at TCP and UDP ports and IP endpoints. By operating with the traditional stack (i.e., the IP layer) alone, enterprise teams have a substantially harder time viewing what is above them in the stack. They have a view of the network alone, and blind spots for everything else.

Why does this matter? For one, it can significantly increase remediation time when performance problems occur. Enterprises need to understand how their cloud infrastructure behaves in relation to each application, and to A/B test configurations so that they align with application performance goals. At the network level, however, enterprises can only apply blanket performance and security policies across all applications. What happens when part of your service or application is broken? Without visibility into the entire infrastructure stack, it can be very difficult to pin down exactly where the problem is.

Performance is an afterthought

In a similar way, enterprises must account for different latency, performance, and security requirements for different applications. For instance, even one application might require three separate versions—staging, QA, and production—that have different performance and latency goals. You may not need to optimize for performance for applications in staging, whereas you absolutely would with production workloads.

At the IP network layer, it isn’t possible to achieve these differentiated goals. However, at the application layer, it is possible to apply consistent policies for different application types across even a spectrum of clouds, cloud providers, or on-premises instances. In fact, by using cloud features like URL rewrite or FQDN filtering together with DNS snooping, enterprises can achieve these objectives without touching IPs at all. Furthermore, teams can apply rate limiting or throttling based on application profiles for any application type (HTTP/HTTPS), or steer traffic using GSLB-style functions—no fancy tooling required.
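To make the idea concrete, here is a rough Python sketch of what keying policy off application names rather than IP addresses can look like. The FQDNs, environments, and limits are made-up placeholders, not a particular provider’s API:

```python
# Hypothetical sketch: policies keyed to application FQDNs, not IP addresses.
# The FQDNs, environments, and limits below are illustrative placeholders.
from dataclasses import dataclass
from fnmatch import fnmatch
from typing import Optional

@dataclass
class AppPolicy:
    environment: str           # staging, QA, or production
    max_requests_per_min: int  # simple throttle per application profile
    optimize_latency: bool     # apply performance tuning only where it matters

# Rules follow the application name, so they survive cloud IP churn.
POLICIES = {
    "checkout.prod.example.com": AppPolicy("production", 6000, True),
    "*.staging.example.com":     AppPolicy("staging", 300, False),
}

def policy_for(fqdn: str) -> Optional[AppPolicy]:
    """Match a request's FQDN (exact or wildcard) to its application policy."""
    for pattern, policy in POLICIES.items():
        if fnmatch(fqdn, pattern):
            return policy
    return None

print(policy_for("api.staging.example.com"))    # staging policy, no latency tuning
print(policy_for("checkout.prod.example.com"))  # production policy, latency-optimized
```

Because the rules reference names rather than addresses, staging and production workloads can get different performance treatment even as the underlying cloud IPs rotate.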

Operational complexity increases

Beyond this—as many customers and partners have shared—legacy networking can create substantial operational complexity in the cloud. For instance, enterprises are required to maintain hub-and-spoke gateways for each and every VPC or VNET needed to connect their data center to the cloud, or to connect different cloud environments.

Moreover, these connectivity meshes require otherwise unnecessary route and policy orchestration, posing further challenges. For one, network tools such as ping and traceroute can only provide IP-level diagnostics, with no way to address issues at the application layer. As mentioned, enterprises need deep application insights to optimize the network; port-level or protocol-level views alone are not sufficient. Specifically, enterprises need real user monitoring (RUM) combined with a pinpoint breakdown of user-to-application connectivity, including any hops in between that could cause issues.
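As a rough illustration of the difference, consider the kind of per-stage breakdown a RUM tool reports versus the single round-trip number you get from ping. The stage names and timings below are invented for the example:

```python
# Illustrative sketch of the user-to-application breakdown RUM-style tooling
# provides, versus the single round-trip time an IP-level tool reports.
# Stage names and timings are hypothetical examples, not real measurements.
SAMPLE_REQUEST = {        # milliseconds spent in each stage of one user request
    "dns_lookup": 42,
    "tcp_connect": 18,
    "tls_handshake": 95,
    "time_to_first_byte": 310,
    "content_download": 60,
}

def slowest_stage(stages: dict) -> str:
    """Point remediation at the stage contributing the most latency."""
    return max(stages, key=stages.get)

total = sum(SAMPLE_REQUEST.values())
print(f"total: {total} ms, worst stage: {slowest_stage(SAMPLE_REQUEST)}")
# -> total: 525 ms, worst stage: time_to_first_byte, an application-side delay
#    that ping or traceroute alone could not have isolated
```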

Meanwhile, with legacy networking, network segmentation is typically disconnected from business logic. Cloud IP addresses change frequently and are rarely contiguous, which inflates route tables, and address ranges often end up duplicated or overlapping across environments. As a result, teams constantly have to whitelist and blacklist IP ranges, even though user identities and application endpoint names in the cloud remain stable.
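Here is a small, hypothetical sketch of the two segmentation models side by side; the address ranges, user groups, and endpoint names are placeholders:

```python
# Hypothetical contrast between IP-range segmentation and identity-based
# segmentation. Addresses, groups, and endpoint names are made up.
from ipaddress import ip_address, ip_network

# Legacy model: allowlist IP ranges, which must be re-curated every time
# the cloud provider rotates or reassigns addresses.
ALLOWED_RANGES = [ip_network("10.20.0.0/16"), ip_network("172.31.0.0/24")]

def allowed_by_ip(source_ip: str) -> bool:
    return any(ip_address(source_ip) in net for net in ALLOWED_RANGES)

# Identity model: the rule references a user group and an application endpoint
# name, both of which stay stable while the underlying IPs churn.
ALLOWED_PAIRS = {("finance-team", "payroll.internal.example.com")}

def allowed_by_identity(user_group: str, endpoint: str) -> bool:
    return (user_group, endpoint) in ALLOWED_PAIRS

print(allowed_by_ip("10.20.5.7"))  # True today, fragile after the next IP rotation
print(allowed_by_identity("finance-team", "payroll.internal.example.com"))  # stable
```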

Similarly, platform-as-a-service (PaaS) tools and cloud-native services are not routed by IP; they are routed by resource name and URL. Finally, IPsec tunnels needed for regional and intercloud connectivity cannot scale, leaving companies prone to errors and security blind spots. Instead, mTLS or HTTPS is required for cloud and region interconnectivity. These secure connection protocols not only create more operational overhead—requiring networking expertise, engineering resources, and maintenance costs for each cloud—but also reduce the visibility of network flows within and between clouds to near zero.
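For a sense of what each of those interconnects entails, here is a minimal mTLS client sketch using Python’s standard ssl module. The gateway hostname and certificate file paths are placeholders, and every such connection implies provisioning, distributing, and rotating certificates on both sides, per cloud and per region:

```python
# Minimal sketch of a mutual-TLS (mTLS) client connection of the kind used to
# interconnect cloud regions. Hostname and certificate paths are placeholders.
import socket
import ssl

REMOTE_HOST = "gateway.eu-west.example.com"   # hypothetical regional gateway
REMOTE_PORT = 443

# Verify the remote gateway against a private CA, and present our own
# certificate so the gateway can authenticate this side as well.
context = ssl.create_default_context(cafile="ca-bundle.pem")
context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

with socket.create_connection((REMOTE_HOST, REMOTE_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=REMOTE_HOST) as tls_sock:
        print("negotiated", tls_sock.version(), "with", tls_sock.getpeercert()["subject"])
```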

Security can become a nightmare

Finally, managing cloud security becomes a serious challenge. Fundamentally, that challenge stems from having to manage numerous disparate network security policies and virtual firewalls across many cloud instances, regions, and providers, a task that becomes almost impossible as cloud-native workloads scale up and down. Network and security teams are left playing a never-ending, staggeringly complex game of whack-a-mole.

Of course, cloud-based workloads are centered on the application. Yet with legacy networking layers, it is not possible to segment your network based on application endpoints, to continuously authenticate users and devices, or to glean contextual and behavioral awareness data, all of which are key components of Zero Trust Network Access (ZTNA) strategies.
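As a toy example of what those ZTNA checks imply, here is a sketch that evaluates identity, device posture, and context together on every request; the field names and rules are illustrative assumptions, not a standard:

```python
# Toy sketch of per-request ZTNA-style authorization: user identity, device
# posture, and request context are all evaluated together, with default deny.
# The field names, groups, and endpoint below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    user_group: str
    device_compliant: bool     # e.g., disk encryption and patch level verified
    geo: str                   # coarse context signal
    endpoint: str              # application endpoint being requested

def authorize(req: AccessRequest) -> bool:
    """Evaluate identity, device posture, and context on every request."""
    if not req.device_compliant:
        return False
    if req.endpoint == "payroll.internal.example.com":
        return req.user_group == "finance-team" and req.geo in {"US", "EU"}
    return False  # default deny for anything not explicitly allowed

print(authorize(AccessRequest("alice", "finance-team", True, "US",
                              "payroll.internal.example.com")))  # True
```

None of these checks reference an IP address, which is exactly the point: the policy follows the user and the application endpoint, not the network underneath.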

Finally, IP address management, with potentially duplicate and overlapping IP addresses across cloud environments, creates security risks. At its core, cloud networking doesn’t happen with IP addresses; it happens with identity, i.e., namespaces and service endpoints. The upshot for enterprises is that cloud-native security requires a fundamentally new approach that operates at the application level.

The way forward for cloud networking

As enterprises move rapidly to the cloud, they are finding that traditional networking approaches provide a reliable way to connect, at least at first. But as adoption expands, enterprises are realizing that legacy networking brings exponentially rising complexity and no reliable way to manage application performance, security, and cost. It’s time to simplify cloud networking by shifting it to the application layer.

Ramesh Prabagaran is the CEO and co-founder of Prosimo, the Application Experience Infrastructure company, which lets enterprises embrace the cloud at scale, while improving performance and application uptime. The Prosimo platform understands applications, delivers cloud networking fundamentals and observability, while using embedded ML to deliver real-time recommendations for infrastructure expansion or contraction, performance optimizations, rapid remediation on issues, security improvements and cloud cost control.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.