This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
The flexibility of the cloud has driven IT to look closely at opportunities to replicate that agility in their own infrastructure and operations. Automation initiatives have optimized many layers of the computing stack, but application delivery services remain a last-mile problem as network teams find themselves hamstrung by inflexible legacy architectures.
Virtual load-balancing appliances, long viewed as the answer for software-driven infrastructure, have existed since the advent of virtualization. Yet they inherit most of the architectural challenges of legacy solutions, including limited scalability, lack of central management and orchestration, and performance limitations. What is needed instead is an application delivery architecture built on software-defined principles, one that logically separates the control plane from the data plane delivering the application services.
This model, with a central controller managing a distributed pool of software load balancers located across the data center and even multiple cloud environments, can offer many advantages:
- Both the control and data planes can run on commodity Intel architecture servers, providing cost-efficient network services.
- The data plane can be deployed on physical servers, VMs, or containers, enabling consistent application deployments across multiple cloud environments.
- Since virtual services are no longer confined to an appliance, they can be distributed on the data plane load balancers close to individual applications for easy isolation and horizontal scaling.
- The data plane can also continuously gather application telemetry and send it to the controller for analysis, yielding a real-time view of application performance and end-user experience. Because load balancers sit in the path of application traffic, they are strategically placed to derive meaningful application insights.
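A minimal sketch can make the separation concrete: a central controller deploys and tracks data-plane load balancer nodes, each of which balances traffic and records per-request telemetry for the controller to collect. All class and method names here are hypothetical illustrations, not any vendor's actual API.

```python
class LoadBalancerNode:
    """Data-plane instance: balances traffic and records telemetry."""
    def __init__(self, location):
        self.location = location
        self.backends = []
        self.telemetry = []   # per-request metrics captured in the data path
        self._next = 0

    def add_backend(self, addr):
        self.backends.append(addr)

    def handle_request(self, latency_ms):
        # Simple round-robin selection; real balancers use richer policies.
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        self.telemetry.append({"backend": backend, "latency_ms": latency_ms})
        return backend


class Controller:
    """Control plane: manages distributed nodes, aggregates their telemetry."""
    def __init__(self):
        self.nodes = {}

    def deploy_node(self, name, location):
        # Nodes could be physical servers, VMs, or containers in any cloud.
        self.nodes[name] = LoadBalancerNode(location)
        return self.nodes[name]

    def collect_telemetry(self):
        # Pull metrics from every node for a real-time performance view.
        return {name: node.telemetry for name, node in self.nodes.items()}


controller = Controller()
node = controller.deploy_node("dc1-lb", "private-dc")
node.add_backend("10.0.0.5")
node.add_backend("10.0.0.6")
first = node.handle_request(latency_ms=12)
second = node.handle_request(latency_ms=9)
```

The key point the sketch captures is that scaling the data plane is just deploying more nodes; the controller remains the single point of management and analysis.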
Given the importance of data-driven development for application teams and fast troubleshooting of network issues for operations teams, these granular application insights can help IT decision makers create or alter services. For example, an online retailer can use the system to generate a business dashboard for the CIO after Black Friday or Cyber Monday, providing a snapshot of how its web application performed, including average and peak number of transactions, average transaction throughput, number of shopping cart abandonments, most common client device types accessing the application, total number of end users by region, and much more. In this way, the architecture expands the possible services beyond load balancing to a complete suite of application services. Let us explore these application services in the context of modern applications.
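The dashboard example boils down to aggregating raw per-request telemetry into business metrics. The sketch below, with illustrative field names and made-up sample records, shows how the metrics named above could be derived from request logs:

```python
from collections import Counter

# Hypothetical per-request telemetry records as a data-plane proxy might
# emit them; the field names are assumptions for illustration only.
requests = [
    {"region": "us-east", "device": "mobile",  "bytes": 2048, "cart_abandoned": True},
    {"region": "us-east", "device": "desktop", "bytes": 4096, "cart_abandoned": False},
    {"region": "eu-west", "device": "mobile",  "bytes": 1024, "cart_abandoned": False},
]

def dashboard(records):
    """Roll raw request records up into CIO-level summary metrics."""
    return {
        "total_requests": len(records),
        "avg_throughput_bytes": sum(r["bytes"] for r in records) / len(records),
        "cart_abandonments": sum(r["cart_abandoned"] for r in records),
        "top_device": Counter(r["device"] for r in records).most_common(1)[0][0],
        "users_by_region": dict(Counter(r["region"] for r in records)),
    }

summary = dashboard(requests)
```

In a real deployment the records would stream continuously from the data plane to the controller rather than sit in a list, but the aggregation logic is the same idea.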
The new application delivery architecture
As enterprises become more app-centric, they are devising strategies to roll out applications and updates faster using architectures such as microservices. Container technology facilitates the creation of microservices-based applications by giving developers the ability to break what were once large, monolithic apps into smaller pieces. Each microservice – the catalog, payment, inventory management or security, for example – that belongs to the larger app may run on multiple containers and each of those containers is effectively a network endpoint that must be managed and deployed. Containers enable a new level of development speed and agility, but they greatly complicate the network challenge of delivering and provisioning apps and updates.
With tens or hundreds of containers to be managed across clouds, you need an elastic, software-defined approach to distribute load balancing and proxy services across the microservices cluster. Placing a proxy on each host or server enables you to monitor and direct interactions between microservices, giving application developers visibility into application components and the ability to debug applications.
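The per-host proxy's value for visibility comes from sitting between every pair of communicating microservices. A toy sketch, with hypothetical service names and an in-process stand-in for real network traffic, shows how mediating each call lets the proxy record who called whom and how long it took:

```python
import time

class ServiceProxy:
    """Toy per-host proxy: mediates service-to-service calls and logs them."""
    def __init__(self, services):
        self.services = services   # service name -> callable handler
        self.call_log = []         # record of src, dst, and elapsed time

    def call(self, src, dst, payload):
        start = time.perf_counter()
        result = self.services[dst](payload)   # forward to the destination
        elapsed_ms = (time.perf_counter() - start) * 1000
        # Every interaction is observed in the data path, giving developers
        # a trace of inter-service traffic for debugging.
        self.call_log.append({"src": src, "dst": dst, "ms": elapsed_ms})
        return result


# Hypothetical microservices from the larger retail app described above.
proxy = ServiceProxy({
    "inventory": lambda item: {"item": item, "in_stock": True},
    "payment":   lambda order: {"order": order, "status": "authorized"},
})
stock = proxy.call("catalog", "inventory", "sku-42")
charge = proxy.call("checkout", "payment", "order-7")
```

A production proxy would of course forward real HTTP or TCP traffic rather than local function calls, but the observation point is the same: the proxy sees every interaction, so it can expose them.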
The granular control and visibility into application interactions also enables the creation of security policies for microsegmentation of services to meet compliance requirements. In addition, the centrally managed proxy service can provide service discovery as new services and updates are introduced to apps – a new payment system, for instance – by providing DNS services to container-based applications, since the controller can integrate via APIs with container cluster management frameworks such as Kubernetes or Mesos.
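The service-discovery flow can be sketched as a small registry: the cluster manager notifies the controller when container endpoints appear, and the controller answers DNS-style lookups for applications. The names and event shape below are assumptions for illustration, not the real Kubernetes or Mesos integration API.

```python
class ServiceRegistry:
    """Toy controller-side registry backing DNS for containerized apps."""
    def __init__(self):
        self.records = {}   # service name -> list of endpoint IPs

    def on_endpoint_added(self, service, ip):
        # Invoked when the cluster manager (e.g. Kubernetes) reports that a
        # new container backing this service has started.
        self.records.setdefault(service, []).append(ip)

    def resolve(self, service):
        # DNS-style lookup used by container-based applications; returns
        # every known endpoint for the service, or nothing if unregistered.
        return self.records.get(service, [])


registry = ServiceRegistry()
# A new payment service is rolled out on two containers.
registry.on_endpoint_added("payment.svc.local", "10.1.2.3")
registry.on_endpoint_added("payment.svc.local", "10.1.2.4")
ips = registry.resolve("payment.svc.local")
```

Because registration is driven by cluster events rather than manual configuration, new services and updates become discoverable the moment their containers start.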
The software-defined application delivery architecture consolidates many L4-L7 services, complementing and in many cases eliminating multiple point solutions for IPAM, DNS, application performance monitoring, microsegmentation and East-West firewalls. More importantly, the architecture exposes most network functions as REST APIs, enabling programmable application services and configurations that automate the common network management tasks that would otherwise delay the provisioning of applications and updates.
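As a hedged sketch of what that programmability might look like, the snippet below assembles the JSON body an automation script could POST to a controller to provision a virtual service. The endpoint path and payload schema are hypothetical; a real controller's API reference would define the actual fields.

```python
import json

def build_virtual_service(name, vip, port, backends):
    """Assemble the JSON body for a hypothetical POST /api/virtualservice."""
    return {
        "name": name,
        "vip": vip,
        "port": port,
        "pool": [{"ip": ip} for ip in backends],
    }

payload = build_virtual_service(
    "web-app", "192.0.2.10", 443, ["10.0.0.5", "10.0.0.6"]
)
body = json.dumps(payload)
# In practice the body would be sent with an authenticated HTTP client, e.g.:
#   requests.post("https://controller/api/virtualservice", json=payload, ...)
```

The point is less the specific schema than the workflow: provisioning a load-balanced service becomes one API call a CI/CD pipeline can make, instead of a ticket to the network team.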
Flexible infrastructure choices and application architectures are changing the way that modern enterprises run computing. As businesses become more app-centric, IT teams are under pressure to meet aggressive application deployment goals and time-to-market considerations. Traditional application delivery mechanisms don’t address the needs of modern cloud-native applications that span private data centers and cloud platforms.
These new realities have spawned software-defined application delivery architectures, which can deliver the flexibility, scalability, performance monitoring and automation of routine IT tasks that enterprises require to address those new and expanding challenges.
Ranga Rajagopalan is CTO at Avi Networks