Application Delivery Controller

A Little History

Application Delivery got its start in the form of network-based load balancing hardware, which remains the essential foundation on which Application Delivery Controllers (ADCs) operate. The second iteration of purpose-built load balancing (following application-based proprietary systems) materialized in the form of network-based appliances. These are the true founding fathers of today’s ADCs. Because these devices were application-neutral and resided outside of the application servers themselves, they could load balance using straightforward network techniques. In essence, these devices would present a “virtual server” address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server while performing bi-directional network address translation (NAT).

Figure 1: Network-based load balancing appliances.
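To make the NAT mechanics concrete, here is a minimal Python sketch of that forwarding behavior. The addresses and the simple round-robin choice of “most appropriate” server are assumptions for illustration only, not details of any particular appliance.

```python
# Illustrative sketch of NAT-based load balancing. Addresses and the
# round-robin policy are assumptions for the example, not product details.

VIRTUAL_SERVER = ("192.0.2.1", 80)      # address presented to clients

REAL_SERVERS = [("172.16.1.10", 80),    # back-end "real" servers
                ("172.16.1.11", 80)]

nat_table = {}                          # client address -> chosen real server
next_server = 0                         # naive round-robin cursor

def forward(client_addr):
    """Pick a real server for a new client connection and record the
    mapping so return traffic can be translated back (bi-directional NAT)."""
    global next_server
    real = REAL_SERVERS[next_server % len(REAL_SERVERS)]
    next_server += 1
    nat_table[client_addr] = real       # outbound translation
    return real

def reply_source(client_addr):
    """On the return path, the source is rewritten so the client only
    ever sees the virtual server address."""
    return VIRTUAL_SERVER if client_addr in nat_table else None

print(forward(("198.51.100.7", 51200)))       # -> ('172.16.1.10', 80)
print(reply_source(("198.51.100.7", 51200)))  # -> ('192.0.2.1', 80)
```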

With the advent of virtualization and cloud computing, the third iteration of ADCs arrived as software-delivered virtual editions intended to run on hypervisors. Virtual editions of application delivery services have the same breadth of features as those that run on purpose-built hardware, and they remove much of the complexity from moving application services between virtual, cloud, and hybrid environments. They allow organizations to quickly and easily spin up application services in private or public cloud environments.

It would certainly help if everyone used the same lexicon; unfortunately, every vendor of load balancing devices (and, in turn, ADCs) seems to use different terminology. With a little explanation, however, the confusion surrounding this issue can easily be alleviated.

Node, Host, Member, and Server

Most ADCs have the concept of a node, host, member, or server; some have all four, but they mean different things. There are two basic concepts that they all try to express. One concept—usually called a node or server—is the idea of the physical or virtual server itself that will receive traffic from the ADC. This is synonymous with the IP address of the physical server and, in the absence of an ADC, would be the IP address that the server name (for example, www.example.com) would resolve to. We will refer to this concept as the host.

The second concept is a member (sometimes, unfortunately, also called a node by some manufacturers). A member is usually a little more defined than a server/node in that it includes the TCP port of the actual application that will be receiving traffic. For instance, a server named www.example.com may resolve to an address of 172.16.1.10, which represents the server/node, and may have an application (a web server) running on TCP port 80, making the member address 172.16.1.10:80. Simply put, the member includes the definition of the application port as well as the IP address of the physical server. We will refer to this as the service.
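The distinction maps naturally onto two small data types. This is only a sketch: the names Host and Service follow the conventions adopted in this article, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    """The physical or virtual server itself, identified by IP address."""
    ip: str

@dataclass(frozen=True)
class Service:
    """An application on a host: the host's IP plus the TCP port."""
    host: Host
    port: int

    def __str__(self) -> str:
        return f"{self.host.ip}:{self.port}"

web = Service(Host("172.16.1.10"), 80)  # the member/service from the text
print(web)                              # -> 172.16.1.10:80
```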

Why all the complication? Because the distinction between a physical server and the application services running on it allows the ADC to interact individually with the applications rather than with the underlying hardware or hypervisor. A host (172.16.1.10) may have more than one service available (HTTP, FTP, DNS, and so on). By defining each application uniquely (172.16.1.10:80, 172.16.1.10:21, and 172.16.1.10:53), the ADC can apply unique load balancing and health monitoring based on the services instead of the host. However, there are still times when being able to interact with the host (for low-level health monitoring, for instance, or to take a server offline for maintenance) is extremely convenient.
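Here is a minimal sketch of what per-service versus host-level monitoring might look like, assuming simple TCP-connect health checks. Real ADCs offer far richer monitors (HTTP content checks, ICMP, agents, and so on); the host check below, approximated as “any service answers,” is an assumption for illustration.

```python
import socket

HOST_IP = "172.16.1.10"          # one host, three services
SERVICE_PORTS = [80, 21, 53]     # HTTP, FTP, DNS

def service_up(ip: str, port: int, timeout: float = 1.0) -> bool:
    """Per-service health check: a TCP connect to one specific port, so
    each application is monitored independently of the others."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def host_up(ip: str) -> bool:
    """Host-level check, approximated here as 'any service answers'.
    A real ADC might use ICMP or a lower-level probe instead."""
    return any(service_up(ip, port) for port in SERVICE_PORTS)
```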

Most load balancing technology uses some concept to represent the host, or physical server, and another to represent the services available on it—in this case, simply host and services.

Pool, Cluster, and Farm

Load balancing allows organizations to distribute inbound application traffic across multiple back-end destinations, including cloud deployments. It is therefore a necessity to have the concept of a collection of back-end destinations. Clusters, as we will refer to them (also known as pools or farms), are collections of similar services available on any number of hosts. For instance, all services that offer the company web page would be collected into a cluster called “company web page” and all services that offer e-commerce services would be collected into a cluster called “e-commerce.”

The key element here is that all systems have a collective object that refers to “all similar services” and makes it easier to work with them as a single unit. This collective object—a cluster—is almost always made up of services, not hosts.
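In code, such a collective object can be as simple as a named mapping from cluster to member services. The cluster names and member addresses below are illustrative.

```python
# Clusters (pools/farms) as collections of services, not hosts.

clusters = {
    "company-web-page": ["172.16.1.10:80", "172.16.1.11:80"],
    "e-commerce":       ["172.16.1.20:443", "172.16.1.21:443"],
}

def members_of(cluster: str) -> list[str]:
    """Work with 'all similar services' as a single unit."""
    return clusters[cluster]

print(members_of("company-web-page"))  # -> ['172.16.1.10:80', '172.16.1.11:80']
```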

Although not always the case, today the term virtual server often means a server hosting virtual machines. It is important to note that, like the definition of services, a virtual server usually includes the application port as well as the IP address. The term “virtual service” would be more in keeping with the IP:Port convention; but because most vendors, ADC and cloud alike, use virtual server, this article uses virtual server as well.

Putting all of these concepts together makes up the basic steps in load balancing. The ADC presents virtual servers to the outside world. Each virtual server points to a cluster of services that reside on one or more physical hosts.

Figure 2: Application Delivery comprises four basic concepts—virtual servers, clusters, services, and hosts.
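For concreteness, here is a minimal Python sketch tying the four concepts together. Everything in it is illustrative rather than drawn from a real deployment, and plain round robin stands in for whatever algorithm a real ADC would use.

```python
# End-to-end sketch of the structure in Figure 2:
# virtual server -> cluster -> services -> hosts.

hosts = {"web1": "172.16.1.10", "web2": "172.16.1.11"}

clusters = {
    "company-web-page": [(hosts["web1"], 80), (hosts["web2"], 80)],
}

virtual_servers = {
    ("192.0.2.1", 80): "company-web-page",  # what clients actually connect to
}

rr_position = {}                            # round-robin cursor per cluster

def pick_service(vs_addr):
    """Resolve a connection to a virtual server down to one concrete
    service (host IP + port)."""
    cluster = virtual_servers[vs_addr]
    members = clusters[cluster]
    i = rr_position.get(cluster, 0)
    rr_position[cluster] = i + 1
    return members[i % len(members)]

print(pick_service(("192.0.2.1", 80)))      # -> ('172.16.1.10', 80)
print(pick_service(("192.0.2.1", 80)))      # -> ('172.16.1.11', 80)
```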

While the diagram above may not be representative of a real-world deployment, it does provide the elemental structure for continuing a discussion about application delivery basics.

Read What is Load Balancing? if you haven’t already, and check out ADC Part II, coming January 26!

ps

Peter is an F5 evangelist for security, IoT, mobile and core. His background in theatre brings the slightly theatrical and the fairly technical together to cover training, writing, and speaking, along with overall product evangelism for F5. He’s also produced over 350 videos and recorded over 50 audio whitepapers. After working in professional theatre for 10 years, Peter decided to change careers. Starting out with a small VAR selling Netopia routers and the Instant Internet box, he soon became one of the first six Internet Specialists for AT&T, managing customers on the original AT&T WorldNet network.

With that telco background, he moved to Verio to focus on access and IP security, along with web hosting. After losing a deal to Exodus Communications (now Savvis) for technical reasons, the customer still wanted Peter as their local SE contact, so Exodus made him an offer he couldn’t refuse. As only the third person hired in the Midwest, he helped Exodus grow from an executive suite to two enormous datacenters in the Chicagoland area, working with such customers as Ticketmaster, Rolling Stone, uBid, Orbitz, Best Buy and others.
