Building A Next-Generation Load Balancer Automation Platform For F5 And NGINX: How We Did It

It’s no secret that the application tier of any modern, highly available cloud architecture must be extremely reliable and secure. If anything goes wrong, your team will be challenged to quickly find and fix the problem. Fortunately, with a few basic configurations in your virtual network, you can isolate components and provide them with the resources they need to run optimally. F5 load balancers expose a REST API (iControl REST) that allows you to configure load-balancing rules and other properties of a given instance programmatically.

These properties include:

  • A backend service name used as a reference while provisioning load balancers
  • The type of machine or virtualization software the backend runs on
  • Backend service availability

The list goes on and on. All of this information is useful if you’re looking to implement automated configuration management at scale, or if you have multiple F5 nodes in different locations or Virtual Machines (VMs) spread across multiple providers throughout your organization’s infrastructure.
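
For illustration, here is a minimal sketch of reading some of these properties over F5’s iControl REST API; the BIG-IP hostname, credentials, and pool name are placeholders, so substitute your own:

```python
# A minimal sketch: read backend-service properties from a BIG-IP via the
# iControl REST API. Hostname, credentials, and pool name are placeholders.
import requests

BIGIP = "https://big-ip.example.com"     # assumed management address
AUTH = ("admin", "admin-password")       # use a dedicated service account in practice
POOL = "app_pool"                        # assumed backend service (pool) name

# List the members of the pool that backs the service.
resp = requests.get(
    f"{BIGIP}/mgmt/tm/ltm/pool/~Common~{POOL}/members",
    auth=AUTH,
    verify=False,   # BIG-IP management interfaces often use self-signed certificates
)
resp.raise_for_status()

for member in resp.json().get("items", []):
    # "state" and "session" together describe each backend's availability.
    print(member["name"], member.get("state"), member.get("session"))
```

The same API exposes the other properties listed above, so the output of a script like this can feed directly into a configuration-management pipeline.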

F5 and NGINX familiarity

If you’re just getting started with load balancing, you might want to familiarize yourself with the components involved and their purpose. Load balancing is the distribution of traffic across multiple backend servers. In the world of cloud computing, this means that traffic is spread across virtual machines (VMs). The load balancer acts as a “proxy” between the Internet and those VMs, and each VM only receives traffic when it is healthy and has capacity for it. F5 load balancers are a good choice for this purpose because they offer high throughput and can receive traffic from multiple sources. They can also route traffic to different VMs based on the source IP address, making them ideal for scaling applications that need to handle a variety of traffic types. F5 load balancers handle a wide range of protocols and are particularly good at handling HTTP traffic, allowing you to scale your applications effectively.

There’s a good chance you’ve encountered F5 load balancers if you’ve ever deployed an application to AWS, Azure, or Google Cloud Platform (GCP). And if you’ve ever worked in DevOps or CloudOps, you’ve probably used them more than a few times while delivering applications. You can easily use F5 load balancers to front-end a variety of applications, including those running on NGINX, PHP, Python, Java, .NET, Ruby, and other stacks. To make the integration with F5 easier, you can use AppViewX ADC+ or the load balancer’s REST API directly. AppViewX ADC+ provides north- and south-bound integration with various application services to enable central management of F5 load balancer configurations and change requests.

The goal: Load balancing with an automation framework

OK, so now you’re familiar with the load balancer’s basic functions and a few load-balancing-related terms. That said, you still might not know exactly how you want to use them. Maybe your team is looking to boost application availability without worrying about downtime. Or perhaps you just want to ensure high availability for the load balancers themselves. Whatever it is, you can rest assured that F5 provides plenty of flexibility and scalability.

Load Balancer as a Service – Achieving 99.99% uptime with Automation

The goal here is to create an automation-based solution that takes advantage of the various components of the virtual network to provision your load balancers with custom configurations based on the availability of backend services and the backend VM types. Doing so ensures that your load balancers are provisioned with the optimal configuration. And if one of the VMs goes down, you can simply replace it with another one.

You can use AppViewX ADC+ to provide DNS-based virtual hosting and front-end load balancing for F5 or NGINX. DNS-based virtual hosting means that a DNS name such as “www.example.com” resolves to the load balancer, which then maps each request to an actual backend server. You can use this setup to distribute your web traffic across multiple servers. This approach is a good fit for organizations that have relatively few servers and want to load balance the traffic across them.

You can also load balance with a DNS-based setup and a custom automation API that makes it easy to build complex business logic. For example, you can create rules that change the health check URL for an application if the backend service is experiencing high latency. You can also create rules that mark the backend service as failed and send alerts if it has been down for a specific amount of time.

The solution: DNS-based Virtual Hosts and a custom automation API

Using the AppViewX custom API, you can create a rule that lets F5 load balance based on a single DNS name or Virtual Host. This Virtual Host can be a single DNS name, or it can be a combination of different names, as long as they all point to the same machine. To keep things simple, let’s say that you have a single physical host with three virtual machines, and you want to create a load balancer with one Virtual Host that uses one of the VMs as the backend service. First, you’d create a custom automation rule that says the Virtual Host should use the host’s IP address. Then, you’d create a DNS rule that points the DNS name at that Virtual Host. With these two rules, F5 will route all requests for that DNS name to the Virtual Host, which in turn sends them to the host’s IP address.
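
Under the hood, a rule like this boils down to a handful of objects on the BIG-IP itself. The following is a rough, hedged sketch (not the AppViewX API) of what provisioning the pool and the Virtual Host could look like directly against F5’s iControl REST API; all names and addresses are illustrative:

```python
# A hedged sketch of the objects such a rule provisions: a pool containing the
# backend VM, and a virtual server (the "Virtual Host") listening on the IP
# address that the DNS name resolves to. All names/addresses are placeholders.
import requests

BIGIP = "https://big-ip.example.com"
session = requests.Session()
session.auth = ("admin", "admin-password")
session.verify = False   # self-signed management certificate assumed

# 1. Pool with one backend VM as its member.
session.post(f"{BIGIP}/mgmt/tm/ltm/pool", json={
    "name": "app_pool",
    "monitor": "http",
    "members": [{"name": "10.0.0.11:80"}],
}).raise_for_status()

# 2. Virtual server on the address that www.example.com resolves to.
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
    "name": "vs_www_example_com",
    "destination": "203.0.113.10:80",
    "ipProtocol": "tcp",
    "pool": "app_pool",
    "sourceAddressTranslation": {"type": "automap"},
}).raise_for_status()
```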

If load balancing is the distribution of traffic across multiple backend servers, then Virtual Hosts are the endpoints that receive that traffic. They behave like servers, but they are “virtual” in the sense that they do not have to be physical machines. For example, if you have an application hosted behind a load balancer, the address clients actually connect to is the load balancer, not the application server. The load balancer then connects users to the application, which is what makes the Virtual Host a “DNS Virtual Host”: a DNS name that answers for the application rather than for a specific machine. Virtual Hosts are extremely useful for building highly scalable applications.

They can be used to handle multiple domains, for example by having a single application answer requests for several domains. They can also be leveraged to build distributed architectures, enabling you to scale an application across multiple physical machines and/or VMs.
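
As a toy illustration of the idea (not a production setup), the following standard-library Python server answers for several DNS names from a single listener by inspecting the Host header, which is the essence of virtual hosting; the domain names are placeholders:

```python
# A toy illustration of virtual hosting: one listener, several DNS names,
# dispatched on the Host header. A load balancer does the same thing at scale.
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "www.example.com": b"main site",
    "api.example.com": b"api backend",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        body = SITES.get(host, b"unknown virtual host")
        self.send_response(200 if host in SITES else 404)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), VirtualHostHandler).serve_forever()
```

You can exercise it locally with, for example, curl -H "Host: api.example.com" http://localhost:8080/.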

You can use DNS-based virtual hosting with a load balancer and then create a custom API gateway to provide a health check URL that the front-end load balancer can use to decide whether to accept the traffic. With this setup, the load balancer will accept traffic if the health check returns a green response.

The API gateway behind the front-end load balancer’s health check URL is a custom API. The API receives a request from the load balancer, then checks the backend service’s own health check URL. If the backend service is healthy, the API returns a green response and the front-end load balancer continues to send it traffic. If the backend service is unhealthy, the API returns a red response, and the front-end load balancer stops sending traffic to that backend.
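
A minimal sketch of such a health-check gateway, using only the Python standard library, might look like the following; the backend health URL, the port, and the green/red convention (HTTP 200 vs. 503) are assumptions:

```python
# A minimal health-check gateway: the front-end load balancer probes /health
# on this service, which in turn checks the backend's own health URL.
# The backend URL and the 200/503 "green/red" convention are assumptions.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_HEALTH_URL = "http://10.0.0.11/healthz"   # hypothetical backend check

class HealthGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        try:
            with urllib.request.urlopen(BACKEND_HEALTH_URL, timeout=2) as r:
                healthy = (r.status == 200)
        except OSError:
            healthy = False
        # 200 = "green": keep sending traffic; 503 = "red": stop sending traffic.
        self.send_response(200 if healthy else 503)
        self.end_headers()
        self.wfile.write(b"green" if healthy else b"red")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), HealthGateway).serve_forever()
```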

DNS-based Virtual Hosts

With DNS-based virtual hosting in front of the load balancer, the custom API gateway exposes the health check URL that the front-end load balancer consults before accepting traffic; traffic is accepted only while the health check returns a green response. This approach is a good fit for organizations that have relatively few servers and want to load balance the traffic across them.

Custom Automation API

As mentioned above, load balancers can be configured programmatically on a given F5 instance. This is useful if you have a single F5 load balancer managing traffic for a single URL and want to load balance based on the availability of a particular service. But what if you have a bunch of different URLs, some hosted on different services, and you want to configure a single F5 load balancer for all of them? F5 provides an API on which you can build custom automation rules. These rules let you specify custom properties, such as the VM type, the availability of a backend service, and so on. With these rules, you can load balance based on multiple conditions. For example, you can load balance based on the availability of a particular backend service, and if a server becomes unavailable, F5 can automatically replace the machine with another one.

You can also use the custom automation API to build complex business logic. You can create rules that change the health check URL for an app if the backend service is experiencing high latency, or rules that mark the backend service as failed and send alerts if it has been down for a specific amount of time.
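
As a rough sketch of that kind of rule logic, the loop below probes a backend, switches to a deeper health-check URL when latency is high, and alerts if the service stays down too long. The thresholds, URLs, and the stubbed update hook are assumptions rather than AppViewX or F5 specifics:

```python
# A hedged sketch of latency- and availability-based rule logic.
# Thresholds, URLs, and the update hook below are illustrative assumptions.
import time
import requests

BACKEND = "http://10.0.0.11/healthz"   # hypothetical backend health URL
LATENCY_THRESHOLD = 0.5                # seconds
DOWN_ALERT_AFTER = 120                 # seconds

def set_health_check_url(path):
    # Integration point: in a real rule this would update the monitor on the
    # load balancer (via the automation API). Stubbed here for illustration.
    print(f"health check URL set to {path}")

down_since = None
while True:
    try:
        start = time.monotonic()
        up = requests.get(BACKEND, timeout=2).ok
        latency = time.monotonic() - start
    except requests.RequestException:
        up, latency = False, None

    if up:
        down_since = None
        # High latency: point the monitor at a deeper (more thorough) check.
        set_health_check_url("/deep-health" if latency > LATENCY_THRESHOLD else "/healthz")
    else:
        down_since = down_since or time.monotonic()
        if time.monotonic() - down_since > DOWN_ALERT_AFTER:
            print("ALERT: backend service has been down for over 2 minutes")  # hook up email/Slack here
    time.sleep(10)
```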

As organizations increasingly move towards DevOps and continuous delivery, the need to automate the application delivery process becomes more and more critical. Centralized management of application delivery can help streamline and automate the process, making it more efficient and less error-prone.

There are a number of tools and platforms available that can help with centralized management of application delivery, such as Puppet, Chef, BIG-IQ, and Ansible, or a master orchestrator like AppViewX ADC+. Organizations need to carefully evaluate their specific needs and requirements to choose the right solution. But regardless of which solution is chosen, centralized management of application delivery can significantly improve the efficiency and quality of the application delivery process.

Load Balancer Automation: Workflow as a Solution

Whether you have F5, NGINX, AVI, or A10, there is a platform that can centrally control all your load balancers without the need for a developer. AppViewX ADC+ is a platform that can resolve your service requests related to ADC/load balancer management, DNS/IPAM automation, WAF automation, and more. Using such a platform reduces the effort of managing your devices and helps ensure compliance by giving you clear visibility into the system. After all, why handle it manually when you can simply automate the process? Automating your load-balancing process can save you a lot of time and hassle. For example, when a server goes down and you have to manually redirect requests to another server, it can be time-consuming and sometimes even stressful.

However, load balancing is a necessary part of running a large network, one that can make your business more efficient. Load balancing is a process that is simple to follow but can be difficult to set up. By using a workflow solution, you can be sure that your load balancers are always up and running correctly. This ensures that your applications are always available and that potential downtime is avoided.

Workflow solutions can automate many of the tasks associated with load balancing, including provisioning new load balancers, configuring them for specific applications, and monitoring their performance. Automating these processes can help reduce the need for manual intervention and make sure that your load-balancing infrastructure is always running smoothly.
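
For example, one common workflow step, taking a failed backend out of rotation and bringing a standby in, could be expressed against F5’s iControl REST API roughly as follows; pool and member names are placeholders:

```python
# A hedged sketch of a single workflow step: disable a failed pool member and
# enable its replacement on the BIG-IP. Names and addresses are placeholders.
import requests

BIGIP = "https://big-ip.example.com"
AUTH = ("admin", "admin-password")
POOL = "~Common~app_pool"

def set_member(member, enabled):
    """Force a pool member into or out of rotation via iControl REST."""
    state = ({"state": "user-up", "session": "user-enabled"} if enabled
             else {"state": "user-down", "session": "user-disabled"})
    requests.patch(
        f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members/~Common~{member}",
        auth=AUTH, verify=False, json=state,
    ).raise_for_status()

set_member("10.0.0.11:80", enabled=False)   # drain the failed VM
set_member("10.0.0.12:80", enabled=True)    # bring the standby VM in
```

A workflow engine would wrap steps like this with approvals, retries, and monitoring hooks so that the change is auditable and repeatable.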

Conclusion

Load balancing is important, but many organizations struggle to implement it properly. It’s easy to forget to test your application’s resilience against unplanned outages. And without failover mechanisms in place, you could be putting your organization at risk of significant downtime. By implementing DNS-based Virtual Hosts and a custom automation API, you can ensure that your load balancers are always configured optimally, minimizing service downtime and maximizing application availability.

While load balancing is an essential part of any cloud architecture, it’s a difficult problem to solve. In addition, you may need to scale out your infrastructure at some point – providing you with another load-balancing challenge. F5 provides a simple and straightforward load-balancing solution that you can use with a variety of cloud providers.

This approach is well suited to quick prototyping and testing scenarios. F5 load balancers can scale up and down easily, providing flexibility when you need it. In addition, the load balancer’s advanced security features, such as TLS termination and SNAT, provide an additional layer of defense against attacks. Because it supports both DNS-based and Virtual Host-based load balancing, it is easy to configure and manage, and its extensive feature set and low cost make it an excellent choice for load balancing. These advantages make F5 load balancers a great choice for nearly any scenario, from quick prototyping and testing to scaling out.

F5 Big-IP is a good solution for load balancing when coupled with an automation and orchestration solution such as AppViewX ADC+.

AppViewX ADC+ is a powerful yet easy-to-use application services orchestrator that automates the provisioning, deployment, and management of a variety of devices, including F5 BIG-IP, NGINX, and others. It is an ideal solution for automating load balancing, as it provides a complete feature set for managing BIG-IP devices, including:

  • Automated provisioning and deployment of BIG-IP devices
  • Comprehensive management of BIG-IP devices
  • Real-time monitoring and reporting of BIG-IP device performance
  • Automatic failover and recovery

AppViewX ADC+ makes it easy to manage BIG-IP devices in any environment, whether it is on-premises, in the cloud, or hybrid. It is fully compatible with all major cloud platforms, including AWS, Azure, and Google Cloud.

Tags

  • Custom Automation API
  • DevOps
  • DNS Automation
  • DNS-based Virtual Hosts
  • F5 Load Balancers
  • WAF Automation

About the Author

Tarshant Jain

Explorer & Hustler: ADC+

Helping network engineers and app teams simplify their application delivery with the power of automation, logic, and global wisdom.
