What are Application-Aware Service Proxies?

To understand application-aware service proxies, we first need to understand the need for them. For that, we need to delve into microservice architecture.

What are Microservices?

Around 2012, Martin Fowler and James Lewis helped coin a new term for the SOA environments they observed that seemed to be working. They called these “microservices” in an effort to differentiate them from the SOA and ESB implementations that had taken hold in most organizations. Read about the differences between APIs and microservices.

Microservices seemed like a good way to deconstruct applications into smaller services. Each team could work on its own service without impacting the overall system and deliver code changes at its own cadence. This would facilitate smaller, more frequent deployments to gauge customer feedback.

What is a microservices architecture?

A microservices architecture is a way of working that decomposes large systems into constituent smaller-scoped services that work together to implement business functionality. The services are limited in size and scope for a very specific reason: to facilitate independent deployment, so changes can be made without impacting the rest of the system.

Microservices are most effective when coupled with strong automation and delivery practices. Cloud platforms and DevOps practices work together to enable faster software delivery to production and shorter lead times.

Many organizations practice continuous automated deployment, which makes it economical to deploy microservices rapidly while analyzing their adoption at the same time.

Imagine being able to deliver microservice enhancements to your system hundreds of times per day instead of once a quarter. The ability to experiment and learn at a rapid rate differentiates you from your peers in your market segment.

Leveraging the benefits of microservices architecture

The following are some key concerns that need to be addressed to leverage the benefits of a microservices architecture:

  • Continuous integration and automated continuous deployment
  • Keeping faults from jumping isolation boundaries
  • Building applications/services capable of responding to changes in their environment
  • Building systems capable of running in partially failed conditions
  • Understanding what’s happening to the overall system as it constantly changes and evolves
  • Controlling the runtime behaviors of the system
  • Implementing strong security as the attack surface grows
  • Lowering the risk of making changes to the system
  • Enforcing policies about who/what/when can use the components in the system

Making Microservice Interaction resilient

Some patterns have evolved to help make microservices more resilient to unplanned, unexpected failures (a few of these are sketched in code after this list):

  • Client-side load balancing: give the client the list of possible endpoints and let it decide which to call.
  • Service discovery: a mechanism for finding the periodically updated list of healthy endpoints for a particular logical service.
  • Circuit breaking: shedding load for a period of time to a service that appears to be misbehaving.
  • Bulkheading: limiting client resource usage with explicit thresholds (connections, threads, sessions, etc.) when making calls to a service.
  • Timeouts: enforcing time limitations on requests, sockets, liveness, etc., when making calls to a service.
  • Retries: retrying a failed request.
  • Retry budgets: applying constraints to retries; i.e., limiting the number of retries in a given period (e.g., can only retry 50% of the calls in a 10s window).
  • Deadlines: giving requests context about how long a response may still be useful; if the deadline has passed, disregard processing the request.
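
To make a few of these patterns concrete, here is a minimal sketch in Go that combines a per-attempt timeout (a deadline on each call), a bounded retry loop, and a crude circuit breaker that sheds load after consecutive failures. The endpoint URL, thresholds, and time windows are illustrative assumptions, not recommended values:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// breaker is a crude circuit breaker: after `threshold` consecutive
// failures it "opens" and sheds load for a cool-down period.
type breaker struct {
	failures  int
	threshold int
	openUntil time.Time
}

func (b *breaker) allow() bool { return time.Now().After(b.openUntil) }

func (b *breaker) record(failed bool) {
	if !failed {
		b.failures = 0
		return
	}
	b.failures++
	if b.failures >= b.threshold {
		b.openUntil = time.Now().Add(10 * time.Second) // shed load for a while
		b.failures = 0
	}
}

// callWithResilience combines a per-attempt timeout, a bounded retry
// loop, and the circuit breaker above.
func callWithResilience(ctx context.Context, b *breaker, url string) error {
	const maxAttempts = 3 // retry budget for this sketch
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if !b.allow() {
			return errors.New("circuit open: not calling upstream")
		}
		// Timeout: bound each attempt so a slow upstream cannot stall the caller.
		attemptCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
		req, err := http.NewRequestWithContext(attemptCtx, http.MethodGet, url, nil)
		if err != nil {
			cancel()
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode < 500 {
			resp.Body.Close()
			cancel()
			b.record(false)
			return nil // success
		}
		if err == nil {
			err = fmt.Errorf("upstream returned %s", resp.Status)
			resp.Body.Close()
		}
		cancel()
		lastErr = err
		b.record(true)
	}
	return lastErr
}

func main() {
	b := &breaker{threshold: 5}
	if err := callWithResilience(context.Background(), b, "http://localhost:9000/items"); err != nil {
		fmt.Println("request failed:", err)
	}
}
```

Writing this correctly once is manageable; writing and maintaining it consistently in every service, in every language a team uses, is exactly the burden the rest of this article addresses.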

Collectively, these types of patterns can be thought of as “application networking.” They have a lot of overlap with similar constructs at lower levels of the networking stack, except that they operate at the level of “messages” instead of “packets.”

Application-aware service proxies

One way to move these horizontal concerns into the infrastructure is to use a proxy.

A proxy is an intermediate infrastructure component that can handle connections and redirect them to appropriate backends. We use proxies all the time (whether we know it or not) to handle network traffic, enforce security, and load balance work to backend servers. For example, HAProxy is a simple but powerful reverse proxy for distributing connections across many backend servers. mod_proxy is a module for the Apache HTTP Server that also acts as a reverse proxy.
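
The core job of a reverse proxy fits in a few lines. Here is a minimal sketch using Go's standard library, in the same spirit as HAProxy or mod_proxy; the listen and backend addresses are assumptions for illustration:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The backend this proxy fronts; a single hardcoded address for the sketch.
	backend, err := url.Parse("http://127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}

	// httputil.NewSingleHostReverseProxy rewrites each inbound request
	// and forwards it to the backend, streaming the response back.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Accept client connections on :8080 and hand them to the proxy.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

Production proxies add load balancing across many backends, health checking, TLS, and observability on top of this basic forwarding loop.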

One service proxy that has emerged in the open-source community as a versatile, performant, and capable application-level proxy is Envoy Proxy.

Envoy Proxy

Envoy was developed at Lyft as part of its service-oriented architecture infrastructure. It is capable of implementing application resilience and other networking concerns outside of the application.

Envoy provides networking capabilities like retries, timeouts, circuit breaking, client-side load balancing, service discovery, security, and metrics collection without any explicit language or framework dependencies.

The power of Envoy is not limited to these application-level resilience aspects. Envoy also captures many application-networking metrics like requests per second, number of failures, circuit-breaking events, and more.

Benefits of using Envoy Proxy

By using Envoy, we can automatically get visibility into what’s happening between our services, which is where we see a lot of the unanticipated complexity.

Envoy Proxy forms the foundation for solving cross-cutting, horizontal reliability and observability concerns for a services architecture, and allows us to push these concerns outside of the applications and into the infrastructure.

We can deploy these service proxies alongside our applications so we get these features (resilience and observability) out of process from the application, but at a fidelity that is very application-specific.

In this model, applications that wish to communicate with the rest of the system do so by passing their requests to Envoy first, which then handles the communication upstream.
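
To make that flow concrete, here is a sketch of an application client whose outbound requests all go to a local Envoy listener first. In practice, service meshes usually intercept traffic transparently (for example, with iptables rules) rather than via an explicit proxy setting, and the sidecar address and service hostname below are assumptions:

```go
package main

import (
	"log"
	"net/http"
	"net/url"
)

func main() {
	// Assumed address of the local Envoy sidecar listener.
	sidecar, err := url.Parse("http://127.0.0.1:15001")
	if err != nil {
		log.Fatal(err)
	}

	// All outbound requests from this client are sent to the sidecar,
	// which then handles the actual communication with the upstream service.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(sidecar)},
	}

	// Hypothetical in-cluster service name used for illustration.
	resp, err := client.Get("http://catalog.default.svc.cluster.local/items")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```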

Service proxies can also do things like collecting distributed tracing spans so we can stitch together all the steps a particular request took. We can see how long each step took and look for potential bottlenecks or bugs in our system.
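With the proxies emitting the spans, the application's remaining job is typically just to forward a handful of trace headers from inbound requests onto its outbound requests so the spans can be stitched into one trace. A minimal sketch in Go, assuming Envoy's Zipkin/B3-style header names:

```go
package tracing

import "net/http"

// Trace-context headers commonly propagated with Envoy (Zipkin/B3 conventions).
var traceHeaders = []string{
	"x-request-id",
	"x-b3-traceid",
	"x-b3-spanid",
	"x-b3-parentspanid",
	"x-b3-sampled",
	"x-b3-flags",
}

// PropagateTraceHeaders copies trace context from the request a service
// received onto a request it is about to send upstream, so the sidecar
// proxies can attribute both hops to the same distributed trace.
func PropagateTraceHeaders(inbound, outbound *http.Request) {
	for _, h := range traceHeaders {
		if v := inbound.Header.Get(h); v != "" {
			outbound.Header.Set(h, v)
		}
	}
}
```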

If all applications talk through their own proxy to the outside world, and all incoming traffic to an application goes through our proxy, we’ve gained some important capabilities for our application without changing any application code. This proxy + application combination forms the foundation of a communication bus known as a service mesh.

We can deploy a service proxy like Envoy along with each instance of our application as a single atomic unit. For example, in Kubernetes, we can co-deploy a service proxy with our application in a single Pod. This kind of deployment pattern is known as a sidecar deployment in which the service proxy gets deployed to complement the main application instance.
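
As a sketch of that pattern, the following Go program uses the Kubernetes API types to build, and print the manifest for, a Pod that pairs an application container with an Envoy sidecar; the names, images, and ports are illustrative assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A single Pod deploying the application and its service proxy
	// together as one atomic unit: the sidecar pattern.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "catalog"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The main application instance (hypothetical image).
					Name:  "app",
					Image: "example/catalog:1.0",
					Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				},
				{
					// The Envoy sidecar deployed to complement it
					// (image tag is an assumption for the sketch).
					Name:  "envoy-sidecar",
					Image: "envoyproxy/envoy:v1.28.0",
					Ports: []corev1.ContainerPort{{ContainerPort: 15001}},
				},
			},
		},
	}

	manifest, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(manifest)) // the equivalent Kubernetes manifest
}
```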

What is a service mesh and what is the value?