Amplify API Management

Automatic deployment of an API Management demonstration on Azure – Part 1: Overview

Since version 7.5.3, Axway has been working hard to provide its API Management product in container mode, as well as tools to accelerate the adoption of these new technologies. Axway has adapted all components (API Gateway Manager, API Gateway, API Manager, API Portal, API Analytics) to be optimized for container technology. Overall, this mode externalizes some important features (scaling, self-healing, rolling updates) to a container orchestrator.

READ MORE: Tips for choosing the right API Portal.

In this article, I’ll provide an overview of the technologies, services and tools I’ve used to automate the deployment of an Axway API Management infrastructure on Azure Kubernetes Service (AKS). It’s a three-step process:

  1. Create the Container Infrastructure on Azure
  2. Configure the Container Infrastructure and prepare the API Management components
  3. Deploy the API Management solution

These three steps require specific permissions, but only the first step uses my account’s permissions. The other two steps use a Service Principal, which is an application account in Azure. This account is created manually at the beginning of the deployment.
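For reference, here is how such a Service Principal can be created with the Azure CLI (the name apim-demo-sp is a placeholder for this demo):

# Create the Service Principal and note the appId/password it returns
# ("apim-demo-sp" is a placeholder name)
az ad sp create-for-rbac --name apim-demo-sp --role Contributor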

READ MORE: How to connect an app to Microsoft Azure Active Directory API Management.

Now let’s start…

ARM template: Prepare the container infrastructure

First, I’ve scripted the creation of the container infrastructure with all the necessary Azure components. For this purpose, I’ve used Microsoft’s ARM template technology to build them all, but the same could be done with any other Infrastructure as Code tool of your choice.
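For reference, launching such an ARM deployment looks like this (the resource-group name and template file names are assumptions for this demo):

# Create a resource group and deploy the ARM template into it
az group create --name apim-demo-rg --location westeurope
az group deployment create --resource-group apim-demo-rg \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json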

Diagram 1 sets out how the main Azure components are created. Here is how the deployment model describes them:

Diagram 1 – Azure container infrastructure

  • Network. One virtual network with two subnets protected by a firewall. I use a /25 netmask to obtain 128 IP addresses, because in CNI mode each pod consumes one. The firewall rules allow specific ports to be reached from external connections.
  • Container Orchestrator. I use Azure Kubernetes Service (AKS), an out-of-the-box, pre-configured and managed solution. This is very useful when you want a pre-secured and highly scalable Kubernetes cluster without having to configure it yourself (which requires specific expertise). With AKS, you don’t have to manage master nodes and availability zones yourself. For this demo, I only use three worker nodes with the Standard_DS2_v2 VM size. The AKS cluster is deployed in CNI mode for higher performance, which fits API Gateway usage very well. I also use the standard Azure Load Balancer in front of the cluster. To increase security, it’s also possible to add the Azure Application Gateway WAF in front of the cluster.
  • Container Registry. I use Azure Container Registry (ACR) with a Basic plan to store Docker images and a specific Helm package. With a higher service plan, you can restrict access to internal connections only.
  • Storage accounts. I use the premium StorageClass in AKS to create managed disks dynamically. But this kind of storage doesn’t support simultaneous write access from multiple pods (components), so I use Azure Files Premium to share events, logs and analytics reports inside the application (see the StorageClass sketch after this list).
  • Virtual Machine. For the steps described in the next chapter, an install script starts automatically; this is only possible with a Custom Script Extension, which requires an available VM. I’m also going to use this VM to build new Docker images, in order to demonstrate the Helm upgrade and rollback features.
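As an illustration of the shared storage mentioned above, here is a minimal StorageClass sketch for dynamic Azure Files provisioning (the class name and SKU are assumptions based on the standard azure-file provisioner):

# Minimal StorageClass for dynamically provisioned Azure Files Premium shares
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-premium
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Premium_LRS
EOF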

Install script: Configure the container infrastructure and build Docker images

This install script first installs the required tools: Docker, the Azure CLI, and the Kubernetes and Helm clients.
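As a sketch, assuming an Ubuntu VM and the Helm v2 client used in this demo, that part of the script could look like this:

# Install Docker, the Azure CLI, kubectl and the Helm v2 client
sudo apt-get update && sudo apt-get install -y docker.io
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
sudo az aks install-cli
curl -LO https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz && sudo mv linux-amd64/helm /usr/local/bin/helm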

The main tasks to configure the Kubernetes cluster are:

  • Add RBAC permissions for each system on Kubernetes (Dashboard, Cert-Manager, etc.)
  • Create a dedicated Namespace for the demo.
  • Add all passwords and keys (Docker registry, application accounts, Azure Files account…) to Kubernetes Secrets. It’s also possible to tighten security with Azure Key Vault.
  • Set up a public static IP address in the second resource group, which Azure creates automatically to hold all worker-node resources (see the examples after this list).
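Hedged examples of these tasks (the namespace and registry come from this demo; the secret values, IP name and node resource-group name are placeholders):

# Store the ACR credentials as an image-pull secret in the demo namespace
kubectl create secret docker-registry acr-secret --namespace demo4 \
  --docker-server=demo4ctracr.azurecr.io \
  --docker-username=<sp-app-id> --docker-password=<sp-password>
# Reserve a public static IP in the node resource group created by AKS
az network public-ip create \
  --resource-group MC_apim-demo-rg_apim-demo-aks_westeurope \
  --name apim-demo-ip --allocation-method Static --sku Standard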

Whenever possible, I use Helm packages, because Helm is the best tool for efficient, repeatable operational deployments.

In this demo, I use Helm packages to deploy cert-manager and NGINX to secure ingress connections. You can see the commands used below (note that cert-manager v0.8 also requires its CRDs to be installed beforehand, as described in the cert-manager documentation):

helm repo add jetstack https://charts.jetstack.io
helm install --name cert-manager --namespace cert-manager --version v0.8.0 jetstack/cert-manager

helm install stable/nginx-ingress --namespace demo4 --set controller.replicaCount=2 \
--set controller.nodeSelector."beta.kubernetes.io/os"=linux \
--set defaultBackend.nodeSelector."beta.kubernetes.io/os"=linux \
--set controller.service.externalTrafficPolicy=Local \
--set controller.service.loadBalancerIP="WWW.XX.YYY.ZZZ" \
--set-string controller.config.use-http2=false \
--set-string controller.config.ssl-protocols=TLSv1.2 \
--set rbac.create=true

I use Let’s Encrypt (https://letsencrypt.org) to dynamically generate public certificates for each Axway component requiring external access (API Manager, API Portal…).
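For example, with the cert-manager v0.8 API installed above, a ClusterIssuer along these lines links the cluster to Let’s Encrypt (the issuer name and e-mail address are placeholders):

# ClusterIssuer sketch for Let's Encrypt with the NGINX ingress HTTP-01 solver
cat <<EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF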

Next, I build Docker images for Axway’s API Management solution; one image is required for each component. Axway’s Python scripts ease the image customization, for example:

  • Import specific API Gateway configuration (.fed file)
  • Deploy customer license
  • Define database connection parameters
  • Secure with specific certificates inside the cluster

Below is an example of the command used to build the API Gateway Manager Docker image:

python build_gw_image.py --out-image=apigw-mgr:7.6.2-SP4 \
--parent-image demo4ctracr.azurecr.io/baseImage:7.6.2-SP4 \
--domain-cert certs/DefaultDomain/DefaultDomain-cert.pem \
--domain-key certs/DefaultDomain/DefaultDomain-key.pem \
--domain-key-pass-file certs/DefaultDomain/pass.txt \
--license /sources/API_7.6_Docker_Demo.lic \
--fed configuration/demo.fed \
--merge-dir Dockerfiles/emt-nodemanager/apigateway
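Once built, each image is pushed to the registry so that AKS can pull it; a sketch using the registry name from this demo:

# Log in to ACR, tag the freshly built image and push it
az acr login --name demo4ctracr
docker tag apigw-mgr:7.6.2-SP4 demo4ctracr.azurecr.io/apigw-mgr:7.6.2-SP4
docker push demo4ctracr.azurecr.io/apigw-mgr:7.6.2-SP4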

Deploy the solution in 5 minutes

The last step is to describe the components to be deployed by Kubernetes in a Helm chart.

To do so, I’ve modified the example presented in this very good article to meet our specific needs (for example, the storage class for Azure Files Premium). This Helm chart is then pushed to the Azure Container Registry.
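Pushing the chart can be done with the Azure CLI’s Helm integration (a sketch; the chart directory and archive name are assumptions):

# Add the ACR-hosted Helm repo and push the packaged chart (Helm v2-era commands)
az acr helm repo add --name demo4ctracr
helm package apim-demo/
az acr helm push --name demo4ctracr apim-demo-7.6.2-SP4.tgz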

See below the command used to install Axway’s API Management solution:

helm install --name amplify-apim-demo --version 7.6.2-SP4 demo4ctracr/apim-demo

The Helm engine validates the package before deploying it on Kubernetes. At this point, you can connect to the Kubernetes Dashboard and look at how the product is built. The package contains conditions that deploy the components in the right order, as below:

  1. Cassandra
  2. MySQL for Analytics
  3. API Gateway Manager
  4. API Manager
  5. API Gateway
  6. API Analytics
  7. MySQL for API Portal
  8. API Portal

Diagram 2: Example of a full API Management deployment


Kubernetes applies all conditions declared in the package and pulls images automatically.
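To watch this happen, you can open the Kubernetes Dashboard through the Azure CLI or follow the pods from the command line (resource names are the ones assumed earlier in this article):

# Open the Kubernetes Dashboard via a local proxy
az aks browse --resource-group apim-demo-rg --name apim-demo-aks
# ...or watch the pods start in the demo namespace
kubectl get pods --namespace demo4 --watch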

In this demo, Cassandra and MySQL are deployed inside the Kubernetes cluster. Note that Axway recommends externalizing these two components outside of the Kubernetes cluster.

After a few minutes, the product is started and you can virtualize your first API.
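From here, the build VM from step one can produce new images, which lets you demonstrate the Helm upgrade and rollback features mentioned earlier (the target chart version is a placeholder):

# Upgrade the release to a newly pushed chart version, then roll back if needed
helm upgrade amplify-apim-demo demo4ctracr/apim-demo --version <new-version>
helm rollback amplify-apim-demo 1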

Enjoy!

Conclusion

This article aims to demonstrate how easy an efficient deployment on modern infrastructure can be. Docker, Helm and Kubernetes are key technologies to achieve this.

With Azure Kubernetes Service (AKS), Microsoft provides a convenient environment to start in minutes. If you’re interested in this subject, don’t hesitate to contact me.

Discover more about Deploying AMPLIFY API Gateway Kubernetes Helmcharts.