
Having A Strategy For Shipping Log Files Will Help Drive API Discovery

As we work with more of our customers in the area of API discovery, we are finding that having a strategy for shipping log files across the enterprise significantly contributes to the API discovery process. If you have access to log files from all essential backend applications in a single location, you already have greater visibility into which APIs are potentially being used across desktop, web, mobile, and network applications. Making logging an essential stop throughout the API journey provides observability into how things are currently operating, allowing us to map the landscape and begin evolving the overall API strategy across the enterprise.

While it may seem like logging is already a ubiquitous aspect of IT operations, in our experience it isn’t always centralized behind an enterprise-wide log shipping agenda, and it is never positioned to contribute directly to API discoverability. Log files contain all the signals we need to understand how existing web, mobile, and other device or network applications are putting APIs to work, revealing which internal, partner, and 3rd party APIs they use to do whatever they do on a daily basis. These signals represent the valuable resources being put to work across the enterprise, and can be used to paint an ongoing picture of the API infrastructure that exists across large, disparate groups.

The most common place to start when it comes to log shipping in the service of API discovery is at the web server level. Anywhere Apache, Nginx, or IIS is running, the logs should be shipped off to a central location using Filebeat or another log shipping solution, allowing all of the log files for a web server to be centrally stored, then accessed and audited for a variety of purposes. These web server log files can be sifted through, looking for the paths, headers, parameters, and other signals left behind when API calls are made, painting a comprehensive picture of what API infrastructure already exists using signals coming out of existing systems. The only change that needs to occur in most organizations is the centralized shipping and management of these log files, providing observability into how resources are being put to work across all web applications.
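For example, a minimal Filebeat configuration for shipping Nginx access logs to a central collector might look something like the sketch below. The hostname, paths, and field names are hypothetical placeholders, not a definitive setup:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log            # the web server access log to ship
    fields:
      app: storefront-web                    # hypothetical tag identifying the application
output.logstash:
  hosts: ["logs.internal.example.com:5044"]  # hypothetical central log collector
```

The same pattern applies to Apache or IIS logs; the point is simply that every web server’s output ends up in one searchable place.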

We’ve talked before about how we are using Filebeat to centralize log files from containerized Streamdata.io infrastructure. In future posts, we’ll talk more about how we are helping our customers use Filebeat to gain more visibility into where their valuable data resources are, and begin developing a strategy for how the enterprise API landscape is monitored, quantified, and ultimately directed as part of internal API discovery efforts. Log shipping will play a critical role in whether enterprise API discovery efforts succeed in understanding where all existing web API infrastructure is, as well as the backend and frontend dependencies that exist around this legacy architecture. This visibility will be the most important part of the microservices discussions already occurring across the enterprise, enabling groups who are shipping log files to better define, and ultimately decompose, the legacy monolith, while leaving those who aren’t to struggle with realizing their vision.

You can’t decompose and break down what you can’t see. If you don’t have a clear map of the enterprise landscape, you can’t confidently evolve your microservices strategy. And if you can’t refresh this map in real time, or near real time, going beyond compulsory, manual API discovery by developers, you just aren’t going to keep up with the ever-changing enterprise landscape. This is why log shipping is so important. It is how you gain more visibility, by taking advantage of the observability afforded by the existing outputs of the systems we use: log files. These log files already exist. The signals showing what web service and API infrastructure is in use are already available; we just need to ship them to a central location and begin working through the files looking for the relevant signals. How is your enterprise doing when it comes to making web server log files a top priority for log shipping and analysis? We’d love to learn more about your approach, or lack of a strategy, when it comes to moving this conversation forward across your enterprise group.
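As a rough sketch of what that sifting could look like, the following Python script tallies API-looking request paths from a centralized access log. The /api/ path marker and the access.log filename are assumptions for illustration; your infrastructure will leave its own signals:

```python
import re
from collections import Counter

# Matches the quoted request line in common/combined format access logs,
# e.g. "GET /api/v1/orders?page=2 HTTP/1.1"
LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')

def api_paths(log_file, marker="/api/"):
    """Tally request paths that look like API calls.

    The marker convention is an assumption; swap in whatever signal
    (path prefix, host header, content type) your APIs actually leave.
    """
    counts = Counter()
    with open(log_file) as f:
        for line in f:
            match = LOG_LINE.search(line)
            if match and marker in match.group("path"):
                # Drop the query string so variants of one endpoint group together
                counts[match.group("path").split("?", 1)[0]] += 1
    return counts

if __name__ == "__main__":
    for path, hits in api_paths("access.log").most_common(10):
        print(f"{hits:6d}  {path}")
```

Even a crude tally like this turns raw log files into a first map of which endpoints are actually in use, which is exactly the starting point for the discovery work described above.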
