Using The Poll Rate To Maximize Your API Rate Limits

One common challenge you will face when turning an existing web API into a stream is API rate limits. Many API providers have pretty strict rate limits in place to help ensure a certain quality of service across their API infrastructure, without having to increase their operating costs. Ideally these API providers would offer a Server-Sent Events (SSE) edition of their API for you to consume alongside their existing APIs, but until we can get their attention, you’ll have to operate within their existing limitations.
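For context, an SSE stream is just a long-lived HTTP response whose body is a series of `data:` lines, with a blank line separating events. A minimal sketch of parsing that wire format (the `parse_sse_events` helper and the sample payload are illustrative, not from any particular provider):

```python
def parse_sse_events(raw: str) -> list:
    """Parse the SSE wire format: events are separated by blank lines,
    and an event's payload is the concatenation of its `data:` lines."""
    events = []
    data_lines = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # Blank line ends the current event.
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # flush a final event with no trailing blank line
        events.append("\n".join(data_lines))
    return events

sample = 'data: {"price": 101}\n\ndata: {"price": 102}\n\n'
print(parse_sse_events(sample))  # ['{"price": 101}', '{"price": 102}']
```

A real SSE client would keep the HTTP connection open and feed chunks into a parser like this as they arrive, rather than parsing a complete string.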

To help you do so, the default poll rate for any application you set up is 60 seconds. Depending on what the restrictions are for the API you are proxying, you might hit the rate limits after streaming for a while. Ideally, API providers would publish their rate limits as part of their documentation, allowing you to do the math and set your poll rate to a sensible level, but unfortunately not all API providers do this. All you have to do is spend a little time thinking about the overall time you will be streaming an API, and do the math on how many requests that polling will make. You may have to adjust the polling rate up or down, depending on how long you will be streaming, the volume of updates being made, and how strict or loose an API provider’s rate limits are.

If you are having problems finding the sweet spot with an API provider, feel free to drop us a line; we are happy to help test and find the ideal rate limit. We also recommend messaging your API provider to let them know you are streaming their API. Feel free to CC us on the email (contact us if you don’t have our address) and we will help them understand how we can help them reduce not just their server load, but also their bandwidth costs. The best scenario is that the API provider becomes the customer and offers a streaming version of their API, but that isn’t always the case, which is why, as the API consumer, you can also fire up the streams you need without involving the API provider. You just have to find an acceptable rate limit to operate within.

In a real time world of APIs that operates in the clouds there should not be API rate limits. APIs should stream when they need to, operating as efficiently as they possibly can, and scaling as needed on the backend using cloud infrastructure. Unfortunately, many API providers are still operating in the same way they were a decade ago, and haven’t discovered streaming. We can help reduce the overhead on their highest-usage APIs, and make sure the most demanding customers are always taken care of, and not hitting the ceiling with rate limits.

**Original source: blog**