How To Avoid Hitting API Rate Limits Using TypeScript
Creating a TypeScript class to send requests in batches
Published on Sep 15, 2022 · 4 min read
Introduction
Rate limits are an important part of API security, helping to prevent malicious activity and reduce strain on server resources. However, for API users, they can also create headaches — and it’s not only less-than-perfect code that creates problems. Without a strategy in place, spikes in legitimate traffic can also lead to the dreaded 429 "Too Many Requests" error status.
So, how can we make sure we’re not sending too many requests? In this article, we’ll look into a simple pattern that will help us batch up our asynchronous requests into intervals, which can be configured to the specific rate limits of the APIs we’re working with. The examples below are written using TypeScript.
Building the Class Constructor
When working with a rate limit, we will typically be limited to a certain number of requests within a certain period: say, twenty requests every five seconds. Class syntax provides a useful way of managing our requests and allows us to create a separate instance for each API we may be working with.
At a minimum, our class should allow us to set the number of requests allowed in a given interval and the time (in milliseconds) each interval lasts.
class RequestScheduler {
  private queuedRequests = 0;
  private readonly requestsPerInterval: number;
  private readonly intervalTime: number;

  constructor({
    requestsPerInterval,
    intervalTime,
  }: {
    requestsPerInterval: number;
    intervalTime: number;
  }) {
    this.requestsPerInterval = requestsPerInterval;
    this.intervalTime = intervalTime;
  }
}
Notice that we have also created a property, queuedRequests, which we can use to keep track of the number of requests queued in a given interval.
Building the RequestScheduler class in this way allows us to create multiple instances to match different APIs’ requirements. For example:
const fasterScheduler = new RequestScheduler({
  requestsPerInterval: 100,
  intervalTime: 1000,
});

const slowerScheduler = new RequestScheduler({
  requestsPerInterval: 20,
  intervalTime: 5000,
});
Adding a Schedule Method
Next, let’s create a schedule method. It takes a function and, if that function exceeds the number of requests allowed in the current interval, adds a delay via setTimeout so that it waits for the next interval to start.
public async schedule(requestFn: Function) {
  let timeout = 0;
  if (this.queuedRequests >= this.requestsPerInterval) {
    timeout = this.intervalTime;
    this.queuedRequests = 0;
  }
  return new Promise((resolve) => {
    setTimeout(() => {
      this.queuedRequests++;
      resolve(requestFn());
    }, timeout);
  });
}
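As a side note, the Function type is loose: it accepts any callable, and TypeScript can't infer the resolved value, so callers get unknown back. A sketch of an alternative (not the article's final class) uses a generic signature to keep the result typed:

```typescript
class TypedScheduler {
  private queuedRequests = 0;
  private readonly requestsPerInterval: number;
  private readonly intervalTime: number;

  constructor({
    requestsPerInterval,
    intervalTime,
  }: {
    requestsPerInterval: number;
    intervalTime: number;
  }) {
    this.requestsPerInterval = requestsPerInterval;
    this.intervalTime = intervalTime;
  }

  // T flows from the request function through to the returned Promise,
  // so callers get a typed result instead of `unknown`.
  public schedule<T>(requestFn: () => Promise<T> | T): Promise<T> {
    let timeout = 0;
    if (this.queuedRequests >= this.requestsPerInterval) {
      timeout = this.intervalTime;
      this.queuedRequests = 0;
    }
    return new Promise<T>((resolve) => {
      setTimeout(() => {
        this.queuedRequests++;
        resolve(requestFn());
      }, timeout);
    });
  }
}

// Usage: the resolved value is typed as number, not unknown.
const typedScheduler = new TypedScheduler({
  requestsPerInterval: 3,
  intervalTime: 1000,
});
typedScheduler.schedule(() => 42).then((n) => console.log(n + 1)); // prints 43
```

Accepting `() => Promise<T> | T` means both sync and async request functions work, since resolve flattens a returned promise.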
The Final Class
Our full class might look something like this. In the code snippet below, I have also added a debugMode option to the constructor, so we can log some useful information about when our functions are being triggered and confirm that the method is working.
class RequestScheduler {
  private queuedRequests = 0;
  private totalRequests = 0;
  private readonly requestsPerInterval: number;
  private readonly intervalTime: number;
  private readonly debugMode: boolean;

  constructor({
    requestsPerInterval,
    intervalTime,
    debugMode = false,
  }: {
    requestsPerInterval: number;
    intervalTime: number;
    debugMode?: boolean;
  }) {
    this.requestsPerInterval = requestsPerInterval;
    this.intervalTime = intervalTime;
    this.debugMode = debugMode;
    if (debugMode) {
      console.time("RequestScheduler");
    }
  }

  public async schedule(request: Function) {
    let timeout = 0;
    if (this.queuedRequests >= this.requestsPerInterval) {
      timeout = this.intervalTime;
      this.queuedRequests = 0;
      if (this.debugMode) {
        console.info(
          "\x1b[36m%s\x1b[0m", // this makes our log a cyan color!
          `--- RequestScheduler: Wait ${timeout}ms ---`
        );
      }
    }
    return new Promise((resolve) => {
      setTimeout(() => {
        this.queuedRequests++;
        this.totalRequests++;
        if (this.debugMode) {
          console.timeLog(
            "RequestScheduler",
            `#${this.totalRequests} ${request.name}`
          );
        }
        resolve(request());
      }, timeout);
    });
  }
}
Testing That It Works
Let’s put our code to the test! First, we’ll create a new instance of our class in debug mode with relatively few requests, so it’s easier to see what’s happening.
const requestScheduler = new RequestScheduler({
  requestsPerInterval: 3,
  intervalTime: 5000,
  debugMode: true,
});
Next, I’ll create an asynchronous function to mock making an HTTP request.
async function mockHttpRequest() {
  return new Promise((resolve) => {
    resolve("Hello World");
  });
}
Finally, I’ll try to execute this function 11 times, wrapping each execution inside the schedule method.
(async () => {
  for (let i = 0; i < 11; i++) {
    await requestScheduler.schedule(mockHttpRequest);
  }
})();
Running this code, we’ll see logs similar to this:
RequestScheduler: 0.818ms #1 mockHttpRequest
RequestScheduler: 3.459ms #2 mockHttpRequest
RequestScheduler: 4.751ms #3 mockHttpRequest
--- RequestScheduler: Wait 5000ms ---
RequestScheduler: 5.007s #4 mockHttpRequest
RequestScheduler: 5.010s #5 mockHttpRequest
RequestScheduler: 5.012s #6 mockHttpRequest
--- RequestScheduler: Wait 5000ms ---
RequestScheduler: 10.014s #7 mockHttpRequest
RequestScheduler: 10.017s #8 mockHttpRequest
RequestScheduler: 10.019s #9 mockHttpRequest
--- RequestScheduler: Wait 5000ms ---
RequestScheduler: 15.021s #10 mockHttpRequest
RequestScheduler: 15.024s #11 mockHttpRequest
Success! To play around with this code yourself, check out this CodePen. (Make sure to open the console to see the logs!)
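If you'd rather assert the throttling behaviour programmatically than read the logs, here's a minimal sketch. It uses a compact scheduler with the same logic as the class above (debug logging omitted, and plain constructor parameters instead of an options object, purely to keep the sketch short) and simply times how long four requests take against a limit of three per interval:

```typescript
// Compact scheduler: same throttling logic as the article's class,
// minus the debug logging.
class MiniScheduler {
  private queuedRequests = 0;

  constructor(
    private readonly requestsPerInterval: number,
    private readonly intervalTime: number
  ) {}

  public schedule<T>(requestFn: () => Promise<T> | T): Promise<T> {
    let timeout = 0;
    if (this.queuedRequests >= this.requestsPerInterval) {
      timeout = this.intervalTime;
      this.queuedRequests = 0;
    }
    return new Promise<T>((resolve) => {
      setTimeout(() => {
        this.queuedRequests++;
        resolve(requestFn());
      }, timeout);
    });
  }
}

(async () => {
  const scheduler = new MiniScheduler(3, 300);
  const start = Date.now();
  for (let i = 0; i < 4; i++) {
    await scheduler.schedule(() => "ok");
  }
  const elapsed = Date.now() - start;
  // The 4th request overflows the first interval, so at least one full
  // intervalTime (300ms) of waiting should have occurred.
  console.log(elapsed >= 300 ? "throttled as expected" : "no delay!");
})();
```

A timing assertion like this makes a handy regression test if you later refactor the scheduling logic.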
A Real-World Use Case
The power of this pattern lies partly in the fact that we can use it to coordinate requests across our application. One real-world example of where I’ve found this approach useful is during the build step of a Next.js app when I am statically generating HTML at build time, partly dependent on third-party APIs.
My website has a third-party Content Management System (for hosting blog posts) and a third-party Applicant Tracking System (for hosting job listings). Each has its own rate limit, and the queries that make requests to each API can be found in many different files. Plus, a lot about when and how the pages are built is controlled under the hood by Next.js.
To handle this, I export the following RequestScheduler instances:
export const cmsScheduler = new RequestScheduler({
  requestsPerInterval: 10,
  intervalTime: 1_000,
});

export const atsScheduler = new RequestScheduler({
  requestsPerInterval: 50,
  intervalTime: 10_000,
});
Then inside the build step for a page, I can call, for example:
const blogPosts = await cmsScheduler.schedule(getBlogPosts);
const jobListings = await atsScheduler.schedule(getJobListings);
Now, as the website scales, I no longer need to worry about hitting the rate limits of our third-party APIs!
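One small wrinkle worth noting: schedule takes a zero-argument function, so requests that need parameters have to be wrapped in an arrow function. A sketch of the pattern is below; the compact scheduler mirrors the class above (minus debug logging), and getJobListing plus the ids are hypothetical placeholders, not code from my actual site:

```typescript
// Compact scheduler with the same throttling logic as the article's class.
class AtsScheduler {
  private queuedRequests = 0;

  constructor(
    private readonly requestsPerInterval: number,
    private readonly intervalTime: number
  ) {}

  public schedule<T>(requestFn: () => Promise<T> | T): Promise<T> {
    let timeout = 0;
    if (this.queuedRequests >= this.requestsPerInterval) {
      timeout = this.intervalTime;
      this.queuedRequests = 0;
    }
    return new Promise<T>((resolve) => {
      setTimeout(() => {
        this.queuedRequests++;
        resolve(requestFn());
      }, timeout);
    });
  }
}

// Hypothetical API call that needs an argument.
async function getJobListing(id: number): Promise<string> {
  return `listing-${id}`;
}

(async () => {
  const atsScheduler = new AtsScheduler(50, 10_000);
  const ids = [1, 2, 3];
  // Wrap each parameterised call in a zero-argument arrow function
  // so the scheduler can invoke it when its slot comes up.
  const listings = await Promise.all(
    ids.map((id) => atsScheduler.schedule(() => getJobListing(id)))
  );
  console.log(listings); // e.g. [ 'listing-1', 'listing-2', 'listing-3' ]
})();
```

Passing `() => getJobListing(id)` rather than `getJobListing(id)` matters: the latter would fire the request immediately and hand the scheduler an already-running promise, defeating the throttling.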