Rate Limits

Overview

Deel enforces rate limits to ensure API stability and fair usage across all integrations. Understanding and respecting these limits is essential for building reliable applications.

  • Each organization can make up to 5 API requests per second across all tokens.
  • The rate limit is shared organization-wide, regardless of how many API tokens you use.
  • If you exceed your rate limit, the API will return a 429 Too Many Requests error.
  • Rate limiting uses a rolling 1-second window that automatically resets each second.

Rate Limit Details

Current Limits

Metric                 Limit
Requests per second    5
Scope                  Per organization
Window                 1 second (rolling)
Error code             429 Too Many Requests

Rate limits are enforced per organization, not per token. All API tokens within your organization share the same rate limit quota. If you have multiple services or processes making API calls, they all count toward the same 5 requests/second limit.

How It Works

The rate limit operates on a rolling 1-second window:

Second 1: ✅ ✅ ✅ ✅ ✅ ❌  (5 requests OK; the 6th request is rate limited!)
// Wait for the next window
Second 2: ✅ ✅ ✅ ✅ ✅     (5 requests - OK again)

When you exceed 5 requests per second:

  • Additional requests receive a 429 status code
  • You must wait until the next cycle starts
  • The limit automatically resets each second
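The rolling-window behavior described above can be simulated with a small sketch. This is illustrative only (Deel's server-side implementation may differ): keep the timestamps of recent requests and allow a new one only if fewer than 5 fall within the last second.

```javascript
// Illustrative rolling-window counter: a request is allowed only if
// fewer than `limit` requests occurred within the trailing window.
class RollingWindowLimiter {
  constructor(limit = 5, windowMs = 1000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request at time `now` (in ms) would be allowed.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have fallen out of the trailing window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```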

Handling Rate Limit Errors

The Deel API does not return rate limit headers, so you won’t know you’ve hit the rate limit until you receive a 429 error. This makes proactive rate limiting especially important.

When you exceed the rate limit, you’ll receive a 429 error:

{
  "error": "Rate limit exceeded",
  "status": 429,
  "message": "Too many requests. Please wait before retrying."
}

Best Practice: Exponential Backoff

Implement exponential backoff when hitting rate limits:

async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      // Success
      if (response.ok) {
        return await response.json();
      }

      // Rate limited - back off exponentially. Deel does not send a
      // Retry-After header, so compute the delay ourselves.
      if (response.status === 429) {
        const delay = Math.min(Math.pow(2, attempt) * 1000, 5000); // Max 5s

        console.log(`Rate limited. Waiting ${delay}ms before retry...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }

      // Other errors
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);

    } catch (error) {
      if (attempt === maxRetries - 1) throw error;

      const delay = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

Strategies to Stay Within Limits

Queue requests to ensure you never exceed 5 requests per second:

class RateLimitedQueue {
  constructor(requestsPerSecond = 5) {
    this.queue = [];
    this.interval = 1000 / requestsPerSecond; // 200ms between requests
    this.processing = false;
    this.lastRequestTime = 0;
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      if (!this.processing) this.process();
    });
  }

  async process() {
    this.processing = true;

    while (this.queue.length > 0) {
      const { requestFn, resolve, reject } = this.queue.shift();

      // Ensure minimum interval between requests
      const now = Date.now();
      const timeSinceLastRequest = now - this.lastRequestTime;
      if (timeSinceLastRequest < this.interval) {
        await new Promise(r => setTimeout(r, this.interval - timeSinceLastRequest));
      }

      try {
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      this.lastRequestTime = Date.now();
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(5); // 5 requests per second

// Queue multiple requests
const results = await Promise.all([
  queue.add(() => deelAPI.get('/contracts/1')),
  queue.add(() => deelAPI.get('/contracts/2')),
  queue.add(() => deelAPI.get('/contracts/3')),
  // ... up to hundreds of requests
]);

When possible, use batch operations instead of individual requests:

// ❌ Inefficient - 100 individual requests
for (const contractId of contractIds) {
  await getContract(contractId);
}

// ✅ Efficient - Single batch request (if supported)
const contracts = await getContracts({ ids: contractIds });

Avoid bursts of requests. Spread them out over time:

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// ❌ Bad - Burst of 20 requests at once
await Promise.all(
  contractIds.map(id => getContract(id))
);

// ✅ Good - Controlled rate
for (const id of contractIds) {
  await getContract(id);
  await sleep(200); // 200ms between requests = 5/second
}

Reduce API calls by caching frequently accessed data:

const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 }); // 5 minute TTL

async function getCachedContract(contractId) {
  // Check cache first
  const cached = cache.get(`contract_${contractId}`);
  if (cached) {
    console.log('Cache hit');
    return cached;
  }

  // Fetch from API if not cached
  const contract = await deelAPI.get(`/contracts/${contractId}`);

  // Store in cache
  cache.set(`contract_${contractId}`, contract);

  return contract;
}

Since rate limits are organization-wide, centralize request handling to avoid conflicts:

class CentralizedAPIClient {
  constructor() {
    this.queue = new RateLimitedQueue(5);
  }

  async makeRequest(method, endpoint, data = null) {
    return this.queue.add(async () => {
      const options = {
        method,
        headers: {
          'Authorization': `Bearer ${process.env.DEEL_API_TOKEN}`,
          'Content-Type': 'application/json'
        }
      };

      if (data) {
        options.body = JSON.stringify(data);
      }

      const response = await fetch(
        `https://api.letsdeel.com/rest/v2${endpoint}`,
        options
      );

      return response.json();
    });
  }

  // Convenience methods
  get(endpoint) { return this.makeRequest('GET', endpoint); }
  post(endpoint, data) { return this.makeRequest('POST', endpoint, data); }
  patch(endpoint, data) { return this.makeRequest('PATCH', endpoint, data); }
}

// Single instance shared across your application
const deelClient = new CentralizedAPIClient();

// All parts of your app use the same client
const contracts = await deelClient.get('/contracts');
const newContract = await deelClient.post('/contracts', contractData);

Important: Since rate limits are per organization, using multiple tokens won’t increase your rate limit. Instead, coordinate all API requests through a centralized queue.

Best Practices Summary

✅ Always implement request queuing

✅ Use exponential backoff for 429 retries

✅ Cache frequently accessed data

✅ Space out requests (avoid bursts)

✅ Set up alerts for rate limit errors

✅ Monitor 429 error frequency

✅ Centralize API requests across your organization

❌ Don’t send bursts of requests

❌ Don’t ignore 429 errors

❌ Don’t retry immediately without delay

❌ Don’t make unnecessary API calls

❌ Don’t use tight polling loops

❌ Don’t assume you know remaining quota (no headers provided)

❌ Don’t run multiple uncoordinated processes making API calls
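To support the "monitor 429 error frequency" and "set up alerts" items above, a minimal counter can be wrapped around your request layer. This is an illustrative sketch; `onAlert` and `alertEvery` are placeholders to wire into your own logging or metrics system.

```javascript
// Count 429 responses and fire an alert hook every `alertEvery` occurrences.
function createRateLimitMonitor({ alertEvery = 10, onAlert = console.warn } = {}) {
  const stats = { total: 0, rateLimited: 0 };
  return {
    stats,
    // Call with the HTTP status of each completed request.
    record(status) {
      stats.total += 1;
      if (status === 429) {
        stats.rateLimited += 1;
        if (stats.rateLimited % alertEvery === 0) {
          onAlert(`${stats.rateLimited}/${stats.total} requests rate limited`);
        }
      }
    },
  };
}
```

Call `monitor.record(response.status)` after every API response; the stats object gives you the 429 rate over time.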

Troubleshooting

You’re hitting the rate limit frequently

Solutions:

  1. Implement request queuing (most important!)
  2. Add caching layer
  3. Batch operations where possible
  4. Review if all requests are necessary
  5. Space out requests more (reduce from 5/sec to 4/sec for safety margin)
  6. Identify and optimize high-volume operations

You’re seeing unexpected 429 errors

Possible causes:

  • Multiple services/processes making requests simultaneously
  • Background jobs running concurrently
  • Different parts of your application not coordinating requests
  • Rate limit shared across entire organization

Solutions:

  • Centralize all API requests through a single queue
  • Coordinate between services (use Redis or similar for distributed rate limiting)
  • Monitor which services are making requests
  • Implement request prioritization
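The Redis suggestion above is commonly implemented as a shared counter keyed by the current second (the INCR-with-TTL pattern): every process increments the same key, so the organization-wide budget is enforced across services. The sketch below uses a tiny in-memory stand-in for Redis so it runs anywhere; in production you would issue the same `INCR` against a shared Redis instance and put a short TTL on each key.

```javascript
// Minimal in-memory stand-in for Redis INCR (illustrative only).
class FakeRedis {
  constructor() { this.store = new Map(); }
  incr(key) {
    const v = (this.store.get(key) || 0) + 1;
    this.store.set(key, v);
    return v;
  }
}

// All processes share one counter per second, so the 5 req/s budget
// is enforced organization-wide. In real Redis, set a ~2s TTL on the
// key (e.g. via PEXPIRE) so old counters are cleaned up.
function allowRequest(redis, limit = 5, now = Date.now()) {
  const key = `deel:ratelimit:${Math.floor(now / 1000)}`;
  return redis.incr(key) <= limit;
}
```

Note this is a fixed-window approximation of the rolling window, so leave a safety margin (e.g. a limit of 4) to absorb bursts at window boundaries.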

You’re being rate limited in production

Immediate actions:

  1. Enable request queuing immediately
  2. Increase delays between requests (use 4/sec instead of 5/sec)
  3. Implement caching for frequently accessed data
  4. Identify services making excessive requests
  5. Contact Deel support if you need higher limits

Next Steps