Rate Limiting
What is Rate Limiting? #
Rate limiting in Orchesty prevents your integrations from exceeding API rate limits imposed by external services. Most APIs restrict the number of requests you can make within a time window (e.g., 100 requests per minute). Orchesty's built-in rate limiter ensures you stay within these limits by queuing requests when thresholds are approached.
Key concepts:
- Limiter: Tracks and enforces request limits
- Key: Unique identifier for tracking limits (per user, per tenant, etc.)
- Time window: Duration for counting requests (seconds)
- Amount: Maximum requests allowed in the time window
- Group limits: Apply limits across multiple users or applications
Why Rate Limiting Matters #
Prevent API Throttling #
Without rate limiting:
- APIs return 429 (Too Many Requests) errors
- Requests fail and require retries
- External services may temporarily ban your account
- User experience degrades
With rate limiting:
- Requests queue automatically when limits are approached
- No failed requests due to rate limits
- Smooth, predictable API usage
- Better relationship with API providers
Multi-Tenant Scenarios #
In SaaS applications with multiple customers:
- Each customer has their own API account
- Each account has independent rate limits
- One customer shouldn't consume another's quota
- Track limits per-user, per-application, and globally
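One simple way to keep per-user, per-tenant, and global counters separate is to derive limiter keys from a consistent naming scheme. A minimal sketch (the `buildLimiterKey` helper is hypothetical, not part of the SDK):

```typescript
// Hypothetical helper: builds a distinct limiter key per scope so that one
// customer's counter never collides with another's.
function buildLimiterKey(app: string, scope: 'user' | 'tenant' | 'global', id?: string): string {
    return scope === 'global' ? `${app}-global` : `${app}-${scope}-${id}`;
}

const userKey = buildLimiterKey('shopify', 'user', 'alice@acme.com'); // "shopify-user-alice@acme.com"
const tenantKey = buildLimiterKey('shopify', 'tenant', 'tenant-123'); // "shopify-tenant-tenant-123"
const globalKey = buildLimiterKey('shopify', 'global');               // "shopify-global"
```

Each key then gets its own counter, so exhausting one tenant's quota leaves the others untouched.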
How Rate Limiting Works #
Rate Limiter Architecture #
```mermaid
sequenceDiagram
    participant N as Node/Connector
    participant L as Limiter
    participant R as Redis
    participant Q as Queue
    participant API as External API
    N->>N: dto.setLimiter(key, 60, 100)
    N->>L: Check if under limit
    L->>R: Get request count for key
    alt Under limit
        L->>R: Increment count
        L-->>N: Allow request
        N->>API: Make API call
    else At/over limit
        L-->>N: Queue request
        N->>Q: Wait in queue
        Note over Q: Wait until window resets
        Q->>N: Resume
        N->>API: Make API call
    end
```
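The check/increment step in the diagram is essentially a fixed-window counter. The sketch below shows that logic with an in-memory `Map`; Orchesty itself keeps these counts in Redis, so treat this only as an illustration of the algorithm, not the SDK's implementation:

```typescript
// Illustrative fixed-window rate limiter. A real deployment would keep the
// counters in shared storage (Redis) so all workers see the same counts.
class FixedWindowLimiter {
    private windows = new Map<string, { windowStart: number; count: number }>();

    public constructor(private timeSec: number, private amount: number) {}

    // Returns true if the request may proceed, false if it must be queued.
    public tryAcquire(key: string, nowMs: number): boolean {
        const windowMs = this.timeSec * 1000;
        const entry = this.windows.get(key);
        if (!entry || nowMs - entry.windowStart >= windowMs) {
            // First request, or the window expired: start a new window
            this.windows.set(key, { windowStart: nowMs, count: 1 });
            return true;
        }
        if (entry.count < this.amount) {
            entry.count += 1; // Under limit: increment and allow
            return true;
        }
        return false; // At/over limit: caller should queue until the window resets
    }
}

const limiter = new FixedWindowLimiter(60, 2); // 2 requests per 60 seconds
const a = limiter.tryAcquire('api-user1', 0);     // true
const b = limiter.tryAcquire('api-user1', 1000);  // true
const c = limiter.tryAcquire('api-user1', 2000);  // false (limit of 2 reached)
const d = limiter.tryAcquire('api-user1', 61000); // true (new window)
```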
Request Tracking #
```mermaid
graph TB
    R[Request comes in]
    C{Check limit}
    I[Increment counter]
    P[Process request]
    Q[Queue request]
    W[Wait for window]
    R --> C
    C -->|Under limit| I
    I --> P
    C -->|At/over limit| Q
    Q --> W
    W --> C
```
Implementing Rate Limiting #
Basic Rate Limiting #
```typescript
import AConnector from '@orchesty/nodejs-sdk/lib/Connector/AConnector';
import ProcessDto from '@orchesty/nodejs-sdk/lib/Utils/ProcessDto';
import RequestDto from '@orchesty/nodejs-sdk/lib/Transport/Curl/RequestDto';
import { HttpMethods } from '@orchesty/nodejs-sdk/lib/Transport/HttpMethods';

export default class ApiCallConnector extends AConnector {

    public getName(): string {
        return 'api-call';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();

        // Set rate limit: 100 requests per 60 seconds for this user
        dto.setLimiter(
            `api-${user}`, // unique key
            60,            // time window (seconds)
            100            // max requests
        );

        // Make API call - will be queued if limit reached
        const requestDto = new RequestDto(
            'https://api.example.com/data',
            HttpMethods.GET,
            dto
        );
        const response = await this.getSender().send(requestDto);

        dto.setData(response.getBody());
        return dto;
    }

}
```
Per-User Rate Limiting #
```typescript
export default class PerUserLimitConnector extends AConnector {

    public getName(): string {
        return 'per-user-limit';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();

        // Each user has an independent rate limit
        dto.setLimiter(
            `shopify-api-${user}`, // Different key per user
            60,                    // 60 seconds
            40                     // 40 requests per minute (Shopify's limit)
        );

        // Make API call
        const result = await this.callShopifyAPI(dto);

        dto.setJsonData(result);
        return dto;
    }

}
```
Group Rate Limiting #
A group limit caps the combined throughput of several individual limiter keys, for example one shared ceiling for all users of an application:
```typescript
export default class GroupLimitConnector extends AConnector {

    public getName(): string {
        return 'group-limit';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();
        const application = 'salesforce';

        // Individual limit: 10 req/min per user
        // Group limit: 1000 req/min for all users of this app
        dto.setLimiterWithGroup(
            `${application}-${user}`, // individual key
            60,                       // individual time
            10,                       // individual amount
            `${application}-global`,  // group key
            60,                       // group time
            1000                      // group amount
        );

        // Make API call
        const result = await this.callAPI(dto);

        dto.setJsonData(result);
        return dto;
    }

}
```
Rate Limit Strategies #
Strategy 1: Conservative Limits #
Set limits lower than API allows for safety:
```typescript
// API allows 100 req/min
// Set to 80 req/min for safety margin
dto.setLimiter(
    `api-${user}`,
    60,
    80 // 20% safety margin
);
```
Strategy 2: Tiered Limits #
Different limits based on user tier:
```typescript
export default class TieredLimitConnector extends AConnector {

    public getName(): string {
        return 'tiered-limit';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();
        const userTier = await this.getUserTier(user);

        // Different limits per tier
        let requestLimit: number;
        switch (userTier) {
            case 'premium':
                requestLimit = 1000;
                break;
            case 'standard':
                requestLimit = 100;
                break;
            case 'free':
                requestLimit = 10;
                break;
            default:
                requestLimit = 10;
        }

        dto.setLimiter(
            `api-${user}`,
            60,
            requestLimit
        );

        // Process...
        return dto;
    }

    private async getUserTier(user: string): Promise<string> {
        // Implementation
        return 'standard';
    }

}
```
Strategy 3: Dynamic Limits from Headers #
Some APIs tell you their limits in response headers:
```typescript
export default class DynamicLimitConnector extends AConnector {

    public getName(): string {
        return 'dynamic-limit';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();

        // Get limit from previous response or use default
        const storedLimit = await this.getStoredLimit(user) || 100;

        dto.setLimiter(
            `api-${user}`,
            60,
            storedLimit
        );

        const requestDto = new RequestDto(
            'https://api.example.com/data',
            HttpMethods.GET,
            dto
        );
        const response = await this.getSender().send(requestDto);

        // Update limit based on response headers
        const rateLimitHeader = response.getResponseHeaders()['x-rate-limit-limit'];
        if (rateLimitHeader) {
            const newLimit = parseInt(rateLimitHeader, 10);
            await this.storeLimit(user, newLimit);
        }

        dto.setData(response.getBody());
        return dto;
    }

    private async getStoredLimit(user: string): Promise<number | null> {
        // Implementation - get from database/cache
        return null;
    }

    private async storeLimit(user: string, limit: number): Promise<void> {
        // Implementation - store to database/cache
    }

}
```
Strategy 4: Multiple API Endpoints #
Different limits for different endpoints:
```typescript
export default class MultiEndpointLimitConnector extends AConnector {

    public getName(): string {
        return 'multi-endpoint-limit';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();
        const input = dto.getJsonData();
        const endpoint = input.endpoint;

        // Different limits per endpoint
        if (endpoint === 'search') {
            // Search: 10 req/sec
            dto.setLimiter(`search-${user}`, 1, 10);
        } else if (endpoint === 'bulk') {
            // Bulk operations: 1 req/sec
            dto.setLimiter(`bulk-${user}`, 1, 1);
        } else {
            // Standard: 100 req/min
            dto.setLimiter(`api-${user}`, 60, 100);
        }

        // Make API call
        const result = await this.callEndpoint(endpoint, input);

        dto.setJsonData(result);
        return dto;
    }

}
```
Application-Level Limits #
Configuring Limits in Applications #
Applications can define default limits:
```typescript
import ABasicApplication from '@orchesty/nodejs-sdk/lib/Application/Base/ABasicApplication';
import { ApplicationInstall } from '@orchesty/nodejs-sdk/lib/Application/Database/ApplicationInstall';

export default class ShopifyApplication extends ABasicApplication {

    public getName(): string {
        return 'shopify';
    }

    // Define global limits for this application
    public getGlobalLimits(applicationInstall: ApplicationInstall): Record<string, any> {
        return {
            use_limit: true,
            time: 60,  // 60 seconds
            value: 40, // 40 requests
            group_time: 60,
            group_value: 1000
        };
    }

    // ... other methods ...

}
```
User-Configurable Limits #
Let users set their own limits in Orchesty Admin:
```typescript
import CoreFormsEnum from '@orchesty/nodejs-sdk/lib/Application/Base/CoreFormsEnum';
import Field from '@orchesty/nodejs-sdk/lib/Application/Model/Form/Field';
import FieldType from '@orchesty/nodejs-sdk/lib/Application/Model/Form/FieldType';
import Form from '@orchesty/nodejs-sdk/lib/Application/Model/Form/Form';
import FormStack from '@orchesty/nodejs-sdk/lib/Application/Model/Form/FormStack';

public getFormStack(): FormStack {
    const authForm = new Form(CoreFormsEnum.AUTHORIZATION_FORM, 'Authentication');
    authForm.addField(new Field(FieldType.TEXT, 'api_key', 'API Key', null, true));

    // Add limiter form
    const limiterForm = new Form(CoreFormsEnum.LIMITER_FORM, 'Rate Limiting');
    limiterForm.addField(new Field(
        FieldType.NUMBER,
        'value',
        'Requests per minute',
        '100',
        false
    ));
    limiterForm.addField(new Field(
        FieldType.NUMBER,
        'time',
        'Time window (seconds)',
        '60',
        false
    ));

    const formStack = new FormStack();
    formStack.addForm(authForm);
    formStack.addForm(limiterForm);

    return formStack;
}
```
Using Application Limits #
```typescript
export default class AppLimitConnector extends AConnector {

    public getName(): string {
        return 'app-limit';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const appInstall = await this.getApplicationInstallFromProcess(dto);
        const app = this.getApplication();

        // Get limits from application install
        const limits = app.getGlobalLimits(appInstall);
        if (limits.use_limit) {
            dto.setLimiter(
                `${app.getName()}-${dto.getUser()}`,
                limits.time,
                limits.value
            );
        }

        // Make API call
        const result = await this.callAPI(dto);

        dto.setJsonData(result);
        return dto;
    }

}
```
Multi-Tenant Rate Limiting #
Scenario: SaaS Platform #
You have multiple customers, each with their own API accounts:
```typescript
export default class MultiTenantConnector extends AConnector {

    public getName(): string {
        return 'multi-tenant';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser(); // e.g., "customer1@saas.com"
        const tenantId = this.extractTenantId(user); // e.g., "tenant-123"

        // Each tenant has independent limits
        dto.setLimiterWithGroup(
            `api-${tenantId}`, // per-tenant limit
            60,
            100,
            `api-global`, // platform-wide limit
            60,
            5000 // all tenants combined
        );

        // Use tenant-specific credentials
        const credentials = await this.getTenantCredentials(tenantId);
        const requestDto = new RequestDto(
            'https://api.example.com/data',
            HttpMethods.GET,
            dto
        );
        requestDto.setHeaders({
            'Authorization': `Bearer ${credentials.apiKey}`
        });
        const response = await this.getSender().send(requestDto);

        dto.setData(response.getBody());
        return dto;
    }

    private extractTenantId(user: string): string {
        // Extract tenant ID from user email or other identifier
        return user.split('@')[1].replace('.', '-');
    }

    private async getTenantCredentials(tenantId: string): Promise<any> {
        // Get tenant-specific API credentials
        return { apiKey: 'tenant-api-key' };
    }

}
```
Per-User and Per-Tenant Limits #
```typescript
export default class ComplexLimitingConnector extends AConnector {

    public getName(): string {
        return 'complex-limiting';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        const user = dto.getUser();
        const tenantId = this.extractTenantId(user);

        // Three-level limiting:
        // 1. Per-user: 10 req/min
        // 2. Per-tenant: 100 req/min
        // 3. Global: 1000 req/min
        dto.setLimiterWithGroup(
            `user-${user}`, // user limit
            60,
            10,
            `tenant-${tenantId}`, // tenant limit
            60,
            100
        );

        // Additional global limit check
        dto.setLimiter(
            'global-api',
            60,
            1000
        );

        // Process...
        return dto;
    }

}
```
Handling Rate Limit Errors #
Detecting Rate Limit Errors #
```typescript
import ResultCode from '@orchesty/nodejs-sdk/lib/Utils/ResultCode';

export default class RateLimitErrorConnector extends AConnector {

    public getName(): string {
        return 'rate-limit-error';
    }

    public async processAction(dto: ProcessDto): Promise<ProcessDto> {
        dto.setLimiter(`api-${dto.getUser()}`, 60, 100);

        try {
            const requestDto = new RequestDto(
                'https://api.example.com/data',
                HttpMethods.GET,
                dto
            );
            const response = await this.getSender().send(requestDto);
            dto.setData(response.getBody());
        } catch (error) {
            if (error.response?.status === 429) {
                // Rate limited by API (shouldn't happen with limiter, but just in case)
                const retryAfter = error.response.headers['retry-after'] || 60;
                dto.setLimitExceeded(
                    `Rate limit exceeded. Retry after ${retryAfter} seconds`
                );
                return dto;
            }
            throw error;
        }

        return dto;
    }

}
```
Graceful Degradation #
```typescript
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
    dto.setLimiter(`api-${dto.getUser()}`, 60, 100);

    try {
        // Try primary API
        const result = await this.callPrimaryAPI(dto);
        dto.setJsonData(result);
    } catch (error) {
        if (error.response?.status === 429) {
            // Fall back to cached data or secondary source
            const cachedData = await this.getCachedData();
            dto.setJsonData({
                ...cachedData,
                cached: true,
                reason: 'Rate limit exceeded'
            });
            dto.setSuccessProcess('Returned cached data due to rate limit');
        } else {
            throw error;
        }
    }

    return dto;
}
```
Monitoring Rate Limits #
Logging Rate Limit Usage #
```typescript
import logger from '@orchesty/nodejs-sdk/lib/Logger/Logger';

public async processAction(dto: ProcessDto): Promise<ProcessDto> {
    const limiterKey = `api-${dto.getUser()}`;
    dto.setLimiter(limiterKey, 60, 100);

    // Log before API call
    logger.debug(
        `Making API call with rate limit: 100 req/min (key: ${limiterKey})`,
        dto
    );

    const result = await this.callAPI(dto);

    // Log after successful call
    logger.debug(
        `API call successful under rate limit`,
        dto
    );

    dto.setJsonData(result);
    return dto;
}
```
Rate Limit Metrics #
Track rate limit hits in your metrics:
```typescript
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
    const startTime = Date.now();

    dto.setLimiter(`api-${dto.getUser()}`, 60, 100);
    const result = await this.callAPI(dto);

    const duration = Date.now() - startTime;

    // If request took long, might have been queued
    if (duration > 5000) {
        logger.warn(
            `Request queued due to rate limiting (${duration}ms wait)`,
            dto
        );
    }

    dto.setJsonData(result);
    return dto;
}
```
Best Practices #
1. Set Limits Early #
```typescript
// Good - set limit before any API calls
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
    dto.setLimiter(`api-${dto.getUser()}`, 60, 100);
    const result = await this.callAPI(dto);
    // ...
}

// Bad - set limit after API call
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
    const result = await this.callAPI(dto); // Unprotected!
    dto.setLimiter(`api-${dto.getUser()}`, 60, 100);
    // ...
}
```
2. Use Descriptive Keys #
```typescript
// Good - clear what limit applies to
dto.setLimiter(`shopify-api-${tenantId}`, 60, 40);
dto.setLimiter(`stripe-webhook-${user}`, 60, 100);
dto.setLimiter(`sendgrid-email-${user}`, 1, 10);

// Bad - unclear keys
dto.setLimiter(`api`, 60, 100);
dto.setLimiter(`user123`, 60, 40);
```
3. Match API Limits #
```typescript
// If API allows 100 req/min, set that (or slightly lower)
dto.setLimiter(`api-${user}`, 60, 90); // 10% safety margin

// Don't set arbitrary limits
// dto.setLimiter(`api-${user}`, 60, 1000); // Way higher than API allows!
```
4. Remove Limits When Done #
```typescript
if (useRateLimiting) {
    dto.setLimiter(`api-${user}`, 60, 100);
} else {
    dto.removeLimiter();
}
```
5. Document Limits #
```typescript
/**
 * Fetches customer data from Shopify API.
 *
 * Rate limit: 40 requests per minute per shop (Shopify's limit).
 * This is enforced via setLimiter() to prevent 429 errors.
 */
export default class ShopifyGetCustomerConnector extends AConnector {
    // ...
}
```
6. Consider Burst Limits #
Some APIs have both sustained and burst limits:
```typescript
// API allows:
// - Sustained: 100 req/min
// - Burst: 500 req/5min

// Use the more restrictive sustained limit
dto.setLimiter(`api-${user}`, 60, 100);
```
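When both limits apply, the safe single value to pass to setLimiter() is whichever allows fewer requests over your chosen window. A small sketch of that arithmetic (the `effectiveLimit` helper is hypothetical):

```typescript
// Hypothetical helper: scales the burst allowance down to the sustained
// window, then keeps the stricter of the two values.
function effectiveLimit(
    sustained: { time: number; amount: number },
    burst: { time: number; amount: number },
): number {
    // e.g. 500 req per 300s, viewed over a 60s window, is 100 req
    const burstOverSustainedWindow = Math.floor((burst.amount * sustained.time) / burst.time);
    return Math.min(sustained.amount, burstOverSustainedWindow);
}

// Sustained 100 req/60s vs burst 500 req/300s: both average out to 100 req/min
const limit = effectiveLimit({ time: 60, amount: 100 }, { time: 300, amount: 500 });
// limit === 100, so dto.setLimiter(`api-${user}`, 60, limit) respects both
```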
Common Patterns #
Pattern: Pagination with Rate Limiting #
```typescript
import ABatchNode from '@orchesty/nodejs-sdk/lib/Batch/ABatchNode';
import BatchProcessDto from '@orchesty/nodejs-sdk/lib/Utils/BatchProcessDto';

export default class RateLimitedPaginationBatch extends ABatchNode {

    public getName(): string {
        return 'rate-limited-pagination';
    }

    public async processAction(dto: BatchProcessDto): Promise<BatchProcessDto> {
        const user = dto.getUser();
        const page = parseInt(dto.getBatchCursor() || '1', 10);

        // Apply rate limit to pagination
        dto.setLimiter(
            `pagination-${user}`,
            60,
            100 // Matches API limit
        );

        const response = await this.fetchPage(page, dto);
        response.items.forEach((item) => dto.addItem(item));

        if (response.hasMore) {
            dto.setBatchCursor((page + 1).toString());
        } else {
            dto.removeBatchCursor();
        }

        return dto;
    }

}
```
Pattern: Conditional Rate Limiting #
```typescript
public async processAction(dto: ProcessDto): Promise<ProcessDto> {
    const input = dto.getJsonData();
    const user = dto.getUser();

    // Only apply rate limiting to external API calls
    if (input.source === 'external_api') {
        dto.setLimiter(`external-${user}`, 60, 50);
    }

    // No rate limiting for internal operations
    const result = await this.processData(input);

    dto.setJsonData(result);
    return dto;
}
```
Troubleshooting #
Requests Taking Too Long #
If requests are slow, check if they're being queued:
```typescript
const startTime = Date.now();
// ... make request ...
const duration = Date.now() - startTime;

if (duration > 10000) {
    logger.warn(`Request queued for ${duration}ms`, dto);
}
```
Rate Limits Not Working #
Check:
- Is setLimiter() called before API requests?
- Is Redis running and accessible?
- Are keys unique per user/tenant?
- Are time and amount values correct?
Hitting API Rate Limits Despite Limiter #
- Limiter key might not be unique enough
- Multiple nodes/connectors using same API without coordinated limits
- Limit set higher than API allows
- API counting requests differently than expected
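To rule out the "multiple connectors, no coordination" case, give every connector that talks to the same API one shared limiter key, for example via a shared constant module. A sketch (the constant and helper names are hypothetical):

```typescript
// Hypothetical shared module: every connector that calls this API imports the
// same key builder and limit values, so all their requests draw from one budget.
const EXAMPLE_API_LIMIT = { time: 60, amount: 100 };

function exampleApiLimiterKey(user: string): string {
    return `example-api-${user}`;
}

// Each connector would then call:
//   dto.setLimiter(exampleApiLimiterKey(dto.getUser()), EXAMPLE_API_LIMIT.time, EXAMPLE_API_LIMIT.amount);

const key = exampleApiLimiterKey('alice'); // "example-api-alice" from any connector
```

If two connectors instead build their keys ad hoc (`api-alice` vs. `alice-api`), each gets its own counter and together they can exceed the real API limit.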
Related Concepts #
- Data Flow - Understanding ProcessDto and limiters
- Pagination - Rate limiting with batch operations
- Retry Policy - Handling rate limit errors
- Connector - Implementing rate limits in connectors
- Error Handling - Handling rate limit errors
API References #
- ProcessDto - setLimiter() and setLimiterWithGroup() methods
- ABasicApplication - getGlobalLimits() method
- CoreFormsEnum - LIMITER_FORM constant
- ApplicationManager - Limiter management
Next Steps #
- Read ProcessDto documentation for complete limiter method reference
- Learn about Pagination to apply rate limits to batch operations
- Understand Error Handling for managing rate limit errors
- Explore Multi-tenant patterns for complex limiting scenarios