Understanding and working with StateSet’s rate limits ensures reliable, high-performance integrations.
## Overview
StateSet implements intelligent rate limiting to ensure fair usage and maintain optimal performance for all users. Our system uses a sliding window algorithm with burst capacity to handle traffic spikes while preventing abuse.
## Key Concepts
- **Request Quota**: Maximum requests allowed per time window
- **Burst Capacity**: Short-term allowance for traffic spikes
- **Sliding Window**: Continuous evaluation of the request rate
- **Adaptive Throttling**: Dynamic adjustment based on system load
- **Priority Queuing**: Critical endpoints get preferential treatment
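To make these concepts concrete, here is a minimal sketch of how a sliding-window limiter with burst capacity can work. It is illustrative only, not StateSet's actual implementation:

```javascript
// Minimal sliding-window limiter sketch (illustrative, not StateSet's code).
// Allows `limit` requests per `windowMs`, plus a per-second burst allowance.
class SlidingWindowLimiter {
  constructor({ limit = 1000, windowMs = 60000, burst = 100 } = {}) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.burst = burst;       // max requests tolerated in any one-second slice
    this.timestamps = [];     // request times within the current window
  }

  allow(now = Date.now()) {
    // Drop timestamps that have slid out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);

    // Burst check: how many requests landed in the last second?
    const lastSecond = this.timestamps.filter(t => now - t < 1000).length;
    if (lastSecond >= this.burst) return false;

    // Quota check against the full sliding window
    if (this.timestamps.length >= this.limit) return false;

    this.timestamps.push(now);
    return true;
  }
}
```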
## Rate Limits by Plan
| Plan | Requests/Min | Requests/Hour | Requests/Day | Burst Capacity | Concurrent |
|------------|--------------|---------------|--------------|----------------|------------|
| Free | 60 | 1,000 | 10,000 | 100/sec | 10 |
| Starter | 100 | 5,000 | 50,000 | 200/sec | 25 |
| Growth | 1,000 | 30,000 | 500,000 | 1,000/sec | 100 |
| Scale | 5,000 | 150,000 | 2,000,000 | 5,000/sec | 500 |
| Enterprise | Custom | Custom | Unlimited | Custom | Unlimited |
## Rate Limit Headers

Every API response includes rate limit information:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1704070800
X-RateLimit-Reset-After: 45
X-RateLimit-Bucket: api
X-RateLimit-Retry-After: 0
X-Request-Id: req_1NXWPnCo6bFb1KQto6C8OWvE
```
| Header | Description | Example |
|--------|-------------|---------|
| `X-RateLimit-Limit` | Max requests in the current window | 1000 |
| `X-RateLimit-Remaining` | Requests remaining in the window | 995 |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets | 1704070800 |
| `X-RateLimit-Reset-After` | Seconds until the limit resets | 45 |
| `X-RateLimit-Bucket` | Rate limit bucket identifier | api |
| `X-RateLimit-Retry-After` | Seconds to wait if rate limited | 10 |
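Clients can use these headers proactively, pausing before the window is exhausted rather than reacting to 429s. A minimal sketch (assumes a `fetch`-compatible environment; `fetchWithPacing` is a hypothetical helper, not part of the SDK):

```javascript
// Pause when the current window is exhausted, using the response headers.
async function fetchWithPacing(url, options = {}) {
  const response = await fetch(url, options);

  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const resetAfter = Number(response.headers.get('X-RateLimit-Reset-After'));

  if (remaining === 0 && resetAfter > 0) {
    // Out of quota: wait until the window resets before the next call
    await new Promise(r => setTimeout(r, resetAfter * 1000));
  }
  return response;
}
```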
## Handling Rate Limits
### Exponential Backoff Implementation
```javascript
class RateLimitHandler {
  constructor(options = {}) {
    this.maxRetries = options.maxRetries || 5;
    this.baseDelay = options.baseDelay || 1000;
    this.maxDelay = options.maxDelay || 32000;
    this.jitter = options.jitter !== false; // default true; `|| true` would ignore an explicit false
  }

  async executeWithRetry(fn) {
    let lastError;
    for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
      try {
        const response = await fn();
        // Check rate limit headers
        this.trackRateLimit(response.headers);
        return response;
      } catch (error) {
        lastError = error;
        if (error.status === 429) {
          const delay = this.calculateDelay(error, attempt);
          console.log(`Rate limited. Waiting ${delay}ms before retry ${attempt + 1}/${this.maxRetries}`);
          if (attempt < this.maxRetries) {
            await this.sleep(delay);
            continue;
          }
        }
        // Don't retry on other errors
        throw error;
      }
    }
    throw lastError;
  }

  calculateDelay(error, attempt) {
    // Use the server-provided retry delay if available
    const retryAfter = error.headers?.['x-ratelimit-retry-after'];
    if (retryAfter) {
      return parseInt(retryAfter, 10) * 1000;
    }
    // Calculate exponential backoff, capped at maxDelay
    let delay = Math.min(this.baseDelay * Math.pow(2, attempt), this.maxDelay);
    // Add jitter to prevent a thundering herd
    if (this.jitter) {
      delay = delay * (0.5 + Math.random() * 0.5);
    }
    return Math.floor(delay);
  }

  trackRateLimit(headers) {
    // Works with both plain header maps and fetch Headers objects
    const read = name => (typeof headers.get === 'function' ? headers.get(name) : headers[name]);
    const remaining = parseInt(read('x-ratelimit-remaining'), 10);
    const limit = parseInt(read('x-ratelimit-limit'), 10);
    if (remaining < limit * 0.2) {
      console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage
const rateLimiter = new RateLimitHandler();

async function makeApiCall() {
  return rateLimiter.executeWithRetry(async () => {
    const response = await fetch('https://api.stateset.com/v1/orders', {
      headers: {
        'Authorization': `Bearer ${API_KEY}`
      }
    });
    if (response.status === 429) {
      // fetch does not throw on HTTP errors, so surface 429s for the retry logic
      const err = new Error('Rate limited');
      err.status = 429;
      err.headers = Object.fromEntries(response.headers);
      throw err;
    }
    return response;
  });
}
```
### Circuit Breaker Pattern
Prevent cascading failures with a circuit breaker:
```javascript
class CircuitBreaker {
  constructor(options = {}) {
    this.failureThreshold = options.failureThreshold || 5;
    this.successThreshold = options.successThreshold || 2;
    this.timeout = options.timeout || 60000;
    this.state = 'CLOSED';
    this.failures = 0;
    this.successes = 0;
    this.nextAttempt = Date.now();
  }

  async execute(fn) {
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN');
      }
      this.state = 'HALF_OPEN';
    }
    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failures = 0;
    if (this.state === 'HALF_OPEN') {
      this.successes++;
      if (this.successes >= this.successThreshold) {
        this.state = 'CLOSED';
        this.successes = 0;
      }
    }
  }

  onFailure() {
    this.successes = 0;
    this.failures++;
    if (this.failures >= this.failureThreshold) {
      this.state = 'OPEN';
      this.nextAttempt = Date.now() + this.timeout;
      console.error(`Circuit breaker opened. Will retry at ${new Date(this.nextAttempt)}`);
    }
  }

  getState() {
    return {
      state: this.state,
      failures: this.failures,
      successes: this.successes,
      nextAttempt: this.state === 'OPEN' ? new Date(this.nextAttempt) : null
    };
  }
}
```
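For API calls, the breaker composes naturally with the backoff handler above. A sketch of the combined wrapper (illustrative usage, assuming an initialized `stateset` client):

```javascript
// Repeated failures trip the breaker; rate-limited calls are retried with backoff.
const breaker = new CircuitBreaker({ failureThreshold: 5, timeout: 60000 });
const handler = new RateLimitHandler();

async function getOrders() {
  return breaker.execute(() =>
    handler.executeWithRetry(() => stateset.orders.list({ limit: 100 }))
  );
}
```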
## Optimization Strategies

### 1. Request Batching
Combine multiple operations into a single request:
```javascript
// Instead of multiple individual requests...
const orders = [];
for (const customerId of customerIds) {
  const order = await stateset.orders.list({ customer_id: customerId });
  orders.push(...order.data);
}

// ...use batch operations
const batchedOrders = await stateset.orders.batchGet({
  customer_ids: customerIds
});

// Or use GraphQL for complex queries
const query = `
  query GetMultipleOrders($customerIds: [ID!]!) {
    orders(where: { customer_id: { _in: $customerIds } }) {
      id
      status
      total
      customer {
        email
      }
    }
  }
`;
```
### 2. Response Caching
Implement intelligent caching to reduce API calls:
```javascript
class CachedAPIClient {
  constructor(client, cache) {
    this.client = client;
    this.cache = cache;
  }

  async get(resource, id, options = {}) {
    const cacheKey = `${resource}:${id}`;
    const ttl = options.ttl || 300; // 5 minutes default

    // Check cache
    const cached = await this.cache.get(cacheKey);
    if (cached && !options.force) {
      return JSON.parse(cached);
    }

    // Fetch from API
    const data = await this.client[resource].get(id);

    // Cache result
    await this.cache.setex(cacheKey, ttl, JSON.stringify(data));
    return data;
  }

  async list(resource, filters = {}, options = {}) {
    const cacheKey = `${resource}:list:${JSON.stringify(filters)}`;
    const ttl = options.ttl || 60; // 1 minute for lists

    // Check cache
    const cached = await this.cache.get(cacheKey);
    if (cached && !options.force) {
      return JSON.parse(cached);
    }

    // Fetch from API
    const data = await this.client[resource].list(filters);

    // Cache result
    await this.cache.setex(cacheKey, ttl, JSON.stringify(data));
    return data;
  }

  async invalidate(resource, id = null) {
    if (id) {
      await this.cache.del(`${resource}:${id}`);
    } else {
      // Invalidate all cached lists for this resource
      const keys = await this.cache.keys(`${resource}:list:*`);
      if (keys.length) {
        await this.cache.del(...keys);
      }
    }
  }
}
```
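The `cache` argument only needs `get`, `setex`, `del`, and `keys` methods; an ioredis client happens to satisfy that interface. A hypothetical wiring (assumes the `ioredis` package and an initialized `stateset` client):

```javascript
import Redis from 'ioredis';

// Any cache exposing get/setex/del/keys works here
const cachedClient = new CachedAPIClient(stateset, new Redis());

const order = await cachedClient.get('orders', 'order_123', { ttl: 600 });
await cachedClient.invalidate('orders', 'order_123'); // after a mutation
```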
### 3. Pagination

Efficiently handle large datasets:
```javascript
class PaginationHelper {
  async *iterateAll(resource, filters = {}) {
    let cursor = null;
    do {
      const response = await stateset[resource].list({
        ...filters,
        limit: 100,
        cursor
      });
      for (const item of response.data) {
        yield item;
      }
      cursor = response.pagination.next_cursor;
      // Rate-limit-friendly delay between pages
      await new Promise(r => setTimeout(r, 100));
    } while (cursor);
  }

  async getAllPages(resource, filters = {}) {
    const items = [];
    for await (const item of this.iterateAll(resource, filters)) {
      items.push(item);
    }
    return items;
  }

  async getParallel(resource, filters = {}, concurrency = 3) {
    // First request to get the total count
    const first = await stateset[resource].list({
      ...filters,
      limit: 100
    });
    const totalPages = Math.ceil(first.pagination.total_count / 100);
    const results = [...first.data];

    // Fetch remaining pages in chunks so no more than
    // `concurrency` requests are in flight at once
    for (let page = 2; page <= totalPages; page += concurrency) {
      const batch = [];
      for (let p = page; p < page + concurrency && p <= totalPages; p++) {
        batch.push(this.fetchPage(resource, filters, p));
      }
      const pages = await Promise.all(batch);
      results.push(...pages.flat());
    }
    return results;
  }

  async fetchPage(resource, filters, page) {
    const response = await stateset[resource].list({
      ...filters,
      limit: 100,
      offset: (page - 1) * 100
    });
    return response.data;
  }
}
```
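Usage is a simple `for await` loop; the generator keeps memory flat even for large result sets (the `'orders'` resource and `status` filter below are illustrative):

```javascript
const pager = new PaginationHelper();

// Stream items one at a time instead of loading everything up front
for await (const order of pager.iterateAll('orders', { status: 'open' })) {
  console.log(order.id, order.status);
}
```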
### 4. Field Selection
Request only the data you need:
```javascript
// REST API - use sparse fieldsets
const orders = await stateset.orders.list({
  fields: ['id', 'status', 'total', 'customer.email']
});

// GraphQL - precise field selection
const query = `
  query GetOrders {
    orders(limit: 10) {
      id
      status
      total
      customer {
        email
      }
    }
  }
`;
```
### 5. Compression
Enable response compression:
```javascript
const response = await fetch('https://api.stateset.com/v1/orders', {
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Accept-Encoding': 'gzip, deflate, br'
  }
});

// The SDK handles compression automatically
const stateset = new StateSetClient({
  apiKey: API_KEY,
  compression: true // Default: true
});
```
## Request Prioritization
### Priority Queuing System
```javascript
class PriorityRequestQueue {
  constructor() {
    this.queues = {
      high: [],
      medium: [],
      low: []
    };
    this.processing = false;
    this.concurrency = 5;
    this.active = 0;
  }

  async add(request, priority = 'medium') {
    return new Promise((resolve, reject) => {
      this.queues[priority].push({
        request,
        resolve,
        reject,
        timestamp: Date.now()
      });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;

    while (this.hasRequests() && this.active < this.concurrency) {
      const item = this.getNext();
      if (!item) break;

      this.active++;
      this.execute(item).finally(() => {
        this.active--;
        this.process();
      });
    }
    this.processing = false;
  }

  getNext() {
    // Priority order: high > medium > low
    for (const priority of ['high', 'medium', 'low']) {
      if (this.queues[priority].length > 0) {
        return this.queues[priority].shift();
      }
    }
    return null;
  }

  hasRequests() {
    return Object.values(this.queues).some(q => q.length > 0);
  }

  async execute(item) {
    try {
      const result = await item.request();
      item.resolve(result);
    } catch (error) {
      item.reject(error);
    }
  }
}

// Usage
const queue = new PriorityRequestQueue();

// High-priority request
const criticalOrder = await queue.add(
  () => stateset.orders.create(orderData),
  'high'
);

// Low-priority analytics
const analytics = await queue.add(
  () => stateset.analytics.get(query),
  'low'
);
```
## Monitoring and Analytics
### Rate Limit Monitoring
```javascript
class RateLimitMonitor {
  constructor() {
    this.metrics = {
      requests: 0,
      rateLimited: 0,
      remaining: null,
      limit: null
    };
  }

  track(response) {
    this.metrics.requests++;
    const headers = response.headers;
    this.metrics.remaining = parseInt(headers['x-ratelimit-remaining'], 10);
    this.metrics.limit = parseInt(headers['x-ratelimit-limit'], 10);

    if (response.status === 429) {
      this.metrics.rateLimited++;
      this.onRateLimit(headers);
    }

    // Alert when approaching the limit
    const usage = (this.metrics.limit - this.metrics.remaining) / this.metrics.limit;
    if (usage > 0.8) {
      this.alertHighUsage(usage);
    }
  }

  onRateLimit(headers) {
    console.error('Rate limited!', {
      retryAfter: headers['x-ratelimit-retry-after'],
      resetAt: new Date(parseInt(headers['x-ratelimit-reset'], 10) * 1000)
    });

    // Send an alert (assumes an `alerting` client is configured elsewhere)
    alerting.send({
      type: 'RATE_LIMIT',
      severity: 'high',
      details: this.metrics
    });
  }

  alertHighUsage(usage) {
    console.warn(`High API usage: ${(usage * 100).toFixed(1)}%`);
    if (usage > 0.9) {
      // Implement throttling
      this.throttle();
    }
  }

  throttle() {
    // Add a delay between requests (callers should honor throttleDelay)
    this.throttleDelay = 1000;
    console.log('Throttling enabled: 1s delay between requests');
  }

  getMetrics() {
    return {
      ...this.metrics,
      usagePercent: ((this.metrics.limit - this.metrics.remaining) / this.metrics.limit * 100).toFixed(1),
      rateLimitPercent: (this.metrics.rateLimited / this.metrics.requests * 100).toFixed(2)
    };
  }
}
```
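The monitor's `track()` expects a plain object with lowercase header names. With `fetch`, one way to adapt a `Response` (illustrative wiring, not part of the SDK):

```javascript
const monitor = new RateLimitMonitor();

const res = await fetch('https://api.stateset.com/v1/orders', {
  headers: { 'Authorization': `Bearer ${API_KEY}` }
});

// Headers iterates [name, value] pairs with lowercase names,
// so fromEntries produces exactly the map track() expects
monitor.track({
  status: res.status,
  headers: Object.fromEntries(res.headers)
});

console.log(monitor.getMetrics());
```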
### Performance Tracking

```javascript
class PerformanceTracker {
  constructor() {
    this.metrics = new Map();
  }

  async track(name, fn) {
    const start = performance.now();
    const startMemory = process.memoryUsage(); // Node.js only
    try {
      const result = await fn();
      const duration = performance.now() - start;
      const memoryDelta = process.memoryUsage().heapUsed - startMemory.heapUsed;
      this.record(name, {
        duration,
        memoryDelta,
        success: true
      });
      return result;
    } catch (error) {
      const duration = performance.now() - start;
      this.record(name, {
        duration,
        success: false,
        error: error.message
      });
      throw error;
    }
  }

  record(name, metrics) {
    if (!this.metrics.has(name)) {
      this.metrics.set(name, {
        count: 0,
        totalDuration: 0,
        avgDuration: 0,
        maxDuration: 0,
        minDuration: Infinity,
        errors: 0
      });
    }

    const stats = this.metrics.get(name);
    stats.count++;
    stats.totalDuration += metrics.duration;
    stats.avgDuration = stats.totalDuration / stats.count;
    stats.maxDuration = Math.max(stats.maxDuration, metrics.duration);
    stats.minDuration = Math.min(stats.minDuration, metrics.duration);
    if (!metrics.success) {
      stats.errors++;
    }

    // Log slow requests
    if (metrics.duration > 5000) {
      console.warn(`Slow API call: ${name} took ${metrics.duration.toFixed(2)}ms`);
    }
  }

  getReport() {
    const report = {};
    for (const [name, stats] of this.metrics) {
      report[name] = {
        ...stats,
        avgDuration: stats.avgDuration.toFixed(2),
        errorRate: ((stats.errors / stats.count) * 100).toFixed(2) + '%'
      };
    }
    return report;
  }
}

// Usage
const tracker = new PerformanceTracker();

const orders = await tracker.track('fetchOrders', async () => {
  return await stateset.orders.list({ limit: 100 });
});

console.log(tracker.getReport());
```
## Best Practices
### 1. Implement Graceful Degradation
```javascript
class ResilientAPIClient {
  // fetchFromAPI, fetchFromCache, and fetchFromBackup are
  // implementation-specific and omitted here
  async getOrdersWithFallback() {
    try {
      // Try the primary method
      return await this.fetchFromAPI();
    } catch (error) {
      if (error.status === 429) {
        // Fall back to cached data
        return await this.fetchFromCache();
      } else if (error.status >= 500) {
        // Fall back to a secondary system
        return await this.fetchFromBackup();
      }
      throw error;
    }
  }
}
```
### 2. Use Webhooks Instead of Polling
```javascript
// Bad: polling for updates
setInterval(async () => {
  const orders = await stateset.orders.list({
    updated_after: lastCheck
  });
  processUpdates(orders);
}, 60000);

// Good: use webhooks
app.post('/webhook', (req, res) => {
  const event = req.body;
  if (event.type === 'order.updated') {
    processUpdate(event.data.object);
  }
  res.sendStatus(200);
});
```
### 3. Optimize Batch Sizes
```javascript
class BatchProcessor {
  async processBatch(items, batchSize = 100) {
    const results = [];
    for (let i = 0; i < items.length; i += batchSize) {
      const batch = items.slice(i, i + batchSize);
      const result = await stateset.batch.process(batch);
      results.push(...result);

      // Rate-limit-friendly delay between batches
      if (i + batchSize < items.length) {
        await new Promise(r => setTimeout(r, 1000));
      }
    }
    return results;
  }
}
```
## Troubleshooting
**Consistently hitting rate limits**

Solutions:
- Implement request queuing and batching
- Cache frequently accessed data
- Use webhooks instead of polling
- Consider upgrading your plan
- Optimize request patterns

**Slow response times**

Solutions:
- Use field selection to reduce payload size
- Enable compression
- Implement pagination for large datasets
- Use regional endpoints if available
- Check network latency

**429 errors despite low request volume**

Possible causes:
- Hitting endpoint-specific limits
- Exceeding burst capacity
- Account-level restrictions

Check the rate limit headers on the 429 response for details.

**Circuit breaker keeps opening**

Solutions:
- Increase the failure threshold
- Implement better error handling
- Check for systematic issues
- Review timeout settings
- Monitor the API status page
Need help optimizing your integration? Contact api-support@stateset.com or visit our Discord community.