# Rate Limits
The Traceback Search API implements comprehensive rate limiting to ensure fair usage and optimal performance for all users.
> ⚡ **Fair Usage Policy:** Rate limits protect our infrastructure and ensure consistent performance. All limits are designed to accommodate legitimate business use cases.
## Rate Limit Tiers
Different subscription plans have different rate limits to match your usage needs:
| Plan | Requests per Minute | Requests per Hour | Daily Limit | Export Limit |
|---|---|---|---|---|
| Starter | 10 | 100 | 1,000 | 5 exports/day |
| Pro | 60 | 1,000 | 10,000 | 50 exports/day |
| Enterprise | 300 | 5,000 | 100,000 | Unlimited |
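The per-minute limits in the table translate directly into a minimum safe spacing between requests. A minimal sketch, assuming the tier names above; the `PLAN_LIMITS` structure and helper are illustrative, not part of the API:

```python
# Limits taken from the tier table above; the mapping itself is a
# client-side convenience, not an official SDK constant.
PLAN_LIMITS = {
    "starter": {"per_minute": 10, "per_hour": 100, "per_day": 1_000},
    "pro": {"per_minute": 60, "per_hour": 1_000, "per_day": 10_000},
    "enterprise": {"per_minute": 300, "per_hour": 5_000, "per_day": 100_000},
}

def min_request_interval(plan: str) -> float:
    """Smallest safe delay between requests for a plan, in seconds."""
    return 60.0 / PLAN_LIMITS[plan]["per_minute"]
```

For example, a Pro key can safely issue one request per second, while a Starter key should leave about six seconds between calls.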
## Rate Limit Headers
All API responses include rate limit information in the headers to help you manage your usage:
```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1640995200
X-RateLimit-Window: 60
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the current window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the rate limit resets |
| `X-RateLimit-Window` | Rate limit window duration in seconds |
## Handling Rate Limits

When you exceed the rate limit, the API returns a `429 Too Many Requests` status code with a `Retry-After` header.
### Rate Limit Response

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Try again in 60 seconds.",
    "details": {
      "limit": 60,
      "window": 60,
      "retry_after": 60
    }
  },
  "request_id": "req_rate_limit_123"
}
```
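Besides the `Retry-After` header, the error body itself carries a `retry_after` field. A minimal sketch for extracting it defensively, using only the standard library; the field names mirror the sample response above, and the helper is illustrative:

```python
import json

def retry_delay_from_body(body: str, default: int = 60) -> int:
    """Return retry_after (seconds) from a rate-limit error payload.

    Falls back to a default when the body is not the expected JSON shape.
    """
    try:
        payload = json.loads(body)
        return int(payload["error"]["details"]["retry_after"])
    except (ValueError, KeyError, TypeError):
        return default
```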
### Exponential Backoff Implementation
Implement exponential backoff to handle rate limits gracefully:
```python
import time
import requests
from random import uniform

def make_request_with_backoff(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            # Rate limited - implement exponential backoff
            retry_after = int(response.headers.get('Retry-After', 60))

            # Add jitter to prevent thundering herd
            jitter = uniform(0.1, 0.5)
            wait_time = (2 ** attempt) * retry_after + jitter

            print(f"Rate limited. Waiting {wait_time:.1f} seconds...")
            time.sleep(wait_time)
            continue

        return response

    raise Exception("Max retries exceeded")
```
### JavaScript Implementation

```javascript
class RateLimitHandler {
  constructor(maxRetries = 3) {
    this.maxRetries = maxRetries;
  }

  async makeRequest(url, options) {
    for (let attempt = 0; attempt < this.maxRetries; attempt++) {
      try {
        const response = await fetch(url, options);

        if (response.status === 429) {
          const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
          const jitter = Math.random() * 0.5;
          const waitTime = (Math.pow(2, attempt) * retryAfter + jitter) * 1000;

          console.log(`Rate limited. Waiting ${waitTime / 1000}s...`);
          await this.sleep(waitTime);
          continue;
        }

        return response;
      } catch (error) {
        if (attempt === this.maxRetries - 1) throw error;
      }
    }

    throw new Error('Max retries exceeded');
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Usage
const rateLimitHandler = new RateLimitHandler();
const response = await rateLimitHandler.makeRequest(apiUrl, {
  headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
});
```
## Best Practices

### Monitor Rate Limit Headers
Always check rate limit headers to proactively manage your usage:
```python
import time

def check_rate_limits(response):
    limit = int(response.headers.get('X-RateLimit-Limit', 0))
    remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
    reset_time = int(response.headers.get('X-RateLimit-Reset', 0))

    # Warn when approaching limit
    if remaining < limit * 0.1:  # Less than 10% remaining
        print(f"Warning: Only {remaining} requests remaining")

    # Calculate time until reset
    reset_in = reset_time - int(time.time())
    print(f"Rate limit resets in {reset_in} seconds")

    return remaining > 0
```
### Implement Client-Side Throttling
Prevent rate limit errors by implementing client-side throttling:
```python
import time
from collections import deque

class ThrottledClient:
    def __init__(self, requests_per_minute=50):
        self.requests_per_minute = requests_per_minute
        self.request_times = deque()

    def make_request(self, *args, **kwargs):
        now = time.time()

        # Remove requests older than 1 minute
        while self.request_times and self.request_times[0] < now - 60:
            self.request_times.popleft()

        # Check if we need to wait
        if len(self.request_times) >= self.requests_per_minute:
            sleep_time = 60 - (now - self.request_times[0])
            if sleep_time > 0:
                time.sleep(sleep_time)
            now = time.time()  # Refresh the timestamp after sleeping

        # Record and make the request (_actual_request is your transport layer)
        self.request_times.append(now)
        return self._actual_request(*args, **kwargs)
```
### Batch Operations
Use batch operations and pagination to maximize efficiency:
```python
# Instead of multiple small requests
# DON'T DO THIS:
for number in phone_numbers:
    result = api.search_spam_reports(sender_number=number, limit=10)

# DO THIS:
# Use larger limits and process in batches
all_numbers = ",".join(phone_numbers[:50])  # Batch up to 50 numbers
result = api.search_spam_reports(
    sender_numbers=all_numbers,
    limit=1000  # Get more data per request
)
```
### Cache Results
Implement caching to reduce API calls for frequently accessed data:
```python
import time
from functools import wraps

def cache_results(ttl_seconds=300):  # 5 minute cache
    cache = {}

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Create cache key
            key = str(args) + str(sorted(kwargs.items()))
            now = time.time()

            # Check cache
            if key in cache:
                result, timestamp = cache[key]
                if now - timestamp < ttl_seconds:
                    return result

            # Make API call and cache result
            result = func(*args, **kwargs)
            cache[key] = (result, now)
            return result
        return wrapper
    return decorator

# Usage
@cache_results(ttl_seconds=600)  # 10 minute cache
def get_spam_reports(sender_number):
    return api.search_spam_reports(sender_number=sender_number)
```
## Quota Management

### Daily Quotas
Track your daily usage to avoid hitting daily limits:
```javascript
class QuotaTracker {
  constructor(dailyLimit) {
    this.dailyLimit = dailyLimit;
    this.resetQuotaIfNeeded();
  }

  resetQuotaIfNeeded() {
    const today = new Date().toDateString();
    const lastReset = localStorage.getItem('quota_reset_date');
    if (lastReset !== today) {
      localStorage.setItem('quota_used', '0');
      localStorage.setItem('quota_reset_date', today);
    }
  }

  canMakeRequest() {
    const used = parseInt(localStorage.getItem('quota_used') || '0');
    return used < this.dailyLimit;
  }

  recordRequest() {
    const used = parseInt(localStorage.getItem('quota_used') || '0');
    localStorage.setItem('quota_used', (used + 1).toString());
  }

  getRemainingQuota() {
    const used = parseInt(localStorage.getItem('quota_used') || '0');
    return Math.max(0, this.dailyLimit - used);
  }
}
```
### Export Limits
Export operations have separate limits due to their resource-intensive nature:
- **Starter:** 5 exports per day, max 10,000 records per export
- **Pro:** 50 exports per day, max 100,000 records per export
- **Enterprise:** Unlimited exports, max 1,000,000 records per export
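When a dataset exceeds the per-export record cap, it has to be split across multiple export jobs. A minimal sketch, using the caps listed above; the `EXPORT_CAPS` mapping and the helper are illustrative, not part of the API:

```python
# Per-export record caps from the list above (client-side constant).
EXPORT_CAPS = {"starter": 10_000, "pro": 100_000, "enterprise": 1_000_000}

def plan_export_batches(total_records: int, plan: str) -> list[int]:
    """Split a record count into export jobs that respect the plan's cap."""
    cap = EXPORT_CAPS[plan]
    batches = []
    while total_records > 0:
        batches.append(min(cap, total_records))
        total_records -= batches[-1]
    return batches
```

Remember that each entry in the returned list consumes one export from your daily allowance, so on Starter or Pro the number of batches must also stay within the exports-per-day limit.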
## Monitoring and Alerts

### Usage Dashboard
Monitor your API usage in real-time through your account dashboard:
- Current rate limit status
- Daily quota usage
- Historical usage patterns
- Export usage tracking
### Usage Alerts
Set up alerts to notify you when approaching limits:
- 80% of daily quota reached
- 90% of rate limit reached
- Export limit approaching
- Unusual usage patterns detected
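The same thresholds can be checked client-side from the quota counters and rate limit headers. A minimal sketch, assuming the 80% and 90% thresholds above; the function is an illustration, not an official SDK helper:

```python
def usage_alerts(quota_used, daily_quota, rate_remaining, rate_limit):
    """Return alert messages for any thresholds that have been crossed."""
    alerts = []
    # 80% of the daily quota consumed
    if daily_quota and quota_used / daily_quota >= 0.8:
        alerts.append("80% of daily quota reached")
    # 90% of the current rate limit window consumed
    if rate_limit and (rate_limit - rate_remaining) / rate_limit >= 0.9:
        alerts.append("90% of rate limit reached")
    return alerts
```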