# Rate Limits & Quotas
Understand API rate limits and how to work within them effectively.
SnowScrape enforces rate limits to ensure fair usage and platform stability. Understanding these limits helps you design applications that work reliably.
## API Rate Limits
| Plan | Requests/Minute | Requests/Day | Concurrent Jobs |
|---|---|---|---|
| Free | 60 | 1,000 | 2 |
| Starter | 120 | 10,000 | 5 |
| Pro | 300 | 50,000 | 10 |
| Enterprise | Custom | Custom | Custom |
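To stay under the per-minute limits above, it helps to throttle requests client-side rather than waiting for the API to reject them. The sketch below is a minimal token bucket in JavaScript; the `TokenBucket` class is illustrative (not part of any SnowScrape SDK), and the capacity shown matches the Free plan's 60 requests/minute.

```javascript
// Minimal client-side token bucket to stay under a per-minute request cap.
class TokenBucket {
  constructor(capacity, refillPerMs) {
    this.capacity = capacity;       // maximum burst size
    this.tokens = capacity;         // start with a full bucket
    this.refillPerMs = refillPerMs; // tokens added back per millisecond
    this.lastRefill = Date.now();
  }

  // Returns true if a request may be sent now, false if the caller should wait.
  tryRemove(now = Date.now()) {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Free plan: 60 requests per minute => one token refilled every 1000 ms.
const bucket = new TokenBucket(60, 60 / 60000);
```

Before each API call, check `bucket.tryRemove()` and sleep briefly when it returns false; this smooths traffic instead of bursting into the limit.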
## Rate Limit Headers
Every API response includes headers to help you track your usage:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1705766400
```
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed per window |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
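These headers make it easy to pace a client. The helper below is a sketch, not part of any SnowScrape SDK; it reads the three headers from anything with a `.get()` method, such as the `Headers` object on a fetch `Response`.

```javascript
// Parse SnowScrape rate limit headers into numbers you can act on.
// `headers` is any object with a .get(name) method (e.g. response.headers from fetch).
function parseRateLimit(headers) {
  return {
    limit: Number(headers.get('X-RateLimit-Limit')),
    remaining: Number(headers.get('X-RateLimit-Remaining')),
    // Convert the Unix reset timestamp (seconds) into milliseconds from now,
    // clamped at zero in case the window has already reset.
    resetInMs: Math.max(0, Number(headers.get('X-RateLimit-Reset')) * 1000 - Date.now()),
  };
}
```

A client might slow down when `remaining` drops near zero and resume after `resetInMs` has elapsed.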
## Handling Rate Limits
When you exceed the rate limit, the API returns a `429 Too Many Requests` response:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 60

{
  "error": "rate_limited",
  "message": "API rate limit exceeded. Please retry after 60 seconds.",
  "retry_after": 60
}
```

### Retry with Exponential Backoff
```javascript
async function requestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      // Honor the server's Retry-After header when present; otherwise
      // fall back to exponential backoff (1s, 2s, 4s, ...).
      const retryAfter = Number(response.headers.get('Retry-After'));
      const delay = retryAfter > 0 ? retryAfter * 1000 : Math.pow(2, attempt) * 1000;
      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}
```

## Scraping Rate Limits
In addition to API limits, job execution has separate rate controls:
### Per-Job Rate Limit

Set via the `rate_limit` field in the job configuration. This controls how quickly SnowScrape requests pages from the target website, in requests per minute.
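As an illustration, a job configuration with a per-job rate limit might look like the following. Only the `rate_limit` field is documented here; the other field names in this sketch are placeholder assumptions, not the actual SnowScrape job schema.

```javascript
// Hypothetical job configuration object. Only `rate_limit` is documented
// above (requests per minute against the target site); `url` is a placeholder.
const jobConfig = {
  url: 'https://example.com/products',
  rate_limit: 30, // scrape the target site at most 30 times per minute
};
```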
### Global Scrape Limits

The total number of pages scraped per day across all of your jobs. This cap varies by plan.
## Usage Quotas
| Resource | Free | Starter | Pro |
|---|---|---|---|
| Pages/month | 1,000 | 25,000 | 100,000 |
| JS Rendering pages | 100 | 5,000 | 25,000 |
| Data storage | 100 MB | 5 GB | 50 GB |
| Data retention | 7 days | 30 days | 90 days |
## Best Practices
- **Batch operations** - Create multiple jobs in one request when possible
- **Use webhooks** - Instead of polling for job status, use webhooks
- **Cache responses** - Store results locally to avoid re-downloading
- **Monitor usage** - Track rate limit headers to avoid hitting limits
- **Implement backoff** - Always handle 429 responses gracefully
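The caching tip can be as simple as an in-memory store with a time-to-live. The `ResponseCache` class below is a minimal sketch (not part of any SDK) showing the idea; a production client might use a persistent store instead.

```javascript
// Minimal in-memory cache with a TTL, so repeated lookups within the TTL
// window reuse a stored result instead of spending another API request.
class ResponseCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  // Returns the cached value, or undefined if missing or expired.
  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) return undefined;
    return entry.value;
  }

  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```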
## Need Higher Limits?

Contact our sales team for Enterprise plans with custom rate limits and dedicated resources.

Get in touch →