Crawler FAQs
What if my webhook endpoint is down?
If your Crawler webhook endpoint is down, you are notified by email, your crawlers are paused, and the last request that failed due to your endpoint's downtime is set to be retried. Your crawlers resume automatically once your endpoint becomes available again. Our monitoring system checks your endpoint every minute.
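To be seen as healthy, your endpoint should acknowledge each delivery quickly with a 2xx response and defer heavy work. Below is a minimal sketch of such a receiver using Flask; the route path, port, and the `rid` header name are illustrative assumptions, not a documented contract.

```python
# Minimal webhook receiver sketch (assumptions: Flask, a /crawlbase-webhook
# route, and an "rid" request-id header -- verify against the Crawler docs).
from flask import Flask, request

app = Flask(__name__)

@app.route("/crawlbase-webhook", methods=["POST"])
def receive_page():
    # Answer fast with 2xx so the Crawler and its uptime monitor see the
    # endpoint as available; push the payload to a queue for later processing.
    body = request.get_data()           # the crawled page body
    rid = request.headers.get("rid")    # request id (assumed header name)
    # ... enqueue (rid, body) for background processing ...
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```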
Live monitor terminology
"Waiting" means that your requests are in your crawler queue waiting to be processed. "Concurrent crawlers" are the requests that are being crawled at the same time. Concurrent crawlers gets increased by our system if you have many pages to crawl, we also monitor crawlers and increase or decrease the concurrency depending on the pool. "Sets to be retried" are your requests that failed for any reason, they land in your crawler retry queue and are processed with a retry rate up until maximum 110 retries.
Where can I get the API keys?
You can find your API keys (request tokens) on the Crawlbase account documentation page.
Can the 30 URLs-per-second limit be increased for large-scale crawls?
The 30 URLs-per-second limit applies to LinkedIn crawls. For other websites, we can evaluate and potentially increase the limit on a case-by-case basis. Please contact us to discuss your specific needs.
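If you want to stay safely under the limit on your side, a simple pacing loop is enough. This is an illustrative sketch only, using the 30 URLs-per-second figure above and any push function of your choosing (such as the one sketched earlier).

```python
# Client-side pacing sketch: each iteration takes at least 1/30 s,
# so pushes never exceed 30 URLs per second.
import time

MAX_PER_SECOND = 30

def push_all(urls, push):
    interval = 1.0 / MAX_PER_SECOND
    for url in urls:
        start = time.monotonic()
        push(url)  # e.g. the push() helper sketched above
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
```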
Need help? Contact us
Please contact us with any questions about our products.