Crawlbase Documents

# Crawler Introduction

The Crawler is a push system that works with callbacks. You push URLs to your Crawler using the Crawling API, and the Crawler pushes the crawled pages back to your server.
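The exact push parameters are covered in "Pushing data to the Crawler". As a rough sketch, assuming a Crawling API token and Crawler name from your dashboard and the `callback=true` / `crawler=<name>` query parameters (treat both as placeholders until you check the pushing docs), a push request could be built like this:

```python
from urllib.parse import urlencode

API_BASE = "https://api.crawlbase.com/"

def build_push_url(token: str, crawler: str, target_url: str) -> str:
    """Build a Crawling API request that pushes target_url to a Crawler.

    The callback=true and crawler=<name> parameters route the request
    through the push system instead of returning the page synchronously.
    """
    params = urlencode({
        "token": token,          # your Crawling API token (placeholder)
        "callback": "true",
        "crawler": crawler,      # the Crawler name from your dashboard
        "url": target_url,       # the page you want crawled
    })
    return f"{API_BASE}?{params}"

# Hypothetical values for illustration only.
push_url = build_push_url("MY_TOKEN", "my-crawler", "https://example.com/page")
# To actually push, issue a GET request, e.g.: requests.get(push_url)
```

The response to a push is an acknowledgement, not the page itself; the page arrives later at your webhook.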

To receive that data, you need to create a webhook URL on your server (for example, https://myserver.com/crawlbase) that accepts deliveries from the Crawlbase Crawler.
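The "Webhook receiving" page specifies the exact delivery format. As a minimal sketch, assuming the Crawler POSTs each crawled page to your endpoint and expects a fast 2xx acknowledgement (the precise headers and retry behavior are documented there, not here), a receiver might look like:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_delivery(path: str, body: bytes) -> int:
    """Decide the status code for one webhook delivery.

    Answering 200 quickly acknowledges receipt; a non-2xx answer may
    cause the Crawler to retry the delivery later.
    """
    if path != "/crawlbase":
        return 404
    # Persist or enqueue the crawled page here; keep this fast and
    # defer heavy processing so the response is not delayed.
    return 200

class CrawlbaseWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(handle_delivery(self.path, body))
        self.end_headers()

# To run locally on port 8000 (expose it publicly, e.g. as
# https://myserver.com/crawlbase behind TLS):
#   HTTPServer(("", 8000), CrawlbaseWebhook).serve_forever()
```

Separating the decision logic (`handle_delivery`) from the HTTP plumbing keeps the handler trivial to test without starting a server.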

Keep reading to learn how to push data to the Crawler and how to receive the crawled pages back.


©2025 Crawlbase