Crawlbase Documents

# Crawler Introduction

The Enterprise Crawler is a push-based system that delivers results through callbacks. You push URLs to your Crawler using the Crawling API, and the Crawler pushes the crawled pages back to your server.
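As a rough sketch of the push side, the request below builds a Crawling API call that forwards a URL to a named Crawler. The parameter names (`token`, `crawler`, `callback`, `url`) are assumptions here; the exact request format is documented in "Pushing data to the Enterprise Crawler".

```python
from urllib.parse import urlencode

API_BASE = "https://api.crawlbase.com/"

def build_push_url(token: str, crawler: str, target_url: str) -> str:
    """Build the Crawling API request that pushes target_url to a named Crawler.

    Parameter names are assumptions for illustration; see the Pushing section
    for the authoritative request format.
    """
    params = urlencode({
        "token": token,        # your Crawlbase token
        "crawler": crawler,    # the Crawler name created in your dashboard
        "callback": "true",    # request asynchronous webhook delivery
        "url": target_url,     # page to crawl (urlencode escapes it)
    })
    return f"{API_BASE}?{params}"

# Example: the resulting URL can be fetched with any HTTP client,
# e.g. urllib.request.urlopen(push_url)
push_url = build_push_url("MY_TOKEN", "my-crawler", "https://example.com/page")
```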

To do that, you need to create a webhook URL on your server (example: https://myserver.com/crawlbase) to receive the data from the Crawlbase Crawler.
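A minimal webhook receiver can be sketched with only the Python standard library. This is an illustrative skeleton, not the required implementation: the delivery format (headers, encoding, expected response) is covered in "Webhook receiving"; the only behavior assumed here is that the Crawler POSTs the crawled page to your endpoint and expects a fast 200 response.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CrawlbaseWebhook(BaseHTTPRequestHandler):
    """Accepts POST deliveries from the Crawler (e.g. at /crawlbase)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        page = self.rfile.read(length)   # the crawled page pushed by the Crawler
        # ... store or process `page` here ...
        self.send_response(200)          # acknowledge quickly so delivery succeeds
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass                             # silence per-request console logging

def serve(port: int = 8000) -> None:
    """Blocking helper: listen for Crawler deliveries on the given port."""
    HTTPServer(("0.0.0.0", port), CrawlbaseWebhook).serve_forever()
```

In production you would process the body asynchronously (queue it, then return 200 immediately) rather than doing heavy work inside the request handler.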

Keep reading to learn how to push data to the Crawler and receive data back from it.


©2026 Crawlbase