New here?

Start with Quick Start — it gets you a working request in under five minutes. The other pages in this section are reference material you can come back to as questions arise.

Your first request

  • Quick start — sign up, grab your token, and send a working crawl in five minutes. Code samples in curl, Python, Node.js, Ruby, PHP, Go, Java, and C#. Read this first.
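As a taste of what Quick Start walks through, here is a minimal sketch in Python of building that first request. The token-and-url query-string shape is an assumption for illustration; the Quick Start page has the exact endpoint and parameters for your account.

```python
from urllib.parse import urlencode


def build_crawl_url(token: str, target: str) -> str:
    """Build a Crawling API request URL from a token and a target page.

    Assumes a token-and-url query-string shape; check Quick Start for
    the exact parameters your plan supports.
    """
    query = urlencode({"token": token, "url": target})
    return f"https://api.crawlbase.com/?{query}"


request_url = build_crawl_url("YOUR_TOKEN", "https://example.com")
print(request_url)

# Once you have a real token, send it with any HTTP client, e.g.:
# from urllib.request import urlopen
# with urlopen(request_url) as response:
#     html = response.read().decode("utf-8")
```

Note that the target URL is percent-encoded into the query string, so pages with their own query parameters pass through intact.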

Authentication & limits

Once requests are flowing, the next questions are usually "how does auth work?" and "how much can I send?". Two short reference pages cover both.

  • Authentication — Normal vs. JavaScript tokens, why there are two, when to use each, how to keep them out of your repo. Tokens authenticate every Crawlbase API the same way, so this applies platform-wide.
  • Rate limits — concurrency budgets per plan tier, the difference between request throughput and concurrent connections, and the pattern for backing off when you hit the ceiling.
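The back-off pattern mentioned above can be sketched as exponential backoff with jitter. This is a generic sketch, not the platform's prescribed implementation: the 429 trigger, base delay, and cap are all assumptions; the Rate limits page documents the actual ceiling behavior for your plan.

```python
import random
import time


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Full-jitter exponential backoff: a random delay in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * 2 ** attempt))


def fetch_with_backoff(send_request, max_attempts: int = 5, sleep=time.sleep):
    """Retry `send_request` (a callable returning (status, body)) when the
    concurrency ceiling is hit. Treating 429 as the ceiling signal is an
    assumption for illustration."""
    for attempt in range(max_attempts):
        status, body = send_request()
        if status != 429:
            return status, body
        sleep(backoff_delay(attempt))  # wait before trying again
    return status, body  # still throttled after max_attempts
```

Jitter matters here: if many workers back off by the same fixed amounts, they all retry at once and hit the ceiling together again.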

Status codes & errors

Real traffic means real failures — captchas, geo-blocks, target sites going down, your own client misconfiguring a parameter. Two pages explain what comes back and what to do about it.

  • Status codes — every HTTP status the platform returns and what it means. Crawlbase splits the response into two status fields (pc_status for our side, original_status for the target site) so you can tell the two failure modes apart.
  • Error handling — recoverable vs. terminal errors, retry strategy, and the specific error envelopes the platform returns so your client can branch on them.
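Because the two status fields separate "our side" from "the target site", your client can branch on them. The triage below is a sketch: which exact codes count as recoverable is an assumption for illustration, and the Status codes and Error handling pages are authoritative.

```python
def classify(pc_status: int, original_status: int) -> str:
    """Rough triage of a response using Crawlbase's two status fields.

    Returns "ok", "retry" (recoverable), or "inspect" (likely terminal).
    The specific code lists here are illustrative assumptions.
    """
    if pc_status == 200 and original_status == 200:
        return "ok"
    if pc_status == 429:                # our side: concurrency ceiling hit
        return "retry"
    if original_status in (429, 503):   # target site: throttled or briefly down
        return "retry"
    return "inspect"                    # captcha, geo-block, bad parameter, etc.
```

The point of the split is visible in the branches: a 429 in pc_status means slow down your own traffic, while a 503 in original_status means the target is struggling and only a later retry can help.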

What's next

Once you're past Get Started, the platform splits along a few axes: what you're building, how you want to integrate, and what task you're solving.

  • By API surface: the API Reference covers Crawling API, Smart Proxy, Cloud Storage, Enterprise Crawler, and the smaller helpers (Account API, User Agents API).
  • By integration shape: SDKs for the seven major languages, Integrations for low-code platforms (LangChain, Zapier, n8n, Make, Airbyte), and the AI & MCP section for agent-driven access through Claude, Cursor, VS Code, and other MCP-aware clients.
  • By task: the Scraper Library offers ready-made scrapers that return structured JSON for common sites — usually faster than parsing HTML yourself.
  • To experiment: the API Playground lets you build and run live requests in the browser without writing any client code.