
Crawlbase developer documentation

The web,
structured for builders.

Crawl, scrape, and parse any website at scale with a single API. Production-ready endpoints, native SDKs, and an MCP server that plugs straight into Claude, Cursor, and your agent stack.

1,000 free requests · 195 countries · 51M req/month per token · No credit card
~/crawlbase

$ curl 'https://api.crawlbase.com/?token=YOUR_TOKEN&url=https://github.com/crawlbase'
→ 200 OK // 4.2s · pc_status: 200 · 14.8 KB
# JS-rendered, geo-routed, anti-bot bypassed
<!doctype html><html>…</html>

$ curl 'https://api.crawlbase.com/?token=YOUR_TOKEN&format=json&url=https://github.com/crawlbase'
{ "original_status": 200, "pc_status": 200, "url": "https://github.com/crawlbase", "body": "<!doctype html>…" }

$ curl 'https://api.crawlbase.com/?token=YOUR_TOKEN&format=md&url=https://github.com/crawlbase'
# Crawlbase
Web crawling & scraping API - Python, Node.js, Ruby, PHP, Go SDKs.

# Or via the MCP server (same result, agent-native)
> tool_use: crawl_markdown(url="https://github.com/crawlbase")

Pick the surface that fits your stack

Browse all APIs

What can I build?

Browse scrapers

Your first crawl in 60 seconds

GET https://api.crawlbase.com/
curl 'https://api.crawlbase.com/?token=YOUR_TOKEN&url=https%3A%2F%2Fgithub.com%2Fcrawlbase'
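Note the `%3A%2F%2F` in the curl command: the target URL is passed as a query parameter, so it must be percent-encoded or its `://` and `/` characters will be parsed as part of the API request itself. The SDKs below handle this for you; if you build the request by hand, encode it explicitly. A minimal sketch using the Python standard library:

```python
from urllib.parse import urlencode

TOKEN = "YOUR_TOKEN"
target = "https://github.com/crawlbase"

# urlencode percent-escapes the target URL so it survives
# as a single query parameter.
query = urlencode({"token": TOKEN, "url": target})
api_url = f"https://api.crawlbase.com/?{query}"
print(api_url)
# https://api.crawlbase.com/?token=YOUR_TOKEN&url=https%3A%2F%2Fgithub.com%2Fcrawlbase
```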
from crawlbase import CrawlingAPI

api = CrawlingAPI({'token': 'YOUR_TOKEN'})
response = api.get('https://github.com/crawlbase')

if response['status_code'] == 200:
    print(response['body'])
const { CrawlingAPI } = require('crawlbase');
const api = new CrawlingAPI({ token: 'YOUR_TOKEN' });

api.get('https://github.com/crawlbase')
   .then(res => console.log(res.statusCode, res.body))
   .catch(err => console.error(err));
require 'crawlbase'

api = Crawlbase::API.new(token: 'YOUR_TOKEN')
response = api.get('https://github.com/crawlbase')

puts response.status_code
puts response.body
<?php
use Crawlbase\CrawlingAPI;

$api = new CrawlingAPI(['token' => 'YOUR_TOKEN']);
$response = $api->get('https://github.com/crawlbase');

echo $response->statusCode;
echo $response->body;
package main

import (
    "fmt"
    "log"

    "github.com/crawlbase/crawlbase-go"
)

func main() {
    api := crawlbase.NewCrawlingAPI("YOUR_TOKEN")
    res, err := api.Get("https://github.com/crawlbase")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(res.StatusCode, res.Body)
}
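As the terminal demo above shows, adding `format=json` wraps the result in a JSON envelope with `original_status`, `pc_status`, `url`, and `body` fields. A minimal sketch of consuming that envelope, using a hard-coded sample payload rather than a live response:

```python
import json

# Illustrative payload mirroring the format=json fields shown above;
# a real call would return this from the API.
raw = """{
  "original_status": 200,
  "pc_status": 200,
  "url": "https://github.com/crawlbase",
  "body": "<!doctype html>..."
}"""

data = json.loads(raw)

# original_status is the target site's response; pc_status is
# Crawlbase's own status for the crawl.
if data["original_status"] == 200 and data["pc_status"] == 200:
    html = data["body"]
    print(data["url"], len(html))
```

Checking both status fields separately tells you whether a failure came from the target site or from the crawl itself.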

Native plumbing for AI agents

Explore AI & MCP
- Expose every Crawlbase tool to Claude, Cursor, ChatGPT, and any MCP-compatible client. → Read docs
- One-click install in Claude. Crawlbase becomes a native tool your conversations can call. → Setup
- Crawlbase document loaders, retrievers, and tools - drop-in for your agent graph. → Integrate
- Battle-tested prompts for extraction, summarization, monitoring, and lead enrichment. → Browse

Drop into any stack

- Every code, every meaning. Bookmark this page.
- 20 req/sec per token, with paths to higher limits.
- What shipped this week, with migration notes.
- Got a hard problem? We're a Slack/email away.
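The default ceiling is 20 requests per second per token, so high-volume jobs should throttle on the client side rather than burn requests on rejections. A sliding-window throttle sketch (the class and its names are ours, not part of any Crawlbase SDK):

```python
import time
from collections import deque

class TokenRateLimiter:
    """Client-side throttle to stay under a per-token request ceiling."""

    def __init__(self, max_per_second: int = 20):
        self.max_per_second = max_per_second
        self.sent = deque()  # timestamps of requests in the last second

    def acquire(self) -> None:
        """Block until sending one more request stays within the limit."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the 1-second window.
        while self.sent and now - self.sent[0] >= 1.0:
            self.sent.popleft()
        if len(self.sent) >= self.max_per_second:
            # Wait until the oldest request in the window expires.
            time.sleep(1.0 - (now - self.sent[0]))
            self.sent.popleft()
        self.sent.append(time.monotonic())

# Usage: call limiter.acquire() before each API request.
limiter = TokenRateLimiter(max_per_second=20)
```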