When to use an integration vs the API

If your workflow lives in one of the tools below, the integration is almost always the right call: it gives you typed inputs, structured outputs, and the platform's native error handling for free. If you're building a custom pipeline in your own codebase, use the SDKs or call the Crawling API directly. Both paths hit the same endpoints; the integrations just remove the wiring.
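
For a sense of the direct route, here's a minimal sketch that calls the Crawling API from plain Python with requests. The endpoint, the token / url query parameters, and the CRAWLBASE_TOKEN environment variable shown here are assumptions; check the Crawling API docs or the API Playground for the exact request before wiring it in.

```python
# Minimal sketch: calling the Crawling API directly from your own code.
# The endpoint, the token/url parameter names, and the CRAWLBASE_TOKEN
# env var are assumptions; confirm them against the Crawling API docs.
import os
import requests

CRAWLING_API = "https://api.crawlbase.com/"  # assumed base endpoint


def fetch(url: str) -> str:
    """Fetch a page through the Crawling API and return the response body."""
    resp = requests.get(
        CRAWLING_API,
        params={
            "token": os.environ["CRAWLBASE_TOKEN"],  # token authentication
            "url": url,                              # target page to crawl
        },
        timeout=90,
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    print(fetch("https://example.com")[:500])
```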

Available today

  • langchain — LangChain provider for Python and JS/TS. Drop a Crawlbase tool into an agent's toolbelt and the agent fetches live web content with one call (see the sketch after this list).
  • zapier — Zapier app. Trigger a crawl from any Zap; the parsed result feeds into the next step (Sheets, Airtable, Slack, anything Zapier connects to).
  • n8n — n8n community node for self-hosted workflows. A single Crawlbase node calls the Crawling API directly, with method, options, and outputs mapped to native n8n fields; no HTTP wiring needed.
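
To make "a Crawlbase tool in the agent's toolbelt" concrete, here's a hedged sketch built on LangChain's generic @tool decorator rather than the dedicated provider (its class names aren't listed on this page); the endpoint, parameters, and CRAWLBASE_TOKEN env var carry over the same assumptions as the sketch above.

```python
# Hypothetical sketch of a Crawlbase-backed LangChain tool using the
# generic @tool decorator; the shipped provider exposes its own classes,
# so switch to those once you install it.
import os

import requests
from langchain_core.tools import tool


@tool
def crawl_page(url: str) -> str:
    """Fetch the live content of a web page through the Crawlbase Crawling API."""
    resp = requests.get(
        "https://api.crawlbase.com/",  # assumed endpoint, same as above
        params={"token": os.environ["CRAWLBASE_TOKEN"], "url": url},
        timeout=90,
    )
    resp.raise_for_status()
    return resp.text


# Pass the tool to an agent like any other, e.g. tools=[crawl_page].
```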

In development

The integrations below have preview docs published so you can see the shape of what's coming. The actual node / module / connector hasn't shipped yet; each page describes today's workaround (usually the platform's built-in HTTP client calling the Crawling API directly) and gives an email link so you can be notified when the dedicated version lands.

  • make (soon) — visual scenario builder (formerly Integromat). Workaround: Make's HTTP app + the Crawling API.
  • airbyte (soon) — open-source data pipelines. Workaround: an HTTP API source pointed at the Crawling API, or push to Cloud Storage and ingest the bucket via Airbyte's S3 source.

Don't see your tool?

Most platforms with an HTTP-action primitive can call Crawlbase directly — the Crawling API is a regular HTTPS endpoint with token authentication. The API Playground produces request templates you can paste into any platform's HTTP step.
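
Under the hood, the request an HTTP step sends boils down to a single GET along these lines; the endpoint and parameter names are the same assumptions as in the sketches above, and the API Playground gives you the exact, ready-to-paste version.

```
GET https://api.crawlbase.com/?token=<your token>&url=<URL-encoded target page>
```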

If you want a dedicated integration for a tool that isn't on this page, write to support with the use case — the roadmap is partly demand-driven.