Install

The Crawlbase node is published as a community node. Install it from your n8n instance:

  1. Go to Settings → Community Nodes → Install a community node.
  2. Enter n8n-nodes-crawlbase and click Install.
  3. Restart n8n if prompted. The Crawlbase node now shows up in the canvas search.

Credentials

Add a Crawlbase API credential under Settings → Credentials:

  1. Paste your API Token from the Crawlbase dashboard.
  2. Click Test connection to confirm the token is valid before running a workflow.

Use your Normal Token for HTML targets and your JavaScript Token for SPAs and JS-rendered pages — create one credential per token tier and pick the right one per node.
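For orientation, here is a minimal sketch of the request each token tier ends up making, assuming the standard Crawling API endpoint (https://api.crawlbase.com); the token values are placeholders.

```typescript
// Minimal sketch of the underlying Crawling API call, assuming the
// standard endpoint. Token values below are placeholders.
const NORMAL_TOKEN = "YOUR_NORMAL_TOKEN"; // plain HTML targets
const JS_TOKEN = "YOUR_JAVASCRIPT_TOKEN"; // SPAs and JS-rendered pages

async function crawl(url: string, token: string): Promise<string> {
  const endpoint = `https://api.crawlbase.com/?token=${token}&url=${encodeURIComponent(url)}`;
  const res = await fetch(endpoint);
  return res.text();
}

// One credential per token tier, picked per target:
const staticHtml = await crawl("https://example.com/pricing", NORMAL_TOKEN);
const renderedSpa = await crawl("https://example.com/app", JS_TOKEN);
```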

The Crawlbase node

A single Crawlbase node wraps the Crawling API. Drop it into a workflow, point it at a credential, and configure the request fields below.

  • Method: GET, POST, or PUT. Use POST/PUT when the target needs a request body.
  • Response format: HTML (default), JSON (parsed scraper output), or Markdown (clean text for LLM pipelines).
  • Options: optional Crawling API parameters such as page_wait, country, device, request_headers, cookies, scraper, screenshot, store, async, and JS-rendering helpers. See the Crawling API parameters reference for the full list.
  • Output: each item returns statusCode, headers, body, and metadata (with originalStatus, cbStatus, and the resolved url).

Item-list mode

Set URL Source to From input item field and name the field that carries the URL (for example url). The node runs one Crawling API request per input item and emits one output item per input, so you can pipe in a Google Sheets read, a Split In Batches node, or any other list-producing node directly.
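
If the incoming items don't already carry a clean URL field, a Code node placed before the Crawlbase node can normalize them. The sketch below assumes a hypothetical input column named website; adjust it to whatever your sheet or upstream node emits.

```typescript
// n8n Code node (run once for all items): map each incoming item to an
// item whose `url` field the Crawlbase node can read. `website` is a
// hypothetical source column name; $input is the Code node's global.
const out = [];
for (const item of $input.all()) {
  out.push({ json: { url: item.json.website } });
}
return out;
```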

Rate limits and retries

Crawlbase rate limits depend on your plan. To keep workflows resilient:

  • Enable n8n's Retry On Fail on the Crawlbase node (Settings tab on the node).
  • Set Wait Between Tries to at least 1 second — higher if you hit limits.
  • For large URL lists, batch the work with Loop Over Items or Split In Batches rather than firing all requests at once.
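
For anyone driving the Crawling API outside n8n, the same retry-with-backoff behavior looks roughly like this; the attempt count and delays are illustrative, so tune them to your plan's limits.

```typescript
// Retry with exponential backoff: wait at least 1 s, doubling per attempt.
async function crawlWithRetry(url: string, token: string, attempts = 3): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(
      `https://api.crawlbase.com/?token=${token}&url=${encodeURIComponent(url)}`
    );
    if (res.ok) return res.text();
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** i)); // 1 s, 2 s, 4 s, ...
  }
  throw new Error(`Crawlbase request failed after ${attempts} attempts: ${url}`);
}
```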

Common workflows

  • Schedule → Crawlbase → Postgres: daily snapshot of a competitor's pricing page into a database.
  • Webhook → Crawlbase → Email: on-demand product enrichment.
  • RSS → Crawlbase → Vector DB: populate a self-hosted retrieval index.
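
As plain code, the first pattern boils down to the sketch below: fetch the page through the Crawling API and insert the snapshot into Postgres. The snapshots table, the target URL, and the pg driver are assumptions standing in for the Postgres node; in n8n, the Schedule node supplies the daily trigger.

```typescript
// Daily pricing snapshot, approximated outside n8n. Assumes a table
// `snapshots(url text, body text, fetched_at timestamptz)` and the `pg` driver.
import { Client } from "pg";

async function snapshotPricingPage(token: string): Promise<void> {
  const url = "https://competitor.example.com/pricing"; // hypothetical target
  const res = await fetch(
    `https://api.crawlbase.com/?token=${token}&url=${encodeURIComponent(url)}`
  );
  const body = await res.text();

  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();
  await db.query(
    "INSERT INTO snapshots (url, body, fetched_at) VALUES ($1, $2, NOW())",
    [url, body]
  );
  await db.end();
}
```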