n8n
Open-source automation on your own servers. The Crawlbase n8n community node gives you the same Crawlbase APIs in a self-hosted workflow with no SaaS lock-in.
Install
The Crawlbase node is published as a community node. Install it from your n8n instance:
- Go to Settings → Community Nodes → Install a community node.
- Enter `n8n-nodes-crawlbase` and click Install.
- Restart n8n if prompted. The Crawlbase node then appears in the canvas search.
Credentials
Add a Crawlbase API credential under Settings → Credentials:
- Paste your API Token from the Crawlbase dashboard.
- Click Test connection to confirm the token is valid before running a workflow.
Use your Normal Token for HTML targets and your JavaScript Token for SPAs and JS-rendered pages — create one credential per token tier and pick the right one per node.
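Both tokens go to the same Crawling API endpoint; only the token decides whether the page is rendered. A minimal sketch of how the two tiers map onto request URLs (token values are placeholders, and the helper name is ours, not part of the node):

```javascript
// Build a Crawling API request URL. Both token tiers use the same endpoint;
// a JavaScript Token makes Crawlbase render the page in a headless browser.
function crawlingApiUrl(token, targetUrl) {
  return `https://api.crawlbase.com/?token=${token}&url=${encodeURIComponent(targetUrl)}`;
}

// Normal Token: plain HTML fetch.
const htmlRequest = crawlingApiUrl('NORMAL_TOKEN', 'https://example.com/pricing');

// JavaScript Token: same call shape, for SPAs and JS-rendered pages.
const spaRequest = crawlingApiUrl('JS_TOKEN', 'https://app.example.com/#/dashboard');
```

The n8n node builds this request for you; the sketch is only meant to show why the credential you pick per node matters.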
The Crawlbase node
A single Crawlbase node wraps the Crawling API. Drop it into a workflow, point it at a credential, and configure the request fields below.
Key request options include page_wait, country, device, request_headers, cookies, scraper, screenshot, store, async, and JS-rendering helpers; see the Crawling API parameters reference for the full list. Each output item carries statusCode, headers, body, and metadata (with originalStatus, cbStatus, and the resolved url).
Item-list mode
Set URL Source to From input item field and name the field that carries the URL (for example url). The node runs one Crawling API request per input item and emits one output item per input — pipe a Read-from-Sheet, Split-In-Batches, or any list-producing node straight in.
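As a concrete sketch of the 1:1 item mapping, assuming the URL field is named `url` (all field values below are illustrative, not taken from a real response):

```javascript
// n8n wraps each item's data in a `json` key. With URL Source set to
// "From input item field", the node reads `url` from each input item.
const inputItems = [
  { json: { url: 'https://example.com/p/1' } },
  { json: { url: 'https://example.com/p/2' } },
];

// One output item is emitted per input item; values here are made up.
const exampleOutput = {
  json: {
    statusCode: 200,
    headers: { 'content-type': 'text/html; charset=utf-8' },
    body: '<!DOCTYPE html>...',
    metadata: {
      originalStatus: 200,            // status reported by the target site
      cbStatus: 200,                  // Crawlbase's own status (illustrative)
      url: 'https://example.com/p/1', // resolved URL after redirects
    },
  },
};
```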
Rate limits and retries
Crawlbase rate limits depend on your plan. To keep workflows resilient:
- Enable n8n's Retry On Fail on the Crawlbase node (Settings tab on the node).
- Set Wait Between Tries to at least 1 second — higher if you hit limits.
- For large URL lists, batch the work with Loop Over Items or Split In Batches rather than firing all requests at once.
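The retry behavior n8n gives you with Retry On Fail plus Wait Between Tries can be written out as plain code. This is a sketch of the pattern only; `fetchOnce` is a stand-in for a single Crawlbase request, and none of these names come from n8n's internals:

```javascript
// Retry a request up to `tries` times, pausing `waitMs` between attempts —
// the same shape as Retry On Fail with Wait Between Tries on a node.
async function withRetries(fetchOnce, { tries = 3, waitMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= tries; attempt++) {
    try {
      return await fetchOnce();
    } catch (err) {
      lastError = err;
      if (attempt < tries) {
        // Pause before the next attempt, mirroring Wait Between Tries.
        await new Promise((resolve) => setTimeout(resolve, waitMs));
      }
    }
  }
  throw lastError;
}
```

For rate-limit errors specifically, a longer wait (or exponential backoff via Loop Over Items) is usually more effective than adding more retries.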
Common workflows
- Schedule → Crawlbase → Postgres: daily snapshot of a competitor's pricing page into a database.
- Webhook → Crawlbase → Email: on-demand product enrichment.
- RSS → Crawlbase → Vector DB: populate a self-hosted retrieval index.

