What the SDKs give you

The SDKs are thin wrappers that handle request shaping (URL encoding, parameter validation, response parsing, retry helpers) so your application code reads like product code instead of HTTP plumbing. Every SDK exposes the same set of clients — Crawling API, Scraper API, Leads API, Screenshots API (plus Cloud Storage on Python / Ruby / PHP / .NET) — and the API surface mirrors the underlying parameters one-to-one: if a parameter is documented on the API page, it works in every SDK. The Enterprise Crawler is reached through the Crawling API itself by passing async + callback + crawler options; there's no separate Crawler client class.
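Because the Enterprise Crawler is just the Crawling API with extra options, the whole request reduces to one query string. A minimal sketch of that reduction using only Python's standard library — the token and crawler name are placeholders, and the `api.crawlbase.com` endpoint is an assumption, not taken from this page:

```python
from urllib.parse import urlencode

TOKEN = "YOUR_TOKEN"                    # placeholder -- substitute your real token
API_BASE = "https://api.crawlbase.com/" # assumed Crawling API endpoint

def crawling_api_url(url, **options):
    """Build a Crawling API request URL; any extra options append one-to-one."""
    params = {"token": TOKEN, "url": url, **options}
    return API_BASE + "?" + urlencode(params)

# Plain synchronous fetch through the Crawling API.
print(crawling_api_url("https://example.com"))
# → https://api.crawlbase.com/?token=YOUR_TOKEN&url=https%3A%2F%2Fexample.com

# Enterprise Crawler: same endpoint, same builder, with the extra options attached.
print(crawling_api_url(
    "https://example.com",
    **{"async": "true"},     # `async` is a Python keyword, so it is passed via a dict
    callback="true",
    crawler="my-crawler",    # hypothetical crawler name
))
```

The second URL illustrates why there is no separate Crawler client class: the crawler path is the same request with three more parameters.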

Pick your language

Each language has its own page with install instructions, authentication, multi-API examples, and the method reference.

More

Other ways to integrate when one of the official SDKs isn't the right fit.

Which SDK should I use?

Use the SDK that matches your project's primary language — that's almost always the right answer. The interfaces are the same shape across languages, so picking one over another is purely about ecosystem fit (your dependency manager, your runtime, your existing types).

If your stack isn't listed, you can use the Crawling API directly over HTTP — every SDK is doing exactly that under the hood. The API Playground generates raw curl/HTTP examples you can port to any client.
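Going direct means each request is a single authenticated GET. A sketch of that call with only Python's standard library — the token is a placeholder and the endpoint is assumed; the request is built but not sent, since sending requires a valid token:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

TOKEN = "YOUR_TOKEN"  # placeholder -- substitute your real token

def build_request(url, **params):
    """Assemble the authenticated GET that every SDK issues under the hood."""
    query = urlencode({"token": TOKEN, "url": url, **params})
    return Request("https://api.crawlbase.com/?" + query)

# Sending it is one call (commented out here -- requires a valid token):
# with urlopen(build_request("https://example.com", format="json")) as resp:
#     body = resp.read().decode("utf-8")
```

Porting this to any other HTTP client is mechanical: keep the query parameters identical and the response handling is whatever your stack already uses.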

Open source

All SDKs are open source on GitHub at github.com/crawlbase. Issues, PRs, and feature requests welcome — most user-reported gaps in the SDKs are fixed within a release cycle.