
API usage

To use the email extractor, add &scraper=email-extractor to a Crawling API request and pass the URL-encoded target page in the url parameter.

curl

curl 'https://api.crawlbase.com/?token=YOUR_TOKEN' \
  --data-urlencode 'url=https://letsencrypt.org/contact/' \
  --data-urlencode 'scraper=email-extractor' -G

Python

from crawlbase import CrawlingAPI
import json

api = CrawlingAPI({'token': 'YOUR_TOKEN'})
res = api.get(
    'https://letsencrypt.org/contact/',
    {'scraper': 'email-extractor'}
)
data = json.loads(res['body'])

Node

const { CrawlingAPI } = require('crawlbase');
const api = new CrawlingAPI({ token: 'YOUR_TOKEN' });

(async () => {
  const res = await api.get(
    'https://letsencrypt.org/contact/',
    { scraper: 'email-extractor' }
  );
  const data = JSON.parse(res.body);
})();

Ruby

require 'crawlbase'
require 'json'

api = Crawlbase::API.new(token: 'YOUR_TOKEN')
res = api.get('https://letsencrypt.org/contact/', scraper: 'email-extractor')
data = JSON.parse(res.body)
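If you are not using an SDK, you can build the request URL yourself; the only requirement is that the url parameter be percent-encoded. A minimal sketch using Python's standard library (YOUR_TOKEN is a placeholder, and the endpoint and parameter names are those shown in the curl example above):

```python
from urllib.parse import urlencode

# urlencode percent-encodes each value, including the target URL.
params = {
    'token': 'YOUR_TOKEN',  # placeholder token
    'url': 'https://letsencrypt.org/contact/',
    'scraper': 'email-extractor',
}
request_url = 'https://api.crawlbase.com/?' + urlencode(params)
print(request_url)
```

The target URL appears in the result as https%3A%2F%2Fletsencrypt.org%2Fcontact%2F, which is the encoding the API expects.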

Example input URL

The URL passed in the url parameter (URL-decoded for readability):

https://letsencrypt.org/contact/

Response shape

The response body is JSON. Any field may be null when the source page omits the corresponding value.

url (string): Final URL.
title (string): Page title.
emails (array): Email address strings (deduplicated).
emails_with_context (array): Email addresses with surrounding text.
emails_with_context[].email (string): Email address.
emails_with_context[].context (string): Surrounding text.
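Because any of these fields may be null, it is worth reading them defensively. A sketch of consuming a parsed response body; the sample_body below is illustrative (example.com addresses, not real API output):

```python
import json

# Illustrative response body (not real API output); title is null here.
sample_body = json.dumps({
    "url": "https://letsencrypt.org/contact/",
    "title": None,
    "emails": ["press@example.com"],
    "emails_with_context": [
        {"email": "press@example.com",
         "context": "Press inquiries: press@example.com"}
    ],
})

data = json.loads(sample_body)
title = data.get('title') or '(no title)'      # null -> fallback
emails = data.get('emails') or []              # null -> empty list
contexts = [item['context'] for item in data.get('emails_with_context') or []]
```

Using `or` after `.get()` covers both a missing key and an explicit null in one step.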

Sample response

{
  "url": "https://letsencrypt.org/contact/",
  "title": "Contact - Let's Encrypt",
  "emails": [
    "[email protected]",
    "[email protected]"
  ]
}
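A common follow-up step is grouping the extracted addresses by domain, for example to separate contacts per organization. A sketch with hypothetical addresses (the addresses in the sample above are redacted, so example.com/.org stand in here):

```python
from collections import defaultdict

# Hypothetical extracted addresses for illustration only.
emails = ['press@example.com', 'legal@example.com', 'info@example.org']

by_domain = defaultdict(list)
for email in emails:
    local, _, domain = email.rpartition('@')  # split on the last '@'
    by_domain[domain].append(local)

print(dict(by_domain))
```

rpartition is used rather than split so that a malformed address containing several '@' characters still splits on the last one.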