# The Smart Proxy in minutes
The Crawling API is an HTTP/S-based API for crawling and scraping websites. If your application is not designed to work with such an API directly, we provide an intelligent rotating proxy that forwards your requests to the Crawling API. You simply configure it as a regular proxy in your application.
All proxy calls should go to one of the following endpoints, using your access token as the proxy username:
- HTTP: `http://smartproxy.crawlbase.com` on port `8012`
- HTTPS: `https://smartproxy.crawlbase.com` on port `8013`
Making your first call is as easy as running one of the following lines in a terminal. Go ahead and try it!
Using HTTP:

```shell
curl -x "http://[email protected]:8012" -k "http://httpbin.org/ip"
```

Using HTTPS:

```shell
curl -x "https://[email protected]:8013" -k "http://httpbin.org/ip"
```
# How it works
When you send a request to the proxy, the proxy authorizes it using your proxy authorization username (your private access token, shown below). It then forwards your request to the Crawling API and returns the response to your application. If you need the extra features of the Crawling API in this mode, send the `crawlbaseAPI-Parameters` HTTP header with the options you require. Check the examples section below for real examples.
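As a sketch of how such a header can be composed, the snippet below uses Python's standard library to encode one or more Crawling API options into the header value. The option name used here (`javascript`) is illustrative only; consult the Crawling API documentation for the options available to you.

```python
# Encode Crawling API options as the crawlbaseAPI-Parameters HTTP header.
# Option names here are illustrative, not an exhaustive or authoritative list.
from urllib.parse import urlencode

def crawlbase_params_header(**options) -> dict:
    """Return a headers dict carrying the given options, URL-encoded."""
    return {"crawlbaseAPI-Parameters": urlencode(options)}

print(crawlbase_params_header(javascript="true"))
# → {'crawlbaseAPI-Parameters': 'javascript=true'}
```

Merge the returned dict into the headers of any request you send through the proxy.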
Private token: `_USER_TOKEN_`
# Important Note
When making requests through the Smart Proxy, you should disable SSL verification for the destination URLs (using `-k` in curl, or the equivalent option in other HTTP clients). This is necessary because the proxy needs to inspect, and potentially modify, the request and response in order to provide its smart features.
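In application code, the equivalent of curl's `-k` is disabling certificate verification in your HTTP client. The sketch below, using only Python's standard library, builds an opener that routes through the Smart Proxy with verification disabled; the token is a placeholder, and the helper name is our own.

```python
# Build a urllib opener that routes through the Smart Proxy and skips
# certificate verification (the equivalent of curl's -k flag).
import ssl
import urllib.request

def smart_proxy_opener(token: str) -> urllib.request.OpenerDirector:
    proxy = f"http://{token}@smartproxy.crawlbase.com:8012"
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # disable SSL verification, like curl -k
    return urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy, "https": proxy}),
        urllib.request.HTTPSHandler(context=ctx),
    )

opener = smart_proxy_opener("_USER_TOKEN_")
# opener.open("http://httpbin.org/ip") would send the request via the proxy
```

Only disable verification for requests routed through the proxy; keep it enabled for direct connections elsewhere in your application.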