# The Smart Proxy in minutes

If your application is not designed to work with an HTTP/S-based API like the Crawling API to crawl and scrape websites, you can use the Smart Proxy instead: an intelligent rotating proxy that forwards your requests to the Crawling API. You simply configure it as a normal proxy in your application.

All proxy calls should go to http://smartproxy.crawlbase.com on port 8012, using your access token as the proxy username.

Making your first call is therefore as easy as running the following line in a terminal. Go ahead and try it!

```
curl -x "http://_USER_TOKEN_@smartproxy.crawlbase.com:8012" -k "http://httpbin.org/ip"
```
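Since the Smart Proxy behaves like a standard HTTP proxy, many HTTP clients, curl included, will also pick it up from the usual proxy environment variables. The following is a minimal sketch; exact variable handling varies between tools:

```
# Standard proxy environment variables read by curl and many other clients
export http_proxy="http://_USER_TOKEN_@smartproxy.crawlbase.com:8012"
export https_proxy="http://_USER_TOKEN_@smartproxy.crawlbase.com:8012"

# The proxy is now used automatically; -k still skips certificate checks
curl -k "http://httpbin.org/ip"
```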

# How it works

When you send a request to the proxy, it authorizes the request using your proxy username, which is your private access token (shown below). It then forwards your request to the Crawling API and returns the response to your application. If you need to use extra features of the Crawling API in this mode, send the HTTP header crawlbaseAPI-Parameters with the options you want to use. Check the examples section below for real examples.

Private token: _USER_TOKEN_
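As an illustration, the call below passes extra Crawling API options through the crawlbaseAPI-Parameters header. This is a sketch only; the options shown, such as country=US and device=mobile, are placeholders and should be replaced with whichever parameters the Crawling API actually supports for your plan.

```
# Pass Crawling API options to the proxy via the crawlbaseAPI-Parameters header
curl -x "http://_USER_TOKEN_@smartproxy.crawlbase.com:8012" -k \
     -H "crawlbaseAPI-Parameters: country=US&device=mobile" \
     "https://httpbin.org/headers"
```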

# Important Note

You must disable SSL certificate verification when using the Smart Proxy; otherwise, we will not be able to apply our smart AI integration to your requests. In other words, always skip verifying certificates, which in curl means passing the -k (--insecure) flag.
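For example, a request to an HTTPS page through the proxy would look like this (a minimal sketch; the target URL is just a placeholder):

```
# -k skips certificate verification so the Smart Proxy can process the request
curl -x "http://_USER_TOKEN_@smartproxy.crawlbase.com:8012" \
     -k "https://httpbin.org/ip"
```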