These days, job searching has largely moved to online platforms, making it easier than ever to find job opportunities. However, with this convenience comes the challenge of sifting through a vast amount of information to find the right job listings. This is where web scraping, a powerful technique in the world of data extraction, comes into play.

Web scraping enables you to transform the way you approach job hunting by automating the collection and organization of job postings. Rather than spending hours manually searching through different job boards and websites, you can create custom web scraping scripts to gather, filter, and present job listings tailored to your preferences. This not only saves you valuable time but also ensures you don’t miss out on hidden job opportunities that might be buried deep within the web.

In this comprehensive guide, we will explore how to harness the potential of web scraping using Crawlbase Crawling API to streamline your job search process on one of the most prominent job listing websites, Indeed. Whether you’re a job seeker looking for that perfect career opportunity or a data enthusiast interested in mastering web scraping techniques, this step-by-step Python guide will equip you with the skills to automate your job search and make it more effective and efficient. Join us as we dive into the world of web scraping and uncover the countless opportunities it offers in simplifying your job searching journey on Indeed.

Table of Contents

  1. Getting Started
  • What is Web Scraping?
  • The Role of Web Scraping in Job Searching
  2. Getting Started with Crawlbase Crawling API
  • Key Benefits of Crawlbase Crawling API
  • Sending Request With Crawling API
  • API Response Time and Format
  • Crawling API Parameters
  • Free Trial, Charging Strategy, and Rate Limit
  • Crawlbase Python library
  3. Understanding the Indeed Website
  • Navigating Indeed’s Job Listings
  • Exploring Indeed’s Job Postings
  4. Setting Up Your Development Environment
  • Installing Python
  • Installing Required Libraries
  • Choosing the Right Development IDE
  5. Building Your Indeed Job Scraper
  • Scraping Job Listings
  • Handling Job Search Page Pagination
  • Extracting Data from Job Posting Page
  • Saving Data into an SQLite Database
  6. Conclusion
  7. Frequently Asked Questions

1. Getting Started

Web scraping is a highly efficient method for extracting data from websites, and it’s a skill that can greatly enhance your job search process. In this section, we’ll begin by delving into the fundamentals. First, we’ll explore the role of web scraping in the context of job searching, highlighting how it can empower you to access, filter, and organize job listings from websites like Indeed.

What is Web Scraping?

Web scraping, often referred to as web harvesting or web data extraction, is a technique used to extract information from websites. It involves automating the retrieval of data from web pages, which can then be saved, processed, and analyzed for various purposes. Web scraping is a valuable tool in the world of data collection, as it allows you to transform unstructured data on websites into structured datasets that are easier to work with.

Web scraping is achieved using specialized software tools, programming languages like Python, and sometimes with the help of APIs (Application Programming Interfaces). By sending HTTP requests to a website, you can retrieve its HTML or XML content, parse it to extract the data you need, and then store that data for analysis, research, or other applications.
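As a minimal sketch of that workflow, here is what a request-and-parse cycle can look like with the widely used requests and Beautiful Soup libraries (Beautiful Soup is installed later in this guide); the URL is just a placeholder.

import requests
from bs4 import BeautifulSoup

# Fetch a page over HTTP (placeholder URL)
response = requests.get("https://www.example.com", timeout=30)
response.raise_for_status()

# Parse the HTML and pull out structured pieces of it
soup = BeautifulSoup(response.text, "html.parser")
page_title = soup.title.string if soup.title else None
links = [a.get("href") for a in soup.find_all("a")]

print(page_title)
print(links)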

The Role of Web Scraping in Job Searching

The traditional method of job searching often involves visiting multiple job boards, company websites, and recruitment platforms to find relevant job listings. This process can be time-consuming and overwhelming, especially when you’re searching for opportunities across different locations and industries.

Web scraping plays a crucial role in simplifying and optimizing the job searching process. Here’s how:

  1. Aggregating Job Listings: Web scraping allows you to aggregate job listings from various sources and websites into a single dataset. This means you can access a wide range of job opportunities all in one place, saving you the effort of visiting multiple websites.

  2. Automating Data Retrieval: Instead of manually copying and pasting job details, web scraping automates the data retrieval process. With the right scraping script, you can extract job titles, company names, job descriptions, locations, and more without repetitive manual tasks.

  3. Customized Searches: Web scraping empowers you to customize your job search. You can set up specific search criteria and filters to extract job listings that match your preferences. This level of customization helps you focus on the most relevant opportunities.

  4. Real-Time Updates: By scheduling web scraping scripts to run at regular intervals, you can receive real-time updates on new job listings. This ensures that you’re among the first to know about job openings in your desired field (see the sketch after this list).
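As a rough illustration of that last point, a recurring check can be as simple as a loop that re-runs your scraper and then sleeps; scrape_new_listings here is a hypothetical placeholder for whatever scraping logic you build later in this guide.

import time

def scrape_new_listings():
    # Hypothetical placeholder: call your own scraping code here
    print("Checking for new job listings...")

# Re-run the scraper once per hour (3600 seconds)
while True:
    scrape_new_listings()
    time.sleep(3600)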

In the following sections, we’ll explore how to leverage web scraping, specifically using the Crawlbase Crawling API, to efficiently scrape job posts from Indeed. This step-by-step guide will equip you with the skills to automate your job search and make it more effective and efficient.

2. Getting Started with Crawlbase Crawling API

In your journey to harness the power of web scraping for job hunting on Indeed, understanding the Crawlbase Crawling API is paramount. This section will dive into the technical aspects of Crawlbase’s API and equip you with the knowledge needed to seamlessly integrate it into your Python job scraping project.

Key Benefits of Crawlbase Crawling API

The Crawlbase Crawling API offers a range of key benefits that empower developers to efficiently gather web data and handle various aspects of the crawling process. Here are some of the notable advantages:

  1. Versatile Parameter Options: Crawlbase provides a rich set of parameters, allowing developers to tailor their API requests precisely. Parameters such as “format,” “user_agent,” “page_wait,” and more enable customization to suit specific crawling needs.
  2. Response Format Control: Developers can choose between JSON and HTML response formats based on their preferences and data processing requirements. This flexibility simplifies data extraction and manipulation.
  3. Cookies and Headers Handling: With the ability to retrieve cookies and headers from the original website using parameters like “get_cookies” and “get_headers,” developers can access valuable information that may be crucial for certain web scraping tasks.
  4. Dynamic Content Handling: Crawlbase excels in handling dynamic content, making it suitable for crawling JavaScript-rendered pages. Parameters like “page_wait” and “ajax_wait” enable developers to ensure that the API captures the fully rendered content, even when it takes time to load or includes AJAX requests.
  5. IP Rotation: Crawlbase offers the capability to rotate IP addresses, providing anonymity and reducing the risk of being blocked by websites. This feature ensures a higher success rate for web crawling tasks.
  6. Geolocation Options: Developers can specify a country for geolocated requests using the “country” parameter. This is particularly useful for scenarios where data from specific geographic regions is required.
  7. Tor Network Support: For crawling onion websites over the Tor network, the “tor_network” parameter can be enabled, enhancing privacy and access to content on the dark web.
  8. Screenshot Capture: The API can capture screenshots of web pages with the “screenshot” parameter, providing visual context for crawled data.
  9. Data Scraping with Scrapers: Crawlbase offers the option to utilize predefined data scrapers, streamlining the extraction of specific information from web pages. This simplifies data retrieval for common use cases.
  10. Asynchronous Crawling: In cases where asynchronous crawling is needed, the API supports the “async” parameter. Developers receive a request identifier (RID) for retrieving crawled data from the cloud storage.
  11. Autoparsing: The “autoparse” parameter simplifies data extraction by returning parsed information in JSON format, reducing the need for extensive post-processing of HTML content.

In summary, Crawlbase’s Crawling API is a powerful tool for web scraping and data extraction, offering a wide range of parameters and features that cater to diverse crawling requirements. Whether you need to access dynamic content, handle cookies and headers, rotate IP addresses, or extract specific data, Crawlbase provides the tools and capabilities to make web crawling efficient and effective.

Sending Request With Crawling API

Crawlbase’s Crawling API is designed for simplicity and ease of integration into your web scraping projects. All API URLs begin with the base part: https://api.crawlbase.com. Making your first API call is as straightforward as executing a command in your terminal:

curl 'https://api.crawlbase.com/?token=YOUR_CRAWLBASE_TOKEN&url=https%3A%2F%2Fgithub.com%2Fcrawlbase%3Ftab%3Drepositories'

Here, you’ll notice the token parameter, which serves as your authentication key for accessing Crawlbase’s web scraping capabilities. Crawlbase offers two types of tokens: a normal (TCP) token and a JavaScript (JS) token. Use the normal token for static websites whose content doesn’t depend on JavaScript. If a site renders its important content in the browser with JavaScript, as Indeed does, use the JavaScript token instead.

API Response Time and Format

When interacting with the Crawlbase Crawling API, it’s crucial to understand the response times and how to interpret success or failure. Here’s a closer look at these aspects:

Response Times: Typically, the API response time falls within a range of 4 to 10 seconds. To ensure a seamless experience and accommodate any potential delays, it’s advisable to set a timeout for calls to at least 90 seconds. This ensures that your application can handle variations in response times without interruptions.
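For example, if you call the API over plain HTTP yourself (as in the curl example above) rather than through the Python library shown later, the requests library lets you pass a generous timeout; this is just an illustrative sketch.

import requests

# Call the Crawling API directly with a 90-second timeout
response = requests.get(
    "https://api.crawlbase.com/",
    params={
        "token": "YOUR_CRAWLBASE_TOKEN",
        "url": "https://github.com/crawlbase?tab=repositories",
    },
    timeout=90,  # allow for slower responses, as recommended above
)
print(response.status_code)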

Response Formats: When making a request to Crawlbase, you have the flexibility to choose between HTML and JSON response formats based on your preferences and parsing requirements. You can pass the “format” query parameter with the value “html” or “json” to select the format you need.

If you select the HTML response format (which is the default), you’ll receive the HTML content of the web page as the response. The response parameters will be added to the response headers for easy access. Here’s an example response:

Headers:
url: https://github.com/crawlbase?tab=repositories
original_status: 200
pc_status: 200

Body:
HTML of the page

If you opt for the JSON response format, you’ll receive a structured JSON object that can be easily parsed in your application. This object contains all the information you need, including response parameters. Here’s an example response:

{
  "original_status": "200",
  "pc_status": 200,
  "url": "https%3A%2F%2Fgithub.com%2Fcrawlbase%3Ftab%3Drepositories",
  "body": "HTML of the page"
}

Response Headers: Both HTML and JSON responses include essential headers that provide valuable information about the request and its outcome:

  • url: The original URL that was sent in the request or the URL of any redirects that Crawlbase followed.
  • original_status: The status response received by Crawlbase when crawling the URL sent in the request. It can be any valid HTTP status code.
  • pc_status: The Crawlbase (pc) status code, which is the status code that ultimately counts as the outcome of the request. For instance, if a website returns an original_status of 200 but serves a CAPTCHA challenge, the pc_status may be 503.
  • body (JSON only): This parameter is available in JSON format and contains the content of the web page that Crawlbase found as a result of proxy crawling the URL sent in the request.

These response parameters empower you to assess the outcome of your requests and determine whether your web scraping operation was successful.
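For instance, with the JSON response format you can inspect these fields before treating a crawl as successful. The sketch below assumes the field names shown in the example JSON response above and uses the Python library introduced later in this guide.

import json
from crawlbase import CrawlingAPI

api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_TOKEN'})

# Request the JSON format so the status fields arrive in the response body
response = api.get('https://github.com/crawlbase?tab=repositories', {'format': 'json'})

if response['status_code'] == 200:
    data = json.loads(response['body'])
    # Treat the crawl as successful only if both status codes look healthy
    if str(data['original_status']) == '200' and str(data['pc_status']) == '200':
        print('Crawl succeeded')
    else:
        print('Crawl failed:', data['original_status'], data['pc_status'])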

Crawling API Parameters

Crawlbase offers a comprehensive set of parameters that allow developers to customize their web crawling requests. These parameters enable fine-tuning of the crawling process to meet specific requirements. For instance, you can specify response formats like JSON or HTML using the “format” parameter, or control page waiting times with “page_wait” when working with JavaScript-generated content.

Additionally, you can extract cookies and headers, set custom user agents, capture screenshots, and even choose geolocation preferences using parameters such as “get_cookies,” “user_agent,” “screenshot,” and “country.” These options provide flexibility and control over the web crawling process. For example, to retrieve cookies set by the original website, you can simply include “&get_cookies=true” in your API request, and Crawlbase will return the cookies in the response headers.

You can read more about Crawlbase Crawling API parameters here.
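To give a feel for how these parameters come together in practice, here is a hedged sketch that passes a few of them through the Python library’s options dictionary, mirroring how {'country': 'US'} is passed later in this guide; the parameter names follow the Crawlbase documentation linked above.

from crawlbase import CrawlingAPI

api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_JS_TOKEN'})

# Options map onto Crawling API query parameters
options = {
    'format': 'html',       # response format: 'html' (default) or 'json'
    'page_wait': 2000,      # wait 2 seconds for JavaScript-rendered content
    'get_cookies': 'true',  # return the original website's cookies in the headers
    'country': 'US',        # geolocate the request
}

response = api.get('https://www.indeed.com/jobs?q=Web+Developer&l=Virginia', options)
print(response['status_code'])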

Free Trial, Charging Strategy, and Rate Limit

Crawlbase provides a free trial that includes the first 1,000 requests, so you can explore its capabilities before committing. Make the most of this trial period to evaluate the API against your use case.

Crawlbase operates on a “pay for what you use” model. Importantly, Crawlbase only charges for successful requests, making it cost-effective and efficient for your web scraping needs. Successful requests are determined by checking the original_status and pc_status in the response parameters.

The API is rate-limited to a maximum of 20 requests per second, per token. If you require a higher rate limit, you can contact support to discuss your specific needs.

Crawlbase Python library

The Crawlbase Python library offers a simple way to interact with the Crawlbase Crawling API. You can use this lightweight and dependency-free Python class as a wrapper for the Crawlbase API. To begin, initialize the Crawling API class with your Crawlbase token. Then, you can make GET requests by providing the URL you want to scrape and any desired options, such as custom user agents or response formats. For example, you can scrape a web page and access its content like this:

from crawlbase import CrawlingAPI

# Initialize the CrawlingAPI class
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })

# Make a GET request to scrape a webpage
response = api.get('https://www.example.com')
if response['status_code'] == 200:
    print(response['body'])

This library simplifies the process of fetching web data and is particularly useful for scenarios where dynamic content, IP rotation, and other advanced features of the Crawlbase API are required.

3. Understanding the Indeed Website

Indeed is a powerhouse in the job search industry, boasting millions of job postings from various employers around the world. To successfully scrape job data from Indeed, it’s crucial to understand how the website is structured and how job postings are organized.


Indeed Job Search Page

Homepage: When you first land on Indeed’s homepage, you’ll encounter a straightforward search bar where you can input keywords, job titles, or company names. This search functionality is your gateway to finding specific job listings. You can also specify location details to narrow down your search to a particular city, state, or country.

Search Results: Upon entering your search criteria and hitting the “Search” button, Indeed displays a list of job listings that match your query. These listings are typically organized in reverse chronological order, with the most recent postings appearing at the top. Each listing provides essential details such as the job title, company name, location, and a brief job description.

Filters: Indeed offers various filters on the left-hand side of the search results page. These filters allow you to refine your search further. You can filter job listings by job type (e.g., full-time, part-time), salary estimate, location, company, and more. Using these filters can help you find job postings that precisely match your criteria.

Pagination: When there are numerous job listings that match your search, Indeed implements pagination. You’ll notice that only a limited number of job postings are displayed on each page. To access more listings, you’ll need to click on the page numbers or the “Next” button at the bottom of the search results. Understanding how pagination works is crucial for scraping multiple pages of job listings.

Exploring Indeed’s Job Postings

Once you’ve found a job listing that interests you, clicking on it will take you to the full job posting page. Here’s what you can expect to find:

Indeed Job Post Page

Job Details: The job posting page provides comprehensive details about the job opportunity. This includes the job title, company name, location, job type (e.g., full-time, part-time), and a detailed job description. You’ll also find information about the application deadline and how to apply.

Company Information: Indeed often includes information about the hiring company, such as its size, industry, and location. This can be valuable if you want to filter job listings based on specific companies or industries.

Salary Information: Some job postings on Indeed include estimated salary ranges. This can help you quickly identify positions that align with your salary expectations.

Application Process: Indeed provides information on how to apply for the job. This may involve submitting a resume through Indeed or visiting the company’s website to apply directly.

Similar Jobs: Towards the bottom of the job posting, Indeed suggests similar job listings that might interest you. This can be useful if you’re exploring multiple opportunities in the same field.

Understanding how Indeed structures its website and presents job listings is essential for effective web scraping. With this knowledge, you’ll be better equipped to navigate the site, scrape job data, and extract the information you need for your job search or analysis.

4. Setting Up Your Development Environment

Before you can dive into web scraping Indeed job postings with Python, you need to set up your development environment. This involves installing the necessary tools and libraries and choosing the right Integrated Development Environment (IDE) for your coding tasks.

Installing Python

Python is the primary programming language we’ll use for web scraping. If you don’t already have Python installed on your system, follow these steps:

  1. Download Python: Visit the official Python website at python.org and download the latest version of Python. Choose the appropriate installer for your operating system (Windows, macOS, or Linux).

  2. Installation: Run the downloaded installer and follow the installation instructions. During installation, make sure to check the option that adds Python to your system’s PATH. This step is crucial for running Python from the command line.

  3. Verify Installation: Open a command prompt or terminal and enter the following command to check if Python is installed correctly:

python --version

You should see the installed Python version displayed.

Installing Required Libraries

Python offers a rich ecosystem of libraries that simplify web scraping. For this project, you’ll need the crawlbase library for making web requests with the Crawlbase API and the Beautiful Soup library for parsing HTML content. To install these libraries, use the following commands:

  1. Crawlbase: The crawlbase library is a Python wrapper for the Crawlbase API, which will enable us to make web requests efficiently.

pip install crawlbase

  2. Beautiful Soup: Beautiful Soup is a library for parsing HTML and XML documents. It’s especially useful for extracting data from web pages.

pip install beautifulsoup4

With these libraries installed, you’ll have the tools you need to fetch web pages using the Crawlbase API and parse their content during the scraping process.
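To confirm that both installs worked, you can run a quick sanity check; the HTML snippet below is just dummy data, and the token is a placeholder.

from bs4 import BeautifulSoup
from crawlbase import CrawlingAPI

# Dummy HTML to confirm Beautiful Soup parses correctly
html = "<html><body><h2 class='jobTitle'>Web Developer</h2></body></html>"
soup = BeautifulSoup(html, "html.parser")
print(soup.find("h2", class_="jobTitle").text)  # -> Web Developer

# Confirm the Crawlbase wrapper imports and initializes
api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_TOKEN'})
print("Libraries installed correctly")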

Choosing the Right Development IDE

An Integrated Development Environment (IDE) provides a coding environment with features like code highlighting, auto-completion, and debugging tools. While you can write Python code in a simple text editor, using an IDE can significantly improve your development experience.

Here are a few popular Python IDEs to consider:

  1. PyCharm: PyCharm is a robust IDE with a free Community Edition. It offers features like code analysis, a visual debugger, and support for web development.

  2. Visual Studio Code (VS Code): VS Code is a free, open-source code editor developed by Microsoft. It has a vast extension library, making it versatile for various programming tasks, including web scraping.

  3. Jupyter Notebook: Jupyter Notebook is excellent for interactive coding and data exploration. It’s commonly used in data science projects.

  4. Spyder: Spyder is an IDE designed for scientific and data-related tasks. It provides features like variable explorer and interactive console.

Choose the IDE that best suits your preferences and workflow. Once you have Python installed, the required libraries set up, and your chosen IDE ready, you’re all set to start building your Indeed job scraper in Python.

5. Building Your Indeed Job Scraper

In this section, we will guide you through the process of creating a powerful Indeed job scraper using Python. This scraper will enable you to gather job listings, handle pagination on job search pages, extract detailed information from job posting pages, and efficiently save this data into an SQLite database.

Scraping Job Listings

To begin scraping job listings from Indeed.com, we need to understand how to make requests to the website and parse the results. If you visit Indeed’s homepage and submit a job search query, you’ll notice that the website redirects you to a search URL with specific parameters, like this:

https://www.indeed.com/jobs?q=Web+Developer&l=Virginia

Here, we’re searching for Web Developer jobs in Virginia, and the URL includes parameters such as q=Web+Developer for the job query and l=Virginia for the location. To replicate this in your Python code using the Crawlbase library, you can use the following example:

from crawlbase import CrawlingAPI

api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_JS_TOKEN'})

response = api.get('https://www.indeed.com/jobs?q=Web+Developer&l=Virginia')
if response['status_code'] == 200:
    # Process the job listing page here
    print(response['body'])

This code snippet demonstrates how to send a GET request to Indeed’s job search page. Once you have the HTML content of the job listing page, you can parse it to extract the job listings.

We could parse the HTML document using CSS or XPath selectors, but there’s an easier way: all of the job listing data is embedded deep in the HTML as a JSON document.

We can use regular expressions to extract this JSON data efficiently. Let’s update the previous example to handle scraping of job listings.

import re
import json
from crawlbase import CrawlingAPI

api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_JS_TOKEN'})

# Function to parse the search page HTML
def parse_search_page_html(html: str):
    data = re.findall(r'window.mosaic.providerData\["mosaic-provider-jobcards"\]=(\{.+?\});', html)
    data = json.loads(data[0])
    return {
        "results": data["metaData"]["mosaicProviderJobCardsModel"]["results"],
        "meta": data["metaData"]["mosaicProviderJobCardsModel"]["tierSummaries"],
    }

response = api.get('https://www.indeed.com/jobs?q=Web+Developer&l=Virginia', {'country': 'US'})
if response['status_code'] == 200:
    # Extracted HTML content after decoding byte data
    html_content = response['body'].decode('latin1')

    # Call the parse_search_page_html function with the job listing page HTML
    job_listings = parse_search_page_html(html_content)
    print(json.dumps(job_listings, indent=2))

The parse_search_page_html function extracts job listing data from the HTML source of an Indeed job search page. It uses a regular expression to locate the mosaic-provider-jobcards JavaScript variable, which holds structured job listing information in JSON format. It then parses this JSON and extracts two components: “results,” which contains the job listings, and “meta,” which contains metadata about the listings, such as the number of results in each tier. The function returns this structured data as a Python dictionary for further processing.

Example Output:

{
"results": [
{
"adBlob": "...",
"adId": "419399410",
"advn": "7790245867719370",
"appliedOrGreater": false,
"clickLoggingUrl": "...",
"company": "The Inn at Little Washington",
"companyBrandingAttributes": {
"headerImageUrl": "https://d2q79iu7y748jz.cloudfront.net/s/_headerimage/1960x400/d9ca093411583ed06046ab18078e777d",
"logoUrl": "https://d2q79iu7y748jz.cloudfront.net/s/_squarelogo/256x256/1b781c60efb593aa39cd69478346b673"
},
"companyIdEncrypted": "30de376f4842f806",
"companyOverviewLink": "/cmp/Inn-At-Little-Washington",
"companyOverviewLinkCampaignId": "serp-linkcompanyname-content",
"companyRating": 3.1,
"companyReviewCount": 30,
"companyReviewLink": "/cmp/Inn-At-Little-Washington/reviews",
"companyReviewLinkCampaignId": "cmplinktst2",
"d2iEnabled": false,
"displayTitle": "Front Desk Agent",
"dradisJob": false,
"employerAssistEnabled": false,
"employerResponsive": false,
"encryptedFccompanyId": "11dc5e49b00978a6",
"encryptedResultData": "VwIPTVJ1cTn5AN7Q-tSqGRXGNe2wB2UYx73qSczFnGU",
"enhancedAttributesModel": {},
"enticers": [],
"expired": false,
"extractTrackingUrls": "",
"extractedSalary": {
"max": 22,
"min": 22,
"type": "hourly"
},
"fccompanyId": -1,
"featuredCompanyAttributes": {},
"featuredEmployer": false,
"featuredEmployerCandidate": false,
"feedId": 297995,
"formattedLocation": "Washington, VA 22747",
"formattedRelativeTime": "20 days ago",
"hideMetaData": false,
"highVolumeHiringModel": {
"highVolumeHiring": false
},
"hiringEventJob": false,
"homepageJobFeedSectionId": "0",
"indeedApplyEnabled": true,
"indeedApplyable": true,
"isJobVisited": false,
"isMobileThirdPartyApplyable": false,
"isNoResumeJob": false,
"isSubsidiaryJob": false,
"jobCardRequirementsModel": {
"additionalRequirementsCount": 0,
"requirementsHeaderShown": false
},
"jobLocationCity": "Washington",
"jobLocationPostal": "22747",
"jobLocationState": "VA",
"jobTypes": [],
"jobkey": "72ed373141879fd4",
"jsiEnabled": false,
"link": "...",
"locationCount": 1,
"mobtk": "1hasil75vimhq800",
"moreLocUrl": "/jobs?q=&l=Virginia&radius=35&jtid=5bdc6c9a550a63f1&jcid=30de376f4842f806&grp=tcl",
"mouseDownHandlerOption": {
"adId": 419399410,
"advn": "7790245867719370",
"extractTrackingUrls": [],
"from": "vjs",
"jobKey": "72ed373141879fd4",
"link": "...",
"tk": "1hasil75vimhq800"
},
"newJob": false,
"normTitle": "Front Desk Agent",
"openInterviewsInterviewsOnTheSpot": false,
"openInterviewsJob": false,
"openInterviewsOffersOnTheSpot": false,
"openInterviewsPhoneJob": false,
"organicApplyStartCount": 19,
"overrideIndeedApplyText": true,
"preciseLocationModel": {
"obfuscateLocation": false,
"overrideJCMPreciseLocationModel": true
},
"pubDate": 1693458000000,
"rankingScoresModel": {
"bid": 219074,
"eApply": 0.00094590354,
"eQualified": 0
},
"redirectToThirdPartySite": false,
"remoteLocation": false,
"resumeMatch": false,
"salarySnippet": {
"currency": "USD",
"salaryTextFormatted": false,
"source": "EXTRACTION",
"text": "$22 an hour"
},
"saved": false,
"savedApplication": false,
"screenerQuestionsURL": "https://gateway.harri.com/dispatcher/api/v1/indeed/jobs/1064851/screening_questions",
"searchUID": "1hasil75vimhq800",
"showAttainabilityBadge": false,
"showCommutePromo": false,
"showEarlyApply": false,
"showJobType": false,
"showRelativeDate": true,
"showSponsoredLabel": false,
"showStrongerAppliedLabel": false,
"smartFillEnabled": false,
"smbD2iEnabled": false,
"snippet": "...",
"sourceId": 38357,
"sponsored": true,
"taxoAttributes": [],
"taxoAttributesDisplayLimit": 3,
"taxoLogAttributes": [],
"taxonomyAttributes": [
{
"attributes": [
{
"label": "Flexible schedule",
"suid": "WZ9TD"
},
{
"label": "Referral program",
"suid": "YDH5H"
}
],
"label": "dynamic-attributes"
},
...
],
"thirdPartyApplyUrl": "...",
"tier": {
"matchedPreferences": {
"longMatchedPreferences": [],
"stringMatchedPreferences": []
},
"type": "DEFAULT"
},
"title": "Front Desk Agent",
"translatedAttributes": [],
"translatedCmiJobTags": [],
"truncatedCompany": "The Inn at Little Washington",
"urgentlyHiring": false,
"viewJobLink": "...",
"vjFeaturedEmployerCandidate": false,
"workplaceInsights": []
},
...
],
"meta": [
{
"jobCount": 198731,
"tier": 0,
"type": "DEFAULT"
},
{
"jobCount": 1371,
"tier": 1,
"type": "NATIONWIDE"
}
]
}

Handling Pagination

Indeed’s job search results are typically paginated. To handle pagination and collect multiple pages of job listings, you can modify the URL parameters and send additional requests. To scrape multiple pages, you can adjust the URL’s start parameter or extract pagination information from the HTML.

import json
import re
from urllib.parse import urlencode
from crawlbase import CrawlingAPI

# Function to parse the search page HTML
def parse_search_page_html(html: str):
    data = re.findall(r'window.mosaic.providerData\["mosaic-provider-jobcards"\]=(\{.+?\});', html)
    data = json.loads(data[0])
    return {
        "results": data["metaData"]["mosaicProviderJobCardsModel"]["results"],
        "meta": data["metaData"]["mosaicProviderJobCardsModel"]["tierSummaries"],
    }

# Function to scrape job listings
def scrape_indeed_search(api, query: str, location: str, max_results: int = 50):
    def make_search_page_url(offset):
        parameters = {"q": query, "l": location, "filter": 0, "start": offset}
        return f"https://www.indeed.com/jobs?{urlencode(parameters)}"

    results = []

    print(f"Scraping the first page of search: query={query}, location={location}")
    response_first_page = api.get(make_search_page_url(0), {'country': 'US'})

    if response_first_page['status_code'] == 200:
        # Extracted HTML content after decoding byte data
        html_content = response_first_page['body'].decode('latin1')

        data_first_page = parse_search_page_html(html_content)
        results = data_first_page["results"]
        total_results = sum(category["jobCount"] for category in data_first_page["meta"])

        if total_results > max_results:
            total_results = max_results

        print(f"Scraping the remaining {total_results // 10} pages")

        # Collect the URLs for other pages
        other_pages = [make_search_page_url(offset) for offset in range(10, total_results, 10)]

        for url in other_pages:
            response = api.get(url, {'country': 'US'})
            if response['status_code'] == 200:
                page_data = parse_search_page_html(response['body'].decode('latin1'))
                results.extend(page_data["results"])

    return results

def main():
    api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_JS_TOKEN'})
    search_data = scrape_indeed_search(api, query="Web Developer", location="Virginia")
    print(json.dumps(search_data, indent=2))

if __name__ == "__main__":
    main()

The scrape_indeed_search function starts by making an initial request to the Indeed search page using the provided query and location. It then checks the response status code to ensure that the request was successful (status code 200). If successful, it proceeds to parse the job listing data from the HTML of the first page.

To handle pagination, the code calculates the total number of job listings available for the given query and location. It also determines how many pages need to be scraped to reach the maximum result limit set by the user. To collect the URLs of the remaining pages, it generates a list of page URLs, each with an incremental offset to fetch the next set of results.

Then it issues a Crawling API request for each of the generated page URLs. As each page is fetched, its job listings are extracted and appended to the results list. This approach ensures that the script can handle pagination seamlessly, scraping all relevant job listings while efficiently managing the retrieval of multiple pages.

Extracting Data from Job Posting Page

Once you have the job listings, you may want to extract more details by scraping the full job posting pages. The job search results contain nearly all of the job listing information, except for certain specifics like the full job description. To extract this missing information, we need the job ID, conveniently located in the jobkey field of our search results:

{
  "jobkey": "6a45faa5d8d817fa"
}

Leveraging this jobkey, we can request the complete job details page. Much like our initial search, we can parse the embedded data instead of the HTML structure.

This data is tucked away within the _initialData variable, and we can retrieve it using a straightforward regular expression pattern. Here’s how you can do it:

import json
import re
from crawlbase import CrawlingAPI

# Function to parse the job posting page HTML
def parse_job_page_html(html):
    data = re.findall(r"_initialData=(\{.+?\});", html)
    data = json.loads(data[0])
    return data["jobInfoWrapperModel"]["jobInfoModel"]

# Function to scrape job details from job pages
def scrape_indeed_jobs(api, job_keys):
    urls = [f"https://www.indeed.com/m/basecamp/viewjob?viewtype=embedded&jk={job_key}" for job_key in job_keys]
    scraped = []

    for url in urls:
        response = api.get(url, {'country': 'US'})
        if response['status_code'] == 200:
            # Extracted HTML content after decoding byte data
            html_content = response['body'].decode('latin1')
            scraped.append(parse_job_page_html(html_content))

    return scraped

# Usage example
def main():
    api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_JS_TOKEN'})
    job_keys = ["6a45faa5d8d817fa"]  # add more job keys, e.g. "another_job_key", as needed
    job_details = scrape_indeed_jobs(api, job_keys)
    print(json.dumps(job_details, indent=2))

if __name__ == "__main__":
    main()

Example Output:

[
{
"appliedStateBannerModel": null,
"commuteInfoModel": null,
"expiredJobMetadataModel": null,
"hideCmpHeader": false,
"isSmbD2iEnabled": false,
"jobDebugInfoModel": null,
"jobDescriptionSectionModel": null,
"jobInfoHeaderModel": {
"a11yNewtabIconActive": true,
"companyImagesModel": {
"ejiBannerAsBackground": false,
"enhancedJobDescription": true,
"featuredEmployer": true,
"headerImageUrl": "https://d2q79iu7y748jz.cloudfront.net/s/_headerimage/1960x400/54cdc45f26b1795da078ef999c776400",
"logoAltText": "TSA (Transportation Security Administration) logo",
"logoImageOverlayLower": true,
"logoUrl": "https://d2q79iu7y748jz.cloudfront.net/s/_squarelogo/256x256/bfc3ae8c1b80ebe9aeb76708e8b8bf7c",
"showBannerTop": true,
"showEnhancedJobImp": true,
"showIconInTitle": false
},
"companyName": "TSA (Transportation Security Administration)",
"companyOverviewLink": "https://www.indeed.com/cmp/TSA-(transportation-Security-Administration)?campaignid=mobvjcmp&from=mobviewjob&tk=1haso8g6sgsqe800&fromjk=6a45faa5d8d817fa",
"companyReviewLink": "https://www.indeed.com/cmp/TSA-(transportation-Security-Administration)/reviews?campaignid=mobvjcmp&cmpratingc=mobviewjob&from=mobviewjob&tk=1haso8g6sgsqe800&fromjk=6a45faa5d8d817fa&jt=Uniformed+Advisor",
"companyReviewModel": {
"companyName": "TSA (Transportation Security Administration)",
"desktopCompanyLink": "https://www.indeed.com/cmp/TSA-(transportation-Security-Administration)/reviews?campaignid=viewjob&cmpratingc=mobviewjob&from=viewjob&tk=1haso8g6sgsqe800&fromjk=6a45faa5d8d817fa&jt=Uniformed+Advisor",
"mobileCompanyLink": "https://www.indeed.com/cmp/TSA-(transportation-Security-Administration)/reviews?campaignid=mobvjcmp&cmpratingc=mobviewjob&from=mobviewjob&tk=1haso8g6sgsqe800&fromjk=6a45faa5d8d817fa&jt=Uniformed+Advisor",
"navigableContainerModel": {
"contentHtml": null,
"hasBorderBottom": true,
"hasBorderTop": true,
"href": null,
"isRTL": false
},
"ratingsModel": {
"ariaContent": "3.1 out of 5 stars. Link to 2,866 company reviews (opens in a new tab)",
"count": 2866,
"countContent": "2,866 reviews",
"descriptionContent": "Read what people are saying about working here.",
"rating": 3.1,
"showCount": true,
"showDescription": true,
"size": null
}
},
"employerActivity": null,
"employerResponsiveCardModel": null,
"encryptedFccCompanyId": null,
"formattedLocation": "Springfield, VA",
"hideRating": false,
"isDesktopApplyButtonSticky": false,
"isSimpleVjImproveActive": true,
"isSimplifiedHeader": false,
"jobNormTitle": null,
"jobTitle": "Uniformed Advisor",
"jobTypes": null,
"location": null,
"mobileStickyVjHeaderActive": false,
"openCompanyLinksInNewTab": false,
"parentCompanyName": "U.S. Government",
"preciseLocationModel": null,
"ratingsModel": null,
"recentSearch": null,
"remoteLocation": false,
"remoteWorkModel": null,
"salaryCurrency": null,
"salaryMax": null,
"salaryMin": null,
"salaryType": null,
"subtitle": "TSA (Transportation Security Administration) - Springfield, VA",
"tagModels": null,
"taxonomyAttributes": null,
"viewJobDisplay": "DESKTOP_EMBEDDED",
"workplaceInsightsModel": null
},
"jobMetadataHeaderModel": {
"jobType": ""
},
"jobTagModel": null,
"resumeEvaluationResult": null,
"sanitizedJobDescription": "<div>\n <div>\n <div>\n <div>\n <div>\n <div>\n <div>\n <p>Overview</p> \n <p>Uniformed Advisor </p>\n <p>Open &amp; closing dates</p> \n <div>\n 09/15/2023 to 09/29/2023\n </div> \n <p>Pay scale &amp; grade</p> \n <div>\n SV G\n </div> \n <p>Appointment type </p>\n <div></div>\n <p>Salary</p> \n <div>\n $78,592.00 to $102,166.00 PA\n </div> \n <p>Work schedule</p> \n </div>\n </div>\n <div></div>\n <div>\n <div>\n <p>Location</p> \n <div>\n Springfield, Virginia\n </div> \n </div>\n </div>\n <div>\n <div>\n <p>Duties</p> \n <p>Summary</p> \n <div>\n Securing Travel, Protecting People - At the Transportation Security Administration, you will serve in a high-stakes environment to safeguard the American way of life. In cities across the country, you would secure airports, seaports, railroads, highways, and/or public transit systems, thus protecting America's transportation infrastructure and ensuring freedom of movement for people and commerce.\n </div> \n </div>\n </div>\n </div>\n <p></p>\n <div>\n <div>\n <div>\n Announcement number\n </div> \n <div>\n HQ-ADM-187977-12128183\n </div> \n <div>\n Control number\n </div> \n <div>\n 749335900\n </div>\n </div>\n </div>\n </div>\n </div>\n </div>\n</div>",
"screenerRequirementsModel": null,
"showExpiredHeader": false,
"tagModels": null,
"viewJobDisplay": "DESKTOP_EMBEDDED"
}
]

Saving Data into an SQLite Database

To store the extracted job data, you can use an SQLite database. Here’s example code that creates a database, creates a table for job postings, and inserts data into it.

import json
import re
import sqlite3
from crawlbase import CrawlingAPI

# Function to initialize the SQLite database
def initialize_database():
    conn = sqlite3.connect('indeed_jobs.db')
    cursor = conn.cursor()
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS jobs (
            job_title TEXT,
            company TEXT,
            location TEXT,
            job_description TEXT
        )
    ''')
    conn.commit()
    return conn, cursor

# Function to save job details to SQLite database
def save_to_database(cursor, job):
    cursor.execute('''
        INSERT INTO jobs (job_title, company, location, job_description)
        VALUES (?, ?, ?, ?)
    ''', (job["job_title"], job["company"], job["location"], job["job_description"]))
    cursor.connection.commit()

# Function to parse the job posting page HTML
def parse_job_page_html(html):
    data = re.findall(r"_initialData=(\{.+?\});", html)
    data = json.loads(data[0])
    job_info = data["jobInfoWrapperModel"]["jobInfoModel"]
    return {
        "job_title": job_info["jobInfoHeaderModel"]["jobTitle"],
        "company": job_info["jobInfoHeaderModel"]["companyName"],
        "location": job_info["jobInfoHeaderModel"]["formattedLocation"],
        "job_description": job_info.get("sanitizedJobDescription", "")
    }

# Function to scrape job details from job pages and save them to the database
def scrape_and_save(api, job_key, cursor):
    url = f"https://www.indeed.com/m/basecamp/viewjob?viewtype=embedded&jk={job_key}"
    response = api.get(url, {'country': 'US'})
    if response['status_code'] == 200:
        # Extracted HTML content after decoding byte data
        html_content = response['body'].decode('latin1')

        # Parse the HTML
        job = parse_job_page_html(html_content)

        # Save job details to the database
        save_to_database(cursor, job)

        print(f"Job details saved for job key: {job_key}")

# Usage example
def main():
    # Initialize the database
    conn, cursor = initialize_database()

    api = CrawlingAPI({'token': 'YOUR_CRAWLBASE_JS_TOKEN'})
    job_keys = ["6a45faa5d8d817fa", "another_job_key"]

    # Scrape and save job details for each job key
    for job_key in job_keys:
        scrape_and_save(api, job_key, cursor)

    # Close the database connection
    conn.close()

if __name__ == "__main__":
    main()

This code starts by initializing the database structure, creating a table named ‘jobs’ to store information such as job titles, company names, locations, and job descriptions. The initialize_database function initializes the SQLite database and returns both the connection and cursor. The save_to_database function is responsible for inserting job details into this table.

The actual web scraping process happens in the scrape_and_save function, which takes a job key (a unique identifier for each job posting) and an SQLite cursor as input. This function constructs the URL for a specific job posting, sends an HTTP request to Indeed’s website, retrieves the HTML content of the job page, and then parses it using the parse_job_page_html function. This parsed data, including job title, company name, location, and job description, is then saved into the SQLite database using the save_to_database function.

The main function orchestrates the entire process. It initializes the database connection and Crawling API instance, defines a list of job keys to scrape, and runs the scraping and saving tasks for each job key. Once all the job details have been scraped and stored, the database connection is closed.
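Once the script has run, you can verify what was stored with a short query against the same database file; this is a minimal sketch using the table created above.

import sqlite3

# Open the same database file the scraper wrote to
conn = sqlite3.connect('indeed_jobs.db')
cursor = conn.cursor()

# Print the title, company, and location of every stored job
cursor.execute('SELECT job_title, company, location FROM jobs')
for job_title, company, location in cursor.fetchall():
    print(f"{job_title} | {company} | {location}")

conn.close()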

By following these detailed steps, you can build a comprehensive Indeed job scraper in Python, scrape job listings, handle pagination, extract data from job posting pages, and save the data into an SQLite database for further analysis or use.

6. Conclusion

Online platforms stand at the forefront for job hunters, offering many opportunities at their fingertips. However, this ease comes with the daunting task of sifting through an ocean of information. Enter web scraping: a game-changer for data collection that reshapes our job-seeking strategies.

By employing web scraping, we can revolutionize how we hunt for jobs. It automates the tedious process of gathering and sorting job listings from various portals. You no longer need to spend countless hours manually searching different job boards. With tailored web scraping scripts, you can easily gather, categorize, and display job openings that align with your preferences. This saves time and ensures that no potential job offer, no matter how obscure, slips through the cracks.

Our comprehensive guide highlights web scraping capabilities through the Crawlbase Crawling API, focusing on its application for the renowned job listing site Indeed. Whether you’re looking for an ideal career match or a tech enthusiast keen on mastering scraping techniques, this Python guide provides the tools to automate and refine your job search. Journey with us as we showcase how web scraping can simplify and optimize your quest for the perfect job on Indeed.

7. Frequently Asked Questions

Q. What is web scraping, and how does it benefit job seekers?

Web scraping, also known as web data extraction, is the process of automatically extracting information from websites. For job seekers, web scraping is a powerful tool that streamlines the job search process. It allows users to aggregate job listings from various sources, automate data retrieval, customize searches, and receive real-time updates on job opportunities. By saving time and providing access to a wide range of job postings, web scraping significantly enhances the efficiency and effectiveness of job hunting.

Q. Why should I use the Crawlbase Crawling API for web scraping on Indeed?

The Crawlbase Crawling API offers several advantages for web scraping on Indeed. It provides versatile parameter options, control over response formats (JSON or HTML), cookies and headers handling, dynamic content support, IP rotation for anonymity, geolocation options, Tor network support for dark web scraping, screenshot capture, and even predefined data scrapers. Additionally, Crawlbase operates on a “pay for what you use” model, making it cost-effective for web scraping needs.

Q. How do I handle pagination when scraping job listings on Indeed?

Indeed’s job search results are often paginated, meaning that you need to navigate through multiple pages to access all job listings. To handle pagination effectively, you can adjust the URL parameters, such as the “start” parameter, to fetch the next set of results. Alternatively, you can extract pagination information from the HTML source code of the search results page. By implementing these techniques, you can scrape and collect job listings from multiple pages seamlessly.

Q. What is the best way to extract job details from job posting pages on Indeed?

To extract comprehensive job details from job posting pages on Indeed, you can first identify the job key, typically found within the “jobkey” field in the search results. Using this key, you can make a request to the full job details page. Rather than parsing the HTML structure, you can extract the job details efficiently by targeting the embedded data stored within the “_initialData” variable. This data can be retrieved using regular expressions. Once obtained, you can parse the JSON data to access job-specific information like job descriptions, company details, and application instructions.