This Actor extracts detailed lead records from Google Search Engine Results Pages (SERPs) for a given search query. It is ideal for digital marketers, SEO professionals, and market researchers, and it generates a list of leads you can enrich and use in marketing campaigns.
This template is a production-ready boilerplate for developing with `PuppeteerCrawler`. The `PuppeteerCrawler` provides a simple framework for parallel crawling of web pages using headless Chrome with Puppeteer. Since `PuppeteerCrawler` uses headless Chrome to download web pages and extract data, it is useful for crawling websites that require JavaScript to render their content.
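To give a feel for the framework, here is a minimal, self-contained sketch of a `PuppeteerCrawler` that opens a page in headless Chrome and stores its title; the start URL and output fields are placeholders for illustration, not part of this Actor:

```js
import { PuppeteerCrawler, Dataset } from 'crawlee';

// Minimal sketch: crawl one start URL with headless Chrome and
// store the page title. The URL below is only an example.
const crawler = new PuppeteerCrawler({
    async requestHandler({ request, page }) {
        const title = await page.title();
        await Dataset.pushData({ url: request.loadedUrl, title });
    },
});

await crawler.run(['https://example.com']);
```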
If you're looking for examples or want to learn more visit:
The template works as follows:

- `Actor.getInput()` gets the input from `INPUT.json`, where the start URLs are defined.
- `Actor.createProxyConfiguration()` creates a proxy configuration to work around IP blocking. Use Apify Proxy or your own proxy URLs, provided and rotated according to the configuration. You can read more about proxy configuration here.
- `new PuppeteerCrawler()` creates the crawler (see the sketch after this list). You can pass options to the crawler constructor, such as:
    - `proxyConfiguration` - provides the proxy configuration to the crawler
    - `requestHandler` - handles each request with the custom router defined in the `routes.js` file
- The `routes.js` file defines the routing. Read more about custom routing for the Puppeteer crawler here.
    - `createPuppeteerRouter()` creates the router instance
    - `router.addDefaultHandler(() => { ... })` handles the start URLs (a sketch of a possible default handler follows the detail-handler example below)
    - `router.addHandler('detail', ...)` handles the detail pages:
```js
// routes.js - handler for the 'detail' label
import { createPuppeteerRouter, Dataset } from 'crawlee';

export const router = createPuppeteerRouter();

router.addHandler('detail', async ({ request, page, log }) => {
    const title = await page.title();
    // You can add your own page handling here

    await Dataset.pushData({
        url: request.loadedUrl,
        title,
    });
});
```
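The `router.addDefaultHandler(...)` stub mentioned above is where the start URLs are processed. A minimal sketch of such a handler is shown below; the CSS selector and the `detail` label are illustrative assumptions, not necessarily what this Actor uses:

```js
// Sketch only: enqueue links found on the start (search results) pages
// and route them to the 'detail' handler above.
router.addDefaultHandler(async ({ enqueueLinks, log }) => {
    log.info('Enqueueing detail pages from the start URL');
    await enqueueLinks({
        selector: 'a.result-link', // placeholder selector, not the Actor's real one
        label: 'detail',
    });
});
```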
- `crawler.run(startUrls);` starts the crawler and waits for it to finish.
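Putting these pieces together, a minimal sketch of the Actor's entry point could look like the following; the exact input shape and options are assumptions for illustration and may differ from the real template:

```js
// main.js - sketch of how the steps above fit together
import { Actor } from 'apify';
import { PuppeteerCrawler } from 'crawlee';
import { router } from './routes.js';

await Actor.init();

// Read the input defined in INPUT.json; startUrls is assumed to be
// an array of request objects such as [{ url: 'https://...' }].
const { startUrls = [] } = (await Actor.getInput()) ?? {};

// Rotate Apify Proxy (or your own proxies) to work around IP blocking.
const proxyConfiguration = await Actor.createProxyConfiguration();

const crawler = new PuppeteerCrawler({
    proxyConfiguration,
    requestHandler: router,
});

await crawler.run(startUrls);

await Actor.exit();
```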
For complete information see this article. In short, you will:
If you would like to develop locally, you can pull the existing Actor from the Apify Console using the Apify CLI:

1. Install `apify-cli`:

    Using Homebrew:

    ```bash
    brew install apify-cli
    ```

    Using NPM:

    ```bash
    npm -g install apify-cli
    ```

2. Pull the Actor by its unique `<ActorId>`, which is one of the following:

    - the Actor's unique name
    - the Actor's ID

    You can find both by clicking on the Actor title at the top of the page, which will open a modal containing both the Actor's unique name and the Actor ID.

    This command will copy the Actor into the current directory on your local machine:

    ```bash
    apify pull <ActorId>
    ```
To learn more about Apify and Actors, take a look at the following resources:
**Is it legal to scrape this data?**
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
**Do I need coding skills to use this scraper?**
No. This is a no-code tool: just enter a job title and location, and run the scraper directly from your dashboard or the Apify Actor page.
**What data does it extract?**
It extracts job titles, companies, salaries (if available), descriptions, locations, and posting dates. You can export all of it to Excel or JSON.
**Can I scrape multiple pages or filter the results?**
Yes, you can scrape multiple pages and refine by job title, location, keyword, or more, depending on the input settings you use.
**How do I get started?**
You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!