Send Legacy PhantomJS Crawler Results

This actor downloads results from a Legacy PhantomJS Crawler task and sends them by email as attachments. It is designed to be run from the crawler's finish webhook.

Usage

From a Legacy PhantomJS Crawler task

For a specific task, set the following parameters:

Finish webhook URL (finishWebhookUrl)

https://api.apify.com/v2/acts/drobnikj~send-crawler-results/runs?token=APIFY_API_TOKEN

You can find your API token on your Apify account page.
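
The webhook URL above is the actor's run endpoint in the Apify API, so you can also call it yourself to test the setup before wiring it into a crawler. Below is a minimal TypeScript sketch of such a manual call; it assumes APIFY_API_TOKEN is set in the environment and that the POSTed JSON is accepted directly as the actor input. In the real flow, the Legacy PhantomJS Crawler calls this URL when an execution finishes and passes along the Finish webhook data described below.

// Manual test of the finish webhook URL (sketch only; values are placeholders).
const token = process.env.APIFY_API_TOKEN;
const webhookUrl = `https://api.apify.com/v2/acts/drobnikj~send-crawler-results/runs?token=${token}`;

const response = await fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    to: "example@example.com",
    subject: "Test of send-crawler-results",
    text: "Manual test without a real crawler execution",
  }),
});

// The run endpoint responds with the created run object.
const run = await response.json();
console.log("Started run:", run.data?.id);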

Finish webhook data (finishWebhookData)

Example:

{
  "to": "example@example.com",
  "subject": "Execution ID: {{executionId}} results",
  "text": "Link to html results: https://api.apify.com/v1/execs/{{executionId}}/results?format=html&simplified=1",
  "html": "Link to html <a href=\"https://api.apify.com/v1/execs/{{executionId}}/results?format=html&simplified=1\"> results </a>",
  "attachResults": [
    {
      "format": "csv",
      "simplified": 1
    }
  ]
}
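
The text and html fields in the example link to the crawler execution results endpoint, and the attachResults entry requests the same results as a CSV attachment. As a quick illustration, the following TypeScript sketch fetches that CSV export directly; EXECUTION_ID is a placeholder for a real execution ID.

// Download the CSV export that the example's "attachResults" entry would attach.
const executionId = "EXECUTION_ID"; // placeholder for a real crawler execution ID
const resultsUrl = `https://api.apify.com/v1/execs/${executionId}/results?format=csv&simplified=1`;

const csv = await (await fetch(resultsUrl)).text();
console.log(csv.split("\n").slice(0, 3).join("\n")); // preview the first few rows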

Parameters:

  • to (String) - Email address of the recipient

  • subject (String) - Email subject

  • text (String) - Plain text body of the email

  • html (String) - HTML body of the email

  • attachResults (Array) - Array of result exports that will be attached to the email. The format attribute is required for each item (any supported format can be used). You can use the same parameters as the Get dataset items API endpoint, such as simplified, offset and limit.

  • textContext (Object) - Object used to render the subject and text. Every {{key}} in the subject and text is replaced with the corresponding value from this object. By default, the object contains all attributes received on input. The behavior is the same as in Handlebars.js (see the sketch after this list).
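
To illustrate the textContext behavior, the sketch below renders the example subject with Handlebars.js. The sample context value is made up, and the actor's own implementation may differ in detail.

import Handlebars from "handlebars";

// Sample context - by default it contains all attributes received on input.
const textContext = { executionId: "EXECUTION_ID" };

// Every {{key}} in the template is replaced with the matching value from the context.
const subjectTemplate = Handlebars.compile("Execution ID: {{executionId}} results");
console.log(subjectTemplate(textContext)); // -> "Execution ID: EXECUTION_ID results"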

Frequently Asked Questions

Is it legal to scrape job listings or public data?

Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool — just enter a job title, location, and run the scraper directly from your dashboard or Apify actor page.

What data does it extract?

It extracts job titles, companies, salaries (if available), descriptions, locations, and post dates. You can export all of it to Excel or JSON.

Can I scrape multiple pages or filter by location?

Yes, you can scrape multiple pages and refine by job title, location, keyword, or more depending on the input settings you use.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!