Website Content Extractor

This extractor lets you pull content from any website, starting from one or more URLs. Use CSS selectors to target specific sections, such as the page body, and exclude elements like headers or navigation. It also extracts images and links, providing the data in JSON and DataTable formats for easy processing.

The Website Content Extractor is a web scraping tool designed to extract text, images, metadata, and links from specified websites using Playwright and Crawlee. It allows users to define target URLs, CSS selectors for content extraction, and exclusion rules.

Features

  • Extract Text: Extracts visible text from the website based on CSS selectors.
  • Extract Metadata: Extracts metadata including canonical URL, title, description, and Open Graph data.
  • Extract Images: Optionally extract all images from the page.
  • Extract Links: Optionally extract all links from the page.
  • Exclude Selectors: Excludes certain page elements (e.g., header, footer, nav) from the extraction.
  • Crawl Multiple Pages: Crawl and extract content from multiple pages if needed.

How to Use

Input

The input is a JSON configuration that specifies the settings for the extraction process.

Fields

  • urls (required): Array of URLs — List of website URLs to extract content from.
  • selectors: Array of CSS selectors — Specifies which elements to extract content from.
  • excludeSelectors: Array of CSS selectors — Specifies elements to exclude from extraction (e.g., header, nav, footer).
  • extractImages: Boolean — If set to true, images from the page will be extracted.
  • extractLinks: Boolean — If set to true, links from the page will be extracted.
  • maxPages: Integer — Limits the number of pages to crawl. Defaults to 1 if not set.
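
For orientation, the input can be modeled as a TypeScript interface like the sketch below. The interface name is illustrative and only reflects the field list above.

// Sketch of the input configuration, mirroring the field list above.
// The interface name is illustrative; only `urls` is required.
interface ExtractorInput {
    urls: string[];                // Website URLs to extract content from
    selectors?: string[];          // CSS selectors to extract content from
    excludeSelectors?: string[];   // CSS selectors to exclude (e.g. header, nav, footer)
    extractImages?: boolean;       // Extract image URLs from the page
    extractLinks?: boolean;        // Extract link URLs from the page
    maxPages?: number;             // Maximum number of pages to crawl (defaults to 1)
}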

Example Input

{
    "urls": [
        "https://example.com"
    ],
    "selectors": [
        "p",
        "h1"
    ],
    "excludeSelectors": [
        "header",
        "footer"
    ],
    "extractImages": true,
    "extractLinks": true,
    "maxPages": 3
}
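
You can also run the extractor programmatically instead of from the Apify Console. The sketch below uses the apify-client package with the example input above; the actor ID placeholder and the APIFY_TOKEN environment variable are assumptions, so substitute your own values.

import { ApifyClient } from 'apify-client';

// Authenticate with your Apify API token (read here from an environment variable).
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Run the actor with the example input and wait for it to finish.
// '<ACTOR_ID>' is a placeholder for this actor's ID or "username/actor-name".
const run = await client.actor('<ACTOR_ID>').call({
    urls: ['https://example.com'],
    selectors: ['p', 'h1'],
    excludeSelectors: ['header', 'footer'],
    extractImages: true,
    extractLinks: true,
    maxPages: 3,
});

// Fetch the extracted items from the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);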

Output

The output consists of extracted data for each URL, including:

  • Text: All text content extracted from the specified selectors.
  • Markdown: The extracted text converted to Markdown.
  • Metadata: Metadata such as canonical URL, title, description, and Open Graph data.
  • Images: List of image URLs extracted from the page (if enabled).
  • Links: List of all links found on the page (if enabled).
  • Crawl Information: Includes the loaded URL, load timestamp, crawl depth, and HTTP status code.

Example Output

{
    "url": "https://example.com",
    "crawl": {
        "loadedUrl": "https://example.com",
        "loadedTime": "2025-03-10T10:00:00Z",
        "depth": 0,
        "httpStatusCode": 200
    },
    "text": "Extracted text content...",
    "markdown": "**Extracted Text:**\n\nExtracted text content...",
    "metadata": {
        "canonicalUrl": "https://example.com/canonical",
        "title": "Page Title",
        "description": "Page description here",
        "openGraph": [
            { "property": "og:title", "content": "Page Title" },
            { "property": "og:description", "content": "Page description here" }
        ],
        "jsonLd": []
    },
    "images": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"],
    "links": ["https://example.com/page1", "https://example.com/page2"]
}
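
For reference, each dataset item roughly matches the TypeScript shape below, derived from the example output above; the type names are illustrative.

// Rough shape of one output item, derived from the example above.
interface CrawlInfo {
    loadedUrl: string;
    loadedTime: string;          // ISO 8601 timestamp
    depth: number;
    httpStatusCode: number;
}

interface ExtractedItem {
    url: string;
    crawl: CrawlInfo;
    text: string;
    markdown: string;
    metadata: {
        canonicalUrl: string;
        title: string;
        description: string;
        openGraph: { property: string; content: string }[];
        jsonLd: unknown[];
    };
    images?: string[];           // Present when extractImages is true
    links?: string[];            // Present when extractLinks is true
}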

Notes

  • CSS Selectors: Use valid CSS selectors to target the specific content you need from each page (a minimal sketch of this approach follows the list).
  • Limitations: Some content is loaded dynamically via JavaScript. The extractor uses Playwright, which renders pages in a real browser, so most dynamically loaded content is captured; content that only appears after user interaction may still be missed.
  • Crawl Depth: Set maxPages to crawl more pages from the same domain, but be mindful of rate limits and page load times.
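
The sketch below is not the actor's source code; it is a minimal illustration, assuming Crawlee's PlaywrightCrawler, of how selector-based extraction with excluded elements and a page limit can work.

import { PlaywrightCrawler } from 'crawlee';

// Selectors and exclusions corresponding to the example input; adjust as needed.
const selectors = ['p', 'h1'];
const excludeSelectors = ['header', 'footer', 'nav'];

const crawler = new PlaywrightCrawler({
    maxRequestsPerCrawl: 3, // roughly corresponds to maxPages
    async requestHandler({ page, request, pushData }) {
        // Remove excluded elements before reading any text.
        await page.evaluate((toRemove) => {
            toRemove.forEach((sel) =>
                document.querySelectorAll(sel).forEach((el) => el.remove()));
        }, excludeSelectors);

        // Collect visible text from the requested selectors.
        const text = (await Promise.all(
            selectors.map((sel) => page.locator(sel).allInnerTexts()),
        )).flat().join('\n');

        // Store the result in the default dataset.
        await pushData({ url: request.loadedUrl, text });
    },
});

await crawler.run(['https://example.com']);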

Frequently Asked Questions

Is it legal to scrape public website data?

Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just enter your URLs (and optional selectors), then run the scraper directly from your dashboard or the Apify actor page.

What data does it extract?

It extracts page text (plain and Markdown), metadata such as the title, description, canonical URL, and Open Graph data, and, if enabled, images and links. You can export all of it to Excel or JSON.

Can I scrape multiple pages or control what gets extracted?

Yes. Set maxPages to crawl multiple pages, and use selectors and excludeSelectors to refine which parts of each page are extracted.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You’ll be guided to enter your URLs and get structured results. No setup needed!