This actor scrapes unique domains from a list of provided URLs. It recursively crawls each page up to a configurable maximum depth, extracts the domains it encounters, and stores them in a dataset. Domains can be filtered by whether they are ICANN-approved and whether private domains are allowed, and each domain is saved only once, so the dataset contains no duplicates.
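The crawling behaviour described above can be sketched as a depth-bounded breadth-first traversal that records each domain only once. This is an illustrative sketch, not the actor's actual implementation: page fetching is stubbed with an in-memory link graph, and a real crawler would download and parse each page instead.

```python
# Sketch of the crawl logic: breadth-first traversal bounded by max_depth,
# collecting each domain only once. PAGES is an assumed toy link graph.
from collections import deque
from urllib.parse import urlparse

PAGES = {
    "https://a.com/": ["https://b.com/x", "https://a.com/about"],
    "https://b.com/x": ["https://c.com/"],
    "https://a.com/about": [],
    "https://c.com/": [],
}

def fetch_links(url):
    """Stand-in for downloading a page and extracting its links."""
    return PAGES.get(url, [])

def crawl_domains(start_urls, max_depth=2):
    seen_urls = set(start_urls)
    domains = set()
    queue = deque((u, 0) for u in start_urls)
    while queue:
        url, depth = queue.popleft()
        domains.add(urlparse(url).netloc)  # the set prevents duplicates
        if depth >= max_depth:
            continue  # respect the configured maximum depth
        for link in fetch_links(url):
            if link not in seen_urls:
                seen_urls.add(link)
                queue.append((link, depth + 1))
    return sorted(domains)

print(crawl_domains(["https://a.com/"], max_depth=2))  # -> ['a.com', 'b.com', 'c.com']
```

Using a `seen_urls` set alongside the `domains` set keeps the crawl from revisiting pages while still deduplicating the output.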
Set up the Actor
Start by providing a list of URLs to begin the crawling process. You can enter the URLs manually or supply a list in the actor configuration.
Configure the Input Parameters
Set the maximum crawl depth and choose whether to restrict results to ICANN-approved domains and whether to allow private domains.
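As an illustration, an input object for this actor might look like the following. The field names (`startUrls`, `maxDepth`, `icannOnly`, `allowPrivateDomains`) are assumptions for the sketch; check the actor's input schema for the exact names.

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "maxDepth": 2,
  "icannOnly": true,
  "allowPrivateDomains": false
}
```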
Run the Actor
Once the input parameters are configured, run the actor to start the crawling process. The actor will crawl the pages, extract unique domains, and store the results in the dataset.
View Results
After the actor finishes running, you can view the extracted domains in the dataset. The data is displayed in a table, with each row containing one extracted domain.
Export Data
You can export the dataset for further processing or analysis. The results are saved in a structured format for easy integration with other tools.
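For instance, a downloaded list of domains can be written out as CSV and JSON with the standard library. The `domain` field name is an assumption about the dataset's row shape.

```python
# Sketch: exporting a deduplicated domain list to CSV and JSON
# for use in other tools.
import csv
import json

domains = ["a.com", "b.com", "c.com"]  # e.g. the actor's dataset contents

# CSV with a single "domain" column.
with open("domains.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["domain"])
    writer.writeheader()
    writer.writerows({"domain": d} for d in domains)

# JSON array of objects, mirroring the dataset's row shape.
with open("domains.json", "w") as f:
    json.dump([{"domain": d} for d in domains], f, indent=2)
```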
Modify Parameters
Adjust the configuration and rerun the actor as needed to gather additional data or refine the crawling process.
This actor provides an efficient way to scrape unique domains from a list of URLs. It recursively crawls the provided pages up to the configured maximum depth, filters domains by ICANN approval and private-domain allowance, and saves each domain only once. This makes it a useful tool for gathering domain data in a structured, deduplicated form while keeping control over the types of domains collected.
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
No. This is a no-code tool: just enter your start URLs and run the scraper directly from your dashboard or the Apify actor page.
It extracts the unique domains found on the crawled pages. You can export all of them for further processing, for example to Excel or JSON.
Yes, you can crawl multiple pages and refine the results by crawl depth, ICANN approval, or private-domain allowance, depending on the input settings you use.
You can use the Try Now button on this page to go to the scraper. You'll be guided to provide your start URLs and get structured results. No setup needed!