An HTTP Status Codes Crawler is an Apify Actor that crawls a website and retrieves the HTTP status code for each page. It helps you monitor site availability, detect broken links, analyze redirects, and diagnose technical SEO issues.
📌 Features
✅ Extracts URLs from sitemaps if available.
✅ Crawls websites when no sitemap is found to collect URLs.
✅ Retrieves HTTP status codes for each discovered URL.
✅ Detects broken links (404 errors) and highlights them.
✅ Provides structured JSON output with status summaries.
✅ Ideal for SEO audits, website monitoring, and performance analysis.
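The broken-link detection and status summary above can be illustrated with a small Python sketch. Note that the field names (`url`, `statusCode`) and the sample results are assumptions for illustration, not the actor's exact output schema:

```python
# Sketch of how crawl results can be grouped into a status summary
# with broken (404) links highlighted. Sample data is illustrative.
from collections import Counter

def summarize_statuses(results):
    """Group crawled pages by HTTP status code and flag broken links."""
    summary = Counter(r["statusCode"] for r in results)
    broken = [r["url"] for r in results if r["statusCode"] == 404]
    return {"summary": dict(summary), "brokenLinks": broken}

pages = [
    {"url": "https://example.com/", "statusCode": 200},
    {"url": "https://example.com/old", "statusCode": 404},
    {"url": "https://example.com/moved", "statusCode": 301},
]

print(summarize_statuses(pages))
# {'summary': {200: 1, 404: 1, 301: 1}, 'brokenLinks': ['https://example.com/old']}
```

The actual actor performs the crawl itself; this sketch only shows the shape of the summarization step applied to already-collected results.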
⚙️ Input Parameters
The actor accepts the following input in JSON format:
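A minimal example input might look like the following. The parameter names here (`startUrls`, `maxPages`, `useSitemap`) are illustrative assumptions; consult the actor's input schema on its Apify page for the exact fields:

```json
{
    "startUrls": [{ "url": "https://example.com" }],
    "maxPages": 100,
    "useSitemap": true
}
```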
❓ FAQ
Is it legal to crawl websites or scrape public data?
Yes, if you're crawling publicly available pages for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
Do I need to code to use this crawler?
No. This is a no-code tool — just enter a start URL and run the actor directly from your Apify Console or the actor's page.
What data does it extract?
It extracts each discovered URL along with its HTTP status code and a status summary, including flagged broken links (404s) and redirects. You can export all of it to JSON or Excel.
Can I crawl multiple pages or limit the scope?
Yes, the actor follows links across multiple pages (or reads the sitemap when one is available), and you can constrain the crawl depending on the input settings you use.
How do I get started?
You can use the Try Now button on this page to go to the actor. You'll be guided to input a start URL and get structured results. No setup needed!