Scrape any API or JSON URL directly into the dataset and return the results in CSV, XML, HTML, or Excel formats. Transform and filter the output along the way.
Pagination can be followed recursively from the payload itself, with no need to visit an HTML page.
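Once a run finishes, the dataset can be exported in any of these formats through Apify's standard dataset API. A minimal sketch, where [DATASET_ID] is a placeholder for the ID shown on the run's Storage tab:

```javascript
// Export the items of a finished run in the format you need.
const response = await fetch(
    'https://api.apify.com/v2/datasets/[DATASET_ID]/items?format=csv' // or json, xml, html, xlsx
);
console.log(await response.text());
```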
Features
Optimized, fast, and lightweight
Small memory footprint
Works only with JSON payloads
Easy recursion
Filter and map complex JSON structures
Comes with helper libraries enabled: lodash, moment (see the sketch after this list)
Full access to your account resources through the Apify variable
The run fails only if all requests failed
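A minimal sketch of the bundled helpers in use. It assumes lodash and moment are exposed to filterMap under those names; the exact injection mechanism is an assumption:

```javascript
{
    filterMap: async ({ request, addRequest, data }) => {
        // Assumption: the bundled helpers are in scope as `lodash` and `moment`.
        return {
            fetchedAt: moment().toISOString(), // timestamp each dataset item
            hits: lodash.orderBy(data.hits, ['score'], ['desc']), // sort hits by score
        };
    }
}
```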
Handling errors
Unlike cheerio-scraper, this scraper lets you handle errors before the handlePageFunction fails.
Using the handleError input, you can enqueue extra requests before failing, allowing you to recover or to try a different URL.
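A minimal sketch of error recovery. The handleError context is assumed to mirror filterMap's, and the mirror host is hypothetical:

```javascript
{
    handleError: async ({ request, addRequest, error }) => {
        // Assumption: handleError receives { request, addRequest, error }.
        // Retry once through a hypothetical mirror host before giving up.
        if (!request.userData?.retriedMirror) {
            addRequest({
                url: request.url.replace('api.example.com', 'mirror.example.com'),
                userData: { ...request.userData, retriedMirror: true },
            });
        }
    }
}
```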
The filterMap function can filter, map, and enqueue requests at the same time. The difference is that the userData from the current request is passed on to the requests it enqueues:
```javascript
const Apify = require('apify');

const startUrls = [{
    url: "https://example.com",
    userData: {
        firstValue: 0,
    }
}];

// assuming the INPUT url above
await Apify.call('pocesar/json-downloader', {
    startUrls,
    filterMap: async ({ request, addRequest, data }) => {
        if (request.userData.isPost) {
            // userData is inherited from the previous request,
            // so request.userData.firstValue is still 0 here

            // return the data only after the POST request
            return data;
        } else {
            // add the same request again, but as a POST
            addRequest({
                url: `${request.url}/?method=post`,
                method: 'POST',
                payload: {
                    username: 'username',
                    password: 'password',
                },
                headers: {
                    'Content-Type': 'application/json',
                },
                userData: {
                    isPost: true
                }
            });
            // omitting the return (or returning a falsy value) skips the output
        }
    },
});
```
Follow pagination from the payload

```javascript
{
    filterMap: async ({ addRequest, request, data }) => {
        if (data.nbPages > 1 && data.page < data.nbPages) {
            // get the current payload from the request
            const payload = JSON.parse(request.payload);

            // change the page number (re-serialize, since the payload is a string)
            request.payload = JSON.stringify({ ...payload, page: data.page + 1 });
            // add the request for parsing the next page
            addRequest(request);
        }

        return data;
    }
}
```
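If the API pages through a query parameter instead of a POST payload, the same pattern applies. A sketch assuming the target API accepts a page query parameter:

```javascript
{
    filterMap: async ({ addRequest, request, data }) => {
        if (data.page < data.nbPages) {
            // Assumption: the target API accepts a `page` query parameter.
            const next = new URL(request.url);
            next.searchParams.set('page', String(data.page + 1));
            addRequest({ url: next.toString() });
        }
        return data;
    }
}
```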
Omit output if condition is met
```javascript
{
    filterMap: async ({ addRequest, request, data }) => {
        if (data.hits.length < 10) {
            return; // returning nothing omits this result from the dataset
        }

        return data;
    }
}
```
Unwind an array of results, so that each item of the array becomes a separate dataset item
```javascript
{
    filterMap: async ({ addRequest, request, data }) => {
        return data.hits; // just return an array from here
    }
}
```
Frequently Asked Questions
Is it legal to scrape public data?
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
Do I need to code to use this scraper?
No. This is a no-code tool: just enter your start URLs and run the scraper directly from your dashboard or the Apify actor page.
What data does it extract?
It extracts whatever fields the JSON payloads contain, and you can filter, map, or transform them with filterMap. You can export all of it to Excel, CSV, XML, HTML, or JSON.
Can I scrape multiple pages or filter by location?
Yes, you can scrape multiple pages by following pagination from the payload and refine the results with filters, depending on the input settings you use.
How do I get started?
You can use the Try Now button on this page to go to the scraper. You'll be guided through the input and get structured results. No setup needed!