Uncover hidden job opportunities across 5 European countries with one search! This NoFluffJobs scraper delivers comprehensive, real-time data on tech jobs, including salaries, skills, and company insights. Save time, expand your job search, and make informed career decisions with ease.
This actor allows you to scrape job listings from NoFluffJobs.com and extract comprehensive details about each job posting, including job title, company information, salary, location, required skills, benefits, recruitment process, and various other metadata. When you provide a search URL, the scraper automatically searches for matching jobs across all five regional versions of the site (Poland, Hungary, Czech Republic, Slovakia, and Netherlands). This ensures that you get a complete view of all relevant job listings, as different countries may have varying job opportunities.
Here's an example of how to set up the input for the NoFluffJobs scraper:
```json
{
  "startUrls": [
    {
      "url": "https://nofluffjobs.com/sk/backend?criteria=category%3Dfrontend,fullstack,mobile,embedded"
    }
  ],
  "maxItems": 100,
  "maxConcurrency": 100,
  "minConcurrency": 1,
  "maxRequestRetries": 8,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}
```
Note: Even though you provide a URL for a specific region (e.g., 'sk' for Slovakia in the example above), the scraper will search for matching jobs across all regions: Poland (pl), Hungary (hu), Czech Republic (cz), Slovakia (sk), and Netherlands (nl).
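If you prefer to prepare the input programmatically rather than pasting JSON into the console, the configuration above can be built with a small helper. This is a sketch, not part of the actor itself; the search URL and limits are just example values you would replace with your own.

```python
import json

def build_input(search_url: str, max_items: int = 100) -> dict:
    """Build the actor input shown above for a given NoFluffJobs search URL."""
    return {
        "startUrls": [{"url": search_url}],
        "maxItems": max_items,
        "maxConcurrency": 100,
        "minConcurrency": 1,
        "maxRequestRetries": 8,
        "proxyConfiguration": {
            "useApifyProxy": True,
            "apifyProxyGroups": ["RESIDENTIAL"],
        },
    }

run_input = build_input(
    "https://nofluffjobs.com/sk/backend"
    "?criteria=category%3Dfrontend,fullstack,mobile,embedded"
)
print(json.dumps(run_input, indent=2))
```

The resulting dictionary can be passed as the `run_input` when starting the actor through the Apify API client.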
The output data is highly detailed. Here's a comprehensive example of the output structure:
```json
{
  "id": "data-engineer-ework-group-remote-2",
  "title": "Data Engineer",
  "apply": {
    "option": "email",
    "leadCollection": false,
    "leadCollectionInfoClause": ""
  },
  "specs": {
    "details": {
      "custom": []
    },
    "help4Ua": false,
    "dailyTasks": [
      "Collaborate with cross-functional teams to design, develop and maintain data pipelines and analytics solutions.",
      "Design and build a foundational platform for a modern data lake architecture, optimizing it for scalability, flexibility, and performance.",
      "Develop automated test to ensure data accuracy and quality.",
      "You will assist with planning and maintaining the Azure architectural runway and pipeline for multiple products, ensuring their stability and efficient operation.",
      "Continuously secure improvement that can make developers on the platform work even more efficiently and act as a sparring partner on use of Azure services for the organisation.",
      "Leverage your expertise in cloud development to design and implement innovative digital solutions focused on delivering business insights and patient care in real time.",
      "Overall, our goal is to improve the clinical experience for patients, doctors and nurses world-wide, and your role will support this journey."
    ],
    "referral": {
      "allowed": true
    }
  },
  "basics": {
    "category": "data",
    "seniority": ["Senior"],
    "technology": "Python"
  },
  "company": {
    "url": "www.eworkgroup.com",
    "logo": {
      "original": "companies/logos/original/ework_group_20210531_122823.png",
      "jobs_details": "companies/logos/jobs_details/ework_group_20210531_122823.png",
      "jobs_listing": "companies/logos/jobs_listing/ework_group_20210531_122823.png"
    },
    "name": "Ework Group",
    "size": "100+",
    "video": ""
  },
  "details": {
    "quote": "",
    "position": "",
    "description": "<p>For our client - a company from pharmaceutical area, we are looking for Data Engineer.</p>\n<p><strong>What you will be doing</strong></p>\n<p>You will be close to the heart of our client's clinical operations where you will play a key role in shaping the future of clinical trials and patient care, by building scalable solutions in the cloud.</p>\n<p><br></p>",
    "quoteAuthor": ""
  },
  "benefits": {
    "benefits": [
      "Sport subscription",
      "Private healthcare",
      "International projects"
    ],
    "equipment": {
      "computer": "",
      "monitors": "",
      "operatingSystems": {
        "lin": false,
        "mac": false,
        "win": false
      }
    },
    "officePerks": []
  },
  "consents": {
    "infoClause": "The Controller of your personal data is Ework Group, with registered office at Plac Stanisława Małachowskiego 2, Warsaw. Your data is processed for the purpose of the current recruitment process. Providing data is voluntary but necessary for this purpose. Processing your data is lawful because it is necessary in order to take steps at the request of the data subject prior to entering into a contract (article 6 point 1b of Regulation EU 2016/679 - GDPR). Your personal data will be deleted when the current recruitment process is finished, unless a separate consent is provided below. You have the right to access, correct, modify, update, rectify, request for the transfer or deletion of data, withdrawal of consent or objection.",
    "personalDataRequestLink": "monika.jozwik@eworkgroup.com"
  },
  "location": {
    "places": [
      {
        "city": "Remote",
        "url": "data-engineer-ework-group-remote-2"
      },
      {
        "country": {
          "code": "POL",
          "name": "Poland"
        },
        "province": "opole",
        "url": "data-engineer-ework-group-opole-1",
        "provinceOnly": true
      }
    ],
    "remote": 5,
    "multicityCount": 100,
    "covidTimeRemotely": false,
    "remoteFlexible": false,
    "fieldwork": false,
    "defaultIndex": 1
  },
  "essentials": {
    "contract": {
      "start": "ASAP",
      "duration": {}
    },
    "originalSalary": {
      "currency": "PLN",
      "types": {
        "b2b": {
          "period": "Month",
          "range": [25716, 32146],
          "paidHoliday": false
        }
      },
      "disclosedAt": "VISIBLE"
    }
  },
  "methodology": [],
  "recruitment": {
    "languages": [
      {"code": "pl"},
      {"code": "en"}
    ],
    "onlineInterviewAvailable": true
  },
  "requirements": {
    "musts": [
      {"value": "Python", "type": "main"},
      {"value": "Azure", "type": "main"},
      {"value": "Azure Data Factory", "type": "main"},
      {"value": "Azure Databricks", "type": "main"},
      {"value": "Spark", "type": "main"}
    ],
    "nices": [
      {"value": "SQL", "type": "main"},
      {"value": "CI", "type": "main"},
      {"value": "CD pipelines", "type": "main"},
      {"value": "Azure DevOps", "type": "main"}
    ],
    "description": "<p>We are seeking a candidate with an educational background in Computer Science and Software Development , as well as experience in some of the following areas:</p>\n<ul>\n<li>Strong proficiency in Python programming</li>\n<li>Extensive experience with Azure, including Azure Data Factory and Azure Databricks, and a deep understanding of Azure architecture and services</li>\n<li>Experience in using Spark, including Spark SQL and understanding of how to optimize Spark performance.</li>\n<li>Automated unit testing and code quality inspection</li>\n<li>CI/CD Pipelines using Azure DevOps (or similar)</li>\n<li>Working in pharma domain or other regulated area is considered an advantage</li>\n</ul>",
    "languages": [
      {"type": "MUST", "code": "en", "level": "C1"},
      {"type": "MUST", "code": "pl", "level": "C1"}
    ]
  },
  "posted": 1725032841570,
  "postedOrRenewedDaysAgo": 0,
  "status": "PUBLISHED",
  "postingUrl": "data-engineer-ework-group-remote-2",
  "metadata": {
    "sectionLanguages": {
      "daily-tasks": "en",
      "description": "en",
      "requirements.description": "en"
    }
  },
  "regions": ["pl"],
  "reference": "WZOXW66Z",
  "meta": {
    "videosInCompanyProfileVisible": true
  },
  "companyUrl": "/company/ework-group-rlrciwbo",
  "seo": {
    "title": "Data Engineer @ Ework Group",
    "description": "Data Engineer @ Ework Group Fully remote job 25.7k-32.1k (B2B) PLN / month"
  },
  "analytics": {
    "lastBump": 0,
    "lastBumpType": "SYSTEM",
    "previousBumpCount": 0,
    "nextBump": 1,
    "nextBumpType": "SYSTEM",
    "nextBumpCount": 6,
    "emissionDay": 0,
    "productType": "EXPERT",
    "emissionBumps": 6,
    "emissionLength": 30,
    "emission": "R1461A",
    "addons": {
      "bump": false,
      "publication": true,
      "offerOfTheDay": false,
      "topInSearch": false,
      "highlighted": false
    },
    "topInSearchConfig": {
      "pairs": []
    }
  }
}
```
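To show how the nested structure above is typically consumed, here is a short sketch that pulls the title, salary range, and must-have skills out of one scraped item. The `job` dictionary below is a trimmed copy of the example record, not live actor output.

```python
# Trimmed copy of the example record above, kept only to the fields we read.
job = {
    "title": "Data Engineer",
    "company": {"name": "Ework Group"},
    "essentials": {
        "originalSalary": {
            "currency": "PLN",
            "types": {"b2b": {"period": "Month", "range": [25716, 32146]}},
        }
    },
    "requirements": {"musts": [{"value": "Python"}, {"value": "Azure"}]},
}

salary = job["essentials"]["originalSalary"]
# `types` is keyed by contract type (e.g. "b2b"); take the first one listed.
contract, terms = next(iter(salary["types"].items()))
low, high = terms["range"]
musts = [skill["value"] for skill in job["requirements"]["musts"]]

print(f'{job["title"]} @ {job["company"]["name"]}')
print(f'{low}-{high} {salary["currency"]} / {terms["period"]} ({contract})')
print("Must-have:", ", ".join(musts))
```

Note that a listing may disclose several contract types under `types` (or none, when the salary is hidden), so production code should handle both cases rather than assuming a single entry.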
The main sections of the output are:

- `id`: Unique identifier for the job listing
- `title`: Job title
- `apply`: Application method and related information
- `specs`: Job specifications, including daily tasks
- `basics`: Basic job information (category, seniority, main technology)
- `company`: Detailed company information
- `details`: Job description and position details
- `benefits`: List of benefits and perks offered
- `consents`: GDPR and data processing information
- `location`: Detailed location information, including remote work options
- `essentials`: Contract and salary information
- `recruitment`: Recruitment process details, including required languages
- `requirements`: Required and nice-to-have skills, and language requirements
- `posted`: Timestamp of when the job was posted
- `status`: Current status of the job listing
- `postingUrl`: URL slug for the job posting
- `metadata`: Additional metadata, including language information for different sections
- `regions`: Regions where the job is available
- `reference`: Reference code for the job
- `seo`: SEO-related information for the job listing
- `analytics`: Analytics data related to the job posting on the platform

**Is it legal to use this scraper?**
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
**Do I need coding skills to use it?**
No. This is a no-code tool — just enter a job title and location, and run the scraper directly from your dashboard or the Apify actor page.
**What data does it extract?**
It extracts job titles, companies, salaries (if available), descriptions, locations, and post dates. You can export all of it to Excel or JSON.
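As a rough illustration of the export step, the snippet below flattens a few fields from scraped items into CSV rows using only the standard library. The `items` list is a hypothetical, trimmed sample shaped like the output documented above; in practice you would iterate over the actor's dataset instead.

```python
import csv
import io

# Hypothetical sample item, trimmed to the fields we export.
items = [
    {
        "title": "Data Engineer",
        "company": {"name": "Ework Group"},
        "location": {"places": [{"city": "Remote"}]},
        "posted": 1725032841570,
    },
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["title", "company", "city", "posted"])
for item in items:
    writer.writerow([
        item["title"],
        item["company"]["name"],
        # The first entry in `places` may carry only a province, so fall
        # back to an empty string when `city` is missing.
        item["location"]["places"][0].get("city", ""),
        item["posted"],
    ])

print(buf.getvalue())
```

Writing `buf.getvalue()` to a `.csv` file gives a spreadsheet that opens directly in Excel.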
**Can I scrape multiple pages or filter results?**
Yes, you can scrape multiple pages and refine by job title, location, keyword, and more, depending on the input settings you use.
**How do I get started?**
You can use the Try Now button on this page to go to the scraper. You'll be guided to input a search term and get structured results. No setup needed!