Apify act for inserting crawler execution results into a Knack database.
This act fetches all results from a specified Apify crawler execution and inserts them into a view in a Knack database.
INPUT
Input is a JSON object with the following properties:
```
{
    // crawler execution id
    "_id": EXECUTION_ID,

    // knack connection credentials
    "data": {
        "view": KNACK_VIEW,
        "scene": KNACK_SCENE,
        "appId": KNACK_APP_ID,
        "apiKey": KNACK_API_KEY,
        "schema": TRANSFORM_SCHEMA // optional
    }
}
```
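For illustration, a complete input might look as follows (all identifiers below are hypothetical; the optional schema attribute is described further down):

```
{
    "_id": "zKfnMQJrXJMxqtFGi",
    "data": {
        "view": "view_12",
        "scene": "scene_5",
        "appId": "5a1b2c3d4e5f6a7b8c9d0e1f",
        "apiKey": "00000000-0000-0000-0000-000000000000"
    }
}
```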
The act can also be run from a crawler finish webhook; in that case, put just the contents of the data attribute into the crawler finish webhook data.
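For example, the webhook data would then contain only the inner object (all values below are hypothetical):

```
{
    "view": "view_12",
    "scene": "scene_5",
    "appId": "5a1b2c3d4e5f6a7b8c9d0e1f",
    "apiKey": "00000000-0000-0000-0000-000000000000"
}
```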
It is possible to transform the crawler column names into Knack field names using a transform schema. This is what the optional schema attribute is for; it is a simple object with the following structure:
```
{
    // CRAWLER : KNACK
    "col_name_1": "field_001",
    "col_name_2": "field_002",
    ...
}
```
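To make the mapping concrete, here is a minimal sketch of how such a schema can be applied before inserting records, assuming crawler results arrive as an array of plain objects. The schema, credentials, and sample row below are hypothetical, and the insert call follows Knack's view-based REST endpoint; this is an illustration, not the act's exact implementation.

```javascript
// Hypothetical transform schema: crawler column -> Knack field.
const schema = {
    col_name_1: 'field_001',
    col_name_2: 'field_002',
};

// Rename crawler columns to Knack field names. In this sketch, columns
// missing from the schema are passed through unchanged.
function applySchema(row) {
    const record = {};
    for (const [col, value] of Object.entries(row)) {
        record[schema[col] || col] = value;
    }
    return record;
}

// Insert a single record into a Knack view (view-based REST endpoint;
// KNACK_SCENE, KNACK_VIEW, and the credentials are placeholders).
async function insertRecord(record) {
    const res = await fetch(
        'https://api.knack.com/v1/pages/KNACK_SCENE/views/KNACK_VIEW/records',
        {
            method: 'POST',
            headers: {
                'X-Knack-Application-Id': 'KNACK_APP_ID',
                'X-Knack-REST-API-Key': 'KNACK_API_KEY',
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(record),
        },
    );
    if (!res.ok) throw new Error(`Knack insert failed: ${res.status}`);
    return res.json();
}

// Example usage with one hypothetical crawler result row.
insertRecord(applySchema({ col_name_1: 'Example', col_name_2: 42 }))
    .then(console.log)
    .catch(console.error);
```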