A utility to check whether an API endpoint's response has changed. It works by creating a JSON schema from the endpoint's response, storing it, and using it to validate the next response. Depending on the configuration, the stored JSON schema can be updated every time the response changes.

This Actor lets you test a list of endpoints and check whether their responses change over time. For example, you may want to monitor an API you use to scrape the content of a website and be notified when some data is added, changed, or removed in a specific endpoint's response.
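Conceptually, the check boils down to deriving a schema from one response and validating later responses against it. Here is a minimal sketch of that idea in TypeScript, assuming the `to-json-schema` and `ajv` npm packages; the Actor's actual implementation may differ:

```typescript
import toJsonSchema from 'to-json-schema';
import Ajv from 'ajv';

// First run: derive a JSON schema from the endpoint's response.
const firstResponse = { id: 1, title: 'iPhone 9' };
const schema = toJsonSchema(firstResponse);
// => { type: 'object', properties: { id: { type: 'integer' }, title: { type: 'string' } } }

// Later run: validate the new response against the stored schema.
const nextResponse = { id: '1', title: 'iPhone 9' };
const ajv = new Ajv();
const validate = ajv.compile(schema);

if (!validate(nextResponse)) {
    // e.g. [{ instancePath: '/id', schemaPath: '#/properties/id/type', message: 'must be integer' }]
    console.log(validate.errors);
}
```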
The first time you test an endpoint, the Actor creates a JSON schema from its response and stores it in a Key-value store. You can specify the name or the ID of the store you want to use for this in "Next Key-value store", in the Actor's input. For more information about Key-value stores, see the Apify documentation.
The next time you test the same endpoint, you can move the Key-value store ID from "Next Key-value store" to "Previous Key-value store": this way, the Actor will use the previously generated JSON schema to validate the new response. If the validation fails, it will output the differences.
Finally, the Actor will regenerate the JSON schema from the new response, merge it with the old schema, and save it in the Key-value store pointed to by "Next Key-value store". If the previous and next stores are the same, the old schema will be overwritten.
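The merge step is what lets a schema accumulate every shape a field has taken so far. Below is a minimal, hand-rolled sketch of one way such a merge could work, unioning the `type` of each property; the Actor's real merge logic is likely more thorough:

```typescript
// Simplified schema shape, just enough for this illustration.
type Schema = { type: string | string[]; properties?: Record<string, Schema> };

function mergeSchemas(prev: Schema, next: Schema): Schema {
    // Union the `type` values seen so far.
    const types = [...new Set([prev.type, next.type].flat())].sort();
    const merged: Schema = { type: types.length === 1 ? types[0] : types };
    if (prev.properties || next.properties) {
        merged.properties = {};
        const keys = new Set([
            ...Object.keys(prev.properties ?? {}),
            ...Object.keys(next.properties ?? {}),
        ]);
        for (const key of keys) {
            const p = prev.properties?.[key];
            const n = next.properties?.[key];
            // Recurse where both schemas know the property; otherwise keep the one that exists.
            merged.properties[key] = p && n ? mergeSchemas(p, n) : (p ?? n)!;
        }
    }
    return merged;
}

// { type: 'object', properties: { id: { type: [ 'integer', 'string' ] } } }
console.log(mergeSchemas(
    { type: 'object', properties: { id: { type: 'string' } } },
    { type: 'object', properties: { id: { type: 'integer' } } },
));
```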
Here is a sample input:
1{ 2 "endpoints": [ 3 "https://dummyjson.com/products", 4 "curl 'https://dummyjson.com/carts' -H 'Accept: application/json'" 5 ], 6 "prevKvsName": "abcde12345", 7 "nextKvsName": "schemas", 8 "noRequired": false, 9 "doMergeSchemas": true, 10 "reportEmail": "my.email@apify.com", 11 "reportSlackChannel": "#api-watcher", 12 "reportSlackToken": "*******" 13}
In this input:

- Two endpoints will be tested: https://dummyjson.com/products, as a plain GET request, and https://dummyjson.com/carts, as a cURL command, which allows specifying the method (GET), custom headers, and a custom payload.
- The schemas stored in the Key-value store `abcde12345` will be used to validate the responses, if found.
- The new schemas will be saved to the Key-value store `schemas`. Each endpoint will have a record key in the Key-value store based on its URL.
- Required properties will be included in the generated schemas, because `noRequired` is set to `false`.
- Old and new schemas will be merged, because `doMergeSchemas` is `true`.
- If any differences are found, an email will be sent to my.email@apify.com, calling the Actor apify/send-mail, with a link to the Run default dataset, where those differences were stored; a Slack message will also be sent to the channel #api-watcher, calling the Actor katerinahronik/slack-message and using the given token.

Here is a sample output:
1{ 2 "url": "https://some-api/data/1", 3 "data": { 4 "id": 1 5 }, 6 "prevSchema": { 7 "type": "object", 8 "properties": { 9 "id": { 10 "type": "string" 11 } 12 } 13 }, 14 "nextSchema": { 15 "type": "object", 16 "properties": { 17 "id": { 18 "type": "integer" 19 } 20 } 21 }, 22 "mergedSchema": { 23 "type": "object", 24 "properties": { 25 "id": { 26 "type": [ 27 "integer", 28 "string" 29 ] 30 } 31 } 32 }, 33 "validationErrors": [ 34 { 35 "instancePath": "/id", 36 "schemaPath": "#/properties/id/type", 37 "message": "must be string" 38 } 39 ] 40}
The `id` in the response, which was previously a string, is now an integer. The old and new schemas were merged, because `doMergeSchemas` in the input was `true`. The merged schema admits both a string and an integer as `id`, so, if it is used to validate the next Run, both types will pass the validation.
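A quick way to convince yourself of that last point, assuming the `ajv` npm package:

```typescript
import Ajv from 'ajv';

const mergedSchema = {
    type: 'object',
    properties: { id: { type: ['integer', 'string'] } },
};

const validate = new Ajv().compile(mergedSchema);

console.log(validate({ id: 1 }));   // true: an integer id passes
console.log(validate({ id: '1' })); // true: a string id passes too
```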
You can leverage Apify's Schedules: just create a Task with the desired input and run it periodically, setting it up so that you receive a notification when changes are detected.

If you set the same Key-value store as both previous and next, the reference schema will be updated on every run, so you will be notified only once when a change is detected. Or, if you prefer, you can set two different values for the two stores, even leaving the next Key-value store blank: in that case, the change will be reported on every run, until you manually update the reference schema.
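If you would rather trigger the Task from your own cron job or CI pipeline instead of a Schedule, a sketch with the `apify-client` npm package could look like this; the Task ID is a placeholder:

```typescript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// '<task-id>' is a placeholder, e.g. 'username~my-api-watcher-task'.
const run = await client.task('<task-id>').call();

// Any detected differences are stored in the Run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
    if (item.validationErrors) console.log(item.url, item.validationErrors);
}
```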