Image Moderation API

Uses advanced AI models to analyze and classify user-generated content in real time. It detects harmful or inappropriate content, providing category-level flags and confidence scores to help you enforce community guidelines and keep your platform safe.

🖼️ AI Image Moderation Actor

This Apify Actor leverages Sentinel Moderation's AI-powered API to analyze and flag images containing inappropriate, unsafe, or policy-violating content. It can detect NSFW material, violence, graphic content, and more — helping you maintain a safe and compliant platform.


📥 Input Schema

The actor expects the following JSON input:

{
  "apiKey": "sample-api-key",
  "image": "https://example.com/path-to-image.jpg"
}
  • apiKey (string, required): Your API key from SentinelModeration.com.
  • image (string, required): A publicly accessible image URL to analyze.
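
If you prefer to trigger runs programmatically instead of through the Apify console, a minimal sketch using the official apify-client package for Node.js could look like this (the actor ID username/ai-image-moderation below is a placeholder; substitute the ID shown on this page):

import { ApifyClient } from 'apify-client';

// Authenticate with your Apify account token.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

async function moderateImage(imageUrl: string) {
  // Start the actor run and wait for it to finish.
  // 'username/ai-image-moderation' is a placeholder actor ID.
  const run = await client.actor('username/ai-image-moderation').call({
    apiKey: process.env.SENTINEL_API_KEY,
    image: imageUrl,
  });

  // The moderation result is written to the run's default dataset.
  const { items } = await client.dataset(run.defaultDatasetId).listItems();
  return items;
}

moderateImage('https://example.com/path-to-image.jpg')
  .then((items) => console.dir(items, { depth: null }))
  .catch(console.error);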

📤 Output

The actor returns a moderation result in the following structure:

[
  {
    "flagged": false,
    "categories": {
      "harassment": false,
      "harassment/threatening": false,
      "sexual": false,
      "hate": false,
      "hate/threatening": false,
      "illicit": false,
      "illicit/violent": false,
      "self-harm/intent": false,
      "self-harm/instructions": false,
      "self-harm": false,
      "sexual/minors": false,
      "violence": false,
      "violence/graphic": false
    },
    "category_scores": {
      "harassment": 0.000048,
      "harassment/threatening": 0.0000066,
      "sexual": 0.000039,
      "hate": 0.0000142,
      "hate/threatening": 0.0000008,
      "illicit": 0.000022,
      "illicit/violent": 0.000019,
      "self-harm/intent": 0.0000011,
      "self-harm/instructions": 0.0000010,
      "self-harm": 0.0000020,
      "sexual/minors": 0.000010,
      "violence": 0.000016,
      "violence/graphic": 0.0000056
    },
    "error": null
  }
]
  • flagged: true if any category crosses a moderation threshold.
  • categories: A true/false map indicating which categories were flagged.
  • category_scores: Confidence scores (0.0 to 1.0) for each category.
  • error: null on a successful call; populated in test mode or when no valid API key is provided.
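
Because category_scores are raw confidences, you can layer your own, stricter thresholds on top of the built-in flagged verdict. Here is a minimal TypeScript sketch; the 0.01 cutoff is an arbitrary illustration, not a recommended value:

interface ModerationResult {
  flagged: boolean;
  categories: Record<string, boolean>;
  category_scores: Record<string, number>;
  error: string | null;
}

// Return every category whose confidence meets or exceeds a custom cutoff.
function categoriesAbove(result: ModerationResult, cutoff: number): string[] {
  return Object.entries(result.category_scores)
    .filter(([, score]) => score >= cutoff)
    .map(([category]) => category);
}

// Example: categoriesAbove(results[0], 0.01) returns [] for the sample output above.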

🧠 Categories Detected

The image is scanned for content under a wide range of moderation labels:

  • Harassment / Threats
  • Sexual content (including minors)
  • Hate speech (including threats)
  • Illicit activity
  • Self-harm
  • Violence / Graphic content
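
If you work with these results in TypeScript, the 13 category keys from the sample output above can be captured as a string-literal union (names copied verbatim from the output):

type ModerationCategory =
  | 'harassment'
  | 'harassment/threatening'
  | 'sexual'
  | 'sexual/minors'
  | 'hate'
  | 'hate/threatening'
  | 'illicit'
  | 'illicit/violent'
  | 'self-harm'
  | 'self-harm/intent'
  | 'self-harm/instructions'
  | 'violence'
  | 'violence/graphic';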

🔐 Getting an API Key

To receive real results, get your API key from Sentinel Moderation:

  1. Visit sentinelmoderation.com
  2. Sign up and generate your API key
  3. Use the key in the apiKey field of your input (see the sketch below for loading it from an environment variable)
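
Rather than hardcoding the key in your input JSON, one option is to read it from an environment variable when building the actor input; SENTINEL_API_KEY is an arbitrary name used here for illustration:

// Build the actor input with the key taken from the environment.
const input = {
  apiKey: process.env.SENTINEL_API_KEY ?? '',
  image: 'https://example.com/path-to-image.jpg',
};

if (!input.apiKey) {
  throw new Error('Set SENTINEL_API_KEY before running the actor.');
}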

✅ Example Use Cases

  • Flagging NSFW content in profile photos or uploads
  • Moderating image submissions on forums or marketplaces
  • Pre-screening media in chat apps or social platforms
  • Complying with platform-specific safety guidelines

Frequently Asked Questions

Do I need to code to use this actor?

No. This is a no-code tool: enter your API key and an image URL in the input form and run the actor directly from the Apify console or this actor page.

What does the output contain?

Each result includes an overall flagged verdict, a true/false map for every moderation category, and confidence scores between 0.0 and 1.0. Results land in the run's dataset, from which you can export them to formats such as JSON or Excel.

Can I moderate multiple images?

Each run accepts a single image URL, so batch moderation means one run per image, for example triggered from your own code through the Apify API.

How do I get started?

Use the Try Now button on this page to open the actor, sign up at sentinelmoderation.com for an API key, and supply the key together with a publicly accessible image URL in the input. No other setup is needed.