Create safer online communities with intelligent, automated content moderation. social.plus leverages advanced AI to scan and filter inappropriate content across text, images, and video, ensuring community standards are maintained without constant manual oversight.

Overview

social.plus offers two complementary AI moderation approaches: AI Pre-Moderation, which scans content before it is published, and AI Post-Moderation, which monitors content after it goes live. Both are described in detail below.

Getting Started

1. Enable AI Moderation: Contact our support team to enable AI content moderation for your application.
2. Configure Settings: Set up confidence levels and moderation categories through the social.plus Console.
3. Test & Monitor: Test with sample content and monitor moderation effectiveness through analytics.

AI Pre-Moderation

Prevent inappropriate content from reaching your community with proactive AI scanning. Pre-moderation ensures all content meets your standards before publication.
Current Availability: Pre-moderation is currently available for image content, with text and video support coming soon.

Image Content Detection

Our AI pre-moderation scans all uploaded images for inappropriate content across four key categories.
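Conceptually, pre-moderation acts as a publish-time gate: an image is scanned first and published only if no category score exceeds its configured threshold. A minimal sketch of that gate follows; the category names and scores are hypothetical illustrations, not the actual social.plus API or its category taxonomy.

```python
# Sketch of an image pre-moderation gate. Category names and scores here
# are hypothetical examples, not the actual social.plus categories or API.

def should_publish(category_scores: dict[str, float],
                   thresholds: dict[str, float]) -> bool:
    """Publish only if every detected category stays below its threshold."""
    return all(score < thresholds.get(category, 100)
               for category, score in category_scores.items())

# Example: the AI returns per-category confidence scores (0-100) for an upload.
scores = {"nudity": 12.0, "violence": 55.0}
thresholds = {"nudity": 50, "violence": 50}

print(should_publish(scores, thresholds))  # violence is at or above 50, so blocked
```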

Configuration

1. Enable Image Moderation: Navigate to Moderation > Image Moderation in your social.plus Console and toggle “Enable image moderation” to ON.
2. Set Confidence Levels: Configure confidence thresholds for each category based on your community standards.
3. Test Configuration: Upload test images to verify your confidence settings work as expected.

Understanding Confidence Levels

Important: Confidence levels significantly impact moderation accuracy. Default settings may produce false positives.
Confidence levels represent the AI’s certainty in detecting specific content types:
  • Low Confidence (0-30): High sensitivity, may block legitimate content
  • Medium Confidence (40-70): Balanced approach for most communities
  • High Confidence (80-100): Conservative filtering, may miss some violations
Recommendation: Start with medium confidence levels (40-60) and adjust based on your community’s needs and false positive rates.
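The bands above can be sketched as a small helper that maps a configured threshold to its sensitivity profile. This is a local illustration, not part of any social.plus SDK; note the documented bands leave 31–39 and 71–79 unspecified, and this sketch assigns those in-between values to the next band up for simplicity.

```python
# Sketch: classify a confidence threshold into the sensitivity bands
# described above. Local helper for illustration, not a social.plus API.
# Assumption: undocumented gaps (31-39, 71-79) fall into the higher band.

def sensitivity_band(confidence: int) -> str:
    if not 0 <= confidence <= 100:
        raise ValueError("confidence must be in 0-100")
    if confidence <= 30:
        return "low confidence / high sensitivity"
    if confidence <= 70:
        return "medium confidence / balanced"
    return "high confidence / conservative"

print(sensitivity_band(50))  # a value in the recommended 40-60 starting range
```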

AI Post-Moderation

Monitor and moderate published content with intelligent detection and automated response workflows. Post-moderation provides comprehensive scanning across all content types while maintaining user experience.

Content Coverage

Text Content Detection

Our AI text moderation identifies and handles various types of inappropriate text content.

Multimedia Content Detection

Comprehensive Scanning: Our AI analyzes both static images and video content frame-by-frame for maximum protection.
Advanced visual content analysis covers a broad range of categories.

Understanding Confidence Scores

Default Configuration: All categories start with flagConfidence: 40 and blockConfidence: 80. Monitor your community’s content patterns and adjust these values to optimize for your specific needs.
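One way to read the two thresholds: a score at or above `blockConfidence` blocks the content outright, a score at or above `flagConfidence` flags it for human review, and anything lower passes. The following sketch uses the documented defaults; the decision logic is an illustration of the two tiers, not the exact server-side behavior.

```python
# Sketch of flag/block threshold logic using the documented defaults.
# The exact server-side behavior may differ; this illustrates the two tiers.

FLAG_CONFIDENCE = 40   # default flagConfidence
BLOCK_CONFIDENCE = 80  # default blockConfidence

def moderation_action(score: float) -> str:
    """Map an AI confidence score (0-100) to a moderation action."""
    if score >= BLOCK_CONFIDENCE:
        return "block"
    if score >= FLAG_CONFIDENCE:
        return "flag"  # surface for human review
    return "allow"

for score in (25, 55, 92):
    print(score, moderation_action(score))
```

Raising `flagConfidence` reduces reviewer workload at the cost of missing borderline content; lowering `blockConfidence` blocks more aggressively at the cost of false positives.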

Configuration Parameters

API Configuration

Select the appropriate API endpoint for your region to ensure optimal performance:
| Region | API Endpoint |
| --- | --- |
| Europe | https://api-eu.social.plus/ |
| Singapore | https://api-sg.social.plus/ |
| United States | https://api-us.social.plus/ |
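In client code, the regional base URL is typically selected once at startup. A minimal sketch, assuming a simple region key per table row (the `base_url` helper is illustrative, not part of a social.plus SDK):

```python
# Sketch: pick the regional base URL from the table above.
# Only the base URLs are shown here; endpoint paths are documented
# in the Moderation API Specification.

API_ENDPOINTS = {
    "eu": "https://api-eu.social.plus/",
    "sg": "https://api-sg.social.plus/",
    "us": "https://api-us.social.plus/",
}

def base_url(region: str) -> str:
    """Return the regional API base URL for a region key ('eu', 'sg', 'us')."""
    try:
        return API_ENDPOINTS[region.lower()]
    except KeyError:
        raise ValueError(
            f"unknown region {region!r}; expected one of {sorted(API_ENDPOINTS)}"
        )

print(base_url("sg"))  # https://api-sg.social.plus/
```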

API Reference

For complete API documentation and interactive testing, see our Moderation API Specification.

Best Practices