Key Benefit: Empower your community to self-moderate by providing robust flagging tools that integrate seamlessly with admin console workflows for efficient content review and action.

Feature Overview

Content flagging enables users to report inappropriate messages, creating a collaborative approach to community moderation. Flagged content appears in the admin console for review, where administrators can validate flags and take appropriate action to maintain community standards.

User-Driven Moderation

Community-powered safety
  • Predefined flag reasons for consistency
  • Optional custom explanations
  • Flag/unflag capabilities
  • Flag status tracking

Admin Integration

Streamlined review process
  • Admin console flag indicators
  • Content validation workflows
  • Policy enforcement tools
  • Flag analytics and reporting

Implementation Guide

  • Flag Messages
  • Unflag Messages
  • Check Flag Status
Report inappropriate content with standardized reasons
Enable users to flag messages that violate community guidelines using predefined categories for consistent moderation workflows.

Required Parameters

Parameter | Type | Description
messageId | String | The ID of the message to flag

Optional Parameters

Parameter | Type | Description
reason | ContentFlagReason | Predefined reason for flagging (iOS, Android, TypeScript only)
explanation | String | Custom explanation for the “Others” reason (max 300 characters)

Flag Reasons

Reason | Description
CommunityGuidelines | Against community guidelines
HarassmentOrBullying | Harassment or bullying content
SelfHarmOrSuicide | Self-harm or suicide-related content
ViolenceOrThreateningContent | Violence or threatening behavior
SellingRestrictedItems | Selling and promoting restricted items
SexualContentOrNudity | Sexual content or nudity
SpamOrScams | Spam or scam content
FalseInformation | False information or misinformation
Others | Custom reason with optional explanation

Code Examples

// Flag a message with specific reason
do {
    let flagged = try await messageRepository.flagMessage(
        withId: "message-id",
        reason: .communityGuidelines
    )
    if flagged {
        // Message successfully flagged
        showFlagSuccessMessage()
    }
} catch {
    // Handle flagging error
    handleFlagError(error)
}

// Flag with a custom reason and explanation
do {
    let flagged = try await messageRepository.flagMessage(
        withId: "message-id",
        reason: .others,
        explanation: "Contains inappropriate language for our community"
    )
    if flagged {
        // Message successfully flagged with a custom explanation
        showFlagSuccessMessage()
    }
} catch {
    // Handle flagging error
    handleFlagError(error)
}
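
The Implementation Guide also lists Unflag Messages and Check Flag Status. The sketch below shows how those calls could look, assuming the repository exposes unflagMessage(withId:) and isMessageFlaggedByMe(withId:) counterparts to flagMessage; these method names and the UI helpers (showUnflagSuccessMessage, updateFlagMenuItem) are assumptions, so verify them against your SDK version.

// Remove a previously submitted flag (method name assumed; confirm in your SDK)
do {
    let unflagged = try await messageRepository.unflagMessage(withId: "message-id")
    if unflagged {
        // Flag removed successfully
        showUnflagSuccessMessage()
    }
} catch {
    // Handle unflagging error
    handleFlagError(error)
}

// Check whether the current user has already flagged the message,
// for example to switch between "Flag" and "Unflag" menu actions
// (method name assumed; confirm in your SDK)
do {
    let isFlaggedByMe = try await messageRepository.isMessageFlaggedByMe(withId: "message-id")
    updateFlagMenuItem(showUnflag: isFlaggedByMe)
} catch {
    // Handle status check error
    handleFlagError(error)
}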
Platform Availability: Flag reasons are available in iOS, Android, and TypeScript SDKs. Flutter SDK supports basic flagging functionality.

Moderation Workflows

Implement user-friendly flag reason interfaces
Design intuitive flag reason selection to improve moderation quality (see the sketch after this list):
  • Predefined Categories: Use standard reasons for consistent classification
  • Custom Explanations: Allow detailed explanations for “Others” category
  • Guided Selection: Provide descriptions to help users choose appropriate reasons
  • Quick Actions: Enable one-tap flagging for common violations
Well-designed flag reason selection improves the accuracy of user reports and streamlines admin review processes.
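
One way to offer guided selection is an action sheet that pairs each predefined reason with a short, user-facing label. The sketch below is illustrative only: ContentFlagReason and the flagMessage call follow the examples above, the extra enum case names are inferred from the Flag Reasons table, and MessageViewController, messageRepository, showFlagSuccessMessage, and handleFlagError are assumed to exist in your app.

import UIKit

// Guided flag-reason selection: pair each predefined reason with a short,
// user-facing label so reporters can pick the right category in one tap.
extension MessageViewController {
    func presentFlagOptions(for messageId: String) {
        // Labels mirror the descriptions in the Flag Reasons table above
        let options: [(title: String, reason: ContentFlagReason)] = [
            ("Against community guidelines", .communityGuidelines),
            ("Harassment or bullying", .harassmentOrBullying),
            ("Spam or scam", .spamOrScams),
            ("Something else", .others)
        ]

        let sheet = UIAlertController(title: "Report message",
                                      message: "Why are you reporting this?",
                                      preferredStyle: .actionSheet)
        for option in options {
            sheet.addAction(UIAlertAction(title: option.title, style: .default) { _ in
                Task {
                    do {
                        // Reuse the flagMessage call shown in the examples above
                        _ = try await self.messageRepository.flagMessage(
                            withId: messageId,
                            reason: option.reason
                        )
                        self.showFlagSuccessMessage()
                    } catch {
                        self.handleFlagError(error)
                    }
                }
            })
        }
        sheet.addAction(UIAlertAction(title: "Cancel", style: .cancel))
        present(sheet, animated: true)
    }
}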
Leverage admin console capabilities for content review
Flagged content automatically appears in the admin console with:
  • Flag Indicators: Visual markers for flagged messages
  • Reason Display: Shows user-selected flag reasons
  • Bulk Actions: Process multiple flags efficiently
  • Appeal Workflows: Handle flag disputes and appeals
The admin console provides comprehensive tools for validating flags and taking appropriate moderation actions.
Establish clear moderation policies and consequences
Effective flagging requires well-defined community standards:
  • Clear Guidelines: Publish comprehensive community rules
  • Consistent Enforcement: Apply policies uniformly across all content
  • Educational Approach: Help users understand violations
  • Progressive Consequences: Implement escalating penalties for repeat offenders
Clear guidelines help users understand what content is appropriate and reduce false flags.
Implement safeguards against flag abuse
Protect against malicious flagging with:
  • Rate Limiting: Prevent excessive flagging by individual users
  • Flag Quality Tracking: Monitor user flag accuracy over time
  • Appeal Processes: Allow content creators to contest flags
  • Moderator Review: Human review for questionable flags
Balanced safeguards ensure the flagging system remains effective while preventing abuse.
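
Rate limiting and flag-quality tracking are typically enforced server-side or through the admin console. As a complementary client-side guard, a simple throttle can stop a user from submitting flags in rapid succession from the UI. The class below is a minimal sketch, not SDK functionality.

import Foundation

// Client-side throttle: allow at most `limit` flag submissions within a
// rolling time window. Server-side enforcement is still required; this only
// reduces accidental or rapid-fire reports from the UI.
final class FlagRateLimiter {
    private let limit: Int
    private let window: TimeInterval
    private var timestamps: [Date] = []

    init(limit: Int = 5, window: TimeInterval = 60) {
        self.limit = limit
        self.window = window
    }

    // Returns true if the user may flag now, and records the attempt.
    func allowFlag(now: Date = Date()) -> Bool {
        timestamps.removeAll { now.timeIntervalSince($0) > window }
        guard timestamps.count < limit else { return false }
        timestamps.append(now)
        return true
    }
}

Before calling flagMessage, the UI can check limiter.allowFlag() and show a gentle "try again later" message when the limit is hit.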

Best Practices

Create intuitive flagging interfaces
  • Design clear, accessible flag buttons that don’t interfere with normal interactions
  • Provide immediate feedback when flags are submitted or removed
  • Use progressive disclosure for flag reason selection
  • Implement confirmation dialogs for serious flag reasons
Well-designed flagging interfaces encourage appropriate use while maintaining user experience quality.
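
For serious flag reasons, a confirmation dialog can precede the actual flagMessage call from the examples above. The snippet is illustrative UI code; MessageViewController and the dialog copy are assumptions, not part of the SDK.

import UIKit

// Confirmation step for serious flag reasons (for example self-harm or
// violence), shown before the flag is submitted.
extension MessageViewController {
    func confirmSeriousFlag(onConfirm: @escaping () -> Void) {
        let alert = UIAlertController(
            title: "Report this message?",
            message: "Reports in this category are prioritized for moderator review.",
            preferredStyle: .alert
        )
        alert.addAction(UIAlertAction(title: "Report", style: .destructive) { _ in
            onConfirm()
        })
        alert.addAction(UIAlertAction(title: "Cancel", style: .cancel))
        present(alert, animated: true)
    }
}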
Optimize flagging operations for scale
  • Implement optimistic updates for immediate user feedback
  • Batch flag status checks when displaying message lists
  • Use efficient data structures for tracking flag states
Optimized flagging systems maintain responsiveness even with high user engagement.
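
An optimistic update marks the message as flagged locally for instant feedback and rolls the change back if the request fails. In the sketch below, flaggedMessageIds and reloadMessageCell are illustrative local state and UI helpers, not SDK API; only the flagMessage call comes from the examples above.

// Optimistic flag toggle: show the flag indicator immediately, then undo it
// if the network call fails.
var flaggedMessageIds = Set<String>()

func flagOptimistically(messageId: String, reason: ContentFlagReason) async {
    flaggedMessageIds.insert(messageId)          // immediate UI feedback
    reloadMessageCell(for: messageId)
    do {
        _ = try await messageRepository.flagMessage(withId: messageId, reason: reason)
    } catch {
        flaggedMessageIds.remove(messageId)      // roll back the optimistic update
        reloadMessageCell(for: messageId)
        handleFlagError(error)
    }
}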
Balance transparency with user privacy
  • Show users their own flag status without revealing others’ flags
  • Provide feedback on flag outcomes when appropriate
  • Maintain anonymity of flag reporters to prevent retaliation
  • Communicate moderation decisions clearly to content creators
Transparent processes build trust while protecting user privacy and safety.
Implementation Strategy: Start with basic flag/unflag functionality, then add reason categorization and status checking. Focus on clear user feedback and seamless admin console integration for effective community moderation.