Use the Moderation Feed to triage all flagged content (posts, comments, and messages) in one place. It is designed for speed, consistency, and auditability. This guide focuses on operational workflows, not developer implementation.

  • Triage: identify the highest-risk flagged items
  • Filter: narrow by feed, type, reason, and tags
  • Decide: delete or clear flags confidently
  • Audit: review action history and moderator attribution
  • Optimize: track moderation throughput and gaps
  • Coordinate: distribute workload across the team

Access

1. Open Moderation Section: Sidebar → Moderation → Moderation Feed.
2. Choose Tab: start in To Review for the active queue; use Reviewed for audits.
3. Apply Filters: refine by feed type, content type, report reason, and tags.
4. Take Action: delete violations; clear flags on validly resolved items.

Interface Structure

The feed has two tabs:
  • To Review: active flagged content requiring a human decision (posts, comments, messages), split into Posts & Comments and Messages categories with tailored filters.
  • Reviewed: previously actioned items, used for audits.

To Review: Posts & Comments

Key elements:
  • Total flagged count
  • Card list: author, timestamp, content preview, context (community / user feed)
  • Thread context for comments (view parent + siblings)
  • AI mod reasons + user report reasons
  • Reporter count & last flag date
Actions:
  • Delete (immediate removal)
  • Clear flag (removes from queue)

To Review: Messages

Key elements:
  • Total flagged messages
  • Sender, timestamp, content preview
  • Channel context (Community / Live / Broadcast)
  • AI mod reasons + user report reasons
  • Reporter count & last flag date
Actions:
  • Delete message
  • Clear flag

Filtering & Sorting

Posts & Comments (To Review):
  • Feed: All feeds / Community feed / User feed
  • Content type: Posts / Comments & replies
  • Attachments: Photos / Videos / Files

Messages (To Review):
  • Channel: All Channels / Community channel / Live channel / Broadcast channel
  • Message type: Text / Photo / Video / File / Audio

Both queues:
  • Report reasons (multi-select)
  • Tags (multi-select)
  • Sort: Last flag date / Flag count / Created date; order: Latest first / Oldest first

Reviewed tab:
  • Moderator Name (multi-select)
  • Moderation Action: Deleted / Flag Cleared
  • Sort: Moderation Date / Created Date
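The filter options above combine into a single query against the queue. A minimal Python sketch of that model, assuming hypothetical field names (the console exposes these only as UI controls, not as an API):

```python
from dataclasses import dataclass, field

# Hypothetical model of the To Review: Posts & Comments filter set.
# All field names here are illustrative assumptions.
@dataclass
class QueueFilter:
    feed: str = "all"                                 # "all" | "community" | "user"
    content_types: set = field(default_factory=set)   # e.g. {"post", "comment"}
    report_reasons: set = field(default_factory=set)  # multi-select
    tags: set = field(default_factory=set)            # multi-select
    sort_by: str = "last_flag_date"                   # or "flag_count" / "created_date"
    order: str = "latest_first"                       # or "oldest_first"

def matches(item: dict, f: QueueFilter) -> bool:
    """Return True if a flagged item passes every active filter."""
    if f.feed != "all" and item["feed"] != f.feed:
        return False
    if f.content_types and item["type"] not in f.content_types:
        return False
    if f.report_reasons and not set(item["reasons"]) & f.report_reasons:
        return False
    if f.tags and not set(item["tags"]) & f.tags:
        return False
    return True
```

Empty multi-selects act as "no filter", matching the UI behavior of leaving a control unset.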

Review Workflow Examples

  • High-Severity First: filter by report reasons (e.g., abuse, threats) → sort by Flag count → delete violations → clear false positives.
  • AI Assistance
  • Attachment Focus
  • Backlog Reduction
  • Moderator Audit
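The High-Severity First pass can be sketched as a filter-then-sort over the queue. This is an illustration only; the item fields (`reasons`, `flag_count`) are assumptions, not a documented API.

```python
# Keep only items whose report reasons intersect a severe set, then sort by
# flag count, highest first, so the riskiest items surface at the top.
SEVERE_REASONS = {"abuse", "threats"}  # example severe reasons from the workflow above

def high_severity_queue(items):
    flagged = [i for i in items if SEVERE_REASONS & set(i["reasons"])]
    return sorted(flagged, key=lambda i: i["flag_count"], reverse=True)

queue = [
    {"id": 1, "reasons": ["spam"], "flag_count": 9},
    {"id": 2, "reasons": ["threats"], "flag_count": 4},
    {"id": 3, "reasons": ["abuse", "spam"], "flag_count": 7},
]
# Item 1 is filtered out; item 3 outranks item 2 on flag count.
```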

Decision Criteria

  • Delete when content violates policy and removal improves safety.
  • Clear the flag when content is compliant or has already been addressed in context.
Higher flag counts signal urgency, but verify they are not the result of mass-report brigading before acting.
Always open the parent thread before deleting a comment to avoid breaking conversational coherence.
Prioritize video and file attachments for manual verification when they are policy-sensitive.
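These criteria can be summarized as a small decision helper. A hedged sketch: the threshold and the item fields (`violates_policy`, `distinct_reporters`) are assumptions chosen for illustration, not product values.

```python
# Many flags concentrated in very few distinct reporters can signal brigading,
# so such items go to manual verification instead of an immediate action.
BRIGADE_FLAG_COUNT = 20  # illustrative threshold, not a product value

def decide(item: dict) -> str:
    """Return 'delete', 'clear', or 'verify' for a flagged item."""
    if item.get("flag_count", 0) >= BRIGADE_FLAG_COUNT and item.get("distinct_reporters", 0) < 3:
        return "verify"   # suspected mass-report brigading: confirm before acting
    if item.get("violates_policy"):
        return "delete"   # violation, and removal improves safety
    return "clear"        # compliant, or already addressed in context
```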

Metrics & Monitoring

| Metric | Insight | Action |
|---|---|---|
| Average time to resolution | Workflow speed | Reduce queue switching if slow |
| Flags cleared vs. deleted ratio | Content quality baseline | High delete ratio → education / prevention needed |
| Re-flag rate (same user/content) | Effectiveness of prior decisions | Escalate chronic sources |
| Moderator action distribution | Team load balance | Reassign to reduce bottlenecks |
| False positive rate | Report accuracy | Improve reporting guidelines / AI tuning |
Track trends weekly; adjust staffing and escalation policies based on sustained shifts.
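Two of these metrics can be sketched from an exported action log. The log schema below (`action`, `flagged_min`, `resolved_min`) is an assumption for illustration, not a documented export format.

```python
# Flags cleared vs. deleted ratio: a high delete share suggests prevention work.
def clear_delete_ratio(log):
    cleared = sum(1 for a in log if a["action"] == "clear")
    deleted = sum(1 for a in log if a["action"] == "delete")
    return cleared / deleted if deleted else float("inf")

# Average time to resolution, in minutes, from flag to moderator action.
def avg_resolution_minutes(log):
    return sum(a["resolved_min"] - a["flagged_min"] for a in log) / len(log)

log = [
    {"action": "clear",  "flagged_min": 0,  "resolved_min": 30},
    {"action": "delete", "flagged_min": 10, "resolved_min": 20},
]
# One clear per delete gives ratio 1.0; average resolution is (30 + 10) / 2 = 20 minutes.
```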

Best Practices

  • Pre-filter by highest risk reasons first.
  • Batch similar content types.
  • Use sorting shifts (Oldest → Latest) to prevent stale backlog.
  • Use Reviewed tab for spot audits.
  • Align on deletion criteria to prevent uneven enforcement.
  • Rotate high-intensity queues to mitigate reviewer fatigue.
  • Validate contextual thread before removal.
  • Confirm multi-report events are genuine (avoid brigading effect).
  • Provide internal rationale for ambiguous decisions.

Troubleshooting

| Issue | Likely Cause | Resolution |
|---|---|---|
| Item reappears after clear | New flag submitted | Re-evaluate and document edge cases |
| Slow loading filters | Large dataset / network latency | Narrow filters; retry; check connectivity |
| Moderator missing in filter | No actions recorded in window | Expand date scope or verify permissions |
| Wrong sort order | Cached state | Reapply sort or refresh page |
| AI reason absent | Model confidence threshold not met | Proceed with manual evaluation |
| High false positives | Over-broad report reason usage | Educate users / refine categories |

Next Steps