Use the Moderation Feed to triage all flagged content (posts, comments, messages) in one place. The feed is designed for speed, consistency, and auditability. This guide focuses on operational workflows, not developer implementation.
- Triage: Identify highest-risk flagged items
- Filter: Narrow by feed, type, reason, tags
- Decide: Delete or clear flags confidently
- Audit: Review action history & moderator attribution
- Optimize: Track moderation throughput & gaps
- Coordinate: Distribute workload across team
Access
1. Open Moderation Section: Sidebar → Moderation → Moderation Feed.
2. Choose Tab: Start in To Review for the active queue; use Reviewed for audits.
3. Apply Filters: Refine by feed type, content type, report reason, tags.
4. Take Action: Delete violations; clear validly resolved flags.
Interface Structure
The feed has two tabs:
- To Review: Active flagged content requiring a human decision (posts, comments, messages). Split into Posts & Comments / Messages categories with tailored filters.
- Reviewed: A record of resolved items for audit, showing what was done, by whom, and when.
To Review: Posts & Comments
Key elements:
- Total flagged count
- Card list: author, timestamp, content preview, context (community / user feed)
- Thread context for comments (view parent + siblings)
- AI mod reasons + user report reasons
- Reporter count & last flag date
- Delete (immediate removal)
- Clear flag (removes from queue)
To Review: Messages
Key elements:
- Total flagged messages
- Sender, timestamp, content preview
- Channel context (Community / Live / Broadcast)
- AI mod reasons + user report reasons
- Reporter count & last flag date
- Delete message
- Clear flag
Filtering & Sorting
Feed Dimension (Posts & Comments)
- All feeds
- Community feed
- User feed
Content Type (Advanced)
- Posts
- Comments & replies
- Attachments: Photos / Videos / Files
Channel Dimension (Messages)
- All Channels
- Community channel
- Live channel
- Broadcast channel
Message Types (Advanced)
- Text
- Photo
- Video
- File
- Audio
Shared Filters
- Report reason
- Tags
Sorting Options (To Review)
- Last flag date
- Flag count
- Created date
Order: Latest first / Oldest first
Additional Reviewed Filters
- Moderator Name (multi-select)
- Moderation Action: Deleted / Flag Cleared
- Sort: Moderation Date / Created Date
Review Workflow Examples
- High-Severity First: Filter by report reasons (e.g., abuse, threats) → Sort by Flag count → Delete violations → Clear false positives.
- AI Assistance: Review items showing AI mod reasons alongside user report reasons to confirm or dismiss quickly.
- Attachment Focus: Filter Content Type to Attachments (Photos / Videos / Files) for policy-sensitive media review.
- Backlog Reduction: Sort Oldest first by Created date to clear stale items first.
- Moderator Audit: In the Reviewed tab, filter by Moderator Name and Moderation Action to spot-check decisions.
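The filter-then-sort pattern behind the High-Severity First flow can be sketched over an exported queue. The field names and record shape below are hypothetical; in practice these values (report reasons, flag counts) appear on the feed's cards rather than through an API.

```python
# Hypothetical flagged-item records mirroring what a To Review card shows.
queue = [
    {"id": 1, "reasons": ["spam"], "flag_count": 2},
    {"id": 2, "reasons": ["abuse", "threats"], "flag_count": 9},
    {"id": 3, "reasons": ["threats"], "flag_count": 5},
]

HIGH_SEVERITY = {"abuse", "threats"}

# Step 1: filter to items carrying a high-severity report reason.
severe = [item for item in queue if HIGH_SEVERITY & set(item["reasons"])]

# Step 2: sort by flag count, highest first.
severe.sort(key=lambda item: item["flag_count"], reverse=True)

print([item["id"] for item in severe])  # → [2, 3]
```

The same two-step shape (narrow by reason, then order by urgency signal) applies to any of the workflow examples; only the filter predicate and sort key change.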
Decision Criteria
Delete vs Clear Flag
- Delete when content violates policy & removal improves safety.
- Clear flag when content is compliant or already addressed contextually.
Flag Count Weighting
Higher flag counts signal urgency, but verify that the reports are genuine and not mass-report brigading.
Thread Context
Always open parent thread for comments before deletion to avoid losing conversational coherence.
Media Evaluation
Prioritize video & file attachments for manual verification if policy sensitive.
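The brigading caveat above can be expressed as a simple burst-detection heuristic. This is an illustrative sketch only: the Moderation Feed surfaces flag counts and last flag dates in its UI rather than exposing raw flag timestamps, so the function, its parameters, and the data shape here are all hypothetical.

```python
from datetime import datetime, timedelta

def looks_like_brigading(flag_times, window=timedelta(minutes=10), threshold=5):
    """Heuristic: many flags landing inside one short window suggests a
    coordinated mass-report rather than organic reporting over time."""
    times = sorted(flag_times)
    for i, start in enumerate(times):
        # Count flags that arrived within `window` of this flag.
        in_window = sum(1 for t in times[i:] if t - start <= window)
        if in_window >= threshold:
            return True
    return False

base = datetime(2024, 1, 1)
burst = [base + timedelta(minutes=m) for m in range(6)]   # 6 flags in 5 minutes
spread = [base + timedelta(days=d) for d in range(6)]     # 6 flags over 6 days

print(looks_like_brigading(burst))   # → True (possible brigading)
print(looks_like_brigading(spread))  # → False (organic reporting)
```

A high-count item that fails this kind of check still deserves review; the heuristic only flags which high-count items warrant extra skepticism before deletion.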
Metrics & Monitoring
| Metric | Insight | Action |
|---|---|---|
| Average time to resolution | Workflow speed | Reduce queue switching if slow |
| Flags cleared vs deleted ratio | Content quality baseline | High delete ratio → education / prevention needed |
| Re-flag rate (same user/content) | Effectiveness of prior decisions | Escalate chronic sources |
| Moderator action distribution | Team load balance | Reassign to reduce bottlenecks |
| False positive rate | Report accuracy | Improve reporting guidelines / AI tuning |
Track trends weekly; adjust staffing and escalation policies based on sustained shifts.
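The ratio metrics above can be computed from a simple weekly tally of actions in the Reviewed tab. The action log below is a hypothetical export, and treating every cleared flag as a false positive is a simplifying assumption (a clear can also mean the issue was already addressed contextually).

```python
# Hypothetical weekly action log derived from the Reviewed tab.
actions = [
    {"action": "deleted"}, {"action": "deleted"},
    {"action": "flag_cleared"}, {"action": "flag_cleared"},
    {"action": "flag_cleared"},
]

deleted = sum(1 for a in actions if a["action"] == "deleted")
cleared = sum(1 for a in actions if a["action"] == "flag_cleared")

# High delete share suggests content-quality problems needing education
# or prevention; high clear share suggests noisy or over-broad reporting.
delete_ratio = deleted / len(actions)
false_positive_rate = cleared / len(actions)  # assumption: cleared ≈ false positive

print(f"delete ratio: {delete_ratio:.0%}, false-positive rate: {false_positive_rate:.0%}")
```

Comparing these numbers week over week, rather than in isolation, is what makes the sustained-shift signal in the guidance above actionable.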
Best Practices
Efficient Review
- Pre-filter by highest risk reasons first.
- Batch similar content types.
- Periodically switch the sort order to Oldest first to keep the backlog from going stale.
Team Coordination
- Use Reviewed tab for spot audits.
- Align on deletion criteria to prevent uneven enforcement.
- Rotate high-intensity queues to mitigate reviewer fatigue.
Quality & Fairness
- Validate contextual thread before removal.
- Confirm multi-report events are genuine (avoid brigading effect).
- Provide internal rationale for ambiguous decisions.
Troubleshooting
| Issue | Likely Cause | Resolution |
|---|---|---|
| Item reappears after clear | New flag submitted | Re-evaluate & document edge cases |
| Slow loading filters | Large dataset / network latency | Narrow filters; retry; check connectivity |
| Moderator missing in filter | No actions recorded in window | Expand date scope or verify permissions |
| Wrong sort order | Cached state | Reapply sort or refresh page |
| AI reason absent | Model confidence threshold not met | Proceed with manual evaluation |
| High false positives | Over-broad report reason usage | Educate users / refine categories |
High false positives | Over-broad report reason usage | Educate users / refine categories |