SDK v7.x · Last verified March 2026 · iOS · Android · Web · Flutter
import { ChannelRepository } from '@amityco/ts-sdk';

// Mute a user in a channel (temporarily silence them)
await ChannelRepository.muteMembers(channelId, [userId], 60); // 60 seconds

// Ban a user from a channel (removes them + deletes their messages)
await ChannelRepository.banMembers(channelId, [userId]);

// Unban
await ChannelRepository.unbanMembers(channelId, [userId]);

// Delete a specific message
import { MessageRepository } from '@amityco/ts-sdk';
await MessageRepository.deleteMessage(messageId);
Full walkthrough below ↓
Platform note — code samples below use TypeScript. Every method has an equivalent in the iOS (Swift), Android (Kotlin), and Flutter (Dart) SDKs — see the linked SDK reference in each step.
Effective moderation is what separates thriving communities from abandoned ones. This guide covers the full moderation toolkit: temporary mutes for friction, channel bans for violations, network-wide global bans for bad actors, AI moderation for scale, and webhooks for custom workflows.
Prerequisites: Channel with member roles configured → Channel Roles & Permissions

Quick Start: Ban a User

import { ChannelRepository } from '@amityco/ts-sdk';

try {
  // Ban removes the user from the channel and deletes their messages
  await ChannelRepository.banMembers(channelId, [userId]);
} catch (error) {
  console.error('Failed to ban member:', error);
}

Step-by-Step Implementation

Step 1: Mute a user temporarily

Muting silences a user for a fixed duration without removing them from the channel. Use this for first offenses or to de-escalate a heated argument.
import { ChannelRepository } from '@amityco/ts-sdk';

// Mute for 5 minutes (300 seconds)
await ChannelRepository.muteMembers(channelId, [userId], 300);

// Mute indefinitely
await ChannelRepository.muteMembers(channelId, [userId]);

// Unmute
await ChannelRepository.unmuteMembers(channelId, [userId]);
→ Mute / Unmute
Step 2: Ban and unban users from a channel

Banning removes the user from the channel and deletes their messages in one operation. Unbanning restores access but does not restore deleted messages.
// Ban
await ChannelRepository.banMembers(channelId, [userId]);

// Unban
await ChannelRepository.unbanMembers(channelId, [userId]);

// Query banned users
const banned = ChannelRepository.getMembers({
  channelId,
  filter: 'bannedMember',
});
banned.on('dataUpdated', (list) => renderBannedList(list));
Banning is destructive — the user’s messages are permanently deleted. If you need to preserve message history (e.g., for evidence in a dispute), record the messages before banning.
→ Ban / Unban
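The warning above — record messages before banning, because the ban deletes them — is really an ordering rule, and it can be sketched as one. The functions below (`fetchMessages`, `saveArchive`, `banMember`) are hypothetical stand-ins for your SDK and storage calls, injected as dependencies so the sketch stays independent of any particular SDK version:

```typescript
// Archive a user's messages BEFORE banning, since banning deletes them
// permanently. All three dependencies are hypothetical stand-ins for
// real SDK/storage calls.
interface ModerationDeps {
  fetchMessages: (channelId: string, userId: string) => Promise<string[]>;
  saveArchive: (userId: string, messages: string[]) => Promise<void>;
  banMember: (channelId: string, userId: string) => Promise<void>;
}

async function archiveThenBan(
  deps: ModerationDeps,
  channelId: string,
  userId: string
): Promise<number> {
  // 1. Record the messages first — after the ban they are gone.
  const messages = await deps.fetchMessages(channelId, userId);
  await deps.saveArchive(userId, messages);
  // 2. Ban only once the archive write has succeeded.
  await deps.banMember(channelId, userId);
  return messages.length;
}
```

Injecting the dependencies also makes the ordering easy to verify in tests with in-memory fakes.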
Step 3: Delete individual messages

When a specific message (not the user) needs to be removed:
import { MessageRepository } from '@amityco/ts-sdk';

// Any channel moderator or admin can delete any message
await MessageRepository.deleteMessage(messageId);
Deleted messages are soft-deleted — a placeholder “Message was deleted” remains visible in the thread so the conversation history stays coherent.
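Because deletes are soft, a client renderer only needs to branch on the message's deleted flag to keep the thread coherent. The `isDeleted` field name below is an assumption for illustration — check your SDK's message model for the actual property:

```typescript
// Minimal render rule for soft-deleted messages: show a placeholder
// instead of the original text. `isDeleted` is an assumed field name.
interface ChatMessage {
  text: string;
  isDeleted: boolean;
}

function renderMessage(m: ChatMessage): string {
  return m.isDeleted ? 'Message was deleted' : m.text;
}
```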
Step 4: Apply a global ban across your network

For users who violate your terms of service across multiple channels, use the Global Ban API to block them at the network level. This is typically done from the Admin Console or via the server-side API.
From Admin Console: User Management → Find User → Ban
Via API (server-side):
curl -X POST 'https://apix.<region>.amity.co/api/v3/users/<userId>/ban' \
  -H 'x-admin-token: <your-admin-token>'
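If your backend is TypeScript, the same request can be sketched with the built-in fetch (Node 18+). The region placeholder, v3 path, and x-admin-token header mirror the curl example above; building the request in a separate function keeps the URL logic testable:

```typescript
// Sketch: the server-side global-ban call from TypeScript. Region,
// path, and header mirror the curl example; adjust to your environment.
function globalBanRequest(region: string, userId: string, adminToken: string) {
  return {
    url: `https://apix.${region}.amity.co/api/v3/users/${encodeURIComponent(userId)}/ban`,
    options: {
      method: 'POST',
      headers: { 'x-admin-token': adminToken },
    },
  };
}

async function globalBan(region: string, userId: string, adminToken: string): Promise<void> {
  const { url, options } = globalBanRequest(region, userId, adminToken);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Global ban failed: HTTP ${res.status}`);
}
```

Keep the admin token server-side only — it must never ship in a client bundle.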
→ Global Ban API
Step 5: Enable AI content moderation

social.plus AI Moderation automatically analyzes messages before they’re published and can block, flag, or allow content based on configurable policies.
  1. Go to Admin Console → AI Content Moderation
  2. Enable text and/or image moderation
  3. Configure severity thresholds (block vs. review vs. allow)
  4. Review flagged items in Moderation → Flagged Items
→ AI Content Moderation
Step 6: Receive moderation events via webhooks

Use webhook events to trigger custom workflows when moderation actions occur:
Event              Trigger
user.banned        User banned (channel or global)
user.unbanned      Ban lifted
message.flagged    User reported a message
message.deleted    Message deleted by moderator
// Webhook handler example
app.post('/webhook', (req, res) => {
  const { event, data } = req.body;

  if (event === 'user.banned') {
    notifyUserOfBan(data.userId, data.reason);
    logModerationAction(data);
  }

  res.status(200).json({ status: 'received' });
});
→ Webhook Events

Connect to Moderation & Analytics

Admin Console → Moderation gives your trust & safety team a centralized view of flagged content, user reports, and pending review items across all channels. → Moderation Overview
Configure per-user message rate limits in Admin Console → Network Settings to automatically throttle users who send messages too quickly — without requiring manual moderator action. → Network Settings
Pre-hook webhooks let your server inspect and optionally block messages before they are accepted by social.plus. Use this for custom profanity filters, link blocking, or compliance workflows. → Pre-Hook Events
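A minimal sketch of the decision logic a pre-hook endpoint might run before a message is accepted. The payload shape and the allow/deny response contract below are simplified assumptions — consult the Pre-Hook Events reference for the exact schema:

```typescript
// Sketch of a pre-hook decision: deny messages matching blocked
// patterns, allow everything else. Payload and response shapes are
// assumptions, not the documented schema.
interface PreHookPayload {
  text: string;
  senderId: string;
}

type PreHookDecision = { action: 'allow' } | { action: 'deny'; reason: string };

const BLOCKED_PATTERNS: RegExp[] = [
  /\bspam\b/i,            // illustrative profanity/spam keyword
  /https?:\/\/bit\.ly\//i, // illustrative link-shortener block
];

function moderateMessage(payload: PreHookPayload): PreHookDecision {
  for (const pattern of BLOCKED_PATTERNS) {
    if (pattern.test(payload.text)) {
      return { action: 'deny', reason: `matched ${pattern}` };
    }
  }
  return { action: 'allow' };
}
```

Keeping the decision in a pure function like this makes the filter easy to unit-test independently of the HTTP handler that wraps it.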

Common Mistakes

Using ban instead of mute for minor rule breaks — Banning is irreversible in the sense that it deletes messages. Reserve bans for serious violations. Use mutes for temporary timeouts — they’re adjustable and reversible without data loss.
Not informing users why they were moderated — Unexplained bans drive users away permanently. Always send a custom system message to the channel or a direct notification explaining the reason. This also reduces false positive appeals.
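One way to make the "explain why" guidance concrete is to generate a standard notice from the action, reason, and duration, then deliver it through whatever messaging path you use (direct notification or channel system message). This helper is illustrative, not an SDK API:

```typescript
// Build a consistent moderation notice so users always learn what
// happened, why, and how to appeal. Illustrative helper, not SDK API.
function moderationNotice(
  action: 'muted' | 'banned',
  reason: string,
  durationSeconds?: number
): string {
  const duration = durationSeconds
    ? ` for ${Math.round(durationSeconds / 60)} minutes`
    : '';
  return `You have been ${action}${duration}. Reason: ${reason}. Reply to this message to appeal.`;
}
```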

Best Practices

Define a progression: (1) content warning → (2) temporary mute → (3) channel ban → (4) global ban. Jumping straight to the most severe option for minor violations destroys community trust and inflates appeal volume.
Grant channel-moderator roles to trusted community members before the channel grows. Reactive moderator assignment (after problems start) means violations compound faster than you can respond. See Channel Roles & Permissions.
AI moderation catches spam and NSFW content reliably, but misses context-specific violations. Pair it with a human review queue for flagged items and empower community members to report edge cases.
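The escalation progression above can be encoded as a small lookup so moderators (or bots) apply the next step consistently. The thresholds are illustrative — tune them to your community's rules:

```typescript
// Map a user's prior-violation count to the next moderation action,
// following the warn → mute → channel ban → global ban ladder.
// Thresholds are illustrative.
type ModerationAction = 'warn' | 'mute' | 'channel_ban' | 'global_ban';

function nextAction(priorViolations: number): ModerationAction {
  if (priorViolations <= 0) return 'warn';
  if (priorViolations === 1) return 'mute';
  if (priorViolations === 2) return 'channel_ban';
  return 'global_ban';
}
```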
Dive deeper: Moderation & Safety API Reference has full parameter tables, method signatures, and platform-specific details for every API used in this guide.

Next Steps

Channel Roles & Permissions

Who to promote before channels need moderation.

Group Chat Path

Complete group chat build path from creation to moderation.

Admin Console Moderation

Review flagged items from a trust & safety dashboard.