This article presents a possible use case and corresponding sample flow that you can support. This can be a helpful jumping-off point as you plan your implementation.
To maintain a safe and respectful online community, platforms often need to moderate the content that members write in their "About" sections to ensure compliance with community guidelines, legal standards, or platform-specific rules, such as bans on offensive language, hate speech, or personal contact details. If a member fills in their "About" section with offensive language, you could detect and flag the inappropriate or prohibited content and notify the Wix user.
To moderate the content of members' "About" sections:
1. Use Member About Created and Member About Updated to listen for when a member's "About" section is created or updated. When one of these events is triggered, retrieve the content from the event data, as in the first sketch after this list.
2. Pass the retrieved content to an AI-powered content moderation service, such as Amazon Comprehend, Google Cloud Content Safety API, or OpenAI's moderation API (see the second sketch below), and scan for:
   - Prohibited language: offensive, discriminatory, or violent speech. For example, the moderation flag could be "Contains offensive language."
   - Malicious content: links to harmful websites or phishing attempts. For example, the moderation flag could be "Includes an external URL."
3. If inappropriate content is detected:
   - Flag the member's "About" section for review.
   - Notify the member with a reason for the flagging and a link to the community guidelines.
   - Call Update Member About to replace the content with a placeholder message. For example, "This section is under review for potential violations of our guidelines." The third sketch below combines these steps.
4. Provide flagged content to human moderators for review, as in the last sketch below. Moderators could:
   - Approve the content if it’s acceptable.
   - Call Update Member About or Delete Member About to edit or delete the inappropriate content.
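The following minimal sketches illustrate one way to implement this flow in TypeScript on a backend. They are illustrative, not a definitive implementation. In this first sketch, the handler names and the event payload shape (`event.entity.memberId`, `event.entity.content`) are assumptions; verify them against the Members About API reference. `moderateAbout` is defined in the third sketch.

```typescript
// backend/events.ts
// Handlers for the Member About Created and Member About Updated events.
// The handler names and the event payload shape below are assumptions;
// verify them against the Members About API reference.
import { moderateAbout } from "./moderation-flow";

interface AboutEvent {
  entity: {
    memberId: string; // assumed: the member who owns the "About" section
    content: string;  // assumed: the section's text content
  };
}

export async function wixMembers_onMemberAboutCreated(event: AboutEvent): Promise<void> {
  await moderateAbout(event.entity.memberId, event.entity.content);
}

export async function wixMembers_onMemberAboutUpdated(event: AboutEvent): Promise<void> {
  await moderateAbout(event.entity.memberId, event.entity.content);
}
```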
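Next, a sketch of the scanning step, using OpenAI's moderation endpoint for prohibited language plus a naive regular-expression check for external URLs. The flag strings mirror the examples in step 2, and storing the API key in the `OPENAI_API_KEY` environment variable is an assumption about how you manage secrets.

```typescript
// backend/scan.ts
// Scans "About" text for prohibited language and external links.

export interface ScanResult {
  flagged: boolean;
  flags: string[]; // e.g., "Contains offensive language"
}

export async function scanContent(text: string): Promise<ScanResult> {
  const flags: string[] = [];

  // Prohibited language: call OpenAI's moderation endpoint.
  const response = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "omni-moderation-latest", input: text }),
  });
  const moderation = await response.json();
  if (moderation.results?.[0]?.flagged) {
    flags.push("Contains offensive language");
  }

  // Malicious content: a naive check for any external URL. A production flow
  // could also screen the URLs against a phishing or malware blocklist.
  if (/https?:\/\/\S+/i.test(text)) {
    flags.push("Includes an external URL");
  }

  return { flagged: flags.length > 0, flags };
}
```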
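The third sketch wires the scan to the flagging step from step 3. `updateMemberAbout`, `addToReviewQueue`, and `notifyMember` are placeholder stubs, not real SDK signatures: wire them to the Update Member About API, your review-queue storage, and your notification channel. The guidelines URL is also a placeholder.

```typescript
// backend/moderation-flow.ts
import { scanContent } from "./scan";

const PLACEHOLDER =
  "This section is under review for potential violations of our guidelines.";
const GUIDELINES_URL = "https://www.example.com/community-guidelines"; // placeholder

export interface ReviewItem {
  memberId: string;
  content: string;
  flags: string[];
}

export async function moderateAbout(memberId: string, content: string): Promise<void> {
  const { flagged, flags } = await scanContent(content);
  if (!flagged) return;

  // Keep the original text so moderators can restore it if they approve it.
  await addToReviewQueue({ memberId, content, flags });

  // Replace the live content with a placeholder via Update Member About.
  await updateMemberAbout(memberId, PLACEHOLDER);

  // Tell the member why the section was flagged and link the guidelines.
  await notifyMember(
    memberId,
    `Your "About" section was flagged: ${flags.join("; ")}. ` +
      `See our community guidelines: ${GUIDELINES_URL}`,
  );
}

// Placeholder stubs — replace with the Update Member About call, your data
// collection, and your notification channel (e.g., a triggered email).
export async function updateMemberAbout(memberId: string, content: string): Promise<void> {
  console.log(`TODO: call Update Member About for ${memberId}:`, content);
}
async function addToReviewQueue(item: ReviewItem): Promise<void> {
  console.log("TODO: store for human review:", item);
}
async function notifyMember(memberId: string, message: string): Promise<void> {
  console.log(`TODO: notify ${memberId}:`, message);
}
```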
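Finally, a sketch of the human-review step from step 4. `deleteMemberAbout` is another placeholder stub standing in for the Delete Member About API; the `ReviewItem` shape and the `updateMemberAbout` stub come from the previous sketch.

```typescript
// backend/review.ts
import { ReviewItem, updateMemberAbout } from "./moderation-flow";

export type Decision =
  | { action: "approve" }               // content is acceptable as-is
  | { action: "edit"; content: string } // moderator rewrote the section
  | { action: "delete" };               // remove the section entirely

export async function resolveReview(item: ReviewItem, decision: Decision): Promise<void> {
  switch (decision.action) {
    case "approve":
      // Restore the original text that was queued before the placeholder.
      await updateMemberAbout(item.memberId, item.content);
      break;
    case "edit":
      // Publish the moderator's edited version via Update Member About.
      await updateMemberAbout(item.memberId, decision.content);
      break;
    case "delete":
      // Remove the section entirely via Delete Member About.
      await deleteMemberAbout(item.memberId);
      break;
  }
}

// Placeholder stub — replace with the Delete Member About call.
async function deleteMemberAbout(memberId: string): Promise<void> {
  console.log(`TODO: call Delete Member About for ${memberId}`);
}
```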