2024.11.19

The Battle of Moderators: Behind the Scenes of Social Media Safety | Release #375


Cover photo by あお🐐

The social media platforms we use every day rely on AI systems and human moderators working to remove harmful content. The nature of this work, however, is harsh and poses serious mental health risks.

This article explores the current state of moderation, the limitations of AI technology, and future possibilities.

Moderators Facing Harsh Realities

On platforms such as TikTok and Facebook, harmful content, including depictions of child abuse and violence, is posted daily. AI performs an initial screening, but human moderators often make the final judgment, and repeated exposure to this material leaves many of them with serious psychological trauma.


Photo by akira

In 2020, Facebook (now Meta) agreed to pay $52 million to moderators who had suffered mental harm on the job, but the underlying problems remain unresolved. Some former moderators have since formed unions to push for better working conditions.

Advancements and Limitations of AI Technology

AI systems analyze vast volumes of posts and filter out clearly harmful ones before human review. They still struggle with nuance and context, however, which makes some cases hard to judge correctly.

The challenge cuts both ways: over-filtering can suppress legitimate speech, while under-filtering lets inappropriate content slip through. Experts therefore stress the need for collaboration between AI and human moderators.
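To make that tradeoff concrete, here is a minimal sketch in Python of confidence-threshold triage. Everything in it, the thresholds, the labels, and the scores, is a hypothetical illustration rather than any platform's actual system; the idea is simply that posts the model is sure about are handled automatically, while the uncertain middle band goes to a person.

# A toy triage sketch: the thresholds and labels here are illustrative
# assumptions, not any platform's real configuration.

AUTO_REMOVE_THRESHOLD = 0.95  # model is very confident the post is harmful
AUTO_ALLOW_THRESHOLD = 0.05   # model is very confident the post is benign

def triage(harm_score: float) -> str:
    """Route a post given a model's estimated probability that it is harmful.

    Posts the model is sure about are handled automatically; the uncertain
    middle band, where nuance and context matter most, goes to a human.
    """
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if harm_score <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"
    return "human_review"

# Example: three posts with hypothetical model scores.
for score in (0.99, 0.50, 0.01):
    print(score, "->", triage(score))

The two thresholds carry the whole tradeoff: widening the human-review band catches more borderline mistakes, but it also increases both the workload and the volume of disturbing material that moderators must see.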

The Future and Challenges of Moderation

The future of moderation requires a system where AI and humans complement each other's strengths. AI can work swiftly and efficiently, while human moderators consider context and emotional factors. This collaboration is expected to create a safer online space for users.


Photo by Liysphoto

Beyond the technology, addressing moderators' mental health and improving their working conditions remain open challenges. Ultimately, a new approach that combines technological and human effort may be needed.