Introduction

The amount of user-generated content being posted online is growing rapidly. We are also increasingly aware of the need to protect everyone from inappropriate or harmful content.

Azure AI Content Safety is an AI service designed to help developers build advanced content safety into their applications and services.

Developer teams responsible for hosting online discussions face growing challenges in maintaining safe and respectful online spaces. Azure AI Content Safety identifies potentially unsafe content, helping organizations comply with regulations and meet their own quality standards.
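For example, a moderation workflow might screen a forum post with the service's text analysis API before the post is published. The following is a minimal sketch assuming the Python SDK (azure-ai-contentsafety); the endpoint, key, and severity threshold are placeholder values, and attribute names may differ slightly between SDK versions.

```python
# Minimal sketch: screening a user post with Azure AI Content Safety.
# Assumes the azure-ai-contentsafety package; endpoint, key, and the
# severity threshold below are hypothetical placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

post = "Example user-generated text to screen before publishing."
result = client.analyze_text(AnalyzeTextOptions(text=post))

# Each analyzed category (hate, sexual, violence, self-harm) is returned
# with a severity score; block the post if any category is too severe.
SEVERITY_THRESHOLD = 2  # hypothetical moderation policy threshold
flagged = [
    c for c in result.categories_analysis
    if c.severity and c.severity >= SEVERITY_THRESHOLD
]
if flagged:
    print("Post held for review:", [c.category for c in flagged])
else:
    print("Post approved.")
```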

The need for improving online content safety has four main drivers:

  • Increase in harmful content: User-generated online content is growing rapidly, and with it the volume of harmful and inappropriate material.
  • Regulatory pressures: Governments are increasing pressure to regulate online content.
  • Transparency: Users expect transparency about content moderation standards and how they're enforced.
  • Complex content: Advances in technology make it easier for users to post multimodal content, including video.

Note

Azure AI Content Safety replaces Azure Content Moderator, which was deprecated in February 2024 and will be retired by February 2027.

In this module, you'll learn about the key features of Azure AI Content Safety and when each might be used.