Use AI responsibly with Azure AI Content Safety
Azure AI Content Safety is a comprehensive tool designed to detect and manage harmful content in both user-generated and AI-generated material. Learn how Azure AI Content Safety provides text and image APIs that help identify and filter content across four harm categories: hate, sexual, violence, and self-harm.
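As a rough sketch of what the text API works with, the snippet below builds a request body for a text-analysis call and reads the per-category severity scores out of a mocked response. The exact field names (`categoriesAnalysis`, `severity`) follow the publicly documented REST response shape, but this is an illustration using a hard-coded sample rather than a live call, so no endpoint, key, or SDK is involved.

```python
import json

# The four harm categories Azure AI Content Safety screens for.
CATEGORIES = ["Hate", "SelfHarm", "Sexual", "Violence"]

def build_analyze_text_request(text: str) -> str:
    """Build a JSON body for a text-analysis request.

    Sketch only: the text-plus-categories shape mirrors the REST API,
    but is not tied to a specific api-version here.
    """
    return json.dumps({"text": text, "categories": CATEGORIES})

def max_severity(response_body: str) -> int:
    """Return the highest severity score across categories in a response.

    Assumes the documented layout: a categoriesAnalysis list of
    {category, severity} objects, where higher severity = more harmful.
    """
    analysis = json.loads(response_body).get("categoriesAnalysis", [])
    return max((item.get("severity", 0) for item in analysis), default=0)

# Mocked service response -- no network call is made.
sample_response = json.dumps({
    "categoriesAnalysis": [
        {"category": "Hate", "severity": 0},
        {"category": "Violence", "severity": 2},
    ]
})
print(max_severity(sample_response))  # → 2
```

An application would typically compare the returned severity against a threshold it chooses for its own audience, and block or flag content above it.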
Learning objectives
By the end of this module, you'll be able to:
- Describe Azure AI Content Safety.
- Describe how Azure AI Content Safety operates.
- Describe when to use Azure AI Content Safety.
Prerequisites
Before starting this module, you should be familiar with Azure and using the Azure portal. You should also have experience programming with C# or Python. If you have no previous programming experience, we recommend you complete the Take your first steps with C# or the Take your first steps with Python learning path before starting this one.