# Moderate content and detect harm in Azure AI Studio by using Content Safety
Learn how to choose and build a content moderation system in Azure AI Studio.
This module is a guided project. You complete an end-to-end project by following step-by-step instructions.
## Learning objectives
By the end of this module, you'll be able to:
- Configure filters and threshold levels to block harmful content.
- Perform text and image moderation for harmful content (see the API sketch after this list).
- Analyze and improve the precision, recall, and F1 score metrics (the formulas are shown after this list).
- Detect groundedness in a model's output.
- Identify and block AI-generated copyrighted content.
- Mitigate direct and indirect prompt injections.
- Send filter configurations as output to code.
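To make the first two objectives concrete, here is a minimal sketch of threshold-based text moderation using the Azure AI Content Safety Python SDK (`azure-ai-contentsafety`). The endpoint, key, sample text, and the severity threshold of 4 are placeholder assumptions; adjust them to match your resource and filter configuration.

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder resource values -- substitute your own endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-content-safety-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Analyze a text sample across the built-in harm categories
# (Hate, SelfHarm, Sexual, Violence). Severity is reported on a 0-7 scale.
response = client.analyze_text(AnalyzeTextOptions(text="Sample user input to screen."))

# Assumed filter configuration: block anything at severity 4 or higher.
BLOCK_THRESHOLD = 4
for result in response.categories_analysis:
    action = "BLOCK" if result.severity >= BLOCK_THRESHOLD else "allow"
    print(f"{result.category}: severity {result.severity} -> {action}")
```

Image moderation follows the same pattern with `analyze_image` and image data in place of text.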
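For the metrics objective, a quick refresher on how precision, recall, and F1 are computed when you evaluate a filter against labeled data. The counts below are illustrative placeholders, not results from any real evaluation.

```python
# Illustrative counts from evaluating a content filter against labeled data.
true_positives = 42   # harmful items correctly blocked
false_positives = 7   # safe items blocked by mistake
false_negatives = 5   # harmful items that slipped through

# Precision: of everything blocked, how much was actually harmful?
precision = true_positives / (true_positives + false_positives)

# Recall: of all harmful content, how much did the filter catch?
recall = true_positives / (true_positives + false_negatives)

# F1 score: harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.2f}, Recall: {recall:.2f}, F1: {f1:.2f}")
```

Raising a filter's threshold typically improves precision at the cost of recall, and vice versa; F1 balances the two.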
## Prerequisites
- An Azure subscription. Create one for free.
- Familiarity with Azure and the Azure portal.