Quickstart: Moderate text and images with Content Safety in Azure AI Studio

Important

Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In this quickstart, you use the Azure AI Content Safety service in Azure AI Studio to moderate text and images. Content Safety detects harmful user-generated and AI-generated content in applications and services.

Caution

Some of the sample content provided by Azure AI Studio might be offensive. Sample images are blurred by default. User discretion is advised.

Prerequisites

  • An Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region, and supported pricing tier. Then select Create. The resource's key and endpoint are used in the code sketch after this list.
  • An AI Studio hub in Azure AI Studio.
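
The key and endpoint from your Content Safety (or AI Services) resource are what any exported code authenticates with. As a minimal sketch, assuming the azure-ai-contentsafety Python package and illustrative environment variable names, a client can be constructed like this:

```python
# Minimal client setup, assuming `pip install azure-ai-contentsafety`.
# The environment variable names below are illustrative assumptions.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource-name>.cognitiveservices.azure.com/
key = os.environ["CONTENT_SAFETY_KEY"]

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
```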

Setting up

  1. Sign in to Azure AI Studio.
  2. Select the hub you'd like to work in.
  3. On the left nav menu, select AI Services. Select the Content Safety panel.

Moderate text or images

Select one of the following tabs to get started with Content Safety in Azure AI Studio.

Azure AI Studio lets you quickly try out text moderation. The moderate text content feature takes into account various factors such as the type of content, the platform's policies, and the potential effect on users. Run moderation tests on sample content, then configure the filters to further fine-tune the test results. You can also use a blocklist to add specific terms that you want to detect and act on.
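
The same kind of moderation can also be run from code through the Content Safety text analysis API. A minimal sketch, reusing the client from the earlier sketch and assuming the azure-ai-contentsafety package:

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Analyze a sample string; the service returns a severity level
# (0-Safe, 2-Low, 4-Medium, 6-High) for each harm category it evaluates.
request = AnalyzeTextOptions(text="Sample text to moderate.")
response = client.analyze_text(request)

for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```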

  1. Select the Moderate text content panel on the Content Safety page in Azure AI Studio.
  2. Select your AI Services resource or Content Safety resource name from the dropdown menu.
  3. You can either choose a pre-written text sample, or write your own sample text in the input field.
  4. Optionally, configure the content filters in the Configure filters tab. Use the sliders to determine which severity level in each category should be rejected by the model. The service still returns all the harm categories that were detected, along with the severity level for each (0-Safe, 2-Low, 4-Medium, 6-High), but the Allowed/Blocked result depends on how you configure the filter. A sketch of how this configuration maps to code follows this list.
  5. Optionally, set up a blocklist with the Use blocklist tab. You can choose a blocklist you've already created or create a new one here. Use the Edit button to add and remove terms. You can stack multiple blocklists in the same filter.
  6. When your filters are ready, select Run test.
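
The filter and blocklist configuration from steps 4 and 5 can be approximated in code as well. The sketch below reuses the client from the earlier sketch; the threshold values and the blocklist name are illustrative assumptions, not service defaults:

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Illustrative per-category thresholds, mirroring the Configure filters sliders.
# Content at or above the listed severity (0-Safe, 2-Low, 4-Medium, 6-High) is blocked.
thresholds = {"Hate": 2, "SelfHarm": 4, "Sexual": 4, "Violence": 4}

request = AnalyzeTextOptions(
    text="Sample text to moderate.",
    blocklist_names=["my-blocklist"],  # hypothetical blocklist created beforehand
    halt_on_blocklist_hit=False,       # keep analyzing even after a blocklist hit
)
response = client.analyze_text(request)

blocked = bool(response.blocklists_match)  # any matched blocklist term blocks the text
for item in response.categories_analysis:
    if item.severity >= thresholds.get(item.category, 4):
        blocked = True

print("Blocked" if blocked else "Allowed")
```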

View and export code

In both the Moderate text content and Moderate image content scenarios, select the View code button at the top of the page to view and copy the sample code, which includes your configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code in your own app.
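
The exported code for the image scenario follows the same pattern as the text scenario. A minimal sketch, again assuming the azure-ai-contentsafety package and the client from the earlier sketch, with an illustrative file name:

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Read a local image (illustrative file name) and submit it for analysis.
with open("sample_image.jpg", "rb") as image_file:
    request = AnalyzeImageOptions(image=ImageData(content=image_file.read()))

response = client.analyze_image(request)

for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```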

Clean up resources

To avoid incurring unnecessary Azure costs, you should delete the resources you created in this quickstart if they're no longer needed. To manage resources, you can use the Azure portal.

Next steps