Summary
In this workshop, you learned how to use the Azure AI Content Safety features within Azure AI Studio. You're now ready to integrate this resource into Contoso Camping Store's AI-powered platform!
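As a recap of the core call you'd wire into that platform, here's a minimal sketch using the azure-ai-contentsafety Python SDK. The environment variable names and the severity threshold are assumptions for illustration, not values from the workshop; read the endpoint and key from the Content Safety resource you created.

```python
# Minimal sketch: moderating user text with Azure AI Content Safety.
# Assumes the endpoint and key are exposed via environment variables
# (names are placeholders -- substitute your own configuration).
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # assumed env var name
key = os.environ["CONTENT_SAFETY_KEY"]            # assumed env var name

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))


def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return True if no harm category exceeds the severity threshold."""
    try:
        response = client.analyze_text(AnalyzeTextOptions(text=text))
    except HttpResponseError as e:
        raise RuntimeError(f"Text analysis failed: {e.message}") from e
    # Each result reports a harm category (Hate, SelfHarm, Sexual, Violence)
    # and a severity level; block anything above the chosen threshold.
    return all(
        (item.severity or 0) <= max_severity
        for item in response.categories_analysis
    )


print(is_safe("Recommend a tent for a family camping trip."))
```

The severity threshold (`max_severity=2` here) is a policy decision for your application; tune it per category based on the experimentation and measurement you did in the workshop.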
As you continue developing AI solutions, keep responsible AI principles at the forefront of your approach. Mitigating the potential harms that generative AI models present requires an iterative, layered approach that includes experimentation and measurement.
Azure AI integrates years of AI policy, research, and engineering expertise from Microsoft to help your teams build safe, secure, and reliable AI solutions from the start. Azure AI applies enterprise controls for data privacy, compliance, and security on infrastructure that's built for AI at scale.
Note
After you finish exploring Azure AI services, delete the Azure resources that you created during the workshop to avoid incurring unnecessary charges.
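If you'd rather clean up from code than from the Azure portal, the following is a minimal sketch using the azure-identity and azure-mgmt-resource packages. It assumes all of your workshop resources live in a single resource group; the group name and environment variable are hypothetical, so substitute your own.

```python
# Minimal sketch: delete the workshop's resources by removing their
# resource group. Assumes everything was created in one group.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed env var name
resource_group = "rg-contoso-workshop"                 # hypothetical name

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Deleting the resource group removes every resource inside it.
poller = client.resource_groups.begin_delete(resource_group)
poller.wait()  # block until the deletion completes
print(f"Deleted resource group '{resource_group}'.")
```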
Learn more
- Infuse responsible AI tools and practices in your LLMOps
- Harm categories in Azure AI Content Safety
- Prompt Shields
- Groundedness detection