Configure a communication compliance policy to detect generative AI interactions

Important

Microsoft Purview Communication Compliance provides the tools to help organizations detect regulatory compliance (for example, SEC or FINRA) and business conduct violations such as sensitive or confidential information, harassing or threatening language, and sharing of adult content. Built with privacy by design, usernames are pseudonymized by default, role-based access controls are built in, investigators are opted in by an admin, and audit logs are in place to help ensure user-level privacy.

You can use communication compliance to analyze interactions (prompts and responses) in numerous generative AI applications and detect inappropriate or risky interactions or the sharing of confidential information.

These include Microsoft 365 Copilot, Copilots built using Microsoft Copilot Studio, AI applications connected by Microsoft Entra or Microsoft Purview Data Map connectors, and more.

Tip

Get started with Microsoft Security Copilot to explore new ways to work smarter and faster using the power of AI. Learn more about Microsoft Security Copilot in Microsoft Purview.

Microsoft Copilot experience support

Communication compliance can detect interactions in any message with an item class that matches the IPM.SkypeTeams.Message.Copilot.* format. These include Copilot experiences in Microsoft solutions like Teams, Outlook, and many more.

For example, communication compliance detects interactions in Copilot messages with the IPM.SkypeTeams.Message.Copilot.Teams and IPM.SkypeTeams.Message.Copilot.Outlook item classes.
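If you ever inspect exported message metadata yourself (for example, from an eDiscovery export), you can recognize Copilot interactions by the same item class prefix. The following Python sketch is illustrative only: the message records and the item_class field name are assumptions, not a supported schema, and communication compliance performs this detection for you inside the policy.

```python
# Illustrative sketch: recognize Copilot interactions by item class prefix.
# The records and the "item_class" field name are hypothetical; communication
# compliance applies this kind of matching for you inside the policy.

COPILOT_ITEM_CLASS_PREFIX = "IPM.SkypeTeams.Message.Copilot."

messages = [
    {"subject": "[Copilot]", "item_class": "IPM.SkypeTeams.Message.Copilot.Teams"},
    {"subject": "[Copilot]", "item_class": "IPM.SkypeTeams.Message.Copilot.Outlook"},
    {"subject": "Weekly status", "item_class": "IPM.Note"},
]

copilot_messages = [
    m for m in messages if m["item_class"].startswith(COPILOT_ITEM_CLASS_PREFIX)
]

for m in copilot_messages:
    print(f"Copilot interaction detected: {m['item_class']}")
```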

Connected generative AI application support

Communication compliance can detect prompt and response interactions with non-Copilot AI applications. These are generative AI applications connected using Microsoft Entra and Microsoft Purview Data Map connectors.

Other AI application support

Communication compliance can also detect interactions with AI applications from browser and network activity by users in your organization. This helps detect inappropriate or risky interactions or the sharing of confidential information entered into AI applications outside your organization.

Prerequisites

To investigate Copilot interactions in communication compliance, you must be assigned one of the following roles:

  • Communication Compliance
  • Communication Compliance Investigators
  • Communication Compliance Analysts

You must also be assigned as a reviewer of the policy in the Reviewers field during policy creation.

How it works

Important

Microsoft is committed to making sure artificial intelligence (AI) systems are developed responsibly and in ways that warrant people's trust. As part of this commitment, Microsoft Purview engineering teams are operationalizing the six core principles of Microsoft's Responsible AI strategy to design, build, and manage AI solutions. As part of our effort to responsibly deploy AI, we provide documentation, role-based access, scenario attestation, and more to help organizations use AI systems responsibly.

You can take advantage of all communication compliance features when you create a communication compliance policy that detects Microsoft 365 Copilot and Microsoft Copilot interactions.

Any prompt or response entered into a supported generative AI app that matches a communication compliance policy is displayed as a policy match on the Policies page on the Pending tab, with separate entries for prompts and responses. If only the prompt or only the response matches a policy, an item is created on the Pending tab just for that policy match. You can remediate policy matches for generative AI apps in the same way that you remediate any other policy match.
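As a mental model, the prompt and the response are evaluated independently, and each match produces its own pending item. The Python sketch below is illustrative only; the keyword check stands in for whatever conditions your policy actually defines, and the pending-item structure is a simplified assumption.

```python
# Illustrative sketch: a prompt and its response are evaluated separately,
# and each part that matches creates its own pending item.
# The keyword check is a toy stand-in for real policy conditions.

def matches_policy(text: str, keywords: list[str]) -> bool:
    """Toy stand-in for a communication compliance condition evaluation."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in keywords)

def pending_items(prompt: str, response: str, keywords: list[str]) -> list[dict]:
    """Create one pending item per matching part of the interaction."""
    items = []
    if matches_policy(prompt, keywords):
        items.append({"part": "prompt", "text": prompt})
    if matches_policy(response, keywords):
        items.append({"part": "response", "text": response})
    return items

# Only the prompt matches, so only one pending item is created.
print(pending_items(
    prompt="Summarize the confidential merger plan",
    response="Here is a high-level summary of the planning document...",
    keywords=["confidential"],
))
```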

The following information is displayed for each item on the Pending tab for AI policy matches:

  • Icons: The Copilot icon identifies the policy match as a generative AI interaction with a Microsoft-based Copilot. For all other generative AI interactions, an email icon is shown.
  • Subject column: The [Copilot] value in this column identifies the policy match as a generative AI interaction with Microsoft-based Copilots. The [AI app] value in this column identifies the policy match for all other generative AI interactions.
  • Sender column: Sender of the message. Depending on the source AI application, the sender is listed as follows (see the sketch after this list):
    • If the policy match is a response from a Copilot application, the value is Copilot.
    • If the policy match is a response from a connected AI application, the value is Connected AI app.
    • If the policy match is a response from a cloud AI application, the value is Cloud AI app.
  • Recipient column: Recipients included in the message. This value is the user interacting with the AI application.
  • Message text: The message text that the user entered (the text that caused the policy match) is shown on the right side of the screen in its entirety.
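The subject and sender values above boil down to a small mapping from the source of the interaction to what you see on the Pending tab. The Python sketch below is illustrative only; the source keys and field names are assumptions made for the example, while the displayed values are the ones listed in this section.

```python
# Illustrative sketch: how the Subject and Sender columns identify the source
# of a generative AI policy match on the Pending tab. The source keys and
# field names are assumptions; the displayed values mirror this article.

SENDER_BY_SOURCE = {
    "copilot": "Copilot",                 # response from a Copilot application
    "connected_ai": "Connected AI app",   # response from a connected AI application
    "cloud_ai": "Cloud AI app",           # response from a cloud AI application
}

def pending_tab_labels(source: str) -> dict:
    """Return the Subject and Sender values shown for an AI response match."""
    subject = "[Copilot]" if source == "copilot" else "[AI app]"
    return {"subject": subject, "sender": SENDER_BY_SOURCE[source]}

for source in SENDER_BY_SOURCE:
    print(source, pending_tab_labels(source))
```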

Create a policy that detects Microsoft Copilot interactions

These steps use the Microsoft Purview portal. Depending on your Microsoft 365 plan, the Microsoft Purview compliance portal is retired or will be retired soon. To learn more, see Microsoft Purview portal and Microsoft Purview compliance portal.

  1. Sign in to the Microsoft Purview portal using credentials for an admin account in your Microsoft 365 organization.
  2. Go to the Communication Compliance solution.
  3. Select Policies in the left navigation.
  4. Select Create policy, and then select the Detect Microsoft Copilot interactions template.
  5. Enter the policy name, select the users and groups to apply the policy to, and then select the reviewers for the policy. Learn more about these options when creating a policy from a template.
  6. Review the list of settings chosen for you based on the template, and then select Create policy to create the policy or select Customize policy if you want to make any changes before creating the policy.

Add a generative AI app as a location for an existing policy


  1. Sign in to the Microsoft Purview portal using credentials for an admin account in your Microsoft 365 organization.

  2. Go to the Communication Compliance solution.

  3. Select Policies in the left navigation.

  4. Select the More actions (ellipsis) in the row for the policy you want to change, and then select Edit.

  5. Select Next two times in the policy creation workflow to go to the Choose locations to detect communications page.

  6. Select one or more of the following checkboxes to add a generative AI application as a location:

    • Microsoft Copilot experiences
    • Enterprise AI apps
    • Other AI apps
  7. Make any other changes to the policy, and then on the Review and finish page, select Save.

Create a policy to review all generative AI interactions

When you're first working with generative AI interactions, you may want to review all AI interactions to get a feel for how people in your organization are using these applications. To create a policy to review all generative AI interactions, when you create or edit the policy:

  • Make sure that the location is set to include all generative AI applications:

    • Microsoft Copilot experiences
    • Enterprise AI apps
    • Other AI apps
  • Make sure that the Review percentage option on the Choose conditions and review percentage page is set to 100%.

  • Don't set any conditions for the policy.

Note

Depending on the size of your organization, a policy that detects all generative AI interactions might result in a high volume of detected messages, which could cause your organization to reach its storage limit. In that case, you may need to make adjustments to the policy to reduce the number of detections.
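To judge whether a review-everything policy is practical before you turn it on, a rough estimate of the expected volume helps. The Python sketch below is a back-of-the-envelope calculation only; every number in it is a hypothetical placeholder to replace with figures from your own organization, not a product limit.

```python
# Back-of-the-envelope estimate of how many items a review-everything policy
# could surface. All numbers are hypothetical placeholders; substitute your
# own organization's figures.

users_in_scope = 5_000
ai_interactions_per_user_per_day = 10   # prompts plus responses, assumed
review_percentage = 1.0                 # 100%, as recommended above

detected_per_day = int(users_in_scope * ai_interactions_per_user_per_day * review_percentage)
detected_per_month = detected_per_day * 30

print(f"Estimated detections per day:   {detected_per_day:,}")
print(f"Estimated detections per month: {detected_per_month:,}")
# A high estimate is a signal to add conditions or lower the review percentage
# so the policy stays within your organization's storage limits.
```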

Remediate policy matches and alerts that contain generative AI interactions

You can remediate policy matches and alerts that contain generative AI interactions in the same way that you remediate any policy match or alert in communication compliance. For example, you can tag a policy match, escalate it, resolve it, download it, or export it. Learn more about resolving policy matches and alerts in communication compliance.

Reports

AI interactions that are brought into the scope of a communication compliance policy appear in communication compliance reports and audit data. Learn more about communication compliance reports and audits.
