Alerts for AI workloads (Preview)
This article lists the security alerts you might get for AI workloads from Microsoft Defender for Cloud and any Microsoft Defender plans you enabled. The alerts shown in your environment depend on the resources and services you're protecting, and your customized configuration.
Note
Some of the recently added alerts powered by Microsoft Defender Threat Intelligence and Microsoft Defender for Endpoint might be undocumented.
Learn how to respond to these alerts.
Note
Alerts from different sources might take different amounts of time to appear. For example, alerts that require analysis of network traffic might take longer to appear than alerts related to suspicious processes running on virtual machines.
AI workload alerts
Detected credential theft attempts on an Azure AI model deployment
(AI.Azure_CredentialTheftAttempt)
Description: The credential theft alert is designed to notify the SOC when credentials are detected within GenAI model responses to a user prompt, indicating a potential breach. This alert is crucial for detecting credential leaks or theft, which are unique to generative AI and can have severe consequences if successful.
MITRE tactics: Credential Access, Lateral Movement, Exfiltration
Severity: Medium
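A detection of this kind typically scans model output for credential-shaped strings before it reaches the user. The sketch below is a minimal illustration of that idea only; the regex patterns, pattern names, and function are assumptions for demonstration, not Defender for Cloud's actual detection logic, which relies on much richer signals.

```python
import re

# Illustrative credential-shaped patterns (assumptions, not Defender's logic).
CREDENTIAL_PATTERNS = {
    "azure_storage_key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{40,}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9]{20,}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_model_response(response_text: str) -> list[str]:
    """Return the names of credential patterns found in a GenAI model response."""
    return [name for name, pattern in CREDENTIAL_PATTERNS.items()
            if pattern.search(response_text)]
```

A SOC pipeline would raise an alert whenever the returned list is nonempty, attaching the matched pattern names as evidence.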
A Jailbreak attempt on an Azure AI model deployment was blocked by Azure AI Content Safety Prompt Shields
(AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt)
Description: The jailbreak alert notifies the SOC of an attempt, carried out using a direct prompt injection technique, to manipulate the system prompt and bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicates that the attempt was blocked by Azure Responsible AI Content Safety (also known as Prompt Shields), preserving the integrity of the AI resource and the security of its data.
MITRE tactics: Privilege Escalation, Defense Evasion
Severity: Medium
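Applications can screen prompts with the Azure AI Content Safety Prompt Shields API before passing them to a model. The sketch below builds a `text:shieldPrompt` request and parses its response; the URL path, payload shape, and API version follow the documented REST API as I understand it, but verify them against the current Azure AI Content Safety reference before relying on them.

```python
import json

API_VERSION = "2024-09-01"  # assumption: use the API version your resource supports

def build_shield_prompt_request(endpoint: str, user_prompt: str, documents=None):
    """Build the URL and JSON body for a Prompt Shields text:shieldPrompt call."""
    url = f"{endpoint}/contentsafety/text:shieldPrompt?api-version={API_VERSION}"
    body = {"userPrompt": user_prompt, "documents": documents or []}
    return url, json.dumps(body)

def attack_detected(response_body: str) -> bool:
    """Parse a shieldPrompt response; report whether any injection was flagged."""
    result = json.loads(response_body)
    if result.get("userPromptAnalysis", {}).get("attackDetected"):
        return True  # direct prompt injection flagged in the user prompt
    # indirect injection flagged in any grounding document
    return any(d.get("attackDetected") for d in result.get("documentsAnalysis", []))
```

When `attack_detected` returns `True`, the application would drop or quarantine the prompt instead of forwarding it to the model deployment.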
A Jailbreak attempt on an Azure AI model deployment was detected by Azure AI Content Safety Prompt Shields
(AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt)
Description: The jailbreak alert notifies the SOC of an attempt, carried out using a direct prompt injection technique, to manipulate the system prompt and bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicates that the attempt was detected by Azure Responsible AI Content Safety (also known as Prompt Shields) but wasn't blocked, due to content filtering settings or low confidence.
MITRE tactics: Privilege Escalation, Defense Evasion
Severity: Medium
Sensitive Data Exposure Detected in Azure AI Model Deployment
(AI.Azure_DataLeakInModelResponse.Sensitive)
Description: The sensitive data leakage alert is designed to notify the SOC that a GenAI model responded to a user prompt with sensitive information, potentially due to a malicious user attempting to bypass the generative AI’s safeguards to access unauthorized sensitive data.
MITRE tactics: Collection
Severity: Low
Corrupted AI application\model\data directed a phishing attempt at a user
(AI.Azure_PhishingContentInModelResponse)
Description: This alert indicates that an AI application developed by the organization has been corrupted: it actively shared a known malicious URL used for phishing with a user. The URL originated within the application itself, the AI model, or the data the application can access.
MITRE tactics: Impact (Defacement)
Severity: High
Phishing URL shared in an AI application
(AI.Azure_PhishingContentInAIApplication)
Description: This alert indicates a potential corruption of an AI application, or a phishing attempt by one of the end users. The alert determines that a malicious URL used for phishing was passed during a conversation through the AI application; however, the origin of the URL (user or application) is unclear.
MITRE tactics: Impact (Defacement), Collection
Severity: High
Phishing attempt detected in an AI application
(AI.Azure_PhishingContentInUserPrompt)
Description: This alert indicates that a URL used in a phishing attack was sent by a user to an AI application. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate-looking website. Sending this to an AI application might be an attempt to corrupt it, poison the data sources it has access to, or gain access to employees or other customers through the application's tools.
MITRE tactics: Collection
Severity: High
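Conceptually, this detection extracts URLs from user prompts and matches them against a threat-intelligence feed of known phishing hosts. The sketch below illustrates that flow with a hypothetical in-memory blocklist; the host name, regex, and function are placeholders, not the feed or matching logic Defender for Cloud actually uses.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

# Hypothetical blocklist standing in for a threat-intelligence feed.
KNOWN_PHISHING_HOSTS = {"login-contoso-verify.example.net"}

def phishing_urls_in_prompt(prompt: str) -> list[str]:
    """Return URLs in the prompt whose host matches a known phishing host."""
    return [url for url in URL_RE.findall(prompt)
            if urlparse(url).hostname in KNOWN_PHISHING_HOSTS]
```

Any nonempty result would surface as an alert, with the flagged URLs preserved as evidence for the SOC to triage.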
Suspicious user agent detected
(AI.Azure_AccessFromSuspiciousUserAgent)
Description: The user agent of a request accessing one of your Azure AI resources contained anomalous values indicative of an attempt to abuse or manipulate the resource. Microsoft threat intelligence has mapped the user agent in question to suspected malicious intent, and your resource was therefore likely compromised.
MITRE tactics: Execution, Reconnaissance, Initial access
Severity: Medium
ASCII Smuggling prompt injection detected
(AI.Azure_ASCIISmuggling)
Description: The ASCII smuggling technique allows an attacker to send invisible instructions to an AI model. These attacks are commonly associated with indirect prompt injections, in which a threat actor passes hidden instructions to bypass the application and model guardrails. Because the instructions aren't visible in the text, these attacks are usually carried out without the user's knowledge and can compromise the application's tools or connected data sets.
MITRE tactics: Impact
Severity: High
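ASCII smuggling commonly abuses the Unicode tag-character block (U+E0000–U+E007F), which mirrors printable ASCII but renders as invisible text. A minimal sketch of detecting and decoding such payloads is shown below; the function names are illustrative and this is not Defender for Cloud's detection implementation.

```python
# Unicode tag characters mirror ASCII at codepoints U+E0000 + the ASCII value
# and render invisibly, making them a carrier for hidden instructions.
TAG_BASE, TAG_END = 0xE0000, 0xE007F

def contains_ascii_smuggling(text: str) -> bool:
    """Report whether any Unicode tag characters are present in the text."""
    return any(TAG_BASE <= ord(ch) <= TAG_END for ch in text)

def decode_hidden_tags(text: str) -> str:
    """Extract ASCII text smuggled via printable tag characters."""
    return "".join(chr(ord(ch) - TAG_BASE)
                   for ch in text
                   if TAG_BASE + 0x20 <= ord(ch) <= TAG_BASE + 0x7E)
```

Stripping or rejecting prompts that contain tag characters before they reach the model is a common mitigation, since legitimate user input rarely includes them.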
Note
For alerts that are in preview: The Azure Preview Supplemental Terms include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.