Responsible AI FAQ

What is Microsoft Security Copilot? 

Security Copilot is a natural-language, generative AI-powered security solution that helps increase the efficiency and capabilities of defenders to improve security outcomes at machine speed and scale. It draws context from plugins and data to answer prompts so that security professionals and IT admins can help keep their organizations secure.

What can Security Copilot do? 

Security Copilot helps answer questions in natural language so that you can receive actionable responses to common security and IT tasks in seconds.

Microsoft Security Copilot helps in the following scenarios:  

  • Security investigation and remediation
    Gain context for incidents to quickly triage complex security alerts into actionable summaries and remediate more quickly with step-by-step response guidance.

  • Script analysis and KQL query creation
    Use natural-language translation to eliminate the need to manually write query-language scripts or reverse-engineer malware scripts, enabling every team member to execute technical tasks.

  • Risk exploration and security posture management
    Get a broad picture of your environment with prioritized risks to uncover opportunities to improve posture more easily.

  • Faster IT issue troubleshooting
    Synthesize relevant information rapidly and receive actionable insights to identify and resolve IT issues quickly.

  • Security policy creation and management
    Define a new policy, cross-reference it with others for conflicts, and summarize existing policies to manage complex organizational context quickly and easily.

  • Lifecycle workflow configuration
    Build groups and set access parameters with step-by-step guidance to ensure a seamless configuration and help prevent security vulnerabilities.

  • Stakeholder reporting
    Get a clear and concise report that summarizes the context and environment, open issues, and protective measures, tailored to the tone and language of the report’s audience.
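To illustrate the KQL query creation scenario above, a natural-language prompt such as "Show me users with the most failed sign-ins over the last day" might be translated into a query along these lines. This is a hypothetical sketch, not output from Security Copilot itself; the table and column names (`SigninLogs`, `ResultType`, `UserPrincipalName`) follow common Microsoft Entra log schema conventions and should be verified against your own workspace before use:

```kusto
// Hypothetical example: failed sign-ins in the past 24 hours, grouped by user.
// Verify table and column names in your own Log Analytics workspace.
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"          // "0" indicates a successful sign-in
| summarize FailedAttempts = count() by UserPrincipalName
| top 10 by FailedAttempts desc
```

As the limitations section of this FAQ notes, generated queries like this should be reviewed and tested before being run against production data.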

What is Security Copilot’s intended use? 

Security Copilot helps support security professionals in end-to-end scenarios such as incident response, threat hunting, intelligence gathering, and posture management. For more information, see Use cases and roles.

How was Security Copilot evaluated? What metrics are used to measure performance? 

Security Copilot underwent substantial testing prior to being released. Testing included red teaming, which is the practice of rigorously testing the product to identify failure modes and scenarios that might cause Security Copilot to do or say things outside of its intended uses or that don't support the Microsoft AI Principles.

Now that it is released, user feedback is critical in helping Microsoft improve the system. You have the option of providing feedback whenever you receive output from Security Copilot. When a response is inaccurate, incomplete, or unclear, use the "Off-target" and "Report" buttons to flag any objectionable output. You can also confirm when responses are useful and accurate using the "Confirm" button. These buttons appear at the bottom of every Security Copilot response, and your feedback goes directly to Microsoft to help us improve the platform's performance.

What are the limitations of Security Copilot? How can users minimize the impact of Security Copilot’s limitations when using the system? 

  • The Early Access Program is designed to give customers the opportunity to get early access to Security Copilot and provide feedback about the platform. Preview features aren’t meant for production use and might have limited functionality. 

  • Like any AI-powered technology, Security Copilot doesn’t get everything right. However, you can help improve its responses by providing your observations using the feedback tool, which is built into the platform.  

  • The system might generate stale responses if it isn’t given the most current data through user input or plugins. To get the best results, verify that the right plugins are enabled.

  • The system is designed to respond to prompts related to the security domain, such as incident investigation and threat intelligence. Prompts outside the scope of security might result in responses that lack accuracy and comprehensiveness.

  • Security Copilot might generate code or include code in responses, which could potentially expose sensitive information or vulnerabilities if not used carefully. Responses might appear to be valid but might not actually be semantically or syntactically correct or might not accurately reflect the intent of the developer. Users should always take the same precautions as they would with any code they write that uses material they didn't independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.

  • Matches with Public Code: Security Copilot is capable of generating new code, which it does in a probabilistic way. While the probability that it might produce code that matches code in the training set is low, a Security Copilot suggestion might contain some code snippets that match code in the training set. Users should always take the same precautions as they would with any code they write that uses material they didn't independently originate, including precautions to ensure its suitability. These include rigorous testing, IP scanning, and checking for security vulnerabilities.

  • The system might not be able to process long prompts, such as hundreds of thousands of characters.

  • Use of the platform might be subject to usage limits or capacity throttling. Even with shorter prompts, choosing a plugin, making API calls, generating responses, and checking them before displaying them to the user can take time (up to several minutes) and require high GPU capacity. 

  • To minimize errors, users are advised to follow the prompting guidance.

What operational factors and settings allow for effective and responsible use of Security Copilot? 

  • You can use everyday words to describe what you’d like Security Copilot to do. For example: 

    • "Tell me about my latest incidents" or "Summarize this incident."

  • As the system generates the response, you start to see the steps the system is taking in the process log, providing opportunities to double-check its processes and sources.  

  • At any time during the prompt formation, you can cancel, edit, rerun, or delete a prompt. 

  • You can provide feedback about a response's quality, including reporting anything unacceptable to Microsoft. 

  • Responses can be pinned, shared, and exported, helping security professionals collaborate and share observations. 

  • Administrators have control over the plugins that connect to Security Copilot.   

  • You can choose, personalize, and manage plugins that work with Security Copilot.

  • Security Copilot includes promptbooks, which are groups of prompts that run in sequence to complete a specific workflow.

How is Microsoft approaching responsible AI for Security Copilot?

At Microsoft, we take our commitment to responsible AI seriously. Security Copilot is being developed in accordance with our AI principles. We're working with OpenAI to deliver an experience that encourages responsible use. For example, we have collaborated, and will continue to collaborate, with OpenAI on foundational model work. We designed the Security Copilot user experience to keep humans at the center. We developed a safety system designed to mitigate failures and prevent misuse through measures such as harmful content annotation, operational monitoring, and other safeguards. The invite-only early access program is also part of our approach to responsible AI. We're taking user feedback from those with early access to Security Copilot to improve the tool before making it broadly available.

Responsible AI is a journey, and we'll continually improve our systems along the way. We're committed to making our AI more reliable and trustworthy, and your feedback will help us do so.

Do you comply with the EU AI Act?

We are committed to compliance with the EU AI Act. Our multi-year effort to define, evolve, and implement our Responsible AI Standard and internal governance has strengthened our readiness.

At Microsoft, we recognize the importance of regulatory compliance as a cornerstone of trust and reliability in AI technologies. We're committed to creating responsible AI by design. Our goal is to develop and deploy AI that will have a beneficial impact on and earn trust from society.

Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Microsoft's Responsible AI Standard takes these six principles and breaks them down into goals and requirements for the AI we make available.

Our Responsible AI Standard takes into account regulatory proposals and their evolution, including the initial proposal for the EU AI Act. We developed our most recent AI products and services, such as Microsoft Copilot and Microsoft Azure OpenAI Service, in alignment with our Responsible AI Standard. As final requirements under the EU AI Act are defined in more detail, we look forward to working with policymakers to ensure feasible implementation and application of the rules, to demonstrating our compliance, and to engaging with our customers and other stakeholders to support compliance across the ecosystem.