FAQ for generative answers

These frequently asked questions (FAQ) describe the AI impact of the generative answers feature in Copilot Studio.

What are generative answers?

Generative answers make your agent valuable out of the box and expand the range of topics your agent can converse about, without requiring any manual dialog tree authoring.

What are the capabilities of generative answers?

When a user asks the agent a question that doesn't match a configured topic, the agent can optionally search for relevant content from a source of your choosing. Sources can include public websites, SharePoint, or your own custom data sources, including images embedded in PDF files. The agent uses generative AI to summarize that information into a response that it returns to the user.

Note

As of September 2024, agents can also reason over non-text elements in uploaded files, such as images, tabular data, and diagrams.
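
At a high level, generative answers act as a topic-first fallback. The following Python sketch is purely illustrative: the function and parameter names are hypothetical and aren't part of any Copilot Studio API; the sketch only mirrors the order of operations described above.

```python
from typing import Callable, Optional, Sequence


def answer_question(
    question: str,
    match_topic: Callable[[str], Optional[str]],      # hypothetical authored-topic matcher
    search_sources: Callable[[str], Sequence[str]],   # hypothetical knowledge-source search
    summarize: Callable[[str, Sequence[str]], str],   # hypothetical generative AI summarizer
) -> str:
    """Illustrative topic-first flow with a generative answers fallback."""
    # 1. Try to answer with an authored topic first.
    topic_answer = match_topic(question)
    if topic_answer is not None:
        return topic_answer

    # 2. No configured topic matched: search the sources you chose
    #    (public websites, SharePoint, or custom data, including images in PDFs).
    findings = search_sources(question)

    # 3. Summarize the relevant findings into a single response with generative AI.
    return summarize(question, findings)
```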

What is the intended use of generative answers?

Generative answers can be used as a primary information source in your agent, or as a fallback when authored topics are unable to address a user's query.

How were generative answers evaluated, and what metrics are used to measure performance?

The capability is continually evaluated against a collection of manually curated question-and-answer datasets covering multiple industries. Further evaluation is performed on custom datasets of offensive and malicious prompts and responses, through both automated testing and dedicated manual sessions designed to expand the test suite.

What are the limitations of generative answers, and how can users minimize the impact of limitations when using generative answers?

  • You must enable the generative answers option for each agent.

  • See Language support for the list of languages this feature supports and each language's support stage. You might be able to use other languages, but the generated answers might be inconsistent, and the agent might not respond properly or as you expect.

  • This capability might be subject to usage limits or capacity throttling.

  • Responses generated by the generative answers capability aren't always perfect and can contain mistakes.

    The system is designed to query knowledge from the website of your choosing and to package relevant findings into an easily consumable response. However, it's important to keep in mind some characteristics of the AI that might lead to unexpected responses:

    • The corpus upon which the model was trained doesn't include data created after 2021.
      There are mitigations to prevent the model from using its training corpus as a source for answers; however, it's possible for answers to include content from websites other than the one you selected.

    • The system doesn't perform an accuracy check, so if the selected data source contains inaccurate information, it could be shown to users of your agent. We implemented mitigations to filter out irrelevant and offensive responses, and the feature is designed not to respond when offensive language is detected. However, these filters and mitigations aren't foolproof.

    Note

    You should always test and review your agents before publishing them, and consider collecting feedback from your agent's users. Your admin can turn off the ability to publish agents with generative answers for your tenant in the Power Platform admin center.

What data does the capability collect? How is the data used?

The capability collects user prompts, the responses returned by the system, and any feedback you provide.

We use this data to evaluate and improve the quality of the capability. More information on what data is collected is available in the preview terms.

What operational factors and settings allow for effective and responsible use of generative answers?

Generative answers work best when you designate a trusted and valid source from which content should be queried. This source might be your company website, for example, www.microsoft.com. All webpages that belong to this domain are searched for a match against the user's question.
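
Conceptually, this scoping behaves like a domain filter over search results. The following sketch only illustrates that idea; the function name is hypothetical, and this isn't how Copilot Studio implements scoping internally.

```python
from urllib.parse import urlparse


def in_configured_domain(result_url: str, configured_domain: str = "www.microsoft.com") -> bool:
    """Return True if a search result URL belongs to the designated source domain."""
    host = urlparse(result_url).netloc.lower()
    domain = configured_domain.lower()
    return host == domain or host.endswith("." + domain)


# Only the first URL is within scope for the example domain above.
print(in_configured_domain("https://www.microsoft.com/en-us/copilot"))  # True
print(in_configured_domain("https://example.org/unrelated-page"))       # False
```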

We use the feedback you provide on your satisfaction with generated responses to improve system quality. You can provide feedback by selecting the thumbs-up or thumbs-down icons for generated responses. You can also include more feedback in free text.

What protections are in place within Copilot Studio for responsible AI?

Generative answers include various protections to ensure that admins, makers, and users enjoy a safe, compliant experience. Admins have full control over the features in their tenant and can always turn off the ability to publish agents with generative answers in their organization. Makers can add custom instructions to influence the types of responses their agents return. For more information about best practices for writing custom instructions, see Use prompt modification to provide custom instructions to your agent.

Makers can also limit the knowledge sources that agents can use to answer questions. To allow agents to answer questions outside the scope of their configured knowledge sources, makers can turn on the AI General Knowledge feature. To limit agents to answering questions only within the scope of their configured knowledge sources, makers should turn off this feature.

Copilot Studio also applies content moderation policies on all generative AI requests to protect admins, makers, and users against offensive or harmful content. These content moderation policies also extend to malicious attempts at jailbreaking, prompt injection, prompt exfiltration, and copyright infringement. All content is checked twice: first during user input and again when the agent is about to respond. If the system finds harmful, offensive, or malicious content, it prevents your agent from responding.
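
The two checkpoints can be pictured as a wrapper around response generation. The sketch below is a conceptual illustration only; the generate_answer and is_harmful callables are hypothetical stand-ins, not actual Copilot Studio or content moderation APIs.

```python
from typing import Callable


def respond_safely(
    user_input: str,
    generate_answer: Callable[[str], str],   # hypothetical response generator
    is_harmful: Callable[[str], bool],       # hypothetical content moderation check
    refusal: str = "I'm not able to help with that.",
) -> str:
    """Run the content check twice: on the user input, then on the draft response."""
    # First pass: block harmful, offensive, or malicious input,
    # including jailbreak and prompt injection attempts.
    if is_harmful(user_input):
        return refusal

    draft = generate_answer(user_input)

    # Second pass: block the agent's own draft response if it fails the same check.
    if is_harmful(draft):
        return refusal

    return draft
```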

Finally, it's a best practice to communicate to users that the agent uses artificial intelligence; therefore, the following default message informs users: "Just so you are aware, I sometimes use AI to answer your questions."