FAQ for using generative orchestration
These frequently asked questions (FAQ) describe the AI impact of generative orchestration for custom agents built in Copilot Studio.
What is generative orchestration?
Generative orchestration lets your custom agent answer user queries with relevant topics and/or actions. Generative orchestration enables more natural conversations by filling in inputs, using details from the conversation history. For example, if you ask about the nearest store in Kirkland, and then ask for the weather there, orchestration infers you want to ask for the weather in Kirkland. The system can also chain together multiple actions or topics. For example, it can answer "I need to get store hours and find my nearest store." When the agent is unsure about details, it can ask follow-up questions to disambiguate.
What can generative orchestration do?
With generative orchestration, the system first creates a plan to answer the user query by using the name, description, inputs, and outputs of the topics and actions available. It also references the last 10 turns of conversation history. It then tries to execute the plan by filling in required inputs from the conversation, following up with the user for any missing or ambiguous details. The system checks that it found an answer to the user's question before replying to the user. If not, it goes through the process again. Finally, the system generates a response based on the output of the plan from the topics and/or actions. It also uses any custom instructions for the agent when generating the final response.
What are the intended uses of generative orchestration?
You can use this mode within your agent to create an agent that can answer user queries based on the conversation history, names and descriptions for topics, and names, descriptions, inputs, and outputs for actions.
How is generative orchestration evaluated? What metrics are used to measure performance?
Generative orchestration is evaluated for end-to-end quality at each step of the process. Quality is measured in terms of how well the system creates and executes a plan that successfully addresses the user query. Our team manually labels quality scores during fine-tuning. We evaluate quality over various user queries, prompts, and actions. We also evaluate how well the system does at ignoring malicious content from users and authors, and how well the system avoids producing harmful content.
What are the limitations of generative orchestration? How can users minimize the impact of generative orchestration limitations when using the system?
For best results, make sure your topics and actions include high-quality descriptions. We provide guidance on how to write high quality descriptions in the Copilot Studio documentation.
What operational factors and settings allow for effective and responsible use of generative orchestration?
Generative orchestration is currently English only. Once you enable generative mode within your agent, you can test the system to see how well it performs using the test panel. You can also add custom instructions for your agent to help generate the final response.
What are actions and how does your agent, with generative mode enabled, use them?
You can add actions to your custom agent to answer user queries. You can use actions developed by Microsoft or third parties, or you can create your own actions. You configure which actions to configure for the custom agent to use. You can also edit the name, description, inputs, and outputs used by the system.
What data can Copilot Studio provide to actions? What permissions do Copilot Studio actions have?
When your agent calls an action, the action receives the input values specified by the action. The input values can include some of the conversation history with the end user.
What kinds of issues might arise when using Copilot Studio enabled with actions?
Actions might not always work as intended. Errors might occur when preparing the input for the action or when generating a response based on the action's output. Your agent might also call the wrong action for the user query. To mitigate the risk of such errors when using actions, make sure you have high quality, relevant, and unambiguous descriptions configured for the actions in your custom agent.
What protections does Copilot Studio have in place for responsible AI?
There are many mitigation features in place to protect your agents. You can configure your agent with a set of knowledge, actions, and topics. Agents never take an action that isn't part of their configuration. Admins can disallow actions for agents in your organization. If you're concerned about an action being triggered without confirmation, you can configure an action to only be called when a user agrees to call it.
In addition, we have classifiers that look at input to the system to detect harmful content and jailbreak attacks. According to our tests, these classifiers have a high success rate at blocking harmful content and jailbreak attacks, while also having high success at not blocking content that isn't harmful or a jailbreak attack. However, classifiers can't be perfect so there are risks of an agent producing harmful content or responding to a jailbreak attack. These risks include cross-domain prompt injection attacks, where instructions could be added to the output of an action or a knowledge source that the agent then tries to follow.
Finally, it's a best practice to communicate to users that the agent uses artificial intelligence, therefore the following default message informs users: "Just so you are aware, I sometimes use AI to answer your questions."